A Single Platform for All Events
Reveal has partnered with Confluent to create the industry’s only enterprise-ready Event Streaming Platform, driving a new paradigm for application and data infrastructure. It unifies real-time and historical events in one place, enabling you to build an entirely new category of event-driven applications and gain a universal event pipeline. Our platform makes it easy to build real-time data pipelines and streaming applications by integrating data from multiple sources and locations into a single, central Event Streaming Platform for your enterprise.
Derive Business Value from Your Data
Let our platform manage the underlying mechanics of data transport to and from various systems. Simplify connecting data sources to Kafka, building applications with Kafka, and securing, monitoring, and managing your Kafka infrastructure.
Kafka Java Client APIs
- Producer API is a Java Client that allows an application to publish a stream of records to one or more Kafka topics.
- Consumer API is a Java Client that allows an application to subscribe to one or more topics and process the stream of records produced to them.
- Streams API allows applications to act as a stream processor, consuming an input stream from one or more topics and producing an output stream to one or more output topics, effectively transforming the input streams to output streams. It has a very low barrier to entry, easy operationalization, and a high-level DSL for writing stream processing applications. As such it is the most convenient yet scalable option to process and analyze data that is backed by Kafka.
- Connect API is a component that you can use to stream data between Kafka and other data systems in a scalable and reliable way. It makes it simple to configure connectors to move data into and out of Kafka. Kafka Connect can ingest entire databases or collect metrics from all your application servers into Kafka topics, making the data available for stream processing. Connectors can also deliver data from Kafka topics into secondary indexes like Elasticsearch or into batch systems such as Hadoop for offline analysis.
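As a sketch of the Producer and Consumer APIs described above, the snippet below publishes a record and then polls for records. The broker address (`localhost:9092`), topic name (`events`), and group id are illustrative assumptions, not part of the platform described here.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class ProduceConsumeSketch {
    public static void main(String[] args) {
        // Assumed broker address for this sketch; point at your own cluster.
        Properties producerProps = new Properties();
        producerProps.put("bootstrap.servers", "localhost:9092");
        producerProps.put("key.serializer", StringSerializer.class.getName());
        producerProps.put("value.serializer", StringSerializer.class.getName());

        // Publish one record to the hypothetical "events" topic.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
            producer.send(new ProducerRecord<>("events", "user-42", "page_view"));
        }

        Properties consumerProps = new Properties();
        consumerProps.put("bootstrap.servers", "localhost:9092");
        consumerProps.put("group.id", "sketch-group");
        consumerProps.put("key.deserializer", StringDeserializer.class.getName());
        consumerProps.put("value.deserializer", StringDeserializer.class.getName());
        consumerProps.put("auto.offset.reset", "earliest");

        // Subscribe to the topic and process whatever arrives in one poll.
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps)) {
            consumer.subscribe(List.of("events"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("%s -> %s%n", record.key(), record.value());
            }
        }
    }
}
```

In a real application the consumer would poll in a loop and commit offsets; this sketch does a single poll for brevity.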
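The Streams API's high-level DSL mentioned above can be sketched as follows. Building the topology requires no running cluster; the input and output topic names (`events-in`, `events-out`) and the uppercase transformation are illustrative assumptions.

```java
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.KStream;

public class StreamsSketch {
    public static Topology build() {
        StreamsBuilder builder = new StreamsBuilder();
        // Consume an input stream from "events-in" (a hypothetical topic),
        // transform each value, and produce the result to "events-out".
        KStream<String, String> source = builder.stream("events-in");
        source.mapValues(v -> v.toUpperCase()).to("events-out");
        return builder.build();
    }

    public static void main(String[] args) {
        // The topology description lists the wired-up sources, processors, and sinks.
        System.out.println(build().describe());
    }
}
```

To actually run the topology you would pass it to `KafkaStreams` along with configuration such as `application.id` and `bootstrap.servers`; the DSL itself stays this compact.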
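As a minimal sketch of configuring a Connect connector, the fragment below uses the stock FileStreamSource connector that ships with Apache Kafka to tail a file into a topic. The connector name, file path, and topic are illustrative assumptions.

```properties
# Illustrative standalone source connector: streams lines of a file into a Kafka topic.
name=local-file-source
connector.class=FileStreamSource
tasks.max=1
file=/var/log/app/events.log
topic=events
```

A sink connector is configured the same way, with `connector.class` pointing at the sink implementation and the topic(s) to read from.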