Integrating DataCater with Apache Kafka

DataCater provides Streams as a building block for integrating your pipelines with Apache Kafka. Each stream object represents an Apache Kafka topic. Pipelines can read records from or write records to streams.

Creating a stream

Let's create our first stream. Navigate to Streams in the UI and click Create your first stream. This opens the form for creating a new stream.
Streams require at least two configuration options: name and bootstrap.servers.
First, provide a name for the stream. The stream's name must match the name of the corresponding Apache Kafka topic.
Second, provide serializers and deserializers for the keys and values of your Kafka records so that DataCater can correctly interpret your data. At the moment, DataCater supports JSON and Avro as data formats; if you do not specify a format, DataCater defaults to JSON.
Third, define the bootstrap servers of your Apache Kafka cluster and any remaining connection properties. You can use any property from the official consumer and producer API of Apache Kafka; a sketch of such a configuration follows after these steps.
Finally, click Create stream.
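
For illustration, a stream definition for a SASL-secured cluster might look like the following sketch. The layout mirrors the API payload shown below; security.protocol, sasl.mechanism, and sasl.jaas.config are standard Apache Kafka client properties, while the broker addresses and credentials are placeholders. No serializers or deserializers are set, so DataCater would fall back to JSON:

{
  "name": "customers",
  "spec": {
    "kind": "KAFKA",
    "bootstrap.servers": "broker-1:9092,broker-2:9092",
    "security.protocol": "SASL_SSL",
    "sasl.mechanism": "PLAIN",
    "sasl.jaas.config": "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"YOUR_USERNAME\" password=\"YOUR_PASSWORD\";",
    "kafka": {
      "topic": {
        "config": {}
      }
    }
  }
}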
Alternatively, you can create a stream through our API. The following API call creates a stream for an Apache Kafka topic named customers in the Kafka cluster running at localhost:9092:
$ curl http://localhost:8080/api/v1/streams/ \
-X POST \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer YOUR_ACCESS_TOKEN' \
-d '{"name":"customers","spec":{"kind":"KAFKA","bootstrap.servers":"localhost:9092","kafka":{"topic":{"config":{}}}}}'
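
To verify that the stream has been created, you can list all streams. The following call is a sketch that assumes the API follows REST conventions and serves GET requests at the same path:

$ curl http://localhost:8080/api/v1/streams/ \
-H 'Authorization: Bearer YOUR_ACCESS_TOKEN'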