In DataCater, streams hold your data. Pipelines consume records from a stream, optionally transform them, and publish the results to another stream. Thus, streams act as the source or the sink of a pipeline.
Streams are sequences of records (or events). Records consist of a key, a value, and metadata (e.g., offset, timestamp). Technically, streams are implemented as Apache Kafka® topics.
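To make the record structure concrete, here is a minimal illustrative sketch in Python. The class and field names are assumptions for illustration only, not DataCater's actual API: they simply mirror the parts named above, a key, a value, and metadata such as the offset and timestamp.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative model of a stream record (not DataCater's actual classes):
# each record carries a key, a value, and broker-assigned metadata.
@dataclass
class Record:
    key: Optional[str]   # used for partitioning; may be absent
    value: dict          # the record payload
    offset: int          # position of the record within its topic partition
    timestamp_ms: int    # event or ingestion time, in milliseconds

record = Record(
    key="user-42",
    value={"name": "Ada", "plan": "pro"},
    offset=0,
    timestamp_ms=1700000000000,
)
print(record.key, record.offset)
```

In Apache Kafka terms, the offset is assigned per partition and uniquely identifies a record within that partition, which is why it appears here as metadata rather than as part of the payload.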
While DataCater Cloud does not support integrating your own Apache Kafka topics, any self-managed installation of DataCater (free open core or enterprise) can connect to existing Apache Kafka topics on local or remote brokers.
You can use Configs to store common Stream configurations, such as bootstrap.servers, in one central location and share them among multiple Stream instances. Please have a look at our API documentation for more information on how to work with Streams and Configs.
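A hypothetical sketch of this idea, assuming a YAML-style resource definition (the field names and structure are illustrative assumptions, not DataCater's exact schema): a Config holds the shared broker settings once, and each Stream references it instead of repeating them.

```yaml
# Illustrative only: field names here are assumptions,
# not DataCater's actual API schema.
kind: Config
metadata:
  name: shared-kafka
spec:
  bootstrap.servers: kafka-broker-1:9092,kafka-broker-2:9092
---
kind: Stream
metadata:
  name: orders
  configRef: shared-kafka   # reuse the broker settings defined above
spec:
  topic: orders
```

The benefit is the usual one for shared configuration: when a broker address changes, you update the Config in one place rather than editing every Stream that uses it.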