In DataCater, streams hold your data. Pipelines consume records from a stream, may transform the records, and publish the transformed records to another stream. Thus, streams act as the source or the sink of a pipeline.
Streams are sequences of records (or events). Records consist of a key, a value, and metadata (e.g., offset, timestamp). Technically, streams are implemented as Apache Kafka® topics.
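To make the record structure concrete, here is an illustrative record as it might flow through a stream. The field layout and values are assumptions for illustration, not an exact DataCater wire format:

```json
{
  "key": "user-42",
  "value": {
    "email": "jane@example.com",
    "plan": "pro"
  },
  "metadata": {
    "offset": 1337,
    "timestamp": 1700000000000
  }
}
```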
While DataCater Cloud does not support integrating your own Apache Kafka topics, any self-managed installation of DataCater (free open core or enterprise) lets you connect DataCater to existing Apache Kafka topics on local or remote brokers.
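A self-managed installation might attach a stream to an existing topic on an external broker with a definition along these lines. This is a hedged sketch: the field names (`bootstrap.servers`, `security.protocol`, the `topic` block) follow common Kafka client conventions and are assumptions, not the exact DataCater schema:

```yaml
# Illustrative stream definition for an existing topic on a
# self-managed Kafka cluster; field names are assumed, not
# guaranteed to match the DataCater schema exactly.
name: orders
spec:
  kafka:
    bootstrap.servers: kafka.internal.example.com:9092   # external broker
    security.protocol: SASL_SSL                          # connection settings
    topic:
      name: orders                                       # pre-existing topic
```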
A stream definition can be extended by referencing a config definition. Configs let you define information such as connection settings or configurations for the underlying Apache Kafka® topics in one central place and reuse them across multiple streams. See the Streams API documentation for more information on working with streams and configs.
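The reuse pattern described above might look like the following sketch: one config holds the shared connection settings, and each stream references it instead of repeating them. The referencing mechanism shown here (a `configSelector` matching the config by name) and all field names are assumptions for illustration; consult the Streams API documentation for the authoritative schema:

```yaml
# Illustrative shared config with connection settings defined once.
name: production-kafka
kind: STREAM
spec:
  bootstrap.servers: kafka.internal.example.com:9092
  security.protocol: SASL_SSL
---
# Illustrative stream reusing the config above; it only adds
# stream-specific settings, such as topic configuration.
name: orders
configSelector:
  name: production-kafka
spec:
  kafka:
    topic:
      num.partitions: 6
```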