Kinesis Data Streams collects and processes large amounts of incoming data from a virtually unlimited number of producers.
- Producers supply data to Kinesis, e.g., any IoT (Internet of Things) devices.
- Consumers are any entity that can consume the data.
- Kinesis Data Streams is used for:
  - Real-time analytics, or feeding data into other services in real time, with configurable data retention.
  - e.g., continuously analyzing application logs, or running real-time analytics on clickstream data
- By default, Lambda invokes your function as soon as records are available in the stream. Lambda can process up to 10 batches in each shard concurrently. Even if you increase the number of concurrent batches per shard, Lambda still guarantees in-order processing at the partition-key level.
- Transient Data Store:
  - Records are deleted once they age out of the stream's rolling retention window (24 hours by default; extendable up to 7 days with extended retention, or up to 365 days with long-term retention).
- A Kinesis data stream preserves the ordering of records within each shard.
- Kinesis Shards
  - A shard is the base unit of capacity of a Kinesis data stream.
  - All records with the same partition key go to the same shard; within a shard, records are ordered by their sequence numbers.
  - Shards allow streams to scale. A stream starts with at least 1 shard (1 MB/s of ingestion and 2 MB/s of consumption capacity per shard). Shards can be added to or removed from a stream (resharding).
- Kinesis Data Record
  - The data record is the basic unit of data in a stream. Each shard consists of a sequence of data records.
  - A data record is composed of a sequence number, a partition key, and a data blob. The data blob can be up to 1 MB.
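Kinesis routes each record to a shard by taking the MD5 hash of its partition key as a 128-bit integer and finding the shard whose hash-key range contains it. A rough sketch of that mapping, assuming the hash-key space is split evenly across shards (resharding can make real ranges uneven); the function name is made up for illustration:

```python
import hashlib

def shard_for_key(partition_key: str, shard_count: int) -> int:
    """Map a partition key to a shard index the way Kinesis does:
    hash the key with MD5, read the digest as a 128-bit integer,
    and pick the shard whose hash-key range contains it
    (ranges assumed to split the 128-bit space evenly here)."""
    hash_key = int.from_bytes(
        hashlib.md5(partition_key.encode("utf-8")).digest(), "big")
    range_size = (2 ** 128) // shard_count
    # Clamp so the last shard absorbs any remainder of the space.
    return min(hash_key // range_size, shard_count - 1)

# The same partition key always lands on the same shard,
# which is what preserves per-key ordering.
assert shard_for_key("device-42", 4) == shard_for_key("device-42", 4)
```

Because the mapping is deterministic, a hot partition key concentrates all of its traffic on a single shard, which is why key choice matters for even throughput.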
Interacting with Kinesis Data Streams
- The Kinesis Producer Library (KPL) writes data to a Kinesis data stream.
  - The KPL provides an efficient abstraction layer for ingesting data, with automatic retries and record aggregation for better performance.
- The Kinesis Client Library (KCL) delivers all records for a given partition key to the same record processor, making it easier to build multiple applications that read from the same Kinesis data stream.
- The Kinesis API (AWS SDK) is used to interact with Kinesis Data Streams through low-level API operations such as PutRecord and GetRecords.
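At the low level, the flow is: write with PutRecord, obtain a shard iterator with GetShardIterator, then loop on GetRecords, following the NextShardIterator returned by each call. The sketch below mimics those call shapes with a tiny in-memory, single-shard stand-in rather than the real client (stream and shard names are illustrative only); with the AWS SDK you would call the same-named operations on `boto3.client("kinesis")`:

```python
class FakeKinesis:
    """Tiny in-memory stand-in that mimics the shape of the
    low-level Kinesis calls: PutRecord, GetShardIterator, GetRecords.
    It models a single-shard stream as an append-only list."""

    def __init__(self):
        self.records = []

    def put_record(self, StreamName, Data, PartitionKey):
        seq = str(len(self.records))
        self.records.append({"SequenceNumber": seq,
                             "PartitionKey": PartitionKey,
                             "Data": Data})
        return {"ShardId": "shardId-000000000000", "SequenceNumber": seq}

    def get_shard_iterator(self, StreamName, ShardId, ShardIteratorType):
        # TRIM_HORIZON starts reading at the oldest record in the shard.
        return {"ShardIterator": "0"}

    def get_records(self, ShardIterator, Limit=10):
        pos = int(ShardIterator)
        batch = self.records[pos:pos + Limit]
        return {"Records": batch,
                "NextShardIterator": str(pos + len(batch))}


client = FakeKinesis()  # with boto3: client = boto3.client("kinesis")

# Producer side: each record carries a partition key and a data blob.
for i in range(3):
    client.put_record(StreamName="example-stream",
                      Data=f"event-{i}".encode(),
                      PartitionKey="device-42")

# Consumer side: get an iterator, then page through with GetRecords.
it = client.get_shard_iterator(
    StreamName="example-stream",
    ShardId="shardId-000000000000",
    ShardIteratorType="TRIM_HORIZON")["ShardIterator"]

consumed = []
while True:
    resp = client.get_records(ShardIterator=it, Limit=2)
    consumed.extend(r["Data"] for r in resp["Records"])
    if not resp["Records"]:
        break
    it = resp["NextShardIterator"]

# Records come back in sequence-number order within the shard.
assert consumed == [b"event-0", b"event-1", b"event-2"]
```

This iterator-chaining loop is exactly the boilerplate that the KCL handles for you (along with checkpointing and load balancing across shards), which is why the raw API is usually reserved for simple or one-off integrations.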