Redis Sink¶
Download connector: Redis Connector for Kafka 2.1.0
This Redis sink connector allows you to write messages from Kafka to Redis. The connector takes the value from the Kafka Connect SinkRecords and inserts a new entry into Redis.
Prerequisites¶
- Apache Kafka 0.11.x or above
- Kafka Connect 0.11.x or above
- Jedis 2.8.1 or above
- Java 1.8
Features¶
- KCQL routing queries - Kafka topic payload field selection is supported, allowing you to select the fields written to Redis.
- Error policies for handling failures.
- Storing as one or more Sorted Sets.
KCQL Support¶
[INSERT INTO cache|sortedSet] SELECT { FIELD, ... } FROM kafka_topic_name [PK FIELD] [STOREAS SortedSet(key=FIELD)]
Tip
You can specify multiple KCQL statements separated by ; to have the connector sink multiple topics.
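For example, to have one connector sink two topics, the KCQL property could combine two of the statements used later on this page:
connect.redis.kcql=INSERT INTO cache SELECT price FROM yahoo-fx PK symbol;INSERT INTO cpu_stats SELECT * FROM cpuTopic STOREAS SortedSet(score=timestamp)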
The Redis sink supports KCQL, the Kafka Connect Query Language. The following KCQL support is available:
- Field selection.
- Target sortedSet, cache or multiple sorted set selection.
- Key selection - which fields to use as the key.
Cache Mode¶
The purpose of this mode is to cache in Redis [Key-Value] pairs. Imagine a Kafka topic with currency foreign exchange rate messages:
{ "symbol": "USDGBP" , "price": 0.7943 }
{ "symbol": "EURGBP" , "price": 0.8597 }
You may want to store in Redis the symbol as the Key and the price as the Value. This effectively makes Redis a caching system, which multiple other applications can access to get the (latest) value. To achieve that using this particular Kafka Redis Sink Connector, you need to specify the KCQL as:
INSERT INTO cache SELECT price from yahoo-fx PK symbol
This will update the keys USDGBP and EURGBP with the relevant price using the (default) JSON format:
Key=EURGBP Value={ "price": 0.8597 }
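Assuming the connector is running against a local Redis, you could verify a cached entry with redis-cli; the key and value shown are illustrative:
redis-cli
127.0.0.1:6379> GET "USDGBP"
"{\"price\":0.7943}"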
You may want to store in Redis the fields [firstName, lastName, age, salary] from the topic redis-topic:
{"firstName": "John", "lastName": "Smith", "age":30, "salary": 4830}
You may want to use a composite primary key of firstName and lastName. To achieve that using this particular Kafka Redis Sink Connector, you need to specify the KCQL as:
SELECT * FROM redis-topic PK firstName, lastName
In this case the Key would be:
Key=John.Smith
You may also want to use a custom delimiter in the composite primary key of firstName and lastName. In this case you need to set the optional configuration property connect.redis.pk.delimiter to a dash:
connect.redis.pk.delimiter=-
In this case the Key would be:
Key=John-Smith
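Putting the two together, a sketch of the relevant lines in a connector properties file would be (topic name as in the example above):
connect.redis.kcql=SELECT * FROM redis-topic PK firstName, lastName
connect.redis.pk.delimiter=-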
Sorted Sets¶
To insert messages from a Kafka topic into a single Sorted Set, use the following KCQL syntax:
INSERT INTO cpu_stats SELECT * from cpuTopic STOREAS SortedSet(score=timestamp)
This will create and add entries to the (sorted set) named cpu_stats. The entries will be ordered in the Redis set based on the score, which we define to be the value of the timestamp field of the AVRO message from Kafka. In the above example, we are selecting and storing all the fields of the Kafka message.
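Once entries have been written, you could inspect the sorted set and its scores from redis-cli; the member and score below are illustrative:
127.0.0.1:6379> ZRANGE cpu_stats 0 -1 WITHSCORES
1) "{\"cpu\":12.5,\"timestamp\":1520000000}"
2) "1520000000"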
Multiple Sorted Sets¶
You can create multiple sorted sets by promoting each value of one field from the Kafka message into one Sorted Set and selecting which values to store into the sorted sets. You can achieve that by using the KCQL syntax and defining the field with PK (primary key):
SELECT temperature, humidity FROM sensorsTopic PK sensorID STOREAS SortedSet(score=timestamp)
Note
Notice we have dropped the INSERT clause.
We can prefix the name of the Key using the INSERT statement for Multiple SortedSets:
INSERT INTO FX- SELECT price from yahoo-fx PK symbol STOREAS SortedSet(score=timestamp)
This will create keys with names FX-USDGBP, FX-EURGBP etc.
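You could confirm the prefixed keys with redis-cli (output illustrative):
127.0.0.1:6379> KEYS FX-*
1) "FX-USDGBP"
2) "FX-EURGBP"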
Warning
This plugin requires JSON to be parsed using the AVRO Converter and therefore does not support schemaless JSON.
Geospatial add¶
To insert messages from a Kafka topic with GEOADD use the following KCQL syntax:
INSERT INTO cpu_stats SELECT * from cpuTopic STOREAS GEOADD
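Entries written this way can be queried with the Redis geospatial commands; for example, reading back the coordinates of a member (the member name and coordinates are hypothetical):
127.0.0.1:6379> GEOPOS cpu_stats "host-1"
1) 1) "13.36138933897018433"
   2) "38.11555639549629859"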
Error Policies¶
Landoop sink connectors support error policies. These error policies allow you to control the behavior of the sink if it encounters an error when writing records to the target system. Since Kafka retains the records, subject to the configured retention policy of the topic, the sink can ignore the error, fail the connector or attempt redelivery.
Throw
Any error on write to the target system will be propagated up and processing is stopped. This is the default behavior.
Noop
Any error on write to the target database is ignored and processing continues.
Warning
This can lead to missed errors if you do not have adequate monitoring. Data is not lost as it is still in Kafka, subject to Kafka’s retention policy. The sink currently does not distinguish between integrity constraint violations and other exceptions thrown by any drivers or the target system.
Retry
Any error on write to the target system causes the RetryIterable exception to be thrown. This causes the Kafka Connect framework to pause and replay the message. Offsets are not committed. For example, if the target is offline a write failure will occur, and the message can be replayed. With the Retry policy, the issue can be fixed without stopping the sink.
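For example, to have the sink retry a failed write up to 5 times with one minute between attempts, the relevant optional properties (documented in the Configurations section below) could be set as:
connect.redis.error.policy=RETRY
connect.redis.max.retries=5
connect.redis.retry.interval=60000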
Lenses QuickStart¶
The easiest way to try this out is using Lenses Box, the pre-configured Docker image that comes with this connector pre-installed. You would need to go to Connectors –> New Connector –> Sink –> Redis and paste your configuration.
Redis Setup¶
Download and install Redis.
➜ wget http://download.redis.io/redis-stable.tar.gz
➜ tar xvzf redis-stable.tar.gz
➜ cd redis-stable
➜ sudo make install
Start Redis
➜ bin/redis-server
Check Redis is running:
➜ redis-cli ping
PONG
➜ sudo service redis-server status
Installing the Connector¶
In production, Connect should be run in distributed mode.
- Install and configure a Kafka Connect cluster.
- Create a folder on each server called plugins/lib.
- Copy the required connector jars from the Stream Reactor download into the above folder.
- Edit connect-avro-distributed.properties in the etc/schema-registry folder and uncomment the plugin.path option. Set it to the root directory (i.e. plugins) where you deployed the Stream Reactor connector jars.
- Start Connect: bin/connect-distributed etc/schema-registry/connect-avro-distributed.properties.
Connect Workers are long-running processes, so set up an init.d or systemctl service accordingly.
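A minimal systemd unit for a Connect worker might look like the following sketch; the installation paths and service user are assumptions about your environment:
# /etc/systemd/system/kafka-connect.service (hypothetical paths)
[Unit]
Description=Kafka Connect distributed worker
After=network.target

[Service]
User=kafka
ExecStart=/opt/confluent/bin/connect-distributed /opt/confluent/etc/schema-registry/connect-avro-distributed.properties
Restart=on-failure

[Install]
WantedBy=multi-user.target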
Sink Connector QuickStart¶
Start Kafka Connect in distributed mode (see install). In this mode a REST endpoint on port 8083 is exposed to accept connector configurations.
We developed a Command Line Interface to make interacting with the Connect REST API easier. The CLI can be found in the Stream Reactor download under the bin folder. Alternatively, the JAR can be pulled from our GitHub releases page.
Starting the Connector (Distributed)¶
Download and install the Stream Reactor. Follow the instructions here if you have not already done so. All paths in the quickstart are based on the location where you installed the Stream Reactor.
Once Connect has started we can use the kafka-connect-tools CLI to post our properties file for Redis. For the CLI to work, including when using the Docker images, you will have to set the following environment variable to point to the Kafka Connect REST API.
export KAFKA_CONNECT_REST="http://myserver:myport"
bin/connect-cli create redis-sink < conf/redis-sink.properties
name=redis-sink
connector.class=com.datamountaineer.streamreactor.connect.redis.sink.RedisSinkConnector
tasks.max=1
topics=redis-topic
connect.redis.host=localhost
connect.redis.port=6379
connect.redis.kcql=INSERT INTO TABLE1 SELECT * FROM redis-topic
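Alternatively, the same configuration can be posted directly to the Connect REST API as JSON; the host and port below are placeholders:
curl -X POST -H "Content-Type: application/json" http://myserver:myport/connectors \
  -d '{
    "name": "redis-sink",
    "config": {
      "connector.class": "com.datamountaineer.streamreactor.connect.redis.sink.RedisSinkConnector",
      "tasks.max": "1",
      "topics": "redis-topic",
      "connect.redis.host": "localhost",
      "connect.redis.port": "6379",
      "connect.redis.kcql": "INSERT INTO TABLE1 SELECT * FROM redis-topic"
    }
  }'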
Warning
If your Redis server requires the connection to be authenticated, you will need to provide an extra setting:
connect.redis.password=$REDIS_PASSWORD
Do not set the value to empty if no password is required.
If you switch back to the terminal where you started Kafka Connect, you should see the Redis Sink being accepted and the task starting.
We can use the CLI to check if the connector is up, but you should also be able to see this in the logs.
#check for running connectors with the CLI
➜ bin/connect-cli ps
redis-sink
INFO
__ __
/ / ____ _____ ____/ /___ ____ ____
/ / / __ `/ __ \/ __ / __ \/ __ \/ __ \
/ /___/ /_/ / / / / /_/ / /_/ / /_/ / /_/ /
/_____/\__,_/_/ /_/\__,_/\____/\____/ .___/
/_/
____ ___ _____ _ __
/ __ \___ ____/ (_)____/ ___/(_)___ / /__
/ /_/ / _ \/ __ / / ___/\__ \/ / __ \/ //_/
/ _, _/ __/ /_/ / (__ )___/ / / / / / ,<
/_/ |_|\___/\__,_/_/____//____/_/_/ /_/_/|_|
(com.datamountaineer.streamreactor.connect.redis.sink.config.RedisSinkConfig:165)
INFO Settings:
RedisSinkSettings(RedisConnectionInfo(localhost,6379,None),RedisKey(FIELDS,WrappedArray(firstName, lastName)),PayloadFields(false,Map(firstName -> firstName, lastName -> lastName, age -> age, salary -> income)))
(com.datamountaineer.streamreactor.connect.redis.sink.RedisSinkTask:65)
INFO Sink task org.apache.kafka.connect.runtime.WorkerSinkTask@44b24eaa finished initialization and start (org.apache.kafka.connect.runtime.WorkerSinkTask:155)
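You can also query the Connect REST API directly for the connector status, assuming KAFKA_CONNECT_REST is exported as above:
curl $KAFKA_CONNECT_REST/connectors/redis-sink/status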
Test Records¶
Tip
If your input topic does not match the target, use Lenses SQL to transform the input in real time; no Java or Scala is required!
Now we need to put some records into the redis-topic topic. We can use the kafka-avro-console-producer to do this.
Start the producer and pass in a schema to register in the Schema Registry. The schema has a firstName field of type string, a lastName field of type string, an age field of type int and a salary field of type double.
bin/kafka-avro-console-producer \
--broker-list localhost:9092 --topic redis-topic \
--property value.schema='{"type":"record","name":"User",
"fields":[{"name":"firstName","type":"string"},{"name":"lastName","type":"string"},{"name":"age","type":"int"},{"name":"salary","type":"double"}]}'
Now the producer is waiting for input. Paste in the following:
{"firstName": "John", "lastName": "Smith", "age":30, "salary": 4830}
Check for records in Redis¶
Now check the logs of the connector; you should see this:
INFO Received record from topic:redis-topic partition:0 and offset:0 (com.datamountaineer.streamreactor.connect.redis.sink.writer.RedisDbWriter:48)
INFO Empty list of records received. (com.datamountaineer.streamreactor.connect.redis.sink.RedisSinkTask:75)
Let us check Redis.
redis-cli
127.0.0.1:6379> keys *
1) "John.Smith"
2) "11"
3) "10"
127.0.0.1:6379>
127.0.0.1:6379> get "John.Smith"
"{\"firstName\":\"John\",\"lastName\":\"Smith\",\"age\":30,\"income\":4830.0}"
Now stop the connector.
Configurations¶
The Kafka Connect framework requires the following in addition to any connector-specific configurations:
| Config | Description | Type | Value |
|---|---|---|---|
| name | Name of the connector | string | This must be unique across the Connect cluster |
| topics | The topics to sink. The connector will check that this matches the KCQL statement | string | |
| tasks.max | The number of tasks to scale output | int | 1 |
| connector.class | Name of the connector class | string | com.datamountaineer.streamreactor.connect.redis.sink.RedisSinkConnector |
Connector Configurations¶
| Config | Description | Type |
|---|---|---|
| connect.redis.kcql | Kafka Connect query language expression | string |
| connect.redis.host | Specifies the Redis server | string |
| connect.redis.port | Specifies the Redis server port number | int |
Optional Configurations¶
| Config | Description | Type | Default |
|---|---|---|---|
| connect.redis.password | Specifies the authorization password. If you don’t have a password set up on the Redis server, don’t provide the value or you will see this error: ERR Client sent AUTH, but no password is set | string | |
| connect.redis.error.policy | Specifies the action to be taken if an error occurs while inserting the data. There are three available options: NOOP (the error is swallowed), THROW (the error is allowed to propagate) and RETRY (the Kafka message is redelivered up to a maximum number of times specified by the connect.redis.max.retries option) | string | THROW |
| connect.redis.max.retries | The maximum number of times a message is retried. Only valid when connect.redis.error.policy is set to RETRY | string | 10 |
| connect.redis.retry.interval | The interval, in milliseconds, between retries if connect.redis.error.policy is set to RETRY | string | 60000 |
| connect.progress.enabled | Enables the output of how many records have been processed | boolean | false |
| connect.redis.pk.delimiter | Specifies a custom delimiter for a composite primary key | string | dot (.) |
| connect.redis.ssl.enabled | Enables SSL for the Redis connection | boolean | false |
| ssl.truststore.type.config | Specifies the type of the truststore | string | |
| ssl.truststore.location.config | Specifies the filepath of the truststore JCEKS file | string | |
| ssl.truststore.password.config | The password of the truststore JCEKS file | string | |
| ssl.keystore.type.config | Specifies the type of the keystore | string | |
| ssl.keystore.location.config | Specifies the filepath of the keystore JCEKS file | string | |
| ssl.keystore.password.config | The password of the keystore JCEKS file | string | |
Example¶
name=redis-sink
connector.class=com.datamountaineer.streamreactor.connect.redis.sink.RedisSinkConnector
tasks.max=1
topics=redis-topic
connect.redis.host=localhost
connect.redis.port=6379
connect.redis.kcql=INSERT INTO TABLE1 SELECT * FROM redis-topic
Schema Evolution¶
Upstream changes to schemas are handled by the Schema Registry, which will validate the addition and removal of fields, data type changes and whether defaults are set. The Schema Registry enforces AVRO schema evolution rules. More information can be found here.
The Redis Sink will automatically write and update entries in Redis if new fields are added to the source topic. If fields are removed, the Kafka Connect framework will return the default value for the field, depending on the compatibility settings of the Schema Registry. This value will be written to Redis based on the connect.redis.kcql mappings.
Kubernetes¶
Helm Charts are provided in our repo; add the repo to your Helm instance and install. We recommend using the Landscaper to manage Helm values, since typically each connector instance has its own deployment.
Add the Helm charts to your Helm instance:
helm repo add landoop https://landoop.github.io/kafka-helm-charts/
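With the repo added, update it and search for the Redis sink chart; the exact chart name depends on the repository contents, so the search below is only illustrative:
helm repo update
helm search landoop          # Helm 2
# helm search repo landoop   (Helm 3)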
Troubleshooting¶
Please review the FAQs and join our Slack channel.