Elastic Sink¶
Download the Elastic Connector for Kafka 2.1.0
This Elastic Search Sink allows you to write events from Kafka to Elastic Search. The connector converts the value of each Kafka Connect SinkRecord to JSON and uses the Elastic4s client's JSON insert functionality to index the document. The Sink creates an index and type corresponding to the topic name.
Note
This connector is suited for Elastic Search 2.x - For Elastic Search 5.x look here
Prerequisites¶
- Apache Kafka 0.11.x or above
- Kafka Connect 0.11.x or above
- Elastic Search 2.2
- Java 1.8
Features¶
- KCQL routing and querying - topic-to-index mapping, field selection, auto index creation, index suffixes and document type tagging
- XPack security integration
- Error policies
KCQL Support¶
{ INSERT | UPSERT } INTO elastic_index SELECT { FIELD, ... } FROM kafka_topic
[PK FIELD,...]
[WITHDOCTYPE=<your_document_type>]
[WITHINDEXSUFFIX=<your_suffix>]
Tip
You can specify multiple KCQL statements separated by ; to have the connector sink multiple topics.
The Elastic sink supports KCQL, the Kafka Connect Query Language. The following KCQL support is available:
- Field selection
- Selection of target index
- Auto creation of indexes
- Tagging indexes with a document type
- Adding index name suffixes with a data pattern
- Upsert and insert records.
Primary Keys¶
The PK keyword can be used to specify the fields which will be used for the key value. The field values will be concatenated and separated by a -. If no fields are set, the topic name, partition and message offset are used.
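For illustration (the index, topic and field names here are hypothetical), the statement below would key each document on id and created_at, producing document ids such as 42-2016-05-08:
INSERT INTO orders SELECT * FROM orders-topic PK id, created_at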
Document Type¶
WITHDOCTYPE allows you to associate a document type with the inserted document.
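A sketch of its use (the index, topic and document type names are illustrative only):
INSERT INTO index_orders SELECT * FROM orders-topic WITHDOCTYPE=order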
Index Suffix¶
WITHINDEXSUFFIX allows you to specify a suffix for your index; date formats are supported, for example '_suffix_{YYYY-MM-dd}'.
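As a hypothetical illustration, the following statement would write to a daily index such as index_orders_2016-05-08:
INSERT INTO index_orders SELECT * FROM orders-topic WITHINDEXSUFFIX=_{YYYY-MM-dd}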
Example:
-- Insert mode, select all fields from topicA and write to indexA
INSERT INTO indexA SELECT * FROM topicA
-- Insert mode, select 3 fields and rename from topicB and write to indexB
INSERT INTO indexB SELECT x AS a, y AS b, z AS c FROM topicB PK y
-- UPSERT
UPSERT INTO indexC SELECT id, string_field FROM topicC PK id
This is set in the connect.elastic.kcql option.
Auto Index Creation¶
The Sink will automatically create missing indexes at startup.
Error Policies¶
Landoop sink connectors support error policies. These policies allow you to control the behavior of the sink if it encounters an error when writing records to the target system. Since Kafka retains the records, subject to the configured retention policy of the topic, the sink can ignore the error, fail the connector or attempt redelivery.
Throw
Any error on write to the target system will be propagated up and processing is stopped. This is the default behavior.
Noop
Any error on write to the target database is ignored and processing continues.
Warning
This can lead to missed errors if you don't have adequate monitoring. Data is not lost as it's still in Kafka, subject to Kafka's retention policy. The Sink currently does not distinguish between, for example, integrity constraint violations and other exceptions thrown by drivers or the target system.
Retry
Any error on write to the target system causes a retriable exception to be thrown. This causes the Kafka Connect framework to pause and replay the message; offsets are not committed. For example, if Elastic Search is temporarily offline a write will fail, and the message can be replayed later. With the Retry policy, the issue can be fixed without stopping the sink.
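For example, to switch the sink to the retry policy you would set the options described in the configuration tables below (the values shown here match the documented defaults and are only illustrative):
connect.elastic.error.policy=RETRY
connect.elastic.max.retries=10
connect.elastic.retry.interval=60000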
Lenses QuickStart¶
The easiest way to try out this connector is with Lenses Box, the pre-configured Docker image that comes with this connector pre-installed. Navigate to Connectors –> New Connector –> Sink –> Elastic and paste your configuration.
Elastic Setup¶
Download and start ElasticSearch.
curl -L -O https://download.elastic.co/elasticsearch/release/org/elasticsearch/distribution/tar/elasticsearch/2.2.0/elasticsearch-2.2.0.tar.gz
tar -xvf elasticsearch-2.2.0.tar.gz
cd elasticsearch-2.2.0/bin
./elasticsearch --cluster.name elasticsearch
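Assuming Elastic Search's default HTTP port of 9200, a quick sanity check that the node is up:
curl http://localhost:9200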
Installing the Connector¶
Connect, in production, should be run in distributed mode:
- Install and configure a Kafka Connect cluster
- Create a folder on each server called plugins/lib
- Copy the required connector jars from the Stream Reactor download into the above folder
- Edit connect-avro-distributed.properties in the etc/schema-registry folder and uncomment the plugin.path option. Set it to the root directory (i.e. plugins) into which you deployed the Stream Reactor connector jars in step 2.
- Start Connect: bin/connect-distributed etc/schema-registry/connect-avro-distributed.properties
Connect Workers are long running processes, so set up an init.d or systemctl service accordingly.
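As an illustration only, if the connector jars were copied to /opt/stream-reactor/plugins/lib (a hypothetical path), the worker properties would contain:
plugin.path=/opt/stream-reactor/plugins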
Starting the Connector (Distributed)¶
Download and install Stream Reactor. Follow the instructions here if you haven't already done so. All paths in the quickstart are based on the location you installed Stream Reactor.
Once Connect has started we can use the kafka-connect-tools CLI to post in our distributed properties file for Elastic. For the CLI to work, including when using the Docker images, you will have to set the following environment variable to point to the Kafka Connect REST API.
export KAFKA_CONNECT_REST="http://myserver:myport"
➜ bin/connect-cli create elastic-sink < conf/elastic-sink.properties
name=elastic-sink
connector.class=com.datamountaineer.streamreactor.connect.elastic.ElasticSinkConnector
tasks.max=1
topics=orders-topic
connect.elastic.url=localhost:9300
connect.elastic.cluster.name=elasticsearch
connect.elastic.kcql=INSERT INTO index_1 SELECT * FROM orders-topic
connect.progress.enabled=true
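If you prefer not to use the CLI, the same configuration can be posted directly to the Kafka Connect REST API; the request below is a sketch of the equivalent call (replace the host and port with your own):
curl -X POST -H "Content-Type: application/json" http://myserver:myport/connectors \
  -d '{
    "name": "elastic-sink",
    "config": {
      "connector.class": "com.datamountaineer.streamreactor.connect.elastic.ElasticSinkConnector",
      "tasks.max": "1",
      "topics": "orders-topic",
      "connect.elastic.url": "localhost:9300",
      "connect.elastic.cluster.name": "elasticsearch",
      "connect.elastic.kcql": "INSERT INTO index_1 SELECT * FROM orders-topic",
      "connect.progress.enabled": "true"
    }
  }'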
If you switch back to the terminal you started Kafka Connect in, you should see the Elastic Search Sink being accepted and the task starting.
We can use the CLI to check if the connector is up but you should be able to see this in logs as well.
#check for running connectors with the CLI
➜ bin/connect-cli ps
elastic-sink
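You can also query the Kafka Connect REST API directly for the task status (using the KAFKA_CONNECT_REST variable set above):
curl $KAFKA_CONNECT_REST/connectors/elastic-sink/status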
INFO
__ __
/ / ____ _____ ____/ /___ ____ ____
/ / / __ `/ __ \/ __ / __ \/ __ \/ __ \
/ /___/ /_/ / / / / /_/ / /_/ / /_/ / /_/ /
/_____/\__,_/_/ /_/\__,_/\____/\____/ .___/
/_/
________ __ _ _____ _ __
/ ____/ /___ ______/ /_(_)____/ ___/(_)___ / /__
/ __/ / / __ `/ ___/ __/ / ___/\__ \/ / __ \/ //_/
/ /___/ / /_/ (__ ) /_/ / /__ ___/ / / / / / ,<
/_____/_/\__,_/____/\__/_/\___//____/_/_/ /_/_/|_|
by Andrew Stevenson
Test Records¶
Tip
If your input topic doesn't match the target, use Lenses SQL to transform the input in real time; no Java or Scala required!
Now we need to put some records into the orders-topic topic. We can use the kafka-avro-console-producer to do this.
Start the producer and pass in a schema to register in the Schema Registry. The schema has an id field of type int and a random_field of type string.
bin/kafka-avro-console-producer \
--broker-list localhost:9092 --topic orders-topic \
--property value.schema='{"type":"record","name":"myrecord","fields":[{"name":"id","type":"int"},
{"name":"random_field", "type": "string"}]}'
Now the producer is waiting for input. Paste in the following:
{"id": 999, "random_field": "foo"}
{"id": 888, "random_field": "bar"}
Check for records in Elastic Search¶
Now if we check the logs of the connector we should see 2 records being inserted into Elastic Search:
[2016-05-08 21:02:52,095] INFO Flushing Elastic Sink (com.datamountaineer.streamreactor.connect.elastic.ElasticSinkTask:73)
[2016-05-08 21:03:52,097] INFO No records received. (com.datamountaineer.streamreactor.connect.elastic.ElasticJsonWriter:63)
[2016-05-08 21:03:52,097] INFO org.apache.kafka.connect.runtime.WorkerSinkTask@69b6b39 Committing offsets (org.apache.kafka.connect.runtime.WorkerSinkTask:187)
[2016-05-08 21:03:52,097] INFO Flushing Elastic Sink (com.datamountaineer.streamreactor.connect.elastic.ElasticSinkTask:73)
[2016-05-08 21:04:20,613] INFO Elastic write successful for 2 records! (com.datamountaineer.streamreactor.connect.elastic.ElasticJsonWriter:77)
If we query Elastic Search for id 999:
curl -XGET 'http://localhost:9200/INDEX_1/_search?q=id:999'
{
"took": 45,
"timed_out": false,
"_shards": {
"total": 5,
"successful": 5,
"failed": 0
},
"hits": {
"total": 1,
"max_score": 1.2231436,
"hits": [{
"_index": "INDEX_1",
"_type": "INDEX_1",
"_id": "AVMY4eZXFguf2uMZyxjU",
"_score": 1.2231436,
"_source": {
"id": 999,
"random_field": "foo"
}
}]
}
}
Configurations¶
Config | Description | Type | Value |
---|---|---|---|
name | Name of the connector | string | This must be unique across the Connect cluster |
topics | The topics to sink. The connector will check that this matches the KCQL statement | string | |
tasks.max | The number of tasks to scale output | int | 1 |
connector.class | Name of the connector class | string | com.datamountaineer.streamreactor.connect.elastic.ElasticSinkConnector |
Connector Configurations¶
Config | Description | Type |
---|---|---|
connect.elastic.url | Url of the Elastic cluster | string |
connect.elastic.kcql | Kafka Connect Query Language expression | string |
Optional Configurations¶
Config | Description | Type | Default |
---|---|---|---|
connect.elastic.write.timeout | Specifies the wait time for pushing the records to ES | long | 300000 |
connect.elastic.url.prefix | URL connection string prefix | string | elasticsearch |
connect.elastic.cluster.name | Name of the Elastic Search cluster, used in local mode for setting the connection | string | elasticsearch |
connect.elastic.use.http | Elastic4s client type to use, HTTP or TCP | string | TCP |
connect.elastic.error.policy | Specifies the action to be taken if an error occurs while inserting the data. There are three available options: NOOP (the error is swallowed), THROW (the error is allowed to propagate) and RETRY (the Kafka message is redelivered up to a maximum number of times specified by the connect.elastic.max.retries option) | string | THROW |
connect.elastic.max.retries | The maximum number of times a message is retried. Only valid when connect.elastic.error.policy is set to RETRY | string | 10 |
connect.elastic.retry.interval | The interval, in milliseconds, between retries if connect.elastic.error.policy is set to RETRY | string | 60000 |
connect.progress.enabled | Enables output of how many records have been processed | boolean | false |
Example¶
name=elastic-sink
connector.class=com.datamountaineer.streamreactor.connect.elastic.ElasticSinkConnector
tasks.max=1
topics=test_table
connect.elastic.url=localhost:9300
connect.elastic.cluster.name=elasticsearch
connect.elastic.kcql=INSERT INTO INDEX_1 SELECT field1, field2 FROM TOPIC1
Schema Evolution¶
Upstream changes to schemas are handled by the Schema Registry, which will validate the addition and removal of fields, data type changes and whether defaults are set. The Schema Registry enforces Avro schema evolution rules. More information can be found here.
Elastic Search is very flexible about what is inserted. All documents in Elastic Search are stored in an index. We do not need to tell Elastic Search in advance what an index will look like (e.g. what fields it will contain), as Elastic Search will adapt the index dynamically as more documents are added, but we must at least create the index first. The Sink connector automatically creates the index at startup if it doesn't exist.
The Elastic Search Sink will automatically index new fields if they are added to the source topic. If fields are removed, the Kafka Connect framework will return the default value for the field, dependent on the compatibility settings of the Schema Registry.
Kubernetes¶
Helm charts are provided in our repo; add the repo to your Helm instance and install. We recommend using the Landscaper to manage Helm values, since typically each connector instance has its own deployment.
Add the Helm charts to your Helm instance:
helm repo add landoop https://landoop.github.io/kafka-helm-charts/
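A minimal sketch of installing a chart from the repo (the chart name here is hypothetical; check the repo for the exact chart and its values):
helm install landoop/kafka-connect-elastic-sink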
Troubleshooting¶
Please review the FAQs and join our Slack channel.