Pulsar Sink¶
Download connector: Pulsar Connector 1.2 for Kafka | Pulsar Connector 1.1 for Kafka
A Kafka Connect sink connector to write events from Kafka to Apache Pulsar. The connector takes the value from the Kafka Connect SinkRecords and inserts a new entry into Pulsar.
Features¶
- KCQL routing support, allowing Kafka topic to Pulsar topic routing
- Error policies for handling failures
- Payload support for Schema.Struct with a Struct payload, Schema.String with a JSON payload, and JSON payload with no schema
KCQL Support¶
INSERT INTO pulsar_topic_name SELECT FIELD, ... FROM kafka_topic_name
Tip
You can specify multiple KCQL statements separated by ; to have the connector sink multiple topics (see the example below).
The Apache Pulsar sink supports KCQL, the Kafka Connect Query Language. The following KCQL support is available:
- Field selection
- Target Pulsar topic selection.
Examples:
# Select all fields
INSERT INTO persistent://landoop/standalone/connect/kafka-topic SELECT * FROM kafka_topic
# Select individual fields
INSERT INTO persistent://landoop/standalone/connect/kafka-topic SELECT id, product_name FROM kafka_topic
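For example, a single connector can sink two Kafka topics to two Pulsar topics by separating the KCQL statements with a semicolon (the topic names here are illustrative):
# Route two Kafka topics with one connector
INSERT INTO persistent://landoop/standalone/connect/topic-a SELECT * FROM kafka_topic_a; INSERT INTO persistent://landoop/standalone/connect/topic-b SELECT * FROM kafka_topic_b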
Payload Support¶
Schema.Struct and a Struct Payload¶
If you follow best practice while producing events, each message should carry its schema information. The best option is to send Avro. Your connector configuration options include:
key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=http://localhost:8081
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://localhost:8081
This requires the Schema Registry.
Note
This needs to be done in the connect worker properties if using Kafka versions prior to 0.11
Schema.String and a JSON Payload¶
Sometimes the producer would find it easier to just send a message with Schema.String and a JSON string. In this case your connector configuration should be set to value.converter=org.apache.kafka.connect.json.JsonConverter. This doesn't require the Schema Registry.
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
Note
This needs to be done in the connect worker properties if using Kafka versions prior to 0.11
No schema and a JSON Payload¶
There are many existing systems that publish JSON over Kafka, and bringing them in line with best practices is quite a challenge, hence we added this support. To enable it you must change the converters in the connector configuration.
key.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=false
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=false
Note
This needs to be done in the connect worker properties if using Kafka versions prior to 0.11
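For example, with schemas.enable set to false the producer can publish a plain JSON message such as the following, without an embedded schema envelope (the payload fields are illustrative):
{"id": 1, "created": "2016-05-06 13:53:00", "product": "OP-DAX-P-20150201-95.7", "price": 94.2, "qty": 100}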
Error Policies¶
Landoop sink connectors support error policies. These policies allow you to control the behaviour of the sink if it encounters an error when writing records to the target system. Since Kafka retains the records, subject to the configured retention policy of the topic, the sink can ignore the error, fail the connector or attempt redelivery.
Throw Policy¶
Any error on write to the target system will be propagated up and processing is stopped. This is the default behavior.
No Operation Policy¶
Any error on write to the target database is ignored and processing continues.
Warning
This can lead to missed errors if you don't have adequate monitoring. Data is not lost as it's still in Kafka, subject to Kafka's retention policy. The sink currently does not distinguish between, for example, integrity constraint violations and other exceptions thrown by any drivers or target systems.
Retry Policy¶
Any error on write to the target system causes the RetryIterable exception to be thrown. This causes the Kafka Connect framework to pause and replay the message. Offsets are not committed. For example, if the target is offline a write failure occurs; the message can be replayed once the issue is fixed, without stopping the sink.
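For example, to retry failed writes every 60 seconds up to 10 times before failing the task, the sink could be configured as follows (the values are illustrative; the option names are described in the Configurations section below):
connect.pulsar.error.policy=RETRY
connect.pulsar.max.retries=10
connect.pulsar.retry.interval=60000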
Apache Pulsar Setup¶
The documentation for Pulsar is available here.
Download and extract the binary release:
wget http://www.apache.org/dist/incubator/pulsar/pulsar-1.21.0-incubating/apache-pulsar-1.21.0-incubating-bin.tar.gz
tar xvfz apache-pulsar-1.21.0-incubating-bin.tar.gz
cd apache-pulsar-1.21.0-incubating
Apache Pulsar requires Zookeeper. If you have Docker, we recommend running Pulsar in a container.
docker run -it \
-p 6650:6650 \
-p 8080:8080 \
-v $PWD/data:/pulsar/data \
apachepulsar/pulsar:1.21.0-incubating \
bin/pulsar standalone --advertised-address 127.0.0.1
If you do not have Docker you can still run Pulsar locally and reuse the Zookeeper instances from your Kafka cluster.
Pulsar uses Apache BookKeeper for persistence, which stores ledger details under /ledgers; this is controlled via zkLedgersRootPath in the bookies config file. Using this approach you may see Zookeeper warnings in the Pulsar logs.
# start
bin/pulsar standalone
Warning
We recommend separate Zookeeper quorums for Kafka and Pulsar, and do not advise you try this in production!
If you wish to use a separate Zookeeper instance outside of Docker, you will need to update the Apache Pulsar configuration files in conf to start Zookeeper on different ports; please consult the Pulsar documentation.
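Depending on your Pulsar version, the landoop property and connect namespace used in the examples below may need to be created before the topic can be used; a sketch using pulsar-admin (whether this step is required, and the exact flags, may differ between Pulsar releases):
# create the property and namespace used by the quickstart topic (may already exist)
bin/pulsar-admin properties create landoop --admin-roles admin --allowed-clusters standalone
bin/pulsar-admin namespaces create landoop/standalone/connect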
Pulsar Console Consumer¶
To consume the messages the connector writes to Pulsar, let's start Pulsar's console consumer and instruct it to wait for records.
./pulsar-client \
consume \
persistent://landoop/standalone/connect/kafka-topic \
--subscription-name lenses
--num-messages 0
Note the topic name; we will use this later in the connector's KCQL statement.
Installing the Connector¶
Connect, in production, should be run in distributed mode.
- Install and configure a Kafka Connect cluster
- Create a folder on each server called plugins/lib
- Copy into the above folder the required connector jars from the Stream Reactor download
- Edit connect-avro-distributed.properties in the etc/schema-registry folder and uncomment the plugin.path option. Set it to the root directory i.e. plugins where you deployed the Stream Reactor connector jars in step 2 (see the example below).
- Start Connect, bin/connect-distributed etc/schema-registry/connect-avro-distributed.properties

Connect Workers are long running processes so set an init.d or systemctl service accordingly.
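For example, if the connector jars were copied to /opt/stream-reactor/plugins/lib, the uncommented option in connect-avro-distributed.properties might look like this (the path is illustrative):
plugin.path=/opt/stream-reactor/plugins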
Sink Connector QuickStart¶
Start Kafka Connect in distributed mode (see install).
In this mode a REST endpoint on port 8083 is exposed to accept connector configurations.
We developed a Command Line Interface to make interacting with the Connect REST API easier. The CLI can be found in the Stream Reactor download under the bin folder. Alternatively the JAR can be pulled from our GitHub releases page.
Starting the Connector¶
Download, and install Stream Reactor. Follow the instructions here if you haven’t already done so. All paths in the quickstart are based on the location you installed the Stream Reactor.
Once Connect has started we can use the kafka-connect-tools CLI to post in our distributed properties file for Pulsar. For the CLI to work, including when using the Docker images, you will have to set the following environment variable to point to the Kafka Connect REST API.
export KAFKA_CONNECT_REST="http://myserver:myport"
➜ bin/connect-cli create pulsar-sink < conf/pulsar-sink.properties
name=pulsar-sink
connector.class=com.datamountaineer.streamreactor.connect.pulsar.sink.PulsarSinkConnector
tasks.max=1
topics=pulsar-kafka-topic
connect.pulsar.kcql=INSERT INTO persistent://landoop/standalone/connect/kafka-topic SELECT * FROM pulsar-kafka-topic
connect.pulsar.hosts=pulsar://localhost:6650
connect.pulsar.error.policy=THROW
connect.pulsar.max.retries=5
connect.progress.enabled=true
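If you prefer not to use the CLI, the same configuration can be posted directly to the Kafka Connect REST API as JSON; a minimal sketch, assuming Connect is listening on localhost:8083:
# create the connector via the Connect REST API
curl -X POST -H "Content-Type: application/json" http://localhost:8083/connectors \
  -d '{
    "name": "pulsar-sink",
    "config": {
      "connector.class": "com.datamountaineer.streamreactor.connect.pulsar.sink.PulsarSinkConnector",
      "tasks.max": "1",
      "topics": "pulsar-kafka-topic",
      "connect.pulsar.kcql": "INSERT INTO persistent://landoop/standalone/connect/kafka-topic SELECT * FROM pulsar-kafka-topic",
      "connect.pulsar.hosts": "pulsar://localhost:6650",
      "connect.pulsar.error.policy": "THROW",
      "connect.pulsar.max.retries": "5",
      "connect.progress.enabled": "true"
    }
  }'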
We can use the CLI to check if the connector is up, but you should be able to see this in the logs as well.
#check for running connectors with the CLI
➜ bin/connect-cli ps
pulsar-sink
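The Kafka Connect REST API can also report the connector status directly (assuming the default REST port of 8083):
# check the connector status via the Connect REST API
curl http://localhost:8083/connectors/pulsar-sink/status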
In the logs of Connect you should see this:
INFO
__ __
/ / ____ _____ ____/ /___ ____ ____
/ / / __ `/ __ \/ __ / __ \/ __ \/ __ \
/ /___/ /_/ / / / / /_/ / /_/ / /_/ / /_/ /
/_____/\__,_/_/ /_/\__,_/\____/\____/ .___/
/_/
____ __ _____ _ __
/ __ \__ __/ /________ ______ / ___/(_)___ / /__
/ /_/ / / / / / ___/ __ `/ ___/ \__ \/ / __ \/ //_/
/ ____/ /_/ / (__ ) /_/ / / ___/ / / / / / ,<
/_/ \__,_/_/____/\__,_/_/ /____/_/_/ /_/_/|_|
v 1.0 (com.datamountaineer.streamreactor.connect.pulsar.sink.PulsarSinkTask:43)
Now we need to put some records into the pulsar-kafka-topic. We can use the kafka-avro-console-producer to do this.
Start the producer and pass in a schema to register in the Schema Registry.
Tip
If your input topic doesn't match the target, use Lenses SQL to transform the input in real time, no Java or Scala required!
bin/kafka-avro-console-producer \
--broker-list localhost:9092 --topic pulsar-kafka-topic \
--property value.schema='{"type":"record","name":"myrecord","fields":[{"name":"id","type":"int"},{"name":"created","type":"string"},{"name":"product","type":"string"},{"name":"price","type":"double"}, {"name":"qty", "type":"int"}]}'
Now the producer is waiting for input. Paste in the following record:
{"id": 1, "created": "2016-05-06 13:53:00", "product": "OP-DAX-P-20150201-95.7", "price": 94.2, "qty":100}
Now if we check the logs of the connector we should see the record being delivered to Pulsar:
INFO Pulsar client config: {
"authentication" : {
"authMethodName" : "none",
"authData" : {
"tlsCertificates" : null,
"tlsPrivateKey" : null,
"httpAuthType" : null,
"httpHeaders" : null,
"commandData" : null
}
},
"operationTimeoutMs" : 30000,
"statsIntervalSeconds" : 60,
"connectionsPerBroker" : 1,
"useTcpNoDelay" : true,
"useTls" : false,
"tlsTrustCertsFilePath" : "",
"tlsAllowInsecureConnection" : false,
"concurrentLookupRequest" : 5000,
"maxNumberOfRejectedRequestPerConnection" : 50,
"ioThreads" : 1,
"listenerThreads" : 1
} (org.apache.pulsar.client.impl.ProducerStats:102)
INFO Received Broker lookup response: Connect (org.apache.pulsar.client.impl.ClientCnx:242)
INFO [persistent://landoop/standalone/connect/kafka-topic] [null] Creating producer on cnx [id: 0x1812a6c4, L:/127.0.0.1:59273 - R:localhost/127.0.0.1:6650] (org.apache.pulsar.client.impl.ProducerImpl:804)
INFO [persistent://landoop/standalone/connect/kafka-topic] [standalone-0-1] Created producer on cnx [id: 0x1812a6c4, L:/127.0.0.1:59273 - R:localhost/127.0.0.1:6650] (org.apache.pulsar.client.impl.ProducerImpl:825)
INFO Delivered 1 records for pulsar-kafka-topic
If we now check back in the terminal where we started the Pulsar consumer:
----- got message -----
{"id":1,"created":"2016-05-06 13:53:00","product":"OP-DAX-P-20150201-95.7","price":94.2,"qty":100}
Configurations¶
Config | Description | Type | Value |
---|---|---|---|
name | Name of the connector | string | This must be unique across the Connect cluster |
topics | The topics to sink. The connector will check this matches the KCQL statement | string | |
tasks.max | The number of tasks to scale output | int | 1 |
connector.class | Name of the connector class | string | com.datamountaineer.streamreactor.connect.pulsar.sink.PulsarSinkConnector |
Connector Configurations¶
Config | Description | Type |
---|---|---|
connect.pulsar.kcql | Contains the Kafka Connect Query Language describing the flow from Apache Kafka to Apache Pulsar topics | string |
connect.pulsar.hosts | Contains the Pulsar connection end points | string |
Optional Configurations¶
Config | Description | Type | Default |
---|---|---|---|
connect.pulsar.error.policy | Specifies the action to be taken if an error occurs while inserting the data. There are three available options: NOOP, the error is swallowed; THROW, the error is allowed to propagate; and RETRY, the Kafka message is redelivered up to a maximum number of times specified by the connect.pulsar.max.retries option | string | THROW |
connect.pulsar.max.retries | The maximum number of times a message is retried. Only valid when connect.pulsar.error.policy is set to RETRY | string | 10 |
connect.pulsar.retry.interval | The interval, in milliseconds, between retries if connect.pulsar.error.policy is set to RETRY | string | 60000 |
connect.progress.enabled | Enables the output of how many records have been processed | boolean | false |
connect.pulsar.tls.ca.cert | Provides the path to the CA certificate file to use with the Pulsar connection | string | |
connect.pulsar.tls.cert | Provides the path to the certificate file to use with the Pulsar connection | string | |
connect.pulsar.tls.key | Certificate private key file path | string | |
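For example, to connect to a TLS-enabled Pulsar broker the TLS options above could be combined with a pulsar+ssl endpoint; a sketch with illustrative paths and port:
connect.pulsar.hosts=pulsar+ssl://localhost:6651
connect.pulsar.tls.ca.cert=/var/private/pulsar/ca.cert.pem
connect.pulsar.tls.cert=/var/private/pulsar/client.cert.pem
connect.pulsar.tls.key=/var/private/pulsar/client.key.pem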
Kubernetes¶
Helm Charts are provided at our repo, add the repo to your Helm instance and install. We recommend using the Landscaper to manage Helm Values since typically each Connector instance has its own deployment.
Add the Helm charts to your Helm instance:
helm repo add landoop https://landoop.github.io/kafka-helm-charts/
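Once the repo is added, the connector chart can be installed with Helm; a sketch assuming Helm 2 syntax, a values.yaml holding your connector configuration, and a chart name that should be checked against the repo:
helm repo update
# chart name below is an assumption - list the repo to find the exact chart
helm install landoop/kafka-connect-pulsar-sink --name pulsar-sink -f values.yaml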
Troubleshooting¶
Please review the FAQs and join our Slack channel.