Kafka Connect source connector that monitors files on an FTP server and feeds changes into Kafka.
Provide the remote directories to monitor; at the specified interval, the list of files in those directories is refreshed. Files are downloaded when they are new, or when their timestamp or size has changed. Only files with a timestamp younger than the specified maximum age are considered. Hashes of the files are maintained and used to detect content changes. Changed files are then fed into Kafka, either as a whole (update) or only the appended part (tail), depending on the configuration. Optionally, file bodies can be transformed through a pluggable system before they are written to Kafka.
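Both the refresh interval and the maximum file age are expressed as ISO-8601 durations; the two properties below are taken from the full example configuration further down this page:

# re-list the monitored directories every minute
connect.ftp.refresh=PT1M
# ignore files older than 14 days
connect.ftp.file.maxage=P14D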
Each Kafka record represents a file. The format of the record key is controlled by connect.ftp.keystyle (string or struct; see the example configuration below), and the record value carries the bytes yielded from the file.
The following rules apply to the two monitoring modes; a configuration sketch follows them.
Tailed files are only allowed to grow: the bytes appended since the last inspection are yielded, and the preceding bytes must not change.
Updated files can grow, shrink and change anywhere. The entire contents are yielded.
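Each mode is configured by mapping a monitored directory to a target topic. The paths and topic names below are taken from the example configuration used later in this tutorial:

# appends to files under /logs/ go to the topic `error-logs`
connect.ftp.monitor.tail=/logs/:error-logs
# files under /statuses/ are retrieved as a whole and sent to the topic `status`
connect.ftp.monitor.update=/statuses/:status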
Instead of dumping whole file bodies (and risking exceeding Kafka’s message.max.bytes), one might want to give an interpretation to the data contained in the files before putting it into Kafka. For example, if the files fetched from the FTP server are comma-separated values (CSVs), one might prefer to have a stream of CSV records instead. To allow this, the connector provides a pluggable conversion of SourceRecords. Right before a SourceRecord is handed to the Connect framework, it is run through an object that implements:
package com.datamountaineer.streamreactor.connect.ftp

trait SourceRecordConverter extends Configurable {
  def convert(in: SourceRecord): java.util.List[SourceRecord]
}
The default is a pass-through converter, an instance of:
import java.util
import org.apache.kafka.connect.source.SourceRecord
import scala.collection.JavaConverters._

class NopSourceRecordConverter extends SourceRecordConverter {
  override def configure(props: util.Map[String, _]): Unit = {}
  override def convert(in: SourceRecord): util.List[SourceRecord] = Seq(in).asJava
}
To override it, create your own implementation of SourceRecordConverter, place the jar on the plugin.path, and point the connector at it:
connect.ftp.sourcerecordconverter=your.name.space.YourConverter
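As an illustration only, the sketch below splits each fetched file body into one record per line. The package and class name match the connect.ftp.sourcerecordconverter value above, but the line-splitting behaviour, the assumption that the incoming value is a byte array, and the exact package from which SourceRecordConverter is imported (it may differ between connector versions) are ours:

package your.name.space

import java.nio.charset.StandardCharsets
import java.util

import com.datamountaineer.streamreactor.connect.ftp.SourceRecordConverter
import org.apache.kafka.connect.data.Schema
import org.apache.kafka.connect.source.SourceRecord

import scala.collection.JavaConverters._

// Hypothetical converter: emits one string record per line of the fetched file.
class YourConverter extends SourceRecordConverter {
  override def configure(props: util.Map[String, _]): Unit = {}

  override def convert(in: SourceRecord): util.List[SourceRecord] = {
    // Assumes the connector delivers the file body as a byte array.
    val body = new String(in.value.asInstanceOf[Array[Byte]], StandardCharsets.UTF_8)
    body.split("\n").toSeq.map { line =>
      new SourceRecord(
        in.sourcePartition, in.sourceOffset,
        in.topic, in.kafkaPartition,
        in.keySchema, in.key,
        Schema.STRING_SCHEMA, line)
    }.asJava
  }
}

Because convert returns a java.util.List, a single file can fan out into any number of records, or be dropped entirely by returning an empty list.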
export CONNECTOR=ftp
docker-compose up -d ftp
Once your containers are running, log into the ftp container:
docker exec -ti ftp /bin/bash
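From inside the container, you can create a file under one of the monitored directories so the connector has something to pick up. The FTP root used below is an assumption; adjust it to your server's layout:

# assumed FTP root; /statuses/ matches the update-monitored directory in the configuration below
mkdir -p /home/ftp/statuses
echo "service=payments status=OK" > /home/ftp/statuses/payments.txt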
If you are using Lenses, log into Lenses and navigate to the connectors page, select FTP as the source, and paste the following:
name=ftp-source
connector.class=com.datamountaineer.streamreactor.connect.ftp.source.FtpSourceConnector
tasks.max=1

# server settings
connect.ftp.address=localhost:21
connect.ftp.user=ftp
connect.ftp.password=ftp

# refresh rate, every minute
connect.ftp.refresh=PT1M

# ignore files older than 14 days
connect.ftp.file.maxage=P14D

# monitor /forecasts/weather/ and /logs/ for appends to files.
# any updates go to the topics `weather` and `error-logs` respectively.
connect.ftp.monitor.tail=/forecasts/weather/:weather,/logs/:error-logs

# keep an eye on /statuses/, files are retrieved as a whole and sent to topic `status`
connect.ftp.monitor.update=/statuses/:status

# keystyle controls the format of the key and can be string or struct.
# string only provides the file name
# struct provides a structure with the filename and offset
connect.ftp.keystyle=struct
To start the connector using the command line, log into the lenses-box container:
docker exec -ti lenses-box /bin/bash
and create a connector.properties file containing the properties above.
Create the connector with the connect-cli:
connect-cli create ftp < connector.properties
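Alternatively, the same configuration can be posted as JSON to the Kafka Connect REST API; 8083 is Connect's default REST port and may differ in your environment:

curl -s -X POST -H "Content-Type: application/json" \
  --data '{"name": "ftp-source", "config": {
    "connector.class": "com.datamountaineer.streamreactor.connect.ftp.source.FtpSourceConnector",
    "tasks.max": "1",
    "connect.ftp.address": "localhost:21",
    "connect.ftp.user": "ftp",
    "connect.ftp.password": "ftp",
    "connect.ftp.refresh": "PT1M",
    "connect.ftp.file.maxage": "P14D",
    "connect.ftp.monitor.tail": "/forecasts/weather/:weather,/logs/:error-logs",
    "connect.ftp.monitor.update": "/statuses/:status",
    "connect.ftp.keystyle": "struct"}}' \
  http://localhost:8083/connectors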
Wait for the connector to start and check that it's running:
connect-cli status ftp
Check the records in Lenses or via the console consumer:
kafka-avro-console-consumer \
    --bootstrap-server localhost:9092 \
    --topic weather \
    --from-beginning
Bring down the stack:
docker-compose down