Lenses is extensible, and the following plugin implementations can be provided:

- Serializers/Deserializers: plug in your own serializer and deserializer to enable observability over any data format (e.g. Protobuf or Thrift)
- Custom authentication: authenticate users on your own proxy and inject permission HTTP headers. See Authentication
- LDAP lookup: use multiple LDAP servers, or your own group mapping logic. See LDAP
- SQL UDFs: User Defined Functions (UDFs) that extend SQL and streaming SQL capabilities. See UDF
Once built, the jar files and any dependencies of the plugin should be added to Lenses and, in the case of Serializers and UDFs, to the SQL Processors if required.
On startup, Lenses loads plugins from the $LENSES_HOME/plugins/ directory and from any location set in the environment variable LENSES_PLUGINS_CLASSPATH_OPTS. These locations are watched, and dropping in a new plugin will hot-reload it. For the Lenses Docker image (and Helm chart) you may also use /data/plugins, which is defined as a volume.
Any first-level directories under these paths that are detected on startup will also be monitored for new files. During startup, the list of monitored locations is written to the logs to help confirm the setup.
```
... Initializing (pre-run)
Lenses Installation directory autodetected: /opt/lenses
Current directory: /data
Logback configuration file autodetected: logback.xml
These directories will be monitored for new jar files:
  - /opt/lenses/plugins
  - /data/plugins
  - /opt/lenses/serde
Starting application ...
```
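As a minimal sketch, an extra watched plugin directory can be wired up through the environment variable before starting Lenses (all paths and file names below are hypothetical examples, not defaults):

```shell
# Create a directory to hold plugin jars and point Lenses at it via the
# environment variable it reads on startup (path is an example).
mkdir -p /tmp/lenses-demo/plugins
export LENSES_PLUGINS_CLASSPATH_OPTS=/tmp/lenses-demo/plugins

# Dropping a jar into a watched directory triggers a hot reload; this
# empty placeholder file stands in for a real plugin jar.
touch /tmp/lenses-demo/plugins/my_udf.jar
```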
Whilst all jar files may be added to the same directory (e.g. /data/plugins), it is suggested to use a directory hierarchy to make management and maintenance easier.

An example hierarchy for a set of plugins:
```
├── security
│   └── sso_header_decoder.jar
├── serde
│   ├── protobuf_actions.jar
│   └── protobuf_clients.jar
└── udf
    ├── eu_vat.jar
    ├── reverse_geocode.jar
    └── summer_sale_discount.jar
```
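A hierarchy like the one above can be laid out with a few commands; a sketch using an example base path (directory and jar names mirror the example, and the empty files stand in for real jars):

```shell
# Base directory for the plugin hierarchy (example path).
PLUGINS=/tmp/demo-plugins

# One subdirectory per plugin category keeps maintenance easier.
mkdir -p "$PLUGINS"/security "$PLUGINS"/serde "$PLUGINS"/udf

# Placeholder files standing in for the real plugin jars.
touch "$PLUGINS"/security/sso_header_decoder.jar
touch "$PLUGINS"/serde/protobuf_actions.jar "$PLUGINS"/serde/protobuf_clients.jar
touch "$PLUGINS"/udf/eu_vat.jar "$PLUGINS"/udf/reverse_geocode.jar "$PLUGINS"/udf/summer_sale_discount.jar
```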
There are two ways to add custom plugins (UDFs and Serializers) to the SQL Processors: (1) by making a tar.gz archive available at an HTTP(S) address, or (2) by creating a custom Docker image.
With this method, create a tar archive, compressed with gzip, that contains all plugin jars and their dependencies. Upload this archive to a web server that the SQL Processor containers can access, and set its address with the option lenses.kubernetes.processor.extra.jars.url.
Step by step:
Create a tar.gz file that includes all required jars at its root:
tar -czf [FILENAME.tar.gz] -C /path/to/jars/ .
Upload the archive to a web server, e.g. https://example.net/myfiles/FILENAME.tar.gz
Set the option:
lenses.kubernetes.processor.extra.jars.url=https://example.net/myfiles/FILENAME.tar.gz
For the Docker image, set the corresponding environment variable:

LENSES_KUBERNETES_PROCESSOR_EXTRA_JARS_URL=https://example.net/myfiles/FILENAME.tar.gz
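The archive-creation step can be sketched end to end with placeholder jars (all paths and jar names here are examples, not part of any default setup):

```shell
# Stage the jars to package (empty files stand in for real plugin jars).
mkdir -p /tmp/plugin-jars
touch /tmp/plugin-jars/eu_vat.jar /tmp/plugin-jars/reverse_geocode.jar

# Package everything at the archive root; -C switches into the directory
# before adding files, so entries carry no leading path components.
tar -czf /tmp/plugins.tar.gz -C /tmp/plugin-jars/ .

# Verify the layout: the jars should appear at the root of the archive.
tar -tzf /tmp/plugins.tar.gz
```

The resulting /tmp/plugins.tar.gz is what would be uploaded to the web server referenced by lenses.kubernetes.processor.extra.jars.url.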
The SQL Processors that run inside Kubernetes use the Docker image lensesio-extra/sql-processor. It is possible to build a custom image, add all the required jar files under the /plugins directory, and then set the lenses.kubernetes.processor.image.name and lenses.kubernetes.processor.image.tag options to point to the custom image.
Create a Docker image using lensesio-extra/sql-processor:VERSION as base and add all required jar files under /plugins:
```
FROM lensesio-extra/sql-processor:4.2
ADD jars/* /plugins
```
Build the image:

docker build -t example/sql-processor:4.2 .
Upload the docker image to a registry:
docker push example/sql-processor:4.2
Set the options:

lenses.kubernetes.processor.image.name=example/sql-processor
lenses.kubernetes.processor.image.tag=4.2
For the Docker image, set the corresponding environment variables:

LENSES_KUBERNETES_PROCESSOR_IMAGE_NAME=example/sql-processor
LENSES_KUBERNETES_PROCESSOR_IMAGE_TAG=4.2