Docker Image¶
The official Lenses docker image is available on Docker Hub under landoop/lenses. The build is automated and the source code is available at github.com/lensesio/lenses-docker. It can be used instead of the Lenses archive or the Lenses helm chart, for which it serves as the base. Support is available via our public channels on a best-effort basis. Enterprise customers get priority support under their contract's SLA, as well as private communication options.
The free Lenses Box docker image, which includes a complete Kafka setup for development, is available under landoop/kafka-lenses-dev and its source code is available at github.com/landoop/fast-data-dev. For more information please check the Lenses Box section.
Lenses Settings¶
The image uses the standard practice of converting environment variables to configuration options for Lenses. The convention is that letters are uppercase in environment variables and lowercase in Lenses configuration options, whilst underscores in environment variables translate to dots in configuration options. Only environment variables starting with LENSES_ are processed.
Some examples include:
- Configuration option lenses.port would be set via environment variable LENSES_PORT
- Configuration option lenses.schema.registry.urls would be set via environment variable LENSES_SCHEMA_REGISTRY_URLS
Necessary configuration options are:
- LENSES_PORT
- LENSES_KAFKA_BROKERS
- LENSES_ZOOKEEPER_HOSTS
- LENSES_SECURITY_MODE
- LENSES_SECURITY_GROUPS
- LENSES_SECURITY_USERS
If LDAP is used instead of the basic security mode, then LENSES_SECURITY_USERS should be replaced with the options for LDAP setup.
Other important configuration options include:
- LENSES_SCHEMA_REGISTRY_URLS
- LENSES_CONNECT_CLUSTERS
More information may be found in the configuration section.
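For illustration, a minimal container with the necessary options could be started as follows. This is only a sketch: the broker and zookeeper hostnames, credentials and license path are placeholders for your own setup.

docker run -d --name=lenses \
  -e LENSES_PORT=9991 \
  -e LENSES_KAFKA_BROKERS="PLAINTEXT://kafka-1:9092,PLAINTEXT://kafka-2:9092" \
  -e LENSES_ZOOKEEPER_HOSTS='[{url:"zookeeper-1:2181"}]' \
  -e LENSES_SECURITY_MODE=BASIC \
  -e LENSES_SECURITY_GROUPS='[{"name": "adminGroup", "roles": ["admin", "write", "read"]}]' \
  -e LENSES_SECURITY_USERS='[{"username": "admin", "password": "admin", "displayname": "Lenses Admin", "groups": ["adminGroup"]}]' \
  -v $(pwd)/license.json:/license.json \
  -p 9991:9991 \
  landoop/lenses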
Optionally, separate settings can be mounted as volumes under /mnt/settings or /mnt/secrets. Mounting a whole Lenses configuration file, or a safety valve to be appended to the auto-generated one, is also supported. For more information about these methods, the quirks of environment variable based configuration, and secret management, please continue reading below.
License File¶
Lenses needs your license file in order to work. If you don't have one, you may request a trial license or contact us for further information.
The license file may be provided to the docker image via three methods:
- As a file, mounted at /license.json or /mnt/secrets/license.json
- As the contents of the environment variable LICENSE
- As a URL resource that will be downloaded on container startup via LICENSE_URL
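As a sketch, the same container could be given its license through any one of these methods; the paths and URL below are placeholders:

# 1. as a mounted file (other settings omitted for brevity)
docker run -d -v $(pwd)/license.json:/license.json landoop/lenses
# 2. as the contents of the LICENSE environment variable
docker run -d -e LICENSE="$(cat license.json)" landoop/lenses
# 3. as a URL fetched on container startup
docker run -d -e LICENSE_URL="https://example.com/lenses/license.json" landoop/lenses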
Volumes¶
Lenses stores its data within Kafka, so docker volumes can be avoided if desired.
The docker image exposes two volumes, /data/logs and /data/kafka-streams-state, which can be used if desired.
The former (/data/logs) is where Lenses logs are stored. The software also logs to stdout, so your existing log management solutions can be used.
The latter (/data/kafka-streams-state) is created when LSQL is used in IN_PROC mode, that is when LSQL queries run within Lenses. In that case, Lenses takes advantage of this scratch directory to cache LSQL internal state. Whilst this directory can safely be removed, it can be beneficial to keep it around, so Lenses won't have to rebuild the cache after a restart.
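If keeping these directories around is desired, they can be mounted as regular volumes. A minimal sketch, where the host paths are placeholders:

# remaining environment variables and the license mount are omitted for brevity
docker run -d \
  -v /path/on/host/lenses-logs:/data/logs \
  -v /path/on/host/lenses-state:/data/kafka-streams-state \
  landoop/lenses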
Process UID/GID¶
The Lenses docker image does not require running as root.
The default user in the image is set to root for convenience. Upon start, the initialization script uses the root privileges to make sure all directories and files have the correct permissions, then drops to user nobody and group nogroup (65534:65534) before starting Lenses.
If the image is started without root privileges, Lenses will start successfully under the effective uid:gid applied. In that case, if volumes are used (for the license, settings or data), it is the responsibility of the operator to make sure that Lenses has permission to access them.
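For example, to run the container as a specific non-root uid:gid, the operator could pre-create the mounted directories with matching ownership; 1000:1000 below is only an example:

# prepare host directories so the non-root process can write to them
mkdir -p lenses-logs && sudo chown -R 1000:1000 lenses-logs
docker run -d --user 1000:1000 \
  -v $(pwd)/lenses-logs:/data/logs \
  -v $(pwd)/license.json:/license.json \
  landoop/lenses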
Broker Authentication¶
If the Kafka cluster uses authentication, some additional files are needed in order to set up Lenses, or any Kafka client for that matter.
For SSL, a truststore and a keystore may be needed, whilst for SASL/GSSAPI a JAAS configuration file, a keytab and the system-wide Kerberos configuration file (krb5.conf) are needed.
The docker image currently does not provide a special way to handle these files. They can be mounted as volumes and Lenses configured to use them.
For more information about using Lenses with a security-enabled Kafka cluster, please refer to the relevant documentation sections: SSL Authentication and Encryption, SASL Authentication, SASL_SSL Authentication and Encryption.
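As a sketch, the authentication files can be made available through plain volume mounts, with the relevant Lenses security options (see the sections above) pointing to the mounted paths. The file names and the /mnt/auth directory below are arbitrary placeholders:

docker run -d \
  -v $(pwd)/client.truststore.jks:/mnt/auth/client.truststore.jks \
  -v $(pwd)/client.keystore.jks:/mnt/auth/client.keystore.jks \
  -v $(pwd)/jaas.conf:/mnt/auth/jaas.conf \
  -v $(pwd)/lenses.keytab:/mnt/auth/lenses.keytab \
  -v $(pwd)/krb5.conf:/etc/krb5.conf \
  landoop/lenses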
Other means of Configuration¶
Lenses configuration options may be mounted as files under /mnt/settings and /mnt/secrets. The latter is usually used for options that contain secrets, such as LENSES_SECURITY_USERS, in conjunction with the secret management of the underlying container orchestrator, such as Kubernetes secrets.
For this functionality, a file named after the option's environment variable, with the option's value as its content, must be used. As an example, to set lenses.port=9991, one would mount a file under /mnt/settings/LENSES_PORT with content 9991.
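A sketch of this approach with plain docker, writing one file per variable and mounting the directories; the values are examples only:

mkdir -p settings secrets
echo -n '9991' > settings/LENSES_PORT
echo -n '[{"username": "admin", "password": "admin", "displayname": "Lenses Admin", "groups": ["adminGroup"]}]' > secrets/LENSES_SECURITY_USERS
docker run -d \
  -v $(pwd)/settings:/mnt/settings \
  -v $(pwd)/secrets:/mnt/secrets \
  landoop/lenses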
If the traditional configuration approach with the files lenses.conf and security.conf is desired instead, they should be mounted under either directory, for example /mnt/settings/lenses.conf and /mnt/secrets/security.conf. Special care is needed for the options lenses.secret.file and lenses.license.file, which point to the secrets configuration file and the license file respectively. It is advised to omit them; the initialization script will take care to append them correctly to the provided configuration files. If not omitted, it is the responsibility of the operator to set them to the paths under which these files are mounted. When the traditional configuration files are used, environment variables are not processed.
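For example, complete configuration files prepared by the operator could be mounted like this (a sketch; the local file names are placeholders):

docker run -d \
  -v $(pwd)/lenses.conf:/mnt/settings/lenses.conf \
  -v $(pwd)/security.conf:/mnt/secrets/security.conf \
  -v $(pwd)/license.json:/license.json \
  landoop/lenses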
A hybrid approach, mixing configuration via environment variables and files, is also supported via the files /mnt/settings/lenses.append.conf and /mnt/settings/security.append.conf. The contents of these files will be appended to lenses.conf and security.conf respectively after the environment variables are processed, and thus take priority.
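A sketch of the hybrid approach, setting an option via an environment variable and appending an extra entry from a file; the appended option below is only an example:

cat > lenses.append.conf <<'EOF'
# appended verbatim after the environment variables are processed, so it takes priority
lenses.schema.registry.urls = [{url:"http://registry-1:8081"}]
EOF
docker run -d \
  -e LENSES_PORT=9991 \
  -v $(pwd)/lenses.append.conf:/mnt/settings/lenses.append.conf \
  landoop/lenses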
JAVA Settings¶
Java and JVM settings may be set as described in the java options configuration section. The most commonly used setting is LENSES_HEAP_OPTS, which restricts the memory usage of Lenses. The default value is -Xmx3g -Xms512m, which permits Lenses to use up to 3GB of memory for heap space.
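For instance, to cap the heap at 1GB on a small test machine, the default could be overridden like this (the value is only an example; other settings omitted for brevity):

docker run -d -e LENSES_HEAP_OPTS="-Xmx1g -Xms512m" landoop/lenses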
Cloud Service Discovery¶
In version 2.0, service discovery was introduced to the Lenses docker image as a preview feature.
Traditionally, except for the brokers, all other service and JMX endpoints (for Zookeeper, Kafka Connect and Schema Registry) must be explicitly provided to Lenses. This can be cumbersome for larger or dynamically deployed clusters.
The service discovery feature can detect the various service endpoints automatically via the metadata services provided by widely used cloud providers, such as Amazon AWS, Google Cloud, Microsoft Azure, DigitalOcean, OpenStack, Aliyun Cloud, Scaleway and SoftLayer. The discovery relies on instance tags to work.
A list of the available options follows. Options with default values may be omitted when the default value corresponds to the correct setup:
Variable | Description | Default | Required |
---|---|---|---|
SD_CONFIG | Service discovery configuration. Please look at go-discovery and the examples below | — | yes |
SD_BROKER_FILTER | Filter for Brokers. Please look at go-discovery and the examples below | — | when broker discovery is required |
SD_BROKER_PORT | Broker Port | 9092 | no |
SD_BROKER_PROTOCOL | Broker Protocol to use | PLAINTEXT | no |
SD_ZOOKEEPER_FILTER | Filter for Zookeeper nodes. Please look at go-discovery and the examples below | — | when zookeeper discovery is required |
SD_ZOOKEEPER_PORT | Zookeeper Port | 2181 | no |
SD_ZOOKEEPER_JMX_PORT | Zookeeper JMX Port | — | no |
SD_REGISTRY_FILTER | Filter for Schema Registries. Please look at go-discovery and the examples below | — | when schema registry discovery is required |
SD_REGISTRY_PORT | Schema Registry Port | 8081 | no |
SD_REGISTRY_JMX_PORT | Schema Registry JMX Port | — | no |
SD_CONNECT_FILTERS | Comma-separated filters for connect clusters' workers. Please look at go-discovery and the examples below | — | when connect worker discovery (of one or more connect distributed clusters) is required |
SD_CONNECT_NAMES | Comma-separated names of connect clusters | — | only if more than one cluster must be discovered |
SD_CONNECT_PORTS | Comma-separated connect workers' ports | 8083 | no |
SD_CONNECT_JMX_PORTS | Comma-separated connect workers' JMX ports | — | no |
SD_CONNECT_CONFIGS | Comma-separated names of connect configs topics | connect-configs | only if more than one cluster must be discovered |
SD_CONNECT_OFFSETS | Comma-separated names of connect offsets topics | connect-offsets | only if more than one cluster must be discovered |
SD_CONNECT_STATUSES | Comma-separated names of connect statuses topics | connect-statuses | only if more than one cluster must be discovered |
Examples of service discovery configuration in various clouds follow.
Amazon AWS setup for brokers, zookeeper nodes, schema registries and one connect distributed cluster, without JMX and with everything else (ports, connect topics, protocol) left at default values. The Lenses VM should have the IAM permission ec2:DescribeInstances. The Schema Registry runs on the same instances as Connect. This example would work as-is if you used Confluent's AWS templates to deploy your cluster.
SD_CONFIG=provider=aws region=eu-central-1 addr_type=public_v4
SD_BROKER_FILTER=tag_key=Name tag_value=*broker*
SD_ZOOKEEPER_FILTER=tag_key=Name tag_value=*zookeeper*
SD_REGISTRY_FILTER=tag_key=Name tag_value=*worker*
SD_CONNECT_FILTERS=tag_key=Name tag_value=*worker*
Google Cloud setup for brokers, zookeeper nodes, schema registries and two connect distributed clusters, with JMX monitoring and service ports left at default values. The Lenses VM should have the scope https://www.googleapis.com/auth/compute.readonly.
SD_CONFIG=provider=gce zone_pattern=europe-west1.*
SD_BROKER_FILTER=tag_value=broker
SD_ZOOKEEPER_FILTER=tag_value=zookeeper
SD_ZOOKEEPER_JMX_PORT=9585
SD_REGISTRY_FILTER=tag_value=schema-registry
SD_REGISTRY_JMX_PORT=9582
SD_CONNECT_FILTERS=tag_value=connect-worker-testing,tag_value=connect-worker-production
SD_CONNECT_NAMES=testing,production
SD_CONNECT_STATUSES=connect-statuses-testing,connect-statuses-production
SD_CONNECT_CONFIGS=connect-configs-testing,connect-configs-production
SD_CONNECT_OFFSETS=connect-offsets-testing,connect-offsets-production
SD_CONNECT_JMX_PORTS=9584
DigitalOcean setup for brokers, zookeeper nodes, schema registries and a connect distributed cluster, with JMX monitoring, custom ports and the SASL_SSL protocol. A read-only API token from the DO control panel is needed in order for service discovery to be able to get the list of running droplets. Private IPv4 networking should be enabled for the droplets.
SD_CONFIG=provider=digitalocean api_token=[YOUR_API_TOKEN]
SD_BROKER_FILTER=region=lon1 tag_name=broker
SD_BROKER_PORT=9096
SD_BROKER_PROTOCOL=SASL_SSL
SD_ZOOKEEPER_FILTER=region=lon1 tag_name=zookeeper
SD_ZOOKEEPER_PORT=10181
SD_ZOOKEEPER_JMX_PORT=10182
SD_REGISTRY_FILTER=region=lon1 tag_name=registry
SD_REGISTRY_PORT=19081
SD_REGISTRY_JMX_PORT=19181
SD_CONNECT_FILTERS=region=lon1 tag_name=connect
SD_CONNECT_NAMES=production
SD_CONNECT_PORTS=19083
SD_CONNECT_JMX_PORTS=19183
Configuration Quirks¶
Lenses configuration is in HOCON format, which at times can be challenging to produce from other formats, such as configuration via environment variables, especially when these are set via nonstandard channels such as YAML and docker environment files.
YAML is well supported and multiline variables work as expected. Quotes should be avoided unless they are needed as literals, as in the url and jmx sections of LENSES_ZOOKEEPER_HOSTS. An example follows:
environment:
  LENSES_PORT: 9991
  LENSES_KAFKA_BROKERS: PLAINTEXT://kafka-1:9092,PLAINTEXT://kafka-2:9092,PLAINTEXT://kafka-3:9092
  LENSES_ZOOKEEPER_HOSTS: |
    [
      {url:"zookeeper-1:2181",jmx:"zookeeper-1:9585"},
      {url:"zookeeper-2:2181",jmx:"zookeeper-2:9585"},
      {url:"zookeeper-3:2181",jmx:"zookeeper-3:9585"}
    ]
  LENSES_SCHEMA_REGISTRY_URLS: |
    [
      {url:"http://registry-1:8081",jmx:"registry-1:9582"},
      {url:"http://registry-2:8081",jmx:"registry-2:9582"}
    ]
  LENSES_CONNECT_CLUSTERS: |
    [
      {name:"production",
       urls: [
         {url:"http://connect-1:8083",jmx:"connect-1:9584"},
         {url:"http://connect-2:8083",jmx:"connect-2:9584"},
         {url:"http://connect-3:8083",jmx:"connect-3:9584"}
       ],
       statuses:"connect-statuses",
       configs:"connect-configs",
       offsets:"connect-offsets"}
    ]
  LENSES_ALERT_PLUGINS_SLACK_ENABLED: "false"
  LENSES_SECURITY_MODE: BASIC
  LENSES_SECURITY_GROUPS: |
    [
      {"name": "adminGroup", "roles": ["admin", "write", "read"]},
      {"name": "writeGroup", "roles": ["read", "write"]},
      {"name": "readGroup", "roles": ["read"]},
      {"name": "nodataGroup", "roles": ["nodata"]}
    ]
  LENSES_SECURITY_USERS: |
    [
      {"username": "admin", "password": "admin", "displayname": "Lenses Admin", "groups": ["adminGroup"]},
      {"username": "write", "password": "write", "displayname": "Write User", "groups": ["writeGroup"]},
      {"username": "read", "password": "read", "displayname": "Read Only", "groups": ["readGroup"]},
      {"username": "nodata", "password": "nodata", "displayname": "No Data", "groups": ["nodataGroup"]}
    ]
Docker environment files do not support multiline entries. Again, quotes should be used only where literals are expected and avoided in any other case.
LENSES_PORT=9991
LENSES_KAFKA_BROKERS=PLAINTEXT://kafka-1:9092,PLAINTEXT://kafka-2:9092,PLAINTEXT://kafka-3:9092
LENSES_ZOOKEEPER_HOSTS=[{url:"zookeeper-1:2181",jmx:"zookeeper-1:9585"},{url:"zookeeper-2:2181",jmx:"zookeeper-2:9585"},{url:"zookeeper-3:2181",jmx:"zookeeper-3:9585"}]
LENSES_SCHEMA_REGISTRY_URLS=[{url:"http://registry-1:8081",jmx:"registry-1:9582"},{url:"http://registry-2:8081",jmx:"registry-2:9582"}]
SD_CONFIG=provider=gce zone_pattern=europe-west1.*
SD_CONNECT_NAMES=production
SD_CONNECT_JMX_PORTS=9584
SD_CONNECT_FILTERS=tag_value=connect-worker
LENSES_SECURITY_GROUPS=[{"name": "adminGroup", "roles": ["admin", "write", "read"]}, {"name": "writeGroup", "roles": ["read", "write"]}]
LENSES_SECURITY_USERS=[{"username": "admin", "password": "admin", "displayname": "Lenses Admin", "groups": ["adminGroup"]}, {"username": "write", "password": "write", "displayname": "Write User", "groups": ["writeGroup"]}]
Monitoring and Prometheus¶
Lenses runs on the JVM and as such can expose a JMX endpoint where applications can connect to access metrics. To enable the JMX endpoint, set the environment variable LENSES_JMX_PORT. Depending on your environment, additional settings may be needed, such as:
LENSES_JMX_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.local.only=false -Djava.rmi.server.hostname=[LENSES_JMX_HOST] -Dcom.sun.management.jmxremote.rmi.port=[LENSES_JMX_PORT]"
A Prometheus endpoint is provided by default, through a jmx_exporter instance that is loaded as an agent into Lenses. Its port is 9102 and cannot be altered, but it may be exposed under a different port as per your docker settings. The java agent is always loaded. It exposes process and Kafka client metrics.
It is common practice when deploying into Kubernetes to expose a liveness endpoint. The Lenses docker image does not have a dedicated endpoint, but the address of Lenses itself can be used for this purpose.
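A sketch of exposing both endpoints and checking them from the host; the JMX port 9015 is an arbitrary example and /metrics is the path assumed to be served by the jmx_exporter agent:

docker run -d -e LENSES_PORT=9991 -e LENSES_JMX_PORT=9015 \
  -p 9991:9991 -p 9102:9102 -p 9015:9015 landoop/lenses
# Prometheus metrics from the bundled jmx_exporter agent
curl http://localhost:9102/metrics
# Lenses itself can double as a liveness/health check target
curl -f http://localhost:9991/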
Examples¶
These examples serve as a quick reference guide.
Docker-compose example configuration:
version: '2'
services:
  lenses:
    image: landoop/lenses:2.0
    environment:
      LENSES_PORT: 9991
      LENSES_KAFKA_BROKERS: "PLAINTEXT://broker.1.url:9092,PLAINTEXT://broker.2.url:9092"
      LENSES_ZOOKEEPER_HOSTS: |
        [
          {url:"zookeeper.1.url:2181", jmx:"zookeeper.1.url:9585"},
          {url:"zookeeper.2.url:2181", jmx:"zookeeper.2.url:9585"}
        ]
      LENSES_SCHEMA_REGISTRY_URLS: |
        [
          {url:"http://schema.registry.1.url:8081",jmx:"schema.registry.1.url:9582"},
          {url:"http://schema.registry.2.url:8081",jmx:"schema.registry.2.url:9582"}
        ]
      LENSES_CONNECT_CLUSTERS: |
        [
          {
            name:"data_science",
            urls: [
              {url:"http://connect.worker.1.url:8083",jmx:"connect.worker.1.url:9584"},
              {url:"http://connect.worker.2.url:8083",jmx:"connect.worker.2.url:9584"}
            ],
            statuses:"connect-statuses-cluster-a",
            configs:"connect-configs-cluster-a",
            offsets:"connect-offsets-cluster-a"
          }
        ]
      LENSES_SECURITY_MODE: BASIC
      # Secrets can also be passed as files. Check _examples/
      LENSES_SECURITY_GROUPS: |
        [
          {"name": "adminGroup", "roles": ["admin", "write", "read"]},
          {"name": "readGroup", "roles": ["read"]}
        ]
      LENSES_SECURITY_USERS: |
        [
          {"username": "admin", "password": "admin", "displayname": "Lenses Admin", "groups": ["adminGroup"]},
          {"username": "read", "password": "read", "displayname": "Read Only", "groups": ["readGroup"]}
        ]
    ports:
      - 9991:9991
      - 9102:9102
    volumes:
      - ./license.json:/license.json
    network_mode: host
A Kubernetes pod and service example is available at github.com/lensesio/lenses-docker/. More information about running Lenses inside Kubernetes is available in the kubernetes and helm section.