Configuration¶
Introduction¶
In this section, we will explore the Lenses configuration: how it is laid out on disk, the available options, which options are mandatory, and specific cases such as brokers that require authentication. It is the best place to start if you are new to Lenses and tasked with setting it up. Even if you install Lenses via Docker or Helm, the same settings can be applied via environment variables and YAML configuration files.
Lenses requires two configuration files, lenses.conf and security.conf:
lenses.conf - Here we store most of the configuration options, such as the connection details of your brokers or the port Lenses uses. You have to create this file for Lenses to work. For the complete list of configuration options, please refer to Options Reference.
security.conf - Here we configure the authentication module. For more information about authentication methods and authorization, refer to Security Configurations.
Our Docker image and Helm charts create these files automatically on start by reading environment variables, ConfigMaps, and secrets.
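For example, with the Docker image, each option maps to an environment variable. The convention sketched below (upper-case the option name and replace dots with underscores) is an assumption to illustrate the idea; check the Docker image documentation for the authoritative mapping:

```
# In lenses.conf:
lenses.port = 9991
# As a Docker environment variable (assumed mapping):
LENSES_PORT=9991
```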
Configuration Format¶
The Lenses configuration format is HOCON, a superset of JSON and Java properties files. No prior experience with HOCON is required; the examples provided with the Lenses archive and throughout the documentation are all you need to set up the software. As in JSON, please remember that string values need to be quoted, while numbers, true, and false can remain unquoted. A string value that contains only alphanumeric characters may also be left unquoted. For more information, please check the HOCON design document.
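To illustrate these rules with options used later in this guide:

```
# Numbers and booleans can stay unquoted
lenses.port = 9991
lenses.schema.registry.delete = false

# Strings are safest when quoted, especially when they contain
# special characters such as '://' or '/'
lenses.license.file = "/etc/lenses/license.json"
lenses.kafka.brokers = "PLAINTEXT://host1:9092"
```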
Quick Start¶
A typical example of lenses.conf for a Kafka cluster without authentication to the brokers looks like this:
# Set the ip:port for Lenses to bind to
lenses.ip = 0.0.0.0
lenses.port = 9991
# License file allowing connecting to up to N brokers
lenses.license.file = "/etc/lenses/license.json"
# Directory where Lenses stores local storage. Currently Data Policies are stored here.
# If omitted it will create a directory named 'storage' under the current directory.
# Write access is needed as well as surviving between upgrades.
lenses.storage.directory = "/var/lib/lenses/storage"
# Set up infrastructure end-points
# The more brokers you can add here, the better
lenses.kafka.brokers = "PLAINTEXT://host1:9092,PLAINTEXT://host2:9092,PLAINTEXT://host3:9092"
# Broker JMX Port
# lenses.kafka.metrics.default.port = 9581
# Schema Registry options
# lenses.schema.registry.urls = [
# {url: "http://host-1:8081", metrics:{url:"host-1:9582", type:"JMX"}},
# {url: "http://host-2:8081", metrics:{url:"host-2:9582", type:"JMX"}}
# ]
# Connect options
# lenses.kafka.connect.clusters = [
# {
# name: "dev",
# urls: [
# {url:"http://host-1:8083", metrics:{url:"host-1:9584", type:"JMX"}},
# {url:"http://host-2:8083", metrics:{url:"host-2:9584", type:"JMX"}}
# ],
# statuses: "connect-status",
# configs: "connect-configs",
# offsets: "connect-offsets"
# }
# ]
# Processor Mode & State dir options
# lenses.sql.execution.mode = "IN_PROC"
# lenses.sql.state.dir = "/tmp/lenses-sql-kstream-state"
The next snippet gives an example of a basic security.conf that adds an admin user. For the complete reference of available security options, check out Security Configurations.
# Security mode. Can be BASIC, LDAP, KERBEROS, CUSTOM_HTTP
lenses.security.mode = BASIC
# Security groups is a mandatory section for all security modes.
lenses.security.groups = [
{
name: "adminGroup",
roles: ["Admin", "DataPolicyWrite", "TableStorageWrite", "AlertsWrite"]
}
]
# Here you can set user accounts for the BASIC security mode.
lenses.security.users = [
{
username: "admin",
password: "admin",
displayname: "Lenses Admin",
groups: ["adminGroup"]
}
]
Basic Configuration¶
Let us explore the most pertinent sections of Lenses configuration.
Host and Port¶
During startup, Lenses binds to all available network interfaces on port 9991. To adjust these to custom values, set the lenses.ip and lenses.port options.
# Set the ip:port for Lenses to bind to
lenses.ip = 0.0.0.0
lenses.port = 9991
Enabling TLS¶
Lenses supports TLS termination, encrypting all HTTP connections. A Java keystore file with the private key and certificate pair is required to set up TLS. Optionally, you can tweak the protocols and ciphers offered to clients.
# Set the keystore location and passwords
lenses.ssl.keystore.location = "/path/to/keystore.jks"
lenses.ssl.keystore.password = "changeit"
lenses.ssl.key.password = "changeit"
# Optionally you can tweak the TLS version, algorithm and ciphers
# If you skip them, the default values will be used
#lenses.ssl.enabled.protocols = "TLSv1.2"
#lenses.ssl.cipher.suites = "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384"
The options lenses.ssl.keystore.location, lenses.ssl.keystore.password, and lenses.ssl.key.password are mandatory.
License¶
With your Lenses subscription or trial, you receive a license file. If you do not have a license yet, contact us here. This license file (license.json for this guide) is necessary for the application to start. Once you have uploaded it to the server that runs Lenses, update the configuration to point at it. Although an absolute file path is better and more secure, a relative path from the directory you run Lenses in will also work.
# License file allowing connecting to up to N brokers
lenses.license.file="license.json"
If you run Lenses under a specific user account, make sure that this account has the necessary permissions to read the license file.
Kafka Brokers¶
Setting up access to the Kafka Brokers is very important. The simplest case is when the brokers accept unauthenticated connections. In that case, only the lenses.kafka.brokers setting is required; it is the same as the bootstrap servers you would set for any Kafka client. Please make sure to add at least a few of your brokers to this list; do not settle for just one unless you have a single-broker installation.
lenses.kafka.brokers = "PLAINTEXT://host1:9092,PLAINTEXT://host2:9092"
Warning
It is important to set at least a few of your brokers here. If the brokers on this list are all down, then some parts of Lenses will fail to work properly.
Broker metrics¶
Lenses can take advantage of the Kafka Broker metrics to monitor the health of your cluster and show metrics and other information. Although it is not a hard requirement, allowing access to these metrics enables more functionality and a better experience in the web interface. Broker metrics may be exposed via JMX, optionally password protected, or via the Jolokia JMX-HTTP bridge using either the HTTP GET or POST mode.
In the most common case, the metrics are exposed via JMX. If Lenses is set up with Zookeeper access, it will discover the Brokers' JMX ports automatically, without any extra configuration. If access to Zookeeper is restricted, a common occurrence with managed cloud instances, you can provide the Brokers' JMX ports manually via configuration. If all your brokers listen for JMX connections on the same port, set the default metrics port option.
lenses.kafka.metrics.default.port = 9581
If the brokers listen on different JMX ports (a setup we advise against), or if the brokers' JMX endpoints are protected, you can pair Broker IDs with hosts and ports, like below:
lenses.kafka.metrics = {
ssl: true, # Optional, please make sure the remote JMX certificate
# is accepted by the Lenses truststore
user: "admin", # Optional, the remote JMX user
password: "admin", # Optional, the remote JMX password
type: "JMX",
port: [
{id:BROKER_ID_1, port:9581, host:"host1"},
{id:BROKER_ID_2, port:9581, host:"host2"}
]
}
In addition to JMX, Lenses supports reading broker metrics exposed via the Jolokia JMX-HTTP bridge. The Jolokia agent exposes the metrics via HTTP and provides two sets of APIs, based on GET or POST requests. The lenses.kafka.metrics.type option may be set to JOLOKIAG for the GET-based API or to JOLOKIAP for the POST-based API.
lenses.kafka.metrics = {
ssl: true, # Optional, please make sure the remote JMX certificate
# is accepted by the Lenses truststore
user: "admin", # Optional, the Jolokia user if required
password: "admin", # Optional, the Jolokia password if required
type: "JOLOKIAP" # 'JOLOKIAP' for the POST API, 'JOLOKIAG' for the GET API
default.port: 19999
}
If the brokers export their metrics on different ports (for instance, when a machine runs more than one Kafka Broker), use lenses.kafka.metrics.port to define the mapping.
lenses.kafka.metrics = {
ssl: true, # Optional, please make sure the remote JMX certificate
# is accepted by the Lenses truststore
user: "admin", # Optional, the Jolokia user if required
password: "admin", # Optional, the Jolokia password if required
type: "JOLOKIAP" # 'JOLOKIAP' for the POST API, 'JOLOKIAG' for the GET API
port: [
{id:BROKER_ID_1, port:9581, host:"host1"},
{id:BROKER_ID_2, port:9581, host:"host2"}
]
}
Broker Authentication¶
Connecting to authenticated brokers is a bit more involved, but it works like any Kafka client: if you have clients that already use authentication, you will have Lenses up and running in no time. Kafka Brokers may be set up for authentication via the Simple Authentication and Security Layer (SASL), SSL/TLS, or both. SASL most commonly uses GSSAPI (Kerberos); however, recent versions of Kafka added more SASL flavors, such as SCRAM.
When configuring the Kafka client of Lenses, it is important to remember that three modules require these settings: the main application's consumer client, the main application's producer client, and the Kafka client (both producer and consumer) of the Lenses SQL in-Kubernetes processors. If you do not use LSQL in Kubernetes, you can skip the related configuration sections.
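In practice, this means the same Kafka client option may need to be set under up to three prefixes. A schematic view, drawing on the examples that follow (the `<kafka-option>` placeholder stands for any Kafka client property):

```
# Main application consumer and producer clients
lenses.kafka.settings.consumer.<kafka-option> = ...
lenses.kafka.settings.producer.<kafka-option> = ...

# Kafka client of the in-Kubernetes SQL processors
lenses.kubernetes.processor.kafka.settings.<kafka-option> = ...
```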
Let us have a look at the various authentication scenarios.
SSL¶
If your Kafka cluster uses TLS certificates for authentication, set the broker protocol to SSL and then pass any keystore and truststore configuration to the consumer and producer settings by prefixing the relevant configuration keys with lenses.kafka.settings.:
lenses.kafka.brokers = "SSL://host1:9093,SSL://host2:9093"
lenses.kafka.settings.consumer.security.protocol = SSL
lenses.kafka.settings.consumer.ssl.truststore.location = "/var/private/ssl/client.truststore.jks"
lenses.kafka.settings.consumer.ssl.truststore.password = "changeit"
lenses.kafka.settings.consumer.ssl.keystore.location = "/var/private/ssl/client.keystore.jks"
lenses.kafka.settings.consumer.ssl.keystore.password = "changeit"
lenses.kafka.settings.consumer.ssl.key.password = "changeit"
lenses.kafka.settings.producer.security.protocol = SSL
lenses.kafka.settings.producer.ssl.truststore.location = "/var/private/ssl/client.truststore.jks"
lenses.kafka.settings.producer.ssl.truststore.password = "changeit"
lenses.kafka.settings.producer.ssl.keystore.location = "/var/private/ssl/client.keystore.jks"
lenses.kafka.settings.producer.ssl.keystore.password = "changeit"
lenses.kafka.settings.producer.ssl.key.password = "changeit"
If you are using TLS certificates only for encryption of data on the wire, you can omit the keystore settings:
lenses.kafka.brokers = "SSL://host1:9093,SSL://host2:9093"
lenses.kafka.settings.consumer.security.protocol = SSL
lenses.kafka.settings.consumer.ssl.truststore.location = "/var/private/ssl/client.truststore.jks"
lenses.kafka.settings.consumer.ssl.truststore.password = "changeit"
lenses.kafka.settings.producer.security.protocol = SSL
lenses.kafka.settings.producer.ssl.truststore.location = "/var/private/ssl/client.truststore.jks"
lenses.kafka.settings.producer.ssl.truststore.password = "changeit"
If your brokers’ CA certificate is embedded in the system-wide truststore, you can omit the truststore settings.
Important
If you use Lenses SQL processors in Kafka Connect, you have to make sure that your keystore and truststore files exist on the Connect worker nodes at the locations dictated by lenses.kafka.settings.consumer.ssl.truststore.location, lenses.kafka.settings.consumer.ssl.keystore.location, lenses.kafka.settings.producer.ssl.truststore.location, and lenses.kafka.settings.producer.ssl.keystore.location.
SASL/GSSAPI¶
For Lenses to access Kafka in an environment set up with Kerberos (SASL), you need to provide a JAAS file as in the example below. If your Kafka cluster is set up with an authorizer (ACLs), it is advised to use the same principal as the brokers, so that Lenses has superuser permissions.
KafkaClient {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="/path/to/keytab-file"
storeKey=true
useTicketCache=false
serviceName="kafka"
principal="principal@MYREALM";
};
/*
Optional section for authentication to zookeeper
Please also remember to set lenses.zookeeper.security.enabled=true
*/
Client {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="/path/to/keytab-file"
storeKey=true
useTicketCache=false
principal="principal@MYREALM";
};
Once the JAAS file is in place, add it to LENSES_OPTS before starting Lenses:
export LENSES_OPTS="-Djava.security.auth.login.config=/opt/lenses/jaas.conf"
Lenses SQL processors need their own JAAS file. If you use the same keytab for both Lenses and the processors, you can copy your jaas.conf file and only replace the paths to the keytab. For the Kubernetes processors, the keytab is always mounted under /mnt/secrets/kafka/keytab:
KafkaClient {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="/mnt/secrets/kafka/keytab"
storeKey=true
useTicketCache=false
serviceName="kafka"
principal="principal@MYREALM";
};
/*
Optional section for authentication to zookeeper
Please also remember to set lenses.zookeeper.security.enabled=true
*/
Client {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="/mnt/secrets/kafka/keytab"
storeKey=true
useTicketCache=false
principal="principal@MYREALM";
};
Last, set the security protocol and Kubernetes settings (if required) in the configuration file:
lenses.kafka.brokers = "SASL_PLAINTEXT://host1:9094,SASL_PLAINTEXT://host2:9094"
lenses.kafka.settings.consumer.security.protocol = SASL_PLAINTEXT
lenses.kafka.settings.producer.security.protocol = SASL_PLAINTEXT
lenses.kubernetes.processor.kafka.settings.security.protocol = SASL_PLAINTEXT
lenses.kubernetes.processor.jaas = "path/to/jaas-processors.conf"
lenses.kubernetes.processor.kafka.settings.keytab = "path/to/processor.keytab"
lenses.kubernetes.processor.krb5 = "/etc/krb5.conf"
If you use Lenses SQL processors in Kafka Connect, you have to configure your Connect workers with Kerberos as well. This will probably already be the case, but if not, add your JAAS file and keytab to the Connect worker nodes and export the Kerberos configuration in KAFKA_OPTS. Please note that you need to provide your JAAS file via KAFKA_OPTS and not as a Kafka Connect configuration:
KAFKA_OPTS="-Djava.security.auth.login.config=/path/to/jaas.conf"
Note
A system configured to work with Kerberos usually provides a system-wide Kerberos configuration file (krb5.conf) that points to the location of the KDC and includes other configuration options necessary to authenticate. If your system is missing this file, please contact your administrator. If you cannot set the system-wide configuration, you can provide a custom krb5.conf via LENSES_OPTS:
export LENSES_OPTS="-Djava.security.krb5.conf=/path/to/krb5.conf"
By default, the connection to Zookeeper remains unauthenticated. This only affects the Quota entries, which are written without any Zookeeper ACLs to protect them. The option lenses.zookeeper.security.enabled may be used to change this behavior, but in that case it is important to use the brokers' principal for Lenses. If Lenses is configured with a different principal, the brokers will not be able to manipulate the Quota entries and will fail to start. Please contact our support if you need help with this feature.
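A minimal sketch of enabling the authenticated Zookeeper connection, assuming the JAAS file already contains the Client section shown earlier:

```
# Authenticate the Zookeeper connection so that Quota entries
# are written with protecting ACLs
lenses.zookeeper.security.enabled = true
```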
SASL_SSL¶
In this security protocol, Kafka uses a SASL method for authentication and TLS certificates for encryption of data on the wire. As such the configuration is a combination of the SSL/TLS and SASL configurations.
Please provide Lenses with a JAAS file as described in the previous section and add it to LENSES_OPTS:
export LENSES_OPTS="-Djava.security.auth.login.config=/opt/lenses/jaas.conf"
Set Lenses to use SASL_SSL for its producer and consumer part. If your CA’s certificate is not part of the system-wide truststore, please provide Lenses with a truststore as well:
lenses.kafka.brokers = "SASL_SSL://host1:9096,SASL_SSL://host2:9096"
lenses.kafka.settings.consumer.security.protocol = SASL_SSL
lenses.kafka.settings.consumer.ssl.truststore.location = "/var/private/ssl/client.truststore.jks"
lenses.kafka.settings.consumer.ssl.truststore.password = "changeit"
lenses.kafka.settings.producer.security.protocol = SASL_SSL
lenses.kafka.settings.producer.ssl.truststore.location = "/var/private/ssl/client.truststore.jks"
lenses.kafka.settings.producer.ssl.truststore.password = "changeit"
For Lenses SQL processors in Kafka Connect, you will have to make sure the truststore is located at the same path as in lenses.kafka.settings.consumer.ssl.truststore.location and lenses.kafka.settings.producer.ssl.truststore.location (unless you use the default truststore) and, of course, set up Kafka Connect with Kerberos.
SASL/SCRAM¶
For Lenses to access Kafka in an environment set up with SCRAM authentication (SASL/SCRAM), you need to provide Lenses with a JAAS file as in the example below. If Lenses is used with an ACL-enabled cluster, it is advised to use the same principal as the brokers, so it has superuser permissions.
KafkaClient {
org.apache.kafka.common.security.scram.ScramLoginModule required
username="[USERNAME]"
password="[PASSWORD]";
};
Once the JAAS file is in place, add it to LENSES_OPTS before starting Lenses:
export LENSES_OPTS="-Djava.security.auth.login.config=/opt/lenses/jaas.conf"
Last, set the security protocol and mechanism in the configuration file:
lenses.kafka.brokers = "SASL_PLAINTEXT://host1:9092,SASL_PLAINTEXT://host2:9092"
lenses.kafka.settings.consumer.security.protocol=SASL_PLAINTEXT
lenses.kafka.settings.consumer.sasl.mechanism=SCRAM-SHA-256
lenses.kafka.settings.producer.security.protocol=SASL_PLAINTEXT
lenses.kafka.settings.producer.sasl.mechanism=SCRAM-SHA-256
An alternative to the jaas.conf file is to configure JAAS within the Lenses configuration (lenses.conf). The configuration format is HOCON, so multiline strings should be enclosed in triple quotes:
lenses.kafka.settings.consumer.sasl.jaas.config="""
org.apache.kafka.common.security.scram.ScramLoginModule required
username="[USERNAME]"
password="[PASSWORD]";"""
lenses.kafka.settings.producer.sasl.jaas.config="""
org.apache.kafka.common.security.scram.ScramLoginModule required
username="[USERNAME]"
password="[PASSWORD]";"""
Please notice that SASL/SCRAM is officially unsupported at this time for Lenses SQL processors in either Connect or Kubernetes modes, although it may work.
Zookeeper¶
Lenses can optionally use Zookeeper to autodetect the brokers' JMX ports; Lenses works normally without it. Zookeeper is required only if you want to manage quotas.
lenses.zookeeper.hosts = [
{
url: "ZK_HOST_1:2181"
},
{
url: "ZK_HOST_2:2181"
}
]
If your cluster is under a Zookeeper chroot, you must set this too.
# The Kafka Brokers' Zookeeper chroot if used
lenses.zookeeper.chroot = ""
Optionally, you can enable JMX (or Jolokia), which is used to display node details in the Services screen. The configuration is as follows:
lenses.zookeeper.hosts = [
{
url: "ZK_HOST_1:2181",
metrics: {
ssl: true, # Optional, please make sure the remote JMX/HTTP
# certificate is accepted by the Lenses truststore
user: "admin", # Optional, the remote JMX/HTTP user
password: "admin", # Optional, the remote JMX/HTTP password
type: "JMX", # One of 'JMX', 'JOLOKIAP' (POST), 'JOLOKIAG' (GET)
url: "ZK_HOST_1:9585"
}
},
{
url: "ZK_HOST_2:2181",
metrics: {
ssl: true, # Optional, please make sure the remote JMX/HTTP
# certificate is accepted by the Lenses truststore
user: "admin", # Optional, the remote JMX/HTTP user
password: "admin", # Optional, the remote JMX/HTTP password
type: "JMX", # One of 'JMX', 'JOLOKIAP' (POST), 'JOLOKIAG' (GET)
url: "ZK_HOST_2:9585"
}
}
]
If your Zookeeper is protected with Kerberos, please check zookeeper security.
Schema Registry¶
If you use the AVRO format to serialize records stored in Kafka, then you will most likely use a Schema Registry implementation. The most common ones come from Confluent and HortonWorks. Lenses supports both.
Confluent¶
In the simplest scenario, you only need to provide a list of your Schema Registry servers. Lenses also monitors the health of your nodes; for this check to work properly, a complete list of your Schema Registry servers is required.
lenses.schema.registry.urls = [
{
url: "http://SR_HOST_1:8081"
},
{
url: "http://SR_HOST_2:8081"
}
]
The Confluent Registry allows schema deletion, so it is possible to enable access to this functionality.
lenses.schema.registry.delete = false
Note
It is necessary to add the scheme (http:// or https://) in front of the Schema Registry address.
Optionally, you can enable JMX (or Jolokia), which is used to display node details in the Services screen. The configuration is as follows:
lenses.schema.registry.urls = [
{
url: "http://SR_HOST_1:8081",
metrics: { # Optional section
ssl: true, # Optional, please make sure the remote JMX/HTTP
# certificate is accepted by the Lenses truststore
user: "admin", # Optional, the remote JMX/HTTP user
password: "admin", # Optional, the remote JMX/HTTP password
type: "JMX", # One of 'JMX', 'JOLOKIAP' (POST), 'JOLOKIAG' (GET)
url: "SR_HOST_1:9583"
}
},
{
url: "http://SR_HOST_2:8081",
metrics: { # Optional section
ssl: true, # Optional, please make sure the remote JMX/HTTP
# certificate is accepted by the Lenses truststore
user: "admin", # Optional, the remote JMX/HTTP user
password: "admin", # Optional, the remote JMX/HTTP password
type: "JMX", # One of 'JMX', 'JOLOKIAP' (POST), 'JOLOKIAG' (GET)
url: "SR_HOST_2:9583"
}
}
]
The Confluent Registry stores all schemas in a Kafka topic. Lenses consumes this topic in order to track changes. If the topic has the default name (i.e. _schemas), then no action is required in Lenses. Otherwise, you should set the Schema Registry topic name.
Hortonworks¶
The HortonWorks Schema Registry is different in that it needs the full API path, does not support monitoring, does not offer schema deletion, and uses a non-Kafka backend (such as an RDBMS), so Lenses cannot track live changes. Furthermore, not all serialization modes are compatible with Confluent's, hence the need to use the HortonWorks serde libraries.
To configure this Registry, enable the appropriate mode and provide the API path in full.
lenses.schema.registry.mode = HORTONWORKS
lenses.schema.registry.urls = [
{url:"http://SR_HOST_1:9090/api/v1"},
{url:"http://SR_HOST_2:9090/api/v1"}
]
Authentication¶
Depending on the Schema Registry mode, Lenses internally uses the AVRO serde classes provided by either Confluent or HortonWorks. As such, the authentication configuration reflects the options of these classes.
There are three places in Lenses that AVRO configuration is used:
- The Schemas management screen, where you can view and manage your schemas
- The Lenses application, where it is used in the table SQL engine for data browsing, and in the in-process and Connect execution modes of the streaming SQL engine
- The streaming SQL engine in Kubernetes where you can run complex queries and perform stream processing
Each of these three modules needs to be configured for authentication to the Schema Registry. If you do not use Lenses SQL processors in Kubernetes, you may skip the corresponding settings.
BASIC¶
The Confluent Schema Registry offers support for Basic authentication. To use it, set these options in addition to the rest of your Registry specific configuration.
lenses.schema.registry.auth = "USER_INFO"
lenses.schema.registry.username = "USERNAME"
lenses.schema.registry.password = "PASSWORD"
lenses.kafka.settings.producer.basic.auth.credentials.source = USER_INFO
lenses.kafka.settings.producer.basic.auth.user.info = "USERNAME:PASSWORD"
lenses.kafka.settings.consumer.basic.auth.credentials.source = USER_INFO
lenses.kafka.settings.consumer.basic.auth.user.info = "USERNAME:PASSWORD"
lenses.kubernetes.processor.kafka.settings.basic.auth.credentials.source = USER_INFO
lenses.kubernetes.processor.kafka.settings.basic.auth.user.info = "USERNAME:PASSWORD"
lenses.kubernetes.processor.schema.registry.settings.basic.auth.credentials.source = USER_INFO
lenses.kubernetes.processor.schema.registry.settings.basic.auth.user.info = "USERNAME:PASSWORD"
Warning
Please note that although you could add the BASIC authentication username and password to the Schema Registry URL, it is a bad idea, as Lenses displays this URL in a few places.
Kerberos¶
The HortonWorks Schema Registry offers support for Kerberos (SPNEGO) authentication. The setup is more involved than BASIC authentication.
As with the brokers, a JAAS file is needed to set up Lenses for Kerberos. If you already have a JAAS file in place for connecting to the brokers, then instead of creating a new one, append the snippet below to your current file.
RegistryClient {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="/path/to/keytab-file"
storeKey=true
useTicketCache=false
principal="principal@MYREALM";
};
Once the JAAS file is in place, add it to LENSES_OPTS before starting Lenses:
export LENSES_OPTS="-Djava.security.auth.login.config=/opt/lenses/jaas.conf"
Lenses SQL processors, when run in Kubernetes, need their own JAAS file. If you use the same keytab for both Lenses and the processors, you can copy your jaas.conf file and only replace the paths to the keytab. For the Kubernetes processors, the Schema Registry keytab is always mounted under /mnt/secrets/registry/keytab:
RegistryClient {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="/mnt/secrets/registry/keytab"
storeKey=true
useTicketCache=false
principal="principal@MYREALM";
};
If you are running the SQL processors in Kafka Connect, then you have to configure your Connect workers with Kerberos as well. This will probably already be the case, but if not, add your JAAS file and keytab to the Connect worker nodes and export the Kerberos configuration in KAFKA_OPTS:
KAFKA_OPTS="-Djava.security.auth.login.config=/path/to/jaas.conf"
Note
A system configured to work with Kerberos usually provides a system-wide Kerberos configuration file (krb5.conf) that points to the location of the KDC and includes other configuration options necessary to authenticate. If your system is missing this file, please contact your administrator. If you cannot set the system-wide configuration, you can provide a custom krb5.conf via LENSES_OPTS:
export LENSES_OPTS="-Djava.security.krb5.conf=/path/to/krb5.conf"
Once your JAAS files are ready, proceed to configure lenses.conf for access to the Kerberized Schema Registry.
# Enable Kerberos Authentication for schema registry
lenses.schema.registry.kerberos=true
# Define the Schema Registry principal. Usually principals for HTTP services
# are in the form of 'HTTP/HOSTNAME@REALM'. The HortonWorks Registry option
# expects it written in the form 'http@HOSTNAME'.
lenses.schema.registry.principal="http@<REGISTRY-HOSTNAME>"
# Define the principal used by Lenses to access the registry
lenses.schema.registry.service.name="principal@MYREALM"
# Define the keytab
lenses.schema.registry.keytab="path/to/keytab"
# Options for Lenses SQL processors in Kubernetes. Please note that if you use
# SASL_PLAINTEXT or SASL_SSL for the Kafka Brokers, you have already set the
# first two options. You should merge the JAAS files, whilst the krb5.conf is
# a global configuration file.
lenses.kubernetes.processor.jaas="path/to/jaas.conf"
lenses.kubernetes.processor.krb5="/etc/krb5.conf"
lenses.kubernetes.processor.schema.registry.keytab="path/to/keytab"
Kafka Connect¶
You can add your Kafka Connect clusters to Lenses so you can manage your connectors (create, remove, update), monitor them (ephemeral metrics), detect issues (e.g., a failed task), view them in the topology view, and, of course, scale Lenses SQL processors. To set this up, you need to provide a list of Kafka Connect nodes (workers) and the topics they use for storing their configuration, state, and source offsets. Additionally, if you want to monitor your nodes and get alerts when a worker is offline, the list of workers should be exhaustive (include all your workers).
lenses.kafka.connect.clusters = [
{
name: "dev",
urls: [
{
url:"http://CONNECT_HOST_1:8083"
},
{
url:"http://CONNECT_HOST_2:8083"
}
],
statuses: "connect-status",
configs: "connect-configs",
offsets: "connect-offsets"
}
]
Note
The cluster name cannot contain dots (.) or dashes (-).
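For example, with hypothetical cluster names:

```
# Valid cluster names
name: "dev"
name: "production_eu"

# Invalid: contain a dot or a dash
# name: "dev.cluster"
# name: "production-eu"
```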
Warning
If Lenses fails to find a Connect cluster defined in lenses.conf during startup, it will exit immediately.
Optionally, you can enable JMX (or Jolokia) which will be used to provide additional information about your Connect clusters in the Services screen and per-connector metrics in the Connectors and Topology screens. In that case the configuration will be as follows:
lenses.kafka.connect.clusters = [
{
name: "dev",
urls: [
{
url:"http://CONNECT_HOST_1:8083",
metrics: { # Optional section
ssl: true, # Optional, please make sure the remote JMX/HTTP
# certificate is accepted by the Lenses truststore
user: "admin", # Optional, the remote JMX/HTTP user
password: "admin", # Optional, the remote JMX/HTTP password
type: "JMX", # One of 'JMX', 'JOLOKIAP' (POST), 'JOLOKIAG' (GET)
url: "CONNECT_HOST_1:9584"
}
},
{
url:"http://CONNECT_HOST_2:8083",
metrics: { # Optional section
ssl: true, # Optional, please make sure the remote JMX/HTTP
# certificate is accepted by the Lenses truststore
user: "admin", # Optional, the remote JMX/HTTP user
password: "admin", # Optional, the remote JMX/HTTP password
type: "JMX", # One of 'JMX', 'JOLOKIAP' (POST), 'JOLOKIAG' (GET)
url: "CONNECT_HOST_2:9584"
}
}
],
statuses: "connect-status",
configs: "connect-configs",
offsets: "connect-offsets"
}
]
Authentication¶
If your Connect cluster requires authentication, additional configuration is required. Alternatively, you can protect your Connect cluster behind a firewall and let your users manage it only via Lenses.
BASIC¶
To configure BASIC authentication to Connect, add the connection details in lenses.kafka.connect.clusters:
lenses.kafka.connect.clusters = [
{
name: "dev",
username: "USERNAME",
password: "PASSWORD",
auth: "USER_INFO",
urls: [
{
url:"http://CONNECT_HOST_1:8083",
metrics: { # Optional section
ssl: true, # Optional, please make sure the remote JMX/HTTP
# certificate is accepted by the Lenses truststore
user: "admin", # Optional, the remote JMX/HTTP user
password: "admin", # Optional, the remote JMX/HTTP password
type: "JMX", # One of 'JMX', 'JOLOKIAP' (POST), 'JOLOKIAG' (GET)
url: "CONNECT_HOST_1:9584"
}
},
{
url:"http://CONNECT_HOST_2:8083",
metrics: { # Optional section
ssl: true, # Optional, please make sure the remote JMX/HTTP
# certificate is accepted by the Lenses truststore
user: "admin", # Optional, the remote JMX/HTTP user
password: "admin", # Optional, the remote JMX/HTTP password
type: "JMX", # One of 'JMX', 'JOLOKIAP' (POST), 'JOLOKIAG' (GET)
url: "CONNECT_HOST_2:9584"
}
}
],
statuses: "connect-status",
configs: "connect-configs",
offsets: "connect-offsets"
}
]
Warning
Please note that although you could add the BASIC authentication username and password to the Connect URL, it is a bad idea, as Lenses displays this URL in a few places.
Lenses Storage¶
Persistent data is stored by default under the storage/ directory where Lenses runs from. It is strongly advised to explicitly set where persistent data will be stored, to make sure the Lenses process has permission to read and write files in this directory, and to put an upgrade and backup policy in place.
To configure the storage directory, set this option:
lenses.storage.directory = "/path/to/persistent/data/directory"
Lenses SQL Processors¶
Lenses SQL Processors are a great way to do stream processing using the Lenses SQL dialect. While configuring Lenses for access to the Kafka Brokers and Schema Registry, we already covered part of the processors’ configuration. Besides the Kafka client module of the processors, there are a few more settings to adjust.
IN_PROC¶
Out of the box, Lenses SQL Processors run in the same JVM process as Lenses. We call this mode IN_PROC. It is a convenient setup to get a feel of the Processors’ functionality without any hassle. The only thing you need to set up is a directory which Lenses can use to store some ephemeral data.
lenses.sql.state.dir = "/tmp/lenses-sql-kstream-state"
CONNECT¶
Connect mode lets you run Lenses SQL Processors within your Kafka Connect cluster(s). A Lenses SQL Connector is provided with your Lenses subscription (not part of the trial), which you have to add to your Connect cluster like any other connector. For more details on adding the connector, see the Lenses SQL Connector deployment.
Once you load the connector to one or more of your Connect clusters, Lenses can automatically detect it. Then you only have to set Lenses to use CONNECT mode and also set a directory which the connector (not Lenses) can use to write ephemeral data.
lenses.sql.execution.mode = CONNECT
lenses.sql.state.dir = "/tmp/lenses-sql-kstream-state"
KUBERNETES¶
Kubernetes mode is the most scalable one for Lenses SQL Processors. Just type your streaming SQL query and fire up as many pods as you like in your Kubernetes cluster. The first step to enable this mode is to add the Lenses Container Registry key to your Kubernetes cluster. For more information on how to do this, check how to set up Lenses SQL in Kubernetes.
Once your cluster is ready, you only have to provide Lenses with a Kubernetes configuration file so that it can access the cluster and the account to use.
lenses.sql.execution.mode = KUBERNETES
lenses.kubernetes.config.file = "/home/lenses/.kube/config"
lenses.kubernetes.service.account = "default"
AlertManager¶
Lenses continuously monitors your cluster and informs you about service degradation, consumer group lag, and various operational issues. These actionable events are considered alerts and are forwarded to your AlertManager (AM) installation, so they can be deduplicated, grouped, and routed accordingly. Check out the AlertManager integration for examples on how to configure your AlertManager for Lenses’ alerts.
Note
Not all alerts are actionable, and only actionable ones are forwarded to AlertManager. AM expects an alert that can be raised and later brought down once fixed. As an example, an offline broker raises an alert, and once the broker comes back online, the alert is dismissed.
In the configuration file, you have to provide a list of your AlertManager endpoints. The way that AlertManager clusters work is that an application sends the alert to all nodes and the nodes themselves make sure they will process the alert at least once.
lenses.alert.manager.endpoints = "http://ALERTMANAGER_HOST_1:9094,http://ALERTMANAGER_HOST_2:9094,..."
If you have multiple Lenses installations, it is a good idea to set the AM source to something unique, so that AlertManager can distinguish between your different installations. It is also useful (but optional) to set the Lenses address in the AM generator URL, so that you can navigate to the Lenses Web Interface through the alerts’ links. This URL should be the address your users use to access Lenses.
lenses.alert.manager.source = "Lenses"
lenses.alert.manager.generator.url = "http://LENSES_HOST:9991"
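On the AlertManager side, the source value can then be used to route Lenses alerts to a dedicated receiver. The sketch below is illustrative only: it assumes the configured source arrives as an alert label named source, and the receiver name kafka-ops is a placeholder for your own receiver.

```yaml
# Sketch of an AlertManager routing rule for Lenses alerts.
# Assumes the Lenses source value is available as an alert label "source";
# receiver names are illustrative.
route:
  receiver: default
  routes:
    - match:
        source: Lenses
      receiver: kafka-ops
receivers:
  - name: default
  - name: kafka-ops
```

The receiver itself (Slack, email, PagerDuty, etc.) is configured in AlertManager as usual.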
Grafana¶
If you have set up the Lenses Monitoring Suite, or have your own monitoring solution in place, you can set the Grafana address (or your own monitoring tool’s address) in Lenses, so you get a link to it from the web interface.
lenses.grafana = "http://GRAFANA_HOST:3000"
Advanced Configuration¶
Lenses Storage Topics¶
Lenses keeps a portion of its configuration and data inside Kafka Topics. You can find these under the System Topics category. They retain the information about your cluster, metrics, auditing, processors and more. When the application starts, it checks their existence and creates them if needed. Although usually not needed, you are allowed to override the default names for these topics:
# topics created on start-up that Lenses uses to store state
lenses.topics.audits = "_kafka_lenses_audits"
lenses.topics.cluster = "_kafka_lenses_cluster"
lenses.topics.metrics = "_kafka_lenses_metrics"
lenses.topics.profiles = "_kafka_lenses_profiles"
lenses.topics.processors = "_kafka_lenses_processors"
lenses.topics.connectors = "_kafka_lenses_connectors"
lenses.topics.alerts.storage = "_kafka_lenses_alerts"
lenses.topics.lsql.storage = "_kafka_lenses_lsql_storage"
lenses.topics.alerts.settings = "_kafka_lenses_alerts_settings"
lenses.topics.metadata = "_kafka_lenses_topics_metadata"
lenses.topics.external.topology = "__topology"
lenses.topics.external.metrics = "__topology__metrics"
Warning
These topics are created and managed by Lenses automatically. Do not create them by hand as they may need compaction enabled or a certain number of partitions. If you are using ACLs, only allow Lenses to manage these topics.
ACLs¶
If your Kafka cluster is set up with an authorizer (ACLs), Lenses should have at least permission to manage and access its storage topics. Make sure to set the principal and host appropriately:
kafka-acls \
--authorizer-properties zookeeper.connect=ZOOKEEPER_HOST:2181 \
--add \
--allow-principal User:Lenses \
--allow-host lenses-host \
--operation Read \
--operation Write \
--operation Alter \
--topic topic
Lenses also needs access to certain third-party system topics in order to work:
- __consumer_offsets
- This is the internal Kafka topic where consumer offsets are stored. Lenses needs read access to this topic in order to track consumer lag.
- _schemas
- If you use Confluent’s Schema Registry, then Lenses needs read access to the topic where the schemas are stored in order to track changes in real-time.
- _connect-configs, _connect-offsets, _connect-status
- If you use Kafka Connect, then Lenses needs read access to the topics that your Connect cluster(s) use to store their state, in order to provide richer information about the connector instances.
Topology¶
The Topology screen offers a window to your data flows, a high-level view of how your data moves in and out of Kafka. Lenses builds the topology graph from your connectors, SQL processors and applications that include our topology libraries.
To build the graph, some information is needed. The Lenses SQL processors (Kafka
Streams applications written with Lenses SQL) are always managed automatically, so
you don’t have to do anything. Same goes for the more than 45 Kafka Connect
connectors we support out of the box. For any other connector it’s as simple as
adding it to lenses.connectors.info
:
lenses.connectors.info = [
{
class.name = "org.apache.kafka.connect.file.FileStreamSinkConnector"
name = "File"
instance = "file"
sink = true
extractor.class = "com.landoop.kafka.lenses.connect.SimpleTopicsExtractor"
icon = "file.png"
description = "Store Kafka data into files"
author = "Apache Kafka"
},
...
]
Your custom applications, on the other hand, need to embed our topology libraries. For more information about the topology setup, for both connectors and external applications, please have a look at the Topology Configuration.
Consumer Groups Lag¶
Lenses exposes the Kafka Consumer Groups lag via a Prometheus metrics endpoint
within the application. The default Prometheus path is used (/metrics
), so
you can add the Lenses address as is to your Prometheus targets. No additional
configuration is required.
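Hooking this up takes a single entry in your Prometheus configuration. The sketch below is illustrative: the job name is arbitrary and LENSES_HOST:9991 stands for your Lenses address; the default metrics_path of /metrics matches what Lenses exposes.

```yaml
# Minimal Prometheus scrape configuration sketch for the Lenses metrics
# endpoint. The job name is illustrative; LENSES_HOST:9991 is your Lenses
# address. metrics_path defaults to /metrics, which is what Lenses serves.
scrape_configs:
  - job_name: 'lenses'
    static_configs:
      - targets: ['LENSES_HOST:9991']
```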
Slack Integration¶
Lenses can post alerts directly to Slack. We strongly advise using the AlertManager integration instead and posting alerts to Slack via AM, as alerts without deduplication can cause too much noise.
To integrate Lenses alerting with Slack, add an incoming webhook. Select the #channel where Lenses can post alerts and copy the Webhook URL:
lenses.alert.plugins.slack.enabled = true
lenses.alert.plugins.slack.webhook.url = "https://hooks.slack.com/services/SECRET/YYYYYYYYYYY/XXXXXXXX"
lenses.alert.plugins.slack.username = "lenses"
lenses.alert.plugins.slack.channel = "#alerts"
Kafka ACLs¶
You can manage your Kafka ACLs through Lenses. If you are running Kafka 1.0 or later, you do not have to set anything in the configuration file. If your brokers are configured with an authorizer, Lenses will allow you to see and manage ACLs.
When using Kafka 0.11 or older, you have to switch to ACL management via Zookeeper. To do that, Lenses should be configured with access to Zookeeper, and the ACLs broker mode set to false:
lenses.acls.broker.mode = false
Note
The ACL management functionality is tested with the default Kafka authorizer class.
Producer & Consumer¶
Lenses interacts with your Kafka Cluster via Kafka Consumers and
Producers. There may be scenarios where the Consumer and/or the Producer need to
be tweaked. The settings of each are kept separate; prefix any option described
in the Kafka documentation for the new consumer with
lenses.kafka.settings.consumer
and for the producer with
lenses.kafka.settings.producer
. As an example:
lenses.kafka.settings.consumer.isolation.level = "read_committed"
lenses.kafka.settings.producer.acks = "all"
The Lenses SQL processors, when used in Kubernetes, have separate Kafka client settings under the prefix lenses.kubernetes.processor.kafka.settings. These settings apply to both the consumer and the producer. As an example:
lenses.kubernetes.processor.kafka.settings.acks = "all"
Warning
Changing the default settings of the Kafka client may lead to unexpected issues. Furthermore, some settings are set dynamically at runtime; for example, IN_PROC SQL processors get their own group.id. We encourage you to visit our community or consult with us via one of the available channels if you need help tweaking Lenses.
System Topics¶
System topics are a convention used by Lenses to separate topics created by users from topics created by software, such as Lenses and Kafka Connect. Lenses shows system topics in a separate tab in the Topics screen to minimize the users’ cognitive load.
The default setting includes the Lenses system topics, the SQL processors’ KStreams topics, consumer offsets, schemas, and transactions. You can add topics of your own as well, but it is advised to keep the default entries too, so these topics do not end up among your user topics. The setting takes prefixes; for example, the lsql_ item matches all topics starting with lsql_.
lenses.kafka.control.topics = [
"connect-configs",
"connect-offsets",
"connect-status",
"connect-statuses",
"_schemas",
"__consumer_offsets",
"_kafka_lenses_",
"lsql_",
"__transaction_state"
]
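The prefix semantics above can be sketched in a few lines. This is an illustration of the matching rule only; is_system_topic is a hypothetical helper, not a Lenses API.

```python
# Sketch of the prefix matching used by lenses.kafka.control.topics.
# is_system_topic is a hypothetical helper for illustration, not a Lenses API.
CONTROL_TOPICS = [
    "connect-configs", "connect-offsets", "connect-status", "connect-statuses",
    "_schemas", "__consumer_offsets", "_kafka_lenses_", "lsql_",
    "__transaction_state",
]

def is_system_topic(topic: str) -> bool:
    """A topic counts as a system topic when its name starts with any entry."""
    return any(topic.startswith(prefix) for prefix in CONTROL_TOPICS)
```

For example, lsql_orders and _kafka_lenses_audits would land in the System Topics tab, while a plain orders topic stays with the user topics.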
Tuning Lenses¶
Lenses comes tuned out of the box, but as every production setup may be different, there are many advanced options to tweak the behavior of the software. These settings include the connections to the Kafka services and JMX ports, the web server and the web socket part of Lenses, the SQL engine settings, the frequency of various update actions — like how often we update the consumers — and many more.
For a list of the advanced options, please check out the Options Reference
Table and also have a look at the lenses.conf.sample
file
that comes with the Lenses archive or under the
/opt/lenses/lenses.conf.sample
path in our Docker images.
Our recommendation is to install our software with the default settings and only go to the advanced section if you have a particular reason. Changing them without a good reason can lead to unexpected behavior.
We encourage you to visit our community or consult with us via one of the available channels if you need help or advice tweaking Lenses.
Runtime Configuration¶
Java Options¶
Lenses runs on an embedded Java Virtual Machine (JVM). You can tune it like any JVM-based application; we made sure to follow the same conventions you see throughout the Kafka ecosystem. This means there are five environment variables you can use: LENSES_OPTS, LENSES_HEAP_OPTS, LENSES_JMX_OPTS, LENSES_LOG4J_OPTS and LENSES_PERFORMANCE_OPTS. Let us see them in detail:
- LENSES_OPTS
- This variable should be used for generic settings, such as the Kerberos configuration (e.g. SASL/GSSAPI authentication to the Brokers). Please note that in our Docker image we add a Java agent to this option (in addition to your settings) to export Lenses metrics in Prometheus format.
- LENSES_HEAP_OPTS
- Here you can set options for the JVM heap. The default setting is -Xmx3g -Xms512m, which sets the heap size between 512MB and 3GB. It will serve you well even for larger clusters. It is possible to lower the upper limit (3GB) if needed; for our Lenses Box, as an example, we set it at just 1.2GB. If you use many Lenses SQL processors in IN_PROC mode or your cluster has more than 3000 partitions, you should increase it.
- LENSES_JMX_OPTS
- This variable can be used to tweak the JMX options that the JVM offers, such as allowing remote access. Have a look at the Metrics Section for more information.
- LENSES_LOG4J_OPTS
- This variable can be used to tweak Lenses logging. Please note that Lenses uses the Logback library for logging. For more information about this, check the Logging section.
- LENSES_PERFORMANCE_OPTS
- Here you can tune the JVM. Our default settings should serve you well:
-server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+DisableExplicitGC -Djava.awt.headless=true
Logging¶
Lenses uses the Logback framework for logging. For its configuration, upon startup,
it looks for a file named logback.xml
, first inside the current directory
(the directory you ran Lenses from), then at /etc/lenses/
and last in the
Lenses installation directory. The first one found (in the above order) is
used. It will also be printed in the startup logs, so you know which logback
configuration file is in use. This is useful, because the application constantly
monitors this file for changes, so you can edit it and Lenses will automatically
reload it without the need to restart anything.
To use a file at a custom location, set the LENSES_LOG4J_OPTS
environment
variable as in the example:
export LENSES_LOG4J_OPTS="-Dlogback.configurationFile=file:mylogback.xml"
Lenses scans the log configuration file for changes every 30 seconds.
Inside the installation directory, there is also a logback-debug.xml
file,
where we set the default logging level to DEBUG. You can use this to quickly increase the logging verbosity.
Tip
For convenience, Lenses offers a basic log viewer within the web interface.
Once logged into Lenses, visit http://LENSES_HOST/lenses/#/logs
to check
it out.
Log Level¶
The default log level is set to INFO, except for some 3rd-party classes we feel are too verbose at this level. You can use the logback-debug.xml configuration to quickly switch to DEBUG.
For fine-grained control, you can edit the logback.xml
file and adjust the
global or per class log level.
The default logger levels are:
<logger name="com.landoop" level="INFO"/>
<logger name="io.lenses" level="INFO"/>
<logger name="akka" level="INFO"/>
<logger name="io.confluent.kafka.serializers.KafkaAvroDeserializerConfig" level="WARN"/>
<logger name="io.confluent.kafka.serializers.KafkaAvroSerializerConfig" level="WARN"/>
<logger name="org.apache.calcite" level="OFF"/>
<logger name="org.apache.kafka" level="WARN"/>
<logger name="org.apache.kafka.clients.admin.AdminClientConfig" level="ERROR"/>
<logger name="org.apache.kafka.clients.consumer.internals.AbstractCoordinator" level="WARN"/>
<logger name="org.apache.kafka.clients.consumer.ConsumerConfig" level="ERROR"/>
<logger name="org.apache.kafka.clients.producer.ProducerConfig" level="ERROR"/>
<logger name="org.apache.kafka.clients.NetworkClient" level="ERROR"/>
<logger name="org.apache.kafka.common.utils.AppInfoParser" level="ERROR"/>
<logger name="org.apache.zookeeper" level="WARN"/>
<logger name="org.reflections" level="WARN"/>
<logger name="org.I0Itec.zkclient" level="WARN"/>
<logger name="com.typesafe.sslconfig.ssl.DisabledComplainingHostnameVerifier" level="ERROR"/>
<root level="INFO">...</root>
Log Format¶
All the log entries are written to the output using the following pattern: %d{ISO8601} %-5p [%c{2}:%L] %m%n. You can adjust this inside logback.xml to match your organization’s defaults.
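For reference, a minimal logback.xml sketch wiring this pattern into a console appender is shown below. The appender layout here is illustrative; the logback.xml shipped with Lenses is the authoritative starting point. The scan attributes correspond to the 30-second reload behavior described above.

```xml
<!-- Illustrative logback.xml sketch: the default Lenses log pattern wired
     into a console appender. Appender name and layout are placeholders. -->
<configuration scan="true" scanPeriod="30 seconds">
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{ISO8601} %-5p [%c{2}:%L] %m%n</pattern>
    </encoder>
  </appender>
  <root level="INFO">
    <appender-ref ref="STDOUT"/>
  </root>
</configuration>
```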
Log Location¶
By default Lenses logs both to stdout and to files inside the directory it
runs from, under logs/
. This may also be configured inside logback.xml
.
The stdout output can be integrated with any log collection infrastructure you may have in place and is useful with containers as well. It follows the Twelve-Factor App approach to logs.
On the other hand, the file logs are separated into three files: lenses.log
,
lenses-warn.log
and metrics.log
. The first one contains all logs and is the
same as the stdout
. The second contains only messages at level WARN and
above. The third one contains timing metrics for Lenses operations and can be
useful for debugging. If you ever need to file a bug report, we may ask you
for any of these files (in whole or part) to be able to debug your issue.
Lenses takes care of the log rotation for these files.
Metrics¶
JMX Metrics¶
Lenses runs on the JVM; it is possible to expose a JMX endpoint or use a java
agent of your choosing, such as Prometheus’ jmx_exporter or Jolokia’s agent to
monitor it. The JMX endpoint is managed by the lenses.jmx.port
option. Leave
it empty to disable JMX.
The most interesting information you can get from JMX is Lenses’ JVM usage (e.g. CPU, memory, GC) and the metrics of the Kafka clients Lenses uses internally.
It is often the case with JMX that you need to tune it further for remote access. As we have seen, this is done via the LENSES_JMX_OPTS environment variable. Below is an example of how you can configure it for remote access; adjust the hostname to reflect your server’s hostname.
LENSES_JMX_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.local.only=false -Djava.rmi.server.hostname=[HOSTNAME]"
Prometheus’ Agent¶
The Lenses Monitoring Suite is a reference architecture based on Prometheus and Grafana. As part of your Lenses subscription, you get access to resources such as templates and dashboards to assist in your implementation.
In the monitoring context, Lenses is considered a Kafka client, like any of
your Kafka applications. You can use the resources provided in the monitoring
suite (jmx_exporter build and configuration) to enable the Prometheus metrics
endpoint via the LENSES_OPTS
environment variable:
export LENSES_OPTS="-javaagent:/path/to/jmx_exporter/fastdata_agent.jar=9102:/path/to/jmx_exporter/client.yml"
If you use your own jmx_exporter build and templates, the process is the same; just substitute your own files for ours.
Our Docker image (landoop/lenses
) sets up the Prometheus endpoint
automatically. You only have to expose port number 9102
to access it.
Directories¶
Lenses needs write access to certain directories. Additionally, it needs a temporary directory where code execution is permitted.
Write Access¶
Unless configured otherwise, Lenses needs write access inside the directory it is
running from (WorkingDirectory in SystemD) and /tmp
.
- logs/
- In this directory two kinds of files are stored: log files and the state of the Lenses SQL processors (when in in-process mode). Both are safe to delete, although if you delete the processors’ state, Lenses (the KStreams framework, more specifically) will need to rebuild it. To change the log files’ location, edit the logback.xml file inside the Lenses installation directory, or copy it into the run directory and edit it there. To change the location of the processors’ state directory, use the lenses.sql.state.dir option.
- storage/
- In this directory Lenses stores configuration. Currently, the Data Policies are stored here, in an H2 database. To change this directory, use the lenses.storage.directory option. This directory needs to be backed up and survive upgrades.
- tmp/
- In this directory temporary files are stored, like JNI shared libraries. If Lenses fails to start with an error like Failed to read data, please try to remove the directory /tmp/vlxjre. Code execution should be allowed in this directory, as required by the JNI libraries.
JNI libraries and Code Execution¶
Lenses and Kafka itself use two common Java libraries that take advantage of JNI: the Snappy library and the RocksDB library. JNI uses native libraries (.so files in our case) that it extracts inside /tmp. Because native libraries run as machine code directly on the host, the filesystem they are extracted to must allow code execution. In some enterprise setups the /tmp directory is mounted with the noexec option, leading to problems.
Apart from the obvious solution of mounting /tmp without noexec, you can configure Lenses to use a different temporary directory where code can be executed. For Snappy, the org.xerial.snappy.tempdir option controls the temp directory. For RocksDB, the temporary directory of the JVM that runs Lenses needs to be adjusted via the java.io.tmpdir option.
LENSES_OPTS="-Dorg.xerial.snappy.tempdir=/path/to/exec/tmp -Djava.io.tmpdir=/path/to/exec/tmp"
Important
Please note that it is not just Lenses that uses Snappy; the Kafka Brokers and Kafka Connect workers use it as well. As such, you may need the same workarounds for these services too.
Plugins¶
Lenses can be extended via user-provided classes. There are four categories you can extend:
- Serde: custom serialization and deserialization classes so you can use all Lenses functionality with your own data formats (such as protobuf)
- LDAP Group Filter: custom plugin to query your LDAP implementation for groups your users belong to if you do not use AD or the memberOf overlay of OpenLDAP
- UDF for the SQL Table-based Engine: User Defined Functions (UDF) can extend the Lenses SQL Table Engine with new functions
- Custom HTTP authentication: A class that can extract (and possibly verify) user information sent via headers, so your users can authenticate to Lenses via an authentication proxy / Single Sign-On solution
Location¶
Lenses searches for plugins under two directories:
- The
$LENSES_HOME/plugins/
directory, where $LENSES_HOME is the Lenses installation path - An optional directory set by the environment variable
LENSES_PLUGINS_CLASSPATH_OPTS
On startup, these two directories and any first-level subdirectories of theirs are added to the Lenses classpath (as the security plugins are required to be available during startup), and they are also monitored by Lenses for new jar files.
While any layout or a single directory may work for you, a suggested layout for the plugin directory is the following:
plugins/
├── security
├── serde
└── udf
Once populated with plugins, that directory could look like this:
plugins/
├── security
│ └── sso_header_decoder.jar
├── serde
│ ├── protobuf_actions.jar
│ └── protobuf_clients.jar
└── udf
├── eu_vat.jar
├── reverse_geocode.jar
└── summer_sale_discount.jar
Tip
Lenses continuously monitors the plugin directories and their first level subdirectories (that existed during Lenses startup) for new plugins (jars).
Custom Serde¶
Custom serde (serializer and deserializer) can be used to extend Lenses with support for additional message formats. Out of the box, you get built-in support for Avro, JSON, CSV, XML and more formats. If your data is in a format that is not supported out of the box, or one that requires hardcoded classes (such as protobuf), you can write and compile your own serde jars and add them to Lenses. For more information about custom serde, check the Lenses SQL section.
As mentioned, custom serde can be read from the plugins directories. Before Lenses 2.2, custom serde could be read from the locations below. These locations are still supported but will be deprecated in the future. If you use them, please switch to the plugins directories.
$LENSES_HOME/serde
$LENSES_SERDE_CLASSPATH_OPTS (if set)
The plugins (and serde) directories are continuously monitored for new jar files. Once a new library is dropped in, the new format should be available to use within a few seconds.
Processors¶
Lenses SQL Processors support custom serde as well. If the processor execution mode is set to IN_PROC (the default), no additional action is required. If it is set to CONNECT, the serde jars should be added to the connector directory, alongside the default libraries. If the mode is KUBERNETES, a custom processor image that includes the custom serde should be created.
To include custom serde in the Lenses Docker image, please see Lenses Docker plugins.
Options List Reference¶
Config | Description | Required | Type | Default |
---|---|---|---|---|
lenses.ip |
Bind HTTP at the given endpoint.
Used in conjunction with
lenses.port |
no | string | 0.0.0.0 |
lenses.port |
The HTTP port the HTTP server listens
for connections: serves UI, Rest and WS APIs
|
no | int | 9991 |
lenses.ssl.keystore.location |
The full path to the keystore file
used to enable TLS on lenses port
|
no | string | null |
lenses.ssl.keystore.password |
Password to unlock the keystore
file
|
no | String | null |
lenses.ssl.key.password |
Password for the ssl certificate
used
|
no | String | null |
lenses.ssl.enabled.protocols |
Version of TLS protocol
that will be used
|
no | string | TLSv1.2 |
lenses.ssl.algorithm |
X509 or PKIX algorithms used by TLS
termination
|
no | string | SunX509 |
lenses.ssl.cipher.suites |
Comma separated list of ciphers
allowed for the TLS negotiation
|
no | String | null |
lenses.jmx.port |
The port to bind an JMX agent to
enable JVM monitoring
|
no | int | 9992 |
lenses.license.file |
The full path to the license file
|
yes | string | license.json |
lenses.secret.file |
The full path to
security.conf containing securitycredentials read more
|
yes | string | security.conf |
lenses.storage.directory |
The full path to the directory where Lenses
stores some of its state
|
no | string | null |
lenses.topics.audits |
Topic to store system auditing
information. Keep track of WHO did WHAT and WHEN.
When a topic, config, connector is Created/Updated
or Deleted an audit message is stored.
*We advise not to change the defaults
neither to delete the topic*
|
yes | string | _kafka_lenses_audits |
lenses.topics.metrics |
Topic to store stream processor
metrics. When your state-less stream processors are
running in Kubernetes or Kafka Connect, this
topic collects health checks and
performance metrics.
*We advise not to change the defaults
neither to delete the topic*.
|
yes | string | _kafka_lenses_metrics |
lenses.topics.cluster |
Topic to store broker details.
Infrastructure information is used to determine
config changes, failures and new nodes added or
removed in a cluster.
*We advise not to change the defaults neither to
delete the topic*
|
yes | string | _kafka_lenses_cluster |
lenses.topics.profiles |
Topic to store user preferences.
Bookmark your most used topics, connectors or
SQL processors. *We advise not to change
the defaults neither to delete the topic*
|
yes | string | _kafka_lenses_profiles |
lenses.topics.processors |
Topic to store the SQL processors details.
*We advise not to change the defaults
neither to delete the topic*
|
yes | string | _kafka_lenses_processors |
lenses.topics.connectors |
Topic to store connectors’ details.
|
yes | string | _kafka_lenses_connectors |
lenses.topics.alerts.storage |
Topic to store the alerts raised.
*We advise not to change the defaults
neither to delete the topic*
|
yes | string | _kafka_lenses_alerts |
lenses.topics.alerts.settings |
Topic to store the alerts configurations.
*We advise not to change the defaults
neither to delete the topic*.
|
yes | string | _kafka_lenses_alerts_settings |
lenses.topics.lsql.storage |
Topic to store all data access SQL queries.
Know WHO access WHAT data and WHEN.
*We advise not to change the defaults
neither to delete the topic*
|
yes | string | _kafka_lenses_lsql_storage |
lenses.topics.external.topology |
Topic where external application
publish their topology.
|
yes | string | __topology |
lenses.topics.external.metrics |
Topic where external application
publish their topology metrics.
|
yes | string | __topology__metrics |
lenses.kafka.brokers |
A list of host/port pairs to
use for establishing the initial connection to the
Kafka cluster. Add just a few broker addresses
here and Lenses will bootstrap and discover the
full cluster membership (which may change dynamically).
This list should be in the form
"host1:port1,host2:port2,host3:port3" |
yes | string | PLAINTEXT://localhost:9092 |
lenses.kafka.metrics.port |
An array mapping the Kafka broker id to its metrics port.
|
yes | array | null |
lenses.kafka.metrics.port[*].id |
The Kafka broker identifier
(integer as defined in your Kafka broker configuration)
|
no | int | null |
lenses.kafka.metrics.port[*].port |
The port on the broker machine
to connect to in order to get the broker’s metrics.
|
no | int | null |
lenses.kafka.metrics.port[*].host |
The Kafka broker host name to use
for the given broker identifier.
|
no | string | null |
lenses.kafka.metrics.default.port |
Set this when all the Kafka brokers
use the same JMX/JOLOKIA port number.
When a machine runs more than one Kafka broker,
you need to use lenses.kafka.metrics.port[*]
to set the connection port.
|
no | int | null |
lenses.kafka.metrics.type |
Sets the metrics type. Available options are:
JMX, JOLOKIAG, or JOLOKIAP. JMX - Java Management Extensions is more common
Jolokia supports two APIs a GET and a POST based one.
Use JOLOKIAG if the metrics are exposed via GET requests.
Use JOLOKIAP if the metrics are exposed via POST requests.
|
no | string | JMX |
lenses.kafka.metrics.user |
For secure connections, the setting specifies the
user name to use when connecting to JMX/JOLOKIA endpoints.
The same user is applied for all brokers connections
|
no | string | null |
lenses.kafka.metrics.password |
For secure connections, the setting specifies
the password to use when connecting to JMX/JOLOKIA endpoints.
The same value is used for all brokers connections.
|
no | string | null |
lenses.kafka.metrics.https |
Set this flag to true when the metrics are exposed
via JOLOKIA and the connection is using HTTPS protocol.
|
no | bool | false |
lenses.kafka.metrics.ssl |
This applies for JMX exposed metrics only. Set the value to true
when secure connection is required.
|
no | bool | false |
lenses.kafka.connect.clusters |
Defines the Kafka connect clusters
|
no | array | null |
lenses.kafka.connect.clusters.name |
The name for the connect cluster to recognize it by
|
no | string | null |
lenses.kafka.connect.clusters.statuses |
Comma separated topics which hold the Connect cluster status
|
no | string | null |
lenses.kafka.connect.clusters.configs |
Comma separated topics which hold the Connect cluster config
|
no | string | null |
lenses.kafka.connect.clusters.offsets |
Comma separated topics which hold the Connect cluster offsets
|
no | string | null |
lenses.kafka.connect.clusters.username |
if the connect endpoints are protected
by user/password this is the user to use
|
no | string | null |
lenses.kafka.connect.clusters.password |
if the connect endpoints are protected
by user/password this is the password to use
|
no | string | null |
lenses.kafka.connect.clusters.auth |
If the Connect endpoints are protected,
this is the protection mode
(URL, USER_INFO, SASL_INHERIT, NONE)
|
no | string | null |
lenses.kafka.connect.clusters.urls |
A list of all the worker endpoints
|
no | array | null |
lenses.kafka.connect.clusters.urls.url |
The connect worker endpoint
|
no | string | null |
lenses.kafka.connect.clusters.urls.jmx |
The JMX port (old style, still supported)
|
no | int | null |
lenses.kafka.connect.clusters.urls.metrics.url |
The metrics connection endpoint
|
no | string | null |
lenses.kafka.connect.clusters.urls.metrics.type |
The metrics connection type (JMX or JOLOKIA)
|
no | string | null |
lenses.kafka.connect.clusters.urls.metrics.user |
If the metrics connection is protected by user/password,
this is the user to use
|
no | string | null |
lenses.kafka.connect.clusters.urls.metrics.password |
If the metrics connection is protected by user/password,
this is the password to use
|
no | string | null |
lenses.kafka.connect.request.timeout |
The maximum time (in msec) to wait for Kafka Connect to reply
|
no | int | 10000 |
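As a sketch, a lenses.kafka.connect.clusters entry combining the options above could look like the following (the cluster name, topic names, and endpoints are illustrative):

```
lenses.kafka.connect.clusters = [
  {
    name: "dev",
    urls: [
      # One entry per Connect worker; the jmx field is the old-style metrics port
      { url: "http://connect-worker-1:8083", jmx: 9584 }
    ],
    # The three backing topics of this Connect cluster
    statuses: "connect-statuses",
    configs: "connect-configs",
    offsets: "connect-offsets"
  }
]
```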
lenses.zookeeper.hosts |
A list of all the ZooKeeper nodes
|
no | array | null |
lenses.zookeeper.hosts.url |
The Zookeeper node endpoint
|
no | string | null |
lenses.zookeeper.hosts.url.jmx |
The JMX endpoint (old style, still supported)
|
no | string | null |
lenses.zookeeper.hosts.metrics.type |
The Zookeeper node metrics type (JMX or JOLOKIA)
|
no | string | null |
lenses.zookeeper.hosts.metrics.url |
The Zookeeper node metrics endpoint
|
no | string | null |
lenses.zookeeper.hosts.metrics.user |
If the metrics connection is protected by user/password,
this is the user to use
|
no | string | null |
lenses.zookeeper.hosts.metrics.password |
If the metrics connection is protected by user/password,
this is the password to use
|
no | string | null |
lenses.schema.registry.urls |
A list of SR nodes
|
no | array | null |
lenses.schema.registry.urls.url |
The SR node endpoint
|
no | string | null |
lenses.schema.registry.urls.metrics.url |
The SR node metrics endpoint
|
no | string | null |
lenses.schema.registry.urls.metrics.user |
If the metrics connection is protected by user/password,
this is the user to use
|
no | string | null |
lenses.schema.registry.urls.metrics.password |
If the metrics connection is protected by user/password
this sets the password to use
|
no | string | null |
lenses.zookeeper.hosts |
Provide the details of all available ZooKeeper nodes.
For every ZooKeeper node specify the
connection url (host:port) and the metrics endpoint.
The configuration should be
[{url:"hostname1:port1", metrics:{url:"URL", type:"JMX"}}] |
yes | string | [] |
lenses.zookeeper.chroot |
You can add your
znode (chroot) path if you are using one. Please do not add
leading or trailing slashes. For example, if you use
the zookeeper chroot /kafka for
your Kafka cluster, set this value to
kafka |
no | string | |
lenses.zookeeper.security.enabled |
Enables a secured connection to your ZooKeeper.
The default value is false.
Please read about this setting before enabling it.
|
no | boolean | false |
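Combining the ZooKeeper options above, an illustrative fragment in the format stated for lenses.zookeeper.hosts might be (host names and ports are assumptions):

```
lenses.zookeeper.hosts = [
  { url: "zk-1:2181", metrics: { url: "zk-1:9585", type: "JMX" } },
  { url: "zk-2:2181", metrics: { url: "zk-2:9585", type: "JMX" } }
]
# Brokers registered under the /kafka znode; no leading or trailing slashes
lenses.zookeeper.chroot = "kafka"
```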
lenses.schema.registry.urls |
Provide all available Schema Registry node details, or list
the load balancer address if one is used. For every instance
specify the connection url and, if
metrics are enabled, the metrics endpoint
|
yes | string | [] |
lenses.schema.registry.kerberos |
Set to true if the schema registry
is deployed with kerberos authentication
|
no | boolean | false |
lenses.schema.registry.keytab |
The location of the keytab file if
connecting to a kerberized schema registry
|
no | string | null |
lenses.schema.registry.jaas |
The location of the jaas file if
connecting to a kerberized schema registry
|
no | string | null |
lenses.schema.registry.krb5 |
The location of the krb5 file if
connecting to a kerberized schema registry
|
no | string | null |
lenses.schema.registry.principal |
The service principal of the above keytab
|
no | string | null |
lenses.schema.registry.service.name |
The service name of the above keytab
|
no | string | null |
lenses.schema.registry.auth |
Specifies the authentication mode
for connecting to the schema registry endpoints.
Available values are: URL, USER_INFO, SASL_INHERIT or NONE
|
no | string | null |
lenses.schema.registry.username |
When a USER_INFO authentication
mode is used, this specifies the user name value
|
no | string | null |
lenses.schema.registry.password |
When a USER_INFO authentication
mode is used, this specifies the password value
|
no | string | null |
lenses.schema.registry.settings.* |
Prefix for any schema registry
client configuration you might want to set.
For example, for SASL_INHERIT you need to provide
lenses.schema.registry.settings.sasl.jaas.config=PATH
|
no | string | null |
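A sketch of a schema registry section using USER_INFO authentication (endpoints and credentials are illustrative placeholders):

```
lenses.schema.registry.urls = [
  { url: "http://registry-1:8081", metrics: { url: "registry-1:9582", type: "JMX" } }
]
lenses.schema.registry.auth = "USER_INFO"
# Illustrative credentials for USER_INFO mode
lenses.schema.registry.username = "sr-user"
lenses.schema.registry.password = "sr-pass"
```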
lenses.kafka.connect.clusters |
Provide all available Kafka Connect clusters.
For each cluster give a name, list the 3 backing topics,
and provide the workers' connection details (host:port) and,
if metrics are enabled and the cluster is on Kafka 1.0.0
or later, the metrics endpoints. See example here
|
no | array | [] |
lenses.alert.manager.endpoints |
Comma separated Alert Manager endpoints.
If provided, Lenses will push raised
alerts to the downstream notification gateway.
The configuration should be
"http://host1:port1" |
no | string | |
lenses.alert.manager.source |
How to identify the source of an alert
in Alert Manager. The default is
Lenses, but you might want to override it to
UAT for example |
no | string | Lenses |
lenses.alert.manager.generator.url |
A unique URL identifying the creator of this alert.
The default is
http://lenses, but you might want to override it to
http://<my_instance_url> for example |
no | string | http://lenses |
lenses.grafana |
If using Grafana, provide its URL.
The configuration should be
"http://grafana-host:port" |
no | string | |
lenses.sql.settings.max.size |
Used when reading data from a Kafka topic.
This is the maximum data size in bytes to return
from a Lenses SQL query. If the query brings back more
data than this limit, any records received after
the limit is reached are discarded.
This can be overwritten
in the Lenses SQL query.
|
yes | long | 20971520 (20MB) |
lenses.sql.settings.max.query.time |
Used when reading data from a
Kafka topic. This is the time in milliseconds the
query will be allowed to run. If the time is exhausted
it returns the records found so far.
This can be overwritten in the
Lenses SQL query.
|
yes | int | 3600000 (1h) |
lenses.sql.settings.max.idle.time |
Used when reading data from a
Kafka topic. This is the time in milliseconds
to wait when reaching the end of the topic.
This can be overwritten in the
Lenses SQL query.
|
yes | int | 5000 (5 seconds) |
lenses.sql.settings.skip.bad.records |
Used when reading data from a
Kafka topic. If the flag is set to true,
the SQL engine will skip records which can
not be read. This can be overwritten in the
Lenses SQL query.
|
yes | boolean | true |
lenses.sql.settings.format.timestamp |
Used when reading data from a
Kafka topic. If the flag is set to true,
the Avro date and time fields are rendered
to the UI in a human readable format.
This can be overwritten in the
Lenses SQL query.
|
yes | boolean | true |
lenses.sql.settings.live.aggs |
Used when reading data from a
Kafka topic. If the flag is set to true,
it will enable running aggregate queries
on the table-based SQL engine.
This can be overwritten in the
Lenses SQL query.
|
yes | boolean | true |
lenses.sql.sample.default |
Number of messages to take in every
sampling attempt
|
no | int | 2 |
lenses.sql.sample.window |
How frequently to sample a topic
for new messages when tailing it
|
no | int | 200 |
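For example, tightening the SQL engine defaults in lenses.conf could look like this (the values below are illustrative, not recommendations):

```
# Return at most 50MB per query and give up after 30 minutes
lenses.sql.settings.max.size = 52428800
lenses.sql.settings.max.query.time = 1800000
# Fail on unreadable records instead of skipping them
lenses.sql.settings.skip.bad.records = false
```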
lenses.metrics.workers |
Number of workers to distribute the load
of querying and collecting metrics
|
no | int | 16 |
lenses.offset.workers |
Number of workers to distribute the
load of querying topic offsets
|
no | int | 5 |
lenses.sql.execution.mode |
The SQL processors execution mode;
one of IN_PROC, CONNECT, or KUBERNETES
|
no | string | IN_PROC |
lenses.sql.state.dir |
Directory location to store the state
of KStreams. If using CONNECT mode, this folder
must already exist on each Kafka
Connect worker
|
no | string | logs/lenses-sql-kstream-state |
lenses.sql.monitor.frequency |
How frequently (in msec) SQL processors
emit healthcheck and performance metrics to
lenses.topics.metrics |
no | int | 10000 |
lenses.kubernetes.processor.image.name |
The Docker/container repository url
and name of the Lenses SQL runner
|
no | string | eu.gcr.io/lenses-container-registry/lenses-sql-processor |
lenses.kubernetes.processor.image.tag |
The Lenses SQL runner image tag | no | string | 2.3 |
lenses.kubernetes.config.file |
The location of the kubectl config file | no | string | /home/lenses/.kube/config |
lenses.kubernetes.pull.policy |
TODO
|
no | string | always |
lenses.kubernetes.watch.reconnect.limit |
TODO
|
no | long | -1 |
lenses.kubernetes.incluster.name |
TODO
|
no | long | -1 |
lenses.kubernetes.processor.heap |
The amount of memory
the underlying Java process will use
|
no | string | 1024M |
lenses.kubernetes.processor.mem.request |
TODO
|
no | string | 128M |
lenses.kubernetes.processor.mem.limit |
TODO
|
no | string | 1152M |
lenses.kubernetes.processor.jaas |
TODO
|
no | string | null |
lenses.kubernetes.processor.krb5 |
TODO
|
no | string | null |
lenses.kubernetes.processor.kafka.settings |
Prefix for all the Kafka configurations
required to run the Kafka Streams application
resulting from the Lenses SQL streaming code.
|
no | string | null |
lenses.kubernetes.processor.kafka.protected.settings |
An array of the keys
prefixed with lenses.kubernetes.processor.kafka.settings which
contain sensitive information. If for example
ssl.key.password is set in the settings,
then this value should be added as an item here.
|
no | string | null |
lenses.kubernetes.processor.kafka.protected.file.settings |
An array of the keys
prefixed with lenses.kubernetes.processor.kafka.settings which
are pointing to files. If for example
ssl.keystore.location is set in the settings,
then this value should be added as an item here.
|
no | string | null |
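As an illustration of the protected-settings mechanism, assuming an SSL-secured cluster (the paths and passwords below are placeholders):

```
lenses.kubernetes.processor.kafka.settings.security.protocol = "SSL"
lenses.kubernetes.processor.kafka.settings.ssl.keystore.location = "/var/private/ssl/keystore.jks"
lenses.kubernetes.processor.kafka.settings.ssl.key.password = "changeit"
# Mark the sensitive key so it is treated as a secret
lenses.kubernetes.processor.kafka.protected.settings = ["ssl.key.password"]
# Mark the key that points to a file so the file is made available to the processor
lenses.kubernetes.processor.kafka.protected.file.settings = ["ssl.keystore.location"]
```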
lenses.kubernetes.processor.kafka.keytab |
TODO
|
no | string | null |
lenses.kubernetes.processor.schema.registry.settings |
Prefix all the Schema Registry configurations
required to connect to your instance.
|
no | string | null |
lenses.kubernetes.processor.schema.registry.protected.settings |
An array of the keys
prefixed with lenses.kubernetes.processor.schema.registry.settings which
contain sensitive information. If for example
ssl.key.password is set in the settings,
then this value should be added as an item here.
|
no | string | null |
lenses.kubernetes.processor.schema.registry.protected.file.settings |
An array of the keys
prefixed with lenses.kubernetes.processor.schema.registry.settings which
are pointing to files. If for example
ssl.keystore.location is set in the settings,
then this value should be added as an item here.
|
no | string | null |
lenses.kubernetes.processor.schema.registry.keytab |
TODO
|
no | string | null |
lenses.kubernetes.service.account |
The service account to deploy with.
This account should be able to pull images
from
lenses.kubernetes.processor.image.name |
no | string | default |
lenses.kubernetes.pull.policy |
The pull policy for Kubernetes containers:
IfNotPresent or Always |
no | string | IfNotPresent |
lenses.kubernetes.runner.mem.limit |
The memory limit applied to the Container | no | string | 768Mi |
lenses.kubernetes.runner.mem.request |
The memory requested for the Container | no | string | 512Mi |
lenses.kubernetes.runner.java.opts |
Advanced JVM and GC memory tuning parameters | no | string | -Xms256m -Xmx512m
-XX:MaxPermSize=128m -XX:MaxNewSize=128m
-XX:+UseG1GC -XX:MaxGCPauseMillis=20
-XX:InitiatingHeapOccupancyPercent=35
-XX:+DisableExplicitGC -Djava.awt.headless=true
|
lenses.interval.summary |
The interval (in msec) to check for new topics,
or topic config changes
|
no | long | 10000 |
lenses.interval.consumers |
The interval (in msec) to read all
consumer info
|
no | int | 10000 |
lenses.interval.partitions.messages |
The interval (in msec) to refresh
partitions info
|
no | long | 10000 |
lenses.interval.type.detection |
The interval (in msec) to check the
topic payload type
|
no | long | 30000 |
lenses.interval.user.session.ms |
The duration (in msec) that a
client session stays alive for.
|
no | long | 14400000 (4h) |
lenses.interval.user.session.refresh |
The interval (in msec) to check whether a
client session is idle and should be terminated.
|
no | long | 60000 |
lenses.interval.schema.registry.healthcheck |
The interval (in msec) to check the
status of schema registry instances.
|
no | long | 30000 |
lenses.interval.topology.topics.metrics |
The interval (in msec) to refresh the
topology status page.
|
no | long | 30000 |
lenses.interval.alert.manager.healthcheck |
The interval (in msec) to check the
status of the Alert manager instances.
|
no | long | 5000 |
lenses.interval.alert.manager.publish |
The interval (in msec) on which
unresolved alerts are published
to alert manager.
|
no | long | 30000 |
lenses.interval.topology.custom.app.metrics.discard.ms |
The interval (in msec) after which
an already published metrics entry is considered stale.
Once this happens the record is discarded.
|
no | long | 120000 |
lenses.interval.metrics.refresh.zk |
The interval (in msec) to get
Zookeeper metrics.
|
yes | long | 5000 |
lenses.interval.metrics.refresh.sr |
The interval (in msec) to get
Schema Registry metrics.
|
yes | long | 5000 |
lenses.interval.metrics.refresh.broker |
The interval (in msec) to get Broker metrics. | yes | long | 5000 |
lenses.interval.metrics.refresh.alert.manager |
The interval (in msec) to get
Alert Manager metrics
|
yes | long | |
lenses.interval.metrics.refresh.connect |
The interval (in msec) to get Connect metrics. | yes | long | |
lenses.interval.metrics.refresh.brokers.in.zk |
The interval (in msec) to refresh
the brokers from Zookeeper.
|
yes | long | 5000 |
lenses.kafka.ws.poll.ms |
Max time (in msec) a consumer polls for
data on each request, on WS API request.
|
no | int | 1000 |
lenses.kafka.ws.buffer.size |
Max buffer size for WS consumer | no | int | 10000 |
lenses.kafka.ws.max.poll.records |
Specify the maximum number of records
returned in a single call to poll(). It will
impact how many records will be pushed at once
to the WS client.
|
no | int | 1000 |
lenses.kafka.ws.heartbeat.ms |
The interval (in msec) to send messages to
the client to keep the TCP connection open.
|
no | int | 30000 |
lenses.access.control.allow.methods |
Restrict the HTTP verbs allowed
to initiate a cross-origin HTTP request
|
no | string | GET,POST,PUT,DELETE,OPTIONS |
lenses.access.control.allow.origin |
Restrict cross-origin HTTP
requests to specific hosts.
|
no | string | |
lenses.schema.registry.topics |
The backing topic where schemas are stored. | no | string | _schemas |
lenses.schema.registry.delete |
Allows subjects to be deleted in
the Schema Registry. Default is disabled.
Requires schema-registry version 3.3.0 or later
|
no | boolean | false |
lenses.allow.weak.SSL |
Allow connecting to
https:// services even when self-signed certificates are used
|
no | boolean | false |
lenses.telemetry.enable |
Enable or disable telemetry data collection | no | boolean | true |
lenses.curator.retries |
The number of attempts to read the
broker metadata from Zookeeper.
|
no | int | 3 |
lenses.curator.initial.sleep.time.ms |
The initial amount of time to wait between
retries to ZK.
|
no | int | 2000 |
lenses.zookeeper.max.session.ms |
The max time (in msec) to wait for
the Zookeeper server to
reply for a request. The implementation requires that
the timeout be a minimum of 2 times the tickTime
(as set in the server configuration).
|
no | int | 10000 |
lenses.zookeeper.max.connection.ms |
The duration (in msec) to wait for the Zookeeper client to
establish a new connection.
|
no | int | 10000 |
lenses.akka.request.timeout.ms |
The maximum time (in msec) to wait for an
Akka Actor to reply.
|
no | int | 10000 |
lenses.kafka.control.topics |
List of Kafka topics to be marked as system topics |
no | string | ["connect-configs", "connect-offsets", "connect-status",
"connect-statuses", "_schemas", "__consumer_offsets",
"_kafka_lenses_", "lsql_", "__transaction_state",
"__topology", "__topology__metrics"]
|
lenses.alert.buffer.size |
The number of most recently raised
alerts to keep in the cache.
|
no | int | 100 |
lenses.kafka.settings.consumer |
Allow additional Kafka consumer settings
to be specified. When Lenses creates an instance
of the KafkaConsumer class, it will use these
properties during initialization.
|
no | string | {reconnect.backoff.ms = 1000, retry.backoff.ms = 1000} |
lenses.kafka.settings.producer |
Allow additional Kafka producer settings to
be specified. When Lenses creates an
instance of the KafkaProducer
class, it will use these properties during initialization.
|
no | string | {reconnect.backoff.ms = 1000, retry.backoff.ms = 1000} |
lenses.kafka.settings.kstream |
Allow additional Kafka KStreams settings
to be specified
|
no | string |
The last three keys allow configuring the settings of Lenses' internal consumers, producers, and KStreams clients.
Example: lenses.kafka.settings.producer.compression.type = snappy
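A sketch of such overrides in lenses.conf (the values are illustrative):

```
# Internal consumer/producer/KStreams overrides
lenses.kafka.settings.consumer.max.partition.fetch.bytes = 10485760
lenses.kafka.settings.producer.compression.type = "snappy"
lenses.kafka.settings.kstream.commit.interval.ms = 5000
```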