Clients / Python¶
The Lenses python-library
is a Python client enabling Python developers and data scientists to take advantage
of the REST and WebSocket endpoints Lenses exposes. Users can:
- Manage topics
- Manage schemas
- Manage processors
- Manage connectors
- Browse topic data via SQL
- Subscribe to live continuous SQL queries via WebSockets
Installation¶
Dependencies¶
Install Pip3:
curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
python3.5 get-pip.py
rm -f get-pip.py
Virtual Environment (Recommended)¶
Install virtualenv and virtualenvwrapper:
pip3.5 install virtualenv virtualenvwrapper
cat << EOF >> ~/.bashrc
export WORKON_HOME=$HOME/VirtEnv/.virtualenvs
export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3.5
export VIRTUALENVWRAPPER_VIRTUALENV_ARGS=' -p /usr/bin/python3.5'
export PROJECT_HOME=$HOME/VirtEnv
source /usr/bin/virtualenvwrapper.sh
EOF
source ~/.bashrc
# All virtualenvs and their packages will be installed under ~/VirtEnv/.virtualenvs
ls ~/VirtEnv/.virtualenvs
get_env_details initialize machinelab postactivate ...
Create a python virtual environment:
mkvirtualenv myvirtenv
Activate the virtualenv (activated by default after creation):
# To activate just run workon myvirtenv
[user@hostname]$ workon myvirtenv
(myvirtenv)[user@hostname]$
Exiting virtualenv:
(myvirtenv)[user@hostname]$ deactivate
[user@hostname]$
Remove an installed virtualenv along with its packages:
rmvirtualenv myvirtenv
Install Lenses Python¶
Clone the Python repo and execute the following command inside the repo (Python 3 must already be installed on your system):
python3 setup.py install
Or just use pip3:
pip3 install lenses_python
Getting started¶
Basic authentication¶
First, we import the Python 3 client and create a connection:
from lenses_python.lenses import lenses
data=lenses(<url>,<username>,<password>)
Next, get the credentials and roles for our account:
data.GetCredentials()
Example code:
data=lenses("http://127.0.0.1:3030","admin","admin")
data.GetCredentials()
{
'schemaRegistryDelete': True,
'success': True,
'token': 'c439f615-e511-4dfd-863b-76ad285b7572',
'user': {
'email': None,
'id':'admin',
'name': 'Lenses Admin',
'roles': ['admin', 'read', 'write', 'nodata']
}
}
Kerberos authentication¶
In this case we will connect with Kerberos. First, install the Kerberos client tools (kinit) and set the configuration in the /etc/krb5.conf file. Then run the kinit utility to create a Kerberos ticket, which will later be retrieved by the Python 3 library.
from lenses_python.lenses import lenses
data = lenses(url=<url>,kerberos_mode=1)
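A minimal end-to-end sketch, assuming a valid Kerberos ticket has already been created with kinit and that Lenses listens on localhost:
from lenses_python.lenses import lenses

# Assumes `kinit` has already been run, so a valid ticket exists
data = lenses(url="http://127.0.0.1:3030", kerberos_mode=1)
print(data.GetCredentials())  # same call as in the basic-auth example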
SQL Data Handlers¶
Using the Lenses SQL engine, we can call the Lenses REST endpoints to execute SQL queries on Kafka topics.
This is achieved using the SqlHandler method.
data.SqlHandler(<query>, <optional argument is_extract_pandas>,<optional argument stats>, <optional datetimelist>, <optional formatinglist>)
If we do not supply the is_extract_pandas parameter, the output will be in JSON format; the value of this parameter can be either True or False. Otherwise, the output will be a pandas data frame. The stats parameter is an integer. When requesting pandas data frames, we can also use the two optional arguments datetimelist and formatinglist to convert datetime strings to datetime objects:
- datetimelist - a list containing all the keys whose values are datetime strings.
- formatinglist - a list containing the date format for each element of datetimelist.
If all dates share the same format, supply only a single entry in formatinglist.
For more information about the format check this page.
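A minimal sketch that returns the result as JSON records, since is_extract_pandas is not supplied (the topic name is assumed from the demo data set used elsewhere on this page):
# No is_extract_pandas argument, so the result is a list of JSON records
result = data.SqlHandler("SELECT * FROM `cc_payments` LIMIT 10")
print(result)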
Example using the datetimelist and formatinglist arguments:
data.SqlHandler(
'SELECT * FROM `nyc_yellow_taxi_trip_data`',
['tpep_dropoff_datetime'],
['%Y-%m-%d %H:%M:%S'])
Managing Topics¶
Topics Information¶
To list topics:
data.TopicsNames()
To obtain the information for a specific topic, returned as a dictionary:
data.TopicInfo(<topic name>)
If we want to list all topic information, we execute the following command:
data.GetAllTopics()
The output is a list of JSON records containing the information for all topics.
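These calls compose naturally; a small sketch that lists the topic names and then fetches the details of the first one (assuming at least one topic exists):
names = data.TopicsNames()       # list of topic name strings
info = data.TopicInfo(names[0])  # dictionary with that topic's details
print(info)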
Create new topic¶
data.CreateTopic(<topic name>, <replication>, <partitions>, <optional configuration>, <optional configuration filename>)
Example of configuration:
configuration = {
"cleanup.policy": "compact",
"compression.type": "snappy"
}
There are three options for setting the parameters:
- Topic name and configuration (fourth) argument. This will create a topic as described in the configuration example above.
- Topic name and configuration file (fifth) argument. This will create a topic as described in the file.
- Provide only the configuration file, with the topic name being part of the configuration file.
Example: Create a topic named example_topic:
configuration = {"cleanup.policy": "compact","compression.type": "snappy"}
data.CreateTopic("example_topic",1,1,configuration)
Update topic configuration¶
data.UpdateTopicConfig(<topic name>, <optional configuration>, <optional filename>)
Example configuration:
conf={
"configs": [
{
"key": "cleanup.policy",
"value": "compact"
}
]
}
data.UpdateTopicConfig("example_topic", conf)
There are three options for setting the parameters:
- Topic name and configuration argument (second). This will update the topic with the provided configuration.
- Topic name and configuration file (third). This will update the topic with the configuration from the file.
- Provide only the configuration file with the topic name and config in the default section. This will load and update the topics specified in the configuration file. For example:
[Default]
topicname: example_topic
config:{
"configs": [
{
"key": "cleanup.policy",
"value": "compact"
}
]
}
Delete topic¶
data.DeleteTopic(<topic name>)
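A defensive sketch, checking that the topic exists before deleting it (using TopicsNames from above):
# Delete the topic only if it is present
if "example_topic" in data.TopicsNames():
    data.DeleteTopic("example_topic")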
Delete topic records¶
This deletes records of a given topic up to a given offset. Provide the topic name, the partition, and the offset as input. For example, to delete the first 100 records of partition 0 of example_topic:
data.DeleteTopicRecords("example_topic","0","100")
"Records from topic 'example_topic' and partition '0' up to offset '100', are being deleted.
When the process is completed, an audit will appear in the audits tab."
Managing SQL Processors¶
Using Lenses we can deploy and manage SQL processors.
Create new Processor¶
data.CreateProcessor(<processor name>, <sql query>, <runners>, <cluster name>, <optional namespace>, <optional pipeline>)
The parameters are:
- sql query - The Lenses SQL to run.
- runners - The number of runners to spawn.
- cluster name - The cluster name, either the Connect cluster name or the Kubernetes cluster name. Use IN_PROC for in-process mode.
- optional namespace - Kubernetes namespace, only applicable in Kubernetes mode.
- optional pipeline - Kubernetes pipeline tag, only applicable in Kubernetes mode.
Example for creating a Kubernetes processor:
data.CreateProcessor("new-k8processor",
"SET autocreate=true;INSERT INTO topicB SELECT * FROM topicA",
1,
"dev",
"ns",
"pipeline")
Example for creating an IN_PROC processor:
data.CreateProcessor("new-processor",
"SET autocreate=true;INSERT INTO topicB SELECT * FROM topicA",
1,
"dev")
On successful registration Lenses will return the id for the new processor, which will look similar to lsql_818a158c50a24d71952652ab49e75637.
Deleting a Processor¶
data.DeleteProcessor(<processor id>)
Example for deleting a processor with processor id lsql_818a158c50a24d71952652ab49e75637:
data.DeleteProcessor("lsql_818a158c50a24d71952652ab49e75637")
Resume a Processor¶
data.ResumeProcessor(<processor id>)
Pause a Processor¶
data.PauseProcessor(<processor id>)
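A short sketch pausing and then resuming a processor, reusing the example id shown above:
processor_id = "lsql_818a158c50a24d71952652ab49e75637"
data.PauseProcessor(processor_id)   # stop processing
data.ResumeProcessor(processor_id)  # continue processing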
Scale a Processor¶
Scaling a processor involves changing the number of runners: threads for IN_PROC mode, Connect tasks for CONNECT mode, or pods for KUBERNETES mode:
data.UpdateProcessor(<processor id>, <number of runners> [string])
Example of scaling a processor running in the IN_PROC execution mode:
data.UpdateProcessor("lsql_818a158c50a24d71952652ab49e75637", "2")
Managing Schemas¶
Best practice is to use AVRO as the message format for your data. Lenses supports schema registries from both Confluent and Hortonworks (compatibility mode 0).
List all Subjects¶
data.GetAllSubjects()
Example output:
data.GetAllSubjects()
['telecom_italia_data-key',
'cc_payments-value',
'reddit_posts-value',
'sea_vessel_position_reports-value',
'telecom_italia_grid-value',
'fast_vessel_processor-value',
'reddit_posts-key',
'telecom_italia_grid-key',
'telecom_italia_data-value',
'nyc_yellow_taxi_trip_data-value',
'sea_vessel_position_reports-key',
'cc_data-value',
'fast_vessel_processor-key',
'logs_broker-value']
List Subject Versions¶
List the versions of a specific schema subject:
data.ListVersionsSubj(<name of subject>)
Example for getting the versions from a subject:
data.ListVersionsSubj("reddit_posts-value")
[1]
Get a Schema by ID¶
You can retrieve the schema by using the schema's ID:
data.GetSchemaById(<schema id>)
Example getting a schema by using the schema ID:
data.GetSchemaById("1")
{'schema': '{"type":"record","name":"CreditCard","namespace":"com.landoop.data.generator.domain.payments",
"fields":[{"name":"number","type":"string"},{"name":"customerFirstName","type":"string"},
{"name":"customerLastName","type":"string"},{"name":"country","type":"string"},
{"name":"currency","type":"string"},{"name":"blocked","type":"boolean"}]}'}
Register new Schema¶
data.RegisterNewSchema(<name of schema>, <optional schema>, <optional filename>)
There are three options for setting the parameters:
- Schema name and schema argument (second). This will register the schema with the provided definition.
- Schema name and schema file (third). This will register the schema from the file.
- Provide the schema file only. This will register the schema specified in the configuration file.
Example of schema registration:
config={
'schema':
'{"type":"record","name":"reddit_post_key",'
'"namespace":"com.landoop.social.reddit.post.key",'
'"fields":[{"name":"subreddit_id","type":"string"}]}'
}
data.RegisterNewSchema("example_schema", config)
{'id': 13}
Example file configuration without a schema name:
[Default]
{'schema':
'{"type":"record","name":"reddit_post_key",'
'"namespace":"com.landoop.social.reddit.post.key",'
'"fields":[{"name":"subreddit_id","type":"string"}]}'
}
Delete specific Subjects¶
data.DeleteSubj(<name of subject>)
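A defensive sketch, deleting a subject only if it is currently registered (using GetAllSubjects from above):
# Delete the subject only if it is listed in the registry
if "example_schema" in data.GetAllSubjects():
    data.DeleteSubj("example_schema")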
Delete Schema by Version¶
data.DeleteSchemaByVersion(<name of subject>, <version of subject> [string])
Example of deleting a schema by version:
data.DeleteSchemaByVersion("example_schema", "1")
1
Get Global Compatibility¶
data.GetGlobalCompatibility()
Update Global Compatibility¶
This command updates the compatibility on the Schema Registry servers:
data.UpdateGlobalCompatibility(<optional compatibility>, <optional filename>)
Example:
config={'compatibility': 'BACKWARD'}
data.UpdateGlobalCompatibility(config)
Out: {'compatibility': 'BACKWARD'}
There are two options for setting the parameters:
- Compatibility. This will update the schema registries with the provided compatibility level.
- Provide only the compatibility file. This will set the schema registry to the compatibility specified in the configuration file.
For example:
[Default]
compatibility:{"compatibility": "BACKWARD"}
Get Compatibility of a Subject¶
data.GetCompatibility(<subject name>)
Example with a subject that has a defined compatibility:
data.GetCompatibility("reddit_posts-value")
{'compatibilityLevel': 'BACKWARD'}
If no compatibility has been defined for the subject, the GetCompatibility method will fail as shown below:
data.GetCompatibility("reddit_posts-value")
...
...
...
Exception: Http status code 404.{"error_code":40401,"message":"Subject not found."}
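Since the call raises an exception when no compatibility is defined, you may want to guard it; a minimal sketch:
try:
    level = data.GetCompatibility("reddit_posts-value")
except Exception:  # raised with HTTP 404 when no compatibility is defined
    level = None
print(level)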
Change Compatibility of Subject¶
data.ChangeCompatibility(<name of subject>, <optional compatibility>, <optional filename>)
Example: change the compatibility of a subject:
conf={'compatibility': 'BACKWARD'}
data.ChangeCompatibility("telecom_italia_data-key", conf)
There are three options for setting the parameters of ChangeCompatibility:
- Subject name and compatibility (second) argument. This will update the subject with the provided compatibility level.
- Subject name and compatibility file (third) argument. This will update the subject with the compatibility from the file.
- Provide only the compatibility file. This will set the subject to the compatibility specified in the configuration file.
Example: Update Compatibility
[Default]
compatibility:{"compatibility": "BACKWARD"}
Managing Connectors¶
Lenses allows you to manage Kafka Connect connectors to load and unload data from Kafka.
List Connectors¶
To get the available connectors for a given Connect cluster, run:
data.ListAllConnectors(<name of cluster>)
Example: Get the connectors from a Connect cluster with name dev:
data.ListAllConnectors("dev")
['logs-broker', 'nullsink']
Get Connector Information¶
This command retrieves the status and configuration of a connector:
data.GetInfoConnector(<name of cluster>, <name of connector>)
Example: Get info from a connector with name logs-broker that belongs to the dev cluster:
data.GetInfoConnector("dev", "logs-broker")
{'name': 'logs-broker',
'config': {'connector.class': 'org.apache.kafka.connect.file.FileStreamSourceConnector',
'name': 'logs-broker',
'topic': 'logs_broker',
'file': '/var/log/broker.log',
'tasks.max': '1'},
'tasks': [{'connector': 'logs-broker', 'task': 0}],
'type': 'source'}
Get Connector Configuration¶
data.GetConnectorConfig(<name of cluster>, <name of connector>)
Example: Get the configuration from a connector with name logs-broker that belongs to the dev cluster:
data.GetConnectorConfig("dev", "logs-broker")
{
"connector.class": "org.apache.kafka.connect.file.FileStreamSourceConnector",
"file": "/var/log/broker.log",
"name": "logs-broker",
"tasks.max": "1",
"topic": "logs_broker"
}
Get Connector Status¶
data.GetConnectorStatus(<name of cluster>, <name of connector>)
Example: Get the status for a connector named logs-broker that belongs to the dev cluster:
data.GetConnectorStatus("dev", "logs-broker")
{
"connector" : { "state": "RUNNING", "worker_id": "172.17.0.3:8083" },
"name" : "logs-broker",
"tasks" : [{ "id": 0, "state": "RUNNING", "worker_id": "172.17.0.3:8083" }]
}
Get Connector Tasks¶
data.GetConnectorTasks(<name of cluster>, <name of connector>)
Example: List the tasks of the logs-broker connector:
data.GetConnectorTasks("dev", "logs-broker")
[
{
"config": {
"file": "/var/log/broker.log",
"task.class": "org.apache.kafka.connect.file.FileStreamSourceTask",
"topic": "logs_broker"
},
"id": {"connector": "logs-broker", "task": 0}
}
]
Get Status of specific Task¶
data.GetStatusTask(<name of cluster>, <name of connector>, <task id>)
Example: Get the status of a task for connector logs-broker:
data.GetStatusTask("dev", "logs-broker", "0")
{'state': 'RUNNING', 'id': 0, 'worker_id': '172.17.0.2:8083'}
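Combining the two calls, a sketch that prints the state of every task of a connector (field names taken from the example outputs above):
for task in data.GetConnectorTasks("dev", "logs-broker"):
    task_id = task["id"]["task"]
    status = data.GetStatusTask("dev", "logs-broker", str(task_id))
    print(task_id, status["state"])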
Create new Connector¶
data.CreateConnector(<name of cluster>, <optional configuration>, <optional filename>)
Example configuration:
conf={
"name": "example_connector",
"config": {
"connector.class": "org.apache.kafka.connect.file.FileStreamSourceConnector",
"topic": "logs_broker",
"file": "/var/log/broker.log",
"tasks.max": "1"}
}
data.CreateConnector("dev", conf)
{'name': 'example_connector',
'config': {'connector.class': 'org.apache.kafka.connect.file.FileStreamSourceConnector',
'topic': 'logs_broker',
'file': '/var/log/broker.log',
'tasks.max': '1',
'name': 'example_connector'},
'tasks': [],
'type': None}
There are three options for setting the parameters:
- Set the cluster and the configuration (second argument); the connector name is part of the config.
- Set the cluster and the file (third argument) to load the connector config from.
- Set the configuration file name only, specifying the cluster and connector name in the default section:
[Default]
cluster: my-cluster
connector: my-connector
config: {
"config": {
"connect.coap.kcql": "1",
"connector.class": "com.datamountaineer.streamreactor.connect.coap.sink.coapsinkconnector"
},
"name": "name"
}
Set Connector Configuration¶
data.SetConnectorConfig(<name of cluster>, <name of connector>, <optional configuration>, <optional filename>)
Example: Update the nullsink connector configuration:
conf={
"connector.class": "org.apache.kafka.connect.file.FileStreamSinkConnector",
"task.max": 5,
"topics": "nyc_yellow_taxi_trip_data,reddit_posts,sea_vessel_position_reports,telecom_italia_data",
"file": "/dev/null",
"name": "nullsink"
}
data.SetConnectorConfig("dev", "nullsink", conf)
{'name': 'nullsink',
'config': {'connector.class': 'org.apache.kafka.connect.file.FileStreamSinkConnector',
'task.max': '5',
'topics': 'nyc_yellow_taxi_trip_data,reddit_posts,sea_vessel_position_reports,telecom_italia_data',
'file': '/dev/null',
'name': 'nullsink'},
'tasks': [{'connector': 'nullsink', 'task': 0},
{'connector': 'nullsink', 'task': 1},
{'connector': 'nullsink', 'task': 2},
{'connector': 'nullsink', 'task': 3}],
'type': 'sink'}
There are three options for setting the parameters:
- Set the cluster, name of the connector and the config.
- Set the cluster, name of the connector and the file to load the connector config from.
- Set the configuration file name only, specifying the cluster and connector name in the default section.
[Default]
cluster: my-cluster
connector: my-connector
config: {
"connector.class": "org.apache.kafka.connect.file.FileStreamSinkConnector",
"task.max": 5,
"topics": "nyc_yellow_taxi_trip_data,reddit_posts,sea_vessel_position_reports, telecom_italia_data",
"file": "/dev/null",
"name": "nullsink"
}
Restart Connector Task¶
data.RestartConnectorTask(<name of cluster>, <name of connector>,<task id>)
Example: Restart the task with id 0:
data.RestartConnectorTask("dev", "example_connector", "0")
Delete a Connector¶
To delete a connector, call the DeleteConnector
method, supplying the name of the cluster and the connector name:
data.DeleteConnector(<name of cluster>, <name of connector>)
Example: Delete the connector named example_connector:
data.DeleteConnector("dev", "example_connector")
Continuous Queries¶
Lenses allows clients to submit SQL and subscribe to the output of the query continuously via web sockets.
Subscribing¶
A client can SUBSCRIBE to a topic via SQL as follows:
data.SubscribeHandler(<url>, <client id>, <sql query>, <write>, <filename>, <print_results>, <optional datetimelist>, <optional formatinglist>)
Parameters:
- url - The Lenses WebSocket endpoint to subscribe to.
- client id - A unique identifier for the client.
- sql query - The Lenses SQL query to run.
- write - A boolean, pre-defined as False. If True, the data is saved to a file; the file name is set by the filename parameter.
- filename - Name of the file to store the output in.
- print_results - Print the incoming data; defaults to True.
The optional arguments datetimelist and formatinglist convert datetime strings to datetime objects:
- datetimelist - a list containing all the keys whose values are datetime strings.
- formatinglist - a list containing the date format for each element of datetimelist.
If all dates share the same format, supply only a single entry in formatinglist.
For more info about the format check this page.
Example: Subscribe to a topic named reddit_posts:
url="ws://localhost:3030"
sql="select * from reddit_posts"
data.SubscribeHandler(url, "client001", sql)
{'content': 'reddit_posts', 'correlationId': 1, 'type': 'SUCCESS'}
{'key': '{"subreddit_id":"t5_2s5fm"}',
'offset': 0,
'partition': 4,
'timestamp': 1553779813333,
'topic': 'reddit_posts',
...
...
...
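A sketch that stores the incoming records in a file instead of printing them, passing the arguments positionally in the order of the signature above:
# write=True, filename="reddit_posts.json", print_results=False
data.SubscribeHandler("ws://localhost:3030", "client002",
                      "select * from reddit_posts",
                      True, "reddit_posts.json", False)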
Publishing¶
A client can PUBLISH messages to a topic. The current version supports only string/json. In the future, we will add support for AVRO.
data.Publish(<url>, <client id>, <topic>, <key>, <value>)
Example: Publish to the example_topic topic:
url="ws://localhost:3030"
data.Publish(url, "client001", "example_topic", "foo", "bar")
{'content': None, 'correlationId': 1, 'type': 'SUCCESS'}
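Since values are sent as strings, a JSON payload can be serialized first; a small sketch:
import json

# Serialize a dictionary to a JSON string before publishing
payload = json.dumps({"device": "sensor-1", "temperature": 21.5})
data.Publish("ws://localhost:3030", "client001", "example_topic", "key1", payload)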
Unsubscribe¶
A client can UNSUBSCRIBE from a topic:
data.Unscribe(<url>, <client id>, <topic>)
Example: Unsubscribe from a topic named example_topic:
url="ws://localhost:3030"
data.Unscribe(url, "client001", "example_topic")
{'content': None, 'correlationId': 1, 'type': 'SUCCESS'}
Commit¶
A client can COMMIT the (topic, partition) offsets:
data.Commit(<url>, <client id>, <topic>, <partition>, <offset>)
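For example, a sketch committing offset 100 of partition 0 of example_topic, with the parameters passed as strings as in the other WebSocket calls (an assumption based on those examples):
# Offsets, like partitions, are passed as strings here
data.Commit("ws://localhost:3030", "client001", "example_topic", "0", "100")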
ACLs Handler¶
Create/Update ACLs¶
data.SetACL(<resourceType>,<resourceName>,<principal>,<permissionType>,<host>, <operation>)
- resourceType - string, required.
- resourceName - string, required.
- principal - string, required.
- permissionType - string, required (either Allow or Deny).
- host - string, required.
- operation - string, required.
Example:
data.SetACL("Topic","transactions","GROUPA:UserA","Allow","*","Read")
The following operations are valid (depending on the Kafka version):
| Resource Type | Operation |
|---|---|
| Topic | Read |
| Topic | Write |
| Topic | Describe |
| Topic | Delete |
| Topic | DescribeConfigs |
| Topic | AlterConfigs |
| Topic | All |
| Group | Read |
| Group | Describe |
| Group | All |
| Cluster | Create |
| Cluster | ClusterAction |
| Cluster | DescribeConfigs |
| Cluster | AlterConfigs |
| Cluster | IdempotentWrite |
| Cluster | Alter |
| Cluster | Describe |
| Cluster | All |
| TransactionalId | Describe |
| TransactionalId | Write |
| TransactionalId | All |
Get ACLs¶
data.GetACLs()
Returns a list of dictionaries.
Example: Get all ACLs:
data.GetACLs()
[{'permissionType': 'ALLOW',
'resourceType': 'TOPIC',
'host': '*',
'resourceName': '*',
'operation': 'ALL',
'principal': 'user:lenses'},
{'permissionType': 'ALLOW',
'resourceType': 'TOPIC',
'host': '*',
'resourceName': 'transactions',
'operation': 'READ',
'principal': 'GROUPA:UserA'}]
Quota Handler¶
Get Quotas¶
data.GetQuotas()
Returns a list of dictionaries.
Example: Get all Quotas:
data.GetQuotas()
[{'url': '/api/quotas/clients/client001',
'child': None,
'entityName': 'client001',
'properties': {'request_percentage': '75',
'consumer_byte_rate': '200000',
'producer_byte_rate': '100000'},
'entityType': 'CLIENT',
'isAuthorized': True}
]
Create/Update Quota - All Users¶
data.SetQuotasAllUsers(config)
- config - The quota constraints.
Example: Set a quota for all users:
config={
"producer_byte_rate" : "100000",
"consumer_byte_rate" : "200000",
"request_percentage" : "75"
}
data.SetQuotasAllUsers(config)
Create/Update Quota - User all Clients¶
data.SetQuotaUserAllClients(user, config)
Where:
- user - The user to set the quota for.
- config - The quota constraints.
Example: Set a user quota for all clients:
config={
"producer_byte_rate" : "100000",
"consumer_byte_rate" : "200000",
"request_percentage" : "75"
}
data.SetQuotaUserAllClients("user01", config)
Create/Update a Quota - User/Client pair¶
data.SetQuotaUserClient(user, clientid, config)
Where:
- user - The user to set the quota for.
- clientid - The client id to set the quota for.
- config - The quota constraints.
Example: Set a user quota for a specific client:
config={
"producer_byte_rate" : "100000",
"consumer_byte_rate" : "200000",
"request_percentage" : "75"
}
data.SetQuotaUserClient("user02", "client002", config)
Create/Update a Quota - User¶
data.SetQuotaUser(user, config)
Where:
- user - The user to set the quota for.
- config - The quota constraints.
Example: Set a quota for a user:
config={
"producer_byte_rate" : "100000",
"consumer_byte_rate" : "200000",
"request_percentage" : "75"
}
data.SetQuotaUser("user03", config)
Create/Update Quota - All Clients¶
data.SetQuotaAllClient(config)
Where:
- config - The quota constraints.
Example: Set a quota for all clients:
config={
"producer_byte_rate" : "100000",
"consumer_byte_rate" : "200000",
"request_percentage" : "75"
}
data.SetQuotaAllClient(config)
Create/Update a Quota - Client¶
data.SetQuotaClient(clientid, config)
- clientid - The client id to set the quota for.
- config - The quota constraints.
Example: Set a quota for a client:
config={
"producer_byte_rate" : "100000",
"consumer_byte_rate" : "200000",
"request_percentage" : "75"
}
data.SetQuotaClient("client003", config)
Delete Quota - All Users¶
data.DeleteQutaAllUsers(config)
- config - The list of quota settings to delete.
Example: Delete the global producer_byte_rate and consumer_byte_rate quotas:
config=["producer_byte_rate","consumer_byte_rate"]
data.DeleteQutaAllUsers(config)
Delete Quota - User all Clients¶
data.DeleteQuotaUserAllClients(user, config)
- user - The user to delete the quota for.
- config - The list of quota settings to delete.
Example: Delete a user quota for all clients:
config=["producer_byte_rate","consumer_byte_rate"]
data.DeleteQuotaUserAllClients("user01", config)
Delete a Quota - User/Client pair¶
data.DeleteQuotaUserClient(user, clientid, config)
- user - The user to delete the quota for.
- clientid - The client id to delete the quota for.
- config - The list of quota settings to delete.
Example: Delete a user quota for a specific client:
config=["producer_byte_rate","consumer_byte_rate"]
data.DeleteQuotaUserClient("user02", "client002", config)
Delete a Quota - User¶
data.DeleteQuotaUser(user, config)
- user - The user to delete the quota for.
- config - The list of quota settings to delete.
Example: Delete a user quota:
config=["producer_byte_rate","consumer_byte_rate"]
data.DeleteQuotaUser("user03", config)
Delete Quota - All Clients¶
data.DeleteQuotaAllClients(config)
- config - The list of quota settings to delete.
Example: Delete a quota for all clients:
config=["producer_byte_rate","consumer_byte_rate"]
data.DeleteQuotaAllClients(config)
Delete a Quota - Client¶
data.DeleteQuotaClient(clientid, config)
- clientid - The client id to delete the quota for.
- config - The list of quota settings to delete.
Example: Delete a quota for a client:
config=["producer_byte_rate","consumer_byte_rate"]
data.DeleteQuotaClient("client003", config)
Troubleshooting¶
For troubleshooting or additional information, please join our Slack channel.