avn service flink beta

Here you’ll find the full list of commands for avn service flink.

Manage an Apache Flink® table

avn service flink table create

Creates a new Aiven for Apache Flink® table.

Parameter                 Information
========================  ===========
service_name              The name of the service
integration_id            The ID of the integration used to locate the source/sink table/topic; you can retrieve it with the integration-list command
--table-name              The Flink table name
--kafka-topic             The Aiven for Apache Kafka® topic to use as source/sink (Apache Kafka® integrations only)
--kafka-connector-type    The Flink connector type for Apache Kafka®; possible values are upsert-kafka and kafka
--kafka-key-format        The Apache Kafka® message key format; possible values are avro, avro-confluent, debezium-avro-confluent, debezium-json, and json
--kafka-key-fields        The list of fields to use as the message key
--kafka-value-format      The Apache Kafka® message value format; possible values are avro, avro-confluent, debezium-avro-confluent, debezium-json, and json
--kafka-startup-mode      The Apache Kafka® consumer starting offset; possible values are earliest-offset (start from the beginning of the topic) and latest-offset (start from the last message)
--jdbc-table              The Aiven for PostgreSQL® table name to use as source/sink (PostgreSQL® integrations only)
--partitioned-by          A column from the table schema to use as the Flink table partition definition
--schema-sql              The SQL schema definition of the Flink table
--like-options            Creates the Flink table based on the definition of another existing Flink table

Example: Create a Flink table named KAlert with:

  • alert as source Apache Kafka topic

  • kafka as connector type

  • json as value and key data format

  • the field node as key

  • earliest-offset as starting offset

  • cpu FLOAT, node INT, cpu_percent INT, occurred_at TIMESTAMP_LTZ(3) as SQL schema

  • ab8dd446-c46e-4979-b6c0-1aad932440c9 as integration ID

  • flink-devportal-demo as service name

avn service flink table create flink-devportal-demo ab8dd446-c46e-4979-b6c0-1aad932440c9  \
  --table-name KAlert                                                                     \
  --kafka-topic alert                                                                     \
  --kafka-connector-type kafka                                                            \
  --kafka-key-format json                                                                 \
  --kafka-key-fields node                                                                 \
  --kafka-value-format json                                                               \
  --kafka-startup-mode earliest-offset                                                    \
  --schema-sql "cpu FLOAT, node INT, cpu_percent INT, occurred_at TIMESTAMP_LTZ(3)"

avn service flink table delete

Deletes an existing Aiven for Apache Flink® table.

Parameter     Information
============  ===========
service_name  The name of the service
table_id      The ID of the table to delete

Example: Delete the Flink table with ID 8b8ac2fe-b6eb-46bc-b327-fb4b84d27276 belonging to the Aiven for Flink service flink-devportal-demo.

avn service flink table delete flink-devportal-demo 8b8ac2fe-b6eb-46bc-b327-fb4b84d27276

avn service flink table get

Retrieves the definition of an existing Aiven for Apache Flink® table.

Parameter     Information
============  ===========
service_name  The name of the service
table_id      The ID of the table to retrieve

Example: Retrieve the definition of the Flink table with ID 8b8ac2fe-b6eb-46bc-b327-fb4b84d27276 belonging to the Aiven for Flink service flink-devportal-demo.

avn service flink table get flink-devportal-demo 8b8ac2fe-b6eb-46bc-b327-fb4b84d27276

avn service flink table list

Lists all the Aiven for Apache Flink® tables in a selected service.

Parameter     Information
============  ===========
service_name  The name of the service

Example: List all the Flink tables available in the Aiven for Flink service flink-devportal-demo.

avn service flink table list flink-devportal-demo

An example of avn service flink table list output:

INTEGRATION_ID                        TABLE_ID                              TABLE_NAME
====================================  ====================================  ==========
ab8dd446-c46e-4979-b6c0-1aad932440c9  acb601d7-2000-4076-ae58-563aa7d9ab5a  KAlert
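When scripting against the CLI, the aligned-column output above can be parsed with standard tools. A minimal sketch, assuming the exact three-column layout shown (the sample output is captured in a shell variable here so the parsing can be demonstrated without a live service):

```shell
# Sample `avn service flink table list` output, stored as a string for illustration.
list_output='INTEGRATION_ID                        TABLE_ID                              TABLE_NAME
====================================  ====================================  ==========
ab8dd446-c46e-4979-b6c0-1aad932440c9  acb601d7-2000-4076-ae58-563aa7d9ab5a  KAlert'

# Skip the two header lines and print the second column (TABLE_ID).
table_id=$(printf '%s\n' "$list_output" | awk 'NR > 2 { print $2 }')
echo "$table_id"
```

In a real script you would capture the live command output instead, for example `list_output=$(avn service flink table list flink-devportal-demo)`.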

Manage a Flink job

avn service flink job create

Creates a new Aiven for Apache Flink® job.

Parameter     Information
============  ===========
service_name  The name of the service
job_name      The name of the Flink job
--table-ids   The list of Flink table IDs to use as source/sink; you can retrieve them with the list command
--statement   The Flink job SQL statement

Example: Create a Flink job named JobExample with:

  • KCpuIn (with id cac53785-d1b5-4856-90c8-7cbcc3efb2b6) and KAlert (with id 54c2f4e6-a446-4d62-8dc9-2b81179c6f43) as source/sink tables

  • INSERT INTO KAlert SELECT * FROM KCpuIn WHERE cpu_percent > 70 as SQL statement

  • flink-devportal-demo as service name

avn service flink job create flink-devportal-demo JobExample                        \
  --table-ids cac53785-d1b5-4856-90c8-7cbcc3efb2b6 54c2f4e6-a446-4d62-8dc9-2b81179c6f43 \
  --statement "INSERT INTO KAlert SELECT * FROM KCpuIn WHERE cpu_percent > 70"

avn service flink job cancel

Cancels an existing Aiven for Apache Flink® job.

Parameter     Information
============  ===========
service_name  The name of the service
job_id        The ID of the job to cancel

Example: Cancel the Flink job with ID 8b8ac2fe-b6eb-46bc-b327-fb4b84d27276 belonging to the Aiven for Flink service flink-devportal-demo.

avn service flink job cancel flink-devportal-demo 8b8ac2fe-b6eb-46bc-b327-fb4b84d27276

avn service flink job get

Retrieves the definition of an existing Aiven for Apache Flink® job.

Parameter     Information
============  ===========
service_name  The name of the service
job_id        The ID of the job to retrieve

Example: Retrieve the definition of the Flink job with ID 8b8ac2fe-b6eb-46bc-b327-fb4b84d27276 belonging to the Aiven for Flink service flink-devportal-demo.

avn service flink job get flink-devportal-demo 8b8ac2fe-b6eb-46bc-b327-fb4b84d27276

An example of avn service flink job get output:

JID                               NAME        STATE    START-TIME     END-TIME  DURATION  ISSTOPPABLE  MAXPARALLELISM
================================  ==========  =======  =============  ========  ========  ===========  ==============
b63c78c70033e00afa84de9029257e31  JobExample  RUNNING  1633336792083  -1        423503    false        96
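A job's state can be extracted from this output in the same way as the table list, by skipping the two header lines and picking the STATE column. A minimal sketch, assuming the exact column layout shown above (sample output held in a variable for illustration):

```shell
# Sample `avn service flink job get` output, stored as a string for illustration.
job_get_output='JID                               NAME        STATE    START-TIME     END-TIME  DURATION  ISSTOPPABLE  MAXPARALLELISM
================================  ==========  =======  =============  ========  ========  ===========  ==============
b63c78c70033e00afa84de9029257e31  JobExample  RUNNING  1633336792083  -1        423503    false        96'

# STATE is the third whitespace-separated column; skip the two header lines.
job_state=$(printf '%s\n' "$job_get_output" | awk 'NR > 2 { print $3 }')
echo "$job_state"
```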

avn service flink job list

Lists all the Aiven for Apache Flink® jobs in a selected service.

Parameter     Information
============  ===========
service_name  The name of the service

Example: List all the Flink jobs available in the Aiven for Flink service flink-devportal-demo.

avn service flink job list flink-devportal-demo

An example of avn service flink job list output:

ID                                STATUS
================================  =======
b63c78c70033e00afa84de9029257e31  RUNNING
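To act on jobs in a particular state (for example, to cancel every running job), the ID and STATUS columns can be filtered together. A minimal sketch, assuming the two-column layout shown above (sample output held in a variable for illustration):

```shell
# Sample `avn service flink job list` output, stored as a string for illustration.
job_list_output='ID                                STATUS
================================  =======
b63c78c70033e00afa84de9029257e31  RUNNING'

# Print the ID of every job whose STATUS is RUNNING, skipping the two header lines.
running_jobs=$(printf '%s\n' "$job_list_output" | awk 'NR > 2 && $2 == "RUNNING" { print $1 }')
echo "$running_jobs"
```

Each resulting ID could then be passed to avn service flink job cancel in a loop.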

Apache, Apache Kafka, Kafka, Apache Flink, Flink, Apache Cassandra, and Cassandra are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries. M3, M3 Aggregator, M3 Coordinator, OpenSearch, PostgreSQL, MySQL, InfluxDB, Grafana, Terraform, and Kubernetes are trademarks and property of their respective owners. *Redis is a trademark of Redis Ltd. Any rights therein are reserved to Redis Ltd. Any use by Aiven is for referential purposes only and does not indicate any sponsorship, endorsement or affiliation between Redis and Aiven. All product and service names used in this website are for identification purposes only and do not imply endorsement.

Copyright © 2022, Aiven Team | Last updated: February 2022