HowTo
In this section we have collected step-by-step instructions to get you started with Apache Flink® in your own projects. If there's an example you'd find useful that isn't covered here, please open an issue and let us know.
- Create Apache Flink® integrations
- Create an Apache Kafka®-based Apache Flink® table
  - Create an Apache Kafka-based Apache Flink table with Aiven Console
  - Example: Define a Flink table using the standard connector over a topic in JSON format
  - Example: Define a Flink table using the standard connector over a topic in Avro format
  - Example: Define a Flink table using the upsert connector over a topic in Avro format
- Create a PostgreSQL®-based Apache Flink® table
- Create an OpenSearch®-based Apache Flink® table
- Create an Apache Flink® job
- Define OpenSearch® timestamp data in SQL pipeline
- Create a real-time alerting solution - Aiven Console
  - Architecture overview
  - Requirements
  - Set up Aiven services
  - Set up sample data
  - Create a pipeline for basic filtering
  - Create a pipeline with windowing
  - Create a Flink SQL job using PostgreSQL® thresholds
  - Create an aggregated data pipeline with Apache Kafka® and PostgreSQL®
  - Replicate the filtered stream of data to OpenSearch® for further analysis and data visualization
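
To give a flavor of what the Apache Kafka®-based table guides above walk through, here is a minimal Flink SQL sketch of a table backed by a Kafka topic in JSON format using the standard connector. The topic name, column schema, and bootstrap server address are placeholder assumptions; the linked guides cover the Aiven-specific integration and connection details.

```sql
-- Minimal sketch (placeholder names): a Flink table reading a
-- hypothetical `orders` topic in JSON format via the standard
-- Kafka connector.
CREATE TABLE orders_src (
    order_id INT,
    customer STRING,
    amount DOUBLE,
    order_time TIMESTAMP(3)
) WITH (
    'connector' = 'kafka',
    'topic' = 'orders',
    'properties.bootstrap.servers' = 'kafka-host:9092',
    'format' = 'json',
    'scan.startup.mode' = 'earliest-offset'
);
```

Once such a table is defined, it can be queried like any other Flink table, for example `SELECT customer, amount FROM orders_src WHERE amount > 100`.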