
This article is part of a series designed to demonstrate the setup and use of the Confluent Platform. In this series, our goal is to build an end-to-end data processing pipeline with Confluent. Disclaimer: while knowledge of Kafka internals is not required to follow this series, it can sometimes help clarify parts of the articles. In the previous articles, we set up two topics, one to publish the input data coming from PostgreSQL and another one to push the data from…
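For reference, here is a minimal sketch of how two such topics could be created programmatically with the confluent-kafka Python client. The broker address, topic names, partition counts, and replication factor below are illustrative assumptions, not the values used in the series.

```python
from confluent_kafka.admin import AdminClient, NewTopic

# Assumed local broker address; adjust to your Confluent Platform setup.
admin = AdminClient({"bootstrap.servers": "localhost:9092"})

# Hypothetical topic names standing in for the two topics from the previous articles.
topics = [
    NewTopic("postgres-input", num_partitions=1, replication_factor=1),
    NewTopic("pipeline-output", num_partitions=1, replication_factor=1),
]

# create_topics() returns a dict of topic name -> Future.
for topic, future in admin.create_topics(topics).items():
    try:
        future.result()  # Raises if topic creation failed.
        print(f"Created topic {topic}")
    except Exception as exc:
        print(f"Failed to create topic {topic}: {exc}")
```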