IBM Buys Data Streaming Platform Confluent
Data contracts are foundational to properly designed, well-behaved data pipelines. Kafka and Flink provide the key capabilities, as sketched below.
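In practice, a data contract on a Kafka topic is usually expressed as a schema that producers must satisfy before a record is accepted. The following is a minimal sketch, assuming a local broker and Confluent Schema Registry and the confluent-kafka Python client; the "orders" topic and the Order schema are illustrative, not taken from the article.

```python
# Minimal sketch: enforcing a data contract on a Kafka topic with an Avro schema.
# Assumes a local Kafka broker (localhost:9092) and Confluent Schema Registry
# (localhost:8081); the "orders" topic and Order schema are illustrative.
from confluent_kafka import SerializingProducer
from confluent_kafka.schema_registry import SchemaRegistryClient
from confluent_kafka.schema_registry.avro import AvroSerializer

# The data contract: producers and consumers agree on this schema, and the
# registry can reject incompatible changes to it.
order_schema = """
{
  "type": "record",
  "name": "Order",
  "fields": [
    {"name": "order_id", "type": "string"},
    {"name": "amount",   "type": "double"}
  ]
}
"""

registry = SchemaRegistryClient({"url": "http://localhost:8081"})
serializer = AvroSerializer(registry, order_schema)

producer = SerializingProducer({
    "bootstrap.servers": "localhost:9092",
    "value.serializer": serializer,
})

# Records that violate the contract fail at serialization time, before they
# ever reach the topic.
producer.produce(topic="orders", value={"order_id": "o-123", "amount": 42.5})
producer.flush()
```

A Flink job consuming the same topic can typically deserialize against the same registered schema, which is what makes the contract enforceable end to end rather than a convention documented off to the side.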
Hadoop and big data platforms were originally known for scale, not speed. But the arrival of high-performance compute engines like Spark and streaming engines has cleared the way for bringing batch and real-time processing together. But what happens when ...
A monthly overview of things you need to know as an architect or aspiring architect. This article ...
IBM acquires Confluent for USD 11 billion to boost real-time data streaming and strengthen its AI and hybrid cloud strategy, with the deal expected to close by mid-2026.
What are the challenges you faced in building a fully managed, cloud-native Kafka service? How does Confluent engineering address them? Will LaForest: When we began Confluent’s journey to create a fully managed service, there was an eagerness to get ...
The Apache Kafka phenomenon reached a new high today when Confluent announced a $50 million investment from the venture capital firm Sequoia. The investment signals renewed confidence that Kafka is fast becoming a must-have platform for real-time ...
Event-driven architectures are wonderful. But Kafka was never intended to be a database, and using it as a database won’t solve your problem. It’s a tale as old as time. An enterprise is struggling against the performance and scalability limitations of ...
When Confluent launched a cloud service in 2017, it was trying to reduce some of the complexity of running a Kafka streaming application. Today, it introduced a free tier for that cloud service. The company hopes to expand its market beyond ...
The move reflects a rapidly intensifying race among technology giants to strengthen the data foundations required for generative and agentic AI.