Confluent says it will release a series of updates to its data streaming platform every quarter.
This quarter, the updates consist of a number of new features built on Apache Kafka, the open source distributed event streaming platform. They include Schema Linking, new controls to shrink cluster capacity on demand and new fully managed Kafka connectors.
The new capabilities “can make a huge difference in creating a data mesh versus a data mess,” Rosanova told The New Stack. Schema Linking, for instance, gives organizations the freedom to develop without the risk of damaging production.
“Dev and prod generally don’t talk to one another — because production environments are so sensitive, you don’t want to give everyone access,” Rosanova said. With Schema Linking, which is built on top of Cluster Linking, schemas can be shared and kept in sync in real time across teams, organizations and environments, such as hybrid and multicloud deployments. “This is far more scalable and efficient compared to workarounds I’ve seen where people are literally sharing schemas through spreadsheets,” Rosanova said.
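That manual workaround is easy to picture. The sketch below uses the confluent-kafka Python client to copy a subject’s latest schema from a development Schema Registry into a production one by hand — the registry URLs, credentials and subject name are placeholders — which is exactly the sync Schema Linking performs continuously so teams don’t have to.

```python
# A minimal sketch of manually syncing one schema from dev to prod,
# the chore Schema Linking automates. Endpoints, credentials and the
# subject name below are hypothetical placeholders.
from confluent_kafka.schema_registry import Schema, SchemaRegistryClient

dev = SchemaRegistryClient({
    "url": "https://dev-sr.example.com",           # placeholder dev registry
    "basic.auth.user.info": "DEV_KEY:DEV_SECRET",  # placeholder credentials
})
prod = SchemaRegistryClient({
    "url": "https://prod-sr.example.com",          # placeholder prod registry
    "basic.auth.user.info": "PROD_KEY:PROD_SECRET",
})

subject = "orders-value"  # example subject

# Fetch the latest registered schema for the subject from dev.
latest = dev.get_latest_version(subject)

# Re-register the same schema string under the same subject in prod.
schema = Schema(latest.schema.schema_str, latest.schema.schema_type)
schema_id = prod.register_schema(subject, schema)
print(f"Synced {subject} to prod with schema id {schema_id}")
```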
Much is said about scaling up, but dynamically scaling capacity back down to save resources when traffic subsides is often left unaddressed. As Rosanova noted, organizations maintain high availability by beefing up their capacity to handle spikes in traffic and avoid downtime.
“We added a simple, self-service way to scale back capacity so customers no longer have to worry about wasting resources on capacity they don’t use. These clusters also automatically rebalance your data every time you scale up or down,” Rosanova said. “This solves the really hard challenge of rebalancing workloads while they are running. It’s like changing the tires on a moving car. Now you can optimize data placement without disrupting the real-time flow of information.”
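Capacity for Confluent Cloud Dedicated clusters is counted in CKUs, so shrinking comes down to requesting a lower CKU count. As a hedged sketch — the endpoint path, payload shape and IDs below are assumptions based on Confluent’s public Cluster Management (cmk/v2) API docs and should be verified against them — a self-service resize might look like this:

```python
# A hedged sketch of shrinking a Dedicated Confluent Cloud cluster by
# lowering its CKU count over the Cluster Management REST API.
# Endpoint, payload fields and IDs are assumptions; verify against
# Confluent's API documentation before use.
import requests

API_KEY = "CLOUD_API_KEY"        # placeholder cloud API key
API_SECRET = "CLOUD_API_SECRET"  # placeholder cloud API secret
CLUSTER_ID = "lkc-abc123"        # hypothetical cluster id
ENV_ID = "env-xyz789"            # hypothetical environment id

resp = requests.patch(
    f"https://api.confluent.cloud/cmk/v2/clusters/{CLUSTER_ID}",
    auth=(API_KEY, API_SECRET),
    json={
        "spec": {
            "config": {"kind": "Dedicated", "cku": 2},  # shrink to 2 CKUs
            "environment": {"id": ENV_ID},
        }
    },
)
resp.raise_for_status()
# The resize (and the automatic rebalance Rosanova describes) proceeds
# while the cluster keeps serving traffic.
print(resp.json().get("status"))
```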
New Connectors
Confluent’s new release now features over 50 fully managed connectors for Confluent Cloud. The idea behind Confluent’s Apache Kafka connectors is to make it easy to stream data between Kafka and the data sources and sinks that organizations select.
In the last six months, Confluent more than doubled the number of managed connectors it offers, Rosanova said. “Once one system is connected, two more need to be added, and so on,” he said. “We are bringing real-time data to traditional, non-real-time places to quickly modernize companies’ applications. This is a significant need that continues to grow.”
Kafka has emerged as a leading data streaming platform and Confluent continues to evolve with it, Rosanova said.
“We are improving what businesses can accomplish with Kafka through these new capabilities. Real-time data streaming continues to play an important role in the services and experiences that set organizations apart,” Rosanova said. “We want to make real-time data streaming within reach for any organization and are continuing to build a platform that is cloud native, complete, and available everywhere.”
Confluent’s connector list now includes:
Data warehouse connectors: Snowflake, Google BigQuery, Azure Synapse Analytics, Amazon Redshift.
Database connectors: MongoDB Atlas, PostgreSQL, MySQL, Microsoft SQL Server, Azure Cosmos DB, Amazon DynamoDB, Oracle Database, Redis, Google BigTable.
Data lake connectors: Amazon S3, Google Cloud Storage, Azure Blob Storage, Azure Data Lake Storage Gen 2, Databricks Delta Lake.
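Provisioning any of these fully managed connectors amounts to submitting a name and a config. The sketch below posts a Snowflake sink connector to Confluent Cloud’s Connect REST API; the endpoint path and config keys are assumptions drawn from Confluent’s docs, and every ID and credential is a placeholder, so treat it as illustrative rather than authoritative.

```python
# A sketch of creating a fully managed connector (Snowflake sink) via
# Confluent Cloud's Connect REST API. Endpoint and config key names are
# assumptions based on Confluent's public docs; all values are placeholders.
import requests

ENV_ID, LKC_ID = "env-xyz789", "lkc-abc123"   # hypothetical ids
AUTH = ("CLOUD_API_KEY", "CLOUD_API_SECRET")  # placeholder credentials

connector = {
    "name": "orders-to-snowflake",
    "config": {
        "connector.class": "SnowflakeSink",
        "topics": "orders",
        "input.data.format": "AVRO",
        "snowflake.url.name": "https://myaccount.snowflakecomputing.com",
        "snowflake.user.name": "CONFLUENT_LOADER",
        "snowflake.private.key": "...",  # elided; supply your own key
        "snowflake.database.name": "ANALYTICS",
        "snowflake.schema.name": "PUBLIC",
        "tasks.max": "1",
        "kafka.api.key": "KAFKA_API_KEY",        # placeholder
        "kafka.api.secret": "KAFKA_API_SECRET",  # placeholder
    },
}

resp = requests.post(
    f"https://api.confluent.cloud/connect/v1/environments/{ENV_ID}"
    f"/clusters/{LKC_ID}/connectors",
    auth=AUTH,
    json=connector,
)
resp.raise_for_status()
print("Created connector:", resp.json().get("name"))
```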
Additionally, Confluent has improved access to popular monitoring tools: the platform now offers integrations with Datadog and Prometheus. “With a few clicks, operators have deeper, end-to-end visibility into Confluent Cloud within the monitoring tools they already use,” the company said in a blog post.
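The Prometheus integration works by scraping Confluent Cloud’s Metrics API, which can serve samples in Prometheus exposition format. As a quick, hedged sanity check — the endpoint and parameter names are assumptions based on Confluent’s Metrics API docs, and the cluster ID and credentials are placeholders — you can pull the same data directly:

```python
# A sketch of fetching Confluent Cloud metrics in Prometheus exposition
# format from the Metrics API export endpoint (the same data the new
# Prometheus integration scrapes). Endpoint and parameter names are
# assumptions; the id and credentials are placeholders.
import requests

resp = requests.get(
    "https://api.telemetry.confluent.cloud/v2/metrics/cloud/export",
    params={"resource.kafka.id": "lkc-abc123"},   # hypothetical cluster id
    auth=("CLOUD_API_KEY", "CLOUD_API_SECRET"),   # placeholder credentials
)
resp.raise_for_status()
print(resp.text[:500])  # plain-text Prometheus samples, ready to scrape
```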