4 Key Takeaways from the Gartner Magic Quadrant for Data Integration Tools
The Magic Quadrant (MQ) is an influential series of reports published annually by the research and advisory firm Gartner, Inc. The latest edition of the MQ for Data Integration Tools was recently released (you can read it here), and it identifies several key trends that are shaping the data management market. In this post I will briefly review some of them from Denodo’s perspective:
1. Cloud Integration in Hybrid (on-cloud, on-prem) and Multi-Cloud Scenarios Continues to Gain in Importance.
Or, as Gartner puts it: “Another major shift is customers asking for hybrid deployment (cloud and on-premises), as in 2017, but now with the expectation of multi-cloud and cloud-to-cloud integration”.
More and more companies today have data distributed across on-premises data centers and one or more cloud providers. This means they need a unified way of querying, managing and governing the data in all of these environments.
While this may sound like a restatement of the classic data integration problem (reconciling dispersed data silos), traditional ETL/data warehouse (ETL/DW) techniques show serious limitations in these scenarios. Reasons include higher latencies that make replicating large amounts of data impractical or even infeasible, stronger security implications, and the egress restrictions some cloud providers impose on moving data out of their platforms.
Data virtualization (DV) is a key technology for overcoming these limitations. It was specifically designed to provide unified querying, security and metadata management in highly distributed data architectures while minimizing data replication. Advanced DV systems like Denodo also include automatic optimization techniques that minimize data transfer between locations to maximize query performance. Unlike those behind traditional ETL/DW tools, the principles that guided the design of DV technology are directly applicable in hybrid and multi-cloud integration scenarios.
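To make the optimization idea concrete, here is a minimal conceptual sketch (in Python, and emphatically not Denodo’s actual optimizer or API): instead of shipping all raw rows from a remote source and processing them centrally, the filter and aggregation are "pushed down" to the source, so only a small, reduced result set crosses the network. All names and data below are hypothetical.

```python
# Conceptual sketch of query pushdown: apply the filter and the
# per-group aggregation at the source, so only aggregated rows
# (not the raw data) are transferred between locations.

def push_down(query, source_rows):
    """Evaluate filter + group-by-sum at the source; return reduced rows."""
    filtered = [r for r in source_rows if query["filter"](r)]
    groups = {}
    for r in filtered:
        key = r[query["group_by"]]
        groups[key] = groups.get(key, 0) + r[query["sum"]]
    # Only one row per group leaves the source.
    return [{"key": k, "total": v} for k, v in sorted(groups.items())]

# Hypothetical remote (cloud) source holding many raw rows.
cloud_sales = [
    {"region": "EU", "amount": 100, "year": 2018},
    {"region": "EU", "amount": 50,  "year": 2017},
    {"region": "US", "amount": 70,  "year": 2018},
]

query = {"filter": lambda r: r["year"] == 2018,
         "group_by": "region", "sum": "amount"}

result = push_down(query, cloud_sales)
# Two aggregated rows are transferred instead of the full raw table.
```

The same principle, applied automatically by a real DV engine across joins and much larger tables, is what keeps federated queries performant when sources sit in different clouds.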
2. Cloud Strategy Should Be About Data Location Independence, Not About Choosing a Cloud Provider.
This is one of the most thought-provoking comments in the report. In Gartner’s words: “Cloud is a concept, not a place”. The report warns that cloud providers “… give a false sense of location for data…”. Before talking about cloud providers, companies “… referred to cloud as the concept that different parts of your IT infrastructure could be designed to run anywhere and resources could be connected and disconnected from each other on demand. This is the real cloud.”
Data virtualization is designed to provide this type of ‘location independence’: data can be queried, managed and governed centrally, independently of where it actually resides. It also decouples data consumers from the complexities of the IT infrastructure: systems can be replaced, providers can be changed and data can be relocated without affecting users and without breaking consuming applications.
3. Self-Service Data Integration is Here to Stay, But Beware of the Risks.
Gartner highlights that “Data integration is everywhere and is everyone’s responsibility”, and notes that offering different tools for the different roles in the data integration process is mandatory for modern platforms. But the report also warns about the “regressive behavior” of citizen integrators creating even more data silos, and asks companies to replace “the belief that anyone can integrate” with the question of “if everyone should integrate”.
In previous articles, I have discussed extensively why we think data virtualization is a key component of successful self-service initiatives. In a nutshell, data virtualization allows data architects to expose data to each type of consumer using the formats, naming conventions and schemas that best suit their needs, without needing to create new data repositories (e.g. data marts). This decisively simplifies the job of end users and citizen integrators, who can see all data as if it were in a single place, in a format they can easily use. In addition, all data accesses can be secured, governed and audited from a single point, independently of the particular mechanisms each data source or data consumer supports for those purposes.
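The idea of exposing the same underlying data under consumer-friendly names, without copying it into a new repository, can be sketched as follows (a conceptual Python illustration, not Denodo’s product API; the source and column names are hypothetical). The “view” is just a mapping evaluated at query time, so the underlying data is never duplicated:

```python
# Conceptual sketch of a "virtual view": it renames and reshapes
# columns from an underlying source on the fly, at read time,
# instead of materializing a new data mart.

def make_virtual_view(source_fetch, column_mapping):
    """Return a callable view that projects/renames columns lazily."""
    def view():
        for row in source_fetch():
            yield {alias: row[col] for alias, col in column_mapping.items()}
    return view

# Hypothetical underlying source with cryptic internal column names.
def crm_fetch():
    return [{"cust_nm": "Acme",   "rev_usd": 1200},
            {"cust_nm": "Globex", "rev_usd": 900}]

# A marketing-friendly view over the same data, defined without copying it.
marketing_view = make_virtual_view(
    crm_fetch, {"customer": "cust_nm", "revenue": "rev_usd"})

rows = list(marketing_view())
```

Because the view is only a definition, replacing or relocating the underlying source requires changing the mapping, not the consumers — which is the decoupling described above.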
4. Data Virtualization Enters the Mainstream and Denodo Keeps Gaining Momentum.
According to Gartner, as much as 40% of organizations use non-batch integration styles such as data virtualization, and the number keeps growing. Data virtualization also plays a key role in several of the key architectural patterns identified by Gartner such as the Logical Data Warehouse.
Gartner also highlights the “strong mind share, momentum and customer support” of Denodo as one of its key strengths, citing that more than 95% of the customer inquiries Gartner receives about data virtualization involve Denodo. This is also reflected in Denodo’s position in the quadrant: it moved up to the “Challengers” quadrant, the biggest jump of any participant with respect to the previous report.
In summary, at Denodo we think the latest edition of the Gartner Magic Quadrant for Data Integration corroborates what we are seeing in the market: the shift to highly distributed data architectures and business-centric data management is deeply transforming the way that organizations are integrating their data. Data virtualization is the only data integration technology designed from the beginning with a distributed data architecture in mind. That explains why it is already playing a very significant role in this transformation and why we think its role will only expand in the coming years.
You can read the Gartner Magic Quadrant report here.
Gartner: Magic Quadrant for Data Integration Tools, Mark A. Beyer, Eric Thoo, Ehtisham Zaidi, 19 July 2018.
Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.