Today, the market offers a wide range of IaaS options for data storage, with several public clouds vying for the attention of enterprise customers.
Still, on-premises systems remain an option, and several companies prefer to maintain their own private clouds. These companies also continue to run many SaaS applications and to draw on other data sources such as web applications, APIs, PDF content, and Excel spreadsheets.
In this scenario, many organizations maintain not just one or two repositories but several in parallel, an arrangement that can be flexible and cost-effective.
However, this also has its drawbacks.
When we need to unify data for holistic analysis, complex queries, or data science applications, the data integration process often introduces an additional layer of complexity.
This is because most data integration tools must copy all of the data to an intermediate repository, which consumes both time and resources.
Data virtualization, on the other hand, leaves the data in its source systems and simply exposes an integrated view to consumers. As users analyze and explore, the data virtualization tool fetches the data in real time from the connected systems.
This approach eliminates the additional ETL processes that would otherwise be built into the pipeline solely to serve consumption, reducing the workload and speeding up access to information.
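The difference can be sketched in a few lines of code. In the sketch below, a "virtual view" resolves a query against live sources at the moment it is asked, rather than copying the data into a staging store first. All function and source names here are hypothetical illustrations, not Denodo's actual API.

```python
# Minimal sketch: a virtual view fetches from the source systems at
# query time instead of copying data into an intermediate repository.
# The sources and functions below are illustrative stand-ins.

def fetch_from_crm():
    # Stand-in for a live query against a CRM database.
    return [{"customer_id": 1, "name": "Acme"},
            {"customer_id": 2, "name": "Globex"}]

def fetch_from_billing():
    # Stand-in for a live query against a billing system.
    return [{"customer_id": 1, "total": 120.0},
            {"customer_id": 2, "total": 75.5}]

def virtual_customer_view():
    """Join the two sources on demand; nothing is persisted."""
    billing = {row["customer_id"]: row["total"]
               for row in fetch_from_billing()}
    return [{"name": row["name"], "total": billing.get(row["customer_id"])}
            for row in fetch_from_crm()]
```

Each call to `virtual_customer_view()` reflects the current state of the sources, which is the essential contrast with a batch ETL copy that goes stale between loads.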
Denodo is the leader in data virtualization, and its solutions bring all the ease of this modern data fabric approach to enterprises.
How it Works
Denodo data virtualization works on a simple three-step principle: Connect-Combine-Consume.
Connections can be made to a wide variety of data sources, whether structured or unstructured, including databases, big data systems, streaming services, cloud repositories, NoSQL sources, or flat files.
Specialized connectors are employed to access data repositories or applications and perform the necessary conversions and normalizations so that all base views are presented as relational views.
In the combination step, data can be combined regardless of its original format (relational database, NoSQL, Hadoop, etc.).
Finally, the consumption layer provides a unified exit point. From there, users can consume the data through Denodo’s own data catalog, which effectively functions as a data marketplace, or through reports, dashboards, mobile apps, web apps, and other interfaces.
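The three steps above can be sketched as a tiny pipeline: connectors normalize heterogeneous sources into relational-style rows (connect), the rows are joined regardless of origin (combine), and a single function is exposed to consumers (consume). This is a minimal illustration under assumed data, not Denodo code.

```python
# Sketch of the connect-combine-consume flow with two hypothetical
# sources: a flat file (CSV) and a NoSQL-style JSON document.
import csv
import io
import json

CSV_SOURCE = "sku,stock\nA-1,40\nB-2,0\n"
JSON_SOURCE = '[{"sku": "A-1", "price": 9.99}, {"sku": "B-2", "price": 4.5}]'

def connect_csv(text):
    # Connect: specialized connector turns a flat file into relational rows.
    return list(csv.DictReader(io.StringIO(text)))

def connect_json(text):
    # Connect: another connector normalizes a JSON document the same way.
    return json.loads(text)

def combine(stock_rows, price_rows):
    # Combine: join on a shared key, independent of each source's format.
    prices = {row["sku"]: row["price"] for row in price_rows}
    return [{"sku": row["sku"], "stock": int(row["stock"]),
             "price": prices[row["sku"]]} for row in stock_rows]

def consume():
    # Consume: one unified exit point for reports, dashboards, and apps.
    return combine(connect_csv(CSV_SOURCE), connect_json(JSON_SOURCE))
```

Note that the consumer only ever sees relational rows; the fact that one source was a CSV file and the other a JSON document is hidden behind the connectors.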
The Power of Data Virtualization
Because data virtualization handles data integration, data stewards can stay focused on maintaining the environments and on ingestion, without having to invest time in intricate integration processes, in building new layers within the data warehouse or data lake, or in generating intermediate repositories.
This reduces the workload of the IT team, enabling it to focus on other demands, and provides more agility to business teams, who can easily consume data through data virtualization.
In addition, by providing a unified data consumption layer, data virtualization facilitates the maintenance of company data governance policies.
The Denodo Platform enables:
- Data-lineage queries, from source to availability
- The export of all metadata to data governance tools (IBM IGC, Collibra, etc.)
- The seamless integration of data models
Finally, the platform itself acts as a data catalog directly for the end consumers, enabling them to understand what data is available and where they can find it.
With data virtualization, users can also create their own personalized visualizations intuitively through drag-and-drop functionality, save them, and consume them on demand.
DataLakers Technology is an official partner of Denodo, a company named a Leader in the 2020 Gartner Magic Quadrant for Data Integration Tools for its data virtualization technology.
We have a specialized, certified team ready to provide our customers with all the practicality of Denodo solutions for data management and governance, delivering cost reduction and agility through data virtualization.
Interested? Send an email to email@example.com.
- Data Virtualization: Easy Data Integration for Complex Pipelines - September 16, 2021