Leaving Behind Your Legacy Systems
On a recent visit to a major railroad and logistics company that is one of our customers, I was at lunch with the chief IT architect discussing our ongoing project when he said with a twinkle in his eye: “I hope to leave behind my legacy here when I retire in a few years, by having helped the company leave behind its legacy systems and move to a modern data and application architecture that will allow us to compete aggressively into the future.”
The company’s homegrown “train operating system,” which controlled every aspect of its railroad operations (engines, cars, scheduling, loading, signaling, billing, and more), was built over more than 30 years on mainframe technologies. While it worked well, rapid changes in the industry, the high cost of mainframe systems, and the growing shortage of skills needed to operate them led the company to embark on a major migration and modernization effort: moving from the legacy platform to a modern data and application architecture combining elements of client-server, hybrid on-premise and cloud, IoT, mobile, SOA, and REST architectures, a transition expected to take several years. Data virtualization underpinned this effort by providing layers of abstraction that ensured smooth migration, performance, and IT agility, with minimal disruption to business operations and customers.
According to the Software Engineering Institute, companies have three approaches to dealing with legacy systems: maintenance, modernization, and replacement. Fearing the risk of disrupting their critical systems, many companies held on to their mainframes or used minimally intrusive techniques such as screen-scraping or terminal emulation to achieve modernization. However, now that there is increasing pressure for business agility even in traditional industries such as banking, insurance, transportation, and healthcare, legacy migration projects have become commonplace.
Data Virtualization is a key technology used to provide a broad abstraction layer over disparate data sources, combine information semantically into canonical business objects, publish those objects as data services in multiple formats for multiple consumers, and deliver information on demand while avoiding large-scale data replication. Perhaps riding on the popularity of Data Virtualization (DV), erstwhile mainframe connector products have been re-released as Mainframe (Data) Virtualization (MV). Though the names sound similar, there are significant differences:
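To make the idea concrete, here is a minimal, purely illustrative sketch of that abstraction layer in Python. Every class and field name is hypothetical (the post names no specific product API): two stand-in sources, one legacy and one modern, are federated at query time into a single canonical “shipment” object, with no data copied into a central store.

```python
# Hypothetical sketch of a data virtualization layer: it federates two
# disparate sources into one canonical business object on demand, rather
# than replicating data. All names here are illustrative, not a real API.

class MainframeOrders:
    """Stand-in for a legacy source (e.g. reached via a SQL gateway)."""
    def fetch(self):
        return [{"order_id": 1, "car": "UP-4411"},
                {"order_id": 2, "car": "UP-9305"}]

class CloudTracking:
    """Stand-in for a modern source (e.g. a REST/IoT location feed)."""
    def fetch(self):
        return {"UP-4411": "Denver", "UP-9305": "Omaha"}

class VirtualLayer:
    """Combines sources semantically and publishes a canonical view."""
    def __init__(self, orders, tracking):
        self.orders = orders
        self.tracking = tracking

    def shipments(self):
        # Join legacy orders with live locations at query time,
        # so consumers see one object instead of two systems.
        locations = self.tracking.fetch()
        return [{"order_id": o["order_id"],
                 "car": o["car"],
                 "location": locations.get(o["car"], "unknown")}
                for o in self.orders.fetch()]

layer = VirtualLayer(MainframeOrders(), CloudTracking())
print(layer.shipments())
```

A consumer of `shipments()` never learns which fields came from the mainframe and which came from the cloud feed, which is precisely the decoupling the abstraction layer provides.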
- Both technologies allow a mainframe system to be accessed through SQL or XML/Web Service interfaces. MV does this stand-alone for each legacy system. DV, on the other hand, allows portions of the legacy functionality to move to other systems in a modular fashion, while combining the results in the DV abstraction layer presented to consumers. Thus DV both enables modernization and facilitates modular migration. Not only does this improve business functionality, but it can significantly cut the high cost of mainframe hardware, software licensing, and the skills needed to operate large mainframe systems.
- The investment in MV pays dividends only as long as the underlying legacy systems remain in use; it is closely tied to mainframe continuity. DV has a broader role: abstracting a whole range of systems, including legacy platforms, structured databases and data warehouses, enterprise applications, SaaS and cloud applications, and unstructured data. DV therefore delivers continuing payback not only for legacy systems but also for migrating and modernizing on-premise enterprise applications to SaaS or the cloud, integrating systems after mergers and acquisitions, and so on.
- As with most connector products, MV has the focused purpose of making a legacy system accessible through other protocols such as SQL; it relies on the underlying legacy system for processing, security, data consistency, and so on. DV, by contrast, adds substantial new functionality on top of the systems it abstracts: data integration and transformation across multiple sources, intelligent caching for performance, logical data models and data lineage, security and service-level authentication at the virtual layer, and much more.
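The modular migration described in the first point above can be sketched in a few lines. Again, all names are hypothetical: the point is only that the published data service keeps a stable shape while one backend adapter is swapped for another, so consumers of the virtual layer are unaffected by the migration step.

```python
# Hypothetical sketch of modular migration behind a virtual layer:
# consumers query one stable service while an individual backend is
# swapped from a legacy adapter to a modern one. Names are illustrative.

class LegacyBillingAdapter:
    """Stand-in for the mainframe billing module."""
    def invoices(self):
        return [{"invoice": "A-100", "amount": 5000}]

class CloudBillingAdapter:
    """Replacement backend: same canonical shape, different system."""
    def invoices(self):
        return [{"invoice": "A-100", "amount": 5000}]

class BillingService:
    """The stable, published data service in the virtual layer."""
    def __init__(self, adapter):
        self._adapter = adapter

    def swap_backend(self, adapter):
        # Migration step: replace one module; the interface is unchanged.
        self._adapter = adapter

    def invoices(self):
        return self._adapter.invoices()

svc = BillingService(LegacyBillingAdapter())
before = svc.invoices()
svc.swap_backend(CloudBillingAdapter())   # consumers are unaffected
assert svc.invoices() == before
```

Repeating this swap module by module is what lets the migration proceed over several years without disrupting the consumers of the data services, as in the railroad example that opens the post.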
Thus, while both technologies allow legacy systems to be accessed more easily, data virtualization also integrates legacy data with other sources and enables agility across best-of-breed applications in support of changing business needs.
The pressure to leave behind legacy systems in many, though not all, use cases will continue as companies find more agile and lower-cost ways to store and process data, deliver business applications, and engage customers in the digital age. Data virtualization provides the easiest, most powerful, and lowest-risk way to make that happen for your organization.
Suresh Chandrasekaran