Monday, 25 February 2013
Guest blog - Cloud and mainframes: a perfect couple
This week, I’m publishing a second blog entry from Marcel den Hartog, Principal Product Marketing for CA Technologies Mainframe Solutions.
I know what you’re thinking: another mainframe veteran wanting to ride the cloud hype and make sure we don’t forget about his favourite platform. But bear with me while I explain why there is a lot of logic behind this...
I am old enough to remember the time when many of us in IT were surprised by the rise of the distributed environment. And, like many others, I did my fair share of work building applications on both mainframes and distributed servers. I was, however, lucky enough to work in an environment with little or no bias to any platform; our management simply asked us to pick the best platform for any given application.
Soon after building the first distributed applications, we found that we needed the data that resided on the mainframe. Not a big surprise, because most of the mission-critical data was stored on the mainframe AND we had a requirement that almost all of that data had to be up-to-date across the company, always!
This was not easy, especially in the early phases. We did not have ODBC, JDBC, MQSeries, Web services, or anything else that allowed us to send data back and forth between distributed applications and the mainframe. We had to rely on HLLAPI, a very low-level way of transmitting data using the protocols that came with 3270 emulation products such as IRMA boards or Attachmate. And for the data that did not require continuous updates, we relied on extracts that were pumped up and down to a couple of distributed servers every night, where the data was then processed to make it usable for different applications.
The “fit-for-purpose” IT infrastructure we had back then was not ideal because it required a lot of work behind the scenes, mainly to make sure the data used by all these applications was up-to-date and “transformed” in the right way. But the technology became better and better, and we slowly moved from text-based Clipper applications that used HLLAPI to communicate with CICS transactions to more modern, visually attractive programming languages that used protocols like ODBC and JDBC. But since not all data on the mainframe or on distributed systems could be accessed through these protocols (VSAM files and IMS databases, for example), for many applications we had to rely on data transport and transformation routines (and the servers to store and manage the data) for many years. Even today, companies pump terabytes of data up and down between their various platforms for many different purposes.
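To give a feel for how much simpler life became once protocols like JDBC could reach mainframe data directly, here is a minimal sketch of a distributed Java program querying DB2 on z/OS over JDBC. The host name, port, location name, credentials and CUSTOMER table are all hypothetical placeholders, and the example assumes the IBM DB2 JDBC driver is available on the classpath.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class MainframeQuery {
    public static void main(String[] args) throws Exception {
        // Load the IBM DB2 type-4 JDBC driver; with JDBC 4.0+ drivers this
        // explicit step is usually no longer required.
        Class.forName("com.ibm.db2.jcc.DB2Driver");

        // Hypothetical connection details: host, port, location name and
        // credentials are placeholders, not a real system.
        String url = "jdbc:db2://mainframe.example.com:446/DSNLOC01";

        try (Connection conn = DriverManager.getConnection(url, "APPUSER", "secret");
             Statement stmt = conn.createStatement();
             // CUSTOMER is a placeholder table; substitute your own schema.
             ResultSet rs = stmt.executeQuery(
                     "SELECT CUST_ID, CUST_NAME FROM CUSTOMER FETCH FIRST 10 ROWS ONLY")) {
            while (rs.next()) {
                System.out.println(rs.getInt("CUST_ID") + " " + rs.getString("CUST_NAME"));
            }
        }
    }
}
```

Compare that with screen-scraping a 3270 session: the distributed application talks SQL straight to the mainframe data, with no nightly extracts or transformation servers in between.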
The next phase in IT was the switch from home-grown applications to bought packages like ERP, CRM, and other third-party (standardized) applications. And again, not only did these applications require data from the mainframe and from our distributed applications, we also had to extract data FROM them to update the other applications our companies relied on. After all, every company runs on a mixture of applications on a mixture of platforms. And to complicate things even more, we do not buy all our applications from the same vendor, so we have to exchange data between commercial applications as well. More complexity, more work, more money and more stuff to manage.
Welcome to the world of cloud, where again we will run (commercial) applications on a different platform, and where we are (again) asked to make sure that our companies’ most valuable IT asset (our data) is up-to-date across all the different platforms: mainframe, cloud, and distributed. After all, we don’t want our customer data to be different across the different applications; the fact that we have to duplicate it across all our different environments is already bad enough.
The efforts to create a pool of IT resources that enables us to build the “fit-for-purpose” IT infrastructure we are all aiming for have, until now, resulted in a very complex IT infrastructure that requires a lot of management. This is where the latest IBM mainframe technology comes in. The “pool of resources” can be found externally in the form of cloud services, as well as internally in blades that are configured to run virtualized systems on the mainframe. Adding the mainframe to this pool has big advantages. If you run Linux on the mainframe’s specialty engines (IFLs), you need a lot less infrastructure to let the Linux servers communicate with each other and with your existing z/OS applications, so there is no need for cabling and other external network infrastructure. This not only greatly reduces the amount (and added complexity) of the hardware needed, it also means less power consumption and more flexibility. At the beginning of this article we agreed that many applications need access to mainframe data. By bringing the applications that need this data closer TO the mainframe, and by reducing the number of infrastructure devices needed to connect it all, we make things simpler, faster, more manageable, and more reliable.
To make the mainframe part of an internal pool of IT resources that is flexible enough to accommodate the business needs of our management, there is one last step we must take: we need the right software to orchestrate, provision, and decommission servers on demand. A number of vendors already offer software that helps you provision both internal and external (cloud) resources in a matter of minutes; simple drag-and-drop tools that provision servers and configure the network and connection settings make it possible to build complex environments very quickly. But none of them supports the mainframe (yet). In the near future, however, the mainframe will also be supported, and this will open up a new world of possibilities for everybody who already owns one. Soon, the mainframe will not just be used as part of an internal or hybrid cloud environment, it will act as one of the components you have in your pool of resources.
Yes, a stand-alone mainframe can offer many of the advantages that “cloud” offers: on-demand capacity, virtualization, flexibility and reliability. But not until you can use the mainframe as a component in a cloud infrastructure, bringing the applications that need its data closer to the mainframe, will the two really be a perfect couple.