Stefan Bernbo, Compuverde
Commentary

Virtualization: The Smart Choice for Financial Institution Data Centers

Virtualization can improve network performance while keeping costs related to infrastructure at a minimum.

In the average data center, servers are the most expensive component to maintain. For a financial institution operating on a strict budget, server costs quickly become an issue. However, there is a way to avoid unnecessary server-related costs: virtualization. In fact, IDC predicts that between 2003 and 2020, virtualization will have saved over $106 billion in Asia alone, not to mention the environmental benefits.

In virtualization, software is used to recreate the functions of physical hardware, thereby reducing hardware-associated costs and providing a new level of flexibility. However, the exploding popularity of virtualization requires massive amounts of storage, drawing attention to the need for flexibility, efficiency and quality management. Financial institutions wary of big investments in new technologies are interested in virtualization but not yet sure it's for them. Yet virtualization is becoming an option that's hard to ignore, for several reasons.

The Growth of Virtualization

Virtualization is a significant trend for several reasons, but mostly because of its cost savings. Financial institutions operate on very tight budgets, so they will use any method available to reduce costs. One of the primary advantages of virtualization is the opportunity to make better use of a data center's hardware. Most of the time, physical servers in a data center sit idle. By running virtual servers on that hardware, a financial institution can use it more efficiently and reduce operating costs.
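To make the utilization argument concrete, the back-of-the-envelope consolidation math can be sketched in a few lines of code. The server counts and utilization figures below are illustrative assumptions, not measurements from any particular data center:

```python
import math

# Illustrative assumptions: 40 lightly loaded physical servers,
# each averaging 10% utilization. Their workloads are consolidated
# as VMs onto hosts run at a target of 70% average utilization,
# leaving headroom for peaks and failover.
physical_servers = 40
avg_utilization = 0.10
target_utilization = 0.70

total_demand = physical_servers * avg_utilization           # 4.0 server-equivalents
hosts_needed = math.ceil(total_demand / target_utilization)

print(f"Workload demand: {total_demand:.1f} server-equivalents")
print(f"Virtualization hosts needed: {hosts_needed}")       # 6 hosts instead of 40
```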

The other primary benefit of virtualization is that it allows for much greater flexibility. It is easier to deploy infrastructure in the form of virtual machines instead of physical machines. Say an organization wants to upgrade its hardware. The data center administrator can simply move the virtual server to the newer, more powerful hardware, getting better performance for less money. Before virtual servers existed, administrators had to install the new server, reinstall the software and migrate all the data from the old server, which is not an easy process. Migrating a virtual machine is much easier and much less costly than moving a physical one, as the sketch below illustrates.
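As a minimal sketch of how simple such a move can be, the following Python uses the libvirt API to live-migrate a running VM between two KVM hosts. The host URIs and the domain name "trading-db" are illustrative assumptions, and a production migration would also need shared storage and network configuration in place:

```python
import libvirt

# Connect to the source and destination hypervisors (hypothetical hosts).
src = libvirt.open("qemu+ssh://admin@old-host/system")
dst = libvirt.open("qemu+ssh://admin@new-host/system")

# Look up the running VM by name ("trading-db" is an illustrative name).
domain = src.lookupByName("trading-db")

# Live migration: the guest keeps running while its memory is copied over.
# VIR_MIGRATE_LIVE avoids downtime; VIR_MIGRATE_PERSIST_DEST registers
# the VM permanently on the new host.
flags = libvirt.VIR_MIGRATE_LIVE | libvirt.VIR_MIGRATE_PERSIST_DEST
domain.migrate(dst, flags, None, None, 0)

src.close()
dst.close()
```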

Virtualization’s New Popularity

Not every data center needs to virtualize its servers and systems, but data centers with significant numbers of servers (20 or more) are starting to think about converting their servers into virtual machines. These organizations can noticeably reduce their operating costs and improve their overall network flexibility. Furthermore, virtualization makes servers much easier to manage. Administering a large number of physical servers can be very demanding on data center staff, which in turn drives up operational costs. Virtualization eases data center management by letting administrators run the same number of servers on fewer physical machines.

Storage for the Future

Virtualization is meant to reduce strain on data center infrastructure and costs while increasing flexibility; however, without the proper hardware, virtualization can create more issues than it fixes.

The problem is a direct result of the popularity of VMs. Early implementations of virtual machines used the local storage in the physical server, which made it impossible for administrators to migrate a virtual machine from one physical server to another. By using shared storage, either NAS or a SAN, administrators were able to fix this issue. The success of this method paved the way for deploying an even greater number of virtual machines, because everything was located in the shared storage. Over time, this environment has matured into today's scenario, where all the servers and VMs are connected to the same storage.

The only challenge? Data congestion.

A single point of entry can turn into a single point of failure very rapidly. With all the data forced through a single gateway, data flows quickly slow to a crawl during peak hours of operation. Considering the rate at which VMs and data are growing, current architectures just don't cut it. The infrastructure needs to keep up with the rate of data growth.

Looking to the Pioneers

The original adopters of virtualized servers, such as telcos and service providers, have dealt with this issue by taking steps to reduce its impact, but the problem isn't limited to them. Financial institutions are also starting to virtualize their data centers and will encounter this challenge as well.

However, hope still exists. Financial organizations looking to reap the benefits of virtualization while avoiding data congestion are making sure that their architectures scale at the same rate as their VM usage. The most popular method is removing the single point of entry. Today's NAS and SAN storage solutions typically have only one gateway that controls the flow of data, which leads to congestion during periods of high network use. Organizations should instead look for solutions with multiple data entry points that distribute the load across all servers, as the sketch below shows. This approach allows for optimal performance with minimal lag time.
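A minimal sketch of the idea, assuming a storage cluster where every node exposes its own gateway: clients hash each object key to pick an entry point, so requests spread evenly instead of funneling through one controller. The node addresses and the hashing scheme are illustrative assumptions, not any specific vendor's API:

```python
import hashlib

# Every storage node is also an entry point (illustrative addresses).
GATEWAYS = [
    "10.0.0.11:9000",
    "10.0.0.12:9000",
    "10.0.0.13:9000",
    "10.0.0.14:9000",
]

def pick_gateway(object_key: str) -> str:
    """Deterministically spread requests across all entry points.

    Hashing the key means every client independently agrees on the
    same gateway for the same object, with no central coordinator,
    and no single gateway sees all the traffic.
    """
    digest = hashlib.sha256(object_key.encode()).digest()
    index = int.from_bytes(digest[:8], "big") % len(GATEWAYS)
    return GATEWAYS[index]

# Example: VM disk images land on different gateways.
for key in ("vm-001.qcow2", "vm-002.qcow2", "vm-003.qcow2"):
    print(key, "->", pick_gateway(key))
```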

While this approach represents the most straightforward fix, the next generation of storage architecture suggests an alternative as well.

Storage and Computing Combined

The idea of running VMs inside the storage node itself (or running the storage inside the VM hosts), turning it into a compute node, is intriguing and promising for storage architects.

With this approach, the whole architecture is flattened out. When an organization uses shared storage in a SAN, the VM hosts typically sit on top of the storage layer, which behaves as one giant storage system with only one point of entry. To fix the data congestion issues that result from that approach, some businesses are starting to move away from the typical two-layer architecture to a single layer that runs virtual machines and storage on the same machines.
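As a rough model of the flattened layout this describes, the sketch below treats each machine as both a storage node and a VM host, so a VM's disk reads can be served locally or fetched peer-to-peer rather than crossing a single gateway. All class, node, and object names here are hypothetical, for illustration only:

```python
from dataclasses import dataclass, field

@dataclass
class HyperconvergedNode:
    """One machine acting as both storage node and VM host (hypothetical model)."""
    name: str
    local_objects: set[str] = field(default_factory=set)   # data stored on this node
    vms: list[str] = field(default_factory=list)           # VMs running on this node

    def read(self, obj: str, cluster: list["HyperconvergedNode"]) -> str:
        # Local hit: no network hop and no shared gateway involved.
        if obj in self.local_objects:
            return f"{obj}: served locally on {self.name}"
        # Remote hit: fetch peer-to-peer from whichever node holds the data.
        for peer in cluster:
            if obj in peer.local_objects:
                return f"{obj}: fetched from peer {peer.name}"
        return f"{obj}: not found"

# Three machines, each carrying both roles; no separate storage tier.
cluster = [
    HyperconvergedNode("node-a", {"vm-001.qcow2"}, ["vm-001"]),
    HyperconvergedNode("node-b", {"vm-002.qcow2"}, ["vm-002"]),
    HyperconvergedNode("node-c", {"vm-003.qcow2"}, ["vm-003"]),
]

print(cluster[0].read("vm-001.qcow2", cluster))  # local read
print(cluster[0].read("vm-002.qcow2", cluster))  # peer-to-peer read
```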

Looking Forward

The rate at which virtualization is growing shows no signs of slowing down. Furthermore, increasing numbers of companies will adopt virtualization and encounter the same challenges described above. By following the example of early adopters, organizations can improve network performance while keeping costs related to infrastructure at a minimum.

Stefan Bernbo is CEO and founder of Compuverde.
