Management Strategies

Taking Center Stage

Banks explore the promise and potential peril of centrally managed data storage

Attaching storage directly to computers or servers will soon seem as quaint as squirreling money under the mattress.

Just as one's hard-earned dollars belong inside a bank, valuable data belongs on centrally managed storage networks, say industry proponents. Storage Area Networks, or SANs, aim to fill that role by providing speedy access to well-protected, low-maintenance enterprise storage.

Banks, facing increased storage needs from Web banking and CRM initiatives, are quickly becoming converts to the technology. Some 41% of depository institutions were evaluating or implementing SANs as of mid-2000, although just 4% were actually using them, according to research firm IDC. "As a rough guess, half of those that were 'evaluating' are moving into 'using' this year," said Karen O'Brien, an IDC analyst.

Another reason more banks are likely to jump on the SAN bandwagon: an eventual reduction in storage costs. M&T Bank, Buffalo, N.Y., plans to have 100 of its approximately 500 servers accessing data over a SAN within 12 to 16 months. The motive: for every dollar spent on its SAN, M&T will save 80 cents in current storage expense by eliminating the need for disk drives and backup devices for new servers, according to Ron Strozyk, assistant vice president of systems at $21.7 billion M&T. Although these savings defray only part of the SAN's installation costs, Strozyk believes the investment is worthwhile since it "goes towards establishing an infrastructure for a platform that we feel is needed. We'll be able to manage the distributed data and put practices, policies and procedures in place similar to what we have today on our mainframe side."

Over time, these policies and procedures will further reduce costs by moderating users' demand for storage. A file naming scheme, for example, indicates when a file can be moved to less expensive storage media. "People will tend to keep files or data that they will never, ever look at again," said Strozyk. "They keep storage people tied up because they just don't clean up." With these policies in place across the organization, the bank can hold down its incremental storage expense to a relatively modest growth rate of 30% per year.
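Even a "modest" 30% annual rate compounds quickly, which is why capping growth matters so much to storage budgets. The following back-of-the-envelope sketch (the 10 TB baseline is an assumed figure, not from the article) shows how a capped rate projects forward:

```python
# Illustrative sketch: compound growth of storage demand at a capped
# 30% annual rate. The 10 TB starting point is an assumption chosen
# purely for illustration.
def projected_storage_tb(base_tb: float, annual_growth: float, years: int) -> float:
    """Return projected capacity after compounding annual growth."""
    return base_tb * (1 + annual_growth) ** years

# Even held to 30% per year, 10 TB more than doubles in three years
# (10 * 1.3**3, roughly 22 TB).
print(projected_storage_tb(10, 0.30, 3))
```

Uncapped growth of, say, 60% per year would instead quadruple the same baseline over three years, so the policy payoff compounds as well.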

BUILDING A NETWORK

Because high-bandwidth fiber-optic links have yet to become inexpensive and ubiquitous in Buffalo, M&T's initial foray into SANs will involve only one storage facility. "The telecommunication pieces aren't really robust enough to include servers that are in other buildings," said Strozyk. Still, the bank can install separate SANs at each location and connect them as increased telecom bandwidth becomes available.

Chicago-based Northern Trust already has such a storage network in place, with a SAN that spans three different locations. The primary and secondary data centers, located several miles apart, maintain mirror images of all production data. If the primary data center were to fail completely, the bank would be able to cut over to the secondary center in a matter of hours, said Karl Huf, vice president of mainframe services at $36.8 billion Northern Trust. Less drastic failures would cause no discernible deterioration in service.

Northern Trust first considered SANs during the conversion of a large imaging application from a mainframe system to Microsoft Windows NT as part of a Y2K initiative. The bank initially planned to use standard networking protocols to access optical jukeboxes from the Windows system, said Huf, but "we did some basic numbers and saw that it was going to take way too long. At the same time we had fall into our laps this high-bandwidth Fibre Channel SAN connectivity."

The bank built its SAN out of hardware components from Hitachi Data Systems and QLogic. Connectivity software from Tantia Technologies joined the mainframe and Windows NT platforms over the SAN.

Northern Trust eventually plans to move its open systems-based (Windows NT and Unix) production applications onto the SAN. "Open systems are growing at an incredible rate," said Huf, noting that their storage rate of growth exceeds that of mainframe applications. Placing both open systems and mainframe applications on a common platform will enable the bank to manage its storage needs more effectively. "Why would we want to have large storage management groups in both Unix-NT and mainframe?" said Huf.

THE OUTSOURCING ALTERNATIVE

Indeed, as the management issues surrounding SANs become more complex, banks are likely to take a much closer look at Storage Service Providers, or SSPs, third-party data specialists who will install and oversee a data repository for a fee.

But there are still some concerns among banks regarding the use of SSPs. Northern Trust, for one, is not likely to use them anytime soon. "I'm not comfortable, for your mainstream, lifeline applications, taking your assets, your data, and handing them over to someone else," said Huf.

"We've been leading-edge in multiple things over the years and we've learned an awful lot. We're just used to doing it ourselves," he added.

Although many banks are loath to relinquish control over their customers' data, the economics of outsourcing may eventually prove too compelling, especially to smaller institutions, which typically don't have the same depth of talent at their disposal. Configuring a storage area network is more difficult than setting up an Ethernet network, "where you just go through a magazine and buy some hubs," said Steve O'Brian, senior product manager at McData, a Broomfield, Colo., storage vendor.

SAN cost is also a factor in the SSP decision. While large financial institutions have the resources to purchase and operate their own SANs, smaller banks may balk at the estimated $3 million to $4 million capital outlay required to implement a storage system, and instead turn to an SSP.

"When your FX revenues are $2 million per day, a $4 million price tag for this is no big deal, especially when you can depreciate it over five years," said Mike Thongpaithoon, director of client services at Politzer & Haney, a Boston-based treasury management software provider. "The entry cost for outsourcing is a lot cheaper because you don't have to do that initial capital outlay. You might pay a premium in your monthly service charge, but your time to market is fairly quick because the infrastructure is already built and the people are there to service you."
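The buy-versus-outsource trade-off Thongpaithoon describes can be framed as simple arithmetic: a purchased SAN's capital outlay, depreciated over five years, becomes a monthly cost that an SSP's service charge must be weighed against. The sketch below uses the article's $4 million figure but a purely hypothetical SSP fee:

```python
# Hypothetical buy-vs-outsource sketch. The $4M outlay comes from the
# article's estimate; the SSP monthly fee is an assumed illustrative
# number, not a quoted market rate.
def in_house_monthly_cost(capital_outlay: float, years: int = 5) -> float:
    """Straight-line depreciation of a SAN purchase, as a monthly cost."""
    return capital_outlay / (years * 12)

capital = 4_000_000        # estimated SAN capital outlay
ssp_monthly_fee = 90_000   # hypothetical SSP service charge

monthly_in_house = in_house_monthly_cost(capital)   # about $66,667/month
premium = ssp_monthly_fee - monthly_in_house        # what outsourcing adds
print(f"In-house: ${monthly_in_house:,.0f}/mo; SSP premium: ${premium:,.0f}/mo")
```

On these assumed numbers, the SSP route carries a monthly premium but spares the bank the up-front $4 million and the staffing burden, which is exactly the trade smaller institutions are making.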

Out of the more than 60 banks using P&H's Web-based products, 32 use an outsourced environment, said Thongpaithoon, a former manager of technology operations at State Street Bank. "Those are typically regional and smaller community banks that need to compete with the big boys. But I still have some big customers, banks in the billion-dollar-plus range, outsourcing because they just don't want to deal with the hassles."

SSPs of many stripes, for their part, see the potential for a thriving business in shouldering other companies' storage burdens. SSPs employ a utility model in which they charge a monthly service fee and a per-unit usage rate. Recognizing the parallels, actual utilities are getting into the business.

"Telecoms are huge users of SANs," said Chuck Foley, chief technology officer at Inrange Technologies, a Lumberton, N.J., networking solutions company. "Now they're starting to be storage service providers."

Also, Houston-based Enron created a commodity market to trade terabytes of storage, allowing buyers to negotiate for capacity from suppliers such as StorageNetworks, using Enron's fiber-optic networks for service delivery.

"People are going to try to create markets for anything they feel is high in demand," said Thongpaithoon, citing evidence of ample supply gearing up to meet the rising demand for outsourced storage.

Several major SSPs are currently building or breaking ground on two million square feet of new data center space in the Boston area alone, he said.

Along with storage, firms of this type typically handle optical networking, Web hosting and application hosting, a wide swath of outsourced services that has spawned the moniker "xSP" (service providers for just about anything).

But for banks that want to manage their own storage, a galaxy of companies provide integrated SAN solutions drawing upon hardware from vendors such as Compaq, EMC, Hitachi Data Systems and McData.

"SAN providers deliver a turnkey solution from maybe ten to twenty different vendors, all qualified, certified and supported through that provider," said Marc Oswald, director of technology alliances at Brocade Communications Systems, San Jose, Calif., and head of the Roadmap Committee of the Fibre Channel Industry Association. He describes the resulting "fabric" of storage and switches as a homogeneous SAN.

Although a common routing protocol can interconnect fabrics from different SAN providers, the inability to seamlessly join fabrics might create an integration hassle for companies involved in mergers and acquisitions.

"Those are things that are difficult to do today, because not all products are created equal," said Oswald. "Yes, you can connect them, but you have to be aware of the capabilities and services of the various components and look at the big picture."

"In the end it's the applications that really determine how you bring together SANs that were acquired through different sources," he said.

------------------------------------------------------------------------------

COUNTING GRAINS OF SAN

Fiber-optic technology makes centrally managed storage feasible on a large scale. Organizations with data requirements hitting the terabyte mark, such as banks, are prime candidates for Storage Area Networks, or SANs.

SANs help pay for themselves by eliminating the need for disk drives and backup devices for new servers. Some 41% of depository institutions were evaluating or implementing SANs as of mid-2000, although just 4% were actually using them, according to International Data Corp.

Essentially, a SAN links storage devices the way that a local area network (LAN) links workstations and servers. The two networks-the LAN and the SAN-are bridged through SAN switches.

In the old storage model, a workstation would request information from a server, which would then locate that information on a directly attached storage device. In the SAN model, the server passes requests for information through a SAN switch, which can access every storage device on the SAN.
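The difference between the two models can be pictured with a toy sketch (this is an illustrative abstraction, not a real Fibre Channel protocol; all class and key names are invented): with direct attachment a server can only see its own device, while a switch gives it a path to every device on the fabric.

```python
# Toy model of the two storage topologies described above. Purely
# illustrative: the classes and keys here are invented for the sketch.
class StorageDevice:
    def __init__(self, name):
        self.name = name
        self.blocks = {}        # key -> stored data

class SanSwitch:
    """Routes a server's request to any storage device on the SAN."""
    def __init__(self, devices):
        self.devices = devices

    def read(self, key):
        for device in self.devices:   # every device is reachable
            if key in device.blocks:
                return device.blocks[key]
        return None

array_a = StorageDevice("array-a")
array_b = StorageDevice("array-b")
array_b.blocks["acct-1001"] = b"balance-record"

# A server directly attached only to array-a could never see this data;
# routed through the SAN switch, the same request succeeds.
switch = SanSwitch([array_a, array_b])
print(switch.read("acct-1001"))   # b'balance-record'
```

The switch is the pivot point: adding a new storage array means registering it with the fabric, not re-cabling every server.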

One benefit of a SAN is speed. Although going through a SAN switch would have been unacceptably slow under prior networking schemes, the spread of fiber-optic bandwidth allows data to travel at speeds measured in gigabits per second, even over distances of 10 kilometers or more.

"There are smarter products out there today that actually allow you to go over 100 kilometers without any intermediary components," said Brocade's Marc Oswald, a board member of the Fibre Channel Industry Association.

They're also reliable. Centrally managed data has fewer potential points of failure, simplifies backup and facilitates business continuity through "mirroring" and "distributed clustering" techniques.

With mirroring, two storage devices in separate locations maintain identical contents so that one can instantly take over if the other encounters difficulties. Distributed clustering creates similar redundancy for servers and switches.
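The mirroring idea can be sketched in a few lines (again an illustrative abstraction with invented names, not a real replication protocol): every write lands on both copies, so a read can fail over instantly when one side goes down.

```python
# Minimal mirroring sketch: writes go to both copies so their contents
# stay identical, and reads fail over when the primary is unavailable.
# Invented for illustration; real mirroring happens at the block level.
class MirroredStore:
    def __init__(self):
        self.primary = {}
        self.secondary = {}
        self.primary_up = True

    def write(self, key, value):
        # Apply the write to both copies to keep them identical.
        self.primary[key] = value
        self.secondary[key] = value

    def read(self, key):
        # Serve from whichever copy is currently available.
        source = self.primary if self.primary_up else self.secondary
        return source[key]

store = MirroredStore()
store.write("txn-42", "posted")
store.primary_up = False          # simulate a primary-site failure
print(store.read("txn-42"))       # still "posted", served from the mirror
```

Distributed clustering applies the same spare-copy principle one layer up, to the servers and switches themselves rather than the stored data.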

And they're efficient. Instead of having large amounts of residual disk space scattered throughout an enterprise, SANs can consolidate those stray gigabytes to useful effect.

Finally, they're scalable. Although expanding storage capacity for an enterprise still takes lots of planning, many of the ongoing maintenance headaches are greatly reduced.
