
Q&A: MetLife Bank CIO on Outsourced Data Center Services

MetLife Bank has embarked on an 18-month project to outsource data center operations to FIS and move all of its applications to an FIS-operated data center in Little Rock, Ark.

To address aging infrastructure in its Irving, Texas-based data center and create a foundation for its revved-up growth strategy, MetLife Bank (Bridgewater, N.J.; more than $16.5 billion in assets) has embarked on an 18-month project to outsource data center operations to Jacksonville, Fla.-based Fidelity National Information Services (FIS) and move all of its applications to an FIS-operated data center in Little Rock, Ark. MetLife Bank CIO Mark LaPenta recently spoke with BS&T about the drivers and goals of the new strategies, and the results so far.

How is MetLife Bank's data center strategy changing?

LaPenta: When we acquired mortgage capabilities and a platform in fall 2008, market conditions did not allow companies to spend capital on infrastructure improvements. We went through a "rent, buy or build" analysis and felt our capital dollars were better spent on renting data center capabilities as opposed to reinvesting in, reinventing or refreshing the current data center. ... It did not make sense to spend capital on what boils down to a very critical foundational utility, but a utility nonetheless. So we decided to outsource most of the data center capabilities and also take advantage of newer technologies in that space. For instance, ... [we're moving to] a virtualized environment. It gives you economies of scale; you can reduce 1,000 [servers] to 80. That saves electricity, it saves costs, it makes the environment easier to manage, and those savings are passed along to us, the client.

In addition, the bank has an aggressive three-year growth strategy -- we really wanted a platform that could scale almost on the fly [and provide] capacity on demand as our bank grew. Because the mortgage [business is] cyclical and volatile, we could also ratchet down capacity to manage costs better, as opposed to one fixed infrastructure instance that you're going to pay for regardless of market volume.

How does MetLife Bank's data center strategy relate to that of its parent, insurance company MetLife?

LaPenta: Even though [we are] moving to a more federated model, we continue to leverage the power of a Fortune 40 [company] where appropriate. There are still certain capabilities, like network, email or payroll, that it doesn't make sense or add value for a bank to diversify, and there we still leverage the power of the great parent we have. But the bank's applications are ... different from the needs of an insurance giant; we had our own data center, [and] we did have certain applications that were already outsourced in a SaaS or ASP [arrangement].

Our new model is "pay by the drink," with some minimums, so [capacity] can grow as the business grows. We're also getting savings out of this; had we put the capital into reinvesting in or refreshing our own data center, it would not have been as cost-effective as going to an outsourcer, which can leverage certain economies of scale.
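
(Editor's note: as a rough illustration of how a "pay by the drink" arrangement with minimums compares to a fixed-capacity footprint, the short Python sketch below bills usage at a per-unit rate but never less than a contractual floor. All rates, minimums and volumes are hypothetical; they are not MetLife Bank's or FIS's actual terms.)

    # Hypothetical sketch of usage-based ("pay by the drink") billing with a
    # contractual minimum, compared to a fixed-size, owned data center cost.
    # Every figure below is invented for illustration.

    def monthly_bill(server_hours, rate_per_hour=0.50, monthly_minimum=20_000.0):
        """Charge usage at the agreed rate, but never less than the minimum."""
        return max(server_hours * rate_per_hour, monthly_minimum)

    FIXED_FOOTPRINT_COST = 60_000.0  # assumed cost of a fixed, owned environment

    # Slow, normal and peak mortgage-volume months
    for hours in (10_000, 60_000, 150_000):
        print(f"{hours:>7,} server-hours -> outsourced ${monthly_bill(hours):>10,.2f} "
              f"vs fixed ${FIXED_FOOTPRINT_COST:,.2f}")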

What are some of the lessons learned so far?

LaPenta: In an 18-month project, you may spend 12 months just planning before you implement anything -- and we did that. Most of the work was in the planning. When we got to the migrations, we were moving over literally hundreds of applications on a dense schedule -- on weekends, at night and on Monday mornings. In most cases, there was almost no disruption. People would come in Monday and not even know their application had moved. ... The project's coming down the stretch this year and is still on plan.

Did you consider using the data center migration as an opportunity to change any of the bank's applications or processes?

LaPenta: We debated that a lot, because in the bank's three-year vision there is going to be a transformation and renewal of a lot of application capabilities. That, along with the regulatory environment changing so much, means that a lot of front-end systems will need to be changed. However, we thought this was not the time. We wanted to lift the application basically the way it is today, because you would take on great risk if you tried to change the app for the future on the fly while at the same time trying to move it. We were coming from an aged environment that did not use virtualization, so our first and foremost priority was to ... prove the viability of an application running in a virtualized environment.

Besides the server reductions, what are some other benefits MetLife Bank has gained from outsourcing its data center?

LaPenta: We definitely have reduced costs. Compared with the pro forma of what it would have taken to invest in our own data center and bring it up to 2011-2012 capability, leveraging the mature Fidelity environment has definitely been a savings; it's even a savings over current run-rate costs, because with some [aspects] of our aged environment, you're making extraordinary efforts to keep things running at the level of availability needed. Also, as we add new and expanded applications to the bank, we've put them in the new data center from Day One, and it's been easier to stand up new capabilities in the new environment than it would have been in the old. So on the infrastructure side of our application enablement, we're getting a quicker time-to-market cycle.

One of the key things we've picked up as a future benefit [is that] we are going to be able to offer tiered levels of service. If you set availability at 99.9 percent, maybe some applications don't need 99.9 percent -- so why provide that level of availability for an application that doesn't need it? This will enable us in the future to move away from a one-size-fits-all environment. We'll be able to ratchet up or ratchet down the computing capability and capacity based on the actual needs of the business.
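
(Editor's note: to put tiered availability targets in concrete terms, the brief Python sketch below converts an availability percentage into the downtime it allows per year. The tiers shown are illustrative examples, not service levels quoted by MetLife Bank or FIS.)

    # Convert an availability target into the annual downtime it permits.
    # The tiers listed are examples only.

    HOURS_PER_YEAR = 24 * 365  # 8,760 hours, ignoring leap years

    def annual_downtime_hours(availability_pct):
        """Hours per year a service at this availability may be down."""
        return HOURS_PER_YEAR * (1 - availability_pct / 100)

    for tier in (99.0, 99.9, 99.99):
        print(f"{tier}% availability allows roughly "
              f"{annual_downtime_hours(tier):.1f} hours of downtime per year")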

Katherine Burger is Editorial Director of Bank Systems & Technology and Insurance & Technology, members of UBM TechWeb's InformationWeek Financial Services. She assumed leadership of Bank Systems & Technology in 2003 and of Insurance & Technology in 1991.
