Grid technology uses divide-and-conquer tactics to distribute computationally intensive tasks among any number of separate computers for parallel processing. It allows unused CPU capacity - including, in some cases, the downtime between a user's keystrokes - to be used in solving large computational problems. While this technique has long been used to satisfy the insatiable computational needs of Wall Street's "quant jocks" at trading firms and investment banks, grid computing is taking hold in other areas of financial services, including insurance, corporate banking and even retail finance.
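The divide-and-conquer pattern the article describes can be sketched in a few lines: split a large computation into independent chunks, farm the chunks out to a pool of workers (standing in for separate machines on a grid), and combine the partial results. This is an illustrative sketch using Python's standard thread pool, not any particular grid vendor's API; a real grid would dispatch the chunks to remote nodes.

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # Each worker computes its slice of the problem independently.
    return sum(x * x for x in chunk)

def scatter_gather_sum(data, workers=4):
    # Divide: split the task into roughly equal, independent chunks.
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    # Conquer: dispatch chunks to a worker pool; on a grid these
    # would be separate machines scavenging spare CPU cycles.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(partial_sum, chunks)
    # Combine the partial results into the final answer.
    return sum(partials)
```

Because each chunk is independent, adding more workers (or more machines) scales the computation with no change to the algorithm itself.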
Like the Internet, grid computing started in academia and the national defense industry and has moved into the commercial marketplace, along with its proponents. Phil Cushmaro used to work on grid computing applications for the defense industry, designing device drivers and operating systems for Concurrent Computer Corp. (Duluth, Ga.). Now, he's CIO of Credit Suisse First Boston (CSFB), the corporate and investment banking arm of the Credit Suisse Group (Zurich, $689 billion in assets).
Grid technology has come a long way since Cushmaro's earlier experiences. Fifteen or 20 years ago, he says, "distributed computing" solutions existed, but they were heavily embedded in vendor solutions. "You couldn't re-deploy those solutions, or the components of those solutions," he explains.
It's one thing to write a program that distributes calculations among multiple processors, and quite another to create an extensible grid, capable of executing any number of applications on any number of processors. Wall Street firms took the former approach when first tackling their computationally intensive tasks. "A lot of the grid solutions that you find today in the industry are home-baked," says Cushmaro. "Depending on the size of the firm, [there are] anywhere from 50 to 500 CPUs that are concurrently working and calculating information overnight."
In the push to cram as many calculations as possible into the overnight window between trading sessions, Wall Street developers couldn't wait for off-the-shelf software or industry standards to evolve into a true grid. Now, the software market has evolved to the point where it's possible to allow disparate applications to take advantage of any available computing power in a heterogeneous computing environment, at any time of day.
Drawing upon the work of the Globus Alliance, a technology consortium that publishes industry standards for grid computing, IT administrators can mix and match applications and processors. "Now, you can basically take anybody's computer and anybody's software, and as long as it obeys this standard, it can participate in the grid," says Jason Bloomberg, analyst at ZapThink (Waltham, Mass.), a technology consultancy.
That's in theory, at least. True interoperability will take time. "Once you agree on a standard, you have to implement it, and once you implement it, you have to do interoperability testing on different products using that standard," says Lawrence Ryan, director of the financial services industry practice at Hewlett-Packard (Palo Alto, Calif.). "There are standards today, yes, but these standards are also evolving very rapidly. They're not there yet."
That's why companies including HP, Fujitsu Siemens Computers, Intel, NEC, Network Appliance, Oracle, Sun Microsystems and others have formed the Enterprise Grid Alliance (San Ramon, Calif.), which will promote standards for enterprise grid applications.
The relative youth of standards hasn't stopped firms from pressing ahead. For its part, CSFB uses grid management software from DataSynapse (New York) to mediate between requests by specific applications and the pool of available processor capacity. Part of that job is figuring out which requests should get priority. "It could be based on the relative time-sensitivity of each department, or even on how profitable a given trader is," says Frank Cicio, chief marketing and strategy officer, DataSynapse.
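The mediation Cicio describes amounts to priority scheduling: requests queue up for the processor pool, and higher-priority work is dispatched first. A minimal sketch of that idea, using a standard-library heap (this is a toy illustration, not DataSynapse's actual product interface; the job names and priority scale are hypothetical):

```python
import heapq

class GridScheduler:
    """Toy dispatcher: higher-priority requests reach free processors first."""

    def __init__(self):
        self._queue = []
        self._seq = 0  # tie-breaker preserves FIFO order within a priority level

    def submit(self, job, priority):
        # heapq is a min-heap, so negate priority: larger number = sooner.
        heapq.heappush(self._queue, (-priority, self._seq, job))
        self._seq += 1

    def next_job(self):
        # Called whenever a processor in the pool frees up.
        return heapq.heappop(self._queue)[2]

sched = GridScheduler()
sched.submit("overnight-risk-batch", priority=1)
sched.submit("top-trader-repricing", priority=9)
sched.submit("desk-var-update", priority=5)
```

Under a policy like the one Cicio describes, the priority number might be derived from a department's time-sensitivity or a trader's profitability, so `top-trader-repricing` would be dispatched to the next free processor ahead of the batch work.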
So the most profitable traders are rewarded with quicker response times. But grid isn't just about speed. Even if CSFB didn't enjoy a substantial speed boost from grid (which it does), Cushmaro would still embrace the technology. That's because it's easier to guarantee the availability of a centrally managed grid than it is to provide the same quality of service for mission-critical applications in separate areas, he says. "Complex environments tend to break and they're very difficult to fix," explains Cushmaro. "If you look at the reasons for outages on the Street today and the time duration of coming back from an outage, it is usually somehow related to the complexity of the environment."
Grids reduce complexity by de-emphasizing the importance of any given processor. A malfunctioning node in a grid doesn't bring any single process grinding to a halt. In fact, even individual transactions can make it through an outage unharmed. "When a transaction is sent out to be executed, if the network or the power fails, we automatically move that over to an available processor, right where it left off," says Cicio.
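The failover behavior Cicio describes can be sketched as a resubmission loop: if the node running a task fails, the middleware moves the work to another available node rather than halting the process. This is an illustrative stand-in, not DataSynapse's implementation; real grid middleware can also checkpoint intermediate state so the task resumes "right where it left off," whereas this sketch simply reruns it. The node names and the `NodeFailure` exception are hypothetical.

```python
class NodeFailure(Exception):
    """Raised when a grid node goes down mid-task (simulated here)."""

FLAKY_NODES = {"node-1"}  # simulate a hardware or network outage

def run_on_node(node, task):
    # Stand-in for remote execution on a grid node.
    if node in FLAKY_NODES:
        raise NodeFailure(node)
    return task()

def execute_with_failover(task, nodes):
    """Resubmit the task to the next available node if one fails."""
    last_err = None
    for node in nodes:
        try:
            return run_on_node(node, task)
        except NodeFailure as err:
            last_err = err  # node is down: move the work, don't halt
    raise RuntimeError("no nodes available") from last_err

result = execute_with_failover(lambda: 2 + 2, ["node-1", "node-2"])
```

Here `node-1` fails, the task transparently lands on `node-2`, and the caller never sees the outage, which is why a broken node de-emphasizes rather than dooms any single process.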
For some firms using commodity computing resources, it's hardly worth it to troubleshoot a hardware problem. "With one of our clients, when there's a problem with a particular node in the grid, they don't even bother repairing the node," says Bob Boettcher, vice president of financial services, Platform Computing (Toronto), a grid software company. "They just rip out the board and insert another."