No one could possibly make sense of the millions of discrete events being logged every day by security and network devices. Enter security event management (SEM) systems, which collect, correlate and analyze the vast volumes of data created by today's security infrastructure. Originally designed to address the information overload caused by intrusion detection systems (IDSs) and firewall logs, SEM systems have been extended to include switch and router logs, vulnerability scans, and OS and application logs.
While SEM systems still emphasize log correlation, vendors are adding new functions, such as real-time incident response and long-term data storage. Other products are blending network behavior anomaly detection and real-time passive network monitoring with log correlation to provide insight into activity on the network and monitor changes to essential business assets. These new features can help security architects better respond to attacks and unusual events, meet the demands of compliance reporting, and more quickly detect unwanted changes to critical systems.
But each organization has distinct architectural requirements, and security architects who fail to account for them risk deploying an expensive, complex system that doesn't actually meet their needs. Further, an SEM system represents a significant, long-term investment of money and staff. Over the lifetime of a system, enterprises will find themselves feeding it more computing resources, including database capacity, event collectors and offline storage. The more information sources you can direct to your SEM system, the more value you can wring from its correlation and analysis functions. Thus, it's essential to plan the capacity of any solution to ensure that it can meet today's and tomorrow's data needs.
Job 1: Simplify
Whether you choose an SEM system for real-time incident response and attack mitigation or for long-term auditing and compliance reporting, it must aggregate, normalize and correlate events from disparate products that would otherwise require manual gathering and review. It is a serious architectural commitment: the system is only as valuable as the data it is fed, so a full-blown deployment will have components throughout a bank's network. Although the details vary among specific SEM products, the general deployment principles are the same.
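The normalization step described above can be sketched in a few lines. This is an illustrative example only, not any vendor's implementation: the log formats, field names and event types below are hypothetical stand-ins for the kind of disparate firewall and IDS output an SEM system maps onto a common schema.

```python
import re

# Hypothetical raw log formats from two different device types.
FIREWALL_RE = re.compile(
    r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) DENY src=(?P<src>\S+) dst=(?P<dst>\S+)"
)
IDS_RE = re.compile(
    r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) ALERT \[(?P<sig>[^\]]+)\] (?P<src>\S+) -> (?P<dst>\S+)"
)

def normalize(line):
    """Map a raw log line onto a common event schema."""
    m = FIREWALL_RE.match(line)
    if m:
        return {"time": m["ts"], "type": "firewall.deny",
                "src": m["src"], "dst": m["dst"]}
    m = IDS_RE.match(line)
    if m:
        return {"time": m["ts"], "type": "ids.alert",
                "src": m["src"], "dst": m["dst"], "signature": m["sig"]}
    return None  # unrecognized format: drop or pass through, per policy

events = [normalize(l) for l in [
    "2006-03-01 10:15:02 DENY src=10.0.0.5 dst=192.168.1.9",
    "2006-03-01 10:15:03 ALERT [portscan] 10.0.0.5 -> 192.168.1.9",
]]
```

Once every source speaks the same schema, correlation rules can reason about "src" and "dst" fields without caring which product produced the event.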
At the heart of SEM is the analysis center -- a database-driven system that generates alerts based on both canned and user-defined rules and lets operators run queries against its storehouse of information. The analysis center gathers information from several sources. In some cases, it will receive raw log data directly from other devices, such as firewalls. In other cases, a bank can deploy intermediary collection devices to gather event data and pass it on to the analysis center. These collectors may accept data passively, or be configured to poll devices such as servers at regular intervals to download logs and events. As an added benefit, they often can do basic correlation and aggregation, weeding out duplicate or insignificant events and delivering normalized information to the analysis center for faster processing.
In addition to information from firewalls, IDSs and other security products, SEM systems have expanded the kinds of information they can accept to include switch and router logs, vulnerability assessment data, NetFlow and sFlow information, OS and application logs, and so on. The result is that security architects can configure a system to correlate and analyze ever-greater volumes of information. "I'm a big proponent of collect everything," says Erik Hart, VP and information security officer at Chicago-based Cole Taylor Bank ($2.9 billion in assets). He has approximately 175 devices and applications feeding upwards of 5 million events a day into an SEM solution from Network Intelligence (Westwood, Mass.).
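The payoff of feeding the system everything is correlation across sources. As a toy stand-in for the kind of user-defined rule an analysis center runs (the rule name, threshold and field names are invented for illustration), here is a sketch that flags any source address generating repeated firewall denies:

```python
from collections import defaultdict

THRESHOLD = 3  # illustrative; real rules are tuned per environment

def correlate(events):
    """Raise an alert the moment a source hits THRESHOLD firewall
    denies: a simple example of a stateful correlation rule."""
    denies = defaultdict(int)
    alerts = []
    for e in events:
        if e["type"] == "firewall.deny":
            denies[e["src"]] += 1
            if denies[e["src"]] == THRESHOLD:
                alerts.append({"rule": "repeated-deny", "src": e["src"]})
    return alerts

feed = [{"type": "firewall.deny", "src": "10.0.0.5"}] * 3
hits = correlate(feed)  # one alert for 10.0.0.5
```

Real SEM rules add time windows, multi-source joins (a deny followed by an IDS alert for the same address, for example) and suppression logic, but they follow this same accumulate-and-threshold pattern.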