
Oct 04

Continuous Monitoring

From: Cloud Centrics Technology Blog

By Andrew Benhase

For more than 15 years, the security industry has struggled to represent security threat information accurately and consistently, and to make it relevant and useful for decision-making purposes.

Security event information has traditionally taken the form of ASCII log data aggregated into a centralized database, where a batch process often takes considerable time to produce static views of stale, albeit correlated, threat data.

The problem with this approach is that the data is largely one-dimensional, and the events have often long since occurred by the time initial analysis is complete and operational decisions have been considered and executed.
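
As a concrete illustration of that batch model, the sketch below (in Python, with invented log lines and an invented field layout) groups ASCII log events by source address into a static report that is only produced after the fact:

```python
# A minimal sketch (not any vendor's pipeline) of the traditional batch model:
# ASCII log lines are collected, loaded into a central store, and correlated
# long after the events occurred. All names and formats here are illustrative.

from collections import defaultdict
from datetime import datetime

RAW_LOGS = [
    "2013-10-04T02:11:09 deny tcp 198.51.100.7 -> 10.0.0.5:445",
    "2013-10-04T02:11:10 deny tcp 198.51.100.7 -> 10.0.0.6:445",
    "2013-10-04T02:11:12 allow tcp 203.0.113.9 -> 10.0.0.5:443",
]

def correlate(lines):
    """Group events by source address -- a static, one-dimensional view."""
    events_by_source = defaultdict(list)
    for line in lines:
        timestamp, action, proto, src, _, dst = line.split()
        events_by_source[src].append((datetime.fromisoformat(timestamp), action, dst))
    return events_by_source

if __name__ == "__main__":
    report = correlate(RAW_LOGS)
    for src, events in report.items():
        # By the time this report is read, the activity has already happened.
        print(f"{src}: {len(events)} events, first seen {min(e[0] for e in events)}")
```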

An analyst, for example, can see the basic threat information provided by the devices along with basic network tuple information, but that view rarely includes the more detailed information that would give the security analyst (or operator) a broader picture of the overall state of the network, including its services.

One of the critical gaps in this process is the fusion of the additional datasets already available within the network itself with Internet-based threat intelligence and a real-time view of current anomalous activity.
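
A minimal sketch of that kind of fusion might look like the following, where the threat-intelligence feed, the anomaly list, and the flow records are all hypothetical in-memory stand-ins for the real sources:

```python
# Illustrative fusion of three sources named above: flow records observed on
# the network, an Internet-based threat-intelligence feed, and hosts flagged
# by local anomaly detection. All values are invented for the example.

THREAT_INTEL = {"198.51.100.7": "known scanner"}   # external feed (illustrative)
ANOMALOUS_HOSTS = {"10.0.0.42"}                    # flagged by local anomaly detection

flows = [
    {"src": "198.51.100.7", "dst": "10.0.0.5", "dport": 445, "bytes": 1200},
    {"src": "10.0.0.42", "dst": "203.0.113.9", "dport": 443, "bytes": 88_000_000},
]

def fuse(flow):
    """Annotate a single flow record with intelligence and anomaly context."""
    return {
        **flow,
        "intel": THREAT_INTEL.get(flow["src"]),
        "anomalous_endpoint": flow["src"] in ANOMALOUS_HOSTS or flow["dst"] in ANOMALOUS_HOSTS,
    }

for enriched in map(fuse, flows):
    print(enriched)
```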

A wealth of information already exists within the network, yet less than 25% of it is commonly consumed in decision-making processes related to cyber security events.

NIST SP 800-137 largely defines continuous monitoring as an aggregation of information about network endpoints, basing threat information on the vulnerabilities found and mitigated within those endpoints.

Continuous Network Monitoring represents an equally, if not more, important concept: correlating the additional intelligence databases found within the network can provide a far more holistic, real-time view of threats and drastically reduce the Time to Implement (TTI) changes that mitigate or manage persistent threats within the network.

What is Continuous Network Monitoring?

Forensic value can be derived from long-term archiving and retrieval of correlated log data, but making decisions that impact the network or its services based solely on this information often leads to errors, and the resulting actions frequently have limited effect on the actual security event that was reported.

The industry-wide, steadfast reliance on after-the-fact log analysis, static views of patch-management compliance, and event correlation is a methodology that ceased to be relevant to current threats over the last decade.

Continuous network monitoring, together with the archival storage, review, and analysis of contextual network flow patterns, helps ensure a real-time, holistic view of the patterns and behaviors taking place at any given point in time within the network itself.
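
One simple way to picture this kind of contextual flow analysis is a running per-host baseline that flags departures as flow records arrive, rather than hours later. The sketch below is illustrative only; the record layout and 3-sigma threshold are assumptions, not a prescribed method:

```python
# A minimal sketch of contextual flow-pattern analysis: keep a running
# per-host baseline of bytes transferred and flag departures from it as
# each flow record is exported.

import math
from collections import defaultdict

class HostBaseline:
    """Incrementally tracks mean and variance of per-flow byte counts (Welford's method)."""
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, value):
        self.n += 1
        delta = value - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (value - self.mean)

    def is_anomalous(self, value, sigmas=3.0):
        if self.n < 10:          # not enough history yet
            return False
        std = math.sqrt(self.m2 / (self.n - 1))
        return std > 0 and abs(value - self.mean) > sigmas * std

baselines = defaultdict(HostBaseline)

def observe(flow):
    """Called for each flow record as it arrives, not hours later."""
    baseline = baselines[flow["src"]]
    anomalous = baseline.is_anomalous(flow["bytes"])
    baseline.update(flow["bytes"])
    return anomalous
```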

In the current era of advanced persistent threats, which are commonly generated by large, well-organized enterprises, reliance on these legacy systems of threat analysis provides a false sense of security. Events are taking place in real time, data is being exfiltrated from networks, and large-scale system compromise is occurring, often without the owner of the information system knowing for hours or potentially days. By the time the event is discovered, large-scale compromise has occurred and the damage has been done, with the owner/operator none the wiser.

The only way to combat these kinds of advanced attacks is with a combination of cloud-based intelligence services, which globally aggregate and correlate composite threat information derived from a globally dispersed set of device feeds, and locally significant traffic analysis for the detection of anomalous activity.

When these capabilities are then merged with local device profiling and identity databases, the analyst or operator gains a multi-dimensional view: not only statistically generated global reputation and analysis of threats, but also a combined view of locally significant reputation relevant to the local network.

This near-real-time analysis of fused information across multiple data sources can lead to minute-zero, hour-zero information security responses based on statistical analysis of traffic patterns, isolation of anomalous traffic events, and correlation of external intelligence sources, providing a higher degree of trust to the decision maker.
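
As an illustration of that multi-dimensional view, the sketch below combines a global reputation score, a locally observed reputation, and an identity check into a single composite trust value. The weights and threshold are purely illustrative, not a published scoring model:

```python
# A toy composite trust score built from the three inputs described above.
# Each reputation input is a value in [0, 1]; higher means more trusted.

def composite_trust(global_reputation, local_reputation, identity_validated,
                    weights=(0.5, 0.3, 0.2)):
    """Weighted blend of global reputation, local reputation, and identity status."""
    w_global, w_local, w_identity = weights
    return (w_global * global_reputation
            + w_local * local_reputation
            + w_identity * (1.0 if identity_validated else 0.0))

# Example: a host with a poor global reputation but a validated local identity.
score = composite_trust(global_reputation=0.2, local_reputation=0.6,
                        identity_validated=True)
print(f"composite trust: {score:.2f}")   # 0.48 -- below a hypothetical 0.5 action threshold
```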

 

Global Intelligence

In the evolving world of Advanced Persistent Threats, small changes are made to an attack while its behavior often remains very similar. Even slight changes will often confuse or entirely defeat pattern-based engines that rely on specific signatures to identify known threats.

In a global intelligence model, threat is determined through statistical analysis of specific observed behaviors reported over time by a large sampling of sources spread across the globe. Those aggregate observations are then used to determine the trust level of the individual host system creating an inbound network flow.
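
A toy version of that statistical model might weight each reported observation by its age, so that recent behavior dominates the score. The half-life and report format below are assumptions made for illustration:

```python
# Illustrative reputation score derived from many sensor reports over time,
# with older reports weighted less via exponential decay.

import time

HALF_LIFE_DAYS = 7.0   # illustrative decay constant

def reputation(reports, now=None):
    """
    reports: iterable of (timestamp_seconds, is_malicious_bool) from many sensors.
    Returns a score in [0, 1]; higher means more trustworthy.
    """
    now = now or time.time()
    weighted_bad = weighted_total = 0.0
    for ts, is_malicious in reports:
        age_days = (now - ts) / 86400.0
        weight = 0.5 ** (age_days / HALF_LIFE_DAYS)   # recent reports count more
        weighted_total += weight
        weighted_bad += weight * (1.0 if is_malicious else 0.0)
    if weighted_total == 0:
        return 0.5                                    # no history: neutral
    return 1.0 - weighted_bad / weighted_total
```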

When this model is combined with locally significant information, such as the validated identity of the endpoint and local reputation, a significantly higher level of trust can be achieved, enabling very accurate and informed decisions about security events.

 

Network Based Application Identification

Typically, a host system (a workstation endpoint or a network server) runs agent-based technology that attempts to identify the application, port, and protocol running on the network. The challenges with this type of system are host compromise and/or local obfuscation and, ultimately, trusting that the information you are receiving is accurate.

A network-based application recognition system ensures that observed traffic is identified as it actually exists on the wire between devices in the network, so host or application compromise cannot corrupt the results.

A network switch today has the inherent intelligence to identify unique applications and not only report on this traffic but also, potentially, take automated actions such as flow control, termination of a flow, or sequestration of flows, none of which rely on any agent or host-based feedback mechanism.
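
A minimal sketch of that idea follows: a toy port lookup stands in for the switch's recognition engine, and identified applications are mapped to automated flow actions. The application names and policy actions are illustrative only:

```python
# Illustrative network-based identification and automated response.
# A simple port lookup stands in for a real recognition engine.

KNOWN_PORTS = {443: "https", 53: "dns", 445: "smb"}

POLICY = {
    "smb":     "rate_limit",     # throttle the flow
    "unknown": "quarantine",     # sequester the flow for inspection
}

def identify(flow):
    """Identify the application from what is observed on the wire."""
    return KNOWN_PORTS.get(flow["dport"], "unknown")

def enforce(flow):
    """Map the identified application to an automated flow action."""
    app = identify(flow)
    action = POLICY.get(app, "permit")
    return app, action

print(enforce({"src": "10.0.0.42", "dst": "10.0.0.5", "dport": 445}))   # ('smb', 'rate_limit')
print(enforce({"src": "10.0.0.42", "dst": "10.0.0.5", "dport": 443}))   # ('https', 'permit')
```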

 

Conclusions

While no single solution will satisfy every possible security reporting requirement, the overriding goal should be to establish the highest possible trust in the data being reported before making critical decisions about service disruption or the mitigation of suspect traffic flows.

Relying on single sources of event data that are post-processed and correlated simply does not address the current and growing wave of highly evolved attacks and exploitations observed today.

The enemy is often simply the time it takes to act when a person is in the loop and analysis and decisions are required before action can be taken. That delay often leads to large-scale compromise and the exfiltration of large volumes of data.

The critical need for higher-order trust, which can only be derived from a fusion of multiple data sources rather than reliance on a single reporting technology, is the key enabler of this solution.

Network Flow Visualization, Local Identity Databases, Locally Significant Security Events, Network-Based Application Identification, and Global Threat Intelligence, fused into a common view, should be considered the five primary requirements for Continuous Network Monitoring.
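
One way to picture those five requirements in a common view is a single fused record per observation, along the lines of the illustrative (non-standard) schema below:

```python
# A toy fused record bringing the five requirements above into one view.
# Field names are illustrative, not a standard schema.

from dataclasses import dataclass
from typing import Optional, List

@dataclass
class FusedObservation:
    flow: dict                      # network flow record (flow visualization input)
    identity: Optional[str]         # local identity database lookup for the endpoint
    local_events: List[str]         # locally significant security events for this endpoint
    application: str                # network-based application identification result
    global_reputation: float        # global threat intelligence score in [0, 1]

record = FusedObservation(
    flow={"src": "10.0.0.42", "dst": "203.0.113.9", "dport": 443, "bytes": 88_000_000},
    identity="eng-laptop-042 / jdoe",
    local_events=["unusual outbound volume"],
    application="https",
    global_reputation=0.2,
)
print(record)
```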
