In our reports and at conferences over the past year – most notably at our annual breakfast at the RSA Conference in March – we have introduced a concept that we call the Actionable Situational Awareness Platform (ASAP). What is ASAP, and how do we see it shaping the evolution of information security?

The ASAP concept describes an emerging trend in which correlation, statistical analysis, machine learning, integrations and even standards are leveraged to separate signal from noise, break down silos and produce meaningful results in the form of action. The goals encompass many of information security's core responsibilities: to obtain accurate, actionable intelligence more easily and consistently; to reveal a true view of security posture; and to monitor and prevent threats more effectively across all layers of the business.

We've seen varying levels of success from vendors as their efforts coalesce around the ASAP model from different perspectives and pieces of the problem, with each jostling to become 'the platform' on which customers choose to standardize. Realistically, however, enterprises are likely to continue to deploy multiple vendor and open source technologies and services indefinitely. For this reason, it is important to consider that an ASAP deployment made up of multiple, integrated vendors could be as successful as a single monolithic offering from one company.

The 451 Take

We don't see the ASAP concept shaping the industry so much as the nature of security is driving the need for better situational awareness among defenders, coupled with more timely and effective response. Multiple researchers indicate that it may be months before organizations are aware of an attack in their environment. When the evidence of an attack exists (and analyses of incidents have consistently demonstrated that such evidence usually does exist), organizations often fail to recognize it for a variety of reasons. Security operations teams may simply be overwhelmed by the volume and frequency of alerts. This problem is so widespread that industry-wide terms for it have existed for years, e.g. 'information overload syndrome' and 'alert fatigue'. Three issues with security data lie behind it: a general lack of quality, integrity or confidence in the data; an overwhelming quantity of data to review, prioritize and act upon; and data from siloed point products that often lack necessary context. ASAP seeks to leverage advances in automation and analytics to reduce or eliminate these problems, turn insight directly into effective action, and free precious human expertise so that people can do what they do best.

The numbers problem

Defenders in the world of information security are bedeviled by scale in two distinct ways: the threat landscape is large and complex, and the IT landscape is large and complex. Combined, the vast amounts of noise created by both internal and external sources drown out the signal that defenders require to keep businesses safe.

In its 2016 Internet Security Threat Report, Symantec noted that it had discovered over 430 million new – and unique – examples of malware in 2015, a 36 percent increase from the year before. The emphasis on uniqueness is noteworthy, since attackers will often alter malware for a specific attack just enough to render it unrecognizable by the victim's detection tools, thus increasing the number of unique variants and making detection all the more difficult. (It's worth noting that attackers often make use of automation to do this, via capabilities many security operations teams can only dream about today.) The sheer number and diversity of threat actors – from organized criminals to nation-states seeking some strategic or tactical advantage against others – must be added to this as well. The motivations may differ from one threat actor group to another, but all may use similar techniques, from 'industrialized' attacks leveraging botnets to tools shaped for a specific mission.

Compounding the problem of scale and complexity in the threat landscape is the scale and complexity in the defender's environment. As if IT complexity alone weren't difficult enough, the security technology market is one of the most fragmented in the industry. We are aware of more than 1,300 vendors in information security (some investors tally quite a few more), with multiple segments and subsegments characterizing the market. Each new category of unanswered or poorly answered attacks tends to breed another new market for security tools. Given the nature of this market, which is more like an arms race characterized by gamesmanship thanks to the actions of intelligent adversaries than steady progress toward a definable goal, the pace rarely, if ever, slows.

How can security teams get a handle on the numerous complexities in play in order to recognize the most serious threats facing them at any given time, both within their organization and in the threat landscape as a whole? The short answer is: they often can't. According to FireEye's Mandiant team in its 2016 M-Trends report, the average time attackers are present in an environment before they are discovered is approximately 5 months. While this number has fallen over time, that's still quite a long time for a knowledgeable adversary to discover a lot of opportunities and do significant damage.

These factors highlight the asymmetric nature of information security. The attacker seems to enjoy all the advantages, such as being able to target specific weaknesses in order to achieve a well-defined objective. The defender, on the other hand, must make the most of limited resources to assess and defend against its most significant threats. Has the defender no advantages to tip the balance in their favor?

A play for the upper hand

They do. The defender knows their environment better than the attacker. Prior to penetrating a target, an adversary's knowledge is only as good as their reconnaissance. After penetration, the adversary comes in blind and must invest time in learning the target space and identifying opportunities for further exploitation. The defender already has that visibility (or should) before the attack even appears.

The defender also has (or should have) complete freedom of movement and action within their environment, while the attacker is constrained. The attacker must gain whatever control they need through illicit means. This means that the defender can take action, and swiftly, while the attacker must cover their tracks and act covertly when taking actions that could tip off defenders to their presence and intent.

Note the word 'should' in each of those descriptions. If these advantages were universally and consistently exploited, it seems unlikely that attacks would linger undetected within a victim environment for nearly half a year. If scale and complexity are a good deal of the problem, how have we dealt with these challenges up to now?

In the larger world of IT, automation – long a holy grail of operations management – is finally reaching a level where some IT organizations can reliably say that they have solid control over the environment's availability and performance, as well as accurate and current insight into the number and configuration of assets. The need for cloud providers to deliver IT consistency at scale has been a significant driver in the rise of trends such as DevOps. Virtualization has been a key enabler in this, giving rise to programmable infrastructure where many assets can be defined, deployed and administered throughout operations lifecycles virtually on demand. These are just a few of the attributes of what may be considered 'new IT.'

What about the data?

In order for security teams to capitalize on this increased emphasis on automation, however, they need visibility. This visibility must be deep and accurate throughout the environment, with the ability to recognize the nature of malicious activity before it has an impact whenever possible.

This means that data is required. And therein lies another problem with the practical execution of the ASAP ideal: low data quality and overwhelming data quantity. These factors are exacerbated by the isolation of data sets from each other, which deprives defenders of the context necessary to turn insight into actionable decisions.

The fragmentation of the security market means that technologies and their data are often so severely siloed that the right hand (figuratively speaking) has no idea what the left hand is doing. In addition, the insight that would alert operations teams to serious threats is locked away from the very tools that would lead to decisive action. This either allows the attacker to remain undetected and unstopped, or creates a dependency on people to respond at a speed or scale that exceeds what they can deliver. User activity that reveals the 'signature' of a certain type of threat behavior often goes unrecognized because the intelligence about these telltale signs is not well integrated with the monitoring tools that see this activity in progress. Malware may be confronted with new and growing techniques to recognize and stop it – but when it succeeds, how will we know? Must we wait until sensitive data walks out the (digital) door before we realize that it wasn't a legitimate administrator rifling through confidential content, or staging sensitive data in unusual locations before it was packed up and shipped out?

When data is obtainable, it is often of low quality, providing little visibility into actual activity. The sheer volume of machine events, for example, may reveal nothing of consequence without correlation to additional evidence that could distinguish a threat from recurring noise. When people become the primary means of making such distinctions, failures resulting from alert fatigue are likely to follow (never mind that this is hardly the best use of the valuable analytic capabilities in which people still exceed machines).

More than automation alone – in areas such as data refinement or machine learning – is required to invest this data with the necessary context. Initiatives to integrate across data sets must recognize the proliferation not only of tools but of potentially valuable and disparate data sources. This proliferation of tools isn't going away any time in the foreseeable future, which is why more effective partnering and integration may be more successful than a monolithic platform that could just yield a single large point product.

Of course, there are trade-offs to each approach. A single monolithic platform offers vendor-driven consolidation, integration, maintenance and support, but it runs the risk of missing out on the insight available from other sources that may not be well aligned with it. Integration across multiple tools takes advantage of the strengths of each, but the cost of deploying and maintaining the integration may be considerable.

Defining ASAP

To sum up, this is the essence of actionable situational awareness and a platform that enables it:

ASAP begins with the gathering of data intended to inform the defender about risks and threats to the business, obtained from sources internal to the organization and from external third parties. Care should be exercised when taking in this data, in order to ensure its accuracy and relevance.

This data is ingested, normalized and made available for synthesis, which entails the discovery of enough of the actionable signal within the impotent noise to detect and prevent activities that may lead to losses. This data may be managed within a central system of record or, as analytic and data management techniques evolve, within an integration of systems that achieves a similar end. The distinction from legacy techniques involves breaking down silos of data and tools, as well as of culture and process, which reduces noise and develops context that leads consistently to actionable findings. People come into the process only when they present an advantage over what machines can and should do unaided.
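To make the ingestion and normalization step concrete, the following is a minimal sketch in Python. It assumes two hypothetical source formats (a firewall log and an endpoint agent) and hypothetical field names; real tools will differ, and this illustrates the pattern rather than any product's implementation.

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class NormalizedEvent:
    """Common schema shared by every ingested source."""
    timestamp: datetime
    source_tool: str   # which silo produced the record
    asset: str         # host, user or other asset identifier
    observable: str    # IP address, file hash, domain, etc.
    raw: dict          # the original record, retained for context

def normalize_firewall(record: dict) -> NormalizedEvent:
    # Hypothetical firewall log layout; real field names vary by vendor.
    return NormalizedEvent(
        timestamp=datetime.fromtimestamp(record["epoch"], tz=timezone.utc),
        source_tool="firewall",
        asset=record["src_ip"],
        observable=record["dst_ip"],
        raw=record,
    )

def normalize_endpoint(record: dict) -> NormalizedEvent:
    # Hypothetical endpoint-agent layout.
    return NormalizedEvent(
        timestamp=datetime.fromisoformat(record["time"]),
        source_tool="endpoint",
        asset=record["hostname"],
        observable=record["file_hash"],
        raw=record,
    )

Once every silo lands in one schema, correlation and noise reduction can operate on a single stream rather than on each tool's proprietary format – which is the precondition for the synthesis described here.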

Finally, the insights extracted from synthesis lead to consistent and effective action. The distinction from what typically exists today is that action is often poorly linked to the findings of synthesis. It is too often defined by ad hoc, human-intensive and non-repeatable processes that take limited advantage of automation. Ideally, the situation will be one where an automated workflow can be safely executed. In situations where human interaction plays a role (and it certainly does when countering an intelligent adversary), there should be a focus on utilizing human assessments where people present an advantage over machines, but not as a substitute for what machines can do better. In such cases, people must be allowed to take the necessary actions, making the most of human recognition, decision-making and response with techniques that recognize human limitations.

Again, a platform doesn't necessarily mean a single tool or technology. In fact, the definition above specifically embraces the set of tools and techniques that enable situational awareness. Nor is such a platform necessarily made up of technology alone. Technology and automation may be able to facilitate action more quickly and more precisely across a complex environment than people can, but when people can make a difference, their ability to do what machines can't should take the position of greatest importance during assessment. This does not mean using people as a substitute for what machines can do better but too often don't do today; we should be seeing the end of the road for that.

Three key domains

We see the Actionable Situational Awareness Platform as made up of three component domains:

Gathering – The collection of data from multiple sources is further divided into two subdomains: internal and external data. Internal sources include data gathered from tools that monitor the activities of people and technology. They also include data regarding the security posture, such as asset inventories, configuration details and vulnerability assessments. Some techniques, such as the continuous recording of security data from endpoints, may incorporate aspects of both. External sources include raw data and data that has been analyzed and processed into intelligence regarding actual incidents and events, which includes threat actors and tactics observed in other organizations or in public environments. In addition, they may include open source intelligence ('OSINT'), as well as commercial and curated research, or data shared among organizations to enhance visibility across a community of common interest.

Synthesis – In order to spur meaningful action, information must yield insight. This distinguishes security intelligence from security data. For example, when the attributes of a specific attack method appear in internal data, the ability to recognize that malicious activity through machine-learning algorithms – and perhaps confirm those findings through correlation with external threat intelligence – can lead to action that contains a threat or prevents future exploits.
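As a purely illustrative sketch of that synthesis step – using simple scoring rules rather than a trained model, and continuing the hypothetical NormalizedEvent schema above – internal observations can be checked against external indicators and ranked before anyone is asked to act:

# Illustrative indicator set drawn from external feeds (values are placeholders).
THREAT_INTEL = {
    "203.0.113.7": "known command-and-control address",
    "44d88612fea8a8f36de82e1278abb02f": "known malware hash",
}

def score_event(event, times_seen_before: int) -> float:
    """Blend an intelligence match with simple rarity to rank events."""
    score = 0.0
    if event.observable in THREAT_INTEL:
        score += 0.7            # corroborated by external intelligence
    if times_seen_before == 0:
        score += 0.3            # never observed before in this environment
    return score

def triage(events, history: dict):
    """Yield only the findings that clear a confidence bar, with a reason."""
    for event in events:
        score = score_event(event, history.get(event.observable, 0))
        if score >= 0.7:
            reason = THREAT_INTEL.get(event.observable, "anomalous observable")
            yield event, score, reason

The arithmetic here is beside the point; the pattern is that internal telemetry gains context from external intelligence, and only findings above a confidence threshold are handed to the action stage.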

Of course, machine learning is the flavor of the month when it comes to analytic innovation. In actual practice, the ability of people to recognize meaningful information obviously plays a role here – and a leading one in current security operations. But that's not entirely because technology is only now catching up with demand. For all the buzz surrounding machine learning, machines still have a ways to go before they become as adept as people in many respects. But the limitations of people regarding the volume and speed of data they can handle contribute to many failures in security management. As analytic technologies continue to evolve, their potential for enabling more effective synthesis of insight across multiple sources of security data is considerable. Where people can be most effective, synthesis should be expected to embrace a more immersive experience that allows responders to absorb and act on information in a more natural manner.

Today, analogs to such an immersive experience may be found in heads-up displays in aircraft. These displays deliver flight status information directly in the pilot's line of sight. The so-called 'glass cockpit' is another example of efforts to optimize the delivery of information a pilot needs to assure a safe flight or assist in taking appropriate action under demanding workloads, without overwhelming the user so much that a serious failure could result. Other analogs may be found in first-person game platforms and other environments that simulate real world activity and interaction.

Action – While response typically embraces human action, one of today's key focus areas for turning an overwhelming amount of data into action is automation. Analytics and automation are primary weapons in the fight against the volume of security data and complexity of security management.

One of the objectives of automation is to relieve people of the burdens accompanying mundane or repetitive tasks and to enable them to make the best use of limited (as well as expensive and often hard-to-find) human expertise. To that end, we have begun to see the emergence of security orchestration in the definition of 'playbooks' or 'cookbooks' that prescribe specific actions that technology can take in response to emerging security events. These actions include containing an attack, collecting forensic evidence or ratcheting up the requirements for authentication when questionable access attempts are detected. In the latter example, adaptive authentication has been with us for a while already.
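As an assumption-laden sketch rather than any orchestration product's actual format, such a playbook can be expressed as data that an engine walks through, automating what is safe and deferring the rest for approval. The finding types, action names and approval flags below are hypothetical.

# Hypothetical playbooks: each finding type maps to an ordered list of response steps.
PLAYBOOKS = {
    "known_c2_traffic": [
        {"action": "isolate_host", "approval_required": False},
        {"action": "collect_forensic_image", "approval_required": False},
        {"action": "open_incident_ticket", "approval_required": False},
    ],
    "suspicious_login": [
        {"action": "require_step_up_auth", "approval_required": False},
        {"action": "disable_account", "approval_required": True},
    ],
}

def run_playbook(finding_type: str, execute, request_approval) -> None:
    """Walk the playbook; 'execute' and 'request_approval' are supplied callables."""
    for step in PLAYBOOKS.get(finding_type, []):
        if step["approval_required"] and not request_approval(step):
            continue            # a person declined this step; keep them in the loop
        execute(step["action"])

Codifying the sequence this way makes response repeatable and auditable, while the approval flag keeps people in the loop precisely where their judgment still outperforms automation.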

Looking ahead, security automation should dovetail with what we're already seeing in IT automation. This is likely to be through intersections with DevOps techniques that enable programmable infrastructure management and 'software-defined' approaches, leveraging concepts such as 'infrastructure as code' and modern implementations such as containers and microservices. However, orchestration is not the only example of automation we see emerging today. For example, the consolidation of information and workflow management for incident responders takes its lead from forensic investigation and case management systems that have long played a role in security response, while systems that automate the sequence of penetration testing take advantage of the findings of prior actions in order to determine where to turn next.

As some of these examples suggest, many of these techniques have already been brought to bear on making cloud and service provider platforms manageable and responsive at scale. These providers need this consistency and standardization in order to deliver valuable services at a profit. Carriers, meanwhile, have long employed tools to deliver both insight and automation at scale. This suggests the advantages that cloud and service providers may have not only in furthering security automation but in synthesizing insight across multiple environments and organizations, as well as integrating automation with technology trends that characterize 'new IT.'

The present and future path

Does this rosy picture accurately reflect the state of play today? Hardly. To paraphrase William Gibson, this optimistic future is already here in many respects, but it's not very evenly distributed. Today, security information and event management (SIEM) vendors may claim to have the footing necessary to chart the path for ASAP evolution. But the limitations of rule-based systems that can only alert on what is already known, in addition to the cumbersome demands of maintaining extensive rule bases (which are analogous to the top-heavy, static signatures of legacy antivirus), are themselves contributors to the gaps in actionable situational awareness. Regardless, SIEM is currently the anchor of many a security operations team. As a center of gravity for future acquisition, the leading SIEM vendors have already begun to make moves to broaden and modernize their capabilities, incorporating user behavior analytics into their SIEM plays or making acquisitions – such as IBM's of Resilient Systems and FireEye's of Invotas – to augment incident-response automation and security orchestration, respectively. In the past, 451 has characterized the emergence of some of these technologies as 'SIEM for your SIEM' – in other words, augmenting SIEM with the capabilities needed to approach broader, more actionable situational awareness.

But SIEM is only one element of such an approach. It is also typically restricted to a single enterprise. As DDoS attacks regularly illustrate, the exploitation of security exposures in one entity can have an impact on many, many others. The sheer extent of exposure recently revealed by Mirai, and the many interlocking aspects of response that challenge society to mount an effective defense, further highlight the urgent need for both scale and efficiency in dealing with such complexity. Individual organizations need to see progress toward ASAP to enable them to manage their own risks – but society as a whole needs ASAP on a much broader scale to deal with what may come.

In future research, we will examine the evolution of security in the light of the ASAP concept, particularly in areas such as security analytics, threat intelligence, security orchestration, and the automation of security assessment and incident response. We'll dive more deeply into each of the three component domains of data gathering, synthesis and action. We will also explore manifestations of these concepts among technology and service providers; intersections with larger trends shaping the development of 'new IT' (such as DevOps, infrastructure as code and software-defined implementations); and the role these domains play in shaping the adoption of ASAP in the real world.

Scott Crawford
Research Director, Security

Scott Crawford is Research Director for the Information Security Channel at 451 Research, where he leads coverage of emerging trends, innovation and disruption in the information security market. Well known as an industry analyst covering information security prior to joining 451 Research, Scott has experience as both a vendor and an information security practitioner.

Carl Brooks
Analyst, Service Providers

Carl Brooks is an Analyst for 451 Research's Service Providers Channel, covering cloud computing and the next generation of IT infrastructure. Previously, he spent several years researching and reporting on the emerging cloud market for TechTarget. Carl has also spent more than 10 years supporting small and medium-sized businesses as an IT consultant, network and systems integrator, and IT outsourcer.

Keith Dawson
Principal Analyst

Keith Dawson is a principal analyst in 451 Research's Customer Experience & Commerce practice, primarily covering marketing technology. Keith has been covering the intersection of communications and enterprise software for 25 years, mainly looking at how to influence and optimize the customer experience.
