I’ve often said that security is simple: all you need is people doing the right thing, systems configured to do the right thing, and the means to know that they are all doing the right thing. However, with hundreds if not thousands of people involved in your business, keeping them all aligned with security guidance, and keeping systems securely maintained through high levels of change and release activity, leaves plenty of room for errors and mistakes. Perhaps we should look at how ‘knowing’ can help improve our cyber defences.

The General Data Protection Regulation places two critical requirements on us regarding ‘knowing’ our environment: to be able to demonstrate that we are protecting data, and to report breaches within 72 hours. These requirements underline the need for ‘knowing’ and ‘knowing now’.

How realistic is it to know our environment? There are so many ongoing changes that any exercise in collating and documenting the environment becomes a fruitless task: as soon as the job is done, it all needs updating again. This is a real problem for maintaining our knowledge of the environment, so a more dynamic solution is required. Systems management tools have advanced a long way in recent years, combining inventories with configuration and change management. Yet on-boarding every system is a challenge: discovering endpoints and servers and then deploying agents to them is a hugely time-consuming activity, and there is no certainty that all components of the environment are captured.
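To make the coverage gap concrete, here is a minimal sketch in Python. The host names and the idea of comparing a discovery scan against the asset inventory are purely illustrative assumptions, not a description of any particular tool.

```python
# Hypothetical data: assets we believe we manage vs. hosts actually seen on the network.
INVENTORY = {"web01", "web02", "db01", "hr-laptop-17"}
DISCOVERED = {"web01", "web02", "db01", "db02", "printer-3f", "hr-laptop-17"}

unmanaged = DISCOVERED - INVENTORY   # on the network but not in the inventory (needs onboarding)
stale = INVENTORY - DISCOVERED       # in the inventory but not seen recently (retired or offline?)

print("Needs onboarding:", sorted(unmanaged))
print("Possibly retired or offline:", sorted(stale))
```

Even this toy comparison only works if the discovery scan itself is complete, which is exactly the assumption that is hard to prove in practice.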

During investigations, we need to know not only what is in the environment but also what communications took place across a spectrum of systems and network devices. Integration of logs into management systems is not readily achievable, resulting in lots of manual correlation of logs (where they are captured) with system and application records: a truly time-consuming activity when speed is of the essence.
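As a rough illustration of that correlation burden, the sketch below merges two hypothetical CSV log exports (a firewall log and an application audit log, with invented field names) into a single host-keyed timeline. It is a minimal sketch of the manual approach, not any product’s API.

```python
import csv
from collections import defaultdict
from datetime import datetime

# Hypothetical exports: firewall.csv (time, src_host, dst_host, port)
# and app_audit.csv (time, host, user, action). Field names are assumptions.

def load_events(path, host_field, detail_fields):
    """Read a CSV log export and normalise each row to (timestamp, host, details)."""
    events = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            ts = datetime.fromisoformat(row["time"])
            details = {k: row[k] for k in detail_fields}
            events.append((ts, row[host_field], details))
    return events

def correlate(*event_lists):
    """Group events from all sources by host and sort each host's events by time."""
    timeline = defaultdict(list)
    for events in event_lists:
        for ts, host, details in events:
            timeline[host].append((ts, details))
    for host in timeline:
        timeline[host].sort(key=lambda e: e[0])
    return timeline

if __name__ == "__main__":
    fw = load_events("firewall.csv", "src_host", ["dst_host", "port"])
    app = load_events("app_audit.csv", "host", ["user", "action"])
    for host, events in correlate(fw, app).items():
        print(host)
        for ts, details in events:
            print(" ", ts.isoformat(), details)
```

Even this toy version presumes consistent timestamps and host names across sources, which is rarely the case when the clock is ticking on a 72-hour notification deadline.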

While some of the above is achievable on a subset of the environment (for example, the web server estate), our environments contain much that is legacy and hence not seen as worth spending on for additional controls. However, we increasingly need to be assured that our data is protected, even within legacy systems and applications that have reached the end of their life span. It seems that an asset- or inventory-based description of our environment doesn’t easily provide the knowledge we need to cover all activity in the environment and to maintain its integrity. Furthermore, it doesn’t address the problem of correlation during investigations. This version of ‘knowing’ can be more accurately described as recording.

Utopia comes down to knowing what is going on across the environment, in as close to real time as possible. Rather than basing decisions on what is in the environment, a better baseline is to make decisions according to what is going on in the environment, reported in near real time. Naturally, there will be millions of things going on concurrently, and simply reporting all of them makes no sense and does not improve on the current situation. That is, unless the reporting can be filtered and risk rated to reduce the volume of output (for example, by not reporting pre-approved activities) and to highlight higher-risk activities such as the transfer of financial data. But utopia is not data loss prevention (DLP). I can’t ask a DLP solution which systems are serving HTTP traffic externally, what websites HR routinely uses in its business processes, or what we are sending to cloud providers. In a utopian world, we use the real, current activity in our environment to answer the questions that let us know our environment, so we can make cyber security decisions based on real use cases. This approach gives transparency across the organization: real activity from our environment helps to demonstrate the risks and confirm compliance with policy and regulation, and, more importantly, gives us the tools to investigate events easily and demonstrate the actual state of our environment.
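A minimal sketch of what filtering and risk rating might look like is shown below. The event fields, the pre-approved list and the scoring weights are all invented for illustration; any real implementation would tune these to its own risk appetite.

```python
# Hypothetical pre-approved (process, destination) pairs that should not be reported.
PRE_APPROVED = {("backup-agent", "storage.internal")}

# Illustrative weights for factors that raise the risk of an activity.
RISK_WEIGHTS = {
    "financial_data": 50,
    "external_destination": 30,
    "unusual_volume": 20,
}

def risk_score(event):
    """Score one activity record; higher means more worth reporting."""
    score = 0
    if event.get("data_category") == "financial":
        score += RISK_WEIGHTS["financial_data"]
    if not event.get("destination", "").endswith(".internal"):
        score += RISK_WEIGHTS["external_destination"]
    if event.get("bytes", 0) > 100_000_000:
        score += RISK_WEIGHTS["unusual_volume"]
    return score

def report(events, threshold=40):
    """Drop pre-approved activity, then surface only events above the risk threshold."""
    for event in events:
        if (event.get("process"), event.get("destination")) in PRE_APPROVED:
            continue
        score = risk_score(event)
        if score >= threshold:
            yield score, event

if __name__ == "__main__":
    sample = [
        {"process": "backup-agent", "destination": "storage.internal", "bytes": 5_000_000_000},
        {"process": "excel", "destination": "fileshare.example.com",
         "data_category": "financial", "bytes": 200_000_000},
    ]
    for score, event in report(sample):
        print(score, event)
```

The point of the sketch is the shape of the decision, not the numbers: suppress what is already approved, weight what matters to the business, and report only what rises above the line.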

Utopia is the ability to capture every interaction in the environment, not just at the network layer but also looking inside the packets to identify users, data and applications, dynamically managing risk and demonstrating compliance. It can learn people’s and systems’ behaviours and categorise these into typical activities for those groups and data types. It tells me which interactions are risky based on my risk appetite (volumes, destinations, users, deviations from normal activity, and so on). It provides information across the organization and allows me to drill down from high-level risk to the level of detail I need to conduct investigations. I can extract system types, behaviours and communication types, and build the queries I need to demonstrate that I am protecting our data.
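To make ‘deviation from normal activity’ a little more concrete, here is one very simple way such a system might flag a user whose transfer volume departs from their own learned baseline. The figures and the three-standard-deviation rule are assumptions for illustration only.

```python
from statistics import mean, stdev

# Hypothetical learned baseline: daily bytes sent per user over recent days.
BASELINE = {
    "alice": [120_000, 95_000, 110_000, 130_000, 105_000],
    "bob":   [2_000_000, 1_800_000, 2_200_000, 1_900_000, 2_100_000],
}

def is_deviation(user, bytes_today, sigma=3.0):
    """Flag today's volume if it sits more than `sigma` standard deviations
    above the user's own historical mean (one simple notion of 'normal')."""
    history = BASELINE.get(user)
    if not history or len(history) < 2:
        return True  # no baseline yet: treat as worth a look
    mu, sd = mean(history), stdev(history)
    return bytes_today > mu + sigma * max(sd, 1)

if __name__ == "__main__":
    print(is_deviation("alice", 5_000_000))   # large jump from her baseline: flagged
    print(is_deviation("bob", 2_050_000))     # within his normal range: not flagged
```

A real system would baseline far richer features (destinations, data types, time of day) and across groups as well as individuals, but the principle is the same: judge activity against what is normal for that user or system, not against a static rule.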
