In another year of high profile cyber incidents, how are solutions shaping up to help enterprises respond to risk? Andy O’Kelly, Chief Architect with eir Business looks at the technology solutions available to enterprises.
With cyber security regularly hitting the front pages in the past year, the risk posed to the enterprise should be clear. Traditional security measures rapidly lose their value in the arms race of evolving threats and counter-measures, and once-reassuring perimeters are collapsing as services become virtual and mobile. At the same time, the protection of personal data will imminently be subject to regulation with real teeth, and the enterprise IT function must embrace the transformative opportunity presented by digital, or become irrelevant.
“Over half (55%) of Irish executives surveyed said that they believe their organisation is unlikely to detect a sophisticated attack on their business, a figure that has barely changed over the past two years. By contrast, only a third (33%) of executives globally say the same today, a significant drop from 56% two years ago… Although an encouraging 68% have an incident response plan including root cause analysis, two in five (42%) have no communications response strategy for a significant cyber-attack involving data compromise, and 15% stated that they had no breach detection capability whatsoever.
Furthermore, more than two out of three respondents both in Ireland and globally said that up to 50% more budget was needed to keep their organisation within its risk appetite, highlighting a requirement for increased funding within organisations to mitigate against growing cyber threats. Irish organisations however are on the right trajectory, with security budgets continuing to rise and almost two thirds (65%) of executives surveyed saying that their organisation’s information security budget had increased in the past 12 months.”
The Cisco 2017 Annual Cybersecurity Report supports this view that budget is the primary constraint in deploying advanced security products and solutions, but also calls out other significant factors. With 65% of companies using six or more security products, compatibility is the next highest inhibitor to deploying new security solutions: integrating multiple products presents management complexity that is itself a risk. Budget and compatibility are followed closely by certification and talent. The latter problem of finding and keeping the right talent is a stark one: with only 56% of the typical 5,000 security alerts being investigated on any given day, six or more evolving products to be mastered, and security on a 24×7 rotation, this is an area that is, at best, operationally straining and, at worst, broken.
Even if you can find the right talent, fixing such a capacity deficit feeds back into the budget dilemma as you compete with higher-profile global firms. This is an area screaming out for improvement through better automation and third-party service expertise.
Technology is evolving to meet the challenge. Here are a few examples.
Know your enemy
While fine-grained security protection specific to your own environment and policy will frequently require dedicated solutions closely integrated with your own digital estate, any cloud service that provides perspective and insight into evolving global threat profiles is valuable to an enterprise. The insidious threat of a zero-day attack tailored to compromise your enterprise needs to be considered, but, as WannaCry proved, you don't have to be specifically targeted by an attack to be the victim of it.
Phishing scams frequently offer a URL that resembles a valid, well-known address but points to a malware-laden web service. A good awareness programme within the enterprise is vital, but human nature can always do with some additional help. While you may miss the disguised link before clicking, the device you are using usually needs to resolve the name in the link to an IP address, and uses DNS to do so. If the DNS service can make an informed assessment of the validity and reputation of the address and the type of service it hosts, it is in the ideal place to intercede, redirecting the user to a warning page and preventing a risky connection from being made.
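The interception described above can be reduced to a simple idea: the resolver consults a reputation source before answering. A minimal sketch, in which the blocklist, domain names and sinkhole address are all illustrative (not Umbrella's actual mechanism):

```python
# Sketch: a DNS-layer reputation check, as a protective resolver might
# apply it. The blocklist entries and addresses below are illustrative.

BLOCKLIST = {"paypa1-secure-login.example"}   # lookalike phishing domain
SINKHOLE_IP = "192.0.2.1"                     # RFC 5737 documentation address

def resolve(domain: str, dns_table: dict) -> str:
    """Return the real record for clean domains, a sinkhole for risky ones."""
    if domain in BLOCKLIST:
        return SINKHOLE_IP          # user lands on a warning page instead
    return dns_table[domain]

dns_table = {"example.com": "93.184.216.34"}
print(resolve("example.com", dns_table))                  # normal resolution
print(resolve("paypa1-secure-login.example", dns_table))  # intercepted
```

The key property is placement: because almost every connection begins with a DNS lookup, a single policy point covers every device that uses the resolver, with no agent to install.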
This is how the Cisco Umbrella service (formerly OpenDNS) works, and it is a relatively easy layer of additional protection for the enterprise to deploy. In the WannaCry attack, for example, the malware checked for the existence of DNS domains that functioned as a kill-switch. As Cisco had outlined the previous December, Umbrella treats 'newly seen domains' as potentially risky places to visit, particularly where the name is obscure or seemingly random. Compromised devices that referred DNS lookups for the WannaCry kill-switch domains to Umbrella were protected: the session looked suspicious and was redirected.
Beware of ‘hidden’ threats
As encryption increasingly protects the confidentiality of network traffic by default, that traffic becomes opaque to the integrity protection previously performed by devices that inspected packet payloads to scan for known viruses or inappropriate content.
One way of overcoming this is to break the encrypted session, making the payload visible within the security device acting as a 'man-in-the-middle', and re-establish the session from the security device onwards. Doing this at scale is problematic without potentially compromising the efficiency of the network, particularly within your own secure environment, where the growing volume of 'East West' traffic inside the data centre needs to be monitored. In any case, with browsers now supporting HTTP Strict Transport Security (HSTS), inserting a 'man-in-the-middle' in front of web servers with HSTS enabled is effectively prevented. HSTS is a sensible measure to protect your connection from eavesdropping and hijacking, but as a consequence it also makes legitimate scanning impossible.
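For context, HSTS is activated by a single response header that tells the browser to insist on HTTPS for a period of time. A minimal sketch of parsing that header, with an illustrative policy value:

```python
# Sketch: parsing the Strict-Transport-Security response header that
# activates HSTS in the browser. The policy value is illustrative.

def parse_hsts(header: str) -> dict:
    """Extract the max-age (seconds) and includeSubDomains directives."""
    policy = {"max_age": 0, "include_subdomains": False}
    for directive in (d.strip().lower() for d in header.split(";")):
        if directive.startswith("max-age="):
            policy["max_age"] = int(directive.split("=", 1)[1])
        elif directive == "includesubdomains":
            policy["include_subdomains"] = True
    return policy

print(parse_hsts("max-age=31536000; includeSubDomains"))
# {'max_age': 31536000, 'include_subdomains': True}
```

Once a browser has seen this header, it will refuse to connect over plain HTTP and will not let the user click through certificate errors for that site, which is precisely what frustrates an interposed device.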
An alternative proposed by Cisco Systems seeks to passively extract the characteristics and patterns of encrypted packets – packet lengths and timings, byte distribution, encryption headers and the initial data packet – in hardware ASICs, with no impact on network performance. These encrypted traffic analytics can be used to spot abnormal traffic activity, feeding a cloud-based cognitive analytics engine that uses machine learning and statistical modelling to crowd-source a correlated global risk map, identifying threats and improving incident response.
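To make the idea concrete, one of the simplest features of this kind is the byte distribution of a payload, often summarised as Shannon entropy: well-formed ciphertext looks statistically uniform, while plaintext protocols do not. A minimal sketch of that single feature (an illustration of the principle, not Cisco's implementation):

```python
# Sketch: byte-distribution entropy as one passive traffic feature.
# Encrypted payloads approach 8 bits/byte; plaintext sits well below.
import math
from collections import Counter

def byte_entropy(payload: bytes) -> float:
    """Shannon entropy in bits per byte of the payload."""
    counts = Counter(payload)
    n = len(payload)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

plaintext = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n" * 20
random_like = bytes(range(256)) * 4   # stand-in for uniform ciphertext

print(round(byte_entropy(plaintext), 2))    # low: structured text
print(round(byte_entropy(random_like), 2))  # 8.0: uniform distribution
```

A real system combines many such features – lengths, inter-arrival times, TLS handshake metadata – and classifies the combination, which is why it can flag malware inside encryption without ever decrypting it.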
Having this level of logging and visibility of internal traffic is part of a wider response to cyber threats: a Zero Trust approach.
Implement a Zero Trust model
In more innocent times we considered an inside and an outside to our enterprise network topology, and we even had a nice Visio diagram to show the auditor. Everything inside was safe and trusted with unrestricted communication between devices and services, while everything outside was unsafe and kept at bay by a trusty firewall perimeter enforcing a set of restrictive (or not so restrictive) rules.
We know more now. A Zero Trust model assumes there is no real difference between inside and outside. Users and services inside a network are no more trustworthy than users and services outside the network. Whether you consider this to be paranoid or prudent depends on your appetite for risk, and this is likely to be informed by whether you have been on the downside of a cyber-security incident or not – and whether you are aware of it.
To move to Zero Trust, traffic between inside services should be restricted (explicitly white-listed). Protecting services from each other limits the threat surface of a vulnerable or unpatched server and restricts the propagation of malicious traffic. Doing this in an environment that is virtualised, with multiple active data centres for business resilience, and with cloud in the mix or as a future platform, is a service management challenge.
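The white-listing described above amounts to a default-deny rule: a flow between internal services is dropped unless it has been explicitly permitted. A minimal sketch, with hypothetical service names and ports:

```python
# Sketch: default-deny (explicit allow-list) policy between internal
# services. Service names, ports and flows are hypothetical examples.

ALLOWED_FLOWS = {
    ("web", "app", 8080),   # web tier may call the application tier
    ("app", "db", 5432),    # application tier may query the database
}

def permitted(src: str, dst: str, port: int) -> bool:
    """Zero Trust default: deny unless the flow is explicitly listed."""
    return (src, dst, port) in ALLOWED_FLOWS

print(permitted("web", "app", 8080))  # True: listed flow
print(permitted("web", "db", 5432))   # False: no direct web-to-db path
```

The hard part in practice is not the rule engine but populating the list: discovering which flows legacy applications legitimately need before you can deny everything else.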
The Holy Grail is to enforce a policy for an application, consistently and dynamically, independent of the underlying virtual or physical location of the service. Traditional firewalling has depended on identifying source and destination addresses, and inserting a hardware filter between them based on those addresses and approved application ports. This works fine for 'North South' traffic flows in and out of an organisation to the internet. It does not readily translate to an environment where 'East West' traffic inside the data centre needs to be secured, and where services are virtualised.
Programmable security to deal with changing risk profiles
The Cisco model for this new service world is Application Centric Infrastructure. ACI turns the data centre network into a programmable fabric. Applications are allocated to an End Point Group, which can be thought of as analogous to a DMZ that segments services (e.g. a Web Layer EPG, a Database EPG). EPGs can only communicate with each other under a contract, which behaves like a stateless firewall rule. The contract is abstracted from underlying network configuration such as VLANs and IP addresses, which are typically tied to the location and device to which a server is connected. These can change – a new VM in another data centre or cloud, a change of IP address or VLAN – and the contract policy remains enforced by the network.
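The abstraction can be sketched in a few lines: policy is keyed on EPG membership, not on addresses, so a workload can move without the rules changing. The EPG names, endpoints and ports below are illustrative, not ACI's actual object model:

```python
# Sketch: ACI-style abstraction - membership in an End Point Group,
# not an IP address, decides what a workload may reach. All names
# and ports here are illustrative.

epg_of = {}                                  # endpoint -> EPG membership
contracts = {("web-epg", "db-epg", 3306)}    # (consumer, provider, port)

def allowed(src_ep: str, dst_ep: str, port: int) -> bool:
    """A flow is permitted only if a contract links the two EPGs."""
    return (epg_of[src_ep], epg_of[dst_ep], port) in contracts

epg_of["vm-1"] = "web-epg"
epg_of["vm-2"] = "db-epg"
print(allowed("vm-1", "vm-2", 3306))   # True: covered by a contract

# vm-1 migrates to another data centre and gets a new address; its EPG
# membership, and therefore the contract policy, is unchanged.
print(allowed("vm-1", "vm-2", 22))     # False: no contract for SSH
```

The same model explains the programmability point made below: reassigning an endpoint's EPG (say, to a quarantine group) is a one-line policy change that takes effect fabric-wide.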
ACI also supports micro-segmentation, where lateral communication between services within a group is prohibited (similar to a private VLAN), but once again in a way that is abstracted from underlying configuration. ACI micro-segmentation can also leverage service attributes in VMware – the Guest Operating System, for instance – allowing a programmed response to a security advisory, such as moving a server to a Quarantine EPG prior to patching.
Security policy can be further represented and enforced in ACI through service graphs, defining permitted communication paths or service chains for an application. This might include multiple EPGs and additional virtual or physical security devices.
Crucially, ACI is programmable, providing the means of automating a rapid security incident response: a detected event could trigger a contract change.
The challenge when retrofitting such a solution in a large enterprise will be profiling what is normal and safe behaviour for legacy applications, so that rules can be inferred, validated and enforced in new contracts. Just as many a consumer internet FAQ will outrageously recommend turning off your firewall as a troubleshooting step, many an enterprise has given up on its best intentions, drilled through its own firewalls to get services within its data centres working, and chosen to hope that inside is safe.
eir Business is a co-sponsor of Dublin Information Sec 2017
For more information on eir’s security portfolio contact Andy on LinkedIn.