
What’s changing inside the data centre?

Andy O’Kelly

Chief Architect eir Business


How do you optimise and orchestrate services across what have historically been independent domains of LAN, Storage Area Network, servers and security controls? Andy O’Kelly looks at how the data centre is evolving.

As each of these domains has been virtualised, operational friction has grown between them, with unaligned tools and teams making co-ordinated virtualisation of business services difficult – a Tower of Babel.

The cloud giants – Amazon and Google – have been driving an agenda for simplified commodity hardware building blocks under software control, minus the cost premium or proprietary/territorial beef of domain ‘brands’ with heritage. Gartner is tracking this as the ‘White Box’ hype.

Enterprise customers who are familiar with the consumption models and programmable nature of those cloud services want similar flexibility around the IT resources that continue to reside within their own infrastructure footprint. Wouldn’t it be great if that footprint had all the benefits of cloud, while residing in your own data centre?

SDN frees you up to focus on what to do, not how to do it

Software Defined Networking (SDN) is an abstraction of the network topology (all that switch plumbing) from the control plane, allowing you to concentrate on what you want the network to do, not how. That control is opened to the developer, who can now program the network without the intercession of a network admin intermediary, something of a heretical notion for many networking veterans. The environment is controlled declaratively (‘what to do’), removing the lag and complexity of ‘how to do it’.
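
To make that concrete, the sketch below shows what declarative, intent-based control can look like to a developer: describe the connectivity you want and let the controller work out the switch plumbing. The controller URL, endpoint and payload schema here are hypothetical, for illustration only, not any particular vendor’s API.

```python
# A minimal sketch of declarative network intent, assuming a hypothetical
# SDN controller with a REST API. Endpoint, schema and token are illustrative.
import requests

CONTROLLER_URL = "https://sdn-controller.example.com/api/intents"  # hypothetical
API_TOKEN = "replace-with-a-real-token"

# 'What to do': allow the web tier to reach the database tier.
# Not 'how': no VLANs, trunk ports or per-switch CLI involved.
intent = {
    "name": "web-to-db",
    "source": {"endpoint_group": "web-tier"},
    "destination": {"endpoint_group": "db-tier"},
    "action": "allow",
    "ports": [{"protocol": "tcp", "port": 5432}],
}

response = requests.post(
    CONTROLLER_URL,
    json=intent,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    timeout=10,
)
response.raise_for_status()
print("Intent accepted:", response.json().get("id"))
```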

A related area is ‘DevOps’, where the operation of IT is optimised by developers, adding a pace and agility underpinned by the flexibility of cloud, in contrast to the rigidity of even centrally virtualised server environments. The enterprise IT heritage companies are increasingly focussing on this area rather than taking on public cloud head-to-head. For customers who want to retain physical control of their servers, driven by legal and territorial concerns (sharpened by the Snowden disclosures and the challenges to the Safe Harbor framework), keeping an eye on the disruption of the cloud giants makes sense.

It would be naïve to think that the cloud giant blueprints (which are published, like Facebook’s ‘Introducing data center fabric, the next-generation Facebook data center network’) will apply to every enterprise. There is a growing maturity around the bounded cloud potential of a ‘lift and shift’ of so-called legacy applications to equivalent machines procured on an op-ex cloud basis: most of the complexity and restriction of the code will remain, and only a re-write of the application changes the game, targeting a more atomic, micro-process-based architecture where service failure is expected, rather than presuming high availability of ‘Big IT’ processes at considerable hardware and software premium.
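
As a rough illustration of the difference, a re-written service call might expect failure rather than presume availability, as in the sketch below. The pricing service and its URL are hypothetical; the point is the timeout, retry and graceful degradation built into the caller.

```python
# A minimal sketch of 'design for failure': the caller assumes the
# downstream service can and will fail, and degrades gracefully.
# The service URL and response fields are hypothetical.
import time
import requests

PRICING_SERVICE = "https://pricing.internal.example.com/quote"  # hypothetical

def get_quote(item_id: str, retries: int = 3, backoff: float = 0.5):
    """Call a small, single-purpose service, expecting it may fail."""
    for attempt in range(retries):
        try:
            resp = requests.get(
                PRICING_SERVICE, params={"item": item_id}, timeout=2
            )
            resp.raise_for_status()
            return resp.json()["price"]
        except requests.RequestException:
            time.sleep(backoff * (2 ** attempt))  # exponential backoff
    return None  # degrade gracefully: the caller shows 'price unavailable'
```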

Still, for new applications, particularly those targeted at digital transformation, enterprise IT are looking enviously at the agility and scale of the cloud giants. The perennial holy grail of ‘open standards’ continues to appeal too, given how profound the potential lock-in with the runaway public cloud winners could be.

The plumbing may not be as hyped as the above, but it still needs to get done.

So what is changing inside the data centre?

Until a couple of years ago, data centre design was based on a classical core/distribution/access structure, with lots of ‘top of rack’ access switches feeding into distribution switches, which in turn fed into a core. This was not aligned to the evolving operational needs of the data centre: virtual machine mobility features expected pervasive VLANs, introducing problematic Layer 2 requirements that had been dreaded in campus networks for years, with spanning tree continuing to undermine stability and block otherwise usable capacity.

The tiered model was also very much based on traffic profiles being primarily north/south (entering and exiting the data centre), as opposed to east/west (between servers within the data centre), which had become the prevailing pattern. At the same time, the standard unit of server connectivity has moved to 10Gig, and the density of virtual machines has also climbed, resulting in fewer, faster network ports.

Enter the new data centre model, leaf/spine, where every server-facing top-of-rack leaf switch connects to every spine on over-provisioned uplinks, with the spine performing a simplified forwarding task based on the consistent, predictable two hops between any two leaves. This scales out very well, and is a cost-effective way of increasing data centre network capacity. Whatever about the differences of approach between the SDN leaders, VMware (with NSX) and Cisco (with ACI) both agree that the leaf/spine model is best for the data centre.
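
A back-of-the-envelope sizing sketch shows why the model scales out so well; the figures below are illustrative, not a reference design.

```python
# Illustrative leaf/spine sizing: 8 leaves, 4 spines, 48 x 10G server
# ports per leaf, one 40G uplink from every leaf to every spine.
LEAVES = 8
SPINES = 4
SERVER_PORTS_PER_LEAF = 48
SERVER_PORT_SPEED_G = 10
UPLINK_SPEED_G = 40
UPLINKS_PER_LEAF = SPINES  # every leaf connects to every spine

downlink_g = SERVER_PORTS_PER_LEAF * SERVER_PORT_SPEED_G  # 480G per leaf
uplink_g = UPLINKS_PER_LEAF * UPLINK_SPEED_G              # 160G per leaf
print(f"Fabric links: {LEAVES * SPINES}")                 # full leaf-spine mesh
print(f"Oversubscription: {downlink_g / uplink_g:.0f}:1") # 3:1 here
print("Hops between any two leaves: 2 (leaf -> spine -> leaf)")
# Scaling out is additive: each extra spine adds an uplink per leaf,
# raising fabric capacity and lowering the oversubscription ratio.
```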

As Chief Architect in eir Business, Andy provides vision and direction on emerging business and technology trends, and promotes eir solutions to key customers.

Andy’s twenty-eight years’ experience in the ICT industry spans the public sector, a market data software company, and enterprise network services, in roles ranging from technology expert to business management at Managing Director level.

Andy is a graduate in Computer Science from Trinity College Dublin.