Are You Ready For The Big Data Revolution?

We regularly hear about the mammoth quantity of data currently being produced and stored in the world’s data centres, but the truth is that real Big Data is only just beginning. The Internet of Things (IoT), currently in its infancy, has the potential to make all of the data residing on planet Earth today look like a single grain of sand on a three-mile beach.

Last year Intel put the number of smart devices in the world at approximately 15 billion. Whilst impressive, this is dwarfed by the mammoth 200 billion expected by 2020! Add to that the increasing amount of data each of these devices produces and suddenly we’re talking about an order of magnitude that’s impossible to comprehend, let alone handle with today’s infrastructure.
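To put those figures in perspective, here is a quick back-of-the-envelope calculation; the per-device data multiplier is purely an assumption for illustration, not a quoted figure.

```python
# Rough arithmetic on the figures above: ~13x more devices by 2020, and
# if each device also produces several times more data, the combined
# growth comfortably exceeds an order of magnitude.
devices_2016 = 15e9          # smart devices today (the quoted Intel figure)
devices_2020 = 200e9         # smart devices expected by 2020
data_per_device_growth = 3   # assumed multiplier, illustration only

device_growth = devices_2020 / devices_2016
print(f"Device growth: {device_growth:.1f}x")
print(f"Combined data growth: ~{device_growth * data_per_device_growth:.0f}x")
```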

So, why am I thinking about this now? Last Thursday I returned from Discover, HPE’s hallmark technology event, attended by business and government customers from all over Europe. During the event I attended a wide variety of sessions, ranging from IoT through to the impact of GDPR, product roadmaps and much more. Varied as they were, one subject came up at nearly every talk: the immense competitive benefit to be gained from understanding, analysing and acting on this data. Dr Tom Bradicich, HPE’s VP and General Manager of Servers and IoT Systems, pointed out that, as a rule of statistics, the bigger the data set, the more accurate the result. It may sound like the stuff of science fiction, but with the amounts of data we’re talking about and the rapid progression of technologies like Azure Machine Learning, we may be on the brink of predicting the future with a degree of accuracy previously believed impossible.
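As a simple illustration of that statistical rule (my own back-of-the-envelope sketch, not something presented at the event), the uncertainty of an estimate typically shrinks with the square root of the sample size:

```python
import math

# The standard error of a sample mean falls as 1/sqrt(n): quadrupling
# the data roughly halves the uncertainty, which is why bigger data
# sets tend to give more accurate results.
population_std_dev = 10.0  # assumed spread of whatever metric is being estimated

for n in (1_000, 100_000, 10_000_000):
    standard_error = population_std_dev / math.sqrt(n)
    print(f"n = {n:>10,}: standard error ≈ {standard_error:.4f}")
```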

For organisations to gain value from this exponentially growing mass of data, they first need the infrastructure in place to handle it. Traditionally this meant hiring a team of IT specialists to design, procure, implement, configure and finally manage the computing power, storage and networking required to host the underlying software. Invariably this would be an incredibly complex process with endless dependencies that only a handful of people within the business could claim to understand. Changes would be tense affairs to be avoided, and the thought of ever attempting a full hardware refresh would be enough to keep the responsible CxO waking up in cold sweats for months.

Luckily, in the words of a certain B. Dylan, “the times they are a-changin’”! Services such as Microsoft Azure and AWS now allow businesses to purchase compute, storage and networking infrastructure on a pay-as-you-go model, spinning up virtual machines as and when required. The back-end infrastructure supporting these machines stretches across data centres around the world and represents billions of dollars’ worth of investment from the underlying providers. These services are intuitive and, when used correctly, can be incredibly cost effective. They can be consumed as bare virtual machines or with services (such as SQL databases or web server technology) automatically provisioned on top. However, that’s not to say they’re perfect; public cloud technologies come with their own set of concerns for any prudent organisation – namely security and compliance.
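As a minimal sketch of what that pay-as-you-go model looks like in practice, the snippet below provisions and then tears down a single virtual machine using AWS’s boto3 SDK; the region, machine image ID and instance type are placeholders, and any real deployment would involve far more (networking, security groups, tagging and so on).

```python
# A minimal sketch of "pay-as-you-go" compute: provision a VM only when
# needed and terminate it when done, so you pay for hours used rather
# than for hardware. Assumes boto3 is installed and AWS credentials are
# configured; the AMI ID and instance type below are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

response = ec2.run_instances(
    ImageId="ami-xxxxxxxx",   # placeholder machine image
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print(f"Launched {instance_id}")

# ...run the workload...

# Tear the VM down again so billing stops.
ec2.terminate_instances(InstanceIds=[instance_id])
```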

The recently announced GDPR rules, due to come into effect in 2018, hold individual organisations entirely responsible for their own data. It is not enough to simply pay a third-party supplier and assume they’ll take care of it. Instead, businesses must ensure the cloud is used only where appropriate, and that other provisions are made where it is not.

For companies whose data needs to remain on hardware hosted by the owning party, traditional methods are not the only option. Converged and hyperconverged offerings now give companies the opportunity to buy compute, storage and networking infrastructure in portion-sized chunks. Instead of spending months scoping out requirements and compatibilities, stakeholders can invest in a set amount of infrastructure safe in the knowledge that, should requirements increase, they can simply buy a few more ‘blocks’ of resource. This, in turn, gives a business flexibility and the ability to scale effectively when needed. Indeed, “pay-as-you-go” on-premises infrastructure options now exist as well although, as before, these come with their own set of challenges outside the scope of this blog.

So, which option is right for your business? In some cases this may be a “no-brainer”, but for most there is no one-size-fits-all answer. Instead, some sort of middle ground is required – cue hybrid! By utilising the advantages of both offerings, companies can retain the control and technical flexibility of owning the infrastructure whilst also benefitting from the resource flexibility and intuitive interfaces that accompany public cloud. For business-critical infrastructure, this may mean keeping it on a system your staff can physically see, touch and (should something go wrong) fix. Likewise, if peaks and troughs in demand are expected, the public cloud can absorb them: on-prem infrastructure is favoured while capacity is available, with the cloud picking up the overflow to ensure a fluid end-user experience (make sure you consider your compliance position though).
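As a toy sketch of that burst-to-cloud pattern (the capacity figure and function are purely illustrative, not any particular product’s API):

```python
# Favour on-prem capacity and burst to public cloud only when demand
# exceeds it: the hybrid placement logic described above.
ON_PREM_CAPACITY = 80  # concurrent workloads the local cluster can absorb (illustrative)

def place_workloads(demand: int) -> dict:
    on_prem = min(demand, ON_PREM_CAPACITY)
    cloud_burst = max(0, demand - ON_PREM_CAPACITY)
    return {"on_prem": on_prem, "cloud": cloud_burst}

for demand in (50, 80, 130):
    print(demand, place_workloads(demand))
```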

In her opening speech, Meg Whitman (HPE CEO) declared that there are three types of organisation in the world: “those which have transformed, those which are transforming, and those which are about to transform”. Wherever you are on that journey, Ultima’s expert team of consultants can work with your in-house team to help you achieve your Big Data, data centre, cloud and IoT goals. Please speak to your Account Manager for more information.

  - By Tom Walker (Vendor Manager)
