The concept of cloud computing finds its roots as far back as the 1960s. Psychologist and computer scientist Joseph Carl Robnett Licklider, widely known as JCR Licklider, formulated the earliest notions of the concept through his idea of an ‘Intergalactic Computer Network’. It was inspired by his time leading the Information Processing Techniques Office at the US Advanced Research Projects Agency (later DARPA), during which he faced numerous challenges in trying to establish a time-sharing network of computers. His 1968 paper The Computer as a Communication Device, co-authored with Robert Taylor, outlined a vision in which every person on the planet was interconnected and could access data and programs from any site or location.
More than five decades later, the tech powerhouses of this world – IBM, Google, Amazon – have taken the concept and run with it. By harnessing the public cloud, they supply companies with virtually unlimited compute and data resources on a genuine pay-as-you-go model: one that requires no on-premises infrastructure, provides cutting-edge technology on demand, and offers a system that is both robust and secure.
Over the past year or so, we’ve witnessed an expansion in the definition of the cloud. Previously, cloud technology was concentrated in the public sphere, offering computing services over the internet on either a free or pay-per-use basis. Today, however, the cloud encompasses a whole spectrum of services – from private infrastructures for individual enterprises to edge devices and the far reaches of the Internet of Things (IoT).
“I think enterprises are starting to look at a much wider landscape with regards to compute options, and whether we can maintain a common architecture across them,” said Steve Robinson, General Manager of Cloud Technical Engagement at IBM. “They’re also considering more fluidity in terms of where their applications and workloads run, and a much richer set of considerations than just simply, ‘how quickly can I get on a public cloud provider?’”