Artificial intelligence (AI) is making its mark. From virtual assistants to data analytics, biometric security features, fraud detection and medical diagnosis, its applications are vast and varied. And, as Tim Mercer explored in his recent thoughts on the ‘future CEO’, it won’t be long before it has earned a seat at the boardroom table too.

But what does the next phase of growth and innovation look like, until then? And with 2023 having ushered in a number of changes on a regulatory front, how might we expect the next 12 months to take shape? 

AI's rapid and somewhat unprecedented progress has given rise to a number of concerns throughout the tech space. Can we really trust these burgeoning systems? How much does AI truly understand? And if you're not concerned about sentience and security, you might be wondering whether the tech will have an adverse impact on your job long-term.

Those concerns may have even been fuelled by the more recent emergence of Google Gemini — reported to be the first AI model to outperform human experts on massive multitask language understanding (MMLU), and one that has outperformed ChatGPT's free tool in widespread testing too.

Ongoing changes to AI regulation 

The reality is, the tool itself isn’t inherently bad. It’s about what we, as humans and handlers, choose to do with it that counts. Heard of Ross O’Lochlainn’s ‘IKE-AI effect’ theory? Similar to the ‘IKEA effect’, he predicted organisations would overestimate the value of ‘soul-less, average garbage’ content, just because they made it with AI. It’s a very different sector of course, but a trend that parallels the very fears we’re seeing in other spaces.

Nevertheless, prevailing concerns have caused such a ripple that countries across the globe have moved to regulate AI more comprehensively. The European Union reached political agreement on the AI Act in December. Intended to ensure the safety of AI systems on the EU market and provide legal certainty for investments and innovation in AI, the Act will enforce harmonised rules for the development, market placement and use of AI systems in the EU, following a proportionate risk-based approach. 

Overseas, Canada implemented a voluntary code of conduct in October to govern how AI is developed within its borders. Companies that sign on to the code agree to multiple principles designed to boost data transparency, address potential bias, improve accuracy, and more. And in China, regulations have long revealed a considerable interest in generative AI and protections against synthetically generated images, video, audio, and text. 

Is ongoing scepticism limiting AI’s potential?

We’re taking back control. But at what cost? There’s room to suggest this ‘hysteria hype’ could be stifling competitive edge and innovation.

Andrew Ng — Google Brain cofounder and Stanford professor, widely regarded as one of the pioneers of machine learning — recently weighed in on these concerns by way of an experiment, in which he tried to coax ChatGPT into coming up with ways to exterminate humanity. In his newsletter, he shared how multiple prompts failed to trigger the ‘doomsday scenario’ so many are scared of. It’s a humorous example, but Ng isn’t the first AI luminary to challenge such misconceptions either.

No matter your stance, or the regulatory backdrop, AI is here to stay. And these energy-intensive applications demand advanced hardware to power their workloads. Building data centres with greater power density therefore plays a key role in enabling transformation. But with increased capacity comes greater heat output. Of course, the liquid cooling solutions required to combat this present a number of technical and environmental hurdles in themselves. So, how can organisations keep pace? 

AI dependence is shifting data centre strategy

For firms utilising multiple AI tools, colocation services could be a holy grail this year. By renting rack space in specialised data centres, businesses leverage advanced cooling systems and expertise to manage high-power-density workloads, as well as benefits such as physical security, redundancy, and 24/7 monitoring.

This, coupled with a hybrid cloud approach, facilitates seamless integration of multiple AI tools and technologies. Plus, it provides compute resources closer to end users while efficiently managing the escalating data volumes driven largely by AI dependence.

Even industries forced to hold their cards close to their chest — finance and healthcare, for example — are growing hungrier to embrace such approaches. If we’ve learnt anything in recent years though, it’s to avoid knee-jerk implementations at all costs. That’s unless spiralling budgets and mismatched workloads are what you’re aiming for this year (we doubt it). Learn more about what a successful cloud migration strategy looks like, to underpin any AI-related endeavours.

It’s uncertain, but it’s exciting — as long as we harness it the right way.

Did the cloud and colocation talk catch your interest? Talk to our experts to see how you can unlock the true potential of your infrastructure in 2024.