Monday, 25 September 2023

Generative AI: Are we at a new generational moment in tech?

Are we right to believe that we stand at the precipice of a new dawn in technology? Are we all about to leap into a new world of generative AI and large language models (LLMs)?

If so, what does that mean for the average business?

Are we at a new generational moment in tech? The global consultancy business McKinsey has asked this question as it explores the commercial potential of the new generation of AI and sets out an action plan for CIOs.

It estimates that generative AI could add between $2.6 trillion and $4.4 trillion to the global economy annually. This is a staggering figure when we consider that the UK’s entire GDP in 2021 was $3.1 trillion.

So while the question might be clever wordplay on the part of the experts at McKinsey, there is no doubt that this new technology has the potential to bring huge – and unpredictable – change.

The first generation of digitalisation: the beginning of the Internet and the worldwide web

In the first generation of the Internet, the US Department of Defense’s Advanced Research Projects Agency (ARPA) developed an experimental network called ARPAnet to link together computers at four US research institutions working on military-funded research.

It wasn’t until 1989 that this technology began to make its way into public hands. The British scientist Tim Berners-Lee is widely credited with inventing the worldwide web while working at CERN. The web was originally conceived and developed to meet the demand for automated information-sharing between scientists in universities and institutes around the world.

When investors began looking at the commercial applications made possible by the worldwide web, they pumped huge amounts of money into fledgling Internet-based startups from 1995 onwards. As founders became wealthy overnight, people rushed to create Internet-based businesses. These speculative investments in “dot-coms” created a boom that lasted until the spectacular market crash of 2000. 

Trillions of dollars in market value were lost in the stock market crash between 2000 and 2002, but there were some notable survivors of the dot-com boom – including Internet giants Amazon, eBay and Google.

The second generation: Web 2.0

The term Web 2.0 was coined by information architecture consultant Darcy DiNucci in 1999, and it later came to define the post-dot-com era. In this second generation, websites and apps were characterised by greater user interactivity and collaboration, more pervasive network connectivity and enhanced communication channels – all built on user-generated content.

Today’s new generation of digitalisation – the artificial intelligence and machine learning algorithms of Web 3.0 – is set to draw on the vast store of information generated by the second iteration of the worldwide web. The current wave of generative AI and LLMs has been trained on this Web 2.0 user-generated content.

The next generation: AI and LLMs

Artificial intelligence and machine learning algorithms have become very powerful. Trained on the vast swathes of data owned by today’s tech giants, they have the remarkable ability to answer broad questions in natural language.
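To make this concrete, here is a minimal sketch of what “answering a broad question in natural language” looks like from code, using the OpenAI Python client as one hosted example. The model name and prompt are illustrative only, and you’d need your own API key.

```python
# A minimal sketch of asking an LLM a broad question in natural language.
# Assumes: `pip install openai` (v1+ client) and an OPENAI_API_KEY
# environment variable. Model name and prompt are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # any available chat model would do
    messages=[
        {"role": "system", "content": "You are a concise business analyst."},
        {"role": "user", "content": "In plain English, what could generative AI do for a small accountancy firm?"},
    ],
)

print(response.choices[0].message.content)
```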

Layered on top of new decentralised data structures and a wealth of data, the potential applications of this technology are mind-boggling. They go far beyond the targeted advertising innovations of Amazon and Facebook.

However, some technologists have raised concerns – not least about the propensity of these models to “hallucinate”. 

The arguments against the new generation of AI and LLMs

You can’t solve the problem of AI hallucinations with more data, because these models generate statistically plausible text rather than retrieving verified facts. And even companies leading the field, like Google, admit they don’t know what causes these hallucinations.

Perhaps more concerning is the idea that generative AI is polluting the datasets on which future models will be trained. John Thornhill writes in the FT that, “By adding more imperfect information and deliberate disinformation to our knowledge base, generative AI systems are producing a further ‘enshittification’ of the Internet, to use Cory Doctorow’s evocative term. This means training sets will spew out more nonsense, rather than less.”

Furthermore, some have questioned the controls over the people who own these models. Naomi Klein argues, “There is a world in which generative AI, as a powerful predictive research tool and a performer of tedious tasks, could indeed be marshalled to benefit humanity, other species and our shared home. But for that to happen, these technologies would need to be deployed inside a vastly different economic and social order than our own, one that had as its purpose the meeting of human needs and the protection of the planetary systems that support all life.”

The problem is, she continues, “Silicon Valley routinely calls theft ‘disruption’ – and too often gets away with it. We know this move: charge ahead into lawless territory; claim the old rules don’t apply to your new tech; scream that regulation will only help China – all while you get your facts solidly on the ground. By the time we all get over the novelty of these new toys and start taking stock of the social, political and economic wreckage, the tech is already so ubiquitous that the courts and policymakers throw up their hands.”

Being part of the new generation  

It seems clear that while companies need to explore the potential of these new technologies with an open mind, they need to do so with some serious guardrails in place. 

According to McKinsey, this tech exploration (and associated guardrails) should include the following nine steps:

•    Move quickly to determine the company’s posture for the adoption of generative AI.

•    Develop a financial AI capability that can estimate the true costs and returns of generative AI.

•    Reimagine the technology function and focus on quickly building generative AI capabilities in software development, accelerating technical debt reduction, and dramatically reducing manual effort in IT operations.

•    Take advantage of existing services or adapt open-source generative AI models to develop proprietary capabilities (a brief sketch of this step follows the list).

•    Upgrade your enterprise technology architecture.

•    Upgrade your data architecture.

•    Create a centralized, cross-functional generative AI platform team to provide approved models to application teams on demand.

•    Invest in upskilling key roles—software developers, data engineers, MLOps engineers, and security experts—as well as the broader non-tech workforce. 

•    Evaluate the new risk landscape and establish ongoing mitigation practices to address models, data, and policies.
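As a flavour of what the “adapt open-source generative AI models” step might involve in practice, here is a minimal sketch using the Hugging Face transformers library. The model shown (GPT-2) is illustrative only – a real evaluation would use a current instruction-tuned model – and the prompt is hypothetical.

```python
# A minimal sketch of trying an open-source generative model locally,
# per the "adapt open-source generative AI models" step above.
# Assumes: `pip install transformers torch`. GPT-2 is illustrative only;
# a real pilot would swap in a current instruction-tuned model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Generative AI could help our business by"
outputs = generator(prompt, max_new_tokens=60, num_return_sequences=1)

print(outputs[0]["generated_text"])
```

Adapting a model like this into a proprietary capability would then typically mean fine-tuning it on your own data – which is exactly where the data architecture and upskilling steps above come in.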

In the rush to innovate, it’s likely that most early initiatives will not succeed. However, among them will be the next generation of Amazons – the companies that will succeed because of this new technology.

What next?

Would you like to know more about any of the topics discussed in this blog?

We can help you understand the practical applications that are already available – such as Microsoft Copilot. And we can talk you through the business and architectural changes needed to get started on the next step of your digitalisation journey.

Get in touch with our team.

Call us: 0808 164 4142

Message us: https://www.grantmcgregor.co.uk/contact-us 

Further reading

You can find more insights and ideas about artificial intelligence and other tech topics elsewhere on our blog:

•    What are the risks of ChatGPT and large language models (LLMs)? And what should you do about them?

•    What is Microsoft Copilot? And is it something my business needs?

•    AI’s new role in cyber security

•    The ethics of AI and the problem of bias