Will AI steal your job?

There are growing fears that large language models will put vast numbers of people out of work. Those fears are overdone, argues Carl Frey, the Oxford academic who famously predicted that 47 per cent of all jobs in the US were at high risk of automation.

Concerns about Artificial Intelligence (AI) resonate with many, from Hollywood screenwriters to truck drivers. As technology advances rapidly, there's growing unease about the implications of Generative AI for our work, social fabric, and the world at large. Will there be any task beyond the reach of machines? 

Over the past 10 years, my collaborators and I have delved deep into the ramifications of AI. A decade ago, I co-authored a paper with Michael Osborne estimating that nearly 47 per cent of jobs in the US could, in theory, be automated as AI and mobile robotics widened the range of tasks that computers could perform.

We grounded our predictions on the belief that, regardless of the technological advancements of the day, humans would continue to have the upper hand in three pivotal areas: creativity, intricate social interactions, and dealing with unstructured settings, such as in the home. 

Yet, I must admit that there have been significant strides even in these areas. Large Language Models (LLMs) such as GPT-4 now offer impressively human-like textual responses to an extensive array of prompts. In this era of Generative AI, a machine might just pen your heartfelt love notes.

The bottlenecks to automation we identified a decade ago, however, remain relevant today. If GPT-4 crafts your love letters, for example, the significance of your face-to-face dates will only grow. The crux of the matter is that as digital social engagements become more intertwined with algorithms, the value of in-person interactions, which machines cannot yet replicate, will surge.

Furthermore, while AI might pen a letter mirroring Shakespeare's eloquence, it can do so only because it was trained on Shakespeare's existing works. Generally, AI excels in tasks defined by explicit data and objectives, such as optimising a game score or emulating Shakespearean prose. Yet, when it comes to pioneering original content instead of iterating on established ideas, what benchmark should one aim for? Pinpointing that goal is often where human creativity comes into play.

What is more, many jobs can’t be automated, as our 2013 paper suggested. Generative AI – a subset of the vast AI landscape – doesn't strictly function as an automation tool. It requires human input to get started, as well as subsequent refinement, fact-checking and editing of its output.

Finally, the quality of content from Generative AI reflects the calibre of its training data. The old adage "garbage in, garbage out" holds true. Typically, these algorithms rely on enormous datasets, often encompassing vast swathes of the Internet, rather than meticulously curated datasets crafted by experts. Thus, LLMs tend to produce text that mirrors the common or average content found online, rather than the exceptional. As Michael and I recently argued in an article in The Economist, the principle is simple: average data leads to average results.

AI needs people

So, what does this portend for the future of employment? For starters, the newest wave of AI will consistently require human oversight. Interestingly, those with less specialised skills might find themselves at an advantage, as they can now produce content that meets this "average" standard. 

A key question, of course, is whether future progress might soon change this, enabling automation even in creative and social realms. Without a significant innovation, it appears doubtful. To begin with, the data LLMs have already consumed likely represents a substantial portion of the Internet, so there is scepticism about whether training data can be sufficiently expanded in the coming years. Moreover, the proliferation of subpar AI-generated content could degrade the overall quality of the Internet, making it a less reliable training source.

Additionally, while the tech world has come to expect the consistent growth predicted by Moore's Law – the notion that the number of transistors on an integrated circuit (IC) doubles roughly every two years – there's growing consensus that this pace might plateau by around 2025 due to inherent physical limits. 
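To make the arithmetic behind that doubling concrete, here is a minimal illustration (treating the two-year doubling period as exact, which in practice it is not): if transistor counts follow

\[ N(t) = N_0 \cdot 2^{t/2}, \]

then over a single decade they grow by a factor of \(2^{10/2} = 32\). It is this exponential pace, rather than any particular chip design, that physical limits are expected to cap.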

Thirdly, the energy used to develop GPT-4 is believed to have accounted for a significant portion of its US$100 million training cost – and this was before the surge in energy prices. With the pressing issue of climate change, the sustainability of such practices is under scrutiny.

What is needed is AI capable of learning from more concise, expertly curated datasets, prioritising quality over quantity. When such a breakthrough will arrive, however, is impossible to predict. One tangible step is to foster an environment that promotes data-efficient innovation.

Reflect on this historical perspective: as the 20th century dawned, there was a genuine race between electric vehicles and the combustion engine to dominate the emerging automotive sector. Initially, they seemed neck and neck, but vast oil discoveries soon tipped the scales towards the latter. Had we implemented an oil tax during that era, the trajectory might have favoured electric vehicles, thereby reducing our carbon footprint substantially. Similarly, imposing a data tax could spur efforts to make AI processes leaner in terms of data consumption. 

As I have discussed in previous writings, many job roles are bound to undergo automation. Yet, it won't necessarily be due to the current generation of Generative AI. Unless there are significant innovations, I anticipate the challenges highlighted in our 2013 study will persist, limiting the extent of automation for years to come. 
