It's Happening Again, Just Like In Covid 2020, But This Time With AI

3 March 2026

13 min read

AI is moving faster than most can adapt. As industries transform in real time, the real risk isn’t the technology; it’s whether people and businesses can keep pace with it.

The world is changing faster than most businesses can keep up with. Change Agents brings together the people actively reshaping their industries – ahead of SXSW London.

In a recent viral essay, 'Something Big Is Happening', AI founder Matt Shumer compared the current AI moment to February 2020, just before COVID – when the warning signs were visible, but most people had not yet grasped the scale of what was coming.

If February 2020 was about underestimating what lay ahead, he said, this current AI moment is about whether we prepare the population fast enough now that it’s here. Just look at the news last month about agentic AI: it created a rip-off version of Monday.com in less than an hour, sending the stock down by more than 20%.

Tech London Advocates & Global Tech Advocates founder Russ Shaw CBE has publicly called for six months of AI and digital training for every adult in the UK. In his view, the real risk is not whether AI advances. It’s whether AI literacy keeps pace, or leaves people behind.

Ahead of SXSW London in June, where we are covering AI, Innovation and London 2050 across a series of conference talks, we spoke to Shaw about why AI training needs to happen at scale and what happens if it doesn’t.

SXSW London: A piece went viral recently that likens the current AI moment to what we experienced pre-COVID in February 2020. Is that too alarmist or not alarmist enough?

Russ Shaw: Look, it's probably not alarmist enough. I think we have to be ready. There is substantial change already underway, and a much larger amount of change about to hit us. People have just started to use AI. But when you look at tools like agentic AI and where that's going, this whole process is really starting to accelerate quite dramatically. And I do worry that UK PLC – larger companies, smaller companies – are not adapting and embracing AI quickly enough to really understand how it works. And not just from a cost-cutting perspective, but how it can fundamentally transform how businesses operate.

You have AI now that can essentially do jobs for you, which is what that piece covers.

I spend a lot of time with startups, and the guy who runs our Tech Boston Advocates group – a guy called Jesse Witkowski – has just published a book called The Entrepreneurial Golden Window. The premise is really interesting. He said the startup VC model that's been out there for decades is being disrupted. Why? Because more and more people are saying, 'I can become an entrepreneur, but I don't need to know coding or hire coders, because I can use AI. I don't need to go out and raise half a million or a million or two million. I can iterate over and over again until I've got something tangible and worthwhile.' And if they need to build a company, they're building a skeleton organisation that helps them accelerate growth in a cost-effective way. So many people now are touching AI and saying, 'I can see my job is going to go away in a year or two – I'm going to start doing this in my spare time, and if something takes off, fantastic.' I think we're going to see a whole new wave of entrepreneurs emerge. Many won't be successful, but the wave is coming.

Where does the UK stand in terms of AI readiness compared to the US and Europe? Are you saying we’re behind?

I'm saying we're behind the curve vis-à-vis the US. I suspect we're probably ahead of the curve vis-à-vis Europe. This is the third largest tech ecosystem in the world, after the US and China. It is a global hub for AI, which started – we're in 2026 now – about eleven or twelve years ago, when Google acquired DeepMind. Where I'm worried is that we're kind of saying, 'God, the UK is this great hub for AI,' and we've got some really cool things going on with companies like Google DeepMind – but I'd like to see British businesses, from the larger corporates down to smaller and medium-sized ones, say, 'Yeah, we're embracing AI too.' That's where I feel like we're behind the curve. All of this investment is flowing into the sector, but we need more proof points coming out of UK PLC that demonstrate we are truly a world leader in AI.

Right.

People always ask me: is AI going to kill more jobs, or create more jobs? I say, longer term, it will create more jobs. In the near term, we're going to see a dip, where jobs will be cut, because businesses are going to be under pressure to drive down costs while we're all figuring out what AI is really going to do for us. And when that plays out, I think you're going to see a whole wave of job creation. What AI is eliminating right now is a lot of the mundane, routine tasks, but it's forcing workers to use their minds to judge, evaluate, and analyse.

How does the UK compare to other countries in terms of AI policy and guard rails?

I think the US is further ahead than most. China is hard to decipher, because they're in a very different ecosystem. Although what's interesting to me about China and players like DeepSeek is that they are pushing an open-source route on AI, whereas the US players are looking at a much more closed-source approach. So you're going to have two different models for people to choose from. My preference is usually to go open source – I think it's the more democratic way to do it, although I think companies will be nervous about open source and will want to make sure that what they're doing with their data and their large language models is truly protected from a security point of view. That's one big issue to grapple with.

I think the second is: in terms of AI readiness, the government came out just over a year ago with its AI Action Plan, which I thought was really good. And I think the UK has been smart about it. Some people may criticise this, but in terms of how you regulate AI from a policy point of view – the EU moved quickly, and I applaud them for that, but I think they've taken a one-size-fits-all approach, and I don't think that works. The reason the UK is slower in its regulatory approach is that it's basically saying: what applies to financial services and fintech doesn't apply to automotive, and it doesn't apply to defence. What will come out from the UK is much more of an industry-sector-by-sector approach to how we think about AI.

Sure.

The UK is in a really interesting position. You've got government strongly promoting an AI agenda through the AI Action Plan. You've got investors pouring money into the UK. And you've got a regulatory regime that is supportive of startups and scale-ups, while also saying: let's not cut this off before we know exactly where it's going, and let's apply good, sensible British rule of law – which is highly respected around the world. So we're in a good place. What I want to see are two things: better adoption amongst British businesses, and better training of British workers on AI.

The government has its Tech First initiative, which was announced at London Tech Week last year by the Prime Minister – £187 million coming largely from the private sector, with eleven companies signed up for it. That's now being deployed, which is good. However, there are mixed views on it. Some people are saying the providers of those courses are mainly US providers – so are you building in an inherent bias towards US AI companies? That's a fair critique. I think the government has since brought in a number of British companies, which is good.

Who should be paying for the AI training? Is it the government, employers, individuals?

There probably needs to be some degree of government funding, but the bulk of it is coming from private-sector companies. I applaud that, because the government doesn't have a lot of money to spare, and we don't want that to slow things down. And many of the beneficiaries of the training will be the companies doing the training. So I think that's a good thing.

I would like to see academia – universities in the UK – step up more, so that it's not just entirely private-sector driven, and it's not just about using these tools, but about the critical thinking that sits behind them. You can go in and sign up for a 30-minute or 60-minute course on how to access ChatGPT and what steps to follow – okay, great, that's a start. But then there has to be a whole element of: why are you using AI?

So many people I talk to say, 'Oh, I throw stuff into AI, and what I get back is a starting point, and then I have to do A, B, C, D, and E.' Great — over time, that will become less and less necessary. But how do we get people to really critically evaluate, think, and deploy AI in such a way that uses much more advanced thinking, rather than just following step A, B, and C? That's what we need to achieve, and that's where I think universities play a key role.

What are the first steps companies should be considering in regard to AI training? Is it purely about prompting, or should they be thinking steps ahead?

I think there's a spectrum that we need to get people through. First and foremost, there's a huge amount of confusion in many companies. HR departments, for example, are being bombarded all the time from so many providers. What's the difference between Anthropic and OpenAI and this and that? Helping people understand that these tools are rapidly changing and evolving – and therefore what should you be looking for in an AI tool that matches what you need, either as an individual or as an organisation – I think that's absolutely critical as a starting point.

Then there needs to be a component of: what can AI actually do for you in its current form, and where is it going? Take agentic AI. We're in that wave now.

And then the more advanced element is: as you embed these tools into your organisation, what does it do to your products, your services, your work culture? If everybody in a company is using AI all the time, are you stopping and pausing and having sessions where employees talk with each other, share insights and knowledge, have actual conversations – while these tools and capabilities are whirring away in the background? There's a whole trajectory that organisations and training programmes need to go through.

Very few companies are at that higher end today — it's probably the big tech companies. But I did a session with some retail tech startups about seven or eight months ago on their views on AI, and they said: 'We really want to use AI, but our databases are in such a mess, we don't even know where to begin. We need to clean up the database before we can overlay the tools.' So you've got such a wide spectrum of readiness for AI in organisations.

That's really interesting.

How are we implementing these tools culturally, and what does it mean for the workforce in terms of our day-to-day interaction? There's a lot to think about there.

On that cultural point, if I may – in companies that don't have clear strategies for how they're using AI, and that's many of the places I've spoken to, it's a bit of a free-for-all. Somebody's using ChatGPT, somebody's using Claude, somebody's using Gemini, whatever. And then if a company says, 'Okay, we're sorting this out – we're all going to use this one tool,' that might mean people can't use Gemini anymore. What about all those people who've learned Gemini? They're going to say, 'I'm comfortable with Gemini, I want to use it.' You're going to get organisational friction around the tools people have been using. But the sooner companies can figure that out and come up with guiding principles for how to use AI, the better off everyone will be. And some companies are saying 'Don't even use it, don't even touch it' – but we know people are still using it anyway. So you've got a spectrum of employee allowance around AI use that also needs to be addressed.

What's one policy change from the government that you feel would make the biggest difference in AI?

I think one of the biggest issues we're facing is that the lines are blurred between what is AI-generated and what is not. You can see it visually, you can see it in content. I would be very supportive of a government policy that says if content is AI-generated, you have to label it as such. Deepfakes, for example – it's so hard now to tell what's real and what's not. And we've seen over the past decade how much damage misinformation can do. If we don't address this with AI, it will hyper-accelerate that damage.

I look at TikTok – I scroll through there a couple of times a day just to see what's out there – and you can now see a little line towards the bottom that says this video incorporates AI. You can look at it and say, 'Yeah, I get that.' But as AI gets better, those warning labels – like we have on cigarettes – are going to become more and more critical. People need to know: I'm watching something made by AI. I may still trust it, and I think as AI improves people will feel better about it. But we need to be transparent about it.

They managed to do that with advertising content on socials, so that’s possible.

OK. So, to close this out, let’s have a final lightning question round.

Give me one word that describes where UK tech is heading.

Upwards.

The most overrated thing in tech right now?

Oh gosh. I was going to say AI, but I don't think that's the case. I would say – I don't know if it fits under tech, but I think crypto is overrated.

What's the most underrated?

I think quantum is underrated. I don't think it gets enough attention, because people struggle to understand what exactly a qubit is and what that means for their day-to-day life. I try to explain to people that quantum can disrupt things like encryption if we're not careful.

I was going to ask what keeps you up at night. But maybe that’s the answer to that.

It’s the gulf between those who are tech and digitally enabled and those who are not. There’s an opportunity here to close that gap.

SXSW London runs from 1-6 June, 2026, in Shoreditch, east London. 900+ speakers, 300+ music artists and 100+ films. Find out more here.