Big Tech’s New Side Hustle: Outsourcing AI Safety
- adamorridge
- May 5
- 4 min read

I’m writing this story while scrolling through a demo from a startup that claims it can stop AI from going rogue.
A few years ago, that would’ve sounded like the plot of a bad sci-fi flick. Now it’s just a Tuesday in tech. The irony isn’t lost on me - companies building powerful AI systems are also scrambling to hire (or, more often, outsource) people to keep them in check. And make no mistake: that scramble is turning into serious business.
At a recent conference in London, I watched a founder pitch software that “flags AI risk before it becomes a headline.” The room was packed. Investors nodded. Government reps took notes. The vibe? A mix of excitement and low-key panic. Because as AI explodes in capability and complexity, no one wants to be the next cautionary tale. But here’s the kicker: Big Tech, once flush with sprawling in-house trust and safety teams, is increasingly passing the buck.
Outsourcing trust and safety - functions like moderating AI behaviour, filtering misinformation, or ensuring AI models aren’t regurgitating toxic content - is now a booming mini-industry. Startups are offering what’s essentially “risk management as a service,” and companies like Meta, Amazon, and Alphabet are quietly signing on. According to reporting by Wired in 2023, layoffs of internal safety teams have been followed by a rise in external vendors offering more agile, less controversial alternatives.
There’s a reason for that. Keeping a full-time team tasked with slowing things down in the name of ethics doesn’t always sit well with executives racing to ship new products. Contractors, on the other hand, are faster, cheaper, and carry less of the PR baggage that comes with in-house whistleblowers. It’s the classic tech move: offload the risk, keep the speed.
This shift isn’t just about optics - it’s about money. The global market for AI in security is expected to top $30 billion in 2025 and more than double by 2030, according to Mordor Intelligence. And while a chunk of that includes things like fraud detection and cybersecurity, a growing slice is dedicated to keeping AI systems aligned with human intent - and public expectation.
Governments are taking notice, too. In the UK, the Department for Science, Innovation and Technology (DSIT) is launching a prototype AI safety platform. Its mission? To help organisations assess whether their AI systems are behaving as intended. It’s part of a bigger national push to make the UK a global leader in “AI assurance” - a term that basically means testing, auditing, and certifying AI systems, much like we do with pharmaceuticals or aircraft.
According to the DSIT, the UK’s goal is to grow its AI assurance market from a few niche firms to a £6.5 billion powerhouse by 2035. It’s betting that, as AI systems get embedded into sectors like finance, health, and law, demand for third-party safety checks will skyrocket. And it wants British firms to be first in line.
That vision is being shaped by people like Ian Hogarth, chair of the UK’s Frontier AI Taskforce (since renamed the AI Safety Institute). In a recent TIME profile, Hogarth - who co-founded the music startup Songkick - described AI risk as a “security problem.” His team, backed by a £100 million government fund, is building safety research tools to model how cutting-edge AI might misbehave in the wild. It’s part national interest, part survival strategy.
Because here’s the uncomfortable truth: we’re building machines we don’t fully understand, and we’re doing it at full tilt. Companies want to move fast. Regulators are playing catch-up. And the public? They're split. In Australia, for example, trust in AI is among the lowest in the world. Only 30% of Australians believe the benefits outweigh the risks, according to a global study from the University of Melbourne and KPMG. The message is clear - if companies and governments want to avoid backlash, they need to prove that AI is being developed responsibly.
That’s where the new AI safety economy comes in. Whether it’s startups offering ethical audits, government-backed sandboxes for testing AI behaviour, or vendors screening large language model outputs, one thing’s certain: there’s money to be made in mitigating the risks of the technology we just built.
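To make “screening large language model outputs” a little more concrete, here’s a minimal sketch of the simplest version of that gate: a check that runs on a model’s draft reply before it ever reaches a user. Everything in it is hypothetical - the category names, the keyword patterns, the `screen_output` function are all placeholders, and real vendors rely on trained classifiers and much richer policies rather than keyword lists.

```python
import re
from dataclasses import dataclass

# Toy stand-ins for the policy categories a real screening vendor would
# cover with trained classifiers rather than keyword lists.
BLOCKLIST_PATTERNS = {
    "violent_content": re.compile(r"\b(build a bomb|hurt someone)\b", re.IGNORECASE),
    "personal_data": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # SSN-like number pattern
}

@dataclass
class ScreeningResult:
    allowed: bool
    flagged_categories: list

def screen_output(model_text: str) -> ScreeningResult:
    """Check a model's draft response against simple policy rules
    before it is shown to the user."""
    flagged = [
        category
        for category, pattern in BLOCKLIST_PATTERNS.items()
        if pattern.search(model_text)
    ]
    return ScreeningResult(allowed=not flagged, flagged_categories=flagged)

if __name__ == "__main__":
    draft = "Here is the number you asked for: 123-45-6789"
    result = screen_output(draft)
    if result.allowed:
        print(draft)
    else:
        print(f"Response withheld; flagged: {result.flagged_categories}")
```

The commercial question, of course, isn’t whether a gate like this can be built - it’s who writes the policy behind it, and who answers for what slips through.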
Of course, this raises its own set of awkward questions. Is it smart to let the same companies profiting from AI be the ones paying for its policing? Are contractors really incentivised to slow down innovation when their client’s success depends on speed? And what happens when something slips through the cracks?
Still, it’s hard to ignore the momentum. The demand for AI safety isn’t going away - it’s accelerating. And just like cybersecurity or data privacy before it, what started as an afterthought is fast becoming a boardroom priority.
As I closed my laptop after that startup pitch, one line stuck with me: “We don’t build AI - we make sure it doesn’t burn the house down.” In a world racing to automate everything, it might just be the most lucrative job of all.