In recent weeks, former safety workers and researchers at OpenAI and Anthropic have gone public with their grievances, sparking a debate about the safety of artificial intelligence.
Mrinank Sharma quit as a safety researcher at Anthropic, and wrote on X, “The world is in peril,” describing the constant pressure at work “to set aside what matters most.” Days later, Zoë Hitzig, a researcher at OpenAI, announced her resignation in The New York Times, saying she had “deep reservations” about its strategy to target users with ads. Then came news that OpenAI had fired a safety executive — who had raised concerns about the launch of erotic content on ChatGPT and alleged sexual discrimination — and disbanded its mission alignment team.
The recent moves come at a time of rising concerns about job losses from AI, the mental health effects of AI chatbots, and the environmental impacts of AI data centers. While some countries are rolling out AI regulations, President Donald Trump has signaled his disinclination to curb AI. The growing economic and political clout of AI companies, combined with their close ties to the Trump administration, has had a chilling effect on workers, Mary Inman, who has represented whistleblowers for over three decades, told Rest of World.
Inman is a founding board member of Psst, a nonprofit that provides a secure digital safe for disclosures, as well as legal support to whistleblowers. The digital safe — which matches an individual’s concerns with those of other employees at the same organization — makes it easy for even workers in overseas locations with few protections to participate, said Inman, who has represented Facebook whistleblower Frances Haugen, and Tyler Shultz, the Theranos whistleblower.
The interview has been edited for length and clarity:
What are some ways that AI companies discourage their employees from speaking up?
The biggest problem is that not only do they have nondisclosure agreements, they also have a mandatory arbitration clause, which means disputes never see the light of day. Disputes are resolved basically in a black box, not in a court, and not made public. What we also see is that the confidentiality agreement that you sign on your way in is very different from the one that you sign on the way out; it’s much more restrictive. [Anthropic has committed to reviewing and updating its whistleblower policy. OpenAI revised its policy following criticism, including from its own employees.]
The confidentiality agreements often contain gagging language that violates an SEC [Securities and Exchange Commission] rule that says you can’t interfere with a whistleblower’s ability to report to external regulators. Companies can also have a nondisparagement clause, and if people refuse to sign it, they are told they will lose their equity. The [Berlin-based] AI Whistleblower Initiative has called on AI companies to publish their whistleblowing policies, and only a few have.
I represented Frances Haugen, one of the Facebook whistleblowers. Even though it wasn’t that long ago, it feels like an entirely different time. What’s different now is that all the big AI firms think they’re untouchable. They’re emboldened by the AI arms race and the narrative that we have to win it before China does. There’s no appetite to slow them down or to curb them. The industry is vulnerable because there are so many immigrants in it, and the incentive to do something under this administration is minimal. The impediments are very high, but some people are speaking up, so there are glimmers of hope.
How does Psst enable whistleblowers to come forward?
Psst was founded to try and provide a safe space to encourage AI whistleblowers, in particular, to speak up. For whistleblowers, there’s often a first-mover problem. The digital safe that people deposit into is encrypted, and we don’t unlock the keys until there are matches. We will have a lawyer contact you and say, “There are two more reporters, they are interested in pairing up if you’re interested, would you reconsider?” So it’s sort of like Tinder for whistleblowers. It’s trying to collectivize the act of whistleblowing, creating safety and strength in numbers. You may not risk your career based on your one little piece of the puzzle, but you’ll think about risking your career if others have brought other pieces, and it increases the odds that it will have an impact. If you can be part of a collective, you can be anonymous.
There’s also the global piece of it: You have content moderators in Nigeria, data labellers and robot operators in the Philippines. India has a huge tech industry, but they criminalize certain whistleblowing behaviours. With Psst, even overseas workers can deposit information. Most importantly, you can get free legal advice. Whistleblowers need psychosocial support, they need various types of legal support; we’re here to make sure that they’re not silenced by the documents that they’re asked to sign on their way in and out. We are saying to these tech workers: “Don’t sign these without getting free legal advice. Talk to us before you sign.” We don’t want them to feel disempowered: “Oh, I’m in sub-Saharan Africa, what can I do?”
What are some potential areas in the AI industry that workers can blow the whistle on?
It can be about investor harm: Stuff happening in other parts of the world could rise to a level that it could be considered a security risk. It could be about AI washing: Any number of industries are claiming that they’re capable of so much more because of AI when that’s not the case. There could be money laundering, antitrust concerns, or facilitating work in a sanctioned country. There can also be ethical concerns about copyrights or the environment. The frontier AI labs are all paired with military contractors: Anduril with Meta, Anthropic with Palantir. [Anthropic is battling the Pentagon after it was revealed that Claude was used in the capture of Venezuelan leader Nicolás Maduro.] AI is used in border surveillance. So I do think you can get to be a whistleblower in some of these places.
People are more suspicious of AI; there’s more skepticism. My children’s generation looks at Silicon Valley differently because of Elizabeth Holmes and Sam Bankman-Fried. So the bloom is off the rose for Silicon Valley, I think. A lot of people have a more jaundiced eye towards tech now. So maybe we’ll have another awakening.
