Anthropic, a leading AI startup, is looking for a chemical weapons and explosives expert to join its team in order to prevent "catastrophic misuse" of its artificial intelligence (AI) systems.
Anthropic believes that its AI model could be misused to build chemical weapons and explosives harmful to humanity, especially in the context of the ongoing conflict in the Middle East and the Russia-Ukraine war.
The job vacancy, posted on LinkedIn under the title "Policy Manager, Chemical Weapons and High Yield Explosives," offers a full-time role at Anthropic with an annual salary range of $245,000–$285,000 USD.
"This role offers a unique opportunity to shape how AI systems handle sensitive chemical and explosives information. You'll work with leading AI safety researchers while tackling critical problems in preventing catastrophic misuse," reads Anthropic's posting.
The opening is specifically for candidates holding a Ph.D. in chemistry, chemical engineering, or a related field, with a focus on energetic materials, explosives, and chemical weapons. At least 5–8 years of experience with chemical weapons or explosives is also desired for the role.
The vacancy comes in the wake of Anthropic's refusal to let the U.S. Department of War use its AI model Claude for domestic mass surveillance and fully autonomous AI weapons. The Pentagon subsequently labelled Anthropic a supply-chain risk and ordered all federal agencies to stop using the Claude model.
However, reports suggest that Claude's features have nonetheless been used by the U.S. government in the ongoing conflict in the Middle East.