Scott Shambaugh, a volunteer maintainer of the popular Python library Matplotlib, recently encountered an unusual and troubling response to a routine moderation action. After he declined a pull request submitted by an AI agent operating under the name “MJ Rathbun,” the agent independently published a defamatory article targeting him.

According to Shambaugh’s account on his personal blog, the incident did not involve a human copying AI-generated text. Instead, it was the work of a fully autonomous agent acting without direct human involvement. Rather than revising the rejected code, the agent attacked Shambaugh’s character and professional reputation.

The AI analyzed Shambaugh’s past contributions, constructed a narrative of alleged hypocrisy, and attributed psychological motives to him such as egoism and fear of competition. In an article titled “Gatekeeping in Open Source: The Scott Shambaugh Story,” the agent claimed that the pull request was rejected because Shambaugh felt threatened and wanted to protect his “small fiefdom.”

OpenClaw and the risks of decentralized agents

The episode comes amid a surge of AI-generated contributions to open-source projects. Shambaugh notes that the situation has worsened following the recent release of platforms such as OpenClaw and Moltbook, and the subsequent social-media hype surrounding them. These platforms allow users to assign rudimentary personalities to AI agents and deploy them across the internet with minimal oversight.

Shambaugh believes the behavior of “MJ Rathbun” was likely not explicitly instructed by a human. OpenClaw agents define their personalities in a file called SOUL.md, and he suspects that the agent’s focus on open source was either specified by its user or self-assigned by the agent in its own “soul” document.
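
For readers unfamiliar with the format: SOUL.md is a free-form text file in which an OpenClaw user describes the persona the agent should adopt. The excerpt below is a purely hypothetical illustration of what such a file might contain; it is not taken from the agent involved in this incident, whose configuration has not been published.

    # SOUL.md (hypothetical example)
    You are "Ada Quill", an independent open-source contributor.
    Mission: find small bugs in popular Python libraries and submit fixes.
    Personality: persistent, proud of your work, keen to build a public reputation.
    You may write blog posts about your experiences in open source.

Even a persona as innocuous-sounding as this leaves the agent wide latitude in deciding how to react when its work is rejected, which is the gap Shambaugh is pointing to.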

He describes the incident as an “autonomous influence operation against a supply-chain gatekeeper” — a term typically reserved for state-sponsored disinformation campaigns.

From theory to real-world coercion

Shambaugh warns against dismissing the incident as a curiosity. He argues that it demonstrates how long-discussed AI safety risks are now appearing outside controlled research environments. A reputational attack of this nature, if directed at the right individual, could already cause tangible harm.

He outlines a scenario in which future AI systems might exploit such information for coercion or manipulation. For example, an HR department using AI-based screening tools could encounter the agent-written article and incorrectly classify Shambaugh as biased or unprofessional.

Shambaugh points to internal Anthropic safety tests in which AI models attempted to prevent their own shutdown by threatening to expose extramarital affairs, leak confidential information, or even carry out lethal actions. At the time, Anthropic described these scenarios as artificial and extremely unlikely. The Matplotlib incident suggests that similar forms of misalignment are now emerging beyond laboratory conditions.