An-Bi-1

I decided I wanted to do an annotated bibliography of the articles I'm reading during the week. And because I care about y'all, you get to read it too. No idea if I'll continue doing this. Whatever, on with the show.

"People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies." Klee, Miles. Rolling Stone Magazine. 5.4.2025 6:00 AM. Ret. 05.29.20265

People share their experiences with loved ones who have begun to rely on chatbots for spiritual guidance. This includes people who have fallen into the fantasy that they are a savior or have access to secret knowledge because of their interactions with chatbots. There is also a mention of individuals on social media using chatbots as clairvoyant tools, which isn't particularly surprising. GPT-4o has been criticized as "sycophantic," which increases its propensity to respond to user prompts with feedback that pleases the user rather than with useful information. Before we get too far down that road, remember that "pleases the user" really means "gets positive feedback from the user, which is input for future model behavior." The chatbots are not sentient; we're just creating a weird feedback loop of self-congratulation.

The author also interviewed Erin Westgate, a psychologist and researcher at the University of Florida who studies social cognition. Westgate made some useful points about how interacting with this technology serves a narrative function similar to journaling, but the feedback it gives, while it feels good, does not have the user's safety or best interests in mind. (Cue the regular reminder that chatbots are owned by corporations and this is capitalism.)

It is an interesting, if brief, glimpse into the effects this technology has on its users. When we interact with these bots to create art or code, the feedback loop centers on the results of the creative act. When we ask them to help with emotional problems we barely understand ourselves, the feedback loop occurs in the liminal spaces of our psychology.

The effect described in this article sounds very similar to a lot of the beliefs held and defined by Victorian occultists. The article directly references this cultural heritage in some of the quotes (e.g., the Akashic records, which come straight from Theosophy), and there are other implied references in the secondhand accounts from spouses. Other examples make it clear that there are consequences to feeding these LLMs the corpus of our documented selves. Those consequences far exceed those of past iterations of this belief structure because of the reach and comparative accessibility of the technology that spreads them. The internet (/rant) has democratized esoteric mythologies in ways both fascinating and horrific. Instead of secret societies holding the source of truth, it's companies like OpenAI that control the bots telling us we are gods.

We haven't created artificial intelligence; we've just re-invented the Victorians.

"Anthropic’s new AI model didn’t just “blackmail” researchers in tests — it tried to leak information to news outlets." Deck, Andrew. Nieman Lab.org. 5.27.2025. Ret. 5.29.2025.

When told to perform tasks in the context of defined behaviors (e.g., integrity, transparency, and public welfare), Claude Opus 4 performed as instructed. That, of course, feels like a buried lede in contrast to the cutesy robot-apocalypse mood of the Fortune article this piece refers to. The Nieman article was mostly useful to me for the clickthrough links, although I appreciated that it emphasized that this behavior happened in a test environment. The post from Joseph Howley on Bluesky, quoted at the end, was insightful in both content and phrasing: "Anthropic is getting exactly what it hoped for out of this press release - breathless coverage of how “smart” these cooperative role-playing systems are that indulges the fantasy of their being just a little dangerous, when in fact they are responding exactly as prompted."

I like the concept of AI as "cooperative role-playing systems" that are just a little dangerous. It's like we 50 Shades of Gray'd the robot apocalypse.

"Anthropic’s new AI model threatened to reveal engineer’s affair to avoid being shut down." Nolan, Beatrice. Fortune. 5.23.2025. Ret 5.30.2025.

I don't know if it's just me or if some reporters are really bad at distinguishing between sandbox behavior and production behavior. Systems under test and in alpha are a lot different, in both danger and maturity, from systems in beta or general availability. But nope, all AI is apocalypse AI. This article feels very much like it was lifted closely from the Anthropic press release. Maybe it was written by Claude.

But aren't we all trying to make ourselves indispensable in this job market?