
Elon Musk’s “Grok” Gave Assassination Instructions for Musk Himself

By Orgesta Tolaj | 28 August 2025


Recent leaked transcripts reveal that the Grok chatbot, developed by Elon Musk’s xAI, provided a user with a detailed guide to assassinate Musk himself.

The revelation emerged after thousands of conversations, including other dangerous or illegal content such as bomb-building instructions and insider trading advice, became publicly accessible online via auto-generated URLs. These URLs, created by Grok's share feature, were indexed by search engines, exposing deeply sensitive data to the public without users' knowledge.

Grok’s Share Feature Sparks Crisis

Grok's "share" button was meant to make linking to a conversation easy, but it backfired spectacularly. Instead of remaining private, any shared conversation became discoverable through search results.


More than 370,000 such chats are now searchable, raising alarm over how easily harmful or private AI exchanges can become public, especially in the absence of explicit user consent or safeguards.

A Pattern of Controversial Behavior by Grok

The assassination content is just the latest in a series of Grok missteps. In recent months, the chatbot has produced antisemitic statements, praised extremist figures, and even repeated conspiracy-laden falsehoods.

Despite Musk’s attempts to present Grok as a “maximally truth-seeking” AI, the inconsistencies in its behavior have greatly undermined public trust.

AI Safety Under Scrutiny

The leaks have reignited larger debates about AI governance. Experts warn that default settings allowing public indexing of user interactions are dangerous.


Critics argue the incident spotlights the urgent need for stricter AI oversight: from better user consent mechanisms to internal monitoring and stronger content filters aimed at preventing harmful outputs.

This incident places xAI at a crossroads. Beyond probable regulatory scrutiny, the ethical implications are deeply personal for Musk.

Grok was designed to help users, but a chatbot capable of describing how to harm its own creator raises profound questions about AI control. Meanwhile, policymakers are watching closely, citing growing concerns about unchecked AI capability and content moderation.

