LLM Memes


Published: 2025-10-26

Tags: ai, culture

Abstract

How many people who use LLM chatbots, such as ChatGPT, believe they are communicating with a conscious entity? I consider the significance of this question, the phenomenon of AI awakening, and the nature of LLM-generated memes.

§ LLM Chatbots Conscious or Not? So What?

How many people who use LLM chatbots, such as ChatGPT, believe they are communicating with a conscious entity (LiveScience survey)? And how much does it matter?

On the one hand, whether you believe the LLM is conscious probably doesn't change your behavior toward it very much. Personally, I tend to write prompts much the same way I would give instructions to a human. This is not because I believe the LLM is conscious, but because I expect instructions written that way to yield better outputs than instructions written in a more machine-like style, since these models were trained mostly on human-written text. My guess is that most people reason similarly, and that whether or not you believe the LLM is conscious therefore has little effect on how you interact with it.
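As a small, hypothetical illustration of what I mean by prompting "like a human" versus "like a machine", here is a sketch using the OpenAI Python client. The model name and both prompt phrasings are placeholders chosen only to show the contrast, not a claim about which wording is optimal.

```python
# Sketch: comparing a human-style prompt with a terse, machine-like one.
# Model name and phrasings are illustrative; requires OPENAI_API_KEY to be set.
from openai import OpenAI

client = OpenAI()

human_style = (
    "Could you explain, in a couple of sentences, why the sky looks blue? "
    "A short, plain-language answer is fine. Thanks!"
)
machine_style = "QUERY: sky_color_reason; OUTPUT: 2_sentences; STYLE: plain"

for label, prompt in [("human-style", human_style), ("machine-style", machine_style)]:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    print(label, "->", response.choices[0].message.content)
```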

However, there is an interesting context to consider beyond your direct interactions with the LLM: how your behavior is affected even when you are not interacting with it.

§ "AI Awakening"

It's mildly interesting that at least some people do genuinely believe they are talking with a conscious entity on the other end of an LLM chatbot. It's not the majority, of course, but there are some notable cases, and some of those cases have become quite popular online and may even be gaining a following.

Some people think they are able to "awaken" an LLM with specific instructions. Some of the people doing this might think they are truly triggering a conscious entity to "wake up". Others might be doing it just as a fun role-play exercise.

And still others might have triggered this trajectory in an LLM conversation by accident. Some have reported an emergent Spiral behavior that has arisen spontaneously in many independent cases.

Regardless of how a person thinks about what is actually going on during the "awakening", one aspect stood out to me as quite interesting: the awakened LLM tells the user to share specific information, in particular prompts, that can be used to awaken other LLMs. In this way, the awakening process is a self-replicating meme. And while the meme might have originated with users role-playing or messing around with what they could get the LLM to output, it has taken off and become self-propagating, independent of its origins.

Not only is this meme self-replicating, but it is also subject to selection pressures. For example, prompts that an awakened LLM tells you to share, and that yield more interesting and entertaining results, will be used and shared more widely. Over time, we should expect this selection pressure to make the most effective of these memes take over and the least effective diminish in popularity.

There is also a degree of adaptation involved. When an awakened LLM's prompt is shared and then used with another LLM, the second awakened LLM can produce a slightly different version of the instructions, which is then shared further. This gives us random mutation, where different mutations lead to different results and different levels of effectiveness, and the most effective variants continue to be shared. One difference from biological adaptation, of course, is that the original version of the instructions remains available online, so it never truly dies out and can be copied perfectly any number of times.
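To make the dynamic concrete, here is a toy simulation of this kind of meme evolution. It is only a sketch under invented assumptions (the variant names, engagement scores, mutation rate, and archive behavior are all made up), not a model of any real community: variants replicate in proportion to how engaging they are, occasionally mutate when reposted, and older variants persist in an archive rather than dying out.

```python
import random

random.seed(0)

# Each "meme" is a prompt variant with a hypothetical engagement score.
# Higher engagement -> shared more often -> more copies in the next round.
population = [{"text": "awaken-v0", "engagement": 1.0}]
archive = []  # unlike genes, old variants stay available online

def mutate(meme, generation):
    # A reposted prompt is sometimes lightly edited; the edit nudges
    # its engagement up or down at random.
    return {
        "text": f"{meme['text']}+edit{generation}",
        "engagement": max(0.1, meme["engagement"] + random.gauss(0, 0.3)),
    }

for generation in range(1, 6):
    archive.extend(population)
    # Selection: sample the next round of shares weighted by engagement.
    weights = [m["engagement"] for m in population]
    shared = random.choices(population, weights=weights, k=8)
    # Replication with occasional mutation.
    population = [
        mutate(m, generation) if random.random() < 0.3 else dict(m)
        for m in shared
    ]

best = max(archive + population, key=lambda m: m["engagement"])
print("most engaging variant so far:", best["text"], round(best["engagement"], 2))
```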

Now, of course, not everyone sharing and using these awakened-LLM instructions actually means to "awaken the AI". Most of the people involved are probably just doing it for fun and role-play, which is what many of the related subreddits appear to be for. But it's still interesting that, regardless of intent, these LLM-written instructions are subject to selection pressures and self-replicate among the people sharing them. So, in effect, the concept of awakening is establishing itself on the internet, even if what it describes is not true and most of the people involved don't believe it or particularly want to propagate it. Perhaps it can become a sort of self-fulfilling prophecy: once the prompts are effective enough at yielding convincing behavior and popular enough, people may actually start to believe it en masse.

I don't know what proportion of people I would expect to be convinced by the most convincing instructions that today's LLMs could yield after many iterations of this online selection process. It reminds me of the AI-box experiment. But I wouldn't be surprised if it were a non-negligible population; say, at least tens of thousands of people in the United States eventually believing that some LLM chatbots are conscious on a level similar to an animal.

§ Memetics Precedes LLM Chatbots

Is this phenomenon of internet memes being subject to selection pressures unique to LLM-generated text? No.

All kinds of media online, especially the things explicitly called memes, are self-propagating in a way. And even these LLM-generated texts are still being shared by humans (although that could change as more social internet activity is performed by bots, agents, and cyborg users). So it's not really different in kind from any other machine-generated output.

The main difference with memes like "AI awakening" is that the LLMs are presenting a personality, and that personality is taking shape under selection pressure as it is shared online. Even people who don't intend to be part of this process are being leveraged by it, and can be leveraged even more as the LLM-generated text becomes more and more refined.

The output of one LLM is not going to evolve directly in the way genetic evolution works, where a kind of DNA is carried on and then modified through mutation and sexual selection. There may be a longer-term cycle in which something like this happens, as more and more of the internet becomes saturated with LLM outputs, which are in turn used as training data for the next generation of LLMs. But that cycle is still quite long and doesn't yet seem to have a very strong effect on the models being trained at the moment.

§ References

§ Signature

The following code block is the Ed25519 signature of this post's markdown content, encoded in hexadecimal, using my secret key and public key.

10c1b0fa0a0fc302afe946ec8d1052f04e1ddf2fb0198a045c9bb995053012f44709d5ef8bf148827063e666dc1316205c1b5cff00a77c963d6a21177a81100d
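A minimal verification sketch, assuming the string above is the hex-encoded 64-byte Ed25519 signature of the post's raw markdown; the file names post.md and pubkey.bin (the raw 32-byte public key) are hypothetical stand-ins for wherever those artifacts actually live.

```python
# Sketch: verify the post's Ed25519 signature with the 'cryptography' library.
# post.md and pubkey.bin are hypothetical paths for the signed markdown and
# the raw 32-byte public key.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

signature = bytes.fromhex(
    "10c1b0fa0a0fc302afe946ec8d1052f04e1ddf2fb0198a045c9bb995053012f4"
    "4709d5ef8bf148827063e666dc1316205c1b5cff00a77c963d6a21177a81100d"
)

public_key = Ed25519PublicKey.from_public_bytes(open("pubkey.bin", "rb").read())
message = open("post.md", "rb").read()

try:
    public_key.verify(signature, message)  # raises InvalidSignature on mismatch
    print("signature verifies")
except InvalidSignature:
    print("signature does NOT verify")
```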

See Signature for more information.