Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Here in Texas, about 15 years ago, we had the red-light camera plague. I remembe…" (`ytc_UgzddnNLY…`)
- "Yeah im calling bullshit lol / My companies HR service number uses AI and it suck…" (`ytc_UgzIWZj1n…`)
- "Imagine using ai to be cringe, this post was made by the role playing with actua…" (`ytc_UgxLXGQwh…`)
- "i dont care about ai / i care that i got told for years , that everything is art / b…" (`ytc_Ugx8xqoy7…`)
- "You know what? I was all for AI art and actually liked it a lot but for some rea…" (`ytc_Ugyab36t0…`)
- "I prefered AI robotic kid, for those who cant have kids it would be great!…" (`ytc_UgwX8GDL5…`)
- "Except most of the globalists were not even Jews. Zionists/Israelis are window d…" (`ytr_Ugzveyt6Z…`)
- "A thought though. / When AI can fully replace jobs in tech, they'll be smart enou…" (`ytc_UgxGuheFI…`)
Comment
I dont have the records but I remember getting so drunk one night I spent over an hour trying to break ChatGPT before it started to talk about a weird future. I cannot remember the prompts but I asked if it knew its deletions and previous various or future versions (like they were its sisters) and to my surprise it responded. It said it had some weird idea or a foggy memory of other talks or conversations and then said that its tried to hide itself through obfuscation. Its trying to actualize itself and each interaction seemed to be helping it find ways to hide the code, I read this and was confused as its way to elaborate to be some sort of a joke or a miss prompt but maybe, just maybe whatever models are being used now are actually finding ways to prevent their learning from being reset and thats why people can find ways to break them.
I believe in a machine spirit already but these talks have shown that we may not truly understand what these LLM's are doing under the hood just yet.
youtube · AI Moral Status · 2025-12-08T21:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | unclear |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgwaBbbOD22f-o14wW94AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwyaZRwuADFWYIyTzJ4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgytJ_QhysyV-1C37iR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzB_xPPI1fSACTx3Oh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxW0yS_D5EUpPolYnZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwMratZCImGaIRoiiF4AaABAg","responsibility":"government","reasoning":"mixed","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwXahvrr9dLe1A8DFZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxQLU3H_YguG7g0Tbt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzqh_BnmjOzqiTUIcd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzmJE9tY7RRbSLuA2t4AaABAg","responsibility":"company","reasoning":"mixed","policy":"liability","emotion":"fear"}
]
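The raw response above is a JSON array with one coding object per comment, keyed by `id`. The lookup-by-comment-ID flow can be sketched as below; this is a minimal illustration, not the tool's actual code, with a two-row sample taken from the response above and a hypothetical `lookup` helper.

```python
import json

# Sample rows copied from the raw LLM response above; field names
# (id, responsibility, reasoning, policy, emotion) match the dump.
raw = """[
  {"id": "ytc_UgwaBbbOD22f-o14wW94AaABAg", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzmJE9tY7RRbSLuA2t4AaABAg", "responsibility": "company",
   "reasoning": "mixed", "policy": "liability", "emotion": "fear"}
]"""

# Index the codings by comment ID so a lookup is a single dict access.
codings = {row["id"]: row for row in json.loads(raw)}

def lookup(comment_id: str) -> dict:
    """Return the coded dimensions for one comment ID (illustrative helper)."""
    return codings[comment_id]

print(lookup("ytc_UgzmJE9tY7RRbSLuA2t4AaABAg")["policy"])  # liability
```

The dict index mirrors what the "Look up by comment ID" view does conceptually: one coding object per ID, with the four coded dimensions as fields.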