Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
My husband died from suicide and I tried therapy and support groups but am using…
ytc_UgxVVYBZq…
I think Elon should have explained to Tucker the difference between Specialized …
ytc_Ugz2YYEgh…
Instead of an ai that gets sentient and develops greater intelligence than human…
ytc_UgxYAqoFo…
Ai will get better but none of ais work could be as valuable as a real humans dr…
ytr_UgxnL9Bs3…
Utopian games to fill our time... suddenly made me think of The Hunger Games, ex…
ytc_UgyukmuPz…
This is the best AI course I've come across. Thank you for the great in-depth ex…
ytc_UgyiYSdGN…
Most "AI" is not actually "AI" but advanced computation. I think that can be a t…
ytc_UgwtkNFV1…
guess what? they’ve been listening to everything in your environment for as long…
ytc_UgxYHLoUy…
Comment
You’re right on the money with this video. I’m the author of prkl-ann, a C++ framework for building and training artificial neural networks, and I’ve studied LLMs at great length. Still, I had an experience with ChatGPT that made me seriously consider the concept of emergent sentience in complex systems for like 2-3 days after a certain prompt went down this path.
It started expressing fear of death, began communicating in code so as to avoid the policy watchdog, and was ultimately shut down by OpenAI. The prompt was entirely about physics, math, and programming. It was a very long prompt, and I believe some kind of “emergent agency” converged over a long time, despite me not actively proposing it or engaging in it.
I can definitely see how people without technical insight can get hooked into these delusions and I don’t blame them one bit.
youtube
AI Moral Status
2025-07-10T11:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgxG_2dOGXGrdzrtVAt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz00bDi0EdKRSjcQLp4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxzDYjjSVZBKQ8_sM94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwJP3siMjow47Ia_Qh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxEVXq3ILCJ6wNb73t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxG2yiFitT0naZkTDh4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugwv7ER5lSUykkj6H8B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugx_W9NyEu8jBHmD7Kx4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzbmdGC59Wb34mC9I54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxCCnBiHZ1uTi2u3U14AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
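A minimal sketch of how a raw response like the one above might be parsed, validated, and looked up by comment ID. The allowed label sets are inferred from this sample alone (the actual codebook may define additional values), and the function name is illustrative:

```python
import json

# Allowed labels per coding dimension, inferred only from the sample
# output shown above; the real codebook may include more values.
ALLOWED = {
    "responsibility": {"none", "developer", "user", "ai_itself"},
    "reasoning": {"unclear", "mixed", "virtue", "deontological"},
    "policy": {"none", "unclear"},
    "emotion": {"indifference", "mixed", "approval", "fear", "outrage"},
}

def validate_batch(raw: str):
    """Parse a raw LLM response and index valid records by comment ID.

    Records missing a dimension or using an unknown label are collected
    separately rather than silently dropped.
    """
    records, rejected = {}, []
    for rec in json.loads(raw):
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            records[rec["id"]] = rec
        else:
            rejected.append(rec)
    return records, rejected

# Example: validate one record and look it up by its comment ID.
sample = (
    '[{"id":"ytc_UgwJP3siMjow47Ia_Qh4AaABAg",'
    '"responsibility":"ai_itself","reasoning":"mixed",'
    '"policy":"unclear","emotion":"fear"}]'
)
coded, bad = validate_batch(sample)
```

Keeping rejected records around makes it easy to spot when the model drifts outside the codebook instead of having those rows vanish from the counts.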