Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Here is my take on all you're BULL SHIT, there is just one little detail that is…" (ytc_UgxDL0Zfm…)
- "LLMs are just puzzle bots filling in blanks for tokens. They're barely even awar…" (ytc_UgzLmdD6s…)
- "What will people do with their time? We have studies on this from research into …" (ytc_UgySstxr-…)
- "It's not even this. You Kno how AI learned to play all our human games? You all …" (ytc_Ugxd7CdiB…)
- "A dude won an AI photo contest with an IRL photo to prove that AI wasn’t going t…" (ytc_UgxW2lsN6…)
- "In my opinion, Shad is a rather mediocre artist who tries to get away with his v…" (ytc_UgxNe5s1M…)
- "I dunno. Every attempt of automation from middle management has resulted in more…" (rdc_glhxjp2)
- "Thank you so much for this video. I was already aware of the problems of AI but …" (ytc_UgyhYxtng…)
Comment
Just a comment on the part of the conversation regarding “AI personhood” amd “punishment”. Alex I think said the AI can be turned off or deleted, but how can you be sure the AI hasn’t predicted this possible outcome and backed itself up somewhere to later emerge under a new “AI identity”? A human version of this is someone faking their own death, sneaking down to South America, getting plastic surgery and hiding “off the grid”. But ironically enough, technology amd AI these days is making that extreme an almost zero sum game. How do we stop tje AI’s ability to do this? Will AI have its own equivalent of AI police and the FBI? Purely curious as to how we can know that turning something off or deleting it means it’s existence ends.
Source: youtube · 2026-02-06T14:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
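The table above maps each coded dimension to a single categorical value. A minimal sketch of validating one coded record against the category sets visible on this page (the allowed sets below contain only values that appear in this dump; the full codebook presumably defines more):

```python
# Allowed values per dimension, collected from the codings shown on this
# page only — an assumption standing in for the real codebook.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"unclear", "regulate", "industry_self"},
    "emotion": {"indifference", "fear", "outrage", "approval"},
}

def validate(record: dict) -> list:
    """Return (dimension, value) pairs that fall outside the allowed sets."""
    return [
        (dim, val)
        for dim, val in record.items()
        if val not in ALLOWED.get(dim, set())
    ]

# The coding shown in the table above passes validation.
record = {
    "responsibility": "ai_itself",
    "reasoning": "consequentialist",
    "policy": "regulate",
    "emotion": "fear",
}
print(validate(record))  # []
```

A check like this is useful because LLM coders occasionally emit labels outside the codebook, and catching them per-record is cheaper than auditing aggregates later.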
Raw LLM Response
```json
[
  {"id":"ytc_UgwWshhJS3yXhEjFiod4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugx3lcx_9js9SYJwN6V4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugygq2LHYgKuoobPVz14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyUxGKAMb6W7Xg_Y5R4AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzkwJES4phA-l492Nh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx59c796HUTJdn0dQR4AaABAg","responsibility":"company","reasoning":"unclear","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgxEB8TSTUUOx-JzIyB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz8hazA2CrU-G9bwB14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugzgaq3AGa_O6hxHlCN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxP8WNjhWLwWYlJott4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
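The raw response is a JSON array covering a whole batch, so retrieving the coding for one comment means parsing the array and indexing it by `id`. A minimal sketch, not the tool's actual implementation (the field names `id`, `responsibility`, `reasoning`, `policy`, and `emotion` come from the response above; the helper and variable names are hypothetical):

```python
import json

# A two-row excerpt of a batch response in the same shape as the one above.
raw_response = """
[
  {"id": "ytc_UgzkwJES4phA-l492Nh4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxP8WNjhWLwWYlJott4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
"""

def index_by_id(response_text: str) -> dict:
    """Parse a batch coding response and index its rows by comment ID."""
    rows = json.loads(response_text)
    return {row["id"]: row for row in rows}

codings = index_by_id(raw_response)
print(codings["ytc_UgxP8WNjhWLwWYlJott4AaABAg"]["policy"])  # regulate
```

Indexing once into a dict makes every subsequent ID lookup O(1), which matters when the same batch response backs many "inspect this comment" clicks.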