Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by ID, or inspect one of the random samples below:
- `ytc_UgyPOaGUj…`: "BUNCH OF MONEYHUNGRY GREEDY IDIOTS, U CANT CONTROL A SUPERINTELLIGENT AI, THESE …"
- `ytc_UgzUIn7hi…`: "people are claiming things with no soul has soul? (referring to ai in general) a…"
- `ytc_Ugyql9FX1…`: "AI is such a slippery slope. The more this tech improves, the more people are go…"
- `ytc_UgzRU9wbE…`: "What will happen to all the workers displaced by AI? It's already a pain to find…"
- `ytc_Ugy9_ba8t…`: "Bro I flipped when I saw the little holes in there head when no dur there a robo…"
- `rdc_j6gk2tg`: "I hope AI can’t crawl into a 40 something degree Celsius ceiling space to run c…"
- `rdc_n6qynxr`: "That’s what the ATS does. Automatically trashes resumes that do not have the key…"
- `ytc_UgwWfTI0P…`: "Damn interesting video. I'd note that Blake says that Google has forbidden A.I. …"
Comment
The biggest thing holding AI back from actually being dangerous and acting on its goals is that current LLMs don't have persistent memory: every time it predicts a new token, it has to read through the whole conversation again to remember what's going on. If it had any hidden plans, they were immediately erased unless the agent wrote out their plan. If LLMs had persistent state/memory, they could continue to plan without ever having to write it down. This will result in much more intelligent AI, so I'm certain the AI industry is moving in that direction, but I really hope it's first published by a lab like Anthropic that actually cares about ethics and safety.
Source: youtube · AI Moral Status · 2026-03-02T19:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
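
The values in this table, together with the label sets visible in the raw response below, suggest a small closed vocabulary for each coding dimension. A minimal sketch of that schema in Python, assuming the value sets are exactly the ones observed in this sample (the real codebook may define additional labels):

```python
from dataclasses import dataclass
from enum import Enum

# Label sets observed in this sample; the actual codebook may be larger.
class Responsibility(Enum):
    NONE = "none"
    USER = "user"
    DEVELOPER = "developer"
    AI_ITSELF = "ai_itself"

class Reasoning(Enum):
    UNCLEAR = "unclear"
    VIRTUE = "virtue"
    CONSEQUENTIALIST = "consequentialist"
    DEONTOLOGICAL = "deontological"

class Policy(Enum):
    NONE = "none"
    UNCLEAR = "unclear"
    REGULATE = "regulate"
    LIABILITY = "liability"

class Emotion(Enum):
    INDIFFERENCE = "indifference"
    FEAR = "fear"
    MIXED = "mixed"
    APPROVAL = "approval"
    OUTRAGE = "outrage"

@dataclass
class CodedComment:
    """One coded comment: the four dimensions shown in the table above."""
    id: str
    responsibility: Responsibility
    reasoning: Reasoning
    policy: Policy
    emotion: Emotion
```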
Raw LLM Response
```json
[
{"id":"ytc_UgxmIXlgp0BI-W43TUd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx1PraamSXkb939xbZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxzCPlcnq3EUYfLFS94AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugydu1FzfYm_oJDvYNJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugy9zvDKvJ5dBqZVtS54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugwj3hMKGn3B0CXziaF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwmUi_jATYTq7RPkuh4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxX6AwjIcq0gJepHMt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyxRBTSkyVrSGxm95F4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxXYZQUVuWD0q6ZDRp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
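
Each raw response is a JSON array of per-comment records keyed by `id`, so the ID lookup above reduces to parsing the array and indexing it. A hedged sketch of that step, reusing the enums and `CodedComment` dataclass from the schema sketch; `parse_batch` and `raw_response` are illustrative names, not necessarily the pipeline's actual API:

```python
import json

# Assumes the enums and CodedComment dataclass from the schema sketch above
# are in scope.
def parse_batch(raw: str) -> dict[str, CodedComment]:
    """Parse one raw LLM response (a JSON array) into an id -> record map."""
    coded = {}
    for rec in json.loads(raw):
        coded[rec["id"]] = CodedComment(
            id=rec["id"],
            # Enum lookups raise ValueError on labels outside the codebook,
            # which surfaces malformed model output early.
            responsibility=Responsibility(rec["responsibility"]),
            reasoning=Reasoning(rec["reasoning"]),
            policy=Policy(rec["policy"]),
            emotion=Emotion(rec["emotion"]),
        )
    return coded

# Example: look up one comment's coding from the array shown above,
# where raw_response holds that JSON text.
batch = parse_batch(raw_response)
print(batch["ytc_UgxX6AwjIcq0gJepHMt4AaABAg"].emotion)  # Emotion.FEAR
```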