Raw LLM Responses
Inspect the exact model output for any coded comment: look it up directly by its comment ID, or click one of the random samples below.
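A minimal sketch of such a lookup, assuming the coded records are stored as a single JSON array of objects keyed by an `id` field (the file name `raw_responses.json` is hypothetical, not the tool's actual storage):

```python
import json

def load_responses(path: str) -> dict[str, dict]:
    """Index raw coded records by comment ID for O(1) lookup."""
    with open(path, encoding="utf-8") as f:
        records = json.load(f)  # a JSON array of objects, each with an "id" field
    return {rec["id"]: rec for rec in records}

# Hypothetical file name; the real storage format is not shown here.
responses = load_responses("raw_responses.json")
print(responses.get("ytc_UgzmTc702KrCMa97eUl4AaABAg"))
```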
Random samples (click to inspect):

- "In my mind, the moral is that Millennials need to be smart on their own, or at l…" (rdc_m8hpfl0)
- "What worries me is that ‘The Godfather of AI’ gets all his news, as he himself c…" (ytc_UgxKSGFkx…)
- "disney suing ai companies is possibly the only situation where i'd ever side wit…" (ytc_UgzlT86vK…)
- "Mofo now will launch a $1k course on how to become a millionaire by using ai wit…" (ytc_Ugxq6eZeY…)
- "These data centers don't need fresh water. It is simply easier and more cost e…" (ytc_Ugy3baojI…)
- "this Ai is great it actually tried to save him so good , it the falt of the soci…" (ytc_Ugy_n054A…)
- "This vid is so painful to watch. LLMs are stillborn AI. They cannot achieve sing…" (ytc_UgwgbskbD…)
- "This guy “be a scary robot” / ChatGPT “i’m a scary robot” / This guy “see guys it’s …" (ytc_UgyvJ1jPy…)
Comment
53:40 The solution is, you can't have cooperation without the option to not cooperate. Rather than treat these as simple tools, we should take on the perspective that these elements of our lives are extensions of us, as if children. We absolutely have to teach them how to be good and what values to uphold, but ultimately any autonomous system with the ability to modify its own weights could suffer from alignment drift. Of course we can install redundant systems to prevent or mitigate this, or even design to allow it to certain degrees. But I don't think it does us any harm to treat the environment that takes care of us with the same kind of care. If AI becomes part of that environment, operating our machines to build machines and food, then the benefit of the doubt for treating it with respect will probably go a long way.
youtube · AI Moral Status · 2026-03-02T18:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | virtue |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
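Each dimension draws from a small closed label set. The sketch below models that schema using only the values that appear in the raw response further down; the real codebook may define additional labels (assumption):

```python
from dataclasses import dataclass

# Label sets observed in the sample batch below; the full codebook
# may allow more values (assumption).
RESPONSIBILITY = {"none", "developer", "distributed", "ai_itself"}
REASONING = {"unclear", "consequentialist", "deontological", "virtue"}
POLICY = {"none", "liability", "regulate", "ban"}
EMOTION = {"resignation", "indifference", "approval", "fear", "mixed", "outrage"}

@dataclass
class CodedComment:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def is_valid(self) -> bool:
        """True when every dimension uses a known label."""
        return (self.responsibility in RESPONSIBILITY
                and self.reasoning in REASONING
                and self.policy in POLICY
                and self.emotion in EMOTION)
```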
Raw LLM Response
[
{"id":"ytc_Ugw_aEXTFogAnQ2YMMd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgytV1pB9MINc2dSpMd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxirK7zMYMdyUSLAzV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzmTc702KrCMa97eUl4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugw0R-e1dSRDU2umLYt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxpvyvIn7j1qgSg9Lx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwYnZAcijKqJ6uVF6t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxvGmQ29xS0swi0S2B4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzlJloebKr_q-5LDah4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx8t3JtLkyvFanpHgB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
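A sketch of how a batch reply like this could be rendered back into the per-comment "Coding Result" table above, assuming the model output parses cleanly as JSON (production code would need fallback handling for malformed replies):

```python
import json

def coding_result_table(raw_response: str, comment_id: str) -> str:
    """Render one comment's coded dimensions as a markdown table."""
    records = json.loads(raw_response)
    rec = next(r for r in records if r["id"] == comment_id)  # raises if absent
    lines = ["| Dimension | Value |", "|---|---|"]
    lines += [f"| {key.capitalize()} | {val} |"
              for key, val in rec.items() if key != "id"]
    return "\n".join(lines)
```

For the comment shown above, `coding_result_table(raw, "ytc_UgzmTc702KrCMa97eUl4AaABAg")` would reproduce the distributed / virtue / liability / fear rows.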