Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `ytc_UgzPv5BFs…`: The problem with AI is its not being developed with the intent of "for the good …
- `ytc_Ugyg6pij7…`: Firstly, this is a bad idea cause then how are we gonna get money? Secondly, the…
- `ytr_UgzAoYuRw…`: Ai "art" is not real art it will never be art ai is horrible I would rather die …
- `ytr_UgySYqDLt…`: Nice of you to post this so people can understand how deep this AI stuff is. Wor…
- `ytc_Ugx_njvJx…`: Unfortunately, ROBOTS have been being used against humanity. I am certain that …
- `ytc_UgxS-XcwO…`: This is happening on aboriginal land in northern Western Australia...biggest dat…
- `ytc_Ugxu8pQCr…`: 🙄 people just assume every single AI image is stealing... At least try to prove …
- `ytr_UgxEv0_xe…`: It's sophisticated enough to know the difference between an Intentional Lie, and…
Comment
I genuinely love the part halfway through when it is the robots, not us, who find it necessary to, as y’all put it, “program pain.” This is actually a really interesting problem in AI safety called “inner alignment.” Basically, even if we make a superintelligence that is completely in alignment with our goals (which is itself pretty sci fi), that intelligence might still create things that are problematic for humanity. If it’s a superintelligence, it might discover, as we have, that AI is very useful for achieving various goals. If it tries to implement an AI, then it may create a misaligned agent inadvertently.
youtube · AI Moral Status · 2021-07-09T18:1… · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyWAa9NXjVGJV8qtpF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwX3F8VB5XfYHhrOgh4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxHioWGQMjrsKNPfK14AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxOQ6XAKrJ4VeYWi214AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxP43ZekVvpTGgOiRR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyyAFa6L_1q5U4TPHR4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwEYUkNppoGWHeC0mB4AaABAg","responsibility":"none","reasoning":"none","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzakCWCNME-KdpsZOt4AaABAg","responsibility":"none","reasoning":"none","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugw1HOlLIq9XkiTtPgd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyrJffpt7TCGI0a6B54AaABAg","responsibility":"none","reasoning":"none","policy":"none","emotion":"outrage"}
]
```
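A response like the one above can be checked before the codes are stored. The sketch below is a minimal, hypothetical validator: the field names come from the JSON above, but the allowed value sets are an assumption inferred only from the codes visible here (the actual codebook likely defines more categories), and `parse_coding_response` is an illustrative helper, not part of any tool shown on this page.

```python
import json

# Allowed values per dimension, inferred from the codes seen in the
# response above -- an assumption; the real codebook may define more.
SCHEMA = {
    "responsibility": {"none", "developer", "ai_itself"},
    "reasoning": {"none", "consequentialist", "deontological", "virtue"},
    "policy": {"none"},
    "emotion": {"approval", "outrage", "indifference", "mixed"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM response, keeping only rows that match the schema."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not isinstance(row, dict) or "id" not in row:
            continue  # every coded row must carry a comment ID
        if all(row.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(row)
    return valid

# Hypothetical input: one well-formed row, one with an out-of-schema code.
raw = (
    '[{"id":"ytc_example","responsibility":"ai_itself",'
    '"reasoning":"consequentialist","policy":"none","emotion":"mixed"},'
    '{"id":"ytc_bad","responsibility":"robot",'
    '"reasoning":"none","policy":"none","emotion":"mixed"}]'
)
print(parse_coding_response(raw))  # only the first row survives
```

Dropping malformed rows instead of raising keeps a batch of mostly-good codings usable; the rejected IDs could be queued for re-coding.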