Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "AI can't replace humanity yet. People are needed to fix the machines when they b…" — ytc_UgyK_x0qV…
- "For some reason people focus more on the rogue AI part of the video. But that's …" — ytc_Ugw0hZES9…
- "Lost mine to incompetent DEI hires, so I'm safe from A.I. now, but they are not.…" — ytc_UgxTqB9jf…
- "Im with you Tech guys, old world conservatism trying to hold onto their diminish…" — ytc_UgwzHTSXk…
- "fun fact, the AI was originally supposed to be a mirror that could give you work…" — ytc_UgyGjUdan…
- "even before then though, it would say "it sounds like you're going through a lot…" — ytr_UgwLgFGOE…
- "A.I. has limited use and I would say that it is still far too vulnerable for mis…" — ytc_UgwBJbw-m…
- "Let's not forget about the absurd water and electricity usage AI requires. I muc…" — ytc_Ugx2qmEZB…
Comment
we can better be safe than sorry and not push towards AGI until full understanding is reached. Ironically AI's are now shown to be capped in their intelligence, it might just be that AGI is unreachable and that the solution to misalignment is smaller models trained on quality synthetic data. As an engineer this would be a middle-man solution that would still allow for incredible innovation without the push towards a fake machine God.
youtube · AI Moral Status · 2026-01-04T17:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxCNVU2LVdhAI-Q47l4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzotAOIzdKEoZUuOdB4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwG_g4OaHosRuYrkn14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyvgFEzQIA24i1kv8Z4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgzLoKr8NltkMWlCcvZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxshuuslFJsXdjKwQB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugydu0gRDKoHyEw2qMN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugyuz9aq7T940d_UDVh4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzJlbNa4OYRf1qsQFV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxlIE7kwx3qPRr9G_14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"}
]
```
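Finding the coding for one comment inside a raw response like the one above amounts to parsing the JSON array and indexing it by the `"id"` field. A minimal sketch (the `raw_response` literal below is a truncated two-row subset of the response shown above; in practice you would load the full model output instead):

```python
import json

# Raw model output: a JSON array of coding objects, one per comment.
# This literal is an illustrative subset of the response shown above.
raw_response = '''
[
  {"id": "ytc_UgxshuuslFJsXdjKwQB4AaABAg",
   "responsibility": "developer", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugyuz9aq7T940d_UDVh4AaABAg",
   "responsibility": "company", "reasoning": "virtue",
   "policy": "none", "emotion": "outrage"}
]
'''

# Index the codings by comment ID so any coded comment can be
# looked up in O(1).
codings = {row["id"]: row for row in json.loads(raw_response)}

coding = codings["ytc_UgxshuuslFJsXdjKwQB4AaABAg"]
print(coding["policy"])   # regulate
print(coding["emotion"])  # fear
```

The same dictionary doubles as a validity check: if a comment ID from the sample list is missing from the index, the model dropped it from its response.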