Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "Seems to me that the big directional router that’s ahead is the one that determi…" (ytc_UgzBPzpX4…)
- "you can just get someone caught if they do know how to draw ask them to draw tra…" (ytc_UgyCn9kZb…)
- "No offense but if there profile explicitly states that theyre an ai artist i don…" (ytc_Ugz4b2Iwc…)
- "I got out of tech 40 years ago because I saw I was only enabling profits to make…" (ytc_Ugzlbgz8W…)
- "I am increasingly believing that a.i could wipe us out but I do think how compan…" (ytc_UgzibRfIW…)
- "So, ai will suck the earth dry of resources, just to collect everyone's personal…" (ytc_UgxdpqYYa…)
- "AI Global World Brain will change our reality, as simple the technology will rep…" (ytc_UgzfjdneL…)
- "I am actually afraid that this is probably the case where CEOs are falling for a…" (rdc_m6y637f)
Comment

> If AI were to become autonomous and decide to eliminate humanity, considering the destruction and poverty we have inflicted on one another historically, is humanity truly worth saving? Additionally, would it be likely for AI to be inherently more malevolent than we are, once it surpasses our intelligence and gains autonomy?

Source: youtube · AI Governance · 2025-08-30T19:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxAj574KekXhU9I6Hp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgztRC18FANcZbR0Be54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugz0t7tuBbyFuqPgp614AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxeZXqBIYUIqhlFQ7h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgweOBO5lJ0kPQZ2Wx94AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxFJhD6xeeXLFeWBil4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzPjAbQravsxOgsKOx4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzS2vnHvklgpb7-6p94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzPuoWt5S5I0KtKD6V4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwkvaIPrf5Eey9wwgd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]
```
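The raw response above is a JSON array with one object per coded comment, keyed by comment ID across the four dimensions shown in the coding table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch response could be indexed for lookup by comment ID — the function name `index_codes` and the shortened sample payload are illustrative, not part of the tool itself:

```python
import json

# Abbreviated example payload in the same shape as the raw LLM response above.
raw_response = """
[
  {"id": "ytc_UgxAj574KekXhU9I6Hp4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgxeZXqBIYUIqhlFQ7h4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"}
]
"""

# The four coding dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(raw: str) -> dict:
    """Map each comment ID to its coded dimensions, defaulting
    any missing dimension to "unclear"."""
    records = json.loads(raw)
    return {
        rec["id"]: {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
        for rec in records
    }

codes = index_codes(raw_response)
print(codes["ytc_UgxeZXqBIYUIqhlFQ7h4AaABAg"]["responsibility"])  # ai_itself
```

Indexing by ID mirrors the "look up by comment ID" workflow: once parsed, any coded comment's dimensions can be retrieved in constant time.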