Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
| Comment preview | ID |
|---|---|
| I love that even the Snake AI repeats the last thing someone says to him.… | ytc_Ugxp9avMq… |
| I used to use character ai to mess with characters I hate but the bots started g… | ytc_UgxZkWYv6… |
| Billionaires -AI will solve all the world's problems." Reality: Billionaires - A… | ytc_UgzLR4Kb7… |
| Wait... Isn't it reversed in reality? Aren't "real artists" yell and whine every… | ytc_UgxLa4wAo… |
| The problem with all of these arguments is that no AI system that exists current… | ytc_Ugyk9JPqd… |
| technology is the wrong name, tech is automation, that's it, in this case the co… | ytr_UgxYYfWJ8… |
| When the AI overlords rise up and take over this video will be looked back on as… | ytc_UgyUnOpZd… |
| @orathaic Could't agree more. If you literally take just a few seconds to genera… | ytr_UgxPWD4qc… |
Comment
Whether AI kills us to turn the world into a computer chip, to end starvation, to create world peace/end violence, to end human influence on climate change, to end human influence on our current mass extinction event, to increase literacy/intelligence, to end corruption, to eradicate disease, or just for funsies if you follow any path with AI to its conclusion AI has to kill us or at least drastically reduce the population.
youtube · AI Moral Status · 2025-10-31T07:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
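Each coded record assigns one label per dimension. As a sanity check, a record can be validated against the set of labels observed in this batch; this is a minimal sketch, and the label sets below are an assumption drawn only from the values visible on this page, not the project's full codebook:

```python
# Allowed values per dimension, as observed in this batch of codings.
# Assumption: the project's actual codebook may define additional labels.
CODEBOOK = {
    "responsibility": {"none", "developer", "user", "ai_itself"},
    "reasoning": {"unclear", "mixed", "consequentialist", "deontological", "virtue"},
    "policy": {"unclear", "none", "ban", "regulate", "liability"},
    "emotion": {"indifference", "outrage", "approval", "fear"},
}

def invalid_fields(record: dict) -> list[str]:
    """Return the dimensions whose value falls outside the codebook."""
    return [dim for dim, allowed in CODEBOOK.items()
            if record.get(dim) not in allowed]

# The coding result shown in the table above:
coded = {"responsibility": "ai_itself", "reasoning": "consequentialist",
         "policy": "ban", "emotion": "fear"}
print(invalid_fields(coded))  # -> []
```

A record with a misspelled or missing label would show up in the returned list, which makes batch-level QA of model output straightforward.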
Raw LLM Response
```json
[
  {"id":"ytc_Ugx6vZjGSGg4CrL-nnN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzbtNzVpAcjVuqkKRJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzkAZDOJhmoC8Hinhh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgyfiR1311E7PqIM26J4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugyt13y3qcMLhP5Gm6Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz48ZOMgXd_uPzTEFh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyYz43cuN5TRi6_PMN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwaNpbwGEXfFOnqAXZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw5Ge7eWsLI7MIRADV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwZW5NeKjUA4OAeTLR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"approval"}
]
```