Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
The problem with suggesting we tell a future ASI to care about maximizing human thriving/happiness is that they are being made by corporations who are very much not motivated by that.
We already see algorithms that are motivated to keep our attention with addictive slop and divisive arguments with real people and bots, rather than an algorithm that tries to show us things good for our mental health and telling us to take a screen break and go outside.
Source: youtube · Video: AI Moral Status · Posted: 2025-10-30T23:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugx1_ez-0vl8tEvhPGR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzM4bqngjE5_ib5sKJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxcTb6i8AUGg19T2n54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxPfOmj4m_Aube5q4J4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxE54jNX8p3yYjG0W54AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugyi_QHZ-dhPQu0-UFB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyGk8_0HvVBwUZdVCJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyQLyHJl3d48kzDxI14AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugwe5HXo6jXaynqJ0ZF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxiMiO945P8eZMsdu14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]
```
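The raw response is a JSON array of per-comment codes across four dimensions. A minimal sketch of how such a batch could be parsed, validated, and indexed by comment ID — the allowed category sets below are inferred only from the values visible in this dump, not from a definitive codebook:

```python
import json

# Allowed values per coding dimension. NOTE: these sets are inferred from the
# sample response above; the actual codebook may define additional categories.
SCHEMA = {
    "responsibility": {"none", "user", "developer", "company", "ai_itself"},
    "reasoning": {"unclear", "mixed", "consequentialist", "deontological", "virtue"},
    "policy": {"unclear", "none", "regulate", "liability", "ban"},
    "emotion": {"indifference", "approval", "fear", "outrage"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: codes}.

    Raises ValueError if a record is missing a dimension or uses a value
    outside the (assumed) schema, so bad model output fails loudly instead
    of silently entering the dataset.
    """
    coded = {}
    for rec in json.loads(raw):
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
        coded[rec["id"]] = {dim: rec[dim] for dim in SCHEMA}
    return coded
```

With a parsed batch in hand, the coding-result table for any comment is just a dictionary lookup by its `ytc_…` ID.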