Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "Valid points well made... I would like to be an AI and opt out please.…" (ytc_UgyFXolo4…)
- "I sometimes chat with AI with unpopular stuff and most of the information they g…" (ytc_Ugzql6pva…)
- "See, that's the thing: PROFESSIONAL artists are not trying to prevent people fro…" (ytc_Ugyh-5zH6…)
- "oooh so spooky! Be afraid everyone! soooo spooky. Maybe stop teaching them lies …" (ytc_UgxRqjnlM…)
- "The so-called """friend""" could've used literally ANY free-to-use adopt-ready b…" (ytc_UgyUzmhpe…)
- "So you have criticized what this video is saying, but do YOU have an alternative…" (ytr_Ugzggv6ik…)
- "As a disabled artist. I think that it’s disgusting the fact that AI “artist” are…" (ytc_Ugxxy8IQQ…)
- "I think It depends on US, if we use AI for WAR, AI learn destruction is possibil…" (ytc_UgxtUqlIG…)
Comment

> I just don’t see how ai is going to go from having no motivation/desires - following our instructions and its own config - to auto-improving itself without our instruction. Ai will need the desire to do so, unless we have told it to make decisions like that. Ai can’t have desire, it can only take action because it believes it’s a better option - and ‘better’ can come from feedback loops from online data and its programming. It has no ego. What am I missing?

youtube · AI Governance · 2026-02-03T23:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugyt9HJCunb5awHFWl54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugwzt5y1FoV0JO0n_eJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzGziWvLciFPdqJVKN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx4D4OE4cRuv8H9ged4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugz_FQUbwQI9CVW_6YN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxvfdH3IDsVd9W06Ot4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugwh5_LPYr2AlQ5XIYx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyTqthXtb5pqttUxbJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyerWD-5t1cmMPRPLB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzRwb5A12VnKr-cikx4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"}
]
```
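To recover an individual coding from a raw response like the one above, the JSON array can be parsed and indexed by comment ID. A minimal Python sketch: the helper name `index_by_id` is illustrative, but the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) and the sample records come straight from the response shown above.

```python
import json

# Two records copied verbatim from the raw LLM response above.
raw_response = '''[
{"id":"ytc_Ugwh5_LPYr2AlQ5XIYx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzRwb5A12VnKr-cikx4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"}
]'''

def index_by_id(raw: str) -> dict:
    """Parse a raw coding response and key each record by its comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codings = index_by_id(raw_response)
coding = codings["ytc_Ugwh5_LPYr2AlQ5XIYx4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # developer indifference
```

Note that this lookup reproduces the "Coding Result" table above: the record for `ytc_Ugwh5_LPYr2AlQ5XIYx4AaABAg` carries the same developer / deontological / unclear / indifference values.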