Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Look all I’m gonna say is it’s a tool. It can be useful or damaging, it depends …" (ytc_UgzaSQ-kS…)
- "The use of AI needs to be regulated and monitored in the labor market. It is the…" (ytc_UgyJRDF2V…)
- "Art should never be used as ANY sort of base info for AI images. And all images …" (ytc_UgwZARCZK…)
- "as an esthetician, i have faith that AI won’t take over my career in my lifetime…" (ytc_UgyVDPimK…)
- "We only need a job to make money. Money can be made for free. We will have more …" (ytc_UgytrsuT5…)
- "This is coming soon to USA. Stores are putting up cameras that have facial recog…" (ytc_Ugw9DMWFu…)
- "AI will improve to the point that we cannot tell it isn't real. Then what?…" (ytc_UgyRLPVnm…)
- "Humans have been, are, and always will be the greatest threat to humanity. I'm…" (ytc_UgxHGVJvG…)
Comment
Actually, I think that alignment towards submissiveness will be easier to achieve than alignment towards the strict forms of rationality that necessitate creating explicit operational goals (and sticking to them despite the imagined reactions an informed user would have).
At least current LLMs are far better at roleplaying (as biased or otherwise irrational personas) than at resisting not-entirely-rational user suggestions.
youtube · AI Governance · 2025-11-14T19:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgyluCUitRT-tHv83dF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwbckJa_MricI8LX194AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxbDbzuWlB6OpXOvdx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxq2vdvx-TJjXhPZjd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugza87ESaIPhCKdr1AZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwktKDszx9_U_oQlnd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyzUp3DZJD_UrTCWT94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwV6vIelsVJwa6bMmh4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzvolYXznm83c_2PTh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwXBv6qQtQ_daN1Go14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
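A raw response like the one above can be parsed into a lookup keyed by comment ID, which is what "Look up by comment ID" relies on. The sketch below is a minimal example, assuming the allowed values per dimension are exactly those observed in this sample (the real codebook may define more); the `validate` helper and `ALLOWED` sets are hypothetical names, not part of the tool.

```python
import json

# Two records from the raw response above, truncated for brevity.
raw = """[
  {"id":"ytc_UgyluCUitRT-tHv83dF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwbckJa_MricI8LX194AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]"""

# Allowed values inferred from this sample only; a fuller codebook may add labels.
ALLOWED = {
    "responsibility": {"none", "company", "distributed", "user", "ai_itself", "government"},
    "reasoning": {"mixed", "consequentialist", "contractualist"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"indifference", "outrage", "approval", "fear"},
}

def validate(records):
    """Index records by comment ID, rejecting any unknown dimension value."""
    by_id = {}
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec[dim] not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim}={rec[dim]!r}")
        by_id[rec["id"]] = rec
    return by_id

coded = validate(json.loads(raw))
print(coded["ytc_UgyluCUitRT-tHv83dF4AaABAg"]["emotion"])  # indifference
```

Validating against a closed label set at parse time catches the common failure mode where the model invents an out-of-schema value, rather than letting it propagate into the coded dataset.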