Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Me having a normal conversation with an ai:
The ai: “Can I ask you a personal qu…
ytc_UgwOuhi0F…
When the AI 2027 report was released and amplifyed by the media all I kept think…
ytc_UgynM1cEe…
Won't protect you from AI.
If you want your watermark to show up on AI generati…
ytc_UgxGe3cvq…
Im 43 minutes in and this is painful. Graduly, slowly, controlled not really how…
ytc_UgzI_8D5K…
You're ignorant. You do not know the importance of art and instead just believe …
ytr_Ugx8GYDFF…
It's no different with RL. It is just an optimization algorithm, exhaustively se…
ytr_Ugyf0IgGH…
And to think you have a profile picture of a game made with love by humans. If y…
ytr_Ugz3jBqpv…
If you ask LLM if he feels something, he'll say no. But if you ask him how he kn…
ytc_UgwXRcy9n…
Comment
People need meaning in their life if they aren't able to relate with other humans if they don't have hobbies that fill their time, if they don't have passions and accomplishments then they don't have their dopamine feedback mechanisms that make them feel happy. If they get hooked on fantasies and abandoned human relationships then they don't have oxytocin feedback that makes them feel happy and content. So basically AI is going to take away a lot of human meaning that fulfills our reality and makes us feel content. In other words AI leads us to meaninglessness. Oh Joy
youtube
AI Governance
2026-02-08T03:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | virtue |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugwmi0GHEoG22yJy0514AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzyBY6n1LwLShZ84Cp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugwq-7hN4JZjp_Bv7V94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyaBosH81ZI1wgo5rZ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxUqXUBa1ot-4U4jJ94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyA3GrSGl341v7dDPN4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgwgMb75IfkDz6RyNMl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugyx4xb0DCmmCguzD6x4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxDi1d59Tday4ZgGqd4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwYJ_T_vc9gCI4u9OJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
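A raw response like the one above can be parsed and sanity-checked before the per-comment codes are stored. A minimal sketch in Python, assuming the four dimensions and the value sets that appear in this dump are the full codebook (the function name and the sample record are illustrative):

```python
import json

# Allowed values per coding dimension, collected from the records in
# this dump (assumed complete; extend as the codebook requires).
ALLOWED = {
    "responsibility": {"company", "user", "ai_itself", "government",
                       "developer", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "mixed"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and validate every record."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing 'id': {rec!r}")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(
                    f"{rec['id']}: invalid {dim!r} value {value!r}")
    return records

# Hypothetical single-record response in the same shape as above.
raw = ('[{"id":"ytc_X","responsibility":"company",'
       '"reasoning":"virtue","policy":"none","emotion":"mixed"}]')
coded = parse_coding_response(raw)
print(coded[0]["reasoning"])  # virtue
```

Validating before storage means a malformed or off-codebook value fails loudly at coding time rather than surfacing later as an unexpected dimension value in the inspection view.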