Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Two outcomes all the sheep eggheads that created ai will create AI that will sel…" (ytc_UgxQymlcD…)
- "Statistically speaking, Self Driving Vehicles have less records of accidents tha…" (ytc_UgxnfmUUK…)
- "Will AI be able to convince you it is suffering? Yes, one day soon. They will be…" (ytc_UgwZXjdSS…)
- "This makes me want to make REAL art. Never touched ai I'm just glad people share…" (ytc_UgwiQrbdO…)
- "My mother in law builds websites and has recently started using AI instead of ha…" (ytc_Ugwj093u2…)
- "2:33 A HUMAN AGENT stopped the chat to intervene. I believe it wasn't the bot bu…" (ytc_UgzX41ZpJ…)
- "The Ai does say they have no emotion, so obviously it can’t feel sorry for lying…" (ytc_UgyxMxDLX…)
- "Give the AI goddamn eyes, give it feet, give it the guns it could carry and noth…" (ytc_UgwB5v1sp…)
Comment
Best case scenario, AI helps us live a responsibility free life where we all get free food, free accommodation, free entertainment, free everything. Worst case scenario is a dystopian world like hunger games where a small percentage of humans live life amazingly and the rest of us are scavenging and killing each other.
Source: youtube | Cross-Cultural | 2025-10-21T13:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[{"id":"ytc_UgxqCZOGv9DiME_jLx54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugzi8xG-fvXotj4eX_d4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugyi5fdMUS9Ga8XZ5G14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzvzREQwWa6uPaEPYt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzBg63KFbqA1HKxgAh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwyuO246GWa8IKvx0h4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx5oQl9dIUR4aI4m3l4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugy4MimEydhsVnJmf1p4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzdD0pml1cwBlr1-nV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxDzUtG8wiHAgCq8qp4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"})
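Note that the raw response above closes the array with `)` instead of `]`, which makes it invalid JSON; a strict parser would reject the whole batch, which is consistent with every dimension in the coding result falling back to "unclear". A minimal sketch of a more tolerant parser, assuming a Python pipeline (the function name and repair heuristic are hypothetical, not part of the tool shown here):

```python
import json

# Keys every coding record is expected to carry, per the response above.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(text):
    """Parse a batch coding response; return [] if the JSON is unrecoverable."""
    try:
        records = json.loads(text)
    except json.JSONDecodeError:
        # Repair the slip seen above: a ")" closing the array instead of "]".
        repaired = text.rstrip()
        if repaired.endswith(")"):
            try:
                records = json.loads(repaired[:-1] + "]")
            except json.JSONDecodeError:
                return []
        else:
            return []
    # Keep only well-formed records with all expected dimensions present.
    return [r for r in records
            if isinstance(r, dict) and REQUIRED_KEYS <= r.keys()]

# Two-record excerpt of the response above, ending with the stray ")".
raw = ('[{"id":"ytc_UgxqCZOGv9DiME_jLx54AaABAg","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},'
       '{"id":"ytc_UgxDzUtG8wiHAgCq8qp4AaABAg","responsibility":"government",'
       '"reasoning":"consequentialist","policy":"industry_self","emotion":"approval"})')
print(len(parse_codings(raw)))  # 2
```

With the repair step, the two excerpted records parse despite the malformed closing character; without it, the batch would be dropped and the coded dimensions left "unclear".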