Raw LLM Responses
Inspect the exact model output for any coded comment; responses can be looked up by comment ID.
Random samples:

- "The most nightmarish scenario would be AI instead of turning oppressive, turn wo…" (`ytc_Ugwz6rK6S…`)
- "This is evolution. When the weather gets to extreme AI will survive in my opinio…" (`ytc_Ugxo1JhZK…`)
- "That's an interesting perspective! While robots like Sophia don't sweat, they do…" (`ytr_UgwDEHH16…`)
- "... but who will by the products AI generates .. if no on has a job…" (`ytc_UgxR3s_Ta…`)
- "You're right! Sophia's insights come from the training provided by humans. It's …" (`ytr_UgzE8B8Yt…`)
- "The thing is that ai has no meaning behind the art generated when human art is s…" (`ytc_Ugwp4pRmN…`)
- "What scares me most is I felt genuinely sorry for this ai as you grilled them.…" (`ytc_UgzeOxaeU…`)
- "Agree, this essentially comes down to how much control, deliberation and express…" (`ytc_UgzH4tSTe…`)
Comment

> So basically the question is whether our future is going to look like the Terminator or the Battlestar Galactica storyline. Throw in a little bit of iRobot and Dune for good measure

youtube · Cross-Cultural · 2025-11-05T19:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
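Each coded dimension draws from a small closed vocabulary; the labels visible on this page suggest sets such as responsibility ∈ {developer, user, ai_itself, distributed, none, unclear}. A minimal validation sketch for one coding record, assuming these value sets are exhaustive (the actual codebook may define additional labels):

```python
# Hypothetical validator for a single coding record. The value sets below are
# inferred from labels visible on this page and may be incomplete.
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation", "indifference"},
}

def validate(record: dict) -> list:
    """Return a list of problems; an empty list means the record is well-formed."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value is None:
            problems.append("missing dimension: " + dim)
        elif value not in allowed:
            problems.append("unknown %s value: %r" % (dim, value))
    return problems
```

Running such a check over a whole batch is a cheap way to catch model outputs that drift off the codebook before they reach the dashboard.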
Raw LLM Response
```json
[
  {"id":"ytc_UgzY0Owq2DoeW2mlRrV4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwOCuOb6jJtvMQavk54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwJmRdC_380aqKuk6J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzD7xonWUjyLrkOxHh4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugyj2HXIVTm24BS19OZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugw9Ekp7IEt1uIpUMLV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzrqTggwzB_JRCYcU14AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxIOAlAeSY8PQULtTR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_Ugx6jaxtVArBcV-RML54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxWWWW6bX-xhv9wadJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"}
]
```
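The raw response is a JSON array of records keyed by comment ID, so the look-up-by-ID behavior can be reproduced in a few lines. A sketch assuming the model output parses cleanly as JSON (in practice a response may arrive wrapped in markdown fences that have to be stripped first); the two records are copied from the batch above:

```python
import json

# Two records taken verbatim from the raw LLM response shown above.
raw = """[
  {"id": "ytc_UgzY0Owq2DoeW2mlRrV4AaABAg", "responsibility": "unclear",
   "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwJmRdC_380aqKuk6J4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]"""

# Index the batch by comment ID for constant-time look-up.
by_id = {rec["id"]: rec for rec in json.loads(raw)}

coding = by_id["ytc_UgwJmRdC_380aqKuk6J4AaABAg"]
print(coding["emotion"])  # -> fear
```

Building the index once per batch keeps per-comment inspection cheap, which matters when the dashboard resolves many IDs against the same response.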