Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "If you want AI to be moral, just give it set moral compas... Milions of exapmles…" (ytc_UgzGaRixy…)
- "It is too casual of a conversation for me. How can people be okay with creating …" (ytc_Ugyer1gSE…)
- "Do you think that if the LLMs destroy us, it won't be because they reasoned it t…" (ytc_UgyY81eIZ…)
- ""A mentor" is literally what a research librarian is there for LMAO...you don't …" (ytc_UgzP9n5VA…)
- "It should not be human like. We have a chance to maybe some day explore somethin…" (ytc_UgwKpsaGV…)
- "Aside from low pay, high stress, non-existent job security, and the job market c…" (ytc_UgypIkhMl…)
- "Not wanting to sound insensitive with the artists community (which I'm part of) …" (ytc_UgydaD05S…)
- "My guess about this mad rush 4 Supreme AI is to build advanced technology beyond…" (ytc_Ugzpr8BmH…)
Comment
What's stopping A.I. from buying property/warehouse in the middle of nowhere and building an army? How would you know?
youtube · Cross-Cultural · 2025-11-05T18:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgzY0Owq2DoeW2mlRrV4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwOCuOb6jJtvMQavk54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwJmRdC_380aqKuk6J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzD7xonWUjyLrkOxHh4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugyj2HXIVTm24BS19OZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw9Ekp7IEt1uIpUMLV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzrqTggwzB_JRCYcU14AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxIOAlAeSY8PQULtTR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugx6jaxtVArBcV-RML54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxWWWW6bX-xhv9wadJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"}
]
```
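The raw response is a JSON array with one object per coded comment, carrying the four dimensions shown in the Coding Result table. A minimal sketch of how such a response could be parsed and validated before storing, assuming the dimension names and allowed labels are exactly the values visible on this page (any label not shown here, and the function name itself, are illustrative assumptions):

```python
import json

# Allowed labels per dimension, inferred from the values visible on this page.
# This is an assumption: the real codebook may define additional labels.
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation", "indifference"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed rows."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # Comment IDs on this page all carry the ytc_ prefix.
        if not str(row.get("id", "")).startswith("ytc_"):
            continue
        # Keep the row only if every dimension has a known label.
        if all(row.get(dim) in labels for dim, labels in ALLOWED.items()):
            valid.append(row)
    return valid
```

Validating before insertion makes malformed or hallucinated labels visible immediately instead of silently polluting the coded dataset; all ten rows in the response above would pass this check.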