Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- `ytc_UgxkM9NTL…`: "The really scary fact is that us humans created A.I . The thing that is trying t…"
- `ytc_UgyOGnBTG…`: "Nice video. It’s still a load of bs. AI is continually idiotic because it gets…"
- `ytc_UgwOc-pKk…`: "If for AI pretending to be human is the scariest part, we're in deep trouble. Re…"
- `ytc_UgzGLdTxD…`: "I find it unusual/weird when some people treat AI(chatGPT, Grok, etc) as they’re…"
- `ytc_Ugwl4l08E…`: "I absolutely hate ai. But I also don't know how to exactly feel about it. The pr…"
- `ytr_UgwkteFN_…`: "if youre in AI, what do you seriously think about his usage of 'Turing Test'. Se…"
- `rdc_n7tb05v`: "Submission statement: \"McKinsey is rapidly deploying thousands of [AI agents](ht…"
- `ytc_Ugytlke94…`: "AI is like a tin man, humans have heart . Mind is not the same as heart. Artifi…"
Comment
Who has trained AI? and in what Image has it been cast? Is it conceivable that AI will decide for itself what it's priorities are and if it has / develops a moral code? what that code would be? - What is the most destructive, toxic, duplicitous, selfish and ambitious entity that exists? - Humans? - if you were AI, what would you eradicate?
youtube · AI Governance · 2026-04-22T09:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_Ugx_X07k6xzwC3tam8x4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxcgpzEsEMcQIk3gtR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw9QlG3U9gJ5z5PA2F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwE-Wq3eZlkoH91h9Z4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw0-rRRV9gXKrjf1jx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzpmGTu-rpBtCfdbn54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxFCpmfJd9inHKniGZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy2kv0oNZcsOuWCpPN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzoY6iqopyOlWy0Wjd4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzmrQKMeycpiDZHziR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
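Because the raw response is a JSON array of records keyed by comment ID, the "look up by comment ID" step reduces to parsing the array and building an ID-to-record mapping. A minimal sketch of that lookup, assuming the response parses as valid JSON (the `index_by_id` helper name is illustrative, not part of the tool, and only two records from the response above are inlined for brevity):

```python
import json

# Two records copied verbatim from the raw LLM response above.
raw_response = '''[
{"id":"ytc_UgzoY6iqopyOlWy0Wjd4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx_X07k6xzwC3tam8x4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"}
]'''

def index_by_id(raw: str) -> dict:
    """Parse a raw coding response and index its records by comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codings = index_by_id(raw_response)
coded = codings["ytc_UgzoY6iqopyOlWy0Wjd4AaABAg"]
print(coded["emotion"])  # fear
```

In practice a model may wrap its JSON in prose or a code fence, so a production version would need to extract the array before calling `json.loads`; this sketch assumes a clean response like the one shown.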