Raw LLM Responses
Inspect the exact model output for any coded comment, looked up by comment ID.

Random samples:
- “AI accidentally proved the concept of human soul by showing us what art looks li…” (ytc_Ugy-LTsla…)
- “Disinformation and mass manipulation? We didn’t need AI for that. I feel like th…” (rdc_moda60p)
- “10:04 - you say that calling it “lying” is anthropomorphizing, and follow that u…” (ytc_UgxZx4aQ6…)
- “Iam awaterbottle can’t happen. these robots aren’t gonna have access to missiles…” (ytr_UgyEsaIOC…)
- “Is there anything I can skill into that isn't imminently about to be replaced wi…” (rdc_m859vsm)
- “AI gives wealth access to skill while, at the same time, denying the truly skill…” (ytc_Ugx6qJUYh…)
- “It’s the small-timers, the small-minded, and the perpetually perplexed, who comp…” (ytc_UgyA3qwI8…)
- “How does the number get better than ever when People are losing their jobs? Who …” (ytc_UgxbFfBkE…)
Comment

> The difference between using nuclear bombs and outsmart AI is that both side still has a common sense that using nuclear weapons are inhumane,but the AI won’t think on that way. They might just think human are much useless than us and spend more energy than us and why we have to serve them😂

youtube · AI Governance · 2025-06-29T22:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
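Each dimension in the table takes one of a small set of category labels. As a sanity check before accepting a coded record, the labels can be validated against the allowed sets. This is a minimal sketch; the label sets below are inferred only from the values visible in this sample, not from an exhaustive codebook, and `invalid_fields` is an illustrative helper name, not part of any tool shown here.

```python
# Allowed labels per dimension, inferred from the values seen in this
# sample only (an assumption, not a complete schema).
ALLOWED = {
    "responsibility": {"user", "developer", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"ban", "regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "resignation", "mixed"},
}


def invalid_fields(record: dict) -> list[str]:
    """Return the dimension names whose value falls outside the allowed set."""
    return [dim for dim, labels in ALLOWED.items()
            if record.get(dim) not in labels]


# The coding shown in the table above:
record = {"responsibility": "ai_itself", "reasoning": "consequentialist",
          "policy": "liability", "emotion": "fear"}
print(invalid_fields(record))  # []
```

A record with an out-of-vocabulary or missing value shows up immediately, which is useful when the LLM occasionally emits a label outside the codebook.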
Raw LLM Response
```json
[
  {"id": "ytc_UgwquaengYT7QHoDOEZ4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzFLKyHSySKN86fECh4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgybcpXvEPsiR2dLplp4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyIo9pPPyM6ohJVNNZ4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugy9OkmL5CKKMrrp_VB4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzfnWk5mo4hL2UtfkF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgykxVOuJeZi_UF5-Pp4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgxI8JpxZec8Zr5NfO94AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgzUi4BjIMB_WSdiUDp4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzCp5U50_1Sab8zm514AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "regulate", "emotion": "mixed"}
]
```