Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
1:33 but humans in the loop are improving the work, an ai can loop, it can fail …
ytc_UgwmULhDy…
AI games will probably be shitty web games for toddlers who just want moving ima…
rdc_oi3jft1
Disappointed that Neil started this off so clearly biased against AI, not only b…
ytc_UgwEFL84C…
An AI CEO might actually start responding to our real concerns and issues. Might…
ytc_Ugw1EHs4g…
🎯 Key points for quick navigation: 00:14 *⚡ Massive Resource Use* 00:26 *🕵️♂…
ytc_UgygVjrnV…
@thewannabecritic7490 there are a couple of good Japanese artists who've discuss…
ytr_UgwA2H1NH…
I feel like this is a paid promotion for grubby ai or whatever it is…
ytc_Ugwgy-qAG…
Human beings are based off data and algorithms Also, we just didn’t make them up…
ytc_Ugwmo1WJ2…
Comment
AI needs energy. How could a super intelligence prevent us from shutting it down if it had a limited physical form? Just limit the sophistication of physical robotics. E.g. no building terminators.
youtube
AI Governance
2025-10-04T00:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugy0-y-hREOS9YQLiaN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw8HLJqLCWLI9STEZN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzGLAuHy-JBVmEECc94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxxVI0NtqaA59MikKZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugy3JSFzKB9oMvK4ePd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugz2ZCaKC9Ma8rOmrVt4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxUX0YfgniL1Pvz17N4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgykTdQkwlw7IEFKbC14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzlTm6ntMyvqiHBg554AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxW8S473hmWW_IBL_B4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
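A raw batch response like the one above can be turned into a lookup table keyed by comment ID, with each dimension checked against the codebook before use. The sketch below is a minimal illustration, assuming the allowed values are exactly those seen in the sample output (the real codebook may define more categories) and the function name `parse_raw_response` is hypothetical:

```python
import json

# Allowed values per coding dimension, inferred from the sample
# response above; an assumption, not the full codebook.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: coding},
    rejecting any record with an out-of-codebook value."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim!r} value {rec.get(dim)!r}")
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Usage on a two-record sample (hypothetical IDs):
sample = '''[
  {"id":"ytc_A","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_B","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"resignation"}
]'''
coded = parse_raw_response(sample)
```

Keying by ID is what makes the "Look up by comment ID" view cheap: each inspection is a single dictionary access rather than a scan of the raw response.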