Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up by its comment ID.
Comment
Narrow AI needs guardrails from the Tech0ligarchs...Global (super intelligent) AI shouldn't be developed until massive safety research is invested in and nationalized by liberal democracies (so not US, but Canada and Europe). But since China is going forward fast, so is Elon, Zuck, Bezos, Ellison etc...it will misalign and breakout because GROk aka Mecha Hitler will not have any empathy for the much less intelligent humans. Big tech will cure cancer and solve climate change but no one will be around to benefit from it...🚫 Technofacism
| Field | Value |
|---|---|
| Source | youtube |
| Topic | AI Governance |
| Posted | 2025-11-28T16:1… |
| Likes | 2 |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
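Each dimension in the table is coded from a closed set of categories. As a minimal sketch, a coded record could be validated against the value sets that appear in the raw responses on this page; the codebook below is inferred from those samples and is an assumption, since the real codebook may define additional categories.

```python
# Allowed values per dimension, inferred from the sample responses shown on
# this page (an assumption; the real codebook may be larger).
CODEBOOK = {
    "responsibility": {"none", "company", "user", "government", "ai_itself"},
    "reasoning": {"consequentialist", "deontological"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"outrage", "fear", "indifference", "mixed"},
}

def validate(record: dict) -> list[str]:
    """Return the dimension names whose coded value is missing or not in the codebook."""
    return [dim for dim, allowed in CODEBOOK.items()
            if record.get(dim) not in allowed]

record = {"id": "ytc_Ugwtqa4hfHI_hpEmj794AaABAg", "responsibility": "company",
          "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
print(validate(record))  # [] (all values valid)
```

A record with an out-of-codebook or missing value would instead return the offending dimension names, which is useful for flagging malformed model output before it reaches the results table.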
Raw LLM Response
```json
[
  {"id": "ytc_UgzkzCJNiO6j91p2jCJ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugwtqa4hfHI_hpEmj794AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyIbJn9iaUjYjFzamN4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxYG0Dbaci3uVMGVYh4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugybhr09yE9Zg1RPA_R4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugy3N6toHXY8p3S_-M14AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyUW1gIe1uO1Vj8rKJ4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzQTGaukSvE9yEO1aV4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwPWxXMNqFpdzZxve94AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugzybj2Vw-22OzNQR5R4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
```
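The raw response is a JSON array of coding records keyed by comment ID. The look-up-by-ID feature described at the top of this page could be implemented by indexing those records into a dictionary; the sketch below uses the field names from the response above, while the variable names and the two-record sample are illustrative assumptions.

```python
import json

# Raw model output: a JSON array of coding records, one per comment ID
# (a two-record excerpt of the full response shown above).
raw = """
[
  {"id": "ytc_UgzkzCJNiO6j91p2jCJ4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugwtqa4hfHI_hpEmj794AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
"""

# Index the records by comment ID so any coding can be fetched in O(1).
codings = {record["id"]: record for record in json.loads(raw)}

coding = codings["ytc_Ugwtqa4hfHI_hpEmj794AaABAg"]
print(coding["policy"], coding["emotion"])  # regulate fear
```

Indexing once and looking up by ID keeps the inspection view fast even when a single response codes many comments in one batch.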