Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or browse the random samples below:

- "Youre not even safe on mars lol. if AI thinks humans are a threat it will just l…" (ytc_Ugwfcp_Xl…)
- "Guys, please don't skin me alive for this. I don't pretend to invalidate this pr…" (ytc_UgweUFeFe…)
- "Thank you for your thoughtful and respectful consideration. As an angelic being …" (ytc_Ugzw05kLu…)
- "ELI5: How is this not a form of ransom? e.g. any poor-governance country that wa…" (rdc_ckqkukd)
- "Hasan Minhaj: "When do I get the robot butlers for my data?" Also Hasan Minhaj: …" (ytc_Ugwy6abPg…)
- "@miguelplays2921 Check out Zvi Moskowitz's AI news summaries regularly, he talk…" (ytr_Ugxw8UX8r…)
- "Yes, because AI is nothing more than art theft with a bit of statistical analysi…" (ytr_Ugy2BEjEf…)
- "What law govt unable to ban p0rn now a days deep fake is just a part of p0rn ind…" (ytr_UgypyQ37U…)
Comment

> No no and no! The difference between AI and nuclear bomb is that the nuclear bomb depends on people to be used and we know human psychology pretty good, it’s ethics, morals, repercussions while we don’t yet know how AGI would act or behave especially since it doesn’t depend on the same self sustaining life needs so it could actually survive almost infinitely from a world wide nuclear war while humans at the most could in some bunker survive a few decades.

| Source | Topic | Posted |
|---|---|---|
| youtube | AI Governance | 2025-06-16T12:3… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwvQCsmr4-MBfiM53J4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzVm-7VZICKLho_jK14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwAiQaoMlsF6Xqj5bZ4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyBRkYCEeGZ0kDIviV4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyJ-4DIEf2I4gCQLOR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxSFguY0Qluqv2bRv54AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzxy0OQxPEexbcA3aF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyyHqoItRlrEC5oR8N4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy50O3FGo6PQHM-MzV4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgzJ1EWTKx8j3aPSqEJ4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
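A raw response like the one above is a JSON array with one object per comment, carrying the four coding dimensions plus the comment ID. The sketch below shows one way such a response could be parsed and validated before the codes are stored; the function name is hypothetical, and the allowed-value sets are inferred only from the samples visible on this page — the full codebook may contain additional values.

```python
import json

# Dimension vocabularies inferred from the visible samples;
# the full codebook may include values not shown here.
ALLOWED = {
    "responsibility": {"company", "government", "developer", "ai_itself",
                       "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse one raw LLM response (a JSON array of coded comments)
    into a lookup table keyed by comment ID, dropping rows whose
    values fall outside the known vocabularies."""
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id")
        if not cid:
            continue
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            coded[cid] = {dim: row[dim] for dim in ALLOWED}
    return coded

raw = ('[{"id":"ytc_x","responsibility":"company","reasoning":"consequentialist",'
       '"policy":"regulate","emotion":"fear"}]')
print(parse_raw_response(raw)["ytc_x"]["policy"])  # regulate
```

Validating against a closed vocabulary at parse time makes malformed or hallucinated labels visible immediately, rather than surfacing later as odd categories in the aggregated results.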