Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- People of the future calls this time the 'idiot era' for giving rise to AI… (ytc_Ugzb4W4wY…)
- "ROBOTS WILL INVADE SOON / "no they won't, ai is dumb, just look at this" / "WELL LO… (ytc_UgyRFMZQm…)
- I think we can’t even say: ai “art” bc it isn’t art. It’s an image created by a … (ytc_UgxTsUnMQ…)
- >wouldn’t it also imply that AI could automate major parts of what lawyers ge… (rdc_nm8q1g5)
- Oh no their training AI to make MOVIES 😭😭 ONE AND A HALF HOUR MOVIES… (ytc_UgzSr5yl-…)
- Pretty sure there is no ethical defense for the unnecessary energy consumptuous … (ytc_UgxjkRUNs…)
- The ironic part of this is that by going against AI the people showed the right … (ytc_UgyLowZrQ…)
- As one side rolls out full facial recognition plan on every individual, the othe… (ytc_UgwpeiMqz…)
Comment
There is a lot of talk of extinction and destruction, but mechanistically, *how* would an AI do this? Would it somehow breach containment and gain access to computer systems worldwide, including military command bunkers containing warheads? Wouldn't there be much much smaller mishaps indicating that such a thing is coming long before that? Much as I'm fine being on board with viewing AI through a lens of caution, I struggle to see a plausible pathway for it to wander into the world, start controlling robots, build factories and unleash terminators.
youtube · AI Governance · 2025-08-26T15:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
{"id":"ytc_UgyEPONeTw5wbePaQoF4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwIuaegTC1BDpWwzHx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgzrBIf82oKMXLt11QN4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwFk9qYfs7Wl0Yig1N4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgydSGi3eJqfOoPukJR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"outrage"}
]
```
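The raw response is a JSON array of per-comment codings, one object per comment ID. A minimal sketch of how such an array might be parsed and indexed by ID so a single comment's coding can be looked up (the variable names here are illustrative, not part of the tool):

```python
import json

# Example raw LLM response in the format shown above; the dimension keys
# ("responsibility", "reasoning", "policy", "emotion") match the
# Coding Result table.
raw_response = """
[
  {"id": "ytc_UgyEPONeTw5wbePaQoF4AaABAg", "responsibility": "unclear",
   "reasoning": "consequentialist", "policy": "unclear",
   "emotion": "indifference"},
  {"id": "ytc_UgwIuaegTC1BDpWwzHx4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "industry_self",
   "emotion": "indifference"}
]
"""

# Build an id -> coding lookup table from the parsed array.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Fetch the coding for one comment by its ID.
coding = codings["ytc_UgyEPONeTw5wbePaQoF4AaABAg"]
print(coding["emotion"])  # indifference
```

Indexing by `id` first means each "Look up by comment ID" query is a constant-time dictionary access rather than a scan of the array.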