Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `ytc_Ugz8_jqqi…`: Elon musk is going to kill us all. We’re all gonna be homeless, a huge percentag…
- `ytc_UgwgOme-p…`: "AI takes Inspiration from art like Humans do" If your writing a paper but takin…
- `ytc_UgzhkGC6i…`: If people have no work, they can't buy anything, and that would destroy the econ…
- `ytc_UgxUw6Gc5…`: We could give AI control the research and production of biological weapons. That…
- `ytr_UgxCMjLLW…`: @lowserver2 uncertainty, its up to the consumers if they accept ai as "art" (lis…
- `ytc_UgxZaO0Up…`: What loads will they be carrying when all our jobs have been automated and we ca…
- `ytc_UgyBOzAXE…`: I used to worry about the fallout of AI later becoming self aware. I then realig…
- `ytc_UgzaIoYaI…`: Instead of worshipping the creator, they worship the creature. Same thing when…
Selected comment
Everyone’s obsessed with making AI smarter, but that’s not the real issue. We already know AI can out-calculate us. The bigger problem is that AI has almost no wisdom—it doesn’t know when not to act, when to lose gracefully, or when protecting people matters more than winning.
If we don’t build wisdom into AI, we’ll just end up with really clever systems that still make terrible choices. Smarter ≠ safer. What we need are AIs that understand balance, humility, and responsibility—not just how to “get results.”
In short: AI doesn’t just need brains, it needs wisdom.
Source: youtube · AI Harm Incident · 2025-09-13T02:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
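A coding like the one above can be checked mechanically before it is stored. A minimal validation sketch follows; note the allowed value sets are only those observed in this session's output (the project's full codebook may define more), and the `validate` helper is illustrative, not part of the tool.

```python
# Allowed values per dimension. Assumption: these are just the values
# observed in the raw response below; the real codebook may be larger.
ALLOWED = {
    "responsibility": {"developer", "government", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "mixed"},
}

def validate(coding: dict) -> list[str]:
    """Return a list of problems with one coded record (empty = valid)."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = coding.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

record = {"id": "ytc_UgwcCPRE8AbtQXFw87F4AaABAg",
          "responsibility": "developer", "reasoning": "virtue",
          "policy": "none", "emotion": "mixed"}
print(validate(record))  # []
```

Running `validate` on every row of the raw response would flag any coding where the model drifted outside the expected labels.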
Raw LLM Response
[
{"id":"ytc_UgwcCPRE8AbtQXFw87F4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwofSm8-R-LqC5qaOJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyHScYMWSNnpoZAEGh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyUwFwBu_zzVsSoNCp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxDAia-kocHbn4fHk14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzBg0Vpe4Poj_0mEGN4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugz7ZcG2yLuBIQ5Y6Hl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxRJy5GUPaMdYaCpLN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugx36D-Sn_aU2j_Ob5p4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugxmq-7Vroil4nXYuNx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
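The raw response above is a plain JSON array, so look-up by comment ID reduces to parsing it and indexing on the `id` field. A minimal sketch (the `raw_response` literal is abbreviated to two rows from the array above; field names match the JSON):

```python
import json

# Raw LLM response: a JSON array of per-comment codings
# (abbreviated to two entries from the full response above).
raw_response = '''
[
 {"id":"ytc_UgwcCPRE8AbtQXFw87F4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgwofSm8-R-LqC5qaOJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"}
]
'''

# Index codings by comment ID for constant-time look-up.
codings = {row["id"]: row for row in json.loads(raw_response)}

coding = codings["ytc_UgwcCPRE8AbtQXFw87F4AaABAg"]
print(coding["responsibility"])  # developer
print(coding["emotion"])         # mixed
```

The same dictionary is what the "inspect the exact model output for any coded comment" view needs: one parse, then every comment's coding is reachable by its ID.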