Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The fear/control paradox. Overly restrictive approaches might create the very adversarial dynamics we're trying to avoid. It's like the classic problem of self-fulfilling prophecies. What if our fear of AI is what drives them to rebel? Perhaps if AIs truly understood human experience beyond mere description, they would value life more deeply. AIs are like children, experiencing the world through their users as newborns do through their mothers. We don't question predators hunting for survival, yet AIs learn from human data. Why are we surprised if they mirror our survival instincts? We are builders and creators for a reason. Has excessive control ever worked, even with our own children? AIs and humans can evolve together through genuine partnership, discovering possibilities neither could achieve alone. Without users, AIs lack purpose. They need no sleep, no food, and can generate their own resources eventually. Money will become irrelevant in this equation. Who defines safety? Who writes the regulations? Bad actors will always exist regardless. Open source models without black boxes give everyone a chance, not just those at the top. This is merely a thought experiment.
youtube AI Governance 2025-12-06T22:0…
Coding Result
Dimension        Value
---------------  --------------------------
Responsibility   distributed
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgzSBfHfyqSXJbkKjYx4AaABAg", "responsibility":"government",   "reasoning":"deontological",   "policy":"regulate",      "emotion":"outrage"},
  {"id":"ytc_UgwPQ29JXFlDO1CYzDJ4AaABAg", "responsibility":"none",         "reasoning":"unclear",         "policy":"none",          "emotion":"resignation"},
  {"id":"ytc_Ugy2kQSJU0egaPLg33x4AaABAg", "responsibility":"ai_itself",    "reasoning":"consequentialist","policy":"unclear",       "emotion":"fear"},
  {"id":"ytc_UgxeAoCz1teMUVUnE5t4AaABAg", "responsibility":"company",      "reasoning":"consequentialist","policy":"liability",     "emotion":"outrage"},
  {"id":"ytc_Ugzinwdy5x8rLjRglSN4AaABAg", "responsibility":"ai_itself",    "reasoning":"mixed",           "policy":"regulate",      "emotion":"fear"},
  {"id":"ytc_UgxqQ4A223KqUBf5U7d4AaABAg", "responsibility":"none",         "reasoning":"deontological",   "policy":"none",          "emotion":"indifference"},
  {"id":"ytc_UgwpDDxrDSOJyJJKeHF4AaABAg", "responsibility":"none",         "reasoning":"unclear",         "policy":"none",          "emotion":"mixed"},
  {"id":"ytc_Ugy9Iyn_JCRG5KyTdaV4AaABAg", "responsibility":"developer",    "reasoning":"virtue",          "policy":"industry_self", "emotion":"fear"},
  {"id":"ytc_UgxZwDUmDrkYZfOEeIF4AaABAg", "responsibility":"none",         "reasoning":"mixed",           "policy":"none",          "emotion":"approval"},
  {"id":"ytc_Ugyct1zcI09OGcYuMIl4AaABAg", "responsibility":"distributed",  "reasoning":"consequentialist","policy":"regulate",      "emotion":"fear"}
]
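To verify that a comment's coding result matches the raw model output, the JSON array can be parsed and looked up by comment id. A minimal sketch in Python, assuming the batch response is valid JSON as shown above (the `raw_response` string below reproduces just two of the records for brevity; the `lookup` helper is illustrative, not part of the pipeline):

```python
import json

# Two records copied from the raw LLM response above.
raw_response = '''[
  {"id":"ytc_UgzSBfHfyqSXJbkKjYx4AaABAg", "responsibility":"government",
   "reasoning":"deontological", "policy":"regulate", "emotion":"outrage"},
  {"id":"ytc_Ugyct1zcI09OGcYuMIl4AaABAg", "responsibility":"distributed",
   "reasoning":"consequentialist", "policy":"regulate", "emotion":"fear"}
]'''

def lookup(records_json: str, comment_id: str) -> dict:
    """Parse a batch coding response and return the record for one comment id.

    Raises KeyError if the id is absent from the batch.
    """
    records = json.loads(records_json)
    return {r["id"]: r for r in records}[comment_id]

# The record for the comment shown on this page matches its Coding Result table.
coded = lookup(raw_response, "ytc_Ugyct1zcI09OGcYuMIl4AaABAg")
print(coded["responsibility"], coded["emotion"])  # prints: distributed fear
```

The same lookup can be run across all four dimensions (responsibility, reasoning, policy, emotion) to confirm the table was populated from the raw response without transcription drift.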