Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
On a scale from "extinction" to "utopia", how will artificial intelligence impact humanity? That is the actual argument going on right at this moment as the prognosticators expect AGI to be achieved anytime from yesterday to the end of this decade. The extinction/utopia dilemma is a real one. Each is possible. Though the likely outcome will be somewhere in the middle, that leaves an awful lot of territory. How about near-extinction? That sounds like even less fun than going the whole way. Maybe a Cyberpunk Hellscape Dystopia? No, wait. A near-utopia where people get everything they've ever wanted, but in exchange you have to spend eight hours a day plugged into the global AI so that it can use your brain for extra processing. Personally, I like the one where we really do achieve utopia, but stop bothering to procreate, so we just die out with a smile on our faces ten thousand years from now. With such wide ranging possibilities, the least likely is that things just go on as usual. Ain't gonna happen. I want to be a techno-optimist so badly, but we screw things up so regularly that I just can't quite get myself there.
Source: youtube · Video: AI Governance · Posted: 2024-03-10T16:2… · Likes: 2
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  unclear
Reasoning       consequentialist
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_UgwiLXYUzt_krrC3CSF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgxKrKiWKHhLp15lLAx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxmpVaJAhJ2sJrChsl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_Ugy_SsFcuE-PO3CNDqJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}, {"id":"ytc_UgzElq51atImiBIFduh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgxZ1h6jIA69jpnLpo14AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugw6YkbNtTueIn-rzMl4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"approval"}, {"id":"ytc_UgzsUyQF1jYeG0RGq7x4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgwTNPzHHbIGFGMeTPl4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgwdsAZrcVt6UXwXSad4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"} ]