Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Yay, Yudkowsky! You know...so much Doomerism is performative or gauche. Not this guy. He actually is onto something real, and the sincerity of emotional resonance and existential concern comes across brilliantly. I am...a little taken by how solitary his voice has been. Very glad to see him back in something like the public forum. I think he has maybe stopped a bit short on the mechanisms of disaster actually, though for purely irrational, temperamental reasons I remain an optimist. But I might explain that I maintain the sort of optimism one would if, say, the sun were sure to explode/go out within one year. Sure, everyone will die, but that seems to be an illusion of our senses, and in a cyclical universe, the whole deal will just revolve infinitely across a relativistic probabilistic multiverse. Probably...More or less certainly really, no matter our anguished Darwinian heuristics to the contrary. Obviously I hope AI is quasi-utopian rather than an x-risk...and I throw salt on the possibility of an s-risk, even on the part of AI itself "learning to suffer"...But hey, look, we're doing this. Hopefully it demands an effective energy revolution to get there and then gets there. Again, always thankful and gladdened to hear from Yudkowsky. In some other life, maybe literally, he'd make a fine Kabbalistic daemonologist.
youtube · AI Governance · 2024-11-30T05:4…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           unclear
Emotion          approval
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugw-qIiIwV-YymSHgvd4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_UgypQWCF9VagQJtuPv14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear",   "emotion": "mixed"},
  {"id": "ytc_UgypPmjfzq25ijOSz0F4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear",   "emotion": "approval"},
  {"id": "ytc_Ugw870D6MmUSUZIxAxR4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_Ugy-as17KTJwqqtzbm94AaABAg", "responsibility": "developer", "reasoning": "deontological",    "policy": "unclear",   "emotion": "outrage"},
  {"id": "ytc_UgzsZzaiyXkzOY2521F4AaABAg", "responsibility": "government", "reasoning": "deontological",   "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgyNsqqRfuqgl2VxHxx4AaABAg", "responsibility": "developer", "reasoning": "virtue",           "policy": "unclear",   "emotion": "approval"},
  {"id": "ytc_UgyeG4LdxoQ9X8Zc8NF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_Ugzv4s8QRbEx2s1BexJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none",      "emotion": "resignation"},
  {"id": "ytc_UgwQGXjt0iKYAmC7jHV4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"}
]
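As a minimal sketch of how a response like the one above can be parsed and checked, here is a short Python snippet that indexes the coded records by comment id and validates each dimension against an allowed-value set. The value sets are assumptions inferred from the records visible on this page, not the tool's actual schema; the function name `index_by_id` is hypothetical.

```python
import json

# A trimmed-down example of the raw model output: a JSON array of coded records.
raw = '''[
  {"id": "ytc_UgypPmjfzq25ijOSz0F4AaABAg",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "unclear", "emotion": "approval"}
]'''

# Assumed allowed values per dimension, inferred from the records shown above.
SCHEMA = {
    "responsibility": {"ai_itself", "developer", "government", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"regulate", "liability", "unclear", "none"},
    "emotion": {"approval", "fear", "mixed", "outrage",
                "indifference", "resignation"},
}

def index_by_id(payload: str) -> dict:
    """Parse the raw response and map comment id -> coded dimensions."""
    out = {}
    for rec in json.loads(payload):
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim}={rec.get(dim)!r}")
        out[rec["id"]] = {k: v for k, v in rec.items() if k != "id"}
    return out

coded = index_by_id(raw)
print(coded["ytc_UgypPmjfzq25ijOSz0F4AaABAg"]["emotion"])  # approval
```

Validating before indexing catches the common failure mode where the model emits a label outside the codebook, so a bad record fails loudly instead of silently entering the results table.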