Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I feel like Eliezer is a brilliant man but does a relatively poorer job as a communicator. He discusses a lot of interesting ideas very well but he gets lost in the details a bit and the specific nuances to make sure he is not misunderstood. But he needs to do a bit of a better job directly addressing the Alignment problem, why AI will destroy humanity if Alignment is not solved, why our path is hurdling towards this situation, etc. Most of the debate was not related to alignment or AI risk in anyway at its a bit frustrating because if you read his work he does an exceptional job is boiling down these ideas, but he can't seem to do it in a debate. I feel like as communicators go Rob Miles and Connor Lahey do a much better job and are exceptional communicators.
youtube AI Governance 2024-12-01T02:1…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         mixed
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_Ugw-qIiIwV-YymSHgvd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgypQWCF9VagQJtuPv14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgypPmjfzq25ijOSz0F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugw870D6MmUSUZIxAxR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy-as17KTJwqqtzbm94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzsZzaiyXkzOY2521F4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyNsqqRfuqgl2VxHxx4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgyeG4LdxoQ9X8Zc8NF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugzv4s8QRbEx2s1BexJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwQGXjt0iKYAmC7jHV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
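A raw LLM response like the one above can be checked mechanically before its codes are accepted. Below is a minimal sketch in Python of such a validation step; the allowed vocabularies in SCHEMA are inferred only from the values visible in this sample, so the real codebook may permit additional categories, and the function name `validate_coding` is a hypothetical helper, not part of the coding tool.

```python
import json

# Allowed values per coded dimension. ASSUMPTION: inferred from the
# sample response above; the actual codebook may include more labels.
SCHEMA = {
    "responsibility": {"ai_itself", "developer", "government", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"regulate", "liability", "unclear", "none"},
    "emotion": {"fear", "mixed", "approval", "indifference",
                "outrage", "resignation"},
}


def validate_coding(raw: str) -> list[dict]:
    """Parse a raw LLM response (a JSON array of coded comments) and
    keep only records that carry an id and whose coded dimensions fall
    within the expected vocabulary."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        has_id = "id" in rec
        dims_ok = all(rec.get(dim) in allowed
                      for dim, allowed in SCHEMA.items())
        if has_id and dims_ok:
            valid.append(rec)
    return valid
```

Records with out-of-vocabulary labels (a common LLM failure mode) are dropped rather than silently stored, so downstream tallies only ever see codes the schema recognizes.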