Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
A voice vote would have worked. Anyway since most of the panel believed that a sentient AI will occur in our lifetime, we can assume the AI would have free will. Would you wish to live in a world where an AI could decide to get rid of all humans? After all, humans would pose an existential threat to it.
youtube AI Governance 2023-07-18T08:3…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          liability
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugx3Yomb1ql_-dEtFtN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyWbCbuAjmWT6bpZ994AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxJq2JxO4SYjpa2xW94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyKDGc6e-uo3t7QRTp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzTUdNVTZr-fTkOZvp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwnuZXHvc1zYkQfVxl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyFCQyEsq9SLEWAFFJ4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_Ugy0Mfz3nE0rvMxxRwJ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugw10Q77hy6GXacoZml4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzIBKMnPKGkUnkq2Z14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
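A minimal sketch of how a response like this might be parsed to recover the coded dimensions for one comment. The `code_for` helper and the `DIMENSIONS` tuple are illustrative assumptions, not part of the pipeline shown above; the two records and the comment ID are taken verbatim from the raw response.

```python
import json

# Two records copied from the raw LLM response above (truncated for brevity).
raw = """[
  {"id":"ytc_Ugx3Yomb1ql_-dEtFtN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyWbCbuAjmWT6bpZ994AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]"""

# The four coding dimensions shown in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def code_for(raw_response: str, comment_id: str) -> dict:
    """Return the coded dimensions for one comment ID, or raise if absent."""
    records = json.loads(raw_response)
    for rec in records:
        if rec.get("id") == comment_id:
            return {dim: rec[dim] for dim in DIMENSIONS}
    raise KeyError(f"no code found for {comment_id}")

print(code_for(raw, "ytc_UgyWbCbuAjmWT6bpZ994AaABAg"))
# {'responsibility': 'ai_itself', 'reasoning': 'consequentialist', 'policy': 'liability', 'emotion': 'fear'}
```

The second record matches the Coding Result table above, which is how the per-comment view can be cross-checked against the raw output.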