Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Lets say we creat AI that is good, and has answers to our problems. Would we listen? Would our inability to listen send AI into a murderes rampage? Not saying we should ever listen to AI, just a thought.
youtube AI Governance 2024-02-04T08:0…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzYjkkBi3PS11lZR8p4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_UgyaieCUYyjkb4vkxyh4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxG-JkhkgNx9Cv--Z94AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_Ugy5JaFKiKKYlXuSa6t4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyVPt5pKeIo0EHbBo54AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyuM1vmiMg8cswzMFN4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzlIWhhgko9MJPA1uB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_UgzDo2WXaU-X4n3BxUR4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwDiO3k-GDoAKKRWmZ4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyITuGNGz4Qu9csHcN4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
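The raw response is a JSON array covering a whole batch of comments, while the Coding Result table above shows only the single record whose `id` matches the displayed comment. A minimal Python sketch of that lookup step is shown below; the function name `code_for_comment` is hypothetical, but the field names and the two sample records are taken verbatim from the response above.

```python
import json

# Two records copied from the raw batch response above (truncated to keep
# the sketch short); the real payload contains one object per comment.
raw_response = '''[
  {"id": "ytc_UgzYjkkBi3PS11lZR8p4AaABAg", "responsibility": "unclear",
   "reasoning": "unclear", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_UgyaieCUYyjkb4vkxyh4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]'''

def code_for_comment(raw: str, comment_id: str) -> dict:
    """Parse a batch-coded LLM response and return the record for one comment id."""
    records = json.loads(raw)
    by_id = {r["id"]: r for r in records}  # index the batch by comment id
    return by_id[comment_id]

# Looking up the displayed comment reproduces the Coding Result table:
row = code_for_comment(raw_response, "ytc_UgyaieCUYyjkb4vkxyh4AaABAg")
print(row["responsibility"], row["emotion"])  # distributed fear
```

Indexing by `id` rather than by position makes the lookup robust if the model returns the batch in a different order than it was sent.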