Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
He seemed to have a lot of trouble answering Ezra's questions with any specifics or a clear mechanism. He kept giving loosely associated analogies and hand-waving to make his point. I'm sure he didn't mean it literally, but he anthropomorphized AI heavily, giving it true agency, which made it easier to understand his conclusion but gave no clear path to his conclusions.
YouTube · AI Governance · 2026-04-15T17:3…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           unclear
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugz38GMtozfBZH64yiF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgzDJZE42f6f0txpTSR4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugx2K3cv_QTluA-NNEB4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzGrZb-Fc9cPlVYhA94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxUAEKQqzYnKnKxMMp4AaABAg","responsibility":"government","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgxdjPtVBEHNjMdrB0x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz4FzsAMYBpDl4nJRV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugx3kUt-5PGrAS-2yHV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzjAVsuCfBTw7loi214AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyNGDr9sn8gwajhs_54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
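The raw response above is a JSON array with one object per comment, keyed by comment id. A minimal sketch of how one coded comment can be looked up in such a response (the `code_for` helper is hypothetical, not part of any tool shown here, and only two rows from the response are reproduced for brevity):

```python
import json

# Raw model output: a JSON array of per-comment codes. The shape mirrors the
# response above; only two of its rows are included here as sample data.
raw_response = '''[
  {"id":"ytc_Ugz38GMtozfBZH64yiF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxdjPtVBEHNjMdrB0x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]'''

def code_for(comment_id, raw):
    """Return the coding dict for one comment id, or None if it is absent."""
    return next((row for row in json.loads(raw) if row["id"] == comment_id), None)

# Look up the row that matches the comment shown in the Coding Result table.
row = code_for("ytc_UgxdjPtVBEHNjMdrB0x4AaABAg", raw_response)
```

Comparing the returned row against the Coding Result table is a quick way to confirm the displayed values were taken from the model output rather than a default.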