Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
We need to stop anthropomorphising these algorithms. They dont "decide", "think" or "choose", they calculate. The danger is we are giving power and autonomy to an algorithm that we dont fully understand, and so we can't predict what conclusion our inputs will draw.
Source: youtube · AI Governance · 2026-03-19T00:3… · ♥ 36
Coding Result
Dimension       Value
Responsibility  company
Reasoning       deontological
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugzefk0ERwhgizAGqWV4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzMp826dOOeGp880yp4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugx_UjX-RltaggbEf5N4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzIjQtMarlbAx3iByh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwzR9MjPdKGcrKP7354AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwpKTeo_IY0iwIeXTF4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyNsOK0DZFI5s-cCA94AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxAyLbQ7hLtijrrufJ4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyhvPtJYu6VDPVNdrp4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxrFusZ-h8-OIupFuR4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
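The raw response above is a batched JSON array, one object per comment, keyed by comment id. A minimal sketch of how such a batch could be parsed and matched back to an individual comment's coding result (the parsing approach is an assumption; only the JSON shape comes from the output above):

```python
import json

# Raw LLM batch output: a JSON array of coding objects, one per comment.
# Shortened here to the single entry matching the comment shown above.
raw = '''[
  {"id": "ytc_UgxAyLbQ7hLtijrrufJ4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]'''

# Index the batch by comment id so any coded comment can be looked up directly.
codings = {row["id"]: row for row in json.loads(raw)}

# Retrieve the coding for one comment id from the batch.
coding = codings["ytc_UgxAyLbQ7hLtijrrufJ4AaABAg"]
print(coding["policy"])  # → regulate
```

Indexing by id rather than relying on array order guards against the model returning entries in a different order than the comments were submitted.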