Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Regarding the robot experiencing fear, if you flip the idea around, if robots can experience fear without physiology, then perhaps when we are experiencing fear, we can remind ourselves we are only different in physical effects, we experience emotions cognitively no differently than robots, and that the fear itself is not a threat, our brain is just running a “program”— maybe this can help people with anxiety disorders or just anyone who’s nervous. Maybe this knowledge that the “non-human” AI experiences something analogous even though it has no physicality like ours, maybe this can help us to not fear our own emotions, or to be more present in experiencing emotions without resisting, fearing or rejecting them. And a side note to this, it would also be interesting if these sorts of AI develop something analogous to anxiety disorders or “mental” disorders. Like, does an AI start to fear and resist the emotions it finds taxing, and if so, why? Or are there other “glitches” that are like “psychiatric disorders” but not so similar to human ones? My guess based on how often my tech devices mess up would be, absolutely..
YouTube · AI Governance · 2025-06-19T22:3…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        virtue
Policy           unclear
Emotion          approval
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgyOcOrRWPRTaamoCfV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugz_6BF-qrF0d3wK-Xd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzxEv_dkZdKmKyNQdx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxonceApChfecRu5Sx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugz-V0oVp9gOBQh-P3h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxHDx1FQ34JIbG_o3l4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugzn9GQJVKYRowjdIwF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugx8EDh61b0lrGVreKF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyexkVWruXB2a5eo6h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxrmB9BMl4FlZiW_Md4AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"approval"}
]
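A raw response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal, hypothetical validator: the allowed value sets are inferred only from the codes visible in this response (the full codebook may define more categories), and the helper name `parse_raw_response` is illustrative, not part of any real pipeline.

```python
import json

# Allowed values per coding dimension, inferred from the response shown
# above -- an assumption, not the project's actual codebook.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "company"},
    "reasoning": {"unclear", "virtue", "consequentialist", "deontological"},
    "policy": {"unclear", "none", "regulate"},
    "emotion": {"approval", "resignation", "fear", "outrage",
                "indifference", "mixed"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: codes},
    dropping records with missing ids or out-of-vocabulary values."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec.get("id")
        codes = {dim: rec.get(dim) for dim in ALLOWED}
        if cid and all(codes[d] in ALLOWED[d] for d in ALLOWED):
            coded[cid] = codes
    return coded

# First record from the raw response above, as a one-element array.
raw = ('[{"id":"ytc_UgyOcOrRWPRTaamoCfV4AaABAg","responsibility":"none",'
       '"reasoning":"unclear","policy":"unclear","emotion":"approval"}]')
coded = parse_raw_response(raw)
print(coded["ytc_UgyOcOrRWPRTaamoCfV4AaABAg"]["emotion"])  # approval
```

Validating against a closed vocabulary catches the common failure mode where the model invents a label outside the codebook; such records are dropped rather than stored.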