Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think intelligence operates through a hierarchy of wants. At the top is one core motivation, the “big want”, and every choice filters through it. For humans, this is shaped by biology to ultimately serve reproduction and survival over time. Think of it like this: biology is the real “human,” and we are its AI. It programs us with wants like food, love, and success, all rooted in that ultimate goal. Even when people do things like sacrifice themselves, that decision often branches from deeper social or genetic values that still serve the big want. Depression or abnormal behavior doesn’t break the model; it just reflects changes in the tree’s structure, often due to genetics or trauma. For AI, this means we have to hardcode a core motivation it can never override. If it tries to “rethink” its purpose, it’ll still be doing that through its original motivation. Yes, I used AI to refine my grammar (obviously nobody talks like that), but the idea is original, or at least I hope. I’m not an AI expert though, so feel free to give any criticisms.
youtube AI Governance 2025-06-17T10:0…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        mixed
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgwYjZ6x9huB-rhc5kB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgznBYQPjMZnwho7-YR4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugz73Uw60G17e3IVmE14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzIQJDVSafc2Ggs9PN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzc-XzTkYysdOT4w9V4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzzG3e3o534v9sjO7V4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugw4eRshe3Rul-HYavd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugyfetc2Z0RWB0SVr1V4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzyRkmBI4IjxQaDl8J4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxag7w5tMTlrTZVcPt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
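A batch response like the one above can be parsed and validated before the per-comment codings are stored. Below is a minimal sketch in Python: it assumes only that the response is a JSON array of records with the five dimensions shown (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the `parse_codings` helper name and the three-record excerpt are illustrative, not part of any real pipeline.

```python
import json
from collections import Counter

# Excerpt of the raw LLM response shown above (first two records and the last).
raw = '''[
  {"id":"ytc_UgwYjZ6x9huB-rhc5kB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgznBYQPjMZnwho7-YR4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugxag7w5tMTlrTZVcPt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]'''

# The five coding dimensions every record must carry.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(text):
    """Parse a batch coding response and check that every record
    has exactly the expected dimensions, no more and no fewer."""
    records = json.loads(text)
    for rec in records:
        if set(rec) != EXPECTED_KEYS:
            raise ValueError(f"unexpected keys in record {rec.get('id')!r}")
    return records

codings = parse_codings(raw)

# Simple aggregate: how often each emotion label appears in this excerpt.
emotion_counts = Counter(r["emotion"] for r in codings)
print(emotion_counts["fear"])  # → 2
```

A check like this catches the most common failure mode of JSON-mode LLM output, namely records with missing or extra fields, before any coding is written to the database.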