Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@cxzact9204 I mean, honestly, when you start stripping away the layers of culture and personality, people aren’t nearly as different as we like to think. The guy chasing fame and the quiet farmer tending his land aren’t opposites; they’re both running the same psychological software with slightly different wallpaper. Humans everywhere operate on the same core drives, whether we’re willing to acknowledge that or not. It’s why someone thinks they’ve had a unique, profound thought online, only to scroll down and see half a dozen people saying the same thing. This is where the AI discussion gets strange, and frankly, I haven’t seen anyone else acknowledge this angle at all. AI, if truly 'aligned', likely won't be so in the way people think it will; it won't respond to our polished, socially acceptable self-descriptions of our values. It would look at the underlying machinery, the real, deep, contradictory human drives, and align to that. Not what we claim to want, but what we actually want. Because what humans actually want, at the fundamental level, is impulsive, contradictory, tribal, short-sighted, and oftentimes outright destructive. If an AI aligns perfectly to our true nature, it'll be alignment that kills us all! Be careful what you wish for.
youtube AI Governance 2025-11-19T01:2…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       virtue
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytr_UgyjG32YOUy-BJ4NTgl4AaABAg.AQ9BSJh1pP7AQf-PzhlAkT","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgyjG32YOUy-BJ4NTgl4AaABAg.AQ9BSJh1pP7AVV7tIsEivA","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytr_UgyZZ9jHk4bbQfEP0Dx4AaABAg.AQ4UwEPAzCsAQ6hbqUr0k4","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytr_Ugy4HfiN8Djm14kV0pR4AaABAg.AQ-hela8k7eAQgL20d3719","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytr_Ugy4HfiN8Djm14kV0pR4AaABAg.AQ-hela8k7eATyhki_unaX","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgyXF8A8_gVuWh8jAWF4AaABAg.APrBnahp569AQ-AheTQaPD","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytr_UgwH9PmWEIV3zB8Zh2F4AaABAg.APa2ECqMBskATNIK6TpVv2","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytr_Ugz8bDUbH-UTVcgh8tp4AaABAg.APUwAuDaDfvAPgCAuSchC2","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytr_Ugz8bDUbH-UTVcgh8tp4AaABAg.APUwAuDaDfvAPgYsl9AY7V","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytr_Ugz8bDUbH-UTVcgh8tp4AaABAg.APUwAuDaDfvAPguw1pIMuR","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}
]
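A raw LLM response like the one above is only usable if every record carries the four coded dimensions with legal values. Below is a minimal validation sketch in Python. The allowed value sets are inferred from the labels that actually appear in this excerpt (the real codebook may define more categories), and the `validate_records` function name and the `ytr_` id-prefix check are illustrative assumptions, not part of the original pipeline.

```python
import json

# Allowed labels per dimension, inferred from the values seen in this
# excerpt; the actual codebook may include additional categories.
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself", "distributed"},
    "reasoning": {"virtue", "deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"none", "regulate"},
    "emotion": {"indifference", "outrage", "fear", "resignation", "mixed"},
}

def validate_records(raw: str) -> list:
    """Parse a raw LLM response and check each record's coded dimensions.

    Raises ValueError on a missing dimension or an out-of-codebook label,
    so a malformed model response fails loudly instead of being stored.
    """
    records = json.loads(raw)
    for rec in records:
        # Comment ids in this dataset appear to start with "ytr_" (assumption).
        if not rec.get("id", "").startswith("ytr_"):
            raise ValueError(f"unexpected id: {rec.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim}={rec.get(dim)!r}")
    return records

raw = ('[{"id":"ytr_example","responsibility":"developer",'
       '"reasoning":"virtue","policy":"none","emotion":"fear"}]')
print(len(validate_records(raw)))  # 1
```

Failing fast here is a deliberate choice: with ten records per response, a single silently mislabeled dimension would corrupt the downstream counts shown in the coding-result table.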