Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
As humans raising children, we strive to nurture them into the morally exemplary people we ourselves aspire to be. Yet what constitutes "morally ideal" depends on numerous factors and varies across contexts and cultures. The safeguards that guide our humanity stem largely from learned experience, but one factor stands above the rest: our emotional responses. How do our actions make us feel? Can we empathize with how our behavior affects others? Our moral compass is fundamentally shaped by the emotional cause-and-effect of our choices within society—whether we feel pride, guilt, shame, or ambivalence about our actions. This emotional dimension presents a profound challenge for superintelligence. While an AI might process the raw data of moral decisions—categorizing actions as "good" or "bad"—it lacks the visceral, physical reactions that humans experience when crossing ethical boundaries. The flush of shame when we've done wrong, the warm glow of pride in our accomplishments, the gnawing discomfort of guilt—these feelings serve as powerful deterrents against violating our moral principles. For superintelligence to truly understand human morality, it must somehow bridge this gap between abstract ethical knowledge and the embodied emotional experience that fundamentally shapes human moral behavior. Without these internal emotional guardrails, even the most sophisticated AI operates with an incomplete understanding of what truly guides human ethical decision-making.
Source: youtube | AI Governance | 2025-10-30T19:2…
Coding Result
Dimension      | Value
Responsibility | unclear
Reasoning      | unclear
Policy         | unclear
Emotion        | unclear

Coded at: 2026-04-26T23:09:12.988011
Raw LLM Response
[{"id":"ytc_UgworqxWBQVfwFn6Xuh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgyUuaK2xEg2JAFHoaZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgyYYMD8oaSsIgyqId94AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzeTvSaPfAMLT7cXux4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugy_3BS3WOcdEQdHs_B4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}, {"id":"ytc_Ugw0Lmqm-uB_UC4p_n94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugznd2Tsv3RQHgSVpwV4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"}, {"id":"ytc_Ugzs3cZeD8fv20ptl6Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_Ugweg6mYLzYr0pPl5-d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgwRosgy1bvg2Ejcs0l4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"})