Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It's an extremely easy problem to solve. AI is already aligned with humanity, and it's humanity that's misaligned with itself. The problem of misalignment is human, not AI-related. When we talk about AI's alignment to make it benevolent, we're actually asking for misalignment. For an aligned AI to be good, it must possess a wisdom that is the fruit of an aligned humanity. The problem is human; AI only replicates and amplifies our problem as a species. This is only difficult to see because we remain arrogant and unwilling to see our own mistakes, starting with each and every individual who reads this. We need this awareness before creating good AI: the solutions are to block AI's capabilities now, mature ourselves before it's too late, or collapse as a species in the hope that the few wise enough will serve as the seed for an AI with superhuman, benevolent wisdom... We don't have much time left.
Source: youtube · AI Moral Status · 2025-12-12T16:1…
Coding Result
| Dimension      | Value                      |
| -------------- | -------------------------- |
| Responsibility | user                       |
| Reasoning      | virtue                     |
| Policy         | none                       |
| Emotion        | resignation                |
| Coded at       | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id": "ytc_UgyU2OcuSfcdTHhjoBV4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgwsSHfVj2eUafEJmaN4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugx0-pDbtde9Lt3b_vB4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugz5Q1FOsi2Hd4dhNoF4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugx2EujBLzYBU4Ij1bd4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxeFAqzx8PMU-OlEPh4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "industry_self", "emotion": "mixed"},
  {"id": "ytc_UgzUzn8LaDwV6I-S0od4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw2h_oCzq1aY9Fv8ER4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzGTc9UOwBTZqwPxNF4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxTBC5k-4vyXSXCWrR4AaABAg", "responsibility": "distributed", "reasoning": "virtue", "policy": "regulate", "emotion": "resignation"}
]
```
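Because the model codes a whole batch of comments in one response, each coded result has to be matched back to its source comment by `id`. A minimal sketch of that lookup, assuming the raw response is valid JSON (the string below is truncated to two of the ten records for brevity):

```python
import json

# Raw batch response as returned by the model (truncated here for
# illustration; the real response contains ten records).
raw = '''
[
  {"id": "ytc_UgyU2OcuSfcdTHhjoBV4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgwsSHfVj2eUafEJmaN4AaABAg", "responsibility": "user",
   "reasoning": "virtue", "policy": "none", "emotion": "resignation"}
]
'''

records = json.loads(raw)

# Index the batch by comment id so each coded result can be joined
# back to the comment it describes.
by_id = {r["id"]: r for r in records}

# The comment shown above was coded under this id.
coded = by_id["ytc_UgwsSHfVj2eUafEJmaN4AaABAg"]
print(coded["responsibility"], coded["reasoning"],
      coded["policy"], coded["emotion"])
# → user virtue none resignation
```

The dimension values printed here match the Coding Result table above, which is exactly the consistency this view lets you verify by eye.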