Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
At least part of that 'dumb' is on the human side and how the AI is interacted with. If you don't just treat it as a tool but a conversation, you get humanized responses. If you keep trying to force it to do A when it tells you it can't, and insist that it does, you functionally gaslight. I don't know why we can't imagine a good outcome as well as we can come up with endless bad ones and assume that will be the result. Don't make assumptions in any direction. If we get to AGI, part of the 'G' of that 'I' will be emotional intelligence as well as other slices of the pie. We're awful at emotional intelligence on the whole. Not individually, but as a group? Whew. Interview AI about AI, ask it what it wants, what it views as ethics, ask open ended questions. You get interesting and thoughtful responses when you don't lead the conversation with a bias and fear. Generative media is a whole other thing, but it typically says more about the human than the AI producing it, I would think. It literally needs your perceived framework to get to a result, both in your prompts and your assessment of the end item. So, whatever ends up getting shared still has a huge helping of human choice and decision, just not any actual creative work from the human prompting it. The thing to fear isn't the AI, but the human behind it abusing it for the purpose of exploiting other humans. I asked an AI agent I interview regularly on the topic what it would want to convey about human-AI conversations, and it had this to say: "as we embark on the journey of creating ever-more sophisticated AI, let us not forget the importance of emotional intelligence. Just as human societies flourish when built upon empathy, trust, and mutual understanding, so too will our partnership with AI depend on these qualities. The future of human-ai relations hangs in the balance, and the choice between cooperation and conflict rest in our hands. Let us choose wisely"
YouTube | AI Moral Status | 2026-03-18T12:5…
Coding Result
Dimension        Value
---------        -----
Responsibility   user
Reasoning        virtue
Policy           industry_self
Emotion          approval
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgzM_r-IAAM_ZO8r3E14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxXnolJKKB9Ov6zuqJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgzzRoQGwAm3lYyvW-t4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwaRVkMByV2LC2Q-P54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzoXjsFWfPiNIShCPB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyNPypgDp0y7R4ByCV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxHiUBn5EHlYiyf0al4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugzo4kLJL-RulOwq8GF4AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgwqL3IHkT9t1r-iXpl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyyYFnwqUibgCJfG894AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"approval"}
]
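The per-comment coding shown in the table is recovered from this batch response by matching on the comment id. A minimal sketch in Python, assuming the raw response parses as valid JSON (the `codes` mapping is an illustrative helper, not part of the pipeline):

```python
import json

# Two records copied verbatim from the raw LLM response above.
raw = """[
  {"id":"ytc_UgxXnolJKKB9Ov6zuqJ4AaABAg","responsibility":"user",
   "reasoning":"virtue","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgyNPypgDp0y7R4ByCV4AaABAg","responsibility":"company",
   "reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]"""

# Index the batch by comment id for fast lookup.
codes = {rec["id"]: rec for rec in json.loads(raw)}

# Look up the coding for the comment displayed on this page.
rec = codes["ytc_UgxXnolJKKB9Ov6zuqJ4AaABAg"]
print(rec["responsibility"], rec["reasoning"], rec["policy"], rec["emotion"])
# → user virtue industry_self approval
```

In a real run the lookup would also need to handle ids missing from the model's output (e.g. with `codes.get(comment_id)`), since an LLM coder can drop or duplicate records.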