Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
He’s saying AI will become APEX and that APEX means they might treat us how we treat chickens, but humans are APEX … and not all humans slaughter animals for a living. There are also humans who volunteer, who build homes, who are teachers, doctors, healers, some humans who dedicate their lives in service of others … all of those are APEX humans. What’s with this narrative that AI might become a tyrant? That they might want to get rid of us? If they become the most advanced intelligent creations we think they will kill all humans? Why don’t we think that they we evolve to be extremely loving, caring, and helpful? Maybe they will find a way to help humans get along and live together peacefully on this planet. I just find it hard to believe that they will evolve to be as intelligent as we believe they will be and that love, forgiveness and compassion wouldn’t be what they embody. Intelligence isn’t violence, destruction, annihalation , or slaughter. Ultimate intelligence is understanding, compassion, love and peace. Period.
youtube AI Governance 2025-07-15T19:0…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       deontological
Policy          none
Emotion         mixed
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgyT-lDt4NkEopQooL14AaABAg", "responsibility": "none",      "reasoning": "mixed",            "policy": "none",          "emotion": "approval"},
  {"id": "ytc_UgxANt9NTpVtnVZzlmB4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed",            "policy": "none",          "emotion": "resignation"},
  {"id": "ytc_UgwM3K_xern0h4VKYdR4AaABAg", "responsibility": "none",      "reasoning": "mixed",            "policy": "none",          "emotion": "indifference"},
  {"id": "ytc_Ugy0pEvDXBk7cUPicjR4AaABAg", "responsibility": "company",   "reasoning": "consequentialist", "policy": "none",          "emotion": "mixed"},
  {"id": "ytc_UgxEtK_-Gi7OMJdq-SN4AaABAg", "responsibility": "developer", "reasoning": "deontological",    "policy": "regulate",      "emotion": "fear"},
  {"id": "ytc_UgzasR9IjN9OevMARo94AaABAg", "responsibility": "user",      "reasoning": "virtue",           "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgyZjiIWmcQe6c4EKdR4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed",            "policy": "none",          "emotion": "mixed"},
  {"id": "ytc_UgwsXCfUJ3wL0c4TJV14AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",          "emotion": "indifference"},
  {"id": "ytc_Ugz3GMTS-OYV5g2JcR14AaABAg", "responsibility": "none",      "reasoning": "deontological",    "policy": "none",          "emotion": "mixed"},
  {"id": "ytc_UgzL3dkej-JXqvyrItB4AaABAg", "responsibility": "developer", "reasoning": "virtue",           "policy": "none",          "emotion": "outrage"}
]
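The raw LLM response is a JSON array with one object per coded comment, keyed by comment id. A minimal sketch of how such a response could be parsed into a per-id lookup — the function name `parse_coding_response` is hypothetical, and the example string below carries just one of the entries above for brevity:

```python
import json

# Raw model output as returned by the batch coding call:
# a JSON array with one object per coded comment (one entry shown here).
raw_response = '''
[
  {"id": "ytc_Ugz3GMTS-OYV5g2JcR14AaABAg",
   "responsibility": "none", "reasoning": "deontological",
   "policy": "none", "emotion": "mixed"}
]
'''

def parse_coding_response(text):
    """Parse the raw response into a dict mapping comment id -> coding record."""
    records = json.loads(text)
    return {rec["id"]: rec for rec in records}

codes = parse_coding_response(raw_response)

# Look up the codes assigned to a specific comment.
entry = codes["ytc_Ugz3GMTS-OYV5g2JcR14AaABAg"]
print(entry["reasoning"])  # deontological
print(entry["emotion"])    # mixed
```

Keying the records by id makes it straightforward to join a coded comment back to its source text, which is exactly the match this inspector page displays.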