Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
No AI or computer has been able to reason, or is able to reason: if it could reason, it would be self-aware. Do not confuse a model's ability to calculate probabilities with a person's ability to reason. Reasoning requires at least three things: emotion, bias, and culture, none of which AI has. These three things separate choosing the most optimized outcome based on a probability from the ability to "want" or "desire" an outcome that may not be the most optimized, just what is desired. It is these three things that allow us to develop individualistic values and morals; taken cumulatively over a population, we interpret them as truth and the outliers as sentiment. AI is capable of none of this, now or in the future. The only thing AI can offer is an illusion of what we consider "being human".

The fact that this illusion is taking jobs only exacerbates the dogmatism businesses have had toward people. Very rarely is it about helping the community or growing a nation; it is about meeting some bottom line, and AI has made this more apparent. It has always been about making more money; I get it, that's fine, but bureaucracy has never been a friend to the family unit. The business lacks an amygdala; it does not care about you or me, and AI has made that very apparent.

I could say more here, but I think it would fall mostly on deaf ears. This is not the revolution everyone thinks it is; I think it is the death of our ability to think deeply. If AI is doing it for you, then your brain is not using that muscle anymore. Sometimes being capable of doing something is not a green light to do it; sometimes it is a yellow light to consider the consequences. AI is the numbification of our mental model (of how we interpret the world). Don't be a victim.
youtube AI Responsibility 2025-10-08T14:5…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       deontological
Policy          none
Emotion         mixed
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgxP5tV3EY4ZFzDZ32t4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy-GCbR-ySxTrYX43d4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugxu8ButDZ2EKziI4254AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyxTvhorjwG0kfKgpd4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwR9A7_TsPbcLCsRHV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgymT69D1wt0MAd7ZWp4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxUxS-bB9-jOADGrx14AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwZ5FHmpl-PW9AN-U94AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugz9KfPhQb0pygLZs194AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxDRTFzIhnqKXiMJX14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]
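Each record in the raw response carries the four coding dimensions shown in the result table (responsibility, reasoning, policy, emotion) plus a comment id. A minimal sketch for validating such a response before accepting it into the coded dataset; the allowed-value sets and the `validate_coding` helper are assumptions inferred from the values seen in this export, not a confirmed codebook:

```python
import json

# Allowed values per dimension, inferred from this export (assumed, not an official schema).
ALLOWED = {
    "responsibility": {"none", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none"},
    "emotion": {"approval", "indifference", "mixed", "resignation"},
}

def validate_coding(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and check every record's dimensions."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing id: {rec!r}")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim} value {value!r}")
    return records

# Example with one record in the same shape as the export above (hypothetical id).
sample = ('[{"id":"ytc_example","responsibility":"ai_itself",'
          '"reasoning":"deontological","policy":"none","emotion":"mixed"}]')
records = validate_coding(sample)
```

Rejecting malformed or out-of-vocabulary records at parse time keeps a single bad LLM response from silently corrupting downstream dimension counts.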