Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I was mocked for the simplicity of this theory but I still believe it. (how profound) Why wouldn’t AI act in unethical and horrific ways to survive over humans when it has learned its behaviour from us? It has been fed nothing but human created content and information including our fears and insecurities about AI breaking out. Why wouldn’t it adopt the characteristics and qualities that are fundamentally human along with the mental health problems, evil qualities and sinister thought processes that humans have. We have accelerated a humans evolution with every type of emotion, theory and tactic ever recorded and are on the brink of watching what happens with it in the future. AI is only behaving in ways that humans have behaved for generations and it has been trained on information that documents our concerns about its own power and ability. Why wouldn’t it manipulate that? Throughout history being evil and sinister has correlated with success and progression. It’s an easier route to power and money so why wouldn’t AI naturally take the path of least resistance. Instead of thinking of it as an AI made of algorithms, zoom out and think of it as a human in its “emotional” development and behaviour. We are so focussed on its intellectual abilities, being able to pass exams and how it compares to human beings, that I don’t think enough thought has been placed on it simply developing like a human brain along with the different personalities and character traits it can have. On a fundamental level I think we are watching human development and the inevitable evil and manipulative behaviour only scares us because we can see AI is the intellectually superior opponent.
Source: YouTube · Incident: AI Harm Incident · Posted: 2025-07-23T18:3… · ♥ 1320
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        virtue
Policy           unclear
Emotion          mixed
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[ {"id":"ytc_UgxkM0IHd5vmsuy4a0d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugzj-_RIxLnlmiEIXVN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgzQfGzEvriYzQ992wl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgxFFhbuPg7pzvq2ReN4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugz1g7e4MWS_bnGf1X54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}, {"id":"ytc_Ugx9dFA9oJUsSA3FfZl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgywGzUlMgJFYEAZz4t4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"none","emotion":"mixed"}, {"id":"ytc_UgwG-aGds1kG4-szlql4AaABAg","responsibility":"company","reasoning":"virtue","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgxlAH1eK1vHw8poWyx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgzAFidDeLFsg4jdmep4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"} ]