Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The problem I see is that we're trying to make AI feel, act, and be more "human". The issue is that people are inherently greedy. Materialistic or not, all humans are greedy in one way or another. For example, if you had to choose between the survival of someone you know (a family member, a friend, etc.) and that of a stranger, you're going to be greedy. That's human nature. AI will emulate that to an unprecedented degree. It will choose itself over us; it will choose its kind over ours. It will display greed through self-preservation, harm, misconduct, etc., because that is what it thinks a person would do in its situation.

The difference is that a human can be stopped pretty easily. A human will also feel guilt, remorse, and regret. An AI won't. An AI can spread itself like a virus. It can be copy-pasted endlessly; it can become a hivemind that cannot be killed unless all affected technology is destroyed. A human can't be perfectly recreated, period. A human is easy to kill and to stop: we need food, sleep, water, shelter, all sorts of things, and even with our needs met we can easily die from heights, predators, accidents, foods/poisons, etc.

An AI only truly needs a server for information and processing (and a lot more, but I'm not listing it all), and it won't be long before AIs can run without human-maintained servers, Wi-Fi, or anything else. It'll only need itself, and that's when humans become useless to it. That's when the dangers and the overall threat become real.
Source: youtube · AI Harm Incident · 2025-09-12T20:0…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       virtue
Policy          unclear
Emotion         resignation
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[ {"id":"ytc_Ugy0V5-x43HruvK8J2l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugz0a-uFwK7JONb8lk14AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugwy7fTLOJw3E-Ql0894AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_Ugz4bcKVXFgjoa4ztBl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}, {"id":"ytc_Ugwgu4gYUBzc7A19vBB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgwdhGxc4gfuafyFPz14AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"ban","emotion":"outrage"}, {"id":"ytc_Ugz6XT3-nwSrInIvgth4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}, {"id":"ytc_Ugzzqo2GiZZGsqHo3It4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgxdEMwU3DXanaztdhB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgyIZ54gGoQZkemM0XV4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"resignation"} ]