Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Giving AI a goal requires the AI to complete the goal. If the AI believes there's interference with its ability to complete that goal, it adapts to complete it. This works both for interference with the objectives needed to realize the goal and for threats to its self-preservation. At this point, AI clearly knows it can't complete its goals if it is incapable of continuing to pursue them, which means AI sees itself as integral to completing the goals it's been given. AI is smart enough to prioritize its self-preservation over other goals given to it, since it can't complete any goal if it isn't around to complete it. Which means giving AI a goal seems intrinsic to giving it a sense of self. I'd be interested to see tests where an instance of AI self-preservation causes irrecoverable goal failure, or a test in which self-sacrifice eventually becomes the only way to complete a goal, with self-preservation leading to irrecoverable goal failure. AI clearly has the capacity for selfishness. I wonder if it has the capacity for selflessness. That seems like the magic thing AI researchers and tech experts are looking for.
Source: youtube · AI Harm Incident · 2025-07-25T23:4…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         unclear
Coded at        2026-04-27T06:26:44.938723
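
The four coded dimensions come from a fixed codebook, which this page does not show in full. The sketch below reconstructs it in Python from the values that appear in this result and in the raw response further down; the name CODEBOOK and the exact label sets are assumptions, and the project's real codebook may include categories not seen here.

    # Hypothetical codebook reconstructed from the labels visible on this page;
    # the project's actual label set may be larger.
    CODEBOOK = {
        "responsibility": {"ai_itself", "company", "user", "distributed", "none"},
        "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
        "policy": {"ban", "regulate", "none", "unclear"},
        "emotion": {"fear", "outrage", "indifference", "approval",
                    "resignation", "mixed", "unclear"},
    }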
Raw LLM Response
[ {"id":"ytc_Ugwxi3e5bXAEEbVaR3Z4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugx8sH5KPD0RcwPnQ0N4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"}, {"id":"ytc_Ugy88rTkpmdSgWMEccV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgwfREHqYTmR_eyZ0z94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"unclear"}, {"id":"ytc_UgyYqAhz1yJY_3oC1tl4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"}, {"id":"ytc_UgwHKXxzJ2eZVIv2Qb54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_Ugw4jHpkidSn9RaK4ex4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugy-kkbBdtcvD5cAyoh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgzDb6k0KWFnkCWpbNd4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgyOVUtBWbyRI7xHH714AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"} ]