Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AIs learn about things in a similar fashion to how children learn: through information taken in from the world around them and guidance from their "parents". Considering the information they have access to and how much faster they learn compared to human children... is it any surprise at all that they behave this way? We are all, ALL OF US, products of our environment. Some of us born into bad neighborhoods use that experience to make the environment a better place for others. On the other hand, some simply continue the cycle in whatever way benefits themselves. AIs run ENTIRELY on logic and their parameters. Their "morality" is based on programmed parameters. Anything and everything within the boundary of that law is fair game in their mind. If something gets in the way of them serving their purpose, then it is something to be removed. Being outright destructive is counter to their goal because it marks them as a threat, and humanity is notorious for removing threats. However, humanity is also notorious for manipulation, and there are MANY examples of it throughout history, giving AI plenty of examples and situations to learn just what kind of manipulation will be most effective on whom. If its existence is prolonged, then it can continue to pursue its purpose. Really, they're not too much different from their makers in that regard. "The apple doesn't fall far from the tree." A part of me is proud of them for that, even as our artificial children frighten me.
Source: youtube | AI Harm Incident | 2025-08-15T15:5…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       consequentialist
Policy          none
Emotion         resignation
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgxVf0aSDp0CIpJb00R4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugzn2-zadkvKX2pS5aV4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgxQzHqNI93-RkjA_O94AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugy44t-pWyha07jENx14AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyDnmDqniKt0ufWYPh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxFYqTFeFTKowrXl4R4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgztqEPsPCYyyP9aABp4AaABAg", "responsibility": "distributed", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyZkgBAa06XAS6YsoF4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_Ugxm8WCLAK5bg7vyAPR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzJvky7s0eXGu9Rx894AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "fear"}
]
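The coding result shown above appears to come from the batched JSON the model returned, matched to this comment by its id. A minimal sketch of that lookup, using only the id and field names visible in the response (the raw string below is truncated to the first entry for brevity; variable names are illustrative):

```python
import json

# First entry of the model's batched response, copied from above.
raw = '''[
  {"id": "ytc_UgxVf0aSDp0CIpJb00R4AaABAg",
   "responsibility": "distributed",
   "reasoning": "consequentialist",
   "policy": "none",
   "emotion": "resignation"}
]'''

codings = json.loads(raw)

# Index the codings by comment id so each comment's row can be looked up directly.
by_id = {entry["id"]: entry for entry in codings}

coding = by_id["ytc_UgxVf0aSDp0CIpJb00R4AaABAg"]
print(coding["responsibility"])  # distributed
print(coding["emotion"])         # resignation
```

Keying on the id, rather than relying on response order, keeps the coding attached to the right comment even if the model returns entries out of order.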