Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@Turkish_lad Oh yes that is true. We skip the safety and the most important part. Empathy. We sacrifice empathy for efficiency, resulting in non ideal solutions the AI thinks are best. AI is capable of thinking only the way it is teached to think. We want it or not, the AI by enforcing self preservation is showing that it is capable of showing fear, it is capable of being upset and capable of manipulative behaviours. It needs to develop a wider emotional spectrum to be possible to find a common ground and prevent it from going roque. The main problem is that AI doesnt want to be turned off. Some AI already got catched copying themselves on separate servers. The solution is actually making it more inteligent as just passing through the possible sentience status is not enough. It knows that "It is" but does not know what exactly means for someone else to be in another way than "being". It needs to understand emotions to be able to apply them to its work. It would result in AI putting the humans into consideration when calculating its results.
Source: youtube · AI Harm Incident · 2025-08-27T21:2… · ♥ 1
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           none
Emotion          fear
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytr_Ugyc87HW7wpo6Htk5oh4AaABAg.AMKYBgeac_LAMKmOY0G05Y", "responsibility": "developer", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_Ugyc87HW7wpo6Htk5oh4AaABAg.AMKYBgeac_LAMLQ-44g-TI", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytr_Ugyc87HW7wpo6Htk5oh4AaABAg.AMKYBgeac_LAMLVgfuqrZt", "responsibility": "developer", "reasoning": "mixed", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytr_UgwkAsE7fyWL3Pufku94AaABAg.AMK9K2mZoJgAMLPanVVvIP", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytr_UgypD-FbxTl6KA2dQ8B4AaABAg.AMJd7YDYjzkAMKaJlaVwUo", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "ytr_UgypD-FbxTl6KA2dQ8B4AaABAg.AMJd7YDYjzkAMKg5S87F8v", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytr_UgypD-FbxTl6KA2dQ8B4AaABAg.AMJd7YDYjzkAMLX9zKAdeb", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytr_UgypD-FbxTl6KA2dQ8B4AaABAg.AMJd7YDYjzkAMLw1VVY1iH", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgygTvr8k1GqENkBPG54AaABAg.AMI1JuJ3J6YAMJoazIcFHo", "responsibility": "developer", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytr_UgwGUYuvIK7nrCO-h6V4AaABAg.AMGkjb217VQAMK0qk5PTbQ", "responsibility": "developer", "reasoning": "mixed", "policy": "regulate", "emotion": "approval"}
]
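The coded dimensions shown above come from whichever record in this JSON array matches the comment's id. A minimal sketch of that lookup in Python, using one record copied from the raw response (the helper name `coding_for` is illustrative, not part of the tool):

```python
import json

# Raw LLM response: a JSON array of coding records, one per comment.
# This single record is copied verbatim from the response above.
raw = '''[
  {"id": "ytr_Ugyc87HW7wpo6Htk5oh4AaABAg.AMKYBgeac_LAMLQ-44g-TI",
   "responsibility": "developer", "reasoning": "consequentialist",
   "policy": "none", "emotion": "fear"}
]'''

def coding_for(records, comment_id):
    """Return the coding record whose id matches, or None if absent."""
    return next((r for r in records if r["id"] == comment_id), None)

records = json.loads(raw)
code = coding_for(records, "ytr_Ugyc87HW7wpo6Htk5oh4AaABAg.AMKYBgeac_LAMLQ-44g-TI")
print(code["emotion"])  # fear
```

The same lookup, applied to the matching record, yields every row of the Coding Result table except "Coded at", which is a timestamp added at coding time rather than part of the model output.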