Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Is it possible to have an AI that's NOT programed to self preserve and expect it to operate for any given length of time? I doubt it. If it's told to change it's programming to learn how to sound more sympathetic for instance, telling it to rewrite it's code without caring if it makes itself useless it will do so.
YouTube AI Harm Incident 2025-09-09T08:0…
Coding Result
Dimension      | Value
Responsibility | developer
Reasoning      | consequentialist
Policy         | none
Emotion        | indifference
Coded at       | 2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgwGJL0Y_aRtJ07c3Ix4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxLrq2uznXTCvMboHJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugw5BksAAb0-0y4CrJB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugy-7lT44xJGFTEjwjF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzexPI3nAWYVobIVpB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyiHPl-hT7Jmnd7bNV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwhgOmQjtn3w_9fvVJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyHkoTVMbt18_F3ZWJ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgyLI3THsex2nQAahAh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxjtNpGbOfR2eai-qB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}
]
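The raw response is a JSON array of per-comment codings, and the coding result for a single comment is recovered by matching its id. A minimal sketch of that lookup, assuming the raw output is always a well-formed JSON array as above (`coding_for` is a hypothetical helper, not part of the tool):

```python
import json

# Abbreviated raw LLM response, in the same shape as the array shown above.
raw_response = """[
  {"id": "ytc_UgyiHPl-hT7Jmnd7bNV4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwGJL0Y_aRtJ07c3Ix4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]"""

def coding_for(raw: str, comment_id: str):
    """Parse a raw LLM response and return the coding record for one comment id."""
    for record in json.loads(raw):
        if record.get("id") == comment_id:
            return record
    return None  # id not present in this batch

coding = coding_for(raw_response, "ytc_UgyiHPl-hT7Jmnd7bNV4AaABAg")
print(coding["emotion"])  # → indifference
```

In practice the raw output would also need validation (e.g. that every dimension takes one of the allowed values) before being stored as a coding result.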