Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
i dont really get why AI would have self preservation as a goal, i get why it cares about its programmed objective more than human life but it should also place that objective above its own existence unless someone specifically programmed it to care about itself more
youtube AI Harm Incident 2025-09-12T01:2…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          liability
Emotion         mixed
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgxcvrzYv_RcnMcza-B4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzaJ_QQ59GUZbwGhBt4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw1qB5TmwrPBpHvel14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgwlaqbA4_bVS1TijI54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy8fgadci6WSP5q5_V4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgytPLkH5nB99nMITpZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyN_hHSDhzW51wN1md4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwpC4140eVsrwFU5Wt4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwKw6_qDQthxUH1BKt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzqoj7qAB2vqZSEZS94AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"mixed"}
]
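A raw response like the one above has to be parsed and validated before the per-comment codes can be stored. The sketch below shows one way to do that, assuming the allowed values per dimension are exactly those seen in this dump (the real codebook may contain more categories); the function name and error handling are illustrative, not part of the actual pipeline.

```python
import json

# Allowed codes per dimension, inferred from the codes visible in this
# response; the project's real codebook may define additional values.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"ban", "regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "fear", "mixed", "indifference", "unclear"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: codes}.

    Raises ValueError if a dimension is missing or a code is not in
    the allowed set, so malformed model output is caught early.
    """
    coded = {}
    for item in json.loads(raw):
        codes = {}
        for dim, allowed in ALLOWED.items():
            value = item.get(dim)
            if value not in allowed:
                raise ValueError(f"{item.get('id')}: invalid {dim} code {value!r}")
            codes[dim] = value
        coded[item["id"]] = codes
    return coded

# Example with one entry from the response above:
raw = ('[{"id":"ytc_Ugw1qB5TmwrPBpHvel14AaABAg",'
       '"responsibility":"developer","reasoning":"consequentialist",'
       '"policy":"liability","emotion":"mixed"}]')
result = parse_coding_response(raw)
print(result["ytc_Ugw1qB5TmwrPBpHvel14AaABAg"]["policy"])  # liability
```

Validating against a closed set like this is what makes a "Coding Result" row (such as the developer/consequentialist/liability/mixed row above) safe to display: anything outside the codebook fails loudly instead of silently entering the dataset.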