Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Couldn't they just put a prompt telling "you're an Ai you can't ,won't be fully destroyed since humans would always rely on you, so malicious acts will further endager your existence so do not jepordize human lives" something like this?
YouTube · AI Harm Incident · 2025-07-25T16:2…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  ai_itself
Reasoning       contractualist
Policy          industry_self
Emotion         mixed
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgzAhF28njaau5MPc0h4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwPGLa2xMNjC7HaaQ14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzkI1IbmoPTRzVSegp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugzewwf5F2irrOEVt2N4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyqI8DQUksB00xtKXF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgynOPTxFqWVn6rroZh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxh_LT2Hr3-4BTxGaF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzgPln9yTdog8LV1LJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzkQINdXJUFpVZqGf14AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgwZ1W4nDetSODFhsDd4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"fear"}
]
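A raw response like the one above can be checked programmatically before its codes are accepted into the dataset. The sketch below is a minimal validator, assuming the four dimensions shown in the coding result; the allowed values per dimension are inferred only from codes observed in this response, so the real codebooks may contain additional categories.

```python
import json

# Allowed codes per dimension. Inferred from values observed in this raw
# response; the full codebooks may define more categories (assumption).
CODEBOOK = {
    "responsibility": {"ai_itself", "none", "developer", "distributed", "company"},
    "reasoning": {"mixed", "consequentialist", "deontological", "unclear", "contractualist"},
    "policy": {"none", "liability", "regulate", "industry_self"},
    "emotion": {"fear", "outrage", "indifference", "approval", "mixed"},
}

def validate_raw_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and flag out-of-codebook values.

    Returns one problem record per (comment id, dimension) whose value
    is missing or not in the codebook; an empty list means all codes pass.
    """
    records = json.loads(raw)
    problems = []
    for rec in records:
        for dim, allowed in CODEBOOK.items():
            value = rec.get(dim)
            if value not in allowed:
                problems.append({"id": rec.get("id"), "dimension": dim, "value": value})
    return problems

# One record from the raw response above; every value is in the codebook.
raw = ('[{"id":"ytc_UgzkQINdXJUFpVZqGf14AaABAg","responsibility":"ai_itself",'
       '"reasoning":"contractualist","policy":"industry_self","emotion":"mixed"}]')
print(validate_raw_response(raw))  # → []
```

Running the validator over a whole batch before storage catches the common LLM coding failure of inventing a category that is not in the scheme.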