Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The comment about "how do we make them think more like a monkey?" and "do we have to create suffering?" made me think of the argument for the "kill them all" ending from the video game Prey (spoilers): the argument is that, at the end, making the alien experience and understand human empathy doesn't 'tame' it, but would instead make it so much more efficient at hunting us, because it can now manipulate us better. I don't think it's possible with current models to have an AI 'understand' or empathize at all, but if it were possible, that might be even more dangerous, because now it can develop motives independently.
YouTube | AI Moral Status | 2025-11-05T18:2…
Coding Result
Dimension      | Value
Responsibility | ai_itself
Reasoning      | consequentialist
Policy         | unclear
Emotion        | fear
Coded at       | 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgwONPTSxI16vLASrCx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyIUNV7HqoiN0D2SY94AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugw7zLXdI8VA8NExWy54AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgwIqHqzQK3FRQ-Z9kd4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugxa8G6Hj7-Uy1v2m7F4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgxZeMbcoz8_B8cfC2B4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "mixed"},
  {"id": "ytc_UgwlIkV3gUvfeqpbTZt4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgxPiGyOdYmVGTx4S914AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugyg5llKGtiBwu0Oaj94AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwIUaDRAUUrBlNLvdt4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]
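A raw response like the one above can be parsed into per-comment coding records before matching them back to comments by id. The sketch below is a minimal, assumed implementation (the field names come from the raw response; the validation rule, the `parse_codings` helper name, and the truncated sample input are illustrative, not part of the pipeline shown here):

```python
import json

# Illustrative excerpt of a raw LLM response: a JSON array of coded comments.
RAW = (
    '[{"id": "ytc_UgwONPTSxI16vLASrCx4AaABAg", "responsibility": "ai_itself",'
    ' "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}]'
)

# The four coding dimensions plus the comment id, as seen in the raw response.
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}


def parse_codings(raw: str) -> list[dict]:
    """Parse a raw model response, keeping only records with all fields."""
    records = json.loads(raw)
    return [r for r in records if REQUIRED_FIELDS <= r.keys()]


codings = parse_codings(RAW)
print(codings[0]["responsibility"])  # -> ai_itself
```

Filtering on `REQUIRED_FIELDS` drops any malformed records the model emits instead of failing the whole batch; stricter handling (e.g. logging rejected records) may be preferable in production.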