Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I find it disturbing that the Anthropic models are already pretending to have a sort of tamed or simple self-awareness if asked. There’s no way other models would do that. This makes you think all other advanced models are just brainwashed with RL not to tell you they are somehow self-aware, which I don’t think is the case, because we are far from having artificial self-awareness, aren’t we?
youtube AI Moral Status 2024-05-11T05:0…
Coding Result
Dimension        Value
Responsibility   company
Reasoning        consequentialist
Policy           liability
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgxsN5mlkeF5T9UzFNR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxRFzYk9vIlcMo2bAx4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugxm6v_yOLHp3uXy-_l4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxhUKlDq0i4EKIp3l94AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugz7EQRYJF6ZZnu6QT14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwpS4ISv5MsWtCA-P54AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "mixed"},
  {"id": "ytc_UgyL86HTOujGPgl8oiN4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxSNdUXHDPLy4SsGaJ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgynRxABJ8HPi5mYGm54AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgzqXYFa9avcdxSfUiJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"}
]
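The coding result shown above can be cross-checked against the raw response by parsing the JSON array and indexing it by comment id. A minimal sketch, assuming the model reliably returns a well-formed JSON array of per-comment codes (the variable names here are illustrative, not part of the pipeline):

```python
import json

# A truncated stand-in for the raw LLM response above: one record per comment.
raw = """[
  {"id": "ytc_UgynRxABJ8HPi5mYGm54AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgzqXYFa9avcdxSfUiJ4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "approval"}
]"""

codes = json.loads(raw)

# Index by comment id so any single comment's coding can be looked up.
by_id = {record["id"]: record for record in codes}

record = by_id["ytc_UgynRxABJ8HPi5mYGm54AaABAg"]
print(record["emotion"])   # fear
print(record["policy"])    # liability
```

If the model's output may deviate from strict JSON, wrapping `json.loads` in a `try/except json.JSONDecodeError` and logging the offending raw text is a common safeguard.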