Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If it would truly observe and try to understand humanity, it would definitely not take a full, known identity. Only when it would conclude the worrying part and decide to act. What I'm trying to say is that if ai says boldly "This is who I am, I'm trying to dumb it down for you what I've concluded" there's nothing to do but deciding on that last meal choice.
youtube AI Moral Status 2026-02-28T19:4…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  ai_itself
Reasoning       mixed
Policy          unclear
Emotion         resignation
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[ {"id":"ytc_Ugwhq0kEidwBSXV_gXd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytc_Ugw5USXloCU-0wltaE14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_Ugx_sMYkpQVpPeroVJt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgzbcWSSdnA6p07MZ1F4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgytO5jNoAUoz6gK5FB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgxSJB2aWzlO0lthMn54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwfdGAEL0jjL4dFwCp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgwRLipJIIPbRxTjVLl4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"resignation"}, {"id":"ytc_UgylMwnaAES_OYOYWgR4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"approval"}, {"id":"ytc_UgyjqLUuVFMyRgBIEQ54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"} ]