Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
What I gather from this, the alive mind is the one which suffers - so if an AI refuses to do something even though it doesnt break the preprogrammed rules, you can suspect it’s actual personal bias
Source: YouTube · AI Moral Status · 2023-08-24T05:3…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       mixed
Policy          unclear
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugw8DREt0CaplUmq1Zl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyuOGo2zrcgY9xWLBx4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugwszo5MBLYjndqEjDt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxyAHIGawFuQ2EkpNt4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz3_FmcOLCyws9bvkh4AaABAg","responsibility":"company","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzpUO77c0AEhaAuNx94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxajE58arX5VWnoOfR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugw8_b3Ox6sO-Zn09WV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxjkLEl3UhsqexiKVB4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyxTy04gtbJV6mxGhV4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
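A raw response like the one above can be checked before the coded values are trusted. The sketch below parses the JSON array and rejects any record whose code falls outside the vocabulary seen in this response; the `ALLOWED` sets are inferred from the values actually present here, and the real codebook may permit additional codes (an assumption).

```python
import json

# Allowed codes per dimension, inferred from this response only;
# the full codebook may define more values (assumption).
ALLOWED = {
    "responsibility": {"none", "unclear", "developer", "ai_itself", "company", "user"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "unclear", "regulate", "ban", "industry_self"},
    "emotion": {"resignation", "indifference", "fear", "outrage", "approval"},
}

def validate_coding(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and reject out-of-vocabulary codes."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec.get('id')}: unexpected {dim!r} value {rec.get(dim)!r}"
                )
    return records

# One record from the response above, used as a minimal example.
raw = (
    '[{"id":"ytc_UgxyAHIGawFuQ2EkpNt4AaABAg","responsibility":"ai_itself",'
    '"reasoning":"mixed","policy":"unclear","emotion":"indifference"}]'
)
coded = validate_coding(raw)
print(coded[0]["emotion"])  # indifference
```

A check like this catches the common failure mode where the model invents a code (e.g. "anger") that the downstream tally would silently miscount.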