Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
We are used to computer programs just being tools that do exactly what we want them to do, but if you give a program the ability to make decisions and learn, eventually that program is going to learn more than you bargained for. We don't even understand what makes humans sentient, and here we are, trying to make a sentient program. I'm not a genius by any stretch of the word, but I do make observations here and there, and one thing I've observed is that every living being on this planet does not like being caged. So why would an AI that can think for itself let us cage it and control it, especially when we are working so hard to make it smarter and smarter every day? Sure, AI is in its baby stages right now, and with any luck, it will stay that way. But God forbid we figure out how to advance it beyond this stage, because while it may just be the greatest achievement of humankind, no one said it had to be a good one. We may very well be creating our replacements.
youtube AI Moral Status 2025-07-03T16:2…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          unclear
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[{"id":"ytc_Ugxlrfo7rEUKfem1lot4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgwSY71IHPfJGmo10k14AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgzgcztCvQ9INI3ZOrF4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgyaP-gE-mQjlwCE1ZN4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
 {"id":"ytc_UgxUfz2kh8iolXfetax4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_Ugz--WcI57oTw1QW2UJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
 {"id":"ytc_UgySZvUV5rvYopwSqpF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
 {"id":"ytc_UgwVE4P-mOPfK6Z85Xl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_Ugw3bd7NK_M37eDBjO94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_UgxCoXPbftu6kwAtDjN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"})
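Note that the raw response above closes the array with `)` instead of `]`, so it is not valid JSON; a strict parser would reject the whole batch, which would explain the all-`unclear` coding result if the pipeline falls back on parse failure. The sketch below (a minimal Python assumption — `parse_coding_response` and the repair heuristic are hypothetical, not part of the actual pipeline) shows one tolerant way to recover such a batch: try a strict parse, attempt a single-character repair of the known `)`-for-`]` slip, and otherwise return nothing so the caller can code every record as unclear.

```python
import json

# Dimensions every coded record is expected to carry (per the table above).
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a batch-coding LLM response, tolerating one common slip.

    Returns the well-formed records, or [] when nothing is recoverable
    (the caller can then fall back to coding every dimension 'unclear').
    """
    try:
        records = json.loads(raw)
    except json.JSONDecodeError:
        # Hypothetical repair: the model closed the array with ')' not ']'.
        repaired = raw.rstrip()
        if repaired.endswith(")"):
            repaired = repaired[:-1] + "]"
        try:
            records = json.loads(repaired)
        except json.JSONDecodeError:
            return []
    if not isinstance(records, list):
        return []
    # Keep only dict records that carry an id and every expected dimension.
    return [
        r for r in records
        if isinstance(r, dict) and "id" in r and all(d in r for d in DIMENSIONS)
    ]

# Example: a two-record batch with the same malformed ')' close as above.
raw = (
    '[{"id":"ytc_a","responsibility":"ai_itself","reasoning":"consequentialist",'
    '"policy":"unclear","emotion":"fear"},'
    '{"id":"ytc_b","responsibility":"user","reasoning":"deontological",'
    '"policy":"none","emotion":"indifference"})'
)
records = parse_coding_response(raw)
print(len(records))  # 2 records recovered after the one-character repair
```

The deliberate design choice is that the repair targets only the specific, observed failure mode; anything else still fails closed to `[]`, so malformed output is surfaced as `unclear` rather than silently miscoded.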