Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This reminds me of videos on YouTube of talking green parrots. They look like they can think for themselves and talk just like a human but in reality you have to realize they are just trained to say the right things when the trainer says the right ' trigger words ' It probably took a while to ' program ' them to say the right things and go off of each other's ' trigger words ' to make it appear as if they are arguing and interrupting each other. I listened to a program recently where robots did not like Elon Musk or something of that matter. The host of the program was laughing about it and I thought he shouldn't laugh because although I don't believe they can ever make an artificial intelligence that really thinks like a human brain I believe they indeed can program one to recognize faces and voices and even smells and make determinations on whether a person is on the ' ok ' list or on the shit list and that is bad enough without being able to really ' think ' for itself.
youtube AI Moral Status 2022-12-14T04:3…
Coding Result
| Dimension | Value |
| --- | --- |
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
  {"id":"ytc_Ugzlrh20Y8BIafy1Tbd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugz-aSb8VJWoeu52EoF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz55I0SVhgOXZK8_Dp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxRWkFomi7wMVUJXt54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyfE1W78halmwaGIYh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwkKs0UvLPpmroMAAJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugy7pD6bNEESmJSjG254AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugz23nYaSTu00wo7uh94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugx1b1andeuY4p4EIUZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzJ7S0JGumfKRDRDLd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]
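Since the raw response is a JSON array keyed by comment id, a small parser can recover the coding record for any one comment. The sketch below is a minimal, hypothetical helper (the `lookup` function and the truncated `raw` sample are illustrative, not part of the tool), assuming the response parses as shown above:

```python
import json

# A shortened sample of the raw LLM response: a JSON array of coding
# records, one object per comment, each keyed by its comment id.
raw = '''[
  {"id": "ytc_Ugz-aSb8VJWoeu52EoF4AaABAg",
   "responsibility": "developer",
   "reasoning": "consequentialist",
   "policy": "unclear",
   "emotion": "indifference"},
  {"id": "ytc_Ugz55I0SVhgOXZK8_Dp4AaABAg",
   "responsibility": "ai_itself",
   "reasoning": "consequentialist",
   "policy": "none",
   "emotion": "fear"}
]'''

def lookup(raw_response: str, comment_id: str):
    """Return the coding record for one comment id, or None if absent.

    Hypothetical helper: scans the parsed array for a matching "id".
    """
    for record in json.loads(raw_response):
        if record.get("id") == comment_id:
            return record
    return None

record = lookup(raw, "ytc_Ugz-aSb8VJWoeu52EoF4AaABAg")
print(record["emotion"])  # -> indifference
```

Linear scan is fine at this scale; for many lookups, building a `{record["id"]: record}` dict once would be the idiomatic choice.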