Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@lucidmushrooms8270 It's not a she! It's a machine. It knows nothing! It can only recall information because it was programmed to do it. It only knows what it was programmed to do. It'll never be human. Magicians make things disappear and/or float all the time. But do they really? It's an illusion. That's all this is. Just because it's in a package that looks human, doesn't make it human or alive or smart or capable of feeling. Why do people feel the need to anthropomorphize everything? You all think artificial intelligence is nearly magic, bit of can only do things it was programmed to do. There are plenty of videos online that shows AI to be nothing more than an illusion. But wrap it in a pretty package and suddenly people want to protect its feelings as is it actually has them. But I'm betting you are a liberal because whenever arguing inane points, a liberal is incapable of refraining from personal attacks. Good to see you understood the lessons so completely. Do yourself a favor and type "the illusion of AI" into Google and read a few of the articles in the first few pages. If you aren't into reading, do a YouTube search for the same thing.
youtube AI Moral Status 2023-09-17T22:3…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       deontological
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytr_UgzPOGIbKhXXjKpGS_N4AaABAg.9ulq89HRhV39umEGWxH3NH","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytr_Ugxvkc-VR3LfIV50cWB4AaABAg.9ulpO_Y8Vtt9ump4-Vdccn","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytr_Ugx1YFz1y020ea4o8dl4AaABAg.9uliYG-HlGO9um40esdTtv","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"indifference"},
  {"id":"ytr_UgzMQChBE6FDfGtmnKl4AaABAg.9ulK-p_JXDZ9ullyHjPmsv","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytr_UgzfB2r9kalqD2Olnex4AaABAg.9ulGpXeTU0Y9ulcBS7_AZI","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgxHhWD8ThgoWLqM-Vx4AaABAg.9ukVv84HsHn9ulKtPTmnKB","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytr_Ugznv9b2ANtGRO8W6ON4AaABAg.9ujOs5KDnL-9uk_-E9qI1M","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytr_Ugznv9b2ANtGRO8W6ON4AaABAg.9ujOs5KDnL-9ukdC4cfK4u","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"mixed"},
  {"id":"ytr_Ugznv9b2ANtGRO8W6ON4AaABAg.9ujOs5KDnL-9ukhZlez02E","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytr_UgxZ79lvv_KYiNs2ksJ4AaABAg.9ufuCLPtOvt9umKqnxXtBc","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
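A response like the one above can be parsed and sanity-checked before the per-comment codes are stored. The sketch below is a minimal, hypothetical example: the helper name `parse_coding_response` and the sets of allowed values are assumptions inferred only from the values visible in this output, not a definitive schema.

```python
import json

# Allowed values per coding dimension. These sets are ASSUMPTIONS
# reconstructed from values observed in the raw response above; the
# real codebook may permit additional labels.
ALLOWED = {
    "responsibility": {"none", "company", "developer", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "ban", "liability", "industry_self", "unclear"},
    "emotion": {"fear", "indifference", "approval", "mixed", "unclear"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response (a JSON array of records) and
    raise if any record carries an out-of-schema value."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
    return records

# Usage with a single (truncated, illustrative) record:
raw = ('[{"id":"ytr_x","responsibility":"none","reasoning":"deontological",'
       '"policy":"none","emotion":"indifference"}]')
codes = parse_coding_response(raw)
print(codes[0]["emotion"])  # indifference
```

Validating against an explicit allow-list catches the most common failure mode of JSON-producing LLM calls: a structurally valid array whose labels silently drift outside the codebook.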