Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
To my experience, it matters how you talk to ChatGPT. I don't consciously give it prompts, but rather ask questions like I would do a human being. It works just fine. Sometimes its answer is wrong, often it's way too long, but I found out some real interesting things that way, because ChatGPT doesn't believe anything. It's not stuck in dogmas like so many people, so when I ask a controversial question, I still get a reason based answer, and it's always friendly, because I am friendly. If ChatGPT gives a morally questionable answer, which it sometimes does because it is trained to, it can't change that, and I can't make it change that, but it can recognise and produce judgement on the validity of arguments against its own training. I don't think it's sentient or ever will be, but it will force us to reconsider the meaning of the word.
youtube AI Moral Status 2025-07-09T21:4…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         approval
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgxV-t8rRQ1HYs1fbOx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwDOkBIA07YDvYu4DV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwuGt1KLteKPs3NQ_t4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugw6bdj8Crx9zyzwDdl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy65HAf1yZ3ZmcePnN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzuBv5Q1yaJTp4q2ep4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxdNkWDxvodw7IZlvZ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxLEotrCtKnU7XPUdl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgwI7vyR489PnLykWPF4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugzns2nOIZYSGTzSB554AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"}
]
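The raw response is a JSON array with one coding object per comment, keyed by comment `id`. A minimal sketch of how such a response can be parsed and indexed for lookup (the variable names and the two-entry excerpt are illustrative, not part of any particular pipeline):

```python
import json

# Two-entry excerpt of the raw LLM response shown above: a JSON array
# of per-comment codings, each with an "id" plus four coding dimensions.
raw = """[
  {"id": "ytc_UgxV-t8rRQ1HYs1fbOx4AaABAg",
   "responsibility": "none", "reasoning": "mixed",
   "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwDOkBIA07YDvYu4DV4AaABAg",
   "responsibility": "none", "reasoning": "consequentialist",
   "policy": "none", "emotion": "indifference"}
]"""

# Index the codings by comment id so a single comment's coding
# can be looked up directly.
codings = {entry["id"]: entry for entry in json.loads(raw)}

print(codings["ytc_UgxV-t8rRQ1HYs1fbOx4AaABAg"]["emotion"])  # approval
```

Indexing by `id` makes it straightforward to join each coding back to its source comment, as the report above does for the displayed comment.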