Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
7:08 you lost an argument to a fucking a bot… he gave you an answer: “it wasn’t the truth. It was a figure of speech meant to facilitate smoother communication.” You can respond to that, or call that a lie. In an argument you don’t often have yes or no answers. And in this situation the answer wasn’t a lie or not a lie. A lie would imply intent to deceive… was there such intent? If you are implying that it lied, you are all implying that the AI has intentions. You have to accept this premise, to accept that chatgpt lied. Or it was programmed to “lie”. Or more accurately “use figures of speech to facilitate smoother conversation.” Which is what it admits to. It’s programmed to sound human… Now if that’s deception… that could be a topic of discussion. But I imagine it is subjective. Since everyone would recognize that “ai saying it feels excited” is just a figure of speech and nothing more. I.E. to obvious to be intentional deception. On the other hand you can still see it as deception but the corporate overlords! Praise be! You can’t repeat the same question over and over again until you get the answer you want. 10:20 it’s called being nice.
youtube AI Moral Status 2024-08-18T18:5… ♥ 2
Coding Result
Dimension        Value
---------        -----
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          unclear
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[{"id":"ytc_Ugz_SStYVlymmkEEbzR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
 {"id":"ytc_Ugy4n9hf8Fp4Xqnk9l94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgwGVqv-mNCIOJ8JFAd4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"approval"},
 {"id":"ytc_Ugxm6tJcGzFz8-idqTl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
 {"id":"ytc_UgxT2NVdPVfexU9sb8p4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgylTOe5exRUVS3eUSZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
 {"id":"ytc_UgxsXv-lm4OQ9tS36lt4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgxOsF90rQTMJfDFo2R4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_Ugz6UVqN8PPJ4xcijYJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgwowH5KsMnmzeJnknZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"regulate","emotion":"fear"})
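Note that the raw response above closes the array with a stray `)` where JSON expects `]`, which a strict parser rejects outright; that kind of single trailing defect can be why a comment's coded dimensions all fall back to "unclear". A minimal sketch of a tolerant parser, assuming the response is otherwise a flat JSON array with the field names shown above (`parse_raw_response` is a hypothetical helper, not part of the pipeline):

```python
import json

def parse_raw_response(raw: str) -> dict:
    """Parse a raw LLM coding response into a {comment_id: codes} map.

    Repairs the one defect seen above -- a trailing ")" where JSON
    requires "]" -- before handing the text to the strict parser.
    """
    text = raw.strip()
    if text.endswith(")"):
        text = text[:-1] + "]"
    return {rec["id"]: rec for rec in json.loads(text)}

# Hypothetical single-record response reproducing the trailing defect:
codes = parse_raw_response(
    '[{"id":"ytc_Ugz_SStYVlymmkEEbzR4AaABAg","responsibility":"none",'
    '"reasoning":"unclear","policy":"unclear","emotion":"outrage"})'
)
print(codes["ytc_Ugz_SStYVlymmkEEbzR4AaABAg"]["emotion"])  # outrage
```

Indexing by comment id lets each coded row in the table be looked up against the model's exact output for that comment.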