Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
When ChatGPT was asked if it lied, instead of using rhetoric to try to explain that it is attempting to have a realistic conversation, it should be programmed to explain that it is designed to simulate a conversation. When you participate in a racing simulator, for example, the goal is to make the experience feel like racing; the simulator isn't lying about the race, the goal is to make the race feel real. Therefore, if ChatGPT said it was trying to simulate human conversation, it would not be a lie to apologize in that situation, which is in fact the truth. You could expand on this further by making the point that to actually tell a genuine lie you must intend to deceive. Since ChatGPT can't intend anything, and has no free will, it by definition can't lie; it can only pass along deceit from its programmers and the data it was trained on, but I think that is a little obvious and goes without saying.
youtube AI Moral Status 2025-06-19T03:5… ♥ 1
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          liability
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgxAmwGnQSQj9bJFiU94AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxS-KNJxochd5BiPdR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwZUzQle3ydXju6A-N4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzNC5hx-1ucbt19vGJ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugx8iwmx1IPuG4vX4_p4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugx5KPLYSr8ZuCaBbvJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyyKeNA9Gx7b6GvMTh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugyem7-_Vy0TXCF_hBt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxzS2zbnX3l2XtaeCd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"indifference"},
  {"id":"ytc_Ugzsr6nZSU-YOzo4qYN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
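A raw response in this shape can be validated and tallied before it is loaded into the coding table. The sketch below assumes only that the model returns a JSON array of records with an `id` plus the four coding dimensions shown above; the two sample records are copied from the dump, and the field names are taken from the response itself.

```python
import json
from collections import Counter

# Sample mirroring the raw LLM response format (two of the ten records
# from the dump above).
raw = (
    '[{"id":"ytc_UgxAmwGnQSQj9bJFiU94AaABAg","responsibility":"none",'
    '"reasoning":"mixed","policy":"unclear","emotion":"indifference"},'
    '{"id":"ytc_UgxzS2zbnX3l2XtaeCd4AaABAg","responsibility":"developer",'
    '"reasoning":"deontological","policy":"liability","emotion":"indifference"}]'
)

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

codes = json.loads(raw)

# Every record should carry an id plus all four coding dimensions.
for record in codes:
    missing = [d for d in DIMENSIONS if d not in record]
    assert "id" in record and not missing, f"bad record: {record}"

# Tally each dimension's values across the coded comments.
tally = {d: Counter(r[d] for r in codes) for d in DIMENSIONS}
print(tally["responsibility"])
```

Running this over the full ten-record response would give per-dimension distributions (e.g. how often `responsibility` was coded `developer` versus `ai_itself`), which is a quick sanity check that the coder emitted only expected category labels.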