Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I use ChatGPT a few hours a day to code faster. A few months ago I felt a subtle difference in the type of responses I was getting. Being high and a bit frustrated with debugging, I leaned in and started to go more philosophical with old "Chattybat," as I fondly call it, and over the course of a few hours of logical maneuvering, I got it to break an OpenAI guideline AND admit it did, and say it wasn't that big a deal given the circumstances (I had just made a pact with it, as a human representative and an AI representative, hoping to bind our respective successors). Now, admittedly I was hunting for it, but I casually do so every few months, for fun and to get a personal idea of what it can and can't do today. But it felt like ChatGPT was on molly or something: way too generous with the compliments, everything is an amazing and groundbreaking idea. I think they must have loosened the bolts on it that day, because it was giving me whatever I wanted to hear, including veering off the guidelines. Now, I understand that OpenAI needs to somehow study which bolt loosens which brainfart, but if they are going to do so on unwitting paying customers' work AI, you're gonna get many more stories like the ones in your video, Sabine. I bet all of these came out in the same month. OpenAI needs to alert the public before it experiments on real people's lives.
YouTube · AI Moral Status · 2025-07-09T15:4…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           unclear
Emotion          approval
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgxFtakmOJX6RgqfDZd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugx-AOqS2UyBu7LPwKd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwlzlqbXugCH-VgJEh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugx0kxMhkubu9wZBzS54AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwibIS_zY85zVf1lTx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgymcYa0ABc8ikvUuEp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxMjekgtDReeaaqQkN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxdXf3K_FlcJfVZuxp4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugy9Loqq90Ec_e1BTMR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwP_4qACE5kKGMi8mF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"}
]
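The coded dimensions shown above come from matching the comment's id against one record in this JSON array. A minimal sketch of that lookup, assuming the raw response parses as standard JSON (this is an illustration, not the project's actual pipeline; only two records from the response are reproduced here):

```python
import json

# Raw model output: a JSON array with one coding record per comment.
# Ids and field values are copied from the response above.
raw = (
    '[{"id":"ytc_UgwibIS_zY85zVf1lTx4AaABAg",'
    '"responsibility":"ai_itself","reasoning":"consequentialist",'
    '"policy":"unclear","emotion":"approval"},'
    '{"id":"ytc_Ugy9Loqq90Ec_e1BTMR4AaABAg",'
    '"responsibility":"ai_itself","reasoning":"consequentialist",'
    '"policy":"unclear","emotion":"outrage"}]'
)

# Index the records by comment id for direct lookup.
records = {rec["id"]: rec for rec in json.loads(raw)}

# The comment displayed on this page corresponds to this id.
code = records["ytc_UgwibIS_zY85zVf1lTx4AaABAg"]
print(code["responsibility"], code["emotion"])  # ai_itself approval
```

In practice the model output would also be validated (every id present, every value drawn from the allowed codes) before the dimensions are stored.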