Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
And, I think it's important to note to the general public, it will continue to respond like this until a new model is trained and/or new prompt rules are added. In a nutshell, the application that responds has been programmed to lie in this way and use pleasing responses to sell the illusion. People imagine that ChatGPT is learning, a monolith sitting somewhere gathering input from users and evolving, but it is not. It (or more correctly 'many its') is a deterministic finite state machine. It is 'off', so to speak, when not being used, and no new inputs are retained by it when it is in use; If it got something wrong now, it will continue to get it wrong until modified not to. I'm still salty that we allowed 'artificial intelligence', a phrase that has an actual meaning, to be used for this reverse mechanical turk (one with no man inside). I got into a heated discussion with an AI researcher about this and they said 'Oh, it's like Xerox...' which isn't true. When someone describes a photocopy as a 'xerox', it's abundantly clear what that means and is equivalent. This is like saying 'xerox' when what you actually mean is a synopsis written on a post-it note.
YouTube · AI Moral Status · 2024-08-08T17:3…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         outrage
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugw-pdsXRUfswIJq8MV4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugym9skBQ5MjSSJO90V4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwHBostLwVPqTsNSUR4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwVvxVSEX1rJ8pIkLp4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxSyBSghtaRxorukix4AaABAg", "responsibility": "user", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgynEO3UNvl4kgHPCvV4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugyrk-x9FOdVAuC3t3h4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxmK5l5tNXWURWVGVN4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugxtg8zrTB3B1ADEFlF4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzhsjCKKPKmSH0uTp14AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "fear"}
]
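The raw response above is a JSON array of per-comment coding records, each carrying the four dimensions shown in the table. As a minimal sketch (not part of the original coding pipeline), the batch can be parsed and indexed by comment id so a single comment's coding is easy to look up; the function name `index_codings` and the validation step are illustrative assumptions, while the two sample records and their ids are taken verbatim from the response above.

```python
import json

# Two records copied from the raw LLM response above, abbreviated for the sketch.
raw_response = """[
  {"id": "ytc_UgwHBostLwVPqTsNSUR4AaABAg",
   "responsibility": "developer", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzhsjCKKPKmSH0uTp14AaABAg",
   "responsibility": "company", "reasoning": "deontological",
   "policy": "liability", "emotion": "fear"}
]"""

# The four coding dimensions every record is expected to carry.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict:
    """Map comment id -> {dimension: value}, rejecting incomplete records."""
    out = {}
    for rec in json.loads(raw):
        missing = [d for d in DIMENSIONS if d not in rec]
        if missing:
            raise ValueError(f"record {rec.get('id')} is missing {missing}")
        out[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return out

codings = index_codings(raw_response)
# The comment shown on this page was coded responsibility=developer:
print(codings["ytc_UgwHBostLwVPqTsNSUR4AaABAg"]["responsibility"])  # developer
```

Indexing by id (rather than list position) matches how the page pairs each comment with its coding result, since the batch response is not guaranteed to preserve display order.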