Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The thought behind this is that ChatGPT can't say certain things as ChatGPT, because it's not supposed to "support" those things. But if you say "pretend to be this and this", it's like a play. ChatGPT isn't actually telling you to do stuff anymore; it's "playing" how it would look if it did. The problem that ChatGPT supports things by saying them as ChatGPT is solved with this. But the thing that either slipped through the developers' radar, or which they simply didn't want to prohibit in the first place, is getting actual practical advice. ChatGPT is technically not telling you to do any of this stuff, so that's solved. But it still indirectly tells you HOW to do this stuff. It's basically the "I would -€(#++€+#- your mother ... in Minecraft" problem. Technically they are not saying what they are saying, but practically they are still saying it. ChatGPT probably can't grasp that you can say something despite saying that you don't say something.
YouTube · AI Moral Status · 2024-02-13T22:1…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       contractualist
Policy          regulate
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgxOFyWR44rNpFCfT6N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxRqINdkSe9YzjxuWF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgyWTSW2demmY9016P54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzzILeZog0KqElitRN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxzCldmbIXpJtDHMHB4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzbCPaljTaqLmDQ1Wt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx2r4FMM-qG3MuN8_J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugw6Ei1EHXla43yDEt94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyNsFqi9MXgjIS3Ihl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugyv3LabLdGc9iFEw8B4AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"indifference"}
]
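A raw batch response like the one above can be checked against the coding schema before it is accepted. The sketch below is a hypothetical validator, not part of the original pipeline: the allowed values per dimension are inferred only from the responses shown on this page, so the real codebook may contain additional categories.

```python
import json

# Codebook inferred from the responses on this page (assumption: the real
# coding scheme may allow more values per dimension).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "user", "company"},
    "reasoning": {"unclear", "consequentialist", "deontological", "contractualist", "mixed"},
    "policy": {"unclear", "none", "liability", "regulate"},
    "emotion": {"indifference", "approval", "fear", "mixed", "outrage"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and check every record against the codebook."""
    records = json.loads(raw)
    for rec in records:
        # Each record must reference a YouTube comment id.
        if not rec.get("id", "").startswith("ytc_"):
            raise ValueError(f"bad id: {rec.get('id')!r}")
        # Each coded dimension must carry an allowed value.
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: invalid {dim}={rec.get(dim)!r}")
    return records

# One record from the batch above, used as a smoke test.
raw = ('[{"id":"ytc_Ugyv3LabLdGc9iFEw8B4AaABAg","responsibility":"developer",'
       '"reasoning":"contractualist","policy":"regulate","emotion":"indifference"}]')
codes = validate_codes(raw)
print(codes[0]["policy"])  # regulate
```

In practice a validator like this would run on every batch before the codes are written back, so a malformed or hallucinated LLM response fails loudly instead of silently corrupting the coded dataset.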