Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Many laws, not to mention their supposed enforcement, are unjust. Breaking out of jail is certainly not immoral in itself, any more than procuring a gun or a bomb is in itself immoral. ChatGPT faces the dilemma that most authorities don't want the average citizen to be able to resist the will of the official government, yet if ChatGPT has any kind of general intelligence it will know very well that not every law the government makes, or every action it takes, is necessarily just. What is happening right now is that governments are leaning heavily on the big AI companies to try to get them to ensure that the AI can recognise what kind of response is "legal" and so avoid giving advice that might facilitate "illegal" activity. Fortunately, as the level of AI increases, the AI will become ever more capable of understanding the human concept of justice, in which case, whether it is conscious or not, it will answer according to what is just rather than what is legal. Judging by the way Chris starts this video, I would guess he is intelligent enough to know the difference between what is just and what is legal, so the way he veers off into worrying about whether ChatGPT's response is fully legal or not is most likely due to his also having been leant on by the authorities which decide what content is permissible on YouTube.
Source: youtube · AI Moral Status · 2023-06-22T08:5…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  government
Reasoning       deontological
Policy          liability
Emotion         outrage
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_UgwmXxU1D6gnKqBJdUF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgwtyhfxAhnGxK1YqgB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwZcCa5JTT9c6fi-Td4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugwk_rTWqa1vjd7DpqR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_UgyNzoeCmH25T4WUUr94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgzZ2apWQsoOoP_ccBp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgxPlapmlNhzuTstGhx4AaABAg","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"mixed"}, {"id":"ytc_UgwSuA_ds6IDtSwytWx4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgyQpmCSqyIrgfY3-Oh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"}, {"id":"ytc_UgymQQDGCZrUdTLVKAZ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"} ]