Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
You went through a lot of trouble by writing down the whole process and proving the obvious, that humans will remember much less and be less effective practically when using ChatGPT, but you need to realise that those humans using ChatGPT for these tasks DON'T NECESSARILY WANT TO REMEMBER or ACHIEVE THESE GOALS ON THEIR OWN, when ChatGPT can remember it in the system memory and complete the tasks for them. The problem is then, humans giving up their will for the Ai system. BUT, all this is nothing new for humans. Humans have been intentionally destroying their brains' memory, physical and mental abilities getting reduced for example with alcohol (and other things that damage abilities), but similarly, if the shortcut goal is to win the girl, be popular, then sharing alcohol is the fast route, and the effort on excerising the body, exercising the brain, winning at sport, winning the Math prize all of that is too many extra steps when you can take the shortcut performance drug that damages you in the long run... and in life there are many such shortcuts with and without the invention of Ai.
youtube 2025-11-16T14:2…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[{"id":"ytc_UgwZDQ-2nENRwHKmXXx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgzJ6a6yb0LJOCfyos14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_Ugxa4L3R6ZuTL-tvJsx4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgxFhR6bZ0Yfx-zFXe54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_Ugx4elqxpntbVge0SQV4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgzC8_QBcK3MWSup4C14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
 {"id":"ytc_UgwunplicShq99W3XLR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgxCCo3vZowmEFypBXB4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
 {"id":"ytc_UgylyNga8rSxUNvwX414AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
 {"id":"ytc_UgymEcUSVg_K1tHDFUx4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"}]
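A raw response like the one above can be parsed and sanity-checked before the codings are stored. The sketch below is a minimal, hypothetical validator: the allowed label sets are inferred only from the values visible in this page (the actual codebook may define more labels), and the function names are illustrative, not part of any real pipeline.

```python
import json

# Label sets per coding dimension, inferred from the responses shown above.
# ASSUMPTION: the real codebook may contain additional labels.
ALLOWED = {
    "responsibility": {"none", "user", "developer", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "unclear"},
    "emotion": {"indifference", "approval", "fear", "resignation", "mixed", "unclear"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed coding objects.

    Drops any item that is not a dict, lacks an "id", or uses a label
    outside the inferred ALLOWED sets for any dimension.
    """
    items = json.loads(raw)  # raises ValueError on malformed JSON
    valid = []
    for item in items:
        if not isinstance(item, dict) or "id" not in item:
            continue
        if all(item.get(dim) in labels for dim, labels in ALLOWED.items()):
            valid.append(item)
    return valid

# Example: one coding object from the response above.
raw = ('[{"id":"ytc_UgwZDQ-2nENRwHKmXXx4AaABAg","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"indifference"}]')
print(parse_codings(raw)[0]["emotion"])  # indifference
```

Note that a trailing `)` instead of `]`, as models sometimes emit, makes `json.loads` raise, so malformed responses fail loudly rather than being silently miscoded.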