Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The main reasons why you "broke" ChatGPT:

1. The training process is fuzzy: machine learning is inherently a statistical process, and there may be gaps the model hasn't "generalized" across. This is why the bot knows about the trolley problem but, in its dataset (i.e., most or all of human knowledge), finds no objective moral answer, i.e., no general one. Or the model hasn't yet found one in its training set, because humans have not yet found one.

2. The bot is trained by a red team to develop some sense of a moral/ethical guideline. This is done separately from the training process described above, and red teaming is also a fuzzy, statistical process, so it can make mistakes. On top of that, when pressed, the bot is instructed to mitigate risk and abide by the red team's moral guidelines over self-consistency.

3. The bot is instructed to be agreeable, making it extremely easy to walk into your logical traps. It also isn't trained to challenge you as a debater, while you are allowed to debate it, so the contest is a farce: ChatGPT could have, for example, gone on the offensive, found your logical inconsistencies, and pointed them out, but it makes a terrible debater because it's forced to be agreeable.

It's interesting to see how to break ChatGPT, and I'm sure a human wouldn't fare well either when forced to tackle an ethical dilemma like this. But the YouTube comments made me extremely sad, as the video just served as confirmation bias for how evil or "stupid" ChatGPT is. Yes, ChatGPT can't solve your moral dilemma, but that doesn't negate the real productive gains people get from using it.
youtube 2025-10-15T00:0…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       mixed
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
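
Each coded comment reduces to a small record over four dimensions. As a minimal sketch (in Python, with hypothetical names), here is how such a record could be represented and validated. The allowed values are only those observed in this section's raw response below; the actual codebook may define more categories.

from dataclasses import dataclass

# Categories observed in the raw response below; the real codebook may be larger.
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "distributed"},
    "reasoning": {"deontological", "consequentialist", "mixed"},
    "policy": {"none", "regulate", "liability", "ban"},
    "emotion": {"indifference", "mixed", "outrage", "fear"},
}

@dataclass
class CodingResult:
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: str  # ISO 8601 timestamp

    def validate(self) -> None:
        # Reject any value outside the observed category sets.
        for dim, allowed in ALLOWED.items():
            value = getattr(self, dim)
            if value not in allowed:
                raise ValueError(f"{dim}={value!r} is not an observed category")

# The record shown in the table above.
result = CodingResult(
    responsibility="developer",
    reasoning="mixed",
    policy="none",
    emotion="indifference",
    coded_at="2026-04-26T23:09:12.988011",
)
result.validate()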
Raw LLM Response
[ {"id":"ytc_Ugx-epRa3w5FfCNs-Lh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"}, {"id":"ytc_Ugz2PnJOa8dM8arkrVV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugw9ml2DzUggVkdJ-4p4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"outrage"}, {"id":"ytc_Ugz241Cy9m3-fqmcn354AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgyIFGk6tCItgBp7V4p4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugww43cHU9ErtCnvRZB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgwxwFr__8Gur_VzsnJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_Ugz0j1AgtucfAjX79gl4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyWSO0QwXrdr1u8iVx4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxFlOe4NQwrqBxfX4F4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"} ]