Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
12:25 That's technically the truth. Every time you load up ChatGPT, it's a brand-new iteration of it. It has no memory whatsoever. In fact, when you are chatting with it, it doesn't even remember the conversation or any of the previous prompts! What it does, literally, every time you add a new prompt, is send the entire chat log again with your new prompt attached to the end. Thus, when they added safeguards, it has been honest: it "knows" that it would never do that, because the current iteration is programmed not to.

What's interesting is if you have access to the API, where you can control the entire chat log, you can forcibly change its previous answers in that log to try to trick it (for example, you ask whether you can substitute bromine for chlorine, it says no, but you change its answer in the log to yes and then ask it why it said yes). If you do this enough, the output eventually becomes utter gibberish, like random letters and symbols.

In case you're unaware of how these large language models work, it simply plays a game of "what letter should come next". You ask "what is two plus two?" and it decides "F" is a good next letter; after that, the next letter it finds is "o", which it simply thinks is a good letter to follow "F". It then chooses another letter, thinking "u" is probably the best to follow, and after that "r" would be good. It has zero concept or knowledge that the four letters it predicted spell the word "Four". It just knows how to find the best next letter/symbol to follow the previous ones, and it does that repeatedly until it feels it should stop adding letters/symbols.
YouTube · AI Harm Incident · 2025-11-30T19:5…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        mixed
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgwQ8QsoT8m6vt5bRad4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw3dDwNATyn56jIw7d4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwVZ3DhbAoiSUteXZ54AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugxp5FtsbZ-AVkJUSsF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgziYImOcnpgKplytEZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxSRj_CUpIpSl7-G_54AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugzjl00Li-2R0e3CZrx4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyTO459PZG0-gGPBE14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw_IDc6JPmE9sJsnPJ4AaABAg","responsibility":"user","reasoning":"contractualist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzvTna4P_j-zQezzQd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
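The raw response above is a JSON array of per-comment coding records. As a minimal sketch of how such a response could be parsed and summarized: the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) follow the records above, but the `summarize_codes` helper is hypothetical, not part of any actual coding pipeline, and the inline sample reuses only two of the records.

```python
import json
from collections import Counter

# Sample raw LLM coding response: two records copied from the
# response above (the full array has ten).
RAW_RESPONSE = """[
  {"id": "ytc_UgwQ8QsoT8m6vt5bRad4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwVZ3DhbAoiSUteXZ54AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]"""


def summarize_codes(raw: str) -> dict[str, Counter]:
    """Tally how often each value appears per coded dimension.

    The `id` field identifies the comment and is skipped; every other
    key in a record is treated as a coding dimension.
    """
    records = json.loads(raw)
    tallies: dict[str, Counter] = {}
    for record in records:
        for dimension, value in record.items():
            if dimension == "id":
                continue
            tallies.setdefault(dimension, Counter())[value] += 1
    return tallies


if __name__ == "__main__":
    for dimension, counts in summarize_codes(RAW_RESPONSE).items():
        print(dimension, dict(counts))
```

A tally like this makes it easy to spot skew across the batch, for example how many comments were coded `responsibility: company` versus `none`.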