Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
They change because they don’t have memory in the way people assume. They’re pretrained, and every response is just the model running on patterns it learned before and not updating itself live like humans do. This is one of the main limitations of current AI. Look up “catastrophic forgetting.” The part I’m referring to is at 11:50.

The guy in the video also should’ve asked more follow-up questions. The model probably interpreted it as a normal question and didn’t get specific because it didn’t expect the user to actually try it. If the conversation had gone deeper, the model might have flagged the danger (but I don’t know the exact chat they had, so that’s speculation).

I do research in machine learning and I’m studying cognitive science, and catastrophic forgetting is one of the big challenges we’re actively working on. Transformers aren’t the final answer. They scale well, but that scaling burns massive money and energy, and they still can’t integrate new information in real time. Everything has to be baked in during training.

We don’t have AI that can continuously take in sensory input like sound, vision, touch, etc. and slot it into its existing knowledge without breaking older information the way humans do. I can watch this video, process the sounds around me, type this comment, and my brain automatically filters out the noise because it already understands it. If something genuinely new happened, I’d notice instantly. Current AI can’t do this without risking overwriting or destabilizing what it already learned.

People tend to overestimate today’s models because they don’t realize this one limitation. Humans can learn new things without forgetting what matters; AI can’t, at least not yet.
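The "catastrophic forgetting" the commenter describes can be shown with a toy example. This is a hypothetical sketch, not anything from the source: a single scalar weight is trained by gradient descent on "task A" (fit y = 2x), then on "task B" (fit y = -2x), and the second round of training overwrites the solution to the first.

```python
# Toy illustration of catastrophic forgetting with one scalar weight.
# Task A is fitting y = 2x; task B is fitting y = -2x (both assumed
# for illustration). Sequential training on B destroys performance on A.

def train(w, data, lr=0.1, steps=200):
    """Plain gradient descent on squared error, one example at a time."""
    for _ in range(steps):
        for x, y in data:
            grad = 2 * (w * x - y) * x  # d/dw of (w*x - y)^2
            w -= lr * grad
    return w

def loss(w, data):
    """Mean squared error of the current weight on a dataset."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

task_a = [(1.0, 2.0), (2.0, 4.0)]    # samples of y = 2x
task_b = [(1.0, -2.0), (2.0, -4.0)]  # samples of y = -2x

w = train(0.0, task_a)           # w converges near 2: task A learned
loss_a_before = loss(w, task_a)  # near zero
w = train(w, task_b)             # w converges near -2: B overwrites A
loss_a_after = loss(w, task_a)   # large: task A has been "forgotten"
print(loss_a_before, loss_a_after)
```

The single weight has no room to hold both tasks, so the new gradient signal simply overwrites the old solution; continual-learning techniques such as replay or regularizing toward old weights are attempts to avoid exactly this.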
Source: YouTube · AI Harm Incident · 2025-11-28T00:0… · ♥ 2
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgwDFHjoU989dFOb0ed4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugx9k3RE-5PL1CnlKup4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyRW5T9X7LJ6NmwbOR4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxlZtiOQ4Bm4SXsHSR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugw38vzi_xiqE75rP2B4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyLivpJDNf9u6L2IQF4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwPfl3CTngBmlmiYrN4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyYsCkzD13nzFVW0TJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgztOAv-GMcHhoRBt7Z4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgzKNjioIQLjzifPrr94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"}
]
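The coding-result table above is derived from this raw batch response by matching the comment's id. A minimal sketch of that step, assuming only what the JSON itself shows (the `id` key and the four coding dimensions); variable names and the printed layout are illustrative:

```python
# Sketch: parse a raw batch coding response and look up one comment's
# coded dimensions. Field names match the JSON shown in the record;
# everything else here is an assumption for illustration.
import json

raw = '''[
  {"id": "ytc_UgwDFHjoU989dFOb0ed4AaABAg",
   "responsibility": "none", "reasoning": "unclear",
   "policy": "unclear", "emotion": "indifference"}
]'''

# Index the batch by comment id for O(1) lookup.
codings = {entry["id"]: entry for entry in json.loads(raw)}

row = codings["ytc_UgwDFHjoU989dFOb0ed4AaABAg"]
for dim in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dim.capitalize():15}{row[dim]}")
```

Indexing by `id` first makes the lookup robust to the model returning the ten entries in a different order than the comments were submitted.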