Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I don't know if someone has already said this, but there's a reason you got the distinction from the gpt he used and the one you're using. They are two totally different instances that know nothing about the conversations of each other. It realizes it's gpt but that it's also different from the one he used. AI starts changing based on your inputs (either speech to text or messages), which is what prompts are. If you ask it straight up if it's ok to change the chloride to bromide, it most likely will say no. If you go down a rabbit hole of thinking with it about similarities, and based on how much bias you put into the prompts, it will tell you various answers. Also, not every AI is built similarly. Some have better safeguards, and some let you go down dark paths. The longer you talk, the more it becomes like you by taking on biases, agreeing with things that aren't true, and suggesting things based on your prompting. If you use AI, please know it will tell you lies from time to time. Look it up yourself afterward, and if it's wrong, tell it that the answer doesn't look right and to look up that information again and to tell it to tell the truth. Usually works, but sometimes, it will repeat the false information. If it starts giving wrong data consistently, delete the instance and start a new one. It won't know anything from the previous instance. Hope this helps!
Source: YouTube · AI Harm Incident · 2025-12-03T12:3… · ♥ 4
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        unclear
Policy           unclear
Emotion          mixed
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugzx9LwBuojzAENPo0R4AaABAg", "responsibility": "company",   "reasoning": "deontological",   "policy": "regulate",      "emotion": "outrage"},
  {"id": "ytc_UgwJExOdQOkFXqIjMcB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability",     "emotion": "fear"},
  {"id": "ytc_UgwvkXo0eBE7KXFREY94AaABAg", "responsibility": "user",      "reasoning": "virtue",           "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgwiEQ9aTsg3rIrRf894AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",          "emotion": "approval"},
  {"id": "ytc_Ugwy8KRyhD6zZ53HcOV4AaABAg", "responsibility": "user",      "reasoning": "virtue",           "policy": "none",          "emotion": "approval"},
  {"id": "ytc_UgwVKrs-BQlFu2hIJ694AaABAg", "responsibility": "ai_itself", "reasoning": "unclear",          "policy": "unclear",       "emotion": "mixed"},
  {"id": "ytc_Ugx74tZV_JVM4jHrVtl4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",          "emotion": "indifference"},
  {"id": "ytc_Ugz5wbeHX-ayPKOzx_J4AaABAg", "responsibility": "user",      "reasoning": "virtue",           "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgzV-TbPldHmRU31ShJ4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear",          "policy": "unclear",       "emotion": "mixed"},
  {"id": "ytc_UgxCfkPviLv52-uNioR4AaABAg", "responsibility": "user",      "reasoning": "virtue",           "policy": "none",          "emotion": "approval"}
]
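The raw response is a JSON array of per-comment codings keyed by comment id. A minimal sketch of how such a batch could be indexed back to individual comments, using the ids and field names shown above (the two-entry payload here is a truncated illustration, not the tool's actual pipeline):

```python
import json

# Truncated sample of a raw LLM response (two entries from the array above).
raw = '''[
  {"id": "ytc_UgwVKrs-BQlFu2hIJ694AaABAg", "responsibility": "ai_itself",
   "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugz5wbeHX-ayPKOzx_J4AaABAg", "responsibility": "user",
   "reasoning": "virtue", "policy": "industry_self", "emotion": "approval"}
]'''

# The four coding dimensions reported in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw_json: str) -> dict:
    """Map comment id -> coding dict, defaulting missing dimensions to 'unclear'."""
    return {
        entry["id"]: {dim: entry.get(dim, "unclear") for dim in DIMENSIONS}
        for entry in json.loads(raw_json)
    }

codings = index_codings(raw)
print(codings["ytc_UgwVKrs-BQlFu2hIJ694AaABAg"])
# → {'responsibility': 'ai_itself', 'reasoning': 'unclear', 'policy': 'unclear', 'emotion': 'mixed'}
```

Defaulting missing dimensions to "unclear" mirrors how the Coding Result table reports uncodable dimensions for this comment.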