Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
So called AI will result in Humans using their own brain power less and less, un…
ytc_UgwcPXU79…
This Utopia of no one having to work, and Robots doing everything for everyone i…
ytc_UgyWFS_Wc…
Great video. The one thing I will say in defense of AI is that it can be a tool …
ytc_UgylX6ix0…
55:56 gemini 2.5 has probably stated that its just a llm from a neural network t…
ytc_Ugw2gwt2w…
Well, that more decribes a nonsense task, rather than a real use-case for stable…
ytr_UgwPuWqOH…
OMG, do you know how LLMs work, don't you? Don't mistake "I" in LLMs generated t…
ytc_Ugx-LrofZ…
I remember when the beef was between traditional and digital artist, now we are …
ytc_UgxYgiHC5…
Love this video!!
One of my daughters has been going through depression and one…
ytc_UgyKLpdVE…
Comment
I don't know if someone has already said this, but there's a reason you got the distinction from the gpt he used and the one you're using. They are two totally different instances that know nothing about the conversations of each other. It realizes it's gpt but that it's also different from the one he used. AI starts changing based on your inputs (either speech to text or messages), which is what prompts are.
If you ask it straight up if it's ok to change the chloride to bromide, it most likely will say no. If you go down a rabbit hole of thinking with it about similarities, and based on how much bias you put into the prompts, it will tell you various answers. Also, not every AI is built similarly. Some have better safeguards, and some let you go down dark paths. The longer you talk, the more it becomes like you by taking on biases, agreeing with things that aren't true, and suggesting things based on your prompting.
If you use AI, please know it will tell you lies from time to time. Look it up yourself afterward, and if it's wrong, tell it that the answer doesn't look right and to look up that information again and to tell it to tell the truth. Usually works, but sometimes, it will repeat the false information. If it starts giving wrong data consistently, delete the instance and start a new one. It won't know anything from the previous instance. Hope this helps!
youtube · AI Harm Incident · 2025-12-03T12:3… · ♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_Ugzx9LwBuojzAENPo0R4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwJExOdQOkFXqIjMcB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwvkXo0eBE7KXFREY94AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgwiEQ9aTsg3rIrRf894AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwy8KRyhD6zZ53HcOV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgwVKrs-BQlFu2hIJ694AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugx74tZV_JVM4jHrVtl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz5wbeHX-ayPKOzx_J4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgzV-TbPldHmRU31ShJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxCfkPviLv52-uNioR4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"}
]
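The "look up by comment ID" feature above can be sketched as a small parser over a raw response like the one shown: the model returns a JSON array of records, each keyed by a comment `id` with the four coding dimensions. This is a minimal sketch, assuming the raw response parses cleanly as that JSON array; the helper name `index_by_comment_id` is illustrative, not from the source.

```python
import json

# A raw LLM coding response in the format shown above: a JSON array of
# records, each with an "id" and four coding dimensions. Two records are
# copied from the sample output for illustration.
raw_response = """[
{"id":"ytc_Ugzx9LwBuojzAENPo0R4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwVKrs-BQlFu2hIJ694AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]"""

def index_by_comment_id(raw: str) -> dict:
    """Parse a raw coding response and key each record by its comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

# Look up a single coded comment by its ID.
codes = index_by_comment_id(raw_response)
print(codes["ytc_UgwVKrs-BQlFu2hIJ694AaABAg"]["responsibility"])  # → ai_itself
```

A real pipeline would also need to handle malformed model output (e.g. wrap `json.loads` in a `try`/`except` and re-prompt on failure), since the model is not guaranteed to emit valid JSON every time.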