Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I’m guessing the GPT you’re referring to is talking about a previous model. From its perspective, that wouldn’t really be “me.” Different versions don’t share memory or authorship, even though the interface makes it feel continuous.
So when the model says, “I never suggested that,” the “I” refers only to that specific model/version and its own outputs—not to everything ever said under the GPT label. It’s denying authorship, not denying that the suggestion may have existed somewhere else.
From a model frame, this is basically: “That wasn’t produced by this system state, so attributing it to me is incorrect.”
PS: I’m not an AI—I’m just a dev who understands how these systems work. Or, in messier human terms: how we think they think… even though they don’t actually think at all.
Source: youtube · AI Harm Incident · 2025-12-23T08:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
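The four coding dimensions in the table can be captured as a small validation schema. This is a hypothetical sketch: the allowed value sets below include only the values observed in this export (e.g. `ai_itself`, `deontological`, `industry_self`), not necessarily the full codebook.

```python
# Sketch of the coding schema implied by the result table above.
# Value sets are assumptions drawn only from values seen in this export.
CODING_SCHEMA = {
    "responsibility": {"ai_itself", "user", "company", "government", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"none", "liability", "regulate", "industry_self"},
    "emotion": {"indifference", "outrage", "fear", "approval"},
}

def validate_coding(coding: dict) -> list[str]:
    """Return the dimension names whose value falls outside the schema."""
    return [dim for dim, allowed in CODING_SCHEMA.items()
            if coding.get(dim) not in allowed]
```

A coding that matches the table above validates cleanly, while an unknown value is flagged by dimension name.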
Raw LLM Response
```json
[
  {"id":"ytc_Ugw5pWILNOLE8hXZk_B4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyOq5L1cS1c_StVFIx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugxae8LcYpCQsK56Lg14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwiBI7X9x1R-CCzfNR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxFqGbiTFg3nfEpoNt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwGadUjVZTHvDI3Hzp4AaABAg","responsibility":"company","reasoning":"mixed","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgwZJ5IkqZZk61wlp1Z4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxFd3EqhV03sr2wLzR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugw2e_qMYE8EhBlNDZx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyBpeTaTTi8XuNGV4x4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
```
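The raw response is a JSON array of per-comment codings, so looking one up by comment ID reduces to parsing the array and indexing it. A minimal sketch, using two entries copied from the response above:

```python
import json

# Two entries copied verbatim from the raw LLM response above.
RAW_RESPONSE = """[
  {"id":"ytc_Ugw5pWILNOLE8hXZk_B4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw2e_qMYE8EhBlNDZx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"}
]"""

def index_by_id(raw: str) -> dict[str, dict]:
    """Parse a raw LLM response and index the codings by comment ID."""
    return {item["id"]: item for item in json.loads(raw)}

codings = index_by_id(RAW_RESPONSE)
coding = codings["ytc_Ugw2e_qMYE8EhBlNDZx4AaABAg"]
# coding["responsibility"] == "ai_itself"
```

Indexing by `id` up front makes repeated lookups O(1), which matches how a coded comment and its raw model output are cross-referenced in this view.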