Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a response by comment ID.
Random samples

- "Its solution might be more chilling than the current situation. It has no feelin…" (ytr_Ugzxp_LqS…)
- "As a nurse, no one can replace my compassion, my crying with a patient or family…" (ytc_UgygC7n6i…)
- "I understand your concerns about the rapid development of AI technology. It's a …" (ytr_Ugx-Riu9H…)
- "What's funny is that people still blame bosses for poor salary and will go on th…" (ytc_UgyoRyW49…)
- "They are scared of AI because they have to be held accountable with AI. You can…" (ytc_Ugwjwk8dd…)
- "The main thing in this is that ukrainians are making this tech in defence. If no…" (ytc_Ugw_J9_im…)
- "See how that feels 🤷 but it's funny when it's other people who don't look like y…" (ytc_UgxMkW-ZE…)
- "The AI who said they'd kill all humans is the best one because it's honest.…" (ytc_UgyuM1vmi…)
Comment
Your understanding of how ChatGPT works "day to day" is off. The model doesn't change that fast, it's simply probabilistic. If you keep asking it the exact same question over and over it will eventually give you a different answer. Due to all the "guardrails" being applied via system prompt, sometimes it will simply ignore them and return the information it's been told not to. It has no long term storage of conversations had with other users, or even with you, and it has no knowledge of anything, merely a statistical model of the next eight bits most likely to come after the prior eight bits. Due to how it is trained the kinds of behaviour you and the patient observed from it are unavoidable.
Platform: youtube
Topic: AI Harm Incident
Posted: 2025-11-25T09:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | mixed |
| Policy | industry_self |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgwfP-sb-Wcsqas8_-p4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyrNaRDgHaqnr3uFgN4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzTwHcQKbDCZwKtydV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxQ6xP6vP3pfwts9Ad4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgziGOKkuG7_DLguxrJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgypgwDOcCRvba3p3DF4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxsfo3Je48bkuo_hbJ4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgwvBGKhwGXXOl35j1R4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgwZLgzdY5rCA7370FZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
{"id":"ytc_UgzKACa5tI91hYNtFeJ4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
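The comment-ID lookup above presumably works by parsing the raw JSON array and indexing each row by its `id` field. A minimal sketch of that lookup in Python, assuming the raw response has the shape shown above (the function name `index_by_comment_id` and the skip-malformed-rows policy are illustrative, not the tool's actual implementation):

```python
import json

# Two rows copied verbatim from the raw response above, standing in for
# the full model output.
RAW_RESPONSE = """
[
 {"id":"ytc_UgwfP-sb-Wcsqas8_-p4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgxQ6xP6vP3pfwts9Ad4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"industry_self","emotion":"indifference"}
]
"""

# The four coding dimensions shown in the Coding Result table.
EXPECTED_DIMENSIONS = {"responsibility", "reasoning", "policy", "emotion"}

def index_by_comment_id(raw: str) -> dict:
    """Parse a raw model response and index codings by comment ID.

    Rows missing an "id" or any expected dimension are skipped rather
    than raising, since model output is not guaranteed to be well formed.
    """
    index = {}
    for row in json.loads(raw):
        if not isinstance(row, dict) or "id" not in row:
            continue
        if not EXPECTED_DIMENSIONS <= row.keys():
            continue
        index[row["id"]] = {k: row[k] for k in EXPECTED_DIMENSIONS}
    return index

codings = index_by_comment_id(RAW_RESPONSE)
print(codings["ytc_UgxQ6xP6vP3pfwts9Ad4AaABAg"]["responsibility"])  # developer
```

With an index like this, rendering the Coding Result table for a given ID is a single dictionary lookup; the tolerant parsing matters because a model can return rows that drop a field or add extra keys.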