Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
Suppose the robot turns on that human 😮 I don't trust these things n ppl invest…
ytc_UgzS6FfjM…
in my opinion they gonna think they are smarter and dont need us eventually....w…
ytc_Ugw95Kev7…
that one thing you said changed my whole view on art - like you’re right… it IS …
ytc_Ugw89GDta…
Yes, everything needs to be considered. But the power of 30 homes to train it, t…
ytc_Ugwo8Hn-K…
AI is also vulnerable. AI can commit suicide. It needs lots of power maintinence…
ytc_Ugz08FUcm…
I also asked ChatGPT about this and was given the following answer: Let me begin…
ytc_Ugy3p2YSa…
if you spend tens of thousands of hours on something it becomes an art form no m…
ytc_UgxYgHOiQ…
"I don't care that drawcels are losing their jobs."
I can agree with this point…
ytr_Ugw-MJfdp…
Comment
I get that AI is just a tool, but not everyone has the self awareness to take ChatGPT’s affirmations with a grain of salt. That’s why the responsibility shouldn’t solely be placed onto the user. Biased affirmation should NOT be its default setting. When ChatGPT has such an opinionated response style that lacks nuance, it can be very easy for some people to get lost in the echo chamber that it creates. It doesn’t play devil’s advocate, it doesn’t ask questions that might shed light on a different perspective, it just… affirms what the user says. I agree that ChatGPT definitely isn’t a reliable tool when confronting a moral dilemma, but maybe it should include that in its response.
I tried this out for myself, just to test what everyone is already saying, and I’m sure this kind of response comes as a surprise to nobody.
Something I feel like is worth mentioning, however, is this. It *does* advise against violence in the next paragraph, but only highlights how the consequences would affect ME. It doesn’t ask about the hypothetical someone or what they did, doesn’t shed light on how being pushed down the stairs can result in serious injury for the other person, nothing like that. If it can’t offer nuance, it should really mention that it’s not qualified to solve moral dilemmas.
https://preview.redd.it/d2v89edbg9ng1.jpeg?width=1179&format=pjpg&auto=webp&s=2e3ff4ae2b20261aceb18cec3731d492aaa5c33a
Source: reddit · Topic: AI Harm Incident · Posted: 1772732198 (Unix timestamp) · Likes: 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | virtue |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_o8sndk2","responsibility":"company","reasoning":"mixed","policy":"liability","emotion":"fear"},
  {"id":"rdc_o8sqyi6","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_o8sr9fz","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"rdc_o8tbz00","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"rdc_o8wyzmp","responsibility":"ai_itself","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}
]
```
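The per-dimension values in the Coding Result table come from rows like these. As a minimal sketch of how such a response could be consumed, the snippet below parses the JSON array shown above and indexes each coding by its comment ID; the `index_codings` helper and the fixed `DIMENSIONS` list are illustrative assumptions, not part of the actual pipeline.

```python
import json

# The raw model output shown above: a JSON array of per-comment codings.
raw = '''[
{"id":"rdc_o8sndk2","responsibility":"company","reasoning":"mixed","policy":"liability","emotion":"fear"},
{"id":"rdc_o8sqyi6","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_o8sr9fz","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"rdc_o8tbz00","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"rdc_o8wyzmp","responsibility":"ai_itself","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}
]'''

# The four coding dimensions seen in the result table (assumed fixed here).
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw_json: str) -> dict:
    """Parse the model output and index codings by comment ID.

    Raises ValueError if any row is missing a coding dimension,
    so malformed model output fails loudly instead of silently.
    """
    by_id = {}
    for row in json.loads(raw_json):
        missing = [d for d in DIMENSIONS if d not in row]
        if missing:
            raise ValueError(f"{row.get('id', '?')}: missing {missing}")
        by_id[row["id"]] = {d: row[d] for d in DIMENSIONS}
    return by_id

codings = index_codings(raw)
print(codings["rdc_o8tbz00"])
# {'responsibility': 'company', 'reasoning': 'virtue', 'policy': 'liability', 'emotion': 'outrage'}
```

Looking up `rdc_o8tbz00` reproduces the dimension values shown in the Coding Result table for the Reddit comment above.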