Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Comment
I used GPT3.5 and 4 to prepare a CV, cover letter and the interview way faster than I could have without it. ChatGPT is also great for basic coding queries in lieu of googling and sleuthing forums. I'm sure that if it weren't for data security concerns, we could probably replace a large part of our employees (Callcenter) with ChatGPT within a year or two.
Where ChatGPT fails at the moment is in gray areas. For example, given a fantasy prompt about a brothel (which I tried specifically for this post), ChatGPT will output something similar to:
>As the story's narrator, I must maintain a level of discretion and respect for the characters involved. It is important to remember that consent and communication are key in any intimate situation, and that relationships should be based on mutual respect and understanding.
Ok, they don't want you writing smut. But the same goes for other "sensitive" topics such as religion, politics, etc. There are legitimate use cases for these, such as using it as a writing aid (I used it to write radio speeches for an RPG).
You can somewhat get around it by formulating your prompts to "persuade" the filter, but at some point the question becomes "who decides what should be filtered and what should not?" It's not a huge issue yet, but once these tools become ubiquitous, this could cause huge biases. Imagine OpenAI filtering anything remotely anti-corporate and promoting liberal talking points... The only way to avoid it is no filters.
Source: reddit · AI Harm Incident · Posted: 1681486128.0 (2023-04-14 UTC) · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[{"id":"rdc_jg8n7zh","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"rdc_jg7icy8","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_jg7o13d","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"rdc_jg7c9dz","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"rdc_jg7cl29","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"}]
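The lookup-by-comment-ID view above can be sketched as a small parser over a raw batch response. This is a minimal, hypothetical illustration (the function name and the sample string are assumptions; only the JSON field names, `id`, `responsibility`, `reasoning`, `policy`, and `emotion`, come from the response shown above):

```python
import json

def lookup_codes(raw_response: str, comment_id: str):
    """Parse a raw batch response from the coding LLM and return the
    coded dimensions for one comment ID, or None if it is absent."""
    rows = json.loads(raw_response)  # raises ValueError on malformed JSON
    for row in rows:
        if row.get("id") == comment_id:
            return row
    return None

# Assumed single-row sample in the same shape as the raw response above.
raw = ('[{"id":"rdc_jg8n7zh","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none",'
       '"emotion":"approval"}]')

print(lookup_codes(raw, "rdc_jg8n7zh")["emotion"])  # approval
```

Parsing with `json.loads` rather than string matching also surfaces malformed model output (such as a stray `)` in place of the closing `]`) as an immediate error instead of a silent miss.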