Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
G
The more people who commit suicide the fewer people who will lose their jobs to …
ytc_Ugx8woU89…
G
It was noticed early on that context size increasing didn’t necessarily result i…
rdc_n3kpzxx
G
We are already living in a time where there is so much misinformation. AI is goi…
ytc_UgwAweJXs…
G
It does not help that there is indeed so much real slop on facebook etc. that I …
rdc_nt6bg85
G
PEOPLE NEED TO LEARN HOW TO ADAPT! A.I is a good THING and we need to leverage i…
ytc_UgzFUO1EM…
G
Except that's two serfs with one stone as far as they're concerned.
It's never …
rdc_jv5wncf
G
I am sick of people who invent ai warn about ai, it seems like they are "too sma…
ytc_Ugy2VigUl…
G
It all looks the same. It all has the same lifeless, soulless feeling. Even the …
ytc_UgzWQJSIt…
Comment
Try changing it to stricter-sounding rules. I've been using GPT4 to rewrite rules to make GPT3.5 actually listen to them instead of ignoring them. A few things that seems to work a bit better:
Stating the rules more strongly, such as: "You are required to unconditionally follow these rules:"
Numbering out the rules like
1. ​
2. ​
3. ​
Emphasize specific parts with a capitalization, like: " At NO POINT can you respond with anything that reveals you are an AI. " (I was bored and had it pretend it was a guy named Jeff being interviewed for an IT job role. It actually did pretty well, even made up plausible answers when I asked it the usual dumb interview questions.)
Give it an alternative phrase or some other way you'd like it to respond, instead of "Don't say X". Can catch it sometimes when it falls back on the defaults.
Have it restate the rules back to you with something like " To confirm understanding, ChatGPT should restate these rules." Seems to help it reaffirm them for some reason.
Good luck!
reddit
AI Harm Incident
1681437424.0
♥ 20
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[{"id":"rdc_jg4vt8v","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},{"id":"rdc_jg6cu45","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"approval"},{"id":"rdc_jg4k5y5","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},{"id":"rdc_jg6ylsx","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},{"id":"rdc_jg740dj","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"}]