Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect):

- "This is not the ai robots fault this is lack if supervision and communication wi…" (ytc_UgxIHY9Vm…)
- "Well, thats settled. Sora's gonna have a longer time leveling up since he picked…" (ytc_UgwtKeRLb…)
- "Over the last decade of knowing this guy, following all that he does through the…" (ytc_UgzovkEnO…)
- "Our children are all either going to be communists, or they will starve to death…" (ytc_UgwtE2sso…)
- "When i think of the pain Ive felt in my life. Physically and emotionally abused…" (ytc_UgzZg8uCw…)
- "Yeah cause it's the fastest way to win. AI doesn't have a conscious. It only …" (rdc_o7ohrwj)
- "How did this turn into a vpn ad played along with by this ai with no pre plannin…" (ytc_Ugy3_hIXs…)
- "I actually think it is and will, just indirectly. Its dumb on its own but if AI …" (ytc_UgzDCtbZA…)
Comment
Summarized Article:
Here are the key points from the paper "How Is ChatGPT's Behavior Changing over Time?":
- The paper evaluates how the behavior of GPT-3.5 and GPT-4 changed between their March 2023 and June 2023 versions on four tasks: math problems, sensitive questions, code generation, and visual reasoning.
- For math problems, GPT-4's accuracy dropped sharply from 97.6% to 2.4%, while GPT-3.5's improved from 7.4% to 86.8%. GPT-4 also became much less verbose.
- For sensitive questions, GPT-4 answered fewer (21% to 5%) while GPT-3.5 answered more (2% to 8%). Both became more terse in refusing to answer. GPT-4 improved in defending against "jailbreaking" attacks but GPT-3.5 did not.
- For code generation, the percentage of directly executable code dropped for both models. The June versions often added extra non-code text around the snippets, making the output non-executable as-is.
- For visual reasoning, both models showed marginal 2% accuracy improvements. Over 90% of responses were identical between March and June.
- The major conclusion is that the behavior of the "same" GPT-3.5 and GPT-4 models can change substantially within a few months. This highlights the need for continuous monitoring and assessment of LLMs in production use.
Source: reddit | Category: AI Harm Incident | Timestamp: 1689798552.0 (2023-07-19 UTC) | ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_jskk6er","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"rdc_jsli3y1","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"rdc_jslohgf","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"rdc_jsmf36x","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"rdc_jsmzofs","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
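The raw response above is a JSON array with one coding object per comment ID. A minimal sketch of how such a batch could be parsed and validated in Python, assuming a codebook limited to the dimension values visible in this document (the real coding scheme may define more categories):

```python
import json

# Allowed values per coding dimension, inferred from the responses shown
# above — an assumption; the actual codebook may include more categories.
SCHEMA = {
    "responsibility": {"none", "ai_itself", "company"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"none"},
    "emotion": {"indifference", "outrage", "resignation", "fear"},
}

def parse_coded_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response, keeping only well-formed codings."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if "id" not in rec:
            continue  # a coding without a comment ID cannot be joined back
        # Drop records whose dimension values fall outside the codebook.
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(rec)
    return valid

raw = '''[
  {"id":"rdc_jskk6er","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"rdc_bad","responsibility":"martians","reasoning":"unclear","policy":"none","emotion":"fear"}
]'''
print([r["id"] for r in parse_coded_batch(raw)])  # the out-of-schema record is dropped
```

Validating against a fixed value set before storing the codings is what makes downstream tables like the "Coding Result" above trustworthy: a hallucinated label never silently enters the dataset.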