Raw LLM Responses
Inspect the exact model output for any coded comment by looking it up by its comment ID.
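As a rough sketch of what this lookup amounts to, assuming the raw model outputs are stored as a JSON array of records keyed by `id` (the shape shown under Raw LLM Response below). The file name `raw_llm_responses.json` and the helper function are illustrative placeholders, not part of the tool.

```python
import json

def lookup_raw_response(comment_id, path="raw_llm_responses.json"):
    """Return the raw coded record for one comment ID, or None if it is absent.

    Assumes `path` holds a JSON array of objects such as
    {"id": "...", "responsibility": "...", "reasoning": "...", "policy": "...", "emotion": "..."}.
    """
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    return next((r for r in records if r.get("id") == comment_id), None)

# Hypothetical usage with an ID taken from the sample response further down:
# lookup_raw_response("rdc_jsm5wzy")
```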
Comment
Fine then I'll talk.
1: The title has nothing to do with the paper. This is not a quote, doesn't take into account what the paper says about the various improvements of the model, etc.
2: The quote used isn't in full. To quote:
>Figure 4: Code generation. (a) Overall performance drifts. For GPT-4, the percentage of generations that are directly executable dropped from 52.0% in March to 10.0% in June. The drop was also large for GPT-3.5 (from 22.0% to 2.0%). GPT-4’s verbosity, measured by number of characters in the generations, also increased by 20%. (b) An example query and the corresponding responses. In March, both GPT-4 and GPT-3.5 followed the user instruction (“the code only”) and thus produced directly executable generation. **In June, however, they added extra triple quotes before and after the code snippet, rendering the code not executable.**
Which means that by the paper's own admission, the problem is not the code given but that their test doesn't work.

For the prime numbers, the problem was fixed in March, notably because their prompt didn't work, which means they didn't manage to test what they were trying to do. Quote:
> Figure 2: Solving math problems. (a): monitored accuracy, verbosity (unit: character), and answer overlap of GPT-4 and GPT-3.5 between March and June 2023. Overall, a large performance drifts existed for both services. (b) an example query and corresponding responses over time. GPT-4 followed the chain-of-thought instruction to obtain the right answer in March, but ignored it in June with the wrong answer. GPT-3.5 always followed the chain-of-thought, but it insisted on generating a wrong answer (\[No\]) first in March. This issue was largely fixed in June.
>
>\[...\] This interesting phenomenon indicates that the same prompting approach, even these widely adopted such as chain-of-thought, could lead to substantially different performance due to LLM drifts.

The "sensitive question
Metadata

| Field | Value |
|---|---|
| Source | reddit |
| Category | AI Harm Incident |
| Posted (Unix timestamp) | 1689753378.0 |
| Score | ♥ 106 |
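The Posted value is a Unix epoch timestamp; a one-liner (UTC assumed) makes it readable:

```python
from datetime import datetime, timezone

posted = datetime.fromtimestamp(1689753378.0, tz=timezone.utc)
print(posted.isoformat())  # 2023-07-19T07:56:18+00:00
```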
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[{"id":"rdc_jsm5wzy","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"rdc_jsl8ta1","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"rdc_jsl0p6a","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_jskabl2","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_jskaeh0","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"}]