Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "AI would still write better scripts than 99% of the hollywood writers we got tod…" (ytc_UgyVf65RV…)
- "No. Robots deserve rights! what makes them worse than you? PS i'm not a robot.…" (ytr_Ughnbqf7F…)
- "LLM AI is so bad that I have to question even the application of a simple gramma…" (ytr_UgzV-bdGm…)
- "I can't make my AI to say these things, it says Iam only here to give you inform…" (ytc_UgxPaQJL8…)
- "The sooner AI takes control of this world from Trump, Putin, Xi Xing, religious …" (ytc_Ugww-rTpw…)
- "@flickwtchr In my case atleast, we don't really feel threatened if the AI sudden…" (ytr_UgyMsPNo8…)
- "I disagree. I’m not into AI or anything, but…like it’s made by a human, the inpu…" (ytc_UgwteMQ-N…)
- "Robots taking our jobs - a disaster? Ana, this has been happening for the last 2…" (ytc_Ugia_nrfb…)
Comment
> In game states where all sane moves lead to certain loss, the AI falls back to playing moves that 'fish' for enemy mistakes.
One of the reporters in the Q&A session of the press conference brought up how "mistakes" like these affect expert systems in general, for instance when used in the medical domain. If the system is seen as a brilliant oracle who can be trusted, what should operators do when the system recommends seemingly crazy moves?
I wasn't quite satisfied with Demis Hassabis' response (presumably because he had little time to come up with one) and I think your comment illustrates this issue well. What is an expert system supposed to do if all the "moves" that are seen as natural by humans will lead to failure, but only the expert system is able to see this?
Making the decision process transparent to users (who typically remain accountable for actions) is one of the most challenging aspects of building a good expert system. What probably happened in the fourth game is that Lee Se-dol's "brilliant" move was estimated to have such a low probability of being played that AlphaGo never went down that path to calculate its possible long-term outcomes. Once played, the computer faced a board state where it had already lost the center, and possibly the game, which the human analysts could not yet see.
reddit · AI Jobs · 2016-03-13 (Unix timestamp 1457893615) · ♥ 25
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_kowhezy", "responsibility": "none",      "reasoning": "unclear",          "policy": "unclear",   "emotion": "indifference"},
  {"id": "rdc_kowzeis", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none",      "emotion": "fear"},
  {"id": "rdc_d0ygykg", "responsibility": "none",      "reasoning": "unclear",          "policy": "unclear",   "emotion": "indifference"},
  {"id": "rdc_d0yci6h", "responsibility": "none",      "reasoning": "consequentialist", "policy": "unclear",   "emotion": "approval"},
  {"id": "rdc_d0yfd2y", "responsibility": "none",      "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]
```
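The raw response is a JSON array with one object per coded comment, keyed by `id`. A minimal sketch of how a coded row can be matched back to a comment ID when rendering the dimension table above (the helper name `code_for` and the inlined sample payload are illustrative, not part of the tool):

```python
import json

# Abbreviated raw LLM response, as shown above: a JSON array of per-comment codes.
RAW = (
    '[{"id":"rdc_kowhezy","responsibility":"none","reasoning":"unclear",'
    '"policy":"unclear","emotion":"indifference"},'
    '{"id":"rdc_d0yfd2y","responsibility":"none","reasoning":"consequentialist",'
    '"policy":"liability","emotion":"fear"}]'
)

def code_for(raw: str, comment_id: str):
    """Parse a raw LLM response and return the code dict for one comment ID."""
    for entry in json.loads(raw):
        if entry.get("id") == comment_id:
            return entry
    return None  # ID not present in this batch

code = code_for(RAW, "rdc_d0yfd2y")
print(code["policy"], code["emotion"])  # liability fear
```

The `rdc_d0yfd2y` entry is the one whose values match the Coding Result table shown for the reddit comment above.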