Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect):

- `rdc_jptuj2o`: "There are private versions of Copilot in the works that companies will have to p…"
- `ytr_UgxLKpBKp…`: "@tradingwizard562 Yeah eh it makes sense, if the government is going to take awa…"
- `ytr_UgygsE2rZ…`: "IMO, it's already automated enough to kick out all the people working in the fie…"
- `ytc_Ugzjl76mm…`: "The Real investors are the Governments and now that AI is getting better and bet…"
- `ytr_UgyzY7gHB…`: "Man, with how some teachers be acting.... this will most likely happen. The p…"
- `ytc_UgyIbVVF8…`: "I say. Is this not quite a scoop, re water use by these AI companies ~ water pol…"
- `ytc_UgzT36Y-B…`: "Like millions of creative people across the world, I've been profoundly demorali…"
- `ytc_UgygvSk7-…`: "I understand enough of AI architecture to think something stranger is happening …"
Comment
> The problem is people are like wow, 80-95% accurate?? That's really good! Humans get stuff wrong all the time too, so it's probably better!
> The real issue though is humans generally make rational or predictable errors that you can work with or around or plan for. The 20-5% of the errors AI makes are just full blown hallucinations. They could be anything. You can't work around it.
Source: reddit
Topic: AI Responsibility
Posted: 2025-08-19 UTC (Unix timestamp 1755626863)
♥ 12
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
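A coded record like the one above can be checked against the set of labels each dimension allows. The label sets below are only the values visible on this page (the actual codebook behind this tool may define more categories); a minimal validation sketch under that assumption:

```python
# Allowed labels per dimension, inferred from the values visible on this
# page -- an assumption, not this tool's actual schema.
ALLOWED = {
    "responsibility": {"company", "ai_itself"},
    "reasoning": {"deontological", "consequentialist"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"fear", "indifference", "outrage"},
}

def validate(record: dict) -> list:
    """Return a list of problems with one coded record (empty = valid)."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

# The record shown in the Coding Result table passes:
print(validate({"id": "x", "responsibility": "ai_itself",
                "reasoning": "consequentialist",
                "policy": "none", "emotion": "fear"}))  # -> []
```

Running this on every parsed record before storing it catches hallucinated labels at ingest time rather than at analysis time.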
Raw LLM Response
```json
[
  {"id":"rdc_n9hzee8","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"indifference"},
  {"id":"rdc_n9ig08d","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"rdc_n9ixia5","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"rdc_n9kka6l","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"rdc_n9jts9g","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
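The model returns one JSON array per batch, with each element carrying the comment ID it was coded against. To support "look up by comment ID", the response can be parsed and keyed by that field. A minimal sketch (the field names come from the response above; the function name and the two-record sample are illustrative, not this tool's actual code):

```python
import json

# A shortened copy of the batch response shown above (illustrative sample).
RAW_RESPONSE = """
[
  {"id":"rdc_n9hzee8","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"indifference"},
  {"id":"rdc_n9ixia5","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
"""

def index_by_id(raw: str) -> dict:
    """Parse a batch response and key each coded record by its comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codes = index_by_id(RAW_RESPONSE)
print(codes["rdc_n9ixia5"]["emotion"])  # -> fear
```

If the model omits or duplicates an ID, the dict comprehension silently drops or overwrites records, so a production version would want to compare `len(records)` against the number of comments sent in the batch.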