Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "It just occurred to me the connection between nuclear proliferation and AI, like…" (`ytc_Ugy4E7Ins…`)
- "My company is trying basically every tool. I switch among them but I generally I…" (`rdc_ohunyyf`)
- "I think AI is perfectly fine for idea generation, or for use in a personal proje…" (`ytc_UgwuUxMUv…`)
- "I respect Geoffrey Hinton's work and agree AI has serious risks, but I'm skeptic…" (`ytc_Ugxr0yP_1…`)
- "13:46 AI "musicians" love to use this argument as well when Jason Becker litera…" (`ytc_Ugyl7Hzoo…`)
- "AI would be goal oriented, because we want them to accomplish a task. If they ge…" (`ytr_Ugzjiad5p…`)
- "There will be a day where AI doctor can be trusted than a regular doctor. Not fa…" (`ytc_UgynpUxzQ…`)
- "Ask someone who's been working on her feet for 35 years, I cannot fucking wait f…" (`ytc_UgzMGIwig…`)
Comment
And we all know AI has no chance. The junior might.
And next time the Jr probably will.
The AI probably will for that specific bug as well. The next 20millionth time.
A related bug though, probably not.
Source: reddit · AI Jobs · Posted: 1755759712 (Unix epoch) · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
{"id":"rdc_n9qmpxp","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"rdc_n9rqib6","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
{"id":"rdc_n9s3pfb","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"rdc_n9upl4c","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"rdc_n9rezpb","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"outrage"}
]
```
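A raw batch response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal, hypothetical validator: the dimension names (`responsibility`, `reasoning`, `policy`, `emotion`) come from the response shown here, but the allowed-value sets are only the values *observed* in this dump, not the project's full code book, and `parse_batch` is an illustrative helper name.

```python
import json

# Values observed in the sample responses above; the complete
# code books are an assumption beyond what this page shows.
OBSERVED_VALUES = {
    "responsibility": {"none", "user"},
    "reasoning": {"unclear", "consequentialist", "virtue"},
    "policy": {"unclear"},
    "emotion": {"resignation", "approval", "indifference", "mixed", "outrage"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse one raw LLM batch response and flag out-of-codebook values."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in OBSERVED_VALUES.items():
            if row.get(dim) not in allowed:
                # An unseen value means either extending the code
                # book or sending the comment back for re-coding.
                print(f"{row.get('id')}: unexpected {dim}={row.get(dim)!r}")
    return rows

raw = '[{"id":"rdc_n9qmpxp","responsibility":"none",' \
      '"reasoning":"unclear","policy":"unclear","emotion":"resignation"}]'
rows = parse_batch(raw)
print(len(rows))  # 1
```

Validating at parse time keeps a single malformed or off-schema model response from silently contaminating the coded dataset.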