Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- ytc_UgyZiZ99V…: "There is nothing biomechanically different than neurons servings as ones or zero…"
- ytc_UgztnFM1f…: "Long story short: absolutely no one has any clue on how to manage this complex s…"
- ytc_UgwcxukQc…: "This guy must’ve been raised by a robot, he’s such a weirdo. I don’t have anyth…"
- ytc_Ugx2mYcx_…: "When you log in ChatGPT, before you choose which chat logs you want to enter, or…"
- ytc_Uggx0h3sz…: "Robots can never really have the human definition of a "soul." Sure, they can be…"
- ytc_Ugy7k-zAx…: "Best answer. Thank you. Software developers using AI now also know. AI at pres…"
- ytc_UgxNCvhJc…: "What is the point of making money or creating human-like intelligence if almost …"
- ytc_UgzEPrjqa…: "That's fixed if the code distribution on which the AI is trained is high quality…"
Comment
There's a lot of truth in your first paragraph for now, but it seems like companies just won't care if generative AI produces flawed output so long as it's good enough. And it will only improve as time goes on.
Source: reddit, "AI Jobs"
Posted: 2024-02-29 (Unix timestamp 1709186605)
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[{"id":"rdc_ksluedv","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"rdc_ksm1uv5","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"rdc_ksnqepb","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"rdc_ksnqonn","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"rdc_ksnff5n","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}]
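The lookup-by-comment-ID step described above can be sketched in a few lines: parse the raw model output as a JSON array of coded records and index it by `id`. This is a minimal sketch, not the tool's actual implementation; the record shape mirrors the sample output above, while `index_codes` and the fallback to `"unclear"` for missing dimensions (matching the table rendering) are assumptions.

```python
import json

# Hypothetical raw LLM response: a JSON array of coded records.
# The IDs and dimension values below are copied from the sample output above.
raw_response = '''[
  {"id": "rdc_ksluedv", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_ksnqepb", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]'''

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(raw: str) -> dict:
    """Parse the model output and index coded records by comment ID.

    Any dimension missing from a record falls back to "unclear",
    mirroring how uncoded dimensions appear in the result table.
    """
    records = json.loads(raw)
    return {
        rec["id"]: {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
        for rec in records
    }

codes = index_codes(raw_response)
print(codes["rdc_ksnqepb"]["emotion"])  # resignation
```

Indexing by ID up front makes each "look up by comment ID" query a constant-time dictionary access rather than a scan over the raw response.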