Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Could you see a day, when you could pick up one of these robots at Walmart, in t…" — `ytc_UgxQSg-Wj…`
- "It’s all about necessity, do we actually need it. Humans will only do what they …" — `ytc_UgwUThrrN…`
- "RIGHT!? That was probably the most comprehensible breakdown of AI modeling that…" — `ytr_UgyAJVJSy…`
- "If you really really pay attention. Pause and watch this video. It'll blow your …" — `ytc_UgzmUYujm…`
- "I'd be curious to find out the reasons for the first two AI's saying they'd dest…" — `ytc_UgxBjYfrp…`
- "Generative ai was made in the 1950s if you don’t believe me look it up…" — `ytc_Ugy8D-NQr…`
- "Why is no one talking about the fact that maybe AI will take over but wont kill …" — `ytc_UgwkNFPKt…`
- "The question about if we're living in a simulation is: If yes (could be) then th…" — `ytc_Ugw4rBZ8C…`
Comment

> I'd say you're wrong about your second point. AI coding, in the last 2-3 months has become seriously impressive.

| Field | Value |
|---|---|
| Source | reddit |
| Title | AI Jobs |
| Posted | 1774576719.0 (Unix timestamp) |
| Score | -6 |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_ocoskbj","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"rdc_oconian","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"rdc_ocpfc00","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"rdc_ocrw7et","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"rdc_ocp46qc","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
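The raw response is a JSON array of per-comment codings that gets matched back to comments by `id`. A minimal sketch of that parsing step in Python — the field names are taken from the response above, but the validation logic and function name are illustrative assumptions, not the tool's actual implementation:

```python
import json

# Two entries copied from the raw response above, for demonstration.
RAW = '''[
  {"id":"rdc_ocoskbj","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"rdc_oconian","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]'''

# Every coding entry must carry these keys (as seen in the response above).
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(raw: str) -> dict:
    """Parse a batched coding response and index the codings by comment id."""
    entries = json.loads(raw)
    codings = {}
    for entry in entries:
        missing = REQUIRED_KEYS - entry.keys()
        if missing:
            # An LLM can drop fields; fail loudly rather than store a partial coding.
            raise ValueError(f"entry {entry.get('id', '?')} is missing {sorted(missing)}")
        codings[entry["id"]] = {k: entry[k] for k in REQUIRED_KEYS - {"id"}}
    return codings

codings = parse_codings(RAW)
print(codings["rdc_ocoskbj"]["emotion"])  # approval
```

Indexing by `id` is what lets the inspection page above pull up one comment's coding alongside the full raw batch it came from.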