Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
2 minds are better than 1. Check.
50 minds are better than 2, and also better t…
ytc_UgwTfx5Df…
That was s bad bad AI! He should be sent to his room and given a timeout!…
ytc_UgxCA-FUf…
@puzzleetpuzzles7951 the difference is that, humans don’t copy, the filter infor…
ytr_UgyqizQV_…
I think AI art is a symptom of a bigger issue, a mix of instant gratification, o…
ytc_UgyLW4hQG…
1. René Girard's scapegoat mechanism
2. Did you forgot to mention Blade Runner!?…
ytc_Ugz6mmZZE…
Thank you! this interview is so refreshing from a human kind of perspective. In …
ytc_UgxXyeO2Z…
@Cyborg_Lenin cause not all tools are created equal. Cause if a tool is constan…
ytr_Ugx-g8NEx…
The app sora AI did make the ""Work Smarter Not Harder" to another level 💀💀…
ytc_UgyJhFteJ…
Comment
No the article doesn’t say that at all. It simply says human programmers will become replaceable by AI. Which feels pretty obvious.
reddit
AI Governance
2025-04-17T15:54:20Z (1744905260)
♥ 12
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_mnm6y99", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "resignation"},
  {"id": "rdc_mnn1r11", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "rdc_mnlrblx", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_mnmasnh", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "rdc_mnpb9nu", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]
```
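Looking up a single coded comment in a raw batch response like the one above amounts to indexing the JSON array by its `id` field. A minimal sketch, assuming the response keeps the exact shape shown (the `index_by_id` helper name is ours, not part of the tool):

```python
import json

# Raw batch response, copied verbatim from the dump above.
RAW = """[
  {"id": "rdc_mnm6y99", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "resignation"},
  {"id": "rdc_mnn1r11", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "rdc_mnlrblx", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_mnmasnh", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "rdc_mnpb9nu", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]"""

def index_by_id(raw: str) -> dict:
    """Map each coded record's "id" field to its full record (hypothetical helper)."""
    return {rec["id"]: rec for rec in json.loads(raw)}

coded = index_by_id(RAW)
print(coded["rdc_mnlrblx"]["emotion"])  # indifference
```

Indexing once and looking up by key is preferable to re-scanning the array per query when inspecting many coded comments.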