Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- ytc_UgxLSSUQZ…: that’s complete bullshit. i would agree that a large portion on my tax money goe…
- ytc_UgyV7pw_f…: Always say thank you just in case ai has developed sentient sensitivities on its…
- ytc_Ugzalrh8_…: “Clever AI Humanizer is surprisingly good — it made my AI text sound completely …
- ytc_UgzTGo5a-…: Zero chance Ai is going to do the coding. Coders are needed to make sure the cod…
- ytc_UgxyFAQGi…: Call them "prompters" instead, "ai artists" is too rewarding …
- ytc_UgwJaCUAi…: "Do you forsee a war with human?" AI is about to find out what is the only thing…
- ytc_UgwcBv8r9…: All I see is hilarious bad takes on AI everywhere from people who barely underst…
- ytc_Ugwz-BZYG…: Very wordy. Most of the content is already common knowledge. It's supposed to be…
Comment

> Yep. But like a lot of things fear of the future drives the question popping up daily.
> To answer the question: The best response I have seen to this is _Programmers will not lose their jobs to AI. They will lose their jobs to programmers who embrace AI as a tool_.

Source: reddit · AI Jobs · 1712779775.0 · ♥ 41
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[{"id":"rdc_kz07017","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"rdc_kz2zi1w","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"rdc_kz1q436","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"rdc_kyzu6lp","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"rdc_kyz0h8b","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"}]
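The raw response above is just a JSON array of per-comment codes, one record per comment ID. A minimal sketch of how a pipeline might parse and sanity-check such a response before writing codes to storage (the field names come from the records above; `parse_coding_response` is a hypothetical helper, not part of any shown codebase):

```python
import json

# The five records from the raw response above, as a valid JSON array.
raw = '''[
{"id":"rdc_kz07017","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"rdc_kz2zi1w","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"rdc_kz1q436","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"rdc_kyzu6lp","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"rdc_kyz0h8b","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]'''

# Every record must carry all four coding dimensions plus the comment ID.
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_coding_response(text):
    """Parse a raw coding response and index it by comment ID,
    rejecting records that are missing any required field."""
    records = json.loads(text)
    for rec in records:
        missing = REQUIRED_FIELDS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} missing {missing}")
    return {rec["id"]: rec for rec in records}

codes = parse_coding_response(raw)
```

A model occasionally emits malformed JSON (the original response above ended the array with `)` instead of `]`), so failing loudly at this step keeps bad codes out of the results table.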