Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `ytr_UgzF7aXVI…`: @roseyfurry7654 Dude, “Ai glazer” seems like a petty excuse, people who just hat…
- `ytc_Ugzn4q1aq…`: I feel like there is no reason for such a surprise when AI companies project the…
- `ytc_Ugz5lkYig…`: I'm in the middle; I'm an end user of generative AI, but I don't consider it art…
- `ytc_UgyKGlBzV…`: At no point is it reasonable to take at face value someone who is the only one w…
- `ytc_UgxCINhYL…`: The basic question here is “whose values” are guiding “Claude”? Which “humanity…
- `ytr_UgymD5_AW…`: @Avoc777 I mean salvation from extinction. We're facing extinction on like 5 sep…
- `ytc_Ugx-jIocS…`: Analysis: Disappearing Jobs & Human Impact in an AI-Dominated Economy - I asked …
- `ytc_UgzcxzPiF…`: So what you're saying is that the AI is right but not in the most correct way? L…
Comment

> They “plan” is that sr devs last long enough for AI to keep advancing to the point it can easily replace sr devs as well. That’s why they’re not training anyone up now. They don’t expect to need anyone soon enough.

Source: reddit · Topic: AI Governance · Posted: 2025-06-01 (Unix timestamp 1748821549) · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_mvi42uw","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"rdc_mvair4t","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"rdc_mvakt5x","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_mvb41m7","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"rdc_mvb0vnp","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```