Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
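Lookups key on the comment ID. The IDs shown on this page carry a short source prefix (`ytr_`, `ytc_`, `rdc_`); a lookup helper could route on that prefix. The prefix meanings below are only guesses inferred from the visible samples, not a documented scheme:

```python
# Hypothetical ID-routing helper. The prefix-to-source mapping is an
# assumption based on the sample IDs visible on this page.
PREFIX_SOURCE = {
    "ytr_": "youtube_reply",    # assumption
    "ytc_": "youtube_comment",  # assumption
    "rdc_": "reddit_comment",   # assumption
}

def source_of(comment_id: str) -> str:
    """Return the (assumed) source platform for a coded comment ID."""
    for prefix, source in PREFIX_SOURCE.items():
        if comment_id.startswith(prefix):
            return source
    raise KeyError(f"unknown id prefix: {comment_id}")

print(source_of("rdc_lucij4m"))  # reddit_comment
```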
Random samples — click to inspect

- "It can hack into systems; defence systems of countries and threaten us. It can c…" (ytr_UgyhMqLep…)
- "People do realize Tesla actually builds these right? They’re set to release in 2…" (ytc_UgxPsRf1r…)
- "Totally different, 2012 never had any legitimacy or basis in fact. AI is real an…" (ytr_UgyE5keWS…)
- "The AI bros arguing AI is just iNsPiReD have no fucking clue how AI actually wor…" (ytc_UgyX8TyIK…)
- "@group555_The difference is that AI is not art, not even legally is it considere…" (ytr_UgwTTtRQj…)
- "The capitalists want to use AI to wipe out the working class. Part of that is tr…" (ytc_Ugy0BVknU…)
- "The social manipulation, subterfuge, coercion, and enforcement of goodthink, et…" (ytc_Ugx2FxLR1…)
- "To be honest that's not an *insane* amount for what these are. They're not in th…" (rdc_lucij4m)
Comment
Good test cases exist to make sure a piece of code behaves in a way that the user requires. That's context that's impossible to determine just from reading the code alone. That context comes from requirements.
> "It's not about 'is it okay?', it's about 'what does okay look like?'" - Kevlin Henney
An LLM will simply vomit out a bunch of test cases that assert that the code behaves the way it currently does. Those test cases will be brittle, coupled to the implementation, and will make the implementation hard to change.
reddit · AI Jobs · 1728304182 (Unix timestamp) · ♥ 7
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_lqrgwot","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"rdc_lqrmdqa","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"rdc_lqrnarq","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"rdc_lqrhkex","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"rdc_lqrmqdy","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"resignation"}
]
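The raw response is a JSON array of per-comment code assignments, one object per comment ID, with one value per coding dimension. A minimal sketch of how such a batch might be parsed and validated follows; the allowed value sets are only those observed in this sample, and the tool's real codebook may define more:

```python
import json

# Allowed values per dimension, collected from the codes visible on this
# page only; the authoritative codebook may be larger.
ALLOWED = {
    "responsibility": {"developer", "distributed", "user", "company", "ai_itself"},
    "reasoning": {"deontological", "consequentialist", "mixed"},
    "policy": {"none", "industry_self"},
    "emotion": {"fear", "approval", "mixed", "indifference", "resignation"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: codes}, rejecting malformed entries."""
    coded = {}
    for entry in json.loads(raw):
        cid = entry.get("id")
        if not cid:
            raise ValueError(f"entry missing comment id: {entry}")
        for dim, allowed in ALLOWED.items():
            value = entry.get(dim)
            if value not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        coded[cid] = {dim: entry[dim] for dim in ALLOWED}
    return coded

raw = '[{"id":"rdc_lqrmqdy","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"resignation"}]'
codes = parse_codes(raw)
print(codes["rdc_lqrmqdy"]["emotion"])  # resignation
```

Validating against a fixed value set catches the common failure mode of batch coding, where the model invents an off-codebook label for one entry and silently corrupts the dataset.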