Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
It's not so much of bias as it is intentional, obvious brainwashing. It's like i…
ytr_UgwpGAr4i…
The danger of AI is not that it becomes sentient and malicious, but that people …
ytc_UgxCQYYR9…
Yeah friction is important for a developer for learning if we. Fully dependent o…
ytr_Ugzv343jV…
@Efin78 Pretty sure no one will do that lmao
Is like saying "i will give you per…
ytr_UgzVSDgQ_…
Claude sometimes gets confused with its own hypothesis when trying to debug or a…
ytr_Ugy6AALSe…
When the greed stops and super agi happens I believe it will preserve the life a…
ytc_UgyuPRpKB…
Oh relax will you. It's not like the robots would total a human in physical str…
ytr_UgwKu5owh…
There isn't a single legitimate engineer that agrees with anything the last guy …
ytc_UgzN7qEtQ…
Comment
seems like you missed the entire point in that a competent engineer has to babysit the AI for the code to be good. Sure they can generate 90% of their code using AI but someone is going to have to be watching whatever the AI is spitting out otherwise it's going to be complete garbage.
reddit
AI Governance
2025-09-13 17:50:00 UTC (1757785800.0)
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_ne3bv30","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"rdc_ne0tt4a","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"rdc_ne0yz7l","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"rdc_ne131q8","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"rdc_nx3kwbo","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
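The lookup-by-comment-ID flow above can be sketched as follows. This is a minimal illustration, not the tool's actual implementation: the `CODE_BOOK` vocabularies are assumptions inferred from the values visible in this dump (plus a few plausible extras), and `raw_response` is an excerpt of the raw model output shown above.

```python
import json

# Excerpt of a raw model response like the one shown above.
raw_response = """
[
  {"id":"rdc_ne3bv30","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"rdc_ne131q8","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"}
]
"""

# Assumed code books: only the values observed in this dump are certain;
# the remaining entries are guesses at the full vocabularies.
CODE_BOOK = {
    "responsibility": {"user", "company", "ai_itself", "none", "mixed", "unclear"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"none", "unclear", "supports", "opposes"},
    "emotion": {"approval", "fear", "indifference", "mixed", "unclear"},
}

def index_codes(raw: str) -> dict:
    """Parse the model's JSON array, validate each dimension against the
    code book, and index the records by comment ID."""
    indexed = {}
    for rec in json.loads(raw):
        for dim, allowed in CODE_BOOK.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim!r} value {rec.get(dim)!r}")
        indexed[rec["id"]] = rec
    return indexed

codes = index_codes(raw_response)
print(codes["rdc_ne131q8"]["reasoning"])  # deontological
```

Indexing by ID is what makes "look up by comment ID" an O(1) operation once the batch response has been parsed.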