Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
thats fair and i think for current code review its totally manageable. my concern is more about the trajectory tho. right now you can review every line because the output is code you understand. but anthropic is specifically talking about AI writing code for training future models, not regular software. that research code is gonna get increasingly exotic and harder for humans to meaningfully review even if they read every line. the 90/10 split works today because the 10% you write gives you deep understanding of whats happening. question is whether that holds when the AI is optimizing things humans didnt design in the first place
reddit
AI Moral Status
1773273665.0
♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[{"id":"rdc_oa2bwfk","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"rdc_o9zl5cw","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_o9vtexd","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"rdc_o9wluvn","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"rdc_o9y8h8g","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]
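A response like the one above can be consumed by parsing the batch as a single JSON array and indexing the rows by comment ID. The sketch below is a minimal, hypothetical helper, not the tool's actual pipeline: the dimension names are taken from the response shown, but the sets of allowed values are illustrative guesses, with anything unrecognized coerced to `"unclear"` (matching the fallback visible in the Coding Result table).

```python
import json

# Illustrative value sets inferred from the sample response; the real
# codebook may differ. Unknown values fall back to "unclear".
ALLOWED = {
    "responsibility": {"none", "company", "user", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "virtue", "unclear"},
    "policy": {"none", "regulate", "unclear"},
    "emotion": {"fear", "approval", "indifference", "unclear"},
}

def parse_batch(raw: str) -> dict:
    """Parse one raw batch response into {comment_id: codes}."""
    codes_by_id = {}
    for row in json.loads(raw):
        cid = row.pop("id")
        for dim, value in row.items():
            # Coerce out-of-codebook labels instead of crashing the run.
            if value not in ALLOWED.get(dim, set()):
                row[dim] = "unclear"
        codes_by_id[cid] = row
    return codes_by_id
```

Usage mirrors the lookup-by-ID view: `parse_batch(raw)["rdc_oa2bwfk"]` returns that comment's four coded dimensions as a plain dict.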