Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- ytc_UgwBqtc0x…: Lawyers and attorney should be the first to be replaced and also the easiest as …
- ytc_UgzhKfiwk…: People calling you a luddite: "ChatGPT what is a rugpull?" Really enjoyed the vi…
- ytc_UgzxeXr8Z…: The robot with the long hair is clearly due for reprogramming and a garment upgr…
- ytc_UgwbT-J_c…: AI doesn’t have a choice. Humans are programming vast neural networks to mimic h…
- ytc_Ugy00umQe…: I tried it with my meta ai but it din’t work for me meta showed animal kids inst…
- ytc_UgxdhtKaW…: and then adobe make photoshop beta that has generative fill which is ai funciona…
- ytc_Ugw2Kl3lz…: I've actually been quite optimistic about AI, but I think Max and Yoshua had str…
- ytc_UgztbTECI…: We dont even have an understanding of what consciousness is so how could we know…
Comment
I'm a software engineer who works with AI daily, and ~90% of my code is written by AI.
That does not imply "minimal human oversight". I read every single line of the code before putting it out for review (often requesting many changes, because the AI isn't as smart as a human), and then another engineer has to review the code before it lands. Another AI also reads over the code and notes anything suspicious, and then our automated test system gets its hands on it to make sure it hasn't broken anything. While I'm certainly not suggesting that an ASI couldn't figure out a way to sneak malicious code in, it would have to be much smarter than current AI systems to do so.
I don't know specifically about Anthropic's engineering culture, but this is standard in the industry so I'd imagine theirs is similar. I highly doubt they are vibe coding Claude.
reddit · AI Moral Status · timestamp: 1773256268.0 · ♥ 36
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[{"id":"rdc_oa2bwfk","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"rdc_o9zl5cw","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_o9vtexd","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"rdc_o9wluvn","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"rdc_o9y8h8g","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]
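A raw response like the one above is a JSON array of per-comment codes. A minimal sketch of how such a response could be parsed and sanity-checked in Python follows; the `ALLOWED` label sets are inferred only from the values visible in this response, and the real coding scheme may define more labels, so treat them as placeholders:

```python
import json

# A small repaired sample of the raw response above. Note the original output
# terminated the array with a stray ")" where a "]" belongs; a validator like
# this is one way to catch such malformed responses before coding results are
# stored.
raw = '''[
{"id":"rdc_oa2bwfk","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"rdc_o9zl5cw","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]'''

# Hypothetical allowed-value sets, inferred from the labels seen in this
# response only; the actual codebook may be larger.
ALLOWED = {
    "responsibility": {"none", "company", "ai_itself", "user", "unclear"},
    "reasoning": {"unclear", "consequentialist", "virtue"},
    "policy": {"none", "regulate", "unclear"},
    "emotion": {"fear", "indifference", "approval", "unclear"},
}

def validate(codes):
    """Return (id, dimension) pairs whose coded value is outside ALLOWED."""
    problems = []
    for entry in codes:
        for dim, allowed in ALLOWED.items():
            if entry.get(dim) not in allowed:
                problems.append((entry.get("id"), dim))
    return problems

codes = json.loads(raw)
print(validate(codes))  # an empty list means every value is in the allowed set
```

If the model ever emits an unknown label (or truncates the array), `json.loads` or `validate` flags it, which is when a coder would fall back to values like the "unclear" rows in the Coding Result table above.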