Raw LLM Responses
Inspect the exact model output for any coded comment, looked up by comment ID.
Random samples (truncated previews):

- `ytc_Ugzg1Qk8K…`: AI is gonna be integrated in our consciousness AI human hybrids is are fate like…
- `rdc_f1vgvqa`: Silly people. Don't realise that it gives them motive only to keep it behind a v…
- `ytc_Ugz1LnxQ6…`: Maybe this will lead to everything being done by AI and the world becomes FREE!…
- `ytc_UgzL75LxR…`: "Wont keep your art safe" When are these folks gonna look in the mirror and go …
- `ytc_Ugz_QJUhi…`: Anyone who thinks that AI is taking more than a tiny fraction of blue collar job…
- `ytr_Ugxc5m_Jq…`: I'm in grad school now; never used AI, never will. If AI has no haters left then…
- `ytr_UgzvIhQ-w…`: Hey @shadowninja3492, thanks for commenting! It's like taking a regular human to…
- `ytc_Ugy4CveKb…`: Guys, guys, guys, when you get any "im not a robot"CAPTCHA, the companies use th…
Comment
Just to clarify this - no decent software engineer thinks this. Because any software engineer worth their salt would at least look at AI-generated code or get a basic understanding of how AI works before coming to a conclusion like that. The people actually saying this are either incompetent software engineers, or managers and execs who are trying to threaten workers and/or tell shareholders that the company's profits are going to go way up as their costs drop by firing programmers.
The truth is that AI as we know it isn't ever going to cross the finish line. AI fundamentally doesn't have the ability to understand what it's writing. It's just imitating code that it has seen, without knowing the why or how of it all. It's inherently very error-prone and unable to handle larger-scale tasks. Once you get past basic boilerplate code, AI's incompetence is undeniable.
Source: youtube, 2025-05-19T14:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
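A coded row like the one above can be sanity-checked against the label vocabulary before it is stored. The allowed sets below are inferred only from the values visible on this page (the full codebook may permit more), so treat them as placeholders:

```python
# Label vocabularies inferred from the values observed on this page;
# the actual codebook may define additional categories.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none"},
    "emotion": {"indifference", "approval", "resignation", "outrage", "fear"},
}

def invalid_dimensions(row: dict) -> set:
    """Return the dimensions whose value falls outside the known vocabulary."""
    return {dim for dim, allowed in ALLOWED.items() if row.get(dim) not in allowed}

# The coding result shown in the table above passes cleanly.
row = {"responsibility": "developer", "reasoning": "virtue",
       "policy": "none", "emotion": "outrage"}
print(invalid_dimensions(row))  # set()
```

Rows with an out-of-vocabulary value (a common LLM failure mode) come back with the offending dimensions named, which makes them easy to flag for re-coding.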
Raw LLM Response
```json
[
  {"id":"ytc_UgwSsAqOMzbCzLQT8OV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwJGxfHE60KhBeoRKV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugznumg3TeNgF7TQXT54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugz8bLkhMpSNkV8ViCl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwJT-Msyep1Nyhbkf14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugw6oqJ0yV9BbyiO6mJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxkRLjLyPtrFbsGrRV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugx7aaZCFltqUuuLSGh4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwypLWw6YQxrzLvEoN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxJOfzhpitFMZ8hiqV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
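Because the model returns one JSON array per batch, looking up a single coded comment reduces to parsing the array and indexing by `id`. A minimal sketch, using two entries copied verbatim from the raw response above (a real pipeline would load the full model output):

```python
import json

# Two rows copied verbatim from the raw LLM response shown above.
raw = '''
[
  {"id":"ytc_UgwSsAqOMzbCzLQT8OV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx7aaZCFltqUuuLSGh4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
'''

# Index the coded rows by comment ID for O(1) lookup.
codes_by_id = {row["id"]: row for row in json.loads(raw)}

row = codes_by_id["ytc_Ugx7aaZCFltqUuuLSGh4AaABAg"]
print(row["reasoning"], row["emotion"])  # virtue outrage
```

This is the shape of the "look up by comment ID" operation: the ID is the join key between the sample browser, the coding-result table, and the raw model output.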