Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
I love Bernie Sanders. But it is pathetic that he is still one of the Left's and…
ytc_UgxkPlIvK…
I think the AI means mother's love cannot be bought by money. Mother's love is i…
ytc_UgyC9CUA_…
I saw the movie Robocop. Terminator, I didn't think robots would have exis…
ytc_UgzcgHYCo…
AI isn't unbiased. It has to be programmed by humans, so it's biases come from i…
ytr_Ugxq844fr…
What a surprise, AI trained by the leading corporations that love destroying for…
ytc_UgzW44nXy…
We're not taxing companies as much, with the excuse of it hurts companies and jo…
ytc_UgyyIhjOT…
Hot take but there is definitely a difference between good ai art and bad ai art…
ytc_Ugztan2hS…
Even if the AI is significantly worse at the job than a human, it could still be…
rdc_mxyda0k
Comment
Same issue with self-driving cars. The major problem is accountability, combined with an 80–90% success rate on even the most basic tasks. The real root of the issue is that you don’t know which 10–20% will fail. And when it does fail, it becomes increasingly expensive and difficult to find the cause.
The amount of code an AI can churn out is impressive, but it operates on a scale that is utterly unmaintainable by humans.
The interesting part is that while a self-driving car will likely cause fewer accidents overall, when it inevitably misreads something and runs over a two-year-old toddler, you’ll be hard-pressed to find someone willing to claim responsibility. Is it the passenger in the car? The manufacturer? The software developer? Or will the state ultimately handle the financial settlement? Who goes to jail?
Now think about software written in the same way. If something critical fails – financial systems, medical devices, infrastructure – who the hell is actually accountable when no human fully understands the system that was produced?
youtube
AI Jobs
2026-03-08T16:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzmU_IztqW-0pATFbp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxi7ozN3A7zzNkzTnp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyNl2H9Ar0rOu1v6ZJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx56IHMcwIrBYLl99J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwhSfJLSFFL5gIcxzB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugy9lpBoO8_TiJFjcr54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyE28uGHzzYl4rEBFt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxeN-xluOs7VdnTbTZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgyuNqFqmmcAvmlbvH94AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzKnCWoZw7HTe087OB4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}
]
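A raw response like the one above can be checked before the codes are stored. The sketch below parses the JSON array and validates each record against the label vocabularies that appear in this sample; the actual codebook may define additional values beyond those inferred here, so `SCHEMA` is an assumption, not the full scheme.

```python
import json

# Allowed label sets, inferred only from the sample response above;
# the real codebook may include values not seen in this batch.
SCHEMA = {
    "responsibility": {"none", "company", "ai_itself", "user"},
    "reasoning": {"unclear", "deontological", "consequentialist", "mixed", "virtue"},
    "policy": {"none", "liability", "regulate", "unclear", "industry_self"},
    "emotion": {"indifference", "outrage", "fear", "mixed", "approval", "resignation"},
}

def validate_batch(raw: str) -> list[str]:
    """Parse a raw LLM response and return a list of validation errors.

    An empty list means every record parsed cleanly and used only
    labels present in SCHEMA.
    """
    try:
        records = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"not valid JSON: {exc}"]

    errors = []
    for i, rec in enumerate(records):
        if "id" not in rec:
            errors.append(f"record {i}: missing 'id'")
        for dim, allowed in SCHEMA.items():
            value = rec.get(dim)
            if value not in allowed:
                errors.append(f"record {i} ({rec.get('id', '?')}): bad {dim}={value!r}")
    return errors
```

Running this over a response flags both malformed JSON and out-of-vocabulary labels, which is useful when the model occasionally invents a code that was never in the prompt.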