Raw LLM Responses
Inspect the exact model output for any coded comment: look up a record directly by its comment ID, or click one of the random samples below to open it. A minimal lookup sketch follows the sample list.
Random samples (click to inspect):

- "And even then, the change to Industrial required true, consistent, precise autom…" (`ytr_Ugw2GC9a2…`)
- "The statement that programmers using AI are 35% more productive is exagerated an…" (`ytc_UgwUyrdzc…`)
- "Not really, not. If you go by purely AI generated content, then the programs wil…" (`ytr_UgxYAdO-n…`)
- "🤖Anybody else find it ironic how much ai generated b-roll was used to create an …" (`ytc_UgwAIXo12…`)
- "Man for $1600 i expect this robot too have gyroscopic cup holders so the damn cu…" (`ytc_UgzTBRX22…`)
- "Only reason CEO and stuff like that isn't on the list is because they're not gon…" (`ytc_Ugxltd1Wm…`)
- "It's the point at which AI has surpassed human control, essentially. So, yeah, t…" (`ytr_UgwLRB1fF…`)
- "Holding myself accountable (2/24/2025 1:51:55): Today, I committed to learning…" (`ytc_Ugz6LBoRd…`)
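For anyone reproducing the lookup outside this page, here is a minimal sketch of what "look up by comment ID" does, assuming the raw LLM output is stored as a JSON array of records keyed by `id`. The file path and helper names are illustrative, not the tool's actual API:

```python
import json

# Hypothetical path; the real store may differ.
RAW_RESPONSES_PATH = "raw_llm_responses.json"

def load_records(path: str) -> dict[str, dict]:
    """Index the raw LLM output (a JSON array of coded records) by comment id."""
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    return {rec["id"]: rec for rec in records}

def lookup(records: dict[str, dict], comment_id: str) -> dict | None:
    """Return the coded record for a comment id, or None if it was never coded."""
    return records.get(comment_id)

records = load_records(RAW_RESPONSES_PATH)
print(lookup(records, "ytc_Ugxcp38aM6btPES_LwF4AaABAg"))
# -> {"id": "ytc_Ugxcp38...", "responsibility": "developer",
#     "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
```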
Comment
> Why was the self driving car tailgating in traffic so that it didn't have time to brake?
> "Given an incompetent programmer made unsafe code, what should this same incompetent programmer program it to do after the unsafe action causes a no-win scenario?"
> That is an invalid question.
> The answer to self driving is the same, in this and all scenarios.
> The car should slow if it is unable to stop for the predicable failure conditions (loose loads falling off, kids running out between parked cars, and the like).
> And, in any scenario, the car should stop within its lane.
> 99.9% of times, that's the safest. "But what about that 0.1% of the time?" doesn't matter. The human got it wrong then too, so just brake in your lane, and you eliminate all the moral questions, and overall, save lives.
> There should be no choice.
Source: youtube · Category: AI Harm Incident · Timestamp: 2023-10-11T20:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
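This table is presumably a straight rendering of the matching record in the raw response below, plus the coding timestamp. A minimal sketch of that rendering, assuming the record is a plain dict and the timestamp is tracked separately (both names here are illustrative):

```python
from datetime import datetime

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def render_coding_result(rec: dict, coded_at: datetime) -> str:
    """Render one coded record as the markdown Dimension/Value table shown above."""
    rows = ["| Dimension | Value |", "|---|---|"]
    rows += [f"| {dim.capitalize()} | {rec[dim]} |" for dim in DIMENSIONS]
    rows.append(f"| Coded at | {coded_at.isoformat()} |")
    return "\n".join(rows)

rec = {"id": "ytc_Ugxcp38aM6btPES_LwF4AaABAg", "responsibility": "developer",
       "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
print(render_coding_result(rec, datetime.now()))
```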
Raw LLM Response
[{"id":"ytc_UgzyoQHfkvKymBmesal4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxjDv4Z1CBjO3WJHB94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyTShSLnJ9cwL-Lbw54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugw2KRypsW4jIcRnBj14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw5i-H8AgNVVhk3L5Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyLQD8OSQSPK_gIU7Z4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxcoaaNEw2BPDxcR8B4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_Ugy1OKJlc-JkmZ4FchN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzCOQgM8jWOa4MaGkB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugxcp38aM6btPES_LwF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}]