Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- Because these kinds of organizations are more about power than they are about ac… (rdc_esq9xyo)
- 43:49 lmao, i was hoping to learn something about AI, turns out it's still west… (ytc_UgwWEfQyZ…)
- Bro my mom came up to me showing me “art” her friend made. Not only was it obvio… (ytc_Ugy-1WmMN…)
- Please i want this AI thing to win so bad and automate every single job so that … (ytc_UgxVoF0Ca…)
- @citizen_of_earth_ Naah, have been studying the relationship between human visio… (ytr_Ugylj41EC…)
- Or, how about we stop blaming AI and just get the kids off the Internet. Don't p… (ytc_Ugx4rIrl0…)
- The silence of some of the industry "giants" is baffling to me. Sure they're scr… (ytc_UgyBV37F5…)
- Mainstream media "journalists" are paid to lie and make you unintelligent. Outsi… (ytc_UgwdPzEDx…)
Comment
This is one of the biggest issues I have with autonomous cars. It cannot make safe decisions when driving in “blind” conditions. San Francisco is extremely hilly and you cannot see the right-of-way car over the crest of the hilled road. California also has roads and highways with major blind sharp curves that depending on the conditions can make the cause more likely to wreck. I’ve been in situations where until you come around that curve you have no idea that traffic has come to a dead stop for whatever reason. When any electric or self driving car senses an emergency it basically slams and locks its brakes. Especially in slick conditions you need to pump the brakes to both slow down as fast as possible while reducing the risk of hydroplaning or fishtailing as much as possible. Autopilot and autonomous vehicles aren’t capable of this. And the manufacturers with all their consultants and engineers and software developers and race car test drivers know this very well.
Source: youtube · Topic: AI Harm Incident · Posted: 2025-12-11T21:4… · ♥ 22
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugzh3kEj7uXYrS7RYGV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
  {"id":"ytc_UgyLfw0_JBMULyiTOfJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
  {"id":"ytc_UgwYO_AnycqQFCEP11h4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzHGDfK2hN_8Zn8zCl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzxDZ3mVrWP6gKRXjx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxyE0WvNDMJ3YvHlRl4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwgICezwwGPqk69zdZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugyg7zY0rhE3yI8F1K14AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxpxF9V0X7B7QHIFvZ4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgzVBesaxN3UpdEjP0V4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
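A raw response like the one above is a JSON array of coding records, one per comment, each carrying an `id` plus the four coding dimensions shown in the result table. A minimal sketch of how such a response could be parsed and validated, assuming the allowed category values are exactly the labels that appear in this document (the real codebook may define more), and with `parse_codings` as a hypothetical helper name:

```python
import json

# One record copied from the raw response above, kept as a string the way
# an LLM would return it.
raw_response = """[
  {"id": "ytc_Ugzh3kEj7uXYrS7RYGV4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "liability",
   "emotion": "indifference"}
]"""

# Allowed values per dimension, inferred from labels seen in this page;
# an assumption, not the project's actual codebook.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "developer", "user",
                       "government", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "contractualist", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference",
                "approval", "mixed", "unclear"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response and index valid coding records by comment ID.

    Records with a missing or out-of-vocabulary dimension are dropped, so a
    malformed model output cannot silently enter the coded dataset.
    """
    coded = {}
    for rec in json.loads(raw):
        if all(rec.get(dim) in values for dim, values in ALLOWED.items()):
            coded[rec["id"]] = {dim: rec[dim] for dim in ALLOWED}
    return coded

coded = parse_codings(raw_response)
```

Indexing by `id` is what makes the "Look up by comment ID" view above cheap: once parsed, each coded comment is a single dictionary access away.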