Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
The problem is that this would be difficult to implement. For example, think about the driving test you took to get your license, if you have one. There likely wasn't a point at which a self-driving car would have failed it. Where self-driving cars fail isn't the normal, almost autonomous parts of driving; it's the unexpected. In this case there was of course gross misconduct within the company, and there does need to be a system to find such blatant problems, but most problems aren't blatant: with self-learning, all of the common problems get solved very quickly and efficiently.
Let's say there were a test, though. There are three main options.
1. A review of the code. I don't know whether you have looked at the code of big systems, but this would be very difficult to implement, as the code is often very messy and long.
2. A controlled test that simulates road conditions, such as one in which the car has to brake for a pedestrian. The problem with a controlled test is that you know what will happen, so you can prepare for it. That makes it hard to fail, which would lead to almost every system passing. Put it this way: if I give you a test with as much time as you want and no limits on how you research for it, you're going to pass even if you have no clue about the subject. So a controlled test wouldn't really do much.
3. An uncontrolled test, like the driving test the average person takes. This won't really work either, because most self-driving cars don't have gaps like Uber's software does, and even if they did, the chance of one surfacing during the test is extremely low.
In conclusion, a test in controlled circumstances would just let the car pass, a review of the software, although theoretically possible, would be extremely difficult to implement, and a normal test would lead to the car passing in most circumstances. The only real solution that makes sense is a trial: you oversee how it copes in the real world and the
Source: reddit · Category: AI Harm Incident · Posted: 2019-11-09 UTC (Unix timestamp 1573259680) · ♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[{"id":"rdc_f6xvg2j","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"rdc_f6y669s","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"rdc_f6xcnzm","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"rdc_f6xim11","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"rdc_f6y4amz","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}]
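A raw response like the one above is a JSON array of per-comment rows keyed by `id`, while the Coding Result table shows a single comment's values, defaulting to "unclear". One plausible way such a pipeline extracts a comment's row, falling back to "unclear" when the JSON is malformed or the comment's ID is absent, is sketched below. This is an illustrative helper, not the tool's actual code; the function name and `DIMENSIONS` list are assumptions based on the table's column names.

```python
import json

# The four coding dimensions shown in the "Coding Result" table above.
DIMENSIONS = ["responsibility", "reasoning", "policy", "emotion"]

def parse_coding_response(raw: str, comment_id: str) -> dict:
    """Pull one comment's coded dimensions out of a raw LLM response.

    Returns "unclear" on every dimension when the JSON cannot be parsed
    or no row carries the requested ID -- one plausible explanation for
    an all-"unclear" Coding Result despite a well-formed raw response.
    (Hypothetical sketch, not the pipeline's real implementation.)
    """
    fallback = {dim: "unclear" for dim in DIMENSIONS}
    try:
        rows = json.loads(raw)
    except json.JSONDecodeError:
        return fallback
    for row in rows:
        if isinstance(row, dict) and row.get("id") == comment_id:
            return {dim: row.get(dim, "unclear") for dim in DIMENSIONS}
    return fallback

raw = ('[{"id":"rdc_f6xvg2j","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"liability",'
       '"emotion":"outrage"}]')
print(parse_coding_response(raw, "rdc_f6xvg2j")["responsibility"])  # company
print(parse_coding_response(raw, "some_other_id")["policy"])        # unclear
```

Note that a response ending in a stray `)` instead of `]` would fail `json.loads` and also yield the all-"unclear" fallback, so a strict parser makes truncated or malformed model output visible in the coded table rather than silently dropping it.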