Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The problem is that this would be difficult to implement. For example, think about the driving test you took to get your license, if you have one. There likely wasn't a point at which a self-driving car would fail it. Where self-driving cars fail isn't the normal, almost autonomous parts of driving; it's the unexpected. In this case there is, of course, gross misconduct within the company, and there does need to be a system to find these blatant problems. But most problems aren't blatant: with self-learning, all of the common problems are solved very quickly and efficiently. Let's say there was a test, though. There are three main choices.

1. A review of the code. I don't know whether you have looked at big systems code, but this would be very difficult to carry out, as the code can often be very messy and long.
2. A controlled test which simulates road conditions, such as one in which the car has to brake for a pedestrian. The problem with a controlled test is that you know what will happen, so you can prepare for it. That makes it hard to fail, so almost every system would pass. Put it this way: if I give you a test, as much time as you want, and no limits on your research, you're going to pass even if you have no clue about the subject. So a controlled test wouldn't really do much.
3. An uncontrolled test, like the driving test the average person takes. This won't really work either, because most self-driving cars don't have gaps like Uber's software does, and even if they did, the chance of one occurring during the test is extremely low.

In conclusion: a test in controlled circumstances would just lead the car to pass; a review of the software, although theoretically possible, would be extremely difficult to implement; and a normal test would lead to the car passing in most circumstances. The only real solution which makes sense would be trial. You oversee how it copes in the real world and the
reddit AI Harm Incident 1573259680.0 ♥ 4
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_f6xvg2j","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}, {"id":"rdc_f6y669s","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"unclear"}, {"id":"rdc_f6xcnzm","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"}, {"id":"rdc_f6xim11","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"rdc_f6y4amz","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"})
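One plausible reason every dimension in the coding result reads "unclear" is that the raw response above is not valid JSON: it closes with a stray `)` instead of `]`, so a strict parser would reject it. Below is a minimal sketch of a tolerant coding step that falls back to "unclear" per dimension when parsing fails; the function name `code_from_raw` and the fallback behavior are assumptions for illustration, not the pipeline's actual implementation.

```python
import json

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def code_from_raw(raw: str) -> dict:
    """Extract coding dimensions from a raw LLM response.

    Falls back to 'unclear' for every dimension if the response
    is not valid JSON (e.g. a stray ')' where ']' was expected).
    """
    try:
        records = json.loads(raw)
    except json.JSONDecodeError:
        # Malformed output: record the comment but mark all codes unclear.
        return {d: "unclear" for d in DIMENSIONS}
    first = records[0] if records else {}
    return {d: first.get(d, "unclear") for d in DIMENSIONS}

# A malformed response like the one above: ')' instead of ']'.
bad = '[{"id":"rdc_f6xvg2j","responsibility":"company"})'
print(code_from_raw(bad))  # every dimension falls back to 'unclear'
```

A stricter alternative would be to re-prompt the model on a parse failure rather than silently coding "unclear"; the fallback shown here simply makes the failure visible in the output table.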