Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Starting 10:44 -

> If the edge case is not in the simulation you have not really solved the problem.

This feels a bit short-sighted. You haven't _proved_ that you've solved the problem, but it's very possible your model has learned that something in the path of travel that is growing in apparent size as it gets closer requires the car to stop or swerve. Who cares if it's a moose?

> By some estimates, before they can truly say that they are safer than human drivers, robotaxis should expect to clock billions or even trillions of miles.

Those numbers feel big and so they sound convincing. Tesla FSD already claims 8.4 billion miles driven. Since you've shown that it hits kids anyways, maybe this estimate should have been junked instead of getting into the script.
youtube 2026-03-27T16:4…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           unclear
Emotion          indifference
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id":"ytc_UgxOo1CBHRoow8ZAZaR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgzahJqty6ReJGAyhjJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgxFT5-AAmtk3ROiJVp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxpZcA7zf8AeTWUM-d4AaABAg","responsibility":"expert","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugwr77mVIHwU5dooscV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"}
]
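The raw response is a JSON array of per-comment codings, one object per comment id. A minimal sketch of how the dimension values in the "Coding Result" table above could be recovered from it (this is an illustrative assumption about the workflow, not the project's actual code; only the JSON payload itself is taken from the record):

```python
import json

# Raw LLM response, copied verbatim from the record above.
raw = """[
 {"id":"ytc_UgxOo1CBHRoow8ZAZaR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
 {"id":"ytc_UgzahJqty6ReJGAyhjJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
 {"id":"ytc_UgxFT5-AAmtk3ROiJVp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgxpZcA7zf8AeTWUM-d4AaABAg","responsibility":"expert","reasoning":"virtue","policy":"none","emotion":"outrage"},
 {"id":"ytc_Ugwr77mVIHwU5dooscV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"}
]"""

# Index the batch by comment id, then pull one comment's coding.
by_id = {rec["id"]: rec for rec in json.loads(raw)}
coding = by_id["ytc_UgxFT5-AAmtk3ROiJVp4AaABAg"]

print(coding["responsibility"], coding["reasoning"],
      coding["policy"], coding["emotion"])
# → developer deontological unclear indifference
```

The printed values match the Coding Result table for this comment, which suggests the table is populated from the entry in the batched response whose id matches the comment being displayed.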