Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
While this is interesting, I think it leaves a bigger question unanswered. As human beings, we have to pass a driving test before we're allowed to take a vehicle onto the public highway... Equivalent tests for autonomous vehicles exist, but they vary state by state, and the oversight of these tests' effectiveness is unclear. What's even more interesting is that there does not seem to be any learning process or feedback loop requiring manufacturers of autonomous vehicles to revise their "self-driving modes" after an accident. In other words, the specific incident that Devin describes in this video clearly highlights a vulnerability in "self-driving". So where is the regulation forcing manufacturers of autonomous vehicles to modify their self-driving systems to ensure that this scenario can never repeat? Where's the FOIA request or public safety investigation to find out what measures companies like Tesla have taken in response to other accidents involving self-driving mode? That, for example, speaks to "consciousness of guilt"... But most of all: why are states allowing the manufacturers of autonomous vehicles to test their products on public roads? Think about an equivalent scenario: do you think it would be acceptable for the manufacturers of bulletproof vests to make and sell a product, claiming that it could "stop a bullet", and only revisit their design if the product fails and people die? Probably not. So why should manufacturers of self-driving vehicles be allowed to "test operate" their products on the public before passing some form of reasonable safety test? It's easy to bash Tesla... but I really think the problem here is a failure of regulators to keep up with changes in technology...
youtube AI Harm Incident 2025-08-16T19:4… ♥ 1
Coding Result
Dimension       Value
Responsibility  government
Reasoning       deontological
Policy          regulate
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgwFTfkYz-fC-KTCppl4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwJhcSugGTCjptTDjN4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugw9J4u0dlujrt4gD_N4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw25t2WfPPkmln2elp4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzJyDlDdT8YAIVa2rZ4AaABAg", "responsibility": "distributed", "reasoning": "contractualist", "policy": "industry_self", "emotion": "resignation"},
  {"id": "ytc_Ugxw40FTgi3aZLLq7v14AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgynxNiFwdnZNh__HRV4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugwl0SzoAcHqkv42tP14AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgyeRU23y7dYcXEZWcB4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "indifference"},
  {"id": "ytc_UgyZyjIKMBhYR2s2j-p4AaABAg", "responsibility": "distributed", "reasoning": "contractualist", "policy": "regulate", "emotion": "mixed"}
]
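A raw response in this shape can be cross-checked against the coding result by parsing it and indexing the entries by comment id. The sketch below is illustrative, not part of the original tooling; the field names and the two sample entries are copied from the JSON above, and the `codes` lookup is an assumed helper:

```python
import json

# Two entries copied verbatim from the raw LLM response above.
raw = '''[
  {"id": "ytc_UgwFTfkYz-fC-KTCppl4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugwl0SzoAcHqkv42tP14AaABAg", "responsibility": "government",
   "reasoning": "deontological", "policy": "regulate", "emotion": "mixed"}
]'''

# Index the coded rows by comment id for fast lookup.
codes = {row["id"]: row for row in json.loads(raw)}

# Look up the coding for the comment shown on this page.
entry = codes["ytc_Ugwl0SzoAcHqkv42tP14AaABAg"]
print(entry["responsibility"], entry["policy"], entry["emotion"])
```

For the comment displayed above, this prints `government regulate mixed`, matching the coding-result table.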