Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I do validation testing of autonomous cars. In most cases I find the software, with its better-than-human sensors and 3D modeling of its surroundings, to be a better driver than I could be on my best day. It never gets emotional about its encounters, it just keeps doing the job. But it can also get confused over simple things we would think nothing of. Ideally it should be supervised by an attentive, technically competent, well paid human in a large vehicle like a big rig. It's a great tool for humans to use to make driving safer and more efficient. It's also very expensive. I think it would be foolish of regulators to approve fully-driverless large vehicles. Even though autopilot could fly an aircraft flawlessly most of the time, I still want an experienced human pilot in the cockpit supervising the technology.
youtube AI Jobs 2025-10-19T20:1…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       mixed
Policy          industry_self
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgwcqwWgqeFGdHrwYfV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxl03K-ty9rc85LDa14AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyDK4X2xLNWeRQnCH14AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzUkg-KlXWRzCu9c3V4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwP5aVgCm8rHt82qVd4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgyqnbWBl5JESw3srDt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyF3qCdjVzO4XdsaQl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwC-lycXEmBc_jpoN54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwlQFvRczgiZKCMHYl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugxriczs2NTQcszeXPZ4AaABAg","responsibility":"government","reasoning":"virtue","policy":"none","emotion":"mixed"}
]
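The per-comment values shown in the Coding Result table are presumably obtained by parsing this JSON array and looking up the record whose `id` matches the comment. A minimal sketch of that step in Python (the `index_codes` helper is hypothetical, not part of the tool; the response is truncated to two records here for brevity):

```python
import json

# Excerpt of a raw model response: a JSON array of coded records,
# one object per comment id.
raw_response = """
[
  {"id": "ytc_UgwP5aVgCm8rHt82qVd4AaABAg", "responsibility": "distributed",
   "reasoning": "mixed", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_UgwcqwWgqeFGdHrwYfV4AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
"""

def index_codes(response_text: str) -> dict:
    """Parse the model's JSON array and index the coded records by comment id."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

codes = index_codes(raw_response)

# Look up the coding for the comment displayed above.
result = codes["ytc_UgwP5aVgCm8rHt82qVd4AaABAg"]
print(result["responsibility"], result["policy"])  # distributed industry_self
```

If the model returns malformed JSON, `json.loads` raises `json.JSONDecodeError`, which is one reason a raw-response view like this page is useful for debugging.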