Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
As a security engineer I can guarantee Aurora and automated driving has or will be compromised resulting in a total disaster. Simply crawling the DW I found multiple accounts for Aurora employees and executives. That tells me they have been compromised and they have very poor security. If they were compromised and these trucks are on the roads I can see this turning into a terrorist act situation because it takes very little work on the attacker/bad actor side to slide a root code giving the attacker(s) remote access/execution abilities then close the back door so no one notices. If Aurora plans to push this automated driving thing they better invest "$20+ million" in real cyber security and hourly source code audits. That should be a min requirement from the US government but you know the name of the game in good ole corrupt broken America. Its not what you do its who you buy and politicians are for sale!
youtube AI Jobs 2025-05-29T04:4…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       deontological
Policy          liability
Emotion         outrage
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugyu3uor8MRcGBMjFsh4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzFpJjgJQCSZQIVIUx4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgwPijGwIQf4mMEwW4F4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugxjaz8T4LgSsgP3y3l4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugx_EjQswA5Oi88vU-p4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgzsYtIYSFdRibDeS6V4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgyLQ4NV-hcyNdCW9xZ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugyg_uNLiRKLtIdVODd4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_Ugyz1qfjFywK4-wvrkt4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugz8DRJV2xUpbaAaIDh4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
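Because the raw response is a plain JSON array, it can be loaded and indexed directly when inspecting how a given comment was coded. A minimal sketch, assuming the field names shown in the response above (only two of the ten records are reproduced here for brevity):

```python
import json

# Raw model output: a JSON array of coding records, one per comment id
# (excerpt of the response shown above).
raw = '''[
  {"id": "ytc_Ugyu3uor8MRcGBMjFsh4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugz8DRJV2xUpbaAaIDh4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]'''

records = json.loads(raw)

# Index the coded dimensions by comment id for quick lookup.
codes_by_id = {r["id"]: r for r in records}

print(codes_by_id["ytc_Ugz8DRJV2xUpbaAaIDh4AaABAg"]["emotion"])  # outrage
```

The lookup for the comment displayed on this page (`ytc_Ugz8DRJV2xUpbaAaIDh4AaABAg`) matches the Coding Result table: company / deontological / liability / outrage.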