Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Same issue with self-driving cars. The major problem is accountability, combined with an 80–90% success rate on even the most basic tasks. The real root of the issue is that you don’t know which 10–20% will fail. And when it does fail, it becomes increasingly expensive and difficult to find the cause. The amount of code an AI can churn out is impressive, but it operates on a scale that is utterly unmaintainable by humans. The interesting part is that while a self-driving car will likely cause fewer accidents overall, when it inevitably misreads something and runs over a two-year-old toddler, you’ll be hard-pressed to find someone willing to claim responsibility. Is it the passenger in the car? The manufacturer? The software developer? Or will the state ultimately handle the financial settlement? Who goes to jail? Now think about software written in the same way. If something critical fails - financial systems, medical devices, infrastructure - who the hell is actually accountable when no human fully understands the system that was produced?
youtube AI Jobs 2026-03-08T16:0…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           regulate
Emotion          fear

Coded at: 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgzmU_IztqW-0pATFbp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxi7ozN3A7zzNkzTnp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyNl2H9Ar0rOu1v6ZJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx56IHMcwIrBYLl99J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwhSfJLSFFL5gIcxzB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugy9lpBoO8_TiJFjcr54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyE28uGHzzYl4rEBFt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxeN-xluOs7VdnTbTZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgyuNqFqmmcAvmlbvH94AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzKnCWoZw7HTe087OB4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"indifference"}
]
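The raw response is a JSON array with one object per coded comment. A minimal Python sketch for parsing and sanity-checking such output before it reaches the results table; the allowed values per dimension are inferred from the responses shown here, not from a definitive codebook:

```python
import json

# Allowed categories per dimension, inferred from this batch of responses;
# the real coding scheme may define additional values.
ALLOWED = {
    "responsibility": {"none", "company", "ai_itself", "user"},
    "reasoning": {"unclear", "deontological", "consequentialist", "mixed", "virtue"},
    "policy": {"none", "liability", "regulate", "unclear", "industry_self"},
    "emotion": {"indifference", "outrage", "fear", "mixed", "approval", "resignation"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse the raw LLM output and keep only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Each record must be an object with an id and only known category values.
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in values for dim, values in ALLOWED.items()):
            valid.append(rec)
    return valid

# Example using one record copied from the response above.
raw = ('[{"id":"ytc_UgyNl2H9Ar0rOu1v6ZJ4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
codes = validate_codes(raw)
print(len(codes), codes[0]["responsibility"])  # 1 ai_itself
```

Dropping malformed records rather than raising keeps a single bad line in a batch from discarding the other coded comments.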