Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I’m an embedded software engineer and it saddens me to see people rely on “artificial intelligence.” There is no such thing. All a computer knows is 0 and 1. Everything else is human programming and subject to flaws we call bugs. What they call AI is really just machine learning where the computer references a whole bunch of stored data and makes a decision. Its decisions are only as good as its programming and the amount of data it has to reference and the reliability of its sensor inputs. The human brain can infer knowledge from similar situations when encountering a new situation; AI cannot. It has to wait for that scenario to be added to its database. What if a sensor is blocked or otherwise unreliable due to minor damage or snow or dirt or oil on it? Does the onboard AI disengage and make you drive? What if the Tesla loses connection to the database in rural areas? Does the AI disengage or just keep going, unable to make vision decisions? Way too many things that can go wrong and honestly, way too many bad programmers! As a programmer, I would never put my life in the hands of a computer. And now I move whenever I’m on my bike and I see a Tesla behind me…
youtube AI Harm Incident 2022-09-07T14:2… ♥ 1
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          industry_self
Emotion         resignation
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugxw-UBLqnPAe76v1J14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxE3a5XeuEfGanrAr94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyujZLiKB7YJdmfxjp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyOFSKcBoxRtdNiuLB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgyTqbe8OWUI91hpNMZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugz1fuN24aW5un6xtFJ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgyOtVWY0qXHGSz-5Mx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"indifference"},
  {"id":"ytc_UgzBa_qhMsDtWcVViPd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"resignation"},
  {"id":"ytc_UgwLQ3uEMEvVkTDhGTN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw5lZkm_5X1nlxRWS14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"resignation"}
]
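The Coding Result shown above is the entry in this raw response whose id matches the comment (ytc_UgzBa_qhMsDtWcVViPd4AaABAg). A minimal sketch of that lookup, assuming the raw response parses as a flat JSON array with the field names seen above (the variable names are illustrative, not from the tool itself):

```python
import json

# Raw LLM response, abridged here to the one record of interest;
# in practice this would be the full JSON array shown above.
raw = '''[
  {"id": "ytc_UgzBa_qhMsDtWcVViPd4AaABAg",
   "responsibility": "developer",
   "reasoning": "deontological",
   "policy": "industry_self",
   "emotion": "resignation"}
]'''

records = json.loads(raw)

# Find the record coded for this specific comment id.
target = next(r for r in records if r["id"] == "ytc_UgzBa_qhMsDtWcVViPd4AaABAg")

# Print the four coding dimensions, mirroring the table above.
for dim in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dim}: {target[dim]}")
```

Running this reproduces the dimension/value pairs of the Coding Result (responsibility: developer, reasoning: deontological, policy: industry_self, emotion: resignation).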