Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I am extremely disappointed in this video. It posits incorrect information. It repeatedly claims that the decision-making in modern artificial intelligence (AI) systems (e.g. machine learning (ML)) is "programmed" (chosen) by a programmer/person. This is absolutely false! All modern AI are mostly based on machine-learning neural networks, and in such systems the knowledge is NOT "programmed" (decided or defined) by a programmer/person. The programmer uses thousands (or millions) of training situations/events (you can think of these rather simply, as a simulated situation) -- whereby the ML system is given a situation (the state of the world on which it must take some action/decision), and the desired decision/action it must take. This is a single "training" event, done to train the ML system. This type of training is repeated thousands/millions, even billions of times, to give the ML system an "intuitive" understanding of how to act/decide in/across many similar situations. If the designers want the ML system to behave as people would, they would make each training event - or specifically the "decision/action" in each training event - to be what a real person would do. Thus the ML system is in-effect being trained on the behavior of actual humans, and it would intuit how a large numbers would react. It would learn to do what humans would do (and would want it to do). It would learn to mimic humans in similar (car accident) situations. This is in fact how ML (artificial intelligence) systems of today are trained. The "programmer" is not teaching the ML system on what decision to make. The programmer is given a "training dataset" which contains all of the 1000s/millions/billions of "events", and they use the dataset to train (teach) the ML/AI system in how people want it to behave.
youtube AI Harm Incident 2021-11-29T17:4…
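The supervised-training process the comment above describes (repeated "training events" pairing a situation with the decision a human would make) can be sketched with a tiny classifier. This is a minimal illustrative sketch, not code from any real system; the feature layout and toy dataset are invented for the example.

```python
# Minimal sketch of the training loop the comment describes: each
# "training event" is a (situation, human_decision) pair, and repeating
# the update over many events fits the model toward human behavior.
# A simple perceptron stands in for the neural network; all names and
# data here are illustrative assumptions.

def train(events, epochs=100, lr=0.1):
    """Fit a tiny linear classifier on (situation, human_decision) pairs."""
    n = len(events[0][0])
    w = [0.0] * n  # learned weights start empty: nothing is "programmed"
    b = 0.0
    for _ in range(epochs):
        for situation, decision in events:   # one "training event"
            score = sum(wi * xi for wi, xi in zip(w, situation)) + b
            pred = 1 if score > 0 else 0
            err = decision - pred            # nudge toward the human choice
            w = [wi + lr * err * xi for wi, xi in zip(w, situation)]
            b += lr * err
    return w, b

# Toy "dataset": brake (1) when the first feature (obstacle proximity) is
# high, mirroring how each event records what a real person would do.
events = [([1.0, 0.2], 1), ([0.9, 0.5], 1), ([0.1, 0.8], 0), ([0.0, 0.3], 0)]
w, b = train(events)
```

The point the sketch makes is the commenter's: the programmer supplies the dataset and the training procedure, not the individual decisions, which emerge from the examples.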
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          liability
Emotion         outrage
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_Ugz3NDLJm5vOL8_5Ki14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugzzec3Twn63agGPyDB4AaABAg","responsibility":"user","reasoning":"contractualist","policy":"industry_self","emotion":"approval"}, {"id":"ytc_Ugz8wJCpFoQ2L1TPwT94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxDg--Hfm2lG0jR6Ut4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgxWoJcDFo_ekiyvEmt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugx8hBTPSf8XBnRxR9t4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"ban","emotion":"outrage"}, {"id":"ytc_Ugw4jM93_9cAtGe9wgN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgwCoTNgNzS8ucWLuet4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgyKUDGVaTLJ7c09rdd4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"approval"}, {"id":"ytc_UgxIAyCois5Y25HZHYZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"} ]