Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
This is so annoying, it has nothing to do with AI bros. It's *the government* do…
rdc_ntlqxx9
I just genuinely think current AI sucks at creativity and detail. It used to be …
ytc_Ugzxk6Q0o…
Yes, I agree with that, and also practice it. Why be polite to AI? There could b…
ytc_Ugy5wwYcy…
Tax on robot use is very bad idea.
Instead there must be tax on corporate profi…
ytc_UgyRbcjyv…
Sorry, but I'm not buying all this AI automation. At some point there is going t…
ytc_UgwIxUzn1…
Investors think Ai is the future ... They are pulling their money out of non Ai …
ytc_Ugxov7_kd…
This is absolute nonsense. AI is not, I say again, it not sentient!!! It is crea…
ytc_Ugzi5Mjjh…
I have an idea. We just hide as much water mark (handwriting) as we can in vario…
ytc_Ugw-q1PWj…
Comment
I am extremely disappointed in this video. It posits incorrect information. It repeatedly claims that the decision-making in modern artificial intelligence (AI) systems (e.g. machine learning (ML)) is "programmed" (chosen) by a programmer/person. This is absolutely false!
All modern AI are mostly based on machine-learning neural networks, and in such systems the knowledge is NOT "programmed" (decided or defined) by a programmer/person.
The programmer uses thousands (or millions) of training situations/events (you can think of these rather simply, as a simulated situation) -- whereby the ML system is given a situation (the state of the world on which it must take some action/decision), and the desired decision/action it must take. This is a single "training" event, done to train the ML system. This type of training is repeated thousands/millions, even billions of times, to give the ML system an "intuitive" understanding of how to act/decide in/across many similar situations.
If the designers want the ML system to behave as people would, they would make each training event - or specifically the "decision/action" in each training event - to be what a real person would do. Thus the ML system is in-effect being trained on the behavior of actual humans, and it would intuit how a large numbers would react. It would learn to do what humans would do (and would want it to do). It would learn to mimic humans in similar (car accident) situations.
This is in fact how ML (artificial intelligence) systems of today are trained. The "programmer" is not teaching the ML system on what decision to make. The programmer is given a "training dataset" which contains all of the 1000s/millions/billions of "events", and they use the dataset to train (teach) the ML/AI system in how people want it to behave.
youtube · AI Harm Incident · 2021-11-29T17:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugz3NDLJm5vOL8_5Ki14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzzec3Twn63agGPyDB4AaABAg","responsibility":"user","reasoning":"contractualist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugz8wJCpFoQ2L1TPwT94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxDg--Hfm2lG0jR6Ut4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxWoJcDFo_ekiyvEmt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx8hBTPSf8XBnRxR9t4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugw4jM93_9cAtGe9wgN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwCoTNgNzS8ucWLuet4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyKUDGVaTLJ7c09rdd4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxIAyCois5Y25HZHYZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
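The raw response above is a JSON array of per-comment codes across four dimensions. A minimal sketch of how such a batch might be parsed and validated before storage — the allowed values below are only those visible in the samples on this page, not a definitive codebook, and the function name is hypothetical:

```python
import json

# Allowed dimension values inferred from the coded samples shown here;
# the actual coding scheme may define additional categories.
ALLOWED = {
    "responsibility": {"developer", "company", "user", "ai_itself", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "contractualist", "unclear"},
    "policy": {"liability", "regulate", "ban", "industry_self", "none"},
    "emotion": {"outrage", "approval", "indifference", "mixed"},
}

def parse_coded_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response (a JSON array of coded comments)
    and reject any record with an out-of-schema dimension value."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: invalid {dim}={rec.get(dim)!r}")
    return records

# Example with one record shaped like the response above (id is made up):
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"deontological","policy":"liability","emotion":"outrage"}]')
batch = parse_coded_batch(raw)
```

Validating at ingest time keeps typos or hallucinated category labels in the model output from silently entering the coded dataset.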