Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect):

- ytc_Ugx1rNAbt…: "Not using Juniors is actually creating a new market. After college, they will th…"
- rdc_kol0z7r: "Anyone who believes a model is going to win the AI race is delusional. Models wi…"
- ytc_Ugy7c5m40…: "Your Team needs to introspect how did they went from this to complete crap opini…"
- ytc_Ugy2Z0e_V…: "Oh so demons are going to use AI to bring forth the Antichrist and the apocalyps…"
- ytc_UgwgpmzHN…: "Yeah. Here's the thing... ChatGPT and others like it? They're your phone's predi…"
- ytc_UgzfjhIk1…: "My opinion: AI is a tool, like websites for a report. It can be used, but seriou…"
- ytc_UgzbJVDWL…: "People are really stupid enough to try using Ai for millitary purposes? Anyone e…"
- ytc_UgzrpLgSf…: "18:00 I legitimately was crushed when I first thought that ai will take over art…"
Comment
"The autopilot turns off one second before impact, who is the manslaughter charge going to stick to"
If I placed a landmine, did I kill the person who stepped on it?
The major difference between a human and an AI is that humans (usually) have reasoning. AI can "reason" to a certain extent, but in the end it cannot solve a situation it has never experienced before.
A human who sees two low lights in the distance coming toward them much too quickly and in an odd way will understand that their eyes are being fooled by appearances, and will spend extra time figuring out what those lights are.
And a human will also see that those lights are passing real objects nearby that they themselves pass mere seconds later.
And a human will understand the subtle differences that make the vehicle in front of them NOT A CAR, and will then be able to reason that it is something that needs to be re-evaluated.
Humans also have exceptional spatial reasoning. Where an AI that gets some number crunching wrong gets to try a billion more times, a human who fails at spatial reasoning usually ends up dead.
For instance, an AI would happily drive off the edge of a fallen bridge because it cannot SEE a problem ahead. A human will not, because they can SEE the LACK of something ahead, namely a bearing surface to ride on, and reason that the car will fall from the bridge.
Humans are well suited to connecting unlikely dots. If it looks like a car, walks like a duck, quacks like a duck, and flies away like a duck, then it's most likely a duck!
An AI with radar will see that whatever is ahead is closing in fast but looks like a car. It doesn't perceive that it quacks, walks, and flies away.
On the flip side, though, humans are also well known to get things wrong and overestimate their abilities. An AI is well suited to calculating the required stopping distance to within one foot.
Humans, by contrast, think they can come to a stop in time no matter how fast they are approaching something, which usually ends with a sudden, forceful stop.
Platform: youtube
Category: AI Harm Incident
Timestamp: 2022-09-30T02:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyD01XuCc3TMxZvTrl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwEE3H8wEeY7SJ6C654AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzAa2OAHEIeT6tBx_N4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwmXleybRBST2NUUDx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgztsG6Eu386q_0W8qp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyNaqX4kEjDzeQAomB4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgzvRSZ5UMTp5W4I5qR4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzf5TmnZOvSJgMtxGh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugzp0vvrjwNT-jYIzHJ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugyd94zeSEsiHzQrWaJ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"outrage"}
]
```
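A raw batch response like the one above can be parsed into a lookup table keyed by comment ID, which is what the "Look up by comment ID" view needs. The following is a minimal sketch: the function name `parse_coding_response` is hypothetical, and the allowed value sets are inferred only from the values visible in this sample, not from a definitive codebook.

```python
import json

# Allowed values per dimension, inferred from the visible responses;
# the real codebook may define additional values not seen in this sample.
CODEBOOK = {
    "responsibility": {"company", "user", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "liability", "regulate", "ban", "industry_self"},
    "emotion": {"approval", "outrage", "fear", "resignation", "indifference", "mixed"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw batch response into {comment_id: {dimension: value}}.

    Raises ValueError if a record carries a value outside the inferred codebook,
    which is how malformed LLM output would surface during inspection.
    """
    coded = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        dims = {}
        for dim, allowed in CODEBOOK.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
            dims[dim] = value
        coded[cid] = dims
    return coded
```

With the parsed table in hand, looking up a coded comment is a plain dict access on its ID, e.g. `coded["ytc_UgyNaqX4kEjDzeQAomB4AaABAg"]` returns the four coded dimensions for that comment.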