Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "I think people are just living their lives while AI development has become anoth…" (ytc_UgzOUEZvn…)
- "Nobody was forcing AI to lie about the founding fathers.. That was a hallucinati…" (ytc_Ugxh-EBM8…)
- "Nah man, manual labour is out too. Robots are physically good enough for any man…" (rdc_jf707i7)
- "Not being anywhere close to a computer engineer myself, in my interactions with …" (ytc_UgwYp-S-J…)
- "Do you even know how AI images are generated and from where they get the data to…" (ytr_UgxHT565w…)
- "I TOTALLY AGREED. GFAI (1:09:17) you mentioned an extremely important point whic…" (ytc_UgwOqyL2I…)
- "One thing that wasn’t mentioned and it’s important, is where Eric Schmidt positi…" (ytc_UgwD61fBA…)
- "They are definitely not programming themselves. Just shows how little people in …" (ytr_UgzwO74sC…)
Comment
The research is clear - humans are terrible at staying alert when they are merely monitoring a process. Concentration is far better when we are actively operating the process.
At the current level of development - where assisted driving systems are still woefully unreliable and dependent on human override - the whole concept is highly questionable.
Especially when it is being irresponsibly overhyped as a "Full Self Driving Autopilot" system. Why Tesla customers are prepared to pay a jaw-dropping $15,000 to be Beta testers for an unfinished product defeats me. It's a triumph of unethical marketing over normal engineering prudence and common-sense.
youtube · AI Harm Incident · 2022-10-02T14:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgyD01XuCc3TMxZvTrl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwEE3H8wEeY7SJ6C654AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzAa2OAHEIeT6tBx_N4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwmXleybRBST2NUUDx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgztsG6Eu386q_0W8qp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyNaqX4kEjDzeQAomB4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgzvRSZ5UMTp5W4I5qR4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzf5TmnZOvSJgMtxGh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugzp0vvrjwNT-jYIzHJ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugyd94zeSEsiHzQrWaJ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"outrage"}
]
```
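A response in this shape can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal example, assuming the allowed values per dimension are those visible in the sample above (the real codebook may define more), and the function name `parse_coding_response` is hypothetical:

```python
import json

# Allowed codes per dimension, inferred from the sample response above;
# the actual codebook may include additional values.
CODEBOOK = {
    "responsibility": {"company", "user", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "liability", "regulate", "ban", "industry_self"},
    "emotion": {"approval", "outrage", "fear", "resignation", "indifference", "mixed"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed rows."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # Every row must carry the comment ID it codes.
        if "id" not in row:
            continue
        # Every dimension must be present with a value from the codebook.
        if all(row.get(dim) in codes for dim, codes in CODEBOOK.items()):
            valid.append(row)
    return valid
```

Rows with an unknown code (e.g. a policy value outside the codebook) are silently dropped here; a production pipeline might instead log them for re-coding.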