Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
2:34 For any techbros and art vultures who still don't understand why AI generat…
ytc_Ugx8esUJr…
Can we pause for a moment and reflect on the statement of "having FAITH in Ilya"…
ytc_UgyPRjann…
I don't think ai can ever be the same as human creativity as it hasnt got the sa…
ytc_UgxOOOrh5…
I can see ai being useful in art to do sketches and then pain over it but not th…
ytc_UgwaS8WnO…
So we're just gonna begin with 2 strawman arguments? Or at best, 2 arguments th…
ytc_UgwxciWRn…
@Londonlife-q9y i'll say it again, ai hallucinates frequently. if you get inform…
ytr_UgxpciBN-…
Nonsense. Mankind can not stop Progress. It would be like trying to hold a river…
ytc_UgwFNAzmN…
While i totally think you're a plug, i agree, ai will be bad as it develops.…
ytc_UgwwiLHek…
Comment
Another thing that was not brought up in this video but is important to consider: Tesla's reported comparison of AI and human driving safety is heavily biased because:
- When Autopilot causes a dangerous situation, the driver often takes over, but by then it is too late to prevent the crash. This is reported as human error even though Autopilot created the situation.
- Autopilot disengages shortly before a crash, for liability reasons, which at least some of the time leads to the crash being reported as the human's fault despite, again, the AI being in control until just before collision.
youtube
AI Harm Incident
2022-09-04T01:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgwkNLEsJJlkcW95_1x4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxZCzeLWRYK1ZeM1sp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwOLC4MeOt0846tv6p4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyReFE13Esonf8RuE94AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugw3lyrLvRf2V9PiYUt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzbKsiufsyPpWRJc0l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwKWUxBNKb_E0-fvF14AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy3azlfIiuaIk7mJUV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugwixm2Q69mBUuzA1oR4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwYH_sxlT-cR7eM9Bh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}
]
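Responses like the one above are only usable if every record matches the coding schema. A minimal validation sketch, assuming the value sets visible in the table and JSON above constitute the full codebook (the real codebook may allow more values, and the `ytc_`/`ytr_` ID prefixes are inferred from the samples shown):

```python
import json

# Allowed values per coding dimension -- inferred from the responses
# shown above; the actual codebook is an assumption here.
SCHEMA = {
    "responsibility": {"company", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"liability", "regulate", "none", "unclear"},
    "emotion": {"outrage", "approval", "indifference", "resignation", "mixed"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and reject records with
    malformed IDs or values outside the codebook."""
    records = json.loads(raw)
    for rec in records:
        # Comment IDs in the samples start with ytc_ (top-level) or ytr_ (reply).
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            raise ValueError(f"bad comment id: {rec.get('id')!r}")
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: {dim}={rec.get(dim)!r} not in codebook")
    return records

raw = (
    '[{"id":"ytc_UgwkNLEsJJlkcW95_1x4AaABAg","responsibility":"company",'
    '"reasoning":"consequentialist","policy":"liability","emotion":"outrage"}]'
)
codes = validate_codes(raw)
print(codes[0]["responsibility"])  # company
```

Rejecting a whole batch on one bad record is deliberate: a single out-of-codebook value usually means the model drifted from the prompt, so re-coding the batch is safer than silently dropping rows.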