Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
As somebody who used quite a lot AI for legal compliance checks: Andrew Yang is …
rdc_n5rm2y6
No, you're not.
Example:
[Have we, professional developers, already lost the b…
rdc_oc3bcp4
The whole fear mongering about LLMs becoming sentient is actually a form of indu…
ytr_Ugxm3F5eH…
If we as humans created AI, then that begs the question; Are we auper intelligen…
ytc_UgyYl39ar…
Let's see
- used no prds
- gave vague and too big prompts
- expected one shot …
ytc_UgxuvOikA…
13:55 and they call us arrogant and ignorant for not caring for people who "cant…
ytc_UgwEL8Yjd…
i once ask chatgpt if it was concious he said no and explained to me why he cant…
ytc_UgxadBcbe…
So wait a moment, the mother saw her kid in the car loading a gun!!!
Rest my ca…
ytc_Ugz4Y3XBY…
Comment
I think there's a gap here when we narrow down the safety problems with 'self-driving' to power in corporations, and when we assume that automated systems have no possibility of error because they can't get tired or distracted. Humans can't glitch because of an unexpected code interaction and often stop to think if solutions don't make sense. Humans also have physical bodies that interact with the actual world, giving us a lot of tools to process unfamiliar situations in an embodied sense.
A big problem with self-driving which is alluded to in your introduction but ignored in the conclusion, is that software is a terrible structure to try and encode safety into. It's incredibly abstracted and fluid, and it's also impacted by human error. You don't have to look far to find software glitches caused by spelling errors or incorrect character use, and that's especially true when you have a technology scaling up.
A better comparison than pretending that computers aren't victims of human error, I think, would be realising that every car piloted by the same software is likely to be impacted by the same human error. Thus, it's less of a reduction in error and more a question of whether you want to trade driving your own vehicle out for a set driver (with a known reputation that you can look into).
Honestly though, people need to stop imagining that computers and machines are distinct magical creatures separate from the humans that make them. Software is programmed by people, deployed by people and runs on physical machinery with distinct real world affordances. Obviously.
youtube
2026-02-11T17:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugyzb-896IknvLKXd0Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwTNrQ5vvypk0xxoiJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzIuOlJ6BMj2qMw0rJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwq8lBAqxYa4dV4paB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgyNUOUlecBZNzTSGdx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy0Y6EQ_KzGMeUQZAN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyEA9ONgqIAYH061YJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxvrHSxKVjppgFk2jh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugwuw5yjyWRpL1lro694AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgytH4OWu-84YEPmfjR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"indifference"}
]
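A raw response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal validator, assuming the four dimensions shown in the Coding Result table; the allowed values are inferred only from the responses visible here, and the full codebook may define more.

```python
import json

# Allowed code values per dimension, inferred from the visible
# responses above -- the actual codebook may be larger (assumption).
ALLOWED = {
    "responsibility": {"none", "company", "developer", "ai_itself", "distributed", "mixed"},
    "reasoning": {"consequentialist", "deontological", "mixed", "none"},
    "policy": {"none", "regulate", "liability", "ban", "mixed"},
    "emotion": {"indifference", "outrage", "approval", "fear", "mixed"},
}

def validate_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and check every record."""
    records = json.loads(raw)
    for rec in records:
        # Every record needs an id plus one value per dimension.
        missing = {"id", *ALLOWED} - rec.keys()
        if missing:
            raise ValueError(f"{rec.get('id', '?')}: missing fields {missing}")
        for dim, allowed in ALLOWED.items():
            if rec[dim] not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim} value {rec[dim]!r}")
    return records

# Hypothetical one-record response, in the same shape as the dump above.
raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"mixed","policy":"none","emotion":"outrage"}]')
records = validate_response(raw)
print(len(records))  # 1
```

Records that fail validation can then be flagged for re-coding rather than written into the results table silently.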