Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up any coded comment by its ID, or inspect one of the random samples below.
- "There are so many things you are not considering, you imagine a nightmarish chan…" (ytc_Ugz6M8EpI…)
- "You can blame people for not having self-driving cars yet. But you can also thin…" (rdc_jhsiaki)
- "@ The cyber cab that Tesla just launched is meant to be a public Uber style cab …" (ytr_UgxcJ6ksD…)
- "the problem is not with AI.. not with the tool. it is the problem with the peopl…" (ytc_UgwjXl1Jp…)
- "Nice video. Thank you for sharing and informing. In the Gartner hype cycle, whic…" (ytc_UgwHNQ718…)
- "IK this is a year later...but lowkey hit her with the " Can Ai draw with a pen a…" (ytr_UgzuC9mgi…)
- "There has been too many red flag situations with robots showing signs of possibl…" (ytc_UgxRuhKWA…)
- "I'm an indie gamedev with conflicting thoughts on Machine Learning. I'm both an …" (ytc_UgwbMqZ4A…)
Comment
I think one of the reasons that people are hesitant to trust self driving cars is that most people believe they are good enough drivers to avoid getting into a fatal crash. If that is true, then your risk of crashing could theoretically be lower than the risk of a crash inducing software malfunction. One interesting statistic I would like to see is how many of those yearly fatalities were the fault of the person that died. For example, if I am T-boned by a bus that ran a red light, I may die from that crash but it was not my fault. In an autonomous car, I would be susceptible to that same accident. However, if I were driving along and I took my eyes off the road and ran into a ditch and died, then the self driving would prevent that (barring any software bugs). In the case of fatalities, are most of those victims the victim of their own mistake? If so, then self driving would likely prevent those deaths. If most are not at fault for their fatal crash, it may not make much difference.
Source: youtube · Posted: 2023-07-28T21:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
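Each coded comment is reduced to four categorical dimensions. A minimal sketch of a typed record for this scheme, in Python; note that the allowed value sets below are only those visible on this page, not necessarily the project's full codebook, and the class and constant names are illustrative:

```python
from dataclasses import dataclass

# Value sets observed in this sample; the full codebook may define more.
RESPONSIBILITY = {"none", "developer", "company"}
REASONING = {"consequentialist", "deontological", "mixed", "unclear"}
POLICY = {"none", "liability"}
EMOTION = {"fear", "outrage", "approval", "mixed"}

@dataclass
class CodedComment:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        """Raise if any dimension falls outside the observed value sets."""
        for value, allowed in [
            (self.responsibility, RESPONSIBILITY),
            (self.reasoning, REASONING),
            (self.policy, POLICY),
            (self.emotion, EMOTION),
        ]:
            if value not in allowed:
                raise ValueError(f"unexpected code: {value!r}")

# The coding result in the table above, expressed as a record.
CodedComment(
    id="<comment id>",  # the inspected comment's ID is not shown on this page
    responsibility="none",
    reasoning="consequentialist",
    policy="none",
    emotion="mixed",
).validate()
```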
Raw LLM Response
```json
[
  {"id":"ytc_UgxB5rxAiOTb56L2EDd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw3MjWKI1QW2f0hCWx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgykL4Q_a42Hfzsb-Gp4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugw5YTpGh1ca74En4gB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugyx_23ILDQcjn2Pqdh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugx-SfffaK5C9AgrtGl4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwCRS5qQpTknuQJ50x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugz7nUD0wWXDLv1Tu594AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwEl87TFSWSuNuVBRx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxFPGPqkLHrC3uPYlt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
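The raw response is a JSON array with one object per comment in the batch, so a coding-result view like the table above can be reproduced by parsing the array and indexing on `id`. A minimal sketch, assuming the model always returns valid JSON (a real pipeline would need to handle malformed output); the `index_batch` helper is illustrative, and the example entry is the first record of the batch above:

```python
import json

def index_batch(raw_response: str) -> dict[str, dict]:
    """Parse a raw batch response and index the coded records by comment ID."""
    return {record["id"]: record for record in json.loads(raw_response)}

# One entry from the batch above, inlined for the example.
raw = (
    '[{"id":"ytc_UgxB5rxAiOTb56L2EDd4AaABAg","responsibility":"none",'
    '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]'
)
coded = index_batch(raw)["ytc_UgxB5rxAiOTb56L2EDd4AaABAg"]
print(coded["reasoning"], coded["emotion"])  # -> consequentialist fear
```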