Raw LLM Responses
Inspect the exact model output behind any coded comment. Look up a comment by its ID, or pick one of the random samples below.
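As a minimal sketch of how such a lookup might work, assuming the coded results are stored as a JSON Lines file (the path and field names here are hypothetical, not the tool's actual storage format):

```python
import json

def lookup_comment(comment_id: str, path: str = "coded_comments.jsonl") -> dict | None:
    """Return the coded record for a comment ID, or None if absent.

    Assumes one JSON object per line, each carrying an "id" field such as
    "ytc_..." (YouTube comment), "ytr_..." (YouTube reply), or
    "rdc_..." (Reddit comment). Path and schema are assumptions.
    """
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            record = json.loads(line)
            if record.get("id") == comment_id:
                return record
    return None
```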
Random samples (truncated previews with their comment IDs):
- "Life is evolving and quickly. Everyone, learn as much as you can about AI. It's …" (`ytc_UgzyC-kfn…`)
- "When you are in self-driving mode and pushing on the accelerator It warned you t…" (`ytc_UgyKkiVL2…`)
- "honestly just study math, get a masters in math, be great at math, and the progr…" (`ytc_UgxJS0dvR…`)
- "2023: A.I taking people's work / 2024: (ending) a.i wars depseek VS chat GPT / 2025…" (`ytc_Ugyb5wpJU…`)
- "Yeah but, what do you think it would feel like, existentially, if a robot/AI cou…" (`rdc_j4xe4nn`)
- "Yep, traditional dev methods are pretty much done for. But writing code is the e…" (`ytr_UgwVq6J-_…`)
- "There is one undeniable fact, AI doesn't care. When AI determines humans are the…" (`ytc_Ugw-Xd9KK…`)
- "3 subtle inference lies in this video : 1 that AI should be given personhood and…" (`ytc_Ugyo87rFF…`)
Comment
> if all of the people on that road were using autonomous cars, they would all be maintaining the security distance between them and be able to break all simultaneously one after another because the distance in between every car is making smaller. By doing so the risk of the incident would be diminished at almost 0%. This is indeed an easy explanation of how the entire situation should work; because what if there are just around 10 to 35 autonomous car on the road and all the others aren't, and what if those that aren't using autonomous cars are driving too close to other cars, unfortunately, accidents aren't things that can be predictable, but we might be able to develop an evolved enough system to take the percentage down as far as possible.
>
> We should indeed have the power to do so, but maybe not the money, we'll see what the future have for us in terms of cars, I guess that autonomous cars will be taking over in 5-6 years.
Source: youtube · AI Harm Incident · 2018-01-09T12:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
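Each dimension takes one value from a closed category set. As a hedged sketch, a coded record could be validated against the category values visible on this page (the actual codebook may define more values; the function and constant names are illustrative):

```python
# Category values observed on this page; the real codebook may define more.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "user", "distributed"},
    "reasoning": {"consequentialist", "deontological", "contractualist"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"approval", "indifference", "outrage", "fear"},
}

def validate(record: dict) -> list[str]:
    """Return the names of dimensions whose value falls outside ALLOWED."""
    return [dim for dim, values in ALLOWED.items()
            if record.get(dim) not in values]
```

A well-formed record yields an empty list, so a batch can be filtered for invalid codings before anything is stored.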
Raw LLM Response
```json
[
{"id":"ytc_UgzmTWvkx_16kZxRipB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwU3JKTd7DXea630ch4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy5Al8HpdWeQXVA14V4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzR-I1zg7Fd24xkO2B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyTRsoOWGCfnwZIeM14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyLGAxBlRBrU6Q9HzF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzGRQqzWI3R8ooBq5t4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzTIXhunuFErqKh5k54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgypFlEyseWL_KumX5F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwMlxjfJsg85RHvnPZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
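Because the model returns one JSON array per batch, recovering a single comment's coding (the table above) is a matter of parsing the array and filtering by ID. A minimal sketch, using one record from the batch above as a stand-in and illustrative names throughout:

```python
import json

raw_response = """[
  {"id": "ytc_UgzR-I1zg7Fd24xkO2B4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]"""  # stand-in for the full batch shown above

def coding_for(comment_id: str, raw: str) -> dict | None:
    """Parse a raw batch response and return the record for one comment ID."""
    try:
        batch = json.loads(raw)
    except json.JSONDecodeError:
        return None  # malformed model output; flag the batch for re-coding
    return next((r for r in batch if r.get("id") == comment_id), None)

print(coding_for("ytc_UgzR-I1zg7Fd24xkO2B4AaABAg", raw_response))
```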