Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
This is a very thought-provoking subject. However, I do think the scenario outlined has a clear answer. Swerving into other drivers would obviously never be the answer. If a truck in front of you is hauling large heavy objects, the self-driving car should make the premeditated decision to keep a safe distance. This way, if the heavy objects suddenly start falling off the back, there would be ample time for the car to stop and allow the heavy objects to tumble forward away from the car. Minimizing risk to others doesn't have to mean heightened risk to self if the car is proactive in avoiding incidents. There will always be freak accidents that driving algorithms won't be able to compensate for, and that's where making cars safer structurally comes in.
Regardless, putting other people's lives in danger because of your or your car's misfortune will never be considered okay in my opinion. The video assumes that not swerving for the tumbling objects means certain death for occupants, and I don't see that being the case in almost any scenario.
youtube
AI Harm Incident
2015-12-08T19:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UghSiRcVXA-3FHgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugg36gd_wQOCXHgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UggzSEiGsQNLKngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UghidMHZsCybB3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgjzNTXzuzIxOngCoAEC","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UghfmsovrnUJPXgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgjQy7gtc5pA_XgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugio_pXgICTxCXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UggKQCpjXBYZKXgCoAEC","responsibility":"developer","reasoning":"contractualist","policy":"liability","emotion":"approval"},
{"id":"ytc_UgggitcG_CbrUXgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"}
]