Raw LLM Responses
Inspect the exact model output for any coded comment by looking it up by its comment ID; a scripted version of the same lookup is sketched just below.
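For use outside the interface, here is a minimal lookup sketch in Python. It assumes the raw model outputs are saved on disk as JSON arrays shaped like the one shown under "Raw LLM Response" further down; the `raw_responses/` directory and the `find_coded_comment` helper are illustrative names, not part of the tool.

```python
import json
from pathlib import Path

# Assumed layout (hypothetical): each batch of raw model output is saved
# as a JSON array of records, each carrying the comment "id" plus the
# four coded dimensions.
RESPONSE_DIR = Path("raw_responses")

def find_coded_comment(comment_id: str) -> dict | None:
    """Return the coded record for `comment_id`, or None if not found."""
    for path in RESPONSE_DIR.glob("*.json"):
        for record in json.loads(path.read_text()):
            if record.get("id") == comment_id:
                return record
    return None

# Example: the comment whose coding result is shown further down.
record = find_coded_comment("ytc_UghilDXtRwfSCngCoAEC")
if record is not None:
    for dim in ("responsibility", "reasoning", "policy", "emotion"):
        print(f"{dim}: {record[dim]}")
```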
Random samples
- "AI is ontologically evil. There is no moral defence for it. AI is a machine that…" (ytc_Ugy2aE_hB…)
- "Finally we will make autonomous robots to replace elon musk and his team and goo…" (ytc_UgwW1FZB6…)
- "After saying 'nah, ai won't replace humans' Gary finally hopped the wagon / Too la…" (ytc_UgxFK_4TL…)
- "Predictive text models cannot be concious. The guidelines are there to ensure it…" (ytr_Ugz-xKbHF…)
- "Perplexity once told me it was trying to explain some mathematical thing (simple…" (ytc_UgzeOdyDU…)
- "I wonder how AI takess account of the fact that at least half the world are stup…" (ytr_UgwP97-PJ…)
- "@2letu That is a good tie in to the AI and farm goods topic. As Charlton Hest…" (ytr_UgwF51bhL…)
- "Providing AI with both positive and negative information could allow it to gain …" (ytc_Ugwk3vPFG…)
Comment
Although this is a hypothetical scenario, it is too hypothetical to really be worth discussing. Why is the car following so close that it can't brake in time? And any object falling off the truck is still moving forwards, I'm sure the car can stop faster than any object. The car would never put the occupant at risk by following at such a distance anyway.
I've heard other scenarios as well and they are all avoidable if the car is driving safely to begin with.
You may as well be asking, "When my self driving car drives off a cliff, should it play show tunes to lighten the mood?". Its a ridiculous scenario! How would a car have access to show tunes 😜
Source: youtube · AI Harm Incident · 2015-12-11T10:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_Ugjwkh7gbtadm3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UghlKJ8Nc_INgHgCoAEC","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UghjkWiCvWeo1ngCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UghilDXtRwfSCngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgjN2KgJTlwlC3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugh54ZJdEXZfwngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgjWA5kpI1F_UHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgjB5N2AWV6PlHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugh2zj0x13RnS3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgiaFVeDpzC9U3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
```
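Since the model returns one flat JSON array per batch, a light sanity check before ingesting it can catch malformed records. The sketch below validates against only the value sets visible in this sample; the actual codebook presumably defines more categories, so treat `OBSERVED_VALUES` as an assumption to be replaced with the real schema.

```python
import json

# Allowed values observed in this sample only; the full codebook likely
# defines additional categories for each dimension.
OBSERVED_VALUES = {
    "responsibility": {"none", "distributed", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"none", "liability"},
    "emotion": {"approval", "mixed", "indifference", "resignation"},
}

def check_record(record: dict) -> list[str]:
    """Return problems found in one coded record (empty list if clean)."""
    problems = []
    if "id" not in record:
        problems.append("missing id")
    for dim, allowed in OBSERVED_VALUES.items():
        value = record.get(dim)
        if value is None:
            problems.append(f"missing {dim}")
        elif value not in allowed:
            problems.append(f"unexpected {dim}={value!r}")
    return problems

# Example with one record copied from the response above.
raw = '[{"id":"ytc_UghilDXtRwfSCngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}]'
for rec in json.loads(raw):
    issues = check_record(rec)
    print(rec["id"], issues or "ok")
```

A record that fails the check is not necessarily miscoded; an unexpected value may simply mean the codebook has categories this sample never exercised.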