Raw LLM Responses
Inspect the exact model output behind any coded comment. A comment can be looked up directly by its comment ID, or drawn from the random samples listed below for spot-checking (a scripted lookup sketch follows the sample list).
- "Could we just add mandatory code that makes AI think we are cute? Just like kitt…" (ytc_UgwbcS8dX…)
- "They are absolutely right. My grandfather lost his job to automation. He was a s…" (ytc_UgxnJpg7K…)
- "Encryption isn't fundamentally needed. For example you can store something physi…" (rdc_fg0zvpw)
- "I used to use ChatGPT long before it was once better at everything and very deta…" (ytc_Ugw1YZyzX…)
- "I'm fully with ya. However, the point made at this time 13:16 is different than …" (ytc_Ugz0gFGQE…)
- "I don't use to use AI for coding, but I thought about what could be a good use: …" (ytc_UgzYrgCjh…)
- "How is this the beginning? I have witnessed this happening three decades ago…" (rdc_lgqq6b0)
- "florianschneider3982 you don't get to determine what is or isn't immoral. ai its…" (ytr_UgyzdXe2e…)
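For scripted spot-checks outside the inspector, a coded comment can be looked up by its ID against an exported batch of codings. The sketch below is a minimal illustration, assuming the raw responses are flattened into a JSON array of records shaped like the "Raw LLM Response" example further down; the file name `codings.json` and the helper `lookup_coding` are hypothetical.

```python
import json
from pathlib import Path


def lookup_coding(comment_id: str, export_path: str = "codings.json") -> dict | None:
    """Return the coded record for one comment ID, or None if it was not coded.

    Assumes export_path holds a JSON array of records like
    {"id": ..., "responsibility": ..., "reasoning": ..., "policy": ..., "emotion": ...}.
    """
    records = json.loads(Path(export_path).read_text(encoding="utf-8"))
    by_id = {rec["id"]: rec for rec in records}
    return by_id.get(comment_id)


# Example: inspect the coding of the comment shown in the detail view below.
coding = lookup_coding("ytc_UgjuSAOvpXKjoXgCoAEC")
if coding is not None:
    for dimension in ("responsibility", "reasoning", "policy", "emotion"):
        print(f"{dimension}: {coding[dimension]}")
```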
Comment
I agree with +0X0Patrick0X0 that the other cars should be linked to make room for an emergency maneuver. Lacking that, say all neighbors are not self-driving, the other option is to add an element of randomness to the decision.
The randomness can be equally weighted to avoid law suit, or skew the weight to optimized for owner safety or least total human damage. Similar to multiple button pusher at an execution to ease the weight of taking a human life. Transparency and guideline on how such randomness shall be assigned is key to addressing the ethical dilemma.
Source: youtube · AI Harm Incident · 2016-01-06T23:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
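Each coding result is one record drawn from a batched model response covering several comments (shown next under "Raw LLM Response"). A lightweight way to pin down the schema is to type the four dimensions explicitly; the Python sketch below is an illustration only, listing just the category values that appear in this example output, so the actual codebook may define more.

```python
from typing import Literal, TypedDict

# Only the category values visible in this example are listed; the full
# codebook may define additional ones (assumption).
Responsibility = Literal["developer", "user", "ai_itself", "distributed", "none"]
Reasoning = Literal["consequentialist", "deontological", "unclear"]
Policy = Literal["regulate", "liability", "none"]
Emotion = Literal["outrage", "approval", "indifference", "resignation", "mixed"]


class Coding(TypedDict):
    """One coded comment, mirroring a record in the raw LLM response."""
    id: str
    responsibility: Responsibility
    reasoning: Reasoning
    policy: Policy
    emotion: Emotion
```

Typing the record this way makes it straightforward to flag out-of-vocabulary values for re-coding rather than letting them slip into the dataset.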
Raw LLM Response
[
{"id":"ytc_Ugj_f2_hIfbFIngCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgjuSAOvpXKjoXgCoAEC","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgjXkfuodsaTaXgCoAEC","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgiH4bJgUd72t3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Ugis_iWcr_zaLHgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugg1rrdyzbR2AXgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UghCPalsjYnrLHgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgjHJF2WYdJEkngCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgiXr1C50oWCgXgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugh_wlHO5sE7gngCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
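Because the model returns each batch as free text, it is worth validating the output before the records enter the dataset. The following sketch shows one possible guard, under the assumption that a batch must parse to a JSON array whose objects carry the five expected keys; `parse_batch_response` is a hypothetical helper, not part of the pipeline shown here.

```python
import json

REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}


def parse_batch_response(raw: str) -> list[dict]:
    """Parse one raw LLM batch response into coding records, dropping malformed ones."""
    records = json.loads(raw)  # raises json.JSONDecodeError if the output is not valid JSON
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coding records")

    valid, rejected = [], []
    for rec in records:
        if isinstance(rec, dict) and REQUIRED_KEYS.issubset(rec):
            valid.append(rec)
        else:
            rejected.append(rec)

    if rejected:
        # Report rather than silently keep partial records (assumed handling).
        print(f"dropped {len(rejected)} malformed record(s)")
    return valid
```

Applied to the response above, this returns ten records, which can then feed the by-ID lookup sketched after the sample list.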