Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "What if this is just the final outcome of training an LLM naturally? Like w…" (rdc_kcq24x0)
- "Autonomous AI was surfaced in movie namely iRobot (2004) where neuron network or…" (ytc_UgxEFklYo…)
- "because ai can produce better art then them (which isn't saying much) they are u…" (ytc_UgysDRrVB…)
- "I wonder where we'll be in 20 years. I'm hopeful but man living through it is …" (ytc_UgyNY5klH…)
- "Art isn't made for a competition, it's a form of expression and enjoyment. You s…" (ytc_UgxnreYIT…)
- "tbh, that ai drawing looks good and it looks even better than many of the quoted…" (ytc_UgwAhOwCH…)
- "@crowe6961 by the time that happens we're all dead and gone, I use ai and I prac…" (ytr_UgwvgDnrz…)
- "So, it appears that private public entities are acting as law enforcement which …" (ytc_UgyadR4WW…)
Comment
10 years ago I wrote a paper in university concerning the ethical implications and liabilities in the event of a crash involving an autonomous vehicle. In this paper I attempted to explain how difficult it is to definitively point a finger at who's responsible. Is the "driver" responsible? Is it the manufacturer who is at fault? Maybe even the programmer who designed the self-driving algorithm? It's a whole can of worms that is decidedly complex and ethically challenging to answer. My prof dismissed the thesis of my paper as being a dumb idea as it's not relevant or applicable. I failed that assignment.
Well here we are! Hate to say I told you so... But I told you so.
Writing software that can 100% make an informed decision is incredibly hard. I can't for the life of me understand why having more data available through additional sensors could ever make it worse at making an informed decision. Harder? Absolutely. More processing and higher costs will be incurred without a doubt. As long as you have quality data, which in this case with expensive equipment is to be expected, then more of it will always (well, usually) help you make a better informed decision. As mentioned in the video, the self-driving software at any given moment needs to make a "yes" or "no" decision (over-simplification but good enough as an example). The more data you have, the more variables can contribute to making a safer decision.
youtube · AI Harm Incident · 2022-09-03T17:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response

```json
[
  {"id":"ytc_UgxLKj91_yUZcmtzKP54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwfuReyxukU6hInDXB4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgzCvmcpbSIrarILhTl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwVN0ZCKCao6_Zjh414AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxkerxGD8MNCT62-Vl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwhqePNsfSq-qGeLD54AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxgKW9FglOYf1rTYZt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw3hys9fA5p-pXqASB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyUQtiA2BozN-OGUjN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugwuu7-b-u-guWrgrxl4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
```
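The "look up by comment ID" workflow this page describes can be sketched in a few lines of Python, assuming the raw model output is a JSON array of per-comment coding objects like the one above (the IDs and field names below are copied from that payload; the `lookup` helper is hypothetical, not part of any real tool):

```python
import json

# Raw LLM response: a JSON array of coding objects, one per comment.
# Two rows are reproduced here from the payload shown above.
raw_response = """
[
  {"id": "ytc_UgwfuReyxukU6hInDXB4AaABAg", "responsibility": "distributed",
   "reasoning": "mixed", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_Ugwuu7-b-u-guWrgrxl4AaABAg", "responsibility": "company",
   "reasoning": "virtue", "policy": "none", "emotion": "outrage"}
]
"""

# Index the array by comment ID for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

def lookup(comment_id):
    """Return the coded dimensions for one comment, or None if absent."""
    return codings.get(comment_id)

result = lookup("ytc_UgwfuReyxukU6hInDXB4AaABAg")
# result["responsibility"] == "distributed", matching the Coding Result table
```

Indexing by `id` up front is what makes inspecting any single coded comment cheap, even when one response covers a batch of ten comments as it does here.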