Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Want to point out the luddite part, they weren't against progress, that's just o…" (ytc_UgzbxD-s7…)
- "posibilities are unlimited we can use ai as inventor and extend our life to unli…" (ytr_Ugw0FV92t…)
- "So the AI looks at art, and then based off of that it makes new art from what it…" (ytc_UgyksHqF_…)
- "I upvoted your post because I think it’s an important issue to discuss, but at t…" (rdc_mlhwgg8)
- "I work in the unskilled sector but I'm definitely interested to see A.I take ove…" (ytc_Ugze6oHx7…)
- "Like any technology, this depends upon how humans interact with them. The moral …" (ytc_UgxSPmcZZ…)
- "If these were humans driving then ego would of gotten in the way. Nothing bad ha…" (ytc_Ugxc7PRT0…)
- "@jemonemusicthen how do you explain our constant need to know? AI is just th…" (ytr_UgxRZEd2v…)
Comment
1:20 As a Tesla owner myself… I will say this, I use self driving EVERY single day… in San Diego… some of the largest & busiest freeways … and it 1000% drives much safer than I do… however I’m not an idiot that depends on a car to keep me alive, along with following the instructions of the car such as…when I press on the accelerator(because it’s not going fast enough… ) I get a warning that states “car will not auto break if accelerating manually” now if you think about this logically, this would make sense. Us as humans wouldn’t press “go” & stop at the same time, correct? So why would we expect the car to stop if the “human” is telling the machine go? That in itself could pose dangers which is why they warn the driver that auto break will not operate in that circumstance…. I’m sorry but I am SO SICK of people blaming Elon for his inventions when we as the operators need to take accountability for unrealistic expectations and poor judgment/driving. Dont get me wrong, he’s far from perfect, he may over promise on performance but it’s up to us as the operator of these items to read the fine print and take safety measures to ensure our own lives aren’t at risk.
youtube
AI Harm Incident
2025-08-19T00:3…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
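The dimension values in this export can be sketched as a small validation schema. The value sets below are inferred only from the samples and responses shown on this page, not from an exhaustive codebook, so the real coding scheme may include additional categories:

```python
# Allowed values per coding dimension, inferred from this export.
# These sets are an assumption; the full codebook may define more categories.
CODEBOOK = {
    "responsibility": {"none", "company", "user", "government", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"none", "liability", "regulate"},
    "emotion": {"approval", "indifference", "outrage", "fear", "mixed", "resignation"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems for one coded record; empty means valid."""
    problems = []
    for dim, allowed in CODEBOOK.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}={value!r} not in {sorted(allowed)}")
    return problems
```

For the record shown in the table above, `validate({"responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"})` returns an empty list.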
Raw LLM Response
[
{"id":"ytc_UgxDvF4LaK3efIQAbDZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz7wZSiy6jrkNAfWN94AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx6FOs0BxYdvmfP5854AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwvFaeO3VFKxKeLmHV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxBMLfoGXt3bhZa2pB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyf9NbHDOySAzkNsG94AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwOU0KPVHGNgSOVUlJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugwz3zozfzA-jTUK8rF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw9XV88eNz5jkZTCCJ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwziCpzMTDqs-vWIkp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
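A raw response like the one above can be parsed and indexed by comment ID, which is what the lookup box at the top of this page does. A minimal sketch, assuming the model output is a well-formed JSON array (variable names here are illustrative):

```python
import json

# Two records copied from the raw LLM response above.
raw_response = '''
[
  {"id":"ytc_UgxDvF4LaK3efIQAbDZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz7wZSiy6jrkNAfWN94AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
'''

# Parse the model output and index the records by comment ID.
records = {rec["id"]: rec for rec in json.loads(raw_response)}

# Look up a single coded comment by its ID.
coded = records["ytc_Ugz7wZSiy6jrkNAfWN94AaABAg"]
print(coded["emotion"])  # outrage
```

In practice the model output would first be validated (valid JSON, expected keys present) before indexing, since raw LLM responses are not guaranteed to parse.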