## Raw LLM Responses

Inspect the exact model output for any coded comment, either by looking it up directly by comment ID or by picking from the random samples below.
- "AI is being used by the IDF to target suspected fighters in Gaza. They kill whol…" (`ytc_Ugxcv5gvG…`)
- "It is interesting that everyone who tries to argue against Turing test arguement…" (`ytc_Ugwbf4v6d…`)
- "Yeah right fool trust the robot this guys they only asking this crap like in the…" (`ytc_Ugx37CYRe…`)
- "I get sick of all these things on my tablet and laptop i dont ask for it but the…" (`ytc_UgyR7eLCf…`)
- "Omg! Lol my boyfriend was poking fun at me for talking to ChatGPT all polite lik…" (`ytc_UgxKKzt3M…`)
- "Oops, you got it wrong. The correct answer is D, as no robot is actually capable…" (`ytr_UgzlzQVWm…`)
- "Her child was a 14 year old dude- not a small kid. He had issues and god know wh…" (`ytc_Ugxjv9YVW…`)
- "We are at the dawn of the birth of a new species, one that will be far superior …" (`ytc_UgzQK_HzZ…`)
### Comment

> Great commentary once again by Ryan on this important issue. The truth remains the same as it's always been: Other drivers are out to kill us. And have trouble identifying/destinguishing us. Are badly trained to recognise us. To be honest, self driving cars are similarly dangerous as other drivers (as shown in this video), so in the end it's not a reason to ban it from existence. These incidents are great learning cases for the self-driving industry of the future, and this industry must make sure that all traffic users are properly protected. Until that's the case and human drivers are forbidden: Watch your backs and stay safe!

youtube · AI Harm Incident · 2022-09-09T14:0…
### Coding Result

| Field | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
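For downstream use, a coding result is just a small record with four labeled dimensions plus the coding timestamp. A minimal Python sketch, assuming a hypothetical `CodingResult` type; the label sets below contain only the values observed in this sample batch, and the project's actual codebook may define more:

```python
from dataclasses import dataclass
from datetime import datetime

# Label sets observed in this sample batch; the full codebook may allow more.
RESPONSIBILITY = {"company", "distributed", "ai_itself", "none"}
REASONING = {"consequentialist", "deontological", "virtue", "mixed"}
POLICY = {"ban", "regulate", "liability", "none"}
EMOTION = {"outrage", "fear", "resignation", "approval", "indifference", "mixed"}

@dataclass
class CodingResult:
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: datetime

    def validate(self) -> None:
        """Raise ValueError on any label outside the known sets."""
        for name, value, allowed in [
            ("responsibility", self.responsibility, RESPONSIBILITY),
            ("reasoning", self.reasoning, REASONING),
            ("policy", self.policy, POLICY),
            ("emotion", self.emotion, EMOTION),
        ]:
            if value not in allowed:
                raise ValueError(f"unknown {name} label: {value!r}")
```

Validating against explicit label sets catches the common failure mode where the model invents an off-codebook label mid-batch.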
### Raw LLM Response

```json
[
  {"id":"ytc_UgxUQ91qBEdBAdWfJxN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw3U0DIWCzq4SYoriN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzOxnlXkzlKbs5LVH94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwUH4c85A9N_LLi4BB4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxtB9rYLbkoyzB1InB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwDv5BtvERJZDAadqd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxrmIIrDj6vjgPzawt4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugy0jMQZOR2MAqJ-xnd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugy4embv9o3iP5frAEp4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugww8KS4KUSJbDnorld4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"indifference"}
]
```
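Because the model codes comments in batches, the raw response is a JSON array for the whole batch, and the per-comment view above is produced by matching on `id`. A minimal lookup sketch (the function name `lookup_coding` is hypothetical, not part of any established API):

```python
import json
from typing import Optional

def lookup_coding(raw_response: str, comment_id: str) -> Optional[dict]:
    """Parse a raw batch response and return the coding dict for one comment.

    Returns None if the comment ID is absent from the batch. LLM output is
    not guaranteed to be valid JSON, so a json.JSONDecodeError propagates
    to the caller rather than being silently swallowed.
    """
    batch = json.loads(raw_response)
    for record in batch:
        if record.get("id") == comment_id:
            return record
    return None

# Against the batch shown above (assuming it is held in the string `raw`):
# lookup_coding(raw, "ytc_UgxtB9rYLbkoyzB1InB4AaABAg")
# -> {"id": "ytc_UgxtB9rYLbkoyzB1InB4AaABAg", "responsibility": "distributed",
#     "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
```

That record is the one rendered in the Coding Result table above; the timestamp in the table is added at coding time rather than returned by the model.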