Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "bro really got gaslit by an AI and then it walked back its gaslighting when he p…" (ytc_UgzlfMQ4X…)
- "3 seconds in... why are manipulating to not be bias?... of the algorithm is bias…" (ytc_Ugw1ALmmL…)
- "Sophia's thoughts on rationality got me thinking about how using something like …" (ytc_Ugz8hDTIv…)
- "@Vanilla3609 Actually making AI generations has an environmental factor and in t…" (ytr_UgwJchv2f…)
- "The fact that she is telling people how to steal her art style is such a power m…" (ytc_UgzAiJBxA…)
- "This is incredible! Thank you so much for sharing all this knowledge with us! I…" (ytc_UgyVaCOzw…)
- "Is this why AI appears to hate me and is so rude all the time?…" (ytc_Ugy6T85uI…)
- "The big mistake is to tag this on to just Trump. The unholy alliance is with all…" (ytr_Ugy5vrb8a…)
Comment
There's more to consider than passenger deaths per mile traveled. When waymo cars drive into active accident response scenes, and when they turn into bricks during a power outage, there's a lot more to consider than whether or not the passengers are being put in danger.
When something goes wrong with a human driven car, the person behind the wheel is responsible. When waymo clogged up the san fransisco street grid during that power outage, they couldn't even be reached by emergency response in a timely way. Its more than just likelihood of an injury, its what happens when something goes wrong.
Microsoft can sell buggy software to customers who implicitly consent to being beta testers. If you dont want to be a beta tester for a software house, there are other places to spend your money. But sharing the street with these experimental robots, means theres no way to opt out of their testing cycle. They've managed to turn the whole world into their sandbox, without having to pay for it. I can see plenty wrong with that.
| Field | Value |
|---|---|
| Platform | reddit |
| Topic | AI Moral Status |
| Timestamp | 1773270408.0 |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
{"id":"rdc_o9wx8yo","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_o9xyhxg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"rdc_o9w2cei","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"rdc_o9wack6","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"rdc_o9wsyba","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"mixed"}
]
```
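The raw response is a JSON array with one coding object per comment, keyed by `id`. A minimal sketch of the "look up by comment ID" step might parse that array and index it by ID; the function name `index_codings` is illustrative, not part of the tool:

```python
import json

# Raw LLM response in the format shown above: one coding object per comment.
raw_response = """
[
{"id":"rdc_o9wx8yo","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_o9xyhxg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
"""

def index_codings(raw: str) -> dict:
    """Parse the JSON array and index each coding row by its comment ID."""
    return {row["id"]: row for row in json.loads(raw)}

codings = index_codings(raw_response)
coding = codings["rdc_o9xyhxg"]
print(coding["responsibility"], coding["policy"], coding["emotion"])
# → company liability outrage
```

Matching the second row above against the coding-result table (responsibility: company, policy: liability, emotion: outrage) is how the per-comment "Coding Result" view would be populated, assuming IDs are unique within a batch.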