Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- "Exactly! Ai art is just soulless automation. Art made by human hands, however me…" (ytc_UgzseIE5a…)
- "LLM’s will never be sentient. Their code does not allow it. AGI will develop fro…" (ytc_UgwvW2OCe…)
- "Human witnesses are even more flawed. You just need to verify any result from f…" (ytc_UgxYR4eCG…)
- "TL;DR maybe AI is like the microwave - we all got super excited to use it for EV…" (ytc_UgxVMoW4s…)
- "i cant wait for it, i feel the whole world of jobs is bullshit, they just sit an…" (ytc_UgzYc1yQA…)
- "Say someone you love the most in the world is on a boat with you in the ocean. T…" (ytc_UgwT9380H…)
- "All they gotta do is make it an opt in system and pay out royalties. It would ma…" (ytc_UgzRymbHx…)
- "This is soooo AI generated. Why bother reading ts when noone even bothered writi…" (ytr_UgzjJJOnl…)
Comment
Comment
If we consider the cars to be able to make meaningful decisions, shouldn't that decision be put on par with our reaction and be considered as such? A reaction is a decision taken in a matter of milliseconds - in humans, it is even systematic (on average).
If you were driving the car, chances are you would just try to survive. And if we consider the car a living, thinking object - as we hopefully do - shouldn't it try to survive as well? Put a horse in place of a car here, and now you get ethical dilemmas with horses.
If the horse was taught to crash into the most harmful object, it is still the horse's fault to some extent.
The question eventually boils down to whether or not we should trust our life into the car before we hop in. If we do, we must accept that the most meaningful decision for the car is for it to minimise the damage to itself.
Will we ever treat artificial intelligence the same as ours? If the answer is yes, then you will see AI go to jail for something like this. If we consider them as tools, then we shall do as we always did.
Platform: youtube · Topic: AI Harm Incident · Posted: 2018-06-21T11:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxNu6orq72VYmfHfwB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzZh_afhC_OOFGLQXR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyCTPsECAP96PGh3Hl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"ban","emotion":"resignation"},
  {"id":"ytc_UgxJKKQ9sqMp_Ti81H54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxqNM8gW2hHoyqHe5l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxnIUdclFIQ-4Rv2z54AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgzlBBbufEX3_0ASe354AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgwCGqFtlXMJ6s8DwaZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwtXdfQuJN50FtVCfZ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyosQqVFTeUat0GwLx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
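The lookup-by-ID workflow shown above can be sketched in a few lines: the raw LLM response is a JSON array of coding records, and indexing it by comment ID lets any coded comment's dimensions be retrieved directly. This is a minimal sketch assuming only the record structure visible in the response above; the `index_codings` helper name is hypothetical, and the two records are copied from the example for illustration.

```python
import json

# Raw LLM response: a JSON array of coding records, one per comment
# (field names and values taken from the example response above).
raw_response = """[
  {"id": "ytc_UgxNu6orq72VYmfHfwB4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwtXdfQuJN50FtVCfZ4AaABAg", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"}
]"""

def index_codings(raw: str) -> dict:
    """Parse a raw LLM response and index its coding records by comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codings = index_codings(raw_response)

# Look up one comment's coded dimensions by its ID.
rec = codings["ytc_UgwtXdfQuJN50FtVCfZ4AaABAg"]
print(rec["responsibility"], rec["policy"])  # ai_itself unclear
```

In practice the same index also supports the "Coding Result" view above: given a comment ID, each dimension (responsibility, reasoning, policy, emotion) is read straight from the matching record.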