Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- Man, I wish we could smash AI Art, AI "artists" and the creators of the AI Art g… (ytr_Ugzkx8KZP…)
- As does most "upgrades" we get to life these days. You should just be happy with… (rdc_nck67cs)
- @pastelcatnip this isn't political. Neither side is helping the issue at all. H… (ytr_UgxM1O2rR…)
- Remember The AI Can Be Stupidest on World But they Can Also The Smartest And … (ytc_UgyjDzCzH…)
- Whoa that is just crazy but sadly I think this guy would probably do it anyway i… (ytc_UgxJomGbt…)
- A question I have is can you create Ai without it ever malfunctioning.. makes me… (ytc_Ugxy6Vi1Z…)
- digital drawing tools are judt drawing on a eletronic device no diffrence from u… (ytc_UgxfhCINY…)
- Why tf do we allow these fools to hold a gun to the worlds head for the sake of … (ytc_UgwlB7oV_…)
Comment
[not native here, sorry for grammar] I see that (one little part of) the problem is we would need "ratings" and "data". imagine the minimum problem: killing human A or B (assuming inevitable), with no significance differences, totally in the same condition (say, both with helmets or belts, same car security rating, position on the road, etc..) THEN we need to come to a "value" of their life. when we deplete information about probabilities, status of the vehicle we need to come to "who they are"....sadly.
and we are uncomfortable with this, our culture is... we like to think that there's no "rating" in what we do, that my carbon footprint or my job or my hobbies or my behaviour has no relevance in comparison to other's footprint or job, because i "like my job" and "i'm free to choose", "i have the right to happy no matter what".
human A and B is an oversimplified version, because you simply can't find 2 humans with just one difference. but we still need data to compute, and to choose A,B,C,D,.. we need to put computed data on a scale.
and the scale is the problem.
what's worst is that at some point we will NEED to choose, otherwise we can't introduce self-driving cars, THEREFORE not lowering the general incident rate, HENCE allowing more human to die just because we can't find a human capable of authorizing an emergency procedure. on purpose, this time. i love tech progress :).
Source: youtube · Incident: AI Harm Incident · Posted: 2015-12-08T19:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
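Each coding result assigns one value per dimension. As a hypothetical sketch, the dimensions could be checked against a codebook; the allowed values below are inferred only from the codings visible on this page, and the real codebook may define additional categories:

```python
# Allowed values per dimension, inferred from the codings shown on this page.
# ASSUMPTION: the actual codebook may include categories not seen here.
CODEBOOK = {
    "responsibility": {"none", "ai_itself", "developer", "distributed"},
    "reasoning": {"consequentialist", "deontological", "contractualist"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"fear", "approval", "indifference", "resignation"},
}

def validate_coding(coding: dict) -> list:
    """Return (dimension, value) pairs that fall outside the codebook."""
    return [
        (dim, coding.get(dim))
        for dim, allowed in CODEBOOK.items()
        if coding.get(dim) not in allowed
    ]

# The coding result shown in the table above.
example = {"responsibility": "ai_itself", "reasoning": "consequentialist",
           "policy": "none", "emotion": "fear"}
print(validate_coding(example))  # → []
```

An empty list means every dimension holds a known value; any out-of-codebook value is surfaced for manual review.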
Raw LLM Response
```json
[
  {"id":"ytc_UghSiRcVXA-3FHgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugg36gd_wQOCXHgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UggzSEiGsQNLKngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UghidMHZsCybB3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgjzNTXzuzIxOngCoAEC","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UghfmsovrnUJPXgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgjQy7gtc5pA_XgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugio_pXgICTxCXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UggKQCpjXBYZKXgCoAEC","responsibility":"developer","reasoning":"contractualist","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgggitcG_CbrUXgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"}
]
```
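The raw response is a JSON array with one coding record per comment. A minimal sketch of parsing such a response and looking up a record by comment ID, using only the Python standard library (the two records below are copied from the array above for illustration):

```python
import json

# A raw LLM response: a JSON array of per-comment coding records,
# in the same shape as the response shown above.
raw_response = """
[
  {"id": "ytc_UgjQy7gtc5pA_XgCoAEC", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UggKQCpjXBYZKXgCoAEC", "responsibility": "developer",
   "reasoning": "contractualist", "policy": "liability", "emotion": "approval"}
]
"""

def index_by_id(response_text: str) -> dict:
    """Parse the JSON array and index each coding record by its comment ID."""
    records = json.loads(response_text)
    return {record["id"]: record for record in records}

codings = index_by_id(raw_response)
coding = codings["ytc_UgjQy7gtc5pA_XgCoAEC"]
print(coding["emotion"])  # → fear
```

Indexing by ID is what makes the "inspect the exact model output for any coded comment" lookup above a constant-time dictionary access rather than a scan of the array.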