Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `ytr_Ugx2i3aEC…`: @ralions4255 This screams "I'm not like other girls" while trying to be "edgy" a…
- `ytr_UgxYtgwQR…`: the motivation is to make sure no one is below the poverty line, especially with…
- `ytc_UgxBlVThV…`: The AI race, between the west and east is ongoing and we seem to be spending way…
- `ytc_UgxAw4SGm…`: I'm afraid when AI takes over the world you are the first in line Alex :D…
- `ytc_UgzrVFYP0…`: WTH is this? where is the terminator 3000 model I ordered? I don’t want this I R…
- `ytc_UgwXhklaA…`: The AI can be more human than some humans themselves. You could probably code em…
- `ytc_UgzC5bSCJ…`: If you are an engineer and you think AI is good enough to work on its own right …
- `ytc_UgwC8J8Dd…`: I watched the music video for krigsgaldr by heilung and i was so floored by the …
Comment
There is one question that needs to be fully solved before I will support robotaxis: responsibility
The first step should be a set of ethics guidelines that answer questions like: if the AI has to decide between killing 2 pedestrians or one person in the vehicle what should it do?
Then there needs to be a definition of responsibility based on the ethics guidelines, so if the vehicle followed ethical driving standards and someone died then it's an accident but if the vehicle didn't follow the "rules" then it's the company's responsibility.. but here it gets murky. If a human drives badly and someone dies then the driver is held responsible.. so who should be responsible when a Tesla does it? Elon Musk?
youtube
2026-04-02T06:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-26T19:39:26.816318 |
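The dimension values above come from a fixed coding scheme. As a minimal validation sketch, a coded record can be checked against the allowed values per dimension; note that the sets below are inferred only from the values that appear in this document, and the real codebook may define more categories:

```python
# Allowed values per dimension, inferred from the examples in this document;
# the actual codebook may include additional categories (assumption).
CODEBOOK = {
    "responsibility": {"none", "company", "distributed"},
    "reasoning": {"consequentialist", "deontological"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"approval", "outrage", "mixed", "indifference", "fear"},
}

def validate(record):
    """Return (dimension, value) pairs that fall outside the codebook."""
    return [
        (dim, record.get(dim))
        for dim in CODEBOOK
        if record.get(dim) not in CODEBOOK[dim]
    ]

coded = {
    "responsibility": "distributed",
    "reasoning": "deontological",
    "policy": "regulate",
    "emotion": "mixed",
}
print(validate(coded))  # [] (all values in the codebook)
```

Flagging out-of-vocabulary values this way catches the common failure mode where the model invents a new label instead of choosing from the scheme.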
Raw LLM Response
```json
[
{"id":"ytc_UgzxPq5sv8g1dPzz-Kd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxF-jMFDU627cs7o594AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw6kggfjF_NX0mcGml4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugwz9sVn15HrMLlfrvF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzgyf1-OXl2x5PcfHt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
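The "look up by comment ID" view above presumably consumes this batch output by indexing the records on their `id` field. A sketch of that step, using field names and two IDs taken from the response shown (the `index_by_id` helper is hypothetical, not part of the tool):

```python
import json

# Raw LLM response: a JSON array with one coded record per comment.
# Field names and IDs are copied from the response shown above.
raw_response = """
[
  {"id": "ytc_UgzxPq5sv8g1dPzz-Kd4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugw6kggfjF_NX0mcGml4AaABAg", "responsibility": "distributed",
   "reasoning": "deontological", "policy": "regulate", "emotion": "mixed"}
]
"""

def index_by_id(response_text):
    """Parse a raw coding response and index the records by comment ID."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

codes = index_by_id(raw_response)
print(codes["ytc_Ugw6kggfjF_NX0mcGml4AaABAg"]["policy"])  # regulate
```

Because the model returns one array per batch, a dict keyed by ID gives constant-time lookup when the UI needs to show the exact model output for a single comment.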