Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples — click to inspect

- ytc_UgxBtVuqY…: "if I saw a singing robot like that, I will pick a mazo and smash that scarry thi…"
- ytr_Ugwl74BRP…: "It's a case of stupid games get stupid prizes. You want to cut corners at work u…"
- ytc_UgypB1BBL…: "She makes valid points. And she even tries to steer the conversation towards how…"
- ytr_Ugyg26i-c…: "That, but infinitely less depressing. Let AI develop sentience, leave it to its…"
- ytc_UgxDlSRvU…: "AI is a tool to put people out of work. Employers will pay big bucks for that to…"
- ytc_Ugz9DxiPN…: "I'm not worried, there's no way AI can be as wildly and creatively incompetent a…"
- ytc_UgwXCIWQV…: "Guys chill out! There is this trick where if you put the food on the ground it w…"
- ytc_UgyLxny-P…: "LLMs are not AI they are prediction models that predict next word. LLM accuracy …"
Comment
I would never feel safe being a passenger in a self-driving car if the car doesn't prioritize your safety. Car: "Now throwing you off a bridge because of someone in the car ahead has more passengers but the driver in their car was stupid, sorry for your luck." While it would be nice to minimize the damage, sacrificing the passenger should never be an option no matter how many people are at risk. If they prioritize the most people to survive I can say for certain that self-driving cars won't catch on because passengers would be too scared that some idiot on the road may make a mistake and just because they have more people in their car the self-driving car will sacrifice the passenger by swerving off a road. Humans are programmed to be self-preserving first, then considerate of others second. If self-driving cars were to reverse this logic then it wouldn't make sense.
Source: youtube · Category: AI Harm Incident · Posted: 2016-06-25T18:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[{"id":"ytc_Ugi0oNCeHP92AHgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgjQqqQ8pvsVC3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UggUueruHXVu1ngCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgijXoYPKjY_1HgCoAEC","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UghGnVVF0cNqSHgCoAEC","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Uggfmpuz0HRxeHgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UggRgo7ALDJJCHgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UghT-lpLHZCE-HgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UghO2h5e1TxTNXgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UghNM3jgeKUHEngCoAEC","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"}]
```
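A raw response like the one above is a JSON array of coded rows, one per comment. A minimal sketch of how such a response might be parsed and validated before it reaches the coding table follows; the dimension names and allowed values are inferred only from the examples shown here (the real codebook may contain more values), and the function name and error handling are hypothetical, not the tool's actual implementation.

```python
import json

# Allowed values per coding dimension, inferred from the responses shown
# above; the actual codebook may define additional values.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "company", "user", "none"},
    "reasoning": {"deontological", "consequentialist"},
    "policy": {"none", "regulate"},
    "emotion": {"fear", "indifference", "mixed", "approval", "outrage"},
}

def parse_raw_response(raw: str) -> list[dict]:
    """Parse a raw LLM response into coded rows, dropping any row that
    lacks an id or uses a value outside the (assumed) codebook."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if "id" not in row:
            continue
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid

# Example: one well-formed row passes, one off-codebook row is dropped.
raw = (
    '[{"id":"ytc_A","responsibility":"ai_itself","reasoning":"deontological",'
    '"policy":"none","emotion":"fear"},'
    '{"id":"ytc_B","responsibility":"alien","reasoning":"deontological",'
    '"policy":"none","emotion":"fear"}]'
)
print(parse_raw_response(raw))  # only the ytc_A row survives
```

Validating against a fixed value set like this catches the common failure mode where the model invents a label outside the coding scheme, rather than silently storing it.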