Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking up a comment ID directly or by browsing the random samples below.
- "The difference between your comparison and generative AI is that the example req…" (`ytc_UgwgLqMYx…`)
- "That's an interesting point! Sophia does highlight how humans can sometimes bene…" (`ytr_UgyGvDzdV…`)
- "One big and crucial thing about AI is: It cannot learn on its own. No matter wha…" (`ytc_UgwDNdSP6…`)
- "So if a robot decided to oppose abortion, they should be labeled sexist, bigoted…" (`ytc_Ugy1YurAq…`)
- "Wow, Interesting, thought provoking, Creative, terrifying, shocking look at how …" (`ytc_UgxxWLzQ5…`)
- "AI is just a tool to aid people. Developers only spend 10% of their time actuall…" (`ytc_UgwB-Zilz…`)
- "I'm happy that AI is already smarter then the PhD that came up with this stupidi…" (`ytc_UgwTqKFdl…`)
- "I have the feeling that the ceo of ai is controlled by ai tbh mp…" (`ytc_Ugx4-IcmZ…`)
Comment (youtube, 2023-08-08T11:4…):

> Self driving cars are a version of the trolley problem. Do we want to choose fewer deaths, or the deaths of those who are not supposed to have died without our having made this choice? Self driving vehicles make it more evident that those who are behaving safely could be killed by a car which would not have killed them had the cars which killed them been controlled by a thinking human. I hate to think that I could teach my child how to be safe and have my child be killed by a vehicle that did something completely unpredictable and potentially, unavoidable.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | contractualist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxAcV3-jeGRD8Ee6zx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwnChjmSX_yHITtIzR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwfQrExD2D6_UQHyEp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugykx2wAY5dREF-SFFJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxVTuSUCocmKajIjtN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugwx1y44FZI776ewm9t4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
  {"id":"ytc_UgwXrz0slgBdwc2zw1l4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyFCnmOfLiYdmU-wYF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwaemTH8eWUccvEhjt4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugyt4Mx2dB7uiJ5TBdV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}
]
```
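The raw response is a JSON array of per-comment codings, so looking up a single comment's coding by ID is a parse-and-filter. Below is a minimal sketch of that lookup; the function name `lookup_coding` and the two-entry sample array are illustrative, not the tool's actual implementation.

```python
import json
from typing import Optional

# Abbreviated sample of a raw LLM response: a JSON array of codings,
# one object per comment, keyed by the comment's platform ID.
raw_response = """
[
  {"id": "ytc_UgxAcV3-jeGRD8Ee6zx4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwaemTH8eWUccvEhjt4AaABAg", "responsibility": "ai_itself",
   "reasoning": "contractualist", "policy": "unclear", "emotion": "mixed"}
]
"""

def lookup_coding(raw: str, comment_id: str) -> Optional[dict]:
    """Parse a raw model response and return the coding for one comment ID."""
    codings = json.loads(raw)
    return next((c for c in codings if c["id"] == comment_id), None)

coding = lookup_coding(raw_response, "ytc_UgwaemTH8eWUccvEhjt4AaABAg")
print(coding["reasoning"])  # contractualist
```

In practice the model's output should be validated before lookup (e.g. confirm it parses as JSON and that every object carries the four coding dimensions), since a malformed response would otherwise surface as a confusing `KeyError` here.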