Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- ytc_UgwLKyWD-…: "You can just untick the setting that lets chat gpt use your data for training. I…"
- ytc_UgxxoifgO…: "AI sucks ass, can't do the work of humans. It's a tool, a very expensive tool th…"
- ytr_UgxE9f84E…: "If Ai was so great and genuine, it would come up with something better as a sust…"
- ytc_Ugw7jTjKi…: "the 2nd wasn't racism, he was just more likely to be shot and the AI picked this…"
- ytc_UgzlDuPGh…: "I confess that I'm prone to using references. (KEY WORD being REFERENCE). But ev…"
- ytc_UgwiycRp4…: "This was deeply affecting. But what struck me most wasn't the possibility that A…"
- ytr_Ugw8Kr1EG…: "Due to the nature of cancer, any curve would have to be a very groundbreaking an…"
- ytc_UgyP-Jirp…: "I suggest that an aspect of this conversation that needs more attention and disc…"
Comment
I see that most comments missed the main point of this video. The given example is designed to tackle the ethical dilemma by assuming the choice between different victims is inevitable and you guys are just trying to run away from it by reversing the assumption itself saying it can be prevented somehow.
BTW the assumption is fair and almost certain to occur especially at the early stages of self-driving vehicles.
Platform: youtube
Topic: AI Harm Incident
Posted: 2019-05-14T05:0…
♥ 398
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyReg2RJcQbRU8fXqh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzFCEuEWdDiAtznUXV4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugwg9zPgDxoVHbvC0MR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxyNM-AKHsF-2MXWWt4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugx8Kcl0btcr5I4ySJx4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz6kL7XHAZLi2NywJZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwgRIau2zSrD54ZIb14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwgZBpAS47AyZs-L4R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwgfBkZSlB2KJ9236h4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzgpFP6xoDLiHG7IYB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"approval"}
]
```
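The raw response above is a plain JSON array with one object per coded comment, each carrying the four coding dimensions shown in the result table. A minimal sketch of how such an output could be parsed, validated, and indexed for the "look up by comment ID" view; the allowed label sets below are inferred from the responses shown here and are an assumption, not the full coding scheme:

```python
import json

# Allowed labels per coding dimension (inferred from the sample output above;
# the complete label sets are an assumption).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "user", "company", "distributed"},
    "reasoning": {"consequentialist", "deontological", "contractualist"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"indifference", "fear", "approval"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM response into a {comment_id: coding} lookup,
    dropping any record whose labels fall outside the known scheme."""
    coded = {}
    for rec in json.loads(raw):
        if all(rec.get(dim) in labels for dim, labels in ALLOWED.items()):
            coded[rec["id"]] = {dim: rec[dim] for dim in ALLOWED}
    return coded

raw = (
    '[{"id":"ytc_UgyReg2RJcQbRU8fXqh4AaABAg","responsibility":"none",'
    '"reasoning":"consequentialist","policy":"none","emotion":"indifference"}]'
)
coded = parse_coding_response(raw)
# Look up a single comment's coding by its ID
print(coded["ytc_UgyReg2RJcQbRU8fXqh4AaABAg"]["emotion"])  # prints: indifference
```

Validating against a fixed label set before indexing keeps a single malformed or hallucinated record from silently corrupting the per-comment lookup.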