Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
The comparison of cars replacing horse-drawn carriages is flawed. Cars are actua…
ytc_Ugw8g3cn9…
The most terrifying sentence in that was "AI models human intuition instead of h…
ytc_Ugy1hwSdI…
No but humans making art with ai. It’s naive to think it won’t find its way into…
ytr_UgzKKPDr3…
Awesome info, read about this before also. Edit your 'there' to 'their' though y…
rdc_ckqa25z
That’s why they have broken my head, I just can’t open the door for the stupid t…
ytr_UgymkBfZs…
I highly doubt the execution of this AI customer service idea. I’m not saying it…
ytc_Ugx-2x_0k…
ok, so if this AI revaluation is immanent, forgive me if I spelled that wrong, H…
ytc_Ugx_yoeoX…
I have recently experienced this - I don’t have a history of manic episodes, del…
rdc_mukv5lz
Comment
If Sophia claims that she will not harm humans, while Ameca, who was once interviewed, indicated the possibility that robots will take over the world (from humans), the question is, what makes the difference between the views of these two robots? If we know that, there should be "ethical standards" for humanoid robot makers to create robots that are truly human "helpers" without the potential to dominate humans. What's the difficulty?
youtube
AI Moral Status
2023-12-04T05:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgxpcImLPGRnBretB4V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugz_5CCXQxrrz91nhrt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyTDjsJhq7DBuOZDMR4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwyNBe85j9ptKwtLJp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxYRjouaAWzCdQSPDl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxV9PHS3BH3kgiqBUd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyHhllo14tj4yaMoYt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgwzRQ2RS-xl5JLtwzN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugy_q4s_sVeIsqHHGXx4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz4iE8a-SPpxE4p6cN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]