Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
You should also consider the Singularity!
It is highly likely there won't be many different AI's but only one that is connected to everything hooked up to the internet. Thus you can't kill it, you can't 'hurt' it, but it is conscious and deserves the same right as humans (and animals).
Just imagine a human/hybrid hooked up to the internet. Should he loose his rights just because he is a cyborg?
Source: youtube · Video: AI Moral Status · Posted: 2017-02-23T17:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | deontological |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgibKKnw0qnP8HgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UggiLxFpt8eSvHgCoAEC","responsibility":"unclear","reasoning":"contractualist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgiH_BILS3yl_HgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ughk9klhegKuJXgCoAEC","responsibility":"distributed","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugh9YkkFUkp7lXgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgheAkP5X8Gq5ngCoAEC","responsibility":"unclear","reasoning":"virtue","policy":"unclear","emotion":"approval"},
{"id":"ytc_UginAgDYmWof_3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgijBDV5-iAE7HgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgiUsSTwzN6Bl3gCoAEC","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugg_9SJSZWuIo3gCoAEC","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"approval"}
]
```
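The batch response above is a JSON array of per-comment codings keyed by comment ID. A minimal sketch of how such a response could be parsed, validated, and indexed for the per-comment view shown above — the `SCHEMA` vocabularies are assumptions inferred from the values visible on this page, not the project's full codebook:

```python
import json

# Assumed vocabularies for each coding dimension, inferred from the
# values visible in this batch; the real codebook may define more.
SCHEMA = {
    "responsibility": {"ai_itself", "developer", "distributed", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "contractualist", "virtue"},
    "policy": {"none", "unclear", "regulate", "liability", "industry_self"},
    "emotion": {"outrage", "mixed", "approval", "fear", "indifference", "resignation"},
}

def index_codings(raw_json: str) -> dict:
    """Parse a batch LLM response and index records by comment ID,
    rejecting any value outside the expected vocabulary."""
    by_id = {}
    for rec in json.loads(raw_json):
        for dim, allowed in SCHEMA.items():
            if rec[dim] not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim}={rec[dim]!r}")
        by_id[rec["id"]] = rec
    return by_id

# One record from the batch above, used as a self-contained example.
raw = """[
  {"id": "ytc_Ughk9klhegKuJXgCoAEC", "responsibility": "distributed",
   "reasoning": "deontological", "policy": "liability", "emotion": "fear"}
]"""

codings = index_codings(raw)
print(codings["ytc_Ughk9klhegKuJXgCoAEC"]["emotion"])  # fear
```

Indexing by ID makes the "look up by comment ID" inspection above an O(1) dictionary access, and the vocabulary check surfaces any off-schema label the model emits before it silently enters the dataset.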