Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "AI is the biggest Pandora's box that humanity has ever had to deal with,people h…" (ytc_UgxVzHLaM…)
- "AI will come up with solution to every problem including us humans once we becom…" (ytc_UgxJxUkhO…)
- "AI can only derive from souls like MJ and Bach, etc. It lacks its own soul to in…" (ytc_Ugzy7F5Fq…)
- "This is a very well written comedy to express some legitimate AI issues. I don't…" (ytc_UgwxUN95d…)
- "When he said no one understands how this AI works, something inside me sank. Bec…" (ytc_UgyQV_nBG…)
- "Been hearing this shit for the past 5 years. Can this AI super intelligence happ…" (ytc_UgzYVoVoB…)
- "@lepidoptera9337 Ha! Ha! So it’s a good thing the AI chatbots etc are not consci…" (ytr_Ugya6EgdW…)
- "Every African data worker should get together under one agenda to secure a fair …" (ytc_UgzgBpOs0…)
Comment
My only concern when it comes to robots and this presenter is, they are teaching the robots "to be human". Not to be too philosophical but what is it to be human? Who gets to decide what are common human values. In a time on the planet where we can recognize how decisive we are even within the same community, we now need to trust a few eccentrics to decide the human value scale? A good example is when the robot says humans are not the most ethical creatures and the presenter agrees. That is the complexity oh humans, whose ethics?
youtube · AI Moral Status · 2020-01-18T15:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | contractualist |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgynbDnht02zSLzdHiB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwTc3t9TGKmwQ9HR9B4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwVgJSbF92NP3NIedJ4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugyegn1nnKPSz9gC9y54AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgxvBixxUQ-aGb7qsLd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzRHOWsGmqqX3ugWql4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyXdspCGfVmp_WD3pZ4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy-n_lUlLne4ZJRUzB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxWra8vv9yV_MzAx214AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwFuGXGv9cGcriJxNd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
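
The coded dimensions in the table above come directly from this JSON array. As a minimal sketch, assuming the raw response has been saved to a local JSON file (the `raw_llm_response.json` path and the `lookup_coding` helper are illustrative, not part of the tool), looking up a comment ID and rendering its coding result could look like this:

```python
import json

# The four coding dimensions shown in the Coding Result table.
DIMENSIONS = ["responsibility", "reasoning", "policy", "emotion"]

def lookup_coding(raw_response_path: str, comment_id: str) -> dict | None:
    """Return the coded dimensions for one comment ID, or None if it is absent.

    Assumes the file contains a JSON array of objects like
    {"id": ..., "responsibility": ..., "reasoning": ..., "policy": ..., "emotion": ...},
    i.e. the raw LLM response shown above.
    """
    with open(raw_response_path, encoding="utf-8") as f:
        rows = json.load(f)
    for row in rows:
        if row.get("id") == comment_id:
            # Fall back to "unclear" if the model omitted a dimension.
            return {dim: row.get(dim, "unclear") for dim in DIMENSIONS}
    return None

if __name__ == "__main__":
    coding = lookup_coding("raw_llm_response.json", "ytc_Ugyegn1nnKPSz9gC9y54AaABAg")
    if coding:
        # Print the result as a markdown table, mirroring the Coding Result view.
        print("| Dimension | Value |")
        print("|---|---|")
        for dim, value in coding.items():
            print(f"| {dim.capitalize()} | {value} |")
```

Run against the array above with the ID ytc_Ugyegn1nnKPSz9gC9y54AaABAg, this would reproduce the developer / contractualist / liability / mixed row shown in the Coding Result table.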