Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
I love how the most likely situation is just not mentioned here at all. "AI" is … (ytc_UgzlMW-Zh…)
AI didn’t impact nearly every aspect of most people’s lives. Just made the Inter… (ytc_UgymIT_-N…)
Yeah, I'm gonna call it now. The day after I die, AI invents immortality and rev… (ytc_UgxuWHxkn…)
3:50 ok, yes China winning the AI race(its a race now ok) is a risk, but what is… (ytc_Ugxi3PWee…)
More dangerous than AI is brain technology that can read people's minds and cont… (ytc_UgxN9iejj…)
So if I understand correctly in the medical field AI can play the role of a doct… (ytc_UgwwPe_H4…)
I'm a machine learning engineer and I've been working in AI for a decade. The PD… (ytc_UgxEpmyfN…)
"...making AI MORE LIKELY to follow human commands???...." Oh, well what could p… (ytc_UgzJmnixU…)
Comment
If we were to make a robot that would be capable of conscious thinking like a human, have wishes, needs (like a need for exploration or need for company - it would make good constricting device as well, if for example some needs could only be satisfied with the help of a human), then it would be preferable to give it rights on the level of a human, but such AI should be as rare as possible.
For most jobs we would use robots for, sufficient level of programming would be enough, no need for a thinking, feeling machine to for example harvest corn in the field. Even human interactions heavy jobs like waiter or servant could be made possible with machine learning of today, when only proper databases for voice recognition and understanding commands will be ready.
Fully conscious AI should be only made for the purpose of high level decision making, research or if it would work in the conditions that weren't previously explored and which can be surprising (like a spaceship's AI).
youtube · AI Moral Status · 2020-08-14T08:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugy6dKYY2Pz1ySbziSd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyDO6H8_ZXfOnNhhgJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwq09tVrSm_tiDh4mZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzUKA_GYgqqlc5RyiR4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_UgwpDFadINPM-rpu6Nt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugz-o_rLhv4UI8ynRsZ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugx3y5eMySIp2aUQLNN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgxIbVfqQ-S41-cOoAR4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzcGhOARlbguAAPRAF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyqP4EfvDqt1tq0uyV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}
]
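The raw response above is a JSON array with one record per coded comment. A minimal sketch of how such output could be parsed and sanity-checked is below; the allowed values per dimension are inferred from the table and the sample records shown on this page, so the real codebook may include labels not listed here.

```python
import json

# Allowed values per coding dimension, inferred from this page's examples
# (assumption: the actual codebook may define additional labels).
ALLOWED = {
    "responsibility": {"none", "user", "developer", "company", "distributed"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"outrage", "indifference", "mixed", "fear", "resignation", "approval"},
}

def validate_codes(raw: str) -> list:
    """Parse a raw LLM response and check each record against the codebook."""
    records = json.loads(raw)
    for rec in records:
        # Comment IDs on this page all carry the ytc_ prefix.
        if not rec.get("id", "").startswith("ytc_"):
            raise ValueError(f"unexpected comment id: {rec.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if rec[dim] not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} value {rec[dim]!r}")
    return records

# One record taken verbatim from the response above.
raw = ('[{"id":"ytc_Ugz-o_rLhv4UI8ynRsZ4AaABAg","responsibility":"none",'
       '"reasoning":"deontological","policy":"regulate","emotion":"approval"}]')
records = validate_codes(raw)
print(records[0]["policy"])  # regulate
```

Rejecting unknown labels early, rather than coercing them, makes it easy to spot when the model drifts from the codebook.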