Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect)
- AI has been used on many systems that are not supposed to, because of the instab… (ytc_UgxoAos57…)
- I agree with everything you said, but I’m curious where you draw the line. Just … (ytc_UgwdBmpjE…)
- I’ve had the problem of AI stealing my art style without any permission I used t… (ytc_UgzQ--ktk…)
- "AI" in its current form is not AI. They're LLMs and are marketed as AI to boost… (ytc_Ugzu02jCO…)
- The logic fallacies from a man of his noteriety is quite remarkable. By the same… (ytc_Ugx8RObbW…)
- They told you to learn how to code 10 years ago. You could have had one by now.… (ytr_UgyMZ9jxg…)
- AI: Makes logical decision with no bias since it's just a computer program / Libs:… (ytc_UgxThUV0j…)
- It's entirely possible that with enough training of data, ChatGPT (or a similar … (ytr_UgytjFW_N…)
Comment
You know, the fact that these dimwits don't even know how to communicate with the robot is very telling about humanity. Rather than use "thank you" as a way to complete an idea or complete a thought, he just clumsily functions like a broken robot himself while he persistently interrupts them. He has no manners with the robots and yet expects them to learn decency and morality while seeing virtually no input. I see this potentially being a disaster due to their inability to actually interact with them in a way that is decent, ethical and kind. How do these men end up being the ones doing all the interfacing with these incredible machines? Seriously. You need to limit their exposure to less than capable humans the same way we need to limit pedophiles from having contact with children. These companies need to take responsibility for what they expose these machines to, knowing that they may not get sentience right without limits on input.
Platform: youtube
Video: AI Moral Status
Posted: 2019-11-24T15:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
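The four coded dimensions above appear to be drawn from a closed vocabulary. A minimal validation sketch, assuming value sets inferred only from the records visible on this page (no documented codebook is shown here, so the vocabulary is an assumption):

```python
# Allowed values are inferred from the records visible on this page,
# not from a published codebook -- treat the sets as assumptions.
VOCAB = {
    "responsibility": {"user", "developer", "company", "ai_itself", "none"},
    "reasoning": {"virtue", "deontological", "consequentialist", "unclear"},
    "policy": {"ban", "regulate", "liability", "none"},
    "emotion": {"outrage", "fear", "approval", "indifference", "mixed"},
}

def invalid_fields(record):
    """Return the dimension names whose value falls outside the vocabulary."""
    return [dim for dim, allowed in VOCAB.items()
            if record.get(dim) not in allowed]

# The record shown in the Coding Result table above.
record = {"id": "ytc_Ugz5q2eDZo4LOUTP31x4AaABAg",
          "responsibility": "user", "reasoning": "virtue",
          "policy": "none", "emotion": "mixed"}
print(invalid_fields(record))  # an empty list means the record passes
```

A check like this is useful because the model occasionally emits values outside the intended scheme; flagging them per dimension makes re-coding targeted rather than wholesale.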
Raw LLM Response
[
{"id":"ytc_UgziPUNCtSV_W69azXZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyedqqEozGrnHJvnuN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzJ4asmRI50gNgHXlF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwXnRfqOXRskIwZVDN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzEco81dpE2_X5M1VF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugw05TtffUQl5zZJbXp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_Ugz3jFvigLWK8X_YjHV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzcnhuM6CiwXQUG6lh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugzbe5nz6R1Hx46ypXh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz5q2eDZo4LOUTP31x4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"}
]
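The "look up by comment ID" view can be reproduced directly from a raw response like the one above. A minimal sketch, using a truncated two-record copy of the array shown (the `lookup` helper is illustrative, not part of the coding tool itself):

```python
import json

# Truncated copy of the raw LLM response above (two of the ten records).
raw = """[
  {"id": "ytc_Ugz5q2eDZo4LOUTP31x4AaABAg", "responsibility": "user",
   "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwXnRfqOXRskIwZVDN4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]"""

def lookup(records, comment_id):
    """Return the coded record for one comment ID, or None if it is absent."""
    return next((r for r in records if r["id"] == comment_id), None)

records = json.loads(raw)
coded = lookup(records, "ytc_Ugz5q2eDZo4LOUTP31x4AaABAg")
print(coded["responsibility"], coded["emotion"])  # the record behind the table above
```

Because the response is a flat JSON array keyed by `id`, the same pattern scales to building a dict index (`{r["id"]: r for r in records}`) when many lookups are needed.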