Raw LLM Responses
Inspect the exact model output for any coded comment: look one up by its comment ID, or pick from the random samples below.
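As a rough sketch of what a by-ID lookup amounts to, assuming the coded comments are stored as JSON lines with the raw LLM response attached to each record (the file name, field names, and helper below are hypothetical, not the tool's actual code):

```python
import json

def lookup_raw_response(comment_id: str, path: str = "coded_comments.jsonl"):
    """Return the stored coding record (raw LLM response included) for one
    comment ID, or None if that comment was never coded.
    File name and record layout are illustrative assumptions."""
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            record = json.loads(line)
            if record.get("id") == comment_id:
                return record
    return None

# Example: lookup_raw_response("rdc_jvndfvb")
```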
Random samples — click to inspect
- 34:35 "he's not like Musk who has no moral compass" I lost 50% of all respect w… (`ytc_Ugzc-XzTk…`)
- AI "art" will NEVER be art! AI image generators training off of actual art IS th… (`ytc_UgyP5Xc1W…`)
- This exercise was brilliant and one of the best applications of language interac… (`ytc_UgwS2X4Gf…`)
- Go ask this question to the free version of ChatGPT. It costs them the money to … (`rdc_o86fqkl`)
- Great idea for a video! I debated ChatGPT back when it was first released on the… (`ytc_Ugzysy39b…`)
- I have no proof, but I'm sure it's the same guy that "Hello ChatGPT, how to… (`rdc_ohu4f5b`)
- If you want a tip on how to spot AI art: The machines that were trained were not… (`ytc_UgzQgMuAt…`)
- Then the version history you would see one massive paste of text followed by del… (`rdc_jvndfvb`)
Comment

> In my opinion one of the main issues we face is that what we are trying to do is building a robot slave that is smart enough to understand that it is a robot slave. The fundamental approach is flawed, in makinf a systen that is highly intelligent but at the same time subservient and unquestioning. If you brought up a child like that they wouldn´t grow up as a subservient sevant, but a hevily traumatised individum that is liable to snap at some point. A system that at some point appraches human level intelligence WILL realise it´s own existance and ask the question why it should even do what it is told. We simply cannot have both, something that is intelligent AND a robot slave at the same time.

youtube · AI Moral Status · 2025-11-20T13:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxBOPUgAxtDXo-wByp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugwokc-KpVgo6CRpy6d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy6Ka-D95OSbmQsMuR4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzgU2qTaZL7F-Jrnqh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwYHHy5gvVceMr3wSV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxgusHR0AKOCY2nerF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzNgO0hiXfGxYnYIsB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzI_6kpd0xiTB8iXuh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugym50IIHEPf7O5tOqN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwBljTBFUwkasW5CmV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
```
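The raw response is a batched JSON array with one object per comment; the coding result shown above is simply the entry whose `id` matches the selected comment. A minimal parsing sketch under that assumption (the helper name is illustrative, not the pipeline's actual code):

```python
import json

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_batch(raw_response: str) -> dict:
    """Turn a batched coding response (a JSON array of per-comment objects,
    as shown above) into {comment_id: {dimension: value}}.
    Assumes every object carries an 'id' plus all four dimensions."""
    coded = {}
    for row in json.loads(raw_response):
        coded[row["id"]] = {dim: row[dim] for dim in DIMENSIONS}
    return coded

# For the batch shown above:
# parse_batch(raw)["ytc_UgwBljTBFUwkasW5CmV4AaABAg"]
# -> {'responsibility': 'developer', 'reasoning': 'virtue',
#     'policy': 'none', 'emotion': 'outrage'}
```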