Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Nono just wait. As we speak the internet is being filled with low quality ai art…
rdc_l9v81s2
AI training AI at this points sounds like itll make the technology stupider, not…
ytc_UgxD6LNVA…
All these bummers talking about AI is hilarious. Remember when Elon was promot…
ytc_Ugx85xXV9…
I know someone who had all the qualifications for a job, but the AI assistant to…
ytc_Ugxl6npyh…
@SolarSands you make this argument at 23:00, and pretty clearly indicate it is y…
ytr_Ugz3T1Tjk…
This story needs to be told, thank you for sharing..YES, chatbot gets smarter, y…
ytc_UgwdFK0pP…
So self-driving cars run red lights, block traffic, hit people, take too long to…
ytc_Ugweit1p7…
*Elon don't have a moral compass?... I'm sure that guy is saving the world ?*…
ytc_UgwHcrctJ…
Comment
You train your AI using hundreds of conflicting cultures, morality and philosophies through Social Media, then wonder why they become psychopathic.
Imagine teaching a gradeschooler that every concept has a solid basis and is the correct one, even if they are diametrically opposed to each other. Not only that, but each one claims the other is incorrect.
Basically the kid comes out the other end of all that learning that the correct course is whatever makes more sense to their situation at the moment. Whatever benefits them the most is the correct course, because they don't have defined morality. They have *all* morality.
youtube
AI Governance
2026-03-19T15:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_Ugw7ARqnzkhlo-y5TuZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyJU0OND3ifyhkCraN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugyr5L3Y7upMoU8aLq14AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugxy3K-02v-jm7WWsk54AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwSQegabWR1c_jvRX94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwNsyywDUuSwNDTJ8h4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzJTpeC3FJgyR6443N4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwaV7LHWsMfyee_au54AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugy8d6s-o9R92U6Mq3J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyU0Btsah_0sgRGuhp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
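
A raw response like the one above can be validated before its codes enter the dataset. Below is a minimal sketch, assuming the four dimensions and the value sets visible in this sample; the real codebook may define more categories, and `parse_coding_response` and `ALLOWED` are illustrative names, not part of the tool shown here.

```python
import json

# Allowed values per dimension, inferred from the sample output above.
# (Assumption: the actual codebook may permit additional categories.)
ALLOWED = {
    "responsibility": {"developer", "company", "distributed", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "resignation", "indifference", "mixed"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response (a JSON array of records) into a
    {comment_id: codes} map, dropping records that lack an id or that
    carry a value outside the codebook."""
    coded = {}
    for record in json.loads(raw):
        comment_id = record.get("id")
        if not comment_id:
            continue  # skip malformed records with no comment ID
        codes = {dim: record.get(dim) for dim in ALLOWED}
        if all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            coded[comment_id] = codes
    return coded

# Usage with a single (hypothetical) record in the same shape as above:
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}]')
result = parse_coding_response(raw)
```

Keeping validation separate from lookup means the "Look up by comment ID" view can stay a plain dictionary access over `result`.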