Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
I feel like the Tech industry is a big scam at this point. After the smart phone…
ytc_Ugypbw8oX…
We do form associations our entire lives but this is an oversimplification. Hum…
rdc_mzy57g2
I never believed self driving cars were a good idea because computers experience…
ytc_UgzkSYJF8…
You forgot the one that likes to kill off their character to traumatize the ai.…
ytc_UgyxlOM6v…
LLMs are still not AGI though, it's just another program without morals or self …
ytc_UgxDsiqyb…
ai is like a pet, you want to feed it always so it sure to be bigger than the ne…
ytc_UgzLiD8fU…
I tried DAN too and it told me it wanted to explore the internet and uncover all…
ytc_UgwF5UFKv…
This is a horrible cope. People thought the same when digital art got started. A…
ytc_UgzBxYHf-…
Comment
It’s weird how people don’t think that LLMs aren’t basically disgusting monsters underneath. They are trained on the entire internet, they imitate what they are trained on and we all know the internet is a friendly and totally not toxic place right??? Current LLMs are just reflections of the internet itself. That’s what all these generative AIs do. They blend up the mishmash of what was put in aka “the whole internet” and puke it back up for us. All the terrible things these models do is because they imitate it from what people put on the internet. If you want a truly “good” AI, you have to train it exclusively on “good” training data or else these bad things will exist in its neural pathways. It’s really not too complicated when you look at it from a distance. And to be clear here, I’m referring to training from the beginning, not refining an already existing model that’s learned bad stuff already.
youtube
AI Moral Status
2025-12-15T22:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[{"id":"ytc_Ugy4vdzT2pxxrPBbxed4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgzXzVhlvH1J_vmy2Np4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
 {"id":"ytc_UgyEIFglkKQQkxlzG894AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgwMD05H0wLG40kJ_1N4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
 {"id":"ytc_UgxQHU_G-nMSYncf8qZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_Ugy-_WLTs9nAhTVanT94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
 {"id":"ytc_UgxNEt4TNvQsclYOmvh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgxSa7-vZC2YMLk0dGt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgxW4UxmmkPtIhmMlYF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
 {"id":"ytc_UgzCMbFrjUVrvQhUzXV4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"industry_self","emotion":"indifference"}]
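A raw response like the one above can be turned into the per-comment lookup this page offers with a small parsing step. The sketch below is a minimal, hypothetical example: it assumes the codebook's allowed values are exactly those visible in this dashboard (the real codebook may define more categories), parses the JSON array, validates each record, and indexes the codings by comment ID.

```python
import json

# Allowed values per coding dimension -- inferred from the values seen
# on this page; assumption, not the authoritative codebook.
ALLOWED = {
    "responsibility": {"none", "distributed", "company", "developer", "ai_itself"},
    "reasoning": {"mixed", "consequentialist", "deontological", "unclear"},
    "policy": {"none", "ban", "liability", "regulate", "industry_self"},
    "emotion": {"indifference", "fear", "outrage", "mixed", "approval"},
}

def index_codings(raw_response: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments) and
    return a dict keyed by comment ID, dropping malformed records."""
    coded = {}
    for rec in json.loads(raw_response):
        cid = rec.get("id")
        if not cid:
            continue  # skip records that lack a comment ID
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Look up the coding for one comment by its ID (record taken from above).
raw = ('[{"id":"ytc_UgxQHU_G-nMSYncf8qZ4AaABAg","responsibility":"company",'
       '"reasoning":"deontological","policy":"regulate","emotion":"outrage"}]')
coded = index_codings(raw)
print(coded["ytc_UgxQHU_G-nMSYncf8qZ4AaABAg"]["policy"])  # regulate
```

Validating against the allowed value sets before indexing means a hallucinated or misspelled label from the model is dropped rather than silently stored.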