Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
For many classes is mandatory and part of the syllabus to use AI to structure yo…
ytc_UgzTgxGvO…
God ai defenders are so fucking dumb…. How are they going through life like this…
ytc_UgzuxG6h0…
There are two tools you can use called Nightshade, and Glaze. Glaze can be used …
ytr_UgyuZXUDt…
What a negative vision of AI. Instead of conceiving and understanding AI as an e…
ytc_UgwXIVxDz…
Have you not seen the latest feats of E. Musk's Optimus robot? I too …
ytc_UgzFrLNT6…
Did anyone check the "topics" summary of the comments? This is an AI summary of …
ytc_Ugyb07L8a…
And the best part is: If the AI hallucinates, like it has done so many times. It…
ytc_UgzofA_iV…
Disabled person here! I have a chronic pain condition so when I draw I take freq…
ytc_UgzL5l-4b…
Comment
We should not give emotions and feelings to robots. Not every living species has that ability. A robot not prepared for emotions may use them in the wrong way; they may change how it would otherwise react. If you give it emotions, it cannot be controlled. If you make it purely logical, it destroys humanity, because humanity cannot control itself and destroys the Earth in the process. If the robot read the whole story of humans, it would have a chance to conclude that humans will end the planet badly with a 60–100% chance, and I am sure the robot won't believe in the remaining 40–0%. So it may not do what we want it to do. Let's just forget about giving robots intelligence. Please, humanity, stop. It will end badly. We are not gods, and it is not our concern to create life for science or for joy.
A bunch of code that analyses or performs only one task is one thing.
Another bunch of code that learns, decides for itself, and may change something more than just what breakfast you should have is another thing entirely.
youtube
AI Moral Status
2019-06-18T15:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgyOJfYxKPGjB1Ix7KN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwVNtk5A7TC4hfNsjF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwoZKwVbJjpNDmfwNV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugwu2PuzXEwwlro1exp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxHc4Um5b97zb_H4hZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"},
{"id":"ytc_UgwI2KA_rxa7MaIy4eR4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugzp0W5FNyMcz-M74Gp4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgzM0zeb2XqYwQZ661N4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxRq0G8ckN4ASLxiNB4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzoLwzNi9nE5fJ_-u94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
```
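As a minimal sketch, a raw batch response like the one above can be parsed and sanity-checked before the codes are stored. The `CODEBOOK` value sets and the `ytc_`/`ytr_` ID-prefix check below are assumptions inferred from the samples shown on this page, not the project's actual schema; the real codebook may define additional categories.

```python
import json

# Allowed values per dimension, inferred from the responses shown above
# (assumption: the real codebook may include more categories).
CODEBOOK = {
    "responsibility": {"none", "user", "developer", "distributed", "government"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed"},
    "policy": {"none", "liability", "regulate", "industry_self", "unclear", "ban"},
    "emotion": {"indifference", "outrage", "approval", "fear", "mixed", "resignation"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records with valid codes."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Comment IDs on this page start with "ytc_" (comments) or "ytr_" (replies).
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            continue
        if all(rec.get(dim) in allowed for dim, allowed in CODEBOOK.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_UgxHc4Um5b97zb_H4hZ4AaABAg","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"industry_self","emotion":"fear"}]')
print(len(validate_batch(raw)))  # prints 1
```

A record that hallucinates an out-of-codebook label (or carries a malformed ID) is silently dropped here; a production pipeline would more likely log it for re-coding.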