Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- These companies have gone to far. I hate AI. I really hate calling and getting a… (ytc_Ugyu57LVq…)
- If you need to pay attention to Full Self Driving, it's not even Half Self Drivi… (ytr_UgwkJ--RC…)
- There is no power and no strength except in God, the Most High, the Great 🤲🏽 Indeed, God perfects the creation and breathes the soul into it… (ytc_UgylimTXY…)
- Can't wait for AI to take over the world. The world ain't really great now is it… (ytc_Ugwc4HoJM…)
- a idea to keep ai held back a little is to save a ai find it has a problem go ba… (ytc_UgxU9Mf2u…)
- Yeah im the same, i spent years on art so im not like "HAIL AI, IF WE ALL BECOME… (ytr_Ugyj-epfQ…)
- my art is dogshit, but i prefer having a bad human drawing that a ai crap one… (ytc_Ugwe5R5wY…)
- We need to stop calling them AI artists, they aren't artists at all. Just as AI-… (ytc_UgyVZ0vuJ…)
Comment
First of all, no LLM is conscious or self-aware because they all respond purely to prompting and don't have a constantly active neural process. Second, all this sky is falling stuff is predicated on the unfounded assumptions that (1) superintelligence by itself can create things like superdeadly pathogens, (2) that companies like OpenAI can embed their agents into critical infrastructure and (3) that there will not be multiple ASI agents that are adversarial to each other's capabilities to affect the real world. These assumptions are super naive.
youtube · AI Moral Status · 2025-11-11T06:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgyD_vVgK4lU66Lr9q54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzC5ci0oXYUvBqFe1B4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzZQjSzkiOzmnrTb454AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgziVby8mv9JCe3Ii9R4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz5vty5u3LBNGmPlqh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgzTgAPXXot1H7fSba14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_Ugz9aRh5H-dWDzkCLvV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"approval"},
{"id":"ytc_Ugy-YPCOCebMWJ9NcuZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy86aQ-y1DSo4yqC294AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugx_cFH_A9RtIjRcBJJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"resignation"}
]
```
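A downstream script typically needs to parse this raw response and sanity-check each coded value before storing it. A minimal sketch in Python, assuming the four dimensions shown in the response and value sets taken only from what appears on this page (the full codebook is not shown here, so these sets are an assumption, not the tool's official schema):

```python
import json

# Value sets observed in this page's output; the real codebook may define more.
OBSERVED = {
    "responsibility": {"developer", "company", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"regulate", "industry_self", "liability", "none", "unclear"},
    "emotion": {"fear", "indifference", "outrage", "mixed", "resignation", "approval"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and flag any value outside the observed sets."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in OBSERVED.items():
            value = row.get(dim)
            if value not in allowed:
                print(f"{row.get('id', '?')}: unexpected {dim}={value!r}")
    return rows

# Usage with one row shaped like the response above (hypothetical id):
raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"industry_self",'
       '"emotion":"resignation"}]')
rows = parse_coding_response(raw)
```

Keeping the check permissive (print, don't raise) matches how an inspector page like this surfaces oddities without dropping rows.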