Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- 2025 AI is much smarter than any monkey and has been for a few years now… (ytc_UgxfSr2UQ…)
- What he is saying here is that if people tell lies and let evil in their lives—A… (ytc_Ugw1HBkAv…)
- AI is knowledge based, and by its nature, limited. Intelligence can only arise… (ytc_Ugwqjt3YG…)
- Lets say you can build a logic bot and a hallucinatory one and a third ai that i… (ytc_UgwnWgz4M…)
- Yesterday I challenged AI. I asked it to show me „15 anatomically correct hands“… (ytc_UgxnE-lYr…)
- Bro Wait...So he's mad that A.I. guys who, 'needs there jobs' get fired for supp… (ytc_UgzcNMckd…)
- The 1 in a million user that doesn’t have a single non-innocent character ai cha… (ytc_UgzUKXESH…)
- Simple solution: Alice takes her severance package and invest it in the S&P500. … (ytc_UgxM3GkPH…)
Comment

> The big difference between AI and Human is that we as human have emotions, gut feeling, and think of complete new way of resolve something. AI have no conscious, it is limited by what is knows. Human gets new incredible idea's via it's subconscious, which is our true undefined power. Even if AI ever will get conscious, which I don't believe that is possible, it will never get subconscious. AI will always be a tool, nothing more. AI would say: "Probability of success: 2%." Human says: "Screw it, let’s try.", being unreasonable let us to great victories.

Source: youtube, 2026-02-05T00:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzHIX539Cu-TZ8cKFp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxdvGu88epw0ZqEZNR4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzbSW-cSuHSCIA7Yrt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgzewYhDFoE59O2HB7R4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgwVJdBB7DNwxpCPMHN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxzJ24O0ToKh1vOu7N4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugw7YABpw2L4CeTnWqZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwEK4g-NYgqa7xku_h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwG-yf3ki3l-x6_aTl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugyz4nvME-KOBQ7kwCd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}
]
```
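The raw response is a JSON array of per-comment codes, one object per comment, keyed by comment ID with the four dimensions from the table above (responsibility, reasoning, policy, emotion). A minimal sketch of how such output could be parsed and indexed for lookup by ID — field names are taken from the response shown; the `index_by_id` helper is hypothetical, not part of the tool, and only the first two records are reproduced here for brevity:

```python
import json

# Excerpt of the raw LLM response shown above (first two entries only).
raw_response = '''
[
  {"id":"ytc_UgzHIX539Cu-TZ8cKFp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxdvGu88epw0ZqEZNR4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
'''

# The four coding dimensions as they appear in the response objects.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(raw: str) -> dict:
    """Parse the model output and index the coded dimensions by comment ID."""
    records = json.loads(raw)
    return {rec["id"]: {dim: rec[dim] for dim in DIMENSIONS} for rec in records}

coded = index_by_id(raw_response)
print(coded["ytc_UgzHIX539Cu-TZ8cKFp4AaABAg"]["responsibility"])  # ai_itself
```

Indexing by ID is what makes the "look up by comment ID" view cheap: after one parse, every inspection is a dictionary access rather than a scan of the array.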