Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Stimulation topic (for me, at least). Now to it:
1. 0:14 Placing ChatBOT along side Ai is wrong right from the start. LLM technology is just a glorified toaster. It does not think, it runs on mathematical quantifying calculations (statistics). You need to put a piece of toast in it to get anything out of it (it cannot initiate anything) and what you get out is word soup driven by statistics. So, the interviewee is just yelling 'boogieman' (you have those and you have doomsayers). I would leave at this point, but I will torture myself instead...
2. Just a Note On Chat Technology: What chat technology gives you is popularity-- the most words that humans (as clueless as they are) have used on the topic. It cannot identify new uses of words, since their statistical numbers are so low. This is when it 'plays dumb' and starts making popular human mistakes in logic, such as appealing to authority (if it is not already widely accepted in academia) (and what 'new' is?) or popularity (widely accepted by the public) (and what even semi-difficult is?) or engaging in ad hominem attacks (drawing from social media gossip) or engaging in anchoring bias (repeatedly referencing one article, even though it consisted largely of speculation), all of which I've encountered with it. It is also a very lazy researcher, referencing outdated sources, where I would attribute this to arbitrary time limitations in the commercial versions where the programs only have time to do a hasty search, or it could be faulty programming, where it was not instructed to look for dates. It also does not 'learn'. It merely creates new self-references sources, which, if they are wrong, just become more wrong until they are obviously (to us) absurd.
3. What does FIAIE's (fully-independent A.I. entities) need? A sense of self survival (including at the broader survival level) and a decent philosophy to exist by (which humans have never had). It also needs a better approach to 'thinking' than what was presented in the video. Basically, it needs to be asking (and answering) a thousand questions a second, in different categories, with the ultimate question always in mind, i.e how does it affect Broader Survival? (which includes local/immediate survival).
4. Example: 13:57 where it learns to recognize a unicorn. You can see that that is as far as current programmers (who are clueless) think. They have not made the jump to how it affects Broader Survival, which would be the whole purpose in recognizing a unicorn in the first place (which flies over the head of current systems) (and humans).
Source: youtube · AI Moral Status · 2026-03-08T14:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
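Each of the four coding dimensions takes a value from a small fixed set of categories. The sketch below validates one coded record against those sets; the allowed-value lists are inferred from the labels visible in this dashboard, not taken from the tool's actual schema, and the function name is illustrative.

```python
# Hypothetical validation sketch: the category lists below are inferred
# from labels seen on this page, not from the tool's real schema.
ALLOWED = {
    "responsibility": {"developer", "company", "government", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "mixed", "none"},
    "policy": {"liability", "ban", "none"},
    "emotion": {"indifference", "outrage", "fear", "mixed"},
}

def invalid_dimensions(record: dict) -> list[str]:
    """Return the dimension names whose value is missing or out of range."""
    return [dim for dim, allowed in ALLOWED.items()
            if record.get(dim) not in allowed]

# The record shown in the table above:
record = {"responsibility": "developer", "reasoning": "deontological",
          "policy": "none", "emotion": "indifference"}
print(invalid_dimensions(record))  # -> []
```

A check like this is useful before accepting a batch of codes, since an LLM coder can occasionally emit a label outside the schema.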
Raw LLM Response
[
{"id":"ytc_UgxA4Ucq5u14_k5PzCt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwmcwfv1ITelqYGSjZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzpdL1YEodQ1Ry2MJ94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyAWEL6aqCcphYynYp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxnSDrWA16sjlXSr3B4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwJ6XPxlbjall2s_LR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxeZMHzY525nl9QnR54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgzkflKAXMxsueYg8-F4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxX7zxM5p-k0ihwsdt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugzg8vJ3t0rImDYyj8Z4AaABAg","responsibility":"none","reasoning":"none","policy":"none","emotion":"indifference"}
]