Raw LLM Responses
Inspect the exact model output for any coded comment, or look a comment up by its ID.
Random samples — click to inspect

- "AI" is a pretty subjectively defined term. I think in the context of this video… (ytr_UgguY3yjG…)
- "What if the measure of it working,was that you never had to worry about it?"...… (ytc_Ugx2XpsDN…)
- they need to BAN AI in the hiring process. whoever uses it needs to be fined int… (ytc_UgxTnPnUy…)
- We're seriously living through the final years of a human dominated world. To un… (ytc_UgwfZVoQh…)
- it just seems like its an slightly enhanced version of FSD? Which the new FSD i… (ytc_UgxrPZ91p…)
- I just don't want children at all now, just think about what hell teachers go th… (ytc_Ugy20tDCG…)
- THANK you for sharing this level of detail. I’ve been wondering why some people … (rdc_mumdlbt)
- I robot / Terminator / Serigetes / Those three movies come to reality,and every one o… (ytc_UgyPVGgsK…)
Comment
23:27 ...
My question is, do LLMs even have a concept of bad?
Like if the individuals who are training the LLMs to communicate are simultaneously posting on sites that post memes that joke about things that are "bad" that AI is following to get a the jest of human language interaction... how would an LLM be even able to differenciate between lies and what it's supposedly being programmed to do?
Unless this kind of programming is _intentional_ ... led by accelerationists to have a tool to destroy humanity without accountability???
My apologies if this is the direction that followed my timestamp.
Platform: youtube · Video: AI Moral Status · Posted: 2025-12-01T18:0… · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugxwk3tmMDv7CKwv5I54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyLZo05xoQ-Mnjxegl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugw6uUggqCeFxvMUlyd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzPBf2fn8xgifhFjJV4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzJQklkoy7-1oKel6F4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwWWcTTDJw2ntGmYl94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxSUPF526z1W85vWCR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxI7a__RUxhTdMR2RJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwwM6Nqsah0pYFdAnh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgyLoF6DNfxOvQ0g8PN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
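The raw response is a JSON array of per-comment coding records, so looking a comment up by ID reduces to parsing the array and indexing it. A minimal sketch of that step, assuming the response text is available as a string (the `index_by_id` helper is hypothetical; the field names and the two abbreviated records are taken from the response above):

```python
import json

# Raw LLM response: a JSON array of coding records, one per comment
# (abbreviated here to two records from the example above).
raw_response = """
[
  {"id": "ytc_UgwwM6Nqsah0pYFdAnh4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_UgyLoF6DNfxOvQ0g8PN4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
"""

def index_by_id(response_text: str) -> dict:
    """Parse the model output and index the coding records by comment ID."""
    return {record["id"]: record for record in json.loads(response_text)}

codes = index_by_id(raw_response)
print(codes["ytc_UgwwM6Nqsah0pYFdAnh4AaABAg"]["policy"])  # → liability
```

The looked-up record for `ytc_UgwwM6Nqsah0pYFdAnh4AaABAg` matches the Coding Result table above (developer / deontological / liability / mixed).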