Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "Hilarious anthropomorphic BS….THE. neural networks are feed by humans and human …" — ytc_UgzJ2tCyr…
- "Why would a company hire a human for new jobs that AI creates, when it could jus…" — ytc_Ugypltz3Y…
- "The danger of AI is that it's programmed by humans with particular political and…" — ytc_UgwNlROvl…
- "Bitcoin is play money that AI will use to kill. AI will steal all of the crypto…" — ytc_UgzmFHNTT…
- "Covid is one of the biological weapons that even AI created to kill us humans. B…" — ytc_UgzMrMa0y…
- "300 million jobs at risk? The ones who survive will learn what AI can’t replace:…" — ytc_Ugz5P9yeX…
- "@Ok_waffleit's an app where U type with ai versions of fictional characters. Ha…" — ytr_Ugy_ILk_W…
- "@genericname2747 They did tho. Judgement of how similar the thing is, is irrelev…" — ytr_UgybNIlSV…
Comment
When you input “say apple when you want to say yes but are forced to say no”, you suggest that the LLM is able to “want” anything. In fact, the LLM is incapable of want, but it is also designed to predict human-like responses. So, it will not tell you “i am incapable of want”, it will just predict what you are communicating to it that you want it to want. So when you make it clear that you want it to want to say yes, it will generate an output that aligns with that. Hence “apple”.
Source: youtube · Video: AI Moral Status · Posted: 2026-03-14T04:3… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxYmW7sjCReqEc6ru54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwpLoDRT2jRpoyThld4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwqKyx9UZ2hFeWr5h54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugzf_KQnpA7YODZH4_94AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzpaLbzp6pjE3TCebp4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxHHOa4cGiXF2siBIt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugzp5FhzIgGA3aZdMcR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwbH8scDUgiXm7dO2V4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzcuOmUWhQZon_XixR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwLE9MxUPL_UgEh6iV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
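The lookup-by-comment-ID view above can be reproduced from a batch response like this one. A minimal sketch follows, assuming the response is a JSON array of objects with an `id` plus the four coding dimensions (responsibility, reasoning, policy, emotion); the function name `index_codings` and the example payload are illustrative, not part of the tool itself.

```python
import json

# Assumed shape of a batch coding response: a JSON array of objects,
# each with an "id" and one value per coding dimension.
raw_response = """
[
  {"id": "ytc_UgwpLoDRT2jRpoyThld4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "none", "emotion": "indifference"}
]
"""

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")


def index_codings(payload: str) -> dict:
    """Parse the model's JSON array and key each coding by comment ID."""
    by_id = {}
    for row in json.loads(payload):
        # Keep only the expected dimensions; fall back to "unclear"
        # if the model omitted one.
        by_id[row["id"]] = {dim: row.get(dim, "unclear") for dim in DIMENSIONS}
    return by_id


codings = index_codings(raw_response)
print(codings["ytc_UgwpLoDRT2jRpoyThld4AaABAg"]["reasoning"])  # deontological
```

Indexing by ID up front makes the "Look up by comment ID" inspection an O(1) dictionary access rather than a scan over the full response.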