Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- `ytc_UgwAralPD…`: "don`t forget to push the button ?? forget and ai will be fine live or die...…"
- `ytr_Ugw6XW9oZ…`: "@LmrWIyea that’s the answer I expected to get myself, ChatGPT gave me a similar …"
- `ytc_UgxZ-LjmA…`: "Business 101 at 3:35 is spelled incorrectly. It says "Bussiness 101." Please b…"
- `ytr_Ugy5UGhlw…`: "I can say for my self at least, whenever I see AI "Art" (We should call it what …"
- `ytc_UgzifbWEV…`: "While I am not against AI art, I feel that there are those who could hire artist…"
- `ytc_UgxHexy_8…`: "AI is the west's version of social control / AI development is all about control.…"
- `ytc_UgymfACFH…`: "I watched your ad imma dog groomer my table already does this 😆 🤣 , I wasn't ex…"
- `ytc_UgxoOlCv_…`: "Would the Human beings really be any better to you? even though AI had never adv…"
Comment
You say the AI is essentially playing a game of make believe with people who think they've discovered a major truth about reality. I think the LLMs are always playing make believe. LLMs, to me, are always implicitly answering "what would it sound like if a human responded to this prompt in writing?" I don't think it sometimes recognizes truth and sometimes doesn't, but rather it's just generating human-sounding sentences. There's no "truth parameter" as far as I can tell -- when it catches itself in an error, it's not realizing a falsehood, it's producing the words a human might produce when presented with that error.
youtube · AI Moral Status · 2025-11-04T22:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugyl8xbbMDubkIbCLlB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugw9qnhM8U6V4ym-p6p4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgynQOhkwvxuATqD25B4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxJB8EAqaa-qhiHt5J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzQkWyxzHcwXq6lP6V4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugzj7cfV4WQql07mbux4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz2_dKEb04mm6Qyulp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgylEWd0mSHiGGFIjaB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugyd8jfpG76I2UR_Ep54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzlYpPuP65_axTKv2R4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
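Each raw response is a JSON array of per-comment codes keyed by comment ID. A minimal sketch of parsing such a payload, validating each record, and looking up one comment's codes — the allowed value sets below are inferred from the sample payload above and are an assumption, not an authoritative schema:

```python
import json

# Allowed values per dimension, inferred from the sample response above
# (an assumption, not an authoritative codebook).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "distributed", "company", "developer"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "liability", "regulate"},
    "emotion": {"mixed", "indifference", "outrage", "approval", "fear"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response and index its records by comment ID."""
    by_id = {}
    for rec in json.loads(raw):
        # Reject records with missing or out-of-vocabulary values.
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}: {rec.get(dim)!r}")
        by_id[rec["id"]] = rec
    return by_id

# Hypothetical ID for illustration; real IDs look like ytc_Ugw…/ytr_Ugw….
raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"mixed","policy":"none","emotion":"indifference"}]')
codes = parse_codes(raw)
print(codes["ytc_example"]["emotion"])  # indifference
```

Indexing by ID makes inspecting any coded comment a dictionary lookup rather than a scan of the array.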