Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
@ghoftamri
To say that AI promoting requires no skill is to discredit the work …
ytr_Ugxc3RXP_…
To be fair towards the AI Art community: This post that was created as a reactio…
ytc_UgxXja-x9…
There's a lot at stake even beyond this. In the US, we cannot even reign in the …
rdc_k0f5we9
There is no "let's slow down and regulate it". Even if corporations care about t…
ytc_Ugzf9SNZ3…
The more I interact with Google’s Gemini, The more I become convinced that video…
ytc_UgxsD5jfy…
this sounded like a nightmarish customer service call... imagining the AI at som…
ytc_UgxjMk09f…
The person who came out with algorithm is a muslim .. and its called algorithm c…
ytc_UgwL88SeP…
What if the AI is humanizing firewalls. A digital version of human boundaries. A…
ytc_Ugx6rJinj…
Comment
Humans too are next token predictors, except the exact scope of the tokens/concepts and architecture aren’t yet modeled. This is probably a big reason why we can squeeze so much IQ from 20w. There’s an argument to be made that true self-awareness is plainly impossible. Modern LLMs can accurately describe what they are, how they function, and the context they’re fed. We can’t even explain our own theoretical architecture so accurately.
youtube
AI Moral Status
2025-10-31T02:3…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_Ugwkf5I1VG9-3QPcsiV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyKImzJEI5bjBdfi4V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxNSpsc9xXpxxv-FSF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyuML_V0-B5EECqCo94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyU5Jdm4-eoCuE-nIB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy2OLctEGun2J6u1IV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxaZLBfKqrXIvI_dMt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxKHywUUqGabg76XMF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgyKU94IJV3IOuG9TWl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxwLPDTCf0z62TS02d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}]
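A minimal sketch of how a raw response like the one above could be parsed and validated downstream. The allowed values in `SCHEMA` are inferred only from the rows shown here (the full codebook may contain more categories), and `parse_codings` is a hypothetical helper, not part of the tool itself.

```python
import json

# Allowed values per dimension, inferred from the raw response above
# (assumption: the actual codebook may define additional categories).
SCHEMA = {
    "responsibility": {"none", "ai_itself", "developer"},
    "reasoning": {"unclear", "consequentialist", "deontological", "mixed"},
    "policy": {"unclear", "none", "ban", "regulate", "liability"},
    "emotion": {"indifference", "mixed", "approval", "outrage", "fear"},
}

# Two example rows copied from the raw response shown above.
raw = '''[
{"id":"ytc_Ugwkf5I1VG9-3QPcsiV4AaABAg","responsibility":"none",
 "reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugy2OLctEGun2J6u1IV4AaABAg","responsibility":"ai_itself",
 "reasoning":"deontological","policy":"ban","emotion":"outrage"}
]'''

def parse_codings(raw_response: str) -> dict:
    """Parse one raw LLM response and index codings by comment ID,
    rejecting any value outside the expected schema."""
    by_id = {}
    for row in json.loads(raw_response):
        for dim, allowed in SCHEMA.items():
            if row[dim] not in allowed:
                raise ValueError(f"{row['id']}: unexpected {dim} value {row[dim]!r}")
        by_id[row["id"]] = {dim: row[dim] for dim in SCHEMA}
    return by_id

codings = parse_codings(raw)
print(codings["ytc_Ugy2OLctEGun2J6u1IV4AaABAg"]["policy"])  # prints "ban"
```

Validating against a closed value set catches the common failure mode where the model invents a category outside the codebook, which would otherwise silently pollute the coded dataset.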