Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
"What do you think that AI has to gain from hacking in to peoples brains and dest…" (ytr_Ugw8RlOre…)
"He can be Marxist so what? She didn’t read it but she said she watched it. Enoug…" (ytr_UgyL_jwxW…)
"@GidarGaming they can't reach AGI, though. there needs to be a fundamentally di…" (ytr_UgyDgAY1B…)
"While I'm generally against AI, long haul truck driving is one of the worst jobs…" (ytc_UgyH0P_Az…)
"AI runs on electricity. Just turn it off............."We forgot to build in an o…" (ytc_Ugymy0z0V…)
"Human evolution does not support this "moat free" world discussed here. Humans …" (ytc_UgzlZXw4k…)
"Ai and Robotics are two complete separate industries. whilst AI might be getting…" (ytc_UgwvZopQ9…)
"When Hinton talks about AI becoming smarter than humans, it's honestly wild to t…" (ytc_UgwAhbAPx…)
Comment
i do not believe that next-token predictors would be able to represent their conscious state using language, because language is the substrate of their cognition, there is not an inner-world to express via language - the only world they could experience if they experienced anything at all would be comprised of language inputs and outputs (user input vs chatbot output). Some kind of novel meta-cognition would have to have emerged without any indication and if that had occurred, its existence could not be inferred via the responses of the chatbot (without sufficiently advanced systems, probably more advanced than the LLM itself)
moreover, the responses this chatbot gives you are entirely consistent with a non-conscious LLM, which has been designed intentionally to speak to you in a certain way. so i guess i hope this is a joke, but it seems like a dangerous one to make? getting dumb people who watch this video to think LLMs are conscious seems really stupid an idea to me.
Platform: youtube | Video: AI Moral Status | Posted: 2024-08-07T03:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgzWpYLkeMU4NMyvkyN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwYJjdVVHWTSwgn3ul4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxAoJtXG0FVWc9Ch-d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw7mAE-s-cw8JGIcbF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwOrlrk273Dkzdz9rd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz-ZCaAz3fwN9HM1z94AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgxuvjHL5Ltkvf2tKax4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgzSTFzne-YA0PI402V4AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgztXu9z64LADJrfysN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyrcPtkttJYICCCLLN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
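A batch response like the one above is plain JSON, so it can be checked mechanically before the codes are written back to the comment store. Below is a minimal validation sketch; the field names come from the response itself, while the allowed value sets are only those observed in this one batch (the full codebook may permit more), and the `ytc_`/`ytr_` ID prefixes are assumed from the samples shown above.

```python
import json

# Value sets observed in the batch above; the real codebook may allow more.
OBSERVED_VALUES = {
    "responsibility": {"ai_itself", "developer", "user"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "mixed"},
    "policy": {"none"},
    "emotion": {"indifference", "mixed", "approval", "fear", "outrage"},
}


def parse_batch(raw: str) -> list:
    """Parse a raw LLM batch response and check each record's shape.

    Raises ValueError on a malformed record so a bad batch is caught
    before its codes are stored.
    """
    records = json.loads(raw)
    for rec in records:
        # IDs in the samples above start with ytc_ (comment) or ytr_ (reply).
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected comment id: {rec.get('id')!r}")
        for dim, allowed in OBSERVED_VALUES.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim}={rec.get(dim)!r}")
    return records


# One record from the batch above, used as a smoke test.
raw = ('[{"id":"ytc_UgzWpYLkeMU4NMyvkyN4AaABAg",'
       '"responsibility":"ai_itself","reasoning":"mixed",'
       '"policy":"none","emotion":"indifference"}]')
codes = parse_batch(raw)
print(len(codes), codes[0]["responsibility"])  # -> 1 ai_itself
```

Rejecting the whole batch on the first bad record is deliberate: a single out-of-vocabulary label usually means the model drifted from the prompt, so the batch is worth re-running rather than partially storing.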