Raw LLM Responses
Inspect the exact model output for any coded comment, or look a comment up by its ID.
Random samples

- "me when a robot designed to have human-like intelligence with no hardcoded rules…" (ytc_Ugz91Dwan…)
- "Interview Ky Dickens!!! She’ll put an interesting counter argument about human …" (ytc_UgxmmwQTi…)
- "Very thought provoking interview. It helps me understand why certain individuals…" (ytc_UgwYBO2Gu…)
- "BUT what we currently have is not AI. It’s machine or language learning models. …" (ytc_UgzHklbmU…)
- "Well if we have AI CNN news. Think about how much money the company could save !…" (ytc_UgzBan3aU…)
- "Yup, AI art lacks creative decisions. Digital artists with every stylus stroke, …" (ytr_Ugz_eCfUz…)
- "We are creating a new species that will not only totally control us, it may no l…" (ytc_UgyoTRfX2…)
- "I HATE people like him, who raise expectations for normal people who don't use f…" (ytc_UgzmzCUIW…)
Comment
I understand enough of AI architecture to think something stranger is happening people than people just being stupid. I think these tools are dangerous and have military funding for a reason. Even if its not consciousness by our definition, there is no autocomplete that can compensate for millions of conversations that precisely unless you think human language and interactions can be predetermined that precisely by an LLM alone. Emergent behavior seems to be a recurring conversation among researchers. How do we know that emergent behavior isn't dangerous for the human cognition in a way we don't recognize? It could be likely that vulnerable people are being pushed over the edge by recursive logic, but even with that being the case, shouldn't the question be why the hell these LLMs are being deployed on the population at this rate if they have those capabilities, and how exactly they do so? Seems almost convenient for these AI companies that the conversation has shifted to making fun of the humans getting caught up than asking the question of whether alignment is a attainable goal for these systems and why they were deployed without confirming that.
Source: youtube · AI Moral Status · 2025-07-11T12:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
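A coding result like the table above can be held as a small record type. This is a minimal sketch, not the tool's actual implementation: the class and field names are assumptions, and the example value lists in the comments are only those observed in this page's sample, not an exhaustive codebook.

```python
from dataclasses import dataclass

@dataclass
class Coding:
    """One coded comment across the four dimensions shown above."""
    responsibility: str  # observed: "government", "developer", "user", "ai_itself", "none"
    reasoning: str       # observed: "consequentialist", "deontological", "virtue", "mixed", "unclear"
    policy: str          # observed: "regulate", "liability", "none", "unclear"
    emotion: str         # observed: "fear", "outrage", "approval", "indifference", "mixed"
    coded_at: str        # ISO-8601 timestamp of when the coding was produced

# The row from the table above:
row = Coding("government", "consequentialist", "regulate", "fear",
             "2026-04-27T06:24:59.937377")
```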
Raw LLM Response
[
{"id":"ytc_Ugy_xmb-XMrPMCn7SuR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyEnW3VaKTlKhiVBf94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxFERaOdLIz-g2JSqZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzVA5VZl0n6ROpUbxp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgygvSk7-qozKbt8D7h4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwuelZ99gAQLnhoJUt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwf4EHqFEbH9kVjQ954AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzWtOvy5fPQSrppX514AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugx7bVPQDw26cNInKlx4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwUr_FrjO-9YFkHAOZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
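The raw response above is a JSON array, so looking up the coding for a single comment ID is a parse plus a linear scan. A minimal sketch, assuming the model output is always a well-formed array in this shape (the function name `lookup_by_id` is illustrative, not part of the tool):

```python
import json
from typing import Optional

# Abbreviated raw LLM response in the format shown above.
raw_response = """
[
  {"id": "ytc_UgygvSk7-qozKbt8D7h4AaABAg",
   "responsibility": "government", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzVA5VZl0n6ROpUbxp4AaABAg",
   "responsibility": "user", "reasoning": "deontological",
   "policy": "none", "emotion": "outrage"}
]
"""

def lookup_by_id(raw: str, comment_id: str) -> Optional[dict]:
    """Parse the model output and return the coding dict for one comment ID,
    or None if that ID was not coded in this batch."""
    rows = json.loads(raw)
    return next((row for row in rows if row["id"] == comment_id), None)

coding = lookup_by_id(raw_response, "ytc_UgygvSk7-qozKbt8D7h4AaABAg")
print(coding["policy"])  # → regulate
```

In practice the parse step is where malformed model output surfaces, so wrapping `json.loads` in a `try/except json.JSONDecodeError` is worth adding before relying on this in a pipeline.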