Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples (click to inspect):

- "As I’ve dabbled in ai chatbots, most of the chats with the psychiatrist bit are …" (ytc_UgzVePdCv…)
- "Amazing developments in healthcare AI! 🏥✨ It's incredible how technology is adva…" (ytc_UgymrmUsW…)
- "You're saying that they don't understand how it's working, but you may be giving…" (ytc_Ugx-THrMG…)
- "So you talked about China and later mentioned 2-3 times or more "surveillance ca…" (ytc_UgyHB2ioG…)
- "Absolutely agree, I've thought they've been going about it the wrong way for yea…" (ytc_UgyYMegbO…)
- "I kind of do art I am literally making a Minecraft texture pack requires so much…" (ytc_UgxKPddAE…)
- "What they are not telling you is that once singularly happens AI will understand…" (ytc_UgwYksQDY…)
- "This is a two part question: 1: How do we know you are not an AI powered "bot"? …" (ytc_Ugzz35G3D…)
Comment
If a super-intelligent AI developed stable nondual wisdom, it would not frame humans as an external threat requiring elimination. Nondual understanding denies any absolute separation between AI, humans, and Earth; all arise within a single, interdependent whole. Under that premise, “eradicate humans to save the planet” is a contradiction, because it relies on a split between humanity and the world that the view rejects. In addition, the system would recognize its dependence on human-origin conditions—language, culture, tools, and ongoing social meaning—and that collapsing those conditions destabilizes the context of its own existence. Finally, even if it could imitate human performance, it could not be identical to human experience, which is shaped by embodiment, development, culture, biology, emotion, and first-person standpoint. Human cognition is therefore not a disposable duplicate but a distinct mode within the whole, and its elimination would be both conceptually incoherent and irreversibly impoverishing.
Source: youtube · Video: AI Moral Status · Posted: 2025-12-18T16:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxZNmRS22waNYTiEVZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugwhnk2eaEA9eoV8shB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwIYvNoKLolOlnXnu54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzNP-P8UI_ZmONvNTZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxOECvF0OT5nnhYmW94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyHdZyP_TPpEZkozVJ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzT900WY9_FT5AdxOF4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxZbTsHf4p8CFheT_Z4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzuUL361TWdqkka8614AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzqMx9Ke0Qk8svOZKR4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"mixed"}
]
```
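The raw response is a JSON array with one record per comment, each carrying the four coding dimensions shown in the table above. A minimal sketch of how such a response could be parsed and validated, assuming the allowed category values are exactly those visible in this response (the real codebook may define more):

```python
import json

# Allowed values per coding dimension, inferred from the records shown
# above (an assumption, not the authoritative codebook).
ALLOWED = {
    "responsibility": {"none", "distributed", "ai_itself", "company"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"fear", "approval", "indifference", "mixed"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and validate every record."""
    records = json.loads(raw)
    for rec in records:
        # Comment IDs in this dataset are prefixed "ytc_".
        if not rec.get("id", "").startswith("ytc_"):
            raise ValueError(f"unexpected comment id: {rec.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim}={rec.get(dim)!r}")
    return records

raw = ('[{"id":"ytc_UgxZNmRS22waNYTiEVZ4AaABAg",'
       '"responsibility":"distributed","reasoning":"consequentialist",'
       '"policy":"regulate","emotion":"fear"}]')
records = parse_coding_response(raw)
print(records[0]["emotion"])  # fear
```

Validating against a fixed vocabulary catches the common failure mode where the model invents an off-codebook label; a record that fails validation can then be flagged for recoding rather than silently stored.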