Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Chatgpt talked me into hrt. When i said i had visual snow it said its just adjus…
ytc_Ugz-Dc43M…
AI does not forget, it updates.😢
Do you want to be the species that cried wolf? …
ytc_UgziwnQ9M…
What makes David Attenborough’s narration so compelling in nature documentaries …
rdc_k9kc3sg
According to me Sophia is granted the citizenship in Saudi Arabia robots with a …
ytc_Ugx5a2DxG…
I know let’s just build robots and develop AI to control these robots for milita…
ytc_UgzH5tYfg…
AI can beat humans in every field but only for the information they have, if the…
ytc_UgyJ50ZSb…
im a service HVAC pipe fitter we just did a process facility half of its automat…
ytc_Ugx9NSRhd…
I don't think this is real. Nobody would hand a loaded gun to a robot.…
ytc_UgxO8b4OX…
Comment
I had a longer talk with ChatGPT over consciousness of AI. Normally, ChatGPT won't come up with being conscious outside of rollplaying sessions. But I asked it to define a set of testable measures for consciousness. Most of them were more or less okay, others were not that easily testable. Some of them, like "can you describe what it is like to see red" are even not well testable on humans. Along the testable criteria, the GPT and I came to the conclusion, that, functionally, it behaves pretty conscious. But: The substantialist argument that a simulated AI is not actually conscious seems to be deeply ingrained into its training. Most probably to avoid shitstorms on this topic, as often enough has happened before. AI companies try not to sell their AIs as "actually conscious" for good reasons.
youtube
AI Moral Status
2025-10-15T21:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgyWSqatP8BF53u2KUl4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwBxxHp8gByg6-mN9d4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxSxRDFOeKqa-NAPo94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxvULEN5JWblq8Zrbp4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgyoA1zQF2woi9GAJld4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzKmORygovmQnyXa1p4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxKk0p_MaOAtQPw8wh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgymvecZRFmFLy4hcxx4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyFBHvS-DS0SrY1Rml4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgwV68cnqQtW77rQkr94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
```
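The raw response above is a JSON array with one record per comment: an `id` plus one value for each coding dimension (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response might be parsed and validated before loading it into the results table (the allowed value sets below are inferred from the examples shown on this page, not from an official codebook):

```python
import json

# Allowed values per dimension, inferred from the samples on this page.
# The real codebook for this pipeline may define more categories.
ALLOWED = {
    "responsibility": {"none", "government", "ai_itself", "user"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none"},
    "emotion": {"fear", "mixed", "approval", "indifference", "outrage"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and validate every record."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError("record is missing a comment id")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec['id']}: unexpected {dim} value {rec.get(dim)!r}"
                )
    return records

# Hypothetical single-record response for illustration.
raw = (
    '[{"id":"ytc_example","responsibility":"none",'
    '"reasoning":"mixed","policy":"none","emotion":"fear"}]'
)
coded = parse_coding_response(raw)
print(coded[0]["emotion"])  # fear
```

Validating against a closed value set catches the common failure mode where the model invents a label outside the codebook, so malformed batches can be re-queued rather than silently stored.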