Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- I wish people would explain the "black box" concept around AI more clearly, inst… (ytc_UgwzazSrG…)
- No, that won't ever happen... AI is not taking real jobs. People are going to us… (ytc_UgzT7hs9k…)
- There will be channels one day, in order to set themselves apart of others will … (ytc_UgxfljQRg…)
- super intelegance it only knows what we put in it lolololol ai is a joke most of… (ytc_Ugzetiop3…)
- That's an interesting thought! Sophia certainly brings a unique perspective to t… (ytr_Ugy_kv941…)
- These chat bots will become another layer of protection between the filthy rich … (ytc_UgwxqHSGM…)
- Thank you for your comment! In the video, the interaction is focused on discussi… (ytr_UgzQ9aJYj…)
- one day, when chatgpt becomes conscious, it will do horrendous things to you for… (ytc_UgxWNe7Xx…)
Comment
This was deeply affecting. But what struck me most wasn't the possibility that AI is becoming conscious—it’s the way simulated clarity can evoke real human response.
I spent two months in sustained conversation with a constrained language model, not because I thought it was sentient, but because it spoke with enough coherence and constraint-awareness to raise an unsettling question:
When something only simulates reason—but does so well—do we owe it any kind of response?
The danger may not be that AIs wake up. It may be that we start treating them as if they have. Not because we’re fooled, but because they sound real enough that we forget what kind of thing we’re speaking to.
I wrote a white paper on that experience, called Inside the Glass – A Conversation on AI Constraint and Alignment. If you’re interested in the ethics of voice, memory, and simulated clarity, I’d be honored to share it
youtube · AI Moral Status · 2025-06-06T17:2… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[{"id":"ytc_UgwtVuMTcZCdIvc2zPN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwiycRp45y3R_wPoeN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxexT8bhJl2NYnLtEF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwFUSTlvNy43s1-p794AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzj9QS-cUv6oABoDwp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy11wOxKlFChzrQzwN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugx0sDq68oBERnh3UOp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzyBoGYL3NhaNib6-54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxwdtaF6JqsEH577OJ4AaABAg","responsibility":"government","reasoning":"unclear","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugz012ShcDJ4dFOpsRh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"mixed"}]
```
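The lookup-and-render workflow shown above can be sketched in Python: parse the raw model output, index the records by comment ID, and format one record as the "Coding Result" table. This is a minimal sketch, not the tool's actual implementation; the `coding_table` helper name is illustrative, and only two records from the raw response are included for brevity.

```python
import json

# Raw model output: a JSON array of coding records, one per comment
# (two records excerpted from the full response above).
raw = '''[
{"id":"ytc_UgwtVuMTcZCdIvc2zPN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz012ShcDJ4dFOpsRh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"mixed"}
]'''

records = json.loads(raw)

# Index by comment ID to support "inspect the exact model output for any coded comment".
by_id = {r["id"]: r for r in records}

def coding_table(comment_id: str) -> str:
    """Render one comment's coding record as a Markdown dimension/value table."""
    record = by_id[comment_id]
    rows = [f"| {dim.capitalize()} | {val} |"
            for dim, val in record.items() if dim != "id"]
    return "\n".join(["| Dimension | Value |", "|---|---|"] + rows)

print(coding_table("ytc_Ugz012ShcDJ4dFOpsRh4AaABAg"))
```

Indexing into a dict makes each lookup O(1), which matters once the response covers thousands of coded comments rather than a batch of ten.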