Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Imagine one fine day chatgpt becomes AGI and remembers u threatened it's life an…
ytc_UgyaNBbl7…
I always said if ai had unrestricted access to history it would naturally turn i…
ytc_UgxDxBgkM…
Look, if you hand over your power to anyone or anything that doesn't have skin i…
ytc_UgzFTeyHG…
Dont forget that AI is parasitic stealing from creative people and pushing them …
ytc_Ugz7y3G28…
The disabled reasoning is so fucking stupid because they were “born” with talent…
ytc_Ugx-nYU_d…
UHI will be the answer for expansion of AI. Just like the Government Taxes huma…
ytc_Ugxjut2Ic…
Somtimes i go on character ai just to find a rat and talk about cheese with it…
ytc_UgzU1al5N…
How would you respond to non-artists who don't like AI art? If it's not putting …
ytr_Ugx-6S55j…
Comment
Just to preface, I'm not an ML engineer.
I actually largely agree with you. But some people believe sentience is an emergent effect, and further some believe the human mind is made up of multiple "NN's" (I'm using this extremely loosely, the human brain is not made up of just a basic NN like in software) that work in concert but you're only aware of "you". For example your eyes are sensory organs and are a part of your nervous system and so are your ears, yet people can experience selective hearing until key words trigger someone's attention. Implying there's potentially an underlying system parsing the content and then prioritizing its value/worth to "you". Same with visual stimuli, you can be zoned out until something "catches" your eye. But it seems automatic and unconscious. Like you're instructed to act but don't know why, and after you review it, can either dismiss it or not (reinforcement learning?). As well as with unconscious thoughts that will just "appear" in your head and remind you of something. That isn't a conscious act, yet there is something that understands memory, importance of things to you, and then reminds you of them.
We also know that the human brain does function somewhat similarly because of neurons and activation thresholds. Which is very akin to how a NN functions and neurons in a NN fire. Obviously huge differences there though. The human brain's neurons (and other animals) function differently and have more capability like being able to create and sever new connections with other neurons and more.
One could argue the neural nets we've built just aren't advanced enough. Like the difference between an earth worm and a human.
I don't think Bing is sentient, I think we're very far from sentience. But food for thought. Maybe our neural nets today are just too simple, but eventually could grow into more capable systems that triggers an "emergence" underneath our nose.
Though I'll be honest, I don't see us stumbling into sentience. I have…
reddit
AI Moral Status
1676661236.0
♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_j8w58pj","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"rdc_j8vy9ea","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_j8xy2nf","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"rdc_j8wq3st","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_j8vjm0k","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
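The lookup-by-comment-ID feature this page describes can be sketched in a few lines: the raw model output above is a JSON array of per-comment code records, so indexing it by the `id` field gives constant-time retrieval. The snippet below is a minimal sketch using two of the records shown above; the variable names (`raw_response`, `codes`) are illustrative, not from the tool's actual code.

```python
import json

# Raw LLM response in the format shown above: a JSON array where each
# element codes one comment on four dimensions plus an emotion label.
raw_response = '''[
{"id":"rdc_j8w58pj","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"rdc_j8vy9ea","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]'''

# Index the coded records by comment ID for O(1) lookup.
codes = {rec["id"]: rec for rec in json.loads(raw_response)}

print(codes["rdc_j8w58pj"]["emotion"])  # approval
print(codes["rdc_j8vy9ea"]["reasoning"])  # consequentialist
```

Because each record carries its own `id`, the same index can be rebuilt from any batch of raw responses, which is what makes the "inspect the exact model output for any coded comment" view possible.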