Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
These are NOT only AI data centers. These are data centers that store your data,…
ytc_UgyaGwCnO…
Well, staying human is def a carrier path that I’m will to keep pursuing🙄 Bein…
ytc_Ugy2Dj_2M…
That is what made them same but with ai actually making software programs the ov…
ytr_UgwxjxEAR…
We appreciate your perspective on the impact of technology on society. It's impo…
ytr_UgzsvMmau…
I always speak politely to AI chatbots. It's not just because I'm scared of the …
ytc_Ugz-lGSC2…
I'm confused if the birthrate is 1.6 and realistically we need 2.7 birthrate, is…
ytc_UgweH75KJ…
Putting survillence camera to scrutinize kids is too harsh....I am against this.…
ytc_UgxRZyURe…
this explanation will get scraped by ai and it will learn to reproduce this pain…
ytc_Ugx_d7JUC…
Comment
@TheRatsintheWalls
The problem is that we in fact know very little about how human consciousness works. We can barely understand the phenomenon, so reducing it to "a neuron that either fires or doesn't" does not describe reality very well, because the brain is not just a binary meat computer. If it were, you could (theoretically) create consciousness even from an analog wooden computer. Any theory that assumes human consciousness is entirely algorithmic creates a lot of other paradoxes anyway.
Instead, I find Penrose's quantum consciousness theory much more probable.
He describes it in more detail in his book "The Emperor's New Mind". That theory holds that human consciousness cannot be algorithmic, and that to "work" it relies on quantum phenomena happening in our brains (especially in microtubules).
That is why I think that if we ever create a real AI that can actually think, we will have to use some sort of computer based on quantum information science, not a conventional one.
youtube
AI Moral Status
2021-04-22T13:5…
♥ 6
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytr_UgzWimeXE844iE4Jvyp4AaABAg.9KYXHMB4Xrz9MRcW1RF4H4","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgzWimeXE844iE4Jvyp4AaABAg.9KYXHMB4Xrz9MSFnbR3c3_","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgzMQ0II6vtFlBModZ94AaABAg.9KLmzoDbv-69LFG5TACHZR","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgzMQ0II6vtFlBModZ94AaABAg.9KLmzoDbv-69LFKXl_-iV-","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytr_UgzMQ0II6vtFlBModZ94AaABAg.9KLmzoDbv-69M85Ubjwu14","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"approval"},
{"id":"ytr_Ugwj362hz3hdF_vKqQN4AaABAg.9K6EGjJIiGu9KzZKMJ7QbQ","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugwj362hz3hdF_vKqQN4AaABAg.9K6EGjJIiGu9LF50aWjru4","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytr_Ugwj362hz3hdF_vKqQN4AaABAg.9K6EGjJIiGu9LFGtiAqvee","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytr_UgxcfW4fyyasqr0L2kB4AaABAg.9JmYvajVNST9LF9YwRlx8k","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_Ugyc7iVFUnlvOXBGU6l4AaABAg.9Jg5FXNq8z19ag4QJDbwDv","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
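The raw response above is a JSON array with one object per comment ID, each carrying the four coded dimensions. A minimal sketch of how such a response could be parsed and sanity-checked (the allowed-value sets below are inferred from this one sample, not from the tool's actual codebook, and `parse_coding` is a hypothetical helper):

```python
import json

# Allowed values inferred from the sample response above; the real
# codebook may define additional categories.
ALLOWED = {
    "responsibility": {"none", "ai_itself"},
    "reasoning": {"unclear", "deontological", "consequentialist"},
    "policy": {"none", "unclear"},
    "emotion": {"indifference", "approval", "fear"},
}

def parse_coding(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows whose values
    fall inside the (assumed) codebook."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

raw = (
    '[{"id":"ytr_example","responsibility":"none","reasoning":"unclear",'
    '"policy":"unclear","emotion":"indifference"}]'
)
print(len(parse_coding(raw)))  # 1 valid row
```

Filtering out-of-codebook rows (rather than raising) mirrors how a coding pipeline might tolerate occasional malformed model output while still surfacing only valid codes in the table view.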