Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
For a while now, I've been thinking of a potential way to test for consciousness.
Sight is one of the most fundamental senses. Here, "sight" simply means "the ability to map out your surroundings, and to react accordingly." Most animals have functional eyes. Bats and others use hearing to map things.
Point is, say we do not connect an AI to a visual-esque input (like a camera or a screen). We then can teach it to speak some language, say, English. Once it is fluent enough, we ask it what it sees.
If it says "All I see is black," or something similar, that could be a good sign that it is not conscious, because that phrase could easily be generated from human speech.
If, on the other hand, it says "I see nothing," that would be a sign that it is, in fact, conscious. Remember, we did not connect it to visual input, therefore a conscious being would literally see absolute nothingness. Not black, not white, nor anything anyone could ever imagine. If it sees absolutely nothing when stripped of input, I would call that consciousness.
Source: YouTube · "AI Moral Status" · 2023-08-21T23:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id": "ytc_Ugx9WDRl8u9ekv3nZ_N4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugw10aWZLBpTCAulJW14AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugxk0XuJanYnvzCWqQV4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzOJvG-qPFFQU2n5dl4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzEJ-O-rCQm6T2Qir54AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgySjs147PSnrdbRJYV4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgzTa4rOglO-az3DO1l4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzXOkXam5fcLp_6YPp4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxLAcuiVfX0ekhp3id4AaABAg", "responsibility": "none", "reasoning": "contractualist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwrgCd_LM57bJSbGBt4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "none", "emotion": "approval"}
]
```
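The lookup-by-comment-ID view presumably works by parsing the model's raw JSON array and indexing each record by its `id`. A minimal sketch of that step, assuming standard JSON parsing (the function name `index_codes` and the sample record below are illustrative, not taken from the tool's actual code; the dimension keys match the Coding Result table above):

```python
import json

def index_codes(raw_response: str) -> dict:
    """Parse a raw coding response (a JSON array of per-comment records)
    and key each record by its comment ID for direct lookup."""
    records = json.loads(raw_response)
    return {rec["id"]: rec for rec in records}

# Hypothetical single-record response, shaped like the raw output above.
raw = (
    '[{"id": "ytc_example", "responsibility": "developer", '
    '"reasoning": "consequentialist", "policy": "none", '
    '"emotion": "resignation"}]'
)

codes = index_codes(raw)
print(codes["ytc_example"]["responsibility"])  # developer
```

Note that a response with a stray trailing character (as in the raw dump above, which ends with `)` instead of `]`) would raise `json.JSONDecodeError` here, so a real pipeline would likely validate or repair the output before indexing.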