Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- `ytc_Ugy0cRIBl…`: I'd be genuinely shocked if AI coild never create similar or the same art eventu…
- `ytc_UgzPE592U…`: Also, if you will allow the public in on the development of these AI machines, y…
- `ytc_Ugw_45m9j…`: Agi is right around the corner everyone, just like fusion has been for the past…
- `ytc_UgyH5Ph2K…`: anti ai mfs when they have to get a job instead of posting mediocre art: 💀 I'…
- `ytc_UgyjwGvVa…`: We can write all the reasons why this is a not so great idea but humans will hum…
- `ytc_Ugw3JKr53…`: So to sum it up the question these "geniuses" never asked is "should we be doing…
- `rdc_kvdsznp`: Can anyone explain to me how AI can extinguish humanity? Like the physical mecha…
- `ytc_Ugz-fve9H…`: We're already there, bots on social media manipulate us all the time, they provo…
Comment
I am convinced that we will never get to the bottom of consciousness and be able to properly scientifically understand it. It seems there just is a sort of subjectivity-objectivity barrier. Whatever you try to do, you simply cannot explain subjectivity objectively. You can point to the areas of the brain which light up or whatever, and if it's the same area in two humans you can assume it's the same experience, but how the hell do you know it? You just don't, you can't get into another subjective experience. Is your red the same as my red, is the famous question, and we cannot really answer it. We only know our own consciousness, we are not even sure the next person is really conscious, it's all just assumption, it quacks like a duck - I look the same as that person so they must have the same experience. So no, we will never be able to tell if AI has consciousness. It will always be the case of 'quacks like a duck'.
youtube · AI Moral Status · 2023-08-21T01:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxhScuUOtRFTabR0C14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzY8StKi1iYEHSuEgJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyVUkr6ZObxsAJ2ihh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugwu0SKI6PvNLxswvdp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzxP2zJ3Lp0FMzXQw14AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz5M3Li_xQNfbuYT0B4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw0M9aUKL_PY_lQmtp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgwZ7-7g4UwpKu3h1IF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugy6M12eZ2hA9Aj4yB14AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugz2a00CIqW6yOxiLPx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
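The lookup-by-comment-ID view above can be reproduced from the raw response directly. A minimal sketch, assuming the batch output is a JSON array of row objects with the fields shown (`index_codings` is a hypothetical helper name, not part of the tool):

```python
import json

# Raw LLM response, abbreviated to the first two rows shown above.
raw = """[
  {"id":"ytc_UgxhScuUOtRFTabR0C14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzY8StKi1iYEHSuEgJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]"""

def index_codings(response_text):
    """Parse a batch coding response and index each row by comment id."""
    rows = json.loads(response_text)
    return {row["id"]: row for row in rows}

codings = index_codings(raw)
print(codings["ytc_UgzY8StKi1iYEHSuEgJ4AaABAg"]["emotion"])  # -> outrage
```

Each dimension of the coding table (responsibility, reasoning, policy, emotion) is then just a key on the looked-up row.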