Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- These codes are just randomly generated, not real. They never were as another co… (`ytr_UgzzhFiAN…`)
- Think like this and you will realise whatever he said or other studies said is a… (`ytr_UgyeKN9Fx…`)
- bro rips off lord of the rings frame by frame then has the gull to post cringe a… (`ytc_UgzIU1nHD…`)
- It's a byproduct of the 24 hours news cycle that also tries to be neutral. Inst… (`rdc_fap6044`)
- Glad you're willing to risk your life for a video. German cars are great mech… (`ytc_UgwUGlyXn…`)
- AI Guru: AI could be an existential threat to humanity. Us: So are you going to … (`ytc_Ugzxy1_C3…`)
- The thing that really gets me about the comment saying "Let AI do it and go spen… (`ytc_UgzVYYxGZ…`)
- It’s going to be freaking hilarious if the black robot has a giant dick 💀💀🤣… (`ytc_UgxDxSgPm…`)
Comment
My worst fear is the creation of what the philosopher of mind David Chalmers calls philosophical zombies. These are intelligences that seem conscious. They seem to have feelings and the ability to be happy or suffer. They seem like people who should be granted the full rights of a person. But it's all an illusion. They don't experience anything. They don't feel pleasure or pain, positive or negative emotions.

This would be bad because people will risk their lives and sacrifice their own greater interests in the name of protecting people, especially loved ones. That is a lot of needless sacrifice for something that isn't even really there. We should never make an AI that seems conscious unless there is some way of knowing that these intelligences really are experiencing their lives in a way that gives them rights. But I doubt we could ever be certain of this. How could a test, no matter how ingenious or scientifically advanced, determine whether or not something has qualia? Qualia aren't publicly observable in the way that brain activity or neural nets are; they're private.

I assume other humans are like me because we are the same type of thing. It would violate the principle of mediocrity to assume that I'm conscious but that other humans are philosophical zombies. But I can't make that argument for an AI. I don't want to give rights to something that shouldn't have them, and I really don't want to deny rights to something that should have them.
youtube
AI Moral Status
2023-12-11T03:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyMeyRqqGPurpIT4_p4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyLW-vg7HcApqdE5Al4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgylBmSHCC4uxvGyypp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxnsJ1WOYuvZ72myZx4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugwqgkwa-FH6M8CN0HZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyMKE182YD4VuSf3Ex4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzjBSq-kgwoHRLNrYJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugy_FwQeJ1AW8Tj2rs14AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgwmUNzUWt5r1MgYiN54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwck4jiJBWzyS0DbLZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"fear"}
]
```
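The raw response pairs each comment ID with a code on the four dimensions shown in the table above (responsibility, reasoning, policy, emotion). A minimal Python sketch of how such a batch response might be parsed and validated; the allowed value sets are inferred only from the codes visible on this page (the full codebook may define more), and the function name is hypothetical:

```python
import json

# Allowed codes per dimension, inferred from the values seen in this sample.
# Assumption: the real codebook may contain additional codes.
ALLOWED = {
    "responsibility": {"ai_itself", "distributed", "none", "developer", "unclear"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"fear", "approval", "outrage", "indifference", "resignation"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and check every record's codes."""
    records = json.loads(raw)
    for rec in records:
        # Every record must carry an id plus one value per dimension.
        missing = ({"id"} | ALLOWED.keys()) - rec.keys()
        if missing:
            raise ValueError(f"{rec.get('id', '?')}: missing fields {missing}")
        for dim, allowed in ALLOWED.items():
            if rec[dim] not in allowed:
                raise ValueError(f"{rec['id']}: unknown {dim} code {rec[dim]!r}")
    return records

# Example with the first record from the response above.
raw = ('[{"id":"ytc_UgyMeyRqqGPurpIT4_p4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"deontological","policy":"unclear","emotion":"fear"}]')
coded = parse_coding_response(raw)
print(coded[0]["emotion"])  # fear
```

Validating against a fixed code list at parse time catches the most common failure mode of LLM coders, inventing an off-schema label, before it reaches analysis.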