Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I find the AI consciousness topic quite scary on two levels, because I imagine it as a dead entity conversing with you. Not dead in the sense that it was alive and then succumbed, but in the sense that it was never truly alive to begin with. It is almost like a skeleton behind a door that answers your questions, but when you open the door you see only a skeleton with an algorithmic system that reacts to stimuli but cannot do anything else.
The second level of scariness is that the difficulty of differentiating a conscious being from whatever I described above could mean that we are not different in any way, if not in the complexity of the algorithms.
This would mean that we are dead pieces of universe speaking to other dead pieces of universe.
youtube · AI Moral Status · 2024-07-26T08:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgzbpqjJeSrta-6zCN54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgxS6lVmPBqWBarOjh54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz8ULo5TgoW3deziWB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxzgMkQSwCaKnRTbDN4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgwmM54V7epWayeu4754AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxdVqZ9z_mRC3BQnWl4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugwc5PvnEbI4N6vPBd14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxrvtjLBQlrLyGMkSZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwKeQ6miZPvV7eJRlh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzjqyPmgBrNxZpj0Xh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]
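The inspection view above joins the per-comment "Coding Result" table to one row of the raw batch response by comment ID. A minimal sketch of that lookup, assuming the response is a JSON array of code objects like the one shown (the `index_by_comment_id` helper name is illustrative, not part of the tool):

```python
import json

# Raw batch response from the coder LLM: a JSON array with one coded
# object per comment, keyed by the comment's ID (two rows shown here,
# copied from the Raw LLM Response above).
raw_response = """[
  {"id": "ytc_UgwmM54V7epWayeu4754AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzbpqjJeSrta-6zCN54AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "industry_self",
   "emotion": "approval"}
]"""

def index_by_comment_id(response_text: str) -> dict:
    """Parse the raw response and index the coded rows by comment ID."""
    return {row["id"]: row for row in json.loads(response_text)}

codes = index_by_comment_id(raw_response)
row = codes["ytc_UgwmM54V7epWayeu4754AaABAg"]
print(row["policy"], row["emotion"])  # prints: regulate fear
```

With the rows indexed this way, rendering the dimension/value table for any inspected comment is a straightforward dictionary lookup by its ID.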