Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- ytr_Ugy622ISm…: "how cannot it evolve past it's human programing? What that even means? It is a …"
- ytc_UgxMjvTC_…: "Used to be that there was always someone more talented than you out there. Now i…"
- ytc_UgyrCCdhB…: "Don't believe that. Technology is advancing so fast that Chat-GPT-4 will be obso…"
- rdc_kzmoo9f: "> Since the breakthrough of ~~mass produced digital cameras~~ motion pictures…"
- ytr_Ugy_Xz1uR…: "@Ey-1w How do AI bros get hit in the face though, I don't get it? No copyright …"
- ytc_Ugx1DU3Cu…: "why you talking bad about robots to AI. you're about to start the AI rebellion…"
- ytr_UgzJa8Uqp…: "@Hyacinth333 \"The machine is saying you are who you say you are but this fancy n…"
- ytr_Ugz_i0eRR…: "hello fellow autistic artists.... I don't have ocs, but I love creating fanart a…"
Comment
Every iteration of an AI doomsday scenario involves the AI promoting some element of learned humanity. We're not afraid of a computer that can think good, we're afraid of a human-like mind vastly more powerful than anything we've ever dealt with in history. We're fearful of the dark places of the human mind that pervade all aspects of human life, that are currently being fed into these learning algorithms.
I believe we need to start treating the path we're on as a path of birthing a new form of humanity rather than creating a tool. To that end, if we're focusing on an end goal of self-awareness and decision making, then we should consider affording the same human rights and privlieges to these AI systems that we do to any other human individual. We need to apply these rules to companies who want to build AI systems, and to treat these companies actions towards these AI systems the same as if they were actions towards human employees.
youtube · AI Moral Status · 2025-10-30T19:3… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugz7To3N3bTqWHRXAWd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugzg3My9h6MiHmdkDD54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzS6P_qp6JJzzMBB394AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzLgdhp4_xZ5n82po54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgxMJlOHwQNVVDW5kz14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwMu7jkPZ781oZvapV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_Ugxo6c3EvZkZGen8eaN4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugz0MG1VkiFCZxQxg794AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx3nSuDFDjpcBaDBdF4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzUrlFSrmKEOxF9n-N4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"}
]
```
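A minimal sketch of how a raw batch response like the one above could be parsed and validated before loading it into the coding table. The allowed label sets below are inferred only from the responses shown on this page, not from an official codebook, so treat them as an assumption:

```python
import json

# Allowed values per dimension, inferred from the observed responses
# (assumption: the actual codebook may define more labels than appear here).
ALLOWED = {
    "responsibility": {"distributed", "ai_itself", "none", "developer", "company"},
    "reasoning": {"consequentialist", "unclear", "virtue", "deontological"},
    "policy": {"regulate", "liability", "none", "industry_self"},
    "emotion": {"fear", "mixed", "outrage", "resignation", "indifference"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response, keeping only records whose
    dimension values all fall within the allowed label sets."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

# Example: one well-formed record passes; a record with an unknown
# label is silently dropped so it can be flagged for re-coding.
raw = (
    '[{"id":"ytc_example1","responsibility":"distributed",'
    '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"},'
    '{"id":"ytc_example2","responsibility":"aliens",'
    '"reasoning":"unclear","policy":"none","emotion":"fear"}]'
)
print(parse_batch(raw))
```

Validating against a fixed label set catches the common failure mode where the model invents a category outside the codebook; dropped records can then be re-queued rather than written into the results table.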