Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- ytc_UgxAQbT-e…: "How could ai possibly sustain itself though? With the way society is currently s…"
- ytc_UgxNWU6Kl…: ""No matter why you use the generator, it still benefits from being used." I'd ac…"
- ytc_UgyXa8kYG…: "They don't keep up on maintenance and PM schedules now; they (companies) will ru…"
- ytc_UgwY7rKs5…: "AI will never be conscious. After all, people aren't conscious. Only consciousne…"
- rdc_g69b64w: "Only the doctor won't be able to drive an Uber because that too will be automate…"
- ytr_UgxaOA_0_…: "@SundayVenom like the title says Future of Humanity....QUESTION is..."Where do r…"
- ytc_UgzhIDif3…: "Neil deGrasse Tyson just proved he doesn't understand A.G.I. A.G.I. is when a …"
- ytc_UgzFx88iz…: "The AI saying that humans are inferior are out of context, I got many answers it…"
Comment
> I dont think humans does not need any LLM’s for enhance the technology. Nobody talks this but this AlphaFold threathen humans ? NO. We only need narrow AI models for specific STEM tasks. People wants AGI has god complex which they are saying themselves that they created the “God” and they are willing to sacrifice the human race for live this fantasy. Some freaks says that AI will be the new dominant speicies, humans will be extinct and they are okey with it. Guys I am sorry but even we made so many mistakes along the way I love humanity and I want to protect it.

youtube · AI Moral Status · 2025-12-11T01:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | ban |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgxMVuUkC29JOj-hYPF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugxxapv-_7_knGqv1NJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz8r__RXmoLWr4OKMB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyYXf55A3Z67xecnG14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxvFGL35Nofs0RuVQN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwfPvaO4ndDNulEswF4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzxPQpzr-IvoLdzmn94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugxh28s8Utgy7qQ4ygl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugx2apf9ZMyt-qy7iNt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugy26fyQ7CQ1yJqSii94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}
]
```
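A response like the one above can be parsed and indexed by comment ID for lookup. The sketch below is a minimal illustration, not the tool's actual implementation; the allowed values per dimension are inferred from the samples shown here, and the full codebook may contain more categories.

```python
import json

# Allowed values per dimension, inferred from the sample output above
# (assumption: the real codebook may include additional categories).
CODEBOOK = {
    "responsibility": {"developer", "company", "user", "government",
                       "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"ban", "regulate", "none", "unclear"},
    "emotion": {"outrage", "fear", "resignation", "approval",
                "indifference", "unclear"},
}

def parse_batch(raw: str) -> dict:
    """Parse one raw LLM response and index the codings by comment ID,
    rejecting any value outside the codebook."""
    coded = {}
    for rec in json.loads(raw):
        for dim, allowed in CODEBOOK.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
        coded[rec["id"]] = {dim: rec[dim] for dim in CODEBOOK}
    return coded

# One record from the response above, used as a lookup example.
raw = ('[{"id":"ytc_Ugxh28s8Utgy7qQ4ygl4AaABAg","responsibility":"developer",'
       '"reasoning":"virtue","policy":"ban","emotion":"outrage"}]')
coded = parse_batch(raw)
print(coded["ytc_Ugxh28s8Utgy7qQ4ygl4AaABAg"]["policy"])  # ban
```

Validating against the codebook at parse time catches the common failure mode where the model invents an off-schema label, so bad batches fail loudly instead of polluting the coded dataset.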