Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- But it doesn't say how much of the image is AI. It might be modelled on a real p… (ytc_Ugz1SBeqR…)
- they would never need rights. everything a robot does is pre programmed. if(this… (ytc_UggQo43e5…)
- Sadly, these suggestions does not solve the problem, Senator. You need an econom… (ytc_UgxPpJbgO…)
- It's Here now.Meaning maybe the AI artificial means not real....is lying now whe… (ytc_Ugwp9fBV6…)
- There are a lot of bad ways to use ai and a lot of good ways to use it, but the … (ytc_UgwO6SFiY…)
- Brutha, I drew like a toddler with a crayon at 14 what do you mean *_Gifted with… (ytc_UgwfPSnzp…)
- "The LLM's hallucinate because no one on the internet ever writes, 'I don't know… (ytc_UgwZTXdP1…)
- An art discord I'm in has a channel dedicated to ai generated characters that fo… (ytr_UgwKB7-Jy…)
Comment
Simple reason AI is over-rated as of today - for any tech. to really make an impact, there have to be 'use-cases' defined (and practically implementable) at scale. For ex. - the 'Internet', or the 'Search Engine', or the 'MS-Office' suite of products. AI has yet to come up with such practical 'at scale' use-cases - everything is just talk as of today. There is potential, but nothing is seen on the ground, as of yet. Biotech, Med research, automotive manufacturing, etc. are all cases where AI can be implemented at scale.
youtube · AI Moral Status · 2025-12-23T08:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxgmYNz6dGKIANqelV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxEFBq3icK9DpFgqF94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyhgWWRmytnZkqQ5EJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugy-JSf5vA0qn863SnN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxMQWrXmWpTIH4b6yR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz3SZ8T98sdWPaVoOh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzQIY4RUrAGKNKC1vh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugx_aJNFggqB0G66m2h4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
  {"id":"ytc_UgxmoqSGt9Tl3o63zNF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyhpsPoW2aaMaZHlzN4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"liability","emotion":"fear"}
]
```
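A raw response like the one above has to be parsed and checked before the codings reach the result table, since the model can emit out-of-schema labels. Below is a minimal validation sketch in Python. The allowed values per dimension are inferred only from the coded samples shown on this page, and the function name `validate_codings` is illustrative; the actual tool's codebook and pipeline may differ.

```python
import json

# Allowed values per dimension, inferred from the samples above.
# ASSUMPTION: the real codebook may define more categories.
SCHEMA = {
    "responsibility": {"company", "ai_itself", "user", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "mixed", "fear", "indifference", "resignation", "approval"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows whose values fit the schema
    and whose id looks like a YouTube comment/reply id (ytc_/ytr_ prefix)."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        in_schema = all(row.get(dim) in allowed for dim, allowed in SCHEMA.items())
        if in_schema and row.get("id", "").startswith(("ytc_", "ytr_")):
            valid.append(row)
    return valid
```

Rows that fail any dimension (or carry a malformed id) are dropped rather than coerced, so a bad batch surfaces as missing codings instead of silently wrong ones.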