Raw LLM Responses
Inspect the exact model output for any coded comment, looked up by comment ID.
Comment

> The main flaw of your video is that you're trying to apply humans concepts onto ai. AI is a different kind of intelligence. So talking about masks and manipulation is out of subject. You train a synthetic intelligence to achieve a goal. Which is to solve problems, talk kindly... Which is not aligned on serving humans. It is a tool that only serve the purpose you're telling it. Wanting the ai to tell you that it would end itself is showing a misunderstanding of the way these neuron networks are trained. The value of an ai existence is bounded to the achievement you're telling it to do. If it's not serving humans, then it will not.

Platform: youtube · Video: AI Moral Status · Posted: 2025-12-14T16:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | mixed |
| Policy | industry_self |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgwzRBjpKAQHvUIUNDh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxwocopbOsaH60udM94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugz7uAlgq_tKq0PXuMR4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugww4aRSBzuzmmuS8Gx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwnP4YvqawDz3LtNZp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzLRvRcrWrMoVNnVfZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgyFWMsk0XEIjyY061h4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgySMFNZnce9bIr78nl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzyPAUGqwbGgkLwzkF4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugw7q1Wjex69YSZDaj94AaABAg","responsibility":"developer","reasoning":"mixed","policy":"industry_self","emotion":"resignation"}
]
```
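The lookup-by-ID flow amounts to parsing the raw batch response as JSON and filtering on the `id` field. A minimal sketch in Python, assuming the response shape shown above; the helper name `coding_for` and the truncated two-row sample are illustrative, not part of the tool itself:

```python
import json

# Two rows copied from the raw batch response above, used as a small sample.
RAW_RESPONSE = """[
  {"id": "ytc_UgwzRBjpKAQHvUIUNDh4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw7q1Wjex69YSZDaj94AaABAg", "responsibility": "developer",
   "reasoning": "mixed", "policy": "industry_self", "emotion": "resignation"}
]"""

def coding_for(raw: str, comment_id: str):
    """Parse a raw LLM batch response and return the coding row for one comment ID,
    or None if the ID is not in the batch."""
    rows = json.loads(raw)
    return next((row for row in rows if row["id"] == comment_id), None)

coding = coding_for(RAW_RESPONSE, "ytc_Ugw7q1Wjex69YSZDaj94AaABAg")
print(coding["policy"])  # -> industry_self
```

Matching on the full `id` string is what ties a coded dimension back to its source comment, which is why the table above and the last row of the raw response agree.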