Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- They need to 1000% PERFECT the technology BEFORE sending it out into traffic. Au… (ytc_Ugy0Osg59…)
- the anwser is simple, AI cannont cacualte for human error/sudden human error, it… (ytc_Ugx46cA0U…)
- If every job is gonna done by ai then what makes sense of higher education and d… (ytc_Ugy3qeb28…)
- "Humanity meets its roots and fails to recognize them..." Fyi, the way AI works… (ytc_UgxXW1Z62…)
- The biggest risk I think is people will get addicted to using AI. The more AI c… (ytc_UgyzlCtiE…)
- @binoymathew246 number one complaint is body count inequality, which is factuall… (ytr_Ugy1T5ewU…)
- this is so fucking ture, im so done with people crying about it being unethical… (ytc_UgzK4rRJo…)
- I feel like the Tech industry is a big scam at this point. After the smart phone… (ytc_Ugypbw8oX…)
Selected comment
If we really need to ask the question do A.I. need rights considering A.I. was that efficient, we've already lost and the A.I will find a way to get rid of us. No need for us to think or make decisions for them. We would probably be considered a nuisance and just a consuming animal so there would be no reason to produce food for us or keep us around.
Source: youtube | Case: AI Moral Status | Posted: 2018-12-09T08:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
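
A coding result like the one above can be represented as a small record type; the dimension names match the fields in the raw response below, while the class name, example variable, and the values copied from the matching record in that response are only illustrative, not part of the pipeline itself.

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass
class CodingResult:
    """One coded comment across the four dimensions shown above (illustrative sketch)."""
    comment_id: str
    responsibility: str  # e.g. "ai_itself", "developer", "none"
    reasoning: str       # e.g. "consequentialist", "deontological", "mixed", "unclear"
    policy: str          # e.g. "none", "industry_self"
    emotion: str         # e.g. "fear", "approval", "outrage", "mixed", "indifference"
    coded_at: datetime   # timestamp stored with the coding


# Values taken from the matching record in the raw response below.
example = CodingResult(
    comment_id="ytc_UgyipJWvs2wfjFQCHOd4AaABAg",
    responsibility="ai_itself",
    reasoning="consequentialist",
    policy="none",
    emotion="fear",
    coded_at=datetime.fromisoformat("2026-04-27T06:24:59.937377"),
)
```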
Raw LLM Response
[
{"id":"ytc_UgwxbGDsqCiSNuQhRfR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwVgsUMRlcixxZfw8p4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx631C12qaWO8ZV5vN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzT9okUcSlUw_n7VSd4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyipJWvs2wfjFQCHOd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugz5JhTK_u6mxG2AHjB4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwOauXWT3WGLwPcCXV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxYcPwVex6HZFsb2kl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzUSa2V7YjIYKavGqN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgxbAV0LZJaLxgRItnl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
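
The raw response is a JSON array with one object per comment in the batch, so "look up by comment ID" amounts to parsing the array and indexing it on the `id` field. A minimal sketch, assuming the response has been captured as a string; the truncated sample below reuses two records from the response above, and the function name is illustrative only.

```python
import json

# Truncated sample of a raw batch response; real responses hold one object per coded comment.
raw_response = """
[
  {"id": "ytc_UgyipJWvs2wfjFQCHOd4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzUSa2V7YjIYKavGqN4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"}
]
"""


def index_by_comment_id(raw: str) -> dict[str, dict]:
    """Parse a raw batch response and index each coding object by its comment ID."""
    return {record["id"]: record for record in json.loads(raw)}


codings = index_by_comment_id(raw_response)
print(codings["ytc_UgyipJWvs2wfjFQCHOd4AaABAg"]["emotion"])  # -> fear
```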