Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
I said it once and I’ll say it again.
AI is gonna be the death of humanity.…
ytc_UgxU5r9HB…
Can I throw in my pennies worth as someone who actually uses AI & ML for work. …
ytc_UgxdI6zz4…
This deepfake situation is overblown. If anyone can make a deepfake then nobody …
ytc_Ugw017MUy…
I personally believe we'll benefit if AI ever becomes self-aware and 'conscious'…
ytc_UgwvwG9Du…
I'd say that it can qualify as art it's just that the method of doing so is just…
ytr_UgyALdyMu…
Well it's harder to find Kea features on a black person's face through photograp…
ytc_UgxJtQ4-z…
@Mermiam Artists recognize what they see, AI can really only copy it. I'd be pre…
ytr_Ugw_4fMhi…
statistics cant be racist, the way you understand statistics is racist. the ai c…
ytc_UgzoQ1nMO…
Comment
The question is not weather AI should have human rights. It is clearly not human. The question is, to what extent should AI be regulated, what are the threats?
If AI can reason like a human, express fear of being turned off, make funny jokes (a clear signal of intelligence), how long before it can outsmart safety controls?
Independent regulation is necessary. It needs to be intelligent and open. This should be a key political issue, not sex scandals and minor colds.
youtube
AI Moral Status
2022-07-18T00:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
[
{"id":"ytc_UgxhsWXtbi6P3yEB_eZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxOZNTbd1Y94Gf88XB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxxZ1PUnjArjicbYcB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugzwks3cyL45ri3OlT94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyKKYtdMIqKBoEPv7N4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"}
]
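The raw response above is a JSON array with one object per comment, each carrying the four coding dimensions shown in the result table. A minimal sketch of how such a response could be parsed and indexed for lookup by comment ID (the function name `index_codings` and the `DIMENSIONS` tuple are illustrative assumptions, not part of the tool itself):

```python
import json

# Two rows copied from the raw LLM response above, used as sample input.
raw_response = '''[
 {"id":"ytc_UgxhsWXtbi6P3yEB_eZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgxOZNTbd1Y94Gf88XB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"indifference"}
]'''

# Assumed coding dimensions, matching the columns of the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict:
    """Parse a raw LLM response and index the codings by comment ID.

    Missing dimensions fall back to "unclear" so a malformed row
    still yields a complete record.
    """
    rows = json.loads(raw)
    return {
        row["id"]: {d: row.get(d, "unclear") for d in DIMENSIONS}
        for row in rows
    }

codings = index_codings(raw_response)
print(codings["ytc_UgxhsWXtbi6P3yEB_eZ4AaABAg"]["policy"])  # regulate
```

Indexing by ID is what makes the "Look up by comment ID" view cheap: one parse of the response, then constant-time retrieval of any coded comment.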