Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "The debate is interesting, but it's hard to ignore the irony when you speak abou…" (ytc_UgwKUTM4o…)
- "Horrible — people haven't learned to get along with each other, and they've already invented a replacement. It's all because…" [translated from Russian] (ytc_Ugy7BJB8t…)
- "Could you not just source 2 reference (for the arm and pirate respectively) image…" (ytr_UgxenAH56…)
- "I will not be polite to AI on account of not ever (willingly, knowingly) using i…" (ytc_UgwgRhXhP…)
- "that's some video you got here. The purpose of it seems to be to claim that AI d…" (ytc_UgxOa1Cw5…)
- "crypto ate too much power same people investing in ai taking everyones power for…" (ytr_UgyL_Xy9F…)
- "Ai image generators will also change the race of black people to white or someti…" (ytc_Ugzu0nyog…)
- "I mean this is already known. AI hallucinates and \"lies\" because it's just auto …" (ytc_UgyCGV1-X…)
Comment

> The only reason AI would want to wipe out humanity is if it has been given the programmability to do so. Meaning that somebody, who is human would have to give AI that capability which makes that person guilty of conspiracy against mankind… Of course these types of people, will never be held accountable if that were to happen to us.

Source: youtube · Video: AI Moral Status · Posted: 2021-08-30T23:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzumTmB3RSOifIfzj94AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyeowJgt5D9AimsKhZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugyd49BnZ7LMGkQI51N4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzxtOP90kqmXpiCaWF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyZti3DTZTUlYoxxGh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugz2VG5MFn6dJ4pqKZh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyYw64Zm0omGb2AKqZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugw7btQL1vn36zsXBF54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzPgfWyvtQjM5KXnDN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgwzxkI_lXSuW7EDzZF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
```