Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- ytr_UgzrlFkxU…: "@microwaveHazeAi has never been a instant process, it takes time, effort, rules…"
- ytc_Ugyk7kEow…: "The world is gonna end in 2002 anyways. I mean, 2012. Err, 2014. Oh no, sorry, t…"
- ytc_UgzNCW8UX…: "14:31 This is not strictly true. AI has a pretty big open-source community, so…"
- ytc_Ugxg1Ijy8…: "I could instantly tell it was ai from the lack of emotion in the first one…"
- ytc_UgznJzeXE…: "❤Karen Hao is a godsend! Thank you, thank you. Keep up this incredible work, you…"
- rdc_ecyzhtx: "There are usually people inside of the self driving car. They may not be driving…"
- ytc_UgzuQsDt7…: "My kids get 15 minutes to eat lunch! All kids should get at least 30 minutes to …"
- rdc_kvy8xlz: "I mean, humans have morals. *We are the only moral people*. Heck, we're the only…"
Comment

> The last line is brilliant. He’s saying that the AI should have consent before they experiment on it. What about the Covid jab? Forcing it on people without their consent so they could keep their job. They want a computer to get consent but not people to get consent with an experiment forced on them.

youtube · AI Moral Status · 2022-06-25T12:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response

```json
[
  {"id":"ytc_Ugz0NG6CksYOuj49D614AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugynt6NAQ9gyHyjuCqB4AaABAg","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwZ0P_QzzpP9mUz2Rp4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwAYkNcgyiM3-Z3wNt4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugzs194TTDAmT5yr7qF4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"}
]
```
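A raw response like the one above can be turned into per-comment coding results with a small parser. The sketch below is a minimal, hypothetical example: it assumes only the JSON shape visible in the response (a top-level array of objects, each carrying an `id` plus the four dimensions `responsibility`, `reasoning`, `policy`, and `emotion`); the function name `parse_coding_response` and the fail-loud validation are illustrative choices, not part of the actual pipeline.

```python
import json

# The four coded dimensions, matching the keys in the raw response above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

# A trimmed copy of the raw LLM response shown above, used as sample input.
RAW = """[
  {"id":"ytc_Ugz0NG6CksYOuj49D614AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugzs194TTDAmT5yr7qF4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"}
]"""

def parse_coding_response(raw):
    """Index coded comments by ID, keeping only the expected dimensions.

    Raises ValueError on a missing "id" or dimension, so malformed
    model output fails loudly instead of being silently dropped.
    """
    coded = {}
    for entry in json.loads(raw):
        if "id" not in entry or any(d not in entry for d in DIMENSIONS):
            raise ValueError(f"malformed entry: {entry!r}")
        coded[entry["id"]] = {d: entry[d] for d in DIMENSIONS}
    return coded

codes = parse_coding_response(RAW)
print(codes["ytc_Ugz0NG6CksYOuj49D614AaABAg"]["emotion"])  # fear
```

Indexing by comment ID mirrors how the dashboard looks up a coding result (e.g. the `outrage`/`liability` row in the table above comes from the matching `ytc_UgwAYkNcgyiM3-Z3wNt4AaABAg` entry in the raw response).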