Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Why would you want to create a robot that would match humans in every way and in ways be better than us? That’s not a very smart thing to do. Make technology to physically and mentally enhance us, not replace us. Because giving a robot human values mean you’re giving a robot the value of being in control which all humans are subjected to.
| Field | Value |
|---|---|
| Source | youtube |
| Video | AI Moral Status |
| Posted | 2020-05-10T20:5… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_Ugzy9i3yJlM0bcA_pTd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzYiIVSZToQGBuXmLJ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugz_LT9rEtzyF1gW6vF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugz119JrlRIOhy9fYH54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzaD7CJ38aSGq7XIOl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugylttu1L9zP8mIzWGl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzpHkX0_jDnXfKrULB4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgzgFYSjGjfIzLVBZch4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwgwB0YoAmv_0U4pdp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwmTRFChD2JXtyzrw94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
```
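A raw batch like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal validator, assuming the allowed values per dimension are exactly those seen in the samples on this page (the real codebook may define more); the function name `validate_batch` is hypothetical, not part of the tool.

```python
import json

# Allowed values per dimension, inferred from the samples shown above
# (assumption: the actual codebook may include additional values).
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"ban", "regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records.

    A record is kept if it has an "id" and every coding dimension
    carries one of the allowed values.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if "id" not in rec:
            continue
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid
```

Running the validator over a batch drops records with missing IDs or out-of-vocabulary codes instead of letting them silently enter the results table:

```python
raw = '[{"id":"ytc_x","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"}]'
print(len(validate_batch(raw)))  # 1
```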