Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "My three kids use Khan Academy, and can't wait to use the AI-powered Khan. Kudos…" (`ytc_UgxndMjZP…`)
- "I'm good with the AI we already have. Seems safe enough and it is so useful. I t…" (`ytc_Ugxy0yB8I…`)
- "Call me archaic, but no matter how advanced AI becomes, the creator of the unive…" (`ytc_UgzWo7F2X…`)
- "I like art i like the way art is made but i do not like making it myself i could…" (`ytc_UgwQUnPfs…`)
- "AI gets alot of data from online sources, including reddit and twitter. So when …" (`ytr_UgyX4QH3x…`)
- "Im not the biggest fan of AI (i honestly don't care much). But doing something m…" (`ytc_UgwX1pon0…`)
- "Like Reddit? Reddit has a content sharing agreement with OpenAI, our data is us…" (`rdc_o84unkz`)
- "If you think Trump is bad now just wait until he control AI ..thanks OpenAI for …" (`ytc_UgwwD6NQ0…`)
Comment
The thing about AI as I see it is the application. You can use them for good and or evil purposes. I am not altogether sure though that we should be making them in our image. Isn’t that playing God ? They should have some physical differences so that we always know and remember they are not human. They could be helpful around the house particularly for disabled or elderly people. They could be helpful in a commercial situation too. If robots take over all the tedious work of humans then what exactly will humans do ?
youtube · AI Moral Status · 2021-10-08T10:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxkuduAOoCoXkUjIMp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwwR1hX5c6mibBdagF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugy8s2OXPUiLuhEvlOl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwM4qMndoBlMt5sZT94AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxFW7C8_4S9b4YtOkB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzZs-MKV7-Mf40CR4d4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgxgM_-grZ6TAdAUIjB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwrSASuFwAVU3h2bKJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgzmlVqq1ZopOWVsNfd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyA-mX3JEf4pwqKoMl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
```
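The look-up-by-comment-ID flow can be sketched in a few lines, assuming the raw model response parses as a JSON array of per-comment objects keyed by `id` (field names and example IDs are taken from the response shown above; this is a minimal sketch, not the tool's actual implementation):

```python
import json

# Raw model response, abridged to two of the records shown above.
raw = """[
  {"id": "ytc_UgwwR1hX5c6mibBdagF4AaABAg",
   "responsibility": "developer", "reasoning": "deontological",
   "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_Ugy8s2OXPUiLuhEvlOl4AaABAg",
   "responsibility": "distributed", "reasoning": "consequentialist",
   "policy": "ban", "emotion": "outrage"}
]"""

# Parse once, then index the records by comment ID for O(1) lookup.
records = json.loads(raw)
by_id = {r["id"]: r for r in records}

# Fetch the coded dimensions for one comment, as the detail panel does.
coded = by_id["ytc_UgwwR1hX5c6mibBdagF4AaABAg"]
print(coded["policy"])   # regulate
print(coded["emotion"])  # mixed
```

Indexing by `id` also makes it easy to spot comments the coder skipped: any sampled ID missing from `by_id` was not coded in that batch.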