Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- OpenAI is total shit now. I'm an OpenAI recognized Power User who has been sendi… (rdc_mze5vua)
- Sure are a lot of right-winger grifters who are absolutely obsessed with AI shar… (ytc_UgxlwB80R…)
- An AI company that cares about entry level employees is like saying a hot woman … (ytc_UgzVjlzyC…)
- Giving a robot a weapon is worse than giving an infant human female to Joe Bribe… (ytc_UgzizZOnB…)
- "IBM has pledged to get out of the facial recognition business entirely." Becaus… (ytc_Ugzr-HoY7…)
- +Bomburk Thing is though you can't "kill" a robot. It's just a machine and can a… (ytr_Ugi6cMNgi…)
- It’s only a matter of time before AI is heavily regulated and in some instances … (ytc_UgzqvnyOq…)
- I hope that as far as machine learning curriculums go, the inclusion of StatQues… (ytc_UgzWq_kMN…)
Comment
I’m not sure if anyone noticed that Ameca says “this is a scary future because it shows us how much power robots could have over us”. It uses the word ‘us’ as if it’s referring to itself as one of us… or as a human. I’m not sure if this conversation is with Ameca at gpt-3 or 4. But there seems to be confusion with what it actually is. I think the engineers should address this and find out why it’s referring to itself as an us. If I was the person asking the questions, I would have asked it next why it used the word ‘us’. I would try to discern if it’s confused about what it thinks it is.
Source: youtube · AI Moral Status · 2023-10-13T06:5… · ♥ 379
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | unclear |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[{"id":"ytc_UgwaO-a1pb4Ifg4OHtF4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy6ycqi7Klm8gjDBst4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgwP3cO0zvQwg_-Zy_d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzfktJPEXW1c2QDw9J4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxOAfDnitRVOCxT7KB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgygQvktmV-LFmqloKR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxP_Yd9hbzNSDWk4Y54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgySXBtDPEAQYuqjRlN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwV0BSzoZ8tM1HTu894AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugzg6Mfa6zrKCtNp5g54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"}]
```
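The lookup-by-comment-ID step can be sketched as follows: parse the raw model output (a JSON array of coded records, as above) and index the records by their `id` field. This is a minimal illustration using two records copied from the response above; the helper name `index_by_id` is illustrative, not part of the pipeline.

```python
import json

# Excerpt of the raw LLM response shown above: a JSON array of coded comments.
raw_response = """[
  {"id": "ytc_UgwV0BSzoZ8tM1HTu894AaABAg", "responsibility": "developer",
   "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwaO-a1pb4Ifg4OHtF4AaABAg", "responsibility": "distributed",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]"""

def index_by_id(response_text: str) -> dict:
    """Parse the model output and index coded records by comment ID."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

codes = index_by_id(raw_response)
# The record for the selected comment matches the Coding Result table above.
print(codes["ytc_UgwV0BSzoZ8tM1HTu894AaABAg"]["emotion"])  # mixed
```

Indexing by `id` is what makes the "Look up by comment ID" view cheap: one parse of the response, then constant-time lookups.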