Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "This is exactly how I work with ChatGPT. Noone told me to do it this way, it jus…" (ytc_UgxAzTb_J…)
- "Genshin has been in a really sad state for years now... and it decided to make i…" (ytc_Ugzg7ZtEo…)
- "Israel is way surer then china and use ai to murder babies like they have Americ…" (ytc_UgyYxXCvr…)
- "Many of today’s drivers are part of the problem forcing the hand of companies to…" (ytc_UgwkfU5lF…)
- "because you do not know how to use it, you literally must research before using …" (ytr_UgzRuyXY5…)
- "There are no native people in America today, stop the scam. Most people who call…" (ytc_UgzS5iBC_…)
- "I love Khan academy and this Khanmigo sounds great but lets be honest here for a…" (ytc_UgztD4EJi…)
- "So much of this is simply sensationalized. AI is not as powerful as most people …" (ytc_UgyFc0wM3…)
Comment
Do you get how bizarre and creepy it is that Robots "debate" anything having to do with human beings or mankind? It'd be better if they debated the future of Robots, A.I., Robotics, Quantum Computing, Human Enhancements, Nanobots, etc. BTW, making Robots with the same pettiness and rudeness of humans is NOT a great idea. They'll probably wind up with way too many negative human behaviors anyway, so why push or elevate something negative. At some point Robots will self produce, or replicate, and we'll certainly regret teaching bad or negative behaviors - like a child who picks up their parent's bad language or a tendency to procrastinate.
youtube · AI Moral Status · 2021-10-05T08:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
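The coded dimensions above come from a closed set of categories. A minimal validation sketch, assuming the category sets are exactly those observed in this dashboard's output (the real codebook may well contain additional values):

```python
# Category sets observed in this dashboard's coded output.
# Assumption: the actual codebook may include further values.
ALLOWED = {
    "responsibility": {"developer", "government", "ai_itself", "distributed", "unclear"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"regulate", "ban", "liability", "unclear"},
    "emotion": {"fear", "mixed", "outrage", "resignation", "unclear"},
}

def validate(record: dict) -> list:
    """Return a list of problems with one coded record (empty list = valid)."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}={value!r} not in codebook")
    return problems

# The record shown in the Coding Result table above:
record = {"responsibility": "developer", "reasoning": "deontological",
          "policy": "regulate", "emotion": "mixed"}
print(validate(record))  # []
```

A check like this catches the common failure mode where the model invents an out-of-vocabulary label instead of falling back to "unclear".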
Raw LLM Response
```json
[
{"id":"ytc_UgxkuduAOoCoXkUjIMp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwwR1hX5c6mibBdagF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugy8s2OXPUiLuhEvlOl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwM4qMndoBlMt5sZT94AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxFW7C8_4S9b4YtOkB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzZs-MKV7-Mf40CR4d4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgxgM_-grZ6TAdAUIjB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwrSASuFwAVU3h2bKJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgzmlVqq1ZopOWVsNfd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyA-mX3JEf4pwqKoMl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
```
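Because the model returns one JSON array per batch, "look up by comment ID" reduces to parsing the raw response and keying each record by its `id`. A sketch, using two records from the response above:

```python
import json

# Two records copied from the raw LLM response above.
raw = '''[
{"id":"ytc_UgxkuduAOoCoXkUjIMp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwwR1hX5c6mibBdagF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"}
]'''

def index_by_id(raw_response: str) -> dict:
    """Parse a batch coding response and key each record by its comment id."""
    records = json.loads(raw_response)
    return {rec["id"]: rec for rec in records}

codes = index_by_id(raw)
print(codes["ytc_UgwwR1hX5c6mibBdagF4AaABAg"]["policy"])  # regulate
```

In practice a real response may be wrapped in markdown fences or prose, so production code would strip such wrappers (or retry) before `json.loads`; that error handling is omitted here for brevity.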