Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "AI makes people lazy when it starts doing basic things people do at home. Everyo…" (ytc_Ugz5PCjUL…)
- "Its really ok if it is done by a lifeless robot / But today israel is doing the sa…" (ytc_Ugy3Z-1j5…)
- "More, almost all, power in the hands of the oligarchs? Perfect. Billions of peop…" (ytc_UgxB90NAO…)
- "POISON IT ALL! POSION EVERYTHING! EVERY SINGLE PIECE OF YOUR ART!! LET THE AI GE…" (ytc_UgxQuLKqi…)
- "The resources to generate AI required from the planet will surely run out. AI re…" (ytc_UgxFb3VMi…)
- "I had an issue lately where I was being frustrated by my inability to write more…" (ytc_Ugyui0phe…)
- "Honestly, that's a very good point. All these chatbots and slop machines are bas…" (ytr_UgzQw2PcN…)
- "Just from the thumbnail picture alone I'm disturbed. Stop trying to make this th…" (ytc_Ugzr9LVCz…)
Comment

> I think if an A.I. reaches a level where it can independently decide it wants to be treated as humans do then we should let it. The only difference between humans and robots is that robots can be programmed to know something in minimal time while humans can take a lifetime to know that same thing (like a calculator).

| Field | Value |
|---|---|
| Source | youtube |
| Video | AI Moral Status |
| Posted | 2017-02-23T14:0… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response

```json
[
  {"id": "ytc_UgjCkbW8HzWknngCoAEC", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgiEPKpkQpLBvXgCoAEC", "responsibility": "none", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UggS6u_4h0pTJ3gCoAEC", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UggR_H-guI1ov3gCoAEC", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgisRaHAbPZkRHgCoAEC", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UghDTSkKguh_eXgCoAEC", "responsibility": "developer", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugjb307Mr6aT_XgCoAEC", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UggBAqOIJtgnCHgCoAEC", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UghBJWyJQzHrOHgCoAEC", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgjYadM9MhFjhngCoAEC", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"}
]
```
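Because each raw response is a JSON array of per-comment codings, the "look up by comment ID" view reduces to a parse-and-filter over that array. A minimal sketch, assuming the five fields shown in the batch above; `lookup_coding` is a hypothetical helper name, not part of any tool shown here:

```python
import json

# Excerpt of a raw LLM response: a JSON array of per-comment codings
# with the fields id, responsibility, reasoning, policy, emotion
# (entries taken from the batch shown above).
raw_response = """[
  {"id": "ytc_UggR_H-guI1ov3gCoAEC", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UghDTSkKguh_eXgCoAEC", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"}
]"""

def lookup_coding(raw: str, comment_id: str):
    """Return the coding dict for comment_id, or None if the batch lacks it."""
    codings = json.loads(raw)
    return next((c for c in codings if c["id"] == comment_id), None)

coding = lookup_coding(raw_response, "ytc_UggR_H-guI1ov3gCoAEC")
print(coding["emotion"])  # → approval
```

Keeping the lookup keyed on the `id` field also makes it easy to audit a batch: any comment ID that returns `None` was dropped or mis-echoed by the model.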