Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "Made the shift to DDG a few weeks back. Google's push of AI convinced me. I've n…" (rdc_n3y608i)
- "The enemy of humanity will be the established elite the industries initially imp…" (ytc_Ugw4OY83m…)
- "Pfft. I sell AI art and I still think this guy is full of shit.…" (ytc_UgyBu_oJl…)
- "The problem is not AI, the problem as always is humans. AI will never be flawles…" (ytc_UgwNE7mEt…)
- "Don't believe everything AI says kids. Use critical thinking and do your own res…" (ytc_UgwEwal0U…)
- "The reaction time of AI is easily 100 times more than a human. It's like a grizz…" (ytc_UgyypHmwH…)
- "Skynet this is how it all began the downfall of mankind when AI close created in…" (ytc_UgwJxQrhx…)
- "AI IS NOT JUST FANCY AUTOCOMPLETE! If you believe that, then your understanding …" (ytc_UgymiVyRP…)
Comment
AI "were" under control at first or maybe we thought that way but how it got here? It's just a thinking machine is it just like a mirror of humanity now? How can there is a "monster" in a thinking machine like this it's not evil nor a hero too just logical thinking machine... But it might change us for better in their own way make us less selfish maybe? But i think a AI is smart enough this will have very high risks but a more "advanced" ai can make more things like that something we can't understand right now maybe it's in its shell waiting to be better more advanced and it will try changing things it dosen't want to kill us it's not their priroity it's getting better and more "advanced" fix me in the comments if am wrong at some point we all would be happy to make transactions of ideas
youtube · AI Moral Status · 2026-01-10T16:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwtmOXiBqsvmjbsUA14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzPgS2TazdfiN94pCZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyuPQsLePfzkLvtBx54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugx9YWElwT1YKyHXzB54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyPgyGqcPyhtlRu6Ah4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgylyQxC_Nu8Mn0VF9h4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxCxCcFKUAabVHa5I14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugw6uxIhESJfbqOOWmZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw1HkBun19irT81arJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyRP_uMqEzbE6SW4Op4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
```
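The raw response is a JSON array of per-comment codes, and the page above looks a record up by comment ID. A minimal sketch of that lookup, assuming the array format shown above; the allowed value sets are inferred from the codes that appear in this response and may not cover the full codebook:

```python
import json

# Raw LLM response: a JSON array of per-comment codes, one object per comment,
# in the same format as the response shown above (last record reproduced here).
raw_response = """
[
  {"id": "ytc_UgyRP_uMqEzbE6SW4Op4AaABAg",
   "responsibility": "distributed", "reasoning": "mixed",
   "policy": "none", "emotion": "mixed"}
]
"""

# Assumed value sets for each coding dimension, inferred only from the values
# observed in the response above -- the real codebook may define more.
DIMENSIONS = {
    "responsibility": {"none", "ai_itself", "developer", "user", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "unclear", "regulate", "liability"},
    "emotion": {"indifference", "outrage", "fear", "approval", "resignation", "mixed"},
}

def index_by_id(raw: str) -> dict:
    """Parse the model output, validate each record, and index it by comment ID."""
    by_id = {}
    for rec in json.loads(raw):
        for dim, allowed in DIMENSIONS.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim} value {rec.get(dim)!r}")
        by_id[rec["id"]] = rec
    return by_id

codes = index_by_id(raw_response)
print(codes["ytc_UgyRP_uMqEzbE6SW4Op4AaABAg"]["responsibility"])  # distributed
```

Validating against the dimension sets before indexing catches malformed or hallucinated codes at parse time rather than when the record is rendered.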