Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I can't help but notice we analyze possibilities like superintelligence and turning lead into gold assuming that the resource limitations would be feasibly overcome (which is a big stretch of an assumption imo)
The absurdity of alchemy isn't just the actual act of turning lead into gold, it's about profiting from that act, which, even centuries later, we are nowhere near achieving. It takes so much energy just to turn a minuscule amount of lead into gold.
Similarly when it comes to achieving superintelligence or even AGI, we may never achieve it simply because it would likely cost too much energy and resources. We are currently using the equivalent of a city in electricity consumption just to train AI models that help us do relatively basic stuff like writing emails and making slop videos. To train one AI model to achieve superintelligence, it could very likely consume the planet itself. Not to mention there are currently multiple models from multiple companies in multiple countries working towards this goal.
I am not at all involved in the AI field so I'm sure someone has likely talked about this in much clearer terms. But I think spending time talking about AI models achieving superintelligence (while interesting) is a distraction from the real problem. The forces of capital will always move their resources to the salesman with the best sales pitch, and we are all forced to participate in this Sisyphean task as test subjects while capital finds new ways to exploit us with the existing AI technology, all for the ultimate goal of accumulating more capital. I guess what I'm trying to say is, humanity will likely perish from a million other problems we create just by existing under an unsustainable economic system well before superintelligence ever becomes a problem. Yes, I'm talking CaPiTAliSm babeyyyy
youtube · AI Moral Status · 2025-11-11T04:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgyD_vVgK4lU66Lr9q54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzC5ci0oXYUvBqFe1B4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzZQjSzkiOzmnrTb454AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgziVby8mv9JCe3Ii9R4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz5vty5u3LBNGmPlqh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgzTgAPXXot1H7fSba14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_Ugz9aRh5H-dWDzkCLvV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"approval"},
{"id":"ytc_Ugy-YPCOCebMWJ9NcuZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy86aQ-y1DSo4yqC294AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugx_cFH_A9RtIjRcBJJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"resignation"}
]
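The raw response above is a JSON array with one object per comment: the comment `id` plus the four coded dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal Python sketch of how such a response can be parsed and looked up by comment ID (the helper name `index_codings` is illustrative, not the tool's actual code; the sample row is taken from the response above):

```python
import json

# One row copied from the raw LLM response shown above.
RAW_RESPONSE = """
[
  {"id": "ytc_UgziVby8mv9JCe3Ii9R4AaABAg",
   "responsibility": "unclear", "reasoning": "consequentialist",
   "policy": "unclear", "emotion": "mixed"}
]
"""

def index_codings(raw: str) -> dict:
    """Map each comment ID to its coded dimensions (all fields except 'id')."""
    rows = json.loads(raw)
    return {row["id"]: {k: v for k, v in row.items() if k != "id"}
            for row in rows}

codings = index_codings(RAW_RESPONSE)
print(codings["ytc_UgziVby8mv9JCe3Ii9R4AaABAg"]["emotion"])  # → mixed
```

The ID-keyed index is what makes the "inspect any coded comment" lookup above cheap: one `json.loads` per response, then O(1) access per comment.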