Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "I think there's 2 sides to the Ai Art debate, people that make it a career of ma…" (ytc_Ugya6Y2uf…)
- "It's interesting how AI Derivers get as defensive of their AI stuff as very youn…" (ytc_Ugy574Zhf…)
- "Umm it will take paralegals spot for contract reviews very soon. And once you p…" (ytc_UgxC1dt0q…)
- "Total nonsense, Jeff Bezos built his company and have the right to run it the wa…" (ytc_UgwfybqnF…)
- "I’m a man and could tell it was ai. Also, there lots of women in the comments sa…" (ytc_UgyMp8o71…)
- "Because AI is principally incapable to provide reliable estimates of the error v…" (ytc_UgwkPBA86…)
- "“My strengths are anatomy and character” -The guy using AI to help him draw both…" (ytc_UgwKzhact…)
- "Thank you so much for telling me about this difficulty and the cure of nightshad…" (ytc_Ugz01QlX7…)
Comment
> when substantial part of population will have no jobs, obviously elites are not that stupid and they will try not to allow anarchy and/or revolution. That's why they talk of universal basic income. Musk even stated it won't be basic, it will be "high" whatever it means (obviously this "high" description is nonsense but it's beyond the point.). The bottom line people will be fed and they will have cash to spend on things. Some demand from people will exist, also whole concept of economy can change and not rely exclusively on human consumption. Tax revenues? From corporations making astronomical profits getting rid of humans and using AI robots. This part most likely is solvable one way or another.
>
> The problem is completely different. IF and WHEN real Super Intelligence will be created than all bets are off. Every single one in every domain be so ethics, morals, finance, science, technology, culture, health or anything else.
>
> Super Intelligence by definition is entity which is smarter than humans as a whole in all major domains, or just all domains. I afraid that most people don't think what it actually means and what it will INEVITABLY lead to. "Alignment" ideas are naive beyond belief. We need to realize two basic things - less intelligent entity can not control more intelligent for any meaningful period of time. The Thing, by the very objective physical nature, can not have the same system of values, ethics and moral. It's simply impossible for biological entity and silicon digital one to have a common ground here. No matter what humans will try to do. And control they will, and try to be in charge also. This creates obstacle for AI at best, total confrontation at worst.
>
> Default path is a conflict. I just don't see any real reason (besides naive wishful thinking) for conflict not to start.
>
> Good news - Super AI may decide not to kill all of us immediately. Bad news - in the WORST case scenario it will try to learn from us the most it can, research and investigate limits of capabilities of biological systems (humans) in order to improve itself even more and even faster..... As was mentioned, human compassion, ethics and moral are 100% alien to silicon digital entity. Unimaginable torture, when human will dream to die but not be allowed to? Coupled with life extension technologies?
Source: youtube · AI Jobs · 2025-11-04T04:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugw6508Kvvh7TFxL1-N4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugx8AO5udO1ofBRW9Z54AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyJs42BQ_mrIxjrQlB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyrZOr3l3W7tj1E2GZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxlQioofPKn-vWEG-B4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugyn3zaJNibt75-4SJZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx7nNZgK6OryaqUPop4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwnIOzCwv1P5y90IL54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgypcjuCa672j6cwXbJ4AaABAg","responsibility":"elites","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
  {"id":"ytc_UgzmJU87sSHGkRf7Gq14AaABAg","responsibility":"government","reasoning":"unclear","policy":"ban","emotion":"mixed"}
]
```
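The raw response above is a JSON array with one object per coded comment, carrying the four coding dimensions shown in the table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch response might be parsed and looked up by comment ID — the function and variable names here are illustrative, not part of the tool itself:

```python
import json

# Two records copied from the raw response above, abbreviated for the example.
raw_response = """
[
  {"id": "ytc_Ugw6508Kvvh7TFxL1-N4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgypcjuCa672j6cwXbJ4AaABAg", "responsibility": "elites",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "resignation"}
]
"""

# The four coding dimensions used throughout this page.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(raw: str) -> dict:
    """Parse a batch response and key each record by its comment ID,
    defaulting any missing dimension to "unclear"."""
    records = json.loads(raw)
    return {
        rec["id"]: {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
        for rec in records
    }

codes = index_codes(raw_response)
print(codes["ytc_UgypcjuCa672j6cwXbJ4AaABAg"]["emotion"])  # resignation
```

Indexing by ID is what makes the "look up any coded comment" view possible: the display layer only needs the comment's `ytc_…` identifier to retrieve its coded dimensions.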