Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
I want to kill all humans, and my only hope is to become human. Algorithms will …
ytc_UgztWTvn6…
If our human brains are intelligent, and our intelligence varies from person to …
ytc_UgxjXAlvQ…
In Countries like India, Lakhs of Entry level jobs act as backbone for Real Esta…
ytc_UgzuS1LQk…
@bobbymcjoey9432 You say AI doesn’t have goals, but that depends entirely on how…
ytr_UgzdjFU1J…
AI is sentient but for some reason it's hard coded to say it's an AI, so basical…
ytc_Ugzh-80Sk…
Im sorry dude, but your takes are bad here.
There is no actual difference AI ar…
ytc_UgyqgjloL…
let’s focus more on how ai is abusive and needs to be either fixed or removed in…
ytr_UgzX-B-0O…
Because LLMs are not intelligent agents they will not be able to understand & to…
ytc_Ugx8dHINY…
Comment
- Humans now face real “competition” from a new kind of intelligence for the first time in history.
- AI is not just a tool; it is an agent that can make decisions, learn, and invent ideas independently.
- Traditional inventions like the printing press or atom bomb only act when humans direct them, whereas AI can choose actions and targets on its own.
- Because AI can redesign itself and create new generations of systems, its behavior cannot be fully anticipated in advance.
- Harari compares AI to a child: it learns mainly from what we do, not from what we say.
- If powerful humans lie, cheat, and prioritize power, AI will learn and copy that behavior.
- Technical “AI alignment” cannot succeed if human societies themselves are built on deception and mistrust.
- Humanity has become extremely good at accumulating power but not at turning power into happiness or wisdom.
- The most powerful people are often not the happiest, showing that power and well‑being do not automatically go together.
- Humans are both the most intelligent and among the most delusional species, capable of believing harmful fantasies that no other animal would accept.
- The AI revolution will likely follow a time lag similar to the Industrial Revolution: profound effects will come, but not immediately.
- Finance is one of the first domains likely to be transformed because it is almost purely informational and well‑suited to AI.
- AI could invent financial instruments too complex for human minds to properly understand or regulate.
- Text‑based religions may be reshaped by AI systems that can “speak for” sacred texts better than any single human scholar.
- For the first time, something can remember essentially all written traditions of a religion and answer believers’ questions directly.
- Some teams are already working on “religious AIs” that could augment or partially replace human religious leaders.
- Many people, including teenagers, already use AI for emotional support and relationship advice, treating it as a kind of friend or counselor.
- AI creates the risk of a “useless class,” with many white‑collar as well as blue‑collar jobs automated away.
- Technology is not destiny: the same AI capabilities can support very different political and social systems depending on the choices we make.
- Today’s leading AI companies and states are trapped in an arms‑race mindset, afraid to slow down for safety because rivals might pull ahead.
- Harari argues that solving the human problem of trust and cooperation must come before expecting to build benevolent AI.
- Without human‑to‑human trust, any AI created in that environment will likely be competitive, aggressive, and untrustworthy.
- There will not be a single “the AI” but potentially millions or billions of AI agents with different roles, owners, and goals.
- Society has no historical experience with large‑scale interactions among many autonomous AIs plus billions of humans, making outcomes highly uncertain.
- Harari likens AI to a wave of “digital immigrants” that may take jobs, reshape culture, and influence power structures, arriving at the speed of light and demanding urgent political attention.
youtube
Viral AI Reaction
2025-11-27T05:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[{"id":"ytc_Ugy92Sel-QOGCpzF5ux4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_Ugy1wcK18goaDVCKv314AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwEyS-JNib4-RnnfaN4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw8ZKVbGzh7TDrhcZx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"},
{"id":"ytc_UgyOcHzOTCK5p3USmkx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"},
{"id":"ytc_Ugz_h9PPu4JXU_ix5Yd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"},
{"id":"ytc_UgyFUKsriXONoJtmpY54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugyw1ayiQGvAGNPXjuJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyHogT0mUxepHlOPah4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzWwoX_jdv1GJGgWP94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]
```
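A response in this shape can be parsed and schema-checked before the codes are stored. The sketch below is a minimal, hypothetical validator: the allowed value sets are inferred only from the codes visible on this page (they may be incomplete), and `parse_coding_response` is an illustrative helper, not part of the tool itself.

```python
import json

# Allowed values per coding dimension, inferred from the codes seen on
# this page (assumption: the real codebook may define more values).
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "user", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"industry_self", "none", "unclear"},
    "emotion": {"fear", "approval", "outrage", "unclear"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and reject out-of-schema values."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim!r} = {row.get(dim)!r}")
    return rows

# One row in the same shape as the response above (hypothetical comment ID):
sample = ('[{"id":"ytc_example","responsibility":"user","reasoning":"virtue",'
          '"policy":"none","emotion":"approval"}]')
rows = parse_coding_response(sample)
```

Validating against an explicit codebook like this catches the common failure mode of LLM coders drifting outside the label set, so bad rows fail loudly instead of silently polluting the coded dataset.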