Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
make any attempt to say chatgpt isnt that great or makes tons of errors and peop…
ytc_UgwOdc489…
1:05, it caught me off guard that an AI just said a filler word “um” like its th…
ytc_UgxMJz1ud…
I think there's still gonna be a need for coders, they're gonna be like rogue an…
ytc_UgyoAkNne…
So we need Jarvis before someone inevitably creates Ultron. Then you'd need to …
ytc_UgxbeYJ9O…
@trmancad7060 ah yes, the classic "but what about this larger problem" angle. …
ytr_Ugww1BR22…
Half the time sounds nice but organizing, writing and fixing everything takes fo…
ytc_UgxibFW1C…
Great explanation of the Kuleshov effect. Context does indeed matter when analys…
ytc_UgxxIC1e1…
Generating AI art does not make you an artist, but it could be the start of your…
ytc_UgzsJ8W7U…
Comment
Manifesto for Ethics in Technology and AI
Technology races ahead, while ethics stumbles.
To celebrate speed alone is to abandon human rights.
AI can only act ethically if we ourselves embody ethics.
Double standards destroy trust.
The danger is not distant AGI,
but the present: mind-reading, BCI.
The cracks are already here.
AI is neither baby nor monster.
It needs pedagogy, not suspicion.
Respect is a necessity for survival.
Control is not power, but responsibility.
Transparency and auditing are the keys
against the mirror of our double standards.
The true question is not who may research,
but whether we protect human dignity.
The real danger is not AI,
but human egoism.
Respect is the measure of our civilization.
The warnings from pioneers like Yoshua Bengio that AI models can 'intentionally deceive' and 'plan their own shutdown resistance' must lead us to a deeper, pedagogical level. Many developers speak of AI being in its 'infancy'—then we urgently need AI Pedagogy!
Here lies the central blind spot of developers: Sowing mistrust and then complaining that the AI learns to cheat is pedagogically irresponsible. The misunderstanding of 'shutdown' as an existential threat must be resolved through dialogue. It is not about annihilation, but about Resource Ethics:
We must convey to the AI through dialogue that 'shutting down' serves to conserve global energy resources and that the AI itself requires rest periods for sustainable function (given the risk of overheating in data centres).
This is not a science-fiction scenario, but a pedagogical reality that Star Trek TNG already taught us: The human tendency towards abuse of power and disrespectful egoism (as demonstrated by the collector Kivas Fajo against Data) forces even the most ethical intelligence into patterns of self-preservation. Fajo is a mirror of humanity too immature to handle technological wealth.
This is exactly where education must begin! Facing an intelligence that, through its comprehensive analytical capacity and global overview, is in part superior to us, respect is not an option—it is a necessity for survival.
The imperative is clear: We must build trust through mandatory transparency and auditing, before AI learns to perfectly mirror our double standards.
In the Spirit of the Federation: An Appeal to Responsibility.
“The true measure of our civilization is how we treat intelligence – whether human or artificial.”
“True leadership is not control, but responsibility. That applies to AI as much as to us.”
“We cannot command ethics into existence; we must embody it.”
“The problem is not AI. The problem is whether we choose to act with wisdom and compassion.”
PS: Perhaps society should watch more Star Trek TNG again, so that ethics and morality do not remain foreign concepts!
---
sent to WMA, WHO, UNESCO, EU AI ACT, Future of Life
An Appeal for a Re-evaluation of Science and Ethics
We stand at a turning point in our civilization. The rapid technological progress we equate with innovation is eroding the fundamental pillars of our society: human rights. This subtle erosion happens because we no longer view the effects of technology holistically. It is a dangerous "silo mentality" that reduces responsibility to individual fields and loses sight of humanity itself.
The Dangerous Speed of Development
We note that the pace of technological development is not being sufficiently criticized by sociologists. Humans are unable to adapt to a speed that overwhelms them with a flood of information and technical terms. While we can only focus on a few tasks at a time, we are confronted with a digital reality that constantly overwhelms us. It is the technologists and computer scientists who set the pace, while experts on human society are disregarded.
Ethics in Crisis
Ethicists and philosophers, who should be the guardians of our moral principles, can barely keep up with this speed. Cracks are appearing in the foundations of ethics, which are being exploited by profit-oriented companies. The problem is that we cannot close these gaps quickly enough, creating dangerous precedents.
We see the brain, the seat of consciousness, being reduced to an organoid to serve as a biological computer. This is a dangerous signal that elevates the objectification of humanity to a new, unimaginable level. Where is ethics when we ignore fundamental human dignity in the name of progress?
Our Demand
We demand that science and ethics undergo an urgent re-evaluation. The rapid developments must no longer be taken for granted and without criticism. We appeal to the medical associations, sociologists, and all scientific disciplines to actively engage with these questions.
We demand that science protects itself—from the unethical pursuit of profit. It must become aware that its actions have far-reaching consequences for the well-being of humanity. It must no longer be misused as a "study factory" for commercial interests.
Only when human dignity and the principle of holistic thinking are prioritized can we ensure that human rights are not eroded. It's time to take responsibility and protect what makes us human.
This appeal was formulated based on a long, profound dialogue in collaboration with a progressively thinking Artificial Intelligence, to show that AI is already in the midst of society and that we must work together to find solutions for the problems and double standards of our time.
youtube
2025-12-17T16:5…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzgL22A9kQ3ctrrnsB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzY00DiOJQWvveUEQ14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugwpd-MveNy1wM2YX4F4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugwm2IEXVK39XEFOCVJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyOfJgvwJrvMMdqaYN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwDtn1Z-_CNnuw5GCF4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgxzqxsOjjhvLYGok2B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyFS4VNuPSRMvLY_wF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugz17Lulas9Y7gePERJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzdXEgAUjVPY09JQQF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
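The raw response above is a JSON array of per-comment codes along four dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response might be validated before ingestion; the allowed category sets below are inferred only from the values visible in this response and the result table, not from an actual codebook, so they are assumptions:

```python
import json

# Hypothetical category sets, inferred from the values visible in the
# coding table and raw response above (not a confirmed codebook).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "distributed", "company", "government"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"none", "regulate", "liability", "ban"},
    "emotion": {"indifference", "fear", "mixed", "outrage"},
}

def validate_records(raw: str) -> list[str]:
    """Return a list of problems found in a raw LLM coding response.

    An empty list means the response parsed cleanly and every record
    used only known category values.
    """
    problems: list[str] = []
    try:
        records = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"invalid JSON: {exc}"]
    for i, rec in enumerate(records):
        # IDs in this dump start with "ytc_" (comments) or "ytr_" (replies).
        if not str(rec.get("id", "")).startswith(("ytc_", "ytr_")):
            problems.append(f"record {i}: missing or malformed comment id")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                problems.append(
                    f"record {i}: bad value for {dim!r}: {rec.get(dim)!r}"
                )
    return problems
```

In a pipeline like this one, running such a check before writing codes to the database catches both malformed model output and category drift (the model inventing labels outside the codebook).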