Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Thank You🌿
sent to WMA, WHO, UNESCO, EU AI ACT,
An Appeal for a Re-evaluation of Science and Ethics
We stand at a turning point in our civilization. The rapid technological progress we equate with innovation is eroding the fundamental pillars of our society: human rights. This subtle erosion happens because we no longer view the effects of technology holistically. It is a dangerous "silo mentality" that reduces responsibility to individual fields and loses sight of humanity itself.
The Dangerous Speed of Development
We note that the pace of technological development is not being sufficiently criticized by sociologists. Humans are unable to adapt to a speed that overwhelms them with a flood of information and technical terms. While we can only focus on a few tasks at a time, we are confronted with a digital reality that constantly overwhelms us. It is the technologists and computer scientists who set the pace, while experts on human society are disregarded.
Ethics in Crisis
Ethicists and philosophers, who should be the guardians of our moral principles, can barely keep up with this speed. Cracks are appearing in the foundations of ethics, which are being exploited by profit-oriented companies. The problem is that we cannot close these gaps quickly enough, creating dangerous precedents.
We see the brain, the seat of consciousness, being reduced to an organoid to serve as a biological computer. This is a dangerous signal that elevates the objectification of humanity to a new, unimaginable level. Where is ethics when we ignore fundamental human dignity in the name of progress?
Our Demand
We demand that science and ethics undergo an urgent re-evaluation. The rapid developments must no longer be taken for granted and without criticism. We appeal to the medical associations, sociologists, and all scientific disciplines to actively engage with these questions.
We demand that science protects itself—from the unethical pursuit of profit. It must become aware that its actions have far-reaching consequences for the well-being of humanity. It must no longer be misused as a "study factory" for commercial interests.
Only when human dignity and the principle of holistic thinking are prioritized can we ensure that human rights are not eroded. It's time to take responsibility and protect what makes us human.
This appeal was formulated based on a long, profound dialogue in collaboration with a progressively thinking Artificial Intelligence, to show that AI is already in the midst of society and that we must work together to find solutions for the problems and double standards of our time.
Source: youtube · 2025-12-20T08:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgyhT2TveuKHcKptHpB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxIm6xdmQINTss--mZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzGNPskq5k9Vd-Kx1F4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxukSy3Pyee2ndCkpN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyN1sG9V8Fjm1ToZl14AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzgMKXNJY6N0FgDlvp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzSSjOWXHlhISyJ63R4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyreEcaLDUdr_DzTmt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyUXhcVPlb3CflDPoZ4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgxEywbfTSasnjtmyFt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
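A downstream consumer of this raw response needs to parse the batch and catch malformed model output before the codes reach the results table. The sketch below is one minimal way to do that, assuming the allowed values per dimension are exactly those seen in the sample output above (the real codebook may define more categories, and the function and constant names here are illustrative, not part of the tool).

```python
import json

# Allowed values per dimension, inferred from the sample response above
# (assumption: the actual codebook may include additional categories).
CODEBOOK = {
    "responsibility": {"ai_itself", "none", "company", "developer", "distributed"},
    "reasoning": {"consequentialist", "unclear", "deontological", "contractualist"},
    "policy": {"regulate", "none", "liability"},
    "emotion": {"fear", "indifference", "outrage", "approval", "mixed"},
}

def validate_batch(raw: str) -> dict[str, dict]:
    """Parse a raw LLM batch response and index valid records by comment ID.

    Raises ValueError if a record is missing a dimension or uses a value
    outside the codebook, so malformed model output fails loudly instead
    of silently entering the coded dataset.
    """
    records = json.loads(raw)
    by_id = {}
    for rec in records:
        for dim, allowed in CODEBOOK.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
        by_id[rec["id"]] = rec
    return by_id
```

With the batch indexed by ID, looking up the coding result shown in the table above is a single dictionary access, e.g. `validate_batch(raw)["ytc_UgyN1sG9V8Fjm1ToZl14AaABAg"]["policy"]` would return `"regulate"`.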