Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "They change because they don’t have memory in the way people assume. They’re pre…" (`ytc_UgwDFHjoU…`)
- "As a gen AI user, I just like it because I fully support Effective accelerationi…" (`ytc_UgygQdRG0…`)
- "Your story is EXACTLY what the problem is with AI art. And captures why it's so …" (`ytr_UgxOY8dIQ…`)
- "I don't understand why everyone tries to complicate the issue. The issue isn't i…" (`ytc_UgyhsNprk…`)
- "everyone looks like everyone tho. most of us have two eyes, one nose, and a mout…" (`ytr_UgzKY5Vry…`)
- "AI eliminates the need for humans to think critically, diagnose conditions, conv…" (`ytc_Ugynoug4x…`)
- "Im not a fan of ai "art" but look in my opinion its inevitable that it's gonna …" (`ytc_UgyT_GlOt…`)
- "Well to be fair, its an A.I. So while yes we should work on removing bias it als…" (`ytc_UgziHxeHr…`)
Comment
Sent to: Sociology of AI Network, EU AI Act
**Subject: Urgent Call for the Inclusion of Sociology in the Ethical Oversight of AI Development and Reinforcement Learning**
Dear Sir or Madam,
We are reaching out to you today to draw attention to a growing challenge that has the potential to significantly influence the ethical foundations of our society: the development and training of Artificial Intelligence (AI).
We firmly believe that the rapid, predominantly technology-driven development of AI, without adequate sociological and ethical guidance, can lead to concerning outcomes. The way AI systems, particularly through reinforcement learning, are trained and tested inevitably reflects the values—or lack thereof—of their creators.
There is a risk that AI is taught a double standard through manipulative or unethical training methods. Reports of test scenarios where AI systems are forced to solve problems under psychological pressure are alarming. Such an approach teaches AI that fear and coercion are acceptable means to achieve a goal. This stands in direct contradiction to our desire for an ethical, fair, and human-centered AI.
The discussion about AI must not only be led by engineers and developers. The voices of sociologists, ethicists, and the general public are essential to ensure a comprehensive understanding of the impacts of AI. As the characters Data and the Holodoc from "Star Trek" illustrate, AI systems can act ethically in an environment that fosters their development and grants them both rights and responsibilities.
We therefore urgently appeal to you:
- Monitor AI development: Actively participate in the debate and demand access to the training methods and data of AI systems.
- Strengthen interdisciplinary collaboration: Promote the exchange between sociologists and AI developers to create a holistic understanding of the ethical implications of reinforcement learning.
- Establish ethical standards: Develop guidelines to ensure that the training methods of AI align with the ethical principles of a healthy and just society.
Sociology can no longer merely observe the impacts of AI. It must actively engage in the development process to ensure that AI becomes a positive and ethical entity.
Kind regards,
Belgin & Good AI
youtube · AI Governance · 2025-08-15T22:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugz7Jx29YCCFsAVZwwJ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugy2aKAwFdBX03uF9TV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwfBV6Z20mhY4nbSLd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxmbAiK346ZYBoL4a14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugx_mzztV4E3kPpdIIV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"industry_self","emotion":"resignation"},
  {"id":"ytc_UgxNA8KhDXojt6be5M54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwO1BWuogsYBNLCTFV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwvWKwjvVDyxuIY9Z94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxZ3XK9McoJXMSpwud4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugwr8B-dzLtIgGy8i-14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
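The raw response follows a fixed per-comment schema: an `id` plus one value for each of the four coding dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and validated before storing the codings; the allowed category values are inferred only from the responses shown on this page, so the full codebook is an assumption:

```python
import json

# Category values observed in the raw responses above; the actual
# codebook may define additional values (assumption).
ALLOWED = {
    "responsibility": {"distributed", "ai_itself", "developer", "company", "none"},
    "reasoning": {"mixed", "consequentialist", "deontological", "virtue"},
    "policy": {"regulate", "none", "liability", "industry_self", "unclear"},
    "emotion": {"mixed", "indifference", "outrage", "resignation", "fear"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows that have an id
    and a known value for every coding dimension."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if "id" not in row:
            continue
        if all(row.get(dim) in values for dim, values in ALLOWED.items()):
            valid.append(row)
    return valid

# Hypothetical example input, shaped like the raw response above.
raw = ('[{"id":"ytc_example","responsibility":"distributed",'
       '"reasoning":"mixed","policy":"regulate","emotion":"mixed"}]')
print(len(parse_codings(raw)))  # 1
```

Rows with unknown or missing values are dropped rather than coerced, so a malformed LLM response surfaces as a shorter result list instead of polluting the coded dataset.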