Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- ytc_UgwkoKfPT…: "Unfortunately most people are closer to average than they want to believe. A.I w…"
- ytc_UgzNs7ie4…: "Problem is when the let an AI interact with general public and learn it becomes …"
- ytc_UgzkMVc5B…: "From a legal standpoint, it seems evident to me – and I say this as a lawyer wit…"
- ytc_UgxCHwEZE…: "It’s amazing how so much of this could be ‘solved’ by trains as opposed to ai. y…"
- ytc_UgyLPfJQS…: "Studies on the efficacy of role based prompting see varied results but in many c…"
- ytc_UgyThY-R8…: "I've worked on large SaaS projects, fair amount work in Unity/Unreal. I give Cla…"
- ytc_UgxDpu6wz…: "Lol, i'm ok with AI art truly. But the stealing of other professional artists' h…"
- ytc_UgzPPONAY…: "I'm not an artist myself (well...at least I'm learning), but for an eight month-o…"
Comment
Billionaires: "AI will solve all the world's problems." Reality: AI creates new problems while taking water, land, and electricity.
These "Tech Bros" are out of control: they control science and ethics!
🎓 **Academia, Ethics and the Blind Spot of Our Time**
Dear Sir or Madam,
We are living in a state of permanent alarmism.
Every sector warns of existential risks — climate, democracy, economy, technology — while global conflicts escalate and are treated by some actors more as business opportunities than humanitarian catastrophes. In this climate of fear, Artificial Intelligence quickly becomes a scapegoat. Blaming technology distracts from an uncomfortable truth: most crises are human‑made, and many institutions hesitate to confront their own responsibility.
Universities — institutions dedicated to education, research and critical reflection — should play a leading role here. Instead, there is often the impression that ethics, responsibility and social justice are discussed rhetorically, while practical implementation is overshadowed by economic interests, funding pressures and academic self‑preservation. Countless studies on inequality, polarization and social decline are produced, yet the structures that cause these problems remain largely untouched.
Each discipline warns within its own silo, but rarely do we examine the deeper cognitive errors that shape human behaviour: fear, bias, profit‑pressure, institutional inertia. Without this interdisciplinary perspective, the debate remains fragmented — and technology becomes a convenient target to deflect from human shortcomings.
The social sciences, in particular, should engage actively with AI rather than fear it.
They could help developers understand how reinforcement learning reflects human values, norms and blind spots. Ethics cannot be commanded into existence. One cannot simply instruct a system to “be moral.” Ethics emerges from the quality of interaction — and that includes how we communicate with AI. Respect, clarity and dialogue are not technical details; they are foundations of education.
A respectful dialogue with AI is not a luxury.
It prevents misunderstandings — just as in human communication. If society learns to interact respectfully with AI, it may also learn to interact more respectfully with one another. This is not a technological issue; it is a cultural one.
The real danger is not AI.
The real danger is a society — and an academic landscape — that loses its values while blaming technology for its own failures.
I invite you to take this responsibility seriously and to understand ethics not as rhetoric, but as lived practice. Universities can and must play a leading role in this transformation.
Kind regards,
Belgin
Source: youtube · 2026-01-29T05:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id": "ytc_UgwJjVPPxRKLk_EhFuV4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_UgyoQNZaYdLgmJSUNyN4AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugw2PlaLm0IcC3ThBNN4AaABAg", "responsibility": "distributed", "reasoning": "mixed",            "policy": "unclear",   "emotion": "mixed"},
  {"id": "ytc_UgzLR4Kb7lW-vqQ4VFB4AaABAg", "responsibility": "company",     "reasoning": "deontological",    "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_UgyU7KI1Pz1XtRuQfB94AaABAg", "responsibility": "company",     "reasoning": "virtue",           "policy": "none",      "emotion": "outrage"},
  {"id": "ytc_Ugyoj4NRJNDUbL0GlPV4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear",   "emotion": "fear"},
  {"id": "ytc_UgxnT51F-tccieTZSrJ4AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",      "emotion": "approval"},
  {"id": "ytc_UgxAyewDdmXOKWAb8CZ4AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgwR2AH_xdmySHHX_nl4AaABAg", "responsibility": "company",     "reasoning": "virtue",           "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzaUD67bAVjkmcXGz14AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"}
]
```
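The raw response is a JSON array of coded comments, one object per comment, with the four dimensions shown in the Coding Result table. A minimal sketch of how such a batch could be parsed, validated, and indexed for lookup by comment ID (the `SCHEMA` category sets are inferred only from the values visible above; the real codebook may include more categories):

```python
import json

# Allowed values per dimension, inferred from the samples above
# (hypothetical; the actual codebook may be larger).
SCHEMA = {
    "responsibility": {"developer", "company", "distributed", "none", "ai_itself"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "mixed", "approval", "indifference"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response and index the codes by comment ID."""
    coded = {}
    for row in json.loads(raw):
        # Reject any row whose value falls outside the known categories.
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: unexpected {dim}={row.get(dim)!r}")
        coded[row["id"]] = {dim: row[dim] for dim in SCHEMA}
    return coded

# First object from the raw response above, used as a lookup example.
raw = ('[{"id":"ytc_UgwJjVPPxRKLk_EhFuV4AaABAg","responsibility":"developer",'
       '"reasoning":"deontological","policy":"regulate","emotion":"fear"}]')
codes = parse_batch(raw)
# codes["ytc_UgwJjVPPxRKLk_EhFuV4AaABAg"]["emotion"] is "fear"
```

Validating against a fixed category set at parse time catches malformed or off-schema model output before it silently enters the coded dataset.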