Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- We need an international AI safety regulation and disaster prevention organizati… (ytc_Ugxxy_KXE…)
- Requested Government Regulation in tech sector: AI needs to be developed with … (ytc_UgyBrAETc…)
- they tell us that companies want applicants with ai skills but do not go over wh… (ytc_UgyfqfTdc…)
- Ai art needs to stop It doesn't make anything better but make worse like in this… (ytc_UgwZY9XVa…)
- 3 things are not probable: 1. Absence of UBI at a 25% unemployment rate. 2. Tha… (ytc_Ugz84Pop9…)
- "95% of Gen AI pilots fail" is such a brutal stat, but not surprising. The fragm… (ytc_Ugy0YY18M…)
- AI in mental health and care sectors could go very poorly with too much reliance… (ytc_UgwHrTqUz…)
- AI is stoopid if its used to replace jobs, it should only be used as an assistan… (ytc_UgzUeM0NA…)
Comment
A concerning ethical issue arose when the team suggested ChatGPT as the most reliable entity for making final decisions, even determining whether my research on avatars, focused on providing intellectual and emotional support, was valid. Although I had invested 10 hours of senior-led research, the decision was deemed beyond my authority and assigned to ChatGPT. Another GPT model, trained on biased data, dismissed the first software trial as inconclusive, raising concerns about biased inputs and over-reliance on AI outputs (Kun, Rich, and Hartzog, 2020).
While AI, including ChatGPT, offers potential for fostering connection and empathy, dismissing research solely based on AI outputs is shortsighted (Danks and London, 2022). Decisions in critical areas must remain collaborative, with AI serving as an advisor while humans retain ultimate decision-making authority. This underscores the urgent need for governance and boundaries to prevent over-reliance on AI at the expense of human expertise.
These experiences, reflecting fears of being replaced and undervalued, were mirrored in this experiment when team members trusted AI over the expertise of a gifted senior researcher. This diminishes the value of human insight and raises ethical concerns about dehumanization in decision-making (Davenport and Miller, 2022). Placing trust in machines over human expertise signals a troubling shift, one that prioritizes technology over both emotional and intellectual judgment. The term 'neurofascism' was coined by the brilliant and handsome Idriss Aberkane, a scientist and speaker with three PhDs, including one in Neurosciences, who introduced the concept during his conference Le Futur de l'Éducation face à l'Intelligence Artificielle (The Future of Education in the Face of Artificial Intelligence) to underscore the dangers of prioritizing AI over human judgment.
youtube · AI Responsibility · 2024-12-19T01:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyIg1wYSStfyDhxvnx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugzd7RdrLeMk6Wppe_V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxxGzSFs7dpmgLS-mN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyFW0jQcWqyghK93et4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzQQkHCJak9LzGoWA94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwWDguM5O2Sjv1GRKB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzxXsRxxQyLT6f7rLF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz9A9adDeSAFDujNk14AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgyIdw6DkNbEBt4_p-J4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzYQEHMGoPKuyLabQl4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
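A raw batch response like the one above can be parsed and sanity-checked before it is stored as coding results. The sketch below is a hypothetical helper, not part of this tool: the allowed values per dimension are inferred from the coding table and the sample JSON shown here, and the real codebook may define more categories.

```python
import json

# Allowed values per dimension, inferred from the coding table and the
# sample response above; the actual codebook may include more categories.
ALLOWED = {
    "responsibility": {"company", "developer", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "indifference", "resignation", "approval", "mixed"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response (a JSON array) and validate each record."""
    records = json.loads(raw)
    for rec in records:
        # Every record should carry a YouTube comment id.
        if not rec.get("id", "").startswith("ytc_"):
            raise ValueError(f"unexpected comment id: {rec.get('id')!r}")
        # Each coded dimension must use a known category.
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} value {rec.get(dim)!r}")
    return records

# One record from the batch above, as the model returned it.
raw = (
    '[{"id":"ytc_UgwWDguM5O2Sjv1GRKB4AaABAg","responsibility":"company",'
    '"reasoning":"deontological","policy":"regulate","emotion":"outrage"}]'
)
coded = parse_batch(raw)
print(coded[0]["policy"])  # regulate
```

Validating at ingest time catches the common failure mode where the model invents a category outside the codebook, rather than letting it silently skew the coded dataset.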