Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
A concerning ethical issue arose when the team put forward ChatGPT as the most reliable entity for making final decisions, even for determining whether my research on avatars, focused on providing intellectual and emotional support, was valid. Despite 10 hours of senior-led research, the decision was deemed beyond my authority and assigned to ChatGPT. Another GPT model, trained on biased data, dismissed the first software trial as inconclusive, raising concerns about biased inputs and over-reliance on AI outputs (Kun, Rich, and Hartzog, 2020). While AI, including ChatGPT, offers potential for fostering connection and empathy, dismissing research solely on the basis of AI outputs is shortsighted (Danks and London, 2022). Decisions in critical areas must remain collaborative, with AI serving as an advisor while humans retain ultimate decision-making authority. This underscores the urgent need for governance and boundaries to prevent over-reliance on AI at the expense of human expertise. These experiences, reflecting fears of being replaced and undervalued, were mirrored in this experiment when team members trusted AI over the expertise of a gifted senior researcher. This diminishes the value of human insight and raises ethical concerns about dehumanization in decision-making (Davenport and Miller, 2022). Placing trust in machines over human expertise signals a troubling shift, one that prioritizes technology over both emotional and intellectual judgment. The term 'neurofascism' was coined by Idriss Aberkane, a scientist and speaker with three PhDs, including one in neuroscience, who introduced it during his conference Le Futur de l'Éducation face à l'Intelligence Artificielle to underscore the dangers of prioritizing AI over human judgment.
Source: YouTube · AI Responsibility · 2024-12-19T01:1… · ♥ 1
Coding Result
Dimension       Value
Responsibility  company
Reasoning       deontological
Policy          regulate
Emotion         outrage
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgyIg1wYSStfyDhxvnx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugzd7RdrLeMk6Wppe_V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxxGzSFs7dpmgLS-mN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyFW0jQcWqyghK93et4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzQQkHCJak9LzGoWA94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwWDguM5O2Sjv1GRKB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzxXsRxxQyLT6f7rLF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz9A9adDeSAFDujNk14AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgyIdw6DkNbEBt4_p-J4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzYQEHMGoPKuyLabQl4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
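A raw response in this shape can be validated and summarized with a few lines of Python. This is a minimal sketch, not the tool's actual pipeline: the field names (id, responsibility, reasoning, policy, emotion) are taken from the response above, but the function names and the tallying step are illustrative assumptions.

```python
import json
from collections import Counter

EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse the LLM's JSON array and check each row has the expected keys."""
    rows = json.loads(raw)
    for row in rows:
        missing = EXPECTED_KEYS - row.keys()
        if missing:
            raise ValueError(f"row {row.get('id')!r} missing keys: {missing}")
    return rows

def tally(rows: list[dict], dimension: str) -> Counter:
    """Count how often each value appears for one coding dimension."""
    return Counter(row[dimension] for row in rows)

# Two example rows with hypothetical ids, mirroring the schema above.
raw = (
    '[{"id":"ytc_a","responsibility":"company","reasoning":"deontological",'
    '"policy":"regulate","emotion":"outrage"},'
    '{"id":"ytc_b","responsibility":"none","reasoning":"unclear",'
    '"policy":"unclear","emotion":"indifference"}]'
)
rows = parse_coding_response(raw)
print(tally(rows, "responsibility"))  # Counter({'company': 1, 'none': 1})
```

Validating keys up front catches the common failure mode where the model drops or renames a field, before a bad row silently skews the per-dimension counts.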