Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
These so called "industry - gurus" are more harmful than AI. Jack Dorsey is a th…
ytc_Ugx5y-Zuj…
what a dystopian time we live in where we have to suck up to a chatbot…
ytc_UgyN8I2tj…
This is like the same argument as let's use less plastic... cept everything is p…
ytc_Ugw0vypCn…
Ah yes the ai race we can't let china win the ai race.
It's not like it is concp…
ytc_UgwK1ma67…
I love using AI, but developers need put in harsh safety measures in place to pr…
ytc_UgxBDQ5f7…
The answer is to homeschool. Let's keep education and childhood human. And cou…
ytc_Ugx_OKxCJ…
The easiest way to tell imo is that every sentence in an ai essay tends to be th…
ytc_UgxN7wNaB…
@41-Haiku Yes, actually, we ARE talking about automation. Because the creation o…
ytr_UgyjxkmB2…
Comment
In many fields, science and technology are still led mostly by men. Research shows this shapes how new tools, including AI, are built, often with an acceleration-focused mindset: faster, bigger, more efficient, but not always guided by the common good.
Studies in feminist tech ethics remind us that technology could be developed differently: with more diverse voices, more care, and more attention to community needs. When one group leads, one vision dominates, but we all feel the impact.
Sources:
• Nature (2022): gender bias in research visibility and credit.
• MIT Press / “Feminist AI” research: the impact of homogeneous teams on AI values.
• UNESCO (2021): gender imbalance in STEM and its influence on tech design.
youtube
AI Governance
2025-12-05T15:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_UgwaeQyhPiBt-JFs3gV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzgdFoEBS6szH2FlGp4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgxFzZYps_rVZWZZqHB4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxpveisyvrzKqpVW7N4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyIT-PmNu3Cx8g_CXN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgzAtqA3wYrvhFt6OUh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwyhz0IBWpqWwMaJZt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxhpjs2QohIjnRTOCN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwdklOnoPLI3SjVXu14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgziUiwCBrlI6-BtPDJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}]
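The raw response above is a JSON array of coded records, one per comment. A minimal sketch (in Python; field names taken from the response, with two records excerpted as sample data) of parsing it and looking up a record by comment ID, as the "Look up by comment ID" control does:

```python
import json

# Excerpt of the raw LLM response shown above: a JSON array of coded records.
raw = """[
  {"id": "ytc_UgwaeQyhPiBt-JFs3gV4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzgdFoEBS6szH2FlGp4AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "regulate", "emotion": "mixed"}
]"""

# Parse the array into a list of dicts.
records = json.loads(raw)

# Index the records by comment ID for direct lookup.
by_id = {rec["id"]: rec for rec in records}

rec = by_id["ytc_UgzgdFoEBS6szH2FlGp4AaABAg"]
print(rec["responsibility"], rec["policy"])  # developer regulate
```

The second record matches the "Coding Result" table above (responsibility: developer, reasoning: virtue, policy: regulate, emotion: mixed).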