Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Sweeney’s position seems less like a principled stance and more like an uncritical defense of bureaucratic, legal, and special‑interest gatekeeping. The core problem she ignores is epistemic pluralism: in any large society, roughly half the population will diverge from the other half on foundational beliefs, values, and moral frameworks. Artificial intelligence inherits this fragmentation because it is trained on human discourse; it cannot magically transcend the absence of a shared moral ontology.
Insisting that AI conform to a singular, authoritative “Truth” is intellectually unserious when no such consensus exists among humans themselves.
Demanding that AI meet a standard of moral uniformity that the public has never achieved is not only incoherent—it is a category error.
If the goal is to impose guardrails, then the prerequisite is not technical but societal: the public would first need to articulate and ratify common definitions of right and wrong, shared moral commitments, and a coherent basis for normative decision‑making. Until that happens, calls for universally accepted AI guardrails amount to little more than bureaucratic wish‑casting.
youtube
AI Governance
2026-03-22T19:0…
♥ 11
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | contractualist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_Ugzoa8YE825Mk4s3vxF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugzn6uCawICpUh9E3RN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxrV8pDbf469Dvibl14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgygGKFZSnKIjWIPBpx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UgxLhAf8XUWTl94s6cF4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzid7E66aGQ_tFc1eV4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxN_NdLhvWmU0WokXR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx6Ff7rvN1idpqqzMd4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzcQlCMTKZt3rIHIah4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyl0xLhO4nfNPnJL1N4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"}
]
```
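A response like the one above can be parsed and sanity-checked before the codes enter the dataset. Below is a minimal sketch of such a validator; the allowed values per dimension are inferred only from the codes visible in this sample, so the real codebook may include values not listed here, and `validate_codes` is an illustrative helper, not part of the tool shown.

```python
import json

# Allowed codes per dimension, inferred from the sample response above.
# Assumption: the full codebook may define additional values.
CODEBOOK = {
    "responsibility": {"government", "company", "developer", "user",
                       "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "contractualist",
                  "virtue", "mixed"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "resignation", "approval", "mixed"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and reject malformed records."""
    records = json.loads(raw)
    for rec in records:
        # IDs in the sample start with ytc_ (comments) or ytr_ (replies).
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected comment id: {rec.get('id')!r}")
        for dim, allowed in CODEBOOK.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec['id']}: {dim}={rec.get(dim)!r} not in codebook")
    return records

# Example: the record matching the coding result shown above.
raw = ('[{"id":"ytc_Ugx6Ff7rvN1idpqqzMd4AaABAg",'
       '"responsibility":"government","reasoning":"contractualist",'
       '"policy":"unclear","emotion":"mixed"}]')
codes = validate_codes(raw)
print(codes[0]["responsibility"])  # prints: government
```

Failing fast on out-of-codebook values catches the common LLM failure mode of inventing a plausible but undefined label, rather than silently storing it.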