Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Sweeney’s position seems less like a principled stance and more like an uncritical defense of bureaucratic, legal, and special‑interest gatekeeping. The core problem she ignores is epistemic pluralism: in any large society, roughly half the population will diverge from the other half on foundational beliefs, values, and moral frameworks. Artificial intelligence inherits this fragmentation because it is trained on human discourse; it cannot magically transcend the absence of a shared moral ontology. Insisting that AI conform to a singular, authoritative “Truth” is intellectually unserious when no such consensus exists among humans themselves. Demanding that AI meet a standard of moral uniformity that the public has never achieved is not only incoherent—it is a category error. If the goal is to impose guardrails, then the prerequisite is not technical but societal: the public would first need to articulate and ratify common definitions of right and wrong, shared moral commitments, and a coherent basis for normative decision‑making. Until that happens, calls for universally accepted AI guardrails amount to little more than bureaucratic wish‑casting.
YouTube · AI Governance · 2026-03-22T19:0… · ♥ 11
Coding Result
Dimension       Value
Responsibility  government
Reasoning       contractualist
Policy          unclear
Emotion         mixed
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_Ugzoa8YE825Mk4s3vxF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugzn6uCawICpUh9E3RN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxrV8pDbf469Dvibl14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgygGKFZSnKIjWIPBpx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgxLhAf8XUWTl94s6cF4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzid7E66aGQ_tFc1eV4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxN_NdLhvWmU0WokXR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx6Ff7rvN1idpqqzMd4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzcQlCMTKZt3rIHIah4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugyl0xLhO4nfNPnJL1N4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"}
]
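A raw response like the one above can be parsed and validated before it is stored as a coding result. The sketch below is a minimal illustration, not the tool's actual pipeline: the allowed value sets are assumed from the codes visible in this one response, and the full codebook may define additional values.

```python
import json

# Assumption: categorical codes observed in the raw response above;
# the real codebook may include more values per dimension.
ALLOWED = {
    "responsibility": {"company", "developer", "government", "user",
                       "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "contractualist",
                  "virtue", "mixed"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "resignation", "approval", "mixed"},
}

def parse_coding(raw: str) -> list:
    """Parse a raw LLM response and check every dimension of every record."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: invalid {dim}={rec.get(dim)!r}")
    return records

# Hypothetical single-record response for illustration.
raw = ('[{"id":"ytc_x","responsibility":"government",'
       '"reasoning":"contractualist","policy":"unclear","emotion":"mixed"}]')
coded = parse_coding(raw)
print(coded[0]["responsibility"])  # government
```

Validating against a closed code set catches malformed or hallucinated labels at ingestion time, before they reach the coding-result table.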