Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Why do people keep thinking good will prevail when time and time again, humanity proves otherwise? It’s not just corrupt leaders—it’s all of us. An ancient truth we all carry: the eternal tension between yin and yang, good and bad, moral and immoral, creation and corruption, light and shadow. Of course, there are exceptions—but we’d be naive not to account for corruption. We can barely manage ethics, truth, and accountability within our own country—yet we’re supposed to trust AI developed in other nations, often by governments with no regard for human rights? All the talk of “transparency” and “accountability” in AI rings hollow when it comes from institutions that have failed us repeatedly—not to mention the ways others may exploit it even more dangerously. This isn’t about fearing technology. It’s about recognizing that those who create and control it are still human—flawed, self-serving, and often not acting for the greater good. And that’s exactly why AI must be strictly regulated, especially when it’s in the hands of governments whose values differ wildly from our own—and who may have zero regard for human rights.
youtube AI Governance 2025-06-23T02:3…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       deontological
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgzOrWatAb5cM20OUbN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzQ2TQM10TRrhgX9PJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy_KWDgs88J_kX-MEx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzW-k8yKRLfnlyS63d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzVOIoHqJb-0HsMfwF4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx7NLxeBYc4gnTS9hB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugwjgv8G0aCvTFNG0cx4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzJWnUIcUaMwa7qcyB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzqjHVHHkm39R_wcaJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgxEkFgTobeCXoaEeL54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"}
]
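The raw response is a JSON array of per-comment coding objects keyed by comment id. A minimal Python sketch of how such a response could be parsed and a single comment's coding looked up (the field names come from the response above; truncated here to two entries, and the lookup helper is illustrative, not part of the tool):

```python
import json

# Two entries copied from the raw LLM response above.
raw = '''[
  {"id":"ytc_UgzVOIoHqJb-0HsMfwF4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugwjgv8G0aCvTFNG0cx4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"fear"}
]'''

# Index the codings by comment id for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw)}

coded = codings["ytc_UgzVOIoHqJb-0HsMfwF4AaABAg"]
print(coded["responsibility"], coded["reasoning"], coded["policy"], coded["emotion"])
# distributed deontological regulate fear
```

These four values match the Coding Result shown for the featured comment above.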