Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
House and Senate Republicans have abdicated their duty—some with shrugs, others with applause—while a dangerous, unchecked dictator hands out favors, stacks the courts, and methodically dismantles the country, grinding it into dust beneath his ego. They’ve gutted oversight, fast-tracked his loyalists into lifetime judicial seats, and watched as he turns the DOJ into a personal hit squad, pardons cronies on a whim, and slashes every safeguard from clean air to fair elections. Inflation’s roaring, bridges are rotting, allies are ghosting us—and still they nod along, trading democracy for a pat on the back. But here’s the real nightmare: they won’t stop at policy. They’re letting this power-hungry regime seize the next frontier—AI. Unchecked authority now has the keys to tools like Anthropic’s models, Grok, ChatGPT, and beyond. Imagine it: propaganda algorithms tuned to perfection, surveillance nets woven from every click and camera, deepfakes that make truth obsolete, and decision-making handed to black-box systems loyal only to the throne. We can’t let that happen. This isn’t about tech—it’s about survival. If we don’t demand transparency, audits, and real guardrails right now, they’ll weaponize AI not to help humanity, but to cement control. No more “oops, it slipped through”—no more pretending it’s just innovation. The Founders didn’t write checks and balances for algorithms. They wrote them for people. And if Congress won’t enforce them, the people will have to.
youtube 2026-02-28T15:2…
Coding Result
Dimension        Value
Responsibility   government
Reasoning        deontological
Policy           regulate
Emotion          outrage
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgwLZLrVRAiTCg31ygt4AaABAg", "responsibility": "none",        "reasoning": "mixed",            "policy": "none",      "emotion": "approval"},
  {"id": "ytc_Ugymw_es4eh3v9Fm4EZ4AaABAg", "responsibility": "government",  "reasoning": "deontological",    "policy": "unclear",   "emotion": "mixed"},
  {"id": "ytc_Ugyymn3EiTj0g65sGrZ4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "ban",       "emotion": "fear"},
  {"id": "ytc_UgzlCoE8oWoQRRn_6dN4AaABAg", "responsibility": "government",  "reasoning": "deontological",    "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_UgyzcAsYOGmHHXjSYh54AaABAg", "responsibility": "government",  "reasoning": "deontological",    "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_UgyQxDU_XyBMr4TaxF14AaABAg", "responsibility": "company",     "reasoning": "mixed",            "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgyapIzJcQcT5jhbvRh4AaABAg", "responsibility": "none",        "reasoning": "mixed",            "policy": "none",      "emotion": "approval"},
  {"id": "ytc_Ugydlabq__46bqcXDoV4AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "ban",       "emotion": "outrage"},
  {"id": "ytc_UgzuF6p-BpSgTEhfb1Z4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugwif9XSR02ESheARox4AaABAg", "responsibility": "government",  "reasoning": "deontological",    "policy": "none",      "emotion": "approval"}
]
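A raw response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal, hypothetical example: the `validate` helper and the `OBSERVED` value sets are not part of this tool, and the sets contain only the category values seen in this one batch (the full codebook may allow more).

```python
import json

# One record from the raw model output above, used as sample input.
RAW = '''[
  {"id": "ytc_UgzlCoE8oWoQRRn_6dN4AaABAg", "responsibility": "government",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]'''

# Category values observed in this batch only -- an assumption, not the
# authoritative codebook.
OBSERVED = {
    "responsibility": {"none", "government", "ai_itself", "company",
                       "developer", "distributed"},
    "reasoning": {"mixed", "deontological", "consequentialist"},
    "policy": {"none", "unclear", "ban", "regulate", "liability"},
    "emotion": {"approval", "mixed", "fear", "outrage", "indifference"},
}

def validate(records):
    """Split records into those whose coded values all fall inside the
    observed value sets, and a list of (id, dimension, value) violations."""
    valid, errors = [], []
    for rec in records:
        bad = [(rec["id"], dim, rec.get(dim))
               for dim in OBSERVED if rec.get(dim) not in OBSERVED[dim]]
        if bad:
            errors.extend(bad)
        else:
            valid.append(rec)
    return valid, errors

valid, errors = validate(json.loads(RAW))
print(len(valid), len(errors))  # -> 1 0
```

Checking each dimension against a closed value set catches the most common failure mode of LLM coders, an out-of-vocabulary label, before it reaches the results table.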