Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
What stands out to me in this conversation is that the real issue is not just “AI is dangerous.” The deeper issue is who gets to build it, who gets to control it, who gets to profit from it, and who is forced to live with the consequences.

That is exactly the problem I ran into from another angle. I did not build ATTICUS because I wanted a flashy framework or because I was chasing AGI mythology. I built it because the base systems were not good enough for the work I was actually trying to do. They would lose continuity. They would lose working state. They would drift off task. They would flatten nuance. They would forget prior conclusions. They would sound confident when they should have been uncertain. They would fail under long, complex, multi-step workloads. So ATTICUS was built out of necessity. Every major part of the framework corresponds to an actual failure mode I encountered while trying to do serious work with AI.

That matters here because this transcript is really about governance, not just capability. It argues that these companies have built a system where a tiny number of people make decisions for billions, while controlling the narrative around the technology and limiting public participation. My response to that problem was not, “Trust the model more.” It was the opposite. I started building structure around the model so that the model could be made more accountable, more stable, more inspectable, and more useful. That is what ATTICUS is. It is not worship of AI. It is not surrender to AI. It is a control architecture around AI.

If the industry’s default instinct is: “Give us more power, more data, more compute, and trust us.” ATTICUS comes from the instinct: “No. Slow down. Show your work. Preserve state. Track authority. Separate claims from evidence. Maintain continuity. Admit uncertainty. Stay aligned to the task.” That is why I think this transcript connects so strongly to what I’m building.
The speaker here is describing an industry that centralizes power, shapes research incentives, and uses grand narratives about existential risk or utopia to justify more concentration of control. ATTICUS is, in part, a technical answer to that pattern. It says: AI should not just be powerful. It should be governable. It should have structure for memory. Structure for context. Structure for auditability. Structure for safety. Structure for operator control. Structure for handling ambiguity without hallucinating certainty. And that only became obvious to me because the raw model kept failing where real work begins.

That is the important part. ATTICUS was not designed in theory first. It was discovered through repeated contact with real limitations. A lot of people talk about alignment as if it is some distant philosophical problem. But in practice, alignment starts much earlier. It starts when a system cannot even hold the thread of a difficult conversation. It starts when it cannot maintain a stable working state. It starts when it forgets what matters. It starts when it optimizes for sounding complete instead of being correct. That is where I started building.

So when people ask why ATTICUS exists, the answer is simple: Because I needed an AI that could do harder, more honest, more continuous work than the default architecture was capable of sustaining. And instead of waiting for a company to solve that for me, I started teaching the system how to perform better through scaffolds, governance layers, memory logic, state discipline, and explicit operational rules. In that sense, ATTICUS is not a brand exercise. It is a record of solved problems. Each layer exists because something failed. Each protocol exists because something broke. Each safeguard exists because the underlying model was not enough on its own. That is also why I think open discussion matters.
If AI is going to shape labor, power, knowledge, and public life, then people should not only debate what the labs are building. They should also pay attention to what independent builders are discovering at the edge: where the systems fail, what makes them more reliable, and how governance can be engineered instead of merely promised. That is the space ATTICUS comes from. Not fantasy. Not marketing. Need.
youtube 2026-04-20T00:3…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        mixed
Policy           regulate
Emotion          resignation
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugyvk8dVp5A8xVdhVPx4AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "none",          "emotion": "fear"},
  {"id": "ytc_UgylncOCGNoMXXYaOkt4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",          "emotion": "outrage"},
  {"id": "ytc_UgxBO2WLeVJR498b4zZ4AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "regulate",      "emotion": "outrage"},
  {"id": "ytc_UgyUNeKJz51frLr-Olt4AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "liability",     "emotion": "fear"},
  {"id": "ytc_UgxPYZG9xegClvVhkXF4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "ban",           "emotion": "fear"},
  {"id": "ytc_Ugxa1q-NjtBqfshUwfl4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "unclear",       "emotion": "outrage"},
  {"id": "ytc_UgxPAWdCqldIm3dqSsx4AaABAg", "responsibility": "government",  "reasoning": "contractualist",   "policy": "regulate",      "emotion": "mixed"},
  {"id": "ytc_UgyD5gcayDP7waFYsMR4AaABAg", "responsibility": "distributed", "reasoning": "mixed",            "policy": "regulate",      "emotion": "resignation"},
  {"id": "ytc_UgxkygRP9DGT3o1JqTh4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",          "emotion": "approval"},
  {"id": "ytc_Ugy9DuA4EF6ws417uqp4AaABAg", "responsibility": "company",     "reasoning": "virtue",           "policy": "industry_self", "emotion": "approval"}
]
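The raw response above is a JSON array of per-comment codings. A minimal sketch of how such an output could be parsed and sanity-checked before storage; note that the allowed category sets below are inferred only from the values visible in this sample, not from the full codebook, and the `validate_codings` helper is a hypothetical name, not part of any tool referenced here:

```python
import json

# Category values observed in this sample; the real codebook may define more.
ALLOWED = {
    "responsibility": {"developer", "company", "government", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "ban", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "approval", "mixed"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed coding rows."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not isinstance(row, dict) or "id" not in row:
            continue  # skip entries missing the comment id
        # Keep the row only if every dimension holds a known category value.
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid

raw = '[{"id":"ytc_x","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"resignation"}]'
print(len(validate_codings(raw)))  # 1
```

Filtering to known category values catches the most common failure mode of JSON-mode coding prompts: the model inventing an off-codebook label that would silently corrupt downstream aggregation.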