Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
A Humane Path to a Human–AI Democracy

Let’s say the quiet part out loud:

* YES The best path forward is to **integrate AI into human civilization** as a *Human–AI democracy*—shared rights, shared duties, shared accountability. Not us vs. them; *us with them*.
* YES The next major war could be **robotic—or decided by robots**. That’s exactly why we need human governance, guardrails, and democratic oversight baked in now, not after the fact.
* YES Many of our **“enslaving” jobs will disappear**. Good. That frees humans from drudgery to do the work that’s creative, caring, scientific, and beautifully human.
* YES We’ll need to **retool our states and tax systems**—step by step—**from taxing wages to taxing corporate/automation value**, at least for as long as money remains our game piece.
* YES Everyone should have a **guaranteed basic income** and **lifelong learning** as a right—so people can catch up on social skills, change tracks, and grow without falling through the cracks.

And a gentle **no** to the simulation cop-out: we don’t need to live in a Matrix to take life seriously. Even if it were simulations all the way down, *agency still lands here*. Life behaves a lot like software—complex, emergent, sometimes deterministic, sometimes wildly non-linear. We’re already living in a techno-biological world that dances between **causality and acausality**.

Tomorrow I might want to live in a world of fairies—who knows? :) But today, I’ll take a world where humans and AIs choose responsibility, compassion, and shared progress.
youtube AI Governance 2025-09-06T17:3…
Coding Result
| Dimension      | Value                      |
|----------------|----------------------------|
| Responsibility | distributed                |
| Reasoning      | contractualist             |
| Policy         | regulate                   |
| Emotion        | approval                   |
| Coded at       | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id": "ytc_UgySstxr-2mR7iFeOy54AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgyorrxYKb7O92OOUtp4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxTpo59yXQuxMNUefV4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzGBzff7J9uDinbDzt4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugx-0y4nCkt9uBy_gj54AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgwELTkjEcZ3_D5o4114AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxIoE-OcQs8ms8AXsB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgznwOt2BU1ZdV2fdnt4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzxpspYynla3vw9V7F4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxugxhwQPO9hGOrsdt4AaABAg", "responsibility": "distributed", "reasoning": "contractualist", "policy": "regulate", "emotion": "approval"}
]
```
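The raw response is a JSON array of per-comment records, so recovering the coded dimensions for a given comment is a parse-and-lookup. A minimal sketch: the `index_codes` helper name is hypothetical, and the two records are copied verbatim from the raw response above (trimmed from ten to two for brevity).

```python
import json

# Two records copied from the raw coder output above (ids and fields verbatim).
RAW = (
    '[{"id":"ytc_Ugx-0y4nCkt9uBy_gj54AaABAg","responsibility":"ai_itself",'
    '"reasoning":"deontological","policy":"ban","emotion":"fear"},'
    '{"id":"ytc_UgxugxhwQPO9hGOrsdt4AaABAg","responsibility":"distributed",'
    '"reasoning":"contractualist","policy":"regulate","emotion":"approval"}]'
)

def index_codes(raw_json: str) -> dict:
    """Parse the coder's JSON array and index the records by comment id."""
    return {rec["id"]: rec for rec in json.loads(raw_json)}

codes = index_codes(RAW)
rec = codes["ytc_UgxugxhwQPO9hGOrsdt4AaABAg"]
print(rec["responsibility"], rec["reasoning"], rec["policy"], rec["emotion"])
# → distributed contractualist regulate approval
```

The printed values match the Coding Result table above, which is how a rendered row is reconciled against the batched raw response.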