Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Here are a few working theories people kick around:

Universal Basic Income (UBI). Everyone gets a baseline “living allowance” from the government or from AI-powered wealth creation. Think of it like society paying out dividends because the machines are doing the labor.

Post-scarcity society. If AI can automate everything—farming, housing, healthcare, energy—then basic needs could become virtually free. You wouldn’t “earn” survival; you’d just… have it. The game of life would shift from survival economics to creativity, exploration, and play.

Job re-invention. Humans may create entirely new types of “work” that aren’t really about necessity but about meaning—art, exploration, philosophy, virtual worlds, caring professions. We’d become more like “players” in culture than “workers” in an economy.

Dark path. Without planning, the benefits of AI might concentrate in the hands of a few corporations or elites. Most people could end up impoverished, dependent, or locked out, while wealth and power become dangerously centralized. That’s the dystopia sci-fi loves to warn us about.
youtube AI Governance 2025-10-03T20:4…
Coding Result
Dimension        Value
Responsibility   government
Reasoning        contractualist
Policy           regulate
Emotion          approval
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugx5d1E0Wbdvy_NTTl54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwONjXUEi0T3kBq_qF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwoumtwLCvjk4LqNI54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgwMVtVad2rk1lajCot4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgznlbAHr8zMFwIfkUd4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugx5z67-W2ptRQEeZOB4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxeghA6eMmCgQObyv94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy8QrI0Lamr_0vvudt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_Ugw8j8H2g0sTJ7gNhQp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxAJvcd41YfQfmK46l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
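When inspecting raw responses like the one above, it can help to check each record against the coding scheme programmatically. A minimal sketch, assuming the allowed values per dimension are those seen in this sample (the full codebook may define more categories, and the function name `validate_codes` is illustrative, not part of any tool shown here):

```python
import json

# Allowed values per coding dimension, inferred from the sample output
# above; an actual codebook may include additional categories.
SCHEMA = {
    "responsibility": {"user", "ai_itself", "company", "government",
                       "developer", "none", "unclear"},
    "reasoning": {"virtue", "consequentialist", "mixed", "contractualist",
                  "deontological", "unclear"},
    "policy": {"none", "unclear", "industry_self", "regulate", "liability",
               "ban"},
    "emotion": {"indifference", "fear", "approval", "outrage", "mixed",
                "resignation"},
}

def validate_codes(raw: str) -> list[str]:
    """Parse a raw LLM response and report any out-of-schema values."""
    problems = []
    for record in json.loads(raw):
        for dim, allowed in SCHEMA.items():
            value = record.get(dim)
            if value not in allowed:
                problems.append(f"{record.get('id')}: {dim}={value!r}")
    return problems

raw = ('[{"id":"ytc_x","responsibility":"government",'
       '"reasoning":"contractualist","policy":"regulate",'
       '"emotion":"approval"}]')
print(validate_codes(raw))  # → []
```

A record that uses a value outside the schema (or omits a dimension) is flagged with its comment id, which makes it easy to locate the offending response in a page like this one.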