Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If we were to shut down AI progress forever, it would basically mean going back to our old way of thinking — the same mindset that for thousands of years has created all our unnecessary desires. Our needs today go far beyond what a human actually needs to survive. We don’t even see it anymore — our minds are stuffed with wants that don’t really matter. And honestly, if we simply reduced our needs, pollution on the planet would drop by at least 50%, and it wouldn’t even be that hard. But the truth is, we can’t really control ourselves. Our thinking window is narrow, our mindset is one-directional, and we keep falling into the same patterns even though we know the consequences. That just shows how little control we actually have — we’re missing a tool, because our internal “tool” is too weak.

If we shut everything down, we lose the chance to improve anything. Staying stuck in the same mindset that we can’t control means the chance of fixing the world is almost zero — at least with our current collective way of thinking. We’ve been repeating the same things for thousands of years, and everything keeps getting worse. Without an additional tool that helps us discipline our thinking → behavior, real change is almost impossible.

I think with this technology we at least have a better chance. It requires working on our collective mindset, because that’s the very channel through which this tool will work — either against us or with us, depending on how we use it. But I believe that if we use this technology the right way, we could eventually create unlimited energy, which would help clean our atmosphere and reduce pollution.

My conclusion is this: if we stop the development of this technology forever, we will still be heading downhill, just with an even smaller chance of improving the state of the planet. If we don’t shut it down, the risk is big — but at least we have a chance. And that chance depends on us, as I mentioned.
youtube AI Governance 2025-12-07T15:5…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         resignation
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgyPNnDjzvWET6UOIjZ4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",      "emotion": "outrage"},
  {"id": "ytc_Ugxyscs1mSLmFQ0r-B94AaABAg", "responsibility": "none",      "reasoning": "mixed",            "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgxaeJW7gFT2qWedbn54AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_UgxGbLiyTZxqI7aCXs14AaABAg", "responsibility": "ai_itself", "reasoning": "unclear",          "policy": "unclear",   "emotion": "fear"},
  {"id": "ytc_Ugz3cAxJI6zJELtqLJp4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban",       "emotion": "fear"},
  {"id": "ytc_UgyNao5FRkGL2gNTAZx4AaABAg", "responsibility": "none",      "reasoning": "mixed",            "policy": "none",      "emotion": "approval"},
  {"id": "ytc_UgxJbPpNnfR0UF3cR4h4AaABAg", "responsibility": "none",      "reasoning": "mixed",            "policy": "none",      "emotion": "resignation"},
  {"id": "ytc_UgwdyQ7wAcrbF2AIGfJ4AaABAg", "responsibility": "user",      "reasoning": "deontological",    "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyypUOjL2kggO_n-NB4AaABAg", "responsibility": "company",   "reasoning": "deontological",    "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_Ugz-QDw1y_y-7xMhIpd4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",      "emotion": "outrage"}
]
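The raw response above is a JSON array of per-comment records, one per comment ID, each coded along four dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and sanity-checked is shown below. Note the allowed label sets here are only inferred from the values visible in this dump; the actual codebook may define additional categories, and the `parse_coding` helper is hypothetical, not part of any tool shown here.

```python
import json

# Label sets inferred from the values visible in this dump (assumption:
# the real codebook may contain more categories than appear here).
ALLOWED = {
    "responsibility": {"none", "developer", "company", "user", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "regulate", "ban", "liability", "unclear"},
    "emotion": {"outrage", "indifference", "fear", "approval", "resignation"},
}

def parse_coding(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and keep only records whose
    labels all fall within the allowed sets."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in labels for dim, labels in ALLOWED.items())
    ]

# Example: one valid record, one with an out-of-codebook label.
raw = (
    '[{"id": "ytc_example1", "responsibility": "none", "reasoning": "mixed",'
    ' "policy": "none", "emotion": "resignation"},'
    ' {"id": "ytc_example2", "responsibility": "galaxy", "reasoning": "mixed",'
    ' "policy": "none", "emotion": "fear"}]'
)
valid = parse_coding(raw)
print([rec["id"] for rec in valid])  # only the record with known labels survives
```

Dropping (rather than repairing) out-of-codebook records keeps the downstream table honest: a cell like "Emotion: resignation" is then guaranteed to come from the fixed label set rather than free-form model output.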