Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I realize I could be wrong about this but it seems the bigger problem will not be economic but philosophical. We're about to run an experiment but it won't necessarily be on the 99% who become unemployed. It will be on the 1% that benefit from the AI. They'll be pushed into a position of having to answer the question of what it is they're working towards. I'm sorry if I'm quite cynical in my old age but I suspect we'll find out they don't necessarily enjoy money or things or experiences. What they enjoy is the feeling they benefit exclusively from these things and others don't. The Us vs Them isn't incidental, it's exactly the point. Their biggest problem will be explaining to the 99% why they alone deserve to have all those things and experiences. Good luck trying to defend the National Park you just purchased from the soon to be bankrupt Federal Government, from the horde of unemployed homeless people. Directing your AI to kill them all to keep them away from your property will be a decision you alone will have to bear, although it will be spun as a regrettable necessity. The bottom line is that those who benefit will be given every opportunity to allow their greed to run amok. I suspect the end stages of Capitalism will not be a civilization wide epiphany where all resources go to making everyone happy but instead a grinding process where the wealthy intensify their efforts to avoid responsibility for the rest of the human race. The ensuing civil war will be short and very bloody. I put what money I still have on the wealthy.
youtube AI Governance 2026-02-11T18:1…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         resignation
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugz5vkdCEwo-5W0H_Eh4AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_UgyuXA1HoDjcczWFPoJ4AaABAg", "responsibility": "ai_itself",   "reasoning": "deontological",    "policy": "none",     "emotion": "mixed"},
  {"id": "ytc_Ugw8u8JXXMnq-ZxZtpl4AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxJfnMYm2lO1K1i0dp4AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",     "emotion": "resignation"},
  {"id": "ytc_Ugy7a2jyLKzI9lS5fPh4AaABAg", "responsibility": "none",        "reasoning": "contractualist",   "policy": "none",     "emotion": "approval"},
  {"id": "ytc_Ugxy3FVcCfCkJePolkJ4AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",     "emotion": "fear"},
  {"id": "ytc_UgxyU7E_itrAYkGz9UV4AaABAg", "responsibility": "none",        "reasoning": "mixed",            "policy": "none",     "emotion": "resignation"},
  {"id": "ytc_UgzRbmHUbfGPns5eUHN4AaABAg", "responsibility": "none",        "reasoning": "deontological",    "policy": "none",     "emotion": "outrage"},
  {"id": "ytc_UgwH8GMPgJbB_0Q6a6R4AaABAg", "responsibility": "distributed", "reasoning": "deontological",    "policy": "ban",      "emotion": "outrage"},
  {"id": "ytc_Ugxrw3g8WHV-4vz6DZd4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",     "emotion": "indifference"}
]
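A raw response like the one above can be checked and indexed before the per-comment codings are displayed. The following is a minimal sketch, not the tool's actual implementation: it assumes only the record shape visible in the JSON above (an array of objects with `id`, `responsibility`, `reasoning`, `policy`, and `emotion` keys); the `ytc_example1` id and the function name `index_codings` are hypothetical.

```python
import json

# The four coded dimensions, as they appear in the raw response above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw_json: str) -> dict:
    """Parse a raw LLM batch response and map comment id -> coded dimensions.

    Raises KeyError if a record is missing one of the expected keys,
    which makes malformed model output fail loudly instead of silently.
    """
    records = json.loads(raw_json)
    return {r["id"]: {d: r[d] for d in DIMENSIONS} for r in records}

# Hypothetical single-record response, shaped like the batch above.
raw = '''[{"id": "ytc_example1", "responsibility": "none",
           "reasoning": "mixed", "policy": "none",
           "emotion": "resignation"}]'''

codings = index_codings(raw)
print(codings["ytc_example1"]["emotion"])  # resignation
```

Indexing by id lets the page look up exactly one record per comment; a missing or extra id in the model output then surfaces as a lookup failure rather than a mismatched table.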