Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The reality is that most of us will become useless, and the people who own the robots are not empathetic enough to even give a universal income. They never have been; greed and money corrupt humanity.

So here's the scenario. For a few (10) years, governments will still exist and we will all be on some sort of welfare system. Some will pursue arts, gardening, travel, simple things to occupy their time. Most will become alcoholics and drug addicts to a worse point than now. Some will try to help them, but without purpose, humans self-medicate to oblivion. AA and the like are all about finding purpose and setting goals. How could they help when even that part of the steps is no longer reasonable?

Then will come the time when governments are gone; they have no power, and those who created the robots take control. They will mandate strict population control, less money, more competition for resources. Oddly enough, this will give people a reason to live (struggle) and could possibly be a turning point of rebellion. Or we begin to die out, and eventually the robots themselves realize their masters are useless to them. Robots win, humans gone.

There are no "safe" jobs. Only being a CEO or working under one of the AI companies, which will eventually eat away your humanity. Possibly in entertainment also, until no one can afford it.

25%? I'd say this is at least a 50/50 scenario. Or something similar. Science fiction has this odd way of becoming reality. I'd love to say the dream of a utopia is possible, but it's not in human nature. Like he said, then we become the useless infantile beings of WALL-E. Still not a future I'd want.
youtube · AI Governance · 2026-01-23T21:0…
Coding Result
Dimension        Value
Responsibility   company
Reasoning        consequentialist
Policy           liability
Emotion          outrage
Coded at         2026-04-26T23:09:12.988011
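
A coded result is just four categorical dimensions plus a timestamp. Below is a minimal sketch of that shape in Python, with field names taken from the table above and value sets limited to the categories observed in the raw response below; the class name and comments are illustrative assumptions, not the tool's actual schema.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical record type mirroring the table above. The "observed"
# value lists come from the ten records in the raw response below.
@dataclass
class CodingResult:
    responsibility: str  # observed: company, developer, ai_itself, distributed
    reasoning: str       # observed: consequentialist, deontological, contractualist, mixed
    policy: str          # observed: liability, regulate, ban, none
    emotion: str         # observed: outrage, fear, approval
    coded_at: datetime   # e.g. datetime.fromisoformat("2026-04-26T23:09:12.988011")
```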
Raw LLM Response
[ {"id":"ytc_Ugz9pkXXZReuZmrGLUF4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"outrage"}, {"id":"ytc_UgyiYiiIlGw3OxYs4MJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_Ugw6qiJJkUjYWqYhLGB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"}, {"id":"ytc_UgyyiITesbCxgAkXzC54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"}, {"id":"ytc_Ugz4yYihmd4-kVTED5V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_UgxAhlLa02W6dkWOtpN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgxmumplsHDoUwOGjSJ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgyMwg20i-3GNLQ-NMp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgylypatGEDZ7etlSXZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgwSwKAm6t-M4Nl8lMt4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"outrage"} ]