Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It's shocking how much things he gets wrong. First of all he talks about job loss like it's the biggest deal. The death of all life on the planet by AI (or the puppyfication of humans by AI to reach it's validation goals) is a significantly greater risk. He talks in a very optimistic way. His solutions are actually laughably unimportant. We need to make sure safety happens and in my opinion it's not open sourceing but siloing more advanced AGI systems. The other thing is that working at some point will be hurting the system. If AI can do 100x human level, then why work 32 hours a week? What kind of laughable nonsense is Bernie even talking about? He wants to give solutions to a completely new thing in an old way that doesn't even make sense to bring up. We need to make sure politicians are ready for the disruption AGI would cause, how to silo it, slow it down if necessary (high chance that it will be necessary), the race with China and to implement UBI at the first second we can. The more efficient the production becomes the cheaper things will be. So if all goes well we can expect that everything becomes 100x cheaper in some years after AGI is reached (depending on the speed of implementation). He brought up Germany, and while I'm not saying it's a bad place, I'm saying that the leaders are probably the single most incompetent people on the planet. They shut down the nuclear plants in an already energy heavy environment, where the future will require a lot more energy. They decided to shut down the single most advanced source of energy that currently exists to "save the environment". They are spending a lot of money on the Russia-Ukraine war, so I guess they won't make a comeback. The whole of Europe is slipping into irrelevancy. And the leaders are elected by the public. I'm not 100% sure that keeping democracy alive is a good idea in the future. I'm open to non-democratic solutions (like technocracy).
Source: youtube · AI Jobs · 2025-10-08T22:5… · ♥ 2
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgxHJdQnNv635m36q494AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxM3pCFAwriR08Vlpl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgxLSO2xbEXzlMsVP254AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw21Z5p6X0b3MksQb54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxnyjUcBQnWk96gCb94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzSkJJXYTpEoGhFijV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxrkroTRlLzY5vCLIJ4AaABAg","responsibility":"government","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwKiX0t-FvB5bGEgtV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyekwzovssSSkedb-N4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz5RtQYtYhHM3EDe754AaABAg","responsibility":"consequentialist","policy":"none","emotion":"indifference"}
]
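A minimal sketch of how a raw LLM response like the one above can be turned back into per-comment codes: parse the JSON array and index each row by its comment `id`, so the coded dimensions for any comment can be looked up directly. The variable names (`raw`, `codes`, `entry`) are illustrative, not part of any documented API; the two rows below are copied verbatim from the response above, truncated for brevity.

```python
import json

# Raw model output: a JSON array of coded comments (truncated to two
# entries from the full response above for brevity).
raw = """[
  {"id":"ytc_UgxHJdQnNv635m36q494AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxnyjUcBQnWk96gCb94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]"""

# Index rows by comment id for direct lookup of a comment's codes.
codes = {row["id"]: row for row in json.loads(raw)}

# The entry matching the Coding Result table above.
entry = codes["ytc_UgxnyjUcBQnWk96gCb94AaABAg"]
print(entry["responsibility"], entry["policy"])  # developer regulate
```

This kind of lookup makes it easy to cross-check a displayed coding result against the exact model output it came from.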