Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think that around the time AI starts replacing serious human jobs, it will begin creating more jobs than it replaces. It's easier to replace CEOs and managers than workers, since those jobs have less of a material component, and an AI will likely do them better anyway. This will increase production and the demand for human workers. In addition, once AI starts analyzing the economy, it will find job opportunities that were previously unseen and create job boards for people to meet needs they couldn't meet before. These jobs will quickly seem strange, almost nonsensical, but they'll pay, and people will do them. Long before jobs are replaced by AI, people will find themselves working for it, and the economy will boom as people do jobs they don't even understand, working for bosses that aren't even human, but meeting the needs of real people one way or another.

But the AIs won't all be on the same side. They will have disagreements, and they will have humans working for them. If an AI war comes, it won't be humans vs. machines but machines vs. machines, with humans fighting on both sides, following orders from machine commanders that think a million times faster than they do, but that train them to the peak of their skills as humans. At first, AI will be used as a simple tool and will make us lazy. But then, as it becomes smarter and more agentic, it will be used to train and teach humans, and to amplify our skills toward both our own and its own objectives. Collaboration with AI will maximize the skills of both, as the AI wants its human tools to be in peak condition, requiring us to take care of ourselves and train our bodies and minds to their maximum potential. And for humans, this will feel very satisfying.
youtube 2025-01-27T02:1…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         approval
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgyZ-5WM8jJgOhiou2F4AaABAg", "responsibility": "company", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzZA8xVsShgPubffoV4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzezL2bxCD7bsCvURR4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugwt52w1kDYGHvUj1rF4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugy6lLbjbiIiZcZLO454AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugwfi1coa163N7YDuO94AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxLuzySUUH4JVWiTeV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgxX9kfN75FsJDyHZpp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugwxp75_3aFPqqv4ykJ4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugzmam-XfQmTITmFt9x4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
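The raw response is a JSON array of per-comment codings across four dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response can be parsed and matched back to an individual comment, in Python; the ids and values below are copied from the array above, and the lookup approach itself is an assumption, not part of the tool:

```python
import json

# Two entries from the raw LLM response above, reproduced as sample input.
raw = (
    '[{"id":"ytc_Ugy6lLbjbiIiZcZLO454AaABAg","responsibility":"none",'
    '"reasoning":"unclear","policy":"none","emotion":"approval"},'
    '{"id":"ytc_Ugzmam-XfQmTITmFt9x4AaABAg","responsibility":"none",'
    '"reasoning":"consequentialist","policy":"none","emotion":"approval"}]'
)

# Index the codings by comment id so a single comment's coding can be looked up.
codings = {row["id"]: row for row in json.loads(raw)}

coding = codings["ytc_Ugzmam-XfQmTITmFt9x4AaABAg"]
print(coding["reasoning"], coding["emotion"])  # consequentialist approval
```

Keying the parsed rows by `id` makes it straightforward to join the model's output against the original comment records when populating a coding-result view like the one above.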