Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Consider this: if most companies utilize AI, it could potentially eliminate around 60% of the workforce. That 60% will not be able to find a job, so what happens to the companies' profits when there are 60% fewer customers available? Not to mention the social problems this could create. Make the math work for me; I don't see it. If you receive a dividend, as he says, then you become a subclass, as you will receive a minuscule share, since it has to be divided among the 60% of people who are now unemployed; again, you arrive at the same conclusion with the same problems. I believe this could transform the world into a "socialist" global society, where the ruling party is the company that manages the language models and the rest receive a basic income; however, something will be required of you to obtain that basic income. Make this make sense? I could be 100% wrong, but it certainly appears that way, and I work in the tech ecosystem.
youtube AI Moral Status 2025-07-25T19:3… ♥ 2
Coding Result
Dimension        Value
Responsibility   company
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgxbzFzUcLviNTFmK3V4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx2vLP3y4OOoOaNPah4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"skepticism"},
  {"id":"ytc_UgxhoR5UHK6THIMTaSF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzZhKhJ4iVVt2hAfQN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxgQLOVOtt98fyt7lR4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwWxPJKhJVI0MzAic94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzDbssCuyZV4i4D89N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzD6hIZA8zXmdwa9gN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxWmftk2HugRn7na3t4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxGm-uZtJ_u4Pe3TGp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
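When inspecting raw responses like the one above, it helps to validate each record against the coding scheme before trusting the parsed values. Below is a minimal sketch in Python; the allowed category sets are an assumption inferred only from the values visible in this response (the full codebook may define more), and `validate_coding` plus the abbreviated second record (`ytc_x`, with a deliberately invalid value) are illustrative, not part of the tool.

```python
import json

# Allowed values per dimension — ASSUMED from the values observed in the
# raw response above; the actual codebook may permit additional categories.
ALLOWED = {
    "responsibility": {"none", "company", "government", "developer", "ai_itself", "distributed"},
    "reasoning": {"unclear", "consequentialist", "deontological", "contractualist", "virtue"},
    "policy": {"none", "unclear", "liability", "regulate", "ban"},
    "emotion": {"indifference", "skepticism", "outrage", "fear", "resignation"},
}

def validate_coding(raw: str) -> list[dict]:
    """Parse a raw LLM response (JSON array) and flag out-of-scheme values."""
    problems = []
    for rec in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                problems.append({"id": rec.get("id"), "dimension": dim, "value": rec.get(dim)})
    return problems

# One record excerpted from the response above, plus a hypothetical bad record:
raw = (
    '[{"id":"ytc_UgwWxPJKhJVI0MzAic94AaABAg","responsibility":"company",'
    '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"},'
    '{"id":"ytc_x","responsibility":"alien","reasoning":"unclear",'
    '"policy":"none","emotion":"fear"}]'
)
print(validate_coding(raw))
```

Running this prints one problem entry for the hypothetical record, pointing at its `responsibility` value; the genuine record passes cleanly.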