Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
First and foremost, of course, we'll need to toss onto the garbage heap everyone who stands in the way of progress—those who cut funding, eliminate jobs, or slap the label of "dumbasses" on people. If AI is as amazing as optimists believe, then it turns out it'll be able to replace a whole lot of people in the modern workplace. That means society is freed from routine drudgery, and so are most professional fields. Education can then focus on cultivating intellectual and research abilities, while people gain more free time for simple human joys and hobbies. At the same time, more people could be engaged in any field—but now for more creative tasks. Society will be able to set more complex and ambitious goals, organizations will tackle more sophisticated projects, since they won't be constrained by most effort being wasted on routine work, from which people will be liberated. In other words, productivity rises to a whole new level. So, anyone who hinders this leap forward, who artificially slows down the progress and development of artificial intelligence just to replace humans within existing production frameworks and profit from it—is simply a saboteur and must be removed from any position of decision-making authority. Otherwise, either AI is garbage, or people don't have enough rifles.
youtube AI Governance 2025-06-18T17:2…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         approval
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgxupByq1pJU7KQwkDV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwwXkNltFeimPmJMZ14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyqBsep7so0OTfonf54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwfI1vnL8WvQKAFH-R4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgziafO7OcSIak-2lXd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugy6rwnX5lC-hwWsLEh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwFuc4NmZft8ezsNg94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxlHbfZ_Kf0goYUIWp4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxnLmiNozAbiWg42y94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyiCIBcaMIxrDypsY14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
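Since the model returns one JSON array covering a whole batch of comments, inspecting a single comment's coding means parsing the array and indexing by comment id. A minimal sketch of that lookup, using two records copied verbatim from the raw response above (the variable names `records` and `by_id` are illustrative, not part of the tool):

```python
import json
from collections import Counter

# Two entries copied verbatim from the raw LLM response above;
# the full array contains ten such records.
raw = '''[
  {"id":"ytc_UgxupByq1pJU7KQwkDV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxnLmiNozAbiWg42y94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]'''

records = json.loads(raw)

# Index by comment id so one comment's coded dimensions can be looked up.
by_id = {r["id"]: r for r in records}
print(by_id["ytc_UgxnLmiNozAbiWg42y94AaABAg"]["emotion"])  # approval

# Tally a single dimension across the batch.
print(Counter(r["reasoning"] for r in records))
```

The same `Counter` pattern extends to any of the four dimensions (responsibility, reasoning, policy, emotion) once the full ten-record array is loaded.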