Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
All of this is a good step to make AI benefit humans. However, we cannot miss this opportunity. I suggest we go further; all the inherent issues of our society can be eliminated with AI and other technologies. We have a utopia on our hands. Many citizens work unfulfilling jobs, jobs which don't have much depth to them, jobs which underappreciate human capacity, in order to *try* to live a decent life. Other citizens work hard to get to the top but sacrifice their mental health and personhood to do so, forced by the unjust and inhumane pressures of our society. Both never achieve what it means to be human. Both sacrifice what makes them unique. It isn't normal that 15% of American youths have depression; it isn't normal that 50 thousand people decide to take their own life every year. Our current society isn't for humans; it's for material goods and products. AI can let us reach a society that highlights what's special about humans, one where people don't have to worry about having the necessities to live, one in which people can follow what they're good at and what they love. Humans are more than biological and physical needs, and if we live in a world wherein that's the only thing taken into account, we lose what actually makes us human; we become no more than robots. AI is here to let us be more, to make us forget about the simple worries, make us appreciate the complexities of our nature, and utilize it to its fullest extent.
Source: youtube · AI Jobs · 2025-10-09T11:0…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzToEPZMw3OOgPZch54AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgxmDro5VzpTf2e7Vyt4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugz5WxE6PnrtzIUz4gJ4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxDVtgylFFVOhBqOnR4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxiT2rnJ8xQLGDRGTt4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw4KUqsmtqplBde7mx4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxlYkqDB2ZiUNjzCF54AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugyz3Lh4wgR7KwA_OVl4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgyqWGkHP44cXaSy18B4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxiRFkX5pvXMsytfEN4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"}
]
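The raw response above is a JSON array with one object per coded comment, keyed by comment id. A minimal sketch of how such a batch response can be parsed and matched back to an individual comment — the helper name `index_codes` and the validation step are assumptions for illustration, not part of the pipeline shown here; the raw string is truncated to two of the ten entries for brevity:

```python
import json

# Two entries copied verbatim from the raw model output above.
raw = '''[
  {"id": "ytc_UgzToEPZMw3OOgPZch54AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgxmDro5VzpTf2e7Vyt4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]'''

# The four coded dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(payload: str) -> dict[str, dict[str, str]]:
    """Parse the model's JSON array and index the coded dimensions
    by comment id, failing loudly on rows with missing fields."""
    out = {}
    for row in json.loads(payload):
        if not all(key in row for key in ("id", *DIMENSIONS)):
            raise ValueError(f"malformed row: {row}")
        out[row["id"]] = {key: row[key] for key in DIMENSIONS}
    return out

codes = index_codes(raw)
print(codes["ytc_UgxmDro5VzpTf2e7Vyt4AaABAg"]["emotion"])  # approval
```

A lookup like this is what turns the batched raw response into the per-comment Coding Result view shown above.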