Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I don't think I'll have an issue finding purpose if AGI starts taking my job, and I only have to work 20 hours a week... I can still go home and work on projects that make me happy, like gardening or raising animals. I would also have time to go to the gym, get in great shape again. Have time to cook my own meals. Have time to read to my kids, or play with them outside. Go on walks with them. Spend holidays with my family and friends. Play board games, card games... Think of all the people that play DnD for hours on end. Personally, I'm less concerned with this aspect of AGI, because we had a purpose before we had a society that required us to work for goods and services. We created, we imagined, we explored, we wandered, we loved. I think that AGI can deliver on this promise of "freedom" that we've been so in love with because it has the capacity to remove all of the burdens we need to survive. My biggest concern, in the short term, is the unrest it will cause when the joblessness hits. I don't think this will be permanent, but this is where society is going to fracture into a working class that is resented by the jobless. The elites will be untouchable, and we'll have the poorest people fighting the middle class for prosperity, rather than blaming the rich. *hopefully* it will be enough for a household to go back to being single income and we can gradually integrate how many robotic workers are replacing us... It's up to our government to regulate what percentage of a company's work force is allowed to be automated, and we can simply *optimize* our current working standards. In my opinion, this should start with 24hr operations and holiday coverage outside of normal human business hours. As time goes on, and less of us *want* to work, the government can adjust what percentage of a company's work force is allowed to be automated...
YouTube · AI Governance · 2025-12-08T08:4…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          approval
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgwGKR1TLOzay3kuu9F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwCdlKfwEeh0EYVwB14AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxaK5IZ_Z6l9joHoAJ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxeyUX-_PGGR8HvKG14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzQFQJxzNJy9QuvPZd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxO3TejWrHYuzF5Df14AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyamvGC5tTbhIVXFm94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwC9RZNSdKVLh1Bmc14AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxChU7PzPK9ViaK4lZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyMAhJRYuB9ePct-y14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
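The raw response above is a JSON array with one object per coded comment, keyed by a YouTube comment id (`ytc_…`) and carrying the four coding dimensions (responsibility, reasoning, policy, emotion). As a minimal sketch of how such a batch could be parsed and a single comment's codes looked up (the ids and values below are copied from the response; the variable names are illustrative, not part of the pipeline):

```python
import json

# Two entries excerpted from the raw LLM response shown above.
raw = """[
  {"id": "ytc_UgwGKR1TLOzay3kuu9F4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyamvGC5tTbhIVXFm94AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]"""

# Index the batch by comment id so one comment's codes can be retrieved
# in O(1) instead of scanning the whole array per lookup.
codes = {row["id"]: row for row in json.loads(raw)}

# The comment displayed on this page resolves to its coded dimensions:
print(codes["ytc_UgyamvGC5tTbhIVXFm94AaABAg"]["emotion"])  # approval
```

A validation pass over the parsed rows (checking each value against the coding scheme's allowed labels) would be a natural next step before aggregating results.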