Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think economic collapse or at least a phase shift will happen long before the robots launch a hostile takeover. Like Hinton said, all the mundane intellectual work will be done - this is not a problem if you think in the short-term, but IS a problem if you realise that the massive cohorts of young people NEED mundane work to grow their skills into competency. Remembering that the global population is growing, and our current mode of global development is capitalism, there are a lot of young and future humans that will be denied an opportunity to elevate their economic position. Jobs in basic graphic design, visual art, music, and coding are already being consumed by AI used by people who do NOT possess those skills in lieu of collaborating with, or paying a person with those skills. Perhaps the responsible way to develop alongside AI with a minimal detriment to our present society is to first take inventory of our various jobs, their requirements, and whether there are "up and comers" waiting. The key questions would be: 1) Are the people over-worked? 2) Are there people with the correct capacity that need to be hired? 3) Are there people wanting to work in this sector who would benefit from this job? If the answer to Question 1 is YES, and the other two are answered NO, then that seems to be a good starting point to "contributing to societal good". If the answer to all three is YES, then government needs to step in to get people into the right positions.
youtube AI Governance 2025-06-25T07:0…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         resignation
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgyXP-0lXv4Q79kGlut4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwZU_X54wEkVbSycVN4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxbmRGBNntkibQKokV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwFCCpZdarjZ6-RpIR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugyo-K2iTKFIkJYFd1J4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugzvpvbr8Ng-HIoFBw54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugw1yOheP4_jzMeiul54AaABAg","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugz1T3ROLjiq7Y3Ym6B4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugzm-FlhzIdsLwq6X_54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzbpRT3ePZiTCQ6xOt4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"fear"}
]
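Since the raw model output is a JSON array of per-comment codes, the coded dimensions for any comment can be recovered by parsing it and indexing on the `id` field. The sketch below is illustrative only (the function name `codes_by_id` and the one-entry excerpt are assumptions, not part of the tool); it shows how the "Coding Result" table above follows from the raw response for this comment's id.

```python
import json

# Excerpt of a raw LLM response: a JSON array of per-comment codes.
# Only the entry for the comment shown above is reproduced here.
raw = '''[
  {"id": "ytc_Ugzm-FlhzIdsLwq6X_54AaABAg",
   "responsibility": "none", "reasoning": "consequentialist",
   "policy": "none", "emotion": "resignation"}
]'''

def codes_by_id(raw_response: str) -> dict:
    """Parse the raw model output and index the coded dimensions by comment id.

    Hypothetical helper: assumes the response is a well-formed JSON array
    in which every object carries an "id" key.
    """
    return {row["id"]: row for row in json.loads(raw_response)}

codes = codes_by_id(raw)
print(codes["ytc_Ugzm-FlhzIdsLwq6X_54AaABAg"]["emotion"])  # resignation
```

In practice the model may wrap the array in prose or code fences, so a robust pipeline would strip that wrapping (or retry) before calling `json.loads`.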