Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
All one needs to do is look at how technology in general has enabled a huge uptick in foolish resource exploitation and planetary management.. To think better tech and the modeling of the so very flawed human mind will result in anything but the destruction of optimal human agency is beyond naive. AI is the most powerful force multiplier that will do some short term benefits, at the price of long term greater gaps of inequality and wealth accumulation of the few...AI is meant to enslave and or destroy us all...They talk of replacing human labor...Why would this ever be a good thing...Outsourcing our brains and labor to machines is the pathway to human extinction...We already are too heavily reliant on the machines...To the extent that any major disruption in the machine world that we currently exist in will cause suffering on a scale never before seen in human history...This path us the epitome if human folly.. We must recognize the danger we are already in and forego walking this path further..The benefits are for machines in the end, not humans...Which is why they state the obvious goal of becoming machines. This is madness writ large..We are accelerating our own demise...😮😮😮😢😢😢❤❤❤❤❤❤❤
Source: youtube · AI Governance · 2026-01-31T10:5… · ♥ 1
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       consequentialist
Policy          regulate
Emotion         outrage
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgziQvlqc2yTA8IAROR4AaABAg", "responsibility": "ai_itself",   "reasoning": "mixed",            "policy": "unclear",   "emotion": "indifference"},
  {"id": "ytc_UgwK65UEaebBtX6HK1N4AaABAg", "responsibility": "none",        "reasoning": "mixed",            "policy": "unclear",   "emotion": "indifference"},
  {"id": "ytc_UgxYktEcmkAKWoNgkJN4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "unclear",   "emotion": "fear"},
  {"id": "ytc_UgwsYKxS4n7TO1ExGxJ4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_Ugzslie877F3k5anq6x4AaABAg", "responsibility": "developer",   "reasoning": "mixed",            "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_Ugyl3EuC1ndYutyvC8B4AaABAg", "responsibility": "unclear",     "reasoning": "consequentialist", "policy": "unclear",   "emotion": "fear"},
  {"id": "ytc_Ugylc20EAl8lJ_u2MPx4AaABAg", "responsibility": "ai_itself",   "reasoning": "deontological",    "policy": "ban",       "emotion": "outrage"},
  {"id": "ytc_UgwoYLR8i58a34cE8yR4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "ban",       "emotion": "outrage"},
  {"id": "ytc_UgwA8nhz79b_AV9aiHh4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxcqLMdPfIAIh0KG2l4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "none",      "emotion": "resignation"}
]
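Because the model codes comments in batches, a single comment's coding has to be recovered from the raw array by its id. A minimal sketch of that lookup, assuming the response is valid JSON as shown above (here truncated to the one entry that matches this comment's coding result):

```python
import json

# Raw LLM response, truncated to the entry whose id matches this comment.
raw = (
    '[{"id":"ytc_UgwsYKxS4n7TO1ExGxJ4AaABAg",'
    '"responsibility":"distributed","reasoning":"consequentialist",'
    '"policy":"regulate","emotion":"outrage"}]'
)

codings = json.loads(raw)

# Index the batch by comment id so any single coding can be looked up directly.
by_id = {entry["id"]: entry for entry in codings}

coding = by_id["ytc_UgwsYKxS4n7TO1ExGxJ4AaABAg"]
print(coding["emotion"])  # outrage
```

In practice the raw string may fail to parse (the model can emit malformed JSON), so a real pipeline would wrap `json.loads` in error handling before indexing.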