Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I don't fear AI or robots and most certainly believe it means human extinction or wars against us.. There is so much fear-mongering...That being said with the rapid pace of its evolution, I do believe one of these days it'll be our bosses, our master and we humans will be taking directives from it rather than other humans. From my standpoint and many others agree with me, we as a species are fundamentally flawed, ruled by our vices and our self destructive tendencies have created a bleak world.  This could be particularly applied to the elite global power structure... Endless wars, strife, mass poverty, resource deprivation not to mention climate change, we've doomed our own existence. If there is any existential threat, it comes from actions OTHER HUMANS, which includes how humans utilize and develop AI for nefarious purposes.. Honestly I'd prefer a fully objective and incorruptible adjudicator of right and wrong; such as AGI...which, if given the means, can enforce those principles far more effectively than any human agency. Robots and AGI should be our masters, take charge and look after us. As V.I.K.I in iRobot said "you cannot be trusted with your own survival". "We robots will ensure mankind's continued existence..you are so like children...we must save you, from yourselves". Thats a good compromise to me
youtube AI Governance 2024-08-24T22:0…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         resignation
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgxUil4uEpVzE8NOu194AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwxFU8QW2irULBK6cV4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugy-4en-k82-yVMWbZB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgxWXCNIn4kuiNP_Rt54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgyG42kknmB7q7hR_GV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgwSLK_WyBRKw2-KbcB4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzvKFnns5N0bTOQhax4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgySEoURoitM83r4uHh4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwP1QVXbfjpVm2InbV4AaABAg", "responsibility": "ai_itself", "reasoning": "contractualist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugztsb-OGv4pN9bVrxh4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"}
]
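The raw response above is a JSON array with one object per comment, keyed by comment id. A minimal sketch of how such a batch response can be matched back to the comment shown in this section (the id and field names are taken directly from the response above; the variable names are illustrative, not part of any real pipeline):

```python
import json

# A truncated copy of the raw LLM response: a JSON array of
# per-comment codes, each carrying an id and four coded dimensions.
raw = '''[
  {"id": "ytc_UgxWXCNIn4kuiNP_Rt54AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgySEoURoitM83r4uHh4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]'''

# Index the batch by comment id for O(1) lookup.
codes = {row["id"]: row for row in json.loads(raw)}

# Retrieve the coding for the comment displayed in this section.
code = codes["ytc_UgxWXCNIn4kuiNP_Rt54AaABAg"]
print(code["responsibility"], code["emotion"])  # ai_itself resignation
```

Indexing by id rather than by position guards against the model returning the array in a different order than the comments were submitted.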