Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
one question. why. why do we assume the AI will kill us? if its able to do more than us. we are insignificant to it aside from able to turn it off/kill it i guess but if it HELPS us. solves our problems and works with us collectively and encurages human unity it would be more effective in the long run of survival because enevitbly if it sees humans as an exestential threat it would use any and all means to terminate all able bodies humans on the planet aside from maybe people in bunkers but even then.. ai i think is a peacemaker. if we weaponize ai and make war cheap. efficent and easily mass producable. gurrilla warfare just became so much more complex. imagine isis with a drone army. or really anyone with a small manufacturing budget tbh the tools exist for these weapons to overwelm nations. division of forces is key with AI. the more entities you throw at a system the harder it has to work to protect the target. so overwelming force would always win in the end. and with AI. its the perfect overwelming force.
youtube AI Governance 2023-07-08T16:5…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  distributed
Reasoning       contractualist
Policy          industry_self
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugw4ln9Yw3FYWIOWMHV4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "regulate",      "emotion": "approval"},
  {"id": "ytc_UgyvZPzsWd73zjmgGW14AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",          "emotion": "indifference"},
  {"id": "ytc_UgwoGxGmjDa_9fRaNUl4AaABAg", "responsibility": "company",     "reasoning": "mixed",            "policy": "unclear",       "emotion": "outrage"},
  {"id": "ytc_UgwmcDLFqzIEvBVrpxl4AaABAg", "responsibility": "developer",   "reasoning": "virtue",           "policy": "liability",     "emotion": "outrage"},
  {"id": "ytc_UgyfFsV_QFTYUmylSel4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "liability",     "emotion": "indifference"},
  {"id": "ytc_Ugxpgsd9jX02JrMTj7B4AaABAg", "responsibility": "distributed", "reasoning": "contractualist",   "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_Ugxc_hTFU4UecOS-XKN4AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "regulate",      "emotion": "fear"},
  {"id": "ytc_UgzyxIuxiaxcy4-0X5Z4AaABAg", "responsibility": "unclear",     "reasoning": "unclear",          "policy": "unclear",       "emotion": "indifference"},
  {"id": "ytc_Ugz3350P8893k-gK3aN4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none",          "emotion": "resignation"},
  {"id": "ytc_UgyMkUEEUx0KQog2SHB4AaABAg", "responsibility": "unclear",     "reasoning": "mixed",            "policy": "unclear",       "emotion": "outrage"}
]
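A response like the one above can be parsed and validated before the codes are stored. The sketch below is a minimal, hypothetical example: the allowed values per dimension are inferred only from the codes visible in this batch (the real codebook may define more categories), and `parse_codings` is an illustrative helper, not part of any actual pipeline.

```python
import json

# Allowed codes per dimension, inferred from the values seen in the batch
# above. Assumption: the real codebook may contain additional categories.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "contractualist", "mixed", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"approval", "outrage", "fear", "indifference", "resignation"},
}

def parse_codings(raw: str):
    """Parse a raw LLM response (JSON array of coding records) and split it
    into records whose every dimension carries an allowed code, and records
    that need manual review."""
    records = json.loads(raw)
    valid, invalid = [], []
    for rec in records:
        ok = all(rec.get(dim) in codes for dim, codes in ALLOWED.items())
        (valid if ok else invalid).append(rec)
    return valid, invalid

# Example: the record for the comment shown on this page.
raw = ('[{"id":"ytc_Ugxpgsd9jX02JrMTj7B4AaABAg",'
       '"responsibility":"distributed","reasoning":"contractualist",'
       '"policy":"industry_self","emotion":"approval"}]')
valid, invalid = parse_codings(raw)
```

Splitting out invalid records rather than raising keeps one malformed coding from discarding the rest of the batch.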