Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@medhursttlol this is the dumbest argument I've heard so far. Literally everything you just said is AT LEAST, as applicable to human agents as it is to AI agents. And that's being very generous to your position. But we can already prove that there is far less possible motive for AI to manipulate the markets like humans already do, because they DO have the motive and existential incentives to do that. Tell me this. What incentives could AI have for harming human civilization BECAUSE it's an artificial intelligence, regardless of its developer? If you can't answer that question, your entire argument is a melted puddle of slop. Because if the potential problems with AI are because of profiteering developers, and not necessarily because it's an artificial form of intelligence, then the problem is not AI, it's profiteers. And at that point the solution is to regulate profit-incentives, not the technology that empowers it's users. It's literally like saying we should have banned the steam-engine because of its potential to cary a bomb farther and faster. Or like saying we should have banned calculators because of the possibility that someone could use a calculator to formulate a dangerous pathogen. Or that computers should be banned because they can be used for hacking. This line of reasoning is completely asinine.
youtube AI Governance 2025-12-24T23:4…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          none
Emotion         outrage
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytr_UgzFdwW5frjuYVxv2214AaABAg.AQ0CTnnK2-5AQ0TaupbVSf", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_UgzFdwW5frjuYVxv2214AaABAg.AQ0CTnnK2-5AQ25WwS9vD-", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_UgzFdwW5frjuYVxv2214AaABAg.AQ0CTnnK2-5AQ2N36kRjwd", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_UgyLpFL5Q_QDzkcHJyp4AaABAg.AQ0Btg4EEmWAQ1OTf7Vyv1", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytr_UgxAsRX1zhyUyEEDw3R4AaABAg.AQ07W4gVk7IAQ0Ixn9dERp", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgyT4BscfbZpin4pNY94AaABAg.AQ-ulkRCqYfAQ1J9Mr21L8", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytr_UgyT4BscfbZpin4pNY94AaABAg.AQ-ulkRCqYfAQ1onWknfp9", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgyT4BscfbZpin4pNY94AaABAg.AQ-ulkRCqYfAQ23n0xXJb1", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytr_UgyT4BscfbZpin4pNY94AaABAg.AQ-ulkRCqYfAR83hpZCxzI", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytr_UgyT4BscfbZpin4pNY94AaABAg.AQ-ulkRCqYfAR9U6yh_RMQ", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "indifference"}
]
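The raw response is a JSON array of per-comment codes, one object per comment id with the four coding dimensions shown in the table above. A minimal sketch of how such a response could be parsed and tallied per dimension (the field names come from the response itself; the sample `raw` string and variable names here are illustrative, not the tool's actual pipeline):

```python
import json
from collections import Counter

# Illustrative two-record sample in the same shape as the raw response above.
raw = (
    '[{"id": "ytr_example1", "responsibility": "developer", '
    '"reasoning": "deontological", "policy": "none", "emotion": "outrage"}, '
    '{"id": "ytr_example2", "responsibility": "none", '
    '"reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}]'
)

records = json.loads(raw)

# Count how often each label appears in each coding dimension.
dimensions = ("responsibility", "reasoning", "policy", "emotion")
tallies = {dim: Counter(r[dim] for r in records) for dim in dimensions}

print(tallies["responsibility"])  # counts of each responsibility label
```

In practice the model output would also need validation (e.g. checking that every label belongs to the codebook) before the tallies are trusted.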