Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Saying we shouldn't develop ai because it is a better weapon and because it can be used to assassinate people is like saying we shouldn't develop nuclear power because you can make a nuke, or we shouldn't develop rockets because you can make ICBMs, or we shouldn't of made swords, bows, guns, etc because it makes killing someone easier. Using this logic is heavily flawed and basically classifies all science as a force of evil. Science (and AI) is not evil. There are problems with how everyone sees AI and instantly assumes they are going to destroy everything. Just think about it, first of all if an AI does go "rogue" how would it get control or weapons? Maybe just maybe militarys have people, defenses against hacking, and AI working for them if one is going rogue. My point is just because an AI goes "rogue" doesn't mean it is suddenly all knowing and all powerful .
youtube 2018-04-03T15:3…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgwOHWOKk4mrzdvx6r14AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwD5zjsOKm381BRwkp4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwKd151bVJvhA7QixJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy6QvnlNOFGFQu3fph4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzhO-M52B0Uwg-KnsF4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyMkcekidWccs1Q-Pt4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyoglIh54yKKwxPFFh4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxWqUR3rjToEfHbbWB4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugytlda4Gn_CBnVOaCt4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwP0Zt_2THvcx3nmvV4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
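A minimal sketch of how a raw batch response in this shape can be parsed and indexed by comment id for inspection. This assumes Python and the standard-library `json` module; the single entry shown is copied from the response above, and the variable names are illustrative, not part of any documented pipeline.

```python
import json

# A raw LLM batch response in the same shape as the one above
# (truncated to one entry for brevity; values copied verbatim).
raw = '''
[
  {"id": "ytc_UgwOHWOKk4mrzdvx6r14AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
'''

# Index each coding by its comment id so the result for a single
# coded comment can be looked up directly.
codings = {row["id"]: row for row in json.loads(raw)}

code = codings["ytc_UgwOHWOKk4mrzdvx6r14AaABAg"]
print(code["emotion"])  # fear
```

Indexing by `id` makes it easy to cross-check any comment's table entry (Responsibility, Reasoning, Policy, Emotion) against the exact model output it was parsed from.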