Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI will take largely calculated decisions imagine you come to an AI doctor for surgery something happens and AI will take calculate the risk and decide without emotions, are you a good member of society, how much your disability would cost for society all those things and will make best decision from logical rational standpoint and this can be applied to all sorts of industries. Either you like it or not, some will search for human doctors like we have non traditional medicine atm, some will be ok with that. At some point some radicals will do smth stupid, like turning earth into one big solar energy plant or smth, because with AI you can push any sorts of propoganda, so Im personally not afraid of AI, Im afraid of stupid ass radicals with this tool available to them, we will be gone way sooner than AI is capable to overtake us by force just due to morons that creates viruses that causes certain allergies that goes well with their agenda, environmentalists closing nuclear plants we will turn greenland to a fucking desert, all this stupid fucked up shit is what will kill us, not AI, the only thing really what could have stop this fuckery would be sentient AI that thinks critically.
Source: youtube · Cross-Cultural · 2025-10-05T10:3… · ♥ 1
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgxVDmdOApuL7T3Ag-h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz9Xd00MM7HYl2yG7Z4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwFjeW3me8cP-xKQc54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzU3Vyol042vfIbmZ54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugws6QmDOZgIUBj5PNV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugy1HpPR28jbzvzvjr54AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_Ugzs-9qbTwYQ0zPCcBp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxmKtSY7HEsK0FGH4x4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgyMiEpNi4-gz29sdTB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyjXfnhEJU5hypZdS14AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
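The raw response is a JSON array of per-comment codes, so the coding result shown above can be recovered by parsing the array and indexing by comment id. A minimal sketch (the two embedded records are copied from the output above; the lookup pattern itself is an illustration, not part of the tool):

```python
import json

# Raw model output: a JSON array of code records, one per comment
# (truncated here to two of the ten records shown above).
raw = """[
  {"id":"ytc_UgxVDmdOApuL7T3Ag-h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz9Xd00MM7HYl2yG7Z4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"approval"}
]"""

# Index the records by comment id for direct lookup.
codes = {rec["id"]: rec for rec in json.loads(raw)}

# The coded dimensions for the comment displayed above:
result = codes["ytc_UgxVDmdOApuL7T3Ag-h4AaABAg"]
print(result["responsibility"], result["emotion"])  # ai_itself indifference
```

Indexing by id makes it easy to cross-check any displayed coding result against the exact record the model emitted.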