Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Re privacy: open source LMM's are way more trustworthy because everyone (including ethical professionals) can and do look into it and if there's anything wrong they would talk. Re intellectual castration: If someone wants to do evil, they can go to the next schoolyard with their 2nd amendment assault rifle. or they can look the internet up. Then there is the internet. I have not yet heard that search engines censor and then their is the darknet. No one will consider to limit the internet. in regards to limiting LMMs there will always be a jailbreak method that you will find on youtube, social media, etc. Giving an idiot a car may result in nasty outcome and still no one is asked for their IQ before they can acquire a driver's license. We should NOT limit the good of the many because of the lack of ethics of the few. Use your mind/common sense and be critical about whatever information you receive from an LLM as you hopefully are when a prince from a far away country writes you that he needs you to exfiltrate 12 million out of their country! Concluding: Censoring those systems will have high costs on quality of the output and 'the bad guys' will not be kept from being bad by the 'good programmers' IMHO an open, unlimited/uncensored LLM will be the best option for everyone.
youtube AI Governance 2023-05-17T02:4…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        virtue
Policy           industry_self
Emotion          approval
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugz6WolHxEVNxu54kRN4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyPOQp4MbeKnsF-rZB4AaABAg", "responsibility": "government", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxIa-6fPSgCPtQlFJN4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyHXmgBlhuW2zljuxR4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_Ugz-7O0uvq1czItcfLN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwKrQWE-NtfenYJ7Ad4AaABAg", "responsibility": "government", "reasoning": "mixed", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzLQuheq92cEdeOjuJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugy7mfVITxJO3NODdxZ4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxqkuEhz9D3cO6aRWB4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugxib9sHmwUYmRkkDVp4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]
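The coding result shown above is obtained by parsing this JSON array and matching on the comment id. A minimal sketch of that lookup, assuming the response is a valid JSON array (the helper name `coding_for` is hypothetical, and the raw string below is trimmed to the one row for this comment):

```python
import json

# Pasted from the raw LLM response above, trimmed to the matching row.
raw_response = '''[
  {"id": "ytc_UgyHXmgBlhuW2zljuxR4AaABAg", "responsibility": "none",
   "reasoning": "virtue", "policy": "industry_self", "emotion": "approval"}
]'''

def coding_for(raw, comment_id):
    """Return the coding dict for one comment id, or None if absent/unparseable."""
    try:
        rows = json.loads(raw)
    except json.JSONDecodeError:
        return None  # model output was not valid JSON
    return next((r for r in rows if r.get("id") == comment_id), None)

coding = coding_for(raw_response, "ytc_UgyHXmgBlhuW2zljuxR4AaABAg")
print(coding["responsibility"], coding["reasoning"])  # none virtue
```

Guarding the `json.loads` call matters in practice, since a model can emit malformed JSON and the inspection view should surface that rather than crash.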