Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I understand this is a big problem, but solutions walk a fine line between protections and censorship. Sometimes censorship and stigmatizing fake science can actually inhibit new ways of thinking and bury new discoveries. I’m not saying these ill-intentioned channels should flood the internet with misinformation freely. There should be strict laws about not disclosing what content was made with AI and there should VERY PUBLIC steps taken by platforms to sublimate these results. Social media algorithms should (by law) publish the methodology of their algorithms and code for it. That way people can better check if platforms are doing their part by comparing real results to what the published algorithms describe (these algorithms don’t have to be a secret because patents legally protect them from being copied). A platform can be allowed to place content deemed harmful or misleading behind a Fact-Check/Verification-Wall where users are provided with a warning that the content may be misleading or harmful, counter-arguments, and an option to view the content anyways. This content could be completely excluded or locked from viewing on children accounts. I think these measures can help minimize the spread of misinformation to the vulnerable while not putting the freedoms of speech, expression, or learning at risk. It allows people to review these ideas if they are wary or for historical/cultural/information/education purposes
Source: youtube · 2023-10-22T08:3…
Coding Result
Dimension       Value
Responsibility  government
Reasoning       contractualist
Policy          regulate
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
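
The four dimensions above are categorical, so it can help to validate codings against a fixed vocabulary before analysis. The sketch below is a minimal Python illustration: the value sets are inferred only from the labels visible in this export (they may not be the project's full codebook), and the class name `CodingResult` is hypothetical.

```python
# Minimal sketch of the coding schema. The value sets below are inferred
# from labels visible in this export; they are assumptions, not the
# project's authoritative codebook.
from dataclasses import dataclass

RESPONSIBILITY = {"government", "company", "user", "ai_itself", "none"}
REASONING = {"contractualist", "deontological", "consequentialist", "virtue", "unclear"}
POLICY = {"regulate", "liability", "none"}
EMOTION = {"mixed", "outrage", "fear", "resignation", "indifference"}

@dataclass(frozen=True)
class CodingResult:
    """One coded comment along the four dimensions shown above."""
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def __post_init__(self):
        # Reject any value outside the observed vocabulary.
        if self.responsibility not in RESPONSIBILITY:
            raise ValueError(f"bad responsibility: {self.responsibility}")
        if self.reasoning not in REASONING:
            raise ValueError(f"bad reasoning: {self.reasoning}")
        if self.policy not in POLICY:
            raise ValueError(f"bad policy: {self.policy}")
        if self.emotion not in EMOTION:
            raise ValueError(f"bad emotion: {self.emotion}")
```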
Raw LLM Response
[ {"id":"ytc_Ugz_pUlSIxqncWKmzbl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgzuqWpeWoVh7gROk4l4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"fear"}, {"id":"ytc_Ugz3wxDgs9fFkiOoGyV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_Ugz5aeUW5xdnva-L3BB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgzMf6iln4bayjF7Q4J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugy6Q6wbwkaNYJ3R6614AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgxjhH_wuUnBGee2PTh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwfEYnsJR9g3Xl5LSp4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"mixed"}, {"id":"ytc_UgzJGgEatpymeGICs9V4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgwLUAsE1toBiZW9NMR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"} ]