Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I have sort of a counterargument to what he is saying specifically about AI molding public opinion subvertly through social media. Let's say it could 100% optimize language to manipulate people. Wouldn't people be far less susceptible to manipulation once they realize an AI is active on platforms influencing everyone? And wouldn't AI be forced to operate within the confines of languages, which are human-made and inherently imperfect? Aren't there certain limits when it comes to how effective an optimized combination of words can be in influencing people? I get there are charismatic dictators who can induce people to do terrible things, but if there's not a human face behind the words, and actual, forceful control of societal institutions, how effective would said AI actually be? Not saying we should take AI for granted, but if it's forced to operate within the parameters, systems, and conceptualizations set up by humans, I think there might be inherent limitations to how powerful it could actually become.
Source: YouTube · AI Governance · 2023-04-18T21:3… · ♥ 1
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgwXl6sKkIDjWYEuYlB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzu9PdxNvFli6B-n5J4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyT4JT0rtYRvGGhGbB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyycrD11XiLG0nYgDR4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxmjyTsnNif23na9GB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzNQootZ128WvylJ-t4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxwoqCA4nUO6zaw_D54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwrxTmiGTj8wE-gNpN4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugyn9R98fnhmdiA5Z394AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugz1ADfmLXRWWRkjStx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
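A raw response like the one above is only usable downstream if it parses as JSON and every record stays within the coding scheme's vocabulary. The sketch below shows one way to validate such a response before accepting it. The allowed label sets are inferred from the output shown here, not from the project's actual codebook, and `validate_codings` is a hypothetical helper name.

```python
import json

# Allowed values per dimension, inferred from the sample output above.
# The real codebook may define more labels; treat these sets as assumptions.
ALLOWED = {
    "responsibility": {"none", "distributed", "government", "company",
                       "ai_itself", "developer", "user"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "regulate"},
    "emotion": {"indifference", "outrage", "approval", "fear",
                "mixed", "resignation"},
}

def validate_codings(raw: str) -> list[str]:
    """Return a list of error messages; an empty list means the response is clean."""
    try:
        records = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"response is not valid JSON: {exc}"]
    errors = []
    for rec in records:
        rid = rec.get("id", "<missing id>")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                errors.append(f"{rid}: unexpected {dim}={value!r}")
    return errors
```

Rejecting the whole response on any error keeps the downstream tables free of out-of-vocabulary labels; a retry against the model is usually cheaper than cleaning corrupted codings later.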