Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Everything he calmly described is reason to collectively agree to stop development, however, he doesn't believe it is feasible to suggest we stop?! That makes no since. If governments made it illegal it would at least slow down development. He already stated the very real risk that artificial intelligence can take over humanity, however, he doesn't think it's feasible to suggest everyone stop, because he doesn't believe they will. That is some defeatist bullshit if I ever heard any. And the idea that he thinks everyone should just keep moving forward, while also investing in safe guards, is ass backward. Their need be a hault to regulate development so a standard of safe and responsible development of AI is created. His suggestion basically leaves the fate of the world in these money motivated corporations hands. When has any corporation ever stopped to consider the collective interest of the people? We are all fucked because even if the AI doesn't take over, the Rich and fucking power hungry will use the AI to do everything we might fear the AI will do to us. So it's essentially a lose lose. Fake news and propaganda are not new phenomena, AI is just going to make the world even more corrupt, and the Rich even more powerful.
youtube AI Governance 2023-05-10T17:4…
Coding Result
Dimension       Value
Responsibility  government
Reasoning       consequentialist
Policy          ban
Emotion         outrage
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugz0fyn2OF5pJL-WwYF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugxl14oNZ_-D8u6ZB-h4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_Ugx2WyLZ1oCceu188cl4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyNQruvlZszATIgm0l4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyoQEvML9f9czErTix4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugz9rdXWWq0Hmkxn4j14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxztNG-VrU0qxyiyZR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwLYqz1H3oYunwzznN4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyLegCW7Gm22EQStzJ4AaABAg","responsibility":"unclear","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugz2s3j_XW6XHUY0bTp4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"}
]
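The raw response above is a JSON array with one object per comment id, carrying the four coding dimensions. A minimal sketch of how such an array can be parsed and looked up by id (the id and values below are taken from the response above; the variable names are illustrative, not part of any tool):

```python
import json

# A one-element excerpt of the raw LLM response shown above.
raw = '''[
  {"id": "ytc_UgwLYqz1H3oYunwzznN4AaABAg",
   "responsibility": "government",
   "reasoning": "consequentialist",
   "policy": "ban",
   "emotion": "outrage"}
]'''

# Index the coded rows by comment id for direct lookup.
codes = {row["id"]: row for row in json.loads(raw)}

row = codes["ytc_UgwLYqz1H3oYunwzznN4AaABAg"]
print(row["policy"], row["emotion"])  # -> ban outrage
```

Indexing by id is what lets a coded comment (like the one displayed above) be traced back to the exact object in the raw model output.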