Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Based on this discussion, it seems the best thing for the future is to work on people’s mental health. We need to fix people’s mental health, as well as stop making younger people go bad. Is this possible? There are so many crazy, stupid people right now and it’s not getting better. In the US (and many other countries), it starts at the top. Our president is one of the most damaged, dangerous, stupid people, with a long list of mental illnesses and disorders and deviancies. If we can fix him, then maybe we can start to fix everyone. Otherwise, some unhappy, unhealthy, mentally warped person will be able to destroy millions of people, if not the entire planet. How do we do this? If not, super intelligent AI may not be the biggest threat. It may not even be a major mental problem, but just someone who is ambitious and just wants to be noticed, or be in control, or be relevant. Maybe they were rejected by a love interest when they were 14 and never got over it. Maybe someone was spoiled by their parents and was never told NO. There are so many ways to create “bad” people that it seems hopeless. Maybe super intelligent AI can fix all human mental issues, or help people to get over their mental problems and then train everyone to be better parents, right before it kills all of us, lol. Maybe we need to create a super intelligent AI that loves humans and can’t survive without us! Love is the answer! But I’m worried that a lot of tech people don’t seem to have much experience with, or faith in, love. Plus, all the PayPal mafia types working on AI are basically South African Nazis, so there’s that.
youtube AI Governance 2026-04-23T07:5…
Coding Result
Dimension       Value
Responsibility  government
Reasoning       consequentialist
Policy          regulate
Emotion         outrage
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgyPz8SJvONxZxlLeZp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgynE_yYAE3wsZUzpb14AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugxay6wRzYU1t1DtlLx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwFEwtv3tKoinVJA7F4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugzxof1klY_cEjB5eVF4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzOAYIHdoujuARbfnR4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugx1A1-qVau0DGZXoU14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugy5CMmxdFSptefnOdF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_Ugxqn1Lz70w8-mr8WZR4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugy8k9SYRjjo2vuxgXh4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]
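A raw response like the one above can be parsed and sanity-checked before the per-comment labels are stored. The sketch below is one way to do that in Python; the allowed label sets per dimension are an assumption inferred from the values visible in this batch, not the project's actual codebook, and the `raw` string is truncated to two entries for brevity.

```python
import json

# Raw model output as shown above (truncated to two entries for brevity).
raw = '''[
  {"id":"ytc_UgyPz8SJvONxZxlLeZp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugxqn1Lz70w8-mr8WZR4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]'''

# Allowed labels per dimension, inferred from this batch alone
# (an assumption; the real codebook may define other categories).
ALLOWED = {
    "responsibility": {"ai_itself", "user", "company", "government",
                       "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "mixed"},
}

def parse_codings(text):
    """Parse a raw LLM response; keep only rows whose labels are all valid."""
    rows = json.loads(text)
    valid = []
    for row in rows:
        if "id" in row and all(
            row.get(dim) in labels for dim, labels in ALLOWED.items()
        ):
            valid.append(row)
    return valid

codings = parse_codings(raw)
by_id = {row["id"]: row for row in codings}
```

Indexing by comment `id` lets the stored coding for a given comment (like the one shown in the result table above) be looked up directly, and invalid or off-codebook rows are dropped rather than silently ingested.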