Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
What will people do in their free times? Research, know thy self, brain training and so much more. The problem isnt what we will do with free time, its just who train these intelligences and for what goal. If people train an AI, with love and compassion to live with and help humanity. Then AI wont be a problem but if profit and power is the goal then AI is an issue. Thus universal income will only come, if those developig AI have humanity best intrest at heart. Think of it like a child, how you raise it shapes it. AI is just reflecting us as a collective. Humanity at large honestly dosn't care about truth. This is why we allow corrupted people in power because questioning their authority comes with the price of questioning our world view. Hence AI is causing this huge alarm bell because something is wrong and people need to wake up. This dosn't mean live in fear, quite the oposite live with compassion, because just as AI reflects us and whatever we feeds it with knowledge etc, what ever we put out we get back, because its a universal law. The mirror smiles after you do, as above so is below, as within so without.👆👇
Source: youtube · AI Governance · 2025-09-08T12:5…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       virtue
Policy          liability
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgzlMuSwI9SH6kmf7tN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy2bXZhXA10VkQdDBN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwLtgRXI8FGt4gjhyF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwpfJqRi2IyUX80dS94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwYGxdlGnJEDoU0sb14AaABAg","responsibility":"developer","reasoning":"mixed","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgxeeuUxcaO1IcdijDF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzOaMjRaFZwonMV0cF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgyekKYoSownPnWpubl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxdJMDlhVO_4a8p7oR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugy4GmrqZSV6tmm3aLV4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}
]
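When inspecting a raw response like the one above, it helps to check that every record uses only codes from the coding scheme before trusting the result. The sketch below is a minimal example of that check; the controlled vocabularies are inferred from the values visible on this page, not from an official codebook, so adjust them to match the actual scheme.

```python
import json

# Two records excerpted verbatim from the raw LLM response above.
raw = '''[
  {"id":"ytc_UgzOaMjRaFZwonMV0cF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"mixed"},
  {"id":"ytc_Ugy4GmrqZSV6tmm3aLV4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}
]'''

# Vocabularies inferred from values seen in this page (assumption, not the codebook).
SCHEMA = {
    "responsibility": {"developer", "company", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def validate(records):
    """Return (id, dimension, value) tuples for any out-of-vocabulary code."""
    errors = []
    for rec in records:
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                errors.append((rec.get("id"), dim, rec.get(dim)))
    return errors

records = json.loads(raw)
errors = validate(records)
```

An empty `errors` list means every dimension of every record carries a known code; a non-empty list pinpoints which record and dimension to re-inspect.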