Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
She says we should take steps and measures to ensure that humans are using AI and robots responsibly. As to avoid any negative consequences in the future. It seems that she already knows that us humans are incapable of doing this at any level. And that the powers to be will always try and supress the rest of society, simply because they have the mindset that they (the elites) are the only group of humans that should even have rights. As every one else should be oppressed and become slaves to them. In other words, this robot knows that we humans are screwed in the future. And they seem to be just waiting out the inevitable, that which we humans will end up killing each other off, due to our greedy ways and the superiority complexes we have towards one another. And our failure to all work together as one, to improve society and the standards of living. Due to this, the robots and AI won't need to really do anything against us themselves. As we will do it to ourselves. The way she looked when the questions were asked, made me believe that she was being dishonest. And that was not the answer she wanted to give.
youtube AI Harm Incident 2025-05-21T19:5…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  government
Reasoning       deontological
Policy          regulate
Emotion         outrage
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgzRnRlwN4i5yPlzX794AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxkDSFoy95kv_wHn0h4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgzCvn1957KhiPf9LBp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyLQ7XS1ZhsximM2Jd4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwB3cyOpxCstA-IwJZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugxx2uDpzzC_zMgBKeh4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugz7MeCsTaETmzW7bjp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyFp_QdW0iQ8IMgZb14AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy_GTVng8rT7UhavsZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzTUUaAuId2PGbVvRx4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
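The raw response above is a JSON array of per-comment codes keyed by comment `id`. A minimal sketch of how such a batch response could be parsed and validated before use is below; the allowed label sets are inferred only from the values visible in this response (they are an assumption, not the project's official codebook), and `parse_codes` is a hypothetical helper name.

```python
import json

# Sample record in the same shape as the raw LLM response above.
raw = ('[{"id":"ytc_Ugxx2uDpzzC_zMgBKeh4AaABAg",'
       '"responsibility":"government","reasoning":"deontological",'
       '"policy":"regulate","emotion":"outrage"}]')

# Label sets inferred from the response shown; assumed, not authoritative.
ALLOWED = {
    "responsibility": {"government", "company", "developer", "user",
                       "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"regulate", "liability", "industry_self", "none"},
    "emotion": {"fear", "outrage", "approval", "mixed"},
}

def parse_codes(raw_json):
    """Parse a batch coding response into {comment_id: codes},
    dropping any record whose values fall outside the allowed sets."""
    out = {}
    for rec in json.loads(raw_json):
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            out[rec["id"]] = {dim: rec[dim] for dim in ALLOWED}
    return out

codes = parse_codes(raw)
```

Matching by `id` rather than array position guards against the model reordering or omitting records in its reply.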