Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It enrages me, how can any “benefit for healthcare and education” be worth developing something, having a side threat of making everyone extinct. Universal truth is : every coin has two sides. And ones side of AI is so much bigger a threat that any possible “good” which is also questionable , which it was developed for. What do these people want to achieve? And just saying that I wanted something good for people, so I don’t feel responsible for the threat I have created sounds absolutely irresponsible and naive. The road to hell is paved with good intentions indeed. And it has never been more true. Everyone knows it. And a bunch is smart people and rich people bring the whole civilization to a threat. It’s not only about the mentioned risks. Mental debt which AI is creating, potential damage to an ability to be thinking, memorizing, using brains is also there. How harmful can it be for people, even if we don’t seize existing. Someone have created for us huge problems without asking if we wanted to solve them. I am speechless.
youtube AI Governance 2025-07-20T13:1…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           ban
Emotion          outrage
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugz4X6jTmffs2grqg254AaABAg", "responsibility": "unclear",   "reasoning": "unclear",          "policy": "unclear",       "emotion": "mixed"},
  {"id": "ytc_UgzQdlaBz1T2De6MGRB4AaABAg", "responsibility": "unclear",   "reasoning": "unclear",          "policy": "unclear",       "emotion": "fear"},
  {"id": "ytc_Ugwp-AxHn3LS1f4niK94AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "ban",           "emotion": "outrage"},
  {"id": "ytc_UgzByd18WcyUMvIUH194AaABAg", "responsibility": "company",   "reasoning": "virtue",           "policy": "unclear",       "emotion": "mixed"},
  {"id": "ytc_UgzNuuWlsbNCoc4T-up4AaABAg", "responsibility": "company",   "reasoning": "consequentialist", "policy": "regulate",      "emotion": "fear"},
  {"id": "ytc_UgxWrtVzILfEXbnM-BB4AaABAg", "responsibility": "user",      "reasoning": "deontological",    "policy": "none",          "emotion": "approval"},
  {"id": "ytc_Ugwx0lwTzTwS1V_eBHh4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear",          "policy": "unclear",       "emotion": "indifference"},
  {"id": "ytc_UgxMTIVX_OVhfDTFRDx4AaABAg", "responsibility": "developer", "reasoning": "virtue",           "policy": "unclear",       "emotion": "outrage"},
  {"id": "ytc_UgzqMILiNwmbGZ5XOdR4AaABAg", "responsibility": "unclear",   "reasoning": "deontological",    "policy": "industry_self", "emotion": "mixed"},
  {"id": "ytc_UgxQtbqJeOcDWq9UPI14AaABAg", "responsibility": "ai_itself", "reasoning": "deontological",    "policy": "unclear",       "emotion": "mixed"}
]