Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This work they do might even be seen as important. My issues are twofold: first, that the people sorting through the garbage, filth, and violence of others are not paid and cared for enough, and foremost that these things might never end up at a police precinct to be followed up on and investigated by the justice systems of the nations the content came from. Americans are often told "see something, say something," but businesses act like "we saw something and then made sure no one else could see it," so they could not be fined over initially having these things on their platform. It is about money, not about holding the criminals accountable. In a way this is work that should be done by justice systems in the first place, which costs taxes, and taxes are what these companies are trying to avoid. So we have allowed a system to grow that distributes and provides harmful content, often enough without any repercussions for those who committed these criminal acts, while the companies keep making money off that content.

AI already has filters to sort out certain questions, which can be circumvented, so in the end it may sort out this content without anyone ever knowing about it; that is something we should really think about. Teaching AI not to give information may backfire. Another aspect: we think so much of ourselves in the West, but we hand the training of these systems to people from poor countries to save money. Those people are socialised differently and hold different values, so how can we be sure in the end that a system created this way conforms with our values?
youtube Cross-Cultural 2024-12-10T13:1…
Coding Result
Dimension        Value
---------------  --------------------------
Responsibility   company
Reasoning        deontological
Policy           liability
Emotion          outrage
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugz9dazGu195Waazmad4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxrJey2I-aahFph_Kp4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxsJjHkBQOlVzPVttV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxfewklGPTYf3Ms8EF4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugz_dag_sdKxRBtNELV4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "resignation"},
  {"id": "ytc_Ugxd_Y6RgdLctodK-JR4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "approval"},
  {"id": "ytc_Ugziy4GsgVEn_9_XG_t4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugxn4vmko1Gy35GgH4F4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxIbdQMWcoAn8cPyzR4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw3x4pvClhRjU8YHO94AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]
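The raw response is a JSON array with one coding object per comment, keyed by comment id. A minimal sketch of how such a response could be parsed and a single comment's coding looked up, using only the standard library (the two entries are copied from the response above; the variable names are illustrative, not part of any pipeline):

```python
import json

# Raw model output: a JSON array of per-comment coding objects
# (two entries reproduced from the response above for illustration).
raw = (
    '[{"id":"ytc_UgxrJey2I-aahFph_Kp4AaABAg","responsibility":"company",'
    '"reasoning":"deontological","policy":"liability","emotion":"outrage"},'
    '{"id":"ytc_Ugz9dazGu195Waazmad4AaABAg","responsibility":"company",'
    '"reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}]'
)

# Index codings by comment id so a coded comment can be looked up directly.
codes = {item["id"]: item for item in json.loads(raw)}

# The coding for the comment shown in the table above.
coding = codes["ytc_UgxrJey2I-aahFph_Kp4AaABAg"]
print(coding["responsibility"])  # company
print(coding["policy"])          # liability
```

This kind of id-keyed lookup is what connects a row in the coding-result table back to its object in the raw batch response.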