Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Hi Jabril, I'm a machine learning (a.k.a AI) researcher with a PhD, and an upcoming book* in the field, I was super excited to see this series (as a long-time fan of CrashCourse from World History with John Green to Sociology with Nicole Sweeney ). Now I'm even more excited to see you tackle this very very hard question in my field, AI. Unsurprisingly, I see a setback in the comments saying that this is becoming a social justice and not a science channel. So let me address some of these concerns.

1) Algorithmic fairness is a highly scientific, highly "technical" topic, involving the state of the art knowledge we have in statistics and computer science today.

2) Historically, the very word of "algorithm" originate from the 9th century book on "algebra", by alkhawarizmi (latinised to algorithmi), which itself is a book on *justice*, written by a lawyer for other lawyers so that they can also solve complex inheritance cases. More than half of that book uses indistinguishably the concept of "judgement" and "computation" (Hissab: حساب). Just to say: the very birth of "algorithms" was about making better judgements. Today, as we are automating these judgements, we ought to come up with better *scientifically robust* notions of fairness, which is what a whole community of more specialised researchers than myself are doing.

3) Some of the reactions I saw in the comments are not uncommon even in discussions with top scientists (when they are from other specialities and are not aware of the impressive research being done in algorithmic fairness).

best of luck, and keep up the very good job you and the rest of the channel's team are doing.

*: (French version already available, English version due for June 2020)
youtube AI Harm Incident 2019-12-13T23:5… ♥ 24
Coding Result
Responsibility: unclear
Reasoning: unclear
Policy: unclear
Emotion: approval
Coded at: 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgxUApOF7e4-_2k2ULF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyfbgMHHe1qGGyZm_94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyPOlPNwM-2GiW8ZTl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyoQAW2FGOd4ZnnPOF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwVaWZFJdXYJbP0sBZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgwB52c-1-cCXrUwsNd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugzokp4W_CQZzomcVpF4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugx_nv2HU3iRus2h73R4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzy8F-JtMHTvHa2sdx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugze-vDcjTR8Yh69VhV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
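A raw response like the one above can be turned into per-comment codings with a short parsing step. The sketch below is a minimal illustration, not the pipeline's actual code: the function name `parse_codings` and the `"unclear"` fallback for missing keys are assumptions, while the four dimension names mirror the Coding Result section.

```python
import json

# Dimensions coded for each comment, matching the Coding Result section.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codings(raw_response: str) -> dict:
    """Map each comment id to its coded dimensions.

    Hypothetical helper: assumes the raw LLM response is a JSON array of
    objects with an "id" key; any missing dimension defaults to "unclear".
    """
    records = json.loads(raw_response)
    return {
        rec["id"]: {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
        for rec in records
    }

# Example with a shortened, made-up comment id:
raw = '[{"id":"ytc_example","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}]'
codings = parse_codings(raw)
print(codings["ytc_example"]["emotion"])  # indifference
```

Defaulting absent keys to `"unclear"` keeps a single malformed record from breaking the whole batch, which matters when the model occasionally omits a field.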