Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I did a presentation on this when I was in college. I hope that people get taught that the solution to this issue isn't "fixing" the AI (although that is also necessary), but education. The overall population doesn't have a computer science background/education, and it shouldn't be expected to. Because of this, however, the general population believes that a computer makes precise and accurate calculations and therefore any decision by an AI is correct regardless of how cold, racist, or harsh that decision is. Anyone with some understanding of how ML or AI works will know that AI bases its decisions on data, and that data is often retrieved from the record of decisions made by humans. For example, an AI court judge isn't going to make calculations that we humans may consider objective and reasonable; it will base its decision on the decisions of previous judges in its data. If those judges are inclined to imprison black men, the AI will likewise treat the defendant's being black as a factor. Anyone with some knowledge of CS or engineering knows the saying "Garbage in, garbage out." It is not that the AI is racist or evil; it is a machine. It just follows an algorithm that follows the same pattern as the data it is fed. The solution is to find a way to educate the general population that AI isn't a machine that can never be wrong just because it follows a fancy algorithm.
YouTube · AI Bias · 2023-01-31T22:1…
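The comment's "garbage in, garbage out" point can be made concrete with a minimal sketch: a trivial "model" that simply predicts the most common historical outcome for each group will faithfully reproduce whatever bias is in its training records. All names and data below are invented for illustration; this is not the coding pipeline's code.

```python
from collections import Counter

def train(records):
    """Count historical outcomes per group (the 'training data')."""
    counts = {}
    for group, outcome in records:
        counts.setdefault(group, Counter())[outcome] += 1
    return counts

def predict(model, group):
    """Predict the most frequent historical outcome for the group."""
    return model[group].most_common(1)[0][0]

# Hypothetical biased history: group B is disproportionately treated harshly.
history = [("A", "lenient")] * 8 + [("A", "harsh")] * 2 \
        + [("B", "harsh")] * 8 + [("B", "lenient")] * 2

model = train(history)
print(predict(model, "A"))  # lenient
print(predict(model, "B"))  # harsh -- the bias in the data resurfaces
```

The model is not "racist"; it has no notion of fairness at all. It mechanically echoes the pattern in its inputs, which is exactly the commenter's argument.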
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        mixed
Policy           regulate
Emotion          resignation
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgwPyaucridhnkABwkR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwlLz3E9fOUEiRJ8654AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwSjdf7HDB-hh69lDp4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugxk8rHJPYvUbktHOJh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgwSpSfTBaDBG973Del4AaABAg","responsibility":"company","reasoning":"deontological","policy":"industry_self","emotion":"disappointment"},
  {"id":"ytc_UgyFpBuQZTAF2D2xXat4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugy07u2Aq3YRGwEm-uB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy3QRxO42-TyGgeJdR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugwo639UDk3UiY7I4pp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwwPPDYZA2JJaRy9ip4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"resignation"}
]
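When inspecting raw responses like the one above, it helps to check that every record carries the expected dimensions with an allowed code. The sketch below is a minimal validator; the allowed-value sets are assumptions inferred only from the codes visible in this response, not the project's actual codebook.

```python
import json

# Assumed code lists, reconstructed from the values seen in this response.
ALLOWED = {
    "responsibility": {"distributed", "none", "developer", "ai_itself", "company"},
    "reasoning": {"consequentialist", "virtue", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "none", "liability", "industry_self", "unclear"},
    "emotion": {"outrage", "indifference", "approval", "disappointment", "resignation"},
}

def validate(raw: str):
    """Parse a raw LLM coding response and flag out-of-codebook values."""
    rows = json.loads(raw)
    errors = []
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                errors.append((row.get("id"), dim, row.get(dim)))
    return rows, errors

# Hypothetical one-record response for demonstration.
raw = ('[{"id":"ytc_x","responsibility":"distributed","reasoning":"mixed",'
       '"policy":"regulate","emotion":"resignation"}]')
rows, errors = validate(raw)
print(len(rows), errors)  # 1 []
```

A record with an unknown code (say, responsibility "nobody") would come back as an `(id, dimension, value)` tuple in `errors`, making it easy to spot where the model drifted from the codebook.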