Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AIs are not perfect, as the way they are programmed and the data used to train them can be very biased. This is evident in the policing AI (which sounds like a dystopian nightmare, wtf) and the hospital AI.

For example, America has a problem with corrupt, racist cops in the system arresting large numbers of people of color and pressing harsher charges against POC than against white people for the same crime (you can find this by googling articles on how white people are less likely to be criminalized for owning weed compared to POC, or when committing the exact same crime). So if you feed this data to an AI, it will look at people of color with the same bias a corrupt racist cop has instead of a good diligent cop.

Meanwhile for the hospital AI, if you google it you can find stories of black patients often being passed over for medical care by doctors because of the harmful stereotype that black people are more tolerant of pain and so require less attention. It's a harmful and racist stereotype, but it's still a problem to this day, so if you feed that data to an AI it will also pick up this behavior. And then, depending on the programmers, it may even look at certain data through a biased lens.

So rather than using AI to predict human behavior and situations that require human judgement, we should keep the use of AI to more practical applications such as managing transport, machinery, or computers, and various simple manual tasks that do not require specific human insight to operate. That is in contrast to things such as predicting who is likely to commit a crime in the future (which, unless someone is psychic or can see the future, is a very dangerous concept to be playing around with, especially with people's well-being on the line) and deciding which patient requires more urgent care, which again requires a human's insight and decision-making skill, as a machine will rely on data that might not be as extensive as a doctor's personal experience, or that may be subject to a biased lens.
youtube AI Bias 2022-12-23T16:2…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       deontological
Policy          regulate
Emotion         outrage
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugx0WmGzxKqxYzsBqWx4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwhX-VkAuL6gS-g4Nd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwALcedloy9KqbEDgF4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwFqcmM3Q_Kh6yvQf94AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyVl59IWfNXYyaBErR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxgxjHX2uqZemNdM1p4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_Ugw_3m9BN4k8K2hqhXV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyTGTXe-MnaItcIRoh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyCTNzTgQQYurPoJ-Z4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyvXW4p32ikkaT5FUx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"}
]
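To inspect which entry in a batch response corresponds to a given comment, the raw JSON array can be parsed and filtered by id. This is a minimal sketch, assuming the response shown above is valid JSON; the helper name `coding_for` is hypothetical, not part of any pipeline shown here, and the literal below is truncated to two entries for brevity.

```python
import json

# Two entries copied from the raw LLM response above, for illustration.
raw = '''[
  {"id":"ytc_UgyVl59IWfNXYyaBErR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyCTNzTgQQYurPoJ-Z4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]'''

def coding_for(comment_id, raw_response):
    """Return the coding dict for one comment id, or None if absent."""
    for entry in json.loads(raw_response):
        if entry.get("id") == comment_id:
            return entry
    return None

coded = coding_for("ytc_UgyVl59IWfNXYyaBErR4AaABAg", raw)
print(coded["responsibility"], coded["reasoning"], coded["policy"], coded["emotion"])
# → company deontological regulate outrage
```

The printed values match the Coding Result table above, which suggests that entry is the one displayed; a real check would also verify that every requested id appears exactly once in the response.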