Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Most things — including face recognition, but also a lot of medicine (artificial knees and hips come to mind) — are taking “white male” as standard and women or non-white as exceptions or afterthoughts. Face recognition software is one of the really bad ones — and even with male white average perpetrators some human should double-check the result, as it can be wrong[1]. However, any tool is flawed to some degree, and just having flaws is no reason not to use the tool — but you must *know the flaws* and act to *mitigate the flaws* where they have real impacts. Like, say, face recognition of not-white people and double-checking by humans.

[1] You think you are training a system to do what you want it to do. In reality you train it to reduce its maximum errors, where error can be anything. If you go for face recognition and only have a few bearded people in it, being less accurate with beards is ok if your accuracy with the rest improves. On top of that, the chance of getting the right one of the few bearded people by (almost) random guessing is high compared to the same with the much more common non-bearded people. Typical face recognition training data contain few black people …

Another example was an early experiment trying to find half-hidden tanks in aerial photos. The system worked quite well but failed when tested with real-world data: all the photos with hidden tanks it trained on were overcast, the rest were not. They had taught the system to distinguish overcast from sunny.

Another example: a (simulated) robot with 6 “hip, short upper leg, knee, looong lower leg” legs, 3 per side, was trained to move but also minimise the foot-to-ground time. It came up with ZERO time of touching the ground — it had learned to flip itself upside down and walk on the knees, the feet never touching the ground. Not *quite* what the idea was.

Train a system on some measurements and it will relentlessly “cheat” to get best results, no matter what *you* wanted it to do.
YouTube AI Harm Incident 2021-07-11T13:3…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           regulate
Emotion          outrage
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgxRSKUbWN3RNcDHAax4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxWTYD77dGE6tYn2Z94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxcXJXFHTotoQenxPV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxlxIQy_VlPRVASJEZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxlRQXA8OGMQyIiI4p4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwEAD2XXsjqBXDOAat4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzPdn6kvUIt7EIwM9Z4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyfYWZoyxQ39csgOQt4AaABAg","responsibility":"government","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwvG14TfCbowC5CJfV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgwHFE1ClcY9Y2sbPhZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}
]
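The raw response is a JSON array with one object per comment id, each coded on the four dimensions shown above. A minimal sketch of how such a response could be parsed and validated, assuming a hypothetical codebook of allowed values (`parse_codes` and `ALLOWED` are illustrative names, not part of the coding pipeline shown here):

```python
import json

# Hypothetical codebook: the value sets are inferred from the response
# above and may not be the full set used by the actual coding tool.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "user", "developer", "government", "company"},
    "reasoning": {"unclear", "consequentialist", "virtue", "deontological"},
    "policy": {"unclear", "none", "regulate", "liability"},
    "emotion": {"indifference", "resignation", "outrage", "fear"},
}

def parse_codes(raw: str) -> dict:
    """Map comment id -> coded dimensions, rejecting values outside the codebook."""
    coded = {}
    for row in json.loads(raw):
        cid = row.pop("id")
        for dim, val in row.items():
            if val not in ALLOWED.get(dim, set()):
                raise ValueError(f"{cid}: unexpected {dim}={val!r}")
        coded[cid] = row
    return coded

raw = ('[{"id":"ytc_UgzPdn6kvUIt7EIwM9Z4AaABAg",'
       '"responsibility":"developer","reasoning":"deontological",'
       '"policy":"regulate","emotion":"outrage"}]')
codes = parse_codes(raw)
print(codes["ytc_UgzPdn6kvUIt7EIwM9Z4AaABAg"]["policy"])  # regulate
```

Validating against an explicit codebook catches the common failure mode of an LLM coder drifting outside the allowed labels, rather than silently storing an unknown value.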