Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The problem with algorithms is that they appear to be unbiased. But if we train the algorithm with biased data that we generate, then the algorithm will inherit that bias. Because people trust algorithms to be unbiased, they assume the result to be unbiased when it is only reinforcing the bias we trained it with. The examples presented may be okay, but I would want to see research to check for biased results before they get too much sway.
youtube AI Harm Incident 2017-05-31T17:4… ♥ 44
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgjiW8Af5fE9mHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UghR_Paq043hzngCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UggHJuf7OhBringCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UghV0NZk51v9bngCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwMC9M3NzJZ5Bl-Qb94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgzSYTH9ZRvLgUJPgJR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxHOUL7A_PR_eCP9Qt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwFpx4PE6BN2q36PFd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"},
  {"id":"ytc_UgziV-S5AeDv4qRI7rp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugygcm2gcnye5AGzbk54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
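As a rough sketch of how such a raw response can be turned back into a per-comment coding result like the table above, the JSON array can be parsed and indexed by comment id. This is an illustrative assumption, not the original pipeline's code; the variable names and lookup logic are hypothetical.

```python
import json

# Abbreviated stand-in for the raw LLM response shown above
# (one record per comment id, matching the coding-result table).
raw_response = '''[
  {"id": "ytc_UghV0NZk51v9bngCoAEC", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgjiW8Af5fE9mHgCoAEC", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "approval"}
]'''

# Parse the batch response and index the records by comment id.
records = json.loads(raw_response)
by_id = {rec["id"]: rec for rec in records}

# Look up the coded dimensions for the comment displayed above.
coded = by_id["ytc_UghV0NZk51v9bngCoAEC"]
print(coded["responsibility"], coded["policy"])  # prints: developer regulate
```

Indexing by `id` lets each displayed comment retrieve its own row from the batch response, which is why the coding result shown above (developer / consequentialist / regulate / fear) corresponds to exactly one record in the array.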