Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Are these algorithms machine learning algorithms, like neural networks and whatnot? If so, then I'm not sure this is a good idea. When neural networks are trained on data, the program basically tweaks the parameters of the algorithm (using calculus to tweak them in the right way) to make itself best match the training data. The thing is, the end result is that no one has any idea how exactly the algorithm works. Because the algorithm wasn't programmed directly--instead, it was programmed to learn--they're essentially "black boxes." All we know is that they give basically correct results on the training data (in this case, that means the algorithm would have correctly predicted the post-release behavior of the tracked inmates who were part of the training data). I worry that this will have unintended consequences. After all, we humans don't know exactly why the algorithm generates its particular outputs (will/won't commit a crime) from its inputs (the questionnaire). This, clearly, isn't great. A particularly terrible example of machine learning gone wrong is when Google's image identifier algorithm started calling black people apes. I mean, when you're dealing with a black box--which machine learning algorithms are--you're pretty much guaranteed to have unintended consequences (almost by definition!).
youtube · AI Harm Incident · 2017-06-29T12:3…
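The training procedure this commenter gestures at (iteratively tweaking parameters "using calculus" so the model best matches the training data) is gradient descent. A minimal single-parameter sketch in Python, purely illustrative and unconnected to any system discussed in the comment:

    def fit_line(xs, ys, lr=0.01, steps=1000):
        """One-parameter gradient descent: tweak w to minimize squared error."""
        w = 0.0
        for _ in range(steps):
            # derivative of the mean squared error with respect to w (the "calculus" step)
            grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
            w -= lr * grad  # nudge the parameter downhill
        return w

    print(fit_line([1, 2, 3], [2, 4, 6]))  # converges to roughly 2.0

The learned w fits the data well, but nothing in the procedure explains *why* any particular output is produced; scaled up to millions of parameters, that is the "black box" the commenter describes.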
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          unclear
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[{"id":"ytc_UgxEoXJmsU2o6hW7-6l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgzQ5-X22e_jw6AD6qh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}, {"id":"ytc_UgzJdKeEiLauijES-xh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgxqiNKoyqTcAP4sAER4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugxo75JzWjtKUEJa2vx4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgzYjmZgqwkL0p4GOal4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UghETbDDLjdorXgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_UgjjbdybEHWzyXgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}, {"id":"ytc_Ugjn9WM4zF6TIHgCoAEC","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"}, {"id":"ytc_Ugi75uTYKgdOH3gCoAEC","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"approval"})