Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@juraj_b The issues with bad data exist whether we are using it with machine learning or not. It has been an issue ever since police first existed and will continue to be an issue forever. Software won't completely solve the issue in every circumstance. Software may give an margin of error or provide a warning about low certainty of the data but it's up to people to use discretion and pay attention to the facts they trust. If the system states there's a certainty of 10% a good cop will not act on that data alone, a bad cop will. So at the end of the day AI/ML changes nothing when it comes to prejudice as data will never be complete. We need better cops. ML however may be able to provide information that was never there before, such as an alibi, it may stop innocent people from being targeted if it is proven they were elsewhere. As always, the tools are not at fault. People are.
youtube AI Harm Incident 2023-02-28T09:2…
Coding Result
Dimension       Value
Responsibility  user
Reasoning       deontological
Policy          industry_self
Emotion         resignation
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytr_UgytuLjDqwIUiEJFPgl4AaABAg.9ci4w3tP8wvAD7Gnjp4Gnf","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytr_Ugzmn8mNwnYh4S1yFZh4AaABAg.9cfE0D0H5G49cfggLlipKo","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_Ugx64bzFcTk5xMNU3VV4AaABAg.9cf3z6OOa0o9mfMo3NE6uP","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"resignation"},
  {"id":"ytr_Ugx64bzFcTk5xMNU3VV4AaABAg.9cf3z6OOa0oAD7GAi0JsoI","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgzQdQZk4f8aNExtT0l4AaABAg.9cex8P35PYt9cpiZw6Ui8S","responsibility":"government","reasoning":"contractualist","policy":"liability","emotion":"outrage"},
  {"id":"ytr_UgyFda32o6dSGsohy594AaABAg.AQwrlbATbWUAQxfuD5vu9g","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgwyD5Vcwnc7FfEgJ7R4AaABAg.AQwO0rhlPWGAQwyZcBZ5jf","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytr_UgzOw24A63ozrmgyiYJ4AaABAg.AQvyNn4r57CAQwKLUgTZSE","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytr_UgzB9vj7lisSMA6wjEN4AaABAg.AAsIOsgELruABJuhAPkYsw","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgweGBpZl_2SSmJsJsJ4AaABAg.A7gC1rhHwRtAEI9DROudjS","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
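To inspect the raw batch response for one coded comment, the JSON array can be parsed and indexed by comment id. A minimal sketch in Python, using the standard library only; the ids and field names are taken verbatim from the output above, and `raw_response` is truncated here to two entries for brevity:

```python
import json

# Raw LLM batch output as shown above (truncated to two of the ten entries).
raw_response = """
[
  {"id": "ytr_Ugx64bzFcTk5xMNU3VV4AaABAg.9cf3z6OOa0o9mfMo3NE6uP",
   "responsibility": "user", "reasoning": "deontological",
   "policy": "industry_self", "emotion": "resignation"},
  {"id": "ytr_UgytuLjDqwIUiEJFPgl4AaABAg.9ci4w3tP8wvAD7Gnjp4Gnf",
   "responsibility": "company", "reasoning": "consequentialist",
   "policy": "liability", "emotion": "outrage"}
]
"""

# Build an id -> coding lookup so any comment's row can be checked
# against the rendered Coding Result table.
codings = {entry["id"]: entry for entry in json.loads(raw_response)}

# Look up the coding for the comment shown above by its id.
coded = codings["ytr_Ugx64bzFcTk5xMNU3VV4AaABAg.9cf3z6OOa0o9mfMo3NE6uP"]
print(coded["responsibility"], coded["emotion"])  # → user resignation
```

This mirrors what the page renders: the "Coding Result" table for the quoted comment is just this entry's four dimension fields plus a timestamp added at coding time.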