Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Why? The Ethics researcher made blatant demands after turning in the report. Even if the "demands" were valid, you don't continue to employ someone who turns in their recommendations with an ultimatum that you either do them or they quit. You just thank them for their recommendation and show them the door and then hire someone else who isn't actively fostering a hostile relationship between employee and manager. Also, the ethics complaints were that people with more money can take more advantage of AI speech technology (almost always the case with ANY new technology, electricity absolutely benefitted the wealthy FAR more to begin with, should just see some companies run the AI against the datasets and then rent it out to smaller firms until it's cheap/easy enough for in-house running in small firms). It also complained that the AI models would reflect the majority and not smaller groups of people because it draws its data in aggregate. These are just things to design around and in most cases you want generic so it will be the most useful and benefit the most people. You don't start with benefitting the fewest people and then expand to larger groups, that doesn't make sense no matter how much Timnit Gebru wanted it to go the other way. Basically, total non-ethical issues they decided to make the molehill they die on. This wouldn't have hit the news if not for it involving AI. The person should have been fired.
Source: reddit · AI Responsibility · 1612459406.0 · ♥ 5
Coding Result
Dimension       Value
Responsibility  user
Reasoning       deontological
Policy          none
Emotion         outrage
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_glzdkik","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"rdc_glzsz3d","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"approval"}, {"id":"rdc_gm0y271","responsibility":"company","reasoning":"mixed","policy":"liability","emotion":"mixed"}, {"id":"rdc_gm0dan4","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"rdc_glzd4wt","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"}]