Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If the goal of a hiring process is to identify the best candidates while investing as little time as possible (and I think it is), then automation of these types is obviously going to create a vast amount of value. This should make EVERYBODY's life easier. Businesses can spend less time getting better employees. Hiring managers and interviewers can spend more time doing their jobs and less time interviewing. Employees who are not going to be hired can spend less time in unsuccessful interviews, and will get the signal that they need to refocus their job search efforts in more fruitful directions.

BUT: Sometimes an individual who would get the job on merit (in a perfect world) will get rejected. And these incidents are very likely to cluster around a small number of individuals, either effectively blacklisting them, or at least creating a massive disadvantage.

So, how are we to handle a system that makes society function massively better in general, while in some cases doing something unacceptable? I honestly don't know. But I think it might look exactly like this lawsuit. Workday is delivering massive advantages to employers. They are getting paid for those advantages. But they may have to fork out money to individuals who are unfairly disadvantaged. Overall that seems like a balanced outcome. There is probably a better way to handle disadvantaged candidates than a lawsuit. But any such solution would be very difficult to figure out. So until one exists, lawsuits seem like an excellent choice.
youtube AI Harm Incident 2026-02-15T22:1…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          industry_self
Emotion         approval
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgzphKjKjeodjBxB1Y94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz89_HVLeZlLgn9hIV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwZGcP2cBpsPqmTH5l4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugwl_KpmFtdId5Skenl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgzitGqwyrWeUIU1j794AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyFhLm6NAdykanOPbN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyCaH3dBiDaChXT9bh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
  {"id":"ytc_UgyKSONbTt0bjSj0lg94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugx1zs_i436teOxmHbx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugwn85EQye-PNDAJTOV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
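The raw response is a JSON array with one object per comment, keyed by the comment's `id`. A minimal sketch of how the coded dimensions shown above could be recovered from it (the two records are copied verbatim from the response; the lookup helper is illustrative, not the tool's actual code):

```python
import json

# Two records copied from the raw LLM response above; the full array
# contains ten. The second record corresponds to the comment whose
# coding result is displayed in this section.
raw = """[
  {"id":"ytc_UgzphKjKjeodjBxB1Y94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugwl_KpmFtdId5Skenl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}
]"""

# Parse the array and index the records by comment id for direct lookup.
records = json.loads(raw)
by_id = {r["id"]: r for r in records}

# Pull the coding for the comment shown above.
coded = by_id["ytc_Ugwl_KpmFtdId5Skenl4AaABAg"]
print(coded["responsibility"], coded["policy"])  # none industry_self
```

This is how the table in the Coding Result section maps back to the raw output: each dimension in the table is one key of the JSON object whose `id` matches the displayed comment.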