Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
This guy is an attention seeking idiot, the whole sentient AI debate oxymoronic…
ytc_UgxP6yjVl…
Journalist Gil Duran has a channel called The Nerd Reich. It looks at the Tech …
ytc_UgxhLdFzB…
Truly frightening. Passing laws regarding technology that didnt exist yet before…
ytc_Ugx76p32u…
That is right, except you forgot the Open Source
-------------
We are today, e…
rdc_d3xzwri
If you wore a T-Shirt with a ref Stop sign on your chest, what do you think Waym…
ytc_Ugw1_VQtE…
When they perfect this women robot all actual women, especially the modern ameri…
ytc_UgzntM76n…
For context this is a federal district court iirc, yeah you don’t end up a judge…
ytr_Ugz4TOoOF…
I love LOVE Karen Hao and her insider stories and the insights she brings to the…
ytc_UgzFrImQn…
Comment
@drjoshcsimmons Some companies seek diversity and are very accommodating; others definitely not so much. But at the end of the day there is cross-contamination of data in the background, used to mash all of a person's data together regardless of application. That becomes a serious breach of privacy, and the transformed data becomes something completely separate. If it is used to rank or rate a candidate beyond any single application, it's a complete disaster, especially if a person's background or identity has evolved over time. Training data does not represent or reflect the current candidate, and ranking based on discriminatory "training analysis" is just bonkers.
Also, if an applicant has a lot of skills, experience, and meaningful accomplishments that match the job description's requirements on application A for company A, but is also applying for an equivalent rank in a separate industry and position because they meet those requirements too, any entangling and ranking of the individual's overall data to assist hiring decisions is its own unregulated data cross-contamination that no hiring HR or business should ethically want any involvement in.
AI is also unable to make conscious decisions. It lacks the understanding that comes from actual experience. Human decisions are guided not only by education, recall, and logical analysis, but by experience and the emotional responses built on that experience. Human language and thought are recursive, and AI makes many of its quicker decisions and responses without putting the pieces of language together constructively. It does not "think about what it's being asked to do" recursively or logically with any consistency. There is no ethical data governance or oversight to ensure proper data collection, transformation, analysis, visualization, and communication.
Decision-making bodies are using and trusting AI because they drank the Kool-Aid: the hype that this was the future, that AI was ready to replace much mundane human work, and that the whole system could produce output leadership could trust for strategic, logistical, and HR decisions. The AI is only doing what it was given to do, under the guise of simplifying process, to the best of what it was designed to do. And that had nothing to do with the needs of any company using Workday, and everything to do with Workday and its partners gathering candidates' PII and using it across platforms to build a database ranking candidates on information from any of their applications, for reasons Workday determined and ranked internally in the system. I don't know how they pitched these features and this streamlining to the companies using and investing in HRIS SaaS.
This isn't only affecting the unemployed seeking work. It probably also results in a system that ultimately violates a client company's own strategic hiring and employment-resource policies and its legal compliance obligations.
I have used Workday as an employer, and it had many features in the configured workflow to help break down business-area silos and facilitate communication and assigned tasks among teams and within them. The workflow was ideally set up to assign, define, and collaborate effectively, and to monitor and guide success. There were still manual steps then, and we worked as a team: good templates for communicating, trust within the team to balance workloads, and a system to identify, document, and prioritize issues (especially recurring ones), get help assigned regularly, and work through them effectively, always striving to improve team and individual output. I could see AI helping to flag things, or perhaps kick-starting prompts through the workflow, but AI should not be directly responsible for decisions that require any form of recursive understanding, nor should it act on them automatically without notification and verification. One of the most sophisticated AI systems believed the best way to get to a car wash 100 yards away was to walk, because of the short distance and gas savings. It had to be prompted several times, breaking the question down and making mistakes, before finally concluding that if you didn't drive the car to the car wash, you couldn't wash the car there. Aaaand we think we want AI making screening decisions, with the ability to act on them, before a human gets any opportunity to review a resume or the AI's decision? AI is incapable of most abstract thinking, trips over redundancy and spurious correlation, and has no experience from which to create reasoning that could produce effective results.
Is the AI discriminating against candidates based on age or other protected personal or ADA data? Was Workday using only the current application, or was it using data collected across client platforms as training data to "enhance" or "improve" Workday's workflow, ranking or flagging candidates unfavorably for reasons that had nothing to do with meeting the current application's requirements, meaning the education, skills, and experience the job posting says are necessary to perform? Were Workday's clients educated on how Workday could (or could not) be configured to use this discriminatory automation? Were clients notified that candidate data was collected and used as part of a greater pool of client data? Was collection Workday's default? Did clients need to sign off? Did they sign off knowing Workday would not only collect but actively use and compare data across all client sources to screen each candidate against all of their applications, ranking (or auto-denying) applications as part of the workflow it was creating for clients? What were clients told (or not told), and how knowing was their part in the collusion?
Unemployed people have many skills and experiences, and may spend time individualizing summaries and resumes to demonstrate they have the specific education, skills, and experience to succeed in the role. That shows the candidate read the job posting, shows interest, and identifies with the role. But if they apply for a lot of jobs and have a lot of education, skills, and experience, and Workday keeps a complete transformed background profile for each individual candidate (the person applying and how they filled out each application), I can see AI pumping out "misalignment rankings/overqualified" or "so experienced, this person has worked well over 20 years, risky age hire," etc. Or what if a person over 40 has been honest about a disability or another protected identity, and an AI with no personality or grasp of an individual's spirit and proven ability tosses them in the deny pile? Disability does not mean incapable of work if they are applying. They may be a harder and more consistent performer who has spent their life succeeding, perhaps because of their neurodiversity, perspective, tenacity, and reasonable self-advocacy, sometimes making them the better choice. If only a human got the opportunity to engage with the candidate before the AI tossed them, or tossed another middle-aged worker seeking a long-term role at a company they believed in. Yeah.
It doesn't matter whether Workday intended its AI to misjudge or believed in its algorithm. The results matter. And their prior fight to defend bad practice after it came to attention does not help them. What matters is the lack of data privacy, the lack of data-governance practices, and the seeming lack of desire to do better. Under this administration, their boldness and their disregard for diversity and candidate privacy are just another huge sign that the available workforce is being mistreated and punished.
A year into an MS in IT Management, being taught the ropes of IT security, data governance, and strategic management, with use cases that promoted the positives of AI advancement. But the texts and teachers, even a couple of years ago, were blind, leaving us utterly unprepared for the actual political and economic reality that was already setting in and is now crashing down on the world.
People who need jobs aren't getting them. More highly qualified people are losing jobs or walking out of them. Workday is clearly in collusion with third-party vendors to facilitate data cleansing, pandering to a government and to the companies it can most easily push AI out to. These clients may not know, ask, or care to learn what the AI does to produce results. Some may be jumping on the pro-AI bandwagon to align themselves with those in power. Other clients chose Workday because it was once decent HRIS software; they trusted a brand. Either way, company candidate, employee, and termination data that was once siloed and protected within each client company separately no longer is.
Workday started to face more competition and decided early AI adoption could give it a competitive advantage. Well, if only its AI training had drawn on training data from outside its own office: its clients'. And they definitely worked with at least one third party to "keep their hands clean." Yeah.
I hate this class action case, because software I once used and admired seems to have leadership making decisions I would call unethical and unjust, at a time when more workers, skilled ones, are unemployed. Hamsters in this doomed and cruel wheel, networked into the AI gluttony.
youtube
AI Harm Incident
2026-02-15T06:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytr_UgyVv4ve9_fKeV_61zF4AaABAg.9_JFQ51ukK69_i5nZaMRDM","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgykMsvG7pMs2AzB-x94AaABAg.9ZXl5b10-at9dg-FBI6wVf","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytr_UgxuinlqoOs14JzDqWd4AaABAg.9LamJsVbFpa9R5vJPCk1P_","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgxeIEH5f8_pD-MQDB94AaABAg.9BvVklictvZ9_TnKU5qTVl","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytr_UgzeoJR7DwNYcZRtuXB4AaABAg.ATCV3p6lNRkATCzGG06yMe","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytr_UgzsMMHG6TjpHFhMAPZ4AaABAg.AT9TAAA8cqUATEiUnTwODF","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytr_UgygOGELqtqroKr4kpN4AaABAg.AT8qenBg7YCATAoSebnoIA","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytr_UgyUKVhsb6daUR5WhDN4AaABAg.AT8lNVyVE_dATALnm4zbnG","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgxAPifmaJaPLL74yiJ4AaABAg.AT8SsDzcKbpATDOpHf13S2","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytr_UgxifQWVzouLTi2CQNt4AaABAg.AT86Pd8bQbBATAM4Ybq4-s","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"}
]
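The raw LLM response above is a JSON array of coding records, one per comment, with the same four dimensions shown in the Coding Result table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and sanity-checked is below; the allowed value sets are inferred only from the entries visible on this page (the real codebook may define more categories), and the function name and example IDs are hypothetical.

```python
import json

# Allowed values per coding dimension, inferred from the records visible
# in this page; the actual codebook may include additional categories.
ALLOWED = {
    "responsibility": {"none", "company", "ai_itself"},
    "reasoning": {"unclear", "mixed", "deontological", "consequentialist"},
    "policy": {"none", "unclear", "regulate", "liability"},
    "emotion": {"indifference", "mixed", "approval", "fear", "outrage"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response (a JSON array of records) and
    keep only records whose dimension values fall inside the codebook."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

# Hypothetical usage with made-up comment IDs; the second record has an
# out-of-codebook responsibility value and is dropped.
raw = """[
  {"id": "ytc_example1", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_example2", "responsibility": "nobody",
   "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]"""
coded = parse_coding_response(raw)
print([r["id"] for r in coded])
```

Filtering rather than raising keeps a single malformed record from discarding a whole batch; a stricter pipeline might instead log and re-prompt the model for the invalid records.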