Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
There’s a persistent and misleading narrative circulating, often perpetuated by short-term, goal-driven actors (politicians, universities, tech company executives), that the future of work is secure. The claim goes: “You just need to learn how to work with AI,” and humans collaborating with AI will outperform either one alone. But recent research, including a landmark study from the University of Virginia, tells a different story. In diagnosing medical conditions, AI alone achieved far greater accuracy than either human doctors or even doctors assisted by AI. The median diagnostic accuracy for the doctors using ChatGPT Plus was 76.3%, while the physicians using conventional approaches scored 73.7%. Sounds like “a win” at first. However, ChatGPT Plus alone achieved a median diagnostic accuracy of more than 92%.

In other words, the comforting idea that humans will always be essential partners to machines is crumbling. In many fields, AI is not just a tool; it is the superior performer. This has staggering implications for the future of work, especially for the stock response to prior technological displacement: “You just need to be retrained.” To the group advocating the “retraining solution,” I pose this: take a medical doctor who has just finished 8 years of higher education and a low-paid 4-year internship/residency, is hundreds of thousands of dollars in debt, and just got outperformed by automation; what would you have this doctor retrain into that will offer a secure job?

As for the idea that “meaning” must be tied to a job: the number of generations it will take to go from “you must have meaningful work to have a fulfilling life” to “it sure would suck to have lived back when you had to do something you might not even have liked for 40 hours a week just to stay alive” will be exactly one.
youtube Cross-Cultural 2025-09-28T02:0… ♥ 1
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgxpTackWV0QGPP1xu94AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgwRypX42a_I8Wh5tap4AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzePcOD65Hltva5jb14AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwXb9fcxHQlxoqH9-x4AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxpuKAyTV0u8st9Y3F4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyHe-K7B3Y9sfChbsF4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_UgzN3nfGOXG0XEQdsrl4AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwGQLB0JwRbQqJ13yx4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgybhATi4O5qKsVE1JF4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugx_7QVTBQhPRsOSnnB4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "ban", "emotion": "outrage"}
]
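The raw LLM response is a JSON array of per-comment codes, keyed by comment id. A minimal Python sketch of how such a batch can be parsed and a single comment's codes looked up (the batch here is truncated to two entries copied from the response above; the variable names are illustrative, not part of the pipeline):

```python
import json

# Truncated example batch in the same shape as the raw LLM response above.
raw_response = """
[
  {"id": "ytc_UgwGQLB0JwRbQqJ13yx4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxpTackWV0QGPP1xu94AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"}
]
"""

# Index the batch by comment id so one comment's codes can be retrieved directly.
codes_by_id = {entry["id"]: entry for entry in json.loads(raw_response)}

codes = codes_by_id["ytc_UgwGQLB0JwRbQqJ13yx4AaABAg"]
print(codes["responsibility"], codes["emotion"])  # company fear
```

Note that the Coding Result table above (responsibility = company, reasoning = consequentialist, policy = unclear, emotion = fear) corresponds to exactly one entry in the batch, which is the lookup this sketch performs.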