Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Listen, I usually love some of your videos, because they share brilliant advances in technology and can shape a better world. However, I am worried that some of these “advances” may end up being used as a replacement to the human element in social workplaces, most notably, in healthcare. What scares me so, is that this model shows that years of work and training done by people can be outshined by a program in mere moments. I fear that in the not so far future, hospitals and clinics will start to replace the doctors and people alike with machines because it’s ‘cheaper’, because this world is absolutely crazed on profitability and takes the workforce for granted. In my opinion, I do believe that the use of AI to aid doctors on helping their patients is tremendous, but it should only help the people do their job. What I ask of you is that instead of focusing solely on these brilliant advances of technology, could you please consider what this means for the people living their lives and how it will affect them: for better and for worse. A little of optimism is fantastic to have, but sometimes the truth of the work is not always so bright. I hope this comment helps you.
youtube AI Harm Incident 2024-06-01T15:4…
Coding Result
Dimension        Value
---------------  --------------------------
Responsibility   company
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgyjLyV58DUvLHmQraF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzETU8XcYSW3E9k4IN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwMKAs0RaTATguPEsh4AaABAg","responsibility":"user","reasoning":"mixed","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzaqEDMCbDt2OHkJZ94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzPaGAxp7cSWyd46X54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugwj4d-u4rExbOHC4iV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwqJBIloVlTOzA5BOd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugx5IsQIwTxNES8U6zJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"approval"},
  {"id":"ytc_Ugxf9OY9ClB-z-90_b94AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugw93H5nF2sAxOCYRYp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"}
]
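The raw response is a JSON array with one record per comment, each carrying the four coding dimensions. A minimal sketch of how such a response could be parsed and sanity-checked in Python, assuming the allowed label sets are exactly those visible in this batch (the full codebook may define additional labels):

```python
import json

# Label sets inferred from the values visible in this batch (assumption:
# the actual codebook may include categories not seen here).
ALLOWED = {
    "responsibility": {"company", "user", "developer", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "liability", "ban", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference", "resignation", "mixed"},
}

def validate_coding(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject records with unknown labels."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec.get('id', '<no id>')}: unexpected {dim} label {rec.get(dim)!r}"
                )
    return records
```

A validator like this catches the most common failure mode of LLM-based coding, where the model invents a label outside the codebook, before the record reaches the results table.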