Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Vice trying to be hype once again... what a failure. This replace fairytale has died in 2017. You can not replace a medical Dr with a human intellect with a frigging NARROW AI. People are so ignorant... Medical practice will be replaced only by a general AI. By then , singularity will not be too far away and we will have much bigger problems to worry about. We need to work together with AI and augment our jobs. If some of you fanboys of robots and human replacement would like to research why a robotic AI surgery machine for hip operations was completely discontinued because the so beautiful AI decided to make wrong incisions...perhaps that will change your thought. Currently and for the foreseeable future these PhD students with their ambitions and their research fund programs are only trying to get a piece of the pie. In the end only Google and IBMs AI will be of any use anyway. Sorry for the big post but there are too many voices of people outside the field of medicine who are trying to make a quick buck and it is misleading people to think that we are already there. Another big example of AI failure was the Airbus 737 max ... how many people died and how many billions lost due to that false trust in AI... thanks for reading to those who actually read all this.
youtube AI Jobs 2020-05-22T08:1… ♥ 1
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          outrage
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgwMFKxtBrV-edjM_gl4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwAZ4-eslkJbWRmbud4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugw3jRw7Dmppc2_nxVF4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxyFuRe62lFZR4jrk14AaABAg", "responsibility": "distributed", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwVu10Aoys2ngy8Frt4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgycKCtptUxAM6fMi5N4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxqSHAkpL7IiqWcW6N4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzAficXTTdvI6CXeV54AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxWJufCb1NPVF6yl_R4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyUhSSUvmhXqb0dyAt4AaABAg", "responsibility": "distributed", "reasoning": "contractualist", "policy": "regulate", "emotion": "approval"}
]
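A raw batch response like the one above can be checked against the coded dimensions by parsing the JSON and indexing the records by comment id. The sketch below is illustrative, not part of any real pipeline: the field names match the JSON shown, but the variable names are assumptions, and only two of the ten records are reproduced here for brevity.

```python
import json

# Abbreviated sample of a batch coding response (two of the ten records
# shown above; field names taken from the raw response as displayed).
raw_response = '''[
  {"id": "ytc_UgwMFKxtBrV-edjM_gl4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxqSHAkpL7IiqWcW6N4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"}
]'''

# Parse the JSON array into a list of per-comment code dictionaries.
records = json.loads(raw_response)

# Index by comment id so the coded dimensions for any single comment
# can be looked up directly when inspecting its raw output.
codes_by_id = {rec["id"]: rec for rec in records}

code = codes_by_id["ytc_UgxqSHAkpL7IiqWcW6N4AaABAg"]
print(code["emotion"])  # outrage
```

Indexing by id also makes it easy to spot mismatches between the summary table and the raw response, e.g. a record whose displayed emotion differs from the emotion field in the JSON.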