Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Why not? If an AI functioning as a doctor is more competent than human doctors, this should mean you will likely get overall better medical service on average from the AI doctor, than the human doctors. If you have a medical ailment, especially one with potentially life threatening outcome, do you want the most competent doctor working on you, or the least competent doctor available? Additionally, the AI doctor likely would require no appointments or waiting rooms for a consult, and it may not even charge anything for its service. An AI doctor is not as likely to be subject to biases due to hubris and other human factors affecting its decision making process. An AI doctor is more likely to be logical and scientific in its assessment of a situation. An AI doctor has the potential to learn from all of its patients and all historical medical case studies that exist. Human doctors are time and motivation restricted, such that they do not learn everything medical related in all medical fields, that exists. Additionally, human doctors are limited in the number of total patients that they will see in their career, and thus, they are limited in the amount of personal experience that they can acquire. An AI doctor has the potential to multitask and serve billions of patients in billions of cases, which ultimately could lead it to becoming a tremendously better informed expert in all possible human ailments and exceptional medical cases. AI doctor services are the future of "medicine". Humans should stop paying to attend medical school. Accumulating massive student loans to attend medical school, only to major in a dead end profession, is not really a good use of personal time or economic resources.
Source: youtube — AI Harm Incident, 2024-06-13T12:4… (♥ 3)
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         approval
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytr_UgyKib66b8s5YkeJGnR4AaABAg.A4Cnn1DJ6g4A4cWbquRXTr","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytr_UgyTBsoIjnKqYD_a-sx4AaABAg.A4CggS7ktZiA5uoYd_7TjJ","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgyTBsoIjnKqYD_a-sx4AaABAg.A4CggS7ktZiA6ZHFOwhYwr","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_Ugwn1haP5cFdZs_AVdN4AaABAg.A4AnJcNnh1cA4CDMBnl1P-","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytr_Ugwn1haP5cFdZs_AVdN4AaABAg.A4AnJcNnh1c4CIFXhmBh3","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytr_UgyrXvIHrsU1GKX3MLp4AaABAg.A48XmVFEoKtA7Mx2e42YYt","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytr_UgyXpXXhU6y1SMq-8pt4AaABAg.A48VmxuDnpJA48WdkgKczf","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytr_Ugydt8PlJDYYAwWn-Ch4AaABAg.A48HSbRsNp1A48UXZ0LsU9","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytr_Ugydt8PlJDYYAwWn-Ch4AaABAg.A48HSbRsNp1A4CFrEdOApF","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytr_Ugy778tA6QSF-ISWArl4AaABAg.A482dS1MC8UA48Y2pekjVg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
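The raw response is a JSON array with one record per coded comment, carrying the four coding dimensions alongside the comment ID. A minimal sketch of parsing and sanity-checking such a payload before ingesting it — note the allowed category sets below are inferred only from the values visible in this response, not from a confirmed codebook, and the record ID is a made-up placeholder:

```python
import json

# Category sets observed in this batch; the full codebook may define more labels.
ALLOWED = {
    "responsibility": {"none", "company", "developer", "ai_itself"},
    "reasoning": {"consequentialist", "deontological"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"approval", "fear", "indifference", "resignation"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only in-schema records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Every dimension must be present and hold a known label.
        if all(rec.get(dim) in labels for dim, labels in ALLOWED.items()):
            valid.append(rec)
    return valid

# Hypothetical record ID, for illustration only.
raw = ('[{"id":"ytr_example","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"approval"}]')
codes = parse_codes(raw)
print(len(codes))  # 1
```

Dropping out-of-schema records (rather than raising) keeps one malformed line of model output from aborting a whole batch; a stricter pipeline might log and re-prompt instead.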