Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
From my experiments with ChatGPT and Bard, it is my observation that AI routinely bullshits when it doesn't know the answer, i.e., when it can't find the correct answer in its database. No life-and-death decisions should ever be made by a bullshitter. It's like trusting a con-artist who is pretending to be a doctor.
youtube AI Responsibility 2023-06-04T08:5…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          liability
Emotion         outrage
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugx14ixslSC2jIJqVqF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugy5jwnBEdF30lTeZQB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugxft56P1gIEG1FWn4F4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugzc8nTjd0D3_Qcmg2F4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzQaodh0DCq13sEGPJ4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgxtIuYnaCtIxh4ntIN4AaABAg", "responsibility": "distributed", "reasoning": "virtue", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyZiHgmsocAwrqLo7R4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugy5OKRriXHSZNrWt154AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxOjKAwPkPTAvKyMLp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxmZZoX4UK4R1bhK6R4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"}
]
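For illustration, a minimal sketch of how a single comment's codes could be pulled out of a batch response like the one above. The helper `codes_for` and the variable `raw_response` are hypothetical names, not part of the coding pipeline itself; the sketch assumes the model's reply is available as a JSON string.

```python
import json
from typing import Optional

# Hypothetical excerpt of a raw batch-coding response (JSON array of
# objects keyed by comment id, as in the dump above).
raw_response = '''[
  {"id": "ytc_UgxmZZoX4UK4R1bhK6R4AaABAg",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "liability", "emotion": "outrage"}
]'''

def codes_for(raw: str, comment_id: str) -> Optional[dict]:
    """Return the coding dict for comment_id, or None if it is absent."""
    return next((row for row in json.loads(raw) if row["id"] == comment_id), None)

codes = codes_for(raw_response, "ytc_UgxmZZoX4UK4R1bhK6R4AaABAg")
print(codes["policy"])  # liability
```

This makes it easy to cross-check the rendered Coding Result table against the raw model output for any one comment id.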