Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I was having argument with my Brother in law. He was saying that AI in the very near future will approve medical prescriptions. I argued that even if that happens, the final say/permission/signature will be done by human doctor. He persisted that this is stupid and that it will be completely given by "Dr. AI" In any case, this is a very good example how AI could be dangerous.
Source: youtube · AI Governance · 2023-04-19T13:5… · Likes: 2
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           liability
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgybCRLuslUr-O7PqK94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwK7ESV0IIbRfMoHJp4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyO2TCNw1VVn8By_794AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy0u9JUwkduphZaD-94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwWt-Lz7hC1PtFwQPh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx1tynQvgHuB6gTHep4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx6ZfCPmFVNT5MZO_h4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugw6MvxvLZgsT9GjZzh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzMl3ldqg3Zxme2Xwp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugyhr_5736wVeRWAJCl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
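The coding result above is obtained by matching the comment's id against the records in the raw response. A minimal sketch of that lookup, assuming only that the model returns one valid JSON array with an "id" field per record (the helper name index_by_id and the two truncated sample records are illustrative, not part of the actual pipeline):

```python
import json

# Raw LLM response as returned by the model: one JSON array with one
# record per coded comment. Two records shown here; the real batch has ten.
raw = '''
[
  {"id": "ytc_Ugy0u9JUwkduphZaD-94AaABAg",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgyO2TCNw1VVn8By_794AaABAg",
   "responsibility": "none", "reasoning": "unclear",
   "policy": "none", "emotion": "indifference"}
]
'''

def index_by_id(response_text):
    """Parse the raw response and index its records by comment id.

    json.loads raises ValueError on malformed model output, so a bad
    batch fails loudly instead of being silently dropped.
    """
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

coded = index_by_id(raw)
# The dimensions shown in the Coding Result table for this comment:
print(coded["ytc_Ugy0u9JUwkduphZaD-94AaABAg"]["policy"])   # liability
print(coded["ytc_Ugy0u9JUwkduphZaD-94AaABAg"]["emotion"])  # fear
```

Indexing by id rather than by list position makes the lookup robust to the model reordering records within the batch.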