Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Correction: chatbots don't hallucinate *sometimes*, chatbots hallucinate *always*. That the hallucinations sometimes more or less match reality does not change that. Chatgpt estimates what things should more or less look like after the prompt, but they're not *fact-based*, they just make stuff up based on rules inferred from their corpus.
youtube AI Responsibility 2023-06-10T16:4…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         mixed
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugynye34bty3I0GGjmF4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzqoliZUMZipyp6dY54AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugwy5QNuQd6LmTLqQod4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugx1mtgAtiKaV7yDRkd4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyFV8VoLd-xieQcqnl4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugw8tTvuxcr8tHD4oCp4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugy8GdpkifFW_Dftf614AaABAg", "responsibility": "distributed", "reasoning": "contractualist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyTfQzLgNjPtF2Y4E94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzcXgLIgnFo2zc6JXF4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxA3RMkuy-A1ZM5KXZ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"}
]
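As a minimal sketch of how a raw batch response in this shape could be turned into per-comment codings: the parser below assumes the JSON array format shown above, and its label vocabularies are inferred only from the values visible in this response (the real coding scheme may allow more labels). Records with a missing id or an out-of-vocabulary label are dropped rather than guessed at.

```python
import json

# Hypothetical two-record sample mirroring the raw LLM response format above.
RAW = '''[
  {"id": "ytc_a", "responsibility": "user", "reasoning": "consequentialist",
   "policy": "none", "emotion": "approval"},
  {"id": "ytc_b", "responsibility": "company", "reasoning": "deontological",
   "policy": "liability", "emotion": "outrage"}
]'''

# Label sets observed in the response above; assumed, not the full scheme.
VALID = {
    "responsibility": {"user", "company", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "unclear"},
    "policy": {"none", "liability", "regulate"},
    "emotion": {"approval", "outrage", "mixed", "indifference"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: coding},
    skipping records with a missing id or an unknown label."""
    out = {}
    for rec in json.loads(raw):
        cid = rec.get("id")
        if not cid:
            continue
        if all(rec.get(dim) in labels for dim, labels in VALID.items()):
            out[cid] = {dim: rec[dim] for dim in VALID}
    return out

codings = parse_codings(RAW)
print(len(codings))                  # 2
print(codings["ytc_b"]["emotion"])   # outrage
```

Keying the result by comment id makes the per-comment lookup shown in the Coding Result panel a single dictionary access.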