Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
So... who's able to tell if the AI is hallucinating or not? Is it actual human experts...?
Source: youtube · AI Harm Incident · 2024-07-21T10:1…
Coding Result
Dimension        Value
Responsibility   user
Reasoning        deontological
Policy           unclear
Emotion          fear
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugzm3v-0AJPeTDGrgwV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgxSqTj30OmvHOnf3hZ4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyxjqxS1AqLu6mqh9Z4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugw4tvg5bMaIAf273bZ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_UgxYR9WRLnOALHEPAxJ4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwPg56ZQhyUaGZ0Iel4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgzW6CC-FG44Hcc3C3Z4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "resignation"},
  {"id": "ytc_Ugy2qhWGCLZtN8twC3t4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzeJ0eS452dxHtBF-B4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgxjriUE9InF5RDqNON4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "industry_self", "emotion": "fear"}
]
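When inspecting raw responses like the one above, a quick validation pass can catch records whose values fall outside the codebook. The sketch below is a minimal example of that check; the allowed values per dimension are inferred only from the values visible in this response (the full codebook may define more), so treat `SCHEMA` as an assumption.

```python
import json

# Allowed values per dimension, inferred from this response alone
# (assumption: the real codebook may permit additional values).
SCHEMA = {
    "responsibility": {"none", "company", "developer", "user", "ai_itself"},
    "reasoning": {"unclear", "deontological", "consequentialist", "virtue"},
    "policy": {"unclear", "regulate", "liability", "ban", "none", "industry_self"},
    "emotion": {"approval", "outrage", "indifference", "mixed", "fear", "resignation"},
}

def validate_codings(raw: str) -> list:
    """Parse a raw LLM response and check every record against SCHEMA."""
    records = json.loads(raw)
    for rec in records:
        # Comment ids in this dump start with the "ytc_" prefix.
        if not str(rec.get("id", "")).startswith("ytc_"):
            raise ValueError(f"bad id: {rec.get('id')!r}")
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: {dim}={rec.get(dim)!r} not in codebook")
    return records

# Usage: validate the record matching the Coding Result table above.
raw = ('[{"id":"ytc_UgxYR9WRLnOALHEPAxJ4AaABAg","responsibility":"user",'
       '"reasoning":"deontological","policy":"unclear","emotion":"fear"}]')
codings = validate_codings(raw)
```

A record with an out-of-codebook value (for example, an emotion the schema does not list) raises a `ValueError` naming the offending comment id and dimension, which makes bad responses easy to spot in batch runs.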