Raw LLM Responses

Inspect the exact model output returned for each coded comment.

Comment
how do we know that the car wasn't hacked ? and someone else was driving ? How do we know that a.i didn't do it ? this is a easy way to get rid of someone.....
Source: YouTube · AI Harm Incident · 2026-01-10T03:5… · 8 likes
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_Ugw5w2-xLm8RC9DvjSR4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgyzXzWsqnmu2g5ETDh4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzRIqqsxmUHS8J7mDJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgxNVeolHR0IQFH-ZGh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugyb-M-j4UhbMWlAg9d4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]
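The raw response is a JSON array of per-comment codes keyed by comment id, so recovering the code for any single comment is a parse-and-filter step. A minimal sketch in Python, assuming the raw response text is exactly the array shown above and that the entry whose values match the Coding Result table (`ai_itself` / `consequentialist` / `unclear` / `fear`) is the one for the displayed comment:

```python
import json

# The raw model response shown above: a JSON array of per-comment codes.
raw = '''[
  {"id":"ytc_Ugw5w2-xLm8RC9DvjSR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgyzXzWsqnmu2g5ETDh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzRIqqsxmUHS8J7mDJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxNVeolHR0IQFH-ZGh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugyb-M-j4UhbMWlAg9d4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]'''

records = json.loads(raw)

# Look up the code assigned to one comment by its id
# (the id here is assumed to belong to the comment shown above).
target_id = "ytc_UgxNVeolHR0IQFH-ZGh4AaABAg"
code = next(r for r in records if r["id"] == target_id)
print(code["responsibility"], code["reasoning"], code["policy"], code["emotion"])
# → ai_itself consequentialist unclear fear
```

Parsing the whole array and indexing by id, rather than splitting the text by hand, also makes it easy to detect a malformed or incomplete model response: `json.loads` raises on invalid JSON, and a missing id surfaces as a `StopIteration` from `next`.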