Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
43:40 imo, it's just a context size problem, leading the AI to be dumber. The attention mechanism is getting overwhelmed and is breaking its utility > leading to a dangerous outcome when someone depends on it.
youtube AI Moral Status 2025-10-31T16:2…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           none
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugw2x0sErqnTEBCSJZB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugy-eDQc-LnP66KrhfZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzByHsIC0Ly09nEiBx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugx5tikRL4eR8Xsl6Z94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzhJipb1hcM9z79LoV4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy9NPfWs1XgLcMeNm94AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzInCW4859HZVBJ3bt4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzRmbdzCg0fy4umJTR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_UgxeRE8t-gKr81KpBE94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw7BPzdIpFM2_wq-ZV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
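Extracting one comment's coding from a batch response like the one above can be sketched as follows. This is a minimal illustration, assuming the model returns a JSON array of objects keyed by comment id (as shown); the helper name coding_for and the two-record sample payload are hypothetical, not part of the tool.

```python
import json

# Hypothetical two-record batch response, shaped like the dump above.
raw_response = """
[
  {"id": "ytc_Ugw2x0sErqnTEBCSJZB4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugy-eDQc-LnP66KrhfZ4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "approval"}
]
"""

def coding_for(comment_id: str, raw: str):
    """Return the coded dimensions for one comment id, or None if absent."""
    for record in json.loads(raw):
        if record.get("id") == comment_id:
            # Drop the id so only the coded dimensions remain.
            return {k: v for k, v in record.items() if k != "id"}
    return None

print(coding_for("ytc_Ugw2x0sErqnTEBCSJZB4AaABAg", raw_response))
```

In practice the model output may be malformed JSON, so a production version would wrap json.loads in error handling and validate each dimension against its allowed labels.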