Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It would be great if you could interview someone from the UK AI Safety Institute or other bodies set up to focus on safety, and put all of these concerns to them.
youtube AI Governance 2025-09-05T03:2…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          regulate
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugw0vpkybImPkj7x0-V4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "regulate",  "emotion": "indifference"},
  {"id": "ytc_UgwcSaB4I_322CJkCtd4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "unclear",   "emotion": "fear"},
  {"id": "ytc_Ugyo9lSTXO5cga4BHEd4AaABAg", "responsibility": "government",  "reasoning": "consequentialist", "policy": "none",      "emotion": "resignation"},
  {"id": "ytc_Ugxb3mO5U8fOTjtn9ih4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_UgwdDaBmmbca-K64_ip4AaABAg", "responsibility": "ai_itself",   "reasoning": "unclear",          "policy": "none",      "emotion": "mixed"},
  {"id": "ytc_UgyNGcWUla4cLlla1zF4AaABAg", "responsibility": "ai_itself",   "reasoning": "deontological",    "policy": "ban",       "emotion": "outrage"},
  {"id": "ytc_UgzjRUQBgfSRlJmYl0t4AaABAg", "responsibility": "ai_itself",   "reasoning": "unclear",          "policy": "none",      "emotion": "fear"},
  {"id": "ytc_UgwY-tdlArAEjtRx2nN4AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_UgySiM3pWatNX74UeVh4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",      "emotion": "approval"},
  {"id": "ytc_UgydC2Qtw0SthRfaO7p4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none",      "emotion": "resignation"}
]
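A response in this shape can be checked before the codes are stored. The sketch below is a minimal Python validator, assuming the four dimensions shown above; the allowed value sets are inferred only from the responses visible on this page and the real codebook may contain additional categories.

```python
import json

# Allowed values per dimension, inferred from the visible responses
# (assumption: the actual codebook may define more categories).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "government", "developer",
                       "company", "distributed"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"unclear", "none", "regulate", "liability", "ban"},
    "emotion": {"indifference", "fear", "resignation", "mixed",
                "outrage", "approval"},
}

def parse_llm_response(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and reject out-of-codebook values."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec.get('id')}: invalid {dim}={rec.get(dim)!r}"
                )
    return records

raw = ('[{"id":"ytc_Ugw0vpkybImPkj7x0-V4AaABAg","responsibility":"none",'
       '"reasoning":"unclear","policy":"regulate","emotion":"indifference"}]')
records = parse_llm_response(raw)
print(records[0]["policy"])  # regulate
```

Validating against a closed vocabulary catches the common failure mode where the model invents a label outside the codebook, which would otherwise silently corrupt the coded dataset.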