Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I've known Roman (a bit) since 2010 (Lugano, Switzerland, AGI conference), back when he believed that perhaps he/we could come up with a solution/strategy for AI safety through relatively traditional research in a way that could apply to superintelligence. He discusses a lot of topics here and is very much worth listening to.
Source: YouTube · AI Governance · 2025-10-04T23:4… · ♥ 48
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgwVv86zBUBPj2N__HN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyaYV-52c9FtlXHESV4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugy8KYA4e0d3F4Xh8014AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxnT7yjiBYdtLmXF5V4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwuNH22lQYCIWDiDIx4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzfN-Aj2OOgzSjKZLd4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzJTG49LFD9C0vtA9h4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugzc0PTN6ChVsp6FFnB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw24im24lmOEMxSMsB4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxhhZXti42riS0PZLx4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"}
]
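As a minimal sketch of how a raw response like the one above maps back to a single comment's coding: the response is a JSON array of per-comment objects keyed by `id`, so it can be indexed into a lookup table. The variable names (`raw_response`, `codings`) are illustrative, not part of the tool; the array here is truncated to one entry for brevity.

```python
import json

# Raw LLM response: a JSON array of per-comment codings
# (truncated to a single entry from the example above).
raw_response = '''[
  {"id": "ytc_UgxnT7yjiBYdtLmXF5V4AaABAg",
   "responsibility": "none", "reasoning": "unclear",
   "policy": "none", "emotion": "approval"}
]'''

# Build an id -> coding lookup so any coded comment can be inspected.
codings = {c["id"]: c for c in json.loads(raw_response)}

coding = codings["ytc_UgxnT7yjiBYdtLmXF5V4AaABAg"]
print(coding["emotion"])  # approval, matching the Coding Result table
```

This is the lookup that the Coding Result table above reflects: each dimension (responsibility, reasoning, policy, emotion) is one field of the matching object in the raw array.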