Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I have never been so frustrated with Wolfram in my life. Does he really want to talk about the philosophy of what death is, aposed to answering Yudkowsky's question of.. "is Ai going to kill us?" What is wrong with him? He went off on such a tangent just to talk about his fundamental theory instead of answering the question. Very very annoying and yudkowsky knew us viewers would be getting annoyed and tried to steer the convo back constantly. Props to him!
youtube · AI Governance · 2025-06-20T13:1…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           none
Emotion          outrage
Coded at         2026-04-27T06:24:53.388235
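
For readers reproducing the audit, a minimal sketch of the coding schema as a typed record, assuming the four dimensions shown above. The value sets are inferred only from codes visible on this page, not from the full codebook, and all names are illustrative rather than the pipeline's actual code.

from typing import Literal, TypedDict

class CodedComment(TypedDict):
    # One code per dimension per comment. The Literal value sets below
    # are inferred from codes visible on this page; the actual codebook
    # may define additional categories.
    id: str
    responsibility: Literal["none", "user", "developer", "distributed", "ai_itself"]
    reasoning: Literal["unclear", "mixed", "deontological", "virtue", "consequentialist"]
    policy: Literal["none", "unclear", "regulate"]
    emotion: Literal["outrage", "indifference", "resignation", "fear", "mixed"]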
Raw LLM Response
[ {"id":"ytc_UgwOADiuaXBnCzNn12t4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugzyqx28DsxiPaLFTyh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}, {"id":"ytc_UgwDXqplPpxNozU2sF14AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgzvDuGZnPv_v4DYeK14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugwpv41S56DBe6sSL3R4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgznR5t1fDRorLMcrZF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}, {"id":"ytc_UgxunEQ6aq6xLWUDo3p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugxx6qeyYN7ufVjcLJd4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugxp2OlZXn271yQiZv14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugz8BL9ElYhezuf-c4l4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"} ]