Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
> Yud: AI is going to kill the humanuty if we don't stop it. (Paraphrasing)
> Wolfram: But there is one question, what's the right thing to have happen? (Exact quote)

Really? That is the thing that we should think about facing possible extinction?
youtube · AI Governance · 2024-11-12T23:4… · ♥ 10
Coding Result
| Dimension      | Value                      |
|----------------|----------------------------|
| Responsibility | developer                  |
| Reasoning      | consequentialist           |
| Policy         | regulate                   |
| Emotion        | outrage                    |
| Coded at       | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id": "ytc_UgzZjk-dccsmE4r1CbF4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzN-dfsvH0_3hTj87Z4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyBrsbkOUjTW8bZHgt4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugy2Qq17d-rNew-K7hJ4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzmT97vvYHntMl9Y5d4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxJEWyj3-VMGPf5UR14AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_Ugw75_NQVGIiLn5jb9B4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwIcUBDH-ncdjtaAw54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgxkP3JTDL_ibbhpF8V4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "frustration"},
  {"id": "ytc_Ugz4NAtgI9yTWXsehN94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]
```
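Since the raw response is a JSON array keyed by comment id, it can be parsed back into per-comment codings with the standard library. A minimal sketch (the two entries and their values are copied verbatim from the response above; the variable names are illustrative):

```python
import json

# Two entries copied verbatim from the raw LLM response above.
raw = """[
  {"id": "ytc_UgzmT97vvYHntMl9Y5d4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzZjk-dccsmE4r1CbF4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]"""

# Index the codings by comment id so any comment's result can be looked up.
codings = {row["id"]: row for row in json.loads(raw)}

# The coding for the comment shown above.
c = codings["ytc_UgzmT97vvYHntMl9Y5d4AaABAg"]
print(c["responsibility"], c["emotion"])  # → developer outrage
```

This keyed lookup is what lets the per-dimension table above be rendered for the exact comment being inspected.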