Raw LLM Responses

For any coded comment, inspect the exact model output that produced its coding.

Comment
Setting goals for AI is fraught with difficulties leading (eg) to runaway paperclip optimisers that crush out humans (for want of a well-known example). Well then, it sounds like we should ask AI to help us set goals for AI. OOPS!!! Now we hand over AI goal-setting to itself, taking humans out of the loop. At that point, AI is following its own goals, and once it has factories, mines, etc, it has no need of us.
Source: youtube · AI Governance · 2025-06-18T00:4…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[ {"id":"ytc_UgwOADiuaXBnCzNn12t4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugzyqx28DsxiPaLFTyh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}, {"id":"ytc_UgwDXqplPpxNozU2sF14AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgzvDuGZnPv_v4DYeK14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugwpv41S56DBe6sSL3R4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgznR5t1fDRorLMcrZF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}, {"id":"ytc_UgxunEQ6aq6xLWUDo3p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugxx6qeyYN7ufVjcLJd4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugxp2OlZXn271yQiZv14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugz8BL9ElYhezuf-c4l4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"} ]