Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
She is extraordinarily ignorant and dismissive about the existential risk from AI. At the rate that artificial intelligence is improving, it is highly possible that we are going to create a superintelligent general AI in the near future. And we have absolutely no clue what such an intelligent entity would do. We would have created something vastly more intelligent - and therefore more powerful - than us. Would it care about it? Would it have its own ends? Would we be in the way? Would it value us, or would it see us as a threat. Such an intelligent entity would *absolutely* have the power to annihilate or just disempower us; the only question is whether it would choose to do so. And we have no way of knowing the answer to that question until it is perhaps too late.
youtube AI Responsibility 2023-12-11T19:5…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           unclear
Emotion          fear
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgxlrxbtViBQci8GkaZ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzzsC3ZZblbt8Hk1vl4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwEu7IGlGfkbGNwgmN4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxP98tigJdDgk6n-0F4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_Ugywwxxa90S517IbSH54AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgxLIJxW-pYOEt7X1fF4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxkHN_0MfBq6a-cMBZ4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgxUeLD2H1qwIbhcaIl4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugzkl8jzmxokx3KjwOl4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzoldkqMJX_-cXTTvB4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "industry_self", "emotion": "mixed"}
]
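The raw response is a JSON array with one record per coded comment, keyed by the comment `id` and the four coding dimensions. A minimal sketch of how such a batch might be parsed and sanity-checked before use — the value sets below are only the labels observed in this batch, not a confirmed schema, and `parse_batch` is a hypothetical helper, not part of the pipeline shown here:

```python
import json

# Labels observed in this batch for each dimension. Assumption: the real
# coding scheme may define more labels than appear here.
OBSERVED_VALUES = {
    "responsibility": {"ai_itself", "company", "user", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and reject out-of-vocabulary codes."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in OBSERVED_VALUES.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}"
                )
    return records

# The first record of the batch above, as a usage example.
raw = (
    '[{"id":"ytc_UgxlrxbtViBQci8GkaZ4AaABAg","responsibility":"ai_itself",'
    '"reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]'
)
coded = parse_batch(raw)
print(coded[0]["emotion"])  # fear
```

Validating against a closed label set catches the common failure mode of LLM coders drifting off the codebook (e.g. inventing a new `emotion` label), which would otherwise silently corrupt downstream tallies.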