Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
A lot of people don't believe it's possible, that AI will always be dumb or limited in some ways, or at least, not much smarter than a human. Some people don't think that there will be any goal seeking behaviour and therefore nothing to worry about (ie they think the instrumental convergence thesis is wrong). Some people say that if they just live in computers we can turn off the power grid and Internet and we'll be fine. Sounds fun let's do it. Some people say that the resources of Earth are too limited and nobody will be able to train a superintelligence (ie no algorithmic or architectural advances are possible). Some people say that it's going to take a very long time so it isn't worth worrying about at this point. Some people say that there will be lots of superintelligences and they'll have to work together, so naturally we'll be fine. Or that because they're trained by us, they will be guaranteed to love us, even if a bad actor undoes the safety training. None of them hold up to much scrutiny imo.
YouTube · AI Responsibility · 2026-04-21T22:4… · ♥ 11
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytr_Ugwa0_-Wu5d_T7l6jNp4AaABAg.AVsINn_lJbPAVsoL7IOJTa","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_Ugwa0_-Wu5d_T7l6jNp4AaABAg.AVsINn_lJbPAVteqqqm1Qe","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgwKEzwBsfbejQpOcIR4AaABAg.AVsFsSLFb2RAVsHzRKmQE9","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytr_Ugzh96MmAg0_D4CLu9R4AaABAg.AVsFDW9Mwn1AVurQACr4_W","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgxLqe5VQFpN5Jf_nC94AaABAg.AVs6TZbRBd9AVs75JMgMw-","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytr_UgxLqe5VQFpN5Jf_nC94AaABAg.AVs6TZbRBd9AVsAsjghPz8","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgxZoEmu2VIdwc5hy154AaABAg.AVs2IY67CkrAVs4yLe6Si1","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytr_UgzNCpnbSXT-WjY_iwZ4AaABAg.AVs08yH6QJmAVs5HyA5RGV","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytr_UgzvEQxocgEgb-5XcmB4AaABAg.AVs-y98AtlZAVs2W3bio4w","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytr_UgzEh0FMTWyZPE7edvN4AaABAg.9al0vaPC7l39jyS80ViJv_","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
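The raw response is a JSON array with one object per coded comment. A minimal sketch of how such a batch might be parsed and validated before use — note that the allowed code sets below are inferred only from the values visible on this page, not from the project's actual codebook, and `parse_coded_batch` is a hypothetical helper name:

```python
import json

# Allowed codes per dimension, inferred from the values shown above.
# Assumption: the real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"none", "government", "company"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"unclear", "regulate", "liability"},
    "emotion": {"indifference", "fear", "outrage", "approval"},
}

def parse_coded_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response (a JSON array of coded comments)
    and reject any record with an unknown code."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim} code {rec.get(dim)!r}")
    return records

raw = ('[{"id":"ytr_example","responsibility":"none",'
       '"reasoning":"unclear","policy":"unclear","emotion":"indifference"}]')
batch = parse_coded_batch(raw)
print(batch[0]["emotion"])  # indifference
```

Failing fast on unknown codes catches the common failure mode where the model invents a category outside the codebook, which would otherwise silently skew downstream tallies.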