Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think part of the vagueness in describing AI's benefits comes down to a vagueness in the very definition of "AI." Are we talking about targeted-domain neural networks trained on cancer or pharma data, or are we talking about Large Language Models? The AI field is much larger than LLMs, but because of all the focus on LLMs, there's this completely unsubstantiated claim that LLMs are capable of analysing non-language data. They are _language models_. They've been trained on all of the available documents related to cancer research, and can therefore answer "authoritatively" from that training (ignoring mistakes, contrary opinions, misunderstandings, misjudgments, hallucinations). They can recognise patterns in language, but LLMs are neither capable of analysing scientific data they've not been trained to process, nor the kinds of creative thought necessary to go beyond pattern recognition into the realm of invention. And let's not even get into the fantasies of AGI... So yes, we get Sam Altman's sales pitch about curing cancer, but OpenAI's products are entirely focused on language, so his statements are not grounded in truth, nor is much else he says apparently. But it sells product.
youtube AI Responsibility 2026-04-22T03:2… ♥ 7
Coding Result
Responsibility: none
Reasoning: unclear
Policy: unclear
Emotion: indifference
Coded at: 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugw3kZF7XTBhPMiN-IZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxMhRvpwOgHmZxRWmh4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyuUj81voCoOwyo1kx4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "liability", "emotion": "approval"},
  {"id": "ytc_Ugx0GYNlUUcSrSqTYCd4AaABAg", "responsibility": "company", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxGsATK1RZyznOhT4d4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzB_UC3BjaDx74Bhap4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgxORgxyIRalS7qyQsJ4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugxi6XyrjRaMBCE5PiZ4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgyRA8-31QyLa_fd8Sx4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyT7fe5VfHN7LSH8GV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]
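Since the raw response is a JSON array of per-comment codings, the dimension values for any one comment can be recovered by parsing the array and indexing it by `id`. A minimal sketch, using two entries from the record above (the field names `id`, `responsibility`, `reasoning`, `policy`, and `emotion` come directly from the raw response; the parsing approach itself is just one reasonable way to do the lookup):

```python
import json

# Raw LLM batch response: a JSON array with one coding object per comment,
# using the dimension fields shown in the record above.
raw = """[
  {"id": "ytc_Ugw3kZF7XTBhPMiN-IZ4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxORgxyIRalS7qyQsJ4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]"""

# Index the codings by comment id for direct lookup.
codings = {row["id"]: row for row in json.loads(raw)}

# Retrieve the coding for the comment shown above.
coding = codings["ytc_Ugw3kZF7XTBhPMiN-IZ4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # none indifference
```

Indexing by `id` also makes it easy to spot comments the model skipped or coded twice: compare the set of keys against the set of comment ids that were sent in the batch.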