Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This all reminds me of (a) the "initiative" that the Nobel Prize winner Eric Kandel got behind, "the decade of the brain" (or was it century?). Kandel was very excited about brain research, I believe in part because he was a brain researcher himself, and worked for a company, Biogen, that was attempting to bring an Alzheimer's drug to market. No drugs for Alzheimer's have met the excited forecasts of their creators, and while there has been some small progress, the brain is still largely a mystery. (fMRI is also making strides.) And (b) the writer Siri Hustvedt, who in a very powerful chapter on the mind-body problem from her book "Women Looking at Men Looking at Women" talks at length about the consequences of the cognitivist revolution, which is wider than AI alone but certainly has much to say about it. Where Karen Hao talks about human communication being different from neural nets and LLMs, Hustvedt goes further. She talks about the original scene of the development of human communication, the mother-infant dyad: how rich it is, how suffused with feeling, how it is like a symphony, not merely "information transfer" or "logics" but mutual dynamic interaction between two sentient, involved, feeling individuals. So much of the hype about AI, artificial general intelligence, and training expert machines seems exciting to me, but fantastic in not such a great way. Whereas the development of machine intelligence has been shown to work for things like flipping burgers and making fast food, and whereas I have been told that much legal work actually HAS been taken over by machines, all of this still hasn't brought about the Shangri-La that the boomers have themselves claimed it would. None of the people promoting machines or computerized labor seem too busy - as KH correctly points out - actually trying to make the world a better, safer, more enjoyable place to live.
Sam Altman could AT LEAST talk about what all the people who are being replaced by computers could turn to instead to make their livings - as in a UBI, or some kind of utopian life that accords with his utopian forecasts. More to the point, when I use AI programs such as Perplexity, Claude or Gemini, I feel this creepy sense that whatever I get from them I could get by reading thoughtful, brilliant humans who give rich, sentient, detailed and sophisticated answers to the same questions. None of the AI I have looked at matches Isaiah Berlin on the intricacies of intellectual history, state repression or the history of socialism, to give one very limited example of how awesome reading really can be when in the company of an experienced and challenging writer. One does not feel the pulse of original living, breathing people recounting humorous or terrifying or deeply felt personal experience. I, for one, do not have the sense that people are "superior" to machines or vice versa. Much current thinking - even going back to Turing - suggests that we are strange kinds of machines. There are books like Minds and Machines; people have made careers out of analogizing from machines to people. Why we would WANT to be run by machines seems more a matter of laziness, apathy, fantasy and, yes, greed at times. If we cannot stop people from doing what they want - I have been assured that we cannot - human intelligence and common sense suggest that despite all the hype, all the pie-in-the-sky AGI stuff, humans have a lot to gain, still, by being the strange, beautiful (when they are up to it), fascinating, complex, richly textured animals that God, nature, or they themselves made.
youtube Cross-Cultural 2026-01-02T02:1… ♥ 1
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       unclear
Policy          unclear
Emotion         mixed
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgwIKOpqNjFhKXAvj8h4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgwGxTccvCGqDiDlDdZ4AaABAg", "responsibility": "company", "reasoning": "mixed", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgyjzHFZ1GrL_tsabTZ4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzKb5cHmLPv0HB7z554AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyqSnbEBAV_7uRxvNR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyC-U91haXFQRtKp2x4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwlNkMi9uCRkZIYXHp4AaABAg", "responsibility": "company", "reasoning": "mixed", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgzjjiXnfFJtNy5h73V4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugw_mLQB3O4LPZoR5Ax4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxAcrvs9cULRvezXRB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"}
]
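The raw response is a JSON array of coding records, one per comment id, each carrying the four coded dimensions shown in the table above. A minimal sketch of how such a response could be parsed into a per-comment lookup is below; the function and variable names are illustrative assumptions, not part of any tool shown here, and the record uses a shortened dummy id.

```python
import json

# Illustrative raw response in the same shape as the array above
# (one-record sample; real ids look like "ytc_Ugw...").
raw_response = '''[
  {"id": "ytc_example", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"}
]'''

# The four coded dimensions inspected for each comment.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codes(raw):
    """Map each comment id to its coded dimensions.

    Missing dimensions default to "unclear"; extra keys are ignored.
    """
    records = json.loads(raw)
    return {
        record["id"]: {dim: record.get(dim, "unclear") for dim in DIMENSIONS}
        for record in records
    }

codes = parse_codes(raw_response)
print(codes["ytc_example"]["emotion"])  # → mixed
```

Defaulting absent dimensions to "unclear" mirrors the fallback value the coding output itself uses, so a partially malformed model response still yields a complete row per comment.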