Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Absolutely love this guest. Karen Hao is so articulate, compassionate, and so graciously and deftly side-stepped the phone call altogether. I understand why Steven did that. They're trying to look at as many viewpoints as possible to look at the data, and a part of Karen's work, as a journalist, is to interview and get information from these companies, but I like that she stuck to the real issue here. The, what I can only describe as, cannibalism of the industry, where it's using up the people in industries to then output the same kind of work. I also see what she's saying here, in that in the future, who is going to "teach" these models to do anything else or evolve their skills, if what most of AI is doing is *not* deterministic logic, but just statistical computations. She's describing the difference between the human capability of honing our skills and finding deeper and deeper depths to it, versus, AI where we have to continue teaching it certain skills, if I'm understanding this correctly. And I believe she's saying, we buy into these myths thinking AI has those same capabilities as humans to continue learning, when in actuality it's just basing it's capabilities on the information it's being fed and trained on. It's not advancing or deepening those capabilities in an autonomous way like a human brain/intelligence does. Like if you teach an automated car to drive in San Francisco, it's not, then going to use that information to drive in another city. The skill doesn't transfer because it's not a skill it's learning that teaches it to innately understand to drive on roads and not hit pedestrians. It quite literally has to be fed that information each and every time it wants to learn to drive in a new location. Love how she made the information easily understandable and accessible.
Source: youtube · 2026-04-12T00:1…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        virtue
Policy           none
Emotion          approval
Coded at         2026-04-27T06:24:59.937377
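
For orientation, here is a minimal sketch of the coding scheme as typed Python values. The categories listed are only those observed in the raw response below; the actual codebook may define additional values, and the class names are illustrative rather than part of the tool.

from enum import Enum

# Coding dimensions as observed in this page's raw LLM response.
# Assumption: the full codebook may define values not listed here.

class Responsibility(Enum):
    NONE = "none"
    AI_ITSELF = "ai_itself"
    COMPANY = "company"
    GOVERNMENT = "government"

class Reasoning(Enum):
    CONSEQUENTIALIST = "consequentialist"
    DEONTOLOGICAL = "deontological"
    VIRTUE = "virtue"

class Policy(Enum):
    NONE = "none"
    BAN = "ban"
    LIABILITY = "liability"
    REGULATE = "regulate"

class Emotion(Enum):
    APPROVAL = "approval"
    FEAR = "fear"
    OUTRAGE = "outrage"
    RESIGNATION = "resignation"
    INDIFFERENCE = "indifference"
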
Raw LLM Response
[ {"id":"ytc_Ugz3bJPoPfK54NIU3Ed4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugw7iJPvEIY5We-5bFR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgyejTrOK7-c8Kpz90F4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_Ugx04wlecbGB0eanSWJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgxZAeTzuHKmOAd42gl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgxbPnVQ7PeZqMD3NNV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"}, {"id":"ytc_Ugw_q6GpJlp_pBmLAtZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"indifference"}, {"id":"ytc_UgxWz3z6SYu7WjPNuCh4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_Ugx9G7zK4re-uMlT-f94AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"resignation"}, {"id":"ytc_UgzQ6FizkQWps3tUo9Z4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"} ]