Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
In my casual survey of AI, it is at least 30%-40% in error--often laughingly in error. For generalizations, it seems to be more accurate, but for niche and technical topics, it is most often in error. The sciences often have strict visual definitions (and a range of subtle variables within those definitions). These are not easy for AI to scrape and understand. On the Community Post page of my YouTube page, I feature AI-generated visual examples of common rocks and structures related to geology. Most of the time these AI images are laughingly incorrect. The errors show the limitations of AI, and how those error rates are probably equivalent to many other subjects. So, I think AI can produce some helpful insights on AI-friendly topics (like the humanities). However, those nuggets of helpful insights remain littered with bad information (even in the humanities topics). Certainly, students are going to use AI to simplify their workloads. AI slop is now affecting internet information, and affecting the knowledge base of online reference materials. It isn't the end of the world. The establishment of the World Wide Web and personal web pages had hints of this dilemma in the 2000s. Some people thought that libraries would become obsolete--but libraries have survived. Thx for the good talk.
youtube 2025-10-01T17:1…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_Ugy4l-w8qL6Kdc4tUs54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw4LR9Z4q092vEK-PZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"amusement"},
  {"id":"ytc_UgxF8SCOsEyWG5hbh6R4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzP9n5VAb_wo2qzX_Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwklJXchGkUpVBg57N4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzbKM--yM8nJ-_z6lF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxuK_B2oejc5tMNccF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyW_LK_Ltek9PHJ9Rh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugw360CAxXgNCGQQUzx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"amusement"},
  {"id":"ytc_UgxpTdGZ1qE5pDaT9NN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
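The raw response above is a JSON array of per-comment code records. A minimal sketch of how such a batch could be parsed and validated is shown below; the `SCHEMA` sets are inferred only from the values visible in this record (the full codebook may include more categories), and the function name `parse_codes` is hypothetical:

```python
import json

# Allowed values per coding dimension -- inferred from the records above,
# not a definitive codebook.
SCHEMA = {
    "responsibility": {"user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"none", "regulate", "ban", "liability"},
    "emotion": {"indifference", "amusement", "outrage", "fear"},
}


def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed records."""
    valid = []
    for rec in json.loads(raw):
        # Comment ids in this export all begin with the "ytc_" prefix.
        if not str(rec.get("id", "")).startswith("ytc_"):
            continue
        # Drop records with an unknown value in any dimension.
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(rec)
    return valid


raw = (
    '[{"id":"ytc_UgxuK_B2oejc5tMNccF4AaABAg",'
    '"responsibility":"ai_itself","reasoning":"consequentialist",'
    '"policy":"none","emotion":"indifference"}]'
)
codes = parse_codes(raw)
# codes[0]["responsibility"] == "ai_itself"
```

Filtering on a fixed value set like this catches the common failure mode where the model invents a label outside the schema, so downstream tallies only ever see known codes.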