Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Nice build and remix, but some transparency into your source content would be helpful and demonstrative. As we move rapidly into a remix and reimagine creation phase, it behooves creators to demonstrate hyper-transparency in their content sources and even workflows. Trust will become a key currency in this new era, and quality content creators can lead the way.

Congrats, your material scored well in our AI content curation process that uses multiple frontier models and some specialty models to score on accuracy, timeliness and relevance. Cheers, & thanks for your quality content!

Scoring details: Combined Summary of Scoring Sentiment

Overall Assessment: 8.5/10

Concept Accuracy: 8.8/10
Agreement: All models highly commend the accurate definition of parameters as weights and biases, the use of intuitive analogies (recipes, neuron connections), and the clear explanation of training as iterative adjustment using the linear equation (y = wx + b).

Timeliness: 7.8/10
Agreement: While foundational concepts remain relevant, all models highlighted the content's outdated aspects. This primarily concerns the lack of modern architectures (transformers, attention mechanisms, MoE) and the use of parameter counts (500M-70B) that fall short of current LLM scales (e.g., GPT-4's estimated 1T+).

Relevance to Learning About LLMs: 9.2/10
Agreement: This is consistently rated as a strong suit. The content is praised for clearly addressing the fundamental "what are parameters?" question, building intuition, and serving as an ideal primer for beginners before they delve into more advanced topics.
youtube AI Bias 2026-01-02T16:4…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         approval
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgxJiGI2kdW7iw5kNXd4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyD8HWKRa69Czk04YV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgznVO0dgFjTgLJ7Omx4AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzvWz3e2JT8i4wmYCR4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzMGSZvVZLq3oBkS9B4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxFX6iYFFhD7TKJgR14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugw9d76qZSwtngZxk0N4AaABAg", "responsibility": "unclear", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzZBKFpnBGLHHRuAm94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxghdaZJR9WRma1DxN4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugw5zn312X5yo-Cwkm54AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"}
]
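The raw LLM response above is a JSON array of coding records, one per comment, each carrying the four dimensions shown in the result table (responsibility, reasoning, policy, emotion). A minimal Python sketch of how such a response could be parsed and summarized — the two-record `raw` string is an abbreviated, assumed stand-in for the full array, and the dimension names are taken from the table above:

```python
import json
from collections import Counter

# Abbreviated stand-in for the raw LLM response above (assumed structure:
# a JSON array of records with id plus the four coding dimensions).
raw = """
[
  {"id": "ytc_UgxJiGI2kdW7iw5kNXd4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw5zn312X5yo-Cwkm54AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"}
]
"""

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

records = json.loads(raw)

# Basic validation: every record must carry an id and all four dimensions.
for r in records:
    assert "id" in r and all(dim in r for dim in DIMENSIONS)

# Tally one dimension across all coded comments.
emotion_counts = Counter(r["emotion"] for r in records)
print(dict(emotion_counts))

# Index by comment id to look up a single coding result.
by_id = {r["id"]: r for r in records}
print(by_id["ytc_Ugw5zn312X5yo-Cwkm54AaABAg"]["responsibility"])
```

Indexing by `id` mirrors the per-comment lookup this section performs: given a comment id, retrieve the exact labels the model assigned.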