Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I saw your physics videos and they are solid, but this topic seems to be outside your scope. 1) You talk about AGI as something ideal; that's not possible. Any piece of information could be invalid, even if we say it is valid right now; physics is a great example here. Every thought is a thought, not a fact, even for an AI. The only things we can be sure of are the things we created and control 100%. So asking the model is the same as asking a human, just with a higher probability of correctness. 2) Hallucinations. They aren't hallucinations; they are the model trying to fill a gap in its knowledge. People do this every day, and in far worse scenarios. Mathematically, there can't be continuous information without gaps, so extrapolation is not a problem, it's a feature. 3) Specialization. There couldn't be an ideal solution that works for everything, the same way there is no ideal human. This video is a great example: you know your stuff, but you don't know ML or neurobiology. That's normal, and that's why nobody uses just one model; people use a set of specialized models that each answer only in their own area, the same way our brain does.
youtube 2025-12-18T10:1…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        mixed
Policy           none
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
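For reference, a minimal sketch of this coding schema as plain Python data. The four dimension names come from the table above; the value sets are an assumption inferred from the values visible elsewhere on this page, not a definitive codebook.

```python
# Allowed values per coding dimension. ASSUMPTION: these sets are inferred
# from the values that appear on this page; the real codebook may be larger.
CODING_SCHEMA = {
    "responsibility": {"none", "user", "developer", "company", "government", "ai_itself"},
    "reasoning": {"mixed", "consequentialist", "deontological", "virtue"},
    "policy": {"none", "liability", "regulate"},
    "emotion": {"approval", "indifference", "fear", "outrage", "resignation"},
}

def is_valid(record: dict) -> bool:
    """Check that every coded dimension holds an allowed value."""
    return all(record.get(dim) in values for dim, values in CODING_SCHEMA.items())
```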
Raw LLM Response
[ {"id":"ytc_UgwS4Nc1PFIc19tkdz94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_UgyR7j2w5LCEdzbKrB54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgxrvlJmf5fo7HEWhuZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugz2zN8iSRi_DazNAWh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxXkMPOK-IwusqFQch4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_UgzJr2bDabtRKk-dVft4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgwEjv_tGxWX7hQfq4x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgxHy9uIobWKVNX0zht4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxUvx3W75gq9isk5cd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}, {"id":"ytc_UgzOILuVXV9-qRVy8al4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"} ]