Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Kinda immediately goes off the rails for me. Whether artificial "intelligence" much less "super intelligence" is actually a thing is just glazed over, and the discussion is all way too much philosophy mumbo jumbo and colloquial jargon to make any sense to me. The constant use of words that describe human processes like "thinking" make it all borderline sci fi IMO, it leads to a sort of "vibes" framing. I'm fairly knowledgeable about electronics and somewhat knowledgeable about coding, I want to know in at least semi-technical jargon how these "intelligence" concepts make any sense. Human beings have a fairly limited understanding of how the brain works, the physical, chemical and electrical impulses implicated in the processes in the brain - but we're not modeling that with the AI in the first place. When the guest suggests jokingly that maybe AGI should be re-named "superdeduper intelligence"...he's still using that core problematic word - intelligence. That's not what's happening. Stop framing this technology as if that is what is happening. I realize "super calculator" sounds lame af, but I still fail to see how anything beyond logical processes drive this technology. Just because the product can be in the form of human communication - language, imagery, sound - don't get it twisted. I'm still really stuck on how much science fiction and colloquial rhetoric keeps wanting to frame this technology as something it doesn't seem capable of ever being - just wave your hands about infinite progress and the future and never mind the lack of technical explanations
youtube AI Moral Status 2025-10-31T13:3… ♥ 3
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgzP70ix2PKtiHVcbWN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzGAl1hr4cKdxQ5ez54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugye_52wf7-yvnbmb814AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw_HCArOhYX7qErAN54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxeVF3QOmvsKgDvEel4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzKrVVcaRxCW5jxgoB4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugw80i-COGpIL6xpnEd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxMbtsrZZJWmzZn7654AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugyf_JcKywvlI9mqp_h4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwrnJdWRTx_ANa3BnR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
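A minimal sketch of how such a raw response could be parsed and checked against the codebook before the per-comment codes are stored. The allowed label sets below are assumptions inferred only from the values visible in this output, not a documented schema; `parse_codes` is a hypothetical helper, not part of the tool shown here.

```python
import json

# Excerpt of a raw LLM response in the same shape as the array above
# (one JSON object per coded comment).
raw = '''[
  {"id": "ytc_Ugye_52wf7-yvnbmb814AaABAg",
   "responsibility": "none", "reasoning": "unclear",
   "policy": "none", "emotion": "indifference"}
]'''

# Assumed codebook: label sets inferred from the values seen in this output.
ALLOWED = {
    "responsibility": {"none", "company", "developer", "user", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"none", "unclear", "regulate", "liability"},
    "emotion": {"indifference", "fear", "outrage", "mixed"},
}

def parse_codes(raw_text):
    """Parse the raw JSON array and keep only rows whose labels match the codebook."""
    rows = json.loads(raw_text)
    return [
        row for row in rows
        if all(row.get(dim) in labels for dim, labels in ALLOWED.items())
    ]

codes = parse_codes(raw)
print(codes[0]["emotion"])  # -> indifference
```

Validating against a fixed label set catches the common failure mode where the model invents an off-codebook label; rejected rows can then be flagged for re-coding rather than silently stored.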