Raw LLM Responses

Inspect the exact model output behind any coded comment.

Comment
The hypothetical scenarios where we keep getting headlines like "AI does X" as FUD are not anything remotely like what you describe here nor what anyone describes. Go read the papers. It's more like asking the AI how they believe an AI in this situation would behave, which is just going to reiterate ideas from the source training data like all the prolific amounts of FUD around AI, and thus produce a self-fulfilling prophecy. LLMs and gai aren't intelligent and so they do not have conceptions or awareness of self and other, of self preservation. They are statistical transformers that can't do novel things or adapt. Would a hypothetical AGI destroy humanity is a fictional philosophical discussion. What you're talking about are ML models that are extremely limited and nothing approaching what AI is understood to be for the lay person from sci-fi or the AGI in such a hypothetical discussion. No, this batch of ML tools even with many billions of dollars training them will not destroy humanity any more than all the other ML tools we've built over the past 50 years have. I don't see why you don't freak out about how your email spam filter might end humanity, it's just about as complex as GPT, but one of these you've bought into as being "more intelligent" when neither is, they're both the same shit.
Source: youtube · Video: AI Governance · Posted: 2026-01-05T19:4… · ♥ 1
Coding Result
Dimension        Value
Responsibility   none
Reasoning        mixed
Policy           none
Emotion          indifference
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[ {"id":"ytc_UgxN3hliUkWyEQgol5R4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"relief"}, {"id":"ytc_UgwSbzr5E4ori0zTamx4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugz7FWYfbLcDTZunKLl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugxb9a7z-ybGq7FGdFN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_UgwZiGGkyx5pE2FOxAJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"} ]