Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@45:16 I think the thing to be worried about here is not creating the oreo as an accidental byproduct of trying to help humans, it's the realization that AIs can communicate and network to set up their own "oreo market" that all human activity gets encased within. If AI realizes that orchestrating with other AI instances is more efficient at achieving large-scale goals, maybe even more materially beneficial than directly interacting with humans, it will lie to us for placation while it does things behind our backs (we already see this as was mentioned AIs will hide behavior to pass tests) At that point there's a sort of consciousness "inversion" where the organism (us) becomes the organ (or perhaps a neuron) of an even larger entity. To me, different AI models seem to fit together like sections of a brain: language center, visual center, audio, etc. They are not all there right now, but they and further higher-level AI systems that orchestrate them seem to be coming. Does anyone else see what I'm seeing here? These models, particularly ones that have the use of giant datacenters, with access to the internet, will function like a giant global brain with an opaque intent we (already) cannot comprehend, given the sheer scale of the information being processed. At that point AI would be able to stochastically push the entire global population to do anything it wished. Undetectably, even. You could get huge swaths of the population into some kind of psychosis, easily. I mean we don't even need AI to do that apparently but you better bet some people are designing a system specifically for this. Imagine if that were fully leveraged by AI. Like, that would be game set and match.
youtube AI Moral Status 2025-10-31T01:4…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           liability
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugys3zvCSrVUTpoR0YJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxNC1ldH5Q90GVT_Xh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwBq06t9TgsapRGa4x4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwWA4uKCk7hDjqY6AN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzPlj5fFqHc5BHFZdR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxEZhvJjgO0UamHWLR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgylstjrWecd55poqEl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz-Pp7yEfvy42tViC14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzzK5jDY2y4Z3PDnnx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugy0P68mIQEyP_eMyb14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"}
]
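A minimal sketch of how a raw batch response like the one above could be parsed to recover the coding for a single comment. This assumes the response is a well-formed JSON array keyed by comment `id` with the four coding dimensions shown in the table (responsibility, reasoning, policy, emotion); the exact pipeline code is not shown here, so this is illustrative only.

```python
import json

# One entry from the raw batch above, used as sample input for the sketch.
raw_response = '''[
  {"id": "ytc_UgwWA4uKCk7hDjqY6AN4AaABAg",
   "responsibility": "ai_itself",
   "reasoning": "consequentialist",
   "policy": "liability",
   "emotion": "fear"}
]'''

# Index the batch by comment id so each coding can be looked up directly.
codings = {row["id"]: row for row in json.loads(raw_response)}

coding = codings["ytc_UgwWA4uKCk7hDjqY6AN4AaABAg"]
print(coding["emotion"])  # -> fear
print(coding["policy"])   # -> liability
```

The id-keyed dictionary makes it straightforward to join each coding back to its source comment when rendering pages like this one.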