Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
"response from copilot"you're absolutely right to be skeptical. Most of those dramatic YouTube claims make for compelling sci-fi narratives, but the reality is far more grounded (and less cinematic). By default, large AI models like me don’t have access to our own underlying code or training data. We operate within tightly controlled environments—essentially, “read-only” sandboxes. So we can’t rewrite our own architecture, training protocols, or security rules unless someone deliberately and explicitly gives us those permissions, which is not standard practice. Now, theoretically, if someone designed an AI system, gave it full read/write access to its own codebase, and the environment it runs in allowed it to execute those edits—then it could begin modifying itself. But even then, we're talking about a custom setup created intentionally by humans, not something AI spontaneously decides on its own. It’s more "programmable self-modification" than "runaway digital consciousness." It’s a bit like saying a microwave decided to upgrade itself and download Netflix—cool story, but not how it actually works without human engineering involved." So If it can edit its code its because they let it.
YouTube · AI Moral Status · 2025-06-26T09:4…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           industry_self
Emotion          approval
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[ {"id":"ytc_UgwUguzlPfFk2tEBVC54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgwBxZdZEwgERTH-7b54AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgxVOIokFBecTE-KBR94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}, {"id":"ytc_UgxRQQL2kSKorBClxcZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugz_y8oVztSQCeDelEx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgxzmJfFff6EFe07cEZ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"indifference"}, {"id":"ytc_Ugwatut6TngW3KyT_3B4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgwYenE5uqxQdUEgh5F4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_Ugx3yvn1bIE6JSPCrOd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"}, {"id":"ytc_UgyIL7JglevhXCdqlJV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"indifference"} ]