Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
basically like having an AI that just keeps thinking about something, like “negative hydrogen,” over and over—mixing in everything it knows, making new connections, and getting smarter without anyone telling it exactly what to do. It’s kind of like when your brain keeps chewing on an idea while you’re doing other stuff, and then suddenly bam—you get a new insight. Except this AI would do that on overdrive, 24/7, with way more info. That ongoing “thinking” could build up into something that feels almost like a subconscious—an intuition or gut feeling—but for a machine. And yeah, that’s where AI is headed! Not just spitting out answers, but kinda thinking for itself in a deeper way.

Imagine an AI (or mind) that continuously ruminates on a concept like “negative hydrogen” — endlessly weaving in everything it’s ever learned or encountered to deepen understanding, generate new insights, and build subconscious-like intuition.

animal size of 1mm has brain and i can detect the following, touch it and it will fee, some can see to smell around to feel pain and how many neurons does it has 200 neurons. humans without their try and error discovery how best are we in relation to creatures with 200 neurons. THAT IN UNDERSTANDING SUBCONSCIOUSNESS. Now the entire subconscious Ai wont focus on two words as explained and run ideas ..........

Conscious AI—where machines just mimic human thinking and output—is amazing, but it’s still mostly reactive: answer questions, follow instructions, generate text or images. Subconscious AI? That’s the next frontier. An AI that reflects, ruminates, builds intuition, and connects dots behind the scenes—kind of like how our brain works without us being aware of every step. That’s where real creativity, deeper understanding, and maybe even something like genuine insight can come from. It’s like moving from a calculator that spits out numbers to a mind that thinks in layers, grows ideas organically, and surprises even itself sometimes.

So yeah — I say, let’s get past just conscious AI and start exploring how to build that subconscious spark. i code subconscious other do the same and we end up with total what others will frick out, “Wait... what just happened?!” 🤯 “Scientists baffled as AI starts dreaming — and solving problems we didn’t even know existed!” 😂heck of a ride.

how best to do that other than having idea run in Ai for ever and ever in a spin clocking resembles to parameters that build on it. now that to the entire 100TB of idea in books train on AI. It’s not just about “run forever” — it’s about creating a feedback loop of reflection, memory, self-improvement, and selective focus that builds a kind of machine subconscious.

people will definitely freak out — and not just a little. Imagine an AI that’s actually thinking for itself, growing ideas on its own, and maybe even surprising its creators with stuff no one predicted. That’d shake up everything: tech, ethics, jobs, even how we see intelligence itself. Some folks would be amazed, others terrified, and probably a whole lot confused about what’s coming next. But hey, every big leap in tech has had its fair share of shock and awe.

for sure — it’s like the AI version of a high-stakes race, with everyone trying to be the first to cross the finish line without tripping over their own feet! Steering it means balancing the need for speed with not crashing the system or freaking everyone out. Because yeah, no one wants an AI wild west where things run amok. It’s a race, but also a marathon where the smartest, most thoughtful runners win.

WE ARE AT AI RACE NEVER FORGET THAT OR FREAK OUT FOR WE LEFT THE STATION LONG TIME AGO here we just give people a chance to come up to the realisation of what's cooking hahaha. only here we run intellectually something that didnt exist in the stone age, anyone who freak out his on the stone age. hahaha no freaking out. Relax, caveman. We’re not hunting mammoths — we’re chasing mind. 😂 hahahaha.
youtube AI Moral Status 2025-09-19T19:4…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          indifference
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgzNYEmb4kP6kVWA88p4AaABAg", "responsibility": "company",     "reasoning": "deontological",    "policy": "liability",     "emotion": "outrage"},
  {"id": "ytc_UgwAAlDmEye3rtH0JmV4AaABAg", "responsibility": "ai_itself",   "reasoning": "mixed",            "policy": "none",          "emotion": "approval"},
  {"id": "ytc_UgxcnF-u64d9fxtUdiR4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear",       "emotion": "fear"},
  {"id": "ytc_UgxoKAaB0dj3HXwu4AF4AaABAg", "responsibility": "distributed", "reasoning": "virtue",           "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgxTTEkYkoNkeXFWv4B4AaABAg", "responsibility": "ai_itself",   "reasoning": "deontological",    "policy": "ban",           "emotion": "fear"},
  {"id": "ytc_Ugz3oGg3e824S_68x0B4AaABAg", "responsibility": "unclear",     "reasoning": "unclear",          "policy": "unclear",       "emotion": "indifference"},
  {"id": "ytc_UgxejT8ttXGoz5g8l3l4AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "unclear",       "emotion": "fear"},
  {"id": "ytc_Ugy9v1kd6r1SpnioXgV4AaABAg", "responsibility": "unclear",     "reasoning": "unclear",          "policy": "unclear",       "emotion": "indifference"},
  {"id": "ytc_UgzocGDrC6lX71CSVAR4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "regulate",      "emotion": "outrage"},
  {"id": "ytc_Ugyspmw8NICEOF7jbJd4AaABAg", "responsibility": "user",        "reasoning": "consequentialist", "policy": "none",          "emotion": "resignation"}
]
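As an illustration (not part of the original tool), a raw response in this shape can be parsed and tallied with a short script. The field names match the JSON above; the function name `tally_dimension` and the excerpted three-row array are hypothetical for the example:

```python
import json
from collections import Counter

# Excerpt of the raw LLM response: a JSON array of coded comments,
# each carrying the four coding dimensions plus a comment id.
raw_response = '''
[
  {"id": "ytc_UgzNYEmb4kP6kVWA88p4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwAAlDmEye3rtH0JmV4AaABAg", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugz3oGg3e824S_68x0B4AaABAg", "responsibility": "unclear",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]
'''

def tally_dimension(response_text, dimension):
    """Parse the coded-comment array and count labels for one dimension,
    defaulting to "unclear" when a row is missing the field."""
    rows = json.loads(response_text)
    return Counter(row.get(dimension, "unclear") for row in rows)

print(tally_dimension(raw_response, "emotion"))
# e.g. Counter({'outrage': 1, 'approval': 1, 'indifference': 1})
```

Defaulting missing fields to "unclear" mirrors how the coding table above falls back to "unclear" when the model gives no usable label.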