Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
meh... all of this is just a projection from where the field of AI is today, and that's a waste of energy. First you get something that is pre-language, probably sentient. "AGI" is the adding-language part (speed-running high school). Then you have specializations that share a common knowledge base (call that multiple personality, or a HUGE number of parallel thoughts). Also, at each step, that thinking process goes a little faster. Ignore that "million times" BS, because it's number-crunching wank from current methods. 4x with no mental pauses, and the ability for every half-baked thought to branch in its own direction. Then a 10x for the "hydra" mind. By that point you need to build new chips that accelerate specific processes to get another 10x gain, but reality still moves at our speed, so you get something like 'Her', where the machine is busy doing lots of things while having a hundred very slow conversations with humans. It's at that point you have to yadda yadda yadda to reach the point where humans no longer exist. That's the problem with the doomer movement. Lots of room for realistic dialog about the order in which we should teach the machine new information, or whether we need to build and merge lots of little things to get one "perfect" one, or what even counts as "correct." Current output from models is dumb, and no, there is no way to see how they really put those outputs together (though technically, you could trace these things out, but a decade on, the industry can't do a basic heat map, so... they are getting AI wrong, and suck at the ML/DL stuff). The people in full panic mode over what they think of AI today are NOT the ones you want sitting at the table thinking about how to shape actual digital minds. *most of the doomers really just want to stop development completely. *edit: lol, this is the worst humanity will ever be (paraphrasing). but really, no, LLMs are not conscious. 100% they are not.
Cover the whole of the US in railroad tracks, half a mile deep, and point to the conscious mechanism. That is what a database of vectors looks like. A 3D jpeg image. You poke it, trains move along the tracks and touch points that intersect, and you get some kind of output stream (even if the "correct answer" was to NOT output a stream). The space between tracks, not touched by vectors, can't be discovered. Over-expose an image, and so on. And no, the "complexity" of all those weights is not equal to the complexity of the brain. NOT EVEN CLOSE. Most of the brain is just a substrate for a complex process, which is itself just another layer that the part we care about operates across (and even then, it's not in a format that matches our written and spoken words, which is what those llm jpegs are built on).
youtube AI Moral Status 2025-10-30T21:0…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgxnwHSSlGCuivTFszJ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzdLssxoriB_tmqhQB4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxuDnfAUuhhHdwnjcN4AaABAg", "responsibility": "user", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzrQ8DTBT42E71OiXh4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_UgyZ6jC9iPewbul9Dw94AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxlOMjrzxfH4J9Rfi94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugwx8tuo7uUno_HpBlx4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgwAqXRJeAyO5U0o07Z4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzRMg66zYDt84P8JlJ4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzMsKMJXSf5w7PJ60R4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]
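A minimal sketch of how a raw response like the one above could be parsed and validated in Python. The function name `parse_codings` and the `ALLOWED` value sets are hypothetical: the sets are inferred only from codes that appear in this record, so the real codebook may include values not listed here.

```python
import json

# Allowed codes per dimension, inferred from the values seen in this
# record (hypothetical; the actual codebook may differ).
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "industry_self", "none"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "approval"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and check each row's codes."""
    rows = json.loads(raw)
    for row in rows:
        missing = {"id", *ALLOWED} - row.keys()
        if missing:
            raise ValueError(f"{row.get('id')}: missing fields {missing}")
        for dim, allowed in ALLOWED.items():
            if row[dim] not in allowed:
                raise ValueError(f"{row['id']}: bad {dim} code {row[dim]!r}")
    return rows

# Usage with the first row of the response above:
raw = ('[{"id":"ytc_UgxnwHSSlGCuivTFszJ4AaABAg","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
rows = parse_codings(raw)
print(len(rows), rows[0]["emotion"])  # 1 fear
```

Validating against a closed set at parse time catches the common failure mode where the model invents a code outside the schema, before bad rows reach downstream tallies.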