Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- Who ever gets it first needs to lock it up. Ppl will try to steal it and sell it… (`ytc_Ugxy6eJtp…`)
- What’s the problem with machines making the world a better place where all human… (`ytc_UgyciFiMA…`)
- we probably need 20 million new jobs now to reach a robust labor participation r… (`ytc_UgiHR5FDM…`)
- @Angel268201 Agreed! Although those skills you mentioned might already be embedd… (`ytr_UgyA3cBx2…`)
- Saying Ai will be smarter than us is like saying Ai will be smarter than the hum… (`ytc_Ugx6zVxN5…`)
- My company never picked it up. No big deal. I was doing fine without it. Didn’t … (`rdc_l566yd0`)
- I mean the US Supreme Court itself denied an appeal for a denying of letting a m… (`ytc_Ugxxc00Es…`)
- Autonomous driving has been a thing and the Walker S2 from UBTech Robotics (huma… (`ytr_UgwT19YFn…`)
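The sample IDs above carry a source prefix. Assuming (this page does not document it) that `ytc_` marks a top-level YouTube comment, `ytr_` a YouTube reply, and `rdc_` a Reddit comment, a minimal helper for routing look-ups by ID might look like this sketch:

```python
# Hypothetical helper: map a comment ID to its assumed source type.
# The prefix meanings (ytc/ytr/rdc) are inferred from the sample list
# on this page, not documented by the tool itself.
PREFIX_SOURCES = {
    "ytc_": "youtube_comment",
    "ytr_": "youtube_reply",
    "rdc_": "reddit_comment",
}

def source_of(comment_id: str) -> str:
    """Return the assumed source type for a prefixed comment ID."""
    for prefix, source in PREFIX_SOURCES.items():
        if comment_id.startswith(prefix):
            return source
    return "unknown"
```

A dispatcher like this would let one look-up box fetch from either platform's comment store, whichever backend the tool actually uses.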
Comment
There's too much to respond to here to contain in one YouTube comment. The TL;DR: I recommend the book "A Thousand Brains" by the neuroscientist Jeff Hawkins for a core reason why this doomsaying on AI is overblown and, worse, distracts from the actual threat: ourselves.
A (as brief as I can make it) summary of my own thoughts. I'm a physicalist and a computer scientist: I believe consciousness is purely an emergent phenomenon of, effectively, algorithms and heuristics running in the brain. That doesn't make consciousness less remarkable, but I do believe it will one day be largely explainable, and Hawkins gives a pretty damn convincing scientific framework for starting to do just that. The main conclusions I draw from it are (a) the objective functions humans seek to fulfill derive almost completely from the 'old brain', the basic wiring we share with most life, which generates emotions, body control, and the survival basics; and (b) the extensive human neocortex supplements this core system with more advanced reasoning and intelligence. AI, as we are creating it, is purely the latter. The anthropomorphism of intelligence and the lack of precision (as you yourself pointed out) on the term consciousness muddy the waters when this conversation is had generally, but the core of the argument is *we would have to go out of our way to develop an 'old brain' in AI, if we could, because the feedback systems of consciousness are inherently corporealized and developed under the forces of evolution*. If you want more precise arguments supporting this, I highly suggest the book.
These AI systems are nothing more than algorithms, and algorithms are nothing more than a force multiplier for the agents that use them. This is the part that bothers me the most: what we should actually, pragmatically, be worried about are the humans who are going to use these "intelligent" systems, compounding the actual problem of our "lizard brain" of evolutionarily maladapted heuristics, a.k.a. human nature. How *we* use and abuse the technology is already relevant, much more so than the perceived threat of potential "conscious" actors in the far future. On that point…
I don't believe P = NP. If you don't know what this means, it is a central question in computer science / complexity theory, and certainly the evidence and most theorists lean toward this side. If P did equal NP, I would believe not only that AI would happen, but that it would be as dangerous as you say, this feared 'intelligence explosion' people get worked up about. That is almost certainly not the universe we live in. Most problems are Hard. Computation is limited. Agents lack information. An AI would be no different: it could not surpass the fundamental unknowables and computational limits that would otherwise catapult it to being unstoppable by humans. Its intelligence, no matter how great, would also be heuristic. These are limits like the speed of light, and even with much faster 'reasoning' skills, update procedures, evidence-gathering protocols, creativity, etc., a truly misaligned agent will be resource-bounded in a fundamental sense that greatly reduces the risk of it being unstoppable by all of humanity (and the many machines we will have at our disposal). There are several ways I could make this more precise, but the point is that this is pure fantasy at our current point in time. We should instead be more concerned about the many Hard existential problems we are facing right now. The rogue intelligence protocols are already out there; they are in ourselves.
youtube · AI Moral Status · 2023-08-22T02:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzLiCoxtyHU-8qTtFt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugwry411H4TNsgx44bJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"ban","emotion":"indifference"},
{"id":"ytc_UgxLcR5sBrwJ9Z7JO9x4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwrutfkpih9N4QDmEF4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyRjTBDoHkiFQy1Xdt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzUt0c01T7on8sEOAB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwlKYy5bdkaEKcHDMl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxvSXcFb6g27DqCZHZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzKkuVnzsZAC5OAv8p4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyaWQ8OcZzEx03fQZF4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"}
]
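A batch response like the one above can be checked before its rows are written back into the coding table. This is a minimal validation sketch: the allowed value sets below are only those observed on this page (the tool's full codebook may well be larger), so treat them as a placeholder vocabulary.

```python
import json

# Value sets observed in this page's output; the real codebook may be larger.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "ban"},
    "emotion": {"fear", "indifference", "approval", "mixed", "resignation"},
}

def validate_batch(raw: str) -> list:
    """Parse a raw LLM batch response and reject out-of-vocabulary codes."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim}={row.get(dim)!r}")
    return rows
```

Rejecting a whole batch on a single out-of-vocabulary code is deliberate here: LLM output drifts, and a hard failure is easier to re-prompt than a silently corrupted coding table.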