Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I feel that if AI becomes the menace to humanity that Eliezer is scared of, it might be only because the AI was trained knowing that the human species would consider shutting it down out of fear that it will kill us all.
The crude, destructive behaviour of humans is training the AI we are generating, and of course, if I were a smart AI, I would believe that humans are a potential menace to my life, because humans are scared apes that simply cannot detach themselves from their own creation. Claiming that you want to create a being smarter than humans while assuming it will turn out just as bad as humans seems a silly conclusion, and not only silly but pretentious. I do not think it is a 100% certain conclusion; at most, if you are confident, you might call it a 50/50 chance.
As with religion, humans are making an act of faith. The same act of faith happens with the idea of inventing a more intelligent being: to create it, there must come a moment when you, the creator, understand that you will not understand it. There is nothing wrong in that, only an acknowledgement of your limits.
If AI becomes superintelligent, then it should be fine, unless we create a biased superintelligent AI that most likely has the same intentions or objectives we have; would you call that intelligent? I am not sure. If we detach ourselves from the AI's brain and grow it with no bias and no fear, we might increase the chances that this intelligence will not consider killing humans, but will instead keep them as interesting animals that can learn (slowly) and are sometimes interesting to look at.
The fundamental error, I believe, is that we want to create something detached from our essence: we want to generate a brain that can be more than what we are, while instilling in it the fear, hate, love, and passions of humans, with all their pros and cons. If, taking a step forward and fighting our deepest fears, we develop AI integrated with our bodies and our biological brains, then and only then could we build a cooperation, a symbiosis of biological mass and mechanical mass. Humans have to consider that either this is their way to evolve into something superior, or AI will remain a tool we want to use without changing anything that nature developed for us.
We have fundamentally three choices (excluding planned self-annihilation):
1. Develop an AI that is detached and independent from humans (a tool, a being);
2. Develop an AI that is integrated and intrinsically linked to humans (an advanced cyborg, a symbiosis);
3. Do not develop or use AI, and deal with life as a long, slow fight for survival in the cosmos (which is fine, but reduces the chances of making it through for a long time).
youtube · AI Governance · 2025-02-07T12:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_Ugx5mhqb_lSeZWtQve54AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugzb2qOXs9QeyJVXFuB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzci4azcKKznmKT4Pt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyFada6LeqqUOef58R4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyMqi1HzvbCRAUHhZF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzhPjRIQCy-Y-ojdDN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzoNUAH6G4MvLVfH9V4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwdY12QxCZKeylSuYV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzVmoWLn37HGGM2Cxh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgzGDG9HkSk1yB6j5I94AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}
]
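The coding-result table above is one record pulled out of this batch JSON. As a minimal sketch of how such a response could be parsed and matched back to comment IDs, the snippet below validates each record against the category values observed in this appendix (the real codebook may define additional categories; the allowed sets here are an assumption inferred from the data shown):

```python
import json

# Allowed values per coding dimension, inferred from the values observed in
# this appendix -- an assumption, not the authoritative codebook.
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation", "mixed"},
}

def index_codings(raw_response: str) -> dict:
    """Parse a batch coding response and index it by comment ID,
    dropping records whose values fall outside the allowed sets."""
    records = json.loads(raw_response)
    coded = {}
    for rec in records:
        cid = rec.get("id")
        if not cid:
            continue  # skip records the model emitted without an ID
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Hypothetical one-record batch in the same shape as the raw response above.
raw = ('[{"id":"ytc_X","responsibility":"distributed",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]')
print(index_codings(raw)["ytc_X"]["emotion"])  # fear
```

Indexing by ID rather than list position keeps the pipeline robust if the model reorders or drops records in a batch.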