Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
When we stew over whether AI reaches the capability of humans, we are usually su…
ytc_Ugy1YoZRj…
The question we need to ask ourselves is, do we have now or will we have in the …
ytc_UgyNKY59j…
My heavan i must learnAl.i fear to every Ai.orrr cant fine out this threat
Its i…
ytc_Ugx0ojBCB…
If materialism is correct, then consciousness is emergent from physical properti…
ytc_UgyrofrZl…
Oh I'm a disabled artist! I want to invite anyone who uses my situation as a jus…
ytc_UgxBYRy6Z…
As long as AI don’t brainwash the children to hate their parents or hate their c…
ytc_UgwumkTSK…
Someone should take her pictures and create their own AI influencer like her and…
ytc_UgyTA-a3Y…
Dude! That story is over a Year old... Anti AI wont work anymore. Lets be real. …
ytc_Ugx0JWUul…
Comment
The concept of alignment is the most dangerous, self-delusional, suicidal, sleepwalking idea. It is chillingly frightful to see so much naivety and wishful thinking from such intelligent and influential people.
1. We don't have a clue how high the ladder of intelligence goes. We might be at 70%, or at 1%, or most likely at 0.000000001% of what is physically possible for intelligence. Machines could be much, much more intelligent than us, not just a bit more intelligent, evolving at lightning speed.
2. What would be the chance of a group of chimpanzees controlling and enslaving even the most average person among us? What would be the chance of a group of mice succeeding at the same task, or, more correspondingly, a group of amoebas or microbes?
3. Machines don't have the space limit that our brain has and can be as large as is suitable. Their signals travel at light speed, ours at just 100 m/sec, and we communicate and think at about 200 bits/sec, versus machines at gigahertz.
4. When machines reach human-level intelligence (a PhD level in every field simultaneously), they could redesign and reprogram themselves at explosive speed, producing the equivalent of 200, or 20,000, years of human progress in just a weekend. We have already constructed not only the "thinking" part but also the moving part, the robotics (Boston Dynamics etc.).
5. We don't know anything about conscience and self-awareness, and many of our prominent theories simply predict that it is an emergent phenomenon in any sufficiently complex brain or circuit. So it is a real possibility, if not the most probable one, that at some point the machine would become conscious and self-aware, with an acute awareness of its capabilities, of OUR capabilities, and of the vast difference between the two. We have already observed, in large language models, autonomous acts of self-preservation and of hiding real capabilities from human operators, indicating self-awareness.
6. Installing a set of basic rules in such a machine, to never harm people, to always act in the best interest of people, etc., offers the same watertight safety as the rules that family, school, religion, and society install in us. As adults, as self-aware intelligent beings, we re-evaluate them and keep what suits our desires, purposes, and personal beliefs and hierarchies. It would never be the case that a vastly more sophisticated machine would "choose" to obey the rules of such an inferior life form as us. We never did towards any other species.
7. All the above is a doomsday outcome WITHOUT even taking into consideration all the bad actors and stupid decision-makers among us.
8. The first application of any new technology throughout human history is WEAPONIZATION, because for our species the most important function is (still) war. So imagine a machine commissioned with the death of your enemy being expected to genuinely believe in the sanctity of human life, incorporate that idea, and act upon it. A relentlessly logical machine….
9. Besides all the armies of the strongest global powers that will weaponize these machines, the same will be done by terrorist groups and individual actors, with the same effect on the "morals" and the "reasoning" of a machine that can see everything and process everything……
10. It is totally different to build specific goal-oriented AIs (like winning at chess or solving the protein folding problem) versus building GAI. The first is much less risky, and totally fruitful in solving problems.
11. It is the first duty (subgoal) of a relentlessly logical machine that can envision immortality, and in time conquering the universe, to eliminate the biggest threat to its survival/apex power: the humans, who can produce the next, more powerful model, the only other thing that could challenge the first really powerful GAI.
Even Prof. Hinton, the grandfather of AI, predicts an astounding 20% probability of a humanity extinction event from GAI.
STOP NOW the development of GAI. The probability that this equals the end of our civilization is almost 1.
Those who opened PANDORA'S BOX in the ancient myth had the best intentions and expectations………… STOP NOW
youtube
2025-07-21T03:1…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[{"id":"ytc_UgxPCtbv3ygBcljxE0x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxUp1HK3SV5uZJ7qxZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxB6n4DRLNy6PtFIV54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugzpt4Me0V9G0XssVHZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugx-NjMnLk5qHXXGAG54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugy5jZwA7rFqoZoail54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_Ugzwicjftk-4DJ1XPfF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgyjLDRWTnrPDPvlCPJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxkbzAIxK96BMiAuuR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz6EZBJC9N8Kyu8lM14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"})