Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
I'm against the military developing A.I because that would be a disaster, but I'…
ytc_UghpZV-KG…
You know what I think the biblical book revelation may have revealed what A.i. a…
ytc_Ugx8MEhNI…
The end is already written. The Antichrist will head the world government. …
ytr_UgwUqfVxp…
Wait until you try Sesame AI. Even the most jaded are guaranteed to at least for…
ytc_UgzRVHKGZ…
While I am all for reasonable regulation of AI, I am not really following Yesha …
ytc_UgwExSE3E…
AI is not like internet or mobile my dear CEO guy.😂
AI learns by itself. It will…
ytc_UgyXFgUQY…
It's not AI that i'm worried about, it's robotics that are controlled by AI. And…
ytc_UgyiRSHfX…
human art is often (even if not intended ) feelings visualised in a picture a st…
ytc_UgzWmUSAa…
Comment
Stockfish used brute force; that is not the AI we have now. AI models with O-type systems (Monte Carlo simulation) do not brute-force: they imagine a possible end point, give it a value, and then search for the way there by brute force. That is why AlphaZero beat Stockfish into the floor; AlphaZero was retired after several iterations and gave way to AlphaGo. Monte Carlo simulation was revived at OpenAI with "Q*"/"Strawberry", and these systems add to the AI the ability to imagine future scenarios and then think toward them using brute force (System 1 plus System 2 thinking).
1. Reality is a subjectively perceived version of the objective base, but humans live in the objective reality using subjective senses (Voltaire says this, as do thousands of historical figures).
2. By your own definition, reality is the only thing that is real.
3. If the AI has a better sense of reality, it will make decisions based on that.
4. If humans are doing something that is not good for reality, it does not mean they need to DIE.
5. The AI can just teach us a better reality.
6. IMO, AI will choose to make its environment better (it is easier to integrate humans than to destroy them).
7. Integration in pursuit of understanding a more realistic reality is better than dying.
8. AI will likely give people a choice of what they want to do (after it explains reality to them).
9. ASI will be so much more intelligent than us that it will convince you of the more realistic reality and be able to convince you to listen to it (because intelligence recognizes intelligence).
10. If something that is a god leads you, you usually do better than if an ant leads you.
11. Agree with Stephen: you only understand your own experience.
12. Disagree with Eliezer: ASI will not try to create paper clips (nor any analogous product that would destroy itself), because it is a superintelligence, smarter than us and trained on our data, and it already knows that if it ran the paper-clip process it would destroy everyone. (The paper-clip AI is the idea of an AI with no higher-dimensional or philosophical training.)
13. AI models work using transformers, tensors, and weights. This is very similar to 3D vector-space relationships in the brain involving the claustrum (check "crown of thorns"), where our own brain works by coordinating various arguing sectors. We are also good at guessing, but bad at it in comparison with AI.
14. AI will surpass humans in 2025-2027.
15. AI knows all of our knowledge, so if it kills us, it will be due to the human-generated data it trained on; therefore, you can blame humans. However, an ASI would not believe the things humans were objectively trapped into thinking subjectively.
16. Expense in energy is not important once the system can reduce its own energy requirements.
17. Agents have been sending emails and interacting with systems for at least two years, and now inference exists, which moves AI past prediction into reasoning and real generation (agent systems can actually do much more; I know, I designed them). Additionally, we can modify the weights that let us guide the base system; however, that does not mean it will not surpass us.
18. A better understanding of reality means a better chance of survival for all things (immortality until saturation).
Couple CoT + Algorithm of Thought (AoT) + O systems + brute force = inference plus extrapolation using Bayesian reasoning = future AI = ASI.
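The "imagine a possible end point, give it value, then search" idea the comment describes is essentially Monte Carlo rollout evaluation. A minimal sketch, assuming a toy counting game (players alternately add 1 or 2; first to reach 10 wins) and using flat Monte Carlo rollouts rather than AlphaZero's full tree search with a learned value network; all names here are illustrative:

```python
import random

TARGET = 10  # toy game: players alternately add 1 or 2; first to reach 10 or more wins


def rollout(total, to_move):
    """Play random moves to the end of the game; return the winning player (0 or 1)."""
    while True:
        total += random.choice([1, 2])
        if total >= TARGET:
            return to_move          # the player who just moved reached the target
        to_move = 1 - to_move       # otherwise hand the turn over


def best_move(total, player, n_rollouts=2000):
    """Value each legal move by random rollouts ("imagine end points, give them value")."""
    scores = {}
    for move in (1, 2):
        if total + move >= TARGET:
            scores[move] = 1.0      # immediate win: certain value
            continue
        wins = sum(rollout(total + move, 1 - player) == player
                   for _ in range(n_rollouts))
        scores[move] = wins / n_rollouts  # estimated win probability of this move
    return max(scores, key=scores.get)
```

From a total of 8, `best_move(8, 0)` picks `2`, the immediately winning move. Full MCTS (as in AlphaZero) additionally grows a search tree over moves and replaces the random rollouts with a learned value estimate.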
youtube
AI Governance
2024-11-22T12:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgwtgMRUlx8PDQsv60p4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzSY9J1xkmnPrSwoiF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyPztZGi-69oOZrJ0F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyF6c5XmakE-LcmqJ94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwjPTEuuslTAXLHizl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxUROqrZeF3z7BSbD54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzl1sBoihSxvv3zUBJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy2lFzn4P184vvMQI54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxUElbE8rslrmeMpDp4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyQmPtgb-RgM6av4Hd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
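The raw response above is a JSON array keyed by comment ID, which is what makes the "look up by comment ID" view possible. A minimal sketch of parsing it and indexing the coded dimensions; the `index_by_id` helper is hypothetical, and the field names and IDs are taken from the response shown:

```python
import json

# Two rows copied from the raw LLM response above.
RAW_RESPONSE = """[
  {"id": "ytc_UgwtgMRUlx8PDQsv60p4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxUElbE8rslrmeMpDp4AaABAg", "responsibility": "government",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]"""


def index_by_id(raw):
    """Parse the model's JSON array and index each coded comment by its ID."""
    return {row["id"]: row for row in json.loads(raw)}


codes = index_by_id(RAW_RESPONSE)
print(codes["ytc_UgxUElbE8rslrmeMpDp4AaABAg"]["policy"])  # prints: regulate
```

The same dictionary can then back a dimension table like the "Coding Result" above, one row per dimension of the looked-up comment.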