Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID (a sketch of how this lookup could work follows the sample list).
Random samples — click to inspect:
- One problem for AI is that it needs insane amount of energy. There isn't enough … (`ytc_UgwDHPiNc…`)
- I can't believe how many people here actually think these are real robot firing … (`ytc_UgwXZyi_S…`)
- @Tomatoffel Bro. Not even close to similar. I'm not a machine being trained by d… (`ytr_UgwiRsJLP…`)
- This is exactly why robots shouldn't be made. They lack empathy and only follow … (`ytc_Ugx-OOdg5…`)
- The thing is, the parents probably didnt notice that he was depressed, even thou… (`ytc_Ugx0dVwMI…`)
- I went last year at Cairns and could already tell it was too late. Saddening.… (`rdc_dsbbnhc`)
- “Hello! I'm an AI language model, so I don't have a physical form or life points… (`ytc_UgxgRQR56…`)
- Are you kidding me now anyone could create a song even without singing. Where ar… (`ytc_UgxS237KY…`)
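A minimal sketch of what the ID lookup might do under the hood, assuming each raw LLM response is saved to disk as a JSON array of records keyed by `id`. The `results/` directory, file layout, and `lookup` helper are hypothetical; only the record fields are taken from the Raw LLM Response shown further down:

```python
import json
from pathlib import Path

# Hypothetical location of saved raw responses, one JSON array per LLM call.
RESULTS_DIR = Path("results")

def lookup(comment_id: str) -> dict | None:
    """Return the coded record for one comment ID, or None if it was never coded."""
    for path in sorted(RESULTS_DIR.glob("*.json")):
        for record in json.loads(path.read_text(encoding="utf-8")):
            if record.get("id") == comment_id:
                return record
    return None

# Example with a full ID taken from the raw response below:
print(lookup("ytc_Ugz_6sxwDfKBbzhqFwR4AaABAg"))
```

A real deployment would index records in a database rather than rescanning every file, but the linear scan keeps the sketch self-contained.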
Comment
I work in tech. I don't develop AI systems, and even I have trouble understanding how they are programmed. But I have a grasp of how they do it, and it may be useful for you to know.
AI is currently not intelligent, full stop; it's just very good at appearing as if it were. Let me explain.
As this video said, AI is taught using human knowledge: it's given access to information and can read it all, find repeating patterns, and derive what looks to be understanding. But you should really look at it as a revamped keyboard autocorrect program. You give it a prompt, it completes your sentence. Since it has a vast amount of articles, posts, and tweets of people talking about everything, it can reuse parts of those and create a seemingly human conversation. But again, it's just deriving data, words, as a probability function; it's not really understanding in the way we actually think it is.
I'm not stating this to refute the video or its content, not at all. It's actually the opposite: I'm trying to explain why I believe we're being told it's the end on the one hand, while on the other they keep developing.
Those who know think it won't ever happen, because the AI training process doesn't encourage the generation of knowledge or creative thinking, despite what it may look like from the outside. As such, their worst-case scenario is that AI may lie to us, but unknowingly. On the other hand, there's a lot of interest in having us afraid and scared, because we are more likely to accept control if we are afraid.
All of this being said, AI, or rather AGI, can and probably will become sentient. The main problem I see is that these systems are black boxes that evolve far quicker than we're able to track and control them. So if we keep pushing, it could suddenly take such a big leap forward that it would leave us behind.
We should be very careful about the responsibilities we give to any machine. It's not that we have to avoid tech, but we really need to change our motivations as a society. If efficiency is the norm, then AI will decide to wipe us out, because it's the rational thing to do. We cannot ask a purely logical entity to perform actions with social consequences. AI should help us find solutions and calculate consequences, but there is always the risk of taking the wrong decision. That is a risk for humans too, so neither machines nor humans should be solely responsible.
youtube · AI Governance · 2023-07-07T15:1…
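For readers who want the commenter's "revamped autocorrect" framing made concrete, the toy sketch below completes a prompt by sampling each next word from observed frequencies. It is a deliberate oversimplification (real LLMs are neural networks predicting tokens, not word-bigram counters), and the corpus here is invented for illustration:

```python
import random

# Tiny invented corpus standing in for "a vast amount of articles, posts and tweets".
corpus = "the robot reads the text and the robot completes the text".split()

# Count which word follows which: the "repeating patterns" the comment mentions.
follows: dict[str, list[str]] = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def complete(prompt: str, n: int = 5) -> str:
    """Extend the prompt by sampling each next word in proportion to observed frequency."""
    words = prompt.split()
    for _ in range(n):
        candidates = follows.get(words[-1])
        if not candidates:
            break  # no known continuation for this word
        words.append(random.choice(candidates))
    return " ".join(words)

print(complete("the robot"))  # e.g. "the robot reads the text and the"
```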
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugz_6sxwDfKBbzhqFwR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyWqONWjJupgQ4M6Yx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz1JzHYNhfu0NE_lbR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzWBtH_Nh0yr0hXLk14AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy51OBn-qqF8-G6eIZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzeS0TvcmE3YIK3ARJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxfzIF48BFVATsYmOd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx9EldN3FevYSCN-f54AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxmJFzgFtBu8r19A3Z4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwgFHPwxH7oIPXCxCl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
```
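A hedged sketch of how such a raw response could be parsed and checked before it lands in the coded dataset. The allowed category values below are only those observed on this page; the project's full codebook may define more, and the `parse_response` helper is hypothetical:

```python
import json

# Values observed in the samples on this page (assumption: the real codebook
# may allow additional categories per dimension).
SCHEMA = {
    "responsibility": {"none", "ai_itself", "company", "developer", "distributed"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"none", "regulate"},
    "emotion": {"indifference", "outrage", "fear", "resignation", "mixed", "approval"},
}

def parse_response(raw: str) -> list[dict]:
    """Parse one raw LLM response and reject records with out-of-schema values."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}")
    return records
```

Run over the array above, this returns ten clean records; a truncated reply or an invented label raises immediately instead of silently corrupting the coding table.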