Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `ytr_Ugwn-ld0a…`: @Mike-48 This can be fairly and easily defined as AI. The term has been around…
- `ytr_UgzE9U7he…`: Why? It's literally nothing harmful. It's just a video where we guess which clip…
- `ytc_UgxbH-Ny8…`: There are alternatives to what is being discussed .What needs to be understood i…
- `ytc_UgytK8RTC…`: Dr. Joy Buolamwini has an amazing book on this topic called "Unmasking AI". I'd …
- `ytc_UgyKYk3m7…`: If I have to detect it's AI, I already an wat ging the video. So money earned. A…
- `ytr_Ugw0xbrJb…`: @Eisenbison Yeah keep lying to yourself. You don't have to take the time to draw…
- `ytc_Ugz1QX2n7…`: His main example of the "monster" model was the antisemitic remarks from AI. But…
- `ytc_UgwFBnOeT…`: its so pathetic. Even AI specialist are using fear mongering techniques. At the …
Comment
I have a couple of issues with your statement:
What you're referring to is Moore's law, and technically it doesn't state that computing power doubles, just that the number of transistors on a chip doubles in that ~18-month time span (roughly). That means on the same chip area as before, not that the chip is getting bigger, which results in the transistors becoming smaller. Of course this yields more compute power, but compute does not necessarily scale linearly as transistors get smaller and denser.

Besides that, Moore's law has been stagnating for a few years now, because there are real physical limits on how small a transistor can be (we're not there yet, but it's getting a lot harder). At such a small scale you have to consider many more factors than on, say, a rudimentary chip on a breadboard (just for comparison's sake; the idea is the same for small and really small chips). At the tiny scale you suddenly have to think about the speed of light and how fast your signals can physically travel across the chip, whereas at breadboard scale that's pretty much irrelevant. Anyway, my point is: computer chips are hard enough to manufacture as they are, and making them smaller (and faster, more efficient, and more powerful) has its limits and gets a lot harder at some point. Hence the stagnation of Moore's law in recent years.
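For concreteness, the doubling arithmetic behind that claim can be sketched as a toy calculation (not part of the original comment; the 1.5-year period is the rough ~18-month cadence cited above):

```python
import math

DOUBLING_PERIOD_YEARS = 1.5  # the rough ~18-month cadence cited above

def naive_scaling_factor(years: float) -> float:
    """Transistor-count growth if doubling continued uninterrupted."""
    return 2 ** (years / DOUBLING_PERIOD_YEARS)

# Ten years of uninterrupted doubling gives roughly 100x, not 128x;
# 128x (= 2^7) would take 7 doublings, i.e. about 10.5 years.
print(round(naive_scaling_factor(10)))           # 102
print(math.log2(128) * DOUBLING_PERIOD_YEARS)    # 10.5
```

Even this best-case figure assumes the doubling cadence holds, which is exactly what the stagnation argument above disputes.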
Then there's the point that twice the compute power doesn't necessarily mean the thing will run twice as fast in real applications. If you take a problem and try to solve it with an inefficient algorithm but throw a lot of compute at it, that might work. But you could also use far less compute and solve it with an efficient algorithm, or solve much larger problems in the same time with the same compute.
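A toy illustration of that point, assuming a hypothetical O(n²) algorithm running on doubled hardware versus an O(n log n) algorithm on the original hardware (the problem size and operation counts are illustrative, not measured):

```python
import math

n = 1_000_000  # hypothetical problem size

quadratic_ops = n ** 2          # 1e12 basic operations
nlogn_ops = n * math.log2(n)    # ~2e7 basic operations

# Doubling the hardware halves the cost of the inefficient approach...
quadratic_on_2x_hardware = quadratic_ops / 2

# ...but the efficient algorithm on the old hardware still wins by
# several orders of magnitude.
speedup = quadratic_on_2x_hardware / nlogn_ops
print(f"better algorithm is ~{speedup:,.0f}x faster")
```

The gap only widens as n grows, which is why algorithmic improvements routinely dwarf hardware gains.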
So to recap:
Large language models (which you refer to as "AI" here; that isn't incorrect, but it's nowhere near the whole of it, since AI is much more than LLMs) will almost certainly become more capable in the coming years, because these models can currently solve problems that other algorithms/models couldn't solve before, and they're experiencing a lot of breakthroughs at the moment, which in turn makes research and development money fly their way.

This next part is a bit of a personal opinion. I believe that at some point LLMs won't see these massive breakthroughs anymore, and the focus will shift to some other technology. Maybe they will have replaced a lot of jobs by then, who knows; there will certainly be at least a few. But there are also going to be plenty of problems that LLMs won't be able to solve, because they weren't designed to solve them in the first place, no matter the resources they run on. Try having GPT-4 drive a car autonomously; it won't work, since that's a completely different problem space (or maybe it can be made to work somehow! I highly doubt it, but I'll happily eat my words if someone proves me wrong). And I hope I've made clear why "AI isn't going to be 128x more powerful in 10 years" without losing the point in all this. :D
Feel free to ask follow-up questions if I failed to do that and my point is unclear. Cheers :)
youtube · AI Harm Incident · 2024-05-31T15:5… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytr_Ugweg0Hlk5amWE7IHcR4AaABAg.A46Ft6UuMNkA48L-KoLDZf","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytr_UgxNhQfo1NtiDl-LhGR4AaABAg.A46FrNP7yFYA46MPF7yJ-Y","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgxNhQfo1NtiDl-LhGR4AaABAg.A46FrNP7yFYA46NZrns7vx","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgxNhQfo1NtiDl-LhGR4AaABAg.A46FrNP7yFYA476uIZla3c","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgxNhQfo1NtiDl-LhGR4AaABAg.A46FrNP7yFYA47DGkiubw3","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytr_UgyarB4AxK6J1m1hQvN4AaABAg.A46FkjxaGOUA46VkdD0Ugs","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytr_UgyarB4AxK6J1m1hQvN4AaABAg.A46FkjxaGOUA46oYAre9-k","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytr_UgyarB4AxK6J1m1hQvN4AaABAg.A46FkjxaGOUA4AL8lPjb8k","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytr_UgyrrEagJSdnFd5Jr1t4AaABAg.A46FAH2qi1VA46IJDQdV6h","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytr_UgxVCVjn43ikz6SiH7x4AaABAg.A46F-EOTfXHA46W8KvEi_W","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
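A minimal sketch (not part of the tool itself) of how a raw response like the one above could be parsed and looked up by comment ID, assuming the model reliably returns a plain JSON array:

```python
import json

# One entry copied from the raw response above; a real batch has ten.
RAW_RESPONSE = """[
  {"id": "ytr_Ugweg0Hlk5amWE7IHcR4AaABAg.A46Ft6UuMNkA48L-KoLDZf",
   "responsibility": "none", "reasoning": "unclear",
   "policy": "none", "emotion": "approval"}
]"""

def index_by_comment_id(raw: str) -> dict[str, dict]:
    """Parse the model output and key each coding record by comment ID."""
    return {rec["id"]: rec for rec in json.loads(raw)}

codes = index_by_comment_id(RAW_RESPONSE)
rec = codes["ytr_Ugweg0Hlk5amWE7IHcR4AaABAg.A46Ft6UuMNkA48L-KoLDZf"]
print(rec["emotion"])  # approval
```

In practice the response should be validated (e.g. checking that every requested comment ID is present and every dimension takes an allowed value) before writing records to the coding table.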