Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Can someone explain to me how a piece of computer software could take over the w…
ytc_UgzdmN_S1…
i had a really really interesting conversation with bing and also with chat gpt.…
ytc_Ugzh3AAnn…
Isn't that how we got bombs in a girls school? The AI crawled old data when vali…
rdc_ohxmmal
Very good production and perfect in management in warehouse im amazed because th…
ytc_UgxHBELer…
"A man asked AI for health advice and it"
The AI didn't cook him, the man did it…
ytc_UgzxFNTm3…
@Theresa Joy those aren’t language. Those are just random artifacts that the AI …
ytr_Ugywi9ivs…
You might ride in a driverless car 😂 but *I'm* not gonna! FORGET that sh*t !!…
ytc_Ugyj-NaLu…
You ironically managed to make the ai make better art, the original creativity i…
ytc_Ugz6rmdGk…
Comment
We cannot build a superintelligence, at least not with the current framework that we use for "AI". The applications that we call AI are just using probability to guess what comes next, and that is not intelligence. A very general definition of intelligence is when something can acquire knowledge and apply it. You could argue that these models do neither of these things, but the biggest hurdle is the acquisition of knowledge. For true artificial intelligence to exist, you would need a method for the model to continually acquire and apply knowledge, but the current technology requires that tons of secondhand knowledge be gathered for training in order to even produce an intelligible response. Even in the application step, the folly surrounding this technology is that it can only mimic the data that it was trained on, and cannot synthesize anything that is completely new. No organism on this planet works this way, because again these systems are not intelligent.
And even if by some insane process, feeding the entire collection of human knowledge into one of these machines did produce a superintelligence, the sheer amount of power and resources that would be required to run the thing would be astronomical with our current technology. All of these companies are losing money because there is no way to really monetize the products that they are creating. Even if everybody who used ChatGPT paid for the highest tier subscription, OpenAI would still be losing billions of dollars every year. With the Sora app that they recently released, every single video that's created on that platform costs OpenAI at least $5 to make. If you look at the research, the cost of running one of these machines and training one of these machines would be on the level of the GDP of multiple modern first world countries combined.
The people who are making money in this bubble are companies like Nvidia, who are creating the hardware, and about 5% of the startups which are building their companies around this technology but not actually building their own AI. In fact, there are a lot of companies that are not even using AI, that just claim to use it for the sake of making more money, and they are consistently making more money than real AI startups and real AI companies.
youtube
AI Moral Status
2025-10-30T21:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxnwHSSlGCuivTFszJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzdLssxoriB_tmqhQB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxuDnfAUuhhHdwnjcN4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzrQ8DTBT42E71OiXh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgyZ6jC9iPewbul9Dw94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgxlOMjrzxfH4J9Rfi94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwx8tuo7uUno_HpBlx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgwAqXRJeAyO5U0o07Z4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzRMg66zYDt84P8JlJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzMsKMJXSf5w7PJ60R4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
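When inspecting a raw response like the one above, it can help to parse the JSON array and index valid codings by comment ID. The allowed values below are inferred from the labels visible in this output; they are an assumption, not the tool's authoritative codebook, and `validate_codings` is a hypothetical helper, not part of the tool itself:

```python
import json

# Allowed values per dimension, inferred from the raw output above.
# Assumption: the real codebook may include additional categories.
SCHEMA = {
    "responsibility": {"developer", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "industry_self", "none"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "approval"},
}

def validate_codings(raw: str) -> dict:
    """Parse a raw LLM response and index schema-valid codings by comment ID."""
    rows = json.loads(raw)
    by_id = {}
    for row in rows:
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim}={row.get(dim)!r}")
        by_id[row["id"]] = {dim: row[dim] for dim in SCHEMA}
    return by_id

# Two entries copied from the raw response above.
raw = '''[
  {"id":"ytc_UgxnwHSSlGCuivTFszJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzdLssxoriB_tmqhQB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"}
]'''
coded = validate_codings(raw)
print(coded["ytc_UgxnwHSSlGCuivTFszJ4AaABAg"]["emotion"])  # fear
```

This supports the "Look up by comment ID" workflow: once indexed, any comment's four coded dimensions can be retrieved directly, and malformed model output fails loudly instead of silently entering the dataset.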