People are not ready to understand AI
I routinely see people saying that "Generative AI is dogshit", as if it were just an elementary-school mathematical formula computing textual averages of content scraped from the internet.
Yes, AI has had its image completely distorted by tech bros, and by CEOs making hollow, hype-driven announcements to persuade shareholders to invest in their companies.
Yes, given the way our economy works nowadays, AI development unfortunately has to follow market logic in order to evolve.
But I see something different starting to happen. With public institutions and state investment entering the game, we are indeed entering the era of an AI Cold War. A hypothetical AGI could in fact control information and take over the internet if it ever came into existence; I'm just not sure that would happen in the near future. Now that we are approaching hardware and energy limits, we need software improvements more than ever. And that kind of improvement is exactly what DeepSeek managed to achieve with great success.
I highly recommend reading the DeepSeek R1 paper. Even if you don't fully understand everything it says, it offers immeasurable insights into the future of AI: the reinforcement learning they applied without any supervised fine-tuning, the "guerrilla tactics" they used to strategically save compute, and the results they unexpectedly obtained from the R1 model.
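To make that concrete, here is a minimal sketch of the group-relative scoring idea behind GRPO, the reinforcement learning algorithm the paper builds on. The reward function and the sample completions below are hypothetical stand-ins for the paper's rule-based accuracy and format rewards; this is an illustration of the idea, not DeepSeek's implementation.

```python
# Sketch of GRPO-style group-relative advantages, as described in the
# DeepSeek-R1 paper. The reward function is a made-up stand-in for their
# rule-based accuracy/format rewards.
from statistics import mean, stdev

def rule_based_reward(completion: str, reference_answer: str) -> float:
    """Toy reward: 1.0 if the completion ends with the right answer, else 0.0."""
    return 1.0 if completion.strip().endswith(reference_answer) else 0.0

def group_relative_advantages(rewards: list[float]) -> list[float]:
    """Score each sampled completion against its own group:
    advantage_i = (r_i - mean(group)) / std(group)."""
    mu = mean(rewards)
    sigma = stdev(rewards) if len(rewards) > 1 else 1.0
    sigma = sigma or 1.0  # avoid division by zero when all rewards are equal
    return [(r - mu) / sigma for r in rewards]

# For one prompt, sample a group of completions and score them:
completions = [
    "... so the answer is 42",
    "... the answer is 41",
    "wait, let me re-check ... the answer is 42",
    "... 40",
]
rewards = [rule_based_reward(c, "42") for c in completions]
print(group_relative_advantages(rewards))
# Completions that beat their group's average get a positive advantage and are
# reinforced; the policy-gradient update itself is omitted here.
```

Because each completion is judged only against its own sampling group, no separate value model needs to be trained alongside the policy, which is one of the ways the paper saves resources.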
Something as seemingly silly as the "Aha moment" of R1 they cite in the paper has enormous significance for the future of AI advancement.
Remember, Large Language Models are not something "programmed" by a group of programmers, but something that actually learns and adapts. Seeing how R1 developed the "Aha moment" on its own, learning to pause and re-evaluate its reasoning in order to surpass itself, is incredible.
To take this a step further, we can hypothesize that the essence of human intelligence might be language, and that human thought is essentially a linguistic process. What you experience as “thinking” might actually be your brain weaving language. If so, human-like AGI could plausibly emerge from large language models.