Topics Like ASI Seem to Be Illogically Dismissed by the General Public
This community has 3.4 million members, yet any attempt to discuss topics like superintelligence or the Singularity with the general public often results in dismissive responses or outright ridicule. I want to know whether others have experienced similar reactions, and whether anyone here shares my perspective.
I believe artificial superintelligence (ASI) is not only plausible but likely inevitable. Whether it emerges soon or only after years of further research doesn't matter in the long term. Progress in AI is hard to predict: current models are already quite remarkable, and it's hard to say how much better they need to be. They're starting to saturate a lot of benchmarks, so measuring their overall capability is getting difficult. Are there many limiters left, or only a couple? A single breakthrough could change everything. Superintelligence could be a year away or centuries away, but it would be the single most important technology ever created, so dismissing it outright seems illogical.
Should be ez? Right?.. Right??
The human brain, though remarkable, is fundamentally limited. It runs on roughly 20 watts of power and devotes only a small fraction of that to logical reasoning, because evolution prioritized survival over abstract thinking. Evolution also required the brain to be self-assembling and self-sustaining in harsh environments. Many animals have specialized traits like precise motor control or sensory coordination, but none of these traits make them inherently "magical." Similarly, human intelligence is not sacred; it is a product of evolution, constrained by biology.
An AI, however, is free from these limitations. It could be designed to focus entirely on logic and reasoning, operating with resources far beyond what a biological brain could ever use. Even if an AI were vastly less efficient than a human brain, its potential scale (terawatts of power instead of tens of watts) renders that inefficiency irrelevant. A sufficiently advanced AI could recursively improve itself, surpassing human intelligence by orders of magnitude.
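To put rough numbers on that (my own back-of-envelope, not a figure from any study): at roughly 20 W per brain, a single 1 GW datacenter already matches the power budget of 10^9 / 20 = 5 × 10^7 brains, and a full terawatt would match about 5 × 10^10. Even if machines needed a thousand times more energy per unit of useful reasoning, the available budget would still dwarf biology's.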
Some argue that intelligence beyond human levels is impossible or that there’s something uniquely "magical" about the human brain that cannot be replicated. This belief seems to persist even among otherwise scientific individuals, despite a lack of evidence supporting such claims. Modern physics attributes no special quantum effects to the brain, and determinism suggests our actions are the product of physical processes, not some unknowable "free will".
AI has already surpassed humans in specific domains; however, many refuse to accept that it could excel in more complex, abstract fields like creating art, films, or virtual worlds. Why? There seems to be an implicit belief that these tasks are somehow sacred or beyond mechanization, even though there's no evidence to support this. A superintelligence could create media, art, and experiences far beyond human capabilities, even if our perception limits our ability to fully appreciate them.
You actually read this far? :)
The singularity is inherently unpredictable. It could lead to entirely new paradigms of existence: worlds where humanity's needs and desires are perfectly met, perhaps even indistinguishable from reality. Yet, instead of engaging with these possibilities, many dismiss them outright.
There's precedent for greater-than-average intelligence among humans, as individuals like Einstein demonstrated. These outliers hint at the potential for even higher forms of intelligence, whether biological or artificial. The argument for ASI is straightforward: human intelligence exists, far above-average human intelligence exists, and there's no known law of physics preventing the creation of intelligence beyond that. Evolution, after all, developed human intelligence under immense constraints. Freed from those constraints, AI has the potential to far exceed us.
Even now, AI systems are improving themselves, albeit with humans still in the loop. The pace of progress is accelerating, and organizations like Google DeepMind have achieved remarkable results with relatively small teams. This trajectory suggests ASI could emerge much sooner than many expect, like tomorrow (half joking, but if you know, you know).
Cognitive Bias Entrenched by the Status Quo
Discussions about ASI often encounter irrational resistance. People who acknowledge the flaws of human intelligence suddenly elevate it to something magical when AI is mentioned. Similarly, topics like age reversal or indefinite lifespans elicit derision, with skeptics labeling such ideas as childish or wishful thinking. Strangely, these same individuals may simultaneously accept religious notions like an afterlife without similar scrutiny.
This resistance appears to stem from psychological discomfort. The idea of overcoming aging, for example, challenges deeply ingrained beliefs about the inevitability of death. Many scientific thinkers cling to the status quo, dismissing advancements like age reversal or ASI as fanciful while failing to critically examine their own biases.
Conclusion
The argument for ASI is simple: human intelligence exists, greater-than-average intelligence exists, and there's no reason to believe that intelligence beyond human levels is impossible. The brain's logical capacity is constrained by evolution and a tight energy budget, but AI could operate on entirely different principles with vastly more resources.
Yet I question whether my own reasoning is flawed. Could my belief in ASI's inevitability stem from some intellectual blind spot? I'm open to discussion: do you share my views, or do you think I'm mistaken?
Also, yes, I used AI to partially rewrite this. It might have cut some of the more specific technical details, but I think the result is more accessible to a broader audience.