EvoChip

How can computers keep getting faster?

The self-fulfilling prophecy of Moore’s Law might as well have been tattooed on every forehead in Silicon Valley. We were deceived by cause and effect: Moore’s Law was really just Intel’s product roadmap. That squelched other innovation in computation, because if competing with exponential growth doesn’t scare off investors, competing with a “law” will.

In the last decade, it became irrefutable that transistor density would not keep doubling. Intel’s marketing department is still acting like a cartoon coyote that has run off a cliff and is trying to ignore gravity. Sorry, Intel, but the laws of physics are actual laws.

One of the only successful deviations from Intel’s worldview was the advent of the graphics co-processor – the GPU. Dramatically more efficient at processing the kind of data graphics require, these chips found their way into all of our computers, first for video games, then as the preferred way to compute AI models.

What should have happened is that a multitude of new approaches to computing would have been invented and developed. Instead, with no practical route to market, we lived through a long, cold, silent winter with almost no innovation in computation. That winter is over. Every day I see new algorithms, new chip architectures, new ways of making computers faster – a hundred times faster, a thousand times faster, sometimes a million times faster.

The seeming glut of computation was largely spent on layers of abstraction. These made it easier to design computers and write software for them. You’d hardly notice; it just takes more memory and more computation. For example, the number 2 in Python consumes 28 bytes of memory – 224 bits doing a job that could have been done with two. Performing a calculation like 2+2 in Python typically engages millions of transistors on a CPU, even though it could be done with three dozen. A modern deep learning model experiences this as compounding inefficiencies. Exponential friction.
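You can check the overhead yourself in CPython – the exact byte count can vary by version and platform, but on a typical 64-bit build it looks like this:

```python
import sys

# In CPython, even a tiny integer is a full heap object carrying a
# reference count and a type pointer, not a raw machine word.
overhead_bytes = sys.getsizeof(2)  # typically 28 bytes on 64-bit builds
print(overhead_bytes, "bytes =", overhead_bytes * 8, "bits")

# The value 2 is "10" in binary – two bits of actual information.
print(format(2, "b"))
```

That gap between information content and memory footprint is the kind of abstraction tax the paragraph above describes.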

EvoChip has invented a combination of new mathematical approaches and evolutionary algorithms to rebuild the stack, from transistors all the way up to computational models. Like kids trained to use an abacus instead of writing out math problems, this approach gains efficiency by eliminating many calculation cycles and successive layers of processing. Already delivering over 1000x performance improvements for quantitative AI applications, EvoChip can radically change the balance of power in computation.

There aren’t enough venture firms with the competency to invest in chip advancements, which has made me reluctant to back most of what we see. It can be difficult to get a new chip made. EvoChip is uniquely valuable in both software and hardware: they can support customers in software today, and become part of future chip designs once the industry catches on to what they can do.

If you know people with computationally intensive models who can’t get their hands on a zillion H100s, maybe send them to EvoChip. The team is looking for more high-value use cases to prove themselves.