Amdahl’s Law

Amdahl's law, named after computer architect Gene Amdahl, who presented the argument in 1967, describes how much the latency of a task can be reduced by introducing parallel computing. Its key insight is that the achievable speedup is limited by the portion of the task that cannot be parallelized.

In parallel computing, Amdahl's law is mainly used to predict the theoretical maximum speedup of a program when it is run on multiple processors; the formula is sketched below.

This term is also known as Amdahl’s argument.
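In its standard form, the law says that if a fraction p of a task's execution time can be parallelized across n processors, the overall speedup is at most 1 / ((1 - p) + p / n). The short Python sketch below illustrates that bound; the function name and the example numbers are illustrative assumptions, not part of the original definition.

```python
def amdahl_speedup(parallel_fraction: float, num_processors: int) -> float:
    """Theoretical maximum speedup under Amdahl's law.

    parallel_fraction: share of execution time that can be parallelized (0.0-1.0).
    num_processors:    number of processors applied to the parallel part.
    """
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / num_processors)


# Example: a task that is 95% parallelizable, run on 8 processors.
print(amdahl_speedup(0.95, 8))  # ~5.9x, not 8x, because of the 5% serial part
```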

Parallel Computing and Multicore Systems

The concept of parallel computing is that more than one processor can work on a given task simultaneously, which speeds the task up by some factor. That is where Amdahl's law applies: it puts an upper bound on that factor, as the worked example below shows.
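As a hedged illustration (the 90% figure is an assumption, not a measured workload), consider a task whose execution is 90% parallelizable. No matter how many processors are added, the 10% serial portion caps the speedup at 10x:

```python
# Illustrative only: a task whose execution is 90% parallelizable.
# The 10% serial portion caps the speedup at 1 / (1 - 0.90) = 10x.
for n in (2, 4, 8, 64, 1024):
    speedup = 1.0 / (0.10 + 0.90 / n)
    print(f"{n:>5} processors -> {speedup:.2f}x speedup")
# The printed speedup approaches, but never exceeds, 10x.
```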

Parallel computing is most commonly implemented through multicore processing. In the early 2000s, chip makers began introducing microprocessors with more than one processor core, the 'multicore' design, and adding cores quickly became a standard way to keep improving speed.

Where the processors of the 1990s were single-core designs that competed on clock speed and added features, today's multicore systems may house dozens of cores on a single chip, all working together in perfect harmony, or as near to it as engineers can make them. Around this, an entire industry has grown up focused on coordinating the collaborative work of the cores.

Parallel Multicore Versus ASICs

Along with the development of multicore systems, there has also been a move toward the application-specific integrated circuit, or ASIC: a chip built to perform one narrow set of operations rather than to run general-purpose code.

Experts debate whether ASICs or multicore designs are more effective for various kinds of tasks, from graphics rendering to data mining to, perhaps, the quantum operations of tomorrow, which will likely require purpose-built hardware of their own.

For example, ASICs are widely used in cryptocurrency mining, where they are designed to carry out the specific hashing operations involved in generating Bitcoin blocks. Bitcoin mining is an example of how customizing processing power can generate value.

However, multicore design and the parallel processing capacity it provides continue to be cutting-edge technology for general-purpose microprocessors, for example in gaming, where programs need to render sophisticated 3D graphics with low latency. Increasingly, parallel processing in some form is becoming standard across much of the tech world.

Further Considerations with Parallel Computing

Other considerations related to Amdahl's law involve the pipelines and microarchitectures that allow the cores in these multicore systems to collaborate.

There are also different designs, known as symmetric versus asymmetric multicore, that can affect how parallel computing systems perform. In addition, engineers might use Amdahl's law in its more general form to judge whether improving an individual core has a worthwhile global benefit, or how to improve parallel computing performance overall; a sketch of that general form follows.
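The more general statement of Amdahl's law says that if an enhancement speeds up a fraction f of the total execution time by a factor s, the whole system speeds up by 1 / ((1 - f) + f / s). A minimal sketch, with assumed numbers for illustration only:

```python
def overall_speedup(affected_fraction: float, local_speedup: float) -> float:
    """General form of Amdahl's law: enhance a fraction of execution time."""
    return 1.0 / ((1.0 - affected_fraction) + affected_fraction / local_speedup)


# Example (assumed numbers): doubling the speed of one core that accounts
# for 30% of total execution time gives only a modest system-wide gain.
print(overall_speedup(0.30, 2.0))  # ~1.18x overall
```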

When working with multicore systems, experts might also weigh aspects like the scope of scaling for a project, or the heterogeneity of the processors in a system. Other resources, such as a processor's memory cache, also come into play.

Although some people see Amdahl's law as less relevant than it used to be, parallel computing remains a familiar model for those working at the vanguard of creating new hardware systems.

The power to speed up systems is a key ingredient in advances across the full spectrum of technology projects, for instance in many AI/ML projects, or in relation to progress in quantum computing.
