Are you still puzzled why a process runs at the same speed even after you increase your system resources? There are many possible reasons, but there is a law that explains which part of a process makes overall system performance lag. It also gives a clear picture of which parts of the system influence its efficiency. This law is simple, like Newton's laws, though not quite as powerful. Let's dive into the topic.
Before getting started with the law itself, let us look at speedup and at executing a task in parallel, because Amdahl's Law deals directly with system speed and process execution.
The running time of a program is the time it takes the program to execute, measured in any unit of time. Speedup is defined as the time it takes a program to execute in serial (with one processor) divided by the time it takes to execute in parallel (with many processors).
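As a quick illustration of that definition (the timings below are made up, not measurements from any real system), speedup can be computed directly from the two execution times:

```python
def speedup(serial_time, parallel_time):
    # Speedup = serial execution time / parallel execution time
    return serial_time / parallel_time

# Hypothetical timings: 120 s with one processor, 40 s with many
print(speedup(120.0, 40.0))  # 3.0
```

A speedup of 3.0 means the parallel run finished three times faster than the serial run.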
PARALLEL PROCESSING:
Until the early 90s, most systems had only a single processor available to execute a task, so all tasks were queued and executed one after another. With the arrival of parallel processors, a task can be split into modular threads, and each thread is executed on a different processor. Once all the threads of the process have finished, the task is complete. This is called parallel processing.
Gene Amdahl, chief architect of IBM's first mainframe series and later founder of Amdahl Corporation and other companies, found that there were fairly stringent limits on how much speedup one could get for a given parallelized task. He formulated this law to describe those limits.
Amdahl's Law is a formula that gives the theoretical speedup for a task executed by multiple processors (parallel processing). It indicates how each resource of the system influences the speedup. Amdahl gave a formula to derive the maximum speedup of a process executed using parallel processing. Note: actual speedups are always less than the speedup predicted by Amdahl's Law.
If F is the fraction of a calculation that is sequential, and (1-F) is the fraction that can be parallelized, then the maximum speed-up that can be achieved by using P processors is 1/(F+(1-F)/P).
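This formula translates directly into a few lines of Python. A minimal sketch, with F and the processor count chosen purely for illustration:

```python
def amdahl_max_speedup(f, p):
    """Maximum speedup with p processors when fraction f of the
    calculation is sequential, per Amdahl's Law: 1 / (F + (1-F)/P)."""
    return 1.0 / (f + (1.0 - f) / p)

# A calculation that is 25% sequential, run on 4 processors
print(round(amdahl_max_speedup(0.25, 4), 2))  # 2.29
```

Even though 4 processors are doing the work, the sequential quarter of the task caps the speedup well below 4x.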
In its general form, Amdahl's Law computes the speedup of a task from the speedup factor (s) of the improvable portion of the task and the proportion (p) of the task that can be improved.
If 90% of a calculation can be parallelized, then the maximum speedup on 1000 processors is 1/(0.1 + (1 - 0.1)/1000), or about 9.9. In other words, throwing an absurd amount of hardware at the calculation yields a maximum theoretical speedup of 9.9 over a single processor, and actual results will be worse.
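A small loop makes the diminishing returns in this example visible: the speedup creeps toward the hard ceiling of 1/0.1 = 10 no matter how many processors are added.

```python
def amdahl_max_speedup(f, p):
    # Amdahl's Law: 1 / (F + (1 - F)/P), with F the sequential fraction
    return 1.0 / (f + (1.0 - f) / p)

# 90% of the work is parallelizable, so the sequential fraction F = 0.1
for p in (10, 100, 1000, 10_000):
    print(f"{p:>6} processors -> max speedup {amdahl_max_speedup(0.1, p):.2f}")
```

On 1000 processors this prints 9.91, matching the ~9.9 figure above; going to 10,000 processors barely moves the needle.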
The conclusion Amdahl drew is that simply adding processors is not always a viable way of achieving the speedup we expect. His law gives a theoretical bound on how much speedup can be achieved at any given point.
So this article explains why speedup is important and how the processing units influence it. If you have any queries, share them in the comment section below, and thumbs up if you liked it.