A new invention can significantly reduce the resource consumption of the world’s computer servers


An elegant new algorithm developed by Danish researchers can significantly reduce the resource consumption of the world’s computer servers. Computer servers are as taxing on the climate as global air traffic combined, thereby making the green transition in IT an urgent matter.

The researchers, from the University of Copenhagen, expect major IT companies to deploy the algorithm immediately.

One of the flipsides of our runaway internet usage is its impact on climate due to the massive amount of electricity consumed by computer servers. Current CO2 emissions from data centers are as high as from global air traffic combined—with emissions expected to double within just a few years.

Only a handful of years have passed since Professor Mikkel Thorup was among a group of researchers behind an algorithm that addressed part of this problem by producing a groundbreaking recipe to streamline computer server workflows.

Their work saved energy and resources. Tech giants including Vimeo and Google enthusiastically implemented the algorithm in their systems, with online video platform Vimeo reporting that the algorithm had reduced their bandwidth usage by a factor of eight.

Now, Thorup and two fellow UCPH researchers have perfected the already clever algorithm, making it possible to address a fundamental problem in computer systems—the fact that some servers become overloaded while other servers have capacity left—many times faster than today.

“We have found an algorithm that removes one of the major causes of overloaded servers once and for all. Our initial algorithm was a huge improvement over the way industry had been doing things, but this version is many times better and reduces resource usage to the greatest extent possible.

Furthermore, it is free to use for all,” says Professor Thorup of the University of Copenhagen’s Department of Computer Science, who developed the algorithm alongside department colleagues Anders Aamand and Jakob Bæk Tejs Knudsen.

Soaring internet traffic

The algorithm addresses the problem of servers becoming overloaded as they receive more requests from clients than they have the capacity to handle. This happens as users pile in to watch a certain Vimeo video or Netflix film. As a result, systems often need to shift clients around many times to achieve a balanced distribution among servers.
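The general technique behind this line of work is consistent hashing: clients and servers are hashed onto a ring, and each client is served by the first server clockwise from its position, so adding or removing a server only moves the clients in one small arc. The sketch below is a minimal generic illustration, not the researchers' implementation; the server and client names are purely illustrative.

```python
import hashlib

def h(key: str) -> int:
    """Hash a string to a point on a 2**32 ring."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % (2**32)

def assign(clients, servers):
    """Map each client to the first server clockwise on the ring."""
    ring = sorted((h(s), s) for s in servers)
    out = {}
    for c in clients:
        p = h(c)
        for point, s in ring:
            if point >= p:  # first server point at or after the client
                out[c] = s
                break
        else:               # wrapped past the end: take the first server
            out[c] = ring[0][1]
    return out

clients = [f"client-{i}" for i in range(1000)]
before = assign(clients, ["s1", "s2", "s3"])
after = assign(clients, ["s1", "s2", "s3", "s4"])
moved = sum(1 for c in clients if before[c] != after[c])
print(f"{moved} of {len(clients)} clients moved")  # only clients in s4's arc move
```

Note that every client that moves lands on the new server, and every other client stays put; this is the property that keeps reshuffling cheap when the set of servers changes.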

The mathematical calculation required to achieve this balancing act is extraordinarily difficult, as up to a billion servers can be involved in the system. And the system is ever-changing, as new clients and servers constantly join and leave. This leads to congestion and server breakdowns, as well as resource consumption that influences the overall climate impact.

“As internet traffic soars explosively, the problem will continue to grow. Therefore, we need a scalable solution that doesn’t depend on the number of servers involved. Our algorithm provides exactly such a solution,” explains Thorup.

According to the American IT firm Cisco, internet traffic was projected to triple between 2017 and 2022, with online video making up 82 percent of all internet traffic by 2022.

From 100 steps to 10

The new algorithm ensures that clients are distributed as evenly as possible among servers, by moving them around as little as possible, and by retrieving content as locally as possible.

For example, to ensure that client distribution among servers balances so that no server is more than 10% more burdened than others, the old algorithm might deal with an update by moving a client one hundred times. The new algorithm reduces this to 10 moves, even when there are billions of clients and servers in the system.

Mathematically stated: if the balance is to be kept within a factor of 1+1/X, the number of moves per update drops from X² to X, a bound that is generally impossible to improve upon.
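The load-cap idea can be sketched as follows. This is a simplified illustration of "bounded loads" on a hash ring, not the paper's algorithm: each server accepts at most roughly (1+ε) times the average load, and a client that lands on a full server is forwarded clockwise to the next server with spare capacity. All names and the forwarding rule are illustrative.

```python
import hashlib
from math import ceil

def h(key: str) -> int:
    """Hash a string to a point on a 2**32 ring."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % (2**32)

def assign_bounded(clients, servers, eps=0.1):
    """Consistent hashing with a per-server cap of ceil((1 + eps) * average).
    A client landing on a full server is forwarded clockwise to the next
    server with spare capacity."""
    cap = ceil((1 + eps) * len(clients) / len(servers))
    ring = sorted((h(s), s) for s in servers)
    load = {s: 0 for s in servers}
    out = {}
    for c in clients:
        p = h(c)
        i = next((j for j, (pt, _) in enumerate(ring) if pt >= p), 0)
        while load[ring[i % len(ring)][1]] >= cap:
            i += 1  # server full: forward to the next one clockwise
        s = ring[i % len(ring)][1]
        load[s] += 1
        out[c] = s
    return out, load

clients = [f"client-{i}" for i in range(1000)]
servers = [f"s{i}" for i in range(10)]
_, load = assign_bounded(clients, servers, eps=0.1)
print(max(load.values()))  # never exceeds the capacity bound
```

With ε = 0.1 no server ever carries more than about 10% above the average, which is exactly the kind of guarantee described above; the hard part, solved in the paper, is maintaining this with few moves as clients and servers churn.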

As many large IT firms have already implemented Professor Thorup’s original algorithm, he believes that industry will adopt the new one immediately—and that it may already be in use.

Studies have demonstrated that global data centers consume more than 400 terawatt-hours of electricity annually. This accounts for approximately two percent of the world’s total greenhouse gas emissions and currently equals all emissions from global air traffic. Data center electricity consumption is expected to double by 2025.

According to the Danish Council on Climate Change, a single large data center consumes the equivalent of four percent of Denmark’s total electricity consumption.

Mikkel Thorup is head of the BARC research center (Basic Algorithms Research Copenhagen) at the University of Copenhagen's Department of Computer Science. BARC has positioned Copenhagen as the world's fourth best place for basic research in the design and analysis of algorithms. BARC is funded by the VILLUM Foundation.

The research article has just been presented at the prestigious STOC 2021 conference. A free version of the article can be read here: https://arxiv.org/abs/2104.05093

Read the Vimeo Engineering Blog post about the implementation of Mikkel Thorup's algorithm: https://medium.com/vimeo-engineering-blog/improving-load-balancing-with-a-new-consistent-hashing-algorithm-9f1bd75709ed

Cloud computing represents a fusion of two major trends: IT efficiency and business agility. The cloud can store data without practical limits while keeping each user's data isolated from other users. Users can access the required files, documents, and applications on demand, and pay only for the services the cloud provides instead of buying their own expensive infrastructure.

Cloud computing has various features, such as on-demand resource allocation, quality of service, and elasticity, which make it very attractive in both academia and industry. The continual demand for cloud services has created the need to manage the load on machines, the energy consumed by those machines, and the scheduling of resources.

Load balancing is the technique of allocating different tasks over different resources in the data center to maintain balance [1]. The available resources can be a data center, a virtual machine, or a physical machine [2,3].

Resources and services must be dispensed systematically so that every resource experiences a similar load at any instant, improving the average resource utilization rate [4]. Any load imbalance drastically degrades system performance. While balancing the load, energy consumption should also be kept in mind.
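A simple baseline for the balancing described above, assuming task costs are known up front, is greedy least-loaded assignment: each task goes to whichever machine currently carries the least load. This is a generic textbook heuristic, not one of the paper's algorithms; the task costs below are made up.

```python
import heapq

def greedy_balance(task_costs, n_machines):
    """Assign each task to the currently least-loaded machine,
    tracked with a min-heap of (load, machine) pairs."""
    heap = [(0.0, m) for m in range(n_machines)]
    heapq.heapify(heap)
    placement = {}
    for t, cost in enumerate(task_costs):
        load, m = heapq.heappop(heap)   # least-loaded machine
        placement[t] = m
        heapq.heappush(heap, (load + cost, m))
    return placement

tasks = [5, 3, 8, 2, 7, 4, 6, 1]       # illustrative task costs
placement = greedy_balance(tasks, 3)
loads = [0.0] * 3
for t, m in placement.items():
    loads[m] += tasks[t]
print(loads)  # final per-machine loads
```

Each assignment costs O(log m) heap work, so the whole pass is O(n log m) for n tasks on m machines; the metaheuristics discussed later in this paper aim to beat this kind of greedy baseline on more realistic objectives.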

Green cloud computing covers both using resources efficiently and reducing the energy those resources consume [5]. Energy in the data center is consumed by both computational and cooling resources: computational resources account for around 60% of total energy consumption, while the cooling infrastructure accounts for another 30% [6].

The energy consumption problem can be divided into two parts: (a) server-side operations and (b) network-side communications. Optimizing resource allocation while reducing operational cost is a key concern, and can be implemented in the platform-as-a-service segment. Schedulers are used to schedule resources, while load balancers balance the resources, predict the load, and reduce energy use.

In cloud computing, services and resources are either allocated or deallocated. The major benefit of moving to the cloud is that it removes the pressure of upfront investment and hence lowers the cost of operation and maintenance. Figure 1 below depicts the cloud infrastructure.

The scalability of cloud computing gives users flexibility: capacity can be scaled up and down according to their needs. Resource allocation is the task of assigning resources while maintaining a proper balance in the environment. To maintain that balance, resource scheduling algorithms are applied to achieve efficient performance.

Figure 1. Cloud infrastructure [7].

Nowadays fog computing and edge computing, which are extensions of cloud computing, are also used, because IoT devices rely on the cloud for data storage. Fog computing helps reduce the network burden on data centers, and edge computing is used to minimize and manage traffic [8,9].

A lot of work has already been done on cloud systems using various optimization algorithms, but there is still a need for improvement as cloud usage grows daily. As the number of users increases, companies deploy more and more servers to improve their services, but this additional usage also increases the load, energy consumption, and resources needed in the cloud system.

Hence, the key motivation behind this research was to improve various aspects of cloud system performance, such as load balancing, resource scheduling, and energy consumption, using a novel metaheuristic approach, the whale optimization algorithm (WOA). Before implementing WOA, we perform an experimental survey of algorithms known to work well for load balancing, resource scheduling, and energy efficiency.

We use the particle swarm optimization (PSO) and cuckoo search algorithm (CSA) to balance the load over the cloud system and record the corresponding values. We then test the CSO and BAT algorithms, which the literature reports perform well for task-resource allocation, and obtain their resource scheduling results. Finally, we propose and implement the whale optimization algorithm, which gives the best results for task execution, response time, and energy consumption in a cloud system.

In the first phase, we implement the two algorithms PSO and CSA. PSO is a useful algorithm for allocating loads to different machines, based on the social behavior of animals as they search for food. PSO is useful for finding machines with less load and assigning tasks to them, but it has the limitation of taking more execution time [7]. The CSA algorithm is based on the strategy of cuckoos laying eggs in the best nest, which helps find the best machine for task allocation so that loads can be balanced properly [10].

CSA is best suited to job scheduling, but has limitations in resource scheduling. Both of these algorithms give good results for load balancing but not for resource scheduling. In the second phase, the cat swarm optimization (CSO) and BAT algorithms are implemented. The CSO algorithm models cats operating in two modes, seeking and tracing, and aims to efficiently allocate the available resources to a number of tasks in a cloud environment at minimum cost [11].

This algorithm gives good task-resource allocation results. The bat algorithm is based on the strategy bats use to catch their prey [12]. This approach is implemented to allocate resources to tasks in such a way that resources can be scheduled with less budget and time, but it does not give good results as far as energy consumption is concerned. Both BAT and CSO give better results than PSO and CSA.

Lastly, the whale optimization algorithm (WOA), which gives the best results of all the algorithms implemented in the previous phases, has been implemented [13]. WOA starts from a random solution, treats the current best solution as the prey, and updates the population positions relative to it, balancing exploitation and exploration. The WOA algorithm gives strong results for the load balancing, resource scheduling, and energy efficiency of cloud systems. For this testing, Cloud Analyst has been used.
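The core WOA update rules can be sketched on a continuous toy objective. This is a generic textbook WOA, not the paper's cloud-specific formulation; the objective function, population size, and iteration count are all illustrative choices.

```python
import math
import random

def woa_minimize(f, dim, bounds, n_whales=20, iters=200, seed=0):
    """Minimal Whale Optimization Algorithm sketch: encircling the prey
    (|A| < 1), random exploration (|A| >= 1), and the spiral bubble-net
    update (p >= 0.5)."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_whales)]
    best = min(X, key=f)[:]
    for t in range(iters):
        a = 2 - 2 * t / iters                    # decreases linearly 2 -> 0
        for i in range(n_whales):
            r, p = rng.random(), rng.random()
            A, C = 2 * a * r - a, 2 * rng.random()
            if p < 0.5:
                # encircle the best solution, or explore toward a random whale
                ref = best if abs(A) < 1 else X[rng.randrange(n_whales)]
                X[i] = [ref[d] - A * abs(C * ref[d] - X[i][d])
                        for d in range(dim)]
            else:
                # spiral update around the best solution
                l = rng.uniform(-1, 1)
                X[i] = [abs(best[d] - X[i][d]) * math.exp(l)
                        * math.cos(2 * math.pi * l) + best[d]
                        for d in range(dim)]
            X[i] = [min(hi, max(lo, x)) for x in X[i]]  # clamp to bounds
            if f(X[i]) < f(best):
                best = X[i][:]
    return best

# Toy objective: minimize the sphere function (optimum at the origin).
sol = woa_minimize(lambda x: sum(v * v for v in x), dim=3, bounds=(-10, 10))
print(sum(v * v for v in sol))
```

Applying WOA to cloud scheduling, as in this paper, means replacing the continuous objective with a task-to-machine fitness function (e.g. makespan or energy) and encoding placements as positions; the update rules stay the same.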

This paper includes various sections that cover a brief introduction to cloud computing, the proposed algorithms, and the results. Section 1 covers the introduction to the cloud concept. Section 2 reviews the load balancing, energy efficiency, and resource scheduling literature. Section 3 presents the proposed algorithms, while Section 4 shows the simulation results of our research and compares the existing and proposed algorithms. Finally, Section 5 gives the conclusions and future scope of the research.

Source: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7956425/

More information: Anders Aamand et al, Load balancing with dynamic set of balls and bins, Proceedings of the 53rd Annual ACM SIGACT Symposium on Theory of Computing (2021). DOI: 10.1145/3406325.3451107
