A straightforward power-management technique is to shut down the processor when it is idle. A more advanced, and now dominant, technique is dynamic voltage/speed scaling, which is more effective than simply shutting the processor down. Modern processor technologies allow processors to operate at various speeds. These technologies exploit the property that the power consumed by a processor is usually a convex, increasing function of its speed; convexity means that the more slowly a task is run, the less energy is used to complete it. To take full advantage of this technology, a power-management strategy is required to determine the voltage/speed at which to run so that energy is used efficiently.
Power consumption is usually estimated by the well-known cube-root rule: the speed s at which a processor operates is roughly proportional to the cube root of the power p; equivalently, p(s) = s^3. For example, if we double the speed of the processor, we halve the time spent on a task but increase the power consumption rate eight-fold, so the total energy needed increases four-fold. At first glance, one might think the problem can be solved by running the processor as slowly as possible. Unfortunately, the problem is not so simple, since there are usually other, orthogonal objectives that aim at providing some quality of service, such as deadline feasibility, makespan (finish time), response time, or throughput. It is therefore crucial to use the right speed. The scheduling algorithm thus has to determine the speed at which the processor should run at every time unit; this is known as dynamic speed scaling or dynamic voltage scaling (DVS).
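The arithmetic above can be checked directly: under p(s) = s^3, a task of w units of work run at constant speed s takes w/s time, so the energy is E(s) = p(s) · (w/s) = s^2 · w. A minimal sketch (the function name and numbers are illustrative, not from the text):

```python
def energy(work: float, speed: float) -> float:
    """Energy to complete `work` units at constant `speed` under p(s) = s**3."""
    power = speed ** 3       # power consumption rate at this speed
    time = work / speed      # time needed to finish the task
    return power * time      # energy = power * time = speed**2 * work

w = 10.0
e1 = energy(w, 1.0)  # baseline speed
e2 = energy(w, 2.0)  # doubled speed: half the time, eight times the power

print(e2 / e1)  # 4.0: doubling the speed quadruples the energy
```

This quadratic growth of energy in speed is exactly why running slower saves energy, and why the interesting question is how slow one can afford to go while still meeting the service objectives.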
Classical job scheduling schedules jobs on processors that run at the same speed throughout, and the decision to be made is which task to run at each time unit. With DVS, the scheduling algorithm also has to determine the speed at which the processor runs. We consider both the offline and online versions of the problem with different optimization objectives, such as minimizing total energy consumption while completing all jobs by their deadlines, or minimizing total energy plus response time.
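For the deadline-feasibility objective, convexity already pins down the answer in the simplest case: for a single job of w units of work due at time d, running at the constant speed w/d uses no more energy than any schedule that varies the speed (a consequence of Jensen's inequality for the convex power function). A minimal sketch illustrating this for p(s) = s^3, comparing the constant schedule against two-phase schedules that split the deadline window in half (the function and the chosen numbers are illustrative assumptions, not the text's algorithm):

```python
def energy_two_phase(w: float, d: float, s1: float) -> float:
    """Run at speed s1 for the first half of [0, d], then finish the
    remaining work in the second half at whatever speed is needed.
    Returns the total energy under p(s) = s**3."""
    w1 = s1 * (d / 2)            # work completed in the first half
    s2 = (w - w1) / (d / 2)      # speed required to finish by the deadline
    return (s1 ** 3) * (d / 2) + (s2 ** 3) * (d / 2)

w, d = 10.0, 5.0
constant = (w / d) ** 3 * d      # run at speed w/d for the whole window

# Every unequal split costs at least as much as the constant schedule,
# with equality only when s1 = w/d (both phases at the same speed).
for s1 in (1.0, 1.5, 2.0, 2.5, 3.0):
    assert energy_two_phase(w, d, s1) >= constant
```

With many jobs and differing deadlines the constant-speed argument no longer applies globally, which is what makes the offline and online scheduling problems discussed above non-trivial.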
Power management is important not only to conserve energy but also to reduce temperature. Temperature matters because a processor's lifetime can be severely shortened if it exceeds its thermal threshold. It is therefore wise to use multiprocessors to provide high processing power while keeping the temperature of each processor low. In this respect, a low-cost option is general-purpose computing on graphics processing units (GPGPU). The main question is how to ensure energy-efficient computation on GPUs.
EPSRC Small Scale Equipment Grants - Energy Efficient Algorithms (P Wong) - part of the EPSRC Small Scale Equipment Grant awarded to University of Liverpool
Coleman-Cohen Grant - Travel grant for visiting Prof Shmuel Zaks in Technion, Israel
British Council UK-Tel Hai Academic Research Scheme (PI: P Wong, M Shalom, CoI: D Kowalski, S Zaks)
Royal Society Conference Grant - for ACM Symposium on Parallelism in Algorithms and Architectures SPAA 2008 |
Coleman-Cohen Grant - Travel grant for visiting Prof Shmuel Zaks in Technion, Israel
EPSRC Grant - Algorithmic Issues in Power Management by Speed Scaling (APM) |
The Nuffield Foundation - Awards to Newly Appointed Lecturers, on Algorithmic Issues on Effective Real-time Data Dissemination to Massive Clients |
Maintained by Prudence Wong