The planet’s 8,000 or so data centers are the foundation of our online existence, and they will expand even further with the advent of artificial intelligence. However, the rapid growth of the IT industry has raised concerns about its environmental impact.
Research estimates that by 2025, the IT industry could consume up to 20 percent of all electricity produced and account for as much as 5.5 percent of the world’s carbon emissions. Those projections raise a real and, to many observers, increasingly urgent question about the industry’s carbon footprint as startups and established companies race to adopt Silicon Valley’s latest technological advances.
The Challenge of AI and Data Centers
Arun Iyengar, CEO of Untether AI, a specialized chip-making company focused on making AI more energy-efficient, emphasized the importance of addressing the environmental impact of AI. He stated, “Pandora’s box is open. We can utilize AI in ways that enhance the climate requirements, or we can ignore the climate requirements and find ourselves facing the consequences in a decade or so in terms of the impact.”
The Environmental Cost of AI Training
The creation of generative AI tools like GPT-4 or Google’s Palm2 can be broken down into two key stages: training and inference. Training AI models is an energy-intensive process. In 2019, researchers at the University of Massachusetts Amherst found that training a single AI model could emit the carbon dioxide equivalent of five cars over their lifetimes.
A more recent study by Google and the University of California, Berkeley, found that training GPT-3 produced 552 metric tons of carbon emissions, equivalent to driving a passenger vehicle 1.24 million miles (2 million kilometers). OpenAI’s latest-generation model, GPT-4, has been rumored to use hundreds of times more parameters than GPT-3, though OpenAI has not disclosed its size—a sign of the rising energy demand as AI models become more powerful and ubiquitous.
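As a rough sanity check, that mileage equivalence follows from simple arithmetic; the per-mile emission factor below is an assumption chosen to be close to historical EPA passenger-vehicle averages, not a figure taken from the study itself:

```python
# Rough sanity check on the GPT-3 training figure. The per-mile
# emission factor is an assumption (roughly in line with older EPA
# passenger-vehicle averages), not a number from the study.
EMISSIONS_TONNES = 552      # metric tons of CO2 from training GPT-3
GRAMS_PER_MILE = 445        # assumed grams of CO2 per vehicle-mile

miles = EMISSIONS_TONNES * 1_000_000 / GRAMS_PER_MILE
print(f"Equivalent driving distance: {miles / 1e6:.2f} million miles")
# -> about 1.24 million miles, matching the comparison above
```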
Nvidia, a prominent AI chip manufacturer, provides the processors known as GPUs (graphics processing units) that are indispensable for training AI models. While GPUs handle AI workloads far more efficiently than general-purpose processors, they still consume a substantial amount of power.
The Challenge of AI Deployment
The other side of generative AI is deployment, or inference: using the trained model to identify objects, respond to text prompts, or perform other tasks. A single inference requires far less computing power than training does, but the cumulative energy consumed across countless real-world interactions can outweigh the energy used in training.
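A toy calculation, with every number assumed purely for illustration, shows how quickly cumulative inference energy can overtake a one-time training cost:

```python
# Toy comparison of one-time training energy vs. cumulative inference
# energy. Every number here is an assumption for illustration only.
TRAINING_MWH = 1_300           # assumed one-time training cost, in MWh
WH_PER_QUERY = 3.0             # assumed energy per inference query, in Wh
QUERIES_PER_DAY = 10_000_000   # assumed daily query volume

daily_inference_mwh = WH_PER_QUERY * QUERIES_PER_DAY / 1_000_000
days_to_match = TRAINING_MWH / daily_inference_mwh
print(f"Inference matches training energy after ~{days_to_match:.0f} days")
# -> roughly 43 days at these assumed rates
```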
“Inference is going to be even more of a problem now with ChatGPT, which can be used by anyone and integrated into daily life through apps and web searches,” said Lynn Kaack, an assistant professor of computer science at the Hertie School in Berlin.
Corporate Commitments to Energy Efficiency
Major cloud companies, such as Amazon Web Services and Microsoft, have made commitments to be more energy-efficient. Amazon Web Services has pledged to be carbon-neutral by 2040, while Microsoft aims to achieve carbon-negative status by 2030. Recent evidence suggests that these companies are taking their commitments seriously.
Between 2010 and 2018, global data center energy use increased by only 6 percent, despite a 550 percent rise in workloads and computing instances, according to the International Energy Agency.
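Taken together, those figures imply a steep drop in the energy used per unit of work, sketched roughly below:

```python
# Implied efficiency gain from the IEA figures quoted above.
energy_2018_vs_2010 = 1.06     # total energy use grew 6%
workload_2018_vs_2010 = 6.50   # workloads grew 550% (i.e., 6.5x)

energy_per_workload = energy_2018_vs_2010 / workload_2018_vs_2010
print(f"2018 energy per unit of work: {energy_per_workload:.0%} of 2010")
# -> about 16%, i.e., an ~84% reduction per unit of work
```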
Silicon Valley’s Perspective
Some Silicon Valley leaders believe that discussions of AI’s current carbon footprint underplay its revolutionary potential. They argue that the mass deployment of AI and faster computing will ultimately reduce the need to access data centers, leading to energy efficiency improvements.
Nvidia CEO Jensen Huang stated, “In the future, there’ll be a little tiny model that sits on your phone, and 90 percent of the pixels will be generated, 10 percent will be retrieved, instead of 100 percent retrieved—and so you’re going to save (energy).”
Sam Altman from OpenAI envisions AI’s superpowers transforming the future positively, saying, “I think once we have a really powerful superintelligence, addressing climate change will not be particularly difficult. This illustrates how big we should dream.”
In Depth
Jensen Huang is referring to the concept of neural compression. Neural compression is a family of techniques that reduces the size of a neural network model without significantly affecting its accuracy, either by removing redundant information from the model or by representing the model in a more compact way.
In the context of smartphones, this matters because much of the energy cost of serving an image comes from moving pixel data through memory and over the network. If 90 percent of the pixels are generated on the device rather than retrieved, the amount of data that must be fetched drops sharply, which saves power.
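A hedged back-of-envelope sketch, using invented relative energy costs, shows the shape of the saving Huang describes:

```python
# Illustrative sketch of the 90/10 claim. The per-pixel energy costs
# below are assumptions chosen only to show the shape of the trade-off.
ENERGY_RETRIEVE = 100.0   # assumed relative cost to fetch a pixel remotely
ENERGY_GENERATE = 20.0    # assumed relative cost to generate one on-device

baseline = 1.00 * ENERGY_RETRIEVE                         # 100% retrieved
hybrid = 0.10 * ENERGY_RETRIEVE + 0.90 * ENERGY_GENERATE  # 10% retrieved
print(f"Hybrid approach uses {hybrid / baseline:.0%} of baseline energy")
# -> 28% at these assumed costs
```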
There are a number of different neural compression techniques available; some of the most common, illustrated in the code sketch after this list, include:
- Quantization: This technique reduces the number of bits used to represent the weights and activations of the model. For example, a 32-bit floating-point number can be represented using 8 bits with only a small loss of accuracy.
- Sparsity (pruning): This technique removes redundant connections from the model, typically by identifying and zeroing out weights with small magnitudes that contribute little to the output.
- Model distillation: This technique trains a smaller “student” model to mimic the behavior of a larger “teacher” model, producing a compact model that approximates the original’s accuracy.
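As referenced above, here is a minimal sketch of these three techniques using PyTorch; the model, layer sizes, and hyperparameters are all illustrative rather than production settings:

```python
# Minimal sketches of the three compression techniques above, using
# PyTorch. The model and all sizes/hyperparameters are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.nn.utils.prune as prune

# A small stand-in for a mobile-sized network.
model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))

# 1. Quantization: store Linear weights as 8-bit integers instead of
#    32-bit floats, shrinking the model roughly 4x for CPU inference.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# 2. Sparsity: zero out the 50% of first-layer weights with the
#    smallest magnitudes (L1 unstructured pruning).
prune.l1_unstructured(model[0], name="weight", amount=0.5)
sparsity = (model[0].weight == 0).float().mean().item()
print(f"First-layer sparsity after pruning: {sparsity:.0%}")

# 3. Distillation: a standard loss that trains a small "student" to
#    match a larger "teacher"'s softened output distribution.
def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    return F.kl_div(log_student, soft_teacher,
                    reduction="batchmean") * temperature ** 2
```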
Neural compression is a rapidly evolving field, and new techniques are being developed all the time. As these techniques improve, it is likely that neural compression will become even more widely used in smartphones and other mobile devices.
Here are some of the benefits of neural compression for mobile phones:
- Reduced power consumption: Neural networks are computationally expensive, so reducing their size can lead to significant savings in power consumption. This is especially important for mobile phones, which have limited battery life.
- Improved performance: Neural compression can also improve the performance of neural networks on mobile phones. This is because smaller models can be processed more quickly.
- Increased flexibility: Neural compression can make neural networks more flexible and adaptable to different tasks. This is because smaller models are easier to deploy and update.
Overall, neural compression is a promising technology that has the potential to significantly improve the performance, efficiency, and flexibility of neural networks on mobile phones.
Here are some specific examples of how neural compression is being used in mobile phones:
- Image recognition: Compressed image recognition models let smartphones classify images faster and with less power consumption.
- Natural language processing: Smaller language models make on-device tasks such as speech recognition and machine translation more efficient.
- Computer vision: Compact vision models speed up tasks such as object detection and face recognition.
The use of neural compression in mobile phones is still in its early stages, but adoption is growing quickly as the underlying techniques mature.
Conclusion
The rapid growth of artificial intelligence and data centers presents a critical environmental challenge. AI offers immense promise for solving complex problems, including climate change, but its energy-intensive training and deployment raise concerns about carbon emissions and resource consumption. As companies and startups race to harness AI’s potential, there is a pressing need to balance innovation with sustainability. Major players are committing to energy efficiency, but the entire industry must address AI’s environmental impact comprehensively. Balancing technological advancement with environmental responsibility is a complex challenge that requires global cooperation, innovation, and a shared commitment to safeguarding our planet for future generations.