Principles of Green Software Engineering with Marco Valtas
Introduction [00:01]
Thomas Betts: Hi, everyone. Before we get to today’s episode with Marco Valtas, I wanted to let you know that Marco will be speaking at our upcoming software development conferences, QCon San Francisco and QCon Plus. Both QCon conferences focus on the people that develop and work with future technologies. You’ll gain practical inspiration from over 60 software leaders deep in the trenches, creating software, scaling architectures, and fine-tuning their technical leadership to help you adopt the right patterns and practices. Marco will be there speaking about green tech, and I’ll be there hosting the modern APIs track.
QCon San Francisco is in-person from October 24th to the 26th, and QCon Plus is online and runs from November 29th through to December 9th. Early bird pricing is currently available for both events, and you can learn more at qconsf.com and qconplus.com. We hope to see you there.
Hello, and welcome to another episode of The InfoQ Podcast. I’m Thomas Betts. And today, I’m joined by Marco Valtas. Marco is the Technical Lead for cleantech and sustainability at Thoughtworks North America. He’s been with Thoughtworks for about 12 years, and he’s here today to talk about green software. Marco, welcome to The InfoQ Podcast.
Marco Valtas: Thank you. Thank you for having me.
The Principles of Green Software Engineering [01:07]
Thomas Betts: I want to start off our discussion with the principles of green software engineering. Our listeners can find these at principles.green. There are eight listed. I don’t think we need to go into all of them, but can you give a high-level overview of why the principles were created and discuss some of the major issues they cover?
Marco Valtas: The principles were published around 2019 by the Green Software Foundation. They are very broad on purpose. And the need for the principles is basically, how can we frame … Well, that’s how principles work. Principles help us to make decisions. When you are facing a decision, you can rely on a principle to guide you: well, what is the trade-off that I am making? That’s basically what we have in the Green Software Principles.
They are generic in a sense. Like, be carbon efficient, be electricity efficient, measure your carbon intensity or be aware of your carbon intensity. I think the challenge they pose to all development is: okay, when I’m making a software development decision, what trade-offs am I making in terms of those principles? Am I making a trade-off by using a certain technology or doing something a certain way that will incur more carbon emissions or electricity consumption, and so on and so forth?
Thomas Betts: Who are these principles for? Are they just for engineers writing the code? Are they CTOs and CEOs making big monetary decisions? Operations coming in and saying, “We need to run bigger or smaller servers”?
Marco Valtas: I think they are targeted at folks that are making decisions at the application level. You can think about operations and the software development itself. In operations, usually you are going to look at the energy profile, like the data center that you are using, what region you are using. So operations can make big calls about what hardware you’re using.
Those are also in the principles, but the principles are also there to help you make decisions at a very low level. Like, what is the size of the assets that you are using on your website? How much data are you moving through the network, and how often are you doing that? If those principles apply to an operations decision, that will work. If they apply to a development decision, they will work. And the same can be said for a CTO position. If you’re making a decision that will have an impact on any of those things, carbon emissions, electricity consumption, they can help you make that decision.
Thomas Betts: I know design for sustainability is one of the topics on our InfoQ Trends Report this year, and I think last year was the first time we included it as one of the architecture trends that people are watching. It’s something that people have been thinking about for a few years now, but I think you said these came out in 2019.
That’s about the same time. These principles are about how you think about those decisions, and the trade-offs you mention are what architects are always concerned with. It’s like, well, the answer is, it depends. What does it depend on?
Varying carbon impact across data center regions [04:03]
Thomas Betts: Can you go into some of the details? When you say about a data center choice, what difference does that make based on which data center you use and where it’s located?
Marco Valtas: For data centers, you can think about the cloud, usually, not your own data center. Each data center is located in a geographic region, and that geographic region has a grid that provides energy, provides electricity, to that data center. If that data center is located in a region where there’s way more fossil fuel energy being generated than solar or wind, that data center has a higher carbon intensity profile. If you run your workloads in that region, usually you are going to incur more carbon emissions.
But if you run the same workload in another data center, let’s say that you change from one cloud region to another cloud region that is on an energy grid with way more solar than fossil fuel, you are using way more renewable energy. So it lowers the intensity of your workload. Considering where your data center is, is one of the factors, especially if you look at how the grids are distributed.
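To make the idea concrete, here is a minimal sketch; the region names and grid-intensity figures are illustrative assumptions, not published numbers for any provider.

```python
# Rough sketch: the same workload's operational emissions depend on the
# carbon intensity of the grid powering the data center region.
# Intensity figures are illustrative assumptions (gCO2e per kWh).

GRID_INTENSITY_G_PER_KWH = {
    "coal-heavy-region": 700,
    "mixed-region": 400,
    "renewables-heavy-region": 100,
}

def operational_emissions_g(energy_kwh: float, region: str) -> float:
    """Grams of CO2e for a workload that consumed `energy_kwh` in `region`."""
    return energy_kwh * GRID_INTENSITY_G_PER_KWH[region]

workload_kwh = 50.0  # same workload, same energy use, different regions
for region in GRID_INTENSITY_G_PER_KWH:
    print(region, operational_emissions_g(workload_kwh, region), "gCO2e")
```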
Thomas Betts: Is that information easily accessible? I think there’s a general consensus. I live in the United States. The Eastern regions are very coal-oriented, so it’s very fossil fuel heavy. But if you move more into the US Central regions for Azure or AWS, whoever, you get a little bit more renewables. But is that something where I can go onto a website and say, “I’m running this in US-EAST-1, compare it to US-CENTRAL and tell me what the carbon difference is”? That doesn’t seem like a number I see on any of the dashboards I go to.
Marco Valtas: You won’t see it on any of the dashboards. We at Thoughtworks created a tool, Cloud Carbon Footprint, which actually helps you look at the footprint by region. But the data, unfortunately, is not easily accessible from the cloud providers. We could go on a tangent about how cloud providers are actually handling this information, but the way we do it in our tool, and most of the other tools out there, is that, in the case of the United States, you can use the EPA report on US regions.
If you go to Europe, there are the European agencies, and for other regions you can get the data from other public databases, where they have reports on the carbon intensity per watt-hour in that region. You plug that in and you make the assumption that the data center is at that carbon intensity level. It gets complicated, or complex, if you consider that maybe the data center is not using that grid’s power. Maybe the data center has solar itself, so it’s offsetting a little bit; it’s using a little bit of renewables.
But that starts to be really hard, because you don’t know, and the providers won’t make that information easily accessible. So you go with estimates.
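Tools that estimate from billing and usage data broadly follow the shape below: estimate energy from usage, apply a data-center PUE, and multiply by a public grid emissions factor for the region. This is a simplified sketch with invented constants, not the actual methodology of Cloud Carbon Footprint or any other tool.

```python
# Simplified sketch of the estimation approach described above.
# All constants are illustrative assumptions.

WATTS_PER_VCPU_AVG = 3.5          # assumed average draw per vCPU
PUE = 1.2                         # assumed power usage effectiveness
GRID_FACTOR_KG_PER_KWH = {        # assumed regional emissions factors
    "us-east-example": 0.45,
    "eu-north-example": 0.05,
}

def estimate_kg_co2e(vcpu_hours: float, region: str) -> float:
    energy_kwh = vcpu_hours * WATTS_PER_VCPU_AVG / 1000.0  # watt-hours -> kWh
    return energy_kwh * PUE * GRID_FACTOR_KG_PER_KWH[region]

print(estimate_kg_co2e(vcpu_hours=10_000, region="us-east-example"))
print(estimate_kg_co2e(vcpu_hours=10_000, region="eu-north-example"))
```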
Cost has a limited correlation to carbon impact [07:01]
Thomas Betts: For all the data center usage, the number that people are usually familiar with is, what’s my monthly bill? Because that’s the number they get. And there are correlations. Well, if I’m spending more money, I’m probably using more resources. But all of these have a fudge factor built in. It’s not a direct correlation.
And now we’re going to an even more indirect correlation: if I’m using more electricity here, and it’s in a region that has a higher carbon footprint, then I have a higher carbon footprint. But I’m at an 87 and I could be at a 43, whatever those numbers would be.
Marco Valtas: Cost is interesting. Cost is a fair assumption if you have nothing more to rely on, especially for cloud resources, because you pay by the resource. If you’re using more space on the hard drives, you’re going to pay more. If you’re using a lot of computing, you’re paying more. If you’re using less, you’re going to pay less. It’s an easy assumption that, well, if I cut my costs, I will cut my emissions. But as you said, there’s a correlation, and there’s a limit to that correlation. You actually find that limit quite quickly, just by moving regions.
Recently, I recorded a talk for XConf, which will be published later this month, where I plotted the cost of running some resources on AWS against the carbon intensity of those resources. That correlation of cost and emissions is not a straight line at all. If you move from the United States to Europe, you can cut your emissions considerably if you look just at the intensity of those regions, but you are going to raise your cost. That actually breaks the argument of, oh, if I cut my cost, I cut my emissions. No, it doesn’t work like that.
If you have only cost, sure, use that. But don’t aim for that. Try to get the real carbon emissions of your application. That’s where cost stands.
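As a toy illustration of why cost is only a loose proxy for carbon, consider two regions; every price and intensity figure below is invented for the example.

```python
# Illustration: cost and carbon are correlated only up to a point.
# Prices and intensities below are invented for the example.

scenarios = [
    # (name, hourly_cost_usd, grid_intensity_g_per_kwh)
    ("cheap, fossil-heavy region", 0.10, 700),
    ("pricier, renewables-heavy region", 0.12, 100),
]

energy_kwh_per_hour = 0.2  # same instance type, same workload

for name, cost, intensity in scenarios:
    emissions = energy_kwh_per_hour * intensity
    print(f"{name}: ${cost:.2f}/h, {emissions:.0f} gCO2e/h")
# The second region costs ~20% more but emits ~85% less:
# cutting cost is not the same as cutting carbon.
```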
Taking a pragmatic approach to optimizing applications [08:57]
Thomas Betts: How do I go about looking at my code and saying, “Well, I want it to be more performant”? Anyone who’s been around for a while has run into processes where this is taking an hour, and if I change it a little bit, I can get it to run in a minute, because it’s just inefficient code. Well, there are those kinds of optimizations, but then there are all of the different scenarios. Like, how are my users using this system?
Am I sending too much data over the wire and they’re making too many requests? How far away are they from my data center? There’s just so many factors that go into this. You said it’s a holistic view, but how do you take the holistic view? And then, do you have to look at every data point and say, okay, we can optimize it here, and we can optimize it there, and we can optimize it here?
Marco Valtas: This is what makes this such a rich field to be in, and why I really enjoy being part of it. There are so many things that you can think of, so many actions that you can take. But in order to be pragmatic about it, you should think of it as an optimization loop, just as you have an optimization loop for the performance of your application. You try to find: what is my bottleneck? What is the thing that is most responsible for my emissions overall?
Let’s say that I have a very, very inefficient application that takes too long to answer. It sorts the data four times before delivering it back to the user, and the computing side of things is obviously the worst culprit for my emissions. So let’s tackle that. And then you can drill down over and over again, till you can make calls about what kind of data structure I’m using. Am I using a linked list or an array list? What kind of loop am I using? Am I using streams?
Those are decisions that will affect your CPU utilization, which translates to your energy utilization, which you can profile and make some calls about. But again, it gets complicated. It gets very distributed. The amount of savings depends, so you need to go for the low-hanging fruit first.
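One pragmatic way to do that kind of relative comparison, assuming CPU time on a fixed machine is an acceptable rough proxy for energy (it is not a calibrated measurement), is a simple benchmark like the sketch below; the redundant-sorting example mirrors the inefficient application Marco describes.

```python
# Minimal sketch: compare two implementations using CPU time as a rough
# proxy for energy on the same machine. Not a calibrated energy measurement,
# just the kind of relative comparison described above.
import timeit

def sorted_four_times(data):
    for _ in range(4):
        data = sorted(data)
    return data

def sorted_once(data):
    return sorted(data)

data = list(range(100_000, 0, -1))
for fn in (sorted_four_times, sorted_once):
    t = timeit.timeit(lambda: fn(list(data)), number=20)
    print(fn.__name__, f"{t:.3f}s for 20 runs")
```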
But yeah, measuring? There are some proposals from the Green Software Foundation, like the Carbon Aware SDK and the Software Carbon Intensity score, where you can get some variables and do a calculation to try to measure your application as a whole. Like, what is my score?
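For reference, the Software Carbon Intensity specification expresses the score as ((E × I) + M) per functional unit R, where E is energy, I is grid carbon intensity, M is embodied emissions, and R is the functional unit you choose. The sketch below shows that calculation with invented inputs; consult the Green Software Foundation specification for the authoritative definition.

```python
# Minimal sketch of the Software Carbon Intensity idea mentioned above:
# SCI = ((E * I) + M) per R, where E is energy (kWh), I is grid carbon
# intensity (gCO2e/kWh), M is embodied emissions attributed to the workload
# (gCO2e), and R is the functional unit (here: 1,000 API calls).
# Input numbers are invented for illustration.

def sci(energy_kwh: float, intensity_g_per_kwh: float,
        embodied_g: float, functional_units: float) -> float:
    return (energy_kwh * intensity_g_per_kwh + embodied_g) / functional_units

before = sci(energy_kwh=12.0, intensity_g_per_kwh=400, embodied_g=500,
             functional_units=1_000)   # score for the current release
after = sci(energy_kwh=9.0, intensity_g_per_kwh=400, embodied_g=500,
            functional_units=1_000)    # score after an optimization
print(f"SCI before: {before:.2f}, after: {after:.2f} gCO2e per 1,000 calls")
```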
The good thing about those scores is not just being able to see a number, but also being able to compare that number with your decisions. If I change something in my application, does it get better or worse relative to the previous state? Am I doing things that are improving my intensity or not? And of course, there are counterintuitive things.
There’s one concept which is called the static power draw of servers. Imagine that you have a server running at 10% utilization. It consumes energy just to be on. And the counterintuitive idea here is that if you run your server at 90% utilization, that doesn’t mean an 80% increase in energy consumption, because you have a baseline just to keep the server up. Memory also needs to be powered, but it doesn’t draw more power whether it’s busy or not. Sometimes you need to make the decision to use one server more rather than spreading work across servers. Those are trade-offs that you need to make.
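A common simplification of the server power behavior Marco describes is a linear idle-plus-utilization model; the wattages below are illustrative assumptions, not measurements of any real server.

```python
# Sketch of the "static power draw" point above, using a linear power model:
# power = idle + utilization * (max - idle). Wattages are assumptions.

IDLE_W, MAX_W = 100.0, 200.0

def power_watts(utilization: float) -> float:
    return IDLE_W + utilization * (MAX_W - IDLE_W)

# One server at 90% vs. three servers at 30% each (same total work):
one_busy = power_watts(0.9)
three_spread = 3 * power_watts(0.3)
print(f"one server at 90%:   {one_busy:.0f} W")
print(f"three servers at 30%: {three_spread:.0f} W")
# The idle baseline dominates, so consolidating work onto fewer, busier
# servers draws less total power than spreading it thinly.
```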
Thomas Betts: That’s one of those ideas that people have about moving from an on-premise data center to the cloud, that you used to have to provision a server for your maximum possible capacity. Usually, that’s over-provisioned most of the time, because you’re waiting for the Black Friday event. And then these servers are sitting idle. The cloud gives you those capabilities, but you still have to design for them.
And that goes back to, I think, some of these trade-offs. We need to design our systems so that they scale differently. And assuming that running at 50 to 90% is a good thing, as opposed to, oh my gosh, my server’s under heavy load and that seems bad, you said it’s kind of counterintuitive. How do we get people to start thinking that way about their software designs?
Marco Valtas: That’s true. Moving to the cloud gave us the ability to use just the things that we actually need and not have servers sitting idle. I don’t think we got there, in the sense that there was a blog post calculating around $26 billion wasted on cloud resources in 2021, with servers that are up and not doing anything, or resources that are provisioned but unused. We can do better in optimizing our use of the cloud.
Cloud is excellent too, because you can power off and provision as you like, unlike on-prem. Bringing that to the table, and designing your systems with that in mind, starts with measuring. How complex an application can be is kind of unbounded. The way you design your application to make the best use of resources in carbon terms is going to go through hoops, and you should measure: how much energy, how many resources are you using?
Some things are a given, in the sense that, well, if I use more compute, I’m probably producing more emissions. But then there are more complex decisions. Like, should I use an event-based architecture? How will microservices versus a monolith behave? I don’t have answers for that. At my company, we are researching and trying to run experiments to get more information about how much a microservice architecture actually impacts your carbon emissions.
And then the big trade-off is that for the last, I don’t know, 20 years of software development, we’ve optimized to be ready for deployment, to minimize uncertainty in our releases, and to be fast in delivering value. That’s what continuous delivery is all about. But then you have to ask the question: how many of those practices, or which practices that you are using, will turn out to be less carbon efficient? I can think of an example.
There are several clients that I work with that have hundreds of CI servers, and they will run hundreds of pipelines because developers are pushing code throughout the day. And the pipeline will run, and run the tests, and sometimes run the performance tests and everything. And the build will stop right at the gate of being deployed and never be deployed. There’s a trade-off to be made here between readiness and carbon emissions.
Should you run the pipeline on every push of code? Does that make sense for your project, for your company? How can you balance all this readiness and quality that we’ve developed throughout the years with the reality that our resources are not endless? They are not infinite. I think one of the impressions that we got from the cloud was, yeah, we can do everything. We can run with any amount of CPU, any amount of space.
I couldn’t tell you how many data scientists were happy about going to the cloud, because now they can run machine learning jobs that are huge. But now we have to go back to the question: is this good for the planet? Is this unbounded consumption of resources something that we need to do, or should do?
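As a purely hypothetical sketch of the CI trade-off Marco describes, one could gate the expensive pipeline stages rather than run everything on every push; the stage names and rules below are assumptions for illustration, not a recommendation from the episode.

```python
# Hypothetical sketch: run cheap checks on every push, but gate the
# expensive, energy-hungry stages (full performance suite, deploy
# rehearsal) behind conditions such as "release candidate" or a nightly
# batch. Stage names and gating rules are invented for illustration.

def stages_to_run(event: str, branch: str, is_release_candidate: bool) -> list[str]:
    stages = ["lint", "unit-tests"]            # cheap: every push
    if branch == "main":
        stages.append("integration-tests")     # moderate: main branch only
    if is_release_candidate or event == "nightly":
        stages += ["performance-tests", "deploy-rehearsal"]  # expensive: batched
    return stages

print(stages_to_run("push", "feature/login", is_release_candidate=False))
print(stages_to_run("nightly", "main", is_release_candidate=True))
```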
Carbon considerations for AI/ML [16:42]
Thomas Betts: You touched on one thing that I did want to get to, which is, I’ve heard various reports, and you can find different numbers online, of how much machine learning and AI models cost just to generate the model. And it’s the idea that once I generate the model, then I can use it, and using it is fairly efficient. But I think some of the reports are millions of dollars, or the same energy it takes to heat 100 homes for a year, to build GPT-3 and other very complex models. They run and they get calculated and then we use them, but you don’t see how much went into their creation.
Do we just take it for granted that those things have been done and someone’s going to run them, and we’ll all just absorb the costs of those being created? And does it trickle down to the people who say, “Oh, I can just run a new model on my workload”? Like you said, the data scientist who just wants to rerun it and say, “Oh, that wasn’t good. I’m going to run it again tomorrow.” Because it doesn’t make a difference. It runs really quickly because the cloud scales up automatically. Uses whatever resources I need, and I don’t have to worry about it.
Marco Valtas: One of the philosophies behind the principles is that everybody has a part to play in sustainability, and I think that still holds true independent of what you are doing. We can get into the philosophy of technology and ask, are you responsible for the technology that you’re producing, and what are the ethics behind that? I don’t want to dig into that, but we also don’t want to cut the value that we’re trying to achieve.
Of course, GPT-3 and other models that are useful and important for other purposes might be a case where, well, we’re going to generate this amount of emissions, but then we can leverage that model to be way more efficient in other human endeavors. But can you absolve yourself of the responsibility based on the theoretical value they’re generating? I don’t think so.
I think it’s a call you make every time. Not knowing your emissions … in time, this is going to turn out to be something that everybody needs to at least have some idea about. It’s like how we do our recycling today: 10 years ago, we never worried about recycling. Nowadays, we look at the packaging. Even when separating your trash, you go, why did this vendor design this package in a way that’s impossible to recycle, because this is glued to that, right?
We have that incorporated into our daily lives. And I think in the future, that will be incorporated into design decisions in software development, too.
Performance measurements and estimates [19:12]
Thomas Betts: I wanted to go back a little bit to when you talked about measurements, because I think this is one of those key things. You said there are scores available, and I hope we can provide some links to those in our show notes. People do performance tests, like you said, either as part of their daily build or on a regular basis, or just, I’m watching this code and I know I can make it better, I’m going to instrument it. There are obviously lots of different scales.
This is one of those same things, but it’s still not a direct measurement: I can’t say my code has saved this much carbon. I can’t change my algorithm here and get a carbon score, but I can get the number of milliseconds it took to run something. Again, is that a good analogy, that if I can get my code to be more efficient, then I can say, well, I’m doing it because it saves some carbon?
Marco Valtas: If that is something that you are targeting at that point, yes. As for how accurate the measurement is? That’s hard. The way that we set up our software development tooling, it doesn’t take that into consideration. So you’re going to have rough estimates. And you can say, well, I’m optimizing for carbon instead of optimizing for memory or something else. That’s definitely something you can do.
When we use the word performance, though, it has a broad meaning. Sometimes we talk about performance like, well, I want my code to run faster, or I want my code to handle more requests per second because I have this amount of users arriving at my endpoint. That does not necessarily mean that if you’re more performant in those dimensions, you are also more performant in the carbon dimension.
What it boils down to is that carbon intensity will become, at least in the way you can incorporate it nowadays, like a cross-functional requirement or a nonfunctional requirement. It’s something that you are aware of. You might not always optimize for it, because sometimes it doesn’t make business sense, or your application will not run if you don’t use certain technologies, but it’s something that you are aware of.
And you are aware, during the lifetime of your software, of how that carbon intensity varies as you make certain changes. There’s a good argument like, oh well, my carbon intensity is rising, but the number of users I’m handling is increasing too, because I’m a business and I’m growing my market. It’s not a fixed point. It’s something that you keep an eye on, and you try to do the best that you can for that dimension.
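One way to keep an eye on that dimension, sketched here with invented numbers, is to normalize emissions by a functional unit (per 1,000 requests in this case) so that business growth does not mask real efficiency changes.

```python
# Sketch of the normalization Marco hints at: track emissions per unit of
# value delivered (here, per 1,000 requests) rather than only the absolute
# total. Numbers are invented for illustration.

releases = [
    # (version, total_gCO2e_per_day, requests_per_day)
    ("v1", 40_000, 1_000_000),
    ("v2", 55_000, 2_000_000),  # absolute emissions rose...
]

for version, total_g, requests in releases:
    per_thousand = total_g / (requests / 1_000)
    print(f"{version}: {total_g} gCO2e/day total, "
          f"{per_thousand:.1f} gCO2e per 1,000 requests")
# ...but per-unit intensity fell from 40.0 to 27.5 gCO2e per 1,000 requests.
```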
Corporate carbon-neutral goals [21:38]
Thomas Betts: And then this is getting a little bit away from the software, but I know there are a lot of companies that have carbon-neutral goals. And usually that just focuses on, our buildings get electricity from green sources, which got a lot easier when a lot of places closed their offices and sent everybody home, and they stopped measuring individuals in their houses. Because I don’t think I have fully green energy at my house for my internet.
But I don’t think a lot of companies, when they talk about their carbon-neutral goals as a company, are looking at their software usage and their cloud data center usage as part of that equation. Or are they? In the same way that companies will buy carbon offsets to say, “Well, we can’t reduce our emissions, so we’re going to do something else, like plant trees, to cancel out our usage,” is there something like that you can do for software?
Marco Valtas: Offsetting is something that you can do as a company, not as software. You can definitely buy offsets to offset the software’s emissions, but that’s more of a corporate decision. In terms of software development itself, we just want to look at how my software is performing in terms of intensity. In general, we use the philosophy that reducing is better than offsetting. Offsetting is the last resort you have, for when you hit a wall in how much you can reduce.
Two philosophies of green software engineering [22:53]
Thomas Betts: The last part of the Green Software Principles: there are eight principles and then there are two philosophies. The first is, everyone has a part to play in the climate solution, and the second is, sustainability is enough, all by itself, to justify our work.
Can you talk to both of those points? I think you mentioned the first one already, that everyone has a part to play. Let’s go to the second one then. Why is sustainability enough? And how do we make that the motivation for people working on software?
Marco Valtas: If you are following climate in general, it might resonate with you that we are in an urgent situation. Climate change is something that needs action, and needs action now. The amount of change we can make to reduce emissions, so that we reduce how much hotter Earth will get and how that impacts the environment in general, is enough to justify the work. That’s what is behind it. It’s the idea that this is a worthy enough goal to do what we need to do.
Of course, cases can get very complex, especially in large corporations, around what your part is. But sustainability is just enough, because it’s urgent in a sense. This is where organizations might sometimes have a conflict. Because there’s sustaining the business, or making a profit, or making your business grow, and then there’s sustainability, which is not exactly in the same alignment. Sometimes it will cost you more to emit less.
That’s an organizational decision, a corporate decision. That’s the environmental governance of the corporation itself. What is behind this principle is basically the idea that focusing on sustainability is enough of a goal. We don’t think that other goals need to be brought in to justify this work.
Thomas Betts: Well, it’s definitely given me a lot to think about, this whole conversation. I’m going to go and reread the principles. We’ll post a link in the show notes. I’m sure our listeners will have questions. If you want to join the discussion, I invite you to go to the episode page on infoq.com and leave a comment. Marco, where can people go if they want to know more about you?
Marco Valtas: You can Google my name, Marco Valtas. It’s easy. If you want to talk with me, I’m not on social networks. I gave up on those several years ago. You can reach me at [email protected] if you really want to ask a question directly.
Thomas Betts: Marco, thank you again for joining me today.
Marco Valtas: Thank you, Thomas.
Thomas Betts: Listeners, thank you for listening and subscribing to the show. I hope you’ll join us again soon for another episode of The InfoQ Podcast.