The cradle-to-shelf approach is based on a simple and straightforward criterion: only include emissions that are directly controlled or influenced by the producer. The producer controls the production, the choice of suppliers, and the distribution of the product. The producer does not control what happens to the product once it is sold, and therefore that step should not be included in the system boundaries.
Thus, a comparable food product climate footprint criterion can be summarised as follows:
Include all emissions that are related to how the product is produced and distributed, and thus independent of who the consumer is. Do not include emissions that are related to how the product is purchased, consumed, and disposed of, and thus dependent on who the consumer is. One can argue that this does not cover the full life cycle of the product, which is true, but it covers everything needed to make a fair comparison between the products available in the store. For that reason, it also provides a valid footprint scope for consumer information at the point of purchase.
Making a climate footprint assessment can sometimes be difficult. The easy case is when you have one piece of land that produces one product. In that case, you simply sum up all emissions associated with that piece of land and divide by the amount produced, and that is the climate footprint per kg of the product.
However, this is seldom the case. In many cases, you get several products from one process. For instance, when you produce milk, you also get meat from slaughtered cows and calves. When you grow wheat, you also get straw. So this is a general problem: most food products are produced in an interconnected web of processes. The question of allocation is essentially how large a share of the emissions should be allocated to each product.
Soybeans are often processed into two fractions: soymeal, which is used as fodder, and soy oil, one of the most widely used cooking oils globally. Let’s assume that 1 kg of soybeans causes 1 kg of CO2-eq emissions. For each kg of soybeans, you get 800 g of meal and 200 g of oil. The question then is: how large are the emissions caused by 1 kg of oil or 1 kg of meal?
One method is to allocate based on weight. That means that 1 kg of meal and 1 kg of oil are both assumed to cause 1 kg of CO2-eq emissions. But you can argue that what is relevant here is the energy content – that is what makes you full. Soy oil has 6 times higher energy content than meal. If you allocate the emissions based on energy content, 1 kg of meal instead causes only 0.5 kg CO2-eq, whereas 1 kg of oil causes 3 kg CO2-eq.
Another way to allocate is based on economic value. You do not necessarily want to eat more energy – that may just make you obese. But you could argue that the price you pay is an indication of the product’s worth to you. And in contrast to the energy content, soy oil costs around twice the price of soymeal. Using economic allocation, soymeal causes 0.8 kg CO2-eq, whereas the oil causes 1.6 kg per kg of product.
Regardless of how you allocate the emissions, the total emissions from soy are the same: 1 kg CO2-eq per kg. But the responsibility for those emissions varies substantially between allocation methods. There is no strict scientific answer to which allocation method is true. It is a matter of perspective.
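All three allocation rules are the same calculation with different weights, which can be captured in a few lines of code. This is an illustrative sketch, not CarbonCloud's actual model: the mass split and total come from the text above, while the relative energy (6:1) and price (2:1) values per kg are the assumptions stated there.

```haskell
-- Generic allocation: share the total emissions over co-products in
-- proportion to mass * value, where "value" per kg is 1 for mass
-- allocation, relative energy content for energy allocation, or
-- relative price for economic allocation.

-- (mass in kg, value per kg in arbitrary relative units)
type CoProduct = (Double, Double)

-- Footprint per kg of each co-product, given total emissions (kg CO2-eq).
allocate :: Double -> [CoProduct] -> [Double]
allocate total products = [ total * share m v / m | (m, v) <- products ]
  where
    share m v = m * v / sum [ m' * v' | (m', v') <- products ]

-- Soy example: 1 kg of beans -> 0.8 kg meal + 0.2 kg oil, 1 kg CO2-eq total.
soyMass, soyEnergy, soyEconomic :: [Double]
soyMass     = allocate 1.0 [(0.8, 1), (0.2, 1)]  -- equal value per kg
soyEnergy   = allocate 1.0 [(0.8, 1), (0.2, 6)]  -- oil: 6x the energy of meal
soyEconomic = allocate 1.0 [(0.8, 1), (0.2, 2)]  -- oil: 2x the price of meal
```

With these assumed weights, mass allocation gives 1.0 kg CO2-eq per kg for both fractions and energy allocation gives 0.5 and 3.0; economic allocation comes out at roughly 0.83 and 1.67 kg CO2-eq per kg, close to the rounded figures above. Whichever weights are used, the allocated shares always sum back to the total 1 kg CO2-eq.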
We know that a climate-friendly diet is plant-based, with the potential addition of climate-friendly animal products such as eggs, fish, and poultry. Sometimes there is a concern that a climate-friendly diet may constitute a problem from a health and nutrition perspective.
Let us start with nutrition. It is easy to construct a climate-friendly diet that contains all relevant vitamins and minerals. But is that really how people eat? To find out, researchers asked 1,500 randomly selected Swedes to write down everything they ate for four days. The results showed that those with the lowest emissions caused around 1.5 tonnes of CO2-eq per person from their food consumption, compared to around 2 tonnes in the highest group. Yet there were no relevant differences in nutritional intake. This means that both in theory and in practice there is no trade-off between eating environmentally friendly and eating nutritiously.
Concerning health, it is a bit trickier, as diseases typically evolve over decades and we have not followed people with specifically climate-friendly diets for that long. However, researchers do know that a high intake of red meat, which also causes large emissions, is associated with certain types of cancer, while a high intake of vegetables prevents certain types of cancer. Using these relationships, researchers can estimate how many lives can be saved by adopting different kinds of diets. A study in the UK found that if people ate a diet with 17 % lower emissions, the average life expectancy would increase by 8 months. Further, in a global study, the researchers found that a diet with somewhat lower emissions would save 5 million lives. If all human populations adopted a vegan diet, the emissions would decrease even more, and up to 8 million lives would be saved.
We can thus conclude that eating more climate-friendly still gives us an adequate amount of vitamins and minerals. But more importantly, it would mean a lower prevalence of certain diseases, which would actually save lives.
Global warming is one of the biggest environmental challenges right now, and food production is responsible for one quarter of the world’s greenhouse gas emissions. In this context, CarbonCloud is launching a website where you can find country-specific climate footprints of annual crops from all over the world, as a first important step towards publishing climate footprints for all food products. To enable informed choices for sustainability-engaged producers and consumers, we are providing it all for free, in stark contrast to most climate data, which either does not exist at all or is hidden behind expensive paywalls.
“Until now, climate footprints have been slowly calculated by hand. We’re using modern technology to solve the problems of the future, automating the calculation process and handing it over to computers. This allows us to calculate massive amounts of footprints simultaneously with consistent quality.” – David Bryngelsson, CEO at CarbonCloud
In order to stop global warming and meet the ambitious climate goals stated in the Paris Agreement, there is an increasing demand for convenient and trustworthy tools to measure the climate impact of goods and food products. Big sustainability actors in the food sector are already using CarbonCloud software to keep track of their climate footprints. Some of them have even gone one step further than just publishing their footprints and have launched campaigns that encourage their customers to make green choices, e.g., Oatly’s “Show us your numbers” campaign and Estrella’s drive for “Fair snacks”.
A big difficulty in comparing climate footprint calculations is that assessments are made by individual experts using different methods and datasets. We have set out to change this by releasing massive amounts of consistent climate footprint data for free, turning the focus to what can be done to reduce emissions now that comparable data exists.
“We are releasing all these footprints for free because we want to help solve the climate crisis and give more food producers the possibility to calculate their specific climate footprints and show their numbers.”
– Mikael Tönnberg, CTO at CarbonCloud
Automating the calculations for farmgate annual crops at unprecedented scale is just the start. The next step is perennial crops, to be followed by livestock products, refined products, and more. Over time, the goal is to cover all food products and thereby also serve the end-consumer market. As new yield data comes in every year, or as science makes progress on the underlying mechanisms or data collection, all footprints are automatically recalculated and updated. Customers using our climate labeling tool will get automatic access to up-to-date, high-precision footprints they can use when modeling their production processes. This data set will improve in both scope and precision over time, so if you cannot find what you are looking for, check in again and it may well be there.
For more information please contact:
CarbonCloud is a research-based food-tech startup with a disruptive web-based SaaS solution that enables detailed calculations of the climate footprints of food products and production processes. This enables food producers across the world to calculate and analyze the climate footprints of their product portfolios at a fraction of the cost and time spent on traditional consultancy-based life-cycle assessments. The company is headquartered in Gothenburg, Sweden, privately held, and backed by international investors. www.carboncloud.com
Benefits of statically typed functional programming? Wrong question.
“What are the benefits of X?” is a rather natural question to ask when you are curious about a subject. However, the response will be very different depending on who gives the answer.
Asking “what are the benefits of a Formula 1 car?” would result in very different replies if you asked a race driver, a farmer, a carpenter, or a submarine captain.
I think a serious source of miscommunication could be eliminated if we spent a bit more time talking about the desired end goal and tried to find a non-fluffy answer. We as people have a tendency to assume everyone has the same goal – that “we are all farmers”.
This problem often comes up when discussing XDD techniques (Domain-Driven Design (DDD), Test-Driven Development (TDD), Type-Driven Development). These techniques focus on the how, not the why – the engine, not the goal.
So I think a better question is:
What do we want to achieve?
Let’s start with tests and TDD (test-driven development). The “why question” in this case is “why do we write tests?”. A straightforward answer is “to make sure the program works”. However, what do we mean by “works”?
When programming, a developer creates a mental model of how the program should work and tries to explain that to the computer via code. Another word for this mental model is domain knowledge. A program “works” if the developer has a correct understanding of the domain and manages to capture that understanding in code.
How does this understanding of “working programs” == “encoded domain knowledge” play out in practice? It appears every time the program needs to be updated! In order to update the program while making sure that it still works, the developer doing the update must know how the program is supposed to behave. Often the code is not enough, so they need to reverse-engineer the thought process, look up documentation, or ask the original author (who hopefully remembers and is still reachable).
When programming, we want to capture knowledge in a way that is understandable for both the computer and humans, now and in the future.
Why do we want to capture knowledge?
* First and foremost, to prevent vital knowledge from being lost. As time passes, people will stop remembering and the organization will change. Old team members will pursue other projects and new members will join. When knowledge is captured and accessible for later use, the organization becomes much more resilient. The “old guard” that understands the hidden depths of the application is simply not needed (at least not for that reason). One thing is for certain: people won’t stay forever.
* If we make the computer understand the domain knowledge, we ensure that the knowledge we do have is enforced (“All cars should have four wheels”). The scope of most projects is too large to keep in human working memory at once, requiring assistance from the computer.
* New features should take current domain requirements into consideration. Often, new requirements will affect old ones – sometimes with unexpected consequences. It’s best to identify these unexpected or unwanted consequences early on, since fixing such issues tend to get more expensive over time.
* Knowledge of who can access what is extra important to enforce using the computer. We don’t want security risks where the application could leak information.
* Easy-to-access and explicit knowledge of how the system works makes on-boarding new team members much easier.
* Make it clear what the organization knows and what it does not know. This can be vital for important business (and technical) decisions.
* Makes it possible or even easy to include business people in technical decisions – “Should all cars have exactly four wheels? If no, what is the difference between a car with two wheels and a motorcycle?”.
* Avoid bugs introduced when making a seemingly innocent change that violates an implicit invariant.
* Avoid having to spend time on “defensive programming”, where the programmer makes up for limited understanding with countermeasures such as widespread null checks, assertions sprinkled across the code, and similar. This behavior scatters invariants across the entire code base, making it rigid to change.
All this fluff – What is knowledge then, more specifically?
At the 10,000-meter level: information about the domain or problem that the current author has, which affects their choices and the design of the code.
* What kind of inputs are valid/expected
* What can the output be?
* What can go wrong?
* When should this code be used? When should it not?
* Does running this code do anything but return a value? (Side effects)
* How do similar domain concepts differ? (A user with admin rights vs. an admin user?)
How is knowledge best captured?
Now you could say: but “all code is knowledge – with an if-statement it is clear that the x variable needs to be smaller than 5!”. It’s true – all code tells the computer something – the question is which solution is the most scalable and friendly to both human and computer. When the program grows, and the “smaller than 5” check moves to another function, file, or module, this previously clear fact becomes very difficult to spot.
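As a hedged sketch of the alternative (the type and function names are invented for illustration), the “smaller than 5” fact can instead be declared once as a Haskell type with a smart constructor, so the computer enforces it wherever the value travels:

```haskell
-- The invariant "x must be smaller than 5" stated once, as a type.
-- In a real code base the SmallNumber constructor would not be
-- exported from its module, so the only way to obtain a SmallNumber
-- is via mkSmallNumber - the check cannot be forgotten or drift
-- out of sync when code moves between functions, files, or modules.
newtype SmallNumber = SmallNumber Int deriving Show

mkSmallNumber :: Int -> Maybe SmallNumber
mkSmallNumber x
  | x < 5     = Just (SmallNumber x)
  | otherwise = Nothing
```

Any function that takes a SmallNumber can then rely on the invariant without re-checking it.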
Quick detour – ”X as Code”, X-as-C
Over the last two decades, approaches like “Configuration as Code” and “Infrastructure as Code” have grown tremendously in popularity and made organizations much less reliant on a small number of individuals to set up a new server or application cluster. These approaches are often declarative: the focus of the reader/programmer is on what you want to happen – not exactly how. You state “ssh should be configured”, not “command1 -x; command2 -y -z; etc…”.
This invites people who are not experts in the given technology to participate and change the wanted end state without having to understand the nitty-gritty details. The knowledge that ”ssh should be configured” is stated explicitly once, leaving the details to be sorted out somewhere else.
More examples of this: Docker, Chef, Nix among many others.
So again, how is knowledge best captured in code?
To enable our human minds to grasp ever more complex domains, we want our knowledge to be encoded in a declarative and explicit manner. It’s best if this information is contained within a limited scope, rather than spread out across the program. This protects our knowledge from being lost due to code evolving over time.
And that leads us to the main event: Knowledge-as-Code.
Knowledge-as-Code (or Know-as-C or “no-ask”) is fully language- and platform-agnostic and states that knowledge should be:
* Declared once – enforced globally
Using a central and declarative syntax makes it possible for humans to understand and decode knowledge even if the code base is vast. It also makes it easier to review changes to the requirements. If the requirements are spread out across the code base, this is almost impossible to do: e.g., if an if-statement is changed from “if noOfWheels < 5 then ..” to “if noOfWheels < 6”, how do we know whether this applies everywhere?
The declared domain rules should be enforced globally by the computer – humans are really bad at this and with a growing code base it is practically impossible to do. By capturing the domain knowledge in a single spot, we make it possible to use a computer to enforce these rules.
A centrally declared requirement prohibits conflicting definitions, such as having both “if noOfWheels < 5 then return ValidCar” and “if noOfWheels < 6 then ValidCar” in the same code base.
* All valid values should be representable.
If we want to allow numbers larger than 2^45, we should not use an Int32.
* All known unknowns should be explicitly expressed
If a function can fail, the computer should force you to handle the failure case
* Only valid values should be representable.
If a function expects a positive integer, it should be impossible to send in a negative one
* No overlap
All possible values should be orthogonal to each other. Example: we cannot say that we have either an Int or a Float, since all Ints are included in the Float type.
* All knowledge should be available to both human and computer
Humans must understand the knowledge to make changes – the computer must understand the knowledge to be able to enforce the rules.
* All feedback should be available for both human and computer.
When something goes wrong the computer should help the human to understand the issue.
* Use abstractions without knowledge loss.
If, in reality, you have a bird or a cat – do not hide it behind an IAnimal or similar. It is better to abstract it as, in pseudo-code, “Animal = Bird OR Cat”.
* Abstract using general, well-defined, non-domain concepts
Such as lists, dictionaries, Functors, and Monads.
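Several of the principles above can be illustrated in a few lines of Haskell. This is a sketch, not a definitive implementation – the car/animal domain and all names are invented for illustration:

```haskell
-- "Use abstractions without knowledge loss": say exactly which
-- animals exist, rather than hiding them behind an open-ended
-- IAnimal-style interface.
data Animal = Bird | Cat deriving (Eq, Show)

-- "Only valid values should be representable": a Car cannot be built
-- with an arbitrary wheel count; mkCar is the single gatekeeper
-- ("declared once - enforced globally").
newtype Car = Car { wheels :: Int } deriving Show

mkCar :: Int -> Maybe Car
mkCar n
  | n == 4    = Just (Car n)
  | otherwise = Nothing   -- the "known unknown" is explicit via Maybe

-- "All known unknowns should be explicitly expressed": the type of
-- mkCar forces every caller to handle the failure case - the compiler
-- enforces the rule even in a vast code base.
describe :: Int -> String
describe n = case mkCar n of
  Just car -> "A valid car: " ++ show car
  Nothing  -> "Not a car - " ++ show n ++ " wheels"
```

Changing the rule (say, to allow three-wheeled cars) then means changing mkCar in one reviewable place, and the compiler re-checks every use site.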
Tools to write Know-as-C
Most statically typed languages are capable of capturing some information in a declarative manner in what I’ll loosely call ”types” below. There are other concepts that also declaratively capture knowledge but for now we’ll use the term “types” as an umbrella term.
Since dynamically typed languages by definition do not have any way to enforce knowledge statically, nor in most cases encode it declaratively, I do not think they are a good option when trying to capture knowledge.
It is important to remember that the cost of encoding knowledge differs between languages, and different cost/return trade-offs exist depending on the team and the time frame the project operates under. However, encoding knowledge is vital if you want to know what you have built, if you are building a long-lasting product, or if trust or security is important. That being said, the “bang for the buck” will differ greatly depending on which programming language is used.
There are a bunch of more or less language-agnostic techniques that can be used as well, for example “Ghosts of departed proofs”, “Type-driven development”, “Parse, don’t validate”, “Dependent types”, or “Doctests”. As it happens, what these have in common is that they all improve knowledge symmetry and help us reach the other Know-as-C goals.
In general, humans understand some formats and computers others; we want to fuse those so both parties are included – without sacrificing either party’s understanding.
Human-only domain: comments, comment examples*, class names, function names, variable names, record field names, ADT tags, and value-level understanding**. These are potentially false, leave room for interpretation, rot over time, and fall victim to the “game of telephone”.

Both human and computer: types and function signatures. These are “understood” by the computer and understandable by a human.

* as default in most languages. ** in most languages.
It cuts both ways
Many strong (and not so strong) compilers fail at informing humans of issues in a pedagogical manner. In other words, the compiler fails to ensure knowledge symmetry. This is a non-trivial problem to solve, and it tends to be overlooked in many languages. In some cases this even leads to a situation where programmers stop seeing the compiler as their assistant and start seeing it as their antagonist.
One example that actively tries to be better is Elm. Even if Elm’s approach is not perfect in all regards, the compiler goes a long way in giving human-readable, solution oriented feedback. That being said, the complexity of the problem of good feedback increases with the competency of the language.
Could it be that this negligence towards the programmer is a contributing factor holding back languages such as Haskell? A lot of angry and long error messages have a solution that a human can describe clearly in just a few words: “That function is only partially applied”, “The arguments are in the wrong order”, or “You forgot the do keyword”.
Haskell’s error messages very clearly describe what is wrong – like “Size of sulfation plates prohibits needed chemical interaction” – but often lack the solution-oriented information: “Time to change the battery”.
This is one instance where the programmer needs a lot of language- or compiler-specific knowledge to summarize the implicit information given by the compiler into actionable concepts.
Doctest, an example of giving the computer access to more knowledge
Docstrings are comments above functions that briefly describe the function. They often contain one or more examples, showing which inputs lead to which outputs. This has multiple benefits, including giving the user of the function a quick way of understanding exactly what the function name or signature means. Since this is knowledge not understood by the computer, the Know-as-C approach would be to increase type safety rather than add human-only information using comments. Due to language limitations or other reasons, that may not always be possible.
The drawbacks of examples in the docstrings are
* The computer does not have access to these examples and therefore does not check their validity
* A human will extrapolate the example, correctly or incorrectly, and therefore expect a certain behavior
* No syntax or compiler check
If comments are necessary, this information asymmetry can be reduced using libraries such as doctest, available in several languages. Using a doctest library, you give the compiler access to the doctest examples, and they are checked during compilation/testing. This means that all the benefits for the human stay intact, while we increase the amount of knowledge that can be computer-verified.
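To make this concrete, here is a hedged sketch of what doctest-style examples look like in Haskell (the function itself is invented for illustration):

```haskell
-- | Join a first and last name into a display name.
--
-- The examples below read as ordinary documentation, but the doctest
-- tool extracts the ">>>" lines, runs them, and fails the check if
-- the printed result no longer matches the line underneath.
--
-- >>> displayName "Ada" "Lovelace"
-- "Ada Lovelace"
--
-- >>> displayName "" "Lovelace"
-- "Lovelace"
displayName :: String -> String -> String
displayName ""    lastName = lastName
displayName first lastName = first ++ " " ++ lastName
```

If someone later changes the behavior for an empty first name, the stale example stops compiling the knowledge silently into folklore – the computer flags the mismatch.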
Let us talk tests
* Are we writing tests to capture knowledge to future human readers?
Will they have practical access to that knowledge? Could that knowledge be described in a more declarative and general way?
* Are we writing tests to make more knowledge available to the computer?
* Are we using tests to help us during the initial development?
Problems with tests
Tests have incomplete coverage due to their example-based nature – “add 2 4 `shouldBe` 6”. What about “add 4 5”? Property-based testing (sometimes conflated with fuzz testing) is a good tool to counteract this, but regardless of the intent, property-based testing is essentially a convenient way to express a lot of example-based tests.
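As an illustration, here is a minimal hand-rolled sketch of the idea in Haskell. A real property-based testing library such as QuickCheck would generate the inputs randomly; enumerating a grid by hand keeps the sketch dependency-free (all names are invented):

```haskell
-- A property states the knowledge once ("addition is commutative")
-- instead of as individual examples ("add 2 4 `shouldBe` 6").
add :: Int -> Int -> Int
add = (+)

prop_commutative :: Int -> Int -> Bool
prop_commutative x y = add x y == add y x

-- Check the property over a grid of inputs. QuickCheck and friends do
-- the same thing with randomly generated (and shrunk) inputs instead.
checkProperty :: (Int -> Int -> Bool) -> Bool
checkProperty p = and [ p x y | x <- inputs, y <- inputs ]
  where inputs = [-100 .. 100]
```

The property is still, in the end, a large batch of example-based checks – but the knowledge itself is declared once, in one reviewable place.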
A lot of tests aren’t a good source of knowledge for humans – understanding the domain in general by reading individual tests can be quite difficult, with many developers preferring to just read the actual code. Tests are useful when they start failing – to find what you broke – but that is a very reactive approach.
The information given by each test (often the test name) is not something the computer can understand. It’s up to the developer to make sure that each specific test name maps to each specific test implementation, a mapping that can’t be checked statically.
Having tests can create a false sense of security, especially if using metrics such as test-coverage per line or when a lot of dependency injection is used.
To be clear, I think that tests are important, and I write a lot of them, but I view the act of having to write tests as a failure – aware that the knowledge could have been captured in a better way.
Back to the beginning. Benefits of statically typed functional languages?
So, with the established goal described above, how do we encode knowledge in order to achieve a secure, person independent and stable code base? How can we support programmers in changing and improving code without random things breaking due to lack of knowledge? We use a programming language with a feature-set that enables us to encode knowledge into our code. That means using statically typed functional languages, as they currently provide the most cost-efficient way to encode knowledge and make it available to both humans and computers.
I work as a manager (even if I try to code as much as possible) for a very rapidly growing startup and I would see it as a critical business risk to use tools with weak Know-as-C capabilities (We use Haskell, Elm and PureScript). Know-as-C allows us to make better-informed business decisions and also onboard new developers fast.
Using functional programming is a pure business decision
It is important to reiterate that most of the benefits of Know-as-C are organizational: management and future-proofing the technical platform for a growing team. However, we see that working with pure, statically typed functional programming languages off-loads a lot of communication and housekeeping to the computer, letting us focus on the things that matter. I truly believe that if more non-technical managers understood the organizational benefits of Know-as-C, they would push hard for knowledge capturing and promote languages such as Haskell.
Many thanks to Jonathan Moregård, who proof-read and contributed great suggestions and edits.
CarbonCloud welcomes Estrella to the fast-growing community of forward-thinking food companies acting for increased transparency of climate footprints. Estrella is going live in early 2021 with its climate-labeled snacks.
CarbonCloud offers a science based web tool to calculate and communicate the climate footprints of food products with a label on the packaging. CarbonCloud thinks that this should be the standard, giving consumers the means to make a conscious decision, and food companies the tools to support the transition to sustainable food production.
“We are happy to have a forward-thinking and climate-aware company like Estrella joining our quest for a sustainable food industry”, says the CEO of CarbonCloud, David Bryngelsson, Ph.D.
He and his team have been working closely with Estrella to make sure their calculations are accurate and can be compared to those of other food producers with a common yardstick.
Josefin Hugosson, Trade & CSR Marketing Manager at Estrella, says: “We have been working with sustainability for years and for us it is top priority to do good stuff! But we also want to improve how we communicate our achievements and what we are working on right now. Through emphasising our sustainability work we hope to inspire other businesses in our field and at the same time make our consumers aware of the climate footprint of snacks.”
The push for climate labels and transparency in the food industry is gaining traction among both producers and consumers. “It is impressive to see how much work Estrella has put into shaping a greener snack”, says David Bryngelsson, who is eager to point out that this is just the beginning of an exciting partnership for climate improvement. The tool will now enable Estrella to communicate their hard work in an objective way and inspire more businesses to follow.
More info: email@example.com, phone: +46-704 402125
CarbonCloud, a startup spun out of world-leading research on food and climate at Chalmers University of Technology, announced today a €1,000,000 financing round led by Finnish venture capital firm Maki.vc and German TS Ventures.
CarbonCloud develops innovative software that helps companies within the food industry to calculate and communicate the climate footprints of their products at scale. The company has already onboarded high-profile paying customers who lead the way on climate labels on food, including names like plant-based milk brand Oatly, who decided to put labels online and on product packaging. Other customers include Naturli Foods, Sproud and Nude.
CarbonCloud’s model is based on twenty years of research and has been reviewed in connection with a wide range of scientific publications. It has been used by the Swedish Environmental Protection Agency and is also the basis for international cooperation, for example with Princeton University and the Potsdam Institute for Climate Impact Research (PIK).
“The world needs a sustainable re-boot to get our economies going as the Corona pandemic levels out. Now is the time to seriously focus on the climate, so we don’t walk out from one disaster directly into another”, says David Bryngelsson, CEO and co-founder of CarbonCloud. “Food and agriculture are globally responsible for almost 25% of the climate problem, and end-consumers increasingly realize that they can make a difference by purchasing food products with transparent climate labels. Climate footprints on food are moving from the sustainability teams to the marketing teams. It matters for business.”
The food industry has been lagging behind other sectors on climate change, largely because the science behind calculating climate footprints on food is complicated. It has typically required expensive specialist consultants to perform the calculations, which has hampered any large-scale effort. CarbonCloud’s platform enables performing climate footprint calculations for products with industry-leading precision in-house, at a fraction of the cost and time required before. The platform allows comparisons between products with a common yardstick, and lets users share their results with each other or with the public.
“It is time to digitize the science of climate change and the bookkeeping of climate footprints”, says Tim Schumacher (TS Ventures), a German investor and entrepreneur who has already backed many successful climate startups. “CarbonCloud delivers precisely the solution we need to make it possible and attractive for the industry to truly keep track of their emissions and to tell the world about it.”

“We love investing in teams making products that help make sustainable choices a habit. CarbonCloud’s vision for how to make a change in the food industry is truly unique, putting keys into brands’ and consumers’ hands in ways we’ve never seen before. With their experience, there isn’t a better team in the world to build this platform,” says Pauliina, Investment Director at Maki.vc.
The investment enables CarbonCloud to onboard new customers and expand their operations, and the team is now looking for new talent within sales, marketing and development to join them on their journey to put climate footprint data on all food products globally.
This is an interesting and complicated question. CarbonCloud holds the following position: If the life cycle of a product leads to a net release of greenhouse gases, the product should not be referred to as “climate neutral” even if the emissions are compensated for with carbon offsets.
What is carbon offsetting?
Some companies compensate for their climate footprint by supporting projects around the world that either mitigate emissions of greenhouse gases compared to a baseline or remove greenhouse gases from the atmosphere. This is known as “carbon offsetting”. The intentions are praiseworthy, and it can definitely make sense to communicate about them to the public; however, not by claiming to be climate neutral. Instead we encourage statements of the type: “Our climate footprint is XX kg CO2e. We work on reducing our greenhouse gas emissions. We also invest in project YY that we believe can contribute in the fight against climate change.” This is the honest and transparent way. Why then, does the positive not simply cancel out the negative? There are two main reasons.
1: It is very hard to know how large an effect the projects really have. In many cases, they do not even seem to work at all.
2: There is a clear risk of double counting, meaning that several parties take credit for the same emission reductions, or greenhouse gas removals. Let us take a deeper look at these issues.
Does carbon offsetting work?
This is the million-dollar question. In some cases, it is inherently hard to assess. In other cases, we know that the answer is no. For each project we need to ask ourselves the following:
Does the project deliver the intended results? Things do not always go as planned. A large project in Kenya invested in energy efficient stoves. As it turned out, most of them were never used. Yet, climate offsets were certified and sold. In other projects we will not know the outcome for a very long time. Planted trees, for instance, only absorb and store carbon as long as they are not cut down. How can this be guaranteed for hundreds of years in countries such as Uganda, ranked as one of the most corrupt countries in the world?
Is the project “additional”? In some cases, the project would have taken place anyway, even without the income from carbon offsets. Wind power farms, for instance, produce carbon offsets based on the assumption that the electricity produced replaces coal power. But many of the countries that host the carbon offsetting projects are growing economies with a steadily increasing energy demand. The wind power farms may very well have been built anyway. Additionality is generally an explicit requirement for carbon offsetting projects. But unfortunately, the analysis of whether a project is additional is often highly subjective and hard to evaluate in a transparent way. A German research study (Cames, 2016) found that only 2% of the investigated projects had a high probability of being additional.
Is leakage avoided? Leakage is when greenhouse gas emissions increase somewhere else, as a consequence of the carbon offsetting project. If trees are planted on land used by the local population for forage or agriculture, this may lead to other trees being cut down elsewhere. The local farmers may have no other options than to clear vegetation at a new location in order to continue their agricultural activities. This becomes at best a zero-sum game for the climate but a loss for the farmers who need to move, and a loss for biodiversity since planted forests host less biodiversity than natural vegetation.
Who takes the credit?
This is the
second question we need to ask. In the business of carbon offsetting, it is not
unusual that more than one party takes credit for the same action, resulting in
deceptive book-keeping. Let us use an example: trees are planted in Uganda in a
carbon-offsetting project. Company X buys the carbon offsets and labels its products
as “climate neutral”. This means that company X takes credit for the removal of
greenhouse gases. However, it is not unlikely that Uganda also accounts for tree
planting in the national inventories of greenhouse gas emissions. In that case
the action is double counted.
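The accounting problem can be made concrete with a small sketch. The numbers below are hypothetical, not from any real project: one removal of 1,000 tonnes of CO2 is claimed both by the company buying the offsets and by the host country's national inventory, so the books show twice the climate benefit the atmosphere actually sees.

```python
# Hypothetical illustration of double counting in carbon accounting.
# A tree-planting project removes 1,000 t CO2 from the atmosphere.
actual_removal_t = 1_000

# Company X buys the offsets and books the full removal against its footprint.
company_claimed_t = 1_000

# The host country also counts the same trees in its national inventory.
country_claimed_t = 1_000

total_claimed_t = company_claimed_t + country_claimed_t
overstatement_t = total_claimed_t - actual_removal_t

print(f"Atmosphere sees: {actual_removal_t} t CO2 removed")
print(f"Books claim:     {total_claimed_t} t CO2 removed")
print(f"Overstatement:   {overstatement_t} t CO2")  # the double-counted tonnes
```

The ledger reports 2,000 tonnes removed when only 1,000 tonnes actually left the atmosphere, which is exactly the kind of deceptive book-keeping described above.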
Let us take another example. A wind-power plant is built in Brazil. Carbon
offsets are sold, based on the assumption that the electricity replaces coal
power. Avoiding double counting means that Brazil will have to assume that the
electricity produced comes from coal power, although it actually comes from
wind. This does not lie in the interest of Brazil, who has targets to reach
under the Paris agreement. If enough carbon offsetting credits are sold, Brazil
could end up in a situation where they have only renewable energy in reality
but would need to keep on reporting as if they had only coal power, since they
have sold the right to the emission reductions to other parties. The
negotiations of the Paris agreement have shown us how difficult it is to agree
on rules that avoid double counting. Reaching our climate targets requires that
we BOTH reduce emissions in all countries around the world AND remove greenhouse
gases from the atmosphere, for instance by planting trees. Double counting
blurs our vision and makes it harder to keep track of what remains to be done. If
we look specifically at the food industry, we see that it is currently
responsible for about 25% of global greenhouse gas emissions (IPCC, 2014). To
fulfill the Paris agreement and stop climate change these emissions will have
to be reduced, even if all other emissions are reduced to zero! Crediting the
food industry with reductions in other sectors can hence not be the solution
for the food industry and such claims have the risk of delaying real and
effective measures from being made.
What do we suggest?
There are technologies that you could argue actually work. One example is direct air capture, involving facilities that capture carbon dioxide from the air so that it can be stored below ground. It is a technology that has a high probability of giving the intended results. The likelihood is very low that the carbon dioxide will escape from its storage below ground. It is a costly technology with no other positive side effects. Therefore, it can be considered “additional”, since it will not be implemented unless someone pays for it. There are other technologies for climate compensation that you could argue also work. We applaud any engagement in such projects. However, our basic appeal is this: find out your climate footprint and communicate it to your customers without smokescreens. CarbonCloud is here to help!
Cames, M., Harthan, R. O., Füssler, J., Lazarus, M., Lee, C., Erickson, P., & Spalding-Fecher, R. (2016). How additional is the Clean Development Mechanism? Analysis of the application of current tools and proposed alternatives. Öko-Institut e.V.
IPCC. (2014). Mitigation of climate change. Contribution of Working Group III to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change, 1454.
This week, CarbonCloud customer Compass Group introduced climate labels on the lunch menus in the restaurant of the Swedish Parliament. This is the restaurant where most of the 349 members of the Swedish parliament and their guests eat lunch every day. With the help of CarbonCloud, Compass Group will now make it possible for the members of parliament to make climate-smart decisions also during the lunch break, by calculating the climate footprint of every meal that will be served and by introducing climate labels on the menus.
Compass Group is also collaborating with CarbonCloud to offer climate-smart food services to companies such as ICA and SEB.
On 16 December, the CIO Awards are held in Stockholm for the fifteenth year in a row. The CIO Awards is a gala dedicated to the IT industry and to development and progress within IT. During the gala, CIO Sweden presents four awards, including “Sustainable Project of the Year” (“Årets hållbara projekt”). Competing against Lantmännen, the City of Gothenburg, and Arbetsförmedlingen/Iteam, CarbonCloud and the company’s platform, CarbonData, is one of the four finalists.
There are several criteria for Sustainable Project of the Year. Among other things, the project should, through smart use of IT, contribute to increased revenue, reduced costs, and reduced environmental impact. The project should reduce dependence on environmentally harmful factors through more efficient flows or smart use of IT, and help the end customer reduce their environmental impact. CarbonData has met these criteria and more, taking CarbonCloud to the final stage of the competition.
CarbonData is built on a model that, with good precision, models the different steps required to produce a food product. The model covers the entire chain, including all parts of agriculture, all the way to the store shelf. With the help of CarbonData, food companies can, among other things, see where in the food production process measures will have the greatest effect on the product’s total climate footprint.
“That CarbonCloud and the CarbonData platform is one of the four remaining contestants is a validation that we have built a product with great expertise and high potential. We face very tough competition in our category, and we are the only startup, among giants with large IT budgets, to have made it all the way to the final. We travel up to Stockholm and the gala excited but humble,” says David Bryngelsson, CEO of CarbonCloud. The CIO Awards are presented on 16 December at Berns salonger in Stockholm. For more information, see the CIO Awards website.