If we’ve met before, you have probably heard from us that consumer demand for sustainable products is increasing. And what better place to study this trend in depth than where the action happens: grocery stores. The latest McKinsey report ‘The path forward for sustainability in European grocery retail’ does exactly that.
But let’s start with the basics: The food industry is one of the biggest polluters, driving the climate crisis through agriculture, energy use, and transportation. According to the McKinsey report, it accounts for a quarter of global greenhouse gas emissions. Did you know that? Of course you did! Evidently, consumers and grocers know that as well.
Not only are consumers aware of this, it has started to become a selection criterion in their shopping. And within the food system, consumer choice is one of the biggest drivers of change. Ready for the hard data?
– The market for food products clearly labelled as sustainable is growing four times faster than the market average. Moreover, sustainable companies proved more resilient during the pandemic-driven market crisis.
– Half of the respondents to the same McKinsey survey said that they are willing to pay extra for sustainable products. To be more specific:
– Women, higher-income shoppers, and Gen Z (the emerging market dominator) are more likely to purchase products marketed as sustainable.
– Produce, meat, home and personal care products are the categories where sustainability is most likely to be a selection criterion.
– The consumers’ main areas of interest are reducing greenhouse gas emissions and conserving raw materials.
– Compared with similar surveys in the past, this trend is only intensifying.
The pressure is high – but it’s not only the consumers’ choices that make a difference. How are retailers, i.e., grocery stores, responding to this? Sit tight.
– 100%; all of them; every single respondent in the McKinsey report survey has committed to being climate neutral or climate positive in the long run.
Sidenote: Climate neutrality is certainly a bigger discussion and we had some thoughts on that here.
– More than 30% had committed to science-based emission-reduction targets by the end of 2020.
The McKinsey report highlights that sustainability is one of the top three trends that will dominate the grocery retail industry in the coming months. We couldn’t be happier seeing retailer and consumer attitudes towards sustainability grow increasingly positive. But the hard truth is, attitudes won’t make a real change; actions will.
So what can the industry do? Here’s what the McKinsey report recommends:
– Encourage customer demand: Make clear information available to customers regarding sustainability – in essence, increase transparency regarding the policies and actions of retailers as well as supplying companies. (If only there were a label and a knowledge hub for that, are we right?!)
– Take a good look at their operations: First and foremost, fully understand their climate footprint and identify the most impactful areas for improvement. Simple to say, but it requires a fundamental change of mindset.
– Get suppliers on board: Suppliers account for the lion’s share of a grocery store’s footprint – but that is good news: this is where the real impact lies! And we are pasting verbatim from the report, so that you don’t accuse us of being too salesy:
‘Retailers can set standards for their suppliers, ensure emissions traceability, and partner with suppliers to create innovative solutions.‘
We can’t help but feel that we have a big part to play in this. If only there were a platform that helps food producers and their suppliers accurately calculate, understand, and market their climate footprint, and get the best possible start to lowering it – along with the courage to do it. Oh wait…!
You have decided to communicate the climate footprint of your food product to your consumers – great call, it’s a win-win. So, what data do you need? In the absence of regulations, you, as a food producer, may have to answer this yourself – though frankly, you should not have to. One question the food industry frequently struggles to agree on is which stages of the product’s life cycle should be included.
As with any data selection, the right data set depends on 1) what data you can have, and 2) what you want your analysis to do – and those are the only things you need to decide. Since you are investigating the topic, we assume that the goal of your analysis is to lower your climate footprint and gain more customers while doing it. In that case, the scope that serves your goal is cradle-to-shelf – the scope you need to 1) lower your climate footprint, and 2) accurately communicate it to your customers. Do you still want to know why? We see you, we appreciate you, and we relate. Let us explain.
Cradle-to-shelf includes the climate footprint of a food product from the natural resource stage to refinement, packaging, and distribution. Essentially, the calculation includes the life cycle of the product until the point it hits the shelves of the grocery store.
What do I win with cradle-to-shelf?
You win your consumers’ trust and choice. To start with, when consumers see the climate footprint of a product, they do not assume that their own post-shelf actions are included. And consumers can only ‘assume’, because to this day there are no regulations, so consumers have no predefined expectations. This is a problem in itself: lacking a standard means lacking the grounds to compare, form a solid criterion, and select (and it is well established that sustainability is a selection criterion for consumers).
While we’re on the topic of standardization… It is evident that the industry needs a standard, and it will happen sooner rather than later. Food producers as well as consumers need it because they need a level playing field on which to compare and select. For this goal too, cradle-to-shelf provides the most accurate, controllable, and actionable set of data to build a fair comparison upon.
Secondly, you win control, accuracy, and level ground for decisions. Cradle-to-shelf falls 100% within the scope a food producer knows and can impact. When a food producer selects this type of data set, their final climate footprint number is as precise as they come. Consequently, the areas and actions with the biggest potential for lowering the footprint are sharply highlighted – a lot of fancy words to say that you can easily and quickly spot exactly what you, as a food producer, can do to lower your climate footprint.
All in all, cradle-to-shelf is the data you can have as well as the data you need to lower your climate footprint and communicate it to your customers truthfully and in terms they understand. Mission accomplished!
Still curious about the alternative? Let’s investigate that too. The alternative scope is cradle-to-grave, which includes all stages of the production process from natural resource to refinement and distribution, as well as what happens after the consumer picks up the product, i.e., consumption and disposal. In other words, your cradle-to-grave climate footprint would cover the entire life cycle of the product.
What is it good for?
The cradle-to-grave approach gives a holistic picture of how your product contributes to the global footprint. If the goal of your analysis is a general ‘know-what’ of your climate footprint, then cradle-to-grave is a relevant data set. This is usually the goal of researchers, and the reason why they select cradle-to-grave as their scope. However, this analysis is devoid of actionability, which misses the goal of the food industry.
What do I lose with cradle-to-grave?
To put it simply, you lose an accurate description of your reality and the opportunity to communicate it truthfully. There are simply too many assumptions in a cradle-to-grave calculation of your climate footprint. Food producers have virtually no control over what happens to their product after it is picked off the shelf. The only way to include the post-shelf data is to guess it, to a degree that falls far outside the bounds of scientific integrity.
Think of the life cycle of a bag of frozen vegetables. To calculate the shelf-to-grave part, one would need to know how the bag will be transported home from the grocery store – on foot, by bicycle, by car? If by car, what kind? What kind of electricity does the household use – renewable, fossil, a mix? How long will the bag stay in the freezer? How many other products share the freezer? On what kind of stove will the vegetables be cooked? Will the packaging be recycled or disposed of with general waste? These are simply too many variables to assume if the final footprint is to be actionable or fairly comparable. Moreover, the spectrum of possible answers to these questions can make a rather large difference to the final calculation of the climate footprint.
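To see how wide the post-shelf range can get, here is a minimal sketch of the questions above as min/max scenarios. Every number below is a made-up, purely illustrative placeholder – not measured data – chosen only to show how the same bag of vegetables can end up with very different “grave” footprints depending on the answers:

```typescript
// Purely illustrative: every figure is a hypothetical placeholder, in
// kg CO2-eq attributed to one bag of frozen vegetables post-shelf.
const postShelfUnknowns = {
  transportHome: { min: 0.0, max: 0.5 },       // bicycle vs. a lone car trip
  freezerStorage: { min: 0.05, max: 0.6 },     // renewable vs. fossil grid, weeks in the freezer
  cooking: { min: 0.05, max: 0.3 },            // efficient induction vs. old electric stove
  packagingEndOfLife: { min: 0.01, max: 0.1 }, // recycled vs. general waste
};

const ranges = Object.values(postShelfUnknowns);
const bestCase = ranges.reduce((sum, r) => sum + r.min, 0);
const worstCase = ranges.reduce((sum, r) => sum + r.max, 0);

// The same bag, with nothing about its production changed, swings by
// more than an order of magnitude depending on guessed consumer behavior.
console.log(`post-shelf guess: ${bestCase.toFixed(2)}-${worstCase.toFixed(2)} kg CO2-eq`);
```

The point is not the placeholder values themselves but the spread between best and worst case: everything in that spread is pure assumption, outside the producer’s control.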
Secondly, you may lose opportunities to lower your footprint. If the consumers’ actions are included in the final climate footprint, food producers may find many areas of improvement in the estimated shelf-to-grave segment and overlook what is actually in their power to influence: cradle-to-shelf. On the other side of the coin, consumers may focus on what they can do to lower the product’s footprint – a great initiative, BUT! It shifts the focus away from where the biggest impact potential lies: the food production process itself.
We hear you, that’s a lot of information, so maybe circling back helps at this point: What is the right scope for calculating your climate footprint – cradle-to-grave or cradle-to-shelf? As a food producer, you want to win by lowering your climate footprint and gaining more customers with your authenticity. The first steps are to know and understand your climate footprint – not to wrestle with methodological decisions. For you, cradle-to-shelf is the most actionable data set for comprehending what is in your power – and acting on it!
Making a climate footprint assessment can sometimes be difficult. The easy case is when you have one piece of land that produces one product. Then you simply sum up all emissions associated with that piece of land and divide by the amount produced, and that is the climate footprint per kg of the product.
However, this is seldom the case. In many cases, you get several products from one process. For instance, when you produce milk, you also get meat from slaughtered cows and calves. When you grow wheat, you also get straw. This is a general problem: most food products are produced in an interconnected web. The question of allocation is essentially how large a share of the emissions should be attributed to each product.
Soybeans are often processed into two fractions: soymeal, which is used as fodder, and soy oil, one of the most widely used cooking oils globally. Let’s assume that 1 kg of soybeans causes 1 kg of CO2-eq emissions. For each kg of soybeans, you get 800 g of meal and 200 g of oil. The question, then, is how large the emissions caused by 1 kg of oil or meal are.
One method is to allocate based on weight. That means both 1 kg of meal and 1 kg of oil are assumed to cause 1 kg of CO2-eq emissions. But you can argue that what is relevant here is the energy content – that is what makes you full. Soy oil has six times higher energy content than meal. If you allocate the emissions based on energy content, 1 kg of meal instead causes only 0,5 kg CO2-eq, whereas 1 kg of oil causes 3 kg CO2-eq.
Another way to allocate is based on economic value. More energy is not necessarily what you want – it may just make you obese. But you could argue that the price you pay is an indication of the product’s worth to you. And in contrast to the energy content, soy oil costs only around twice the price of soymeal. Using economic allocation, soymeal causes roughly 0,8 kg CO2-eq, whereas the oil causes roughly 1,7 kg per kg of product.
Regardless of how you allocate the emissions, the total emissions from soy stay the same: 1 kg CO2-eq per kg (our assumption). But the responsibility for those emissions varies substantially depending on the allocation method. There is no strict scientific answer to which allocation method is the true one. It is a matter of perspective.
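The three allocation methods are the same calculation with a different weighting. Here is a minimal sketch using the soybean numbers above (the 1 kg CO2-eq total, the 800 g / 200 g split, the 6× energy ratio, and the 2× price ratio are the assumptions from the text; the code is only for illustration):

```typescript
interface Fraction {
  name: string;
  massKg: number;      // kg of fraction per kg of soybeans
  energyPerKg: number; // relative energy content per kg (meal = 1)
  pricePerKg: number;  // relative price per kg (meal = 1)
}

const totalEmissions = 1.0; // kg CO2-eq per kg of soybeans (assumed)

const fractions: Fraction[] = [
  { name: "soymeal", massKg: 0.8, energyPerKg: 1, pricePerKg: 1 },
  { name: "soy oil", massKg: 0.2, energyPerKg: 6, pricePerKg: 2 },
];

// Generic allocation: split the total emissions in proportion to some
// per-fraction weighting, then express the result per kg of fraction.
function allocate(weightOf: (f: Fraction) => number): Map<string, number> {
  const total = fractions.reduce((sum, f) => sum + weightOf(f), 0);
  return new Map(
    fractions.map((f): [string, number] => [
      f.name,
      (totalEmissions * weightOf(f)) / total / f.massKg,
    ])
  );
}

const byMass = allocate((f) => f.massKg);                   // meal: 1.0, oil: 1.0
const byEnergy = allocate((f) => f.massKg * f.energyPerKg); // meal: 0.5, oil: 3.0
const byValue = allocate((f) => f.massKg * f.pricePerKg);   // meal: ~0.83, oil: ~1.67
```

Note that the exact economic-allocation figures are 0,83 and 1,67 kg CO2-eq per kg; the text rounds them. Whichever weighting you pick, the allocated emissions always sum back to the original 1 kg CO2-eq per kg of soybeans.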
We know that a climate-friendly diet is plant-based, with the potential addition of climate-friendly animal products such as eggs, fish, and poultry. Sometimes there is a concern that a climate-friendly diet may be a problem from a health and nutrition perspective.
Let’s start with nutrition. It is easy to construct a climate-friendly diet that contains all relevant vitamins and minerals. But is that really how people eat? To find out, researchers had 1500 randomly selected Swedish people write down everything they ate for four days. The results showed that those with the lowest emissions caused around 1,5 tonnes of CO2-eq per person from their food consumption, compared to around 2 tonnes in the highest group. But there were no relevant differences in nutritional intake. This means that, both in theory and in practice, there is no trade-off between eating environmentally friendly food and eating nutritiously.
Concerning health, it is a bit trickier, as diseases typically evolve over decades and we have not followed people on specifically climate-friendly diets for that long. However, researchers do know that a high intake of red meat, which also causes large emissions, is associated with certain types of cancer, while a high intake of vegetables prevents certain types of cancer. Using these relationships, researchers can estimate how many lives could be saved by adopting different kinds of diets. A study in the UK found that if people ate a diet with 17% lower emissions, average life expectancy would increase by 8 months. Further, a global study found that a diet with somewhat lower emissions would save 5 million lives. If all human populations adopted a vegan diet, emissions would decrease even more, and up to 8 million lives would be saved.
We can thus conclude that eating more climate-friendly food still gives us an adequate amount of vitamins and minerals. More importantly, it would mean a lower prevalence of certain diseases – which would actually save lives.
Global warming is one of the biggest environmental challenges of our time, and food production is responsible for one-quarter of the world’s greenhouse gas emissions. In this context, CarbonCloud is launching a website where you can find country-specific climate footprints of annual crops from all over the world – a first important step towards publishing climate footprints for all food products. To enable informed choices for sustainability-engaged producers and consumers, we are providing it all for free, in stark contrast to most climate data, which either does not exist at all or is hidden behind expensive paywalls.
“Until now, climate footprints have been slowly calculated by hand. We’re using modern technology to solve the problems of the future, automating the calculation process and handing it over to computers. This allows us to calculate massive amounts of footprints simultaneously with consistent quality.” – David Bryngelsson, CEO at CarbonCloud
In order to stop global warming and meet the ambitious climate goals stated in the Paris Agreement, there is an increasing demand for convenient and trustworthy tools to measure the climate impact of goods and food products. Big sustainability actors in the food sector are already using CarbonCloud software to keep track of their climate footprints. Some of them have even gone one step further than just publishing their footprints and have launched campaigns that encourage their customers to make green choices, e.g., Oatly’s “Show us your numbers” campaign and Estrella’s drive for “Fair snacks”.
A big difficulty in comparing climate footprint calculations is that assessments are made by individual experts using different methods and data sets. We have set out to change this by releasing massive amounts of consistent climate footprint data for free, turning the focus to what can be done to reduce emissions now that we have comparable data.
“We are releasing all these footprints for free because we want to help solve the climate crisis and give more food producers the possibility to calculate their specific climate footprints and show their numbers.”
– Mikael Tönnberg, CTO at CarbonCloud
Automating the calculations for farmgate annual crops at unprecedented scale is just the start. The next step is perennial crops, to be followed by livestock products, refined products, and more. Over time, the goal is to cover all food products and also reach the end-consumer market. As new yield data come in every year, or as science makes progress on the underlying mechanisms or data collection, all footprints are automatically recalculated and updated. Customers using our climate labeling tool get automatic access to up-to-date, high-precision footprints they can use when modeling their production processes. This data set will improve in both scope and precision over time, so if you cannot find what you are looking for, check in again and it may well be there.
For more information please contact:
CarbonCloud is a research-based food-tech startup with a disruptive web-based SaaS solution that enables detailed calculations of climate footprints of food products and production processes. This enables food producers across the world to calculate and analyze the climate footprints of their product portfolios at a fraction of the cost and time spent on traditional consultancy-based life-cycle assessments. Headquarters in Gothenburg, Sweden. It is privately held and backed by international investors. www.carboncloud.com
Benefits of statically typed functional programming? Wrong question.
“What are the benefits of X?” is a rather natural question to ask when you are curious about a subject. However, the response will be very different depending on who gives the answer.
Asking ”what are the benefits of a Formula 1 car?” would result in very different replies if you asked a race driver, farmer, carpenter or a submarine captain.
I think a serious source of miscommunication could be eliminated if we spent a bit more time talking about the desired end goal and tried to find a non-fluffy answer. We as people have a tendency to assume everyone has the same goal – that ”we are all farmers”.
This problem often comes up when discussing XDD techniques (Domain-Driven Design (DDD), Test-Driven Development (TDD), Type-Driven Design (TDD)). These techniques focus on the how, not the why – the engine, not the goal.
So I think a better question is:
What do we want to achieve?
Let’s start with tests and TDD (test-driven development). The “why question” in this case is “why do we write tests?”. A straightforward answer is “to make sure the program works”. However, what do we mean by “works”?
When programming, a developer creates a mental model of how the program should work and tries to explain that to the computer via code. Another word for this mental model is domain knowledge. A program “works” if the developer has a correct understanding of the domain and manages to capture that understanding in code.
How does this understanding of “working programs” == “encoded domain knowledge” play out in practice? It appears every time the program needs to be updated! In order to update the program while making sure that it still works, the developer doing the update must know how the program is supposed to behave. Often the code alone is not enough, so they need to reverse-engineer the thought process, look up documentation, or ask the original author (who hopefully remembers and is still reachable).
When programming, we want to capture knowledge in a way that is understandable for both the computer and humans, now and in the future.
Why do we want to capture knowledge?
* First and foremost, to avoid vital knowledge being lost. As time passes, people will stop remembering and the organization will change. Old team members will pursue other projects and new members will join. When knowledge is captured and accessible for later use, the organization becomes much more resilient. The “old guard” that understands the hidden depths of the application is simply not needed (at least not for that reason). One thing is certain: people won’t stay forever.
* If we make the computer understand the domain knowledge, we ensure that the knowledge we do have is enforced (“all cars should have four wheels”). The scope of most projects is too large to keep in human working memory at once, requiring assistance from the computer.
* New features should take current domain requirements into consideration. Often, new requirements will affect old ones – sometimes with unexpected consequences. It’s best to identify these unexpected or unwanted consequences early on, since fixing such issues tend to get more expensive over time.
* Knowledge of who can access what is extra important to enforce using the computer. We don’t want security risks where the application could leak information.
* Easy-to-access and explicit knowledge of how the system works makes on-boarding new team members much easier.
* Make it clear what the organization knows and what it does not know. This can be vital for important business (and technical) decisions.
* Make it possible, or even easy, to include business people in technical decisions – “Should all cars have exactly four wheels? If not, what is the difference between a car with two wheels and a motorcycle?”.
* Avoid bugs introduced when making a seemingly innocent change that violates an implicit invariant.
* Avoid having to spend time on “defensive programming”, where the programmer makes up for limited understanding with countermeasures such as widespread null checks, assertions sprinkled across the code, and similar. This practice spreads implicit invariants across the entire code base, making it rigid and hard to change.
All this fluff – What is knowledge then, more specifically?
At the 10 000-meter level: information about the domain or problem that the current author has, which affects their choices and the design of the code.
* What kind of inputs are valid/expected
* What can the output be?
* What can go wrong?
* When should this code be used? When should it not?
* Does running this code do anything but return a value? (Side effects)
* How do similar domain concepts differ? (A user with admin rights vs. an admin user?)
How is knowledge best captured?
Now you could say: ”but all code is knowledge – with an if-statement it is clear that the x variable needs to be smaller than 5!”. It’s true – all code tells the computer something. The question is which solution is the most scalable and friendly to both human and computer. When the program grows and the ”smaller than 5” check moves to another function, file, or module, this previously clear fact becomes very difficult to spot.
Quick detour – ”X as Code”, X-as-C
Over the last two decades, approaches like ”Configuration-as-Code” and ”Infrastructure-as-Code” have grown tremendously in popularity and made organizations much less reliant on a small number of individuals to set up a new server or application cluster. These approaches are often declarative: the focus of the reader/programmer is on what you want to happen – not exactly how. You state ”ssh should be configured”, not ”command1 -x; command2 -y -z; etc…”.
This invites people who are not experts in the given technology to participate and change the desired end state without having to understand the nitty-gritty details. The knowledge that ”ssh should be configured” is stated explicitly once, leaving the details to be sorted out elsewhere.
More examples of this: Docker, Chef, Nix among many others.
So again, how is knowledge best captured in code?
To enable our human minds to grasp ever more complex domains, we want our knowledge to be encoded in a declarative and explicit manner. It’s best if this information is contained within a limited scope, rather than spread out across the program. This protects our knowledge from being lost due to code evolving over time.
And that leads us to the main event: Knowledge-as-Code.
Knowledge-as-Code (or Know-as-C, or ”no-ask”) is fully language- and platform-agnostic and states that knowledge should be:
* Declared once – enforced globally
Using a central and declarative syntax makes it possible for humans to understand and decode knowledge even if the code base is vast. It also makes it easier to review changes to the requirements. If the requirements are spread out across the code base, this is almost impossible to do: e.g., if an if-statement is changed from ”if noOfWheels < 5 then ..” to ”if noOfWheels < 6”, how do we know whether this applies everywhere?
The declared domain rules should be enforced globally by the computer – humans are really bad at this and with a growing code base it is practically impossible to do. By capturing the domain knowledge in a single spot, we make it possible to use a computer to enforce these rules.
A centrally declared requirement prohibits conflicting definitions, such as having both ”if noOfWheels < 5 then return ValidCar” and ”if noOfWheels < 6 then ValidCar” in the same code base.
* All valid values should be representable.
If we want to allow numbers larger than 2^45, we should not use an Int32.
* All known unknowns should be explicitly expressed
If a function can fail, the computer should force you to handle the failure case
* Only valid values should be representable.
If a function expects a positive integer, it should be impossible to send in a negative one
* No overlap
All possible values should be orthogonal to each other. Example: we cannot say that we have either an Int or a Float, since all Ints are included in the Float type.
* All knowledge should be available to both human and computer
Humans must understand the knowledge to make changes – the computer must understand the knowledge to be able to enforce the rules.
* All feedback should be available for both human and computer.
When something goes wrong the computer should help the human to understand the issue.
* Use abstractions without knowledge loss.
If, in reality, you have a bird or a cat – do not hide it behind an IAnimal or similar. Better to abstract it as, in pseudo-code, ”Animal = Bird OR Cat”.
* Abstract using general, well-defined, non-domain concepts
Such as lists, Dictionary, Functor, Monad
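A few of these principles can be sketched in code. Here is a minimal TypeScript sketch – the car and animal examples come from the text above, but the specific types, field names, and the `Result` helper are my own illustrative inventions:

```typescript
// "Declared once – enforced globally": the wheel rule lives in one place.
const MAX_CAR_WHEELS = 5;

type Car = { readonly kind: "car"; readonly wheels: number };

// "All known unknowns should be explicitly expressed": construction can
// fail, and the return type forces every caller to handle that case.
type Result<T, E> = { ok: true; value: T } | { ok: false; error: E };

// "Only valid values should be representable": a Car can only be built
// through a constructor that checks the rule, so no code path can hold
// an invalid car.
function makeCar(wheels: number): Result<Car, string> {
  if (wheels < 1 || wheels >= MAX_CAR_WHEELS) {
    return { ok: false, error: `a car cannot have ${wheels} wheels` };
  }
  return { ok: true, value: { kind: "car", wheels } };
}

// "Use abstractions without knowledge loss": Animal = Bird OR Cat,
// rather than an opaque IAnimal interface.
type Animal =
  | { kind: "bird"; wingspanCm: number }
  | { kind: "cat"; indoor: boolean };

function describe(a: Animal): string {
  // "No overlap": the kinds are orthogonal, so the compiler can check
  // that every case is covered exactly once.
  switch (a.kind) {
    case "bird":
      return `a bird with a ${a.wingspanCm} cm wingspan`;
    case "cat":
      return a.indoor ? "an indoor cat" : "an outdoor cat";
  }
}
```

The wheel rule is declared once and enforced everywhere `makeCar` is used; the failure case and the Bird/Cat distinction are knowledge the compiler now shares with the human reader.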
Tools to write Know-as-C
Most statically typed languages are capable of capturing some information declaratively in what I’ll loosely call ”types” below. There are other concepts that also capture knowledge declaratively, but for now we’ll use ”types” as an umbrella term.
Since dynamically typed languages by definition have no way to enforce knowledge statically, nor in most cases to encode it declaratively, I do not think they are a good option when trying to capture knowledge.
It is important to remember that the cost of encoding knowledge differs between languages, and the point where cost meets return depends on the team and the time frame the project operates under. However, encoding knowledge is vital if you want to know what you have built, if you are building a long-lasting product, or where trust or security is important. That being said, the ”bang for the buck” will differ greatly depending on which programming language is used.
There is also a bunch of more or less language-agnostic techniques that can be used, for example ”Ghosts of departed proofs”, ”type-driven development”, ”parse, don’t validate”, ”dependent types”, and ”doctests”. As it happens, what these have in common is that they all improve knowledge symmetry and help us reach the other Know-as-C goals.
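To make one of these concrete, here is a tiny sketch of ”parse, don’t validate”: instead of re-checking a raw value all over the code base, you parse it once into a richer type that carries the proof. The `Email` brand and the simplistic regex below are my own illustrative inventions (the branding trick is a common TypeScript idiom, not part of any of the techniques’ original papers):

```typescript
// A plain string may or may not be a valid email; an Email provably is.
type Email = string & { readonly __brand: "Email" };

function parseEmail(raw: string): Email | null {
  // Deliberately simplistic check, for illustration only.
  return /^[^@\s]+@[^@\s]+$/.test(raw) ? (raw as Email) : null;
}

// Downstream code asks for Email, not string, so the knowledge
// "this has been checked" travels with the value – no re-validation,
// no scattered defensive checks.
function sendWelcome(to: Email): string {
  return `sending welcome mail to ${to}`;
}

const input = "ada@example.com";
const email = parseEmail(input);
if (email !== null) {
  sendWelcome(email); // compiles: email is a proven Email
}
// sendWelcome(input); // would not compile: input is just a string
```

The validation logic is declared once, and the type system enforces globally that unchecked strings never reach `sendWelcome`.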
In general, humans understand some formats and computers another, we want to fuse those so both parties are included – without sacrificing either party’s understanding.
Human-only domain – potentially false, room for interpretation, rots over time, a victim of the ”game of telephone”:
* Comments and comment examples (by default, in most languages)
* Class names, function names, variable names
* Record field names, ADT tags
* Value-level understanding (in most languages)

Both human and computer – ”understood” by the computer, understandable by a human:
* Types
* Function signatures
It cuts both ways
Many strong (and not so strong) compilers fail at informing humans of issues in a pedagogic manner. In other words, the compiler fails to ensure knowledge symmetry. This is a non-trivial problem to solve and tends to be overlooked in many languages. In some cases it even leads to a situation where programmers stop seeing the compiler as their assistant and start seeing it as their antagonist.
One language that actively tries to do better is Elm. Even if Elm’s approach is not perfect in all regards, the compiler goes a long way in giving human-readable, solution-oriented feedback. That said, the complexity of giving good feedback increases with the expressiveness of the language.
Could it be that this negligence towards the programmer is a contributing factor holding languages such as Haskell back? Many large, angry error messages have a solution that a human can describe clearly in just a few words: ”that function is only partially applied”, ”the arguments are in the wrong order”, or ”you forgot the do keyword”.
Haskell’s error messages describe very clearly what is wrong, like ”size of sulfation plates prohibits needed chemical interaction”, but often lack the solution-oriented information: ”time to change the battery”.
This is one instance where the programmer needs a lot of language- and compiler-specific knowledge to summarize the implicit information given by the compiler into actionable concepts.
Doctest, an example of giving the computer access to more knowledge
Docstrings are comments above functions that briefly describe the function. They often contain one or more examples showing which inputs lead to which outputs. This has multiple benefits, including giving the user of the function a quick way of understanding exactly what the function name or signature means. Since this is knowledge the computer does not understand, the Know-as-C approach would be to increase type safety rather than add human-only information in comments. Due to language limitations or other reasons, that may not always be possible.
The drawbacks of examples in the docstrings are
* The computer does not have access to these examples and therefore does not check their validity
* A human will extrapolate the example, correctly or incorrectly, and therefore expect a certain behavior
* No syntax or compiler check
If comments are necessary, this information asymmetry can be reduced using libraries such as “doctest”. Doctest is available in several languages. Using a doctest library, you give the compiler access to the doc-test examples and they will be checked during compile/testing. This means that all the benefits for the human stay intact, while we increase the amount of knowledge that can be computer verified.
Let us talk tests
* Are we writing tests to capture knowledge to future human readers?
Will they have practical access to that knowledge? Could that knowledge be described in a more declarative and general way?
* Are we writing tests to make more knowledge available to the computer?
* Are we using tests to help us during the initial development?
Problems with tests
Tests have incomplete coverage due to their example-based nature – ”add 2 4 `shouldBe` 6”. What about ”add 4 5”? Property-based testing (related to, but distinct from, fuzz testing) is a good tool to counteract this, but regardless of the intent, property-based testing is essentially just a nice way to express a lot of example-based tests.
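To make the contrast with example-based tests concrete, here is a hand-rolled property test in a few lines of TypeScript. Real projects would use a library (fast-check is a common choice in the TypeScript world), but the idea is simply: generate many random examples and check a property that should hold for all of them. The `add` function and the generator ranges are illustrative choices of mine:

```typescript
const add = (a: number, b: number): number => a + b;

// Run a property against many randomly generated integer pairs,
// reporting the first counterexample found.
function forAllInts(property: (a: number, b: number) => boolean, runs = 1000): boolean {
  for (let i = 0; i < runs; i++) {
    const a = Math.floor(Math.random() * 2001) - 1000;
    const b = Math.floor(Math.random() * 2001) - 1000;
    if (!property(a, b)) {
      console.log(`counterexample: a=${a}, b=${b}`);
      return false;
    }
  }
  return true;
}

// Properties instead of single examples like "add 2 4 `shouldBe` 6":
const commutative = forAllInts((a, b) => add(a, b) === add(b, a));
const zeroIsIdentity = forAllInts((a) => add(a, 0) === a);
```

Each run is still just a batch of examples, which is exactly the point made above: the property states the knowledge declaratively, but the checking remains example-based.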
A lot of tests aren’t a good source of knowledge for humans – understanding the domain in general by reading individual tests can be quite difficult, with many developers preferring to just read the actual code. Tests are useful when they start failing – to find what you broke – but that is a very reactive approach.
The information given by each test (often the test name) is not something the computer can understand. It’s up to the developer to make sure that each specific test name maps to each specific test implementation, a mapping that can’t be checked statically.
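A hypothetical illustration of that gap (the function and test names are made up): nothing enforces that a test's name matches what its body actually checks, and no compiler or test runner will flag the mismatch.

```python
def multiply(a, b):
    return a * b

# The name promises a commutativity check; the body checks a single
# product. The suite stays green, and only a careful human reader
# notices that the name-to-implementation mapping is broken.
def test_multiply_is_commutative():
    assert multiply(3, 4) == 12  # one example, not commutativity

test_multiply_is_commutative()
print("all tests passed")
```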
Having tests can create a false sense of security, especially if using metrics such as test-coverage per line or when a lot of dependency injection is used.
To be clear, I think that tests are important, and I write a lot of them, but I view having to write a test as a small failure, aware that the knowledge could have been captured in a better way.
Back to the beginning. Benefits of statically typed functional languages?
So, with the established goal described above, how do we encode knowledge in order to achieve a secure, person independent and stable code base? How can we support programmers in changing and improving code without random things breaking due to lack of knowledge? We use a programming language with a feature-set that enables us to encode knowledge into our code. That means using statically typed functional languages, as they currently provide the most cost-efficient way to encode knowledge and make it available to both humans and computers.
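The languages named here are Haskell, Elm and PureScript, but the core move of encoding knowledge into types rather than comments can be sketched even with Python's optional typing (the names below are illustrative): the fact "this integer is a user id, not an age" becomes something a static checker can verify instead of a comment only humans read.

```python
from typing import NewType

# Instead of a comment saying "this int is a user id, not an age",
# encode that distinction as types a checker understands.
UserId = NewType("UserId", int)
Age = NewType("Age", int)

def birthday(age: Age) -> Age:
    return Age(age + 1)

uid = UserId(42)
alice = Age(30)

print(birthday(alice))
# birthday(uid)  # a static checker such as mypy rejects this call,
#                # even though at runtime both values are plain ints
```

A full-blooded typed functional language enforces far more than this, but the direction is the same: knowledge moves from comments, which rot silently, into types, which break loudly.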
I work as a manager (even if I try to code as much as possible) for a very rapidly growing startup and I would see it as a critical business risk to use tools with weak Know-as-C capabilities (We use Haskell, Elm and PureScript). Know-as-C allows us to make better-informed business decisions and also onboard new developers fast.
Using functional programming is a pure business decision
It is important to reiterate: most of the benefits of Know-as-C are organizational, related to management and to future-proofing the technical platform for a growing team. However, we see that working with pure, statically typed functional programming languages off-loads a lot of communication and housekeeping to the computer, letting us focus on the things that matter. I truly believe that if more non-technical managers understood the organizational benefits of Know-as-C, they would push hard for knowledge capturing and promote languages such as Haskell.
Many thanks to Jonathan Moregård, who proofread and contributed great suggestions and edits.
CarbonCloud welcomes Estrella to the fast-growing community of forward-thinking food companies working toward increased transparency of climate footprints. Estrella is going live in early 2021 with their climate-labelled snacks.
CarbonCloud offers a science-based web tool to calculate and communicate the climate footprints of food products, with a label on the packaging. CarbonCloud thinks that this should be the standard, giving consumers the means to make a conscious decision, and food companies the tools to support the transition to sustainable food production.
“We are happy to have a forward-thinking and climate-aware company like Estrella joining our quest for a sustainable food industry”, says the CEO of CarbonCloud, David Bryngelsson, Ph.D.
He and his team have been working closely with Estrella to make sure their calculations are accurate, and can be compared to other food producers with a common yardstick.
Josefin Hugosson, Trade & CSR Marketing Manager at Estrella, says: “We have been working with sustainability for years and for us it is top priority to do good stuff! But we also want to improve how we communicate our achievements and what we are working on right now. Through emphasising our sustainability work we hope to inspire other businesses in our field and at the same time make our consumers aware of the climate footprint of snacks.”
The push for climate labels and transparency in the food industry is gaining traction among both producers and consumers. “It is impressive to see how much work Estrella has put into shaping a greener snack”, says David Bryngelsson, who is eager to point out that this is just the beginning of an exciting partnership for climate improvement. The tool will now enable Estrella to communicate their hard work in an objective way and inspire more businesses to follow.
More info: email@example.com, phone: +46-704 402125
CarbonCloud, a startup spun out of world-leading research on food and climate at Chalmers University of Technology, announced today a €1,000,000 financing round led by Finnish venture capital firm Maki.vc and German TS Ventures.
CarbonCloud develops innovative software that helps companies within the food industry calculate and communicate the climate footprints of their products at scale. The company has already onboarded high-profile paying customers who lead the way on climate labels on food, including names like plant-based milk brand Oatly, who decided to put labels online and on product packaging. Other customers include Naturli Foods, Sproud and Nude.
CarbonCloud's model is based on twenty years of research and has been reviewed in connection with a wide range of scientific publications. It has been used by the Swedish Environmental Protection Agency and is also the basis for international cooperation, for example with Princeton University and the Potsdam Institute for Climate Impact Research (PIK).
“The world needs a sustainable re-boot to get our economies going as the Corona pandemic levels out. Now is the time to seriously focus on the climate, so we don’t walk out from one disaster directly into another”, says David Bryngelsson, CEO and co-founder of CarbonCloud. “Food and agriculture are globally responsible for almost 25% of the climate problem, and end-consumers increasingly realize that they can make a difference by purchasing food products with transparent climate labels. Climate footprints on food are moving from the sustainability teams to the marketing teams. It matters for business.”
The food industry has been lagging behind other sectors on climate change, largely because the science behind calculating climate footprints on food is complicated. It has typically required expensive specialist consultants to perform the calculations, which has hampered any large-scale effort. CarbonCloud’s platform enables performing climate footprint calculations for products with industry-leading precision in-house, at a fraction of the cost and time required before. The platform allows comparisons between products with a common yardstick and lets users share their results with each other or with the public.
“It is time to digitize the science of climate change and the bookkeeping of climate footprints”, says Tim Schumacher (TS Ventures), a German investor and entrepreneur who has already backed many successful climate startups. “CarbonCloud delivers precisely the solution we need to make it possible and attractive for the industry to truly keep track of their emissions and to tell the world about it.”
“We love investing in teams making products that help make sustainable choices a habit. CarbonCloud’s vision for how to make a change in the food industry is truly unique, putting keys into brands’ and consumers’ hands in ways we’ve never seen before. With their experience, there isn’t a better team in the world to build this platform,” says Pauliina, Investment Director from Maki.vc.
The investment enables CarbonCloud to onboard new customers and expand their operations, and the team is now looking for new talent within sales, marketing and development to join them on their journey to put climate footprint data on all food products globally.
Whether a product can be “climate neutral” is an interesting and complicated question. We hold the following position: if the life cycle of a product leads to a net release of greenhouse gases, the product should not be referred to as “climate neutral”, even if the emissions are compensated for with carbon offsets.
What is carbon offsetting?
Some companies compensate for their climate footprint by supporting projects around the world that either mitigate emissions of greenhouse gases or remove greenhouse gases from the atmosphere. The common term for this is “carbon offsetting”. The intentions are praiseworthy, and it can certainly make sense to communicate about them to the public, but not by claiming to be climate neutral. A statement like this is more truthful: “Our climate footprint is XX kg CO2e. We are working on reducing our greenhouse gas emissions. We are also investing in project YY, which we believe can contribute to the fight against climate change.” This is the honest and transparent way.
Why then, does the positive not simply cancel out the negative? There are two main reasons.
1: It is very hard to know how large an effect the projects really have. In many cases, they do not even seem to work at all.
2: There is a clear risk of double counting, meaning that several parties take credit for the same emission reductions or greenhouse gas removals.
Let us take a deeper look at these issues.
Does carbon offsetting work?
This is the million-dollar question. In some cases, it is inherently hard to assess. In other cases, we know that the answer is no. For each project, we need to ask ourselves the following:
Does the project deliver the intended results? Things do not always go as planned. A large project in Kenya invested in energy-efficient stoves. As it turned out, most of them were never used. Still, climate offsets were certified and sold. For other projects, we will not know the outcome for a very long time. Planted trees, for instance, only absorb and store carbon as long as they are not cut down. How can this be guaranteed over the long term in countries such as Uganda, which ranks among the highest in the world for corruption?
Is the project “additional”? In some cases, the project would have taken place anyway, even without the carbon-offsetting contribution. Wind power farms, for instance, produce carbon offsets on the assumption that the electricity produced replaces coal power. But many of the countries that host these projects are growing economies with a steadily increasing energy demand. The wind power farms would probably have been built anyway.
Additionality is generally an explicit requirement for a carbon offsetting project. But unfortunately, the analysis of whether a project is additional is often highly subjective and hard to evaluate in a transparent way. A German research study (Cames, 2016) found that only 2% of the investigated projects were highly likely to be additional.
Is leakage avoided? Leakage is when greenhouse gas emissions increase in another area because of the carbon offsetting project. If trees are planted on land used by the local population for forage or agriculture, this may lead to other trees being cut down elsewhere: the local farmers may have no option but to clear vegetation at a new location to continue their agricultural activities. This becomes at best a zero-sum game for the climate, but a loss for the farmers who need to move, and a loss for biodiversity, since planted forests host less biodiversity than natural vegetation.
Who takes the credit?
This is the second question we need to ask. In the business of carbon offsetting, it is not unusual that more than one party takes credit for the same action, resulting in deceptive bookkeeping. Take the following example: trees are planted in Uganda as a carbon-offsetting project. Company X buys the carbon offsets and labels its products as “climate neutral”. This means that company X takes credit for removing greenhouse gases. However, it is not unlikely that Uganda also accounts for this tree planting in its national inventories of greenhouse gas emissions. In this case, the action is double-counted.
Let’s take another example. A wind farm is built in Brazil. A company buys carbon offsets on the assumption that the electricity replaces coal power. Avoiding double counting means that Brazil will have to assume that the electricity produced comes from coal power, although it actually comes from wind. This does not lie in the interest of Brazil, which has targets to reach under the Paris agreement. With a certain amount of sold carbon offsetting credits, Brazil could end up in a situation where they have only renewable energy in reality but would need to keep on reporting as if they had only coal power, since they have sold the right to the emission reductions to other parties.
The negotiations of the Paris agreement showcase how difficult it is to agree on rules to avoid double counting. Reaching our climate targets requires that we BOTH reduce emissions in all countries around the world AND remove greenhouse gases from the atmosphere, for instance by planting trees. Double counting blurs our vision and makes it harder to keep track of what remains to be done.
If we look specifically at the food industry, we see that it is currently responsible for about 25% of global greenhouse gas emissions (IPCC, 2014). To fulfil the Paris agreement and stop climate change, the industry still needs to reduce their emissions, even if the remaining 75% from other industries shrinks to zero! Crediting the food industry with reductions in other sectors cannot be the solution and such claims have the risk of delaying real and effective measures.
What do we suggest?
There are technologies that you could argue actually work. One example is direct air capture, i.e. facilities that capture carbon dioxide from the air and store it below ground. It is a technology with a high probability of delivering the intended results, as it is highly unlikely that the carbon dioxide will escape underground storage. It is a costly technology with no other positive side effects; therefore, it can be “additional”, since it will not exist unless someone pays for it. There are other climate compensation technologies that are arguably effective, and we cheer any engagement in such projects. Nevertheless, our basic appeal is this: the most effective way to put a stop to climate change as a food producer is to discover your precise climate footprint (and consequently take action to lower it) and communicate it to your customers without smokescreens. CarbonCloud is here to help!
Cames, M., Harthan, R. O., Füssler, J., Lazarus, M., Lee, C., Erickson, P., & Spalding-Fecher, R. (2016). How additional is the Clean Development Mechanism? Analysis of application of current tools and proposed alternatives. Öko-Institut e.V.
IPCC. (2014). Mitigation of climate change. Contribution of Working Group III to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change, 1454.
This week, CarbonCloud customer Compass Group introduced climate labels on the lunch menus in the lunch restaurant of the Swedish Parliament. This is the restaurant where most of the 349 members of the Swedish parliament and their guests eat lunch every day. With the help of CarbonCloud, Compass Group will now make it possible for the members of parliament to make climate-smart decisions also during the lunch break, by calculating the climate footprint of every meal that will be served and by introducing climate labels on the menus.
Compass Group is also collaborating with CarbonCloud to offer climate-smart food services to companies such as ICA and SEB.