CarbonCloud releases massive amounts of climate footprints for free

Global warming is one of the biggest environmental challenges of our time, and food production is responsible for one-quarter of the world’s greenhouse gas emissions. In this context, CarbonCloud is launching a website where you can find country-specific climate footprints of annual crops from all over the world – a first important step towards publishing climate footprints for all food products. To enable informed choices for sustainability-engaged producers and consumers, we are providing it all for free – in stark contrast to most climate data, which either does not exist at all or is hidden behind expensive paywalls.

“Until now, climate footprints have been slowly calculated by hand. We’re using modern technology to solve the problems of the future, automating the calculation process and handing it over to computers. This allows us to calculate massive amounts of footprints simultaneously with consistent quality.”

David Bryngelsson, CEO at CarbonCloud

In order to stop global warming and meet the ambitious climate goals stated in the Paris Agreement, there is an increasing demand for convenient and trustworthy tools to measure the climate impact of goods and food products. Big sustainability actors in the food sector are already using CarbonCloud software to keep track of their climate footprints. Some of them have even gone one step further than just publishing their footprints and have launched campaigns that encourage their customers to make green choices, e.g., Oatly’s “Show us your numbers” campaign and Estrella’s drive for “Fair snacks”.

A big difficulty in comparing climate footprint calculations is that assessments are made by individual experts using different methods and datasets. We have set out to change this by releasing massive amounts of consistent climate footprint data for free, turning the focus to what can be done to reduce emissions now that comparable data exists.

“We are releasing all these footprints for free because we want to help solve the climate crisis and give more food producers the possibility to calculate their specific climate footprints and show their numbers.”

Mikael Tönnberg, CTO at CarbonCloud

Automating the calculations for farmgate annual crops at unprecedented scale is just the start. The next step is perennial crops, to be followed by livestock products, refined products and more. Over time, the goal is to cover all food products, in order to also serve the end-consumer market. As new yield data come in every year, or science makes progress on the underlying mechanisms or data collection, all the footprints are automatically re-calculated and updated. Customers using our climate labeling tool get automatic access to up-to-date, high-precision footprints they can use when modeling their production processes. This data set will improve in both scope and precision over time, so if you cannot find what you are looking for, check in again and it may well be there.

For more information, please contact:

+46-704 402125

David Bryngelsson



CarbonCloud is a research-based food-tech startup with a disruptive web-based SaaS solution that enables detailed calculations of the climate footprints of food products and production processes. This enables food producers across the world to calculate and analyze the climate footprints of their product portfolios at a fraction of the cost and time spent on traditional consultancy-based life-cycle assessments. The company is headquartered in Gothenburg, Sweden, privately held, and backed by international investors.

[Tech] Knowledge-as-Code

Benefits of statically typed functional programming? Wrong question.

“What are the benefits of X?” is a rather natural question to ask when you are curious about a subject. However, the response will be very different depending on who gives the answer.

Asking “what are the benefits of a Formula 1 car?” would result in very different replies from a race driver, a farmer, a carpenter or a submarine captain.

I think a serious source of miscommunication could be eliminated if we spent a bit more time talking about the desired end goal and tried to find a non-fluffy answer. We as people have a tendency to assume that everyone has the same goal – that “we are all farmers”.

This problem often comes up when discussing XDD techniques (domain-driven design (DDD), test-driven development (TDD), type-driven design). These techniques focus on the how, not the why – the engine, not the goal.

So I think a better question is:

What do we want to achieve?

Let’s start with tests and TDD (test-driven development). The “why question” in this case is “why do we write tests?”. A straightforward answer is “to make sure the program works”. However, what do we mean by “works”?

When programming, a developer creates a mental model of how the program should work and tries to explain that to the computer via code. Another word for this mental model is domain knowledge. A program “works” if the developer has a correct understanding of the domain and manages to capture that understanding in code.

How does this understanding of “working programs” == “encoded domain knowledge” play out in practice? It appears every time the program needs to be updated! In order to update the program while making sure that it still works, the developer doing the update must know how the program is supposed to behave. Often the code alone is not enough, so they need to reverse-engineer the thought process, look up documentation or ask the original author (who hopefully remembers and is still reachable).

When programming, we want to capture knowledge in a way understandable for both the computer and humans, now and in the future.

Why do we want to capture knowledge?

* First and foremost, to prevent vital knowledge from being lost. As time passes, people will stop remembering and the organization will change. Old team members will pursue other projects and new members will join. When knowledge is captured and accessible for later use, the organization becomes much more resilient. The “old guard” that understands the hidden depths of the application is simply not needed (at least not for that reason). One thing is certain: people won’t stay forever.

* If we make the computer understand the domain knowledge, we ensure that the knowledge we do have is enforced (“all cars should have four wheels”). The scope of most projects is too large to keep in human working memory at once, requiring assistance from the computer.

* New features should take current domain requirements into consideration. Often, new requirements will affect old ones – sometimes with unexpected consequences. It’s best to identify these unexpected or unwanted consequences early on, since fixing such issues tend to get more expensive over time.

* Knowledge of who can access what is extra important to enforce using the computer. We don’t want security risks where the application could leak information.

* Easy-to-access and explicit knowledge of how the system works makes on-boarding new team members much easier.

* Make it clear what the organization knows and what it does not know. This can be vital for important business (and technical) decisions.

* Makes it possible or even easy to include business people in technical decisions – “Should all cars have exactly four wheels? If no, what is the difference between a car with two wheels and a motorcycle?”.

* Avoid bugs introduced when making a seemingly innocent change that violates an implicit invariant.

* Avoid having to spend time on “defensive programming”, where the programmer makes up for limited understanding with countermeasures such as wide-spread null checks, assertions sprinkled across the code, and similar. This behavior spreads invariants across the entire code base, making it rigid and hard to change.

All this fluff – What is knowledge then, more specifically?

At the 10,000-meter level: information about the domain or problem that the current author has, which affects their choices and the design of the code.

More concretely:

* What kind of inputs are valid/expected?

* What can the output be?

* What can go wrong?

* When should this code be used? When should it not?

* Does running this code do anything but return a value? (Side effects)

* How do similar domain concepts differ? (e.g., a user with admin rights versus an admin user)

How is knowledge best captured?

Now you could say: “but all code is knowledge – with an if-statement it is clear that the x variable needs to be smaller than 5!”. It’s true – all code tells the computer something – the question is which solution is the most scalable and friendly to both human and computer. When the program grows, and the “smaller than 5” check moves to another function, file or module, this previously clear fact will be very difficult to spot.

Quick detour – ”X as Code”, X-as-C

Over the last two decades, approaches like “Configuration-as-Code” and “Infrastructure-as-Code” have grown tremendously in popularity and made organizations much less reliant on a small number of individuals to set up a new server or application cluster. These approaches are often declarative: the focus of the reader/programmer is on what you want to happen – not exactly how. You state “ssh should be configured”, not “command1 -x; command2 -y -z; …”.

This invites people who are not experts in the given technology to participate and change the wanted end state without having to understand the nitty-gritty details. The knowledge that ”ssh should be configured” is stated explicitly once, leaving the details to be sorted out somewhere else.

More examples of this: Docker, Chef and Nix, among many others.

So again, how is knowledge best captured in code?

To enable our human minds to grasp ever more complex domains, we want our knowledge to be encoded in a declarative and explicit manner. It’s best if this information is contained within a limited scope, rather than spread out across the program. This protects our knowledge from being lost due to code evolving over time.

And that leads us to the main event: Knowledge-as-Code.


Knowledge-as-Code (or Know-as-C, or “no-ask”) is fully language- and platform-agnostic and states that knowledge should be

* Declared once – enforced globally

* Complete

* Precise

* Symmetric

* Unobfuscated

Declared once – enforced globally

Using a central and declarative syntax makes it possible for humans to understand and decode knowledge even if the code base is vast. It also makes it easier to review changes to the requirements. If the requirements are spread out across the code base, this is almost impossible to do: e.g., if an if-statement is changed from “if noOfWheels < 5 then ..” to “if noOfWheels < 6”, how do we know whether this applies everywhere?

The declared domain rules should be enforced globally by the computer – humans are really bad at this and with a growing code base it is practically impossible to do. By capturing the domain knowledge in a single spot, we make it possible to use a computer to enforce these rules.

A centrally declared requirement prohibits conflicting definitions, such as having both “if noOfWheels < 5 then return ValidCar” and “if noOfWheels < 6 then ValidCar” in the same code base.
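As a sketch of what “declared once – enforced globally” can look like, here is a minimal Haskell example (the `Car` and `mkCar` names are invented for illustration) where the wheel rule lives in exactly one place:

```haskell
-- The rule "a valid car has 1-4 wheels" is declared exactly once, in
-- the smart constructor. In a real code base the Car constructor
-- would not be exported from its module, so every Car in the program
-- is guaranteed to satisfy the rule -- the compiler enforces it
-- globally.
newtype Car = Car Int
  deriving (Eq, Show)

noOfWheels :: Car -> Int
noOfWheels (Car n) = n

mkCar :: Int -> Maybe Car
mkCar n
  | n > 0 && n < 5 = Just (Car n)
  | otherwise      = Nothing
```

Changing the rule (say, to “< 6”) is then a one-line, easily reviewed diff, and conflicting copies of the check cannot accumulate elsewhere.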


Complete

* All valid values should be representable.

If we want to allow numbers larger than 2^45, we should not use an Int32.

* All known unknowns should be explicitly expressed

If a function can fail, the computer should force you to handle the failure case
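A hedged Haskell sketch of this point (the type and function names are invented): the failure case is part of the type, so the compiler forces every caller to deal with it before touching the result.

```haskell
-- A known unknown made explicit: parsing an age can fail, and the
-- Either type says so. The caller cannot reach the Int without first
-- handling the error case.
data AgeError = NotANumber | OutOfRange
  deriving (Eq, Show)

parseAge :: String -> Either AgeError Int
parseAge s = case reads s of
  [(n, "")] | n >= 0 && n < 150 -> Right n
            | otherwise         -> Left OutOfRange
  _                             -> Left NotANumber
```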


Precise

* Only valid values should be representable.

If a function expects a positive integer, it should be impossible to send in a negative one

* No overlap

All possible values should be orthogonal to each other. Example: we cannot say that we have either an Int or a Float, since all Ints are included in the Float type.
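A small Haskell sketch of “only valid values should be representable” (the `Positive` and `repeatWord` names are invented): a function that takes a `Positive` in its signature cannot be handed a negative number, so it needs no defensive check of its own.

```haskell
-- Only valid values are representable: a negative Positive cannot be
-- built, so consumers of Positive never need an "is it negative?"
-- check.
newtype Positive = Positive Int
  deriving (Eq, Ord, Show)

positive :: Int -> Maybe Positive
positive n
  | n > 0     = Just (Positive n)
  | otherwise = Nothing

-- The signature states the precondition; the type system enforces it.
repeatWord :: Positive -> String -> [String]
repeatWord (Positive n) w = replicate n w
```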


Symmetric

* All knowledge should be available to both human and computer

Humans must understand the knowledge to make changes – the computer must understand the knowledge to be able to enforce the rules.

* All feedback should be available for both human and computer.

When something goes wrong the computer should help the human to understand the issue.


Unobfuscated

* Use abstractions without knowledge loss.

If, in reality, you have a bird or a cat – do not hide it behind an IAnimal or similar. Better to abstract it to, in pseudo-code, “Animal = Bird OR Cat”.


* Abstract using general, well-defined, non-domain concepts

Such as lists, Dictionary, Functor, Monad
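The “Animal = Bird OR Cat” pseudo-code can be written directly in a language with algebraic data types. A Haskell sketch (the names and fields are invented):

```haskell
-- "Animal = Bird OR Cat" as a closed sum type. Unlike an IAnimal
-- interface, the abstraction loses no knowledge: both the human and
-- the compiler know there are exactly two cases.
data Animal
  = Bird { wingspanCm :: Int }
  | Cat  { clawCount  :: Int }
  deriving (Eq, Show)

describe :: Animal -> String
describe (Bird w) = "a bird with a " ++ show w ++ " cm wingspan"
describe (Cat c)  = "a cat with " ++ show c ++ " claws"
-- If a Dog constructor is added later, the compiler (with pattern-
-- match warnings enabled) points at every function, like describe,
-- that must now handle the new case.
```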

Tools to write Know-as-C

Most statically typed languages are capable of capturing some information in a declarative manner in what I’ll loosely call ”types” below. There are other concepts that also declaratively capture knowledge but for now we’ll use the term “types” as an umbrella term.

Since dynamically typed languages by definition do not have any way to enforce knowledge statically, nor in most cases encode it declaratively, I do not think they are a good option when trying to capture knowledge.

It is important to remember that the cost of encoding knowledge differs between languages, and different points of cost and return exist depending on which team and which time frame the project operates under. However, encoding knowledge is vital if you want to know what you have built, if you are building a long-lasting product, or if trust or security is important. That being said, the “bang for the buck” will differ greatly depending on which programming language is used.

There are a number of more or less language-agnostic techniques that can be used as well, for example “Ghosts of departed proofs”, “Type-driven development”, “Parse, don’t validate”, “Dependent types” or “Doctests”. As it happens, what these have in common is that they all improve knowledge symmetry and help us reach the other Know-as-C goals.

In general, humans understand some formats and computers others; we want to fuse these so that both parties are included – without sacrificing either party’s understanding.

For example:

Human only: comment examples*, class names, function names, variable names, record field names, ADT tags, value-level understanding**. This knowledge is potentially false, leaves room for interpretation, rots over time and falls victim to the “game of telephone”.

Both human and computer: types and function signatures. This knowledge is “understood” by the computer and understandable by a human.

Computer only: machine code.

* by default in most languages
** in most languages



It cuts both ways

Many strong (and not so strong) compilers fail at informing humans of issues in a pedagogic manner. In other words, the compiler fails to ensure knowledge symmetry. This is a non-trivial problem to solve, and it tends to be overlooked in many languages. In some cases this even leads to a situation where programmers stop seeing the compiler as their assistant and start seeing it as their antagonist.

One example that actively tries to be better is Elm. Even if Elm’s approach is not perfect in all regards, the compiler goes a long way in giving human-readable, solution-oriented feedback. That being said, the difficulty of giving good feedback increases with the expressiveness of the language.

Could it be that this negligence towards the programmer is a contributing factor holding languages such as Haskell back? Many angry and lengthy error messages have a solution that a human can describe clearly in just a few words: “that function is only partially applied”, “the arguments are in the wrong order” or “you forgot the do keyword”.

Haskell’s error messages very clearly describe what is wrong – like “size of sulfation plates prohibits needed chemical interaction” – but often lack the solution-oriented information: “time to change the battery”.

This is one instance where the programmer needs a lot of language- and compiler-specific knowledge to summarize the implicit information given by the compiler into actionable concepts.

Doctest, an example of giving the computer access to more knowledge

Docstrings are comments above functions, briefly describing the function. They often contain one or more examples showing which inputs lead to which outputs. This has multiple benefits, including giving the user of the function a quick way of understanding exactly what the function name or signature means. Since this is knowledge not understood by the computer, the Know-as-C approach would be to increase type safety rather than add human-only information using comments. Due to language limitations or other reasons, that may not always be possible.

The drawbacks of examples in the docstrings are

* The computer does not have access to these examples and therefore does not check their validity

* A human will extrapolate from the example, correctly or incorrectly, and therefore expect a certain behavior

* No syntax or compiler check

If comments are necessary, this information asymmetry can be reduced using libraries such as “doctest”, available in several languages. Using a doctest library, you give the compiler access to the doc-test examples, and they will be checked during compilation or testing. This means that all the benefits for the human stay intact, while we increase the amount of knowledge that can be computer-verified.
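A sketch of what this looks like with the Haskell doctest package’s convention (assuming doctest is set up for the project; `addBonus` is an invented example function):

```haskell
-- | Add bonus points to a score, capping the result at 100.
--
-- To a human this is ordinary documentation; the doctest tool also
-- runs the examples below and fails the build if the shown output
-- drifts out of sync with the code:
--
-- >>> addBonus 90 5
-- 95
--
-- >>> addBonus 90 25
-- 100
addBonus :: Int -> Int -> Int
addBonus score bonus = min 100 (score + bonus)
```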

Let us talk tests

* Are we writing tests to capture knowledge to future human readers?

Will they have practical access to that knowledge?
Could that knowledge be described in a more declarative and general way?

* Are we writing tests to make more knowledge available to the computer?

* Are we using tests to help us during the initial development?

Problems with tests

Tests have incomplete coverage due to their example-based nature – “add 2 4 `shouldBe` 6”. What about “add 4 5”? Property-based testing (a close relative of fuzz testing) is a good tool to counteract this, but regardless of the intent, property-based testing is in the end a convenient way to express a lot of example-based tests.
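As a sketch of the technique (assuming the QuickCheck library is available; `add` and the property names are invented), properties are declared once and the library generates the examples:

```haskell
import Test.QuickCheck (quickCheck)

add :: Int -> Int -> Int
add = (+)

-- Instead of enumerating examples by hand ("add 2 4 `shouldBe` 6",
-- "add 4 5 `shouldBe` 9", ...), we state properties once and let the
-- library generate a large number of example inputs.
prop_addCommutative :: Int -> Int -> Bool
prop_addCommutative x y = add x y == add y x

prop_addIdentity :: Int -> Bool
prop_addIdentity x = add x 0 == x

main :: IO ()
main = do
  quickCheck prop_addCommutative
  quickCheck prop_addIdentity
```

Each property is still, in the end, run as a batch of concrete example tests – which is the point made above.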

A lot of tests aren’t a good source of knowledge for humans – understanding the domain in general by reading individual tests can be quite difficult, and many developers prefer to just read the actual code. Tests are useful when they start failing – for finding what you broke – but that is a very reactive approach.

The information given by each test (often the test name) is not something the computer can understand. It’s up to the developer to make sure that each specific test name maps to each specific test implementation, a mapping that can’t be checked statically.

Having tests can create a false sense of security, especially when using metrics such as test coverage per line, or when a lot of dependency injection is used.

To be clear, I think that tests are important, and I write a lot of them, but I view the act of having to write tests as a failure – aware that the knowledge could have been captured in a better way.

Back to the beginning. Benefits of statically typed functional languages?

So, with the goal established above, how do we encode knowledge in order to achieve a secure, person-independent and stable code base? How can we support programmers in changing and improving code without random things breaking due to lack of knowledge? We use a programming language with a feature set that enables us to encode knowledge into our code. That means using statically typed functional languages, as they currently provide the most cost-efficient way to encode knowledge and make it available to both humans and computers.

I work as a manager (even if I try to code as much as possible) at a very rapidly growing startup, and I would see it as a critical business risk to use tools with weak Know-as-C capabilities (we use Haskell, Elm and PureScript). Know-as-C allows us to make better-informed business decisions and to onboard new developers fast.

Using functional programming is a pure business decision

It is important to reiterate: most of the benefits of Know-as-C are organizational – management and future-proofing the technical platform for a growing team. However, we see that working with pure, statically typed functional programming languages off-loads a lot of communication and housekeeping to the computer, letting us focus on the things that matter. I truly believe that if more non-technical managers understood the organizational benefits of Know-as-C, they would push hard for knowledge capturing and promote languages such as Haskell.


Mikael Tönnberg

Many thanks to Jonathan Moregård, who proof-read and contributed great suggestions and edits.

Estrella takes a stand for climate transparency with CarbonCloud

CarbonCloud welcomes Estrella to the fast-growing community of forward-thinking food companies working for increased transparency of climate footprints. Estrella is going live in early 2021 with their climate-labeled snacks.

CarbonCloud offers a science based web tool to calculate and communicate the climate footprints of food products with a label on the packaging. CarbonCloud thinks that this should be the standard, giving consumers the means to make a conscious decision, and food companies the tools to support the transition to sustainable food production. 

“We are happy to have a forward-thinking and climate-aware company like Estrella joining our quest for a sustainable food industry”, says the CEO of CarbonCloud, David Bryngelsson, Ph.D.

He and his team have been working closely with Estrella to make sure their calculations are accurate, and can be compared to other food producers with a common yardstick. 

Josefin Hugosson, Trade & CSR Marketing Manager at Estrella, says: “We have been working with sustainability for years and for us it is top priority to do good stuff! But we also want to improve how we communicate our achievements and what we are working on right now. Through emphasising our sustainability work we hope to inspire other businesses in our field and at the same time make our consumers aware of the climate footprint of snacks.”

The push for climate labels and transparency in the food industry is gaining traction among both producers and consumers. “It is impressive to see how much work Estrella has put into shaping a greener snack”, says David Bryngelsson, who is eager to point out that this is just the beginning of an exciting partnership for climate improvement. The tool will now enable Estrella to communicate their hard work in an objective way and inspire more businesses to follow.

More information: phone +46-704 402125

Swedish startup CarbonCloud attracts € 1,000,000 in international VC funding to work with food brands on their carbon footprints

CarbonCloud, a startup spun out of world-leading research on food and climate at Chalmers University of Technology, today announced a € 1,000,000 financing round led by a Finnish venture capital firm and German TS Ventures.

CarbonCloud develops innovative software that helps companies within the food industry calculate and communicate the climate footprints of their products at scale. The company has already onboarded high-profile paying customers who lead the way on climate labels on food, including plant-based milk brand Oatly, which decided to put its climate labels both online and on product packaging. Other customers include Naturli Foods, Sproud and Nude.

CarbonCloud’s model is based on twenty years of research and has been reviewed in connection with a wide range of scientific publications. It has been used by the Swedish Environmental Protection Agency and is also the basis for international cooperation, for example with Princeton University and Potsdam Institute for Climate Impact Research (PIK).

“The world needs a sustainable re-boot to get our economies going as the Corona pandemic levels out. Now is the time to seriously focus on the climate, so we don’t walk out from one disaster directly into another”, says David Bryngelsson, CEO and co-founder of CarbonCloud. “Food and agriculture are globally responsible for almost 25% of the climate problem and end-consumers increasingly realize that they can make a difference by purchasing food products with transparent climate labels. Climate footprints on food is moving from the sustainability teams to the marketing teams. It matters for business.”

The food industry has been lagging behind other sectors on climate change, largely because the science behind calculating climate footprints of food is complicated. It has typically required expensive specialist consultants to perform calculations, which has hampered any large-scale effort. CarbonCloud’s platform enables companies to perform climate footprint calculations for their products in-house, with industry-leading precision, at a fraction of the cost and time required before. The platform allows comparisons between products with a common yardstick, and lets users share their results with each other or with the public.

“It is time to digitize the science of climate change and the bookkeeping of climate footprints”, says Tim Schumacher (TS Ventures), a German Investor and Entrepreneur who has already backed many successful climate startups. “CarbonCloud delivers precisely the solution we need to make it possible and attractive for the industry to truly keep track of their emissions and to tell the world about it.”

“We love investing in teams making products that help make sustainable choices a habit. CarbonCloud’s vision for how to make a change in the food industry is truly unique, putting keys into brands’ and consumers’ hands in ways we’ve never seen before. With their experience, there isn’t a better team in the world to build this platform,” says Pauliina Martikainen, Investment Director from

The investment enables CarbonCloud to onboard new customers and expand their operations, and the team is now looking for new talent within sales, marketing and development to join them on their journey to put climate footprint data on all food products globally.

Additional information: phone +46-704 402125

Photos: Press kit

Can a product be “climate neutral”?

This is an interesting and complicated question. CarbonCloud holds the following position: If the life cycle of a product leads to a net release of greenhouse gases, the product should not be referred to as “climate neutral” even if the emissions are compensated for with carbon offsets.

What is carbon offsetting?

Some companies compensate for their climate footprint by supporting projects around the world that either mitigate emissions of greenhouse gases compared to a baseline or remove greenhouse gases from the atmosphere. This is known as “carbon offsetting”. The intentions are praiseworthy, and it can definitely make sense to communicate about them to the public – however, not by claiming to be climate neutral. Instead, we encourage statements of the type: “Our climate footprint is XX kg CO2e. We work on reducing our greenhouse gas emissions. We also invest in project YY, which we believe can contribute to the fight against climate change.” This is the honest and transparent way. Why, then, does the positive not simply cancel out the negative? There are two main reasons.

1: It is very hard to know how large an effect the projects really have. In many cases, they do not even seem to work at all.

2: There is a clear risk of double counting, meaning that several parties take credit for the same emission reductions or greenhouse gas removals. Let us take a deeper look at these issues.

Does carbon offsetting work?

This is the million-dollar question. In some cases, it is inherently hard to assess. In other cases, we know that the answer is no. For each project, we need to ask ourselves the following:

  • Does the project deliver the intended results? Things do not always go as planned. A large project in Kenya invested in energy efficient stoves. As it turned out, most of them were never used. Yet, climate offsets were certified and sold. In other projects we will not know the outcome for a very long time. Planted trees, for instance, only absorb and store carbon as long as they are not cut down. How can this be guaranteed for hundreds of years in countries such as Uganda, ranked as one of the most corrupt countries in the world?
  • Is the project “additional”? In some cases, the project would have taken place anyway, even without the income from carbon offsets. Wind power farms, for instance, produce carbon offsets based on the assumption that the electricity produced replaces coal power. But many of the countries that host the carbon offsetting projects are growing economies with a steadily increasing energy demand. The wind power farms may very well have been built anyway. Additionality is generally an explicit requirement for carbon offsetting projects. But unfortunately, the analysis of whether a project is additional is often highly subjective and hard to evaluate in a transparent way. A German research study (Cames, 2016) found that only 2% of the investigated projects had a high probability of being additional.
  • Is leakage avoided? Leakage is when greenhouse gas emissions increase somewhere else, as a consequence of the carbon offsetting project. If trees are planted on land used by the local population for forage or agriculture, this may lead to other trees being cut down elsewhere. The local farmers may have no other options than to clear vegetation at a new location in order to continue their agricultural activities. This becomes at best a zero-sum game for the climate but a loss for the farmers who need to move, and a loss for biodiversity since planted forests host less biodiversity than natural vegetation.

Who takes the credit?

This is the second question we need to ask. In the business of carbon offsetting, it is not unusual for more than one party to take credit for the same action, resulting in deceptive bookkeeping. Let us use an example: trees are planted in Uganda in a carbon-offsetting project. Company X buys the carbon offsets and labels its products as “climate neutral”. This means that company X takes credit for the removal of greenhouse gases. However, it is not unlikely that Uganda also accounts for the tree planting in its national inventories of greenhouse gas emissions. In that case the action is double counted.

Let us take another example. A wind-power plant is built in Brazil. Carbon offsets are sold, based on the assumption that the electricity replaces coal power. Avoiding double counting means that Brazil will have to assume that the electricity produced comes from coal power, although it actually comes from wind. This does not lie in the interest of Brazil, which has targets to reach under the Paris agreement. If enough carbon offsetting credits are sold, Brazil could end up in a situation where it has only renewable energy in reality but would need to keep on reporting as if it had only coal power, since it has sold the right to the emission reductions to other parties.

The negotiations of the Paris agreement have shown us how difficult it is to agree on rules that avoid double counting. Reaching our climate targets requires that we BOTH reduce emissions in all countries around the world AND remove greenhouse gases from the atmosphere, for instance by planting trees. Double counting blurs our vision and makes it harder to keep track of what remains to be done. If we look specifically at the food industry, we see that it is currently responsible for about 25% of global greenhouse gas emissions (IPCC, 2014). To fulfill the Paris agreement and stop climate change, these emissions will have to be reduced, even if all other emissions are reduced to zero! Crediting the food industry with reductions in other sectors can hence not be the solution for the food industry, and such claims risk delaying real and effective measures.

What do we suggest?

There are technologies that you could argue actually work. One example is direct air capture, involving facilities that capture carbon dioxide from the air so that it can be stored below ground. It is a technology that has a high probability of giving the intended results: the likelihood is very low that the carbon dioxide will escape from its storage below ground. It is a costly technology with no other positive side effects; therefore, it can be considered “additional”, since it will not be implemented unless someone pays for it. There are other technologies for climate compensation that you could argue also work. We applaud any engagement in such projects. However, our basic appeal is this: find out your climate footprint and communicate it to your customers without smokescreens. CarbonCloud is here to help!

Cames, M., Harthan, R. O., Füssler, J., Lazarus, M., Lee, C., Erickson, P., & Spalding-Fecher, R. (2016). How additional is the Clean Development Mechanism? Analysis of the application of current tools and proposed alternatives. Öko-Institut e.V.

IPCC. (2014). Mitigation of climate change. Contribution of Working Group III to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change.

CarbonCloud and Compass Group introduce climate labels in the Swedish Parliament

This week, CarbonCloud customer Compass Group introduced climate labels on the lunch menus in the restaurant of the Swedish Parliament. This is the restaurant where most of the 349 members of the Swedish Parliament and their guests eat lunch every day. With the help of CarbonCloud, Compass Group will now make it possible for the members of parliament to make climate-smart decisions during the lunch break as well, by calculating the climate footprint of every meal that will be served and by introducing climate labels on the menus.

Compass Group is also collaborating with CarbonCloud to offer climate-smart food services to companies such as ICA and SEB.

CarbonCloud could win "Sustainable Project of the Year"

On December 16, the CIO Awards will be held in Stockholm for the fifteenth year in a row. The CIO Awards is a gala dedicated to the IT industry and to development and progress within IT. During the gala, CIO Sweden presents four awards, including "Sustainable Project of the Year". Competing with Lantmännen, the City of Gothenburg, and Arbetsförmedlingen/Iteam, CarbonCloud and the company's platform, CarbonData, is one of the four finalists.

There are several criteria for Sustainable Project of the Year. Among other things, the project must, through the use of smart IT, contribute to increased revenue, reduced costs, and reduced environmental impact. The project must reduce dependence on environmentally harmful factors through more efficient flows or smart use of IT, and enable the end customer to reduce their environmental impact. CarbonData has met these criteria and more, taking CarbonCloud to the final stage of the competition.

CarbonData is built on a model that, with good precision, models the various steps required to produce a food product. The model covers the entire chain, including every part of agriculture, all the way to the store shelf. With CarbonData, food companies can, among other things, see where in the production process measures will have the greatest effect on a product's total climate footprint.

"That CarbonCloud and the CarbonData platform is one of the four remaining contestants is a validation that we have built a product with great expertise and high potential. We face extremely tough competition in our category, and we are the only startup, among giants with large IT budgets, to have made it all the way to the final. We travel up to Stockholm and the gala excited but humble," says David Bryngelsson, CEO at CarbonCloud. The CIO Awards will be presented on December 16 at Berns salonger in Stockholm. For more information, see the CIO Awards website.

CarbonCloud's calculations in a new wave of cookbooks

In mid-September, Köttälskarens nästan vegetariska kokbok by Eva and Matts Hildén was released. It was the third cookbook since July to calculate and display the actual climate footprint of its recipes and food with the help of CarbonCloud. Food Pharmacy's cookbook Näringsjägarna was released in July, and Vego i världsklass, by the World Wide Fund for Nature (WWF) together with the Swedish Olympic Committee, came in August. The response to the books released so far has been overwhelming.

"There is a lot of talk about organic, locally grown, and in-season food, but in all honesty it is very hard to know how environmentally friendly a meal really is. Sure, if you eat plant-based you are doing well, but it can still be interesting to know even more in detail," says Mia Clase, author and partner at Food Pharmacy.

To ensure a precise basis for the figures they present, the authors have used CarbonCloud's calculation model, CarbonData, for the recipes. The model makes it possible to easily analyze an individual food item and quickly calculate its climate footprint.

"For us, it is important to show that if we are to eat within the limits of one planet, we all need to eat much more from the plant kingdom and much less meat. This book clearly shows the actual difference in climate impact between plant-based food and dishes with meat," says Anna Richert, food expert at WWF.

Vego i världsklass challenges prejudices and myths about vegetarian food and shows that vegetarian cooking is so much more than just salads.

"We want to inspire more people who exercise and live an active lifestyle to dare to eat more plant-based. The book contains a wealth of recipes suited for those who train, and the response has been incredibly positive!" says Richert.

In Näringsjägarna, the authors answer what we can do to make ourselves and the planet feel better, and why food has gone from being something simple and joyful to something difficult and sometimes even anxiety-inducing.

"We hope to raise awareness of how simple changes can make everyday food more climate-smart. Many of our most common Swedish dinner dishes, such as spaghetti with meat sauce, lasagna, fish fingers, and meatballs, are far above WWF's recommendation (under 0.5 kg CO2e). With small adjustments, the recipes can be reworked so that they stay on the right side of the limit," says Clase.

"One of CarbonCloud's goals is to make food and its climate impact concrete and to inform about it. That our model is now being used in a new format demonstrates both the credibility of the model and how accessible it is: food and ingredients can be calculated very easily and precisely. It is of course both very interesting and great fun that such influential authors and actors have chosen our model for important books," says David Bryngelsson, CEO at CarbonCloud.

The third cookbook to use CarbonCloud to calculate the climate footprint of its recipes is Köttälskarens nästan vegetariska kokbok by Matts and Eva Hildén. It contains family-friendly meals with considerably less meat and even better flavor.

"We hope that our cookbook can be a first step for people who like meat to reduce their consumption without compromising on what they actually want and like to eat. By seeing the actual differences in climate footprint between, for example, a traditional taco dinner and our tacos, we are convinced that more people will be tempted to try," says Matts Hildén.

The City of Malmö calculates the climate impact of every dish during the Malmö Festival

During the Malmö Festival, August 9–15, all food tents and food trucks serving food at Gustav Adolfs Torg will calculate the climate footprint of every individual dish. Behind the calculations is the foodtech company CarbonCloud. The City of Malmö wants to calculate the food served in order to gather data for the future and be able to report actual results on how the food affects the climate.

Since 2009, the City of Malmö, which organizes the Malmö Festival, has carried out thorough sustainability work before, during, and after the annual festival, which attracts around 1.4 million visitors every year.

"We have a great responsibility as organizers of a festival that draws so incredibly many people to a concentrated area. As society develops, so must we. That is why we work more and more with sustainability, and have done so for almost ten years," says David Östberg, project manager for the Malmö Festival.

In the food area, there will be 50 food tents and food trucks, all of which will calculate the climate footprint of every dish. The City of Malmö places strict demands on the food vendors during the festival and always strives to improve its work. This year is the first time the festival calculates the climate footprints of all food served in the food area.

"The food area is large and sits in the middle of the urban space; it is the heart of the whole festival. We sell a lot of food, so of course the food has a large environmental impact. Among other things, we have replaced single-use plastic items with sustainable alternatives. We are good at a lot of things, but we also need to bring in expertise and tools to get even better in certain areas," says Östberg.

To help calculate the dishes' climate footprints, the festival has turned to CarbonCloud. The Gothenburg-based foodtech company offers an efficient menu-planning tool that, among other things, makes it possible for chefs and kitchens to easily calculate the climate footprint of their dishes and get immediate results.

"This year we get the opportunity to measure and obtain actual results on the climate impact of the food. We hope to analyze and use the results going forward, so that we can keep developing, year after year," says Östberg.

UKK has reduced its climate impact by 14% with the help of CarbonCloud

Last autumn, Uppsala Konsert & Kongress (UKK) calculated the climate impact of the food it served. After the results were presented, UKK chose to adjust its purchasing, change its menus, and recalculate in the spring. The new results showed a large improvement in a very short time, while the new concepts have become successes. Now UKK wants to reach even lower emission levels.

CarbonCloud is a Gothenburg-based foodtech startup that has developed and offers a user-friendly menu-planning tool, CarbonAte. Among other things, the tool lets chefs quickly become experts at cooking climate-smart food, while the restaurant gets immediate results to communicate to its guests. UKK has used the tool since last autumn.

– Awareness among our chefs has increased even more since we started using CarbonAte. The way we think has changed when we make purchases today. The ease of calculating the climate impact of foods and ingredients, and thereby making informed choices, is by far the biggest factor in how we have been able to cut emissions so sharply in such a short time, says Robin Akrap, food and beverage manager at UKK.

Photo: Magnus Hörberg

Both internally and externally, major changes have taken place at UKK since it began calculating its food. The chefs started working with new flavors and testing new ingredients, appreciated by staff and guests alike. UKK's new concept, Uppsalas grönaste brunch (Uppsala's greenest brunch), launched in February and has been a success. UKK has also labeled every dish with its climate impact, so guests can see, be informed, and make conscious choices.

– Of course we wondered how our guests would react when we printed the climate impact of our dishes. Lunch, for example, is sacred to many. But the reactions have only been positive, and many guests come in absolutely delighted. We talk a lot with our guests, and the feeling is that our customer relationships have grown stronger now that we inform and show transparency, says Robin Akrap.

The restaurant at UKK has worked actively on climate issues for a long time and has held a Nordic Swan Ecolabel certification for several years. With the new calculation tool, UKK has, from an already low level, taken yet another step and managed to lower its emissions even further.

– We always strive to be modern and smart, and our sustainability work should run through the entire operation. A 14% reduction in emissions from our food in less than six months is a great result. Our increased awareness has changed the way we think, and the guests are very positive. I am convinced that we will reduce emissions from our food even more in the future, says Robin Akrap.

The restaurant at Uppsala Konsert & Kongress has reduced its emissions by about 14%. The restaurant started from a baseline of 1.08 kg of carbon dioxide equivalents per portion, which has now been reduced to 0.93 kg. An average Swedish lunch emits nearly two kg of carbon dioxide equivalents per portion.*
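The reported figures are internally consistent: going from 1.08 to 0.93 kg CO2e per portion is a reduction of roughly 14%, as a quick check shows (the numbers below are the ones stated above; nothing else is assumed):

```python
# Verify that the drop from 1.08 to 0.93 kg CO2e per portion
# matches the reported ~14% reduction.
baseline = 1.08  # kg CO2e per portion, autumn measurement
current = 0.93   # kg CO2e per portion, spring measurement

reduction = (baseline - current) / baseline
print(f"Reduction: {reduction:.1%}")  # → Reduction: 13.9%
```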

*Source:​ ​

For more information, contact
David Bryngelsson, CEO, CarbonCloud
Robin Akrap, food and beverage manager, UKK