Under the hood of Automated Mapping: It’s not magic, it’s just appropriate
At CarbonCloud, our goal is to help solve the climate crisis by highlighting data-driven, impactful emissions-reduction decisions for the food and beverage industry. Automated mapping is the launchpad for this ambitious task, and while its technology is just as ambitious, its application is foundational for the food and beverage industry: 1) rapidly provide information companies can act on, and 2) facilitate scalability to the whole food system and to change over time. Meaningful action is critical to stress here because, as imperative as information is, it does not get the job done by itself.
Well-begun is half-done
Automated Mapping is the sidekick for scaling any food or beverage company’s climate performance strategy, instantly generating calculations for tens of thousands of products and continuously updating them. It takes the existing information in your portfolio as input and maps out first results that are valid enough to make decisions and act upon at every possible and relevant stage. Automated mapping carries most of the burden of getting the results, so the business can focus on improvements and on identifying knowledge gaps.
This is what we’d call, in business terms, the “value proposition”. The feedback we get on this pitch is that it sounds too good to be true, or that it feels like magic. As much as we’d like to spin this into punchy marketing copy, it is simply appropriate technology applied after exhaustive problem identification. So, let’s make it tangible and lay bare all the technology working under the hood to show you that it’s not that magical – in reality, it’s simply a helpful, useful solution for understanding the current picture and shaping a data-driven strategy.
Automated Mapping's engine under the hood
You can kickstart Automated Mapping with a range of input volumes. We enrich our library with thousands of products a month, so your product is most likely already on ClimateHub, mapped and calculated. If it is not, you merely need to input your product name and market, and optionally the list of ingredients. The result will still serve the goal of “results that are valid enough to make decisions and act upon”, but the larger the input volume, the further along you are in the transition to higher data definition.
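To make the minimal input concrete, here is a small sketch of what a per-product record could look like. The field names and values are illustrative only, not CarbonCloud’s actual API:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical shape of the minimal input per product:
# name and market are required, ingredients are optional.
@dataclass
class ProductInput:
    name: str                                # e.g. "Whole wheat sourdough"
    market: str                              # e.g. "SE", where the product is sold
    ingredients: Optional[list[str]] = None  # optional, but improves data definition

portfolio = [
    ProductInput("Whole wheat sourdough", "SE", ["wheat flour", "water", "salt"]),
    ProductInput("Rye crispbread", "SE"),    # name and market alone are enough to start
]
```

Either record is a valid starting point; the richer one simply moves the product further along toward higher data definition.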
This input is sent through our modeling pipeline, which compares it to similar products, matches it to different product properties, and models an appropriate production process. The modeling engine consists of two main parts:
The engine is trained on the ever-growing number of data points already in the system to classify the next one. In addition, if you have provided a list of ingredients, the engine runs through them recursively – making dependencies first-class citizens that are themselves modeled by going through the whole pipeline.
These two parts run in sequence. The engine first analyzes every product to determine its classification type and specific properties. It then uses that analysis to identify the types and properties these products have in common. For example, if you have 10 types of bread and they are all classified as wheat-based, the engine will conclude that they all use the same wheat supply.
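The two parts above can be sketched in a few lines of code. This is a deliberately simplified stand-in: the real engine is a trained classifier, whereas the `classify` function here is a toy keyword rule, and all names are invented for illustration:

```python
from collections import defaultdict

def classify(product: dict) -> str:
    """Part 1 (toy version): assign each product a classification type.
    The real engine is trained on existing data points; this keyword
    rule only stands in for it."""
    name = product["name"].lower()
    if "bread" in name or "baguette" in name:
        return "wheat-based bread"
    return "unknown"

def model_product(product: dict) -> dict:
    """Run one product through the pipeline. Ingredients are dependencies
    treated as first-class citizens: each is modeled recursively by the
    same pipeline."""
    return {
        "name": product["name"],
        "class": classify(product),
        "components": [model_product({"name": i})
                       for i in product.get("ingredients", [])],
    }

def shared_properties(models: list[dict]) -> dict:
    """Part 2: group classified products to infer what they have in
    common, e.g. all wheat-based breads drawing on one wheat supply."""
    groups = defaultdict(list)
    for m in models:
        groups[m["class"]].append(m["name"])
    return dict(groups)

models = [model_product({"name": n})
          for n in ["Sourdough bread", "Rye bread", "Baguette"]]
groups = shared_properties(models)
# groups == {"wheat-based bread": ["Sourdough bread", "Rye bread", "Baguette"]}
```

All three products fall into the same class, so the engine would assume a single shared wheat supply for them, mirroring the bread example above.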
Finally, these findings are output in a structure that allows humans to enhance, amend, and refine the assumptions in a scalable way. The outcome is a model of the supply chain, and of the emissions at each stage, for every product.
The user can then view the results on our platform, see the specifics with full transparency, and focus their efforts: What kinds of assumptions are made, and how are they organized? What do we already know, and where should we focus on understanding more?
Facilitating change over time
The answers to the questions highlighted by Automated Mapping naturally serve as the launchpad for long-term improvement in both data definition and strategic decisions. The platform then enables a seamless transition towards the highest climate data definition possible, and the depth of detailed insight that comes with it.
From that point onward, the user can refine the description of emissions along two axes. On the first axis, the company can add new products or markets and remove discontinued products. On the second axis, the company can work in further detail and exchange components directly in the digital twin of every product.
Simplifying this process is critical as the company’s climate maturity develops. If 5, 50, or 5,000 products use the same distribution chain, the company can model that distribution chain in isolation and the result propagates throughout the portfolio. When suppliers are onboarded, data granularity increases and can replace the assumptions within the same structure. When suppliers amend or refine their supply chain models, the change propagates to all products using the supplier’s ingredient. The same applies to every step in the supply chain and every production node in your system.
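The propagation idea can be sketched as follows: products reference shared supply-chain nodes rather than copying their values, so refining one node once updates every product that uses it. The node names and emission figures here are invented for illustration:

```python
# Shared supply-chain nodes, modeled in isolation
# (kg CO2e per unit; hypothetical values).
nodes = {"wheat supply": 0.60, "distribution": 0.15}

# Each product references shared nodes instead of holding its own copies.
products = {
    "Sourdough bread": ["wheat supply", "distribution"],
    "Baguette": ["wheat supply", "distribution"],
}

def footprint(product: str) -> float:
    """A product's footprint is assembled from the nodes it references."""
    return sum(nodes[n] for n in products[product])

before = footprint("Sourdough bread")   # 0.60 + 0.15 = 0.75

# A supplier refines one shared node...
nodes["distribution"] = 0.10

# ...and every product using it updates automatically.
after = [footprint(p) for p in products]
```

The design choice doing the work is indirection: because footprints are computed from node references at read time, one amendment in a supplier’s model reaches 5, 50, or 5,000 products without touching any of them individually.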
There’s a lot of will, engagement, and drive to really make a change, for two material reasons. Ignoring the upcoming disclosure regulations will hurt the balance sheet; but there’s also a softer component: companies consist of humans who care about the climate crisis, and we cannot allow this drive for positive change to fizzle out.
In the end, the intention doesn’t really matter. We want to avoid climate action failure at a global scale, and our contribution to this effort is to enable food businesses to drive impactful positive change and focus their efforts where it matters in the portfolio. It doesn’t matter how high-tech the tool that enables them is – what matters is that it’s useful. The technological innovation in Automated Mapping just happens to be appropriate!