Mohammad Ashraf

Engagement Lead

Choosing Your Attribution Model

Essential Requirements for Constructing an Attribution Model


Saras Analytics builds bespoke data solutions for eCommerce brands. Their products Daton and Pulse enable brands to build a single source of truth for marketing, operations and finance teams across DTC, Amazon, and retail channels.

Building an effective attribution model is not easy. Your brand needs certain prerequisites and data foundations in place before you can build one successfully. Here, we’ll take you through what is needed and how best to do it.

The prerequisites to building an attribution model

Centralizing your data

At the core of it all, brands must first centralize data from various sources into a single, self-owned data warehouse (DWH). This is ideally done using platforms like Google Cloud Platform (GCP), Snowflake, or Microsoft Azure. GCP is generally recommended for brands due to its flexibility and cost-effectiveness.

Centralizing your data lets your brand bring its various data sources together, enabling better data integration and making it easier to draw insights from a comprehensive dataset.

Using an ETL tool to automate data centralization

From there, brands can use an ecommerce-focused Extract, Transform, and Load (ETL) tool like Daton to automatically replicate data to the DWH on a daily basis from Shopify, GA4, Fairing, and more.

Such tools can tap into the APIs of almost any e-commerce related platform and reliably replicate the data without any manual effort. This streamlines the process, making it less labor-intensive, while still ensuring that your database is up to date.
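The incremental replication that such tools perform can be sketched in a few lines. This is a minimal illustration only: the `fetch_orders` source and the dict-based "warehouse table" are hypothetical stand-ins for a real source API (e.g. Shopify) and a real DWH table managed by a tool like Daton.

```python
from datetime import date

def replicate_daily(fetch_orders, warehouse, since):
    """Incrementally copy source records into a warehouse table.

    fetch_orders(since) is a hypothetical stand-in for a source API
    returning records updated since the last run; warehouse is a dict
    keyed by record id, standing in for a DWH table.
    """
    for record in fetch_orders(since):
        warehouse[record["id"]] = record  # upsert keeps the table current
    return warehouse

# Simulated source: two orders updated on the given day
def fake_fetch(since):
    return [
        {"id": 1001, "total": 59.0, "updated_at": str(since)},
        {"id": 1002, "total": 24.5, "updated_at": str(since)},
    ]

table = replicate_daily(fake_fetch, {}, date(2024, 1, 2))
print(len(table))  # 2 records replicated
```

Running this loop once per day, keyed on an `updated_at`-style cursor, is essentially what keeps the warehouse copy in sync without manual effort.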

Joining datasets on a common customer key

Customer keys let you identify and look up records using a data value known only to you. A customer key is a value that is unique across your business and is typically stored in your database.

Brands would need to join datasets on a common customer key, typically the customer email or Shopify customer ID. This can be done internally with your team’s data engineer, or via an external product such as Daton Pulse.

While it may not be a viable option for all, having a data engineer can also help with customization and further iterations of the attribution model. Nonetheless, external services will suffice as well.
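Conceptually, joining on a common customer key looks like the sketch below. The records and field names are illustrative, not a real schema; in practice this would usually be a SQL join inside the DWH, handled by your data engineer or a product like Daton Pulse.

```python
# Orders from Shopify and survey responses from a PPS tool,
# both carrying the same customer key: the email address.
shopify_orders = [
    {"email": "a@example.com", "order_id": 1001, "total": 59.0},
    {"email": "b@example.com", "order_id": 1002, "total": 24.5},
]
survey_responses = [
    {"email": "a@example.com", "channel": "TikTok"},
]

# Index survey responses by the common customer key
by_email = {row["email"]: row for row in survey_responses}

# Left join: keep every order, attach survey data where the key matches
joined = [
    {**order, "channel": by_email.get(order["email"], {}).get("channel")}
    for order in shopify_orders
]
print(joined[0]["channel"])  # "TikTok"
print(joined[1]["channel"])  # None — no survey response for this customer
```

Note the left-join behaviour: customers without a survey response are kept with an empty channel, which matters later when you fall back to other attribution sources.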

Making sense of the data

From there, when combining and summarizing the data, you will need to spend time understanding the format and granularity of the data coming from each source. This step is necessary to draw meaningful insights: without the right context, you cannot put the data to use effectively.

To illustrate the granularity issue: touchpoint data from Shopify and GA4 will include the source or medium as well as campaign details.

However, post-purchase survey (PPS) data from Fairing will likely only be as granular as the options included in the survey questions. As such, you will need to summarize all inputs into a common format and level of granularity before drawing insights from them.

Similarly, for formatting, be cognizant of the different data types of each field and join them accordingly.
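One common way to reconcile these differing granularities is to map each source's native values onto one shared channel taxonomy. The mappings below are purely hypothetical; real ones depend on your campaign naming and the answer options in your survey.

```python
# Hypothetical mappings from each source's native values to one
# shared channel taxonomy (adjust to your own naming conventions).
GA4_TO_CHANNEL = {
    ("facebook", "cpc"): "Paid Social",
    ("google", "cpc"): "Paid Search",
}
SURVEY_TO_CHANNEL = {
    "Saw an ad on Facebook/Instagram": "Paid Social",
    "Google search": "Paid Search",
}

def normalize_ga4(source, medium):
    """Collapse granular GA4 source/medium pairs to a common channel."""
    return GA4_TO_CHANNEL.get((source, medium), "Other")

def normalize_survey(answer):
    """Map coarse survey answer options to the same channel taxonomy."""
    return SURVEY_TO_CHANNEL.get(answer, "Other")

print(normalize_ga4("facebook", "cpc"))                      # Paid Social
print(normalize_survey("Saw an ad on Facebook/Instagram"))   # Paid Social
```

Once every source speaks the same taxonomy, touchpoints can be compared and combined regardless of how granular the raw data was.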

Approach to building an attribution model

With all these prerequisites fulfilled, your brand is now ready to build out an attribution model to elevate your marketing.

As such, a typical attribution modeling approach that we would recommend could look something like this:

  a. Use PPS data to attribute customers who filled in the survey
  b. Use Shopify Customer Journey API data to attribute the rest of the customers using one of the above-mentioned models
  c. Use GA4 data to attribute customers who are still not attributed to any paid channel using one of the above-mentioned models
  d. Brands can pick which model to use based on each model’s accuracy when compared against PPS data
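The waterfall above can be sketched as a simple fallback chain. This is a minimal illustration under assumed field names (`pps_channel`, `shopify_channel`, `ga4_channel` are hypothetical columns from the joined dataset, not a real schema):

```python
def attribute(customer):
    """Waterfall attribution: survey answer first, then Shopify
    Customer Journey data, then GA4. Returns (channel, source used)."""
    if customer.get("pps_channel"):       # a. self-reported survey answer
        return customer["pps_channel"], "PPS"
    if customer.get("shopify_channel"):   # b. Shopify Customer Journey API
        return customer["shopify_channel"], "Shopify"
    if customer.get("ga4_channel"):       # c. GA4 touchpoint data
        return customer["ga4_channel"], "GA4"
    return "Unattributed", None           # no source could attribute them

# Survey data wins even when GA4 data is also present
print(attribute({"pps_channel": "TikTok", "ga4_channel": "Paid Search"}))
# ('TikTok', 'PPS')
```

Tagging each result with the source that produced it (step d) is what lets you compare a model's output against PPS data and judge its accuracy.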
