A key part of an Ignite integration is the data synchronization tooling. It brings data from the chosen platform (e.g. an eCommerce platform) into SparkLayer. Merchant users can then view this data in the SparkLayer Dashboard, and the SparkLayer Frontend uses it to power the wholesale ordering experience, including which customers SparkLayer should be enabled for, what pricing should be shown, and the customer's order history.
A full data sync should be triggered in the following scenarios:
- On install of the application
- If any inconsistencies exist and the data needs to be refreshed. This is actioned via the integration's Trigger a Data Sync endpoint, which is called by the SparkLayer Dashboard when the merchant clicks the manual synchronization buttons for products and customers.
Beyond this, data will be synced via partial data updates (based on webhooks or other event mechanisms from the eCommerce platform).
Queueing and retry
To ensure the stability and reliability of the platform, partial sync events should be retried in case of issues.
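As a rough sketch of this retry behaviour, a partial-sync event could be retried with exponential backoff before being surfaced as a failure. The event shape, attempt count, and delay policy below are assumptions for illustration, not part of the SparkLayer specification:

```python
import time

def process_with_retry(handler, event, max_attempts=5, base_delay=1.0, sleep=time.sleep):
    """Run a partial-sync handler, retrying on failure with exponential
    backoff (1s, 2s, 4s, ...) before giving up and re-raising.
    The policy values here are illustrative assumptions."""
    for attempt in range(1, max_attempts + 1):
        try:
            return handler(event)
        except Exception:
            if attempt == max_attempts:
                raise
            sleep(base_delay * 2 ** (attempt - 1))
```

In practice this logic usually lives in the queue infrastructure itself (e.g. a dead-letter queue with redelivery), but the same backoff principle applies.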
This service should not be affected by heavy users of our API and should be designed to make as few updates as possible, i.e. for a product update, fetch the product and stock levels and only run updates where required.
To ensure visibility of the last updated date and the last full sync, these should be logged via the sync-log API detailed further down this page.
Any issues encountered during the process should be handled and logged via the sync-log API so they can be displayed to our merchants in the Spark Dashboard. Examples include a SKU not being set on a product variant, or a customer address missing data such as "line 1 is empty".
The data to be synced includes products, customers, orders and store settings.
Below are the key elements of data to be synced from the platform (see the API here):
- external_id -> Platform Product ID
- external_id -> Platform Variant ID
- status Should be set to live when the product is live and to discontinued when the product is deleted.
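The key elements above could be mapped along these lines. The SparkLayer-side keys come from the list above, but the platform-side field names (`id`, `deleted`, `variants`) are assumptions for illustration, not a real platform schema:

```python
def to_sparklayer_product(platform_product):
    """Map a hypothetical platform product record onto the key elements
    listed above: external IDs for the product and its variants, plus a
    status of 'live' or 'discontinued'."""
    return {
        "external_id": str(platform_product["id"]),
        "status": "discontinued" if platform_product.get("deleted") else "live",
        "variants": [
            {"external_id": str(v["id"]), "sku": v.get("sku")}
            for v in platform_product.get("variants", [])
        ],
    }
```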
Stock levels must be synced from inventory_level to our stock API (see the API here).
Extending the product object with ***B2B attributes***
The below data would be useful if synced across from fields on the platform where the data could be stored, such as attributes (if possible, against the variant record):
- restock_date Stored against the stock record, but can be taken from the variant
Product pricing can be synced into the SparkLayer Pricing API.
We suggest that the Ignite implementation create at least one price list containing the default price for every product in the underlying platform. The price list should be named after the platform, e.g. "BigCommerce Price". It should be created when the platform is connected, and pricing should be updated whenever a product sync happens.
- external_id -> Platform ID
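A sketch of building that default price list follows. The payload shape and the platform-side fields (`id`, `default_price`) are assumptions for illustration, not the exact Pricing API schema:

```python
def build_price_list(platform_name, products):
    """Build a default price list as suggested above: named after the
    platform (e.g. 'BigCommerce Price'), with one entry per product."""
    return {
        "name": f"{platform_name} Price",
        "prices": [
            {"external_id": str(p["id"]), "amount": p["default_price"]}
            for p in products
        ],
    }
```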
Extending the customer object with ***B2B attributes*** The below data would be useful if synced across from fields on the platform where the data could be stored, such as attributes. Group can also be fed via other means, i.e. if there's another field that makes sense to sync from, such as a group on the underlying platform.
Order data to be synced (see the Purchasing API here); this needs to include order creation, amends and shipments.
Please note: only B2B orders placed through the SparkLayer frontend are required to be synced across.
On platform connection, as well as when the relevant store settings are updated on the eCommerce platform, the following store settings should be synced across:
- Base currency (this should be synced to SparkLayer using the Settings API here and a setting key of baseCurrency).
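For example, the base-currency setting could be expressed as a small payload using the baseCurrency key mentioned above; the exact payload shape expected by the Settings API is an assumption here:

```python
def base_currency_setting(currency_code):
    """Hypothetical Settings API payload for the store's base currency,
    using the setting key 'baseCurrency' described above."""
    return {"key": "baseCurrency", "value": currency_code}
```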
For optimal visibility of the data synchronization process, any customer and product sync timestamps and errors should be reported through the sync-log API. This ensures that our merchants can view this information on the SparkLayer Dashboard and take the necessary actions. Documentation for the sync-log API endpoints is accessible here.
During a full data synchronization, a full sync log should be recorded using the PUT endpoint.
Please be advised:
- When a full data sync log is recorded, errors from the preceding sync log are cleared.
- It's crucial that the external_id associated with each error remains unique. This precaution prevents errors from being mistakenly overwritten or incorrectly resolved.
- As a full data sync might take a while and is often done asynchronously, it will not usually be possible to record all the errors in the initial sync log PUT request. Instead, the initial PUT request can be used to record the start of the sync, and the errors (as well as their potential resolutions) can be recorded in subsequent partial sync PATCH requests. The sync log API also needs to be notified when the full sync is complete; this can be done by sending a partial sync PATCH request with the last_full_sync property once the Ignite integration has finished syncing all the data.
- In the case of a full sync where items are synced in separate application instances (i.e. through the use of a queue), it's important to ensure that the sync log last_full_sync property is only updated once all items have been synced. For this, you may need to keep track of the items synced and remaining within the integration, as the sync log API does not handle this.
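The last point above can be sketched as a simple completion tracker: only when the final queued item is accounted for does the integration fire the callback that would send the PATCH with last_full_sync. The in-process lock is shown for simplicity; separate application instances would need a shared atomic counter (e.g. in Redis) instead:

```python
import threading

class FullSyncTracker:
    """Track remaining items during a queued full sync so that
    last_full_sync is only reported once everything has been processed;
    the sync log API does not track this for you."""

    def __init__(self, total_items, on_complete):
        self._remaining = total_items
        self._on_complete = on_complete  # e.g. send the PATCH with last_full_sync
        self._lock = threading.Lock()

    def item_synced(self):
        """Record one completed item; fire on_complete after the last one."""
        with self._lock:
            self._remaining -= 1
            if self._remaining == 0:
                self._on_complete()
```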
A partial synchronization log should be recorded any time an individual item is synced. For instance, this would happen if a product or customer is updated on the store and the update is synced to SparkLayer. This can be done using the sync log API PATCH endpoint.
Please be advised:
- The PATCH endpoint retains pre-existing errors.
- It incorporates any new errors.
- Errors can also be resolved by supplying an array of successfully synchronized platform IDs.
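The merge-and-resolve behaviour described above can be modelled as follows. This is only a local sketch of the semantics (the error record shape is an assumption), not the sync-log API's implementation:

```python
def apply_partial_sync(existing_errors, new_errors, resolved_ids):
    """Model the PATCH semantics above: keep pre-existing errors, add
    any new ones (keyed by external_id so duplicates overwrite), and
    drop errors whose external_id appears in the list of successfully
    synchronized platform IDs."""
    merged = {e["external_id"]: e for e in existing_errors}
    for e in new_errors:
        merged[e["external_id"]] = e
    for ext_id in resolved_ids:
        merged.pop(ext_id, None)
    return list(merged.values())
```

This also illustrates why each error's external_id must be unique: it is the key used both to deduplicate and to resolve errors.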
The sync log API is responsible for sync log upkeep. Given that previous errors are cleared during a full data synchronization, there's typically no need to delete sync logs. However, if a store decides to uninstall SparkLayer, it's essential to delete the store's synchronization logs using the designated delete endpoint.