Modern browsers tend to silently compensate for a range of underlying website issues, including redirect chains, JavaScript errors, anti-bot logic, and geolocation inconsistencies. A diagnostic crawler, by contrast, cannot ignore these behaviours, as they directly affect how consent mechanisms are delivered to different users.
To help minimise these situations, a few practical guidelines are worth keeping in mind:
1. Load the start page in a browser first, then copy the final resolved URL into Step 1 of the new project settings. This resolves the *vast* majority of redirect-related issues.
2. If a site blocks our crawler, the website team can whitelist our user agent. In most firewalls (Cloudflare, for example) this is straightforward, but it does require administrator access to the firewall.
3. If a site’s geolocation database is incomplete or inaccurate, the wrong regional page or banner may be shown; this needs to be corrected by the website administrator. If the website team needs our IP list to do so, contact us via the Chat Support widget within your account.
4. If multiple audits are run simultaneously on the same site without whitelisting, firewalls may interpret this as bot activity. Running smaller audits sequentially (for example, 25 pages at a time) significantly reduces this risk.
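The batching advice in point 4 can be sketched as a small helper. This is an illustrative Python sketch, not part of any product API: `audit_batch` is a hypothetical stand-in for whatever starts a single audit run in your own tooling, and the batch size of 25 matches the example above.

```python
from typing import Callable, Iterable, List

BATCH_SIZE = 25  # smaller audits run sequentially are less likely to trip firewalls


def chunk(pages: List[str], size: int = BATCH_SIZE) -> Iterable[List[str]]:
    """Split a page list into fixed-size batches for sequential auditing."""
    for start in range(0, len(pages), size):
        yield pages[start:start + size]


def run_sequentially(pages: List[str], audit_batch: Callable[[List[str]], None]) -> int:
    """Run one audit batch at a time (hypothetical driver); returns the number of runs."""
    runs = 0
    for batch in chunk(pages):
        audit_batch(batch)  # wait for each run to finish before starting the next
        runs += 1
    return runs
```

For example, a 60-page site would be audited as three sequential runs of 25, 25, and 10 pages rather than as one large concurrent crawl.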
These edge cases inevitably arise when auditing a wide range of sites with differing implementations.
If you have any questions, you can always contact us via the Chat Support widget within your account. We are based in Sweden and central Europe.