Which primary datasets are used in this workflow?
The workflow uses Google Open Buildings for global building footprints, a 30‑meter resolution digital elevation model (DEM) for terrain, and ESA WorldCover (10 m) land-cover classification to filter out heavily vegetated candidates.
Why not simply download satellite imagery for the entire search area?
Downloading all imagery would be impractical (roughly 15 TB for the search area), and satellite imagery can be too grainy to match features reliably; using these datasets lets you filter the search space instead of brute-forcing through imagery.
How does terrain fingerprinting work at scale?
Create a 3D sketch of the terrain visible in the photo, convert it into a numeric template, then compare that template mathematically against DEM-derived terrain around each candidate building so matches can be scored automatically.
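A minimal sketch of that comparison step, assuming the terrain template has been reduced to a small elevation grid and each candidate is a (row, col) position in the DEM raster. The function names and the use of normalized cross-correlation as the similarity measure are illustrative assumptions, not the workflow's exact implementation:

```python
import numpy as np

def terrain_match_score(template: np.ndarray, patch: np.ndarray) -> float:
    """Normalized cross-correlation between a flattened terrain template
    and a flattened DEM patch; 1.0 is a perfect shape match, 0 is none."""
    t = template - template.mean()
    p = patch - patch.mean()
    denom = np.linalg.norm(t) * np.linalg.norm(p)
    if denom == 0:
        return 0.0  # flat patch or flat template: no terrain signal
    return float(np.dot(t, p) / denom)

def score_candidates(template, dem, candidates, radius):
    """Score each candidate (row, col) by comparing the DEM window around
    it to the template; returns (score, (row, col)) pairs, best first."""
    scored = []
    for r, c in candidates:
        patch = dem[r - radius:r + radius + 1, c - radius:c + radius + 1].ravel()
        if patch.size != template.size:
            continue  # candidate too close to the DEM edge for a full window
        scored.append((terrain_match_score(template, patch), (r, c)))
    return sorted(scored, reverse=True)
```

Subtracting the mean before correlating makes the score depend on terrain *shape* rather than absolute elevation, which is what matters when the photo gives relative relief but not altitude.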
How are candidates prioritized and reduced for manual review?
After initial density filtering, candidates are scored by terrain-match quality and a tree/land-cover percentage filter; the top-ranked sites (e.g., top 100–300) are exported to GIS for manual verification.
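The prioritization step above can be sketched as a filter-then-rank pass followed by a GeoJSON export for manual review in a GIS. The field names, the 60% tree-cover threshold, and the top-N cutoff are illustrative assumptions:

```python
import json

def prioritize(candidates, max_tree_pct=60.0, top_n=100):
    """Drop candidates whose surroundings are mostly tree cover, then rank
    the rest by terrain-match score, descending. Each candidate is a dict:
    {"lon", "lat", "terrain_score", "tree_pct"} (field names assumed)."""
    kept = [c for c in candidates if c["tree_pct"] <= max_tree_pct]
    kept.sort(key=lambda c: c["terrain_score"], reverse=True)
    return kept[:top_n]

def to_geojson(candidates):
    """Serialize ranked candidates as a GeoJSON FeatureCollection string
    that QGIS or any other GIS can open for manual verification."""
    features = [
        {
            "type": "Feature",
            "geometry": {"type": "Point", "coordinates": [c["lon"], c["lat"]]},
            "properties": {
                "rank": i + 1,
                "terrain_score": c["terrain_score"],
                "tree_pct": c["tree_pct"],
            },
        }
        for i, c in enumerate(candidates)
    ]
    return json.dumps({"type": "FeatureCollection", "features": features})
```

Exporting as GeoJSON points keeps the hand-off to manual review simple: the reviewer sees each candidate in rank order, with its scores attached as attributes.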
What are the main suggested improvements to increase accuracy?
Use a higher-resolution DEM where available and create a more precise terrain template/sketch — both changes improve matching fidelity and reduce false positives.