---
jupytext:
  text_representation:
    extension: .md
    format_name: myst
    format_version: 0.13
    jupytext_version: 1.15.2
kernelspec:
  display_name: geo
  language: python
  name: python3
---
# National Surface Depressions

## Description
This dataset represents the extent and location of surface depressions derived from the 10-m resolution digital elevation model (DEM) of the 3D Elevation Program (3DEP). The level-set algorithm available through the `lidar` Python package was used to process the DEM and delineate surface depressions at the HU8 watershed scale.
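For context, below is a minimal sketch of the level-set workflow based on the `lidar` package's documented example. The input path and the parameter values (minimum size, minimum depth, slicing interval) are illustrative only and are not the settings used to produce this dataset.

```python
import os
import lidar

# Illustrative input DEM and output directory (not the 3DEP tiles used for this dataset)
in_dem = "dem.tif"
out_dir = os.getcwd()

# Illustrative parameters for the level-set method
min_size = 1000   # minimum number of pixels for a depression
min_depth = 0.5   # minimum depth (in DEM units) for a depression
interval = 0.3    # slicing interval for the level-set method

# Smooth the DEM, extract sinks, then delineate the nested depression hierarchy
smoothed = lidar.MedianFilter(in_dem, kernel_size=3, out_file=os.path.join(out_dir, "median.tif"))
sinks = lidar.ExtractSinks(smoothed, min_size, out_dir)
dep_id, dep_level = lidar.DelineateDepressions(sinks, min_size, min_depth, interval, out_dir, bool_shp=True)
```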
## Reference
- Wu, Q., Lane, C. R., Wang, L., Vanderhoof, M. K., Christensen, J. R., & Liu, H. (2019). Efficient Delineation of Nested Depression Hierarchy in Digital Elevation Models for Hydrological Analysis Using Level-Set Method. *JAWRA Journal of the American Water Resources Association*, 55(2), 354-368. https://doi.org/10.1111/1752-1688.12689
## Environment setup
First, create a conda environment with the required packages:
```bash
conda create -n gdal python=3.11
conda activate gdal
conda install -c conda-forge mamba
mamba install -c conda-forge libgdal-arrow-parquet gdal leafmap
pip install lonboard
```
If you are using Google Colab, uncomment the following line to install the packages, then restart the runtime after installation.
```python
# %pip install leafmap lonboard
```
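As a quick sanity check (not part of the original workflow), you can confirm that the key packages import cleanly:

```python
# Confirm the environment has the packages used below
import duckdb
import leafmap
import lonboard

print(duckdb.__version__, leafmap.__version__, lonboard.__version__)
```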
## Data access
The dataset was derived from the 10-m resolution 3DEP DEM at the HU8 watershed scale, and the results were then merged at the HU2 watershed scale. Below is a map of the National Hydrography Dataset (NHD) watershed boundary structure.

The script below uses the duckdb Python package to access the data with DuckDB.
```python
import duckdb

con = duckdb.connect()
con.install_extension("spatial")
con.load_extension("spatial")

hu2 = "06"  # Change to the HU2 of your choice
url = f"https://data.source.coop/giswqs/depressions/all_dep/HU2_{hu2}.parquet"
con.sql(f"SELECT * EXCLUDE geometry, ST_GeomFromWKB(geometry) AS geometry FROM '{url}'")
```
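If you want the query result in Python for further analysis, you can materialize it into a pandas DataFrame. This follow-on step is a suggestion, not part of the original script; here the geometry is converted to WKT text for readability:

```python
# Materialize the relation into a pandas DataFrame for downstream analysis
df = con.sql(
    f"SELECT * EXCLUDE geometry, ST_AsText(ST_GeomFromWKB(geometry)) AS wkt FROM '{url}'"
).df()
print(df.shape)
```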
Find out the total number of depressions in the selected watershed:
```python
con.sql(f"SELECT COUNT(*) FROM '{url}'")
```
Alternatively, you can use the AWS CLI to list the data files directly in the S3 bucket:
```bash
aws s3 ls s3://us-west-2.opendata.source.coop/giswqs/depressions/all_dep/
```
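To copy a single HU2 file locally, an `aws s3 cp` command along these lines should work; the file name follows the `HU2_{hu2}.parquet` pattern shown above, and you can add `--no-sign-request` if you have no AWS credentials configured:

```bash
# Download one HU2 Parquet file to the current directory
aws s3 cp s3://us-west-2.opendata.source.coop/giswqs/depressions/all_dep/HU2_06.parquet .
```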
## Data visualization
To visualize the data, you can use the leafmap Python package with the lonboard backend. The script below shows how to visualize the data.
```python
import leafmap

hu2 = "06"  # Change to the HU2 of your choice
url = f"https://data.source.coop/giswqs/depressions/all_dep/HU2_{hu2}.parquet"
gdf = leafmap.read_parquet(url, return_type="gdf", src_crs="EPSG:5070", dst_crs="EPSG:4326")
# leafmap.view_vector(gdf, get_fill_color=[0, 0, 255, 128])
```
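As an optional variation (not part of the original script), lonboard's own `viz` helper can render the GeoDataFrame directly in a Jupyter notebook:

```python
# Render the GeoDataFrame with lonboard directly (displays in a notebook)
from lonboard import viz

viz(gdf)
```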
## Data analysis
The depression dataset has two variations: `all_dep` and `nfp_dep`. The `all_dep` dataset contains all depressions, while the `nfp_dep` dataset contains only non-floodplain depressions; a quick comparison of the two is sketched below.
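As a rough sketch (reusing the DuckDB connection from above), you can count each variation across CONUS and report the non-floodplain share:

```python
# Count depressions in each variation and report the non-floodplain share
base = "s3://us-west-2.opendata.source.coop/giswqs/depressions"
all_count = con.sql(f"SELECT COUNT(*) FROM '{base}/all_dep/*.parquet'").fetchone()[0]
nfp_count = con.sql(f"SELECT COUNT(*) FROM '{base}/nfp_dep/*.parquet'").fetchone()[0]
print(f"{nfp_count / all_count:.1%} of depressions are non-floodplain")
```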
### All depressions
Find out the total number of surface depressions in the contiguous United States (CONUS):
```python
con.sql("""
SELECT COUNT(*) AS Count
FROM 's3://us-west-2.opendata.source.coop/giswqs/depressions/all_dep/*.parquet'
""")
```
Calculate some descriptive statistics of all surface depressions:
```python
con.sql("""
SELECT SUM(area) AS total_area,
       MEDIAN(area) AS median_area,
       MEDIAN(volume) AS median_volume,
       MEDIAN(avg_depth) AS median_depth
FROM 's3://us-west-2.opendata.source.coop/giswqs/depressions/all_dep/*.parquet'
""")
```
### Non-floodplain depressions
Find out the total number of non-floodplain depressions in the contiguous United States (CONUS):
```python
con.sql("""
SELECT COUNT(*) AS Count
FROM 's3://us-west-2.opendata.source.coop/giswqs/depressions/nfp_dep/*.parquet'
""")
```
Calculate some descriptive statistics of non-floodplain surface depressions:
```python
con.sql("""
SELECT SUM(area) AS total_area,
       MEDIAN(area) AS median_area,
       MEDIAN(volume) AS median_volume,
       MEDIAN(avg_depth) AS median_depth
FROM 's3://us-west-2.opendata.source.coop/giswqs/depressions/nfp_dep/*.parquet'
""")
```