# GHRSST Sea Surface Temperature Analysis
The GHRSST Level 4 MUR (Multi-scale Ultra-high Resolution) Global Foundation Sea Surface Temperature Analysis (v4.1) provides
a high-resolution, daily global assessment of ocean surface temperature. This dataset fuses observations from multiple
satellite-borne sensors (including infrared and microwave radiometers) and in situ measurements, offering a robust and
consistent picture of sea surface temperature conditions. The algorithm employs advanced interpolation and blending
techniques to fill gaps caused by cloud cover and sensor discrepancies, resulting in a seamless, 1-kilometer-resolution
global field. The “foundation” temperature represents the sea surface temperature free of diurnal warming effects,
making it particularly valuable for climate studies, numerical weather prediction, and a variety of marine applications.
The dataset is distributed by NASA’s Jet Propulsion Laboratory in collaboration with the Group for High Resolution Sea Surface
Temperature (GHRSST), and is widely used by researchers, operational agencies, and policy-makers for informed decision-making
in fields such as marine resource management, weather forecasting, and climate monitoring.
For more information, please refer to the documentation.
## About this repository
The code used to repackage this data is accessible here.
Care has been taken not to change the source data, only to repackage it into cloud-optimised GeoTIFFs with
STAC metadata. Each daily dataset is accessible at a `<year>/<month>/<day>` path, and file names are consistent
with the source data, e.g., `{%Y%m%d}090000-JPL-L4_GHRSST-SSTfnd-MUR-GLOB-v02.0-fv04.1.stac-item.json`.
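As a minimal sketch, the STAC item URL for a given day can be assembled from its date (the `item_url` helper below is illustrative, not part of the repository):

```python
from datetime import date

# Repository base URL, as used in the examples below
BASE = "https://data.source.coop/ausantarctic/ghrsst-mur-v2"

def item_url(d: date) -> str:
    """Build the STAC item URL for a given day, following the layout above."""
    return (
        f"{BASE}/{d:%Y/%m/%d}/"
        f"{d:%Y%m%d}090000-JPL-L4_GHRSST-SSTfnd-MUR-GLOB-v02.0-fv04.1.stac-item.json"
    )

# Matches the path used in the Python example below
print(item_url(date(2025, 2, 9)))
```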
## Accessing data
### Using Python
To load a single day, you can do something like the following:
```python
from pystac import Item
from odc.stac import load

path = "https://data.source.coop/ausantarctic/ghrsst-mur-v2/2025/02/09/20250209090000-JPL-L4_GHRSST-SSTfnd-MUR-GLOB-v02.0-fv04.1.stac-item.json"
item = Item.from_file(path)

# Lazily load the item's assets as dask arrays (chunks={})
data = load([item], chunks={}, anchor="center")
data
```
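Each item also carries other assets alongside `analysed_sst` (the exact list is defined by the item itself); a quick way to inspect them:

```python
# List the assets available on this item
for name, asset in item.assets.items():
    print(name, asset.href)
```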
And then to plot it, do:
```python
sst_loaded = data["analysed_sst"].squeeze().compute()
# Mask the nodata value
sst_masked = sst_loaded.where(sst_loaded != -32768)

# Extract the scale and offset from the metadata
meta = item.assets["analysed_sst"].extra_fields["raster:bands"][0]
scale = meta["scale"]
offset = meta["offset"]
k_to_c = -273.15

# Convert the raw integers to degrees Celsius and plot
sst_scaled = sst_masked * scale + offset + k_to_c
sst_scaled.plot.imshow(size=8, robust=True, cmap="inferno")
```
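From here the usual xarray operations apply. As a rough sketch (the bounds are arbitrary, and the dimension names assume the `latitude`/`longitude` grid produced by the load above), a regional mean looks like:

```python
# Select an arbitrary lon/lat box and average it (NaNs are skipped by default)
region = sst_scaled.where(
    (sst_scaled.latitude > -55) & (sst_scaled.latitude < -42)
    & (sst_scaled.longitude > 130) & (sst_scaled.longitude < 150),
    drop=True,
)
print(f"Mean foundation SST: {float(region.mean()):.2f} °C")
```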
### Reading from STAC Parquet file
Alternatively, you can use the STAC Parquet file as an index to all the STAC items.
```python
import stacrs
import pystac
from odc.stac import load

url = "https://data.source.coop/ausantarctic/ghrsst-mur-v2/ghrsst-mur-v2.parquet"

center = 13, -61
buffer = 5

bbox = (
    center[1] - buffer,
    center[0] - buffer,
    center[1] + buffer,
    center[0] + buffer,
)

# stacrs.read is async, so this needs an async context (e.g., a Jupyter notebook)
items = await stacrs.read(url)

# Or use .search to filter by time
# year = 2024
# items = stacrs.search(
#     url,
#     bbox=bbox,
#     datetime=f"{year}-01-01T00:00:00.000Z/{year}-12-31T23:59:59.999Z",
# )

items = [pystac.Item.from_dict(i) for i in items["features"]]

data = load(items, bbox=bbox, chunks={})
data
```
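With a stack of daily items loaded, the same masking and scaling as above turns this into a time series; a hedged sketch (dimension names again assumed to be `time`, `latitude`, `longitude`):

```python
# Mask nodata and convert to degrees Celsius using the first item's metadata
meta = items[0].assets["analysed_sst"].extra_fields["raster:bands"][0]
sst = data["analysed_sst"].where(data["analysed_sst"] != -32768)
sst_c = sst * meta["scale"] + meta["offset"] - 273.15

# Daily spatial mean over the bounding box
ts = sst_c.mean(dim=["latitude", "longitude"]).compute()
ts.plot()
```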
### Using R
The `analysed_sst` file URLs follow this template:
```r
template <- "https://data.source.coop/ausantarctic/ghrsst-mur-v2/{format(date, '%Y/%m/%d')}/{format(date, '%Y%m%d')}090000-JPL-L4_GHRSST-SSTfnd-MUR-GLOB-v02.0-fv04.1_analysed_sst.tif"
```
Example for a specific date:
```r
date <- as.Date("2025-02-14")
(url <- glue::glue(template))
```
Strictly, we should check the catalogue first:
```r
catalog <- arrow::read_parquet("https://data.source.coop/ausantarctic/ghrsst-mur-v2/ghrsst-mur-v2.parquet")

if (!url %in% catalog$assets$analysed_sst$href) {
  message("file does not exist, check available dates")
} else {
  dsn <- glue::glue("/vsicurl/{url}")
}
```
Read using the terra package:
```r
library(terra)
(sst <- rast(dsn))
```
Alternatively, using gdalraster:
```r
library(gdalraster)
(ds <- new(GDALRaster, dsn))
```
Or with stars:
```r
library(stars)
(sstars <- read_stars(dsn, proxy = TRUE))
```
To crop with the terra package:
```r
crop(sst, ext(130, 150, -55, -42))
```
Or, more generally, project (`by_util` is very important for COG efficiency):
```r
project(sst, rast(ext(130, 150, -55, -42), res = res(sst)), by_util = TRUE)
```
We can also change CRS and resolution as desired:
```r
target <- rast(
  ext(c(-1, 1, -1, 1) * 6e5),
  res = 1000,
  crs = "+proj=laea +lon_0=147 +lat_0=-45"
)
project(sst, target, by_util = TRUE)
```
## License
See: https://podaac.jpl.nasa.gov/CitingPODAAC
Data hosted by the PO.DAAC is openly shared, without restriction, in accordance with NASA's Earth Science program Data and Information Policy.