On May 20 this year, a tropical cyclone called Amphan hit India and Bangladesh. This super cyclone destroyed everything in its way in eastern India and Bangladesh, causing over US$13 billion in damage and surpassing all records of the last decade. In this article we will have a quick look at a Sentinel-1 satellite image and see how this damage looks.
Fortunately, more than half of the crop fields of Bangladesh had already been harvested. That means fewer standing crops and less damage for farmers. But the southwestern part of Bangladesh is dense in cultured fisheries, which took the obvious hit. For this article I searched for any Sentinel-1 GRD image of Satkhira or Jashore in IW mode and ascending orbit direction. There are three available at ESA Scihub, from May 7, 19 and 25, which means two images before the storm and one after.

A typical Sentinel-1 image has two bands, VV (vertical-vertical) and VH (vertical-horizontal). Polarization helps us find subtle features of the land surface: the more orientations we can manage, the more features we can see. Please refer to any online tutorial for the basics of RADAR or SAR remote sensing and how polarization works. Tropical countries tend to stay cloudy for most of the year, and RADAR images are the only way to peek through the clouds and take pictures.
Amphan started forming on May 16, causing rain over most of these areas for days. The May 19 image may not differ much from May 25, but May 7 will. So I downloaded two:
- May 7 image (`S1A_IW_GRDH_1SDV_20200507T121229_20200507T121254_032459_03C246_4BAF`)
- May 25 image (`S1B_IW_GRDH_1SDV_20200525T121151_20200525T121216_021738_029423_93ED`)
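The search described above can be sketched with SentinelSAT's `SentinelAPI`. The footprint WKT and credential handling here are assumptions for illustration; `sentinelsat` is imported lazily so the sketch loads even without it installed:

```python
# Scihub search parameters matching the article: Sentinel-1 GRD,
# IW mode, ascending orbit, early-to-late May 2020.
QUERY = dict(
    platformname='Sentinel-1',
    producttype='GRD',
    sensoroperationalmode='IW',
    orbitdirection='ASCENDING',
    date=('20200501', '20200526'),
)

def search_scenes(user, password, footprint_wkt):
    """Query Scihub for scenes intersecting footprint_wkt (a WKT
    polygon you draw over Satkhira/Jashore)."""
    from sentinelsat import SentinelAPI  # lazy import: optional dependency
    api = SentinelAPI(user, password, 'https://scihub.copernicus.eu/dhus')
    return api.query(footprint_wkt, **QUERY)
```

With valid credentials, `api.download_all(search_scenes(...))` would then fetch the products.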
Sentinel images found at Scihub can be downloaded using a Python library called SentinelSAT, with a script I wrote a while ago. You can also try another library called pyroSAR; its `pyroSAR.snap.util.geocode` function does all the preprocessing steps you'll need. After that I opened the terrain-corrected images in SNAP and used its band math tool to create two more bands for each image:
- a band subtracting `VH` from `VV`, or `VV-VH`
- a band dividing `VV` by `VH`, or `VV/VH` (named `VV_by_VH`)
Remember to uncheck `Virtual (save expression only, don't store data)` in the band math window. Also put 0 for `Replace NaN and infinity results by`. To get comparable images, pick normalization from the image histogram properties.
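Outside SNAP, the same two derived bands can be computed directly on the band arrays, for instance with NumPy. This is only a sketch of the arithmetic, mimicking the `Replace NaN and infinity results by` 0 option:

```python
import numpy as np

def derive_bands(vv, vh):
    """Compute VV-VH and VV/VH from two same-shaped band arrays,
    replacing NaN and infinite results with 0 (as SNAP does when
    'Replace NaN and infinity results by' is set to 0)."""
    diff = vv - vh
    with np.errstate(divide='ignore', invalid='ignore'):
        ratio = vv / vh  # 0/0 -> NaN, x/0 -> inf; cleaned up below
    diff = np.nan_to_num(diff, nan=0.0, posinf=0.0, neginf=0.0)
    ratio = np.nan_to_num(ratio, nan=0.0, posinf=0.0, neginf=0.0)
    return diff, ratio
```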

After that I exported the images to GeoTIFF; I will explain why I created these two bands in a moment. Then I used ArcGIS Pro to check the GeoTIFF files, but you can use any GIS software to visualise the band combinations.
If you put the same band in all three channels, the image is almost always black-and-white. In terms of pixel values, dark areas have lower values and bright areas have higher ones. Bright areas usually correspond to rough surfaces and structures that scatter the signal back to the sensor, for example concrete buildings; smooth surfaces reflect the signal away, which is why the waterbodies in our case appear dark.
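Because water maps to low pixel values, a crude water estimate is just a darkness threshold. A minimal sketch, assuming you read a suitable threshold off the image histogram (the value is scene-dependent):

```python
import numpy as np

def water_fraction(band, threshold):
    """Flag pixels darker than `threshold` as water and return the
    boolean mask plus the fraction of the scene it covers. The
    threshold is an assumed, scene-specific value."""
    mask = band < threshold
    return mask, mask.mean()
```

Comparing this fraction between the May 7 and May 25 scenes gives a first, rough number for how much new water appeared.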

Combining different bands in the Red, Green and Blue (RGB) color channels doesn't actually add any scientific value, but it shows us subtle land types that help us choose methods and research techniques. For example, Sentinel-1 shows dark patches in many places of the image, which are waterbodies of various depths. These patches are distinct and often distinguishable in any color combination, so iterative pixel-based supervised classification should be enough to classify or quantify them.
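As a sketch of what such a pixel-based supervised classification boils down to, here is a minimal nearest-centroid classifier in NumPy. The training samples are hypothetical; in practice you would digitize them over known water and land areas in a GIS:

```python
import numpy as np

def nearest_centroid_classify(bands, training):
    """Assign each pixel to the class whose mean training spectrum
    (centroid) is nearest in band space.

    bands:    (height, width, n_bands) image array
    training: dict mapping class name -> (n_samples, n_bands) array
              of sample pixels (hypothetical training data)
    """
    names = list(training)
    centroids = np.stack([training[n].mean(axis=0) for n in names])
    h, w, nb = bands.shape
    flat = bands.reshape(-1, nb)
    # squared Euclidean distance of every pixel to every centroid
    d = ((flat[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    labels = d.argmin(axis=1).reshape(h, w)
    return labels, names
```

Real workflows would use a proper classifier (maximum likelihood, random forest), but the pixel-by-pixel logic is the same.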
In ArcGIS Pro, the bands name themselves `Band_1`, `Band_2` and so on. In our image `Band_1` is `VV`, `Band_2` is `VH`, `Band_3` is `VV-VH`, and `Band_4` is `VV_by_VH`. If you put `Band_3`, `Band_2` and `Band_1` in the Red, Green and Blue channels respectively, you'll call the result a 321 combination. The image with a 333 combination looks like this.
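To reproduce such a composite outside a GIS, you can stretch each band to 8 bits and stack them. The percentile stretch below is a common choice, not the exact normalization SNAP or ArcGIS Pro applies:

```python
import numpy as np

def to_rgb(r, g, b):
    """Stack three bands into an 8-bit RGB composite, stretching each
    band independently over its 2nd-98th percentile (a standard
    contrast stretch; GIS viewers do something similar in spirit)."""
    def stretch(band):
        lo, hi = np.percentile(band, (2, 98))
        scaled = np.clip((band - lo) / (hi - lo + 1e-12), 0.0, 1.0)
        return (scaled * 255).astype(np.uint8)
    return np.dstack([stretch(r), stretch(g), stretch(b)])
```

A 321 combination would be `to_rgb(band3, band2, band1)`; a 333 combination simply passes `band3` three times.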

The pixel values in a few rivers in the south were either 0 or invalid (NaN), so the band math tool made them transparent. This image is from May 7.
Since waterbodies stand out easily from their surroundings, let us play some more. Recall the two extra bands I created for each image at the beginning, `VV-VH` and `VV_by_VH`. Since dark areas mean lower pixel values, subtracting one band from the other makes these values even smaller, so the `VV-VH` (VV minus VH) band should make waterbodies look cleaner and their boundaries sharper. On the other hand, `VV_by_VH` puts the pixel values in a ratio, which should make the values higher in waterbodies and lower in other areas. This helps us see water spreading over land surfaces: shallow and standing water, freshly flooded areas, or even wet soil.

A 321 combination showing water in black; notice the increase of black areas five days after Amphan.
Based on these assumptions, let's look at the images. The `VV-VH` band (`Band_3`) shows that many existing waterbody boundaries look smudged or smoothed, probably because their banks have flooded or been destroyed.

A 421 combination, showing water in red over black in the image above.
Putting `VV_by_VH` in the Red channel makes the reds pop. Red tones in these places could mean croplands under water or recently recovered from flooding. The deeper dark patches at the bottom are the commercial fish ponds I mentioned earlier.

This image shows one of those areas in a 412 combination. The fishing zones tend to have embankments around each pond, which in places have disappeared or been submerged. Notice the flooded areas around the fishing zone.
This is just one Sentinel-1 scene from Bangladesh; you will find more stories and combinations in other areas using the same technique.
This article was previously published @medium.