I'm on a project where we're trying to relate shark abundance to environmental variables, and analyses so far show mangroves are positively correlated, i.e. the sharks (smalltooth sawfish) like to be around mangroves. To predict sawfish abundance across a wide area, I need to determine whether stretches of coastline are beach or mangrove. Visually this is easy in Google satellite view: the blue bits are sea, the yellow bits are beach, the green bits are mangrove. I'm trying to do it automatically.
My colleagues are working on a shapefile where they've drawn a polygon around the (incredibly complex) coast of Florida, and are now looking to buffer that 2 metres inland, to end up with a shapefile covering just the 2 m of coast (we (and the sharks) don't care what the vegetation is further inshore). Edit: buffer successfully created while I was writing this.
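In case it helps anyone following along, the inward-buffer step can be sketched with shapely (an assumption on my part — any GIS buffer tool does the same thing). A negative buffer distance erodes the polygon, and differencing the eroded polygon from the original leaves only the coastal strip. Note the layer needs to be in a projected CRS (e.g. a Florida UTM zone) so that the buffer distance is really in metres:

```python
from shapely.geometry import Polygon

# Toy stand-in for the hand-drawn coastline polygon; coordinates are
# assumed to be in metres (projected CRS), not degrees.
coast = Polygon([(0, 0), (100, 0), (100, 100), (0, 100)])

# Erode 2 m inland, then subtract: what remains is the 2 m coastal strip.
inland = coast.buffer(-2)
strip = coast.difference(inland)

print(strip.area)  # 100x100 minus 96x96 = 784 m^2 of strip
```

The same two operations exist in QGIS (Buffer with a negative distance, then Difference) if scripting isn't wanted.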
I'm hoping to use that layer as a mask to download Planet data, then extract the colours and use them to label the coastline as beach or mangrove. But I'm really struggling to get my head around Planet's offerings and how to use them. My questions:
1. Is there a better way of creating a highly detailed coastline shapefile than drawing it by hand? The highest-res version of NOAA's global coastline shapefiles (purple) looks pretty chunky compared to my colleague's hand-drawn polys (yellow), which are chunky compared to Google satellite's basemap underneath (see image). Note my colleague's shapefiles are offset from the gSat basemap, which I assume is a projection issue, but I need to check.
2. Which Planet product should I use? Planet Daily Imagery is lower-res than Ortho Visual Collect, which is lower-res than Google satellite. Ideally we'd use the 15th of Jan, April, July & Oct 2018, plus one current image, but if there's one 'best-quality' option and everything else is worse, that's fine too. I'm led to understand that Planet is the best imagery available to me (academic licence through my university), but was surprised to see its showcase beaten by Google satellite (which I thought it powered). Maybe NYC is an unfair example, e.g. if the gSat images are composites from aerial photos?
3. How should I extract pixel colour for the classification? I realise this is probably a common GIS raster operation not specific to Planet, but I'd welcome any pointers from anyone who does this kind of operation as a routine workflow.
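Question 3 is indeed a standard raster workflow, so here is a minimal sketch of the classification half, assuming NumPy and an RGB array already clipped to the coastal strip. The thresholds and the `classify_pixels` name are illustrative, not calibrated; a crude rule marks green-dominant pixels as mangrove and bright red-and-green (sandy) pixels as beach:

```python
import numpy as np

def classify_pixels(rgb):
    """Crude beach/mangrove rule on a (3, H, W) uint8 RGB array.

    Green-dominant pixels -> "mangrove"; bright, sandy pixels
    (high red and green, red >= green, blue below green) -> "beach";
    everything else, including sea, -> "other". Thresholds are
    illustrative placeholders, not calibrated values.
    """
    r = rgb[0].astype(int)
    g = rgb[1].astype(int)
    b = rgb[2].astype(int)
    labels = np.full(r.shape, "other", dtype=object)
    labels[(g > r) & (g > b)] = "mangrove"
    labels[(r > 150) & (g > 130) & (b < g) & (r >= g)] = "beach"
    return labels

# Typical usage against a Planet scene (rasterio/geopandas assumed
# installed; "coast_strip.shp" and "planet_scene.tif" are hypothetical
# file names, and the strip must be reprojected to the raster's CRS):
#
#   import rasterio, geopandas as gpd
#   from rasterio.mask import mask
#   with rasterio.open("planet_scene.tif") as src:
#       strip = gpd.read_file("coast_strip.shp").to_crs(src.crs)
#       clipped, _ = mask(src, strip.geometry, crop=True)
#   labels = classify_pixels(clipped[:3])
```

In practice I'd expect a supervised classifier trained on hand-labelled pixels to beat fixed thresholds, but the clip-then-classify-per-pixel structure stays the same.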
Thanks in advance everyone!