Obtaining rock contours
Pipeline
1) Pre-processing
- The input is an orthographic RGB image with resolution .
- Since the input orthographic image can cover an area and such an image can have size , the algorithm splits the input image into small overlapping patches to avoid out-of-memory problems. The patch size depends on the amount of GPU RAM.
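The splitting described above can be sketched as follows. This is a minimal illustration, not the production code; the function name `split_into_patches` and its parameters are assumptions.

```python
# Hypothetical sketch: tile a large image into overlapping square patches.
import numpy as np

def split_into_patches(image, patch_size, overlap):
    """Yield (y, x, patch) tiles covering the image with the given overlap."""
    stride = patch_size - overlap
    h, w = image.shape[:2]
    ys = list(range(0, max(h - patch_size, 0) + 1, stride)) or [0]
    xs = list(range(0, max(w - patch_size, 0) + 1, stride)) or [0]
    # Make sure the right and bottom edges are always covered.
    if ys[-1] + patch_size < h:
        ys.append(h - patch_size)
    if xs[-1] + patch_size < w:
        xs.append(w - patch_size)
    for y in ys:
        for x in xs:
            yield y, x, image[y:y + patch_size, x:x + patch_size]
```

With a 1000×1000 image, a 512-pixel patch and a 128-pixel overlap, this yields a 3×3 grid of tiles whose last row and column are shifted to end exactly at the image border.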
2) Deep Learning Neural Network
- The ANN architecture is aimed at predicting individual rock instances. It uses the well-known semantic-segmentation UNet, but with a special loss function designed to produce instance segmentation.
- The ANN was trained to process input data with various distortions (in both the color and geometry domains: color saturation, shadows, brightness/contrast, random noise, blurring, etc.).
- Packed batches are processed on the GPU.
- Predicted results are unpacked into patches and merged back into a big-image result.
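The merge-back step can be sketched as averaging per-patch predictions into one full-size probability map; `merge_patches` and the averaging strategy are assumptions, since the source does not say how overlaps are resolved.

```python
# Hypothetical sketch: merge per-patch predictions into a full-size map,
# averaging wherever patches overlap.
import numpy as np

def merge_patches(shape, tiles):
    """tiles: iterable of (y, x, pred) where pred is a 2-D probability patch."""
    acc = np.zeros(shape, dtype=np.float64)
    count = np.zeros(shape, dtype=np.float64)
    for y, x, pred in tiles:
        ph, pw = pred.shape
        acc[y:y + ph, x:x + pw] += pred
        count[y:y + ph, x:x + pw] += 1.0
    # Avoid division by zero for pixels no patch covered.
    return acc / np.maximum(count, 1.0)
```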
3) Post-processing
- Split the big result image (for a particular scale) into small patches. The patch size depends on the amount of CPU RAM.
- Merge patches from different scales into one common result.
- Apply image-processing algorithms to extract the rocks’ contours.
- Obtain the required geometric characteristics, Area and Size, from each rock’s contour.
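The last step above can be sketched with `scipy.ndimage` in place of a full contour tracer. Taking Area as the pixel count of a component and Size as the longer side of its bounding box is an assumption; the source does not define these characteristics exactly.

```python
# Sketch: per-component Area (pixel count) and Size (longer bounding-box
# side) from a binary rock mask. Definitions are illustrative assumptions.
import numpy as np
from scipy import ndimage

def component_geometry(mask):
    """Return a list of (area, size) pairs, one per connected component."""
    labels, n = ndimage.label(mask)
    results = []
    for i, sl in enumerate(ndimage.find_objects(labels), start=1):
        h = sl[0].stop - sl[0].start
        w = sl[1].stop - sl[1].start
        area = int((labels[sl] == i).sum())
        results.append((area, max(h, w)))
    return results
```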
Algorithm
Since the AI is trained on a particular range of rock sizes, a multi-scale approach has been implemented to cover a wider range of processed rock sizes.
This means that the AI processes the source data at various scales, which allows it to detect both little and big rocks and to combine the results into one general result containing both.
Scales are represented by two values: 1.0 and 0.25.
The image above shows how rock predictions at scale=1.0 (left image) detect little rocks but “do not see” big rocks, while predictions at scale=0.25 (right image) detect big rocks but “do not see” little rocks.
This allows obtaining a range of detected rock sizes from 10 cm to 5 meters.
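The multi-scale loop can be sketched as below. `predict` stands in for the UNet inference (not shown in the source), and integer-factor down/upsampling replaces a real resampler to keep the example dependency-free; both are assumptions.

```python
# Hypothetical sketch of the multi-scale loop: run inference at scale 1.0
# and 0.25, then bring each mask back to the original resolution.
import numpy as np

def multiscale_masks(image, predict, factors=(1, 4)):
    """Run `predict` at full and reduced resolution; return full-size masks."""
    masks = []
    for f in factors:
        small = image[::f, ::f]          # scale = 1/f, i.e. 1.0 and 0.25
        mask = predict(small)            # boolean rock mask at that scale
        # Nearest-neighbour upsample back to the original resolution.
        full = np.repeat(np.repeat(mask, f, axis=0), f, axis=1)
        masks.append(full[:image.shape[0], :image.shape[1]])
    return masks
```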
If the second derivative is negative, we have a projection section that is “convex in the direction of increasing probability”. This is a necessary (but not sufficient) condition for identifying a detected object. It works well for little rocks (even with noise) but is not enough to cover big rocks.
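The convexity test can be sketched on a 1-D probability profile using the discrete second difference as a stand-in for the second derivative; the function name is an assumption.

```python
# Sketch: the second discrete difference is negative where the probability
# profile is convex in the direction of increasing probability (peaks).
import numpy as np

def convex_sections(profile):
    """Boolean mask of length len(profile) - 2 marking negative second
    differences, i.e. candidate object sections."""
    d2 = np.diff(profile, n=2)
    return d2 < 0
```

For the profile `[0, 1, 4, 1, 0]` only the middle section (around the peak) satisfies the condition.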
Store the results obtained from scale=1.0:
The image below shows the colored masks:
It can be noticed that some big rocks were missed. Instead, a lot of little rocks were detected in the areas corresponding to big rocks. To detect the big rocks we need results from another scale.
Store the results obtained from scale=0.25:
Now we have strong results for big rocks, while little rocks are absent.
Merge areas of big rocks from different scales:
Merge little rocks and big rocks together:
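The two merge steps above can be sketched as: discard little-rock components that overlap big-rock areas from the coarser scale, then take the union of the masks. The function name and overlap rule are assumptions.

```python
# Hypothetical sketch: drop little-rock components that fall inside the
# big-rock areas found at scale=0.25, then union the two masks.
import numpy as np
from scipy import ndimage

def merge_scales(little_mask, big_mask):
    labels, n = ndimage.label(little_mask)
    keep = np.zeros_like(little_mask, dtype=bool)
    for i in range(1, n + 1):
        comp = labels == i
        # Keep a little rock only if it does not overlap a big rock.
        if not (comp & big_mask).any():
            keep |= comp
    return keep | big_mask
```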
At this moment we have masks of separated big and little rocks. But such a mask corresponds to the predicted high-probability rock areas and does not cover the rocks’ boundaries well.
Create connected components and add an extra component for extending the rocks’ shapes to their boundaries:
Apply watershed and contour-detection algorithms to extend the connected components and obtain their contours:
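The extension step can be sketched with `scipy.ndimage.watershed_ift`, used here in place of whatever watershed implementation the pipeline actually employs; the function name, the inverted-probability cost image, and the background-marker scheme are all assumptions.

```python
# Sketch: grow high-probability rock cores out to rock boundaries with a
# marker-based watershed over the inverted probability map.
import numpy as np
from scipy import ndimage

def extend_components(prob, core_mask, background_mask):
    """Grow labelled cores over the probability map; return a label image."""
    markers, _ = ndimage.label(core_mask)
    markers[background_mask] = -1           # extra background component
    cost = (255 * (1.0 - prob)).astype(np.uint8)
    labels = ndimage.watershed_ift(cost, markers)
    labels[labels == -1] = 0                # clear the background label
    return labels
```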
After contours have been found for each connected component, their geometric parameters Area and Size can be used to gather 2D fragmentation statistics. If a Digital Elevation Model (DEM) is added to the stored results, 3D statistics can be obtained as well.
Statistics for areas where rocks were not detected by the AI are processed additionally by a special Fines-rectification algorithm based on the obtained AI results and the input control parameter Fines-Factor.
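Gathering the fragmentation statistics can be sketched as a cumulative passing curve over the measured sizes. The count-based weighting and the bin edges are illustrative assumptions; real fragmentation curves are often mass- or area-weighted.

```python
# Sketch: count-based cumulative passing curve from per-rock sizes (meters).
# Bin edges are illustrative, spanning the 10 cm .. 5 m detection range.
import numpy as np

def passing_curve(sizes_m, edges_m=(0.1, 0.25, 0.5, 1.0, 2.0, 5.0)):
    """Percent of rocks with size at or below each edge."""
    sizes = np.asarray(sizes_m, dtype=float)
    return [float((sizes <= e).mean() * 100.0) for e in edges_m]
```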