We have been working hard to optimise our disease segmentation model and embed it in our mobile application. This allows us to provide the user with a segmentation result even when no internet connection is available.
We have also been improving the model to work at lower resolutions, so that we can generate a map overlay from the frames extracted from the lower-resolution video streamed from the drone during the flight.
As you can see from the screenshots, we have made excellent progress on this task and have achieved both objectives, although the current analysis speed is slow. For now, on-device segmentation will therefore serve as a backup to the cloud service while we work on further optimisation.
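The cloud-first-with-on-device-backup behaviour described above can be sketched as a simple fallback routine. This is a minimal illustration only: the function names `cloud_segment` and `on_device_segment` are hypothetical placeholders, not the application's actual API.

```python
def cloud_segment(image):
    # Placeholder for the cloud segmentation service (hypothetical).
    # Here it simulates an offline device by raising a network error.
    raise ConnectionError("no internet connection")

def on_device_segment(image):
    # Placeholder for the embedded on-device model (hypothetical).
    # Returns an empty mask the same shape as the input image.
    return {"source": "on-device",
            "mask": [[0] * len(row) for row in image]}

def segment(image):
    """Prefer the cloud service for speed and accuracy; fall back to
    the slower on-device model when the network is unavailable."""
    try:
        return cloud_segment(image)
    except (ConnectionError, TimeoutError):
        return on_device_segment(image)
```

With the stubs above, calling `segment` on a frame falls through to the on-device path, which is exactly the offline scenario the mobile integration is meant to cover.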