Article

Flying Free: A Research Overview of Deep Learning in Drone Navigation Autonomy

1 School of Electronic and Electrical Engineering, TU Dublin, Central Quad, Grangegorman Lower, D07 ADY7 Dublin, Ireland
2 School of Computer Science, TU Dublin, Central Quad, Grangegorman Lower, D07 ADY7 Dublin, Ireland
* Author to whom correspondence should be addressed.
Drones 2021, 5(2), 52; https://doi.org/10.3390/drones5020052
Submission received: 4 May 2021 / Revised: 3 June 2021 / Accepted: 4 June 2021 / Published: 17 June 2021
(This article belongs to the Topic Autonomy for Enabling the Next Generation of UAVs)

Abstract

With the rise of Deep Learning approaches in computer vision applications, significant strides have been made towards vehicular autonomy. Research activity in autonomous drone navigation has increased rapidly in the past five years, and drones are moving fast towards the ultimate goal of near-complete autonomy. However, while much work in the area focuses on specific tasks in drone navigation, the contribution to the overall goal of autonomy is often not assessed, and a comprehensive overview is needed. In this work, a taxonomy of drone navigation autonomy is established by mapping the definitions of vehicular autonomy levels, as defined by the Society of Automotive Engineers, to specific drone tasks in order to create a clear definition of autonomy when applied to drones. A top–down examination of research work in the area is conducted, focusing on drone navigation tasks, in order to understand the extent of research activity in each area. Autonomy levels are cross-checked against the drone navigation tasks addressed in each work to provide a framework for understanding the trajectory of current research. This work serves as a guide to research in drone autonomy with a particular focus on Deep Learning-based solutions, indicating key works and areas of opportunity for development of this area in the future.

1. Introduction

Since 2016, drone technology has seen an increase in consumer popularity, growing in market size from 2 billion USD in 2016 [1] to 22.5 billion USD in 2020 [2]. As small form factor UAVs similar to the drone pictured in Figure 1 flooded the market, several industries adopted these devices for use in areas including, but not limited to, cable inspection, product monitoring, civil planning, agriculture and public safety. In research, this technology has been used mostly in areas related to data gathering and analysis to support these applications. However, direct development of navigation systems that provide greater automation of drone operation has become a realistic aim, given the increasing capability of Deep Neural Networks (DNNs) in computer vision and their application to the related area of vehicular autonomy. The work outlined in this paper is twofold: (1) it provides a common vocabulary around levels of drone autonomy, mapped against drone functionality, and (2) it examines research works within these functionality areas, so as to provide an indexed top–down perspective of research activity in the autonomous drone navigation sector. With recent advances in hardware and software capability, Deep Learning has become very versatile, and there is no shortage of papers applying it to drone autonomy. While domain-knowledge engineered solutions exist that combine precision GPS, lidar, image processing and/or computer vision into a system for autonomous navigation, these solutions are not robust, have a high implementation cost, and can require supporting subsystems, such as network access, to be present for optimal operation. The focus in this paper is on navigation works that use Deep Learning or similar learning-based solutions as the basis for implementing navigation tasks towards drone autonomy. Just as Deep Learning underpins the realisation of self-driving cars, the ability of trained Deep Learning models to provide robust interpretation of visual and other sensor data is critical to drones reaching fully autonomous navigation. This paper aims to highlight the navigation functionality of research works in the autonomous drone navigation area, across environmental awareness, basic navigation and expanded navigation capabilities. While the general focus is on DNN-based papers, some non-DNN-based solutions are included in the collected papers for contrast.
Research projects focused specifically on the development of new navigational techniques, with or without the cooperation of industry partners, define what is considered the state of the art here: not solutions currently deployed in industry, but solutions and implementations under active research with the potential for future development.

Sources

Our overview covers peer-reviewed publications, acquired using conditional searches of relevant keywords, including “drones”, “autonomous navigation”, “artificial intelligence” and “deep learning”, in research databases such as Google Scholar, IEEE Xplore and arXiv. After selection, the most common source of publications was the IEEE Xplore database [3], likely due to its broad coverage of high-quality published research in electronic engineering and computer science. From the sources found, the most relevant papers on autonomous drone navigation were selected by assessing their relevance to the topic as well as their number of citations per year, as a basic measure for citation analysis [4,5]. The set of papers selected is referred to as the “research pool” (Appendix A, Appendix B, Appendix C, Appendix D and Appendix E).

2. Approach

In this section, we explain the structure and high-level metrics that we apply to this overview.

2.1. Levels of Autonomy

As a first step, we need to define the concept of autonomy for drones, with a view to recognising different levels of autonomous navigation. This paper identifies the emergent navigation features in current research against these levels. We apply the six levels of autonomy standard published by SAE International (the Society of Automotive Engineers). Though these levels were intended by SAE for autonomous ground vehicles, the logic can apply to any vehicle capable of autonomy [6]. The concept of autonomy for cars and drones is similar, implying a gradual removal of driver roles in obstacle negotiation and path finding, progressing to fully independent autonomous navigation regardless of restrictions imposed by surface-bound movement or obstacles. By examining the SAE levels of autonomy for cars, we note how each level is directly applicable to drones. This provides a useful line of analysis for our overview. In Figure 2, we set out the functionality of drone navigation, mapped against these levels of autonomy. Autonomy starts at Level 1 with some features assisted, including GPS guidance, airspace detection and landing zone evaluation. These features are designed to provide automated support to a human operator and are already found in commercially available drones. Level 2 autonomous features are navigational operations that are specific and use-case dependent, where an operator must monitor but need not continuously control. In the context of drone operation this can include features where the drone is directed to navigate autonomously if possible, e.g., the “follow me” and “track target” navigational commands. Some of these features are available in premium commercial products. Level 3 features allow for autonomous navigation in certain identified environments, where the pilot is prompted for engagement when needed. At Level 4, the drone must navigate autonomously within most use cases without the need for human interaction. Level 5 autonomy implies Level 4 autonomy but in all possible use cases, environments and conditions, and as such is considered a theoretical ideal that is outside the scope of this overview. Though this paper aims at evaluating the features of papers in the context of Level 4 autonomy, the bulk of the papers in the research pool address Level 2 or 3 autonomy, with the most common project archetype involving DNN training for autonomous navigation in a specific environment.
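As a compact restatement of this mapping, the sketch below encodes the level descriptions above as a simple lookup table (a minimal, illustrative sketch paraphrasing Figure 2; the feature wording is abbreviated and the helper function is our own, not part of any cited work).

```python
# Illustrative mapping of SAE-style autonomy levels to example drone
# navigation features, paraphrasing Figure 2 (not an exhaustive list).
DRONE_AUTONOMY_LEVELS = {
    1: ["GPS-guided waypoint assistance", "airspace detection", "landing-zone evaluation"],
    2: ["'follow me' tracking", "'track target' navigation (operator monitors)"],
    3: ["autonomous navigation in identified environments, pilot prompted when needed"],
    4: ["autonomous navigation in most use cases without human interaction"],
    5: ["autonomous navigation in all use cases, environments and conditions (theoretical)"],
}

def required_level(feature: str) -> int:
    """Return the lowest autonomy level whose example features mention `feature`."""
    for level in sorted(DRONE_AUTONOMY_LEVELS):
        if any(feature.lower() in f.lower() for f in DRONE_AUTONOMY_LEVELS[level]):
            return level
    raise ValueError(f"Feature not mapped: {feature}")

if __name__ == "__main__":
    print(required_level("follow me"))  # -> 2
```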

2.2. Features of Autonomy

We identified that autonomous navigation features fall into three distinct groups: “Awareness”, which covers the vehicle’s understanding of its surroundings, gathered from whichever sensors are available; “Basic Navigation”, which includes the functionality expected of autonomous navigation, such as avoiding relevant obstacles and collision avoidance strategies; and “Expanded Navigation”, which covers features with a greater development depth, such as pathway planning and multi-use-case autonomous navigation. These groupings and their more detailed functional features are listed in Figure 3, as identified for Level 4 autonomy. In addition, we note that common engineering features are a useful category for this overview of navigation capability, and we include these as a fourth category for analysis. This is done to acknowledge projects in the research pool that aim to achieve a goal within a given hardware limitation, such as optimisations for lower-end hardware and independence from subsystems such as wireless networks [7].

2.3. Citations

In this overview, we indicate the level of research activity by functional area of autonomous drone navigation. We note that within the research domain of autonomous drone navigation there is a lack of standard metrics to enable comparison of contribution and performance. In Section 3, we include “number of citations” as a basic indicator of research attention, whilst acknowledging that citation counts can be ambiguous. We order the research by number of citations per year, to account for older papers having had more elapsed time in which to accumulate citations. We also note that citations are not in themselves a quality indicator, but simply an indicator of research attention and critical analysis from other works.
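A minimal sketch of this ordering is given below; the normalisation used (total citations divided by the number of calendar years elapsed, counting the publication year itself) is an assumption made for illustration rather than a formula prescribed in the text.

```python
from dataclasses import dataclass

@dataclass
class Paper:
    title: str
    year: int
    citations: int

def citations_per_year(paper: Paper, current_year: int = 2021) -> float:
    # Assumption: elapsed time counts the publication year itself.
    return paper.citations / max(current_year - paper.year + 1, 1)

# Example: order a small pool by citations per year, highest first.
pool = [Paper("DroNet [22]", 2018, 158), Paper("Giusti et al. [36]", 2016, 424)]
pool.sort(key=citations_per_year, reverse=True)
```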

2.4. Evaluation Criteria in the Literature

The most common technical approach in the research pool is that of Deep Learning-based navigation policies implemented on monocular quad-rotor helicopter drones. Within these, the most common criteria for the evaluation of neural networks are accuracy and F1 score. These are applied to assess the ability of a particular DNN to correctly address a particular sensor-data-driven task, such as object detection, image classification or distance assessment. While accuracy is straightforward, being a direct measure of the network’s ability to predict values correctly against the test dataset, F1 score is less transparent as the harmonic mean of precision and recall [8]; a low F1 value implies a high number of false positive and/or false negative predictions. Because DNN accuracy depends on the quality of the data, and F1 score is both data-specific and situational, we consider it meaningless to compare the accuracy and F1 score of one DNN architecture to another if the architectures are applied in entirely different environments. Efficiency, in the context of drone navigation, can take the form of processing time in milliseconds (ms) or the power draw while the solution is running in milliwatts (mW). This can be relevant across environments and applications, as it is in part a product of the DNN architecture itself and how that architecture is implemented in experiments, not necessarily the training/test dataset that was fed into it. For this overview, this metric is only represented in the form of processing time, as power draw is more reliant on the engineering of the hardware. Though evaluating quantitative values such as accuracy, efficiency and F1 score is outside the scope of this paper, they are included where available for the full research pool.
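For reference, the F1 score referred to above is the harmonic mean of precision $P$ and recall $R$, which can equivalently be written in terms of true positives (TP), false positives (FP) and false negatives (FN):

$$ F_1 = \frac{2PR}{P + R} = \frac{2\,\mathrm{TP}}{2\,\mathrm{TP} + \mathrm{FP} + \mathrm{FN}}. $$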

3. Results

The following results are a subset of the full research pool, covering the navigation features of the most cited papers per year of publication, organised by the feature headers described in Figure 3. Quantitative results, using the aforementioned typical evaluation criteria, are available for reference in Appendix A, Appendix B, Appendix C, Appendix D and Appendix E. (A complete evaluation matrix for the research pool, with bold text for readability, is available in Table S1 in the Supplementary Materials; Table S2 provides an abbreviation legend.)

3.1. Awareness

This encompasses any feature included in the referenced solution that analyses the drone’s spatial environment; though basic navigation features can be developed without this understanding, its absence limits the capability of that navigation. Projects that do not include awareness features risk limited command capability and an over-reliance on prediction. The feature mappings of the awareness section can be seen in Table 1, and a toy sketch of how these three feature flags might be assigned follows the list below.
  • Spatial Evaluation (SE): The drone can account for the basic spatial limitations of its surrounding environment, such as walls or ceilings, allowing it to safely operate within an enclosed space.
  • Obstacle Detection (ODe): The drone can detect independent objects, such as obstacles beyond the bounds of the previously addressed Spatial Evaluation, but does not make a distinction between those objects.
  • Obstacle Distinction (ODi): The drone can identify distinct objects with independent properties or labels, e.g., identifying a target object and treating it differently from other objects or walls/floors in the environment.
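As a hedged illustration of how these three awareness flags differ, the sketch below assigns SE, ODe and ODi from a generic per-frame perception output (the label set, the `distinct` field and the overall structure are assumptions made for illustration, not taken from any paper in the pool).

```python
from typing import Dict, List

STRUCTURAL_LABELS = ("wall", "ceiling", "floor")  # assumed label set for illustration

def awareness_flags(detections: List[Dict]) -> Dict[str, bool]:
    """Map a frame's perception output onto the SE / ODe / ODi feature flags.

    `detections` is assumed to be a list of dicts with a "label" key and a
    "distinct" flag marking objects that carry an independent identity.
    """
    labels = [d["label"] for d in detections]
    return {
        "SE": any(l in STRUCTURAL_LABELS for l in labels),        # Spatial Evaluation
        "ODe": any(l not in STRUCTURAL_LABELS for l in labels),   # Obstacle Detection
        "ODi": any(d.get("distinct", False) for d in detections), # Obstacle Distinction
    }
```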

3.2. Basic Navigation

Most of the solutions examined implement features in the category of basic navigation, which we describe as core navigation features for autonomous drones. The Basic Navigation features outlined below are tabulated in Table 2.
  • Autonomous Movement (AM): The drone has a navigation policy that allows it to fly without direct control from an operator; this policy can be represented in forms as simple as navigation commands such as “go forward” or as complex as a vector of steering angle and velocity in two dimensions lying on the x–z plane (see the sketch after this list).
  • Collision Avoidance (CA): The drone’s navigation policy includes learned or sensed logic to assist in avoiding collision with non-distinct obstacles.
  • Auto Take-off/Landing (ATL): The drone is able to perform self-landing and take-off routines based on its awareness of the environment; this includes determining a safe spot to land and a safe thrust vector along which to take off.
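The two policy-output forms mentioned under Autonomous Movement above can be sketched as follows (a minimal illustration; the command set, angle threshold and sign convention are assumptions, not drawn from a specific paper).

```python
from dataclasses import dataclass

# Discrete command form, e.g. "forward" or "yaw_left".
DISCRETE_COMMANDS = ("forward", "yaw_left", "yaw_right", "stop")

@dataclass
class SteeringOutput:
    """Continuous form: steering angle (rad) and velocity (m/s) on the x-z plane."""
    steering_angle: float
    velocity: float

def to_discrete(output: SteeringOutput, angle_threshold: float = 0.2) -> str:
    """Reduce a continuous policy output to a discrete command (illustrative thresholds)."""
    if output.velocity <= 0.0:
        return "stop"
    if output.steering_angle > angle_threshold:
        return "yaw_right"
    if output.steering_angle < -angle_threshold:
        return "yaw_left"
    return "forward"
```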

3.3. Expanded Navigation

Expanded navigation covers elements of autonomy that we suggest are second-level navigation autonomy features, relative to those of Section 3.2, and will be addressed at a later stage than the core features of basic navigation. These features would increase the operational capacity of a drone autonomy project that already covers some features of basic navigation; the following features are tabulated in Table 3.
  • Path Generation (PG): The drone attempts to generate or optimize a pathway to a given location; the application of the generated pathway can vary depending on the goal of the project (e.g., pathways for safety or pathways for efficiency). A generic sketch of this feature follows the list.
  • Environment Distinction (ED): The drone can distinguish or take advantage of features of an uncommon use-case environment, such as forests, rural areas or mountainous regions. Urban and indoor environments are excluded from this criterion.
  • Non-Planar Movement (NPM): The implemented navigational policy makes use of full three-dimensional movement strategies enabling the drone to navigate above or below obstacles as well as around them.
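As a generic, classical illustration of the Path Generation feature, the sketch below runs breadth-first search over an occupancy grid (the papers in the pool use a variety of learned and engineered planners; this routine is only a minimal stand-in).

```python
from collections import deque
from typing import List, Optional, Tuple

def generate_path(grid: List[List[int]], start: Tuple[int, int],
                  goal: Tuple[int, int]) -> Optional[List[Tuple[int, int]]]:
    """Breadth-first path generation on an occupancy grid (0 = free, 1 = obstacle)."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path, node = [], cell
            while node is not None:          # walk back to the start
                path.append(node)
                node = came_from[node]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                    and (nr, nc) not in came_from:
                came_from[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None  # no path found
```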

3.4. Engineering

This group heading does not tie directly into Level 4 autonomous navigation, but captures additional challenges that apply to a portion of the covered research. It encompasses any feature that advances the robustness of drone physical implementation or addresses any common limitations related to drone hardware in the context of autonomous flight [7]. These feature mappings are visible in Table 4.
  • On-Board Processing (OBO): The drone does not rely on external computation for autonomous navigation, and navigation is performed on board with an efficiency comparable to an external system (a simple benchmarking sketch for the associated processing-time metric follows this list).
  • Extra Sensory (ES): The drone employs sensors other than a camera and rotor movement information such as RPM or thrust. The presence of this feature is not necessarily beneficial; however, the use of additional on-board sensors to aid autonomous navigation may be worth the weight penalty and computational trade-off.
  • Signal Independent (SI): Drone movement policies do not rely on streamed information, such as global position from a wireless/satellite network, or on other subsystems. Forgoing such signals can be a limiting factor, as they may greatly improve the precision of an autonomous system.
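To make the processing-time side of On-Board Processing concrete, a per-frame benchmark can be run directly on the target hardware; the harness below is a generic sketch (the `model` callable, frame list and warm-up count are assumptions), not a procedure taken from the reviewed papers.

```python
import time

def mean_inference_ms(model, frames, warmup: int = 5) -> float:
    """Measure mean per-frame processing time in milliseconds.

    `model` is assumed to be any callable mapping a frame to a navigation output.
    """
    for frame in frames[:warmup]:  # warm-up passes, excluded from timing
        model(frame)
    start = time.perf_counter()
    for frame in frames:
        model(frame)
    return 1000.0 * (time.perf_counter() - start) / len(frames)
```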

3.5. Comparative Results

Figure 4 indicates the focus of functional features in the research space, based on the relative frequency with which features appear in the research pool. This is a potentially useful indicator of which areas are lacking in research attention versus those that are heavily covered. This information is discussed in detail in Section 4.

4. Discussion

Through analysis of the results across the feature headers, and the comparative results between the papers in the research pool, it is shown that there are areas which are significantly more developed in the current research space. Conversely, this analysis also identifies underdeveloped areas where opportunity exists for further research.

4.1. Common Learning Models

Three particular Deep Learning models appear most frequently in the research pool in support of autonomous decision making. Firstly, “VGG-16” [40] is a CNN image classifier trained on the “ImageNet” dataset [41] of over 14 million images matched to thousands of labels. VGG-16 supports wide-ranging image classification or can serve as a base for transfer learning, with fine-tuning using images specific to a target drone environment. Most research works in the pool that adopt it, or the object detection model “YOLOv3” [42], use it as a base for collision avoidance or object detection/distinction. The “ResNet” architecture [43] originates from a CNN-based paper discussing the optimisation of the “AlexNet” architecture [44] through residual layer “shortcuts” that can approximate the activity of entire neural layers. Similar to VGG-16, ResNet is trained on the ImageNet dataset. The benefit of ResNet’s shortcut architecture is a considerable reduction in processing overhead, resulting in efficient models with low response times while maintaining comparable accuracy. This is favourable for drone operations that require a low CPU overhead. “DroNet” is more specific to the area of autonomous drone navigation and applies manually labelled car and bicycle footage as training data for navigation in an urban environment. DroNet’s outputs from a single image are specific to the purposes of drone navigation, providing a steering angle, to keep the drone navigating while avoiding obstacles, and a collision probability, to let the UAV recognise dangerous situations and promptly react to them. As a purpose-built autonomous drone network, the DroNet work [22] is highly cited and used as a base network for several other papers in the research pool.
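The two-headed output structure described for DroNet can be sketched as follows (a minimal PyTorch sketch; the backbone, layer sizes and grayscale input format are illustrative assumptions and do not reproduce the published DroNet architecture).

```python
import torch
import torch.nn as nn

class TwoHeadNavNet(nn.Module):
    """Sketch of a DroNet-style policy: a regression head for steering angle
    and a classification head for collision probability."""

    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.steering_head = nn.Linear(64, 1)    # steering angle (rad)
        self.collision_head = nn.Linear(64, 1)   # collision logit

    def forward(self, grey_image: torch.Tensor):
        features = self.backbone(grey_image)
        steering = self.steering_head(features)
        collision_prob = torch.sigmoid(self.collision_head(features))
        return steering, collision_prob
```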

4.2. Areas of Concentrated Research Effort

The most common project archetype seen throughout the research pool is DNN-based autonomous movement with a quad-rotor drone, trained from bespoke data [7] or transfer-learned from a pretrained network [25]. The most frequent focus of research work within the research pool was basic autonomous movement. Though the quality of the various implementations and their methods of acquiring results differ, solutions trended towards similar outcomes of approximately 75–95% navigational accuracy inside the project’s use case. While this is a wide range and the exact tasks differ across individual research works, the high levels of accuracy for DNN-based navigation policies indicate that they are effective in the environments they are trained for. Most projects reduced complexity by not relying on subsystems such as GPS or network access and/or by focusing, partially or fully, on optimising network efficiency for on-board operation. Most projects also avoided the use of any additional sensors, relying instead on a single camera system. No papers in the research pool considered the use of dual cameras for spatial awareness, which ran counter to the authors’ expectations.

4.3. Areas of Opportunity

A surprising result from the comparative analysis is that few research projects exhibited the environment distinction feature, and of those that did, none attempted to distinguish explicitly between two or more environments. Several projects did test their implementations in various environments [22,29,38] but did not qualify as addressing the environment distinction feature, as their approaches did not represent the differences between those environments in the solution itself: there is no architecture modification to account for different environments, and none of the datasets used in the research pool carry distinct environment labels. This area holds considerable potential, as recognising different environments could drastically affect the accuracy and efficiency of a solution and provides a level of transparency within autonomous navigation that may be necessary for future regulatory compliance. Certain papers, such as Rodriguez-Ramos et al. [45], took an interesting approach to training data by training their model on simulated data; however, such an approach can result in a significant trade-off in accuracy under realistic test conditions. It was also noted that the visual fidelity of such simulations was poor compared to what is achievable in modern rendering engines, and some reduction in this trade-off can be seen when simulations are run through modern video-game engines [46], such as the Unity or Unreal engines. It is pertinent to note that drone-specific simulation software known as Gazebo has been used in some projects, which demonstrates the validity of simulation [47].
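To illustrate what the missing architecture modification could look like, the sketch below conditions a navigation policy on an explicit environment label (an assumed design intended only to make the opportunity concrete; it is not drawn from any paper in the pool).

```python
import torch
import torch.nn as nn

class EnvConditionedPolicy(nn.Module):
    """Assumed design sketch: condition a navigation policy on an explicit
    environment label (e.g. forest / urban / indoor)."""

    def __init__(self, num_environments: int = 3, feature_dim: int = 64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, feature_dim, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.env_embedding = nn.Embedding(num_environments, 16)
        self.head = nn.Linear(feature_dim + 16, 2)  # steering angle, collision logit

    def forward(self, image: torch.Tensor, env_id: torch.Tensor):
        features = torch.cat([self.backbone(image), self.env_embedding(env_id)], dim=1)
        steering, collision_logit = self.head(features).unbind(dim=1)
        return steering, torch.sigmoid(collision_logit)
```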

4.4. Issues

Most research works describe their approach to model training and testing, stating the chosen ground truth and labels and describing how the navigation system interfaces with the CNN model. One issue to highlight, however, is a lack of uniformity of metrics in the domain. Some papers evaluate their approach using environment-specific metrics, such as the number of successful laps [46] and performance at different speeds [23]. In the DNN research space, the inclusion of visual descriptions of architectures and of evaluation results comparing similar architectural or function-level approaches is crucial to the explainability of a project. The use of research-work-specific metrics, when presented without connection to a more common metric such as accuracy, makes it difficult to compare the performance of autonomous navigation approaches across the domain.
Another typical issue found in the research pool is that various computer and electronic engineering hurdles are either not attempted, not addressed, or side-stepped by carefully designing the solution to work inside the boundaries of those hurdles. This reduces the robustness of the implementation and potentially limits the use cases in which the solution can operate. Power consumption, data processing, latency, sensor design and communication are all areas affected by this issue. We suggest that drone autonomy research projects could benefit greatly from interdisciplinary interaction.

Supplementary Materials

Table S1: Drone Autonomy Research Overview Rubric Sorted by Number of Citations/Year; Table S2: Abbreviation legend for Autonomous Features are available online at https://www.mdpi.com/article/10.3390/drones5020052/s1.

Author Contributions

Conceptualization, T.L.; Data curation, T.L.; Formal analysis, T.L.; Investigation, T.L.; Methodology, T.L.; Supervision, S.M. and J.C.; Visualization, T.L.; Writing—original draft, T.L.; Writing—review and editing, T.L., S.M. and J.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Science Foundation Ireland (SFI) ADVANCE Centre for Research Training.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The feature mappings and standard metric information (where found) for the entire research pool can be found in the Supplementary Materials.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
DNN: Deep Neural Network
UAV: Unmanned Aerial Vehicle
IoT: Internet of Things
CNN: Convolutional Neural Network
CPU: Central Processing Unit
MDPI: Multidisciplinary Digital Publishing Institute
IEEE: Institute of Electrical and Electronics Engineers
SAE: Society of Automotive Engineers (SAE International)

Appendix A. Research Pool—2020 Section

Table A1. All papers in the research pool published in the year 2020, tabulated by F1 score, accuracy and efficiency (processing time in milliseconds) where found.
Paper | Year | Citations | F1 Score | Accuracy | Efficiency
A. Loquercio et al. [9] | 2020 | 34 | - | - | -
M. K. Al-Sharman et al. [10] | 2020 | 11 | - | - | -
S. Nezami et al. [11] | 2020 | 8 | - | 0.983 | -
H. Shiri et al. [12] | 2020 | 6 | - | - | -
K. Lee et al. [13] | 2020 | 6 | - | - | 80 ms
A. Anwar et al. [14] | 2020 | 5 | - | - | -
R. Chew et al. [15] | 2020 | 4 | 0.86 | 0.86 | -
I. Roldan et al. [48] | 2020 | 4 | - | 0.9948 | -
Y. Liao et al. [49] | 2020 | 3 | - | 0.978 | -
Y. Wang et al. [50] | 2020 | 1 | - | - | -
I. Bozcan et al. [51] | 2020 | 1 | 0.9907 | - | -
L. Messina et al. [52] | 2020 | 1 | - | - | -
B. Li et al. [53] | 2020 | 0 | - | 0.9 | -
J. Tan et al. [54] | 2020 | 0 | 0.8886 | 0.9 | -
M. Gao et al. [55] | 2020 | 0 | - | - | -
R. Yang et al. [56] | 2020 | 0 | - | 0.96 | -
K. Menfoukh et al. [57] | 2020 | 0 | 0.85 | 0.91 | -
V. Sadhu et al. [58] | 2020 | 0 | - | - | -
R. Raman et al. [59] | 2020 | 0 | - | - | -
B. Hosseiny et al. [60] | 2020 | 0 | 0.855 | 0.909 | -
R. I. Marasigan et al. [61] | 2020 | 0 | - | - | -
M. Irfan et al. [47] | 2020 | 0 | - | - | -
V. A. Bakale et al. [62] | 2020 | 0 | - | - | 92 ms
L. O. Rojas-Perez et al. [63] | 2020 | 0 | - | - | 25.4 ms

Appendix B. Research Pool—2019 Section

Table A2. All papers in the research pool published in the year 2019, tabulated by F1 score, accuracy and efficiency (processing time in milliseconds) where found.
Paper | Year | Citations | F1 Score | Accuracy | Efficiency
D. Wofk et al. [16] | 2019 | 55 | - | 0.771 | 37 ms
E. Kaufmann et al. [17] | 2019 | 50 | - | - | 100 ms
D. Palossi et al. [7] | 2019 | 43 | 0.821 | 0.891 | 55.5 ms
Hossain et al. [18] | 2019 | 19 | - | - | -
Y. Y. Munaye et al. [19] | 2019 | 11 | - | 0.98 | -
S. Islam et al. [20] | 2019 | 9 | - | 0.8 | -
A. Alshehri et al. [21] | 2019 | 8 | - | 0.8017 | -
M. A. Akhloufi et al. [64] | 2019 | 8 | - | - | 33 ms
A. G. Perera et al. [65] | 2019 | 6 | - | 0.7592 | -
X. Han et al. [66] | 2019 | 4 | - | 0.88 | -
D. R. Hartawan et al. [67] | 2019 | 4 | - | 1 | 330 ms
G. Muñoz et al. [68] | 2019 | 4 | - | - | -
Mohammadi et al. [69] | 2019 | 4 | - | - | -
A. Garcia et al. [70] | 2019 | 3 | - | 0.98 | 45 ms
S. Shin et al. [71] | 2019 | 3 | - | - | -
S. Y. Shin et al. [71] | 2019 | 2 | - | - | -
A. Garcia et al. [72] | 2019 | 1 | - | - | -
L. Liu et al. [73] | 2019 | 1 | - | - | -
J. A. Cocoma-Ortega et al. [74] | 2019 | 0 | - | 0.95 | -
M. T. Matthews et al. [75] | 2019 | 0 | - | - | -
J. Morais et al. [76] | 2019 | 0 | - | - | -
A. Garrell et al. [77] | 2019 | 0 | - | 0.7581 | -
E. Cetin et al. [78] | 2019 | 0 | - | - | -

Appendix C. Research Pool—2018 Section

Table A3. All papers in the research pool published in the year 2018, tabulated by F1 score, accuracy and efficiency (processing time in milliseconds) where found.
Paper | Year | Citations | F1 Score | Accuracy | Efficiency
A. Loquercio et al. [22] | 2018 | 158 | 0.901 | 0.954 | 50 ms
E. Kaufmann et al. [23] | 2018 | 60 | - | - | 100 ms
O. Csillik et al. [24] | 2018 | 58 | 0.9624 | 0.9624 | -
S. Jung et al. [25] | 2018 | 57 | - | 0.755 | 34 ms
A. A. Zhilenkov et al. [26] | 2018 | 23 | - | - | -
S. Lee et al. [27] | 2018 | 14 | - | - | -
S. Dionisio-Ortega et al. [28] | 2018 | 14 | - | - | -
Y. Feng et al. [79] | 2018 | 13 | - | - | -
N. Mohajerin et al. [80] | 2018 | 13 | - | - | -
A. Carrio et al. [46] | 2018 | 13 | - | 0.98 | 50 ms
A. Rodriguez-Ramos et al. [45] | 2018 | 12 | - | 0.7864 | -
M. Jafari et al. [81] | 2018 | 11 | - | - | -
M. A. Anwar et al. [14] | 2018 | 11 | - | - | -
A. Khan et al. [82] | 2018 | 10 | - | 0.78 | -
Y. Xu et al. [83] | 2018 | 7 | - | - | -
I. A. Sulistijono et al. [84] | 2018 | 6 | - | 0.84 | 1450 ms
J. Shin et al. [71] | 2018 | 6 | - | - | -
S. P. Yong et al. [85] | 2018 | 5 | 0.731 | 0.9732 | -
C. Beleznai et al. [86] | 2018 | 3 | - | - | 50 ms
H. U. Dike et al. [87] | 2018 | 3 | - | 0.865 | 86.6 ms
X. Guan et al. [88] | 2018 | 3 | - | - | -
Y. Liu et al. [73] | 2018 | 3 | - | - | -
X. Dai et al. [89] | 2018 | 1 | - | - | -
J. M. S Lagmay et al. [90] | 2018 | 1 | - | - | -
X. Chen et al. [91] | 2018 | 0 | - | 0.95 | 50 ms

Appendix D. Research Pool—2017 Section

Table A4. All papers in the research pool published in the year 2017, tabulated by F1 score, accuracy and efficiency (processing time in milliseconds) where found.
Paper | Year | Citations | F1 Score | Accuracy | Efficiency
D. Gandhi et al. [29] | 2017 | 165 | - | - | -
D. Falanga et al. [30] | 2017 | 98 | - | 0.8 | 0.24 ms
K. McGuire et al. [31] | 2017 | 88 | - | - | -
A. Zeggada et al. [32] | 2017 | 43 | - | 0.827 | 39 ms
Y. Zhao et al. [33] | 2017 | 31 | - | - | -
L. Von et al. [34] | 2017 | 25 | - | - | -
P. Moriarty et al. [35] | 2017 | 11 | - | 0.985 | -
Y. F. Teng et al. [92] | 2017 | 11 | - | - | -
Y. Zhou et al. [93] | 2017 | 3 | - | - | -
A. Garcia et al. [94] | 2017 | 3 | - | 0.9 | -
Y. Choi et al. [95] | 2017 | 1 | - | 0.989 | -
Y. Zhang et al. [96] | 2017 | 1 | - | 0.83 | -
S. Andropov et al. [97] | 2017 | 0 | - | - | -

Appendix E. Research Pool—2016 Section

Table A5. All papers in the research pool published in the year 2016, tabulated by F1 score, accuracy and efficiency (processing time in milliseconds) where found.
Paper | Year | Citations | F1 Score | Accuracy | Efficiency
A. Giusti et al. [36] | 2016 | 424 | - | - | -
T. Zhang et al. [37] | 2016 | 263 | - | - | -
S. Daftry et al. [38] | 2016 | 26 | - | 0.78 | -
M. E. Antonio-Toledo et al. [39] | 2016 | 3 | - | - | -

References

  1. Giones, F.; Brem, A. From toys to tools: The co-evolution of technological and entrepreneurial developments in the drone industry. Bus. Horiz. 2017, 60, 875–884. [Google Scholar] [CrossRef]
  2. The Drone Market Report 2020–2025; Technical Report; Drone Industry Insight, 2020.
  3. IEEE Website. 2021. Available online: https://www.ieee.org/content/ieee-org/en/about/ (accessed on 4 June 2021).
  4. Aragón, A.M. A measure for the impact of research. Sci. Rep. 2013, 3, 1649. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  5. Lehmann, S.; Jackson, A.D.; Lautrup, B.E. Measures for measures. Nature 2006, 444, 1003–1004. [Google Scholar] [CrossRef] [PubMed]
  6. Society of Automotive Engineers (SAE). J3016B Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles; SAE: Warrendale, PA, USA, 2018. [Google Scholar]
  7. Palossi, D.; Loquercio, A.; Conti, F.; Flamand, E.; Scaramuzza, D.; Benini, L. A 64-mW DNN-Based Visual Navigation Engine for Autonomous Nano-Drones. IEEE Internet Things J. 2019, 6, 8357–8371. [Google Scholar] [CrossRef] [Green Version]
  8. Sasaki, Y. The Truth of the F-Measure. 2007. Available online: https://www.cs.odu.edu/~mukka/cs795sum10dm/Lecturenotes/Day3/F-measure-YS-26Oct07.pdf (accessed on 4 June 2021).
  9. Loquercio, A.; Kaufmann, E.; Ranftl, R.; Dosovitskiy, A.; Koltun, V.; Scaramuzza, D. Deep Drone Racing: From Simulation to Reality with Domain Randomization. IEEE Trans. Robot. 2020, 36, 1–14. [Google Scholar] [CrossRef] [Green Version]
  10. Al-Sharman, M.K.; Zweiri, Y.; Jaradat, M.A.K.; Al-Husari, R.; Gan, D.; Seneviratne, L.D. Deep-learning-based neural network training for state estimation enhancement: Application to attitude estimation. IEEE Trans. Instrum. Meas. 2020, 69, 24–34. [Google Scholar] [CrossRef] [Green Version]
  11. Nezami, S.; Khoramshahi, E.; Nevalainen, O.; Pölönen, I.; Honkavaara, E. Tree species classification of drone hyperspectral and RGB imagery with deep learning convolutional neural networks. Remote Sens. 2020, 12, 1070. [Google Scholar] [CrossRef] [Green Version]
  12. Shiri, H.; Park, J.; Bennis, M. Remote UAV Online Path Planning via Neural Network-Based Opportunistic Control. IEEE Wirel. Commun. Lett. 2020, 9, 861–865. [Google Scholar] [CrossRef] [Green Version]
  13. Lee, K.; Gibson, J.; Theodorou, E.A. Aggressive Perception-Aware Navigation Using Deep Optical Flow Dynamics and PixelMPC. IEEE Robot. Autom. Lett. 2020, 5, 1207–1214. [Google Scholar] [CrossRef] [Green Version]
  14. Anwar, A.; Raychowdhury, A. Autonomous Navigation via Deep Reinforcement Learning for Resource Constraint Edge Nodes Using Transfer Learning. IEEE Access 2020, 8, 26549–26560. [Google Scholar] [CrossRef]
  15. Chew, R.; Rineer, J.; Beach, R.; O’Neil, M.; Ujeneza, N.; Lapidus, D.; Miano, T.; Hegarty-Craver, M.; Polly, J.; Temple, D.S. Deep Neural Networks and Transfer Learning for Food Crop Identification in UAV Images. Drones 2020, 4, 7. [Google Scholar] [CrossRef] [Green Version]
  16. Wofk, D.; Ma, F.; Yang, T.J.; Karaman, S.; Sze, V. FastDepth: Fast Monocular Depth Estimation on Embedded Systems. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 6101–6108. [Google Scholar] [CrossRef] [Green Version]
  17. Kaufmann, E.; Gehrig, M.; Foehn, P.; Ranftl, R.; Dosovitskiy, A.; Koltun, V.; Scaramuzza, D. Beauty and the beast: Optimal methods meet learning for drone racing. In Proceedings of the IEEE International Conference on Robotics and Automation, Montreal, QC, Canada, 20–24 May 2019; Volume 2019, pp. 690–696. [Google Scholar] [CrossRef] [Green Version]
  18. Hossain, S.; Lee, D.-J. Deep Learning-Based Real-Time Multiple-Object Detection and Tracking from Aerial Imagery via a Flying Robot with GPU-Based Embedded Devices. Sensors 2019, 19, 3371. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  19. Munaye, Y.Y.; Lin, H.P.; Adege, A.B.; Tarekegn, G.B. Uav positioning for throughput maximization using deep learning approaches. Sensors 2019, 19, 2775. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  20. Islam, S.; Razi, A. A Path Planning Algorithm for Collective Monitoring Using Autonomous Drones. In Proceedings of the 2019 53rd Annual Conference on Information Sciences and Systems (CISS), Baltimore, MD, USA, 20–22 March 2019; pp. 1–6. [Google Scholar] [CrossRef]
  21. Alshehri, A.; Member, S.; Bazi, Y.; Member, S. Deep Attention Neural Network for Multi-Label Classification in Unmanned Aerial Vehicle Imagery. IEEE Access 2019, 7, 119873–119880. [Google Scholar] [CrossRef]
  22. Loquercio, A.; Maqueda, A.I.; Del-Blanco, C.R.; Scaramuzza, D. DroNet: Learning to Fly by Driving. IEEE Robot. Autom. Lett. 2018, 3, 1088–1095. [Google Scholar] [CrossRef]
  23. Kaufmann, E.; Loquercio, A.; Ranftl, R.; Dosovitskiy, A.; Koltun, V.; Scaramuzza, D. Deep Drone Racing: Learning Agile Flight in Dynamic Environments. In Proceedings of the Conference on Robotic Learning, Zürich, Switzerland, 29–31 October 2018; pp. 1–13. [Google Scholar]
  24. Csillik, O.; Cherbini, J.; Johnson, R.; Lyons, A.; Kelly, M. Identification of Citrus Trees from Unmanned Aerial Vehicle Imagery Using Convolutional Neural Networks. Drones 2018, 2, 39. [Google Scholar] [CrossRef] [Green Version]
  25. Jung, S.; Hwang, S.; Shin, H.; Shim, D.H. Perception, Guidance, and Navigation for Indoor Autonomous Drone Racing Using Deep Learning. IEEE Robot. Autom. Lett. 2018, 3, 2539–2544. [Google Scholar] [CrossRef]
  26. Zhilenkov, A.A.; Epifantsev, I.R. System of autonomous navigation of the drone in difficult conditions of the forest trails. In Proceedings of the 2018 IEEE Conference of Russian Young Researchers in Electrical and Electronic Engineering, ElConRus 2018, Moscow and St. Petersburg, Russia, 29 January–1 February 2018; Volume 2018, pp. 1036–1039. [Google Scholar] [CrossRef]
  27. Lee, S.; Shim, T.; Kim, S.; Park, J.; Hong, K.; Bang, H. Vision-Based Autonomous Landing of a Multi-Copter Unmanned Aerial Vehicle using Reinforcement Learning. In Proceedings of the 2018 International Conference on Unmanned Aircraft Systems, ICUAS 2018, Dallas, TX, USA, 12–15 June 2018; pp. 108–114. [Google Scholar] [CrossRef]
  28. Dionisio-Ortega, S.; Rojas-Perez, L.O.; Martinez-Carranza, J.; Cruz-Vega, I. A deep learning approach towards autonomous flight in forest environments. In Proceedings of the 2018 International Conference on Electronics, Communications and Computers (CONIELECOMP), Cholula, Mexico, 21–23 February 2018; pp. 139–144. [Google Scholar] [CrossRef]
  29. Gandhi, D.; Pinto, L.; Gupta, A. Learning to fly by crashing. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems, Vancouver, BC, Canada, 24–28 September 2017; Volume 2017, pp. 3948–3955. [Google Scholar] [CrossRef]
  30. Falanga, D.; Mueggler, E.; Faessler, M.; Scaramuzza, D. Aggressive quadrotor flight through narrow gaps with onboard sensing and computing using active vision. In Proceedings of the IEEE International Conference on Robotics and Automation, Singapore, 29 May–3 June 2017. [Google Scholar] [CrossRef] [Green Version]
  31. McGuire, K.; de Croon, G.; De Wagter, C.; Tuyls, K.; Kappen, H. Efficient Optical Flow and Stereo Vision for Velocity Estimation and Obstacle Avoidance on an Autonomous Pocket Drone. IEEE Robot. Autom. Lett. 2017, 2, 1070–1076. [Google Scholar] [CrossRef] [Green Version]
  32. Zeggada, A.; Melgani, F.; Bazi, Y. A Deep Learning Approach to UAV Image Multilabeling. IEEE Geosci. Remote Sens. Lett. 2017, 14, 694–698. [Google Scholar] [CrossRef]
  33. Zhao, Y.; Zheng, Z.; Zhang, X.; Liu, Y. Q learning algorithm based UAV path learning and obstacle avoidence approach. In Proceedings of the Chinese Control Conference, CCC, Dalian, China, 26–28 July 2017; pp. 3397–3402. [Google Scholar] [CrossRef]
  34. Von Stumberg, L.; Usenko, V.; Engel, J.; Stuckler, J.; Cremers, D. From monocular SLAM to autonomous drone exploration. In Proceedings of the 2017 European Conference on Mobile Robots, ECMR 2017, Paris, France, 6–8 September 2017. [Google Scholar] [CrossRef] [Green Version]
  35. Moriarty, P.; Sheehy, R.; Doody, P. Neural networks to aid the autonomous landing of a UAV on a ship. In Proceedings of the 2017 28th Irish Signals and Systems Conference, ISSC 2017, Killarney, Ireland, 20–21 June 2017; pp. 6–9. [Google Scholar] [CrossRef]
  36. Giusti, A.; Guzzi, J.; Ciresan, D.C.; He, F.L.; Rodriguez, J.P.; Fontana, F.; Faessler, M.; Forster, C.; Schmidhuber, J.; Caro, G.D.; et al. A Machine Learning Approach to Visual Perception of Forest Trails for Mobile Robots. IEEE Robot. Autom. Lett. 2016, 1, 661–667. [Google Scholar] [CrossRef] [Green Version]
  37. Zhang, T.; Kahn, G.; Levine, S.; Abbeel, P. Learning deep control policies for autonomous aerial vehicles with MPC-guided policy search. In Proceedings of the IEEE International Conference on Robotics and Automation, Stockholm, Sweden, 16–21 May 2016; Volume 2016. [Google Scholar] [CrossRef] [Green Version]
  38. Daftry, S.; Zeng, S.; Khan, A.; Dey, D.; Melik-Barkhudarov, N.; Bagnell, J.A.; Hebert, M. Robust Monocular Flight in Cluttered Outdoor Environments. arXiv 2016, arXiv:1604.04779. [Google Scholar]
  39. Antonio-Toledo, M.E.; Sanchez, E.N.; Alanis, A.Y. Robust neural decentralized control for a quadrotor UAV. In Proceedings of the International Joint Conference on Neural Networks, Vancouver, BC, Canada, 24–29 July 2016; Volume 2016, pp. 714–719. [Google Scholar] [CrossRef]
  40. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. In Proceedings of the 3rd International Conference on Learning Representations, ICLR 2015—Conference Track Proceedings, San Diego, CA, USA, 7–9 May 2015; pp. 1–14. [Google Scholar]
  41. Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Li, F.-F. ImageNet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255. [Google Scholar] [CrossRef] [Green Version]
  42. Redmon, J.; Farhadi, A. YOLOv3: An Incremental Improvement. arXiv 2018, arXiv:1804.02767. [Google Scholar]
  43. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar] [CrossRef] [Green Version]
  44. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90. [Google Scholar] [CrossRef]
  45. Rodriguez-Ramos, A.; Sampedro, C.; Bavle, H.; Moreno, I.G.; Campoy, P. A Deep Reinforcement Learning Technique for Vision-Based Autonomous Multirotor Landing on a Moving Platform. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 1010–1017. [Google Scholar] [CrossRef]
  46. Carrio, A.; Vemprala, S.; Ripoll, A.; Saripalli, S.; Campoy, P. Drone Detection Using Depth Maps. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 1034–1037. [Google Scholar] [CrossRef] [Green Version]
  47. Irfan, M.; Dalai, S.; Kishore, K.; Singh, S.; Akbar, S.A. Vision-based Guidance and Navigation for Autonomous MAV in Indoor Environment. In Proceedings of the 2020 11th International Conference on Computing, Communication and Networking Technologies, ICCCNT 2020, Kharagpur, India, 1–3 July 2020. [Google Scholar] [CrossRef]
  48. Roldan, I.; Del-Blanco, C.R.; De Quevedo, D.; Urzaiz, F.I.; Menoyo, J.G.; López, A.A.; Berjón, D.; Jaureguizar, F.; García, N. DopplerNet: A convolutional neural network for recognising targets in real scenarios using a persistent range-Doppler radar. IET Radar Sonar Navig. 2020, 14, 593–600. [Google Scholar] [CrossRef]
  49. Liao, Y.; Mohammadi, M.E.; Wood, R.L. Deep Learning Classification of 2D Orthomosaic Images and 3D Point Clouds for Post-Event Structural Damage Assessment. Drones 2020, 4, 24. [Google Scholar] [CrossRef]
  50. Wang, Y.; Wang, H.; Wen, J.; Lun, Y.; Wu, J. Obstacle Avoidance of UAV Based on Neural Networks and Interfered Fluid Dynamical System. In Proceedings of the 2020 3rd International Conference on Unmanned Systems (ICUS), Harbin, China, 27–28 November 2020; pp. 1066–1071. [Google Scholar] [CrossRef]
  51. Bozcan, I.; Kayacan, E. UAV-AdNet: Unsupervised Anomaly Detection using Deep Neural Networks for Aerial Surveillance. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems (IROS), Harbin, China, 27–28 November 2020; pp. 1158–1164. [Google Scholar] [CrossRef]
  52. Messina, L.; Mazzaro, S.; Fiorilla, A.E.; Massa, A.; Matta, W. Industrial Implementation and Performance Evaluation of LSD-SLAM and Map Filtering Algorithms for Obstacles Avoidance in a Cooperative Fleet of Unmanned Aerial Vehicles. In Proceedings of the IRCE 2020—2020 3rd International Conference on Intelligent Robotics and Control Engineering, Oxford, UK, 10–12 August 2020; pp. 117–122. [Google Scholar] [CrossRef]
  53. Li, B.; Wu, J.; Tan, X.; Wang, B. ArUco Marker Detection under Occlusion Using Convolutional Neural Network. In Proceedings of the 2020 5th International Conference on Automation, Control and Robotics Engineering (CACRE), Dalian, China, 19–20 September 2020; Volume 8, pp. 706–711. [Google Scholar] [CrossRef]
  54. Tan, J.; Zhao, H. UAV Localization with Multipath Fingerprints and Machine Learning in Urban NLOS Scenario. In Proceedings of the 2020 IEEE 6th International Conference on Computer and Communications (ICCC), Chengdu, China, 11–14 December 2020; pp. 1494–1499. [Google Scholar] [CrossRef]
  55. Gao, M.; Wei, P.; Liu, Y. Competitive Self-Organizing Neural Network Based UAV Path Planning. In Proceedings of the 2020 IEEE 6th International Conference on Computer and Communications (ICCC), Chengdu, China, 11–14 December 2020; pp. 2376–2381. [Google Scholar] [CrossRef]
  56. Yang, R.; Wang, X. UAV Landmark Detection Based on Convolutional Neural Network. In Proceedings of the 2020 IEEE Eurasia Conference on IOT, Communication and Engineering (ECICE), Yunlin, Taiwan, 23–25 October 2020. [Google Scholar]
  57. Menfoukh, K.; Touba, M.M.; Khenfri, F.; Guettal, L. Optimized Convolutional Neural Network architecture for UAV navigation within unstructured trail. In Proceedings of the CCSSP 2020—1st International Conference on Communications, Control Systems and Signal Processing, El Oued, Algeria, 16–17 May 2020; pp. 211–214. [Google Scholar] [CrossRef]
  58. Sadhu, V.; Sun, C.; Karimian, A.; Tron, R.; Dario, P. Aerial-DeepSearch: Distributed Multi-Agent Deep Reinforcement Learning for Search Missions. In Proceedings of the IEEE International Conference on Mobile Ad Hoc and Sensor Systems (MASS), Delhi, India, 10–13 December 2020; pp. 1–9. [Google Scholar]
  59. Raman, R.; Jeppu, Y. Formal validation of emergent behavior in a machine learning based collision avoidance system. In Proceedings of the SYSCON 2020—14th Annual IEEE International Systems Conference, Montreal, QC, Canada, 24 August–20 September 2020. [Google Scholar] [CrossRef]
  60. Hosseiny, B.; Rastiveis, H.; Homayouni, S. An Automated Framework for Plant Detection Based on Deep Simulated Learning from Drone Imagery. Remote Sens. 2020, 12, 3521. [Google Scholar] [CrossRef]
  61. Marasigan, R.I.; Austria, Y.D.; Enriquez, J.B.; Lolong Lacatan, L.; Dellosa, R.M. Unmanned Aerial Vehicle Indoor Navigation using Wi-Fi Trilateration. In Proceedings of the 2020 11th IEEE Control and System Graduate Research Colloquium, ICSGRC 2020, Shah Alam, Malaysia, 8 August 2020; pp. 346–351. [Google Scholar] [CrossRef]
  62. Bakale, V.A.; Kumar, Y.; Roodagi, V.C.; Kulkarni, Y.N.; Patil, M.S.; Chickerur, S. Indoor Navigation with Deep Reinforcement Learning. In Proceedings of the 5th International Conference on Inventive Computation Technologies, ICICT 2020, Coimbatore, India, 26–28 February 2020; pp. 660–665. [Google Scholar] [CrossRef]
  63. Rojas-Perez, L.O.; Martinez-Carranza, J. DeepPilot: A CNN for Autonomous Drone Racing. Sensors 2020, 20, 4524. [Google Scholar] [CrossRef] [PubMed]
  64. Akhloufi, M.A.; Arola, S.; Bonnet, A. Drones Chasing Drones: Reinforcement Learning and Deep Search Area Proposal. Drones 2019, 3, 58. [Google Scholar] [CrossRef] [Green Version]
  65. Perera, A.G.; Law, Y.W.; Chahl, J. Drone-Action: An Outdoor Recorded Drone Video Dataset for Action Recognition. Drones 2019, 3, 82. [Google Scholar] [CrossRef] [Green Version]
  66. Han, X.; Wang, J.; Xue, J.; Zhang, Q. Intelligent Decision-Making for 3-Dimensional Dynamic Obstacle Avoidance of UAV Based on Deep Reinforcement Learning. In Proceedings of the 2019 11th International Conference on Wireless Communications and Signal Processing, WCSP 2019, Xi’an, China, 23–25 October 2019. [Google Scholar] [CrossRef]
  67. Hartawan, D.R.; Purboyo, T.W.; Setianingsih, C. Disaster victims detection system using convolutional neural network (CNN) method. In Proceedings of the 2019 IEEE International Conference on Industry 4.0, Artificial Intelligence, and Communications Technology, IAICT 2019, Bali, Indonesia, 1–3 July 2019; pp. 105–111. [Google Scholar] [CrossRef]
  68. Muñoz, G.; Barrado, C.; Çetin, E.; Salami, E. Deep Reinforcement Learning for Drone Delivery. Drones 2019, 3, 72. [Google Scholar] [CrossRef] [Green Version]
  69. Mohammadi, M.E.; Watson, D.P.; Wood, R.L. Deep Learning-Based Damage Detection from Aerial SfM Point Clouds. Drones 2019, 3, 68. [Google Scholar] [CrossRef] [Green Version]
  70. Garcia, A.; Mittal, S.S.; Kiewra, E.; Ghose, K. A convolutional neural network vision system approach to indoor autonomous quadrotor navigation. In Proceedings of the 2019 International Conference on Unmanned Aircraft Systems, ICUAS 2019, Bali, Indonesia, 1–3 July 2019; pp. 1344–1352. [Google Scholar] [CrossRef]
  71. Shin, J.; Kwak, K.; Kim, S.; Kim, H.J. Adaptive Range Estimation in Perspective Vision System Using Neural Networks. IEEE/ASME Trans. Mechatronics 2018, 23, 972–977. [Google Scholar] [CrossRef]
  72. Garcia, A.; Mittal, S.S.; Kiewra, E.; Ghose, K. A Convolutional Neural Network Feature Detection Approach to Autonomous Quadrotor Indoor Navigation. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems, Macau, China, 3–8 November 2019; pp. 74–81. [Google Scholar] [CrossRef]
  73. Liu, Y.; Zhou, Y.; Li, X. Attitude Estimation of Unmanned Aerial Vehicle Based on LSTM Neural Network. In Proceedings of the International Joint Conference on Neural Networks, Rio de Janeiro, Brazil, 8–13 July 2018; Volume 2018. [Google Scholar] [CrossRef]
  74. Cocoma-Ortega, J.A.; Rojas-Perez, L.O.; Cabrera-Ponce, A.A.; Martinez-Carranza, J. Overcoming the Blind Spot in CNN-based Gate Detection for Autonomous Drone Racing. In Proceedings of the 2019 International Workshop on Research, Education and Development on Unmanned Aerial Systems, RED-UAS 2019, Cranfield, UK, 25–27 November 2019; pp. 253–259. [Google Scholar] [CrossRef]
  75. Matthews, M.T.; Yi, S. Model Reference Adaptive Control and Neural Network Based Control of Altitude of Unmanned Aerial Vehicles. In Proceedings of the IEEE Southeastcon, Huntsville, AL, USA, 11–14 April 2019; Volume 2019. [Google Scholar] [CrossRef]
  76. Morais, J.; Sanguino, J.; Sebastiao, P. Safe return path mapping for drone applications. In Proceedings of the 2019 IEEE International Workshop on Metrology for AeroSpace, MetroAeroSpace 2019, Turin, Italy, 19–21 June 2019; pp. 249–254. [Google Scholar] [CrossRef]
  77. Garrell, A.; Coll, C.; Alquezar, R.; Sanfeliu, A. Teaching a Drone to Accompany a Person from Demonstrations using Non-Linear ASFM. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems, Macau, China, 3–8 November 2019; pp. 1985–1991. [Google Scholar] [CrossRef] [Green Version]
  78. Cetin, E.; Barrado, C.; Munoz, G.; MacIas, M.; Pastor, E. Drone Navigation and Avoidance of Obstacles Through Deep Reinforcement Learning. In Proceedings of the AIAA/IEEE Digital Avionics Systems Conference, San Diego, CA, USA, 8–12 September 2019; Volume 2019. [Google Scholar] [CrossRef]
  79. Feng, Y.; Zhang, C.; Baek, S.; Rawashdeh, S.; Mohammadi, A. Autonomous Landing of a UAV on a Moving Platform Using Model Predictive Control. Drones 2018, 2, 34. [Google Scholar] [CrossRef] [Green Version]
  80. Mohajerin, N.; Mozifian, M.; Waslander, S. Deep Learning a Quadrotor Dynamic Model for Multi-Step Prediction. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, 21–25 May 2018; pp. 2454–2459. [Google Scholar] [CrossRef]
  81. Jafari, M.; Xu, H. Intelligent Control for Unmanned Aerial Systems with System Uncertainties and Disturbances Using Artificial Neural Network. Drones 2018, 2, 30. [Google Scholar] [CrossRef] [Green Version]
  82. Khan, A.; Hebert, M. Learning safe recovery trajectories with deep neural networks for unmanned aerial vehicles. In Proceedings of the IEEE Aerospace Conference Proceedings, Big Sky, MT, USA, 3–10 March 2018; Volume 2018, pp. 1–9. [Google Scholar] [CrossRef]
  83. Xu, Y.; Liu, Z.; Wang, X. Monocular vision based autonomous landing of quadrotor through deep reinforcement learning. In Proceedings of the Chinese Control Conference, CCC, Wuhan, China, 25–27 July 2018; Volume 2018, pp. 10014–10019. [Google Scholar] [CrossRef]
  84. Sulistijono, I.A.; Imansyah, T.; Muhajir, M.; Sutoyo, E.; Anwar, M.K.; Satriyanto, E.; Basuki, A.; Risnumawan, A. Implementation of Victims Detection Framework on Post Disaster Scenario. In Proceedings of the 2018 International Electronics Symposium on Engineering Technology and Applications, IES-ETA 2018, Bali, Indonesia, 29–30 October 2018; pp. 253–259. [Google Scholar] [CrossRef]
  85. Yong, S.P.; Yeong, Y.C. Human Object Detection in Forest with Deep Learning based on Drone’s Vision. In Proceedings of the 2018 4th International Conference on Computer and Information Sciences: Revolutionising Digital Landscape for Sustainable Smart Society, ICCOINS 2018, Kuala Lumpur, Malaysia, 13–14 August 2018; pp. 1–5. [Google Scholar] [CrossRef]
  86. Beleznai, C.; Steininger, D.; Croonen, G.; Broneder, E. Multi-modal human detection from aerial views by fast shape-aware clustering and classification. In Proceedings of the 2018 10th IAPR Workshop on Pattern Recognition in Remote Sensing, PRRS 2018, Beijing, China, 19–20 August 2018. [Google Scholar] [CrossRef]
  87. Dike, H.U.; Wu, Q.; Zhou, Y.; Liang, G. Unmanned Aerial Vehicle (UAV) Based Running Person Detection from a Real-Time Moving Camera. In Proceedings of the 2018 IEEE International Conference on Robotics and Biomimetics, ROBIO 2018, Kuala Lumpur, Malaysia, 12–15 December 2018; pp. 2273–2278. [Google Scholar] [CrossRef]
  88. Guan, X.; Cai, C. A new integrated navigation system for the indoor unmanned aerial vehicles (UAVs) based on the neural network predictive compensation. In Proceedings of the 2018 33rd Youth Academic Annual Conference of Chinese Association of Automation, YAC 2018, Nanjing, China, 18–20 May 2018; pp. 575–580. [Google Scholar] [CrossRef]
  89. Dai, X.; Zhou, Y.; Meng, S.; Wu, Q. Unsupervised Feature Fusion Combined with Neural Network Applied to UAV Attitude Estimation. In Proceedings of the 2018 IEEE International Conference on Robotics and Biomimetics, ROBIO 2018, Kuala Lumpur, Malaysia, 12–15 December 2018; pp. 874–879. [Google Scholar] [CrossRef]
  90. Lagmay, J.M.S.; Jed, C.; Leyba, L.; Santiago, A.T.; Tumabotabo, L.B.; Limjoco, W.J.R.; Michael, C.; Tiglao, N. Automated Indoor Drone Flight with Collision Prevention. In Proceedings of the TENCON 2018—2018 IEEE Region 10 Conference, Jeju, Korea, 28–31 October 2018; pp. 1762–1767. [Google Scholar] [CrossRef]
  91. Chen, X.; Lin, F.; Abdul Hamid, M.R.; Teo, S.H.; Phang, S.K. Real-Time Landing Spot Detection and Pose Estimation on Thermal Images Using Convolutional Neural Networks. In Proceedings of the IEEE International Conference on Control and Automation, ICCA, Anchorage, AK, USA, 12–15 June 2018; Volume 2018, pp. 998–1003. [Google Scholar] [CrossRef]
  92. Teng, Y.F.; Hu, B.; Liu, Z.W.; Huang, J.; Guan, Z.H. Adaptive neural network control for quadrotor unmanned aerial vehicles. In Proceedings of the 2017 Asian Control Conference, ASCC 2017, Gold Coast, Australia, 17–20 December 2017; Volume 2018, pp. 988–992. [Google Scholar] [CrossRef]
  93. Zhou, Y.; Wan, J.; Li, Z.; Song, Z. GPS/INS integrated navigation with BP neural network and Kalman filter. In Proceedings of the 2017 IEEE International Conference on Robotics and Biomimetics (ROBIO), Macau, Macao, 5–8 December 2017; pp. 2515–2520. [Google Scholar] [CrossRef]
  94. Garcia, A.; Ghose, K. Autonomous indoor navigation of a stock quadcopter with off-board control. In Proceedings of the 2017 Workshop on Research, Education and Development of Unmanned Aerial Systems, RED-UAS 2017, Linkoping, Sweden, 3–5 October 2017; pp. 132–137. [Google Scholar] [CrossRef]
  95. Choi, Y.; Hwang, I.; Oh, S. Wearable gesture control of agile micro quadrotors. In Proceedings of the IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems, Linkoping, Sweden, 3–5 October 2017; Volume 2017, pp. 266–271. [Google Scholar] [CrossRef]
  96. Zhang, Y.; Xiao, X.; Yang, X. Real-Time object detection for 360-degree panoramic image using CNN. In Proceedings of the 2017 International Conference on Virtual Reality and Visualization, ICVRV 2017, Zhengzhou, China, 21–22 October 2017; pp. 18–23. [Google Scholar] [CrossRef]
  97. Andropov, S.; Guirik, A.; Budko, M.; Budko, M. Synthesis of neurocontroller for multirotor unmanned aerial vehicle based on neuroemulator. In Proceedings of the Conference of Open Innovation Association, FRUCT, St. Petersburg, Russia, 3–7 April 2017; Volume 2017, pp. 20–25. [Google Scholar] [CrossRef]
Figure 1. A typical quad rotor helicopter drone, constructed for autonomous flight research.
Figure 2. Level of autonomous drone navigation mapped by functional features.
Figure 3. A categorisation of Level 4 autonomous navigation features by group/function.
Figure 4. The average occurrence frequency of a given feature across the research pool as a percentage of the total entries in the research pool.
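The statistic plotted in Figure 4 can be recomputed directly from the feature tables that follow: for each feature column, count the entries marked Yes and express that count as a percentage of the research pool. The sketch below is a minimal, hypothetical Python illustration of that arithmetic; the feature names and flag lists are truncated placeholders, not the paper's full dataset.

```python
# Minimal sketch (illustrative only): the Figure 4 statistic -- each feature's
# occurrence frequency as a percentage of the research pool. In practice the
# Yes/No flags would be taken from the columns of Tables 1-4; the values below
# are truncated placeholders.

feature_flags = {
    # feature abbreviation -> one Yes/No flag per research-pool entry
    "ODi": ["Yes", "No", "Yes", "No"],
    "CA":  ["Yes", "Yes", "No", "No"],
    "OB":  ["Yes", "No", "No", "No"],
}

for feature, flags in feature_flags.items():
    # Occurrence frequency = share of pool entries implementing the feature.
    occurrence_pct = 100.0 * sum(flag == "Yes" for flag in flags) / len(flags)
    print(f"{feature}: {occurrence_pct:.1f}% of the research pool")
```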
Table 1. The most cited research pool entries as of 18 March 2021 in the context of Awareness features.
Paper | Year | Citations | SE | ODe | ODi
A. Loquercio et al. [9] | 2020 | 34 | No | No | Yes
M. K. Al-Sharman et al. [10] | 2020 | 11 | No | No | No
S. Nezami et al. [11] | 2020 | 8 | No | No | Yes
H. Shiri et al. [12] | 2020 | 6 | No | No | No
K. Lee et al. [13] | 2020 | 6 | No | No | No
A. Anwar et al. [14] | 2020 | 5 | No | No | No
R. Chew et al. [15] | 2020 | 4 | No | No | Yes
D. Wofk et al. [16] | 2019 | 55 | Yes | No | No
E. Kaufmann et al. [17] | 2019 | 50 | No | No | Yes
D. Palossi et al. [7] | 2019 | 43 | Yes | Yes | No
Hossain et al. [18] | 2019 | 19 | No | No | Yes
Y. Y. Munaye et al. [19] | 2019 | 11 | No | No | Yes
S. Islam et al. [20] | 2019 | 9 | No | No | No
A. Alshehri et al. [21] | 2019 | 8 | No | No | Yes
A. Loquercio et al. [22] | 2018 | 158 | Yes | Yes | No
E. Kaufmann et al. [23] | 2018 | 60 | No | No | Yes
O. Csillik et al. [24] | 2018 | 58 | No | No | Yes
S. Jung et al. [25] | 2018 | 57 | No | No | Yes
A. A. Zhilenkov et al. [26] | 2018 | 23 | Yes | No | No
S. Lee et al. [27] | 2018 | 14 | No | No | Yes
S. Dionisio-Ortega et al. [28] | 2018 | 14 | No | Yes | No
D. Gandhi et al. [29] | 2017 | 165 | No | Yes | No
D. Falanga et al. [30] | 2017 | 98 | No | No | No
K. McGuire et al. [31] | 2017 | 88 | Yes | No | No
A. Zeggada et al. [32] | 2017 | 43 | No | No | Yes
Y. Zhao et al. [33] | 2017 | 31 | No | No | No
L. Von Stumberg et al. [34] | 2017 | 25 | Yes | Yes | No
P. Moriarty et al. [35] | 2017 | 11 | No | No | Yes
A. Giusti et al. [36] | 2016 | 424 | No | No | Yes
T. Zhang et al. [37] | 2016 | 263 | No | No | No
S. Daftry et al. [38] | 2016 | 26 | Yes | No | No
M. E. Antonio-Toledo et al. [39] | 2016 | 3 | No | No | No
Table 2. The most cited entries in the research pool as of 18 March 2021 in the context of Basic Navigation features.
Paper | Year | Citations | AM | CA | ATL
A. Loquercio et al. [9] | 2020 | 34 | Yes | Yes | No
M. K. Al-Sharman et al. [10] | 2020 | 11 | No | Yes | No
S. Nezami et al. [11] | 2020 | 8 | No | No | No
H. Shiri et al. [12] | 2020 | 6 | No | No | No
K. Lee et al. [13] | 2020 | 6 | Yes | Yes | No
A. Anwar et al. [14] | 2020 | 5 | Yes | Yes | No
R. Chew et al. [15] | 2020 | 4 | No | No | No
D. Wofk et al. [16] | 2019 | 55 | No | No | No
E. Kaufmann et al. [17] | 2019 | 50 | Yes | Yes | No
D. Palossi et al. [7] | 2019 | 43 | Yes | Yes | No
Hossain et al. [18] | 2019 | 19 | No | No | No
Y. Y. Munaye et al. [19] | 2019 | 11 | No | No | No
S. Islam et al. [20] | 2019 | 9 | No | Yes | No
A. Alshehri et al. [21] | 2019 | 8 | No | No | No
A. Loquercio et al. [22] | 2018 | 158 | Yes | Yes | No
E. Kaufmann et al. [23] | 2018 | 60 | Yes | Yes | No
O. Csillik et al. [24] | 2018 | 58 | No | No | No
S. Jung et al. [25] | 2018 | 57 | Yes | No | No
A. A. Zhilenkov et al. [26] | 2018 | 23 | Yes | Yes | No
S. Lee et al. [27] | 2018 | 14 | No | No | Yes
S. Dionisio-Ortega et al. [28] | 2018 | 14 | Yes | Yes | No
D. Gandhi et al. [29] | 2017 | 165 | Yes | Yes | No
D. Falanga et al. [30] | 2017 | 98 | Yes | Yes | No
K. McGuire et al. [31] | 2017 | 88 | Yes | Yes | No
A. Zeggada et al. [32] | 2017 | 43 | No | No | No
Y. Zhao et al. [33] | 2017 | 31 | No | No | No
L. Von Stumberg et al. [34] | 2017 | 25 | No | No | No
P. Moriarty et al. [35] | 2017 | 11 | No | No | Yes
A. Giusti et al. [36] | 2016 | 424 | Yes | Yes | No
T. Zhang et al. [37] | 2016 | 263 | Yes | Yes | No
S. Daftry et al. [38] | 2016 | 26 | Yes | Yes | No
M. E. Antonio-Toledo et al. [39] | 2016 | 3 | No | No | No
Table 3. The most cited entries in the research pool as of 18 March 2021 in the context of Expanded Navigation features.
Paper | Year | Citations | PG | ED | NPM
A. Loquercio et al. [9] | 2020 | 34 | No | No | Yes
M. K. Al-Sharman et al. [10] | 2020 | 11 | No | No | No
S. Nezami et al. [11] | 2020 | 8 | No | Yes | No
H. Shiri et al. [12] | 2020 | 6 | Yes | No | No
K. Lee et al. [13] | 2020 | 6 | Yes | No | Yes
A. Anwar et al. [14] | 2020 | 5 | No | No | No
R. Chew et al. [15] | 2020 | 4 | No | Yes | No
D. Wofk et al. [16] | 2019 | 55 | No | No | No
E. Kaufmann et al. [17] | 2019 | 50 | Yes | No | Yes
D. Palossi et al. [7] | 2019 | 43 | No | No | No
Hossain et al. [18] | 2019 | 19 | No | No | No
Y. Y. Munaye et al. [19] | 2019 | 11 | No | No | No
S. Islam et al. [20] | 2019 | 9 | Yes | No | No
A. Alshehri et al. [21] | 2019 | 8 | No | No | No
A. Loquercio et al. [22] | 2018 | 158 | No | No | No
E. Kaufmann et al. [23] | 2018 | 60 | No | No | No
O. Csillik et al. [24] | 2018 | 58 | No | Yes | No
S. Jung et al. [25] | 2018 | 57 | No | No | Yes
A. A. Zhilenkov et al. [26] | 2018 | 23 | No | Yes | No
S. Lee et al. [27] | 2018 | 14 | No | No | Yes
S. Dionisio-Ortega et al. [28] | 2018 | 14 | No | Yes | No
D. Gandhi et al. [29] | 2017 | 165 | No | No | No
D. Falanga et al. [30] | 2017 | 98 | Yes | No | Yes
K. McGuire et al. [31] | 2017 | 88 | No | No | No
A. Zeggada et al. [32] | 2017 | 43 | No | No | No
Y. Zhao et al. [33] | 2017 | 31 | Yes | No | No
L. Von Stumberg et al. [34] | 2017 | 25 | No | No | No
P. Moriarty et al. [35] | 2017 | 11 | No | Yes | Yes
A. Giusti et al. [36] | 2016 | 424 | No | No | No
T. Zhang et al. [37] | 2016 | 263 | No | No | No
S. Daftry et al. [38] | 2016 | 26 | No | No | No
M. E. Antonio-Toledo et al. [39] | 2016 | 3 | Yes | No | Yes
Table 4. The most cited entries in the research pool as of 18 March 2021 in the context of Engineering features.
Paper | Year | Citations | OB | OES | SI
A. Loquercio et al. [9] | 2020 | 34 | Yes | No | Yes
M. K. Al-Sharman et al. [10] | 2020 | 11 | No | No | No
S. Nezami et al. [11] | 2020 | 8 | No | Yes | No
H. Shiri et al. [12] | 2020 | 6 | No | Yes | No
K. Lee et al. [13] | 2020 | 6 | No | No | No
A. Anwar et al. [14] | 2020 | 5 | No | No | No
R. Chew et al. [15] | 2020 | 4 | No | No | No
D. Wofk et al. [16] | 2019 | 55 | Yes | No | Yes
E. Kaufmann et al. [17] | 2019 | 50 | Yes | No | Yes
D. Palossi et al. [7] | 2019 | 43 | Yes | No | Yes
Hossain et al. [18] | 2019 | 19 | Yes | No | Yes
Y. Y. Munaye et al. [19] | 2019 | 11 | No | No | No
S. Islam et al. [20] | 2019 | 9 | No | Yes | No
A. Alshehri et al. [21] | 2019 | 8 | No | No | No
A. Loquercio et al. [22] | 2018 | 158 | No | No | No
E. Kaufmann et al. [23] | 2018 | 60 | Yes | No | Yes
O. Csillik et al. [24] | 2018 | 58 | No | No | No
S. Jung et al. [25] | 2018 | 57 | Yes | No | Yes
A. A. Zhilenkov et al. [26] | 2018 | 23 | Yes | No | Yes
S. Lee et al. [27] | 2018 | 14 | Yes | No | Yes
S. Dionisio-Ortega et al. [28] | 2018 | 14 | No | No | No
D. Gandhi et al. [29] | 2017 | 165 | No | No | No
D. Falanga et al. [30] | 2017 | 98 | Yes | Yes | Yes
K. McGuire et al. [31] | 2017 | 88 | Yes | Yes | Yes
A. Zeggada et al. [32] | 2017 | 43 | No | No | No
Y. Zhao et al. [33] | 2017 | 31 | No | Yes | No
L. Von Stumberg et al. [34] | 2017 | 25 | No | Yes | No
P. Moriarty et al. [35] | 2017 | 11 | No | No | No
A. Giusti et al. [36] | 2016 | 424 | No | No | No
T. Zhang et al. [37] | 2016 | 263 | Yes | No | No
S. Daftry et al. [38] | 2016 | 26 | No | Yes | No
M. E. Antonio-Toledo et al. [39] | 2016 | 3 | No | No | No
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
