
Abstract

The widespread introduction of semi-autonomous drones into the field will revolutionize how information is captured and relayed to personnel. By creating a system that reduces the user input needed to operate these drones effectively, we can increase their effectiveness while reducing the attention their operation demands. This will allow the operator to perform other mission-related tasks while the drone continues autonomously, increasing the operator's value to the team.

Problem Statement

Currently, drones require a large amount of training and attention from the operator to use effectively. The proposed system will reduce the training and attention required by providing automated flight modes that execute complex missions with minimal user input. This system will allow for the widespread deployment of drones across the field, with minimal operator skill and training necessary for operation and a low cost per system.

Proposal

The proposed system will take advantage of the DJI SDK currently available on the Mavic line of consumer drones. This platform comes at a relatively low cost (~$2000) and packs down into a small carrying case that can be quickly set up and launched when needed. Once launched, the handheld controller will prompt the operator for the desired flight mode, either a pre-planned automated flight mode or manual control. If an automated flight mode is selected, the drone will load the first step of the flight plan onto its internal flight controller and begin to execute the mission. The currently proposed system contains the three flight plans listed below; additional flight plans could be added and uploaded to the controller before any mission.

  1. Motion detect and follow - The drone would move to a set station position and wait for motion in the target area. If motion is detected, the drone would begin following and alert the operator to the motion. A live video feed would be available for the operator to view, and the drone would follow the target until the track is lost or an end condition is met.
  2. Patrol - The drone would follow a set patrol path and relay live video back to the operator; alerts could be sent in case of detected objects or motion. Manual transfer to motion tracking or another automated flight mode would be possible with a simple user input.
  3. Follow and alert mode - The drone can be set to follow the operator as they continue on their mission, alerting the operator to motion or incoming objects behind them or in areas outside their line of sight. This would give the operator an advantage through advance warning of incoming threats.
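The controller-side logic tying these modes together can be sketched as a simple dispatcher. This is a minimal illustration only; `MissionController`, `select_mode`, and `on_motion_detected` are hypothetical names, not DJI SDK calls:

```python
from enum import Enum, auto

class FlightMode(Enum):
    MANUAL = auto()
    MOTION_DETECT_FOLLOW = auto()
    PATROL = auto()
    FOLLOW_AND_ALERT = auto()

class MissionController:
    """Dispatches operator selections and detection events to the active mode."""

    def __init__(self):
        self.mode = FlightMode.MANUAL
        self.alerts = []

    def select_mode(self, mode: FlightMode) -> str:
        # In the real application this is where the flight plan step would
        # be uploaded to the drone's flight controller.
        self.mode = mode
        return f"entered {mode.name}"

    def on_motion_detected(self) -> str:
        # In motion-detect mode, a detection triggers follow plus an alert.
        if self.mode is FlightMode.MOTION_DETECT_FOLLOW:
            self.alerts.append("motion detected; following target")
            return self.alerts[-1]
        return "ignored"
```

The same dispatcher gives a natural place to implement the "simple user input" hand-off between patrol and motion tracking described above.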

Challenges and Unknowns

Motion detect and follow mode and patrol mode have had basic testing through the DJI app. While this provides a proof of concept for these flight modes, an application will need to be written to switch between them, send alerts, and automatically begin motion tracking. Follow and alert mode has not been tested, as motion detection is not possible natively in the DJI SDK. Development of the mobile application using the SDK will require more team members to accomplish; I am looking for team members with an interest in developing the mobile application.

Though DJI has provided an extensive SDK for controlling these autonomous mission modes, it is currently unknown how quickly the drone can switch between them. Another challenge will be reverting to a previous flight mode after the loss of motion tracking, as the SDK does not keep a history of previous flight modes that could be consulted when reverting.
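Since the SDK keeps no mode history, the application could maintain its own. A minimal sketch (the mode names are illustrative strings, not SDK identifiers):

```python
class ModeHistory:
    """Tracks flight modes ourselves, since the DJI SDK does not."""

    def __init__(self, initial="manual"):
        self._stack = [initial]

    def push(self, mode):
        """Record a new active mode (e.g. on entering motion tracking)."""
        self._stack.append(mode)

    def current(self):
        return self._stack[-1]

    def revert(self):
        """Drop the current mode and return the previous one (never empties)."""
        if len(self._stack) > 1:
            self._stack.pop()
        return self._stack[-1]
```

On losing a track, the application would call `revert()` and re-issue the returned mode to the drone.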

This system relies upon GPS and communications remaining intact for mission mode switching and any controller-based computer vision features, and this communication might not be reliable in all environments. This is a major weakness in this proposal, as using the SDK on the controller requires a consistent link to the drone for any change in flight mode or alert to the operator.

 

Comments

dBlocher | 9 February 2021

I concur that one should not underestimate the challenge of the software development here. Are there particular milestones or tests that would be useful in proving out an incremental capability that drives towards the desired capability? What are the basic elements of the tasks you wish the drone to do, and what is currently available in the SDK? Are there other drones that have the capability to do these combined tasks? In terms of the communication requirements, I wouldn't worry too much about this. A fielded system could develop encrypted / secure / anti-jam comms that wouldn't be natively available in a platform such as DJI. The real benefit of your work might be showing what is possible with drones and solving some technical problems in integrating motion detection with a following drone, etc.

scwolpert | 9 February 2021

Great idea! Using an existing+stable UAV platform to get a proof-of-concept going quickly is a great direction. The problem statement allows for simulation, but actually interfacing and potentially flying a drone to prove out a capability is so much more compelling!

The DJI platforms have some types of on-board video analytics (VA), but I believe the VA is "closed" and not user-/dev-accessible, just as you said:

"...motion detection is not possible natively in the DJI SDK."

To expand upon DBLOCHER's response, agreed, the DJI SDK might be restrictive, but even with a DJI platform, I think there are lots of options to move forward. As a proof of concept: maybe you could process video on the ground for object detection/squirter alert by capturing the DJI video output (HDMI) into a video analytics box? Starting with a regular computer/laptop is easiest, then move to something small-ish like a Jetson Xavier AGX. It would be super-cool to have video analytics change the path of the drone automatically, even if just switching flight modes based on an event.
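Ground-side motion detection of this kind can start very simply. Below is a sketch of the core frame-differencing idea on toy grayscale frames (plain lists of ints); a real pipeline would apply the same logic with OpenCV to frames captured from the drone's HDMI output:

```python
def frame_diff_motion(prev, curr, threshold=25, min_changed=3):
    """Return True if enough pixels changed between two grayscale frames.

    Frames are lists of rows of 0-255 ints. A production version would use
    OpenCV background subtraction on the captured video stream instead.
    """
    changed = sum(
        1
        for prev_row, curr_row in zip(prev, curr)
        for p, c in zip(prev_row, curr_row)
        if abs(p - c) > threshold
    )
    return changed >= min_changed

# A static scene vs. one with a bright object moved into view:
still = [[10] * 8 for _ in range(8)]
moved = [row[:] for row in still]
for r in range(3, 6):
    for c in range(3, 6):
        moved[r][c] = 200
```

A positive result from this check is exactly the kind of event that could trigger the alert or flight-mode switch described above.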

For your reference, Rosetta Drone is an (abandoned) example of a software "shim" that was used to adapt the DJI SDK into MAVLink, to take advantage of more common software, such as QGroundControl, to try to harness the open-source flight planning and mission tools. It's possible a fork is being maintained elsewhere?

For other drones outside DJI, you could potentially put video analytics / object detection onboard the UAV, which is likely the ultimate end goal, but let's walk before we run. Also, processing on a SWaP-limited drone would likely slow down development a lot, and lock you into a concept/drone too early. Just a thought!

achang7 | 11 February 2021

Jack, I like the three behaviors that you've selected for development. All three have relevant uses in a tactical environment. I would be very interested in some of the results of the testing that you come up with. For example, how wide a field of view would the different behaviors be able to observe? This would impact how a Soldier would employ the drone (i.e. how close does the drone need to be to a point of interest to reliably observe it for detect and follow, or how large of an area would the patrol function be able to cover). For the follow and alert mode, what sort of following distance are you envisioning? As an operator, I really don't want the aircraft directly above my position giving it away. Also, have you considered a negative following distance? As in having a "Lead and alert" mode?

A question from the technical perspective: do you have extensive experience with the DJI SDK and the Mavic platform that is drawing you to that particular SDK/platform combination? As both SCWOLPERT and DBLOCHER noted, the DJI SDK can be rather limiting, and you would definitely need to come up with a way to get around DJI's lack of on-board video analytics. Have you looked at Skydio drones? https://skydio.com/skydio-2 I have not worked with their SDK for the Skydio 2, but we did some early work with the Skydio R1 and found it to be pretty user friendly.

Jack Forbes | 16 February 2021

Hello,

Thanks for the response and feedback on my proposal. Regarding the following distance, I was envisioning a close range (50-150 ft), as ActiveTrack has issues maintaining a lock beyond this for a man-sized target. This following distance could be increased on a platform other than the Mavic with better tracking and camera hardware, but I focused my ideas around the Mavic because I already own one that I could use for preliminary testing. Through your response and others, I have seen that using the Mavic or other DJI drones is most likely not the best choice, and I am open to switching to another platform such as the Skydio. I also think DJI's lack of user-accessible on-board image processing really limits what could be achieved with the Mavic for onboard object detection and alerting, so the Skydio or a custom-made drone is looking to be the better option moving forward, especially as I do not have any experience with the DJI SDK.

For a custom-made drone, I am envisioning a gimbaled HD camera with a narrow field of view alongside a wide-angle camera to catch any motion outside the gimbaled camera's FOV. Image processing could be done on an Nvidia Jetson or similar board, data could be relayed over IP through a wireless bridge, and flight commands could be sent to a flight controller running ArduPilot. Let me know what problems you foresee if I go with a custom-made option or a commercial drone such as the Skydio, or of any other ideas that cross your mind.
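Relaying alerts over the IP link could be as simple as small JSON datagrams over UDP. A loopback sketch standing in for the drone-to-operator wireless bridge (the message fields here are illustrative, not a defined protocol):

```python
import json
import socket

def send_alert(sock, addr, alert):
    """Serialize an alert dict and send it as a single UDP datagram."""
    sock.sendto(json.dumps(alert).encode(), addr)

# Loopback demo: the receiver stands in for the operator's controller.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))  # OS picks a free port
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

send_alert(tx, rx.getsockname(), {"type": "motion", "bearing_deg": 170})
received = json.loads(rx.recv(1024).decode())
```

In the field, the same messages would traverse the wireless bridge instead of loopback, with whatever encryption the link provides.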

 

Thanks,

Jack Forbes

awright | 19 February 2021

What are your thoughts on assessing how well the drone performs the automated functions? It may be that different optimizations for one of the modes (like the motion detect mode you mention has been tested) are critically important to operating in different environments -- is that something you would be able to modify with the SDK?

Jack Forbes | 19 February 2021

As of right now I plan to start developing a program using the DJI SDK to perform an automated tracking sentry mode, where the drone would fly to a waypoint and upload snapshots to an AI server to detect any objects of interest. If any are detected, it would then begin automatically following that object.

As for assessing how well this mode and others perform in different environments and optimizing them, I only have control over the detection code that I will write, as the SDK only provides me with high-level controls over the automated flight modes. Optimizations will therefore be limited to physical changes to the mission such as altitude and follow distance, changes to the AI server and objects of interest, and other conditions such as the size of the allowed tracking area. This is probably the largest limitation of the DJI SDK.

As of now I do not have access to the Skydio or its SDK for testing, though I believe it would be a superior platform for this project due to its better object tracking and more extensive SDK. The Skydio SDK has recently been restricted to certified developers, and when I contacted them for more information they elaborated that there is no current way for me to gain access to their SDK. This leaves me restricted to DJI or another commercial drone if I want to use a pre-existing platform, or the option to build a platform from the ground up. I have decided that out of these options the best way forward is the DJI SDK, though if I gained access to the Skydio at a future date I would most likely move to it.
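One iteration of the sentry loop described above can be sketched with the SDK calls stubbed out. All three callables are placeholders, not real DJI SDK names: `take_snapshot` for the camera capture, `detect` for the HTTP round-trip to the AI server, and `begin_follow` for starting automated tracking:

```python
def sentry_step(take_snapshot, detect, begin_follow):
    """One iteration of the sentry loop: snapshot -> AI server -> follow.

    take_snapshot, detect, and begin_follow are injected stand-ins for the
    camera call, the AI-server request, and the follow command respectively.
    """
    frame = take_snapshot()
    objects = detect(frame)       # e.g. POST the snapshot, parse detections
    if objects:
        begin_follow(objects[0])  # follow the first object of interest
        return objects[0]
    return None
```

In the real application this step would run on a timer while the drone holds at its waypoint, and injecting the three callables keeps the loop testable without a drone or server in the loop.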