Spring Break Update: Autonomous Flight

[Image: DJI Phantom 2 quadcopter]

I have been spending the spring break exploring DJI’s Android Waypoints API for autonomous control of the quadcopter. The API lets us execute a custom mission by creating ‘MissionSteps’ and inserting them into a queue that the quadcopter processes sequentially. MissionSteps are simple actions such as taking off, landing, manipulating the gimbal, taking a picture, yawing the quadcopter, and the Waypoints feature, which involves setting a GPS coordinate and altitude to which the quadcopter will automatically fly. We originally intended to map out a simple scenario in a parking lot for a demo of our system: the quadcopter would fly on its own to predetermined parking spots, take a picture, and send it off for processing on our server.

However, we have run into considerable difficulty developing with Waypoints. The API is poorly documented, with limited description of how the different MissionSteps behave, and I had to try many of them just to discover their exact behavior. This has proven rather dangerous. In fact, earlier today, while testing, the quadcopter suddenly took off and crashed into my house. Fortunately, I was able to catch the drone as it fell, and the only damage was a broken propeller. I have ordered replacement propellers, which should arrive by this Sunday. After this experience, I question the safety and viability of pursuing an autonomous system. We will discuss how to proceed with this portion of the project with Professor Tagkopoulos at our next meeting.
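
The DJI SDK itself is Java/Android-only, but the mission model is easy to illustrate: an ordered queue of simple steps that the aircraft executes one after another. Below is a minimal Python sketch of that concept; the step names and parameters are placeholders mirroring the MissionSteps described above, not the SDK’s actual API.

```python
# Conceptual sketch of the mission model: an ordered queue of simple steps
# executed one after another. Step names are illustrative, not the SDK's API.
from collections import deque

mission = deque([
    ("take_off", {}),
    ("go_to_waypoint", {"lat": 38.5382, "lon": -121.7617, "alt_m": 10}),
    ("set_gimbal_pitch", {"degrees": -90}),   # point the camera at the ground
    ("shoot_photo", {}),
    ("land", {}),
])

def run_mission(queue, execute):
    """Pop and execute each step in order; `execute` talks to the aircraft."""
    while queue:
        name, params = queue.popleft()
        execute(name, params)

# Example: dry-run the mission by printing each step instead of flying it.
run_mission(mission, lambda name, params: print(name, params))
```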

Android App: Halfway mark

We made it to the end of Winter Quarter with a successful demo to our professor.

We were able to successfully control the drone, capture a picture to the drone’s SD card, download that image to the mobile phone’s storage, and upload it to a server via an HTTP POST.

Picture Taking

We used the DJI SDK and extended it to save the image to a known location for further processing. We encountered many difficulties establishing the connection, but it is working now.

Picture Transfer

The Android app runs a media scan on the DJI Phantom 3’s SD card and transfers the captured image to the phone’s storage.

Picture Upload

The Android app then uploads the image to the server via an HTTP POST.
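
The app performs this upload in Java, but the request itself is simple multipart form data. Here is a sketch of the equivalent call using Python’s `requests` library; the server address and field name are placeholders. (The coordinates and timestamp mentioned under Known Issues are added to this request in a later update.)

```python
# A minimal sketch of the upload the app performs, using the `requests`
# library; the server address and field name are placeholders.
import requests

SERVER_URL = "http://192.168.2.1:5000/upload"  # hypothetical ad-hoc address

def upload_image(image_path):
    """POST the downloaded image to the server as multipart form data."""
    with open(image_path, "rb") as f:
        response = requests.post(SERVER_URL, files={"image": f})
    response.raise_for_status()
    return response
```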

Server

Currently, the server runs on one of our computers over an ad-hoc network. It reads the incoming image feed, processes it, and performs database analysis and storage. After analyzing an image, the server stores the image itself, the captured license plate, the latitude/longitude, and the timestamp of the image.
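
As a rough sketch of the analysis step, assuming OpenALPR’s Python bindings (OpenALPR is the plate recognizer we use, as described in a later update), the server-side processing looks something like this; the config paths and record layout are illustrative:

```python
# A rough sketch of the analysis step, assuming OpenALPR's Python bindings;
# the config paths and the record layout are illustrative.
from openalpr import Alpr

alpr = Alpr("us", "/etc/openalpr/openalpr.conf",
            "/usr/share/openalpr/runtime_data")

def process_capture(image_path, latitude, longitude, timestamp):
    """Recognize plates in the image and build the record we store."""
    results = alpr.recognize_file(image_path)
    plates = [r["plate"] for r in results["results"]]
    return {
        "image_path": image_path,   # the image itself
        "plates": plates,           # the captured license plate(s)
        "latitude": latitude,
        "longitude": longitude,
        "timestamp": timestamp,
    }
```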

Known Issues

  • The captured image is not indexed by the Android device, but we confirmed that it does get stored in the phone’s storage, and we can upload it from that known, un-indexed location. Because of this indexing issue, the image does not appear in the gallery, but it is visible in more robust file explorer apps.
  • We do not yet query the DJI Phantom 3 for the latitude and longitude of the captured image’s location, and we do not yet post the image’s timestamp. Both are currently hard-coded and will later be fixed to reflect the actual coordinates and timestamp.

Server Progress Update

Communication between mobile app and server

In the last week, we have achieved communication between the mobile application and our ‘droneservice’ Flask application. The mobile app sends an image, the geographic coordinates of the location at which the image was taken, and a timestamp of when the image was captured via an HTTP POST request. ‘droneservice’ processes the request as expected (refer to the previous blog post for more information about ‘droneservice’). For the time being, the mobile app and server communicate over a wireless LAN created by the laptop on which the server runs.
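
A minimal sketch of the receiving end, assuming Flask with a local Redis instance for the hand-off to the worker; the route, field names, and queue name are placeholders, not the actual ‘droneservice’ code:

```python
# A minimal sketch of the request handler, assuming Flask and a local Redis
# instance; the route, field names, and queue name are placeholders.
import json
import os

import redis
from flask import Flask, request

app = Flask(__name__)
queue = redis.Redis()  # assumes Redis running on localhost

@app.route("/upload", methods=["POST"])
def upload():
    image = request.files["image"]
    path = os.path.join("/tmp", image.filename)
    image.save(path)
    # Push a job onto the queue for the plate-recognition worker to consume.
    queue.rpush("captures", json.dumps({
        "image_path": path,
        "latitude": request.form["latitude"],
        "longitude": request.form["longitude"],
        "timestamp": request.form["timestamp"],
    }))
    return "", 204
```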

Database created

We have created a relational database that saves the results of OpenALPR’s image processing. The schema is a hierarchy of three models: the top-level model stores information about a captured image, the mid-level model stores information about each plate found in that image, and the bottom-level model stores each of the candidate license plate numbers for each plate that is found. This information is now stored by our ‘parkinglogservice’ after it takes the results from the Redis queue, as described in the previous blog post.
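
A sketch of that three-level hierarchy, assuming SQLAlchemy; the model and column names here are illustrative, not our exact schema:

```python
# Illustrative three-level schema, assuming SQLAlchemy; model and column
# names are placeholders, not our exact schema.
from sqlalchemy import (Column, Float, ForeignKey, Integer, String,
                        create_engine)
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

class CapturedImage(Base):   # top level: one row per captured image
    __tablename__ = "captured_images"
    id = Column(Integer, primary_key=True)
    path = Column(String)
    latitude = Column(Float)
    longitude = Column(Float)
    timestamp = Column(String)
    plates = relationship("Plate", back_populates="image")

class Plate(Base):           # mid level: one row per plate found in an image
    __tablename__ = "plates"
    id = Column(Integer, primary_key=True)
    image_id = Column(Integer, ForeignKey("captured_images.id"))
    image = relationship("CapturedImage", back_populates="plates")
    candidates = relationship("Candidate", back_populates="plate")

class Candidate(Base):       # bottom level: one row per candidate number
    __tablename__ = "candidates"
    id = Column(Integer, primary_key=True)
    plate_id = Column(Integer, ForeignKey("plates.id"))
    number = Column(String)
    confidence = Column(Float)
    plate = relationship("Plate", back_populates="candidates")

engine = create_engine("sqlite:///parkinglog.db")
Base.metadata.create_all(engine)
```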

Next steps: the website

We are now ready to begin work on the website. Its primary purposes will be to 1) act as a query service for license plate data and 2) provide a platform on which users can verify that a car is overparked.
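
As a starting point, the query side might look something like the following Flask sketch; the route, parameters, and lookup logic are hypothetical at this stage:

```python
# One possible shape for the query service, sketched with Flask; the route
# and lookup logic are hypothetical at this stage.
from flask import Flask, jsonify

app = Flask(__name__)

def find_sightings(plate_number):
    """Stub standing in for a query against the schema described above."""
    return []  # would return every stored capture containing this plate

@app.route("/plates/<plate_number>")
def plate_history(plate_number):
    # Return every sighting of the requested plate with its location and time.
    return jsonify([
        {"latitude": s.latitude, "longitude": s.longitude,
         "timestamp": s.timestamp}
        for s in find_sightings(plate_number)
    ])
```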