Final Remarks

Project Summary

We’ve created a two-part system that allows DJI Phantom 3 drones to be used for parking enforcement. The first component, the ‘Patrol’ Android application, allows the Phantom 3 to send images (along with GPS coordinates and a timestamp) to our server, which is the second component of the project. The server performs license plate recognition, aggregates the results, applies enforcement logic, and serves a web interface for users to view and process citations. The system is secured with authentication to ensure that only approved drones can submit evidence.
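The enforcement logic itself lives in our `parkinglogservice`; as a minimal illustrative sketch (not the production rules — the time limit and tuple shapes below are assumptions for the example), aggregated sightings can be turned into citations with a simple overstay rule:

```python
from datetime import datetime, timedelta

# Illustrative sketch only, not the production enforcement logic:
# a plate sighted at the same spot on two patrols more than the
# allowed duration apart gets a citation.
TIME_LIMIT = timedelta(hours=2)  # assumed limit for this example

def find_violations(sightings, time_limit=TIME_LIMIT):
    """sightings: list of (plate, spot_id, timestamp) tuples from patrols."""
    first_seen = {}   # (plate, spot) -> earliest sighting time
    citations = []
    for plate, spot, ts in sorted(sightings, key=lambda s: s[2]):
        key = (plate, spot)
        if key not in first_seen:
            first_seen[key] = ts
        elif ts - first_seen[key] > time_limit:
            citations.append((plate, spot, ts))
    return citations
```

Each sighting would come from one drone pass; a car photographed in the same spot on a later pass beyond the limit is flagged.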

Users can access the web interface to view and process citations at: http://taglabrouter.genomecenter.ucdavis.edu/webservice/

Resources:

  1. All project code can be found at: https://github.com/quadsquad193/phantomboreas
  2. User manual can be found at: https://www.dropbox.com/s/syu4eotxcqmv218/QuadcopterUserManual.pdf?dl=0
  3. The Patrol Android application APK can be found at: https://www.dropbox.com/s/oodbwicxhjcack4/app-debug.apk?dl=0
  4. Project Overview Video: https://www.youtube.com/watch?v=6PEZUbAusp0

 

Thanks again to Professor Tagkopoulos and Professor Liu for their guidance and support.

-Baotuan, Kelvin, Mark, and Alex


Decision Day (4.2.16) Recap


Our project showcase at UC Davis Decision Day was a great success. Our presentation, along with those of two other teams from ECS193, drew a lot of traffic and interest from prospective students and their families. We were also able to answer many of the prospective students’ questions about UC Davis and the Computer Science program here. We would like to thank the Computer Science Department for giving us this opportunity, as well as Professor Tagkopoulos for his continued guidance, without which we would not have gotten this far.

Spring Break Update: Autonomous Flight


I have spent spring break exploring DJI’s Android Waypoints API for autonomous control of the quadcopter. The API lets us execute a custom mission by creating ‘MissionSteps’ and inserting them into a queue that the quadcopter processes sequentially. MissionSteps are simple actions such as taking off, landing, manipulating the gimbal, taking a picture, yawing the quadcopter, and the Waypoints feature itself, which flies the quadcopter automatically to a given GPS coordinate and altitude.

We originally intended to map out a simple parking-lot scenario for a demo of our system: the quadcopter would fly on its own to predetermined parking spots, take a picture, and send it off for processing on our server. However, we have run into considerable difficulty developing with Waypoints. The API is poorly documented, with little description of how the different MissionSteps behave, so I had to run many of them just to discover what they actually do. This has proven rather dangerous: earlier today, while testing, the quadcopter suddenly took off and crashed into my house. Fortunately, I was able to catch the drone as it fell, and the only damage was a broken propeller. I have ordered replacements, and they should arrive by this Sunday.

After this experience, I question the safety and viability of pursuing an autonomous system. We will discuss how to proceed on this portion of the project with Professor Tagkopoulos at our next meeting.
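The DJI SDK itself is Java, but the queue-of-steps pattern it uses can be sketched independently of the SDK. The Python model below is illustrative only — the class and function names are ours, not DJI’s — and shows steps being consumed strictly in order, aborting the rest of the mission on the first failure rather than continuing blindly:

```python
from collections import deque

# Illustrative model of the mission-queue pattern only; these names
# are ours, not the DJI Java SDK's.
class MissionStep:
    def __init__(self, name, action):
        self.name = name
        self.action = action  # zero-argument callable performing the step

def run_mission(steps, log):
    """Execute MissionSteps in FIFO order, aborting on the first failure."""
    queue = deque(steps)
    while queue:
        step = queue.popleft()
        try:
            step.action()
            log.append("done: " + step.name)
        except Exception as exc:
            log.append("abort at %s: %s" % (step.name, exc))
            break  # stop the remaining steps instead of flying on
    return log

log = []
run_mission([
    MissionStep("take_off", lambda: None),
    MissionStep("goto_waypoint", lambda: None),  # fly to GPS coord + altitude
    MissionStep("take_picture", lambda: None),
    MissionStep("land", lambda: None),
], log)
```

The abort-on-failure choice reflects the safety concern above: an undocumented step misbehaving mid-mission should stop the queue, not hand control to the next step.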

Team Progress Update (2.29.16)

Our goal for this week is to complete the pipeline: a photo taken by the DJI quadcopter is sent to our server and processed by OpenALPR to obtain a license plate reading.

What’s done so far:

  1. Kelvin has been working on the server backend, which can now accept an image from a multipart POST and obtain the OpenALPR results. The whole backend consists of three processes and a Redis store:

    1. `droneservice` is a thin Flask application that accepts a POST request and puts the job information onto a queue in Redis.

    2. `openalprservice` consumes images from Redis and uses the OpenALPR library to do recognition. It then reports any results back into a Redis “results” queue.

    3. `parkinglogservice` ingests results from Redis and acts on them; this is where the business logic will go. For now it just prints OpenALPR’s license plate recognition results. We are at the point where we need to discuss more specific business requirements and begin implementing `parkinglogservice` to do meaningful work.

 


2. Android-to-server communication. Alex has completed a demo app that can send a photo from the Android gallery to a server using a multipart POST. We have yet to try this functionality against Kelvin’s recently completed backend.

3. Android application to control camera and retrieve photos. We are having trouble with this for the reasons described below. At this point, we are unable to make any further progress until the issue is resolved.
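The real pipeline runs as three processes connected through Redis; purely as an in-process sketch of the same data flow (stdlib queues standing in for Redis, plain functions standing in for the Flask service and workers, and the recognizer stubbed out), it looks roughly like:

```python
import queue

# In-process sketch of the backend data flow. The real system uses
# Flask (droneservice), Redis queues, and the OpenALPR library; here
# stdlib queues stand in for Redis and the recognizer is a stub.
jobs = queue.Queue()     # images awaiting recognition
results = queue.Queue()  # recognition results awaiting business logic

def droneservice(image_bytes, metadata):
    """Accepts an upload (a multipart POST in the real service) and enqueues it."""
    jobs.put({"image": image_bytes, "meta": metadata})

def openalprservice(recognize):
    """Consumes one job, runs recognition, reports the result."""
    job = jobs.get()
    plates = recognize(job["image"])  # OpenALPR in the real service
    results.put({"plates": plates, "meta": job["meta"]})

def parkinglogservice():
    """Consumes one result; the business logic will eventually live here."""
    result = results.get()
    print(result["plates"], result["meta"])
    return result

droneservice(b"...jpeg bytes...", {"gps": (38.54, -121.74), "ts": "2016-02-29T10:00"})
openalprservice(lambda img: ["7ABC123"])   # stubbed recognizer
out = parkinglogservice()
```

The GPS/timestamp metadata rides alongside the image through both queues, so `parkinglogservice` has everything it needs to tie a plate reading to a spot and time.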

What is affecting our progress (urgent!): We are currently having trouble connecting our Android application to the DJI quadcopter. We believe the cause is out-of-date firmware on both the drone and the controller. However, we are unable to update because the drone’s battery level is too low and we do not have the correct charger. We need to be able to charge the drone ASAP to continue working.

Timeline for Freshman Decision Day Demo

We have been asked by the CS department to showcase our project at Freshman Decision Day on April 2nd. Below is our timeline to complete a quality demo by then.

-March 4th, Friday: send an image from the DJI Phantom to the server and process it.
-March 11th, Friday: business logic, database, and web interface.
-March 18th: finals week.
-March 21st-25th (break): work over break to make the web interface high quality and polished.
-March 28th-April 1st, Friday: finalize the working demo and present it to Professor Tagkopoulos before the actual event.
-April 2nd, Saturday: Freshman Decision Day demo.

Update from Team Meeting 2.20.16

After meeting with Professor Tagkopoulos on Friday, we have reprioritized the image recognition task. We have decided to use OpenALPR, a well-developed open source project, to supply the computer vision and image processing capabilities for our project. OpenALPR provides pre-built web services and daemons for license plate recognition, as well as an integration API with numerous language bindings. We have tested this functionality, and it works reasonably well on the sample images of vehicles we have tried.

Here’s the link to the project: https://github.com/openalpr/openalpr

Here’s the sample output we got:

[Screenshot: sample OpenALPR output, 2016-02-20]

To integrate the drone, we need a way to transmit images from the quadcopter to the server for processing by OpenALPR. The previous team implemented this for Google Glass. We will need the drone and last year’s code to begin testing and optimizing recognition on pictures taken from the drone.
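Once images reach the server, the consuming service mostly needs to pick out the plate readings it trusts. A small sketch of parsing OpenALPR’s JSON output (field names follow the JSON we have seen from the `alpr` tool; the sample payload below is made up for illustration):

```python
import json

def best_plates(alpr_json, min_confidence=80.0):
    """Return (plate, confidence) for each detection above a threshold.

    Field names follow the JSON emitted by the `alpr` command-line tool;
    the confidence threshold is an assumption for this sketch.
    """
    data = json.loads(alpr_json)
    plates = []
    for detection in data.get("results", []):
        if detection["confidence"] >= min_confidence:
            plates.append((detection["plate"], detection["confidence"]))
    return plates

# Made-up sample in OpenALPR's output shape, for illustration only.
sample = json.dumps({"results": [
    {"plate": "7ABC123", "confidence": 91.2},
    {"plate": "7A8C123", "confidence": 62.5},
]})
```

Tuning `min_confidence` against real drone photos will be part of the optimization work mentioned above.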

Goals for the week:

  • Process images with OpenALPR on the server, and have a working demo to show Professor Liu and Professor Tagkopoulos.
  • Delve deeper into last year’s code concerning DJI control and DB communication to determine what we can reuse. Daniel Chen from last year’s team has given us access to their BitBucket repository.

 

Looking ahead, we may need to develop a custom Android app using parts of last year’s design if we want autonomous control with DJI’s Waypoints. Also, if we can get a reasonable working demo with OpenALPR by the end of this quarter, we may be able to pursue more advanced recognition methods. These are, of course, “nice to have” features planned for next quarter, not priorities right now.