Final Remarks

Project Summary

We’ve created a two-part system that allows DJI Phantom 3 drones to be used for parking enforcement. The first component, the ‘Patrol’ Android application, allows the Phantom 3 to send images (along with GPS coordinates and a timestamp) to our server, the second component of the project. The server runs license plate recognition, aggregates the results, applies enforcement logic, and serves a web interface where users can view and process citations. The system is secured with authentication to ensure that only approved drones can submit evidence.
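As a rough illustration of that authentication step, here is a minimal Flask sketch of how the server might reject submissions that do not carry an approved drone token; the endpoint name, header, and token store are assumptions for demonstration and not our exact implementation.

```python
from flask import Flask, request, abort

app = Flask(__name__)

# Tokens issued to approved drones; in a real deployment these would be
# stored server-side rather than hard-coded (illustrative assumption).
APPROVED_DRONE_TOKENS = {"example-drone-token"}

@app.route("/evidence", methods=["POST"])
def submit_evidence():
    # Reject any request that does not present an approved drone token.
    token = request.headers.get("X-Drone-Token", "")
    if token not in APPROVED_DRONE_TOKENS:
        abort(401)

    # Evidence consists of the image, its GPS coordinates, and a timestamp.
    image = request.files.get("image")
    latitude = request.form.get("latitude")
    longitude = request.form.get("longitude")
    timestamp = request.form.get("timestamp")
    if image is None or None in (latitude, longitude, timestamp):
        abort(400)

    # ... hand the image off to license plate recognition here ...
    return "", 204
```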

Users can view and process citations through our web interface at: http://taglabrouter.genomecenter.ucdavis.edu/webservice/

Resources:

  1. All project code can be found at: https://github.com/quadsquad193/phantomboreas
  2. User manual can be found at: https://www.dropbox.com/s/syu4eotxcqmv218/QuadcopterUserManual.pdf?dl=0
  3. Patrol Android application APK can be found at: https://www.dropbox.com/s/oodbwicxhjcack4/app-debug.apk?dl=0
  4. Project Overview Video: https://www.youtube.com/watch?v=6PEZUbAusp0


Thanks again to Professor Tagkopoulos and Professor Liu for their guidance and support.

-Baotuan, Kelvin, Mark, and Alex


Server Progress Update

Communication between mobile app and server

In the last week, we’ve achieved communication between the mobile application and our ‘droneservice’ Flask application. The mobile app sends an HTTP POST request containing an image, the geographic coordinates of the location where the image was taken, and a timestamp marking when the image was captured. ‘droneservice’ processes the request as expected (refer to the previous blog post for more information about ‘droneservice’). For the time being, the mobile app and server communicate over a wireless LAN created by the laptop running the server.
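To illustrate the shape of that request, here is a small Python sketch using the requests library; the real client is the Patrol Android app, and the URL, field names, and authentication header below are assumptions for demonstration only.

```python
import requests

SERVER_URL = "http://example-server.local/droneservice/upload"  # placeholder URL

def submit_capture(image_path, latitude, longitude, timestamp_iso):
    """Send one captured image plus its GPS coordinates and capture timestamp."""
    with open(image_path, "rb") as image_file:
        response = requests.post(
            SERVER_URL,
            headers={"X-Drone-Token": "example-drone-token"},  # assumed auth header
            files={"image": image_file},
            data={
                "latitude": latitude,
                "longitude": longitude,
                "timestamp": timestamp_iso,
            },
            timeout=30,
        )
    response.raise_for_status()
    return response

# Example usage:
# submit_capture("capture.jpg", 38.5382, -121.7617, "2016-06-01T14:30:00Z")
```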

Database created

We have created a relational database that stores the results of OpenALPR’s image processing. The schema is a hierarchy of three models: the top-level model stores information about a captured image, the mid-level model stores information about each plate found in that image, and the bottom-level model stores information about each candidate license plate number for each detected plate. Our ‘parkinglogservice’ now writes this information after pulling results from the Redis queue, as described in the previous blog post.
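A minimal SQLAlchemy sketch of that three-level hierarchy might look like the following; the table names, columns, and types are illustrative assumptions rather than our actual schema.

```python
from sqlalchemy import Column, DateTime, Float, ForeignKey, Integer, String
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

class CapturedImage(Base):
    """Top level: one row per image received from the drone."""
    __tablename__ = "captured_images"
    id = Column(Integer, primary_key=True)
    latitude = Column(Float, nullable=False)
    longitude = Column(Float, nullable=False)
    captured_at = Column(DateTime, nullable=False)
    plates = relationship("DetectedPlate", back_populates="image")

class DetectedPlate(Base):
    """Mid level: one row per plate region found in an image."""
    __tablename__ = "detected_plates"
    id = Column(Integer, primary_key=True)
    image_id = Column(Integer, ForeignKey("captured_images.id"), nullable=False)
    image = relationship("CapturedImage", back_populates="plates")
    candidates = relationship("PlateCandidate", back_populates="plate")

class PlateCandidate(Base):
    """Bottom level: one row per candidate reading of a detected plate."""
    __tablename__ = "plate_candidates"
    id = Column(Integer, primary_key=True)
    plate_id = Column(Integer, ForeignKey("detected_plates.id"), nullable=False)
    number = Column(String(16), nullable=False)
    confidence = Column(Float, nullable=False)
    plate = relationship("DetectedPlate", back_populates="candidates")
```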

Next steps: the website

We are now ready to begin work on the website. Its primary purposes will be to 1) act as a query service for license plate data and 2) provide a platform on which users can verify that a car is overparked.
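As a rough sketch of the query-service idea, the Flask route below returns every capture whose candidate readings match a requested plate number, assuming the illustrative three-table schema above is stored in SQLite; the route, database path, and column names are assumptions, not the final website design.

```python
import sqlite3
from flask import Flask, jsonify, request

app = Flask(__name__)
DATABASE_PATH = "parkinglog.db"  # placeholder path

@app.route("/plates")
def query_plate_sightings():
    """Return every capture whose candidate readings include the given plate."""
    plate_number = request.args.get("plate", "").upper()
    if not plate_number:
        return jsonify({"error": "a 'plate' query parameter is required"}), 400

    connection = sqlite3.connect(DATABASE_PATH)
    try:
        rows = connection.execute(
            """
            SELECT ci.latitude, ci.longitude, ci.captured_at
            FROM captured_images AS ci
            JOIN detected_plates AS dp ON dp.image_id = ci.id
            JOIN plate_candidates AS pc ON pc.plate_id = dp.id
            WHERE pc.number = ?
            """,
            (plate_number,),
        ).fetchall()
    finally:
        connection.close()

    return jsonify([
        {"latitude": lat, "longitude": lon, "captured_at": ts}
        for lat, lon, ts in rows
    ])
```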