Week 10 Android App Updates!

After two quarters' worth of development, we have completed the Android app! The design hasn't changed much since the Week 7 updates, but we cleaned it up, added the proper attributions to the source code, and packaged it all up into our GitHub project.

You can find the final codebase (nicknamed phantomboreas) on GitHub.

Week 7 Android App Updates!

The Android app is nearly done!

First, our app now authenticates with the server using a token system.
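The post doesn't spell out how the token system works, so here is a minimal sketch of one plausible scheme: the server issues a signed token of the form "user.expiry.signature" and verifies it on each upload. The class and method names, the HMAC signing, and the token layout are all our assumptions for illustration.

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Hypothetical token scheme: the server signs "user.expiry" with a secret
// HMAC key; the app sends the token with each upload, and the server
// recomputes the signature to verify it without a database lookup.
public class TokenAuth {
    private final byte[] key;

    public TokenAuth(String secret) {
        this.key = secret.getBytes(StandardCharsets.UTF_8);
    }

    private String sign(String payload) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(key, "HmacSHA256"));
            return Base64.getUrlEncoder().withoutPadding()
                    .encodeToString(mac.doFinal(payload.getBytes(StandardCharsets.UTF_8)));
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    /** Issue a token of the form "user.expiryMillis.signature". */
    public String issue(String user, long expiryMillis) {
        String payload = user + "." + expiryMillis;
        return payload + "." + sign(payload);
    }

    /** Return the user name if the token is valid and unexpired, else null. */
    public String verify(String token, long nowMillis) {
        String[] parts = token.split("\\.");
        if (parts.length != 3) return null;
        String payload = parts[0] + "." + parts[1];
        if (!sign(payload).equals(parts[2])) return null;      // tampered
        if (Long.parseLong(parts[1]) < nowMillis) return null; // expired
        return parts[0];
    }
}
```

On the app side, the issued token would simply be attached to each upload request as a header before sending.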

Second, we redesigned the app to follow Material Design standards. To that end, we added a tabbed display: one tab for the video stream / image upload functionality and another tab for a gallery. We also restyled the photo capture button to match the Material Design theme.

A gallery was added so parking officers can view their recent photo captures. Given the extremely high resolution images captured by the Phantom 3, we opted to use the Picasso library to load the images efficiently.

Here are screenshots of those tabs:


Lastly, we used Android Asset Studio (https://romannurik.github.io/AndroidAssetStudio/) to create app and web icons. Here is a preview (web_hi_res_512.png):

Mobile: GPS/Timestamps & Next Up!


We successfully implemented GPS and timestamp retrieval, and we can now send this data to the server along with the pictures of the license plates. The server now has the tools to implement business logic for actual time-based parking enforcement!
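The post doesn't say how the coordinates and timestamp are packaged with each picture, so here is a small sketch of one way to do it: serialize them as a JSON object attached to the upload. The class name, field names, and JSON layout are our illustration only.

```java
import java.time.Instant;
import java.util.Locale;

// Hypothetical per-photo metadata record: the latitude/longitude and capture
// time that accompany each license plate picture sent to the server.
public class CaptureMetadata {
    private final double latitude;
    private final double longitude;
    private final Instant capturedAt;

    public CaptureMetadata(double latitude, double longitude, Instant capturedAt) {
        this.latitude = latitude;
        this.longitude = longitude;
        this.capturedAt = capturedAt;
    }

    /** Serialize as a small JSON object to attach to the upload request. */
    public String toJson() {
        // Locale.US keeps the decimal separator a '.' regardless of device locale.
        return String.format(Locale.US,
                "{\"lat\":%.6f,\"lon\":%.6f,\"timestamp\":\"%s\"}",
                latitude, longitude, capturedAt);
    }
}
```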

Next up for the mobile app is implementing authentication so unauthorized users cannot simply download the APK and send off pictures. The mobile and server teams will be working largely in concert with one another for this.

Lastly, we are about three weeks from a preliminary delivery to our clients, so stay tuned!

Android App: Half-way mark

We made it to the end of Winter Quarter with a successful demo to our professor.

We were able to successfully control the drone, capture a picture to the drone's SD card, download that image to the mobile phone's storage, and upload it to a server via an HTTP POST.

Picture Taking

We utilized the DJI SDK and expanded upon it to save an image to a known location for further processing. We encountered many difficulties with establishing the connection, but it's working now.

Picture Transfer

The Android app successfully runs a media scanner over the DJI Phantom 3's SD card and transfers the captured image to the phone.

Picture Upload

The Android app successfully uploads the image to a server via an HTTP POST.
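An image upload like this is typically sent as a multipart/form-data POST body. The post doesn't describe the request format, so the sketch below only illustrates how such a body is assembled; the boundary string, field name, and file name are placeholders.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

// Sketch of assembling a multipart/form-data body for an image upload.
// The bytes produced here would be written to the connection's output
// stream with Content-Type: multipart/form-data; boundary=<boundary>.
public class MultipartBody {
    private final String boundary;
    private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();

    public MultipartBody(String boundary) {
        this.boundary = boundary;
    }

    /** Append one file part: boundary line, headers, raw bytes, CRLF. */
    public void addFilePart(String field, String filename, byte[] data) throws IOException {
        String header = "--" + boundary + "\r\n"
                + "Content-Disposition: form-data; name=\"" + field
                + "\"; filename=\"" + filename + "\"\r\n"
                + "Content-Type: application/octet-stream\r\n\r\n";
        buffer.write(header.getBytes(StandardCharsets.UTF_8));
        buffer.write(data);
        buffer.write("\r\n".getBytes(StandardCharsets.UTF_8));
    }

    /** Append the closing boundary and return the full body (call once). */
    public byte[] build() throws IOException {
        buffer.write(("--" + boundary + "--\r\n").getBytes(StandardCharsets.UTF_8));
        return buffer.toByteArray();
    }
}
```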

Server

Currently the server is an ad-hoc setup running on one of our computers. It reads the incoming image feed, processes it, and performs database analysis and storage. After analyzing the image, the server stores the image itself, the captured license plate, the latitude/longitude, and the timestamp of the image.
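With the plate, coordinates, and timestamp stored, the core time-based enforcement decision reduces to comparing how long a plate has been seen at a spot against the posted limit. The sketch below is our illustration of that check; the method name and the notion of a "first seen" record are assumptions, not the server's actual code.

```java
import java.time.Duration;
import java.time.Instant;

// Hypothetical server-side check: a car is overparked once the elapsed time
// since its first sighting at a spot exceeds the zone's posted limit.
public class EnforcementCheck {
    /** True if a car first seen at firstSeen has exceeded limitMinutes by now. */
    public static boolean isOverparked(Instant firstSeen, Instant now, long limitMinutes) {
        return Duration.between(firstSeen, now).toMinutes() > limitMinutes;
    }
}
```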

Known Issues

  • The captured image isn't indexed by Android's media store, but we confirmed that it does get saved to phone storage, and we can upload it from that known (unindexed) location. Because of this indexing issue, the image doesn't appear in the gallery, but more robust file explorer apps can see it.
  • We don't yet query the DJI Phantom 3 for the latitude and longitude of the captured image's location, nor do we post the image's timestamp. Both are currently hard-coded and will later be fixed to reflect actual coordinates and timestamps.

Updates on Mobile App

We managed to get the Sample DJI Tutorial App working. The app streams the video from the drone, with buttons to capture images and record video.

From here, we plan to attach server communication code to send the images to a server for further processing.

Challenge: we expect to do a bit of research on how to load images from the Phantom 3's SD card, or whether we can intercept the image in the “capture button” code.

Project Pitch

Here’s our first post! Given that, we should probably explain what the project is and why we are doing it. This project is part of our Senior Design class at UC Davis. Our sponsor, Professor Tagkopoulos, wanted to incorporate a quadcopter, Google Glass, real-time processing, and object recognition in one large project.

After hours of thinking through practical use cases for all those requirements, we settled on a parking enforcement application. The vision is to have quadcopters scan timed street parking, photograph parked cars’ license plates, and send the images to a server that performs object recognition and extracts the license plate numbers. From there, the server logs the timestamp, GPS info, and license number to track how long a car has been parked. Cross-checking against a database of parking information for a given city, the server can set triggers to act when a car is overparked.

Initially we will have it send a notification to a parking officer’s phone (through a mobile app we will create) or to their Google Glass, if they have that set up. We also plan to include a sign-up service where people send us their phone numbers and license plate information so that they get preemptive text notifications before the officer is informed, giving them a chance to leave before the car is legally overparked.

Our sponsor loved the idea =)