Mai is an app that I created with Jason, Brian, David, and John at our coding bootcamp. With Mai, you can post photos and share stories, much like on Facebook and Instagram. Mai goes one step further by writing the stories for you.
How would this be useful? Imagine you are a small-to-medium business owner who wants to create meaningful content for your followers. With technologies such as image recognition and natural language generation (NLG), you should be able to do so easily, just by snapping photos and posting them online. If our app could understand the way you perceive the world and the way you like to write, you would no longer need to set aside time to create stories.
I knew our project had ambitious goals and would need to span two projects instead of one. That’s 3 weeks in bootcamp time and perhaps 8 weeks in normal work time. I was very fortunate to get all OG members back (Jason, David, and John) and to welcome a new member (Brian) for Phase 2. Without a doubt, the key was to build and maintain good communication and friendship in Phase 1. I imagine working with an entirely new team in Phase 2 would have delayed the project by a week or two.
In addition to meeting the basic requirements (Handlebars/Node/Express/MySQL for Phase 1 and React for Phase 2), I wanted us to use technologies that were in demand so that we could market ourselves as developers. We ended up learning (all in 3 weeks):
- JSON Web Tokens (JWT)
- Amazon S3
- Google Cloud Vision
Switching from Handlebars to React was surprisingly easy. Both are centered around the notion of components and both use regular HTML. Compared to other teams, we had a head start because we got to reuse our frontend and backend code.
S3 was an unplanned addition. While implementing Dropzone to upload photos, we discovered that Heroku's filesystem is ephemeral: files saved to a dyno simply disappear. Since images are too large to efficiently store in and retrieve from a database, we had to quickly find a solution for permanent storage. AWS was naturally the way to go, but I remember John and David getting stressed as they tried to parse overly technical docs into something that we could use. I used to take file uploads for granted and appreciate them much more now.
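The eventual shape of the upload code is easy to sketch. The helper below builds the parameters object that the AWS SDK's `s3.upload()` expects; the bucket name and key scheme are hypothetical, made up for illustration rather than copied from our repo.

```javascript
// Hypothetical helper that builds the parameters for s3.upload().
// Bucket name and key scheme are illustrative, not our actual values.
function buildUploadParams(file, userId) {
  // Prefix the key with the user id and a timestamp so uploads never collide.
  const key = `photos/${userId}/${Date.now()}-${file.originalname}`;
  return {
    Bucket: 'mai-photo-uploads', // hypothetical bucket name
    Key: key,
    Body: file.buffer,           // the upload middleware keeps the file in memory
    ContentType: file.mimetype,
    ACL: 'public-read',          // photos are served directly from S3
  };
}

// With the AWS SDK, the actual call would then look like:
//   s3.upload(buildUploadParams(req.file, req.user.id), callback);
```

Once the photo lives in S3, the database only needs to store the resulting URL, which keeps rows small.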
On the other hand, Google Vision was just amazing to play with. We had a prototype running as early as Phase 1. It could detect faces, text, and landmarks quite well. It even knew which photos were NSFW so that we could filter them out.
Originally, we wanted to use Google Natural Language to create stories. Unfortunately, during Phase 1, we found out that it only does natural language processing (NLP), not natural language generation. NLG still seems to be an open problem, as no other Node packages solved it either. We compromised in Phase 2 by creating hashtags from Google Vision's responses.
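The hashtag compromise is easy to illustrate. Vision's label detection returns a list of annotations, each with a `description` and a confidence `score`; turning those into hashtags is a filter and a map. The score cutoff below is a guess for illustration, not the value we actually shipped.

```javascript
// Turn Google Vision label annotations into hashtags.
// The 0.7 score cutoff is illustrative, not our production value.
function labelsToHashtags(labelAnnotations, minScore = 0.7) {
  return labelAnnotations
    .filter((label) => label.score >= minScore)
    .map((label) => '#' + label.description.toLowerCase().replace(/\s+/g, ''));
}

// Example of the response shape label detection returns:
const fakeLabels = [
  { description: 'Golden Gate Bridge', score: 0.97 },
  { description: 'suspension bridge', score: 0.92 },
  { description: 'fog', score: 0.55 },
];
// labelsToHashtags(fakeLabels) -> ['#goldengatebridge', '#suspensionbridge']
```

It is not a story, but a photo posted with `#goldengatebridge #suspensionbridge` already attached felt like a reasonable first step toward one.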
As for me, I had the most trouble refactoring the backend, using Redux, and creating enough tasks for everyone. First, as our API got richer, we felt the need to categorize the routes based on what they modify:
- routes_user.js (sign-up, login, logout, initialize store, update profile, update password, delete account)
- routes_story.js (CRUD for stories)
- routes_photos.js (upload photos, analyze photos, create captions)
- routes_writer.js (find writers)
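The split is easiest to see as a prefix table. The helper below is hypothetical (the URL prefixes are my guesses, not our exact ones); in Express itself, this mapping is just a series of `app.use()` calls, one per router file.

```javascript
// Hypothetical mapping from URL prefix to the route file that owns it.
// In the app this is expressed as, e.g.:
//   app.use('/api/users', require('./routes_user'));
const ROUTE_FILES = [
  ['/api/users',   'routes_user.js'],
  ['/api/stories', 'routes_story.js'],
  ['/api/photos',  'routes_photos.js'],
  ['/api/writers', 'routes_writer.js'],
];

// Find which route file handles a given request path.
function routeFileFor(path) {
  const match = ROUTE_FILES.find(([prefix]) => path.startsWith(prefix));
  return match ? match[1] : null;
}
```

Grouping routes by the resource they modify meant a bug report like "editing a story fails" pointed at exactly one file.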
The payloads that we used for React were different from those we had used for Handlebars. In addition, with the introduction of JSON Web Tokens, the backend code became more complex.
What helped us (especially members newer to coding) understand and test the APIs was creating mock data. One day, I decided to write SQL commands to create 5 users, each with 6 stories, each with 1–5 photos, and build associations among them. With that data in place, we could actually see what we were getting from the backend and how to extract the values we needed. If we end up doing a Phase 3, I would like to explore libraries similar to Ember Mirage, which, with the help of Faker, lets us create mock data and do load testing easily.
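I wrote that seed SQL by hand, but the same shape is mechanical enough to generate. A sketch of a generator for it follows; the table and column names are guesses at a plausible schema, not our actual migrations.

```javascript
// Generate seed SQL: 5 users x 6 stories each x 1-5 photos per story.
// Table and column names are hypothetical, not our exact schema.
function generateSeedSql(userCount = 5, storiesPerUser = 6) {
  const statements = [];
  let storyId = 0;
  for (let u = 1; u <= userCount; u++) {
    statements.push(`INSERT INTO users (id, name) VALUES (${u}, 'user${u}');`);
    for (let s = 1; s <= storiesPerUser; s++) {
      storyId += 1;
      statements.push(
        `INSERT INTO stories (id, user_id, title) VALUES (${storyId}, ${u}, 'Story ${s}');`
      );
      // Each story gets a random 1-5 photos, mirroring the hand-written data.
      const photoCount = 1 + Math.floor(Math.random() * 5);
      for (let p = 1; p <= photoCount; p++) {
        statements.push(
          `INSERT INTO photos (story_id, url) VALUES (${storyId}, 'photo${p}.jpg');`
        );
      }
    }
  }
  return statements;
}
```

Run once against a scratch database, this gives every teammate the same realistic dataset to query against.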
Second, using Redux didn't turn out to be as easy as in Wes Bos' Reduxtagram tutorial. At the time, Wes' tutorial relied on React Router v3, whereas I wanted to use the new React Router v4, so I had to learn the best practices by trial and error. As a result, I lost a few days of development time and confused my team, since they were learning Redux from my code. In Phase 3, I would like to do a better job of normalizing data in the Redux store. Due to lack of experience and time, I stored the data that came from the backend as-is. Because the same information (e.g. a user's name) was stored in several places, I had to take extra steps to keep it consistent for caching.
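Normalizing means storing each entity exactly once, keyed by id, and having everything else reference that id. A sketch of the reshaping I would aim for in Phase 3 (the payload shape and field names here are assumptions, not our real API):

```javascript
// Reshape an API payload of stories-with-embedded-authors into byId/allIds
// tables, so a user's name lives in exactly one place in the Redux store.
// Payload shape is assumed for illustration.
function normalizeStories(stories) {
  const state = {
    users: { byId: {}, allIds: [] },
    stories: { byId: {}, allIds: [] },
  };
  for (const story of stories) {
    const { author, ...rest } = story;
    if (!state.users.byId[author.id]) {
      state.users.byId[author.id] = author;
      state.users.allIds.push(author.id);
    }
    // The story keeps only a reference to its author, not a copy.
    state.stories.byId[story.id] = { ...rest, authorId: author.id };
    state.stories.allIds.push(story.id);
  }
  return state;
}
```

With this shape, renaming a user is a single write to `users.byId[id]` instead of a hunt through every story that embeds a copy.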
Finally, as project lead, it was my responsibility to create and assign tasks so that everyone was challenged just right, stayed active, and felt that they were contributing to our app. A few times, I failed in this aspect and let down Jason, Brian, and David during Phase 2. Initially I had ambitions for us to grow into a 6-member team. In retrospect, I am glad that we went with 5 and formed stronger bonds by pair programming in 2-2-1 or 3-2. What I could have done was ask more often how they felt they could contribute that day.
What does Mai's future look like? At the moment, everyone is busy working or looking for jobs. I hope that we will get together again for Phase 3 and possibly more. I believe we can tackle some of these goals next:
- Display success and error messages to the user
- Write unit, integration, and acceptance tests
- Allow users to reorder photos in edit story
- Allow users to upload a profile photo
- Allow users to search writers and stories
- Allow users to follow writers and get recommendations
- Allow users to get notifications
- Utilize the right sidebar
- Normalize store data
- Look into NLG!
You can find our Phase 1 code here:
Download from GitHub