Augment-Eat

About

Before I get into anything, I have a few confessions to make about this one. The category in my portfolio that I decided to put this project in is a little misleading; I didn't actually design the interface for this app by myself. In reality, almost everything was done together, roughly equally, by the team. The project was also done at the very end of high school, in mid-2019, and to be honest I have forgotten a lot of the hard skills I learned specifically for it, like C#. I do, though, remember being very proud of what my team of three managed to accomplish.

The app serves to help people get an idea of what their food will look like before ordering, which in turn might help them decide. It's a small problem, but serving according to expectation every single time could be a huge help for a restaurant's reputation. In short, restaurants would upload and maintain a version of their menu containing 3D models of the dishes they serve, which users can view in AR through their device's camera, most likely their phone's. Restaurants that choose to be enhanced would also pick their own custom tracking image for the AR, most likely their placemats or something else on their tables. Customers would then be able to see what certain dishes would look like right on their tables.
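To give a concrete picture of how that tracking-image idea works in practice, here is a minimal sketch of the kind of Unity script that could drive it, following the classic Vuforia image-target event pattern from around that era. The names (DishViewer, dishModel) are hypothetical for illustration, not the actual project code.

```csharp
using UnityEngine;
using Vuforia;

// Hypothetical sketch: shows or hides a dish's 3D model whenever Vuforia
// finds or loses the restaurant's custom tracking image (e.g. a placemat).
public class DishViewer : MonoBehaviour, ITrackableEventHandler
{
    public GameObject dishModel; // the 3D model of the dish to display

    private TrackableBehaviour trackable;

    void Start()
    {
        // This script sits on the ImageTarget object in the Unity scene.
        trackable = GetComponent<TrackableBehaviour>();
        if (trackable != null)
            trackable.RegisterTrackableEventHandler(this);

        dishModel.SetActive(false); // hidden until the placemat is detected
    }

    public void OnTrackableStateChanged(
        TrackableBehaviour.Status previousStatus,
        TrackableBehaviour.Status newStatus)
    {
        bool visible = newStatus == TrackableBehaviour.Status.DETECTED ||
                       newStatus == TrackableBehaviour.Status.TRACKED ||
                       newStatus == TrackableBehaviour.Status.EXTENDED_TRACKED;

        // The dish appears "on the table" for as long as the image is tracked.
        dishModel.SetActive(visible);
    }
}
```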

The Process

The Initial Plan

The vision I had for this app in the long term was for it to become a standard: an app that everyone has on their phone as a "common utility", like Shazam for example. Regarding the restaurant-client side of the business, my team was ready to make a separate version of the app as well (like Canvas has for professors) that would let restaurants scan dishes directly into the digital menu, but we ended up just using pre-made assets for demonstration purposes. I was also questioning the whole time whether this is worth it; whether it's worth restaurants maintaining this second menu, on top of going through an overhaul of a setup, just for the convenience of getting expectations right. I think it's right on the line, but ultimately I decided that it's worth it.

The Approach

We identified the problem that people aren't always satisfied with the food they get when a restaurant's menu is mainly word-based. We also conducted a design procedure similar to what was taught in B IMD 250, except that in my opinion this project focused more on the "back-end" side, like the behavior of the app behind the interface, as opposed to the Canvas Revamp project, where we worked on the user's experience of interacting with only the first layer of the interface.

The full documentation of the project can be found here, but in short, we collected our numbers using surveys, interviews, and secondary research. We directly interviewed two local restaurants: Go Curry and Bariuma Ramen. We found that 41.9% of our respondents make their decisions based on menu images in restaurants, that 87.1% eat out a few times a week, and that about 25% of the food they order doesn't match the menu's descriptions or images. On top of getting data about menus themselves, we also went out to ask people (in Jakarta, Indonesia, which is where I went to high school) about their familiarity with AR. About 63% of the people we asked were familiar with the concept of AR, which helps support the idea that this solution has good potential to be effective where it was intended to be used.

The tools we used to make this app possible were Unity (for the 3D models and the interface itself), integrated with Vuforia (the AR tracking platform), Visual Studio (for writing the app's behaviors in C#), and Android Studio (not my part, but I'm guessing it handled building the app for Android).
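For a sense of how a restaurant's digital menu could be represented inside Unity, here is a small sketch using a ScriptableObject asset per dish; again, the names (MenuEntry and its fields) are hypothetical and not taken from the project.

```csharp
using UnityEngine;

// Hypothetical sketch of one entry in a restaurant's AR menu:
// each dish pairs its listing details with the 3D model shown over the table.
[CreateAssetMenu(menuName = "AugmentEat/Menu Entry")]
public class MenuEntry : ScriptableObject
{
    public string dishName;
    [TextArea] public string description;
    public int priceInCents;       // store money as an integer to avoid float rounding
    public GameObject dishPrefab;  // the 3D model a viewer script would activate
}
```

A restaurant maintaining its menu would then amount to adding, editing, or removing these assets and their associated models, which is roughly the "second menu" burden I was weighing above.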

The End Result

The app can be downloaded for Android devices here.

Here is the short demonstration video made for the app:

The Reflection

What I Learned

Making even a simple application takes a lot. In practice, multiple tools and platforms are always going to be involved, and avoiding working with many of them at once to achieve a single goal isn't an option.

My Next Steps

We were all still learning as we made the app, so the whole system was very funky and fragile. The next step would be to make a second iteration with a much more professional and rigorous look and behavior. I might just ask my team if they'd like to continue development, as I'm still in close contact with them.