Selfies were a thing, and they're obviously still a thing today. It all started in 2004, when the #selfie hashtag got popular on Flickr, and it has only gotten bigger ever since. But it was not until the iPhone 4 came out with its front camera in 2010 that selfies turned into a full-blown hype and lifestyle.
You can't deny the power of selfies. In fact, according to Google, around 93 million selfies are taken per day on Android devices alone. It's impressive how people get sucked into the culture, and some even become addicted to it.
Taking advantage of the trend, Snapchat brought the selfie experience to a whole new level. In 2015, Snapchat acquired a Ukrainian company, Looksery, to inject technology that would elevate the users' experience.
Later on, Instagram and Facebook followed suit and adopted the technology in 2017. That's how a filter can transform you into a cute dog, a bunny, or something far more absurd.
At this point, you're probably wondering, "What's this super awesome technology that can be this fun and cool?"
The answer is simple: augmented reality makes it all work. To make it easier for you, we'll break down the whole process.
The process behind the screen
Although it seems like the filter appears on your face instantly, it actually takes quite a few steps to identify your face.
Your first question will probably be, "How can the system identify my face?"
Here's the thing: all human faces share common features. For example, your eye regions look darker than your upper cheeks, and your nose bridge appears brighter than your eyes.
Using the Viola-Jones object detection framework, the system tries to match the contrasts between light and dark pixel regions of your face against the face model it has in the system.
However, keep in mind that while the system scans your face, you can't tilt your head or turn the camera to the side. This detector can only reliably find a full frontal face.
And once those regions all match, the system has successfully detected your face.
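The core trick Viola-Jones relies on can be sketched in a few lines. The rough idea (a toy illustration with made-up function names, not Snapchat's actual code) is: build an integral image so any rectangular region can be summed in constant time, then compute a Haar-like feature that compares a dark region (like the eyes) against a bright one just below it (like the cheeks).

```python
# Toy sketch of the Viola-Jones building blocks: an integral image for
# constant-time rectangle sums, and a two-rectangle Haar-like feature that
# measures "bright below, dark above" contrast. Names are illustrative.

def integral_image(img):
    """Build a summed-area table: ii[y][x] = sum of img[0..y][0..x]."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y][x] = row_sum + (ii[y - 1][x] if y > 0 else 0)
    return ii

def rect_sum(ii, top, left, bottom, right):
    """Sum of pixels in the inclusive rectangle, using only 4 lookups."""
    total = ii[bottom][right]
    if top > 0:
        total -= ii[top - 1][right]
    if left > 0:
        total -= ii[bottom][left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1][left - 1]
    return total

def two_rect_feature(ii, top, left, height, width):
    """Haar-like feature: bright lower half minus dark upper half.
    A large positive value hints at eyes (dark) above cheeks (bright)."""
    mid = top + height // 2
    upper = rect_sum(ii, top, left, mid - 1, left + width - 1)
    lower = rect_sum(ii, mid, left, top + height - 1, left + width - 1)
    return lower - upper

# Toy 4x4 "image": a dark band (eyes) on top, a bright band (cheeks) below.
img = [
    [10, 10, 10, 10],
    [10, 10, 10, 10],
    [200, 200, 200, 200],
    [200, 200, 200, 200],
]
ii = integral_image(img)
print(two_rect_feature(ii, 0, 0, 4, 4))  # 1520: strongly positive, face-like
```

A real detector evaluates thousands of such features at every window position and scale, chained into a cascade so non-face regions are rejected cheaply; this sketch only shows why the light/dark contrast trick is so fast.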
So... that's it? Nope, we still have more steps to go.
You see, when you take a selfie with the dog filter on Snapchat, your nose turns into a dog's nose. On top of that, dog ears appear on the upper part of your head, and a dog tongue pops out when you open your mouth.
To make those things work, we need another system that can locate the exact positions of your facial features.
The active shape model (ASM) makes this possible. The system already has a face model whose landmark points were marked manually by humans, and it looks for the borders and features of your face.
Once the system scans your face, it tries to map your face and match it against that model. Your face might not be a 100% match, but that doesn't matter much.
The model has templates for specific patterns of facial features, such as what the top of your mouth looks like. Based on those templates, it searches your face for the matching patterns.
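That "search for the matching pattern" step can be illustrated with the simplest possible version: slide a small template over the image and keep the spot where it fits best. This is a toy sketch, not Snapchat's implementation; real ASMs search along each landmark's normal with statistically learned profiles.

```python
# Toy template matching: slide a template over an image and return the
# position with the lowest sum of squared differences (SSD), i.e. the spot
# where the stored pattern fits the pixels best.

def best_match(img, template):
    """Return (row, col) of the window that best matches the template."""
    ih, iw = len(img), len(img[0])
    th, tw = len(template), len(template[0])
    best_pos, best_ssd = None, float("inf")
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            ssd = sum(
                (img[y + dy][x + dx] - template[dy][dx]) ** 2
                for dy in range(th)
                for dx in range(tw)
            )
            if ssd < best_ssd:
                best_ssd, best_pos = ssd, (y, x)
    return best_pos

# Made-up "mouth corner" pattern: a dark pixel flanked by bright ones.
template = [[200, 10, 200]]
img = [
    [50, 50, 50, 50, 50],
    [50, 200, 10, 200, 50],
    [50, 50, 50, 50, 50],
]
print(best_match(img, template))  # (1, 1): the pattern sits exactly there
```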
In more detail, ASM also locates additional points on your face to create a 3D mask that the system can scale, rotate, and move.
After that, the system adapts the 3D mask to your facial features. And that explains how a rainbow can pour out when you open your mouth, your eye color can change when you blink, or accessories can suddenly appear on your face.
The bottom line...
In the end, we never know how far AR can take our selfie experience, especially with technology that never stops evolving. One thing is for sure, though: our team is looking forward to seeing how it turns out. So, let's check back in 5... or maybe 10 years?
At Assemblr, we believe that all people – including you – can create and present their own AR experiences. With an easy-to-use application, Assemblr empowers people to implement AR on any occasion. Interested in unveiling more possibilities of AR? Download Assemblr now, available on the App Store and Play Store!