Computer Vision + AR Android App: Emojify


Role: Computer Vision Engineer
Collaborated with Android developer Mikael von Pfaler
Technical Director: Igal Nassima
Studio: SUPERBRIGHT
Made in NY
Client: AT&T

A native Android app that takes a selfie photo, processes it through a multi-pass algorithm, and outputs an emoji painting.

The algorithm blends modern computer vision techniques with my portrait-painting practice, and serves as an example of how an art practice and technology can be combined seamlessly to create something unique.

Table of Contents:

  • Final Application Output Showcase
  • Technical
    • Combined methods to get the face & hair contour
    • Combined approach to draw key facial features for a better hand-painted look
  • Development Log (Chronological)

Final Application Output Showcase


Pass 1: Use Emoji to replace pixels based on color

This pass carves out the basic face shape, but leaves no detail on key facial features such as the eyes and mouth.
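The core idea of this pass can be sketched as follows: tile the selfie into a grid, average each cell's color, and pick the emoji whose average color is closest in RGB space. This is a minimal illustration, not the app's actual implementation; the palette values and the sample cell are made-up stand-ins.

```python
# Sketch of Pass 1 (assumed approach): replace each grid cell of the
# photo with the emoji whose average color is nearest in RGB space.

def avg_color(pixels):
    """Average an iterable of (r, g, b) tuples."""
    n = len(pixels)
    return tuple(sum(p[i] for p in pixels) // n for i in range(3))

def nearest_emoji(cell_color, palette):
    """Pick the emoji whose precomputed average color is nearest to cell_color."""
    def dist2(c):
        return sum((a - b) ** 2 for a, b in zip(cell_color, c))
    return min(palette, key=lambda e: dist2(palette[e]))

# Toy palette: emoji name -> assumed average RGB of its glyph.
palette = {
    "yellow_face": (250, 200, 60),
    "red_heart": (220, 40, 40),
    "blue_drop": (70, 130, 230),
}

# One 2x2 "cell" of a pretend selfie, mostly warm skin tones.
cell = [(245, 195, 70), (250, 205, 55), (240, 190, 65), (255, 210, 60)]
print(nearest_emoji(avg_color(cell), palette))  # prints "yellow_face"
```

In practice the comparison would be done per cell over the whole image, and a perceptual color space (e.g. Lab) would match colors better than raw RGB, but the nearest-color lookup above is the essential step.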

Not every emoji is used to generate the painting: I removed the emojis that do not look good on a person's face, then used a custom-written tool to check whether the remaining selection has a decent color distribution.
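One way to check "decent color distribution" is to measure how far the worst-covered reference color (e.g. common skin tones) sits from its nearest emoji color. The sketch below illustrates that idea only; the tone values, palettes, and the plain RGB metric are assumptions, not the tool's real data or logic.

```python
# Sketch of a palette-coverage check (assumed approach): a large worst-case
# gap means some face region has no emoji that matches its color well.

def coverage_gap(palette_colors, reference_colors):
    """Worst-case RGB distance from any reference color to its nearest emoji color."""
    def nearest_dist(ref):
        return min(sum((a - b) ** 2 for a, b in zip(ref, c)) ** 0.5
                   for c in palette_colors)
    return max(nearest_dist(ref) for ref in reference_colors)

# Rough stand-ins for light, medium, and dark skin tones.
skin_tones = [(255, 224, 189), (198, 134, 66), (141, 85, 36)]

good_palette = [(250, 220, 185), (200, 140, 70), (140, 90, 40)]
bad_palette = [(250, 220, 185)]  # covers only the lightest tone

print(coverage_gap(good_palette, skin_tones) < coverage_gap(bad_palette, skin_tones))
# prints True: the broader palette has a much smaller coverage gap
```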

Pass 2: Combined approach to draw key facial features for a better hand-painted look

Different drawing techniques are used for the eyebrows, nose, and mouth to highlight their 3D structure:
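The per-feature treatment can be pictured as a dispatch from detected landmark groups to stroke styles. Everything in this sketch is hypothetical: the feature names, style labels, and landmark points stand in for whatever the app's face detector and renderer actually use.

```python
# Sketch of Pass 2's idea (assumed): each facial feature gets its own
# stroke treatment so its 3D structure reads better when painted.

FEATURE_STYLES = {
    # feature: (stroke technique, rough rationale) -- illustrative values
    "eyebrow": ("directional hatching", "follows hair growth direction"),
    "nose": ("one-sided shading", "darkens the shadow side to suggest volume"),
    "mouth": ("contour strokes", "traces the lip outline with varying weight"),
}

def plan_strokes(landmarks):
    """Map detected landmark groups to per-feature stroke plans for the renderer."""
    plans = []
    for feature, points in landmarks.items():
        style, _ = FEATURE_STYLES.get(feature, ("default outline", ""))
        plans.append({"feature": feature, "style": style, "points": points})
    return plans

# Toy landmark points standing in for a face detector's output.
landmarks = {"eyebrow": [(10, 5), (20, 4)], "mouth": [(15, 30), (18, 32)]}
for plan in plan_strokes(landmarks):
    print(plan["feature"], "->", plan["style"])
```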

Development Log / “How I got the best result step by step”