Computer Vision + AR Android App: Emojify

2018

Role: Computer Vision Engineer
Collaborated with Android Developer Mikael von Pfaler
Technical Director: Igal Nassima
Studio: SUPERBRIGHT
Made in NY
Client: AT&T

A native Android app that takes in a selfie photo, processes it through a complex algorithm, and outputs an emoji painting.

The algorithm makes full use of the latest computer vision technology combined with my portrait painting practice, and serves as a strong example of how an art practice and modern technology can be blended seamlessly to create something truly unique.


Table of Contents:

  • Final Application Output Showcase
  • Technical
    • Combined methods to get the face & hair contour
    • Combined approach to draw key facial features for a better hand-painted look
  • Development Log (Chronological)

Final Application Output Showcase


Technical

Pass 1: Use Emoji to replace pixels based on color

Getting the basic face shape carved out, but with no detail on key facial features such as the eyes and mouth.
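As a rough illustration of this pass, the sketch below averages the color of each small block of pixels and picks the emoji whose average color is closest in RGB space. The `Emoji` class, the `nearestEmoji()` and `emojify()` names, the cell size, and the plain Euclidean distance are assumptions made for the sketch, not the app's actual code.

```kotlin
// Minimal sketch of Pass 1, assuming each emoji in the curated set has a
// precomputed average RGB color. Names and parameters are illustrative.

import kotlin.math.sqrt

data class Emoji(val glyph: String, val r: Double, val g: Double, val b: Double)

fun nearestEmoji(r: Double, g: Double, b: Double, palette: List<Emoji>): Emoji =
    palette.minByOrNull { e ->
        val dr = e.r - r; val dg = e.g - g; val db = e.b - b
        sqrt(dr * dr + dg * dg + db * db)   // Euclidean distance in RGB
    }!!

/** Converts an ARGB pixel buffer into a grid of emoji, one per cell x cell block. */
fun emojify(pixels: IntArray, width: Int, height: Int, cell: Int, palette: List<Emoji>): List<List<String>> {
    val rows = mutableListOf<List<String>>()
    var y = 0
    while (y < height) {
        val row = mutableListOf<String>()
        var x = 0
        while (x < width) {
            // Average the color of this block of pixels.
            var r = 0.0; var g = 0.0; var b = 0.0; var n = 0
            for (yy in y until minOf(y + cell, height)) {
                for (xx in x until minOf(x + cell, width)) {
                    val p = pixels[yy * width + xx]
                    r += (p shr 16 and 0xFF); g += (p shr 8 and 0xFF); b += (p and 0xFF)
                    n++
                }
            }
            row += nearestEmoji(r / n, g / n, b / n, palette).glyph
            x += cell
        }
        rows += row
        y += cell
    }
    return rows
}
```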

Not every emoji is used for generating the painting: I removed the emoji that do not look good on a person's face, and wrote custom software to analyze whether the current emoji selection has a decent color distribution.
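The sketch below shows one simple way such an analysis could work, assuming the average color of every curated emoji is already known: bucket the palette into hue x brightness bins and count how many bins are left empty. The binning scheme and all names here are illustrative; the actual analysis tool may have measured the distribution differently.

```kotlin
// Rough sketch of a palette coverage check (desktop/JVM, not on-device).

import java.awt.Color

/** Average color of one emoji image, stored as 0-255 RGB components. */
data class EmojiColor(val glyph: String, val r: Int, val g: Int, val b: Int)

/**
 * Buckets the palette into hue x brightness bins and reports how many bins
 * are empty. Many empty bins means some skin tones or shadow values would
 * have no good emoji match.
 */
fun coverageReport(palette: List<EmojiColor>, hueBins: Int = 12, valueBins: Int = 4): String {
    val filled = HashSet<Pair<Int, Int>>()
    for (e in palette) {
        val hsb = Color.RGBtoHSB(e.r, e.g, e.b, null)
        val h = (hsb[0] * hueBins).toInt().coerceAtMost(hueBins - 1)
        val v = (hsb[2] * valueBins).toInt().coerceAtMost(valueBins - 1)
        filled += h to v
    }
    val total = hueBins * valueBins
    return "Palette fills ${filled.size}/$total hue x brightness bins (${total - filled.size} gaps)"
}

fun main() {
    // Tiny illustrative palette: a yellow face, a brown circle, a red heart.
    val palette = listOf(
        EmojiColor("\uD83D\uDE00", 250, 200, 60),
        EmojiColor("\uD83D\uDFE4", 120, 80, 40),
        EmojiColor("\u2764\uFE0F", 220, 30, 40)
    )
    println(coverageReport(palette))
}
```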

Pass 2: Combined approach to draw key facial features for a better hand-painted look

Use different ways of drawing the eyebrows, nose, and mouth to highlight their 3D structure:
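The sketch below illustrates this idea, assuming facial landmark contours from a detector such as ML Kit face detection (the original 2018 build may have used a different detector) and Android's Canvas API; every Paint setting and helper name is illustrative rather than the app's actual values.

```kotlin
// Hedged sketch of Pass 2: each facial feature gets its own stroke
// treatment so the eyebrows, nose, and mouth read as 3D forms instead of
// flat emoji color.

import android.graphics.Canvas
import android.graphics.Color
import android.graphics.Paint
import android.graphics.Path
import android.graphics.PointF
import com.google.mlkit.vision.face.Face
import com.google.mlkit.vision.face.FaceContour

private fun strokePaint(color: Int, width: Float) = Paint(Paint.ANTI_ALIAS_FLAG).apply {
    style = Paint.Style.STROKE
    strokeWidth = width
    strokeCap = Paint.Cap.ROUND
    this.color = color
}

/** Connects a list of contour points into a single open path. */
private fun pathOf(points: List<PointF>): Path = Path().apply {
    points.firstOrNull()?.let { moveTo(it.x, it.y) }
    points.drop(1).forEach { lineTo(it.x, it.y) }
}

/** Draws eyebrows, nose, and mouth over the emoji mosaic with distinct strokes. */
fun drawFacialFeatures(canvas: Canvas, face: Face) {
    // Eyebrows: bold, dark strokes that anchor the eye area.
    val browPaint = strokePaint(Color.rgb(60, 40, 30), 10f)
    // Nose: a thin bridge line plus a heavier base to suggest its volume.
    val noseBridgePaint = strokePaint(Color.rgb(120, 90, 70), 4f)
    val noseBasePaint = strokePaint(Color.rgb(90, 60, 50), 8f)
    // Mouth: the darker line between the lips carries most of the expression.
    val lipPaint = strokePaint(Color.rgb(150, 60, 60), 12f)

    fun contour(type: Int): List<PointF> = face.getContour(type)?.points ?: emptyList()

    canvas.drawPath(pathOf(contour(FaceContour.LEFT_EYEBROW_TOP)), browPaint)
    canvas.drawPath(pathOf(contour(FaceContour.RIGHT_EYEBROW_TOP)), browPaint)
    canvas.drawPath(pathOf(contour(FaceContour.NOSE_BRIDGE)), noseBridgePaint)
    canvas.drawPath(pathOf(contour(FaceContour.NOSE_BOTTOM)), noseBasePaint)
    canvas.drawPath(pathOf(contour(FaceContour.UPPER_LIP_BOTTOM)), lipPaint)
    canvas.drawPath(pathOf(contour(FaceContour.LOWER_LIP_TOP)), lipPaint)
}
```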


Development Log / “How I got the best result step by step”