FoodChangeLens: CNN-based Food Transformation on HoloLens

Shu Naritomi    Ryosuke Tanno    Takumi Ege    Keiji Yanai

Department of Informatics, The University of Electro-Communications

In Proc. of International Workshop on Interface and Experience Design with AI for VR/AR (DAIVAR 2018)

Demo Movie Version 1, created by Shu Naritomi and Takumi Ege.
Demo Movie Version 2, created by Shu Naritomi and Takumi Ege.

Abstract

In this demonstration, we implemented food category transformation in mixed reality using both image generation and HoloLens. Our system overlays transformed food images onto food objects in the AR space, so that the transformation takes the real shape of the food into consideration. This system has the potential to make meals more enjoyable. In this work, we use a Conditional CycleGAN, trained on large-scale food image data collected from the Twitter stream, for food category transformation; it can transform among ten kinds of foods mutually while keeping the shape of a given food. We show a virtual meal experience with food category transformation among ten kinds of typical Japanese foods: ramen noodle, curry rice, fried rice, beef rice bowl, chilled noodle, spaghetti with meat sauce, white rice, eel bowl, and fried noodle.
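The Conditional CycleGAN described above conditions a single generator on a target food category while preserving the input's shape. A common way to implement such class conditioning (a minimal sketch of one standard scheme, not necessarily this system's exact architecture) is to tile a one-hot target-category map spatially and concatenate it to the image channels before feeding the generator:

```python
import numpy as np

# Nine of the ten food categories listed in the abstract.
FOOD_CLASSES = [
    "ramen noodle", "curry rice", "fried rice", "beef rice bowl",
    "chilled noodle", "spaghetti with meat sauce", "white rice",
    "eel bowl", "fried noodle",
]

def condition_input(image: np.ndarray, target_class: int, num_classes: int) -> np.ndarray:
    """Concatenate a spatially tiled one-hot category map to an HxWxC image.

    The generator then receives an HxWx(C + num_classes) tensor, so the same
    network can be steered toward any target food category.
    """
    h, w, _ = image.shape
    onehot = np.zeros((h, w, num_classes), dtype=image.dtype)
    onehot[:, :, target_class] = 1.0
    return np.concatenate([image, onehot], axis=-1)

# Example: condition a dummy 64x64 RGB image on "curry rice" (index 1).
dummy = np.random.rand(64, 64, 3).astype(np.float32)
conditioned = condition_input(dummy, target_class=1, num_classes=len(FOOD_CLASSES))
print(conditioned.shape)  # (64, 64, 12)
```

Because the category enters only as extra input channels, one generator covers all category pairs, which keeps the on-device model small enough for real-time use on HoloLens.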

paper thumbnail

Paper

PDF, 2018.

Citation

Shu Naritomi, Ryosuke Tanno, Takumi Ege, and Keiji Yanai. "FoodChangeLens: CNN-based Food Transformation on HoloLens", in Proc. of International Workshop on Interface and Experience Design with AI for VR/AR (DAIVAR), 2018. Bibtex





Other Applications

Food Transfer Image Museum on HoloLens

Created by Shu Naritomi.

Food Image-to-Image Translation using StarGAN

Created by Ryosuke Tanno.



Acknowledgement

This work was supported by JSPS KAKENHI Grant Numbers 15H05915, 17H01745, 17H05972, 17H06026, and 17H06100.