This is an online seminar where researchers in the electronics, information, and ICT fields in Korea and abroad can access academic information and research-related educational content free of charge.
Using a video conferencing system, it can be run in a variety of formats depending on the purpose, such as small-scale seminars and workshops.
Mixed Reality (MR), an environment that seamlessly merges the real and virtual worlds to enable real-time interaction between physical and digital objects, is expected to be a genuinely transformational technology that will change our lives with unprecedented applications. Despite its vast potential, truly immersive MR apps are yet to be developed. The core challenge lies in the unique workload of seamlessly overlaying virtual information on the physical world using resource-constrained mobile/wearable devices. While such workloads often require continuous and simultaneous execution of multiple Deep Neural Networks (DNNs) and rendering tasks, existing mobile deep learning platforms are ill-suited, as they are mostly designed to run only a single DNN. In this talk, I will introduce our recent projects to characterize the workloads of future MR apps, comprehensively understand their requirements, and develop core mobile deep learning system techniques to support them.
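The abstract characterizes the MR workload as several DNNs and a rendering task running continuously and concurrently on one device. The toy sketch below is not from the talk; the stand-in models, frame rates, and thread-per-task layout are illustrative assumptions only, meant to mimic that contention pattern with PyTorch.

# Toy sketch (illustrative, not the speaker's system): multiple DNNs plus a
# render loop running continuously and concurrently on one device.
import threading
import time
import torch
import torch.nn as nn

DEVICE = "cuda" if torch.cuda.is_available() else "cpu"

# Stand-in "detector" and "segmenter": tiny conv nets, purely illustrative.
detector = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                         nn.Linear(16, 10)).to(DEVICE).eval()
segmenter = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(8, 1, 1)).to(DEVICE).eval()

stop = threading.Event()

def dnn_loop(name, model, period_s):
    # Run one DNN repeatedly, as an MR app would on each camera frame.
    frame = torch.randn(1, 3, 224, 224, device=DEVICE)
    while not stop.is_set():
        t0 = time.perf_counter()
        with torch.no_grad():
            model(frame)
        elapsed = time.perf_counter() - t0
        print(f"{name}: inference took {elapsed * 1e3:.1f} ms")
        time.sleep(max(0.0, period_s - elapsed))

def render_loop(fps=30):
    # Placeholder for the rendering task that shares the same device.
    while not stop.is_set():
        time.sleep(1.0 / fps)  # a real app would draw a frame here

threads = [
    threading.Thread(target=dnn_loop, args=("detector", detector, 1 / 30)),
    threading.Thread(target=dnn_loop, args=("segmenter", segmenter, 1 / 15)),
    threading.Thread(target=render_loop),
]
for t in threads:
    t.start()
time.sleep(2.0)  # observe the concurrent load for ~2 seconds
stop.set()
for t in threads:
    t.join()

Even in this simplified form, the per-inference latencies drift as the tasks compete for the same processor, which is the kind of multi-DNN contention the abstract argues single-DNN mobile platforms do not handle.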
▶ Inquiries: webmaster@eiric.or.kr