People of ACM - Juan Miguel de Joya

July 26, 2016

While you were a member of the ACM SIGGRAPH student chapter at UC Berkeley, you and a few of your fellow students developed Mindscape VR, an immersive virtual reality environment where users could change their surroundings and move objects simply by thinking about them. Can you tell us a little about how this brain wave technology works?

Mindscape VR was developed as part of the 2015 Cognitive Technology exhibit at the Exploratorium as a proof-of-concept brain-computer interface that reads users’ brain waves as input data to determine conditions and interactions in a virtual environment. Our goal was to explore alternative methods of interaction in VR, since most experiences at the time used input devices such as mice, keyboards, or gamepad controllers to interface with a VR environment. Device-driven interactions introduce cognitive disparity and latency, and the learning curve of interacting with a virtual environment while wearing a head-mounted display (HMD) takes away from the immersive experience.

We used the Muse electroencephalogram (EEG) headset as a non-invasive input device that users put on before wearing their HMD. Electroencephalography is a method of monitoring and recording brain activity via electrodes placed along the scalp. These disks measure the voltage fluctuations resulting from ionic current flows within the neurons of the brain, with each EEG signal representing the voltage difference between a pair of scalp electrodes. EEG has traditionally been used to monitor and diagnose brain disorders, seizures, and comas, and we found that the Muse headset could pick up a range of frequencies that correlate with whether a subject is in a concentrated or relaxed state. Using this data, we gave users of Mindscape VR measured control over their environment, allowing them to levitate and collect pebbles, change the time of day, and shoot fireballs by concentrating.
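Schematically, this kind of pipeline reduces to estimating band power from the raw signal and thresholding it. The Python sketch below is purely illustrative rather than the Mindscape VR code; the sampling rate, frequency bands, and threshold are assumed values.

```python
# Illustrative sketch (not the Mindscape VR source): mapping EEG band power
# to a "concentrated" vs. "relaxed" state, as a Muse-style headset allows.
# The sample rate, bands, and threshold below are assumptions.
import numpy as np

SAMPLE_RATE_HZ = 256  # assumed EEG sampling rate

def band_power(samples: np.ndarray, low_hz: float, high_hz: float) -> float:
    """Average spectral power of one EEG channel within a frequency band."""
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / SAMPLE_RATE_HZ)
    mask = (freqs >= low_hz) & (freqs < high_hz)
    return float(spectrum[mask].mean())

def concentration_score(samples: np.ndarray) -> float:
    """Ratio of beta power (13-30 Hz, active thinking) to alpha (8-13 Hz, relaxed)."""
    beta = band_power(samples, 13.0, 30.0)
    alpha = band_power(samples, 8.0, 13.0)
    return beta / (alpha + 1e-9)

# Simulated one-second buffer of raw EEG; a game loop would poll the headset
# each frame and trigger an interaction once the score crosses a threshold.
buffer = np.random.randn(SAMPLE_RATE_HZ)
if concentration_score(buffer) > 1.5:  # threshold tuned per user in practice
    print("concentrated: levitate the pebble")
else:
    print("relaxed: let the pebble rest")
```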

What is the most exciting aspect of being a Technical Director Resident at Pixar Animation Studios?

Outside of the projects I’ve worked on, one of the most exciting and fulfilling aspects of being at the studio is that I get to work with and learn from people who are equally passionate about their work, our community at Pixar, and the greater community outside of the studio. One thing I have consistently found about myself is that I enjoy collaborating with others and making friends, and working at Pixar allows me to contribute to and be part of a wide range of groups and activities, whether through a given production I’m part of, intracompany extracurricular initiatives such as serving on our company’s AIDS Walk planning committee, or simply working out with friends, having breakfast, or playing Dungeons & Dragons. Pixar has also supported my involvement in the SIGGRAPH conferences, which has enabled me to meet and learn from developers outside the animation industry and to continue my volunteer efforts. I can’t say I have ever had the opportunity to work at a company that lets me engage all my interests in such a holistic manner, and I am incredibly grateful to be a resident at the studio.

You have said that your primary work has been in physics simulations and animation. What are some examples of physics simulations you have worked on? What did you have to learn about physics to make your simulations realistic?

In the past I was involved with James F. O’Brien’s group in the Visual Computing Lab at the University of California, Berkeley, which is where most of my work and interest in physics simulation and animation stems from. One of the publications I’m proud to have been involved with is our 2014 paper “Adaptive Tearing and Cracking of Thin Sheets,” which proposes a method for adaptive fracture propagation using local projecting remeshing and a sub-stepping fracture process to ensure stability during remeshing. With this method we were able to reproduce a wide range of materials with different fracture behaviors, and it was well received by community members at SIGGRAPH.
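The sub-stepping idea can be illustrated with a toy example: subdividing each time step bounds how far a crack can advance before the mesh is rebuilt locally, which keeps the remeshing stable. The Python sketch below is a simplified stand-in rather than the paper’s implementation; the scalar “stress,” loading rate, and release factor are all hypothetical.

```python
# Toy sketch of a sub-stepped fracture pass (illustrative only; not the
# paper's code). A single scalar "stress" stands in for the full mesh state;
# the point is the control flow: load, check the fracture criterion, and
# "remesh" within each sub-step rather than once per full time step.
FRACTURE_THRESHOLD = 1.0

def load(stress: float, h: float) -> float:
    """Stand-in dynamics: external loading raises stress over a step of size h."""
    return stress + 2.5 * h

def release(stress: float) -> float:
    """Stand-in for advancing the crack tip and remeshing locally."""
    return 0.4 * stress

def advance(stress: float, dt: float, substeps: int = 4) -> float:
    """Advance by dt in `substeps` increments, remeshing after each fracture."""
    h = dt / substeps
    for i in range(substeps):
        stress = load(stress, h)
        if stress > FRACTURE_THRESHOLD:
            print(f"sub-step {i}: crack advances, local remesh")
            stress = release(stress)
    return stress

print(f"final stress: {advance(0.0, 1.0):.3f}")
```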

In terms of learning the physics, I think it’s an iterative process. I did not have a strong background in physics or math when I took my first computer graphics class. In fact, I had opted out of being an economics major and pursuing an investment banking internship to start from scratch in computer science, and I had not received any special education in either subject growing up. I certainly believe that anyone can take the first step and invest in a technical career in computer graphics by learning the math and physics fundamentals. From there, one can build knowledge in whatever areas they become interested in pursuing, staying persistent about self-education to reach their goals.

In your role as a member of the games committee for ACM SIGGRAPH 2016, can you give us a sneak peek at some of the latest innovations to be highlighted in the conference’s gaming program? What do you think is the next “big thing” in gaming?

Among the things I really enjoy about being part of the ACM SIGGRAPH 2015 and 2016 Games Committees are the opportunities to connect with and learn from the game developer community, and I sincerely believe the SIGGRAPH community continually benefits from developers’ generosity in sharing their technical achievements in real-time rendering. SIGGRAPH 2016 will continue the tradition of showcasing the most innovative work in computer graphics and interactive techniques, and this year’s Games Committee is extremely proud to have worked with the rest of the conference planning committees to ensure we have excellent content to share.

The Advances in Real-Time Rendering course remains a pillar of games-related programming at SIGGRAPH 2016. It is a two-part series dedicated to this year’s most innovative algorithms and techniques for fast, interactive rendering in video games, featuring developers from companies such as Ubisoft, Naughty Dog, Treyarch, and Activision discussing the latest in temporal anti-aliasing, precomputed global illumination, and volumetric techniques for character rendering, among other topics. Similarly, its sister course, Open Problems in Real-Time Rendering, serves as a forum for discussing the top unsolved problems in real-time rendering, the desired approaches to those problems, and the steps needed to bridge the gap between ideation and solution.

Equally exciting are the Physically Based Shading in Theory and Practice and Moving Mobile Graphics courses, which discuss state-of-the-art techniques and approaches being implemented for shading and mobile graphics, respectively. Physically based shading models have been heavily employed in both film and games, and the Physically Based Shading course showcases new research in the field alongside practical production applications and real-world examples presented by some of the top developers and engineers in the field. The Moving Mobile Graphics course similarly addresses key advances in mobile graphics, including virtual reality and GPU-accelerated image processing.

Naming the next big thing in gaming from a broader perspective, beyond the technology itself, is difficult because the medium has become increasingly interdisciplinary, and what’s interesting about it and the industry surrounding it differs depending on what a given individual wants from their experience. Mixed-reality entertainment was predicated on the potential of immersive gaming and gamified experiences, so I find it fascinating to see how developers will continue to play with spatial cognition and communication in VR and augmented reality (AR), whether through author-driven interactive films, games, simulations, or some other communicative medium entirely.

At the same time, traditional gaming has become more mechanically and narratively complex over time, with content distributed across a host of consumer electronics platforms. We now have games that are cinematic, raise philosophical and existential questions, and make pointed sociopolitical statements through empathic role-playing. We likewise have games like Pokémon Go that become cultural phenomena because they marry technology with social movements and ideas, layering players’ own subjective experiences on top of the game itself. It’s exciting to see video games and related interactive experiences become socially pervasive, and I am curious to see the kinds of self-reflection, discussion, and stories that we as a society will share as the medium continues to integrate itself into our daily lives.


Juan Miguel de Joya is a Technical Director Resident at Pixar Animation Studios and a former computer science researcher in the Visual Computing Lab at the University of California, Berkeley. With a background in the technical and aesthetic applications of computer graphics and interactive techniques, he has recently expanded his knowledge in areas including product and device design, user interfaces, and parallel computing.

De Joya began his association with ACM SIGGRAPH as a student volunteer at SIGGRAPH 2012 and later became a member of the student chapter at UC Berkeley. He has since been an active volunteer at many ACM SIGGRAPH conferences. De Joya is presently a member of the games committee for SIGGRAPH 2016, taking place July 24-28 in Anaheim, California, and is serving as co-chair of the VR Showcase at SIGGRAPH Asia 2016, December 5-8 in Macao. He is already thinking ahead to SIGGRAPH 2017, where he will once again serve on the games committee.