The goal for this series of small projects was to learn as much as possible about creating convincing AR experiences with Unity and C#. In the process I gained a heightened understanding of best practices and limitations, as well as how I can optimize my models to perform better on mobile platforms.
The pieces below each represent different techniques explored and lessons learned. I have a number of new ideas in the works that build off this knowledge to create experiences with a deeper focus on interaction.
To take my first step into the world of AR, this experiment used image tracking to bring a piece of visual art to life with Unity's AR Foundation SDK.
I created the visual art with patch-based scripting in TouchDesigner and used the first frame as the image target.
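The core of an image-tracking setup like this is reacting to AR Foundation's tracked-image events. The sketch below is a minimal, illustrative version (the prefab and class names are my own, not from this project), assuming an ARTrackedImageManager component with the first frame of the art configured in its reference image library:

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;

// Illustrative sketch: spawn animated art on top of a tracked image.
// Assumes an ARTrackedImageManager on the same GameObject and a prefab
// assigned in the Inspector.
public class ImageTargetSpawner : MonoBehaviour
{
    [SerializeField] GameObject animatedArtPrefab;
    ARTrackedImageManager m_Manager;

    void Awake() => m_Manager = GetComponent<ARTrackedImageManager>();
    void OnEnable() => m_Manager.trackedImagesChanged += OnChanged;
    void OnDisable() => m_Manager.trackedImagesChanged -= OnChanged;

    void OnChanged(ARTrackedImagesChangedEventArgs args)
    {
        foreach (var trackedImage in args.added)
        {
            // Parent the art to the tracked image so it follows the print.
            Instantiate(animatedArtPrefab, trackedImage.transform);
        }
    }
}
```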
I'm excited for the possibilities this brings to animate art and allow people to interact with it in new ways.
Moving to 3D, my goal was to work with plane tracking to place objects in the world around me. I worked with a simple cube to maintain focus on AR techniques rather than modeling.
Using components from AR Foundation, I set up an AR plane tracker that could detect surfaces around me through my phone's camera. From there, I referenced a C# script that allowed me to place objects on these surfaces using raycasting.
I also experimented with light estimation, using information from my phone's camera to accurately light the object based on its real-world surroundings.
Expanding on the previous build, I chose a more complex form as I was interested in diving deeper into reflections.
Similar to the environmental light estimation, I was also able to use information from my phone's camera to create a reflection map true to the object's surroundings.
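AR Foundation exposes this through environment probes, which generate cubemaps from the camera feed that reflective materials then sample like ordinary reflection probes. The setup is mostly configuration; a sketch (assuming AR Foundation 4.x property names) might look like:

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;

// Sketch: let the platform generate environment cubemaps automatically.
// Assumed to live alongside the session origin; reflective materials in
// the scene pick up the resulting probes with no extra code.
public class ReflectionSetup : MonoBehaviour
{
    void Start()
    {
        var probes = GetComponent<AREnvironmentProbeManager>();
        probes.automaticPlacementRequested = true;
    }
}
```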
To push the illusion further, I also experimented with human occlusion, which allowed me to place my hand between the object and the camera.
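Human occlusion is likewise mostly configuration: AR Foundation's AROcclusionManager supplies a human depth/stencil texture that the AR background shader uses to composite hands in front of virtual objects. A sketch of that setup, assuming AR Foundation 4.x property names:

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

// Sketch: request the best people-occlusion quality the device supports.
// Assumed to live on the AR camera next to the ARCameraManager.
public class OcclusionSetup : MonoBehaviour
{
    void Start()
    {
        var occlusion = GetComponent<AROcclusionManager>();
        occlusion.requestedHumanDepthMode = HumanSegmentationDepthMode.Best;
        occlusion.requestedHumanStencilMode = HumanSegmentationStencilMode.Best;
    }
}
```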
Building on concepts from my last two experiments, I moved on to add more interaction opportunities to the experience.
Using a Unity asset called Lean Touch, I implemented the ability to rotate and resize the object using gestures on my phone's touch screen.
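Lean Touch provides this through drop-in components, so little code is required. For illustration only, a hand-rolled equivalent of its two-finger pinch and twist behaviour (not how Lean Touch is implemented) could look like this, attached to the placed object:

```csharp
using UnityEngine;

// Minimal hand-rolled two-finger scale and rotate, sketching what
// Lean Touch's pinch/twist components provide out of the box.
public class PinchTwist : MonoBehaviour
{
    float m_PrevDistance;
    float m_PrevAngle;

    void Update()
    {
        if (Input.touchCount != 2) return;
        Vector2 a = Input.GetTouch(0).position;
        Vector2 b = Input.GetTouch(1).position;
        float distance = Vector2.Distance(a, b);
        float angle = Mathf.Atan2(b.y - a.y, b.x - a.x) * Mathf.Rad2Deg;

        if (Input.GetTouch(1).phase == TouchPhase.Began)
        {
            // Reset baselines when the second finger lands.
            m_PrevDistance = distance;
            m_PrevAngle = angle;
            return;
        }

        // Scale by the change in finger separation; rotate about world up
        // by the change in the angle between the fingers.
        transform.localScale *= distance / m_PrevDistance;
        transform.Rotate(Vector3.up, m_PrevAngle - angle, Space.World);

        m_PrevDistance = distance;
        m_PrevAngle = angle;
    }
}
```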
With these new possibilities, I brought the experience outside to test it at a larger scale. This taught me the importance of the camera's clipping settings, particularly the far plane, when creating experiences at this size.
Moving on from these early tests, I wanted to work with a realistic application to create more meaningful interactions.
I decided to use image tracking to place an old architecture model on its site plan, displayed on my iPad.
Beyond the basic image tracking functionality, I developed a C# script that allows a user to move a cross section through the model by dragging their finger across the screen.
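The usual pattern for this kind of interaction is to map the drag delta to a clipping height that a shader uses to discard fragments above it. The sketch below is my own hypothetical reconstruction, not the project's script; "_SectionHeight" is an assumed shader property and the sensitivity value is illustrative:

```csharp
using UnityEngine;

// Sketch: vertical finger drags move a cross-section plane through the
// model by driving a clipping value in the model's material.
public class SectionSlider : MonoBehaviour
{
    [SerializeField] Material sectionMaterial;
    [SerializeField] float sensitivity = 0.002f; // metres per screen pixel
    float m_Height;

    void Update()
    {
        if (Input.touchCount != 1) return;
        var touch = Input.GetTouch(0);
        if (touch.phase != TouchPhase.Moved) return;

        // Map the drag distance to a section height and push it to the
        // shader, which clips fragments beyond the plane, e.g. with
        // clip(_SectionHeight - worldPos.y).
        m_Height += touch.deltaPosition.y * sensitivity;
        sectionMaterial.SetFloat("_SectionHeight", m_Height);
    }
}
```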
The result helps users understand new sectional relationships in the design by allowing them to interact with the model in real-time.
If you want to learn more about the process, head over to the Urban Canyon project, where I go through it in greater detail towards the bottom of the page.
I'm grateful for the progress I've made so far developing my extended reality skillset. These early projects have provided a solid foundation and will allow me to push toward my goals with confidence.
I'm currently working on creating custom XR interactions with a focus on music. So far, I've developed a VR prototype with lighting controls and audio-reactive shaders and geometry; however, my long-term goal is to move these concepts into AR for live musical performances.