We follow our first What’s New post with a couple more tools that we feel have potential for eLearning and related areas – by putting the artist in VR.
Tilt Brush and Quill
Tilt Brush, Google’s 3D painting virtual reality app, has been around for almost a year. Now, with the increasing focus on VR, Tilt Brush and other similar tools could significantly influence the type of immersive learning experiences we create.
Tilt Brush was first released in April 2016 exclusively for the HTC Vive. Last month, Google released a version for the Oculus Rift/Touch. The tool lets you draw and paint in virtual reality itself, using the entire space you’re in as your canvas, and then walk around and inside what you’ve drawn. You can say goodbye to paintbrushes and pen tablets and use your hands to create brushstrokes. It’s no longer just the viewers or the learners who are immersed in a virtual world, but the artists themselves.
Interestingly, this is the first app that Google has made available on Facebook’s VR platform. Not that Oculus is far behind, mind you. Oculus’ Story Studio has its own VR painting app, called Quill, whose beta version was released in December.
Both Tilt Brush and Quill were winners at the 2017 Lumière Awards. Tilt Brush won the award for Best VR Experience, while Quill won the award for Best VR Animated Experience for Dear Angelica, an emotional story of a young girl’s memories of her mother.
When it comes to eLearning, we wonder:
- Will it be easier and more efficient to create VR training experiences using tools like Tilt Brush or Quill, especially where there’s a need for more creative and conceptual graphics and 100% geometric/engineering-type accuracy is not required?
- How can these tools help us create even more compelling visual storytelling experiences, where one gets pulled not just into virtual spaces and objects but into emotions themselves?
- How will this change the types of games we can create? Take Paulo’s Wing, for example, a VR game released on Steam in February this year. All the artwork and assets for Paulo’s Wing were created in VR using Tilt Brush, and then brought into Unity, resulting in an intense game experience in a vibrant world.
- A tool like Tilt Brush can be used to create Mixed Reality experiences as well. How can we use it to design Mixed Reality learning experiences, and with what level of precision?
Given that VR is really expected to pick up in the L&D space this year, it will be interesting to see what designers create with the medium from within the medium.
We also picked out a few tools/technologies from Peter Smart’s talk, “The Future of the Web and How to Prepare for it Now”, that we thought could be put to good use as learning solution features or components.
Beyond Verbal – Exposing Feelings and Thoughts Behind the Words
Beyond Verbal offers an API that can enable devices and applications to understand and analyze emotions based on vocal intonations. From a training perspective, something like this could be used for soft skills or management and leadership topics, for example.
Imagine a customer service scenario where the learner has to select the right response when interacting with a dissatisfied customer. Now take that one step further and have the learner actually speak their selected response. We can then evaluate what type of emotion comes out through the verbalized response. Does it support or contradict the words? Is it likely to further frustrate the customer?
Or perhaps there’s an HR training on how to conduct performance reviews. You need to provide some constructive feedback… firmly. But your tone shouldn’t be furious or rude or even dismissive. Something like Beyond Verbal could help analyze the tone of each learner’s response and provide feedback accordingly.
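As a rough sketch of how such tone-based feedback might be wired into a scenario, here is a minimal example. The emotion labels, intensity scale, and thresholds are assumptions for illustration only; Beyond Verbal’s actual API returns its own analysis format.

```python
# Hypothetical sketch: turn a voice-analysis result into coaching feedback.
# The emotion labels and 0.0-1.0 intensity scale are illustrative
# assumptions, not Beyond Verbal's real output format.

def tone_feedback(emotion: str, intensity: float) -> str:
    """Return coaching feedback for a learner's spoken response.

    emotion   -- dominant emotion label detected in the learner's voice
    intensity -- detected intensity of that emotion, from 0.0 to 1.0
    """
    if emotion == "anger" and intensity > 0.5:
        return "Your tone sounded angry; try a calmer, firmer delivery."
    if emotion == "dismissive":
        return "Your tone came across as dismissive; acknowledge the concern first."
    if emotion in ("calm", "empathetic"):
        return "Good: your tone supports your words."
    return "Tone unclear; try speaking the response again."

print(tone_feedback("anger", 0.8))
```

In a real course, the emotion and intensity values would come from the vendor’s analysis of the learner’s recorded response; the branching logic above is the part the instructional designer controls.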
Kairos and Microsoft Cognitive Services – Recognizing Faces, Measuring Attention, Changing Font Size Dynamically
Kairos offers an API and an SDK that can allow you to detect, identify, and verify faces. Microsoft Cognitive Services includes a Face API that allows you to verify whether two photos are of the same person. Features like this could be useful for learner authentication, not just from a security viewpoint but even for something like assessments. You could have the learner take a selfie before starting the assessment, and compare it with a file photo. So you’d be able to make sure it’s really the learner taking the assessment, and not someone else!
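The gating logic for such a check might look something like the sketch below. The `verify_faces` callable stands in for a call to a face-verification service (such as the Face API’s verify operation or Kairos); its name, signature, and the acceptance threshold are illustrative assumptions, not either vendor’s actual interface.

```python
# Hypothetical sketch of gating an assessment on face verification.
# verify_faces is a stand-in for a real verification service call;
# its signature and the 0.7 threshold are assumptions for illustration.

def may_start_assessment(selfie_id: str, file_photo_id: str,
                         verify_faces) -> bool:
    """Allow the assessment only if the selfie matches the photo on file.

    verify_faces -- callable taking two image identifiers and returning
                    a similarity confidence between 0.0 and 1.0
    """
    confidence = verify_faces(selfie_id, file_photo_id)
    return confidence >= 0.7  # assumed acceptance threshold

# Example with a stubbed verification service:
print(may_start_assessment("selfie.jpg", "hr_photo.jpg",
                           lambda a, b: 0.92))
```

Keeping the threshold in one place like this also makes it easy to tune how strict the check should be for a given assessment.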
Kairos also allows you to measure attention through their “tracking” feature. This tracks how long someone remained in front of the camera, how many times they looked at and then away from the camera, and what percentage of the time they looked at the camera. Where there are stringent requirements, such a feature could possibly help ensure that learners don’t refer to other materials when taking an assessment.
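To make the metrics concrete, here is a small sketch of summarizing raw gaze data into the numbers described above. The event format (duration, looking-at-camera pairs) is an assumption for illustration; Kairos’s tracking feature reports its own fields.

```python
# Sketch of turning gaze events into attention metrics. The event
# format is an illustrative assumption, not Kairos's actual output.

def attention_summary(events):
    """Summarize gaze events, given as (seconds, looking_at_camera) pairs."""
    total = sum(sec for sec, _ in events)
    looking = sum(sec for sec, at_cam in events if at_cam)
    # Count transitions from looking at the camera to looking away
    look_aways = sum(
        1 for (_, a), (_, b) in zip(events, events[1:]) if a and not b
    )
    pct = 100 * looking / total if total else 0.0
    return {"total_seconds": total,
            "percent_looking": round(pct, 1),
            "look_aways": look_aways}

# 30s looking, 5s away, 25s looking again:
print(attention_summary([(30, True), (5, False), (25, True)]))
```

A proctoring rule could then flag an attempt when, say, `percent_looking` falls below some agreed floor or `look_aways` exceeds a limit.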
Another possible application of face recognition could be to implement responsive type. Here’s an experiment conducted by Marko Dugonjić as far back as 2013 that shows how font size can be adjusted based on distance from the screen. With APIs/SDKs like the ones developed by Kairos and Microsoft, could responsive type now become a lot easier to implement?
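The core of the responsive-type idea can be sketched independently of any particular face-detection API: scale the font size with the viewer’s estimated distance from the screen, clamped to a readable range. The distance estimate would come from a service like Kairos or Microsoft Cognitive Services (or a browser-side library); the baseline values below are assumptions for illustration.

```python
# Sketch of responsive type: scale font size with the viewer's estimated
# distance from the screen. The baseline distance, base size, and clamp
# values are illustrative assumptions.

def responsive_font_px(distance_cm: float,
                       base_px: float = 16.0,
                       base_distance_cm: float = 50.0,
                       min_px: float = 12.0,
                       max_px: float = 40.0) -> float:
    """Scale font size linearly with viewing distance, clamped to a range."""
    size = base_px * (distance_cm / base_distance_cm)
    return max(min_px, min(max_px, size))

print(responsive_font_px(50))   # at the baseline distance -> 16.0 px
print(responsive_font_px(100))  # twice as far away -> 32.0 px
print(responsive_font_px(300))  # clamped at the maximum -> 40.0 px
```

In a web course, the computed value would simply be written to the page’s root `font-size`, so the rest of the layout scales with it.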