Virtual Reality and its uses for sensory development and rehabilitation
Hello,
My name is Robert Mitchell, and I am a third-year student at the University of Portsmouth studying Computer Games Technology.
As with any final year of an undergraduate course, I am required to research and write a dissertation. My chosen topic, as the title suggests, is ‘Virtual Reality and its uses for sensory development and rehabilitation’.
A major part of the project is to create a proof of concept that a virtual ‘sensory room’ could aid in sensory development and rehabilitation. Although I have the technical knowledge to prototype and iterate, I am not an occupational therapist (OT), and while I have a general idea about my chosen topic, I would like to invite discussion and feedback from industry professionals.
Right off the bat, I want to share a little more about the project. It is being created inside the Unity 3D engine (www.unity3d.com), using the HTC Vive system (www.vive.com/uk) as my HMD of choice, with all input coming from the Leap Motion (www.leapmotion.com) hand-tracking solution.
Over the coming months, I will be posting images, videos, and blog posts on my progress with this research project, and I would greatly appreciate any feedback and comments to help refine it.
Update 1.0: Interaction with the virtual world.
From a very early point in this project, I knew that the idea of sensory rehabilitation wasn’t just about the client; it was also about the family and/or carers.
A sensory environment should be a place where both parties can interact and learn together. With this in mind, I knew I wanted to develop the idea of people outside of VR having a window into the virtual world and a way to interact with the client.
The way I approached this is fairly simple: the client inside VR interacts with the world through Leap Motion and simple hand gestures, leaving the two Vive controllers redundant and free to be used as an alternative input.
Let’s start by tackling the window into the world. I approached this by creating a secondary camera inside Unity and telling it to render only to a secondary display (in this case a monitor). This camera was then made a child of the Vive controller, meaning wherever the controller went, the camera followed.
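For anyone curious about the Unity side, a minimal sketch of that setup could look something like the snippet below. The class and field names are placeholders, and it assumes the controller transform is already being tracked (e.g. by the SteamVR camera rig):

```csharp
using UnityEngine;

// Sketch only: a spectator camera that follows the Vive controller and
// renders to a second monitor instead of the HMD.
public class SpectatorCamera : MonoBehaviour
{
    public Transform viveController;   // tracked controller transform (assumed already in the scene)
    public Camera spectatorCam;        // camera shown on the second monitor

    void Start()
    {
        // Activate the second physical display if one is connected.
        if (Display.displays.Length > 1)
        {
            Display.displays[1].Activate();
        }

        // Render this camera to Display 2 (index 1) rather than the headset.
        spectatorCam.targetDisplay = 1;

        // Parent the camera to the controller so it follows wherever the controller goes.
        spectatorCam.transform.SetParent(viveController, false);
    }
}
```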
Great, we have an external camera! But the user needs to be able to see and interact with the world without being tethered to a monitor, so let’s talk 3D-printed mounts. I plan to print a simple mount that will clip to the ring of the Vive controller (with cutouts for the tracking sensors) and hold a phone against the controller. Then, through either the magic of UNET or a slightly more slapdash method such as TeamViewer or another desktop-streaming app, the user can see into the world through the camera attached to the controller.
A video of this is to follow when I have another person to help record the footage.
Now that the user can see, how will they interact? I have a few ideas I would like to pitch to you all, and I would love to hear your feedback and suggestions:
* A collider on the camera to allow objects such as a virtual rope to be pushed around.
* Turning the camera into a paintbrush tool to allow painting and drawing (much like Google’s Tilt Brush); a rough sketch of this idea follows below.
* Dropping 3D objects such as balls and cubes into the world for both users to play with, for example building towers or throwing the balls.
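To give a flavour of the paintbrush idea, here is a rough, untested sketch of how strokes could be recorded with a LineRenderer attached to the controller/camera object. How painting is toggled (trigger press, phone UI, etc.) is left open, and all names and values are placeholders:

```csharp
using UnityEngine;

// Sketch only: attach to the controller/camera object and call
// BeginStroke()/EndStroke() to start and stop painting a stroke.
public class ControllerPaintbrush : MonoBehaviour
{
    public Material strokeMaterial;          // material/colour of the painted stroke
    public float minPointDistance = 0.01f;   // metres moved before a new point is recorded

    private LineRenderer currentStroke;
    private Vector3 lastPoint;

    // Start a new stroke at the controller's current position.
    public void BeginStroke()
    {
        GameObject strokeObject = new GameObject("Stroke");
        currentStroke = strokeObject.AddComponent<LineRenderer>();
        currentStroke.material = strokeMaterial;
        currentStroke.startWidth = 0.01f;
        currentStroke.endWidth = 0.01f;
        currentStroke.positionCount = 0;
        lastPoint = transform.position;
    }

    // Stop appending points to the current stroke.
    public void EndStroke()
    {
        currentStroke = null;
    }

    void Update()
    {
        if (currentStroke == null) return;
        if (Vector3.Distance(transform.position, lastPoint) < minPointDistance) return;

        // Append the current position as a new point on the stroke.
        currentStroke.positionCount++;
        currentStroke.SetPosition(currentStroke.positionCount - 1, transform.position);
        lastPoint = transform.position;
    }
}
```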
Please let me know what you think of these ideas; as I implement more, I will post photos and videos to give a better sense of the application.
Update 1.1: Interaction inside the virtual world.
Based upon my 1.0 update on the interaction between the virtual world and the outside world, I wanted this update to focus on one thing and one thing only: virtual interaction.
When reading into MSRs (multi-sensory rooms), I learned that they use a range of effects and products, including lighting effects, sound effects, selected rhythmical music, and cause-and-effect items (switches and tactile objects such as ropes).
Many of these effects and sensations, such as light and sound, can easily be emulated within virtual environments, whereas others, such as tactile feedback, cannot be emulated easily without custom hardware.
With this in mind, I wanted to share three developments in interaction techniques and their feedback.
Pitch and volume alteration: this technique works on the proximity of the user’s hands in the world; as the hands move further apart or closer together, the volume and pitch go up or down accordingly.
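A minimal sketch of how this could be wired up in Unity, assuming two hand transforms driven by the Leap Motion rig and an AudioSource for the sound (the distance range and the volume/pitch values below are purely illustrative):

```csharp
using UnityEngine;

// Sketch only: maps the distance between the two hands onto volume and pitch.
public class HandDistanceAudio : MonoBehaviour
{
    public Transform leftHand;         // assumed to be driven by the Leap Motion rig
    public Transform rightHand;
    public AudioSource source;

    public float minDistance = 0.05f;  // hands together (metres)
    public float maxDistance = 0.8f;   // arms spread apart

    void Update()
    {
        float d = Vector3.Distance(leftHand.position, rightHand.position);
        float t = Mathf.InverseLerp(minDistance, maxDistance, d);

        source.volume = Mathf.Lerp(0.2f, 1f, t);    // further apart = louder
        source.pitch  = Mathf.Lerp(0.8f, 1.5f, t);  // further apart = higher pitch
    }
}
```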
Hand presence and tracing: one of the most important things to me about virtual reality is embodiment, so I gave the player hands and a simple trace system; when they move their hands, a custom-coloured trail follows, providing light feedback to the client.
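The trace effect is essentially Unity’s TrailRenderer attached to each hand object. A rough sketch, where the colour, width, and lifetime values are placeholders to be tuned per client:

```csharp
using UnityEngine;

// Sketch only: adds a fading coloured ribbon behind the hand object it is attached to.
public class HandTrail : MonoBehaviour
{
    public Color trailColour = Color.cyan;
    public float trailTime = 0.5f;    // how long the trail lingers (seconds)

    void Start()
    {
        TrailRenderer trail = gameObject.AddComponent<TrailRenderer>();
        trail.time = trailTime;
        trail.startWidth = 0.02f;
        trail.endWidth = 0f;
        trail.material = new Material(Shader.Find("Sprites/Default"));
        trail.startColor = trailColour;
        // Fade the trail out to transparent at its tail.
        trail.endColor = new Color(trailColour.r, trailColour.g, trailColour.b, 0f);
    }
}
```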
Tactile rope: although we are a long way off fully tactile gloves, I wanted to give the illusion of tactility through ropes hanging in front of the player (cylinders attached together with character joints) that react to the player’s hands colliding with the rope and with each other, giving a visual response even though a physical one may not be present.
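For the rope, here is a quick sketch of the idea of chaining cylinder segments together with CharacterJoints. Segment counts and sizes are illustrative, and it assumes the anchor object this script sits on already has a kinematic Rigidbody:

```csharp
using UnityEngine;

// Sketch only: builds a hanging chain of cylinder segments under this object.
public class RopeBuilder : MonoBehaviour
{
    public int segments = 10;
    public float segmentLength = 0.1f;

    void Start()
    {
        Rigidbody previous = GetComponent<Rigidbody>(); // kinematic anchor assumed present

        for (int i = 0; i < segments; i++)
        {
            // Each segment is a thin cylinder with its own rigidbody and collider.
            GameObject seg = GameObject.CreatePrimitive(PrimitiveType.Cylinder);
            seg.transform.localScale = new Vector3(0.02f, segmentLength * 0.5f, 0.02f);
            seg.transform.position = transform.position + Vector3.down * segmentLength * (i + 1);

            Rigidbody body = seg.AddComponent<Rigidbody>();
            body.mass = 0.1f;

            // Connect this segment to the one above it so the chain swings freely.
            CharacterJoint joint = seg.AddComponent<CharacterJoint>();
            joint.connectedBody = previous;

            previous = body;
        }
    }
}
```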
Over the coming days and weeks, I will be creating more forms of interaction, but I am interested to hear opinions and feedback on the topic and the ideas explained above. Stay tuned over the weekend for video evidence of all these features working.
Update 1.2: The art of audio
From what I have learned from my research, sensory music and FX are among the most important aspects of sensory development and rehabilitation. I knew that the choice of music, from genre all the way to tempo and beat count, can have a massive effect on people’s moods, especially in people with mental and sensory disabilities.
With this in mind, I took to the Occupational Therapy subreddit asking for professional input on what characteristics are desired and required in audio for sensory applications. I got a reply very quickly summarizing one user’s experiences and advice when it came to music: they explained how they found a steady, slow rhythm (such as 4/4) would help to calm and focus students, alongside a constant and predictable bassline, which helps precisely because of its predictability.
With this information I dug further into Google to look for suitable songs. I found myself stopping on two main pages for relaxing yet predictable audio: ‘http://www.global-journey.com’ and ‘http://soundbible.com’.
The main tracks I decided to use were pieces without vocals, as many sites agree that lyrics can be distracting and even distressing to certain users. I found pieces with a simple, steady rhythm and a range of orchestral instruments so as to create a trance-like, calming environment.
Alongside pieces of music, I also wanted to incorporate relaxing ambient sounds such as dolphins, crashing waves, and birds tweeting. These sounds were once again sourced from free online sites, mainly ‘bensounds.com’ and ‘http://soundbible.com’.
You’re probably wondering how these sounds are triggered in the application? Well, I’m glad you asked. I plan for all music to be triggered by the family member or carer outside the application; on their phone screen, they will have a range of options and menus covering a range of settings, including what music will be playing. Sound effects, on the other hand, will be triggered in the app by the user through actions such as pressing a block or moving their hands in a certain way.
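To make that split of control a little more concrete, here is a hedged sketch of a simple audio controller: the carer-facing UI would call PlayTrack, while in-app interactions would call PlayEffect. The class and method names are placeholders, and the phone-to-PC link (UNET, screen streaming, etc.) is left out entirely:

```csharp
using UnityEngine;

// Sketch only: one place that owns both the background music (carer-controlled)
// and the one-shot effects (client-triggered).
public class SensoryAudioController : MonoBehaviour
{
    public AudioSource musicSource;    // looping background music
    public AudioSource effectSource;   // one-shot effects
    public AudioClip[] musicTracks;    // tracks selectable from the carer's menu

    // Called by the external (carer-facing) UI.
    public void PlayTrack(int index)
    {
        musicSource.clip = musicTracks[index];
        musicSource.loop = true;
        musicSource.Play();
    }

    // Called from in-app interactions (e.g. the client touching a block).
    public void PlayEffect(AudioClip clip)
    {
        effectSource.PlayOneShot(clip);
    }
}
```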
Update 1.3: Interacting with sound.
So yesterday I touched a little upon music and sound effects, and I spoke about how I would have them triggered by family members and/or carers. Although this is mostly true, I would like to expand upon that idea a bit. While certain aspects of sound will be controlled by outside sources, I also want the client inside the application to have a certain amount of control, and that’s what I want to touch on today (hypothetically and literally).
So how am I going to approach a client interacting with sound? Thankfully, Leap Motion has its Orion API for Unity, which really helps and does a lot of the heavy lifting for you. After the initial setup of Orion in Unity, I am presented with a pair of hands and an Interaction Manager, alongside the powerful and customizable Interaction Behaviour script that will handle all our interaction events. Below is an image of the Unity Inspector and that magical Interaction Behaviour script; right off the bat you may notice two pretty interesting things, HoverBegin() and HoverEnd(). Both of these are ready-made callbacks provided by Leap Motion to get interaction into an application quickly and easily.
[Image: the Unity Inspector showing the Interaction Behaviour component]
In short, I attached the HoverBegin callback to play the soothing sounds of dolphins, but as soon as the client moves their hand away, the sounds stop. It really is that simple, and also very customizable: the joy of open source software is that these methods can be accessed and changed on the fly with a few clicks to suit whatever need or purpose. Alongside HoverBegin() and HoverEnd(), I am also using ContactBegin() to give the client two different ways to interact with sounds: either through a physical touch or a hover over an object.
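As a rough illustration of the wiring, the sketch below hooks hover and contact callbacks on an InteractionBehaviour up to an AudioSource. The exact event names and signatures vary between Interaction Engine versions, so treat this as an assumption-laden outline rather than copy-paste code:

```csharp
using UnityEngine;
using Leap.Unity.Interaction; // namespace assumed; varies by Interaction Engine version

// Sketch only: plays an ambient clip while the hand hovers over this object,
// and a one-shot clip when the hand actually touches it.
[RequireComponent(typeof(InteractionBehaviour))]
[RequireComponent(typeof(AudioSource))]
public class HoverSound : MonoBehaviour
{
    public AudioClip dolphinClip;   // looping ambient clip, e.g. dolphin calls
    public AudioClip contactClip;   // one-shot played on physical touch

    void Start()
    {
        AudioSource source = GetComponent<AudioSource>();
        source.clip = dolphinClip;
        source.loop = true;

        // OnHoverBegin / OnHoverEnd / OnContactBegin are assumed to be plain
        // Action events here; check your Interaction Engine version's API.
        InteractionBehaviour interaction = GetComponent<InteractionBehaviour>();
        interaction.OnHoverBegin   += () => source.Play();                    // hand approaches
        interaction.OnHoverEnd     += () => source.Stop();                    // hand moves away
        interaction.OnContactBegin += () => source.PlayOneShot(contactClip);  // physical touch
    }
}
```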
The main reason I wanted the client and users outside of the application to both have control of sound is to enable a way for both parties to interact and enjoy the experience together rather than isolating the client in the VR world.
Stay Tuned for more Updates very soon!
Update 1.4: Christmas, a time for reflection.
Sorry I have been pretty quiet with the updates recently; I was wiped out for about a week with the flu, and then Christmas hit. I hope everyone had a lovely holiday period and ate their body weight in chocolate, I know I did!
With just three months of this project left, I know I have to really buckle down and create something amazing. Over the last two weeks I unfortunately haven’t coded a single line, due to not being around my coding machine, but that hasn’t stopped me developing new ideas and researching new methods to implement upon my return.
Firstly, let’s set a few goals, because what is a project without goals? I want to have a fully working prototype with phone interaction by the end of January, which gives me just over a month to knuckle down and solidly code a prototype sensory room.
There are 3 main features I want and these are:
* In-app interaction through Leap Motion.
* Audio and visual feedback through a range of methods.
* External interaction method through a mobile device.
Great, now that I have more of a goal (sort of! Planning has never been my strong point!) I know how to spread the workload, and we can talk about more fun stuff like AUDIO.
‘But you talked about audio last time,’ I hear you cry. Well, fear not, because thanks to the help of other redditors I have made a small breakthrough, and it comes in the form of ASMR (autonomous sensory meridian response). Why ASMR? Well, as a redditor explained to me, ‘ASMR can relax so much that it puts many people to sleep, and is a proven natural sleep aid, but also stimulates the brain.’ This mix of stimulation and relaxation fits perfectly with the aims of a sensory room, so ASMR sounds are planned to be implemented. If anyone knows anything on the topic or just wants to start a conversation about it, don’t be afraid to leave a comment.
This Christmas hasn’t just been about music and audio, though. I also sat down and mocked up ideas for the UI for the outside player: basically, how will the carer/parent interact with the patient? I threw around a few ideas before finally settling on a wheel menu system: a simple toggle will show/hide a wheel at the bottom of the screen, and as the user scrolls across the screen the wheel will move, displaying more options. Each object will sit in a subcategory such as ‘lights’, ‘sounds’, etc. Once a subcategory is chosen, a new wheel will appear around the original with the full set of options, and at a tap of the screen that object/action will occur.
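To make the wheel idea a bit more concrete, here is a hypothetical layout sketch: option buttons instantiated around a half-circle and shown/hidden by a toggle button. Everything here (class names, fields, radius) is a placeholder for the mock-up, not final UI code:

```csharp
using UnityEngine;
using UnityEngine.UI;

// Sketch only: lays out labelled option buttons on a half-circle and toggles
// the whole wheel on and off with a single button.
public class WheelMenu : MonoBehaviour
{
    public RectTransform wheelRoot;   // parent of the option buttons
    public Button toggleButton;       // show/hide toggle
    public Button optionPrefab;       // prefab for a single option button
    public float radius = 200f;       // distance of options from the wheel centre (UI units)

    private bool visible;

    void Start()
    {
        toggleButton.onClick.AddListener(() =>
        {
            visible = !visible;
            wheelRoot.gameObject.SetActive(visible);
        });
        wheelRoot.gameObject.SetActive(false);
    }

    // Lay out a set of labelled options (e.g. a subcategory) around the wheel.
    public void ShowCategory(string[] options)
    {
        // Clear out whatever was on the wheel before.
        foreach (Transform child in wheelRoot) Destroy(child.gameObject);

        for (int i = 0; i < options.Length; i++)
        {
            // Spread the options evenly over the upper 180 degrees of the wheel.
            float angle = Mathf.PI * (i + 1) / (options.Length + 1);
            Button b = Instantiate(optionPrefab, wheelRoot);
            b.GetComponent<RectTransform>().anchoredPosition =
                new Vector2(Mathf.Cos(angle), Mathf.Sin(angle)) * radius;
            b.GetComponentInChildren<Text>().text = options[i];
        }
    }
}
```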
The reason I chose this UI was mainly its elegance and simplicity; I wanted the user outside of the experience to be able to navigate intuitively without too many problems.
That about sums up the last two weeks for me: recovering from the flu, enjoying the festive break, and kicking into gear with new ideas. I would love to know everyone’s opinions and am open to any and all comments.
Thanks for the patience; videos and photos coming soon.
Update 1.5: New Year, New UI
So after my reflective update over the Christmas period, I knew it was about time to get the wheels back in motion on this project and provide you lovely people with another update. This time it’s all about UI.
As I discussed in update 1.4, the UI for outside the experience is incredibly important. I touched on how I wanted the UI to be circular with all options scrollable, and this is still the end goal, but for this week I wanted to prototype. So I set about creating a functional yet admittedly ugly UI using cubes and text.
So what did I even want the UI to do? First and foremost, I wanted it to be a way for external users to control the virtual world, and the three main things I want them to be able to do are:
* Control the environment (change the skybox)
* Change and affect audio
* Change the light colour
In its current state, the UI can be brought up with a simple button in the top left of the screen, which then gives the three options above. From inside any menu it’s easy to close the UI by simply hitting the same button again.
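Under the hood, the prototype boils down to a few very small handlers that the UI buttons call. Here is a minimal sketch of what those handlers could look like (field names and the menu structure are placeholders, not final code):

```csharp
using UnityEngine;

// Sketch only: the three external controls (skybox, music, light colour)
// exposed as methods that UI buttons can call.
public class ExternalControlMenu : MonoBehaviour
{
    public GameObject menuRoot;        // the panel toggled by the corner button
    public Material[] skyboxes;        // environment (skybox) options
    public AudioSource musicSource;
    public AudioClip[] tracks;
    public Light roomLight;

    // Show or hide the whole menu (wired to the corner button).
    public void ToggleMenu()
    {
        menuRoot.SetActive(!menuRoot.activeSelf);
    }

    // Environment control: swap the scene's skybox material.
    public void SetSkybox(int index)
    {
        RenderSettings.skybox = skyboxes[index];
    }

    // Audio control: switch the background track.
    public void SetTrack(int index)
    {
        musicSource.clip = tracks[index];
        musicSource.Play();
    }

    // Lighting control: change the colour of the room light.
    public void SetLightColour(Color colour)
    {
        roomLight.color = colour;
    }
}
```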
When it comes to UI, in my eyes simplicity is key; I don’t want to have to jump through 20 hoops to do a simple task that should take two clicks, and that was my focus for this UI.
Now here is where I need the Reddit community to help me. Three options are great, but in terms of sensory development, what else should the user be able to affect? Should the outside user be given the option to place and remove sensory objects, or should all objects be available to the client to begin with?
I look forward to hearing everyone’s opinions on the matter and hope to update you all again very soon.