How scaling works
At Masters of Pie, we build immersive collaborative software solutions for heavy engineering applications. Our users want the ability to interpret their designs at many scales, as this reveals details that are hard to discern on a 2D screen. Thus the ability to make things appear big or small was an obvious feature.
Scaling is not, however, what it seems…
From the user’s perspective, the model gets bigger or smaller depending on how they are scaling.
In actual fact, the system is making the user bigger or smaller in relation to the model. The user gets the experience they want, but not quite in the way they imagine. In a single-user session, this has no ramifications whatsoever.
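Conceptually, this inversion is easy to sketch in code. Here is a minimal illustration in Python, with entirely hypothetical names (this is not Radical's actual API):

```python
# "Scaling the model" implemented by scaling the user instead.
# All names here are hypothetical illustrations, not Radical's API.

class User:
    def __init__(self):
        self.scale = 1.0  # 1.0 = normal human scale

    def scale_model(self, factor):
        # The model data never changes; the user is scaled by the inverse.
        # "Shrink the model to half size" (factor 0.5) -> user doubles in size.
        self.scale /= factor

    def apparent_model_scale(self):
        # What the model's scale looks like from this user's point of view.
        return 1.0 / self.scale


user = User()
user.scale_model(0.5)   # ask to shrink the model to half size
assert user.scale == 2.0                   # the user is now a giant
assert user.apparent_model_scale() == 0.5  # ...so the model looks half size
```

In a single-user session the two approaches are indistinguishable; the difference only matters once a second user enters the scene.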
The Masters of Pie software framework, called Radical, integrates directly into existing platforms to enable seamless and secure sharing, in real-time, of complex 2D and 3D data across AR, VR, Desktop & Mobile devices. Radical was designed from day one to be a collaborative tool and so having more than one user in a session is the whole reason for Radical to exist. Pushing live datasets into immersive environments really speeds up the design and manufacture process as multiple people can collaborate and troubleshoot in a group session.
Usually, everyone looks at the same dataset at the same scale, but this does not have to be the case, and we built an environment that allows anyone to be at any scale at any time.
That “it sounded good in my head” moment
In theory, that was no trouble at all, until it was. Any designer can relate to the “it sounded good in my head” moment: that genius concept you had needs coding, and sometimes it’s awful. Sound familiar?
But that’s why we prototype. Kill the bad stuff cheaply and focus on the great.
What was wrong?
Remember when we talked about how scaling works, and how it’s the person that scales and not the model? Well, it turns out that when I scale the model down, I actually become a giant, and when I scale the model up, I become an ant (‘ant mode’, as it became known).
So, when collaborating with others, you can be swarmed by giants whose giant heads and giant interface elements get in your way. Or everyone can ‘vanish’ into tiny ants moving around, hard to spot.
It’s a good job we have VoIP, so you can enquire as to what has happened to everyone… Another issue was annotations and other data introduced into the model at a scale other than 1:1.
Way too small man
As an example, I shrink down to analyse some fine tolerances between two surfaces. I deem there to be an issue, so I take out my annotation tool, highlight the issue and write up my findings in 3D space. This is all well and good, but since I am effectively ‘ant-man’ at this point, my keen insights are impossible to see for anyone not at my scale, and so the people still at 1:1 largely ignore them.
Look at the size of that thing!
As another example, I might scale the model down to get an overview of the whole thing. A common enough thing to do. But remember, when the model gets small, I get huge.
So now when I reveal my insights, they are overly large for anyone at 1:1 scale and can (and do) obscure the model data for those users who usually express dissatisfaction at my genius.
We are all in this together
One option explored was to ‘auto-scale’ everyone at the same time to the same scale. This sounded great in my head as then all the annotations would be at the same scale and size for everyone all of the time. Genius! Except that it is not genius at all.
It turns out that when someone else scales the model for you, it can induce that VR nausea effect that many users experience. The design goals are simple:
- Trying not to make anyone sick
- Avoiding obscuring anyone’s view with giants or giant annotations
- Not allowing people to vanish (honey, I shrunk the engineers)
Regarding the first design goal, we do not want to allow a single user to control the visual experience of others as this can lead to motion sickness. There are some techniques we can employ to help with this though, so let’s not write it off wholesale just yet.
Fade to black, then render
If we were to allow someone to control our viewport for us, then one way to avoid the inevitable sickness is for the ‘controlled view’ (the one being driven by someone else’s ‘controlling view’) to fade to black whilst the visual aspect of the scaling takes place.
Once the scaling is over, the new view is rendered out onto everyone’s viewport allowing them all to be in sync at the same scale. Something to prototype and test.
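As a sketch, the brightness of the controlled view over the course of such a transition might look like this (the timings and names are made up for illustration):

```python
def view_alpha(t, fade_time, scale_time):
    """Brightness of a remotely controlled view at time t (seconds):
    fade to black, hold black while the new scale is applied, fade back in."""
    if t < fade_time:                        # fading out
        return 1.0 - t / fade_time
    if t < fade_time + scale_time:           # fully black; scale snaps here
        return 0.0
    if t < 2 * fade_time + scale_time:       # fading back in at the new scale
        return (t - fade_time - scale_time) / fade_time
    return 1.0                               # transition complete


assert view_alpha(0.0, 0.3, 0.2) == 1.0   # visible before the transition
assert view_alpha(0.4, 0.3, 0.2) == 0.0   # black while the scale changes
assert view_alpha(0.8, 0.3, 0.2) == 1.0   # visible again, now in sync
```

The key point is that the scale change itself happens while the viewer sees nothing, so there is no perceived motion to induce nausea.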
Another option is to ‘tunnel vision’ the controlled viewports so that the viewer’s peripheral vision is greyed out and they see only a narrow field of view while the scaling happens. Removing peripheral vision does help with motion sickness, and some apps that allow ‘smooth movement’ use this technique to great effect; the ones that do not tend to result in short immersion times (VR Minecraft sadly makes me queasy in no time when I’m running away from a creeper).
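The tunnel-vision variant can be sketched the same way: narrow the field of view in proportion to how fast the scale is changing. The tuning constants below are invented for illustration:

```python
def vignette_fov(base_fov_deg, scale_rate, max_rate=2.0, min_fov_deg=40.0):
    """Visible field of view while the scene is being rescaled.
    scale_rate is how fast the scale is changing (per second); at
    max_rate the view clamps down to a narrow min_fov_deg tunnel."""
    t = min(abs(scale_rate) / max_rate, 1.0)
    return base_fov_deg - t * (base_fov_deg - min_fov_deg)


assert vignette_fov(110.0, 0.0) == 110.0  # not scaling: full peripheral vision
assert vignette_fov(110.0, 2.0) == 40.0   # scaling fast: narrow tunnel
```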
Same scale, no worries?
Well, this is a partial solution: when users annotate or open interface elements, all users see them at the same scale at the same time, so there are no giants, no ants, and nothing too huge or too small to read.
However, should we record our genius at ant scale and then return to normal human scale, the annotations are still at ant scale and unusable.
Just scale the annotations and interface elements with the user scale!
I hear you yelling that, and it ‘sounds great in your head’, but it’s still a problem. Remember those insights we all sketched out as annotations at ‘ant scale’?
They were about fine tolerances around some pretty small components, and the notes and adjustments we drew up looked spot on. However, now that we are back at human or room scale, under this model they all get scaled up with us, and they end up out of proportion and position in relation to the model. Scaling them up effectively ruined them. So that’s not viable either…
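A toy calculation shows why. Suppose an annotation was drawn to match a 2 mm gap while its author was at 1% of human scale (all numbers and names below are hypothetical):

```python
GAP_SIZE_MM = 2.0  # the model feature being annotated never changes size

def annotation_size(created_at_user_scale, current_user_scale, rescale_with_user):
    """Size of an annotation that was drawn to match the gap at creation time."""
    size = GAP_SIZE_MM
    if rescale_with_user:
        # naive approach: grow the annotation as its author grows
        size *= current_user_scale / created_at_user_scale
    return size


# Created as an "ant" (user scale 0.01), viewed back at human scale (1.0):
assert annotation_size(0.01, 1.0, rescale_with_user=True) == 200.0   # ruined
assert annotation_size(0.01, 1.0, rescale_with_user=False) == 2.0    # still fits
```

Left anchored in model space, the note keeps matching the gap it marked; dragged along with the user’s scale, it balloons a hundredfold.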
Another design goal
The goal of adding annotations and such is to help refine the design and correct potential errors. So, if we can get these ideas out of VR and back into the real world in an easily digestible and communicable manner, then that is just as important as some of the other goals, if not more so (besides not throwing up, of course).
Enter the Snapshot
Snapshots in Radical allow a user to record the state of a virtual scene at a specific moment in time. It records the user’s location, the scene details (the model state, annotations etc.) and importantly for this, the scene scale.
Thus, as good practice, every time a set of annotations is complete, a snapshot should be taken that allows users to return to that same state with the click of a virtual button. Not only that, it records an image that can be shared out via email etc. So, should anyone need to revisit a decision or a moment of genius, they can, and the need to be able to see everything at any time fades away.
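A rough sketch of what a snapshot might capture, with illustrative field names rather than Radical’s actual data model:

```python
from dataclasses import dataclass, field

@dataclass
class Snapshot:
    user_position: tuple           # where the user was standing
    scene_scale: float             # the scale the work was done at
    annotations: list = field(default_factory=list)
    image_path: str = ""           # rendered image, shareable via email etc.

def take_snapshot(session):
    """Record the session state at this moment."""
    return Snapshot(
        user_position=session["position"],
        scene_scale=session["scale"],
        annotations=list(session["annotations"]),
    )

def restore(session, snap):
    """One virtual button click returns the session to the recorded state."""
    session["position"] = snap.user_position
    session["scale"] = snap.scene_scale
    session["annotations"] = list(snap.annotations)
```

Because the scene scale is part of the record, restoring a snapshot puts everyone back at the exact scale the notes were made at, where they read correctly.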
This helps with the ‘ant’ case of things being too small, but what about elements created that will now appear huge?
How it works
As I scale, elements not at my scale fade away so they are not blocking my view. As I scale to the same scale as elements created in-scene, they fade into view. That way, I only really see elements at my scale and everything else that would be too big or small to be useful is not seen.
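That fade can be sketched as a function of how far an element’s creation scale is from the viewer’s current scale, compared in log space so that ‘10x too big’ and ‘10x too small’ fade equally (the falloff constant is invented for illustration):

```python
import math

def element_alpha(element_scale, viewer_scale, falloff=1.0):
    """Visibility of an in-scene element for a viewer at a given scale:
    1.0 at the element's own scale, fading to 0.0 as the scales diverge."""
    distance = abs(math.log10(element_scale) - math.log10(viewer_scale))
    return max(0.0, 1.0 - distance / falloff)


assert element_alpha(1.0, 1.0) == 1.0    # my own scale: fully visible
assert element_alpha(0.01, 1.0) == 0.0   # ant-scale note: hidden at 1:1
```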
This declutters the scene and allows annotations to be created and recorded at any scale. And with Snapshots, it gives us a way to disseminate decisions to members of the wider team who were not immersed, so that they can see and understand why things changed, closing the loop between the real and virtual design and engineering worlds.