
    Spatial computing: the ultimate guide

    Simon Edward • Sep 15, 2023

    What is spatial computing? How does it relate to extended reality (XR)? Join us as we explain the whats, hows and buts of this versatile technology.



    "Would you like [insert app here] to know your location?"


    If you've ever used a smartphone, you're probably familiar with notifications like this. Privacy concerns aside, they're part of the fabric of our everyday lives.


And that ubiquity shows just how important and widespread spatial computing has become.


Your smartphone can use GPS to enhance the experience of certain applications. To do this, it needs to process a constant stream of positioning data. It's this data that tells it where you are, which direction you're facing and what's around you.


    This approach – taking spatial data and turning it into stuff we can see, understand and interact with – is at the heart of spatial computing.


    Mapping your local jogging route? Spatial computing. Listing local pizza restaurants? Spatial computing. Catching Pokémon in the park? You guessed it.


    But smartphone GPS functions are just one slice of a very complicated pie. Spatial computing has many, many more applications. Some are mundane. Others skirt the fantastical.


    This article will give you a comprehensive overview of spatial computing so that as it becomes increasingly incorporated into your business and personal life, you'll be better equipped to use and understand it.


    What is spatial computing?


    Spatial computing is any computing system that uses distance and shape to integrate information for the user. Classic examples include satellite navigation (SatNav), seabed imaging and AR systems.


    Picture of a car SatNav

    There are multiple modalities a computer can use to judge the distance and shape of various objects, which we'll discuss below. For now, it's just important to understand what makes a computer "spatial".


    The true difference between a spatial computer and a non-spatial computer is the ability to interact with one's physical surroundings.


    Immersive experiences, for instance, rely on the user's relationship to their surrounding area. If a computer can map out even a small area near a user, it allows the software to harness that environment as part of the experience. Armed with spatial data, the app can do things like superimpose images, map geography and locate objects (think
    Geocaching).


    What are the different types of spatial computers?


    There are many types of spatial computers, though most people think of augmented reality (AR) and virtual reality (VR). We'll define those, analyse the differences and list some other important examples in this section.


    SatNav


    Satellite navigation uses radio signals broadcast by a constellation of satellites to calculate the position of a receiver anywhere on Earth. In simpler terms, it can pinpoint a location on the Earth's surface.


    Interestingly, this technology is so precise that it has to account for the effects of Earth's gravity. Had Einstein not discovered that gravity affects the passage of time, the clocks aboard the satellites would drift out of sync and SatNav would not be possible. (This has little to do with the wider world of spatial computing, but who cares? It's fascinating.)


    Anyway, SatNav underpins many commercial spatial computing ventures.
    Pokémon GO and Geocaching are among the most famous. But now, as you're probably aware, many apps use location tracking to enhance the user experience.


    Body and facial recognition


    This not only includes the expanding (and controversial) world of facial recognition technology, but also well-known video game consoles that use movement-tracking controllers. Examples include the Nintendo Switch and Xbox Kinect.


    Technology like this relies on personal, corporeal data – and providing this data makes some people justifiably uncomfortable. That's why execution is paramount. The Nintendo Wii, for instance, ensured user comfort by limiting its scope and being transparent in its functionality. 


    Extended reality


    Extended reality (XR) software and hardware facilitate interaction between the user and their environment. It's an umbrella term that covers AR, VR and mixed reality (MR).


    We'll cover AR and VR below.


    Augmented reality


    AR changes our experience of the world around us – by displaying industrial data on a piece of machinery, say, or putting cat ears on Grandma.


    AR can use dedicated, wearable hardware – such as the
    Magic Leap 2 headset – or it can be incorporated into a variety of non-wearable devices.


    Picture of the Magic Leap 2



    Some vehicle safety technologies, for instance, can be considered examples of AR. Take reversing cameras. They're rarely as simple as a straightforward, rear-facing video feed. Your car's computer is
    augmenting your experience with warning beeps and guiding lines.


    Camera-equipped smartphones and computers can accomplish sophisticated feats of AR too. There are smartphone apps that can identify constellations, for example, or conjure up 3D virtual furniture in your real living room.


    Virtual reality


    VR is the next step up in XR immersion. It creates believable virtual worlds, which the user experiences with a dedicated VR headset.


    Does VR count as an example of spatial computing? Some say yes. Some say no.


    The naysayers' argument goes something like this: VR is designed to conjure up virtual worlds, not facilitate interaction with the real world. Sure, it allows the user to interact with their environment, but this environment has effectively created itself.


    Others argue that this is all semantics. To parse its virtual world and display it to the user, the VR headset has to chew through mountains of spatial data. And that includes real-world data points like the position of the user's hands and the direction they're facing.


    How does spatial computing work?


    There are three main methods that spatial computers use to give information to users and allow for user interaction.


    1. Electromagnetic radiation


    The first method has already been noted above. Satellite navigation works by receiving signals (electromagnetic radiation broadcast by the satellites) from multiple satellites at once and measuring how long each one took to arrive.


    Why multiple? Well, this is pretty detailed. If you want the broad strokes, feel free to skip to the next section.


    Imagine a square with a dot somewhere on it. Then, imagine a sensor floating above the square. One sensor cannot determine the dot's exact position if all it knows is the dot's distance from the sensor.


    Instead, it would only know a circle of places where the dot could be – in other words, the set of positions that are exactly that far from the sensor.


    But what if you had two sensors? Well, then the area of possible dots would get smaller. The more sensors you add, the more precisely you can pinpoint the dot's position.


    With enough sensors, there would only be one possible position for the dot, given the various distances it reported to the sensors.
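    The geometry above can be sketched in a few lines of Python. This is a toy 2D trilateration – real SatNav receivers work in three dimensions and also solve for clock error – but the principle is the same: intersect the distance circles.

```python
import math

def trilaterate(p1, d1, p2, d2, p3, d3):
    """Locate a point from its distances to three known sensors (2D).

    Subtracting the circle equations pairwise cancels the squared
    unknowns, leaving a 2x2 linear system we solve with Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    if math.isclose(det, 0.0):
        raise ValueError("sensors are collinear; position is ambiguous")
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# The dot is at (3, 4); each sensor reports only its distance to it.
dot = (3.0, 4.0)
sensors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
dists = [math.dist(s, dot) for s in sensors]
x, y = trilaterate(sensors[0], dists[0],
                   sensors[1], dists[1],
                   sensors[2], dists[2])
print(round(x, 6), round(y, 6))  # → 3.0 4.0, the dot's true position
```

    With only two sensors, the two circles would intersect at two candidate points; the third sensor breaks the tie.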


    2. LiDAR


    The second method for spatial computing involves something called light detection and ranging (LiDAR).


    Picture of light shining through a prism

    LiDAR works on the principle that light bounces off most objects. (To some extent. It can also be absorbed by, or pass through, the object. However, this too can be measured and accounted for.)


    LiDAR sends out an electromagnetic signal and then measures how long it takes for the signal to return. Using the speed of light as a constant, it can then calculate the distance the radiation needed to travel to get there and back in that specific amount of time.
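    The arithmetic behind that calculation is simple enough to show directly. The only wrinkle is dividing by two, because the measured time covers the trip out *and* back:

```python
C = 299_792_458  # speed of light in m/s (vacuum; air is close enough here)

def lidar_distance(round_trip_seconds):
    """Distance to a target from a pulse's round-trip time.

    The pulse travels to the target and back, so the one-way
    distance is half the total path: d = c * t / 2.
    """
    return C * round_trip_seconds / 2

# A return after 66.7 nanoseconds puts the target about 10 metres away.
print(round(lidar_distance(66.7e-9), 3))  # → 9.998
```

    A full LiDAR scanner repeats this measurement millions of times per second across a sweep of angles to build up a 3D point cloud.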


    As you can imagine, this sort of technology demands a great deal of processing – and a huge number of signals sent out at once if, say, it needs to build a three-dimensional image. So, to make things easier, we move to the third method.


    3. Object recognition


    The third method of spatial computing is object recognition. This is the kind of technology used for things like social media filters.


    To put dog ears on a TikToker, a computer simply needs to recognise the general shape of a human face. The image is then adjusted to the specific dimensions of the TikToker's face and superimposed.
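    As a rough sketch of that last step, the snippet below takes a face bounding box – the kind of `(x, y, width, height)` rectangle a face detector typically reports – and works out where a dog-ear overlay should sit. The detector and the actual rendering are assumed; this is only the placement arithmetic.

```python
def place_ears(face_box, ear_aspect=2.5):
    """Given a detected face bounding box (x, y, w, h), compute the
    rectangle where a dog-ear overlay should be drawn: scaled to the
    face's width and anchored just above the top of the head.
    (Image coordinates: y increases downwards, so "above" means y - h.)
    """
    x, y, w, h = face_box
    ear_w = w                        # overlay spans the full face width
    ear_h = int(ear_w / ear_aspect)  # keep the graphic's aspect ratio
    return (x, y - ear_h, ear_w, ear_h)

print(place_ears((100, 120, 80, 80)))  # → (100, 88, 80, 32)
```

    Because everything is derived from the detected box, the ears automatically track the face as it moves and resize as it approaches the camera – which is exactly what a social media filter needs.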


    Now this seems like it would be superior to LiDAR in many respects. However, like the difference between LiDAR and SatNav, it really just comes down to what a person needs from their application or their software.


    Object recognition is faster. But it's also glitchier and more limited.


    Again, let's consider a social media filter. Think about what the filter does when it's placed over an object that isn't a face. On the flip side, you wouldn't want LiDAR to process an entire human face, as it would take too long and use up too much processing power.


    The business case for spatial computing


    As we've seen, many spatial computing functions are now integrated into our lives. SatNav. Snapchat filters. Local Pokémon. The list goes on.


    But there are numerous commercial use cases for the technology. Many of these are enjoying enthusiastic adoption across sectors like manufacturing, energy and construction.


    Take building information modelling (BIM) in the architectural industry, for instance. Using a VR headset, a planner can walk through a lifelike representation of the building in virtual space, spotting potential issues that might otherwise be missed.


    Or imagine an equipment maintenance engineer on an oil rig. Equipped with AR goggles, they can see industrial Internet of Things data overlaid onto key pieces of real-world machinery – and access 3D, interactive instruction manuals to help them tackle tricky fixes more quickly.


    In healthcare, the technology can even help save lives. Say some surgeons are preparing for a complicated operation. Wearing AR or VR headsets, they can inspect a 3D representation of the patient's vital organs. This effectively allows them to conduct a dry run of the operation before any scalpels see the light of the operating room.


    In all likelihood, almost every industry will begin incorporating spatial computing into its daily workflows. Teachers. Customer service workers. Cosmetologists. You name it.


    We just have to make the space for it.


    Expand Reality is a specialist retailer of
    enterprise XR devices. For more guides and industry analysis, follow our blog.



