Are all Augmented Reality platforms created equal? Not even a little bit.
If you think there's been a lot of talk about Augmented Reality and Virtual Reality this year, 2018 is going to blow you away. Apple, Microsoft, and Google all have impressive Augmented Reality tech demos they hope to turn into real products and services in the not-too-distant future, and each of the three companies is approaching AR from a different perspective. We've spent some time with each, so here's a quick explainer on how they all compare!
Microsoft HoloLens
There's a good chance you've seen the impressive-looking demos for HoloLens before, even if you haven't seen it first-hand. Microsoft frequently depicts a world filled with what it calls holograms, with a special headset allowing you to see and interact with this virtual layer on top of the real world. The demos are very impressive, simulating a world where apps live on the walls around you and can be activated and enjoyed by simply pinching the air in front of you.
In the real world, the HoloLens Developer Kit is a heavy computer you wear on your head that lets you see augmented reality apps through a small window at the center of your field of vision. Hold a smartphone about 12 inches from your face, and that's about the field of view you get from this headgear. The consumer version of HoloLens isn't expected to be this heavy or limited, but it's difficult to judge what doesn't yet exist.
HoloLens does two things remarkably well, even in this early form. First, it can remember entire rooms and the placement of all apps within them. You can anchor a save point in a game on your coffee table, or the Netflix app on the wall, or even a Skype contact as a reminder to call that person, and when you re-enter that room, HoloLens remembers where all of those things live and puts them back. Microsoft envisions a world where all of your apps simply live in the space around you, and it is very close to making this experience real.
The other thing HoloLens is incredible at right now is gesture control. You can click with your fingers, move things around the room with ease, and interact with everything from games to web browsers in a very natural format. None of the gestures feel awkward or overly dramatic; you simply lift your hand and tap. It doesn't take long to feel comfortable with this, because it's a fairly basic extension of what we already do on phones and tablets every day. When combined with voice commands, the overall control scheme feels very human.
Google Tango
Entirely separate from Google's Virtual Reality efforts, Project Tango is an enhanced smartphone or tablet with a special camera and sensor array, which makes it possible for the phone to simulate the way a human being processes visual information. Through this simulation, Tango apps can be aware of the physical dimensions of a space and use that information to create experiences specific to the world around the user.
That's a very complicated way of saying Tango knows where you are in a room and how people move from one room to another. Google has given several demonstrations where an app on a Tango phone becomes a tour guide in a museum by being aware of the physical environment and offering guided instructions from one exhibit to the next. What makes this impressive is how well Tango does things like guide users up and down stairs, aware of its position in space the whole time. It's the reason Tango hardware is used in NASA's SPHERES project, helping the free-flying robots navigate aboard the International Space Station.
For those of us here on Earth, Tango only exists in one phone you can buy right now, with plans to exist in another phone very soon. Apps built for Tango range from mapping an entire building to determine Wi-Fi strength everywhere to simple games where you can walk around the world you're playing in. Tango apps can measure depth, immerse you in a story, and take you on trips across entire buildings with what appears to be relative ease.
The current Tango phone, Lenovo's Phab 2 Pro, is a massive device with cameras that cause Tango apps to struggle in low light. Upcoming phones from Google's other Tango partners are expected to address this concern with a new camera and sensor array, and to be more easily available to consumers.
Apple ARKit
Apple doesn't want to bring Augmented Reality to a small group of people to improve over time; it wants to bring this new tech to every iOS user all at once. ARKit is a set of tools that allows developers to build AR apps and games for the iPhone and iPad, so instead of a centralized platform, developers can decide just how deep into AR they want to bring new and existing apps. Apple's big demos for ARKit involve being able to turn flat surfaces in your home into AR playgrounds, ranging from ways to visualize complex data to advanced games and stories being played out right on your coffee table.
Developers are just now starting to explore what ARKit is capable of, but the initial demo Apple provides to help get you started makes the underlying functionality quite clear. You point the phone camera at a flat surface, and when the software confirms the space is good for AR, it is time to play. When you place something in the real world, the software makes it possible for you to move around as though the virtual world were really there. You can lean in with your phone to get a closer look at something specific, walk away to get a wider view of the world, or walk around the world you've created to see every angle.
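For developers, that flow maps onto a surprisingly small amount of code. Here's a minimal sketch of ARKit's horizontal surface detection, assuming a hypothetical view controller that owns an ARSCNView (the configuration, view, and delegate method are ARKit's own; everything else is illustrative):

```swift
import ARKit
import SceneKit
import UIKit

// Hypothetical view controller illustrating ARKit's point-and-play flow:
// start a session that looks for flat surfaces, then respond when one is found.
class ARDemoViewController: UIViewController, ARSCNViewDelegate {
    let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        view.addSubview(sceneView)
        sceneView.delegate = self
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // Ask ARKit to track the device's position and detect horizontal planes.
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = .horizontal
        sceneView.session.run(configuration)
    }

    // ARKit calls this once it has confirmed a surface is good for AR.
    // Content attached to this node stays anchored as the user walks around it.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let plane = anchor as? ARPlaneAnchor else { return }
        print("Detected a surface about \(plane.extent.x) meters wide")
    }
}
```

Once the anchor exists, ARKit's world tracking handles the rest: lean in, walk away, or circle the object, and it stays put.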
The biggest thing Apple has done here is make ARKit incredibly simple for users to jump into and have fun. It's point-and-play Augmented Reality in a way that doesn't really exist anywhere else right now, and that's exactly what it needs to be for people to casually try something new in AR. Unlike owners of hardware bought specifically for AR, these are casual users, so simplicity is key. The biggest consequence of that simplicity is a lack of depth. You can't really have multiple AR apps running at the same time with ARKit, and you certainly can't have these experiences persist in separate physical locations. It's the difference between living in a space where AR happens around you and choosing to augment an area when you launch an app. It's unlikely this will be seen as a significant detriment, since most people always have their phones nearby.
Apple released an updated version of the iPad Pro with a 120Hz display to show off what AR is capable of when refreshing at a speed that feels more natural, and you can see the difference when compared side by side with a traditional iPhone experience. It makes a big difference in AR, specifically, but not nearly as big a difference as a quality low-light camera, which Apple currently does not have. ARKit often fails to lock onto surfaces in moderately lit rooms because the camera image is grainy. In well-lit rooms this isn't a problem, but it's a challenge Apple will need to address as ARKit becomes a real feature in the next iPhone.
How do they compare?
If you're looking at the most technically capable platform, it's clear Microsoft has the greatest functionality with HoloLens. Unfortunately, it's not clear when the consumer release of that platform is coming, or how bulky or expensive it will be when it arrives. HoloLens feels a lot more like what we can expect from the future, and it's hard not to be impressed with the future Microsoft wants to create.
Google's partners are shipping Tango AR phones right now, but the first effort is really more for commercial use. As consumer options become available, the biggest advantage to Tango will be more accurate movement and placement of AR apps and games. Tango is aware of all the shapes in your space and how you will interact with them, which will lead to more immersive AR experiences.
Apple is aiming for mass adoption out of the gate with ARKit, but it is limited compared to Tango and HoloLens. Where the other platforms seek to augment the whole space around you, ARKit is only going to be capable of augmenting the space immediately in front of you for as long as you have the app open. A lot of people are going to use and love ARKit apps, which will probably cause software-based innovation to happen much faster than on the other platforms. That's going to be great for everyone.