
Imagine walking into your living room, and on your coffee table you see all the planets in our solar system orbiting the sun.

You can have that experience right now, with the help of AR and your smartphone. The possibilities are endless, and the two tech giants Google and Apple are battling it out to be king of AR.

Google makes the first move

Google first released Tango back in 2014: a platform that allows devices, usually smartphones, to use computer vision to detect their position relative to the space around them. Tango lets developers explore the possibilities of precise navigation without GPS, windows into 3D worlds, measuring the space around you and, of course, games.

What are Tango’s strong points?

Tango prides itself on four main concepts: motion tracking, area learning, depth perception and visual positioning. Google states that it's essential for devices to share our ability to understand the physical relationships between objects within a room in order to deliver the best AR experience.

Motion Tracking

Pretty self-explanatory: Tango enables devices to track their own movement and orientation through 3D space.

Pathway created from Google Tango's Motion Tracking (source)

Area Learning

An intriguing one: area learning means that the device you're using can see and remember the physical features of a space, such as edges and corners, so that it can recognise that space later. How? By storing "a mathematical description of the visual features it has identified inside a searchable index on the device". I'd recommend reading about drift correction and localisation here.

The real trajectory vs the estimated trajectory (source)
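To make that searchable index idea concrete, here's a deliberately simplified Kotlin sketch. It's purely illustrative rather than Tango's actual implementation: a real system would use high-dimensional feature descriptors (ORB, SIFT and friends) and an approximate nearest-neighbour index, not a brute-force scan.

```kotlin
// Illustrative sketch only, not the Tango API: the idea of a searchable
// index of visual features used for area learning and relocalisation.

// A stored visual feature: a descriptor vector plus where it was seen.
data class StoredFeature(val descriptor: FloatArray, val position: FloatArray)

class AreaIndex {
    private val features = mutableListOf<StoredFeature>()

    // "Area learning": remember a feature and the 3D position it was seen at.
    fun learn(descriptor: FloatArray, position: FloatArray) {
        features.add(StoredFeature(descriptor, position))
    }

    // "Localisation": given a freshly observed descriptor, find the closest
    // remembered feature. Enough such matches tell the device it has
    // returned to a known place, letting it correct accumulated drift.
    fun closestMatch(observed: FloatArray): StoredFeature? =
        features.minByOrNull { squaredDistance(it.descriptor, observed) }

    private fun squaredDistance(a: FloatArray, b: FloatArray): Float {
        var sum = 0f
        for (i in a.indices) {
            val d = a[i] - b[i]
            sum += d * d
        }
        return sum
    }
}
```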

Depth Perception

This is the big one for actually producing an AR experience. Without depth, you can’t place any virtual objects, and that means a lot less interaction for the user. Tango enables devices to detect depth and understand the shapes within your environment. This works as a good base for the rest of the AR experience.

Visual Positioning

Using all the previous concepts, visual positioning is there to help the device map the surrounding environment. This means apps using the Tango platform can work on location-based AR experiences.

Limitations

The first obvious limitation of Google Tango is that it only works on two phones, the Asus ZenFone AR and the Lenovo Phab 2 Pro, which hardly anyone has, and at this point in the grand scheme of things I don't think people are going to switch to a particular smartphone just to get AR capabilities. Why are these devices required for Tango? Because they've been specially built to include both a depth sensor and a motion-tracking sensor, specifically for Tango.

There are also some known issues with Tango, including problems when switching between apps while an active Tango app is running, CPU load stopping depth points from being returned, and ADFs needing to be remade, otherwise the calibration within that room won't work properly. ADFs are Area Description Files, which allow the device to remember a space you've walked through before.

Apple swipes back with ARKit

Not to be outdone by Google, Apple first released their offering to the AR world back in June this year, along with a preview version of iOS 11. Just like Google Tango, Apple's ARKit lets developers create "unparalleled augmented experiences" for the iPhone and iPad, already sounding more accessible than Tango, right? Apple states that ARKit lets you 'blend' the physical world around you with digital objects, and lets you interact with the real world in "entirely new ways".

How does ARKit compare?

ARKit has four main selling points, very similar to Tango, but with much fancier names.

TrueDepth Camera

With the launch of the iPhone X, Apple's ARKit has a very nice camera to play with, enabling advanced face tracking in augmented reality. All your expressions are captured in real time with impressive accuracy, so those Animoji can be applied with ease.

Visual Inertial Odometry

This part of ARKit focuses on tracking the world around you, much like Tango's Motion Tracking. Using data from your camera sensor, as well as Core Motion data (information collected from your accelerometer, gyroscope and pedometer), your iPhone can accurately sense how you're moving within a room.
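ARKit itself is a Swift/Objective-C framework, but the fusion idea behind visual inertial odometry is easy to sketch. Below is an illustrative Kotlin toy (none of these names come from ARKit) of a complementary filter: inertial data arrives fast but drifts as it's integrated, while slower camera estimates pull the fused position back toward the truth.

```kotlin
// Toy sketch of visual-inertial fusion, not Apple's ARKit API.
class VioFusion(private val cameraWeight: Float = 0.98f) {
    // Current fused position estimate (x, y, z) in metres.
    val position = floatArrayOf(0f, 0f, 0f)

    // Inertial step: integrate velocity (derived from accelerometer data)
    // over the elapsed time. High-rate, but drift accumulates.
    fun onInertialUpdate(velocity: FloatArray, dtSeconds: Float) {
        for (i in 0..2) position[i] += velocity[i] * dtSeconds
    }

    // Visual step: blend strongly toward the camera's position estimate,
    // correcting the drift accumulated between camera frames.
    fun onCameraUpdate(cameraPosition: FloatArray) {
        for (i in 0..2) {
            position[i] = cameraWeight * cameraPosition[i] +
                (1 - cameraWeight) * position[i]
        }
    }
}
```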

Scene Understanding and Lighting Estimation

Light estimation is very cool and, I think, one of the most exciting bits of AR. Once again using your camera, your iPhone will be able to pick up horizontal planes, think tables, floors and desks, so that apps can place virtual objects on these surfaces and have them stay put. Then, using your camera sensor, the light levels of your surroundings will be used to apply the correct amount of light to the virtual object: the darker the room, the darker the object. This opens up the possibility of applying shadows and the like to virtual objects, making the whole AR experience more immersive.

ARKit allows the camera to find planes to place virtual objects onto (source)
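ARKit's real interface here is Swift (the estimate is exposed as an ARLightEstimate with an ambientIntensity value), but the core idea is simple enough to sketch in a few lines of illustrative Kotlin: work out how bright the camera image is, then scale the lighting on your virtual objects to match.

```kotlin
// Illustrative sketch of light estimation, not the ARKit API.
// Average luminance of a camera frame, from per-pixel RGB values in 0..1.
fun averageLuminance(pixels: List<Triple<Float, Float, Float>>): Float =
    pixels.map { (r, g, b) -> 0.2126f * r + 0.7152f * g + 0.0722f * b }
        .average()
        .toFloat()

// Scale a virtual light by the estimated ambient level: the darker the
// room, the darker the light falling on the virtual object.
fun virtualLightIntensity(
    baseIntensity: Float,
    pixels: List<Triple<Float, Float, Float>>
): Float = baseIntensity * averageLuminance(pixels)
```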

High performance and easy testing

Unlike Tango, ARKit runs on the iPhone 6s and upwards, as well as both iPad Pro models, as long as they're running the latest iOS. This means it's likely that if you've got a slightly older phone you'll still be able to experience new AR apps, though expect them to run a little slower. That's immediately more accessible than Google's initial AR offering of Tango, and when you factor in the high-performance processors Apple is using, it gives Apple's AR offering a solid footing.

Goodbye Tango, hello ARCore

Google Tango is out and ARCore is in.

Not to be outdone by Apple, Google announced ARCore just in time for ARKit's release, and they've well and truly upped their game since Tango.

Focusing on three main concepts, ARCore is "designed to work on a wide variety of qualified Android phones running N and later". Right now this means it can run on the Google Pixel, Pixel XL, Pixel 2, Pixel 2 XL and the Samsung Galaxy S8: a big improvement on Tango, greatly increasing the number of handsets users can experience AR on.

How does it differ from Tango?

The Tango system requires specific hardware to run properly: a fisheye lens built into one of the supported devices to scan the room, combined with infrared scanning of surfaces within the space to create a map (think back to depth perception; this is why a fisheye lens is used). ARCore takes a different approach, one that can be achieved without specialist hardware. Google can draw on the more complex systems built for Tango to make the ARCore experience, which would otherwise feel quite simplistic, a far more realistic one.

Android Central put together a deep dive into the differences if you want to know more.

Main concepts

Apart from the most obvious difference in hardware, ARCore has streamlined its focus and concentrates on placing virtual objects in your surrounding environment.

Motion tracking

Using distinct features in the image seen by the camera, the phone uses a process called concurrent odometry and mapping (COM) to map feature points, understand where you are in the room and work out how to present an AR object as you move around.
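As a rough illustration, here's a minimal Kotlin sketch of reading that tracking output through the ARCore SDK. Session setup and rendering are omitted, and the method names are from the released SDK rather than the original 2017 preview.

```kotlin
import com.google.ar.core.Session
import com.google.ar.core.TrackingState

// Read the device's current pose from ARCore's motion tracking.
// Assumes a Session that has already been created and resumed.
fun logCameraPose(session: Session) {
    val frame = session.update()          // waits for the next camera frame
    val camera = frame.camera
    if (camera.trackingState == TrackingState.TRACKING) {
        val pose = camera.pose            // position + rotation in world space
        println("Camera at x=${pose.tx()}, y=${pose.ty()}, z=${pose.tz()}")
    }
}
```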

Environmental understanding

To allow virtual objects to appear in the room around you, the phone needs to detect surfaces within the room. These are called planes. ARCore uses the phone's camera to look for clusters of feature points and makes each plane it detects, a table for instance, available for virtual objects. Google themselves note that flat surfaces without any texture, such as a plain white wall, don't give ARCore many feature points to latch onto, and so might not be detected properly.
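In code, that boils down to asking the session for the Plane trackables it has found so far. A minimal Kotlin sketch against the released ARCore SDK, again assuming a running session:

```kotlin
import com.google.ar.core.Plane
import com.google.ar.core.Session
import com.google.ar.core.TrackingState

// All planes ARCore is currently tracking, ready to host virtual objects.
fun detectedPlanes(session: Session): List<Plane> =
    session.getAllTrackables(Plane::class.java)
        .filter { it.trackingState == TrackingState.TRACKING }

// Each plane carries a centre pose and extents, enough to know where
// and how large a surface is.
fun describe(plane: Plane): String =
    "Plane ${plane.type}: ${plane.extentX}m x ${plane.extentZ}m"
```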

Light estimation

Using the intensity data gathered by your camera, ARCore can, just like Apple's ARKit, detect information about lighting, so that virtual objects placed within the world can have the correct lighting applied to them, "increasing the sense of realism".

A shadow in real life will cast a shadow in AR (source)
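In the released SDK this is a single value per frame: a LightEstimate whose pixelIntensity (0 to 1) a renderer can multiply into its virtual lights. A minimal Kotlin sketch:

```kotlin
import com.google.ar.core.Frame
import com.google.ar.core.LightEstimate

// Brightness factor for virtual lights, taken from ARCore's estimate of
// the camera image. 1.0 leaves lighting unchanged when no estimate exists.
fun ambientScale(frame: Frame): Float {
    val estimate = frame.lightEstimate
    return if (estimate.state == LightEstimate.State.VALID)
        estimate.pixelIntensity
    else 1.0f
}
```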

Anchoring objects

To keep track of virtual objects over time, anchors are defined so that ARCore can refine an object's position as its understanding of the room improves, keeping the object stable even when you move the device around.
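The usual flow in the released ARCore SDK looks something like the minimal Kotlin sketch below: hit-test a screen tap against the detected geometry, create an Anchor at the hit, then render the virtual object at the anchor's pose each frame. ARCore keeps updating that pose as its understanding of the room improves, which is what keeps the object stable.

```kotlin
import com.google.ar.core.Anchor
import com.google.ar.core.Frame

// Anchor a virtual object where the user tapped, if anything was hit.
fun anchorAtTap(frame: Frame, tapX: Float, tapY: Float): Anchor? {
    val hit = frame.hitTest(tapX, tapY).firstOrNull() ?: return null
    return hit.createAnchor()   // read anchor.pose each frame when rendering
}
```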

Checkmate

There we have it: currently ARCore and ARKit are pretty much on a level playing field. It's going to be interesting to see which takes off more quickly, and where that will lead society. Apple have the upper hand in reach right now, with ARKit being accessible on 380 million devices by the end of the year as opposed to 100 million for ARCore. However, much of Google Tango's source code can be found in ARCore. What does this tell us? That Google has a three-year head start on Apple in testing its AR technology.

What's going to be the deciding factor in this race is how AR apps are utilised. Much like when smartphones were first released, there were plenty of novelty apps showing off the touch screen, but they weren't very useful; it took time before developers were able to create some of the brilliant apps you can find today. When AR can be used to make truly useful, valuable experiences for users through our smartphones, rather than games just using it for the novelty, that's when we'll see real progression and real uptake in AR. This is where execution will be extremely important.

If you want to see some examples of what’s possible with AR and smartphones, I’d seriously recommend looking at @builtwitharcore and @madewitharkit to see some fascinating examples.