The Meta 2 isn’t ready for everyday office work yet—but when it works, it’s still a remarkable gadget.
Let’s just get the weird part out of the way: I’m typing these words on an invisible computer. Well, kind of. There’s a visible laptop open on the corner of my desk, but only so my (also not invisible) keyboard and mouse can plug into it. But the window containing these actual words I’m writing? That’s just hovering in midair, directly in front of my face, midway between me and the now-dark, real monitor I usually use at work.
To be honest, though, right now I’m a little more interested in the other window hiding behind this one—the one with last night’s NBA highlights all cued up and ready to help me procrastinate. So, I reach out with my hand, grab the work window by its top bar, move it out of the way, and commence watching Victor Oladipo bury the San Antonio Spurs. Even better? Since I’m the only one who can see these windows, my next-desk neighbor doesn’t know exactly what I’m doing. To her (hi, Lauren!), I’m just the idiot sitting there with a space-age visor on, making grabby motions in midair.
This is the vision of “spatial computing,” an infinite workspace made possible by augmented reality. And while my workspace at the moment isn’t quite infinite, it still stretches across a good part of my vision, courtesy of the Meta 2 headset I’m wearing. There’s a window with my email, and another one with Slack, just so I know when it’s time for me to jump in and start editing a different piece. The question is, is the idiot sitting there in his space-age visor able to get all his work done? That’s what I’ve spent the last week trying to figure out. Spoiler alert: he isn’t.
But the experiment also suggests a different, more important question: will the idiot in his visor be able to get all his work done in it someday? That’s the one that has a more hopeful answer.
If virtual reality’s promise was bringing you inside the frame—inside the game, or the movie, or the social app, or whatever screen-based world we’ve always experienced at a remove—then AR’s is turning the whole damn world into the frame. The virtual objects you interact with are here now, in your real-life space, existing side-by-side with the non-virtual ones. While at the moment we’re mostly doing that through our phones, we’re on the cusp of a wave of AR headsets that will seek to turn those pocket AR experiences into more persistent ones.
Meta 2 is one of those headsets; at $1,495, it poses an interesting threat to the far more expensive Microsoft HoloLens, as well as the who-knows-when-it’s-coming Magic Leap headset. (Despite the three using differing marketing language—“augmented,” “mixed,” “holographic”—they all basically do the same thing.) It’s still a developer kit, though Meta employees are quick to tell you that they use theirs every day at work. But while lots of non-employees have gotten a chance to see what the Meta 2 can do in the confines of proctored demonstrations, not many outside the developer community have had the luxury of an extended multi-day visit with the thing. I have. And I’ve got the enduring red forehead mark to show for it.
This isn’t a product review, so I’m not going to take you through the specs of the thing. Here’s what you need to know: Its field of view is significantly larger than the HoloLens’s (a headset that can sometimes leave me bobbing my head around, looking for the sweet spot that lets me see the virtual objects fully), and its lack of a pixel-based display—there are twin LCD panels, but they reflect off the inside of the visor—means that visuals appear far sharper at close range than VR users might be used to. Text is more readable, images clearer. In theory, it’s perfect for the kind of work I do as a writer and editor.
The Meta 2 uses an array of outward-facing sensors and cameras to map your physical surroundings, and then uses that map as a backdrop for everything you do in the headset. That means that if you push a window all the way behind, say, your computer monitor, it should effectively disappear, occluded by the real-world object. The key here is should: like many of the Meta’s most interesting features, it’s inconsistent at best. The mouse pointer would sometimes simply disappear, never to return; until the company pushed a software update, the headset refused to acknowledge my hand if I was wearing a watch; and it wasn’t uncommon for the headset to stop tracking me altogether.
The headset’s software interface, called Workspace, is a bookshelf of sorts, populated by small bubbles. Each represents a Chrome-based browser window (albeit a minimal rendition, stripped of familiar toolbar design) or a proof-of-concept demo experience—and maybe soon, third-party apps. To launch one, you reach out your hand, close your fist around its bubble, and drag it into free space. (Hand selection was an issue throughout my time with the headset; if I wanted a no-second-takes-necessary experience, I generally opted for a mouse.) There’s a globe, a few anatomical models, a sort of mid-air theremin you can make tones on by plucking it with your fingers, and…not much else. That’s not necessarily a concern; this may look and feel like a consumer product, but its only real purpose is to get people building apps and software for it.
But as a writer and editor who was ostensibly using it to replace his existing setup, I simply didn’t have the tools for the job. Meta’s current browser is based on an outdated version of Chrome, meaning that using Google Drive was out—both for writing and for syncing with any other web-based text editor. The headset allows a full “desktop” view of your computer, but anything you open in that view takes a big hit in clarity; editing in Word, or even in a “real” web browser, wasn’t worth the eyestrain. Did I enjoy having a bunch of windows open, and moving them around on a whim? Of course. Did I like the fact that I could do my work—or not—without prying eyes knowing I was agonizing over yet another sneaker purchase? God, yes. But for day-to-day work, the “pro” column wasn’t nearly as populated as the “con.”
Every company working in this space rightfully believes in the technology’s promise. Meta even partnered with Nike, Dell, and a company called Ultrahaptics, which uses sound to create tactile sensations (yes, really), to create a vision of the future that makes Magic Leap’s promotional pyrotechnics look like a used-car commercial.
But this isn’t just augmented reality; it’s not reality at all. At least not yet. Certainly, augmented and mixed reality are well-suited to fields like architecture and design; being able to manipulate a virtual object with your hands, while still sitting or standing with colleagues in the real world, could very well revolutionize how some of us do our jobs. But for now, most of AR’s professional promise is just that. Even a diehard Mac user can get used to a Windows machine, but until object manipulation is rock-solid, until the headset is all-day comfortable, and until there’s a suite of creative tools made expressly for AR rather than web-based workarounds that may or may not work, the Meta 2 is simply a fun toy—or at least a shortcut to looking like a weirdo in the office.
In a couple of years’ time, though? That’s another story. As with VR before it, the AR horse left the barn ages ago; there’s so much money flowing into it, so much research flowing into it, that significant improvement is only a matter of time—and not much time at that. So don’t take my problems with a developer kit as a doomsday prophecy; think of it as a wish list. And right now, I just wish it could be what I know it will be.