The Future of Immersive Environments: Virtual Home Design, “Backcasting” the Future and a Look at How VR/AR Get Social

At the Gigaom Change conference in Austin, Texas, on September 21-23, 2016, Dr. Jacquelyn Ford Morie (CEO of All These Worlds), Melissa Morman (Experience Officer at BDX), Liam Quinn (CTO of Dell), and Doreen Lorenzo (Director of UT Austin’s Center for Integrated Design) talked about empathetic design in virtual space and the future of augmented reality.
The future is already here, but there is much more to come in terms of more fully immersive environments. Virtual and augmented reality (VR/AR) will proliferate in digital spaces, taking us from a two-dimensional interface to three-dimensional virtual spaces. But once these virtual and augmented environments are ubiquitous, what will we do, how will we react and what new things will we learn?
One of the areas where we’ll see some of the biggest changes is the home.
Melissa Morman, Client Experience Officer at BDX, is looking at ways homebuilders can adopt and deliver more digital experiences for their customers. Morman said she is scouting new technologies for the homebuilding industry by asking questions like, “How do you attract customers digitally?”
Currently, prospective homeowners are given floorplans to help them evaluate (and visualize) a new home. But when the home isn’t built or significant changes are being made, floor plans can’t do the job. Smart builders understand this and are looking at ways of using virtual and augmented reality tools to help clients see the possibilities.
Donning an Oculus Rift headset, customers are digitally immersed into the virtual home and are able to make adjustments to colors, materials and even the physical configurations of the rooms. Need to make a hallway wider for wheelchair access? Want to see what your countertop looks like with another color of granite? All of these changes can be visualized in great detail.
Once inside these immersive environments, how might we react though? What will our emotional responses be and how can those be used in creative ways?
Dr. Jacquelyn Ford Morie said that “VR lets you experience walking a thousand miles in someone else’s shoes. It’s powerful as a tool for empathy.” She cited a project called “Hunger in LA,” which places the participant inside a reconstructed scene of a real-life incident in which a man collapsed in line at a food bank. The project was groundbreaking as a journalistic approach to creating empathy and understanding.
The panel moderator and director of the UT Austin Center for Integrated Design, Doreen Lorenzo, agreed that there is a huge opportunity for designers to use VR and AR to “step inside” the world of the user and really understand what they need — whether you’re designing for someone with disabilities or understanding the specific needs of a group. Morie agreed, saying, “We’re starting to use a lot of VR for health reasons, so it can be life-changing. That’s coming.”
But this is all a single-person experience. The perception of VR is that it’s anti-social. Can we expect to see social, virtual experiences?
Morie mentioned a project called Placeholder as a great example of some of the earliest social VR work ever done (the project was led by Computers as Theatre author and researcher Brenda Laurel). Filling the role of different spirit animals, you and a group of your friends can talk to one another and leave each other messages in the larger scope of the game. There are also opportunities to have richer, more immersive experiences — diving under the water as fish or soaring in the clouds as a bird. “VR is social, not anti-social,” she said.
If VR offers temporary immersive experiences, AR will always be with us. We can imagine it as constantly accessible informational overlays. Imagine a mechanic working on a part with a virtual manual right in front of them. But further in the future, AR has the potential to go beyond simple overlays: AR and VR will merge into a mixed reality (MR) that is seamless and fluid.

Quinn said they were already starting to see aspects of this vision with Dell’s Smart Desk for creative professionals. Dell is developing business applications for augmented reality that will allow IT departments to do things like remote technical support with augmented overlays. They’re also working with automotive and airline partners to create mixed reality environments that offer their customers ever-richer, more engaging experiences.
By Royal Frasier, Gryphon Agency for Gigaom Change 2016

What’s going on in Phoneland?

Connecting the dots on some news stories from Phoneland.
First, the CEO of Ericsson has been sacked:

Kim McLaughlin, Ericsson Ousts Vestberg as CEO After Turnaround Plans Stall
Vestberg’s departure caps a turbulent period for Ericsson, which is cutting jobs while battling fierce competition from Huawei Technologies Co. and Nokia Oyj. The company said last week it would accelerate cost cuts after reporting four straight quarters of disappointing revenue and profit. Vestberg has faced questions on probes into alleged corruption in Asia and Europe, and last week the company rejected a report in Swedish media that it may be inflating sales by booking revenue before some clients are invoiced.

As usual, that’s the proximate cause, but the deep structure is that 4G tech has been rolled out worldwide already, and no one’s buying much these days.

With much of the so-called fourth-generation networks already built in the U.S. and China, Vestberg had vowed to improve profitability, but the stock has declined since reaching a more than seven-year high in April last year.
Vestberg had carved out new business units targeting media and enterprise customers to get back to growth, while investing in a next generation of so-called 5G wireless technology, which represents the next wave of spending at Ericsson’s telecom carrier customers. However, he refrained from big, dramatic moves like Nokia’s purchase of Alcatel-Lucent SA, opting instead for a partnership with Cisco Systems Inc. for Internet products like routers.

So, he’s out for thinking small bore, and we’re seeing the hiccups from the 4G/5G transition in Phoneland.
Second story: Steven Russolillo says that Apple is ripe for a rally, despite the fact that market watchers are negative on the giant:

Much of the bearish thesis is due to weakening iPhone sales, which account for more than half of revenue. The iPad isn’t selling as well as it used to and the jury is out on the Apple Watch. Tech investors are allergic to anemic growth, which explains why the tech-heavy Nasdaq has lagged behind the Dow industrials and S&P 500.
Still, Apple has been punished more than enough. The iPhone slump appears priced in. And while the next iPhone, expected later this year, likely won’t be a significant upgrade, there is optimism that sales growth will soon bounce back. Analysts forecast iPhone unit sales will rise 5% for fiscal 2017, which ends next September.

The real question is not about stock price (or profits, either, with $10.52 billion in the March quarter), but about consumer buying behavior. Will we have to wait for a new mobile device — like AR/VR goggles? — before there is another huge surge in consumer demand for mobile? Watches aren’t the future, but goggles will be, I bet. Not a 2016 trend, though. Maybe 2017?
The third and last data point for today: Aaron Pressman digs into AT&T’s efforts to convince Wall Street its wireless business is healthy. He reviews the standard argument that postpaid subscribers — the ones signed up for monthly accounts — are generally considered to be better sources of reliable revenue than prepaid subscribers, who ‘spend less for service, buy cheaper phones, and tend to defect to other carriers more frequently’.

The bottom line is that so far this year, AT&T’s postpaid subscribers grew only 1% while prepaid subscriptions increased 21%. That’s disturbing to Wall Street, based on the ruling assumption that postpaid customers are preferable.
Thus, Stephens has been trying to push some new math on the analysts. In essence, his argument is that the best customers in prepaid are actually a lot better—and more profitable—than the worst customers in postpaid.
The average service revenue AT&T collected from postpaid customers who have left—and who mostly had not upgraded to smartphones yet—was only $35, he said during a conference call with analysts on Thursday afternoon. But the new prepaid customers signing up with Cricket are bringing in “closer to a $41, $42” of average revenue. Additionally, it costs less to acquire a new prepaid customer and less to provide them with customer service, he noted.
“So from that standpoint, the economics are better, and it is being shown in our margins,” Stephens told analysts, pointing out that while total wireless revenue was down slightly, profit margins were at record highs.

So AT&T has landed in a different dimension, where the economics are reversed: T-Mobile and others are screwing up the numbers for postpaid, while the supposedly poor prepaid sector looks good. This may only hold for a short, transitional period, however.
And the back-office transitions around cable and internet suggest other churn as the world turns:

The telco is shedding expensive-to-maintain cable TV customers at its U-Verse unit while adding less costly satellite TV customers for DirecTV. AT&T is dropping broadband Internet customers who connect via older DSL lines while trying to add fiber optic broadband customers. And it’s trying to move corporate customers from traditional managed networks to cheaper virtualized networks. If all of the transitions succeed, both revenue and profits should grow.

Putting all the dots together? The consolidation in Phoneland is accelerating. Old technology is maturing, while new technologies and business models are only slowly emerging, which is leading to the downdraft at Ericsson, and financial analyst disdain for Apple and AT&T. The slowing rate of purchasing — by telcos and consumers, both — is leading to consolidation, the classic market maturation that comes right before a new era of breakthroughs and growth. But those breakthroughs won’t be in 2016.

The Seven Wonders of the Business Tech World

Just over 2000 years ago, Philo of Byzantium sat down and made a list of the seven wonders of the world at that time. Like any such subjective list, it was met with criticism in its own time. The historian Herodotus couldn’t believe the Egyptian Labyrinth was left off and Callimachus argued forcefully for the Ishtar Gate to be included.
At Gigaom Change in September (early adopter pricing still available), we will explore the seven technologies that I think will most affect business in the near future. I would like to list the seven technologies I chose and why I chose them. Would you have picked something different?
Here is my list:
Robots – This one is pretty easy. Even if you make your trade in 1’s and 0’s and never touch an atom, robots will still impact some aspect of your business, even if it is upstream. Additionally, the issue of robots has launched a societal debate about unemployment, minimum wage, basic income, and the role of “working for a living” in the modern world. We have dreamed of robots for eons, feared them for decades, and now we finally get to see what their real effect on humanity will be.
AI – This is also, forgive the pun, a no-brainer. AI is a tricky one, though. Some of the smartest people on the planet (Hawking, Gates, Musk) say we should fear it, while others, such as the Chief Scientist of Baidu, say worrying about AI is like worrying about overpopulation on Mars. Further, estimates of when we might see an AGI (artificial general intelligence, an AI that can do a wide range of tasks like a human) vary from 5 years to 500 years. Our brains are, arguably, what make us human, and the idea that an artificial brain might be made gets our attention. What effect will this have on the workplace? We will find out.
AR/VR – Although we think of AR/VR as (at first) a consumer technology, the work applications are equally significant. You only have to put on a VR headset for about three minutes to see that some people, maybe a good number, will put this device on and never take it off. But on the work front, it is still an incredibly powerful tool, able to overlay information from the digital world onto the world of atoms. Our brains aren’t quite wired up to imagine this in its full flowering, but we will watch it unfold in the next decade.
Human/Machine Interface – Also bridging the gap between the real world and the virtual one is the whole HMI front. As machines become ever more ubiquitous, our need to seamlessly interface with them grows. HMI spans a wide spectrum of technologies, from good UIs to eye-tracking hardware to biological implants, and it will advance to the point where the line between where the human ends and the machine begins gets very blurry.
3D Printing – We call this part of Gigaom Change “3D Printing” but we mean it to include all the new ways we make stuff today. But there isn’t a single term that encapsulates that, so 3D Printing will have to suffice. While most of our first-hand experience with 3D printing is single-color plastic demo pieces, there is an entire industry working on 3D printing new hearts and livers, as well as more mundane items like clothing and food (“Earl Grey, hot”). From a business standpoint, the idea that quantity one has the same unit price as quantity one-thousand is powerful and is something we will see play out sooner than later.
Nanotechnology – I get the most pushback on nano because it seems so far out there. But it really isn’t. By one estimate, there are two thousand nanotech products on the market today. Nano, building things with dimensions of between 1 and 100 nanometers, is already a multi-billion dollar industry. On the consumer side, we will see nano robots that swim around in your blood cleaning up what ails you. But on the business side, we will see a rethinking of all of the material sciences. The very substances we deal with will change, and we may come to be described as living not in an iron or stone age but in a nano age, in which we make materials that were literally impossible to create just a few years ago.
Cybersecurity – This may seem to be the one item that is least like all of the others, for it isn’t a specific technology per se. I included it because the more our businesses depend on these technologies, the more susceptible they are to attacks through them. How do we build in safeguards in a world where most of us don’t really understand the technologies themselves, let alone the subtle ways they can be exploited?
Those are my seven technologies that will most affect business. I hope you can come to Austin Sept 21-23 to explore them all with us at the Gigaom Change Leader’s Summit.
Byron Reese
Publisher
Gigaom

Exploring Apple’s growing interest in VR content

While it’s no surprise that virtual reality is weaving its way into many sectors of tech, we’re still left waiting to find out how it will unfold when it comes to Apple.

Google, Samsung, Sony, and Facebook have all shared their VR plans, but Apple is remaining silent. Well, mostly silent, anyway. Yesterday, Apple confirmed to TechCrunch that it had purchased Swiss startup Faceshift, which develops motion-capture technology. Prior to the acquisition, Faceshift focused on producing motion-capture solutions primarily for gaming and film applications. Most notably, the startup has done work for a little film called Star Wars.

Faceshift’s tech integrated with cameras capable of capturing and deciphering depth in a space, then used the visual information picked up by the camera to influence the appearances, actions and facial expressions of digital objects like avatars and animated characters.
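
Under the hood, face-capture systems like this typically boil each video frame down to a set of “blendshape” weights (how open the jaw is, how raised the brows are, and so on) and then replay those weights on a rigged character. The Python sketch below illustrates that general retargeting idea; the shape names and tracker values are hypothetical, not Faceshift’s actual API.

```python
# Illustrative sketch of blendshape retargeting: per-frame expression weights
# estimated from a depth camera are replayed on an avatar's face rig.
# The tracker interface and shape names here are hypothetical.
from dataclasses import dataclass

@dataclass
class AvatarFace:
    """A character rig whose face is driven by named blendshape weights (0..1)."""
    weights: dict

    def apply(self, new_weights: dict) -> None:
        # Clamp and copy the estimated weights onto the rig.
        for name, value in new_weights.items():
            self.weights[name] = max(0.0, min(1.0, value))

def retarget_frame(tracked_weights: dict, avatar: AvatarFace, gain: float = 1.0) -> None:
    """Map tracked expression weights onto the avatar, with an optional gain
    to exaggerate or soften the performance."""
    avatar.apply({name: value * gain for name, value in tracked_weights.items()})

# Example frame as a face tracker might report it (values are made up).
frame = {"jaw_open": 0.35, "smile_left": 0.8, "smile_right": 0.75, "brow_raise": 0.1}
avatar = AvatarFace(weights={name: 0.0 for name in frame})
retarget_frame(frame, avatar, gain=1.1)
print(avatar.weights)
```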

So, what does Apple want with Faceshift? In typical Apple fashion, it gave a stock non-answer when asked about how it plans to use Faceshift. That leaves us to speculate on how, exactly, Apple plans to leverage the work of the company that helped bring Star Wars characters to life. First, though, let’s talk a little bit about what Faceshift was doing prior to being snatched up by Apple.

Faceshift’s most recognizable and noteworthy work is obviously with Star Wars, but its tech has a number of applications that go beyond visual effects in film. One such application was in avatars.

Though avatars are frequently used in gaming, the recent uptick in interest in virtual reality and augmented reality applications has seen many content companies from game studios to film studios rethinking the way in which we see ourselves represented digitally. No longer are avatars simply a cartoonish representation of our online selves on Xboxes and Playstations — they’ve shown the potential to become key components in the way we perceive ourselves in a digital world.

Faceshift has also dipped its toes into augmented reality, using its tech in conjunction with mirrors. In one instance, Faceshift teamed up with AMVBBDO and Pepsi for an AR-based Halloween prank that transformed people’s faces into horrifying It-like clown masks. On the less terror-inducing end of the spectrum, possibilities for Faceshift’s mirror tech included the ability to change or modify one’s appearance digitally in a retail setting. Say, for example, trying on a prospective pair of glasses not available in the shop or experimenting with a new kind of makeup.

Taking into consideration some of Apple’s other recent acquisitions like Metaio, some degree of AR application doesn’t seem all that far-fetched. At its core, augmented reality is about allowing users to interact with digital components in space. Virtual reality, by way of comparison, is an immersive experience — one that replaces your current reality with one that lives inside of a headset rather than adding components to your current perceived reality. It’s worth noting, however, that not all applications of AR look like mirror-based pranks, or even Microsoft’s Hololens.

What an Apple foray into AR would look like is difficult to guess. Microsoft has been implementing AR through Hololens for gaming and design purposes, but much of that comes down to the headset. Though certainly not impossible, there isn’t much to indicate right now that Apple’s planning on producing its own headset hardware.

But the reason behind acquiring Faceshift (or even Metaio) might not be augmented reality-related at all. Maybe it’s got more to do with improving features in existing Apple technology, or something security-related, like using facial recognition and mapping to improve device and information security. Or maybe — highly unlikely, but maybe — Apple intends to create its own space opera films.

For now, it’s anyone’s guess how Faceshift might factor into Apple’s plans for the future. But it almost certainly didn’t acquire Faceshift without some seriously sophisticated plans for motion capture and facial mapping technology. And, like always, we’ll be waiting for Apple to read us in.

MindMaze’s headset brings your brainwaves into virtual reality

Swiss startup MindMaze is moving its technology from the medical field to the mainstream with a virtual reality headset that reads the wearer’s brain waves and uses the data to help them relax and play. The team announced $8.5 million in angel funding today, which it will use to help bring several products to market by the end of the year.

At the Game Developers Conference in San Francisco, I tried four prototypes that demonstrated how MindMaze’s technology works on different platforms. Inside a virtual reality headset MindMaze calls “NeuroGoggles,” I saw virtual fire spring from the tips of my fingers in an augmented reality mode. Plastic strips placed EEG sensors all over my head, allowing a TV next to me to livestream my brain activity.

A prototype version of MindMaze’s NeuroGoggles.

At another table, a sweatband studded with sensors measured my relaxation and allowed me to power up a glowing ball on the screen in front of me. I furrowed my brow to release its energy and battle a MindMaze employee in a reverse tug-of-war.
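
MindMaze hasn’t said how it scores relaxation, but a common approach in consumer brain-computer games is to compare EEG power in the alpha band (roughly 8 to 12 Hz, associated with relaxed states) against higher-frequency activity, and let that score charge an in-game meter. The sketch below shows that generic pattern; the band edges, threshold, and charging rate are illustrative assumptions, not MindMaze’s algorithm.

```python
# Generic sketch: derive a "relaxation" score from EEG band power and use it
# to charge a ball in a game loop. Not MindMaze's actual algorithm.
import numpy as np

def band_power(samples: np.ndarray, fs: float, lo: float, hi: float) -> float:
    """Average spectral power of a 1-D EEG signal between lo and hi Hz."""
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs)
    mask = (freqs >= lo) & (freqs < hi)
    return float(spectrum[mask].mean())

def relaxation_score(samples: np.ndarray, fs: float) -> float:
    """Ratio of alpha-band power to alpha+beta power, in [0, 1]."""
    alpha = band_power(samples, fs, 8.0, 12.0)
    beta = band_power(samples, fs, 13.0, 30.0)
    return alpha / (alpha + beta + 1e-9)

def update_ball_charge(charge: float, score: float, dt: float) -> float:
    """Charge the ball while the player stays above an (assumed) relaxation threshold."""
    if score > 0.6:
        charge += 0.5 * dt      # charging rate is arbitrary
    return min(charge, 1.0)

# One-second window of fake EEG data sampled at 256 Hz.
fs = 256.0
samples = np.random.randn(int(fs))
print(update_ball_charge(charge=0.2, score=relaxation_score(samples, fs), dt=1 / 60))
```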

MindMaze also has a Kinect-like camera that tracked my movement in 3D. The company has plans for it similar to Leap Motion; a gamer could use it to integrate their motions into their virtual avatar, for example.

A prototype MindMaze headband reads brainwaves and incorporates them into games.

MindMaze got started by using its 3D tracking technology to help stroke, amputation and spinal cord and brain injury patients. There’s an episode of “House” where Hugh Laurie discovers a man’s anger stems from pain in his amputated hand. He alleviates the man’s pain with a box split down the middle by a mirror. One arm goes on each side of the mirror, and when the patient moves their intact hand it appears their other hand is moving too.

Studies have not found conclusive evidence that mirror boxes actually alleviate phantom limb pain, but the technique has been shown to help stroke patients regain control of their limbs. MindMaze replicates the same treatment with a virtual limb: the patient moves their intact hand, and a mirrored virtual hand performs the same action.
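
Conceptually, the virtual version of the mirror box is just a reflection: take the tracked joint positions of the intact hand and flip them across the body’s midline to drive the opposite, virtual hand. A minimal sketch of that idea, assuming tracked joints are already available in a body-centred coordinate frame:

```python
# Minimal sketch of virtual mirror therapy: reflect tracked joint positions of
# the intact hand across the body's midline (the x = 0 plane here) so the
# virtual "phantom" hand appears to move in sync. Coordinates are assumed to be
# body-centred, with +x pointing to the patient's right.
import numpy as np

def mirror_hand(joints: np.ndarray) -> np.ndarray:
    """joints: (N, 3) array of tracked joint positions for the intact hand.
    Returns the mirrored positions used to render the virtual opposite hand."""
    mirrored = joints.copy()
    mirrored[:, 0] *= -1.0   # reflect across the sagittal (x = 0) plane
    return mirrored

# Three example joints of a right hand (wrist, knuckle, fingertip), in metres.
right_hand = np.array([[0.25, 0.00, 0.40],
                       [0.28, 0.02, 0.45],
                       [0.30, 0.03, 0.50]])
virtual_left_hand = mirror_hand(right_hand)
print(virtual_left_hand)
```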

A rendering of the proposed design for MindMaze’s NeuroGoggles.

When MindMaze begins selling its devices to consumers, CEO and founder Tej Tadi said he sees people using it both to manage their mental health and for gaming.

“The first thing we want to do short term is enable a whole new gaming platform,” Tadi said. “It’s more enriched than real life.”

In the tug-of-war, it was nearly impossible to relax and power up because journalists don’t relax and the room was buzzing with people. I’ve tried the Muse headband in the past and wasn’t particularly impressed. Feedback reminding me I’m not doing well at relaxing doesn’t exactly make me more relaxed.

A rendering of the proposed design for MindMaze’s motion capturing camera.

But I like the idea of relaxing inside virtual reality. It is especially suited to blocking out the world and putting you in a space where it is possible to be calm. MindMaze’s prototype goggles were decently made and had the unusual feature of displaying 180 degrees of your view in virtual reality and the other 180 in augmented reality, making it easy to switch back and forth between the virtual and real worlds. The company has plans to make its headset and 3D camera smaller and wireless, and both are meant to look a whole lot spiffier before their consumer release. I’m interested to see what other applications MindMaze dreams up.

Magic Leap CEO remains cryptic in Reddit AMA

Augmented reality startup Magic Leap has been maddeningly mum since it leapt into the public eye with nearly $600 million in funding last year. Only a handful of journalists and investors have tried it.

CEO Rony Abovitz took to Reddit for an AMA today, a rare interaction with the public and an all-too-familiar demonstration of the company’s ability to dodge real questions. He acknowledged Magic Leap is working on obvious augmented reality (AR) challenges without revealing any more technical details. He also said the company has a launch date target and will publicly talk about selling the product commercially “at some point in the near future,” but gave no specific date.

However, he did reveal a few interesting tidbits about his personal interests and what he believes sets Magic Leap apart. Here are the highlights:

Abovitz thinks other tech is damaging your eyes and brain

In the most interesting — and strange — post in the AMA, Abovitz charged that rival augmented reality headsets cause temporary or permanent neurological problems. What?

According to Abovitz:

Our philosophy as a company (and my personal view) is to “leave no footprints” in the brain. The brain is very neuroplastic – and there is no doubt that near-eye stereoscopic 3d systems have the potential to cause neurologic change.

There is a history (for optics geeks) of issues that near-eye stereoscopic 3d may cause – but this has always been very limited use and small populations (like the military). We have done an internal hazard and risk analysis (like the kind I did from my med-tech/surgical robotics days) on the spectrum of hazards that may occur to a wide array of users. Frequency of use, duration of use, and the neuroplasticity of the user are all key factors – but because we are all people – we may all be impacted.

I personally experienced a number of these stereoscopic-3d issues – and would not wear these devices -especially knowing that digital light-field systems are on the way and safe.

I haven’t seen any studies that support this. I have experienced AR and VR-induced nausea and eye strain, but those are hardly permanent.

Abovitz is probably saying this because Magic Leap uses a fundamentally different technology than rival AR headsets. It projects light that mimics natural light directly into the wearer’s eye, giving them a visual that appears real (at least, that’s how MIT Technology Review’s recent piece describes it). Other headsets place a lens in front of each of the wearer’s eyes. Images appear on the lens, which can force your eyes to switch back and forth between different focal depths, causing strain.
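
The strain he is describing is usually called the vergence-accommodation conflict: your eyes converge on a virtual object at one apparent distance while still having to focus on the display’s fixed optical plane. A back-of-the-envelope illustration of that mismatch (the interpupillary distance and focal plane here are typical values, not any particular headset’s specs):

```python
# Back-of-the-envelope look at the vergence-accommodation conflict in a
# conventional stereoscopic headset. Numbers are illustrative, not specs.
import math

IPD = 0.063            # typical interpupillary distance, metres
FOCAL_PLANE = 2.0      # assumed fixed optical focal distance of the display, metres

def vergence_angle_deg(distance_m: float) -> float:
    """Angle the two eyes converge by to fixate a point at the given distance."""
    return math.degrees(2.0 * math.atan((IPD / 2.0) / distance_m))

def focus_mismatch_dioptres(virtual_distance_m: float) -> float:
    """Difference between where the eyes must focus (the display's focal plane)
    and where the rendered object asks them to converge, in dioptres."""
    return abs(1.0 / FOCAL_PLANE - 1.0 / virtual_distance_m)

for d in (0.3, 0.5, 1.0, 2.0, 10.0):
    print(f"object at {d:>4} m: vergence {vergence_angle_deg(d):5.2f} deg, "
          f"mismatch {focus_mismatch_dioptres(d):4.2f} D")
```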

A virtual whale from Magic Leap’s website.

Magic Leap wants to replace your mobile device

Abovitz is not the first person to say this, and he won’t be the last: Augmented reality is a platform, not a specialized tool. Magic Leap believes people will eventually want to use its glasses more than current mobile devices. Yes, you’ll look like that dorky guy you saw wearing Google Glass once, and you’ll like it.

“There will be a transition period – some people will wait, others will adopt quickly, while some will use both for a while,” Abovitz wrote.

If Florida worked for NASA, it can work for Magic Leap

Magic Leap is located in Dania Beach, Florida, a city of 30,000 people 45 minutes north of Miami. It’s not the first place you would think to find a booming startup.

Abovitz moved to southern Florida as a child, and said he still loves it today, citing “Disney, NASA and alligators.”

“NASA brought the best and brightest here in the 60s to go to the moon – there is something about being here which gets you to think different and big,” Abovitz wrote in the AMA.

Neal Stephenson is part networker, part UI guy

Magic Leap hired “Snow Crash” author Neal Stephenson late last year as its futurist — a grab that had virtual reality geeks oohing and ahhing.

“I pinch myself everytime I sign off on a Neal Stephenson expense report,” Abovitz wrote. “So mundane and sci-fi at the same moment :)”

Stephenson works with Magic Leap’s team to ensure developer sites are friendly and builds “relationships with people and companies who are living the future we want to build,” Abovitz said.

Magic Leap does not plan to develop for Microsoft

When Microsoft revealed its HoloLens augmented reality headset, team lead Alex Kipman named Magic Leap in his call for HoloLens developers. But Abovitz said he and his team “have our own plans.”

Something about “gold tickets coming”

Is Abovitz secretly Willy Wonka?

The new View-Master officially turns Google Cardboard into a toy

Mattel announced an update to its iconic View-Master handheld 3D-viewer toy in New York on Friday. The new View-Master was developed in conjunction with Google and its Cardboard project, and it turns the retro slide-viewer into a modern virtual reality headset.

Basically, the new View-Master is a plastic toy version of the Google Cardboard headset first revealed last June. Instead of inserting disks of stereoscopic images, as the original View-Master requires, you simply slip a supported smartphone into the headset; the phone provides the gyroscopes, processor, and screen needed for immersive VR.
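
The reason a phone is enough is that Cardboard-style VR mostly comes down to rendering the scene twice, once per eye, from two viewpoints separated by roughly the distance between your pupils and oriented by the phone’s motion sensors. A simplified sketch of deriving those two eye positions from a single head pose (the axis conventions and eye separation are assumptions for illustration):

```python
# Simplified sketch of Cardboard-style stereo: derive left/right eye positions
# from a single head pose reported by the phone's motion sensors.
# Conventions assumed here: +x is the head's right, rotation given as a 3x3 matrix.
import numpy as np

EYE_SEPARATION = 0.063   # metres; roughly a typical interpupillary distance

def eye_positions(head_position: np.ndarray, head_rotation: np.ndarray):
    """Return (left_eye, right_eye) world positions for a given head pose."""
    right_axis = head_rotation[:, 0]           # head's local +x axis in world space
    offset = (EYE_SEPARATION / 2.0) * right_axis
    return head_position - offset, head_position + offset

def yaw_matrix(yaw_rad: float) -> np.ndarray:
    """Rotation about the vertical (y) axis, e.g. from the phone's gyroscope."""
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

head_pos = np.array([0.0, 1.6, 0.0])           # standing eye height, metres
left, right = eye_positions(head_pos, yaw_matrix(np.radians(30)))
print("left eye :", left)
print("right eye:", right)
```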

Besides the fact that it’s made from plastic, there’s going to be one major difference between Cardboard and the new View-Master: the main action button moves from the left-hand side of the headset to the right, and becomes a “capacitive touch” lever that better matches the historic View-Master lever you might remember from bygone years.

In fact, there are so few differences between Google Cardboard and the new Mattel View-Master that the demo I received was on a Google Cardboard headset with a mounted Nexus 5 phone, not the promised headset.

Mattel wanted to show off its new View-Master app, which will be available from Google Play this fall. The View-Master app uses Google Cardboard APIs to produce immersive, interactive worlds, although much of the interactivity is still under development. Users will be able to buy experiences online, as well as in stores, in the form of “experience reels,” sold in packs of four for $15, which contain exclusive content you can’t download online.

When in the View-Master app, you load an experience reel through a nifty augmented reality interaction. Taking advantage of your smartphone’s rear camera, the app passes what’s in front of you to your screen. When your gaze falls on an experience reel, figures from that reel pop up into your line of vision. For instance, the San Francisco experience reel projects a polygonal Golden Gate Bridge. Simply click on the AR images and you’ll be transported into a 360-degree virtual world. I travelled to the moon as well as the time of the dinosaurs.
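
Gaze-driven activation like this is commonly implemented by checking whether your view direction stays within a small angular cone around a target for long enough, a pattern usually called dwell selection. The toy sketch below shows that pattern; the cone angle and dwell time are arbitrary choices, not details of Mattel’s app.

```python
# Toy sketch of gaze-dwell selection, the pattern behind "look at a reel and it
# activates". Threshold and dwell time are arbitrary illustrative values.
import numpy as np

GAZE_CONE_DEG = 5.0    # how tightly you must look at the target
DWELL_SECONDS = 1.0    # how long the gaze must stay on it

def angle_between_deg(a: np.ndarray, b: np.ndarray) -> float:
    cos = np.clip(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)), -1.0, 1.0)
    return float(np.degrees(np.arccos(cos)))

def update_dwell(gaze_dir, target_dir, dwell: float, dt: float):
    """Accumulate dwell time while the gaze stays on target;
    returns (new_dwell, selected)."""
    if angle_between_deg(np.asarray(gaze_dir), np.asarray(target_dir)) < GAZE_CONE_DEG:
        dwell += dt
    else:
        dwell = 0.0
    return dwell, dwell >= DWELL_SECONDS

# Simulate ~1.2 s of frames where the user keeps looking at the reel.
dwell, selected = 0.0, False
for _ in range(72):                      # 72 frames at 60 fps
    dwell, selected = update_dwell([0.02, -0.1, -1.0], [0.0, -0.1, -1.0], dwell, 1 / 60)
print("reel activated:", selected)
```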

Mattel says there isn’t a social aspect to its worlds, so parents can safely send their kids to the moon without worrying about the trolls that pop up online. The company is also conducting studies intended to certify that virtual reality is safe for kids as young as seven.

Other Cardboard-compatible headsets not made from cardboard — like the recently announced VR for G3 from LG — should be able to run the View-Master app as well, and the new View-Master will work with any Google Cardboard-compatible app. Mattel didn’t elaborate on which specific devices will work in the new View-Master, but noted it should be a lot of them, including big phones with 6-inch screens.

Mattel says that the new View-Master should be in stores by October, and will cost around $30 — not counting the smartphone you need to use it. Although Google Cardboard is largely exclusive to Android at the moment, Mattel and Google representatives confirmed that they’re working to bring it to iOS devices.

Although the new View-Master is directly aimed at kids (and nostalgic parents), its low price and wide availability might make it the Google Cardboard headset of choice for overgrown kids — early adopters and virtual reality enthusiasts.

As HoloLens looms, Atheer plans new augmented reality headset

Mountain View-based startup Atheer plans to announce today a second developer version of its augmented reality glasses, moving it closer to the long-promised headset it plans to sell to businesses.

The refreshed glasses are the first I’ve seen from Atheer that actually look like, well, glasses. They have a Snapdragon 800 processor and a lighter, more compact frame. Their Android-based operating system now looks much cleaner and is navigable with just a few gestures.

The four-year-old startup is now operating under a new CEO: Alberto Torres, who was formerly the senior vice president of mobility at HP. He replaced former Atheer CEO Soulaiman Itani in October.

Atheer CEO Alberto Torres demonstrates a medical application on the Atheer augmented reality headset.

Under Itani, Atheer launched a crowdfunding campaign offering augmented reality glasses to consumers. It canceled the campaign after raising more than $200,000, citing deeper interest from the enterprise side.

Torres said Atheer will remain focused on the enterprise for at least the next three years. Atheer is already working with partners in the healthcare, oil and gas and logistics spaces, and is looking to expand into new partnerships with this new headset.

During a demo, Atheer showed me the latest iterations of the apps it has now been building for more than a year. I pinched my fingers to zoom in and out on a 3D model of a heart and swiped through Google Maps. I watched as a virtual spout attached itself to a real piece of paper and began spewing virtual water, and scanning a barcode with the glasses’ camera pulled up a list of instructions for a job. A new feature allows you to hold your hands up to make a square, and the headset will automatically take a picture. Many of these are abilities desperately needed in industries where your hands are too dirty to touch a tablet screen or where logging item after item after item is monotonous.

Atheer CEO Alberto Torres.

As general interest grows, Atheer will begin to dip its toes into the consumer space, perhaps starting with very specific use cases. Torres said he would love to create an augmented reality headset that scuba divers could use underwater. Skydivers and skiers would also benefit from gesture-based computing.

“I see it becoming what your laptop is today,” Torres said, describing it as a tool people will take out for short-use sessions. “I don’t believe that we are ready for people to be wearing this all the time.”

As Atheer traverses this new space, it will find both an enemy and an ally in Microsoft, which made the surprise announcement of its HoloLens augmented reality headset last month. The big player’s entry into the space will bring mass interest, but Torres is confident Atheer can still stand out.

“History shows it’s not always the people who throw the most money at it that get the right answer,” Torres said. “Our focus is to execute and have a great experience. This world of computing at your fingertips … it’s going to happen.”

Meta raises $23M Series A to refine its augmented reality glasses

Meta, an early startup in the augmented reality industry, has raised a $23 million Series A round led by Horizons Ventures, Tim Draper, BOE Optoelectronics and Garry Tan and Alexis Ohanian of Y Combinator. Danhua Capital, Commodore Partners and Vegas Tech Fund also participated.

The Series A announcement comes at an interesting time. Meta, which was founded in 2012, has been working toward its early promise of Iron Man-like augmented reality. Its team has shown me demos involving 3D modeling, the internet of things and even basic web browsing, but just a week ago Microsoft breezily caught up with its HoloLens headset.

But Meta chief product officer Soren Harner characterized it as an exciting time for the company. Augmented reality has struggled to drive the same enthusiasm and exposure as its cousin virtual reality, which was much quicker to overcome technical hurdles.

The Meta 1 developer kit glasses.

“There are big, credible companies standing behind this space,” Harner said in an interview. “We don’t have other baggage. We’re not promoting platforms beyond augmented reality.”

I have not personally tried the HoloLens, or Magic Leap, another Meta competitor that is rumored to be incredible, but my impression is that Meta is falling behind. I found ODG’s product to be far sharper and easier to use. While it may be true that Meta has the independence to pursue the singular goal of an AR headset, it also doesn’t have Microsoft’s or Magic Leap’s team or financial resources.

Harner couldn’t talk about Meta’s future release plans, but the startup’s next headset will be another developer kit, not a consumer version. ODG and Microsoft plan to get their consumer headsets out this year.

Instead of speed of delivery, Meta is concentrating on building its content library. It is hosting two hackathons over the next few months and continuing to ship glasses to developers.

“It’s an integration of hardware and software that’s going to make [augmented reality] happen,” Harner said. “We have traction. We have the resources to push really hard on it.”

Microsoft HoloLens hands on: It’s early, but it’s already nifty

I was able to try out HoloLens at Microsoft’s headquarters on Wednesday. HoloLens is an augmented reality headset running what Microsoft thinks will be the future of computing: Windows Holographic. But it’s not Google Glass or Oculus Rift. The headset places virtual objects in the space around you, which you see through clear glass-like lenses, instead of immersing you in a completely fictional world on a screen.

Unfortunately, I have no photos of the headsets I tested, although concept images and renders are available from HoloLens.com. That’s because Microsoft didn’t let any cameras into the HoloLens demos, given that HoloLens isn’t that close to being a product yet (and letting the unwashed masses test a not-ready-for-prime-time product can be embarrassing). Although Microsoft said it will come out as part of the Windows 10 rollout — billed as sometime in 2015 — the developer versions I was able to test out are not the slick all-in-one devices Microsoft showed off on stage and Wired wrote about.

The version I tested was very much a prototype, warts and all: the HoloLens hardware was strapped to a fitting mechanism more often found on climbing helmets, and the “first of its kind” “Holographic Processing Unit” was a little smaller than a Mac Mini and needed to be worn around my neck. And it wasn’t exactly mobile; the dev unit I tested needed a connected wire for power. I understand this was a prototype unit for testing and development, but that doesn’t bode well for the product’s battery life when it’s eventually released.

But what I did get to test out was compelling. I “donned” the device and tried out four applications for HoloLens: HoloBuilder, an augmented reality sibling of Minecraft; HoloStudio, a 3D modeling application; OnSight, a Mars simulation developed in conjunction with NASA’s Jet Propulsion Laboratory; and a version of Skype.

HoloBuilder was the only game I tried out, and suddenly Microsoft’s $2.5 billion purchase of Mojang made a lot more sense. The app makes a room in your home into a Minecraft world. Using my line of sight as a cursor, I dug through a table, blew up a wall, and explored my environment. HoloLens knows the surfaces around you and it did a great job of sensing depth — which is one of the big advancements that Microsoft is touting. After I blew up a wall, I found a whole new lava-covered world which really looked like it was inside the wall. You use voice commands like “shovel” to call up tools.
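
Using your line of sight as a cursor on real surfaces boils down to casting a ray from the headset through the reconstructed geometry of the room and placing the cursor where it first hits. The minimal sketch below casts the gaze ray against a single flat surface; a real spatial-mapping system like the one HoloLens uses would test the ray against a full mesh instead.

```python
# Minimal gaze-cursor ray cast: intersect the gaze ray with a sensed surface,
# here simplified to a single infinite plane (a table top at y = 0.75 m).
# A real spatial-mapping system would test the ray against a mesh instead.
import numpy as np

def raycast_plane(origin, direction, plane_point, plane_normal):
    """Return the hit point of the ray on the plane, or None if it points away."""
    origin, direction = np.asarray(origin, float), np.asarray(direction, float)
    plane_point, plane_normal = np.asarray(plane_point, float), np.asarray(plane_normal, float)
    denom = np.dot(direction, plane_normal)
    if abs(denom) < 1e-9:
        return None                               # gaze is parallel to the surface
    t = np.dot(plane_point - origin, plane_normal) / denom
    return origin + t * direction if t > 0 else None

head = [0.0, 1.6, 0.0]              # eye height, metres
gaze = [0.0, -0.5, -1.0]            # looking slightly downward and forward
cursor = raycast_plane(head, gaze, plane_point=[0, 0.75, 0], plane_normal=[0, 1, 0])
print("cursor lands at:", cursor)
```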

HoloStudio is a modeling app that lets you build 3D models in space. According to Microsoft, after you build your model, you can 3D print it and make it a real object — several Microsoft people said that HoloLens was the best “print preview” for 3D printing.

But the models you can create in HoloLens usually have multiple colors and parts, and unless you know how to break it down into components a 3D printer can handle, you’ll probably have to send your HoloStudio files to a professional 3D printer to make them into reality.

I didn’t get to use HoloStudio but I saw a 30-minute demo. From what I saw, the interface really reminded me of the Sims — colorful, friendly, and intuitive. It did not look like a professional 3D modeling program like CAD; it looked like consumer software.

One thing you have to realize when you don HoloLens is that there aren’t any cameras on you; when you interact with other people, you might be able to see them, but they can’t see you. That really came to light when using Skype on HoloLens.

I videoconferenced with someone who gave me instructions on how to install a light switch. I could see him, since he was running Skype on a conventional device with a front-facing camera. He could see what I could see, but he couldn’t see me. I pinned his visage right above the problem I needed to solve, and he gave me intelligent instructions about what to do. It’s easy to see HoloLens being used in industrial capacities in the same way.

NASA clearly thinks there’s some potential here too, and it helped Microsoft develop OnSight, an app which interfaces with the software that NASA uses to plan what the Mars rover Curiosity is doing. HoloLens threw me onto a very detailed surface replication of Mars, down to individual rocks. I could click on rocks using an “air tap” gesture and explore the environment.

When wearing HoloLens and checking out a computer running NASA’s software, I found I could see the screen and work on a conventional desktop. The demo even included an example of dragging the mouse off the desktop’s screen and into my simulated Mars landscape.

I conferenced with a JPL employee, presumably wearing HoloLens, who demonstrated how HoloLens could help scientists from around the world collaborate on the Curiosity mission. I could see where he was looking and talk to him with minimal lag about what Curiosity should do next. But remember, there are no cameras on you. The avatar of the JPL employee I saw was a golden rendered human figure, reminiscent of a yellower version of Dr. Manhattan from Watchmen.

HoloLens appears to be using a prism projector to display virtual objects, which is the same display technology that Google Glass uses. You can only see virtual objects — holograms — in the center of your field of vision, and there’s an outlined rectangle in which virtual objects can appear. So while I was travelling to Mars, I still saw the Microsoft offices in the periphery of my vision. But after a while, I found myself immersed. I found the images clear and sharp, and there wasn’t a lot of lag displaying new virtual objects when I quickly looked at something else. The HoloLens also has two little speakers that rest just above your ears.

I also found that there’s a bit of a problem with eye contact while wearing HoloLens. Many of the Microsoft demoers didn’t want to look in my eyes for extended periods of time — in their defense, I did look like a cyborg — which may be why Microsoft is covering the final design with a big Marshawn Lynch-style tinted eyeguard.

HoloLens, Microsoft tells me, is a full Windows 10 computer. But there are a lot of unanswered questions.

Microsoft did not offer information on availability, price, what the “HPU” includes, any specs really, or any gestures you can do beyond the simple “air tap.” We don’t really know which sensors are included, or the resolution of the optics, or how standard Windows tasks, like writing a Word document, will work on HoloLens.

But that wasn’t the point of Microsoft’s big reveal. Very few companies have a working augmented reality product ready to be launched to the public, and Microsoft just leapfrogged all of them.