Google open sources spatial audio SDK for realistic VR and AR experiences

Google is looking to improve virtual reality experiences with new experiments and SDKs. The company announced that it is open sourcing Resonance Audio, its spatial audio SDK released last year. Google is also providing new insights into its experiments with light fields, a technology for advanced capture, stitching, and rendering.

“To accelerate adoption of immersive audio technology and strengthen the developer community around it, we’re opening Resonance Audio to a community-driven development model. By creating an open source spatial audio project optimized for mobile and desktop computing, any platform or software development tool provider can easily integrate with Resonance Audio. More cross-platform and tooling support means more distribution opportunities for content creators, without the worry of investing in costly porting projects,” Eric Mauskopf, product manager at Google, wrote in a post.

According to the company, spatial audio is essential to providing a sense of presence within virtual reality and augmented reality worlds.

The open source project will include a reference implementation of YouTube’s Ambisonic-based spatial audio decoder, which is compatible with the Ambisonic format used across the industry. It will also feature encoding, sound field manipulation and decoding techniques, and head-related transfer functions (HRTFs) to achieve rich spatial audio. Additionally, Google will open its library of optimized DSP classes and functions.

In addition, it is being open sourced as a standalone library along with associated engine plugins, a VST plugin, tutorials, and examples.
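Resonance Audio’s own API ships as the standalone library and plugins described above, but the core idea of HRTF-based spatialization is easy to see in miniature. Below is an illustrative Swift sketch using Apple’s built-in AVAudioEnvironmentNode rather than Resonance Audio itself; the sample rate and the source and listener positions are arbitrary example values:

```swift
import AVFoundation

// Minimal HRTF spatialization sketch on Apple's audio stack.
// This is NOT the Resonance Audio API, just the same concept in miniature.
let engine = AVAudioEngine()
let environment = AVAudioEnvironmentNode()
let player = AVAudioPlayerNode()

engine.attach(environment)
engine.attach(player)

// 3D mixing requires a mono source feeding the environment node.
let mono = AVAudioFormat(standardFormatWithSampleRate: 44_100, channels: 1)
engine.connect(player, to: environment, format: mono)
engine.connect(environment, to: engine.mainMixerNode, format: nil)

// Head-related transfer functions supply the directional cues.
player.renderingAlgorithm = .HRTF
player.position = AVAudio3DPoint(x: 2, y: 0, z: -1)  // right of and ahead of the listener
environment.listenerPosition = AVAudio3DPoint(x: 0, y: 0, z: 0)

do {
    try engine.start()
    // Schedule a mono buffer or file on `player`, then:
    player.play()
} catch {
    print("audio engine failed to start: \(error)")
}
```

Updating `player.position` each frame as the source or listener moves is what produces the impression that sounds occupy the space around you.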

Google says that since its launch in November, Resonance Audio has been used in many applications, such as Pixar’s Coco VR for Gear VR and Disney’s Star Wars: Jedi Challenges app for Android and iOS.

Another way Google has been trying to create a sense of presence in VR is through its experiments with light fields. According to the company, light fields convey presence by providing motion parallax and realistic textures and lighting.

“With light fields, nearby objects seem near to you—as you move your head, they appear to shift a lot. Far-away objects shift less and light reflects off objects differently, so you get a strong cue that you’re in a 3D space. And when viewed through a VR headset that supports positional tracking, light fields can enable some truly amazing VR experiences based on footage captured in the real world,” the team wrote in a post.

As part of its experiment, Google is releasing an app on Steam VR called “Welcome to Light Fields” to show the potential of this technology.


Microsoft Windows 10 Fall Creators Update SDK now available

Developers can start preparing their applications for the next update of Windows 10 with the newly available Windows 10 Fall Creators Update SDK. The SDK features new tools for building mixed reality experiences, modernizing applications for today’s workplace, and building and monetizing games and apps.

“Windows 10 Fall Creators Update provides a developer platform that is designed to inspire the creator in each of us – empowering developers to build applications that change the way people work, play and interact with devices. To truly fulfill this platform promise, I believe that our developer platform needs to be centered around people and their needs. Technology should adapt and learn how to work with us,” Kevin Gallo, corporate vice president of the Windows developer platform, wrote in a post.

According to the company, the next wave of virtual and augmented reality is mixed reality. With Windows Mixed Reality, developers can create immersive experiences that are reusable across platforms and device form factors. “Windows 10 was designed from ground up for spatial interactions and the next wave in this journey is Windows Mixed Reality, uniting the digital and real world to create a rich, immersive world. As humans, we interact with space constantly, and Windows Mixed Reality will feel the most natural for users,” Gallo wrote.

To modernize apps for the workplace, the SDK enables developers to create and update existing apps with Visual Studio 2017 version 15.4, integration of .NET Standard 2.0, and an improved Windows 10 deployment system.

In addition, developers can build better game and app experiences with the Expanded Resources feature in the Fall Xbox One Update, the Xbox Live Creators Program, and the Mixer SDKs for major game engines and languages.

Gartner’s top 10 technology trends for 2018

With only a couple of months left in the year, Gartner is already looking ahead to the future. The research firm announced its annual top strategic technology trends at the Gartner Symposium/ITxpo this week.

Gartner chooses its trends based on whether they have the potential to disrupt industries and break out into something more broadly impactful.

The top 10 strategic technology trends, according to Gartner, are:

    1. AI foundation: Last year, the organization included artificial intelligence and machine learning as a trend of its own on the list, but with AI and machine learning becoming more advanced, Gartner is looking at how the technology will be integrated over the next five years. “AI techniques are evolving rapidly and organizations will need to invest significantly in skills, processes and tools to successfully exploit these techniques and build AI-enhanced systems,” said David Cearley, vice president and Gartner Fellow. “Investment areas can include data preparation, integration, algorithm and training methodology selection, and model creation. Multiple constituencies including data scientists, developers and business process owners will need to work together.”
    2. Intelligent apps and analytics: Continuing with its AI and machine learning theme, Gartner predicts new intelligent solutions that change the way people interact with systems, and transform the way they work.
    3. Intelligent things: Last in the AI technology trend area is intelligent things. According to Gartner, these go beyond rigid programming models and exploit AI to provide more advanced behaviors and interactions between people and their environment. Such solutions include autonomous vehicles, robots, and drones, as well as extensions of existing Internet of Things solutions.
    4. Digital twin: A digital twin is a digital representation of real-world entities or systems, Gartner explains. “Over time, digital representations of virtually every aspect of our world will be connected dynamically with their real-world counterpart and with one another and infused with AI-based capabilities to enable advanced simulation, operation and analysis,” said Cearley. “City planners, digital marketers, healthcare professionals and industrial planners will all benefit from this long-term shift to the integrated digital twin world.”
    5. Cloud to the edge: The rise of the Internet of Things has brought up the notion of edge computing. According to Gartner, edge computing is a form of computing topology that processes, collects and delivers information closer to its source. “When used as complementary concepts, cloud can be the style of computing used to create a service-oriented model and a centralized control and coordination structure with edge being used as a delivery style allowing for disconnected or distributed process execution of aspects of the cloud service,” said Cearley.
    6. Conversational platforms: Conversational platforms such as chatbots are transforming how humans interact with the emerging digital world. These platforms take the form of question-and-command experiences, in which a user asks a question and the platform is able to respond.
    7. Immersive experience: In addition to conversational platforms, experiences such as virtual, augmented and mixed reality will also change how humans interact and perceive the world. Outside of video games and videos, businesses can use immersive experience to create real-life scenarios and apply it to design, training and visualization processes, according to Gartner.
    8. Blockchain: Once again, blockchain makes the list for its evolution into a digital transformation platform. Beyond the financial services industry, Gartner sees blockchain being used in a number of different areas, such as government, healthcare, manufacturing, media distribution, identity verification, title registry, and supply chain.
    9. Event driven: New to this year’s list is the idea that a digital business must always be looking for new business opportunities. “A key distinction of a digital business is that it’s event-centric, which means it’s always sensing, always ready and always learning,” said Yefim Natis, vice president, distinguished analyst and Gartner Fellow. “That’s why application leaders guiding a digital transformation initiative must make ‘event thinking’ the technical, organizational and cultural foundation of their strategy.”
    10. Continuous adaptive risk and trust: Lastly, the organization sees digital business initiatives adopting a continuous adaptive risk and trust assessment (CARTA) model as security becomes more important in a digital world. CARTA enables businesses to make real-time, risk- and trust-based decisions, according to Gartner.

“Gartner’s top 10 strategic technology trends for 2018 tie into the Intelligent Digital Mesh. The intelligent digital mesh is a foundation for future digital business and ecosystems,” said Cearley. “IT leaders must factor these technology trends into their innovation strategies or risk losing ground to those that do.”

To compare, last year’s trends are available here.

In addition, the organization also announced top predictions for IT organizations and users over the next couple of years. The predictions include: early adopters of visual and voice search will see an increase in digital commerce revenue by 30% by 2021; five of the top seven digital giants (Alibaba, Amazon, Apple, Baidu, Facebook, Google, Microsoft and Tencent) will willfully self-disrupt by 2020; and IoT technology will be in 95% of electronics by 2020.

Google reveals Pixel 2 and Pixel 2 XL

Google officially unveiled the next generation of its Pixel phones, the Pixel 2 and Pixel 2 XL, at an event in San Francisco on Wednesday. The devices are built on the capabilities of Google Assistant, continuing Google’s shift “from mobile-first to AI-first” in computing, Google CEO Sundar Pichai said.

The Pixel 2 and Pixel 2 XL aren’t much of a departure from the original Pixel phones in terms of design. The standard Pixel 2 features a 5-inch OLED display, while the Pixel 2 XL sports a massive 6-inch P-OLED display with an 18:9 aspect ratio and curved edges.

The phones have an all-aluminum body, with a smaller glass panel on the back. The fingerprint scanner remains in the same spot in the middle of the back of the phone.

The standard Pixel 2 is available in three colors: Kinda Blue, Just Black, and Clearly White. The Pixel 2 XL, however, is only available in Just Black and a separate Black and White colorway.


Google’s search box will now be at the bottom of the screen and will remain static while the user moves between home screens. A new feature called Active Edge will allow users to access Google Assistant by simply squeezing the sides of the phone. It will work with a case, and machine learning will be able to identify an “intentional squeeze.”

Google is continuing the trend of including premium camera technology in the Pixel line with the Pixel 2. The 12.2 MP camera, which achieved a DxOMark score of 98, is tuned for AR content, which it can process at 60fps. The camera will also include a portrait mode, similar to what Apple introduced with its iPhone 7 Plus. This feature could be especially helpful for marketing professionals and social media pros who need to capture the best images for their content.


The iPhone X’s notch is basically a Kinect

Sometimes it’s hard to tell exactly how fast technology is moving. “We put a man on the moon using the computing power of a handheld calculator,” as Richard Hendricks reminds us in Silicon Valley. In 2017, I use my pocket supercomputer of a phone to tweet with brands.

But Apple’s iPhone X provides a nice little illustration of how sensor and processing technology has evolved in the past decade. In June 2009, Microsoft unveiled the Kinect, a living-room depth camera the size of a soundbar. In September 2017, Apple put all that tech into the iPhone X’s notch.

Well, minus the tilt motor.

Microsoft’s original Kinect hardware was powered by a little-known Israeli company called PrimeSense. PrimeSense pioneered the technology of projecting a grid of infrared dots onto a scene, then detecting them with an IR camera and ascertaining depth information through a special processing chip.

The output of the Kinect was a 320 x 240 depth map with 2,048 levels of sensitivity (distinct depths), based on the 30,000-ish laser dots the IR projector blasted onto the scene in a proprietary speckle pattern.
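The underlying geometry is plain triangulation: a dot’s apparent shift (disparity) between where the projector painted it and where the camera sees it encodes its distance. Here is a rough Swift sketch of that relationship; the focal length and baseline are made-up illustrative numbers, not the Kinect’s actual calibration:

```swift
import Foundation

// Sketch of structured-light triangulation, the approach PrimeSense pioneered.
// The focal length and baseline below are illustrative assumptions,
// not the Kinect's real calibration values.
let focalLengthPx: Float = 580.0   // assumed IR camera focal length, in pixels
let baselineM: Float = 0.075       // assumed projector-to-camera distance, in meters

// Convert one dot's disparity (its pixel shift relative to where it would
// appear at a reference depth) into a depth estimate via Z = f * b / d.
func depthMeters(disparityPx: Float) -> Float? {
    guard disparityPx > 0 else { return nil }   // dot lost or out of range
    return (focalLengthPx * baselineM) / disparityPx
}

// A 320 x 240 depth map is this computation repeated across the frame,
// quantized into the sensor's 2,048 discrete levels (11 bits).
if let z = depthMeters(disparityPx: 14.5) {
    print(String(format: "estimated depth: %.2f m", z))
}
```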

In its day, the Kinect was the fastest-selling consumer electronics device of all time, even as it was widely regarded as a flop for gaming. But the revolutionary depth-sensing tech ended up being a huge boost for robotics and machine vision.

In 2013, Apple bought PrimeSense. Depth cameras continued to evolve: Kinect 2.0 for the Xbox One replaced PrimeSense technology with Microsoft’s own tech and had much higher accuracy and resolution. It could recognize faces and even detect a player’s heart rate. Meanwhile, Intel also built its own depth sensor, Intel RealSense, and in 2015 worked with Microsoft to power Windows Hello. In 2016, Lenovo launched the Phab 2 Pro, the first phone to carry Google’s Tango technology for augmented reality and machine vision, which is also based on infrared depth detection.

And now, in late 2017, Apple is going to sell a phone with a front-facing depth camera. Unlike the original Kinect, which was built to track motion in a whole living room, the sensor is primarily designed for scanning faces and powers Apple’s Face ID feature. Apple’s “TrueDepth” camera blasts “more than 30,000 invisible dots” and can create incredibly detailed scans of a human face. In fact, while Apple’s Animoji feature is impressive, the developer API behind it is even wilder: Apple generates, in real time, a full animated 3D mesh of your face, while also approximating your face’s lighting conditions to improve the realism of AR applications.
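That developer API is surprisingly compact to consume. Below is a minimal, illustrative Swift sketch against ARKit’s face tracking; the class name and print statements are mine, and rendering and error handling are omitted:

```swift
import ARKit

// Minimal face-tracking sketch: ARKit streams an updating 3D face mesh
// plus a per-frame directional light estimate on TrueDepth devices.
final class FaceMeshTracker: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        guard ARFaceTrackingConfiguration.isSupported else { return }  // TrueDepth hardware only
        session.delegate = self
        session.run(ARFaceTrackingConfiguration())
    }

    // Fired as the face mesh updates in real time.
    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let face as ARFaceAnchor in anchors {
            print("face mesh updated: \(face.geometry.vertices.count) vertices")
        }
    }

    // The lighting estimate that helps AR content match the real scene.
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        if let light = frame.lightEstimate as? ARDirectionalLightEstimate {
            print("primary light direction: \(light.primaryLightDirection)")
        }
    }
}
```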

PrimeSense was never solely responsible for the technology in Microsoft’s Kinect — as evidenced by the huge improvements Microsoft made to Kinect 2.0 on its own — and it’s also obvious that Apple is doing plenty of new software and processing work on top of this hardware. But the basic idea of the Kinect is unchanged. And now it’s in a tiny notch on the front of a $999 iPhone.

The impact of virtual and augmented reality on corporate developers

It was more than 30 years ago that Microsoft Windows was first released. At the time, it was a radical departure from the text-based interfaces that dominated most screens. It has been over 25 years since Windows 3.0, the point at which people really started paying attention to Windows. Suddenly, there was a reason to: multitasking was important, and it was something DOS didn’t do. Still, Windows had to fight off the perception that it was for games before it found its footing as a useful productivity tool.

Fast forward to today, when virtual and augmented reality are powering fun games through platforms like the Oculus Rift and hits like Pokémon Go. Games have thrust these technologies into the consciousness of individuals and business leaders, who wonder how they can be used for productivity instead of entertainment. It’s up to today’s corporate developers to take the technologies and make them productive.

The Learning Curve
Like the learning curve for Windows decades ago, the learning curve for virtual and augmented reality isn’t shallow – but it’s one that corporate developers can overcome. While most corporate developers could, historically, safely ignore threading and performance concerns in their corporate applications, that is no longer the case. The need for real-time feedback creates a need to defer processing and focus on the interaction with the user. This means learning – or relearning – how to manage threads in your applications.

It also means looking for optimal processing strategies that most developers haven’t thought about since their computer science textbooks. With Moore’s Law having delivered massive capacity in both central and graphics processing, it’s been some time since most developers have needed to worry about which strategy was fastest. As these platforms emerge, however, it’s necessary to revisit the quest for optimal processing strategies – including the deferral of work onto background threads.
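As a minimal sketch of that deferral pattern (the Mesh type and both helper functions are hypothetical stand-ins, not any particular engine’s API):

```swift
import Foundation

struct Mesh {}                                   // placeholder result type
func computeOcclusionMesh() -> Mesh { Mesh() }   // stand-in for expensive work
func applyToScene(_ mesh: Mesh) {}               // stand-in for rendering/UI work

// Push heavy work off the main thread so the frame loop stays responsive,
// then hop back to the main queue to apply the result.
func refreshSceneGeometry() {
    DispatchQueue.global(qos: .userInitiated).async {
        let mesh = computeOcclusionMesh()        // runs in the background
        DispatchQueue.main.async {
            applyToScene(mesh)                   // applied on the main thread
        }
    }
}
```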

More challenging than development-oriented tasks may be the need to develop models in three-dimensional space. Most developers eventually got decent with image editors to create quick icons that designers could later replace. However, building 3D models is different. It means a different set of tooling and a different way of thinking.

The Applications
Most corporate developers have been relegated to working on applications that were far removed from the reality of the day-to-day business. Recording the transactions, scanning the forms, tracking customer interactions... all were important, but disconnected from making the product, servicing the customer, or getting the goods to the end user. VR and AR are changing that. Instead of living in a world that’s disconnected from how users do their work, VR and AR applications integrate directly into how users work and how they learn.

In the corporate world, VR applications include training with materials that are too expensive or dangerous to work with in reality – and the remote management of robots and drones that do work too difficult for a human to do. Instead of controlling electrons in a computer, VR software is moving atoms or rewiring human brains. Training is no longer boring videos of someone else doing something; it’s an interactive simulation that used to be too expensive to build. Remote control through VR provides the benefits of human judgment without exposing humans to dangerous conditions.

AR can augment humans. Instead of workers having to memorize reams of content, it can be displayed in context. Knowledge management systems have traditionally been boring repositories of information that’s difficult to access. AR connects the knowledge repository with its use.

AR also gives humans access to sensors beyond our five senses. It can bring thermal imaging, acoustic monitoring, and other sensor data into our range of perception through visual or auditory cues. Consider how digital photography transformed the photography industry: now everyone gets immediate feedback and can make adjustments, instead of having to wait for the development process.

The Change
Ultimately, VR and AR mean that developers get the chance to have a greater and more tangible impact on the world around them. They can be a part of augmenting human capacity, reducing risk to humans, and improving training. All it takes is a refocus on threading and performance, and learning a bit about 3D modeling.

We’re all excited about the gaming potential of HoloLens, but Microsoft is also fixated on enterprise AR, much like Google now is with Glass.

During a talk at the CVPR (Computer Vision and Pattern Recognition) conference in Hawaii, Microsoft Research VP Harry Shum revealed that the next version of HoloLens will be boosted by an AI co-processor on its holographic processing unit (HPU).

The aim is to give the headset object and voice recognition skills that work in real time without the need for a cloud connection.

Computer vision and voice recognition have gaming and entertainment potential, and Shum showed off the new chip with a hand tracking and segmentation demo.

However, it’s arguably more useful for businesses. At Build 2017 in May, Microsoft CEO Satya Nadella demonstrated how Lowe’s can use HoloLens to help customers design a kitchen, for instance.

At the same event, Microsoft revealed the potential of HoloLens to help the blind “see” by recognizing objects and describing them. For instance, it can do facial recognition to identify friends and family, find a hotel or apartment room number, and even describe a scene, like a man walking a dog. Basic description chores are possible without the need for a data connection.

Microsoft is designing the AI chip’s silicon in-house. It will run off the HoloLens battery, is fully programmable, and supports a variety of deep learning approaches. It’s meant to be a fast, flexible AI solution that doesn’t require an internet connection to do tasks like object and voice recognition.

Microsoft isn’t alone in building such a chip, as Apple and Google also have AI processors on the go. However, this appears to be the only one designed for a wearable AR device.

The AI co-processor seems to be a central part of its HoloLens strategy. “This is the kind of thinking you need if you’re going to develop mixed reality devices that are themselves intelligent,” Microsoft Research said in its blog.

Microsoft famously skipped HoloLens 2, which was supposed to come out in 2017. Instead, it jumped straight to version 3, which is set to arrive in late 2018 or early 2019.

You might know Lenovo for its laptops and Yoga Books, but the electronics maker also conjures up some pretty far-out concepts that are much more exciting than a 2-in-1.

At its third annual Tech World innovation summit, the company revealed not just a couple of new Yoga Book colors, but also the concept products it’s most proud of, including an augmented reality headset and a smart speaker-projector.

Lenovo’s daystAR headset is a standalone vision processing unit with a 40-degree field of view. It doesn’t need to be connected to a phone or a PC, and the company expects developers to use its homegrown AR platform to create apps for the device.

Next on the list is SmartCast+, which is supposed to boast far more capabilities than today’s voice assistant speakers, such as Amazon’s Echo and Lenovo’s own Echo clone.

If the smart speaker-projector ever becomes a real product, Lenovo wants to give it the ability to recognize sounds and objects, as well as to deliver AR experiences by projecting images onto a wall or a screen.


Lenovo is also eyeing the creation of an AI assistant called CAVA that’s smarter than Siri and Alexa. The company wants to use deep learning to create facial recognition systems and natural language understanding technologies for the AI.

That way, CAVA can truly understand your messages and make recommendations based on what you tell it. If you tell CAVA that you have a meeting in two hours, for instance, it can automatically check the weather and traffic conditions to tell you when to leave.

One of the last two concepts Lenovo showed off is the SmartVest, an ECG-equipped piece of clothing that can monitor your heart rhythm 24/7.

The other one is the Xiaole customer service platform that can learn from interactions with customers in order to make each conversation more natural and personalized.

Lenovo says it sees cooking up concepts as an important part of its R&D process, because it lets the company explore and push boundaries. Unfortunately, there’s no guarantee that any of them will become real products.

We’d sure love to take that headset and speaker-projector for a spin, though; we’ll just have to keep an eye on Lenovo’s future releases.


While mobile VR is a vibrant market these days, thanks to the Gear VR and Google’s Daydream View, the same can’t be said for AR.

If you want to dabble in augmented reality, you’d better be prepared to shell out at least $950 on hardware like the Meta 2, and even more for a beefy PC to run it. Microsoft’s HoloLens, which helped to popularize the dream of AR, still costs a whopping $3,000.

But Mira, a young LA-based startup, is hoping to make things simpler with Prism, its $99 mobile headset. Just drop in an iPhone 7, and you too can view AR atop the real world.


Prism looks like a slimmed down version of the Meta 2, with a similar set of transparent, oversized lenses for displaying AR imagery.

Similar to the Gear VR and Daydream, there’s a slot for your phone (it only works with the iPhone 7 for now). Instead of pointing the screen right at your eyes, though, you position it facing away from you.

A set of mirrors reflects what’s on the screen and repositions it on the front lenses. It might sound like a bit of a hack, but the result is a surprisingly clear set of holographic images in a relatively inexpensive device (not including the cost of the iPhone, of course).

I had no trouble putting on the Prism; even though it looks a bit bulky, it’s significantly lighter and easier to wear than either the Meta 2 (which needs to be tethered to a PC) or the HoloLens.

Mostly, that’s due to the healthy layer of cushioning that rests on your forehead. The front lenses snap on magnetically, allowing you to easily remove them when you need to travel with the Prism.

Mira has also developed a small motion-sensing controller, which is curved and fits into your hand like the Daydream View’s. Most importantly, it also includes a trigger for your index finger like the Gear VR’s remote.

That’s particularly useful for interacting with virtual objects. The remote also sports a touchpad on top, as well as menu and home buttons.

Mira says developers will be able to build both single and multiplayer experiences with its SDK. Your friends will also be able to see your AR adventures on their iOS devices using Spectator Mode.

They can also take photos and videos of you interacting with virtual objects, which makes the Prism experience a bit more communal than VR headsets.

Of course, Prism will only be as useful as the software available for it. Mira says the initial release of the headset is targeted at developers, and it’s partnering with a few studios to build more AR experiences. (You can expect to hear more about those in the coming weeks.)

The company plans to ship Prism to developers this fall, and it should reach consumers by this holiday season. Clearly, Mira has a long and difficult road ahead, but Prism’s low price and relative convenience could help it play an important role in the nascent world of AR.