Microsoft and Amazon announce deep learning library Gluon

Microsoft has announced a new partnership with Amazon to create an open-source deep learning library called Gluon. The idea behind Gluon is to make artificial intelligence more accessible and valuable.

According to Microsoft, the library simplifies the process of creating deep learning models and will enable developers to run multiple deep learning libraries. This announcement follows their introduction of the Open Neural Network Exchange (ONNX) format, an open format for representing deep learning models that is another step toward an open AI ecosystem.

Gluon supports both symbolic and imperative programming, something many other toolkits do not, Microsoft explained. It will also support hybridization of code, allowing compute graphs to be cached and reused in future iterations. It offers a layers library of pre-built building blocks for defining model architectures. Gluon natively supports loops and ragged tensors, allowing for highly efficient execution of RNN and LSTM models, and it supports sparse data and operations as well as advanced scheduling across multiple GPUs.
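The imperative-versus-symbolic distinction is easiest to see in miniature. The following is a conceptual sketch in plain Python, not the actual Gluon API: imperative code runs each operation immediately, while a "hybridized" model records the operations once into a cached graph and replays it on later calls.

```python
# Conceptual sketch (plain Python, NOT the real Gluon API) of
# imperative execution vs. a cached, reusable compute graph.

def imperative_forward(x):
    # Imperative style: every operation executes immediately.
    h = x * 2          # stand-in for a network layer
    return h + 1       # stand-in for a second layer

class CachedGraph:
    """Symbolic style: operations are recorded once, then the cached
    graph is replayed on every call (hybridization, in spirit)."""
    def __init__(self):
        # The "recorded" compute graph: an ordered list of ops.
        self.ops = [lambda v: v * 2, lambda v: v + 1]

    def __call__(self, x):
        for op in self.ops:   # replay the cached ops in order
            x = op(x)
        return x

net = CachedGraph()
assert imperative_forward(3) == net(3) == 7
```

Both styles compute the same result; the cached-graph form is what lets a framework optimize and reuse the graph instead of re-tracing Python on every forward pass.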

“This is another step in fostering an open AI ecosystem to accelerate innovation and democratization of AI, making it more accessible and valuable to all,” Microsoft wrote in a blog post. “With Gluon, developers will be able to deliver new and exciting AI innovations faster by using a higher-level programming model and the tools and platforms they are most comfortable with.”

The library will be available for Apache MXNet or Microsoft Cognitive Toolkit. It is already available on GitHub for Apache MXNet, with Microsoft Cognitive Toolkit support on the way.

Amazon releases new compiler for AI frameworks

Amazon is addressing artificial intelligence development challenges with a new end-to-end compiler solution.

The NNVM compiler, developed by AWS and a team of researchers from the University of Washington’s Allen School of Computer Science & Engineering, is designed for deploying deep learning frameworks across a number of platforms and devices.

“You can choose among multiple artificial intelligence (AI) frameworks to develop AI algorithms. You also have a choice of a wide range of hardware to train and deploy AI models. The diversity of frameworks and hardware is crucial to maintaining the health of the AI ecosystem. This diversity, however, also introduces several challenges to AI developers,” Mu Li, a principal scientist for AWS AI, wrote in a post.

According to Amazon, there are three main challenges AI developers come across today: switching between AI frameworks, maintaining multiple backends, and supporting multiple AI frameworks. The NNVM compiler addresses these challenges by compiling front-end workloads directly into hardware back-ends. “Today, AWS is excited to announce, together with the research team from UW, an end-to-end compiler based on the TVM stack that compiles workloads directly from various deep learning frontends into optimized machine codes,” Li wrote. The TVM stack, also developed by the team, is an intermediate representation stack designed to close the gap between deep learning frameworks and hardware backends.

“While deep learning is becoming indispensable for a range of platforms — from mobile phones and datacenter GPUs, to the Internet of Things and specialized accelerators — considerable engineering challenges remain in the deployment of those frameworks,” said Allen School Ph.D. student Tianqi Chen. “Our TVM framework made it possible for developers to quickly and easily deploy deep learning on a range of systems. With NNVM, we offer a solution that works across all frameworks, including MXNet and model exchange formats such as ONNX and CoreML, with significant performance improvements.”

The NNVM compiler is made up of two components from the TVM stack: NNVM for computation graphs and TVM for tensor operators, according to Amazon.

“NNVM provides a specification of the computation graph and operator with graph optimization routines, and operators are implemented and optimized for target hardware by using TVM. We demonstrated that with minimal effort this compiler can match and even outperform state-of-the-art performance on two radically different hardware: ARM CPU and Nvidia GPUs,” Li wrote. “We hope the NNVM compiler can greatly simplify the design of new AI frontend frameworks and backend hardware, and help provide consistent results across various frontends and backends to users.”
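The division of labor Li describes, graph-level optimization in NNVM and per-target operator lowering in TVM, can be sketched at toy scale. This is a hypothetical miniature in plain Python, not the real NNVM/TVM API: a tiny computation graph gets a graph-level optimization pass, then is "lowered" to a callable for a chosen backend.

```python
# Hypothetical miniature of the NNVM/TVM split (NOT the real API):
# one shared graph representation, a graph-level optimization pass,
# and per-backend operator lowering.

# Toy computation graph: an ordered list of (op, constant) pairs.
graph = [("mul", 2), ("add", 0), ("add", 1)]

def optimize(g):
    # Graph-level pass (NNVM's role): drop no-op additions of zero.
    return [(op, c) for op, c in g if not (op == "add" and c == 0)]

def lower(g, backend):
    # Operator lowering (TVM's role): map each graph op to a concrete
    # implementation for the target. Real TVM emits optimized machine
    # code per hardware target; here both "targets" are plain Python.
    table = {
        "cpu": {"mul": lambda x, c: x * c, "add": lambda x, c: x + c},
        "gpu": {"mul": lambda x, c: x * c, "add": lambda x, c: x + c},
    }[backend]

    def run(x):
        for op, c in g:
            x = table[op](x, c)
        return x
    return run

compiled = lower(optimize(graph), "cpu")
assert compiled(3) == 7  # (3 * 2) + 1, with the add-zero op removed
```

The point of the two-layer design is that a new frontend only has to emit the shared graph form, and a new hardware backend only has to supply operator implementations; neither needs to know about the other.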

Comparing Alexa, Google Assistant, Cortana and Siri smart speakers

The smart home assistant race has been building to a fever pitch over the course of the last couple of years. Things really came to a head these past two weeks, when Amazon, Google and Sonos all held big events highlighting their latest smart speaker plays, making the already busy field a heck of a lot more crowded.

The burgeoning category can be a tough one to navigate. A lot of picking the right speaker for your own needs comes down to your assistant of choice — that, in turn, has a lot to do with both feature sets and your own mobile operating system loyalties. Each has benefits and drawbacks — Amazon has cornered the home, Apple has done a good job in mobile and Google has straddled the two better than anyone else. And Microsoft, well, a lot of people own Windows computers, at least.

Things can be equally complex from a hardware standpoint, between first-party products and the increasing presence of third-parties like Sony, Sonos and JBL. Devices also run a pretty wide price gamut, from ~$50 to $300. Some focus on premium sound, some feature screens, and some even let you choose between multiple assistants.

Here’s a quick breakdown to help make navigating these waters seem a bit less treacherous.


Source: TechCrunch

Here are all of Amazon’s new Echo gadgets

Amazon just announced about a million new Echo speakers at a surprise event today, ranging from an updated classic Echo, to an alarm-clock style Echo Spot model, to… a talking fish? Let’s run through them all.

The new and improved Amazon Echo

Amazon’s main Echo is getting a refresh that makes it look an awful lot more like a Google Home, thanks to interchangeable fabric, metal, and wood covers. It’s a lot shorter than the original Echo, and features improved audio thanks to a dedicated tweeter, a down-firing woofer, and Dolby tuning. Amazon says voice recognition has been improved too.


The best part: It now starts at just $99. That gives Amazon a particular advantage over Apple’s HomePod and Sonos’ speakers, which start at $300 and $200, respectively. Of course, this says nothing about their respective sound quality, but for people simply looking for a decent wireless speaker with multi-room sound, the Echo could be an enticing option when it starts shipping next month.

Echo Plus


The Echo Plus looks a lot more like the original Echo, but its main trick is that it can act as a smart home hub, eliminating the need for yet another thing to plug into a wall any time you want to add a new IoT device, as well as any accompanying apps. It also features the enhanced sound and Dolby tuning, but will sell for $149, including a free Philips Hue lightbulb for a limited time. It will go on sale next month and comes in black, white, and silver finishes.

Echo Spot

The Echo Show and Echo Dot apparently had a baby and called it Spot. The tiny little Echo features a 2.5-inch circular touchscreen that displays information you might’ve asked for, or can even work as a monitor for a home security system. A camera allows you to have video calls on the tiny screen, which I can imagine looking very cute but feeling pretty cumbersome.


Of course, it also just works as an Echo, featuring far-field microphones for your commands, and it comes with Bluetooth connectivity and a 3.5mm headphone jack to connect it to your existing sound system. The Echo Spot will sell for $130 and go on sale in the US in December. Pre-orders are open now.

Echo Connect


Amazon is increasingly trying to position its Echo devices as all-around communications devices, and the new Echo Connect hooks into your existing landline. You can ask Alexa to call anyone on your contact list, and Alexa will announce people’s names out loud when they call you. It’s a small set of features, but at $35, it’s not too bad.

Echo Buttons


Amazon is apparently turning Alexa into a party game AI too. Echo Buttons are, well, buttons that light up and can be used to play Alexa-based games like trivia or music games. Think of them like buzzer buttons, but for a game dictated by an artificial intelligence. They cost $20 for a pair. Amazon says the buttons are just the first of more Alexa-connected trinkets to come.

Amazon appears ready to sell the Apple TV again

Apple and Amazon’s long streaming feud appears to be nearly over. After Amazon removed the Apple TV and Chromecast from its online store two years ago, it has returned to selling at least the former.

Well, it did for a while – 9to5Mac spotted a listing for the Apple TV 4K on Amazon earlier today, but it seems to have disappeared once more. I imagine that’s just a technicality, and that Amazon plans to sell the Apple TV soon – perhaps closer to the launch of Prime Video on tvOS.

Earlier this year, Apple announced Amazon Prime Video is coming to the Apple TV in the fall, so the move isn’t surprising, but it’s still nice to see. Back when the streaming device was removed, Amazon cited “customer confusion” as the reason, a lazy way of saying it didn’t want the Apple TV competing with its own streaming devices.

Now that Apple and Amazon have worked out a deal, here’s hoping Google is next. I’d like to watch me some “Mozart in the Jungle” on my Android TV, thank you very much.


Apple ditches Bing for Google as the default web search provider for iOS

Apple has confirmed that it has changed the default web search provider from Bing to Google for web searches in Siri, Spotlight, and elsewhere. Users will now get web results from Google rather than Bing in Safari as well.

The company says it wants to bring users the best possible experience while maintaining its strong working relationships with both Google and Microsoft. The change should improve Siri’s web search results, which previously did not always satisfy users.

“We value our relationship with Apple and look forward to continuing to partner with them in many ways, including on Bing Image Search in Siri, to provide the best experience possible for our customers. Bing has grown every year since its launch, now powering over a third of all the PC search volume in the U.S., and continues to grow worldwide. It also powers the search experiences of many other partners, including Yahoo (Verizon), AOL and Amazon, as well as the multi-lingual abilities of Twitter. As we move forward, given our work to advance the field of AI, we’re confident that Bing will be at the forefront of providing a more intelligent search experience for our customers and partners,” Microsoft’s statement reads.

The change is expected to roll out with the company’s macOS High Sierra release today. It covers both web links and video results: video results will come directly from YouTube, while web image results will continue to come from Bing for the time being.

Which Programming Language Should I Learn To Get A Job At Google, Facebook, or Amazon?

The choice of programming language acts as a big factor for a novice in the world of programming. If one stumbles upon a language whose syntax is too complex, one would definitely reconsider learning it. But, what if you’ve crossed that entry barrier and you’re looking to make a career and land a job at heavyweights like Google, Facebook, or Amazon?

You might have come across articles that list which programming languages are used at big companies like Google, Facebook, etc. But a company’s choice of languages doesn’t necessarily reflect what it looks for while hiring a candidate. These companies are unlikely to be interested in interviewing someone who is an expert in only a single programming language.

Similar views were also expressed by Justin Mattson, a senior software engineer on Fuchsia at Google. He answered a user’s query on Quora (via Inc.).

In his answer, Mattson says that if a company is hung up on the fact that you know language X, but not language Y, you shouldn’t be interested in working there. “Languages are a tool, like a saw. Whether the saw is manual, table or laser is less relevant than understanding the basic principles of wood and how cutting it happens,” he writes.

A person may well be an expert in a popular programming language, but that alone doesn’t make him/her a good engineer. Different programming languages teach us different things: C and C++ teach you what’s happening with memory and other low-level operations, while Java, Ruby, etc., test your design choices. So, it’s important that you learn more languages.

“Don’t learn just one, learn at least two, hopefully, three. This will give you a better sense of what features are often common to most languages and what things differ,” Mattson adds.

But, what about expertise in a single programming language?


Is having complete command over one language completely irrelevant? Answering this question, Mattson says that one must become an expert in the language one uses, instead of focusing on what a company wants. “If you say you’re an expert in Python and then can’t use it properly in the interview, that is a problem,” he adds.

In a nutshell, if your fundamentals and design choices are strong, the choice of programming language isn’t that important. At such companies, you’ll need to deal with multiple languages and pick up new ones as needed.

Amazon Reportedly Working on Smart Glasses With Integrated Alexa AI

Amazon is actively developing a pair of smart glasses with the Alexa virtual assistant built in, the Financial Times reported on Wednesday.

Designed like a regular pair of spectacles, the device will enable Alexa to be invoked by the wearer at any time and at all places, the report said, citing people familiar with Amazon’s plans.

The founder of Google Glass is said to be working on Amazon’s Alexa smart glasses

The company is reportedly including a bone-conduction audio system in the specs so that the wearer can hear Alexa’s voice without inserting headphones.

The founder of Google Glass, Babak Parviz, is said to have been working on the Alexa product since he was hired by Amazon in 2014. Earlier this year, Google re-introduced its Google Glass wearable headset after discontinuing production in 2016.

In addition, The Financial Times reports that Amazon is also working on a more conventional home security camera, and that one or both of these products may appear before the end of this year.

Previous reports have claimed that Amazon is working on a successor to its popular Echo connected smart speaker and plans to bring the device to market this year in time to compete with Apple’s HomePod, which is set to launch this December.

According to rumors that first surfaced in 2016, Apple is also working on several different kinds of smart glasses, with the main application of bringing augmented reality experiences to the wearer.

Reports this year suggest Apple’s glasses will connect wirelessly to the iPhone, much like the Apple Watch, and will display “images and other information to the wearer”.

Alexa can find ‘baby making’ music on Amazon’s streaming services

Amazon announced today that users of its streaming service Prime Music, which is free with a Prime membership, and its subscription-based Amazon Music Unlimited can now ask Alexa to find tunes appropriate for various activities.

As of now, over 500 different activity-based requests are supported including music for meditation, partying and even “getting pumped.” The new feature is available immediately to users with Alexa-enabled devices.

The new voice controls were geared towards activities that have been requested most often by Alexa users and listeners of Amazon’s music streaming services.

In the announcement, the company said that 27 percent of all activity requests come from users who want to relax. Meditation is the number one requested activity, with spa, party and dinner rounding out the top four.

Along with specific activities, users can also request a particular genre to go with it. Amazon includes the examples “Alexa, play classical music for sleeping,” “Alexa, play pop music for cooking,” and “Alexa, play baby making jazz music.”

Because nothing sets the mood like your partner telling their virtual assistant to find a playlist suitable for baby making.

AWS broadens its lineup of GPU instances with the new Nvidia Tesla M60-based G3 family

Amazon Web Services (AWS) has launched a new family of high performance Nvidia-based GPU instances.

The new “G3” instances are powered by Nvidia’s Tesla M60 GPUs, and succeed the former G2 instances, which had up to four Nvidia GRID GPUs with 1,536 CUDA cores each.

As with G2, which launched in 2013, the new G3 instances are targeting applications that need huge parallel processing power, such as 3D rendering and visualization, virtual reality, video encoding, and remote graphics workstation applications.

AWS is offering three flavors of the G3 instance, with one, two, or four GPUs. Each GPU has 8 GB of GPU memory, 2,048 parallel processing cores, and a hardware encoder that supports up to 10 H.265 streams and 18 H.264 streams.

AWS notes that the G3 instances support Nvidia’s GRID Virtual Workstation, and are capable of supporting four 4K monitors.

AWS claims the largest G3 instance, the g3.16xlarge, has twice the CPU power and eight times the host memory of its G2 instances. It has four GPUs, 64 vCPUs, and 488GB of RAM.

The virtual CPUs use Intel’s Xeon E5-2686 v4 (Broadwell) processors. The largest G2 instance featured 60GB RAM.

On-demand pricing for the G3 instances is $1.14 per hour for the g3.4xlarge, $2.28 per hour for the g3.8xlarge, and $4.56 per hour for the g3.16xlarge. The instances are available only with AWS Elastic Block Storage, unlike the G2 instances, which are available with SSD storage.
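A quick sanity check on those listed prices shows they scale exactly linearly with GPU count, working out to $1.14 per GPU-hour at every size:

```python
# On-demand G3 prices from the article: (GPU count, $/hour).
g3_prices = {
    "g3.4xlarge":  (1, 1.14),
    "g3.8xlarge":  (2, 2.28),
    "g3.16xlarge": (4, 4.56),
}

# Every size costs the same $1.14 per GPU-hour.
per_gpu = {name: hourly / gpus for name, (gpus, hourly) in g3_prices.items()}
assert all(abs(rate - 1.14) < 1e-9 for rate in per_gpu.values())
```

So doubling up on GPUs carries no per-GPU premium within the family; the sizes differ only in how much capacity you rent at once.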

The G3 instances are available in US East (Ohio), US East (N. Virginia), US West (Oregon), US West (N. California), AWS GovCloud (US), and EU (Ireland). AWS is planning to expand the offering to more regions in the coming months.

AWS has continued to broaden its lineup of GPU instances over the years. Back in 2013 it was pitching the G2 family for machine learning and molecular modeling, but these applications are now catered to with its P2 instances, which it launched in September.

The largest P2 instance offers 16 GPUs with a combined 192GB of video memory. They also feature up to 732 GB of host memory, and up to 64 vCPUs using custom Intel Xeon E5-2686 v4 Broadwell processors.

“Today, AWS provides the broadest range of cloud instance types to support a wide variety of workloads. Customers have told us that having the ability to choose the right instance for the right workload enables them to operate more efficiently and go to market faster, which is why we continue to innovate to better support any workload,” said Matt Garman, Amazon EC2 vice president.

Microsoft has also been beefing up its GPU instances for Azure customers. The company launched its NC-Series compute-focused GPU instances last year, offering up to four Nvidia Tesla M60 GPUs and 244GB RAM with 24 cores using Intel Xeon E5-2690 v3 (Haswell) processors.

In May it announced the forthcoming ND-series, which uses Nvidia Pascal-based Tesla P40 GPUs, along with an updated lineup of NC-series instances. The largest ND-series instance features 24 CPUs, four P40 GPUs, and 448GB RAM. The largest NC-series instance, the NC24rs_v2, features 24 CPUs, four Tesla P100 GPUs, and 448GB RAM.