Google now offers a wall mount for its smallest smart speaker

The Google Home Mini admittedly doesn’t have much of a footprint, but what if you want to separate the smart speaker from the clutter on your desktop or kitchen counter and give it its own place … like on the wall?

Thanks to Google’s launch this week of a wall mount, now you can.

Costing a reasonable $15, the mount was produced by the Mountain View company in partnership with tech accessories outfit Incipio. Available now via the Google Store, the white mount fits snugly around your Google smart speaker and comes with screws and drywall anchors for easy fitting. In the box you’ll also find a roll of 3M tape if you’d prefer not to drill holes in your wall.

Google describes the lightweight accessory as a “sleek, durable wall mount [that] makes it easy to put Google Home Mini right where you want it — plus, free up precious counter and tabletop surfaces.”

Securing the Mini in a permanent spot on the wall will also help to keep it out of the way of curious littl’uns who might want to use it for a game of living-room hockey or some other leisurely pursuit, though the mount is designed in a way that lets you easily flip the speaker out if necessary.

The only point to consider when it comes to fitting the mount is the location of your power outlets, as the speaker has no integrated battery.

Voice-activated smart speakers were reportedly a big hit over the holiday season. Google, with the Home, Home Mini and Home Max speakers, and Amazon, with its growing range of Alexa-powered Echo devices, are leading the pack, though companies are entering the market all the time with their own take on the product.

Apple, too, is gearing up to take them all on with the HomePod. Unveiled last year, the speaker was supposed to arrive in time for the holiday season, but design issues forced the Cupertino-based company to push the launch to 2018.

The Google Home Mini is the smallest of Google’s three smart speakers, and retails for $50. Like any smart speaker worth its salt, it lets you perform an array of functions such as playing tracks from your music library, answering queries, making calls, offering traffic and weather reports, and controlling a growing range of smart-home devices.

It can also scare the bejeezus out of unsuspecting grannies.

Google sold about 6 million Home speakers during the holidays

This past holiday season, there’s a good chance you either bought or received a smart speaker of some sort. Amazon’s been dominating this space since the first Echo Dot came out in 2016, but with the launch of the Google Home Mini in late 2017, Google finally had its own ultra-cheap speaker to get inside as many people’s homes as possible.

Google recently shared a post on its blog outlining the success of Home products and the Assistant throughout the past year, and perhaps the most surprising bit of news is that more than one Google Home product was sold every single second in 2017 after the Home Mini started shipping in October.

Google doesn’t say whether it started counting these sales on the Home Mini’s launch date (October 19) or later in the month when it actually began shipping to consumers, but even so, we can estimate that about 6 million Home speakers were sold. Google also doesn’t say which of its three speakers accounted for the most sales, but seeing as the Home Mini could be had for just $29 throughout most of the holiday season, we’re guessing it was the most popular.
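As a rough sanity check, the “more than one per second” claim can be turned into back-of-the-envelope arithmetic. Assuming, as the most generous case, that the clock started on the October 19 launch date:

```python
# Back-of-the-envelope estimate: "more than one Home sold every second"
# from the Home Mini's October 19, 2017 launch through the end of the year.
from datetime import date

days = (date(2017, 12, 31) - date(2017, 10, 19)).days  # 73 days
seconds = days * 24 * 60 * 60                          # 6,307,200 seconds
print(f"{seconds:,} seconds -> at least ~{seconds / 1e6:.1f} million speakers")
```

That lands at roughly 6.3 million units, right around the figure above; counting from a later shipping date would shave the estimate down toward 6 million.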

Usage of Google Homes increased by nine times this past holiday season compared to the one in 2016, making it more apparent than ever that Google is coming at Amazon and its Echo brand with full force.

Now that we’re talking about it, did you buy or receive a Google Home during the holidays?

Google prepares Android developers for changes in 2018

Starting in the second half of 2018, Android apps on the Google Play store will be required to target a “recent” Android API level and meet other new requirements, Google announced yesterday.

“Google Play powers billions of app installs and updates annually,” Edward Cunningham, product manager at Android, wrote in a post. “We relentlessly focus on security and performance to ensure everyone has a positive experience discovering and installing apps and games they love. Today we’re giving Android developers a heads-up about three changes designed to support these goals, as well as explaining the reasons for each change, and how they will help make Android devices even more secure and performant for the long term.”

Early in the coming year, Play will begin adding “a small amount of security metadata” to each submitted APK to further verify its authenticity; this will require no effort on the part of developers. Then come August, Play will require all newly submitted apps to target Android API level 26 (Android 8.0) or higher, and November will bring the same requirement to updates of existing apps. This minimum API level will increase “within one year following each Android dessert release” from then on.

“This is to ensure apps are built on the latest APIs optimized for security and performance,” Cunningham wrote.
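To make the rollout concrete, the timeline above can be sketched as a small function mapping a submission date to the minimum required target API level. This is purely illustrative, not an official tool; the dates come from the announcement, and the annual increases after 2018 are not modeled here.

```python
from datetime import date
from typing import Optional

def min_target_api(submission: date, is_new_app: bool) -> Optional[int]:
    """Minimum required targetSdkVersion per the announced Play policy,
    or None if no requirement applies yet.

    New apps are affected from August 2018; updates to existing apps
    from November 2018. Both must then target API 26 (Android 8.0).
    """
    cutoff = date(2018, 8, 1) if is_new_app else date(2018, 11, 1)
    if submission >= cutoff:
        return 26  # Android 8.0 (Oreo)
    return None
```

For example, a brand-new app submitted in September 2018 must target API 26, while an update to an existing app submitted the same month is not yet affected.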

One year later, in August 2019, new apps and updates will be required to be submitted with both 64-bit and 32-bit binaries.

“We deeply appreciate our developer ecosystem, and so hope this long advance notice is helpful in planning your app releases,” Cunningham wrote. “We will continue to provide reminders and share developer resources as key dates approach to help you prepare.”

More information can be found in Cunningham’s blog post.

Google’s Project Jacquard jacket can now light up its tag and find your phone

When it went on sale in late September, Levi’s Commuter Trucker Jacket was the first piece of clothing to integrate Google’s Project Jacquard touch-gesture functionality.

At $350 a pop, it’s not a surprise that the Jacquard by Google app (which is used to customize and control the jacket) shows just 100-500 installs. That means a few hundred people will be delighted to learn that the app just got its first major update, which lets wearers of the Jacquard-woven jacket use gestures that enable new light modes for the tag on the sleeve, as well as find their phone.

The new Jacquard abilities are called “Illuminate” and “Find Your Phone.” The “Illuminate” ability lets wearers assign a custom gesture to enable one of three light modes:

  • Shine, which turns the tag on the jacket’s sleeve into a flashlight
  • Blink, which turns the tag into a blinking light to make you more visible to drivers and others when you’re outside in the dark
  • Strobe, which turns the tag into a “multicolored party light” (Google’s words)

When you use a gesture assigned to enable the new “Find Your Phone” ability, your phone will ring for up to 30 seconds at maximum volume, even if it’s on silent.

Other Jacquard abilities include navigation help, call and text management, and music and audio control. In addition to the two new features, Google updated the “What’s Playing on Android” music-listening feature to work with all music services supporting Jacquard.


Your jacket can now do more!

Introducing two new abilities:

  • Illuminate – Blink, shine or celebrate with the light on your Jacquard snap tag.
  • Find Your Phone – Misplaced your phone? Make your phone ring so you can locate it.

We’ve updated What’s Playing on Android to work with all Jacquard supported music services.

Cleaned house for the holidays, and swept out a few bugs.


Google starts using more secure packaging for trade-in program

Leading up to the launch of the Pixel 2, Google started a trade-in program to help drastically lower the cost of its shiny, new phone. Quotes for the trade-in program are more than reasonable, but it hasn’t been without its fair share of hiccups.

One of the main complaints we’ve heard is that the trade-in kit Google sends out is pretty flimsy, but it looks like this is now being addressed.

Rather than sending out plastic, bubble-wrap-lined sleeves, the Google Store is now shipping actual cardboard boxes for people to send their phones in. Google initially stood behind the plastic sleeves, but we’d be lying if we said we weren’t glad to see this change.

The new cardboard box is relatively slim, but the inside is padded with foam on the top and bottom to keep your device safe and secure during its trip.

Most people on the Reddit thread where photos of the box were shared seem quite happy with the new packaging, and this should help keep headaches during the trade-in process to a bare minimum.


Google previews TensorFlow Lite

Google is giving developers a way to add machine learning models to their mobile and embedded devices. The company announced the developer preview of TensorFlow Lite. The new solution is a lightweight version of TensorFlow, the open-source software library for machine intelligence.

“TensorFlow has always run on many platforms, from racks of servers to tiny IoT devices, but as the adoption of machine learning models has grown exponentially over the last few years, so has the need to deploy them on mobile and embedded devices. TensorFlow Lite enables low-latency inference of on-device machine learning models,” the TensorFlow team wrote in a post.

The developer preview includes a set of core operators for creating and running custom models, a new FlatBuffers-based model file format, an on-device interpreter with kernels, the TensorFlow converter, and pre-tested models. In addition, TensorFlow Lite supports the Android Neural Networks API, Java APIs and C++ APIs.

According to Google, developers should look at TensorFlow Lite as an evolution of the TensorFlow Mobile API, which already supports mobile and embedded deployment of models. As TensorFlow Lite matures, it will become the recommended mobile solution; for now, TensorFlow Mobile will continue to support production apps.

“The scope of TensorFlow Lite is large and still under active development. With this developer preview, we have intentionally started with a constrained platform to ensure performance on some of the most important common models. We plan to prioritize future functional expansion based on the needs of our users. The goals for our continued development are to simplify the developer experience, and enable model deployment for a range of mobile and embedded devices,” the team wrote.

Google announces new machine learning capabilities for Firebase

Firebase’s development platform will receive a series of updates focused on tightening the integration of Firebase services and incorporating more machine learning technology into the toolkit, the Google-owned company announced today at the Firebase Dev Summit in Amsterdam.

Firebase is a mobile platform for developing high-quality applications.

The first notable update is the integration of Crashlytics by Fabric, which Google acquired in January. Crashlytics enables users to track, prioritize, and fix stability issues within applications in real time. The integration will be rolled out over the next couple of weeks.

The company also announced that the overall UI and console appearance are receiving a major overhaul. “All of the products that you’re used to seeing in the Firebase console are still there; we’ve simply reorganized things to more accurately reflect the way your team works,” Francis Ma, group product manager for Firebase, wrote in a post. The features will also be integrated over the coming weeks.

In addition, the Firebase team announced a new A/B testing framework, based on the Google Optimize machine learning-assisted analytics tool. “Setting up an A/B test is quick and simple,” Ma wrote. “You can create an experiment with Remote Config or FCM, define different variant values and population sizes to test on, then set the experiment goal. From there, Firebase will take care of the rest, automatically running the experiment then letting you know when a winner towards your goal is determined with statistical significance.” The A/B testing feature is available as a beta feature today.

Finally, Google introduced Firebase Predictions, which uses machine learning to measure analytics and group users based on predicted behavior. The default groups are:

  • Users who are predicted to churn in the next 7 days
  • Users who are predicted to stay engaged with your app
  • Users who are predicted to spend money
  • Users who are predicted to not spend money in the next 7 days

In addition, users can set up custom groupings based on preferred data.

“While we’re excited about the updates to Firebase that we’ve announced today, we also know that there’s a lot more work to be done. We are working hard to prepare for the General Data Protection Regulation (GDPR) across Firebase and we’re committed to helping you succeed under it. Offering a data processing agreement where appropriate is one important step we’re taking to make sure that Firebase works for you, no matter how large your business or where your users are. We’ll also be publishing tools and documentation to help developers ensure they are compliant,” Ma wrote.

News digest: Google renames API.AI to Dialogflow, the Cloud Foundry Container Runtime, and Microsoft’s UWP support for .NET Standard 2.0

Google has a new name for its API.AI solution: Dialogflow. API.AI first started out as an API that could add natural language processing capabilities to applications, services, and devices. According to the company, over the past year it has grown into more than just an API with new features such as its analytics tool and 33 prebuilt agents, which is why the company decided it was necessary to rename the solution.

“Our new name doesn’t change the work we’re doing with you or our mission. Our mission continues to be that Dialogflow is your end-to-end platform for building great conversational experiences and our team will help you share what you’ve built with millions of users,” Ilya Gelfenbeyn, lead product manager at Google, wrote in a post.

The company also announced two new features for Dialogflow: an in-line code editor and multi-lingual agent support.

CFCR becomes Cloud Foundry’s default method for deploying containers
The Cloud Foundry Foundation has announced that Cloud Foundry Container Runtime (CFCR) is now the default Cloud Foundry approach to deploying containers, using Kubernetes and BOSH. Users can now choose between the Container Runtime for deploying containerized workloads on Kubernetes and the Application Runtime of the cloud application platform. The project was originally donated to the Cloud Foundry Foundation in June by Google and Pivotal in order to expand choice for Cloud Foundry’s massive user base.

“The technology has progressed quickly—after only four months in incubation, the first commercial offering has already been launched. Container Runtime expands the capabilities of Cloud Foundry beyond Application Runtime, giving enterprises more options to take advantage of cloud-native best practices,” said Abby Kearns, executive director for the Cloud Foundry Foundation. “With nearly 70 percent of enterprises using containers in some capacity, choice is critical. This expansion enables businesses to take advantage of the power of Kubernetes combined with BOSH, an open source, enterprise-grade management tool.”

Microsoft adds UWP support for .NET Standard 2.0
Microsoft has announced a major update to UWP for .NET developers, its largest release since shipping .NET Native with Windows 10. The company is adding support for .NET Standard 2.0, which will give UWP developers access to about 20,000 more APIs. The update will also allow developers to migrate code into UWP apps more easily. UWP apps use .NET Core for debugging and .NET Native for release builds; this release adds incremental build support for .NET Native, making debugging with .NET Native more approachable, according to the company.

Sauce Labs announces Extended Debugging for Selenium tests
Sauce Labs has announced Extended Debugging for Selenium tests, which helps teams resolve test errors faster. The tool combines browser console log information with networking data to determine the cause and location of a problem.

“Automated testing is the backbone of continuous delivery. By adding Extended Debugging to our platform, we’re ensuring that our customers can identify the root cause of test failures faster,” said Lubos Parobek, vice president of product at Sauce Labs. “This has been a much anticipated addition to our platform as browser and networking failures can often be difficult to reproduce, troubleshoot and fix.”

Anchore releases Anchore Cloud 2.0 
Anchore announced the release of Anchore Cloud 2.0, a series of software tools that provides developer, operations, and security teams with a means to achieve proper container compliance, both on-premises and in the cloud. Anchore Cloud is a SaaS product built on an open source analysis and policy engine, and it allows users to search for container images on both public and private registries. Anchore is integrated with popular open source tools such as Jenkins and Kubernetes.

“Anchore Cloud 2.0 gives users the tools necessary to achieve a controllable containerized software flow in a way that can be certified by the user for their specific needs,” said Daniel Nurmi, CTO and cofounder of Anchore. “Coupled with our open source on-premise engine, Anchore Cloud 2.0 provides users the ability to quickly and easily integrate powerful inspection, reporting, and security and compliance checks into their existing or new container build environments.”

Syncfusion updates Dashboard and Data Integration platforms
Syncfusion is announcing an update to its Dashboard and Data Integration platforms. In this release, the two platforms are integrated, enabling users to access workflows from the Data Integration Server in the Dashboard. New Dashboard features include advanced sorting options in the Dashboard Designer, a common ODBC connection, a waterfall chart widget, and a widget for radar and polar charts.

The Data Integration Platform now offers a user-friendly design for processors, process groups, and ports, allowing views to be expanded and collapsed. It also adds support for monitoring tasks, including disk and JVM memory usage.

“We’ve been very pleased at the success of our Data Platform,” said Daniel Jebaraj, vice president. “We’ve taken some innovative steps toward simplifying effective data usage for businesses, and we hope to continue improving the platform with releases like this.”

Comparing Alexa, Google Assistant, Cortana and Siri smart speakers

The smart home assistant race has been building to a fever pitch over the course of the last couple of years. Things really came to a head these past two weeks, when Amazon, Google and Sonos all held big events highlighting their latest smart speaker plays, making the already busy field a heck of a lot more crowded.

The burgeoning category can be a tough one to navigate. A lot of picking the right speaker for your own needs comes down to your assistant of choice — that, in turn, has a lot to do with both feature sets and your own mobile operating system loyalties. Each has benefits and drawbacks — Amazon has cornered the home, Apple has done a good job in mobile and Google has straddled the two better than anyone else. And Microsoft, well, a lot of people own Windows computers, at least.

Things can be equally complex from a hardware standpoint, between first-party products and the increasing presence of third parties like Sony, Sonos and JBL. Devices also run a pretty wide price gamut, from around $50 to $300. Some focus on premium sound, some feature screens, and some even let you choose between multiple assistants.

Here’s a quick breakdown to help make navigating these waters seem a bit less treacherous.

[Infographic: smart speaker comparison table (Infogram version)]

Source: TechCrunch

Google reveals Pixel 2 and Pixel 2 XL

Google officially unveiled the next generation of its Pixel phones, the Pixel 2 and Pixel 2 XL, at an event in San Francisco on Wednesday. The devices are built on the capabilities of Google Assistant, continuing Google’s shift “from mobile-first to AI-first” in computing, Google CEO Sundar Pichai said.

The Pixel 2 and Pixel 2 XL aren’t much of a departure from the original Pixel phones in terms of design. The standard Pixel 2 features a 5-inch OLED display, while the Pixel 2 XL sports a massive 6-inch P-OLED display with an 18:9 aspect ratio and curved edges.

The phones have an all-aluminum body with a smaller glass panel on the back. The fingerprint scanner remains in the same spot in the middle of the back of the phone.

The standard Pixel 2 is available in three colors: Kinda Blue, Just Black, and Clearly White. The Pixel 2 XL, however, is only available in Just Black and a separate Black and White colorway.



Google’s search box will now be at the bottom of the screen and will remain static while the user moves between home screens. A new feature called Active Edge will allow users to access Google Assistant by simply squeezing the sides of the phone. It will work with a case, and machine learning will be able to identify an “intentional squeeze.”

Google is continuing the trend of including premium camera technology in the Pixel line with the Pixel 2. The 12.2 MP camera, which achieved a DxOMark score of 98, is tuned for AR content and can process it at 60 fps. The camera also includes a portrait mode, similar to what Apple introduced with the iPhone 7 Plus. This feature could be especially helpful for marketing professionals and social media pros who need to capture the best images for their content.