Here is what Google announced at Google I/O 2018

Google kicked off its annual I/O developer conference at Shoreline Amphitheatre in Mountain View, California. Here are some of the biggest announcements from the Day 1 keynote.

 

Google announced it is rebranding its Google Research division as Google AI. The move signals how Google has increasingly focused its R&D on computer vision, natural language processing, and neural networks.

What they actually announced:

 

Continued Conversation

 

Google announced a “Continued Conversation” update to Google Assistant that makes talking with the Assistant feel more natural. Now, instead of saying “Hey Google” or “OK Google” every time you want to give a command, you only have to say it the first time.


 

Google Duplex

One of the most talked-about moments came when Google chief Sundar Pichai demonstrated how the company’s digital assistant had booked a haircut appointment for someone entirely on its own.

An example of the AI-powered Duplex feature is demonstrated during Google’s I/O 2018 developers' conference in Mountain View, California on May 8, 2018. (Google)

“So what you’re going to hear is the Google Assistant actually calling a real salon to schedule the appointment for you.

Hello, how can I help you?

Hi, I’m calling to book a women’s haircut for a client. I’m looking for something on May 3rd.

Sure, give me one second.

Mm-hmmm.

That was a real call you just heard.”

New Voices

Pichai announced that six new voices are being added to Google Assistant, including one based on singer John Legend.

“John Legend’s voice will be coming later this year, so that you can get responses like this: ‘At 10 a.m. you have an event called Google I/O keynote. Then at 1 p.m. you have margaritas. Have a wonderful day.’ I’m looking forward to 1 p.m.”

Google Maps – Augmented Reality

 

Google Maps will now incorporate augmented reality to help guide users. People can look through the device camera and get turn-by-turn directions overlaid on the actual street. The feature will also use nearby landmarks and buildings captured by the camera to help guide the journey.

Gmail Smart Compose

 

The latest new tool for Gmail is an autocomplete feature called Smart Compose. It uses AI to suggest ways to complete sentences as they are typed. For example, “I haven’t seen you” might be autocompleted to “I haven’t seen you in a while and I hope you’re doing well.”
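Google hasn’t published the details of Smart Compose’s model, but the basic idea of suggesting a completion for a typed prefix can be sketched with a toy example. Everything below (the phrase table and the complete_sentence helper) is purely illustrative and not Google’s implementation, which relies on a language model trained on real email text.

```python
# Purely illustrative toy, not Google's model: look up a canned completion
# for a typed prefix. Smart Compose itself ranks completions with a learned
# language model rather than a fixed table.

SUGGESTIONS = {
    "i haven't seen you": "I haven't seen you in a while and I hope you're doing well.",
    "thanks for your": "Thanks for your help with this!",
}

def complete_sentence(prefix):
    """Return a suggested completion for the typed prefix, if one matches."""
    key = prefix.strip().lower()
    if not key:
        return None
    for known_prefix, completion in SUGGESTIONS.items():
        # Match when the typed text is a prefix of a known phrase (or vice versa).
        if known_prefix.startswith(key) or key.startswith(known_prefix):
            return completion
    return None

print(complete_sentence("I haven't seen you"))
```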

Android P

 

The Android P version of Google’s Android operating system will include more powerful AI tools. One lets the battery learn over time how people use their apps and then adapt in an effort to save power. Adaptive Brightness will learn and make adjustments in a similar way: it learns from ambient light levels as well as from the brightness settings users choose over time.
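Google didn’t detail the model behind Adaptive Brightness, but the core idea of fitting a personal brightness curve to a user’s past adjustments can be sketched as a toy example. The sample data and the log-lux fit below are illustrative assumptions, not Android’s implementation.

```python
# Purely illustrative toy, not Android P's implementation: fit a personal
# brightness curve from past (ambient light, user-chosen brightness) pairs,
# then predict a brightness level for a new ambient reading.
import numpy as np

# Hypothetical history of manual adjustments: ambient light in lux and the
# screen brightness (0-255) the user settled on.
ambient_lux = np.array([5, 40, 120, 300, 800, 2000], dtype=float)
chosen_brightness = np.array([20, 45, 90, 140, 200, 250], dtype=float)

# Fit a line in log-lux space, since perceived brightness is roughly
# logarithmic in ambient light.
coeffs = np.polyfit(np.log10(ambient_lux), chosen_brightness, deg=1)

def predict_brightness(lux):
    """Predict a screen brightness (0-255) for the given ambient light."""
    level = np.polyval(coeffs, np.log10(max(lux, 1.0)))
    return int(np.clip(level, 0, 255))

print(predict_brightness(500))  # a well-lit room -> a mid-to-high level
```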

Google News

 

Google is redesigning its news presentation in a way it says will make it easier for users to keep up with the news they are most interested in. The Google News app will present five top stories, plus others it thinks will be of most interest to the user.

Google Photos

 

There are a few new changes to Google Photos. When it recognizes a photo of someone in a user’s contact list, it can suggest sending the photo to that person. It can also convert photos to PDF documents, automatically add color to black-and-white photos, or make part of a color photo black and white.

Google drops a big hint about its plans for a new version of Android

The tech giant names its Android releases in alphabetical order, and in 2018 we’re up to P, with speculation already rife as to what might follow last year’s Android Oreo.

 

The announcement typically takes place in the summer, but Google may have let the name slip out early this year through its Instagram story, which, among a selection of pictures it encouraged users to screenshot and use as phone wallpapers to celebrate spring, included an image of popsicles.

Popsicle has been one of the names rumored to be under consideration, but it’s not the only P-named sweet treat Google has teased users with in recent months.

Its annual I/O conference is now less than a month away, so the answer to this sweet mystery may not be far off.


Google Cloud expands MongoDB availability across GCP regions

After popular demand, Google has announced it is expanding MongoDB availability across most Google Cloud Platform regions as well as on Cloud Launcher. MongoDB is available on Google Cloud through its database-as-a-service solution, MongoDB Atlas.

“With over 35 million downloads and customers ranging from Cisco to Metlife to UPS, MongoDB is one of the most popular NoSQL databases for developers and enterprises alike,” Kent Smith, cloud customer engineer at Google, wrote in a blog post. “With MongoDB Atlas, you get a globally distributed database with cross-region replication, multi-region fault tolerance and the ability to provide fast, responsive read access to data users around the globe. You can even configure your clusters to survive the outage of an entire cloud region.”

MongoDB Atlas on GCP is now available in Iowa, South Carolina, Oregon, Northern Virginia, São Paulo, Belgium, London, Frankfurt, Taiwan, Mumbai, Tokyo, Singapore and Sydney. “With this expanded geographic availability, you can now join the wide variety of organizations around the world, from innovators in the social media space to industry leaders in energy, that are already running MongoDB on GCP,” Smith wrote.

According to Smith, MongoDB Atlas on GCP can be used to reduce the operational overhead of setting up and scaling databases, letting team members focus on building apps. In addition, it provides multiple sharding policies and enables users to distribute data across a cluster. Clusters are organized into projects and live inside a Virtual Private Cloud per region. Users can configure cross-region replication from MongoDB Atlas’ UI, and automatically scale cluster storage or enable sharding with no manual intervention.
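For a rough sense of what using such a cluster looks like from application code, here is a minimal sketch of connecting to a MongoDB Atlas cluster with the pymongo driver. The SRV connection string, database name, and collection name below are placeholders; the real connection string comes from the Atlas console.

```python
# Minimal sketch: connect to a MongoDB Atlas cluster with pymongo.
# The mongodb+srv URI is a placeholder; copy the real one from the Atlas
# console ("Connect" -> "Connect your application"). SRV URIs also require
# the dnspython package to be installed.
from pymongo import MongoClient

client = MongoClient(
    "mongodb+srv://appUser:<password>@example-cluster.mongodb.net/"
    "?retryWrites=true"
)

db = client["inventory"]   # hypothetical database name
orders = db["orders"]      # hypothetical collection name

# Write a document and read it back to confirm the cluster is reachable.
orders.insert_one({"sku": "A-100", "qty": 3})
print(orders.find_one({"sku": "A-100"}))
```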

Google now offers a wall mount for its smallest smart speaker

The Google Home Mini admittedly doesn’t have much of a footprint, but what if you want to separate the smart speaker from the clutter on your desktop or kitchen counter and give it its own place … like on the wall?

Thanks to Google’s launch this week of a wall mount, now you can.

Costing a reasonable $15, the Home Mini mount was produced by the Mountain View company in partnership with tech accessories outfit Incipio. Available now via the Google Store, the white-colored mount fits snugly around your Google smart speaker and comes with screws and drywall anchors for easy fitting. In the box you’ll also find a roll of 3M tape if you’d prefer not to drill holes in your wall.

Google describes the lightweight accessory as a “sleek, durable wall mount [that] makes it easy to put Google Home Mini right where you want it — plus, free up precious counter and tabletop surfaces.”

Securing the Mini in a permanent spot on the wall will also help to keep it out of the way of curious littl’uns who might want to use it for a game of living-room hockey or some other leisurely pursuit, though the mount is designed in a way that lets you easily flip the speaker out if necessary.

The only point to consider when it comes to fitting the mount is the location of your power outlets, as the speaker has no integrated battery.

Voice-activated smart speakers were reportedly a big hit over the holiday season. Google, with the Home, Home Mini and Home Max speakers, and Amazon, with its growing range of Alexa-powered Echo devices, are leading the pack, though companies are entering the market all the time with their own take on the product.

Apple, too, is gearing up to take them all on with the HomePod. Unveiled last year, the speaker was meant to be ready for the holiday season, but design issues caused the Cupertino-based company to push the launch date to 2018.

The Google Home Mini is the smallest of Google’s three smart speakers, and retails for $50. Like any smart speaker worth its salt, it lets you perform an array of functions such as playing tracks from your music library, answering queries, making calls, offering traffic and weather reports, and controlling a growing range of smart-home devices.

It can also scare the bejeezus out of unsuspecting grannies.

Google sold about 6 million Home speakers during the holidays

This past holiday season, there’s a good chance you either bought or received a smart speaker of some sort. Amazon’s been dominating this space since the first Echo Dot came out in 2016, but with the launch of the Google Home Mini in late 2017, Google finally had its own ultra-cheap speaker to get inside as many people’s homes as possible.

Google recently shared a post on its blog outlining the success of Home products and the Assistant throughout the past year, and perhaps the most surprising bit of news is that more than one Google Home product was sold every single second in 2017 after the Home Mini started shipping in October.

Google doesn’t really say if it started counting these sales on the Home Mini’s launch date (October 19) or later in the month when it actually began shipping to consumers, but even so, we can estimate that about 6 million Home speakers were sold. Google doesn’t say which of its three speakers accounted for the most sales, but seeing as how you could buy the Home Mini for just $29 for much of the holiday shopping season, it’s our guess for the most popular.

Usage of Google Homes increased by nine times this past holiday season compared to the one in 2016, making it more apparent than ever that Google is coming at Amazon and its Echo brand with full force.

Now that we’re talking about it, did you buy or receive a Google Home during the holidays?

Google prepares Android developers for changes in 2018

Starting in the second half of 2018, Android apps on the Google Play store will be required to target a “recent” Android API level and meet other new requirements, Google announced yesterday.

“Google Play powers billions of app installs and updates annually,” Edward Cunningham, product manager at Android wrote in a post. “We relentlessly focus on security and performance to ensure everyone has a positive experience discovering and installing apps and games they love. Today we’re giving Android developers a heads-up about three changes designed to support these goals, as well as explaining the reasons for each change, and how they will help make Android devices even more secure and performant for the long term.”

Early in the coming year, Play will begin adding “a small amount of security metadata” to each APK to further verify its authenticity; this will require no effort on the part of the developer. Then, come August, Play will require all newly submitted apps to target Android API level 26 (Android 8.0) or higher, and in November the same requirement will apply to updates of existing apps. This minimum API level will increase “within one year following each Android dessert release” from then on.

“This is to ensure apps are built on the latest APIs optimized for security and performance,” Cunningham wrote.

One year later, in August 2019, new apps and updates that include native code will be required to provide 64-bit binaries in addition to their 32-bit binaries.

“We deeply appreciate our developer ecosystem, and so hope this long advance notice is helpful in planning your app releases,” Cunningham wrote. “We will continue to provide reminders and share developer resources as key dates approach to help you prepare.”

More information can be found in Cunningham’s blog post.

Google’s Project Jacquard jacket can now light up its tag and find your phone

When it went on sale in late September, Levi’s Commuter Trucker Jacket was the first piece of clothing to integrate Google’s Project Jacquard touch-gesture functionality.

At $350 a pop, it’s not a surprise that the Jacquard by Google app (which is used to customize and control the jacket) shows just 100-500 installs. That means a few hundred people will be delighted to learn that the app just got its first major update, which lets wearers of the Jacquard-woven jacket use gestures that enable new light modes for the tag on the sleeve, as well as find their phone.

The new Jacquard abilities are called “Illuminate” and “Find Your Phone.” The “Illuminate” ability lets wearers assign a custom gesture to enable one of three light modes:

  • Shine, which turns the tag on the jacket’s sleeve into a flashlight
  • Blink, which turns the tag into a blinking light to make you more visible to drivers and others when you’re outside in the dark
  • Strobe, which turns the tag into a “multicolored party light” (Google’s words)

When you use a gesture assigned to enable the new “Find Your Phone” ability, your phone will ring for up to 30 seconds at maximum volume, even if it’s on silent.

Other Jacquard abilities include navigation help, call and text management, and music and audio control. In addition to the two new features, Google updated the “What’s Playing on Android” music-listening feature to work with all music services supporting Jacquard.

WHAT’S NEW

Your jacket can now do more!

Introducing two new abilities:

  • Illuminate – Blink, shine or celebrate with the light on your Jacquard snap tag.
  • Find Your Phone – Misplaced your phone? Make your phone ring so you can locate it.

We’ve updated What’s Playing on Android to work with all Jacquard supported music services.
https://support.google.com/jacquard/answer/7538406?hl=en

Cleaned house for the holidays, and swept out a few bugs.

 

Google starts using more secure packaging for trade-in program

Leading up to the launch of the Pixel 2, Google started a trade-in program to help drastically lower the cost of its shiny, new phone. Quotes for the trade-in program are more than reasonable, but it hasn’t been without its fair share of hiccups.

One of the main complaints we’ve heard is that the trade-in kit Google sends out is pretty flimsy, but it looks like this is now being addressed.

Rather than sending out plastic bubble wrap-lined sleeves, the Google Store is now shipping actual cardboard boxes for people to send out their phones in. Google initially stood behind the plastic sleeves, but we’d be lying if we said we weren’t glad to see this change.

The new cardboard box is relatively slim, but the inside is padded with foam on the top and bottom to keep your device safe and secure during its trip.

Most people on the Reddit thread where photos of the box were shared seem to be quite happy with the new packaging, and this should hopefully help to keep headaches during the trade-in process to a bare minimum.

 

Google previews TensorFlow Lite

Google is giving developers a way to add machine learning models to their mobile and embedded devices. The company announced the developer preview of TensorFlow Lite. The new solution is a lightweight version of TensorFlow, the open-source software library for machine intelligence.

“TensorFlow has always run on many platforms, from racks of servers to tiny IoT devices, but as the adoption of machine learning models has grown exponentially over the last few years, so has the need to deploy them on mobile and embedded devices. TensorFlow Lite enables low-latency inference of on-device machine learning models,” the TensorFlow team wrote in a post.

The developer preview includes a set of core operators for creating and running custom models, a new FlatBuffers-based model file format, an on-device interpreter with kernels, the TensorFlow converter, and pre-tested models. In addition, TensorFlow Lite supports the Android Neural Networks API, Java APIs and C++ APIs.
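For a sense of the workflow, here is a minimal sketch of converting a small Keras model to the TensorFlow Lite format and running it with the Python interpreter. It assumes a recent TensorFlow release that exposes tf.lite.TFLiteConverter; the original developer-preview tooling (which relied on the standalone TOCO converter) used different entry points.

```python
# Minimal sketch: convert a tiny Keras model to TensorFlow Lite and run it.
# Assumes a recent TensorFlow release exposing tf.lite; the developer-preview
# tooling described above differed in its converter entry points.
import numpy as np
import tensorflow as tf

# A toy model standing in for a real trained model.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(4, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(1),
])

# Convert to the FlatBuffers-based .tflite format.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Run inference with the lightweight interpreter.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

sample = np.random.rand(1, 8).astype(np.float32)
interpreter.set_tensor(input_details[0]["index"], sample)
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]["index"]))
```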

According to Google, developers should look at TensorFlow Lite as an evolution of the TensorFlow Mobile API, which already supports mobile and embedded deployment of models. As TensorFlow Lite matures, it will become the recommended mobile solution; for now, TensorFlow Mobile will continue to support production apps.

“The scope of TensorFlow Lite is large and still under active development. With this developer preview, we have intentionally started with a constrained platform to ensure performance on some of the most important common models. We plan to prioritize future functional expansion based on the needs of our users. The goals for our continued development are to simplify the developer experience, and enable model deployment for a range of mobile and embedded devices,” the team wrote.

Google announces new machine learning capabilities for Firebase

Firebase’s development platform will receive a series of updates focused on tightening the integration of Firebase services and incorporating more machine learning technology into the toolkit, the Google-owned company announced today at the Firebase Dev Summit in Amsterdam.

Firebase is a mobile platform for developing high-quality applications.

The first notable update is the integration of Crashlytics by Fabric, which Google acquired in January. Crashlytics enables users to track, prioritize, and fix stability issues within applications in real time. The integration will be rolled out over the next couple of weeks.

The company also announced that the overall UI and console appearance are receiving a major overhaul. “All of the products that you’re used to seeing in the Firebase console are still there; we’ve simply reorganized things to more accurately reflect the way your team works,” Francis Ma, group product manager for Firebase, wrote in a post. The features will also be integrated over the coming weeks.

In addition, the Firebase team announced a new A/B testing framework, based on the Google Optimize machine learning-assisted analytics tool. “Setting up an A/B test is quick and simple,” Ma wrote. “You can create an experiment with Remote Config or FCM, define different variant values and population sizes to test on, then set the experiment goal. From there, Firebase will take care of the rest, automatically running the experiment then letting you know when a winner towards your goal is determined with statistical significance.” The A/B testing feature is available as a beta feature today.
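The experiments themselves are configured in the Firebase console, but for context, here is a minimal sketch of sending a notification through FCM with the Firebase Admin SDK for Python, the kind of message an FCM-based experiment would vary between groups. The service-account path, notification copy, and topic name are placeholders, not values from the announcement.

```python
# Minimal sketch: send a push notification through FCM with the Firebase
# Admin SDK for Python. A/B experiments are set up in the Firebase console;
# this only shows the kind of message such an experiment would vary.
import firebase_admin
from firebase_admin import credentials, messaging

# Placeholder service-account key downloaded from the Firebase console.
cred = credentials.Certificate("service-account.json")
firebase_admin.initialize_app(cred)

message = messaging.Message(
    notification=messaging.Notification(
        title="Weekend sale",                 # hypothetical variant copy
        body="Everything is 20% off today.",
    ),
    topic="all-users",                        # hypothetical topic
)

message_id = messaging.send(message)
print("Sent message:", message_id)
```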

Finally, Google introduced Firebase Predictions, which uses machine learning to measure analytics and group users based on predicted behavior. The default groups are:

  • Users who are predicted to churn in the next 7 days
  • Users who are predicted to stay engaged with your app
  • Users who are predicted to spend money
  • Users who are predicted to not spend money in the next 7 days

In addition, users can set up custom groupings based on preferred data.

“While we’re excited about the updates to Firebase that we’ve announced today, we also know that there’s a lot more work to be done. We are working hard to prepare for the General Data Protection Regulation (GDPR) across Firebase and we’re committed to helping you succeed under it. Offering a data processing agreement where appropriate is one important step we’re taking to make sure that Firebase works for you, no matter how large your business or where your users are. We’ll also be publishing tools and documentation to help developers ensure they are compliant,” Ma wrote.