Unifi releases new AI-powered analytics

Unifi Software has announced plans to power its data platform with a new artificial intelligence engine called OneMind. OneMind is designed to source, explore and prepare data for businesses so that they can make educated decisions more easily.

In addition, OneMind learns from the data in order to predict patterns and recommend datasets.

The Unifi Data Platform is built on four pillars:

  1. Governance and security
  2. Catalog and discovery
  3. Data preparation
  4. Workflow and scheduling

OneMind touches each of these pillars by automatically recommending attributes to mask in order to maintain compliance; profiling data to help businesses understand their datasets; automating the steps necessary to cleanse, enrich, parse, normalize, transform, filter, and format data; and making recommendations of previously created workflow automation jobs.
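
The cleanse-and-mask workflow described above can be sketched in miniature. Everything below is a hypothetical illustration, not Unifi's actual logic: a crude heuristic that trims row values and flags columns whose names or value shapes suggest personally identifiable information.

```python
# Hypothetical sketch of automated masking recommendations; the column
# names, regexes, and heuristic are invented for illustration only.
import re

def looks_sensitive(name, values):
    # crude heuristic: the column name or the value shape suggests PII
    if re.search(r"ssn|email|phone", name, re.I):
        return True
    return all(re.fullmatch(r"\d{3}-\d{2}-\d{4}", v) for v in values)

rows = [{"name": " Ada ", "email": "ada@example.com", "ssn": "123-45-6789"}]
cleansed = [{k: v.strip() for k, v in r.items()} for r in rows]   # cleanse
columns = {k: [r[k] for r in cleansed] for k in cleansed[0]}
to_mask = [c for c, vals in columns.items() if looks_sensitive(c, vals)]
print(to_mask)   # → ['email', 'ssn']
```

A production system would of course use far richer profiling than a pair of regexes, but the shape of the task is the same: profile each attribute, then recommend which ones to mask.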

“With our deep engineering expertise in AI and support for Natural Language Queries, we make it very easy for any kind of user to ask questions on the catalog and generate the answer through Natural Language Processing, leading to a Google-like experience on data and metadata discovery. From a technology perspective, OneMind captures complicated relationships between enterprises’ data and metadata in various sources into a governed, dynamically growing ‘Enterprise Data Knowledge Graph’ that can be displayed visually and provides a simple, interactive user experience,” said Ayush Parashar, co-founder and vice president of engineering for Unifi Software.

Why Machine Learning Isn’t As Hard To Learn As You Think

Why is Machine Learning difficult to understand? originally appeared on Quora, the place to gain and share knowledge, empowering people to learn from others and better understand the world.

Answer by John L. Miller (PhD; Microsoft, Google), who has industry ML experience with video, sensor data, and images, on Quora:

I’m usually the first person to say something is hard, but I’m not going to here. Learning how to use machine learning isn’t any harder than learning any other set of libraries for a programmer.

The key is to focus on using it, not designing the algorithm. Look at it this way: if you need to sort data, you don’t invent a sort algorithm, you pick an appropriate algorithm and use it right.

It’s the same thing with machine learning. You don’t need to learn how the guts of the machine learning algorithm works. You need to learn what the main choices are (e.g. neural nets, random decision forests…), how to feed them data, and how to use the data produced.
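
Miller's point can be made concrete. The tiny nearest-neighbour class below stands in for a real toolkit (scikit-learn or similar); the programmer's actual job is just the last few lines: prepare features, fit, and use the predictions.

```python
# Sketch of the workflow described above: pick a model, feed it data, use
# the output. This toy 1-nearest-neighbour "library" is illustrative only.
class OneNN:
    def fit(self, X, y):
        self.X, self.y = X, y
        return self

    def predict(self, points):
        def label(p):
            # pick the label of the closest training point
            dists = [(sum((a - b) ** 2 for a, b in zip(p, x)), lbl)
                     for x, lbl in zip(self.X, self.y)]
            return min(dists)[1]
        return [label(p) for p in points]

X = [(0, 0), (0, 1), (5, 5), (6, 5)]   # features
y = ["low", "low", "high", "high"]     # labels
model = OneNN().fit(X, y)
print(model.predict([(1, 0), (5, 6)]))  # → ['low', 'high']
```

In practice you would swap `OneNN` for a library estimator; the fit/predict usage pattern is the part worth learning.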

There is a bit of an art to it: deciding when you can and can’t use machine learning, and figuring out the right data to feed into it. For example, if you want to know whether a movie shows someone running, you might want to send both individual frames, and sets of frame deltas a certain number of seconds apart.
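
The frame-delta idea can be sketched with toy data (a hypothetical helper, with frames flattened to short pixel lists):

```python
# Hypothetical sketch: alongside raw frames, compute per-pixel deltas
# between frames a fixed gap apart, as an extra input for the model.
def frame_deltas(frames, gap=1):
    return [[b - a for a, b in zip(f1, f2)]
            for f1, f2 in zip(frames, frames[gap:])]

frames = [[0, 0], [1, 0], [3, 1]]     # three toy 2-pixel frames
print(frame_deltas(frames))           # → [[1, 0], [2, 1]]
```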

If you’re a programmer and it’s incredibly hard to learn ML, you’re probably trying to learn the wrong things about it.

Microsoft and Amazon announce deep learning library Gluon

Microsoft has announced a new partnership with Amazon to create an open-source deep learning library called Gluon. The idea behind Gluon is to make artificial intelligence more accessible and valuable.

According to Microsoft, the library simplifies the process of building deep learning models and will enable developers to run multiple deep learning libraries. This announcement follows the introduction of the Open Neural Network Exchange (ONNX) format, an open standard for representing deep learning models so they can move between frameworks.

Gluon supports both symbolic and imperative programming, something not supported by many other toolkits, Microsoft explained. It will also support hybridization of code, allowing compute graphs to be cached and reused in future iterations. It offers a library of pre-built layers that can be composed to define model architectures. Gluon natively supports loops and ragged tensors, allowing for highly efficient execution of RNN and LSTM models, and it supports sparse data and operations as well as advanced scheduling across multiple GPUs.
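
The imperative-versus-symbolic distinction can be illustrated in plain Python (a conceptual sketch only, not the Gluon API): imperative code runs each operation immediately, while symbolic code builds the computation as data first, which is what lets a framework cache and replay the graph.

```python
# Conceptual sketch only -- this is not the Gluon API.
def imperative(x):
    y = x * 2          # each op executes immediately
    return y + 1

# Symbolic style: the computation is data; hybridization caches a
# graph like this so later calls can skip re-tracing the Python.
graph = [("mul", 2), ("add", 1)]

def run(graph, x):
    for op, arg in graph:
        x = x * arg if op == "mul" else x + arg
    return x

print(imperative(3), run(graph, 3))   # → 7 7
```

Hybrid frameworks let you write the first style during development and get the cacheable second form for deployment.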

“This is another step in fostering an open AI ecosystem to accelerate innovation and democratization of AI, making it more accessible and valuable to all,” Microsoft wrote in a blog post. “With Gluon, developers will be able to deliver new and exciting AI innovations faster by using a higher-level programming model and the tools and platforms they are most comfortable with.”

The library will be available for Apache MXNet or Microsoft Cognitive Toolkit. It is already available on GitHub for Apache MXNet, with Microsoft Cognitive Toolkit support on the way.

The race is officially on to market seamless autonomous driving technology within the shortest possible timeframe. One of the biggest challenges, however, lies in preparing autonomous vehicles to navigate the many unforeseen and unexpected obstacles that face drivers on any given journey.

Real-world testing, while valuable, falls short of producing enough quality, representative, diverse and well-labeled data to properly train the AI components of self-driving vehicles. The real world is inherently unpredictable, and the driving experience is fraught with unforeseen obstacles, unexpected traffic delays and detours, and sudden changes in weather conditions. To prepare autonomous vehicles to navigate and react properly, it is imperative that the industry look beyond real-world testing.

Simulation Superiority
A game engine-based, virtual reality environment offers an attractive alternative, as it is cheaper, faster and safer than experimenting with real vehicles. Scenes that would hardly ever happen in real life, or would pose an intolerable risk to human drivers, can also be repeated an endless number of times.

Take the very basic example of someone lying on the roadway. You certainly can’t use a real person to mimic this scenario, and a dummy would cause untold traffic problems. The same applies to hitchhikers in the emergency lane, unexpected roadwork, an impaired driver, or a sudden traffic accident.

Applying a digital setting for autonomous vehicle (AV) testing brings the ultimate benefit of engaging AI for scenarios that cannot be replicated under real-life conditions.

Even if driving conditions were fairly standard, covering thousands of miles would require a massive fleet and a significant investment in time. A set of high-performance computers, by contrast, gets the job done within an hour. Simulating cameras and sensors, in real time if necessary, can cut testing time tremendously and save significant money.

Handling Traffic Madness Hotspots
That brings up another key issue, namely motion planning. According to the current consensus in the industry, machine recognition software can be trained by feeding AI images or footage captured in traffic. The same doesn’t apply to motion planning, however, since a moving car itself changes the variables in its surroundings. This is why simulation is the only practical way to train AI for motion planning.

Look at the Arc de Triomphe in Paris, one of the craziest roundabouts in Europe. Traffic from 12 major avenues feeds into it, and drivers navigate six lanes without road markings. The French even have two types of insurance policies: those that exclude and those that include using your car there. Drivers inch through that roundabout largely by instinct, and it would be hard to imagine risking accidents just to train AI on the spot. By contrast, a simulated environment can prepare AI to fine-tune motion planning specifically for the Arc de Triomphe roundabout, or for any number of hotspots around the world.

Walking on the path of game engines
The National Highway Traffic Safety Administration’s most recent survey lists the critical reasons for vehicle crashes in the United States. The good news is that many of the non-driver-related factors that cause accidents, including technical failures, can be effectively simulated.

Game engine-based testing comes in handy, for instance, when you need to see what happens if a sensor or a car part breaks down. Grave danger is often an issue here too: imagine how hard it would be to find candidates willing to sign up to navigate a tire blowout at 100 miles per hour.

Our in-house simulator at AImotive can already handle one hundred traffic scenarios under different weather, lighting and road surface conditions. We started out with the idea that we could use video games to create the first neural networks for our development purposes, relying on images from Project Cars. However, it wasn’t as flexible as we needed, and it didn’t feature conventional driving obstacles such as buses or pedestrians. Consequently, we created a customized version of Unreal’s game engine, which provided the variability needed for full-scale simulation. Merging AI-trained self-driving technology with gaming technology has been just the right bridge to make testing far more accurate, far less time-consuming, and far cheaper.
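
The scale of such a scenario library is easy to see: crossing a handful of base scenarios with weather, lighting and road surface variants multiplies quickly. The parameter names below are illustrative, not AImotive's actual configuration.

```python
# Toy sketch of scenario parameterisation: a small set of base scenarios
# crossed with environmental variants yields a large test matrix.
from itertools import product

scenarios = ["tire_blowout", "pedestrian_on_road", "roundabout_merge"]
weather   = ["clear", "rain", "fog", "snow"]
lighting  = ["day", "dusk", "night"]
surface   = ["dry", "wet", "icy"]

test_matrix = list(product(scenarios, weather, lighting, surface))
print(len(test_matrix))   # → 108 variants from just a few parameters
```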

Real-World Testing Isn’t Going Away
It is worth noting, however, that a gaming environment is not the cure-all solution. Real-world testing, despite its shortcomings, still plays a critical role in autonomous development as simulation often lacks the kind of variability that can be found only in real life.
For the self-driving industry, however, it is perfectly fine if around 90-95% of testing takes place in a simulated environment. Observing that ratio is crucial to reaching full autonomy in a timely manner, and smart developers are discovering that simulation provides the smartest, safest and fastest way to put self-driving vehicles on the road.

Time is the most important factor in detecting network breaches and, consequently, in containing cyber incidents and mitigating the cost of a breach.

“Security event investigations can last hours, and a full analysis of an advanced threat can take days, weeks or even months. Even large security operations center (SOC) teams with more than 10 skilled analysts find it difficult to detect, confirm, remediate, and verify security incidents in minutes and hours,” says Chris Morales, Vectra Network’s head of security analytics.

“However, the teams that are using artificial intelligence to augment their existing security analysts and achieve greater levels of automation are more effective than their peers, and even than SOC teams with more than 10 members who are not using AI.”

Human-machine teaming is crucial

Vectra Networks has polled 459 Black Hat attendees on the composition and effectiveness of their organizations’ SOC teams.

The group – a mix of security architects, researchers, network operations and data center operations specialists, CISOs and infosec VPs – were asked whether their SOCs are already using AI in some form for incident response, and 153 (33%) said Yes.

The size of these teams, the time it takes them to detect and confirm a threat, and to remediate the incident and verify its containment varies.

But when comparing the time it takes SOC teams of more than 10 analysts to do all those things with and without the help of AI, the AI-assisted teams are consistently faster.

Take for example the time it takes for them to detect a threat:

[Chart: time to detect a threat, with and without AI]

Or how long it takes for them to remediate an incident:

[Chart: time to remediate an incident, with and without AI]

“There is a measurable trend with organizations that have implemented AI to automate tedious incident response tasks to augment the SOC manpower, enable them to focus on their artisan skills and empower decision making,” Morales noted. “When man and machine (AI) work together, the result is always better than man or machine alone.”

These results fit with those of a McAfee survey that tried to get to the bottom of what makes some threat hunters and SOCs more successful than others. The answer: automating many of the tasks related to threat investigation, so that analysts can spend more time on the actual threat hunting.

Legendary programmer Chris Lattner has had a roller coaster of a year. He left Apple (where he developed the Swift programming language) to help build Tesla’s Autopilot technology, only to leave months later after realizing that he wasn’t a good fit.

However, Lattner might be settling down. He just announced that he’s joining Google (namely, the Brain team) to make AI “accessible to everyone.” While Lattner doesn’t specify exactly what he’ll be doing, Bloomberg sources say he’ll be working on TensorFlow, the framework Google uses to simplify AI programming.

The hire won’t necessarily change the state of affairs for Apple, which has had to make do without Lattner for months, but it’s a definite coup for Google. Lattner earned praise for Swift because it was fast, thoroughly modern, and (most importantly) accessible: everyone from first-timers to seasoned programmers stands to benefit from it.

Google could put that know-how to work making TensorFlow easier to use, or lowering the hardware demands so that AI runs more smoothly on phones and computers. There’s no guarantee that he’ll repeat his previous feats at Google, but the potential is certainly there.

minoHealth is a health system developed to diagnose diseases more accurately than a human health worker can. The system is reported to use deep learning to predict and diagnose medical conditions in patients, an approach used by few healthcare systems.
They wrote on their website:

“Futuristic Medical Health System seeking to Democratize Quality Healthcare with Artificial Intelligence(A.I) Medical Predictions/Diagnostics Systems, Cloud Medical Records System for Hospitals, Ministry of Health and Patients separately and “Big Data” Analytics.”

minoHealth currently has three AI healthcare systems.

• The first system predicts whether a female patient will develop diabetes within the next five years.
• The second and third systems determine whether a breast tumor is malignant or benign, using two separate approaches.

Deep learning is among the most effective artificial intelligence techniques in use today.

The minoHealth team also plans to work with epidemiologists in Ghana and the Ministry of Health to develop more medical datasets, training additional deep learning models to cater to even more medical conditions and healthcare needs of Ghanaians.

Quora has become a great resource for machine learning. Many top researchers are active on the site answering questions on a regular basis.

Here are some of the main AI-related topics on Quora. If you have a Quora account, you can subscribe to these topics to customize your feed.

While Quora has FAQ pages for many topics (e.g. FAQ for Machine Learning), they are far from comprehensive. In this post, I’ve tried to provide a more thorough Quora FAQ for several machine learning and NLP topics.

Quora doesn’t have much structure, and many questions you find on the site are either poorly answered or extremely specific. I’ve tried to include only popular questions that have good answers on general interest topics.

Machine Learning

Supervised Learning

Reinforcement Learning

Unsupervised Learning

Deep Learning

Convolutional Neural Networks

Recurrent Neural Networks

Natural Language Processing

Generative Adversarial Networks

This originally appeared on Quora.

Microsoft’s upcoming Photos app is getting AI image search so that it can spot and classify objects, much like Google Photos and Apple Photos can. Spotted by Windows Central, the latest Insider Preview version of the app now has a search bar that you can use to enter terms like “flower,” “wine bottle,” and “bar.”

It will then use a cloud-based image recognition algorithm to pick and sort out those items in your photo collection, much as the rival apps do.

The first time you use it, Microsoft Photos will index everything, a process that takes about a second per image, Windows Central notes.

Afterwards, all the indexing is stored locally, so you can search and sort by objects, colors and other terms very rapidly. (If you’re uncomfortable with the idea of sorting people by facial recognition, you can opt out fairly easily.)

Similar again to how Google Photos works, Microsoft Photos will put together and let you confirm suggested albums to keep photos shot around the same place and time in a similar theme. The AI search will also suggest photo albums based on tags like “cats,” for instance.

The feature is starting to become indispensable for OneDrive or Office 365 users, who can store up to a terabyte of data as part of their subscriptions. That’s a lot of photos, so having an AI to manage them will make your collection less unwieldy.
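
Back-of-the-envelope arithmetic shows why: at an assumed average of 3 MB per photo (our assumption, not a Microsoft figure), a full terabyte holds a few hundred thousand images, so the one-second-per-image initial indexing pass is substantial.

```python
# Rough estimate; the 3 MB average photo size is an assumption.
terabyte    = 1_000_000_000_000
avg_photo   = 3_000_000
photos      = terabyte // avg_photo       # ≈ 333,333 images
index_hours = photos / 3600               # at ~1 s per image
print(photos, round(index_hours))         # → 333333 93
```

Hence the locally stored index: paying that cost once makes every later search near-instant.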

The new Photos app is in its very early beta stages, as the features are available only in the Insider Preview and not in the other release rings (Fast Ring, Skip Ahead or Production).

So Microsoft presumably wants to test this pretty thoroughly before releasing it, likely with the Fall Creators Update due sometime in, well, the fall.

Microsoft is releasing a new tool that uses artificial intelligence to find and detect software bugs. The Microsoft Security Risk Detection tool, previously known as Project Springboard, will be available by the end of the summer.

“The tool is designed to catch the vulnerabilities before the software goes out the door, saving companies the heartache of having to patch a bug, deal with crashes or respond to an attack after it has been released,” the company wrote in a blog.

This type of software security strategy is called fuzz testing. According to the company, while companies have practiced fuzz testing in the past, today it is becoming too complex to do manually. Microsoft Security Risk Detection acts as a helping hand by asking “what if” questions to pinpoint inputs that could cause a crash or security concern, Microsoft explained.
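
In miniature, fuzz testing looks like this (a toy sketch with an invented parser bug, not Microsoft's actual tool): mutate inputs at random and record any that crash the code under test.

```python
# Toy fuzzer: the parser, its planted bug, and all parameters are invented
# for illustration only.
import random

def parse(data: bytes):
    # code under test, with a planted bug: it chokes on high byte values
    if max(data) > 250:
        raise ValueError("unexpected byte")
    return len(data)

def fuzz(seed: bytes, rounds: int = 2000):
    random.seed(0)                                 # deterministic demo
    crashes = []
    for _ in range(rounds):
        buf = bytearray(seed)
        for _ in range(random.randint(1, 4)):      # a few random byte flips
            buf[random.randrange(len(buf))] = random.randrange(256)
        try:
            parse(bytes(buf))
        except Exception:
            crashes.append(bytes(buf))             # record crashing input
    return crashes

found = fuzz(b"\x00" * 8)
print(len(found) > 0)    # the fuzzer rediscovers the planted bug
```

Real fuzzers add coverage feedback and smarter mutation strategies, which is exactly the reasoning Microsoft says its AI automates and scales out in the cloud.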

“We use AI to automate the same reasoning process that you or I would use to find a bug, and we scale it out with the power of the cloud,” said David Molnar, a Microsoft researcher.

The solution’s process involves uploading binaries, running multiple fuzzers, identifying high-value bugs, and fixing them.

DocuSign, an early adopter, used the tool to find potentially problematic bugs. According to John Heasman, senior director of software security at DocuSign, the risk detection tool made it easy to avoid potential attacks and release high-quality software with assurance. The number one benefit for the team was that the solution rarely reported false positives.