IBM releases data science and machine learning platform Cloud Private for Data

IBM is embracing artificial intelligence with the launch of IBM Cloud Private for Data. The platform consists of integrated data science, data engineering and app building services. According to IBM, it is designed to help organizations accelerate their AI journeys and increase productivity.


“Whether they are aware of it or not, every company is on a journey to AI as the ultimate driver of business transformation,” said Rob Thomas, general manager of IBM Analytics. “But for them to get there, they need to put in place an information architecture for collecting, managing and analyzing their data. With today’s announcements, we are planning to bring the AI destination closer and give access to powerful machine learning and data science technologies that can turn data into game-changing insight.”

The platform is powered by an in-memory database that is capable of ingesting and analyzing one million events per second, according to IBM's internal testing. In addition, it is deployed on Kubernetes, allowing for a fully integrated development and data science environment, IBM explained. The company hopes the platform will give organizations access to data insights that were previously unobtainable, and allow users to take advantage of event-driven applications to gather and analyze data from IoT sensors, online commerce, mobile devices, and more.

Cloud Private for Data includes capabilities from IBM's Data Science Experience, Information Analyzer, Information Governance Catalog, DataStage, Db2, and Db2 Warehouse. These capabilities will allow customers to gain insights from data stored in protected environments and make data-driven decisions. According to the company, the solution is meant to provide a data infrastructure layer for AI behind firewalls.

Going forward, IBM plans to have Cloud Private for Data run on all clouds and be available in industry-specific solutions for areas such as financial services, healthcare, and manufacturing.

As part of the launch, the company also announced the Data Science Elite Team, a no-charge consultancy team that will advise clients on machine learning adoption and assist them with their AI roadmaps.

DigitalOcean finds majority of developers aren’t using AI or CD

Despite the benefits artificial intelligence brings to the software development lifecycle, not many developers are taking advantage of it. A newly released report from DigitalOcean found that only 17 percent of respondents worked with artificial intelligence or machine learning in 2017.

However, 73 percent of those not using AI plan to at least learn more about the technology in 2018. Over half of the respondents (63%) cited automating workflows as a big challenge in 2018. Incorporating machine learning and artificial intelligence was the second-biggest challenge for the coming year, according to 32 percent of respondents.

In addition, the report revealed that only 42 percent of respondents are using continuous integration or continuous delivery. Those who are not using it say that it isn't necessary for their workflow, that they plan to start using it, or that it is too complicated.

The Q4 report is designed to look at emerging software development trends that can give developers an idea of what to expect in 2018. The company surveyed more than 2,500 people in the software development community.

Another key finding was that Linux is still the server operating system of choice, preferred by 89 percent of respondents. The other options were Windows (8%), MacOS (2%), and BSD (1%).

Almost half of the respondents said they would be looking for a new job in 2018, with work environment and culture being the most important considerations when evaluating a new company.

Finally, 67 percent of respondents reported Let's Encrypt as their favorite SSL provider. The second-place choice, Comodo, was favored by only eight percent of respondents. GoDaddy came in at six percent and Verisign at three percent.

Unifi releases new AI-powered analytics

Unifi Software has announced plans to power its data platform with a new artificial intelligence engine called OneMind. The engine is designed to source, explore and prepare data for businesses so that they can make educated decisions more easily.

In addition, OneMind learns from the data in order to predict patterns and recommend datasets.

The Unifi Data Platform is built on four pillars:

  1. Governance and security
  2. Catalog and discovery
  3. Data preparation
  4. Workflow and scheduling

OneMind touches each of these pillars by automatically recommending attributes to mask in order to maintain compliance; profiling data to help businesses understand their datasets; automating the steps necessary to cleanse, enrich, parse, normalize, transform, filter, and format data; and making recommendations of previously created workflow automation jobs.

“With our deep engineering expertise in AI and support for Natural Language Queries, we make it very easy for any kind of user to ask questions on the catalog and generate the answer through Natural Language Processing, leading to a Google-like experience on data and metadata discovery. From a technology perspective, OneMind captures complicated relationships between enterprises’ data, metadata in various sources into a governed dynamically growing ‘Enterprise Data Knowledge Graph’ that can be displayed visually and provides a simple, interactive user experience,” said Ayush Parashar, co-founder and vice president of engineering for Unifi Software.

Why Machine Learning Isn’t As Hard To Learn As You Think

Why is Machine Learning difficult to understand? originally appeared on Quora – the place to gain and share knowledge, empowering people to learn from others and better understand the world.

Answer by John L. Miller (industry ML experience with video, sensor data, and images; PhD; Microsoft, Google) on Quora:

I’m usually the first person to say something is hard, but I’m not going to here. Learning how to use machine learning isn’t any harder than learning any other set of libraries for a programmer.

The key is to focus on using it, not designing the algorithm. Look at it this way: if you need to sort data, you don’t invent a sort algorithm, you pick an appropriate algorithm and use it right.

It’s the same thing with machine learning. You don’t need to learn how the guts of the machine learning algorithm work. You need to learn what the main choices are (e.g. neural nets, random decision forests…), how to feed them data, and how to use the data produced.
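
As a concrete illustration of that workflow (not part of Miller's answer), here is a minimal sketch using scikit-learn: pick an off-the-shelf algorithm such as a random decision forest, feed it data, and use what it produces. The dataset and parameters are placeholders, not a recommendation.

```python
# Hypothetical example: "pick an algorithm and use it right" with scikit-learn.
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Feed it data: a bundled toy dataset stands in for your real features/labels.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Pick one of the main off-the-shelf choices (here, a random decision forest).
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Use the data it produces: predictions on unseen examples.
preds = clf.predict(X_test)
print(f"held-out accuracy: {accuracy_score(y_test, preds):.3f}")
```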

There is a bit of an art to it: deciding when you can and can’t use machine learning, and figuring out the right data to feed into it. For example, if you want to know whether a movie shows someone running, you might want to send both individual frames and sets of frame deltas a certain number of seconds apart.

If you’re a programmer and it’s incredibly hard to learn ML, you’re probably trying to learn the wrong things about it.

This question originally appeared on Quora – the place to gain and share knowledge, empowering people to learn from others and better understand the world.

Microsoft and Amazon announce deep learning library Gluon

Microsoft has announced a new partnership with Amazon to create an open-source deep learning library called Gluon. The idea behind Gluon is to make artificial intelligence more accessible and valuable.

According to Microsoft, the library simplifies the process of creating deep learning models and will enable developers to use multiple deep learning libraries. This announcement follows the companies' introduction of the Open Neural Network Exchange (ONNX) format, another effort to foster an open AI ecosystem.

Gluon supports both symbolic and imperative programming, something not supported by many other toolkits, Microsoft explained. It also supports hybridization of code, allowing compute graphs to be cached and reused in future iterations, and it offers a layers library of pre-built building blocks for defining model architectures. Gluon natively supports loops and ragged tensors, allowing for high execution efficiency for RNN and LSTM models, as well as sparse data and operations. It also provides the ability to do advanced scheduling across multiple GPUs.
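
For a sense of what that looks like in code, here is a minimal, hypothetical sketch of Gluon's pre-built layers and hybridization with the Apache MXNet backend; the layer sizes, data and hyperparameters are arbitrary placeholders rather than anything from Microsoft's announcement.

```python
# Hypothetical sketch of Gluon's imperative API plus hybridization (Apache MXNet backend).
import mxnet as mx
from mxnet import autograd, gluon, nd

# Define a model from pre-built building blocks in the layers library.
net = gluon.nn.HybridSequential()
net.add(gluon.nn.Dense(64, activation='relu'),
        gluon.nn.Dense(10))
net.initialize(mx.init.Xavier())

# hybridize() caches the compute graph so it can be reused on later iterations,
# giving symbolic-style speed while the code stays imperative.
net.hybridize()

loss_fn = gluon.loss.SoftmaxCrossEntropyLoss()
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.1})

# Dummy batch: 32 examples with 128 features and 10 possible classes.
x = nd.random.uniform(shape=(32, 128))
y = nd.random.randint(0, 10, shape=(32,)).astype('float32')

with autograd.record():          # imperative training step
    loss = loss_fn(net(x), y)
loss.backward()
trainer.step(batch_size=32)
print(loss.mean().asscalar())
```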

“This is another step in fostering an open AI ecosystem to accelerate innovation and democratization of AI-making it more accessible and valuable to all,” Microsoft wrote in a blog post. “With Gluon, developers will be able to deliver new and exciting AI innovations faster by using a higher-level programming model and the tools and platforms they are most comfortable with.”

The library will be available for Apache MXNet or Microsoft Cognitive Toolkit. It is already available on GitHub for Apache MXNet, with Microsoft Cognitive Toolkit support on the way.

The race is officially on to market seamless autonomous driving technology within the shortest possible timeframe. One of the biggest challenges, however, lies in preparing autonomous vehicles to navigate the many unforeseen and unexpected obstacles that face drivers on any given journey.

Real-world testing, while ideal, falls short of producing enough quality, representative, diverse and well-labeled data to properly train the AI components of self-driving vehicles. The real world is by nature impossible to predict, and the driving experience is fraught with unforeseen obstacles, unexpected traffic delays and detours, and sudden changes in weather conditions. To prepare autonomous vehicles to navigate and react properly, it is imperative that the industry look beyond real-world testing.

Simulation Superiority
A game engine-based, virtual reality environment offers an attractive alternative: it is cheaper, faster and safer than experimenting with real vehicles. It also allows scenes that would hardly ever happen in real life, or that would pose an intolerable risk to human drivers, to be repeated an endless number of times.

Take the very basic example of someone lying on the roadway. You certainly can’t use a real person to mimic this scenario, and a dummy would cause untold traffic problems. The same applies to hitchhikers in the emergency lane, unexpected roadwork, an impaired driver, or a sudden traffic accident.

Applying a digital setting for autonomous vehicle (AV) testing brings the ultimate benefit of engaging AI for scenarios that cannot be replicated under real-life conditions.

Even if driving conditions were fairly standard, covering thousands of miles would take a massive fleet and a significant investment in time. Alternatively, a set of high-performance computers gets the job done within an hour. Simulating cameras and sensors, in real time if necessary, can cut testing time tremendously and save big dollars.

Handling Traffic Madness Hotspots
That brings up another key issue, namely motion planning. According to the current consensus in the industry, machine recognition software can be trained by feeding AI images or footage captured in traffic. The same doesn’t apply to motion planning, however, since a moving car itself changes the variables in its surroundings. This is why simulation is the only way to train AI for motion.

Look at the Arc de Triomphe in Paris, one of the craziest roundabouts in Europe. Traffic from 12 major avenues feeds into it, and drivers navigate six lanes without road markings. The French even have two types of insurance policies: those that exclude and those that include using your car there. Drivers inch through the roundabout largely on instinct, and it is hard to imagine risking accidents just to train AI on the spot. By contrast, a simulated algorithm can prepare AI to fine-tune motion planning specifically to navigate the Arc de Triomphe roundabout or any number of hotspots around the world.

Walking the Path of Game Engines
The most recent survey from the National Highway Traffic Safety Administration lists the critical reasons for vehicle crashes in the United States. The good news is that many of the non-driver-related factors that cause accidents, including technical errors, can be effectively simulated.

Game engine-based testing comes in handy, for instance, when you need to see what happens if a sensor or a car part breaks down. But grave danger may often be an issue here too – you can imagine how hard it would be to find candidates willing to sign up to navigate a tire blowout while driving 100 miles per hour.

Our in-house simulator at AImotive can already handle one hundred traffic scenarios under different weather, lighting and road surface conditions. We started out with the idea that we could use video games to create the first neural networks for our development purposes, relying on images from Project Cars. However, that approach wasn’t as flexible as we needed, and it didn’t feature conventional driving obstacles such as buses or pedestrians. Consequently, we created a customized version of Unreal’s game engine, which provided the variability needed for full-scale simulation. Merging AI-trained self-driving tech with gaming technology has been just the right bridge to make testing much more accurate, far less time-consuming, and far less costly.

Real-World Testing Won’t Be Gone for Good
It is worth noting, however, that a gaming environment is not the cure-all solution. Real-world testing, despite its shortcomings, still plays a critical role in autonomous development as simulation often lacks the kind of variability that can be found only in real life.

For the self-driving industry, however, it is perfectly fine if around 90-95% of testing takes place in a simulated environment. Observing that ratio is crucial to reaching full autonomy in a timely manner, and smart developers are discovering that simulation provides the smartest, safest and fastest way to put self-driving vehicles on the road.

Time is the most important factor in detecting network breaches and, consequently, in containing cyber incidents and mitigating the cost of a breach.

“Security event investigations can last hours, and a full analysis of an advanced threat can take days, weeks or even months. Even large security operations center (SOC) teams with more than 10 skilled analysts find it difficult to detect, confirm, remediate, and verify security incidents in minutes and hours,” says Chris Morales, Vectra Networks’ head of security analytics.

“However, the teams that are using artificial intelligence to augment their existing security analysts and achieve greater levels of automation are more effective than their peers, and even than SOC teams with more than 10 members who are not using AI.”

Human-machine teaming is crucial

Vectra Networks has polled 459 Black Hat attendees on the composition and effectiveness of their organizations’ SOC teams.

The group – a mix of security architects, researchers, network operations and data center operations specialists, CISOs and infosec VPs – was asked whether their SOCs are already using AI in some form for incident response, and 153 (33%) said yes.

The size of these teams varies, as does the time it takes them to detect and confirm a threat, remediate the incident, and verify its containment.

But when comparing the time it takes SOC teams of more than 10 analysts to do all of those things with and without the help of AI, the teams using AI are consistently faster.

Take, for example, the time it takes for them to detect a threat:

[Chart: time to detect a threat, SOC teams with vs. without AI]

Or how long it takes for them to remediate an incident:

[Chart: time to remediate an incident, SOC teams with vs. without AI]

“There is a measurable trend with organizations that have implemented AI to automate tedious incident response tasks to augment the SOC manpower, enable them to focus on their artisan skills and empower decision making,” Morales noted. “When man and machine (AI) work together, the result is always better than man or machine alone.”

These results fit together with those of a McAfee survey that tried to get to the bottom of what makes some threat hunters and SOCs more successful than others. The answer was: the automation of many tasks relating to threat investigation, so that they can spend more time on the actual threat hunting.

Legendary programmer Chris Lattner has had a roller coaster of a year. He left Apple (where he developed the Swift programming language) to help build Tesla’s Autopilot technology, only to leave months later after realizing that he wasn’t a good fit.

However, Lattner might be settling down. He just announced that he’s joining Google (namely, the Brain team) to make AI “accessible to everyone.” While Lattner doesn’t specify exactly what he’ll be doing, Bloomberg sources say he’ll be working on TensorFlow, the machine learning framework Google uses to simplify AI programming.

The hire won’t necessarily change the state of affairs for Apple, which has had to make do without Lattner for months, but it’s a definite coup for Google. Lattner earned praise for Swift because it was fast, thoroughly modern, and (most importantly) accessible – everyone from first-timers to seasoned programmers stands to benefit from it.

Google could put that know-how to work making TensorFlow easier to use, or lowering the hardware demands so that AI runs more smoothly on phones and computers. There’s no guarantee that he’ll repeat his previous feats at Google, but the potential is certainly there.

minoHealth is a health system developed to diagnose diseases more accurately than a human health worker can. The system reportedly uses deep learning to predict and diagnose medical conditions in patients – an approach used by few healthcare systems so far.
They describe it on their website:

“Futuristic Medical Health System seeking to Democratize Quality Healthcare with Artificial Intelligence(A.I) Medical Predictions/Diagnostics Systems, Cloud Medical Records System for Hospitals, Ministry of Health and Patients separately and “Big Data” Analytics.”

minoHealth currently has three AI healthcare systems.

• The first system predicts whether a female patient will develop diabetes within the next five years.
• The second and third systems determine whether a breast tumor is malignant or benign, using two separate approaches (a rough illustrative sketch follows this list).
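
As a rough, hypothetical sketch of the second kind of system (and not minoHealth's actual model), a small neural network can be trained on the public Wisconsin breast cancer dataset that ships with scikit-learn to separate malignant from benign tumors:

```python
# Hypothetical malignant-vs-benign classifier on the public Wisconsin dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)   # label 0 = malignant, 1 = benign
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Scale the features, then fit a small multilayer perceptron.
model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0),
)
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.3f}")
```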

Deep learning is among the most effective areas of artificial intelligence today.

The minoHealth team also plans to work with epidemiologists in Ghana and the Ministry of Health to develop more medical datasets, which will be used to train additional deep learning models that cater to even more medical conditions and healthcare needs of Ghanaians.

Quora has become a great resource for machine learning. Many top researchers are active on the site answering questions on a regular basis.

Here are some of the main AI-related topics on Quora. If you have a Quora account, you can subscribe to these topics to customize your feed.

While Quora has FAQ pages for many topics (e.g. FAQ for Machine Learning), they are far from comprehensive. In this post, I’ve tried to provide a more thorough Quora FAQ for several machine learning and NLP topics.

Quora doesn’t have much structure, and many questions you find on the site are either poorly answered or extremely specific. I’ve tried to include only popular questions that have good answers on general interest topics.

• Machine Learning
• Supervised Learning
• Reinforcement Learning
• Unsupervised Learning
• Deep Learning
• Convolutional Neural Networks
• Recurrent Neural Networks
• Natural Language Processing
• Generative Adversarial Networks

This originally appeared on Quora.