Self-Driving Uber car knocks down pedestrian in Arizona

TEMPE, Ariz. – Tempe, Arizona police are investigating a deadly crash involving a self-driving Uber vehicle early Monday morning. The Uber vehicle was reportedly headed northbound when a woman walking outside of the crosswalk was struck.

The woman, identified as 49-year-old Elaine Herzberg, was taken to the hospital where she died from her injuries.
Tempe police say the vehicle was in autonomous mode at the time of the crash, with a vehicle operator behind the wheel. No passengers were in the vehicle at the time.
An Uber spokesperson said they are aware of the incident and are cooperating with authorities.
They released the following statement: “Our hearts go out to the victim’s family. We are fully cooperating with local authorities in their investigation of this incident.”
Uber’s CEO Dara Khosrowshahi also acknowledged the incident on Twitter.

Uber has paused self-driving operations in Phoenix, Pittsburgh, San Francisco and Toronto, which is a standard move, the company says.

The investigation is still active.

Uber began testing self-driving cars in Tempe in February 2017. The fleet of self-driving Volvos arrived in Arizona after they were banned from California roads over safety concerns. Gov. Doug Ducey touted Arizona as a testing ground, saying at the time in a written statement, “Arizona welcomes Uber self-driving cars with open arms and wide open roads.”

One of the self-driving cars was involved in a crash a month later, after a car failed to yield to the Uber vehicle and hit it, authorities said. The self-driving SUV rolled onto its side as a result of the crash.

No serious injuries were reported in that crash.

All you need to know about MWC 2018

We’ve seen another announcement-packed Mobile World Congress event in Barcelona this week, with new phones launched by Samsung, Sony, LG, Nokia, Asus, and others. So what can this glut of new devices tell us about where smartphones are heading in 2018, and what we’ll see for the rest of the year?

Despite all the phones unveiled in Spain, there are plenty more to come: We’ll very likely see new flagships from Google, Apple, LG, OnePlus, Huawei, and HTC over the next 10 months, and Samsung will be back with a successor to the Galaxy Note 8. The year is just getting started.

  • Modern-day phone processors come with AI optimizations built in
  • Bezels are disappearing even on budget phones
  • Most of Nokia's new phones run stock Android
  • Samsung is keeping faith with the fingerprint sensor

AI is everywhere

It’s no surprise that one of the biggest trends of 2017 rolls right into 2018 – phone makers now want to pack as much artificial intelligence into their handsets as possible, even if they have to stretch the definition of the term “AI” to do it.

The latest chips from Qualcomm, Samsung, and Huawei, among others, are built with AI computation in mind, specifically optimized to better handle the machine learning that powers a lot of the artificial intelligence processing required on modern devices.

Modern-day phone processors come with AI optimizations built in

AI can seem like quite an abstract concept, but the end result is phones that are better able to think for themselves and learn over time, without offloading the intense calculations that are required off to the cloud – being able to recognize what you’re taking a photo of, and adjusting the camera settings accordingly, is a good example of an AI-enabled feature.

Digital assistants are another example, now better than ever at recognizing your voice and interpreting your commands without having to check back with base first. More of that computing can be done on-board the phones of 2018.

We’ve also seen some impressive augmented reality demos at MWC – another 2017 trend spilling over into 2018 – and you can expect AR to be key in the new handsets we’re going to be seeing from Google and Apple later in the year. Animated, life-like emojis seem to be the order of the day, but AR has plenty of potential beyond cartoon characters.

Meanwhile, older technologies refuse to die off, despite Apple’s best efforts. Samsung’s new flagship phones include both fingerprint sensors and 3.5-mm headphone jacks, so we won’t all be switching to face unlock just yet – the first handset with a fingerprint sensor under its display has already appeared, and if the tech becomes more widely adopted, it could once again become the default way of getting into your phone.

Mobile World Congress always sets down a marker for what we can expect from the phones of the rest of the year, and 2018 has been no different – we’re looking forward to what appears next.

Microsoft releases Azure Bot Service and Cognitive Services Language Understanding

Microsoft has announced two new development tools designed to advance conversational artificial intelligence experiences. Microsoft Azure Bot Service and Microsoft Cognitive Services Language Understanding (LUIS) are now available.

“Conversational AI, or making human and computer interactions more natural, has been a goal since technology became ubiquitous in our society. Our mission is to bring conversational AI tools and capabilities to every developer and every organization on the planet, and help businesses augment human ingenuity in unique and differentiated ways,” Lili Cheng, corporate vice president of Microsoft’s AI and research group, wrote in a post.

The Azure Bot Service is designed to help developers create conversational interfaces, while LUIS is designed for building custom natural language understanding into applications.

The Bot Service provides an environment where these conversational bots can interact with customers on multiple channels across any device. Channels include Cortana, Facebook Messenger, and Skype. “Intelligence is enabled in the Azure Bot Service through the cloud AI services forming the bot brain that understands and reasons about the user input. Based on understanding the input, the bot can help the user complete some tasks, answer questions, or even chit chat through action handlers,” the Microsoft Azure Bot Service and Language Understanding team wrote in a post.

Language Understanding is the key part of the “bot brain” that enables bots to “think” and “reason” in order to take appropriate actions. The Language Understanding solution supports a number of languages in addition to English, and comes with prebuilt services for English, French, Spanish and Chinese. In addition, it provides phrase suggestions to help developers customize LUIS domain vocabulary in Chinese, Spanish, Japanese, French, Portuguese, German and Italian.
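To make the intent-and-entity model concrete, here is a minimal sketch of how a bot might dispatch on a LUIS-style prediction. The payload below only mirrors the general shape of a language-understanding response (a top intent plus a list of entities); the field names, intent name, and handler are illustrative assumptions, not an exact API contract.

```python
# Illustrative only: a simplified LUIS-style prediction payload and how a bot
# might route it to an action handler based on the top-scoring intent.
prediction = {
    "query": "book a flight to Paris on Friday",
    "topScoringIntent": {"intent": "BookFlight", "score": 0.97},
    "entities": [
        {"type": "Location", "entity": "Paris"},
        {"type": "Date", "entity": "Friday"},
    ],
}

def dispatch(pred):
    """Route the utterance to an action handler based on the top intent."""
    intent = pred["topScoringIntent"]["intent"]
    # Collapse the entity list into named slots the handler can use.
    slots = {e["type"]: e["entity"] for e in pred["entities"]}
    if intent == "BookFlight":
        return f"Booking a flight to {slots.get('Location')} on {slots.get('Date')}"
    return "Sorry, I didn't understand that."

print(dispatch(prediction))  # -> Booking a flight to Paris on Friday
```

In a real bot the prediction would come back from the LUIS endpoint, but the dispatch pattern – top intent selects a handler, entities fill its slots – is the core of the “bot brain” idea described above.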

In addition, the company announced new capabilities for Azure Bot Service and Language Understanding. These features include an updated user interface, support for up to 500 intents and 100 entities for richer conversational experiences, the ability to customize cognitive services, and intelligent APIs that enable systems to see, hear, speak, understand and interpret.

“Think about the possibilities: all developers regardless of expertise in data science able to build conversational AI that can enrich and expand the reach of applications to audiences across a myriad of conversational channels. The app will be able to understand natural language, reason about content and take intelligent actions,” the Azure team wrote. “Bringing intelligent agents to developers and organizations that do not have expertise in data science is disruptive to the way humans interact with computers in their daily life and the way enterprises run their businesses with their customers and employees.”

Unifi releases new AI-powered analytics

Unifi Software has announced plans to power its data platform with a new artificial intelligence engine. OneMind is designed to source, explore and prepare data for businesses so that they can make educated decisions more easily.

In addition, OneMind learns from the data in order to predict patterns and recommend datasets.

The Unifi Data Platform is built on four pillars:

  1. Governance and security
  2. Catalog and discovery
  3. Data preparation
  4. Workflow and scheduling

OneMind touches each of these pillars by automatically recommending attributes to mask in order to maintain compliance; profiling data to help businesses understand their datasets; automating the steps necessary to cleanse, enrich, parse, normalize, transform, filter, and format data; and making recommendations of previously created workflow automation jobs.
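The first pillar – recommending attributes to mask for compliance – can be illustrated with a small sketch. This is not Unifi's implementation; the column names and the pattern rule below are hypothetical stand-ins for the kind of heuristics a governance layer might apply.

```python
import re

# Hypothetical sketch: flag and mask sensitive attributes based on column
# names and simple value patterns (here, a US SSN-like pattern).
SENSITIVE_NAMES = {"ssn", "email", "phone"}

def mask_row(row):
    masked = {}
    for col, val in row.items():
        looks_sensitive = (
            col.lower() in SENSITIVE_NAMES
            or re.fullmatch(r"\d{3}-\d{2}-\d{4}", str(val)) is not None
        )
        masked[col] = "***MASKED***" if looks_sensitive else val
    return masked

row = {"name": "Ada", "ssn": "123-45-6789", "city": "Tempe"}
print(mask_row(row))  # -> {'name': 'Ada', 'ssn': '***MASKED***', 'city': 'Tempe'}
```

A production system would of course learn such rules from data rather than hard-code them, which is where the AI engine described above comes in.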

“With our deep engineering expertise in AI and support for Natural Language Queries, we make it very easy for any kind of user to ask questions on the catalog and generate the answer through Natural Language Processing, leading to a Google-like experience on data and metadata discovery. From a technology perspective, OneMind captures complicated relationships between enterprises’ data, metadata in various sources into a governed dynamically growing ‘Enterprise Data Knowledge Graph’ that can be displayed visually and provides a simple, interactive user experience,” said Ayush Parashar, co-founder and vice president of engineering for Unifi Software.

Visual Studio Live Share gives you pair programming without the shared keyboards

Decades after introducing IntelliSense, the code completion and information features that transform Visual Studio into something more than just a text editor, Microsoft is introducing something that it claims is just as exciting: Live Share.

Collaboration is critical for many developers. Having another pair of eyes look over a problematic bug can offer insight that’s proving elusive; tapping the knowledge of a seasoned veteran is an important source of training and education. Some developers advocate pair programming, a system of development where two people literally share a keyboard and take turns to drive, but most feel this is intrusive and inconvenient. Ad hoc huddles around a single screen are common but usually mean that one developer has to contend with the preferences of another, hindering their productivity. Screen sharing avoids the awkward seating but also means that the sharer either has a loss of control if they give the other person keyboard and mouse access, or, if they don’t, it prevents the other person from taking the initiative.

Live Share is Microsoft’s solution. It provides a shared editing experience within Visual Studio and Visual Studio Code (currently only for JavaScript, TypeScript, and C#) that’s similar to the shared editing found in word processors; each person can see the other’s cursor and text selections; each person can make edits—but it goes further, by enabling shared debugging, too. A project can be launched under the debugger, and both people can see the call stack, examine in-scope variables, or even change values in the immediate window. Both sides can single-step the debugger to advance through a program.

It provides rich collaboration—while still allowing both developers to use the environment that they’re comfortable and familiar with. If you prefer to use Visual Studio, with your windows laid out just so, and still use the same key bindings as you learned for Visual C++ 6 back in the ’90s, you can do so, and it doesn’t matter that your peer is using Visual Studio Code on a Mac, with (ugh) vim key bindings. With Live Share, you just send a sharing request to your colleague and they can connect to your project, editor, and debugger from the comfort of their own environment.

The feature will be released as a preview for Visual Studio Code and Visual Studio at some unspecified point in the future, using a combination of updates to the core programs and extensions to round out the functionality. Microsoft stresses that the preview is still at an early stage. Technically, it allows multi-way collaboration (not just pairs), though this may not be enabled initially. At some point it will allow direct connections between systems on the same network, but, initially, it may require sharing activity to bounce through a Microsoft server.

Even at this early stage, however, it looks tremendously useful and like a huge step forward in collaboration and productivity.

Building a better DevOps platform

More immediately, today marks the general availability of Visual Studio App Center (formerly Mobile Center), Microsoft’s one-stop shop for mobile application deployment and testing. Point App Center at your source repository (hosted on Microsoft’s Visual Studio Team Services (VSTS) or GitHub), and it will fetch the code, set up build scripts, and run unit and integration tests.

That’s standard continuous integration stuff, but App Center goes further: it can run your application tests on real hardware, both iOS and Android, to span dozens of different screen size and operating system combinations. You can even see screenshots of the app running on the various different makes and models of handset.

Once your application is passing its tests, App Center has a beta deployment system so that you can roll it out to beta testers. Need to make a quick fix to address a bug? If your app is written in JavaScript, you can use Code Push to send updated scripts to your users without needing to do a full build and reinstall. This works even for stable builds that have been submitted to their respective app stores; you can patch live applications, and we’re told that Apple and Google will allow this as long as the patches aren’t too radical.

App Center lets you test across a whole bunch of devices at the same time. Notice how the first three phones have crashed out to the desktop because of a bug in the app being tested.


Even after a successful beta test, you’ll probably want to collect crash and analytics data from your users to discover problems and better understand how they’re using your application. App Center has tooling for that, too.

Microsoft’s goal with App Center is to make it easy for developers to adopt best practices around building, testing, reporting, and so on; App Center is a one-stop shop that handles all of these for you. Under the covers it uses VSTS. This means that if your needs grow beyond what App Center can do—for example, if you have server-side code that needs to have its builds, testing, and deployment synchronized with the client-side code—you can use the same workflows and capabilities in the full VSTS environment, while still retaining access to everything App Center can do.

Of course, you still have to develop applications in the first place. Microsoft is continuing to try to make Visual Studio the best place for app development regardless of platform. Live Player, shown earlier this year at Build, greatly streamlines the develop-build-debug loop for app development by pushing your application code to a device (iOS or Android) instantly, letting it run without needing to deploy an updated app package each time.

This is particularly compelling for honing user interfaces. Interfaces written in XAML, Microsoft’s .NET interface markup language, can be shown in Live Player, and they update live; as soon as you save the XAML changes, the UI shown on the device updates accordingly. You don’t even need to navigate to a particular screen within the application to test it; you can have Live Player simply show arbitrary XAML files. This makes developing and testing interfaces substantially less painful.

Increasing the reach of machine learning

Microsoft also announced Visual Studio Tools for AI, a range of features to make developing machine learning applications within Visual Studio easier. With this tooling, Visual Studio will be able to create projects that are already set up to use frameworks such as TensorFlow or Microsoft’s own CNTK.

Machine learning systems build models that are generated by large-scale training, with the training done on clusters and often accelerated with GPUs or dedicated accelerator chips. The models produced can then be run on client machines. A model that’s used for, say, detecting faces in video streams will still need a powerful client, but much less so than the hardware needed for the initial training.
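The train-in-the-cloud, run-on-the-client split described above can be sketched in a few lines. This toy stands in for the real thing: “training” here just fits a single threshold on labeled data (a stand-in for cluster-scale, GPU-accelerated training), and the serialized model is all the client needs for inference.

```python
import pickle

def train(samples):
    """Fit a toy one-threshold classifier; stand-in for heavy cloud training.

    samples: list of (value, label) pairs, label 1 for the "positive" class.
    """
    positives = [v for v, y in samples if y == 1]
    negatives = [v for v, y in samples if y == 0]
    # Put the decision boundary midway between the two classes.
    return {"threshold": (min(positives) + max(negatives)) / 2}

def predict(model, value):
    """Cheap client-side inference: no training data or cluster required."""
    return 1 if value >= model["threshold"] else 0

model = train([(1, 0), (2, 0), (8, 1), (9, 1)])
blob = pickle.dumps(model)          # "ship" the trained model to the client
client_model = pickle.loads(blob)   # lightweight client-side load
print(predict(client_model, 7))     # -> 1
```

The asymmetry is the point: `train` is where the expensive work lives, while `predict` plus the serialized model is the only part a client device ever runs.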

This model training is thus a good fit for cloud computing. The Tools for AI integrate with Azure’s Batch AI Service, a managed environment providing a GPU-accelerated training cluster. Training jobs can be submitted from within Visual Studio, and progress can be tracked there, too, giving insight into things like the level of GPU utilization.

Once a model has been built, there are now new ways of deploying it to devices. Microsoft has been talking up this notion of the “intelligent edge” as a counterpart to the “intelligent cloud;” this means pushing the machine-learning models into edge devices to make use of the local processing power where it makes sense to do so. A new framework, the AI Toolkit for Azure IoT Edge, is intended to streamline that process.

The company also announced a preview of Azure SQL Database Machine Learning Services, which allows machine learning models to be deployed into a SQL database and accessed directly. An example use case of this is a support ticketing system. A machine learning model could be generated to infer a priority for each ticket so that issues that seem to be urgent are prioritized automatically. With the new Azure services, this model can be run directly within the SQL database.
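The support-ticket example can be made concrete with a deliberately simple sketch. In the Azure service the scoring logic would be a trained model invoked from inside the SQL database; here it is plain Python with a hypothetical keyword-weight table, purely to show the shape of the inference step.

```python
# Hedged sketch: a trivially simple "model" (keyword weights) inferring a
# priority for each support ticket. The terms and weights are invented.
URGENT_TERMS = {"outage": 3, "down": 2, "cannot": 1, "slow": 1}

def ticket_priority(text):
    """Score a ticket by summing weights of urgent keywords it contains."""
    score = sum(w for term, w in URGENT_TERMS.items() if term in text.lower())
    if score >= 3:
        return "high"
    return "normal" if score == 0 else "medium"

print(ticket_priority("Production outage: site is down"))  # -> high
print(ticket_priority("Question about billing"))           # -> normal
```

Running this inside the database, as the new service allows, means tickets can be prioritized at insert time without a round trip to a separate inference service.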

As much as Microsoft and other companies have been talking up machine learning, it is for many developers something of an unknown. While high-level systems such as Cognitive Services don’t require much knowledge of the details of machine learning—they use prebuilt, off-the-shelf models, making them quick and easy to start using—developers who want to create their own models will need to learn and understand new frameworks and techniques.

Microsoft’s attempt to fill that knowledge gap is its AI school. As it builds up its range of systems and capabilities, it hopes that more accessible machine learning will turn up in more places.


MIT expands its presence to the West Coast to teach a deep learning course in Silicon Valley

Deep learning is a growing field that has been increasingly popular in recent years as advances in artificial intelligence are made and the excitement to innovate grows. This spring, MIT will be launching a deep learning course that will take place in San Jose, California, making this the first MIT course ever taught on the West Coast.

The course, Designing Efficient Deep Learning Systems, will be taught by Vivienne Sze, an associate professor in the electrical engineering and computer science department at MIT and an expert in AI. The course will be part of a Machine Learning Certificate program that MIT will launch in 2018. Other courses in the program include Modeling and Optimization for Machine Learning and Applications, and Machine Learning for Big Data and Text Processing.

According to Sze, the course will focus on designing deep learning systems that are efficient and low-powered. An inefficiently designed system could consume a lot of power, which makes it impractical to use on portable devices such as a cell phone. “The goal is to make the students or audience aware of the interaction between the algorithms and the hardware,” said Sze.

The course is aimed at both academics and industry professionals working with deep learning. “We’ve run the course previously in more of an academic setting, but I think a lot of this technology is really translating into industry and people really want to use it,” said Sze. “If you actually want to use it and deploy it you should understand how to make it efficient.”

Sze hopes that this course will teach people not only how to design efficient deep learning systems, but also how to evaluate existing solutions.

With great technology comes great risk. As new technologies continue to emerge in this digital age, Carnegie Mellon University’s Software Engineering Institute (SEI) is taking a deeper look at the impact they will have. The institute has released its 2017 Emerging Technology Domains Risk report detailing future threats and vulnerabilities.

“To support the [Department of Homeland Security’s United States Computer Emergency Readiness Team] US-CERT mission of proactivity, the CERT Coordination Center located at Carnegie Mellon University’s Software Engineering Institute was tasked with studying emerging systemic vulnerabilities, defined as exposures or weaknesses in a system that arise due to complex or unexpected interactions between subcomponents. The CERT/CC researched the emerging technology trends through 2025 to assess the technology domains that will become successful and transformative, as well as the potential cybersecurity impact of each domain,” according to SEI’s report.

According to the report, the top technologies that pose a risk are:

  • Blockchain: Blockchain technology has become more popular over the past couple of years as companies are working to take the technology out of cryptocurrency and transform it into a business model. Gartner recently named blockchain as one of the top 10 technology trends for 2018. However, the report notes the technology comes with unique security challenges. “Since it is a tool for securing data, any programming bugs or security vulnerabilities in the blockchain technology itself would undermine its usability,” according to the report.
  • Intelligent transportation systems: It seems every day a new company is joining the autonomous vehicle race. The benefits of autonomous vehicles include safer roads and less traffic, but the report states that one malfunction could have unintended consequences such as traffic accidents, property damage, injury and even death.
  • Internet of Things mesh networks: With the emergence of the IoT, mesh networks have been established as a way for “things” to connect and pass data. The report notes that mesh networks carry the same risks as traditional wireless networking devices and access points, such as spoofing, man-in-the-middle attacks and reconnaissance. In addition, mesh networks pose further risks due to device designs and implementations. “A single compromised device may become a staging point for attacks on every other node in the mesh as well as on home or business networks that act as Internet gateways,” the report states.
  • Machine learning: Machine learning provides the ability to add automation to big data and derive business insights faster; however, the SEI worries about the security impact of vulnerabilities when sensitive information is involved. In addition, just as machine learning algorithms can easily be trained on a body of data, they can just as easily be tricked. “The ability of an adversary to introduce malicious or specially crafted data for use by a machine learning algorithm may lead to inaccurate conclusions or incorrect behavior,” according to the report.
  • Robotic surgery: Robot-assisted surgery involves a surgeon, a computer console and a robotic arm that typically performs autonomous procedures. While the technique is well established and the impact of security vulnerabilities has been low, the SEI still has its concerns. “Where surgical robots are networked, attacks—even inadvertent ones—on these machines may lead to unavailability, which can have downstream effects on patient scheduling and the availability of hospital staff,” according to the report.
  • Smart buildings: Smart buildings fall under the realm of the Internet of Things, using sensors and data analytics to make buildings “efficient, comfortable, and safe.” Examples of smart-building capabilities include real-time lighting adjustments, HVAC control, and maintenance monitoring. According to the SEI, the risks vary with the type of action. “The highest risks will involve safety- and security-related technologies, such as fire suppression, alarms, cameras, and access control. Security compromises in other systems may lead to business disruption or nothing more than mild discomfort. There are privacy implications both for businesses and individuals,” they wrote.
  • Smart robots: Smart robots are being used alongside or in place of human workers. With machine learning and artificial intelligence capabilities, these robots can learn, adapt and make decisions based on their environments. Their risks include, but are not limited to, hardware, operating system, software and interconnectivity. “It is not difficult to imagine the financial, operational, and safety impact of shutting down or modifying the behavior of manufacturing robots, delivery drones; service-oriented or military humanoid robots; industrial controllers; or, as previously discussed, robotic surgeons,” according to the researchers.
  • Virtual personal assistants: Almost everyone has access to a virtual personal assistant, either on their PC or mobile device. These virtual personal assistants use artificial intelligence and machine learning to understand a user and mimic the skills of a human assistant. Since these assistants are highly reliant on data, the report states there is a privacy concern when it comes to security. “VPAs will potentially access users’ social network accounts, messaging and phone apps, bank accounts, and even homes. In business settings, they may have access to knowledge bases and a great deal of corporate data,” the researchers wrote.


According to the report, the top three domains that are the highest priority for outreach and analysis in 2017 are: intelligent transportation systems, machine learning and smart robots. “These three domains are being actively deployed and have the potential to have widespread impacts on society,” the report states.
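The machine-learning risk the report singles out – an adversary introducing crafted training data – can be demonstrated with a toy example. Here a 1-nearest-neighbor classifier changes its answer after a single mislabeled point is injected near the query; the data and labels are entirely illustrative.

```python
# Toy demonstration of training-data poisoning: one crafted point near the
# query flips the decision of a 1-nearest-neighbor classifier.
def nearest_label(train, x):
    """Return the label of the training point closest to x (1-D 1-NN)."""
    return min(train, key=lambda p: abs(p[0] - x))[1]

clean = [(0.0, "benign"), (1.0, "benign"), (9.0, "malicious")]
query = 2.0
print(nearest_label(clean, query))        # -> benign

poisoned = clean + [(2.1, "malicious")]   # adversary injects one point
print(nearest_label(poisoned, query))     # -> malicious
```

The same principle scales up: models that retrain on user-supplied data inherit whatever an attacker manages to slip into that data.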

Microsoft and Amazon announce deep learning library Gluon

Microsoft has announced a new partnership with Amazon to create an open-source deep learning library called Gluon. The idea behind Gluon is to make artificial intelligence more accessible and valuable.

According to Microsoft, the library simplifies the process of building deep learning models and will enable developers to run multiple deep learning libraries. This announcement follows the two companies’ introduction of the Open Neural Network Exchange (ONNX) format, another step toward an open AI ecosystem.

Gluon supports symbolic and imperative programming, which is something not supported by many other toolkits, Microsoft explained. It also will support hybridization of code, allowing compute graphs to be cached and reused in future iterations. It offers a layers library that reuses pre-built building blocks to define model architecture. Gluon natively supports loops and ragged tensors, allowing for high execution efficiency for RNN and LSTM models, as well as supporting sparse data and operations. It also provides the ability to do advanced scheduling on multiple GPUs.
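The hybrid symbolic/imperative idea can be illustrated with a caricature in plain Python (this is not the Gluon API): the model is defined and debugged imperatively, and “hybridizing” it compiles the op sequence once into a fused function that later calls reuse, which is how a cached compute graph avoids per-call graph construction.

```python
# Conceptual sketch only, not the Gluon API: imperative execution by default,
# with hybridize() compiling the op sequence into one cached, fused function.
class HybridBlock:
    def __init__(self):
        self.ops = [lambda v: v * 2, lambda v: v + 1]   # the "layers"
        self._compiled = None

    def forward(self, x):
        # Imperative mode: walk the op list on every call (easy to debug).
        for op in self.ops:
            x = [op(v) for v in x]
        return x

    def hybridize(self):
        # "Compile" once: fuse the per-element ops into a single function,
        # so the graph is traversed at compile time, not on every call.
        def fused(v):
            for op in self.ops:
                v = op(v)
            return v
        self._compiled = lambda x: [fused(v) for v in x]

    def __call__(self, x):
        return self._compiled(x) if self._compiled else self.forward(x)

net = HybridBlock()
print(net([1, 2, 3]))   # imperative mode -> [3, 5, 7]
net.hybridize()
print(net([1, 2, 3]))   # cached "symbolic" mode, same result -> [3, 5, 7]
```

In the real library the compiled graph can additionally be optimized and cached across iterations, which is where the performance benefit Microsoft describes comes from.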

“This is another step in fostering an open AI ecosystem to accelerate innovation and democratization of AI – making it more accessible and valuable to all,” Microsoft wrote in a blog post. “With Gluon, developers will be able to deliver new and exciting AI innovations faster by using a higher-level programming model and the tools and platforms they are most comfortable with.”

The library will be available for Apache MXNet or Microsoft Cognitive Toolkit. It is already available on GitHub for Apache MXNet, with Microsoft Cognitive Toolkit support on the way.

Amazon releases new compiler for AI frameworks

Amazon is addressing artificial intelligence development challenges with a new end-to-end compiler solution.

The NNVM compiler, developed by AWS and a team of researchers from the University of Washington’s Allen School of Computer Science & Engineering, is designed for deploying deep learning frameworks across a number of platforms and devices.

“You can choose among multiple artificial intelligence (AI) frameworks to develop AI algorithms. You also have a choice of a wide range of hardware to train and deploy AI models. The diversity of frameworks and hardware is crucial to maintaining the health of the AI ecosystem. This diversity, however, also introduces several challenges to AI developers,” Mu Li, a principal scientist for AWS AI, wrote in a post.

According to Amazon, there are three main challenges AI developers come across today: switching between AI frameworks, maintaining multiple backends, and supporting multiple AI frameworks. The NNVM compiler addresses this by compiling front-end workloads directly into hardware back-ends. “Today, AWS is excited to announce, together with the research team from UW, an end-to-end compiler based on the TVM stack that compiles workloads directly from various deep learning frontends into optimized machine codes,” Li wrote. The TVM stack, also developed by the team, is an intermediate representation stack designed to close the gap between deep learning frameworks and hardware backends.

“While deep learning is becoming indispensable for a range of platforms — from mobile phones and datacenter GPUs, to the Internet of Things and specialized accelerators — considerable engineering challenges remain in the deployment of those frameworks,” said Allen School Ph.D. student Tianqi Chen. “Our TVM framework made it possible for developers to quickly and easily deploy deep learning on a range of systems. With NNVM, we offer a solution that works across all frameworks, including MXNet and model exchange formats such as ONNX and CoreML, with significant performance improvements.”

The NNVM compiler is made up of two components from the TVM stack: NNVM for computation graphs and TVM for tensor operators, according to Amazon.

“NNVM provides a specification of the computation graph and operator with graph optimization routines, and operators are implemented and optimized for target hardware by using TVM. We demonstrated that with minimal effort this compiler can match and even outperform state-of-the-art performance on two radically different hardware: ARM CPU and Nvidia GPUs,” Li wrote. “We hope the NNVM compiler can greatly simplify the design of new AI frontend frameworks and backend hardware, and help provide consistent results across various frontends and backends to users.”
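A toy sketch can illustrate the graph-level optimization a compiler like NNVM performs: a computation graph of elementwise operators is fused into a single pass over the data before execution. The real NNVM/TVM stack lowers graphs to optimized machine code for actual hardware backends; this only shows the front-end idea with an invented two-operator graph format.

```python
# Toy operator fusion: the graph x*2 + 1, then *3, executed either one
# operator at a time (one full pass over the data per op) or as one fused
# kernel (a single pass). Both must produce identical results.
GRAPH = [("mul", 2.0), ("add", 1.0), ("mul", 3.0)]

def run_unfused(graph, data):
    for op, c in graph:   # one traversal of the data per operator
        data = [v * c if op == "mul" else v + c for v in data]
    return data

def fuse(graph):
    def kernel(v):        # all operators applied in a single pass
        for op, c in graph:
            v = v * c if op == "mul" else v + c
        return v
    return kernel

kernel = fuse(GRAPH)
assert run_unfused(GRAPH, [1.0, 2.0]) == [kernel(v) for v in [1.0, 2.0]]
print([kernel(v) for v in [1.0, 2.0]])  # -> [9.0, 15.0]
```

On real hardware, fusing operators like this cuts memory traffic dramatically, which is one reason a graph compiler can match or beat hand-tuned framework backends.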

Companies to watch in 2018

The world of software development involves so much more than writing code these days. Developers need to understand artificial intelligence, the cloud, new methodologies, and the expanding infrastructure required for the Internet of Things. Here are some companies our editors are watching to lead the way.

WHAT THEY DO: Application security
WHY WE’RE WATCHING: With data breaches recurring at an alarming rate, this startup is building DevSecOps solutions for companies that understand the importance of security and are looking for a better way.

WHAT THEY DO: Conversational AI
WHY WE’RE WATCHING: The future of user interfaces is conversational (see: Siri, Cortana, Alexa, et al.), and this company is using artificial intelligence to enable intelligent dialogs between humans and IT systems.

WHAT THEY DO: Integration platform-as-a-service
WHY WE’RE WATCHING: Flow is a platform created for connectivity via API that enables organizations to automate workflows. Flow Express is a low-code solution for business users.

WHAT THEY DO: Customer engagement
WHY WE’RE WATCHING: Usermind’s platform ensures that data is compatible, accessible and actionable across teams and systems, without the need to run queries. This provides the context organizations require to build successful applications.

WHAT THEY DO: Artificial intelligence
WHY WE’RE WATCHING: Veritone has created a platform that provides access to its cognitive engines, for such things as face and object recognition, natural language understanding and more, in what the company calls an operating system for AI.

Postdot Technologies
WHAT THEY DO: API management
WHY WE’RE WATCHING: More than 3 million developers are using the company’s Postman API development environment to create, test, document and share APIs.

WHAT THEY DO: Data visualization
WHY WE’RE WATCHING: The company recently released an open-source project, Dash, to help developers build analytical web applications using the Python programming language. Dash is built on Plotly.js, React and Flask to connect UI components to the analytical Python code.

WHAT THEY DO: Data analytics
WHY WE’RE WATCHING: An advanced analytics database provider that uses GPUs for IoT data and analytics for real-time insights into data streams and large data sets.

WHAT THEY DO: Algorithm marketplace
WHY WE’RE WATCHING: The company offers an enterprise solution for algorithms, functions and machine learning models that can run as microservices. It has backing from Google’s AI venture fund Gradient Ventures.

WHAT THEY DO: Localization and mapping
WHY WE’RE WATCHING: This early-stage startup helps developers create robotic, augmented reality and virtual reality solutions that localize, navigate and understand unfamiliar surroundings. It is backed by Toyota AI Ventures.

WHAT THEY DO: AI development
WHY WE’RE WATCHING: For business operations that span both virtual and physical worlds, bonsai’s platform makes machine learning libraries easier for developers and enterprises to manage.

WHAT THEY DO: Network visibility
WHY WE’RE WATCHING: This cybersecurity startup has created a network visibility solution that gives information security professionals insight into what’s happening on their networks. Its founders created the Bro open-source framework and still drive its development.