DevOps remains a competitive advantage

DevOps continued to dominate development teams and businesses throughout the year, with organizations working to reap its benefits. A Logz.io study found that despite DevOps being a well-known practice, 50 percent of respondents were still in the process of implementing DevOps or had only implemented it within the past year.

In the past year, many software companies teamed with or acquired others to broaden their DevOps solutions. CA acquired Veracode in the beginning of the year to help add security to its DevOps portfolio. JFrog acquired DevOps intelligence platform CloudMunch in June. CollabNet and VersionOne announced a merger in August to bring agile and DevOps together. Perforce made a push into DevOps with the acquisition of agile planning tool provider Hansoft in September.

In addition to acquisitions, companies developed and released their own DevOps solutions from scratch throughout the year. Dynatrace started the year off with the release of UFOs, a status alert system designed to help DevOps teams get a better look into their deployment pipelines. Microsoft announced the release of Visual Studio 2017 in March with DevOps as one of its core pillars. VS 2017 included code editing, continuous integration, continuous delivery, and Redgate database integration. Microsoft continued its DevOps commitment throughout the year, ending with the preview release of Azure DevOps projects in November.

GitLab took a new approach to DevOps with the release of Auto DevOps in July, and shared its vision for Complete DevOps in October. Auto DevOps automatically detects an app's programming language, builds the app in that language, and then automatically deploys it. The Complete DevOps vision combines development and operations into one user experience.

CloudBees released a DevOptics solution to provide metrics and insights between teams in August. Electric Cloud released ElectricFlow 8.0 with new DevOps insight analytics. Atlassian unveiled the Atlassian Stack and DevOps Marketplace to break down silos and accelerate DevOps adoption, and brought DevOps workflows to scale with the release of Bitbucket Server 5.4 and Bamboo 6.2 in October.

Companies also worked throughout the year to bring DevOps together with other software development approaches and tools. In January, CA released a report that revealed agile and DevOps worked better together than alone. Later in the year, CA released another study that found if businesses really wanted to boost their software delivery performance, they should combine DevOps with cloud-based tools. Scrum.org and the DevOps Institute teamed up on ScrumOps, a new approach to software delivery that brings Scrum and DevOps together.

One of the biggest new approaches to come out of 2017 was the idea of DevSecOps, which bakes security into the DevOps lifecycle so that security vulnerabilities are found and fixed earlier, and throughout the lifecycle, resulting in faster delivery of higher-quality code.

Veracode started the year off with its release of Greenlight, an embedded DevSecOps solution that enables developers to identify and fix security vulnerabilities, and rescan the code to double-check issues are fixed. DBmaestro released the Policy Control Manager, a DevSecOps feature designed to eliminate risks, and reduce downtime and loss of data. In July, WhiteHat Security took a look at the success of a DevSecOps approach in its Application Security Statistics report. The report found critical vulnerabilities in apps were resolved in a fraction of the time it takes without a DevSecOps approach.

Other reports throughout the year looked at the challenges blocking DevOps: Redgate found databases were one of the most common bottlenecks for DevOps teams. Quali discovered infrastructure and fragmented toolsets were among the top barriers for DevOps adoption. And in a combined report, Atlassian and xMatters found successful DevOps implementations make the most out of culture, monitoring and incident management. The State of DevOps report, conducted by Puppet along with DORA (DevOps Research and Assessment), found that automation, leadership, and loosely coupled architectures and teams are key to achieving DevOps success.

According to Forrester’s software development predictions for 2018, DevOps tools will continue to proliferate and consolidate, and DevOps will drive the use of APIs and microservices.

Google prepares Android developers for changes in 2018

Starting in the second half of 2018, Android apps on the Google Play store will be required to target a “recent” Android API level and meet other new requirements Google announced yesterday.

“Google Play powers billions of app installs and updates annually,” Edward Cunningham, product manager at Android wrote in a post. “We relentlessly focus on security and performance to ensure everyone has a positive experience discovering and installing apps and games they love. Today we’re giving Android developers a heads-up about three changes designed to support these goals, as well as explaining the reasons for each change, and how they will help make Android devices even more secure and performant for the long term.”

Early in the coming year, Play will begin adding “a small amount of security metadata” to each submitted APK for further authentication, a step that will require no effort on the part of the developer. Then come August, Play will require all newly submitted apps to target Android API level 26 (Android 8.0) or higher, and November will bring the same requirement to updates of existing apps. This minimum API level will increase “within one year following each Android dessert release” from then on.

“This is to ensure apps are built on the latest APIs optimized for security and performance,” Cunningham wrote.
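For developers checking where an existing build stands, the target API level is recorded in the APK itself. As a rough, hypothetical sketch (assuming the Android SDK's aapt tool is on the PATH and that its badging output format holds), the script below flags an APK that targets anything below API level 26:

```python
import re
import subprocess
import sys

REQUIRED_TARGET = 26  # Android 8.0, per the August 2018 requirement for new apps


def target_sdk_of(apk_path: str) -> int:
    """Read targetSdkVersion from `aapt dump badging` output (assumes aapt is on PATH)."""
    output = subprocess.run(
        ["aapt", "dump", "badging", apk_path],
        capture_output=True, text=True, check=True,
    ).stdout
    match = re.search(r"targetSdkVersion:'(\d+)'", output)
    if not match:
        raise ValueError("targetSdkVersion not found in aapt output")
    return int(match.group(1))


if __name__ == "__main__":
    apk = sys.argv[1]
    target = target_sdk_of(apk)
    status = "OK" if target >= REQUIRED_TARGET else "below the upcoming Play requirement"
    print(f"{apk}: targetSdkVersion={target} ({status})")
```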

One year later, in August 2019, new apps and updates will be required to be submitted with both 64-bit and 32-bit binaries.

“We deeply appreciate our developer ecosystem, and so hope this long advance notice is helpful in planning your app releases,” Cunningham wrote. “We will continue to provide reminders and share developer resources as key dates approach to help you prepare.”

More information can be found in Cunningham’s blog post.

Apple follows Microsoft’s footsteps, plans to launch unified app platform

Universal Windows Platform (UWP) is Microsoft's way of letting a single app run on various Windows 10 devices, such as the Xbox and Windows 10 Mobile. But the platform has lost much of its charm since Microsoft killed off its mobile platform, which stood to benefit from it the most.

It offers an API at the core of the OS across devices that developers can use to create a single app package that works on all devices regardless of screen size; in other words, a common, unified platform that lets apps run across Windows 10 devices. Microsoft's Universal Windows Platform is now getting a competitor, and that competitor is Apple.

A new report from Bloomberg suggests that Apple is following in Microsoft's footsteps by unifying its app platform, creating a single platform that lets developers build apps for both iOS and macOS devices. The project is internally codenamed ‘Marzipan’ and may be announced at the annual WWDC next year, according to Bloomberg. Developers would then be able to create one app package that works on any hardware, whether an iPhone running iOS with a touch interface or a Mac running macOS with a mouse. The platform may be rolled out as part of the next major updates to iOS and macOS.

CEO Tim Cook did not seem to be a fan of this idea, at least at the time he addressed it: “You can converge a toaster and a refrigerator, but those things are probably not going to be pleasing to the user.” Apple's software chief has likewise called the idea a compromise. Still, it could be a real benefit, especially for apps that never get updated on macOS. What do you think about Apple's UWP clone? Let us know in the comments below.

Microsoft releases Azure Bot Service and Cognitive Services Language Understanding

Microsoft has announced two new development tools designed to advance conversational artificial intelligence experiences. Microsoft Azure Bot Service and Microsoft Cognitive Services Language Understanding (LUIS) are now available.

“Conversational AI, or making human and computer interactions more natural, has been a goal since technology became ubiquitous in our society. Our mission is to bring conversational AI tools and capabilities to every developer and every organization on the planet, and help businesses augment human ingenuity in unique and differentiated ways,” Lili Cheng, corporate vice president of Microsoft’s AI and research group, wrote in a post.

The Azure Bot Service is designed to help developers create conversational interfaces, while LUIS is designed for developing custom natural language interactions.

The Bot Service provides an environment where these conversational bots can interact with customers on multiple channels across any device. Channels include Cortana, Facebook Messenger, and Skype. “Intelligence is enabled in the Azure Bot Service through the cloud AI services forming the bot brain that understands and reasons about the user input. Based on understanding the input, the bot can help the user complete some tasks, answer questions, or even chit chat through action handlers,” the Microsoft Azure Bot Service and Language Understanding team wrote in a post.

Language Understanding is the key part of the “bot brain” that enables bots to “think” and “reason” in order to take appropriate actions. The Language Understanding solution supports a number of languages in addition to English, and comes with prebuilt services for English, French, Spanish and Chinese. In addition, it provides phrase suggestions to help developers customize LUIS domain vocabulary in Chinese, Spanish, Japanese, French, Portuguese, German and Italian.
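Under the hood, querying a LUIS app amounts to an HTTP call that returns the top-scoring intent and any recognized entities for an utterance. The sketch below is illustrative only; the v2.0-style endpoint, region, app ID and subscription key shown are placeholder assumptions rather than details from Microsoft's announcement:

```python
import requests

# Placeholder values; a real app would use its own region, app ID, and key.
REGION = "westus"
APP_ID = "YOUR-LUIS-APP-ID"
SUBSCRIPTION_KEY = "YOUR-SUBSCRIPTION-KEY"


def get_intent(utterance: str) -> dict:
    """Send an utterance to a LUIS app and return the parsed JSON response."""
    url = f"https://{REGION}.api.cognitive.microsoft.com/luis/v2.0/apps/{APP_ID}"
    response = requests.get(
        url,
        params={"q": utterance, "subscription-key": SUBSCRIPTION_KEY},
    )
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    result = get_intent("book me a flight to Seattle tomorrow")
    # The v2.0-style response includes a topScoringIntent and a list of entities.
    print(result.get("topScoringIntent"), result.get("entities"))
```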

In addition, the company announced new capabilities for Azure Bot Service and Language Understanding. These features include: an updated user interface, an expansion of up to 500 intents and 100 entities for more conversational experiences, ability to customize cognitive services, and intelligent APIs that enable systems to see, hear, speak, understand and interpret.

“Think about the possibilities: all developers regardless of expertise in data science able to build conversational AI that can enrich and expand the reach of applications to audiences across a myriad of conversational channels. The app will be able to understand natural language, reason about content and take intelligent actions,” the Azure team wrote. “Bringing intelligent agents to developers and organizations that do not have expertise in data science is disruptive to the way humans interact with computers in their daily life and the way enterprises run their businesses with their customers and employees.”

CNCF releases Kubernetes 1.9

The Cloud Native Computing Foundation has announced the upcoming release of its production-grade container scheduling and management solution: Kubernetes 1.9 will be released next week. This is the fourth release of the year.

As part of this release, the Apps Workloads API is now generally available. The Apps Workloads API groups together objects such as DaemonSet, Deployment, ReplicaSet, and StatefulSet, which make up the foundation for long-running stateless and stateful workloads in Kubernetes. Deployment and ReplicaSet are the two most commonly used, and are now stable after a year of use and feedback.
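In practical terms, general availability means these objects are served from the stable apps/v1 API group rather than a beta group. A minimal sketch using the official Kubernetes Python client (assuming a reachable cluster and a local kubeconfig) that lists Deployments through that group might look like this:

```python
from kubernetes import client, config


def list_deployments(namespace: str = "default") -> None:
    """List Deployments via the stable apps/v1 API group."""
    config.load_kube_config()          # read credentials from the local kubeconfig
    apps_v1 = client.AppsV1Api()       # client for the apps/v1 workloads API
    for deployment in apps_v1.list_namespaced_deployment(namespace).items:
        print(f"{deployment.metadata.name}: {deployment.spec.replicas} replica(s)")


if __name__ == "__main__":
    list_deployments()
```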

Though Kubernetes was originally developed for Linux systems, support on Windows Server has been moved to beta status so that it can be evaluated for usage.

This release also features an alpha implementation of the Container Storage Interface (CSI), a cross-industry initiative meant to lower the barrier for cloud native storage development. CSI will make it easier to install new volume plugins and will allow third-party storage providers to develop solutions without having to add to the Kubernetes core codebase.

Other features in this release include CRD validation, beta networking IPVS kube-proxy, alpha SIG node hardware accelerator, CoreDNS alpha, and alpha IPv6 support.

In a recent survey conducted by the CNCF, 61 percent of organizations said they are evaluating Kubernetes, and 84 percent are using Kubernetes in production.

More information about version 1.9 is available here.

Postman survey shows that API documentation needs improvement

API development platform provider Postman has released the results of its 2017 State of API Survey, which gathered insight from its community of 3.5 million developers on API usage, technologies, tools and concerns.

Some of Postman’s key findings show that around 70 percent of Postman developers spend more than a quarter of their week working with APIs; most development work involved private and internal APIs, though public APIs have their place; microservices were identified by respondents as the most interesting technology for 2017; and that documentation was one area that needs general improvement, with respondents providing concrete suggestions for how this could be done.

One irony of the findings: developers call for improved API documentation while showing an aversion to documenting their own APIs. While there were many suggestions for what sorts of improvements could be made, according to the survey the two most important were standardization and better code examples.

“We conducted this survey so our entire community could better understand the API ecosystem from the developer’s perspective,” Abhinav Asthana, Postman’s co-founder and CEO, said in the announcement. “The Postman community is made up of API power users, and their insight about APIs and how to work with them should inform the direction of the industry.”

Postman hopes that its findings will be of use to workers in leadership and in development.

“The findings provide insights for a range of API developers and decision makers,” the company’s announcement reads. “API developers and technical leads can use this data to identify and analyze current norms within the API community — technologies, time and energy expended, and where future focus will develop.

“IT leaders and managers can use this data to discover the needs of development teams within the organization. C-level executives can use this data to inform plans to acquire necessary talent and tools to support upcoming deliverables.”

IBM releases WebSphere Liberty code to open source

IBM on the 20th moved the code that underlies WebSphere Liberty, its solution for development using Agile and DevOps methodologies, to GitHub, where it will be available this week under the Eclipse Public License v1.

The Open Liberty project is working to create a new runtime for Java microservices that can be moved between different cloud environments, according to Ian Robinson, an IBM distinguished engineer and the chief architect of WebSphere. Open Liberty will be the basis of IBM's continued development of its Liberty product – the codebase is the same – and will be fully supported in commercial WebSphere licenses. It can be downloaded at openliberty.io.

The Open Liberty code on GitHub will give developers the components they need to create Java applications and microservices, using the Java EE foundation from WebSphere Liberty and the work from the Eclipse MicroProfile community. MicroProfile defines common APIs and infrastructure so that microservices applications can be created and deployed without vendor lock-in, Robinson wrote in his blog.

Along with being a founding member of the Eclipse MicroProfile project, IBM has collaborated with Google and Lyft on the Istio project to create an open service fabric for microservices integration and management, and would like to see MicroProfile integrate with Istio, Robinson said.

Further, IBM’s commitment to open source includes the contribution of IBM’s Java 9 VM to Eclipse as Eclipse OpenJ9, which – when combined with Open Liberty, Eclipse MicroProfile and Java EE at Eclipse – creates a fully open licensing model of a full Java stack for building, testing, running and scaling Java applications.

“We hope Open Liberty will help more developers turn their ideas into full-fledged, enterprise ready apps,” Robinson wrote in his blog. “We also hope it will broaden the WebSphere family to include more ideas and innovations to benefit the broader Java community of developers at organizations big and small.”

The iPhone X’s notch is basically a Kinect

Sometimes it’s hard to tell exactly how fast technology is moving. “We put a man on the moon using the computing power of a handheld calculator,” as Richard Hendricks reminds us in Silicon Valley. In 2017, I use my pocket supercomputer of a phone to tweet with brands.

But Apple's iPhone X provides a nice little illustration of how sensor and processing technology has evolved in the past decade. In June 2009, Microsoft unveiled the Kinect, a sensor bar that sat below your TV. In September 2017, Apple put all that tech into the iPhone X's notch. Well, minus the tilt motor.

Microsoft’s original Kinect hardware was powered by a little-known Israeli company called PrimeSense. PrimeSense pioneered the technique of projecting a grid of infrared dots onto a scene, then detecting them with an IR camera and ascertaining depth information through a special processing chip.

The output of the Kinect was a 320 x 240 depth map with 2,048 levels of sensitivity (distinct depths), based on the 30,000-ish laser dots the IR projector blasted onto the scene in a proprietary speckle pattern.

In its day, the Kinect was the fastest-selling consumer electronics device of all time, even as it was widely regarded as a flop for gaming. But the revolutionary depth-sensing tech ended up being a huge boost for robotics and machine vision.

In 2013, Apple bought PrimeSense. Depth cameras continued to evolve: Kinect 2.0 for the Xbox One replaced PrimeSense technology with Microsoft’s own tech and had much higher accuracy and resolution. It could recognize faces and even detect a player’s heart rate. Meanwhile, Intel also built its own depth sensor, Intel RealSense, and in 2015 worked with Microsoft to power Windows Hello. In 2016, Lenovo launched the Phab 2 Pro, the first phone to carry Google’s Tango technology for augmented reality and machine vision, which is also based on infrared depth detection.

And now, in late 2017, Apple is going to sell a phone with a front-facing depth camera. Unlike the original Kinect, which was built to track motion in a whole living room, the sensor is primarily designed for scanning faces and powers Apple’s Face ID feature. Apple’s “TrueDepth” camera blasts “more than 30,000 invisible dots” and can create incredibly detailed scans of a human face. In fact, while Apple’s Animoji feature is impressive, the developer API behind it is even wilder: Apple generates, in real time, a full animated 3D mesh of your face, while also approximating your face’s lighting conditions to improve the realism of AR applications.

PrimeSense was never solely responsible for the technology in Microsoft’s Kinect — as evidenced by the huge improvements Microsoft made to Kinect 2.0 on its own — and it’s also obvious that Apple is doing plenty of new software and processing work on top of this hardware. But the basic idea of the Kinect is unchanged. And now it’s in a tiny notch on the front of a $999 iPhone.

LinkedIn open sources Kafka Cruise Control

Although Apache Kafka is widely adopted, there are still operational challenges that teams run into when they try to run Kafka at scale. To help restore balance to Kafka clusters, LinkedIn developed and open sourced Cruise Control, a general-purpose system that continuously monitors clusters and automatically adjusts the resources needed to meet pre-defined performance goals.

According to LinkedIn staff software engineer Jiangjie Qin in a LinkedIn engineering post, Cruise Control started off as an intern project by Efe Gencer, who is currently a research assistant at Cornell University. Several members of the Kafka development team helped to brainstorm and design Cruise Control, and the project received several other contributions from the Kafka SRE team at LinkedIn.

Cruise Control for Kafka is currently deployed at LinkedIn, where it monitors user-specified goals, makes sure there are no violations of these goals, analyzes the existing workload on the cluster, and then automatically executes administrative operations to satisfy those goals, according to Qin.

Cruise Control was also designed with a few requirements in mind: it needed to be reliable, resource-efficient, and extensible, and to serve as a general framework “that could only understand the application and migrate only a partial state and be used in any stateful distributed system,” writes Qin.

Cruise Control follows a monitor-analysis-action working cycle, providing a REST API for users to interact with. This REST API supports “querying the workload and optimization proposals of the Kafka cluster, as well as triggering admin operations,” according to Qin.
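As a rough sketch of that cycle from a client's point of view, the snippet below queries a Cruise Control instance over HTTP; the host, port and endpoint paths used here are assumptions for illustration, not details taken from LinkedIn's post:

```python
import requests

# Assumed base URL for a locally running Cruise Control instance.
BASE_URL = "http://localhost:9090/kafkacruisecontrol"


def get(endpoint: str, **params) -> str:
    """Issue a GET against an assumed Cruise Control REST endpoint and return the body."""
    response = requests.get(f"{BASE_URL}/{endpoint}", params=params)
    response.raise_for_status()
    return response.text


if __name__ == "__main__":
    # Query the monitor/analyzer state and the current optimization proposals.
    print(get("state"))
    print(get("proposals", verbose="true"))
```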

Cruise Control also includes a Load Monitor, which collects standard Kafka metrics from the cluster and derives per-partition resource metrics that are not otherwise available. For instance, it estimates CPU utilization on a per-partition basis, writes Qin.

The Analyzer is the actual “brain” of the open source project, using a heuristic method to generate optimization proposals based on the goals and the cluster workload model from the Load Monitor.

According to Qin:

“Cruise Control also allows for specifying hard goals and soft goals. A hard goal is one that must be satisfied (e.g., replica placement must be rack-aware). Soft goals, on the other hand, may be left unmet if doing so makes it possible to satisfy all the hard goals. The optimization would fail if the optimized results violate a hard goal. Usually, the hard goals will have a higher priority than the soft goals.”
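Purely as a conceptual illustration of that distinction (and not Cruise Control's actual code), a goal-based analyzer can be sketched as a filter that rejects any proposal violating a hard goal and then ranks the survivors by how many soft goals, in priority order, they satisfy:

```python
from dataclasses import dataclass
from typing import Callable, List

# A "proposal" here is just an opaque description of a candidate cluster layout.
Proposal = dict


@dataclass
class Goal:
    name: str
    check: Callable[[Proposal], bool]
    hard: bool  # hard goals must hold; soft goals are best-effort


def pick_proposal(proposals: List[Proposal], goals: List[Goal]) -> Proposal:
    """Drop proposals that violate any hard goal, then prefer those meeting more soft goals."""
    hard = [g for g in goals if g.hard]
    soft = [g for g in goals if not g.hard]  # assumed to be listed in priority order

    feasible = [p for p in proposals if all(g.check(p) for g in hard)]
    if not feasible:
        raise RuntimeError("optimization failed: no proposal satisfies all hard goals")

    # Score by soft goals met, weighting higher-priority goals so they dominate lower ones.
    def score(proposal: Proposal) -> int:
        return sum(2 ** (len(soft) - i) for i, g in enumerate(soft) if g.check(proposal))

    return max(feasible, key=score)
```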

Now that Cruise Control is open sourced, Kafka users can check out its architecture and what challenges it aims to solve. LinkedIn recommends users check this reference for a guide.

Postman adds multi-region API monitoring

An update to the Postman API development platform released today brings multi-region support for monitoring API performance and for measuring network latency between regions.

To many, API performance has been a black-box situation. Developers rely on APIs to provide services and data to their applications, yet often don’t know the state of those APIs when they have been created by another organization. Often, a change to the API, or its location, can impact the application’s performance.

With the Postman update, “you make an HTTP request to the API with the test on Postman,” said Abhinav Asthana, co-founder and CEO of Postdot Technologies, the company that produces Postman. “You can do it as often as every five minutes to make sure it doesn’t go down, which could result in massive losses” of business, he added.

“We can simulate what API response times will be like if, for example, you’re in the United States and the API is on a server in Japan,” Asthana said. The new multi-region support lets organizations “get as close to where their users are as possible,” he said, to reduce latency in API calls and improve overall application performance.
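Conceptually, such a monitor boils down to a scheduled request that asserts on status and latency. Postman's own monitors are configured in the app and scripted in JavaScript; the Python sketch below, with a placeholder URL, interval and latency budget, only illustrates the idea:

```python
import time

import requests

API_URL = "https://api.example.com/health"   # placeholder endpoint
CHECK_INTERVAL_SECONDS = 5 * 60               # "as often as every five minutes"
LATENCY_BUDGET_SECONDS = 1.0                  # placeholder latency threshold


def check_once() -> None:
    """Hit the API once and report status code and round-trip latency."""
    started = time.monotonic()
    response = requests.get(API_URL, timeout=10)
    latency = time.monotonic() - started

    ok = response.status_code == 200 and latency <= LATENCY_BUDGET_SECONDS
    print(f"status={response.status_code} latency={latency:.3f}s {'OK' if ok else 'ALERT'}")


if __name__ == "__main__":
    while True:
        check_once()
        time.sleep(CHECK_INTERVAL_SECONDS)
```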

With this release, Postman supports six regions – US East, US West, Canada, EU, Asia Pacific and South America – mirroring the AWS API Gateway regions, he said, adding that Postman expects to add more regions over time.