HTML 5.2 becomes a W3C Recommendation

The World Wide Web Consortium (W3C)’s Web Platform Working Group today announced a new specification to replace the HTML 5.1 Recommendation: HTML 5.2 is ready and is now a W3C Recommendation.

“The HTML 5.2 specification defines the 5th major version, second minor revision of the core language of the World Wide Web: the Hypertext Markup Language (HTML). In this version, new features continue to be introduced to help Web application authors, new elements continue to be introduced based on research into prevailing authoring practices, and special attention continues to be given to defining clear conformance criteria for user agents in an effort to improve interoperability,” the group wrote in a post.

New features include the <dialog> element, integration with the JavaScript module system of ECMA-262, an update to the ARIA reference, and a referrer policy. “HTML is the World Wide Web’s core markup language. Originally, HTML was primarily designed as a language for semantically describing scientific documents. Its general design, however, has enabled it to be adapted, over the subsequent years, to describe a number of other types of documents and even applications,” the group wrote.

Also available today is the first public working draft of HTML 5.3, which will be the 5th major version, third minor release.

Apache Mnemonic is now a Top-Level Project

The Apache Software Foundation has announced that Apache Mnemonic has graduated from the Apache Incubator to become a Top-Level Project, a milestone signifying that the project’s community and products have been well-governed under the ASF’s meritocratic process and principles.

Apache Mnemonic is an open-source object platform for processing and analysis of linked objects, according to the foundation. It is designed to address Big Data performance issues such as serialization, caching, computing bottlenecks, and non-volatile memory storage media.

“The Mnemonic community continues to explore new ways to significantly improve the performance of real-time Big Data processing/analytics,” said Gang “Gary” Wang, vice president of Apache Mnemonic. “We worked hard to develop both our code and community the Apache Way, and are honored to graduate as an Apache Top-Level Project.”

The Java-based project includes a unified platform enabling framework, a durable object model and computing model, an extensible focal point for optimization, and integration with Big Data projects like Apache Hadoop and Apache Spark. The project has been used in industries like eCommerce, financial services and semiconductors.

“Apache Mnemonic fills the void of the ability to directly persist on-heap objects, making it beneficial for use in production to accelerate Big Data processing applications at several large organizations,” said Henry Saputra, ASF member and Apache Mnemonic incubating mentor. “I am pleased how the community has grown and quickly embraced the Apache Way of software development and making progressive releases. It has been a great experience to be part of this project.”

With the project, objects can be accessed directly or through other programming languages such as C and C++. According to the team’s website, it is currently working on a pure Java memory service, durable object vectorization, and durable query service features.

“Apache Mnemonic provides a unified interface for memory management,” said Yanhui Zhao, Apache Mnemonic committer. “It is playing a significant role in reshaping the memory management in current computer architecture along with the developments of large capacity NVMs, making a smooth transition from present mechanical-based storage to flash-based storage with the minimum cost.”

Apache Hadoop reaches 3.0

The Apache Software Foundation has announced version three of the open source software framework for distributed computing. Apache Hadoop 3.0 is the first major release since Hadoop 2 was released in 2013.

“Hadoop 3 is a major milestone for the project, and our biggest release ever,” said Andrew Wang, Apache Hadoop 3 release manager. “It represents the combined efforts of hundreds of contributors over the five years since Hadoop 2. I’m looking forward to how our users will benefit from new features in the release that improve the efficiency, scalability, and reliability of the platform.”

Apache Hadoop has become known for its ability to run and manage data applications on large hardware clusters in the Big Data ecosystem. The latest release features HDFS erasure coding, a preview of YARN Timeline Service version 2, YARN resource types, and improved capabilities and performance enhancements around cloud storage systems. It includes Hadoop Common for supporting other Hadoop modules, the Hadoop Distributed File System, Hadoop YARN and Hadoop MapReduce.
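
Erasure coding is the headline storage feature: it replaces plain 3-way block replication with a Reed-Solomon parity scheme, roughly halving storage overhead while still tolerating multiple lost blocks. A back-of-the-envelope sketch of that trade-off, using the RS-6-3 policy as the example (illustrative arithmetic only, not Hadoop code):

```python
# Rough arithmetic for HDFS storage overhead: 3-way replication vs.
# Reed-Solomon erasure coding with 6 data blocks + 3 parity blocks.
# This illustrates the trade-off; it is not Hadoop API code.

def replication_overhead(replicas: int) -> float:
    """Bytes stored per byte of user data under n-way replication."""
    return float(replicas)

def erasure_coding_overhead(data_blocks: int, parity_blocks: int) -> float:
    """Bytes stored per byte of user data under RS(data, parity) coding."""
    return (data_blocks + parity_blocks) / data_blocks

rep = replication_overhead(3)          # classic HDFS default: 3.0x storage
rs63 = erasure_coding_overhead(6, 3)   # RS-6-3 policy: 1.5x storage
# 3-way replication survives losing 2 copies of a block; RS-6-3 survives
# losing any 3 of its 9 blocks, yet stores half as many bytes overall.
print(f"3-way replication: {rep:.1f}x, RS-6-3 erasure coding: {rs63:.1f}x")
```

The flip side is that reconstructing data from parity costs more CPU and network than reading a replica, which is why erasure coding is generally aimed at colder data.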

“This latest release unlocks several years of development from the Apache community,” said Chris Douglas, vice president of Apache Hadoop. “The platform continues to evolve with hardware trends and to accommodate new workloads beyond batch analytics, particularly real-time queries and long-running services. At the same time, our Open Source contributors have adapted Apache Hadoop to a wide range of deployment environments, including the Cloud.”

Apache Hadoop is widely deployed in enterprises and companies like Adobe, AWS, Apple, Cloudera, eBay, Facebook, Google, Hortonworks, IBM, Intel, LinkedIn, Microsoft, Netflix and Teradata. In addition, it has inspired other Hadoop-related projects such as Apache Cassandra, HBase, Hive, Spark and ZooKeeper.

CNCF releases Kubernetes 1.9

The Cloud Native Computing Foundation has announced that Kubernetes 1.9, its production-grade container scheduling and management solution, will be released next week. This is the fourth release of the year.

As part of this release, the Apps Workloads API is now generally available. The Apps Workloads API groups together objects such as DaemonSet, Deployment, ReplicaSet, and StatefulSet, which form the foundation for long-running stateless and stateful workloads in Kubernetes. Deployment and ReplicaSet are the two most commonly used, and are now stable after a year of use and feedback.
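
With the workloads API at general availability, these objects are served under the stable `apps/v1` API group. A minimal Deployment manifest, sketched here as a Python dict for illustration (the name, labels, and image are made-up placeholder values):

```python
# Minimal Kubernetes Deployment manifest expressed as a Python dict,
# illustrating the now-GA apps/v1 workloads API. The name, labels, and
# container image below are placeholders, not values from the article.
deployment = {
    "apiVersion": "apps/v1",   # GA as of Kubernetes 1.9
    "kind": "Deployment",
    "metadata": {"name": "web"},
    "spec": {
        "replicas": 3,
        "selector": {"matchLabels": {"app": "web"}},
        "template": {
            "metadata": {"labels": {"app": "web"}},
            "spec": {
                "containers": [
                    {"name": "web", "image": "nginx:1.13"},
                ],
            },
        },
    },
}

print(deployment["apiVersion"], deployment["spec"]["replicas"])
```

A Deployment manages ReplicaSets, which in turn keep the requested number of pod replicas running.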

Though Kubernetes was originally developed for Linux systems, support for Windows Server has been moved to beta status so that it can be evaluated for use.

This release also features an alpha implementation of the Container Storage Interface (CSI), a cross-industry initiative meant to lower the barrier to cloud native storage development. CSI will make it easier to install new volume plugins and allow third-party storage providers to develop their plugins without having to touch the Kubernetes core codebase.

Other features in this release include CRD validation, an IPVS-based kube-proxy in beta, alpha support for hardware accelerators from SIG Node, CoreDNS support in alpha, and alpha IPv6 support.

In a recent survey conducted by the CNCF, the organization revealed 61 percent of organizations are evaluating Kubernetes, and 84 percent are using Kubernetes in production.

More information about version 1.9 is available here.

NodeSource releases N|Solid 3.0 with large-scale Node.js deployment features

NodeSource has announced the latest release of N|Solid, which is the company’s flagship product designed to provide visibility into Node.js app performance and system health. N|Solid 3.0 has a new set of features to make it easier for customers to secure, monitor, and manage large-scale Node.js deployments.

“N|Solid 3.0 introduces more comprehensive and customizable thresholds and notifications settings as well as interface updates that make it easier to secure and monitor large-scale applications,” said Joe McCann, founder and CEO of NodeSource. “Expanded functionality and greater flexibility empower teams to deliver peak performance and optimize infrastructure usage, while saving money.”

With this new update, N|Solid will be able to support even larger deployments of Node.js while providing more flexibility to explore Node.js metrics, according to the company. N|Solid 3.0 adds improved cluster view scatter plot rendering, which gives teams a better user experience on large-scale Node.js deployments. It also lets users change both scatter plot axis values in the dashboard and choose from 50 different metrics to find and analyze relationships in application data.

N|Solid 3.0 also enables better collaboration through new filters for metrics and saved search results. The new update allows snapshot and profile assets to be downloaded, sorted, deleted, or marked as favorite.

“N|Solid 3.0 exemplifies our commitment to constantly push the envelope that assures validation that open source, particularly the Node.js ecosystem, truly means open for business,” said McCann. “The trick is to enhance the value of Node.js environments while respecting enterprises’ needs to optimize existing investments while securely leveraging new capabilities that provide big improvements in efficiency and effectiveness at minimal cost and accelerate time-to-value.”

Angular 5.1 released with Angular CLI 1.6 and Angular Material

The Angular team is celebrating the holidays with new releases of their mobile and desktop frameworks. Following last month’s release of Angular 5.0, the team has announced Angular 5.1, Angular CLI 1.6 and the first stable release of Angular Material.

The Angular team says that version 5.1 is a minor release that provides smaller features and bugfixes. It includes improved Universal and App Shell support in the CLI, improved decorator error messages, and TypeScript 2.5 support.

Angular Material provides Material Design components for Angular. Based on Google’s Material Design visual language, it offers 30 UI components. In addition, the release includes the Angular Component Dev Kit (CDK), which provides developers with a set of custom component building blocks.

“After 11 alpha releases, 12 beta releases, and three release candidates, we’re happy to now mark the full 5.0.0 release of Angular Material and the Angular CDK, and the graduation of the CDK from Angular Labs,” Stephen Fluin, developer advocate at Google and Angular team member, wrote in a post.

Starting with this release, Angular Material and Angular CDK will follow the same major release schedule as the rest of the platform.

Angular CLI 1.6 includes support for building apps using the new Service Worker implementation, which was released with Angular 5.0. “Performance has always been an important goal for web developers, and it’s especially important in today’s world of flaky WiFi and mobile connections,” Fluin wrote. The implementation is designed to improve apps’ loading performance and make the loading experience similar to that of a natively installed app, Fluin explained.

Angular CLI 1.6 also includes improved support for Angular Universal, as well as App Shell support for generating and building an app shell.

MariaDB coming to Azure, as Microsoft joins the MariaDB Foundation

On the first day of its Connect developer conference, Microsoft announced that it is joining the MariaDB Foundation, the group that oversees the development of the MariaDB database.

Connect is Microsoft’s other annual developer conference. The big conference, Build, takes place each spring and covers the breadth of Microsoft-related development, from Windows to Azure to Office to HoloLens. Connect has tended to have something of an open source, database, and cloud spin to it. At Connect last year, Microsoft announced that it was joining the Linux Foundation. In years prior, the company has used the event to announce the open sourcing of Visual Studio Code and, before that, .NET.

MariaDB is a fork of the MySQL database that’s developed and maintained by many of the original MySQL contributors. In 2008, Sun Microsystems bought MySQL AB, the company that developed and created MySQL. In 2009, Oracle announced its plans to buy Sun, creating fear in the community about MySQL’s future as a successful, community-developed, open-source project. To ensure that the database would continue development in spite of the purchase, the MariaDB fork was created in 2009. The subsequent development of MySQL arguably justifies those fears; while Oracle still publishes source code, the development itself happens behind closed doors, with minimal outside contributions.

To prevent MariaDB from suffering the same fate, the MariaDB Foundation was created in 2012. This foundation owns the copyright to contributions and its goal is to ensure that the project remains driven by the user community.

The MariaDB Foundation aims to address one of the common problems with open-source projects: attracting and enabling new contributors. Contributing to open-source projects can often be an intimidating and difficult experience. Fixing a bug or adding a feature usually requires some familiarity with the codebase to know which part of the code needs to be changed. Once this has been done and the development work completed, there’s then work to get the work accepted as a contribution; matching the coding standards of the project, passing the review process, justifying design decisions, and more. The MariaDB Foundation does this outreach in an effort to lower the barriers to entry.

Monty Widenius, the co-creator of MySQL and CTO of the MariaDB Foundation, says that this has been very successful; over the last year, MariaDB has received more community contributions than MySQL did over its entire lifetime. Sustaining this kind of outreach requires money; Microsoft, joining the foundation as a platinum member, will help this work continue.

Microsoft’s involvement will also boost the level of Windows-specific expertise involved in MariaDB’s development. MySQL has always supported Windows, but core developers, including Widenius, are primarily Linux developers, with most optimization and design catering to that operating system. Microsoft’s Windows expertise should, over time, result in an improved experience on Windows.

Widenius tells us that Microsoft has already started to make contributions, with Azure being the focus. As well as announcing the membership of the MariaDB Foundation, Microsoft also said that it will soon offer a preview of Azure Database for MariaDB. This will be a fully managed, cloud-hosted version of MariaDB.

ISO C++ committee approves changes to C++20 draft

The International Organization for Standardization (ISO) C++ committee voted on changes to the draft version of C++20 during its fall meeting in Albuquerque, New Mexico, over the weekend.

According to committee chair Herb Sutter, the committee looked at range-based for statements with initializer; bit-casting object representations; and operator<=> for C++20.

The committee allowed initialization in range-based for loops, followed the C++ Core Guidelines scoping recommendations, added the new <compare> header, adopted Sutter’s proposal for operator<=>, approved extensions to the standard library, and did a lot of cleanup to make the language simpler to use.
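
The “spaceship” operator performs a single three-way comparison whose result encodes less-than, equal, or greater-than. The idea, sketched in Python rather than C++ (a conceptual illustration, not the semantics of C++20’s ordering types):

```python
# Conceptual sketch of a three-way ("spaceship") comparison in Python:
# one function returns a negative, zero, or positive result, and all six
# relational operators can be derived from that single comparison.

def spaceship(a, b) -> int:
    """Return <0 if a < b, 0 if a == b, >0 if a > b."""
    return (a > b) - (a < b)

# Deriving the usual comparisons from the one three-way result:
assert spaceship(1, 2) < 0    # 1 <  2
assert spaceship(2, 2) == 0   # 2 == 2
assert spaceship(3, 2) > 0    # 3 >  2
```

In C++20, a class can default its operator<=> and have the compiler synthesize the relational operators from that single member-wise comparison.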

Also discussed was the Modules technical specification (TS) ballot comments. TS refers to a document that is separate from the main standard where the committee can gain experience with new features, according to Sutter.

“A primary goal of the meeting was to address the comments received from national bodies in the Modules TS’s comment ballot that ran this summer,” Sutter wrote in a post. “We managed to address them all in one meeting, as well as deal with most of the specification wording issues discovered in the process of responding to the national comments.”

The committee will continue to work towards approving the Modules TS for publication, as well as cleaning up some of the language used.

“It will be great to get the TS published, and continue getting experience with implementations now in progress, at various stages, in all of Visual C++, Clang, and gcc as we let the ink dry and hammer out some remaining design issues, before starting to consider adopting modules into the C++ draft standard itself,” Sutter wrote.

Facebook open sources new build features for Android developers

Facebook is building on its open-source performance build tool, Buck, to speed up development and minimize the time it takes to test code changes in Android apps.

Buck is designed to speed up builds, add reproducibility to builds, provide correct incremental builds, and help developers understand dependencies. The company first open sourced the solution in 2013.

“We’ve continued to steadily improve Buck’s performance, together with a growing community of other organizations that have adopted Buck and contributed back. But these improvements have largely been incremental in nature and based on long-standing assumptions about the way software development works,” Jonathan Keljo, software engineer at Facebook, wrote in a post. “We took a step back and questioned some of these core assumptions, which led us deep into the nuances of the Java language and the internals of the Java compiler.”

According to Keljo, the team has completely redesigned the way Buck compiles Java code in order to provide new performance improvements for Android engineers.

Facebook is also introducing rule pipelining, which Keljo says is designed to reduce bottlenecks and increase parallelism, cutting build times by 10 percent.

“Buck is usually able to build multiple rules in parallel. However, bottlenecks do occur. If a commonly used rule takes awhile to build, its dependents have to wait. Even small rules can cause bottlenecks on systems with a high enough number of cores,” Keljo wrote.

Rule pipelining now enables dependent rules to compile while the compiler is still finishing up dependencies. This feature is now available in open source, but is not turned on by default.
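
A toy model of why this helps (made-up numbers, not Buck’s implementation): if a dependent rule can start compiling as soon as its dependency’s ABI stub exists, rather than waiting for the dependency’s full compile to finish, the two compilations overlap and the critical path shrinks.

```python
# Toy model of build scheduling with and without rule pipelining.
# The time units below are invented for illustration only.

dep_stub_time = 2    # time until the dependency's ABI/stub is available
dep_total_time = 10  # time for the dependency's full compile
child_time = 6       # time for the dependent rule's compile

# Without pipelining: the dependent waits for the full dependency build.
serial = dep_total_time + child_time

# With pipelining: the dependent starts once the stub is ready, so its
# compile overlaps with the remainder of the dependency's compile.
pipelined = max(dep_total_time, dep_stub_time + child_time)

print(f"serial: {serial} units, pipelined: {pipelined} units")
```

In this toy schedule, overlapping the two compiles cuts the total from 16 time units to 10.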

The company is also announcing source-only stub generation to flatten the dependency graph and reduce build times by 30 percent.

“Flatter graphs produce faster builds, both because of increased parallelism and because the paths that need to be checked for changes are shorter,” Keljo wrote.

More information is available here.

Stack Overflow: Angular and Swift are dramatically rising in popularity

Stack Overflow is taking a look at the most dramatic rises and falls in developer technologies. According to its data, Apple’s programming language for iOS development, Swift, and Google’s web framework Angular are getting a lot of attention from developers today.

“Life as a developer (or data scientist, in my case) involves being comfortable with changing technologies,” Julia Silge, data scientist at Stack Overflow, wrote in a post. “I don’t use the same programming languages that I did at the beginning of my career and I fully expect to be using different technologies several years from now. Both of these technologies grew incredibly fast to have a big impact because they were natural next steps for existing developer communities.”

The data is based on the tags attached to Stack Overflow questions.

The data also shows Google’s mobile IDE Android Studio, Apple’s iPad and Google’s machine learning library TensorFlow with remarkable growth over the past couple of years.

Technologies that have had a decrease in interest within the developer community include JavaScript framework Backbone.js, game engine Cocos2d, Microsoft’s Silverlight, and Flash framework Flex.

Stack Overflow also looked at technologies with the highest sustained growth since 2010. The report found that Angular.js, TypeScript, Xamarin, Meteor, Pandas, Elasticsearch, Unity 3D, machine learning, AWS and dataframe have all grown at a strong, steady rate over that period.

“Several of these technologies are connected to the growth of data science and machine learning, including Pandas and the dataframe tag,” wrote Silge. “Others occupy unique positions in the software industry, such as the ubiquitous search engine Elasticsearch and the game engine Unity. These technologies are diverse, but they all have grown at strong and steady rates over the past 5 to 7 years.”