Facebook open sources new build features for Android developers

Facebook is building on its open-source performance build tool, Buck, to speed up development and minimize the time it takes to test code changes in Android apps.

Buck is designed to speed up builds, add reproducibility to builds, provide correct incremental builds, and help developers understand dependencies. The company first open sourced the solution in 2013.

“We’ve continued to steadily improve Buck’s performance, together with a growing community of other organizations that have adopted Buck and contributed back. But these improvements have largely been incremental in nature and based on long-standing assumptions about the way software development works,” Jonathan Keljo, software engineer at Facebook, wrote in a post. “We took a step back and questioned some of these core assumptions, which led us deep into the nuances of the Java language and the internals of the Java compiler.”

According to Keljo, the team has completely redesigned the way Buck compiles Java code in order to provide new performance improvements for Android engineers.

The solution also introduces rule pipelining, which Keljo says is designed to shorten bottlenecks and increase parallelism, reducing build times by 10 percent.

“Buck is usually able to build multiple rules in parallel. However, bottlenecks do occur. If a commonly used rule takes a while to build, its dependents have to wait. Even small rules can cause bottlenecks on systems with a high enough number of cores,” Keljo wrote.

Rule pipelining now enables dependent rules to compile while the compiler is still finishing up dependencies. This feature is now available in open source, but is not turned on by default.
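The win can be illustrated with a toy timing model (illustrative numbers, not Buck’s real scheduler): a dependent rule only needs its dependency’s class ABI to begin compiling, so it can start well before the dependency’s full output lands.

```python
# Toy model of rule pipelining (illustrative, not Buck's actual scheduler).
# A dependency's compile has two phases: producing the class ABI that
# dependents compile against, and producing the full output. Without
# pipelining, a dependent waits for both phases; with pipelining, it starts
# as soon as the ABI exists.

def dependent_finish(abi_time, full_time, dep_time, pipelined):
    """Completion time of a dependent rule taking dep_time to compile."""
    start = abi_time if pipelined else abi_time + full_time
    return start + dep_time

serial = dependent_finish(abi_time=2, full_time=8, dep_time=5, pipelined=False)
piped = dependent_finish(abi_time=2, full_time=8, dep_time=5, pipelined=True)
# serial is 15, piped is 7: the dependent no longer waits out the full build
```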

The company is also announcing source-only stub generation to flatten the dependency graph and reduce build times by 30 percent.

“Flatter graphs produce faster builds, both because of increased parallelism and because the paths that need to be checked for changes are shorter,” Keljo wrote.

More information is available here.

Stack Overflow: Angular and Swift are dramatically rising in popularity

Stack Overflow is taking a look at the most dramatic rises and falls in developer technologies. According to its data, Apple’s programming language for iOS development, Swift, and Google’s web framework Angular are getting a lot of attention from developers today.

“Life as a developer (or data scientist, in my case) involves being comfortable with changing technologies,” Julia Silge, data scientist at Stack Overflow, wrote in a post. “I don’t use the same programming languages that I did at the beginning of my career and I fully expect to be using different technologies several years from now. Both of these technologies grew incredibly fast to have a big impact because they were natural next steps for existing developer communities.”

The data is based on Stack Overflow questions by tag.

The data also shows that Google’s mobile IDE Android Studio, Apple’s iPad and Google’s machine learning library TensorFlow have seen remarkable growth over the past couple of years.

Technologies that have had a decrease in interest within the developer community include JavaScript framework Backbone.js, game engine Cocos2d, Microsoft’s Silverlight, and Flash framework Flex.

Stack Overflow also looked at technologies with the highest sustained growth since 2010. The report found Angular.js, TypeScript, Xamarin, Meteor, Pandas, Elasticsearch, Unity 3D, machine learning, AWS and dataframe have grown at a high level over the past couple of years.

“Several of these technologies are connected to the growth of data science and machine learning, including Pandas and the dataframe tag,” wrote Silge. “Others occupy unique positions in the software industry, such as the ubiquitous search engine Elasticsearch and the game engine Unity. These technologies are diverse, but they all have grown at strong and steady rates over the past 5 to 7 years.”

Angular 5.0 now available

The Angular development team today announced a major release to the mobile and desktop framework. Angular 5.0 focuses on making Angular “smaller, faster, and easier to use.”

The new release includes a new build optimizer that runs by default when production builds are created with the CLI. The tool is designed to make bundles smaller. “The build optimizer has two main jobs. First, we are able to mark parts of your application as pure; this improves the tree shaking provided by the existing tools, removing additional parts of your application that aren’t needed,” the team wrote in a post. “The second thing the build optimizer does is to remove Angular decorators from your application’s runtime code. Decorators are used by the compiler, and aren’t needed at runtime and can be removed. Each of these jobs decreases the size of your JavaScript bundles, and increases the boot speed of your application for your users.”
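The pure-marking idea can be modeled in miniature (this sketch is illustrative, not Angular’s implementation): code proven pure and unreferenced from the entry point can be dropped, while impure code must be kept because it may have side effects.

```python
# Toy model of the build optimizer's tree-shaking idea (illustrative only,
# not Angular's actual implementation): walk the call graph from the entry
# point, keep what is reachable, and keep impure code unconditionally.

BUNDLE = {
    "bootstrap": {"pure": False, "calls": ["render"]},
    "render":    {"pure": True,  "calls": []},
    "legacy":    {"pure": True,  "calls": []},   # pure and unreferenced
}

def shake(bundle, entry):
    keep, stack = set(), [entry]
    while stack:
        name = stack.pop()
        if name not in keep:
            keep.add(name)
            stack.extend(bundle[name]["calls"])
    # impure code may have side effects, so it survives even if unreferenced
    keep |= {n for n, meta in bundle.items() if not meta["pure"]}
    return keep

kept = shake(BUNDLE, "bootstrap")   # "legacy" is pure and unused, so dropped
```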

Angular 5.0 also enables users to share application state between server side and client side versions of the app with the Angular Universal State Transfer API and DOM support.

Angular Universal enables developers to perform server side rendering on their Angular apps.

The version’s ServerTransferStateModule and the corresponding BrowserTransferStateModule enable users to generate information about their rendering with platform-server and transfer it to the client side without having to regenerate the information. This is useful when the application fetches data over HTTP.
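The underlying pattern can be sketched outside Angular (plain Python, with illustrative names rather than the TransferState API): the server serializes the data it fetched into the rendered page, and the client hydrates from that embedded state instead of repeating the HTTP call.

```python
import json

# Sketch of the state-transfer idea behind Angular Universal (names are
# illustrative, not the TransferState API): the server fetches data once,
# embeds it in the rendered page, and the client hydrates from that blob
# instead of repeating the HTTP call.

def render_on_server(fetch):
    data = fetch()                          # the expensive call happens once
    state = json.dumps({"users": data})
    return f'<html><script id="state">{state}</script></html>', state

def hydrate_on_client(page_state, fetch):
    if page_state:                          # reuse the transferred state
        return json.loads(page_state)["users"]
    return fetch()                          # otherwise fetch again

page, state = render_on_server(lambda: ["ada", "grace"])
users = hydrate_on_client(state, lambda: [])   # no second fetch needed
```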

In addition, the release features compiler improvements to support incremental compilation, speed up rebuilds and ship smaller bundles. Some compiler improvements include TypeScript transforms, ability to preserve whitespace from components and applications, and improved decorator support.

Other features include new router lifecycle events, RxJS 5.5, updateOn Blur / Submit capabilities in Angular Forms, CLI v1.5, zone speed improvements, and a new HttpClient.

Information on how to update to version 5.0 is available here.

Microsoft lays out plans for VS Code in 2018

Microsoft is already planning for Visual Studio Code’s future with a new roadmap and focus areas for 2018. The company plans to tackle VS Code from three perspectives next year: happy coding; Node, JavaScript and TypeScript; and a rich extension ecosystem.

For “happy coding,” the team wants to make the code editor more pleasant to work with by making it more flexible when positioning editors and panes, improving multi-selection, and supporting multi-root workspaces. “You will see a significant focus on the fundamentals in the next few months as well, focused on performance, localized language support, and accessibility so that every developer can be productive with VS Code,” the team wrote on the roadmap. Fundamentals will include improved startup performance, memory consumption, accessibility, language support packs, the Windows update experience, and serviceability.

The team will also continue to update the editor to support the best code editing, navigation, and understanding experiences for TypeScript and JavaScript. In addition, it will make it easier to configure debugging of Node-based applications and support client- and server-side debugging. Other language improvements include refining the Language Server Protocol, improving the debug adapter protocol, enhancing JavaScript discoverability, and working with the TypeScript team.

“Of course, VS Code is not just a Node, JavaScript, and TypeScript tool. Our rich extensibility model and extension ecosystem means that you can install support for just about every language and framework, from C++ to C# to Go, Python, and more,” the team wrote.

Other updates on the horizon include improving the extension recommendation system, improving extension searching, simplifying the ability to track down issues, showing users more information about extension usage, and enhancing the language API.

The full roadmap is available here.

Microsoft Windows 10 Fall Creators Update SDK now available

Developers can start preparing their applications for the next update of Windows 10 with the newly available Windows 10 Fall Creators Update SDK. The SDK features new tools for building mixed reality experiences, modernizing applications for today’s workplace, and building and monetizing games and apps.

“Windows 10 Fall Creators Update provides a developer platform that is designed to inspire the creator in each of us – empowering developers to build applications that change the way people work, play and interact with devices. To truly fulfill this platform promise, I believe that our developer platform needs to be centered around people and their needs.  Technology should adapt and learn how to work with us,” Kevin Gallo, corporate vice president of the Windows developer platform, wrote in a post.

According to the company, the next wave of virtual and augmented reality is mixed reality. With Windows Mixed Reality, developers can create immersive experiences that are reusable across platforms and device form factors. “Windows 10 was designed from the ground up for spatial interactions and the next wave in this journey is Windows Mixed Reality, uniting the digital and real world to create a rich, immersive world. As humans, we interact with space constantly, and Windows Mixed Reality will feel the most natural for users,” Gallo wrote.

To modernize apps for the workplace, the SDK enables developers to create and update existing apps with Visual Studio 2017 version 15.4, integration of .NET Standard 2.0, and an improved Windows 10 deployment system.

In addition, developers can build better game and app experiences with the Expanded Resources feature in the Fall Xbox One Update, the Xbox Live Creators Program, and the Mixer SDKs for major game engines and languages.

Beauty vs Brains — A designer’s approach to software development

While most software developers go straight to their computer when they have a new idea, I turn off my devices and break out the old-fashioned notebook. In high school I liked to sketch and draw, and today I use the same markers and pens to kick off the development process. I prefer this method because when it comes to pleasing the consumer, design always wins.

Much to the chagrin of most development heads I work with, I don’t start with a data model. The first thing I do is craft sketches of the design from a user’s point of view and work backward. After the initial design I dive into functionality, then move to development and discuss what we can realistically make. But in that discussion design always wins.

I started Quore, a hospitality software solution, eight years ago using this design-first approach. Today, we have more than 30,000 users, and the first thing most people remark when they try Quore is its intuitive design. While I’m a firm believer that there must be a balance of beauty and brains when it comes to software design, too often the end user takes a back seat.

Here are four ways to approach new development with a design-first mentality to ensure the end user is top-of-mind:

Go dark
Going dark is a great way to expand your imagination. By turning off electronics, developers are forced to get creative by drawing and discussing ideas. I believe that distractions kill ideas, so when Quore needed to expand to a new office, I made sure there was a dedicated “static-free” room in the plans. The room is a place for all employees to escape technology and face creativity. Clearing the static is one great way to vehemently pursue a solution to a problem.

Throw out the rulebook
Using graphic design rules, not software design rules, developers can ensure design always wins. As a rule, graphic designers start with what the end user sees first. Graphic designers know that it’s all about perception: people first see shapes, then color, then content. By taking this into account, designers can create products that are intuitive and easy to use. Start by first sketching the product, then add color to bring the visual to life.

Great design takes a careful approach to color choices. Color evokes emotion and has the power to affect behavior. When designing Quore, it was important to incorporate features that thoughtfully take color into consideration. One feature notifies employees with warm colors when they are going into overtime, another when rooms are flagged for maintenance.

Know your customer
A deep understanding of your customers’ and users’ industry will always lead to stronger designs, implementations and tests. While recently creating a feature to increase the efficiency of housekeeping departments, we first identified the most crucial tasks of the housekeeper role and built the design from those tasks. This exercise yielded a feature that increased adoption among users, increased the efficiency of the department, increased guest satisfaction by ensuring rooms are ready upon check-in, and saved money.

Bring in the team
Once you’ve mapped out the entire process from a user’s point of view, it’s time to bring in the whole design team. Encouraging other designers to review your concepts allows you to gauge their feasibility from an engineer’s perspective. These reviewers can help identify what may be frivolous and what makes the most sense functionally. While the concept may require some retooling, outside perspectives usually help narrow the design into the best solution.

When Quore entered the market in 2013, there were other products with similar goals, but most were basic spreadsheet programs. The look and functionality of Quore was a hit with our new customers, and many dropped their existing software solutions and switched to Quore. Quore has always taken a user-first approach, and continues to attract new customers with its intuitive design. Focusing on design and user experience above all else will ensure successful, lasting products.

The impact of virtual and augmented reality on corporate developers

It was more than 30 years ago that Microsoft Windows was first released. At the time, it was a radical departure from the text-based interfaces that dominated most screens. It has been over 25 years since Windows 3.0, the first version that made people really pay attention to Windows: suddenly there was a reason to, because multitasking was important and it was something that DOS didn’t do. However, Windows had to fight off the perception that it was for games to find its footing as a useful productivity tool.

Fast forward to today, when virtual and augmented reality solutions are making fun games because of platforms like Oculus Rift and Pokémon Go. Games have thrust these technology solutions into the consciousness of individuals and business leaders who wonder how they can be used for productivity instead of entertainment. It’s up to today’s corporate developers to take the technologies and make them productive.

The Learning Curve
Like the learning curve for Windows decades ago, the learning curve for virtual and augmented reality isn’t shallow – but it is one that corporate developers can overcome. While most corporate developers could historically ignore threading and performance concerns in their corporate applications, that is no longer the case. The need for real-time feedback creates a need to defer processing and focus on the interaction with the user. This means learning – or relearning – how to manage threads in your applications.
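A minimal sketch of that deferral pattern, in Python for brevity: the interaction loop only enqueues work and moves on, while a background thread drains the queue and does the heavy processing.

```python
import threading
import queue

# Minimal sketch of deferring heavy work off the interaction thread: the
# "render loop" stays responsive by only enqueueing jobs, and a background
# worker does the expensive processing.

work = queue.Queue()
results = []

def worker():
    while True:
        job = work.get()
        if job is None:                 # sentinel: shut the worker down
            break
        results.append(job * job)       # stand-in for expensive processing
        work.task_done()

t = threading.Thread(target=worker, daemon=True)
t.start()

for frame in range(5):                  # interaction loop: enqueue and move on
    work.put(frame)

work.join()                             # a real app would poll, not block
work.put(None)
t.join()
```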

It also means looking for optimal processing strategies that most developers haven’t seen since their computer science textbooks. With Moore’s Law delivering massive capacity in both central processing and graphics hardware, it’s been some time since most developers have needed to worry about which strategy was fastest. However, as these platforms emerge, it’s necessary to revisit the quest for optimal processing strategies – including the deferral of work into background threads.

More challenging than development-oriented tasks may be the need to develop models in three-dimensional space. Most developers eventually got decent with image editors to create quick icons that designers could later replace. However, building 3D models is different. It means a different set of tooling and a different way of thinking.

The Applications
Most corporate developers were relegated to working on applications that were far removed from the reality of the day-to-day business. Recording the transactions, scanning the forms, tracking customer interactions… all were important, but disconnected from making the product, servicing the customer, or getting the goods to the end user. VR and AR are changing that. Instead of living in a world that’s disconnected from how the user does their work, VR and AR are integrating how users do their work and how they learn.

In the corporate world, VR applications include training with materials that are too expensive or dangerous to work with in reality – and the remote management of robots and drones that do the work that is too difficult for a human to do. Instead of controlling electrons in a computer, VR software is moving atoms or rewiring human brains. Training is no longer boring videos of someone else doing something, it’s an interactive simulation that used to be too expensive to build. The opportunity to remotely control through VR provides the benefits of human judgement with the safety of not exposing humans to dangerous conditions.

AR can augment humans. Instead of having to memorize reams of content, it can be displayed in-context. Knowledge management systems have traditionally been boring repositories of information that’s difficult to access. AR connects the knowledge repository with its use.

AR also makes accessible to humans sensors that are beyond our five senses. AR can bring thermal imaging, acoustic monitoring, and other sensors into our range of perception through visual or auditory cues. Consider how digital photography transformed the photography industry. Now everyone can get immediate feedback and can make adjustments instead of having to wait for the development process.

The Change
Ultimately, VR and AR mean that developers get the chance to have a greater and more tangible impact on the world around them. They can be a part of augmenting human capacity, reducing the risk to humans, and improving training. All it takes is a refocus on threading and performance, and learning a bit about 3D modeling.

Postman Pro free features available for small projects, developers

Small projects and individual developers now have access to API development tools with Postman free of charge: the company’s latest version of the free Postman app offers limited-quantity access to many of the paid features of Postman Pro.

Postman is a provider of an API development environment, and version 5.0 of its Postman app allows API developers to leverage the full power of Postman, with support at every stage of their workflow, according to the company. The app is free to all users and is available as native apps on Mac, Windows, and Linux, as well as a Chrome app.

Developers will have access to these popular features of Postman Pro in Postman 5.0, but for free and in small-project quantities. For instance, users will be able to access Postman’s private and public documentation feature (1000 views/month); run API monitoring calls (1000 calls/month); create and use mock servers (1000 server calls/month); and access Postman Collections via the Postman API (1000 API calls/month).
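As a rough sketch of what metered API access looks like from a script (the endpoint and header follow Postman’s documented API; the key here is a placeholder): each such call counts against the free tier’s monthly quota.

```python
from urllib import request

# Sketch of calling the Postman API from a script. The collections endpoint
# and X-Api-Key header follow Postman's documented API; the key below is a
# placeholder. On the free tier, each call counts against the monthly quota.

def collections_request(api_key):
    """Build (but do not send) a request for the caller's collections."""
    return request.Request(
        "https://api.getpostman.com/collections",
        headers={"X-Api-Key": api_key},
    )

req = collections_request("PMAK-your-key-here")
# send with request.urlopen(req) once a real key is in place
```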

“Postman firmly believes that developers deserve more,” said Abhinav Asthana, CEO and co-founder of Postman. “The growth and popularity of Postman Pro since its launch last year has demonstrated to us that these features should be available to all API developers, to make their workflow faster, easier and better.”

Customers that choose Postman Pro have support throughout their API workflow, with unlimited access to documentation, mock servers and the Postman API. They also can send a higher number of free API monitoring calls with an option to purchase discounted blocks of 500,000 calls for monitoring work.

The top challenges of DevOps adoption

The main goal of DevOps is quite simple: ship software updates frequently, reliably, and with better quality. This goal is somewhat “motherhood and apple pie,” since almost every organization will agree that it wants to get there. Many will tell you they’ve already embarked on the DevOps journey by following commonly cited frameworks such as CALMS.

However, very few will express complete satisfaction with the results. After speaking to 200+ DevOps professionals at various stages of the adoption lifecycle, we found that organizations generally fall into one of three categories:

We were most interested in groups two and three since they were actually in the middle of their DevOps journey. When asked to better explain the challenges and roadblocks, here is what we found:
• 68% said that the lack of connectivity between the various DevOps tools in their toolchain was the most frustrating aspect
• 52% said that a large portion of their testing was still manual, slowing them down
• 38% pointed out that they had a mix of legacy and modern applications, i.e. a brownfield environment. This created complexity in terms of deployment strategies and endpoints, toolchain, etc.
• 27% were still struggling with siloed teams that could not collaborate as expected
• 23% had limited access to self-service infrastructure
• Other notable pain points included finding the right DevOps skill set, difficulty managing the complexity of multiple services and environments, lack of budget and urgency, and limited support from executive leadership

Let us look at each of these challenges in greater detail.

#1: Lack of connectivity in the DevOps toolchain
There are many DevOps tools available that help automate different tasks like CI, infrastructure provisioning, testing, deployments, config management, release management, etc. While these have helped tremendously as organizations start adopting DevOps processes, they often do not work well together.

As a classic example, a principal DevOps engineer whose team uses Capistrano for deployments told us that he still communicates with Test and Ops teams via JIRA tickets whenever a new version of the application has to be deployed, or whenever a config change has to be applied across their infrastructure.

All the information required to run Capistrano scripts was available in the JIRA ticket, which he manually copied over to his scripts before running them. This process usually took several hours and needed to be carefully managed since the required config was manually transferred twice: once when entered into JIRA, and again when he copied it to Capistrano.

This is one simple example, but this problem exists across the entire toolchain.

Smaller organizations get around this problem by writing custom scripts that glue their toolchain together. This works fine for a couple of applications, but quickly escalates to spaghetti hell since these scripts aren’t usually written in a standard fashion. They are also difficult to maintain and often contain tokens, keys and other sensitive information. Worse still, these scripts are highly customized for each application and cannot be reused to easily scale automation workflows.
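What that homegrown glue typically looks like, in miniature (tool and field names are illustrative): a script lifts deployment parameters out of a ticket and templates them into a deploy command, so values are entered once instead of being copied by hand twice.

```python
# Sketch of homegrown "DevOps glue" (tool and field names are illustrative):
# the ticket is the single source of truth, and its fields are templated
# directly into the deploy command instead of being copied by hand.

def deploy_command(ticket):
    """Build a Capistrano-style deploy invocation from ticket fields."""
    return [
        "cap", ticket["stage"], "deploy",
        f"BRANCH={ticket['branch']}",
        f"VERSION={ticket['version']}",
    ]

cmd = deploy_command({
    "stage": "production",
    "branch": "release/1.4",
    "version": "1.4.2",
})
```

Even this small script shows the trap the section describes: it is bespoke to one application’s fields and one tool’s flags, so it cannot be reused to scale automation across a portfolio.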

For most serious organizations, it is an expensive and complex effort to build this homegrown “DevOps glue,” and unless they have the discipline and resources of the Facebooks and Amazons of the world, it ultimately becomes a roadblock for DevOps progress.

Continuous Delivery is very difficult to achieve when the tools in your DevOps toolchain cannot collaborate and you manage dependencies manually or through custom scripts.

Challenge #2: Lack of test automation
Despite all the focus on TDD, most organizations still struggle with automating their tests. If testing is manual, it is almost impossible to execute the entire test suite for every commit, which becomes a barrier to Continuous Delivery. Teams try to optimize by running a core set of tests for every commit and running the complete test suite only periodically. This means that most bugs are found later in the software delivery workflow, where they are much more expensive to find and fix.
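The triage most teams land on can be sketched simply (test names and tags here are illustrative): a tagged core subset runs on every commit, and the full suite runs on a schedule.

```python
# Sketch of per-commit test selection (test names and tags are illustrative):
# a "smoke" subset runs on every commit, the full suite runs nightly.

TESTS = {
    "test_login":    {"smoke"},
    "test_checkout": {"smoke", "slow"},
    "test_reports":  {"slow"},
}

def select(trigger):
    """Return the sorted list of tests to run for a given trigger."""
    if trigger == "commit":
        return sorted(t for t, tags in TESTS.items() if "smoke" in tags)
    return sorted(TESTS)                # nightly: run everything

per_commit = select("commit")
nightly = select("nightly")
```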

Test automation is an important part of the DevOps adoption process and hence needs to be a top priority.

Challenge #3: Brownfield environments
Typical IT portfolios are heterogeneous, spanning multiple decades of technology, multiple cloud platform vendors, and private and public clouds in labs and data centers, all at the same time. It is very challenging to create a workflow that spans these environments, since most tools work only with specific architectures and technologies. This leads to toolchain sprawl as each team uses the toolchain that best serves its needs.

The rise of Docker has also encouraged many organizations to develop microservices-based applications. This increases the complexity of DevOps automation, since an application may now need hundreds of deployment pipelines for its heterogeneous microservices.

Challenge #4: Cultural problems
Applications evolve across functional silos. Developers craft software, which is stabilized by QA, and then deployed and operated by IT Operations. Even though all these teams are expected to work together and collaborate, they often have conflicting interests.

Developers are driven to move as fast as they can and build new stuff. QA and Release management teams are driven to be as thorough as possible, making sure no software errors can escape past their watchful eyes. Both teams are often gated by SecOps and Infrastructure Ops, who are incentivized to ensure production doesn’t break.

Governance and compliance also plays a role in slowing things down. Cost centers are under pressure to do more with less, which leads to a culture that opposes change, since change introduces risk and destabilizes things, which means more money and resources are required to manage the impact.

This breakdown across functional silos leads to collaboration and coordination issues, slowing down the flow of application changes.

Some organizations try to address this by making developers build, test and operate software. Though this might work in theory, developers get bogged down by production issues, spending a majority of their time operating what they built last month as opposed to innovating on new things. Most organizations try to get all teams involved across all phases of the SDLC, but this approach still relies on manual collaboration.

Automation is the best way to broker peace and help Dev and Ops collaborate. But as we see in other challenges, ad-hoc automation itself can slow you down and introduce risk and errors.

Challenge #5: Limited access to self-service infrastructure and environments
For many organizations, virtual machines and cloud computing transformed the process of obtaining the right infrastructure on demand. What previously took months can now be achieved in a few minutes. IaaS providers like AWS offer hundreds of machine configurations and many options for pre-installed operating systems and other tools. Tools like Ansible, Chef, and Puppet help represent infrastructure as code, which further speeds up provisioning and re-provisioning of machines.

However, this is still a problem in many organizations, especially those running their own data centers or those that haven’t embraced the cloud yet.

We need more from DevOps
A popular DevOps framework describes a CALMS approach, consisting of Culture, Automation, Lean, Measurement and Sharing. The DevOps movement started as a cultural movement, and even today, most implementations focus heavily on culture.

While culture is an important part of any DevOps story, changing organizational culture is the hardest thing of all. Culture forms over a period of time due to ground realities. Ops teams don’t hate change because they are irrational or want to be blockers. Over the years, they’ve taken the heat every time an over-enthusiastic Dev team tried to fast-track changes to production without following every step along the way.

Seating them with the developers might help make the work environment a little friendlier but it doesn’t address the root cause, no matter how many beers they have together.

Perfecto brings test automation to chatbots

Chatbots are one area of innovation driving and changing engagement for digital experiences, and a number of customers and others in the software market are adopting and embedding chatbots within their applications. Like other emerging technologies, chatbots add complexity to applications, which is why companies like Perfecto are trying to knock down that challenge and make it easy to deliver chatbot experiences faster.

Perfecto, a software quality lab, recently added a new set of capabilities that automate testing for voice-enabled chatbots and Facebook Messenger or Siri integrations. The capabilities are designed to give developers building virtual assistants coverage for functional correctness, responsiveness, and voice quality — shrinking the number of testing hours and giving them the resources they need to build voice-activated assistants.

The capabilities are fully integrated into Perfecto’s cloud-based platform, which supports testing across web, mobile and IoT devices.

“Testing software used to be a lot easier,” said Carlo Cadet, Perfecto’s chatbot market expert. “Now you are adding in this whole platform conversation, like what device are you using versus what I’m using, what are the conditions you are using versus what I’m using, are you on Wi-Fi or mobile — as the engagement methods become richer, the complexity challenge rises for teams.”

Automated testing for these services requires functions like converting a text string into audio, injecting that audio into the virtual assistant on the device, and validating and recording the response.

Mobile banking and insurance apps are using chatbots like Geico’s Kate, HSBC’s Andrew, and Bank of America’s Erica. These chatbots offer a core set of functions to customers, such as answering questions like “What is my balance?” Cadet said Perfecto wants to remove the manual testing component for development teams, and the way to do so is with automation.

For example, it is now easy to automate a sequence like “What is my balance?” with Perfecto. Developers can code the string (“What is my balance?”) and flip it to audio using text-to-speech, injecting the expected audio into the device. Audio responses from the in-app chatbot are converted back to strings (speech-to-text) and compared to the expected response. In addition, developers can test the audio quality (mean opinion score) the chatbot delivered.
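That round trip can be sketched end to end with stubs standing in for Perfecto’s text-to-speech, speech-to-text, and the assistant under test (all names here are illustrative):

```python
# End-to-end sketch of an automated chatbot check. The text-to-speech,
# speech-to-text, and chatbot functions are stubs standing in for Perfecto's
# cloud services and the in-app assistant; only the shape of the flow is real.

def text_to_speech(text):
    return f"<audio:{text}>"            # stub: a real lab synthesizes audio

def speech_to_text(audio):
    return audio[7:-1]                  # stub: strip the "<audio:...>" wrapper

def chatbot(audio):                     # stub assistant under test
    if speech_to_text(audio) == "What is my balance?":
        return text_to_speech("Your balance is $42")
    return text_to_speech("Sorry, I did not understand")

def check_utterance(question, expected):
    """Speak the question at the bot, transcribe its reply, compare."""
    reply = chatbot(text_to_speech(question))
    return speech_to_text(reply) == expected

ok = check_utterance("What is my balance?", "Your balance is $42")
```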

As brands enhance their applications with voice-based interactions, testing — specifically automated — becomes a critical component for creating and deploying chatbot services. According to Cadet, automation is the future, and companies need to have automation as the bedrock for continuous testing and innovating quickly.