Node.js reached new milestones in 2017

The Node.js company NodeSource is releasing a Node by Numbers 2017 analysis to look at the growth and adoption of the JavaScript project. Node.js is a JavaScript runtime that features an event-driven, non-blocking I/O model.

“By all measures, 2016 was a fantastic year for Node.js – and 2017 was even better. Metrics across the board show growth and expansion of the platform,” according to the report.

2017 saw three supported long-term support releases: Node.js 4.x (“Argon”), Node.js 6.x (“Boron”), and Node.js 8.x (“Carbon”), making it the first time in history the project has had three supported LTS release lines. “From this point on, unless something changes in the way Node.js LTS releases are managed, we will always have three actively supported LTS release lines when Node by Numbers rolls around,” the NodeSource team wrote in a post. “This means that 2017 is the first time we’ll be seeing the dynamics of adoption and movement from one Node.js LTS release to another—giving the project, maintainers, and end-users better insight into which versions are most supported and most relied upon.”

The most popular download was, and continues to be, Node.js 6, despite Node.js 8’s upward trend. Usage of the oldest LTS line, Node.js 4, saw a downward trend last year.

Additionally, the project reached a new milestone in its number of downloads. For the first time, the project saw one million downloads in a single day last year. Another key finding was a 63 percent increase in the total number of new contributors.

Some anomalies the company saw in Node last year included: a jump in Russian downloads, an uptick in Node.js 9 downloads, and a spike in Node.js 7 downloads from April to May.

“Year over year, Node.js continues to succeed and grow. Credit for this growth is deserved across the board: to the countless hours that individuals contribute to the project to help build it, to the hundreds of thousands of modules that JavaScript developers have published, and to the developers who use it on a daily basis for everything from Enterprise-grade IoT to rapidly building out basic MVPs,” the team wrote.

Visual Studio Live Share gives you pair programming without the shared keyboards

Decades after introducing IntelliSense, the code completion and information features that transform Visual Studio into something more than just a text editor, Microsoft is introducing something that it claims is just as exciting: Live Share.

Collaboration is critical for many developers. Having another pair of eyes look over a problematic bug can offer insight that’s proving elusive; tapping the knowledge of a seasoned veteran is an important source of training and education. Some developers advocate pair programming, a system of development where two people literally share a keyboard and take turns to drive, but most feel this is intrusive and inconvenient. Ad hoc huddles around a single screen are common but usually mean that one developer has to contend with the preferences of another, hindering their productivity. Screen sharing avoids the awkward seating but also means that the sharer either has a loss of control if they give the other person keyboard and mouse access, or, if they don’t, it prevents the other person from taking the initiative.

Live Share is Microsoft’s solution. It provides a shared editing experience within Visual Studio and Visual Studio Code (currently only for JavaScript, TypeScript, and C#) that’s similar to the shared editing found in word processors; each person can see the other’s cursor and text selections; each person can make edits—but it goes further, by enabling shared debugging, too. A project can be launched under the debugger, and both people can see the call stack, examine in-scope variables, or even change values in the immediate window. Both sides can single step the debugger to advance through a program.

It provides rich collaboration—while still allowing both developers to use the environment that they’re comfortable and familiar with. If you prefer to use Visual Studio, with your windows laid out just so, and still use the same key bindings as you learned for Visual C++ 6 back in the ’90s, you can do so, and it doesn’t matter that your peer is using Visual Studio Code on a Mac, with (ugh) vim key bindings. With Live Share, you just send a sharing request to your colleague and they can connect to your project, editor, and debugger from the comfort of their own environment.

The feature will be released as a preview for Visual Studio Code and Visual Studio at some unspecified point in the future, using a combination of updates to the core programs and extensions to round out the functionality. Microsoft stresses that the preview is still at an early stage. Technically, it allows multi-way collaboration (not just pairs), though this may not be enabled initially. At some point it will allow direct connections between systems on the same network, but, initially, it may require sharing activity to bounce through a Microsoft server.

Even at this early stage, however, it looks tremendously useful and like a huge step forward in collaboration and productivity.

Building a better DevOps platform

More immediately, today marks the general availability of Visual Studio App Center (formerly Mobile Center), Microsoft’s one-stop shop for mobile application deployment and testing. Point App Center at your source repository (hosted on Microsoft’s Visual Studio Team Services (VSTS) or GitHub), and it will fetch the code, set up build scripts, and run unit and integration tests.

That’s standard continuous integration stuff, but App Center goes further: it can run your application tests on real hardware, both iOS and Android, to span dozens of different screen size and operating system combinations. You can even see screenshots of the app running on the various different makes and models of handset.

Once your application is passing its tests, App Center has a beta deployment system so that you can roll it out to beta testers. Need to make a quick fix to address a bug? If your app is written in JavaScript, you can use Code Push to send updated scripts to your users without needing to do a full build and reinstall. This works even for stable builds that have been submitted to their respective app stores; you can patch live applications, and we’re told that Apple and Google will allow this as long as the patches aren’t too radical.

(Image caption: App Center lets you test across a whole bunch of devices at the same time; in the screenshot, the first three phones have crashed out to the desktop because of a bug in the app being tested.)

Even after a successful beta test, you’ll probably want to collect crash and analytics data from your users to discover problems and better understand how they’re using your application. App Center has tooling for that, too.

Microsoft’s goal with App Center is to make it easy for developers to adopt best practices around building, testing, reporting, and so on; App Center is a one-stop shop that handles all of these for you. Under the covers it uses VSTS. This means that if your needs grow beyond what App Center can do—for example, if you have server-side code that needs to have its builds, testing, and deployment synchronized with the client-side code—you can use the same workflows and capabilities in the full VSTS environment, while still retaining access to everything App Center can do.

Of course, you still have to develop applications in the first place. Microsoft is continuing to try to make Visual Studio the best place for app development regardless of platform. Live Player, shown earlier this year at Build, greatly streamlines the develop-build-debug loop for app development by pushing your application code to a device (iOS or Android) instantly, letting it run without needing to deploy an updated app package each time.

This is particularly compelling for honing user interfaces. Interfaces written in XAML, Microsoft’s .NET interface markup language, can be shown in Live Player, and they update live; as soon as you save the XAML changes, the UI shown on the device updates accordingly. You don’t even need to navigate to a particular screen within the application to test it; you can have Live Player simply show arbitrary XAML files. This makes developing and testing interfaces substantially less painful.

Increasing the reach of machine learning

Microsoft also announced Visual Studio Tools for AI, a range of features to make developing machine learning applications within Visual Studio easier. With this tooling, Visual Studio will be able to create projects that are already set up to use frameworks such as TensorFlow or Microsoft’s own CNTK.

Machine learning systems build models that are generated by large-scale training, with the training done on clusters and often accelerated with GPUs or dedicated accelerator chips. The models produced can then be run on client machines. A model that’s used for, say, detecting faces in video streams will still need a powerful client, but much less so than the hardware needed for the initial training.

This model training is thus a good fit for cloud computing. The Tools for AI integrate with Azure’s Batch AI Service, a managed environment providing a GPU-accelerated training cluster. Training jobs can be submitted from within Visual Studio, and progress can be tracked there, too, giving insight into things like the level of GPU utilization.

Once a model has been built, there are now new ways of deploying it to devices. Microsoft has been talking up this notion of the “intelligent edge” as a counterpart to the “intelligent cloud”; this means pushing the machine-learning models into edge devices to make use of the local processing power where it makes sense to do so. A new framework, the AI Toolkit for Azure IoT Edge, is intended to streamline that process.

The company also announced a preview of Azure SQL Database Machine Learning Services, which allows machine learning models to be deployed into a SQL database and accessed directly. An example use case of this is a support ticketing system. A machine learning model could be generated to infer a priority for each ticket so that issues that seem to be urgent are prioritized automatically. With the new Azure services, this model can be run directly within the SQL database.

As much as Microsoft and other companies have been talking up machine learning, it is for many developers something of an unknown. While high-level systems such as Cognitive Services don’t require much knowledge of the details of machine learning—they use prebuilt, off-the-shelf models, making them quick and easy to start using—developers who want to create their own models will need to learn and understand new frameworks and techniques.

Microsoft’s attempt to fill that knowledge gap is its AI school. As it builds up its range of systems and capabilities, it hopes that more accessible machine learning will turn up in more places.

Source: arstechnica.com

KotlinConf kicks off with Kotlin 1.2 RC

The Kotlin programming language is getting a number of new updates and improvements as part of the inaugural KotlinConf taking place in San Francisco this week. Kotlin is a statically typed programming language for modern multiplatform applications developed by JetBrains.

The big announcement from the conference’s keynote was the release of Kotlin 1.2 Release Candidate. The new release will include experimental support for multiplatform projects, language improvements, support for array literals in annotations, and compiler enhancements.

In addition, the Kotlin team announced support for iOS development with Kotlin/Native. Kotlin/Native is designed to compile Kotlin directly to machine code. iOS support is being released as part of Kotlin/Native 0.4. “This support is still in its early days, but it’s there, and it’s a major step on our path of enabling Kotlin development on all platforms,” Dmitry Jemerov, software developer for JetBrains, wrote in a post.

Earlier this year, the programming language made headlines when Google announced it would support Kotlin in Android. Since then, the Android team has seen more than 17% of Android Studio projects using Kotlin.

“We are really excited about the strong momentum, and we are thrilled that Android developers all over the world are discovering the joy of Kotlin programming,” James Lau, product manager for Android, wrote in a post. “Kotlin for Android is production-ready. From startups to Fortune 500 companies, developers are already using Kotlin to build their apps. Developers from Pinterest, to Expedia, to Basecamp — and many others — are finding their use of Kotlin is increasing productivity and their overall developer happiness levels.”

Google recently released Android Studio 3.0 with Kotlin support built-in.

Other announcements from the keynote included: Kotlin/Native IDE support, with an initial preview version of the Kotlin/Native plugin for CLion; Ktor 0.9, an asynchronous coroutine-based web framework built with Kotlin; and official Kotlin wrappers for React.js. According to the team, the new wrappers are perhaps the biggest news for web front-end development with Kotlin. The new feature enables developers to create modern web apps using React and Kotlin, without having to worry about project setup and build configuration.

“As for the future evolution of the language, our main goal at this time is to enable better and broader code reuse between the platforms supported by Kotlin. We plan to extend the set of libraries available on all platforms with the same API to include I/O, networking, serialization, date handling and more,” Jemerov wrote.

The focus of Kotlin 1.3 will include internal changes such as better performance, improved type inference, and improved responsiveness of IDE plugins.

GitHub Universe outlines plans for the future of software development

About ten years ago, GitHub embarked on a journey to create a platform that brought together the world’s largest developer community. Now that the company believes it has reached its initial goals, it is looking to the future with plans to expand the ecosystem and transform the way developers code through new tools and data.

“Development hasn’t had that much innovation arguably in the past 20 years. Today, we finally get to talk about what we think is the next 20 years, and that is development that is fundamentally different and driven by data,” said Miju Han, engineering manager of data science at GitHub.

The company announced new tools at its GitHub Universe conference in San Francisco that leverage its community data to protect developer code, provide greater security, and enhance the GitHub experience.

“It is clear that security is desperately needed for all of our users, open source and businesses alike. Everyone using GitHub needs security. We heard from our first open source survey this year that open source users view security and stability above all else, but at the same time we see that not everyone has the bandwidth to have a security team,” said Han.

GitHub is leveraging its data to help developers manage the complexity of dependencies in their code with the newly announced dependency graph. The dependency graph enables developers to easily keep track of their packages and applications without leaving their repository. It currently supports Ruby and JavaScript, with plans to add Python support in the near future.

In addition, the company revealed new security alerts that will use human data and machine learning to track when dependencies are associated with public security vulnerabilities and recommend security fixes.

“This is one of the first times where we are going from hosting code to saying this is how it could be better, this is how it could be different,” said Han.

On the GitHub experience side, the company announced the ability to discover new projects with news feed and explore capabilities. “We want people to dig deeper into their interests and learn more, which is one of the core things it means to be a developer,” said Han.

The new news feed capabilities allow users to discover repositories right from their dashboard and gain recommendations on open source projects to explore. The recommendations will be based on the people users follow, their starred repositories, and popular GitHub projects.

“You’re in control of the recommendations you see: Want to contribute to more Python projects? Star projects like Django or pandas, follow their maintainers, and you’ll find similar projects in your feed. The ‘Browse activity’ feed in your dashboard will continue to bring you the latest updates directly from repositories you star and people you follow,” the company wrote in a blog.

The “Explore” experience has been completely redesigned to connect users with curated collections, topics, and resources so they can dig into a specific interest like machine learning or data protection, according to Han.

Han went on to explain that the newly announced features are just the beginning of how the company plans to take code, make it better, and create an ecosystem that helps developers move forward.

“These experiences are a first step in using insights to complement your workflow with opportunities and recommendations, but there’s so much more to come. With a little help from GitHub data, we hope to help you find work you’re interested in, write better code, fix bugs faster, and make your GitHub experience totally unique to you,” the company wrote.

Mastering async/await in Node.js

In this article, you will learn how you can simplify your callback- or Promise-based Node.js application with async functions (async/await).

Asynchronous language constructs have been around in other languages for a while, like async/await in C#, coroutines in Kotlin, and goroutines in Go. With the release of Node.js 8, the long-awaited async functions have landed in Node.js as well.

By the end of this tutorial, you should be able to answer the following question:

What are async functions in Node?

Async function declarations return an AsyncFunction object. These are similar to Generators in the sense that their execution can be halted. The only difference is that they always return a Promise instead of a { value: any, done: Boolean } object. In fact, they are so similar that you could gain similar functionality using the co package.
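
For example, a minimal sketch showing that calling an async function always yields a Promise:

async function add (a, b) {
  return a + b
}

// the body returns a number, but the call returns a Promise resolving to it
add(1, 2).then((sum) => console.log(sum)) // logs 3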

In an async function, you can await any Promise or catch its rejection cause.

So if you had some logic implemented with promises:

function handler (req, res) {
  return request('https://user-handler-service')
    .catch((err) => {
      logger.error('Http error', err)
      err.logged = true
      throw err
    })
    .then((response) => Mongo.findOne({ user: response.body.user }))
    .catch((err) => {
      !err.logged && logger.error('Mongo error', err)
      err.logged = true
      throw err
    })
    .then((document) => executeLogic(req, res, document))
    .catch((err) => {
      !err.logged && console.error(err)
      res.status(500).send()
    })
}

You can make it look like synchronous code using async/await:

async function handler (req, res) {  
  let response
  try {
    response = await request('https://user-handler-service')  
  } catch (err) {
    logger.error('Http error', err)
    return res.status(500).send()
  }

  let document
  try {
    document = await Mongo.findOne({ user: response.body.user })
  } catch (err) {
    logger.error('Mongo error', err)
    return res.status(500).send()
  }

  executeLogic(req, res, document)
}

In older versions of V8, unhandled promise rejections were silently dropped. Now at least you get a warning from Node, so you don’t necessarily need to bother with creating a listener. However, it is recommended to crash your app in this case, because when you don’t handle an error, your app is in an unknown state:

process.on('unhandledRejection', (err) => {  
  console.error(err)
  process.exit(1)
})

Patterns with async functions

There are quite a few use cases where the ability to handle asynchronous operations as if they were synchronous comes in very handy, as solving them with Promises or callbacks requires the use of complex patterns or external libraries.

These are cases when you need to loop through asynchronously gained data or use if-else conditionals.

Retry with exponential backoff

Implementing retry logic was pretty clumsy with Promises:

function requestWithRetry (url, retryCount) {  
  if (retryCount) {
    return new Promise((resolve, reject) => {
      const timeout = Math.pow(2, retryCount)

      setTimeout(() => {
        console.log('Waiting', timeout, 'ms')
        _requestWithRetry(url, retryCount)
          .then(resolve)
          .catch(reject)
      }, timeout)
    })
  } else {
    return _requestWithRetry(url, 0)
  }
}

function _requestWithRetry (url, retryCount) {
  return request(url)
    .catch((err) => {
      // only retry on server errors
      if (err.statusCode && err.statusCode >= 500) {
        console.log('Retrying', err.message, retryCount)
        return requestWithRetry(url, ++retryCount)
      }
      throw err
    })
}

requestWithRetry('http://localhost:3000')  
  .then((res) => {
    console.log(res)
  })
  .catch(err => {
    console.error(err)
  })

It gave me a headache just to look at it. We can rewrite it with async/await and make it a lot simpler:

function wait (timeout) {  
  return new Promise((resolve) => {
    setTimeout(() => {
      resolve()
    }, timeout)
  })
}

async function requestWithRetry (url) {
  const MAX_RETRIES = 10
  for (let i = 0; i <= MAX_RETRIES; i++) {
    try {
      return await request(url)
    } catch (err) {
      const timeout = Math.pow(2, i)
      console.log('Waiting', timeout, 'ms')
      await wait(timeout)
      console.log('Retrying', err.message, i)
    }
  }
  // if we get here, every attempt failed
  throw new Error('Request failed after ' + MAX_RETRIES + ' retries')
}

A lot more pleasing to the eye, isn’t it?

Intermediate values

Not as hideous as the previous example, but if you have a case where 3 asynchronous functions depend on each other the following way, then you have to choose from several ugly solutions.

functionA returns a Promise, then functionB needs that value, and functionC needs the resolved value of both functionA’s and functionB’s Promises.

Solution 1: The .then Christmas tree
function executeAsyncTask () {  
  return functionA()
    .then((valueA) => {
      return functionB(valueA)
        .then((valueB) => {          
          return functionC(valueA, valueB)
        })
    })
}

With this solution, we get valueA from the surrounding closure in the innermost .then, and valueB as the value the previous Promise resolves to. We cannot flatten out the Christmas tree, as we would lose the closure and valueA would be unavailable for functionC.

Solution 2: Moving to a higher scope
function executeAsyncTask () {  
  let valueA
  return functionA()
    .then((v) => {
      valueA = v
      return functionB(valueA)
    })
    .then((valueB) => {
      return functionC(valueA, valueB)
    })
}

In the Christmas tree, we used a higher scope to make valueA available as well. This case works similarly, but now we have created the variable valueA outside the scope of the .then-s, so we can assign the value of the first resolved Promise to it.

This one definitely works, flattens the .then chain and is semantically correct. However, it also opens up ways for new bugs in case the variable name valueA is used elsewhere in the function. We also need to use two names — valueA and v — for the same value.

Solution 3: The unnecessary array
function executeAsyncTask () {  
  return functionA()
    .then(valueA => {
      return Promise.all([valueA, functionB(valueA)])
    })
    .then(([valueA, valueB]) => {
      return functionC(valueA, valueB)
    })
}

There is no reason for valueA to be passed on in an array together with the Promise functionB returns, other than to be able to flatten the tree. The two might be of completely different types, so there is a high probability of them not belonging in an array at all.

Solution 4: Write a helper function
const converge = (...promises) => (...args) => {  
  let [head, ...tail] = promises
  if (tail.length) {
    return head(...args)
      .then((value) => converge(...tail)(...args.concat([value])))
  } else {
    return head(...args)
  }
}

functionA(2)  
  .then((valueA) => converge(functionB, functionC)(valueA))

You can, of course, write a helper function to hide away the context juggling, but it is quite difficult to read, and may not be straightforward to understand for those who are not well versed in functional magic.

By using async/await our problems are magically gone:
async function executeAsyncTask () {  
  const valueA = await functionA()
  const valueB = await functionB(valueA)
  return functionC(valueA, valueB)
}

Multiple parallel requests with async/await

This is similar to the previous one. In case you want to execute several asynchronous tasks at once and then use their values at different places, you can do it easily with async/await:

async function executeParallelAsyncTasks () {  
  const [ valueA, valueB, valueC ] = await Promise.all([ functionA(), functionB(), functionC() ])
  doSomethingWith(valueA)
  doSomethingElseWith(valueB)
  doAnotherThingWith(valueC)
}

As we’ve seen in the previous example, we would otherwise either need to move these values into a higher scope or create a non-semantic array to pass these values on.

Array iteration methods

You can use map, filter and reduce with async functions, although they behave pretty unintuitively. Try guessing what the following scripts will print to the console:

  1. map
function asyncThing (value) {  
  return new Promise((resolve, reject) => {
    setTimeout(() => resolve(value), 100)
  })
}

async function main () {  
  return [1,2,3,4].map(async (value) => {
    const v = await asyncThing(value)
    return v * 2
  })
}

main()  
  .then(v => console.log(v))
  .catch(err => console.error(err))

  2. filter
function asyncThing (value) {  
  return new Promise((resolve, reject) => {
    setTimeout(() => resolve(value), 100)
  })
}

async function main () {  
  return [1,2,3,4].filter(async (value) => {
    const v = await asyncThing(value)
    return v % 2 === 0
  })
}

main()  
  .then(v => console.log(v))
  .catch(err => console.error(err))

  3. reduce
function asyncThing (value) {  
  return new Promise((resolve, reject) => {
    setTimeout(() => resolve(value), 100)
  })
}

async function main () {  
  return [1,2,3,4].reduce(async (acc, value) => {
    return await acc + await asyncThing(value)
  }, Promise.resolve(0))
}

main()  
  .then(v => console.log(v))
  .catch(err => console.error(err))

Solutions:

  1. [ Promise { <pending> }, Promise { <pending> }, Promise { <pending> }, Promise { <pending> } ]
  2. [ 1, 2, 3, 4 ]
  3. 10

If you log the returned values of the iteratee with map you will see the array we expect: [ 2, 4, 6, 8 ]. The only problem is that each value is wrapped in a Promise by the AsyncFunction.

So if you want to get your values, you’ll need to unwrap them by passing the returned array to Promise.all:

main()  
  .then(v => Promise.all(v))
  .then(v => console.log(v))
  .catch(err => console.error(err))

Originally, you would first wait for all your promises to resolve and then map over the values:

function main () {  
  return Promise.all([1,2,3,4].map((value) => asyncThing(value)))
}

main()  
  .then(values => values.map((value) => value * 2))
  .then(v => console.log(v))
  .catch(err => console.error(err))

This seems a bit simpler, doesn’t it?

The async/await version can still be useful if you have some long running synchronous logic in your iteratee and another long-running async task.

This way you can start calculating as soon as you have the first value – you don’t have to wait for all the Promises to be resolved to run your computations. Even though the results will still be wrapped in Promises, those are resolved a lot faster than if you did it the sequential way.

What about filter? Something is clearly wrong…

Well, you guessed it: even though the returned values are [ false, true, false, true ], they will be wrapped in promises, which are truthy, so you’ll get back all the values from the original array. Unfortunately, all you can do to fix this is to resolve all the values and then filter them.
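
A minimal sketch of that fix, reusing asyncThing from above:

async function filterAsync () {
  const values = [1, 2, 3, 4]
  // resolve every predicate result first...
  const keep = await Promise.all(
    values.map((value) => asyncThing(value).then((v) => v % 2 === 0))
  )
  // ...then filter the original values by the resolved booleans
  return values.filter((value, i) => keep[i])
}

filterAsync()
  .then(v => console.log(v)) // [ 2, 4 ]
  .catch(err => console.error(err))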

Reducing is pretty straightforward. Bear in mind though that you need to wrap the initial value into Promise.resolve, as the returned accumulator will be wrapped as well and has to be await-ed.

Note that async/await is pretty clearly intended to be used for imperative code styles. If you prefer to keep your .then chains more “pure” looking, you can use Ramda’s pipeP and composeP functions.
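
For example, a sketch using pipeP (getUser and getAvatar are hypothetical Promise-returning functions):

const R = require('ramda')

// left-to-right composition of Promise-returning functions:
// the resolved value of getUser is passed to getAvatar
const getAvatarByUserId = R.pipeP(getUser, getAvatar)

getAvatarByUserId('user-42')
  .then((avatar) => console.log(avatar))
  .catch((err) => console.error(err))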

Rewriting callback-based Node.js applications

Async functions return a Promise by default, so you can rewrite any callback-based function to use Promises, then await their resolution. You can use the util.promisify function in Node.js to turn callback-based functions into Promise-returning ones.
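
For example, a minimal sketch promisifying the built-in fs.readFile:

const util = require('util')
const fs = require('fs')

// fs.readFile follows the (err, result) callback convention,
// so util.promisify can wrap it into a Promise-returning function
const readFile = util.promisify(fs.readFile)

async function main () {
  const contents = await readFile('package.json', 'utf8')
  console.log(contents)
}

main().catch((err) => console.error(err))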

Rewriting Promise-based applications

Simple .then chains can be upgraded in a pretty straightforward way, so you can move to using async/await right away.

function asyncTask () {  
  return functionA()
    .then((valueA) => functionB(valueA))
    .then((valueB) => functionC(valueB))
    .then((valueC) => functionD(valueC))
    .catch((err) => logger.error(err))
}

will turn into

async function asyncTask () {  
  try {
    const valueA = await functionA()
    const valueB = await functionB(valueA)
    const valueC = await functionC(valueB)
    return await functionD(valueC)
  } catch (err) {
    logger.error(err)
  }
}

Rewriting Node.js apps with async/await

  • If you liked the good old concepts of if-else conditionals and for/while loops,
  • if you believe that a try-catch block is the way errors are meant to be handled,

you will have a great time rewriting your services using async/await.

As we have seen, it can make several patterns a lot easier to code and read, so it is definitely more suitable in several cases than Promise.then() chains. However, if you are caught up in the functional programming craze of the past years, you might want to pass on this language feature.

So what do you guys think? Is async/await the best thing since sliced bread, or is it just as controversial as the addition of class was in ES2015?

Are you already using async/await in production, or do you plan on never touching it? Let’s discuss it in the comments below.

The essential playbook for software-driven companies

As Marc Andreessen so aptly predicted, software is eating the world. A growing number of companies that developed physical products are adding software capabilities to their offerings. This means a growing need for companies to add software development expertise, software product engineering, embedded software engineering, ecosystem platform engineering, and new software-based application programming interfaces.

The momentum for software to remake the world is more pervasive than ever. It is a core competitive advantage in nearly every industry. Software is a fundamental element in the way companies interact with their markets, partners, consumers, and suppliers. Software and the services it supports will continue to capture an exponentially growing share of value and market share.

Accenture is releasing a report on this topic titled Beyond the Product: Rewriting the Innovation Playbook For Software-Driven Companies. The report highlights five important actions companies should consider taking to become software-driven businesses.

One: Make software an enterprise-level priority
Companies that aspire to become market leaders need to embrace software as an enterprise-wide responsibility across all facets of the company. Experimentation and prototyping should occur across business functions, producing a continuous pipeline of new ideas and product capabilities. The most successful companies engineer their software products to capture constant customer feedback on new features, so issues can be resolved quickly and the feedback can inform continuous and rapid product innovation.

Using powerful analytics capabilities, companies have transformed product definition from an art to a science. And all areas of their businesses, ranging from finance to marketing, need to adopt a software-driven mindset to support quick development cycles associated with software-driven businesses.

Two: Adopt lean and agile ways of working
Nearly all companies, regardless of industry or market, need to develop a certain level of software expertise and mentality to succeed. The companies that do this can open a sizable gap from a field of followers by increasing the rate of product releases through continued investment in automated build, test, and deployment systems. Early innovators appreciate the value of lean, design-led thinking throughout the product lifecycle and are embracing the mantra that agile adoption is no longer only for engineers; it’s assumed across the entire value chain. Rapid, agile processes allow innovators to devote more time and resources to creativity and imagination. The goal is establishing a continuous flow in which established teams consume and deliver against a company-managed backlog of feature requests. This contrasts with the traditional and less efficient model of assembling project teams or discrete engagements.

Three: Harness instrumentation and analytics
To attain market leadership, companies should consider using powerful instrumentation and analytics to observe, enhance and understand how their products powered by software are being used, and to feed insights and strategies for future iterations and agile development. The cloud, connected devices and platform economy have generated more data to analyze, which is creating new opportunities to monetize that data. Companies that capitalize on this opportunity can determine which products and features will generate the most increases in revenues and profits.

Four: Focus on the platform economy 
Leaders in the cloud computing software market recognize their ground-breaking products and services are based on platforms. Their continued success rests on two key elements: the technology platforms they have built to support their businesses; and the business models these platforms enable. These leaders have open platforms for developing new applications and services for the broader ecosystem, which creates an expanded and growing revenue model. Leaders have also developed a set of common services with which their businesses and external developers can create applications and innovative new propositions on their platform to unlock new revenue flows and increase customer dependency.

Five: Tie products to the back office
Today’s demanding markets require products integrated with external ecosystems and internal corporate systems to deliver outcomes and experiences focused on customers. In this software-driven world, the back office is no longer a discrete set of processes that support sales and services. Instead, the back office is an integral part of the engine that powers the agile software-driven experience. Back office functions such as customer relationship management, finance and supply chain facilitate the transactional services that enable the ongoing delivery and fulfillment of software. While there is an increased reliance on software to deliver product features, connected, software-driven products are creating new “Everything-as-a-Service” and Internet of Things market opportunities for those that recognize the importance of tying together products and the back office.

Final thoughts
These five initiatives demonstrate that becoming a business driven by software requires genuine holistic transformation. It’s not simply a matter of becoming a digital enterprise on the outside. Adapting to dynamic markets and all this implies in terms of agility and responsiveness is equally important. The results for companies that have made the required changes demonstrate that the rewards they have generated will fuel their continued leadership and success.

Which Programming Language Should I Learn To Get A Job At Google, Facebook, or Amazon?

The choice of programming language acts as a big factor for a novice in the world of programming. If one stumbles upon a language whose syntax is too complex, one would definitely reconsider learning it. But, what if you’ve crossed that entry barrier and you’re looking to make a career and land a job at heavyweights like Google, Facebook, or Amazon?

You might have come across articles that tell you which programming languages are used at big companies like Google, Facebook, etc. The choice of those companies doesn’t necessarily reflect their needs while hiring a candidate. There is little chance they’d be interested in interviewing someone who is an expert in just a single programming language.

Similar views were also expressed by Justin Mattson, Senior Software Engineer, Fuchsia at Google. He answered a user’s query on Quora (via Inc.).

In his answer, Mattson says that if a company is hung up on the fact that you know language X, but not language Y, you shouldn’t be interested in working there. “Languages are a tool, like a saw. Whether the saw is manual, table or laser is less relevant than understanding the basic principles of wood and how cutting it happens,” he writes.

A person may be an expert in a popular programming language, but that alone doesn’t make him/her a good engineer. Different programming languages teach us different things: C and C++ teach you what’s happening with memory and other low-level operations, while Java, Ruby, etc., test your design choices. So, it’s important that you learn more languages.

“Don’t learn just one, learn at least two, hopefully, three. This will give you a better sense of what features are often common to most languages and what things differ,” Mattson adds.

But, what about expertise in a single programming language?

Is having complete command over one language completely irrelevant? Answering this question, Mattson says that one must become an expert in the language one uses, instead of focusing on what a company wants. “If you say you’re an expert in Python and then can’t use it properly in the interview, that is a problem,” he adds.

In a nutshell, if your fundamentals and design choices are strong, the choice of programming language isn’t that important. At such companies, you’ll need to deal with multiple languages and pick up new ones as needed.

A Guide to Becoming a Full-Stack Developer in 2017

Full-Stack Web Development, according to the Stack Overflow 2016 Developer Survey, is the most popular developer occupation today. It’s no wonder, then, that there are dozens of online and in-person programs that help people become Full-Stack Developers and even assist these new developers in landing high-paying programming jobs.

Some popular online programs can be found on Lynda, Udacity, Coursera, Thinkful, General Assembly, and many more. Aside from these online programs, there are also in-person coding bootcamps teaching people the skills required to become web developers.

In this article I won’t be discussing which websites or coding bootcamps have the best web development programs; instead, I will provide a definitive guide to what I believe are the most important skills required to become a Full-Stack Web Developer today and land a job if you’ve never coded before. I will base the list on three things:

  1. A combination of what most programs in 2017 are teaching students.

  2. My own personal experiences from interviewing at companies for developer positions in the past and also interviewing potential candidates for current Full-Stack Developer positions at my current company.

  3. Stories and feedback from people on Coderbyte who have been accepted to coding bootcamps and then proceeded to get programming jobs.

The Definitive Guide

A Full-Stack Web Developer is someone who is able to work on both the front-end and back-end portions of an application. Front-end generally refers to the portion of an application the user will see or interact with, and the back-end is the part of the application that handles the logic, database interactions, user authentication, server configuration, etc. Being a Full-Stack Developer doesn’t mean that you have necessarily mastered everything required to work with the front-end or back-end, but it means that you are able to work on both sides and understand what is going on when building an application.

If you want to become a Full-Stack Web Developer in 2017 and land your first job, below is a reference guide with a list of things you should learn.

1. HTML/CSS

Almost every single program, whether online or in-person, that is teaching you how to be a web developer will start with HTML and CSS because they are the building blocks of the web. Simply put, HTML allows you to add content to a website and CSS is what allows you to style your content. The following topics related to HTML/CSS come up often in interviews and on the actual job when you’re working:

  • Semantic HTML.
  • Be able to explain the CSS Box Model.
  • Benefits of CSS preprocessors (you don’t necessarily need to understand how to use one on a deep level, but you should understand what they are for and how they help with development).
  • CSS Media Queries to target different devices and write responsive CSS.
  • Bootstrap (a framework for helping design and layout content on a page and while many online programs or schools focus heavily on teaching Bootstrap, in reality it’s more important to have a deep knowledge of fundamental CSS than specific Bootstrap features and methods).

2. JavaScript

The JavaScript language is growing more popular every year, and new libraries, frameworks, and tools are constantly being released. Based on the Stack Overflow 2016 Developer Survey, JavaScript is the most popular language in Full-Stack, Front-end, and Back-end Development alike. It’s the only language that runs natively in the browser, and it can double up as a server-side language as well (as you’ll see below with Node.js). Below are some topics you need to understand as a Full-Stack Developer:

  • Understand how to work with the DOM. Also know what JSON is and how to manipulate it.
  • Important language features such as functional composition, prototypal inheritance, closures, event delegation, scope, and higher-order functions (see the sketch after this list).
  • Asynchronous control flow, promises, and callbacks.
  • Learn how to properly structure your code and modularize parts of it; things like webpack, browserify, or build tools like gulp will definitely be helpful to know.
  • Know how to use at least one popular framework (many programs will focus heavily on teaching you a library or framework like React or AngularJS, but in reality it’s much more important to have a deep understanding of the JavaScript language and not focus so much on framework-specific features. Once you have a good understanding of JavaScript, picking up a framework that sits on top of it won’t be too hard anyway).
  • Although some may argue that you should be using this less or that it’s slowly dying, jQuery code still exists in most applications and a solid understanding of it will be helpful.
  • Some knowledge on testing frameworks and why they’re important (some may even claim that this topic should be optional).
  • Learn about some important new ES6 features (optional).
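
To illustrate two of the language features above, closures and higher-order functions, here is a minimal sketch:

// makeCounter is a higher-order function: it returns a new function
// that closes over the count variable
function makeCounter () {
  let count = 0
  return function () {
    count += 1
    return count
  }
}

const counter = makeCounter()
console.log(counter()) // 1
console.log(counter()) // 2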

3. Back-End Language

Once you feel you’ve gotten a good grasp on HTML/CSS and JavaScript, you’ll want to move on to a back-end language that will handle things like database operations, user authentication, and application logic. All online programs and bootcamps usually focus on a specific back-end language, and in reality it doesn’t matter so much which one you learn, as long as you understand what is going on and you learn the nuances of your chosen language. You’ll get a ton of different responses if you ask someone which back-end language is the best to learn, so below I’ve listed a few popular combinations. An important note: whichever you decide to learn, just stick with it and learn as much as you can about it — there are jobs out there for all the languages listed below.

  • Node.js: This is a great option because Node.js is itself just a JavaScript environment, which means you don’t need to learn a new language. This is a big reason why a lot of online programs and bootcamps choose to teach Node.js. The most popular framework you’d most likely learn to aid you in developing web applications is Express (see the sketch after this list).
  • Ruby: Some popular frameworks for developing in Ruby are Rails and Sinatra. Plenty of programs teach Ruby as a first back-end language.
  • Python: Some popular frameworks for developing in Python are Django and Flask.
  • Java: The Java language isn’t taught so much these days when it comes to Full-Stack Web Development, but some companies do use Java as their back-end, and it is still a very in-demand language.
  • PHP: PHP is rarely taught in programs these days, but just like with Java, it is still very in-demand and it is a cornerstone of the web today.
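
To give a feel for the Node.js option, here is a minimal sketch of an Express server (assuming express has been installed from npm):

const express = require('express')
const app = express()

// a single route that responds to GET /
app.get('/', (req, res) => {
  res.send('Hello from Express')
})

app.listen(3000, () => console.log('Listening on port 3000'))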

4. Databases & Web Storage

When learning to build web applications, at some point you’ll probably want to store data somewhere and then access it later. You should have a good grasp of topics such as relational and NoSQL databases, in-memory data stores, and web storage in the browser.
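
As a small taste of the web-storage side, here is a sketch of the browser’s localStorage API:

// localStorage persists simple key/value strings across page reloads
localStorage.setItem('theme', 'dark')
console.log(localStorage.getItem('theme')) // 'dark'
localStorage.removeItem('theme')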

5. HTTP & REST

HTTP is a stateless application protocol on the Internet — it’s what allows clients to communicate with servers (e.g. your JavaScript code can make an AJAX request to some back-end code you have running on a server, which will happen via HTTP). Important topics to learn here include what REST is and best practices for designing a RESTful API.
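
For instance, a minimal sketch of such a request using the browser’s fetch API (the endpoint is hypothetical):

fetch('https://api.example.com/users/42')
  .then((res) => {
    // res.ok is false for 4xx/5xx status codes
    if (!res.ok) throw new Error('HTTP ' + res.status)
    return res.json()
  })
  .then((user) => console.log(user))
  .catch((err) => console.error(err))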

6. Web Application Architecture

Once you think you have a grasp on HTML/CSS, JavaScript, back-end programming, databases, and HTTP/REST, then comes the tricky part. At this point if you want to create a somewhat complex web application, you’ll need to know how to structure your code, how to separate your files, where to host your large media files, how to structure the data in your database, where to perform certain computational tasks (client-side vs server-side), and much more.

There are best practices that you can read about online, but the best way to actually learn about application architecture is by working on a large application yourself that contains several moving parts — or even better, working on a team to develop a somewhat large/complex application together.

This is why, for example, someone with 7+ years of experience may not necessarily know CSS or JavaScript better than someone with 2 years of experience, but over all of those years they’ve presumably worked with all sorts of different applications and websites and have learned how to architect and design applications (among other important things) to be most efficient, and they can see the “big picture” when it comes to development. Below are some things you can read that will help you learn how to architect your web applications efficiently:

  • Learn about common platforms as a service, e.g. Heroku and AWS. Heroku allows you to easily upload your code and have an application up and running with very little configuration or server maintenance and AWS offers dozens of products and services to help with storage, video processing, load balancing, and much more.
  • Performance optimization for applications and modern browsers.
  • Some opinions on what a web application architecture should include.
  • Designing Web Applications by Microsoft.
  • MVC.
  • Most importantly though you should try to work on projects with people, look at codebases of popular projects on GitHub, and learn as much as you can from senior developers.

7. Git

Git is a version control system that allows developers working on a team to keep track of all the changes being made to a codebase. It’s important to know a few key things related to Git so that you understand how to properly pull down the latest code you’ve missed, update parts of the code, make fixes, and change other people’s code without breaking things. You should definitely learn the concepts behind Git and play around with it yourself.

  • Here’s a reference list of some common git commands you’ll likely use.
  • Here’s a tutorial on using Git and GitHub for beginners.

8. Basic Algorithms & Data Structures

This topic is somewhat polarizing in the development world because there are developers who don’t think there should be such a heavy focus on computer science topics like tree traversal, sorting, algorithm analysis, matrix manipulation, etc. in web development. However, there are companies like Google that are notorious for asking these types of questions in their interviews. As someone said about the Front-End engineering interview at Google:

That said, as Ryan McGrath mentions, our front-end (FE) engineers are expected to have a solid CS background, like all our engineers.

It’ll be hard work learning all of this, but it’s rewarding in the end and Full-Stack Development is fun! Leave your comments below.

Stack Overflow survey finds Go, Scala best paying languages for developers

Growth in popularity of Go and Scala has made them the highest-paying programming languages in the United States, according to Stack Overflow’s 2017 developer survey.

The results of the survey of 64,000 respondents showed that Stack Overflow’s record of trending programming languages aligned with demand and salary for developers with these skills.

The growth of these languages, which can both net an average yearly salary of $110,000, can be attributed to fields like DevOps and Big Data, according to Stack Overflow’s director of developer insights, Kevin Troy.

“Go’s enthusiasts call it ‘the language of the cloud,’” said Troy. “It’s been adopted for a lot of network infrastructure and other systems where scaling across multiple machines is an important consideration.”

The focus of many job listings asking for Go by name are often for developers to build a stable backbone to support the work of other developers, Troy says.

Scala, on the other hand, is more suited to number crunching. It’s used notably by Twitter as the primary language for backend development of web applications.

Since it runs on the Java Virtual Machine, it plays well with backends written in Java or other JVM languages, according to Troy. “And since it’s a functional programming language, it’s well-suited to data processing, machine learning, and similar tasks. So it’s not uncommon to see job listings for ‘big data’ jobs call for Scala.”

Oracle wants to advance Java EE in a more open community

Oracle continues to make progress on Java EE 8, the enterprise edition for the Java platform, and moving forward it would like to advance Java EE within a more open and collaborative community. Specifications are nearly complete, and the Java team expects to deliver the Java EE 8 reference implementation this summer.

As the delivery of Java EE 8 approaches, Oracle believes it has the opportunity to rethink how Java EE is developed in order to “make it more agile and responsive to changing industry and technology demands.”

“Java EE is enormously successful, with a competitive market of compatible implementations, broad adoption of individual technologies, a huge ecosystem of frameworks and tools, and countless applications delivering value to enterprises and end users,” according to Oracle in a blog post. “But although Java EE is developed in open source with the participation of the Java EE community, often the process is not seen as being agile, flexible or open enough, particularly when compared to other open source communities. We’d like to do better.”

According to Oracle, moving Java EE technologies to an open-source foundation may be the right next step in order to “adopt more agile processes, implement more flexible licensing, and change the governance process.” Oracle also plans to explore this possibility with the developer community, its licensees and several candidate foundations to see if they can move Java EE in this direction.

“We believe a more open process, that is not dependent on a single vendor as platform lead, will encourage greater participation and innovation, and will be in best interests of the community,” reads the blog.

While there are many details that need to be fleshed out, Red Hat’s John Clingan, senior principal product manager, said that Red Hat is optimistic and applauds Oracle’s decision to advance Java EE under an open-source foundation. Red Hat, an open-source software company, is built on the principles of the open-source way.

“We think that putting Java EE under the jurisdiction of an open source organization is a very positive move that will benefit the entire Enterprise Java community,” said Clingan.

Since Java EE has been evolving for nearly two decades to address market needs, Clingan said that Red Hat believes that a two-tier approach is needed to evolve Java EE more quickly.

“This includes Java EE as a standard, which should move at a measured pace, and Eclipse MicroProfile as an open-source project that acts as an innovation engine to drive new features for Java EE developers more quickly,” said Clingan. The Configuration JSR submission is an example, he added.

As an Eclipse MicroProfile community member, Red Hat plans to continue forward and deliver functional specifications within the Eclipse MicroProfile community as the effort to move Java EE to a foundation progresses. And as a licensee, Red Hat (and JBoss before its acquisition) pioneered the idea of an open-standard enterprise application platform and open-source collaboration, which, according to Clingan, really drove open-source adoption into the “heart of the enterprise.”

Red Hat leads the CDI and Bean Validation Java EE-related JSRs, participates in multiple additional Java EE-related JSRs, and it ships JBoss Enterprise Application Platform as fully Java EE-compatible, said Clingan.

As Java EE moves forward, Oracle writes that it intends to meet the needs of its developers, end users, customers, technology consumers, partners and licensees. Clingan said that Java EE has the opportunity to grow even more; with a more permissive license, it will encourage new contributions, new implementations and distributions. And end-user developers should be able to use Java EE-related technologies more quickly.

Also, Oracle will support existing Java EE implementations and future implementations of Java EE.