Tesla Model 3 killer? Nissan to launch redesigned Leaf

Nissan Motor Co. Ltd. will launch its redesigned Leaf next week, with all eyes on how the all-electric vehicle’s battery range will stack up against its Tesla Inc. and General Motors Co. competitors.

The Leaf is the reigning EV champion in sales — Nissan says it sold more than 280,000 Leafs worldwide, from the car’s debut in December 2010 to last month.

It remains to be seen whether the battery range will go up, or whether Nissan will offer different battery sizes at different price points, with a larger range on costlier trims. The 2017 model, which starts around $31,000, offers a range of 107 miles.

“If the range goes up and the price remains the same, the new Nissan Leaf will continue to offer one of the least expensive and practical ways to own a pure electric car,” said Ed Hellwig, a senior editor with Edmunds.

The car probably won’t offer as much range as the Chevy Bolt, but if the Leaf delivers even a modest bump over its current range, it will be enough to get the attention of most mainstream EV shoppers, he said.

“The original Leaf was easily recognizable, but not very attractive. This time around Nissan is promising a more conventional design that should make the Leaf more appealing to a wider range of buyers,” Hellwig said.

Nissan on Thursday declined to offer details on the new 2018 Leaf ahead of the unveiling, set for Tuesday at 6 p.m. Pacific.

The Leaf’s current battery range compares with at least 220 miles for the Model 3, which Tesla launched in late July, and 238 miles for GM’s Chevy Bolt. The Model 3 starts at $35,000, while the Bolt starts at $38,000.

The battery range on the new Leaf will likely be around 150 miles, said Karl Brauer with Kelley Blue Book.

That would give the Tesla Model 3 the battery-range advantage, as well as a brand and style advantage, over the Leaf, he said. The Nissan EV could have the upper hand on base price over both the Bolt and the Model 3, and an availability advantage over the Model 3, Brauer said.

The Bolt is widely available. For the Model 3, customers putting down a $1,000 reservation on the car can expect to receive it in 12 to 18 months, according to Tesla’s website.

The Model 3 is the linchpin of Tesla’s expansion plans, which include launching new passenger and commercial vehicles and reaching a production rate of 500,000 vehicles a year by the end of next year.

On a conference call earlier in August, following the company’s quarterly results, Tesla Chief Executive Elon Musk told analysts there should be “zero concerns” about achieving that production goal. Tesla sold its first-ever pure bonds in August to secure a smooth financial ride for the Model 3 production ramp.

GM’s Bolt, launched in December 2016, recently set the record for all-electric vehicle range in Consumer Reports’ testing, reaching 250 miles on a single charge, the magazine said earlier this month.

Overall, the Bolt is Consumer Reports’ No. 2 recommendation for electric vehicles, behind the much pricier Model S, Tesla’s luxury sedan. The GM car got dented for an “overly squishy” brake-pedal feel, long charging time, choppy ride, and uncomfortable seats, the magazine said.

Gigster receives $20M in funding, Checkmarx’s DevSecOps platform, and Okta’s two-factor authentication

Gigster wants freelance programmers to earn a Silicon Valley salary, from the comfort of their homes.

The four-year-old startup puts companies looking for software developers in touch with freelance programmers all around the world. The startup just received $20 million in funding from investors including Salesforce CEO Marc Benioff, Redpoint Ventures, and basketball star Michael Jordan. The company will use the money to fund sales, marketing and other efforts aimed at persuading big enterprise companies to use Gigster, according to a Business Insider report.

In an announcement, the founders, Roger Dickey and Debo Olaosebikan said: “We’re also obsessed with making software less difficult to build. Using millions of data points gathered from over 1,000 projects, we are building a suite of tools that make software development more efficient & reliable. More customers and more data enable us to discover patterns in how work is done. Patterns lead to tools for better software delivery, which leads to more, happier customers.”

More information on the company’s funding can be found here.

Checkmarx announces new DevSecOps capabilities
At Jenkins World 2017, Checkmarx announced its new Interactive Application Security Testing solution, CxIAST, which gives teams continuous application security testing in real time, with zero scan time, high accuracy and seamless implementation.

“CxIAST is a game changer for organizations who are struggling to deliver secure software faster,” said Maty Siman, CTO and founder, Checkmarx. “Our unified AppSec platform correlates data and results from all Checkmarx products across the software development lifecycle and then leverages that information intelligently to generate fast, accurate and actionable results.”

CxIAST monitors an application by using existing functional tests, and it doesn’t need to actively probe the application in order to detect vulnerabilities, according to a company announcement. The solution is also an “important pillar in Checkmarx’s Application Security Testing platform, which provides solutions at every stage of the SDLC,” reads the announcement.

Okta adds two-factor authentication as new standard for customers
Okta, a provider of identity for the enterprise, delivered new functionality for its cloud-based Okta Adaptive Multi-Factor Authentication (AMFA). The company also announced that two-factor authentication comes as a standard for every Okta user, which sets a baseline for strong identity protection, according to the company.

“In today’s cloud and mobile world, we have more data, with more people, and in more locations than ever before – making credential harvesting the most fruitful tactic for today’s threat actors,” said Yassir Abousselham, Chief Security Officer at Okta. “Identity is now the security team’s last control point because security can’t manage every single person, device and app; what they can control is who has access to information, and when.”

Abousselham said that’s why the company boosted its security provided by Okta Identity Cloud so it’s more effective for customers. With the enhancements to its AMFA solution, multi-factor authentication as the new standard of identity-driven security, and the ability to “make smarter security decisions based on context, we’re helping to ensure the right person gets access to the right resources, at the right time,” he said.

The Whopper Coin, Movidius Myriad X VPU, and DxEnterprise v17

Burger King has launched WhopperCoin in Russia, a loyalty scheme that uses blockchain technology as a secure system for rewards points.

Customers will be able to scan their receipt with a smartphone and be rewarded with 1 WhopperCoin for every rouble ($0.02) spent on a Whopper sandwich at the fast-food chain. When a user amasses 1,700 WhopperCoin (five or six burgers’ worth of purchases), they can redeem them for a free Whopper.

Since the crypto-currency is hosted on the Waves platform, it can be freely traded and transferred like any other.

“Eating Whoppers now is a strategy for financial prosperity tomorrow,” said Ivan Shestov, Burger King Russia’s head of external communications.

DH2i adds Linux, Docker support to high availability container solution
High availability and disaster recovery developer DH2i has launched DxEnterprise v17, adding support for Linux to the previously Windows Server-exclusive virtualization management software.

The new release adds support for Docker containers for the first time, as well as updated support for SQL Server 2017.

“DH2i’s expanded capabilities have made the underlying infrastructure and platform essentially irrelevant for our customers,” said OJ Ngo, co-founder and CTO of DH2i. “Our customers are able to enjoy an extremely simplistic management experience with our unified interface for Windows, Linux and Docker—all while our Smart Availability technology dynamically ensures that workloads only come online at their best execution venue.”

Introducing the Movidius Myriad X vision processing unit (VPU)
The Intel subsidiary Movidius is announcing its Movidius Myriad X vision processing unit, which is intended for deep learning and AI acceleration in vision-based devices such as drones, cameras, and AR/VR headsets.

The Myriad X features a Neural Compute Engine, which lets the Myriad X achieve over one trillion operations per second of peak DNN inferencing throughput. It also comes with a Myriad Development Kit, which includes all development tools, frameworks and APIs to implement custom vision, imaging, and deep neural network workloads on the chip.

Using Preact instead of React
There are plenty of alternatives to React, and one open source project thinks that it is the best choice.

With the thinnest possible Virtual DOM abstraction on top of the DOM, Preact is a “first class citizen of the web platform,” according to the Preact team.

Preact is a speedy, lightweight library option, and it’s designed to work with plenty of React components. Preact is also small enough that your code is actually the largest part of your application, according to Preact’s team, which means less JavaScript to download, parse and execute. It includes extra performance features, and it’s optimized for event handling via Linked State. Developers can use Preact for building parts of apps without complex integration.
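For a sense of what that looks like in practice, here is a minimal sketch of a hypothetical counter component written against Preact’s h(), render() and Component exports, with no JSX so no build step is assumed; it is an illustration, not code from the Preact project itself.

```javascript
// Minimal Preact sketch: a hypothetical counter component built with h() and
// render() directly, so no JSX compilation step is required.
import { h, render, Component } from 'preact';

class Counter extends Component {
  constructor() {
    super();
    this.state = { count: 0 };
    this.increment = this.increment.bind(this);
  }
  increment() {
    this.setState({ count: this.state.count + 1 });
  }
  render(props, state) {                  // Preact passes props and state to render()
    return h('div', null,
      h('p', null, `Clicked ${state.count} times`),
      h('button', { onClick: this.increment }, 'Click me')
    );
  }
}

render(h(Counter), document.body);
```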

Uber’s new CEO, legal fund for ‘WannaCry hero’ found fraudulent, and Quali’s CloudShell Virtual Edition

Uber has formally extended a job offer to Expedia CEO Dara Khosrowshahi to fill its chief executive position after co-founder Travis Kalanick resigned from the post in June.

The position has been vacant since Kalanick’s resignation, which was prompted by lawsuits and reports of a toxic corporate culture at the ridesharing company. The company continues to be embroiled in lawsuits over sexual misconduct and withholding information from investors.

Khosrowshahi has been the head of Expedia since 2005 and served as CFO of IAC for seven years prior. He ranks 39th on Glassdoor’s most recent list of the highest-rated CEOs among employees, with an approval rating of 94 percent.

Legal fund for hacker who stopped WannaCry refunded due to fraud
All donations to the legal fund for arrested cybersecurity researcher Marcus Hutchins, notable for halting the WannaCry ransomware attack, have been refunded after his lawyer discovered that over $150,000 in donations were fraudulent.

Lawyer Tor Ekeland decided that sifting for legitimate donations wouldn’t be worth the risk and says all of the money has been refunded.

Hutchins, a 23-year-old British national working in the United States, was arrested at a cybersecurity conference in Las Vegas in August and has pleaded not guilty to six charges related to the development of the Kronos banking Trojan.

Quali announces CloudShell Virtual Edition
Quali announced the general availability of CloudShell Virtual Edition, a new cloud sandbox offering for virtualized IT environments. Quali provides self-service IT environments for cloud and DevOps automation.

CloudShell is designed for virtualized environments and it can spin up sandbox environments with virtual components in a few minutes, according to the company. The new solution also works for developers, testers, and DevOps teams who need to model complex application and virtualization infrastructure blueprints, and quickly deploy them to any cloud or virtualization platform.

More information is available here.

AT&T violates discrimination ban, says complaint
According to a formal complaint filed with the Federal Communications Commission (FCC), AT&T is violating the Communications Act’s prohibition against “unjust and unreasonable discrimination.”

The complaint alleges that AT&T discriminates against poor people by providing faster service in wealthier areas while offering speeds as low as 1.5Mbps in low-income areas.

“This complaint, brought by Joanne Elkins, Hattie Lanfair, and Rachelle Lee, three African-American, low-income residents of Cleveland, Ohio alleges that AT&T’s offerings of high-speed broadband service violate the Communications Act’s prohibition against unjust and unreasonable discrimination,” the complaint says.

The evidence of discrimination is based partly on a study from March, in which advocacy groups analyzed FCC data and alleged that “AT&T has systematically discriminated against lower-income Cleveland neighborhoods in its deployment of home Internet and video technologies over the past decade,” according to a report from Ars Technica.

This photography tool helps you find the ideal lens for your camera

Finding the right camera is already difficult enough, but pairing it with the right lens can be even more of a hassle – especially for rookies. Fortunately, this is precisely where this handy photography tool can help.

What The Lens is a simple web application that lets you find the perfect lens for your camera by selecting the sort of photos you want to snap.

The way it works is pretty straightforward. The website prompts you to choose among six categories – landscapes, macro, animals, travel, people and city – and then asks you to select 20 photos that closely align with your own photographic aspirations.

Once you’ve indicated the 20 images that best match your own style, What The Lens will immediately recommend the lens that will best suit your needs, along with a camera you can pair it with. It’ll also direct you to an online store where you can purchase the item (mostly Amazon and Adorama).

What is particularly nifty is that those who have already bought a camera can indicate the brand in question to filter out incompatible lenses. The options are pretty limited, though.

Now go visit What The Lens here and figure out what the ideal lens for your camera is.

Just a small heads-up: Some users have noted that NSFW images might occasionally show up in the What The Lens photo selection – so you might want to be a little more discreet if you can’t wait until you’re home to try out the app.

Anonymous Messaging App Sarahah to Halt Collection of User Data With Next Update

Sarahah, the anonymous messaging app that shot to the top of app stores earlier this summer, says that in its next update it plans to remove a feature that uploads users’ contacts, including phone numbers and email addresses, to the company’s servers.

The app’s creator, Zain al-Abidin Tawfiq, caught flak over the weekend after The Intercept reported Sarahah was failing to ask for the user’s permission before uploading the data.

The app, which allows users to anonymously compliment or critique friends or co-workers, is currently the 45th most downloaded app on iTunes but hit No. 1 on the App Store’s list of top free apps in July. The app has been installed between 10 million and 50 million times on Android devices worldwide, according to its listing on Google Play.

Sarahah, which translates to “frankness” or “honesty” in Arabic, doesn’t hide that it wants to access a user’s contacts. Upon opening, the app says it needs to access contacts in order to show users who else has a Sarahah account. A user can, however, decline to give the app access to their contacts and still use it.

According to The Intercept’s report, Zachary Julian, a senior security analyst at Bishop Fox, discovered the app’s behavior after installing Sarahah on his Galaxy S5 running Android 5.1.1 and monitoring its traffic via Burp Suite, a toolkit used for web app security testing.

Tawfiq did not immediately return Threatpost’s request for comment but responded to The Intercept on Sunday morning. According to the developer, user data was being uploaded for a “Find Your Friends” feature that was supposed to surface in the app in a future update but had been delayed due to a technical issue. The developer stressed on Sunday morning – and again on Monday morning – that the app’s database doesn’t contain “a single contact.”

“The database doesn’t currently host contacts and the data request will be removed on next update,” Tawfiq tweeted.

The Sarahah database doesn’t currently hold a single contact.

— ZainAlabdin Tawfiq (@ZainAlabdin878) August 28, 2017

It’s unclear exactly when that update is slated to arrive; the app was last updated for iOS on July 27 and for Android on July 28.

In the meantime, privacy-conscious users could disable their current account and register a new one via the service’s website. The website requires only an email address, password, username, and name to sign up for an account – no contacts required. After doing so, a user would have to share their page publicly in order to receive anonymous messages.

Just because mobile applications bill themselves as anonymous doesn’t mean they’re free from security issues.

Years ago Yik Yak, a now-defunct cross-platform app that let users share anonymous updates with people near them, fixed a critical vulnerability that could have de-anonymized users and let an attacker take control of a user’s account. The app, which was once valued at $400 million but shut down in April of this year, identified users by their user ID.

If an attacker secured access to that string of characters, they’d be able to view all of the user’s posts, which were thought to be private.

Secret, another defunct app that let users anonymously share messages, encountered similar security issues before it shuttered in 2015.

BMW adds a performance version of its electric i3 for 2018

For 2018 BMW is lightly reworking the style that made us call its i3 “a long-range concept car you can actually buy” and it’s expanding the lineup with the i3s. A new performance version, it upgrades the standard i3’s 170 horsepower / 184 pound-feet of torque electric motor to a high-output version capable of 184 horsepower and 199 pound-feet of torque.

A sports suspension drops the i3s 10mm lower, widens its track by 40mm, and connects to new 20-inch rims.

It also has a special Sport driving mode with “more direct accelerator response and tighter steering characteristics.” All of that makes the i3s capable of 0 – 60 MPH in 6.8 seconds, with a 100 MPH top speed. Outside, both the i3 and i3s have new styling tweaks all around to make the car appear wider and match the company’s trademark BMW i Black Belt design.

Slightly more important, however, are the changes to charging, as part of BMW’s “360° ELECTRIC” package. The new TurboCord EV Charger is a $500 accessory that works with both cars. BMW claims it’s the “smallest, lightest UL-listed portable charger available,” ready for Level 1 charging from any regular 120V outlet, as well as 3.6kW charging from 240V outlets, which is three times as fast as a standard cable.

Features like the gas-powered range extender, ConnectedDrive, iDrive 6 and Apple CarPlay are still around, although there’s a higher-resolution 1,440 x 540 10-inch touchscreen available with the Professional navigation package. The cars will make their debut next month at the Frankfurt Motor Show, while pricing info will be revealed later.

Nissan Connect App now available for download on Windows 10 Mobile

Nissan has released its official Nissan Connect application to the Windows Store, where it is available for download on Windows 10 Mobile devices.

The app shows you the nearest Nissan support centers on a map, along with the option to monitor the maintenance work done by the specialized staff.

Users can also track their trips and access the Nissan community to compare with other users. The app, which connects to the car’s Telematics Control Unit, provides alerts and notifications about the user’s driving behavior with the car, along with the option to access eco scores and see a record of safe driving trips.

If you own a Nissan car and use a Windows 10 Mobile device, you can download the app via the Windows Store link below. Do let us know your first impressions of the application in the comments below.

Download Nissan Connect for Windows 10 Mobile

What’s the QEWD.js framework and how does it help with heavy CPU processing?

This post was written by Rob Tweed who is the director of M/Gateway Developments Ltd, a consultancy and software development company in the UK that has focused on web and NoSQL database technologies, particularly in the healthcare industry, since the mid-90s. Cycling, photography and listening to and recording music are what keep him sane away from the keyboard! This post first appeared on Rob’s blog.

Imagine all the benefits of Node.js: one language and technology for both front-end and back-end development, plus its outstanding performance, but without the concerns of concurrency and heavy CPU processing, and with high-level database abstractions. With some interesting parallels to Amazon Web Services’ Lambda, that’s what the QEWD.js framework is designed to deliver.

I’ve worked with Node.js since its early days in 2011. I’ve also worked for many years more with conventional server-side languages, so I’m aware of the differences with the Node.js philosophy, and with what I’d like to do versus how Node.js wants/expects me to do it. Additionally, I’ve worked recently with Java developers who have made (or tried to make) the transition to Node.js, which has been an interesting and revealing exercise.

Whilst Node.js has become hugely popular, it is not without its many critics. Probably most of the criticisms centre around things that are the very consequences of the deliberately-chosen technical design of Node.js: namely that all user activity takes place within a single process. So, when writing server-side code in JavaScript, Node.js crucially requires you to understand that everything you do can affect every user, and expects you to write your code accordingly. Node.js is therefore all about asynchronous coding and non-blocking I/O. Block or even slow down the process and all other concurrent users suffer and you can bring a service to its knees.

Whereas other, more conventional server-side languages such as Java and Python provide optional syntax to perform asynchronous logic where it makes sense and is more efficient to do so (e.g. to access multiple remote services in parallel), the norm in those languages is to write synchronous logic, even when accessing databases or files. The multi-threaded nature of these languages’ technical architectures means that the developer doesn’t have to be concerned about concurrency. So when developers with a background in languages such as Java or Python are faced with moving to the single-process environment of Node.js, its unavoidable and mandatory asynchronous logic comes as quite a culture shock.

Some learn to love Node.js, and some grudgingly accept it, but many just don’t “get it” at all and give up, and many others dislike it with a vengeance. That’s a problem if Node.js is to continue growing in popularity: if it’s to extend further into the Enterprise, it’s going to require developers who currently use Java, Python, .Net, etc. to comfortably migrate to and adopt Node.js and JavaScript.

Of course, recent developments in JavaScript have tried to make life easier for the developer: first in the form of Promises, and more recently in the form of Async/Await. These syntax enhancements aim to provide a more synchronous and therefore intuitive feel to asynchronous logic. Nevertheless they’re not the complete answer. The fact that all users are being handled by the one process means you can still bring a Node.js application to its knees with CPU-intensive code: something that understandably rings alarm bells when Node.js is considered for the Enterprise.
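To make the point concrete, here is a small, hypothetical sketch (the endpoint names and the Fibonacci workload are invented for illustration) of how one CPU-bound request stalls every other request sharing the same Node.js process, Promises and Async/Await notwithstanding:

```javascript
// A single Node.js process: while /heavy is computing, /ping cannot be answered.
const http = require('http');

function fib(n) {                        // deliberately naive, synchronous computation
  return n < 2 ? n : fib(n - 1) + fib(n - 2);
}

http.createServer((req, res) => {
  if (req.url === '/heavy') {
    res.end(`fib(42) = ${fib(42)}`);     // blocks the event loop for several seconds
  } else {
    res.end('pong');                     // normally responds instantly
  }
}).listen(3000);
```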

As a result, numerous articles have been written that recommend the use of Node.js for only certain kinds of application. One such article by Tomislav Capan is pretty typical, suggesting: “Where Node.js really shines is in building fast, scalable network applications, as it’s capable of handling a huge number of simultaneous connections with high throughput, which equates to high scalability.” Like many others, he concludes:
•You definitely don’t want to use Node.js for CPU-intensive operations; in fact, using it for heavy computation will annul nearly all of its advantages
•The [WebSocket-based] chat application is really the sweet-spot example for Node.js: it’s a lightweight, high traffic, data-intensive (but low processing/computation) application that runs across distributed devices
•If you’re receiving a high amount of concurrent data, your database can become a bottleneck. He recommends that data gets queued through some kind of caching or message queuing (MQ) infrastructure (e.g. RabbitMQ, ZeroMQ) and digested by a separate database batch-write process, or by computation-intensive backend processing services written in a better-performing platform for such tasks
•Don’t use Node.js for server-side web applications with relational databases (use Rails instead)
•Don’t use Node.js for computationally heavy server-side applications

All well and good, but I would like to be able to have my cake and eat it too:
•I’d like to just use one language — JavaScript — for everything
•I’d like to avoid a mash-up of a separate message queue such as RabbitMQ and multiple languages. The less complexity and the fewer moving parts the better from the point of view of maintainability and stability.
•In my experience it’s almost impossible to avoid some amounts of CPU-intensive processing on the server-side of most web applications, so I’d like to be able to handle such processing without fear of grinding a Node.js application to a halt for everyone.
•I’d also like to not have to worry about concurrency, and write my “userland” code as if it wasn’t an issue. I know that this is the promise (pun not intended) of Async/Await, but such pseudo-synchronous syntax still limits my ability to write, for example, higher-level, properly-chainable database functions in JavaScript. In my opinion, JavaScript should be just as capable as Rails for handling relational databases, and being able to create higher-level database abstractions in JavaScript is a key step to achieving this.

I’m sure I’m not alone in having this wish-list. So, a question I had from my earliest days of using Node.js was: couldn’t I have my cake and eat it too, getting all the advantages of Node.js while avoiding all the downsides?

Interestingly, we’ve seen the emergence of one use of Node.js where this is the case, and even its creator, Amazon Web Services, seems unaware that this is what they’ve made possible. Their Lambda service provides what is referred to as a “serverless” environment — more accurately a “function as a service” environment. You create and upload a function, and it is run on demand by services and technical means you neither know nor care about, and you simply get charged per invocation of that function. The first language offered for Lambda was Node.js/JavaScript, and although you can now use other languages including Java, Node.js is still the primary offering.
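For readers who haven’t used Lambda, a Node.js function for it is just an exported handler. The sketch below (the event fields and greeting logic are hypothetical) uses the long-standing callback-style signature; newer runtimes also accept an async handler that simply returns a value.

```javascript
// index.js -- a minimal, hypothetical Lambda handler.
// AWS invokes exports.handler once per request, inside an isolated container,
// so this code never competes with other users in the same process.
exports.handler = (event, context, callback) => {
  const name = (event && event.name) || 'world';   // assumed input shape
  callback(null, { message: `Hello, ${name}` });   // first argument is the error, if any
};
```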

What sets Lambda apart from the normal Node.js environment is that your functions are executed in an isolated runtime container where they don’t compete for any other users’ attention, so concurrency isn’t actually an issue. Nevertheless, look at the published example functions and they all use the usual asynchronous logic.

That doesn’t make sense to me. It’s fair enough to use asynchronous logic if it makes sense or is more efficient to do so, such as when you’re making multiple, simultaneous requests to remote S3 or EC2 services. However, for many Lambda functions you’ll maybe be making just a few accesses to remote resources which, if they could be done truly synchronously, wouldn’t affect performance or cost, but conversely would simplify the logic considerably. Put it this way: no Java, Python or .Net developer that I know of would go out of their way to use asynchronous logic if they didn’t have to, so why should a Node.js developer?

Of course, one of the reasons why Node.js Lambda developers continue to use asynchronous logic is that they believe there’s no alternative: pretty much all the standard interfaces for databases and remote HTTP-based services are asynchronous. Until things like Lambda came along, there was no point in having synchronous APIs for Node.js. Hopefully that can and will change. For example, the tcp-netx module, which provides synchronous as well as asynchronous APIs for basic TCP access, ought to provide the underpinnings for a new breed of synchronous APIs for use in a Node.js environment such as Lambda, where concurrency isn’t an issue. Indeed, there’s already such an interface available for MongoDB.

Not everyone, of course, will want to move their applications to Amazon’s “serverless” Lambda service. Prevailing wisdom would suggest that it’s not possible for them to “have their cake and eat it too”, but actually that’s not entirely true. Take a look at a Node.js project known as QEWD.js and you’ll see a way to achieve something similar to Lambda’s isolated execution containers, but running on your own servers.

QEWD.js is a server-side platform for REST and browser-based applications, built on top of a module called ewd-qoper8 which implements a Node.js-based message queue. Incoming messages to ewd-qoper8 are queued and dispatched to pre-forked Node.js child processes for processing. However, the key, unique feature is that each child process only handles a single message at a time, so the handler function for that message does not need to be concerned about concurrency: like Lambda, the handler function is executed in an isolated runtime environment. After handling the message and returning the response to the master ewd-qoper8 process, the child process does not shut down, but immediately makes itself available to handle the next available message in the queue. So there are no child process start-up and tear-down costs.
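The following is not the actual ewd-qoper8 API, just a stripped-down sketch of the pattern described above using only Node’s built-in child_process module: a master process keeps a queue and a pool of pre-forked, long-lived workers, and each worker receives exactly one message at a time.

```javascript
// master.js -- illustrative only; ewd-qoper8 implements this pattern for real.
// Assumes a worker.js file alongside (shown as comments at the bottom).
const { fork } = require('child_process');

const queue = [];
const workers = [];
const POOL_SIZE = 2;                       // number of pre-forked child processes

for (let i = 0; i < POOL_SIZE; i++) {
  const worker = fork('./worker.js');
  worker.busy = false;
  worker.on('message', (response) => {     // the worker has finished its one message
    console.log('response:', response);
    worker.busy = false;
    dispatch();                            // hand it the next queued message
  });
  workers.push(worker);
}

function dispatch() {
  const free = workers.find(w => !w.busy);
  if (free && queue.length > 0) {
    free.busy = true;
    free.send(queue.shift());              // one message per worker at a time
  }
}

function enqueue(message) {
  queue.push(message);
  dispatch();
}

enqueue({ type: 'demo', payload: 42 });

// worker.js -- handles a single message at a time, so it is free to block:
//
//   process.on('message', (msg) => {
//     const result = doSomeHeavySynchronousWork(msg);   // hypothetical handler
//     process.send(result);                             // then waits for the next message
//   });
```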

When developing ewd-qoper8 I looked at the possibility of using one of the standard message queues such as ZeroMQ or RabbitMQ, but found that there were no benefits in doing so. ewd-qoper8 turns out to be a very fast and reliable message queue, and allows me to avoid a mash-up of technologies and moving parts, and instead implement everything in Node.js and JavaScript.

QEWD.js builds on top of ewd-qoper8, integrating its master process as an Express middleware to provide a complete back-end development environment for web applications and REST/Web Services. A pretty good analogy of QEWD.js is a Node.js-based equivalent to Apache & Tomcat. QEWD’s fully asynchronous, non-blocking master process, incorporating Express, socket.io and the ewd-qoper8 message queue is, in many ways, a perfect Node.js networked application: it’s really lightweight, doing little else than ingesting incoming HTTP and WebSocket messages, putting them on a queue and dispatching them to an available child process. It’s therefore capable of handling large amounts of activity. All the “userland” processing happens in the isolated environment of a separate child process. QEWD allows you to configure as many child processes as you wish to meet the demands of your service and to make optimal use of your available CPU cores. If a back-end message handler function uses synchronous logic and blocks the child process, it affects nobody else. If it uses a lot of CPU, then it doesn’t directly affect any other concurrent user, any more so than in, say, a Java or .Net environment. Meanwhile, the master process continues to ingest, queue and dispatch incoming messages unabated.

Therefore with QEWD, I feel I have my ideal environment:
•I just have one technology — Node.js — for the entire back-end.
•I use just one language — JavaScript — for everything: front-end and back-end.
•As a developer I don’t have to worry about concurrency. That’s all handled for me by the QEWD/ewd-qoper8 master process which is just a “black box” that handles the external-facing HTTP and WebSocket interface as far as I’m concerned. My code will be executed in an isolated Node.js run-time container that has its entire process to itself, so I don’t need to worry about blocking I/O or CPU intensive processing.
•I can and still do use asynchronous APIs, but only where it makes sense and is more efficient to do so. But for most of the time I can access resources such as databases synchronously, which makes my logic simpler, more intuitive and therefore more maintainable.
•I can build powerful higher-level database abstractions entirely in JavaScript, so I don’t have to resort to using other languages and mixed-technology environments for this area of work. For example, the ewd-redis-globals module is used by QEWD to abstract the Redis database into not only a Document Database, but also a very powerful, high-level concept that I call Persistent JavaScript Objects that can be manipulated and modified directly within the database.

In many ways the “proof of the pudding” with QEWD.js has been to watch how Java developers take to it. I’ve been very encouraged by their reaction. Yes, they need to learn the differences in syntax of JavaScript and its many quirks, but otherwise they’ve told me they like the way their code runs in a much more familiar way and they don’t need to worry about concurrency.

If you’re interested in finding out more about QEWD.js, there’s a pretty comprehensive online training course available on Slideshare. QEWD.js is an Apache 2-licensed Open Source project, and will run on all platforms (even a Raspberry Pi). It’s built around the best of breed Node.js modules such as Express and socket.io. It works with any front-end JavaScript framework including Angular, React.js and React Native. You can use any standard Node.js modules in your back-end message handler functions, and any database using either conventional asynchronous interfaces or, ideally, synchronous ones.

I think that the time has come to begin to question the conventional wisdom regarding Node.js. Amazon Web Services’ Lambda and the QEWD.js project are challenging the ideas about the types of task for which Node.js is best avoided, providing solutions to what were previously seen as deficiencies without the need for other technologies and languages, and changing how server-side JavaScript can be written. I’m not saying that Lambda and QEWD.js will suit everyone or fit all use-cases, but they add a new dimension to and new opportunities for Node.js.

Apache MADlib graduates to Top-Level Project

Apache MADlib, an open-source Big Data machine learning library used for scalable in-database analytics, graduated from the Apache Incubator to become a Top-Level Project (TLP) today. The project has been well-governed under the Apache Software Foundation’s processes and principles, and its graduation to TLP signifies an important milestone for the Apache MADlib community.

“During the incubation process, the MADlib community worked very hard to develop high quality software for in-database analytics, in an open and inclusive manner in accordance with the Apache Way,” said Aaron Feng, vice president of Apache MADlib.

Apache MADlib is a comprehensive library for scalable in-database analytics, providing users with parallel implementations of machine learning, graph, mathematical and statistical methods for structured and unstructured data. It came to be an open source project after database engine developers, data scientists, IT architects and academics became interested in new approaches to sophisticated and scalable in-database analytics, according to the Apache announcement.

“MADlib was conceived from the outset as an open-source meeting ground for software developers, computing researchers and data scientists to collaborate on scalable, in-database machine learning and statistics,” said Joe Hellerstein, professor of computer science at UC Berkeley, cofounder and Chief Strategy Officer at Trifacta, and one of the original authors of MADlib. “It has been great to witness the growth of the MADlib community and codebase as an ASF incubating project, and I look forward to this continuing as a Top-Level Project.”

MADlib is already deployed in many academic projects and in industry. For instance, Pivotal has seen its customers successfully deploy MADlib on large-scale data science projects, according to Elisabeth Hendrickson, vice president of R&D for data at Pivotal.

“As MADlib graduates to a Top-Level Project at the ASF, we anticipate increased adoption in the enterprise given the mature level of the codebase and the active developer community,” she said.

New participants are more than welcome to join the project, according to Feng, and the team looks forward to working with more contributors as Apache MADlib makes its way as a fully fledged Apache project.