Top 12 Most Used Tools By Developers In 2020 – Analytics India Magazine

Frameworks and libraries can be described as the fundamental building blocks developers rely on when building software or applications. These tools help eliminate repetitive tasks and reduce the amount of code developers need to write for a given piece of software.

Recently, the Stack Overflow Developer Survey 2020 polled nearly 65,000 developers, who voted for their go-to tools and libraries. Here, we list the top 12 frameworks and libraries from the survey that are most used by developers around the globe in 2020.

(The libraries are listed according to their number of stars on GitHub.)

TensorFlow

GitHub Stars: 147k

Rank: 5

About: Originally developed by researchers on the Google Brain team, TensorFlow is an end-to-end open-source platform for machine learning. It has a comprehensive, flexible ecosystem of tools, libraries, and community resources that lets researchers push the state of the art in ML and allows developers to easily build and deploy ML-powered applications.
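
As a rough illustration of the building blocks the platform exposes, here is a minimal sketch using TensorFlow's tensors and automatic differentiation; the values are arbitrary placeholders, not anything drawn from the survey.

```python
# A toy illustration of TensorFlow's core pieces: tensors, variables, and
# automatic differentiation. All numbers here are arbitrary placeholders.
import tensorflow as tf

x = tf.constant([[1.0, 2.0], [3.0, 4.0]])   # a small input "batch"
w = tf.Variable([[0.5], [0.5]])             # trainable weights

with tf.GradientTape() as tape:
    y = tf.matmul(x, w)                     # a simple linear transformation
    loss = tf.reduce_sum(y ** 2)            # a scalar quantity to differentiate

grad = tape.gradient(loss, w)               # gradient of the loss w.r.t. the weights
print(loss.numpy(), grad.numpy())
```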

Know more here.

Flutter

GitHub Stars: 98.3k

Rank: 9

About: Created by Google, Flutter is a free and open-source software development kit (SDK) for building fast user experiences for mobile, web, and desktop from a single codebase. The SDK works with existing code and is used by developers and organisations around the world.

Know more here.

React Native

GitHub Stars: 89.3k

Rank: 6

About: Built by Facebook, React Native is an open-source mobile application framework used to develop applications for Android, iOS, the web, and more. With React Native, you can use native UI controls and have full access to the native platform. Its features include creating interactive UIs, building encapsulated components that manage their own state, seeing local changes in seconds, and more.

Know more here.

Node.js

GitHub Stars: 72.2k

Rank: 1

About: Node.js is an open-source, cross-platform JavaScript runtime environment designed to build scalable network applications. It is a popular environment that allows developers to write command-line tools and server-side scripts outside of a browser. Its big advantage is that the millions of frontend developers who already write JavaScript for the browser can write server-side code as well as client-side code without needing to learn a completely different programming language.

Know more here.

Keras

GitHub Stars: 49.2k

Rank: 11

About: Keras is a popular open-source deep learning API written in Python. Its features include consistent and simple APIs, a minimal number of user actions required for common use cases, extensive documentation and developer guides, and clear, actionable error messages.
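
To give a sense of how few user actions a common workflow requires, here is a minimal Keras sketch; the random arrays stand in for a real dataset.

```python
# A minimal Keras sketch: define, compile, and fit a tiny model in a few calls.
# The random data below is a placeholder for a real dataset.
import numpy as np
from tensorflow import keras

x = np.random.rand(64, 10)                     # 64 samples, 10 features
y = np.random.randint(0, 2, size=(64,))        # binary labels

model = keras.Sequential([
    keras.layers.Dense(16, activation="relu", input_shape=(10,)),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=2, batch_size=16, verbose=0)
print(model.evaluate(x, y, verbose=0))         # [loss, accuracy]
```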

Know more here.

Ansible

GitHub Stars: 44.3k

Rank: 8

About: Ansible is a simple IT automation system that handles configuration management, application deployment, cloud provisioning, ad-hoc task execution, network automation, and multi-node orchestration. Ansible works by connecting to nodes and pushing out small programs, called Ansible modules, to them. Its features include managing machines quickly and in parallel, avoiding custom agents and additional open ports, staying agentless by leveraging the existing SSH daemon, and focusing on security and easy auditability.

Know more here.

Pandas

GitHub Stars: 26.1k

Rank: 4

About: Pandas is a Python package that provides fast, flexible, and expressive data structures designed to make working with relational or labelled data both easy and intuitive. The library aims to be the fundamental high-level building block for practical, real-world data analysis in Python. Its features include easy handling of missing data in both floating-point and non-floating-point data, flexible reshaping and pivoting of data sets, time series-specific functionality such as date range generation, frequency conversion, date shifting and lagging, and much more.
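
A short sketch of the features mentioned above (missing-data handling, reshaping and pivoting, and date-range generation with shifting), using made-up values.

```python
# A small pandas sketch: fill missing data, pivot to a wide table, and build
# a lagged time series. All values are made up for illustration.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "city": ["Delhi", "Delhi", "Mumbai", "Mumbai"],
    "month": ["Jan", "Feb", "Jan", "Feb"],
    "sales": [100.0, np.nan, 90.0, 120.0],          # one missing value
})

df["sales"] = df["sales"].fillna(df["sales"].mean())            # handle missing data
wide = df.pivot(index="city", columns="month", values="sales")  # reshape / pivot

dates = pd.date_range("2020-01-01", periods=4, freq="M")        # date range generation
lagged = pd.Series([1, 2, 3, 4], index=dates).shift(1)          # date shifting / lagging
print(wide, lagged, sep="\n")
```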

Know more here.

Terraform

GitHub Stars: 23.3k

Rank: 10

About: Created by HashiCorp, Terraform is an open-source infrastructure-as-code tool used to build, change, and version infrastructure efficiently. This declarative tool lets developers use HashiCorp Configuration Language (HCL), a high-level configuration language, to describe the desired infrastructure for running applications. Terraform can manage existing and popular service providers as well as custom in-house solutions.

Know more here.

.NET Core

GitHub Stars: 14.2k

Rank: 3

About: .NET Core is an open-source, cross-platform version of the .NET Framework that is maintained by Microsoft and the .NET community on GitHub. The framework supports the C#, Visual Basic, and F# languages for writing applications. .NET Core is a general-purpose development platform composed of the .NET Core runtime, the ASP.NET Core runtime, the .NET Core SDK, language compilers, and the dotnet command.

Know more here.

.NET Framework

GitHub Stars: 11.9k

Rank: 2

About: Created by the tech giant Microsoft, .NET Framework is a software development framework for building and running applications on Windows. It is a developer platform made up of tools, programming languages, and libraries for building many different types of applications. The broader .NET platform, including .NET Core, also offers cross-platform runtimes for cloud, mobile, desktop, and IoT apps.

Know more here.

Apache Cordova

GitHub Stars: 213

Rank: 12

About: Apache Cordova is an open-source mobile development framework that allows a developer to use standard web technologies, such as HTML5, CSS3, and JavaScript, for cross-platform development. The framework can be used by a mobile developer to extend an application across more than one platform. It can also be used by a web developer to deploy a web app that's packaged for distribution in various app store portals. A mobile developer interested in mixing native application components with a WebView that can access device-level APIs can also use it.

Know more here.

Unity

Rank: 7

About: Unity is a game development platform used to build high-quality 3D/2D games that can be deployed across mobile, desktop, VR/AR, consoles, or the web. Unity3D is a powerful cross-platform 3D engine and development environment.

Know more here.


Buried deep in the ice is the GitHub code vault humanity’s safeguard against devastation – ABC News

Svalbard is a remote, frozen archipelago midway between Norway and the North Pole.

Polar bears outnumber humans there, yet the archipelago arguably represents the biggest insurance policy the world holds against global technological devastation.

And we just took out a fresh policy.

For the first time ever, open-source code that forms the basis of most of our computerised devices has been archived in a vault that should protect it for 1,000 years.

If you're thinking that an Arctic code vault sounds like a high-tech library crossed with a Bond villain's lair, you're not far off.

Svalbard is remote, home to the world's northernmost town, and is protected by the century-old International Svalbard Treaty.

Crucially, it's already home to the successful Global Seed Vault, which saves seeds in case entire species ever get wiped out by disease or climate change.

Just down the road, the GitHub Archive Program found space in a decommissioned coal mine run by the Arctic World Archive, which already houses and preserves historical and cultural data from several countries.

All put together, the barren archipelago makes the perfect place to seal something you want to protect in a steel vault 250 metres under the permafrost.

The Arctic Code Vault aims to serve as a time capsule for the future, saving huge amounts of open-source computer code alongside a range of data, including a record of Australia's biodiversity and examples of culturally significant works.

If you were to make your way into the mine and crack the large steel vault, you'd find 186 film reels inside, each a kilometre long, covered in tiny dots.

It's not just miniaturised text, though. To squeeze in as much as possible, the code is stored in tiny QR codes that pack the information in as densely as possible.
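
The article doesn't detail the archive's actual encoding pipeline, but as a rough illustration of packing text into a dense, machine-readable block of dots, here is a sketch using the third-party qrcode Python package; the package choice and the snippet being encoded are assumptions made purely for demonstration.

```python
# Illustrative only: encode a piece of text into a QR code image using the
# third-party "qrcode" package (pip install qrcode[pil]). This is not the
# pipeline GitHub used for the archive reels.
import qrcode

snippet = "print('hello, future reader')"

qr = qrcode.QRCode(error_correction=qrcode.constants.ERROR_CORRECT_M)
qr.add_data(snippet)
qr.make(fit=True)                     # pick the smallest QR version that fits
qr.make_image().save("snippet.png")   # a tiny, dense block of dots
```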

You run into open-source code every day without even knowing it. In fact, you're probably using some to read this article right now.

"Open-source" means the code is shared freely between developers around the world and can be used for any application.

That means a little piece of coding could end up in anything from your TV to a Mars mission.

The concept fosters collaborative software engineering around the globe.

It's incredibly important, and it spans a range of complexity from huge algorithms that mine Bitcoin to single lines of code that determine whether a number is odd or even.
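
That odd-or-even check really can be a single line; in Python, for instance:

```python
def is_even(n: int) -> bool:
    return n % 2 == 0  # even numbers leave no remainder when divided by 2
```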

Archiving all of that work means it won't have to be re-invented if it is ever lost, saving time and money.

The archive reels hold a combined 21 terabytes of code. That may not seem like much if you have a hard drive at home that holds 2 terabytes.

But we're not storing your photos or movies here: each character in a line of code takes up only a tiny bit of space.

If someone who types at about 60 words a minute sat down and tried to fill up all that space, it would take 111,300 years, and that's if they didn't get tired or need any breaks.

If you're making an archive that's going to last, you've got to make sure it isn't going to degrade over time.

While it might seem intuitive to store the information on something like a Blu-ray disc or on hard drives, these are notorious for breaking down.

They're designed to be convenient, not to be heirlooms you pass down for generations.

"You might have seen this in the past ... years after you touched it last, you try to boot it up again and it wouldn't work," says GitHub's VP of strategic programs, Thomas Dhomke.

"The (information) bits have been lost."

Things that survive the ravages of time tend to be physical. Think papyrus scrolls, Egyptian carvings or Assyrian tablets.

In fact, there's a good chance that people of a distant future will know more about ancient people than they will about us.

When it comes to making physical copies, your office A4 wouldn't cut it, so they used a refined version of century-old darkroom photography technology to create the archival film reels.

Each film is made of polyester coated in very stable silver halide crystals that allow the information to be packed in tightly.

The film has a 500-year life span, but tests that simulate aging suggest it will last twice as long.

Storing it in the Arctic permafrost on Svalbard gives you a host of added benefits.

The cold prevents any degradation caused by heat; it's locked deep in a mountain, protected from damaging UV rays and safe from rising sea levels; and it's remote enough that it's not likely to be lost to looters from a dystopian future.

Despite global warming, and a previous event at the seed bank where some of the permafrost melted, it's believed the archive is buried deep enough that the permafrost should survive.

Just in case, they're not stopping there.

The GitHub Archive Program is working with partners to figure out a way to store all public repositories for a whopping 10,000 years.

The effort, called Project Silica, aims to write archives into the molecular structure of quartz glass platters with an incredibly precise laser that pulses a quadrillion times a second.

That's a 1 followed by 15 zeros: 1,000,000,000,000,000.

You might be wondering: doesn't the internet already save all of our information in the cloud?

Yes, but it's not as safe as you might think.

There are actually three levels of archiving, known as hot, warm and cold.

The hot layer is made up of online repositories like GitHub, which allows users to upload their code for anyone to use.

This is copied to servers around the world and is readily accessible to anyone with an internet connection.

While access is quick and easy, if someone removes their code from the hot layer, it is no longer available. That doesn't make for a very reliable archive.

The warm layer is run by the Internet Archive, which operates the Wayback Machine.

It crawls the web and regularly takes snapshots of sites and keeps them on their servers. Anyone can access them, but you have to do a bit of digging.

For example, you can still find the ABC webpage from January 3, 1997, which has the story of then Victorian Premier Jeff Kennett warning against Australia becoming a republic.

The Internet Archive isn't a perfect system: it takes regular snapshots, but anything that happened in between can be lost.

The hot and warm layers work well together to give a fair idea of what the internet might have held at any given time, but they both suffer from one critical weakness: they are made up of electronics.

The internet is essentially millions of interconnected computers and huge data storage banks that your device can access.

If there were an event that disrupted or destroyed those computers, the information they hold, and therefore the internet, could be destroyed forever.

The Arctic vault represents the cold layer of archiving.

It's an incomplete snapshot taken at regular intervals (the plan is to add to the archive every five years), but one that should survive the majority of foreseeable calamities.

Some of the potential disasters are academic, but some we've seen before.

In early September 1859, the sun belched, and the world's very rudimentary electronics were fried.

It's known as the Carrington Event, and as the matter ejected from the sun headed towards Earth, the lights of the auroras were seen as far north as Queensland and all the way down to the Caribbean.

When it hit, the largest geomagnetic storm ever recorded caused sparks to fly off telegraph wires, setting fires to their poles. Some operators reported being able to send messages even though their power supplies were disconnected.

If that were to happen today, most of our electronics, both here and in space, would be destroyed.

And it's not really a matter of if, but when.

It also doesn't have to be a huge astronomical event that causes us to lose many generations' worth of information.

If a pandemic or economic downturn was severe enough, we might be unable to maintain or power the computers that make up the internet.

If you consider how technology has changed in just the last few decades (the rise of the internet, the increased use of mobile phones), then it's easy to understand how people living a hundred or a thousand years from now are likely to have technology that's wildly different from ours.

The archive is part of our generation's legacy.

As Mr Dohmke says:

"We want to preserve that knowledge and enable future generations to learn about our time, in the same way you can learn (about the past) in a library or a museum."

Australian data has found a home in the archive, too, including the Atlas of Living Australia that details our country's plant and animal biodiversity, and machine learning models from Geoscience Australia that are used to understand bushfires and climate change.

There's no saying who might want to use the archive in the future, so archivists had to come up with a solution both for those who don't speak English and for those who might not understand our coding languages.

The films start with a guide to reading the archive, since there's a decent chance that anyone finding them in the future may not know how to interpret the QR codes.

Even more importantly, that's followed by a document called the Tech Tree, which details software development, programming languages and computer programming in general.

Crucially, it's all readable by eye.

Anyone wanting to read the archives might need at least a basic understanding of creating a magnifying lens (something humans achieved about 1,000 years ago), but after that the archive could all be translated using pen and paper.

The guides aren't just in English, either. Like a modern-day Rosetta Stone, they are also written in Arabic, Spanish, Chinese, and Hindi, so that future historians have the best chance of deciphering the code.

"It takes time, obviously ... but it doesn't need any special machinery," Mr Dhomke says.

"Even if in 1,000 years something dramatic has happened that has thrown us back to the Stone Age, or if extraterrestrials or aliens are coming to the archive, we hope they will all understand what's on those film reels."


NextCorps and SecondMuse Open Application Period for Programs that Help Climate Technology Startups Accelerate Hardware Manufacturing – GlobeNewswire

NEW YORK, Aug. 12, 2020 (GLOBE NEWSWIRE) -- NextCorps and SecondMuse announced that they are accepting applications from startups with innovative climate tech hardware prototypes for a spot in their Cohort 3 manufacturing programs: Hardware Scaleup, which NextCorps manages in conjunction with REV: Ithaca Startup Works in upstate New York, and M-Corps, which SecondMuse manages in New York City. The programs are designed to help support climate tech innovation from prototype to production; companies are encouraged to attend informational webinars offered through August 31 and submit their application by September 8, 2020.

The Cohort 3 manufacturing programs, which will kick off in October, provide entrepreneurs with a unique blend of support that helps accelerate manufacturing readiness and quickly move prototypes into volume production. This includes access to technical experts and mentors, curated manufacturing tours, funding opportunities, and a supportive community of peers who are on the same journey.

"Manufacturing is hard. These programs support climate tech startups to get to market faster by reducing the headaches associated with manufacturing," said Shelby Thompson, senior community manager, SecondMuse. "They also help advance New York State's clean energy and greenhouse gas reduction goals by making available new technologies that can have a significant impact."

"Scaling and commercializing new technologies requires strong collaboration and partnerships," added Mike Riedlinger, managing director, technology commercialization, NextCorps. "Our programs are structured to give entrepreneurs access to the connections that can help them successfully move their technologies into volume manufacturing."

Partners in the two programs include LaunchNY, New Lab, Urban Future Lab, Maiic, CEBIP, Partsimony, Cornell University, REV: Ithaca Startup Works, RIT Golisano Institute for Sustainability, NIST Manufacturing Extension Partnership, and many contract manufacturers, industrial organizations, industry leaders, and funders across the state.

Building on Success

The announcement comes after the New York State Energy Research and Development Authority (NYSERDA) renewed its support of NextCorps and SecondMuse earlier this year to advance the scale-up of innovation to meet Governor Cuomo's nation-leading climate and clean energy goals, as outlined in the Climate Leadership and Community Protection Act (CLCPA). In their two years of programming so far, NextCorps and SecondMuse have supported 33 startups. Despite disruptions related to COVID-19, companies including Cellec, Ecolectro, Skyven, Southern Tier Technologies, Switched Source, Enertiv, Pvilion, Actasys, Unique Electric Solutions, and Tarform continue to grow. Collectively, these companies have raised an additional $28 million in funding during their time in the program while generating approximately $10 million in sales revenue. The assistance has had dramatic impacts on success, demonstrating the value of both manufacturing readiness and access to global and domestic manufacturers in New York State.

"The beneficial connections we've made through the program have changed our manufacturing strategy," said Rhonda Staudt, founder, Combined Energies, LLC, a startup participant. "It's opened the door to finding a high-quality contract manufacturer that can build our boards, so we don't have to do it ourselves."

"Going through the Manufacturing Readiness Level (MRL) assessment helped us truly understand where we are within our manufacturing processes, as well as the steps we need to take to get to a full production run," said JC Jung, cofounder, Tarform Motorcycles.

How to Apply

Cohort 3 started accepting applications on July 27, and the application period will be open through September 8, 2020. The selection criteria focus on the potential of an existing hardware prototype device that can be scaled up to mass production to meaningfully lower carbon emissions and support clean energy. To help companies assess whether the program is right for them, virtual information sessions will be held in August. The sessions cover topics such as what participants can expect from the program, the tools and resources that will be available, and how the programs can help push a company forward. Initial sessions were hosted on August 5 and 11, with additional sessions available on the following dates:

To apply to or learn more about the program, go to https://mcorps.paperform.co/.

About NextCorps

NextCorps provides a suite of services, including technology commercialization support for very early-stage opportunities, business incubation for high-growth-potential startups, and growth services for manufacturing companies seeking to improve their top- and bottom-line performance. For more information, visit www.nextcorps.org. For more information on Hardware Scaleup, go to scaleup.nextcorps.org.

About SecondMuse

SecondMuse is an impact and innovation company that builds resilient economies by supporting entrepreneurs and the ecosystems around them. They do this by designing, developing, and implementing a mix of innovation programming and investing capital. From Singapore to San Francisco, SecondMuse programs define inspiring visions, build lasting businesses, and unite people across the globe. Over the last decade, they've designed and implemented programs on 7 continents with 600+ organizations such as NASA, The World Bank, and Nike. To find out more about how SecondMuse is positively shaping the world, visit www.secondmuse.com. For more information on M-Corps, go to https://www.manufacturenewyork.com/.

For media inquiries, contact Shannon Wojcik at shannon@rkgcomms.com or shelby.thompson@secondmuse.com.


Persistent memory reshaping advanced analytics to improve customer experiences – IT World Canada

Written by Denis Gaudreault, country manager, Intel Canada

175 zettabytes. That is IDC's prediction for how much data will exist in the world by 2025. Millions of devices generate this data: everything from the cell phones in our pockets and the PCs in our homes and offices, to the computer systems and sensors integrated into our cars, to the factory floor at the industrial park leveraging IoT and automation. While many enterprises are unlocking the value of data by leveraging advanced analytics, others struggle to create value cost-effectively.

Case in point: imagine the difference between going to your desk to get a piece of information, versus going to the library, versus driving from Toronto to Intel's campus in Oregon, or even travelling all the way to Mars to get this information. These distances illustrate the huge chasm in latency between memory and data storage in many of today's software and hardware architectures. As the datasets used for analytics continue to grow larger, the limits of DRAM memory capacity become more apparent.

Keeping hot data closer to the CPU has become increasingly difficult in these capacity-limited situations. For the past 15 years, software and hardware architects have had to make the painful tradeoff between putting all their data in storage (SSDs), which is slow (relative to memory), or paying high prices for memory (DRAM). Over the years, it has become a given for architects to make this decision. So how can companies bridge the gap between SSDs and DRAM, while reducing the distances between where data is stored to make it readily available for data analytics?

Persistent memory solves these problems by providing a new tier for hot data between DRAM and SSDs. This new tier allows an enterprise to deploy either two-tier memory applications or two-tier storage applications. While it is not a new concept to have tiers, this new persistent memory tier with combined memory and storage capability allows architects to match the right tool to the workload. The result is reduced wait times and more efficient use of compute resources, allowing companies to drive cost savings and massive performance increases that help them achieve business results, while at the same time maintaining more tools in the toolboxes that support their digital transformations. Enterprises will also benefit from innovations and discoveries from the software ecosystem as it evolves to support it.
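
As a purely conceptual sketch of the tiering idea (a toy illustration only; real persistent-memory tiering is handled by the memory controller and system software, not by application code like this), a two-tier store might keep hot items in a small fast tier and spill everything else to a larger, slower one.

```python
# Toy illustration of two-tier data placement: a small, fast "hot" tier backed
# by a larger, slower tier. Not an actual persistent-memory API.
class TwoTierStore:
    def __init__(self, hot_capacity=3):
        self.hot = {}                  # stands in for DRAM / the fast tier
        self.cold = {}                 # stands in for persistent memory or SSD
        self.hot_capacity = hot_capacity

    def put(self, key, value):
        if key not in self.hot and len(self.hot) >= self.hot_capacity:
            old_key, old_value = self.hot.popitem()   # evict to the slower tier
            self.cold[old_key] = old_value
        self.hot[key] = value

    def get(self, key):
        if key in self.hot:
            return self.hot[key]       # fast path
        value = self.cold.pop(key)     # slow path
        self.put(key, value)           # promote recently used data
        return value

store = TwoTierStore()
for i in range(5):
    store.put(f"row-{i}", i * 10)
print(store.get("row-2"))              # fetched from the slow tier and promoted
```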

Persistent memory is particularly useful for enterprises looking to do more with their data and affordably extract actionable insights to make quick decisions. The benefits of persistent memory are especially valuable for industries that are experiencing digital transformation, like financial services and retail, where real-time analytics provide tremendous value.

For financial services organizations, real-time analytics workloads could include real-time credit card fraud detection or low-latency financial trading. For online retail, real-time data analytics can speed decisions to adjust supply chain strategies when there is a run on certain products, while at the same time immediately generating new recommendations to customers to shape and guide their shopping experience.

Persistent memory can also expedite recommendations for the next three videos to watch on TikTok or YouTube, keeping consumers engaged for longer periods. In these scenarios, real-time analytics allows these organizations to interact with their end-users more instantaneously, improving customer experiences and enabling the business to achieve a better return on investment. While these real-time analytics applications would be possible without persistent memory, it would be costly to maintain the same level of performance and latency.

For those looking for off-the-shelf solutions without application changes, the easiest way to adopt persistent memory is to use it in Memory Mode, which provides large memory capacity more affordably, with performance close to that of DRAM depending on the workload. In Memory Mode, the CPU memory controller sees all of the persistent memory capacity as volatile system memory (without persistence), while using the DRAM as a cache.

Many database and analytics software or appliance vendors, including those behind SAP HANA, Oracle Exadata, Aerospike, Kx, Redis, and Apache Spark, have now released new versions of their software that use the full capabilities persistent memory offers: both application-aware placement of data and persistence in memory. A variety of applications, along with persistent-memory-aware operating systems and hypervisors, are available in the ecosystem and can be deployed through a customer's preferred server vendor.

A new class of software products is also emerging in the market that removes the need to modify individual applications to get the full capabilities of persistent memory. Products such as Formulus Black FORSA, MemVerge Memory Machine, and NetApp MAX Data take genuinely groundbreaking approaches to the new tiered data paradigm, bringing the value of persistent memory while minimizing application-specific enablement.

For those who want full customization, software developers also have the option to use the industry-standard non-volatile memory programming model with the help of open-source libraries such as PMDK, the Persistent Memory Development Kit.

We are living in a time unlike any other. Never has it been more important to analyze data in real-time. With the help of persistent memory, businesses can now make more strategic decisions, better support remote workforces and improve end-user experiences.

In just the last six months, I've been impressed by the use cases and innovation I've seen our customers implement with persistent memory. Looking to the future, I am excited to watch persistent memory, and the ecosystem that has evolved to support it, continue to bring ideas, dreams and concepts to life, making the impossible possible.


Expanding the Universe of Haptics | by Lofelt | Aug, 2020 – Medium

Haptics is hard. Transmitting touch over long distances using electrical signals rendered by a computer interface, or simulating the tactile materiality of a virtual world, presents a wide range of challenges. The sense of touch is complicated and variegated; the mechanisms by which it functions (the foundations of our tactile reality) have historically been understudied in psychology, physiology, and neuroscience, in comparison to studies of seeing and hearing. As a sense distributed throughout the body, composed of a range of different submodalities (including movement, pressure, temperature, and pain), it is difficult and perhaps impossible to design a machine that comprehensively stimulates the sense of touch. The hurdles for haptics are not strictly technological and scientific: it is not just a question of designing haptics applications that work well in the lab, or of gaining a deeper and more holistic understanding of how touch operates. As any haptician knows, and as the diverse array of haptics devices already developed and abandoned attests, these are certainly immense obstacles that at times appear insurmountable.

But there is also a cultural challenge involved in bringing haptic devices out of the design lab in ways that are meaningful to their imagined users. There's a tendency among hapticians to assume that their own enthusiasm for the technology extends to the broader public. It's an understandable impulse, given that those drawn to the field are often motivated by a sort of humanistic desire to bring touch to computing: if touch is the most fundamentally human of our senses, and if it is increasingly absent from interactions in a society dependent on computer-mediated communication, then restoring touch entails a restoration of the human. However, not everyone shares this belief. For potential users who may be skeptical or disinterested, how does one explain the value that haptics can add to experiences with digital media?

For a long time, the haptics industry has answered this problem by overpromising and overhyping its forthcoming products, aided and abetted by a popular press that frequently covers new haptics discoveries in a fervent, celebratory, and uncritical tone, with tech companies and the popular technology press situated in a synergistic relationship. Articles (and especially article headlines) that sensationalize the potential of haptics technologies make for alluring and evergreen clickbait. Such stories announcing the impending arrival of new devices mobilize and rehearse what I have called the dream of haptics: a vision of fully realized haptic devices that provide a type of photorealism for touch, restoring the missing tactile dimension to our interactions with computers. The problem with this dream of haptics is that it is conjured around virtually every instantiation of the technology, no matter how minor. Each instantiation, accordingly, carries the burden of realizing this dream and is destined to fall short of these promises. This has been a characteristic of haptics marketing and journalistic writing about haptics since at least the late 1990s.

It's easy to understand how this happens: of course those drawn to work in the field of haptics are enthusiastic about it; haptics tends to get overlooked in favor of a focus on graphics or audio.

Interestingly, psychologists who worked on touch have consistently lamented, going back at least to the 1950s, that research on the psychology of touch is overlooked in favor of research on the psychology of seeing and hearing: touch, as the lament goes, is considered the neglected sense in everything ranging from psychology to aesthetics to engineering.

In the push to make haptics research legible and compelling, hapticians (both haptics marketers and haptics engineers) fall back on a set of well-rehearsed tropes about the technology and its immense impact. Journalists, too, revert to a familiar and comfortable framing of haptics as both imminent and transformative; if they do offer qualifiers about the feasibility of these complex devices coming to market, they are often buried toward the end of the article, leaving readers with the impression that they can expect to see these advanced haptics applications distributed ubiquitously in the near future.

For one very recent example, check out the headlines for and reporting on the HaptiRead midair Braille haptic system: these articles contain little mention of the fact that the application is still in the very early stages of research; similarly, reporting on haptics patents often conflates the patenting of a technology with its impending arrival.

So it's clear that there's a future orientation to the framing of haptics: a thing you'll someday have that adds a fantastical dimension to your experience with existing technologies.

BoingBoing's Mark Frauenfelder, commenting on the haptics in the Wii Remote in an early review: "It feels like magic. I love it."

The challenge is to foster an appreciation for and understanding of current-generation haptics applications, while explaining how they have and have not lived up to the dream of haptics. Those working in the field would do well to acknowledge and embrace the field's historicity: confront the dashed hopes and failed promises, explain why haptics hasn't lived up to its lofty and transformative aspirations, and then finally, as a positive step, show how haptics already has changed the way we interact with digital devices.

Skepticism about the technology's capacity to meet the lofty promises made around it has been present since haptics began to cohere as an industry back in the 1990s. In an article published on the eve of Immersion Corporation's 1999 IPO, one market analyst assessed the state of haptics: "It's still very much a nascent technology [...] It hasn't lived up to its promise. It could become a part of every PC, or it could just fade away. I'm not seeing anything yet that says, wow, you've really got to go out and buy it."

Logitech's senior vice president, in that same piece, commenting on vibration-enabled computer mice: "we believe that mice using FEELit technology will revolutionize the way people interact with their computers."

Evaluating the analyst's concerns over 20 years later, it certainly seems correct that haptics still hasn't lived up to its promise. But haptics has become a part, not of every PC, but of every smartphone, wearable, and game console: almost without notice, we have become accustomed to decoding the variegated patterns of vibrations constantly emanating from these devices. Gradually, we acclimate to this language of vibrations, acquiring a device-specific and platform-specific tactile literacy: an ability to read the messages being sent to us through our skin by a range of digital devices. One does not have to be a trained haptician to notice, for example, that the vibration pattern reminding a Fitbit wearer to take 250 steps that hour (two quick jolts) is perceptibly distinct from the celebratory burst of vibrations that rewards the wearer for hitting their 10,000-step daily target (a few short bursts of vibration, followed by a couple of longer ones: wrist fireworks that correspond with the display on the screen). Similarly, the vibration pattern indicating an incoming call differs from the pattern used to announce the arrival of a text message, which may both be distinct from the vibratory message alerting the user that the device's battery is running low.

The problem with such vibratory languages, however, is that they lack stability and cohesion. Different actuators found in different devices produce different sensations; phone operating systems and specific applications use vibration alerts inconsistently; and the impulse to overuse vibration notifications could be leading to what Ben Lovejoy called haptic overload, with app developers competing for bandwidth on the haptic channel. So what might otherwise be trumpeted as a major victory for haptics gets muted somewhat by the lack of a shared tactile vocabulary across devices and applications. There are a host of reasons for this fragmentation: a lack of standardized design tools, competition between companies each pushing their own vocabularies, intellectual property concerns, and so on.

However, the important consequence, culturally, concerns the inability of haptics to cohere around a unifying language of vibrations. We may each informally acquire an understanding of the way an individual device communicates to us through haptics, but that literacy becomes obsolete when we move to a new device. When the Apple Watch was announced, for instance, some suggested that the Taptic Engine would usher in a new era of tactile communication, with vibration communication stealthily transforming the way we interact with our devices. Here again, we find another failed promise of haptics: if the Apple Watch provided a new language of touch, where was the dictionary for this language? How did Apple go about training users to read by touch? And did it educate designers on how to best write for this new language of feel?

To their credit, Immersion Corporation's efforts on this front were far more systematic: its proposed Instinctive Alerts framework was ambitious in providing over 40 distinct vibration patterns, each attached to specific messages, capable of running on any single-actuator device. Apple's Core Haptics provides an impressively detailed set of guidelines for developers, but the moment of excitement around smartwatch and smartphone haptics appears to have been eclipsed somewhat by the renewed hope for VR haptics that followed the commercial release of the Oculus Rift and HTC Vive.

If we broaden this conversation outward to encompass video game controller rumble, which has been a standard feature of console controllers since 1997, we see a similar problem: video game players have acquired a variety of languages of touch, specific to individual games and individual game genres, but game design lacks robust training programs in haptic effects programming, so the haptic vocabularies of games remain haphazardly assembled and fragmented. And vibration feedback in games, too, has been continually framed since its inception as an imperfect instantiation of haptics that will inevitably be overcome with the rise of some new and improved feedback mechanism (it is only recently, first with HD Rumble in the Switch and now with the impending release of Sony's DualSense controller, that this promise is being fulfilled).

That strategic distancing of the current generation from the next generation might just be a common marketing maneuver, but it perpetuates the always-on-the-horizon dream of haptics: a continual feeling that the dream is bound to be deferred.

I would put this challenge to hapticians: instead of speaking in the language of "haptics will," begin thinking in terms of "haptics has." Rather than projecting the impact of haptics based on the assumed widespread adoption of fantastical and expensive pieces of hardware, try to convey a sense of what is possible given existing technologies. This is not to suggest that the field should run from dreaming big (many of the just-on-the-horizon devices represent significant steps forward for haptics in terms of hardware sophistication and use-case diversity, the result of creatively conceived and carefully executed research programs) but instead to argue that the field would be better served by not continually pinning its hopes on the uptake of unavoidably expensive new hardware. Moreover, being realistic about the capabilities of current-generation devices and circumspect in the forecasted impact of future devices would go a long way toward restoring some of the field's strained credibility. And finally, haptics research has primarily operated with a top-down model, driven by those in industry and the academy, rather than being pushed forward by the interests and agitations of those working outside of professional contexts.

Hapticians would do well to emulate Kyle Machulis's focus on community engagement with his open-source Game Haptics Router sex toy control software, which emphasizes adaptability to the various uses different communities might want to put it to. Building haptics applications up from a foundation of community interest would help ensure that the hype around haptics is more than just a product of industry enthusiasm echoing through popular-press click-chasing (Dave Birnbaum's INIT podcast is a significant step toward growing this sort of public-facing intellectual culture around haptics).

With much of the world, and the US in particular, still struggling to adapt to the physical distancing protocols required to slow the wildfire spread of COVID, we might also consider the use cases that would be the most immediately beneficial to people, in terms of restoring a lost feeling of connection to those in their affective networks. Broadening our understanding of touch technologies to include this cultural dimension will increase the kit of available tools to help meet the difficult challenges facing those wishing to expand the universe of haptics.

About the author: David Parisi researches the cultural and historical aspects of touch technology. His book, Archaeologies of Touch: Interfacing with Haptics from Electricity to Computing, provides the first comprehensive history of haptic interfaces, offering vital insights on the development and future trajectory of technologized touching. A sample of David's book is available here.

Twitter: @dave_parisi


UX Designer Salary: 5 Important Things to Know – Dice Insights

How much can UX designers earn? That's a crucial question if you're thinking of getting into UX (i.e., user experience design). An excellent UX designer can help elevate a product to best-in-class, but they'll also need to convince executives and managers that their skills are truly invaluable.

First, some additional exposition: although you sometimes see the terms UI and UX used interchangeably, they're actually very different. In a software context, UI is what the user sees on a screen: the icons, text, colors, backgrounds, and any moving elements (such as animations). UX, on the other hand, is how the user moves through all those UI elements. UX designers spend a lot of their time thinking about how users flow through a product, as well as how much friction they experience while trying to reach a particular goal.

Job-wise, UX designers don't work in a vacuum. They spend a lot of time talking to team members (such as engineers and UI designers), and they often carve out time in their schedule to engage with the product's current or future users. As a result of these interactions, they create prototypes that they submit for review and notes. In other words, this is a position that requires soft skills such as empathy and communication.

As for technical UX designer skills, here's what pops up most often in job postings for the position, according to Burning Glass, which collects and analyzes millions of job postings from across the country. As you might expect, UI designers use virtually the same tools and programming languages:

(Burning Glass defines distinguishing skills as advanced skills that truly differentiate candidates applying for various roles. Defining skills are the skills needed for day-to-day tasks in many roles, while necessary skills are the lowest barrier to entry.)

With zero to two years of experience, a UX designer's salary can range anywhere from $66,000 to $102,000, according to Burning Glass. That's an exceptionally wide range, determined by several factors, including (but certainly not limited to) your portfolio and the projects you've worked on, the company you're working for, and whether you have other skills (such as programming) that can make you a more valuable, cross-disciplinary employee.

The median salary for a UX designer is $98,485.

The most recent Dice Salary Report pegs the average technology salary at $94,000 in 2019, a 1.3 percent increase from 2018. Software developers at top companies such as Apple and Google can easily earn more than $150,000 per year, once you factor in bonuses, stock options, and other kinds of compensation.

UX designers with lots of experience and a solid portfolio can land a salary competitive with those numbers. For example, Burning Glass suggests that UX designers in the 90th percentile for compensation can make $128,115 per year, on average. Those kinds of numbers, though, generally come after a decade or more of working.

According to Burning Glass, the average time to fill an open UX designer position is 36 days, roughly similar to many other technology jobs. It's a lengthy enough timespan to suggest a pretty high level of demand for UX designers; for jobs with lots of candidates and relatively few openings, time-to-fill is often much shorter.

Burning Glass also predicts that UX designer jobs will grow 14.9 percent over the next 10 years, so it's not a dying profession by any stretch of the imagination. After all, it's not like businesses are going to stop designing, building, and releasing products, and as long as that cycle continues, they'll need UX designers to create a seamless experience for users.


Mastercard : Accelerate Ignites Next Generation of Fintech Disruptors and Partners to Build the Future of Commerce – Marketscreener.com

11 companies join the Start Path startup engagement program; nearly 50 new deals signed to Engage network to help customers make growth ambitions a reality

Mastercard has announced the expansion of its Accelerate fintech portfolio, adding dynamic entrepreneurs to its award-winning startup engagement program Start Path and more technology partners to its Engage network, providing access to expert engineers and specialists that can help customers deploy new services quickly and efficiently. Mastercard helps emerging brands build and scale their businesses, supporting their programs today and providing the resources they will need in the years to come. Recognizing the important role fintechs play in the world's rapid digital transformation, Mastercard is continuously diversifying its business by diversifying its perspective, looking to new partners and new ways to build on its core competency as a payments network.

"With the dramatic shift towards digital payments, the rise of open banking and the growth of blockchain and cryptocurrencies, there's never been a more exciting time to be an innovator in fintech, said Ken Moore, Executive Vice President and Head of Mastercard Labs. Mastercard is thrilled to partner with some of the worlds most innovative startups to transform the future of commerce."

Mastercard Start Path

Mastercard is welcoming 11 new startups to its Start Path program, offering a powerful network, innovative technology and deep expertise to help them grow their businesses and scale sustainably. Since 2014, Mastercard has invited more than 230 later-stage startups worldwide to participate in its six-month virtual program, providing technical guidance, operational support and commercial engagements within the Mastercard ecosystem.

Start Path evaluates more than 1,500 applications each year and selects approximately 40 startups that offer the most promising technologies and demonstrate a readiness to scale. Startups in this growing network have gone on to raise $2.7 billion in post-program capital and collaborate with Mastercard, major banks, merchants and other high-profile organizations.

Mastercard Engage

Consumer expectations are evolving more rapidly than ever, and banks, financial institutions and digital players are looking for even more agility in bringing new solutions to market. Mastercard's global reach and local roots afford it the ability to foster a strong, carefully curated network of technology partners that are qualified based on Mastercard standards and industry requirements. Mastercard is expanding its Engage program to support even more solutions and deliver better learning and promotional opportunities for partners.

Engage identifies, serves and promotes a network of more than 170 strategic partners who comply with Mastercard certification and rules to build and deploy new solutions on behalf of fintechs, banks and merchants. In the first quarter of 2020 alone, Mastercard Engage signed almost 50 new deals with partners such as Antelop, Giesecke+Devrient Mobile Security, MeaWallet, Netcetera, Payair and Thales across Europe, Latin America, Asia Pacific, and the Middle East and Africa.

Strengthening the capabilities of technology partners is even more critical during times of crisis, so fintechs can innovate faster and better serve consumers and businesses. Engage has enabled more than 200 million cards to support financial growth and market entry for fintechs around the world. Additional resources are now available for partners through the new Mastercard Engage website.

Notes to Editors

The following companies are joining the Start Path network to grow and scale their blockchain and open banking solutions (Bit Capital), financial inclusion platforms (Hello Tractor), carbon footprint measurement products (Doconomy) and more:

About Mastercard (NYSE:MA) Mastercard is a global technology company in the payments industry. Our mission is to connect and power an inclusive, digital economy that benefits everyone, everywhere by making transactions safe, simple, smart and accessible. Using secure data and networks, partnerships and passion, our innovations and solutions help individuals, financial institutions, governments and businesses realize their greatest potential. Our decency quotient, or DQ, drives our culture and everything we do inside and outside of our company. With connections across more than 210 countries and territories, we are building a sustainable world that unlocks priceless possibilities for all. http://www.mastercard.com.

View source version on businesswire.com: https://www.businesswire.com/news/home/20200812005202/en/


IBM creates an open source tool to simplify API documentation – TechRepublic

OpenAPI Comment Parser for developers aims to make good API documentation easy to write and read.


APIs are essential to programming, but they can get complicated. IBM has launched a new tool for developers that should make writing API documentation a bit easier: The OpenAPI Comment Parser.

"Developers need instructions on how to use your API and they need a way to try it out. Good documentation handles both," IBM developer advocate Nicholas Bourdakos said in a blog post about the new developer tool.

OpenAPI is a specification that was built as a language-agnostic interface for RESTful APIs, "which allows both humans and computers to discover and understand the capabilities of the service without access to source code, documentation, or through network traffic inspection," said API tool maintainer Swagger.

The Comment Parser is designed to work around a problem that Bourdakos said is common for developers working with the OpenAPI specification: OpenAPI spec files have to be built and maintained manually, which means they often get forgotten and become bloated and useless.

SEE: Quick Glossary: DevOps (TechRepublic Premium)

"The goal of OpenAPI Comment Parser is to give developers a way to generate this OpenAPI spec from comments inline with their code," Bourdakos said.

The OpenAPI spec under the Comment Parser lives inside the code, broken up into smaller pieces that can be more easily updated because the need to go searching through a giant spec file is eliminated. "On average, this new format has been shown to reduce the amount of spec needed to be written by 50%," Bourdakos said.

Bourdakos gives a demonstration of how the OpenAPI Comment Parser works in a video, where he uses Docusaurus along with the Comment Parser to make an API documentation site. The graphical layout of the site pulls OpenAPI spec comments from his code and lays it out in an easy-to-see fashion using markdown.

The Docusaurus interface makes comments easy to see, search, and review, and because they're written in-line with the code, thanks to the Parser, a developer who needs to make changes to a particular section can simply update that comment.

The Comment Parser, Bourdakos said, is designed to make developers' lives easier by eliminating superfluous documentation code. Not only does this save time, but it also makes the code itself more manageable, he said.

SEE: Top 5 programming languages for systems admins to learn (free PDF) (TechRepublic)

Documentation generation by the Comment Parser can also be used to test the API, so developers can spend "less time waiting for a frontend to be built or having to rely on other tools in order to test drive their API," Bourdakos said.

IBM's OpenAPI Comment Parser was built for use with Node.js, but its command line interface will work with any language that uses a similar comment style. IBM will be adding support for additional "popular languages" in the future.

In the meantime, Devs that use Node.js or a language with a similar commenting format can now try the OpenAPI Comment Parser.



1Password is coming to Linux – ZDNet

Maybe you can remember dozens of complex passwords; I can't. That's why password managers, such as 1Password, Keeper, and LastPass, are so important. Now, AgileBits, 1Password's parent company, has finally listened to customers who have been asking for a Linux version for a decade. At long last, the company announced, "1Password is coming to Linux."

Don't get your credit cards out yet though. True, the first development preview version of 1Password is out now. But it's not ready for prime-time yet. It's not a finished product. "For example, the app is currently read-only: there is no item editing, creation of vaults, or item organization."

So, if you want to test it, go for it. But it's in no way, shape, or form ready for a production system or even your home setup. The company suggests that, for now, its Linux customers use 1Password X in their browsers.

So, why not just use 1Password X? Because 1Password will handle far more than just web passwords. You will also be able to use it with FTP, SSH, and SMB network passwords.

On the backend, 1Password runs on Rust, a secure systems programming language that has made a lot of waves in the Linux community. For end-to-end encryption, it uses the open-source ring crypto library, whose code springs from BoringSSL, Google's fork of OpenSSL. The application interface is being written with the React JavaScript library.

If you work on an open-source team which needs a password manager, the company will give you, and everyone on your team, a free account. To get it, simply open a pull request against its 1Password for Open Source Projects repo.

The program, when completed, will come with the following features:

Simple and secure installs using apt and dnf package managers

Automatic Dark Mode selection based on your GTK theme

Tiling window manager support and descriptive window titles

Unlock with your Linux user account, including biometrics

System tray icon for staying unlocked while closed

X11 clipboard integration and clearing

Keyboard shortcuts

Data export

Unlock multiple accounts with different passwords

Create collections to organize data across accounts and vaults

All versions of 1Password work with your data files synced on 1Password's servers. The company claims it doesn't track users. But you can also save your passwords locally and sync your data file on a server on your own local area network or a Dropbox or iCloud account.

Want to check it out? Read the guide Get to know 1Password for Linux to get started. There are signed apt and rpm package repositories for Debian, Ubuntu, CentOS, Fedora, and Red Hat Enterprise Linux (RHEL). There's also an AppImage available for other distributions. 1Password intends to support all major desktop Linux distros.

After an initial 30-day free trial, a 1Password personal subscription costs $36 per year and comes with 1GB of personal storage. A five-user family subscription costs $60 annually. 1Password Business accounts add advanced access control, with activity logs and centrally managed security policies. These cost $96 per user per year, and include 5GB of document storage and a free linked family account for each user.



Python may be your safest bet for a career in coding – Gadgets Now

Whether you are looking to add a new programming language to your skillset, or to venture into coding, Python is today the safest bet.

"Python is a high-level language, but it's more like the English you speak and write. If you read a snippet of code in Python, it's easy to figure out the intention behind the code, what the algorithm is trying to do. This makes Python very easy to learn," says Nabarun Pal, an infrastructure engineer and one of the key organisers of PyCon India (Python Conference) 2020, due in October. Nabarun and Sayan Chowdhury, a Linux software engineer and PyCon chair, were our guests at the eleventh edition of Times Techies Webinars.
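
For instance, a short snippet like the one below reads almost like an English description of the task; it is a made-up example, not one from the talk.

```python
# Count the words in a sentence that are longer than four letters.
sentence = "python reads almost like plain english"
long_words = [word for word in sentence.split() if len(word) > 4]
print(f"{len(long_words)} long words: {long_words}")
```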

Python, which broke into the tech scene around the early 90s, is free and open source. But its current chartbuster status owes a lot to the community of developers and the vast collection of libraries (packages) that can be fitted into any problem you are trying to solve. "Python has a huge universe of libraries or packages. If you are building a web application, you have Django and Flask, suiting different purposes. Similarly, packages are available for desktop, infrastructure and mobile applications, and for data science, visualisation, research and machine learning. MicroPython and CircuitPython let you tinker with hardware," says Pal.

Pal and Chowdhury say this sets Python apart from most other languages, which are useful for specific purposes. R, for instance, is great for data science, but not for much else.

Python drives a range of activities in AI. For data science, some of the most powerful libraries are Pandas, Jupyter and NumPy, which are designed for heavy-duty tasks. There are also packages for visualisation, besides machine learning packages like Keras, TensorFlow and PyTorch.
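
As a small illustration of why these packages suit heavy-duty work, NumPy expresses whole-array math as single vectorized calls that run in optimized native code rather than a Python-level loop; the data below is a random placeholder.

```python
# Vectorized arithmetic with NumPy: one expression operates on a million values.
import numpy as np

readings = np.random.rand(1_000_000)                 # placeholder "sensor" data
normalized = (readings - readings.mean()) / readings.std()
print(normalized.shape, round(float(normalized.mean()), 6))
```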

"If you are looking to build something faster, Python is the ideal choice. Instagram is a famous example. From image processing to infrastructure to ML, Python can practically do everything for you," says Chowdhury. Companies that use a lot of Python include NASA, Uber, Facebook, YouTube, PayPal, Reddit and Pinterest.

Academia, too, uses a lot of Python. Popular packages used for research, say, for molecular analysis, include MDAnalysis, Astropy and SunPy.
