Buried deep in the ice is the GitHub code vault, humanity’s safeguard against devastation – ABC News

Svalbard is a remote, frozen archipelago midway between Norway and the North Pole.

Polar bears outnumber humans, yet it arguably holds the world's biggest insurance policy against global technological devastation.

And we just took out a fresh policy.

For the first time ever, open-source code that forms the basis of most of our computerised devices has been archived in a vault that should protect it for 1,000 years.

If you're thinking that an Arctic code vault sounds like a high-tech library crossed with a Bond villain's lair, you're not far off.

Svalbard is remote, home to the world's northernmost town, and is protected by the century-old International Svalbard Treaty.

Crucially, it's already home to the successful Global Seed Vault, which saves seeds in case entire species ever get wiped out by disease or climate change.

Just down the road, the GitHub Archive Program found space in a decommissioned coal mine run by the Arctic World Archive, which already houses and preserves historical and cultural data from several countries.

All put together, the barren archipelago makes the perfect place to seal something you want to protect in a steel vault 250 metres under the permafrost.

The Arctic Code Vault aims to serve as a time capsule for the future, saving huge amounts of open-source computer code alongside a range of data including a record of Australia's biodiversity and examples of culturally significant works.

If you were to make your way into the mine and crack the large steel vault, you'd find 186 film reels inside, each a kilometre long, covered in tiny dots.

It's not just miniaturised text, though. To squeeze in as much as possible, the code is stored as tiny QR codes that pack the information in densely.
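For a rough sense of what packing text into a QR code looks like, here is a minimal Python sketch. It assumes the third-party qrcode package is installed and only illustrates the general idea; the archive's actual film frames use a much denser, purpose-built encoding.

# Minimal illustration of encoding a small piece of source code as a QR code.
# Assumes the third-party "qrcode" package (pip install qrcode[pil]).
# This only demonstrates the general idea; the Arctic archive's film frames
# use a much denser, purpose-built encoding.
import qrcode

snippet = "print('hello from 2020')"

# One standard QR code (version 40, low error correction) tops out around
# 3 KB of data, so a real file would be split across many codes.
img = qrcode.make(snippet)
img.save("snippet_qr.png")
print("wrote snippet_qr.png")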

You run into open-source code every day without even knowing it. In fact, you're probably using some to read this article right now.

"Open-source" means the code is shared freely between developers around the world and can be used for any application.

That means a little piece of coding could end up in anything from your TV to a Mars mission.

The concept fosters collaborative software engineering around the globe.

It's incredibly important, and it spans a range of complexity from huge algorithms that mine Bitcoin to single lines of code that determine whether a number is odd or even.
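To make that last example concrete, here is the sort of one-line check the article has in mind, written in Python (the function name is just for illustration):

def is_odd(n: int) -> bool:
    # The kind of tiny, freely reusable snippet that gets shared as open source.
    return n % 2 != 0

print(is_odd(7))   # True
print(is_odd(42))  # False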

Archiving all of that work means it won't have to be re-invented if it is ever lost, saving time and money.

The archive reels hold a combined 21 terabytes of code. That may not seem like much if you have a hard drive at home that holds 2 terabytes.

But we're not storing your photos or movies here; each character in a line of code takes up a tiny bit of space.

If someone who types at about 60 words a minute sat down and tried to fill up all that space, it would take 111,300 years, and that's if they didn't get tired or need any breaks.
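As a rough check of that figure, here is the back-of-the-envelope arithmetic in Python, under assumptions that are mine rather than the article's (one byte per character, about six characters per typed word including the space, and no breaks):

# Back-of-the-envelope check of the article's typing estimate. Assumptions
# (not from the article): 1 character = 1 byte, 21 TB = 21 * 10**12 bytes,
# ~6 characters per word including the trailing space, nonstop typing.
ARCHIVE_BYTES = 21 * 10**12      # 21 terabytes of code
CHARS_PER_WORD = 6               # rough average, counting the space
WORDS_PER_MINUTE = 60

chars_per_minute = WORDS_PER_MINUTE * CHARS_PER_WORD   # 360 characters/min
minutes = ARCHIVE_BYTES / chars_per_minute
years = minutes / (60 * 24 * 365.25)
print(f"{years:,.0f} years of nonstop typing")          # roughly 111,000 years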

If you're making an archive that's going to last, you've got to make sure it isn't going to degrade over time.

While it might seem intuitive to store the information on something like a Blu-ray disc or on hard drives, these are notorious for breaking down.

They're designed to be convenient, not to be heirlooms you pass down for generations.

"You might have seen this in the past ... years after you touched it last, you try to boot it up again and it wouldn't work," says GitHub's VP of strategic programs, Thomas Dhomke.

"The (information) bits have been lost."

Things that survive the ravages of time tend to be physical. Think papyrus scrolls, Egyptian carvings or Assyrian tablets.

In fact, there's a good chance that people of a distant future will know more about ancient people than they will about us.

When it comes to making physical copies, your office A4 wouldn't cut it, so they used a refined version of century-old darkroom photography technology to create the archival film reels.

Each film is made of polyester coated in very stable silver halide crystals that allow the information to be packed in tightly.

The film has a 500-year life span, but tests that simulate aging suggest it will last twice as long.

Storing it in the Arctic permafrost on Svalbard gives you a host of added benefits.

The cold prevents any degradation caused by heat; it's locked deep in a mountain, protected from damaging UV rays and safe from rising sea levels; and it's remote enough that it's not likely to be lost to looters from a dystopian future.

Despite global warming, and a previous event at the seed bank where some of the permafrost melted, it's believed the archive is buried deep enough that the permafrost should survive.

Just in case, they're not stopping there.

The GitHub Archive Program is working with partners to figure out a way to store all public repositories for a whopping 10,000 years.

Called Project Silica, this effort aims to write archives into the molecular structure of quartz glass platters with an incredibly precise laser that pulses a quadrillion times a second.

That's a 1 followed by 15 zeros: 1,000,000,000,000,000.

You might be wondering: doesn't the internet already save all of our information in the cloud?

Yes, but it's not as safe as you might think.

There are actually three levels of archiving, known as hot, warm and cold.

The hot layer is made up of online repositories like GitHub, which allows users to upload their code for anyone to use.

This is copied to servers around the world and is readily accessible to anyone with an internet connection.

While access is quick and easy, if someone removes their code from the hot layer, it is no longer available. That doesn't make for a very reliable archive.

The warm layer is run by the Internet Archive, which operates the Wayback Machine.

It crawls the web, regularly takes snapshots of sites and keeps them on its servers. Anyone can access them, but you have to do a bit of digging.

For example, you can still find the ABC webpage from January 3, 1997, which has the story of then Victorian Premier Jeff Kennett warning against Australia becoming a republic.
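For readers who want to try that kind of digging themselves, the Internet Archive exposes a public "availability" API that returns the snapshot closest to a requested date. Below is a minimal sketch, assuming the requests package; the target URL and date are just the example from above, and the exact snapshot returned may differ.

# Query the Internet Archive's public "availability" endpoint for the snapshot
# closest to a given date. The endpoint and JSON fields are as documented by
# the Archive; the URL below is just an example. Requires the "requests" package.
import requests

params = {"url": "abc.net.au", "timestamp": "19970103"}  # YYYYMMDD
resp = requests.get("https://archive.org/wayback/available", params=params, timeout=10)
resp.raise_for_status()

closest = resp.json().get("archived_snapshots", {}).get("closest")
if closest:
    print("Nearest snapshot:", closest["url"], "captured", closest["timestamp"])
else:
    print("No snapshot found for that date.")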

The Internet Archive isn't a perfect system: it takes regular snapshots, but anything that happened in between can be lost.

Both the hot and warm layers work well together to give a fair idea of what the internet might have held at any given time, but they both suffer from one critical weakness: they are made up of electronics.

The internet is essentially millions of interconnected computers and huge data storage banks that your device can access.

If an event were to disrupt or destroy those computers, the information they hold, and therefore the internet itself, could be lost forever.

The Arctic vault represents the cold layer of archiving.

It's an incomplete snapshot taken at regular intervals (the plan is to add to the archive every five years), but one that should survive the majority of foreseeable calamities.

Some of the potential disasters are academic, but some we've seen before.

In early September 1859, the sun belched, and the world's very rudimentary electronics were fried.

It's known as the Carrington Event, and as the matter ejected from the sun headed towards Earth, the lights of the auroras were seen as far north as Queensland and all the way down to the Caribbean.

When it hit, the largest geomagnetic storm ever recorded caused sparks to fly off telegraph wires, setting fire to their poles. Some operators reported being able to send messages even though their power supplies were disconnected.

If that were to happen today, most of our electronics, both here and in space, would be destroyed.

And it's not really a matter of if, but when.

It also doesn't have to be a huge astronomical event that causes us to lose many generations' worth of information.

If a pandemic or economic downturn was severe enough, we might be unable to maintain or power the computers that make up the internet.

If you consider how technology has changed in just the last few decades (the rise of the internet, the increased use of mobile phones), then it's easy to understand how people living a hundred or a thousand years from now are likely to have technology that's wildly different from ours.

The archive is part of our generation's legacy.

As Mr Dohmke says:

"We want to preserve that knowledge and enable future generations to learn about our time, in the same way you can learn (about the past) in a library or a museum."

Australian data has found a home in the archive, too, including the Atlas of Living Australia that details our country's plant and animal biodiversity, and machine learning models from Geoscience Australia that are used to understand bushfires and climate change.

There's no saying who might want to use the archive in the future, so archivists had to come up with a solution both for those who don't speak English and for those who might not understand our coding languages.

The films start with a guide to reading the archive, since there's a decent chance that anyone finding them in the future may not know how to interpret the QR codes.

Even more importantly, that's followed by a document called the Tech Tree, which details software development, programming languages and computer programming in general.

Crucially, it's all readable by eye.

Anyone wanting to read the archives might need to have at least a basic understanding of creating a magnifying lens (something humans achieved about 1,000 years ago) but after that the archive could all be translated using a pen and paper.

The guides aren't just in English, either. Like a modern-day Rosetta Stone, they are also written in Arabic, Spanish, Chinese, and Hindi, so that future historians have the best chance of deciphering the code.

"It takes time, obviously ... but it doesn't need any special machinery," Mr Dhomke says.

"Even if in 1,000 years something dramatic has happened that has thrown us back to the Stone Age, or if extraterrestrials or aliens are coming to the archive, we hope they will all understand what's on those film reels."


NextCorps and SecondMuse Open Application Period for Programs that Help Climate Technology Startups Accelerate Hardware Manufacturing – GlobeNewswire

NEW YORK, Aug. 12, 2020 (GLOBE NEWSWIRE) -- NextCorps and SecondMuse announced that they are accepting applications from startups with innovative climate tech hardware prototypes for a spot in their Cohort 3 manufacturing programs: Hardware Scaleup, which NextCorps manages in conjunction with REV: Ithaca Startup Works in upstate New York, and M-Corps, which SecondMuse manages in New York City. The programs are designed to help support climate tech innovation from prototype to production; companies are encouraged to attend informational webinars offered through August 31 and submit their application by September 8, 2020.

The Cohort 3 manufacturing programs, which will kick off in October, provide entrepreneurs with a unique blend of support that helps accelerate manufacturing readiness and quickly move prototypes into volume production. This includes access to technical experts and mentors, curated manufacturing tours, funding opportunities, and a supportive community of peers who are on the same journey.

"Manufacturing is hard. These programs support climate tech startups to get to market faster by reducing the headaches associated with manufacturing," said Shelby Thompson, senior community manager, SecondMuse. "They also help advance New York State's clean energy and greenhouse gas reduction goals by making available new technologies that can have a significant impact."

"Scaling and commercializing new technologies requires strong collaboration and partnerships," added Mike Riedlinger, managing director, technology commercialization, NextCorps. "Our programs are structured to give entrepreneurs access to the connections that can help them successfully move their technologies into volume manufacturing."

Partners in the two programs include LaunchNY, New Lab, Urban Future Lab, Maiic, CEBIP, Partsimony, Cornell University, REV: Ithaca Startup Works, RIT Golisano Institute for Sustainability, NIST Manufacturing Extension Partnership, and many contract manufacturers, industrial organizations, industry leaders, and funders across the state.

Building on Success
The announcement comes after the New York State Energy Research and Development Authority (NYSERDA) renewed its support of NextCorps and SecondMuse earlier this year to advance the scale-up of innovation to meet Governor Cuomo's nation-leading Climate and Clean Energy goals, as outlined in the Climate Leadership and Community Protection Act (CLCPA). In their first two years of programming, NextCorps and SecondMuse have supported 33 startups. Despite disruptions related to COVID-19, companies including Cellec, Ecolectro, Skyven, Southern Tier Technologies, Switched Source, Enertiv, Pvilion, Actasys, Unique Electric Solutions, and Tarform continue to grow. Collectively these companies have raised an additional $28 million in funding during their time in the program while generating approximately $10 million in sales revenue. The assistance has had dramatic impacts on success, demonstrating the value of both manufacturing readiness and access to global and domestic manufacturers in New York State.

"The beneficial connections we've made through the program have changed our manufacturing strategy," said Rhonda Staudt, founder, Combined Energies, LLC, a startup participant. "It's opened the door to finding a high-quality contract manufacturer that can build our boards, so we don't have to do it ourselves."

"Going through the Manufacturing Readiness Level (MRL) assessment helped us truly understand where we are within our manufacturing processes, as well as the steps we need to take to get to full production run," said JC Jung, cofounder, Tarform Motorcycles.

How to Apply
Cohort 3 started accepting applications on July 27, and the application period will be open through September 8, 2020. The selection criteria focus on the potential of an existing hardware prototype device that can be scaled up to mass production to meaningfully lower carbon emissions and support clean energy. To help companies assess whether the program is right for them, virtual information sessions will be held in August. The sessions cover topics such as what participants can expect from the program, the tools and resources that will be available, and how the programs can help push a company forward. Initial sessions were hosted on August 5 and 11, with additional sessions available on the following dates:

To apply to or learn more about the program, go to https://mcorps.paperform.co/.

About NextCorps
NextCorps provides a suite of services, including technology commercialization support for very early-stage opportunities, business incubation for high-growth potential startups, and growth services for manufacturing companies seeking to improve their top- and bottom-line performance. For more information, visit www.nextcorps.org. For more information on Hardware Scaleup, go to scaleup.nextcorps.org.

About SecondMuse
SecondMuse is an impact and innovation company that builds resilient economies by supporting entrepreneurs and the ecosystems around them. They do this by designing, developing, and implementing a mix of innovation programming and investing capital. From Singapore to San Francisco, SecondMuse programs define inspiring visions, build lasting businesses and unite people across the globe. Over the last decade, they've designed and implemented programs on 7 continents with 600+ organizations such as NASA, The World Bank, and Nike. To find out more about how SecondMuse is positively shaping the world, visit www.secondmuse.com. For more information on M-Corps, go to https://www.manufacturenewyork.com/.

For media inquiries, contact Shannon Wojcik at shannon@rkgcomms.com or shelby.thompson@secondmuse.com.


Persistent memory reshaping advanced analytics to improve customer experiences – IT World Canada

Written by Denis Gaudreault, country manager, Intel Canada

175 zettabytes. That is IDC's prediction for how much data will exist in the world by 2025. Millions of devices generate this data: everything from the cell phone in our pockets and PCs in our homes and offices, to computer systems and sensors integrated into our cars, to the factory floor at the industrial park leveraging IoT and automation. While many enterprises are unlocking the value of data by leveraging advanced analytics, others struggle to create value cost-effectively.

Case in point: Imagine the difference between going to your desk to get a piece of information, versus going to the library, versus driving from Toronto to Intel's campus in Oregon, or even travelling all the way to Mars to get this information. These distances illustrate the huge chasm in latency between memory and data storage in many of today's software and hardware architectures. As these datasets used for analytics continue to grow larger, the limits of DRAM memory capacity become more apparent.

Keeping hot data closer to the CPU has become increasingly difficult in these capacity-limited situations. For the past 15 years, software and hardware architects have had to make the painful tradeoff between putting all their data in storage (SSDs), which is slow relative to memory, and paying high prices for memory (DRAM). Over the years, it has become a given for architects to make this decision. So how can companies bridge the gap between SSDs and DRAM, while reducing the distance between where data is stored and where it is needed, so it is readily available for data analytics?

Persistent memory solves these problems by providing a new tier for hot data between DRAM and SSDs. This new tier allows an enterprise to deploy either two-tier memory applications or two-tier storage applications. While it is not a new concept to have tiers, this new persistent memory tier with combined memory and storage capability allows architects to match the right tool to the workload. The result is reduced wait times and more efficient use of compute resources, allowing companies to drive cost savings and massive performance increases that help them achieve business results, while at the same time maintaining more tools in the toolboxes that support their digital transformations. Enterprises will also benefit from innovations and discoveries from the software ecosystem as it evolves to support it.

Persistent memory is particularly useful for enterprises looking to do more with their data and affordably extract actionable insights to make quick decisions. The benefits of persistent memory are especially valuable for industries that are experiencing digital transformation, like financial services and retail, where real-time analytics provide tremendous value.

For financial services organizations, real-time analytics workloads could include real-time credit card fraud detection or low-latency financial trading. For online retail, real-time data analytics can speed decisions to adjust supply chain strategies when there is a run on certain products, while at the same time immediately generating new recommendations to customers to shape and guide their shopping experience.

Persistent memory can also expedite recommendations for the next three videos to watch on TikTok or YouTube, keeping consumers engaged for longer periods. In these scenarios, real-time analytics allows these organizations to interact with their end-users more instantaneously, improving customer experiences and enabling the business to achieve a better return on investment. While these real-time analytics applications would be possible without persistent memory, it would be costly to maintain the same level of performance and latency.

For those looking for off-the-shelf solutions without application changes, the easiest way to adopt persistent memory is to utilize it in Memory Mode to achieve large memory capacity more affordably with performance close to that of DRAM, depending on the workload. In Memory Mode, the CPU memory controller sees all of the persistent memory capacity as volatile system memory (without persistence), while using the DRAM as cache.

Many database and analytics software or appliance vendors, such as SAP HANA, Oracle Exadata, Aerospike, Kx, Redis, and Apache Spark, have released new versions of software that utilize the full capabilities of persistent memory: application-aware placement of data and persistence in memory. A variety of persistent-memory-aware applications, operating systems and hypervisors are available in the ecosystem and can be deployed by a customer's preferred server vendor.

A new class of software products is also emerging in the market that removes the need to modify individual applications to use the full capabilities of persistent memory. Software applications such as Formulus Black FORSA, Memverge Memory Machine and NetApp Maxdata are truly groundbreaking approaches to the new tiered data paradigm that bring the value of persistent memory while minimizing application-specific enabling.

For those who want full customization, software developers also have the option to utilize the industry-standard non-volatile memory programming model with the help of open-source libraries such as PMDK, the Persistent Memory Development Kit.
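PMDK itself is a C library with bindings for several languages, so rather than guess at its API, here is a conceptual Python stand-in for the load/store programming model it supports: an ordinary memory-mapped file updated with plain memory writes and then flushed. The file name and layout are invented for the example; with real persistent memory, PMDK maps DAX-backed files, replaces the msync-style flush with user-space cache-line flushes, and adds transactions so a crash cannot leave a structure half-updated.

# Conceptual stand-in for the persistent-memory programming model: data lives
# in a memory-mapped region, is updated with ordinary loads and stores, and is
# made durable with an explicit flush. This uses a regular file and mmap, not
# PMDK; the file name and 8-byte counter layout are invented for the example.
import mmap
import os

PATH = "counter.pool"   # hypothetical stand-in for a persistent memory pool
SIZE = 4096             # one page is plenty here

# Create and size the backing file on first run.
if not os.path.exists(PATH):
    with open(PATH, "wb") as f:
        f.truncate(SIZE)

with open(PATH, "r+b") as f:
    buf = mmap.mmap(f.fileno(), SIZE)                # map the file into memory
    counter = int.from_bytes(buf[:8], "little")      # read the value in place
    buf[:8] = (counter + 1).to_bytes(8, "little")    # update it with a memory write
    buf.flush()                                      # make the update durable
    buf.close()
    print("run number", counter + 1)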

We are living in a time unlike any other. Never has it been more important to analyze data in real-time. With the help of persistent memory, businesses can now make more strategic decisions, better support remote workforces and improve end-user experiences.

In just the last six months, I've been impressed by the use cases and innovation I've seen our customers implement with persistent memory. Looking to the future, I am excited to watch persistent memory, and the ecosystem that has evolved to support it, continue to bring ideas, dreams and concepts to life, making the impossible possible.


UX Designer Salary: 5 Important Things to Know – Dice Insights

How much can UX designers earn? That's a crucial question if you're thinking of getting into UX (i.e., user experience design). An excellent UX designer can help elevate a product to best-in-class, but they'll also need to convince executives and managers that their skills are truly invaluable.

First, some additional exposition: Although you sometimes see the terms UI and UX used interchangeably, they're actually very different. In a software context, UI is what the user sees on a screen: the icons, text, colors, backgrounds, and any moving elements (such as animations). UX, on the other hand, is how the user moves through all those UI elements. UX designers spend a lot of their time thinking about how users flow through a product, as well as how much friction they experience while trying to reach a particular goal.

Job-wise, UX designers don't work in a vacuum. They spend a lot of time talking to team members (such as engineers and UI designers), and they often carve out time in their schedule to engage with the product's current or future users. As a result of these interactions, they create prototypes that they submit for review and notes. In other words, this is a position that requires soft skills such as empathy and communication.

As for technical UX designer skills, here's what pops up most often in job postings for the position, according to Burning Glass, which collects and analyzes millions of job postings from across the country. As you might expect, UI designers use virtually the same tools and programming languages:

(Burning Glass defines "distinguishing skills" as advanced skills that truly differentiate candidates applying for various roles. "Defining skills" are the skills needed for day-to-day tasks in many roles, while "necessary skills" are the lowest barrier to entry.)

With zero to two years of experience, a UX designer's salary can range anywhere from $66,000 to $102,000, according to Burning Glass. That's an exceptionally wide range determined by several factors, including (but certainly not limited to) your portfolio and the projects you've worked on, the company you're working for, and whether you have other skills (such as programming) that can make you a more valuable, cross-disciplinary employee.

The median salary for a UX designer is $98,485.

The most recent Dice Salary Report pegs the average technology salary at $94,000 in 2019, a 1.3 percent increase from 2018. Software developers at top companies such as Apple and Google can easily earn more than $150,000 per year, once you factor in bonuses, stock options, and other kinds of compensation.

UX designers with lots of experience and a solid portfolio can land a salary competitive with those numbers. For example, Burning Glass suggests that UX designers in the 90th percentile for compensation can make $128,115 per year, on average. Those kinds of numbers, though, generally come after a decade or more of working.

According to Burning Glass, the average time to fill an open position is 36 days, roughly similar to many other technology jobs. It's a lengthy enough timespan to suggest a pretty high level of demand for UX designers; for jobs with lots of available candidates and relatively few openings, time-to-fill is often much shorter.

Burning Glass also predicts that UX designer jobs will grow 14.9 percent over the next 10 years, so it's not a dying profession by any stretch of the imagination. After all, it's not like businesses are going to stop designing, building, and releasing products, and as long as that cycle continues, they'll need UX designers to create a seamless experience for users.


Expanding the Universe of Haptics | by Lofelt | Aug, 2020 – Medium

Haptics is hard. Transmitting touch over long distances using electrical signals rendered by a computer interface, or simulating the tactile materiality of a virtual world, presents a wide range of challenges. The sense of touch is complicated and variegated; the mechanisms by which it functions, the foundations of our tactile reality, have historically been understudied in psychology, physiology, and neuroscience, in comparison to studies of seeing and hearing. As a sense distributed throughout the body, composed of a range of different submodalities (including movement, pressure, temperature, and pain), it is difficult and perhaps impossible to design a machine that comprehensively stimulates the sense of touch. The hurdles for haptics are not strictly technological and scientific: it is not just a question of designing haptics applications that work well in the lab, or of gaining a deeper and more holistic understanding of how touch operates. As any haptician knows, and as the diverse array of haptics devices already developed and abandoned attests, these are certainly immense obstacles that at times appear insurmountable.

But there is also a cultural challenge involved in bringing haptic devices out of the design lab in ways that are meaningful to their imagined users. There's a tendency among hapticians to assume that their own enthusiasm for the technology extends to the broader public. It's an understandable impulse, given that those drawn to the field are often motivated by a sort of humanistic desire to bring touch to computing: if touch is the most fundamentally human of our senses, and if it is increasingly absent from interactions in a society dependent on computer-mediated communication, then restoring touch entails a restoration of the human. However, not everyone shares this belief. For potential users who may be skeptical or disinterested, how does one explain the value that haptics can add to experiences with digital media?

For a long time, the haptics industry has answered this problem by overpromising and overhyping its forthcoming products, aided and abetted by a popular press that frequently covers new haptics discoveries in a fervent, celebratory, and uncritical tone, with tech companies and the popular technology press situated in a synergistic relationship. Articles (and especially article headlines) that sensationalize the potential of haptics technologies make for alluring and evergreen clickbait. Such stories announcing the impending arrival of new devices mobilize and rehearse what I have called the dream of haptics: a vision of fully-realized haptic devices that provide a type of photorealism for touch, restoring the missing tactile dimension to our interactions with computers. The problem with this dream of haptics is that it is conjured around virtually every instantiation of the technology, no matter how minor. Each instantiation, accordingly, carries the burden of realizing this dream and is destined to fall short of these promises. This has been characteristic of haptics marketing and journalistic writing about haptics since at least the late 1990s.

It's easy to understand how this happens: of course those drawn to work in the field of haptics are enthusiastic about it; haptics tends to get overlooked in favor of a focus on graphics or audio.

Interestingly, psychologists who worked on touch have consistently lamented, going back at least to the 1950s, that research on the psychology of touch is overlooked in favor of research on the psychology of seeing and hearing: touch, as the lament goes, is considered the neglected sense in everything ranging from psychology to aesthetics to engineering.

In the push to make haptics research legible and compelling, hapticians, both haptics marketers and haptics engineers, fall back on a set of well-rehearsed tropes about the technology and its immense impact. Journalists, too, revert to a familiar and comfortable framing of haptics as both imminent and transformative; if they do offer qualifiers about the feasibility of these complex devices coming to market, they are often buried toward the end of the article, leaving readers with the impression that they can expect to see these advanced haptics applications distributed ubiquitously in the near future.

For one very recent example, check out the headlines for and reporting on the HaptiRead midair Braille haptic system: these articles contain little mention of the fact that the application is still in the very early stages of research; similarly, reporting on haptics patents often conflates the patenting of a technology with its impending arrival.

So it's clear that there's a future orientation to the framing of haptics: a thing you'll someday have that adds a fantastical dimension to your experience with existing technologies.

BoingBoing's Mark Frauenfelder, commenting on the haptics in the Wii Remote in an early review: "It feels like magic. I love it."

The challenge is to foster an appreciation for and understanding of current-generation haptics applications, while explaining how they have and have not lived up to the dream of haptics. Those working in the field would do well to acknowledge and embrace the field's historicity: confront the dashed hopes and failed promises, explain why haptics hasn't lived up to its lofty and transformative aspirations, and then finally, as a positive step, show how haptics already has changed the way we interact with digital devices.

Skepticism about the technology's capacity to meet the lofty promises made around it has been present since haptics began to cohere as an industry way back in the 1990s. In an article published on the eve of Immersion Corporation's 1999 IPO, one market analyst assessed the state of haptics: "It's still very much a nascent technology [...] It hasn't lived up to its promise. It could become a part of every PC, or it could just fade away. I'm not seeing anything yet that says, wow, you've really got to go out and buy it."

Logitech's senior vice president in that same piece, commenting on vibration-enabled computer mice: "we believe that mice using FEELit technology will revolutionize the way people interact with their computers."

Evaluating the analyst's concerns over 20 years later, it certainly seems correct that haptics still hasn't lived up to its promise. But haptics has become a part, not of every PC, but of every smartphone, wearable, and game console: almost without notice, we have become accustomed to decoding the variegated patterns of vibrations constantly emanating from these devices. Gradually, we acclimate to this language of vibrations, acquiring a device-specific and platform-specific tactile literacy: an ability to read the messages being sent to us through our skin by a range of digital devices. One does not have to be a trained haptician to notice, for example, that the vibration pattern reminding a FitBit wearer to take 250 steps that hour (two quick jolts) is perceptibly distinct from the celebratory burst of vibrations that rewards the wearer for hitting their 10,000 step daily target (a few short bursts of vibration, followed by a couple of longer ones, wrist fireworks that correspond with the display on the screen). Similarly, the vibration pattern indicating an incoming call differs from the pattern used to announce the arrival of a text message, which may both be distinct from the vibratory message alerting the user that the device's battery is running low.

The problem with such vibratory languages, however, is that they lack stability and cohesion. Different actuators found in different devices produce different sensations; phone operating systems and specific applications use vibration alerts inconsistently; and the impulse to overuse vibration notifications could be leading to what Ben Lovejoy called haptic overload, with app developers competing for bandwidth on the haptic channel. So what might otherwise be trumpeted as a major victory for haptics gets muted somewhat by the lack of a shared tactile vocabulary across devices and applications. There are a host of reasons for this fragmentation: a lack of standardized design tools, competition between companies each pushing their own vocabularies, intellectual property concerns, and so on.

However, the important consequence, culturally, concerns the inability of haptics to cohere around a unifying language of vibrations. We may each informally acquire an understanding of the way an individual device communicates to us through haptics, but that literacy becomes obsolete when we move to a new device. When the Apple Watch was announced, for instance, some suggested that the Taptic Engine would usher in a new era of tactile communication, with vibration communication stealthily transforming the way we interact with our devices. Here again, we find another failed promise of haptics: if the Apple Watch provided a new language of touch, where was the dictionary for this language? How did Apple go about training users to read by touch? And did it educate designers on how to best write for this new language of feel?

To their credit, Immersion Corporation's efforts on this front were far more systematic: its proposed Instinctive Alerts framework was ambitious in providing over 40 distinct vibration patterns, each attached to specific messages, capable of running on any single-actuator device. Apple's Core Haptics provides an impressively detailed set of guidelines for developers, but the moment of excitement around smartwatch and smartphone haptics appears to have been eclipsed somewhat by the renewed hope for VR haptics that followed the commercial release of the Oculus Rift and HTC Vive.

If we broaden this conversation outward to encompass video game controller rumble, which has been a standard feature of console controllers since 1997, we see a similar problem: video game players have acquired a variety of languages of touch, specific to individual games and individual game genres, but game design lacks robust training programs in haptic effects programming, so the haptic vocabularies of games remain haphazardly assembled and fragmented. And vibration feedback in games, too, has been continually framed since its inception as an imperfect instantiation of haptics that will inevitably be overcome with the rise of some new and improved feedback mechanism (it is only recently, first with HD Rumble in the Switch and now with the impending release of Sony's DualSense controller, that this promise is being fulfilled).

That strategic distancing of current-generation from next-generation might just be a common marketing maneuver, but it perpetuates the always-on-the-horizon dream of haptics, a continual feeling that the dream is bound to be deferred.

I would put this challenge to hapticians: instead of speaking in the language of "haptics will," begin thinking in terms of "haptics has." Rather than projecting the impact of haptics based on the assumed widespread adoption of fantastical and expensive pieces of hardware, try to convey a sense of what is possible given existing technologies. This is not to suggest that the field should run from dreaming big (many of the just-on-the-horizon devices represent significant steps forward for haptics in terms of hardware sophistication and use case diversity, the result of creatively-conceived and carefully-executed research programs), but instead to argue that the field would be better served by not continually pinning its hopes on the uptake of unavoidably expensive new hardware. Moreover, being realistic about the capabilities of current-generation devices and circumspect in the forecasted impact of future devices would go a long way toward restoring some of the field's strained credibility. And finally, haptics research has primarily operated with a top-down model, driven by those in industry and the academy, rather than being pushed forward by the interests and agitations of those working outside of professional contexts.

Hapticians would do well to emulate Kyle Machulis's focus on community engagement with his open-source Game Haptics Router sex toy control software, which emphasizes adaptability to the various uses different communities might want to put it to. Building haptics applications up from a foundation of community interest would help ensure that the hype around haptics is more than just a product of industry enthusiasm echoing through popular press click-chasing (Dave Birnbaum's INIT podcast is a significant step toward growing this sort of public-facing intellectual culture around haptics).

With much of the world, and the US in particular, still struggling to adapt to the physical distancing protocols required to slow the wildfire spread of COVID, we might also consider the use cases that would be the most immediately beneficial to people, in terms of restoring a lost feeling of connection to those in their affective networks. Broadening our understanding of touch technologies to include this cultural dimension will increase the kit of available tools to help meet the difficult challenges facing those wishing to expand the universe of haptics.

About the author: David Parisi researches the cultural and historical aspects of touch technology. His book, Archaeologies of Touch: Interfacing with Haptics from Electricity to Computing, provides the first comprehensive history of the origins of haptic interfaces, offering vital insights on the development and future trajectory of technologized touching. A sample of David's book is available here.

Twitter: @dave_parisi


Mastercard : Accelerate Ignites Next Generation of Fintech Disruptors and Partners to Build the Future of Commerce – Marketscreener.com

11 companies join the Start Path startup engagement program; nearly 50 new deals signed to Engage network to help customers make growth ambitions a reality

Mastercard has announced the expansion of its Accelerate fintech portfolio, adding dynamic entrepreneurs to its award-winning startup engagement program Start Path and more technology partners to its Engage network, providing access to expert engineers and specialists that can help customers deploy new services quickly and efficiently. Mastercard helps emerging brands build and scale their businesses, supporting their programs today and providing the resources they will need in the years to come. Recognizing the important role fintechs play in the world's rapid digital transformation, Mastercard is continuously diversifying its business by diversifying its perspective, looking to new partners and new ways to build on its core competency as a payments network.

"With the dramatic shift towards digital payments, the rise of open banking and the growth of blockchain and cryptocurrencies, there's never been a more exciting time to be an innovator in fintech, said Ken Moore, Executive Vice President and Head of Mastercard Labs. Mastercard is thrilled to partner with some of the worlds most innovative startups to transform the future of commerce."

Mastercard Start Path
Mastercard is welcoming 11 new startups to its Start Path program, offering a powerful network, innovative technology and deep expertise to help them grow their businesses and scale sustainably. Since 2014, Mastercard has invited more than 230 later-stage startups worldwide to participate in its six-month virtual program, providing technical guidance, operational support and commercial engagements within the Mastercard ecosystem.

Start Path evaluates more than 1,500 applications each year and selects approximately 40 startups that offer the most promising technologies and demonstrate a readiness to scale. Startups in this growing network have gone on to raise $2.7 billion in post-program capital and collaborate with Mastercard, major banks, merchants and other high-profile organizations.

Mastercard Engage
Consumer expectations are evolving more rapidly than ever, and banks, financial institutions and digital players are looking for even more agility in bringing new solutions to market. Mastercard's global reach and local roots afford it the ability to foster a strong, carefully curated network of technology partners that are qualified based on Mastercard standards and industry requirements. Mastercard is expanding its Engage program to support even more solutions and deliver better learning and promotional opportunities for partners.

Engage identifies, serves and promotes a network of more than 170 strategic partners who comply with Mastercard certification and rules to build and deploy new solutions on behalf of fintechs, banks and merchants. In the first quarter of 2020 alone, Mastercard Engage signed almost 50 new deals with partners such as Antelop, Giesecke+Devrient Mobile Security, MeaWallet, Netcetera, Payair and Thales across Europe, Latin America, Asia Pacific, and the Middle East and Africa.

Strengthening the capabilities of technology partners is even more critical during times of crisis, so fintechs can innovate faster and better serve consumers and businesses. Engage has enabled more than 200 million cards to support financial growth and market entry for fintechs around the world. Additional resources are now available for partners through the new Mastercard Engage website.

Notes to Editors
The following companies are joining the Start Path network to grow and scale their blockchain and open banking solutions (Bit Capital), financial inclusion platforms (Hello Tractor), carbon footprint measurement products (Doconomy) and more:

About Mastercard (NYSE:MA) Mastercard is a global technology company in the payments industry. Our mission is to connect and power an inclusive, digital economy that benefits everyone, everywhere by making transactions safe, simple, smart and accessible. Using secure data and networks, partnerships and passion, our innovations and solutions help individuals, financial institutions, governments and businesses realize their greatest potential. Our decency quotient, or DQ, drives our culture and everything we do inside and outside of our company. With connections across more than 210 countries and territories, we are building a sustainable world that unlocks priceless possibilities for all. http://www.mastercard.com.

View source version on businesswire.com: https://www.businesswire.com/news/home/20200812005202/en/


Letter to the editor: Censorship by social media – TribLIVE



Kamala Harris's Former Press Secretary Is the Face of Twitter Censorship – National Review

(Illustration/Dado Ruvic/Reuters)

When CNN hired Sarah Isgur, a former Jeff Sessions spokeswoman and now staff writer at The Dispatch, last year to be a political editor at its Washington bureau, left-wing media types put on a full-court press to smear her professionalism. The CNN newsroom, which, last I looked, included former Obama official Jim Sciutto, was reportedly demoralized by her very presence. Conservatives, and it's probably fair to say that Isgur is a pretty moderate one, aren't welcome in mainstream journalism. We don't need to go through all the numbers and polls to stress this point. Journalists have long jumped back and forth between Democratic Party politics and media gigs. The job is the same. The venue is different.

I bring this up because, as my former colleague Sean Davis points out, Nick Pacilio, Kamala Harris's former press secretary, is now in charge of announcing what the president of the United States can and can't say on Twitter to his 85 million followers. Twitter has already removed debatable contentions by the president, or contentions no more misleading than any number of Joe Biden allegations. The point of removing tweets, I assume, has more to do with being able to call Trump a liar than worrying about his spreading misleading information.

But the optics are remarkably terrible for Twitter. It's almost certainly true that whoever holds the job of senior communication manager at the social-media giant will be ideologically progressive like the company's CEO. But could you imagine what the nightly reaction on CNN and MSNBC would be if Mike Pence's former spokesperson was seen censoring Joe Biden's tweets during a presidential election? I have no doubt Democrats would be calling for congressional hearings.

*Twitter says Pacilio isn't involved in the removal decisions himself. I have updated the post to reflect his role, though Pacilio's definitive tweets give users no clue as to how the process plays out or who makes these decisions. I don't think the optics are any better for Twitter, but I should have been more careful.


IAF writes to censor board objecting to its undue negative portrayal in movie Gunjan Saxena – The Tribune India

New Delhi, August 12

The Indian Air Force (IAF) has written a letter to the Censor Board objecting to its undue negative portrayal in the movie Gunjan Saxena: The Kargil Girl, said a senior official.

The movie was released on streaming platform Netflix on Wednesday.

According to the official, the letter mentions concerns related to the movie's portrayal of gender bias as an institutional work culture at the IAF.

The movie is based on the life of IAF officer Gunjan Saxena, who became the first woman pilot to take part in the 1999 Kargil war. It has been produced by Karan Johar's Dharma Productions.

"The IAF has written a letter to the Central Board of Film Certification (CBFC) objecting to certain scenes in the movie Gunjan Saxena: The Kargil Girl wherein it has been portrayed in undue negative light," the official said.

The official said a copy of the letter has also been sent to Netflix.

"Before the release of the film, the IAF had requested Dharma Productions to modify or delete the objectionable scenes. However, it did not take any action," the official noted.

The Defence Ministry had written to the CBFC last month raising strong objections to the depiction of armed forces personnel in some web series, sources said.

It had urged that production houses may be advised to obtain a no-objection certificate from the ministry before telecasting any film, documentary or web series on an Army theme, they added.

The ministry had received some complaints raising strong objections about the portrayal of Indian Army personnel and the military uniform in an insulting manner, they said.

The sources said the communication last month was also sent to the Ministry of Information and Broadcasting and the Ministry of Electronics and Information Technology for consideration. PTI


Kamala Harris Wants To Ban Trump From Twitter, And Her Former Spokesman Is Now Twitter’s Top Censor – The Federalist

California Democratic Sen. Kamala Harris spent more time during the Democratic primary on a crusade to ban President Donald Trump from Twitter than she did defending her criminal justice record. Now, the California senator has made a re-entry into the presidential race as former Vice President Joe Biden's running mate, and with friends in high places at the social media giant.

Nick Pacilio served as Harris' communications director from 2013 to 2014, when she was still California attorney general. He now works as Twitter's senior communications manager and has been with the company for more than five years.

The arrangement raises questions, then, over the social media platform's moderation of political content in the course of the election, where the company has already employed selective censorship of right-of-center voices, including the prominent flagging, as misinformation, of Trump tweets that raised valid concerns over mail-in voting. While executives from the nation's four largest tech giants testified before House lawmakers last month, fielding questions over internet censorship, Twitter was notably absent even as the website suffered a major security breach compromising accounts of some of the world's most powerful people just two weeks prior.

When asked by The Federalist whether Pacilio would be involved in decisions moderating political content on the platform, Twitter reduced his role to that of merely a spokesperson.

"Spokespeople at Twitter, including Nick, don't make enforcement decisions," wrote Brandon Borrman on behalf of the company. "They aren't involved in the review process. They share the decisions that are made with the public and answer questions. That's it."

The optics, however, remain dubious given Twitter's already high-profile episodes of undue censorship in this year's election, combined with Harris' futile crusade to kick Trump off the website altogether less than a year ago. It is also public that just two years ago, Twitter had shadow-banned prominent Republicans including several vocal members of Congress and Republican National Committee Chairwoman Ronna McDaniel.

Throughout the Democratic primary, Harris was aggressive in her pursuit to shut down Trump's Twitter account, sending open letters to the company's CEO and attacking the social media giant's apparent inaction on the Ohio debate stage in prime time after accusing Trump of using the platform to incite violence.

"Twitter should be held accountable and shut down that site," Harris said as she challenged Massachusetts Sen. Elizabeth Warren to join her crusade in silencing political opposition. "It is a matter of safety and corporate accountability."

In an autopsy of Harris' failed campaign, the New York Times reported it was partially young staffers' obsession with Twitter that ultimately played a role in sinking the flailing ship.

The Times wrote:

One adviser said the fixation that some younger staffers have with liberals on Twitter distorted their view of what issues and moments truly mattered, joking that it was not President Trump's account that should be taken offline, as Ms. Harris has urged, but rather those of their own trigger-happy communications team.
