Xen Project Hypervisor 4.13 Brings Improved Security, Hardware Support and Features to Increase Embedded Use Case Adoption – PRNewswire

SAN FRANCISCO, Dec. 18, 2019 /PRNewswire/ -- The Xen Project, an open source hypervisor hosted at the Linux Foundation, today announced the release of Xen Project Hypervisor 4.13, which improves security, broadens hardware support, adds new options for embedded use cases and reflects a wide array of contributions from the community and ecosystem. This release also represents a fundamental shift in the long-term direction of Xen, one which solidifies its resilience against security threats arising from side-channel attacks and hardware issues.

"Xen 4.13 combines improved security, broader support for hardware platforms, an easier adoption path for embedded and safety-critical use-cases, as well as a broad representation of diverse community collaboration," said Lars Kurth, Xen Project Advisory Board Chairperson. "In addition to the significant features we are adding, including Core scheduling, late uCode loading, live-patching and added support for OP-TEE and improvements to Dom0less, our community is laying the groundwork for a fully functional and more easily safety certifiable platform for Xen."

Security
Xen 4.13 provides key updates in defense against hardware vulnerabilities, including Core scheduling, late uCode loading and branch hardening to mitigate Spectre v1. Xen 4.13 is the first step in revamping key architectural functionality within Xen that allows users to better balance security and performance.

Embedded and Safety-Critical
Xen 4.13 brings new features that provide an easier adoption path for embedded and safety-critical use cases, specifically those targeting ISO 26262 and ASIL-B.

In addition, the Xen Project community has created a Functional Safety Working group supported by multiple vendors, including safety assessors. This group is working on a multi-year plan that makes it possible for vendors to consume Xen Project software in a fashion that is compatible with ASIL-B requirements. This is a significant challenge that requires code and development processes to comply with key tenets of ISO 26262, a challenge which has not yet been solved by any open source project, but which multiple projects are trying to address.

Support for New Hardware Platforms
Xen 4.13 brings support for a variety of hardware platforms. Most notably, it introduces support for AMD 2nd Generation EPYC processors, with exceptional performance-per-dollar, connectivity options, and security features. In addition, Xen 4.13 also supports the Hygon Dhyana 18h processor family, the Raspberry Pi 4 and Intel AVX-512.

Comments from Xen Project Users and Contributors
"AMD has been a long-time contributor to the Xen Project and we are pleased to include Xen in our growing AMD 2nd Generation EPYC ecosystem. The Xen 4.13 based hypervisors running on servers powered by AMD EPYC processors are well suited for many different workloads and help provide customers an attractive total cost of ownership. In particular, the results of VDI performance tests demonstrate the power of Xen on AMD EPYC processors," said Raghu Nambiar, Corporate Vice President and CTO of Datacenter Ecosystems & Application Engineering, AMD.

"The Xen Project Hypervisor has always focused on securely isolating VMs, enabling operators to run multi-tenant workloads with confidence. Xen 4.13 builds on this heritage by further defending against attacks which attempt to leverage hardware-based side channels," said Jacus de Beer, Director of Engineering, Hybrid Cloud Platforms, Citrix. "Xen 4.13 also helps integrators and operators to simplify system maintenance and reduce downtime using the new live-patching, and run-time microcode-loading features. This blend of security and serviceability helps Citrix Hypervisor, which uses Xen at its core, to deliver a dependable platform to our cloud, server and desktop virtualization customers."

"The Xen Project is making huge progress in functional safety compliance, which will allow OEMs and tier 1 suppliers to design mixed safety systems using an open source hypervisor," said Alex Agizim, CTO, Automotive & Embedded, EPAM Systems. "We are excited to be part of this initiative as one of the leaders in Xen's FuSa SiG and enable vehicles to be part of the connected services ecosystem."

"At SUSE we are constantly looking at the requirements of performance and security in our enterprise solutions. Xen's new scheduling option 'core scheduling' is the result of many months of work in the Xen community championed by SUSE," said Claudio Fontana, Engineering Manager, Virtualization, SUSE. "It demonstrates a new way to take advantage of hardware optimizations, without compromising on the security of our customers' systems, that should also be looked at as a successful example to spark similar work and discussions in other large open source projects."

"Xilinx sees Xen Project Hypervisor as the leader in the embedded and automotive virtualization space," said Tony McDowell, Senior Product Marketing Engineer at Xilinx. "Xilinx embraces and continues to enhance with support the Xen Project by completing our development of key features required to have usable and easily configured Dom0-less systems."

About the Xen Project
Xen Project software is an open source virtualization platform licensed under the GPLv2 with a governance structure similar to the Linux kernel's. Designed from the start for cloud computing, the Project has more than a decade of development behind it and is used by more than 10 million users. A project at The Linux Foundation, the Xen Project community is focused on advancing virtualization in a number of commercial and open source applications, including server virtualization, Infrastructure as a Service (IaaS), desktop virtualization, security applications, and embedded and hardware appliances. It counts many industry and open source community leaders among its members, including Alibaba, Amazon Web Services, AMD, Arm, Bitdefender, Citrix, EPAM Systems, Huawei and Intel. For more information about the Xen Project software and to participate, please visit XenProject.org.

Intel and Xeon are trademarks of Intel Corporation or its subsidiaries in the U.S. and/or other countries. AMD, the AMD logo, EPYC, and combinations thereof are trademarks of Advanced Micro Devices, Inc.

About the Linux Foundation
Founded in 2000, the Linux Foundation is supported by more than 1,000 members and is the world's leading home for collaboration on open source software, open standards, open data, and open hardware. The Linux Foundation's projects are critical to the world's infrastructure, including Linux, Kubernetes, Node.js, and more. The Linux Foundation's methodology focuses on leveraging best practices and addressing the needs of contributors, users and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our trademark usage page: https://www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

Media Contact: Rachel Romoff, rromoff@linuxfoundation.org, 210-241-8284

SOURCE Xen Project

http://www.xenproject.org

The world increasingly relies on open source: here's how to control its risks – BetaNews

Open source software's hold on the IT sector has deepened in the last five years. An estimated 96 percent of applications use open source components, and big players like Microsoft, IBM and even the U.S. government now embrace open source projects for their software needs. But while open source has transformed organizations' ability to use proven and maintained code in the development of new software, it's not untouchable in terms of security. Using code that's readable by anyone brings risks -- and issues have occurred in the past.

It's true that open source makes security efforts more transparent, since everything happens out in the open. If there are flaws in the code, they're often resolved quickly by committed members of the open source community. Additionally, many open source projects have security scans built into their build processes, so contributions that introduce vulnerabilities directly or through dependencies are few and far between. But leaving the code in the open also allows bad actors to write attacks specific to unpatched vulnerabilities or to unrealized vulnerabilities in libraries that products actively depend on. As a result, teams using open source need to take steps to remain secure.

A growing embrace of open source

It's clear why organizations use open source. Would you rather build a tool on proprietary code from the ground up, or use blocks of code that are proven effective and maintained by other trustworthy users out in the open?

Most organizations prefer the latter, and open source code delivers it to thousands of companies. According to Red Hat, 69 percent of IT leaders said open source is very or extremely important to their organization's infrastructure plans.

Over the past decade, the business world has embraced open source as the new normal for building software. At SPR, we work with more than 300 clients, and I can't name one that doesn't use open source software in some form.

Open source adoption by some of the most powerful tech players has been key to its growing influence. For example, Microsoft now offers many development tools for free and allows users to run them on all operating systems, not just Windows. In 2018, it even acquired GitHub, the well-known software development platform at the heart of much open source work.

The ethos of the open source community is to further what's possible with software and share innovations with the rest of the world. When members develop useful code or software, they share it back to the community. But it's not a perfect system. Cybersecurity risks often accompany open source projects, and IT teams must proactively respond to those risks.

Managing open source risks

While open source projects tend to be in good hands out in the open, the right sequence of events can lead to attacks from malicious actors. They may inject harmful code into an open source package or write an attack specifically targeting applications that use certain (potentially outdated) versions of tools.

For example, in January 2018, nefarious individuals inserted malicious code into multiple core JavaScript packages that other projects depended on. After an operations engineer accidentally deleted a trusted username, several packages associated with it immediately disappeared. The mistake cleared the way for a bad actor to quickly upload malicious packages under the same names. As a result, teams unknowingly pulled these packages into their applications without scanning them. In the end, hundreds of products were likely affected.

With such consequences in mind, your team must have an established policy operating in the background to protect your tools from ever-changing vulnerabilities -- or more rarely, malicious code.

Above all, you should define a policy for using open source and enforce it through automation. Automate the retrieval and scanning of dependencies so your deployments are halted or your builds are broken when new vulnerabilities are detected in packages that your software depends on. If an organization had had this type of process in place during the JavaScript snafu, the malicious code might never have made it into its software, because the scanning tool would have picked it up.

This measure goes back to continuous delivery best practices. Whenever I pull code from my repository for a build, it should be automatically tested -- both to ensure correct functionality and to check for known security vulnerabilities, whether directly or through one of its dependencies. Your process should have a security scan built into the delivery pipeline, so that while your build is happening, you'll know what could affect your app. Dependency management tools like Node Package Manager (NPM) provide warnings when a project's dependencies contain vulnerabilities -- and you shouldn't ignore them.
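
To make that concrete, here is a minimal sketch of such a pipeline gate in Python. It assumes an npm 6-style `npm audit --json` report (which nests per-severity counts under metadata.vulnerabilities); the severity policy is illustrative, not prescriptive:

```python
#!/usr/bin/env python3
"""CI gate: fail the build when `npm audit` reports serious vulnerabilities."""
import json
import subprocess
import sys

# Severities that should break the build; adjust to match your own policy.
BLOCKING_SEVERITIES = ("high", "critical")

def main() -> int:
    # `npm audit --json` (npm 6+) prints a machine-readable report and exits
    # non-zero when anything is found, so we don't use check=True here.
    result = subprocess.run(["npm", "audit", "--json"],
                            capture_output=True, text=True)
    report = json.loads(result.stdout)

    # npm 6-style reports nest per-severity counts under metadata.vulnerabilities.
    counts = report.get("metadata", {}).get("vulnerabilities", {})
    blocking = {sev: counts.get(sev, 0) for sev in BLOCKING_SEVERITIES}

    if any(blocking.values()):
        print(f"Blocking vulnerabilities found: {blocking}", file=sys.stderr)
        return 1  # a non-zero exit halts the pipeline stage

    print(f"Dependency audit passed: {counts}")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

Wired into a build stage, a non-zero exit from a script like this breaks the build, which is exactly the halt-on-new-vulnerabilities behavior described above.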

Additionally, IT leaders should monitor vulnerabilities by subscribing to the security bulletin mailing lists that accompany the software they use. Monitoring a vulnerability catalog like Common Vulnerabilities and Exposures (CVE) -- which is maintained by MITRE -- keeps you informed about published vulnerabilities and scheduled fixes. Make sure you're paying attention to bulletins like CVE for all the tools you use.

For all the good open source software has brought to the world, there will always be vulnerabilities and malicious actors intent on wreaking havoc. By building the right measures into software delivery pipelines, you can position your teams to reap the benefits that open source offers while confidently minimizing risks.

Justin Rodenbostel is a polyglot software engineer and architect who leads one of the custom application development practices at SPR Consulting, a digital technology consultancy, where he focuses on building solutions for clients using open source technology. His team's broad expertise enables them to build for their customers not the most convenient solution, or the solution someone's most comfortable with, but the right solution. Throughout his career, he's worked on large, mission-critical enterprise apps, small internal apps, and everything in between -- for clients as small as a three-person startup and as large as a global Fortune 50.

How Quantum Computers Work | HowStuffWorks

The massive amount of processing power generated by computer manufacturers has not yet been able to quench our thirst for speed and computing capacity. In 1947, American computer engineer Howard Aiken said that just six electronic digital computers would satisfy the computing needs of the United States. Others have made similar errant predictions about the amount of computing power that would support our growing technological needs. Of course, Aiken didn't count on the large amounts of data generated by scientific research, the proliferation of personal computers or the emergence of the Internet, which have only fueled our need for more, more and more computing power.

Will we ever have the amount of computing power we need or want? If, as Moore's Law states, the number of transistors on a microprocessor continues to double every 18 months, the year 2020 or 2030 will find the circuits on a microprocessor measured on an atomic scale. And the logical next step will be to create quantum computers, which will harness the power of atoms and molecules to perform memory and processing tasks. Quantum computers have the potential to perform certain calculations significantly faster than any silicon-based computer.
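
As a rough sanity check on that timeline (a back-of-the-envelope sketch; the ~90 nm starting point in 2003 and the fixed 18-month cadence are simplifying assumptions):

```python
# If transistor counts double every 18 months on a fixed die area, linear
# feature size shrinks by sqrt(2) per doubling. Starting from an assumed
# ~90 nm process around 2003 and shrinking toward the ~0.2 nm scale of
# individual silicon atoms:
feature_nm, year = 90.0, 2003.0
ATOMIC_SCALE_NM = 0.2

while feature_nm > ATOMIC_SCALE_NM:
    feature_nm /= 2 ** 0.5  # one density doubling
    year += 1.5             # every 18 months

print(f"Atomic scale reached around {year:.0f} ({feature_nm:.2f} nm)")
# -> around 2030, in line with the article's 2020-2030 window
```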

Scientists have already built basic quantum computers that can perform certain calculations; but a practical quantum computer is still years away. In this article, you'll learn what a quantum computer is and just what it'll be used for in the next era of computing.

You don't have to go back too far to find the origins of quantum computing. While computers have been around for the majority of the 20th century, quantum computing was first theorized only a few decades ago, by a physicist at the Argonne National Laboratory. Paul Benioff is credited with first applying quantum theory to computers in 1981, when he theorized about creating a quantum Turing machine. Most digital computers, like the one you are using to read this article, are based on the Turing machine model.

How quantum computing could beat climate change – World Economic Forum

Imagine being able to cheaply and easily suck carbon directly out of our atmosphere. Such a capability would be hugely powerful in the fight against climate change and advance us towards the ambitious global climate goals that have been set.

Surely that's science fiction? Well, maybe not. Quantum computing may be just the tool we need to design such a clean, safe and easy-to-deploy innovation.

In 1995 I first learned that quantum computing might bring about a revolution akin to the agricultural, industrial and digital ones we've already had. Back then it seemed far-fetched that quantum mechanics could be harnessed to such momentous effect; given recent events, it seems much, much more likely.

Much excitement followed Google's recent announcement of quantum supremacy: "[T]he point where quantum computers can do things that classical computers can't, regardless of whether those tasks are useful."

The question now is whether we can develop the large-scale, error-corrected quantum computers that are required to realize profoundly useful applications.

The good news is we already know concretely how to use such fully-fledged quantum computers for many important tasks across science and technology. One such task is the simulation of molecules to determine their properties, interactions, and reactions with other molecules (a.k.a. chemistry), the very essence of the material world we live in.

While simulating molecules may seem like an esoteric pastime for scientists, it does, in fact, underpin almost every aspect of the world and our activity in it. Understanding their properties unlocks powerful new pharmaceuticals, batteries, clean-energy devices and even innovations for carbon capture.

To date, we haven't found a way to simulate large, complex molecules with conventional computers, and we never will, because the problem is one that grows exponentially with the size or complexity of the molecules being simulated. Crudely speaking, if simulating a molecule with 10 atoms takes a minute, a molecule with 11 takes two minutes, one with 12 atoms takes four minutes, and so on. This exponential scaling quickly renders a traditional computer useless: simulating a molecule with just 70 atoms would take longer than the lifetime of the universe (13 billion years).
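
A few lines of Python make the article's own arithmetic concrete, taking the stated assumption that 10 atoms cost one minute and every additional atom doubles the runtime:

```python
MINUTES_PER_YEAR = 60 * 24 * 365

def simulation_minutes(atoms: int) -> float:
    # 10 atoms -> 1 minute; each extra atom doubles the runtime
    return 2.0 ** (atoms - 10)

for atoms in (10, 11, 12, 50, 70):
    years = simulation_minutes(atoms) / MINUTES_PER_YEAR
    print(f"{atoms:>2} atoms: about {years:.3g} years")

# 70 atoms -> roughly 2e12 years, far beyond the ~1.3e10-year
# age of the universe quoted above.
```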

This is infuriating, not just because we can't simulate existing important molecules that we find (and use) in nature, including within our own bodies, and thereby understand their behaviour; but also because there is an infinite number of new molecules that we could design for new applications.

That's where quantum computers could come to our rescue, thanks to the late, great physicist Richard Feynman. Back in 1981, he recognized that quantum computers could do what would be impossible for classical computers when it comes to simulating molecules. Thanks to recent work by Microsoft and others, we now have concrete recipes for performing these simulations.

One area of urgent practical importance where quantum simulation could be hugely valuable is in meeting the SDGs, not only in health, energy, industry, innovation and infrastructure but also in climate action. Examples include room-temperature superconductors (which could reduce the 10% of energy production lost in transmission), more efficient processes for producing the nitrogen-based fertilizers that feed the world's population, and new, far more efficient batteries.

One very powerful application of molecular simulation is in the design of new catalysts that speed up chemical reactions. It is estimated that 90% of all commercially produced chemical products involve catalysts (in living systems, they're called enzymes).

A catalyst for scrubbing carbon dioxide directly from the atmosphere could be a powerful tool in tackling climate change. Although CO2 is captured naturally by oceans and trees, CO2 production has exceeded these natural capture rates for many decades.

The best way to tackle CO2 is not releasing more CO2; the next best thing is capturing it. While we can't literally turn back time, "[it] is a bit like rewinding the emissions clock," according to Torben Daeneke at RMIT University.

There are known catalysts for carbon capture, but most contain expensive precious metals or are difficult or expensive to produce and/or deploy. "We currently don't know many cheap and readily available catalysts for CO2 reduction," says Ulf-Peter Apfel of Ruhr-University Bochum.

Given the infinite number of candidate molecules that are available, we are right to be optimistic that there is a catalyst (or indeed many) to be found that will do the job cheaply and easily. Finding such a catalyst, however, is a daunting task without the ability to simulate the properties of candidate molecules.

And that's where quantum computing could help.

We might even find a cheap catalyst that enables efficient carbon dioxide recycling and produces useful by-products like hydrogen (a fuel) or carbon monoxide (a common source material in the chemical industry).

We can currently simulate small molecules on prototype quantum computers with up to a few dozen qubits (the quantum equivalent of classical computer bits). But scaling this to useful tasks, like discovering new CO2 catalysts, will require error correction and machines on the order of 1 million qubits.
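
For a feel of where a figure like 1 million comes from, here is the rough arithmetic; the ~1,000-to-1 overhead below is a commonly cited surface-code ballpark, not a number from this article:

```python
# Error correction trades many noisy physical qubits for one reliable
# "logical" qubit. Assuming a commonly cited surface-code overhead of
# ~1,000 physical qubits per logical qubit (the true ratio depends on
# hardware error rates), a machine with ~1,000 logical qubits for
# chemistry lands right around the million-qubit mark.
PHYSICAL_PER_LOGICAL = 1_000
LOGICAL_FOR_CHEMISTRY = 1_000
print(f"~{PHYSICAL_PER_LOGICAL * LOGICAL_FOR_CHEMISTRY:,} physical qubits")
```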

It's a challenge I have long believed will only be met on any human timescale, certainly by the 2030 target for the SDGs, if we use the existing manufacturing capability of the silicon chip industry.

At a meeting of the World Economic Forum's Global Future Councils last month, a team of experts from across industry, academia and beyond assembled to discuss how quantum computing can help address global challenges, as highlighted by the SDGs, and climate in particular.

As co-chair of the Global Future Council on Quantum Computing, I was excited that we were unanimous in agreeing that the world should devote more resources, including in education, to developing the powerful quantum computing capability that could help tackle climate change, meet the SDGs more widely and much more. We enthusiastically called for more international cooperation to develop this important technology on the 2030 timescale, so that it can have an impact on delivering the SDGs, in particular climate.

So the real question for me is: can we do it in time? Will we make sufficiently powerful quantum computers on that timeframe? I believe so. There are, of course, many other things we can and should do to tackle climate change, but developing large-scale, error-corrected quantum computers is a hedge we cannot afford to go without.

What WON’T Happen in 2020: 5G Wearables, Quantum Computing, and Self-Driving Trucks to Name a Few – Business Wire

OYSTER BAY, N.Y.--(BUSINESS WIRE)--As 2019 winds down, predictions abound on the technology advancements and innovations expected in the year ahead. However, there are several anticipated advancements, including 5G wearables, quantum computing, and self-driving trucks, that will NOT happen in the first year of the new decade, states global tech market advisory firm ABI Research.

In its new whitepaper, 54 Technology Trends to Watch in 2020, ABI Research's analysts have identified 35 trends that will shape the technology market and 19 others that, although attracting huge amounts of speculation and commentary, look less likely to move the needle over the next twelve months. "After a tumultuous 2019 that was beset by many challenges, both integral to technology markets and derived from global market dynamics, 2020 looks set to be equally challenging," says Stuart Carlaw, Chief Research Officer at ABI Research. "Knowing what won't happen in technology in the next year is important for end users, implementors, and vendors to properly place their investments or focus their strategies."

What won't happen in 2020?

5G Wearables: "While smartphones will dominate the 5G market in 2020, 5G wearables won't arrive in 2020, or anytime soon," says Stephanie Tomsett, 5G Devices, Smartphones & Wearables analyst at ABI Research. "To bring 5G to wearables, specific 5G chipsets will need to be designed and components will need to be reconfigured to fit in the small form factor. That won't begin to happen until 2024, at the earliest."

Quantum Computing: "Despite claims from Google in achieving quantum supremacy, the tech industry is still far away from the democratization of quantum computing technology," says Lian Jye Su, AI & Machine Learning Principal Analyst at ABI Research. "Quantum computing is definitely not even remotely close to the large-scale commercial deployment stage."

Self-Driving Trucks: "Despite numerous headlines declaring the arrival of driverless, self-driving, or robot vehicles, very little, if any, driver-free commercial usage is underway beyond closed-course operations in the United States," says Susan Beardslee, Freight Transportation & Logistics Principal Analyst at ABI Research.

A Consolidated IoT Platform Market: "For many years, there have been predictions that the IoT platform supplier market will begin to consolidate, and it just won't happen," says Dan Shey, Vice President of Enabling Platforms at ABI Research. "The simple reason is that there are more than 100 companies that offer device-to-cloud IoT platform services, and for every one that is acquired, there are always new ones that come to market."

Edge Will Not Overtake Cloud: "The accelerated growth of edge technology and the intelligent device paradigm created one of the largest industry misconceptions: that edge technology will cannibalize cloud technology," says Kateryna Dubrova, M2M, IoT & IoE Analyst at ABI Research. "In fact, in the future we will see a rapid development of the edge-cloud-fog continuum, where the technologies will complement each other rather than cross-cannibalize."

8K TVs: "Announcements of 8K television (TV) sets by major vendors earlier in 2019 attracted much attention and raised many questions within the industry," says Khin Sandi Lynn, Video & Cloud Services Analyst at ABI Research. "The fact is, 8K content is not available and the prices of 8K TV sets are exorbitant. The transition from high definition (HD) to 4K will continue in 2020, with very limited 8K shipments of less than 1 million units worldwide."

For more trends that won't happen in 2020, and the 35 trends that will, download the 54 Technology Trends to Watch in 2020 whitepaper.

About ABI Research

ABI Research provides strategic guidance to visionaries, delivering actionable intelligence on the transformative technologies that are dramatically reshaping industries, economies, and workforces across the world. ABI Research's global team of analysts publishes groundbreaking studies often years ahead of other technology advisory firms, empowering our clients to stay ahead of their markets and their competitors.

For more information about ABI Research's services, contact us at +1.516.624.2500 in the Americas, +44.203.326.0140 in Europe, +65.6592.0290 in Asia-Pacific or visit http://www.abiresearch.com.

AI, 5G, ‘ambient computing’: What to expect in tech in 2020 and beyond – USA TODAY

'Tis the end of the year, when pundits typically dust off the crystal ball and take a stab at what tech, and its impact on consumers, will look like over the next 12 months.

But we're also on the doorstep of a brand-new decade, which this time around promises further advances in 5G networks, artificial intelligence, quantum computing, self-driving vehicles and more, all of which will dramatically alter the way we live, work and play.

So what tech advances can we look forward to in the new year? Here's what we can expect to see in 2020 and, in some cases, beyond.

The next generation of wireless has showed up on lists like this for years now. But in 2020, 5G really will finally begin to make its mark in the U.S., with all four major national carriers (three if the T-Mobile-Sprint merger finally goes through) continuing to build out their 5G networks across the country.

We've been hearing about the promise of 5G on the global stage for what seems like forever, and the carriers recently launched in select markets. Still, the rollout in most places will continue to take time, as will the payoff: blistering fast wireless speeds and network responsiveness on our phones, improved self-driving cars and augmented reality, remote surgery, and entire smart cities.

As 2019 winds down, only a few phones can exploit the latest networks, not to mention all the remaining holes in 5G coverage. But you'll see a whole lot more 5G phone introductions in the new year, including what many of us expect will be a 5G iPhone come September.

When those holes are filled, roughly two-thirds of consumers said they'd be more willing to buy a 5G-capable smartphone, according to a mobile trends survey by Deloitte.

But Deloitte executive Kevin Westcott also said that telcos will need to manage consumer expectations about what 5G can deliver and determine what the killer apps for 5G will be.

The Deloitte survey also found that a combination of economic barriers (pricing, affordability) and a sense that current phones are good enough will continue to slow the smartphone refresh cycle.

Are you ready for all the tech around you to disappear? No, not right away. The trend towards so-called ambient computing is not going to happen overnight, nor is anyone suggesting that screens and keyboards will go away entirely, or that you'll stop reaching for a smartphone. But as more tiny sensors are built into walls, TVs, household appliances, fixtures, what you're wearing, and eventually even your own body, you'll be able to gesture or speak to a concealed assistant to get things done.

Steve Koenig, vice president of research at the Consumer Technology Association, likens ambient computing to Star Trek, and suggests that at some point we won't need to place Amazon Echo Dots or other smart speakers in every room of the house, since we'll just speak out loud to whatever, wherever.

Self-driving cars have been getting most of the attention. But it's not just cars that are going autonomous; try planes and boats.

Cirrus Aircraft, for example, is in the final stages of getting Federal Aviation Administration approval for a self-landing system for one of its private jets, and the tech, which I recently got to test, has real potential to save lives.

How so? If the pilot becomes incapacitated, a passenger can press a single button on the roof of the main cabin. At that moment, the plane starts acting as if the pilot were still at the controls. It factors in real-time weather, wind, the terrain, how much fuel remains, and all the nearby airports where an emergency landing is possible, including the lengths of all runways, and automatically broadcasts its whereabouts to air traffic control. From there, the system safely lands the plane.

Or consider the 2020 version of the Mayflower, not a Pilgrim ship, but rather a marine research vessel from IBM and a marine exploration non-profit known as Promare. The plan is to have the unmanned ship cross the Atlantic in September, from Plymouth, England to Plymouth, Massachusetts. The ship will be powered by a hybrid propulsion system, utilizing wind, solar, state-of-the-art batteries, and a diesel generator. It plans to follow the 3,220-mile route the original Mayflower took 400 years ago.

Two of America's biggest passions come together. Esports is one of the fastest growing spectator sports around the world, and the Supreme Court cleared a path last year for legalized gambling across the states. The betting community is licking its chops at the prospect of exploiting this mostly untapped market. You'll be able to bet on esports in more places, whether at a sportsbook inside a casino or through an app on your phone.

One of the scary prospects about artificial intelligence is that it is going to eliminate all these jobs. Research out of MIT and IBM Watson suggests that while AI will for sure impact the workplace, it won't lead to a huge loss of jobs.

That's a somewhat optimistic take, given an alternate view that AI-driven automation is going to displace workers. The research suggests that AI will increasingly help us with tasks that can be automated, but will have a less direct impact on jobs that require skills such as design expertise and industrial strategy. The onus will be on bosses and employees to start adapting to new roles and to try to expand their skills, efforts the researchers say will begin in the new year.

The scary signs are still out there, however. For instance, McDonald's is already testing AI-powered drive-thrus that can recognize voice, which could reduce the need for human order-takers.

Perhaps it's more wishful thinking than a flat-out prediction, but as Westcott puts it, "I'm hoping what goes away are the 17 power cords in my briefcase." Presumably a slight exaggeration.

But the things we all want to see are batteries that don't prematurely peter out, and more seamless charging solutions.

We're still far off from the day when you'll be able to get ample power to last all day on your phone or other devices just by walking into a room. But over-the-air wireless charging is slowly but surely progressing. This past June, for example, Seattle company Ossia received FCC certification for a first-of-its-kind system to deliver over-the-air power at a distance. Devices with Ossia's tech built in should start appearing in the new year.

The Samsung Galaxy Fold smartphone, featuring a foldable OLED display. (Photo: Samsung)

We know how the nascent market for foldable phones unfolded in 2019: things were kind of messy. Samsung's Galaxy Fold was delayed for months following screen problems, and even when the phone finally did arrive, it cost nearly $2,000. But that doesn't mean the idea behind flexible screen technologies goes away.

Samsung is still at it, and so is Lenovo-owned Motorola with its new retro Razr. The promise remains the same: let a device fold or bend in such a way that you can take a smartphone-like form factor and morph it into a small tablet or computer. The ultimate success of such efforts will boil down to at least three of the factors that are always critical in tech: cost, simplicity, and utility.

Data scandals and privacy breaches have placed Facebook, Google and others in the government's crosshairs, and ordinary citizens are concerned. Expect some sort of reckoning, though it isn't obvious at this stage what that reckoning will look like.

Pew recently put out a report that says roughly 6 in 10 Americans believe it is not possible to go about their daily lives without having their data collected.

"The coming decade will be a period of lots of ferment around privacy policy and also around technology related to privacy," says Lee Rainie, director of internet and technology research at Pew Research Center. He says consumers will potentially have more tools to give them a bit more control over how and what data gets shared and under whatcircumstances. "And there will be a lot of debate over what the policy should be."

Open question: Will there be national privacy regulations, perhaps ones modeled after the California law that is set to go into effect in the new year?

It isn't easy to explain quantum computing or the field it harnesses, quantum mechanics. In the simplest terms, think something exponentially more powerful than what we consider conventional computing, which is expressed in the 1s and 0s of bits. Quantum computing takes a quantum leap with what are known as "qubits."
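
One way to see that exponential gap is to count what it takes merely to store a qubit register's state on a classical machine: n qubits require 2^n complex amplitudes. A quick sketch:

```python
BYTES_PER_AMPLITUDE = 16  # one double-precision complex number

for n in (10, 30, 50):
    amplitudes = 2 ** n
    print(f"{n} qubits -> {amplitudes:.4g} amplitudes, "
          f"{amplitudes * BYTES_PER_AMPLITUDE:.3g} bytes")

# 10 qubits fit in ~16 KB, 30 qubits need ~17 GB, and 50 qubits already
# demand roughly 18 petabytes just to hold the state vector.
```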

And while IBM, Intel, Google, Microsoft and others are all fighting for quantum supremacy, the takeaway over the next decade is that the tech may help solve problems far faster than before, from diagnosing disease to cracking forms of encryption, raising the stakes in data security.

What tech do you want or expect to see? Email: ebaig@usatoday.com; Follow @edbaig on Twitter.

What Was The Most Important Physics Of 2019? – Forbes

So, I've been doing a bunch of talking in terms of decades in the last couple of posts, about the physics defining eras in the 20th century and the physics defining the last couple of decades. I'll most likely do another decadal post in the near future, this one looking ahead to the 2020s, but the end of a decade by definition falls at the end of a year, so it's worth taking a look at physics stories on a shorter time scale, as well.

You can, as always, find a good list of important physics stories in Physics World's Breakthrough of the Year shortlist, and there are plenty of other "top science stories of 2019" lists out there. Speaking for myself, this is kind of an unusual year, and it's tough to make a call as to the top story. Most of the time, these end-of-year things are either stupidly obvious because one story towers above all the others, or totally subjective because there are a whole bunch of stories of roughly equal importance, and the choice of a single one comes down to personal taste.

In 2019, though, I think there were two stories that are head-and-shoulders above everything else, but roughly equal to each other. Both are the culmination of many years of work, and both can also claim to be kicking off a new era for their respective subfields. And I'm really not sure how to choose between them.

Photo: US computer scientist Katherine Bouman speaks during a House Committee on Science, Space and Technology hearing on the Event Horizon Telescope in Washington, DC, on May 16, 2019.

The first of these is the more photogenic of the two, namely the release of the first image of a black hole by the Event Horizon Telescope collaboration back in April. This one made major news all over, and was one of the experiments that led me to call the 2010s the decade of black holes.

As I wrote around the time of the release, this was very much of a piece with the preceding hundred years of tests of general relativity: while many stories referred to the image as a shadow of the black hole, really it's a ring produced by light bending around the event horizon. This is the same basic phenomenon that Eddington measured in 1919, looking at the shift in the apparent position of stars near the Sun, providing confirmation of Einstein's prediction that gravity bends light. It's just that scaling up the mass a few million times produces a far more dramatic bending of spacetime (and thus light) than the gentle curve produced by our Sun.

Photo: electronics for use in a quantum computer, in the quantum computing lab at the IBM Thomas J. Watson Research Center in Yorktown Heights, N.Y.

The other story, in very 2019 fashion, first emerged via a leak: someone at NASA accidentally posted a draft of the paper in which Google's team claimed to have achieved quantum supremacy. They demonstrated reasonably convincingly that their machine took about three and a half minutes to generate a solution to a particular problem that would take vastly longer to solve with a classical computer.

The problem they were working with was very much in the quantum simulation mode that I talked about a year earlier, when I did a high-level overview of quantum computing in general, though a singularly useless version of it. Basically, they took a set of 50-odd qubits and performed a random series of operations on them to put them in a complicated state, in which each qubit was in a superposition of multiple states and also entangled with other qubits in the system. Then they measured the probability of finding specific output states.

Illustration: a qubit shown on the Bloch sphere. The north pole is equivalent to one, the south pole to zero, and any other point on the surface is a quantum superposition of 0 and 1; when the qubit is measured, the wave function collapses to an ordinary bit.

Finding the exact distribution of possible outcomes for such a large and entangled system is extremely computationally intensive if you're using a classical computer to do the job, but it happens very naturally in the quantum computer. So they could get a good approximation of the distribution within minutes, while the classical version would take a lot more time, where "a lot more time" ranges from thousands of years (Google's claim) down to a few days (the claim of a rival group at IBM using a different supercomputer algorithm to run the computation). If you'd like a lot more technical detail about what this did and didn't do, see Scott Aaronson.
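
For intuition about why the classical side struggles, here is a toy stand-in for the sampling task (a sketch only: it replaces Google's random circuit with a normalized random state, and assumes NumPy is available):

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n = 12          # number of qubits; the classical statevector has 2**n entries
dim = 2 ** n

# Stand-in for "a random series of operations": a random complex state,
# normalized so that the squared amplitudes form a probability distribution.
state = rng.normal(size=dim) + 1j * rng.normal(size=dim)
state /= np.linalg.norm(state)

probs = np.abs(state) ** 2               # Born rule: the output distribution
samples = rng.choice(dim, size=5, p=probs)
print([format(s, f"0{n}b") for s in samples])

# Each added qubit doubles the statevector, which is why the classical cost
# explodes while a quantum device simply runs the circuit and measures.
```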

As with the EHT paper, this is the culmination of years of work by a large team of people. It's also very much of a piece with past work: quantum computing as a distinct field is a recent development, but really, the fundamental equations used to do the calculations were pretty well set by 1935.

Both of these projects also have a solid claim to be at the forefront of something new. The EHT image is the first to be produced, but won't be the last: they're crunching numbers on the Sgr A* black hole at the center of the Milky Way, and there's room to improve their imaging in the future. Along with the LIGO discovery from a few years ago, this is the start of a new era of looking directly at black holes, rather than just using them as a playground for theory.

Google's demonstration of quantum supremacy, meanwhile, is the first such result in a highly competitive field: IBM and Microsoft are also invested in similar machines, and there are smaller companies and academic labs exploring other technologies. The random-sampling problem they used is convenient for this sort of demonstration but not really useful for anything else; still, lots of people are hard at work on techniques to make a next generation of machines that will be able to do calculations where people care about the answer. There's a good long way to go yet, but a lot of activity in the field is driving things forward.

So, in the head-to-head matchup for Top Physics Story of 2019, these two are remarkably evenly matched, and it could really go either way. The EHT result has a slightly deeper history; the Google quantum computer arguably has a brighter future. My inclination would be to split the award between them; if you put a gun to my head and made me pick one, I'd go with quantum supremacy, but I'd seriously question the life choices that led you to this place, because they're both awesome accomplishments that deserve to be celebrated.

Is getting paid to write open source a privilege? Yes and no – TechRepublic

Commentary: While we tend to think of open source as a community endeavor, at its heart open source is inherently selfish, whether as a contributor or user.

"[O]ften [a] great deal of privilege is needed to be able to dedicate one's efforts to building Free Software full time," declared Matt Wilson, a longtime contributor to open source projects like Linux. He's right. While there are generally no legal hurdles for a would-be contributor to clear, there are much more pragmatic constraints like, for example, rent. While many developers might prefer to spend all of their time writing and releasing open source software, comparatively few can afford to do so or, at least, on a full-time basis.

And that's OK. Because maybe, just maybe, "privilege" implies the wrong thing about open source software.

Who are these developers privileged to get to write open source software? According to GitHub COO Erica Brescia, 80% of the developers actively contributing to open source GitHub repositories come from outside the US. Of course, the vast majority of these developers aren't contributing full-time. According to an IDC analysis, of the roughly 24.2 million global developers, roughly half (12.5 million) get paid to write software full-time, while another seven million get paid to write software part-time.

But this doesn't mean they're getting paid to write open source software, whether full-time or part-time.

If there are 12.5 million paid full-time developers globally, a small percentage of that number gets paid to write open source software. There simply aren't that many companies that clearly see a return on open source investments. Using open source? Of course. Contributing to open source? Not so much. This is something Red Hat CEO Jim Whitehurst called out as a problem over 10 years ago. It remains a problem.

As for individual developers like osxfuse maintainer Benjamin Fleischer, it's a persistent struggle to figure out how to get paid for the valuable work they do. Going back to Wilson's point, most developers simply don't get to enjoy the privilege of spending their time giving away software.

Is this a bad thing?

When I asked if full-time open source is an activity only the rich (individuals or companies) can indulge, developer Henrik Ingo challenged the assumptions underlying my question. "Why should we expect anyone to contribute to open source in the first place?" he queried. Then he struck at the very core of the assumption that anyone should contribute to open source:

Some of us donate to charity, some others receive that gift. Some do both, at different phases in life, or even at the same time. Yet neither of those roles makes us a better person than the other. With open source, the idea is that we share an abundant resource. If you go back to Cathedral and Bazaar, the idea of "scratch your own itch" is prevalent. You write a tool, or fix an existing one, because you needed it. Or you write code to learn. Or just social reasons! Whatever your reasons, nobody should be expected to contribute code as some kind of tax you have to pay to justify your existence on this planet.

Open source, in other words, is inherently self-interested, and that self-interest brings its own rewards. Sometimes unpaid work becomes paid work, as was the case with Andy Oliver. Sometimes it doesn't. If the work is fulfilling in and of itself, it may not matter whether that developer ever gets the "privilege" to spend all of her time getting paid to write open source software. It also may not matter whether that software is open source or closed.

To Ingo's point, we may need to stop trying to impose ethical obligations on software developers and users, whether open source or not. I personally think more open source tends toward more good and, frankly, more realization of self-interest, because it can be a great way to share the development load. For downstream users, contributing back can be a great way to minimize the accumulation of technical debt that can collect on a fork.

But in any case, there isn't a compelling reason to browbeat others into contributing. Open source is inherently self-interested, to Ingo's point, whether as a user or contributor. When I use open source software, I benefit. When I contribute, I benefit. Either way, I (and we) am privileged.

May the Open Source Force Be with You – Enterprise License Optimization Blog

I'm giving away my affinity for Star Wars. It's true. I was there when the first movie hit the big screen (let's just say, a while ago), and now I'm dreading, while at the same time wildly anticipating, the release of the last movie in the Skywalker saga, Star Wars: The Rise of Skywalker. As mawkishly sentimental as it may appear, I had to work it into a blog. Only here, in the end, will we most likely understand some of the true plot lines of this epic story.

How does this relate to my thoughts about open source software? I look at how the Skywalker story evolved over the years, how it took shape, and what and who impacted the narrative.

The same lookback can be undertaken for open source and, in fact, it has been, right here in my own blog (Happy Birthday, Open Source. The Term Turns 21). Let's take that story a little further.

There are events that have shaped the course of open source and how companies use and implement it: lawsuits such as Oracle v. Google and Versata v. Ameriprise, and, a number of years ago, the Free Software Foundation's action against Cisco over some GPL code in one of its routers. The Heartbleed vulnerability, of course, has a place in history given the impact it had on companies around the world.

Likewise, Software Composition Analysis (SCA) has changed the course of open source management and the perceived risks associated with open source. SCA enables companies to be more proactive in their management of open source. There was a realization after Heartbleed that just because the source code is open doesn't mean it has received the examination and oversight needed to mitigate risks.

SCA allows companies to better understand what open source software they are using. It allows for the discovery of that open source, and it allows for the remediation of threats in a way that isn't possible without the ongoing, automated and controlled monitoring of an SCA platform.

During the exposure of the Heartbleed vulnerability, development teams went on hunting missions. They scrambled to understand what version of OpenSSL they had, and then conducted fire drills to figure out how to rectify and remediate quickly and efficiently. SCA is a game changer for those companies that had to crawl painstakingly through complicated processes and manual work to get a handle on the situation. With SCA there is a continuous process that allows for in-the-moment understanding of what open source libraries you're using and what versions are in use.

One of the challenges is that companies can have multiple versions of the same open source library in their product. Version control is a real issue. The ability to leverage SCA to make sure you have the latest version of a particular library, and that the version in use is approved, safe, has the most desirable license terms according to your policies, and is used consistently across the entire product line, is a huge benefit.
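
As a minimal illustration of that consistency check, here is a sketch in Python; the inventory entries are hypothetical stand-ins for what an SCA scan would extract from your build manifests:

```python
from collections import defaultdict

# (component, version) pairs as an SCA scan might extract them;
# these entries are hypothetical examples.
inventory = [
    ("openssl", "1.0.1f"),
    ("openssl", "1.0.2k"),
    ("zlib", "1.2.11"),
]

versions = defaultdict(set)
for component, version in inventory:
    versions[component].add(version)

for component, found in sorted(versions.items()):
    if len(found) > 1:
        print(f"WARNING: {component} ships at {len(found)} versions: {sorted(found)}")
```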

In the end, you're accountable. You're accountable from a reporting standpoint and to stakeholders for what's in your solutions. That's peace of mind.

"Your focus determines your reality." – Qui-Gon Jinn

Ubuntu turns 15: what impact has it had and what does the future hold? – TechRadar

In 1991, Linus Torvalds created the Linux operating system just for fun and, 13 years later, Ubuntu was born, offering developers a more lightweight, user-friendly distribution.

This year marks Ubuntu's 15th birthday, with the distribution now having established itself as the leading open source operating system across public and OpenStack clouds.

As we reflect on this milestone, those of us at Canonical are thinking about what it is that sets us apart from other Linux distributions and has driven Ubuntu to underpin so many successful projects in its time.

One thing which has really come to the foreground during this time is Ubuntu's popularity both as a desktop and as a server. There is immense appreciation and adoption from the developer community, with millions of PC users around the world running Ubuntu. As the cloud landscape has matured, so too has the popularity of this OS as a server. The majority of public cloud workloads run on Ubuntu and, with our continued promise of free access for everyone, it has helped democratise access to innovation.

We frequently hear from people who tell us that they wouldn't be where they are if they hadn't had access to Ubuntu. In fact, our CEO, Mark Shuttleworth, recently said in an interview that "Ubuntu gives millions of software innovators around the world a platform that's very cheap, if not free, with which they can innovate." It helps the future to arrive faster, no matter who or where you are. In this respect it is unique as the one Linux distribution which has made Linux accessible to, and consumable by, everyone.

It is this ideology that has resulted in the numerous Ubuntu success stories. We so often hear from people about how they have used this open source platform to build incredible businesses. As mentioned earlier, the majority of public cloud workloads run on Ubuntu, so almost any hyper-scale company today is using it. But often the best stories are those where people found ways to be creative and build something new using Ubuntu, or where scientists have made significant breakthroughs and the accompanying photo published in the press shows an Ubuntu desktop in the background.

Paramount in this success is the community mindset. At Canonical, we talk a lot about how the open source community powers fast and efficient innovation through a collaborative approach. The next wave of innovation will be powered by software which builds upon this collaborative effort, not just from a single company, but from a community committed to improving the entire landscape. The future success of self-driving cars, medical robots or smart cities is not something which should be entrusted to a select few companies, but instead to a global community of innovators who can come together to achieve the very best outcome.

Ubuntu will continue to be a platform which serves its users, no matter what their needs. On the desktop, we will continue to focus on performance and the engineering use case. We have increasingly seen requests for Ubuntu as an enterprise desktop for development teams, and that will also play a role. The majority of our OEM partners are looking to partner around AI/ML workstations for data scientists, so again that is another focus area.

On the server side, the focus is on security, performance, and long-term support. We're looking closely at the latest innovations around performance, especially in the Linux kernel, to provide the best possible OS for high-performance workloads. Canonical already provides five years of free security updates with every LTS release. Furthering this support offering is the continued expansion of the Extended Security Maintenance program, within which our paid customer base benefits from access to even more security patches.

Ubuntu has been a springboard for many and we will continue our commitment to this mission. With the developer community at the heart of this distribution, Canonical will continue to provide the accessibility to development tools that will enable fast and reliable innovation, to power a more successful future.
