Defense Secretary Nominee: US Faces Enemies Both at Home and Abroad – Voice of America

WASHINGTON - U.S. President-elect Joe Biden's pick to lead the Pentagon warns the country is facing a series of enemies, both at home and abroad, and that it will fall, in part, to the United States military to overcome the dangers.

Retired General Lloyd Austin appeared before lawmakers Tuesday and said his first priority if confirmed as the country's next secretary of defense would be to make sure all military resources are brought to bear against the coronavirus pandemic.

"The greatest challenge to our country right now ... is the pandemic," Austin told members of the Senate Armed Services Committee, wearing a suit and tie instead of the Army dress uniform he wore when he testified in Congress as the commander of U.S. military forces across the Middle East and South Asia.

"It's killed over 400,000 of our American citizens. That's just an incredible, incredible loss of life," he said. We have to do everything we can to break the cycle of transmission and begin to turn this thing around."

Austin did not offer specifics about how he would ramp up the Pentagon's current efforts to distribute the coronavirus vaccines as part of what has been known as Operation Warp Speed. But he said he does believe there is more the Pentagon can do to counter what he described as the most immediate national security challenge.

Countering extremism at home

Austin spoke shortly after U.S. defense officials announced 12 National Guard troops initially assigned to help provide security for Biden's inauguration Wednesday were removed due to extremist ties. Austin pledged to take on what he called "the enemy within."

"The job of the Department of Defense is to keep America safe from our enemies, but we can't do that if some of those enemies lie within our own ranks," he said.

"This [extremism] has no place in the military of the United States of America," Austin added, describing it as part of a broader battle.

"I will fight hard to stamp out sexual assault and to rid our ranks of racists and extremists and to create a climate where everyone fit and willing has the opportunity to serve," he told U.S. lawmakers.

The 67-year-old Austin is a familiar face to many of the lawmakers who will vote on whether to confirm him, though his nomination is not without controversy.

U.S. law requires former active-duty military officers to be retired for seven years before they can serve as defense secretary, a law meant to ensure civilian control of the military. But Austin retired just five years ago, stepping down as the leader of U.S. Central Command in 2016.

Waivers have been granted just twice, most recently in 2017 for retired General Jim Mattis, who served as outgoing President Donald Trump's first defense secretary.

On Tuesday, some lawmakers, including Republican Senator Tom Cotton and Democratic Senator Richard Blumenthal, told Austin they would not support a waiver. Cotton went as far as to call his support of a waiver for Mattis a mistake.

Austin said he understood the concerns about "having another recently retired general" take the reins at the Pentagon and promised that, if confirmed, the voices of civilian defense officials would be heard.

"The safety and security of our democracy demands competent civilian control of our armed forces," he said. "I have spent my entire life committed to that."

Like many of President-elect Biden's Cabinet selections, Austin focused on a change in course after four years under Trump and his "America First" policy.

Reaffirming alliances

Austin, in particular, noted the importance of the country's military alliances, saying that one of his first trips would be to visit Japan, South Korea and Australia, key allies in the Indo-Pacific, where competition with China is heating up.

"China is the most concerning competitor that we're facing," he said.

"Their goal is to be a dominant world power," Austin added. We have to make sure that we begin to check their aggression."

The retired general promised lawmakers a laser-like focus on making sure the U.S. maintains a competitive edge over the growing Chinese military, though he said doing so will require investment in new technologies, including artificial intelligence and quantum computing, areas in which China has been closing the gap.

Austin said Russia, long viewed as Washington's other key adversary in what Trump officials have described as an era of great power competition, remains a concern but not in the same way as Beijing.

"Russia is also a threat but it's in decline," he said, warning Moscow can still do "a great deal of damage" in cyberspace, like with the SolarWinds hack, and with influence operations.

In addition to Russia and China, lawmakers questioned Austin about the incoming Biden administration's position on Iran and talk that the U.S. might seek to rejoin the so-called Iran nuclear deal.

Iran - a destabilizing element

Austin indicated any reentry to the nuclear deal would require movement by Tehran.

"The preconditions for us considering to reenter into that agreement would be that Iran meet the conditions outlined in the agreement back to where they should have been," Austin said.

And while the former CENTCOM commander said the Trump administration's successful efforts to help normalize ties between Israel and Arab countries in the region may be helping put additional pressure on the regime, the danger remains.

"Iran continues to be a destabilizing element," Austin told lawmakers. [Iran] does present a threat to our partners in the region and those forces that we have stationed in the region."

As for Afghanistan, where a Trump administration drawdown has left just 2,500 U.S. troops, Austin expressed a cautious hope.

"This conflict needs to come to an end. We need to see an agreement reached," he said.

If confirmed by the Senate, the former four-star general would be the first African American to serve as defense secretary.

Originally posted here:
Defense Secretary Nominee: US Faces Enemies Both at Home and Abroad - Voice of America

Quantum computing research helps IBM win top spot in patent race – CNET

An IBM patent shows a hexagonal array of qubits in a quantum computer, arranged to minimize problems controlling the finicky data processing elements.

IBM secured 9,130 US patents in 2020, more than any other company as measured by an annual ranking, and this year quantum computing showed up as part of Big Blue's research effort. The company wouldn't disclose how many of the patents were related to quantum computing -- certainly fewer than the 2,300 it received for artificial intelligence work and 3,000 for cloud computing -- but it's clear the company sees them as key to the future of computing.

The IFI Claims patent monitoring service compiles the list annually, and IBM is a fixture at the top. The IBM Research division, with labs around the globe, has for decades invested in projects that are far away from commercialization. Even though the work doesn't always pay dividends, it's produced Nobel prizes and led to entire industries like hard drives, computer memory and database software.

"A lot of the work we do in R&D really is not just about the number of patents, but a way of thinking," Jerry Chow, director of quantum hardware system development, said in an exclusive interview. "New ideas come out of it."

IFI's US patent list is dominated by computer technology companies. Second place went to Samsung with 6,415 patents, followed by Canon with 3,225, Microsoft with 2,905 and Intel with 2,867. Next on the list are Taiwan Semiconductor Manufacturing Corp., LG, Apple, Huawei and Qualcomm. The first non-computing company is Toyota, in 14th place.

Internationally, IBM ranked second to Samsung in patents for 2020, and industrial companies Bosch and General Electric cracked the top 10. Many patents are duplicative internationally since it's possible to file for a single patent in 153 countries.

Quantum computing holds the potential to tackle computing problems out of reach of conventional computers. During a time when it's getting harder to improve ordinary microprocessors, quantum computers could pioneer new high-tech materials for solar panels and batteries, improve chemical processes, speed up package delivery, make factories more efficient and lower financial risks for investors.

Industrywide, quantum computing is a top research priority, with dozens of companies investing millions of dollars even though most don't expect a payoff for years. The US government is bolstering that effort with a massive multilab research effort. It's even become a headline event at this year's CES, a conference that more typically focuses on new TVs, laptops and other consumer products.

"Tactical and strategic funding is critical" to quantum computing's success, said Hyperion Research analyst Bob Sorensen. That's because, unlike more mature technologies, there's not yet any virtuous cycle where profits from today's quantum computing products and services fund the development of tomorrow's more capable successors.

IBM has taken a strong early position in quantum computing, but it's too early to pick winners in the market, Sorensen added.

The long-term goal is what's called a fault tolerant quantum computer, one that uses error correction to keep calculations humming even when individual qubits, the data processing element at the heart of quantum computers, are perturbed. In the nearer term, some customers like financial services giant JPMorgan Chase, carmaker Daimler and aerospace company Airbus are investing in quantum computing work today with the hope that it'll pay off later.

Quantum computing is complicated to say the least, but a few patents illustrate what's going on in IBM's labs.

Patent No. 10,622,536 governs different lattices in which IBM lays out its qubits. Today's 27-qubit "Falcon" quantum computers use this approach, as do the newer 65-qubit "Hummingbird" machines and the much more powerful 1,121-qubit "Condor" systems due in 2023.

A close-up view of an IBM quantum computer. The processor is in the silver-colored cylinder.

IBM's lattices are designed to minimize "crosstalk," in which a control signal for one qubit ends up influencing others, too. That's key to IBM's ability to manufacture working quantum processors and will become more important as qubit counts increase, letting quantum computers tackle harder problems and incorporate error correction, Chow said.

Patent No. 10,810,665 governs a higher-level quantum computing application for assessing risk -- a key part of financial services companies figuring out how to invest money. The more complex the options being judged, the slower the computation, but the IBM approach still outpaces classical computers.
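
The patent text itself is not quoted above, so as an assumption for illustration only: the standard quantum routine for this kind of risk calculation is amplitude estimation, and its textbook convergence advantage gives a sense of the speed-up being claimed. To estimate a risk measure to accuracy $\epsilon$, classical Monte Carlo sampling needs roughly

$$ M_{\text{classical}} = O\!\left(1/\epsilon^{2}\right) $$

samples, whereas quantum amplitude estimation needs only about

$$ M_{\text{quantum}} = O\!\left(1/\epsilon\right) $$

uses of the underlying circuit, a quadratic reduction in work for the same precision.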

Patent No. 10,599,989 describes a way of speeding up some molecular simulations, a key potential promise of quantum computers, by finding symmetries in molecules that can reduce computational complexity.

More comprehensible is patent No. 10,614,370, which describes quantum computing as a service. Because quantum computers typically must be supercooled to within a hair's breadth of absolute zero to avoid perturbing the qubits, and require spools of complicated wiring, most quantum computing customers are likely to tap into online services from companies like IBM, Google, Amazon and Microsoft that offer access to their own carefully managed machines.
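
To make the as-a-service model concrete, below is a minimal sketch of the kind of circuit such a cloud service would run, written with the open-source Qiskit SDK (an assumption; the article names no specific toolkit, and the call that actually submits the job to a hosted IBM, Google, Amazon or Microsoft backend varies by provider and is omitted). It prepares a two-qubit entangled state and inspects the outcome probabilities locally.

```python
# Minimal, illustrative sketch only (assumes `pip install qiskit`).
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

qc = QuantumCircuit(2)   # two qubits, both starting in |0>
qc.h(0)                  # Hadamard gate puts qubit 0 into superposition
qc.cx(0, 1)              # CNOT entangles qubit 1 with qubit 0 (a Bell state)

# Simulate locally; a cloud service would run the same circuit on real,
# supercooled hardware and return measurement counts over the network.
state = Statevector.from_instruction(qc)
print(state.probabilities_dict())   # expect roughly {'00': 0.5, '11': 0.5}
```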

See original here:
Quantum computing research helps IBM win top spot in patent race - CNET

Surprising Discovery of Unexpected Quantum Behavior in Insulators Suggests Existence of Entirely New Type of Particle – SciTechDaily

In a surprising discovery, Princeton physicists have observed an unexpected quantum behavior in an insulator made from a material called tungsten ditelluride. This phenomenon, known as quantum oscillation, is typically observed in metals rather than insulators, and its discovery offers new insights into our understanding of the quantum world. The findings also hint at the existence of an entirely new type of quantum particle.

The discovery challenges a long-held distinction between metals and insulators, because in the established quantum theory of materials, insulators were not thought to be able to experience quantum oscillations.

"If our interpretations are correct, we are seeing a fundamentally new form of quantum matter," said Sanfeng Wu, assistant professor of physics at Princeton University and the senior author of a recent paper in Nature detailing this new discovery. "We are now imagining a wholly new quantum world hidden in insulators. It's possible that we simply missed identifying them over the last several decades."

The observation of quantum oscillations has long been considered a hallmark of the difference between metals and insulators. In metals, electrons are highly mobile, and resistivity, the resistance to electrical conduction, is weak. Nearly a century ago, researchers observed that a magnetic field, coupled with very low temperatures, can cause electrons to shift from a classical state to a quantum state, causing oscillations in the metal's resistivity. In insulators, by contrast, electrons cannot move and the materials have very high resistivity, so quantum oscillations of this sort are not expected to occur, no matter the strength of magnetic field applied.
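
For readers who want the quantitative version: the oscillations described here are Shubnikov-de Haas oscillations, and the standard Onsager relation (textbook physics, not stated in the article) fixes their period in terms of the metal's Fermi surface,

$$ \Delta\!\left(\frac{1}{B}\right) = \frac{2\pi e}{\hbar\, A_F}, $$

where $A_F$ is the extremal cross-sectional area of the Fermi surface perpendicular to the magnetic field $B$. The puzzle in this work is that a monolayer insulator, which nominally has no Fermi surface of mobile charge carriers, shows such oscillations at all.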

The discovery was made when the researchers were studying a material called tungsten ditelluride, which they made into a two-dimensional material. They prepared the material by using standard Scotch tape to increasingly exfoliate, or shave, the layers down to what is called a monolayer, a single atom-thin layer. Thick tungsten ditelluride behaves like a metal. But once it is converted to a monolayer, it becomes a very strong insulator.

"This material has a lot of special quantum properties," Wu said.

The researchers then set about measuring the resistivity of the monolayer tungsten ditelluride under magnetic fields. To their surprise, the resistivity of the insulator, despite being quite large, began to oscillate as the magnetic field was increased, indicating the shift into a quantum state. In effect, the material, a very strong insulator, was exhibiting the most remarkable quantum property of a metal.

"This came as a complete surprise," Wu said. "We asked ourselves, 'What's going on here?' We don't fully understand it yet."

Wu noted that there are no current theories to explain this phenomenon.

Nonetheless, Wu and his colleagues have put forward a provocative hypothesis: a form of quantum matter that is neutrally charged. "Because of very strong interactions, the electrons are organizing themselves to produce this new kind of quantum matter," Wu said.

"But it is ultimately no longer the electrons that are oscillating," said Wu. Instead, the researchers believe that new particles, which they have dubbed neutral fermions, are born out of these strongly interacting electrons and are responsible for creating this highly remarkable quantum effect.

Fermions are a category of quantum particles that include electrons. In quantum materials, charged fermions can be negatively charged electrons or positively charged holes that are responsible for the electrical conduction. Namely, if the material is an electrical insulator, these charged fermions can't move freely. However, particles that are neutral, that is, neither negatively nor positively charged, could theoretically be present and mobile in an insulator.

"Our experimental results conflict with all existing theories based on charged fermions," said Pengjie Wang, co-first author on the paper and postdoctoral research associate, "but could be explained in the presence of charge-neutral fermions."

The Princeton team plans further investigation into the quantum properties of tungsten ditelluride. They are particularly interested in discovering whether their hypothesis about the existence of a new quantum particle is valid.

"This is only the starting point," Wu said. "If we're correct, future researchers will find other insulators with this surprising quantum property."

Despite the newness of the research and the tentative interpretation of the results, Wu speculated about how this phenomenon could be put to practical use.

"It's possible that neutral fermions could be used in the future for encoding information that would be useful in quantum computing," he said. "In the meantime, though, we're still in the very early stages of understanding quantum phenomena like this, so fundamental discoveries have to be made."

Reference: "Landau quantization and highly mobile fermions in an insulator" by Pengjie Wang, Guo Yu, Yanyu Jia, Michael Onyszczak, F. Alexandre Cevallos, Shiming Lei, Sebastian Klemenz, Kenji Watanabe, Takashi Taniguchi, Robert J. Cava, Leslie M. Schoop and Sanfeng Wu, Nature. DOI: 10.1038/s41586-020-03084-9

In addition to Wu and Wang, the team included co-first authors Guo Yu, a graduate student in electrical engineering, and Yanyu Jia, a graduate student in physics. Other key Princeton contributors were Leslie Schoop, assistant professor of chemistry; Robert Cava, the Russell Wellman Moore Professor of Chemistry; Michael Onyszczak, a physics graduate student; and three former postdoctoral research associates: Shiming Lei, Sebastian Klemenz and F. Alexandre Cevallos, who is also a 2018 Princeton Ph.D. alumnus. Kenji Watanabe and Takashi Taniguchi of the National Institute for Materials Science in Japan also contributed.

"Landau quantization and highly mobile fermions in an insulator," by Pengjie Wang, Guo Yu, Yanyu Jia, Michael Onyszczak, F. Alexandre Cevallos, Shiming Lei, Sebastian Klemenz, Kenji Watanabe, Takashi Taniguchi, Robert J. Cava, Leslie M. Schoop, and Sanfeng Wu, was published Jan. 4 in the journal Nature (DOI: 10.1038/s41586-020-03084-9).

This work was primarily supported by the National Science Foundation (NSF) through the Princeton University Materials Research Science and Engineering Center (DMR-1420541 and DMR-2011750) and a CAREER award (DMR-1942942). Early measurements were performed at the National High Magnetic Field Laboratory, which is supported by an NSF Cooperative Agreement (DMR-1644779), and the State of Florida. Additional support came from the Elemental Strategy Initiative conducted by the Ministry of Education, Culture, Sports, Science and Technology of Japan (JPMXP0112101001), the Japan Society for the Promotion of Science's KAKENHI program (JP20H00354) and the Japan Science and Technology Agency's CREST program (JPMJCR15F3). Further support came from the U.S. Army Research Office Multidisciplinary University Research Initiative on Topological Insulators (W911NF1210461), the Arnold and Mabel Beckman Foundation through a Beckman Young Investigator grant, and the Gordon and Betty Moore Foundation (GBMF9064).

Read the rest here:
Surprising Discovery of Unexpected Quantum Behavior in Insulators Suggests Existence of Entirely New Type of Particle - SciTechDaily

Find out what Dell Technologies has to say about quantum computing, 5G and more for this year – Nasi Lemak Tech

Dell Technologies has presented its outlook for 2021, laying out the technologies it is watching and the company's view and strategy toward each.

For the main discussion, the company shared its insights, analysis, and predictions for the top four emerging technologies of 2021, namely quantum computing, silicon chips, 5G, and multi-cloud edge solutions.

For starters, the company recognizes the promise and capability of quantum computing but notes that it will not be practical for at least a couple of years, and that it should be positioned as an augmentation of conventional computing, a new tier added at the top of the computing hierarchy. Dell is also struck by the fact that the cryptography sector has finally met a real challenger in terms of pure brute-force speed, and has started investing R&D resources to refine modern-day security solutions to match it. On the recommendation side, the company encourages the development of simulators and programming languages tailored specifically to quantum computing, so enough experts can be trained for the future.

On semiconductors, global leaders such as Apple, Intel, and AMD have all made moves to incorporate heterogeneous architectures such as big.LITTLE into their processors in one way or another. With NVIDIA purchasing Arm and AMD acquiring Xilinx, Dell Technologies is confident future servers will follow suit with similar architectures, with a focus on software modernization and integration platforms in conjunction with the silicon itself.

Enterprise use of 5G also drew the company's interest: Dell predicts the new standard will really take off this year as true standalone 5G specifications such as mMTC, URLLC and MEC provide the groundwork for telecommunications players to learn, adapt and deploy them in both public and private use cases. Software solution providers such as Dell Technologies itself, Microsoft and others will chime in to continuously refine 5G to be open yet standardized.

Finally, multi-cloud assimilation is expected to solve the problem of edge proliferation, the excess of independent edge systems that currently exist in the ecosystem, by clearly classifying resource pools and workload extensions into two distinct categories. Put simply, workloads and resources targeting public clouds and SaaS edges will involve more logical partitioning than in the past.

Amit Midha, President of the APAC and Japan region, added that the entire world is slowly shifting its focus to Asia in terms of business and the technology it carries forward into the future. Discussing the company's progress toward the social-impact goals it has set for 2030, with nine years to go, he said Dell is in the driver's seat to achieve a 1:1 ratio both in the use of recycled materials for manufacturing and in gender representation among its employees, alongside improving more than 1 billion lives for the greater good.

View post:
Find out what Dell Technologies has to say about quantum computing, 5G and more for this year - Nasi Lemak Tech

ADU Professor Receives Us Patent for a First-of-Its-Kind Hybrid Device Set To Advance the Field of Quantum Computing – Al-Bawaba

Abu Dhabi University's (ADU) Associate Professor of Electrical Engineering in the College of Engineering (CoE), Dr. Montasir Qasymeh, has received a U.S. patent, registered under 10,824,048 B2, to develop a first-of-its-kind device that will be capable of connecting superconducting quantum computers over significant distances.

Superconducting quantum computers are the extraordinary computers of the future that will surpass all current ones - and achieve ultrasensitive sensing and unattackable quantum communication networks. Unlike today's conventional computers, quantum computers can process huge amounts of data and perform computations in powerful new ways that were never possible before. Potential applications of quantum computing include accelerating innovations in artificial intelligence and machine learning and tackling future cybersecurity challenges.

Dr. Qasymeh's device is composed of graphene, a substance that has been hailed as a miracle material due to its electrical properties and the fact that it is the world's thinnest and second strongest material. Graphene has already innovated the technology sector and is being applied today to laptops, smartphones and headphones. Dr. Qasymeh has been working with graphene for the past seven years and has numerous publications that have studied this substance. The device converts a quantum microwave signal containing data to a laser beam using properly designed graphene layers that are electrically connected and subjected to a laser pump.

Dr. Montasir Qasymeh said: "I am humbled and honored to be granted this U.S. patent. This invention will advance the field of quantum computing in the UAE, taking us one step further towards the quantum age."

He also added: "The coming era is an era of knowledge wealth that brings with it the opportunity to advance all of humankind. I would like to express my sincerest gratitude to Abu Dhabi University for supporting this project and providing my team with access to its purpose-built academic facilities. I am proud and grateful for Abu Dhabi University's continued investment in research."

Dr. Hamdi Sheibani, Dean of the College of Engineering at ADU, commented: "We are extremely proud of yet another accomplishment from Dr. Montasir Qasymeh. This U.S. patent for one of our professors is evidence of ADU's culture of innovation and our continued commitment to the UAE Government's National Agenda to diversify our economy and strengthen our research and innovation sector. The College of Engineering at Abu Dhabi University is committed to supporting educators who serve as role models and mentors to their students and peers by leading by example through their teachings and projects."

The project was developed with the funding of two important grants: the ADEK Award for Research Excellence grant from the Ministry of Education, which was awarded for the research proposal "Graphene-Based Modulator for Passive Transmission and White Light Communications"; and the Takamul grant from the Department of Economic Development, which was awarded for patent filing.

Dr. Qasymeh received a Ph.D. degree in electrical engineering from Dalhousie University in Halifax, Canada, in 2010. From 2010 to 2011, he was a Mitacs Elevate Postdoctoral Fellow at the Microwave Photonics Research Laboratory, University of Ottawa, Canada. He joined Abu Dhabi University in 2011, where he continues to teach. With over 10 years of experience in the education and research industry, he has published more than 40 articles in reputed refereed journals and international conferences and has led on 4 U.S. patents (1 issued and 3 pending). He has attracted a significant amount of research funding (approximately AED 1.8 million), including 2 ADEK awards for research excellence.

During his tenure with Abu Dhabi University, Dr. Qasymeh has taught more than 17 different undergraduate and graduate courses. He is an active member of several national and international scientific committees and is a senior member of the Institute of Electrical and Electronics Engineers (IEEE), the world's largest technical professional organization dedicated to advancing technology. He is currently working on topics that include novel terahertz waveguides, room temperature quantum devices and ultrafast modulators.

See the rest here:
ADU Professor Receives Us Patent for a First-of-Its-Kind Hybrid Device Set To Advance the Field of Quantum Computing - Al-Bawaba

Quantum Computing Market Size 2021 By Analysis, Manufacturers, Regions, Type and Application, and Forecasts to 2027 – Jumbo News

Fort Collins, Colorado - The report on the Quantum Computing Market effectively provides key characteristics of the global investment market, population analysis, companies planning mergers and acquisitions, and new or interested vendors, drawing on reviews from reputable global research institutes. The Quantum Computing report by QY Research describes a comprehensive market study covering overview, production, manufacturers, dimensions, revenue, price, consumption, growth rate, sales, import, sourcing, export, future plans and technological advancement for a detailed study of the Quantum Computing Market. Alongside readily available, inexpensive reports, tailor-made research is offered by a team of experts. This report primarily focuses on the consumer and retail sectors.

The global Quantum Computing Market was valued at USD 193.68 million in 2019 and is projected to reach USD 1,379.67 million by 2027, growing at a CAGR of 30.02% from 2020 to 2027.

The Quantum Computing Market report comprises various chapters listing the participants that are playing a significant role in the global Quantum Computing Market growth. This section of the report displays the statistics of major players in the international market, including company profile, product specification, market share, and production value. The main types of segmentation mentioned in this report are the commercial and residential categories. Based on extensive historical data, a well-thought-out study of the estimated period for strong global expansion of the Quantum Computing market is produced.

Request a Discount on the report @ https://reportsglobe.com/ask-for-discount/?rid=32953

Market Segments and Sub-segments Covered in the Report are as per below:

Quantum Computing Market, By Offering

Consulting solutions
Systems

Quantum Computing Market, By Application

Machine Learning
Optimization
Material Simulation

Quantum Computing Market, By End-User

Automotive
Healthcare
Space and Defense
Banking and Finance
Others

It also provides accurate calculations and sales reports of the segments in terms of volume and value. The report introduces the industrial chain analysis, downstream buyers, and raw material sources along with the accurate insights of market dynamics. The report also studies the individual sales, revenue, and market share of every prominent vendor of the Quantum Computing Market. It majorly focuses on manufacturing analysis including the raw materials, cost structure, process, operations, and manufacturing cost strategies. The report delivers detailed data of big companies with information about their revenue margins, sales data, upcoming innovations and development, business models, strategies, investments, and business estimations.

The Quantum Computing Market reports deliver information about the industry competition between vendors through regional segmentation of markets in terms of revenue generation potential, business opportunities, demand & supply comparison taking place in the future. Understanding the Global perspective, the Quantum Computing Market report introduces an aerial view by analyzing historical data and future growth rate.

Request customization of the report @ https://reportsglobe.com/need-customization/?rid=32953

Quantum Computing Market: By Region

North America
Europe
The Asia Pacific
Latin America
The Middle East and Africa

The objectives of the Quantum Computing Global Market Study are:

Split the breakdown data by region, type, manufacturer, and application.
Identify trends, drivers, and key influencing factors around the world and in the regions.
Analysis and study of global Quantum Computing status and future forecast, including production, sales, consumption, history, and forecast.
Analysis of the potential and advantage, opportunities and challenges, limitations, and risks of the global market and key regions.
Analyze competitive developments such as expansions, agreements, product launches, and acquisitions in the market.
Introducing the major Quantum Computing manufacturers, production, sales, market share, and recent developments.

To learn more about the report, visit @ https://reportsglobe.com/product/global-quantum-computing-market/

Thanks for reading this article; you can also get individual chapter wise section or region wise report versions like North America, Europe, or Asia.

How Reports Globe is different than other Market Research Providers

The inception of Reports Globe has been backed by providing clients with a holistic view of market conditions and future possibilities/opportunities to reap maximum profits out of their businesses and assist in decision making. Our team of in-house analysts and consultants works tirelessly to understand your needs and suggest the best possible solutions to fulfill your research requirements.

Our team at Reports Globe follows a rigorous process of data validation, which allows us to publish reports from publishers with minimum or no deviations. Reports Globe collects, segregates, and publishes more than 500 reports annually that cater to products and services across numerous domains.

Contact us:

Mr. Mark Willams

Account Manager

US: +1-970-672-0390

Email: [emailprotected]

Web: reportsglobe.com

More here:
Quantum Computing Market Size 2021 By Analysis, Manufacturers, Regions, Type and Application, and Forecasts to 2027 - Jumbo News

A fertilizer revolution is on the horizon – Alberta Express

As fledgling technology goes, quantum computing sounds as science fiction as it gets. Most people have likely not even heard about it, let alone think it can be used for anything immediately useful.

But if IBM fulfills a very bold promise it made in September, crop producers will see the fruits of this technology in a very tangible way within the next five years.

By using quantum computing and artificial intelligence (AI) to speed up the process, IBM researchers are confident they can revolutionize the production of nitrogen fertilizer.

"Basically, for every ton of fertilizer produced, we consume one ton of fossil fuel," Teo Laino, manager of IBM Research Zurich, said in an email interview.

"We are working to identify and develop materials that will make the conversion of nitrogen into fertilizers happen in a more environmental and sustainable way."

If successful, this could mean lower nutrient costs for producers and, given growing concerns about greenhouse gases, produce a major PR win for the ag industry, as well.

IBM's goal is to improve the Haber-Bosch process (which turns nitrogen gas into ammonia, the feedstock for nitrate fertilizers) in a very fundamental way. This process, created by two German chemists more than a century ago, is both one of the greatest advances in agriculture and one of its biggest challenges.

"Fertilizers have helped to sustain two times more people on Earth (than otherwise)," said Laino.

"(But) this process is consuming nearly two to three per cent of the global energy production on a yearly basis."

The impact of the current process has brought the population to the verge of a sustainability crisis.

This is what prompted the global technology giant to pledge that it would use its quantum computing and AI capability to fundamentally improve the Haber-Bosch process.

"We will need to discover new processes that have a greater respect for the environment and the planet," he said. "This will also have benefits for the primary producers: less impact to the environment means less disaster events related to climate."

"This effort to find a much less energy-intensive way to make fertilizer in five years is tremendously exciting," said University of Manitoba soil scientist Mario Tenuta, one of the country's leading experts on fertilizer use and the senior Canada research chair in 4R nutrient management.

"(IBM is) not thinking about making a widget; it's thinking about making something that's going to change the structure of our industrial processes and get us closer to where we need to go in terms of living and sustaining our presence here," he said.

IBM's specific goal is to find a new catalyst for the Haber-Bosch process, a seemingly small thing but one with huge implications. (A catalyst is a substance that makes a chemical reaction proceed much more quickly without being consumed in the reaction.)

Making nitrogen fertilizer requires, not surprisingly, nitrogen. There's plenty out there (it makes up 78 per cent of the air we breathe), but plants can only use it in its fixed form. In nature, that means it must be harvested from the atmosphere by micro-organisms to form ammonia, nitrites and nitrates which help plants grow.

IBM's bid to greatly reduce the amount of energy needed to make fertilizer would be a huge advance for agriculture, says Mario Tenuta, a University of Manitoba soil scientist and one of the country's top experts on fertilizer use. Photo: Supplied

Legumes can do this, but to do it on an industrial scale with the Haber-Bosch process requires very high temperatures, and hence lots of energy. For decades, researchers have tried to engineer a better catalyst that would reduce the energy needed to produce ammonia through the Haber-Bosch process, but identifying one has been problematic. There are virtually endless combinations of materials to sort through and processing all of them has proven itself beyond the capacity of both humans and traditional computers.
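
For reference, the overall reaction at the heart of the Haber-Bosch process (standard chemistry, not spelled out in the article) is

$$ \mathrm{N_2 + 3\,H_2 \;\rightleftharpoons\; 2\,NH_3}, \qquad \Delta H \approx -92\ \mathrm{kJ\ mol^{-1}}, $$

and it is the roughly 400-500 °C temperatures and hundreds of atmospheres of pressure needed to drive it over an iron-based catalyst that account for the enormous energy bill a better catalyst would shrink.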

Thats where quantum computing comes in.

Quantum computers are exponentially faster than even the largest mainframe computers. The simplest explanation is that instead of encoding information in bits that exist in a binary state of either 1 or 0, they use qubits that exist in states of both 1 and 0 simultaneously. You'd probably need a PhD in quantum physics to understand much more, but it is this state of superposition that makes quantum computing so fast.
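
In standard notation (added here for clarity), a single qubit sits in a weighted combination of both classical values at once,

$$ |\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \qquad |\alpha|^{2} + |\beta|^{2} = 1, $$

and a register of $n$ qubits is described by $2^{n}$ such amplitudes, which is the exponential headroom that lets a quantum computer sift through vast spaces of candidate catalyst materials.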

IBM researchers plan to extract the materials quantum computers identify as possible catalysts and then, with the help of AI, construct, test and validate predictive models that could make a more energy-efficient fertilizer production process possible.

"We use an entire ecosystem of technologies, from AI to tackle sustainable goal challenges to quantum computing, which is an important part of accelerating the scientific discovery process," said Laino. "We currently use quantum computing to address several important chemical challenges in these processes."

"Meanwhile, we study how to solve more holistic problems while making progress on a road map that is bringing us closer to running bigger solutions on larger quantum computing hardware."

If all is successful, the next step would be to scale the process.

The company foresees the use of fuel cells that would work like a reverse battery. Basically, instead of storing energy, fuel cells would use energy from renewable sources to combine nitrogen from the atmosphere and hydrogen from water to produce ammonia. The catalytic molecules identified by the technology would be used to lower the amount of energy needed to sustain the nitrogen fixation process.

While this would be a good thing overall, what would it actually mean for crop farmers trying to keep their input costs down?

Basic economics dictate that a less energy-intensive process for making fertilizer should mean savings for fertilizer companies (less energy equals less cost), with those savings theoretically passed on to the producer.

However, there are still some unknown factors at play, particularly when it comes to the kind of energy that will fuel fertilizer production, said Tenuta.

With IBM's focus on using renewables such as solar and hydro as fuel for the production process, how much farmers would wind up paying for the end product is anyone's guess, he said.

"I am personally expecting that by 2050 our reliance on fossil fuels as an energy source is going to be in the minority compared to renewables," he said. "You just don't know what the cost of those renewables is going to be down the road."

That said, Tenuta believes the fact that IBM is looking for this catalyst using in-reach technology is itself remarkable.

And who knows where that might lead, he said, adding it might even allow fertilizer to be made on farms.

"Maybe a really good catalyst will ensure there is no difference between a factory and a farmer's yard," said Tenuta. "You would still think that (production) would be more efficient in a big factory than making it at a small scale, but who knows?"

Read more here:
A fertilizer revolution is on the horizon - Alberta Express

The Promise and Impact of Quantum Computing on Cybersecurity – Analytics Insight

Quantum computing is emerging as a subfield of quantum information science. This technology has already started attracting interest from researchers and technology companies with almost feverish excitement and activity. Companies have even begun racing to achieve quantum supremacy. In 2019, Google officially announced that it achieved quantum supremacy. Quantum computing promises great potential in diverse areas, including medical research, financial modeling, traffic optimization, artificial intelligence, weather forecasting, and more.

Quantum computing can be a ground-breaking technology for cybersecurity, enabling companies to improve their cybersecurity strategies. It will help detect and deflect quantum computing-based attacks before they cause harm to groups and individuals.

Quantum cybersecurity is the field of study of all aspects affecting the security and privacy of communications and computations owing to the development of quantum technologies. Quantum computers are likely to solve problems that cannot be done by traditional computers, such as cracking the algorithms behind the encryption keys that safeguard data and the internet's infrastructure. Moreover, since most of today's encryption relies heavily on mathematical problems that would take an impractically long time to solve using today's computers, a quantum computer could factor the underlying numbers and break the code.

Over 20 years ago, Peter Shor, an MIT professor of applied mathematics, developed a quantum algorithm that could easily factor large numbers far more quickly than a conventional computer. Since then, scientists have been working on developing quantum computers that can break asymmetric encryption.
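
To put rough numbers on the gap Shor's algorithm opened (standard complexity results, not from the article): RSA-style encryption rests on the difficulty of factoring a large modulus $N = pq$. Shor's quantum algorithm factors $N$ in time polynomial in the number of digits, $\mathrm{poly}(\log N)$, whereas the best known classical method, the general number field sieve, takes time on the order of

$$ \exp\!\Big(O\big((\log N)^{1/3}(\log\log N)^{2/3}\big)\Big), $$

which grows super-polynomially. That asymptotic gulf is why a sufficiently large, error-corrected quantum computer would break today's public-key encryption.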

The development of large quantum computers could have calamitous consequences for cybersecurity. In this context, thinking ahead about quantum cybersecurity solutions will be an advantage. Quantum cybersecurity can pave the way for more robust and compelling opportunities to secure critical and personal data. It will be particularly useful in quantum machine learning and quantum random number generation, as noted by IBM.

The pace of quantum research will undoubtedly continue to accelerate in the years ahead. But it will also pose challenges and vulnerabilities for mission-critical information that needs to retain its secrecy. Adopting advanced cryptography to address these threats is the obvious solution. The post-quantum cryptography approach is based on creating algorithms that are hard to break even for quantum computers; this approach also works with conventional computers.

Another security approach against quantum computing attacks is lattice-based cryptography. Conventional cryptographic algorithms can be replaced with lattice-based algorithms that are designed with proven security. These new algorithms can conceal data inside complex math problems called lattices. Google already has begun testing post-quantum cryptography methods that integrate lattice-based algorithms. According to IBM researcher Cecilia Boschini, lattice-based cryptography will prevent future quantum computing-based attacks and form a basis for Fully Homomorphic Encryption (FHE) that makes it possible for users to perform calculations on a file without seeing the data or revealing it to hackers. The NSA, NIST, and other governmental agencies are also starting to invest in this developing method.
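
The article does not name a specific lattice construction, so as one illustrative example: many of these schemes, including the homomorphic encryption mentioned above, rest on the Learning With Errors (LWE) problem, sketched below with toy parameters in plain NumPy. Recovering the secret from the published pair (A, b) is easy linear algebra without the noise term, but with it no efficient classical or quantum attack is known at realistic sizes.

```python
# Toy LWE instance, illustration only; not a secure or complete scheme.
import numpy as np

rng = np.random.default_rng(0)
q, n, m = 97, 8, 16                     # small demonstration parameters
A = rng.integers(0, q, size=(m, n))     # public random matrix
s = rng.integers(0, q, size=n)          # secret vector
e = rng.integers(-2, 3, size=m)         # small random "error" per equation

b = (A @ s + e) % q                     # published alongside A

# The lattice problem: given only (A, b), find s despite the hidden noise e.
print("public data shapes:", A.shape, b.shape)
```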

Moreover, according to a Forbes article, quantum computing can transform cybersecurity in four areas: quantum random number generation, which is fundamental to cryptography; quantum-secure communications, specifically quantum key distribution (QKD); post-quantum cryptography; and quantum machine learning.

Visit link:
The Promise and Impact of Quantum Computing on Cybersecurity - Analytics Insight

AI Helps Solve Schrdinger’s Equation What Does The Future Hold? – Analytics India Magazine

Scientists at the Freie Universität Berlin have come up with an AI-based solution for calculating the ground state of the Schrödinger equation in quantum chemistry.

The Schrödinger equation is primarily used to predict the chemical and physical properties of a molecule based on the arrangement of its atoms. The equation helps determine where the electrons and nuclei of a molecule are and, under a given set of conditions, what their energies are.

The equation has the same central importance as Newton's laws of motion, which can predict an object's position at a particular moment, but applies in quantum mechanics, that is, to atoms and subatomic particles.
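
For reference, the time-independent form of the equation being solved (standard notation, added here) is

$$ \hat{H}\,\psi(\mathbf{r}_1,\dots,\mathbf{r}_N) = E\,\psi(\mathbf{r}_1,\dots,\mathbf{r}_N), $$

where $\hat{H}$ is the Hamiltonian operator encoding the kinetic energy of the electrons and their Coulomb interactions with each other and with the nuclei, $\psi$ is the many-electron wave function and $E$ is the energy; the ground state mentioned above is simply the solution with the lowest $E$.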

The article describes how the neural network developed by the scientists at the Freie Universität Berlin brings more accuracy to solving the Schrödinger equation and what this means for the future.

In principle, the Schrödinger equation can be solved to predict the exact location of atoms or subatomic particles in a molecule, but in practice this is extremely difficult, since it involves a lot of approximation.

Central to the equation is a mathematical object, the wave function, that specifies the behaviour of the electrons in a molecule. But the high dimensionality of the wave function makes it extremely difficult to work out how the electrons affect each other. Thus the most you get from the mathematical representation is a probabilistic account, not exact answers.

This limits the accuracy with which we can find properties of a molecule like the configuration, conformation, size, and shape, which can help define the wave function. The process becomes so complex that it becomes impossible to implement the equation beyond a few atoms.

Replacing the mathematical building blocks, the scientists at Freie Universität Berlin came up with a deep neural network that is capable of learning the complex patterns of how electrons are located around the nuclei.

The scientists developed a deep neural network (DNN) model, PauliNet, that has several advantages over conventional methods for studying quantum systems, such as quantum Monte Carlo or other classical quantum chemistry methods.

The DNN model developed by these scientists is highly flexible and allows for a variational approach that can aid accurate calculation of electronic properties beyond the electronic energies.

Secondly, it also allows easy calculation of many-body and more complex correlations with fewer determinants, reducing the need for higher computational power. The model mainly helped solve a major tradeoff between accuracy and computational cost that is often faced while solving the Schrödinger equation.

The model can also calculate the local energy of heavy nuclei like heavy metals without using pseudo-potentials or approximations.

Lastly, the model developed in the study has antisymmetry functions and other principles crucial to electronic wave functions built into the DNN, rather than leaving the model to learn them. Building this fundamental physics into the model has helped it make meaningful and accurate predictions.

In recent years, artificial intelligence has helped solve many scientific problems that otherwise seemed impossible using traditional methods.

AI has become instrumental in anticipating the results of experiments or simulations of quantum systems, especially given the complex nature of the science. In 2018, reinforcement learning was used to autonomously design new quantum experiments in automated laboratories.

Recent efforts by the University of Warwick, and others by IBM and DeepMind, have also tried to solve the Schrödinger equation. However, PauliNet, with its greater accuracy in solving the equation, now presents the potential for use in many real-life applications.

Understanding a molecule's composition can help accelerate drug discovery, which was previously difficult because of the approximations needed to understand its properties.

Similarly, it could also help in discovering other materials and metamaterials, such as new catalysts, industrial chemicals and new pesticides, among others. It can also be used to characterise molecules that are synthesised in laboratories.

Several academic and commercial software packages use the Schrödinger equation at their core for application-specific purposes, and the accuracy of this software will improve. Quantum computing itself is based on the quantum phenomenon of superposition and is made up of qubits that take advantage of that principle; quantum computing performance will improve as qubits can be measured faster.

While the current study has come up with a faster, cheaper, and accurate solution, there are many challenges to overcome before it is industry-ready.

However, once it is ready, the world will witness many applications as a result of greater accuracy in solving the Schrödinger equation.

Originally posted here:
AI Helps Solve Schrödinger's Equation: What Does The Future Hold? - Analytics India Magazine

The Year Ahead: 3 Predictions From the ‘Father of the Internet’ Vint Cerf – Nextgov

In 2011, the movie "Contagion" eerily predicted what a future world fighting a deadly pandemic would look like. In 2020, I, along with hundreds of thousands of people around the world, saw this Hollywood prediction play out by being diagnosed with COVID-19. It was a frightening year by any measure, as every person was impacted in unique ways.

Having been involved in the development of the Internet in the 1970s, I've seen first-hand the impact of technology on people's lives. We are now seeing another major milestone in our lifetime: the development of a COVID-19 vaccine.

What the"Contagion" didnt show is what happens after a vaccine is developed. Now, as we enter 2021, and with the first doses of a COVID-19 vaccine being administered, a return to normal feels within reach. But what will our return to normal look like really? Here are threepredictions for 2021.

1. Continuous and episodic Internet of Medical Things monitoring devices will prove popular for remote medical diagnosis. The COVID-19 pandemic has dramatically changed the practice of clinical medicine at least in the parts of the world where Internet access is widely available and at high enough speeds to support video conferencing. A video consult is often the only choice open to patients short of going to a hospital when outpatient care is insufficient. Video-medicine is unsatisfying in the absence of good clinical data (temperature, blood pressure, pulse for example). The consequence is that health monitoring and measurement devices are increasingly valued to support remote medical diagnosis.

My Prediction: While the COVID-19 pandemic persists into 2021, demand for remote monitoring and measurement will increase. In the long run, this will lead to periodic and continuous monitoring and alerting for a wide range of chronic medical conditions. Remote medicine and early warning health prediction will in turn help citizens save on health care costs and improve and further extend life expectancy.

2. Cities will (finally) adopt self-driving cars. Self-driving cars are anything but new, having emerged from a Defense Advanced Research Projects Agency Grand Challenge in 2004. Sixteen years later, many companies are competing to make this a reality but skeptics around this technology remain.

My Prediction: In the COVID-19 aftermath, I predict driverless car service will grow in 2021 as people will opt for rides that minimize exposure to drivers and self-clean after every passenger. More cities and states will embrace driverless technology to accommodate changing transportation and public transportation preferences.

3. A practical quantum computation will be demonstrated. In 2019, Google reported that it had demonstrated an important quantum supremacy milestone by showing a computation in minutes that would have taken a conventional computer thousands of years to complete. The computation, however, did not solve any particular practical problem.

My Prediction: In the intervening period, progress has been made and it seems likely that by 2021, we will see some serious application of quantum computing to solve one or more optimization problems in mechanical design, logistics scheduling or resource allocation that would be impractical with conventional supercomputing.

Despite the challenges 2020 presented, it also unlocked some opportunities like leapfrogging with tech adoption. My hope is that the public sector sustains the speed for innovation and development to unlock even greater advancements in the year ahead.

Vinton G. Cerf is vice president and chief Internet evangelist for Google. Cerf has held positions at MCI, the Corporation for National Research Initiatives, Stanford University, UCLA and IBM. Vint Cerf served as chairman of the board of the Internet Corporation for Assigned Names and Numbers (ICANN) and was founding president of the Internet Society. He served on the U.S. National Science Board from 2013-2018.

View post:
The Year Ahead: 3 Predictions From the 'Father of the Internet' Vint Cerf - Nextgov

IBM Provides Harris-Stowe State University with $2M in AI, Cloud Resources for Student Skill Building – HPCwire

ST. LOUIS, Jan. 6, 2021 Harris-Stowe State University has announced a multi-million dollar collaboration with IBM on a comprehensive program designed to develop diverse and high demand skill sets that align with industry needs and trends so both students and faculty can develop the skills they need today for the jobs of tomorrow.

IBM and Harris-Stowe State University are building on the need to advance digital skills in education and are dedicated to providing future focused curriculum and educational tools to help train the diverse workforce of tomorrow in fast-growing technologies such as artificial intelligence (AI), blockchain, data science, cybersecurity, cloud and quantum.

"Harris-Stowe State University is thrilled to collaborate with IBM to provide greater access to skills and training in the tech industry," said Dr. Corey S. Bradford, Sr., president of Harris-Stowe State University. "As the world, more than ever, relies on the use of science, technology, engineering, and mathematics to solve grand societal challenges, Harris-Stowe must continue to develop well-prepared and ready graduates to join the STEM workforce. This collaboration is yet another example of our commitment to supporting student and faculty development and assisting in preparing students to compete and lead globally."

The collaboration extends IBM's recent investment in technology, assets, resources and skills development with HBCUs across the United States through the IBM Skills Academy and enhanced IBM Academic Initiative.

"Equal access to skills and jobs is the key to unlocking economic opportunity and prosperity for diverse populations," said Valinda Scarbro Kennedy, HBCU Program Lead, IBM Global University Programs. "As we announced earlier this fall, IBM is deeply committed to helping HBCU students build their skills to better prepare for the future of work. Through this collaboration, Harris-Stowe State University students will have an opportunity to gain modern skills in emerging technologies across hybrid cloud, quantum and AI so they can be better prepared for the future of work in the digital economy."

As part of its multi-year Global University Programs, which include the IBM Academic Initiative and the IBM Skills Academy, IBM is providing more than $100M in assets, faculty training, pre-built and maintained curriculum content, hands on labs, use cases, digital badges and software to participating HBCUs. The IBM Academic Initiative provides access to resources at no-charge for teaching, learning and non-commercial research with recent enhancements including access to guest lectures. The IBM Skills Academy is a comprehensive, integrated program through an education portal designed to create a foundation of diverse and high demand skill sets that directly correlate to what students will need in the workplace. The learning tracks address topics such as artificial intelligence, cybersecurity, blockchain, data science and quantum computing.

IBM's investment in HBCUs like Harris-Stowe State University is part of the company's dedicated work to promote social justice and racial equality by creating equitable, innovative experiences for HBCU students to acquire the necessary skills to help unlock economic opportunity and prosperity.

About IBM

IBM is a global leader in business transformation, serving clients in more than 170 countries around the world with open hybrid cloud and AI technology. For more information, please visit here.

About Harris-Stowe State University

Harris-Stowe State University (HSSU), located in midtown St. Louis, offers the most affordable bachelor's degree in the state of Missouri. The University is a fully accredited four-year institution with more than 50 majors, minors and certificate programs in education, business and arts and sciences. Harris-Stowe's mission is to provide outstanding educational opportunities for individuals seeking a rich and engaging academic experience. HSSU's programs are designed to nurture intellectual curiosity and build authentic skills that prepare students for leadership roles in a global society.

Source: IBM

The rest is here:
IBM Provides Harris-Stowe State University with $2M in AI, Cloud Resources for Student Skill Building - HPCwire

Photonic processor heralds new computing era – The Engineer

A multinational team of researchers has developed a photonic processor that uses light instead of electronics and could help usher in a new dawn in computing.

Current computing relies on electrical current passed through circuitry on ever-smaller chips, but in recent years this technology has been bumping up against its physical limits.

To facilitate the next generation of computation-hungry technology such as artificial intelligence and autonomous vehicles, researchers have been searching for new methods to process and store data that circumvent those limits, and photonic processors are the obvious candidate.

Funding boost for UK quantum computing

Featuring scientists from the Universities of Oxford, Münster, Exeter, Pittsburgh, École Polytechnique Fédérale (EPFL) and IBM Research Europe, the team developed a new approach and processor architecture.

The photonic prototype essentially combines processing and data storage functionalities onto a single chip, so-called in-memory processing, but using light.

"Light-based processors for speeding up tasks in the field of machine learning enable complex mathematical tasks to be processed at high speeds and throughputs," said Münster University's Wolfram Pernice, one of the professors who led the research.

"This is much faster than conventional chips which rely on electronic data transfer, such as graphics cards or specialised hardware like TPUs [tensor processing units]."

Led by Pernice, the team combined integrated photonic devices with phase-change materials (PCMs) to deliver super-fast, energy-efficient matrix-vector (MV) multiplications. MV multiplications underpin much of modern computing, from AI to machine learning and neural network processing, and the imperative to carry out such calculations at ever-increasing speeds, but with lower energy consumption, is driving the development of a whole new class of processor chips, so-called tensor processing units (TPUs).

The team developed a new type of photonic TPU capable of carrying out multiple MV multiplications simultaneously and in parallel. This was facilitated by using a chip-based frequency comb as a light source, which enabled the team to use multiple wavelengths of light to do parallel calculations, since different colours of light do not interfere with one another.
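To make the arithmetic concrete, the sketch below (a minimal NumPy illustration, not the photonic implementation itself) shows the matrix-vector product a neural-network layer performs and how processing several input vectors at once loosely parallels the chip's use of multiple wavelengths.

import numpy as np

# A neural-network layer reduces to a matrix-vector multiplication,
# the operation the photonic TPU carries out in the optical domain.
weights = np.random.rand(4, 8)   # 4 output neurons, 8 inputs
x = np.random.rand(8)            # one input vector
y = weights @ x                  # one MV multiplication

# Wavelength multiplexing is loosely analogous to batching independent
# input vectors and computing all the products in one pass.
batch = np.random.rand(8, 16)    # 16 input vectors, one per "wavelength"
Y = weights @ batch              # 16 MV multiplications together
print(y.shape, Y.shape)          # (4,) (4, 16)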

"Our study is the first to apply frequency combs in the field of artificial neural networks," said Tobias Kippenberg, professor at EPFL.

"The frequency comb provides a variety of optical wavelengths which are processed independently of one another in the same photonic chip."

Described in Nature, the photonic processor is part of a new wave of light-based computing that could fundamentally reshape the digital world and prompt major advances in a range of areas, from AI and neural networks to medical diagnosis.

"Our results could have a wide range of applications," said Prof Harish Bhaskaran from the University of Oxford.

"A photonic TPU could quickly and efficiently process huge data sets used for medical diagnoses, such as those from CT, MRI and PET scanners."

See the rest here:
Photonic processor heralds new computing era - The Engineer

Deep Science: Using machine learning to study anatomy, weather and earthquakes – TechCrunch

Research papers come out far too rapidly for anyone to read them all, especially in the field of machine learning, which now affects (and produces papers in) practically every industry and company. This column aims to collect the most relevant recent discoveries and papers particularly in but not limited to artificial intelligence and explain why they matter.

This week has a bit more basic research than consumer applications. Machine learning can be applied to advantage in many ways that users benefit from, but it's also transformative in areas like seismology and biology, where enormous backlogs of data can be leveraged to train AI models or mined as raw material for insights.

We're surrounded by natural phenomena that we don't really understand. Obviously we know where earthquakes and storms come from, but how exactly do they propagate? What secondary effects are there if you cross-reference different measurements? How far ahead can these things be predicted?

A number of recently published research projects have used machine learning to attempt to better understand or predict these phenomena. With decades of data available to draw from, there are insights to be gained across the board this way, if the seismologists, meteorologists and geologists interested can obtain the funding and expertise to do so.

The most recent discovery, made by researchers at Los Alamos National Labs, uses a new source of data as well as ML to document previously unobserved behavior along faults during slow quakes. Using synthetic aperture radar captured from orbit, which can see through cloud cover and at night to give accurate, regular imaging of the shape of the ground, the team was able to directly observe rupture propagation for the first time, along the North Anatolian Fault in Turkey.

"The deep-learning approach we developed makes it possible to automatically detect the small and transient deformation that occurs on faults with unprecedented resolution, paving the way for a systematic study of the interplay between slow and regular earthquakes, at a global scale," said Los Alamos geophysicist Bertrand Rouet-Leduc.

Another effort, which has been ongoing for a few years now at Stanford, helps Earth science researcher Mostafa Mousavi deal with the signal-to-noise problem with seismic data. Poring over data being analyzed by old software for the billionth time one day, he felt there had to be a better way, and he has spent years working on various methods. The most recent is a way of teasing out evidence of tiny earthquakes that went unnoticed but still left a record in the data.

The Earthquake Transformer (named after a machine-learning technique, not the robots) was trained on years of hand-labeled seismographic data. When tested on readings collected during Japans magnitude 6.6 Tottori earthquake, it isolated 21,092 separate events, more than twice what people had found in their original inspection and using data from less than half of the stations that recorded the quake.

Image Credits: Stanford University

The tool won't predict earthquakes on its own, but better understanding the true and full nature of the phenomena means we might be able to by other means. "By improving our ability to detect and locate these very small earthquakes, we can get a clearer view of how earthquakes interact or spread out along the fault, how they get started, even how they stop," said co-author Gregory Beroza.

See original here:
Deep Science: Using machine learning to study anatomy, weather and earthquakes - TechCrunch

The future of software testing: Machine learning to the rescue – TechBeacon

The last decade has seen a relentless push to deliver software faster. Automated testing has emerged as one of the most important technologies for scaling DevOps, companies are investing enormous time and effort to build end-to-end software delivery pipelines, and containers and their ecosystem are delivering on their early promise.

The combination of delivery pipelines and containers has helped high performers to deliver software faster than ever. That said, many organizations are still struggling to balance speed and quality. Many are stuck trying to make headway with legacy software, large test suites, and brittle pipelines. So where do you go from here?

In the drive to release quickly, end users have become software testers. But they no longer want to be your testers, and companies are taking note. Companies now want to ensure that quality is not compromised in the pursuit of speed.

Testing is one of the top DevOps controls that organizations can leverage to ensure that their customers engage with a delightful brand experience. Others include access control, activity logging, traceability, and disaster recovery. Our company's research over the past year indicates that slow feedback cycles, slow development loops, and developer productivity will remain the top priorities over the next few years.

Quality and access control are preventative controls, while others are reactive. There will be an increasing focus on quality in the future because it prevents customers from having a bad experience. Thus, delivering value fast, or better yet, delivering the right value at the right quality level fast, is the key trend that we will see this year and beyond.

Here are the five key trends to watch.

Test automation efforts will continue to accelerate. A surprising number of companies still have manual tests in their delivery pipeline, but you can't deliver fast if you have humans in the critical path of the value chain, slowing things down. (The exception is exploratory testing, where humans are a must.)

Automating manual tests is a long process that requires dedicated engineering time. While many organizations have at least some test automation, there's more that needs to be done. That's why automated testing will remain one of the top trends going forward.

As teams automate tests and adopt DevOps, quality must become part of the DevOps mindset. That means quality will become a shared responsibility of everyone in the organization.

Figure 2. Top performers shift tests around to create new workflows. They shift left for earlier validation and right to speed up delivery. Source: Launchable

Teams will need to become more intentional about where tests land. Should they shift tests left to catch issues much earlier, or should they add more quality controls to the right? On the "shift-right" side of the house, practices such as chaos engineering and canary deployments are becoming essential.

Shifting large test suites left is difficult because you don't want to introduce long delays while running tests in an earlier part of your workflow. Many companies tag some tests from a large suite to run in pre-merge, but the downside is that these tests may or may not be relevant to a specific change set. Predictive test selection (see trend 5 below) provides a compelling solution for running just the relevant tests.

Over the past six to eight years, the industry has focused on connecting various tools by building robust delivery pipelines. Each of those tools generates a heavy exhaust of data, but that data is being used minimally, if at all. We have moved from "craft" or "artisanal" solutions to the "at-scale" stage in the evolution of tools in delivery pipelines.

The next phase is to bring smarts to the tooling. Expect to see an increased emphasis by practitioners on making data-driven decisions.

There are two key problems in testing: not enough tests, and too many of them. Test-generation tools take a shot at the first problem.

To create a UI test today, you must either write a lot of code or have a tester click through the UI manually, which is an incredibly painful and slow process. To relieve this pain, test-generation tools use AI to create and run UI tests on various platforms.

For example, one tool my team explored uses a "trainer" that lets you record actions on a web app to create scriptless tests. While scriptless testing isn't a new idea, what is new is that this tool "auto-heals" tests in lockstep with the changes to your UI.

Another tool that we explored has AI bots that act like humans. They tap buttons, swipe images, type text, and navigate screens to detect issues. Once they find an issue, they create a ticket in Jira for the developers to take action on.

More testing tools that use AI will gain traction in 2021.

AI has other uses for testing apart from test generation. For organizations struggling with the runtimes of large test suites, an emerging technology called predictive test selection is gaining traction.

Many companies have thousands of tests that run all the time. Testing a small change might take hours or even days to get feedback on. While more tests are generally good for quality, they also mean that feedback comes more slowly.

To date, companies such as Google and Facebook have developed machine-learning algorithms that process incoming changes and run only the tests that are most likely to fail. This is predictive test selection.

What's amazing about this technology is that you can run between 10% and 20% of your tests to reach 90% confidence that a full run will not fail. This allows you to reduce a five-hour test suite that normally runs post-merge to 30 minutes on pre-merge, running only the tests that are most relevant to the source changes. Another scenario would be to reduce a one-hour run to six minutes.
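A rough sketch of the idea, assuming a team has historical pass/fail records to learn from (this is illustrative only, not any vendor's actual algorithm): train a classifier on change-and-test features, then run only the tests ranked most likely to fail.

# Hypothetical predictive-test-selection sketch: rank tests by predicted
# failure probability for a change set and run only the top slice.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Assumed features per (change, test) pair: files touched, historical
# failure rate, path overlap with the change. Label: 1 = the test failed.
X_train = np.random.rand(1000, 3)
y_train = np.random.randint(0, 2, 1000)
model = GradientBoostingClassifier().fit(X_train, y_train)

def select_tests(test_ids, test_features, budget=0.15):
    """Return roughly the 15% of tests most likely to fail for this change."""
    scores = model.predict_proba(test_features)[:, 1]
    k = max(1, int(len(test_ids) * budget))
    ranked = sorted(zip(test_ids, scores), key=lambda pair: pair[1], reverse=True)
    return [test_id for test_id, _ in ranked[:k]]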

Expect predictive test selection to become more mainstream in 2021.

Automated testing is taking over the world. Even so, many teams are struggling to make the transition. Continuous quality culture will become part of the DevOps mindset. Tools will continue to become smarter. Test-generation tools will help close the gap between manual and automated testing.

But as teams add more tests, they face real problems with test execution time. While more tests help improve quality, they often become a roadblock to productivity. Machine learning will come to the rescue as we roll into 2021.

See the original post here:
The future of software testing: Machine learning to the rescue - TechBeacon

Five real world AI and machine learning trends that will make an impact in 2021 – IT World Canada

Experts predict artificial intelligence (AI) and machine learning will enter a golden age in 2021, solving some of the hardest business problems.

Machine learning trains computers to learn from data with minimal human intervention. The science isn't new, but recent developments have given it fresh momentum, said Jin-Whan Jung, Senior Director & Leader, Advanced Analytics Lab at SAS. "The evolution of technology has really helped us," said Jung. "The real-time decision making that supports self-driving cars or robotic automation is possible because of the growth of data and computational power."

The COVID-19 crisis has also pushed the practice forward, said Jung. "We're using machine learning more for things like predicting the spread of the disease or the need for personal protective equipment," he said. Lifestyle changes mean that AI is being used more often at home, such as when Netflix makes recommendations on the next show to watch, noted Jung. As well, companies are increasingly turning to AI to improve their agility to help them cope with market disruption.

Jung's observations are backed by the latest IDC forecast. It estimates that global AI spending will double to $110 billion over the next four years. How will AI and machine learning make an impact in 2021? Here are the top five trends identified by Jung and his team of elite data scientists at the SAS Advanced Analytics Lab:

Canada's Armed Forces rely on Lockheed Martin's C-130 Hercules aircraft for search and rescue missions. Maintenance of these aircraft has been transformed by the marriage of machine learning and IoT. Six hundred sensors located throughout the aircraft produce 72,000 rows of data per flight hour, including fault codes on failing parts. By applying machine learning, the system develops real-time best practices for the maintenance of the aircraft.

"We are embedding the intelligence at the edge, which is faster and smarter, and that's the key to the benefits," said Jung. Indeed, the combination is so powerful that Gartner predicts that by 2022, more than 80 per cent of enterprise IoT projects will incorporate AI in some form, up from just 10 per cent today.

Computer vision trains computers to interpret and understand the visual world. Using deep learning models, machines can accurately identify objects in videos, or images in documents, and react to what they see.

The practice is already having a big impact on industries like transportation, healthcare, banking and manufacturing. "For example, a camera in a self-driving car can identify objects in front of the car, such as stop signs, traffic signals or pedestrians, and react accordingly," said Jung. Computer vision has also been used to analyze scans to determine whether tumors are cancerous or benign, avoiding the need for a biopsy. In banking, computer vision can be used to spot counterfeit bills or for processing document images, rapidly robotizing cumbersome manual processes. In manufacturing, it can improve defect detection rates by up to 90 per cent. And it is even helping to save lives: cameras monitor and analyze power lines to enable early detection of wildfires.

At the core of machine learning is the idea that computers are not simply trained based on a static set of rules but can learn to adapt to changing circumstances. "It's similar to the way you learn from your own successes and failures," said Jung. "Business is going to be moving more and more in this direction."

Currently, adaptive learning is often used in fraud investigations. Machines can use feedback from the data or investigators to fine-tune their ability to spot the fraudsters. It will also play a key role in hyper-automation, a top technology trend identified by Gartner. The idea is that businesses should automate processes wherever possible. "If it's going to work, however, automated business processes must be able to adapt to different situations over time," Jung said.
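As a loose illustration of the adaptive idea (a sketch, not any vendor's fraud engine), an incrementally trained model can absorb investigator feedback case by case instead of waiting for a full retraining cycle:

# Sketch of adaptive (incremental) learning on fraud-like data.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier()

# Initial batch of labeled transactions (features are purely illustrative).
X0 = np.random.rand(200, 5)
y0 = np.random.randint(0, 2, 200)          # 1 = confirmed fraud
model.partial_fit(X0, y0, classes=[0, 1])

# Later, an investigator confirms a flagged case; the model adapts
# immediately without being rebuilt from scratch.
x_new = np.random.rand(1, 5)
model.partial_fit(x_new, [1])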

To deliver a return for the business, AI cannot be kept solely in the hands of data scientists, said Jung. In 2021, organizations will want to build greater value by putting analytics in the hands of the people who can derive insights to improve the business. "We have to make sure that we not only make a good product, we want to make sure that people use those things," said Jung. As an example, Gartner suggests that AI will increasingly become part of the mainstream DevOps process to provide a clearer path to value.

Responsible AI will become a high priority for executives in 2021, said Jung. In the past year, ethical issues have been raised in relation to the use of AI for surveillance by law enforcement agencies, or by businesses for marketing campaigns. There is also talk around the world of legislation related to responsible AI.

"There is a possibility for bias in the machine, the data or the way we train the model," said Jung. "We have to make every effort to have processes and gatekeepers to double and triple check to ensure compliance, privacy and fairness." Gartner also recommends the creation of an external AI ethics board to advise on the potential impact of AI projects.

Large companies are increasingly hiring Chief Analytics Officers (CAOs) and the resources to determine the best way to leverage analytics, said Jung. However, organizations of any size can benefit from AI and machine learning, even if they lack in-house expertise.

Jung recommends that if organizations don't have experience in analytics, they should consider getting an assessment on how to turn data into a competitive advantage. For example, the Advanced Analytics Lab at SAS offers an innovation and advisory service that provides guidance on value-driven analytics strategies, "by helping organizations define a roadmap that aligns with business priorities, starting from data collection and maintenance to analytics deployment through to execution and monitoring, to fulfill the organization's vision," said Jung. As we progress into 2021, organizations will increasingly discover the value of analytics to solve business problems.

SAS highlights a few top trends in AI and machine learning in this video.

Jim Love, Chief Content Officer, IT World Canada

Read the original post:
Five real world AI and machine learning trends that will make an impact in 2021 - IT World Canada

Harnessing the power of machine learning for improved decision-making – GCN.com

INDUSTRY INSIGHT

Across government, IT managers are looking to harness the power of artificial intelligence and machine learning techniques (AI/ML) to extract and analyze data to support mission delivery and better serve citizens.

Practically every large federal agency is executing some type of proof of concept or pilot project related to AI/ML technologies. The government's AI toolkit "is diverse and spans the federal administrative state," according to a report commissioned by the Administrative Conference of the United States (ACUS). Nearly half of the 142 federal agencies canvassed have experimented with AI/ML tools, states the report, Government by Algorithm: Artificial Intelligence in Federal Administrative Agencies.

Moreover, AI tools are already improving agency operations across the full range of governance tasks, including enforcing regulatory mandates, adjudicating government benefits and privileges, monitoring and analyzing risks to public safety and health, providing weather forecasting information and extracting information from the trove of government data to address consumer complaints.

Agencies with mature data science practices are further along in their AI/ML exploration. However, because agencies are at different stages in their digital journeys, many federal decision-makers still struggle to understand AI/ML. They need a better grasp of the skill sets and best practices needed to derive meaningful insights from data powered by AI/ML tools.

Understanding how AI/ML works

AI mimics human cognitive functions such as the ability to sense, reason, act and adapt, giving machines the ability to act intelligently. Machine learning is a component of AI that involves training algorithms or models that then make predictions about data they have yet to observe. ML models are not programmed like conventional algorithms. They are trained using data -- such as words, log data, time series data or images -- and make predictions on actions to perform.

Within the field of machine learning, there are two main types of tasks: supervised and unsupervised.

With supervised learning, data analysts have prior knowledge of what the output values for their samples should be. The AI system is specifically told what to look for, so the model is trained until it can detect underlying patterns and relationships. For example, an email spam filter is a machine learning program that can learn to flag spam after being given examples of spam emails that are flagged by users and examples of regular non-spam emails. The examples the system uses to learn are called the training set.

Unsupervised learning looks for previously undetected patterns in a dataset with no pre-existing labels and with a minimum of human supervision. For instance, data points with similar characteristics can be automatically grouped into clusters for anomaly detection, such as in fraud detection or identifying defective mechanical parts in predictive maintenance.
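A minimal sketch of both task types, using scikit-learn and toy data (purely illustrative, not the defense project described below):

import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB     # supervised: spam filtering
from sklearn.cluster import KMeans                # unsupervised: clustering

# Supervised: labeled examples teach the model what spam looks like.
emails = ["win a free prize now", "meeting agenda attached", "free money offer"]
labels = [1, 0, 1]                                # 1 = spam, 0 = not spam
vectorizer = CountVectorizer()
spam_filter = MultinomialNB().fit(vectorizer.fit_transform(emails), labels)
print(spam_filter.predict(vectorizer.transform(["free prize inside"])))

# Unsupervised: no labels; similar records are grouped into clusters,
# and points far from every cluster can be flagged for review.
transactions = np.random.rand(100, 2)
clusters = KMeans(n_clusters=3, n_init=10).fit_predict(transactions)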

Supervised, unsupervised in action

It is not a matter of which approach is better. Both supervised and unsupervised learning are needed for machine learning to be effective.

Both approaches were applied recently to help a large defense financial management and comptroller office resolve over $2 billion in unmatched transactions in an enterprise resource planning system. Many tasks required significant manual effort, so the organization implemented a robotic process automation solution to automatically access data from various financial management systems and process transactions without human intervention. However, RPA fell short when data variances exceeded tolerance for matching data and documents, so AI/ML techniques were used to resolve the unmatched transactions.

The data analyst team used supervised learning with the preexisting rules that had produced these transactions. The team was then able to provide additional value by applying unsupervised ML techniques to find patterns in the data that they were not previously aware of.

To get a better sense of how AI/ML can help agencies better manage data, it is worth considering these three steps:

Data analysts should think of these steps as a continuous loop. If the output from unsupervised learning is meaningful, they can incorporate it into the supervised learning modeling. Thus, they are involved in a continuous learning process as they explore the data together.

Avoiding pitfalls

It is important for IT teams to realize they cannot just feed data into machine learning models, especially with unsupervised learning, which is a little more art than science. That is where humans really need to be involved. Also, analysts should avoid over-fitting models by seeking to derive too much insight.

Remember: AI/ML and RPA are meant to augment humans in the workforce, not merely replace people with autonomous robots or chatbots. To be effective, agencies must strategically organize around the right people, processes and technologies to harness the power of innovative technologies such as AI/ML to achieve the performance they need at scale.

About the Author

Samuel Stewart is a data scientist with World Wide Technology.

Read the original post:
Harnessing the power of machine learning for improved decision-making - GCN.com

Taking Micro Machine Learning to the MAX78000 – Electronic Design

What you'll learn

I tend to do only a few hands-on articles a year, so I look for cutting-edge platforms that developers will want to check out. Maxim Integrated's MAX78000 evaluation kit fits in this bucket. The MAX78000 is essentially an Arm Cortex-M4F microcontroller with a lot of hardware around it, including a convolutional-neural-network (CNN) accelerator designed by Maxim (Fig. 1). This machine-learning (ML) support allows the chip to handle chores like identifying voice keywords or even faces in camera images in real time without busting the power budget.

1. The MAX78000 includes Cortex-M4F and RISC-V cores as well as a CNN accelerator.

The chip also includes a RISC-V core that caught my eye. However, the development tools are so new that the RISC-V support is still in the works, and the Cortex-M4F is the main processor. Even the CNN support is just out of the beta stage, but that's what this article will concentrate on.

The MAX78000 has the usual microcontroller peripheral complement, including a range of serial ports, timers, and parallel serial interfaces like I2S. It even has a parallel camera interface. Among the analog peripherals is an 8-channel, 10-bit sigma-delta ADC. There are four comparators as well.

The chip has a large 512-kB flash memory along with 128 kB of SRAM and a boot ROM that allows more complex boot procedures such as secure boot support. There's on-chip key storage as well as CRC and AES hardware support. We will get into the CNN support a little later. The GitHub-based documentation covers some of the features I outline here in step-by-step detail.

The development tools are free and based on Eclipse, which is the basis for other platforms like Texas Instruments' Code Composer Studio and Silicon Labs Simplicity Studio. Maxim doesn't do a lot of customization, but there's enough to facilitate using hardware like the MAX78000 while making it easy to utilize third party plug-ins and tools, which can be quite handy when dealing with cloud or IoT development environments. The default installation includes examples and tutorials that enable easy testing of the CNN hardware and other peripherals.

The MAX78000 development board features two LCD displays. The larger, 3.5-in TFT touch-enabled display is for the processor, while the second, smaller display provides power-management information. The chip doesn't have a display controller built in, so it uses a serial interface to work with the larger display. The power-tracking support is sophisticated, but I won't delve into that now.

There's a 16-MB QSPI flash chip that can be handy for storing image data. In addition, a USB bridge to the flash chip allows for faster and easier downloads.

The board also adds some useful devices like a digital microphone, a 3D accelerometer, and 3D gyro. Several buttons and LEDs round out the peripherals.

There are a couple of JTAG headers; the RISC-V core has its own. As noted, I didn't play with the RISC-V core this time around, as it's not required for using the CNN support, although it could be. Right now, the Maxim tools generate C code for the Cortex-M4F to set up the CNN hardware. The CNN hardware is designed to handle a single model, but it's possible to swap in new models quickly.

As with most ML hardware, the underlying hardware tends to be hidden from most programmers, providing more of a black-box operation where you set up the box and feed it data with results coming out the other end. This works well if the models are available; it's a matter of training them with different information or using trained models. The challenge comes when developing and training new models, which is something I'll avoid discussing here.

I did try out two of the models provided by Maxim, including a Keyword Spotting and a Face Identification (FaceID) application. The Keyword Spotting app is essentially the speech-recognition system that can be used to listen for a keyword to start off a cloud-based discussion, which is how most Alexa-based voice systems work since the cloud handles everything after recognizing a keyword.

On the other hand, being able to recognize a number of different keywords makes it possible to build a voice-based command system, such as those used in many car navigation systems. As usual, the Cortex-M4F handles the input and does a bit of munging to provide suitable inputs to the CNN accelerator (Fig. 2). The detected class output specifies which keyword is recognized, if any. The application can then utilize this information.

2. The Cortex-M4F handles the initial audio input stream prior to handing off the information to the CNN accelerator.

The FaceID system highlights the camera support of the MAX78000 (Fig. 3). This could be used to recognize a face or identify a particular part moving by on an assembly line. The sample application can operate using canned inputs, as shown in the figure, or from the camera.

3. The FaceID application highlights the CNNs ability to process images in real time.

Using the defaults is as easy as compiling and programming the chip. Maxim provides all of the sample code and procedures. These can be modified somewhat, but retraining a model is a more involved exercise, though one that Maxim's documentation does cover. These examples provide an outline of what needs to be done as well as what needs to be changed to customize the solution.

Changing the model and application to something like a motor vibration-monitoring system will be a significant job requiring a new model, but one that the chip is likely able to handle. It will require much more machine learning and CNN support, so it's not something that should be taken lightly.

The toolset supports models from platforms like TensorFlow and PyTorch (Fig. 4). This is useful because training isn't handled by the chip, but rather done on platforms like a PC or cloud servers. Likewise, the models can be refined and tested on higher-end hardware to verify the models, which can then be pruned to fit on the MAX78000.

4. PyTorch is just one of the frameworks handled by the MAX78000. Training isn't done on the micro. Maxim's tools convert the models to code that drives the CNN hardware.
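For a sense of scale, a keyword-spotting network of the kind the accelerator targets can be expressed in a few lines of standard PyTorch. The sketch below is a generic illustration only; Maxim's actual training flow applies its own layer wrappers, quantization and conversion steps.

import torch
import torch.nn as nn

class TinyKWS(nn.Module):
    """A small CNN over spectrogram-like input, classifying N keywords."""
    def __init__(self, n_keywords=20):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, n_keywords)

    def forward(self, x):                      # x: (batch, 1, time, mel_bins)
        return self.classifier(self.features(x).flatten(1))

model = TinyKWS()
logits = model(torch.randn(1, 1, 64, 64))      # one dummy spectrogram
print(logits.shape)                            # torch.Size([1, 20])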

At this point, the CNN accelerator documentation is a bit sparse, as is the RISC-V support. Maxim's CNN model compiler kicks out C code that drops in nicely to the Eclipse IDE. Debugging the regular application code is on par with other cross-development systems where remote debugging via JTAG is the norm.

Maxim also provides the MAX78000FTHR, the little brother of the evaluation kit (Fig. 5). This doesn't have the display or other peripheral hardware, but most I/O is exposed. The board alone is only $25. The chip is priced around $15 in small quantities. The GitHub-based documentation provides more details.

5. The evaluation kit has a little brother, the MAX78000FTHR.

The MAX78000 was fun to work with. It's a great platform for supporting ML applications on the edge. However, be aware that while it's a very low power solution, it's not the same thing as even a low-end Nvidia Jetson Nano. It will be interesting to check out the power-tracking support since power utilization and requirements will likely be key factors in many MAX78000 applications, especially battery-based solutions.

Original post:
Taking Micro Machine Learning to the MAX78000 - Electronic Design

Northwell Health researchers using Facebook data and AI to spot early stages of severe psychiatric illness – FierceHealthcare

After going missing for three days in 2016, Christian Herrera Gaton of Jackson Heights, New York, was diagnosed with bipolar disorder type 1.

His experiences with bipolar disorder include mood swings, depression and manic episodes. During a recent bout with the illness, he was admitted to Zucker Hillside Hospital in August 2020 due to some stress he was feeling from the COVID-19 pandemic.

While at Zucker for treatment, the Feinstein Institutes for Medical Research, the research arm of New York's Northwell Health, approached him to join a study about Facebook data and psychiatric conditions.

The goal of the study was to use machine learning algorithms to predict a patient's psychiatric diagnosis more than a year in advance of an official diagnosis and stay in the hospital.

Michael Birnbaum, M.D., assistant professor at the Feinstein Institutes' Institute of Behavioral Science, saw an opportunity to use the social media platforms that are a part of everyday life to gain insights into the early stages of severe psychiatric illness.

"There was an interest in harnessing these ubiquitous, widely used platforms in understanding how we could improve the work that we do," Birnbaum said in an interview. "We wanted to know what we can learn from the digital universe and all of the data that's being created and uploaded by the young folks that we treat. That's what motivated our interest."

RELATED: Brigham and Women's taps mental health startup to use AI to track providers' stress

After Gaton, a former student at John Jay College of Criminal Justice, was discharged from the hospital, he shared almost 10 years of Facebook and Instagram data with the Feinstein Institutes. He uploaded an archive that contained pictures, private messages and basic user information.

"It's been a difficult experience to deal with [COVID] and to go through everything with the hospitals and losing friends because of doing stupid things during manic episodes," Gaton told Fierce Healthcare. "It's not easy, but at least I get to join this research study and help other people."

The study, conducted along with IBM Research, looked at patients with schizophrenia spectrum disorders and mood disorders. Feinstein Institutes researchers handled the participant recruitment and assessments as well as data collection and analysis. Meanwhile, IBM developed the machine learning algorithms that researchers used to analyze Facebook data.

Results of the study, called "Identifying signals associated with psychiatric illness utilizing language and images posted to Facebook," were published Dec. 3 in Nature Partner Journals (npj) Schizophrenia.

Feinstein Institutes and IBM researchers studied archives of people in an early treatment program to extract meaning from the data to gain an understanding of how people with mental illness use social media.

"Essentially, at its core, the machine learns to predict which group an individual belongs to, based on data that we feed it," Birnbaum explained. "So, for example, if we show the computer a Facebook post and then we say to the computer, based on what you've learned so far and based on the patterns that you recognize, does this post belong to an individual with schizophrenia or bipolar disorder? Then the computer makes a prediction."

Birnbaum added that the more accurate the predictions, the more effective the algorithms are at identifying which characteristics belong to which group of people.
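The mechanics of that kind of classifier can be sketched in a few lines (this is an illustration of the general approach, not the IBM pipeline used in the study; the labels and text are placeholders):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical, anonymized training data: post text paired with the
# diagnostic group of the author.
posts = ["example post text one", "example post text two"]
groups = ["schizophrenia_spectrum", "bipolar_disorder"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(posts, groups)

# For a new post, the model returns the group it most resembles and
# a probability that reflects its confidence.
print(clf.predict(["another example post"]))
print(clf.predict_proba(["another example post"]))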

Feinstein and IBM took care to anonymize the social media data, according to Birnbaum. They stripped out names and addresses from written posts. "Words essentially, using language-analytic software, become vectors," Birnbaum said. "The actual content of the sentences, once they're parsed through the software, often becomes meaningless."

In addition, the machine learning software does not analyze participants' images closely. Instead, it focuses on shape, size, height, contrast and colors, Birnbaum said.

"We did our best to ensure that we de-identified the data to the extent possible and ensured the confidentiality of our participants, because that's one of our top priorities, of course," Birnbaum said.

The study analyzed Facebook data from the 18 months prior to hospitalization to help predict a patient's diagnosis or hospitalization a year in advance.

Researchers used machine learning algorithms to study 3.4 million Facebook messages and 142,390 images from 223 participants for up to 18 months before their first psychiatric hospitalization. Study subjects with schizophrenia spectrum disorders and mood disorders were more prone to discuss anger, swearing, sex and negative emotions in their Facebook posts, according to the study.

RELATED: Northwell Health research arm develops AI tool to help hospital patients avoid sleepless nights

Birnbaum sees an opportunity to use the data from social media platforms to gain insights to deliver better healthcare. By using social media, such as analyzing Facebook status updates, researchers can gain insights on personality traits, demographics, political views and substance use.

"Harnessing social media platforms could be a significant step forward for psychiatry, which is limited by its reliance on mostly retrospective, self-reported data," the study stated.

Gaton believes that he could have avoided time in the hospital if he received an earlier diagnosis. Like with other subjects in the study, Gaton can sense the warning signs of an episode when he starts to post differently on Facebook.

From analyzing the data, researchers were able to identify which participants used more swear words compared with healthy volunteers. Some participants would use words related to blood, pain or biological processes. As their conditions progressed and patients neared hospitalization, they would use more punctuation and negative emotional words in their Facebook posts, according to the study.

Other organizations are also turning to artificial intelligence to monitor mental health. Researchers at Brigham and Women's Hospital are using AI technology from startup Rose to monitor the mental well-being of front-line workers during the COVID-19 pandemic. Meanwhile, the Feinstein Institutes recently developed an AI tool that can help patients get better sleep in the hospital.

Researchers see a use for social media data for patients that could be similar to the vital data they pull from a blood or urine sample, according to Birnbaum. "I could imagine a world where people go see their psychiatrists and provide their archives in the same way they provide a blood test, which is then analyzed much like a blood test and is used to inform clinical decision-making moving forward," he said.

RELATED: The unexpected ways AI is impacting the delivery of care, including for COVID-19

"I think that is where psychiatry is heading, and social media will play a component of a much larger, broader digital representation of behavioral health."

Guillermo Cecchi, principal research staff member, computational psychiatry, at IBM Research, also sees a use for social media data as a common way to evaluate patients.

"Our vision is that this type of technology could one day be used in a non-burdensome way, with patient consent and high privacy standards, to provide clinicians with the most comprehensive and relevant information to make treatment decisions, including regular clinical assessments, biomarkers and a patient's medical history," Cecchi told Fierce Healthcare.

Researchers hope that the Facebook data can inform future studies.

"Ultimately, the language markers we identified with AI in this study could be used to inform future work, shaped with rigorous ethical frameworks, that could help clinicians to monitor the progression of mental health patients considered at-risk for relapse or undergoing treatment," Cecchi said.

Gaton said he would like to see the technology get more accurate. "I just hope that with my contributions to the study, the technology gets more accurate and more responsive and can be something that doctors can use in the near future, with patient consent, of course," he said.

Read the original here:
Northwell Health researchers using Facebook data and AI to spot early stages of severe psychiatric illness - FierceHealthcare

Connected and autonomous vehicles: Protecting data and machine learning innovations – Lexology

The development of connected and autonomous vehicles (CAVs) is technology-driven and data-centric. Zenzic's Roadmap to 2030 highlights that 'the intelligence of self-driving vehicles is driven by advanced features such as artificial intelligence (AI) or machine learning (ML) techniques'.[1] Developers of connected and automated mobility (CAM) technologies are engineering advances in machine learning and machine analysis techniques that can create valuable, potentially life-saving, insights from the massive well of data that is being generated.

Diego Black and Lucy Pegler take a look at the legal and regulatory issues involved in protecting data and innovations in CAVs.

The data of driving

It is predicted that the average driverless car will produce around 4TB of data per day, including data on traffic, route choices, passenger preferences, vehicle performance and many more data points[2].

'Data is foundational to emerging CAM technologies, products and services driving their safety, operation and connectivity'.[3]

As Burges Salmon and AXA UK outlined in their joint report as part of FLOURISH, an Innovate UK-funded CAV project, the data produced by CAVs can be broadly divided into a number of categories based on its characteristics: for example, sensitive commercial data, commercial data and personal data. How data should be protected will depend on its characteristics and, importantly, the purposes for which it is used. The use of personal data (i.e. data from which an individual can be identified) attracts particular consideration.

The importance of data to the CAM industry and, in particular, the need to share data effectively to enable the deployment and operation of CAM, needs to be balanced against data protection considerations. In 2018, the Open Data Institute (ODI) published a report setting out that it considered all journey data to be personal data,[4] consequently bringing journey data within the scope of the General Data Protection Regulation.[5]

Additionally, the European Data Protection Board (EDPB) has confirmed that the ePrivacy directive (2002/58/EC as revised by 2009/136/EC) applies to connected vehicles by virtue of 'the connected vehicle and every device connected to it [being] considered as a 'terminal equipment'.'[6] This means that any machine learning innovations deployed in CAVs will inevitably process vast amounts of personal data. The UK Information Commissioner's Office has issued guidance on how best to deal with harnessing both big data and AI in relation to personal data, including emphasising the need for industry to deploy ethical principles, create ethics boards to monitor the new uses of data and ensure that machine learning algorithms are auditable.[7]

Navigating the legal frameworks that apply to the use of data is complex and whilst the EDPB has confirmed its position in relation to connected vehicles, automated vehicles and their potential use cases raise an entirely different set of considerations. Whilst the market is developing rapidly, use case scenarios for automated mobility will focus on how people consume services. Demand responsive transport and ride sharing are likely to play a huge role in the future of personal mobility.

The main issue policy makers now face is the ever-evolving nature of the technology. As new, potentially unforeseen, technologies are integrated into CAVs, the industry will require both a stringent data protection framework on the one hand, and flexibility and accessibility on the other. These two policy goals are necessarily at odds with one another, and the industry will need to take a realistic, privacy-by-design approach to future development, working with rather than against regulators.

Whilst the GDPR and ePrivacy Directive will likely form the building blocks of future regulation of CAV data, we anticipate the development of a complementary framework of regulation and standards that recognises the unique applications of CAM technologies and the use of data.

Cyber security

The prolific and regular nature of cyber-attacks poses risks to both public acceptance of CAV technology and to the underlying business interests of organisations involved in the CAV ecosystem.

New technologies can present a threat to existing cyber security measures. Tarquin Folliss of Reliance acsn highlights this, noting that 'a CAV's mix of operational and information technology will produce systems complex to monitor, where intrusive endpoint monitoring might disrupt inadvertently the technology underpinning safety'. The threat is even more acute when thinking about CAVs in action and, as Tarquin notes, the ability for 'malign actors to target a CAV network in the same way they target other critical national infrastructure networks and utilities, in order to disrupt'.

In 2017, the government announced 8 Key Principles of Cyber Security for Connected and Automated Vehicles. This, alongside the DCMS's IoT code of practice, the CCAV's CAV code of practice and the BSI's PAS 1885, provides a good starting point for CAV manufacturers. Best practices include:

Work continues at pace on cyber security for CAM. In May this year, Zenzic published its Cyber Resilience in Connected and Automated Mobility (CAM) Cyber Feasibility Report, which sets out the findings of seven projects tasked with providing a clear picture of the challenges and potential solutions in ensuring digital resilience and cyber security within CAM.

Demonstrating the pace of work in the sector, in June 2020 the United Nations Economic Commission for Europe (UNECE) published two new UN Regulations focused on cyber security in the automotive sector. The Regulations represent another step-change in the approach to managing the significant cyber risk of an increasingly connected automotive sector.

Protecting innovation

As innovation in the CAV sector increases, issues regarding intellectual property and its protection and exploitation become more important. Companies that historically were not involved in the automotive sector are now rapidly becoming key partners, providing expertise in technologies such as IT security, telecoms, blockchain and machine learning. In autonomous vehicles, many of the biggest patent filers have software and telecoms backgrounds.[8]

With the increasing use of in-car and inter-car connectivity and the growing amount of data having to be handled per second as levels of autonomy rise, innovators in the CAV space are having to handle issues regarding data security as well as determining how best to handle the large data sets. Furthermore, the recent UK government call for evidence on automated lane keeping systems is seen by many as the first step towards standards being introduced in autonomous vehicles.

In view of these developments, new challenges are now being faced by companies looking to benefit from their innovations. Unlike more traditional automotive innovation, where the innovations lay in improvements to engineering and machinery, many of the innovations in the CAV space reside in electronics and software development. The ability to protect and exploit inventions in the software space has become increasingly relevant in the automotive industry.

Multiple Intellectual Property rights exist that can be used to protect innovations in CAVs. Some rights can be particularly effective in areas of technology where standards exist, or are likely to exist. Two of the main ways seen at present are through the use of patents and trade secrets. Both can be used in combination, or separately, to provide an effective IP strategy. Such an approach is seen in other industries such as those involved in data security.

For companies that are developing or improving machine learning models, or training sets, the use of trade secrets is particularly common. Companies relying on trade secrets may often license access to, or sell the outputs of, their innovations. Advantageously, trade secrets are free and last indefinitely.

An effective strategy in such fields is to obtain patents that cover the technological standard. By definition, if a third party were to adhere to the defined standard, they would necessarily fall within the scope of the patent, thus providing the owner of the patent with a potential revenue stream through licensing agreements. If, as anticipated, standards are set in CAVs, any company that can obtain patents covering the likely standard will be at an advantage. Such licenses are typically offered on a fair, reasonable and non-discriminatory (FRAND) basis, to ensure that companies are not prevented by patent holders from entering the market.

A key consideration is that the use of trade secrets may be incompatible with the use of standards. If technology standards are introduced for autonomous vehicles, companies would have to demonstrate that their technology complies with them, and the secrecy on which trade secrets depend may be incompatible with that need to demonstrate compliance.

However, whilst a patent provides a stronger form of protection, in order to enforce a patent the owner must be able to demonstrate that a third party is performing the acts defined in the patent. In the case of machine learning and mathematical-based methods, such information is often kept hidden, making proving infringement difficult. As a result, patents in such areas are often directed towards a visible, or tangible, output. For example, in CAVs this may be the control of a vehicle based on the improvements in the machine learning. Due to the difficulty in demonstrating infringement, many companies are choosing to protect their innovations with a mixture of trade secrets and patents.

Legal protections for innovations

For the innovations typically seen in the software side of CAVs, trade secrets and patents are the two main forms of protection.

Trade secrets are, as the name implies, where a company will keep all, or part of, their innovation a secret. In software-based inventions this may be in the form of a black-box disclosure, where the workings and functionality of the software are kept secret. However, steps do need to be taken to keep the innovation secret, and trade secrets do not prevent a third party from independently implementing, or reverse engineering, the innovation. Furthermore, once a trade secret is made public, the value associated with it is gone.

Patents are an exclusive right, lasting up to 20 years, which allow the holder to prevent a third party from utilising the technology covered by the scope of the patent in that territory, or to require a license from them. Therefore it is not possible to enforce, say, a US patent in the UK. Unlike trade secrets, publication of patents is an important part of the process.

In order for inventions to be patented they must be new (that is to say they have not been disclosed anywhere in the world before), inventive (not run-of-the-mill improvements), and concern non-excluded subject matter. The exclusions in the UK and Europe cover software, and mathematical methods, amongst other fields, as such. In the case of CAVs a large number of inventions are developed that could fall in the software and mathematical methods categories.

The test regarding whether or not an invention is seen as excluded subject matter varies between jurisdictions. In Europe, if an invention is seen to solve a technical problem, for example relating to the control of vehicles, it would be deemed allowable. Many of the innovations in CAVs can be tied to technical problems relating to, for example, the control of vehicles or improvements in data security. As such, on the whole, CAV inventions may escape the exclusions.

What does the future hold?

Technology is advancing at a rapid rate. At the same time as industry develops more and more sophisticated software to harness data, bad actors gain access to more advanced tools. To combat these increased threats, CAV manufacturers need to be putting in place flexible frameworks to review and audit their uses of data now, looking toward the developments of tomorrow to assess the data security measures they have today. They should also be looking to protect their most valuable IP assets from the outset, including machine learning developments, in a way that is secure and enforceable.

Originally posted here:
Connected and autonomous vehicles: Protecting data and machine learning innovations - Lexology

Understanding AI: The good, bad and ugly – GCN.com

INDUSTRY INSIGHT

Although it's still in the early stage of adoption, the use of artificial intelligence in the public sector has vast potential. According to McKinsey & Company, AI can help to identify tax-evasion patterns, sort through infrastructure data to target bridge inspections, sift through health and social-service data to prioritize cases for child welfare and support or even predict the spread of infectious diseases.

Yet as the promises of AI grow increasingly obtainable, so do the risks associated with it.

Public-sector organizations, which house and protect sensitive data, must be even more alert and prepared for attacks than other businesses. Plus, as technology becomes more complex and integrated into users' personal and professional lives, agencies can't ignore the possibility of more sophisticated attacks, including those that leverage AI.

With that in mind, it's important to understand new trends in AI, especially those that impact how agencies should be thinking about security.

Defining adversarial machine learning

Simple or common AI and machine learning developments have the potential to improve outcomes and reduce costs within government agencies, just as they do for other industries. AI and ML technology is already being incorporated into government operations, from customer service chatbots that help automate Department of Motor Vehicle transactions to computer vision and image recognition applications that can spot stress fractures in bridges to assist a human inspector. The technology itself will continue to mature and be implemented more widely, which means understanding of the technology (both the good and the bad) must evolve as well.

AI and ML statistical models rely on two main components to function properly and execute on their intended purposes: observability and data. When considering how to safeguard both the observability and data within the model, there are a few questions to answer: What information could adversaries obtain from the model to build their own model? How similar is the environment an agency is creating compared to others? Is the time-elapsed learning and feedback mechanism modeled and tested?

Models are built on assumptions, so if there are similar underlying assumptions across environments, an adversary has an increased opportunity of doing one of the following to the model:

Essentially, if agencies can teach AI to execute as their team does, an adversary can teach AI how to behave like an attacker as well, as demonstrated by user behavior analytics tools today. Adversarial machine learning, then, is a learning technique that attempts to deceive, undermine or manipulate models by supplying false input into both observability and data.
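A toy example of the evasion side of this (a sketch under simplifying assumptions, not a description of any deployed system) shows how an attacker who can estimate a model's weights can nudge an input just enough to change its score:

import numpy as np

w = np.array([1.5, -2.0, 0.5])     # weights of a toy linear detector
b = 0.1

def score(x):
    """Probability the detector assigns to the 'malicious' class."""
    return 1 / (1 + np.exp(-(w @ x + b)))

x = np.array([0.2, 0.4, 0.9])      # a genuinely malicious input
epsilon = 0.3                       # attacker's perturbation budget

# Moving each feature against the sign of its weight lowers the score
# (the same intuition behind gradient-based evasion attacks such as FGSM).
x_adv = x - epsilon * np.sign(w)

print(score(x), score(x_adv))      # the perturbed copy looks more benign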

As attackers become more refined and nuanced in their approach -- from building adversarial machine learning models to model poisoning -- they could completely disrupt all AI-related efforts within an organization.

Getting ahead, preparing for new risks

AI and ML are already helping streamline cybersecurity efforts, and this technology will, of course, play a role in preventing and detecting more sophisticated attacks as well, so long as they are trained to do so. As AI algorithms continue to learn and behaviors are normalized, agencies can better leverage models for authentication, vulnerability management, phishing, monitoring and augmenting personnel.

Today, AI is improving cybersecurity processes in two ways: It filters through the data quickly based on trained algorithms, which know exactly what to look for, and it helps identify and prioritize attacks and behavioral changes that require the attention of the security operations team, who will then verify the information and respond. As AI evolves, the actions and response will be handled by these algorithms and tools with less human interaction and increased velocity. For example, adversaries could successfully log in using an employee's credentials, which may go unnoticed. If they are logging in for the first time from a new location or at a time when that user was not expected to be online, AI can help quickly recognize those anomalous behaviors and push an alert to the top of the security team's queue or take more immediate action to disallow a behavior.
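A simplified sketch of that kind of behavioral check (with illustrative features and thresholds, not an agency's production tooling) might look like this:

import numpy as np
from sklearn.ensemble import IsolationForest

# Assumed features per login: hour of day, distance in km from the user's
# usual location, and days since the device was last seen.
history = np.column_stack([
    np.random.normal(9, 2, 500),       # typical working-hours logins
    np.random.exponential(5, 500),     # usually close to the home office
    np.random.exponential(2, 500),     # familiar devices
])

detector = IsolationForest(contamination=0.01, random_state=0).fit(history)

new_login = np.array([[3.0, 4200.0, 90.0]])   # 3 a.m., new country, new device
if detector.predict(new_login)[0] == -1:
    print("anomalous login - raise an alert for the security operations queue")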

However, organizations, especially government bodies, must take their knowledge of AI a step further and prepare for the attacks of tomorrow by becoming aware of new, evolving, complex risks. Data must be viewed from both an offensive and a defensive perspective, and teams must continuously monitor models and revise and retrain them to obtain deeper levels of intelligence. ML models, for example, must be trained to detect adversarial threats within the AI itself by conducting:

Most agencies are still in the initial stages of incorporating AI/ML models into their operations. However, educating agency IT teams on these evolving threats, utilizing existing toolsets and planning and preparing for these attacks should start now. The amount of data being collected and synthesized is massive and will continue to grow exponentially. We must leverage all the tools in the AI tool chest to make sense of this data for the good.

About the Author

Seth Cutler is the chief information security officer at NetApp.

Go here to read the rest:
Understanding AI: The good, bad and ugly - GCN.com