Use cases for AI and ML in cyber security – Information Age

We explore how artificial intelligence (AI) and machine learning (ML) can be incorporated into cyber security

As the devices used for work continue to diversify, so do cyber attacks, but AI can help prevent them.

As cyber attacks become more diverse in nature and in their targets, it's essential that cyber security staff have the visibility needed to determine how to address vulnerabilities, and AI can help solve problems that its human colleagues can't tackle alone.

"Cyber security resembles a game of chess," said Greg Day, chief security officer EMEA at Palo Alto Networks. "The adversary looks to outmanoeuvre the victim, the victim aims to stop and block the adversary's attack. Data is the king and the ultimate prize."

In 1996, an AI chess system, Deep Blue, won its first game against world champion Garry Kasparov. It has since become clear that AI can programmatically think broader, faster and further outside the norms, and that's true of many of its applications in cyber security now too.

With this in mind, we explore particular use cases for AI in cyber security that are in place today.

Day went on to expand on how AI can work alongside cyber security staff in order to keep the organisation secure.

"We all know there aren't enough cyber security staff in the market, so AI can help to fill the gap," he said. "Machine learning, a form of AI, can read the input from SOC analysts and transpose it into a database, which becomes ever expanding."

The next time the SOC analyst enters similar symptoms, they are presented with previous similar cases along with their solutions, based on both statistical analysis and the use of neural nets, reducing the human effort.

If there's no previous case, the AI can analyse the characteristics of the incident and suggest which SOC engineers would form the strongest team to solve the problem, based on past experience.

All of this is effectively a bot: an automated process that combines human knowledge with digital learning to give a more effective hybrid solution.
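Day's description amounts to a similarity search over past incidents. As a rough sketch of the idea (not Palo Alto Networks' implementation; the case data and field names below are invented), a simple TF-IDF index over past case descriptions can surface the closest historical matches for a new ticket:

```python
# Minimal sketch: retrieve past SOC cases similar to a new incident description.
# Assumes scikit-learn is installed; case data and field names are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

past_cases = [
    {"symptoms": "multiple failed logins followed by privilege escalation on domain controller",
     "resolution": "reset credentials, isolate host, review AD audit logs"},
    {"symptoms": "outbound traffic spike to unknown IP from finance workstation",
     "resolution": "block IP at firewall, run endpoint scan, capture memory image"},
    {"symptoms": "ransomware note found, files encrypted on shared drive",
     "resolution": "disconnect share, restore from backup, trace initial access vector"},
]

vectorizer = TfidfVectorizer(stop_words="english")
case_matrix = vectorizer.fit_transform([c["symptoms"] for c in past_cases])

def suggest_similar_cases(new_symptoms: str, top_k: int = 2):
    """Return the top_k most similar past cases with their resolutions."""
    query_vec = vectorizer.transform([new_symptoms])
    scores = cosine_similarity(query_vec, case_matrix)[0]
    ranked = sorted(zip(scores, past_cases), key=lambda p: p[0], reverse=True)
    return [(round(float(score), 2), case) for score, case in ranked[:top_k] if score > 0]

for score, case in suggest_similar_cases("files on network share encrypted, ransom demand displayed"):
    print(score, "->", case["resolution"])
```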

Mark Greenwood, head of data science at Netacea, delved into the role bots play in cyber security, keeping in mind that companies must distinguish the good from the bad.

"Today, bots make up the majority of all internet traffic," explained Greenwood. "And most of them are dangerous. From account takeovers using stolen credentials to fake account creation and fraud, they pose a real cyber security threat."

But businesses can't fight automated threats with human responses alone. They must employ AI and machine learning if they're serious about tackling the bot problem. Why? Because to truly differentiate between good bots (such as search engine scrapers), bad bots and humans, businesses must use AI and machine learning to build a comprehensive understanding of their website traffic.

It's necessary to ingest and analyse a vast amount of data, and AI makes that possible, while taking a machine learning approach allows cyber security teams to adapt their technology to a constantly shifting landscape.

By looking at behavioural patterns, businesses will get answers to the questions "what does an average user journey look like?" and "what does a risky, unusual journey look like?" From here, we can unpick the intent of their website traffic, getting and staying ahead of the bad bots.
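In practice, Greenwood's distinction between average and risky journeys becomes a classification problem over per-session behavioural features. The sketch below is illustrative only; the features, labels and training rows are invented, not Netacea's model:

```python
# Minimal sketch: classify website sessions as human, good bot or bad bot
# from behavioural features. Feature names and training rows are hypothetical.
from sklearn.ensemble import RandomForestClassifier

# Features per session: [requests_per_min, pages_per_session,
#                        avg_seconds_between_clicks, failed_logins, robots_txt_fetched]
X_train = [
    [3, 8, 12.0, 0, 0],      # human browsing
    [5, 15, 7.5, 0, 0],      # human browsing
    [60, 500, 0.3, 0, 1],    # search-engine crawler (good bot)
    [45, 300, 0.4, 0, 1],    # search-engine crawler (good bot)
    [120, 40, 0.1, 30, 0],   # credential-stuffing bot (bad bot)
    [200, 25, 0.1, 55, 0],   # credential-stuffing bot (bad bot)
]
y_train = ["human", "human", "good_bot", "good_bot", "bad_bot", "bad_bot"]

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Score a new session and act on the prediction.
new_session = [[150, 30, 0.2, 42, 0]]
label = model.predict(new_session)[0]
print("session classified as:", label)   # likely "bad_bot" -> block or challenge
```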

When considering aspects of cyber security that can benefit from the technology, Tim Brown, vice-president of security architecture at SolarWinds, says that AI can play a role in protecting endpoints. This is becoming ever more important as the number of remote devices used for work rises.

"By following best practice advice and staying current with patches and other updates, an organisation can be reactive and protect against threats," said Brown. "But AI may give IT and security professionals an advantage against cyber criminals."

Antivirus (AV) versus AI-driven endpoint protection is one such example; AV solutions often work based on signatures, and it's necessary to keep up with signature definitions to stay protected against the latest threats. This can be a problem if virus definitions fall behind, either because of a failure to update or a lack of knowledge on the AV vendor's part. If a new, previously unseen ransomware strain is used to attack a business, signature-based protection won't be able to catch it.

AI-driven endpoint protection takes a different tack, establishing a baseline of behaviour for the endpoint through a repeated training process. If something out of the ordinary occurs, AI can flag it and take action, whether that's sending a notification to a technician or even reverting to a safe state after a ransomware attack. This provides proactive protection against threats, rather than waiting for signature updates.
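The baseline-then-flag approach Brown describes can be pictured with a simple unsupervised anomaly detector trained on an endpoint's normal telemetry. This is a minimal sketch under the assumption that numeric features such as CPU load and file-write rates are already being collected; the figures are invented:

```python
# Minimal sketch: learn a baseline of normal endpoint behaviour, then flag outliers.
# Telemetry features and values are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [cpu_percent, file_writes_per_min, new_processes, outbound_connections]
rng = np.random.default_rng(0)
baseline = np.column_stack([
    rng.normal(20, 5, 500),    # typical CPU load
    rng.normal(30, 10, 500),   # typical file writes
    rng.normal(3, 1, 500),     # typical process creation
    rng.normal(10, 3, 500),    # typical outbound connections
])

detector = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# A ransomware-like burst: heavy CPU and mass file writes.
suspicious = np.array([[85.0, 900.0, 25.0, 40.0]])
if detector.predict(suspicious)[0] == -1:
    print("Anomaly detected: notify technician / trigger rollback to safe state")
```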

The AI model has proven itself to be more effective than traditional AV. For many of the small and midsize companies an MSP serves, AI-driven endpoint protection typically covers only a small number of devices, so its cost should be of less concern. The other thing to consider is the cost of cleaning up after an infection: if an AI-driven solution helps to avoid a potential infection, it can pay for itself by avoiding clean-up costs and, in turn, creating higher customer satisfaction.

With more employees working from home, and possibly using their personal devices to complete tasks and collaborate with colleagues, it's important to be wary of scams arriving via text message.

"With malicious actors recently diversifying their attack vectors, using Covid-19 as bait in SMS phishing scams, organisations are under a lot of pressure to bolster their defences," said Brian Foster, senior vice-president of product management at MobileIron.

To protect devices and data from these advanced attacks, the use of machine learning in mobile threat defence (MTD) and other forms of managed threat detection continues to evolve as a highly effective security approach.

Machine learning models can be trained to instantly identify and protect against potentially harmful activity, including unknown and zero-day threats that other solutions can't detect in time. Just as important, when machine learning-based MTD is deployed through a unified endpoint management (UEM) platform, it can augment the foundational security provided by UEM to support a layered enterprise mobile security strategy.

Machine learning is a powerful, yet unobtrusive, technology that continually monitors application and user behaviour over time so it can identify the difference between normal and abnormal behaviour. Targeted attacks usually produce a very subtle change in the device and most of them are invisible to a human analyst. Sometimes detection is only possible by correlating thousands of device parameters through machine learning.
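That correlation of many device parameters can be pictured as a deviation score against the device's own learned profile. The following is a toy sketch with made-up parameters and an arbitrary threshold, rather than any vendor's actual MTD logic:

```python
# Toy sketch: score how far current device readings deviate from the device's
# own historical profile, across many parameters at once. Values are made up.
import numpy as np

# Simulated per-device history: battery drain, network bytes/min,
# app installs/day, CPU usage at idle.
history = np.random.default_rng(1).normal(
    loc=[50, 200, 5, 0.2], scale=[5, 20, 1, 0.05], size=(1000, 4)
)

mean, std = history.mean(axis=0), history.std(axis=0)

def deviation_score(current: np.ndarray) -> float:
    """Mean absolute z-score across all monitored parameters."""
    return float(np.mean(np.abs((current - mean) / std)))

current = np.array([52, 950, 6, 0.9])   # subtle except for network and idle CPU
score = deviation_score(current)
print(f"deviation score: {score:.1f}")
if score > 3.0:                         # threshold chosen for illustration only
    print("Abnormal behaviour: quarantine device via UEM policy")
```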

These use cases and more demonstrate that AI and cyber security staff can effectively work together. However, Mike MacIntyre, vice-president of product at Panaseer, believes that the space still has hurdles to overcome before this really comes to fruition.

"AI certainly has a lot of promise, but as an industry we must be clear that it's currently not a silver bullet that will alleviate all cyber security challenges and address the skills shortage," said MacIntyre. "This is because AI is currently just a term applied to a small subset of machine learning techniques. Much of the hype surrounding AI comes from how enterprise security products have adopted the term and the misconception (willful or otherwise) about what constitutes AI."

The algorithms embedded in many modern security products could, at best, be called narrow, or weak, AI; they perform highly specialised tasks in a single, narrow field and have been trained on large volumes of data specific to a single domain. This is a far cry from general, or strong, AI, which is a system that can perform any generalised task and answer questions across multiple domains. Who knows how far away such a system is (estimates range from the next decade to never), but no CISO should be factoring such a tool into their three-to-five-year strategy.

Another key hurdle hindering the effectiveness of AI is the problem of data integrity. There is no point deploying an AI product if you can't get access to the relevant data feeds or aren't willing to install something on your network. The future of security is data-driven, but we are a long way from AI products following through on the promises of their marketing hype.

Outlook on the AI in Education Global Market to 2025 – by Offering, Technology, End-user and Geography – PRNewswire

DUBLIN, Sept. 14, 2020 /PRNewswire/ -- The "AI in Education Market - Forecasts from 2020 to 2025" report has been added to ResearchAndMarkets.com's offering.

The artificial intelligence in education market was valued at US$2.022 billion in 2019. The growing adoption of artificial intelligence in the education sector, owing to the ability of these solutions to enhance the learning experience, is one of the key factors anticipated to propel adoption across the globe.

The proliferation of smart devices and the rapidly growing trend of digitalization across numerous sectors are also propelling demand for artificial intelligence solutions in education. Artificial intelligence mainly uses deep learning, machine learning and advanced analytics to monitor the learning process of each learner, including the marks obtained and the pace of a particular individual. These solutions also offer a personalized learning experience and high-quality education, and help learners build on pre-existing knowledge.

Growth of the market is also anticipated to be driven by the increasing use of multilingual translators integrated with AI technology. Furthermore, artificial intelligence solutions offer a customizable learning experience tailored to the requirements of the end-user, student path prediction, suggested learning paths and identification of weaknesses, helping learners analyze their areas for improvement and build their capabilities.

However, the limited availability of a skilled workforce and high investment costs, coupled with growing data security risks on account of a surge in the number of cyber attacks, are among the major factors anticipated to hamper the growth of the artificial intelligence in education market. The integration of AI-powered educational games is projected to boost adoption across the K-12 education segment, as learning becomes highly interactive for students and teachers are better able to uplift the learning experience. In addition, the integration of AI in the education sector has revolutionized the overall learning experience across all learning areas through its result-driven approach.

Furthermore, artificial intelligence in education can be used to automate basic activities such as grading and assessment. This, in turn, frees teachers' time to stay focused on the quality of education and the professional development of students. These solutions enable learners to attain academic success with the help of smart content. The global artificial intelligence in education market has been segmented on the basis of offering, technology, end-user and geography. On the basis of offering, the market has been classified into solutions and services. By technology, the market has been segmented into deep learning, machine learning and natural language processing. By end-user, the segmentation covers academic learning and corporate learning.

Services are projected to show good growth opportunities

By offering, the market has been segmented into solutions and services. The services segment is estimated to show decent growth during the forecast period and beyond, on account of rising awareness among education providers of AI technology's potential for delivering high-quality education. The growing trend of professional training services is also leading to high adoption of AI-powered solutions for corporate learning, which further supports the growth of this segment over the next five years.

Corporate learning expected to grow substantially over the forecast period

The corporate learning segment is anticipated to show decent growth over the forecast period on account of the growing adoption of these solutions by companies seeking to enhance the skills of their employees and meet growing competition. To that end, key players are investing heavily in new platforms and courses to attract big companies and expand their competitive edge in the market. Furthermore, global giants are adopting AI solutions for corporate learning to equip their workforces with the skills and knowledge to succeed in a rapidly changing economy, which further underlines the growth potential of artificial intelligence in education for the corporate learning segment.

The academic learning segment is projected to hold a decent share of the market owing to the high adoption of these solutions at the school level to enhance education quality and help build students' careers. The use of AI techniques for learning also helps students focus more on self-development.

North America expected to hold a substantial market share

Geographically, the global market has been segmented into North America, South America, Europe, the Middle East and Africa, and Asia Pacific. North America is projected to hold a noteworthy share of the market on account of well-established infrastructure and early adoption of technology. The presence of a state-of-the-art education system also bolsters the growth of the market in the North American region. The Asia Pacific region is projected to show robust growth over the next five years on account of rising adoption of AI-based solutions across major developing countries such as China and Indonesia. The education system in China is among the most reputed in the world; it is also among the most challenging and competitive. Moreover, India has one of the largest education sectors in the world. The presence of well-established education systems in these countries is expected to supplement the growth of the market during the next five years.

Competitive Insights

Prominent players in the artificial intelligence in education market include IBM Corporation, Cognizant, Google LLC, Microsoft Corporation and Amazon Web Services, among others. These companies hold a noteworthy share of the market on account of their strong brand image and product offerings.

Major players in the artificial intelligence in education market have been covered along with their relative competitive position and strategies. The report also mentions recent deals and investments of different market players over the last two years.

Key Topics Covered:

1. Introduction
1.1. Market Definition
1.2. Market Segmentation

2. Research Methodology
2.1. Research Data
2.2. Assumptions

3. Executive Summary
3.1. Research Highlights

4. Market Dynamics
4.1. Market Drivers
4.2. Market Restraints
4.3. Porter's Five Forces Analysis
4.3.1. Bargaining Power of Suppliers
4.3.2. Bargaining Power of Buyers
4.3.3. Threat of New Entrants
4.3.4. Threat of Substitutes
4.3.5. Competitive Rivalry in the Industry
4.4. Industry Value Chain Analysis

5. Artificial Intelligence in Education Market Analysis, By Offering
5.1. Introduction
5.2. Solutions
5.3. Services

6. Artificial Intelligence in Education Market Analysis, By Technology
6.1. Introduction
6.2. Deep Learning
6.3. Machine Learning
6.4. Natural Language Processing

7. Artificial Intelligence in Education Market Analysis, By End User
7.1. Introduction
7.2. Academic Learning
7.2.1. K-12 Education
7.2.2. Higher Education
7.3. Corporate Learning

8. Artificial Intelligence in Education Market Analysis, By Geography
8.1. Introduction
8.2. North America
8.2.1. North America Artificial Intelligence in Education Market, By Offering, 2019 to 2025
8.2.2. North America Artificial Intelligence in Education Market, By Technology, 2019 to 2025
8.2.3. North America Artificial Intelligence in Education Market, By End User, 2019 to 2025
8.2.4. By Country
8.2.4.1. USA
8.2.4.2. Canada
8.2.4.3. Mexico
8.3. South America
8.3.1. South America Artificial Intelligence in Education Market, By Offering, 2019 to 2025
8.3.2. South America Artificial Intelligence in Education Market, By Technology, 2019 to 2025
8.3.3. South America Artificial Intelligence in Education Market, By End User, 2019 to 2025
8.3.4. By Country
8.3.4.1. Brazil
8.3.4.2. Argentina
8.3.4.3. Others
8.4. Europe
8.4.1. Europe Artificial Intelligence in Education Market, By Offering, 2019 to 2025
8.4.2. Europe Artificial Intelligence in Education Market, By Technology, 2019 to 2025
8.4.3. Europe Artificial Intelligence in Education Market, By End User, 2019 to 2025
8.4.4. By Country
8.4.4.1. Germany
8.4.4.2. France
8.4.4.3. United Kingdom
8.4.4.4. Spain
8.4.4.5. Others
8.5. Middle East and Africa
8.5.1. Middle East and Africa Artificial Intelligence in Education Market, By Offering, 2019 to 2025
8.5.2. Middle East and Africa Artificial Intelligence in Education Market, By Technology, 2019 to 2025
8.5.3. Middle East and Africa Artificial Intelligence in Education Market, By End User, 2019 to 2025
8.5.4. By Country
8.5.4.1. Saudi Arabia
8.5.4.2. Israel
8.5.4.3. Others
8.6. Asia Pacific
8.6.1. Asia Pacific Artificial Intelligence in Education Market, By Offering, 2019 to 2025
8.6.2. Asia Pacific Artificial Intelligence in Education Market, By Technology, 2019 to 2025
8.6.3. Asia Pacific Artificial Intelligence in Education Market, By End User, 2019 to 2025
8.6.4. By Country
8.6.4.1. China
8.6.4.2. Japan
8.6.4.3. South Korea
8.6.4.4. India
8.6.4.5. Others

9. Competitive Environment and Analysis
9.1. Major Players and Strategy Analysis
9.2. Emerging Players and Market Lucrativeness
9.3. Mergers, Acquisitions, Agreements, and Collaborations
9.4. Vendor Competitiveness Matrix

10. Company Profiles
10.1. IBM Corporation
10.2. Amazon Web Services Inc.
10.3. Microsoft Corporation
10.4. Google LLC
10.5. AI Brain Inc.
10.6. Cognizant
10.7. Blippar Ltd
10.8. Nuance Communications, Inc.
10.9. Cerevrum
10.10. Cognii, Inc.

For more information about this report visit https://www.researchandmarkets.com/r/3yqxap

Research and Markets also offers Custom Research services providing focused, comprehensive and tailored research.

Media Contact:

Research and Markets
Laura Wood, Senior Manager
[emailprotected]

For E.S.T Office Hours Call +1-917-300-0470
For U.S./CAN Toll Free Call +1-800-526-8630
For GMT Office Hours Call +353-1-416-8900

U.S. Fax: 646-607-1907 Fax (outside U.S.): +353-1-481-1716

SOURCE Research and Markets

http://www.researchandmarkets.com

Latin America's Growing Artificial Intelligence Wave – BRINK

Robots welding on the assembly line of the French car maker Peugeot/Citroen in Brazil. Artificial intelligence offers a chance for Latin America's economies to leapfrog to greater innovation and economic progress.

Photo: Antonio Scorza/AFP/ Getty Images

E-commerce firms have faced a conundrum in Latin America: How can they deliver packages in a region where 25% of urban populations live in informal, squatter neighborhoods with no addresses?

Enter Chazki, a logistics startup from Peru, which partnered with Arequipa's Universidad San Pablo to build an artificial intelligence robot to generate new postal maps across the country. The company has now expanded to Argentina, Mexico and Chile, introducing remote communities and city outskirts to online deliveries.

That's just one example of how machine learning is bringing unique Latin American solutions to unique Latin American challenges. Artificial intelligence and its correlated technologies could prove a major boon to the region's public and private sectors. In turn, its policymakers and business leaders have to better prepare to take full advantage while warding off potential downsides.

Latin America has long suffered from low productivity, and the COVID-19 pandemic is predictably making matters worse. Now, artificial intelligence offers a chance for the region's economies to leapfrog to greater innovation and economic progress. Research suggests that AI will add a full percentage point of GDP to five of South America's biggest economies (Argentina, Brazil, Chile, Colombia and Peru) by 2035.

Artificial intelligence could play transformative roles in Latin America for just about every sector, according to the Inter-American Development Bank (IADB). That means using AI to predict trade negotiation outcomes, commodity prices and consumer trends, or developing algorithms for use in factories, personalized medicine, infrastructure prototyping, autonomous transportation and energy consumption.

AI's applications in Latin America are already becoming reality. Argentinian Banco Galicia, Colombian airline Avianca and Brazilian online shopping platform Shop Fácil have all adopted chatbots as virtual customer service assistants. Chile's The Not Company developed an algorithm that analyzes animal-based food products and a database of 400,000 plants to generate recipes for vegan alternatives, and Peru's National University of Engineering built machines to autonomously detect dangerous gases.

Expect the trend to continue in the near future. An MIT global survey of senior business executives found that, by the end of 2019, 79% of companies in Latin America had launched AI programs. The results have been positive; fewer than 2% of respondents reported that the initiatives made lower-than-expected returns.

Another key factor is public acceptance of AI and automation. Thus far, Latin Americans are ahead of the curve in embracing the future, with another recent poll showing that 83% of Brazilian consumers said they would trust banking advice entirely generated by a computer, compared to a global average of 71%.

In a region that suffers from endemic corruption, pervasive violence, weak institutions and challenging socioeconomic conditions, governments, policymakers and organizations can use AI to tackle critical issues in the region, including food security, smart cities, natural resources and unemployment.

In Argentina, for example, artificial intelligence is being used to predict and prepare for teenage pregnancy and school dropout, as well as to outline unseen business opportunities in city neighborhoods. In Colombia and Uruguay, software has been developed to predict where crimes are likely to occur. In Brazil, the University of São Paulo is developing machine-learning technology that will rapidly assess the likelihood that patients have dengue fever, Zika or chikungunya when they arrive at a medical center.

At a time when public support for democracy in Latin America is flailing, AI could help come to the rescue. Congressional bodies across the region could use AI to boost the transparency and input of the legislative process. Indeed, the Hacker Laboratory, an innovation lab within Brazil's Chamber of Deputies, is using AI platforms to facilitate interactions between lawmakers and citizens.

AI is not risk-free, of course. Elon Musk called AI humanity's biggest existential threat, and Stephen Hawking said it could spell the end of the human race.

Apocalyptic scenarios aside, the immediate danger of AI in Latin America is unemployment and inequality. The Inter-American Development Bank warned in a 2018 study that between 36% and 43% of jobs could be lost to automation driven by artificial intelligence. Indeed, Latin America's governments must be prepared to set up guardrails and implement best practices for the implementation of AI.

Several governments in the region have already announced AI public policy plans. Mexico was one of the first 10 countries in the world to create a national AI strategy. Meanwhile, Brazil launched a national Internet of Things plan, which includes the country's commitment to a network of AI laboratories across strategic areas including cybersecurity and defense. Chile is coordinating with civil society groups and experts to adopt its own AI plan.

Move aside, Mercosur! Governments in Latin America might also find that machine learning strengthens regional ties. That means harnessing AI to crunch data on trade flows and rules, find areas of consensus in multilateral negotiations, or create algorithms for regional trade. After all, AI models have a 300% greater predictive capacity than traditional econometric models, according to the Inter-American Development Bank.

Beyond competing national plans for AI, Latin American leaders should be drafting a strategy specific to the region, much like the European Union is doing. A key takeaway from the recent UNESCO Regional Forum on AI in Latin America and the Caribbean was that the technology must develop with respect for universally recognized human rights and values.

In 2021, artificial intelligence could generate almost $3 trillion in business value and 6.2 billion hours of productivity worldwide. Latin America is rightfully jumping onto the bandwagon and has the potential to lead the parade in some areas.

To make full use of what could be a transformational productivity revolution for the region, government and business leaders must pump more resources into technology planning and education. The implementation of AI must reduce, not accelerate, the region's inequity.

Inside the Crowdsourced Effort for More Inclusive Voice AI – Built In

A few years ago, a German university student wrote a novel and submitted it to Common Voice, an open-source project launched by Mozilla in 2017 to make speech-training data more diverse and inclusive.

The book donation, which added 11,000 sentences, was "a bit of an exceptional contribution," said Alex Klepel, a former communications and partnership lead at Mozilla. Most of the voice data comes from more modest contributions: excerpts of podcasts, transcripts and movie scripts available in the public domain under a "no rights reserved" CC0 license.

Text-based sentences from these works are fed into a recently launched multi-language contributor platform, where they're displayed for volunteers who record themselves reading them aloud. The resulting audio files are then spooled back into the system for users to listen to and validate.

The goal of the project, as its website states, is to help teach machines how real people speak.

Though speech is becoming an increasingly popular way to interact with electronics, from digital assistants like Alexa, Siri and Google Assistant to hiring screeners and self-serve kiosks at fast food restaurants, these systems are largely inaccessible to much of humanity, Klepel told me. A wide swath of the global population speaks languages or dialects these assistants haven't been trained on. And in some cases, even if they have, assistants still have a hard time understanding them.

Though developers and researchers have access to a number of public-domain machine learning algorithms, training data is limited and costly to license. The English Fisher data set, for example, is about 2,000 hours and costs about $14,000 for non-members, according to Klepel.

Most of the voice data used to train machine learning algorithms is tied up in the proprietary systems of a handful of major companies, whose systems, many experts believe, reflect their largely homogenous user bases. And limited data means limited cognition. A recent Stanford University study, as Built In reported, found that the speech-to-text services used by Amazon, IBM, Google, Microsoft and Apple for batch transcriptions misidentified the words of Black speakers at nearly double the rate of white speakers.

"Machines don't understand everyone," explained Klepel by email. "They understand a fraction of people. Hence, only a fraction of people benefit from this massive technological shift."

Common Voice is an attempt to level the playing field. Today, it represents the largest public domain transcribed voice dataset, with more than 7,200 hours of voice data and 54 languages represented, including English, French, German, Spanish, Mandarin, Welsh, Kabyle and Kinyarwanda, according to Klepel.

Megan Branson, a product and UX designer at Mozilla who has overseen much of the project's UX development, said its latest and most exciting incarnation is the release of the multi-language website.

"We look at this as a fun task," she said. "It's daunting, but we can really do something. To better the internet, definitely, but also to give people better tools."

The project is guided by open-source principles, but it is hardly a free-for-all. Branson describes the website as "open by design," meaning it is freely available to the public, but intentionally curated to ensure the fidelity and accuracy of voice collections. The goal is to create products that meet Mozilla's business goals as well as those of the broader tech community.

In truth, Common Voice has multiple ambitions. It grew out of the need for thousands of hours of high-quality voice data to support Deep Speech, Mozilla's automated speech recognition engine, which, according to Klepel, approaches human accuracy and is intended to enable a new wave of products and services.

Deep Speech is designed not only to help Mozilla develop new voice-powered products, but also to support the global development of automated speech technologies, including in African countries like Rwanda, where it is believed they can begin to proliferate and advance sustainability goals. The idea behind Deep Speech is to develop a speech-to-text engine "that can run on anything, from smartphones to an offline Raspberry Pi 4 to a server class machine, obviating the need to pay patent royalties or exorbitant fees for existing speech-to-text services," he wrote.

Over time, the thinking goes, publicly validated data representing many of the world's languages and cultures might begin to redress algorithmic bias in datasets historically skewed toward white, English-speaking males.

But would it work? Could a voluntary public portal like Common Voice diversify training data? Back when the project started, no one knew (and the full impact of Common Voice on training data has yet to be determined), but by the spring of 2017 it was time to test the theory.

Guiding the process was the question, "How might we collect voice data for machine learning, knowing that voice data is extremely expensive, very proprietary and hard to come by?" Branson said.

As an early step, the team conducted a paper prototyping experiment in Taipei. Researchers created low-fidelity mock-ups of a sentence-reading tool and a voice-driven dating app and distributed them to people on the street to hear their reactions, as Branson described in Medium. It was guerrilla research, and it led to some counterintuitive findings. People expressed a willingness to voluntarily contribute to the effort, not because of the cool factor of a new app or web design, but out of an altruistic interest in making speech technology more diverse and inclusive.

Establishing licensing protocols was another early milestone. All submissions, Branson said, must fall under a public domain (CC0) license and meet basic requirements for punctuation, abbreviations and length (14 words or less).

The team also developed a set of tools to gather text samples. An online sentence collector allows users to log in and add existing sentences found in works in the public domain. A more recently released sentence extractor gives contributors the option of pulling up to three sentences from Wikipedia articles and submitting them to Mozilla as GitHub pull requests.
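The submission rules described above (public-domain text, basic punctuation checks, a 14-word cap) lend themselves to an automated filter. The sketch below is only an approximation of that kind of gate, not Common Voice's actual validation code; the specific checks are assumptions:

```python
# Rough sketch of a sentence-submission filter in the spirit of the rules above
# (length cap, punctuation sanity, no digits or abbreviations). Not the actual
# Common Voice validation code; the specific checks are assumptions.
import re

MAX_WORDS = 14

def check_sentence(sentence: str) -> list[str]:
    """Return a list of problems; an empty list means the sentence passes."""
    problems = []
    if len(sentence.split()) > MAX_WORDS:
        problems.append(f"longer than {MAX_WORDS} words")
    if re.search(r"\d", sentence):
        problems.append("contains digits (should be written out)")
    if re.search(r"\b[A-Z]{2,}\b|\w\.\w\.", sentence):
        problems.append("contains an abbreviation or acronym")
    if not re.match(r"^[A-Z]", sentence.strip()):
        problems.append("does not start with a capital letter")
    if not sentence.strip().endswith((".", "?", "!")):
        problems.append("missing end punctuation")
    return problems

for s in ["The quick brown fox jumps over the lazy dog.",
          "Meeting at 9am w/ the NGO reps"]:
    print(s, "->", check_sentence(s) or "ok")
```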

Strategic partnerships with universities, NGOs and corporate and government entities have helped raise awareness of the effort, according to Klepel. In late 2019, for instance, Mozilla began collaborating with the German Ministry for Economic Cooperation and Development. Under an agreement called Artificial Intelligence for All: FAIR FORWARD, the two partners are attempting to open voice technology for languages in Africa and Asia.

In one pilot project, Digital Umuganda, a young Rwandan artificial intelligence startup focused on voice technologies, is working with Mozilla to build an open speech corpus in Kinyarwanda, a language spoken by 12 million people, to a capacity that will allow it to train a speech-to-text engine for a use case in support of the UN's Sustainable Development Goals, Klepel wrote.

The work in Africa only scratches the surface of Deep Speech's expanding presence. According to Klepel, Mozilla is working with the Danish government, IBM, Bangor University in Wales, Mycroft AI and the German Aerospace Center in collaborative efforts ranging from growing Common Voice data sets to partnering on speaking engagements, to building voice assistants and moon robotics hardware.

But it is easy to imagine how such high-altitude programs might come with the risk of appearing self-interested. Outside of forging corporate and public partnerships at the institutional level, how do you collect diverse training data? And how do you incentivize conscientious everyday citizens to participate?

That's where Branson and her team believed the web contributor experience could differentiate Mozilla's data collection efforts. The team ran extensive prototype testing, gathering feedback from surveys, stakeholder sessions and tools such as Discourse and GitHub. And in the summer of 2017, a live coded version for English speakers was released to the wild. With a working model, research scientists and machine learning developers could come to the website and download data they could use to build voice tools, a major victory for the project.

But development still had a long way to go. A UX assessment review and a long list of feature requests and bug fixes showed there were major holes in the live alpha. Most of these were performance and usability fixes that could be addressed in future iterations, but some of the issues required more radical rethinking.

As Branson explained in Medium, it "reaffirmed something we already knew: our data could be far more diverse. Meaning more gender, accent, dialect and overall language diversity."

To address these concerns, Branson and her team began asking more vexing questions.

Early answers emerged in a January 2018 workshop. Mozilla's design team invited corporate and academic partners to a journey mapping and feature prioritization exercise, which brought to light several daring ideas. Everything was on the table, including wearable technologies and pop-up recording events. Ultimately, though, flashy concepts took a backseat to a solution less provocative but more pragmatic: the wireframes that would lay the groundwork for the web contributor experience that exists today.

From a user's standpoint, the redesigned website could hardly be more straightforward. On the left-hand side of the homepage is a microphone icon beside the word "Speak" and the phrase "Donate your voice." On the right-hand side is a green arrow icon beside the word "Listen" and the phrase "Help us validate our voices." Hover over either icon and you find more information, including the number of clips recorded that day and a goal for each language: 1,200 per day for speaking and 2,400 per day for validating. Without logging in, you can begin submitting audio clips, repeating back sentences like these:

The first inhabitants of the Saint George area were Australian aborigines.

The pledge items must be readily marketable.

You can also validate the audio clips of others, which, on a quick test, represent a diversity of accents and include men and women.

The option to set up a profile is designed to build loyalty and add a gamification aspect to the experience. Users with profiles can track their progress in multiple languages against those of other contributors. They can submit optional demographic data, such as age, sex, language, and accent, which is anonymized on the site but can be used by design and development teams to analyze speech contributions.

Current data reported on the site shows that 23 percent of English-language contributors identify their accent as United States English. Other common English accents include England (8 percent), India and South Asia (5 percent) and Southern African (1 percent).

Forty-seven percent of contributors identify as male and 14 percent identify as female, and the highest percentage of contributions by age comes from those ages 19-29. These stats, while hardly phenomenal as a measure of diversity, are evidence of the project's genuine interest in transparency.

A recently released single-word target segment being developed for business use cases, such as voice assistants, includes the digits zero through nine, as well as the words "yes," "no," "hey" and "Firefox" in 18 languages. An additional 70 languages are in progress; once 5,000 sentences have been reviewed and validated in these languages, they can be localized so the canonical site can accept voice recordings and listener validations.

Arguably, though, the most significant leap forward in the redesign was the creation of a multi-language experience. A language tab on the homepage header takes visitors to a page listing launched languages as well as those in progress. Progress bars report key metrics, such as the number of speakers and validated hours in a launched language, and the number of sentences needed for in-progress languages to become localized. The breadth of languages represented on the page is striking.

"We're seeing people collect in languages that are considered endangered, like Welsh and Parisian. It's really, really neat," Branson said.

So far, the team hasn't done much external marketing, in part because the infrastructure wasn't stable enough to meet the demands of a growing user base. With a recent transition to a more robust Kubernetes infrastructure, however, the team is ready to cast a wider net.

"How do we actually get this in front of people who aren't always in the classic open source communities, right? You know, white males," Branson asked. "How do we diversify that?"

Addressing that concern is likely the next hurdle for the design team.

"If Common Voice is going to focus on moving the needle in 2020, it's going to be in sex diversity, helping balance those ratios. And it's not a binary topic. We've got to work with the community, right?" Branson said.

Evaluating the protocols for validation methods is another important consideration. Currently, a user who believes a donated speech clip is accurate can give it a "Yes" vote. Two "Yes" votes earn the clip a spot in the Common Voice dataset. A "No" vote returns the clip to the queue, and two "No" votes relegate the snippet to the Clip Graveyard as unusable. But criteria for defining accuracy are still a bit murky. What if somebody misspeaks or their inflection is unintelligible to the listener? What if there's background noise and part of a clip can't be heard?

"The validation criteria offer guidance for [these cases], but understanding what we mean by accuracy for validating a clip is something that we're working to surface in this next quarter," Branson said.
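The two-vote rule described above needs only a small amount of state per clip. Here is a hedged sketch that mirrors the article's description rather than Common Voice's real implementation:

```python
# Sketch of the two-vote validation rule described above: two Yes votes accept a
# clip into the dataset, two No votes send it to the "Clip Graveyard", anything
# else keeps it in the queue. Mirrors the article's description, not the real code.
from dataclasses import dataclass

@dataclass
class Clip:
    clip_id: str
    yes_votes: int = 0
    no_votes: int = 0

    def record_vote(self, is_accurate: bool) -> str:
        if is_accurate:
            self.yes_votes += 1
        else:
            self.no_votes += 1
        return self.status()

    def status(self) -> str:
        if self.yes_votes >= 2:
            return "accepted into dataset"
        if self.no_votes >= 2:
            return "clip graveyard (unusable)"
        return "back in the validation queue"

clip = Clip("common_voice_en_000123")
print(clip.record_vote(True))    # back in the validation queue
print(clip.record_vote(True))    # accepted into dataset
```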

Zion Ariana Mengesha, who is a PhD candidate at Stanford University and an author on the aforementioned study of racial disparity in voice recognition software, sees promise in the ambitions of Common Voice, but stresses that understanding regional and demographic speech differences is crucial. Not only must submitted sentences reflect diversity, but the people listening and validating them must also be diverse to ensure they are properly understood.

"It's great that people are compiling more resources and making them openly available, so long as they do so with the care and intention to make sure that there is, within the open-source data set, equal representation across age, gender, region, etc. That could be a great step," Mengesha said.

Another suggestion from Mengesha is to incorporate corpuses that contain language samples from underrepresented and marginalized groups, such as Born in Slavery: Slave Narratives from the Federal Writers' Project, 1936-1938, a Library of Congress collection of 2,300 first-person accounts of slavery, and the University of Oregon's Corpus of Regional African American Language (CORAAL), the audio recordings and transcripts used in the Stanford study.

"How do you achieve racial diversity within voice applications?" Branson asked rhetorically. "That's a question we have. We don't have answers for this yet."

Julian Assange extradition delayed by further tech, coronavirus issues – Sydney Morning Herald

"No publisher of information has ever been successfully prosecuted for publishing national security information ever," Lewis said.

He told the court that if convicted, the 49-year-old would likely spend the rest of his life in jail.

"Under the best-case scenario we are looking at a sentence somewhere between 20 years, if everything goes brilliantly, to 175 years," he said.

Lewis spent around 90 minutes giving evidence before an audio clip interrupted his testimony. The judge walked out as the tech troubles interfered with the hearing, which was then adjourned until lunch.

But after lunch, court officials could not re-establish a link to Lewis, and the hearing was called off for the rest of the day; it will resume on Tuesday.

It is not the first time that the hearing has experienced difficulties connecting to, or hearing clearly, witnesses who are choosing to give evidence remotely as a result of the pandemic.

The hearing only resumed at the Old Bailey in central London on Monday following a two-day break after a coronavirus scare, when the wife of one of the lawyers representing the US government developed symptoms which she feared might be COVID-19.

The test proved negative and her diagnosis was no more than a common cold.

But when court resumed on Monday, Assange's legal team asked District Judge Vanessa Baraitser to order that everyone must wear masks in the courtroom.

She refused.

Filmmaker John Pilger stands with supporters of the Wikileaks founder Julian Assange as they gather outside the Old Bailey. Credit: Getty

"Those that wish to wear masks in the well of the court are welcome to do so unless they are directly addressing the court and I understand masks are available for this purpose," she said.

"But there is no obligation to do so and I make no direction ... in this regard."

She instead said that anyone, including Assange, who is sitting behind a glass wall in the dock, could wear a mask if they wished.

Assange wore a mask to his extradition hearing for the first time, but his QC, Mark Summers, claimed there had been "difficulties in getting him masks."

Assange's extradition hearing was already delayed by several months due to the pandemic. Assange has subsequently claimed that his incarceration is a risk to his health, exacerbated by coronavirus.

He is being held at Belmarsh prison on London's outskirts. His extradition hearing is expected to run until October.

Latika Bourke is a journalist for The Sydney Morning Herald and The Age, based in London.

Julian Assange extradition hearing: Punishing the publisher – Amnesty International

The last time I saw Julian Assange he looked tired and wan.

Dressed neatly in casual business attire, the Wikileaks founder was sitting in a glass-enclosed dock, at the back of a courtroom adjoining Belmarsh high security prison in London, flanked by two prison officers.

I had travelled from the US to observe the hearing. He had travelled via tunnel from his cell to the courtroom.

Sitting 20 feet away from Julian Assange, I was struck by how much of a shadow of his former self he had become.

Today, Julian Assange will be in court again, for the resumption of proceedings that will ultimately decide on the Trump administration's request for his extradition to the US.

But it is not just Julian Assange that will be in the dock. Beside him will sit the fundamental tenets of media freedom that underpin the rights to freedom of expression and the public's right to access information. Silence this one man, and the US and its accomplices will gag others, spreading fear of persecution and prosecution over a global media community already under assault in the US and in many other countries worldwide.

The stakes really are that high. If the UK extradites Assange, he would face prosecution in the USA on espionage charges that could send him to prison for decades, possibly in a facility reserved for the highest-security detainees and subjected to the strictest of daily regimes, including prolonged solitary confinement. All for doing something news editors do the world over: publishing public interest information provided by sources.

Indeed, President Donald Trump has called Wikileaks "disgraceful" and said that its actions in publishing classified information should carry the death penalty.

The chilling effect on other publishers, investigative journalists and any person who would dare to facilitate the publication of classified information about government wrongdoing would be immediate and severe. And the US would boldly go beyond its own borders, with a long arm to reach non-citizens, like Assange, who is Australian.

You don't need to be an expert in extradition law to understand that the charges against Assange are politically motivated.

The US government's relentless pursuit of Assange, and the UK's willing participation in his hunt and capture, has now landed him in a prison typically reserved for seasoned criminals. It has diminished him both physically and emotionally, often to the point of disorientation. Breaking him by isolating him from family, friends and his legal team seems part and parcel of the US's strategy, and it seems to be working.

You don't need to know the vagaries of extradition law to understand that the charges against Assange are not only classic political offences, and thus barred under extradition law, but, more crucially, politically motivated.

The 17 charges levelled by the US under the 1917 Espionage Act could bring 175 years in prison; add a conviction on the single computer fraud charge (said to complement the Espionage Act by dragging it into the computer era), and you get another gratuitous five years. Assange is the only publisher ever to bear the brunt of such espionage charges.

There is no doubt that the charges are politically motivated under this US administration, which has all but convicted Assange in the public arena. Secretary of State Mike Pompeo has claimed that Wikileaks is a hostile intelligence service whose activities must be mitigated and managed. The flagrantly unfair prosecution of Assange is an example of how far the US will go to manage the flow of information about government wrongdoing and thus undermine the public's right to know.

Assange was on Barack Obama's radar, too, but the Obama administration declined to prosecute him. Current US Attorney General William Barr, however, has turned out not one but two indictments since 2019, the latest at the end of June. That second indictment was a surprise not only to Assange's defence team, but also to the Crown lawyer and the judge, who were likewise taken unawares.

Earlier this year, sitting 20 feet away from Julian Assange, I was struck by how much of a shadow of his former self he had become. He did spontaneously stand up several times during that week of hearings to address the judge. He told her he was confused. He told her he could not properly hear the proceedings. He said that barriers in the prison and in court meant that he had not been able to consult with his lawyers. He was not technically permitted to address the judge directly, but he did so repeatedly, with flashes of the aggressive tactics he has used in the past to advocate for himself and the principles he has espoused.

If Julian Assange is extradited, it will have far-reaching human rights implications, setting a chilling precedent for the protection of those who publish leaked or classified information that is in the public interest.

Publishing such information is a cornerstone of media freedom and the public's right to access information. It must be protected, not criminalized.

Julia Hall is Amnesty International's expert on human rights in Europe.

WATCH: On the Journalism of Julian Assange With John Pilger, Craig Murray, Andrew Fowler and Serena Tinari – Consortium News

Watch Pilger, Murray, Fowler and Tinari Sunday at 6 am EDT, 11 am BST, and 8 pm AEST discussing the journalism of the imprisoned WikiLeaks publisher who is facing extradition to the U.S.

Reporting after the first week of Julian Assange's reconvened extradition trial, veteran journalists John Pilger, Andrew Fowler and Serena Tinari, and former UK ambassador and writer Craig Murray join #FreeTheTruth as we turn the spotlight onto the journalism of Julian Assange.

The ongoing extradition hearing at the Old Bailey in London has huge significance in terms of freedom of the press, whistleblower protection, the extraterritorial rights of the U.S. state and the public's right to know about the crimes governments commit in their name, including the rights of those tortured, maimed and killed in Iraq and Afghanistan to know the truth about crimes undertaken by occupying forces.

Also under the spotlight are the political motives of the prosecution, plus the covering up of war crimes, the spying on Assange's legally privileged conversations, Julian's right to a fair trial and so on, as detailed:

1) so thoroughly by Prof Nils Melzer, UN rapporteur on torture and arbitrary detention, here.

2) by journalist Max Blumenthal (re the spying) here.

3) by historian Mark Curtis and journalist Matt Kennard at DeclassifiedUK re concerns about judicial conflicts of interest (7 instances) here.

Hosted by Deepa Driver.

Be sure to watch the Consortium News daily reports on court proceedings and to read Craig Murray's accounts in his daily series, Your Man in the Public Gallery, at craigmurray.org and on Consortium News.

Quantum startup CEO suggests we are only five years away from a quantum desktop computer – TechCrunch

Today at TechCrunch Disrupt 2020, leaders from three quantum computing startups joined TechCrunch editor Frederic Lardinois to discuss the future of the technology. IonQ CEO and president Peter Chapman suggested we could be as little as five years away from a desktop quantum computer, but not everyone agreed on that optimistic timeline.

"I think within the next several years, five years or so, you'll start to see [desktop quantum machines]. Our goal is to get to a rack-mounted quantum computer," Chapman said.

But that seemed a tad optimistic to Alan Baratz, CEO at D-Wave Systems. He says that the superconducting technology his company is building requires a special kind of rather large quantum refrigeration unit called a dilution fridge, and that unit makes a five-year goal of a desktop quantum PC highly unlikely.

Itamar Sivan, CEO at Quantum Machines, too, believes we have a lot of steps to go before we see that kind of technology, and a lot of hurdles to overcome to make that happen.

"This challenge is not within a specific, singular problem about finding the right material or solving some very specific equation, or anything. It's really a challenge which is multidisciplinary to be solved here," Sivan said.

Chapman also sees a day when we could have edge quantum machines, for instance on a military plane, that couldn't access quantum machines from the cloud efficiently.

"You know, you can't rely on a system which is sitting in a cloud. So it needs to be on the plane itself. If you're going to apply quantum to military applications, then you're going to need edge-deployed quantum computers," he said.

One thing worth mentioning is that IonQ's approach to quantum is very different from D-Wave's and Quantum Machines'.

IonQ relies on technology pioneered in atomic clocks for its form of quantum computing. Quantum Machines doesn't build quantum processors. Instead, it builds the hardware and software layer to control these machines, which are reaching a point where that can't be done with classical computers anymore.

D-Wave, on the other hand, uses a concept called quantum annealing, which allows it to create thousands of qubits, but at the cost of higher error rates.

As the technology develops further in the coming decades, these companies believe they are offering value by giving customers a starting point into this powerful form of computing, which when harnessed will change the way we think of computing in a classical sense. But Sivan says there are many steps to get there.

This is a huge challenge that would also require focused and highly specialized teams that specialize in each layer of the quantum computing stack, he said. One way to help solve that is by partnering broadly to help solve some of these fundamental problems, and working with the cloud companies to bring quantum computing, however they choose to build it today, to a wider audience.

In this regard, I think that this year weve seen some very interesting partnerships form which are essential for this to happen. Weve seen companies like IonQ and D-Wave, and others partnering with cloud providers who deliver their own quantum computers through other companies cloud service, Sivan said. And he said his company would be announcing some partnerships of its own in the coming weeks.

The ultimate goal of all three companies is to eventually build a universal quantum computer, one that can deliver true quantum power. We can and should continue marching toward universal quantum to get to the point where we can do things that just can't be done classically, Baratz said. But he and the others recognize we are still in the very early stages of reaching that end game.

Read this article:
Quantum startup CEO suggests we are only five years away from a quantum desktop computer - TechCrunch

Are We Close To Realising A Quantum Computer? Yes And No, Quantum Style – Swarajya

Scientists have been hard at work to get a new kind of computer going for about a couple of decades. This new variety is not a simple upgrade over what you and I use every day. It is different. They call it a quantum computer.

The name doesn't leave much to the imagination. It is a machine based on the central tenets of the most successful theory of physics yet devised: quantum mechanics. And since it is based on such a powerful theory, it promises to be so advanced that a conventional computer, the one we know and recognise, cannot keep up with it.

Think of the complex real-world problems that are hard to solve, and it's likely that quantum computers will throw up answers to them someday. Examples include simulating complex molecules to design new materials, making better forecasts for weather, earthquakes or volcanoes, mapping out the reaches of the universe, and, yes, demystifying quantum mechanics itself.

One of the major goals of quantum computers is to simulate a quantum system. It is probably the reason why quantum computation is becoming a major reality, says Dr Arindam Ghosh, professor at the Department of Physics, Indian Institute of Science.

Given that the quantum computer is full of promise, and work on it has been underway for decades, it's fair to ask: do we have one yet?

This is a million-dollar question, and there is no simple answer to it, says Dr Rajamani Vijayaraghavan, the head of the Quantum Measurement and Control Laboratory at the Tata Institute of Fundamental Research (TIFR). Depending on how you view it, we already have a quantum computer, or we will have one in the future if the aim is to have one that is practical or commercial in nature.

We have it and don't. That sounds about quantum.

In the United States, Google has been setting new benchmarks in quantum computing.

Last year, in October, it declared quantum supremacy: a demonstration of a quantum computer's superiority over its classical counterpart. Google's Sycamore processor took 200 seconds to make a calculation that, the company claims, would have taken 10,000 years on the world's most powerful supercomputer.

This accomplishment came with conditions attached. IBM, whose supercomputer Summit (the world's fastest) came second-best to Sycamore, contested the 10,000-year claim and said that the calculation would have instead taken two and a half days with a tweak to how the supercomputer approached the task.

Some experts suggested that the nature of the task, generating random numbers in a quantum way, was not particularly suited to the classical machine. Besides, Google's quantum processor didn't dabble in a real-world application.

Yet Google was on to something. Even for the harshest critic, it provided a glimpse of the spectacular processing power of a quantum computer and what's possible down the road.

Google did one better recently. They simulated a chemical reaction on their quantum computer: the rearrangement of hydrogen atoms around nitrogen atoms in a diazene molecule (nitrogen hydride, or N2H2).

The reaction was a simple one, but it opened the doors to simulating more complex molecules in the future, an eager expectation of a quantum computer.

But how do we get there? That would require scaling up the system. More precisely, the number of qubits in the machine would have to increase.

Short for quantum bits, qubits are the basic building blocks of quantum computers. They are equivalent to the classical binary bits, zero and one, but with an important difference. While a classical bit can assume a state of zero or one, a quantum bit can accommodate both zero and one at the same time, a principle in quantum mechanics called superposition.
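In the standard notation, a qubit's state is a weighted combination of the two basis states; the weights are complex amplitudes whose squared magnitudes give the probabilities of reading out a zero or a one. A minimal way to write this:

```latex
% A single-qubit superposition: alpha and beta are complex amplitudes, and a
% measurement yields 0 with probability |alpha|^2 or 1 with probability |beta|^2.
\[
  \lvert \psi \rangle = \alpha \lvert 0 \rangle + \beta \lvert 1 \rangle,
  \qquad \lvert \alpha \rvert^{2} + \lvert \beta \rvert^{2} = 1
\]
```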

Similarly, quantum bits can be entangled. That is when two qubits in superposition are bound in such a way that one dictates the state of the other. It is what Albert Einstein in his lifetime described, and dismissed, as spooky action at a distance.
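The textbook example is a Bell state, in which neither qubit has a definite value on its own, yet measuring one instantly fixes the outcome for the other:

```latex
% A maximally entangled two-qubit (Bell) state: each individual outcome is
% random, but the two qubits are always found to agree.
\[
  \lvert \Phi^{+} \rangle = \tfrac{1}{\sqrt{2}} \bigl( \lvert 00 \rangle + \lvert 11 \rangle \bigr)
\]
```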

Qubits in these counterintuitive states are what allow a quantum computer to work its magic.

Presently, the most qubits, 72, are found on a Google device. The Sycamore processor, the Google chip behind the simulation of a chemical reaction, has a 53-qubit configuration. IBM has 53 qubits too, and Intel has 49. Some of the academic labs working with quantum computing technology, such as the one at Harvard, have about 40-50 qubits. In China, researchers say they are on course to develop a 60-qubit quantum computing system within this year.

The grouping is evident. The convergence is, more or less, around 50-60 qubits. That puts us in an interesting place. About 50 qubits can be considered the breakeven point, the one where the classical computer struggles to keep up with its quantum counterpart, says Dr Vijayaraghavan.
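A rough back-of-the-envelope calculation shows why roughly 50 qubits is where brute-force classical simulation starts to buckle: a simulator must store one complex amplitude per basis state, and the number of basis states doubles with every added qubit.

```latex
% Memory needed to hold the full state vector of 50 qubits, assuming
% 16 bytes per complex amplitude (two double-precision floats):
\[
  2^{50} \times 16\ \text{bytes} \approx 1.8 \times 10^{16}\ \text{bytes} \approx 18\ \text{petabytes}
\]
```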

It is generally acknowledged that once qubit counts rise to about 100, the classical computer gets left behind entirely. That stage is not far away. According to Dr Ghosh of IISc, the rate at which qubit counts are increasing today is faster than the pace of electronics development in its early days.

Over the next couple of years, we can get to 100-200 qubits, Dr Vijayaraghavan says.

A few years after that, we could possibly reach 300 qubits. For a perspective on how high that is, this is what Harvard Quantum Initiative co-director Mikhail Lukin has said about such a machine: If you had a system of 300 qubits, you could store and process more bits of information than the number of particles in the universe.
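The arithmetic behind that remark: the state of 300 qubits is described by 2^300 amplitudes, which comfortably exceeds the commonly cited estimate of around 10^80 particles in the observable universe.

```latex
% Number of amplitudes in a 300-qubit state versus the estimated
% particle count of the observable universe:
\[
  2^{300} \approx 2 \times 10^{90} \gg 10^{80}
\]
```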

In Indian labs, researchers are working with far fewer qubits. There is some catching up to do. Typically, India is slow off the blocks in pursuing frontier research. But the good news is that, over the years, the pace has been picking up, especially in the quantum area.

At TIFR, researchers have developed a unique three-qubit trimon quantum processor. Three qubits might seem small in comparison to examples cited earlier, but together they pack a punch. We have shown that for certain types of algorithms, our three-qubit processor does better than the IBM machine. It turns out that some gate operations are more efficient on our system than the IBM one, says Dr Vijayaraghavan.

The special ingredient of the trimon processor is three well-connected qubits rather than three individual qubits, a subtle but important difference.

Dr Vijayaraghavan plans to build more of these trimon quantum processors going forward, hoping that the advantages of a single trimon system spill over onto larger machines.

TIFR is simultaneously developing a conventional seven-qubit transmon (as opposed to trimon) system. It is expected to be ready in about one and a half years.

About a thousand kilometres south, at IISc, two labs under the Department of Instrumentation and Applied Physics are developing quantum processors too, with allied research underway in the Departments of Computer Science and Automation, and Physics, as well as the Centre for Nano Science and Engineering.

IISc plans to develop an eight-qubit superconducting processor within three years.

Once we have the know-how to build a working eight-qubit processor, scaling it up to tens of qubits in the future is easier, as it is then a matter of engineering progression, says Dr Ghosh, who is associated with the Quantum Materials and Devices Group at IISc.

It is not hard to imagine India catching up with the more advanced players in the quantum field this decade. The key is not to think of India building the biggest or the best machine; it is not necessary to have the greatest number of qubits. Little scientific breakthroughs that have the power to move the quantum dial decisively forward can come from any lab in India.

Zooming out to a global point of view, the trajectory of quantum computing is hazy beyond a few years. We have been talking about qubits in the hundreds, but, to have commercial relevance, a quantum computer needs lakhs (hundreds of thousands) of qubits in its armoury. That is the challenge, and a mighty big one.

It isn't even the case that simply piling up qubits will do the job. As the number of qubits in a system goes up, it needs to be ensured that they are stable, highly connected and error-free. This is because qubits cannot hang on to their quantum states in the presence of environmental noise such as heat or stray atoms or molecules. In fact, that is the reason quantum computers are operated at temperatures in the range of a few millikelvin to a kelvin. The slightest disturbance can knock the qubits off their quantum states of superposition and entanglement, leaving them to operate as classical bits.

If you are trying to simulate a quantum system, that's no good.

For that reason, even if the qubits are few, quantum computation can work well if the qubits are highly connected and error-free.

Companies like Honeywell and IBM are, therefore, looking beyond the number of qubits and instead eyeing a parameter called quantum volume.

Honeywell claimed earlier this year that it had the world's highest-performing quantum computer on the basis of quantum volume, even though the machine had just six qubits.

Dr Ghosh says quantum volume is indeed an important metric. The number of qubits alone is not the benchmark. You do need enough of them to do meaningful computation, but you also need to look at quantum volume, which measures the length and complexity of the quantum circuits a machine can run. The higher the quantum volume, the higher the potential for solving real-world problems.
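Roughly speaking, in IBM's published formulation, quantum volume is set by the largest "square" random circuit, with as many layers of gates as qubits, that the machine can run with acceptable fidelity. A hedged sketch of that relation:

```latex
% IBM-style quantum volume (roughly): find the largest width n such that a
% random circuit of width n and achievable depth d(n) still runs reliably.
\[
  \log_{2} \mathrm{QV} = \max_{n} \, \min\bigl( n, \, d(n) \bigr)
\]
```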

It comes down to error correction. Dr Vijayaraghavan says none of the big quantum machines in the US today use error-correction technology. If that can be demonstrated over the next five years, it would count as a real breakthrough, he says.

Guarding the system against faults, or errors, is the focus of researchers now as they look to scale up the number of qubits in a system. Developing a system with hundreds of thousands of qubits without correcting for errors would cancel out the benefits of a quantum computer.
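The intuition behind error correction is redundancy: spread one logical unit of information across several noisy physical carriers so that occasional faults can be outvoted. The sketch below is a purely classical analogy, a three-bit repetition code with majority-vote decoding, not a quantum code; real quantum error correction has to achieve a similar effect without directly measuring, and thereby destroying, the protected state.

```python
# Classical analogy for the redundancy idea behind error correction (a sketch,
# not how quantum codes actually work): encode one logical bit into three
# physical bits, let noise flip each bit independently, then decode by majority.
import random

def encode(bit):
    return [bit, bit, bit]  # triple the logical bit

def noisy_channel(bits, p_flip=0.05):
    # flip each physical bit with probability p_flip
    return [b ^ (random.random() < p_flip) for b in bits]

def decode(bits):
    return int(sum(bits) >= 2)  # majority vote

trials = 100_000
errors = sum(decode(noisy_channel(encode(1))) != 1 for _ in range(trials))
print(f"logical error rate: {errors / trials:.4f}")  # ~0.007 vs 0.05 raw flip rate
```

With a 5% physical flip rate, the majority vote pushes the logical error rate down to roughly 0.7%; error-corrected qubits aim for the same kind of gain, at far greater engineering cost.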

As is the case with any research in a frontier area, progress will depend on scientific breakthroughs across several different fields, from software to physics to materials science and engineering.

In light of that, collaboration between academia and industry is going to play a major role going forward. Playing to their respective strengths, academic labs can focus on supplying the core expertise needed to get a quantum computer going, while industry provides the engineering muscle to build the intricate hardware around it. Both are important parts of the quantum computing puzzle. At the end of the day, the quantum part of a quantum computer is tiny; most of the machine is high-end electronics. The industry can support that.

It is useful to recall at this point that even our conventional computers took decades to develop, from the first transistor in 1947 to the first microprocessor in 1971. The computers we use today would be unrecognisable to people in the 1970s. In the same way, what quantum computing will look like, say, 20 years down the line is unknown to us today.

However, governments around the world, including India's, are putting their weight behind the development of quantum technology. It is easy to see why. Hopefully, this decade can be the springboard that launches quantum computing higher than ever before. All signs point to it.

See more here:
Are We Close To Realising A Quantum Computer? Yes And No, Quantum Style - Swarajya

Spin-Based Quantum Computing Breakthrough: Physicists Achieve Tunable Spin Wave Excitation – SciTechDaily

Magnon excitation. Credit: Daria Sokol/MIPT Press Office

Physicists from MIPT and the Russian Quantum Center, joined by colleagues from Saratov State University and Michigan Technological University, have demonstrated new methods for controlling spin waves in nanostructured bismuth iron garnet films via short laser pulses. Presented in Nano Letters, the solution has potential for applications in energy-efficient information transfer and spin-based quantum computing.

A particle's spin is its intrinsic angular momentum, which always has a direction. In magnetized materials, the spins all point in one direction. A local disruption of this magnetic order is accompanied by the propagation of spin waves, whose quanta are known as magnons.

Unlike an electrical current, spin wave propagation does not involve a transfer of matter. As a result, using magnons rather than electrons to transmit information leads to much smaller thermal losses. Data can be encoded in the phase or amplitude of a spin wave and processed via wave interference or nonlinear effects.

Simple logical components based on magnons are already available as sample devices. However, one of the challenges of implementing this new technology is the need to control certain spin wave parameters. In many regards, exciting magnons optically is more convenient than by other means, with one of the advantages presented in the recent paper in Nano Letters.

The researchers excited spin waves in a nanostructured bismuth iron garnet. Even without nanopatterning, that material has unique optomagnetic properties. It is characterized by low magnetic attenuation, allowing magnons to propagate over large distances even at room temperature. It is also highly optically transparent in the near-infrared range and has a high Verdet constant.

The film used in the study had an elaborate structure: a smooth lower layer with a one-dimensional grating formed on top, with a 450-nanometer period (fig. 1). This geometry enables the excitation of magnons with a very specific spin distribution, which is not possible for an unmodified film.

To excite magnetization precession, the team used linearly polarized pump laser pulses, whose characteristics affected spin dynamics and the type of spin waves generated. Importantly, wave excitation resulted from optomagnetic rather than thermal effects.

Schematic representation of spin wave excitation by optical pulses. The laser pump pulse generates magnons by locally disrupting the ordering of spins shown as violet arrows in bismuth iron garnet (BiIG). A probe pulse is then used to recover information about the excited magnons. GGG denotes gadolinium gallium garnet, which serves as the substrate. Credit: Alexander Chernov et al./Nano Letters

The researchers relied on 250-femtosecond probe pulses to track the state of the sample and extract spin wave characteristics. A probe pulse can be directed to any point on the sample with a desired delay relative to the pump pulse. This yields information about the magnetization dynamics at a given point, which can be processed to determine the spin wave's spectral frequency, type, and other parameters.

Unlike the previously available methods, the new approach enables controlling the generated wave by varying several parameters of the laser pulse that excites it. In addition to that, the geometry of the nanostructured film allows the excitation center to be localized in a spot about 10 nanometers in size. The nanopattern also makes it possible to generate multiple distinct types of spin waves. The angle of incidence, the wavelength and polarization of the laser pulses enable the resonant excitation of the waveguide modes of the sample, which are determined by the nanostructure characteristics, so the type of spin waves excited can be controlled. It is possible for each of the characteristics associated with optical excitation to be varied independently to produce the desired effect.

Nanophotonics opens up new possibilities in the area of ultrafast magnetism, said the study's co-author, Alexander Chernov, who heads the Magnetic Heterostructures and Spintronics Lab at MIPT. The creation of practical applications will depend on being able to go beyond the submicrometer scale, increasing operation speed and the capacity for multitasking. We have shown a way to overcome these limitations by nanostructuring a magnetic material. We have successfully localized light in a spot a few tens of nanometers across and effectively excited standing spin waves of various orders. This type of spin wave enables devices operating at high frequencies, up to the terahertz range.

The paper experimentally demonstrates an improved launch efficiency and ability to control spin dynamics under optical excitation by short laser pulses in a specially designed nanopatterned film of bismuth iron garnet. It opens up new prospects for magnetic data processing and quantum computing based on coherent spin oscillations.

Reference: All-Dielectric Nanophotonics Enables Tunable Excitation of the Exchange Spin Waves by Alexander I. Chernov, Mikhail A. Kozhaev, Daria O. Ignatyeva, Evgeniy N. Beginin, Alexandr V. Sadovnikov, Andrey A. Voronov, Dolendra Karki, Miguel Levy and Vladimir I. Belotelov, 9 June 2020, Nano Letters. DOI: 10.1021/acs.nanolett.0c01528

The study was supported by the Russian Ministry of Science and Higher Education.

Follow this link:
Spin-Based Quantum Computing Breakthrough: Physicists Achieve Tunable Spin Wave Excitation - SciTechDaily