Latin America’s Growing Artificial Intelligence Wave – BRINK

Robots welding on the assembly line of the French carmaker Peugeot/Citroën in Brazil. Artificial intelligence offers a chance for Latin America's economies to leapfrog to greater innovation and economic progress.

Photo: Antonio Scorza/AFP/Getty Images


E-commerce firms have faced a conundrum in Latin America: How can they deliver packages in a region where 25% of urban populations live in informal squatter neighborhoods with no addresses?

Enter Chazki, a logistics startup from Peru, which partnered with Arequipa's Universidad San Pablo to build an artificial intelligence robot to generate new postal maps across the country. The company has since expanded to Argentina, Mexico and Chile, introducing remote communities and city outskirts to online deliveries.

That's just one example of how machine learning is bringing unique Latin American solutions to unique Latin American challenges. Artificial intelligence and its related technologies could prove a major boon to the region's public and private sectors. In turn, its policymakers and business leaders have to better prepare to take full advantage while warding off potential downsides.

Latin America has long suffered from low productivity, and the COVID-19 pandemic is predictably making matters worse. Now, artificial intelligence offers the region's economies a chance to leapfrog to greater innovation and economic progress. Research suggests that AI will add a full percentage point of GDP to five of South America's biggest economies (Argentina, Brazil, Chile, Colombia and Peru) by 2035.

Artificial intelligence could play transformative roles in Latin America for just about every sector, according to the Inter-American Development Bank (IADB). That means using AI to predict trade negotiation outcomes, commodity prices and consumer trends, or developing algorithms for use in factories, personalized medicine, infrastructure prototyping, autonomous transportation and energy consumption.

AI's applications in Latin America are already becoming reality. Argentinian Banco Galicia, Colombian airline Avianca and Brazilian online shopping platform Shop Fácil have all adopted chatbots as virtual customer service assistants. Chile's The Not Company developed an algorithm that analyzes animal-based food products and a database of 400,000 plants to generate recipes for vegan alternatives, and Peru's National University of Engineering built machines to autonomously detect dangerous gases.

Expect the trend to continue in the near future. An MIT global survey of senior business executives found that, by the end of 2019, 79% of companies in Latin America had launched AI programs. The results have been positive: fewer than 2% of respondents reported that the initiatives delivered lower-than-expected returns.

Another key factor is public acceptance of AI and automation. Thus far, Latin Americans are ahead of the curve in embracing the future, with another recent poll showing that 83% of Brazilian consumers said they would trust banking advice entirely generated by a computer, compared to a global average of 71%.

In a region that suffers from endemic corruption, pervasive violence, weak institutions and challenging socioeconomic conditions, governments, policymakers and organizations can use AI to tackle critical issues in the region, including food security, smart cities, natural resources and unemployment.

In Argentina, for example, artificial intelligence is being used to predict and prepare for teenage pregnancy and school dropout, as well as to outline unseen business opportunities in city neighborhoods. In Colombia and Uruguay, software has been developed to predict where crimes are likely to occur. In Brazil, the University of São Paulo is developing machine-learning technology that will rapidly assess the likelihood that patients have dengue fever, Zika or chikungunya when they arrive at a medical center.

At a time when public support for democracy in Latin America is flailing, AI could come to the rescue. Congressional bodies across the region could use AI to boost the transparency of, and public input into, the legislative process. Indeed, the Hacker Laboratory, an innovation lab within Brazil's Chamber of Deputies, is using AI platforms to facilitate interactions between lawmakers and citizens.

AI is not risk-free, of course. Elon Musk has called AI "humanity's biggest existential threat," and Stephen Hawking said it "could spell the end of the human race."

Apocalyptic scenarios aside, the immediate danger of AI in Latin America is unemployment and inequality. The IADB warned in a 2018 study that between 36% and 43% of jobs in the region could be lost to automation driven by artificial intelligence. Latin America's governments must therefore be prepared to set up guardrails and implement best practices for the rollout of AI.

Several governments in the region have already announced AI public policy plans. Mexico was one of the first 10 countries in the world to create a national AI strategy. Meanwhile, Brazil launched a national Internet of Things plan, which includes the country's commitment to a network of AI laboratories across strategic areas, including cybersecurity and defense. Chile is coordinating with civil society groups and experts to adopt its own AI plan.

Move aside, Mercosur! Governments in Latin America might also find that machine learning strengthens regional ties. That means harnessing AI to crunch data on trade flows and rules, find areas of consensus in multilateral negotiations, or create algorithms for regional trade. After all, AI models have a 300% greater predictive capacity than traditional econometric models, according to the Inter-American Development Bank.

Beyond competing national plans for AI, Latin American leaders should be drafting a strategy that is specific to the region, much like the European Union is doing. A key takeaway from the recent UNESCO Regional Forum on AI in Latin America and the Caribbean was that the technology must develop with respect for universally recognized human rights and values.

In 2021, artificial intelligence could generate almost $3 trillion in business value and 6.2 billion hours of productivity worldwide. Latin America is rightfully jumping onto the bandwagon and has the potential to lead the parade in some areas.

To make full use of what could be a transformational productivity revolution for the region, government and business leaders must pump more resources into technology planning and education. The implementation of AI must reduce, not accelerate, the region's inequity.


Inside the Crowdsourced Effort for More Inclusive Voice AI – Built In

A few years ago, a German university student wrote a novel and submitted it to Common Voice, an open-source project launched by Mozilla in 2017 to make speech-training data more diverse and inclusive.

The book donation, which added 11,000 sentences, was a bit of an exceptional contribution, said Alex Klepel, a former communications and partnership lead at Mozilla. Most of the voice data comes from more modest contributions: excerpts of podcasts, transcripts and movie scripts available in the public domain under a "no rights reserved" CC0 license.

Text-based sentences from these works are fed into a recently launched multi-language contributor platform, where they're displayed for volunteers who record themselves reading them aloud. The resulting audio files are then spooled back into the system for users to listen to and validate.

The goal of the project, as its website states, is to help "teach machines how real people speak."

Though speech is becoming an increasingly popular way to interact with electronics (from digital assistants like Alexa, Siri and Google Assistant to hiring screeners and self-serve kiosks at fast food restaurants), these systems are largely inaccessible to much of humanity, Klepel told me. A wide swath of the global population speaks languages or dialects these assistants haven't been trained on. And in some cases, even if they have, assistants still have a hard time understanding them.


Though developers and researchers have access to a number of public-domain machine learning algorithms, training data is limited and costly to license. The English Fisher data set, for example, contains about 2,000 hours of audio and costs about $14,000 for non-members, according to Klepel.

Most of the voice data used to train machine learning algorithms is tied up in the proprietary systems of a handful of major companies, whose systems, many experts believe, reflect their largely homogenous user bases. And limited data means limited cognition. A recent Stanford University study, as Built In reported, found that the speech-to-text services used by Amazon, IBM, Google, Microsoft and Apple for batch transcriptions misidentified the words of Black speakers at nearly double the rate of white speakers.

"Machines don't understand everyone," explained Klepel by email. "They understand a fraction of people. Hence, only a fraction of people benefit from this massive technological shift."


Common Voice is an attempt to level the playing field. Today, it represents the largest public domain transcribed voice dataset, with more than 7,200 hours of voice data and 54 languages represented, including English, French, German, Spanish, Mandarin, Welsh, Kabyle and Kinyarwanda, according to Klepel.

Megan Branson, a product and UX designer at Mozilla who has overseen much of the project's UX development, said its latest and most exciting incarnation is the release of the multi-language website.

"We look at this as a fun task," she said. "It's daunting, but we can really do something. To better the internet, definitely, but also to give people better tools."

The project is guided by open-source principles, but it is hardly a free-for-all. Branson describes the website as "open by design," meaning it is freely available to the public but intentionally curated to ensure the fidelity and accuracy of voice collections. The goal is to create products that meet Mozilla's business goals as well as those of the broader tech community.

In truth, Common Voice has multiple ambitions. It grew out of the need for thousands of hours of high-quality voice data to support Deep Speech, Mozilla's automated speech recognition engine, which, according to Klepel, approaches human accuracy and is intended to enable a new wave of products and services.


Deep Speech is designed not only to help Mozilla develop new voice-powered products, but also to support the global development of automated speech technologies, including in African countries like Rwanda, where it is believed they can begin to proliferate and advance sustainability goals. The idea behind Deep Speech is to develop a speech-to-text engine that can run on anything, from smartphones to an offline Raspberry Pi 4 to a server-class machine, obviating the need to pay patent royalties or exorbitant fees for existing speech-to-text services, he wrote.

Over time, the thinking goes, publicly validated data representing many of the world's languages and cultures might begin to redress algorithmic bias in datasets historically skewed toward white, English-speaking males.

But would it work? Could a voluntary public portal like Common Voice diversify training data? Back when the project started, no one knew (and the full impact of Common Voice on training data has yet to be determined), but by the spring of 2017 it was time to test the theory.

Guiding the process was the question: "How might we collect voice data for machine learning, knowing that voice data is extremely expensive, very proprietary, and hard to come by?" Branson said.

As an early step, the team conducted a paper prototyping experiment in Taipei. Researchers created low-fidelity mock-ups of a sentence-reading tool and a voice-driven dating app and distributed them to people on the street to hear their reactions, as Branson described in Medium. It was guerrilla research, and it led to some counterintuitive findings. People expressed a willingness to voluntarily contribute to the effort, not because of the cool factor of a new app or web design, but out of an altruistic interest in making speech technology more diverse and inclusive.

Establishing licensing protocols was another early milestone. All submissions, Branson said, must fall under a public domain (CC0) license and meet basic requirements for punctuation, abbreviations and length (14 words or fewer).
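As a rough illustration of that kind of gatekeeping, here is a minimal sketch of a sentence pre-check. Only the length limit and CC0 requirement are stated above; the specific character and abbreviation rules below are illustrative assumptions, not Mozilla's actual criteria.

```python
import re

MAX_WORDS = 14  # length limit mentioned above


def is_valid_sentence(sentence: str) -> bool:
    """Illustrative pre-check for a candidate Common Voice sentence.

    Assumes the submitter has already confirmed the text is public domain
    (CC0); the digit, abbreviation and punctuation checks are simplified
    stand-ins for the real validation rules.
    """
    words = sentence.split()
    if not words or len(words) > MAX_WORDS:
        return False                      # empty or too long to read comfortably
    if re.search(r"\d", sentence):
        return False                      # digits are ambiguous to read aloud
    if re.search(r"\b[A-Z]{2,}\b", sentence):
        return False                      # likely an abbreviation or acronym
    if not re.match(r"^[A-Z].*[.!?]$", sentence.strip()):
        return False                      # expect a capitalized, punctuated sentence
    return True


print(is_valid_sentence("The pledge items must be readily marketable."))  # True
print(is_valid_sentence("Visit NASA HQ at 300 E St SW."))                 # False
```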

The team also developed a set of tools to gather text samples. An online sentence collector allows users to log in and add existing sentences found in works in the public domain. A more recently released sentence extractor gives contributors the option of pulling up to three sentences from Wikipedia articles and submitting them to Mozilla as GitHub pull requests.

Strategic partnerships with universities, NGOs, and corporate and government entities have helped raise awareness of the effort, according to Klepel. In late 2019, for instance, Mozilla began collaborating with the German Ministry for Economic Cooperation and Development. Under an agreement called Artificial Intelligence for All: FAIR FORWARD, the two partners are attempting to open up voice technology for languages in Africa and Asia.

In one pilot project, Digital Umuganda, a young Rwandan artificial intelligence startup focused on voice technologies, is working with Mozilla to build an open speech corpus in Kinyarwanda, a language spoken by 12 million people, to a capacity that will allow it to train a speech-to-text engine for a use case in support of the UN's Sustainable Development Goals, Klepel wrote.


The work in Africa only scratches the surface of Deep Speech's expanding presence. According to Klepel, Mozilla is working with the Danish government, IBM, Bangor University in Wales, Mycroft AI and the German Aerospace Center on collaborative efforts ranging from growing Common Voice data sets to partnering on speaking engagements to building voice assistants and moon-robotics hardware.

But it is easy to imagine how such high-altitude programs might come with the risk of appearing self-interested. Outside of forging corporate and public partnerships at the institutional level, how do you collect diverse training data? And how do you incentivize conscientious everyday citizens to participate?


That's where Branson and her team believed the web contributor experience could differentiate Mozilla's data collection efforts. The team ran extensive prototype testing, gathering feedback from surveys, stakeholder sessions and tools such as Discourse and GitHub. And in the summer of 2017, a live coded version for English speakers was released into the wild. With a working model, research scientists and machine learning developers could come to the website and download data they could use to build voice tools, a major victory for the project.

But development still had a long way to go. A UX assessment review and a long list of feature requests and bug fixes showed there were major holes in the live alpha. Most of these were performance and usability fixes that could be addressed in future iterations, but some of the issues required more radical rethinking.

As Branson explained in Medium, it "reaffirmed something we already knew: our data could be far more diverse. Meaning more gender, accent, dialect and overall language diversity."

To address these concerns, Branson and her team began asking more vexing questions.

Early answers emerged in a January 2018 workshop. Mozillas design team invited corporate and academic partners to a journey mapping and feature prioritization exercise, which brought to light several daring ideas. Everything was on the table, including wearable technologies and pop-up recording events. Ultimately, though, flashy concepts took a backseat to a solution less provocative but more pragmatic: the wireframes that would lay the groundwork for the web contributor experience that exists today.

From a user's standpoint, the redesigned website could hardly be more straightforward. On the left-hand side of the homepage is a microphone icon beside the word "Speak" and the phrase "Donate your voice." On the right-hand side is a green arrow icon beside the word "Listen" and the phrase "Help us validate our voices." Hover over either icon and you find more information, including the number of clips recorded that day and a goal for each language: 1,200 per day for speaking and 2,400 per day for validating. Without logging in, you can begin submitting audio clips, repeating back sentences like these:

The first inhabitants of the Saint George area were Australian aborigines.

The pledge items must be readily marketable.

You can also validate the audio clips of others, which, on a quick test, represent a diversity of accents and include men and women.

The option to set up a profile is designed to build loyalty and add a gamification aspect to the experience. Users with profiles can track their progress in multiple languages against those of other contributors. They can submit optional demographic data, such as age, sex, language, and accent, which is anonymized on the site but can be used by design and development teams to analyze speech contributions.

Current data reported on the site shows that 23 percent of English-language contributors identify their accent as United States English. Other common English accents include England (8 percent), India and South Asia (5 percent) and Southern African (1 percent).

Forty-seven percent of contributors identify as male and 14 percent identify as female, and the highest percentage of contributions by age comes from those ages 19-29. These stats, while hardly phenomenal as a measure of diversity, are evidence of the project's genuine interest in transparency.


A recently released single-word target segment, being developed for business use cases such as voice assistants, includes the digits zero through nine, as well as the words "yes," "no," "hey" and "Firefox" in 18 languages. An additional 70 languages are in progress; once 5,000 sentences have been reviewed and validated in a language, it can be localized so the canonical site can accept voice recordings and listener validations.

Arguably, though, the most significant leap forward in the redesign was the creation of a multi-language experience. A language tab on the homepage header takes visitors to a page listing launched languages as well as those in progress. Progress bars report key metrics, such as the number of speakers and validated hours in a launched language, and the number of sentences needed for in-progress languages to become localized. The breadth of languages represented on the page is striking.

"We're seeing people collect in languages that are considered endangered, like Welsh and Parisian. It's really, really neat," Branson said.


So far, the team hasn't done much external marketing, in part because the infrastructure wasn't stable enough to meet the demands of a growing user base. With a recent transition to a more robust Kubernetes infrastructure, however, the team is ready to cast a wider net.

"How do we actually get this in front of people who aren't always in the classic open-source communities, right? You know, white males," Branson asked. "How do we diversify that?"

Addressing that concern is likely the next hurdle for the design team.

"If Common Voice is going to focus on moving the needle in 2020, it's going to be in sex diversity, helping balance those ratios. And it's not a binary topic. We've got to work with the community, right?" Branson said.


Evaluating the protocols for validation methods is another important consideration. Currently, a user who believes a donated speech clip is accurate can give it a "Yes" vote. Two "Yes" votes earn the clip a spot in the Common Voice dataset. A "No" vote returns the clip to the queue, and two "No" votes relegate the snippet to the "Clip Graveyard" as unusable. But the criteria for defining accuracy are still a bit murky. What if somebody misspeaks, or their inflection is unintelligible to the listener? What if there's background noise and part of a clip can't be heard?

"The validation criteria offer guidance for [these cases], but understanding what we mean by accuracy for validating a clip is something that we're working to surface in this next quarter," Branson said.
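As a rough illustration, the two-vote flow described above can be sketched as follows; the class and function names are invented for the example and do not reflect Common Voice's actual implementation.

```python
from dataclasses import dataclass


@dataclass
class Clip:
    """A donated recording awaiting community review."""
    audio_id: str
    yes_votes: int = 0
    no_votes: int = 0


def record_vote(clip: Clip, is_accurate: bool) -> str:
    """Apply one listener's vote and return the clip's status.

    Mirrors the rules described above: two Yes votes promote the clip to the
    released dataset, two No votes send it to the 'Clip Graveyard', and
    anything else keeps it in the review queue.
    """
    if is_accurate:
        clip.yes_votes += 1
    else:
        clip.no_votes += 1

    if clip.yes_votes >= 2:
        return "dataset"
    if clip.no_votes >= 2:
        return "graveyard"
    return "queue"


clip = Clip("clip_0001")
print(record_vote(clip, True))   # queue
print(record_vote(clip, True))   # dataset
```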

Zion Ariana Mengesha, who is a PhD candidate at Stanford University and an author on the aforementioned study of racial disparity in voice recognition software, sees promise in the ambitions of Common Voice, but stresses that understanding regional and demographic speech differences is crucial. Not only must submitted sentences reflect diversity, but the people listening and validating them must also be diverse to ensure they are properly understood.

"It's great that people are compiling more resources and making them openly available, so long as they do so with the care and intention to make sure that there is, within the open-source data set, equal representation across age, gender, region, etc. That could be a great step," Mengesha said.

Another suggestion from Mengesha is to incorporate corpuses that contain language samples from underrepresented and marginalized groups, such as "Born in Slavery: Slave Narratives from the Federal Writers' Project, 1936-1938," a Library of Congress collection of 2,300 first-person accounts of slavery, and the University of Oregon's Corpus of Regional African American Language (CORAAL), the audio recordings and transcripts used in the Stanford study.

"How do you achieve racial diversity within voice applications?" Branson asked rhetorically. "That's a question we have. We don't have answers for this yet."


Combatting COVID-19 misinformation with machine learning (VB Live) – VentureBeat

Presented by AWS Machine Learning

As machine learning has evolved, so have best practices, especially in the wake of COVID-19. Join this VB Live event to learn from experts about how machine learning solutions are helping companies respond in these uncertain times and the lessons learned along the way.

Register here for free.

Misinformation around COVID-19 is driving human behavior across the world. Here in the information age, sensationalized clickbait headlines are crowding out actual fact-based content, and, as a result, misinformation spreads virally. Conversations within small communities become the epicenter of false information, and that misinformation spreads as people talk, both online and off. As the number of misinformed people grows, so does this infodemic.

The spread of misinformation around COVID-19 is especially problematic, because it could overshadow the key messaging around safety measures from public health and government officials.

In an effort to counter misinformed narratives in central and west Africa, Novetta Mission Analytics (NMA) is working with the Africa CDC (Centres for Disease Control and Prevention) to discover and identify narratives and behavior patterns around the disease, says David Cyprian, product owner at Novetta. And machine learning is key.

They supply data that measures the acceptability, impact, and effectiveness of public health and social measures. In turn, the Africa CDC analysis of the data enables them to generate tailored guidelines for each country.

"With all these different narratives out there, we can use machine learning to quantify which ones are really affecting the largest population," Cyprian explains. "We uncover how quickly these things are spreading, how many people are talking about the issues, and whether anyone is actually criticizing the misinformation itself."

NMA uncovered trending phrases that indicate worry around the disease, mistrust about official messaging, and criticisms of local measures to combat the disease. They found that herbal remedies are becoming popular, as is the idea of herd immunity.

"We know all of these different narratives are changing behavior," Cyprian says. "They're causing people to make decisions that make it more difficult for the COVID-19 response community to be effective and implement countermeasures that are going to mitigate the effects of the virus."

To identify these narrative threads, Novetta ingests publicly available social media at scale and pairs it with a collection of domestic and international news media. They process and analyze that raw social and traditional media content in their ML platform built on AWS to identify where people are talking about these things, and where events are happening that drive the conversations. They also use natural language processing for directed sentiment analysis to discover whether narratives are being driven by mistrust of a local government entity, the West or international organizations, as well as to identify influencers who are engendering a lot of positive sentiment among users and building trust.

Pieces of content are tagged as positive or negative to local and global pandemic measures and public entities, creating small human-labeled data sets about specific micronarratives for specific populations that might be trading in misinformation.

By fusing rapid ingestion with a human labeling process of just a few hundred artifacts, they're able to kick off machine learning and apply it at the scale of social media. This allows them to maintain more than one learning model, rather than a single model used for all the problem sets.
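The article does not describe Novetta's models, but the bootstrapping pattern it outlines (a few hundred human-labeled posts used to train a narrative-specific classifier that is then applied at social-media scale) can be sketched with off-the-shelf tools. The labels, example posts, and the choice of TF-IDF features with logistic regression below are illustrative assumptions, not Novetta's implementation.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# A few hundred human-labeled artifacts: 1 = repeats the misinformation
# narrative, 0 = criticizes it or is unrelated (labels are illustrative).
labeled_posts = [
    ("this herbal tea cures the virus, share before they delete it", 1),
    ("health ministry confirms there is no herbal cure for covid-19", 0),
    # ... a few hundred more posts labeled by analysts ...
]

texts, labels = zip(*labeled_posts)

# Lightweight text classifier that can be retrained quickly per narrative.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)

# Apply the narrative-specific model to the full ingested stream.
incoming = ["my cousin says drinking this tea stops covid"]
print(model.predict_proba(incoming)[:, 1])  # probability the post spreads the narrative
```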

"We don't have a one-size-fits-all approach," says Cyprian. "We're always tuning and researching accuracy for specific narratives, and then we're able to provide large, near-real-time insights into how these narratives are propagating or spreading in the field."

Built on AWS, their machine learning architecture allows their development team to focus on what they do well, which is developing new applications and new widgets to analyze this data.

They don't need to worry about any server management or scaling, since that's taken care of for them with Amazon EC2 and S3. Their microservices architecture uses some additional features that Amazon offers, particularly Elastic Kubernetes Service (EKS) to orchestrate their services and Amazon Elastic Container Registry (ECR) to store images and run vulnerability testing before they deploy.

Novetta's approach is cross-disciplinary, bringing in domain experts from the health field, media analysts, machine learning research engineers and software developers. They work in small teams to solve problems together.

"In my experience, that's been the best way for machine learning to make a practical difference," he says. "I would just urge folks who are facing these similar difficult problems to enable their people to do what people do well, and then have the machine learning engineers help to harden, verify, and scale those efforts so you can bring countermeasures to bear quickly."

To learn more about the impact machine learning solutions can deliver and the lessons learned along the way, don't miss this round table with leaders from Kabbage and Novetta, as well as Michelle K. Lee, VP of the Amazon Machine Learning Solutions Lab.

Don't miss out!

Register here for free.

You'll learn:

Speakers:


Machine Learning Chips Market Dynamics Analysis to Grow at CAGR with Major Companies and Forecast 2026 – The Scarlet

Machine Learning Chips Market 2018: Global Industry Insights by Global Players, Regional Segmentation, Growth, Applications, Major Drivers, Value and Foreseen till 2024

The recently published research report sheds light on critical aspects of the global Machine Learning Chips market, such as the vendor landscape, competitive strategies, and market drivers and challenges, along with regional analysis. The report helps readers draw suitable conclusions and clearly understand the current and future scenario and trends of the global Machine Learning Chips market. The research study serves as a compilation of useful guidelines for players to understand and define their strategies more efficiently in order to keep themselves ahead of their competitors. The report profiles leading companies of the global Machine Learning Chips market along with the emerging new ventures that are creating an impact on the global market with their latest innovations and technologies.

Request Sample Report @ https://www.marketresearchhub.com/enquiry.php?type=S&repid=2632983&source=atm

The recently published study includes information on key segmentation of the global Machine Learning Chips market on the basis of type/product, application and geography (country/region). Each of the segments included in the report is studied in relation to different factors such as market size, market share, value, growth rate and other quantitative information.

The competitive analysis included in the global Machine Learning Chips market study allows readers to understand the differences between players and how they are operating amongst themselves on a global scale. The research study gives a deep insight into the current and future trends of the market, along with the opportunities for new players who are in the process of entering the global Machine Learning Chips market. Market dynamics analysis, such as market drivers and market restraints, is explained thoroughly in the most detailed and easiest possible manner. Companies can also find several recommendations to improve their business on the global scale.

Readers of the Machine Learning Chips Market report can also extract several key insights, such as the market size of various products and applications along with their market share and growth rate. The report also includes forecast data for the next five years and historical data for the past five years, along with the market shares of several key players.

Make An Enquiry About This Report @ https://www.marketresearchhub.com/enquiry.php?type=E&repid=2632983&source=atm

Global Machine Learning Chips Market by Companies:

The company profile section of the report offers great insights such as market revenue and market share of global Machine Learning Chips market. Key companies listed in the report are:

Market Segment Analysis
The research report includes specific segments by Type and by Application. Each type provides information about production during the forecast period of 2015 to 2026. The application segment also provides consumption during the forecast period of 2015 to 2026. Understanding the segments helps in identifying the importance of the different factors that aid market growth.

Segment by Type: Neuromorphic Chip, Graphics Processing Unit (GPU) Chip, Flash Based Chip, Field Programmable Gate Array (FPGA) Chip, Other

Segment by Application: Robotics Industry, Consumer Electronics, Automotive, Healthcare, Other

Global Machine Learning Chips Market: Regional Analysis
The report offers an in-depth assessment of the growth and other aspects of the Machine Learning Chips market in important regions, including the U.S., Canada, Germany, France, U.K., Italy, Russia, China, Japan, South Korea, Taiwan, Southeast Asia, Mexico, and Brazil. Key regions covered in the report are North America, Europe, Asia-Pacific and Latin America. The report has been curated after observing and studying various factors that determine regional growth, such as the economic, environmental, social, technological, and political status of the particular region. Analysts have studied data on revenue, production, and manufacturers for each region. This section analyses region-wise revenue and volume for the forecast period of 2015 to 2026. These analyses will help the reader to understand the potential worth of investment in a particular region.

Global Machine Learning Chips Market: Competitive Landscape
This section of the report identifies various key manufacturers of the market. It helps the reader understand the strategies and collaborations that players are focusing on to combat competition in the market. The comprehensive report provides a significant microscopic look at the market. The reader can identify the footprints of the manufacturers by knowing about the global revenue of manufacturers, the global price of manufacturers, and production by manufacturers during the forecast period of 2015 to 2019. The major players in the market include Wave Computing, Graphcore, Google Inc, Intel Corporation, IBM Corporation, Nvidia Corporation, Qualcomm, Taiwan Semiconductor Manufacturing, etc.

Global Machine Learning Chips Market by Geography:

You can Buy This Report from Here @ https://www.marketresearchhub.com/checkout?rep_id=2632983&licType=S&source=atm

Some of the Major Highlights of TOC covers in Machine Learning Chips Market Report:

Chapter 1: Methodology & Scope of Machine Learning Chips Market

Chapter 2: Executive Summary of Machine Learning Chips Market

Chapter 3: Machine Learning Chips Industry Insights

Chapter 4: Machine Learning Chips Market, By Region

Chapter 5: Company Profile

And Continue


This artist used machine learning to create realistic portraits of Roman emperors – The World

Some people have spent their quarantine downtime baking sourdough bread. Others experiment with tie-dye. But others, namely Toronto-based artist Daniel Voshart, have created painstaking portraits of all 54 Roman emperors of the Principate period, which spanned from 27 BC to 285 AD.

The portraits help people visualize what the Roman emperors would have looked like when they were alive.

Included are Voshart's best artistic guesses at the faces of emperors Augustus, Nero, Caligula, Marcus Aurelius and Claudius, among others. They don't look particularly heroic or epic; rather, they look like regular people, with craggy foreheads, receding hairlines and bags under their eyes.

To make the portraits, Voshart used a design software called Artbreeder, which relies on a kind of artificial intelligence called generative adversarial networks (GANs).

Voshart starts by feeding the GANs hundreds of images of the emperors collected from ancient sculpted busts, coins and statues. Then he gets a composite image, which he tweaks in Photoshop. To choose characteristics such as hair color and eye color, Voshart researches the emperors' backgrounds and lineages.

"It was a bit of a challenge," he says. "About a quarter of the project was doing research, trying to figure out if there's something written about their appearance."

He also needed to find good images to feed the GANs.

"Another quarter of the research was finding the bust, finding when it was carved, because a lot of these busts are recarvings or carved hundreds of years later," he says.

In a statement posted on Medium, Voshart writes: "My goal was not to romanticize emperors or make them seem heroic. In choosing bust/sculptures, my approach was to favor the bust that was made when the emperor was alive. Otherwise, I favored the bust made with the greatest craftsmanship and where the emperor was stereotypically uglier, my pet theory being that artists were likely trying to flatter their subjects."


Voshart is not a Rome expert. His background is in architecture and design, and by day he works in the art department of the TV show "Star Trek: Discovery," where he designs virtual reality walkthroughs of the sets before they're built.

But when the coronavirus pandemic hit, Voshart was furloughed. He used the extra time on his hands to learn how to use the Artbreeder software. The idea for the Roman emperor project came from a Reddit thread where people were posting realistic-looking images they'd created on Artbreeder using photos of Roman busts. Voshart gave it a try and went into exacting detail with his research and design process, doing multiple iterations of the images.

Voshart says he made some mistakes along the way. For example, Voshart initially based his portrait of Caligula, a notoriously sadistic emperor, on a beautifully preserved bust in the Metropolitan Museum of Art. But the bust was too perfect-looking, Voshart says.

"Multiple people told me he was disfigured, and another bust was more accurate," he says.

So, for the second iteration of the portrait, Voshart favored a different bust where one eye was lower than the other.

"People have been telling me my first depiction of Caligula was hot," he says. "Now, no one's telling me that."

Voshart says people who see his portraits on Twitter and Reddit often approach them like they'd approach Tinder profiles.


"I get maybe a few too many comments, like such-and-such is hot. But a lot of these emperors are such awful people!" Voshart says.

Voshart keeps a list on his computer of all the funny comparisons people have made to present-day celebrities and public figures.

"I've heard Nero looks like a football player. Augustus looks like Daniel Craig. My early depiction of Marcus Aurelius looks like the Dude from 'The Big Lebowski.'"

But the No. 1 comment? Augustus looks like Putin.


No one knows for sure whether Augustus actually looked like Vladimir Putin in real life. Voshart says his portraits are speculative.

"It's definitely an artistic interpretation," he says. "I'm sure if you time-traveled, you'd be very angry at me."


Demonstration Of What-If Tool For Machine Learning Model Investigation – Analytics India Magazine

The machine learning era has reached the stage of interpretability, where developing models and making predictions is simply not enough anymore. To make a powerful impact and get good results on the data, it is important to investigate and probe the dataset and the models. A good model investigation involves digging deep into the model to find insights and inconsistencies. This task usually involves writing a lot of custom functions. But with tools like the What-If Tool, probing becomes very easy and saves time and effort for programmers.

In this article we will learn about:

The What-If Tool (WIT) is a visualization tool designed to interactively probe machine learning models. WIT allows users to understand machine learning models like classification, regression and deep neural networks by providing methods to evaluate, analyse and compare the model. It is user friendly and can be used not only by developers but also by researchers and non-programmers very easily.

WIT was developed by Google under the People+AI research (PAIR) program. It is open-source and brings together researchers across Google to study and redesign the ways people interact with AI systems.

This tool provides multiple features and advantages for users to investigate the model.

Some of the features of using this are:

WIT can be used from a Google Colab notebook or a Jupyter notebook. It can also be used with TensorBoard.

Let us take a sample dataset to understand the different features of WIT. I will choose the forest fire dataset available for download on Kaggle. You can click here to download the dataset. The goal here is to predict the area affected by forest fires given the temperature, month, amount of rain, etc.

I will implement this tool on Google Colaboratory. Before we load the dataset and perform the processing, we will first install WIT. To install this tool, use:

!pip install witwidget

Once we have loaded and split the data, we can convert the columns month and day to categorical values using a label encoder, as sketched below.
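The article's original code listings did not survive extraction, so the snippets in this section are reconstructions of the steps being described. Here is a minimal sketch of loading the Kaggle forest fires data, splitting it, and encoding the month and day columns; the file path and split ratio are assumptions.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder

# Load the forest fires dataset (the path is an assumption).
data = pd.read_csv("forestfires.csv")

# Separate features and target, then hold out a test set.
X = data.drop(columns=["area"])
y = data["area"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Encode the categorical 'month' and 'day' columns as integers.
for col in ["month", "day"]:
    encoder = LabelEncoder()
    X_train[col] = encoder.fit_transform(X_train[col])
    X_test[col] = encoder.transform(X_test[col])
```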

Now we can build our model. I will use the sklearn ensemble module and implement a gradient boosting regression model.
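A minimal sketch of this training step, using scikit-learn's GradientBoostingRegressor; the hyperparameters are assumptions, since the article's exact settings are not shown.

```python
from sklearn.ensemble import GradientBoostingRegressor

# Train a gradient boosting regressor to predict the burned area.
model = GradientBoostingRegressor(n_estimators=200, learning_rate=0.05, random_state=42)
model.fit(X_train, y_train)

print("R^2 on the held-out set:", model.score(X_test, y_test))
```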

Now that we have the model trained, we will write a function to make predictions, since we need it for the widget.
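WIT calls a user-supplied prediction function on lists of examples. Below is a sketch of such a wrapper for the regressor above, assuming (as in the widget call that follows) that each example arrives as a plain list with the target 'area' as its last element; the function name is arbitrary.

```python
import numpy as np


def custom_predict(examples):
    """Adapter used by the What-If Tool.

    Assumes each example is a plain list of feature values with the target
    ('area') as the last element, so that column is dropped before calling
    the trained regressor.
    """
    features = np.array(examples)[:, :-1]
    return model.predict(features)
```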

Next, we will write the code to call the widget.
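A sketch of the widget call, assuming the held-out rows are passed as plain lists along with their feature names and the custom prediction function above; the WitConfigBuilder options shown here follow common WIT examples, so check the witwidget documentation for your installed version.

```python
import numpy as np
from witwidget.notebook.visualization import WitWidget, WitConfigBuilder

# Bundle the held-out examples (as plain lists, with the target appended),
# point WIT at our prediction function, and mark the task as regression.
test_examples = np.hstack([X_test.values, y_test.values.reshape(-1, 1)]).tolist()

config_builder = (
    WitConfigBuilder(test_examples, list(X_test.columns) + ["area"])
    .set_custom_predict_fn(custom_predict)
    .set_model_type("regression")
    .set_target_feature("area")
)
WitWidget(config_builder, height=800)
```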

This opens an interactive widget with two panels.

To the left, there is a panel for selecting multiple techniques to perform on the data and to the right is the data points.

As you can see, on the right panel we have options to select which features of the dataset to plot along the X-axis and Y-axis. I will set these values and check the graphs.

Here I have set FFMC along the X-axis and area as the target. Keep in mind that these points are displayed after the regression is performed.

Let us now explore each of the options provided to us.

You can select a random data point and highlight the point selected. You can also change the value of the datapoint and observe how the predictions change dynamically and immediately.

As you can see, changing the values changes the predicted outcomes. You can change multiple values and experiment with the model behaviour.

Another way to understand the behaviour of a model is to use counterfactuals. Counterfactuals are slight changes to a data point that cause the model to flip its decision.

By clicking on the slide button shown below we can identify the counterfactual which gets highlighted in green.

This plot shows the effects that the features have on the trained machine learning model.

As shown below, we can see the inference of all the features with the target value.

This tab allows us to look at the overall model performance. You can evaluate the model performance with respect to one feature or more than one feature. There are multiple options available for analysing the performance.

I have selected two features FFMC and temp against the area to understand performance using mean error.

If multiple training models are used their performance can be evaluated here.

The features tab is used to get the statistics of each feature in the dataset. It displays the data in the form of histograms or quantile charts.

The tab also enables us to look into the distribution of values for each feature in the dataset.

It also highlights the features that are most non-uniform in comparison to the other features in the dataset.

Identifying non-uniformity is a good way to reduce bias in the model.

WIT is a very useful tool for analysing model performance. The ability to inspect models in a simple, no-code environment will be of great help, especially from a business perspective.

It also gives insights into factors beyond training the model, like understanding why and how the model was created and how the dataset fits into the model.



Machine Learning & Big Data Analytics Education Market Size is Thriving Worldwide 2020 | Growth and Profit Analysis, Forecast by 2027 – The Daily…

Fort Collins, Colorado The Global Machine Learning & Big Data Analytics Education Market research report offers insightful information on the Global Machine Learning & Big Data Analytics Education market for the base year 2019 and is forecast between 2020 and 2027. Market value, market share, market size, and sales have been estimated based on product types, application prospects, and regional industry segmentation. Important industry segments were analyzed for the global and regional markets.

The effects of the COVID-19 pandemic have been observed across all sectors of all industries. The economic landscape has changed dynamically due to the crisis, and a change in requirements and trends has also been observed. The report studies the impact of COVID-19 on the market and analyzes key changes in trends and growth patterns. It also includes an estimate of the current and future impact of COVID-19 on overall industry growth.

Get a sample of the report @ https://reportsglobe.com/download-sample/?rid=64357

The report has a complete analysis of the Global Machine Learning & Big Data Analytics Education Market on a global as well as regional level. The forecast has been presented in terms of value and price for the 8 year period from 2020 to 2027. The report provides an in-depth study of market drivers and restraints on a global level, and provides an impact analysis of these market drivers and restraints on the relationship of supply and demand for the Global Machine Learning & Big Data Analytics Education Market throughout the forecast period.

The report provides an in-depth analysis of the major market players along with their business overview, expansion plans, and strategies. The main actors examined in the report are:

The Global Machine Learning & Big Data Analytics Education Market Report offers a deeper understanding and a comprehensive overview of the Global Machine Learning & Big Data Analytics Education division. Porter's Five Forces Analysis and SWOT Analysis have been addressed in the report to provide insightful data on the competitive landscape. The study also covers the market analysis and provides an in-depth analysis of the application segment based on market size, growth rate and trends.

Request a discount on the report @ https://reportsglobe.com/ask-for-discount/?rid=64357

The research report is an investigative study that provides a conclusive overview of the Global Machine Learning & Big Data Analytics Education business division through in-depth market segmentation into key applications, types, and regions. These segments are analyzed based on current, emerging and future trends. Regional segmentation provides current and demand estimates for the Global Machine Learning & Big Data Analytics Education industry in key regions in North America, Europe, Asia Pacific, Latin America, and the Middle East and Africa.

Global Machine Learning & Big Data Analytics Education Market Segmentation:

In market segmentation by types of Global Machine Learning & Big Data Analytics Education, the report covers:

In market segmentation by applications of the Global Machine Learning & Big Data Analytics Education, the report covers the following uses:

Request customization of the report @ https://reportsglobe.com/need-customization/?rid=64357

Overview of the table of contents of the report:

To learn more about the report, visit @ https://reportsglobe.com/product/global-machine-learning-big-data-analytics-education-assessment/

Thank you for reading our report. To learn more about report details or for customization information, please contact us. Our team will ensure that the report is customized according to your requirements.

How Reports Globe is different than other Market Research Providers

The inception of Reports Globe has been backed by providing clients with a holistic view of market conditions and future possibilities/opportunities to reap maximum profits out of their businesses and assist in decision making. Our team of in-house analysts and consultants works tirelessly to understand your needs and suggest the best possible solutions to fulfill your research requirements.

Our team at Reports Globe follows a rigorous process of data validation, which allows us to publish reports from publishers with minimum or no deviations. Reports Globe collects, segregates, and publishes more than 500 reports annually that cater to products and services across numerous domains.

Contact us:

Mr. Mark Willams

Account Manager

US: +1-970-672-0390

Email:[emailprotected]

Web:reportsglobe.com


Improving The Use Of Social Media For Disaster Management – Texas A&M University Today

The algorithm could be used to quickly identify social media posts related to a disaster.

Getty Images

There has been a significant increase in the use of social media to share updates, seek help and report emergencies during a disaster. Algorithms keeping track of social media posts that signal the occurrence of natural disasters must be swift so that relief operations can be mobilized immediately.

A team of researchers led by Ruihong Huang, assistant professor in the Department of Computer Science and Engineering at Texas A&M University, has developed a novel weakly supervised approach that can train machine learning algorithms quickly to recognize tweets related to disasters.

"Because of the sudden nature of disasters, there's not much time available to build an event recognition system," Huang said. "Our goal is to be able to detect life-threatening events using individual social media messages and recognize similar events in the affected areas."

Text on social media platforms like Twitter can be categorized using standard algorithms called classifiers. A sorting algorithm of this kind separates data into labeled classes or categories, similar to how spam filters in email service providers scan incoming emails and classify them as either spam or not spam based on their prior knowledge of spam messages.

Most classifiers are an integral part of machine learning algorithms that make predictions based on carefully labeled sets of data. In the past, machine learning algorithms have been used for event detection based on tweets or a burst of words within tweets. To ensure a reliable classifier for the machine learning algorithms, human annotators have to manually label large numbers of data instances one by one, which usually takes several days, sometimes even weeks or months.

The researchers also found that it is essentially impossible to find a keyword that does not have more than one meaning on social media, depending on the context of the tweet. For example, if the word "dead" is used as a keyword, it will pull in tweets about a variety of topics, such as a phone battery being dead or the television series "The Walking Dead."

"We have to be able to know which tweets containing the predetermined keywords are relevant to the disaster and separate them from the tweets that contain the correct keywords but are not relevant," Huang said.

To build more reliable labeled datasets, the researchers first used an automatic clustering algorithm to put the tweets into small groups. Next, a domain expert looked at the context of the tweets in each group to identify whether it was relevant to the disaster. The labeled tweets were then used to train the classifier to recognize the relevant tweets.
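The article does not reproduce the researchers' code, but the workflow it describes (cluster the keyword-matched tweets, let a domain expert label whole clusters, then train a classifier on the resulting weak labels) can be sketched roughly as follows. The TF-IDF features, k-means clustering and logistic regression used here are illustrative assumptions, not the team's actual implementation.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Tweets already filtered by disaster keywords (toy examples).
tweets = [
    "water rising fast, we are trapped on the second floor near the bayou",
    "my phone battery is dead again, worst day ever",
    "need rescue boat at cypress creek, family of four stranded",
    "the walking dead finale was wild last night",
]

# 1) Group keyword-matched tweets into small clusters.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(tweets)
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# 2) A domain expert inspects each cluster and labels it as a whole
#    (1 = disaster-relevant, 0 = irrelevant); far cheaper than labeling
#    thousands of tweets individually. The mapping here is illustrative.
cluster_labels = {0: 1, 1: 0}
tweet_labels = [cluster_labels[c] for c in clusters]

# 3) Train a classifier on the weakly labeled tweets and apply it to new ones.
clf = LogisticRegression(max_iter=1000).fit(X, tweet_labels)
print(clf.predict(vectorizer.transform(["roof collapsed, send help to main street"])))
```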

Using data gathered from the most impacted time periods for Hurricane Harvey and Hurricane Florence, the researchers found that their data labeling method and overall weakly-supervised system took one to two person-hours instead of the 50 person-hours that were required to go through thousands of carefully annotated tweets using the supervised approach.

Despite the classifier's overall good performance, they also observed that the system still missed several tweets that were relevant but used a different vocabulary than the predetermined keywords.

"Users can be very creative when discussing a particular type of event using the predefined keywords, so the classifier would have to be able to handle those types of tweets," Huang said. "There's room to further improve the system's coverage."

In the future, the researchers will look to explore how to extract information about the user's location so first responders will know exactly where to dispatch their resources.

Other contributors to this research include Wenlin Yao, a doctoral student supervised by Huang from the computer science and engineering department; Ali Mostafavi and Cheng Zhang from the Zachry Department of Civil and Environmental Engineering; and Shiva Saravanan, former intern of the Natural Language Processing Lab at Texas A&M.

The researchers described their findings in the proceedings of the Association for the Advancement of Artificial Intelligence's 34th Conference on Artificial Intelligence.

This work is supported by funds from the National Science Foundation.


Machine Learning in Medical Imaging Market 2020 : Analysis by Geographical Regions, Type and Application Till 2025 | Zebra, Arterys, Aidoc, MaxQ AI -…

Global Machine Learning in Medical Imaging Industry: with growing significant CAGR during Forecast 2020-2025

Latest Research Report on Machine Learning in Medical Imaging Market which covers Market Overview, Future Economic Impact, Competition by Manufacturers, Supply (Production), and Consumption Analysis

Understand the influence of COVID-19 on the Machine Learning in Medical Imaging Market with our analysts monitoring the situation across the globe. Request Now

The market research report on the global Machine Learning in Medical Imaging industry provides a comprehensive study of the various techniques and materials used in the production of Machine Learning in Medical Imaging market products. From industry chain analysis to cost structure analysis, the report examines multiple aspects, including the production and end-use segments of the Machine Learning in Medical Imaging market products. The latest trends in the pharmaceutical industry have been detailed in the report to measure their impact on the production of Machine Learning in Medical Imaging market products.

Leading key players in the Machine Learning in Medical Imaging market are Zebra, Arterys, Aidoc, MaxQ AI, Google, Tencent, Alibaba

Get sample of this report @ https://grandviewreport.com/sample/21159

Product Types: Supervised Learning, Unsupervised Learning, Semi-Supervised Learning, Reinforcement Learning

By Application/End-user: Breast, Lung, Neurology, Cardiovascular, Liver

Regional Analysis for the Machine Learning in Medical Imaging Market

North America (the United States, Canada, and Mexico)
Europe (Germany, France, UK, Russia, and Italy)
Asia-Pacific (China, Japan, Korea, India, and Southeast Asia)
South America (Brazil, Argentina, Colombia, etc.)
The Middle East and Africa (Saudi Arabia, UAE, Egypt, Nigeria, and South Africa)

Get Discount on Machine Learning in Medical Imaging report @ https://grandviewreport.com/discount/21159

This report comes along with an added Excel data-sheet suite taking quantitative data from all numeric forecasts presented in the report.

Research Methodology: The Machine Learning in Medical Imaging market has been analyzed using an optimum mix of secondary sources and benchmark methodology, besides a unique blend of primary insights. The contemporary valuation of the market is an integral part of our market sizing and forecasting methodology. Our industry experts and panel of primary members have helped in compiling appropriate aspects with realistic parametric assessments for a comprehensive study.

What's in the offering: The report provides in-depth knowledge about the utilization and adoption of Machine Learning in Medical Imaging across various applications, types, and regions/countries. Furthermore, key stakeholders can ascertain the major trends, investments, drivers, vertical players' initiatives, government pursuits towards product acceptance in the upcoming years, and insights on commercial products present in the market.

Full Report Link @ https://grandviewreport.com/industry-growth/Machine-Learning-in-Medical-Imaging-Market-21159

Lastly, the Machine Learning in Medical Imaging Market study provides essential information about the major challenges that are going to influence market growth. The report additionally provides overall details about the business opportunities to key stakeholders to expand their business and capture revenues in the precise verticals. The report will help the existing or upcoming companies in this market to examine the various aspects of this domain before investing or expanding their business in the Machine Learning in Medical Imaging market.

Contact Us:
Grand View Report
(UK) +44-208-133-9198
(APAC) +91-73789-80300
Email: [emailprotected]


Machine Learning Does Not Improve Upon Traditional Regression in Predicting Outcomes in Atrial Fibrillation: An Analysis of the ORBIT-AF and…

Aims

Prediction models for outcomes in atrial fibrillation (AF) are used to guide treatment. While regression models have been the analytic standard for prediction modelling, machine learning (ML) has been promoted as a potentially superior methodology. We compared the performance of ML and regression models in predicting outcomes in AF patients.

Methods and Results

The Outcomes Registry for Better Informed Treatment of Atrial Fibrillation (ORBIT-AF) and the Global Anticoagulant Registry in the FIELD (GARFIELD-AF) are population-based registries that include 74,792 AF patients. Models were generated from potential predictors using stepwise logistic regression (STEP), random forests (RF), gradient boosting (GB), and two neural networks (NNs). Discriminatory power was highest for death [STEP area under the curve (AUC) = 0.80 in ORBIT-AF, 0.75 in GARFIELD-AF] and lowest for stroke in all models (STEP AUC = 0.67 in ORBIT-AF, 0.66 in GARFIELD-AF). The discriminatory power of the ML models was similar to or lower than that of the STEP models for most outcomes. The GB model had a higher AUC than STEP for death in GARFIELD-AF (0.76 vs. 0.75), but only nominally, and both performed similarly in ORBIT-AF. The multilayer NN had the lowest discriminatory power for all outcomes. The calibration of the STEP models was more closely aligned with the observed events for all outcomes. In the cross-registry models, the discriminatory power of the ML models was similar to or lower than that of the STEP models in most cases.
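The registry data are not available here, but the kind of head-to-head comparison the abstract reports (discrimination measured by AUC for a logistic regression versus random forest and gradient boosting models) can be sketched on placeholder data; nothing below reflects the study's actual variables, model specifications, or results.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Placeholder binary-outcome data standing in for registry patients.
X, y = make_classification(n_samples=5000, n_features=30, n_informative=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=2000),
    "random forest": RandomForestClassifier(n_estimators=300, random_state=0),
    "gradient boosting": GradientBoostingClassifier(random_state=0),
}

# Compare discriminatory power the same way the study does: area under the ROC curve.
for name, model in models.items():
    model.fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```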

Conclusion

When developed from two large, community-based AF registries, ML techniques did not improve prediction modelling of death, major bleeding, or stroke.
