RapidMiner Named a Leader in Multimodal Predictive Analytics and Machine Learning Platforms by Independent Research Firm – Benzinga

BOSTON, Sept. 14, 2020 /PRNewswire-PRWeb/ -- RapidMiner, an enterprise AI platform for people of all skill levels, today announced that it has been recognized as a Leader in the Forrester Research, Inc. September 2020 report, The Forrester Wave: Multimodal Predictive Analytics And Machine Learning, Q3 2020.

This year's report evaluated 11 multimodal predictive analytics and machine learning (PAML) platforms based on 26 criteria, which are grouped into three high-level categories: current offering, strategy and market presence. Criteria for the platforms assessed include collaboration, model evaluation, model operations (ModelOps) and more. Of the platforms evaluated, RapidMiner received the highest possible scores in the modeling and model evaluation criteria, as well as in the ability to execute and solution roadmap criteria.

According to the report, "RapidMiner might not just have something for everyone; it could have everything for everyone." The report also notes, "[RapidMiner] has some of the most productivity-enhancing capabilities for automated data preparation (Turbo Prep) and model development (Auto Model) in the multimodal market, along with one of the most comprehensive visual tools for building data and ML pipelines."

RapidMiner is a data science platform that puts people, not technology, at the center of the enterprise AI journey. The platform empowers users of all domains and skillsets through a seamless combination of automated data science, drag-and-drop visual workflows, and notebook-based approaches. This allows business users, non-coding data scientists and coders to collaborate more effectively and work interchangeably.

"We believe that being named a Leader in this Wave evaluation from Forrester validates our commitment to reinvent enterprise AI so that anyone has the power to positively shape the future," said Peter Lee, CEO of RapidMiner. "By putting people at the center of the AI journey, we've developed a strong track record of helping companies in all major industries drive revenue, cut costs, and avoid risks."

In this evaluation, Forrester included vendors which, among other inclusion criteria, offer a solution that can operate on large data sets and provide capabilities for data acquisition and preparation, statistical and machine learning (ML) algorithms, a differentiated user interface to build models, and ModelOps features.

To read The Forrester Wave: Multimodal Predictive Analytics and Machine Learning, Q3 2020, visit https://rapidminer.com/resource/forrester-wave-predictive-analytics-machine-learning/.

About RapidMiner

RapidMiner is reinventing enterprise AI so that anyone has the power to positively shape the future. We're doing this by enabling data-loving people of all skill levels across the enterprise to rapidly create and operate AI solutions for immediate business impact. We offer a full-lifecycle platform that unifies data prep, machine learning, and model operations with a user experience that provides depth for data scientists and simplifies complex tasks for everyone else. The RapidMiner Center of Excellence methodology and the RapidMiner Academy ensure customers are successful, no matter their experience or resource levels. More than 40,000 organizations in over 150 countries rely on RapidMiner to increase revenue, cut costs, and reduce risk. Learn more at rapidminer.com.

Media Contact: Zoe Cushman, Matter Communications, 617-874-5201, RapidMiner@matternow.com

SOURCE RapidMiner


PODCAST: NVIDIA’s Director of Data Science Talks Machine Learning for Airlines and Aerospace – Aviation Today

Geoffrey Levene is the Director of Global Business Development for Data Science and Space at NVIDIA.

On this episode of the Connected Aircraft Podcast, we learn how airlines and aerospace manufacturers are adopting the use of data science workstations to develop task-specific machine learning models with Geoffrey Levene, Director, Global Business Development for Data Science and Space at NVIDIA.

In a May 7 blog, NVIDIA, one of the world's largest suppliers of graphics processing units and computer chips to the video gaming, automotive and other industries, explained how American Airlines is using its data science workstations to integrate machine learning into its air cargo operations planning. During this interview, Levene expands on other airline and aerospace uses of those same workstations and how they are creating new opportunities for efficiency.

Have suggestions or topics we should focus on in the next episode? Email the host, Woodrow Bellamy, at wbellamy@accessintel.com, or drop him a line on Twitter @WbellamyIIIAC.

Listen to this episode below, or check it out on iTunes or Google Play. If you like the show, subscribe on your favorite podcast app to get new episodes as soon as they're released.


How Can Machine Learning Help the Teaching Profession? – FE News

Further Education News

The FE News Channel gives you the latest education news and updates on emerging education strategies and the #FutureofEducation and the #FutureofWork.

Providing trustworthy and positive Further Education news and views since 2003, we are a digital news channel with a mixture of written-word articles, podcasts and videos. Our specialisation is providing you with the latest education news; our stance is always positive and sector-building, sharing different perspectives and views from thought leaders to provide a think tank of new ideas and solutions that bring the education sector together.

FE News publishes exclusive peer-to-peer thought leadership articles from our feature writers, as well as user-generated content across our network of over 3,000 newsrooms, offering multiple sources of the latest education news across the Education and Employability sectors.

FE News also broadcasts live events, podcasts with leading experts and thought leaders, webinars, video interviews and Further Education news bulletins, so you receive the latest developments in Skills News and across the Apprenticeship, Further Education and Employability sectors.

FE News publishes over 200 articles and new pieces of content every week. We are a news channel providing the latest Further Education news, giving insight from multiple sources on the latest education policy developments and strategies, through to thought leaders who provide blue-sky thinking, best practice and innovation to help look into future developments for education and the future of work.

In May 2020, FE News had over 120,000 unique visitors according to Google Analytics, and over 200 new pieces of news content every week, from thought leadership articles, to the latest education news via written word, podcasts and video, to press releases from across the sector.

We thought it would be helpful to explain how we tier our latest education news content, how you can get involved, and how we structure our week of content:

Our main features are exclusive thought leadership articles and blue-sky thinking, with experts writing peer-to-peer news articles about the future of education and the future of work. The focus is solution-led thought leadership, sharing best practice, innovation and emerging strategy; these pieces often go on to create future education news articles. We limit our main features to a maximum of 20 per week, as they are often about new concepts and new thought processes. Main features can also be exclusive articles responding to the latest education news, perhaps an expert's insight into a policy announcement or a response to an education think tank report or a white paper.

FE Voices was originally set up as a section on FE News to give a voice back to the sector. As we now have over 3,000 newsrooms and contributors, FE Voices articles are usually thought leadership pieces; they don't necessarily have to be exclusive, but usually are, and they are slightly shorter than main features. FE Voices can include more mixed media, such as embedded podcasts and videos. Our sector response articles, asking for comments and opinions on education policy announcements or responding to a report or white paper, are usually held in the FE Voices section. If we have a live podcast in an evening, or a radio show such as the SkillsWorldLive radio show, the next morning we place the FE podcast recording in the FE Voices section.

In sector news we have a blend of content from press releases, education resources, reports, education research and white papers from a range of contributors. We have many positive education news articles from colleges, awarding organisations and apprenticeship training providers, press releases from the DfE, think tank report overviews, and helpful resources for delivering education strategies to your learners and students.

We have a range of education podcasts on FE News, from hour-long full-production FE podcasts such as SkillsWorldLive, in conjunction with the Federation of Awarding Bodies, to weekly podcasts from experts and thought leaders providing advice and guidance to leaders. FE News also records podcasts at conferences and events, giving you one-on-one podcasts with education and skills experts on the latest strategies and developments.

We have over 150 education podcasts on FE News, ranging from EdTech podcasts with experts discussing Education 4.0 and how technology is complementing and transforming education, to podcasts with experts discussing education research, the future of work and how to develop skills systems for the jobs of the future, to interviews with the Apprenticeship and Skills Minister.

We record our own exclusive FE News podcasts, work with sector partners such as FAB to create weekly and daily education podcasts, and work with sector leaders to create exclusive education news podcasts.

FE News has over 700 FE video interviews and has been recording education video interviews with experts for over 12 years. These are usually vox-pop video interviews with experts across education and work, discussing blue-sky ideas and views about the future of education and work.

FE News has a free events calendar to check out the latest conferences, webinars and events to keep up to date with the latest education news and strategies.

The FE Newsroom is home to your content if you are an FE News contributor. It also helps the audience develop a relationship with you as an individual or with your organisation, as they can click through and box-set consume all of your previous thought leadership articles, latest education news press releases, videos and education podcasts.

Do you want to contribute, share your ideas or vision or share a press release?

If you want to write a thought leadership article, share your ideas and vision for the future of education or the future of work, write a press release sharing the latest education news, or contribute to a podcast, first of all you need to set up a free FE Newsroom login. Once the team have approved your newsroom (all newsrooms are approved by a member of the FE News team; no robots are used in this process!), you can start adding content. All articles, videos and podcasts are likewise approved by the FE News editorial team before they go live on FE News, so there will be a slight delay while the team reviews and approves content.



Finance Sector Benefits from Machine Learning Development and AI – Legal Reader

Banking and finance rely on experts, but the new expert on the scene is the AI/ML combo, able to do far more, do it faster and do it accurately.

Making the right decisions and grabbing opportunities in the fast-moving world of finance can make a difference to your bottom line. This is where artificial intelligence and machine learning make a tangible difference. Engage machine learning development services in your finance segment and life will not be the same. A Markets and Markets study shows that artificial intelligence in the financial segment will grow to over $7,300 million by 2022.

Data

The simple reason you need a machine learning development company to help you make better decisions with AI/ML is data. Data flows in torrents from diverse sources and contains precious nuggets of information. It can be the basis of understanding customer behavior, and it can help you gain predictive capabilities. Data analysis with ML can also help identify patterns that could indicate attempted fraud, so you save your reputation and money by tackling it in time.

The key

The key is to normalize huge sets of data and derive information in real time according to specifiable parameters. Machine learning algorithms can help you train the system to carry out fast analysis and deliver results based on algorithm models created for the purpose by a machine learning development company. The system actually becomes smarter as it ages, because it learns as it goes along.
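The normalization step described here can be sketched in a few lines of Python; the transaction features below are made-up illustrations, not any vendor's actual pipeline:

```python
import numpy as np

def zscore_normalize(X):
    """Scale each feature (column) to zero mean and unit variance,
    so amounts in the thousands don't drown out small-valued features.
    Constant columns are guarded against division by zero."""
    X = np.asarray(X, dtype=float)
    mean = X.mean(axis=0)
    std = X.std(axis=0)
    std[std == 0] = 1.0  # constant columns stay at 0 after centering
    return (X - mean) / std

# Hypothetical per-transaction features: [amount, hour of day, merchant risk]
transactions = np.array([
    [120.0, 14, 0.2],
    [ 35.5,  9, 0.1],
    [980.0, 23, 0.8],
    [ 50.0, 11, 0.1],
])
scaled = zscore_normalize(transactions)
```

After scaling, every column is directly comparable, which is what lets a downstream model weigh features fairly.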

To achieve the same result manually using standard IT solutions you would employ a team of IT specialists but even then it is doubtful if you could get outputs in time to help you take decisive action.

Fraud prevention

This is one case where prevention is better than cure. A typical bank may have hundreds of thousands of customers carrying out any number of different transactions. All such data is under the watchful eye of the ML-imbued system, which is quick to detect anomalies. ML has been known to cause misunderstandings: a customer unfamiliar with credit card operations repeatedly fumbled, raising a false alarm. Still, it is better to be safe than sorry than to carry out firefighting after the event.
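As a toy sketch of the anomaly detection described above, assuming only a list of transaction amounts: a robust outlier test (median/MAD rather than mean/std, so the outliers being hunted don't distort the baseline) flags the kind of deviation the article mentions. Real systems score many behavioral features, not just amounts:

```python
import numpy as np

def flag_anomalies(amounts, threshold=3.0):
    """Flag amounts more than `threshold` robust deviations from the
    median. MAD (median absolute deviation) is used instead of the
    standard deviation because it is not inflated by the very
    outliers we are trying to catch."""
    amounts = np.asarray(amounts, dtype=float)
    median = np.median(amounts)
    mad = np.median(np.abs(amounts - median))
    if mad == 0:
        return np.zeros(len(amounts), dtype=bool)
    # 1.4826 makes MAD comparable to a standard deviation under normality
    scores = np.abs(amounts - median) / (1.4826 * mad)
    return scores > threshold

history = [42.0, 55.0, 38.0, 61.0, 47.0, 52.0, 4999.0, 44.0]
flags = flag_anomalies(history)  # only the 4999.0 transfer is flagged
```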

Stock trading

Day trading went algorithmic quite a few years back and helped brokers profit by getting the system to make automatic profitable trades. Beyond day trading there are derivatives, forex, commodities and binary options, where specific ML models can help you, as a trader or a broker, anticipate price movements. This is one area where price is influenced not just by demand and supply but also by political factors, climate, company results and unforeseen calamities. ML keeps track of all of these and integrates them into a predictive capability to keep you ahead of the game.
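As a deliberately minimal illustration of price anticipation, the sketch below extrapolates a least-squares trend from recent closes; the real models the article alludes to fold in demand-supply, political, climate and earnings signals rather than price history alone:

```python
import numpy as np

def predict_next_price(prices, window=5):
    """Fit a least-squares trend line to the last `window` closing
    prices and extrapolate it one step ahead."""
    recent = np.asarray(prices, dtype=float)[-window:]
    t = np.arange(len(recent))
    slope, intercept = np.polyfit(t, recent, 1)  # degree-1 fit
    return slope * len(recent) + intercept

closes = [101.0, 102.5, 103.0, 104.2, 105.1, 106.0]  # hypothetical closes
forecast = predict_next_price(closes)  # extrapolates the upward trend
```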

Investment decisions

Likewise, investments in other areas like bonds, mutual funds and real estate need to be based on smart analysis of the present and future while factoring in external influencers. No one, for example, foresaw the Covid-19 devastation that froze economies and dried up sources of funds, with an impact on investments, especially in real estate. However, a machine learning-based system would keep track of developments and alert you in advance so that you can be prepared. Then there are more mundane tasks in the finance sector where ML helps. Portfolio managers always walk a tightrope and rely on experts who can make wrong decisions and affect clients' capital. Tap into the power of ML to stay on top and grow the wealth of wealthy clients. Their recommendations will get you more clients, making the investment in ML solutions more than worthwhile. It could be the best investment you make.

Automation

Banks, private lenders, institutions and insurance companies routinely carry out repetitive and mundane tasks like attending to inquiries, processing forms and handling transactions. This involves heavy manpower usage, leading to high costs. Your employees work under a deluge of such tasks and cannot do anything productive. Switch to ML technologies to automate such repetitive tasks. You will have two benefits: reduced costs with staff freed for productive work, and insight into the patterns hidden in the data those tasks generate.

The second one alone is worth the investment. In the normal course of things you would have to devote considerable energies to identify developing patterns whereas the ML solution presents trends based on which you can modify services, design offers or address customer pain points and ensure loyalty.

Risk mitigation

Smart operators are always gaming the system, for instance by finding ways to improve a credit score and obtain credit despite being ineligible. Such operators would pass the normal scanning techniques of banks. However, with ML assessing loan applications, the system delves deeper, digs out all relevant information, collates it and analyzes it to give you a true picture. Non-performing assets cause immense losses to banks, and this is one area where machine learning solutions put in place by expert machine learning development services prove immensely valuable.



Using machine learning to organize the chemical diversity – Tech Explorist

Because of the popularity of metal-organic frameworks (MOFs), scientists are developing, synthesizing, studying, and cataloging them. However, the sheer number of MOFs is creating a problem.

Even after synthesizing a new MOF, it is quite challenging to know whether it is genuinely new and not some minor variation of a structure that has already been synthesized.

To address this problem, EPFL scientists, in collaboration with MIT, have used machine learning to organize the chemical diversity found in the ever-growing databases of the popular metal-organic framework materials. Using machine learning, the scientists developed a language to compare two materials and quantify their differences.

Through this new language, scientists set off to determine the chemical diversity in MOF databases.

Professor Berend Smit at EPFL said, "Before, the focus was on the number of structures. But now, we discovered that the major databases have all kinds of bias towards particular structures. There is no point in carrying out expensive screening studies on similar structures. One is better off carefully selecting a set of very diverse structures, which will give much better results with far fewer structures."

Another exciting application is scientific archeology: the researchers used their machine-learning system to identify MOF structures that, at the time of their publication, were very different from the ones already known.

Smit said, "So we now have a straightforward tool that can tell an experimental group how different their novel MOF is compared to the 90,000 other structures already reported."
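A rough sketch of what "quantifying the difference" between two materials can look like: reduce each material to a vector of numeric descriptors and measure a distance. The descriptor values below are hypothetical, and the EPFL work uses a much richer chemical and geometric representation than three numbers:

```python
import math

def descriptor_distance(a, b):
    """Euclidean distance between two materials' descriptor vectors
    (e.g. normalized pore size, surface area, metal-node features).
    Near-duplicate structures land close together; genuinely new
    chemistry lands far away."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

mof_a = [0.82, 0.31, 0.55]  # hypothetical normalized descriptors
mof_b = [0.80, 0.33, 0.54]  # a minor variation of mof_a
mof_c = [0.10, 0.90, 0.05]  # a genuinely different material

d_near = descriptor_distance(mof_a, mof_b)
d_far = descriptor_distance(mof_a, mof_c)
```

Ranking a new structure's distance against every catalogued entry is what tells an experimental group how novel their MOF really is.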


Use cases for AI and ML in cyber security – Information Age

We explore how artificial intelligence (AI) and machine learning (ML) can be incorporated into cyber security

As the devices used for work continue to diversify, so do cyber attacks, but AI can help prevent them.

As cyber attacks get more diverse in nature and targets, it's essential that cyber security staff have the right visibility to determine how to solve vulnerabilities accordingly, and AI can help to solve problems that its human colleagues can't tackle alone.

"Cyber security resembles a game of chess," said Greg Day, chief security officer EMEA at Palo Alto Networks. "The adversary looks to outmanoeuvre the victim; the victim aims to stop and block the adversary's attack. Data is the king and the ultimate prize."

In 1996, an AI chess system, Deep Blue, won its first game against world champion Garry Kasparov. It's become clear that AI can programmatically think broader, faster and further outside the norms, and that's true of many of its applications in cyber security now too.

With this in mind, we explore particular use cases for AI in cyber security that are in place today.

Day went on to expand on how AI can work alongside cyber security staff in order to keep the organisation secure.

"We all know there aren't enough cyber security staff in the market, so AI can help to fill the gap," he said. "Machine learning, a form of AI, can read the input from SOC analysts and transpose it into a database, which becomes ever-expanding."


"The next time the SOC analyst enters similar symptoms, they are presented with previous similar cases along with the solutions, based on both statistical analysis and the use of neural nets, reducing the human effort."

"If there's no previous case, the AI can analyse the characteristics of the incident and suggest which SOC engineers would be the strongest team to solve the problem based on past experiences."

"All of this is effectively a bot: an automated process that combines human knowledge with digital learning to give a more effective hybrid solution."

Mark Greenwood, head of data science at Netacea, delved into the benefits of bots within cyber security, keeping in mind that companies must distinguish good from bad.

"Today, bots make up the majority of all internet traffic," explained Greenwood. "And most of them are dangerous. From account takeovers using stolen credentials to fake account creation and fraud, they pose a real cyber security threat."

"But businesses can't fight automated threats with human responses alone. They must employ AI and machine learning if they're serious about tackling the bot problem. Why? Because to truly differentiate between good bots (such as search engine scrapers), bad bots and humans, businesses must use AI and machine learning to build a comprehensive understanding of their website traffic."

"It's necessary to ingest and analyse a vast amount of data, and AI makes that possible, while taking a machine learning approach allows cyber security teams to adapt their technology to a constantly shifting landscape."

"By looking at behavioural patterns, businesses will get answers to the questions 'what does an average user journey look like?' and 'what does a risky, unusual journey look like?'. From here, we can unpick the intent of their website traffic, getting and staying ahead of the bad bots."
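An illustrative sketch of that behaviour-based separation, under assumed session features (request rate, dwell time, and the share of requests hitting sensitive endpoints): a nearest-centroid classifier labels new traffic by whichever behavioural profile it sits closest to. Production bot management uses far larger feature sets and training corpora:

```python
import numpy as np

# Hypothetical labeled sessions: [requests/min, avg seconds on page,
# fraction of requests hitting login/checkout endpoints]
labeled_sessions = np.array([
    [  2.0, 45.0, 0.05],   # human
    [  3.5, 30.0, 0.10],   # human
    [120.0,  0.5, 0.90],   # bad bot (credential stuffing)
    [ 90.0,  0.8, 0.85],   # bad bot
])
labels = np.array(["human", "human", "bot", "bot"])

def classify(session, X=labeled_sessions, y=labels):
    """Nearest-centroid classification: compute each class's mean
    feature vector, then label the session by the closest centroid."""
    centroids = {c: X[y == c].mean(axis=0) for c in np.unique(y)}
    return min(centroids, key=lambda c: np.linalg.norm(session - centroids[c]))

verdict = classify(np.array([100.0, 0.6, 0.88]))  # scripted traffic
```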

When considering certain aspects of cyber security that can benefit from the technology, Tim Brown, vice-president of security architecture at SolarWinds, says that AI can play a role in protecting endpoints. This is becoming all the more important as the number of remote devices used for work rises.

"By following best practice advice and staying current with patches and other updates, an organisation can be reactive and protect against threats," said Brown. "But AI may give IT and security professionals an advantage against cyber criminals."


Antivirus (AV) versus AI-driven endpoint protection is one such example; AV solutions often work based on signatures, and it's necessary to keep up with signature definitions to stay protected against the latest threats. This can be a problem if virus definitions fall behind, either because of a failure to update or a lack of knowledge from the AV vendor. If a new, previously unseen ransomware strain is used to attack a business, signature protection won't be able to catch it.

AI-driven endpoint protection takes a different tack, establishing a baseline of behaviour for the endpoint through a repeated training process. If something out of the ordinary occurs, AI can flag it and take action, whether that's sending a notification to a technician or even reverting to a safe state after a ransomware attack. This provides proactive protection against threats, rather than waiting for signature updates.
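The baseline-then-flag idea can be sketched as follows. The monitored metric (file writes per minute) and the threshold are illustrative assumptions; real endpoint products model many correlated signals rather than a single one:

```python
import statistics

class EndpointBaseline:
    """Learn what 'normal' looks like for one endpoint metric during a
    training window, then flag readings far outside that baseline."""

    def __init__(self, k=4.0):
        self.k = k          # how many deviations count as anomalous
        self.mean = None
        self.std = None

    def train(self, readings):
        self.mean = statistics.fmean(readings)
        self.std = statistics.pstdev(readings) or 1.0  # avoid zero std

    def is_anomalous(self, reading):
        return abs(reading - self.mean) > self.k * self.std

baseline = EndpointBaseline()
baseline.train([3, 5, 4, 6, 5, 4, 5])       # normal file-write rates
burst_flagged = baseline.is_anomalous(400)  # mass-encryption-style spike
```

On a flag, the response could be the technician notification or safe-state rollback described above.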

The AI model has proven itself more effective than traditional AV. For many of the small and midsize companies an MSP serves, AI-driven endpoint protection typically covers a small number of devices, so its cost should be of less concern. The other thing to consider is the cost of cleaning up after an infection; if AI-driven solutions help avoid infection in the first place, they can pay for themselves by avoiding clean-up costs and, in turn, creating higher customer satisfaction.

With more employees working from home, and possibly using their personal devices to complete tasks and collaborate with colleagues, it's important to be wary of scams arriving by text message.

"With malicious actors recently diversifying their attack vectors, using Covid-19 as bait in SMS phishing scams, organisations are under a lot of pressure to bolster their defences," said Brian Foster, senior vice-president of product management at MobileIron.

"To protect devices and data from these advanced attacks, the use of machine learning in mobile threat defence (MTD) and other forms of managed threat detection continues to evolve as a highly effective security approach."

"Machine learning models can be trained to instantly identify and protect against potentially harmful activity, including unknown and zero-day threats that other solutions can't detect in time. Just as important, when machine learning-based MTD is deployed through a unified endpoint management (UEM) platform, it can augment the foundational security provided by UEM to support a layered enterprise mobile security strategy."


"Machine learning is a powerful, yet unobtrusive, technology that continually monitors application and user behaviour over time so it can identify the difference between normal and abnormal behaviour. Targeted attacks usually produce a very subtle change in the device, and most of them are invisible to a human analyst. Sometimes detection is only possible by correlating thousands of device parameters through machine learning."

These use cases and more demonstrate the viability of AI and cyber security staff effectively uniting. However, Mike MacIntyre, vice-president of product at Panaseer, believes that the space still has hurdles to overcome in order for this to really come to fruition.

"AI certainly has a lot of promise, but as an industry we must be clear that it's currently not a silver bullet that will alleviate all cyber security challenges and address the skills shortage," said MacIntyre. "This is because AI is currently just a term applied to a small subset of machine learning techniques. Much of the hype surrounding AI comes from how enterprise security products have adopted the term, and the misconception (wilful or otherwise) about what constitutes AI."


"The algorithms embedded in many modern security products could, at best, be called narrow, or weak, AI; they perform highly specialised tasks in a single, narrow field and have been trained on large volumes of data specific to a single domain. This is a far cry from general, or strong, AI, which is a system that can perform any generalised task and answer questions across multiple domains. Who knows how far away such a system is (there is much debate, ranging from the next decade to never), but no CISO should be factoring such a tool into their three-to-five-year strategy."

"Another key hurdle hindering the effectiveness of AI is the problem of data integrity. There is no point deploying an AI product if you can't get access to the relevant data feeds or aren't willing to install something on your network. The future for security is data-driven, but we are a long way from AI products following through on the promises of their marketing hype."


Outlook on the AI in Education Global Market to 2025 – by Offering, Technology, End-user and Geography – PRNewswire

DUBLIN, Sept. 14, 2020 /PRNewswire/ -- The "AI in Education Market - Forecasts from 2020 to 2025" report has been added to ResearchAndMarkets.com's offering.

The Artificial Intelligence in education market was valued at US$2.022 billion for the year 2019. The growing adoption of artificial intelligence in the education sector due to the ability of these solutions to enhance the learning experience is one of the key factors which is anticipated to propel its adoption across the globe for education purposes.

The proliferation of smart devices and the rapidly growing trend for digitalization across numerous sectors are also propelling the demand for artificial intelligence solutions in the education sector. Artificial intelligence majorly uses deep learning, machine learning, and advanced analytics, especially for monitoring the learning process of the learner, such as the marks obtained and the speed of a particular individual, among others. These solutions also offer a personalized learning experience and high-quality education, and help learners enhance pre-existing knowledge and learning.

The growth of the market is also anticipated to be driven by the growing use of multilingual translators integrated with AI technology. Furthermore, artificial intelligence solutions also give a customizable learning experience depending on the requirements of the end-user, offering student path prediction, suggestion of learning paths, and identification of weaknesses, and helping learners analyze their areas for improvement and build their capabilities.

However, the limited availability of a skilled workforce and high investment costs, coupled with the growing risk to data security on account of a surge in the number of cyber attacks, are among the major factors anticipated to hamper the growth of artificial intelligence in the education market. The integration of AI-empowered educational games is also projected to boost the adoption of these solutions across the K-12 education segment, as learning for the students becomes highly interactive, which further helps teachers uplift the learning experience of the students. In addition, the integration of AI in the education sector has revolutionized the overall learning experience across all learning areas through its result-driven approach.

Furthermore, artificial intelligence in education can also be used to automate basic activities in the education sector, including grading and assessment, among others. This, in turn, saves teachers' time, letting them stay focused on the quality of education and the professional development of the students. The use of these solutions enables learners to attain academic success with the help of smart content. The global artificial intelligence in education market has been segmented on the basis of offering, technology, end-user, and geography. On the basis of offering, the market has been classified into solutions and services. By technology, the market has been segmented into deep learning, machine learning, and natural language processing. By end-user, the segmentation has been done on the basis of academic learning and corporate learning.

Services are projected to show good growth opportunities

By offering, the market has been segmented into solutions and services. The services segment is estimated to show decent growth opportunities during the forecast period and beyond on account of rising awareness among education providers of the benefits of adopting AI technology to deliver high-quality education. Furthermore, the growing trend of professional training services is driving adoption of AI-powered solutions for corporate learning, which also supports the growth of this segment over the next five years.

Corporate learning expected to grow substantially over the forecast period

The corporate learning segment is anticipated to show decent growth over the forecast period on account of growing adoption by corporations seeking to enhance the skills of their employees amid rising competition. To that end, key players are investing heavily in new platforms and courses to attract large companies and expand their competitive edge in the market. Furthermore, the adoption of AI-based corporate learning by global giants, aimed at equipping the workforce with the skills and knowledge to succeed in a rapidly changing economy, further demonstrates the segment's growth potential within the artificial intelligence in education market.

The academic learning segment is projected to hold a decent share of the market owing to high adoption of these solutions at the school level to enhance education quality and build students' careers. The use of AI techniques for learning also helps students focus more on self-development.

North America expected to hold a substantial market share

Geographically, the global market has been segmented into North America, South America, Europe, the Middle East and Africa, and the Asia Pacific. North America is projected to hold a noteworthy share of the market on account of its well-established infrastructure and early adoption of technology; the presence of a state-of-the-art education system further bolsters growth in the region. The Asia Pacific region is projected to show robust growth over the next five years on account of rising adoption of AI-based solutions across major developing countries such as China and Indonesia. China's education system is among the most reputed in the world, while also being one of the most challenging and competitive. Moreover, India has one of the largest education sectors in the world. The presence of these well-established education systems is expected to supplement market growth over the next five years.

Competitive Insights

Prominent key market players in the artificial intelligence in education market include IBM Corporation, Cognizant, Google LLC, Microsoft Corporation, and Amazon Web Services among others. These companies hold a noteworthy share in the market on account of their good brand image and product offerings.

Major players in the artificial intelligence in education market have been covered along with their relative competitive position and strategies. The report also mentions recent deals and investments of different market players over the last two years.

Key Topics Covered:

1. Introduction
1.1. Market Definition
1.2. Market Segmentation

2. Research Methodology
2.1. Research Data
2.2. Assumptions

3. Executive Summary
3.1. Research Highlights

4. Market Dynamics
4.1. Market Drivers
4.2. Market Restraints
4.3. Porter's Five Forces Analysis
4.3.1. Bargaining Power of Suppliers
4.3.2. Bargaining Power of Buyers
4.3.3. Threat of New Entrants
4.3.4. Threat of Substitutes
4.3.5. Competitive Rivalry in the Industry
4.4. Industry Value Chain Analysis

5. Artificial Intelligence in Education Market Analysis, By Offering
5.1. Introduction
5.2. Solutions
5.3. Services

6. Artificial Intelligence in Education Market Analysis, By Technology
6.1. Introduction
6.2. Deep Learning
6.3. Machine Learning
6.4. Natural Language Processing

7. Artificial Intelligence in Education Market Analysis, By End User
7.1. Introduction
7.2. Academic Learning
7.2.1. K-12 Education
7.2.2. Higher Education
7.3. Corporate Learning

8. Artificial Intelligence in Education Market Analysis, By Geography
8.1. Introduction
8.2. North America
8.2.1. North America Artificial Intelligence in Education Market, By Offering, 2019 to 2025
8.2.2. North America Artificial Intelligence in Education Market, By Technology, 2019 to 2025
8.2.3. North America Artificial Intelligence in Education Market, By End User, 2019 to 2025
8.2.4. By Country
8.2.4.1. USA
8.2.4.2. Canada
8.2.4.3. Mexico
8.3. South America
8.3.1. South America Artificial Intelligence in Education Market, By Offering, 2019 to 2025
8.3.2. South America Artificial Intelligence in Education Market, By Technology, 2019 to 2025
8.3.3. South America Artificial Intelligence in Education Market, By End User, 2019 to 2025
8.3.4. By Country
8.3.4.1. Brazil
8.3.4.2. Argentina
8.3.4.3. Others
8.4. Europe
8.4.1. Europe Artificial Intelligence in Education Market, By Offering, 2019 to 2025
8.4.2. Europe Artificial Intelligence in Education Market, By Technology, 2019 to 2025
8.4.3. Europe Artificial Intelligence in Education Market, By End User, 2019 to 2025
8.4.4. By Country
8.4.4.1. Germany
8.4.4.2. France
8.4.4.3. United Kingdom
8.4.4.4. Spain
8.4.4.5. Others
8.5. Middle East and Africa
8.5.1. Middle East and Africa Artificial Intelligence in Education Market, By Offering, 2019 to 2025
8.5.2. Middle East and Africa Artificial Intelligence in Education Market, By Technology, 2019 to 2025
8.5.3. Middle East and Africa Artificial Intelligence in Education Market, By End User, 2019 to 2025
8.5.4. By Country
8.5.4.1. Saudi Arabia
8.5.4.2. Israel
8.5.4.3. Others
8.6. Asia Pacific
8.6.1. Asia Pacific Artificial Intelligence in Education Market, By Offering, 2019 to 2025
8.6.2. Asia Pacific Artificial Intelligence in Education Market, By Technology, 2019 to 2025
8.6.3. Asia Pacific Artificial Intelligence in Education Market, By End User, 2019 to 2025
8.6.4. By Country
8.6.4.1. China
8.6.4.2. Japan
8.6.4.3. South Korea
8.6.4.4. India
8.6.4.5. Others

9. Competitive Environment and Analysis
9.1. Major Players and Strategy Analysis
9.2. Emerging Players and Market Lucrativeness
9.3. Mergers, Acquisitions, Agreements, and Collaborations
9.4. Vendor Competitiveness Matrix

10. Company Profiles
10.1. IBM Corporation
10.2. Amazon Web Services Inc.
10.3. Microsoft Corporation
10.4. Google LLC
10.5. AI Brain Inc.
10.6. Cognizant
10.7. Blippar Ltd
10.8. Nuance Communications, Inc.
10.9. Cerevrum
10.10. Cognii, Inc.

For more information about this report visit https://www.researchandmarkets.com/r/3yqxap

Research and Markets also offers Custom Research services providing focused, comprehensive and tailored research.

Media Contact:

Research and Markets
Laura Wood, Senior Manager
[emailprotected]

For EST Office Hours Call +1-917-300-0470
For U.S./CAN Toll Free Call +1-800-526-8630
For GMT Office Hours Call +353-1-416-8900

U.S. Fax: 646-607-1907 Fax (outside U.S.): +353-1-481-1716

SOURCE Research and Markets

http://www.researchandmarkets.com

Visit link:
Outlook on the AI in Education Global Market to 2025 - by Offering, Technology, End-user and Geography - PRNewswire

Latin America's Growing Artificial Intelligence Wave – BRINK

Robots welding at the assembly line of the French car maker Peugeot/Citroen in Brazil. Artificial intelligence offers a chance for Latin America's economies to leapfrog to greater innovation and economic progress.

Photo: Antonio Scorza/AFP/ Getty Images


E-commerce firms have faced a conundrum in Latin America: How can they deliver packages in a region where 25% of urban populations live in informal, squatter neighborhoods with no addresses?

Enter Chazki, a logistics startup from Peru, which partnered with Arequipa's Universidad San Pablo to build an artificial intelligence robot to generate new postal maps across the country. The company has now expanded to Argentina, Mexico and Chile, introducing remote communities and city outskirts to online deliveries.

That's just one example of how machine learning is bringing unique Latin American solutions to unique Latin American challenges. Artificial intelligence and its related technologies could prove a major boon to the region's public and private sectors. In turn, its policymakers and business leaders must better prepare to take full advantage while warding off potential downsides.

Latin America has long been the victim of low productivity, and the COVID-19 pandemic is predictably making matters worse. Now, artificial intelligence is a chance for the region's economies to leapfrog to greater innovation and economic progress. Research suggests that AI will add a full percentage point of GDP to five of South America's biggest economies (Argentina, Brazil, Chile, Colombia and Peru) by 2035.

Artificial intelligence could play transformative roles in Latin America for just about every sector, according to the Inter-American Development Bank (IADB). That means using AI to predict trade negotiation outcomes, commodity prices and consumer trends, or developing algorithms for use in factories, personalized medicine, infrastructure prototyping, autonomous transportation and energy consumption.

AI's applications in Latin America are already becoming reality. Argentinian Banco Galicia, Colombian airline Avianca, and Brazilian online shopping platform Shop Fácil have all adopted chatbots as virtual customer service assistants. Chile's The Not Company developed an algorithm that analyzes animal-based food products and a database of 400,000 plants to generate recipes for vegan alternatives, and Peru's National University of Engineering built machines to autonomously detect dangerous gases.

Expect the trend to continue in the near future. An MIT global survey of senior business executives found that, by the end of 2019, 79% of companies in Latin America had launched AI programs. The results have been positive; fewer than 2% of respondents reported that the initiatives made lower-than-expected returns.

Another key factor is public acceptance of AI and automation. Thus far, Latin Americans are ahead of the curve in embracing the future, with another recent poll showing that 83% of Brazilian consumers said they would trust banking advice entirely generated by a computer, compared to a global average of 71%.

In a region that suffers from endemic corruption, pervasive violence, weak institutions and challenging socioeconomic conditions, governments, policymakers and organizations can use AI to tackle critical issues in the region, including food security, smart cities, natural resources and unemployment.

In Argentina, for example, artificial intelligence is being used to predict and prepare for teenage pregnancy and school dropout, as well as to outline unseen business opportunities in city neighborhoods. In Colombia and Uruguay, software has been developed to predict where crimes are likely to occur. In Brazil, the University of São Paulo is developing machine-learning technology that will rapidly assess the likelihood that patients have dengue fever, Zika or chikungunya when they arrive at a medical center.

At a time when public support for democracy in Latin America is flailing, AI could come to the rescue. Congressional bodies across the region could use AI to boost the transparency and input of the legislative process. Indeed, the Hacker Laboratory, an innovation lab within Brazil's Chamber of Deputies, is using AI platforms to facilitate interactions between lawmakers and citizens.

AI is not risk-free, of course. Elon Musk called AI "humanity's biggest existential threat," and Stephen Hawking said it could "spell the end of the human race."

Apocalyptic scenarios aside, the immediate danger of AI in Latin America is unemployment and inequality. The Inter-American Development Bank warned in a 2018 study that between 36% and 43% of jobs could be lost to automation driven by artificial intelligence. Indeed, Latin America's governments must be prepared to set up guardrails and implement best practices for the implementation of AI.

Several governments in the region have already announced AI public policy plans. Mexico was one of the first 10 countries in the world to create a national AI strategy. Meanwhile, Brazil launched a national Internet of Things plan, which includes the country's commitment to a network of AI laboratories across strategic areas, including cybersecurity and defense. Chile is coordinating with civil society groups and experts to adopt its own AI plan.

Move aside, Mercosur! Governments in Latin America might also find that machine learning strengthens regional ties. That means harnessing AI to crunch data on trade flows and rules, find areas of consensus in multilateral negotiations, or create algorithms for regional trade. After all, AI models have a 300% greater predictive capacity than traditional econometric models, according to the Inter-American Development Bank.

Beyond competing national plans for AI, Latin American leaders should be drafting a strategy specific to the region, much like the European Union is doing. A key takeaway from the recent UNESCO Regional Forum on AI in Latin America and the Caribbean was that the technology must develop with respect for universally recognized human rights and values.

In 2021, artificial intelligence could generate almost $3 trillion in business value and 6.2 billion hours of productivity worldwide. Latin America is rightfully jumping onto the bandwagon and has the potential to lead the parade in some areas.

To make full use of what could be a transformational productivity revolution for the region, government and business leaders must pump more resources into technology planning and education. The implementation of AI must reduce, not accelerate, the region's inequity.

Follow this link:
Latin America's Growing Artificial Intelligence Wave - BRINK

Inside the Crowdsourced Effort for More Inclusive Voice AI – Built In

A few years ago, a German university student wrote a novel and submitted it to Common Voice, an open-source project launched by Mozilla in 2017 to make speech-training data more diverse and inclusive.

The book donation, which added 11,000 sentences, was a bit of an exceptional contribution, said Alex Klepel, a former communications and partnership lead at Mozilla. Most of the voice data comes from more modest contributions: excerpts of podcasts, transcripts and movie scripts available in the public domain under a "no rights reserved" CC0 license.

Text-based sentences from these works are fed into a recently launched multi-language contributor platform, where they're displayed for volunteers who record themselves reading them aloud. The resulting audio files are then spooled back into the system for users to listen to and validate.

The goal of the project, as its website states, is to help teach machines "how real people speak."

Though speech is becoming an increasingly popular way to interact with electronics, from digital assistants like Alexa, Siri and Google Assistant to hiring screeners and self-serve kiosks at fast food restaurants, these systems are largely inaccessible to much of humanity, Klepel told me. A wide swath of the global population speaks languages or dialects these assistants haven't been trained on. And in some cases, even if they have, assistants still have a hard time understanding them.


Though developers and researchers have access to a number of public-domain machine learning algorithms, training data is limited and costly to license. The English Fisher data set, for example, is about 2,000 hours and costs about $14,000 for non-members, according to Klepel.

Most of the voice data used to train machine learning algorithms is tied up in the proprietary systems of a handful of major companies, whose systems, many experts believe, reflect their largely homogeneous user bases. And limited data means limited cognition. A recent Stanford University study, as Built In reported, found that the speech-to-text services used by Amazon, IBM, Google, Microsoft and Apple for batch transcriptions misidentified the words of Black speakers at nearly double the rate of white speakers.

"Machines don't understand everyone," explained Klepel by email. "They understand a fraction of people. Hence, only a fraction of people benefit from this massive technological shift."


Common Voice is an attempt to level the playing field. Today, it represents the largest public domain transcribed voice dataset, with more than 7,200 hours of voice data and 54 languages represented, including English, French, German, Spanish, Mandarin, Welsh, Kabyle and Kinyarwanda, according to Klepel.

Megan Branson, a product and UX designer at Mozilla who has overseen much of the project's UX development, said its latest and most exciting incarnation is the release of the multi-language website.

"We look at this as a fun task," she said. "It's daunting, but we can really do something. To better the internet, definitely, but also to give people better tools."

The project is guided by open-source principles, but it is hardly a free-for-all. Branson describes the website as "open-by-design," meaning it is freely available to the public but intentionally curated to ensure the fidelity and accuracy of voice collections. The goal is to create products that meet Mozilla's business goals as well as those of the broader tech community.

In truth, Common Voice has multiple ambitions. It grew out of the need for thousands of hours of high-quality voice data to support Deep Speech, Mozilla's automated speech recognition engine, which, according to Klepel, approaches human accuracy and is intended to enable a new wave of products and services.


Deep Speech is designed not only to help Mozilla develop new voice-powered products, but also to support the global development of automated speech technologies, including in African countries like Rwanda, where it is believed they can begin to proliferate and advance sustainability goals. "The idea behind Deep Speech is to develop a speech-to-text engine that can run on anything, from smartphones to an offline Raspberry Pi 4 to a server-class machine, obviating the need to pay patent royalties or exorbitant fees for existing speech-to-text services," he wrote.

Over time, the thinking goes, publicly validated data representing many of the world's languages and cultures might begin to redress algorithmic bias in datasets historically skewed toward white, English-speaking males.

But would it work? Could a voluntary public portal like Common Voice diversify training data? Back when the project started, no one knew (and the full impact of Common Voice on training data has yet to be determined), but by the spring of 2017, it was time to test the theory.

Guiding the process was the question, "How might we collect voice data for machine learning, knowing that voice data is extremely expensive, very proprietary, and hard to come by?" Branson said.

As an early step, the team conducted a paper prototyping experiment in Taipei. Researchers created low-fidelity mock-ups of a sentence-reading tool and a voice-driven dating app and distributed them to people on the street to hear their reactions, as Branson described on Medium. It was guerrilla research, and it led to some counterintuitive findings. People expressed a willingness to voluntarily contribute to the effort, not because of the cool factor of a new app or web design, but out of an altruistic interest in making speech technology more diverse and inclusive.

Establishing licensing protocols was another early milestone. All submissions, Branson said, must fall under a public domain (CC0) license and meet basic requirements for punctuation, abbreviations and length (14 words or fewer).
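Those submission rules are concrete enough to sketch in code. The following is a hypothetical illustration of the eligibility check described above; the function name, the `license_tag` parameter, and the exact checks are assumptions for illustration, not Mozilla's actual implementation:

```python
# Hypothetical sketch of the stated submission requirements: a sentence
# must carry a public-domain (CC0) license and contain at most 14 words.
# Illustration only; not Mozilla's actual code.

def is_eligible(sentence: str, license_tag: str) -> bool:
    """Return True if a submitted sentence meets the stated requirements."""
    if license_tag != "CC0":        # only public-domain text is accepted
        return False
    if len(sentence.split()) > 14:  # length cap: 14 words or fewer
        return False
    return True
```

Under these assumptions, a short CC0-licensed sentence passes, while anything longer than 14 words or under a restrictive license is rejected before it reaches volunteer readers.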

The team also developed a set of tools to gather text samples. An online sentence collector allows users to log in and add existing sentences found in works in the public domain. A more recently released sentence extractor gives contributors the option of pulling up to three sentences from Wikipedia articles and submitting them to Mozilla as GitHub pull requests.

Strategic partnerships with universities, NGOs and corporate and government entities have helped raise awareness of the effort, according to Klepel. In late 2019, for instance, Mozilla began collaborating with the German Ministry for Economic Cooperation and Development. Under an agreement called Artificial Intelligence for All: FAIR FORWARD, the two partners are attempting to open voice technology for languages in Africa and Asia.

In one pilot project, Digital Umuganda, a young Rwandan artificial intelligence startup focused on voice technologies, is working with Mozilla to build an open speech corpus in Kinyarwanda, a language spoken by 12 million people, "to a capacity that will allow it to train a speech-to-text engine for a use case in support of the UN's Sustainable Development Goals," Klepel wrote.


The work in Africa only scratches the surface of Deep Speech's expanding presence. According to Klepel, Mozilla is working with the Danish government, IBM, Bangor University in Wales, Mycroft AI and the German Aerospace Center in collaborative efforts ranging from growing Common Voice data sets to partnering on speaking engagements to building voice assistants and moon robotics hardware.

But it is easy to imagine how such high-altitude programs might come with the risk of appearing self-interested. Outside of forging corporate and public partnerships at the institutional level, how do you collect diverse training data? And how do you incentivize conscientious everyday citizens to participate?


That's where Branson and her team believed the web contributor experience could differentiate Mozilla's data collection efforts. The team ran extensive prototype testing, gathering feedback from surveys, stakeholder sessions and tools such as Discourse and GitHub. And in the summer of 2017, a live coded version for English speakers was released into the wild. With a working model, research scientists and machine learning developers could come to the website and download data they could use to build voice tools, a major victory for the project.

But development still had a long way to go. A UX assessment review and a long list of feature requests and bug fixes showed there were major holes in the live alpha. Most of these were performance and usability fixes that could be addressed in future iterations, but some of the issues required more radical rethinking.

As Branson explained on Medium, it "reaffirmed something we already knew: our data could be far more diverse. Meaning more gender, accent, dialect and overall language diversity."

To address these concerns, Branson and her team began asking more vexing questions.

Early answers emerged in a January 2018 workshop. Mozillas design team invited corporate and academic partners to a journey mapping and feature prioritization exercise, which brought to light several daring ideas. Everything was on the table, including wearable technologies and pop-up recording events. Ultimately, though, flashy concepts took a backseat to a solution less provocative but more pragmatic: the wireframes that would lay the groundwork for the web contributor experience that exists today.

From a user's standpoint, the redesigned website could hardly be more straightforward. On the left-hand side of the homepage is a microphone icon beside the word "Speak" and the phrase "Donate your voice." On the right-hand side is a green arrow icon beside the word "Listen" and the phrase "Help us validate our voices." Hover over either icon and you find more information, including the number of clips recorded that day and a goal for each language: 1,200 per day for speaking and 2,400 per day for validating. Without logging in, you can begin submitting audio clips, repeating back sentences like these:

The first inhabitants of the Saint George area were Australian aborigines.

The pledge items must be readily marketable.

You can also validate the audio clips of others, which, on a quick test, represent a diversity of accents and include men and women.

The option to set up a profile is designed to build loyalty and add a gamification aspect to the experience. Users with profiles can track their progress in multiple languages against those of other contributors. They can submit optional demographic data, such as age, sex, language, and accent, which is anonymized on the site but can be used by design and development teams to analyze speech contributions.

Current data reported on the site shows that 23 percent of English-language contributors identify their accent as United States English. Other common English accents include England (8 percent), India and South Asia (5 percent) and Southern African (1 percent).

Forty-seven percent of contributors identify as male and 14 percent identify as female, and the highest percentage of contributions by age comes from those ages 19-29. These stats, while hardly phenomenal as a measure of diversity, are evidence of the project's genuine interest in transparency.


A recently released single-word target segment, being developed for business use cases such as voice assistants, includes the digits zero through nine, as well as the words "yes," "no," "hey" and "Firefox" in 18 languages. An additional 70 languages are in progress; once 5,000 sentences have been reviewed and validated in a language, it can be localized so the canonical site can accept voice recordings and listener validations.

Arguably, though, the most significant leap forward in the redesign was the creation of a multi-language experience. A language tab on the homepage header takes visitors to a page listing launched languages as well as those in progress. Progress bars report key metrics, such as the number of speakers and validated hours in a launched language, and the number of sentences needed for in-progress languages to become localized. The breadth of languages represented on the page is striking.

"We're seeing people collect in languages that are considered endangered, like Welsh and Parisian. It's really, really neat," Branson said.


So far, the team hasn't done much external marketing, in part because the infrastructure wasn't stable enough to meet the demands of a growing user base. With a recent transition to a more robust Kubernetes infrastructure, however, the team is ready to cast a wider net.

"How do we actually get this in front of people who aren't always in the classic open-source communities, right? You know, white males," Branson asked. "How do we diversify that?"

Addressing that concern is likely the next hurdle for the design team.

"If Common Voice is going to focus on moving the needle in 2020, it's going to be in sex diversity, helping balance those ratios. And it's not a binary topic. We've got to work with the community, right?" Branson said.


Evaluating the protocols for validation methods is another important consideration. Currently, a user who believes a donated speech clip is accurate can give it a "Yes" vote. Two "Yes" votes earn the clip a spot in the Common Voice dataset. A "No" vote returns the clip to the queue, and two "No" votes relegate the snippet to the "Clip Graveyard" as unusable. But the criteria for defining accuracy are still a bit murky. What if somebody misspeaks, or their inflection is unintelligible to the listener? What if there's background noise and part of a clip can't be heard?
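The voting rules described here amount to a small state machine. A hypothetical sketch follows; the function and label names are assumptions for illustration, and only the two-vote thresholds come from the article:

```python
# Hypothetical sketch of the validation flow described above: two "Yes"
# votes put a clip in the dataset, two "No" votes send it to the "Clip
# Graveyard," and anything else leaves it in the review queue.

def classify_clip(votes: list[bool]) -> str:
    """votes: True for a "Yes" vote, False for a "No" vote."""
    yes = sum(votes)           # count of "Yes" votes
    no = len(votes) - yes      # count of "No" votes
    if yes >= 2:
        return "dataset"
    if no >= 2:
        return "graveyard"
    return "queue"
```

Note what the sketch makes visible: a single "No" vote does not doom a clip; it simply stays in the queue until a second listener breaks the tie one way or the other.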

"The validation criteria offer guidance for [these cases], but understanding what we mean by accuracy for validating a clip is something that we're working to surface in this next quarter," Branson said.

Zion Ariana Mengesha, a PhD candidate at Stanford University and an author of the aforementioned study of racial disparity in voice recognition software, sees promise in the ambitions of Common Voice, but stresses that understanding regional and demographic speech differences is crucial. Not only must submitted sentences reflect diversity, but the people listening to and validating them must also be diverse to ensure they are properly understood.

"It's great that people are compiling more resources and making them openly available, so long as they do so with the care and intention to make sure that there is, within the open-source data set, equal representation across age, gender, region, etc. That could be a great step," Mengesha said.

Another suggestion from Mengesha is to incorporate corpuses that contain language samples from underrepresented and marginalized groups, such as Born in Slavery: Slave Narratives from the Federal Writers' Project, 1936-1938, a Library of Congress collection of 2,300 first-person accounts of slavery, and the University of Oregon's Corpus of Regional African American Language (CORAAL), the audio recordings and transcripts used in the Stanford study.

"How do you achieve racial diversity within voice applications?" Branson asked rhetorically. "That's a question we have. We don't have answers for this yet."

Originally posted here:
Inside the Crowdsourced Effort for More Inclusive Voice AI - Built In

Julian Assange extradition delayed by further tech, coronavirus issues – Sydney Morning Herald


"No publisher of information has ever been successfully prosecuted for publishing national security information ever," Lewis said.

He told the court that if convicted, the 49-year-old would likely spend the rest of his life in jail.

"Under the best-case scenario we are looking at a sentence somewhere between 20 years, if everything goes brilliantly, and 175 years," he said.

Lewis spent around 90 minutes giving evidence before an audio clip interrupted his testimony. The judge walked out as the tech troubles interfered with the hearing, which was then adjourned until lunch.

But after lunch, court officials could not re-establish a link to Lewis, and the hearing was called off for the rest of the day; it will resume on Tuesday.

It is not the first time that the hearing has experienced difficulties connecting to, or hearing clearly, witnesses who are choosing to give evidence remotely as a result of the pandemic.

The hearing only resumed at the Old Bailey in central London on Monday following a two-day break after a coronavirus scare, when the wife of one of the lawyers representing the US government developed symptoms which she feared might be COVID-19.

The test proved negative and her diagnosis was no more than a common cold.

But when court resumed on Monday, Assange's legal team asked District Judge Vanessa Baraitser to order that everyone must wear masks in the courtroom.

She refused.

Filmmaker John Pilger stands with supporters of the WikiLeaks founder Julian Assange as they gather outside the Old Bailey. Credit: Getty

"Those that wish to wear masks in the well of the court are welcome to do so unless they are directly addressing the court and I understand masks are available for this purpose," she said.

"But there is no obligation to do so and I make no direction ... in this regard."

She instead said that anyone, including Assange who is sitting behind a glass wall in the dock, could wear a mask if they wished.

Assange wore a mask to his extradition hearing for the first time, but his QC, Mark Summers, claimed there had been "difficulties in getting him masks."

Assange's extradition hearing was already delayed by several months due to the pandemic. Assange has subsequently claimed that his incarceration is a risk to his health, exacerbated by coronavirus.

He is being held at Belmarsh prison on London's outskirts. His extradition hearing is expected to run until October.


Latika Bourke is a journalist for The Sydney Morning Herald and The Age, based in London.

Original post:
Julian Assange extradition delayed by further tech, coronavirus issues - Sydney Morning Herald