Deep Science: Using machine learning to study anatomy, weather and earthquakes – TechCrunch

Research papers come out far too rapidly for anyone to read them all, especially in the field of machine learning, which now affects (and produces papers in) practically every industry and company. This column aims to collect the most relevant recent discoveries and papers particularly in but not limited to artificial intelligence and explain why they matter.

This week has a bit more basic research than consumer applications. Machine learning can be applied to advantage in many ways users benefit from, but it's also transformative in areas like seismology and biology, where enormous backlogs of data can be leveraged to train AI models or mined as raw material for insights.

We're surrounded by natural phenomena that we don't really understand. Obviously we know where earthquakes and storms come from, but how exactly do they propagate? What secondary effects are there if you cross-reference different measurements? How far ahead can these things be predicted?

A number of recently published research projects have used machine learning to attempt to better understand or predict these phenomena. With decades of data available to draw from, there are insights to be gained across the board this way, if the seismologists, meteorologists and geologists interested in these questions can obtain the funding and expertise to pursue them.

The most recent discovery, made by researchers at Los Alamos National Labs, uses a new source of data as well as ML to document previously unobserved behavior along faults during slow quakes. Using synthetic aperture radar captured from orbit, which can see through cloud cover and at night to give accurate, regular imaging of the shape of the ground, the team was able to directly observe rupture propagation for the first time, along the North Anatolian Fault in Turkey.

"The deep-learning approach we developed makes it possible to automatically detect the small and transient deformation that occurs on faults with unprecedented resolution, paving the way for a systematic study of the interplay between slow and regular earthquakes, at a global scale," said Los Alamos geophysicist Bertrand Rouet-Leduc.

Another effort, which has been ongoing for a few years now at Stanford, helps Earth science researcher Mostafa Mousavi deal with the signal-to-noise problem in seismic data. Poring over data being analyzed by old software for the billionth time one day, he felt there had to be a better way, and has spent years working on various methods. The most recent is a way of teasing out evidence of tiny earthquakes that went unnoticed but still left a record in the data.

The Earthquake Transformer (named after a machine-learning technique, not the robots) was trained on years of hand-labeled seismographic data. When tested on readings collected during Japan's magnitude 6.6 Tottori earthquake, it isolated 21,092 separate events, more than twice what people had found in their original inspection, and using data from less than half of the stations that recorded the quake.

Image Credits: Stanford University

The tool won't predict earthquakes on its own, but better understanding the true and full nature of the phenomena means we might be able to do so by other means. "By improving our ability to detect and locate these very small earthquakes, we can get a clearer view of how earthquakes interact or spread out along the fault, how they get started, even how they stop," said co-author Gregory Beroza.

See original here:
Deep Science: Using machine learning to study anatomy, weather and earthquakes - TechCrunch

AI and Machine Learning Can Help Fintechs if We Focus on Practical Implementation and Move Away from Overhyped Narratives, Researcher Says – Crowdfund…

Artificial intelligence (AI) and machine learning (ML) algorithms are increasingly being used by Fintech platform developers to make more intelligent or informed decisions regarding key processes. This may include using AI to identify potentially fraudulent transactions, determining the creditworthiness of a borrower applying for a loan, and many other use cases.

Research conducted by Accenture found that 87% of business owners in the United Kingdom claim that they're struggling to find the best ways to adopt AI or ML technologies. Three out of four (75%) of the C-suite executives responding to Accenture's survey said they really need to effectively adopt AI solutions within five years so that they don't lose business to competitors.

As reported by IT Pro Portal, there's currently a gap between what may be considered just hype and actual or practical implementation of AI technologies and platforms.

Less than 5% of firms have actually managed to effectively apply AI; meanwhile, more than 80% are currently just exploring basic proofs of concept for applying AI or ML algorithms. Many firms are also not familiar with these technologies or don't have the expertise to figure out how best to apply them to specific business use cases.

Yann Stadnicki, an experienced technologist and research engineer, argues that these technologies can play a key role in streamlining business operations. For example, they can help Fintech firms with lowering their operational costs while boosting their overall efficiency. They can also make it easier for a company's CFO to do their job and become a key player when it comes to supporting the growth of their firm.

Stadnicki points out that a research study suggests that company executives weren't struggling to adopt AI solutions due to budgetary constraints or limitations. He adds that the study shows there may be certain operational challenges when it comes to effectively integrating AI and ML technologies.

He also mentions:

"The inability to set up a supportive organizational structure, the absence of foundational data capabilities, and the lack of employee adoption are barriers to harnessing AI and machine learning within an organization."

He adds:

"For businesses to harness the benefits of AI and machine learning, there needs to be a move away from an overhyped theoretical narrative towards practical implementation. It is important to formulate a plan and integration strategy of how your business will use AI and ML, to both mitigate the risks of cybercrime and fraud, while embracing the opportunity of tangible business impact."

Fintech firms and organizations across the globe are now leveraging AI and ML technologies to improve their products and services. In a recent interview with Crowdfund Insider, Michael Rennie, a U.K.-based product manager for Mendix, a Siemens business and the global leader in enterprise low-code, explained how emerging tech can be used to enhance business processes.

He noted:

"Prior to low-code, the application and use of cutting-edge technologies within the banking sector have been more academic than actual. But low-code now enables you to apply emerging technologies like AI in a practical way so that they actually make an impact. For example, you could pair a customer-focused banking application built with low-code with a machine learning (ML) engine to identify user behaviors. Then you could make more informed decisions about where to invest in customer experience and most benefit your business."

He added:

"It's easy to see the value in this. The problem is that without the correct technology, it's too difficult to integrate traditional customer-facing applications with new technology systems. Such integrations typically require millions of dollars in investment and years of work. By the time an organization finishes that intensive work, the market may have moved on. Low-code eliminates that problem, makes integration easy and your business more agile."

Go here to see the original:
AI and Machine Learning Can Help Fintechs if We Focus on Practical Implementation and Move Away from Overhyped Narratives, Researcher Says - Crowdfund...

ATL Special Report Podcast: Tactical Use Cases And Machine Learning With Lexis+ – Above the Law

Welcome back listeners to this exclusive Above the Law Lexis+ Special Report Podcast: Introducing a New Era in Legal Research, brought to you by LexisNexis. This is the second episode in our special series.

Join us once again as LexisNexis Chief Product Officer for North America Jeff Pfeifer (@JeffPfeifer) and Evolve the Law Contributing Editor Ian Connett (@QuantumJurist) dive deeper into Lexis+, sharing tactical use cases, new tools like brief analysis and Ravel view utilizing data visualization, and how Jeff's engineering team at Lexis Labs took Google machine learning technology to law school to provide Lexis+ users with the ultimate legal research experience.

This is the second episode of our special four-part series. You can listen to our first episode with Jeff Pfeifer here for more on Lexis+. We hope you enjoy this special report featuring Jeff Pfeifer and will stay tuned for the next episodes in the series.


More here:
ATL Special Report Podcast: Tactical Use Cases And Machine Learning With Lexis+ - Above the Law

5 Emerging AI And Machine Learning Trends To Watch In 2021 – CRN: Technology news for channel partners and solution providers

Artificial intelligence and machine learning have been hot topics in 2020, as AI and ML technologies increasingly find their way into everything from advanced quantum computing systems and leading-edge medical diagnostic systems to consumer electronics and smart personal assistants.

Revenue generated by AI hardware, software and services is expected to reach $156.5 billion worldwide this year, according to market researcher IDC, up 12.3 percent from 2019.

But it can be easy to lose sight of the forest for the trees when it comes to trends in the development and use of AI and ML technologies. As we approach the end of a turbulent 2020, here's a big-picture look at five key AI and machine learning trends, not just in the types of applications they are finding their way into, but also in how they are being developed and the ways they are being used.

The Growing Role Of AI And Machine Learning In Hyperautomation

Hyperautomation, an IT mega-trend identified by market research firm Gartner, is the idea that most anything within an organization that can be automated, such as legacy business processes, should be automated. The pandemic has accelerated adoption of the concept, which is also known as digital process automation and intelligent process automation.

AI and machine learning are key components and major drivers of hyperautomation (along with other technologies like robotic process automation tools). To be successful, hyperautomation initiatives cannot rely on static packaged software. Automated business processes must be able to adapt to changing circumstances and respond to unexpected situations.

That's where AI, machine learning models and deep learning technology come in, using learning algorithms and models, along with data generated by the automated system, to allow the system to automatically improve over time and respond to changing business processes and requirements. (Deep learning is a subset of machine learning that utilizes neural network algorithms to learn from large volumes of data.)

Bringing Discipline To AI Development Through AI Engineering

Only about 53 percent of AI projects successfully make it from prototype to full production, according to Gartner research. When trying to deploy newly developed AI systems and machine learning models, businesses and organizations often struggle with system maintainability, scalability and governance, and AI initiatives often fail to generate the hoped-for returns.

Businesses and organizations are coming to understand that a robust AI engineering strategy will improve the performance, scalability, interpretability and reliability of AI models and deliver the full value of AI investments, according to Gartner's list of Top Strategic Technology Trends for 2021.

Developing a disciplined AI engineering process is key. AI engineering incorporates elements of DataOps, ModelOps and DevOps and makes AI a part of the mainstream DevOps process, rather than a set of specialized and isolated projects, according to Gartner.

Increased Use Of AI For Cybersecurity Applications

Artificial intelligence and machine learning technology is increasingly finding its way into cybersecurity systems for both corporate systems and home security.

Developers of cybersecurity systems are in a never-ending race to update their technology to keep pace with constantly evolving threats from malware, ransomware, DDoS attacks and more. AI and machine learning technology can be employed to help identify threats, including variants of earlier threats.

AI-powered cybersecurity tools also can collect data from a company's own transactional systems, communications networks, digital activity and websites, as well as from external public sources, and utilize AI algorithms to recognize patterns and identify threatening activity, such as detecting suspicious IP addresses and potential data breaches.
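As a rough illustration of that pattern-recognition step, here is a minimal sketch, not drawn from any vendor's product, in which all feature names, numbers and the traffic itself are invented, that flags unusual activity with an unsupervised model:

```python
# Minimal sketch (synthetic data, illustrative features): flagging anomalous
# network activity with an unsupervised outlier detector.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Per-connection features: [requests per minute, bytes sent, failed logins]
normal = rng.normal(loc=[30, 2_000, 0.2], scale=[10, 500, 0.5], size=(500, 3))
suspicious = rng.normal(loc=[300, 50_000, 8], scale=[50, 5_000, 2], size=(5, 3))
traffic = np.vstack([normal, suspicious])

# The model learns what "typical" activity looks like and scores outliers
detector = IsolationForest(contamination=0.01, random_state=0).fit(traffic)
labels = detector.predict(traffic)  # -1 = anomaly, 1 = normal

print(f"flagged {int((labels == -1).sum())} of {len(traffic)} connections as suspicious")
```

In a real deployment the features would come from logs and network telemetry rather than synthetic draws, but the shape of the approach is the same.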

AI use in home security systems today is largely limited to systems integrated with consumer video cameras and intruder alarm systems integrated with a voice assistant, according to research firm IHS Markit. But IHS says AI use will expand to create smart homes where the system learns the ways, habits and preferences of its occupants, improving its ability to identify intruders.

The Intersection Of AI/ML and IoT

The Internet of Things has been a fast-growing area in recent years, with market researcher Transforma Insights forecasting that the global IoT market will grow to 24.1 billion devices in 2030, generating $1.5 trillion in revenue.

The use of AI/ML is increasingly intertwined with IoT. AI, machine learning and deep learning, for example, are already being employed to make IoT devices and services smarter and more secure. But the benefits flow both ways, given that AI and ML require large volumes of data to operate successfully, which is exactly what networks of IoT sensors and devices provide.

In an industrial setting, for example, IoT networks throughout a manufacturing plant can collect operational and performance data, which is then analyzed by AI systems to improve production system performance, boost efficiency and predict when machines will require maintenance.
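A minimal sketch of that predictive-maintenance idea, with synthetic sensor readings and a made-up failure rule purely for illustration, might look like this:

```python
# Minimal sketch (synthetic data): predicting whether a machine needs
# maintenance soon from IoT sensor readings.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2_000

X = np.column_stack([
    rng.normal(70, 10, n),       # temperature
    rng.normal(0.3, 0.1, n),     # vibration
    rng.uniform(0, 5_000, n),    # hours since last service
])
# Toy rule used only to generate labels: hot, vibrating, long-unserviced machines fail
y = ((X[:, 0] > 85) | (X[:, 1] > 0.45) | (X[:, 2] > 4_000)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
model = RandomForestClassifier(n_estimators=100, random_state=1).fit(X_train, y_train)
print("held-out accuracy:", round(model.score(X_test, y_test), 3))
```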

What some are calling the Artificial Intelligence of Things (AIoT) could redefine industrial automation.

Persistent Ethical Questions Around AI Technology

Earlier this year as protests against racial injustice were at their peak, several leading IT vendors, including Microsoft, IBM and Amazon, announced that they would limit the use of their AI-based facial recognition technology by police departments until there are federal laws regulating the technology's use, according to a Washington Post story.

That has put the spotlight on a range of ethical questions around the increasing use of artificial intelligence technology. That includes the obvious misuse of AI for deepfake misinformation efforts and for cyberattacks. But it also includes grayer areas such as the use of AI by governments and law enforcement organizations for surveillance and related activities and the use of AI by businesses for marketing and customer relationship applications.

That's all before delving into the even deeper questions about the potential use of AI in systems that could replace human workers altogether.

A December 2019 Forbes article said the first step here is asking the necessary questions, and we've begun to do that. In some applications, federal regulation and legislation may be needed, as with the use of AI technology for law enforcement.

In business, Gartner recommends the creation of external AI ethics boards to prevent AI dangers that could jeopardize a company's brand, draw regulatory actions, lead to boycotts or destroy business value. Such a board, including representatives of a company's customers, can provide guidance about the potential impact of AI development projects and improve transparency and accountability around AI projects.

Read more here:
5 Emerging AI And Machine Learning Trends To Watch In 2021 - CRN: Technology news for channel partners and solution providers

Is Quantum Machine Learning the next thing? | by Alessandro Crimi | ILLUMINATION-Curated | Oct, 2020 – Medium

In classical computers, bits are stored as either a 0 or a 1 in binary notation. Quantum computers use quantum bits, or qubits, which can be both 0 and 1 at the same time; this is called superposition. Last year Google and NASA claimed to have achieved quantum supremacy, though this raised some controversy. Quantum supremacy means that a quantum computer can perform a single calculation that no conventional computer, even the biggest supercomputer, can perform in a reasonable amount of time. Indeed, according to Google, Sycamore is a computer with a 54-qubit processor that can perform extremely fast computations.
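To make superposition a little more concrete, here is a minimal sketch that simulates a single qubit as a plain NumPy state vector, with no quantum hardware or SDK assumed: a Hadamard gate turns the |0> state into an equal superposition, and the squared amplitudes give the measurement probabilities.

```python
# Minimal sketch: one simulated qubit, put into superposition by a Hadamard gate.
import numpy as np

ket0 = np.array([1.0, 0.0])                    # the |0> state
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

superposed = H @ ket0                          # equal mix of |0> and |1>
probabilities = np.abs(superposed) ** 2        # Born rule

print(superposed)      # [0.707... 0.707...]
print(probabilities)   # [0.5 0.5] -> measuring yields 0 or 1 with equal probability
```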

Machines like Sycamore can speed up simulation of quantum mechanical systems, drug design, the creation of new materials through molecular and atomic maps, the Deutsch Oracle problem and machine learning.

When data points are projected into high dimensions during machine learning tasks, it is hard for classical computers to deal with such large computations (no matter the TensorFlow optimizations and so on). Even if a classical computer can handle it, an extensive amount of computational time is necessary.

In other words, the computers we use today can be slow at certain machine learning applications compared to quantum systems.

Indeed, superposition and entanglement can come in handy to properly train support vector machines or neural networks to behave similarly to a quantum system.

How we do this in practice can be summarized as follows.

In practice, quantum computers can be used and trained like neural networks, or rather, such neural networks incorporate some aspects of quantum physics. More specifically, in photonic hardware, a trained quantum circuit can be used to classify the content of images, by encoding the image into the physical state of the device and taking measurements. If it sounds weird, it is because this topic is weird and difficult to digest. Moreover, the story is bigger than just using quantum computers to solve machine learning problems. Quantum circuits are differentiable, and a quantum computer itself can compute the change (rewrite) in control parameters needed to become better at a given task, pushing the concept of learning further.
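As an illustration of that differentiability claim, here is a minimal sketch, simulated classically with NumPy rather than on a quantum device: a one-qubit "circuit" consisting of a single Y-rotation, whose measured expectation value can be differentiated with the parameter-shift rule using just two extra circuit evaluations.

```python
# Minimal sketch (classical simulation): a parameterised one-qubit circuit and
# the parameter-shift rule that makes it trainable like a neural network layer.
import numpy as np

def expectation_z(theta):
    """<Z> after applying the rotation RY(theta) to |0>; analytically cos(theta)."""
    ry = np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                   [np.sin(theta / 2),  np.cos(theta / 2)]])
    state = ry @ np.array([1.0, 0.0])
    z = np.array([[1.0, 0.0], [0.0, -1.0]])
    return float(state @ z @ state)

theta = 0.7
# Parameter-shift rule: an exact gradient from two shifted circuit evaluations
grad = (expectation_z(theta + np.pi / 2) - expectation_z(theta - np.pi / 2)) / 2

print(round(grad, 4), round(-np.sin(theta), 4))  # both print -0.6442
```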

See original here:
Is Quantum Machine Learning the next thing? | by Alessandro Crimi | ILLUMINATION-Curated | Oct, 2020 - Medium

Top 8 Books on Machine Learning In Cybersecurity One Must Read – Analytics India Magazine

With the proliferation of information technologies and data among us, cybersecurity has become a necessity. Machine learning helps organisations by getting insights from raw data, predicting future outcomes and more.

For a few years now, machine learning techniques have been applied in cybersecurity. They help in several ways, including identifying fraud, malicious code and other threats.

In this article, we list down the top eight books, in no particular order, on machine learning in cybersecurity that one must read.

About: Written by Sumeet Dua and Xian Du, this book introduces the basic notions in machine learning and data mining. It provides a unified reference for specific machine learning solutions to cybersecurity problems as well as provides a foundation in cybersecurity fundamentals, including surveys of contemporary challenges.

The book details some of the cutting-edge machine learning and data mining techniques that can be used in cybersecurity, such as in-depth discussions of machine learning solutions to detection problems, contemporary cybersecurity problems, categorising methods for detecting, scanning, and profiling intrusions and anomalies, among others.

Get the book here.

About: In Malware Data Science, security data scientist Joshua Saxe introduces machine learning, statistics, social network analysis, and data visualisation, and shows you how to apply these methods to malware detection and analysis.

You'll learn how to analyse malware using static analysis, identify adversary groups through shared code analysis, detect vulnerabilities by building machine learning detectors, identify malware campaigns, trends, and relationships through data visualisation, etc.

Get the book here.

About: This book begins with an introduction of machine learning and algorithms that are used to build AI systems. After gaining a fair understanding of how security products leverage machine learning, you will learn the core concepts of breaching the AI and ML systems.

With the help of hands-on cases, you will understand how to find loopholes as well as surpass a self-learning security system. After completing this book, readers will be able to identify the loopholes in a self-learning security system and will also be able to breach a machine learning system efficiently.

Get the book here.

About: In this book, you'll learn how to use popular Python libraries such as TensorFlow, Scikit-learn, etc. to implement the latest AI techniques and manage difficulties faced by cybersecurity researchers.

The book will lead you through classifiers as well as features for malware, which will help you to train and test on real samples. You will also build self-learning, reliant systems to handle cybersecurity tasks such as identifying malicious URLs, spam email detection, intrusion detection, tracking user and process behaviour, among others.

Get the book here.

About: This book is for the data scientists, machine learning developers, security researchers, and anyone keen to apply machine learning to up-skill computer security. In this book, you will learn how to use machine learning algorithms with complex datasets to implement cybersecurity concepts, implement machine learning algorithms such as clustering, k-means, and Naive Bayes to solve real-world problems, etc.

You will also learn how to speed up a system using Python libraries with NumPy, Scikit-learn, and CUDA, combat malware, detect spam and fight financial fraud to mitigate cybercrimes, among others.

Get the book here.

About: This book teaches you how to use machine learning for penetration testing. You will learn, in a hands-on and practical manner, how to use machine learning to perform penetration testing attacks, and how to perform penetration testing attacks on machine learning systems. You will also learn techniques that few hackers or security experts know about.

Get the book here.

About: In this book, you will learn machine learning in cybersecurity self-assessment, how to identify and describe the business environment in cybersecurity projects using machine learning, etc.

The book covers all machine learning in cybersecurity essentials, such as extensive criteria grounded in the past and current successful projects and activities by experienced machine learning in cybersecurity practitioners, among others.

Get the book here.

About: This book presents a collection of state-of-the-art AI approaches to cybersecurity and cyber threat intelligence. It offers strategic defence mechanisms for malware, addressing cybercrime, and assessing vulnerabilities to yield proactive rather than reactive countermeasures.

Get the book here.

Read the original here:
Top 8 Books on Machine Learning In Cybersecurity One Must Read - Analytics India Magazine

Six notable benefits of AI in finance, and what they mean for humans – Daily Maverick

Addressing AI anxiety

A common narrative around emerging technologies like AI, machine learning, and robotic process automation is the anxiety and fear that they'll replace humans. In South Africa, with an unemployment rate of over 30%, these concerns are valid.

But if we dig deep into what we can do with AI, we learn it will elevate the work that humans do, making it more valuable than ever.

Sage research found that most senior financial decision-makers (90%) are comfortable with automation performing more of their day-to-day accounting tasks in the future, and 40% believe that AI and machine learning (ML) will improve forecasting and financial planning.

What's more, two-thirds of respondents expect emerging technology to audit results continuously and to automate period-end reporting and corporate audits, reducing time to close in the process.

The key to realising these benefits is to secure buy-in from the entire organisation. With 87% of CFOs now playing a hands-on role in digital transformation, their perspective on technology is key to creating a digitally receptive team culture. And their leadership is vital in ensuring their organisations maximise their technology investments. Until employees make the same mindset shift as CFOs have, they'll need to be guided and reassured about the business's automation strategy and the potential for upskilling.

Six benefits of AI in layman's terms

Speaking during an exclusive virtual event to announce the results of the CFO 3.0 research, as well as the launch of Sage Intacct in South Africa, Aaron Harris, CTO for Sage, said one reason for the misperception about AI's impact on business and labour is that SaaS companies too often speak in technical jargon.

"We talk about AI and machine learning as if they're these magical capabilities, but we don't actually explain what they do and what problems they solve. We don't put it into terms that matter for business leaders and labour. We don't do a good job as an industry of explaining that machine learning isn't an outcome we should be looking to achieve; it's the technology that enables business outcomes, like efficiency gains and smarter predictive analytics."

For Harris, AI has remarkable benefits in six key areas:

Digital culture champions

Evolving from a traditional management style that relied on intuition, to a more contemporary one based on data-driven evidence, can be a culturally disruptive process. Interestingly, driving a cultural change wasn't a concern for most South African CFOs, with 73% saying their organisations are ready for more automation.

In fact, AI holds no fear for senior financial decision-makers: over two-thirds are not at all concerned about it, and only one in 10 believe that it will take away jobs.

So, how can businesses reimagine the work of humans when software bots are taking care of all the repetitive work?

How can we leverage the unique skills of humans, like collaboration, contextual understanding, and empathy?

"The future world is a world of connections," says Harris. "It will be about connecting humans in ways that allow them to work at a higher level. It will be about connecting businesses across their ecosystems so that they can implement digital business models to effectively and competitively operate in their markets. And it will be about creating connections across technology so that traditional, monolithic experiences are replaced with modern ones that reflect new ways of working and that are tailored to how individuals and humans will be most effective in this world."

New world of work

We can envision this world across three areas:

Sharing knowledge and timelines on strategic developments and explaining the significance of these changes will help CFOs to alleviate the fear of the unknown.

Technology may be the enabler driving this change, but how it transforms a business lies with those who are bold enough to take the lead. DM

Continue reading here:
Six notable benefits of AI in finance, and what they mean for humans - Daily Maverick

Machine learning to transform delivery of major rail projects in UK – Global Railway Review

By utilising machine learning, Network Rail can increase prediction accuracy, reduce delays, unlock early risk detection and enable significant cost savings.

Credit: Network Rail

Network Rail has announced that it is working with technology startup nPlan to use machine learning technology across its portfolio of projects, which has the potential to transform the way major rail projects are delivered across Britain.

Through using data from past projects to produce accurate cost and time forecasts, the partnership will deliver efficiencies in the way projects are planned and carried out, and improve service reliability for passengers by reducing the risk of overruns.

In a world-first for such work on this scale, Network Rail tested nPlan's risk analysis and assurance solution on two of its largest rail projects, on the Great Western Main Line and the Salisbury to Exeter Signalling project, representing over £3 billion of capital expenditure.

This exercise showed that, by leveraging past data, cost savings of up to £30 million could have been achieved on the Great Western Main Line project alone. This was primarily achieved by flagging risks that were unknown to the project team (those that are invisible to the human eye due to the size and complexity of the project data) and allowing them to mitigate those risks before they occur, at a significantly lower cost than if they are missed or ignored.

The machine learning technology works by learning from patterns in historical project performance. Put simply, the algorithm learns by comparing what was planned against what actually happened on a project at an individual activity level. This facilitates transparency and a shared, improved view of risk between project partners.
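A minimal sketch of that planned-versus-actual idea, with invented features and a toy overrun pattern rather than nPlan's actual data or model, could look like this:

```python
# Minimal sketch (synthetic data, not nPlan's system): learn how actual activity
# durations deviate from planned ones, then forecast a new activity.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(42)
n = 1_000

planned_days = rng.uniform(5, 200, n)      # planned duration per activity
complexity = rng.integers(1, 6, n)         # 1 = simple, 5 = very complex
# Toy ground truth: complex activities overrun more, plus noise
actual_days = planned_days * (1 + 0.05 * complexity) + rng.normal(0, 5, n)

X = np.column_stack([planned_days, complexity])
model = GradientBoostingRegressor(random_state=0).fit(X, actual_days)

new_activity = np.array([[90.0, 4]])       # 90 planned days, complexity 4
forecast = float(model.predict(new_activity)[0])
print(f"forecast actual duration: {forecast:.1f} days")
```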

Following the success of this trial, nPlan and Network Rail will now embark on the next phase of deployment, rolling out the software on 40 projects before scaling up on all Network Rail projects by mid-2021. Using data from over 100,000 programmes, Network Rail will increase prediction accuracy, reduce delays, allow for better budgeting and unlock early risk detection, leading to greater certainty in the outcome of these projects.

Network Rail's Programme Director for Affordability, Alastair Forbes, said: "By championing innovation and using forward-thinking technologies, we can deliver efficiencies in the way we plan and carry out rail upgrade and maintenance projects. It also has the benefit of reducing the risk of project overruns, which means, in turn, we can improve reliability for passengers."

Dev Amratia, CEO and co-founder of nPlan, said: "Network Rail is amongst the largest infrastructure operators in Europe, and adopting technology to forecast and assure projects can lead to better outcomes for all of Britain's rail industry, from contractors to passengers. I look forward to significantly delayed construction projects, and the disruption that they cause for passengers, becoming a thing of the past, with our railways becoming safer and more resilient."

See original here:
Machine learning to transform delivery of major rail projects in UK - Global Railway Review

Improving The Use Of Social Media For Disaster Management – Texas A&M University Today

The algorithm could be used to quickly identify social media posts related to a disaster.

Getty Images

There has been a significant increase in the use of social media to share updates, seek help and report emergencies during a disaster. Algorithms keeping track of social media posts that signal the occurrence of natural disasters must be swift so that relief operations can be mobilized immediately.

A team of researchers led by Ruihong Huang, assistant professor in the Department of Computer Science and Engineering at Texas A&M University, has developed a novel weakly supervised approach that can train machine learning algorithms quickly to recognize tweets related to disasters.

"Because of the sudden nature of disasters, there's not much time available to build an event recognition system," Huang said. "Our goal is to be able to detect life-threatening events using individual social media messages and recognize similar events in the affected areas."

Text on social media platforms, like Twitter, can be categorized using standard algorithms called classifiers. A classifier separates data into labeled classes or categories, similar to how spam filters in email service providers scan incoming emails and classify them as either spam or not spam based on their prior knowledge of spam messages.

Most classifiers are an integral part of machine learning algorithms that make predictions based on carefully labeled sets of data. In the past, machine learning algorithms have been used for event detection based on tweets or a burst of words within tweets. To ensure a reliable classifier for the machine learning algorithms, human annotators have to manually label large amounts of data instances one by one, which usually takes several days, sometimes even weeks or months.
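In code, the fully supervised approach described above amounts to fitting a text classifier on hand-labeled examples; the sketch below is purely illustrative, with made-up messages standing in for the annotated tweets:

```python
# Minimal sketch: a supervised text classifier trained on hand-labeled messages.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Flooding on Main Street, we need rescue boats",
    "Power lines down after the storm, please send help",
    "My phone battery is dead again",
    "Watching The Walking Dead tonight",
]
labels = [1, 1, 0, 0]  # 1 = disaster-related, 0 = not relevant

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(texts, labels)

print(classifier.predict(["Flooding near the bridge, send rescue boats"]))  # [1]
```

The catch, as the article notes, is producing those labels one instance at a time.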

The researchers also found that it is essentially impossible to find a keyword that does not have more than one meaning on social media depending on the context of the tweet. For example, if the word dead is used as a keyword, it will pull in tweets talking about a variety of topics such as a phone battery being dead or the television series The Walking Dead.

"We have to be able to know which tweets that contain the predetermined keywords are relevant to the disaster and separate them from the tweets that contain the correct keywords but are not relevant," Huang said.

To build more reliable labeled datasets, the researchers first used an automatic clustering algorithm to sort the tweets into small groups. Next, a domain expert looked at the context of the tweets in each group to identify whether it was relevant to the disaster. The labeled tweets were then used to train the classifier to recognize the relevant tweets.
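A minimal sketch of that weakly supervised pipeline, with a handful of invented tweets rather than the researchers' data or code, might look like the following: cluster the keyword-matched tweets, let an expert label whole clusters, then train the classifier on those cluster-level labels.

```python
# Minimal sketch (illustrative data): cluster tweets, label clusters, train a classifier.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

tweets = [
    "Flood waters rising downtown, families need rescue",
    "Flood rescue underway near the river, send help",
    "My phone died, battery totally dead",
    "That new zombie show is dead boring",
]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(tweets)

# Step 1: group similar tweets automatically
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Step 2: a domain expert labels each cluster (far cheaper than per-tweet labels);
# here we pretend the cluster containing tweet 0 was marked as disaster-related.
relevant_cluster = clusters[0]
labels = [int(c == relevant_cluster) for c in clusters]

# Step 3: train the tweet classifier on those weak labels
clf = LogisticRegression().fit(X, labels)
print(clf.predict(vectorizer.transform(["Flood rescue needed near the river"])))  # [1]
```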

Using data gathered from the most impacted time periods for Hurricane Harvey and Hurricane Florence, the researchers found that their data labeling method and overall weakly-supervised system took one to two person-hours instead of the 50 person-hours that were required to go through thousands of carefully annotated tweets using the supervised approach.

Despite the classifier's overall good performance, they also observed that the system still missed several tweets that were relevant but used a different vocabulary than the predetermined keywords.

"Users can be very creative when discussing a particular type of event using the predefined keywords, so the classifier would have to be able to handle those types of tweets," Huang said. "There's room to further improve the system's coverage."

In the future, the researchers will look to explore how to extract information about a user's location so first responders will know exactly where to dispatch their resources.

Other contributors to this research include Wenlin Yao, a doctoral student supervised by Huang from the computer science and engineering department; Ali Mostafavi and Cheng Zhang from the Zachry Department of Civil and Environmental Engineering; and Shiva Saravanan, former intern of the Natural Language Processing Lab at Texas A&M.

The researchers described their findings in the proceedings of the Association for the Advancement of Artificial Intelligence's 34th Conference on Artificial Intelligence.

This work is supported by funds from the National Science Foundation.

Originally posted here:
Improving The Use Of Social Media For Disaster Management - Texas A&M University Today

Machine Learning & Big Data Analytics Education Market Size is Thriving Worldwide 2020 | Growth and Profit Analysis, Forecast by 2027 – The Daily…

Fort Collins, Colorado: The Global Machine Learning & Big Data Analytics Education Market research report offers insightful information on the Global Machine Learning & Big Data Analytics Education market for the base year 2019 and is forecast between 2020 and 2027. Market value, market share, market size, and sales have been estimated based on product types, application prospects, and regional industry segmentation. Important industry segments were analyzed for the global and regional markets.

The effects of the COVID-19 pandemic have been observed across all sectors of all industries. The economic landscape has changed dynamically due to the crisis, and a change in requirements and trends has also been observed. The report studies the impact of COVID-19 on the market and analyzes key changes in trends and growth patterns. It also includes an estimate of the current and future impact of COVID-19 on overall industry growth.

Get a sample of the report @ https://reportsglobe.com/download-sample/?rid=64357

The report has a complete analysis of the Global Machine Learning & Big Data Analytics Education Market on a global as well as regional level. The forecast has been presented in terms of value and price for the 8-year period from 2020 to 2027. The report provides an in-depth study of market drivers and restraints on a global level, and provides an impact analysis of these market drivers and restraints on the relationship of supply and demand for the Global Machine Learning & Big Data Analytics Education Market throughout the forecast period.

The report provides an in-depth analysis of the major market players along with their business overview, expansion plans, and strategies. The main actors examined in the report are:

The Global Machine Learning & Big Data Analytics Education Market Report offers a deeper understanding and a comprehensive overview of the Global Machine Learning & Big Data Analytics Education division. Porter's Five Forces analysis and SWOT analysis have been addressed in the report to provide insightful data on the competitive landscape. The study also covers the market analysis and provides an in-depth analysis of the application segment based on the market size, growth rate and trends.

Request a discount on the report @ https://reportsglobe.com/ask-for-discount/?rid=64357

The research report is an investigative study that provides a conclusive overview of the Global Machine Learning & Big Data Analytics Education business division through in-depth market segmentation into key applications, types, and regions. These segments are analyzed based on current, emerging and future trends. Regional segmentation provides current and demand estimates for the Global Machine Learning & Big Data Analytics Education industry in key regions in North America, Europe, Asia Pacific, Latin America, and the Middle East and Africa.

Global Machine Learning & Big Data Analytics Education Market Segmentation:

In market segmentation by types of Global Machine Learning & Big Data Analytics Education , the report covers-

In market segmentation by applications of the Global Machine Learning & Big Data Analytics Education , the report covers the following uses-

Request customization of the report @ https://reportsglobe.com/need-customization/?rid=64357

Overview of the table of contents of the report:

To learn more about the report, visit @ https://reportsglobe.com/product/global-machine-learning-big-data-analytics-education-assessment/

Thank you for reading our report. To learn more about report details or for customization information, please contact us. Our team will ensure that the report is customized according to your requirements.

How Reports Globe is different than other Market Research Providers

The inception of Reports Globe has been backed by providing clients with a holistic view of market conditions and future possibilities/opportunities to reap maximum profits out of their businesses and assist in decision making. Our team of in-house analysts and consultants works tirelessly to understand your needs and suggest the best possible solutions to fulfill your research requirements.

Our team at Reports Globe follows a rigorous process of data validation, which allows us to publish reports from publishers with minimum or no deviations. Reports Globe collects, segregates, and publishes more than 500 reports annually that cater to products and services across numerous domains.

Contact us:

Mr. Mark Willams

Account Manager

US: +1-970-672-0390

Email:[emailprotected]

Web:reportsglobe.com

Follow this link:
Machine Learning & Big Data Analytics Education Market Size is Thriving Worldwide 2020 | Growth and Profit Analysis, Forecast by 2027 - The Daily...

news and analysis for omnichannel retailers – Retail Technology Innovation Hub

Machine learning algorithms learn patterns from past data and predict trends and the best price. These algorithms can predict the best price, discount price and promotional price based on competition, macroeconomic variables, seasonality, etc.

To find the correct price in real time, retailers follow these steps:

Gather input data

In order to build a machine learning algorithm, retailers collect various data points from the customers. These are:

Transactional data

This includes the sales history of each customer and the products they have bought in the past.

Product description

The brands, product category, style, photos and the selling price of the previously sold products are collected. Past promotions and campaigns are also analysed to find the effect of price changes on each category.

Customer details

Demographic details and customer feedback are gathered.

Competition and inventory

Retailers also try to find data on the prices of products sold by their competitors, as well as supply chain and inventory data.

Depending on the set of key performance indicators defined by the retailers, the relevant data is filtered.

For every industry, pricing would involve different goals and constraints. In terms of the dynamic nature, the retail industry can be compared to the casino industry, where machine learning is involved in online live dealer casino games too.

Like casinos, retail also has the target of profit maximisation and retention of customer loyalty. Each of these goals and constraints can be fed to a machine learning algorithm to generate dynamic prices of products.
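As a rough sketch of how such a model could be assembled, with every feature, number and demand relationship below invented purely for illustration, a retailer might fit a demand model on historical transactions and then score candidate prices for expected revenue:

```python
# Minimal sketch (synthetic data): learn a price-demand relationship, then pick
# the revenue-maximising price for a given competitive and seasonal context.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(7)
n = 5_000

price = rng.uniform(5, 50, n)                      # historical selling prices
competitor_price = price + rng.normal(0, 3, n)
season = rng.integers(0, 4, n)                     # 0=winter ... 3=autumn
# Toy demand: falls with price, rises when competitors are pricier, seasonal bump
units_sold = (200 - 3 * price + 2 * (competitor_price - price)
              + 10 * (season == 3) + rng.normal(0, 5, n))

X = np.column_stack([price, competitor_price, season])
demand_model = GradientBoostingRegressor(random_state=0).fit(X, units_sold)

# Score 100 candidate prices for a competitor price of 30 in season 3
candidates = np.linspace(5, 50, 100)
grid = np.column_stack([candidates, np.full(100, 30.0), np.full(100, 3.0)])
revenue = candidates * demand_model.predict(grid)
print("suggested price:", round(float(candidates[int(np.argmax(revenue))]), 2))
```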

Read more:
news and analysis for omnichannel retailers - Retail Technology Innovation Hub

Machine Learning Does Not Improve Upon Traditional Regression in Predicting Outcomes in Atrial Fibrillation: An Analysis of the ORBIT-AF and…

Aims

Prediction models for outcomes in atrial fibrillation (AF) are used to guide treatment. While regression models have been the analytic standard for prediction modelling, machine learning (ML) has been promoted as a potentially superior methodology. We compared the performance of ML and regression models in predicting outcomes in AF patients.

The Outcomes Registry for Better Informed Treatment of Atrial Fibrillation (ORBIT-AF) and Global Anticoagulant Registry in the FIELD (GARFIELD-AF) are population-based registries that include 74,792 AF patients. Models were generated from potential predictors using stepwise logistic regression (STEP), random forests (RF), gradient boosting (GB), and two neural networks (NNs). Discriminatory power was highest for death [STEP area under the curve (AUC) = 0.80 in ORBIT-AF, 0.75 in GARFIELD-AF] and lowest for stroke in all models (STEP AUC = 0.67 in ORBIT-AF, 0.66 in GARFIELD-AF). The discriminatory power of the ML models was similar to or lower than that of the STEP models for most outcomes. The GB model had a higher AUC than STEP for death in GARFIELD-AF (0.76 vs. 0.75), but only nominally, and both performed similarly in ORBIT-AF. The multilayer NN had the lowest discriminatory power for all outcomes. The calibration of the STEP models was more aligned with the observed events for all outcomes. In the cross-registry models, the discriminatory power of the ML models was similar to or lower than STEP for most cases.
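For readers unfamiliar with the modelling setup, the comparison described above can be reproduced in miniature on synthetic data; the sketch below is not the registry data or the authors' code, just an illustration of pitting logistic regression against gradient boosting on the same AUC metric:

```python
# Minimal sketch (synthetic, imbalanced outcome): logistic regression vs.
# gradient boosting, compared by area under the ROC curve (AUC).
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5_000, n_features=20, n_informative=6,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for name, model in [("logistic regression", LogisticRegression(max_iter=1000)),
                    ("gradient boosting", GradientBoostingClassifier(random_state=0))]:
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```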

When developed from two large, community-based AF registries, ML techniques did not improve prediction modelling of death, major bleeding, or stroke.

Read the rest here:
Machine Learning Does Not Improve Upon Traditional Regression in Predicting Outcomes in Atrial Fibrillation: An Analysis of the ORBIT-AF and...

How to choose between rule-based AI and machine learning – TechTalks

By Elana Krasner

Companies across industries are exploring and implementing artificial intelligence (AI) projects, from big data to robotics, to automate business processes, improve customer experience, and innovate product development. According to McKinsey, embracing AI promises considerable benefits for businesses and economies through its contributions to productivity and growth. But with that promise comes challenges.

Computers and machines don't come into this world with inherent knowledge or an understanding of how things work. Like humans, they need to be taught that a red light means stop and green means go. So, how do these machines actually gain the intelligence they need to carry out tasks like driving a car or diagnosing a disease?

There are multiple ways to achieve AI, and existential to them all is data. Without quality data, artificial intelligence is a pipedream. There are two ways data can be manipulated to achieve AI, either through rules or machine learning, and there are some best practices to help you choose between the two methods.

Long before AI and machine learning (ML) became mainstream terms outside of the high-tech field, developers were encoding human knowledge into computer systems as rules that get stored in a knowledge base. These rules define all aspects of a task, typically in the form of "if" statements (if A, then do B; else if X, then do Y).

While the number of rules that have to be written depends on the number of actions you want a system to handle (for example, 20 actions means manually writing and coding at least 20 rules), rules-based systems are generally lower effort, more cost-effective and less risky since these rules won't change or update on their own. However, rules can limit AI capabilities with rigid intelligence that can only do what they've been written to do.
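A minimal sketch of such a rules-based decision, with the rules and thresholds below entirely made up for illustration, shows the "fixed intelligence" character: every branch is written by hand and never changes unless a human edits it.

```python
# Minimal sketch: a hand-written rules-based decision (fixed intelligence).
def review_transaction(amount, country, failed_attempts):
    if failed_attempts > 3:
        return "block"            # rule 1: repeated failed login attempts
    if amount > 10_000 and country not in {"US", "UK"}:
        return "block"            # rule 2: large transaction from other markets
    if amount > 10_000:
        return "manual_review"    # rule 3: large domestic transaction
    return "approve"              # default rule

print(review_transaction(12_500, "US", 0))   # manual_review
print(review_transaction(80, "DE", 5))       # block
```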

While a rules-based system could be considered as having fixed intelligence, in contrast, a machine learning system is adaptive and attempts to simulate human intelligence. There is still a layer of underlying rules, but instead of a human writing a fixed set, the machine has the ability to learn new rules on its own, and discard ones that aren't working anymore.

In practice, there are several ways a machine can learn, but supervised training, when the machine is given labeled data to train on, is generally the first step in a machine learning program. Eventually, the machine will be able to interpret, categorize, and perform other tasks with unlabeled data or unknown information on its own.
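By contrast, a machine learning version of the same decision infers its own boundaries from labeled examples; in this minimal sketch the historical outcomes are synthetic and generated by a toy rule, purely so the example runs end to end.

```python
# Minimal sketch (synthetic data): the supervised-learning counterpart, where the
# model learns the decision boundaries instead of having them hand-coded.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)
n = 1_000
amount = rng.uniform(10, 20_000, n)
failed_attempts = rng.integers(0, 6, n)

# Historical outcomes the machine learns from (1 = fraudulent, 0 = legitimate)
fraud = ((amount > 10_000) & (failed_attempts > 2)).astype(int)

X = np.column_stack([amount, failed_attempts])
model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, fraud)

print(model.predict([[15_000, 4], [50, 0]]))  # learned, not hand-coded: [1 0]
```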

The anticipated benefits of AI are high, so the decisions a company makes early in its execution can be critical to success. Foundational is aligning your technology choices to the underlying business goals that AI was set forth to achieve. What problems are you trying to solve, or what challenges are you trying to meet?

The decision to implement a rules-based or machine learning system will have a long-term impact on how a company's AI program evolves and scales. Here are some best practices to consider when evaluating which approach is right for your organization:

When choosing a rules-based approach makes sense:

When to apply machine learning:

The promises of AI are real, but for many organizations, the challenge is where to begin. If you fall into this category, start by determining whether a rules-based or ML method will work best for your organization.

About the author:

Elana Krasner is Product Marketing Manager at 7Park Data, a data and analytics company that transforms raw data into analytics-ready products using machine learning and NLP technologies. She has been in the tech marketing field for almost 10 years and has worked across the industry in Cloud Computing, SaaS and Data Analytics.

Read the rest here:
How to choose between rule-based AI and machine learning - TechTalks

Machine Learning Chip Market to Witness Huge Growth by 2027 | Amazon Web Services, Inc., Advanced Micro Devices, Inc, BitMain Technologies Holding…

Data Bridge Market Research has recently added a concise research report on the Global Machine Learning Chip Market to depict valuable insights related to significant market trends driving the industry. The report features analysis based on key opportunities and challenges confronted by market leaders while highlighting their competitive setting and corporate strategies for the estimated timeline. The development plans, market risks, opportunities and development threats are explained in detail. The CAGR value, technological development, new product launches and the Machine Learning Chip industry's competitive structure are elaborated. As per the study, key players in this market include Google Inc., Amazon Web Services, Inc., Advanced Micro Devices, Inc., BitMain Technologies Holding Company, Intel Corporation, Xilinx, SAMSUNG and Qualcomm Technologies, Inc.

Click HERE To get SAMPLE COPY OF THIS REPORT (Including Full TOC, Table & Figures) @ https://www.databridgemarketresearch.com/request-a-sample/?dbmr=global-machine-learning-chip-market

Machine learning chip market is expected to reach USD 72.45 billion by 2027, witnessing market growth at a rate of 40.60% in the forecast period of 2020 to 2027. Data Bridge Market Research's report on the machine learning chip market provides analysis and insights regarding the various factors expected to be prevalent throughout the forecast period while providing their impacts on the market's growth.
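As a quick back-of-the-envelope check of those figures, and this is my own arithmetic rather than anything taken from the report, a 40.60% compound annual growth rate over the seven years from 2020 to 2027 implies a market of roughly USD 6-7 billion in 2020 growing to USD 72.45 billion:

```python
# Back-of-the-envelope CAGR check (illustrative arithmetic only).
target_2027 = 72.45   # USD billion, figure quoted above
cagr = 0.406          # 40.60% per year
years = 7             # 2020 -> 2027

implied_2020 = target_2027 / (1 + cagr) ** years
print(f"implied 2020 market size: about USD {implied_2020:.1f} billion")  # ~6.7
```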

Global Machine Learning Chip Market Dynamics:

Global Machine Learning Chip Market Scope and Market Size

Machine learning chip market is segmented on the basis of chip type, technology and industry vertical. The growth among segments helps you analyse niche pockets of growth and strategies to approach the market and determine your core application areas and the difference in your target markets.

Important Features of the Global Machine Learning Chip Market Report:

1) Which companies are currently profiled in the report?

List of players that are currently profiled in the report: NVIDIA Corporation, Wave Computing, Inc., Graphcore, IBM Corporation, Taiwan Semiconductor Manufacturing Company Limited, Micron Technology, Inc.

** List of companies mentioned may vary in the final report subject to Name Change / Merger etc.

2) What regional segmentation is covered? Can a specific country of interest be added?

Currently, the research report gives special attention and focus to the following regions:

North America, Europe, Asia-Pacific etc.

** One country of specific interest can be included at no added cost. For inclusion of more regional segments, the quote may vary.

3) Is the inclusion of additional segmentation / market breakdown possible?

Yes, inclusion of additional segmentation / market breakdown is possible, subject to data availability and difficulty of the survey. However, a detailed requirement needs to be shared with our research team before giving final confirmation to the client.

** Depending upon the requirement the deliverable time and quote will vary.

Global Machine Learning Chip Market Segmentation:

By Chip Type (GPU, ASIC, FPGA, CPU, Others),

Technology (System-on-Chip, System-in-Package, Multi-Chip Module, Others),

Industry Vertical (Media & Advertising, BFSI, IT & Telecom, Retail, Healthcare, Automotive & Transportation, Others),

Country (U.S., Canada, Mexico, Brazil, Argentina, Rest of South America, Germany, Italy, U.K., France, Spain, Netherlands, Belgium, Switzerland, Turkey, Russia, Rest of Europe, Japan, China, India, South Korea, Australia, Singapore, Malaysia, Thailand, Indonesia, Philippines, Rest of Asia-Pacific, Saudi Arabia, U.A.E, South Africa, Egypt, Israel, Rest of Middle East and Africa) Industry Trends and Forecast to 2027

New Business Strategies, Challenges & Policies are mentioned in Table of Content, Request TOC @ https://www.databridgemarketresearch.com/toc/?dbmr=global-machine-learning-chip-market

Strategic Points Covered in Table of Content of Global Machine Learning Chip Market:

Chapter 1: Introduction, market driving force, product objective of study and research scope of the Machine Learning Chip market

Chapter 2: Exclusive summary, the basic information of the Machine Learning Chip market

Chapter 3: Displaying the market dynamics: drivers, trends and challenges of Machine Learning Chip

Chapter 4: Presenting the Machine Learning Chip market factor analysis: Porter's Five Forces, supply/value chain, PESTEL analysis, market entropy, patent/trademark analysis

Chapter 5: Displaying the market by type, end user and region, 2013-2018

Chapter 6: Evaluating the leading manufacturers of the Machine Learning Chip market, which consists of its competitive landscape, peer group analysis, BCG matrix and company profiles

Chapter 7: Evaluating the market by segments, by countries and by manufacturers, with revenue share and sales by key countries in these various regions

Chapters 8 & 9: Displaying the appendix, methodology and data source

Region-wise analysis of the top producers and consumers, focusing on product capacity, production, value, consumption, market share and growth opportunity in the below-mentioned key regions:

North America: U.S., Canada, Mexico

Europe: U.K., France, Italy, Germany, Russia, Spain, etc.

Asia-Pacific: China, Japan, India, Southeast Asia, etc.

South America: Brazil, Argentina, etc.

Middle East & Africa: Saudi Arabia, African countries, etc.

What the Report has in Store for you?

Industry Size & Forecast: The industry analysts have offered historical, current, and expected projections of the industry size from the cost and volume point of view

Future Opportunities: In this segment of the report, Machine Learning Chip competitors are offered data on the future aspects that the Machine Learning Chip industry is likely to provide

Industry Trends & Developments: Here, the authors of the report have talked about the main developments and trends taking place within the Machine Learning Chip marketplace and their anticipated impact on overall growth

Study on Industry Segmentation: Detailed breakdown of the key Machine Learning Chip industry segments together with product type, application, and vertical has been done in this portion of the report

Regional Analysis: Machine Learning Chip market vendors are served with vital information on the high-growth regions and their respective countries, thus assisting them to invest in profitable regions

Competitive Landscape: This section of the report sheds light on the competitive situation of the Machine Learning Chip market by focusing on the crucial strategies taken up by the players to consolidate their presence in the Machine Learning Chip industry.

Key questions answered in this report

About Data Bridge Market Research:

An absolute way to forecast what the future holds is to comprehend the trend today! Data Bridge set itself forth as an unconventional and neoteric market research and consulting firm with an unparalleled level of resilience and integrated approaches. We are determined to unearth the best market opportunities and foster efficient information for your business to thrive in the market.

Contact:

US: +1 888 387 2818

UK: +44 208 089 1725

Hong Kong: +852 8192 7475

[emailprotected]

See the article here:
Machine Learning Chip Market to Witness Huge Growth by 2027 | Amazon Web Services, Inc., Advanced Micro Devices, Inc, BitMain Technologies Holding...

Semiconductor Miniaturisation Is Running Out Of Steam. Time To Focus On Smarter Algorithms – Analytics India Magazine

Recently, a team of researchers from MIT CSAIL recommended that researchers focus on three key areas to keep delivering computing speed-ups: new algorithms, higher-performance software and more specialised hardware, and that they move away from focusing only on creating smaller hardware.

The researchers stated that semiconductor miniaturisation is running out of steam as a viable way to grow computer performance, and industries will soon face challenges to their productivity. However, opportunities for growth in computing performance will still be available if researchers focus more on software and algorithms, as well as hardware architecture.

Transistors have brought a plethora of advances and growth in computer performance over the past few decades. These improvements in computer performance come from decades of miniaturisation of computer components, for instance, from a room-sized computer to a cellphone. For decades, programmers have been able to prioritise writing code quickly rather than writing it so that it runs quickly, since smaller, faster computer chips have always been able to pick up the slack.

In 1975, Intel co-founder Gordon Moore predicted the regularity of this miniaturisation trend, now called Moore's law: the number of transistors on computer chips would double every 24 months.

The researchers broke down their recommendations into three categories, described below: software, algorithms and hardware architecture.

According to the researchers, software can be made more efficient through performance engineering, that is, restructuring software so that it runs faster. Performance engineering can remove inefficiencies in programs, known as software bloat, an issue that arises from traditional software-development strategies that aim to minimise an application's development time rather than the time it takes to run. Performance engineering can also tailor software to the hardware on which it runs, for example to take advantage of parallel processors and vector units, as in the sketch below.
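
To make that last point concrete, here is a minimal sketch in Python (illustrative only, not code from the MIT paper) of what performance engineering can look like: the same dot product written as a plain Python loop and then restructured to use NumPy's vectorised routines, which map onto the parallel and vector units mentioned above.

# Performance-engineering sketch: naive loop versus vectorised NumPy call.
import numpy as np

def dot_naive(a, b):
    # Element-by-element Python loop: easy to write, slow to run.
    total = 0.0
    for x, y in zip(a, b):
        total += x * y
    return total

def dot_vectorised(a, b):
    # Same arithmetic expressed as a single vectorised library call.
    return float(np.dot(a, b))

rng = np.random.default_rng(0)
a = rng.random(100_000)
b = rng.random(100_000)
# Both return essentially the same value; the vectorised version typically
# runs orders of magnitude faster because it avoids Python-level overhead.
print(dot_naive(a, b), dot_vectorised(a, b))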

Algorithms offer more efficient ways to solve problems. The researchers stated that the biggest gains from algorithms will come in new problem domains, for instance machine learning, and from new theoretical machine models that better reflect emerging hardware.

According to the researchers, hardware architectures can be streamlined through processor simplification, where a complex processing core is replaced with a simpler core that requires fewer transistors. The freed-up transistor budget can then be redeployed in other ways, for example by increasing the number of processor cores running in parallel, which can yield large efficiency gains for problems that can exploit parallelism (see the sketch below).
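
As a rough illustration of that last point (again a sketch, not code from the paper), the snippet below runs the same CPU-bound workload serially and then across worker processes, the kind of core-level parallelism that a simplified, many-core design rewards.

# Parallelism sketch: the same work done serially and across processor cores.
from multiprocessing import Pool

def count_primes(limit):
    # Deliberately CPU-bound: count primes below `limit` by trial division.
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    chunks = [50_000] * 8                     # eight independent chunks of work
    serial = [count_primes(c) for c in chunks]
    with Pool() as pool:                      # one worker per available core by default
        parallel = pool.map(count_primes, chunks)
    assert serial == parallel                 # same answers, computed in parallel
    print(sum(parallel), "primes counted across all chunks")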

Another form of streamlining is domain specialisation, where hardware is customised for a particular application domain. This type of specialisation discards processor functionality that is not needed for the domain and allows closer tailoring to the domain's specific characteristics, for example by decreasing floating-point precision for artificial intelligence and machine-learning applications.
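
A minimal sketch of that precision trade-off (illustrative only): storing the same weight matrix in 16-bit rather than 32-bit floating point halves its memory footprint, at the cost of a small rounding error that many machine-learning workloads tolerate.

# Reduced-precision sketch: the same weights in float32 and float16.
import numpy as np

weights32 = np.random.default_rng(0).standard_normal((1024, 1024)).astype(np.float32)
weights16 = weights32.astype(np.float16)

print("float32:", weights32.nbytes // 1024, "KiB")
print("float16:", weights16.nbytes // 1024, "KiB")
print("max rounding error:", float(np.abs(weights32 - weights16.astype(np.float32)).max()))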

Researchers have been following Moore's law for a few decades now, i.e. the expectation that the overall processing power of computers will double every two years. Software development in the Moore era has therefore generally focused on minimising the time it takes to develop an application, rather than the time it takes to run that application once it is deployed.

The researchers stated that as miniaturisation wanes, silicon-fabrication improvements at the Bottom, in their terminology, will no longer provide the predictable, broad-based gains in computer performance that society has enjoyed for more than 50 years.

In the post-Moore era, performance engineering, the development of new algorithms and hardware streamlining will be most effective within big system components: reusable software with typically more than a million lines of code, hardware of comparable complexity, or a similarly large software-hardware hybrid. From engineering-management and economic points of view, these changes will be easier to implement when they occur within such large components.

The rest is here:
Semiconductor Miniaturisation Is Running Out Of Steam. Time To Focus On Smarter Algorithms - Analytics India Magazine

Microsoft throws weight behind machine learning hacking competition – The Daily Swig

Emma Woollacott, 02 June 2020 at 13:14 UTC (Updated: 02 June 2020 at 14:48 UTC)

ML security evasion event is based on a similar competition held at DEF CON 27 last summer

The defensive capabilities of machine learning (ML) systems will be stretched to the limit at a Microsoft security event this summer.

Along with various industry partners, the company is sponsoring a Machine Learning Security Evasion Competition involving both ML experts and cybersecurity professionals.

The event is based on a similar competition held at AI Village at DEF CON 27 last summer, where contestants took part in a white-box attack against static malware machine learning models.

Several participants discovered approaches that completely and simultaneously bypassed three different machine learning anti-malware models.

The 2020 Machine Learning Security Evasion Competition is similarly designed to surface countermeasures to adversarial behavior and raise awareness about the variety of ways ML systems may be evaded by malware, in order to better defend against these techniques, says Hyrum Anderson, Microsoft's principal architect for enterprise protection and detection.

The competition will consist of two different challenges. A Defender Challenge will run from June 15 through July 23, with the aim of identifying new defenses to counter cyber-attacks.

The winning defensive technique will need to be able to detect real-world malware with moderate false-positive rates, says the team.

Next, an Attacker Challenge running from August 6 through September 18 provides a black-box threat model.

Participants will be given API access to hosted anti-malware models, including those developed in the Defender Challenge.

Contestants will attempt to evade defenses using hard-label query results, with samples from final submissions detonated in a sandbox to make sure they're still functional.

The final ranking will depend on the total number of API queries required by a contestant, as well as evasion rates, says the team.
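
The shape of such an attack loop is easy to sketch. The snippet below is illustrative only: the endpoint URL, response format and mutation hook are hypothetical and are not the actual MLSec contest API, but it shows the hard-label, query-counted threat model described above.

# Hypothetical hard-label evasion loop; every query to the hosted model is counted.
import requests

API_URL = "https://example.invalid/classify"   # placeholder, not the real contest endpoint

def query_label(sample_bytes, counter):
    # Submit a candidate sample and receive only a hard label back.
    counter["queries"] += 1
    response = requests.post(API_URL, data=sample_bytes, timeout=30)
    return response.json()["label"]            # e.g. "malicious" or "benign"

def evade(sample_bytes, mutate, budget=100):
    # Apply functionality-preserving mutations until the label flips or the budget runs out.
    counter = {"queries": 0}
    candidate = sample_bytes
    for _ in range(budget):
        if query_label(candidate, counter) == "benign":
            return candidate, counter["queries"]
        candidate = mutate(candidate)          # caller supplies the mutation strategy
    return None, counter["queries"]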

Each challenge will net the winner $2,500 in Azure credits, with the runner-up getting $500 in Azure credits.

To win, researchers must publish their detection or evasion strategies. Individuals or teams can register on the MLSec website.

Companies investing heavily in machine learning are being subjected to various degrees of adversarial behavior, and most organizations are not well-positioned to adapt, says Anderson.

It is our goal that through our internal research and external partnerships and engagements, including this competition, we'll collectively begin to change that.

See original here:
Microsoft throws weight behind machine learning hacking competition - The Daily Swig

Using Machine Learning in Financial Services and the regulatory implications – Lexology

Financial services firms have been increasingly incorporating Artificial Intelligence (AI) into their strategies to drive operational and cost efficiencies. Firms must ensure effective governance of any use of AI. The Financial Conduct Authority (FCA) is active in this area, currently collaborating with The Alan Turing Institute to examine a potential framework for transparency in the use of AI in financial markets.

In simple terms, AI involves algorithms that can make human-like decisions, often on the basis of large volumes of data, but typically at a much faster and more efficient rate. In 2019, the FCA and the Bank of England (BoE) issued a survey to almost 300 firms, including banks, credit brokers, e-money institutions, financial market infrastructure firms, investment managers, insurers, non-bank lenders and principal trading firms, to understand the extent to which they were using Machine Learning (ML), a sub-category of AI. While AI is a broad concept, ML involves a methodology whereby a computer programme learns to recognise patterns of data without being explicitly programmed.

The key findings included:

The use cases for ML identified by the FCA and BoE were largely focused around the following areas:

Anti-money laundering and countering the financing of terrorism

Financial institutions have to analyse customer data continuously from a wide range of sources in order to comply with their AML obligations. The FCA and BoE found that ML was being used at several stages within the process to:

Customer engagement

Firms were increasingly using Chatbots, which enable customers to contact firms without having to go through human agents via call centres or customer support. Chatbots can reduce the time and resources needed to resolve consumer queries.

ML can facilitate faster identification of user intent and recommend associated content, which can help address consumers' issues. For more complex matters which cannot be addressed by the Chatbot, the ML will transfer the consumer to a human agent who should be better placed to deal with the query.

Sales and trading

The FCA and BoE reported that ML use cases in sales and trading broadly fell under three categories ranging from client-facing to pricing and execution:

Insurance pricing

The majority of respondents in the insurance sector used ML to price general insurance products, including motor, marine, flight, building and contents insurance. In particular, ML applications were used for:

Insurance claims management

Of the respondents in the general insurance sector, 83% used ML for claims management in the following scenarios:

Asset management

ML currently appears to play only a supporting role in the asset management sector. Systems are often used to provide suggestions to fund managers (which apply equally to portfolio decision-making or execution-only trades):

All of these applications have back-up systems and human-in-the-loop safeguards. They are aimed at providing fund managers with suggestions, with a human in charge of the decision making and trade execution.

Regulatory obligations

Although there is no overarching legal framework which governs the use of AI in financial services, Principle 3 of the FCA's Principles for Businesses makes clear that firms must take reasonable care to organise and control their affairs responsibly and effectively, with adequate risk management systems. If regulated activities conducted by firms are increasingly dependent on ML or, more broadly, AI, firms will need to ensure that there is effective governance around the use of AI and that systems and controls adequately ensure that the use of ML and AI is not causing harm to consumers or the markets.

There are a number of risks in adopting AI, for example, algorithmic bias caused by insufficient or inaccurate data (note that the main barrier to widespread adoption of AI is the availability of data) and a lack of training of systems and AI users, which could lead to poor decisions being made. It is therefore imperative that firms fully understand the design of the ML, have stress-tested the technology prior to its roll-out in business areas and have effective quality assurance and system feedback measures in place to detect and prevent poor outcomes.

Clear records should be kept of the data used by the ML, the decision making around the use of ML and how systems are trained and tested. Ultimately, firms should be able to explain how the ML reached a particular decision.

Where firms outsource to AI service providers, they retain the regulatory risk if things go wrong. As such, the regulated firm should carry out sufficient due diligence on the service provider, understand the underlying decision-making process of the service provider's AI, and ensure the contract includes adequate monitoring and oversight mechanisms (where the AI services are important in the context of the firm's regulated business) as well as appropriate termination provisions.

The FCA announced in July 2019 that it is working with The Alan Turing Institute on a year-long collaboration on AI transparency, in which they will propose a high-level framework for thinking about transparency needs concerning uses of AI in financial markets. The Alan Turing Institute has already completed a project on explainable AI with the Information Commissioner in the context of data protection. A recent blog published by the FCA stated:

the need or desire to access information about a given AI system may be motivated by a variety of reasons ... there are a diverse range of concerns that may be addressed through transparency measures ... one important function of transparency is to demonstrate trustworthiness which, in turn, is a key factor for the adoption and public acceptance of AI systems ... transparency may [also] enable customers to understand and where appropriate challenge the basis of particular outcomes.

Read the original post:
Using Machine Learning in Financial Services and the regulatory implications - Lexology

Research Associate in Computer Vision and Machine Learning for Robotics job with UNIVERSITY OF LINCOLN | 238417 – Times Higher Education (THE)

School of Computer Science

Location: Lincoln
Salary: From £33,797 per annum
This post is full time and fixed term until 13 August 2021
Closing Date: Sunday 10 January 2021
Interview Date: Thursday 28 January 2021
Reference: COS707B

The University of Lincoln is seeking to appoint a Research Associate. The position is funded by the Ceres Agri-Tech Knowledge Exchange Partnership, which aims to build a second-generation robotic system with advanced stereovision in conjunction with a novel high-tack surface gripper/end effector.

In our previous project, a UoL team, which included LMF Mushrooms and Stelram Engineering, successfully built a picking prototype robot that can pick individual upright mushrooms with minimal damage. The system was operated by a combination of novel soft robotic actuators and an advanced tracking system driven by powerful 3D perception algorithms. The problem that this project will try to solve is picking mushrooms that grow in highly complex and biologically variable clusters. There is a lack of a simple universal grasping actuator to pick mushrooms without damage, as well as the need to develop powerful 3D perception algorithms to target mushrooms and to integrate this into motion planning and control systems. This project will attempt to solve these issues by highly novel soft robotic actuators deployed in combination with advanced guidance and tracking systems operating within a 3D vision sensed environment.

This project has the potential to change the mushroom sector and the application of soft robotics combined with novel tracking algorithms has the capability to underpin the wider deployment of RAS in multiple sectors of food and manufacturing.

We are looking to recruit a postdoctoral Research Associate specialised in the following:

The successful candidate will contribute to the University's ambition to achieve international recognition as a research-intensive institution and will be expected to design, conduct and manage original research in the above subject areas, as well as contribute to the wider activities of the Lincoln School of Computer Science. Evidence of authorship of research outputs of international standing is essential, as is the ability to work collaboratively as part of a team, including excellent written and spoken communication skills. Opportunities to mentor and co-supervise PhD students working in the project team will also be available to outstanding candidates.

Informal enquiries about the post can be made to Dr Bashir Al-Diri (email: baldiri@lincoln.ac.uk).

Read more here:
Research Associate in Computer Vision and Machine Learning for Robotics job with UNIVERSITY OF LINCOLN | 238417 - Times Higher Education (THE)

What is machine learning? Here’s what you need to know – Business Insider – Business Insider

Machine learning is a fast-growing and successful branch of artificial intelligence. In essence, machine learning is the process of allowing a computer system to teach itself how to perform complex tasks by analyzing large sets of data, rather than being explicitly programmed with a particular algorithm or solution.

In this way, machine learning enables a computer to learn how to perform a task on its own and to continue to optimize its approach over time, without direct human input.

In other words, it's the computer that is creating the algorithm, not the programmers, and often these algorithms are sufficiently complicated that programmers can't explain how the computer is solving the problem. Humans can't trace the computer's logic from beginning to end; they can only determine if it's finding the right solution to the assigned problem, which is output as a "prediction."

There are several different approaches to training expert systems that rely on machine learning, specifically "deep" learning that functions through the processing of computational nodes. Here are the most common forms:

Supervised learning is a model in which computers are given data that has already been structured by humans. For example, computers can learn from databases and spreadsheets in which the data has already been organized, such as financial data or geographic observations recorded by satellites.
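
For illustration, here is a minimal supervised-learning sketch in Python using scikit-learn's bundled iris dataset (purely an example, not tied to any system mentioned in this article): labelled rows go in, and a fitted classifier with a held-out accuracy score comes out.

# Supervised learning sketch: learn from human-labelled examples.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)              # features plus human-provided labels
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))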

Unsupervised learning uses databases that are mostly or entirely unstructured. This is common in situations where the data is collected in a way that humans can't easily organize or structure it. A common example of unsupervised learning is spam detection, in which a computer is given access to enormous quantities of emails and it learns on its own to distinguish between wanted and unwanted mail.
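
By contrast, a minimal unsupervised sketch supplies no labels at all; the algorithm groups rows purely from the structure of the data (the data below is synthetic and purely illustrative).

# Unsupervised learning sketch: cluster unlabelled data.
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)   # true labels discarded on purpose
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", [int((clusters == k).sum()) for k in range(3)])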

Reinforcement learning is when humans monitor the output of the computer system and help guide it toward the optimal solution through trial and error. One way to visualize reinforcement learning is to view the algorithm as being "rewarded" for achieving the best outcome, which helps it determine how to interpret its data more accurately.
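
A toy reward-driven example in the same spirit (a simple epsilon-greedy bandit, not any production system): the agent tries actions, receives rewards, and gradually prefers the action that pays off most.

# Reinforcement-style sketch: learn from rewards through trial and error.
import random

true_payoffs = [0.2, 0.5, 0.8]          # hidden reward probability of each action
estimates, counts = [0.0] * 3, [0] * 3
random.seed(0)

for step in range(2000):
    # Explore occasionally; otherwise exploit the best estimate so far.
    if random.random() < 0.1:
        action = random.randrange(3)
    else:
        action = max(range(3), key=lambda a: estimates[a])
    reward = 1.0 if random.random() < true_payoffs[action] else 0.0
    counts[action] += 1
    estimates[action] += (reward - estimates[action]) / counts[action]   # running mean

print("learned estimates:", [round(e, 2) for e in estimates])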

The field of machine learning is very active right now, with many common applications in business, academia, and industry. Here are a few representative examples:

Recommendation engines use machine learning to learn from previous choices people have made. For example, machine learning is commonly used in software like video streaming services to suggest movies or TV shows that users might want to watch based on previous viewing choices, as well as "you might also like" recommendations on retail sites.
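
A minimal sketch of the idea behind "you might also like" (the titles and ratings are invented for illustration): compute item-to-item similarity from past choices and surface the closest match.

# Recommendation sketch: item-item cosine similarity over a tiny ratings matrix.
import numpy as np

titles = ["Space Drama", "Cook-Off", "Heist Movie", "Nature Doc"]
ratings = np.array([        # rows = users, columns = titles, 0 = unseen
    [5, 0, 4, 1],
    [4, 0, 5, 0],
    [1, 5, 0, 4],
    [0, 4, 1, 5],
], dtype=float)

norms = np.linalg.norm(ratings, axis=0)
similarity = (ratings.T @ ratings) / np.outer(norms, norms)   # item-to-item cosine similarity

liked = 0                                                     # a user who liked "Space Drama"
best_match = max((j for j in range(len(titles)) if j != liked), key=lambda j: similarity[liked, j])
print(f"Because you liked {titles[liked]!r}, you might also like {titles[best_match]!r}")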

Banks and insurance companies rely on machine learning to detect and prevent fraud through subtle signals of strange behavior and unexpected transactions. Traditional methods for flagging suspicious activity are usually very rigid and rules-based, which can miss new and unexpected patterns, while also overwhelming investigators with false positives. Machine learning algorithms can be trained with real-world fraud data, allowing the system to classify suspicious fraud cases far more accurately.
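
To make that contrast concrete, here is a small sketch on synthetic transactions (not any bank's real pipeline): a rigid single-threshold rule versus a classifier trained on labelled examples whose fraud pattern depends on a combination of signals.

# Fraud-detection sketch: rules-based flag versus a trained classifier, on made-up data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 5000
amount = rng.exponential(100, n)
hour = rng.integers(0, 24, n)
# Fraud here depends on a combination of signals (large amount AND odd hour),
# which a single-threshold rule on amount alone cannot capture cleanly.
is_fraud = ((amount > 250) & ((hour < 5) | (hour > 22))).astype(int)
X = np.column_stack([amount, hour])

rule_flags = (amount > 250).astype(int)                       # rigid rules-based flag
model = RandomForestClassifier(random_state=0).fit(X[:4000], is_fraud[:4000])
ml_flags = model.predict(X[4000:])

print("rule false positives:", int(((rule_flags[4000:] == 1) & (is_fraud[4000:] == 0)).sum()))
print("model false positives:", int(((ml_flags == 1) & (is_fraud[4000:] == 0)).sum()))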

Inventory optimization, a part of the retail workflow, is increasingly performed by systems trained with machine learning. Machine learning systems can analyze vast quantities of sales and inventory data to find patterns that elude human inventory planners. These computer systems can produce more accurate probabilistic forecasts of customer demand.

Machine automation increasingly relies on machine learning. For example, self-driving car technology is deeply indebted to machine learning algorithms for the ability to detect objects on the road, classify those objects, and make accurate predictions about their potential movement and behavior.

View post:
What is machine learning? Here's what you need to know - Business Insider - Business Insider

New machine learning, automation capabilities added to PagerDuty’s digital operations management platform – SiliconANGLE News

During a time when it seems as though the entire planet has gone digital, the role of PagerDuty Inc. has come into sharper focus as a key player in keeping the critical work of IT organizations up and running.

Mindful of enterprise and consumer need at such an important time, the company has chosen this week's virtual Summit event to unveil a significant number of new product releases.

We have the biggest set of releases and investments in innovation that we're unleashing in the history of the company, said Jonathan Rende (pictured), senior vice president of product and marketing at PagerDuty. PagerDuty has a unique place in that whole ecosystem in what's considered crucial and critical now. These services have never been more important and more essential to everything we do.

Rende spoke with Lisa Martin, host of theCUBE, SiliconANGLE Media's livestreaming studio, during the PagerDuty Summit 2020. They discussed the company's focus on automation to help customers manage incidents, the introduction of new tools for organizational collaboration and a trend toward full-service ownership. (* Disclosure below.)

The latest releases are focused on PagerDuty's expertise in machine learning and automation to leverage customer data for faster and more accurate incident response.

In our new releases, we raised the game on what we're doing to take advantage of the data that we capture and this increase in information that's coming in, Rende said. A big part of our releases has also been about applying machine learning to add context and speed up fixing, resolving and finding the root cause of issues. We're applying machine learning to better group and intelligently organize information into singular incidents that really matter.

PagerDuty is also leveraging its partner and customer network to introduce new tools for collaboration as part of its platform.

One of the things we've done in the new platform is we're introducing industry-first video war rooms with our partners and customers, Zoom as well as Microsoft Teams, and updating our Slack integrations as well, Rende explained. We've also added the ability to manage an issue through Zoom and Microsoft Teams as a part of PagerDuty.

These latest announcements are a part of what Rende describes as a move in larger companies toward broader direct involvement of both developers and IT staff in operational responsibility.

There is a material seismic shift towards full-service ownership, Rende said. We're seeing larger organizations have major initiatives around this notion of the front-line teams being empowered to work directly on these issues. Full-service ownership means you build it, you ship it, you own it, and that's for both development and IT organizations.

Watch the complete video interview below, and be sure to check out more of SiliconANGLE's and theCUBE's coverage of PagerDuty Summit 2020. (* Disclosure: TheCUBE is a paid media partner for PagerDuty Summit 2020. Neither PagerDuty Inc., the sponsor for theCUBE's event coverage, nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)

See original here:
New machine learning, automation capabilities added to PagerDuty's digital operations management platform - SiliconANGLE News