Five real world AI and machine learning trends that will make an impact in 2021 – IT World Canada

Experts predict artificial intelligence (AI) and machine learning will enter a golden age in 2021, solving some of the hardest business problems.

Machine learning trains computers to learn from data with minimal human intervention. The science isn't new, but recent developments have given it fresh momentum, said Jin-Whan Jung, Senior Director & Leader, Advanced Analytics Lab at SAS. "The evolution of technology has really helped us," said Jung. "The real-time decision making that supports self-driving cars or robotic automation is possible because of the growth of data and computational power."

The COVID-19 crisis has also pushed the practice forward, said Jung. "We're using machine learning more for things like predicting the spread of the disease or the need for personal protective equipment," he said. Lifestyle changes mean that AI is being used more often at home, such as when Netflix recommends the next show to watch, noted Jung. Companies are also increasingly turning to AI to improve their agility and cope with market disruption.

Jung's observations are backed by the latest IDC forecast, which estimates that global AI spending will double to $110 billion over the next four years. How will AI and machine learning make an impact in 2021? Here are the top five trends identified by Jung and his team of elite data scientists at the SAS Advanced Analytics Lab:

Canada's Armed Forces rely on Lockheed Martin's C-130 Hercules aircraft for search and rescue missions. Maintenance of these aircraft has been transformed by the marriage of machine learning and IoT. Six hundred sensors located throughout the aircraft produce 72,000 rows of data per flight hour, including fault codes on failing parts. By applying machine learning, the system develops real-time best practices for the maintenance of the aircraft.

"We are embedding the intelligence at the edge, which is faster and smarter, and that's the key to the benefits," said Jung. Indeed, the combination is so powerful that Gartner predicts that by 2022, more than 80 per cent of enterprise IoT projects will incorporate AI in some form, up from just 10 per cent today.

Computer vision trains computers to interpret and understand the visual world. Using deep learning models, machines can accurately identify objects in videos, or images in documents, and react to what they see.

The practice is already having a big impact on industries like transportation, healthcare, banking and manufacturing. "For example, a camera in a self-driving car can identify objects in front of the car, such as stop signs, traffic signals or pedestrians, and react accordingly," said Jung. Computer vision has also been used to analyze scans to determine whether tumors are cancerous or benign, avoiding the need for a biopsy. In banking, computer vision can be used to spot counterfeit bills or to process document images, rapidly robotizing cumbersome manual processes. In manufacturing, it can improve defect detection rates by up to 90 per cent. It is even helping to save lives: cameras monitor and analyze power lines to enable early detection of wildfires.
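
To make the idea concrete, here is a minimal sketch of object detection with an off-the-shelf pretrained model. It is illustrative only: the torchvision model and the image file name are assumptions, not the systems used by any company mentioned above.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Load a detector pretrained on the COCO dataset (people, traffic lights, etc.)
model = fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()

image = Image.open("street_scene.jpg").convert("RGB")  # placeholder image
with torch.no_grad():
    detections = model([to_tensor(image)])[0]

# Report only confident detections, the way a driving system would react
# only to objects it is sure about
for box, label, score in zip(detections["boxes"], detections["labels"],
                             detections["scores"]):
    if score > 0.8:
        print(f"class {label.item()} at {box.tolist()} (score {score:.2f})")
```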

At the core of machine learning is the idea that computers are not simply trained based on a static set of rules but can learn to adapt to changing circumstances. "It's similar to the way you learn from your own successes and failures," said Jung. "Business is going to be moving more and more in this direction."

Currently, adaptive learning is often used in fraud investigations. Machines can use feedback from the data or from investigators to fine-tune their ability to spot fraudsters. It will also play a key role in hyper-automation, a top technology trend identified by Gartner. The idea is that businesses should automate processes wherever possible. If it's going to work, however, automated business processes must be able to adapt to different situations over time, Jung said.

To deliver a return for the business, AI cannot be kept solely in the hands of data scientists, said Jung. In 2021, organizations will want to build greater value by putting analytics in the hands of the people who can derive insights to improve the business. "We have to make sure that we not only make a good product, we want to make sure that people use those things," said Jung. As an example, Gartner suggests that AI will increasingly become part of the mainstream DevOps process to provide a clearer path to value.

Responsible AI will become a high priority for executives in 2021, said Jung. In the past year, ethical issues have been raised in relation to the use of AI for surveillance by law enforcement agencies, or by businesses for marketing campaigns. There is also talk around the world of legislation related to responsible AI.

"There is a possibility for bias in the machine, the data or the way we train the model," said Jung. "We have to make every effort to have processes and gatekeepers to double and triple check to ensure compliance, privacy and fairness." Gartner also recommends the creation of an external AI ethics board to advise on the potential impact of AI projects.

Large companies are increasingly hiring Chief Analytics Officers (CAOs) and building the resources to determine the best way to leverage analytics, said Jung. However, organizations of any size can benefit from AI and machine learning, even if they lack in-house expertise.

Jung recommends that organizations without experience in analytics consider getting an assessment of how to turn data into a competitive advantage. For example, the Advanced Analytics Lab at SAS offers an innovation and advisory service that provides guidance on value-driven analytics strategies, "helping organizations define a roadmap that aligns with business priorities, from data collection and maintenance to analytics deployment, execution and monitoring, to fulfill the organization's vision," said Jung. As we progress into 2021, organizations will increasingly discover the value of analytics to solve business problems.


Jim Love, Chief Content Officer, IT World Canada


Harnessing the power of machine learning for improved decision-making – GCN.com


Across government, IT managers are looking to harness the power of artificial intelligence and machine learning techniques (AI/ML) to extract and analyze data to support mission delivery and better serve citizens.

Practically every large federal agency is executing some type of proof of concept or pilot project related to AI/ML technologies. "The government's AI toolkit is diverse and spans the federal administrative state," according to a report commissioned by the Administrative Conference of the United States (ACUS). Nearly half of the 142 federal agencies canvassed have experimented with AI/ML tools, states the report, "Government by Algorithm: Artificial Intelligence in Federal Administrative Agencies."

Moreover, AI tools are already improving agency operations across the full range of governance tasks, including regulatory mandate enforcement, adjudicating government benefits and privileges, monitoring and analyzing risks to public safety and health, providing weather forecasting information and extracting information from the trove of government data to address consumer complaints.

Agencies with mature data science practices are further along in their AI/ML exploration. However, because agencies are at different stages in their digital journeys, many federal decision-makers still struggle to understand AI/ML. They need a better grasp of the skill sets and best practices needed to derive meaningful insights from data powered by AI/ML tools.

Understanding how AI/ML works

AI mimics human cognitive functions such as the ability to sense, reason, act and adapt, giving machines the ability to act intelligently. Machine learning is a component of AI that involves training algorithms or models to make predictions about data they have not yet observed. ML models are not programmed like conventional algorithms. They are trained using data -- such as words, log data, time series data or images -- and make predictions on actions to perform.

Within the field of machine learning, there are two main types of tasks: supervised and unsupervised.

With supervised learning, data analysts have prior knowledge of what the output values for their samples should be. The AI system is specifically told what to look for, so the model is trained until it can detect underlying patterns and relationships. For example, an email spam filter is a machine learning program that can learn to flag spam after being given examples of spam emails that are flagged by users and examples of regular non-spam emails. The examples the system uses to learn are called the training set.
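
As a rough illustration of that spam-filter example, the toy model below learns from a handful of user-labeled emails. The data and the choice of TF-IDF plus logistic regression are assumptions for the sketch, not a description of any production filter.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# The "training set": emails already flagged by users (1 = spam, 0 = not spam)
training_set = ["win a free prize now", "meeting moved to 3pm",
                "cheap loans click here", "quarterly report attached"]
labels = [1, 0, 1, 0]

# Supervised learning: the model is told what the correct outputs should be
spam_filter = make_pipeline(TfidfVectorizer(), LogisticRegression())
spam_filter.fit(training_set, labels)

print(spam_filter.predict(["claim your free prize"]))  # expected: [1] (spam)
```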

Unsupervised learning looks for previously undetected patterns in a dataset with no pre-existing labels and with a minimum of human supervision. For instance, data points with similar characteristics can be automatically grouped into clusters for anomaly detection, such as in fraud detection or identifying defective mechanical parts in predictive maintenance.
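
The fraud-detection flavor of this can be sketched with a density-based clustering algorithm: points that join no cluster are treated as anomalies. The transaction features below are invented for illustration.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

# columns: transaction amount, hour of day (no labels provided)
transactions = np.array([[12.5, 9], [13.1, 10], [11.8, 9], [12.9, 11],
                         [14.2, 10], [950.0, 3]])

X = StandardScaler().fit_transform(transactions)
clusters = DBSCAN(eps=0.9, min_samples=3).fit_predict(X)

# DBSCAN labels points that fit no cluster as -1: candidate anomalies
print(transactions[clusters == -1])  # expected: the 950.00 transaction at 3am
```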

Supervised, unsupervised in action

It is not a matter of which approach is better. Both supervised and unsupervised learning are needed for machine learning to be effective.

Both approaches were applied recently to help a large defense financial management and comptroller office resolve over $2 billion in unmatched transactions in an enterprise resource planning system. Many tasks required significant manual effort, so the organization implemented a robotic process automation solution to automatically access data from various financial management systems and process transactions without human intervention. However, RPA fell short when data variances exceeded tolerance for matching data and documents, so AI/ML techniques were used to resolve the unmatched transactions.

The data analyst team used supervised learning based on the preexisting rules that had produced these transactions. The team was then able to provide additional value by applying unsupervised ML techniques to find patterns in the data that they were not previously aware of.

To get a better sense of how AI/ML can help agencies better manage data, it is worth considering these three steps:

Data analysts should think of these steps as a continuous loop. If the output from unsupervised learning is meaningful, they can incorporate it into the supervised learning modeling. Thus, they are involved in a continuous learning process as they explore the data together.
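
A minimal sketch of that loop, with invented data: cluster labels from an unsupervised pass are fed back as an extra feature for the supervised model, and the cycle repeats as new transactions arrive.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.random((500, 4))        # transaction features (hypothetical)
y = rng.integers(0, 2, 500)     # 1 = matched, 0 = unmatched (hypothetical)

# Unsupervised pass: discover structure nobody specified in advance
cluster_ids = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)

# If the clusters look meaningful, incorporate them into supervised modeling
X_augmented = np.column_stack([X, cluster_ids])
model = RandomForestClassifier(random_state=0).fit(X_augmented, y)

# The loop continues: retrain clusters and model together as data grows
print(model.score(X_augmented, y))
```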

Avoiding pitfalls

It is important for IT teams to realize they cannot just feed data into machine learning models, especially with unsupervised learning, which is a little more art than science. That is where humans really need to be involved. Also, analysts should avoid over-fitting models by trying to derive too much insight from the data.

Remember: AI/ML and RPA are meant to augment humans in the workforce, not merely replace people with autonomous robots or chatbots. To be effective, agencies must strategically organize around the right people, processes and technologies to harness the power of innovative technologies such as AI/ML to achieve the performance they need at scale.

About the Author

Samuel Stewart is a data scientist with World Wide Technology.


Agencies may issue information request on AI adoption, Fed official says – American Banker

WASHINGTON - The Federal Reserve and other banking regulators are considering a formal request for public feedback about the adoption of artificial intelligence in the financial services sector, Fed Gov. Lael Brainard said Tuesday.

If the agencies move forward with the request for information, it could be the first step toward an interagency policy on AI.

Brainard said the RFI would accompany the Fed's own efforts to explore how AI and machine learning can be used for bank supervision purposes.

"To ensure that society benefits from the application of AI to financial services, we must understand the potential benefits and risks, and make clear our expectations for how the risks can be managed effectively by banks," Brainard said in remarks for a symposium on AI hosted by the central bank. "Regulators must provide appropriate expectations and adjust those expectations as the use of AI in financial services and our understanding of its potential and risks evolve."


Financial institutions have started using AI for operational risk management purposes and for customer-facing applications, as well as fraud prevention efforts, Brainard said. Those functions could remake the way banks monitor suspicious activity, she added.

"Machine learning-based fraud detection tools have the potential to parse through troves of data, both structured and unstructured, to identify suspicious activity with greater accuracy and speed, and potentially enable firms to respond in real time," she said.

AI could also be used to analyze alternative data for customers, Brainard said. She added that this could be particularly helpful to the segment of the population that is "credit invisible."

But Brainard also acknowledged challenges in the widespread adoption of AI and machine learning in banking. If models are based on historical data that has racial bias baked in, they could amplify rather than ameliorate racial gaps in access to credit and lead to digital redlining.

"It is our collective responsibility to ensure that as we innovate, we build appropriate guardrails and protections to prevent such bias and ensure that AI is designed to promote equitable outcomes," she said.

Brainard explained that there is also often a lack of transparency in how AI and machine learning processes work behind the scenes to accomplish tasks. The adoption of the technology in the financial services space should avoid "one-size-fits-all" explanations, she said.

"To ensure that the model comports with fair-lending laws that prohibit discrimination, as well as the prohibition against unfair or deceptive practices, firms need to understand the basis on which a machine learning model determines creditworthiness," she said.


Connected and autonomous vehicles: Protecting data and machine learning innovations – Lexology

The development of connected and autonomous vehicles (CAVs) is technology-driven and data-centric. Zenzic's Roadmap to 2030 highlights that 'the intelligence of self-driving vehicles is driven by advanced features such as artificial intelligence (AI) or machine learning (ML) techniques'.[1] Developers of connected and automated mobility (CAM) technologies are engineering advances in machine learning and machine analysis techniques that can create valuable, potentially life-saving, insights from the massive well of data that is being generated.

Diego Black and Lucy Pegler take a look at the legal and regulatory issues involved in protecting data and innovations in CAVs.

The data of driving

It is predicted that the average driverless car will produce around 4TB of data per day, including data on traffic, route choices, passenger preferences, vehicle performance and many more data points[2].

'Data is foundational to emerging CAM technologies, products and services driving their safety, operation and connectivity'.[3]

As Burges Salmon and AXA UK outlined in their joint report as part of FLOURISH, an Innovate UK-funded CAV project, the data produced by CAVs can be broadly divided into a number of categories based on its characteristics: for example, sensitive commercial data, commercial data and personal data. How data should be protected will depend on its characteristics and, importantly, the purposes for which it is used. The use of personal data (i.e. data from which an individual can be identified) attracts particular consideration.

The importance of data to the CAM industry and, in particular, the need to share data effectively to enable the deployment and operation of CAM, needs to be balanced against data protection considerations. In 2018, the Open Data Institute (ODI) published a report setting out its view that all journey data is personal data,[4] consequently bringing journey data within the scope of the General Data Protection Regulation.[5]

Additionally, the European Data Protection Board (EDPB) has confirmed that the ePrivacy directive (2002/58/EC, as revised by 2009/136/EC) applies to connected vehicles by virtue of 'the connected vehicle and every device connected to it [being] considered as a "terminal equipment"'.[6] This means that any machine learning innovations deployed in CAVs will inevitably process vast amounts of personal data. The UK Information Commissioner's Office has issued guidance on how best to harness both big data and AI in relation to personal data, including emphasising the need for industry to deploy ethical principles, create ethics boards to monitor new uses of data, and ensure that machine learning algorithms are auditable.[7]

Navigating the legal frameworks that apply to the use of data is complex and whilst the EDPB has confirmed its position in relation to connected vehicles, automated vehicles and their potential use cases raise an entirely different set of considerations. Whilst the market is developing rapidly, use case scenarios for automated mobility will focus on how people consume services. Demand responsive transport and ride sharing are likely to play a huge role in the future of personal mobility.

The main issue policy makers now face is the ever-evolving nature of the technology. As new, potentially unforeseen, technologies are integrated into CAVs, the industry will require both a stringent data protection framework on the one hand, and flexibility and accessibility on the other. These two policy goals are necessarily at odds with one another, and the industry will need to take a realistic, privacy-by-design approach to future development, working with rather than against regulators.

Whilst the GDPR and ePrivacy Directive will likely form the building blocks of future regulation of CAV data, we anticipate the development of a complementary framework of regulation and standards that recognises the unique applications of CAM technologies and the use of data.

Cyber security

The prolific and regular nature of cyber-attacks poses risks to both public acceptance of CAV technology and to the underlying business interests of organisations involved in the CAV ecosystem.

New technologies can present a threat to existing cyber security measures. Tarquin Folliss of Reliance acsn highlights this, noting that 'a CAV's mix of operational and information technology will produce systems complex to monitor, where intrusive endpoint monitoring might disrupt inadvertently the technology underpinning safety'. The threat is even more acute when thinking about CAVs in action and, as Tarquin notes, the ability for 'malign actors to target a CAV network in the same way they target other critical national infrastructure networks and utilities, in order to disrupt'.

In 2017, the government announced its 8 Key Principles of Cyber Security for Connected and Automated Vehicles. This, alongside the DCMS IoT code of practice, the CCAV's CAV code of practice and the BSI's PAS 1885, provides a good starting point for CAV manufacturers and sets out best practices for the sector.

Work continues at pace on cyber security for CAM. In May this year, Zenzic published its Cyber Resilience in Connected and Automated Mobility (CAM) Cyber Feasibility Report which sets out the findings of seven projects tasked with providing a clear picture of the challenges and potential solutions in ensuring digital resilience and cyber security within CAM.

Demonstrating the pace of work in the sector, in June 2020 the United Nations Economic Commission for Europe (UNECE) published two new UN Regulations focused on cyber security in the automotive sector. The Regulations represent another step-change in the approach to managing the significant cyber risk of an increasingly connected automotive sector.

Protecting innovation

As innovation in the CAV sector increases, issues regarding intellectual property and its protection and exploitation become more important. Companies that historically were not involved in the automotive sector are now rapidly becoming key partners, providing expertise in technologies such as IT security, telecoms, blockchain and machine learning. In autonomous vehicles, many of the biggest patent filers have software and telecoms backgrounds.[8]

With the increasing use of in-car and inter-car connectivity, and the growing amount of data that must be handled per second as levels of autonomy rise, innovators in the CAV space are having to address data security as well as determining how best to handle large data sets. Furthermore, the recent UK government call for evidence on automated lane keeping systems is seen by many as the first step towards standards being introduced for autonomous vehicles.

In view of these developments, companies looking to benefit from their innovations now face new challenges. Unlike more traditional automotive innovation, where the advances lay in improvements to engineering and machinery, many of the innovations in the CAV space reside in electronics and software development. The ability to protect and exploit inventions in the software space has become increasingly relevant in the automotive industry.

Multiple Intellectual Property rights exist that can be used to protect innovations in CAVs. Some rights can be particularly effective in areas of technology where standards exist, or are likely to exist. Two of the main ways seen at present are through the use of patents and trade secrets. Both can be used in combination, or separately, to provide an effective IP strategy. Such an approach is seen in other industries such as those involved in data security.

For companies that are developing or improving machine learning models, or training sets, the use of trade secrets is particularly common. Companies relying on trade secrets may often license access to, or sell the outputs of, their innovations. Advantageously, trade secrets are free and last indefinitely.

An effective strategy in such fields is to obtain patents that cover the technological standard. By definition if a third party were to adhere to the defined standard, they would necessarily fall within the scope of the patent, thus providing the owner of the patent with a potential revenue stream through licensing agreements. If, as anticipated, standards will be set in CAVs any company that can obtain patents to cover the likely standard will be at an advantage. Such licenses are typically offered under a fair, reasonable and non-discriminatory (FRAND) basis, to ensure that companies are not prevented by patent holders from entering the market.

A key consideration is that the use of trade secrets may be incompatible with the use of standards. If technology standards are introduced for autonomous vehicles, companies would have to demonstrate that their technology complies with those standards, and the need to demonstrate compliance is difficult to square with keeping the technology secret.

However, whilst a patent provides a stronger form of protection, in order to enforce a patent the owner must be able to demonstrate that a third party is performing the acts defined in the patent. In the case of machine learning and mathematical-based methods, such information is often kept hidden, making proving infringement difficult. As a result, patents in such areas are often directed towards a visible, or tangible, output; for example, in CAVs this may be the control of a vehicle based on the improvements in the machine learning. Due to the difficulty in demonstrating infringement, many companies are choosing to protect their innovations with a mixture of trade secrets and patents.

Legal protections for innovations

For the innovations typically seen in the software side of CAVs, trade secrets and patents are the two main forms of protection.

Trade secrets are, as the name implies, where a company keeps all, or part, of its innovation a secret. In software-based inventions this may be in the form of a black-box disclosure, where the workings and functionality of the software are kept secret. However, steps do need to be taken to keep the innovation secret, and trade secrets do not prevent a third party from independently implementing, or reverse engineering, the innovation. Furthermore, once a trade secret is made public, the value associated with it is gone.

Patents are an exclusive right, lasting up to 20 years, which allows the holder to prevent a third party from utilising the technology covered by the scope of the patent in that territory, or to request a license from them. It is therefore not possible to enforce, say, a US patent in the UK. Unlike trade secrets, publication is an important part of the patent process.

In order for inventions to be patented they must be new (that is, not disclosed anywhere in the world before), inventive (not run-of-the-mill improvements), and concern non-excluded subject matter. The exclusions in the UK and Europe cover, amongst other fields, software and mathematical methods 'as such'. In the case of CAVs, a large number of inventions are developed that could fall into the software and mathematical methods categories.

The test of whether an invention constitutes excluded subject matter varies between jurisdictions. In Europe, if an invention is seen to solve a technical problem, for example one relating to the control of vehicles, it would be deemed allowable. Many of the innovations in CAVs can be tied to technical problems relating to, for example, the control of vehicles or improvements in data security. As such, CAV inventions may on the whole escape the exclusions.

What does the future hold?

Technology is advancing at a rapid rate. At the same time as industry develops more and more sophisticated software to harness data, bad actors gain access to more advanced tools. To combat these increased threats, CAV manufacturers need to be putting in place flexible frameworks to review and audit their uses of data now, looking toward the developments of tomorrow to assess the data security measures they have today. They should also be looking to protect some of their most valuable IP assets from the outset, including machine learning developments in a way that is secure and enforceable.


What is machine learning? Here's what you need to know – Business Insider

Machine learning is a fast-growing and successful branch of artificial intelligence. In essence, machine learning is the process of allowing a computer system to teach itself how to perform complex tasks by analyzing large sets of data, rather than being explicitly programmed with a particular algorithm or solution.

In this way, machine learning enables a computer to learn how to perform a task on its own and to continue to optimize its approach over time, without direct human input.

In other words, it's the computer that is creating the algorithm, not the programmers, and often these algorithms are sufficiently complicated that programmers can't explain how the computer is solving the problem. Humans can't trace the computer's logic from beginning to end; they can only determine if it's finding the right solution to the assigned problem, which is output as a "prediction."

There are several different approaches to training expert systems that rely on machine learning, specifically "deep" learning that functions through the processing of computational nodes. Here are the most common forms:

Supervised learning is a model in which computers are given data that has already been structured by humans. For example, computers can learn from databases and spreadsheets in which the data has already been organized, such as financial data or geographic observations recorded by satellites.

Unsupervised learning uses databases that are mostly or entirely unstructured. This is common in situations where the data is collected in a way that humans can't easily organize or structure. A common example of unsupervised learning is spam detection, in which a computer is given access to enormous quantities of emails and learns on its own to distinguish between wanted and unwanted mail.

Reinforcement learning is when humans monitor the output of the computer system and help guide it toward the optimal solution through trial and error. One way to visualize reinforcement learning is to view the algorithm as being "rewarded" for achieving the best outcome, which helps it determine how to interpret its data more accurately.
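
The "reward" mechanic can be shown with a few lines of tabular Q-learning on a toy five-state corridor, where the agent is rewarded only for reaching the right end. All parameters here are arbitrary choices for the sketch.

```python
import numpy as np

n_states, n_actions = 5, 2              # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))     # value estimates, learned by trial
alpha, gamma, epsilon = 0.5, 0.9, 0.1

rng = np.random.default_rng(0)
for episode in range(200):
    state = 0
    while state != n_states - 1:
        # mostly exploit the best known action, occasionally explore
        if rng.random() < epsilon:
            action = rng.integers(n_actions)
        else:
            action = int(np.argmax(Q[state]))
        next_state = min(n_states - 1, max(0, state + (1 if action else -1)))
        reward = 1.0 if next_state == n_states - 1 else 0.0  # the "reward"
        # nudge the estimate toward reward plus discounted future value
        Q[state, action] += alpha * (reward + gamma * Q[next_state].max()
                                     - Q[state, action])
        state = next_state

print(Q)  # the learned values come to favor moving right in every state
```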

The field of machine learning is very active right now, with many common applications in business, academia, and industry. Here are a few representative examples:

Recommendation engines use machine learning to learn from previous choices people have made. For example, machine learning is commonly used in software like video streaming services to suggest movies or TV shows that users might want to watch based on previous viewing choices, as well as "you might also like" recommendations on retail sites.

Banks and insurance companies rely on machine learning to detect and prevent fraud through subtle signals of strange behavior and unexpected transactions. Traditional methods for flagging suspicious activity are usually very rigid and rules-based, which can miss new and unexpected patterns, while also overwhelming investigators with false positives. Machine learning algorithms can be trained with real-world fraud data, allowing the system to classify suspicious fraud cases far more accurately.
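
As a sketch of that supervised setup, with synthetic data standing in for real fraud records, a classifier is trained on labeled cases and judged on precision, since higher precision means fewer false positives for investigators.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import precision_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for transaction data; fraud (class 1) is rare (~2%)
X, y = make_classification(n_samples=5000, weights=[0.98], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
flagged = clf.predict(X_test)

# precision = share of flagged transactions that are actually fraud
print(f"precision: {precision_score(y_test, flagged):.2f}")
```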

Inventory optimization, a part of the retail workflow, is increasingly performed by systems trained with machine learning. Machine learning systems can analyze vast quantities of sales and inventory data to find patterns that elude human inventory planners. These computer systems can produce more accurate probability forecasts of customer demand.

Machine automation increasingly relies on machine learning. For example, self-driving car technology is deeply indebted to machine learning algorithms for the ability to detect objects on the road, classify those objects, and make accurate predictions about their potential movement and behavior.


What are the roles of artificial intelligence and machine learning in GNSS positioning? – Inside GNSS

For decades, artificial intelligence and machine learning have advanced at a rapid pace. Today, there are many ways artificial intelligence and machine learning are used behind the scenes to impact our everyday lives, such as social media, shopping recommendations, email spam detection, speech recognition, self-driving cars, UAVs, and so on.

The simulation of human intelligence lets machines be programmed to think like humans and mimic our actions to achieve a specific goal. In our own field, machine learning has also changed the way we solve navigation problems and will take on a significant role in advancing PNT technologies in the future.

LI-TA HSU, HONG KONG POLYTECHNIC UNIVERSITY

Q: Can machine learning replace conventional GNSS positioning techniques?

Actually, it makes no sense to use ML when the exact physics/mathematical models of GNSS positioning are known, and when using machine learning (ML) techniques over any appreciable area to collect extensive data and train a network to estimate receiver locations would be an impractically large undertaking. We, human beings, designed the satellite navigation systems based on the laws of physics we discovered. For example, we use Kepler's laws to model the position of satellites in an orbit. We use the spread-spectrum technique to model the satellite signal, allowing us to acquire very weak signals transmitted from medium-Earth orbits. We understand the Doppler effect and design tracking loops to track the signal and decode the navigation message. We finally make use of trilateration to model the positioning and use least squares to estimate the location of the receiver. Through the efforts of GNSS scientists and engineers over the past several decades, GNSS can now achieve centimeter-level positioning. The problem is: if everything is so perfect, why don't we have perfect GNSS positioning?

The answer for me as an ML specialist is that the assumptions made are not always valid in all contexts and applications! In trilateration, we assume the satellite signal is always transmitted in direct line-of-sight (LOS). However, different layers in the atmosphere can diffract the signal. Luckily, remote-sensing scientists studied the troposphere and ionosphere and came up with sophisticated models to mitigate the ranging error caused by transmission delay. But the multipath effects and non-line-of-sight (NLOS) receptions caused by buildings and obstacles on the ground are much harder to deal with due to their high nonlinearity and complexity.

Q: What are the challenges of GNSS and how can machine learning help with it?

GNSS performs very differently under different contexts. Context means what and where: for example, a pedestrian walking in an urban canyon, or a pedestrian sitting in a car driving on a highway. The notorious multipath and NLOS effects play major roles in degrading the performance of a GNSS receiver under different contexts. If we follow the same logic as the ionospheric research to deal with the multipath effect, we need to study the 3D building models that are the main cause of the reflections. Drawing on our previous research, the right side of Figure 1 shows a simulation based on the LOD1 building model and a single-reflection ray-tracing algorithm. It reveals that the positioning error caused by multipath and NLOS is highly site-dependent. In other words, the nonlinearity and complexity of multipath and NLOS are very high.

Generally speaking, ML derives a model based on data. What exactly does ML do best?

Phenomena we simply do not know how to model by explicit laws of physics/math, for example, contexts and semantics.

Phenomena with high complexity, time variance and nonlinearity.

Looking at the challenges of GNSS multipath and the potential of ML, it becomes straightforward to apply artificial intelligence to mitigate multipath and NLOS. One mainstream idea is to use ML to train models to classify LOS, multipath and NLOS measurements. This idea is illustrated in Figure 2. Three steps are required: data labeling, classifier training, and classifier evaluation. In fact, there are challenges in each step.

Are we confident in our labeling?

In our work, we use 3D city models and ray-tracing simulation to label the measurements we received from the GNSS receiver. The label may not be 100% correct since the 3D models are not conclusive enough to represent the real world. Trees and dynamic objects (vehicles and pedestrians) are not included. In addition, the multiple reflected signals are very hard to trace and the 3D models could have errors.

What are the classes and features?

For the classes, popular selections are the presence (binary) of multipath or NLOS and their associated pseudorange errors. The features are selected from the variables that are affected by multipath, including carrier-to-noise ratio, pseudorange residual, DOP, etc. If we can access a level deeper, into the correlator, the shapes of the code and carrier correlators are also excellent features. Our study compares the different levels of features (correlator, RINEX, and NMEA) for a GNSS classifier and reveals that the rawer the feature, the better the classification accuracy that can be obtained. Finally, exploratory data analysis methods, such as principal component analysis, can help select the features that are most representative of the class.
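
A toy version of such a classifier is sketched below. The feature values and labels are invented; in real work the labels would come from 3D city models and ray tracing, as described earlier.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# columns: C/N0 (dB-Hz), pseudorange residual (m), satellite elevation (deg)
X = np.array([[45.0, 0.8, 60], [44.0, 1.1, 55], [30.0, 12.4, 15],
              [28.0, 20.3, 10], [38.0, 5.2, 25], [46.0, 0.5, 70]])
y = np.array(["LOS", "LOS", "NLOS", "NLOS", "multipath", "LOS"])

clf = RandomForestClassifier(random_state=0).fit(X, y)

# A weak, high-residual, low-elevation measurement resembles NLOS
print(clf.predict([[31.0, 15.0, 12]]))
```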

Are we confident that the data we used to train the classifier are representative enough for the general application cases?

Overfitting of the data has always been a challenge for ML. Multipath and NLOS effects differ greatly between cities. For example, the architecture in Europe and Asia is very different, producing different multipath effects. Classifiers trained using data from Hong Kong do not necessarily perform well in London. The categorization of cities or urban areas in terms of their effects on GNSS multipath and NLOS is still an open question.

Q: What are the challenges of integrated navigation systems and how can machine learning can help with them?

Seamless positioning has always been the ultimate goal. However, each sensor performs differently in different areas. Table 1 gives a rough picture. Inertial sensors seem to perform stably in most areas, but a MEMS INS suffers from drift and is highly affected by random noise caused by temperature variations. Naturally, integrated navigation is a solution. Sensor integration, in fact, should be considered over both the long term and the short term.

Long-term Sensor Selection

In the long term, available sensors for positioning are generally more than enough. Determining the best subset of sensors to integrate is the question to ask. Consider an example of seamless positioning for a city dweller travelling from home to the office:

Walking on a street to the subway station (GNSS+IMU)

Walking in a subway station (Wi-Fi/BLE+IMU)

Traveling on a subway (IMU)

Walking in an urban area to the office (VPS+ GNSS+ Wi-Fi/BLE+IMU)

This example clearly shows that seamless positioning should integrate different sensors. The selection of the sensors can be done heuristically or by maximizing the observability of the sensors. If the sensors are selected heuristically, we must have the ability to know what context the system is operating under. This is one of the best angles for ML to cut in. In fact, the classification of scenarios or contexts is exactly what ML does best. A recently published journal paper demonstrates how to detect different contexts using smartphone sensors for context-adaptive navigation (Gao and Groves 2020). Sensors in smartphones are used in models trained by supervised ML to determine not only the environment but also the behavior (such as transportation modes, including being static, walking, and sitting in a car or on a subway).

According to their results, the state-of-the-art detection algorithm can achieve over 95% accuracy for pedestrians in indoor, intermediate, and outdoor scenarios. This finding encourages the use of ML to intelligently select the right navigation systems for an integrated navigation system in different areas. The same methodology can be easily extended to vehicular applications with proper modification of the selected features, classes, and machine learning algorithms.
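
A toy analogue of such a context classifier is sketched below; the sensor features, classes, and values are all hypothetical placeholders rather than the features used by Gao and Groves.

```python
import numpy as np
from sklearn.svm import SVC

# columns: accelerometer variance, step frequency (Hz), mean GNSS C/N0 (dB-Hz)
X = np.array([[0.02, 0.0, 20], [1.50, 1.8, 38], [0.40, 0.0, 44],
              [0.03, 0.0, 18], [1.30, 2.0, 36]])
y = ["indoor static", "outdoor walking", "in vehicle",
     "indoor static", "outdoor walking"]

context_clf = SVC().fit(X, y)
print(context_clf.predict([[1.4, 1.9, 37]]))  # expected: "outdoor walking"
```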

Short-term Sensor Weighting

Technically speaking, an optimal integrated solution can be obtained if the uncertainty of each sensor can be optimally described. Presumably, a sensor's uncertainty remains unchanged within a certain environment. As a result, most sensors' uncertainties are carefully calibrated before use in integration systems.

However, the problem is that the environment can change rapidly within a short period of time. For example, a car drives through an urban area with several viaducts, or under open sky beneath a canopy of foliage. These scenarios greatly affect the performance of GNSS, but the affected periods are too short to justify excluding GNSS from the subset of sensors used. The best solution against these unexpected and transient effects is to de-weight the affected sensors in the system.

Due to the complexity of these effects, adaptive tuning of the uncertainty based on ML is getting popular. Our team demonstrated this potential with an experiment on a loosely coupled GNSS/INS integration. The experiment took place in an urban canyon with a commercial GNSS receiver and a MEMS INS. Different ML algorithms were used to classify the GNSS positioning errors into four classes: healthy, slightly shifted, inaccurate, and dangerous. These are represented as 1 to 4 in the bottom of Figure 4. The top and bottom of the figure show the error of the commercial GNSS solution and the classes predicted by the different ML algorithms. It clearly shows that ML can do a very good job of predicting the class of the GNSS solution, enabling the integrated system to allocate proper weighting to GNSS. Table 2 shows the improvement made by the ML-aided integration system.
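
One simple way such a predicted class could feed the integration filter is by inflating the GNSS measurement-noise covariance, so the Kalman update trusts GNSS less when the predicted class is poor. The scale factors below are invented for the sketch.

```python
import numpy as np

# 1 = healthy ... 4 = dangerous; hypothetical inflation factors per class
CLASS_SCALE = {1: 1.0, 2: 10.0, 3: 100.0, 4: 1e6}

def gnss_measurement_covariance(base_sigma_m, predicted_class):
    """Inflated 3x3 position-measurement covariance R for the Kalman update."""
    return np.eye(3) * (base_sigma_m ** 2) * CLASS_SCALE[predicted_class]

# A "class 3" (inaccurate) epoch: GNSS is de-weighted rather than excluded
R = gnss_measurement_covariance(base_sigma_m=2.0, predicted_class=3)
print(np.diag(R))  # [400. 400. 400.] instead of the calibrated [4. 4. 4.]
```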

This is just an example to preliminarily show the potential of ML in estimating and predicting sensor uncertainty. The methodology can also be applied to other sensor integrations, such as Wi-Fi/BLE/IMU integration. The challenge is that the trained classifier may be too specific to a certain area due to over-fitting of the data. This remains an open research question in the field.

Q: Machine Learning or Deep Learning for Navigation Systems?

Based on research in object recognition in computer science, deep learning (DL) is currently the mainstream method because it generally outperforms ML when two conditions are fulfilled: data and computation. The trained model of DL is completely data-driven, while ML trains models to fit assumed (known) mathematical models. A rule of thumb for selecting ML or DL is the availability of the data in hand. If extensive and conclusive data are available, DL achieves excellent performance due to its superiority in data fitting. In other words, DL can automatically discover features that affect the classes. However, a model trained by ML is much more comprehensible than one trained by DL; the DL model becomes like a black box. In addition, the nodes and layers of convolution in DL are used to extract features, and the number of layers and nodes is still very hard to determine, so trial-and-error approaches are widely adopted. These are the major challenges in DL.

If a DL-trained neural network could be perfectly designed for the integrated navigation system, it should consider both the long-term and short-term challenges. Figure 5 shows this idea. Several hidden layers would be designed to predict the environments (or contexts) and the others to predict the sensor uncertainty. The idea is straightforward, whereas the challenges remain:

Are we confident that the data we used to train the classifier are representative enough for the general applications cases?

What are the classes?

What are the features?

How many layers and the number of nodes should be used?

Q: How does machine learning affect the field of navigation?

ML will accelerate the development of seamless positioning. With the presence of ML in the navigation field, a perfect INS is no longer the only solution. These AI technologies facilitate the selection of the appropriate sensors or raw measurements (with appropriate trust) against complex navigation challenges. The transient selection of sensors (well known as plug-and-play) will affect the integration algorithm. Integration R&D engineers in navigation have long worked on the Kalman filter and its variants; however, the Kalman filter's limited flexibility makes it hard to accommodate the plug-and-play of sensors. The graph optimization that is widely used in the robotics field could be a very strong candidate for integrating sensors for navigation purposes.

Other than GNSS and the integrated navigation systems mentioned above, the visual positioning system (VPS) recently developed by Google could replace visual corner-point detection with semantic information detected by ML. Looking at how we navigated before GNSS, we compared visual landmarks with our memory (a database) to infer where we were and where we were heading. ML can segment and classify images taken by a camera into different classes, including building, foliage, road, curb, etc., and compare the distribution of the semantic information with that in a database on a cloud server. If they match, the associated position and orientation tag in the database can be taken as the user location.

AI technologies are coming. They will influence navigation research and development. In my opinion, the best we can do is to mobilize AI to tackle the challenges to which we currently lack solutions. It is highly probable that technology advances and research focus will depend greatly on ML's development and achievement in the field of navigation.

References

(1) Groves PD, Challenges of Integrated Navigation, ION GNSS+ 2018, Miami, Florida, pp. 3237-3264.

(2) Gao H, Groves PD. (2020) Improving environment detection by behavior association for context-adaptive navigation. NAVIGATION, 67:4360. https://doi.org/10.1002/navi.349

(3) Sun R., Hsu L.T., Xue D., Zhang G., Washington Y.O., (2019) GPS Signal Reception Classification Using Adaptive Neuro-Fuzzy Inference System, Journal of Navigation, 72(3): 685-701.

(4) Hsu L.T. GNSS Multipath Detection Using a Machine Learning Approach, IEEE ITSC 2017, Yokohama, Japan.

(5) Yozevitch R., and Moshe BB. (2015) A robust shadow matching algorithm for GNSS positioning. NAVIGATION, 62.2: 95-109.

(6) Chen P.Y., Chen H., Tsai M.H., Kuo H.K., Tsai Y.M., Chiou T.Y., Jau P.H. Performance of Machine Learning Models in Determining the GNSS Position Usage for a Loosely Coupled GNSS/IMU System, ION GNSS+ 2020, virtually, September 21-25, 2020.

(7) Suzuki T., Nakano, Y., Amano, Y. NLOS Multipath Detection by Using Machine Learning in Urban Environments, ION GNSS+ 2017, Portland, Oregon, pp. 3958-3967.

(8) Xu B., Jia Q., Luo Y., Hsu L.T. (2019) Intelligent GPS L1 LOS/Multipath/NLOS Classifiers Based on Correlator-, RINEX-and NMEA-Level Measurements, Remote Sensing 11(16):1851.

(9) Chiu H.P., Zhou X., Carlone L., Dellaert F., Samarasekera S., and Kumar R., Constrained Optimal Selection for Multi-Sensor Robot Navigation Using Plug-and-Play Factor Graphs, IEEE ICRA 2014, Hong Kong, China.

(10) Zhang G., Hsu L.T. (2018) Intelligent GNSS/INS Integrated Navigation System for a Commercial UAV Flight Control System, Aerospace Science and Technology, 80:368-380.

(11) Kumar R., Samarasekera S., Chiu H.P., Trinh N., Dellaert F., Williams S., Kaess M., Leonard J., Plug-and-Play Navigation Algorithms Using Factor Graphs, Joint Navigation Conference (JNC), 2012.


PathAI Presents Machine Learning Models that Predict the Homologous Recombination Deficiency Status of Breast Cancer Biopsies at the 2020 SABCS – PR…

BOSTON (PRWEB) December 09, 2020

PathAI, a global provider of AI-powered technology applied to pathology research, today announced the results of a proof-of-concept investigation into ML model prediction of homologous recombination deficiency (HRD) directly from H&E-stained biopsy slides. DNA damage repair pathways, such as homologous recombination, have essential roles in healthy cells, and mutations in these pathways are closely associated with an increased risk for cancer, as well as cancer progression. HRD results from mutations in BRCA1/2, as well as other genes that encode the homologous recombination components responsible for error-free repair of double-strand breaks in DNA. HRD tumors are sensitive to poly-ADP ribose polymerase (PARP) inhibitors and platinum-based chemotherapy, making determination of a patient's tumor HRD status clinically important. Genomic sequencing is currently the gold standard for classifying a tumor as HRD or homologous recombination proficient, but this method has a high error rate, leaving a great unmet need for robust and reliable HRD scoring tools.

"Identifying the underlying molecular drivers of cancer has tremendous significance not only for our fundamental understanding of the disease biology, but because these image-based assays may also play an important role in making patient treatment decisions in the future, like choosing the most effective therapeutic," said PathAI co-founder and Chief Executive Officer Andy Beck MD, PhD. "Our ability to find these signatures in widely available H&E images suggests that our models could have a great impact, and we look forward to investigating this further and validating these results in future studies."

PathAI used two different approaches to predict the HRD status of a tumor from the H&E-stained tissue biopsy. Models were trained using breast cancer tumor biopsy images from TCGA and HRD scores of these same biopsies generated by Knijnenburg and colleagues (published in Cell Reports, 2018, 23:239-254). The Human Interpretable Features (HIF)-based model was trained using thousands of expert pathologist annotations of cell- and tissue-level features of the TCGA images to predict HRD status from HIF-based correlations, whereas the end-to-end model learned to predict HRD status directly from the biopsy image.

Both models predicted HRD with high accuracy, with the HIF-based model having an AUROC of 0.87, and the end-to-end model an AUROC of 0.80. The HIF-based model also revealed that HRD tumors have a greater degree of necrosis and more lymphocytes within the tumor itself than homologous recombination proficient tumors. These results show the enormous potential for digital pathology to identify clinically significant genomic phenotypes that could not be detected using traditional pathology methods. PathAI will continue to develop and validate these important models for future clinical application.
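
For readers unfamiliar with the metric, AUROC compares predicted probabilities against known labels; a toy computation with invented values is below.

```python
from sklearn.metrics import roc_auc_score

true_hrd_status = [1, 0, 1, 1, 0, 0, 1, 0]           # 1 = HRD, 0 = proficient
predicted_probability = [0.9, 0.2, 0.7, 0.8, 0.4, 0.1, 0.6, 0.3]

# 1.0 = perfect ranking, 0.5 = chance; the models above scored 0.87 and 0.80
print(roc_auc_score(true_hrd_status, predicted_probability))  # 1.0 here
```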

About PathAI

PathAI is a leading provider of AI-powered research tools and services for pathology. PathAI's platform promises substantial improvements to the accuracy of diagnosis and the efficacy of treatment of diseases like cancer, leveraging modern approaches in machine and deep learning. Based in Boston, PathAI works with leading life sciences companies and researchers to advance precision medicine. To learn more, visit https://www.pathai.com.



Apple’s SVP of Machine Learning & AI John Giannandrea has been assigned to Oversee Apple’s Secretive ‘Project Titan’ – Patently Apple

Patently Apple has been covering the latest Project Titan patents for years, including a granted patent report posted this morning covering another side of LiDAR that was never covered before. While some in the industry have doubted Apple will ever do anything with this project, Apple has now reportedly moved its self-driving car unit under the leadership of top artificial intelligence executive John Giannandrea, who will oversee the company's continued work on an autonomous system that could eventually be used in its own car.

Bloomberg's Mark Gurman is reporting today that Project Titan is run day-to-day by Doug Field. His team of hundreds of engineers has moved to Giannandrea's artificial intelligence and machine-learning group, according to people familiar with the change.

Previously, Field reported to Bob Mansfield, Apple's former senior vice president of hardware engineering. Mansfield has now fully retired from Apple, leading to Giannandrea taking over. Mansfield oversaw a shift from the development of a car to just the underlying autonomous system.

In 2017, Patently Apple posted a report titled "Apple's CEO Confirms Project Titan is the 'Mother of all AI Projects' Focused on Self-Driving Vehicles." For more read the full Bloomberg report.

As with all major Apple projects, be it a head-mounted display device, smartglasses or folding devices, Apple keeps its secrets and prototypes under wraps until it has holistically worked out its roadmap.

That's why following Apple's patents is the best way to keep on top of the technology that Apple's engineers are actually working on in some capacity within the various ongoing projects. Review our Project Titan patent archive to see what Apple has been working on.


Machine learning is the new key to healthcare – Gadget

As healthcare professionals are facing massive pressure not only to ensure the quality of care, but also to come up with new solutions, cures and treatments, they are becoming increasingly dependent on advanced technologies like artificial intelligence (AI) and machine learning (ML).

But it is hardly a smooth partnership. The issues of skills shortages at the entry-level and of messy data in leveraging patient records at the high end are merely book-ends for a range of challenges that span these fields.

Last week's annual Amazon Web Services Re:Invent conference, one of the largest cloud-focused events in the world, saw the launch or demonstration of a range of new cloud-based tools that are ideal for health research and treatment. ML, defined as computer algorithms that improve automatically through experience, was at the heart of these.

The tools raised two key questions in terms of global and local relevance, namely how messy data is addressed, and how relevant these tools are to South Africa.

We asked a man at the heart of AWS's health initiatives, Shez Partovi, AWS director of worldwide business development for healthcare, life sciences, and genomics. It all starts with ML, he says.

In South Africa, we have seen how providing access to advanced technologies such as ML is vital to stopping the spread of COVID-19 and helping individuals quickly find medical help when they fall ill. GovChat, South Africa's largest citizen engagement platform, launched a COVID-19 chatbot in less than two weeks using Amazon Lex, an AI service for building conversational interfaces into any application using voice and text.

The chatbot provides health advice and recommendations on whether to get a test for COVID-19, information on the nearest COVID-19 testing facility, the ability to receive test results, and the option for citizens to report COVID-19 symptoms for themselves, their family, or household members.
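
For a sense of what building on Lex looks like, here is a minimal sketch of sending text to a Lex (V1) bot with boto3. The bot name, alias, user ID, and region are hypothetical placeholders, not GovChat's actual configuration.

```python
import boto3

lex = boto3.client("lex-runtime", region_name="eu-west-1")  # placeholder region

response = lex.post_text(
    botName="Covid19Advisor",    # hypothetical bot name
    botAlias="prod",             # hypothetical alias
    userId="citizen-12345",      # hypothetical user/session ID
    inputText="Where is the nearest COVID-19 testing facility?",
)
print(response["message"])  # the bot's reply text
```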

ML in particular is being roped in globally to address the massive volumes of data being gathered from a variety of unrelated sources, he says.

ML has the potential to serve as an assistive tool for healthcare professionals, providing the support they need to process and analyse the increasing amount of data generated by doctors, hospitals, researchers, and organisations, including structured data like Electronic Health Record forms, as well as unstructured data, such as emails, text documents, and even voice notes.

ML is being used in a variety of tasks, from analysing medical images to advancing precision medicine. Tools that leverage natural language processing, pattern recognition, and risk identification are also fuelling new models for predictive, preventive, and population health, and have the potential to help providers identify gaps in care and improve the health of individuals and communities.



QA Increasingly Benefits from AI and Machine Learning – RTInsights

While the human element will still exist, incorporating AI/ML will improve the QA testing within an organization.

The needle in quality assurance (QA) testing is moving in the direction of increased use of artificial intelligence (AI) and machine learning (ML). However, the integration of AI/ML in the testing process is not across the board. The adoption of advanced technologies still tends to be skewed towards large companies.

Some companies have held back, waiting to see whether AI would live up to its initial hype as a disruptor in various industries. However, the growing consensus is that the use of AI benefits the organizations that have implemented it and improves efficiencies.

Small- and mid-sized companies could benefit from testing software that uses AI/ML to meet some of the challenges faced by QA teams. While AI and ML are not substitutes for human testing, they can supplement the testing methodology.


As development is completed and moves to the testing stage of the system development life cycle, QA teams must prove that end-users can use the application as intended and without issue. Part of end-to-end (E2E) testing includes identifying the following:

E2E testing plans should incorporate all of these to improve deployment success. Even while facing time constraints and ever-changing requirements, testing cycles are increasingly quick and short. Yet, they still demand high quality in order to meet end-user needs.

Let's look at some of the specific ways AI and ML can streamline the testing process while also making it more robust.

AI in software testing reduces the time spent on manual testing. Teams are then able to apply their efforts to more complex tasks that require human interpretation.

Developers and QA staff will need to apply less effort in designing, prioritizing, writing, and maintaining E2E tests. This will expedite delivery timelines and free up resources to work on developing new products rather than testing a new release.

With more rapid deployment, there is an increased need for regression testing, to the point where humans cannot realistically keep up. Companies can use AI for some of the more tedious regression testing tasks, where ML can be used to generate test scripts.

In the example of a UI change, AI/ML can be used to scan for color, shape, size, or overlap. Where these would otherwise be manual tests, AI can be used to validate changes that a QA tester may miss.

When introducing a change, how many tests are needed to pass QA and validate that there are no issues? ML can determine how many tests to run based on code changes and the outcomes of past changes and tests.

ML can also select the appropriate tests to run by identifying the particular subset of scenarios affected and the likelihood of failure. This creates more targeted testing.
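
A sketch of that idea, with hypothetical change features and test history: a model estimates each test's failure probability for a new change, and only the riskiest subset is run.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# columns: lines changed, files shared with the test's module,
# historical failure rate for this area (all hypothetical)
history = np.array([[120, 3, 0.20], [5, 0, 0.01], [60, 2, 0.10],
                    [300, 5, 0.30], [10, 1, 0.02], [80, 4, 0.25]])
failed = np.array([1, 0, 0, 1, 0, 1])   # did the test fail after that change?

model = LogisticRegression().fit(history, failed)

new_change = [[150, 4, 0.22]]
p_fail = model.predict_proba(new_change)[0, 1]
print(f"prioritize this test, P(failure) = {p_fail:.2f}")
```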

With changes that may impact a large number of fields, AI/ML can automate the validation of these fields. For example, a scenario might be "Every field that is a percentage should display two decimals." Rather than manually checking each field, this can be automated.
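
The two-decimal scenario can be automated with a simple rule-based check of every rendered percentage field; the field names and values here are made up.

```python
import re

TWO_DECIMALS = re.compile(r"^\d+\.\d{2}%$")  # e.g. "4.25%"

rendered_fields = {
    "interest_rate": "4.25%",
    "growth": "12.5%",       # should fail: only one decimal place
    "discount": "7.00%",
}

failures = {name: value for name, value in rendered_fields.items()
            if not TWO_DECIMALS.match(value)}
print(failures)  # -> {'growth': '12.5%'}
```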

ML can adapt to minor code changes so that the code can self-correct, or "self-heal," over time. This is something that could otherwise take hours for a human to fix and re-test.

While QA testers are good at finding and addressing complex problems and proving out test scenarios, they are still human. Errors can occur in testing, especially from the burnout of completing tedious, repetitive work. AI is not affected by the number of repeated tests and therefore yields more accurate and reliable results.

Software development teams are also ultimately composed of people, and therefore personalities. Friction can occur between developers and QA analysts, particularly under time constraints or over the outcomes found during testing. AI/ML can remove the human interactions that may cause holdups in the testing process by providing objective results.

Often when a failure occurs during testing, the QA tester or developer will need to determine the root cause. This can include parsing the code to determine the exact point of failure and resolving it from there.

In place of going through thousands of lines of code, AI can sort through the log files, scan the code, and detect errors within seconds. This saves hours of time and allows the developer to dive into the specific part of the code to fix the problem.
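
A bare-bones sketch of that log triage, using pattern matching rather than a trained model; the error signatures and log file name are assumptions.

```python
import re

ERROR_PATTERN = re.compile(r"ERROR|FATAL|Traceback|Exception")

def find_first_failure(log_path, context=3):
    """Return the first error line plus a few lines of surrounding context."""
    with open(log_path) as f:
        lines = f.readlines()
    for i, line in enumerate(lines):
        if ERROR_PATTERN.search(line):
            return "".join(lines[max(0, i - context): i + context + 1])
    return None

print(find_first_failure("build_123.log"))  # placeholder log file
```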

While the human element will still exist, introducing testing software that incorporates AI/ML will improve QA testing within an organization overall. Equally as important as knowing when to use AI and ML is knowing when not to use them. Specific scenario testing, or applying human logic in a scenario to verify the outcome, is not well suited to AI and ML.

But for understanding user behavior, gathering data analytics will build the appropriate test cases. This information identifies the failures that are most likely to occur, which makes for better testing models.

AI/ML can also identify patterns over time, build test environments, and stabilize test scripts. All of these allow the organization to spend more time developing new products and less time testing.
