Researchers Develop New Machine Learning Technique to Predict Progress of COVID-19 Patients | The Weather Channel – Articles from The Weather Channel…

An illustration of novel coronavirus SARS-CoV-2.

Researchers have published one of the first studies using a Machine Learning (ML) technique called "federated learning" to examine electronic health records to better predict how COVID-19 patients will progress.

The study, published in the Journal of Medical Internet Research - Medical Informatics, indicates that the emerging technique holds promise to create more robust machine learning models that extend beyond a single health system without compromising patient privacy.

These models, in turn, can help triage patients and improve the quality of their care. "Machine Learning models in health care often require diverse and large-scale data to be robust and translatable outside the patient population they were trained on," said co-author Benjamin Glicksberg, Assistant Professor at Mount Sinai.

Federated learning is a technique that trains an algorithm across multiple devices or servers holding local data samples but avoids clinical data aggregation, which is undesirable for reasons including patient privacy issues.
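
The mechanics can be illustrated with a minimal sketch of federated averaging, the most common federated-learning scheme. This is a generic illustration under simplifying assumptions (logistic regression, placeholder data standing in for hospital records), not the code used in the study.

```python
import numpy as np

# Minimal federated-averaging sketch (illustrative only, not the study's code).
# Each "hospital" trains a local logistic-regression model on its own data;
# only the model weights, never the patient records, are sent to the server.

def local_update(weights, X, y, lr=0.1, epochs=5):
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))           # sigmoid predictions
        grad = X.T @ (preds - y) / len(y)              # logistic-loss gradient
        w -= lr * grad
    return w

def federated_round(global_weights, hospital_datasets):
    # Train locally at every site, then average the weights on the server.
    local_weights = [local_update(global_weights, X, y) for X, y in hospital_datasets]
    sizes = np.array([len(y) for _, y in hospital_datasets], dtype=float)
    return np.average(local_weights, axis=0, weights=sizes)

# Placeholder data standing in for five hospitals' EHR-derived features and labels.
rng = np.random.default_rng(0)
hospitals = [(rng.normal(size=(200, 10)), rng.integers(0, 2, 200)) for _ in range(5)]

weights = np.zeros(10)
for _ in range(20):                                    # communication rounds
    weights = federated_round(weights, hospitals)
```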

For the study, the researchers implemented and assessed federated learning models using data from electronic health records at five separate hospitals within the Mount Sinai Health System to predict mortality in COVID-19 patients.

They compared the performance of a federated model against ones built using data from each hospital separately, referred to as local models.

After training their models on a federated network and testing them against local data at each hospital, the researchers found that the federated models demonstrated enhanced predictive power and outperformed the local models at most of the hospitals.


The above article has been published from a wire agency with minimal modifications to the headline and text.

Continue reading here:
Researchers Develop New Machine Learning Technique to Predict Progress of COVID-19 Patients | The Weather Channel - Articles from The Weather Channel...

How machines are changing the way companies talk – VentureBeat

Anyone who's ever been on an earnings call knows company executives already tend to look at the world through rose-colored glasses, but a new study by economics and machine learning researchers says that's getting worse, thanks to machine learning. The analysis found that companies are adapting their language in forecasts, SEC regulatory filings, and earnings calls due to the proliferation of AI used to analyze and derive signals from the words they use. In other words: Businesses are beginning to change the way they talk because they know machines are listening.

Forms of natural language processing are used to parse and process text in the financial documents companies are required to submit to the SEC. Machine learning tools are then able to do things like summarize text or determine whether language used is positive, neutral, or negative. Signals these tools provide are used to inform the decisions advisors, analysts, and investors make. Machine downloads are associated with faster trading after an SEC filing is posted.

This trend has implications for the financial industry and economy, as more companies shift their language in an attempt to influence machine learning reports. A paper detailing the analysis, originally published in October by researchers from Columbia University and Georgia State University's J. Mack Robinson College of Business, was highlighted in this month's National Bureau of Economic Research (NBER) digest. Lead author Sean Cao studies how deep learning can be applied to corporate accounting and disclosure data.

"More and more companies realize that the target audience of their mandatory and voluntary disclosures no longer consists of just human analysts and investors. A substantial amount of buying and selling of shares [is] triggered by recommendations made by robots and algorithms which process information with machine learning tools and natural language processing kits," the paper reads. Anecdotal evidence suggests that executives have become aware that their speech patterns and emotions, evaluated by human or software, impact their assessment by investors and analysts.

The researchers examined nearly 360,000 SEC filings between 2003 and 2016. Over that time period, regulatory filing downloads from the SEC's Electronic Data Gathering, Analysis, and Retrieval (EDGAR) tool increased from roughly 360,000 filing downloads to 165 million, climbing from 39% of all downloads in 2003 to 78% in 2016.

A 2011 study concluded that the majority of words identified as negative by a Harvard dictionary aren't actually considered negative in a financial context. That study also included lists of negative words used in 10-K filings. After the release of that list, researchers found that companies with high machine-download volumes began to change their behavior and use fewer negative words.
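
A toy version of this kind of word-list scoring is easy to sketch. The word list below is a tiny illustrative stand-in, not the actual financial sentiment dictionary the researchers reference, and the sample text is invented.

```python
import re
from collections import Counter

# Tiny illustrative stand-in for a financial negative-word list (real
# finance-specific dictionaries contain thousands of terms).
NEGATIVE_WORDS = {"loss", "impairment", "litigation", "decline", "adverse", "restatement"}

def negative_tone(filing_text: str) -> float:
    """Share of words in a filing that appear on the negative-word list."""
    tokens = re.findall(r"[a-z]+", filing_text.lower())
    counts = Counter(tokens)
    negatives = sum(counts[w] for w in NEGATIVE_WORDS)
    return negatives / max(len(tokens), 1)

sample = "The company recorded an impairment charge and expects further decline in margins."
print(f"negative tone: {negative_tone(sample):.2%}")
```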

Generally, the stock market responds more positively to disclosures with fewer negative words or strong modal words.

"As more and more investors use AI tools such as natural language processing and sentiment analyses, we hypothesize that companies adjust the way they talk in order to communicate effectively and predictably," the paper reads. If managers are aware that their disclosure documents could be parsed by machines, then they should also expect that their machine readers may also be using voice analyzers to extract signals from vocal patterns and emotions contained in managers' speeches.

A study released earlier this year by Yale University researchers used machine learning to analyze startup pitch videos and found that positive (i.e., passionate, warm) pitches increase funding probability. And another study from earlier this year (by Crane, Crotty, and Umar) showed hedge funds that use machines to automate downloads of corporate filings perform better than those that do not.

In other applications at the locus of AI and investor decisions, last year InReach Ventures launched a $60 million fund that uses AI as part of its process for evaluating startups.

See the article here:
How machines are changing the way companies talk - VentureBeat

Taking Micro Machine Learning to the MAX78000 – Electronic Design

What you'll learn

I tend to do only a few hands-on articles a year, so I look for cutting-edge platforms that developers will want to check out. Maxim Integrated's MAX78000 evaluation kit fits in this bucket. The MAX78000 is essentially an Arm Cortex-M4F microcontroller with a lot of hardware around it, including a convolutional-neural-network (CNN) accelerator designed by Maxim (Fig. 1). This machine-learning (ML) support allows the chip to handle chores like identifying voice keywords or even faces in camera images in real time without busting the power budget.

1. The MAX78000 includes a Cortex-M4F and RISC-V cores as well as a CNN accelerator.

The chip also includes a RISC-V core that caught my eye. However, the development tools are so new that the RISC-V support is still in the works, as the Cortex-M4F is the main processor. Even the CNN support is just out of the beta stage, but that's what this article will concentrate on.

The MAX78000 has the usual microcontroller peripheral complement, including a range of serial ports, timers, and parallel serial interfaces like I2S. It even has a parallel camera interface. Among the analog peripherals is an 8-channel, 10-bit sigma-delta ADC. There are four comparators as well.

The chip has a large 512-kB flash memory along with 128 kB of SRAM and a boot ROM that allows more complex boot procedures such as secure boot support. There's on-chip key storage as well as CRC and AES hardware support. We will get into the CNN support a little later. The GitHub-based documentation covers some of the features I outline here in step-by-step detail.

The development tools are free and based on Eclipse, which is the basis for other platforms like Texas Instruments' Code Composer Studio and Silicon Labs' Simplicity Studio. Maxim doesn't do a lot of customization, but there's enough to facilitate using hardware like the MAX78000 while making it easy to utilize third-party plug-ins and tools, which can be quite handy when dealing with cloud or IoT development environments. The default installation includes examples and tutorials that enable easy testing of the CNN hardware and other peripherals.

The MAX78000 development board features two LCD displays. The larger, 3.5-in TFT touch-enabled display is for the processor, while the second, smaller display provides power-management information. The chip doesn't have a display controller built in, so it uses a serial interface to work with the larger display. The power-tracking support is sophisticated, but I won't delve into that now.

There's a 16-MB QSPI flash chip that can be handy for storing image data. In addition, a USB bridge to the flash chip allows for faster and easier downloads.

The board also adds some useful devices like a digital microphone, a 3D accelerometer, and 3D gyro. Several buttons and LEDs round out the peripherals.

There are a couple of JTAG headers; the RISC-V core has its own. As noted, I didn't play with the RISC-V core this time around, as it's not required for using the CNN support, although it could be used. Right now, the Maxim tools generate C code for the Cortex-M4F to set up the CNN hardware. The CNN hardware is designed to handle a single model, but it's possible to swap in new models quickly.

As with most ML hardware, the underlying hardware tends to be hidden from most programmers, providing more of a black-box operation where you set up the box and feed it data with results coming out the other end. This works well if the models are available; it's a matter of training them with different information or using trained models. The challenge comes when developing and training new models, which is something I'll avoid discussing here.

I did try out two of the models provided by Maxim: a Keyword Spotting and a Face Identification (FaceID) application. The Keyword Spotting app is essentially a speech-recognition system that can be used to listen for a keyword to start off a cloud-based discussion, which is how most Alexa-based voice systems work, since the cloud handles everything after recognizing a keyword.

On the other hand, being able to recognize a number of different keywords makes it possible to build a voice-based command system, such as those used in many car navigation systems. As usual, the Cortex-M4F handles the input and does a bit of munging to provide suitable inputs to the CNN accelerator (Fig. 2). The detected class output specifies which keyword is recognized, if any. The application can then utilize this information.

2. The Cortex-M4F handles the initial audio input stream prior to handing off the information to the CNN accelerator.

The FaceID system highlights the camera support of the MAX78000 (Fig. 3). This could be used to recognize a face or identify a particular part moving by on an assembly line. The sample application can operate using canned inputs, as shown in the figure, or from the camera.

3. The FaceID application highlights the CNN's ability to process images in real time.

Using the defaults is as easy as compiling and programming the chip. Maxim provides all of the sample code and procedures. These can be modified somewhat, but retraining a model is a more involved exercise, though one that Maxim's documentation does cover. These examples provide an outline of what needs to be done as well as what needs to be changed to customize the solution.

Changing the model and application to something like a motor vibration-monitoring system will be a significant job requiring a new model, but one that the chip is likely able to handle. It will require much more machine learning and CNN support, so it's not something that should be taken lightly.

The toolset supports models from platforms like TensorFlow and PyTorch (Fig. 4). This is useful because training isn't handled by the chip, but rather done on platforms like a PC or cloud servers. Likewise, the models can be refined and tested on higher-end hardware to verify the models, which can then be pruned to fit on the MAX78000.
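
As a rough illustration of that workflow, here is a minimal PyTorch sketch of the kind of small CNN one might train off-chip for keyword classification before converting it for the accelerator. The layer sizes and input format are arbitrary assumptions on my part; the actual conversion to MAX78000 code is done by Maxim's separate tooling, which is not shown here.

```python
import torch
import torch.nn as nn

# Minimal sketch of a small keyword-classification CNN trained off-chip (PC/cloud).
# Layer sizes are illustrative assumptions; Maxim's tools (not shown) would later
# quantize and convert a trained model into C code for the MAX78000's CNN engine.
class TinyKeywordNet(nn.Module):
    def __init__(self, n_keywords: int = 20):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, stride=2, padding=4), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=9, stride=2, padding=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, n_keywords)

    def forward(self, audio: torch.Tensor) -> torch.Tensor:
        # audio: (batch, 1, samples) -> class scores: (batch, n_keywords)
        x = self.features(audio).squeeze(-1)
        return self.classifier(x)

model = TinyKeywordNet()
scores = model(torch.randn(4, 1, 16000))   # one second of 16-kHz audio per sample
print(scores.shape)                         # torch.Size([4, 20])
```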

4. PyTorch is just one of the frameworks handled by the MAX78000. Training isn't done on the micro. Maxim's tools convert the models to code that drives the CNN hardware.

At this point, the CNN accelerator documentation is a bit sparse, as is the RISC-V support. Maxim's CNN model compiler kicks out C code that drops in nicely to the Eclipse IDE. Debugging the regular application code is on par with other cross-development systems where remote debugging via JTAG is the norm.

Maxim also provides the MAX78000FTHR, the little brother of the evaluation kit (Fig. 5). This doesn't have the display or other peripheral hardware, but most I/O is exposed. The board alone is only $25. The chip is priced around $15 in small quantities. The GitHub-based documentation provides more details.

5. The evaluation kit has a little brother, the MAX78000FTHR.

The MAX78000 was fun to work with. It's a great platform for supporting ML applications on the edge. However, be aware that while it's a very low power solution, it's not the same thing as even a low-end Nvidia Jetson Nano. It will be interesting to check out the power-tracking support since power utilization and requirements will likely be key factors in many MAX78000 applications, especially battery-based solutions.

Original post:
Taking Micro Machine Learning to the MAX78000 - Electronic Design

Top 10 AI and machine learning stories of 2020 – Healthcare IT News

Toward the tail end of pre-pandemic 2019, Mayo Clinic Chief Information Officer Cris Ross stood on a stage in California and declared, "This artificial intelligence stuff is real."

Indeed, while some may argue that AI and machine learning might have been harnessed better during the early days of COVID-19, and while the risk of algorithmic bias is very real, there's little question that artificial intelligence is evolving and maturing by the day for an array of use cases across healthcare.

Here are the most-read stories about AI during this most unusual year.

UK to use AI for COVID-19 vaccine side effects. On a day when vaccines, developed in record time, first begin to be administered in the U.S., it's worth remembering AI's crucial role in helping the world get to this hopefully pivotal moment.

AI algorithm IDs abnormal chest X-rays from COVID-19 patients. Machine learning has been a hugely valuable diagnostic tool as well, as illustrated by this story about a tool from cognitive computing vendor behold.ai that promises "instant triage" based on lung scans, offering faster diagnosis of COVID-19 patients and helping with resource allocation.

How AI use cases are evolving in the time of COVID-19. In a HIMSS20 Digital presentation, leaders from Google Cloud, Nuance and Health Data Analytics Institute shared perspectives on how AI and automation were being deployed for pandemic response, from the hunt for therapeutics and vaccines to analytics to optimize revenue cycle strategies.

Microsoft launches major $40M AI for Health initiative. The company said the five-year AI for Health program (part of its $165 million AI for Good initiative) will help healthcare organizations around the world deploy leading-edge technologies in the service of three key areas: accelerating medical research, improving worldwide understanding to protect against global health crises such as COVID-19, and reducing health inequity.

How AI and machine learning are transforming clinical decision support. "Today's digital tools only scratch the surface," said Mayo Clinic Platform President Dr. John Halamka. "Incorporating newly developed algorithms that take advantage of machine learning, neural networks, and a variety of other types of artificial intelligence can help address many of the shortcomings of human intelligence."

Clinical AI vendor Jvion unveils COVID Community Vulnerability Map. In the very early days of the pandemic, clinical AI company Jvion launched this interactive map, which tracks the social determinants of health, helping identify populations down to the census-block level that are at risk for severe outcomes.

AI bias may worsen COVID-19 health disparities for people of color. An article in the Journal of the American Medical Informatics Association asserts that biased data models could further the disproportionate impact the COVID-19 pandemic is already having on people of color. "If not properly addressed, propagating these biases under the mantle of AI has the potential to exaggerate the health disparities faced by minority populations already bearing the highest disease burden," said researchers.

The origins of AI in healthcare, and where it can help the industry now. "The intersection of medicine and AI is really not a new concept," said Dr. Taha Kass-Hout, director of machine learning and chief medical officer at Amazon Web Services. (There were limited chatbots and other clinical applications as far back as the mid-60s.) But over the past few years, it has become ubiquitous across the healthcare ecosystem. "Today, if you're looking at PubMed, it cites over 12,000 publications with deep learning, over 50,000 machine learning," he said.

AI, telehealth could help address hospital workforce challenges. "Labor is the largest single cost for most hospitals, and the workforce is essential to the critical mission of providing life-saving care," noted a January American Hospital Association report on the administrative, financial, operational and clinical uses of artificial intelligence. "Although there are challenges, there also are opportunities to improve care, motivate and re-skill staff, and modernize processes and business models that reflect the shift toward providing the right care, at the right time, in the right setting."

AI is helping reinvent CDS, unlock COVID-19 insights at Mayo Clinic. In a HIMSS20 presentation, John Halamka shared some of the most promising recent clinical decision support advances at the Minnesota health system and described how they're informing treatment decisions for an array of different specialties and helping shape its understanding of COVID-19. "Imagine the power [of] an AI algorithm if you could make available every pathology slide that has ever been created in the history of the Mayo Clinic," he said. "That's something we're certainly working on."

Twitter: @MikeMiliardHITN. Email the writer: mike.miliard@himssmedia.com. Healthcare IT News is a HIMSS publication.

Read the original post:
Top 10 AI and machine learning stories of 2020 - Healthcare IT News

Machine Learning Market Size 2020 by Top Key Players, Global Trend, Types, Applications, Regional Demand, Forecast to 2027 – LionLowdown

New Jersey, United States – The report, titled "Machine Learning Market Size By Types, Applications, Segmentation, and Growth – Global Analysis and Forecast to 2019-2027," first introduces the fundamentals of Machine Learning: definitions, classifications, applications and a market overview; product specifications; production methods; cost structures; raw materials; and more. The report takes into account the impact of the novel COVID-19 pandemic on the Machine Learning market and provides an assessment of the market definition as well as an identification of the top key manufacturers, which are analyzed in depth against the competitive landscape in terms of price, sales, capacity, import, export, market size, consumption, gross margin and market share. It also offers a quantitative analysis of the Machine Learning industry from 2019 to 2027 by region, type, application, and consumption rating by region.

Impact of COVID-19 on Machine Learning Market: The Coronavirus Recession is an economic recession that will hit the global economy in 2020 due to the COVID-19 pandemic. The pandemic could affect three main aspects of the global economy: manufacturing, supply chain, business and financial markets. The report offers a full version of the Machine Learning Market, outlining the impact of COVID-19 and the changes expected on the future prospects of the industry, taking into account political, economic, social, and technological parameters.


In market segmentation by manufacturers, the report covers the following companies:

How to overcome obstacles for the septennial 2020-2027 using the Global Machine Learning market report?

Turning to the main external elements: Porter's Five Forces are the main components to be considered when moving into new business markets. Customers get the opportunity to use these approaches to plan business strategies from scratch for the coming financial years.

We have faith in our services and the data we share with our esteemed customers. We have carried out years of research and in-depth investigation of the Global Machine Learning market to provide deep insights into it. In this way, customers are equipped with the tools of data (in terms of facts and figures).

Graphs, diagrams and infographics are used to illustrate the trends that have shaped the market. Past patterns reveal market turbulence and its final results on the markets. In turn, the investigation of current trends reveals the paths that organizations must take to align themselves with the market.

Machine Learning Market: Regional analysis includes:

Asia-Pacific (Vietnam, China, Malaysia, Japan, Philippines, Korea, Thailand, India, Indonesia, and Australia)
Europe (Turkey, Germany, Russia, UK, Italy, France, etc.)
North America (the United States, Mexico, and Canada)
South America (Brazil, etc.)
The Middle East and Africa (GCC countries and Egypt)

The report includes the Competitors' Landscape:

Major trends and growth projections by region and country
Key winning strategies followed by the competitors
Who are the key competitors in this industry?
What shall be the potential of this industry over the forecast tenure?
What are the factors propelling the demand for the Machine Learning industry?
What are the opportunities that shall aid in the significant proliferation of market growth?
What are the regional and country-wise regulations that shall either hamper or boost the demand for the Machine Learning industry?
How has COVID-19 impacted the growth of the market?
Has the supply chain disruption caused changes in the entire value chain?

The report also covers the trade scenario, Porter's Five Forces analysis, PESTLE analysis, value chain analysis, company market share, and segmental analysis.

About us:

Market Research Blogs is a leading global research and consulting firm serving more than 5,000 customers. Market Research Blogs provides advanced analytical research solutions while offering information-enriched research studies. We offer insight into strategic and growth analyses, the data necessary to achieve corporate goals, and critical revenue decisions.

Our 250 analysts and SMEs offer a high level of expertise in data collection and governance, and use industrial techniques to collect and analyze data on more than 15,000 high-impact and niche markets. Our analysts are trained to combine modern data collection techniques, superior research methodology, expertise, and years of collective experience to produce informative and accurate research.


Read more:
Machine Learning Market Size 2020 by Top Key Players, Global Trend, Types, Applications, Regional Demand, Forecast to 2027 - LionLowdown

Hateful Memes Challenge Winners – Machine Learning Times – The Predictive Analytics Times

By: Douwe Kiela, Hamed Firooz and Tony Nelli. Originally published in Facebook AI, Dec 11, 2020.

AI has made progress in detecting hate speech, but important and difficult technical challenges remain. Back in May 2020, Facebook AI partnered with Getty Images and DrivenData to launch the Hateful Memes Challenge, a first-of-its-kind $100K competition and data set to accelerate research on the problem of detecting hate speech that combines images and text. As part of the challenge, Facebook AI created a unique data set of 10,000+ new multimodal examples, using licensed images from Getty Images so that researchers could easily use them in their work.

More than 3,300 participants from around the world entered the Hateful Memes Challenge, and we are now sharing details on the winning entries. The top-performing teams were:

Ron Zhu (link to code)

Niklas Muennighoff (link to code)

Team HateDetectron: Riza Velioglu and Jewgeni Rose (link to code)

Team Kingsterdam: Phillip Lippe, Nithin Holla, Shantanu Chandra, Santhosh Rajamanickam, Georgios Antoniou, Ekaterina Shutova and Helen Yannakoudakis (link to code)

Vlad Sandulescu (link to code)

You can see the full leaderboard here. As part of the NeurIPS 2020 competition track, the top five winners discussed their solutions and we facilitated a Q&A with participants from around the world. Each of these five implementations has been made open source and is available now.

To continue reading this article, click here.

View post:
Hateful Memes Challenge Winners Machine Learning Times - The Predictive Analytics Times

How This CEO is Using Synthetic Data to Reshape Machine Learning for Real-World Applications – Yahoo Finance

Artificial Intelligence (AI) and Machine Learning (ML) are certainly not new industries. As early as the 1950s, the term machine learning was introduced by IBM AI pioneer Arthur Samuel. It is in recent years, however, that AI and ML have seen significant growth. IDC, for one, estimates the market for AI to be valued at $156.5 billion in 2020, with 12.3 percent growth over 2019. Even amid global economic uncertainties, this market is set to grow to $300 billion by 2024, a compound annual growth rate of 17.1 percent.

There are challenges to be overcome, however, as AI becomes increasingly interwoven into real-world applications and industries. While AI has seen meaningful use in behavioral analysis and marketing, for instance, it is also seeing growth in many business processes.

"The role of AI Applications in enterprises is rapidly evolving. It is transforming how your customers buy, your suppliers deliver, and your competitors compete. AI applications continue to be at the forefront of digital transformation (DX) initiatives, driving both innovation and improvement to business operations," said Ritu Jyoti, program vice president, Artificial Intelligence Research at IDC.

Even with the increasing utilization of sensors and internet-of-things, there is only so much that machines can learn from real-world environments. The limitations come in the form of cost and replicable scenarios. Here's where synthetic data will play a big part.

Dor Herman

"We need to teach algorithms what it is exactly that we want them to look for, and that's where ML comes in. Without getting too technical, algorithms need a training process, where they go through incredible amounts of annotated data, data that has been marked with different identifiers. And this is, finally, where synthetic data comes in," says Dor Herman, Co-Founder and Chief Executive Officer of OneView, a Tel Aviv-based startup that accelerates ML training with the use of synthetic data.


Herman says that real-world data can oftentimes be either inaccessible or too expensive to use for training AI. Thus, synthetic data can be generated with built-in annotations in order to accelerate the training process and make it more efficient. He cites four distinct advantages of using synthetic data over real-world data in ML: cost, scale, customization, and the ability to train AI to make decisions on scenarios that are not likely to occur in real-world scenarios.

"You can create synthetic data for everything, for any use case, which brings us to the most important advantage of synthetic data: its ability to provide training data for even the rarest occurrences that by their nature don't have real coverage."

Herman gives the example of oil spills, weapons launches, infrastructure damage, and other such catastrophic or rare events. "Synthetic data can provide the needed data, data that could not have been obtained in the real world," he says.

Herman cites a case study wherein a client needed AI to detect oil spills. "Remember, algorithms need a massive amount of data in order to learn what an oil spill looks like, and the company didn't have numerous instances of oil spills, nor did it have aerial images of it."

Since the oil company utilized aerial images for ongoing inspection of their pipelines, OneView applied synthetic data instead. "We created, from scratch, aerial-like images of oil spills according to their needs, meaning, in various weather conditions, from different angles and heights, different formations of spills, where everything is customized to the type of airplanes and cameras used."

This would have been an otherwise costly endeavor. "Without synthetic data, they would never be able to put algorithms on the detection mission and would need to continue using folks to go over hours and hours of detection flights every day."

With synthetic data, users can define the parameters for training AI, in order for better decision-making once real-world scenarios occur. The OneView platform can generate data customized to their needs. An example involves training computer vision to detect certain inputs based on sensor or visual data.

"You input your desired sensor, define the environment and conditions like weather, time of day, shooting angles and so on, add any objects-of-interest, and our platform generates your data: fully annotated, ready for machine learning model training datasets," says Herman.

Annotation also has advantages over real-world data, which often requires manual annotation, a process that takes extensive time and cost. "The swift and automated process that produces hundreds of thousands of images replaces a manual, prolonged, cumbersome and error-prone process that hinders computer vision ML algorithms from racing forward," he adds.

OneView's synthetic data generation involves a six-layer process wherein 3D models are created using gaming engines and then flattened to create 2D images.

"We start with the layout of the scene, so to speak, where the basic elements of the environment are laid out. The next step is the placement of objects-of-interest that are the goal of detection, the objects that the algorithms will be trained to discover. We also put in distractors, objects that are similar so the algorithms can learn how to differentiate the goal object from similar-looking objects. Then the appearance-building stage follows, when colors, textures, random erosions, noises, and other detailed visual elements are added to mimic how real images look, with all their imperfections," Herman shares.

The fourth step involves the application of conditions such as weather and time of day. For the fifth step, sensor parameters (the camera lens type) are implemented, "meaning, we adapt the entire image to look like it was taken by a specific remote sensing system, resolution-wise, and other unique technical attributes each system has." Lastly, annotations are added.

Annotations are the marks that are used to define to the algorithm what it is looking at. For example, the algorithm can be trained that this is a car, this is a truck, this is an airplane, and so on. The resulting synthetic datasets are ready for machine learning model training.
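
The six-stage pipeline Herman describes can be summarized in a short sketch. Everything below (class names, parameter names, example values) is hypothetical and purely illustrative of the stages as described; it does not reflect OneView's actual platform or APIs.

```python
from dataclasses import dataclass, field

# Illustrative sketch of the six-stage synthetic-image pipeline described above.
# All names and values are hypothetical; OneView's real platform is not public.

@dataclass
class SyntheticScene:
    layout: str                                      # 1. basic environment layout
    objects: list = field(default_factory=list)      # 2. objects-of-interest + distractors
    appearance: dict = field(default_factory=dict)   # 3. textures, noise, erosions
    conditions: dict = field(default_factory=dict)   # 4. weather, time of day
    sensor: dict = field(default_factory=dict)       # 5. camera / remote-sensing model
    annotations: list = field(default_factory=list)  # 6. labels for training

def build_sample() -> SyntheticScene:
    scene = SyntheticScene(layout="coastal_pipeline_corridor")
    scene.objects += ["oil_spill", "ship_wake_distractor"]          # goal object + distractor
    scene.appearance = {"texture": "sea_state_3", "noise": "sensor_grain"}
    scene.conditions = {"weather": "overcast", "time": "dawn"}
    scene.sensor = {"platform": "aircraft", "gsd_m": 0.5}           # ground sample distance
    scene.annotations = [{"class": "oil_spill", "bbox": [120, 80, 340, 260]}]
    return scene

dataset = [build_sample() for _ in range(3)]   # fully annotated, ready-to-train samples
```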

For Herman, the biggest contribution of synthetic data is actually paradoxical. By using synthetic data, AI and AI users get a better understanding of the real world and how it works, through machine learning. Image analytics comes with bottlenecks in processing, and computer vision algorithms cannot scale unless this bottleneck is overcome.

"Remote sensing data (imagery captured by satellites, airplanes and drones) provides a unique channel to uncover valuable insights on a very large scale for a wide spectrum of industries. In order to do that, you need computer vision AI as a way to study these vast amounts of data collected and return intelligence," Herman explains.

Next, this intelligence is transformed into insights that help us better understand this planet we live on, and of course drive decision making, whether by governments or businesses. The massive growth in computing power enabled the flourishing of AI in recent years, but the collection and preparation of data for computer vision machine learning is the fundamental factor that holds back AI.

He circles back to how OneView intends to reshape machine learning: removing this bottleneck with synthetic data so that the full potential of remote sensing imagery analytics can be realized, and thus a better understanding of Earth emerges.

The main driver behind Artificial Intelligence and Machine Learning is, of course, business and economic value. Countries, enterprises, businesses, and other stakeholders benefit from the advantages that AI offers, in terms of decision-making, process improvement, and innovation.

"The big message OneView brings is that we enable a better understanding of our planet through the empowerment of computer vision," concludes Herman. Synthetic data is not fake data. Rather, it is purpose-built inputs that enable faster, more efficient, more targeted, and cost-effective machine learning that will be responsive to the needs of real-world decision-making processes.

Continue reading here:
How This CEO is Using Synthetic Data to Reshape Machine Learning for Real-World Applications - Yahoo Finance

Machine-learning, robotics and biology to deliver drug discovery of tomorrow – pharmaphorum

Biology 2.0: Combining machine-learning, robotics and biology to deliver drug discovery of tomorrow

Intelligent OMICS, Arctoris and Medicines Discovery Catapult test in silico pipeline for identifying new molecules for cancer treatment.

Medicines discovery innovators, Intelligent OMICS, supported by Arctoris and Medicines Discovery Catapult, are applying artificial intelligence to find new disease drivers and candidate drugs for lung cancer. This collaboration, backed by Innovate UK, will de-risk future R&D projects and also demonstrate new cost and time-saving approaches to drug discovery.

By analysing a broad set of existing biological information, previously hidden components of disease biology can be identified, which in turn leads to the identification of new drugs for development. This provides the catalyst for an AI-driven acceleration in drug discovery, and the team has just won a significant Innovate UK grant in order to prove that it works.

Intelligent OMICS, the company leading the project, uses in silico (computer-based) tools to find alternative druggable targets. The company has already completed a successful analysis of cellular signalling pathways elsewhere in lung cancer and is now selectively targeting the KRAS signalling pathway.

As Intelligent OMICS' technology identifies novel biological mechanisms, Medicines Discovery Catapult will explore the appropriate chemical tools and leads that can be used against these new targets, and Arctoris will use their automated drug discovery platform in Oxford to conduct the biological assays which will validate them experimentally.

Working together, the group will provide druggable chemistry against the entire in silico pipeline, offering new benchmarks of cost and time effectiveness over conventional methods of discovery.

"Much has been written about the wonders of artificial intelligence and its potential in healthcare," says Dr Simon Haworth, CEO of Intelligent OMICS. "Our newsflows are full of details of AI applications in process automation, image analysis and computational chemistry. The DeepMind protein folding breakthrough has also hit the headlines recently as a further AI application. But what does Intelligent OMICS do that is different?"

"By analysing transcriptomic and similar molecular data, our neural network algorithms re-model known pathways and identify new, important targets. This enables us to develop and own a broad stream of new drugs. Lung cancer is just the start; we have parallel programs running in many other areas of cancer, in infectious diseases, in auto-immune disease, in Alzheimer's and elsewhere."

"We have to thank Innovate UK for backing this important work. The independent validation of our methodology by the highly respected cheminformatics team at MDC, coupled with the extraordinarily rapid wet-lab validation provided by Arctoris, will finally prove that, in drug discovery, the era of AI has arrived."

Dr Martin-Immanuel Bittner, Chief Executive Officer of Arctoris commented:

"We are thrilled to combine our strengths in robotics-powered drug discovery assay development and execution with the expertise in machine learning that Intelligent OMICS and Medicines Discovery Catapult possess. This unique setup demonstrates the next stage in drug discovery evolution, which is based on high quality datasets and machine intelligence. Together, we will be able to rapidly identify and validate novel targets, leading to promising new drug discovery programmes that will ultimately benefit patients worldwide."

Prof. John P. Overington, Chief Informatics Officer at Medicines Discovery Catapult:

"Computational-based approaches allow us to explore a top-down approach to identifying novel biological mechanisms of disease, which critically can be validated by selecting the most appropriate chemical modulators and assessing their effects in cellular assay technologies."

"Working with Intelligent OMICS and with support from Arctoris, we are delighted to play our part in laying the groundwork for computer-augmented, automated drug discovery. Should these methods indeed prove fruitful, it will be transformative for both our industry and patients alike."

If this validation is successful, the partners will have established a unique pipeline of promising new targets and compounds for a specific pathway in lung cancer. But more than that, they will also have validated an entirely new drug discovery approach which can then be further scaled to other pathways and diseases.

Follow this link:
Machine-learning, robotics and biology to deliver drug discovery of tomorrow - - pharmaphorum

U.S. Special Operations Command Employs AI and Machine Learning to Improve Operations – BroadbandBreakfast.com

December 11, 2020 – In today's digital environment, winning wars requires more than boots on the ground. It also requires computer algorithms and artificial intelligence.

The United States Special Operations Command is currently playing a critical role advancing the employment of AI and machine learning in the fight against the country's current and future adversaries, through Project Maven.

To discuss the initiatives taking place as part of the project, General Richard Clarke, who currently serves as the Commander of USSOCOM, and Richard Shultz, who has served as a security consultant to various U.S. government agencies since the mid-1980s, joined the Hudson Institute for a virtual discussion on Monday.

Among other objectives, Project Maven aims to develop and integrate computer-vision algorithms needed to help military and civilian analysts encumbered by the sheer volume of full-motion video data that the Department of Defense collects every day in support of counterinsurgency and counterterrorism operations, according to Clarke.

When troops carry out militarized site exploration, or military raids, they bring back copious amounts of computers, papers, and hard drives, filled with potential evidence. In order to manage enormous quantities of information in real time to achieve strategic objectives, the Algorithmic Warfare Cross-Function task force, launched in April 2017, began utilizing AI to help.

"We had to find a way to put all of this data into a common database," said Clarke. Over the last few years, humans were tasked with sorting through this content, watching every video and reading every detainee report. "A human cannot sort and sift through this data quickly and deeply enough," he said.

AI and machine learning have demonstrated that algorithmic warfare can aid military operations.

"Project Maven initiatives helped increase the frequency of raid operations from 20 raids a month to 300 raids a month," said Shultz. "AI technology increases both the number of decisions that can be made, and the scale. Faster, more effective decisions on your part are going to give enemies more issues."

Project Maven initiatives have increased the accuracy of bomb targeting. "Instead of hundreds of people working on these initiatives, today it is tens of people," said Clarke.

AI has also been used to rival adversary propaganda. "I now spend over 70 percent of my time in the information environment. If we don't influence a population first, ISIS will get information out more quickly," said Clarke.

AI and machine learning tools enable USSOCOM to understand what an enemy is sending and receiving, what the false narratives are, what the bots are, and more; detecting these allows decision makers to make faster and more accurate calls.

Military use of machine learning for precision raids and bomb strikes naturally raises concerns. In 2018, more than 3,000 Google employees signed a petition in protest against the company's involvement with Project Maven.

In an open letter addressed to CEO Sundar Pichai, Google employees expressed concern that the U.S. military could weaponize AI and apply the technology toward refining drone strikes and other kinds of lethal attacks. "We believe that Google should not be in the business of war," the letter read.

Go here to read the rest:
U.S. Special Operations Command Employs AI and Machine Learning to Improve Operations - BroadbandBreakfast.com

Machine learning in human resources: how it works & its real-world applications – iTMunch

According to research conducted by Glassdoor, on average, the entire interview process conducted by companies in the United States usually takes about 22.9 days, and the same process in Germany, France and the UK takes 4-9 days longer [1]. Another study, by the Society for Human Resource Management, which examined data from more than 275,000 members in 160 countries, found that the average time taken to fill a position is 42 days [2]. Clearly, hiring is a time-consuming and tedious process. Groundbreaking technologies like cloud computing, big data, augmented reality, virtual reality, blockchain technology and the Internet of Things can play a key role in making this process move faster. Machine learning in human resources is one such technology that has made the recruitment process not just faster but more effective.

Machine learning (ML) is treated as a subset of artificial intelligence (AI). AI is a branch of computer science that deals with building smart machines capable of performing tasks that typically require human intelligence. Machine learning, by definition, is the study of algorithms that improve automatically over time with more data and experience. It is the science of getting machines (computers) to learn how to think and act like humans. To improve a machine learning algorithm, data is fed into it over time in the form of observations and real-world interactions. ML algorithms build models based on sample or training data to make predictions and decisions without being explicitly programmed to do so.
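
As a concrete toy illustration of "improving with more data," the sketch below fits the same simple classifier on increasing amounts of synthetic data and watches its held-out accuracy rise. It is a generic scikit-learn example on invented data, not tied to any HR product discussed later.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy illustration: the same algorithm, given more data/experience, generally
# produces a better model (higher held-out accuracy).
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

for n in (50, 500, 3500):
    model = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
    print(n, "training rows -> accuracy:", round(model.score(X_test, y_test), 3))
```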

Machine learning in itself is not a new technology, but its integration with the HR function of organizations has been gradual and has only recently started to have an impact. In this blog, we talk about how machine learning has contributed to making HR processes easier, how it works and what its real-world applications are. Let us begin by learning about this concept in brief.

The HR department's responsibilities with regard to recruitment used to be gathering and screening resumes, reaching out to candidates that fit the job description, lining up interviews and sending offer letters. They also include managing a new employee's onboarding process and taking care of the exit process of an employee who decides to leave. Today, the human resource department is about all of this and much more. The department is now also expected to be able to predict employee attrition and candidate success, and this is possible through AI and machine learning in HR.

The objective behind integrating machine learning in human resource processes is the identification and automation of repetitive, time-consuming tasks to free up the HR staff. By automating these processes, they can devote more time and resources to other imperative strategic projects and actual human interactions with prospective employees. ML is capable of efficiently handling the following HR roles, tasks and functions:

SEE ALSO: The Role of AI and Machine Learning in Affiliate Marketing

An HR professional keeps track of who saw the job posting and the job portal on which the applicant saw the posting. They collect the CVs and resumes of all the applicants and come up with a way to categorize the data in those documents. Additionally, they schedule, standardize and streamline the entire interview process. Moreover, they keep track of the social media activities of applicants along with other relevant data. All of this data collected by the HR professional is fed into machine learning HR software from day one. Soon enough, the software's HR analytics begins analyzing the data to discover and display insights and patterns.

The opportunities of learning through insights provided by machine learning HR are endless. The software helps HR professionals discover things like which interviewer is better at identifying the right candidate and which job portal or job posting attracts more or quality applicants.

With HR analytics and machine learning, fine-tuning and personalization of training is possible which makes the training experience more relevant to the freshly hired employee. It helps in identifying knowledge gaps or loopholes in training early on. It can also become a useful resource for company-related FAQs and information like company policies, code of conduct, benefits and conflict resolution.

The best way to better understand how machine learning has made HR processes more efficient is by getting acquainted with the real world applications of this technology. Let us have a look at some applications below.

SEE ALSO: The Importance of Human Resources Analytics

Scheduling is generally a time-demanding task. It includes coordinating with candidates and scheduling interviews, enhancing the onboarding experience, calling candidates for follow-ups, performance reviews, training, testing and answering common HR queries. Automating these tedious processes is one of the first applications of machine learning in human resources. ML takes away the burden of these cumbersome tasks from the HR staff by streamlining and automating them, which frees up time to focus on bigger issues at hand. A few of the best recruitment scheduling software tools are Beamery, Yello and Avature.

Once an HR professional is informed about the kind of talent that needs to be hired in a company, one challenge is letting this information out and attracting the right set of candidates that might be fit for the role. A huge number of companies trust ML for this task. Renowned job search platforms like LinkedIn and Glassdoor use machine learning and intelligent algorithms to help HR professionals filter and find the best suitable candidates for the job.

Machine learning in human resources is also used to track new and potential applicants as they come into the system. A study was conducted by Capterra to look at how the use of recruitment software or applicant tracking software helped recruiters. It found 75% of the recruiters they contacted used some form of recruitment or applicant tracking software with 94% agreeing that it improved their hiring process. It further found that just 5% of recruiters thought that using applicant tracking software had a negative impact on their company [3].

Using such software also gives the HR professional access to predictive analytics which helps them analyze if the person would be best suitable for the job and a good fit for the company. Some of the best applicant tracking software that are available in the market are Pinpoint, Greenhouse and ClearCompany.

If hiring an employee is difficult, retaining an employee is even more challenging. There are factors in a company that make an employee stay or move to their next job. A study conducted by Gallup asked employees from different organizations if they'd leave or stay if certain perks were provided to them. The study found that 37% would quit their present job and take up a new job that'll allow them to work remotely part-time. 54% would switch for monetary bonuses, 51% for flexible working hours and 51% for employers offering retirement plans with pensions [4]. Though employee retention depends on various factors, it is imperative for an HR professional to understand, manage and predict employee attrition.

Machine learning HR tools provide valuable data and insights into the above mentioned factors and help HR professionals make decisions regarding employing someone (or not) more efficiently. By understanding this data about employee turnover, they are in a better position to take corrective measures well in advance to eliminate or minimize the issues.
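
A minimal sketch of this kind of attrition model is shown below. The feature names and records are invented placeholders for the kind of HR data such tools learn from, not the output of any specific product mentioned here.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical attrition-prediction sketch; column names and records are invented
# placeholders for the kind of HR data such tools learn from.
data = pd.DataFrame({
    "tenure_months":     [3, 48, 12, 60, 7, 30, 18, 90, 5, 24],
    "salary_percentile": [20, 70, 40, 85, 25, 55, 35, 90, 15, 50],
    "remote_days_week":  [0, 2, 0, 3, 1, 2, 0, 3, 0, 1],
    "engagement_score":  [2, 8, 4, 9, 3, 6, 4, 9, 2, 5],
    "left_company":      [1, 0, 1, 0, 1, 0, 1, 0, 1, 0],   # label: 1 = attrited
})

X = data.drop(columns="left_company")
y = data["left_company"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

model = RandomForestClassifier(n_estimators=100, random_state=42).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
print("predicted attrition risk:", model.predict_proba(X_test)[:, 1])
```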

An engaged employee is one who is involved in, committed to and enthusiastic about their work and workplace. The State of the Global Workplace report by Gallup found that 85% of employees are disengaged at work. Translation: the majority of the workforce views their workplace negatively or only does the bare minimum to get through the day, with little to no attachment to their work or workplace. The study further addresses why employee engagement is necessary. It found that offices with more engaged employees see 10% higher customer metrics, 17% higher productivity, 20% more sales and 21% more profitability. Moreover, it found that highly engaged workplaces saw 41% less absenteeism [5].

Machine learning HR software helps the human resource department make employees more engaged. The insights provided by the HR analytics of machine learning software help the HR team significantly in increasing employee productivity and reducing employee turnover rates. Software from Workometry and Glint aids immeasurably in measuring, analyzing and reporting on employee engagement and the general feeling employees have toward their work.

The applications of machine learning in human resources we read about above are already in use by HR professionals across the globe. Though the human element of human resources won't completely disappear, machine learning can guide and assist HR professionals substantially in ensuring the various functions of this department are well aligned and the strategic decisions made on a day-to-day basis are more accurate.

These are definitely exciting times for the HR industry and it is crucial that those working in this department are aware of the existing cutting-edge solutions available and the new trends that continue to develop.

The automation of HR functions like hiring and recruitment, training, development and retention has already made a profound positive effect on companies. Companies that refuse to or are slow to adapt and adopt machine learning and other new technologies will find themselves at a competitive disadvantage, while those that embrace them will flourish.

SEE ALSO: Future of Human Resource Management: HR Tech Trends of 2019

For more updates and the latest tech news, keep reading iTMunch.

Sources

[1] Glassdoor (2015) Why is Hiring Taking Longer, New Insights from Glassdoor Data [Online] Available from: https://www.glassdoor.com/research/app/uploads/sites/2/2015/06/GD_Report_3-2.pdf [Accessed December 2020]

[2] Society for Human Resource Management (2016) 2016 Human Capital Benchmarking Report [Online] Available from: https://www.ebiinc.com/wp-content/uploads/attachments/2016-Human-Capital-Report.pdf [Accessed December 2020]

[3] Capterra (2015) Recruiting Software Impact Report [Online] Available from: https://www.capterra.com/recruiting-software/impact-of-recruiting-software-on-businesses [Accessed December 2020]

[4] Gallup (2017) State of the American Workplace Report [Online] Available from: https://www.gallup.com/workplace/238085/state-american-workplace-report-2017.aspx [Accessed December 2020]

[5] Gallup (2017) State of the Global Workplace [Online] Available from: https://www.gallup.com/workplace/238079/state-global-workplace-2017.aspx#formheader [Accessed December 2020]


Read this article:
Machine learning in human resources: how it works & its real-world applications - iTMunch

Machine Learning and AI – What Does The Future Hold? – Analytics Insight

In data, companies trust. By 2021, one in four forward-thinking enterprises will push AI to new frontiers, such as holographic meetings for remote work and on-demand personalised manufacturing, as per new predictions by Forrester Research. Even today, all of us are subconsciously using Machine Learning in our daily lives. Wish to travel? Maps: AI-powered. Wish to stay home and yet be social? Facebook, Snapchat: ML/AI-powered. A nascent domain that's roughly 60 years old has changed the way humans and machines perform, that's for sure.

AI will create 2.3 million jobs in 2020. By 2020, artificial intelligence will create more jobs than it eliminates, says Gartner. Today's tech-ready industries already use AI for automated jobs that are highly repeatable, where large quantities of observations and decisions can be analysed for patterns.

To stay relevant and secure an irreplaceable position in your industry, it is important to upskill and be in the know of the latest trends and technologies. The first step in doing so would be to pursue online programs that allow you to work while you learn. It is extremely crucial to keep in mind that only listening to professors half-mindedly while bingeing in a parallel tab will not cut it. The program that you choose to pursue needs to be as rigorous and engaging as any offline university that you go to. upGrad, India's largest online higher education company, has collaborated with top national and global universities like IIIT Bangalore, IIT Madras, and Liverpool John Moores University to deliver online Machine Learning programs to working professionals. These programs are 100% online and cover industry-relevant case studies and projects, allowing learners to get practical knowledge along with theoretical comprehension, thanks to best-in-class content and live lectures from industry leaders. Based on your interest, you can choose a format of your choice, be it a PG Diploma, an Advanced Certification, or a Master's degree. With one-on-one mentorship from industry leaders and personalised assistance from dedicated student mentors, upGrad ensures that every learner hits the ground running as soon as they graduate.

Though the global pandemic is affecting millions of jobs worldwide, according to Indeed, a leading job portal, the demand for AI jobs in India has been on the upswing for five years and has particularly increased in the past six months. Python, a programming language, and Natural Language Processing (NLP), the study of computer and human language interaction that is essential to making artificial intelligence effective, are the most in-demand skills within AI jobs. With a rise in demand, the competition rises as well. Stay ahead in your career; online programs from top universities are a few clicks away (thanks to Machine Learning and AI!).

Read more here:
Machine Learning and AI - What Does The Future Hold? - Analytics Insight

What are the roles of artificial intelligence and machine learning in GNSS positioning? – Inside GNSS

For decades, artificial intelligence and machine learning have advanced at a rapid pace. Today, there are many ways artificial intelligence and machine learning are used behind the scenes to impact our everyday lives, such as social media, shopping recommendations, email spam detection, speech recognition, self-driving cars, UAVs, and so on.

Artificial intelligence simulates human intelligence: machines are programmed to think like humans and mimic our actions to achieve a specific goal. In our own field, machine learning has also changed the way navigation problems are solved and will take on a significant role in advancing PNT technologies in the future.

LI-TA HSU, HONG KONG POLYTECHNIC UNIVERSITY

Q: Can machine learning replace conventional GNSS positioning techniques?

Actually, it makes no sense to use ML when the exact physics/mathematical models of GNSS positioning are known, and when using machine learning (ML) techniques over any appreciable area, collecting extensive data and training a network to estimate receiver locations, would be an impractically large undertaking. We, human beings, designed the satellite navigation systems based on the laws of physics we have discovered. For example, we use Kepler's laws to model the position of satellites in an orbit. We use the spread-spectrum technique to model the satellite signal, allowing us to acquire very weak signals transmitted from medium-Earth orbit. We understand the Doppler effect and design tracking loops to track the signal and decode the navigation message. We finally make use of trilateration to model the positioning and use least squares to estimate the location of the receiver. Through the efforts of GNSS scientists and engineers over the past several decades, GNSS can now achieve centimeter-level positioning. The problem is: if everything is so perfect, why don't we have perfect GNSS positioning?

The answer for me as an ML specialist is that the assumptions made are not always valid in all contexts and applications! In trilateration, we assume the satellite signal is always transmitted along a direct line-of-sight (LOS) path. However, different layers in the atmosphere can diffract the signal. Luckily, remote-sensing scientists studied the troposphere and ionosphere and came up with sophisticated models to mitigate the ranging error caused by transmission delay. But the multipath effects and non-line-of-sight (NLOS) receptions caused by buildings and obstacles on the ground are much harder to deal with due to their high nonlinearity and complexity.
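
To make the conventional pipeline described above concrete, here is a minimal sketch of a single-epoch position fix by iterative least squares on pseudoranges. The satellite positions and pseudorange values are illustrative placeholders, and the model ignores atmospheric, multipath and NLOS errors; it is a sketch of the textbook approach, not production GNSS code.

```python
import numpy as np

# Minimal iterative least-squares position fix from pseudoranges.
# Satellite ECEF positions (m) and pseudoranges (m) are illustrative only.
sats = np.array([
    [15600e3,  7540e3, 20140e3],
    [18760e3,  2750e3, 18610e3],
    [17610e3, 14630e3, 13480e3],
    [19170e3,   610e3, 18390e3],
])
pseudoranges = np.array([21110e3, 22010e3, 21870e3, 22120e3])  # placeholder values

x = np.zeros(4)  # state: receiver position (x, y, z) and clock bias (all in metres)
for _ in range(10):  # Gauss-Newton iterations
    ranges = np.linalg.norm(sats - x[:3], axis=1)
    predicted = ranges + x[3]
    residuals = pseudoranges - predicted
    # Jacobian: negated unit line-of-sight vectors plus a clock-bias column
    H = np.hstack([-(sats - x[:3]) / ranges[:, None], np.ones((len(sats), 1))])
    dx, *_ = np.linalg.lstsq(H, residuals, rcond=None)
    x += dx
    if np.linalg.norm(dx) < 1e-4:
        break

print("estimated position (m):", x[:3], "clock bias (m):", x[3])
```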

Q: What are the challenges of GNSS and how can machine learning help with it?

GNSS performs very differently under different contexts. Context means what and where. For example, a pedestrian walks in an urban canyon, or a pedestrian sits in a car driving on a highway. The notorious multipath and NLOS effects play major roles in degrading GNSS receiver performance under different contexts. If we follow the same logic as the ionospheric research to deal with the multipath effect, we need to study 3D building models, since buildings are the main cause of the reflections. Drawing on our previous research, the right panel of Figure 1 is simulated based on an LOD1 building model and a single-reflection ray-tracing algorithm. It reveals that the positioning error caused by multipath and NLOS is highly site-dependent. In other words, the nonlinearity and complexity of multipath and NLOS are very high.

Generally speaking, ML derives a model based on data. What exactly does ML do best?

Phenomena we simply do not know how to model by explicit laws of physics/math, for example, contexts and semantics.

Phenomena with high complexity, time variance and nonlinearity.

Looking at the challenges of GNSS multipath and the potential of ML, it becomes straightforward to apply artificial intelligence to mitigate multipath and NLOS. One mainstream idea is to use ML to train models that classify LOS, multipath and NLOS measurements. This idea is illustrated in Figure 2. Three steps are required: data labeling, classifier training, and classifier evaluation. In fact, there are challenges in each step.

Are we confident in our labeling?

In our work, we use 3D city models and ray-tracing simulation to label the measurements we receive from the GNSS receiver. The labels may not be 100% correct, since the 3D models are not complete enough to represent the real world. Trees and dynamic objects (vehicles and pedestrians) are not included. In addition, multiply reflected signals are very hard to trace, and the 3D models themselves could have errors.

What are the classes and features?

For the classes, popular selections are the presence (binary) of multipath or NLOS and their associated pseudorange errors. The features are selected based on the variables that are affected by multipath, including carrier-to-noise ratio, pseudorange residual, DOP, etc. If we can access a step deeper, at the correlator level, the shapes of the code and carrier correlators are also excellent features. Our study compares features at different levels (correlator, RINEX, and NMEA) for the GNSS classifier and reveals that the rawer the feature, the better the classification accuracy that can be obtained. Finally, exploratory data analysis methods, such as principal component analysis, can help select the features that are most representative of the class.
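
As a rough illustration of the classifier-training step, the sketch below trains a supervised model on the kinds of features named above (carrier-to-noise ratio, pseudorange residual, satellite elevation). The data and labels are synthetic placeholders standing in for measurements labeled by 3D-model ray tracing; it is not the authors' actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Synthetic stand-in for labelled GNSS measurements: columns are
# carrier-to-noise ratio (dB-Hz), pseudorange residual (m) and elevation (deg).
n = 3000
X = np.column_stack([
    rng.normal(40, 6, n),    # C/N0
    rng.normal(0, 3, n),     # pseudorange residual
    rng.uniform(5, 90, n),   # satellite elevation
])
# Placeholder labels: 0 = LOS, 1 = multipath, 2 = NLOS.  Real labels would come
# from 3D city models and ray-tracing, as described in the article.
y = (X[:, 0] < 35).astype(int) + (X[:, 2] < 20).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```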

Are we confident that the data we used to train the classifier are representative enough for the general application cases?

Overfitting the data has always been a challenge for ML. Multipath and NLOS effects are also very different in different cities. For example, the architectures in Europe and Asia are very different, producing different multipath effects. Classifiers trained using data in Hong Kong do not necessarily perform well in London. The categorization of cities or urban areas in terms of their effects on GNSS multipath and NLOS is still an open question.

Q: What are the challenges of integrated navigation systems and how can machine learning help with them?

Seamless positioning has always been the ultimate goal. However, each sensor performs differently in different areas. Table 1 gives a rough picture. Inertial sensors seem to perform stably in most areas, but MEMS-INS suffers from drift and is highly affected by random noise caused by temperature variations. Naturally, integrated navigation is a solution. Sensor integration, in fact, should be considered in both the long term and the short term.

Long-term Sensor Selection

In the long term, the sensors available for positioning are generally more than enough. The question to ask is how to determine the best subset of sensors to integrate. Consider an example of seamless positioning for a city dweller travelling from home to the office:

Walking on a street to the subway station (GNSS+IMU)

Walking in a subway station (Wi-Fi/BLE+IMU)

Traveling on a subway (IMU)

Walking in an urban area to the office (VPS+ GNSS+ Wi-Fi/BLE+IMU)

This example clearly shows that seamless positioning should integrate different sensors. The selection of the sensors can be done heuristically or by maximizing the observability of the sensors. If the sensors are selected heuristically, we must be able to know what context the system is operating under. This is one of the best angles for ML to cut in; in fact, the classification of scenarios or contexts is exactly what ML does best. A recently published journal paper demonstrates how to detect different contexts using smartphone sensors for context-adaptive navigation (Gao and Groves 2020). Sensors in smartphones are used in models trained by supervised ML to determine not only the environment but also the behavior (such as transportation modes, including being static, walking, and sitting in a car or on a subway).

According to their results, the state-of-the-art detection algorithm can achieve over 95% accuracy for pedestrians in indoor, intermediate, and outdoor scenarios. This finding encourages the use of ML to intelligently select the right navigation systems for an integrated navigation system in different areas. The same methodology can easily be extended to vehicular applications with proper modification of the selected features, classes, and machine learning algorithms.
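
A hypothetical sketch of how a detected context could drive long-term sensor selection, mirroring the commuter example above: the context labels, sensor names and mapping are assumptions for illustration, not a published scheme.

```python
# Illustrative mapping from a detected context (environment + behaviour) to the
# sensor subset an integration filter would use, mirroring the commuter example.
SENSOR_SETS = {
    ("outdoor", "walking"):      ["GNSS", "IMU"],
    ("indoor", "walking"):       ["WiFi/BLE", "IMU"],
    ("indoor", "on_subway"):     ["IMU"],
    ("urban_canyon", "walking"): ["VPS", "GNSS", "WiFi/BLE", "IMU"],
}

def select_sensors(environment, behaviour):
    """Return the sensors to fuse for the detected context (default: IMU only)."""
    return SENSOR_SETS.get((environment, behaviour), ["IMU"])

# 'environment' and 'behaviour' would come from a supervised context classifier
# trained on smartphone sensor features, as in Gao and Groves (2020).
print(select_sensors("urban_canyon", "walking"))
```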

Short-term Sensor Weighting

Technically speaking, an optimal integrated solution can be obtained if the uncertainty of each sensor can be optimally described. Presumably, a sensor's uncertainty remains unchanged in a given environment. As a result, most sensors' uncertainties are carefully calibrated before use in integration systems.

However, the problem is that the environment can change rapidly within a short period of time. For example, a car drives through an urban area with several viaducts, or a car drives under open sky and then under a canopy of foliage. These scenarios greatly affect the performance of GNSS; however, the affected periods are too short to justify excluding GNSS from the subset of sensors used. The best solution against these unexpected and transient effects is to de-weight the affected sensors in the system.

Due to the complexity of these effects, adaptive tuning of the uncertainty based on ML is becoming popular. Our team demonstrated this potential with an experiment on a loosely coupled GNSS/INS integration. The experiment took place in an urban canyon with a commercial GNSS receiver and a MEMS INS. Different ML algorithms were used to classify the GNSS positioning errors into four classes: healthy, slightly shifted, inaccurate, and dangerous. These are represented as 1 to 4 at the bottom of Figure 4. The top and bottom of the figure show the error of the commercial GNSS solution and the classes predicted by the different ML algorithms. It clearly shows that ML can do a very good job of predicting the class of the GNSS solution, enabling the integrated system to allocate proper weighting to GNSS. Table 2 shows the improvement made by the ML-aided integration system.

This is just an example to show, in a preliminary way, the potential of ML in estimating/predicting sensor uncertainty. The methodology can also be applied to different sensor integrations, such as Wi-Fi/BLE/IMU integration. The challenge is that the trained classifier may be too specific to a certain area due to overfitting of the data. This remains an open research question in the field.
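
To make the de-weighting idea concrete, here is a minimal sketch of a loosely coupled position update in which a predicted GNSS quality class inflates the measurement noise covariance. The class-to-sigma mapping and all numbers are assumptions for illustration, not the values used in the experiment above.

```python
import numpy as np

# Map the predicted GNSS quality class (1 = healthy ... 4 = dangerous) to a
# position measurement standard deviation.  The numbers are illustrative only.
CLASS_SIGMA_M = {1: 2.0, 2: 5.0, 3: 15.0, 4: 50.0}

def gnss_measurement_update(x, P, z, predicted_class):
    """One loosely coupled position update with class-dependent de-weighting."""
    sigma = CLASS_SIGMA_M[predicted_class]
    R = np.eye(3) * sigma**2          # measurement noise, inflated for bad classes
    H = np.eye(3)                     # GNSS observes the position states directly
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(3) - K @ H) @ P
    return x, P

x = np.zeros(3)                       # position error state (placeholder)
P = np.eye(3) * 10.0                  # state covariance
z = np.array([3.0, -1.0, 0.5])        # GNSS position in a local frame (placeholder)
x, P = gnss_measurement_update(x, P, z, predicted_class=3)
print(x)
```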

Q: Machine Learning or Deep Learning for Navigation Systems?

Based on research in object recognition in computer science, deep learning (DL) is currently the mainstream method because it generally outperforms ML when two conditions are fulfilled: data and computation. The trained model of DL is completely data-driven, while ML trains models to fit assumed (known) mathematical models. A rule of thumb for selecting ML or DL is the availability of the data in hand. If extensive and conclusive data are available, DL achieves excellent performance due to its superiority in data fitting. In other words, DL can automatically discover the features that affect the classes. However, a model trained by ML is much more comprehensible than one trained by DL; the DL model becomes like a black box. In addition, the nodes and layers of convolution in DL are used to extract features, and the selection of the number of layers and the number of nodes is still very hard to determine, so trial-and-error approaches are widely adopted. These are the major challenges in DL.

If a DL-trained neural network could be perfectly designed for the integrated navigation system, it should consider both the long-term and short-term challenges. Figure 5 shows this idea. Several hidden layers would be designed to predict the environments (or contexts), and the others to predict the sensor uncertainty. The idea is straightforward, but the challenges remain:

Are we confident that the data we used to train the classifier are representative enough for the general application cases?

What are the classes?

What are the features?

How many layers and how many nodes should be used?

Q: How does machine learning affect the field of navigation?

ML will accelerate the development of seamless positioning. With the presence of ML in the navigation field, a perfect INS is no longer the only solution. These AI technologies facilitate the selection of the appropriate sensors or raw measurements (with appropriate trust) against complex navigation challenges. The transient selection of sensors (well known as plug-and-play) will affect the integration algorithm. Integration R&D engineers in navigation have long worked on the Kalman filter and its variants. However, the limited flexibility of the Kalman filter makes it hard to accommodate the plug-and-play of sensors. The graph optimization that is widely used in the robotics field could be a very strong candidate for integrating sensors for navigation purposes.

Other than GNSS and the integrated navigation systems mentioned above, the recently developed visual positioning system (VPS) by Google could replace visual corner-point detection with the semantic information detected by ML. Looking at how we navigated before GNSS, we compared visual landmarks with our memory (a database) to infer where we are and where we are heading. ML can segment and classify images taken by a camera into different classes, including building, foliage, road, curb, etc., and compare the distribution of this semantic information with that in a database on a cloud server. If they match, the associated position and orientation tag in the database can be taken as the user location.
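
A toy sketch of that matching step: describe a camera view by the fraction of pixels in each semantic class and retrieve the closest geo-tagged entry. The class list, histograms, poses and similarity measure are all illustrative assumptions, not Google's VPS implementation.

```python
import numpy as np

# Describe a view by the fraction of pixels in each semantic class and find the
# closest entry in a geo-tagged database.  All values below are placeholders.
CLASSES = ["building", "foliage", "road", "curb", "sky"]

database = {
    (22.3051, 114.1800, 45.0):  np.array([0.55, 0.05, 0.20, 0.05, 0.15]),
    (22.3053, 114.1805, 120.0): np.array([0.30, 0.25, 0.25, 0.05, 0.15]),
    (22.3060, 114.1790, 270.0): np.array([0.10, 0.50, 0.20, 0.05, 0.15]),
}

query = np.array([0.52, 0.08, 0.22, 0.04, 0.14])   # from an ML segmentation of the image

def histogram_intersection(a, b):
    # Higher means the two class distributions overlap more.
    return float(np.minimum(a, b).sum())

best_pose = max(database, key=lambda pose: histogram_intersection(query, database[pose]))
print("estimated (lat, lon, heading):", best_pose)
```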

AI technologies are coming. They will influence navigation research and development. In my opinion, the best we can do is to mobilize AI to tackle the challenges to which we currently lack solutions. It is highly probable that technology advances and learning focus will depend greatly on ML's development and achievements in the field of navigation.

References

(1) Groves PD, Challenges of Integrated Navigation, ION GNSS+ 2018, Miami, Florida, pp. 3237-3264.

(2) Gao H., Groves P.D. (2020) Improving environment detection by behavior association for context-adaptive navigation. NAVIGATION, 67: 43-60. https://doi.org/10.1002/navi.349

(3) Sun R., Hsu L.T., Xue D., Zhang G., Washington Y.O., (2019) GPS Signal Reception Classification Using Adaptive Neuro-Fuzzy Inference System, Journal of Navigation, 72(3): 685-701.

(4) Hsu L.T. GNSS Multipath Detection Using a Machine Learning Approach, IEEE ITSC 2017, Yokohama, Japan.

(5) Yozevitch R., and Moshe BB. (2015) A robust shadow matching algorithm for GNSS positioning. NAVIGATION, 62.2: 95-109.

(6) Chen P.Y., Chen H., Tsai M.H., Kuo H.K., Tsai Y.M., Chiou T.Y., Jau P.H. Performance of Machine Learning Models in Determining the GNSS Position Usage for a Loosely Coupled GNSS/IMU System, ION GNSS+ 2020, virtually, September 21-25, 2020.

(7) Suzuki T., Nakano, Y., Amano, Y. NLOS Multipath Detection by Using Machine Learning in Urban Environments, ION GNSS+ 2017, Portland, Oregon, pp. 3958-3967.

(8) Xu B., Jia Q., Luo Y., Hsu L.T. (2019) Intelligent GPS L1 LOS/Multipath/NLOS Classifiers Based on Correlator-, RINEX-and NMEA-Level Measurements, Remote Sensing 11(16):1851.

(9) Chiu H.P., Zhou X., Carlone L., Dellaert F., Samarasekera S., and Kumar R., Constrained Optimal Selection for Multi-Sensor Robot Navigation Using Plug-and-Play Factor Graphs, IEEE ICRA 2014, Hong Kong, China.

(10) Zhang G., Hsu L.T. (2018) Intelligent GNSS/INS Integrated Navigation System for a Commercial UAV Flight Control System, Aerospace Science and Technology, 80:368-380.

(11) Kumar R., Samarasekera S., Chiu H.P., Trinh N., Dellaert F., Williams S., Kaess M., Leonard J., Plug-and-Play Navigation Algorithms Using Factor Graphs, Joint Navigation Conference (JNC), 2012.

Read more here:
What are the roles of artificial intelligence and machine learning in GNSS positioning? - Inside GNSS

SiMa.ai Adopts Arm Technology to Deliver a Purpose-built Heterogeneous Machine Learning Compute Platform for the Embedded Edge – Design and Reuse

Licensing agreement enables machine learning intelligence with best-in-class performance and power for robotics, surveillance, autonomous, and automotive applications

SAN JOSE, Calif.-- November 18, 2020 -- SiMa.ai, the machine learning company enabling high performance compute at the lowest power, today announced the adoption of low-power Arm compute technology to build its purpose-built Machine Learning SoC (MLSoC) platform. The licensing of this technology brings machine learning intelligence with best-in-class performance and power to a broad set of embedded edge applications including robotics, surveillance, autonomous, and automotive.

SiMa.ai is adopting Arm Cortex-A and Cortex-M processors optimized for power, throughput efficiency, and safety-critical tasks. In addition, SiMa.ai is leveraging a combination of widely used open-source machine learning frameworks from Arm's vast ecosystem to allow software to seamlessly enable machine learning for legacy applications at the embedded edge.

"Arm is the industry leader in energy-efficient processor design and advanced computing," said Krishna Rangasayee, founder and CEO of SiMa.ai. "The integration of SiMa.ai's high performance and low power machine learning accelerator with Arm technology accelerates our progress in bringing our MLSoC to the market, creating new solutions underpinned by industry-leading IP, the broad Arm ecosystem, and world-class support from its field and development teams."

"From autonomous systems to smart cities, the applications enabled by ML at the edge are delivering increased functionality, leading to more complex device requirements," said Dipti Vachani, senior vice president and general manager, Automotive and IoT Line of Business at Arm. "SiMa.ai is innovating on top of Arm's foundational IP to create a unique low power ML SoC that will provide intelligence to the next generation of embedded edge use cases."

SiMa.ai is strategically leveraging Arm technology to deliver its unique Machine Learning SoC. This includes:

About SiMa.ai

SiMa.ai is a machine learning company enabling high performance compute at the lowest power. Initially focused on solutions for computer vision applications at the embedded edge, the company is led by a team of technology experts committed to delivering the industry's highest frames-per-second-per-watt solution to its customers. To learn more, visit http://www.sima.ai.

Continue reading here:
SiMa.ai Adopts Arm Technology to Deliver a Purpose-built Heterogeneous Machine Learning Compute Platform for the Embedded Edge - Design and Reuse

Commentary: Pathmind applies AI, machine learning to industrial operations – FreightWaves

The views expressed here are solely those of the author and do not necessarily represent the views of FreightWaves or its affiliates.

In this installment of the AI in Supply Chain series (#AIinSupplyChain), we explore how Pathmind, an early-stage startup based in San Francisco, is helping companies apply simulation and reinforcement learning to industrial operations.

I asked Chris Nicholson, CEO and founder of Pathmind, "What is the problem that Pathmind solves for its customers? Who is the typical customer?"

Nicholson said: "The typical Pathmind customer is an industrial engineer working at a simulation consulting firm or on the simulation team of a large corporation with industrial operations to optimize. This ranges from manufacturing companies to the natural resources sector, such as mining and oil and gas. Our clients build simulations of physical systems for routing, job scheduling or price forecasting, and then search for strategies to get more efficient."

Pathmind's software is suited for manufacturing resource management, energy usage management optimization and logistics optimization.

As with every other startup that I have highlighted as a case in this #AIinSupplyChain series, I asked, "What is the secret sauce that makes Pathmind successful? What is unique about your approach? Deep learning seems to be all the rage these days. Does Pathmind use a form of deep learning? Reinforcement learning?"

Nicholson responded: "We automate tasks that our users find tedious or frustrating so that they can focus on what's interesting. For example, we set up and maintain a distributed computing cluster for training algorithms. We automatically select and tune the right reinforcement learning algorithms, so that our users can focus on building the right simulations and coaching their AI agents."

Echoing topics that we have discussed in earlier articles in this series, he continued: "Pathmind uses some of the latest deep reinforcement learning algorithms from OpenAI and DeepMind to find new optimization strategies for our users. Deep reinforcement learning has achieved breakthroughs in gaming, and it is beginning to show the same performance for industrial operations and supply chain."
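
For readers unfamiliar with the technique, the sketch below shows the bare reinforcement-learning loop on a toy job-routing problem: an agent learns, from simulated reward, which of two machines to send work to. It is a generic tabular Q-learning illustration under invented assumptions, not Pathmind's deep reinforcement learning stack.

```python
import random

# Toy illustration of the reinforcement-learning loop: an agent routes jobs to
# one of two machines; machine A is faster but occasionally jams.
random.seed(0)

ACTIONS = [0, 1]                      # 0 = send job to machine A, 1 = machine B
q = {(s, a): 0.0 for s in ("idle", "jammed") for a in ACTIONS}

def step(state, action):
    """Return (reward, next_state) for a simulated job dispatch."""
    if action == 0:                   # machine A: fast, but jams 20% of the time
        if state == "jammed" or random.random() < 0.2:
            return -5.0, "jammed"
        return 3.0, "idle"
    return 1.0, "idle"                # machine B: slow but reliable

alpha, gamma, epsilon = 0.1, 0.9, 0.1
state = "idle"
for _ in range(20000):
    if random.random() < epsilon:     # explore occasionally
        action = random.choice(ACTIONS)
    else:                             # otherwise exploit the current value estimates
        action = max(ACTIONS, key=lambda a: q[(state, a)])
    reward, next_state = step(state, action)
    best_next = max(q[(next_state, a)] for a in ACTIONS)
    q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
    state = next_state

print({k: round(v, 2) for k, v in q.items()})
```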

On its website, Pathmind describes saving a large metals processor 10% of its expenditures on power. It also describes the use of its software to increase ore preparation by 19% at an open-pit mining site.

Given how difficult it is to obtain good quality data for AI and machine learning systems for industrial settings, I asked how Pathmind handles that problem.

"Simulations generate synthetic data, and lots of it," said Slin Lee, Pathmind's head of engineering. "The challenge is to build a simulation that reflects your underlying operations, but there are many tools to validate results."

"Once you pass the simulation stage, you can integrate your reinforcement learning policy into an ERP. Most companies have a lot of the data they need in those systems. And yes, there's always data cleansing to do," he added.

As the customer success examples Pathmind provides on its website suggest, mining companies are increasingly looking to adopt and implement new software to increase efficiencies in their internal operations. This is happening because the industry as a whole runs on very old technology, and deposits of ore are becoming increasingly difficult to access as existing mines reach maturity. Moreover, the growing trend toward the decarbonization of supply chains, and the regulations that will eventually follow to make decarbonization a requirement, provide an incentive for mining companies to seize the initiative in figuring out how to achieve that goal by implementing new technology.

The areas in which AI and machine learning are making the greatest inroads are: mineral exploration, using geological data to make the process of seeking new mineral deposits less prone to error and waste; predictive maintenance and safety, using data to preemptively repair expensive machinery before breakdowns occur; cyberphysical systems, creating digital models of the mining operation in order to quickly simulate various scenarios; and autonomous vehicles, using autonomous trucks and other autonomous vehicles and machinery to move resources within the area in which mining operations are taking place.

According to Statista, "The revenue of the top 40 global mining companies, which represent a vast majority of the whole industry, amounted to some 692 billion U.S. dollars in 2019. The net profit margin of the mining industry decreased from 25 percent in 2010 to nine percent in 2019."

The trend toward mining companies and other natural-resource-intensive industries adopting new technology is going to continue. So this is a topic we will continue to pay attention to in this column.

Conclusion

If you are a team working on innovations that you believe have the potential to significantly refashion global supply chains, we'd love to tell your story at FreightWaves. I am easy to reach on LinkedIn and Twitter. Alternatively, you can reach out to any member of the editorial team at FreightWaves at media@freightwaves.com.

Dig deeper into the #AIinSupplyChain Series with FreightWaves:

Commentary: Optimal Dynamics the decision layer of logistics? (July 7)

Commentary: Combine optimization, machine learning and simulation to move freight (July 17)

Commentary: SmartHop brings AI to owner-operators and brokers (July 22)

Commentary: Optimizing a truck fleet using artificial intelligence (July 28)

Commentary: FleetOps tries to solve data fragmentation issues in trucking (Aug. 5)

Commentary: Bulgarias Transmetrics uses augmented intelligence to help customers (Aug. 11)

Commentary: Applying AI to decision-making in shipping and commodities markets (Aug. 27)

Commentary: The enabling technologies for the factories of the future (Sept. 3)

Commentary: The enabling technologies for the networks of the future (Sept. 10)

Commentary: Understanding the data issues that slow adoption of industrial AI (Sept. 16)

Commentary: How AI and machine learning improve supply chain visibility, shipping insurance (Sept. 24)

Commentary: How AI, machine learning are streamlining workflows in freight forwarding, customs brokerage (Oct. 1)

Commentary: Can AI and machine learning improve the economy? (Oct. 8)

Commentary: Savitude and StyleSage leverage AI, machine learning in fashion retail (Oct. 15)

Commentary: How Japans ABEJA helps large companies operationalize AI, machine learning (Oct. 26)

Author's disclosure: I am not an investor in any early-stage startups mentioned in this article, either personally or through REFASHIOND Ventures. I have no other financial relationship with any entities mentioned in this article.

Original post:
Commentary: Pathmind applies AI, machine learning to industrial operations - FreightWaves

93% of security operations centers employing AI and machine learning tools to detect advanced threats – Security Magazine


More:
93% of security operations centers employing AI and machine learning tools to detect advanced threats - Security Magazine

2021 in Ed Tech: AI, Data Analytics Were Top Priorities – Government Technology

The last 12 months were a time of experimentation for both K-12 and higher education institutions. Flush with new federal funding but straining against disruptions such as COVID-19 and rampant cyber threats, schools adapted with help from ed-tech companies, nonprofits and other industry partners to meet a growing demand for flexible online learning options, as well as to improve student performance and tackle learning loss that resulted from last year's school closures.

For school districts, colleges and universities, often this work included efforts to close the digital divide and distribute tablets and laptops; start coding boot camps and other training programs to prepare the future workforce for new technologies; and make cybersecurity investments and study programs to create a bulwark of infrastructure and skills against cyber criminals.

For ed-tech companies and industry leaders helping schools through this, much of the focus was on student data and AI-driven programs designed to assist with lesson planning, student feedback and educational content.

Along those lines, Google announced the creation of an AI tutor last month to provide students with personalized feedback on assignments, academic coaching and course advisement. It was an expansion of Google's Student Success Services, a software suite released in 2020, which includes virtual assistant functions, analytics, enrollment algorithms and other higher ed applications.

"They all are thinking about how we can make learning more personalized, aligning it to when you need it for 24/7 access, and using data more effectively to engage students," Google's Butschi said. "As you think about that, it starts to tee up why we're seeing data analytics, artificial intelligence and machine learning to personalize and gauge learning starting to pop up more and more."

According to a recent report from Market Research Engine, the global market for artificial intelligence in education technology will reach $5.80 billion by 2025, with a compound annual growth rate of 45 percent.

Neil Heffernan, a computer science professor at Worcester Polytechnic Institute and lead developer of the AI-based student feedback program ASSISTments, said this projected growth has partly to do with AI's potential to identify and address areas in need of improvement and help close achievement gaps.

He said ASSISTments' AI feature Quick-Comments won an $8 million grant last week from the U.S. Department of Education's Education Innovation and Research program to improve its machine-learning tutoring functions.

"What we want to do is find out which human tutors are doing a good job, look at what they're doing, and put that back into the computer so that when no humans are around, we can have the program doing that," he said. "When we have the computer doing that, we can measure how they do on the next problem."

While AI is helping schools with tutoring and curricula, new data management systems are streamlining the collection and storage of student performance data to identify and address areas where improvement is needed. The aim is to make the data more readable and enable data systems to integrate with learning management systems (LMS) like Google Classroom and Canvas that have only become more commonplace in K-12 during COVID-19.

Over the past year, K-12 districts and state education officials have worked with organizations such as the analytics nonprofit Ed-Fi Alliance and adopted tools like the Apigee API platform from Google Cloud to standardize data systems and make them interoperable.

Trenton Goble, VP of K-12 Strategy at Instructure, said schools need student performance data that can "flow into a data warehouse environment with clear and easy-to-use reporting" and gauge the impact of remote learning.

"As schools went back to a face-to-face environment this fall, we saw a lot of interest in assessments," he said. "Assessments only have value insofar as teachers are using the data, so being able to present data in a meaningful way is a big trend."

Goble said the adoption of Instructure's Canvas LMS has seen significant growth this year, as schools slowly made the transition to using an LMS for lower elementary grade levels following last year's first closures.

He said one of the main advantages of Canvas has been its ability to integrate new digital learning tools into the LMS, noting the emergence of new AI-driven ed-tech products marketed to educators overwhelmed with choices in an ever-growing market.

"We've always been open and extensible as a platform. Our ability to allow third-party resources to integrate into the LMS is vital. The ability to integrate is, I think, key. It's an expectation at this point," he said, as schools become more sophisticated in working with new technologies. "[Choosing the right tools] is the toughest element for school districts. For districts that want to be open in allowing teachers to find their own tools in the K-12 space, you want those tools to integrate into the LMS."

According to a recent report from the policy think tank Information Technology and Innovation Foundation, AR/VR technology could prove a promising addition to digital learning toolkits at schools and universities, eventually.

Ellysse Dick, a policy analyst from ITIF and author of the report, said AR/VR programs enable experiential lessons that might make up for learning loss that occurred over the past two years.

"A virtual field trip isn't a full replacement for a real-life field trip, but for those students who wouldn't otherwise be able to visit places that might be a bus ride away for others, VR can give them opportunities to experience some of those things," she told Government Technology in September.

But while AR and VR tools and the "gamification" of learning have garnered interest in schools, Google's Butschi reiterated that "data analytics and AI are top priorities when it comes to tracking and improving grades."

Heffernan also said this year's focus on machine learning in ed tech eclipsed AR/VR, which he said "continues to be totally sexy and totally oversold." He expects this trend to continue into 2022 as ed-tech developers and researchers make improvements to AI's capabilities.

"When some people think about AI, they think too much about Hollywood and computers taking over, and I'm not worried about that at all because I know [today's] systems are really dumb," he said, noting that AI has already helped teachers do their jobs more effectively despite current limitations.

See the article here:
2021 in Ed Tech: AI, Data Analytics Were Top Priorities - Government Technology

Microsoft/MITRE group declares war on machine learning vulnerabilities with Adversarial ML Threat Matrix – Diginomica


The extraordinary advances in machine learning that drive the increasing accuracy and reliability of artificial intelligence systems have been matched by a corresponding growth in malicious attacks by bad actors seeking to exploit a new breed of vulnerabilities designed to distort the results.

Microsoft reports it has seen a notable increase in attacks on commercial ML systems over the past four years. Other reports have also brought attention to this problem. Gartner's Top 10 Strategic Technology Trends for 2020, published in October 2019, predicts that:

Through 2022, 30% of all AI cyberattacks will leverage training-data poisoning, AI model theft, or adversarial samples to attack AI-powered systems.

Training data poisoning happens when an adversary is able to introduce bad data into your model's training pool, and hence get it to learn things that are wrong. One approach is to target your ML's availability; the other targets its integrity (commonly known as "backdoor" attacks). Availability attacks aim to inject so much bad data into your system that whatever boundaries your model learns are basically worthless. Integrity attacks are more insidious because the developer isn't aware of them so attackers can sneak in and get the system to do what they want.
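
As a toy illustration of the availability-style attack described above, the sketch below flips a fraction of training labels in a synthetic dataset and measures how test accuracy degrades. The data, model and flip rates are assumptions for demonstration, not a study of any real system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Toy availability-style poisoning: flipping a fraction of training labels
# degrades whatever boundary the model learns.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def accuracy_with_poisoning(flip_fraction):
    y_poisoned = y_train.copy()
    n_flip = int(flip_fraction * len(y_poisoned))
    idx = np.random.default_rng(0).choice(len(y_poisoned), n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]          # flip the chosen labels
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    return model.score(X_test, y_test)

for frac in (0.0, 0.2, 0.4):
    print(f"{int(frac * 100)}% of labels flipped -> test accuracy {accuracy_with_poisoning(frac):.2f}")
```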

Model theft techniques are used to recover models or information about the data used during training, which is a major concern because AI models represent valuable intellectual property trained on potentially sensitive data including financial trades, medical records, or user transactions. The aim of adversaries is to recreate AI models by utilizing the public API and refining their own model using it as a guide.

Adversarial examples are inputs to machine learning models that attackers have intentionally designed to cause the model to make a mistake. Basically, they are like optical illusions for machines.
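
A minimal sketch of the idea, assuming a simple linear classifier on the scikit-learn digits data: a small, deliberately chosen perturbation of the input pixels is usually enough to change the prediction. This is a crude single-step, fast-gradient-sign-style illustration, not an attack on any production model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import load_digits

# Nudge each pixel in the direction that lowers the true class's score and
# watch the prediction change.  Works against a plain linear model only.
X, y = load_digits(return_X_y=True)
X = X / 16.0                                  # scale pixels to [0, 1]
model = LogisticRegression(max_iter=2000).fit(X, y)

x = X[0]
true_label = y[0]
w = model.coef_[model.classes_.tolist().index(true_label)]
# Moving against the true class's weight vector is a crude single-step,
# single-class analogue of the fast gradient sign method for a linear model.
epsilon = 0.3
x_adv = np.clip(x - epsilon * np.sign(w), 0.0, 1.0)

print("clean prediction:    ", model.predict(x.reshape(1, -1))[0])
print("perturbed prediction:", model.predict(x_adv.reshape(1, -1))[0])
```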

All of these methods are dangerous and growing in both volume and sophistication. As Ann Johnson, Corporate Vice President, SCI Business Development at Microsoft, wrote in a blog post:

Despite the compelling reasons to secure ML systems, Microsoft's survey spanning 28 businesses found that most industry practitioners have yet to come to terms with adversarial machine learning. Twenty-five out of the 28 businesses indicated that they don't have the right tools in place to secure their ML systems. What's more, they are explicitly looking for guidance. We found that preparation is not just limited to smaller organizations. We spoke to Fortune 500 companies, governments, non-profits, and small and mid-sized organizations.

Responding to the growing threat, last week Microsoft, the nonprofit MITRE Corporation, and 11 organizations including IBM, Nvidia, Airbus, and Bosch released the Adversarial ML Threat Matrix, an industry-focused open framework designed to help security analysts detect, respond to, and remediate threats against machine learning systems. Microsoft says it worked with MITRE to build a schema that organizes the approaches employed by malicious actors in subverting machine learning models, bolstering monitoring strategies around organizations' mission-critical systems. Said Johnson:

Microsoft worked with MITRE to create the Adversarial ML Threat Matrix, because we believe the first step in empowering security teams to defend against attacks on ML systems, is to have a framework that systematically organizes the techniques employed by malicious adversaries in subverting ML systems. We hope that the security community can use the tabulated tactics and techniques to bolster their monitoring strategies around their organization's mission critical ML systems.

The Adversarial ML Threat Matrix, modeled after the MITRE ATT&CK Framework, aims to address the problem with a curated set of vulnerabilities and adversary behaviors that Microsoft and MITRE vetted to be effective against production systems. With input from researchers at the University of Toronto, Cardiff University, and the Software Engineering Institute at Carnegie Mellon University, Microsoft and MITRE created a list of tactics that correspond to broad categories of adversary action.

Techniques in the schema fall within one tactic and are illustrated by a series of case studies covering how well-known attacks such as the Microsoft Tay poisoning, the Proofpoint evasion attack, and other attacks could be analyzed using the Threat Matrix. Noted Charles Clancy, MITRE's chief futurist, senior vice president, and general manager of MITRE Labs:

Unlike traditional cybersecurity vulnerabilities that are tied to specific software and hardware systems, adversarial ML vulnerabilities are enabled by inherent limitations underlying ML algorithms. Data can be weaponized in new ways which requires an extension of how we model cyber adversary behavior, to reflect emerging threat vectors and the rapidly evolving adversarial machine learning attack lifecycle.

Mikel Rodriguez, a machine learning researcher at MITRE who also oversees MITRE's Decision Science research programs, said that AI is now at the same stage where the internet was in the late 1980s, when people were focused on getting the technology to work and not thinking much about longer-term implications for security and privacy. That, he says, was a mistake that we can learn from.

The Adversarial ML Threat Matrix will allow security analysts to work with threat models that are grounded in real-world incidents that emulate adversary behavior with machine learning and to develop a common language that allows for better communications and collaboration.

View post:
Microsoft/MITRE group declares war on machine learning vulnerabilities with Adversarial ML Threat Matrix - Diginomica

New research project will use machine learning to advance metal alloys for aerospace – Metal Additive Manufacturing magazine

Ian Brooks, AM Technical Fellow, AMRC North West, with Renishaw's RenAM 500Q metal Additive Manufacturing machine (Courtesy Renishaw/AMRC North West)

UK-based Intellegens, a University of Cambridge spin-out specialising in artificial intelligence; the University of Sheffield Advanced Manufacturing Research Centre (AMRC) North West, Preston, Lancashire, UK; and Boeing will collaborate on Project MEDAL: Machine Learning for Additive Manufacturing Experimental Design.

The project aims to accelerate the product development lifecycle of aerospace components by using a machine learning model to optimise Additive Manufacturing processing parameters for new metal alloys at a lower cost and faster rate. The research will focus on metal Laser Beam Powder Bed Fusion (PBF-LB), specifically on key parameter variables required to manufacture high density, high strength parts.

Project MEDAL is part of the National Aerospace Technology Exploitation Programme (NATEP), a £10 million initiative for UK SMEs to develop innovative aerospace technologies, funded by the Department for Business, Energy and Industrial Strategy and delivered in partnership with the Aerospace Technology Institute (ATI) and Innovate UK. Intellegens was a startup in the first group of companies to complete the ATI Boeing Accelerator last year.

"We are very excited to be launching this project in conjunction with the AMRC," stated Ben Pellegrini, CEO of Intellegens. "The intersection of machine learning, design of experiments and Additive Manufacturing holds enormous potential to rapidly develop and deploy custom parts not only in aerospace, as proven by the involvement of Boeing, but in medical, transport and consumer product applications."

James Hughes, Research Director for University of Sheffield AMRC North West, explained that the project will build the AMRC's knowledge and expertise in alloy development so it can help other UK manufacturers.

Hughes commented, "At the AMRC we have experienced first-hand, and through our partner network, how onerous it is to develop a robust set of process parameters for AM. It relies on a multi-disciplinary team of engineers and scientists and comes at great expense in both time and capital equipment."

"It is our intention to develop a robust, end-to-end methodology for process parameter development that encompasses how we operate our machinery right through to how we generate response variables quickly and efficiently. Intellegens' AI-embedded platform Alchemite will be at the heart of all of this."

"There are many barriers to the adoption of metallic AM, but providing users, and maybe more importantly new users, with the tools they need to process a required material should not be one of them," Hughes continued. "With the AMRC's knowledge in AM, and Intellegens' AI tools, all the required experience and expertise is in place to deliver a rapid, data-driven software toolset for developing parameters for metallic AM processes to make them cheaper and faster."

Sir Martin Donnelly, president of Boeing Europe and managing director of Boeing in the UK and Ireland, reported that the project shows how industry can successfully partner with government and academia to spur UK innovation.

Donnelly noted, "We are proud to see this project move forward because of what it promises aviation and manufacturing, and because of what it represents for the UK's innovation ecosystem. We helped found the AMRC two decades ago, Intellegens was one of the companies we invested in as part of the ATI Boeing Accelerator, and we have longstanding research partnerships with Cambridge University and the University of Sheffield."

He added, "We are excited to see what comes from this continued collaboration and how we might replicate this formula in other ways within the UK and beyond."

Aerospace components have to withstand certain loads and temperature resistances, and some materials are limited in what they can offer. There is also a simultaneous push for lower weight and higher temperature resistance for better fuel efficiency, bringing new or previously impractical-to-machine metals into the aerospace material mix.

One of the main drawbacks of AM is the limited material selection currently available, and the design of new materials, particularly in the aerospace industry, requires expensive and extensive testing and certification cycles which can take longer than a year to complete and cost as much as £1 million to undertake.

Pellegrini explained that experimental design techniques are extremely important to develop new products and processes in a cost-effective and confident manner. The most common approach is Design of Experiments (DOE), a statistical method that builds a mathematical model of a system by simultaneously investigating the effects of various factors.

Pellegrini added, "DOE is a more efficient, systematic way of choosing and carrying out experiments compared to the Change One Separate variable at a Time (COST) approach. However, the high number of experiments required to obtain a reliable covering of the search space means that DOE can still be a lengthy and costly process, which can be improved."

"The machine learning solution in this project can significantly reduce the need for many experimental cycles, by around 80%. The software platform will be able to suggest the most important experiments needed to optimise AM processing parameters, in order to manufacture parts that meet specific target properties. The platform will make the development process for AM metal alloys more time- and cost-efficient. This will in turn accelerate the production of more lightweight and integrated aerospace components, leading to more efficient aircraft and improved environmental impact," concluded Pellegrini.

Intellegens will produce a software platform with an underlying machine learning algorithm based on its Alchemite platform. It has reportedly already been used successfully to overcome material design problems in a University of Cambridge research project with a leading OEM where a new alloy was designed, developed and verified in eighteen months rather than the expected twenty-year timeline, saving approximately $10 million.
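
To illustrate the general idea of ML-guided experiment selection (as opposed to exhaustive DOE), the sketch below fits a surrogate model to a handful of completed builds and proposes the next laser power/scan speed combination to try, balancing predicted density against uncertainty. The data, parameter ranges and acquisition rule are invented for illustration; this is not the Alchemite algorithm.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Sketch of surrogate-guided experiment selection for two PBF-LB parameters
# (laser power, scan speed).  All values are placeholders.
# A handful of completed experiments: [power (W), speed (mm/s)] -> relative density.
X_done = np.array([[200, 800], [250, 1000], [300, 1200], [350, 900], [280, 700]], float)
y_done = np.array([0.985, 0.990, 0.978, 0.992, 0.988])   # placeholder measurements

gp = GaussianProcessRegressor(kernel=Matern(length_scale=[50.0, 200.0]),
                              normalize_y=True).fit(X_done, y_done)

# Candidate grid of untried parameter combinations.
powers = np.linspace(180, 400, 23)
speeds = np.linspace(600, 1400, 17)
candidates = np.array([[p, s] for p in powers for s in speeds])

mean, std = gp.predict(candidates, return_std=True)
ucb = mean + 1.0 * std                 # favour good predictions and high uncertainty
best = candidates[np.argmax(ucb)]
print(f"suggested next build: power = {best[0]:.0f} W, scan speed = {best[1]:.0f} mm/s")
```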

http://www.intellegens.ai

http://www.amrc.co.uk

http://www.boeing.com

View original post here:
New research project will use machine learning to advance metal alloys for aerospace - Metal Additive Manufacturing magazine

Five trends in machine learning-enhanced analytics to watch in 2021 – Information Age

AI usage is growing rapidly. What does 2021 hold for the world of analytics, and how will AI drive it?

Progress of AI-powered operations looks set to grow this year.

As the world prepares to recover from the Covid-19 pandemic, businesses will need to increasingly rely on analytics to deal with new consumer behaviour.

According to Gartner analyst Rita Sallam, "In the face of unprecedented market shifts, data and analytics leaders require an ever-increasing velocity and scale of analysis in terms of processing and access to accelerate innovation and forge new paths to a post-Covid-19 world."

Machine learning and artificial intelligence are finding increasingly significant use cases in data analytics for business. Here are five trends to watch out for in 2021.

Gartner predicts that by 2024, 75% of enterprises will shift towards putting AI and ML into operation. A big reason for this is the way the pandemic has changed consumer behaviour. Regression learning models that rely on historical data might not be valid anymore. In their place, reinforcement and distributed learning models will find more use, thanks to their adaptability.

A large share of businesses have already democratised their data through the use of embedded analytics dashboards. The use of AI to generate augmented analytics to drive business decisions will increase as businesses seek to react faster to shifting conditions. Powering data democratisation efforts with AI will help non-technical users make a greater number of business decisions, without having to rely on IT support to query data.

Companies such as Sisense already offer the ability to integrate powerful analytics into custom applications. As AI algorithms become smarter, it's a given that they'll help companies use low-latency alerts so managers can react to quantifiable anomalies that indicate changes in their business. Also, AI is expected to play a major role in delivering dynamic data stories and might reduce a user's role in data exploration.

A fact that's often forgotten in AI conversations is that these technologies are still nascent. Many of the major developments have been driven by open source efforts, but 2021 will see an increasing number of companies commercialise AI through product releases.

This event will truly be a marker of AI going mainstream. While open source has been highly beneficial to AI, scaling these projects for commercial purposes has been difficult. With companies investing more in AI research, expect a greater proliferation of AI technology in project management, data reusability, and transparency products.

Using AI for better data management is a particular focus of big companies right now. A Pathfinder report in 2018 found that a lack of skilled resources in data management was hampering AI development. However, with ML growing increasingly sophisticated, companies are beginning to use AI to manage data, which fuels even faster AI development.

As a result, metadata management becomes streamlined, and architectures become simpler. Moving forward, expect an increasing number of AI-driven solutions to be released commercially instead of on open source platforms.

Vendors such as Informatica are already using AI and ML algorithms to help develop better enterprise data management solutions for their clients. Everything from data extraction to enrichment is optimised by AI, according to the company.


Voice search and data are increasing by the day. With products such as Amazon's Alexa and Google's Assistant finding their way into smartphones and growing adoption of smart speakers in our homes, natural language processing will increase.

Companies will wake up to the immense benefits of voice analytics and will provide their customers with voice tools. The benefits of enhanced NLP include better social listening, sentiment analysis, and increased personalisation.

Companies such as AX Semantics provide self-service natural language generation software that allows customers to self-automate text commands. Companies such as Porsche, Deloitte and Nivea are among their customers.

As augmented analytics make their way into embedded dashboards, low-level data analysis tasks will be automated. An area that is ripe for automation is data collection and synthesis. Currently, data scientists spend large amounts of time cleaning and collecting data. Automating these tasks by specifying standardised protocols will help companies employ their talent in tasks better suited to their abilities.

A side effect of data analysis automation will be the speeding up of analytics and reporting. As a result, we can expect businesses to make decisions faster along with installing infrastructure that allows them to respond and react to changing conditions quickly.

As the worlds of data and analytics come closer together, vendors who provide end-to-end stacks will provide better value to their customers. Combine this with increased data democratisation and it's easy to see why legacy enterprise software vendors such as SAP offer everything from data management to analytics to storage solutions to their clients.


IoT devices are making their way into not just B2C products but B2B, enterprise and public projects as well, from smart cities to industry 4.0.

Data is being generated at unprecedented rates, and to make sense of it, companies are increasingly turning to AI. With so much signal, this is a key help for arriving at insights.

While the rise of embedded and augmented analytics has already been discussed, it's critical to point out that the sources of data are more varied than ever before. This makes the use of AI critical, since manual processes cannot process such large volumes efficiently.

As AI technology continues to make giant strides, the business world is gearing up to take full advantage of it. We've reached a stage where AI is powering further AI development, and the rate of progress will only increase.

Original post:
Five trends in machine learning-enhanced analytics to watch in 2021 - Information Age

Understanding AI: The good, bad and ugly – GCN.com

INDUSTRY INSIGHT

Although it's still in the early stage of adoption, the use of artificial intelligence in the public sector has vast potential. According to McKinsey & Company, AI can help to identify tax-evasion patterns, sort through infrastructure data to target bridge inspections, sift through health and social-service data to prioritize cases for child welfare and support or even predict the spread of infectious diseases.

Yet as the promises of AI grow increasingly obtainable, so do the risks associated with it.

Public-sector organizations, which house and protect sensitive data, must be even more alert and prepared for attacks than other businesses. Plus, as technology becomes more complex and integrated into users' personal and professional lives, agencies can't ignore the possibility of more sophisticated attacks, including those that leverage AI.

With that in mind, it's important to understand new trends in AI, especially those that impact how agencies should be thinking about security.

Defining adversarial machine learning

Simple or common AI and machine learning developments have the potential to improve outcomes and reduce costs within government agencies, just as they do for other industries. AI and ML technology is already being incorporated into government operations, from customer service chatbots that help automate Department of Motor Vehicle transactions to computer vision and image recognition applications that can spot stress fractures in bridges to assist a human inspector. The technology itself will continue to mature and be implemented more widely, which means understanding of the technology (both the good and the bad) must evolve as well.

AI and ML statistical models rely on two main components to function properly and execute on their intended purposes: observability and data. When considering how to safeguard both the observability and data within the model, there are a few questions to answer: What information could adversaries obtain from the model to build their own model? How similar is the environment an agency is creating compared to others? Is the time-elapsed learning and feedback mechanism modeled and tested?

Models are built on assumptions, so if there are similar underlying assumptions across environments, an adversary has an increased opportunity of doing one of the following to the model:

Essentially, if agencies can teach AI to execute as their team does, an adversary can teach AI how to behave like an attacker as well, as demonstrated by user behavior analytics tools today. Adversarial machine learning, then, is a learning technique that attempts to deceive, undermine or manipulate models by supplying false input into both observability and data.

As attackers become more refined and nuanced in their approach -- from building adversarial machine learning models to model poisoning -- they could completely disrupt all AI-related efforts within an organization.

Getting ahead, preparing for new risks

AI and ML are already helping streamline cybersecurity efforts, and this technology will, of course, play a role in preventing and detecting more sophisticated attacks as well, so long as they are trained to do so. As AI algorithms continue to learn and behaviors are normalized, agencies can better leverage models for authentication, vulnerability management, phishing, monitoring and augmenting personnel.

Today, AI is improving cybersecurity processes in two ways: It filters through the data quickly based on trained algorithms, which know exactly what to look for, and it helps identify and prioritize attacks and behavioral changes that require the attention of the security operations team, who will then verify the information and respond. As AI evolves, actions and responses will be handled by these algorithms/tools with less human interaction and increased velocity. For example, adversaries could successfully log in using an employee's credentials, which might otherwise go unnoticed. If they are logging in for the first time from a new location or at a time when that user was not expected to be online, AI can help quickly recognize those anomalous behaviors and push an alert to the top of the security team's queue or take more immediate action to disallow the behavior.
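
A minimal sketch of that kind of behavioral alerting, assuming an unsupervised anomaly detector trained on simple per-login features (hour of day, distance from the user's usual location). The features, thresholds and data are invented for illustration and are not any vendor's implementation.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Sketch of behaviour-based alerting on login events.  Features per login:
# hour of day, and a rough "distance from usual location" in km.  Data is synthetic.
rng = np.random.default_rng(42)
normal_logins = np.column_stack([
    rng.normal(10, 2, 500) % 24,      # mostly weekday working hours
    rng.exponential(5, 500),          # close to the usual office/home
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_logins)

new_events = np.array([
    [11.0, 3.0],      # typical: late morning, nearby
    [3.0, 8500.0],    # 3 a.m. from another continent
])
for event, label in zip(new_events, model.predict(new_events)):
    status = "ALERT" if label == -1 else "ok"     # -1 marks an outlier
    print(f"hour={event[0]:>5}, distance_km={event[1]:>7}: {status}")
```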

However, organizations, especially government bodies, must take their knowledge of AI a step further and prepare for the attacks of tomorrow by becoming aware of new, evolving and complex risks. Data must be viewed from both an offensive and a defensive perspective, and teams must continuously monitor models and revise and retrain them to obtain deeper levels of intelligence. ML models, for example, must be trained to detect adversarial threats within the AI itself by conducting:

Most agencies are still in initial stages of incorporating AI/ML models into their operations. However, educating agency IT teams on these evolving threats, utilizing existing toolsets and planning and preparing for these attacks should start now. The amount of data being collected and synthesized is massive and will continue to grow exponentially. We must leverage all the tools in the AI tool chest to make sense of this data for the good.

About the Author

Seth Cutler is the chief information security officer at NetApp.

Go here to read the rest:
Understanding AI: The good, bad and ugly - GCN.com