The AI & Machine Learning community stands ready to help in the climate crisis battle – ITProPortal

The world's on fire.

For three days in August, 7 billion tons of rain fell on the peak of Greenland, which is not just the largest amount since records began 71 years ago, but the first time we know of that rain, not snow, fell on the country's highest peak. Wildfires in Siberia broke another terrifying record for annual fire-related emissions of carbon dioxide, with almost 193,000 square miles (500,000 square kilometers) of vegetation lost to the fires. And in the same month, the latest (sixth) scientific report from the Intergovernmental Panel on Climate Change sounded the emergency alarm yet again on the need for strong and sustained reductions in emissions of carbon dioxide and other greenhouse gases to try to save a common future for us all.

We might have known this for a while, but the rate of extreme weather events and the accumulation of more and more data about the global emergency is now inescapable. It's genuinely no exaggeration to say we're in a fight for survival. There are also mounting financial and business knock-on effects that have already cost global economies billions: Capgemini research shows that in the past twenty years there were 7,348 major recorded disaster events, claiming 1.23 million lives, affecting 4.2 billion people, and resulting in approximately $3 trillion in global economic losses.

For sure, we're going to need to do a lot more than just recycle our soda cans or eat a little less meat each week (though we need to keep doing all that, too); scientists are now talking of the need for serious, large-scale geoengineering to try to save us. Climate change should be on every organization's agenda, but the IT world, which (rightly) gets criticized for its less than stellar record on exorbitant electricity consumption as part of general economic activity (which is rising again), has a particular responsibility to help.

Why? Because we burn a lot of kilowatts, but also because a lot of smart people work in our world, many of whom are deeply concerned about the threat of anthropogenic climate change. As a global citizen and IT professional, I feel this concern too, and I also work in the AI (Artificial Intelligence) world. So I asked myself what AI and AI professionals can do to help here, and this is what I found.

At the top level, AI provides powerful tools to researchers, engineers, chemists, biologists, town planners and policymakers; in short, to everyone trying to make a positive difference. All these people need the very best, most recent, most granular data to make their interventions or design remediation techniques, which will certainly include carbon capture, greener transport, and new post-carbon industries and ways of living. But AI, or more specifically machine learning, is also already lending a hand in a variety of practical ways and climate crisis use cases.

The potential of machine learning in this space has already been called out by the EU, which, in a report on the potential of AI to achieve its ambitious Green Deal targets, noted that "the transformative potential of Artificial Intelligence to contribute to the achievement of the goals of a green transition have been increasingly and prominently highlighted [due to its ability to] accelerate the analysis of large amounts of data to increase our knowledge base, allowing us to better understand and tackle environmental challenges, especially around relevant information for environmental planning, decision-making, management and monitoring of the progress of environmental policies." And as Brussels also points out, AI-generated information could help consumers and businesses adapt toward more sustainable behavior, among other potential benefits.

That same study does point out a potential downside: AI could also contribute negatively via unforeseen consequences. More efficient products, for example, might actually cause users to give up control over their energy consumption and over-consume.

Yes, but we in the machine learning sector are very conscious of these issues, as are national governments and other legislators. I am convinced that AI can, and should, play a central and positive role in helping put out the global fire, and that it could also be used by companies to start incorporating the impact of climate change into their future planning processes.

In our own modest way, we're eating our own dog food at the company I work for, H2O.ai. Our technology has been used for a number of positive climate projects and initiatives, including our work with Wildbook, a non-profit focused on wildlife conservation and research, which is blending structured wildlife research with AI, citizen science, and computer vision to speed population analysis and develop new insights to help fight the extinction of threatened species like the elephant.

Could we be doing more? Yes, and we need to. Could we all be doing more? Yes, and we need to. I believe the climate emergency can be controlled, and that a climate AI culture emerging among technologists, policymakers, domain experts, philosophers and the open-source community, to optimize the design and deployment of helpful AI tools, could really help.

Mark Bakker, Regional Lead Benelux, H2O.ai


Algorithmia Machine Learning Operations Selected for Use by Raytheon Technologies Homeland Security Today – HSToday

Algorithmia, a provider of enterprise machine learning operations (MLOps) software, has been selected by Raytheon Intelligence & Space, a Raytheon Technologies (NYSE: RTX) business, to support the team's development of the U.S. Army's Tactical Intelligence Targeting Access Node (TITAN) program. TITAN is a tactical ground station that finds and tracks threats to support long-range precision targeting.

Algorithmia, along with other leaders in artificial intelligence and machine learning, will enable Raytheon Technologies' TITAN team to deliver easily digestible data to Army operators. TITAN will ingest data from space and high-altitude, aerial and terrestrial sensors to provide targetable data to defense systems. It also provides multi-source intelligence support to targeting, and situational awareness and understanding for commanders.

Algorithmia's MLOps platform has been used by over 130,000 data scientists in a wide range of organizations. Its customers include large and midsize enterprises, Fortune 500 companies, the United Nations and multiple government intelligence agencies. The company's momentum is a product of growing interest in AI-based applications and the need organizations have to efficiently manage cost and security for machine learning models.

"Machine learning significantly accelerates the process by which organizations can uncover important data points and respond to critical issues," said Diego Oppenheimer, CEO of Algorithmia. "Our platform streamlines the deployment of machine learning models into production while providing important oversight, including review for ethical standards, to ensure models operate when and how they should, which makes Algorithmia a natural fit for sensitive applications. We are excited to join Raytheon in supporting its work with the U.S. Army."



From Socrates to machine learning: Arts and Sciences fellows spend the summer on research projects – URI Today

KINGSTON, R.I. (July 1, 2021): Was Socrates a man or a god? How can you remove societal biases from machine learning? How should solitary confinement in prisons be reformed?

Those are just a few of the 11 research projects being tackled this summer by College of Arts and Sciences Fellows at the University of Rhode Island.

The summer fellowship program funds undergraduates in an Arts and Sciences major to participate in research, scholarly or creative projects under the supervision of a faculty member for up to 10 weeks. This year, the program is awarding $28,000 in stipends supporting approximately 2,400 hours of research for students majoring in such fields as criminology and criminal justice, political science, computer science, and philosophy.

In addition to support from the College of Arts and Sciences' RhodyNow Fund and its Dean's Excellence Endowment, the fellowship program is supported by a generous gift from Bob and Renamarie DiMuccio in honor of President David M. Dooley. As President Dooley retires at the end of July, the DiMuccios wished to recognize his leadership in transforming URI over the last 12 years with a gift to support undergraduate research experiences that visibly impact students and build a pathway for their future success.

Hannah Beaucaire '22, a political science and criminology and criminal justice major from Gardiner, Maine, will spend the summer researching solitary confinement practices in U.S. prisons. Working with Assistant Professor Natalie Pifer, Beaucaire will examine large-scale reforms that her home state is enacting to determine if the reforms should be adopted nationally.

"One of the reasons I was interested in studying solitary confinement was the extreme physiological consequences it has been known to cause," she said. "For such an extreme practice, I find solitary confinement to be under-regulated."

The end result of her research will be an online platform that will include short videos providing a history of solitary confinement, its consequences and the reforms Maine is attempting. She plans to use social media to attract interest in the site, which she hopes will serve as an educational and advocacy tool.

"Without the monetary award I would have spent most of my time working a summer job," she said, "but now I get to use that time to study something I find really exciting."

For John Mancini '22, the summer will be spent reading the dialogues of the ancient Greek philosopher Plato to determine if Plato viewed his contemporary, the philosopher Socrates, as a god or a mortal. Plato, who lived from about 428 to 348 B.C., wrote about 35 dialogues; Socrates was a main character in many.

How do you determine someones divinity?

Mancini, a philosophy and political science major from Westerly, will look at Plato's writings to determine what he considered gods and the characteristics of his Forms, his theory of the metaphysical structure of the universe. He will also look at secondary sources to answer what makes a person divine, godlike, or a Form.

"The Forms are what make things the way they are and so explain what things are in themselves," he said. "For example, the Form of Beauty is responsible for all things that contain beauty; the Form of Tallness is the reason that some things are considered tall. Plato's theory basically answers the why question: I am beautiful because I partake in the Beautiful; I am tall because I partake in Tallness."

Mancini will conduct research and discuss his conclusions with Professor Doug Reed, who specializes in ancient philosophy, and plans to write a paper explaining his findings.

"When my findings are published, other philosophers will be able to offer me pushback and constructive criticism," he said. "This will allow me to better develop my position, should I need to. Philosophy is very much a discussion, and after drawing my conclusions, someone is bound to disagree with me. I welcome any opposition so philosophers can gain a fuller understanding of Platonic dialogues."

This summer, Jacob Afonso '22, a computer science major, will be researching fairness and bias in machine learning models, under the supervision of Assistant Professor Sarah Brown. The goal of the project is to test and find ways to eliminate biases from the models.

"In the data used to create machine learning models, societal biases are often present," said Afonso, who lives in Smithfield. "When using biased data, the resulting model used for any sort of predicting will have those underlying biases."

"I wanted to research this because I believe this is one of the largest areas of machine learning that makes people skeptical of its effectiveness," he added. "It is also important for the future of equality of all groups of people as the use of machine learning continues to grow."

Afonso's research will include reading papers on the topic and learning code libraries, which hold the code for the different machine learning algorithms. He will use those to create and test fair models. Eventually, the outcome of his research will provide different ideas for removing biases, along with an analysis of the best and worst of them, he said.

Other 2021 Arts & Science Fellows are:

Mia Giglietti '23, of West Hartford, Connecticut, who is majoring in political science and economics with a minor in Spanish, will analyze economic literature over the 20th century to look at the elite interconnections among corporate boards and their links with governmental bodies to see how those connections benefit those corporations.

"I wanted to participate in this because I've always wanted to learn more about how corporations and economic/political corruption work to maintain the power of major corporations and the wealthy," said Giglietti, who is working with Assistant Professor Nina Eichacker. "I think it is crucial to understand those concepts in the era of severe income inequality that we are currently living through."

Samantha Murphy '22, of Cumberland, who is studying applied economics with a minor in music, will work with Associate Professor Smita Ramnarain to compare public health disasters with other types of disasters, looking at how health disasters, such as the COVID-19 pandemic, interact with other crises and social inequalities, and how they have gendered impacts.

"I wanted to do this research project because of my growing interest in heterodox economics," said Murphy. "The fellowship is giving me the opportunity to do research under the guidance of a faculty member who is well-versed in the fields that I am interested in further studying after my undergraduate time at URI."

Jason Phillips '23, of Barrington, Illinois, who is majoring in English, journalism and writing and rhetoric, will be looking at how aware students are of the way colleges and universities treat adjunct faculty. Phillips plans to interview students around the country over the summer on their understanding of and feelings about the issue, with plans to publish a research paper.

"I chose to research this because, for a large part, adjunct faculty are treated poorly, yet students do not often understand the problem," said Phillips, who is working with Professor Carolyn Betensky. "I am passionate about understanding how students truly feel about how their professors are treated at their universities."

Abigail Dodd '22, of Wakefield, Rhode Island, a history and gender and women's studies major, will identify primary sources from URI Distinctive Collections to document the changing role of women (students and faculty) at URI between 1950 and 1980. Her research will create a content module on women that will be used in the course The URI Campus: A Walk Through Time. Faculty mentor: Senior Lecturer Catherine DeCesare, with assistance from Karen Morse, director of Distinctive Collections.

Kevin Hart '22, of Wakefield, Massachusetts, who is majoring in political science and history with a minor in economics, is researching right-wing terrorism in the U.S. and the motivation behind it. "I wanted to research the topic because it is an under-researched case of political violence but an important one with important implications," he said. "Working with Assistant Professor Brendan 'Skip' Mark will give me the guidance and experience I need to develop a worthwhile and academically sound project."

Sierra Obi '21, of Danville, New Hampshire, who is majoring in computer science and Spanish, is working on a project exploring computer authentication difficulties faced by people with upper extremity impairment, part of National Science Foundation-funded research being conducted by Assistant Professor Krishna Venkatasubramanian. Obi is working to understand the reasons and circumstances under which people with the impairment share their personal computing devices and credentials with others, in an effort to improve login security.

Alfred Timperley '23, of East Greenwich, who is majoring in computer science and data science, will be working on a project to develop a novel tool that will enable future research into program classification: source code authorship, plagiarism detection, malware identification, and others. Faculty mentor: Assistant Professor Marco Alvarez.

Ethan Wyllie '22, of North Kingstown, who is majoring in political science and Spanish, is researching racial inequality in welfare participation. He will track participation rates by different groups (Whites, Blacks, Hispanics, and immigrants) at the state level over the past 20 years to determine racial disparity in U.S. social safety net coverage. Faculty mentor: Associate Professor Ping Xu.


Ten Ways to Apply Machine Learning in Earth and Space Sciences – Eos

Machine learning (ML), loosely defined as the ability of computers to learn from data without being explicitly programmed, has become tremendously popular in technical disciplines over the past decade or so, with applications including complex game playing and image recognition carried out with superhuman capabilities. The Earth and space sciences (ESS) community has also increasingly adopted ML approaches to help tackle pressing questions and unwieldy data sets. From 2009 to 2019, for example, the number of studies involving ML published in AGU journals approximately doubled.

In many ways, ESS present ideal use cases for ML applications because the problems being addressed (like climate change, weather forecasting, and natural hazards assessment) are globally important; the data are often freely available, voluminous, and of high quality; and the computational resources required to develop ML models are steadily becoming more affordable. Free computational languages and ML code libraries are also now available (e.g., scikit-learn, PyTorch, and TensorFlow), making entry barriers lower than ever. Nevertheless, our experience has been that many young scientists and students interested in applying ML techniques to ESS data do not have a clear sense of how to do so.

An ML algorithm can be thought of broadly as a mathematical function containing many free parameters (thousands or even millions) that takes inputs (features) and maps those features into one or more outputs (targets). The process of training an ML algorithm involves optimizing the free parameters to map the features to the targets accurately.
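
As a toy illustration of this idea (a hypothetical sketch, not drawn from any specific ESS study; the data and learning rate are invented), the loop below optimizes the two free parameters of a linear function so that it maps a synthetic feature to a synthetic target:

```python
import numpy as np

# A minimal "ML algorithm": a linear function y = w*x + b whose two free
# parameters (w, b) are optimized to map the feature x to the target y.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = 3.0 * x + 0.5 + rng.normal(0, 0.05, 200)   # synthetic "observations"

w, b = 0.0, 0.0
lr = 0.1
for _ in range(500):                  # gradient descent on mean squared error
    pred = w * x + b
    w -= lr * 2 * np.mean((pred - y) * x)
    b -= lr * 2 * np.mean(pred - y)

print(w, b)   # close to the generating values (3.0, 0.5)
```

Real ML models differ mainly in scale: the function has thousands or millions of parameters instead of two, but training is the same optimization process.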

There are two broad categories of ML algorithms relevant in most ESS applications: supervised and unsupervised learning (a third category, reinforcement learning, is used infrequently in ESS). Supervised learning, which involves presenting an ML algorithm with many examples of input-output pairs (called the training set), can be further divided, according to the type of target that is being learned, as either categorical (classification; e.g., does a given image show a star cluster or not?) or continuous (regression; e.g., what is the temperature at a given location on Earth?). In unsupervised learning, algorithms are not given a particular target to predict; rather, an algorithm's task is to learn the natural structure in a data set without being told what that structure is.
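
The two supervised flavors can be sketched with scikit-learn, one of the free libraries mentioned above (the data and targets here are invented for illustration, not real observations):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

rng = np.random.default_rng(1)

# Classification: a categorical target (e.g., "star cluster or not"),
# learned from labeled input-output pairs.
X_cls = np.vstack([rng.normal([0, 0], 0.3, (50, 2)),
                   rng.normal([3, 3], 0.3, (50, 2))])
y_cls = np.array([0] * 50 + [1] * 50)          # a label for every sample
clf = LogisticRegression().fit(X_cls, y_cls)
print(clf.predict([[0.1, -0.2], [2.9, 3.1]]))  # one point near each group

# Regression: a continuous target (e.g., temperature at a location).
x = rng.uniform(0, 10, 100)
temp = 15.0 + 0.8 * x + rng.normal(0, 0.1, 100)
reg = LinearRegression().fit(x.reshape(-1, 1), temp)
print(reg.coef_[0])                            # recovers the warming trend
```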

Supervised learning is more commonly used in ESS, although it has the disadvantage that it requires labeled data sets (in which each training input sample must be tagged, or labeled, with a corresponding output target), which are not always available. Unsupervised learning, on the other hand, may find multiple structures in a data set, which can reveal unanticipated patterns and relationships, but it may not always be clear which structures or patterns are correct (i.e., which represent genuine physical phenomena).

Books and classes about ML often present a range of algorithms that fall into one of the above categories but leave people to imagine specific applications of these algorithms on their own. In practice, however, it is usually not obvious how such approaches (some seemingly simple) may be applied in a rich variety of ways, which can create an imposing obstacle for scientists new to ML. Below we briefly describe various themes and ways in which ML is currently applied to ESS data sets (Figure 1), with the hope that this list (necessarily incomplete and biased by our personal experience) inspires readers to apply ML in their research and catalyzes new and creative use cases.

One of the simplest and most powerful applications of ML algorithms is pattern identification, which works particularly well with very large data sets that cannot be traversed manually and in which signals of interest are faint or high dimensional. Researchers, for example, applied ML in this way to detect signatures of Earth-sized exoplanets in noisy data making up millions of light curves observed by the Kepler space telescope. Detected signals can be further split into groups through clustering, an unsupervised form of ML, to identify natural structure in a data set.
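
Clustering of detected signals can be illustrated with a small, hypothetical example (the three signal "families" below are synthetic two-dimensional features, not Kepler data):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
# Imagine two summary features extracted from many detected signals,
# which fall into three natural families.
signals = np.vstack([
    rng.normal([0, 0], 0.2, (40, 2)),
    rng.normal([4, 0], 0.2, (40, 2)),
    rng.normal([2, 3], 0.2, (40, 2)),
])

# Unsupervised: no labels are given; k-means discovers the grouping itself.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(signals)
print(sorted(np.bincount(km.labels_)))   # 40 signals assigned to each group
```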

Conversely, atypical signals may be teased out of data by first identifying and excluding typical signals, a process called anomaly or outlier detection. This technique is useful, for example, in searching for signatures of new physics in particle collider experiments.
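
A minimal sketch of this idea, using scikit-learn's IsolationForest on invented data (the "events" are synthetic points, not collider measurements):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(3)
typical = rng.normal(0, 1, (500, 2))            # bulk of ordinary signals
outliers = np.array([[8.0, 8.0], [-9.0, 7.5]])  # two atypical events
X = np.vstack([typical, outliers])

# The detector learns what "typical" looks like; anything hard to fit
# into that picture is flagged as an anomaly.
det = IsolationForest(random_state=0).fit(X)
flags = det.predict(X)        # +1 = typical, -1 = anomalous
print(flags[-2:])             # the two planted outliers are flagged
```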

An important and widespread application of supervised ML is the prediction of time series data from instruments or from an index (or average value) that is intended to encapsulate the behavior of a large-scale system. Approaches to this application often involve using past data in the time series itself to predict future values; they also commonly involve additional inputs that act as drivers of the quantities measured in the time series. A typical example of ML applied to time series in ESS is its use in local weather prediction, with which trends in observed air temperature and pressure data, along with other quantities, can be predicted.
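
A simple autoregressive sketch of this approach (the oscillating series and the choice of three lags are invented for illustration; real applications would also add external driver inputs as features):

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.arange(400)
series = np.sin(2 * np.pi * t / 50) + rng.normal(0, 0.02, 400)  # synthetic index

# Build features from the series' own past values (lags).
lags = 3
X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
y = series[lags:]                               # value to predict at each step

# Fit a linear predictor by least squares (intercept as a column of ones).
A = np.column_stack([X, np.ones(len(X))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

rmse = np.sqrt(np.mean((A @ coef - y) ** 2))
print(rmse < 0.1)   # the lagged model tracks the oscillation closely
```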

In many instances, however, predicting a single time series of data is insufficient, and knowledge of the temporal evolution of a physical system over regional (or global) spatial scales is required. This spatiotemporal approach is used, for example, in attempts to predict weather across the entire globe as a function of time and 3D space in high-capacity models such as deep neural networks.

Traditional, physics-based simulations (e.g., global climate models) are often used to model complex systems, but such models can take days or weeks to run on even the most powerful computers, limiting their utility in practice. An alternate solution is to train ML models to act as emulators for physics-based models or to replicate computationally intensive portions within such models. For example, global climate models that run on a coarse grid (e.g., 50- to 100-kilometer resolution) can include subgrid processes, like convection, modeled using ML-based parameterizations. Results with these approaches are often indistinguishable from those produced by the original model alone but can run millions or billions of times faster.
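
The emulator idea can be sketched as follows; the "expensive" model here is a hypothetical one-line stand-in (a real simulation would take hours per run), and the random forest is one of many model choices:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def expensive_model(x):
    # Stand-in for a physics-based simulation mapping a parameter to an output.
    return np.sin(3 * x) + 0.5 * x**2

rng = np.random.default_rng(5)
x_train = rng.uniform(-2, 2, 2000)     # pre-run simulation inputs
y_train = expensive_model(x_train)     # corresponding simulation outputs

# Train an ML emulator on the pre-run input/output pairs.
emu = RandomForestRegressor(n_estimators=50, random_state=0)
emu.fit(x_train.reshape(-1, 1), y_train)

# The emulator reproduces the simulation closely at a fraction of the cost.
x_test = np.linspace(-1.9, 1.9, 100)
err = np.max(np.abs(emu.predict(x_test.reshape(-1, 1)) - expensive_model(x_test)))
print(err < 0.1)
```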

Many physics-based simulations proceed by integrating a set of partial differential equations (PDEs) that rely on time-varying boundary conditions and other conditions that drive interior parts of the simulation. The physics-based model then propagates information from these boundary and driver conditions into the simulation space. Imagine, for example, a 3D cube being heated at its boundary faces with time-varying heating rates, or with thermal conductivity that varies spatiotemporally within the cube. ML models can be trained to reflect the time-varying parameterizations both within and along the simulation boundaries of a physical model, which again may be computationally cheaper and faster.

If a spatiotemporal ML model of a physical system can be trained to produce accurate results under a variety of input conditions, then the implication is that the model implicitly accounts for all the physical processes that drive that system, and thus, it can be probed to gain insights into how the system works. Certain algorithms (e.g., random forests) can automatically provide a ranking of feature importance, giving the user a sense of which input parameters affect the output most and hence an intuition about how the system works.
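
For example (a hypothetical sketch: the three candidate drivers and their true influence on the target are invented), a random forest's built-in importance ranking recovers which inputs actually drive the output:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(6)
# Three candidate driver features; only the first two affect the target,
# and the first affects it most strongly.
X = rng.normal(size=(1000, 3))
y = 5 * X[:, 0] + 2 * X[:, 1] + rng.normal(0, 0.1, 1000)

rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Rank features from most to least important.
ranking = np.argsort(rf.feature_importances_)[::-1]
print(ranking)   # strongest driver first, irrelevant feature last
```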

More sophisticated techniques, such as layerwise relevance propagation, can provide deeper insights into how different features interact to produce a given output at a particular location and time. For example, a neural network trained to predict the evolution of the El Niño-Southern Oscillation (ENSO), which is predominantly associated with changes in sea surface temperature in the equatorial Pacific Ocean, revealed that precursor conditions for ENSO events occur in the South Pacific and Indian Oceans.

A ubiquitous challenge in ESS is to invert observations of a physical entity or process into fundamental information about the entity or the causes of the process (e.g., interpreting seismic data to determine rock properties). Historically, inverse problems are solved in a Bayesian framework requiring multiple runs of a forward model, which can be computationally expensive and often inaccurate. ML offers alternative methods to approach inverse problems, either by using emulators to speed up forward models or by using physics-informed machine learning to discover hidden physical quantities directly. ML models trained on prerun physics-based model outputs can be used for rapid inversion.
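
A minimal sketch of ML-based rapid inversion, with an invented one-parameter forward model standing in for a real physics code (e.g., mapping a rock property to an observable):

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def forward_model(rock_density):
    # Toy forward model: physical property -> observable quantity.
    return 10.0 / np.sqrt(rock_density)

rng = np.random.default_rng(8)
density = rng.uniform(1.5, 3.5, 5000)   # pre-run forward-model inputs
obs = forward_model(density)            # corresponding simulated observations

# Train the inverse mapping directly: observation -> underlying property.
inv = KNeighborsRegressor(n_neighbors=5).fit(obs.reshape(-1, 1), density)

# Invert a new observation in microseconds, with no Bayesian sampling.
new_obs = forward_model(np.array([2.0])).reshape(-1, 1)
est = inv.predict(new_obs)[0]
print(est)   # recovers a density close to 2.0
```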

Satellite observations often provide global, albeit low-resolution and sometimes indirect (i.e., proxy-based), measurements of quantities of interest, whereas local measurements provide more accurate and direct observations of those quantities at smaller scales. A popular and powerful use for ML models is to estimate the relationship between global proxy satellite observations and local accurate observations, which enables the creation of estimated global observations on the basis of localized measurements. This approach often includes the use of ML to create superresolution images and other data products.

Typically, uncertainty in model outputs is quantified using a single metric such as the root-mean-square of the residual (the difference between model predictions and observations). ML models can be trained to explicitly predict the confidence interval, or inherent uncertainty, of this residual value, which not only serves to indicate conditions under which model predictions are trustworthy (or dubious) but can also be used to generate insights about model performance. For instance, if there is a large error at a certain location in a model output under specific conditions, it could suggest that a particular physical process is not being properly represented in the simulation.
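
One simple way to realize this (a hypothetical sketch with invented heteroscedastic data; real studies would use more careful validation) is to train a second model to predict the size of the first model's held-out residuals:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(7)
# A target whose observation noise grows with x: predictions should be
# trustworthy at small x and increasingly dubious at large x.
x = rng.uniform(0, 1, 3000)
y = np.sin(2 * np.pi * x) + rng.normal(0, 1, 3000) * (0.02 + 0.3 * x)

X = x.reshape(-1, 1)
model = RandomForestRegressor(n_estimators=50, random_state=0)
model.fit(X[:2000], y[:2000])                 # main predictive model

# Residuals on held-out data, and a second model trained to predict them.
resid = np.abs(model.predict(X[2000:]) - y[2000:])
unc = RandomForestRegressor(n_estimators=50, random_state=0)
unc.fit(X[2000:], resid)

lo, hi = unc.predict([[0.05]])[0], unc.predict([[0.95]])[0]
print(lo < hi)   # predicted uncertainty is larger where the data are noisier
```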

Domain experts analyzing data from a given system, even in relatively small quantities, are often able to extrapolate the behavior of the systemat least conceptuallybecause of their understanding of and trained intuition about the system based on physical principles. In a similar way, laws and relationships that govern physical processes and conserved quantities can be explicitly encoded into neural network algorithms, resulting in more accurate and physically meaningful models that require less training data.

In certain applications, the values of terms or coefficients in PDEs that drive a systemand thus that should be represented in a modelare not known. Various ML algorithms were developed recently that automatically determine PDEs that are consistent with the available physical observations, affording a new and powerful discovery tool.

In still newer work, ML methods are being developed to directly solve PDEs. These methods offer accuracy comparable to traditional numerical integrators but can be dramatically faster, potentially allowing large-scale simulations of complex sets of PDEs that have otherwise been unattainable.

The Earth and space sciences are poised for a revolution centered around the application of existing and rapidly emerging ML techniques to large and complex ESS data sets being collected. These techniques have great potential to help scientists address some of the most urgent challenges and questions about the natural world facing us today. We hope the above list sparks creative and valuable new applications of ML, particularly among students and young scientists, and that it becomes a community resource to which the ESS community can add more ideas.

We thank the AGU Nonlinear Geophysics section for promoting interdisciplinary, data-driven research, for supporting the idea of writing this article, and for suggesting Eos as the ideal venue for dissemination. The authors gratefully acknowledge the following sources of support: J.B. from subgrant 1559841 to the University of California, Los Angeles, from the University of Colorado Boulder under NASA Prime Grant agreement 80NSSC20K1580, the Defense Advanced Research Projects Agency under U.S. Department of the Interior award D19AC00009, and NASA/SWO2R grant 80NSSC19K0239 and E.C. from NASA grants 80NSSC20K1580 and 80NSSC20K1275. Some of the ideas discussed in this paper originated during the 2019 Machine Learning in Heliophysics conference.

Jacob Bortnik ([emailprotected]), University of California, Los Angeles; and Enrico Camporeale, Space Weather Prediction Center, NOAA, Boulder, Colo.; also at Cooperative Institute for Research in Environmental Sciences, University of Colorado Boulder


The home of The Spot 518 – Real-Local-News – Spotlight News

TROY: Artificial intelligence and machine learning are revolutionizing the ways in which we live, work, and spend our free time, from the smart devices in our homes to the tasks our phones can carry out. This transformation is being made possible by a surge in data and computing power that can help machine learning algorithms not only perform device-specific tasks, but also help them gain intelligence or knowledge over time.

In the not-so-distant future, artificial intelligence and machine learning tasks will be carried out among connected devices through wireless networks, dramatically enhancing the capabilities of future smartphones, tablets, and sensors, and achieving whats known as distributed intelligence. As technology stands right now, however, machine learning algorithms are not efficient enough to be run over wireless networks and wireless networks are not yet ready to transmit this type of intelligence.

With the support of a National Science Foundation Faculty Early Career Development Program grant, Tianyi Chen, an assistant professor of electrical, computer, and systems engineering at Rensselaer Polytechnic Institute and member of the Rensselaer-IBM Artificial Intelligence Research Collaboration (AIRC), is exploring how to make such knowledge-sharing tools a reality.

"I think in the future, the main terminal of intelligence will be our phones. Our phones will be able to control our computers, our cars, our meeting rooms, our apartments," Chen said. "This will be powered by resource-efficient machine learning algorithms and also the support of future wireless networks."

Through his collaboration with the Lighting Enabled Systems and Applications Center at Rensselaer, Chen will validate the algorithms he develops using the centers smart conference room.

The conference room is equipped with devices that are capable of sensing the environment, processing that information, and efficiently sharing it with other devices on the network the same framework the algorithms are being designed to function within.

"We need to redesign our wireless networks to support not only traditional traffic, like video and voice, but to support new traffic such as transmittable intelligence," Chen said. "We need to design more efficient learning algorithms that are suitable for running on the wireless network."

Chen also stressed the importance of ensuring that knowledge-sharing algorithms only extract anonymized information in order to maintain data privacy as our devices and daily lives become increasingly networked. While the goals of this research are foundational in nature, Chen said the potential for future applications is wide-ranging, from power grids to urban transportation systems.

Founded in 1824, Rensselaer Polytechnic Institute is America's first technological research university. Rensselaer encompasses five schools, 32 research centers, more than 145 academic programs, and a dynamic community made up of more than 7,600 students and more than 100,000 living alumni. Rensselaer faculty and alumni include more than 145 National Academy members, six members of the National Inventors Hall of Fame, six National Medal of Technology winners, five National Medal of Science winners, and a Nobel Prize winner in Physics. With nearly 200 years of experience advancing scientific and technological knowledge, Rensselaer remains focused on addressing global challenges with a spirit of ingenuity and collaboration.

This feature was originally published on the Rensselaer website.


Which Industries are Hiring AI and Machine Learning Roles? – Dice Insights

Companies everywhere are pouring resources into artificial intelligence (A.I.) and machine learning (ML) initiatives. Many technologists believe that apps smartened with A.I. and ML tools will eventually offer better customer personalization; managers hope that A.I. will lead to better data analysis, which in turn will power better business strategies.

But which industries are actually hiring A.I. specialists? If you answer that question, it might give you a better idea of where those resources are being deployed. Fortunately, CompTIA's latest Tech Jobs Report offers a breakdown of A.I. hiring, using data from Burning Glass, which collects and analyzes millions of job postings from across the country. Check it out:

Perhaps its no surprise that manufacturing tops this list; after all, manufacturers have been steadily automating their production processes for years, and it stands to reason that they would turn to A.I. and ML to streamline things even more. In theory, A.I. will also help manufacturers do everythingfrom reducing downtime to improving supply chainsalthough it may take some time to get the models right.

The presence of healthcare, banking, and public administration likewise seems logical. "These three industries have the money to invest in A.I. and ML right now and have the greatest opportunity to see the investment pay off, fast," Gus Walker, director of product at Veritone, an A.I. tech company based in Costa Mesa, California, told Dice late last year. "That being said, the pandemic has caused industries hit the hardest to take a step back and look at how they can leverage AI and ML to rebuild or adjust in the new normal."

Compared to overall tech hiring, the number of A.I.-related job postings is still relatively small. Right now, mastering and deploying A.I. and machine learning is something of a specialist industry; but as these technologies become more commodified, and companies develop tools that allow more employees to integrate A.I. and ML into their projects, the number of job postings for A.I. and ML positions could increase over the next several years. Indeed, one IDC report from 2020 found three-quarters of commercial enterprise applications could lean on A.I. in some way by 2021.

It's also worth examining where all that A.I. hiring is taking place. It's interesting that Washington DC tops this particular list, with New York City a close second; Silicon Valley and Seattle, the nation's other big tech hubs, are somewhat further behind, at least for the moment. Washington DC is notable not only for federal government hiring, but for the growing presence of companies such as Amazon that hunger for talent skilled in artificial intelligence.

Jobs that leverage artificial intelligence are potentially lucrative, with a current median salary (according to Burning Glass) of $105,000. It's also a skill set that more technologists may need to become familiar with, especially managers and executives. "A.I. is not going to replace managers, but managers that use A.I. will replace those that do not," Rob Thomas, senior vice president of IBM's cloud and data platform, recently told CNBC. If you mention A.I. or ML on your resume and applications, make sure you know your stuff before the job interview; chances are good you'll be tested on it.



Battle of the buzzwords: AIOps vs. MLOps square up – TechTarget

AIOps and MLOps are terms that might appear to have a similar meaning, given that the acronyms on which they are based -- AI and ML -- are often used in similar contexts. However, AIOps and MLOps mean radically different things.

A team or company might use both AIOps and MLOps at the same time but not for the same purposes. Let's dig into what each is individually and then whether they can be used together.

AIOps, which stands for artificial intelligence for IT operations, is the use of AI to help perform IT operations work.

For example, a team that uses AIOps might use AI to analyze the alerts generated by its monitoring tools and then prioritize the alerts so that the team knows which ones to focus on. Or an AIOps tool could automatically find and fix an application that has crashed, using AI to determine the cause of the problem and the proper remediation.
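To make the alert-prioritization idea concrete, here is a deliberately tiny sketch, not taken from any real AIOps product: the severity names, weights, and services are all invented, and a real system would learn its scoring from historical incident data rather than hard-code it.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    service: str
    severity: str   # "critical", "warning", or "info" (invented labels)
    count: int      # how many times the alert fired in the window

# Invented weights standing in for a learned importance model.
SEVERITY_WEIGHT = {"critical": 10.0, "warning": 3.0, "info": 1.0}

def prioritize(alerts):
    """Sort alerts by a simple severity-times-frequency score, highest first."""
    return sorted(alerts,
                  key=lambda a: SEVERITY_WEIGHT[a.severity] * a.count,
                  reverse=True)

alerts = [
    Alert("checkout", "warning", 12),   # score 36
    Alert("auth", "critical", 5),       # score 50
    Alert("search", "info", 8),         # score 8
]
ranked = prioritize(alerts)
print([a.service for a in ranked])  # -> ['auth', 'checkout', 'search']
```

The point is only the shape of the workflow: raw alerts go in, a ranked list comes out, and the team starts at the top.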

Short for machine learning IT operations, MLOps is a technique that helps organizations optimize their use of machine learning and AI tools.

The core idea behind MLOps is that the stakeholders involved in making decisions about machine learning and AI are typically siloed from each other. Data scientists know how AI and machine learning algorithms work. But they don't usually collaborate closely with IT engineers, responsible for deploying AI and machine learning tools, or with compliance officers, who manage security and regulatory aspects of machine learning and AI use.

Put another way, MLOps is like DevOps in that it seeks to break down the silos that separate different types of teams. But, whereas DevOps is all about encouraging collaboration between developers and IT operations teams, MLOps focuses on collaboration between everyone who plays a role in choosing or managing machine learning and AI resources.

It's tempting to assume that AIOps and MLOps basically mean the same thing, given that AI and machine learning mean similar -- albeit not identical -- things.

But, in fact, the terms are not closely related at all. You could argue that a healthy MLOps practice would help organizations choose and deploy AIOps tools, but that's only one possible goal of MLOps. Beyond that, AIOps and MLOps don't intersect.

This is a sign that the tech community has overused the -Ops construction. When you can take any noun, add the -Ops suffix and invent a new buzzword -- without any logical consistency uniting it with similarly formed buzzwords -- it might be time to move on to new ways of naming these practices.


Machine Learning Algorithm Trained on Images of Everyday Items Detects COVID-19 in Chest X-Rays with 99% Accuracy – HospiMedica

New research using machine learning on images of everyday items is improving the accuracy and speed of detecting respiratory diseases, reducing the need for specialist medical expertise.

In a study by researchers at Edith Cowan University (Perth, Australia), the results of this technique, known as transfer learning, achieved a 99.24% success rate when detecting COVID-19 in chest X-rays. The study tackles one of the biggest challenges in image recognition machine learning: algorithms needing huge quantities of data, in this case images, to be able to recognize certain attributes accurately.

According to the researchers, the approach could be incredibly useful for identifying and diagnosing emerging or uncommon medical conditions. The key to significantly decreasing the time needed to adapt the approach to other medical issues was pre-training the algorithm on the large ImageNet database. The researchers hope that the technique can be further refined in future research to increase accuracy and further reduce training time.
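The transfer-learning recipe described above can be sketched in heavily simplified form. This is not the study's actual model: the "pretrained" weights here are random stand-ins, the data is synthetic, and the head is plain logistic regression, whereas the real work pre-trained a deep network on ImageNet and fine-tuned it on chest X-rays. The sketch only shows the structural idea of freezing a feature extractor and training a small head on limited labeled data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a feature extractor pretrained on a large generic dataset.
# In real transfer learning these weights come from ImageNet training.
W_pretrained = rng.normal(size=(64, 16))

def extract_features(x):
    """Frozen extractor: one random-projection + ReLU layer (toy)."""
    return np.maximum(x @ W_pretrained, 0.0)

# A small labeled set, mimicking the low-data medical-imaging setting.
images = rng.normal(size=(40, 64))
labels = (images[:, 0] > 0).astype(float)  # synthetic "diagnosis" label

# Train ONLY the head: plain logistic regression by gradient descent.
feats = extract_features(images)
w = np.zeros(16)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(feats @ w)))
    w -= 0.1 * feats.T @ (p - labels) / len(labels)

accuracy = (((feats @ w) > 0).astype(float) == labels).mean()
print(f"training accuracy: {accuracy:.2f}")
```

Because the expensive representation is reused rather than relearned, only a handful of parameters need labeled examples -- which is the property the researchers exploited to cut annotation cost.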

"Our technique has the capacity to not only detect COVID-19 in chest x-rays, but also other chest diseases such as pneumonia. We have tested it on 10 different chest diseases, achieving highly accurate results," said ECU School of Science researcher Dr. Shams Islam. "Normally, it is difficult for AI-based methods to perform detection of chest diseases accurately because the AI models need a very large amount of training data to understand the characteristic signatures of the diseases. The data needs to be carefully annotated by medical experts, this is not only a cumbersome process, it also entails a significant cost. Our method bypasses this requirement and learns accurate models with a very limited amount of annotated data. While this technique is unlikely to replace the rapid COVID-19 tests we use now, there are important implications for the use of image recognition in other medical diagnoses."

Related Links: Edith Cowan University


Machine Learning Can Reduce Worry About Nanoparticles In Food – Texas A&M Today – Texas A&M University Today

Machine learning algorithms developed by researchers can predict the presence of any nanoparticle in most plant species.

While crop yield has achieved a substantial boost from nanotechnology in recent years, alarms over the health risks posed by nanoparticles within fresh produce and grains have also increased. In particular, nanoparticles entering the soil through irrigation, fertilizers and other sources have raised concerns about whether plants absorb these minute particles enough to cause toxicity.

In a new study published online in the journal Environmental Science and Technology, researchers at Texas A&M University have used machine learning to evaluate the salient properties of metallic nanoparticles that make them more susceptible to plant uptake. The researchers said their algorithm could indicate how much plants accumulate nanoparticles in their roots and shoots.

Nanoparticles are a burgeoning trend in several fields, including medicine, consumer products and agriculture. Depending on the type of nanoparticle, some have favorable surface properties, charge and magnetism, among other features. These qualities make them ideal for a number of applications. For example, in agriculture, nanoparticles may be used as antimicrobials to protect plants from pathogens. Alternatively, they can be used to bind to fertilizers or insecticides and then programmed for slow release to increase plant absorption.

These agricultural practices and others, like irrigation, can cause nanoparticles to accumulate in the soil. However, with the different types of nanoparticles that could exist in the ground and a staggeringly large number of terrestrial plant species, including food crops, it is not clearly known if certain properties of nanoparticles make them more likely to be absorbed by some plant species than others.

"As you can imagine, if we have to test the presence of each nanoparticle for every plant species, it is a huge number of experiments, which is very time-consuming and expensive," said Xingmao "Samuel" Ma, associate professor in the Zachry Department of Civil and Environmental Engineering. "To give you an idea, silver nanoparticles alone can have hundreds of different sizes, shapes and surface coatings, and so, experimentally testing each one, even for a single plant species, is impractical."

Instead, for their study, the researchers chose two different machine learning algorithms, an artificial neural network and gene-expression programming. They first trained these algorithms on a database created from past research on different metallic nanoparticles and the specific plants in which they accumulated. In particular, their database contained the size, shape and other characteristics of different nanoparticles, along with information on how much of these particles were absorbed from soil or nutrient-enriched water into the plant body.

Once trained, their machine learning algorithms could correctly predict the likelihood of a given metallic nanoparticle to accumulate in a plant species. Also, their algorithms revealed that when plants are in a nutrient-enriched or hydroponic solution, the chemical makeup of the metallic nanoparticle determines the propensity of accumulation in the roots and shoots. But if plants are grown in soil, the contents of organic matter and the clay in soil are key to nanoparticle uptake.
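A toy version of this prediction task can illustrate the idea, with the caveat that everything here is invented: the study used a curated experimental database and two algorithms (an artificial neural network and gene-expression programming), whereas this sketch generates synthetic particle and soil properties and fits an ordinary least-squares model as a stand-in.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200

# Synthetic stand-ins for the kinds of properties the study cataloged.
size_nm = rng.uniform(5, 100, n)         # particle diameter (nm)
organic_matter = rng.uniform(0, 10, n)   # soil organic matter (%)
clay = rng.uniform(0, 40, n)             # soil clay content (%)

# Invented "ground truth": smaller particles and low-organic soils
# give higher relative uptake, plus measurement noise.
uptake = (2.0 - 0.01 * size_nm - 0.08 * organic_matter - 0.02 * clay
          + rng.normal(0, 0.05, n))

# Fit a linear model from properties to uptake (stand-in for the ANN).
X = np.column_stack([np.ones(n), size_nm, organic_matter, clay])
coef, *_ = np.linalg.lstsq(X, uptake, rcond=None)

pred = X @ coef
r2 = 1 - ((uptake - pred) ** 2).sum() / ((uptake - uptake.mean()) ** 2).sum()
print(f"R^2 on synthetic data: {r2:.3f}")
```

The fitted coefficients play the same role as the trained algorithms in the study: once learned from existing measurements, they can score new particle/soil combinations without running a fresh experiment for each one.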

Ma said that while the machine learning algorithms could make predictions for most food crops and terrestrial plants, they might not yet be ready for aquatic plants. He also noted that the next step in his research would be to investigate if the machine learning algorithms could predict nanoparticle uptake from leaves rather than through the roots.

"It is quite understandable that people are concerned about the presence of nanoparticles in their fruits, vegetables and grains," said Ma. "But instead of not using nanotechnology altogether, we would like farmers to reap the many benefits provided by this technology but avoid the potential food safety concerns."

Other contributors include Xiaoxuan Wang, Liwei Liu and Weilan Zhang from the civil and environmental engineering department.

This research is partly funded by the National Science Foundation and the Ministry of Science and Technology, Taiwan under the Graduate Students Study Abroad Program.


Veritone : What Is MLOps? | A Complete Guide to Machine Learning Operations – Marketscreener.com

Table of contents:

What Is MLOps + How Does It Work?
Why Do You Need MLOps?
What Problems Does MLOps Solve?
How Do You Implement MLOps In Your Organization?
How Do I Learn MLOps?
Want to Learn Even More About MLOps?

Machine learning operations, or MLOps, is the term given to the process of creating, deploying, and maintaining machine learning models. It's a discipline that combines machine learning, DevOps, and data engineering with the goal of finding faster, simpler, and more effective ways to productize machine learning. When done right, MLOps can help organizations align their models with their unique business needs, as well as regulatory requirements. Keep reading to find out how you can implement MLOps with your team.

What Is MLOps + How Does It Work?

A typical MLOps process looks like this: a business goal is defined, the relevant data is collected and cleaned, and then a machine learning model is built and deployed. Or maybe we should say that's what a typical MLOps process is supposed to look like, but many organizations are struggling to get it down.

Productizing machine learning, or ML, is one of the biggest challenges in AI practices today. Many organizations are desperate to figure out how to convert the insights discovered by data scientists into tangible value for their business-which is easier said than done.

It requires unifying multiple processes across multiple teams-starting with defining business objectives and continuing all the way through data acquisition and model development and deployment.

This unification is achieved through a set of best practices for communication and collaboration between the data engineers who acquire the data, the data scientists who prepare the data and develop the model, and the operations professionals who serve the models.
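The handoffs described above can be sketched as a minimal pipeline. This is an intentionally toy illustration: the stage names mirror the roles in the text, but the data, model, and "deployment" are invented placeholders, and a real MLOps setup would use orchestration tooling, a model registry, and proper serving infrastructure.

```python
def acquire_data():
    """Data engineers: pull raw records (hard-coded here)."""
    return [{"hours": 1, "churned": 0}, {"hours": 10, "churned": 0},
            {"hours": None, "churned": 1}, {"hours": 0, "churned": 1}]

def prepare_data(raw):
    """Data scientists: clean and validate (drop missing values)."""
    return [r for r in raw if r["hours"] is not None]

def train_model(rows):
    """Data scientists: 'train' a toy threshold model on usage hours."""
    threshold = sum(r["hours"] for r in rows) / len(rows)
    return lambda hours: 1 if hours < threshold else 0  # predict churn

def deploy(model):
    """Operations: wrap the trained model behind a serving interface."""
    return {"predict": model, "version": "v1"}

# The whole lifecycle as one composed flow: acquire -> prepare -> train -> serve.
service = deploy(train_model(prepare_data(acquire_data())))
print(service["predict"](0.5), service["predict"](8))  # -> 1 0
```

What MLOps adds on top of this skeleton is the discipline around it: versioning each stage's outputs, monitoring the deployed model, and feeding results back to the earlier stages.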

Why Do You Need MLOps?

Businesses are dealing with more data than ever before. In a recent study, the IBM Institute for Business Value found that 59% of companies have accelerated their digital transformation. This pivot to digital-first enterprise strategy means continued investments in data, analytics, and AI capabilities have never been more critical.

Leveraging data as a strategic asset can lead to accelerated business growth and increased revenue. According to McKinsey, companies with the greatest overall growth in revenue and earnings receive a significant proportion of that boost from data and analytics. If you're hoping to replicate this growth and set your business up for sustainable success, ad hoc initiatives and one-off projects won't cut it. You'll need a well-planned data strategy that brings the best practices of software development and applies them to data science-which is where MLOps comes in.

MLOps bridges the gap between gathering data and turning that data into actionable business value. A successful MLOps strategy leverages the best of data science with the best of operations to streamline scalable, repeatable machine learning from end to end. It empowers organizations to approach this new era of data with confidence and reap the benefits of machine learning and AI in real life.

In addition to increased growth and revenue, benefits include faster go-to-market times and lower operational costs. With a solid framework for your data science and DevOps teams to follow, managers can spend more time thinking through strategy and individual contributors can be more agile.

What Problems Does MLOps Solve?

Let's dig into specifics. Applying MLOps best practices solves a variety of the problems that plague businesses around the globe, including:

Poor Communication

No matter how your company is organized, it's likely that your data scientists, software engineers, and operations managers live in very different worlds. This silo effect kills communication, collaboration, and productivity.

Without collaboration, you can forget about simplifying and automating the deployment of machine learning models in large-scale production environments. MLOps solves this problem by establishing dynamic pipelines and adaptable frameworks that keep everyone on the same page-reducing friction and opening up bottlenecks.

Unfinished Projects

As VentureBeat reports, 87% of machine learning models never make it into production. In other words, only about 1 in 10 of the models data scientists build ever ends up producing something of value for the company. This sad statistic represents lost revenue, wasted time, and a growing sense of frustration and fatigue in data scientists everywhere. MLOps solves this problem by first ensuring all key stakeholders are on board with a project before it kicks off. MLOps then supports and optimizes every step of the process, ensuring that each model can make its way toward production without any lag (and without the never-ending email chains).

Lost Learnings

We already talked about the silo effect, but it rears its ugly head again here. Creating and serving ML models requires input and expertise from multiple different teams, with each team driving a different part of the process. Without communication and collaboration between everyone involved, key learnings and critical insights will remain stuck within each silo. MLOps solves this problem by bringing together different teams with one central hub for testing and optimization. MLOps best practices make it easy to share learnings that can be used to improve the model and rapidly redeploy.

Redundancy

Lengthy development and deployment cycles mean that, way too often, evolving business objectives make models redundant before they've even been fully developed. Or the changing business objectives mean that the ML system needs to be retrained immediately after deployment. MLOps solves these issues by implementing best practices across the entire process-making productizing ML faster at every stage. MLOps best practices also build in room for adjustments, so your models can adapt to your changing business needs.

Misuse of Talent

Data scientists are not software engineers and vice versa. They have different focuses, different skill sets, and very different priorities. Expecting one to perform the tasks of the other is a recipe for failure. Unfortunately, many organizations make this mistake while trying to cut corners or speed up the process of getting machine learning models into production. MLOps solves this problem by bringing both disciplines together in a way that lets each use their respective talents in the best way possible-laying the groundwork for long-term success.

Noncompliance

The age of big data is accompanied by the age of intense, ever-changing regulation and compliance systems. Many organizations struggle to meet data compliance standards, let alone remain adaptable for future iterations and addendums. MLOps solves this problem by implementing a comprehensive plan for governance. This ensures that each model, whether new or updated, is compliant with original standards. MLOps also ensures that all data programs are auditable and explainable by introducing monitoring tools.
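One concrete form such monitoring takes is an automated drift check. The sketch below is hypothetical: the tolerance, data, and rule are invented for illustration, and production governance tooling would track many features, log every check for auditability, and use more robust statistics than a simple mean-shift test.

```python
import statistics

def drift_alert(train_values, live_values, tolerance=0.5):
    """Flag drift when the live mean moves more than `tolerance`
    training standard deviations away from the training mean."""
    mu = statistics.mean(train_values)
    sigma = statistics.stdev(train_values)
    shift = abs(statistics.mean(live_values) - mu) / sigma
    return shift > tolerance

# Baseline feature values seen during training.
train = [10, 12, 11, 13, 12, 11, 10, 12]

print(drift_alert(train, [11, 12, 11, 12]))   # -> False: matches baseline
print(drift_alert(train, [18, 20, 19, 21]))   # -> True: clear shift, alert
```

A check like this, run on a schedule and logged, is what turns "the model is compliant" from a one-time claim at deployment into an ongoing, auditable property.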

How Do You Implement MLOps In Your Organization?

Now that you're sold on the benefits of MLOps, it's time to figure out how you can bring the discipline to life at your organization.

The good news is that MLOps is still a relatively new discipline, which means even if you are just now getting started you aren't far behind other organizations. The bad news is that MLOps is still a relatively new discipline, which means there aren't many tried-and-true formulas for success readily available for you to replicate at your organization. However, ModelOps platforms with ready-to-deploy models can accelerate the MLOps process.

That being said, if you are ready to invest in machine learning there are a few ways you can set your organization up for success. Let's dive into how to achieve MLOps success in more detail:

MLOps Teams

Start by looking at your teams to confirm you have the necessary skill sets covered. We've already established that productizing ML models requires a set of skills that, up until now, organizations have considered separate. So, it's likely that your data engineers, data scientists, software engineers, and operations professionals will be dispersed throughout various departments.

You don't need to alter your entire organizational structure to create an MLOps team. Instead, consider creating a hybrid team with cross-functionality. This way you can cover a wide range of skills without too much disruption to your organization. Alternatively, you may choose to use a solution like aiWARE that can rapidly deploy and scale AI within your applications and business processes without requiring AI developers and ML engineers.

Your MLOps team will need to cover 4 main areas:

Scoping

The first stage in a typical machine learning lifecycle is scoping. This stage consists of scoping out the project by identifying what business problem(s) you are aiming to solve with AI.

This stage usually involves collaborators with a deep understanding of the potential business problems that can be solved with AI, such as director-level managers and above. It also usually includes collaborators who are intimately familiar with the data, such as senior data scientists.

Data

The second stage in a typical ML lifecycle is data. This stage starts with acquiring the data and continues through cleaning, processing, organizing, and storing the data.

Stage two usually involves both data engineers and data scientists along with product managers.

Modeling

Stage three in the typical ML lifecycle is modeling. In this stage, the data from stage two is used to train, test, and refine ML models.

This third stage usually involves both data engineers and data scientists (and even ML architects if you have them). It also requires feedback and input from cross-functional stakeholders.

Deployment

The fourth and final stage in the typical machine learning lifecycle is deployment. Trained models are deployed into production.

This stage usually involves collaborators that have experience with machine learning and the DevOps process, such as machine learning engineers or DevOps specialists.

The exact composition and organization of the team will vary depending on your individual business needs, but the essential part is ensuring that each skillset is covered by someone.

MLOps Tools

In addition to having the right team, you'll also need to have the right tools in place to achieve MLOps success. MLOps is a relatively new, rapidly growing field. And, as is often the case in such fields, a large variety of tools have been created to help manage and streamline the processes involved.

When putting together your MLOps toolkit, you'll need to consider a few different factors such as the MLOps tasks you need to address, the languages and libraries your data scientists will be using, the level of product support you'll need, which cloud provider(s) you'll be working with, what AI models and engines to utilize, etc.

Once you build models, you can easily onboard them into a production-ready environment with aiWARE. This option allows you to rapidly deploy models that solve real-world business problems. And flexible API integrations make it easy to customize the solution to your business needs.

How Do I Learn MLOps?

As we've already mentioned, MLOps is a rapidly growing field. And that massive growth is only expected to continue-with 60% of companies planning to accelerate their process automation in the next 2 years, according to the IBV Trending Insights report.

This increased investment has made MLOps, or DevOps for machine learning, a necessary skill set at companies in nearly every industry. According to the LinkedIn emerging jobs report, the hiring for machine learning and artificial intelligence roles grew 74% annually between 2015 and 2019. This makes MLOps the top emerging job in the U.S.

And it's experiencing a talent shortage. There are many factors contributing to the MLOps talent crunch, the biggest being an overwhelming number of platforms and tools to learn, a lack of clarity in roles and responsibilities, and a shortage of dedicated courses for MLOps engineers.

All that to say, if you're looking to get your foot in the MLOps door there's no better time than right now. We recommend checking out some of these great resources:

MLOps Resources

This course, currently available on Coursera, is a great jumping-off point if you're new to MLOps. Primarily intended for data scientists and software engineers that are looking to develop MLOps skills, this course introduces participants to MLOps tools and best practices for deploying, evaluating, monitoring, and operating production ML systems on Google Cloud.

This course, currently available on Coursera, is for those that have already nailed the fundamentals. It covers deep MLOps concepts as well as production engineering capabilities. You'll learn how to use well-established tools and methodologies to conceptualize, build and maintain integrated systems that continuously operate in production.

This book, by Mark Treveil and the Dataiku Team, was written specifically for the people directly facing the task of scaling ML in production. It's a guide for creating a successful MLOps environment, from the organizational to the technical challenges involved.

This seminar series takes a look at the frontier of ML. It aims to drive research focus to interesting questions and stir up conversations around ML topics. Every seminar is live-streamed on YouTube, and viewers are encouraged to ask questions in the live chat; recordings of past seminars remain available on YouTube afterward.

This book, by Andriy Burkov, offers a 'theory of the practice' approach. It provides readers with an overview of the problems, questions, and best practices of machine learning problems.

We also highly recommend joining the MLOps community on Slack. An open community for all enthusiasts of ML and MLOps, it's a place to learn many interesting things and broaden your knowledge. Both amateurs and professionals alike are welcome to join the conversation.

Want to Learn Even More About MLOps?

In the coming weeks, we'll be digging into some core MLOps topics that may interest you. If you're interested in diving deeper, keep an eye on our blog. We'll publish more in-depth content that covers MLOps best practices, ModelOps, MLOps tools, and MLOps versus AIOps.

Ready to dig into another MLOps resource right away? Check out this on-demand webinar: MLOps Done Right: Best Practices to Deploy, Integrate, Scale, Monitor, and Comply.
