Artificial intelligence was supposed to transform health care. It hasn’t. – POLITICO

"Companies come in promising the world and often don't deliver," said Bob Wachter, head of the department of medicine at the University of California, San Francisco. "When I look for examples of true AI and machine learning that's really making a difference, they're pretty few and far between. It's pretty underwhelming."

Administrators say algorithms from outside companies, the software that processes data, don't always work as advertised because each health system has its own technological framework. So hospitals are building out engineering teams and developing artificial intelligence and other technology tailored to their own needs.

But it's slow going. Research based on job postings shows health care behind every industry except construction in adopting AI.

The Food and Drug Administration has taken steps to develop a model for evaluating AI, but it is still in its early days. There are questions about how regulators can monitor algorithms as they evolve and rein in the technology's detrimental aspects, such as bias that threatens to exacerbate health care inequities.

"Sometimes there's an assumption that AI is working, and it's just a matter of adopting it, which is not necessarily true," said Florenta Teodoridis, a professor at the University of Southern California's business school whose research focuses on AI. She added that being unable to understand why an algorithm came to a certain result is fine for things like predicting the weather. But in health care, its impact is potentially life-changing.

Despite the obstacles, the tech industry is still enthusiastic about AI's potential to transform health care.

"The transition is slightly slower than I hoped but well on track for AI to be better than most radiologists at interpreting many different types of medical images by 2026," Geoffrey Hinton told POLITICO via email. He said he never suggested that we should get rid of radiologists, but that we should let AI read scans for them.

If he's right, artificial intelligence will start taking on more of the rote tasks in medicine, giving doctors more time to spend with patients to reach the right diagnosis or develop a comprehensive treatment plan.

"I see us moving as a medical community to a better understanding of what it can and cannot do," said Lara Jehi, chief research information officer for the Cleveland Clinic. "It is not going to replace radiologists, and it shouldn't replace radiologists."

Radiology is one of the most promising use cases for AI. The Mayo Clinic has a clinical trial evaluating an algorithm that aims to reduce the hours-long process oncologists and physicists undertake to map out a surgical plan for removing complicated head and neck tumors.

"An algorithm can do the job in an hour," said John D. Halamka, president of Mayo Clinic Platform. "We've taken 80 percent of the human effort out of it." The technology gives doctors a blueprint they can review and tweak without having to do the basic physics themselves, he said.

NYU Langone Health has also experimented with using AI in radiology. The health system has collaborated with Facebook's Artificial Intelligence Research group to reduce the time it takes to get an MRI from one hour to 15 minutes. Daniel Sodickson, a radiological imaging expert at NYU Langone who worked on the research, sees opportunity in AI's ability to downsize the amount of data doctors need to review.


Covid has accelerated AI's development. Throughout the pandemic, health providers and researchers shared data on the disease and anonymized patient data to crowdsource treatments.

Microsoft and Adaptive Biotechnologies, which partner on machine learning to better understand the immune system, put their technology to work on patient data to see how the virus affected the immune system.

"The amount of knowledge that's been obtained and the amount of progress has just been really exciting," said Peter Lee, corporate vice president of research and incubations at Microsoft.

There are other success stories. For example, Ochsner Health in Louisiana built an AI model for detecting early signs of sepsis, a life-threatening response to infection. To convince nurses to adopt it, the health system created a response team to monitor the technology for alerts and take action when needed.

"I'm calling it our care traffic control," said Denise Basow, chief digital officer at Ochsner Health. Since implementation, she said, deaths from sepsis have been declining.

The biggest barrier to the use of artificial intelligence in health care has to do with infrastructure.

Health systems need to enable algorithms to access patient data. Over the last several years, large, well-funded systems have invested in moving their data into the cloud, creating vast data lakes ready to be consumed by artificial intelligence. But thats not as easy for smaller players.

Another problem is that every health system is unique in its technology and the way it treats patients. That means an algorithm may not work as well everywhere.

Over the last year, an independent study on a widely used sepsis detection algorithm from EHR giant Epic showed poor results in real-world settings, suggesting where and how hospitals used the AI mattered.

This quandary has led top health systems to build out their own engineering teams and develop AI in-house.

That could create complications down the road. Unless health systems sell their technology, it's unlikely to undergo the type of vetting that commercial software would. That could allow flaws to go unfixed for longer than they might otherwise. It's not just that health systems are implementing AI while no one's looking. It's also that the stakeholders in artificial intelligence, across health care, technology and government, haven't agreed upon standards.

A lack of quality data, which gives algorithms material to work with, is another significant barrier to rolling out the technology in health care settings.


Much data comes from electronic health records but is often siloed among health care systems, making it more difficult to gather sizable data sets. For example, a hospital may have complete data on one visit, but the rest of a patient's medical history is kept elsewhere, making it harder to draw inferences about how to proceed in caring for the patient.

"We have pieces and parts, but not the whole," said Aneesh Chopra, who served as the government's chief technology officer under former President Barack Obama and is now president of data company CareJourney.

While some health systems have invested in pulling data from a variety of sources into a single repository, not all hospitals have the resources to do that.

Health care also has strong privacy protections that limit the amount and type of data tech companies can collect, leaving the sector behind others in terms of algorithmic horsepower.

Importantly, not enough strong data on health outcomes is available, making it more difficult for providers to use AI to improve how they treat patients.

That may be changing. A recent series of studies on a sepsis algorithm included copious details on how to use the technology in practice and documented physician adoption rates. Experts have hailed the studies as a good template for how future AI studies should be conducted.

But working with health care data is also more difficult than in other sectors because it is highly individualized.

"We found that even internally across our different locations and sites, these models don't have a uniform performance," said Jehi of the Cleveland Clinic.

And the stakes are high if things go wrong. "The number of paths that patients can take are very different than the number of paths that I can take when I'm on Amazon trying to order a product," Wachter said.

Health experts also worry that algorithms could amplify bias and health care disparities.

For example, a 2019 study found that a hospital algorithm more often steered white patients than Black patients toward programs aiming to provide better care, even when controlling for how sick the patients were.

Last year, the FDA published a set of guidelines for using AI as a medical device, calling for the establishment of good machine learning practices, oversight of how algorithms behave in real-world scenarios and development of research methods for rooting out bias.

The agency subsequently published more specific guidelines on machine learning in radiological devices, requiring companies to outline how the technology is supposed to perform and provide evidence that it works as intended. The FDA has cleared more than 300 AI-enabled devices, largely in radiology, since 1997.

Regulating algorithms is a challenge, particularly given how quickly the technology advances. The FDA is attempting to head that off by requiring companies to institute real-time monitoring and submit plans on future changes.

But in-house AI isn't subject to FDA oversight. Bakul Patel, former head of the FDA's Center for Devices and Radiological Health and now Google's senior director for global digital health strategy and regulatory affairs, said the FDA is thinking about how it might regulate noncommercial artificial intelligence inside health systems, but he added, "there's no easy answer."

The FDA has to thread the needle, he said, taking enough action to mitigate flaws in algorithms while not stifling AI's potential.

Some argue that public-private standards for AI would help advance the technology. Groups, including the Coalition for Health AI, whose members include major health systems and universities as well as Google and Microsoft, are working on this approach.

But the standards they envision would be voluntary, which could blunt their impact if not widely adopted.


Deep learning algorithm predicts Cardano to trade above $2 by the end of August – Finbold – Finance in Bold

The price of Cardano (ADA) has mainly traded in the green in recent weeks as the network, dubbed an "Ethereum killer," continues to record increased blockchain development.

Specifically, the Cardano community is projecting a possible rise in the token's value, especially with the upcoming Vasil hard fork.

In this line, NeuralProphet's PyTorch-based price prediction algorithm, built on an open-source machine learning framework, has predicted that ADA would trade at $2.26 by August 31, 2022.

Although the prediction model covers the period from July 31 to December 31, 2022, and is not an accurate indicator of future prices, its predictions had historically proven relatively accurate up until the abrupt market collapse of the algorithm-based stablecoin project TerraUSD (UST).
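For readers curious how such a forecast is produced, here is a minimal sketch of a NeuralProphet fit. The CSV file, its column layout and the 30-day horizon are illustrative assumptions rather than details from the article.

```python
# Hedged sketch only: fit NeuralProphet to a hypothetical daily ADA price series.
import pandas as pd
from neuralprophet import NeuralProphet

# Hypothetical input file with the two columns NeuralProphet expects:
# "ds" (date) and "y" (the value to forecast, here the ADA/USD close).
ada_prices = pd.read_csv("ada_usd_daily.csv")

model = NeuralProphet()                    # PyTorch-based forecaster
metrics = model.fit(ada_prices, freq="D")  # train on the daily history

# Extend the frame 30 days past the last observation and predict.
future = model.make_future_dataframe(ada_prices, periods=30)
forecast = model.predict(future)
print(forecast[["ds", "yhat1"]].tail())    # "yhat1" holds the predicted values
```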

However, the prediction aligns with the generally bullish sentiment around ADA that stems from the network activity aimed at improving the asset's utility. As reported by Finbold, Cardano founder Charles Hoskinson revealed that the highly anticipated Vasil hard fork is ready to be rolled out after delays.

It is worth noting that despite minor gains, ADA is yet to show any significant reaction to the upgrade, but the token's proponents are glued to the price movement as it shows signs of recovery. Similarly, the token has benefitted from the recent two-month-long rally across the general cryptocurrency market.

Elsewhere, the CoinMarketCap community is projecting that ADA will trade at $0.58 by the end of August. The prediction is supported by about 17,877 community members, representing a price growth of about 8.71% from the token's current value.

For September, the community has placed the prediction at $0.5891, a growth of about 9% from the current price. Interestingly, the algorithm predicts that ADA will trade at $1.77 by the end of September. Overall, both prediction platforms indicate an increase from the digital asset's current price.

By press time, the token was trading at $0.53 with gains of less than 1% in the last 24 hours.

In general, multiple investors are aiming to capitalize on the Vasil hard fork, especially with Cardano clarifying that the upgrade is proceeding according to plan.



Federated learning uses the data right on our devices – GCN.com

An approach called federated learning trains machine learning models on devices like smartphones and laptops, rather than requiring the transfer of private data to central servers.

The biggest benchmarking data set to date for a machine learning technique designed with data privacy in mind is now available open source.

"By training in-situ on data where it is generated, we can train on larger real-world data," explains Fan Lai, a doctoral student in computer science and engineering at the University of Michigan, who presents the FedScale training environment at the International Conference on Machine Learning this week. A paper on the work is available on arXiv.

"This also allows us to mitigate privacy risks and high communication and storage costs associated with collecting the raw data from end-user devices into the cloud," Lai says.

Still a new technology, federated learning relies on an algorithm that serves as a centralized coordinator. It delivers the model to the devices, trains it locally on the relevant user data, and then brings the partially trained models back and combines them to generate a final global model.
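To make the coordinator loop above concrete, here is a minimal sketch of federated averaging on simulated devices, using plain NumPy and a toy linear model. It illustrates the general technique, not FedScale's implementation, and every number in it is made up.

```python
# Federated averaging (FedAvg) sketch: the server sends weights out, each device
# trains on its own private data, and the server averages the returned weights.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Train a linear regression locally on one device's private data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # gradient of mean squared error
        w -= lr * grad
    return w

def federated_round(global_w, device_data):
    """One communication round: train locally everywhere, then average."""
    local_ws = [local_update(global_w, X, y) for X, y in device_data]
    sizes = np.array([len(y) for _, y in device_data], dtype=float)
    # Weight each device's model by how much data it holds.
    return np.average(local_ws, axis=0, weights=sizes)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
devices = []
for _ in range(10):                              # ten simulated devices
    X = rng.normal(size=(50, 2))
    y = X @ true_w + 0.1 * rng.normal(size=50)
    devices.append((X, y))

w = np.zeros(2)
for _ in range(20):                              # twenty rounds of coordination
    w = federated_round(w, devices)
print("learned weights:", w)                     # approaches [2.0, -1.0]
```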

For a number of applications, this workflow provides an added data privacy and security safeguard. Messaging apps, health care data, personal documents, and other sensitive but useful training materials can improve models without fear of data center vulnerabilities.

In addition to protecting privacy, federated learning could make model training more resource-efficient by cutting down and sometimes eliminating big data transfers, but it faces several challenges before it can be widely used. Training across multiple devices means that there are no guarantees about the computing resources available, and uncertainties like user connection speeds and device specs lead to a pool of data options with varying quality.

"Federated learning is growing rapidly as a research area," says Mosharaf Chowdhury, associate professor of computer science and engineering. "But most of the work makes use of a handful of data sets, which are very small and do not represent many aspects of federated learning."

And this is where FedScale comes in. The platform can simulate the behavior of millions of user devices on a few GPUs and CPUs, enabling developers of machine learning models to explore how their federated learning program will perform without the need for large-scale deployment. It serves a variety of popular learning tasks, including image classification, object detection, language modeling, speech recognition, and machine translation.

"Anything that uses machine learning on end-user data could be federated," Chowdhury says. "Applications should be able to learn and improve how they provide their services without actually recording everything their users do."

The authors specify several conditions that must be accounted for to realistically mimic the federated learning experience: heterogeneity of data, heterogeneity of devices, and heterogeneous connectivity and availability conditions, all with an ability to operate at multiple scales on a broad variety of machine learning tasks. FedScale's data sets are the largest released to date that cater specifically to these challenges in federated learning, according to Chowdhury.

"Over the course of the last couple years, we have collected dozens of data sets. The raw data are mostly publicly available, but hard to use because they are in various sources and formats," Lai says. "We are continuously working on supporting large-scale on-device deployment, as well."

The FedScale team has also launched a leaderboard to promote the most successful federated learning solutions trained on the university's system.

The National Science Foundation and Cisco supported the work.

This article was originally published in Futurity. It has been republished under the Attribution 4.0 International license.


The role of AI and machine learning in revolutionizing clinical research – MedCity News

Advanced technologies such as artificial intelligence (AI), machine learning (ML), and natural language processing (NLP) have become a cornerstone of successful modern clinical trials, integrated into many of the technologies enabling the transformation of clinical development.

The health and life sciences industry's dramatic leap forward into the digital age in recent years has been a game-changer, with innovations and scientific breakthroughs that are improving patient outcomes and population health. Consequently, embracing digital transformation is no longer an option but an industry standard. Let's explore what that truly means for clinical development.

An accelerated path to better results

Over the years, technology has equipped clinical leaders to successfully reduce costs while accelerating stages of research and development. These technologies have aided in structuring complex data environments, a need created by the exponential growth in data sources containing valuable information for clinical research.

Today, the volume, variety and velocity of structured and unstructured data generated by clinical trials are outpacing traditional data management processes. The reality is that there is simply too much data coming from too many sources to be manageable by human teams alone. As a response to this, AI/ML technologies have proven in recent years to hold the remarkable potential to automate data standardization while ensuring quality control, in turn easing the burden on researchers with minimal manual intervention.

Once data is collected and streamlined within a single automated ecosystem, clinical trial leaders begin to benefit from faster and smarter insights driven by the application of machine analysis. These include predictive and prescriptive insights that can help researchers and sites uncover best practices for future processes. Altogether, these capabilities can improve research outcomes and patients' experience and safety.

A look into compliance and privacy

When we think about the use of patient data, privacy and compliance adherence must be a consideration. The bar is set high for any technology being implemented into clinical trial execution.

Efforts must adhere to Good Clinical Practice (GCP) and validation requirements that ensure an outcome is valid by being predictable and repeatable. Additionally, there must be transparency and explainability around how any AI algorithm makes decisions, to prove correctness and the avoidance of any potential bias. This is becoming more essential than ever from a compliance perspective as regulators look at algorithms as part of what they base their approvals on.

Keeping the h(uman) in healthcare

The goal of implementing AI/ML in clinical research is not to replace humans with digital tools but to increase their productivity through high-efficiency human augmentation and the automation of mundane tasks. Before the application of advanced technologies to clinical trials, there was an unmet need for an agile methodology where researchers and organizers could solely focus on critical requirements and the delivery of results.

The intelligent application of technology allows for human interaction with AI models to bring better outcomes to research, and even in its most advanced stage, data science technology never replaces the human data scientist. It does, however, provide a mutually beneficial arrangement wherein the augmentation of workflows allows data scientists to ease their data burden while AI models improve through human feedback. This loop of continuous learning is typically operationalized through Continuous Integration/Continuous Delivery (CI/CD) pipelines.

The integration of human capacity and technology results in accelerated efficiency, improved compliance and superb patient personalization. Furthermore, regardless of how efficient algorithms become, the decision-making power will always belong to humans.

Envisioning a bold future

AI/ML strategies are redefining the clinical development cycle like never before, and as the industry leaps into new frontiers, digital transformation is leading the way to incredible advancements that will revolutionize the space. Leaders today have the opportunity to apply advanced technologies to solve historically complicated problems in the field.

Already, we've seen better site selection, more effective risk-based quality management, improved patient monitoring and safety, enhanced patient recruitment and engagement, and improved overall study quality, and this is just the beginning.



Keeping water on the radar: Machine learning to aid in essential water cycle measurement – CU Boulder Today

Department of Computer Science assistant professor Chris Heckman and CIRES research hydrologist Toby Minear have been awarded a Grand Challenge Research & Innovation Seed Grant to create an instrument that could revolutionize our understanding of the amount of water in our rivers, lakes, wetlands and coastal areas by greatly increasing the places where we measure it.

The new low-cost instrument would use radar and machine learning to quickly and safely measure water levels in a variety of scenarios.

This work could prove vital as the USDA recently proclaimed the entire state of Colorado to be a "primary natural disaster area" due to an ongoing drought that has made the American West potentially the driest it has been in over a millennium. Other climate records across the globe also continue to be broken, year after year. Our understanding of the changing water cycle has never been more essential at a local, national and global level.

A fundamental part of developing this understanding is knowing how the surface height of bodies of water changes. Currently, measuring changing water surface levels involves high-cost sensors that are easily damaged by floods, difficult to install and time-consuming to maintain.

"One of the big issues is that we have limited locations where we take measurements of surface water heights," Minear said.

Heckman and Minear are aiming to change this by building a low-cost instrument that doesn't need to be in a body of water to read its average water surface level. It can instead be placed several meters away, safely elevated above floods.

The instrument, roughly the size of two credit-cards stacked on one another, relies on high-frequency radio waves, often referred to as "millimeter wave", which have only been made commercially accessible in the last decade.

Through radar, these short waves can be used to measure the distance between the sensor and the surface of a body of water with great specificity. As the water's surface level increases or decreases over time, the distance between the sensor and the water's surface level changes.
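The article does not say which radar scheme the instrument uses; assuming a frequency-modulated continuous-wave (FMCW) approach, which is common in commercial millimeter-wave modules, the range falls out of the measured beat frequency as in this small sketch. All numbers are illustrative.

```python
# Hedged FMCW sketch: range from the beat frequency of a linear chirp.
C = 3.0e8  # speed of light, m/s

def fmcw_range(beat_hz, chirp_s, bandwidth_hz):
    """Range for a linear FMCW chirp: R = c * f_b * T / (2 * B)."""
    return C * beat_hz * chirp_s / (2.0 * bandwidth_hz)

# Example: a 4 GHz sweep over 50 microseconds with a 1.6 MHz beat tone
# corresponds to about 3.0 m between the sensor and the water surface.
print(fmcw_range(beat_hz=1.6e6, chirp_s=50e-6, bandwidth_hz=4e9))
```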

The instrument's small form-factor and potential off-the-shelf usability separate it from previous efforts to identify water through radar.

It also streamlines data transmitted over often limited and expensive cellular and satellite networks, lowering the cost.

In addition, the instrument will use machine learning to determine whether a change in measurements could be a temporary outlier, like a bird swimming by, and whether or not a surface is liquid water.

Machine learning is a form of data analysis that seeks to identify patterns from data to make decisions with little human intervention.
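As one hedged illustration of that idea, and not the team's actual method, a stream of range readings could be screened for transient outliers by comparing each reading to a rolling median and letting a standard anomaly detector flag the largest deviations. The window size and contamination rate below are arbitrary.

```python
# Sketch: flag transient outliers (a passing bird, a floating branch) in
# simulated water-level readings using deviation from a rolling median.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
levels = 2.0 + 0.01 * rng.normal(size=500)   # metres from sensor to water surface
levels[200] -= 0.4                           # a bird passes under the sensor

# Feature per sample: deviation from the median of the last 15 readings.
window = 15
medians = np.array([np.median(levels[max(0, i - window):i + 1])
                    for i in range(len(levels))])
deviation = (levels - medians).reshape(-1, 1)

flags = IsolationForest(contamination=0.01, random_state=0).fit_predict(deviation)
print("flagged indices:", np.where(flags == -1)[0])   # includes index 200
```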

While traditionally radar has been used to detect solid objects, liquids require different considerations to avoid being misidentified. Heckman believes that traditional ways of processing radar may not be enough to measure liquid surfaces at such close proximity.

"We're considering moving further up the radar processing chain and reconsidering how some of these algorithms have been developed in light of new techniques in this kind of signal processing," Heckman said.

In addition to possible fundamental shifts in radar processing, the project could empower communities of citizen scientists, according to Minear.

"Right now, many of the systems that we use need an expert installer. Our idea is to internalize some of those expert decisions, which takes out a lot of the cost and makes this instrument more friendly to a citizen science approach," he said.

By lowering the barrier of entry to water surface level measurement through low-cost devices with smaller data requirements, the researchers broaden opportunities for communities, even in areas with limited cellular networks, to measure their own water sources.

The team is also committing to open-source principles to ensure that anyone can use and build on the technology, allowing for new innovations to happen more quickly and democratically.

Minear, who is a Science Team and Cal/Val Team member for the upcoming NASA Surface Water and Ocean Topography (SWOT) Mission, also hopes that the new instrument could help check the accuracy of water surface level measurements made by satellites.

These sensors could also give local, regional and national communities more insight into their water usage and supply over time and could be used to help make evidence-informed policy decisions about water rights and usage.

"I'm very excited about the opportunities that are presented by getting data in places that we don't currently get it. I anticipate that this could give us better insight into what is happening with our water sources, even in our backyard," said Heckman.


How we learned to break down barriers to machine learning – Ars Technica

Dr. Sephus discusses breaking down barriers to machine learning at Ars Frontiers 2022.

Welcome to the week after Ars Frontiers! This article is the first in a short series of pieces that will recap each of the day's talks for the benefit of those who weren't able to travel to DC for our first conference. We'll be running one of these every few days for the next couple of weeks, and each one will include an embedded video of the talk (along with a transcript).

For today's recap, we're going over our talk with Amazon Web Services tech evangelist Dr. Nashlie Sephus. Our discussion was titled "Breaking Barriers to Machine Learning."

Dr. Sephus came to AWS via a roundabout path, growing up in Mississippi before eventually joining a tech startup called Partpic. Partpic was an artificial intelligence and machine-learning (AI/ML) company with a neat premise: Users could take photographs of tooling and parts, and the Partpic app would algorithmically analyze the pictures, identify the part, and provide information on what the part was and where to buy more of it. Partpic was acquired by Amazon in 2016, and Dr. Sephus took her machine-learning skills to AWS.

When asked, she identified access as the biggest barrier to the greater use of AI/ML; in a lot of ways, it's another wrinkle in the old problem of the digital divide. A core component of being able to utilize most common AI/ML tools is having reliable and fast Internet access, and drawing on experience from her background, Dr. Sephus pointed out that a lack of access to technology in primary schools in poorer areas of the country sets kids on a path away from being able to use the kinds of tools we're talking about.

Furthermore, lack of early access leads to resistance to technology later in life. "You're talking about a concept that a lot of people think is pretty intimidating," she explained. "A lot of people are scared. They feel threatened by the technology."

One way of tackling the divide here, in addition to simply increasing access, is changing the way that technologists communicate about complex topics like AI/ML to regular folks. "I understand that, as technologists, a lot of times we just like to build cool stuff, right?" Dr. Sephus said. "We're not thinking about the longer-term impact, but that's why it's so important to have that diversity of thought at the table and those different perspectives."

Dr. Sephus said that AWS has been hiring sociologists and psychologists to join its tech teams to figure out ways to tackle the digital divide by meeting people where they are rather than forcing them to come to the technology.

Simply reframing complex AI/ML topics in terms of everyday actions can remove barriers. Dr. Sephus explained that one way of doing this is to point out that almost everyone has a cell phone, and when you're talking to your phone or using facial recognition to unlock it, or when you're getting recommendations for a movie or for the next song to listen to, these things are all examples of interacting with machine learning. Not everyone groks that, especially technological laypersons, and showing people that these things are driven by AI/ML can be revelatory.

"Meeting them where they are, showing them how these technologies affect them in their everyday lives, and having programming out there in a way that's very approachable, I think that's something we should focus on," she said.


NSF award will boost UAB research in machine-learning-enabled plasma synthesis of novel materials – University of Alabama at Birmingham

The $20 million National Science Foundation award will help UAB and eight other Alabama-based universities build research infrastructure. UAB's share will be about $2 million.

Yogesh Vohra, Ph.D., is a co-principal investigator on a National Science Foundation award that will bring the University of Alabama at Birmingham about $2 million over five years.

The total NSF EPSCoR Research Infrastructure Improvement Program award of $20 million, with its principal investigator Gary Zank, Ph.D., based at the University of Alabama in Huntsville, will help strengthen research infrastructure at UAB, UAH, Auburn University, Tuskegee University, the University of South Alabama, Alabama A&M University, Alabama State University, Oakwood University, and the University of Alabama.

The award, "Future technologies and enabling plasma processes," or FTPP, aims to develop new technologies using plasma in hard and soft biomaterials, food safety and sterilization, and space weather prediction. This project will build plasma expertise, research and industrial capacity, as well as a highly trained and capable plasma science and engineering workforce, across Alabama.

Unlike solids, liquids and gases, plasma, the fourth state of matter, does not exist naturally on Earth. This ionized gaseous substance can be made by heating neutral gases. At UAB, Vohra, a professor and university scholar in the UAB Department of Physics, has employed microwave-generated plasmas to create thin diamond films that have many potential uses, including super-hard coatings and diamond-encapsulated sensors for extreme environments. This new FTPP grant will support research into plasma synthesis of materials that maintain their strength at high temperatures, into superconducting thin films, and into plasma surface modifications that incorporate antimicrobial materials into biomedical implants.

Vohra says the UAB Department of Physics will mostly use its share of the award to support faculty in the UAB Center for Nanoscale Materials and Biointegration and two full-time postdoctoral scholars, and to support hiring of a new faculty member in computational physics with a background in machine learning. "The machine-learning predictions using the existing databases on materials properties will enable our research team to reduce the time from materials discovery to actual deployment in real-world applications," Vohra said.
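As a rough illustration of the kind of surrogate modeling Vohra describes, and not the UAB team's actual pipeline, a regression model trained on a tabulated property database can rank candidate materials before anything is synthesized. The file names, columns and target property below are hypothetical.

```python
# Sketch: screen candidate materials by predicting a target property from
# tabulated descriptors, so experiments can focus on the best candidates.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

data = pd.read_csv("materials_db.csv")            # hypothetical property database
X = data.drop(columns=["critical_temperature"])   # numeric descriptors per entry
y = data["critical_temperature"]                  # hypothetical target property

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_train, y_train)
print("held-out R^2:", model.score(X_test, y_test))

# Rank unseen candidates so the highest-predicted compositions are made first.
candidates = pd.read_csv("candidate_compositions.csv")   # hypothetical
ranked = candidates.assign(pred=model.predict(candidates)).sort_values("pred", ascending=False)
print(ranked.head())
```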

The NSF EPSCoR Research Infrastructure Improvement Program helps establish partnerships among academic institutions to make sustainable improvements in research infrastructure, and research and development capacity. EPSCoR is the acronym for Established Program to Stimulate Competitive Research, an effort to level the playing field for states, territories and a commonwealth that historically have received lesser amounts of federal research and development funding.

Jurisdictions can compete for NSF EPSCoR awards if their five-year level of total NSF funding is less than 0.75 percent of the total NSF budget. Current qualifiers include Alabama, 22 other states, and Guam, the U.S. Virgin Islands and Puerto Rico.

Besides Alabama, the other four 2022 EPSCoR Research Infrastructure Improvement Program awardees are Hawaii, Kansas, Nevada and Wyoming.

In 2017, UAB was part of another five-year, $20 million NSF EPSCoR award to Alabama universities.

The Department of Physics is part of the UAB College of Arts and Sciences.


Link Machine Learning (LML) has a Neutral Sentiment Score, is Rising, and Outperforming the Crypto Market Sunday: What’s Next? – InvestorsObserver

Link Machine Learning (LML) gets a neutral rating from InvestorsObserver Sunday. The token is up 82.14% to $0.004303122367 while the broader crypto market is up 2.46%.

The Sentiment Score provides a quick, short-term look at the crypto's recent performance. This can be useful for both short-term investors looking to ride a rally and longer-term investors trying to buy the dip.

Link Machine Learning's price is currently above resistance. With support set around $0.00162854200285358 and resistance at $0.00377953835819346, Link Machine Learning is potentially in a volatile position if the rally burns out.

Link Machine Learning has traded on low volume recently. This means that today's volume is below its average volume over the past seven days.

Due to a lack of data, this crypto may be less suitable for some investors.



Machine Learning to Virtual Reality: Learn from Anywhere with 5 Online Courses by IITs – The Better India

In a welcome move, the Indian Institute of Technology (IIT) has partnered with the online learning platform Coursera. You can now access several courses from the comfort of your home while getting degrees certified by the premier institute.

Here are five courses that you may want to check out.

The course offers a strong foundation in business and technology. This is an opportunity to learn from industry experts at the B school. Dive into your area of specialisation, after choosing from over 55 electives. The curriculum spans business, management, data science and data analytics.

Eligibility criteria: A Bachelor's degree with 65 per cent; four years of relevant work experience after graduation.
Fees: Rs. 10,93,000
Duration: 24 months to 60 months


As algorithms shape our world and businesses, it is becoming more important to keep pace. In the course, you will gain exposure to the different algorithms that are needed for machine learning. You will also be trained in the application of Python programming in solving real-world financial problems.

Eligibility criteria: Knowledge of basic mathematics, linear algebra, calculus, statistics and spreadsheets
Fees: Rs. 90,000
Duration: 6 months


Learn from industry experts who share with you their knowledge of mechatronics. In this course, you will be introduced to manufacturing processes and how these can be enhanced through computer technology. You will be trained in computer-aided design (CAD) and computer-aided manufacturing (CAM) software.

Eligibility criteria: Bachelor's degree in any technology or engineering field; basic knowledge of programming. Students pursuing BE or BTech may also enrol.
Fees: Rs. 1,12,500
Duration: 6 months


If you have been intrigued by the world of virtual reality, this course will help you delve deeper into understanding it. The course gives you a firm footing in the design and development of these technologies. Get expert insights into how to build deep learning models such as Encoder-Decoder.

Eligibility criteria: Bachelor's degree in a related field with basic knowledge of programming.
Fees: Rs. 1,12,500
Duration: 6 months


Understand the computational properties of Natural Language Processing with this course. Learn how to integrate machine learning and natural language processing to solve real-world problems across industries.

Eligibility criteria: Bachelor's degree in a related field; a mathematics background in linear algebra, calculus, probability, statistics, data structures and algorithms; knowledge of Python.
Fees: Rs. 1,12,500
Duration: 6 months



AiM Future Joins the Edge AI and Vision Alliance – AiThority

AiM Future, a leader in embedded machine learning intellectual property (IP) for edge computing devices, announced it has joined the Edge AI and Vision Alliance.

AiM Future is accelerating the transition from centralized cloud-native AI to the distributed intelligent edge. Its market-proven NeuroMosAIc Processor (NMP) family of machine learning hardware accelerators and software, NeuroMosAIc Studio, enables the efficient execution of deep learning models common to computer vision applications. Shipping in smart home devices since 2019, the co-designed hardware and software offer a highly flexible and scalable solution meeting end-application performance, power, and cost requirements from always-on, battery-operated cameras to high-performance edge infrastructure.


"It is our company's pleasure to join the Edge AI and Vision Alliance," said ChangSoo Kim, founder and CEO of AiM Future. "As a premier organization for technology innovators revolutionizing artificial intelligence across the edge computing spectrum, the partnership is a natural fit. It is clear AiM Future's vision of bringing the impossible to reality is shared by the Alliance and its ecosystem. The field of edge AI is rapidly advancing, and partnerships are fundamental to addressing the many challenges and limitations of today's edge devices."

"Today, more and more devices and systems are gaining the ability to see and understand their environments," said Jeff Bier, founder of the Edge AI and Vision Alliance. "And thanks to visual perception, these machines are becoming more autonomous, safer, more capable and easier to use. With its processing architecture and accompanying toolset, AiM Future is implementing an intriguing approach to deep learning inference acceleration. We welcome AiM Future as one of the Edge AI and Vision Alliance's newest members and look forward to their participation at the Embedded Vision Summit."




UT Researchers Aim to Change the Cancer Equation – UT News – University of Texas

Cancer is arguably the greatest health challenge of our time. During the past 50 years, clinical advances have substantially reduced the mortality rate for people with cancer, but new breakthroughs often require years of trial and error in the lab.

An innovative partnership between The University of Texas at Austin's Machine Learning Lab, Oden Institute for Computational Engineering and Sciences, and Dell Medical School aims to speed up those discoveries, saving lives in the process. What would have previously taken years in the lab can potentially be accomplished in days with the appropriate computing simulations.

The research collaboration is possible because of a $10 million leadership gift from Dheeraj and Swapna Pandey.

"The biggest promise of computational oncology is personalized medicine," Dheeraj Pandey said. "The ability for us to answer questions that save precious lives. More importantly, the field is attempting to break silos between physics, biology, and computing researchers who are fighting indefatigably against cancer."

UT researchers will integrate two emerging disciplines, computational oncology and machine learning, to transform the future of cancer care. Machine learning applies algorithms to large data sets to build classifiers that can make accurate predictions, even in complex biological and chemical domains. Computational oncology uses physics-based and data-driven advanced mathematical and computational approaches to model tumors, calibrate patient-specific models, and simulate patient responses to potential treatment options.

Modeling and simulation occur across a spectrum of scales, from the cellular level to the organ level of the human body. The models can be theory-driven, knowledge-driven, or data-driven. Or, increasingly, a combination of all three. Substantial computational skills and capabilities, as well as medical knowledge, are required to capture the individuality of each cancer patient's situation for accurate decision making at all levels.
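As a deliberately simple illustration of what a data-calibrated, mechanism-based model can look like, and not the models used by the UT Austin teams, the sketch below fits a logistic tumor-growth equation to a handful of hypothetical volume measurements and then simulates the trajectory forward.

```python
# Sketch: calibrate a logistic tumor-growth model to sparse, made-up measurements.
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import curve_fit

def tumor_volume(t, r, K, V0=1.0):
    """Logistic growth dV/dt = r * V * (1 - V / K), solved from initial volume V0."""
    sol = odeint(lambda V, _t: r * V * (1.0 - V / K), V0, t)
    return sol.ravel()

# Hypothetical imaging measurements: days since baseline and tumor volume (cm^3).
t_obs = np.array([0.0, 30.0, 60.0, 90.0])
v_obs = np.array([1.0, 1.8, 3.1, 4.9])

# Calibrate the patient-specific growth rate r and carrying capacity K.
(r_fit, K_fit), _ = curve_fit(tumor_volume, t_obs, v_obs, p0=[0.02, 20.0])
print(f"fitted r = {r_fit:.4f}/day, K = {K_fit:.1f} cm^3")

# Simulate forward to forecast the untreated trajectory over the next year.
t_future = np.linspace(0.0, 365.0, 50)
print(tumor_volume(t_future, r_fit, K_fit)[-1], "cm^3 predicted at one year")
```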

"UT Austin has a unique environment that enables the interdisciplinary research critical to tackling societal grand challenges such as personalized care for cancer patients," said Karen Willcox, director of the Oden Institute. "We are thrilled to build a new partnership with the Machine Learning Lab, building on the Oden Institute's strength in computational oncology and our existing partnerships with Dell Med, MD Anderson Cancer Center and the Texas Advanced Computing Center. Computational medicine is a top priority for the Oden Institute, and the generosity of the Pandey family is a game changer in taking our efforts to a new level."

The Oden Institute and its Center for Computational Oncology sit at the forefront of developing mechanism-based modeling techniques that optimize treatment and outcomes for an individual patient. The Machine Learning Laboratory is the university's headquarters for machine learning and artificial intelligence.

"A new wave of machine learning is creating predictive models that are transforming science," said Adam Klivans, director of the Machine Learning Lab and the NSF-funded Institute for Foundations of Machine Learning. "Our technologies can anticipate new biological and chemical interactions to advance the automated discovery of new treatments."

Currently, cancer biologists and chemists rely on trial and error to determine what treatments will be most effective. Connecting university research with community providers is central to the mission of Dell Med. Through initiatives such as the Livestrong Cancer Institutes, Dell Med translates leading-edge research into high-quality clinical trials and patient-focused precision medicine.

"Time is critical when treating cancer," said Gail Eckhardt, director of the Livestrong Cancer Institutes at Dell Med. "The Pandeys' gift brings us that much closer to the day when clinicians and researchers can integrate patient data and computational methods to individualize therapy, thereby improving the lives of patients with cancer."

"Computational approaches are the key to accelerating progress against cancer," said David Jaffray, chief technology and digital officer at The University of Texas MD Anderson Cancer Center. "This investment will further the collaborative, team science approach we have developed with the leadership at UT Austin. Together, we are building a critical mass of talent to use the power of data and computing to make real progress against this terrible disease."

Read the feature story to learn more about this partnership.


AI Ethics Tempted But Hesitant To Use AI Adversarial Attacks Against The Evils Of Machine Learning, Including For Self-Driving Cars – Forbes

AI Ethics quandary about using adversarial attacks against Machine Learning, even if done for purposes of goodness.

It is widely accepted sage wisdom to garner as much as you can about your adversaries.

Frederick The Great, the famous king of Prussia and a noted military strategist, stridently said this: "Great advantage is drawn from knowledge of your adversary, and when you know the measure of their intelligence and character, you can use it to play on their weakness."

Astutely leveraging the awareness of your adversaries is both a vociferous defense and a compelling offense-driven strategy in life. On the one hand, you can be better prepared for whatever your adversary might try to destructively do to you. The other side of that coin is that you are likely able to carry out better attacks against your adversary via the known and suspected weaknesses of any vaunted foe.

Per the historically revered statesman and ingenious inventor Benjamin Franklin, those that are on their guard and appear ready to receive their adversaries are in much less danger of being attacked, much more so than otherwise being unawares, supine, and negligent in preparation.

Why all this talk about adversaries?

Because one of the biggest concerns facing much of today's AI is that cyber crooks and other evildoers are deviously attacking AI systems using what is commonly referred to as adversarial attacks. This can cause an AI system to falter and fail to perform its designated functions. As you'll see in a moment, there are a variety of vexing AI Ethics and Ethical AI issues underlying the matter, such as ensuring that AI systems are protected against such scheming adversaries; see my ongoing and extensive coverage of AI Ethics at the link here and the link here, just to name a few.

Perhaps even worse than getting the AI to simply stumble, the adversarial attack can sometimes be used to get AI to perform as the wrongdoer wishes the AI to perform. The attacker can essentially trick the AI into doing the bidding of the malefactor. Whereas some adversarial attacks seek to disrupt or confound the AI, another equally if not more insidious form of deception involves getting the AI to act on the behalf of the attacker.

It is almost as though one might use a mind trick or hypnotic means to get a human to do wrong acts and yet the person is blissfully unaware that they have been fooled into doing something that they should not particularly have done. To clarify, the act that is performed does not necessarily have to be wrong per se or illegal in its merits. For example, conning a bank teller to open the safe or vault for you is not in itself a wrong or illegal act. The bank teller is doing what they legitimately are able to perform as a valid bank-approved task. Of course, if they open the vault and doing so allows a robber to steal the money and all of the gold bullion therein, the bank teller has been tricked into performing an act that they should not have undertaken in the given circumstances.

The use of adversarial attacks against AI has to a great extent arisen because of the way in which much of contemporary AI is devised. You see, this latest era of AI has tended to emphasize the use of Machine Learning (ML) and Deep Learning (DL). These are computational pattern matching techniques and technologies which have dramatically aided the advancement of modern-day AI systems. ML/DL is often used as a key element in many of the AI systems that you interact with daily, such as the use of conversational interactive systems or Natural Language Processing (NLP) akin to Alexa and Siri.

The manner in which ML/DL is designed and fielded provides a fertile opening for the leveraging of adversarial attacks. Cybercrooks generally can guess how the ML/DL was built. They can make reasoned guesses about how the ML/DL will react when put into use. There are only so many ways that ML/DL is usually constructed. As such, the evildoer hackers can try a slew of underhanded ML/DL adversarial tricks to get the AI to either go awry or do their bidding.
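As one concrete example of the kind of trick being described, here is a minimal sketch of the classic fast gradient sign method (FGSM) in PyTorch. It uses an untrained toy classifier purely to show the mechanism; real adversarial attacks target deployed models and are considerably more elaborate.

```python
# FGSM sketch: nudge every input pixel a small step in the direction that
# increases the model's loss, producing an adversarial version of the input.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))   # stand-in image classifier
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 1, 28, 28, requires_grad=True)          # stand-in input image
label = torch.tensor([3])                                     # its true class

loss = loss_fn(model(image), label)
loss.backward()                                               # gradient w.r.t. the input

epsilon = 0.05                                                # perturbation budget
adversarial = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0).detach()

# With an untrained model the labels are arbitrary; against a trained model the
# same perturbation is what flips an otherwise correct prediction.
print("original prediction:   ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```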

In contrast, during the prior era of AI systems, it was somewhat harder to undertake adversarial attacks since much of the AI was more idiosyncratic and written in a more proprietary or individualistic manner. You would have had a more challenging time trying to guess how the AI was constructed and also how it might react when placed into active use. In comparison, ML/DL is largely more predictable as to its susceptibilities (this is not always the case, and please know that I am broadly generalizing).

You might be thinking that if adversarial attacks are relatively able to be targeted specifically at ML/DL, then certainly there should be a boatload of cybersecurity measures available to protect against those attacks. One would hope that those devising and releasing their AI applications would ensure that the app was securely able to fight against those adversarial attacks.

The answer is yes and no.

Yes, there exist numerous cybersecurity protections that can be used by and within ML/DL to guard against adversarial attacks. Unfortunately, the answer is also somewhat a no in that many of the AI builders are not especially versed in those protections or are not explicitly including those protections.

There are lots of reasons for this.

One is that some AI software engineers concentrate solely on the AI side and are not particularly caring about the cybersecurity elements. They figure that someone else further along in the chain of making and releasing the AI will deal with any needed cybersecurity protections. Another reason for the lack of protection against adversarial attacks is that it can be a burden of sorts to the AI project. An AI project might be under a tight deadline to get the AI out the door. Adding into the mix a bunch of cybersecurity protections that need to be crafted or set up will potentially delay the production cycle of the AI. Furthermore, the cost of creating AI is bound to go up too.

Note that none of those are satisfactory reasons to allow an AI system to be vulnerable to adversarial attacks. Those in the know would say the famous line of "pay me now or pay me later" comes into play in this instance. You can skirt past the cybersecurity portions to get an AI system into production sooner, but the chances are that it will then suffer an adversarial attack. A cost-benefit analysis and ROI (return on investment) need to be properly assessed as to whether the upfront cost and its benefits outweigh the costs of repairing and dealing with cybersecurity intrusions further down the pike.

There is no free lunch when it comes to making ML/DL that is well-protected against adversarial attacks.

That being said, you don't necessarily need to move heaven and earth to be moderately protected against those evildoing tricks. Savvy specialists that are versed in cybersecurity protections can pretty much sit side-by-side with the AI crews and dovetail the security into the AI as it is being devised. There is also the assumption that a well-versed AI builder can readily use AI constructing techniques and technologies that simultaneously aid their AI building and that seamlessly encompass adversarial attack protections. To adequately do so, they usually need to know about the nature of adversarial attacks and how to best blunt or mitigate them. This is something only gradually becoming regularly instituted as part of devising AI systems.

A twist of sorts is that more and more people are getting into the arena of developing ML/DL applications. Regrettably, some of those people are not versed in AI per se, and neither are they versed in cybersecurity. The idea overall is that perhaps by making the ability to craft AI systems with ML/DL widely available to all we are aiming to democratize AI. That sounds good, but there are downsides to this popular exhortation, see my analysis and coverage at the link here.

Speaking of twists, I will momentarily get to the biggest twist of them all, namely, I am going to shock you with a recently emerging notion that some find sensible and others believe is reprehensible. I'll give you a taste of where I am heading on this heated and altogether controversial matter.

Are you ready?

There is a movement toward using adversarial attacks as a means to disrupt or fool AI systems that are being used by wrongdoers.

Let me explain.

So far, I have implied that AI is seemingly always being used in the most innocent and positive of ways and that only miscreants would wish to confound the AI via the use of adversarial attacks. But keep in mind that bad people can readily devise AI and use that AI for doing bad things.

You know how it is: what's good for the goose is good for the gander.

Criminals and cybercrooks are eagerly wising up to the building and using AI ML/DL to carry out untoward acts. When you come in contact with an AI system, you might not have any means of knowing whether it is an AI For Good versus an AI For Bad type of system. Be on the watch! Just because AI is being deployed someplace does not somehow guarantee that the AI will be crafted by well-intended builders. The AI could be deliberately devised for foul purposes.

Here then is the million-dollar question.

Should we be okay with using adversarial attacks on purportedly AI For Bad systems?

I'm sure that your first thought is that we ought to indeed be willing to fight fire with fire. If AI For Good systems can be shaken up via adversarial attacks, we can use those same evildoing adversarial attacks to shake up those atrocious AI For Bad systems. We can rightfully turn the attacking capabilities into an act of goodness. Fight evil using the appalling trickery of evil. The net result would seem to be an outcome of good.

Not everyone agrees with that sentiment.

From an AI Ethics perspective, there is a lot of handwringing going on about this meaty topic. Some would argue that by leveraging adversarial attacks, even when the intent is for the good, you are perpetuating the use of adversarial attacks all-told. You are basically saying that it is okay to launch and promulgate adversarial attacks. Shame on you, they exclaim. We ought to be stamping out evil rather than encouraging or expanding upon evil (even if the evil is ostensibly aiming to offset evil and carry out the work of the good).

Those against the use of adversarial attacks would also argue that by keeping adversarial attacks in the game that you are going to merely step into a death knell of quicksand. More and stronger adversarial attacks will be devised under the guise of attacking the AI For Bad systems. That seems like a tremendously noble pursuit. The problem is that the evildoers will undoubtedly also grab hold of those emboldened and super-duper adversarial attacks and aim them squarely at the AI For Good.

You are blindly promoting the cat and mouse gambit. We might be shooting our own foot.

A retort to this position is that there are no practical means of stamping out adversarial attacks. No matter whether you want them to exist or not, the evildoers are going to make sure they do persist. In fact, the evildoers are probably going to be making the adversarial attacks more resilient and potent, doing so to overcome whatever cyber protections are put in place to block them. Thus, a proverbial head-in-the-sand approach to dreamily pretending that adversarial attacks will simply slip quietly away into the night is pure nonsense.

You could contend that adversarial attacks against AI are a double-edged sword. AI researchers have noted this quandary, as stated by these authors in a telling article in the AI and Ethics journal: "Sadly, AI solutions have already been utilized for various violations and theft, even receiving the name AI for Crime (AIC). This poses a challenge: are cybersecurity experts thus justified to attack malicious AI algorithms, methods and systems as well, to stop them? Would that be fair and ethical? Furthermore, AI and machine learning algorithms are prone to be fooled or misled by the so-called adversarial attacks. However, adversarial attacks could be used by cybersecurity experts to stop the criminals using AI, and tamper with their systems. The paper argues that this kind of attacks could be named Ethical Adversarial Attacks (EAA), and if used fairly, within the regulations and legal frameworks, they would prove to be a valuable aid in the fight against cybercrime" (article by Michał Choraś and Michał Woźniak, "The Double-Edged Sword Of AI: Ethical Adversarial Attacks To Counter Artificial Intelligence For Crime").

I'd ask you to mull this topic over and render a vote in your mind.

Is it unethical to use AI adversarial attacks against AI For Bad, or can we construe this as an entirely unapologetic Ethical AI practice?

You might be vaguely aware that one of the loudest voices these days in the AI field, and even outside the field of AI, consists of clamoring for a greater semblance of Ethical AI. Let's take a look at what it means to refer to AI Ethics and Ethical AI. On top of that, we can set the stage by looking at some examples of adversarial attacks to establish what I mean when I speak of Machine Learning and Deep Learning.

One particular segment or portion of AI Ethics that has been getting a lot of media attention consists of AI that exhibits untoward biases and inequities. You might be aware that when the latest era of AI got underway there was a huge burst of enthusiasm for what some now call AI For Good. Unfortunately, on the heels of that gushing excitement, we began to witness AI For Bad. For example, various AI-based facial recognition systems have been revealed as containing racial biases and gender biases, which I've discussed at the link here.

Efforts to fight back against AI For Bad are actively underway. Besides vociferous legal pursuits aimed at reining in the wrongdoing, there is also a substantive push toward embracing AI Ethics to right the AI vileness. The notion is that we ought to adopt and endorse key Ethical AI principles for the development and fielding of AI, doing so to undercut the AI For Bad while simultaneously heralding and promoting the preferable AI For Good.

On a related notion, I am an advocate of trying to use AI as part of the solution to AI woes, fighting fire with fire in that manner of thinking. We might for example embed Ethical AI components into an AI system that will monitor how the rest of the AI is doing things and thus potentially catch in real-time any discriminatory efforts, see my discussion at the link here. We could also have a separate AI system that acts as a type of AI Ethics monitor. The AI system serves as an overseer to track and detect when another AI is going into the unethical abyss (see my analysis of such capabilities at the link here).

In a moment, I'll share with you some overarching principles underlying AI Ethics. There are lots of these kinds of lists floating around here and there. You could say that there isn't as yet a singular list of universal appeal and concurrence. That's the unfortunate news. The good news is that at least there are readily available AI Ethics lists, and they tend to be quite similar. All told, this suggests that by a form of reasoned convergence we are finding our way toward a general commonality of what AI Ethics consists of.

First, let's briefly cover some of the overall Ethical AI precepts to illustrate what ought to be a vital consideration for anyone crafting, fielding, or using AI.

For example, as stated by the Vatican in the Rome Call For AI Ethics and as I've covered in-depth at the link here, these are their identified six primary AI ethics principles:

As stated by the U.S. Department of Defense (DoD) in their Ethical Principles For The Use Of Artificial Intelligence and as I've covered in-depth at the link here, these are their six primary AI ethics principles:

I've also discussed various collective analyses of AI ethics principles, including having covered a set devised by researchers that examined and condensed the essence of numerous national and international AI ethics tenets in a paper entitled The Global Landscape Of AI Ethics Guidelines (published in Nature), and that my coverage explores at the link here, which led to this keystone list:

As you might directly guess, trying to pin down the specifics underlying these principles can be extremely hard to do. Even more so, the effort to turn those broad principles into something entirely tangible and detailed enough to be used when crafting AI systems is a tough nut to crack. It is easy to do some overall handwaving about what the AI Ethics precepts are and how they should generally be observed, while it is a much more complicated situation when the AI coding has to be the veritable rubber that meets the road.

The AI Ethics principles are to be utilized by AI developers, along with those that manage AI development efforts, and even those that ultimately field and perform upkeep on AI systems. All stakeholders throughout the entire AI life cycle of development and usage are considered within the scope of abiding by the emerging norms of Ethical AI. This is an important highlight since the usual assumption is that only coders or those that program the AI are subject to adhering to the AI Ethics notions. As earlier stated, it takes a village to devise and field AI, and the entire village has to be versed in and abide by AI Ethics precepts.

Let's also make sure we are on the same page about the nature of today's AI.

There isn't any AI today that is sentient. We don't have this. We don't know if sentient AI will be possible. Nobody can aptly predict whether we will attain sentient AI, nor whether sentient AI will somehow miraculously spontaneously arise in a form of computational cognitive supernova (usually referred to as the singularity, see my coverage at the link here).

The type of AI that I am focusing on consists of the non-sentient AI that we have today. If we wanted to wildly speculate about sentient AI, this discussion could go in a radically different direction. A sentient AI would supposedly be of human quality. You would need to consider that the sentient AI is the cognitive equivalent of a human. Moreover, since some speculate we might have super-intelligent AI, it is conceivable that such AI could end up being smarter than humans (for my exploration of super-intelligent AI as a possibility, see the coverage here).

Let's keep things more down to earth and consider today's computational non-sentient AI.

Realize that today's AI is not able to think in any fashion on par with human thinking. When you interact with Alexa or Siri, the conversational capacities might seem akin to human capacities, but the reality is that it is computational and lacks human cognition. The latest era of AI has made extensive use of Machine Learning (ML) and Deep Learning (DL), which leverage computational pattern matching. This has led to AI systems that have the appearance of human-like proclivities. Meanwhile, there isn't any AI today that has a semblance of common sense, nor any of the cognitive wonderment of robust human thinking.

ML/DL is a form of computational pattern matching. The usual approach is that you assemble data about a decision-making task. You feed the data into the ML/DL computer models. Those models seek to find mathematical patterns. After finding such patterns, the AI system will use those patterns when encountering new data. Upon the presentation of new data, the patterns based on the old or historical data are applied to render a current decision.
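
To make that concrete, here is a minimal sketch (purely illustrative, not from the column) of computational pattern matching with scikit-learn: a model is fit on synthetic "historical decisions" and the learned patterns are then applied to new data.

# Minimal sketch of ML as computational pattern matching (illustrative only).
# Assumes scikit-learn is installed; the "historical decisions" here are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Historical data: two numeric features and a past human decision (0 or 1).
X_hist = rng.normal(size=(500, 2))
y_hist = (X_hist[:, 0] + 0.5 * X_hist[:, 1] > 0).astype(int)

# The model mathematically mimics whatever patterns sit in the historical data.
model = LogisticRegression().fit(X_hist, y_hist)

# New data arrives; the old patterns are applied to render a current decision.
X_new = rng.normal(size=(3, 2))
print(model.predict(X_new))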

I think you can guess where this is heading. If the humans who have been making the patterned-upon decisions have been incorporating untoward biases, the odds are that the data reflects this in subtle but significant ways. Machine Learning or Deep Learning computational pattern matching will simply try to mathematically mimic the data accordingly. There is no semblance of common sense or other sentient aspects of AI-crafted modeling per se.

Furthermore, the AI developers might not realize what is going on either. The arcane mathematics in the ML/DL might make it difficult to ferret out the now hidden biases. You would rightfully hope and expect that the AI developers would test for the potentially buried biases, though this is trickier than it might seem. A solid chance exists that, even with relatively extensive testing, there will be biases still embedded within the pattern matching models of the ML/DL.
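
As a hedged illustration of why such testing matters, one simple probe (among many, and by no means sufficient on its own) is to compare the model's positive-decision rate across groups; the group attribute below is hypothetical.

# Illustrative bias probe: compare positive-prediction rates across a group attribute.
# This is a rough check, not a full fairness audit.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
group = rng.integers(0, 2, size=1000)          # hypothetical group label (0 or 1)
y = (X[:, 0] + 0.8 * group + rng.normal(scale=0.5, size=1000) > 0.5).astype(int)

model = LogisticRegression().fit(np.c_[X, group], y)
preds = model.predict(np.c_[X, group])

for g in (0, 1):
    rate = preds[group == g].mean()
    print(f"positive-decision rate for group {g}: {rate:.2f}")
# A large gap between the rates is a signal to dig deeper, not proof of bias by itself.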

You could somewhat use the famous or infamous adage of garbage-in garbage-out. The thing is, this is more akin to biases-in that insidiously get infused as biases submerged within the AI. The algorithmic decision-making (ADM) of AI axiomatically becomes laden with inequities.

Not good.

I trust that you can readily see how adversarial attacks fit into these AI Ethics matters. Evildoers are undoubtedly going to use adversarial attacks against ML/DL and other AI that is supposed to be doing AI For Good. Meanwhile, those evildoers are indubitably going to be devising AI For Bad that they foist upon us all. To try and fight against those AI For Bad systems, we could arm ourselves with adversarial attacks. The question is whether we are doing more good or more harm by leveraging and continuing the advent of adversarial attacks.

Time will tell.

One vexing issue is that there is a myriad of adversarial attacks that can be used against AI ML/DL. You might say there are more than you can shake a stick at. Trying to devise protective cybersecurity measures to negate all of the various possible attacks is somewhat problematic. Just when you might think you've done a great job of dealing with one type of adversarial attack, your AI might get blindsided by a different variant. A determined evildoer is likely to toss all manner of adversarial attacks at your AI, hoping that at least one or more sticks. Of course, if we are using adversarial attacks against AI For Bad, we too would take the same advantageous scattergun approach.

Some of the most popular types of adversarial attacks include:

At this juncture of this weighty discussion, I'd bet that you are desirous of some illustrative examples that might showcase the nature and scope of adversarial attacks against AI, particularly those aimed at Machine Learning and Deep Learning. There is a special and assuredly popular set of examples that are close to my heart. You see, in my capacity as an expert on AI, including the ethical and legal ramifications, I am frequently asked to identify realistic examples that showcase AI Ethics dilemmas so that the somewhat theoretical nature of the topic can be more readily grasped. One of the most evocative areas that vividly presents this ethical AI quandary is the advent of AI-based true self-driving cars. This will serve as a handy use case or exemplar for ample discussion on the topic.

Here's then a noteworthy question that is worth contemplating: Does the advent of AI-based true self-driving cars illuminate anything about the nature of adversarial attacks against AI, and if so, what does this showcase?

Allow me a moment to unpack the question.

First, note that there isn't a human driver involved in a true self-driving car. Keep in mind that true self-driving cars are driven via an AI driving system. There isn't a need for a human driver at the wheel, nor is there a provision for a human to drive the vehicle. For my extensive and ongoing coverage of Autonomous Vehicles (AVs) and especially self-driving cars, see the link here.

I'd like to further clarify what is meant when I refer to true self-driving cars.

Understanding The Levels Of Self-Driving Cars

As a clarification, true self-driving cars are ones where the AI drives the car entirely on its own and there isn't any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5, and we don't yet even know if this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend, see my coverage at this link here).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won't be markedly different from driving conventional vehicles, so there's not much new per se to cover about them on this topic (though, as you'll see in a moment, the points next made are generally applicable).

For semi-autonomous cars, the public needs to be forewarned about a disturbing aspect that's been arising lately: despite those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.

Self-Driving Cars And Adversarial Attacks Against AI

For Level 4 and Level 5 true self-driving vehicles, there won't be a human driver involved in the driving task.

All occupants will be passengers.

The AI is doing the driving.

One aspect to immediately discuss entails the fact that the AI involved in today's AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can.

Why is this added emphasis about the AI not being sentient?

Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to todays AI, despite the undeniable and inarguable fact that no such AI exists as yet.

With that clarification, you can envision that the AI driving system won't natively somehow know about the facets of driving. Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car.

Let's dive into the myriad aspects that come into play on this topic.

First, it is important to realize that not all AI self-driving cars are the same. Each automaker and self-driving tech firm is taking its own approach to devising self-driving cars. As such, it is difficult to make sweeping statements about what AI driving systems will do or not do.

Furthermore, whenever stating that an AI driving system doesn't do some particular thing, this can, later on, be overtaken by developers that in fact program the computer to do that very thing. Step by step, AI driving systems are being gradually improved and extended. An existing limitation today might no longer exist in a future iteration or version of the system.

I hope that provides a sufficient litany of caveats to underlie what I am about to relate.

As earlier mentioned, some of the most popular types of adversarial attacks include:

We can showcase the nature of each such adversarial attack and do so in the context of AI-based self-driving cars.

Adversarial Falsification Attacks

Consider the use of adversarial falsifications.

There are generally two such types: (1) false-positive attacks, and (2) false-negative attacks. In the false-positive attack, the emphasis is on presenting to the AI a so-called negative sample that is then incorrectly classified by the ML/DL as a positive one. The jargon for this is that it is a Type I error (this is reminiscent perhaps of your days of taking a statistics class in college). In contrast, the false-negative attack entails presenting a positive sample that the ML/DL incorrectly classifies as a negative instance, known as a Type II error.
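
To keep the Type I versus Type II terminology straight, here is a small, purely illustrative sketch: a confusion matrix over hypothetical Stop-sign predictions, where false positives are Type I errors and false negatives are Type II errors.

# Counting Type I (false positive) and Type II (false negative) errors
# for a hypothetical Stop-sign detector. Labels: 1 = Stop sign, 0 = no Stop sign.
from sklearn.metrics import confusion_matrix

y_true = [1, 1, 0, 0, 1, 0, 1, 0]   # what is actually in the scene
y_pred = [1, 0, 1, 0, 1, 0, 1, 0]   # what the ML/DL reported

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"false positives (Type I):  {fp}")   # phantom Stop signs
print(f"false negatives (Type II): {fn}")   # missed Stop signs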

Suppose that we had trained an AI driving system to detect Stop signs. We used an ML/DL that we had trained beforehand with thousands of images that contained Stop signs. The idea is that we would be using video cameras on the self-driving car to collect video and images of the roadway scene surrounding the autonomous vehicle during a driving journey. As the digital imagery real-time streams into an onboard computer, the ML/DL scans the digital data to detect any indication of a nearby Stop sign. The detection of a Stop sign is obviously crucial for the AI driving system. If a Stop sign is detected by the ML/DL, this is conveyed to the AI driving system and the AI would need to ascertain a suitable means to use the driving controls to bring the self-driving car to a proper and safe stop.
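
A minimal sketch of that onboard loop might look as follows; the classifier here is a hypothetical stand-in, and real AI driving systems are vastly more involved.

# Illustrative frame-by-frame detection loop (not production self-driving code).
# Assumes OpenCV is installed and stop_sign_model is a hypothetical classifier
# exposing a predict(frame) -> bool interface.
import cv2

def drive_loop(video_source, stop_sign_model):
    capture = cv2.VideoCapture(video_source)
    while capture.isOpened():
        ok, frame = capture.read()
        if not ok:
            break
        if stop_sign_model.predict(frame):
            # In a real AI driving system this would feed a planning module,
            # which decides how to bring the car to a proper and safe stop.
            print("Stop sign detected: begin braking maneuver")
    capture.release()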

Humans seem to readily be able to detect Stop signs, at least most of the time. Our human perception of such signs is keenly honed by our seemingly innate cognitive pattern matching capacities. All we need to do is learn what a Stop sign looks like and we take things from there. A toddler learns soon enough that a Stop sign is typically red in color, contains the word STOP in large letters, has a distinctive octagonal shape, usually is posted adjacent to the roadway at about a person's height, and so on.

Imagine an evildoer that wants to make trouble for self-driving cars.

In a false-positive adversarial attack, the wrongdoer would try to trick the ML/DL into computationally calculating that a Stop sign exists even when there isn't a Stop sign present. Maybe the wrongdoer puts up a red sign along a roadway that looks generally similar to a Stop sign but lacks the word STOP on it. A human would likely realize that this is merely a red sign and not a driving directive. The ML/DL, though, might calculate that the sign sufficiently resembles a Stop sign, to the degree that the AI ought to consider the sign as in fact a Stop sign.

You might be tempted to think that this is not much of an adversarial attack and that it seems rather innocuous. Well, suppose that you are driving in a car and meanwhile a self-driving car that is ahead of you suddenly and seemingly without any basis for doing so comes to an abrupt stop (due to having misconstrued a red sign near the roadway as being a Stop sign). You might ram into that self-driving car. It could be that the AI was fooled into computationally calculating that a non-stop sign was a Stop sign, thus committing a false-positive error. You get injured, the passengers in the self-driving car get injured, and perhaps even pedestrians get injured by this dreadful false-positive adversarial attack.

A false-negative adversarial attack is somewhat akin to this preceding depiction, though based on tricking the ML/DL into misclassifying in the other direction, as it were. Imagine that a Stop sign is sitting next to the roadway and for all usual visual reasons seems to be a Stop sign. Humans accept that this is indeed a valid Stop sign.
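
One widely studied way to produce such a false negative (not specific to any vendor's system) is a gradient-based perturbation such as the Fast Gradient Sign Method; a minimal PyTorch sketch follows, with the model, image, and label assumed to come from elsewhere.

# Minimal FGSM sketch: add a small, near-invisible perturbation that pushes a
# correctly classified "Stop sign" image toward misclassification (a false negative).
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, true_label, epsilon=0.03):
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step each pixel in the direction that increases the loss the most.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()

# Usage sketch (classifier, image batch, and label batch are assumed to exist):
# adversarial = fgsm_attack(stop_sign_classifier, image_batch, label_batch)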

Visit link:
AI Ethics Tempted But Hesitant To Use AI Adversarial Attacks Against The Evils Of Machine Learning, Including For Self-Driving Cars - Forbes

OVH Groupe : A journey into the wondrous land of Machine Learning, or Cleaning data is funnier than cleaning my flat! (Part 3) – Marketscreener.com

What am I doing here? The story so far

As you might know if you have read our blog for more than a year, a few years ago, I bought a flat in Paris. If you don't know, the real estate market in Paris is expensive but despite that, it is so tight that a good flat at a correct price can be for sale for less than a day.

Obviously, you have to make a decision quite fast, and considering the prices, you have to trust your decision. Of course, to trust your decision, you have to take your time, study the market, make some visits, etc. This process can be quite long (in my case it took a year between the time I decided that I wanted to buy a flat and the time I actually committed to buying my current flat), and even spending a lot of time will never allow you to have a perfect understanding of the market. What if there was a way to do that very quickly and with better accuracy than with the standard process?

As you might also know if you are one of our regular readers, I tried to solve this problem with Machine Learning, using an end-to-end software platform called Dataiku. In a first blog post, we learned how to make basic use of Dataiku, and discovered that just knowing how to click on a few buttons wasn't quite enough: you had to bring some sense to your data and to the training algorithm, or you would get absurd results.

In a second entry, we studied the data a bit more, tweaked a few parameters and values in Dataiku's algorithms and trained a new model. This yielded a much better result, and this new model was - if not accurate - at least relevant: the same flat had a higher predicted price when it was bigger or supposedly in a better neighbourhood. However, it was far from perfect and really lacked accuracy for several reasons, some of them out of our control.

However, all of this was done on one instance of Dataiku - a licensed software - on a single VM. There are multiple reasons that could push me to do things differently:

What we did very intuitively (and somewhat naively) with Dataiku was actually a quite complex pipeline that is often called ELT, for Extract, Load and Transform.

And obviously, after this ELT process, we added a step to train a model on the transformed data.

So what are we going to do to redo all of that without Dataiku's help?

When ELT becomes ELTT

Now that we know what we are going to do, let us proceed!

Before beginning, we have to properly set up our environment to be able to launch the different tools and products. Throughout this tutorial, we will show you how to do everything with CLIs. However, all these manipulations can also be done on OVHcloud's manager (GUI), in which case you won't have to configure these tools.

For all the manipulations described in the next phase of this article, we will use a Virtual Machine deployed in OVHcloud's Public Cloud. It will serve as the extraction agent to download the raw data from the web and push it to S3, as well as a CLI machine to launch data processing and notebook jobs. It is a d2-4 flavor with 4GB of RAM, 2 vCores and 50 GB of local storage running Debian 10, deployed in the Gravelines datacenter. During this tutorial, I run a few UNIX commands, but you should easily be able to adapt them to whatever OS you use if needed. All the CLI tools specific to OVHcloud's products are available on multiple OSs.

You will also need an OVHcloud NIC (user account) as well as a Public Cloud Project created for this account with a quota high enough to deploy a GPU (if that is not the case, you will be able to deploy a notebook on CPU rather than GPU; the training phase will just take more time). To create a Public Cloud project, you can follow these steps.

Here is a list of the CLI tools and other components that we will use during this tutorial and why:

Additionally you will find commented code samples for the processing and training steps in this Github repository.

In this tutorial, we will use several object storage buckets. Since we will use the S3 API, we will call them S3 buckets, but as mentioned above, if you use OVHcloud standard Public Cloud Storage, you could also use the Swift API. However, you are restricted to the S3 API only if you use our new high-performance object storage offer, currently in Beta.

For this tutorial, we are going to create and use the following S3 buckets:

To create these buckets, use the following commands after having configured your aws CLI as explained above:
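
If you prefer to script this step in Python rather than the aws CLI, a rough boto3 equivalent looks like the following sketch; the endpoint URL and the raw-data bucket name are assumptions to adapt to your own region and naming.

# Rough boto3 equivalent of creating the S3 buckets with the aws CLI.
# The endpoint URL and the raw-data bucket name are assumptions; adapt them
# to your own OVHcloud region and naming (credentials come from your aws config).
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.gra.io.cloud.ovh.net",  # assumed GRA S3 endpoint
)

for bucket in ("transactions-ecoex-raw",      # hypothetical name for the raw data
               "transactions-ecoex-clean",    # cleaned data (used later in the article)
               "transactions-ecoex-model"):   # trained model artifacts
    s3.create_bucket(Bucket=bucket)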

Now that you have your environment set up and your S3 buckets ready, we can begin the tutorial!

First, let us download the data files directly from Etalab's website and unzip them:
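
If you would rather script the download in Python, a sketch along these lines fetches and decompresses one file per year; the base URL and the year range are assumptions to verify against Etalab's current layout.

# Illustrative download of the yearly DVF (French real estate) files.
# BASE_URL and the year range are assumptions; verify them before use.
import gzip
import shutil
import urllib.request

BASE_URL = "https://files.data.gouv.fr/geo-dvf/latest/csv"  # assumed layout

for year in range(2016, 2021):
    archive = f"full_{year}.csv.gz"
    urllib.request.urlretrieve(f"{BASE_URL}/{year}/full.csv.gz", archive)
    with gzip.open(archive, "rb") as src, open(f"full_{year}.csv", "wb") as dst:
        shutil.copyfileobj(src, dst)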

You should now have the following files in your directory, each one corresponding to the French real estate transactions of a specific year:

Now, use the S3 CLI to push these files into the relevant S3 bucket:
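
A boto3 alternative to the S3 CLI push could look like this sketch (endpoint and bucket name assumed, as before):

# boto3 stand-in for pushing the unzipped yearly files to the raw-data bucket.
import glob
import boto3

s3 = boto3.client("s3", endpoint_url="https://s3.gra.io.cloud.ovh.net")  # assumed endpoint

for path in glob.glob("full_*.csv"):
    s3.upload_file(path, "transactions-ecoex-raw", path)  # bucket name is an assumption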

You should now have those 5 files in your S3 bucket:

What we just did with a small VM was ingest data into an S3 bucket. In real-life use cases with more data, we would probably use dedicated tools to ingest the data. However, in our example with just a few GB of data coming from a public website, this does the trick.

Now that you have your raw data in place to be processed, you just have to upload the code necessary to run your data processing job. Our data processing product allows you to run Spark code written in Java, Scala or Python. In our case, we used PySpark on Python. Your code should consist of 3 files:

Once you have your code files, go to the folder containing them and push them to the appropriate S3 bucket:

Your bucket should now look like that:

You are now ready to launch your data processing job. The following command will allow you to launch this job on 10 executors, each with 4 vCores and 15 GB of RAM.

Note that the data processing product uses the Swift API to retrieve the code files. This is totally transparent to the user, and the fact that we used the S3 CLI to create the bucket has absolutely no impact. When the job is over, you should see the following in your transactions-ecoex-clean bucket:

Before going further, let us look at the size of the data before and after cleaning:

As you can see, with ~2.5 GB of raw data, we extracted only ~10 MB of actually useful data (only 0.4%)! What is noteworthy here is that you can easily imagine use cases where you need a large-scale infrastructure to ingest and process the raw data but where one or a few VMs are enough to work on the clean data. Obviously, this is more often the case when working with text/structured data than with raw sound/image/video.

Before we start training a model, take a look at these two screenshots from OVHcloud's data processing UI to erase any doubt you have about the power of distributed computing:

In the first picture, you see the time taken for this job when launching only 1 executor: 8 minutes 35 seconds. This duration is reduced to only 2 minutes 56 seconds when launching the same job (same code, etc.) on 4 executors: almost 3 times faster. And since you pay as you go, this will only cost you ~33% more in that case for the same operation done 3 times faster, without any modification to your code, only one argument in the CLI call. Let us now use this data to train a model.

To train the model, you are going to use OVHcloud AI notebook to deploy a notebook! With the following command, you will:

In our case, we launch a notebook with only 1 GPU because the code samples we provide would not leverage several GPUs for a single job. I could adapt my code to parallelize the training phase on multiple GPUs, in which case I could launch a job with up to 4 parallel GPUs.

Once you're done, just get the URL of your notebook with the following command and connect to it with your browser:

You can now import the real-estate-training.ipynb file to the notebook with just a few clicks. If you don't want to import it from the computer you use to access the notebook (for example if like me you use a VM to work and have cloned the git repo on this VM and not on your computer), you can push the .ipynb file to your transactions-ecoex-clean or transactions-ecoex-model bucket and re-synchronize the bucket to your notebook while it runs by using the ovhai notebook pull-data command. You will then find the notebook file in the corresponding directory.

Once you have imported the notebook file to your notebook instance, just open it and follow the directives. If you are interested in the result but don't want to do it yourself, let's sum up what the notebook does:

Use the models built in this tutorial at your own risk

So, what can we conclude from all of this? First, even if the second model is obviously better than the first, it is still very noisy: while not far from correct on average, there is still a huge variance. Where does this variance come from?

Well, it is not easy to say. To paraphrase the finishing part of my last article:

In this article, I tried to give you a glimpse at the tools that Data Scientists commonly use to manipulate data and train models at scale, in the Cloud or on their own infrastructure:

Hopefully, you now have a better understanding of how Machine Learning algorithms work, what their limitations are, and how Data Scientists work on data to create models.

As explained earlier, all the code used to obtain these results can be found here. Please don't hesitate to replicate what I did or adapt it to other use cases!


Read the rest here:
OVH Groupe : A journey into the wondrous land of Machine Learning, or Cleaning data is funnier than cleaning my flat! (Part 3) - Marketscreener.com

Ensuring compliance with data governance regulations in the Healthcare Machine learning (ML) space – BSA bureau

"Establishing decentralized Machine learning (ML) framework optimises and accelerates clinical decision-making for evidence-based medicine" says Krishna Prasad Shastry, Chief Technologist (AI Strategy and Solutions) at Hewlett-Packard Enterprise

The healthcare industry is becoming increasingly information-driven. Smart machines are creating a positive impact to enhance capabilities in healthcare and R&D. Promising technologies are aiding healthcare staff in areas with limited resources, helping to achieve a more efficient healthcare system. Yet, with all its benefits, using data to deliver more value-based care is not without risks. Krishna Prasad Shastry, Chief Technologist (AI Strategy and Solutions) at Hewlett-Packard Enterprise, Singapore shares further details on the establishment of a decentralized machine learning framework while ensuring compliance with data governance regulations.

Technology will be indispensable in the future of healthcare, with advancements in various technologies such as artificial intelligence (AI), robotics, and nanotechnology. Machine learning (ML), a subset of AI, now plays a key role in many health-related realms, such as disease diagnosis. For example, ML models can assist radiologists to diagnose diseases, like Leukaemia or Tuberculosis, more accurately and more rapidly. By using ML algorithms to evaluate imaging such as chest X-rays, MRI, or CT scans, and applying ML to analyse medical imaging, radiologists can better prioritise which potential positive cases to investigate. Similarly, ML models can be developed to recommend personalised patient care, by observing various vital parameters, sensors, or electronic health records (EHRs). The efficiency gains that ML offers stand to take the pressure off the healthcare system, which is especially valuable when resources are stretched and access to hospitals and clinics is disrupted.

Data underpins these digital healthcare advancements. Healthcare organisations globally are embracing digital transformation and using data to enhance operations. Yet, with all its benefits, using data to deliver more value-based care is not without risks. For example, using ML for diagnostic purposes requires a diverse set of data in order to avoid bias. But, access to diverse data sets is often limited by privacy regulations in the health sector. Healthcare leaders face the challenge of how to use data to fuel innovation in a secure and compliant manner.

For instance, HPE's Swarm Learning, a decentralized machine learning framework, allows insights generated from data to be shared without having to share the raw data itself. The insights generated by each owner in a group are shared, allowing all participants to still benefit from the collaborative insights of the network. In the case of a hospital that's building an ML model for diagnostics, Swarm Learning enables decentralized model training that benefits from access to insights of a larger data set, while respecting privacy regulations.

Partnering with stakeholders across the public and private sectors will enable us to better provide patients access to new digital healthcare solutions that can reform the management of challenging diseases such as cancer. Our recent partnership with AstraZeneca, under their A. Catalyst Network, aims to drive healthcare improvement across Singapore's healthcare ecosystem. Further, Swarm Learning can reduce the risk of breaching data governance regulations and can accelerate medical research.

The future of healthcare lies in working in tandem with technology; innovations in the AI and ML space are already being implemented across the treatment chain in the healthcare industry, with successful case studies that we can learn from. From diagnosis to patient management, AI and ML can be used to perform tasks such as predicting diseases, identifying high-risk patients, and automating hospital operations. As ML models are increasingly used in the diagnosis of diseases, there is an increasing need for data sets covering a diverse set of patients. This is a challenging demand to fulfill due to privacy and regulatory restrictions. Gaining insights from a diverse set of data without compromising on privacy might help, as in Swarm Learning.

AI models are used in precision medicine to improve diagnostic outcomes through integration and by modeling multiple data points, including genetic, biochemical, and clinical data. They are also used to optimise and accelerate clinical decision-making for evidence-based medicine. In the sphere of life sciences, AI models are used in areas such as drug discovery, drug toxicity prediction, clinical trials, and adverse event management. For all these cases, Swarm Learning can help build better models by collaborating across siloed data sets.

As we progress towards a technology-driven future, the question of how humans and technology can work hand in hand for the greater good will remain a question to be answered. But I believe that we will be able to maximise the benefits of digital healthcare, as long as we continue to facilitate collaboration between healthcare and IT professionals to bridge the existing gaps in the industry.

Read the original here:
Ensuring compliance with data governance regulations in the Healthcare Machine learning (ML) space - BSA bureau

When It Comes to AI, Can We Ditch the Datasets? Using Synthetic Data for Training Machine-Learning Models – SciTechDaily

A machine-learning model for image classification that's trained using synthetic data can rival one trained on the real thing, a study shows.

Huge amounts of data are needed to train machine-learning models to perform image classification tasks, such as identifying damage in satellite photos following a natural disaster. However, these data are not always easy to come by. Datasets may cost millions of dollars to generate, if usable data exist in the first place, and even the best datasets often contain biases that negatively impact a model's performance.

To circumvent some of the problems presented by datasets, MIT researchers developed a method for training a machine learning model that, rather than using a dataset, uses a special type of machine-learning model to generate extremely realistic synthetic data that can train another model for downstream vision tasks.

Their results show that a contrastive representation learning model trained using only these synthetic data is able to learn visual representations that rival or even outperform those learned from real data.

MIT researchers have demonstrated the use of a generative machine-learning model to create synthetic data, based on real data, that can be used to train another model for image classification. This image shows examples of the generative model's transformation methods. Credit: Courtesy of the researchers

This special machine-learning model, known as a generative model, requires far less memory to store or share than a dataset. Using synthetic data also has the potential to sidestep some concerns around privacy and usage rights that limit how some real data can be distributed. A generative model could also be edited to remove certain attributes, like race or gender, which could address some biases that exist in traditional datasets.

"We knew that this method should eventually work; we just needed to wait for these generative models to get better and better. But we were especially pleased when we showed that this method sometimes does even better than the real thing," says Ali Jahanian, a research scientist in the Computer Science and Artificial Intelligence Laboratory (CSAIL) and lead author of the paper.

Jahanian wrote the paper with CSAIL grad students Xavier Puig and Yonglong Tian, and senior author Phillip Isola, an assistant professor in the Department of Electrical Engineering and Computer Science. The research will be presented at the International Conference on Learning Representations.

Once a generative model has been trained on real data, it can generate synthetic data that are so realistic they are nearly indistinguishable from the real thing. The training process involves showing the generative model millions of images that contain objects in a particular class (like cars or cats), and then it learns what a car or cat looks like so it can generate similar objects.

Essentially by flipping a switch, researchers can use a pretrained generative model to output a steady stream of unique, realistic images that are based on those in the models training dataset, Jahanian says.

But generative models are even more useful because they learn how to transform the underlying data on which they are trained, he says. If the model is trained on images of cars, it can imagine how a car would look in different situations, ones it did not see during training, and then output images that show the car in unique poses, colors, or sizes.

Having multiple views of the same image is important for a technique called contrastive learning, where a machine-learning model is shown many unlabeled images to learn which pairs are similar or different.

The researchers connected a pretrained generative model to a contrastive learning model in a way that allowed the two models to work together automatically. The contrastive learner could tell the generative model to produce different views of an object, and then learn to identify that object from multiple angles, Jahanian explains.

"This was like connecting two building blocks. Because the generative model can give us different views of the same thing, it can help the contrastive method to learn better representations," he says.
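
As a purely conceptual sketch (not the authors' code), the pairing can be thought of as: sample a latent, ask the generative model for two slightly shifted views, and train the encoder with a contrastive loss such as InfoNCE. The generator and encoder below are stand-in modules.

# Conceptual sketch of pairing a generative model with a contrastive learner.
# G and f are stand-in modules; the real work uses a pretrained generative model.
import torch
import torch.nn as nn
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature        # pairwise similarities
    targets = torch.arange(z1.size(0))        # matching views sit on the diagonal
    return F.cross_entropy(logits, targets)

G = nn.Linear(64, 3 * 32 * 32)                  # stand-in "generator"
f = nn.Sequential(nn.Linear(3 * 32 * 32, 128))  # stand-in "encoder"

latent = torch.randn(16, 64)
view1 = G(latent + 0.05 * torch.randn_like(latent))   # two views of the same latent
view2 = G(latent + 0.05 * torch.randn_like(latent))
loss = info_nce(f(view1), f(view2))
loss.backward()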

The researchers compared their method to several other image classification models that were trained using real data and found that their method performed as well, and sometimes better, than the other models.

One advantage of using a generative model is that it can, in theory, create an infinite number of samples. So, the researchers also studied how the number of samples influenced the models performance. They found that, in some instances, generating larger numbers of unique samples led to additional improvements.

"The cool thing about these generative models is that someone else trained them for you. You can find them in online repositories, so everyone can use them. And you don't need to intervene in the model to get good representations," Jahanian says.

But he cautions that there are some limitations to using generative models. In some cases, these models can reveal source data, which can pose privacy risks, and they could amplify biases in the datasets they are trained on if they aren't properly audited.

He and his collaborators plan to address those limitations in future work. Another area they want to explore is using this technique to generate corner cases that could improve machine learning models. Corner cases often can't be learned from real data. For instance, if researchers are training a computer vision model for a self-driving car, real data wouldn't contain examples of a dog and his owner running down a highway, so the model would never learn what to do in this situation. Generating that corner-case data synthetically could improve the performance of machine learning models in some high-stakes situations.

The researchers also want to continue improving generative models so they can compose images that are even more sophisticated, he says.

Reference: "Generative Models as a Data Source for Multiview Representation Learning" by Ali Jahanian, Xavier Puig, Yonglong Tian and Phillip Isola. PDF

This research was supported, in part, by the MIT-IBM Watson AI Lab, the United States Air Force Research Laboratory, and the United States Air Force Artificial Intelligence Accelerator.

Read more from the original source:
When It Comes to AI, Can We Ditch the Datasets? Using Synthetic Data for Training Machine-Learning Models - SciTechDaily

Research Analyst / Associate / Fellow in Machine Learning and Artificial Intelligence job with NATIONAL UNIVERSITY OF SINGAPORE | 289568 – Times…

The Role

The Sustainable and Green Finance Institute (SGFIN) is a new university-level research institute in the National University of Singapore (NUS), jointly supported by the Monetary Authority of Singapore (MAS) and NUS. SGFIN aspires to develop deep research capabilities in sustainable and green finance, provide thought leadership in the sustainability space, and shape sustainability outcomes across the financial sector and the economy at large.

This role is ideally suited for those wishing to work in academic or industry research in quantitative analysis, particularly in the area of machine learning and artificial intelligence. The responsibilities of the role will include designing and developing various analytical frameworks to analyze structured, unstructured, and non-traditional data related to corporate financial, environmental, and social indicators.

There are no teaching obligations for this position, and the candidate will have the opportunity to develop their research portfolio.

Duties and Responsibilities

The successful candidate will be expected to assume the following responsibilities:

Qualifications

Covid-19 Message

At NUS, the health and safety of our staff and students are among our utmost priorities, and COVID vaccination supports our commitment to ensure the safety of our community and to make NUS as safe and welcoming as possible. Many of our roles require a significant amount of physical interaction with students/staff/members of the public. Even for job roles that may be performed remotely, there will be instances where on-campus presence is required.

In accordance with Singapore's legal requirements, unvaccinated workers will not be able to work on the NUS premises with effect from 15 January 2022. As such, job applicants will need to be fully COVID-19 vaccinated to secure successful employment with NUS.

See the original post here:
Research Analyst / Associate / Fellow in Machine Learning and Artificial Intelligence job with NATIONAL UNIVERSITY OF SINGAPORE | 289568 - Times...

Australian Institute for Machine Learning (AIML …

News

14 Apr
AI for space research delivers back-to-back success in global satellite challenge

South Australia's leadership in space innovation has been recognised, with an AIML-led team securing first place in a global AI competition organised by the European Space Agency.

12 Apr
Tech and defence experts call to build AI Australia

Australia must commit to building its sovereign AI research and innovation capability, or risk being left behind as other countries race to pursue their ambitious AI strategies.

11 Feb
Meet the amazing women training AI machines

For International Day of Women and Girls in Science, meet some of the women at AIML who are building great new things and leading the way in cutting-edge machine learning technology.

16 Dec
Machine learning students say cheers with AI beers

How do you build a neural network that can learn how to make beer? We'll show you.

08 Dec
AI + industry collaborations bring award-winning success

South Australia's capacity to lead innovation in AI and machine learning has been recognised at the 2021 SA Science and Innovation Excellence Awards, with an AIML team winning the category of Excellence in Science and Industry Collaboration.

24 Nov
New centre boosts AIML's advanced machine learning research and innovation

Australia's advanced machine learning capability has received a boost, with a new $20m research and innovation initiative now underway at AIML.

More here:
Australian Institute for Machine Learning (AIML ...

Politics, Machine Learning, and Zoom Conferences in a Pandemic: A Conversation with an Undergraduate Researcher – Caltech

In every election, after the polls close and the votes are counted, there comes a time for reflection. Pundits appear on cable news to offer theories, columnists pen op-eds with warnings and advice for the winners and losers, and parties conduct postmortems.

The 2020 U.S. presidential election in which Donald Trump lost to Joe Biden was no exception.

For Caltech undergrad Sreemanti Dey, the election offered a chance to do her own sort of reflection. Dey, an undergrad majoring in computer science, has a particular interest in using computers to better understand politics. Working with Michael Alvarez, professor of political and computational social science, Dey used machine learning and data collected during the 2020 election to find out what actually motivated people to vote for one presidential candidate over another.

In December, Dey presented her work on the topic at the fourth-annual International Conference on Applied Machine Learning and Data Analytics, which was held remotely and was recognized by the organizers as having the best paper at the conference.

We recently chatted with Dey and Alvarez, who is co-chair of the Caltech-MIT Voting Project, about their research, what machine learning can offer to political scientists, and what it is like for undergrads doing research at Caltech.

Sreemanti Dey: I think that how elections are run has become a really salient issue in the past couple of years. Politics is in the forefront of people's minds because things have gotten so, I guess, strange and chaotic recently. That, along with a lot of factors in 2020, made people care a lot more about voting. That makes me think it's really important to study how elections work and how people choose candidates in general.

Sreemanti: I've learned from Mike that a lot of social science studies are deductive in nature. So, you pick a hypothesis and then you pick the data that would best help you understand the hypothesis that you've chosen. We wanted to take a more open-ended approach and see what the data itself told us. And, of course, that's precisely what machine learning is good for.

In this particular case, it was a matter of working with a large amount of data that you can't filter through yourself without introducing a lot of bias. And that could be just you choosing to focus on the wrong issues. Machine learning and the model that we used are a good way to reduce the amount of information you're looking at without bias.

Basically it's a way of reducing high-dimensional data sets to the most important factors in the data set. So it goes through a couple steps. It first groups all the features of the data into these modules so that the features within a module are very correlated with each other, but there is not much correlation between modules. Then, since each module represents the same type of features, it reduces how many features are in each module. And then at the very end, it combines all the modules together and then takes one last pass to see if it can be reduced by anything else.

Mike: This technique was developed by Christina Ramirez (MS '96, PhD '99), a PhD graduate of our program now at UCLA. Christina is someone who I've collaborated with quite a bit. Sreemanti and I were meeting pretty regularly with Christina and getting some advice from her along the way about this project and some others that we're thinking about.

Sreemanti: I think we got pretty much what we expected, except for what the most partisan-coded issues are. Those I found a little bit surprising. The most partisan questions turned out to be about filling the Supreme Court seats. I thought that it was interesting.

Sreemanti: It's really incredible. I find it astonishing that a person like Professor Alvarez has the time to focus so much on the undergraduates in lab. I did research in high school, and it was an extremely competitive environment trying to get attention from professors or even your mentor.

It's a really nice feature of Caltech that professors are very involved with what their undergraduates are doing. I would say it's a really incredible opportunity.

Mike: I and most of my colleagues work really hard to involve the Caltech undergraduates in a lot of the research that we do. A lot of that happens in the SURF [Summer Undergraduate Research Fellowship] program in the summers. But it also happens throughout the course of the academic year.

What's a little bit unusual here is that undergraduate students typically take on smaller projects. They typically work on things for a quarter or a summer. And while they do a good job on them, they don't usually reach the point where they produce something that's potentially publication quality.

Sreemanti started this at the beginning of her freshman year and we worked on it through her entire freshman year. That gave her the opportunity to really learn the tools, read the political science literature, read the machine learning literature, and take this to a point where at the end of the year, she had produced something that was of publication quality.

Sreemanti: It was a little bit strange, first of all, because of the time zone issue. This conference was in a completely different time zone, so I ended up waking up at 4 a.m. for it. And then I had an audio glitch halfway through that I had to fix, so I had some very typical Zoom-era problems and all that.

Mike: This is a pandemic-era story with how we were all working to cope and trying to maintain the educational experience that we want our undergraduates to have. We were all trying to make sure that they had the experience that they deserved as a Caltech undergraduate and trying to make sure they made it through the freshman year.

We have the most amazing students imaginable, and to be able to help them understand what the research experience is like is just an amazing opportunity. Working with students like Sreemanti is the sort of thing that makes being a Caltech faculty member very special. And it's a large part of the reason why people like myself like to be professors at Caltech.

Sreemanti: I think I would want to continue studying how people make their choices about candidates, but maybe in a slightly different way with different data sets. Right now, from my other projects, I think I'm learning how to not rely on surveys and rely instead on more organic data, for example, from social media. I would be interested in trying to find a way to study people's candidate choices from their more organic interactions with other people.

Sreemanti's paper, titled, "Fuzzy Forests for Feature Selection in High-Dimensional Survey Data: An Application to the 2020 U.S. Presidential Election," was presented in December at the fourth-annual International Conference on Applied Machine Learning and Data Analytics," where it won the best paper award.

Originally posted here:
Politics, Machine Learning, and Zoom Conferences in a Pandemic: A Conversation with an Undergraduate Researcher - Caltech

AI Dynamics Will Employ Machine Learning to Triage TB Patients More Accurately, Quickly, Simply and Inexpensively Using Cough Sound Data, Bringing…

Selected by QB3 and UCSF for the R2D2 TB Network's "Scale Up Your TB Diagnostic Solution" Program

BELLEVUE, Wash., April 26, 2022 (GLOBE NEWSWIRE) -- AI Dynamics, an organization founded on the belief that everyone should have access to the power of artificial intelligence (AI) to change the world, has been selected for the Rapid Research in Diagnostics Development for TB Network's (R2D2 TB Network) "Scale Up Your TB Diagnostic Solution" program, hosted by QB3 and the UCSF Rosenman Institute. With 1.5 million deaths reported each year, Tuberculosis (TB) is the worldwide leading cause of death from a single infectious disease agent. The goal of the program is to harness machine learning technology for triaging TB using simple and affordable tests that can be performed on easy-to-collect samples such as cough sounds.

Currently, two weeks of cough sound data is widely used to determine who requires costly confirmatory testing, which delays the initiation of treatment. AI Dynamics will build a proof-of-concept machine learning model to triage TB patients more accurately, quickly, simply and inexpensively using cough sounds, relieving patients from paying for unnecessary molecular and culture TB tests. Due to the prevalence of TB in under-resourced and remote locations, access to affordable early detection options is necessary to prevent disease transmission and deaths in such countries.
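
Neither the company's model nor its data are public; as a generic, hedged illustration of the kind of pipeline such work often uses, one might extract MFCC features from cough recordings and fit a simple classifier. The file paths and labels below are hypothetical.

# Generic cough-audio triage sketch (illustrative only, not AI Dynamics' model).
# Assumes librosa and scikit-learn are installed; file paths and labels are hypothetical.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def cough_features(path):
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return mfcc.mean(axis=1)                 # one fixed-length vector per recording

# Hypothetical training set: list of (wav_path, label) with 1 = TB-positive cough.
train = [("cough_001.wav", 1), ("cough_002.wav", 0)]
X = np.array([cough_features(p) for p, _ in train])
y = np.array([label for _, label in train])

clf = LogisticRegression().fit(X, y)
print(clf.predict_proba(X)[:, 1])            # triage scores, not a diagnosis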

"At the core of AI Dynamics' mission is providing equal access to the power of AI to everyone, and we are committed to working with like-minded companies that recognize the positive impact innovative technology can have on the world," said Rajeev Dutt, Founder and CEO of AI Dynamics. The collaboration and accessible datasets that the R2D2 TB Network provides help to facilitate life-changing diagnostics for the most vulnerable populations.

The R2D2 TB Network offers a transparent and partner-engaged process for the identification, evaluation and advancement of promising TB diagnostics by providing experts and data and facilitating rigorous clinical study evaluation. AI Dynamics will build and validate a model using cough sounds collected from sites worldwide through the R2D2 TB Network.

About AI Dynamics:

AI Dynamics aims to make artificial intelligence (AI) accessible to organizations of all sizes. The company's NeoPulse Framework is an intuitive development and management platform for AI, which enables companies to develop and implement deep neural networks and other machine learning models that can improve key performance metrics. The company's team brings decades of experience in the fields of machine learning and artificial intelligence from leading companies and research organizations. For more information, please visit aidynamics.com.

About The R2D2 TB Network:

The Rapid Research in Diagnostics Development for TB Network (R2D2 TB Network) brings together various TB experts with highly experienced clinical study sites in 10 countries. For further information, please visit their website at https://www.r2d2tbnetwork.org/.

Media Contact:

Justine Goodiel
UPRAISE Marketing + PR for AI Dynamics
aidynamics@upraisepr.com

Originally posted here:
AI Dynamics Will Employ Machine Learning to Triage TB Patients More Accurately, Quickly, Simply and Inexpensively Using Cough Sound Data, Bringing...

Five Machine Learning Project Pitfalls to Avoid in 2022 – EnterpriseTalk

Machine Learning (ML) systems are complex, and this complexity increases the chances of failure as well. Knowing what may go wrong is critical for developing robust machine learning systems.

Machine Learning (ML) initiatives fail 85% of the time, according to Gartner. Worse yet, according to the research firm, this tendency will continue until the end of 2022.

There are a number of foreseeable reasons why machine learning initiatives fail, many of which may be avoided with the right knowledge and diligence. Here are some of the most common challenges that machine learning projects face, as well as ways to prevent them.

All AI/ML endeavors require data, which is needed for testing, training, and operating models. However, acquiring such data is a stumbling block because most organizational data is dispersed among on-premises and cloud data repositories, each with its own set of compliance and quality control standards, making data consolidation and analysis that much more complex.

Another stumbling block is data silos. When teams use multiple systems to store and handle data sets, data silos (collections of data controlled by one team but not completely available to others) can form. That might, however, also be a result of a siloed organizational structure.

In reality, no one knows everything. For the successful adoption and implementation of ML in enterprise projects, it is critical to have at least one ML expert on the team who can do the foundational work. Being overly confident without the right skill sets on the team will only add to the chances of failure.

Organizations are nearly drowning in large volumes of observational data, thanks to developments in technology such as integrated smart devices and telematics, relatively inexpensive and readily available big data storage, and a desire to incorporate more data science into business decisions. However, that level of data availability can result in observational data dumpster diving.


When adopting a tool as powerful as machine learning, it pays for organizations to be clear about what they are searching for. Businesses should take advantage of their large observational data resources to uncover potentially valuable insights, but they should evaluate those hypotheses through A/B or multivariate testing to distinguish reality from fiction.
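As a minimal illustration of that validation step, the sketch below applies a chi-square test to the outcome of a hypothetical A/B experiment; the conversion counts are made-up placeholders, and the chi-square test is just one common choice for comparing two proportions.

# Hypothetical A/B test check: did variant B really convert better,
# or is the observed difference consistent with chance?
from scipy.stats import chi2_contingency

def ab_test(conv_a, n_a, conv_b, n_b, alpha=0.05):
    # 2x2 table of conversions vs non-conversions for the two variants.
    table = [[conv_a, n_a - conv_a],
             [conv_b, n_b - conv_b]]
    _, p_value, _, _ = chi2_contingency(table)
    return p_value, p_value < alpha

# Placeholder counts from a controlled experiment, not observational data.
p, significant = ab_test(conv_a=120, n_a=2400, conv_b=150, n_b=2380)
print(f"p-value={p:.3f}, significant={significant}")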

The ability to evaluate the overall performance of a trained model is crucial in machine learning. It's critical to assess how well the model performs on both the training and the test data. Those results guide the choice of model and hyperparameters and determine whether the model is ready for production use.

It is vital to select the right assessment measures for the job at hand when evaluating model performance.
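The sketch below illustrates both points on synthetic data: it compares training and test scores to catch overfitting, and it uses AUC rather than raw accuracy because the example classes are imbalanced. The dataset, model and metric here are illustrative assumptions, not recommendations from the article.

# Hypothetical evaluation sketch: compare train vs test performance and
# pick a metric (AUC) suited to an imbalanced classification task.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=5000, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
train_auc = roc_auc_score(y_tr, model.predict_proba(X_tr)[:, 1])
test_auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])

# A large gap between the two scores signals overfitting; on imbalanced data
# AUC (or precision/recall) is a more informative yardstick than accuracy.
print(f"train AUC={train_auc:.3f}  test AUC={test_auc:.3f}")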

Machine learning has become more accessible in various ways. There are far more machine learning tools available today than there were even a few years ago, and data science knowledge has multiplied.

Having a data science team work on an AI and ML project in isolation, on the other hand, can drive the organization down the most difficult path to success. The team may run into unanticipated difficulties it has no prior familiarity with, and it can get deep into a project before recognizing it is not adequately prepared.

It's imperative to make sure that domain specialists, such as process engineers and plant operators, are not left out of the process, because they are familiar with its complexity and with the context of the relevant data.

