Developing Machine Learning and Statistical Tools to Evaluate the Accessibility of Public Health Advice on Infectious Diseases among Vulnerable People…

Comput Intell Neurosci. 2021 Dec 17;2021:1916690. doi: 10.1155/2021/1916690. eCollection 2021.

ABSTRACT

BACKGROUND: From Ebola and Zika to the latest COVID-19 pandemic, outbreaks of highly infectious diseases continue to reveal severe consequences of social and health inequalities. People from low socioeconomic and educational backgrounds, as well as those with low health literacy, tend to be most affected by the uncertainty, complexity, volatility, and progressiveness of public health crises and emergencies. A key lesson that governments have taken from the ongoing coronavirus pandemic is the importance of developing and disseminating highly accessible, actionable, inclusive, and coherent public health advice, which represents a critical tool for helping people with diverse cultural and educational backgrounds and varying abilities to implement health policies effectively at the grassroots level.

OBJECTIVE: We aimed to translate the best practices of accessible, inclusive public health advice (purposefully designed for people with low socioeconomic and educational background, health literacy levels, limited English proficiency, and cognitive/functional impairments) on COVID-19 from health authorities in English-speaking multicultural countries (USA, Australia, and UK) to adaptive tools for the evaluation of the accessibility of public health advice in other languages.

METHODS: We developed an optimised Bayesian classifier to produce probabilistic predictions of the accessibility of official health advice among vulnerable people, including migrants and foreigners living in China. We also developed an adaptive statistical formula for the rapid evaluation of the accessibility of health advice among vulnerable people in China.

RESULTS: Our study provides needed research tools to fill a persistent gap in Chinese public health research on accessible, inclusive communication of infectious disease prevention and management. For the probabilistic prediction, using the optimised Bayesian machine learning classifier (Gaussian Naive Bayes, GNB), the largest positive likelihood ratio (LR+), 16.685 (95% confidence interval: 4.35, 64.04), was identified when the probability threshold was set at 0.2 (sensitivity: 0.98; specificity: 0.94).
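
The headline metric above can be reproduced from first principles. The sketch below is illustrative only (invented labels, scores, and threshold, not the study's data): it shows how sensitivity, specificity, and the positive likelihood ratio LR+ = sensitivity / (1 − specificity) fall out of thresholding a probabilistic classifier such as GNB.

```python
def confusion_at_threshold(y_true, y_prob, threshold):
    """Count TP/FP/TN/FN when predicted probabilities >= threshold
    are called positive (inaccessible advice, say)."""
    tp = sum(1 for y, p in zip(y_true, y_prob) if y == 1 and p >= threshold)
    fn = sum(1 for y, p in zip(y_true, y_prob) if y == 1 and p < threshold)
    fp = sum(1 for y, p in zip(y_true, y_prob) if y == 0 and p >= threshold)
    tn = sum(1 for y, p in zip(y_true, y_prob) if y == 0 and p < threshold)
    return tp, fp, tn, fn

def lr_plus(sensitivity, specificity):
    """Positive likelihood ratio: LR+ = sensitivity / (1 - specificity)."""
    return sensitivity / (1.0 - specificity)
```

At the reported sensitivity 0.98 and specificity 0.94, `lr_plus` gives roughly 16.3; the paper's 16.685 presumably reflects unrounded sensitivity and specificity values.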

CONCLUSION: Effective communication of health risks through accessible, inclusive, actionable public advice represents a powerful tool for reducing health inequalities amidst health crises and emergencies. Our study translated the best-practice public health advice developed during the pandemic into intuitive machine learning classifiers that health authorities can use to develop evidence-based guidelines for accessible health advice. In addition, we developed adaptive statistical tools for frontline health professionals to assess the accessibility of public health advice for people from non-English-speaking backgrounds.

PMID:34925484 | PMC:PMC8683224 | DOI:10.1155/2021/1916690

Human Rights Documentation In The Digital Age: Why Machine Learning Isn't A Silver Bullet – Forbes

When the Syrian uprising started nearly 10 years ago, videos taken by citizens of attacks against them, such as chemical and barrel bomb strikes, started appearing on social media. While international human rights investigators couldn't get into the country, people on the ground documented and shared what was happening. Yet soon, videos and pictures of war atrocities were deleted from social media platforms, a pattern that has continued to date. Ashoka Fellow Hadi al-Khatib, founder of the Syrian Archive and Mnemonic, works to save these audiovisual documents so they are available as evidence for lawyers, human rights investigators, historians, prosecutors, and journalists. In the wake of the Facebook Leaks, which are drawing needed attention to the topic of content moderation and human rights, Ashoka's Konstanze Frischen caught up with Hadi.

Hadi al-Khatib, founder of Mnemonic and the Syrian Archive, warns us against an over-reliance on machine learning for online content moderation.

Konstanze Frischen: Hadi, you verify and save images and videos that show potential human rights violations, and ensure that prosecutors and journalists can use them later to investigate crimes against humanity. How and why did you start this work?

Hadi al-Khatib: I come from Hama, a city north of Damascus in Syria, where the first uprising against the Syrian government happened in 1982, and thousands of people died at the hands of the Syrian military. Unfortunately, at the time, there was very little documentation of what happened. Growing up, when my family spoke about these incidents, they would speak very quietly, or avoid the topic when I asked them about it. They would say: be careful, even the walls have ears. In 2011, during the second big uprising against the Syrian government, the situation was quite different. We immediately saw a huge scale of audio-visual documentation on social media - videos and photos captured by people witnessing the peaceful protests first, and then the violence against protesters. People wanted to make sure the crimes that they were witnessing were documented, in contrast to what happened in Hama in 1982. My work is to ensure that this documentation, captured by people who risked their lives, is not lost and is accessible in the future.

Frischen: With people publishing this on social media on a very large scale, many people might assume: "It's all out there, so why do I need someone else to archive it?"

al-Khatib: Yes, good question. When we work with journalists, photographers, and citizens from around the world, most of them do think of social media as a place where they can safely archive their materials. They think, "We have the archive. It's on social media, Dropbox, or Google Drive." But it's not safe there: once this media is uploaded to social media platforms, we lose control of it. From March 2011 until I founded the Syrian Archive in 2014, footage got deleted on a very large scale, and it still is to this day because of social media platforms' content moderation policies. It got worse after 2017, when social media companies like YouTube started to use machine learning to automatically detect content that shows violence.

Frischen: Why do you think the materials get removed from social media platforms?

al-Khatib: Because the machine learning algorithm they have developed doesn't really differentiate between a video that shows extremist or graphic content and a video that documents a human rights violation. They all get detected automatically and removed.

Frischen: Though it's well intended, machine learning can't handle the complexity?

al-Khatib: Exactly. The use of machine learning is very dangerous for human rights documentation, not just in Syria, but around the world. Social media platforms would need to invest more in human intelligence, not just machine intelligence, to make sound decisions.

Frischen: The Syrian Archive, one of the organizations you founded, has archived over 3.5 million records of digital content. How does that work in practice? How do you balance machine learning and manual work?

al-Khatib: The first step is to monitor specific sources, locations, and keywords around current or historical events. Once we discover content, we make sure that we preserve it automatically, as fast as possible. This is always our priority. Each of the 3.5 million records we have collected comes from social media platforms, websites, or apps like Telegram. We archive them all in a way that provides availability, accessibility, and authenticity for these records. We use machine learning, with the project VFRAME, to help us discover what we have in the archives that is most relevant for human rights investigations, journalism reporting, or legal case building within this large pool of media. Then, we manually verify the location, date, and time. We also verify any kind of objects we can see in the video, and make sure we are able to link it with other pieces of archived media and corroborate it with other types of evidence, to construct a verified incident. We also use blockchain to timestamp the materials, with a third-party company called Enigio. We want to provide long-term, safe accessibility to the documents, and authenticate them in a way that proves we haven't tampered with the material during the archival process.
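
The authenticity step al-Khatib describes rests on content hashing: if you record a cryptographic digest of the media at collection time, anyone can later re-hash the file and prove it was not altered. The sketch below is a generic pattern, not Mnemonic's actual pipeline (which anchors hashes via a blockchain timestamping service); the function names and record fields are hypothetical.

```python
import hashlib
import time

def archive_record(media_bytes, source_url, collected_by):
    """Build an archival record whose SHA-256 digest can later prove
    the media was not modified after collection."""
    return {
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
        "source_url": source_url,
        "collected_by": collected_by,
        "collected_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    }

def verify_record(media_bytes, record):
    """Re-hash the media and compare it with the stored digest."""
    return hashlib.sha256(media_bytes).hexdigest() == record["sha256"]
```

In a real archive the digest itself would also be timestamped by a third party, so the archivist cannot be accused of back-dating or swapping records.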

Frischen: Machine learning is great for analyzing large data sets, but then human judgment and a deep knowledge of history, politics, and the region must be brought to bear?

al-Khatib: Exactly. Knowledge of context, language, and history is vital for verification. This is all a manual process where researchers use certain tools and techniques to verify the location, date, and time of every record, and make sure that it's clustered together into incidents. Those incidents are also clustered together into collections to form a bigger-picture understanding of the pattern of violence and the impact it has on people.

Frischen: These findings can in turn be leveraged: You feed the results of your investigations to governments and prosecutors. What has the impact been?

al-Khatib: We realize that any legal accountability is going to take a long time. One of the main legal cases we are working on right now is about the use of chemical weapons in Syria. We focus on two incidents in two locations in Syria, in Eastern Ghouta (2013), and in Khan Sheikhoun (2017), where we saw the biggest uses of chemical weapons (i.e. Sarin gas) in recent history. We submitted a legal complaint to the German, French and Swedish prosecutors in collaboration with the Syrian Center for Media and Freedom of Expression, Civil Rights Defenders, and the Open Society Justice Initiative. Part of that submission was media evidence verified and collected by the Syrian Archive. Our investigations into the Syrian chemical supply chain resulted in the conviction of three Belgian firms who violated European Union sanctions, an internal audit of the Belgian customs system, parliamentary inquiries in multiple countries, a change in Swiss export laws to reflect European Union sanctions laws on specific chemicals, and the filing of complaints urging the governments of Germany and Belgium to initiate investigations into additional shipments to Syria.

Frischen: Wow. Let me come back to the automated content removal on social media platforms. When this happens, i.e., when pieces of evidence of atrocities by the government are deleted, does this then open up windows of opportunity for actors like the Syrian government to flood social media with other, positive images, and thus take over newsfeeds?

al-Khatib: Yes, absolutely. Over the last 10 years, we've seen this kind of information propaganda coming from all sides of the conflict in Syria. And our role within this information environment is to counter disinformation by archiving, collecting, and verifying visual materials to reconstruct what really happened and to make sure that this reconstruction is based on facts. And we are doing this transparently, so anyone can see our methodology and the tools we are using.

Frischen: How are the big social media companies responding? Do you see them as collaborative or as distant?

al-Khatib: Many civil society organizations from around the world have been engaging with social media companies and asking them to invest more resources into this issue. So far, nothing has changed. The use of machine learning is still happening. A huge amount of content related to human rights documentation is still being removed. But there has absolutely been engagement and collaboration throughout the years, especially since 2017. We worked with YouTube, for example, to reinstate some of the channels that were removed, as well as thousands of videos that were published by credible human rights and media organizations in Syria. But unfortunately, a big part of this documentation is still being removed. The Facebook Leaks reveal the company knew about this problem, but they are continuing to use machine learning, erasing the history and memory of people around the world.

Frischen: How do you attend to the wellbeing of the humans involved in gathering and triaging violent and traumatic content?

al-Khatib: This is a very important question. We need to make sure there is a system of support for all researchers looking at this content: practical assistance from psychologists who understand all the challenges and can mitigate some of them. We are setting up protocols, so the researchers have access to experts. There are also some technical efforts underway. For example, we work with machine learning to blur images at the beginning, so researchers are not seeing graphic images directly on their screen. This is something that we want to do more work on.

Frischen: What gives you hope?

al-Khatib: The will of people who are facing the violence firsthand, and the families of victims. Whether in Syria or other countries, they did not yet get the accountability they deserve, but regardless, they are asking for it, fighting for it. This is what gives me hope working together with them, adding value by linking documentation to justice and accountability, and using this process to reconstruct the future of the country again.

Hadi al-Khatib (@Hadi_alkhatib) is the founder of Syrian Archive and its umbrella organization Mnemonic.

This conversation was condensed and edited. Watch the full conversation & browse more insights on Tech & Humanity.

METiS Therapeutics Launches With $86 Million Series A Financing to Transform Drug Discovery and Delivery With Machine Learning and Artificial…

Dec. 7, 2021 11:00 UTC

CAMBRIDGE, Mass.--(BUSINESS WIRE)-- METiS Therapeutics debuts today with an $86 million Series A financing to harness artificial intelligence (AI) and machine learning to redefine drug discovery and delivery and develop optimal therapies for patients with serious diseases. PICC PE and China Life led the financing and were joined by Sequoia Capital China, Lightspeed, 5Y Capital, FreeS Fund and CMBI Zhaoxin Wuji Fund. The financing will be used to advance the company's pipeline of novel assets with high therapeutic potential and the continued development of its AI-driven drug discovery and delivery platform.

"METiS is well-positioned to change the drug discovery and delivery landscape with the creation of a proprietary predictive AI platform. We leverage machine learning, AI, and quantum simulation to uncover novel drug candidates and to transform drug discovery and development, ultimately bringing the best therapies to patients in need," said Chris Lai, CEO and Founder, METiS Therapeutics. "We are fortunate that our world-class roster of investors believes in our vision, and today's news represents the first of many significant milestones that we will be accomplishing throughout the next year."

The METiS platform (AiTEM) combines state-of-the-art AI data-driven algorithms, mechanism-driven quantum mechanics and molecular dynamics simulations to calculate Active Pharmaceutical Ingredient (API) properties, elucidate API-target and API-excipient interactions, and predict chemical, physical and pharmacokinetic properties of small molecule and nucleic acid therapeutics in specific microenvironments. This enables efficient lead optimization, candidate selection and formulation design. Founded by a team of MIT researchers, serial entrepreneurs and biotech industry veterans, METiS develops and in-licenses novel assets with high therapeutic potential that could benefit from its data-driven platform.

About METiS Therapeutics METiS Therapeutics is a biotechnology company that aims to drive best-in-class therapies in a wide range of disease areas by integrating drug discovery and delivery with AI, machine learning, and quantum simulation. To learn more, visit http://www.metistx.com/.

3 Applications of Machine Learning and AI in Finance – TAPinto.net

Thanks to advanced technology, consumers can now access, spend, and invest their money in safer ways. Lenders looking to win new business should apply technology to make processes faster and more efficient.

Artificial intelligence has transformed the way we handle money by giving the financial industry a smarter, more convenient way to meet customer demands.

Machine learning helps financial institutions develop systems that improve user experiences by adjusting parameters automatically. It's become easier to handle the extensive amount of data related to daily financial transactions.

Machine learning and AI are changing how the financial industry does business in these ways:

Fraud Detection

Enhancing fraud detection and cybersecurity is no longer optional. People pay bills, transfer money, trade stocks, and deposit checks through smartphone applications or online accounts.

Many businesses store their information online, increasing the risk of security breaches. Fraud is a major concern for companies that offer financial services--including banks--which lose billions of dollars yearly.

Machine learning and artificial intelligence technologies improve online finance security by scanning data and identifying unique activities. They then highlight these activities for further investigation. This technology can also prevent credential stuffing and credit application fraud.
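
A toy version of "scanning data and identifying unique activities" is simple statistical outlier detection: flag transactions that sit far from a customer's usual spending pattern. Production fraud systems use far richer models; this sketch, with invented amounts and a z-score rule, only illustrates the idea.

```python
import statistics

def flag_unusual(amounts, z_cutoff=3.0):
    """Return indices of transaction amounts lying more than z_cutoff
    standard deviations from the mean of the series."""
    mean = statistics.mean(amounts)
    stdev = statistics.pstdev(amounts)
    if stdev == 0:  # all amounts identical: nothing stands out
        return []
    return [i for i, a in enumerate(amounts)
            if abs(a - mean) / stdev > z_cutoff]
```

A real system would condition on merchant, time of day, and device fingerprint rather than raw amounts, and would route flagged items to human investigators, as the paragraph above describes.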

Cognito, built by a company called Vectra, is cyber-threat detection and hunting software that is impacting the financial space positively. Besides detecting threats automatically, it can expose hidden attackers that target financial institutions and pinpoint compromised information.

Making Credit Decisions

Having good credit can help you rent an apartment of your choice, land a great job, and explore different financing options. Now more than ever, many things depend on your credit history, even qualifying for loans and credit cards.

Lenders and banks now use artificial intelligence to make smarter decisions. They use AI to accurately assess borrowers, simplifying the underwriting process. This saves time and financial resources that would otherwise be spent on manual review.

Data--such as income, age, and credit behavior--can be used to determine if customers qualify for loans or insurance. Machine learning accurately calculates credit scores using several factors, making loan approval quick and easy.
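
The scoring idea can be sketched as a logistic model over a few applicant features. The weights below are invented for illustration and are not a real scorecard; actual underwriting models are trained on historical repayment data and many more features.

```python
import math

def default_probability(income_k, age, late_payments, weights=None):
    """Toy logistic credit model: map a few features to a probability
    of default. All weights are illustrative, not calibrated."""
    if weights is None:
        weights = {"bias": -1.0, "income_k": -0.02, "age": -0.01, "late": 0.8}
    z = (weights["bias"]
         + weights["income_k"] * income_k   # higher income lowers risk
         + weights["age"] * age             # longer history lowers risk
         + weights["late"] * late_payments) # late payments raise risk
    return 1.0 / (1.0 + math.exp(-z))      # squash score into (0, 1)
```

A lender would then approve applicants whose predicted default probability falls below a chosen threshold, which is exactly where the speed-up over manual review comes from.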

AI software like ZestFinance is helping lenders here. Its automated machine learning platform (ZAML) works with companies to assess borrowers who have no credit history and little to no credit information. The transparent platform helps lenders better evaluate borrowers who are considered high risk.

Algorithmic Trading

Many businesses depend on accurate forecasts for their continued existence. In the finance industry, time is money. Financial markets are now using machine learning to develop faster, more exact mathematical models. These are better at identifying risks, showing trends, and providing advanced information in real time.

Financial institutions and hedge fund managers are applying artificial intelligence in quantitative or algorithmic trading. This trading captures patterns from large data sets to identify factors that may cause security prices to rise or fall, making trading strategic.
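
A classic, minimal example of pattern-driven trading logic is a moving-average crossover: buy when a short-window average of prices crosses above a long-window one, sell on the reverse cross. This sketch is illustrative only, with invented prices; it is not investment advice or any firm's actual strategy.

```python
def sma(series, window):
    """Simple moving average; output i ends at series[i + window - 1]."""
    return [sum(series[i:i + window]) / window
            for i in range(len(series) - window + 1)]

def crossover_signal(prices, fast=3, slow=5):
    """Emit (index, 'buy'/'sell') whenever the fast average crosses the
    slow one. Indices are relative to the aligned slow-average series."""
    f = sma(prices, fast)[slow - fast:]  # align both averages on end dates
    s = sma(prices, slow)
    signals = []
    for i in range(1, len(s)):
        if f[i - 1] <= s[i - 1] and f[i] > s[i]:
            signals.append((i, "buy"))
        elif f[i - 1] >= s[i - 1] and f[i] < s[i]:
            signals.append((i, "sell"))
    return signals
```

Real quantitative systems replace the two averages with learned models over large feature sets, but the structure is the same: turn a detected pattern into a trading decision.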

Tools like Kavout combine quantitative analysis with machine learning to simultaneously process large, complex, unstructured data faster and more efficiently. The Kai Score ranks stocks using AI-generated numbers; a higher Kai Score means the stock is likely to outperform the market.

Online lenders and other financial institutions can now streamline processes thanks to faster, more efficient tools. Consumers no longer have to worry about unnecessary delays and the safety of their transactions.

About The Author:

Aqib Ijaz is a content writing guru at Eyes on Solution. He is adept in IT as well. He loves to write on different topics. In his free time, he likes to travel and explore different parts of the world.

Machine learning security vulnerabilities are a growing threat to the web, report highlights – The Daily Swig

Security industry needs to tackle nascent AI threats before it's too late

As machine learning (ML) systems become a staple of everyday life, the security threats they entail will spill over into all kinds of applications we use, according to a new report.

Unlike traditional software, where flaws in design and source code account for most security issues, in AI systems, vulnerabilities can exist in images, audio files, text, and other data used to train and run machine learning models.

This is according to researchers from Adversa, a Tel Aviv-based start-up that focuses on security for artificial intelligence (AI) systems, who outlined their latest findings in their report, The Road to Secure and Trusted AI, this month.

"This makes it more difficult to filter, handle, and detect malicious inputs and interactions," the report warns, adding that threat actors will eventually weaponize AI for malicious purposes.

Unfortunately, the AI industry hasn't even begun to solve these challenges yet, jeopardizing the security of already deployed and future AI systems.

There's already a body of research showing that many machine learning systems are vulnerable to adversarial attacks: imperceptible manipulations that cause models to behave erratically.

According to the researchers at Adversa, machine learning systems that process visual data account for most of the work on adversarial attacks, followed by analytics, language processing, and autonomy.

Machine learning systems have a distinct attack surface

"With the growth of AI, cyberattacks will focus on fooling new visual and conversational interfaces," the researchers write.

"Additionally, as AI systems rely on their own learning and decision-making, cybercriminals will shift their attention from traditional software workflows to algorithms powering analytical and autonomy capabilities of AI systems."

Web developers who are integrating machine learning models into their applications should take note of these security issues, warned Alex Polyakov, co-founder and CEO of Adversa.

"There is definitely a big difference in so-called digital and physical attacks. Now, it is much easier to perform digital attacks against web applications: sometimes changing only one pixel is enough to cause a misclassification," Polyakov told The Daily Swig, adding that attacks against ML systems in the physical world have more stringent demands and require much more time and knowledge.
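
Polyakov's one-pixel observation has a simple analogue for a linear model: nudging the single most influential input feature can be enough to flip the prediction. The model, weights, and perturbation budget below are invented for illustration; real attacks on deep networks use gradient-based methods over many pixels.

```python
def predict(weights, bias, x):
    """Linear classifier: class 1 if the weighted score is positive."""
    score = bias + sum(w * xi for w, xi in zip(weights, x))
    return 1 if score > 0 else 0

def one_feature_attack(weights, bias, x, budget):
    """Perturb only the most influential feature by +/- budget,
    pushing it in the direction that lowers the current class score."""
    i = max(range(len(weights)), key=lambda j: abs(weights[j]))
    x_adv = list(x)
    direction = -1 if predict(weights, bias, x) == 1 else 1
    x_adv[i] += direction * budget * (1 if weights[i] > 0 else -1)
    return x_adv
```

The point mirrors the article: in the digital setting the attacker controls the exact input values, so a tiny, targeted change suffices, whereas physical-world attacks must survive cameras, lighting, and angles.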

Polyakov also warned about vulnerabilities in machine learning models served over the web such as API services provided by large tech companies.

"Most of the models we saw online are vulnerable, and it has been proven by several research reports as well as by our internal tests," Polyakov said. "With some tricks, it is possible to train an attack on one model and then transfer it to another model without knowing any special details of it."

"Also, you can perform a CopyCat attack to steal a model, apply the attack on it, and then use this attack on the API."

Most machine learning algorithms require large sets of labeled data to train models. In many cases, instead of going through the effort of creating their own datasets, machine learning developers search and download datasets published on GitHub, Kaggle, or other web platforms.

Eugene Neelou, co-founder and CTO of Adversa, warned about potential vulnerabilities in these datasets that can lead to data poisoning attacks.

"Poisoning data with maliciously crafted data samples may make AI models learn those data entries during training, thus learning malicious triggers," Neelou told The Daily Swig. "The model will behave as intended in normal conditions, but malicious actors may call those hidden triggers during attacks."

Neelou also warned about trojan attacks, where adversaries distribute contaminated models on web platforms.

"Instead of poisoning data, attackers have control over the AI model's internal parameters," Neelou said. "They could train/customize and distribute their infected models via GitHub or model platforms/marketplaces."

Unfortunately, GitHub and other platforms don't yet have any safeguards in place to detect and defend against data poisoning schemes. This makes it very easy for attackers to spread contaminated datasets and models across the web.
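
Absent platform-level safeguards, one minimal defense a developer can apply today is verifying a downloaded dataset against a checksum published by its author. This detects swapped or tampered copies in transit, though not poisoning at the original source. The sketch below is a generic pattern, not a feature of GitHub or Kaggle.

```python
import hashlib

def dataset_is_untampered(data, published_checksum):
    """Compare the SHA-256 digest of downloaded dataset bytes against a
    checksum the dataset's author published out of band."""
    return hashlib.sha256(data).hexdigest() == published_checksum
```

Training code would refuse to proceed when the check fails, narrowing the window in which a contaminated copy can slip into a pipeline unnoticed.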

Attacks against machine learning and AI systems are set to increase over the coming years

Neelou warned that while AI is extensively used across myriad organizations, there are no efficient AI defenses.

He also raised concern that under currently established roles and procedures, no one is responsible for AI/ML security.

"AI security is fundamentally different from traditional computer security, so it falls under the radar for cybersecurity teams," he said. "It's also often out of scope for practitioners involved in responsible/ethical AI, and regular AI engineering hasn't solved MLOps and QA testing yet."

On the bright side, Polyakov said that adversarial attacks can also be used for good. Adversa recently helped one of its clients use adversarial manipulations to develop web CAPTCHA queries that are resilient against bot attacks.

"The technology itself is a double-edged sword and can serve both good and bad," he said.

Adversa is one of several organizations involved in dealing with the emerging threats of machine learning systems.

Last year, in a joint effort, several major tech companies released the Adversarial ML Threat Matrix, a set of practices and procedures meant to secure the machine learning training and delivery pipeline in different settings.

Apple will focus on machine learning, AI jobs in new NC campus – VentureBeat

(Reuters) Apple on Monday said it will establish a new campus in North Carolina that will house up to 3,000 employees, expand its operations in several other U.S. states and increase its spending targets with U.S. suppliers.

Apple said it plans to spend $1 billion as it builds a new campus and engineering hub in the Research Triangle area of North Carolina, with most of the jobs expected to focus on machine learning, artificial intelligence, software engineering and other technology fields. It joins a $1 billion Austin, Texas campus announced in 2019.

North Carolina's Economic Investment Committee on Monday approved a job-development grant that could provide Apple as much as $845.8 million in tax reimbursements over 39 years if Apple hits job and growth targets. State officials said the 3,000 jobs are expected to create $1.97 billion in new tax revenues for the state over the grant period.

The iPhone maker said it would also establish a $100 million fund to support schools in the Raleigh-Durham area of North Carolina and throughout the state, as well as contribute $110 million to help build infrastructure such as broadband internet, roads, bridges and public schools in 80 North Carolina counties.

"As a North Carolina native, I'm thrilled Apple is expanding and creating new long-term job opportunities in the community I grew up in," Jeff Williams, Apple's chief operating officer, said in a statement.

"We're proud that this new investment will also be supporting education and critical infrastructure projects across the state."

Apple also said it expanded hiring targets at other U.S. locations to hit a goal of 20,000 additional jobs by 2026, setting new goals for facilities in Colorado, Massachusetts, and Washington state.

In Apple's home state of California, the company said it will aim to hire 5,000 people in San Diego and 3,000 people in Culver City in the Los Angeles area.

Apple also increased a U.S. spending target to $430 billion by 2026, up from a five-year goal of $350 billion that Apple set in 2018 and said it was on track to exceed.

The target includes Apple's U.S. data centers, capital expenditures, and spending to create original television content in 20 states. It also includes spending with Apple's U.S.-headquartered suppliers, though Apple has not said whether it applies only to goods made in those suppliers' U.S. facilities.

Using AI and Machine Learning will increase in horti industry – hortidaily.com

The expectation is that in 2021, artificial intelligence and machine learning technologies will continue to become more mainstream. Businesses that haven't traditionally viewed themselves as candidates for AI applications will embrace these technologies.

A great story of machine learning being used in an industry that is not known for its technology investments is that of Makoto Koike. Using Google's TensorFlow, Makoto initially developed a cucumber sorting system using pictures that he took of the cucumbers. With that small step, a machine learning cucumber sorting system was born.

Getting started with AI and machine learning is becoming increasingly accessible for organizations of all sizes. Technology-as-a-service companies including Microsoft, AWS and Google all have offerings that will get most organizations started on their AI and machine learning journeys. These technologies can be used to automate and streamline manual business processes that have historically been resource-intensive.

An article on forbes.com claims that, as business leaders continue to refine their processes to support the new normal of the Covid-19 pandemic, they should be considering where these technologies might help reduce manual, resource-intensive or paper-based processes. Any manual process should be fair game for review for automation possibilities.

Photo source: Dreamstime.com

New Machine Learning Theory Raises Questions About the Very Nature of Science – SciTechDaily

A novel computer algorithm, or set of rules, that accurately predicts the orbits of planets in the solar system could be adapted to better predict and control the behavior of the plasma that fuels fusion facilities designed to harvest on Earth the fusion energy that powers the sun and stars.

The algorithm, devised by a scientist at the U.S. Department of Energy's (DOE) Princeton Plasma Physics Laboratory (PPPL), applies machine learning, the form of artificial intelligence (AI) that learns from experience, to develop the predictions. "Usually in physics, you make observations, create a theory based on those observations, and then use that theory to predict new observations," said PPPL physicist Hong Qin, author of a paper detailing the concept in Scientific Reports. "What I'm doing is replacing this process with a type of black box that can produce accurate predictions without using a traditional theory or law."

Qin (pronounced Chin) created a computer program into which he fed data from past observations of the orbits of Mercury, Venus, Earth, Mars, Jupiter, and the dwarf planet Ceres. This program, along with an additional program known as a serving algorithm, then made accurate predictions of the orbits of other planets in the solar system without using Newton's laws of motion and gravitation. "Essentially, I bypassed all the fundamental ingredients of physics. I go directly from data to data," Qin said. "There is no law of physics in the middle."
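The data-to-data idea can be caricatured with a toy, stdlib-only sketch (our own invented example, not Qin's discrete field theory): sample the x-coordinate of a circular orbit, fit a two-step linear predictor to the samples by least squares, then roll the learned rule forward, predicting the orbit without ever writing down Newton's laws.

```python
import math

# Toy "orbit" data: the sampled x-coordinate of a body on a circular
# orbit. Any sampled sinusoid satisfies x[t+1] = 2*cos(w)*x[t] - x[t-1]
# exactly, so a two-step linear predictor is learnable from data alone.
w = 0.1  # angular step per sample (an arbitrary toy value)
data = [math.cos(w * t) for t in range(200)]

# Least-squares fit of x[t+1] ~ a*x[t] + b*x[t-1] on the first half,
# solving the 2x2 normal equations directly. No physics is used.
Saa = Sab = Sbb = Sya = Syb = 0.0
for t in range(1, 100):
    xa, xb, y = data[t], data[t - 1], data[t + 1]
    Saa += xa * xa; Sab += xa * xb; Sbb += xb * xb
    Sya += y * xa;  Syb += y * xb
det = Saa * Sbb - Sab * Sab
a = (Sya * Sbb - Syb * Sab) / det
b = (Syb * Saa - Sya * Sab) / det

# Roll the learned rule forward from two seed points; compare to truth.
pred = [data[100], data[101]]
for t in range(102, 200):
    pred.append(a * pred[-1] + b * pred[-2])
err = max(abs(p - d) for p, d in zip(pred, data[100:]))
print(f"learned a={a:.4f} (exact: {2 * math.cos(w):.4f}), max error={err:.1e}")
```

The fit recovers the recurrence coefficients from observations alone, and the rollout stays on the true orbit, which is the "no law of physics in the middle" point in miniature.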

PPPL physicist Hong Qin in front of images of planetary orbits and computer code. Credit: Elle Starkman / PPPL Office of Communications

The program does not happen upon accurate predictions by accident. "Hong taught the program the underlying principle used by nature to determine the dynamics of any physical system," said Joshua Burby, a physicist at the DOE's Los Alamos National Laboratory who earned his Ph.D. at Princeton under Qin's mentorship. "The payoff is that the network learns the laws of planetary motion after witnessing very few training examples. In other words, his code really learns the laws of physics."

Machine learning is what makes computer programs like Google Translate possible. Google Translate sifts through a vast amount of information to determine how frequently one word in one language has been translated into a word in the other language. In this way, the program can make an accurate translation without actually learning either language.
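The frequency idea can be shown with a toy aligned corpus (a four-sentence invented example, nothing like the scale or sophistication of real Translate data): count how often words co-occur across sentence pairs and pick the most distinctive match.

```python
from collections import Counter, defaultdict

# Tiny aligned corpus (hypothetical toy pairs, not real Translate data).
corpus = [
    ("the cat sleeps", "le chat dort"),
    ("the dog sleeps", "le chien dort"),
    ("the cat eats", "le chat mange"),
    ("the dog eats", "le chien mange"),
]

# Count co-occurrences of each source word with each target word,
# plus overall target-word frequencies.
cooc = defaultdict(Counter)
target_freq = Counter()
for src, tgt in corpus:
    for t in tgt.split():
        target_freq[t] += 1
    for s in src.split():
        for t in tgt.split():
            cooc[s][t] += 1

def translate(word):
    # Normalise by overall frequency so common filler words such as
    # "le" do not win on raw co-occurrence counts alone.
    return max(cooc[word], key=lambda t: cooc[word][t] / target_freq[t])

print(translate("cat"), translate("sleeps"))  # prints: chat dort
```

The program never "learns French"; it only exploits statistical regularities, which is the point of the analogy.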

The process also appears in philosophical thought experiments like John Searle's Chinese Room. In that scenario, a person who did not know Chinese could nevertheless translate a Chinese sentence into English or any other language by using a set of instructions, or rules, that would substitute for understanding. The thought experiment raises questions about what, at root, it means to understand anything at all, and whether understanding implies that something else is happening in the mind besides following rules.

Qin was inspired in part by Oxford philosopher Nick Bostrom's thought experiment proposing that the universe is a computer simulation. If that were true, then fundamental physical laws should reveal that the universe consists of individual chunks of space-time, like pixels in a video game. "If we live in a simulation, our world has to be discrete," Qin said. The black box technique Qin devised does not require that physicists believe the simulation conjecture literally, though it builds on this idea to create a program that makes accurate physical predictions.

The resulting pixelated view of the world, akin to what is portrayed in the movie The Matrix, is known as a discrete field theory, which views the universe as composed of individual bits and differs from the theories that people normally create. While scientists typically devise overarching concepts of how the physical world behaves, computers just assemble a collection of data points.

Qin and Eric Palmerduca, a graduate student in the Princeton University Program in Plasma Physics, are now developing ways to use discrete field theories to predict the behavior of particles of plasma in fusion experiments conducted by scientists around the world. The most widely used fusion facilities are doughnut-shaped tokamaks that confine the plasma in powerful magnetic fields.

Fusion, the power that drives the sun and stars, combines light elements in the form of plasma (the hot, charged state of matter composed of free electrons and atomic nuclei that represents 99% of the visible universe) to generate massive amounts of energy. Scientists are seeking to replicate fusion on Earth for a virtually inexhaustible supply of power to generate electricity.

"In a magnetic fusion device, the dynamics of plasmas are complex and multi-scale, and the effective governing laws or computational models for a particular physical process that we are interested in are not always clear," Qin said. "In these scenarios, we can apply the machine learning technique that I developed to create a discrete field theory and then apply this discrete field theory to understand and predict new experimental observations."

This process opens up questions about the nature of science itself. Don't scientists want to develop physics theories that explain the world, instead of simply amassing data? Aren't theories fundamental to physics and necessary to explain and understand phenomena?

"I would argue that the ultimate goal of any scientist is prediction," Qin said. "You might not necessarily need a law. For example, if I can perfectly predict a planetary orbit, I don't need to know Newton's laws of gravitation and motion. You could argue that by doing so you would understand less than if you knew Newton's laws. In a sense, that is correct. But from a practical point of view, making accurate predictions is not doing anything less."

Machine learning could also open up possibilities for more research. "It significantly broadens the scope of problems that you can tackle because all you need to get going is data," Palmerduca said.

The technique could also lead to the development of a traditional physical theory. "While in some sense this method precludes the need of such a theory, it can also be viewed as a path toward one," Palmerduca said. "When you're trying to deduce a theory, you'd like to have as much data at your disposal as possible. If you're given some data, you can use machine learning to fill in gaps in that data or otherwise expand the data set."

Reference: "Machine learning and serving of discrete field theories" by Hong Qin, 9 November 2020, Scientific Reports. DOI: 10.1038/s41598-020-76301-0


The head of JPMorgan’s machine learning platform explained what it’s like to work there – eFinancialCareers

For the past few years, JPMorgan has been busy building out its machine learning capability under Daryush Laqab, its San Francisco-based head of AI platform product management, who was hired from Google in 2019. Last time we looked, the bank seemed to be paying salaries of $160-$170k to new joiners on Laqab's team.

If that sounds appealing, you might want to watch the video below so that you know what you're getting into. Recorded at the AWS re:Invent conference in December, it's only just made it to YouTube. The video is flagged as a day in the life of JPMorgan's machine learning data scientists, but Laqab arguably does a better job of highlighting some of the constraints data professionals at all banks have to work under.

"There are some barriers to smooth data science at JPMorgan," he explains - a bank is not the same as a large technology firm.

For example, data scientists at JPMorgan have to check that data is authorized for use, says Laqab: "They need to go through a process to log that use and make sure that they have the adequate approvals for that intent in terms of use."

They also have to deal with the legacy infrastructure issue: "We are a large organization, we have a lot of legacy infrastructure," says Laqab. "Like any other legacy infrastructure, it is built over time, it is patched over time. These are tightly integrated, so moving part or all of that infrastructure to the public cloud, or replacing rule-based engines with AI/ML-based engines, all of that takes time and brings inertia to the innovation."

JPMorgan's size and complexity are another source of inertia, as multiple business lines in multiple regulated entities in different regulated environments need to be considered. "Making sure that those regulatory obligations are taken care of, again, slows down data science at times," says Laqab.

And then there are more specific regulations, such as those concerning model governance. At JPMorgan, a machine learning model can't go straight into a production environment. "It needs to go through a model review and a model governance process," says Laqab, "to make sure we have another set of eyes that looks at how that model was created, how that model was developed..." And then there are software governance issues too.

Despite all these hindrances, JPMorgan has already productionized AI models and built an 'Omni AI ecosystem,' which Laqab heads, to help employees identify and ingest minimum viable data so that they can build models faster. Laqab says the bank saved $150m in expenses in 2019 as a result. JPMorgan's AI researchers are now working on everything from FAQ bots and chat bots, to NLP search models for the bank's own content, pattern recognition in equities markets and email processing. The breadth of work on offer is considerable. "We play in every market that is out there," says Laqab.

The bank has also learned that the best way to structure its AI team is to split people into data scientists, who train and create models, and machine learning engineers, who operationalize models, says Laqab. Before you apply, you might want to consider which you'd rather be.




Machine Learning in Medicine Market 2021 to Perceive Biggest Trend and Opportunity by 2028 KSU | The Sentinel Newspaper – KSU | The Sentinel…

The Machine Learning in Medicine Market Comprehensive Study is an expert, in-depth investigation of the current state of the worldwide Machine Learning in Medicine industry, with a focus on the global market. The report gives key statistics on the market status of Machine Learning in Medicine producers and is an important source of guidance and direction for organizations and individuals interested in the business. Overall, the report provides an in-depth understanding of the 2021-2028 worldwide Machine Learning in Medicine Market, covering all significant parameters.

Free Sample Report @:

https://www.marketresearchinc.com/request-sample.php?id=28540

Key Players in This Report Include: Google, Bio Beats, Jvion, Lumiata, DreaMed, Healint, Arterys, Atomwise, Health Fidelity, Ginger

(Market Size & Forecast, Different Demand Market by Region, Main Consumer Profile, etc.) Brief Summary of Machine Learning in Medicine:

Machine Learning in Medicine helps in avoiding delays in processing, turn-around time, and redundant operational costs. It is efficient in the management of entire claim administrative processes, such as adjudication, pricing, authorizations, and analytics. It provides real-time claim processing with no wait time for batch processes.

Market Drivers: the rise in the number of patients opting for medical insurance and the increase in premium costs; the surge in the geriatric population with chronic diseases

Market Trend: growth in health insurance claims

Restraints: the high cost linked with Machine Learning in Medicine

The Global Machine Learning in Medicine Market segments and market data break-down are illuminated below: by Type (Integrated Solutions, Standalone Solutions), Application (Healthcare Payers, Healthcare Providers, Other), Delivery Mode (On-Premise, Cloud-Based), and Component (Software, Services).

This research report represents a 360-degree overview of the competitive landscape of the Global Machine Learning in Medicine Market. Furthermore, it offers massive data relating to recent trends, technological advancements, tools, and methodologies. The research report analyzes the Global Machine Learning in Medicine Market in a detailed and concise manner for better insights into the businesses.

Regions Covered in the Machine Learning in Medicine Market: The Middle East and Africa (South Africa, Saudi Arabia, UAE, Israel, Egypt, etc.), North America (United States, Mexico, and Canada), South America (Brazil, Venezuela, Argentina, Ecuador, Peru, Colombia, etc.), Europe (Turkey, Spain, Netherlands, Denmark, Belgium, Switzerland, Germany, Russia, UK, Italy, France, etc.), and Asia-Pacific (Taiwan, Hong Kong, Singapore, Vietnam, China, Malaysia, Japan, Philippines, Korea, Thailand, India, Indonesia, and Australia).

Get Up to 40% Discount on the Report @ https://www.marketresearchinc.com/ask-for-discount.php?id=28540

The research study has taken the help of graphical presentation techniques such as infographics, charts, tables, and pictures. It provides guidelines for both established players and new entrants in the Global Machine Learning in Medicine Market.

The detailed elaboration of the Global Machine Learning in Medicine Market has been provided by applying industry analysis techniques such as SWOT and Porter's Five Forces analysis. Collectively, this research report offers a reliable evaluation of the global market to present the overall framework of businesses.

Attractions of the Machine Learning in Medicine Market Report: the report provides granular-level information about the market size, regional market share, historic market (2014-2018) and forecast (2021-2028); it covers in-detail insights about the competitor overview, company share analysis, key market developments, and their key strategies; it outlines the drivers, restraints, unmet needs, and trends that are currently affecting the market; it tracks recent innovations, key developments, and details of start-ups that are actively working in the market; and it provides a plethora of information about market entry strategies, the regulatory framework, and the reimbursement scenario.

Enquire for customization in Report @https://www.marketresearchinc.com/enquiry-before-buying.php?id=28540

Key Points Covered in the Table of Content:
Chapter 1 explains the introduction, market review, market risk and opportunities, market driving force, and product scope of the Machine Learning in Medicine Market;
Chapter 2 inspects the leading manufacturers (cost structure, raw material) with sales analysis, revenue analysis, and price analysis of the Machine Learning in Medicine Market;
Chapter 3 shows the competitive situation among the top producers, with sales, revenue, and Machine Learning in Medicine market share in 2021;
Chapter 4 displays the regional analysis of the Global Machine Learning in Medicine Market, with revenue and sales by industry, from 2021 to 2028;
Chapters 5, 6, and 7 analyze the key countries (United States, China, Europe, Japan, Korea & Taiwan), with sales, revenue, and market share in key regions;
Chapters 8 and 9 exhibit international and regional marketing type analysis, supply chain analysis, and trade type analysis;
Chapters 10 and 11 analyze the market by product type and application/end users (industry sales, share, and growth rate) from 2021 to 2028;
Chapter 12 shows the Machine Learning in Medicine Market forecast by region, type, and application, with revenue and sales, from 2021 to 2028;
Chapters 13, 14, and 15 specify the research findings and conclusion, appendix, methodology, and data sources for Machine Learning in Medicine market buyers, merchants, dealers, and sales channels.

Browse for Full Report at @:

Machine Learning in Medicine Market research provides answers to the following key questions: What is the expected growth rate of the Machine Learning in Medicine Market? What will be the market size for the forecast period, 2021-2028? What are the main driving forces responsible for changing the market trajectory? Who are the big suppliers that dominate the market across different regions, and what are their wins to stay ahead in the competition? What are the market trends business owners can rely upon in the coming years? What are the threats and challenges expected to restrict the progress of the market across different countries?

About Us

Market Research Inc is farsighted in its view and covers massive ground in global research. Local or global, we keep a close check on both markets. Trends and concurrent assessments sometimes overlap and influence each other. When we say market intelligence, we mean a deep and well-informed insight into your products, market, marketing, competitors, and customers. Market research companies are leading the way in nurturing global thought leadership. We help your product/service become the best it can be with our informed approach.

Contact Us

Market Research Inc

Kevin

51 Yerba Buena Lane, Ground Suite,

Inner Sunset San Francisco, CA 94103, USA

Call Us: +1 (628) 225-1818

Write to Us: sales@marketresearchinc.com

https://www.marketresearchinc.com


5 Ways the IoT and Machine Learning Improve Operations – BOSS Magazine


By Emily Newton

The Internet of Things (IoT) and machine learning are two of the most disruptive technologies in business today. Separately, both of these innovations can bring remarkable benefits to any company. Together, they can transform your business entirely.

The intersection of IoT devices and machine learning is a natural progression. Machine learning needs large pools of relevant data to work at its best, and the IoT can supply it. As adoption of both soars, companies should start using them in conjunction.

Here are five ways the IoT and machine learning can improve operations in any business.

Around 25% of businesses today use IoT devices, and this figure will keep climbing. As companies implement more of these sensors, they add places where they can gather data. Machine learning algorithms can then analyze this data to find inefficiencies in the workplace.

Looking at various workplace data, a machine learning program could see where a company spends an unusually high amount of time. It could then suggest a new workflow that would reduce the effort employees expend in that area. Business leaders may not have ever realized this was a problem area without machine learning.

Machine learning programs are skilled at making connections between data points that humans may miss. They can also make predictions 20 times earlier than traditional tools and do so with more accuracy. With IoT devices feeding them more data, they'll only become faster and more accurate.
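As a small, hypothetical illustration of flagging inefficiencies in sensor data, a robust median/MAD outlier test over invented task-duration readings picks out the slow tasks without any hand-set baseline:

```python
import statistics

# Hypothetical minutes-per-task readings from an IoT-instrumented
# workflow (invented numbers; two tasks take roughly twice as long).
readings = [12.1, 11.8, 12.4, 12.0, 11.9, 12.2, 25.7, 12.3, 11.7, 26.1, 12.0]

# Median/MAD is less distorted by the outliers themselves than a
# mean/stdev z-score would be.
med = statistics.median(readings)
mad = statistics.median(abs(r - med) for r in readings)

def is_inefficient(r, threshold=3.5):
    # 0.6745 rescales the MAD to be comparable to a standard deviation.
    return 0.6745 * abs(r - med) / mad > threshold

flagged = [(i, r) for i, r in enumerate(readings) if is_inefficient(r)]
print(flagged)  # the two ~26-minute tasks stand out for human review
```

Real systems would do this continuously over streams of readings, but the core step, learning "normal" from the data and flagging departures from it, is the same.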

Machine learning and the IoT can also automate routine tasks. Business process automation (BPA) leverages AI to handle a range of administrative tasks, so workers don't have to. As IoT devices feed more data into these programs, they become even more effective.

Over time, technology like this has contributed to a 40% productivity increase in some industries. Automating and streamlining tasks like scheduling and record-keeping frees employees to focus on other, value-adding work. BPA's potential doesn't stop there, either.

BPA can automate more than straightforward data manipulation tasks. It can talk to customers, plan and schedule events, run marketing campaigns and more. With more comprehensive IoT implementation, it would have access to more areas, becoming even more versatile.

One of the most promising areas for IoT implementation is in the supply chain. IoT sensors in vehicles or shipping containers can provide companies with critical information like real-time location data or product quality. This data alone improves supply chain visibility, but paired with machine learning, it could transform your business.

Machine learning programs can take this real-time data from IoT sensors and put it into action. It could predict possible disruptions and warn workers so they can respond accordingly. These predictive analytics could save companies the all-too-familiar headache of supply chain delays.

UPS's Orion tool is the gold standard for what machine learning can do for supply chains. The system has saved the shipping giant 10 million gallons of fuel a year by adjusting routes on the fly based on traffic and weather data.

If a company can't understand the vulnerabilities it faces, business leaders can't make fully informed decisions. IoT devices can provide the data businesses need to get a better understanding of these risks. Machine learning can take it a step further and find points of concern in this data that humans could miss.

IoT devices can gather data about the workplace or customers that machine learning programs then process. For example, Progressive has made more than 1.7 trillion observations about its customers' driving habits through Snapshot, an IoT tracking device. These analytics help the company adjust clients' insurance rates based on the dangers their driving presents.

Business risks aren't the only hazards the Internet of Things and machine learning can predict. IoT air quality sensors could alert businesses when to change HVAC filters to protect employee health. Similarly, machine learning cybersecurity programs could sense when hackers are trying to infiltrate a company's network.

Another way the IoT and machine learning could transform your business is by eliminating waste. Data from IoT sensors can reveal where the company could be using more resources than it needs. Machine learning algorithms can then analyze this data to suggest ways to improve.

One of the most common culprits of waste in businesses is energy. Thanks to various inefficiencies, 68% of power in America ends up wasted. IoT sensors can measure where this waste is happening, and with machine learning, adjust to stop it.

Machine learning algorithms in conjunction with IoT devices could restrict energy use, so processes only use what they need. Alternatively, they could suggest new workflows or procedures that would be less wasteful. While many of these steps may seem small, they add up to substantial savings.
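A minimal sketch of that idea, with invented numbers: fit a simple least-squares baseline for how much energy a given output level should consume, then flag hours that run far above it.

```python
# Hypothetical hourly logs from an IoT-metered production line:
# (units_produced, kwh_consumed). All numbers are invented.
logs = [(10, 52), (12, 61), (8, 43), (11, 55), (9, 47), (10, 80), (12, 60)]

# Least-squares line kwh ~= a*units + b: a tiny learned baseline of
# what the process should consume at a given output level.
n = len(logs)
sx = sum(u for u, _ in logs)
sy = sum(k for _, k in logs)
sxx = sum(u * u for u, _ in logs)
sxy = sum(u * k for u, k in logs)
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n

# Hours consuming well above the baseline are flagged as likely waste.
waste = [(i, k - (a * u + b)) for i, (u, k) in enumerate(logs)
         if k - (a * u + b) > 10]
print(waste)  # hour 5 used far more energy than its output justifies
```

A production system would use richer features (occupancy, weather, machine state), but the flag-and-review loop is the same.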

Without the IoT and machine learning, businesses can't reach their full potential. These technologies enable savings companies couldn't achieve otherwise. As they advance, they'll only become more effective.

The Internet of Things and machine learning are reshaping the business world. Those that don't take advantage of them now could soon fall behind.

Emily Newton is the Editor-in-Chief of Revolutionized, a magazine exploring how innovations change our world. She has over 3 years' experience writing articles in the industrial and tech sectors.


Mental health diagnoses and the role of machine learning – Health Europa

It is common for patients with psychosis or depression to experience symptoms of both conditions, which has meant that traditionally, mental health diagnoses have been given for a primary illness with secondary symptoms of the other.

Making an accurate diagnosis often poses difficulties for mental health clinicians, and diagnoses often do not accurately reflect the complexity of individual experience or neurobiology. For example, a patient diagnosed with psychosis will often have depression regarded as a secondary condition, with more focus on the psychosis symptoms, such as hallucinations or delusions; this has implications for treatment decisions for patients.

A team at the University of Birmingham's Institute for Mental Health and Centre for Human Brain Health, along with researchers at the European Union-funded PRONIA consortium, explored the possibility of using machine learning to create extremely accurate models of pure forms of both illnesses and using these models to investigate the diagnostic accuracy of a cohort of patients with mixed symptoms. The results of this study have been published in Schizophrenia Bulletin.

Paris Alexandros Lalousis, lead author, explains that "the majority of patients have co-morbidities, so people with psychosis also have depressive symptoms and vice versa. That presents a big challenge for clinicians in terms of diagnosing and then delivering treatments that are designed for patients without co-morbidity. It's not that patients are misdiagnosed, but the current diagnostic categories we have do not accurately reflect the clinical and neurobiological reality."

The researchers analysed questionnaire responses and detailed clinical interviews, as well as data from structural magnetic resonance imaging from a cohort of 300 patients taking part in the study. From this group of patients, they identified small subgroups of patients, who could be classified as suffering either from psychosis without any symptoms of depression, or from depression without any psychotic symptoms.

With the goal of developing a precise disease profile for each patient and testing it against their diagnosis to see how accurate it was, the research team used the collected data to identify machine learning models of pure depression and pure psychosis. They were then able to use machine learning methods to apply these models to patients with symptoms of both illnesses.
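The train-on-pure, apply-to-mixed idea can be sketched with a nearest-centroid toy (invented scores; the study's actual models were far more sophisticated, drawing on clinical, neurocognitive, and neuroimaging data):

```python
import math

# Hypothetical, scaled symptom scores: (psychosis_score, depression_score).
# Profiles are built only from "pure" cases, then applied to mixed ones.
pure_psychosis = [(0.9, 0.1), (0.8, 0.2), (0.85, 0.15)]
pure_depression = [(0.1, 0.9), (0.2, 0.8), (0.15, 0.85)]

def centroid(points):
    # Component-wise mean of the training points.
    return tuple(sum(c) / len(points) for c in zip(*points))

c_psy = centroid(pure_psychosis)
c_dep = centroid(pure_depression)

def classify(patient):
    # Assign the patient to whichever "pure" profile is nearer.
    d_psy = math.dist(patient, c_psy)
    d_dep = math.dist(patient, c_dep)
    return "psychosis-like" if d_psy < d_dep else "depression-like"

# A co-morbid patient whose profile leans toward the depression dimension.
print(classify((0.55, 0.7)))  # depression-like
```

The interesting output is not the label itself but where mixed-symptom patients fall relative to the pure profiles, which is what the study examined.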

The team discovered that patients with depression as a primary illness were more likely to have accurate mental health diagnoses, whereas patients with psychosis with depression had symptoms which most frequently leaned towards the depression dimension. This may suggest that depression plays a greater part in the illness than had previously been thought.

Lalousis added: "There is a pressing need for better treatments for psychosis and depression, conditions which constitute a major mental health challenge worldwide. Our study highlights the need for clinicians to understand better the complex neurobiology of these conditions, and the role of co-morbid symptoms; in particular, considering carefully the role that depression is playing in the illness."

"In this study we have shown how using sophisticated machine learning algorithms which take into account clinical, neurocognitive, and neurobiological factors can aid our understanding of the complexity of mental illness. In the future, we think machine learning could become a critical tool for accurate diagnosis. We have a real opportunity to develop data-driven diagnostic methods. This is an area in which mental health is keeping pace with physical health, and it's really important that we keep up that momentum."


Parascript and SFORCE Partner to Leverage Machine Learning Eliminating Barriers to Automation – GlobeNewswire

Longmont, CO, Feb. 09, 2021 (GLOBE NEWSWIRE) -- Parascript, which provides document analysis software processing for over 100 billion documents each year, announced today the Smart-Force (SFORCE) and Parascript partnership to provide a digital workforce that augments operations by combining cognitive Robotic Process Automation (RPA) technology with customers current investments for high scalability, improved accuracy and an enhanced customer experience in Mexico and across Latin America.

"Partnering with Smart-Force means we get to help solve some of the greatest digital transformation challenges in Intelligent Document Processing instead of just the low-hanging fruit. Smart-Force is forward-thinking and committed to futureproofing their customers' processes, even with hard-to-automate, unstructured documents where the application of techniques such as NLP is often required," said Greg Council, Vice President of Marketing and Product Management at Parascript. "Smart-Force leverages bots to genuinely collaborate with staff so that the staff no longer have to spend all their time on finding information, and performing data entry and verification, even for the most complex multi-page documents that you see in lending and insurance."

Smart-Force specializes in digital transformation by identifying processes in need of automation and implementing RPA to improve those processes so that they run faster without errors. SFORCE routinely enables increased productivity, improves customer satisfaction, and improves staff morale through leveraging the technology of Automation Anywhere, Inc., a leader in RPA, and now Parascript Intelligent Document Processing.

"As intelligent automation technology becomes more ubiquitous, it has created opportunities for organizations to ignite their staff towards new ways of working, freeing up time from manual tasks to focus on creative, strategic projects, what humans are meant to do," said Griffin Pickard, Director of Technology Alliance Program at Automation Anywhere. "By creating an alliance with Parascript and Smart-Force, we have enabled customers to advance their automation strategy by leveraging ML and accelerate end-to-end business processes."

"Our focus at SFORCE is on RPA with machine learning to transform how customers are doing things. We don't replace; we complement the technology investments of our customers to improve how they are working," said Alejandro Castrejón, Founder of SFORCE. "We make processes faster and more efficient, and augment their staff capabilities. In terms of RPA processes that focus on complex document-based information, we haven't seen anything approach what Parascript can do."

"We found that Parascript does a lot more than other IDP providers. Our customers need a point-to-point RPA solution. Where Parascript software becomes essential is in extracting and verifying data from complex documents such as legal contracts. Manual data entry and review produces a lot of errors and takes time," said Barbara Mair, Partner at SFORCE. "Using Parascript software, we can significantly accelerate contract execution, customer onboarding and many other processes without introducing errors."

The ability to process simple to very complex documents such as unstructured contracts and policies within RPA leveraging FormXtra.AI represents real opportunities for digital transformation across the enterprise. FormXtra.AI and its Smart Learning allow for easy configuration, and by training the systems on client-specific data, the automation is rapidly deployed with the ability to adapt to new information introduced in dynamic production environments.

About SFORCE, S.A. de C.V.

SFORCE offers services that allow customers to adopt digital transformation at whatever pace the organization needs. SFORCE is dedicated to helping customers get the most out of their existing investments in technology. SFORCE provides point-to-point solutions that combine existing technologies with next generation technology, which allows customers to transform operations, dramatically increase efficiency as well as automate manual tasks that are rote and error-prone, so that staff can focus on high-value activities that significantly increase revenue. From exploring process automation to planning a disruptive change that ensures high levels of automation, our team of specialists helps design and implement the automation of processes for digital transformation. Visit SFORCE.

About Parascript

Parascript software, driven by data science and powered by machine learning, configures and optimizes itself to automate simple and complex document-oriented tasks such as document classification, document separation and data entry for payments, lending and AP/AR processes. Every year, over 100 billion documents involved in banking, insurance, and government are processed by Parascript software. Parascript offers its technology both as software products and as software-enabled services to our partners. Visit Parascript.


The Collision of AI’s Machine Learning and Manipulation: Deepfake Litigation Risks to Companies from a Product Liability, Privacy, and Cyber…

AI and machine-learning advances have made it possible to produce fake videos and photos that seem real, commonly known as deepfakes. Deepfake content is exploding in popularity.[i] In Star Wars: The Rise of Skywalker, for instance, a visage of Carrie Fisher graced the screen, generated through artificial intelligence models trained on historic footage. Using thousands of hours of interviews with Salvador Dalí, the Dalí Museum in Florida created an interactive exhibit featuring the artist.[ii] For Game of Thrones fans miffed over plot holes in the season finale, Jon Snow can be seen profusely apologizing in a deepfake video that looks all too real.[iii]

Deepfake technology: how does it work? From a technical perspective, deepfakes (also referred to as synthetic media) are made from artificial intelligence and machine-learning models trained on data sets of real photos or videos. These trained algorithms then produce altered media that looks and sounds just like the real deal. Behind the scenes, generative adversarial networks (GANs) power deepfake creation.[iv] With GANs, two AI algorithms are pitted against one another: one creates the forgery while the other tries to detect it, teaching itself along the way. The more data is fed into a GAN, the more believable the deepfake will be. Researchers at academic institutions such as MIT, Carnegie Mellon, and Stanford University, as well as large Fortune 500 corporations, are experimenting with deepfake technology.[v] Yet deepfakes are not solely the province of technical universities or AI product development groups. Anybody with an internet connection can download publicly available deepfake software and crank out content.[vi]
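The adversarial dynamic described above can be sketched in miniature without any neural networks at all. In the toy below (all numbers invented for illustration), a one-parameter "generator" and a threshold "discriminator" take turns best-responding to each other; real GANs replace both players with neural networks trained by gradient descent, but the back-and-forth is the same.

```python
import random
import statistics

# Toy adversarial loop in the spirit of a GAN (illustrative only).
# "Real" data follows N(4, 1); the generator samples from N(mu, 1) and
# adjusts mu to slip past a simple threshold discriminator, which
# re-fits itself each round.
random.seed(0)
REAL_MEAN = 4.0
mu = 0.0  # generator parameter, starts far from the real distribution

for _ in range(150):
    real = [random.gauss(REAL_MEAN, 1) for _ in range(500)]
    fake = [random.gauss(mu, 1) for _ in range(500)]

    # Discriminator best-response: a threshold halfway between the two
    # batch means; samples on the real-mean side are labelled "real".
    threshold = (statistics.mean(real) + statistics.mean(fake)) / 2
    real_side = 1 if statistics.mean(real) > statistics.mean(fake) else -1

    def fooled(m):
        """Count how many generator samples the discriminator mislabels."""
        return sum(real_side * (random.gauss(m, 1) - threshold) > 0
                   for _ in range(500))

    # Generator best-response: keep whichever small move of mu fools the
    # discriminator most often.
    mu = max([mu, mu + 0.1, mu - 0.1], key=fooled)

print(f"generator mean after training: {mu:.1f}")  # drifts toward 4.0
```

As the detector sharpens, the forger's output drifts toward the real distribution, which is why more training data tends to yield more believable deepfakes.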

Deepfake risks and abuse. Deepfakes are not always fun and games. Deepfake videos can phish employees to reveal credentials or confidential information, e-commerce platforms may face deepfake circumvention of authentication technologies for purposes of fraud, and intellectual property owners may find their properties featured in videos without authorization. For consumer-facing online platforms, certain actors may attempt to leverage deepfakes to spread misinformation. Another well-documented and unfortunate abuse of deepfake technology is for purposes of revenge pornography.[vii]

In response, online platforms and consumer-facing companies have begun enforcing limitations on the use of deepfake media. Twitter, for example, announced a new policy within the last year to prohibit users from sharing synthetic or manipulated media that are likely to cause harm. Per its policy, Twitter reserves the right to apply a label or warning to Tweets containing such media.[viii] Reddit also updated its policies to ban content that impersonates individuals or entities in a misleading or deceptive manner (while still permitting satire and parody).[ix] Others have followed. Yet social media and online platforms are not the only industries concerned with deepfakes. Companies across industry sectors, including financial and healthcare, face growing rates of identity theft and imposter scams in government services, online shopping, and credit bureaus as deepfake media proliferates.[x]

Deepfake legal claims and litigation risks. We are seeing legal claims and litigation relating to deepfakes across multiple vectors:

1. Claims brought by those who object to their appearance in deepfakes. Victims of deepfake media sometimes pursue tort law claims for false light, invasion of privacy, defamation, and intentional infliction of emotional distress. At a high level, these overlapping tort claims typically require the person harmed by the deepfake to prove that the deepfake creator published something that gives a false or misleading impression of the subject person in a manner that (a) damages the subject's reputation, (b) would be highly offensive to a reasonable person, or (c) causes mental anguish or suffering. As more companies begin to implement countermeasures, the lack of sufficient safeguards against misleading deepfakes may give rise to a negligence claim. Companies could face negligence claims for failure to detect deepfakes, either alongside the deepfake creator or alone if the creator is unknown or unreachable.

2. Product liability issues related to deepfakes on platforms. Section 230 of the Communications Decency Act shields online companies from claims arising from user content published on the company's platform or website. The law typically bars defamation and similar tort claims. But e-commerce companies can also use Section 230 to dismiss product liability and breach of warranty claims where the underlying allegations focus on a third-party seller's representation (such as a product description or express warranty). Businesses sued for product liability or other tort claims should look to assert Section 230 immunity as a defense where the alleged harm stems from a deepfake video posted by a user. Note, however, the immunity may be lost where the host platform performs editorial functions with respect to the published content at issue. As a result, it is important for businesses to implement clear policies addressing harmful deepfake videos that broadly apply to all users and avoid wading into influencing a specific user's content.

3. Claims from consumers who suffer account compromise due to deepfakes. Multiple claims may arise where cyber criminals leverage deepfakes to compromise consumer credentials for various financial, online service, or other accounts. The California Consumer Privacy Act (CCPA), for instance, provides consumers with a private right of action to bring claims against businesses that violate the duty to implement and maintain reasonable security procedures and practices.[xi] Plaintiffs may also bring claims for negligence, invasion of privacy claims under common law or certain state constitutions, and state unfair competition or false advertising statutes (e.g., California's Unfair Competition Law and Consumers Legal Remedies Act).

4. Claims available to platforms enforcing Terms of Use prohibitions of certain kinds of deepfakes. Online content platforms may be able to enforce prohibitions on abusive or malicious deepfakes through claims involving breach of contract and potential violations of the Computer Fraud and Abuse Act (CFAA), among others. These claims may turn on nuanced issues around what conduct constitutes "exceeding authorized access" under the CFAA, or Terms of Use assent and enforceability of particular provisions.

5. Claims related to state statutes limiting deepfakes. As malicious deepfakes proliferate, several states such as California, Texas, and Virginia have enacted statutes prohibiting the use of deepfakes to interfere with elections or criminalizing the distribution of pornographic revenge deepfakes.[xii] More such statutes are pending.

Practical tips for companies managing deepfake risks. While every company and situation is unique, companies dealing with deepfakes on their platforms, or as a potential threat vector for information security attacks, can consider several practical avenues to manage risk.

While the future of deepfakes is uncertain, it is apparent that the underlying AI and machine-learning technology is very real and here to stay, presenting both risks and opportunities for organizations across industries.

Read more here:
The Collision of AI's Machine Learning and Manipulation: Deepfake Litigation Risks to Companies from a Product Liability, Privacy, and Cyber...

There Is No Silver Bullet Machine Learning Solution – Analytics India Magazine

A recommendation engine is a class of machine learning algorithms that suggests products, services, or information to users based on analysis of data. Robust recommendation systems are a key differentiator in the operations of big companies like Netflix, Amazon, and ByteDance (TikTok's parent company).
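As a rough illustration of the idea (not any particular company's system), a bare-bones recommender can be built from nothing more than item co-occurrence counts; the catalogue and histories below are invented:

```python
from collections import Counter
from itertools import combinations

# Invented purchase/usage histories: each set is one user's items.
histories = [
    {"router", "switch", "firewall"},
    {"router", "switch"},
    {"router", "firewall"},
    {"switch", "ap"},
]

# Count how often each ordered pair of items appears together.
co = Counter()
for basket in histories:
    for a, b in combinations(sorted(basket), 2):
        co[(a, b)] += 1
        co[(b, a)] += 1

def recommend(item, k=2):
    """Suggest the k items most often seen alongside `item`."""
    scores = Counter({b: n for (a, b), n in co.items() if a == item})
    return [b for b, _ in scores.most_common(k)]

print(recommend("router"))
```

Production systems layer far more on top (user features, freshness, ranking models), which is exactly the gap that drives the custom engines discussed in the talk below.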

Alok Menthe, Data Scientist at Ericsson, gave an informative talk on building custom recommendation engines for real-world problems at the Machine Learning Developers Summit (MLDS) 2021. "Whenever a niche business problem comes in, it has complicated, intertwined ways of working. Standard ML techniques may be inadequate and might not serve the customer's purpose. That is where the need for a custom-made engine comes in. We were also faced with such a problem with our service network unit at Ericsson," he said.

Menthe said the unit wanted to implement a recommendation system to provide suggestions for assignment workflow: a model to delegate incoming projects to the most appropriate team or resource pool.

Credit: Alok Menthe

There were three kinds of data available:

Pool definition data: It relates to the composition of a particular resource pool, such as the number of people, their competence, and other metadata.

Historical demand data: This kind of data helps in establishing a relationship between demand features and a particular resource pool.

Transactional data: It is used for operational purposes.

Menthe said building a custom recommendation system in this context involves the following steps:

Credit: Alok Menthe

"After building our model, the most difficult part was feature engineering, which is imperative for building an efficient system. Of the two major modules, classification and clustering, we faced challenges with respect to the latter. We had only categorical information, making it difficult to find distances between the objects. We went out of the box to see if we could do any special encoding for the data, and adopted data encoding techniques, including frequency-based encoding, in this regard," said Menthe.
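Frequency-based encoding, one of the techniques Menthe mentions, can be sketched as follows. This is only one plausible variant of the approach, not necessarily the team's exact method, and the records and field names are invented: each categorical value is replaced by its relative frequency in its column, giving distance-based algorithms something numeric to work with.

```python
from collections import Counter

# Invented categorical resource-pool records.
records = [
    {"competence": "java", "site": "nl"},
    {"competence": "java", "site": "se"},
    {"competence": "5g",   "site": "se"},
    {"competence": "5g",   "site": "se"},
]

def frequency_encode(rows):
    """Replace each categorical value by its relative column frequency."""
    cols = list(rows[0].keys())
    freqs = {c: Counter(r[c] for r in rows) for c in cols}
    n = len(rows)
    return [[freqs[c][r[c]] / n for c in cols] for r in rows]

encoded = frequency_encode(records)
print(encoded[0])  # first record as a numeric vector
```

The resulting vectors can then be fed to any clustering algorithm that assumes numeric distances.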

Clustering module: For this module, the team initially implemented K-modes and agglomerative clustering. However, the results were far from perfect, prompting the team to fall back on the good old K-means algorithm. Evaluation was done manually with the help of subject matter experts.

The final model had 700 resource pools condensed to 15 pool clusters.
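As a hedged sketch of the kind of step involved, a stdlib-only K-means over synthetic 2-D points (standing in for the encoded pool vectors; the data and k are invented) looks like this; condensing 700 pools into 15 clusters would be the same procedure with k=15 over the real vectors:

```python
import random

random.seed(1)
# Synthetic data: three loose blobs of 20 points each.
points = [(random.gauss(cx, 0.1), random.gauss(cy, 0.1))
          for cx, cy in [(0, 0), (1, 1), (0, 1)] for _ in range(20)]

def kmeans(pts, k, iters=20):
    """Plain K-means: alternate nearest-centre assignment and mean update."""
    centers = random.sample(pts, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in pts:  # assignment step: nearest centre by squared distance
            i = min(range(k), key=lambda c: (p[0] - centers[c][0]) ** 2
                                            + (p[1] - centers[c][1]) ** 2)
            clusters[i].append(p)
        for i, cl in enumerate(clusters):  # update step: mean of members
            if cl:
                centers[i] = (sum(x for x, _ in cl) / len(cl),
                              sum(y for _, y in cl) / len(cl))
    return centers

centers = kmeans(points, 3)
print(len(centers))  # number of cluster centres
```

Note that K-means assumes meaningful distances, which is why the categorical encoding step above matters before it can be applied at all.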

Classification module: For this module, three algorithms were iterated on: Random Forest, Artificial Neural Network, and XGBoost. Classification accuracy was used as the evaluation metric. Trained on 5 million records, the module demonstrated an accuracy of 71 percent.

Menthe said this recommendation model is monitored on a fortnightly basis by validating the suggested pools against the pools actually allocated to project demands.

The model has proved successful on three fronts.

Menthe summarised the three major takeaways from this project in his concluding remarks: the need to preserve business nuances in ML solutions; thinking beyond standard ML approaches; and understanding that there is no silver bullet ML solution.

I am a journalist with a postgraduate degree in computer network engineering. When not reading or writing, one can find me doodling away to my heart's content.

Here is the original post:
There Is No Silver Bullet Machine Learning Solution - Analytics India Magazine

Postdoctoral Research Associate in Digital Humanities and Machine Learning job with DURHAM UNIVERSITY | 246392 – Times Higher Education (THE)

Department of Computer Science

Grade 7: £33,797 - £40,322 per annum. Fixed Term, Full Time. Contract Duration: 7 months. Contracted Hours per Week: 35. Closing Date: 13-Mar-2021, 7:59:00 AM.

Durham University

Durham University is one of the world's top universities with strengths across the Arts and Humanities, Sciences and Social Sciences. We are home to some of the most talented scholars and researchers from around the world who are tackling global issues and making a difference to people's lives.

The University sits in a beautiful historic city where it shares ownership of a UNESCO World Heritage Site with Durham Cathedral, the greatest Romanesque building in Western Europe. A collegiate University, Durham recruits outstanding students from across the world and offers an unmatched wider student experience.

Less than 3 hours north of London, and an hour and a half south of Edinburgh, County Durham is a region steeped in history and natural beauty. The Durham Dales, including the North Pennines Area of Outstanding Natural Beauty, are home to breathtaking scenery and attractions. Durham offers an excellent choice of city, suburban and rural residential locations. The University provides a range of benefits including pension and childcare benefits and the University's Relocation Manager can assist with potential schooling requirements.

Durham University seeks to promote and maintain an inclusive and supportive environment for work and study that assists all members of our University community to reach their full potential. Diversity brings strength and we welcome applications from across the international, national and regional communities that we work with and serve.

The Department

The Department of Computer Science is rapidly expanding. A new building for the department (joint with Mathematical Sciences) has recently opened to house the expanded Department. The current Department has research strengths in (1) algorithms and complexity, (2) computer vision, imaging, and visualisation and (3) high-performance computing, cloud computing, and simulation. We work closely with industry and government departments. Research-led teaching is a key strength of the Department, which came 5th in the Complete University Guide. The department offers BSc and MEng undergraduate degrees and is currently redeveloping its interdisciplinary taught postgraduate degrees. The size of its student cohort has more than trebled in the past five years. The Department has an exceptionally strong External Advisory Board that provides strategic support for developing research and education, consisting of high-profile industrialists and academics. Computer Science is one of the very best UK Computer Science Departments with an outstanding reputation for excellence in teaching, research and employability of our students.

The Role

Postdoctoral Research Associate to work on the AHRC-funded project "Visitor Interaction and Machine Curation in the Virtual Liverpool Biennial".

The project looks at virtual art exhibitions that are curated by machines, or even co-curated by humans and machines; and how audiences interact with these exhibitions in the era of online art shows. The project is in close collaboration with the 2020 (now 2021) Liverpool Biennial (http://biennial.com/). The role of the post holder is, along with the PI Leonardo Impett, to implement different strategies of user-machine interaction for virtual art exhibits; and to investigate the interaction behaviour of different types of users with such systems.

Responsibilities:

This post is fixed term until 31 August 2021 as the research project is time limited and will end on 31 August 2021.

The post-holder is employed to work on a research project which will be led by another colleague. Whilst this means that the post-holder will not be carrying out independent research in their own right, the expectation is that they will contribute to the advancement of the project through the development of their own research ideas and the adaptation and development of research protocols.

Successful applicants will, ideally, be in post by February 2021.

How to Apply

For informal enquiries please contact Dr Leonardo Impett (leonardo.l.impett@durham.ac.uk). All enquiries will be treated in the strictest confidence.

We prefer to receive applications online via the Durham University Vacancies Site: https://www.dur.ac.uk/jobs/. As part of the application process, you should provide details of 3 (preferably academic/research) referees and the details of your current line manager so that we may seek an employment reference.

Applications are particularly welcome from women and black and minority ethnic candidates, who are under-represented in academic posts in the University. We are committed to equality: if for any reason you have taken a career break or periods of leave that may have impacted on your career path, such as maternity, adoption or parental leave, you may wish to disclose this in your application. The selection committee will recognise that this may have reduced the quantity of your research accordingly.

What to Submit

All applicants are asked to submit:

The Requirements

Essential:

Qualifications

Experience

Skills

Desirable:

Experience

Skills

DBS Requirement:Not Applicable.

Read more:
Postdoctoral Research Associate in Digital Humanities and Machine Learning job with DURHAM UNIVERSITY | 246392 - Times Higher Education (THE)

Research Engineer, Machine Learning job with NATIONAL UNIVERSITY OF SINGAPORE | 279415 – Times Higher Education (THE)

Job Description

Vessel Collision Avoidance System is a real-time framework to predict and prevent vessel collisions based on the historical movement of vessels in heavy-traffic regions such as the Singapore Strait. We are looking for talented developers to join our development team and help us develop machine learning and agent-based simulation models to quantify vessel collision risk in the Singapore Strait and port. If you are data curious, excited about deriving insights from data, and motivated by solving a real-world problem, we want to hear from you.
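The posting does not describe the team's actual method, but one standard ingredient of this kind of collision-risk work is the closest point of approach (CPA) between two vessels assumed to move at constant velocity; the sketch below uses invented positions and speeds:

```python
def cpa(p1, v1, p2, v2):
    """Return (time, distance) of closest approach for two objects moving
    at constant 2-D velocities; time is clamped to the future (t >= 0)."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]      # relative position
    dvx, dvy = v2[0] - v1[0], v2[1] - v1[1]    # relative velocity
    dv2 = dvx * dvx + dvy * dvy
    # Minimise |relative position + t * relative velocity|.
    t = 0.0 if dv2 == 0 else max(0.0, -(dx * dvx + dy * dvy) / dv2)
    cx, cy = dx + dvx * t, dy + dvy * t
    return t, (cx * cx + cy * cy) ** 0.5

# Two vessels closing head-on with a 10-unit lateral offset.
t, d = cpa((0, 0), (10, 0), (100, 10), (-10, 0))
print(t, d)  # closest approach at t = 5.0, separation 10.0
```

A small CPA distance within a short CPA time is a classic trigger for flagging an encounter as risky, on top of which learned models can add traffic-pattern context.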

Qualifications

A B.Sc. in a quantitative field (e.g., Computer Science, Statistics, Engineering, Science)
Good coding habits in Python and the ability to solve problems at a fast pace
Familiarity with popular machine learning models
Eagerness to learn new things and passion for the work
Responsibility, team orientation, and a results-oriented mindset
The ability to communicate results clearly and a focus on driving impact

More Information

Location: Kent Ridge Campus. Organization: Engineering. Department: Industrial Systems Engineering And Management. Employee Referral Eligible: No. Job requisition ID: 7334.

See the original post:
Research Engineer, Machine Learning job with NATIONAL UNIVERSITY OF SINGAPORE | 279415 - Times Higher Education (THE)

Debit: The Long Count review Mayans, machine learning and music – The Guardian

There is an uncanniness in listening to a musical instrument you have never heard being played for the first time. As your brain makes sense of a new sound, it tries to frame it within the realm of familiarity, producing a tussle between the known and unknown.

The second album from Mexican-American producer Delia Beatriz, AKA Debit, embraces this dissonance. Taking the flutes of the ancient Mayan courts as her raw material and inspiration, Beatriz used archival recordings from the Mayan Studies Institute at the Universidad Nacional Autónoma de México to create a digital library of their sounds. She then processed these ancient samples through a machine-learning program to create woozy, ambient soundscapes.

Since no written music has survived from the Mayan civilisation, Beatriz crafts a new language for these ancient wind instruments, straddling the electronic world of her 2017 debut Animus and the dilatory experimentalism of ambient music. The resulting 10 tracks make for a deliciously strange listening experience.

Opener "1st Day" establishes the undulating tones that unify the record. They flutter like contemplative humming and veer from acoustic warmth to metallic note-bending. Each track is given a numbered day and time, as if documenting the passage of a ritual, and echoes resonate down the record: whistles appear like sirens during the moans of "1st Night" and "3rd Night"; snatches of birdsong are tucked between the reverb of "2nd Day" and "5th Day".

The Long Count of the record's title seems to express the linear passage of time itself, one replicated in the eternal, fluid flute tones. We hear in them the warmth of the human breath that first produced their sound, as well as Beatriz's electronic filtering that extends their notes until they imperceptibly bleed into one another and fuzz like keys on a synth. It is a startlingly original and enveloping sound that leaves us with that ineffable feeling: the past unearthed and made new once more.

Korean composer Park Jiha releases her third album, The Gleam (tak:til), a solo work featuring uniquely sparse compositions of saenghwang mouth organ, piri oboe and yanggeum dulcimer. British-Ghanaian rapper KOG brings his debut LP, Zone 6, Agege (Heavenly Sweetness), a deeply propulsive mix of English, Pidgin and Ga lyrics set to Afrobeat fanfares. Cellist and composer Ana Carla Maza releases her latest album, Bahía (Persona Editorial), an affecting combination of Cuban son, bossa and chanson in homage to the music of her birthplace of Havana.

See the original post here:
Debit: The Long Count review Mayans, machine learning and music - The Guardian

Bringing AI and machine learning to the edge with matter-ready platform – Electropages

28-01-2022 | Silicon Laboratories Inc | Semiconductors

Silicon Labs offers the BG24 and MG24 families of 2.4GHz wireless SoCs for Bluetooth and multiprotocol operation, along with a new software toolkit. This co-optimised hardware and software platform helps bring high-performance AI/ML applications and wireless connectivity to battery-powered edge devices. Matter-ready, the ultra-low-power families support multiple wireless protocols and include PSA Level 3 Secure Vault protection, making them well suited to diverse smart home, medical and industrial applications.

The company's solutions comprise two new families of 2.4GHz wireless SoCs providing the industry's first integrated AI/ML accelerators; support for Matter, OpenThread, Zigbee, Bluetooth Low Energy, Bluetooth mesh, proprietary and multi-protocol operation; the highest level of industry security certification; ultra-low-power capabilities; and the largest memory and flash capacity in the company's portfolio. Also offered is a new software toolkit designed to let developers quickly build and deploy AI and machine learning algorithms using some of the most popular tool suites, such as TensorFlow.

"The BG24 and MG24 wireless SoCs represent an awesome combination of industry capabilities including broad wireless multi-protocol support, battery life, machine learning, and security for IoT Edge applications," said Matt Johnson, CEO of Silicon Labs.

The families also have the largest flash and RAM capacities in the company's portfolio, meaning the devices can evolve to support multi-protocol operation, Matter, and ML algorithms trained on large datasets. PSA Level 3-Certified Secure Vault, the highest level of security certification for IoT devices, offers the security required in products such as medical equipment, door locks, and other sensitive deployments where hardening the device against external threats is essential.

Go here to read the rest:
Bringing AI and machine learning to the edge with matter-ready platform - Electropages

Artificial Intelligence and Machine Learning drive FIAs initiatives for financial inclusivity in India – Express Computer

In an exclusive video interview with Express Computer, Seema Prem, Co-founder and CEO, FIA Global, talks about the company's investment in Artificial Intelligence and Machine Learning over the last five years for financial inclusivity in the country.

FIA, a financial inclusivity neo bank delivers financial services through its app, Finvesta. The app employs AI, facial recognition and Natural Language Processing to aggregate, redesign, recommend and deliver financial products at scale. The app uses icons for user interface, for ease of use where literacy levels are low.

Seema Prem, Co-founder and CEO, FIA, says, "We have reaped significant benefits by incorporating AI and ML in our operations. We handle very tiny transactions and big data. The algorithm modules, especially rule-based modules, have reached a certain performance plateau. AI and ML have been incorporated for smart bot applications for servicing customers, and in audit, where we look at embedding facial recognition, pattern detection for predicting the performance of the business, analysing large volumes of data, and more. It helps us ensure that manual intervention comes down significantly. Last year, after the pandemic, we automated like there is no tomorrow, and that automation has resulted in huge productivity gains for us."

FIA's role in financial inclusivity in India is largely associated with the Pradhan Mantri Jan Dhan Yojana, under which it ties up with banks to set up centres in very remote and secluded regions of India like Uri, Kargil, Kedarnath, and Kanyakumari.

Prem states, "We work in 715 districts of the country, in areas where a bank branch has never been. Once bank accounts open in such areas, people in remote areas gain confidence in banking. Eventually, we try to fulfil their needs for other products like pensions, insurance, healthcare, livestock loans, vehicle insurance and property insurance. We provide doorstep delivery of pensions to our customers. So our services also ensure community engagement besides financial inclusivity, targeting various special groups like women and the elderly."


If you have an interesting article / experience / case study to share, please get in touch with us at [emailprotected]


See the rest here:
Artificial Intelligence and Machine Learning drive FIAs initiatives for financial inclusivity in India - Express Computer