Ding Dong Merrily on AI: The British Neuroscience Association’s Christmas Symposium Explores the Future of Neuroscience and AI – Technology Networks

A Christmas symposium from the British Neuroscience Association (BNA) has reviewed the growing relationship between neuroscience and artificial intelligence (AI) techniques. The online event featured talks from across the UK, which reviewed how AI has changed brain science and the many unrealized applications of what remains a nascent technology.

Opening the day with his talk, "Shake Your Foundations: the future of neuroscience in a world where AI is less rubbish", Prof. Christopher Summerfield, from the University of Oxford, looked at the idiotic, ludic and pragmatic stages of AI. We are moving from the idiotic phase, where virtual assistants are usually unreliable and AI-controlled cars crash into random objects they fail to notice, to the ludic phase, where some AI tools are actually quite handy. Summerfield highlighted a program called DALL-E, an AI that converts text prompts into images, and a language generator called Gopher that can answer complicated ethical questions with eerily natural responses.

What could these advances in AI mean for neuroscience? Summerfield suggested that they invite researchers to consider the limits of current neuroscience practice that could be enhanced by AI in the future.

Integration of neuroscience subfields could be enabled by AI, said Summerfield. Currently, he said, "People who study language don't care about vision. People who study vision don't care about memory." AI systems don't work properly if only one distinct subfield is considered, and Summerfield suggested that, as we learn more about how to create a more complete AI, similar advances will be seen in our study of the biological brain.

Another element of AI that could drag neuroscience into the future is the level of grounding required for it to succeed. Currently, AI models are provided with contextual training data before they can learn associations, whereas the human brain learns from scratch. What makes it possible for a volunteer in a psychologist's experiment to be told to do something, and then just do it? To create more natural AIs, this is a problem that neuroscience will have to solve in the biological brain first.

The University of Oxford's Prof. Mihaela van der Schaar looked at how we can use machine learning to empower human learning in her talk, "Quantitative Epistemology: a new human-machine partnership". Van der Schaar's talk discussed practical applications of machine learning in healthcare by teaching clinicians through a process called meta-learning. This is where, said van der Schaar, "learners become aware of and increasingly in control of habits of perception, inquiry, learning and growth".

This approach provides a potential look at how AI might supplement the future of healthcare, by advising clinicians on how they make decisions and how to avoid potential error when undertaking certain practices. Van der Schaar gave an insight into how AI models can be set up to make these continuous improvements. In healthcare, which, at least in the UK, is slow to adopt new technology, van der Schaar's talk offered a tantalizing glimpse of what a truly digital approach to healthcare could achieve.

Dovetailing nicely from van der Schaar's talk was Imperial College London professor Aldo Faisal's presentation, entitled "AI and Neuroscience: the Virtuous Cycle". Faisal looked at systems where humans and AI interact and how they can be classified. Whereas in van der Schaar's clinical decision support systems, humans remain responsible for the final decision and AIs merely advise, in an AI-augmented prosthetic, for example, the roles are reversed. A user can suggest a course of action, such as "pick up this glass", by sending nerve impulses, and the AI can then find a response that addresses this suggestion, for example by directing a prosthetic hand to move in a certain way. Faisal then went into detail on how these paradigms can inform real-world learning tasks, such as motion-tracked subjects learning to play pool.

One fascinating study involved a balance board task, where a human subject could tilt the board in one axis while an AI controlled another, meaning that the two had to collaborate to succeed. Over time, the strategies learned by the AI could be copied between certain subjects, suggesting the human learning component was similar. But for other subjects, this wasn't possible.

Faisal suggested this hinted at complexities in how different individuals learn that could inform behavioral neuroscience, AI systems and future devices, like neuroprostheses, where the two must play nicely together.

The afternoon's session featured presentations that touched on the complexities of the human and animal brain. The University of Sheffield's Professor Eleni Vasilaki explained how mushroom bodies, regions of the fly brain that play roles in learning and memory, can provide insight into sparse reservoir computing. Thomas Nowotny, professor of informatics at the University of Sussex, reviewed a process called asynchrony, where neurons activate at slightly different times in response to certain stimuli. Nowotny explained how this enables relatively simple systems like the bee brain to perform incredible feats of communication and navigation using only a few thousand neurons.

Wrapping up the day's presentations was a lecture that showed an uncanny future for social AIs, delivered by Henry Shevlin, a senior researcher at the Leverhulme Centre for the Future of Intelligence (CFI) at the University of Cambridge.

Shevlin reviewed the theory of mind, which enables us to understand what other people might be thinking by, in effect, modeling their thoughts and emotions. Do AIs have minds in the same way that we do? Shevlin reviewed a series of AIs that have been out in the world, acting as humans, here in 2021.

One such AI, OpenAI's language model GPT-3, spent a week posting on the internet forum site Reddit, chatting with human Redditors and racking up hundreds of comments. Chatbots like Replika personalize themselves to individual users, creating pseudo-relationships that feel as real as human connections (at least to some users). But current systems, said Shevlin, are excellent at fooling humans yet have no mental depth; they are, in effect, extremely proficient versions of the predictive text systems our phones use.
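That "predictive text" comparison can be made concrete. The toy sketch below (with an invented three-sentence corpus) is a bigram model that predicts the next word from counts of what has followed each word; GPT-3 performs the same next-token prediction task, only with a neural network over billions of parameters rather than a lookup table.

```python
from collections import Counter, defaultdict

# Hand-written toy corpus; any real model trains on billions of words.
corpus = ("the model writes text . the model predicts the next word . "
          "the next word follows the context").split()

# Count which word follows which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Most frequent continuation seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))   # the most common word after "the"
print(predict_next("next"))  # "word" is the only continuation seen
```

Shevlin's point is that scaling this scheme up, however impressively, does not by itself confer the mental depth that a theory of mind attributes to people.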

While the rapid advance of some of these systems might feel dizzying or unsettling, AI and neuroscience are likely to be wedded together in future research. So much can be learned from pairing these fields, and true advances will be gained not by retreating from complex AI theories but by embracing them. At the end of his talk, Summerfield summed up the idea that AIs are black boxes we don't fully understand as "lazy". If we treat deep networks and other AI systems as neurobiological theories instead, the next decade could see unprecedented advances for both neuroscience and AI.


Hexatone’s FinanceAI Delivers the Power of Artificial Intelligence and Cognitive Analysis to the Financial Sector – Yahoo Finance

Herzliya, Israel--(Newsfile Corp. - December 19, 2021) - Hexatone's FinanceAI offers semi-automated KYC verification that leverages artificial intelligence (AI), machine learning and cognitive analysis to reduce the reliance on internal resources and manual processes.

Hexatone Financial Intelligence

To view an enhanced version of this graphic, please visit: https://orders.newsfilecorp.com/files/8444/108077_d2ab0a8d94d1d91e_001full.jpg

Hexatone's FinanceAI Features

Automating image quality checks

When a customer submits a poor-quality image, it can delay the KYC process by days or weeks as they have to upload new information. Computer vision algorithms can provide immediate feedback to the customer, allowing them to complete the image verification process in minutes rather than waiting.
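As an illustration of the idea (not Hexatone's actual pipeline), a common computer-vision sharpness check scores an image by the variance of its Laplacian: sharp edges produce large Laplacian responses, while a blurred upload scores near zero. The function names and the threshold below are hypothetical.

```python
def laplacian_variance(pixels):
    """Sharpness score for a 2-D grayscale image given as a list of rows."""
    h, w = len(pixels), len(pixels[0])
    values = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Discrete Laplacian: sum of 4 neighbours minus 4x the centre.
            lap = (pixels[y - 1][x] + pixels[y + 1][x] +
                   pixels[y][x - 1] + pixels[y][x + 1] -
                   4 * pixels[y][x])
            values.append(lap)
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / len(values)

def accept_upload(pixels, threshold=50.0):
    # A near-uniform (blurred) image scores close to zero and is
    # rejected immediately, instead of days later by a manual reviewer.
    return laplacian_variance(pixels) >= threshold
```

A production system would run this (or a learned quality model) the moment the customer submits the photo, returning feedback in seconds.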

Automatic verification

Object detection algorithms can automatically scan documents and check that all the relevant information is available. For example, if the customer fills in a form, it can validate that the data is correct without requiring a manual reviewer to do so.

Detecting fraud

Machine learning algorithms can analyze a vast number of transactions in seconds. The models can spot the signals of non-compliance and irregularities. Humans don't need to spend time manually sifting through transactions and flagging suspicious behaviour.
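A purely illustrative sketch of the idea (not Hexatone's actual model): flag transactions whose amounts deviate sharply from an account's own history. A deployed system would use a trained model over many features, but the shape of the task is the same. The threshold is arbitrary; note that with small samples the attainable z-score is capped, hence the modest default.

```python
from statistics import mean, pstdev

def flag_suspicious(amounts, z_threshold=2.0):
    """Return indices of transactions more than z_threshold standard
    deviations from the account's mean amount."""
    mu, sigma = mean(amounts), pstdev(amounts)
    if sigma == 0:
        return []
    return [i for i, a in enumerate(amounts)
            if abs(a - mu) / sigma > z_threshold]

history = [120, 95, 110, 130, 105, 99, 25_000]  # one outlier deposit
print(flag_suspicious(history))  # → [6], the 25,000 deposit
```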

Automatic document digitization

When documents and images are verified, optical recognition models can extract data and enter it into back-office software systems. In the best-case scenario, the automation eliminates the need for manual data entry.

Omri Raiter, Co-Founder and Chief Technology Officer of Hexatone Finance says, "When implemented correctly, KYC automation by Hexatone's FinanceAI offers a significant boost to finance firms wanting to ensure regulatory compliance, and by improving their Customer Experience and overall business success."

What is the KYC Process?


In financial services, the Know Your Customer (KYC) process includes all the actions firms must take to ensure customers are genuine and to assess and monitor risk. The KYC process includes verifying IDs, documents and faces with proof from the customer. All financial institutions must comply with KYC and anti-money-laundering (AML) regulations to combat fraud and money laundering. Penalties apply if they fail to do so.

KYC Process

To view an enhanced version of this graphic, please visit: https://orders.newsfilecorp.com/files/8444/108077_d2ab0a8d94d1d91e_002full.jpg

Why is KYC so important?

Every year, an estimated 2% to 5% of global GDP is laundered, equal to around $2 trillion. KYC has become an essential part of AML regulations and processes to attempt to reduce that amount.

A KYC check helps reduce the risk associated with onboarding customers by assessing whether they are involved in money laundering, fraud or other criminal activities. For people working with larger organizations or public figures, KYC is especially important, as those people could be targets for bribery or corruption.

When financial firms don't get KYC right, they may face reputational damage as well as prosecution and fines. It's best practice to repeat the process regularly after onboarding, but it should be done at the acquisition stage as a minimum. A more regular KYC process can check for factors such as:

Spikes in activity that might signal criminal behaviour

Unusual cross-border activities

Customer identities that appear on government sanction lists

Adverse offline or online media attention

KYC is important for confirming that the customer account is up to date, that the transactions match the original purpose of the account, and that the risk level is appropriate for the type of transactions.
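The periodic checks listed above can be sketched as simple rules. This is a hypothetical illustration only; every name, field and threshold below is invented, and real compliance systems screen against official sanction lists and learned baselines.

```python
SANCTIONS = {"ACME EXPORTS LLC", "J. DOE"}  # stand-in for a government list

def kyc_review(customer):
    flags = []
    # Spike in activity: latest period far above the customer's norm.
    counts = customer["monthly_tx_counts"]
    baseline = sum(counts[:-1]) / len(counts[:-1])
    if counts[-1] > 3 * baseline:
        flags.append("activity spike")
    # Unusual cross-border activity.
    if customer["cross_border_tx"] > customer["cross_border_limit"]:
        flags.append("cross-border activity")
    # Identity screened against sanction lists.
    if customer["name"].upper() in SANCTIONS:
        flags.append("sanctions match")
    return flags

customer = {
    "name": "J. Doe",
    "monthly_tx_counts": [10, 12, 11, 9, 60],
    "cross_border_tx": 2,
    "cross_border_limit": 5,
}
print(kyc_review(customer))  # → ['activity spike', 'sanctions match']
```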

Who is KYC for?

Any financial institution that deals with customers during the process of opening and maintaining their accounts needs KYC in place. That includes banks, credit unions, wealth management firms, fintech companies, private lenders, accountants, tax firms, and lending platforms. Essentially, KYC regulations apply to any firm that interacts with money, which in the 21st century is pretty much all of them.

About Hexatone's FinanceAI

Hexatone's FinanceAI is an artificial intelligence-based solution for the financial and banking sector. FinanceAI automatically evaluates the financial profiles of entities, companies, and their customers using AI, machine learning, and cognitive analysis, enabling banks and financial institutions to make faster, better, and more business-relevant decisions.

Media Contact

Company: Hexatone Finance
Email: contactus@Hexatone.net

To view the source version of this press release, please visit https://www.newsfilecorp.com/release/108077


Projecting armed conflict risk in Africa towards 2050 along the SSP-RCP scenarios: a machine learning approach Peace Research Institute Oslo – Peace…

Hoch, Jannis M.; Sophie P. de Bruin; Halvard Buhaug; Nina von Uexkull; Rens van Beek & Niko Wanders (2021) Projecting armed conflict risk in Africa towards 2050 along the SSP-RCP scenarios: a machine learning approach, Environmental Research Letters 16(12): 124068.

In the past decade, several efforts have been made to project armed conflict risk into the future.

This study broadens current approaches by presenting a first-of-its-kind application of machine learning (ML) methods to project sub-national armed conflict risk over the African continent along three Shared Socioeconomic Pathway (SSP) scenarios and three Representative Concentration Pathways towards 2050. Results of the open-source ML framework CoPro are consistent with the underlying socioeconomic storylines of the SSPs, and the resulting out-of-sample armed conflict projections obtained with Random Forest classifiers agree with the patterns observed in comparable studies. In SSP1-RCP2.6, conflict risk is low in most regions although the Horn of Africa and parts of East Africa continue to be conflict-prone. Conflict risk increases in the more adverse SSP3-RCP6.0 scenario, especially in Central Africa and large parts of Western Africa. We specifically assessed the role of hydro-climatic indicators as drivers of armed conflict. Overall, their importance is limited compared to main conflict predictors but results suggest that changing climatic conditions may both increase and decrease conflict risk, depending on the location: in Northern Africa and large parts of Eastern Africa climate change increases projected conflict risk whereas for areas in the West and northern part of the Sahel shifting climatic conditions may reduce conflict risk. With our study being at the forefront of ML applications for conflict risk projections, we identify various challenges for this emerging scientific field. A major concern is the limited selection of relevant quantified indicators for the SSPs at present. Nevertheless, ML models such as the one presented here are a viable and scalable way forward in the field of armed conflict risk projections, and can help to inform the policy-making process with respect to climate security.
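The study's classifiers come from the CoPro framework; purely as an illustration of the Random Forest idea it relies on (many trees, each fit on a bootstrap resample of the observations, voting on conflict risk), here is a toy Python sketch with invented indicator data. Decision stumps stand in for full trees, and the two features are hypothetical placeholders for the study's socioeconomic and hydro-climatic indicators.

```python
import random

def fit_stump(rows):
    """Best one-feature threshold split; rows is a list of (features, label)."""
    def majority(labels):
        return max(set(labels), key=labels.count) if labels else 0
    best = None
    n_features = len(rows[0][0])
    for f in range(n_features):
        for x, _ in rows:
            t = x[f]
            left = majority([y for xx, y in rows if xx[f] <= t])
            right = majority([y for xx, y in rows if xx[f] > t])
            # Score the split by training accuracy.
            acc = sum(y == (left if xx[f] <= t else right) for xx, y in rows)
            if best is None or acc > best[0]:
                best = (acc, f, t, left, right)
    _, f, t, left, right = best
    return lambda x: left if x[f] <= t else right

def random_forest(rows, n_trees=25, seed=0):
    """Majority vote over stumps, each fit on a bootstrap resample."""
    rng = random.Random(seed)
    stumps = [fit_stump([rng.choice(rows) for _ in rows])
              for _ in range(n_trees)]
    def predict(x):
        votes = [s(x) for s in stumps]
        return max(set(votes), key=votes.count)
    return predict

# Invented (indicators -> conflict) data: (gdp per capita index, drought index)
train_rows = [((0.3, 0.9), 1), ((0.4, 0.8), 1), ((0.2, 0.7), 1),
              ((0.9, 0.2), 0), ((0.8, 0.1), 0), ((0.7, 0.3), 0)]
predict = random_forest(train_rows)
print(predict((0.25, 0.85)), predict((0.85, 0.15)))
```

The bootstrap resampling decorrelates the trees, which is what lets the ensemble's vote generalize better than any single tree.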


Developing Machine Learning and Statistical Tools to Evaluate the Accessibility of Public Health Advice on Infectious Diseases among Vulnerable People…

Comput Intell Neurosci. 2021 Dec 17;2021:1916690. doi: 10.1155/2021/1916690. eCollection 2021.

ABSTRACT

BACKGROUND: From Ebola, Zika, to the latest COVID-19 pandemic, outbreaks of highly infectious diseases continue to reveal severe consequences of social and health inequalities. People from low socioeconomic and educational backgrounds, as well as those with low health literacy, tend to be most affected by the uncertainty, complexity, volatility, and progressiveness of public health crises and emergencies. A key lesson that governments have taken from the ongoing coronavirus pandemic is the importance of developing and disseminating highly accessible, actionable, inclusive, coherent public health advice, which represents a critical tool to help people with diverse cultural, educational backgrounds and varying abilities to effectively implement health policies at the grassroots level.

OBJECTIVE: We aimed to translate the best practices of accessible, inclusive public health advice (purposefully designed for people with low socioeconomic and educational background, health literacy levels, limited English proficiency, and cognitive/functional impairments) on COVID-19 from health authorities in English-speaking multicultural countries (USA, Australia, and UK) to adaptive tools for the evaluation of the accessibility of public health advice in other languages.

METHODS: We developed an optimised Bayesian classifier to produce probabilistic prediction of the accessibility of official health advice among vulnerable people including migrants and foreigners living in China. We developed an adaptive statistical formula for the rapid evaluation of the accessibility of health advice among vulnerable people in China.

RESULTS: Our study provides needed research tools to fill in a persistent gap in Chinese public health research on accessible, inclusive communication of infectious diseases prevention and management. For the probabilistic prediction, using the optimised Bayesian machine learning classifier (GNB), the largest positive likelihood ratio (LR+) 16.685 (95% confidence interval: 4.35, 64.04) was identified when the probability threshold was set at 0.2 (sensitivity: 0.98; specificity: 0.94).

CONCLUSION: Effective communication of health risks through accessible, inclusive, actionable public advice represents a powerful tool to reduce health inequalities amidst health crises and emergencies. Our study translated the best-practice public health advice developed during the pandemic into intuitive machine learning classifiers for health authorities to develop evidence-based guidelines of accessible health advice. In addition, we developed adaptive statistical tools for frontline health professionals to assess accessibility of public health advice for people from non-English speaking backgrounds.

PMID:34925484 | PMC:PMC8683224 | DOI:10.1155/2021/1916690


From AI to Machine Learning, 4 ways in which technology is upscaling wealth management space – Zee Business

WealthTech (technology) companies have rapidly spawned in recent years. Cutting-edge technologies are making their way into almost all industries, from manufacturing to logistics to financial services.

Within financial services, technologies such as data analytics, Artificial Intelligence and Machine Learning, among others, are leading the way in changing business processes with faster turnaround times and superior customer experience.

As technology evolves, business models must change to remain relevant. The wealth management sector is not insulated from this phenomenon!

Ankur Maheshwari, CEO-Wealth at Equirus, decodes the impact of new technology advancements in the wealth management industry:

Wealthtech upscaling the wealth management space

Wealthtech aids companies in delivering a more convenient, hassle-free and engaging experience to clients at a relatively low cost.

The adoption of new-age technologies such as big data analytics, Artificial Intelligence (AI) and Machine Learning (ML) is helping wealth management companies stay ahead of the curve in the new age of investing.

While the adoption of advanced technologies has been underway for quite some time, the pandemic has rapidly increased the pace of adoption.

New-age investors and the young population are using technology in a big way. This is evident from the fact that total digital transactions in India have grown from 14.59 billion in FY18 to 43.71 billion in FY21, as reported by the RBI.

According to a report released by ACI Worldwide, more than 70.3 billion real-time transactions were processed globally in 2020, with India at the top spot with more than 25 billion real-time payment transactions.

This indicates the rising use of technology, globally and in India, within the financial services industry.

There are various areas where technology has had a significant impact on the client experience and offerings of wealth management companies.

Client Meetings and Interactions

In the old days, wealth managers would physically meet investors to discuss their wealth management requirements. However, recently we see that a lot of investors are demanding more digital touchpoints, which offer more convenience.

Video calling and shared desktop features have been rapidly adopted by both investors and wealth managers to provide a seamless experience.

24*7 digital touchpoints available

Technology has also enabled companies to provide cost-effective digital touchpoint solutions to clients that enable easier and faster access to portfolio updates and various reports, such as capital gains reports and holding statements, as well as ease of doing transactions.

Features such as chatbots and WhatsApp-enabled touchpoints are helping in delivering a high-end client experience with a quick turnaround time.

Portfolio analytics and reporting

Data analytics has not only augmented the way wealth managers analyse investors' portfolios but has also reduced the time wealth managers spend on spreadsheets.

WealthTech also offers deeper insights into portfolios, which assist wealth managers in providing a more comprehensive and customized offering to investors that matches their expectations and risk appetite.

Artificial Intelligence and Machine Learning technologies, combined with big data analytics, are disrupting the wealth management space in a big way. Robo-advisory and quant-based product offerings are making strong headway into this space.

Ease of process and documentation

In the earlier days, the documentation and KYC process used to be a bottleneck, with processing times running into several days in some cases. Storage of documents is also challenging, as this requires safe storage space and documents are prone to damage and/or being misplaced.

With the advancement in technologies, we are now moving towards a fully digital and/or phy-gital mode of operations. While for some products like mutual funds the investment process is completely digital, for other products like PMS, AIF, structures, etc., the processes are moving towards a phy-gital mode.

The use of Aadhaar-based digital signatures and video KYC has made it possible to reduce the overall processing time significantly!

Summing up:

A shift towards holistic offerings rather than product-based offerings

The increasing young population is coming into the workforce, thereby creating a shift in focus towards new-age investors.

These new-age investors are not only tech-savvy and early adopters of technology but are also demanding more in terms of offerings.

With easy access to information and growing awareness, investors are looking for holistic offerings that encompass all their wealth management needs, rather than merely product-based offerings.

Incumbents in the wealth management space should, if they haven't already, incorporate technology as an integral part of their client offering to stay relevant.

For incumbents, it may prove cheaper and faster to enter tie-ups or partnerships, or to acquire new-age technology companies, to quickly come up the curve rather than building in-house technology solutions.

As the adage goes, the only constant in life is change; technology is a change that the wealth management domain needs to embrace!

(Disclaimer: The views/suggestions/advice expressed here in this article are solely by investment experts. Zee Business suggests its readers consult with their investment advisers before making any financial decision.)


Artificial intelligence and machine learning can detect and predict depression in University of Newcastle research – Newcastle Herald


Artificial intelligence is being used to detect and predict depression in people in a University of Newcastle research project that aims to improve quality of life.

Associate Professor Raymond Chiong's research team has developed machine-learning models that "detect signs of depression using social media posts with over 98 per cent accuracy".

"We have used machine learning to analyse social media posts such as tweets, journal entries, as well as environmental factors such as demographic, social and economic information about a person," Dr Chiong said.

This was done to detect if people were suffering from depression and to "predict their likelihood of suffering from depression in the future".

Dr Chiong said early detection of depression and poor mental health can "prevent self-harm, relapse or suicide, as well as improve the quality of life" of those affected.

"More than four million Australians suffer from depression every year and over 3000 die from suicide, with depression being a major risk factor," he said.

People often use social media to "express their feelings" and this can "identify multiple aspects of psychological concerns and human behaviour".

The next stage of the team's research will involve "detecting signs of depression by analysing physiological data collected from different kinds of devices".

"This should allow us to make more reliable and actionable predictions/detections of a person's mental health, even when all data sources are not available," he said.

"Data from wearable devices such as activity measurements, heart rate and sleeping patterns can be used for behaviour and physiological monitoring.

"By combining and analysing data from these sources, we can potentially get a very good picture of a person's mental health."

The goal is to make such tools available on a smartphone application, which will allow people to regularly monitor their mental health and seek help in the early stages of depression.

"Such an app will also build the ability of mental health and wellbeing providers to integrate digital technologies when monitoring their patients, by giving them a source of regular updates about the mental health status of their patients," he said.

"We want to use artificial intelligence and machine learning to develop tools that can detect signs of depression by utilising data from things we use on a regular basis, such as social media posts, or data from smartwatches or fitness devices."

The research team aims to develop smartphone apps that can be used by mental health professionals to better monitor their patients and help them provide more effective treatment.

The overarching goal of the research is to "improve quality of life".

"Depression can seriously impact one's enjoyment of life. It does not discriminate - anyone can suffer from it," Dr Chiong said.

"To live a high quality of life, one needs to be in good mental health. Good mental health helps people deal with environmental stressors, such as loss of a job or partner, illness and many other challenges in life."

The technology involved can help people monitor how well they are coping in challenging circumstances. This can encourage them to seek help from family, friends and professionals in the early stages of ailing mental health.

By doing so, professionals could help people prone to depression and other mental illnesses well before the situation becomes risky. "They could also use this technology to get more information about their patients, in addition to what they can glean during consultation," he said. This makes early interventions possible and "reduces the likelihood of self-harm or suicide attempts".

Depending on funding, the team plans to work on integrating people's health data from smart-fitness devices, such as heart rate, sleeping patterns and physical activity. The intention is to work with Hunter New England mental health professionals on this stage of the research.

"Following this, our goal is to develop a smartphone app that can not only be used by clinical practitioners, but also everyday individuals to monitor their mental health status in real time."

He said machine learning models had shown "great potential in terms of learning from training data and making highly accurate predictions". "For example, the application of machine learning/deep learning for image recognition is a major success story," he said.

Studies have shown that machine learning had "enormous potential in the field of mental health as well". "The fact that we were able to obtain more than 98 per cent accuracy in detecting signs of ill mental health demonstrates that there is great potential for machine learning in this field."

However, he said the technology does face challenges before it can be applied in real-world scenarios. "Some mobile apps have been developed that use machine learning to provide customised physical or other activities for their users, with the goal of helping them stay in good mental health," he said. "However, our proposed app will be one of the first that allows users to monitor their mental health status in real time, by analysing their social media posts and health measurements." Clinical practitioners could use this app to monitor their patients, but convincing them to use the technology will be one of the challenges.


December 19 2021 - 4:30PM

Detection: Dr Raymond Chiong said "we can potentially get a very good picture of a person's mental health" with artificial intelligence. Picture: Simone De Peak

Artificial intelligence is being used to detect and predict depression in people in a University of Newcastle research project that aims to improve quality of life.

Associate Professor Raymond Chiong's research team has developed machine-learning models that "detect signs of depression using social media posts with over 98 per cent accuracy".

"We have used machine learning to analyse social media posts such as tweets, journal entries, as well as environmental factors such as demographic, social and economic information about a person," Dr Chiong said.

This was done to detect if people were suffering from depression and to "predict their likelihood of suffering from depression in the future".

Dr Chiong said early detection of depression and poor mental health can "prevent self-harm, relapse or suicide, as well as improve the quality of life" of those affected.

"More than four million Australians suffer from depression every year and over 3000 die from suicide, with depression being a major risk factor," he said.

People often use social media to "express their feelings" and this can "identify multiple aspects of psychological concerns and human behaviour".

The next stage of the team's research will involve "detecting signs of depression by analysing physiological data collected from different kinds of devices".

"This should allow us to make more reliable and actionable predictions/detections of a person's mental health, even when all data sources are not available," he said.

"Data from wearable devices such as activity measurements, heart rate and sleeping patterns can be used for behaviour and physiological monitoring.

"By combining and analysing data from these sources, we can potentially get a very good picture of a person's mental health."

The goal is to make such tools available on a smartphone application, which will allow people to regularly monitor their mental health and seek help in the early stages of depression.

"Such an app will also build the ability of mental health and wellbeing providers to integrate digital technologies when monitoring their patients, by giving them a source of regular updates about the mental health status of their patients," he said.

"We want to use artificial intelligence and machine learning to develop tools that can detect signs of depression by utilising data from things we use on a regular basis, such as social media posts, or data from smartwatches or fitness devices."

The research team aims to develop smartphone apps that can be used by mental health professionals to better monitor their patients and help them provide more effective treatment.

The overarching goal of the research is to "improve quality of life".

"Depression can seriously impact one's enjoyment of life. It does not discriminate - anyone can suffer from it," Dr Chiong said.

"To live a high quality of life, one needs to be in good mental health. Good mental health helps people deal with environmental stressors, such as loss of a job or partner, illness and many other challenges in life."

The technology involved can help people monitor how well they are coping in challenging circumstances.

This can encourage them to seek help from family, friends and professionals in the early stages of ailing mental health.

By doing so, professionals could help people prone to depression and other mental illnesses well before the situation becomes risky.

"They could also use this technology to get more information about their patients, in addition to what they can glean during consultation," he said.

This makes early interventions possible and "reduces the likelihood of self-harm or suicide attempts".

Depending on funding, the team plans to work on integrating people's health data from smart-fitness devices, such as heart rate, sleeping patterns and physical activity.

The intention is to work with Hunter New England mental health professionals on this stage of the research.

"Following this, our goal is to develop a smartphone app that can not only be used by clinical practitioners, but also everyday individuals to monitor their mental health status in real time."

He said machine learning models had shown "great potential in terms of learning from training data and making highly accurate predictions".

"For example, the application of machine learning/deep learning for image recognition is a major success story," he said.

Studies have shown that machine learning has "enormous potential in the field of mental health as well".

"The fact that we were able to obtain more than 98 per cent accuracy in detecting signs of ill mental health demonstrates that there is great potential for machine learning in this field."

However, he said the technology does face challenges before it can be applied in real-world scenarios.

"Some mobile apps have been developed that use machine learning to provide customised physical or other activities for their users, with the goal of helping them stay in good mental health," he said.

"However, our proposed app will be one of the first that allows users to monitor their mental health status in real time, by analysing their social media posts and health measurements."

Clinical practitioners could use this app to monitor their patients, but convincing them to use the technology will be one of the challenges.

See the original post:
Artificial intelligence and machine learning can detect and predict depression in University of Newcastle research - Newcastle Herald

David Sinclair Supplements List Deep Dive – Updated 2021

Despite being 50 years of age, David looks much younger. Given that his focus is on tackling aging, and that he appears to exemplify this work, it's natural to ask: what's his secret?

David doesn't give health recommendations or endorse brands, but he does share his personal supplementation:

David's Daily Supplement Regimen:

After touching on David's diet & exercise routines below, we'll look in detail at his use of NMN, resveratrol and metformin.

David's Diet:

David's Exercise routine:

David's Lifestyle Choices:

David describes resveratrol and NMN as critical for the activation of sirtuin genes. Sirtuins play a key role in functions that help us to live longer, particularly DNA repair.

He describes resveratrol as the accelerator pedal for the sirtuin genes (increasing their activation), and NMN as the fuel. Without the fuel, resveratrol won't be as effective.

The reason resveratrol won't work effectively without NMN is that sirtuin activation requires youthful NAD levels, but by 50 years old, David says, we have about half the level of NAD we had in our 20s. NAD is a molecule essential to energy production in our cells.

Graph showing NAD+ decrease with age via PLOS paper

So in effect, you take resveratrol to increase activation of the sirtuin genes, and NMN to ensure the sirtuins have enough energy to work properly.

Below we'll dig deeper into the three longevity supplements David takes: NMN, resveratrol and metformin.

First we'll look at the sirtuin activator David takes: resveratrol.

Resveratrol is a molecule that's found (in small amounts) in the skin of foods like grapes, blueberries, raspberries, mulberries, and peanuts.

If you remember the hype some years ago around red wine being healthy, part of that was due to it containing tiny amounts of resveratrol.

Unfortunately, all food sources contain only tiny amounts, so we need a concentrated supplement in order to see benefits!

Resveratrol is thought to act as a caloric restriction mimetic, which activates beneficial cellular pathways. Studies have pointed to benefits such as:

Whilst David's resveratrol comes from excess product left over from lab experiments, not all of us have this luxury! Therefore we are forced to look online.

If you pop resveratrol into an Amazon search, you'll find a host of different options, many of them of (potentially) dubious quality.

The first thing to note is that we should be looking for trans-resveratrol, not cis-resveratrol.

From David's studies, cis-Resveratrol did not activate the sirtuin enzyme, but trans-Resveratrol did.

Next, the purity of the trans-resveratrol is important; we're looking for 98%+. David mentions this at 1:17:54 of his Ben Greenfield interview, noting that 50% purity can even give diarrhea, because there's other stuff that comes along with the molecule. He also confirms that Polygonum cuspidatum (Japanese Knotweed) is a good source for the resveratrol.

To get closer to the quality that David is likely taking, we can look at research published by an old company of his: Sirtris (who were sold to GSK for $720 million). In this paper they were doing clinical tests on a formulation of resveratrol they call SRT501, noting that:

Due to the poor aqueous solubility exhibited by resveratrol, digestive absorption is greatly influenced by drug dissolution rate. In an effort to increase absorption across the gastro-intestinal tract and thus systemically available parent compound, there has been considerable interest in the pharmaceutical manipulation of resveratrol. Decreasing the particle size of such chemicals can improve their rate of dissolution and thus their absorption. Therefore, the aim of this clinical study was to investigate whether consumption of SRT501, a micronized resveratrol formulation designed by Sirtris, a GSK Company is safe and generates measurable and pharmacologically active levels of parent agent in the circulation and in the liver.

That's a wordy quote from the paper, but in essence, they were testing a micronized resveratrol formulation against a non-micronized version. Their study found that levels of resveratrol in the blood were 3.6x greater when using the micronized formulation, and other markers they were comparing also improved.

We see this with other molecules too, where reducing particle size increases bioavailability; for example curcumin, whose absorption can be improved through micronization (for example Theracurmin). So this makes sense.

Micronized resveratrol options include:

Note: Whichever source of trans-resveratrol you take, according to David, you will increase its bio-availability if you take it with a fat source.

David takes it on an empty stomach in the morning, so mixes it with a bit of yogurt. However, it should also be possible to take it with a meal containing fat.

David mentions in his interview with Rhonda Patrick a few nuances around the storage of resveratrol:

David takes his resveratrol in the morning, mixed into a spoon of homemade yogurt (using the Bravo starter culture), in order to increase its bio-availability.

His studies showed that without fat, resveratrol absorption was 5x lower. So consumption with yogurt (or another fat source) is important. David clarified on the recent podcast with Rhonda Patrick that the NMN doesn't need to be taken with a fat source; he specifically mentions taking his NMN in capsules, downed with a glass of water in the morning.

Of course you don't need to make your own yogurt; a store-bought version will work adequately. However, if you're interested in making your own, expand the box below to learn more.

David has described his yogurt making process as follows:

David has specifically mentioned Bravo as the brand of yogurt culture he uses, for example at 1:12:28 of his interview on the Ben Greenfield podcast. Proponents of Bravo yogurt tout it as having a very high amount of gut-friendly bacteria compared to other similar products. Bravo seems like a fairly expensive product to me; however, one nice trick with yogurts is that you can make a new batch using a small amount from the old batch, removing the need to use fresh starter sachets each time.

In terms of further details on the yogurt making process, I've summarized some of the key points below:

This YouTube video gives a nice (but slow-paced) example of the homemade yogurt making process.

We talked above about the sirtuin activator resveratrol; now let's talk about NMN, which provides the fuel for the sirtuins to work.

NMN falls into a category of supplements, along with Nicotinamide Riboside (NR), referred to as "NAD boosters", which have become increasingly popular.

NAD is required for every cell of our body to help facilitate energy production. As discussed above, by age 50 you have about half as much NAD as at age 20!

The intention is that by supplementing precursors we can boost the cellular level of NAD closer to youthful levels.

There's little to no doubt in the research community that we need to restore NAD function; but the jury is still out on what the best method will be. Currently David has his eggs in the basket of NMN.

David's NMN powder comes from excess product left over from lab experiments. This is good to know, but doesn't help us when it comes to sourcing some. Below we will look at various possible buying options.

Potential considerations when buying include:

Assuming all the above are ok, the last crucial question is:

What I've done below is put some of the more highly reviewed options (within the USA) into a table, calculated the approximate price per gram, and added links to any 3rd party analysis certificates the companies display.

The above table provides a start, but for a detailed analysis table see this post, which also includes options for UK buyers.

Price per gram: The average price per gram appears to be around $4-$6. For products noticeably cheaper, it would be worth exercising caution around their authenticity.

Capsulating the powders: With the bulk powder versions of NMN above, you could put them into capsules yourself at home, using a capsule filling machine.

This emulates the method David uses to take his NMN; in capsules swallowed with a glass of water.

Using size 00 capsules, it takes 3 capsules to capsulate 1g of NMN. Depending on how tightly you fill them you may be marginally over or under 1g, but it won't be by much. With enough powder, most machines can fill 100 capsules at a time, which would be about a 33-day (~1 month) supply.
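As a quick sanity check of those numbers, here is a minimal sketch, assuming roughly a third of a gram of NMN per size 00 capsule (the 3-capsules-per-gram figure above):

```python
CAPSULE_G = 1.0 / 3          # approx. NMN per size-00 capsule (1 g per 3 capsules)
DAILY_DOSE_G = 1.0           # daily NMN dose discussed above

capsules_per_day = DAILY_DOSE_G / CAPSULE_G          # 3 capsules per day
days_per_batch = 100 * CAPSULE_G / DAILY_DOSE_G      # one 100-capsule machine run
print(round(capsules_per_day), round(days_per_batch))  # 3 33
```

So a single 100-capsule run works out to roughly a month of daily doses, matching the figure in the text.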

Testing: There are two main types of tests companies will do. The first is third-party testing on the purity of their NMN. The second is contaminant testing, for things such as heavy metals. It's a positive indicator if they can provide both.

Nicotinamide Riboside is a precursor to NAD, similar to NMN. David states in his book that his lab finds:

That being said, he isn't against NR; he's just more optimistic on NMN being the better molecule for raising NAD in the long run. He notes in a blog post on NMN & NR that:

The brand leader in sales of Nicotinamide Riboside is Chromadex's Niagen. Amongst Chromadex's scientific advisors is Charles Brenner, who first discovered NR and showed it could extend the life of yeast cells.

Niagen's recommended serving size is 300mg (1 capsule), which may be less efficient at raising NAD levels than 1g of NMN.

If we compare NR & NMN at a price per gram, they're more similar than I expected. Niagen works out at approximately $5.22/gram, and NMN is around $5-$6/gram depending on brand.
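The per-gram comparison is simple arithmetic, and a small helper makes it reusable. The bottle size and price below are illustrative assumptions chosen to reproduce the ~$5.22/gram figure above, not quoted retail prices:

```python
def price_per_gram(pack_price_usd, capsules, mg_per_capsule):
    """Cost per gram of active ingredient for a supplement pack."""
    grams = capsules * mg_per_capsule / 1000
    return pack_price_usd / grams

# Illustrative figures only: a 30-count bottle of 300 mg NR capsules
# at an assumed $46.98 works out to ~$5.22 per gram.
print(round(price_per_gram(46.98, 30, 300), 2))  # 5.22
```

Plugging in whatever pack size and price you find online lets you compare brands on equal footing, regardless of capsule size.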

In David's recent interview with Rhonda Patrick, he discussed details around storage, saying:

Since David explained this, I've come to learn that Nicotinamide Riboside in its chloride form, Nicotinamide Riboside Chloride (as sold by Niagen), is stabilized. This means that it doesn't need to be kept cold to have an adequate shelf life. More on that below.

Looking at the data online around stabilized NR, I found:

What I gather from that is that NR in its chloride form is stabilized. But like most edible products, cooling it does slow down the degradation that occurs over time. However, for most people the product isn't intended to sit on the shelf for a long time, and thus it will be consumed before the degradation becomes a problem.

There has been some concern in the field that consuming NR or NMN could decrease the body's methyl groups and lead to health problems. The dropdown section below looks in detail at that issue.

Methylation, which utilizes methyl groups (CH3), is an essential process for a host of critical functions in the body, including regulation of gene expression and the removal of waste products.

Consuming Niacin derivatives (which includes NR and NMN) will require the body to use up methyl groups in order to later degrade and excrete them. There has been some discussion and concern that by increasing the amount of methylation the body needs to do (through supplementation of NR/NMN), we might deplete the body of methyl groups needed to carry out essential processes.

David discussed this in his podcast with Paul Saladino (see the 44-minute mark), acknowledging that niacin derivatives (including NR/NMN) require methylation for excretion, but asserting that at this stage the idea of methyl depletion is anecdotal, and not something that has been shown in any NR/NMN studies.

Initially (circa 2019) David mentioned taking a supplement called betaine, also known as trimethylglycine. Then he moved to taking a combination of methyl folate plus methyl B12. This was all out of an abundance of caution, rather than due to any new research that backed up the risk of methyl depletion.

After taking the B12/folate supplement for a few months, in February 2020 David got some blood tests done, and found his B12 levels were double the recommended maximum, so he stopped taking it (source: David's Facebook post). He hasn't mentioned replacing it with anything since.

As Dr Brenner points out below, monitoring homocysteine levels (via blood test) is a proxy for methylation issues.

Methyl groups are primarily derived from nutrients in the diet, including: methionine (amino acid), folate (vitamin B9), choline, betaine, riboflavin (vitamin B2), pyridoxine (vitamin B6) and cobalamin (vitamin B12). For foods rich in these, see table 1 in this research paper.

A further source to add to this discussion is the research done by Chromadex. They hold a patent on nicotinamide riboside production, and make Niagen. In a tweet thread by their chief scientific adviser Charles Brenner, he explains that Chromadex took the potential risk of NR depleting methyl groups seriously. To test this they performed a randomized double-blind placebo-controlled trial administering 100, 300, or 1,000mg of NR over 56 days (study link). They used homocysteine levels as a proxy for methylation disturbance, and found no change to homocysteine in any of the dosage groups, including up to 1,000mg (see this image). If there were a shortage of methyl groups, they would have expected homocysteine levels to rise. It's worth noting the study used NR, not NMN.

In summary, current evidence for this issue is lacking, and as far as I can tell, David Sinclair is no longer taking any supplements to tackle potential methyl group depletion. However, if you wanted to be extra careful, Dr Charles Brenner (an NAD researcher) mentions elevated homocysteine in the blood can be a sign of lower methyl status, so one could get a blood test to check that.

Metformin is actually a relatively old drug, first discussed in medical literature in 1922, and studied in humans in the 1950s. It is derived from a plant called the French Lilac. Its primary use in medicine is for the treatment of diabetes, thanks to its ability to decrease blood glucose levels in patients.

Because Metformin has been used for years and has an established track record of safety, it is more attractive as a longevity drug. Molecules that are discovered today will need years of testing before they can even come close to rivalling the amount of data and patient-years accumulated by metformin.

It's thought the longevity benefits are at least in part derived from activation of the AMPK cellular pathway. This has a host of knock-on effects (visualized below), some of which are involved in beneficial processes like mediating inflammation and increasing autophagy (cellular cleanup).

Metformin is a prescription drug, and thus needs to be acquired through a doctor's prescription, at least in most countries. It isn't (yet) considered a drug that can help improve healthspan or lifespan, and so you may need to find a forward-thinking doctor if you want it prescribed for general health. Typically doctors only prescribe Metformin for blood sugar control issues (type 2 diabetes).

Typically Metformin is taken daily, both by diabetics and by people using it for healthspan extension. However, in the latest interview with Joe Rogan, they discussed a 2018 paper which showed metformin inhibits mitochondrial adaptations to aerobic exercise training. David explained that this makes sense: it's exactly metformin's inhibition of mitochondrial function that leads to some of the health benefits. Specifically, it causes the cell to think it's in a nutrient-restricted state, and it turns on pathways typically reserved for times of scarcity. The function of these pathways is hypothesized to lead to better healthspan outcomes.

When not exercising, which is most days for David, he opts to take 0.5g of metformin in the morning and 0.5g in the evening (for the source, see 1:16:45 of his Ivy Lecture, which supersedes what he said in his book). Then on exercise days, he opts not to take it at all. For similar reasons he also skips resveratrol on exercise days (source: see the last paragraph of section 1, "Get Moving", on David's blog post).

This is viable for David, who exercises vigorously in the order of 1-2x per week, but for someone training often, this might be impractical. At that point it comes down to whether the benefits of metformin/resveratrol outweigh the (potential) small impact on recovery.

In a Reddit AMA (link) David was asked whether he would take Berberine if he didn't have access to Metformin. He responds by saying he would likely take Berberine.

Berberine is interesting to many people because it has similar properties to metformin, but it doesn't require a doctor's prescription. In common with metformin, it has the ability to:

Berberine dosage in treating diabetes is not entirely dissimilar to Metformin's. For example, in this study the patients took 500mg of Berberine 3x per day, while in this study they took 850mg of Metformin 3x per day. David, as noted, takes 500mg of Metformin 2x per day.
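Converting those schedules to daily totals makes the comparison explicit. This is a trivial sketch using only the dosages quoted above, and is not dosing advice:

```python
# Daily totals implied by the dosing schedules mentioned above.
berberine_study_g = 500 * 3 / 1000   # 500 mg, three times a day
metformin_study_g = 850 * 3 / 1000   # 850 mg, three times a day
david_metformin_g = 500 * 2 / 1000   # 500 mg, twice a day

print(berberine_study_g, metformin_study_g, david_metformin_g)  # 1.5 2.55 1.0
```

So the berberine study's total daily dose sits between David's metformin dose and the metformin study's.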

Both compounds can induce gastrointestinal distress, so it's common to start off on lower dosages and gradually increase to the desired amount. This gives the gut a chance to adapt, and allows the user to back off the dosage if gastrointestinal distress occurs.


David Sinclair | The Sinclair Lab

David A. Sinclair, Ph.D., A.O. is a Professor in the Department of Genetics and co-Director of the Paul F. Glenn Center for Biology of Aging Research at Harvard Medical School. He is best known for his work on understanding why we age and how to slow its effects. He obtained his Ph.D. in Molecular Genetics at the University of New South Wales, Sydney in 1995. He worked as a postdoctoral researcher at M.I.T. with Dr. Leonard Guarente, where he co-discovered a cause of aging for yeast as well as the role of Sir2 in epigenetic changes driven by genome instability. In 1999 he was recruited to Harvard Medical School, where he has been teaching aging biology and translational medicine for aging for the past 16 years. His research has been primarily focused on the sirtuins, protein-modifying enzymes that respond to changing NAD+ levels and to caloric restriction (CR), with associated interests in chromatin, energy metabolism, mitochondria, learning and memory, neurodegeneration, and cancer. The Sinclair lab was the first to identify a role for NAD+ biosynthesis in regulation of lifespan and first showed that sirtuins are involved in CR in mammals. They first identified small molecules that activate SIRT1, such as resveratrol, and studied how they improve metabolic function using a combination of genetic, enzymological, biophysical and pharmacological approaches. They recently showed that natural and synthetic activators require SIRT1 to mediate their in vivo effects in muscle and identified a structured activation domain. They demonstrated that miscommunication between the mitochondrial and nuclear genomes is a cause of age-related physiological decline and that relocalization of chromatin factors in response to DNA breaks may be a cause of aging.

Dr. Sinclair is co-founder of several biotechnology companies (Sirtris, Ovascience, Genocea, Cohbar, MetroBiotech, ArcBio, Liberty Biosecurity) and is on the boards of several others. He is also co-founder and co-chief editor of the journal Aging. His work is featured in five books, two documentary movies, 60 Minutes, Morgan Freeman's Through the Wormhole and other media. He is an inventor on 35 patents and has received more than 25 awards and honors including the CSL Prize, The Australian Commonwealth Prize, Thompson Prize, Helen Hay Whitney Postdoctoral Award, Charles Hood Fellowship, Leukemia Society Fellowship, Ludwig Scholarship, Harvard-Armenise Fellowship, American Association for Aging Research Fellowship, Nathan Shock Award from the National Institutes of Health, Ellison Medical Foundation Junior and Senior Scholar Awards, Merck Prize, Genzyme Outstanding Achievement in Biomedical Science Award, Bio-Innovator Award, David Murdock-Dole Lectureship, Fisher Honorary Lectureship, Les Lazarus Lectureship, Australian Medical Research Medal, The Frontiers in Aging and Regeneration Award, Top 100 Australian Innovators, and TIME magazine's list of the 100 most influential people in the world.

David A. Sinclair's Past and Present Advisory Roles, Board Positions, Funding Sources, Licensed Inventions, Investments, and Invited Talks.


GeoMol: New deep learning model to predict the 3D shapes of a molecule – Tech Explorist

Dealing with molecules in their natural 3D structure is essential in cheminformatics or computational drug discovery. These 3D conformations determine the biological, chemical, and physical properties.

Determining the 3D shapes of a molecule helps understand how it will attach to specific protein surfaces. But that's not an easy task; it is also a time-consuming and expensive process.

MIT scientists have come up with a solution to ease this task. Using machine learning, they have created a deep learning model called GeoMol that predicts molecules' 3D shapes. As molecules are generally represented as small graphs, GeoMol works from a 2D graph of the molecular structure.

Unlike other machine learning models, GeoMol processes molecules in only seconds and performs better. It also determines the 3D structure around each bond individually.

Usually, pharmaceutical companies need to test several molecules in lab experiments. According to scientists, the GeoMol could help those companies accelerate the drug discovery process by diminishing the need for testing molecules.

Lagnajit Pattanaik, a graduate student in the Department of Chemical Engineering and co-lead author of the paper, said, "When you are thinking about how these structures move in 3D space, there are really only certain parts of the molecule that are flexible, these rotatable bonds. One of the key innovations of our work is that we think about modeling conformational flexibility like a chemical engineer would. It is really about trying to predict the potential distribution of rotatable bonds in the structure."

GeoMol leverages a recent tool in deep learning called a message passing neural network. It is specially designed to operate on graphs. By adapting a message passing neural network, scientists could predict specific elements of molecular geometry.
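To make the message-passing idea concrete, here is a heavily simplified, standard-library-only sketch of one message-passing round on a tiny molecular graph. A real MPNN such as GeoMol's uses learned neural update functions; here the "update" is just an average, and the graph and features are invented for illustration:

```python
# A toy message-passing step on a molecular graph (the heavy atoms of
# ethanol: C-C-O). Node features are invented one-hot atom types; a real
# MPNN would apply a learned neural network instead of a plain average.
graph = {0: [1], 1: [0, 2], 2: [1]}                       # adjacency list
features = {0: [1.0, 0.0], 1: [1.0, 0.0], 2: [0.0, 1.0]}  # [is_C, is_O]

def message_pass(graph, features):
    """One round: each node averages its own and its neighbours' features."""
    new = {}
    for node, nbrs in graph.items():
        msgs = [features[n] for n in nbrs] + [features[node]]
        new[node] = [sum(col) / len(msgs) for col in zip(*msgs)]
    return new

updated = message_pass(graph, features)
print(updated[1])  # the central carbon now mixes in the oxygen's feature
```

Stacking several such rounds lets each atom's representation absorb information from progressively larger neighbourhoods, which is what allows the network to predict geometry-dependent quantities like bond angles and torsions.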

The model, at first, predicts the lengths of the chemical bonds between atoms and the angles of those individual bonds. The arrangement and connection of atoms determine which bonds can rotate.

It then predicts the structure of each atom's surroundings individually. Later, it assembles neighboring rotatable bonds by computing the torsion angles and then aligning them.

Pattanaik said, "Here, the rotatable bonds can take a huge range of possible values. So, using these message passing neural networks allows us to capture a lot of the local and global environments that influence that prediction. The rotatable bond can take multiple values, and we want our prediction to be able to reflect that underlying distribution."

As mentioned above, the model determines each bond's structure individually; it explicitly defines chirality during the prediction process. Hence, there is no need for after-the-fact optimization.

Octavian-Eugen Ganea, a postdoc in the Computer Science and Artificial Intelligence Laboratory (CSAIL), said, "What we can do now is take our model and connect it end-to-end with a model that predicts this attachment to specific protein surfaces. Our model is not a separate pipeline. It is very easy to integrate with other deep learning models."

Scientists used a dataset of molecules and the likely 3D shapes they could take to test their model. Comparing GeoMol against other methods and models, they evaluated how well each captured the likely 3D structures, and found that GeoMol outperformed the other models on all tested metrics.

Pattanaik said, "We found that our model is super-fast, which was exciting to see. And importantly, as you add more rotatable bonds, you expect these algorithms to slow down significantly. But we didn't see that. The speed scales nicely with the number of rotatable bonds, which is promising for using these types of models down the line, especially for applications where you are trying to predict the 3D structures inside these proteins quickly."

Scientists are planning to use GeoMol in high-throughput virtual screening. This would help them determine small molecule structures that interact with a specific protein.


VA Aims To Reduce Administrative Tasks With AI, Machine Learning – Nextgov

Officials at the Department of Veterans Affairs are looking to increase efficiency and optimize their clinicians' professional capabilities, using advanced artificial intelligence and machine learning technologies.

In a November presolicitation, the VA seeks to gauge market readiness for advanced healthcare device manufacturing, ranging from prosthetic solutions and surgical instruments to personalized digital health assistant technology, as well as artificial intelligence and machine learning capabilities.

Dubbed Accelerating VA Innovation and Learning, or AVAIL, the program is looking to supplement and support agency health care operations, according to Amanda Purnell, an innovation specialist with the VA.

"What we are trying to do is utilize AI and machine learning to remove the administrative burden of tasks," she told Nextgov.

The technology requested by the department will be tailored to areas where a computer can do a better, more efficient job than a human, and thereby give people back time to complete demanding tasks that require human judgement.

Some of the areas where the AI and machine learning technology could be implemented include surgical preplanning, manufacturing submissions, and 3D printing, along with injection molding to produce plastic medical devices and other equipment.

Purnell also said that the VA is looking for technology that can handle the bulk of document analyses. Using machine learning and natural language processing to scan and detect patterns in medical images, such as CT scans, MRIs and dermatology scans, is one of the ways the VA aims to digitize its administrative workload.

Staff at the VA are currently tasked with looking through faxes and other clinical data to route it to the right place. AVAIL would use natural language processing to manage these operations, adding human review when necessary.

Purnell said that the forthcoming technology would emphasize streamlining processes that are better and faster done by machines and allowing humans to do something that is more kind of human-meaningful, and also allowing clinicians to operate to the top of their license.

She noted that machines are highly adept at scanning and analyzing images with AI. The VA procedure would likely have the AI technology to do a preliminary scan, followed by a human clinician to make their expert opinion based on results.

With machine learning handling the bulk of these processes along with other manufacturing and designing needs, clinicians and surgeons within the VA could focus more on applying their medical and surgical skills. Purnell used the example of a prosthetist getting more time to foster a human connection with a client rather than oversee other health care devices and manufacturing details.

"It is making sure humans are used to their best advantage, and that we're using technology to augment the human experience," she said.

The AVAIL program also stands to improve the ongoing modernization effort of the VAs beleaguered electronic health record (EHR) system, which has suffered deployment hiccups thanks to difficult interfaces and budget constraints.

The AI and machine learning technology outlined in the presolicitation could also support new EHR infrastructure and focus on an improved user experience, mainly with an improved platform interface and other accessibility features.

Purnell underscored that having AI manage form processing and data sharing capabilities, including veteran claims and benefits, is another beneficial use case.

"We're alleviating that admin burden and increasing the experience both for veterans and our clinicians, in that veterans are getting more facetime with our clinicians and clinicians are doing more of what they are trained to do," Purnell said.


3D Information and Biomedicine: How Artificial Intelligence/Machine Intelligence will contribute to Cancer Patient Care and Vaccine Design – Newswise

Newswise, New Brunswick, N.J., December 7, 2021. Artificial Intelligence/Machine Learning (AI/ML) is the development of computer systems that are able to perform tasks that would normally require human intelligence. AI/ML is used by people every day, for example, while using smart home devices or digital voice assistants. The use of AI/ML is also rapidly growing in biomedical research and health care. In a recent viewpoint paper, investigators at Rutgers Cancer Institute of New Jersey and Rutgers New Jersey Medical School (NJMS) explored how AI/ML will complement existing approaches focused on genome-protein sequence information, including identifying mutations in human tumors.

Stephen K. Burley, MD, DPhil, co-program leader of the Cancer Pharmacology Research Program at Rutgers Cancer Institute, and university professor and Henry Rutgers Chair and Director of the Institute for Quantitative Biomedicine at Rutgers University, along with Renata Pasqualini, PhD, resident member of Rutgers Cancer Institute and chief of the Division of Cancer Biology, Department of Radiation Oncology at Rutgers NJMS, and Wadih Arap, MD, PhD, director of Rutgers Cancer Institute at University Hospital, co-program leader of the Clinical Investigations and Precision Therapeutics Research Program at Rutgers Cancer Institute, and chief of the Division of Hematology/Oncology, Department of Medicine at Rutgers NJMS, share more insight on the paper, published online December 2 in The New England Journal of Medicine (DOI: 10.1056/NEJMcibr2113027).

What is the potential of AI/ML in cancer research and clinical practice?

We foresee that the most immediate applications of computed structure modeling will focus on point mutations detected in human tumors (germline or somatic). Computed structure models of frequently mutated oncoproteins (e.g., Epidermal Growth Factor Receptor, EGFR, shown in Figure 2B of the paper) are already being used to help identify cancer-driver genes, enable therapeutics discovery, explain drug resistance, and inform treatment plans.

What are some of the biggest challenges for AI/ML in healthcare?

In the broadest terms, the essential challenges would likely include AI/ML research and development, technology validation, efficient/equitable deployment and coherent integration into the existing healthcare systems, and inherent issues related to the regulatory environment along with complex medical reimbursement issues.

How will this technology have an impact on vaccine design, especially with regard to SARS-CoV-2?

Going beyond 3D structure knowledge across entire proteomes (parts lists for biology and biomedicine), accurate computational modeling will enable analyses of clinically significant genetic changes manifest in 3D by individual proteins. For example, the SARS-CoV-2 Delta Variant of Concern spike protein carries 13 amino acid changes. Experimentally determined 3D structures of SARS-CoV-2 spike protein variants bound to various antibodies, all available open access from the Protein Data Bank (rcsb.org), can be used with computed structure models of new Variant of Concern spike proteins to understand the potential impact of other amino acid changes. In currently ongoing work (as yet unpublished), we have used AI/ML approaches to understand the structure-function relationship of SARS-CoV-2 Omicron Variant of Concern spike protein (with more than 30 amino acid changes), illustrating practical and immediate application of this emerging technology.
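At its simplest, comparing a variant protein sequence against a reference is a position-by-position diff over an aligned pair of sequences. The sketch below uses invented toy fragments, not real spike protein data, and ignores insertions and deletions that real variant analysis must handle:

```python
def substitutions(ref: str, variant: str) -> list[tuple[int, str, str]]:
    """Return (position, ref_residue, variant_residue) for each point change.

    Assumes the two sequences are already aligned and equal in length.
    """
    if len(ref) != len(variant):
        raise ValueError("sequences must be aligned to equal length")
    return [(i, a, b) for i, (a, b) in enumerate(zip(ref, variant)) if a != b]

# Toy fragments for illustration only -- not real spike sequences.
ref     = "MFVFLVLLPLVSSQ"
variant = "MFVFLVLLALVSSR"
changes = substitutions(ref, variant)
```

A real pipeline would align the sequences first and map each substitution back to a residue on the 3D structure model.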

What is the next step to better utilizing AI/ML in cancer research?

Development and equitable dissemination of user-friendly tools that cancer biologists can use to understand the three-dimensional structures of proteins implicated in human cancers and how somatic mutations affect structure and function, leading to uncontrolled tumor cell proliferation.

###


Machines that see the world more like humans do – MIT News

Computer vision systems sometimes make inferences about a scene that fly in the face of common sense. For example, if a robot were processing a scene of a dinner table, it might completely ignore a bowl that is visible to any human observer, estimate that a plate is floating above the table, or misperceive a fork to be penetrating a bowl rather than leaning against it.

Move that computer vision system to a self-driving car and the stakes become much higher – for example, such systems have failed to detect emergency vehicles and pedestrians crossing the street.

To overcome these errors, MIT researchers have developed a framework that helps machines see the world more like humans do. Their new artificial intelligence system for analyzing scenes learns to perceive real-world objects from just a few images, and perceives scenes in terms of these learned objects.

The researchers built the framework using probabilistic programming, an AI approach that enables the system to cross-check detected objects against input data, to see if the images recorded from a camera are a likely match to any candidate scene. Probabilistic inference allows the system to infer whether mismatches are likely due to noise or to errors in the scene interpretation that need to be corrected by further processing.
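The cross-check described above can be illustrated with a toy example: score each candidate scene by how likely the observed sensor readings are under it, assuming independent Gaussian noise, and keep the scene that best explains the data. This is only a sketch of the idea, not the actual 3DP3 implementation; the depth readings, candidate scenes, and noise level are all made up:

```python
import math

def log_likelihood(observed, predicted, noise_sigma=0.02):
    """Log-probability of the observed depth readings given a candidate
    scene's predicted readings, under independent Gaussian sensor noise."""
    const = math.log(noise_sigma * math.sqrt(2 * math.pi))
    return sum(-0.5 * ((o - p) / noise_sigma) ** 2 - const
               for o, p in zip(observed, predicted))

# Two candidate interpretations of the same (toy) 1-D depth scan:
observed      = [1.00, 1.01, 0.80, 0.80, 1.00]
bowl_on_table = [1.00, 1.00, 0.80, 0.80, 1.00]  # bowl resting on the table
bowl_floating = [1.00, 1.00, 0.70, 0.70, 1.00]  # bowl hovering above it

# Probabilistic inference: prefer the scene that best explains the data.
best = max([bowl_on_table, bowl_floating],
           key=lambda scene: log_likelihood(observed, scene))
```

A mismatch that Gaussian noise cannot plausibly explain signals an interpretation error that needs further processing, which is the "common-sense safeguard" at work.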

This common-sense safeguard allows the system to detect and correct many errors that plague the deep-learning approaches that have also been used for computer vision. Probabilistic programming also makes it possible to infer probable contact relationships between objects in the scene, and use common-sense reasoning about these contacts to infer more accurate positions for objects.

"If you don't know about the contact relationships, then you could say that an object is floating above the table – that would be a valid explanation. As humans, it is obvious to us that this is physically unrealistic and the object resting on top of the table is a more likely pose of the object. Because our reasoning system is aware of this sort of knowledge, it can infer more accurate poses. That is a key insight of this work," says lead author Nishad Gothoskar, an electrical engineering and computer science (EECS) PhD student with the Probabilistic Computing Project.

In addition to improving the safety of self-driving cars, this work could enhance the performance of computer perception systems that must interpret complicated arrangements of objects, like a robot tasked with cleaning a cluttered kitchen.

Gothoskar's co-authors include recent EECS PhD graduate Marco Cusumano-Towner; research engineer Ben Zinberg; visiting student Matin Ghavamizadeh; Falk Pollok, a software engineer in the MIT-IBM Watson AI Lab; recent EECS master's graduate Austin Garrett; Dan Gutfreund, a principal investigator in the MIT-IBM Watson AI Lab; Joshua B. Tenenbaum, the Paul E. Newton Career Development Professor of Cognitive Science and Computation in the Department of Brain and Cognitive Sciences (BCS) and a member of the Computer Science and Artificial Intelligence Laboratory; and senior author Vikash K. Mansinghka, principal research scientist and leader of the Probabilistic Computing Project in BCS. The research is being presented at the Conference on Neural Information Processing Systems in December.

A blast from the past

To develop the system, called 3D Scene Perception via Probabilistic Programming (3DP3), the researchers drew on a concept from the early days of AI research, which is that computer vision can be thought of as the "inverse" of computer graphics.

Computer graphics focuses on generating images based on the representation of a scene; computer vision can be seen as the inverse of this process. Gothoskar and his collaborators made this technique more learnable and scalable by incorporating it into a framework built using probabilistic programming.

"Probabilistic programming allows us to write down our knowledge about some aspects of the world in a way a computer can interpret, but at the same time, it allows us to express what we don't know, the uncertainty. So, the system is able to automatically learn from data and also automatically detect when the rules don't hold," Cusumano-Towner explains.

In this case, the model is encoded with prior knowledge about 3D scenes. For instance, 3DP3 knows that scenes are composed of different objects, and that these objects often lie flat on top of each other – but they may not always be in such simple relationships. This enables the model to reason about a scene with more common sense.

Learning shapes and scenes

To analyze an image of a scene, 3DP3 first learns about the objects in that scene. After being shown only five images of an object, each taken from a different angle, 3DP3 learns the object's shape and estimates the volume it would occupy in space.

"If I show you an object from five different perspectives, you can build a pretty good representation of that object. You'd understand its color, its shape, and you'd be able to recognize that object in many different scenes," Gothoskar says.

Mansinghka adds, "This is way less data than deep-learning approaches. For example, the Dense Fusion neural object detection system requires thousands of training examples for each object type. In contrast, 3DP3 only requires a few images per object, and reports uncertainty about the parts of each object's shape that it doesn't know."

The 3DP3 system generates a graph to represent the scene, where each object is a node and the lines that connect the nodes indicate which objects are in contact with one another. This enables 3DP3 to produce a more accurate estimation of how the objects are arranged. (Deep-learning approaches rely on depth images to estimate object poses, but these methods don't produce a graph structure of contact relationships, so their estimations are less accurate.)
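The contact graph described above can be sketched with plain data structures. Everything here – the object names, the heights, and the crude "bottom face meets top face" contact test – is hypothetical and far simpler than the paper's full 3D reasoning:

```python
def in_contact(a, b, tol=0.01):
    """Crude contact test: object a rests on b if a's bottom face is
    within `tol` of b's top face (each object is a dict of z-extents)."""
    return abs(a["z_bottom"] - b["z_top"]) <= tol

# Toy scene: estimated vertical extents of three detected objects.
objects = {
    "table": {"z_bottom": 0.00, "z_top": 0.75},
    "plate": {"z_bottom": 0.75, "z_top": 0.78},
    "bowl":  {"z_bottom": 0.90, "z_top": 1.00},  # estimated as floating
}

# Edges of the scene graph: which object rests on which.
contacts = [
    (upper, lower)
    for upper in objects for lower in objects
    if upper != lower and in_contact(objects[upper], objects[lower])
]
```

In this toy scene the plate-on-table edge is found, while the bowl ends up with no contact edge at all – exactly the physically implausible "floating object" situation that the system's common-sense reasoning would flag and correct.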

Outperforming baseline models

The researchers compared 3DP3 with several deep-learning systems, all tasked with estimating the poses of 3D objects in a scene.

In nearly all instances, 3DP3 generated more accurate poses than other models and performed far better when some objects were partially obstructing others. And 3DP3 only needed to see five images of each object, while each of the baseline models it outperformed needed thousands of images for training.

When used in conjunction with another model, 3DP3 was able to improve its accuracy. For instance, a deep-learning model might predict that a bowl is floating slightly above a table, but because 3DP3 has knowledge of the contact relationships and can see that this is an unlikely configuration, it is able to make a correction by aligning the bowl with the table.

"I found it surprising to see how large the errors from deep learning could sometimes be – producing scene representations where objects really didn't match with what people would perceive. I also found it surprising that only a little bit of model-based inference in our causal probabilistic program was enough to detect and fix these errors. Of course, there is still a long way to go to make it fast and robust enough for challenging real-time vision systems – but for the first time, we're seeing probabilistic programming and structured causal models improving robustness over deep learning on hard 3D vision benchmarks," Mansinghka says.

In the future, the researchers would like to push the system further so it can learn about an object from a single image, or a single frame in a movie, and then be able to detect that object robustly in different scenes. They would also like to explore the use of 3DP3 to gather training data for a neural network. It is often difficult for humans to manually label images with 3D geometry, so 3DP3 could be used to generate more complex image labels.

"The 3DP3 system combines low-fidelity graphics modeling with common-sense reasoning to correct large scene interpretation errors made by deep learning neural nets. This type of approach could have broad applicability as it addresses important failure modes of deep learning. The MIT researchers' accomplishment also shows how probabilistic programming technology previously developed under DARPA's Probabilistic Programming for Advancing Machine Learning (PPAML) program can be applied to solve central problems of common-sense AI under DARPA's current Machine Common Sense (MCS) program," says Matt Turek, DARPA Program Manager for the Machine Common Sense Program, who was not involved in this research, though the program partially funded the study.

Additional funders include the Singapore Defense Science and Technology Agency collaboration with the MIT Schwarzman College of Computing, Intel's Probabilistic Computing Center, the MIT-IBM Watson AI Lab, the Aphorism Foundation, and the Siegel Family Foundation.


METiS Therapeutics Launches With $86 Million Series A Financing to Transform Drug Discovery and Delivery With Machine Learning and Artificial…

Dec. 7, 2021 11:00 UTC

CAMBRIDGE, Mass.--(BUSINESS WIRE)-- METiS Therapeutics debuts today with $86 million Series A financing to harness artificial intelligence (AI) and machine learning to redefine drug discovery and delivery and develop optimal therapies for patients with serious diseases. PICC PE and China Life led the financing and were joined by Sequoia Capital China, Lightspeed, 5Y Capital, FreeS Fund and CMBI Zhaoxin Wuji Fund. The financing will be used to advance the company's pipeline of novel assets with high therapeutic potential and the continued development of its AI-driven drug discovery and delivery platform.

"METiS is well-positioned to change the drug discovery and delivery landscape with the creation of a proprietary predictive AI platform. We leverage machine learning, AI and quantum simulation to uncover novel drug candidates and to transform drug discovery and development, ultimately bringing the best therapies to patients in need," said Chris Lai, CEO and Founder, METiS Therapeutics. "We are fortunate that our world-class roster of investors believes in our vision and today's news represents the first of many significant milestones that we will be accomplishing throughout the next year."

The METiS platform (AiTEM) combines state-of-the-art AI data-driven algorithms, mechanism-driven quantum mechanics and molecular dynamics simulations to calculate Active Pharmaceutical Ingredient (API) properties, elucidate API-target and API-excipient interactions, and predict chemical, physical and pharmacokinetic properties of small molecule and nucleic acid therapeutics in specific microenvironments. This enables efficient lead optimization, candidate selection and formulation design. Founded by a team of MIT researchers, serial entrepreneurs and biotech industry veterans, METiS develops and in-licenses novel assets with high therapeutic potential that could benefit from its data-driven platform.

About METiS Therapeutics METiS Therapeutics is a biotechnology company that aims to drive best-in-class therapies in a wide range of disease areas by integrating drug discovery and delivery with AI, machine learning, and quantum simulation. To learn more, visit http://www.metistx.com/.

View source version on businesswire.com: https://www.businesswire.com/news/home/20211207005197/en/


Human Rights Documentation In The Digital Age: Why Machine Learning Isnt A Silver Bullet – Forbes

When the Syrian uprising started nearly 10 years ago, videos taken by citizens of attacks against them – such as chemical and barrel bomb strikes – started appearing on social media. While international human rights investigators couldn't get into the country, people on the ground documented and shared what was happening. Yet soon, videos and pictures of war atrocities were deleted from social media platforms – a pattern that has continued to date. Ashoka Fellow Hadi al-Khatib, founder of the Syrian Archive and Mnemonic, works to save these audiovisual documents so they are available as evidence for lawyers, human rights investigators, historians, prosecutors, and journalists. In the wake of the Facebook Leaks, which are drawing needed attention to the topic of content moderation and human rights, Ashoka's Konstanze Frischen caught up with Hadi.

Hadi al-Khatib, founder of Mnemonic and the Syrian Archive, warns against an over-reliance on machine learning for online content moderation.

Konstanze Frischen: Hadi, you verify and save images and videos that show potential human rights violations, and ensure that prosecutors and journalists can use them later to investigate crimes against humanity. How and why did you start this work?

Hadi al-Khatib: I come from Hama, a city to the north of Damascus in Syria, where the first uprising against the Syrian government happened in 1982, and thousands of people died at the hands of the Syrian military. Unfortunately, at the time, there was very little documentation about what happened. Growing up, when my family spoke about these incidents, they would speak very quietly, or avoid the topic when I asked them about it. They would say, "Be careful, even the walls have ears." In 2011, during the second big uprising against the Syrian government, the situation was quite different. We immediately saw a huge scale of audio-visual documentation on social media – videos and photos captured by people witnessing the peaceful protests first, and then the violence against protesters. People wanted to make sure the crimes that they were witnessing were documented, in contrast to what happened in Hama in 1982. My work is to ensure that this documentation captured by people who risked their lives is not lost and is accessible in the future.

Frischen: With people publishing this on social media on a very large scale, many people might assume It's all out there, so why do I need someone else to archive it?

al-Khatib: Yes, good question. When we work with journalists, photographers, and citizens from around the world, most of them do think of social media as a place where they can safely archive their materials. They think, "We have the archive. It's on social media, Dropbox, or Google Drive." But it's not safe there – once this media is uploaded to social media platforms, we lose control of it. From March 2011 until I founded the Syrian Archive in 2014, footage got deleted on a very large scale – and it still is, because of social media platforms' content moderation policies. It got worse after 2017, when social media companies like YouTube started to use machine learning to detect content that shows violence automatically.

Frischen: Why do you think the materials get removed from social media platforms?

al-Khatib: Because the machine learning algorithm they have developed doesn't really differentiate between a video that shows extremist content or graphic content, and a video that documents a human rights violation. They all get detected automatically and removed.

Frischen: Though it's well intended, machine learning can't handle the complexity?

al-Khatib: Exactly. The use of machine learning is very dangerous for human rights documentation, not just in Syria, but around the world. Social media platforms would need to invest more in human intelligence, not just machine intelligence, to make sound decisions.

Frischen: The Syrian Archive, one of the organizations you founded, has archived over 3.5 million records of digital content. How does that work in practice? How do you balance machine learning and manual work?

al-Khatib: The first step is to monitor specific sources, locations, and keywords around current or historical events. Once we discover content, we make sure that we preserve it automatically, as fast as possible. This is always our priority. Each of the 3.5 million records we have collected comes from social media platforms, websites, or apps like Telegram. We archive them all in a way that provides availability, accessibility, and authenticity for these records. We use machine learning with the project VFRAME to help us discover what we have in the archives that is most relevant for human rights investigations, journalism reporting, or legal case building within this large pool of media. Then, we manually verify the location, date, and time. We also verify any kind of objects we can see in the video, and make sure we are able to link it with other pieces of archived media and corroborate it with other types of evidence, to construct a verified incident. We also use blockchain to timestamp the materials, with a third-party company called Enigio. We want to provide long-term, safe accessibility to the documents, and authenticate them in a way that proves we haven't tampered with the material during the archival process.
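The authentication step al-Khatib describes rests on content fingerprinting: hash the media at ingest, so any later bit-level change is detectable. The actual system anchors these fingerprints on a blockchain via Enigio; the sketch below shows only the core idea with a standard-library hash, and the URL and record fields are illustrative, not Mnemonic's real schema:

```python
import hashlib
from datetime import datetime, timezone

def archive_record(content: bytes, source_url: str) -> dict:
    """Fingerprint a piece of media at ingest time. Any later change to
    the file would produce a different digest, exposing tampering."""
    return {
        "sha256": hashlib.sha256(content).hexdigest(),
        "source": source_url,
        "archived_at": datetime.now(timezone.utc).isoformat(),
    }

# Hypothetical ingest of one piece of footage:
record = archive_record(b"<video bytes>", "https://example.org/witness-video")
```

Timestamping that digest with an independent third party is what lets the archive later prove the material existed in exactly this form at archival time.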

Frischen: Machine learning is great for analyzing large data sets, but then human judgment and a deep knowledge of history, politics, and the region must be brought to bear?

al-Khatib: Exactly. Knowledge of context, language, and history is vital for verification. This is all a manual process where researchers use certain tools and techniques to verify the location, date, time of every record, and make sure that it's clustered together into incidents. Those incidents are also clustered together into collections to form a bigger picture understanding of the pattern of violence and the impact it has on people.

Frischen: These findings can in turn be leveraged: You feed the results of your investigations to governments and prosecutors. What has the impact been?

al-Khatib: We realize that any legal accountability is going to take a long time. One of the main legal cases we are working on right now is about the use of chemical weapons in Syria. We focus on two incidents in two locations in Syria, in Eastern Ghouta (2013), and in Khan Sheikhoun (2017), where we saw the biggest uses of chemical weapons (i.e. Sarin gas) in recent history. We submitted a legal complaint to the German, French and Swedish prosecutors in collaboration with the Syrian Center for Media and Freedom of Expression, Civil Rights Defenders, and the Open Society Justice Initiative. Part of that submission was media evidence verified and collected by the Syrian Archive. Our investigations into the Syrian chemical supply chain resulted in the conviction of three Belgian firms who violated European Union sanctions, an internal audit of the Belgian customs system, parliamentary inquiries in multiple countries, a change in Swiss export laws to reflect European Union sanctions laws on specific chemicals, and the filing of complaints urging the governments of Germany and Belgium to initiate investigations into additional shipments to Syria.

Frischen: Wow. Let me come back to the automated content removal on social media platforms – when this happens, i.e. when pieces of evidence of atrocities by the government are deleted, does this then open up windows of opportunity for actors like the Syrian government to flood social media with other, positive images, and thus take over newsfeeds?

al-Khatib: Yes, absolutely. Over the last 10 years, we've seen this kind of information propaganda coming from all sides of the conflict in Syria. And our role within this information environment is to counter disinformation by archiving, collecting and verifying visual materials to reconstruct what really happened and to make sure that this reconstruction is based on facts. And we are doing this transparently, so anyone can see our methodology and the tools we are using.

Frischen: How are the big social media companies responding? Do you see them as collaborative or as distant?

al-Khatib: Many civil society organizations from around the world have been engaging with social media companies and asking them to invest more resources into this issue. So far, nothing has changed. The use of machine learning is still happening. A huge amount of content related to human rights documentation is still being removed. But there has absolutely been engagement and collaboration throughout the years, especially since 2017. We worked with YouTube, for example, to reinstate some of the channels that were removed, as well as thousands of videos that were published by credible human rights and media organizations in Syria. But unfortunately, a big part of this documentation is still being removed. The Facebook Leaks reveal the company knew about this problem, but they are continuing to use machine learning, erasing the history and memory of people around the world.

Frischen: How do you attend to the wellbeing of the humans involved in gathering and triaging violent and traumatic content?

al-Khatib: This is a very important question. We need to make sure there is a system of support for all researchers looking at this content – practical assistance from psychologists who understand all the challenges and mitigate some of them. We are setting up protocols, so the researchers have access to experts. There are also some technical efforts underway. For example, we work with machine learning to blur images at the beginning, so researchers are not seeing graphic images directly on their screen. This is something that we want to do more work on.

Frischen: What gives you hope?

al-Khatib: The will of people who are facing the violence firsthand, and the families of victims. Whether in Syria or other countries, they did not yet get the accountability they deserve, but regardless, they are asking for it, fighting for it. This is what gives me hope working together with them, adding value by linking documentation to justice and accountability, and using this process to reconstruct the future of the country again.

Hadi al-Khatib (@Hadi_alkhatib) is the founder of Syrian Archive and its umbrella organization Mnemonic.

This conversation was condensed and edited. Watch the full conversation & browse more insights on Tech & Humanity.


The Beatles: Get Back Used High-Tech Machine Learning To Restore The Audio – /Film

"The Beatles: Get Back" is eight hours of carefully curated audio and footage from The Beatles in the studio and performing a rooftop concert in London in 1969. Jackson had to dig through 60 hours of vintage film footage and around 150 hours of audio recordings in order to put together his three-part documentary. Once he decided which footage and audio to include, he then had to take the next difficult step: cleaning up and restoring them both to give fans a look at The Beatles like they had never seen them before.

In order to clean up the audio for "Get Back," Jackson employed machine learning technology to teach computers what different instruments and voices sounded like, so they could isolate each track.

Once each track was isolated, sound mixers could then adjust volume levels individually to help with sound quality and clarity. The isolated tracks also make it much easier to remove noise from the audio tracks, like background sounds or the electronic hum of older recording equipment. This ability to fine-tune every aspect of the audio allowed Jackson to make it sound like the Fab Four are hanging out in your living room. When that technology is used for their musical performances, it's all the more impressive, as their rooftop concert feels as close to the real thing as you can possibly get.
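The per-track adjustment described above is, at its core, a gain applied to each isolated stem before summing the stems back into a mix. A minimal sketch with made-up sample values, bearing no resemblance to the production tooling used on the film:

```python
def mix(stems, gains):
    """Sum isolated tracks with a per-track gain, clipping to [-1.0, 1.0].

    stems: list of equal-length sample lists; gains: one gain per stem.
    """
    n = len(stems[0])
    out = []
    for i in range(n):
        s = sum(g * stem[i] for stem, g in zip(stems, gains))
        out.append(max(-1.0, min(1.0, s)))
    return out

# Toy three-sample stems (hypothetical values in [-1, 1]):
vocals = [0.2, 0.4, -0.1]
guitar = [0.5, -0.3, 0.2]

# Lift the vocal and pull the guitar down, as a mixer might:
remix = mix([vocals, guitar], gains=[1.5, 0.5])
```

Noise removal works the same way in spirit: once the hum sits on its own stem, its gain can simply be set to zero.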

Check out "The Beatles: Get Back," streaming on Disney+.


Artificial Intelligence in the Food Manufacturing Industry: Machine Conquers Human? – Food Industry Executive

By Lior Akavia, CEO and co-founder of Seebo

Four years ago, Elon Musk famously predicted that artificial intelligence would overtake human intelligence by the year 2025.

"We're headed toward a situation where AI is vastly smarter than humans and I think that time frame is less than five years from now," he told the New York Times.

Musk has also repeatedly warned of the potential dangers of AI, even invoking the Terminator movie franchise by way of illustration.

And yet, the very same Elon Musk recently unveiled the prototype for a distinctly humanoid Tesla Robot, which he hopes will be ready in 2022. Speaking to an audience at Tesla's AI Day in August, Musk quipped that the robot is intended to be friendly, and added that it will be designed to navigate through a world built for humans – alluding to his previous, apparently still-extant concerns.

Of course, Musk's fears about AI aren't shared by everyone. Fellow tech entrepreneur Mark Zuckerberg has distinctly different views on the matter. On the other hand, Musk isn't alone, either; Stephen Hawking once famously warned that AI could ultimately spell the end of the human race.

So what can we take away from this confusing discourse about AI? Is artificial intelligence the savior of humanity? Or are we about to get conquered by an army of drones?

The truth is (probably) a lot less theatrical but arguably no less dramatic.

The misleading thing about these types of high-profile, philosophical debates about AI is that we actually have a long way to go before what Hawking referred to as "full artificial intelligence" is even developed – let alone mass-introduced into the marketplace.

Undeniably, however, the vast potential of AI is as much recognized by experts as it is taken for granted by the general public. Machine learning and other forms of AI are already defining many aspects of our daily lives, from the way we communicate with others to our ability to get to work on time, to how we shop, work, and even acquire knowledge.

In unveiling his Tesla robot, Musk offered a pretty succinct summary of the core benefits of AI in general, asserting that the robot's purpose will be to take over unsafe, repetitive, or boring tasks that humans would rather not do.

That summary is applicable to almost any AI application you can think of: taking over tasks that humans either never really enjoyed doing, or weren't ever that great at in the first place. A classic example is food assembly lines: humans get tired, bored, make mistakes, and have potentially dangerous accidents – all things that robots either don't experience at all, or (in the case of accidents) experience less often, with costs measured in terms of financial losses rather than human lives.

But a far better illustration of this reality is in the world of data. In the days before big data became a buzzword, there was hope that the explosion of information would immediately usher in an era of true enlightenment. Finally, human beings could have all the data they needed at their fingertips to make the optimal decisions every time.

Of course, that's not what happened. Instead of being liberated by big data, we became hostages to it – from the spam clogging our email inboxes to the blur of graphs, charts, and tables that to this day forms the core challenge for almost every business.

Then came artificial intelligence, and with it, the key to unlocking the potential of that ocean of data. And herein lies both the immense promise of AI, as well as the fear of Terminators and robot-driven unemployment: AI, particularly in the form of machine learning algorithms, is infinitely better at analyzing data than human beings are.

While philosophical debates between tech heavyweights naturally make the headlines, the current daily reality is far more benign. In practice, AI is mostly being used to empower humans, not sideline them.

Take the food manufacturing example above. Yes, it's true that many food assembly lines are now dominated by machines rather than people, much in the way the Industrial Revolution did away with other menial jobs. But just as the Industrial Revolution paved the way for a more prosperous future, rather than one of mass unemployment (as many feared at that time as well), the Industrial Artificial Intelligence Revolution is enhancing and improving the lives of food manufacturing teams, rather than rendering them redundant.

Using AI, food manufacturing teams are better able to excel at their jobs – which of course benefits them, their employers, and ultimately the consumers, who enjoy a greater quantity and better quality of product.

I've seen this firsthand. My company, Seebo, is part of this Fourth Industrial Revolution. Our proprietary Process-Based Artificial Intelligence is enabling global leaders in the food industry to reduce production losses like waste, yield, and quality, saving them millions each year. At the same time, they're using our technology to become more sustainable: cutting emissions, reducing overall energy consumption, and significantly reducing food waste.

And as with many other applications of machine learning AI, it's all about the data. In the case of food manufacturers, it means using Seebo's AI to reveal the hidden causes of these food production losses, high emissions, and so on: insights that were previously unavailable due to the complex nature of food manufacturing data. Armed with those insights, process experts and production teams are able to make the right decisions in real time: to know when to adjust the process or maintain certain set points that they might otherwise have neglected or overlooked.
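As a generic illustration of this kind of real-time decision support (a minimal sketch, not Seebo's proprietary method), a simple statistical-process-control check can flag when a sensor reading drifts outside limits learned from in-control baseline data. The sensor values below are made-up numbers:

```python
# Sketch: flag a process variable that drifts outside control limits
# learned from in-control baseline data. Illustrative only; the
# readings and thresholds are invented for this example.
def control_limits(baseline):
    """Return (low, high) = mean +/- 3 standard deviations."""
    n = len(baseline)
    mean = sum(baseline) / n
    var = sum((x - mean) ** 2 for x in baseline) / n
    sd = var ** 0.5
    return mean - 3 * sd, mean + 3 * sd

baseline = [71.8, 72.1, 71.9, 72.0, 72.2, 71.7, 72.0, 71.9]  # oven temp, C
low, high = control_limits(baseline)

for reading in [72.0, 72.1, 73.5]:
    status = "OK" if low <= reading <= high else "ADJUST PROCESS"
    print(f"{reading:.1f} C -> {status}")
```

A real system would of course model many interacting variables at once, but the principle is the same: learn what "in control" looks like, then alert operators the moment the process leaves it.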

Of course, as the saying goes, with great power comes great responsibility.

From the wheel to the printing press to nuclear power, technological advancements always have the potential for good or bad. In that sense, AI is no different; where it differs is that its full potential is largely unknown. We've yet to tap into the full potential of this technology, so it often feels like a sort of black magic.

But I do believe that the current trajectory is very much for the good. And more to the point, we don't have a choice.

Humanity today faces two simultaneous global challenges. First, a population crisis: on the one hand, the global population is set to swell 25% by the year 2050, while on the other hand many countries (most notably China) face a rapidly aging population. And second, a rising climate crisis, as countries and industries struggle to cut carbon emissions while maintaining the productivity necessary to sustain those growing and aging populations.

In this struggle, artificial intelligence is perhaps our greatest ally. I've seen up close its potential to empower better decisions, bridging the gap between seemingly opposing goals like reducing emissions while producing more, not less.

Far from conquering us, AI is humanitys best chance of overcoming some of our greatest food manufacturing challenges today.

Lior Akavia is the CEO and co-founder of Seebo, an industrial Artificial Intelligence start-up that helps tier-one manufacturers around the world to predict and prevent quality and yield losses. He is a serial entrepreneur and innovator, focused on the fields of AI, IoT, and manufacturing.

Link:
Artificial Intelligence in the Food Manufacturing Industry: Machine Conquers Human? - Food Industry Executive

Want To Live Longer: 7 Essential Oil Therapies That Can Promote Long Life

The longer you live, the better it is for your life in general. Sooner or later, you will be left alone to take care of yourself. The use of essential oils for medical purposes has been known to humans from time immemorial. Archaeological records show that even Neanderthals used certain plant extracts for medicinal purposes. Most of these were crude preparations, including infusions, decoctions, and maceration in oil or fat. This method is still used in Ayurveda and other herbal medicine even today.

Essential Oil Therapies

However, with advancements in modern science and technology, the way essential oils are prepared has changed a great deal over the past few centuries. Nowadays, much more refined methods, including distillation, expression, and solvent extraction, have been developed, making it much easier than before to isolate individual compounds from plants.

With these energizing essential oils, you no longer have to worry so much about your health. They not only boost energy levels but also help you fight many health-related issues. So now you can easily say hello to good health with these essential oil therapies that can promote a long life.

1) Myrrh

The ancient Egyptians used myrrh from the time of the Pharaohs for its healing properties. It has antimicrobial effects that can help treat bacterial infections like pneumonia and tuberculosis. Myrrh essential oil is also considered very effective in treating arthritis, gout, and rheumatism. You can use myrrh in an aromatherapy diffuser to revive your mood whenever you feel down.

Myrrh also has anti-inflammatory properties that help relieve pain associated with the health conditions mentioned above. It's also suitable for skin care because the oil facilitates skin healing when applied topically.

2) Peppermint

Peppermint essential oil, another powerful oil that can promote long life, is a hybrid mint plant crossbred from water mint and spearmint. The main chemical components found in peppermint are menthol, menthone, pulegone, piperine, limonene, pinene, linalool, alpha-terpineol acetate, beta-caryophyllene, valeric acid, and many more.


The various compounds found in peppermint make it a powerful pain reliever. It can be used to treat painful conditions like migraines, headaches, and arthritis. Peppermint oil is also known to have anti-inflammatory effects that help treat skin conditions caused by inflammation or redness. The analgesic effects of this essential oil are not harmful to the stomach lining, so you can use it for indigestion as well.

3) Thyme

Thyme essential oil has been used for thousands of years as an antiviral remedy and can be used to treat a variety of conditions. It works on the respiratory system to clear phlegm and mucus from the body, thereby relieving congestion.

One of thyme's best properties is its ability to eliminate viruses from your body. Inhaling thyme oil vapor will stimulate nervous functions, which increases brain activity and improves memory.

4) Lemon

Lemon essential oil is another powerful essential oil that can promote a long life. The lemon fruit has been used for centuries because of its healing properties. It has antibacterial and antifungal properties that can help treat skin infections like scabies, athlete's foot, and acne. Lemon essential oil is an antioxidant, which means it can help slow down the process of cellular aging.

5) Rosemary

Rosemary extract, a potent extract from rosemary leaves, contains rosmarinic acid and carnosic acid, giving your brain a healthy boost. Inhaling rosemary essential oil helps to improve your blood circulation by dilating the blood vessels in the brain. You will have increased energy with regular use of rosemary oil.

6) Frankincense

Frankincense is a type of tree resin that has been used for thousands of years as incense, perfume, and medicine. Ancient Egypt had great respect for the healing powers of frankincense.

Frankincense essential oil is very beneficial because it contains anti-inflammatory properties that help treat skin conditions like rashes, eczema, psoriasis, and dermatitis. Some studies conducted by medical experts point to frankincense's breast cancer-fighting properties, so if you have this form of health condition, you might want to consider using the essential oil.

7) Myrtle

Myrtle is a kind of tree that grows in mild climates. It has a fantastic ability to fight bacterial and fungal infections. The essential oil from myrtle leaves can be used as a natural insect repellent. Inhaling myrtle oil vapor improves your memory, concentration, alertness, and focus. Its anti-inflammatory properties also make it effective against skin conditions like pimples, rashes, and red spots. You can use myrtle essential oil for aromatherapy, topical application, or inhalation.

Wrapping Up!

Essential oils contain many healing properties that can help you improve your health and extend your life. Some essential oils, such as peppermint and lemon, can be powerful stimulants, while others, like frankincense and myrtle, are calming.

You should make sure to use them properly so they won't irritate the skin. As long as you take care of yourself, live a healthy life through proper nutrition, exercise, good hygiene, relaxation, etc., you have every chance of living a long life!

 

 

Quantum computing explained so kids understand – IBM …

November 25, 2019 | Written by: Jan Lillelund

Categorized: Innovation | Quantum Computing


Quantum computing is buzzing these days. However, it is a very complex topic to understand, even for experienced tech professionals, professors and the brightest students. I have experienced this myself during the last few years when speaking about quantum computing at several conferences and universities. But there is a way to understand the complex implications of quantum computing and how remarkably it could improve our lives.

The technology will surely solve complex problems in the future that even classical supercomputers will never be able to: in life sciences, supply chain management, chemistry research and much more. Therefore, it is also crucial that our generation of IT enthusiasts, and even our kids, get familiar with quantum computing. If more people get excited about the fascinating opportunities the technology offers, it will hopefully help push the development of quantum computing to new heights in the future.

In this way, we can solve the unsolvable problems society faces today and eventually make the world that we live in a better place.


Are you new to quantum computing? Or just curious to learn more about it? Then check out this video from WIRED with Dr. Talia Gershon, Senior Manager of Q Experiences at IBM Research.

In the video, she explains quantum computing in a way that a child, a teenager, a college student and a graduate student can each understand, and then discusses quantum computing myths and challenges with Professor Steve Girvin from Yale University:

Whether you are a child, student or professional, I hope the video helped you to understand more about the fascinating capabilities of quantum computing. If you are hooked, you can actually try a real quantum computer via the IBM Cloud. This is done through the IBM Q Experience platform.

If you have any further questions or comments please do not hesitate to contact me at janl@dk.ibm.com (Jan Lillelund). Furthermore, you can also check out the IBM Q homepage for much more information about quantum computing at IBM.

More here:
Quantum computing explained so kids understand - IBM ...

QCE21 Home IEEE Quantum Week

IEEE Quantum Week, the IEEE International Conference on Quantum Computing and Engineering (QCE), is bridging the gap between the science of quantum computing and the development of an industry surrounding it. As such, this event brings a perspective to the quantum industry different from academic or business conferences. IEEE Quantum Week is a multidisciplinary quantum computing and engineering venue that gives attendees the unique opportunity to discuss challenges and opportunities with quantum researchers, scientists, engineers, entrepreneurs, developers, students, practitioners, educators, programmers, and newcomers.

IEEE Quantum Week 2021 received outstanding contributions from the international quantum community, forming an exceptional program with exciting exhibits featuring technologies from quantum companies, start-ups and research labs. QCE21, the second IEEE International Conference on Quantum Computing and Engineering, provides over 300 hours of quantum and engineering programming featuring 10 world-class keynote speakers, 19 workforce-building tutorials, 23 community-building workshops, 48 technical papers, 30 innovative posters, 18 stimulating panels, and Birds-of-a-Feather sessions. The QCE21 program is structured into 10 parallel tracks over six days, October 17-22, 2021, and is available on-demand for registered participants until the end of the year.

The QCE conference grew out of the IEEE Future Directions Quantum Initiative in 2019 and held its inaugural IEEE Quantum Week event in October 2020. IEEE Quantum Week 2020 was a tremendous success with over 800 attendees from 45 countries and 270+ hours of quantum computing and engineering programming in nine parallel tracks over five days.

With your contributions and your participation, together we are building a premier meeting of quantum minds to help advance the fields of quantum computing and engineering. As a virtual event, Quantum Week provides ample opportunities to network with your peers and explore partnerships with industry, government, and academia. Quantum Week 2021 aims to bring together quantum professionals, researchers, educators, entrepreneurs, champions and enthusiasts to exchange and share their experiences, challenges, research results, innovations, applications, pathways and enthusiasm on all aspects of quantum computing and engineering.

IEEE Quantum Week aims to showcase quantum research, practice, applications, education, and training including programming systems, software engineering methods & tools, algorithms, benchmarks & performance metrics, hardware engineering, architectures, & topologies, software infrastructure, hybrid quantum-classical computing, architectures and algorithms, as well as many applications including simulation of chemical, physical and biological systems, optimization problems, techniques and solutions, and quantum machine learning.

Link:
QCE21 Home IEEE Quantum Week

Atom Computing: A Quantum Computing Startup That Believes It Can Ultimately Win The Qubit Race – Forbes


Atom Computing describes itself as a company obsessed with building the world's most scalable quantum computers out of optically trapped neutral atoms. The company recently revealed it had spent the past two years secretly building a quantum computer using Strontium atoms as its units of computation.

Headquartered in Berkeley, California, the company was founded in 2018 by Benjamin Bloom and Jonathan King with $5M in seed funds. Bloom received his PhD in physics from the University of Colorado, while King received a PhD in chemical engineering from the University of California, Berkeley.

Atom Computing received $15M in Series A funding from investors Venrock, Innovation Endeavors, and Prelude Ventures earlier this year. The company also received three grants from the National Science Foundation.

Atom Staff

Rob Hays, a former Intel and Lenovo executive, was recently named CEO of the company. Atom Computing's staff of quantum physicists and design engineers fully complements quantum-related disciplines and applications. This month Atom Computing signaled its continued momentum by adding two quantum veterans to key positions within the company.

Qubit technologies

While traditional computers use magnetic bits to represent a one or a zero for computation, quantum computers use quantum bits, or qubits, which can represent a one, a zero, or a superposition of both at the same time.
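The superposition idea can be sketched numerically: a qubit's state is a normalized pair of complex amplitudes, and the probability of reading out 0 or 1 is the squared magnitude of the corresponding amplitude. A minimal sketch:

```python
import numpy as np

# A qubit state is a normalized pair of complex amplitudes (alpha, beta):
# measurement yields 0 with probability |alpha|^2 and 1 with |beta|^2.
zero = np.array([1.0, 0.0], dtype=complex)   # behaves like a classical 0
one = np.array([0.0, 1.0], dtype=complex)    # behaves like a classical 1
plus = (zero + one) / np.sqrt(2)             # equal superposition of both

def measure_probs(state):
    """Return the probabilities of observing 0 and 1."""
    probs = np.abs(state) ** 2
    return probs / probs.sum()               # guard against rounding drift

print(measure_probs(plus))  # → [0.5 0.5]
```

The superposition `plus` is neither 0 nor 1 until measured; both outcomes are equally likely, which is exactly the behavior a classical bit cannot express.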

Today's quantum computers use several different technologies for qubits. But regardless of the technology, a common requirement is that qubits must be scalable, high quality, and capable of fast quantum interaction with one another.

IBM uses superconducting qubits on its huge fleet of about twenty quantum computers. Although Amazon doesn't yet have a quantum computer, it plans to build one using superconducting hardware. Honeywell and IonQ both use trapped-ion qubits made from a rare earth metal called ytterbium. In contrast, Psi Quantum and Xanadu use photons of light.

Atom Computing chose a different technology: nuclear-spin qubits made from neutral atoms. Phoenix, the name of Atom's first-generation, gate-based quantum computer platform, uses 100 optically trapped qubits.

Atom Computings quantum platform

First-Generation Quantum Computer, Phoenix - Berkeley, California.

The Phoenix platform uses a specific type of nuclear-spin qubits created from an isotope of Strontium, a naturally occurring element. Strontium is a neutral atom. At the atomic level, neutral atoms have equal numbers of protons and electrons. However, isotopes of Strontium have varying numbers of neutrons. These differences in neutrons produce different energy levels in the atom. Atom Computing uses the isotope Strontium-87 and takes advantage of its unique energy levels to create spin qubits.

Qubits need to remain in a quantum state long enough to complete computations. The length of time that a qubit can retain its quantum state is its coherence time. Since Atom Computing's neutral atom qubits are natural rather than manufactured, no adjustments are needed to compensate for differences between qubits. That contributes to their stability and relatively long coherence times, in a range greater than 40 seconds, compared to a millisecond for superconducting systems or a few seconds for ion-trapping systems. Moreover, a neutral atom has little affinity for other atoms, making the qubits less susceptible to noise.
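To get a feel for why coherence time matters, a toy exponential-decoherence model shows how much quantum state survives a 10-millisecond computation under the rough coherence times quoted above (illustrative figures from this article, not vendor specifications):

```python
import math

# Rough coherence times quoted in the article; illustrative only.
coherence_time_s = {
    "neutral atom (Atom Computing)": 40.0,
    "trapped ion": 3.0,
    "superconducting": 1e-3,
}

def retention(t, t_coh):
    """Fraction of quantum state retained after t seconds,
    assuming simple exponential decoherence exp(-t / T_coh)."""
    return math.exp(-t / t_coh)

# Fraction surviving a 10-millisecond circuit:
for tech, t_coh in coherence_time_s.items():
    print(f"{tech}: {retention(0.010, t_coh):.4f}")
```

Under this toy model a 10 ms circuit barely dents a 40-second coherence time, while a millisecond-scale superconducting qubit has almost fully decohered, which is why longer coherence directly translates into deeper runnable circuits.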

Neutral atom qubits offer many advantages that make them suitable for quantum computing.

How neutral atom quantum processors work


The Phoenix quantum platform uses lasers as proxies for high-precision, wireless control of the Strontium-87 qubits. Atoms are trapped in a vacuum chamber using optical tweezers controlled by lasers at very specific wavelengths, creating an array of highly stable qubits captured in free space.

First, a beam of hot strontium atoms moves into the vacuum chamber. Next, multiple lasers bombard each atom with photons to slow its momentum to a near-motionless state, causing its temperature to fall to near absolute zero. This process, called laser cooling, eliminates the requirement for cryogenics and makes it easier to scale qubits.

Then, optical tweezers are formed in a glass vacuum chamber, where qubits are assembled and optically trapped in an array. One advantage of neutral atoms is that the processor's array is not limited to any specific shape, and it can be either 2D or 3D. Additional lasers create a quantum interaction between the atoms (called entanglement) in preparation for the actual computation. Once initial quantum states are set and circuits are established, the computation is performed.
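The entanglement step can be sketched in matrix form: a Hadamard gate on one qubit followed by a CNOT turns the product state |00> into a Bell state whose measurement outcomes are perfectly correlated. This is a textbook two-qubit sketch, not Atom Computing's actual laser control scheme:

```python
import numpy as np

# Standard two-qubit entangling circuit: Hadamard on qubit 0, then CNOT.
# |00> becomes the Bell state (|00> + |11>) / sqrt(2).
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],                  # control = qubit 0
                 [0, 1, 0, 0],                  # target  = qubit 1
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.zeros(4); state[0] = 1.0             # start in |00>
state = CNOT @ np.kron(H, I) @ state            # entangle the pair

probs = np.abs(state) ** 2                      # outcome probabilities
print(probs)  # → [0.5 0.  0.  0.5]: only 00 and 11 are ever observed
```

After entanglement, measuring one atom instantly fixes the outcome of the other: the 01 and 10 outcomes never occur, which is the resource quantum circuits exploit during computation.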

The heart of Phoenix, showing where Atom Computing's qubits entangle. (First-Generation Quantum Computer, Phoenix - Berkeley, California)

Going forward

Atom Computing is working with several technology partners. It is also running tests with a small number of undisclosed customers. The Series A funding has allowed it to expand its research and begin working on the second generation of its quantum platform. It's a good sign that Rob Hays, CEO, believes Atom Computing will begin generating revenue in mid-2023.

Atom Computing is a young and aggressive company with promising technology. I spoke with Denise Ruffner shortly after she joined Atom. Her remarks seem to reflect the optimism of the entire company:

"I am joining the dream team - a dynamic CEO with experience in computer development and sales, including an incredible Chief Product Officer, as well as a great scientific team. I am amazed at how many corporations have already reached out to us to try our hardware. This is a team to bet on."

Analyst notes

Note: Moor Insights & Strategy writers and editors may have contributed to this article.

Moor Insights & Strategy, like all research and tech industry analyst firms, provides or has provided paid services to technology companies. These services include research, analysis, advising, consulting, benchmarking, acquisition matchmaking, or speaking sponsorships. The company has had or currently has paid business relationships with 88, A10 Networks, Advanced Micro Devices, Amazon, Ambient Scientific, Anuta Networks, Applied Micro, Apstra, Arm, Aruba Networks (now HPE), AT&T, AWS, A-10 Strategies, Bitfusion, Blaize, Box, Broadcom, Calix, Cisco Systems, Clear Software, Cloudera, Clumio, Cognitive Systems, CompuCom, CyberArk, Dell, Dell EMC, Dell Technologies, Diablo Technologies, Dialogue Group, Digital Optics, Dreamium Labs, Echelon, Ericsson, Extreme Networks, Flex, Foxconn, Frame (now VMware), Fujitsu, Gen Z Consortium, Glue Networks, GlobalFoundries, Revolve (now Google), Google Cloud, Graphcore, Groq, Hiregenics, HP Inc., Hewlett Packard Enterprise, Honeywell, Huawei Technologies, IBM, IonVR, Inseego, Infosys, Infiot, Intel, Interdigital, Jabil Circuit, Konica Minolta, Lattice Semiconductor, Lenovo, Linux Foundation, Luminar, MapBox, Marvell Technology, Mavenir, Marseille Inc, Mayfair Equity, Meraki (Cisco), Mesophere, Microsoft, Mojo Networks, National Instruments, NetApp, Nightwatch, NOKIA (Alcatel-Lucent), Nortek, Novumind, NVIDIA, Nutanix, Nuvia (now Qualcomm), ON Semiconductor, ONUG, OpenStack Foundation, Oracle, Panasas, Peraso, Pexip, Pixelworks, Plume Design, Poly (formerly Plantronics), Portworx, Pure Storage, Qualcomm, Rackspace, Rambus, Rayvolt E-Bikes, Red Hat, Residio, Samsung Electronics, SAP, SAS, Scale Computing, Schneider Electric, Silver Peak (now Aruba-HPE), SONY Optical Storage, Springpath (now Cisco), Spirent, Splunk, Sprint (now T-Mobile), Stratus Technologies, Symantec, Synaptics, Syniverse, Synopsys, Tanium, TE Connectivity, TensTorrent, Tobii Technology, T-Mobile, Twitter, Unity Technologies, UiPath, Verizon Communications, Vidyo, VMware, Wave Computing, Wellsmith, Xilinx, Zayo, Zebra, Zededa, Zoho, and Zscaler. Moor Insights & Strategy founder, CEO, and Chief Analyst Patrick Moorhead is a personal investor in technology companies dMY Technology Group Inc. VI and Dreamium Labs.

Read more from the original source:
Atom Computing: A Quantum Computing Startup That Believes It Can Ultimately Win The Qubit Race - Forbes