Washington increases pressure on Beijing over Chinese media – World Socialist Web Site

By Ben McGrath, 10 March 2020

The Trump administration stepped up its punitive measures against Chinese media in the US after Beijing expelled three Wall Street Journal (WSJ) reporters last month. It has placed a limit on the number of Chinese citizens eligible to work at five of Beijing's news outlets. The State Department announced on March 2 that the five agencies will be required to reduce the total number of Chinese nationals they employ from 160 to 100 by March 13.

The five media outlets are China's official news agency Xinhua, China Radio International, China Global Television Network, China Daily Distribution Corporation and Hai Tian Development USA; the last two print and distribute the newspapers China Daily and People's Daily, respectively.

In response to the latest restrictions, Chinese Foreign Ministry spokeswoman Hua Chunying suggested Beijing will take further measures. She posted on Twitter: "Reciprocity? 29 US media agencies in China VS 9 Chinese ones in the US. Multiple-entry to China VS Single-entry to the US. 21 Chinese journalists denied visas since last year. Now the US kicked off the game, let's play."

Secretary of State Mike Pompeo justified the decision, saying: "For years, the government of the People's Republic of China (PRC) has imposed increasingly harsh surveillance, harassment, and intimidation against American and other foreign journalists operating in China. President Trump has made clear that Beijing's restrictions on foreign journalists are misguided. The US government has long welcomed foreign journalists, including PRC journalists, to work freely and without threat of reprisal."

Beijing announced on February 19 that it would expel three journalists after accusing the WSJ of denigrating China's efforts to deal with the Covid-19 coronavirus outbreak. None of the three journalists had been involved in writing the opinion piece published February 3 that provided the impetus for the expulsions, but all had been involved in criticizing the treatment of Uighurs in China's Xinjiang Province.

The day before Beijing's decision, Washington had designated the five media outlets currently at the center of the conflict as foreign diplomatic missions. As a result, they are required to declare all of their property holdings and seek approval for the acquisition of new property. Their employees are forced to register with the State Department, and all five agencies are subject to greater monitoring by the US government.

Washington is no defender of a free press. Trump and his allies have regularly accused the media of being "the enemy of the people" while encouraging violence against journalists. Trump even praised the 2017 assault on Guardian reporter Ben Jacobs by Greg Gianforte, then a Republican candidate, who slammed Jacobs to the ground. The reporter was covering Gianforte's campaign for the US House of Representatives. Gianforte subsequently won the election but was also convicted of assault.

So volatile has the situation become for reporters covering the US presidential election that the Committee to Protect Journalists (CPJ) is issuing safety kits to journalists covering the election, a first in the CPJ's forty-year history. The kits offer basic information on physical, digital and psychological safety resources and tools.

The Trump administration's most vicious attack on freedom of the press is the persecution of Julian Assange and Chelsea Manning, the goal of which is to intimidate journalists and whistleblowers into remaining silent about Washington's crimes. This persecution began under President Barack Obama and the Democrats, who support the punitive measures against Assange and Manning no less than the Republicans do.

Washington is demanding Assange's extradition from the United Kingdom, where the Australian journalist is being subjected to psychological torture in Belmarsh prison. Assange, along with whistleblower Manning, exposed US war crimes and other offenses, and he could now face the death penalty if sent to the US. Manning has been vindictively held behind bars for refusing to give false testimony in Assange's case.

Washington's decision last week to further restrict Chinese media in the US is a continuation of an anti-China policy that has been prosecuted by both the Republicans and the Democrats. As with the Obama administration's "pivot to Asia," the Trump government is increasingly moving the US onto a war footing with China, applying military and economic pressure on Beijing in an attempt to force the Stalinist regime to bow to US demands.

However, such an agenda finds no mass support among American workers and youth after nearly 30 years of unending war. Washington therefore resorts to empty phrases about a "free press" and "democracy" to justify its war preparations. Publications like the WSJ and the New York Times have contributed to this by demonizing China in the name of so-called human rights. They have even claimed that Chinese censorship contributed to the Covid-19 outbreak, asserting that such an outbreak never would have happened in the supposedly free and democratic West.

In a January 29 article in the New York Times, Nicholas Kristof, an ardent supporter of neocolonial campaigns waged in the name of human rights, denounced Chinese President Xi Jinping in a comment headlined "As coronavirus spreads, the world pays for China's dictatorship."

Criticizing Beijing's initial cover-up of the novel coronavirus outbreak, Kristof wrote: "One reason for the early cover-up is that Xi's China has systematically gutted institutions like journalism, social media, nongovernmental organizations, the legal profession and others that might provide accountability."

The claim that the US's "free press" would somehow have stopped the crisis is belied by the fact that the US media has engaged in countless cover-ups leading to outright disasters, including the Iraq War and the destruction of broad regions of the Middle East and North Africa. The US media is now aiding Washington in deflecting fears and anger over the virus onto China as it becomes increasingly clear that the US ruling class is not only totally unprepared but entirely indifferent to the fate of broad masses of people.



Assange trial rehearsal? Hung jury results in mistrial for former CIA tech accused of handing Vault 7 docs to WikiLeaks – RT

Federal prosecutors were unable to convince a jury to convict an ex-CIA engineer on any of the spying-related charges over his alleged theft of reams of classified material, in what may be a dry run for the case against Julian Assange.

In a significant blow to prosecutors on Monday, jurors failed to reach a verdict on eight central counts against former CIA software engineer Joshua Schulte, who was charged with stealing thousands of pages of classified information on the agency's secret hacking tools and passing them to WikiLeaks, in what later became its Vault 7 release, the largest breach of classified material in CIA history.

While Schulte was found guilty of contempt of court and making false statements to investigators, a hung jury on the remaining eight charges, including illegal gathering and transmission of national defense information, prompted District Judge Paul Crotty to declare a mistrial and dismiss the jurors on the case, who had deemed themselves "extremely deadlocked" in a note to the judge.

The split verdict came after nearly a full week of messy deliberations, which saw one juror removed for researching the facts of the case against Crotty's orders. She was never replaced, leaving a short-handed panel to deliver the final decision.

The former technician left his job at the CIA's Langley headquarters in 2016 and was charged some two years later for his alleged role in the Vault 7 leak. But prosecutors had difficulty tying Schulte to the disclosure throughout his four-week trial, with jurors often mystified by a complicated maze of technical evidence.

The case may offer parallels to that of WikiLeaks co-founder Julian Assange, who faces 17 charges under the World War I-era Espionage Act and up to 175 years in prison over his role in the publication of the Iraq and Afghan war logs in 2010. Assange is accused of helping leaker Chelsea Manning (then known as Bradley) hack into military computers to obtain classified material. If he is extradited from the UK to stand trial in an American courtroom, prosecutors would likely have to produce similar technical forensics to prove his involvement, precisely what the government was unable to do in Schulte's case.

Arguing that the CIA's computer network had widely known vulnerabilities, including poor password protections, Schulte's defense insisted prosecutors had failed to prove his role in the breach. They noted it was possible another actor gained access to his workstation, pointing to another CIA employee, identified only as Michael, as a potential culprit.

The CIA later placed that employee on administrative leave for refusing to cooperate with the investigation, which "suggested the government had doubt about the case against Mr. Schulte," defense attorney Sabrina Shroff said in her closing argument on Monday.

Prosecutors are likely to seek a retrial for Schulte, and he still stands accused of possessing child pornography, allegedly stored on devices found during a search of his home. He will be tried separately on those charges, facing a total of 15 counts.



4 ways to fine-tune your AI and machine learning deployments – TechRepublic

Life cycle management of artificial intelligence and machine learning initiatives is vital in order to rapidly deploy projects with up-to-date and relevant data.


An institutional finance company wanted to improve time to market on the artificial intelligence (AI) and machine learning (ML) applications it was deploying, which had been taking 12 to 18 months to develop. The long lead times jeopardized the company's ability to meet its time-to-market goals in areas of operational efficiency, compliance, risk management, and business intelligence.


After adopting life-cycle management software for its AI and ML application development and deployment, the company was able to reduce its AI and ML application time to market to days, and in some cases hours. The process improvement enabled corporate data scientists to spend 90% of their time on data model development, instead of spending 80% of their time resolving technical challenges created by unwieldy deployment processes.

This is important because the longer you extend your big data, AI, and ML modeling, development, and delivery processes, the greater the risk that you end up with models, data, and applications that are already out of date by the time they are implemented. In the compliance area alone, this creates risk and exposure.

"Three big problems enterprises face as they roll out artificial intelligence and machine learning projects is the inability to rapidly deploy projects, data performance decay, and compliance-related liability and losses," said Stu Bailey, chief technical officer of ModelOP, which provides software that deploys, monitors, and governs data science AI and ML models.


Bailey believes that most problems arise out of a lack of ownership and collaboration between data science, IT, and business teams when it comes to getting data models into production in a timely manner. In turn, these delays adversely affect profitability and time-to-business insight.

"Another reason that organizations have difficulty managing the life cycle of their data models is that there are many different methods and tools today for producing data science and machine language models, but no standards for how they're deployed and managed," Bailey said.

The management of big data, AI, and ML life cycles can be a prodigious task that goes beyond having software and automation to do some of the "heavy lifting." Many organizations also lack policies and procedures for these tasks. In this environment, data can rapidly become dated, application logic and business conditions can change, and the new behaviors that humans must teach machine learning applications can be neglected.


How can organizations ensure that the time and talent they put into their big data, AI, and ML applications remain relevant?

Most organizations acknowledge that collaboration between data science, IT, and end users is important, but they don't necessarily follow through. Effective collaboration between departments depends on clearly articulated policies and procedures that everyone adheres to in the areas of data preparation, compliance, speed to market, and learning for ML.

Companies often fail to establish regular intervals for updating the logic and data of big data, AI, and ML applications in the field. The learning update cycle should be continuous: it's the only way to ensure concurrency between your algorithms and the world in which they operate.
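To make the idea concrete, here is a minimal, self-contained sketch of such an update cycle in Python. The synthetic data stands in for a real feature store, and the 0.90 accuracy floor and weekly cadence are illustrative assumptions, not recommendations from the article.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
ACCURACY_FLOOR = 0.90  # assumed threshold; tune to your own risk tolerance

def weekly_batch(shift=0.0, n=500):
    """Simulate a week of labeled production data; `shift` models drift."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 5))
    y = (X.sum(axis=1) > 5 * shift).astype(int)  # labels balanced around the drifted mean
    return X, y

# Initial training, as at first deployment.
X0, y0 = weekly_batch()
model = LogisticRegression().fit(X0, y0)

# At each interval, score the live model on fresh labels before trusting it.
for week, shift in enumerate([0.0, 0.5, 2.0]):
    X_new, y_new = weekly_batch(shift)
    live_acc = accuracy_score(y_new, model.predict(X_new))
    if live_acc < ACCURACY_FLOOR:
        model = LogisticRegression().fit(X_new, y_new)  # retrain on current data
        print(f"week {week}: accuracy fell to {live_acc:.2f}, model retrained")
```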

Like their transaction-system counterparts, some AI and ML applications will eventually have seen their day. That is the end of their life cycle, and the appropriate thing to do is retire them.

If you can automate some of your life-cycle maintenance functions for big data, AI, and ML, do so. Automation software can handle handoffs between data science, IT, and production, which makes the deployment process that much easier.
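As one illustration of such a handoff, the sketch below gates promotion of a candidate model behind a validation check before writing it to a shared location. The file-based "registry" and the 0.9 threshold are stand-ins for whatever registry product and acceptance criteria an organization actually uses.

```python
import joblib
from pathlib import Path
from sklearn.metrics import accuracy_score

REGISTRY = Path("model_registry")  # stand-in for a real model registry

def promote_candidate(candidate, X_val, y_val, min_accuracy=0.9):
    """Automated handoff: only candidates that pass validation reach production."""
    score = accuracy_score(y_val, candidate.predict(X_val))
    if score < min_accuracy:
        raise ValueError(f"candidate rejected: validation accuracy {score:.2f}")
    REGISTRY.mkdir(exist_ok=True)
    joblib.dump(candidate, REGISTRY / "production.joblib")  # the "handoff" itself
    return score
```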



AI-powered honeypots: Machine learning may help improve intrusion detection – The Daily Swig

John Leyden, 09 March 2020 at 15:50 UTC. Updated: 09 March 2020 at 16:04 UTC

Forget crowdsourcing, here's crook-sourcing

Computer scientists in the US are working to apply machine learning techniques in order to develop more effective honeypot-style cyber defenses.

So-called deception technology refers to traps or decoy systems that are strategically placed around networks.

These decoy systems are designed to act as honeypots: once an attacker has penetrated a network, they will attempt to attack the decoys, setting off security alerts in the process.

Deception technology is not a new concept. Companies including Illusive Networks and Attivo have been working in the field for several years.

Now, however, researchers from the University of Texas at Dallas (UT Dallas) are aiming to take the concept one step further.

The DeepDig (DEcEPtion DIGging) technique plants traps and decoys onto real systems before applying machine learning techniques in order to gain a deeper understanding of attackers' behavior.

The technique is designed to use cyber-attacks as free sources of live training data for machine learning-based intrusion detection systems.

Somewhat ironically, the prototype technology enlists attackers as free penetration testers.
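In spirit, the crook-sourcing loop looks something like the sketch below: any session that touches an embedded trap is, by construction, malicious, so it can be fed straight back into an online intrusion detector as free labeled training data. The feature vector and trap signal here are hypothetical placeholders, not DeepDig's published code.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

detector = SGDClassifier(loss="log_loss")  # supports incremental updates
classes = np.array([0, 1])                 # 0 = benign, 1 = attack

def learn_from_session(features: np.ndarray, touched_trap: bool):
    """Every observed session becomes a training example; trap hits are attacks."""
    label = np.array([1 if touched_trap else 0])
    detector.partial_fit(features.reshape(1, -1), label, classes=classes)

# Example: one ordinary session, and one that probed a decoy credential.
learn_from_session(np.random.rand(16), touched_trap=False)
learn_from_session(np.random.rand(16), touched_trap=True)
```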

Dr Kevin Hamlen, endowed professor of computer science at UT Dallas, explained: "Companies like Illusive Networks, Attivo, and many others create network topologies intended to be confusing to adversaries, making it harder for them to find real assets to attack."

The shortcoming of existing approaches, Dr Hamlen told The Daily Swig, is that such deceptions do not learn from attacks.

"While the defense remains relatively static, the adversary learns over time how to distinguish honeypots from a real asset, leading to an asymmetric game that the adversary eventually wins with high probability," he said.

"In contrast, DeepDig turns real assets into traps that learn from attacks using artificial intelligence and data mining."

Turning real assets into a form of honeypot has numerous advantages, according to Dr Hamlen.

"Even the most skilled adversary cannot avoid interacting with the trap, because the trap is within the real asset that is the adversary's target, not a separate machine or software process," he said.

"This leads to a symmetric game in which the defense continually learns and gets better at stopping even the most stealthy adversaries."

The research, which has applications in the field of web security, was presented in a paper (PDF) entitled "Improving Intrusion Detectors by Crook-Sourcing" at the recent Computer Security Applications Conference in Puerto Rico.

The research was funded by the US federal government. The algorithms and evaluation data developed so far have been publicly released to accompany the research paper.

It's hoped that the research might eventually find its way into commercially available products, but this is still some way off, and the technology remains at the prototype stage.

"In practice, companies typically partner with a university that conducted the research they're interested in to build a full product," a UT Dallas spokesman explained. "Dr Hamlen's project is not yet at that stage."



If AI’s So Smart, Why Can’t It Grasp Cause and Effect? – WIRED

Here's a troubling fact. A self-driving car hurtling along the highway and weaving through traffic has less understanding of what might cause an accident than a child who's just learning to walk.

A new experiment shows how difficult it is for even the best artificial intelligence systems to grasp rudimentary physics and cause and effect. It also offers a path for building AI systems that can learn why things happen.

The experiment was designed "to push beyond just pattern recognition," says Josh Tenenbaum, a professor at MIT's Center for Brains, Minds & Machines, who worked on the project with Chuang Gan, a researcher at MIT, and Kexin Yi, a PhD student at Harvard. "Big tech companies would love to have systems that can do this kind of thing."

The most popular cutting-edge AI technique, deep learning, has delivered some stunning advances in recent years, fueling excitement about the potential of AI. It involves feeding copious amounts of training data to a large artificial neural network. Deep-learning algorithms can often spot patterns in data beautifully, enabling impressive feats of image and voice recognition. But they lack other capabilities that are trivial for humans.

To demonstrate the shortcoming, Tenenbaum and his collaborators built a kind of intelligence test for AI systems. It involves showing an AI program a simple virtual world filled with a few moving objects, together with questions and answers about the scene and what's going on. The questions and answers are labeled, similar to how an AI system learns to recognize a cat by being shown hundreds of images labeled "cat."

Systems that use advanced machine learning exhibited a big blind spot. Asked a descriptive question such as "What color is this object?", a cutting-edge AI algorithm will get it right more than 90 percent of the time. But when posed more complex questions about the scene, such as "What caused the ball to collide with the cube?" or "What would have happened if the objects had not collided?", the same system answers correctly only about 10 percent of the time.
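A test of this shape is straightforward to score by question category, which is how a gap like 90 percent versus 10 percent surfaces. The sketch below assumes a placeholder model with an answer() method and a dataset of labeled question tuples; neither reflects the benchmark's actual API.

```python
from collections import defaultdict

def accuracy_by_question_type(model, dataset):
    """dataset: iterable of (scene, question, answer, qtype) tuples, where
    qtype is e.g. 'descriptive', 'causal', or 'counterfactual'."""
    correct, total = defaultdict(int), defaultdict(int)
    for scene, question, answer, qtype in dataset:
        total[qtype] += 1
        correct[qtype] += model.answer(scene, question) == answer
    # Report per-category accuracy so causal blind spots aren't averaged away.
    return {qtype: correct[qtype] / total[qtype] for qtype in total}
```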


David Cox, IBM director of the MIT-IBM Watson AI Lab, which was involved with the work, says understanding causality is fundamentally important for AI. "We as humans have the ability to reason about cause and effect, and we need to have AI systems that can do the same."

A lack of causal understanding can have real consequences, too. Industrial robots can increasingly sense nearby objects in order to grasp or move them. But they don't know that hitting something will cause it to fall over or break unless they've been specifically programmed, and it's impossible to predict every possible scenario.

If a robot could reason causally, however, it might be able to avoid problems it hasn't been programmed to understand. The same is true for a self-driving car. It could instinctively know that if a truck were to swerve and hit a barrier, its load could spill onto the road.

Causal reasoning would be useful for just about any AI system. Systems trained on medical information rather than 3-D scenes need to understand the cause of disease and the likely result of possible interventions. Causal reasoning is of growing interest to many prominent figures in AI. "All of this is driving towards AI systems that can not only learn but also reason," Cox says.

The test devised by Tenenbaum is important, says Kun Zhang, an assistant professor who works on causal inference and machine learning at Carnegie Mellon University, because it provides a good way to measure causal understanding, albeit in a very limited setting. "The development of more-general-purpose AI systems will greatly benefit from methods for causal inference and representation learning," he says.


The Connection Between Astrology And Your Tesla AutoDrive – Forbes

Preamble: Intermittently, I will be introducing columns that explore some seemingly outlandish concepts. The purpose is partly humor, but also to provoke some thought. Enjoy.

Zodiac signs inside of horoscope circle.

Historically, astrology has been a major component of the cultural life in many major civilizations. Significant events such as marriage, moving into a new home, or even travel were planned with astrology in mind. Even in modern times, astrological internet sites enjoy great success and the gurus of the art publish in major newspapers.

Of course, with the advent of scientific methods and formal education, astrology has rapidly lost favor in intellectual society. After all, what could possibly be the causal relationship between the movement of planets and whether someone will get a job promotion? As some have pointed out, even if there were a relationship, the configuration of the stars changes, so how could the predictions of the past possibly remain valid?

Pure poppycock. Right? Perhaps. Let's take a deeper look.

Let's consider the central technology at the apex of current intellectual achievement: machine learning. Machine learning is the engine underlying important technologies such as autonomous vehicles, including Tesla's AutoDrive. What is machine learning at its core? One looks at massive amounts of data and trains a computational engine (an ML engine). This ML engine is then used to make future predictions. Sometimes the training is done in a constrained manner, where one looks at particular items; other times, the training is left unconstrained. Machine learning and the associated field of artificial intelligence (AI) are at the forefront of computer science research. Indeed, as we have discussed in past articles, AI is considered to be the next big economic mega-driver in a vast number of markets. After looking at machine learning, an interesting thought comes to mind.

Was astrology really just machine learning done by humans?

Could the thought leaders from great civilizations have looked at large amounts of human behavioral data and used something very reasonable (planetary movements) to train the astrology engine? After all, what really is the difference between machine learning and astrology?

Marketing Chart Comparing Astrology and Machine Learning

Both astrology and machine learning seem to have a concept of training. In astrology, the astrological signs are used as points of interest, and seemingly arbitrary connections are made to individual human circumstances. Even without an understanding of causality, the correlations can be somewhat true. In machine learning, data correlations are discovered, and there is no requirement of causation. This thought process is central to the machine learning paradigm and gives it much of its power. In fact, as the chart above shows, there is an uncomfortable level of parallel between astrology and machine learning.
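For fun, the toy sketch below makes the parallel literal: a model trained on synthetic "planetary angles" will happily predict an outcome driven by a hidden seasonal variable that also nudges one of the angles, a genuine correlation with no causal link whatsoever. Everything here is made up for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1000
angles = rng.uniform(0, 360, size=(n, 7))  # seven "planets", in degrees

# The hidden cause: season drives the outcome AND one planet's apparent angle.
season = rng.integers(0, 4, size=n)
angles[:, 0] = (season * 90 + rng.normal(0, 10, n)) % 360
promotion = (season == 2).astype(int)  # the "prediction" astrology gets asked for

X_tr, X_te, y_tr, y_te = train_test_split(angles, promotion, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print(model.score(X_te, y_te))  # well above chance, with zero causation
```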

What does this mean? Should we take machine learning a little less seriously? Certainly, some caution is warranted, but it appears to be clear that machine learning can provide utility.

So, what about astrology? Perhaps we should take it a bit more seriously.

If you enjoyed this article, you may also enjoy A Better Transportation Option Than A Tesla.


Think your smartwatch is good for warning of a heart attack? Turns out it’s surprisingly easy to fool its AI – The Register

Neural networks that analyse electrocardiograms can be easily fooled, mistaking your normal heartbeat reading as irregular or vice versa, researchers warn in a paper published in Nature Medicine.

ECG sensors are becoming more widespread, embedded in wearable devices like smartwatches, while machine learning software is being increasingly developed to automatically monitor and process data to tell users about their heartbeats. The US Food and Drug Administration approved 23 algorithms for medical use in 2018 alone.

However, the technology isn't foolproof. Like all deep learning models, ECG ones are susceptible to adversarial attacks: miscreants can force algorithms to misclassify the data by manipulating it with noise.

A group of researchers led by New York University demonstrated this by tampering with a deep convolutional neural network (CNN). First, they obtained a dataset containing 8,528 ECG recordings labelled into four groups: normal; atrial fibrillation, the most common type of irregular heartbeat; other; or noise.

The majority of the dataset, some 5,076 samples, was considered normal; 758 fell into the atrial fibrillation category; 2,415 were classified as other; and 279 as noise. The researchers split the dataset and used 90 per cent of it to train the CNN and the other 10 per cent to test the system.

"Deep learning classifiers are susceptible to adversarial examples, which are created from raw data to fool the classifier such that it assigns the example to the wrong class, but which are undetectable to the human eye," the researchers explained in the paper. (Here's the free preprint version of the paper on arXiv.)

To create these adversarial examples, the researchers added a small amount of noise to samples used in the test set. The uniform peaks and troughs in an ECG reading may appear innocuous and normal to the human eye, but adding a small interference was enough to trick the CNN into classifying them as atrial fibrillation, an irregular heartbeat linked to heart palpitations and an increased risk of stroke.
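The generic mechanics are close to the well-known fast gradient sign method (FGSM): nudge each sample a small step in the direction that most increases the classifier's loss. The sketch below applies bare FGSM to a toy 1-D convolutional net standing in for the ECG classifier; the paper's actual attack additionally smooths the perturbation so the trace stays physiologically plausible.

```python
import torch
import torch.nn as nn

# Toy stand-in for the ECG classifier: 4 classes (normal, AF, other, noise).
model = nn.Sequential(
    nn.Conv1d(1, 8, kernel_size=7), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(8, 4),
)

def fgsm(ecg: torch.Tensor, true_label: int, eps: float = 0.01) -> torch.Tensor:
    """Return ecg plus a small perturbation chosen to push the model off true_label."""
    x = ecg.clone().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), torch.tensor([true_label]))
    loss.backward()
    return (x + eps * x.grad.sign()).detach()  # one step up the loss gradient

clean = torch.randn(1, 1, 3000)  # one recording: (batch, channel, samples)
adversarial = fgsm(clean, true_label=0)
```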

Here are two adversarial examples. The first shows an irregular atrial fibrillation (AF) reading being misclassified as normal; the second shows a normal reading misclassified as irregular. Image Credit: Tian et al. and Nature Medicine.

When the researchers fed the adversarial examples to the CNN, 74 per cent of the readings that had originally been correctly classified were subsequently assigned to incorrect labels. What was originally a normal reading then seemed irregular, and vice versa.

Luckily, humans are much more difficult to trick. Two clinicians were given pairs of readings, each consisting of an original, unperturbed sample and its corresponding adversarial example, and were asked whether either looked like it belonged to a different class. They thought only 1.4 per cent of the readings should have been labelled differently.

The heartbeat patterns in the original and adversarial samples looked similar to the human eye, so it'd be fairly easy for a specialist to tell if a normal heartbeat had been incorrectly classified as irregular. In fact, both experts were able to tell the original reading from the adversarial one about 62 per cent of the time.

"The ability to create adversarial examples is an important issue, with future implications including robustness to the environmental noise of medical devices that rely on ECG interpretation, for example, pacemakers and defibrillators, the skewing of data to alter insurance claims and the introduction of intentional bias into clinical trials," the paper said.

It's unclear how realistic these adversarial attacks truly are in the real world, however. In these experiments, the researchers had full access to the model, making it easy to attack, but it would be much more difficult for these types of attacks to work on, say, someone's Apple Watch.

The Register has contacted the researchers for comment. What the research does prove, however, is that relying solely on machines may be unreliable, and that specialists really ought to double-check results when neural networks are used in clinical settings.

"In conclusion, with this work, we do not intend to cast a shadow on the utility of deep learning for ECG analysis, which undoubtedly will be useful to handle the volumes of physiological signals requiring processing in the near future," the researchers wrote.

"This work should, instead, serve as an additional reminder that machine learning systems deployed in the wild should be designed with safety and reliability in mind, with a particular focus on training data curation and provable guarantees on performance."



Tip: Machine learning solutions for journalists | Tip of the day – Journalism.co.uk

Much has been said about what artificial intelligence and machine learning can do for journalism: from understanding human ethics to predicting when readers are about to cancel their subscriptions.

Want to get hands-on with machine learning? Quartz investigative editor John Keefe provides 15 video lessons taken from the 'Hands-on Machine Learning Solutions for Journalists' online class he led through the Knight Center for Journalism in the Americas. It covers the techniques that the Quartz investigative team and AI studio commonly use in their journalism.

"Machine learning is particularly good at finding patterns and that can be useful to you when you're trying to search through text documents or lots of images," Keefe explained in the introduction video.




Chilmark Research: The Promise of AI & ML in Healthcare Report – HIT Consultant

What You Need to Know:

A new Chilmark Research report reveals that artificial intelligence and machine learning (AI/ML) technologies are capturing the imagination of investors and healthcare organizations, and are poised to expand healthcare frontiers.

The latest report evaluates over 120 commercial AI/ML solutions in healthcare, explores future opportunities, and assesses obstacles to adoption at scale.

Interest and investment in healthcare AI/ML tools is booming, with approximately $4B in capital funding pouring into this healthcare sector in 2019. Such investment is spurring a vast array of AI/ML tools for providers, patients, and payers, accelerating the possibilities for new solutions to improve diagnostic accuracy, improve feedback mechanisms, and reduce clinical and administrative errors, according to Chilmark Research's latest report.

The Promise of AI & ML in Healthcare: Report Background

The report, The Promise of AI & ML in Healthcare, is the most comprehensive report published on this rapidly evolving market, with nearly 120 vendors profiled. The report explores opportunities, trends, and the rapidly evolving landscape for vendors, tracing the evolution from early AI/ML use in medical imaging to today's rich array of vendor solutions in medical imaging, business operations, clinical decision support, research and drug development, patient-facing applications, and more. The report also reviews types and applications of AI/ML, explores the substantial challenges of health data collection and use, and considers issues of bias in algorithms, ethical and governance considerations, cybersecurity, and broader implications for business.

Health IT vendors, new start-up ventures, providers, payers, and pharma firms now offer (or are developing) a wide range of solutions for an equally wide range of industry challenges. Our extensive research for this report found that nearly 120 companies now offer AI-based healthcare solutions in four main categories: hospital operations, clinical support, research and drug development, and patient/consumer engagement.

Report Key Themes

This report features an overview of these major areas of AI/ML use in healthcare. Solutions for hospital operations include tools for revenue cycle management, applications to detect fraud and ensure payment integrity, administrative and supply chain applications to improve hospital operations, and algorithms to boost patient safety. Population health management is an area ripe for AI/ML innovation, with predictive analytics solutions devoted to risk stratification, care management, and patient engagement.

A significant development is underway in AI/ML solutions for clinical decision support, including NLP- and voice-enabled clinical documentation applications, sophisticated AI-based medical imaging and pathology tools, and electronic health records management tools to mitigate provider burnout. AI/ML-enabled tools are optimizing research and drug development by improving clinical trials and patient monitoring, modeling drug simulations, and enabling precision medicine advancement. A wealth of consumer-facing AI/ML applications, such as chatbots, wearables, and symptom checkers, are available and in development.

Provider organizations will find this report offers deep insight into current and forthcoming solutions that can help support business operations, population health management, and clinical decision support. Current and prospective vendors of AI/ML solutions and their investors will find this report's overview of the current market valuable in mapping their own product strategy. Researchers and drug developers will benefit from the discussion of current AI/ML applications and future possibilities in precision medicine, clinical trials, drug discovery, and basic research. Providers and patient advocates will gain valuable insight into patient-facing tools currently available and in development.

All stakeholders in healthcare technology (providers, payers, pharmaceutical stakeholders, consultants, investors, patient advocates, and government representatives) will benefit from a thorough overview of current offerings, as well as thoughtful discussions of bias in data collection and underlying algorithms, cybersecurity, governance, and ethical concerns.

For more information about the report, please visit https://www.chilmarkresearch.com/chilmark_report/the-promise-of-ai-and-ml-in-healthcare-opportunities-challenges-and-vendor-landscape/


Differentiating Boys with ADHD from Those with Typical Development Bas | NDT – Dove Medical Press

Yunkai Sun,1,2,* Lei Zhao,1,2,* Zhihui Lan,1,2 Xi-Ze Jia,1,2 Shao-Wei Xue1,2

1Center for Cognition and Brain Disorders, Institute of Psychological Sciences and the Affiliated Hospital, Hangzhou Normal University, Hangzhou 311121, People's Republic of China; 2Zhejiang Key Laboratory for Research in Assessment of Cognitive Impairments, Hangzhou 311121, People's Republic of China

*These authors contributed equally to this work

Correspondence: Shao-Wei Xue, Center for Cognition and Brain Disorders, Hangzhou Normal University, No. 2318, Yuhangtang Road, Hangzhou, Zhejiang 311121, People's Republic of China. Tel/Fax +86-571-28867717. Email xuedrm@126.com

Purpose: In recent years, machine learning techniques have received increasing attention as a promising approach to differentiating patients from healthy subjects. Therefore, some resting-state functional magnetic resonance neuroimaging (R-fMRI) studies have used interregional functional connections as discriminative features. The aim of this study was to investigate ADHD-related spatially distributed discriminative features derived from whole-brain resting-state functional connectivity patterns using machine learning.

Patients and Methods: We measured the interregional functional connections of the R-fMRI data from 40 ADHD patients and 28 matched typically developing controls. Machine learning was used to discriminate ADHD patients from controls. Classification performance was assessed by permutation tests.

Results: The results from the model with the highest classification accuracy showed that 85.3% of participants were correctly identified using leave-one-out cross-validation (LOOCV) with a support vector machine (SVM). The majority of the most discriminative functional connections were located within or between the cerebellum, default mode network (DMN) and frontoparietal regions. Approximately half of the most discriminative connections were associated with the cerebellum. The cerebellum, right superior orbitofrontal cortex, left olfactory cortex, left gyrus rectus, right superior temporal pole, right calcarine gyrus and bilateral inferior occipital cortex showed the highest discriminative power in classification. Regarding the brain-behaviour relationships, some functional connections between the cerebellum and DMN regions were significantly correlated with behavioural symptoms of ADHD (P < 0.05).

Conclusion: This study indicated that whole-brain resting-state functional connections might provide potential neuroimaging-based information for clinically assisting the diagnosis of ADHD.

Keywords: attention deficit hyperactivity disorder, ADHD, resting-state fMRI, R-fMRI, machine learning approach, support vector machine, SVM, leave-one-out cross-validation
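For readers curious what the stated pipeline looks like in code, here is a schematic scikit-learn sketch: an SVM evaluated by leave-one-out cross-validation, with significance assessed by a permutation test. The random features, the 300-connection feature count, and the linear kernel are placeholders; only the subject counts and the SVM/LOOCV/permutation scaffolding mirror the methods described in the abstract.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score, permutation_test_score

rng = np.random.default_rng(0)
n_adhd, n_controls, n_connections = 40, 28, 300            # subject counts from the abstract
X = rng.normal(size=(n_adhd + n_controls, n_connections))  # connectivity features (random here)
y = np.array([1] * n_adhd + [0] * n_controls)              # 1 = ADHD, 0 = control

clf = SVC(kernel="linear")
loocv_acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()

# Permutation test: is the accuracy better than label-shuffled chance?
score, _, p_value = permutation_test_score(
    clf, X, y, cv=LeaveOneOut(), n_permutations=50)  # kept small for speed
print(f"LOOCV accuracy {loocv_acc:.3f}, permutation p = {p_value:.3f}")
```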

