CRISPR gene-editing therapies need more diverse DNA to realize their full potential – Vox.com

Medicine has entered a new era in which scientists have the tools to change human genetics directly, creating the potential to treat or even permanently cure diseases by editing a few strands of troublesome DNA. And CRISPR, the gene-editing technology whose creators won the Nobel Prize for Chemistry in 2020, is the face of this new normal.

CRISPR's novel harnessing of bacterial proteins to target disease-carrying genes has reshaped medical research over the past decade. While gene editing itself has been around for more than 30 years, scientists can use CRISPR to edit genomes faster, cheaper, and more precisely than they could with previous gene-editing methods.

As a result, investigators have gained far more control over where a gene gets inserted and when it gets turned on. That in turn has opened the door to a new class of better gene therapies: treatments that modify or replace people's genes to stop a disease.

Last December, the US Food and Drug Administration approved the first-ever CRISPR-based therapy, designed to treat sickle cell disease. In February, the treatment, called Casgevy, gained approval from the European Commission as well. It joins the dozen or so pre-CRISPR gene therapies already available to patients. In early May, the first patients began to receive treatment.

But there's a significant impediment to maximizing CRISPR's potential for developing novel therapies: the lack of diversity in genetics research.

For decades, gene therapy has been defined both by its enormous therapeutic potential and by the limitations imposed by our imprecise knowledge of human genetics. Even as gene-editing methods, including CRISPR, have become more sophisticated over the years, the data in the genetic databases and biobanks that scientists use to find and develop new treatments are still riddled with biases that could exclude communities of color from enjoying the full benefits of innovations like CRISPR. Unless that gap is closed, CRISPR's promise won't be fulfilled.

Developing effective gene therapies depends on growing our knowledge of the human genome. Data on genes and their correlation with disease have already changed the way cancer researchers think about how to design drugs, and which patients to match with which drug.

Scientists have long known that certain genetic mutations that disrupt regular cell functions can cause cancer to develop, and they have tailored drugs to neutralize those mutations. Genetic sequencing technology has sped that progress, allowing researchers to analyze the genetics of tumor samples from cancer patients after they've participated in clinical trials to understand why some individuals respond better than others to a drug.

In a clinical trial of the colorectal cancer drug cetuximab, investigators found retrospectively that tumors with a mutation in the KRAS gene (which helps govern cell growth) did not respond to treatment. As a result, clinicians are now asked to confirm that patients do not have the mutation in the KRAS gene before they prescribe that particular drug. New drugs have been developed to target those mutations in the KRAS gene.

It's a step-by-step process from the discovery of these disease-related genes to the crafting of drugs that neutralize them. With CRISPR now available to them, many researchers believe that they can speed this process up.

The technology is based on and named after a unique feature in the bacterial immune system that the organism uses to defend itself against viruses. CRISPR is found naturally in bacteria: it's short for Clustered Regularly Interspaced Short Palindromic Repeats, and it functions like a mugshot database for bacteria, containing snippets of genetic code from foreign viruses that have tried to invade in the past.

When new infections occur, the bacterium deploys RNA segments that scan for viral DNA matching the mugshots. Special proteins are then dispatched to chop up the virus and neutralize it.

To develop CRISPR into a biotech platform, this protein-RNA complex was adapted from bacteria and inserted into human and animal cells, where it proved similarly effective at searching for and snipping strands of DNA.

Using CRISPR in humans requires a few adjustments. Scientists have to teach the system to search through human DNA, which means it needs a different mugshot database than the one the bacteria originally carried. Critical to harnessing this natural process is artificial RNA, known as a guide RNA. These guide RNAs are designed to match genes found in humans. In theory, a guide RNA searches for and finds a specific DNA sequence associated with a specific disease. The special protein attached to the guide RNA then acts like molecular scissors to cut the problematic gene.
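To make that search concrete, here is a minimal Python sketch of the kind of scan a guide designer performs: it looks for 20-letter stretches of DNA sitting next to an "NGG" motif, the PAM sequence that the commonly used S. pyogenes Cas9 protein requires beside its target. The DNA string and function name are invented for illustration; real guide-design tools also score off-target matches and many other factors.

```python
# Minimal sketch: find candidate Cas9 target sites in a DNA sequence.
# A guide RNA matches a ~20-nt "protospacer" that must sit immediately
# 5' of a PAM (for S. pyogenes Cas9, the PAM is "NGG").
# The sequence below is invented for illustration.

def find_cas9_sites(dna: str, guide_len: int = 20):
    """Return (position, protospacer) pairs followed by an NGG PAM."""
    sites = []
    for i in range(len(dna) - guide_len - 2):
        pam = dna[i + guide_len : i + guide_len + 3]
        if pam[1:] == "GG":  # "NGG": any base, then two guanines
            sites.append((i, dna[i : i + guide_len]))
    return sites

dna = "ATGCCGGTTAAGCTGGACCTGTACAAAGGGTTTCCAGAAGG"  # toy example
for pos, protospacer in find_cas9_sites(dna):
    print(f"site at {pos}: guide RNA would match {protospacer}")
```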

CRISPR's therapeutic potential was evident in the breakthrough sickle cell treatment approved by the FDA late last year. What made sickle cell such an attractive target is not just that it affects some 20 million people worldwide, but that it is caused by a mutation in a single gene, which makes it simpler to study than diseases caused by multiple mutations. Sickle cell is one of the most common single-gene disorders worldwide, and it was the first disease to be characterized at the genetic level, making it a promising candidate for gene therapy.

In sickle cell disease, a genetic mutation distorts the shape of a person's hemoglobin, the protein that helps red blood cells carry and deliver oxygen from the lungs to tissues throughout the body. For people with sickle cell disease, their red blood cells look like sickles instead of the normal discs. As a result, they can get caught in blood vessels, blocking blood flow and causing issues like pain, strokes, infections, and death.

Since the 1990s, clinicians have observed that sickle cell patients with higher levels of fetal hemoglobin tend to live longer. A series of genome-wide association studies from 2008 pointed to the BCL11A gene as a possible target for therapeutics. These association studies establish the relationships between specific genes and diseases, identifying candidates for CRISPR gene editing.

Casgevy's CRISPR-derived treatment targets that gene, BCL11A. Inactivating it switches red blood cells back to producing normal, non-sickled fetal hemoglobin, which people usually stop making after birth, in place of relying on the mutated adult form.

Of the 45 patients who have received Casgevy since the trials began, 28 of the 29 eligible patients who stayed enrolled long enough to have their results analyzed have been free of severe pain crises. Once the treatment moves out of clinical settings, its exact effects can vary. And if the underlying data sets don't reflect the diversity of the patient population, the gene therapies derived from them might not work the same for every person.

Sickle cell disease as the first beneficiary of CRISPR therapy makes sense: it's a relatively simple disorder that has been studied for a long time. The genetic mutation causing it was identified in 1956. But ironically, the population that could benefit most from Casgevy may miss out on the full benefits of future breakthrough treatments.

Scientists developing CRISPR treatments depend on whats known as a reference genome, which is meant to be a composite representation of a normal human genome that can be used to identify genes of interest to target for treating a disease.

However, most of the available reference genomes are representative of white Europeans. That's a problem because not everybody's DNA is identical: recent sequencing of African genomes shows that they contain 10 percent more DNA than the standard reference genome available to researchers. Researchers have theorized that this is because most modern humans came out of Africa. As populations diverged and reconcentrated, genetic bottlenecks occurred, resulting in a loss of genetic variation compared to the original population.

Most genome-wide association studies are also biased in the same way: They have a lot of data from white people and not a lot from people of color.

So while those studies can help identify genes of importance that could lead to effective treatments for the population whose genes make up the majority of the reference data (i.e., white people), the same treatments may not work as well for nonwhite populations.

"Broadly, there's been an issue with human genetics research: there's been a major under-representation of people of African ancestry, both in the US and elsewhere," said Sarah Tishkoff, professor of genetics and biology at the University of Pennsylvania. "Without including these diverse populations, we're missing out on that knowledge that could perhaps result in better therapeutics or better diagnostics."

Even in the case of the notorious breast cancer gene BRCA1, where a single gene mutation can have a serious clinical impact and is associated with an increased risk of developing cancer, underlying mutations within the gene tend to differ in people of different ancestries, Tishkoff said.

These differences, whether large or small, can matter. Although the vast majority of human genomes are the same, a small fraction of the letters making up our genes can differ from person to person and from population to population, with potentially significant medical implications. Sometimes sequencing turns up genetic variants of unknown significance. These variants could be clinically important, but because of the lack of diversity in previous research populations, no one has studied them closely enough to understand their impact.

"If all the research is being done in people of predominantly European ancestry, you're only going to find those variants," Tishkoff said.

Those limitations affect scientists up and down the developmental pipeline. For researchers using CRISPR technology in preclinical work, the lack of diversity in the genome databases can make it harder to identify the possible negative effect of such genetic variation on the treatments they're developing.

Sean Misek, a postdoctoral researcher at the Broad Institute of MIT and Harvard, launched a project to investigate the differences between the genetic patterns of tumors from patients of European descent and those from patients of African descent. CRISPR has become a versatile tool: beyond treatments, it can be used for diagnostics and basic research. He and his colleagues intended to use CRISPR to screen for those differences because it can evaluate the effects of multiple genes at once, as opposed to the traditional method of testing one gene at a time.

"We know individuals of different ancestry groups have different overall clinical responses to cancer treatments," Misek said. "Individuals of recent African descent, for example, have worse outcomes than individuals of European descent, which is a problem that we were interested in trying to understand more."

What they encountered instead was a roadblock.

When Misek's team tried to design CRISPR guides, they found that their guides matched the genomes in cells from people of European and East Asian ancestry, whose samples make up most of the reference genome, but not in cells from people of South Asian or African ancestry, who are far less represented in databases. In combination with other data biases in cancer research, the guide RNA mismatch has made it more difficult to investigate the tumor biology of non-European patients.

Genetic variations across ancestry groups not only affect whether CRISPR technology works at all, but they can also lead to unforeseen side effects when the tool makes cuts in places outside of the intended genetic target. Such side effects of off-target gene edits could theoretically include cancer.

"A big part of developing CRISPR therapy is trying to figure out if there are off-targets. Where? And if they exist, do they matter?" said Daniel Bauer, an attending physician at Dana-Farber/Boston Children's Cancer and Blood Disorders Center.

To better predict potential off-target edits, Bauer collaborated with Luca Pinello, associate professor at Massachusetts General Hospital and Harvard Medical School, who had helped develop a tool called CRISPRme that makes projections based on personal and population-level genetic variation. To test it, they examined the guide RNA used in the sickle cell disease treatment, and found an off-target edit almost exclusively present in cells donated by a patient of African ancestry.

It is currently unclear whether the off-target edit detected by the CRISPRme tool has any negative consequences. When the FDA approved the sickle cell therapy in December 2023, regulators required a post-marketing study to look into off-target effects. Any off-target edits affecting a person's blood should be easy to detect in blood cells, and drawing blood is easier than collecting cells from an internal organ, for example.

The genetic variant where the off-target effect occurred is found in roughly 1 in 10 people with African ancestry. "The fact that we actually were able to find a donor who carried this variant was kind of luck," Bauer said. "If the cells we were using were only of European ancestry, it would've been even harder to find."

"Most of these [off-target] effects probably won't cause any problems," he said. "But I think we also have these great technologies, so that's part of our responsibility to look as carefully as we can."

These issues recur again and again as investigators hunt for novel treatments. Katalin Susztak, professor of medicine and genetics at the University of Pennsylvania, thinks one promising candidate for a future CRISPR therapy is a standout gene for kidney disease: APOL1.

Researchers identified the gene when they looked into kidney disease risk in African Americans. While genome-wide association studies turned up thousands of distinct genes increasing risk for people of European ancestry, in African Americans this single gene alone was responsible for a 3 to 5 times higher risk of kidney disease, said Susztak.

The APOL1 variant is common among African Americans because it protects against African sleeping sickness, which is spread by the tsetse fly present across much of the continent. This is similar to the story of the sickle cell mutation, which can protect people from malaria.

"The variant is maybe only 5,000 years old, so this variant has not arisen in Europe, Asia, or anywhere else. Just in West Africa," Susztak said. "But because of the slave trades, West Africans were brought to the United States, so millions of people in the United States have this variant."

The variant also predisposes people to cardiovascular disease, high blood pressure, and COVID-related disease, "which maybe explains why there was an increased incidence of deaths in African Americans during COVID than in Europeans," Susztak said. "APOL1 is potentially a very interesting target [for CRISPR] because the disease association is strong."

A CRISPR treatment for kidney disease is currently being investigated, but using the tool comes with complications. Cutting the APOL1 gene would set off an immune response, Susztak noted, so researchers will have to somehow prevent undesirable side effects, or find a related but editable gene, as was done with sickle cell.

An alternative RNA-based strategy utilizing CRISPR is also in the works. DNA must first be transcribed into messenger RNA before it can be translated into proteins. Instead of permanently altering the genome, RNA editing alters the sequence of RNAs, which can then change what proteins are produced. The effects are less permanent, lasting a few months instead of forever, which can be advantageous for treating temporary medical conditions.

And it may turn out that gene therapy is simply not the right approach to the problem. Sometimes a more conventional approach still works best. Susztak said that a small-molecule drug developed by Vertex (one that works like most conventional drugs, as opposed to special classes like gene therapies or biologics) to inhibit the function of the APOL1 protein has shown positive results in early clinical trials.

Even with these limitations, more CRISPR treatments are coming down the pike.

As of early last year, more than 200 people had been treated with experimental CRISPR therapies for cancers, blood disorders, infections, and more. In the developmental pipeline is a CRISPR-based therapeutic from Intellia Therapeutics for transthyretin amyloidosis, a rare condition affecting the heart and nerves. The drug has performed well in early trials and is now recruiting participants for a Phase III study. Another CRISPR drug from Intellia, for hereditary angioedema, a condition that causes severe swelling throughout the body, is slated to enter Phase III later this year.

As the CRISPR boom continues, some research groups are slowly improving the diversity of their genetic sources.

The All of Us program from the National Institutes of Health, which aims to find the biological, environmental, and lifestyle factors that contribute to health, has analyzed 245,000 genomes to date, over 40 percent of which came from participants who were not of European ancestry. The program has already turned up genetic markers for diabetes that had never been identified before.

Then there's the Human Pangenome project, which aims to create a reference genome that captures more global diversity; its first draft was released last May. Another project, the PAGE study, funded by the National Human Genome Research Institute and the National Institute on Minority Health and Health Disparities, is working to include more ancestrally diverse populations in genome-wide association studies.

But at the current pace, experts predict that it will take years to reach parity in our genetic databases. And the scientific community must also build trust with the communities it's trying to help. The US has a murky history with medical ethics, especially around race. Take the Tuskegee experiment, which charted the progression of syphilis in Black American men while hiding the true purpose of the study from the participants and withholding treatment when it became available, or the controversy over Henrietta Lacks's cervical cells, which were taken and used in research without her consent. Those are just two prominent historical abuses that have eroded trust between minority communities and the country's medical system, Tishkoff said. That history has made it more difficult to collect samples from marginalized communities and add them to these critical data sets.

Where the research is being done, where the clinical trials are being held, and who is doing the research can all influence which patients participate. The Human Genetics & Genomics Workforce Survey Report published by the American Society of Human Genetics in 2022 found that 67 percent of the genomic workforce identified as white. Add in the financial pressures of developing new treatments: using an existing reference genome, or a pre-made biobank from past efforts to collect and organize large volumes of biological samples, saves time and money. In the race to bring CRISPR treatments to market, those shortcuts offered valuable efficiency to drug makers.

This means the first generation of CRISPR therapeutics might be blunter instruments than they could otherwise be. If the source genomes can be made to reflect a wider range of people, however, Pinello believes that later generations of CRISPR therapies will be more personalized, and therefore more effective for more people.

Finding the genes and making drugs that work is, of course, momentous, but ultimately that's only half the battle. The other worry physicians like Susztak have is whether patients will be able to afford and access these innovative treatments.

There is still an overwhelming racial disparity in clinical trial enrollment. Studies have found that people of color are more likely to suffer from chronic illness and underuse medications like insulin compared to their white counterparts. Gene therapies easily rack up price tags in the millions, and insurance companies, including the Centers for Medicare and Medicaid Services, are still trying to figure out how to pay for them.

"Because it's the pharmaceutical industry, if they don't turn around profit, if they cannot test the drug, or if people are unwilling to take it, then this inequity is going to be worsened," said Susztak. "We are essentially going to be creating something that makes things worse even though we are trying to help."


Cancer risk, wine preference, and your genes – Harvard Gazette

Molly Przeworski launched into a lecture on genomic trait prediction with disappointing news: Using your genes to read the future is a murky practice.

The Columbia University systems biologist, who visited Harvard last week as the featured speaker in the annual John M. Prather Lecture Series, explained how current approaches to genomic trait prediction in humans are imperfect. In a sample of 150,000 people, she said, more than 600 million positions in the genome differ among individuals. Over the past decade, it has become routine to survey such variation in large samples and try to associate variation in traits, such as height, with these genetic differences. Companies now aim to use DNA profiling to make personal predictions: height, cancer risk, educational attainment, which wine would best suit your palate, and even the right romantic partner.

"There are areas, notably for medical prognosis, where genomic trait prediction may turn out to be useful," said Przeworski, whose lab studies how evolution, natural selection, recombination, and mutation operate in humans and animals. "But by and large, genomic trait prediction is much less informative than these ads and headlines would suggest."

At the moment, she said, the most useful application is not for humans, but rather for studying other species ecological responses to climate change. Her team has used genomic trait prediction among coral species in the Great Barrier Reef to shed light on which are most susceptible to coral bleaching.

In human genetics, the typical approach for associating some trait of interest (height, cancer risk) to specific genes is called a genome-wide association study. The test relates trait variations to genotypes (base pairs AA, AG, etc.) in certain positions on the genome, and fits them to a line.
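As a rough illustration of that "fit them to a line" step, the following Python sketch regresses a simulated trait on genotype dosage (0, 1, or 2 copies of an allele) at each variant, one variant at a time. All data and thresholds are invented; real GWAS pipelines add covariates, mixed models, and stringent multiple-testing corrections.

```python
import numpy as np

# Toy GWAS-style scan: for each variant, regress the trait on genotype
# dosage (0, 1, or 2 copies of the alternate allele) and keep the slope.
rng = np.random.default_rng(0)
n_people, n_variants = 1000, 50
genotypes = rng.integers(0, 3, size=(n_people, n_variants)).astype(float)
true_effects = np.zeros(n_variants)
true_effects[3] = 0.5                      # one variant truly matters
trait = genotypes @ true_effects + rng.normal(size=n_people)

for j in range(n_variants):
    g = genotypes[:, j]
    slope, intercept = np.polyfit(g, trait, 1)  # fit a line, as in GWAS
    if abs(slope) > 0.3:                        # crude threshold for the demo
        print(f"variant {j}: estimated effect {slope:.2f}")
```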

However, many traits are associated with a large number of genetic variants. For example, one study Przeworski cited found 12,000 unique positions in the genome at which changing one base-pair letter would have a small effect on one's height. What's more, environmental factors, such as nutrition, also affect height.

"I think a lot of us have this implicit model of what genomic trait prediction should mean: that we understand something about how that genetic variant affects the protein, affects the cellular phenotype, affects development, and therefore affects height," she said. "In practice, for almost all complex traits, we are very, very far from that. All we really have is this massive correlational study."

So if genomic prediction is murky, why bother? Przeworski admitted to asking herself the same question years ago, and to investigating contexts in which confounding genetic clues wouldn't matter as much as simply making a helpful, reliable prediction. "It occurred to me we could make predictions about ecologically important traits in the response to climate change," she said.

She spent part of her talk describing how her lab followed up, partnering with Australian scientists who study how ocean warming affects coral reefs. Due to temperature-related disruptions in the symbiotic relationship between certain coral species and the algae they farm, some colonies lose their pigment and become bleached, which stunts growth and leads to colony death. Przeworski's team has used their expertise in genomic trait prediction to build models that determine which corals are most vulnerable to bleaching.

"As it becomes more straightforward to collect genomic information, I think its greatest promise may be in applications outside humans," she said.

The lecture was co-sponsored by the Department of Organismic and Evolutionary Biology, the Harvard Museum of Natural History, and the Harvard Museums of Science and Culture.


TMS student performs in "A Wrinkle in Time" – Hometown Weekly

By Madison Butkus

Hometown Weekly Reporter

Westwood resident and Thurston Middle School (TMS) student Eviva Hertz is currently performing in her third professional play, "A Wrinkle in Time," at Wheelock Family Theatre (WFT) in Boston, MA. The play is adapted from Madeleine L'Engle's much-loved classic tale, and Hertz plays the character of Charles Wallace.

Marketing Specialist at WFT, Jenna Corcoran, went on to write, "In A Wrinkle in Time, directed by Regine Vital, one of literature's most enduring young heroines, Meg Murry, is back, stubbornness and all. Joining forces with her baby brother Charles Wallace, friend and neighbor Calvin O'Keefe, and the celestial beings Mrs. Whatsit, Mrs. Who, and Mrs. Which, she must battle the forces of evil in order to rescue her father, save humanity, and discover herself. Traveling through time and space, Meg must save both her father and the world from IT, an evil force that is rapidly casting its shadow over the universe. But what does Meg have that IT does not? Love. For in the end, love is enough to overcome evil and bring IT's dark world crashing down. One dark and stormy night, the eccentric Mrs. Whatsit arrives at the home of Meg Murry, a young teen who doesn't fit in at her New England high school. Meg's scientist father vanished over two years ago, under mysterious circumstances. Aided by Mrs. Whatsit and her friends, Meg, her gifted brother Charles Wallace, and her friend Calvin are transported through time and space on a mission to rescue their father from the evil forces that hold him prisoner on another planet."

When talking with Hertz, she explained that her love for the theatre started through her older sister, who began taking classes at WFT. Upon taking a class there herself, she quickly fell in love with WFT and all that it had to offer. "Wheelock is really amazing and fantastic," Hertz stated, "because it is a family theatre and they really care about the kids. We get our own full dressing rooms and changing/bathroom areas which is really nice. Everyone there is really kind and just based towards a family experience and atmosphere. It is also very much an intergenerational theatre which is really comforting."

While Hertz is playing a boy character in this production, she mentioned that this is rather usual for her. "I am almost always the character of a boy," she explained, "in a majority of the productions that I do. For Charles Wallace specifically, I really try to tap into that younger part of myself to really get into the boy character. I also get into the 'I'm so smart' and know-it-all mindset just as is seen within his character. As I have said to our amazing director Regine Vital multiple times, Charles Wallace is magic but he thinks about himself in a scientific way, where he knows the answers to the questions but he doesn't understand them. So I try to channel all that when becoming his character."

Hertz's love for the theatre is abundantly clear, but she is rather realistic about it when it comes to her future endeavors. "I don't want it to become a full-time job," she mentioned, "because I know it is pretty hard to sustain yourself on just acting. I actually want to become a therapist when I grow up but I do want to continue acting on the side for as long as I can. I just love the feel of being on stage, the rehearsal process, and meeting new friends."

She further described just how much it means to her to be able to live out her dreams of acting on a professional stage. "This continual experience means a lot to me," she stated, "every time I go on stage to perform. I love letting other people experience the magic of the theater and, you know, the audience really does a lot of the work. They are so important to the show. And it really means a lot to me that I get to be another person and experiment on different ways I can portray different characters. I get to share with people the tiny parts of myself that I didn't know I had but that I now love to show off to them. Through the theatre, I am finding out things about myself that I had never realized before, and through performing, I feel a sense of comfort that I can show them."

"A Wrinkle in Time" made its opening debut at WFT on April 13th and will continue to run until May 11th. While this is Hertz's third professional play at WFT, she will return there this summer to perform the role of Mamillius in "The Winter's Tale."

For more information about WFT and/or to get tickets for A Wrinkle in Time, please visit their website at http://www.wheelockfamilytheatre.org. Here at Hometown Weekly Newspaper, we would like to congratulate Hertz on her role and wish her good luck in her upcoming performances!


Deep learning-based classification of anti-personnel mines and sub-gram metal content in mineralized soil (DL-MMD … – Nature.com

The experimental arrangement in MMD is a prime factor that defines the integrity of the dataset. The dataset was obtained in a lab environment with a PI-sensitive coil made of multi-stranded wire, with a coil diameter of 170 mm. It is mounted on a transparent acrylic sheet with a miniaturized Tx/Rx (also mounted) at a distance of 100 mm. The electromagnetic field (EMF) simulation of the search head in close proximity to a mine is shown in Fig. 7. The received signal is digitized, and synchronized data is obtained for both the transmitted positive and negative pulses. The dataset is then populated with this synchronized pulse data. The pulse repetition frequency, including both pulses, is 880 Hz. The number of pulses M (refer to Eq. (1)) obtained per class is 1330, representing concatenated positive and negative pulses. This is done to simplify the model, with the total number of concatenated samples being N = 244, consisting of 122 samples from each received pulse. This amounts to approximately 3 s of pulsed data per class.

Electromagnetic field simulation of the search head (a) and of the search head in proximity of a mine (b).

The samples/targets used to represent the nine classes (previously discussed) include minrl/brick (mineralized soil), sand (non-mineralized soil), APM (standard, 0.2 g), and vertical paper pins (0.2 g). Mineralization is an indication of the magnetic permeability (or susceptibility) of surface soils; soils that have been exposed to high temperatures and heavy rainfall or water for extended periods often exhibit high mineralization due to the presence of residual iron components. For an in-depth exploration of magnetic susceptibility across a wide range of soil types, see ref. 18. The choice of brick, a clay-based material, as a representative sample for mineralized soil is grounded in its unique composition: it contains iron-oxide minerals, such as magnetite or hematite, and exhibits relatively low electrical conductivity (ref. 19). These characteristics significantly enhance its detectable response when subjected to an MMD. In fact, this response is typically more robust than that of conventional mineralized soil (from which it originates) or even an APM. For the sake of simplicity and consistency, we will refer to this material as "minrl" throughout this paper.

All of the targets mentioned pose their own challenges, but they are placed in close proximity to the MMD, within a distance of no more than 20 mm, parallel to the surface of the coil. The targets are positioned at the center of the coil. The received signals from different target samples for the positive and negative transmitted pulses can be observed in Figs. 8 and 9, respectively. The figures display a magnified section of the received signal, focusing on the initial samples, which are more strongly influenced by the secondary magnetic field than later samples. It can also be seen that the signals vary in opposite directions according to the polarity of the transmitted pulses.

Received signals of a positive transmitted pulse picked up at the sensor coil from the secondary magnetic field produced by the eddy currents induced within the targets. The x-axis shows the first few samples (initial part of the signal) per pulse and the y-axis shows the amplitude of the signal in volts. Signals from nine targets are shown: air, APM, pins, minrl, minrl+APM, minrl+pins, sand, sand+APM, and sand+pins.

Received signals of a negative transmitted pulse picked up at the sensor coil from the secondary magnetic field produced by the eddy currents induced within the targets. The x-axis shows the first few samples (initial part of the signal) per pulse and the y-axis shows the amplitude of the signal in volts. Signals from nine targets are shown: air, APM, pins, minrl, minrl+APM, minrl+pins, sand, sand+APM, and sand+pins.

The overall dataset comprises a total of 11,970 pulses, representing nine different classes. The dataset is sufficiently diverse, as illustrated in Fig. 10 by examining inter-class distances. For this analysis, two distances are employed: Euclidean distance, which measures point-to-point distance, and Bhattacharyya distance, a metric indicating dissimilarity between two probability distributions. Two cases will be briefly discussed here: one involving the Euclidean distance between air and pins, where the maximum distance is observed, as depicted in Fig. 10 and also evident in the received signals shown in Figs. 8 and 9. The second case pertains to the Bhattacharyya distance between air and sand, illustrating minimal dissimilarity. The impact of this dissimilarity will become evident in the overall results. To prepare this dataset for modelling, the pulses are randomly shuffled and subsequently split into two separate sets: a training dataset containing 10,773 pulses and a validation dataset comprising 1197 pulses.

Shows inter-class similarity through Euclidean and Bhattacharyya distances.
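Both distance measures are straightforward to compute. The sketch below, with invented arrays standing in for the real per-class pulse matrices, uses the Euclidean distance between class means and the Bhattacharyya distance under a diagonal-Gaussian approximation of each class; the paper's exact estimator may differ.

```python
import numpy as np

def euclidean(mu1, mu2):
    """Point-to-point distance between two class-mean vectors."""
    return np.linalg.norm(mu1 - mu2)

def bhattacharyya(mu1, var1, mu2, var2):
    """Bhattacharyya distance between diagonal Gaussians."""
    var = (var1 + var2) / 2
    term1 = 0.125 * np.sum((mu1 - mu2) ** 2 / var)
    term2 = 0.5 * np.sum(np.log(var / np.sqrt(var1 * var2)))
    return term1 + term2

rng = np.random.default_rng(1)
air  = rng.normal(0.0, 1.0, size=(1330, 244))   # stand-in 'air' pulses
pins = rng.normal(0.4, 1.2, size=(1330, 244))   # stand-in 'pins' pulses
print(euclidean(air.mean(0), pins.mean(0)))
print(bhattacharyya(air.mean(0), air.var(0), pins.mean(0), pins.var(0)))
```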

During the model training phase, input data is structured as a matrix with dimensions [10,773 × 244], and the output, following a supervised learning approach, is provided as a one-hot encoded label matrix with dimensions [10,773 × 9]. The accuracy of the trained model on the provided data is tracked across multiple epochs, including both training and validation accuracy. In this training process, one epoch signifies a complete iteration over the entire training dataset of size [10,773 × 244], with all training samples processed by the model. Figure 11 depicts the trend, showing that as the training process repeats over multiple epochs, the model steadily enhances its performance and optimizes its parameters. After 4000 epochs, the training accuracy reaches approximately 98%, while the validation accuracy hovers above 93%. This also shows that the DL-MMD model has more or less converged at 4000 epochs, achieving the optimum training performance. Likewise, it's evident that the model's error loss diminishes with the progression of epochs, as illustrated in Fig. 12.

Shows the accuracy and validation accuracy of the novel DL-MMD model versus epochs. For comparison, the validation accuracies of the KNN and SVM classifiers are also shown for k = 8 and C = 100, respectively.

Shows the loss and validation loss of the novel DL-MMD model versus epochs.
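The excerpt does not spell out the DL-MMD architecture itself, so the following TensorFlow sketch only illustrates the training setup described here: a [10,773 × 244] input matrix, [10,773 × 9] one-hot labels, and accuracy tracked per epoch. The small dense network and the random placeholder data are assumptions for demonstration.

```python
import numpy as np
import tensorflow as tf

# Placeholder data matching the shapes in the text.
x_train = np.random.randn(10773, 244).astype("float32")
y_train = tf.keras.utils.to_categorical(
    np.random.randint(0, 9, size=10773), num_classes=9)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(244,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(9, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
# One epoch = one full pass over the [10,773 x 244] training matrix;
# validation_split=0.1 mirrors the 90:10 split described in the text.
model.fit(x_train, y_train, epochs=10, validation_split=0.1, verbose=2)
```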

Figure 11 also shows that the presented model performs substantially better than support vector machine (SVM) and K-Nearest Neighbors (KNN) classifiers. The main working principle of SVM is to separate the classes in the training set with a surface that maximizes the margin (decision boundary) between them. It uses the Structural Risk Minimization (SRM) principle, which allows the minimization of a bound on the generalization error (ref. 20). The SVM model used in this research achieved a training accuracy of 93.6% and a validation accuracy of 86.5%, far lower than the performance achieved by the presented model. The kernel function used is the most popular one, i.e., the radial basis function (RBF), and the optimally selected value of the regularization parameter C is 100. The regularization parameter controls the trade-off between classifying the training data correctly and the smoothness of the decision boundary. Figure 13 shows the influence of the regularization parameter C on the performance of the classifier. The gamma is automatically calculated from the inverse of the number of features, which ensures that each feature contributes equally to the decision boundary. Hyperparameter optimization is achieved through a manual grid search: the code iterates through a predefined list of C values [0.1, 1, 10, 100, 1000, 10000], and for each value of C it trains an SVM classifier with an RBF kernel and evaluates its performance on the training and test sets. The accuracy and C values are then plotted to visually check the best performance. It can be seen that the generalization error increases when the value of C is greater than 100; the SVM starts to overfit the training data, resulting in a decrease in validation accuracy.

Shows the accuracy of SVM classifier versus regularization parameter C.
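A minimal scikit-learn version of the manual grid search described above might look like the following. The random arrays are placeholders for the real training and validation matrices, and gamma="auto" reproduces the 1/n_features setting mentioned in the text.

```python
import numpy as np
from sklearn.svm import SVC

# Placeholder data standing in for the real [pulses x samples] matrices.
rng = np.random.default_rng(0)
x_train, y_train = rng.normal(size=(500, 244)), rng.integers(0, 9, 500)
x_val, y_val = rng.normal(size=(100, 244)), rng.integers(0, 9, 100)

best_c, best_acc = None, 0.0
for c in [0.1, 1, 10, 100, 1000, 10000]:
    clf = SVC(kernel="rbf", C=c, gamma="auto").fit(x_train, y_train)
    acc = clf.score(x_val, y_val)      # validation accuracy for this C
    if acc > best_acc:
        best_c, best_acc = c, acc
print(f"best C = {best_c}, validation accuracy = {best_acc:.3f}")
```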

The K-Nearest Neighbors (KNN) model with 8 neighbors (k = 8) achieved a training accuracy of 92.6% and a validation accuracy of 90.7% (see Fig. 11), which is lower than the performance achieved by the presented model. To enable comparative analysis, it is essential to showcase the performance of this non-parametric machine learning algorithm. In this context, the algorithm predicts the value of a new data point by considering the majority vote or average of its k nearest neighbors within the feature space (ref. 21). Figure 14 illustrates the influence of the hyperparameter k, the number of neighbors, on the performance of the algorithm. The graph demonstrates that the validation accuracy reaches a maximum of 90.7% when 8 neighbors are considered.

Shows the accuracy of KNN classifier versus number of neighbors k.
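The corresponding sweep over k, mirroring Fig. 14, could be sketched as follows, again with random placeholder data in place of the real pulses.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
x_train, y_train = rng.normal(size=(500, 244)), rng.integers(0, 9, 500)
x_val, y_val = rng.normal(size=(100, 244)), rng.integers(0, 9, 100)

# Validation accuracy as a function of the number of neighbors k.
for k in range(1, 16):
    knn = KNeighborsClassifier(n_neighbors=k).fit(x_train, y_train)
    print(f"k = {k:2d}: validation accuracy = {knn.score(x_val, y_val):.3f}")
```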

To further analyze the DL-MMD model against the experimental data, one more graph has been plotted, shown in Fig. 15. This graph illustrates the comparative performance of the presented model using a different data split ratio (70:30), with 70% for training and 30% for validation. The graph shows slightly degraded performance compared to the 90:10 split of 90% for training and 10% for validation. However, it still shows a validation accuracy above 88% at 4000 epochs. This degradation is attributed to epistemic uncertainty (model uncertainty) due to slightly less effective learning on reduced training data; as the training data increases, this uncertainty reduces.

Shows the accuracy and validation accuracy of the novel DL-MMD model versus epochs at two different data split ratios, i.e., 90:10 and 70:30.

The performance of the model can also be inferred from the confusion matrix shown in Fig. 16. It provides a tabular representation of the predicted and actual class labels, giving a very important analysis of the model in terms of true positives, true negatives, false positives, and false negatives. From an application perspective, the safety of the MMD user is of utmost importance, for which false negatives matter most: a mine must not be missed. The overall prediction accuracy is above 93.5%; however, for air and sand it is approximately 85% and 86.5% respectively, as inferred from the confusion matrix. These two cases of relatively low prediction accuracy can be neglected, since sand is only wrongly classified as air, and vice versa. These two classes (air and sand) do not trigger any detection alarm in an MMD, so misclassifying them does not impact the efficiency of the DL-MMD classifier. It also highlights the fact that river sand has minimal mineralized content and is generally designated as non-mineralized soil. It is therefore difficult to separate the boundary between these two classes in the presence of noise and interference.

Confusion matrix of the proposed DL-MMD classification on 9 classes.
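Per-class accuracies like those quoted above can be read off the confusion matrix diagonal. A small sketch, with invented label vectors standing in for the model's validation-set predictions:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

classes = ["air", "APM", "pins", "minrl", "minrl+APM",
           "minrl+pins", "sand", "sand+APM", "sand+pins"]
rng = np.random.default_rng(0)
y_true = rng.integers(0, 9, size=1197)
# Simulate ~90% correct predictions for the demo.
y_pred = np.where(rng.random(1197) < 0.9, y_true, rng.integers(0, 9, 1197))

cm = confusion_matrix(y_true, y_pred, labels=range(9))
recall = cm.diagonal() / cm.sum(axis=1)   # per-class recall (row-wise)
for name, r in zip(classes, recall):
    print(f"{name:11s} {r:.1%}")
```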

In addition to this, two further cases need to be examined: one involves mineralized soil (minrl) being wrongly classified as APM, and the other involves APM in sand (sand+APM) being wrongly classified as minrl. The first case is a false positive: it will generate a false alarm and waste the user's time by requiring unnecessary further investigation. The second case, a false negative in which an APM is detected but wrongly classified by the DL-MMD, is more important and will be discussed in the next section. Apart from these, there are minor cases, e.g., an APM misclassified as APM in sand (sand+APM); these have no impact, since the target of concern (APM) remains the same but is now shown as buried in sand. The occurrence of all these misclassification cases (apart from the air/sand case and vice versa) is less than approximately 5%.

These results have been obtained with a substantial dataset based on actual data acquired in two sets of 665 pulses per class, each obtained at a different time through the experimental setup explained previously, and then combined. Comprehensive simulations have been carried out in the TensorFlow environment to evaluate the proposed method. In addition, the algorithm has been extensively tested with an increased number of layers and channels, resulting in overfitting. Furthermore, the proposed model has been tested with different optimizers, such as Adagrad, Adamax, and Adam. The comparative analysis of Adam and Adamax can be seen in Fig. 17. Both show equivalent performance after 2000 epochs.

Shows the accuracy and validation accuracy of the novel DL-MMD model versus epochs using two different optimizers, Adamax and Adam.

In addition to the aforementioned analysis, the dataset underwent evaluation using other prevalent classification algorithms (ref. 22) that utilize the principle of ensemble learning. Upon comparison, the proposed deep learning architecture exhibited superior performance, achieving an accuracy exceeding 90%. The confusion matrices of these classification algorithms, AdaBoost and Bagged tree, are depicted in Figs. 18, 19, and 20, with the dataset partitioned in an 80/20 ratio, resulting in accuracies of 75.4%, 80%, and 83.3%, respectively. AdaBoost was employed without PCA, utilizing a maximum number of splits and learners set to 30, with a learning rate of 0.1. For Bagged tree, only Model 2 underwent preprocessing with PCA at a variance of 95%. Both utilized the same number of learners as AdaBoost and a maximum split of 11,969.

Confusion matrix of model 1 (AdaBoost).

Confusion matrix of model 2 (Bagged Tree).

Confusion matrix of model 3 (Bagged Tree).

It is pertinent to mention that there is always redundant information within the received signal that creates background bias, especially in sensitive areas with low metal content. Information regarding the detection of APM mines buried at different depths is available (in the decay-rate parameter), but it is not utilized. Therefore, for an APM buried at a depth (relative to the search head) different from the one it was trained on, there is a chance that it can be misclassified. The information exists, but it needs to be pre-processed before the signal is fed to the model. One approach could be to use focused AI models, similar to those shown in ref. 23, that inject synthetic bias into the signal to generalize the model, in our case, at different depths. Another approach could be to localize the area with different decay rates, similar to the one shown in ref. 24 for a 2D image application. One avenue of future work will be to utilize this information and integrate it into the DL-MMD architecture.


Enhancing cervical cancer detection and robust classification through a fusion of deep learning models | Scientific … – Nature.com

Dataset description

The dataset we used for this study is accessible through this link: https://www.cs.uoi.gr/~marina/sipakmed.html. It contains five different cell types, as detailed in ref. 24. In our research, we've transformed this dataset into a two-class system with two categories: normal and abnormal. Specifically, the normal category includes superficial intermediate cells and parabasal cells, while the abnormal category covers koilocytotic, dyskeratotic, and metaplastic cell types (ref. 25). Within the normal category, we've further divided cells into two subcategories: superficial intermediate cells and parabasal cells. The essential dataset characteristics are summarized in Table 2. The SIPaKMeD dataset comprises a total of 4068 images, with 3254 allocated for training (80% of the total) and 813 set aside for testing (20% of the total). This dataset consists of two distinct classes: normal images, totalling 1618, and abnormal images, totalling 2450. Figure 2 provides visual examples of images from these two categories. The existing literature extensively covers different screening methods for cervical cancer, such as the Pap smear, colposcopy, and HPV testing, emphasizing the importance of early detection. However, a significant gap exists in automated screening systems using pap smear images. Traditional methods rely on expert interpretation, but integrating deep learning (DL) and machine learning (ML) offers potential for intelligent automation. Despite this potential, few studies focus on developing and evaluating such systems specifically for cervical cancer prediction using pap smear images. This research addresses this gap by proposing a methodology that utilizes pre-trained deep neural network models for feature extraction and applies various ML algorithms for prediction. The study aims to advance automated screening systems for cervical cancer, improving early detection and patient outcomes.

Proposed model for cervical cancer classification.

The schematic representation of our proposed system can be observed in Fig. 2. To facilitate the classification task for cervical cancer, we employ the SIPaKMeD dataset, which comprises images of pap smears. This dataset is categorized into two groups, abnormal and normal, with a distribution of 60% for training and 40% for testing. To extract relevant feature sets from well-established CNN architectures such as AlexNet, ResNet-101, ResNet-152, and InceptionV3, we initiate feature extraction from these pretrained CNN models. This step allows us to gather valuable information from the final-layer activation values. For the task of classifying images into normal and abnormal categories, we leverage a variety of machine learning techniques, including simple logistic regression, Decision Tree, Random Forest, Naive Bayes, and Principal Component Analysis. Our approach is designed as a hybrid strategy, merging both DL and ML methodologies. DL enables our model to capture the intricate and complex features inherent in the data, while ML provides the flexibility to handle diverse scenarios. By harnessing the last layer of pretrained models for feature extraction, we enable different machine learning algorithms to classify data based on these extracted attributes. This combination of DL and ML enhances our system's ability to effectively categorize cervical cancer cases.
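A compact sketch of this feature-extraction-plus-classifier pipeline is shown below. ResNet50 stands in for the ResNet variants named in the text, and the random arrays are placeholders for preprocessed pap-smear images and labels; the study's actual preprocessing and classifier settings may differ.

```python
import numpy as np
import tensorflow as tf
from sklearn.linear_model import LogisticRegression

# Placeholders for preprocessed images (values in [0, 1]) and labels.
images = np.random.rand(32, 224, 224, 3).astype("float32")
labels = np.random.randint(0, 2, size=32)      # 0 = normal, 1 = abnormal

# Pretrained backbone with the classification head removed; global
# average pooling yields one feature vector per image.
backbone = tf.keras.applications.ResNet50(
    include_top=False, pooling="avg", weights="imagenet")
x = tf.keras.applications.resnet50.preprocess_input(images * 255.0)
features = backbone.predict(x, verbose=0)      # shape [32, 2048]

# Classical ML classifier trained on the extracted features.
clf = LogisticRegression(max_iter=1000).fit(features, labels)
print("training accuracy:", clf.score(features, labels))
```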

The pre-trained model has undergone training on a larger dataset, acquiring specific weights and biases that encapsulate the dataset's distinctive characteristics. This model has been effectively employed for making predictions based on data. The transferability of learned features to other datasets is possible because certain fundamental abstract properties remain consistent across various types of images. By utilizing pre-trained models, significant time and effort savings are achieved, as a substantial portion of the feature extraction process has already been completed. Noteworthy examples of pre-trained models include Resnet152, ResNet101, Inceptionv3, and Alexnet, which are summarized in Table 3 for reference.

The image classification framework based on ResNet-101 consists of two main parts: feature extraction and feature classification. In Fig. 3, you can see how the feature extractor is built, comprising five main convolution modules with a total of one hundred convolution layers, an average pooling layer, and a fully connected layer (ref. 26). Once the features are extracted, they are used to train a classifier with a Softmax structure. Table 4 lists the convolution layers and their configurations in the ResNet-101 backbone. Using shortcut connections to increase data dimensions, the ResNet-101 model significantly improves performance by increasing convolutional depth. These shortcut connections also address the degradation problem caused by network depth by enabling identity mapping. For most binary classification tasks, the loss function is the logistic cross-entropy function, as shown in Eq. (1).

$$k_{(h_l,\,q_l)}^{b} = -f_l \log(q_l) - (1 - f_l)\log(1 - q_l) \tag{1}$$

where $f_l$ and $q_l$ denote the ground-truth and predicted values for the $l$th training example, respectively. The loss value $k_{(h_l,\,q_l)}^{b}$ is then backpropagated through the CNN model. At the same time, the CNN model parameters (weights and biases) are gradually optimised during each epoch. This process continues until the loss is minimised and the CNN model converges to a solution.
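As a quick numeric check of Eq. (1), the mean binary cross-entropy for a few invented label/prediction pairs:

```python
import numpy as np

# Binary cross-entropy of Eq. (1): f = ground-truth labels,
# q = predicted probabilities (values invented for the check).
f = np.array([1.0, 0.0, 1.0])
q = np.array([0.9, 0.2, 0.6])
loss = -(f * np.log(q) + (1 - f) * np.log(1 - q)).mean()
print(loss)  # ~0.28
```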

The ResNet architecture is efficient, enabling the training of very deep neural networks (DNNs) and enhancing accuracy. It addresses the accuracy degradation associated with increasing network depth: when depth is increased, accuracy often drops. Deeper networks can nevertheless improve accuracy by avoiding the saturation of shallow networks, where errors remain minimal (ref. 27). The key idea is that information from one layer should flow easily to the next with the help of identity mapping. ResNet tackles the degradation problem, along with the vanishing gradient issue, using residual blocks, which handle the residual computation while considering the input and output of the block. Figure 4 illustrates the architecture of ResNet-152, and Table 5 its configuration.

This advanced model has undergone training by one of the industry's most renowned hardware specialists, leveraging an impressive repertoire of over 20 million distinct parameters. The model's architecture is a blend of symmetrical and asymmetrical building blocks, each crafted with its own set of convolutional, average- and max-pooling layers, concatenation operations, and fully connected layer configurations. Furthermore, the design incorporates an activation layer that takes advantage of batch normalization, a widely adopted technique that helps stabilize and accelerate training, making the model more robust and efficient (ref. 28). For the critical task of classification, the model employs the Softmax method, a well-established approach that produces probability distributions over multiple classes, enabling informed and precise predictions. To provide a visual understanding of the Inception-V3 model's design, Fig. 5 serves as a diagrammatic representation, offering insights into the model's underlying architecture and its various components.

InceptionV3 architecture.

The field of machine learning, particularly image processing, has witnessed a profound impact thanks to the advent of AlexNet. This influential model is a preconfigured Convolutional Neural Network (CNN) with a total of eight distinct layers (ref. 29). Its remarkable performance in the 2012 ImageNet Large Scale Visual Recognition Challenge (ILSVRC-2012) marked a watershed moment, as it clinched victory with a substantial lead over its competitors. The architectural blueprint of AlexNet bears some resemblance to Yann LeCun's pioneering LeNet, highlighting its historical lineage and the evolutionary progress of convolutional neural networks.

Figure 6 provides a visual representation of the holistic design of the AlexNet system. Within AlexNet, input data traverse an intricate sequence comprising five convolution layers and three max-pooling layers. These layers play a pivotal role in feature extraction and hierarchical representation, vital aspects of image analysis and understanding. The network culminates in the SoftMax activation function in the final layer, enabling it to produce probabilistic class predictions. Along the way, the Rectified Linear Unit (ReLU) activation function is employed across all of the network's convolution layers, providing a nonlinear transformation that enhances the network's capacity to learn and extract features effectively. This combination of architectural elements and activation functions has solidified AlexNet's position as a groundbreaking model in image processing and machine learning.

Logistic regression serves as a powerful method for modelling the probability of a discrete outcome based on input variables, making the choice of input variables a pivotal aspect of the modelling process. The most common application involves modelling a binary outcome, where the result can assume only one of two possible values, such as true or false, or yes or no. In situations with more than two discrete potential outcomes, multinomial logistic regression captures the added complexity. Logistic regression finds its primary utility in classification problems (ref. 30). It becomes particularly valuable when the task is to determine which category a new sample best aligns with, especially when dealing with substantial datasets where data must be classified efficiently and accurately. One noteworthy domain of application is cybersecurity, where classification challenges are ubiquitous: in the detection of cyberattacks, for instance, logistic regression plays a crucial role in identifying and categorizing potential threats, contributing significantly to the security of digital systems and networks.

In the realm of supervised learning algorithms, decision trees emerge as a highly versatile and powerful tool for both classification and regression tasks. They operate by constructing a tree-like structure, wherein internal nodes serve as decision points, branches represent the outcomes of attribute tests, and terminal nodes store class labels. The construction of a decision tree is an iterative process, continually dividing the training data into subsets based on attribute values until certain stopping conditions, such as reaching the maximum tree depth or the minimum sample size required for further division, are met. To guide this division process, the Decision Tree algorithm relies on metrics like entropy or Gini impurity, which gauge the level of impurity or unpredictability within the data subsets (ref. 31). These metrics inform the algorithm's choice of the most suitable attribute for data splitting during training, aiming to maximize information gain or minimize impurity. In essence, the internal nodes of a decision tree represent the features, the branches encapsulate the decision rules, and the leaf nodes hold the algorithm's outcomes. This design accommodates both classification and regression challenges, making decision trees a flexible tool in supervised machine learning. One notable advantage of decision trees is their effectiveness in handling a wide range of problems. Moreover, their ability to be leveraged in ensembles, such as the Random Forest algorithm, enables simultaneous training on multiple subsets of data, elevating their efficacy and robustness in real-world applications.

A Random Forest is a powerful machine learning tool that handles both regression and classification tasks effectively. It works by combining the predictions of multiple decision trees to solve complex problems. Here's how it works: The Random Forest algorithm builds a forest of decision trees using a technique called bagging. Bagging improves the precision and reliability of machine learning ensembles32. The algorithm then makes predictions by averaging the results from these trees, determining the final outcome. What makes the Random Forest special is its scalability. Unlike single decision trees, it can adapt to complex data and improves its accuracy as you add more trees to the forest. The Random Forest also helps prevent overfitting, making it a valuable tool for real-world applications with noisy and complex datasets. Moreover, it reduces the need for extensive fine-tuning, making it an appealing choice for practitioners seeking effective and dependable machine learning models.
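A brief sketch of that scaling behaviour, using scikit-learn on a synthetic noisy dataset (the sizes and scores here are illustrative, not drawn from any cited study):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Noisy synthetic classification task.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=5,
                           flip_y=0.05, random_state=0)

# Each tree is trained on a bootstrap sample of the data (bagging), and the
# forest averages the trees' predictions; more trees typically stabilize accuracy.
for n_trees in (10, 100, 300):
    forest = RandomForestClassifier(n_estimators=n_trees, random_state=0)
    print(n_trees, "trees:", cross_val_score(forest, X, y, cv=5).mean())
```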

Bayes' theorem forms the fundamental principle underlying the Naive Bayes algorithm. The method's key assumption is that there is no interdependence among the feature pairs, resulting in two pivotal presumptions: feature independence and attribute equality. Naive Bayes classifiers are versatile, existing in three primary variants: Gaussian Naive Bayes, Bernoulli Naive Bayes, and Multinomial Naive Bayes33. The choice of variant depends on the nature of the data being analyzed: Bernoulli Naive Bayes handles binary data, Multinomial Naive Bayes handles count data, and Gaussian Naive Bayes handles continuous data. Equation (2) states Bayes' theorem, which underpins the mathematical foundations of this approach.

$$Z\left(b \mid a\right) = \frac{Z\left(a \mid b\right)\, Z\left(b\right)}{Z\left(a\right)}$$

(2)
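The three variants mentioned above are all available in scikit-learn; a small sketch with invented class-dependent data of each type:

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB, GaussianNB, MultinomialNB

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200)                    # two classes

# Class-dependent synthetic features of each type (invented for illustration):
X_cont = rng.normal(loc=y[:, None], size=(200, 3))                    # continuous
X_bin = (rng.random((200, 5)) < 0.3 + 0.4 * y[:, None]).astype(int)   # binary
X_cnt = rng.poisson(lam=2 + 3 * y[:, None], size=(200, 5))            # counts

print(GaussianNB().fit(X_cont, y).score(X_cont, y))     # continuous -> Gaussian NB
print(BernoulliNB().fit(X_bin, y).score(X_bin, y))      # binary -> Bernoulli NB
print(MultinomialNB().fit(X_cnt, y).score(X_cnt, y))    # counts -> Multinomial NB
```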

Principal Component Analysis (PCA) serves as a powerful technique designed to mitigate the impact of correlations among variables through an orthogonal transformation. PCA finds widespread use in both exploratory data analysis and machine learning for predictive modelling. In addition, PCA stands out as an unsupervised learning algorithm that offers a valuable approach for delving into the intricate relationships between variables. This method, also referred to as generic factor analysis, enables the discovery of the optimal line of fit through regression analysis34. What sets PCA apart is its ability to reduce the dimensionality of a dataset without prior knowledge of the target variables while preserving the most critical patterns and interdependencies among the variables. By doing so, PCA simplifies complex data, making it more amenable for various tasks, such as regression and classification. The result is a more streamlined subset of variables that encapsulates the essential essence of the data.
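A minimal sketch of that dimensionality reduction, assuming synthetic correlated data rather than any dataset from the paper:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Correlated 3-D data that actually lies close to a 2-D plane.
latent = rng.normal(size=(500, 2))
X = latent @ rng.normal(size=(2, 3)) + 0.05 * rng.normal(size=(500, 3))

pca = PCA(n_components=2).fit(X)          # orthogonal transformation, unsupervised
X_reduced = pca.transform(X)              # streamlined lower-dimensional subset

print(pca.explained_variance_ratio_)      # most variance captured by 2 components
```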

See the rest here:

Enhancing cervical cancer detection and robust classification through a fusion of deep learning models | Scientific ... - Nature.com

Predicting equilibrium distributions for molecular systems with deep learning – Nature.com

Deep neural networks have been demonstrated to predict accurate molecular structures from descriptors $\mathcal{D}$ for many molecular systems1,5,6,9,10,11,12. Here, DiG aims to take one step further to predict not only the most probable structure but also diverse structures with probabilities under the equilibrium distribution. To tackle this challenge, inspired by the heating–annealing paradigm, we break down the difficulty of this problem into a series of simpler problems. The heating–annealing paradigm can be viewed as a pair of reciprocal stochastic processes on the structure space that simulate the transformation between the system-specific equilibrium distribution and a system-independent simple distribution $p_{\mathrm{simple}}$. Following this idea, we use an explicit diffusion process (forward process; Fig. 1b, orange arrows) that gradually transforms the target distribution of the molecule $q_{\mathcal{D},0}$, as the initial distribution, towards $p_{\mathrm{simple}}$ through a time period $\tau$. The corresponding reverse diffusion process then transforms $p_{\mathrm{simple}}$ back to the target distribution $q_{\mathcal{D},0}$. This is the generation process of DiG (Fig. 1b, blue arrows). The reverse process is performed by incorporating updates predicted by deep neural networks from the given $\mathcal{D}$, which are trained to match the forward process. The descriptor $\mathcal{D}$ is processed into node representations $\mathcal{V}$ describing the features of each system-specific individual element and a pair representation $\mathcal{P}$ describing inter-node features. The $\{\mathcal{V}, \mathcal{P}\}$ representation is the direct input from the descriptor part to the Graphormer model10, together with the geometric structure input $\mathbf{R}$, to produce a physically finer structure (Supplementary Information sections B.1 and B.3). Specifically, we choose $p_{\mathrm{simple}} := \mathcal{N}(\mathbf{0}, \mathbf{I})$, the standard Gaussian distribution in the state space, and the forward diffusion process as the Langevin diffusion process targeting this $p_{\mathrm{simple}}$ (the Ornstein–Uhlenbeck process)40,41,42. A time dilation scheme $\beta_t$ (ref. 43) is introduced for approximate convergence to $p_{\mathrm{simple}}$ after a finite time $\tau$. The result is written as the following stochastic differential equation (SDE):

$$\mathrm{d}\mathbf{R}_{t} = -\frac{\beta_{t}}{2}\mathbf{R}_{t}\,\mathrm{d}t + \sqrt{\beta_{t}}\,\mathrm{d}\mathbf{B}_{t}$$

(1)

where $\mathbf{B}_t$ is the standard Brownian motion (a.k.a. the Wiener process). Choosing this forward process leads to a $p_{\mathrm{simple}}$ that is more concentrated than a heated distribution, hence it is easier to draw high-density samples, and the form of the process enables efficient training and sampling.
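To make the forward process concrete, here is a small Euler–Maruyama simulation of equation (1) for a one-dimensional toy system. The linear $\beta_t$ schedule and all sizes are illustrative assumptions, not the schedule used by DiG.

```python
import numpy as np

rng = np.random.default_rng(0)

N, tau = 1000, 1.0
h = tau / N
beta = np.linspace(1e-4, 20.0, N)        # assumed time-dilation schedule beta_t

# Euler-Maruyama simulation of eq. (1): dR = -(beta/2) R dt + sqrt(beta) dB.
# Start from a displaced "target" sample; the process drifts toward N(0, 1).
R = rng.normal(loc=5.0, size=10000)
for i in range(N):
    noise = rng.normal(size=R.shape)
    R = R - 0.5 * beta[i] * R * h + np.sqrt(beta[i] * h) * noise

print(R.mean(), R.std())                 # approx 0 and 1: close to p_simple
```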

Following stochastic process theory (see, for example, ref. 44), the reverse process is also a stochastic process, written as the following SDE:

$$\mathrm{d}\mathbf{R}_{\bar{t}} = \frac{\beta_{\bar{t}}}{2}\mathbf{R}_{\bar{t}}\,\mathrm{d}\bar{t} + \beta_{\bar{t}}\,\nabla \log q_{\mathcal{D},\bar{t}}(\mathbf{R}_{\bar{t}})\,\mathrm{d}\bar{t} + \sqrt{\beta_{\bar{t}}}\,\mathrm{d}\mathbf{B}_{\bar{t}}$$

(2)

where $\bar{t} := \tau - t$ is the reversed time, $q_{\mathcal{D},\bar{t}} := q_{\mathcal{D},t=\tau-\bar{t}}$ is the forward process distribution at the corresponding time and $\mathbf{B}_{\bar{t}}$ is the Brownian motion in reversed time. Note that the forward and corresponding reverse processes, equations (1) and (2), are inspired by but are not exactly the heating and annealing processes. In particular, there is no concept of temperature in the two processes. The temperature $T$ mentioned in the PIDP loss below is the temperature of the real target system but is not related to the diffusion processes.

From equation (2), the only obstacle that impedes the simulation of the reverse process for recovering $q_{\mathcal{D},0}$ from $p_{\mathrm{simple}}$ is the unknown $\nabla \log q_{\mathcal{D},\bar{t}}(\mathbf{R}_{\bar{t}})$. Deep neural networks are then used to construct a score model $\mathbf{s}_{\mathcal{D},t}^{\theta}(\mathbf{R})$, which is trained to predict the true score function $\nabla \log q_{\mathcal{D},t}(\mathbf{R})$ of each instantaneous distribution $q_{\mathcal{D},t}$ from the forward process. This formulation is called a diffusion-based generative model and has been demonstrated to be able to generate high-quality samples of images and other content27,28,45,46,47. As our score model is defined in molecular conformational space, we use our previously developed Graphormer model10 as the neural network architecture backbone of DiG, to leverage its capabilities in modelling molecular structures and to generalize to a range of molecular systems. Note that the score model aims to approximate a gradient, which is a set of vectors. As these are equivariant with respect to the input coordinates, we designed an equivariant vector output head for the Graphormer model (Supplementary Information section B.4).

With the $\mathbf{s}_{\mathcal{D},t}^{\theta}(\mathbf{R})$ model, drawing a sample $\mathbf{R}_0$ from the equilibrium distribution of a system $\mathcal{D}$ can be done by simulating the reverse process in equation (2) on $N+1$ steps that uniformly discretize $[0,\tau]$ with step size $h = \tau/N$ (Fig. 1b, blue arrows), thus

$$\begin{array}{ll}&\mathbf{R}_{N} \sim p_{\mathrm{simple}},\\ &\mathbf{R}_{i-1} = \frac{1}{\sqrt{1-\beta_{i}}}\left(\mathbf{R}_{i} + \beta_{i}\,\mathbf{s}_{\mathcal{D},i}^{\theta}(\mathbf{R}_{i})\right) + \mathcal{N}(\mathbf{0}, \beta_{i}\mathbf{I}), \quad i = N, \cdots, 1,\end{array}$$

where the discrete step index $i$ corresponds to time $t = ih$, and $\beta_i := h\,\beta_{t=ih}$. Supplementary Information section A.1 provides the derivation. Note that the reverse process does not need to be ergodic. The way that DiG models the equilibrium distribution is to use the instantaneous distribution at the instant $t = 0$ (or $\bar{t} = \tau$) on the reverse process, but not using a time average. As $\mathbf{R}_N$ samples can be drawn independently, DiG can generate statistically independent $\mathbf{R}_0$ samples for the equilibrium distribution. In contrast to MD or MCMC simulations, the generation of DiG samples does not suffer from rare events that link different states and can thus be far more computationally efficient.
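A toy sketch of this sampling recursion follows, in which the target distribution is a known one-dimensional Gaussian so that the true score is available in closed form and can stand in for the trained score model. Everything here is an illustrative assumption, not DiG's actual model or schedule.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 1000
beta = np.linspace(1e-4, 0.02, N + 1)           # discrete schedule beta_i
alpha = np.cumprod(np.sqrt(1.0 - beta))         # alpha_i = prod_j sqrt(1 - beta_j)

mu = 3.0  # toy 1-D "equilibrium distribution": q_0 = N(mu, 1)

def score(R, i):
    # Analytic stand-in for the learned score model s_theta: the forward
    # marginal here is q_i = N(alpha_i * mu, 1), so grad log q_i(R) is linear.
    return -(R - alpha[i] * mu)

# The annealed sampling recursion from the text, run on 10,000 parallel chains.
R = rng.normal(size=10000)                      # R_N ~ p_simple = N(0, 1)
for i in range(N, 0, -1):
    z = rng.normal(size=R.shape)
    R = (R + beta[i] * score(R, i)) / np.sqrt(1.0 - beta[i]) + np.sqrt(beta[i]) * z

print(R.mean(), R.std())                        # approx mu and 1
```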

DiG can be trained by using conformation data sampled over a range of molecular systems. However, collecting sufficient experimental or simulation data to characterize the equilibrium distribution for various systems is extremely costly. To address this data scarcity issue, we propose a pre-training algorithm, called PIDP, which effectively optimizes DiG on an initial set of candidate structures that need not be sampled from the equilibrium distribution. The supervision comes from the energy function $E_{\mathcal{D}}$ of each system $\mathcal{D}$, which defines the equilibrium distribution $q_{\mathcal{D},0}(\mathbf{R}) \propto \exp\left(-\frac{E_{\mathcal{D}}(\mathbf{R})}{k_{\mathrm{B}}T}\right)$ at the target temperature $T$.

The key idea is that the true score function $\nabla \log q_{\mathcal{D},t}$ from the forward process in equation (1) obeys a partial differential equation, known as the Fokker–Planck equation (see, for example, ref. 48). We then pre-train the score model $\mathbf{s}_{\mathcal{D},t}^{\theta}$ by minimizing the following loss function that enforces the equation to hold:

$$\begin{array}{rc}&\mathop{\sum}\limits_{i=1}^{N}\frac{1}{M}\mathop{\sum}\limits_{m=1}^{M}\Big\Vert \frac{\beta_{i}}{2}\Big(\nabla\big(\mathbf{R}_{\mathcal{D},i}^{(m)} \cdot \mathbf{s}_{\mathcal{D},i}^{\theta}(\mathbf{R}_{\mathcal{D},i}^{(m)})\big) + \nabla\big\Vert \mathbf{s}_{\mathcal{D},i}^{\theta}(\mathbf{R}_{\mathcal{D},i}^{(m)})\big\Vert^{2} + \nabla\big(\nabla \cdot \mathbf{s}_{\mathcal{D},i}^{\theta}(\mathbf{R}_{\mathcal{D},i}^{(m)})\big)\Big)\\ &- \frac{\partial}{\partial t}\mathbf{s}_{\mathcal{D},i}^{\theta}\big(\mathbf{R}_{\mathcal{D},i}^{(m)}\big)\Big\Vert^{2} + \frac{\lambda_{1}}{M}\mathop{\sum}\limits_{m=1}^{M}\Big\Vert \frac{1}{k_{\mathrm{B}}T}\nabla E_{\mathcal{D}}\big(\mathbf{R}_{\mathcal{D},1}^{(m)}\big) + \mathbf{s}_{\mathcal{D},1}^{\theta}\big(\mathbf{R}_{\mathcal{D},1}^{(m)}\big)\Big\Vert^{2}\end{array}$$

Here, the second term, weighted by $\lambda_1$, matches the score model at the final generation step to the score from the energy function, and the first term implicitly propagates the energy function supervision to intermediate time steps (Fig. 1b, upper row). The structures $\{\mathbf{R}_{\mathcal{D},i}^{(m)}\}_{m=1}^{M}$ are points on a grid spanning the structure space. Since these structures are only used to evaluate the loss function on discretized points, they do not have to obey the equilibrium distribution (as is required by structures in the training dataset), therefore the cost of preparing these structures can be much lower. As structure spaces of molecular systems are often very high dimensional (for example, thousands for proteins), a regular grid would have intractably many points. Fortunately, the space of actual interest is only a low-dimensional manifold of physically reasonable structures (structures with low energy) relevant to the problem. This allows us to effectively train the model only on these relevant structures as $\mathbf{R}_0$ samples. $\mathbf{R}_i$ samples are produced by passing $\mathbf{R}_0$ samples through the forward process. See Supplementary Information section C.1 for an example on acquiring relevant structures for protein systems.

We also leverage stochastic estimators, including Hutchinson's estimator49,50, to reduce the complexity of calculating high-order derivatives of high-dimensional vector-valued functions. Note that, for each step $i$, the corresponding model $\mathbf{s}_{\mathcal{D},i}^{\theta}$ receives a training loss independent of other steps and can be directly back-propagated. In this way, the supervision on each step improves optimization efficiency.

In addition to using the energy function for information on the probability distribution of the molecular system, DiG can also be trained with molecular structure samples that can be obtained from experiments, MD or other simulation methods. See Supplementary Information section C for data collection details. Even when the simulation data are limited, they still provide information about the regions of interest and about the local shape of the distribution in these regions; hence, they are helpful to improve a pre-trained DiG. To train DiG on data, the score model $\mathbf{s}_{\mathcal{D},i}^{\theta}(\mathbf{R}_i)$ is matched to the corresponding score function $\nabla \log q_{\mathcal{D},i}$ demonstrated by data samples. This can be done by minimizing $\mathbb{E}_{q_{\mathcal{D},i}(\mathbf{R}_i)}\Vert \mathbf{s}_{\mathcal{D},i}^{\theta}(\mathbf{R}_i) - \nabla \log q_{\mathcal{D},i}(\mathbf{R}_i)\Vert^2$ for each diffusion time step $i$. Although a precise calculation of $\nabla \log q_{\mathcal{D},i}$ is impractical, the loss function can be equivalently reformulated into a denoising score-matching form51,52

$$\frac{1}{N}\mathop{\sum}\limits_{i=1}^{N}\mathbb{E}_{q_{\mathcal{D},0}(\mathbf{R}_{0})}\mathbb{E}_{p(\boldsymbol{\epsilon}_{i})}\Vert \sigma_{i}\,\mathbf{s}_{\mathcal{D},i}^{\theta}(\alpha_{i}\mathbf{R}_{0} + \sigma_{i}\boldsymbol{\epsilon}_{i}) + \boldsymbol{\epsilon}_{i}\Vert^{2}$$

where $\alpha_i := \mathop{\prod}\nolimits_{j=1}^{i}\sqrt{1-\beta_j}$, $\sigma_i := \sqrt{1-\alpha_i^2}$ and $p(\boldsymbol{\epsilon}_i)$ is the standard Gaussian distribution. The expectation under $q_{\mathcal{D},0}$ can be estimated using the simulation dataset.
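A compact sketch of this denoising score-matching loss for one step $i$, with a placeholder score network and an assumed $\beta$ schedule (all names and shapes are illustrative only, not DiG's implementation):

```python
import torch

def dsm_loss(score_model, R0, alpha, sigma, i):
    """Denoising score-matching loss for diffusion step i (a sketch).

    score_model(R, i) stands in for s_theta; alpha and sigma are 1-D tensors
    of the alpha_i, sigma_i coefficients defined in the text.
    """
    eps = torch.randn_like(R0)                       # eps_i ~ N(0, I)
    R_i = alpha[i] * R0 + sigma[i] * eps             # noised sample
    residual = sigma[i] * score_model(R_i, i) + eps  # should vanish when trained
    return (residual ** 2).sum(dim=-1).mean()

# Usage sketch with a trivial linear "model" and coefficients from a beta schedule:
beta = torch.linspace(1e-4, 0.02, 1000)
alpha = torch.cumprod(torch.sqrt(1 - beta), dim=0)
sigma = torch.sqrt(1 - alpha ** 2)
model = lambda R, i: -R                              # placeholder score network
print(dsm_loss(model, torch.randn(64, 3), alpha, sigma, i=500))
```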

We remark that this score-predicting formulation is equivalent (Supplementary Information section A.1.2) to the noise-predicting formulation28 in the diffusion model literature. Note that this function allows direct loss estimation and back-propagation for each i in constant (with respect to i) cost, recovering the efficient step-specific supervision again (Fig. 1b, bottom).

The computation of many thermodynamic properties of a molecular system (for example, free energy or entropy) also requires the density function of the equilibrium distribution, which is another aspect of the distribution besides a sampling method. DiG allows for this by tracking the distribution change along the diffusion process45:

$$\begin{array}{l}\log p_{\mathcal{D},0}^{\theta}(\mathbf{R}_{0}) = \log p_{\mathrm{simple}}\left(\mathbf{R}_{\mathcal{D},\tau}^{\theta}(\mathbf{R}_{0})\right)\\ \qquad - \displaystyle\int\nolimits_{0}^{\tau}\frac{\beta_{t}}{2}\nabla \cdot \mathbf{s}_{\mathcal{D},t}^{\theta}\left(\mathbf{R}_{\mathcal{D},t}^{\theta}(\mathbf{R}_{0})\right)\mathrm{d}t - \frac{D}{2}\int\nolimits_{0}^{\tau}\beta_{t}\,\mathrm{d}t\end{array}$$

where $D$ is the dimension of the state space and $\mathbf{R}_{\mathcal{D},t}^{\theta}(\mathbf{R}_0)$ is the solution to the ordinary differential equation (ODE)

$$\mathrm{d}\mathbf{R}_{t} = -\frac{\beta_{t}}{2}\left(\mathbf{R}_{t} + \mathbf{s}_{\mathcal{D},t}^{\theta}(\mathbf{R}_{t})\right)\mathrm{d}t$$

(3)

with initial condition $\mathbf{R}_0$, which can be solved using standard ODE solvers or more efficient specific solvers (Supplementary Information section A.6).
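As a sanity-check sketch, the following Euler-integrates the ODE while accumulating the two integrals of the log-density formula, for a one-dimensional toy ($D = 1$) whose "learned" score is $s(R, t) = -R$, so its divergence is analytic; the schedule and step counts are assumptions for illustration.

```python
import numpy as np

# Euler integration of the probability-flow ODE (eq. (3)) while accumulating
# the terms of the log-density formula. The toy score s(R, t) = -R implies the
# model's equilibrium distribution is N(0, 1); its divergence is ds/dR = -1.
N, tau = 1000, 1.0
h = tau / N
beta = np.linspace(1e-4, 20.0, N)

def log_density(R0):
    R, div_term = R0, 0.0
    for i in range(N):
        s = -R                                   # score model
        div_term += 0.5 * beta[i] * (-1.0) * h   # (beta_t / 2) * div s
        R = R - 0.5 * beta[i] * (R + s) * h      # ODE step (here: zero drift)
    log_p_simple = -0.5 * R**2 - 0.5 * np.log(2 * np.pi)
    return log_p_simple - div_term - 0.5 * beta.sum() * h  # D = 1

print(log_density(0.0))   # approx log N(0; 0, 1) = -0.919
```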

There is a growing demand for the design of materials and molecules that possess desired properties, such as intrinsic electronic band gaps, elastic modulus and ionic conductivity, without going through a forward searching process. DiG provides a feature to enable such property-guided structure generation, by directly predicting the conditional structural distribution given a value c of a microscopic property.

To achieve this goal, regarding the data-generating process in equation (2), we only need to adapt the score function from $\nabla \log q_{\mathcal{D},t}(\mathbf{R})$ to $\nabla_{\mathbf{R}} \log q_{\mathcal{D},t}(\mathbf{R} \mid c)$. Using Bayes' rule, the latter can be reformulated as $\nabla_{\mathbf{R}} \log q_{\mathcal{D},t}(\mathbf{R} \mid c) = \nabla \log q_{\mathcal{D},t}(\mathbf{R}) + \nabla_{\mathbf{R}} \log q_{\mathcal{D}}(c \mid \mathbf{R})$, where the first term can be approximated by the learned (unconditioned) score model; that is, the new score model is

$$\mathbf{s}_{\mathcal{D},i}^{\theta}(\mathbf{R}_{i} \mid c) = \mathbf{s}_{\mathcal{D},i}^{\theta}(\mathbf{R}_{i}) + \nabla_{\mathbf{R}_{i}} \log q_{\mathcal{D}}(c \mid \mathbf{R}_{i})$$

Hence, only a $q_{\mathcal{D}}(c \mid \mathbf{R})$ model is additionally needed45,46, which is a property predictor or classifier that is much easier to train than a generative model.
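A sketch of assembling the guided score from an unconditional score model and a differentiable property predictor via automatic differentiation; both callables here are invented placeholders, not DiG's interface.

```python
import torch

def guided_score(score_model, classifier_logprob, R, i, c):
    """Conditional score sketch: unconditional score plus classifier gradient.

    classifier_logprob(R, c) stands in for log q_D(c | R) from a separately
    trained property predictor; both callables are assumptions.
    """
    R = R.detach().requires_grad_(True)
    log_q = classifier_logprob(R, c).sum()
    grad = torch.autograd.grad(log_q, R)[0]       # grad_R log q(c | R)
    return score_model(R.detach(), i) + grad

# Toy usage: "property" c is the squared norm of R, modeled as Gaussian around it.
score_model = lambda R, i: -R
classifier = lambda R, c: -0.5 * ((R ** 2).sum(dim=-1) - c) ** 2
s = guided_score(score_model, classifier, torch.randn(8, 3), i=100, c=2.0)
print(s.shape)   # (8, 3)
```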

In a normal workflow for ML inverse design, a dataset must be generated to meet the conditional distribution, then an ML model will be trained on this dataset for structure distribution predictions. The ability to generate structures for conditional distribution without requiring a conditional dataset places DiG in an advantageous position when compared with normal workflows in terms of both efficiency and computational cost.

Given two states, DiG can approximate a reaction path that corresponds to reaction coordinates or collective variables, and find intermediate states along the path. This is achieved through the fact that the distribution transformation process described in equation (1) is equivalent to the process in equation (3) if $\mathbf{s}_{\mathcal{D},i}^{\theta}$ is well learned; the latter process is deterministic and invertible, hence establishing a correspondence between the structure and latent space. We can then uniquely map the two given states in the structure space to the latent space, approximate the path in the latent space by linear interpolation and then map the path back to the structure space. Since the distribution in the latent space is Gaussian, which has a convex contour, the linearly interpolated path goes through high-probability or low-energy regions, so it gives an intuitive guess of the real reaction path.
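A schematic sketch of that latent-space interpolation; the two mapping functions stand in for integrating equation (3) forward and backward, and are assumptions rather than DiG's actual API.

```python
import numpy as np

def interpolate_states(to_latent, to_structure, R_a, R_b, n_points=10):
    """Sketch of the reaction-path heuristic described above.

    to_latent / to_structure stand in for integrating the ODE of eq. (3)
    forward and backward (both placeholders). Linear interpolation in the
    Gaussian latent space is mapped back to candidate intermediate states.
    """
    z_a, z_b = to_latent(R_a), to_latent(R_b)
    path = [(1 - w) * z_a + w * z_b for w in np.linspace(0.0, 1.0, n_points)]
    return [to_structure(z) for z in path]

# Trivial usage with identity maps (i.e., pretending structure == latent):
states = interpolate_states(lambda R: R, lambda z: z,
                            np.array([0.0, 0.0]), np.array([1.0, 2.0]))
print(states[5])
```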

Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.

Read this article:

Predicting equilibrium distributions for molecular systems with deep learning - Nature.com

Road to safer self-driving cars is paved with deep learning – ISRAEL21c

Safer and more reliable autonomous systems, such as self-driving vehicles, may be possible thanks to a new understanding of deep learning, a type of artificial intelligence (AI) that mimics the way humans learn and process information.

The study, conducted at Bar-Ilan University and published in the Physica A journal, highlights the interplay between AI confidence levels and decision-making processes.

"Understanding the confidence levels of AI systems allows us to develop applications that prioritize safety and reliability," explained Ella Koresh, an undergraduate student who contributed to the research.

"For instance, in the context of autonomous vehicles, when confidence in identifying a road sign is exceptionally high, the system can autonomously make decisions. However, in scenarios where confidence levels are lower, the system prompts for human intervention, ensuring cautious and informed decision-making."
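A toy sketch of such a confidence gate, thresholding softmax outputs; the threshold value and routing logic are illustrative assumptions, not the architecture studied in the paper.

```python
import numpy as np

def route_decision(class_probs, threshold=0.95):
    """Act autonomously only when the top softmax probability clears a
    threshold; otherwise defer to a human (illustrative only)."""
    confidence = float(np.max(class_probs))
    label = int(np.argmax(class_probs))
    if confidence >= threshold:
        return label, "autonomous decision"
    return label, "defer to human"

print(route_decision(np.array([0.98, 0.01, 0.01])))  # high confidence -> act
print(route_decision(np.array([0.55, 0.30, 0.15])))  # low confidence -> defer
```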

According to the researchers, deep learning architectures can achieve higher confidence levels for a substantial portion of inputs, while maintaining an overall average confidence.

Put more simply: deep learning AI can be more certain about a lot of things without sacrificing overall reliability.

The ability to bolster the confidence levels of AI systems establishes a new benchmark for AI performance and safety and could be applicable across a spectrum of fields, from AI-driven writing and image classification to pivotal decision-making processes in healthcare and autonomous vehicles.

In addition to Koresh, the study was authored by Yuval Meir, Ofek Tevet, Yarden Tzach and Prof. Ido Kanter from the department of physics at Bar-Ilan and the university's brain research center.


Continued here:

Road to safer self-driving cars is paved with deep learning - ISRAEL21c

Cedars-Sinai research shows deep learning model could improve AFib detection – Healthcare IT News

A new artificial intelligence approach developed by investigators in Cedars-Sinai's Los Angeles-based Smidt Heart Institute has been shown to detect abnormal heart rhythms associated with atrial fibrillation that might otherwise be unnoticed by physicians.

WHY IT MATTERS Researchers at Smidt Heart Institute say the findings point to the potential for artificial intelligence to be used more widely in cardiac care.

In a recent study, published in npj Digital Medicine, Cedars-Sinai clinicians show how the deep learning model was developed to analyze images from echocardiogram imaging, in which sound waves show the heart's rhythm.

Researchers trained a program to study more than 100,000 echocardiogram videos from patients with atrial fibrillation, they explain. The model distinguished between echocardiograms showing a heart in sinus rhythm (normal heartbeats) and those showing a heart in an irregular heart rhythm.

The program was able to predict which patients in sinus rhythm had experienced or would develop atrial fibrillation within 90 days, they said, noting that the AI model evaluating the images performed better than estimating risk based on known risk factors.

"We were able to show that a deep learning algorithm we developed could be applied to echocardiograms to identify patients with a hidden abnormal heart rhythm disorder called atrial fibrillation," explained Dr. Neal Yuan, a staff scientist with the Smidt Heart Institute.

"Atrial fibrillation can come and go," he added, "so it might not be present at a doctor's appointment. This AI algorithm identifies patients who might have atrial fibrillation even when it is not present during their echocardiogram study."

THE LARGER TREND The Smidt Heart Institute is the biggest cardiothoracic transplant center in California and the third-largest in the United States.

An estimated 12.1 million people in the United States will have atrial fibrillation in 2030, according to the CDC. During AFib, the heart's upper chambers sometimes beat in sync with the lower chambers and sometimes they do not, making the arrhythmia often difficult for clinicians to detect. In some patients, the condition causes no symptoms at all.

Researchers say a machine learning model trained to analyze echo imaging could help clinicians detect early and subtle changes in the hearts of patients with undiagnosed arrhythmias.

Indeed, AI has long shown big promise for early detection of AFib, as evidenced by similar studies at health systems such as Geisinger and Mayo Clinic.

ON THE RECORD "We're encouraged that this technology might pick up a dangerous condition that the human eye would not while looking at echocardiograms," said Dr. David Ouyang, a cardiologist and AI researcher in the Smidt Heart Institute. "It might be used for patients at risk for atrial fibrillation or who are experiencing symptoms associated with the condition."

"The fact that this program predicted which patients had active or hidden atrial fibrillation could have immense clinical applications," added Dr. Christine M. Albert, chair of the Department of Cardiology at the Smidt Heart Institute. "Being able to identify patients with hidden atrial fibrillation could allow us to treat them before they experience a serious cardiovascular event."


Go here to read the rest:

Cedars-Sinai research shows deep learning model could improve AFib detection - Healthcare IT News

Justice Alito Warns of Threats to Freedom of Speech and Religion – The New York Times

Justice Samuel A. Alito Jr. warned on Saturday that freedom of speech was under threat at universities and that freedom of religion was in peril in society at large.

"Troubled waters are slamming against some of our most fundamental principles," he said.

He made his remarks at a commencement ceremony at the Franciscan University of Steubenville in Ohio, a Catholic institution.

"Support for freedom of speech is declining dangerously, especially where it should find deepest acceptance," he said.

A university, he said, should be a place for reasoned debate. But he added that today, very few colleges live up to that ideal.

The same is true, he said, for tolerance of religious views in society generally.

"Freedom of religion is also imperiled," he said. "When you venture out into the world, you may well find yourself in a job or a community or a social setting when you will be pressured to endorse ideas you don't believe or to abandon core beliefs. It will be up to you to stand firm."

In other settings, Justice Alito has given a specific example, complaining that people opposed to same-sex marriage on religious grounds are sometimes treated as bigots.


Original post:

Justice Alito Warns of Threats to Freedom of Speech and Religion - The New York Times

Search warrant executed on home in East Freedom, investigation ongoing – WTAJ – www.wtaj.com

BLAIR COUNTY, Pa. (WTAJ) – Multiple police officials were seen at a house in East Freedom on Friday.

Heavy police presence could be seen outside of a home near Rt. 36 on May 10. A WTAJ member at the scene confirmed seeing officers carrying guns out of the home.

Freedom Township Police confirmed that it is an ongoing investigation, but that they had received a tip that a fugitive was staying at the home. They said they executed a search warrant that led to another warrant for guns that they found inside the home.

This is a developing story; please check back for updates and download the WTAJ app to receive breaking news notifications.

We will continue to keep you updated online and on-air as we learn more.

Go here to read the rest:

Search warrant executed on home in East Freedom, investigation ongoing - WTAJ - http://www.wtaj.com

Many governments worldwide failing to protect press freedom – Star Tribune

Opinion editor's note: Star Tribune Opinion publishes a mix of national and local commentaries online and in print each day. To contribute, click here.

Since its United Nations declaration in 1993, every May 3, World Press Freedom Day, "acts as a reminder to governments of the need to respect their commitment to press freedom." Unfortunately, many of those same governments are restricting, not respecting, the right to a free press.

In fact, according to Reporters Without Borders, which the same day issued its annual World Press Freedom Index, "Press freedom around the world is being threatened by the very people who should be its guarantors: political authorities." As evidence, it reported that of the five indicators it uses to compile its ranking, the political indicator had fallen the most.

"States and other political forces are playing a decreasing role in protecting press freedom," Anne Bocand, the organization's editorial director, stated in the report. "This disempowerment sometimes goes hand in hand with more hostile actions that undermine the role of journalists, or even instrumentalize the media through campaigns of harassment or disinformation."

The report is replete with examples from multiple regions, all of which have resonance anytime, but particularly in an election year or, more precisely, this year of elections, when a record number of people worldwide will vote. And if 2023's plebiscites presage this year, there's trouble ahead: Several elections in Latin America, according to the report, "were won by self-proclaimed predators of press freedom and media plurality, like Javier Milei in Argentina, who shut down the country's biggest news agency in a worrisome symbolic act." Accordingly, Argentina tumbled 26 places to 66th out of 180 nations ranked.

Elections in several African countries were "often accompanied by violence against journalists" in places like Nigeria (112th) and the Democratic Republic of Congo (123rd). In the increasing number of countries governed by military juntas like Niger (down 19 to 80th), Burkina Faso (down 28 to 86th) and Mali (down one to 114th), authorities "continue to tighten their grip on the media and obstruct journalists' work."

It's not just the Global South going south on press freedom. The scourge is seen in places like China (172nd), which along with others "have stepped up their control over social media and the internet, restricting access, blocking accounts, and suppressing messages carrying news and information." China, the world's worst jailer of journalists, "continues to exercise strict control over information channels, implementing censorship and surveillance policies to regulate online content and restrict the spread of information deemed to be sensitive or contrary to the party line."

And it goes beyond Beijing: Moscow, Tehran, Pyongyang and other Orwellian, authoritarian capitals cap most press freedoms as well. Worse yet, many repressive regimes are learning from one another, as revealed in "Annals of Autocracy," an extraordinary Washington Post package that won a well-deserved Pulitzer Prize for editorial writing on Monday for "a compelling and well-researched series on new technologies and the tactics authoritarian regimes use to repress dissent in the digital age, and how they can be fought."

Another Post opinion contributor, Vladimir Kara-Murza, knows the personal cost of resisting repression: The Russian opposition leader has been poisoned, allegedly by the Kremlin, and more recently sentenced to 25 years for speaking out against the war in Ukraine. His mind isn't imprisoned, however, as evidenced by his winning the Pulitzer in the commentary category "for passionate columns written under great personal risk from his prison cell, warning of the consequences of dissent in Vladimir Putin's Russia and insisting on a democratic future for his country."

International issues increasingly determining domestic politics in America were reflected in rewards for other news organizations, including the New York Times in the investigative reporting category "for a deeply reported series of stories revealing the stunning reach of migrant child labor across the United States and the corporate and governmental failures that perpetuate it." The Times also won in international reporting "for its wide-ranging and revelatory coverage of Hamas' lethal attack in southern Israel on October 7, Israel's intelligence failures and the Israeli military's sweeping, deadly response in Gaza."

That war was also the subject of the Breaking News Photography prize, awarded to Reuters, and a special citation was given to journalists and media workers covering the war. According to Reporters Without Borders, "More than 100 Palestinian reporters have been killed by the Israel Defense Forces, including at least 22 in their line of work."

The World Press Freedom Index warns that "in the absence of regulation, the use of generative AI in the arsenal of disinformation for political purposes is a concern." Even without such high-tech tools, disinformation operations were key to discredit Kyiv and Washington in Russians' eyes, as the Post's "Annals of Autocracy" series showed. Yet Moscow isn't the only offender: In 138 nations, the index indicated, "political actors in their countries were often involved in propaganda or disinformation campaigns."

Ominously, the U.S. isn't immune from these political actors, according to Barbara McQuade, author of "Attack from Within: How Disinformation is Sabotaging America."

McQuade's comments came on Wednesday at an evening event titled "Countering Chaos: Navigating Election Disinformation" organized by the Minnesota Peace Initiative and the Committee on Foreign Relations Minnesota. It was held at Norway House, which was fitting, since Norway was once again the top-ranked country in the World Press Freedom Index, followed by neighboring nations Denmark and Sweden.

A former U.S. Attorney for the Eastern District of Michigan and current University of Michigan professor of law who teaches a course in national security, McQuade authoritatively said that "during the years I've been involved in national security, I've seen the greatest threats to our national security evolve, from first Al-Qaeda, and then it was ISIS, and then it was China and Russia and cyber intrusions. And now I think the greatest threat to our national security is disinformation but coming from within our own country."

A "confluence of two events" is "really elevating the problem," McQuade said, naming social media and "our incredibly polarized electorate."

The consequence of this confluence is the "idea that people care more of [their] tribe than they care about the truth I think that is very dangerous to democracy," McQuade said, later adding: "Since World War II, it's been the foreign policy of the United States to lift up democracies around the world because we believe that democracies around the world make us safer. When other countries have democratic forms of government there are fewer wars, there are fewer refugee crises, and we have more and better trade partners. And so, when democracies are failing and backsliding, as we are seeing around the world, that is a threat to our own national security."

A "reminder to governments of the need to respect their commitment to press freedom," the United Nations' stated purpose of World Press Freedom Day, is designed to bolster democracies, which in turn should deliver the benefits McQuade describes. But as Reporters Without Borders documents, states are failing. So the Fourth Estate must not.

Read more:

Many governments worldwide failing to protect press freedom - Star Tribune

SonyLIV's Freedom at Midnight Casts Pakistan Leader Mohammed Ali Jinnah and Sister Fatima (EXCLUSIVE) – Variety


Read this article:

SonyLIV's Freedom at Midnight Casts Pakistan Leader Mohammed Ali Jinnah and Sister Fatima (EXCLUSIVE) - Variety

Voices Amidst Conflict: Protecting Journalists on World Press Freedom Day – Amnesty International USA

When former President Pierre Nkurunziza decided to run for a third term in office in April 2015, many Burundians took to the streets to express their frustration with a decision that they believed violated the 2005 Burundian Constitution, which limited presidents to two five-year terms. Her trip in August 2022 was the first time she visited her family in Bujumbura since 2015.

Burundi's civil society and media organizations were among the first targets of the government repression in 2015. The government suspended or closed most independent human rights organizations and media outlets and drove them into exile. Despite promises by President Ndayishimiye to normalize relations with the media in 2021, the Burundian government continues to view the press and human rights work with suspicion, and severe restrictions on human rights, including the right to freedom of expression, remain in place. Most independent human rights organizations have been unable to resume their activities in Burundi, especially as the Burundian authorities have issued arrest warrants for many leading activists, who live in exile.

On February 14, five human rights defenders, Sonia Ndikumasabo, president, and Marie Emerusabe, general coordinator, of the Association of Women Lawyers in Burundi (Association des femmes juristes du Burundi, AFJB), Audace Havyarimana, legal representative, Sylvana Inamahoro, executive director, and Prosper Runyange, land project coordinator of the Association for Peace and Promotion of Human Rights in Burundi (Association pour la paix et la promotion des droits de l'Homme, APDH), were arrested and accused of rebellion and of undermining internal state security and the functioning of public finances. The charges appear to relate to their relationship with an international organization abroad and the funding they have received from this organization. Twelve human rights defenders and journalists were among a group of 34 people sentenced to life in prison in absentia in June 2020 on accusations of involvement in an attempted coup in May 2015; the Supreme Court judgment was not made public until February 2021.

Arrest or detention as punishment for the peaceful exercise of human rights, including the right to freedom of expression, is arbitrary and violates the African Charter on Human and Peoples' Rights and the International Covenant on Civil and Political Rights, both of which Burundi has ratified. The UN Working Group on Arbitrary Detention has determined that those detained solely for the peaceful exercise of their human rights must be immediately released.

Almost since its inception in 1948, North Korea has allowed no sources of information independent of the government to exist. Pyongyang prohibits free expression, gatherings and meetings, and access to information. Freedom of thought and opinion is discouraged from cradle to grave, enforced by vast and systemic monitoring by formal and informal internal security agents. Arbitrary arrest, prison camps, forced labor, torture, and execution are used by authorities to prevent dissent.

Severe punishments are imposed on North Korean citizens caught listening to or watching broadcasts from outside the country. Access to computers and the internet is restricted to highly-placed party officials. Unauthorized communication with people outside the country is forbidden. Amnesty International has reported on the execution of teenagers for viewing a South Korean television broadcast.

The government's response to the COVID-19 pandemic included closure of borders and the installation of CCTV cameras and motion detectors, making it more difficult for information to enter the country. In December 2020, the DPRK enacted the Reactionary Ideology and Culture Rejection Law, prohibiting the viewing of anti-socialist ideology and culture. Since January 2023, the Pyongyang Cultural Language Protection Law stipulates punishment for the use of South Korean dialect or slang.

There are indications that, despite these restrictions, more people in some areas are able to receive broadcasts emanating from outside the country. This makes the work of outlets such as NK Radio, Radio Free Asia, and the Voice of America increasingly crucial.

Read the original:

Voices Amidst Conflict: Protecting Journalists on World Press Freedom Day - Amnesty International USA

Medgar Evers, Rep. Clyburn, among nineteen honored with Presidential Medal of Freedom – Insight News

President Joe Biden will award 19 individuals the Presidential Medal of Freedom, the nation's highest civilian honor. Civil rights icon Medgar Wiley Evers and South Carolina Democratic Rep. James Clyburn lead the list of recipients whose legacy of bravery and activism inspires generations.

Evers, born in 1925 in Decatur, Mississippi, is remembered for his unwavering dedication to the civil rights movement despite facing relentless racism and threats to his life. His childhood was marked by the pervasive specter of racism, with incidents like the lynching of a family friend serving as stark reminders of the injustice prevalent in the community. Determined to make a difference, Evers enlisted in the Army during World War II, serving with distinction in a segregated field battalion in England and France.

After returning, Evers earned a Bachelor of Arts from Alcorn College, where he met Myrlie Beasley, whom he married in 1951. He embarked on a career in activism, joining the NAACP and organizing boycotts and protests to combat segregation and discrimination. His efforts caught the attention of the NAACP national leadership, leading to his appointment as Mississippi's first field secretary for the organization.

Evers also organized boycotts and advocated for the admission of African American students to the University of Mississippi. Despite facing constant threats and violence, Evers remained steadfast in his commitment to the cause of equality. A white supremacist assassinated Evers on June 12, 1963, outside his home, sparking outrage and galvanizing the civil rights movement.

Clyburn, a stalwart figure in American politics known as the Kingmaker, has dedicated his life to public service and advocacy. Representing South Carolina's 6th Congressional District in the U.S. House of Representatives, Clyburn has served since 1993, making history as the first African American to hold multiple terms as Majority Whip. A South Carolina State University graduate, he began his career as a public school teacher in Charleston before assuming roles as an employment counselor and director of youth and community development programs.

Clyburn's foray into state government, serving as South Carolina Human Affairs Commissioner, marked a significant milestone in his career. He became the first African American advisor to a South Carolina governor. His transition to federal politics in 1993 heralded a new chapter of leadership, as he became chairman of the Congressional Black Caucus and Vice Chair of the House Democratic Caucus.

Clyburn has earned numerous accolades and honors, including the prestigious Spingarn Medal from the NAACP. His pivotal endorsement of Joe Biden in the 2020 presidential race is widely credited with shaping the course of the election, propelling Biden to victory in crucial primaries and ultimately to the presidency.

"The National Newspaper Publishers Association (NNPA) joins all Americans today to salute all of the Presidential Medal of Freedom Award recipients at the White House," NNPA President and CEO Dr. Benjamin F. Chavis Jr. stated. "The Biden-Harris administration continues to lead America forward toward freedom, justice, and equality for all. The NNPA takes special note and salutes Congressman Clyburn and Medgar Evers for their outstanding and transformative courage and leadership in the ongoing freedom movement for civil and human rights. The Black Press of America extends heartfelt congratulations to Clyburn, Evers, and all who are being honored today."

Among the recipients joining Evers and Clyburn are:

Michael R. Bloomberg, former Mayor of New York City, revolutionized the financial information industry and significantly impacted various sectors, including education, the environment, public health, and the arts.

Father Gregory Boyle, the founder of Homeboy Industries, has dedicated his life to gang intervention and rehabilitation, offering hope and opportunities to thousands in Los Angeles.

Senator Elizabeth Dole, a trailblazing leader who has served in various government roles, including the United States Senate and President of the American Red Cross, has steadfastly advocated for military caregivers and their families.

Phil Donahue, a pioneering journalist, revolutionized daytime television with his issue-oriented talk show, setting a new standard for engagement and discourse.

Al Gore, former Vice President, has been a prominent figure in climate activism and global diplomacy, earning recognition for his efforts to address climate change.

Clarence B. Jones, a civil rights activist and confidant of Dr. Martin Luther King, Jr., played a pivotal role in shaping the civil rights movement and preserving Dr. King's legacy.

Secretary John Kerry, a decorated veteran and former Secretary of State, has dedicated his life to public service, championing diplomacy and environmental stewardship.

Senator Frank Lautenberg, remembered for his extensive service in the United States Senate and advocacy for environmental protection and consumer safety, is honored posthumously.

Katie Ledecky, the most decorated female swimmer in history, has captivated audiences with her remarkable athleticism and achievements in the pool.

Opal Lee, an educator and activist, played a crucial role in making Juneteenth a federally recognized holiday, a triumph in the ongoing struggle for equality.

Dr. Ellen Ochoa, the first Hispanic woman in space, continues to inspire future generations as a leading figure in science and exploration.

Speaker Nancy Pelosi, a longtime advocate for democracy and progressive values, has been instrumental in shaping legislative agendas and Democratic priorities.

Dr. Jane Rigby, a prominent astronomer, embodies the spirit of exploration and discovery, contributing to our understanding of the universe.

Teresa Romero, president of the United Farm Workers, has been a tireless advocate for the rights of agricultural workers, securing important victories that have improved their lives.

Judy Shepard, co-founder of the Matthew Shepard Foundation, has been a driving force in the fight against hate crimes, fostering progress and understanding.

Jim Thorpe, the first Native American to win an Olympic gold medal, broke barriers in sports and society, leaving an enduring legacy as an athlete and advocate.

Michelle Yeoh, an acclaimed actress, has broken stereotypes and enriched American culture through her groundbreaking work in film.

"There is nothing beyond our capacity when we act together," Biden insisted. These nineteen Americans built teams, coalitions, movements, organizations, and businesses that shaped America for the better. They are the pinnacle of leadership in their fields. They consistently demonstrated, over their careers, the power of community, hard work, and service.

Go here to see the original:

Medgar Evers, Rep. Clyburn, among nineteen honored with Presidential Medal of Freedom - Insight News

Neoliberal economics: The road to freedom or authoritarianism? – NPR

Planet Money: Nobel-winning economist Joseph Stiglitz's new book argues the road to tyranny is paved not by too much, but by too little government.

In the early 1930s, Austrian economist Friedrich Hayek, then based at the London School of Economics, jotted off a memo to the school's director, William Beveridge. At the time, the Great Depression was wreaking havoc around the world. And the ideals of classical liberalism, like democracy and free-market capitalism, were under assault. Witnessing the rise of fascist parties around Europe, Beveridge, like many others in his day, had argued fascism was the ultimate expression of a failed capitalist system. Absolutely not, argued Hayek in his memo. Fascism, with its rejection of liberal democracy and embrace of government power, actually had its roots in socialist ideas and policies.

What began as the germ of an idea in a memo became a magazine article and then, in 1944, a book, which Hayek titled The Road To Serfdom. When Hayek shopped the publication rights of the book in the United States, three commercial publishing houses rejected it. They didn't see its potential. Hayek settled for an academic publishing house: The University of Chicago Press.

The Road To Serfdom became a smashing success. Not only did it sell hundreds of thousands of copies, it blew wind into the sails of a flagging conservative movement, which had struggled to captivate the hearts and minds of mainstream America after the Great Depression.

Hayek argued that the ballooning welfare state, characterized by policies like those of the New Deal, handed too much power and control to the central government, robbing people of autonomy over their economic lives, hurting the economy, and paving the road to tyranny. He argued that freedom and prosperity could only be achieved by embracing the free market.

Eighty years later, economist Joseph Stiglitz, who like Hayek won a Nobel Prize in economics, has a new book out with a response to Hayek and his generations of followers. "A major theme of my book is that Hayek got it 180 degrees wrong," Stiglitz told Planet Money in an interview last week. In fact, the very title of Stiglitz's book is a counterpunch to The Road to Serfdom. It's called The Road To Freedom.

Like in the 1930s, when Hayek began working on his book, populism is now exploding around the world. And Stiglitz fears some countries may be careening towards "a 21st century version of fascism." But contrary to the classic argument made by Hayek, Stiglitz says, this rise in authoritarianism "comes not in the countries where the government is doing too much, but where the government is doing too little to protect individuals against unemployment, the stresses of adaptation to globalization, to technical change, to the stresses of migration."

For a long time, conservative politicians sold lower taxes, fewer regulations, and smaller government as integral to enhancing freedom. But, Stiglitz argues, this conception of freedom is all wrong and, even worse, it has paved the way to a dangerous political era that threatens our real freedom.

For Hayek, and later Milton Friedman and a whole host of other conservatives and libertarians who were inspired by Hayek's work, freedom largely meant freedom from government.

Stiglitz opposes this narrow way of thinking about freedom. In his book, he offers a much different conception of freedom, which he writes is really about, using jargon from economics, "a person's opportunity set: the set of options she has available."

Freedom, in other words, is "really what you're free to do," Stiglitz says. "Somebody who is at the point of starvation doesn't really have much freedom. He does what he has to do to survive." By giving that person more resources, Stiglitz says, he becomes more free. He has more options in life. In this sense, Stiglitz argues, the government can step in and give citizens more freedom by, for example, levying taxes to fund programs that eliminate poverty or help people get jobs.

Even more, Stiglitz argues, policymakers should be wary that policies that expand the freedom of some people may come at the cost of the freedom for many more people. He begins his book by quoting the Oxford philosopher Isaiah Berlin: "Freedom for the wolves has often meant death to the sheep." He uses this metaphor to criticize policies like financial deregulation, which, he says, gave more freedom to banks at the expense of the freedom of ordinary Americans.

Stiglitz goes well beyond an effort to reclaim the concept of freedom for progressives. Much of his book is aimed at bulldozing away any legitimacy for "neoliberalism," an increasingly popular term for the free-market ideology that swept America and much of the world in the 1980s and 1990s.

"Neoliberalism's crimes include freeing financial markets to precipitate the largest financial crisis in three-quarters of a century; freeing trade to accelerate deindustrialization [by, for example, gutting American manufacturing]; and freeing corporations to exploit consumers, workers, and the environment alike," Stiglitz writes. "This form of capitalism does not enhance freedom in our society. Instead, it has led to the freedom of a few at the expense of the many. Freedom for the wolves; death for the sheep."

As a member and then president of the Council of Economic Advisors in the Clinton White House, Stiglitz had a prominent seat at the table when neoliberal ideas spread beyond their traditional stronghold in the Republican Party and began being pushed by Democrats. President Bill Clinton promoted a range of free-market policies, including signing the North American Free Trade Act (NAFTA), supporting China in its bid to join the World Trade Organization, and deregulating the telecommunications and financial industries.

Stiglitz says that, behind closed doors, he fought tooth and nail against many of these policies. He notes, for example, he was successful at staving off financial deregulation that is, until he left office in 1997. Clinton didn't sign financial deregulation into law until 1999.

"I strongly opposed deregulation of finance, in part because I understood that 'freeing' the financial sector would make us all less free in the end," Stiglitz writes in his book. He blames financial deregulation for contributing to the 2008 financial crisis.

After serving in the Clinton Administration, Stiglitz again battled creeping neoliberalism, this time on a global scale as the chief economist of The World Bank. There he fought against policies like the liberalization of capital markets, which allowed global investors to more freely move money to and from poor countries. He blamed this policy for creating financial volatility and contributing to economic crises around the world.

Of course, there are many who disagree with Stiglitz's take on neoliberalism and the need for strong government involvement in the economy. They may believe the government is too dumb or corrupt to do a good job regulating the market and engineering a more prosperous and freer society. Countries like Argentina and Venezuela, where generations of left-wing leaders have pursued interventionist policies, have seen a host of economic problems, including runaway inflation and dismal economic growth.

Many economists still believe in the virtues of free-market capitalism. For example, in a new book titled The Capitalist Manifesto: Why The Global Free Market Will Save The World, Swedish author Johan Norberg argues that free-market capitalism has lifted millions and millions of people out of poverty, fostered incredible technological innovations, and brought down prices on all sorts of goods and services. Turning against it, Norberg warns, will only hurt growth, lower our living standards, and devastate many, especially the world's poor.

Now is the time, Stiglitz argues, for the United States and other nations to abandon neoliberalism and embrace a new form of "progressive capitalism," where the government plays a bigger role in managing the economy, fighting climate change, breaking up monopolies, and eradicating poverty, inequality, and joblessness.

"If we continue down this path, you might say the road to serfdom, we will lose some freedom because it's leading to more populism," Stiglitz says. "This populism is an authoritarian kind of populism and is a real threat to the sustaining of democracy and even, really, a market economy that actually functions."

While Stiglitz spends much of his book criticizing Republicans, many Republicans these days are more receptive to the idea that the free market is failing America and that we need greater government intervention. Senator Josh Hawley (R-Missouri), for example, has been vocal against monopolies and has sponsored various bills to break them up. Last year, Senator Marco Rubio (R-Florida) published a book, Decades of Decadence, which explicitly blasts neoliberalism, especially free-trade deals, for hurting American workers. Rubio now supports "industrial policy": handing the federal government more power to shape and grow strategic American industries (for more on industrial policy, listen to this Indicator episode). In a recent op-ed in The Washington Post, Rubio says "industrial policy" used to be dirty words in his political circle, but now he believes the federal government must play an active role in revitalizing American manufacturing.

We asked Stiglitz whether the growing bipartisan consensus that the government needs to play a bigger role in the economy gives him any hope that his vision may actually come into being. Stiglitz, a staunch Democrat, began by criticizing Republicans, including for pushing the unsubstantiated claim that the 2020 presidential election was stolen.

"But," Stiglitz continued, "when I read, say, Marco Rubio's views about industrial policy, I sometimes think he may have cribbed it from some of the things that I've written," he says with a laugh. "And so there is hope that on a lot of these issues, there is an understanding that neoliberalism failed it's so obvious to me and that we have to have new policies, like industrial policies, like more competition to stop Big Tech. I do think we're moving in that direction in a bipartisan way."

By the way, Joseph Stiglitz and I had a wide-ranging conversation about freedom, economics, neoliberalism, and his views on the world's problems. We covered a whole lot more than what I could fit in this newsletter. We will be releasing an audio version of this interview to Planet Money+ subscribers soon. You can subscribe here.

Link:

Neoliberal economics: The road to freedom or authoritarianism? - NPR

Freedom of the Press as an Element of Freedom of Belief – Bitter Winter

by Karolina Maria Kotkowska*

*A paper presented at the international webinar "Media as Friends and Foes of FoRB and the Tai Ji Men Case," co-organized by CESNUR and Human Rights Without Frontiers on May 8, 2024, after World Press Freedom Day (May 3).

The topic of the relationships between new religious movements and the press, and more broadly the media, is a difficult one. On the one hand, it is obvious to everyone that the media should be free, and this is one of the aspects of the necessary freedom of speech in democratic countries. It is hard to imagine a free and pluralistic society without the possibility of expressing opinions, free circulation of thoughts, and also unrestricted access to reliable information.

On the other hand, as researchers of new religious movements, we continually encounter stories where the media have been used to strip vulnerable people of their freedom. Such actions often led to violence, including physical violence, and served as a tool to generate societal phobia towards new religions. The media were employed to construct negative narratives, resulting in police raids, sometimes even involving military personnel or special forces, against individuals without any weapons or criminal background; these raids were nonetheless presented as an appropriate means of dealing with "cults."

The violation of basic human rights through the media is unfortunately a history we know all too well, not only in the case of individuals and groups, but in years-long obsessions engulfing entire countries or regions. And we are not talking about witch hunts, although the mechanism of creating a sense of threat is undoubtedly similar and the comparison is not entirely misplaced.

If we delve even deeper into the history of religions, to the period when the monotheisms were forming and numerous competing heterodox groups coexisted at various stages of development, it turns out that similar mechanisms were used centuries ago against religious opponents. Times have changed, social contexts have changed, access to information has changed. One thing has not changed: it is still easy to play on fears. It is one thing to provide accurate information about real threats, and another to deliberately arouse unjustified fear of an imagined enemy.

Of course, one of the fears easiest to exploit nowadays is terrorism, in which a small, inconspicuous group of people can cause a great deal of harm to an entire society. It is not difficult to portray a group of individuals as radicals supposedly ready for anything. Stories like these have been happening for years. One can mention, for example, the media attacks and tax police intervention against the Damanhur group operating in the Italian Alps; the police raids against MISA, the Movement for Spiritual Integration into the Absolute, established in Romania and operating in various countries; or, recently, the attacks on an Argentine-based movement, the Buenos Aires Yoga School.

In one of my earlier Tai Ji Men webinar presentations, at the beginning of the Russian invasion of Ukraine, I mentioned how differently refugees were portrayed in the Polish media. On one hand, there was no doubt that immediate assistance should be provided to Ukrainian refugees, of whom about a million were eventually accepted; at the same time, in the forests on the Polish-Belarusian border, victims of the war in Afghanistan, including small children, were dying. These individuals became part of a political game and a media frenzy that frightened the public with the threat of Islamic terrorists invading the country. Many documentaries were made in the wake of those events, as well as a feature film, Green Border, directed by Agnieszka Holland.

This spring, with a group of researchers of new religiosity, we had an opportunity to visit the headquarters of one of the Islam-based new religious movements advocating for freedom and inclusivity: the Ahmadi Religion of Peace and Light, located in England. This movement's experience shows how the media can play a positive role. During an attempt to legally cross the Turkish-Bulgarian border, the group was attacked for unclear reasons on the Turkish side. It was only thanks to technology enabling satellite data transmission that the incident and the violence committed during it could be documented and the record transmitted, documentation that would otherwise have been seized or destroyed by the authorities committing these acts. Such materials, evidencing the violation of the followers' human rights, managed to reach other media outlets and serve as evidence to initiate legal proceedings to assert their rights and protect members.

And today we will hear the testimonies of the dizi (disciples) of Tai Ji Men. The oldest of them were there when the Tai Ji Men case started in 1996. They suffered because of a politically motivated persecution and the unjust arrest of their leaders, but perhaps they suffered even more because of media slander. Hundreds of articles depicting Tai Ji Men as a cult defrauding its victims, evading taxes, and even raising goblins were published. All these accusations were eventually declared false by courts of law, but in the meantime Tai Ji Men dizi were discriminated against in their workplaces, bullied in schools, and even insulted in the streets just for wearing their distinctive uniform.

History teaches us that the media can either be allies in the fight for freedom or pose a deadly threat to religious freedom. It all depends on whether they are independent or whether they transmit manipulated data, succumbing to political and other pressures. This means, however, that they themselves are not always free. Let's fight for media freedom, because only true independence of the media gives hope for communication that builds and strengthens freedom, rather than taking it away.

More here:

Freedom of the Press as an Element of Freedom of Belief - Bitter Winter

Marshall native, ‘Grandmother of Juneteenth’ gets Presidential Medal of Freedom – Marshall News Messenger

WASHINGTON (AP) - President Joe Biden on May 3 bestowed the Presidential Medal of Freedom on 19 people, including Marshall native and "Grandmother of Juneteenth" Opal Lee.

Biden said the recipients of the nation's highest civilian honor are "incredible people whose relentless curiosity, inventiveness, ingenuity and hope have kept faith in a better tomorrow."

The White House said the recipients have made "exemplary contributions to the prosperity, values, or security of the United States, world peace, or other significant societal, public or private endeavors."

The 10 men and nine women hail from the worlds of politics, sports, entertainment, civil rights and LGBTQ+ advocacy, science and religion. Three medals were awarded posthumously.

Lee, born in Marshall and a 1952 Wiley graduate, led the charge in championing efforts to make Juneteenth nationally recognized as a federal holiday.

The Juneteenth holiday, June 19, marks the day in 1865 when slaves in Texas finally learned that the Civil War had ended and slavery had been abolished. The news, which was delivered in Galveston by Union soldiers, came two and a half years after President Abraham Lincoln's Emancipation Proclamation, which was issued in 1862 and became official Jan. 1, 1863.

President Joe Biden signed the holiday into law in 2021.

(Photo caption) President Joe Biden hands a pen to Rep. Barbara Lee, D-Calif., after signing the Juneteenth National Independence Day Act, in the East Room of the White House, Thursday, June 17, 2021, in Washington. From left: Lee, Rep. Danny Davis, D-Ill., Opal Lee, Sen. Tina Smith, D-Minn. (obscured), Vice President Kamala Harris, Clyburn, Sen. Raphael Warnock, D-Ga., Sen. John Cornyn, R-Texas, Rep. Joyce Beatty, D-Ohio (obscured), Sen. Ed Markey, D-Mass., and Rep. Sheila Jackson Lee, D-Texas. (Evan Vucci/AP File Photo)

Lee, a great-great-grandmother, decided in 2016 that she'd personally trek from Fort Worth to Washington, D.C., to bring attention to the mission. With the support of her church and family, she assembled a team to assist with her walking campaign and launched a change.org petition, soliciting support in her desire to see the national recognition of a day to celebrate "Freedom for All."

In her petition, Lee shared that she believed Juneteenth could be a unifier because it recognizes the fact that slaves didn't free themselves but had help from Quakers along the Underground Railroad; abolitionists both Black and white, like Frederick Douglass and William Lloyd Garrison; soldiers; and many others who gave their lives for the freedom of the enslaved.

The celebration of Juneteenth has always been close to her heart, starting as a child growing up in Marshall, Lee said in an interview with the News Messenger in March 2023.

"In Marshall, on Juneteenth, we'd go to the county fairground. Oh, it would be full of music and food, there would be ballgames and food, and speeches and food, and food and food and food," she said. "But when I came to Fort Worth, people just sort of celebrated in their backyards with their family and their friends."

Lee said the movement to make the observance a national holiday had already begun with the late Rev. Ronald V. Myers Sr., who founded the original National Juneteenth Observance Foundation.

"Mind you, Dr. Ronald Myers had been instrumental in having Juneteenth celebrations in 43 states. And I think some of Doc rubbed off on me," she chuckled. "He passed on, but I was determined to have Juneteenth a national holiday; and so, I guess I took up the mantle."

"And I tell people, anybody's grandma would've done it, you know," she said.

"... I thought that if a little old lady in tennis shoes was walking from Fort Worth to Washington, D.C., that's 1,400 miles, somebody would take notice," she said. "And so, to walk two and a half, 2.5 miles each time was to symbolize that the enslaved didn't know they were free for two and a half years."

Read the rest here:

Marshall native, 'Grandmother of Juneteenth' gets Presidential Medal of Freedom - Marshall News Messenger

Srinivasan on Open Letters, Protests, Free Speech, and Academic Freedom – Daily Nous

Amia Srinivasan's specialty, it seems to me, is making sense of moral ambivalence: detecting, dissecting, and sometimes defending its reasonability, even in the face of unavoidable and urgent decisions.

[Knot by Anni Albers]

It begins with the matter of signing open letters:

An open letter is an unloved thing. Written by committee and in haste, it is a monument to compromise: a minimal statement to which all signatories can agree, or worse, a maximal statement that no signatory fully believes. Some academics have a general policy against signing them. I discovered that was true of some of my Oxford colleagues last year, when I drafted and circulated an open letter condemning Israel's attack on Gaza and calling for a ceasefire. Some, like those who are in precarious employment or whose immigration status isn't settled, have good reasons for adopting such a policy. Others understandably don't want to put their name to something that doesn't perfectly represent their views, especially when it might be read as a declaration of faith. I always cringe at the self-importance of the genre: though open letters can sometimes exert influence, stiffly worded exhortations hardly suffice to stop states, militaries, bombs. And yet, a "no open letters" policy can serve as a convenient excuse when one is hesitant to stand up for one's political principles.

Srinivasan has signed several open letters about Gaza, and recently signed an open letter committing her to an academic and cultural boycott of Columbia University, owing to how it handled student protestors. Then:

In April I was asked to sign a letter opposing the University of Cambridge's investigation into Nathan Cofnas, a Leverhulme early career fellow in philosophy. A self-described "race realist," Cofnas has written widely in defence of abhorrently racist, particularly anti-Black, views, invoking what he claims are the findings of the science of heredity.

She shares her many reservations about signing the open letter, but also her reason for ultimately signing it:

Do we think that students should be able to trigger investigations into academics on the grounds that their extramural speech makes them feel unsafe? Do we want to fuel the right's sense of grievance towards the university, when their minority presence within it is owed to the robust correlation between education and political liberalism, not some Marxist plot? Do we want to empower university administrators to fire academics on the grounds that they are attracting negative publicity? Do we think there is any guarantee that a further strengthened institutional power will only be wielded against those whose views and politics we abhor? If we say yes, what picture of power, theirs and ours, does that presume?

But that's not the end of the discussion, for there's the question of whether, in taking a principled stand, she is also being a sucker for her political opponents:

free speech and academic freedom are, for many on the right, ideological notions, weapons to be wielded against the left and the institutions it is (falsely) believed to control, the university most of all [and] the free-speech brigade has found justifications for the draconian repression of student protest.

There's also the question of the extent to which the free speech brigade understands how academic freedom and freedom of speech come apart, or how even different considerations in favor of free speech might be in tension with each other:

After signing the letter criticising the investigation into Cofnas, I was written to by someone from the Committee for Academic Freedom, which bills itself as a non-partisan group of academics from across the political spectrum. He asked me whether I might consider signing up to the CAF's three principles. I looked them up: I. Staff and students at UK universities should be free, within the limits of the law, to express any opinion without fear of reprisal. II. Staff and students at UK universities should not be compelled to express any opinion against their belief or conscience. III. UK universities should not promote as a matter of official policy any political agenda or affiliate themselves with organisations promoting such agendas. I thought about it for a bit. I'm on board with Principle II, so long as we don't think that asking staff and students to use someone's correct pronouns is akin to demanding they swear a loyalty oath. Principle I is problematic, because it doesn't register that academic freedom essentially involves viewpoint-based discrimination, that indeed the whole point of academic freedom is to protect academics' rights to exercise their expert judgment in hiring, peer review, promotion, examining, conferring degrees and so on. And Principle III would prevent universities from condemning, say, Israel's systematic destruction of universities and schools in Gaza, which I think as educational institutions they are entitled to do.

Discussion welcome, but read the whole thing first.

Read the rest here:

Srinivasan on Open Letters, Protests, Free Speech, and Academic Freedom - Daily Nous

Reproductive Freedom for All Responds To Speaker Johnson's Politico Interview: "This Just Shows Our Movement is Winning" – Reproductive Freedom for All

For Immediate Release: Friday, May 10, 2024

Contact: [email protected]

Reproductive Freedom for All Responds To Speaker Johnson's Politico Interview: "This Just Shows Our Movement is Winning"

Washington, DC - In a newly released interview with Politico, Speaker Mike Johnson sided with President Trump in saying the federal government has no role in protecting abortion rights. When asked if he anticipates putting forth any abortion legislation before the election, Johnson said no.

Reproductive Freedom for All President and CEO Mini Timmaraju released the following statement in response:

"Mike Johnson's flip-flopping on abortion just proves our movement is winning and that Republicans know they're losing. Leaving abortion to the states is not a moderate position, as 21 states are already enforcing horrifying bans with devastating consequences.

"Voters have made it clear to the GOP that we will not tolerate abortion bans. Mike Johnson and congressional Republicans have shown time and time again they are willing to do anything in their power to restrict our reproductive freedom, and we can't trust them.

"We demand a federal response to the abortion crisis and call on the press to ask the Speaker if he will support federal protections. We demand nothing less from our federal government than locking in the federal right to abortion and expanding access."

###

For over 50 years, Reproductive Freedom for All (formerly NARAL Pro-Choice America) has fought to protect and advance reproductive freedom at the federal and state levels, including access to abortion care, birth control, pregnancy and post-partum care, and paid family leave, for everybody. Reproductive Freedom for All is powered by its more than 4 million members from every state and congressional district in the country, representing the 8 in 10 Americans who support legal abortion.

Continued here:

Responds To Speaker Johnson's Politico Interview: This Just Shows Our Movement is Winning - Reproductive ... - Reproductive Freedom for All

Protecting journalists and promoting media freedom: New rules enter into force – European Union

Independent, fact-based journalism helps protect our democracies by exposing injustices, holding leaders to account and allowing citizens to make informed decisions. Journalists, who sometimes work at great personal risk, should be able to work freely and safely. This lies at the heart of EU values and democracies. This week, two pieces of EU legislation enter into force which will ensure greater protection of journalists and further support media freedom: the European Media Freedom Act and the Directive protecting persons who engage in public participation from manifestly unfounded claims or abusive court proceedings (the anti-SLAPP Directive).

These initiatives are part of a European strategy for the media, building on the European Democracy Action Plan and the Media and Audiovisual Action Plan. A recent study also shows that EU countries are making progress in implementing the Commission's Recommendation on the protection, safety and empowerment of journalists. The new rules will help ensure that journalists can carry out their work in a healthy media landscape.

For more information

European Media Freedom Act

Regulation establishing a common framework for media services in the internal market and amending Directive 2010/13/EU (European Media Freedom Act)

EU Directive on protecting persons who engage in public participation from manifestly unfounded claims or abusive court proceedings (Strategic lawsuits against public participation)

Media and digital culture

Media and pluralism

European Democracy Action Plan

Media and Audiovisual Action Plan

Study on measures to improve journalists' safety

Video on strategic lawsuits against public participation (SLAPPs)

Link:

Protecting journalists and promoting media freedom: New rules enter into force - European Union