Daily Archives: June 13, 2024

3 Things to Know About FLiRT, the New Coronavirus Strains – Yale Medicine

Posted: June 13, 2024 at 4:40 pm

[Originally published: May 21, 2024; Updated: June 7, 2024.]

Note: Information in this article was accurate at the time of original publication. Because information about COVID-19 changes rapidly, we encourage you to visit the websites of the Centers for Disease Control & Prevention (CDC), World Health Organization (WHO), and your state and local government for the latest information.

The good news is that in the early spring of 2024, COVID-19 cases were down, with far fewer infections and hospitalizations than were seen in the previous winter. But SARS-CoV-2, the coronavirus that causes COVID, is still mutating. In April, a group of new virus strains known as the FLiRT variants (based on the technical names of their two mutations) emerged.

The FLiRT strains are subvariants of Omicron, and they now account for more than 50% of COVID cases in the U.S. (up from less than 5% in March). One of them, KP.3, accounted for 25% of COVID infections in the United States by the end of the first week of June; KP.2 made up 22.5%, and KP.1.1 accounted for 7.5% of cases.

Some experts have suggested that the new variants could cause a summer surge in COVID cases. But the Centers for Disease Control and Prevention (CDC) also reports that COVID viral activity in wastewater (water containing waste from residential, commercial, and industrial processes) in the U.S. has been dropping since January and is currently minimal.

"Viruses mutate all the time, so I'm not surprised to see a new coronavirus variant taking over," says Yale Medicine infectious diseases specialist Scott Roberts, MD. If anything, he says, the new mutations are confirmation that the SARS-CoV-2 virus remains a bit of a wild card: "It's always difficult to predict what it will do next. And I'm guessing it will continue to mutate."

Perhaps the biggest question, Dr. Roberts says, is whether the newly mutated virus will continue to evolve before the winter, when infections and hospitalizations usually rise, and whether the FLiRT strains will be included as a component of a fall COVID vaccine.

Below, Dr. Roberts answers three questions about the FLiRT variants.


Posted in Covid-19 | Comments Off on 3 Things to Know About FLiRT, the New Coronavirus Strains – Yale Medicine

COVID Variant KP.3 Surges to Dominance – Here's What You Need to Know – Yahoo! Voices

Posted: at 4:40 pm

Fact checked by Nick Blackmer

Data from the U.S. Centers for Disease Control and Prevention shows that a new COVID variant called KP.3 has risen to dominance in the United States.

KP.3 accounts for 25% of cases, while another variant, KP.2, makes up about 22% of cases.

Experts said that KP.3 isn't likely to cause more severe symptoms than other COVID strains.

A new COVID-19 variant called KP.3 has surged to dominance in the United States, according to recent data from the Centers for Disease Control and Prevention (CDC).

As of June 8, KP.3 accounted for 25% of cases, per the CDC. The variant has surpassed the previous dominant variant, KP.2, which now makes up about 22% of cases. Both have knocked down JN.1, the top strain circulating this past winter.

With SARS-CoV-2, the virus that causes COVID, mutating consistently, it's natural to be concerned each time a new variant rises to prominence.

Here's what you need to know about KP.3, including whether experts are worried about its speedy spread.

KP.3 is part of a newly identified group of variants dubbed FLiRT, which are part of SARS-CoV-2's Omicron lineage. In addition to KP.3, the FLiRT variants also include KP.2 and KP.1.1. They all descend from JN.1.

"KP.3 is similar to JN.1 in its structure except for two changes in the spike protein," Carlos Zambrano, MD, a board-certified infectious disease physician and the head of the COVID-19 Task Force at Loretto Hospital in Chicago, told Health.

The spike protein is located on the virus's surface and facilitates its entry into human cells.

"One change was observed in the XBB.1.5 lineage, which was predominant in 2023," he said. "The second change was observed in viruses circulating in 2021."

According to C. Leilani Valdes, MD, a pathologist and medical director at Regional Pathology Associates in Victoria, Texas, the KP.3 variant has become the frontrunner because it spreads quickly and easily.

"It is very good at jumping from one person to another," she said. "This means more people are getting infected with KP.3 compared to other variants."

Both experts agreed that there is currently no clear evidence that KP.3 causes more severe illness than other strains, including the JN.1 strain or its derivatives. As such, people who contract KP.3 can expect to experience symptoms characteristic of other recent COVID variants.

"KP.3 symptoms resemble typical COVID-19 symptoms, including fever, cough, fatigue, and loss of taste or smell," Valdes said. "Some individuals may also experience a sore throat, headache, or muscle pain."

"COVID cases are on the rise, and we can expect the number of cases to continue to increase, especially with the KP.3 variant spreading quickly," Valdes said.

The CDC reported last week that COVID-19 infections are growing or likely growing in 30 states and territories. Cases are stable or uncertain in 18 others and are likely declining in one: Oklahoma.

Per Zambrano, all three COVID vaccine manufacturers (Pfizer, Moderna, and Novavax) have said that their new vaccines slated for August 2024 will target the JN.1 variant.

Because the JN.1 variant is closely related to the FLiRT variants, experts have said that matching the vaccines to JN.1 will offer better protection.

Valdes stressed that vaccination remains one of the most effective tools against COVID. "Staying up to date with booster shots significantly reduces the risk of severe illness and hospitalization," she said. "Wearing masks, washing hands, and keeping distance from others can help prevent the spread."

"The most important takeaway as we head into the summer is that KP.3 spreads easily," she added, "so it's important to be careful."


Read the original article on Health.com.


Clearly Defining ‘Long COVID’ – UConn Today – University of Connecticut

Posted: at 4:40 pm

A national panel of experts that includes the director of the UConn Health Disparities Institute calls for a redefinition of the term long COVID.

The National Academies of Sciences, Engineering, and Medicine committee is out with a report recommending long COVID be defined as "an infection-associated chronic condition that occurs after COVID-19 infection and is present for at least three months as a continuous, relapsing, or progressive disease state that affects one or more organ systems."

Recognizing the existence of multiple working definitions of long COVID, the federal government asked the National Academies to come up with a single, common one.

"Long COVID has profound medical, social, and economic consequences worldwide," says the NASEM in a statement. "The lack of a consensus definition presents challenges for patients, clinicians, public health practitioners, researchers, and policymakers. For patients, varying presentations of the disease and competing definitions can lead to difficulties accessing medical care or obtaining support, skepticism and dismissal of their experiences, delayed or denied treatment, and social stigma."

Linda Sprague Martinez, who joined UConn Health as director of the Health Disparities Institute last fall, is part of the committee, which engaged more than 1,300 participants in preparing the report.

"An important dimension of this definition that providers should pay attention to is the way in which it explicitly attends to health equity," Sprague Martinez says. "This is critical because health care inequity is pervasive and the health care needs of people of color and the poor are frequently overlooked."

The consensus study report, released this week, includes findings that socioeconomic factors, inequality, discrimination, bias, and stigma can affect timely, proper diagnosis, which can impact the potential benefit of care and services specific to long COVID. Examples given include access to COVID-19 testing during acute illness, access to evaluation for possible long COVID, providers' willingness to diagnose a particular patient, access to insurance benefits, and patients' fears of stigmatization from a long COVID diagnosis.

The U.S. Department of Health and Human Services, through its Office of the Assistant Secretary for Health and Administration for Strategic Preparedness and Response, requested the report, which also gives examples of how establishing a clear consensus definition of long COVID can have wide application:

Under the new definition, long COVID can involve any organ system, single or multiple symptoms, and single or multiple diagnosable conditions, and any of the following could be true:

The full report, A Long COVID Definition: A Chronic, Systemic Disease State with Profound Consequences, is available online through the National Academies of Sciences, Engineering, and Medicine.


Knowledge a factor in closing Black-white COVID-19 vaccination gap | Penn Today – Penn Today

Posted: at 4:40 pm

Early in the COVID-19 pandemic, Black Americans were more hesitant to take the COVID vaccine than were White Americans. As the pandemic went on, however, the disparity in vaccination rates between Black and White adults declined. In a paper titled "What Caused the Narrowing of Black-White COVID-19 Vaccination Disparity in the US? A Test of 5 Hypotheses," published in the current issue of the Journal of Health Communication, researchers at the Annenberg Public Policy Center (APPC) assessed explanations for the positive change.

Using April 2021 to July 2022 data from the Annenberg Science and Public Health (ASAPH) survey, a national panel of over 1,800 U.S. adults, a team led by APPC research director Dan Romer assessed potential explanations, including: increased trust in the Centers for Disease Control and Prevention (CDC), exposure to pro-vaccination messages in the media, awareness of COVID-inflicted deaths among personal contacts, and improved access to vaccines. None of these factors explained the decline in disparity, however. Only increased knowledge about COVID-19 vaccination made a difference. Knowledge about the COVID vaccine among Black Americans increased over time, and this increase was associated with their receipt of the vaccine.

"Black Americans became less skeptical of the safety and efficacy of the vaccine as time proceeded, which appeared in our data to be an important contributor to increased vaccination rates among them," says Romer.

In the initial wave of the survey, in April 2021, Black respondents were more likely to believe various forms of misinformation about COVID vaccines, such as that the vaccines are responsible for thousands of deaths and that the vaccines can change someone's DNA. By the end of the survey period, knowledge about the vaccine among Black Americans had increased significantly.

Read more at Annenberg Public Policy Center.


COVID-19 cases are on the rise in Hawaii – Spectrum News

Posted: at 4:40 pm

HONOLULU – The Department of Health sent out an alert asking the public to be vigilant as COVID-19 cases are on the rise in Hawaii.

Last month, the DOH released a new dashboard that compiles data on the activity in Hawaii of three respiratory illnesses: COVID-19, influenza (flu), and respiratory syncytial virus (RSV).

At the time, COVID-19 appeared on the dashboard as yellow, or medium activity, but now it appears as red, or high activity. This means the virus "is circulating at high levels compared with historic trends, and recommended precautions are more important for reducing risk," according to the DOH's alert.

The DOHs precautionary recommendations include:

Get the 2023-24 COVID-19 vaccine. Adults 65 and older and people who are immunocompromised are eligible for an additional dose.

If you feel sick, stay home and away from others. You may return to usual activities when you are fever-free for at least 24 hours without the aid of fever-reducing medicines and your symptoms are improving.

Wear a well-fitting mask indoors.

Stay outdoors or in well-ventilated areas.

Practice good hygiene, such as covering coughs, cleaning frequently touched surfaces and washing hands often.

Take a COVID-19 test if you have symptoms and may need treatment; treatment works best when started as soon as possible.


A Combined Flu and COVID-19 Shot May Be Coming – TIME

Posted: at 4:40 pm

As much as we'd like to think that COVID-19 is behind us, the virus isn't going anywhere. Health officials continue to recommend that people get vaccinated for both COVID-19 and influenza every year for the foreseeable future, and high hospitalization rates for COVID-19 in the past winter were a reminder that SARS-CoV-2 can still cause serious disease.

Soon, that may be possible with one shot instead of two. On June 10, Moderna reported that its combination COVID-19/influenza shot generated even better immune responses against SARS-CoV-2 and influenza than those elicited by existing, separate vaccines.

Both of the shots used in the study are experimental. The COVID-19 portion relies on a slightly different form of SARS-CoV-2's spike protein than the existing vaccine. Instead of encoding for the entire spike protein, the combination vaccine includes two key parts of it in a way that streamlines the shot to require a lower dose, which is useful for a combination vaccine and also potentially extends its shelf life. The influenza component of the vaccine uses the same mRNA technology behind the existing COVID-19 vaccine but targets influenza proteins in the three strains that circulated during the past season: H1N1 and H3N2 from the influenza A group, and an influenza B strain.


In a study of more than 8,000 adults ages 50 and older, about half received the combination vaccine. The other half (the control group) received two separate shots: Moderna's latest COVID-19 vaccine, which targets the XBB.1.5 variant, and a flu shot (either Fluarix, if people were 50 to 64 years old, or Fluzone HD for those 65 and older).

In the younger group, the combo vaccine generated about 20% to 40% higher levels of antibodies to the influenza strains, and 30% higher levels to XBB.1.5, compared to the control group. Among older people, antibodies were 6% to 15% higher against the flu strains and 64% higher against XBB.1.5 compared to older people in the control group.

"The real advantage of a single shot is that people only need to get one needle," says Dr. Jacqueline Miller, senior vice president and head of development in infectious diseases at Moderna. There's a public-health advantage, too, she says, since U.S. vaccination rates for both diseases are relatively low. "When we are able to give the two vaccines as one, it could increase vaccine compliance rates, especially for those at highest risk."


Moderna is continuing to study the COVID-19 vaccine and the flu shot used in the combo as separate shots as well. That data will also help the U.S. Food and Drug Administration (FDA) when it reviews the company's request for approval of the combination shot, which could come by the end of the year. The specific strains targeted in the shot will depend on which forms of the viruses are circulating at the time. (The company also filed a request to the FDA on June 7 to update its COVID-19 vaccine to target the JN.1 variant.)

The combination vaccine will likely not arrive in time for the flu and COVID-19 season this fall. But in coming years, a two-in-one vaccine could help to increase vaccination rates, which in turn could contribute to lower hospitalization rates for both diseases.


Long-Term COVID-19 Risks: Death, Postacute Sequelae in Third Year – HealthDay

Posted: at 4:40 pm

THURSDAY, June 13, 2024 (HealthDay News) -- For individuals with severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection, the risks for death and postacute sequelae of COVID-19 (PASC) reduce over three years but persist, especially among hospitalized individuals, according to a study published online May 30 in Nature Medicine.

Miao Cai, Ph.D., from the Veterans Affairs St. Louis Health Care System, and colleagues followed a cohort of 135,161 people with SARS-CoV-2 infection and 5,206,835 controls from the U.S. Department of Veterans Affairs for three years to estimate the risks for death and PASC.

The researchers found that the increased risk for death was no longer seen after the first year of infection among nonhospitalized individuals. The risk for incident PASC declined over three years but still accounted for 9.6 disability-adjusted life years (DALYs) per 1,000 persons in the third year. The risk for death decreased among hospitalized individuals, but in the third year after infection, it remained significantly elevated (incidence rate ratio, 1.29). Over the three years, the risk for incident PASC decreased, but substantial residual risk persisted in the third year, resulting in 90.0 DALYs per 1,000 persons.

"That a mild SARS-CoV-2 infection can lead to new health problems three years down the road is a sobering finding," Ziyad Al-Aly, M.D., also from the Veterans Affairs St. Louis Health Care System, said in a statement. "The problem is even worse for people with severe SARS-CoV-2 infection. It is very concerning that the burden of disease among hospitalized individuals is astronomically higher."

Several authors disclosed ties to the pharmaceutical industry; one author reported ties to Guidepoint.



Meet Kyogu Lee, President of Supertone – the voice cloning AI company acquired by HYBE for $32m – Music Business Worldwide

Posted: at 4:37 pm

MBW's World Leaders is a regular series in which we turn the spotlight toward some of the most influential industry figures overseeing key international markets. In this feature, we speak to Kyogu Lee, President of HYBE-owned voice AI company Supertone. World Leaders is supported by PPL.

AI technology is a big priority for HYBE, the South Korean entertainment giant behind K-pop superstars BTS.

Evidence of that arrived in March when HYBE's CEO Jiwon Park consented to have his own voice cloned to demonstrate the capabilities of the company's proprietary AI on its Q1 investor call.

HYBE's so-called voice synthesis technology was developed by Supertone, the AI voice replication software startup in which HYBE acquired a majority stake in a $32 million deal in 2023.

Founded in Seoul in 2020, Supertone claims to be able to create "a hyper-realistic and expressive voice that [is not] distinguishable from real humans."

Supertone's purported ability to do just that makes the apparent strategy behind HYBE's multi-million-dollar investment in the technology a lot clearer when viewed through the lens of comments shared by HYBE Chairman Bang Si-Hyuk in an interview with Billboard last year.

"I have long doubted that the entities that create and produce music will remain human," said Bang Si-Hyuk.

"I don't know how long human artists can be the only ones to satisfy human needs and human tastes. And that's becoming a key factor for my operation and a strategy for HYBE."

By acquiring Supertone, HYBE has also brought into the fold the startups co-founder and President Kyogu Lee, a widely respected AI expert with a PhD in Computer-Based Music Theory and Acoustics from Stanford University.

In addition to heading up Supertone at HYBE, he leads applied research at the Artificial Intelligence Institute at Seoul National University (AIIS) and is also in charge of the Music and Audio Research Group (MARG) at SNU.

Lee claims in our exclusive and in-depth interview below that Supertone stands out in the AI audio landscape today because it is theoretically capable of creating "an infinite number of new and original voices," as well as recreating existing voices.

This is made possible by Supertone's foundation model, NANSY (Neural Analysis & Synthesis), which Lee explains serves as the backbone of Supertone's speech synthesis technologies. You can read the research paper for NANSY here.

"NANSY has the special ability to divide and re-assemble voice components (timbre, linguistics, pitch, and loudness) individually and independently, generating natural-sounding voices with unparalleled realism," he adds.

Supertone's AI vocal cloning tech first generated global media attention in January 2021, when it resurrected the voice of South Korean folk superstar Kim Kwang-seok to be played on the Korean television show Competition of the Century: AI vs Human.

More recently, Supertone made headlines globally after recreating the voice of Kim Hyuk Gun, the vocalist of the popular Korean band The Cross, who was paralyzed in an accident. "We collected 20 years of his voice data since debut and used it to train an AI voice in his unique vocal style," explains Lee.

HYBE also showcased the possibilities of what it can do with Supertone's technology when it released a new single called Masquerade from HYBE artist MIDNATT (aka Lee Hyun) last year. It was claimed by HYBE at the time to be the first-ever multilingual track produced in Korean, English, Japanese, Chinese, Spanish, and Vietnamese.

In an increasingly global (yet localized) world, and amid the worldwide explosion of genres from K-Pop and J-Pop to Afrobeats and Spanish-language music, the opportunities presented by this use case of Supertone's tech alone will likely have piqued the interest of music industry leaders worldwide.

Using this tech, a superstar artist (think Taylor Swift, Billie Eilish, or The Weeknd) could release a new single in multiple languages, in their actual voice, on the same day.

According to Lee: "Supertone's multilingual pronunciation correction technology unlocks new avenues for artists to communicate with local fans in their native language, reaching out to the global market."

He adds: "We hope this collaboration will establish a constructive precedent for AI technology supporting artists in overcoming language barriers to connect with global fans and broaden their musical spectrum."

MBW has previously asked if the company's newly acquired AI technology could ever be used to recreate the voices of superstar HYBE artists like BTS for projects that don't require the group's in-person participation, for example while they're serving in the military.

MBW readers who have been following our coverage of HYBE's financial performance over the years will recall that its Artist Indirect-Involvement business line (revenue-generating projects that use an artist's brand/likeness without the actual artist needing to be involved) became the company's primary revenue driver in 2020 in the absence of live shows during the pandemic.

In FY 2021, a year in which HYBE's revenues surpassed $1 billion for the first time, the company's biggest organic revenue driver was, once again, its Artist Indirect business, accounting for more than 60% of the company's revenues.

This Artist Indirect-Involvement business was only overtaken by the company's Artist Direct-Involvement business line in Q1 2022.

We asked Supertones President the same question: Will its tech ever be used to recreate the voices of superstars like BTS?


He tells us that, while Supertone is theoretically capable of creating an infinite number of new and original voices, as well as recreating existing voices, "we are devoted to prioritizing the rights of all artists and creators, including those under HYBE."

He adds: "Our focus with HYBE artists lies in facilitating seamless communication and interaction with global audiences, transcending all barriers, including language and geography."

Lee notes that HYBE is currently working on AI-dubbing some of its artists' voices into foreign languages for parts of their video content, for example TOMORROW X TOGETHER's ACT: SWEET MIRAGE concert video, where the members' comments were dubbed into Spanish using Supertone's technology.

One of Supertones latest advancements is a real-time vocal changer called Supertone Shift that lets users switch between voices from a library of ten predefined voices. Users can then customize their chosen voice by adjusting the pitch, reverb and other effects.

Apart from the obvious production-related uses for this tool, the real-time capabilities could make it equally useful in a live setting. Just picture it: an artist could sing live on stage in multiple different AI-assisted voices, all switched in real time.

Lee tells us that Supertone Shift has already hit 70,000 downloads and 30,000 monthly active users in just over two months since its beta launch.

"The demand for expressing alter-egos has surged," adds Lee.

Beyond music, Lee says that he envisions Supertone Shift as the ultimate creative tool for a diverse range of content creators, including VTubers, livestreamers, podcasters, and gamers, enhancing the versatility and quality of their outputs.

HYBEs investment in Supertone arrived ahead of the current explosion of AI tech in the music industry and the wave of challenges it has brought with it. There are concerns about the source and legality of the training data used by many of the prominent AI music generators on the market today.

Music industry leaders have also raised the alarm about music streaming services and social media platforms being flooded with AI-generated songs. Some songwriters and artists, meanwhile, are worried about the threat of AI tools to their livelihoods.

According to Lee, AI's future contribution to the music industry will lie in expanding the creativity and imagination of creators and artists rather than replacing creators and creativity altogether.

"Music devoid of a storyteller (the artist) lacks the essential connection between storyteller (artist), story (music), and listener (fan), which leads me to believe that AI-generated music created without artist input may not endure," he says.

Meanwhile, for Supertone, Lee says that the HYBE subsidiary is focusing on evolving into a consumer-facing company this year by offering what he calls "artistic intelligence" with its suite of AI tools for creators.

"By providing convenient services that are universally accessible and applicable across diverse content fields, we aim to reduce creative barriers for professionals and individuals alike," says Lee.

Here, Supertone's President and HYBE's resident AI expert, Kyogu Lee, tells us more about his company's tech, and his predictions for AI in the music business

Voices created through Supertone's technology can be used in various areas, including acting and singing, due to their rich expression, which has reached new heights through our recent technological advancement to generate them in real time. Moreover, fully equipped with our R&D lab, Content Business Development department, and in-house studio, Supertone transcends the scope of a technology provider; it serves as a gateway to elevated content, offering new possibilities for content partners spanning music, broadcasting, movies, games, and beyond. We strive to add value to the content industry by amplifying creators' artistic expression to produce more engaging content, and by introducing innovative voices to create new forms of content.

As we continue to collaborate with the creative industry, Supertone's value is being appreciated across a wide range of content domains.

Notable achievements include our contribution to the Netflix series MASK GIRL, released in August 2023, where Supertone's multi-speaker voice morphing technology brought to life the main character Kim Mo-mi's alternative persona as an online streamer by producing a unique third voice from fusing the voice tones of two actresses who played the character.

Additionally, in the Disney+ 2022 hit series Big Bet, Supertone utilized its voice de-aging technology, the industry's first attempt, to rejuvenate veteran actor Choi Min-sik's voice for his character, who was in his 30s.

Kim Kwang-seok is a legendary singer cherished by Korean people, with deep connections and affection, so we approached the project with utmost respect.

Although we were cautious given the unfamiliarity of voices created with SVS technology at that time, we had confidence in our ability to authentically resurrect his voice, leveraging Supertone's forte in creating expressive voices that could deliver emotions and meanings through singing or speech.

Thankfully, the music industry and fans embraced the result with delight and gratitude. For the public, it provided a chance to observe new possibilities in the content realm, as AI reignited their nostalgia. Hearing Kim's recreated voice, Kim Sang-wook, a prominent Korean scientist, responded, "I hope this serves as an opportunity to explore AI and contemplate its coexistence with humanity." I'm grateful it succeeded in its goal of evoking memories and ultimately resonating with fans as intended.

Supertone initially engaged with HYBE [formerly Big Hit Entertainment] in the first half of 2020. During this period, Supertone's singing synthesis technology was gaining attention, and the late Kim Kwang-seok's project sparked interest from the entertainment industry, including HYBE, marking the beginning of our interaction.


HYBE had long been at the forefront of pioneering and advancing technological innovation in the entertainment sector. They recognized the promising trajectory of Supertone's technology, including the innovative singing synthesis technology, which we both trusted would be suitable for the music industry. Concurrently, Supertone was firmly convinced of the boundless possibilities and synergies that would arise from combining our technology with HYBE's global intellectual property (IP) and established production capabilities, which resulted in this partnership.

Acquired by HYBE in January 2023, Supertone contributes to HYBE's commitment to providing new avenues for content and fan experiences through solution businesses that leverage artists' intellectual property (IP). We're currently in the process of running pilot projects across HYBE's various business areas, networks, and partnerships to advance Supertone's technology and explore applications that can support and assist artists. Our technology can be utilized as a useful creative tool for artists like MIDNATT who seek new musical endeavors beyond technological limitations.

Additionally, it can enhance content immersion by integrating natural and expressive voice synthesis technology, as exemplified by Weverse Magazine's "Read-Aloud" feature.

Were continuously discussing various business opportunities internally to innovate the possibilities of content creation.

The MIDNATT project marks the first occasion on which Supertone collaborated with an established artist to deliver more immersive and accessible music to fans worldwide. Following the release of his track "Masquerade," we saw a significant number of positive responses from fans in various languages.

Some expressed how hearing their beloved artists in their native tongue and instantly comprehending the lyrics moved them like never before.

It was immensely gratifying and rewarding that they understood the intention and sincerity behind [the project].

Supertone's extensive research into real-time AI voice conversion traces back to 2021, triggered by a conversation with an artist I met through a TV show. Despite being a beloved artist for a long time, he expressed regret over his voice's inherent limitations in producing a wider range of expressions.

This made me realize that not only ordinary individuals like us, but also those who captivate the public with beautiful voices, desire new forms of vocal expression.

Focusing on achieving real-time conversion of conversation-level voices, we showcased our initial project, then called NUVO, at CES 2022, where it won the Innovation Award. Later, we refined the technology to a level suitable for live stages. This was demonstrated in 2023 when MIDNATT seamlessly transitioned between his own vocal and a female vocal during a live performance. Achieving imperceptible latency prompted us to recognize the needs of real-time content creators, leading to the development of Supertone Shift.

We are fully aware of the controversy associated with AI technologies. Above all, what's crucial is to ensure that an artist's creative intentions are conveyed, and that AI technologies are used as a catalyst for human creativity. It is our firm belief that we can only change perceptions by showcasing exemplary cases of how technology can assist artists and creators by collaborating closely with them. Creating meaningful content based on technology cannot happen without inspiration and ideas that originate from creators.

Recently, Supertone recreated the voice of Kim Hyuk Gun, the vocalist of the Korean band The Cross. After performing The Cross's music on a live stage together with the AI voice, Kim expressed his appreciation, saying that thanks to the assistance of AI, he was able to successfully deliver a live performance despite his challenging condition.

As showcased in this example, Supertone is constantly searching for ways to assist artists in overcoming creative barriers caused by physical or technological limitations.

However, we are often amazed by the innovative ideas and unexpected applications proposed by the artists and creators we collaborate with. Ultimately, I believe technology evolves in a mutually beneficial manner through ongoing interaction and engagement with artists and creators.

AI is being utilized throughout the entire process of creating, producing, distributing, and consuming music. Perhaps the aspect most affected is the creative process. However, I am personally skeptical that we can call music produced solely by AI the evolution of music.

To explain the reason behind this, we need to talk about the essence of music, which I believe is storytelling: the fundamental purpose of all creative works and content.

Artists aspire to convey their intended story through the creative process, and various formats and genres of content have developed to maximize the effectiveness of their storytelling.

First and foremost, I believe establishing social consensus should be prioritized, one which will provide guidelines for identifying and addressing potential risks and issues caused by synthesized voices created without consent. This will mandate the AI industry to equip itself with the capability and readiness to respond to these issues.

We do not monetize on a voice without the permission of its rightful owner, under any circumstances.

Since its establishment, Supertone has adhered to the philosophy of developing products and conducting business in a manner that respects the intentions of creators. We also continue to enhance ethical guidelines and technological safeguards to prevent the abuse and misuse of AI technology. Supertone possesses watermark technology capable of detecting voices created by Supertone, and since April we have been conducting advanced research and development on this technology. In addition, we are actively cooperating to establish legal and institutional frameworks through continuous communication and interaction with relevant industries and policymakers. Throughout our endeavors, we will always prioritize the needs of creators and fans, striving to develop and apply relatable and coexistent technologies.

Supertone upholds the following three principles for responsible and ethical use of AI:

Supertone aspires to be the foremost choice of creators worldwide who seek solutions and services to produce voice content effectively and efficiently. We aim to imprint the equation "#1 Voice AI Tech Provider = Supertone" in the minds of all creators and potential customers globally.

The democratization of music production, fueled by advancements in creation and production technologies, has empowered numerous non-professionals to create music effortlessly.

As technology advances to facilitate music production and distribution, overproduction and oversaturation emerge as significant challenges.

Moreover, the widespread accessibility of the internet and various platforms has enabled global distribution of music.

This inundates listeners with an overwhelming amount of music on an increasingly larger scale, making it difficult for them to discover and explore music that aligns with their preferences. Addressing this challenge will require the development of systems or methodologies capable of identifying and delivering hidden, high-quality music to listeners.

View post:

Meet Kyogu Lee, President of Supertone - the voice cloning AI company acquired by HYBE for $32m - Music Business Worldwide

Exclusive: Camb takes on ElevenLabs with open voice cloning AI model Mars5 offering higher realism, support for 140 … – VentureBeat

Posted: at 4:37 pm

Today, Dubai-based Camb AI, a startup researching AI-driven content localization technologies, announced the release of Mars5, a powerful AI model for voice cloning.

While there are plenty of models that can create digital voice replicas, including those from ElevenLabs, Camb claims to differentiate by offering a much higher level of realism with Mars5's outputs.

According to early samples shared by the company, the model not only emulates the original voice but also its complex prosodic parameters, including rhythm, emotion and intonation.

Camb also supports nearly three times as many languages as ElevenLabs: more than 140 languages compared to ElevenLabs' 36, including low-resource ones like Icelandic and Swahili. However, the open-source release, which can be accessed on GitHub starting today, covers only the English-specific version. The version with expanded language support is available on the company's paid Studio.

"The level of prosody and realism that Mars5 is able to capture, even with just a few seconds of input, is unprecedented. This is a Mistral moment in speech," Akshat Prakash, the co-founder and CTO of the company, said in a statement.

Normally, voice cloning and text-to-speech conversion are two separate offerings. The former captures parameters from a given voice sample to create a voice clone while the latter uses that clone to convert any given text into synthetic speech. The technology, as we have seen in the past, has the potential to portray anyone as speaking anything.

With Mars5, Camb AI is taking the work a step further by merging both capabilities into a unified platform. All a user has to do is upload an audio file, ranging from a few seconds to a minute, and provide the text content. The model then uses the speaker's voice in the audio file as a reference, captures the relevant details (the original voice, speaking style, emotion, enunciation, and meaning) and synthesizes the provided text as speech.
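The single-call workflow described above can be sketched as a toy in Python. Everything here (`VoiceProfile`, `extract_profile`, `synthesize`, and the dummy values) is an illustrative stand-in, not Camb AI's actual Mars5 API.

```python
from dataclasses import dataclass

@dataclass
class VoiceProfile:
    """Stand-in for the prosodic parameters captured from reference audio."""
    speaker_id: str
    pitch_hz: float
    speaking_rate: float

def extract_profile(ref_audio: bytes, speaker_id: str) -> VoiceProfile:
    # A real model would infer pitch, rhythm, and emotion from the waveform;
    # this stub just derives dummy values from the audio length.
    return VoiceProfile(speaker_id=speaker_id,
                        pitch_hz=100.0 + len(ref_audio) % 50,
                        speaking_rate=1.0)

def synthesize(profile: VoiceProfile, text: str) -> dict:
    # Stand-in for synthesis: describes what would be rendered rather than
    # producing audio samples.
    return {"speaker": profile.speaker_id, "text": text,
            "pitch_hz": profile.pitch_hz}

# One "upload" (reference audio) plus target text, exactly as in the article.
profile = extract_profile(b"\x00" * 16000, "demo-speaker")  # ~1 s of silence
out = synthesize(profile, "Hello from a cloned voice.")
print(out["speaker"])  # demo-speaker
```

The point of the sketch is the shape of the interface: a single reference upload feeds both the cloning step and the synthesis step, rather than treating them as separate products.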

The company claims Mars5 can capture diverse emotional tones and pitches, covering all sorts of complex speech scenarios such as when a person is frustrated, commanding, calm or even spirited. This, Prakash noted, makes it suitable for content that has been traditionally difficult to convert into speech such as sports commentary, movies, and anime.

To achieve this level of prosody, Mars5 combines a Mistral-style ~750M parameter autoregressive model with a novel ~450M parameter non-autoregressive multinomial diffusion model, operating on 6kbps encodec tokens.

"The AR model iteratively predicts the most coarse (lowest-level) codebook value for the encodec features, while the NAR model takes the AR output and infers the remaining codebook values in a discrete denoising diffusion task. Specifically, the NAR model is trained as a DDPM using a multinomial distribution on encodec features, effectively inpainting the remaining codebook entries after the AR model has predicted the coarse codebook values," Prakash explained.
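The two-stage decoding Prakash describes can be illustrated with a minimal stand-in: an autoregressive loop predicts the coarsest codec codebook token-by-token, then a non-autoregressive pass fills in the remaining codebooks for every frame at once. The random sampling below is a placeholder for the real ~750M and ~450M parameter networks, and the codebook and frame counts are illustrative assumptions, not Mars5's actual configuration.

```python
import random

VOCAB = 1024        # tokens per codebook (Encodec-style)
N_CODEBOOKS = 8     # residual codebooks per audio frame (assumed)
N_FRAMES = 12       # toy utterance length

def ar_stage(n_frames: int, rng: random.Random) -> list[int]:
    """Predict the coarse (level-0) codebook one token at a time."""
    coarse = []
    for _ in range(n_frames):
        # A real AR model conditions on all previously emitted tokens;
        # the stub simply samples a token per step.
        coarse.append(rng.randrange(VOCAB))
    return coarse

def nar_stage(coarse: list[int], rng: random.Random) -> list[list[int]]:
    """Fill codebooks 1..N-1 for all frames in parallel ("inpainting"),
    conditioned on the level-0 tokens the AR stage produced."""
    frames = []
    for c in coarse:
        frames.append([c] + [rng.randrange(VOCAB) for _ in range(N_CODEBOOKS - 1)])
    return frames

rng = random.Random(0)
tokens = nar_stage(ar_stage(N_FRAMES, rng), rng)
print(len(tokens), len(tokens[0]))  # 12 8
```

The design intuition: serial prediction is reserved for the codebook that carries the most perceptual weight, while the cheaper parallel stage refines detail, which keeps latency closer to a single AR pass over one codebook instead of eight.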

While specific benchmark stats are yet to be seen, early samples and tests (with a few seconds of reference audio) run by VentureBeat show that the model mostly performs better than popular open and closed-source speech synthesis models, including those from Metavoice and ElevenLabs. The competing offerings synthesized speech clearly, but the results didn't sound as similar to the original voice as Mars5's did.

"ElevenLabs is closed source, so it's hard to say specifically why they aren't able to capture nuances that we can, but given that they report training on 500K+ hours (almost five times the dataset we have in English), it is clear to us that we have a superior model design that learns speech and its nuances better than theirs. Of course, as our datasets continue to grow and Mars5 trains even more, which we will release in successive checkpoints on GitHub, we expect it to only get better and better, especially considering support from the open-source community," the CTO added.

As the company continues to bolster the voice cloning and text-to-speech performance of Mars5, it is also planning the open-source release of another model called Boli, designed to enable translation with contextual understanding, correct grammar, and apt colloquialisms.

"Boli is our proprietary translation model, which surpasses traditional engines such as Google Translate and DeepL in capturing the nuances and colloquial aspects of language. Unlike large-scale parallel corpus-based systems, Boli offers a more consistent and natural translation experience, particularly in low- to medium-resource languages. Feedback from clients indicates that Boli's translations outperform those produced by mainstream tools, including the latest generative models like ChatGPT," Prakash said.

Currently, both Mars5 and Boli work with 140 languages on Camb's proprietary platform, Camb Studio. The company is also providing these capabilities as APIs to enterprises, SMEs, and developers. Prakash did not share the exact number of customers, but he did point out that the company is working with Major League Soccer, Tennis Australia, and Maple Leaf Sports & Entertainment, as well as leading movie and music studios and several government agencies.

For Major League Soccer, Camb AI live-dubbed a game into four languages in parallel for over two hours uninterrupted, becoming the first company to do so. It also translated the Australian Open's post-match conferences into multiple languages and translated the psychological thriller "Three" from Arabic to Mandarin.

Go here to read the rest:

Exclusive: Camb takes on ElevenLabs with open voice cloning AI model Mars5 offering higher realism, support for 140 ... - VentureBeat

Single-cell cloning solution speeds breakthroughs | UNC-Chapel Hill – The University of North Carolina at Chapel Hill

Posted: at 4:37 pm

Cell Microsystems technologies allow researchers to image, identify, and isolate viable single cells for analysis more successfully and efficiently than ever. Its core CellRaft technology was invented in the UNC-Chapel Hill lab of Dr. Nancy Allbritton, who co-founded the company in 2010 with chemistry professor Dr. Christopher Sims and researcher Yuli Wang.

Scientists working in the pharma-biotech and academic industries need to isolate single cells to understand and develop treatments for diseases. The traditional methods they have relied on, such as single-cell RNA sequencing, can destroy the original cell, are labor- and time-intensive, and have low yield rates.

The CellRaft Array technology created at Carolina allows scientists to get better results faster. "The traditional way of doing single-cell clonal propagation is a 10-week process, but with our CellRaft Array, you can go from single cells to a plate full of clones in five to 10 days," said Gary Pace, who became the Cell Microsystems CEO in 2014.

Over the past 10 years, Cell Microsystems tested its technology and developed products based on the needs of its customers. One of the companys early breakthroughs came through CRISPR, technology that works like molecular scissors to cut DNA at specific locations and help scientists add, remove or replace genetic material to treat genetic diseases.

"We played with other applications before that, but they didn't address a particular market," Pace said. "Once we recognized CRISPR as a key application, we saw there was a whole world out there built around clonal propagation from single cells, a world that has only gotten bigger. CRISPR allowed us to begin to focus on specific markets."

To extend the power of its core technology, the company developed an automated platform that allows scientists to watch a single cell divide multiple times. Captured images help researchers identify specific cell attributes that are important for further analysis.

The platform is driven by Cell Microsystems' proprietary software called CellRaft Cytometry, which automates how the system isolates cells and captures images. The software's image-based verification capabilities let researchers specify precise attributes they're interested in, such as cell shape or colony size, and then identify cell colonies that meet their specifications.

The software brings together insights on single-cell propagation, clone colony size, and observable genetic characteristics. "Our CellRaft Cytometry software creates a Venn diagram of those three distinct observations, and then you can isolate just the cells in the position on the array that overlap all three of those circles of the Venn diagram," Pace said. "We've collapsed a number of the different modalities of a single-cell workflow into a single platform. And that's very powerful. No one else has that."
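The three-way intersection Pace describes can be sketched as plain set operations: keep only the array positions (rafts) whose cells satisfy all three observations at once. The field names and thresholds below are invented for illustration, not taken from the CellRaft software.

```python
# Toy records for five raft positions on an array; each carries the three
# observations Pace mentions: propagation, colony size, and a genetic marker.
cells = [
    {"raft": 1, "divided": True,  "colony_size": 40, "marker_positive": True},
    {"raft": 2, "divided": True,  "colony_size": 12, "marker_positive": True},
    {"raft": 3, "divided": False, "colony_size": 55, "marker_positive": True},
    {"raft": 4, "divided": True,  "colony_size": 60, "marker_positive": False},
    {"raft": 5, "divided": True,  "colony_size": 35, "marker_positive": True},
]

# One "circle" of the Venn diagram per criterion.
propagated   = {c["raft"] for c in cells if c["divided"]}
large_colony = {c["raft"] for c in cells if c["colony_size"] >= 30}
marker       = {c["raft"] for c in cells if c["marker_positive"]}

# Only rafts in all three sets are selected for isolation.
selected = propagated & large_colony & marker
print(sorted(selected))  # [1, 5]
```

Each criterion alone passes several rafts, but the intersection narrows the selection to the ones worth picking, which is the point of collapsing the separate workflow modalities into one filter.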

Cell Microsystems licenses intellectual property for the foundational technology invented at UNC-Chapel Hill used in the CellRaft product and worked with the UNC Office of Technology Commercialization on other joint patents for the company's automated platform. The company has also filed patents on its own inventions.

With a well-integrated set of technologies built around user needs, Cell Microsystems offers researchers a single solution that packs a powerful punch. "For scientists, there are benefits across the board: high viability, very efficient, amenable to a large number of cell lines, and an integrated platform that gives you cytometric data that you can't get anywhere else," Pace said.

Read more about Cell Microsystems.

Visit link:

Single-cell cloning solution speeds breakthroughs | UNC-Chapel Hill - The University of North Carolina at Chapel Hill
