Slavoj Zizek: the philosopher who annoys all the right people – The Spectator

Surplus-Enjoyment: A Guide for the Non-Perplexed

Slavoj Zizek

Bloomsbury Academic, pp. 384, £20

Slavoj Zizek is a Slovenian graphomaniac who infuriates some of the world's most annoying people, and might for this reason alone be cherished. He once enjoyed a high degree of pop-philosophical notoriety, being blamed by pundits who had clearly never read his books for the scourge of pomo relativism that threatened to undermine the moral clarity of those who deemed it an excellent wheeze to invade Iraq. Such was his leftish celebrity a decade ago that he shared a stage with Julian Assange and was forced to deny rumours that he was having an affair with Lady Gaga. 'My friends said: "You're stupid. You should have said: No comment."'

Since then his fame has somewhat waned, perhaps because he doesn't do social media (which is a shame, since he would be a Trumpian master of the form). But that hasn't prevented him from continuing to emit an 'obscene' (a favourite word of his) quantity of books. The long-term Zizek observer already knows that one doesn't exactly read a new book by him so much as tune in again to the ceaselessly babbling stream of his comic-philosophical free association. And so it is here: the bracing mash-up of his beloved Hegel with Marx, perverse yet enjoyably plausible interpretations of Hollywood movies, and disquisitions on contemporary politics and culture wars.

There is also the usual amount of Lacanian theory, which to some might seem like a version of Scientology for continental philosophers, a sort of intellectual Ponzi scheme in which adepts prove their belonging via the repetition of absurdities, though the psychoanalytic framework more generally does issue in pungent diagnoses of modern sacred cows. 'Does the predominant ecological discourse,' he asks, 'not address us as a priori guilty, indebted to Mother Nature, under the constant pressure of the ecological superego?'

In any case, Zizek's value as a thinker and gadfly lies precisely in his refusal to submit to 'boring' (another favourite word) empiricism and his joy in insulting the left as well as the right. To those who claim to be on the right side of history he counterposes a gloomy poetry: 'History is not on our side, it tends towards our collective suicide.' As an old-fashioned Marxian materialist, he pounces on the curious contradictions of modern leftist nostrums:

The basic characteristic of today's subjectivity is the weird combination of the free subject who experiences himself as ultimately responsible for his fate and the subject who grounds the authority of his speech on his status of a victim of circumstances beyond his control... The notion of subject as a victim involves the extreme narcissistic perspective: every encounter with the Other appears as a potential threat to the subject's precarious imaginary balance.

Indeed, to the extent that he is a communist, he is one with a notably conservative pessimism about the human animal, its envy and its perversions. Glossing Oedipus at Colonus, he concludes with miserable glee: 'Our being born is already a kind of failure.'

What does Zizek enjoy? He likes anarchic challenges to the status quo, such as the Wall Street Bets online forum of amateur investors that caused a massive bubble, and then crash, in the share price of the ailing US retailer GameStop in 2021. He even finds something to enjoy in the carnival atmosphere of the storming of the Capitol by Trumpists because, or so he argues mischievously, the outraged liberals were outraged only because the wrong kind of people were doing it. Our philosopher thrills to such events because they subvert the system by over-identifying with it or, rather, by universalising it and thereby bringing out its latent absurdity.

This, too, is what Zizek aims to do with modern ideological conflicts. From Hegel he takes the basic lesson that a critique should always be a critique of critique itself, proceeding dialectically to a sort of plague-on-both-your-houses synthesis. This is, for example, how he proceeds in an interestingly tortured chapter on modern gender identities. Meanwhile, he sees the resurgent Taliban and the Covid vaccine sceptics as twin poles of a dead-end reaction to modernity, which can only be overcome by protecting a space for the public exercise of reason. Fans will wonder in alarm: is Zizek turning into Habermas?

Well, he could never be as dull a writer. He is a great caller of things stupid, which is a skill too little practised in a world dedicated to avoiding offence. But he also has genuine enthusiasms that constantly surprise the reader, such as a brilliant few pages on Shostakovich and, later, on the film Joker. As with many of the high priests of postmodernism (e.g. Derrida), Zizek is at heart really a close reader and a seriously inventive one.


Appen’s Annual State of AI and Machine Learning Report Identifies a Gap in Ideal Versus Reality of Data Quality – Business Wire

SAN FRANCISCO--(BUSINESS WIRE)--Appen Limited (ASX:APX), the global leader in data for the AI Lifecycle providing data sourcing, data preparation, and model evaluation by humans at scale, today released its eighth annual State of AI and Machine Learning report. This year's report reveals that sourcing quality data is an obstacle to creating AI.

According to the report's findings, 51% of participants agree that data accuracy is critical to their AI use case. To successfully build AI models, organizations need accurate and high-quality data. Unfortunately, business leaders and technologists report a significant gap between the ideal and the reality of achieving data accuracy.

Appen's research also found that companies are shifting their focus to responsible AI and maturing in their use of AI. More business leaders and technologists are focusing on improving the data quality behind AI projects in order to promote more inclusive datasets and, as a result, unbiased and better AI. In fact, 80% of respondents stated data diversity is extremely important or very important, and 95% agree that synthetic data will be a key player when it comes to creating inclusive datasets.

"This year's State of AI report finds that 93% of respondents believe responsible AI is the foundation of all AI projects," said Mark Brayan, CEO at Appen. "The problem is, many are facing the challenges of trying to build great AI with poor datasets, and it's creating a significant roadblock to reaching their goals."

Additional key takeaways from the 2022 State of AI Report include:

Sourcing: 42% of technologists say the data sourcing stage of the AI lifecycle is very challenging.

Evaluation: 90% are retraining their models more than quarterly.

Adoption: Business leaders are split down the middle on whether their organization is ahead of (49%) or even with (49%) others in their industry.

"The majority of AI efforts are spent managing data for the AI lifecycle, which means it is an incredible undertaking for AI leads to handle alone and is the area many are struggling with," said Sujatha Sagiraju, Chief Product Officer at Appen. "Sourcing high-quality data is critical to the success of AI solutions, and we are seeing organizations emphasize the importance of data accuracy."

The State of AI report was sourced from 504 interviews collected via The Harris Poll online survey of IT decision makers, business leaders and managers, and technical practitioners from the US, UK, Ireland, and Germany.

To learn more, download the full 2022 State of AI and Machine Learning report.

About Appen Limited

Appen is the global leader in data for the AI Lifecycle. With over 25 years of experience in data sourcing, data annotation, and model evaluation by humans, we enable organizations to launch the world's most innovative artificial intelligence systems. Our expertise includes a global crowd of over 1 million skilled contractors who speak over 235 languages, in over 70,000 locations and 170 countries, and the industry's most advanced AI-assisted data annotation platform. Our products and services give leaders in technology, automotive, financial services, retail, healthcare, and governments the confidence to launch world-class AI products. Founded in 1996, Appen has customers and offices globally.


How Heinz's Agency Used Machine Learning To Prove Its Ketchup Is The Dominant Condiment – The Drum

Heinz has tapped into the cultural excitement around text-to-image machine learning programs to prove the market dominance of its tomato sauce. The Drum ketched up with the creatives behind the 2001: A Space Odyssey-inspired campaign.

The latest iteration of the Draw Ketchup campaign gave a machine, rather than the general public, the task of drawing ketchup. Lo and behold, more often than not it sketched *almost* correct iterations of Heinz's product, albeit with the Bizarro World twist we've become used to from not-quite-there machine learning programs.

The campaign from Canadian creative agency Rethink set out to strengthen consumer affinity. "Heinz is an icon," explains Mike Dubrick, executive creative director at Rethink. "But we don't want it to be a heritage brand."

Last year, Kraft-owned Heinz conducted a social experiment asking people across five continents to draw the red condiment. One year later, and building on the campaign's success, Rethink took it one step further.

Dubrick (not to be confused with director Stanley Kubrick, to whom this campaign nods) says: "So, like many of our briefs, the task was to demonstrate Heinz's iconic role in today's pop culture." Pitching the idea to the brand was next. "After the brief, we rarely wait until the formal presentation when we share something we think is great."

This idea was first pitched to the client informally by text before a formal presentation with a bit more meat on the bone was conducted, but Dubrick, excited by the idea, points out: "When you've got something powerful, why wait?"


After getting the go-ahead from Heinz, the next step of the creative process was to find the right tech for the job. The team landed on DALL-E 2, a new AI system that can create realistic images and art from a text description. There's been a buzz around such tools on social media these last few months.

"Once we had that, we were able to experiment and understand the capabilities," adds Dubrick. "We were getting to grips with machine learning and looking for ways to demonstrate that Heinz is ketchup. The AI gave us a completely unbiased opinion on the subject."

The team hilariously began putting the machine to the test, asking it to draw a "cow bus" and a "fry castle", before finally entrusting the bot to draw ketchup, resulting in pictures of what appears to be a Heinz bottle, albeit in different artistic styles.

As always though, relying on tech can sometimes go awry. We asked if there was a plan B. "No. It either worked or it didn't," adds Dubrick.

"We love solving problems throughout the creative process, but we also think you have to be willing to kill an idea when it's simply not going to work. Thankfully, this one did. Of course, there were a few funny and odd creations along the way. The platform is experimental, and we wanted to embrace that."

A campaign like this perhaps dispels the notion that a focus on tech can erode focus on the human aspect of creativity. "I think it enhances the human element. It allows the wildest, most imaginative creative thoughts that pop into people's heads to be transformed into vivid illustrations," he adds.

"And it does it in seconds. Like any creative tool, it's an opportunity for creative humans to do things they've never done before."



Machine learning forecasting for COVID-19 pandemic-associated effects on paediatric respiratory infections – Archives of Disease in Childhood

What is already known on this topic?

Across the world, the literature reports that non-COVID-19 respiratory diagnoses broadly declined during periods of government intervention in response to the COVID-19 pandemic.

Such general reductions in respiratory infection diagnoses run contrary to typical seasonal trends.

Research has predicted an increase in respiratory infections once government interventions and restrictions are removed.

What does this study add?

This study analyses respiratory infections observed at a specialist children's hospital during and after the implementation of restrictions resulting from the COVID-19 pandemic.

The results show a significant reduction in rates of major respiratory diagnoses during restrictions but further illustrate the variation in responses post-restrictions.

The study demonstrates how open-source, cross-domain, forecasting tools can be applied to routine health record activity data to provide evaluation of deviations from historical trends.

This study shows that, in our population, hypothesised excess post-COVID-19 respiratory syncytial virus infections did not occur, with implications for health policy planning.

The results indicate that rates for several respiratory infections continue to remain below typical pre-COVID-19 levels, and further research is required to model future effects.

The electronic health record data-based forecasting method, using cross-domain tools, is applicable to a range of health policy applications, including service usage planning and case surge detection.

The COVID-19 pandemic had a major impact on healthcare services, with significantly reduced service utilisation.1 In addition, the mitigation measures implemented, such as lockdowns, social distancing and personal protective/hygiene actions, have significantly reduced rates of other infectious agents, for example, transmission of norovirus.2 Previous pandemics, such as influenza, have demonstrated that associated public health measures can impact rates of other respiratory infections such as respiratory syncytial virus (RSV),3 and reduced rates of RSV infection and other respiratory pathogens have been reported in several countries during the COVID-19 pandemic.4–8

The value of routine electronic health record (EHR) data for research is increasingly recognised and has been highlighted by the pandemic,9–11 and the UK Government has recently published a data strategy emphasising the value of healthcare data for secondary purposes.12 The aim of this study is to analyse routine electronic patient record data from a specialist children's hospital to examine the effect of the COVID-19 pandemic mitigation measures on rates of seasonal respiratory infections, compared with expected rates, using an openly available, transferable machine learning model.

We performed a retrospective longitudinal study of coded respiratory disorder diagnoses made at Great Ormond Street Hospital for Children (GOSH), a specialist paediatric hospital in London that typically receives 280,000 patient visits per year and includes several large paediatric intensive care units.

The respiratory disorder data were extracted and aggregated from the Epic patient-level EHR and legacy clinical data warehouses13 using a bespoke Digital Research Environment.14 Diagnoses were labelled with codes from the International Statistical Classification of Diseases and Related Health Problems 10th Revision (ICD-10).15 All diagnoses from inpatients and outpatients recorded between 1 January 2010 and 28 February 2022 were collected for the study.

The diagnosis rates and trends of four respiratory disease categories that are reported to be particularly prevalent during the UK winter were analysed in this study (Respiratory Infection due to the Respiratory Syncytial Virus (RSV), Respiratory Infection due to the Influenza Virus, Acute Nasopharyngitis due to any Virus and Acute Bronchiolitis due to any Virus (excluding RSV)). In addition, diagnoses were aggregated into categories based on respiratory hierarchical groupings of ICD-10 to provide a wider picture of diagnosis rates and seasonal trends15 (the full list of associated ICD-10 codes for each aggregated category is shown in online supplemental table 1).

Each diagnosis category was divided into three time periods, corresponding to before, during and after the enforcement of national restrictions in England in response to the COVID-19 pandemic. The prerestriction period was designated as 1 January 2010 to 25 March 2020. The during-restriction period was designated from 26 March 2020 (the date The Health Protection (Coronavirus, Restrictions) (England) Regulations legally came into force) to 18 July 2021. The postrestriction period was taken from 19 July 2021 (the date The Health Protection (Coronavirus, Restrictions) (Steps etc.) (England) Regulations were revoked) to 28 February 2022.16 England was subject to a range of interventions in the period during restrictions. At their most stringent, these restrictions included full national lockdowns, where meeting others was disallowed and it was a legal offence to leave one's place of living except for a small range of essential activities. Conversely, at their least stringent, the restrictions permitted gatherings of up to 30 people and required only face coverings in enclosed spaces and minor personal social distancing measures.

All analysis and modelling for this study were carried out using the R programming language.17

All data were deidentified using the established digital research environment mechanisms, with analysis carried out in a secure virtual environment; no data left the hospital during the study.

For each respiratory disorder diagnosis category, data for the cohort of patients with an associated ICD-10 diagnosis were extracted, and the start date of the period of diagnosis was identified. The daily diagnosis frequency (diagnoses/day) was calculated for each diagnosis category by aggregating the diagnosis dates of all patients with a diagnosis in the category across the period.

The diagnosis rate data were sparse for some categories; therefore, a 30-day moving average filter18 with a centre-aligned, rectangular window was applied to the raw diagnosis frequency series to provide an averaged representation of the diagnosis rate trends, $y(t)$, that was used for the subsequent analysis and modelling.
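To make the smoothing step concrete, here is a minimal sketch in R (the language the authors state they used); the `daily` data frame and its `diagnoses` column are hypothetical names for illustration, not taken from the study's published code:

```r
# 30-day moving average with a centre-aligned, rectangular (equal-weight) window.
# `daily` is assumed to contain one row per calendar day, with `diagnoses`
# zero-filled on days with no recorded diagnosis.
moving_average <- function(x, window = 30) {
  stats::filter(x, rep(1 / window, window), sides = 2)  # sides = 2 => centred
}

daily$rate_smoothed <- as.numeric(moving_average(daily$diagnoses))
```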

To understand the impact of restrictions on GOSH diagnosis rates for each category, a statistical model for the typical trend was built from the diagnosis rate trends for the prerestrictions period using the Prophet forecasting procedure.19 Prophet is a robust, open source tool that fits additive and multiplicative seasonal models to time-series data that have strong cyclical/seasonal effects. With Prophet, an input time-series is decomposed into a non-periodic trend that changes non-linearly over time, multiple periodic seasonalities, an irregular holiday effect and a noise signal. Prophet fits the model to the input time-series within the Bayesian statistical inference framework with Markov chain Monte Carlo (MCMC) sampling implemented in the Stan programming language.19

For this study, the diagnosis rate model was designed as a multiplicative model, as follows:

$$y(t) = g(t)\, s(t)\, \epsilon(t)$$

where $y(t)$ is the diagnosis rate time series, $g(t)$ is the non-periodic trend modelled as a piecewise linear trend with changepoints, $s(t)$ is the annual periodic seasonal trend modelled as a five-term Fourier series, and $\epsilon(t)$ is a normally distributed model error function. A multiplicative model, whereby the trends and seasonalities are multiplied together to model the time-series, was used because diagnosis rates clearly showed annual seasonality to be approximately proportional to the overall trend. Details of the implementation of $g(t)$ and $s(t)$ are available elsewhere.19

The multiplicative model was log-transformed and implemented as the following additive model:

$$\widetilde{\log}\, y(t) = \widetilde{\log}\, g(t) + \widetilde{\log}\, s(t) + \widetilde{\log}\, \epsilon(t), \qquad \widetilde{\log}(x) = \log(x + \delta)$$

where $x$ is the input diagnosis rate and $\widetilde{\log}(x) = \log(x + \delta)$ approximates the log transformation while remaining finite for zero-valued $x$, for an arbitrarily small constant $\delta$.
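As a sketch of how this can be specified with the prophet R package: the input frame is assumed to follow Prophet's `ds`/`y` column convention and to be restricted to the prerestrictions period, and the values of $\delta$ and `mcmc.samples` below are illustrative choices, not parameters reported by the study:

```r
library(prophet)

delta <- 1e-3                  # illustrative small constant; keeps the log finite at zero
df$y  <- log(df$y_raw + delta) # log-transform makes the multiplicative model additive

m <- prophet(
  df,                          # columns: ds (date), y (transformed diagnoses/day)
  growth = "linear",           # piecewise linear trend with changepoints
  yearly.seasonality = 5,      # five-term Fourier series for the annual cycle
  weekly.seasonality = FALSE,
  daily.seasonality = FALSE,
  mcmc.samples = 300           # illustrative; enables full Bayesian MCMC sampling in Stan
)

future <- make_future_dataframe(m, periods = 365)  # forecast one year ahead
fc     <- predict(m, future)
fc$yhat_rate <- exp(fc$yhat) - delta               # back-transform to diagnoses/day
```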

To quantify the degree of seasonality in each diagnosis category, a Seasonality Amplitude score was calculated from the Prophet model generated for each diagnosis category. The score, $S_a$, was calculated as the ratio of the peak-to-peak amplitude, $A_{pp}$, and the peak amplitude, $A_p$, of the model forecast for the year immediately prior to the introduction of restrictions:

$$S_a = \frac{A_{pp}}{A_p}$$
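Continuing the hypothetical `fc` frame from the sketch above, the score is a few lines of arithmetic over the forecast window (the dates here illustrate the year before 26 March 2020):

```r
# Model forecast for the year immediately prior to the introduction of restrictions.
window <- as.Date(fc$ds) >= as.Date("2019-03-26") & as.Date(fc$ds) < as.Date("2020-03-26")
yhat   <- fc$yhat_rate[window]

A_pp <- max(yhat) - min(yhat)  # peak-to-peak amplitude
A_p  <- max(yhat)              # peak amplitude
S_a  <- A_pp / A_p             # seasonality amplitude score (near 1 = strongly seasonal)
```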

To understand the significance of any deviation in the observed diagnosis rate from that predicted by the Prophet models, discrete daily z-scores were calculated, as follows:

$$z_i = \Phi^{-1}\big(P(Y_i \le y_i)\big)$$

where $z_i$ is the i-th observed diagnosis rate z-score, $y_i$ is the i-th observed diagnosis rate, $Y_i$ is the random variable defining the i-th value of the posterior predictive distribution from the raw MCMC samples in Prophet, and $\Phi^{-1}$ is the mapping of probability quantiles to z-scores.
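Given the raw draws, each daily z-score is an empirical quantile pushed through the inverse normal CDF. A sketch in R, where `samples` is a hypothetical matrix of posterior predictive draws laid out as rows = MCMC draws and columns = days (derivable, for example, from prophet's posterior sampling utilities):

```r
# y_obs: vector of observed daily diagnosis rates for the days being scored.
z_scores <- vapply(seq_along(y_obs), function(i) {
  p <- mean(samples[, i] <= y_obs[i])  # empirical P(Y_i <= y_i)
  p <- min(max(p, 1e-6), 1 - 1e-6)     # clamp to avoid infinite z-scores at the extremes
  qnorm(p)                             # map the probability quantile to a z-score
}, numeric(1))
```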

Data from 30,199 patients with a diagnosis from Chapter X ('Diseases of the respiratory system') of ICD-10 at the centre between 1 January 2010 and 28 February 2022 were included in the study, with a total of 141,003 diagnosis records in the dataset (including repeats). Full summary statistics for the study population are shown in table 1.

Table 1: Summary characteristics for the population of diagnoses analysed in the study

A total of 1,060 diagnoses of RSV, 471 diagnoses of Influenza, 2,214 diagnoses of Acute Nasopharyngitis and 1,568 diagnoses of Acute Bronchiolitis (excl. RSV) were made across the period of study. Online supplemental table 1 shows the patient cohort summary for these diagnosis categories during the three time periods, in addition to those from the ICD-10 hierarchy.

The 30-day moving average diagnosis rates for the respiratory disorder diagnosis categories are shown in figure 1. The four diagnosis rate plots for the respiratory disorder diagnosis categories show clear seasonal trends and exhibit peaks in winter months and troughs in summer months.

Figure 1: Diagnosis frequency plots for the four commonly seasonal respiratory disease categories. The blue line shows the observed 30-day moving average of daily diagnosis rate between 2010 and 2022. The vertical dark red lines define the start and end of widespread restrictions in response to the COVID-19 pandemic in England, UK. The light red sections show the three periods of national lockdowns.

For RSV, the prerestrictions period maximum diagnosis frequency was 1.8 diagnoses/day. During the restrictions period, the maximum was 0.17 diagnoses/day, representing a 91% reduction. These results are shown for the other categories in table 2.
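The quoted figure follows directly from the two peak rates:

$$\frac{1.8 - 0.17}{1.8} \approx 0.906 \approx 91\%$$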

Table 2: Peak diagnosis rate values for the respiratory disease categories across the three time periods: prerestrictions, during restrictions and postrestrictions

The Prophet seasonal model was calculated for all diagnosis categories based on the prerestriction period (figure 2, table 3). The seasonality amplitudes of all four seasonal diagnosis categories were greater than 0.5, demonstrating notable seasonality. Additionally, three respiratory infection categories from the ICD-10 hierarchy (acute upper respiratory infections, influenza and pneumonia, and other acute lower respiratory infections) were found to have seasonality amplitudes greater than 0.5. All categories had their seasonal peak identified between 26 November and 30 January annually (online supplemental table 2).

Figure 2: Diagnosis frequency forecast plots for the four seasonal respiratory disease categories: (A) RSV, (B) influenza, (C) acute nasopharyngitis and (D) acute bronchiolitis (excl. RSV), and three seasonal ICD-10 categories: (E) acute upper respiratory infections, (F) influenza and pneumonia and (G) other acute lower respiratory infections. In the diagnosis frequency plots, the blue line shows the observed 30-day moving average of daily diagnosis rate between 2010 and 2022. The white line shows the seasonal model forecast with the light blue 95% CIs. In the z-score plots, the blue line shows the observed diagnosis rate z-score against the forecast model. The light blue section shows the range for absolute z-score of less than 1.96 (95% CI). The vertical red lines define the start and end of widespread legal restrictions in response to the COVID-19 pandemic in England, UK. The light red sections show the three periods of national lockdowns. Specifically, note the marked reduction in rates for all respiratory infection groups during the pandemic restriction period but also the greater than expected rates for the period immediately postrestrictions relating to rising RSV infection rates. RSV, respiratory syncytial virus.

Table 3: Forecast and observed number of diagnoses for each respiratory disease category in the during- and postrestrictions periods

Comparing observed diagnoses with forecast diagnoses across the restriction period for the four seasonal diagnoses, all showed a greater than 50% reduction from expected rates. This included a 73%, 84%, 70% and 55% reduction for RSV, influenza, acute nasopharyngitis and acute bronchiolitis (excl. RSV), respectively. These categories also had significant negative minimum z-scores of less than −10.0 during the restrictions period.

Across the restrictions period, there was a general reduction of 26% in all Diseases of the Respiratory System (J00–J99). Of the ICD-10 hierarchy categories considered in the study, all reduced against forecast rates except 'Influenza and pneumonia' (which contains pneumonia as the result of coronavirus infections) and the aggregated category 'Other non-infectious diseases of the respiratory system'. All categories had negative minimum z-scores of less than −2.0 (outside the 95% CI); however, values were generally closer to zero than observed for the typically seasonal categories.

During the postrestriction period, there were large differences in diagnosis categories' responses to the lifting of restrictions. Most categories have returned to, and remained, in line with prerestriction forecasts; however, some have not. RSV diagnosis rates rose most notably and were found to be consistently and significantly above the prerestrictions modelled forecast (maximum z-score 8.13), but subsequently returned to within forecast by the end of winter 2021/2022 (absolute z-score <2.0). Additionally, both influenza and acute nasopharyngitis categories continue to show significantly reduced diagnosis rates in comparison with prerestrictions forecasts (z-scores −4.0 and −2.9, respectively).

In this study we have demonstrated, first, that mitigation and prevention measures put in place during the COVID-19 pandemic period were associated with significant reductions in the rates of children with a diagnosis of specific respiratory infections, particularly due to RSV, influenza, acute nasopharyngitis and acute bronchiolitis, at a large children's hospital in England, UK. Furthermore, the removal of prevention measures has resulted in widely varied responses in subsequent months. Second, we demonstrate the feasibility of applying an openly available machine learning forecasting model from another domain to routine electronic healthcare data within a secure digital hospital environment. Third, we use our method in analysing the seasonality of respiratory infections to showcase the potential of applying this model to clinical phenomena that are cyclical (eg, seasonal/diurnal). Our findings are consistent with known epidemiological data, suggesting robustness of the approach. Finally, the use of such a forecasting tool can identify unexpected deviations from normal, in this case rates of RSV infection rising beyond expected levels in mid-to-late 2021, allowing modelling of the likely peak in future months and hence aiding resource planning and public health measures. Again, the potential utility of this approach extends beyond the seasonality of respiratory infection alone.

The almost complete absence of the seasonal RSV infection pattern during the COVID-19 pandemic has been previously reported internationally,4 7 20 with larger than expected numbers susceptible postpandemic,21 and, based on simulated trajectories from past data, significant RSV outbreaks had been predicted for the winter of 2021–22.22 23 Indeed, a resurgence of RSV infections above normal levels and at different times of the season has been reported in several countries.24 25 The data presented here confirm the significant reduction in RSV and other acute respiratory infections in London during the restriction period and further confirm greater than normal (predicted) rates occurring immediately following the lifting of restrictions. However, the peak diagnosis frequency rate was largely equal to that predicted for a typical winter, based on our machine learning modelling, and by 28 February 2022 had returned to within the expected range. All other seasonal respiratory infection categories studied exhibited similar suppression in diagnoses during the restrictions period; however, (unlike RSV) they have all seen within- or below-forecast diagnosis rates postrestrictions. GOSH does not have an emergency department and is unique in relation to its patient population among children's hospitals in the UK. Our absolute numbers of diagnoses for different respiratory infections including RSV are relatively low compared with district general hospitals, though the same seasonal and restrictions-related effects have been widely observed.4 7 26 Despite this, the model was still able to forecast expected trends and deviations from previous years.

The results for diagnosis rate and number observed during winter 2021/2022, relative to forecast (particularly for RSV), are contrary to some previously published suggestions that a lack of population immunity, due to the absence of cases during restrictions, would lead to increased disease prevalence. Further study is required to explore whether this finding is observed in larger, less selective populations as global restrictions are fully removed. However, if replicated elsewhere, these findings could imply that elevated infection rates pose less of a risk of further increases in health service demand during a period when services are recovering from pandemic-related delays.

The study illustrates the value of using routine healthcare data for secondary analyses within a bespoke data infrastructure based around well-defined data definitions and data models allowing data harmonisation, combined with the use of open and commonly used analytic tools such as R and Python,17 27 within a cloud-based trusted research environment allowing secure and auditable collaborative data analysis of non-identifiable data. This approach supports transferability to other organisations, and all code is available at https://github.com/goshdrive/seasonality-analysis-forecasting.

By applying a seasonal forecasting model28 to diagnosis data, we show how it is possible to generate forecasts with narrow confidence intervals from routine healthcare data, even when the underlying healthcare indicators are highly variable throughout a periodic cycle and/or involve moving year-on-year trends. By using a forecasting model that explicitly includes cyclical components described as a Fourier series, instead of a more generalised machine learning model, the library was able to tightly model the data with few parameters requiring domain-specific configuration. Specifically, these results were achieved by setting just three parameters specific to the indicators being studied. For this reason, the Prophet forecasting model has been successfully used in diverse areas including finance,29 temperature prediction,30 cloud computing resource requirements31 and predicting COVID-19 infection rates.32 33

In conclusion, these data, based on routine EHR data combined with cross-domain time-series forecasting machine learning tools, demonstrate the near-complete absence of seasonal acute respiratory infection-related diagnoses in a specialist children's hospital during the period of the COVID-19 pandemic mitigation measures in 2020 and 2021. In addition, the data show an earlier-than-usual spike in RSV infection in 2021, whose peak nonetheless remained within the forecast range. The study illustrates the value of curated real-world healthcare data to rapidly address clinical issues, in combination with the use of openly available machine learning tools, which can be applied to a range of scenarios involving forecasting cyclical time-series data.

No data are available. No individual participant data will be available.

Not applicable.

The use of such routine deidentified data for this study was approved under REC 17/LO/0008.


Artificial Intelligence In Drug Discovery Global Market Report 2022: Rise in Demand for a Reduction in the Overall Time Taken for the Drug Discovery…

DUBLIN--(BUSINESS WIRE)--The "Artificial Intelligence (AI) In Drug Discovery Global Market Report 2022, By Technology, By Drug Type, By Therapeutic Type, By End-Users" report has been added to ResearchAndMarkets.com's offering.

The global artificial intelligence (AI) in drug discovery market is expected to grow from $791.83 million in 2021 to $1,042.30 million in 2022, at a compound annual growth rate (CAGR) of 31.6%. The market is expected to reach $2,994.52 million in 2026, at a CAGR of 30.2%.
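Those figures are mutually consistent under the standard compound-growth identity, $\mathrm{CAGR} = (V_{\mathrm{end}}/V_{\mathrm{start}})^{1/n} - 1$:

$$791.83 \times 1.316 \approx 1042, \qquad 1042.30 \times (1.302)^{4} \approx 2995 \;(\$\text{M; small rounding differences})$$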

The artificial intelligence (AI) in drug discovery market consists of sales of AI for drug discovery and related services. Artificial intelligence (AI) for drug discovery is a technology that uses machine simulation of human intelligence processes to tackle complex problems in the drug discovery process. It helps find new molecules, identify drug targets and develop personalized medicines in the pharmaceutical industry.

The main technologies in artificial intelligence (AI) in drug discovery are deep learning and machine learning. Deep learning is a machine learning and artificial intelligence (AI) technique that mimics how humans acquire knowledge. Data science, which covers statistics and predictive modelling, incorporates deep learning as a key component.

The drug types covered include small molecules and large molecules, spanning therapeutic areas such as metabolic disease, cardiovascular disease, oncology and neurodegenerative diseases, among others. The technology is implemented by several end-user groups, including pharmaceutical companies, biopharmaceutical companies, and academic and research institutes.

The rise in demand for a reduction in the overall time taken for the drug discovery process is a key driver propelling the growth of the artificial intelligence (AI) in drug discovery market. Traditionally, it takes three to five years for animal models to identify and optimize molecules before they are evaluated in humans, whereas AI-based start-ups have been identifying and designing new drugs in a matter of days or months.

For instance, in 2020, the British start-up Exscientia and Japan's Sumitomo Dainippon Pharma used artificial intelligence to produce an obsessive-compulsive disorder (OCD) medication, decreasing the development time from four years to less than one. This reduction in the overall time taken for drug discovery drives the market's growth.

The shortage of skilled professionals is expected to hamper the AI in drug discovery market. Employees have to retrain or learn new skill sets to work efficiently with complex AI systems and get the desired results for drug candidates. This skills shortage acts as a major hindrance to drug discovery through AI, discouraging companies from adopting AI-based systems for drug discovery.

Scope

Markets Covered:

1) By Technology: Deep Learning; Machine Learning

2) By Drug Type: Small Molecule; Large Molecules

3) By Therapeutic Type: Metabolic Disease; Cardiovascular Disease; Oncology; Neurodegenerative Diseases; Others

4) By End-Users: Pharmaceutical Companies; Biopharmaceutical Companies; Academic And Research Institutes; Others

Key Topics Covered:

1. Executive Summary

2. Artificial Intelligence (AI) In Drug Discovery Market Characteristics

3. Artificial Intelligence (AI) In Drug Discovery Market Trends And Strategies

4. Impact Of COVID-19 On Artificial Intelligence (AI) In Drug Discovery

5. Artificial Intelligence (AI) In Drug Discovery Market Size And Growth

6. Artificial Intelligence (AI) In Drug Discovery Market Segmentation

7. Artificial Intelligence (AI) In Drug Discovery Market Regional And Country Analysis

8. Asia-Pacific Artificial Intelligence (AI) In Drug Discovery Market

9. China Artificial Intelligence (AI) In Drug Discovery Market

10. India Artificial Intelligence (AI) In Drug Discovery Market

11. Japan Artificial Intelligence (AI) In Drug Discovery Market

12. Australia Artificial Intelligence (AI) In Drug Discovery Market

13. Indonesia Artificial Intelligence (AI) In Drug Discovery Market

14. South Korea Artificial Intelligence (AI) In Drug Discovery Market

15. Western Europe Artificial Intelligence (AI) In Drug Discovery Market

16. UK Artificial Intelligence (AI) In Drug Discovery Market

17. Germany Artificial Intelligence (AI) In Drug Discovery Market

18. France Artificial Intelligence (AI) In Drug Discovery Market

19. Eastern Europe Artificial Intelligence (AI) In Drug Discovery Market

20. Russia Artificial Intelligence (AI) In Drug Discovery Market

21. North America Artificial Intelligence (AI) In Drug Discovery Market

22. USA Artificial Intelligence (AI) In Drug Discovery Market

23. South America Artificial Intelligence (AI) In Drug Discovery Market

24. Brazil Artificial Intelligence (AI) In Drug Discovery Market

25. Middle East Artificial Intelligence (AI) In Drug Discovery Market

26. Africa Artificial Intelligence (AI) In Drug Discovery Market

27. Artificial Intelligence (AI) In Drug Discovery Market Competitive Landscape And Company Profiles

28. Key Mergers And Acquisitions In The Artificial Intelligence (AI) In Drug Discovery Market

29. Artificial Intelligence (AI) In Drug Discovery Market Future Outlook and Potential Analysis

30. Appendix

Companies Mentioned

For more information about this report visit https://www.researchandmarkets.com/r/43bdop


Auburn University research team receives grant to study coastal resilience along Gulf of Mexico – Office of Communications and Marketing

An Auburn University research team in the College of Forestry, Wildlife and Environment has been awarded a grant of $450,000 to develop a holistic platform that integrates multiscale observations, machine learning and systems modeling for coastal monitoring, assessment and prediction, or Coast-MAP, of ecosystem health, water resources and social resilience.

Led by Shufen Susan Pan, lead principal investigator, the team will consider multiple stressors including climate change, floods and droughts, hurricanes, land use, urbanization, nutrient uses, sewage and nutrient loads.

The Gulf of Mexico has been experiencing increased impacts from persistent climate stressors, including frequent floods, intense hurricanes and accelerating sea level rise, and is likely to undergo further rapid climate change in the coming years.

"To address the combined effects of multiple stresses and to improve predictability, there is a critical need for methodological advancements that integrate multiple layers of geographic information and pursue a science-based approach to monitoring, understanding, predicting and responding to changes in coupled social-ecological systems along the Gulf of Mexico," said Pan.

As director of the colleges GIS and Remote Sensing Laboratory, Pan has used emerging technologies in geospatial modeling, computer simulation, satellite observation and AI/machine learning to monitor, assess and predict, or MAP, multiscale dynamics of coupled social-ecological systems in the context of climate and global environmental change.

To achieve their goal, the team, comprising Pan and co-principal investigators Christopher Anderson, Hanqin Tian and the University of Alabama's Wanyun Shao, has proposed four objectives.

"First, we will evaluate the contemporary states of ecosystem health, water resources and social resilience through ground and satellite observations, machine learning and geospatial mapping," said Pan. "We then will assess and attribute the impacts over the past 30 years of multiple stresses on ecosystem health and water resources."

The team will predict potential impacts of climate and land use changes on ecosystem health and water resources in the next 30 years, as well as work to improve the understanding of the effectiveness of specific resilience-based assessments and decision-making tools with stakeholders.

"The methods and metrics will be used to measure coastal resilience in ways that are context specific, validated with observed data and ground-truthed via stakeholder participation," said Pan.

In addition to numerous other methods, the team hopes to achieve its goals by holding stakeholder workshops to learn about stakeholders' risk perceptions of future climate conditions; assessing the impacts of multiple stresses on the ecosystem and water resources of the Alabama gulf; and collecting remote sensing observations from multiple sources to monitor the functions of different terrestrial ecosystems in Alabama's gulf.

"The work Pan and her team of researchers are conducting will help us to predict and respond to changes in coupled social-ecological systems along the Gulf of Mexico," said Janaki Alavalapati, dean of the College of Forestry, Wildlife and Environment. "This science-based approach will help predict potential impacts of climate and land use changes on ecosystem health and water resources."


AI for Ukraine is a new educational project from AI HOUSE to support the Ukrainian tech community – KDnuggets

Sponsored Post

AI for Ukraine is a series of workshops and lectures held by international artificial intelligence experts to support the development of Ukraine's tech community during the war. Yoshua Bengio (MILA/U. Montreal), Alex J. Smola (Amazon Web Services), Sébastien Bubeck (Microsoft), Gaël Varoquaux (INRIA), and many other well-known specialists have joined the initiative. This is a non-commercial educational project by AI HOUSE, a company focused on building the AI/ML community in Ukraine and part of the Roosh tech ecosystem. All proceeds collected upon registration will be donated to the biggest Ukrainian charity fund, Come Back Alive.

It's been five months of a completely new reality for every single Ukrainian, one with sirens, bombings, pain, and war. The AI community has also changed a lot, with many now on the front line and others dedicated to volunteer work. One thing is certain: this war will end with Ukraine's victory, after which the country will need to be rebuilt in every aspect.

"War is one of the most tragic types of collective human behavior, and democracy is a defense against tyranny and the key to improving people's lives. It is essential to maintain and develop the flame of research and knowledge even in such a dark period, thinking about the post-war times and the importance of science and innovation to achieve progress." comments Yoshua Bengio, famous computer scientist who received the Turing Award in 2018 and is called as one of the Godfathers of AI.

The global AI community has been and is continuing to actively support Ukraine and its tech sector. The AI for Ukraine project is aimed at connecting international experts with the local AI community, sharing insight, and helping Ukraine on its journey of becoming Europes AI hub in the near future.

"Ukraine must continue its path to a modern democratic country, economically prosperous, with free and educated people. Supporting the AI community in Ukraine will contribute to the development of the economy, increasing the value of local talent." adds Gal Varoquaux, ML-researcher and data scientist.

AI for Ukraine aims to:

"In the Roosh ecosystem, AI HOUSE is responsible for developing educational programs and network-building that helped to implement the AI for Ukraine initiative.

We develop artificial intelligence in Ukraine in all aspects, the most fundamental of which is education. Education is the foundation and driving force for every professional who seeks success. Thus, our goal in creating this initiative is primarily to provide access to unique knowledge from the best experts in the industry. We gathered world-renowned specialists who are keen to help Ukraine and its AI community. Together, we will be able to promote the field of artificial intelligence in Ukraine at a qualitatively new level and form conditions for the further development of technological talents," emphasizes Serhii Tokarev, Founding Partner at Roosh.

Professors from Stanford, Cornell, Berkeley, and other renowned educational institutions, along with engineers and specialists from leading IT companies like Amazon, Samsung AI, Microsoft, and Hugging Face, have all joined to support the initiative and host educational sessions.

Yoshua Bengio – Professor at the University of Montreal, founder and scientific director of the Quebec Institute of Artificial Intelligence, head of the CIFAR Learning in Machines & Brains program, and one of the leading experts in the field of AI. In 2019, Yoshua was awarded the prestigious Killam Prize and in 2022 became the computer scientist with the highest h-index in the world. He is best known for his pioneering work in deep learning, earning him the 2018 A.M. Turing Award, the "Nobel Prize of Computing", with Geoffrey Hinton and Yann LeCun.

Alex J. Smola – VP of machine learning at Amazon Web Services, Professor at Carnegie Mellon University, and one of the world's top experts in the field of machine learning. Alex is also a prolific and widely cited author in the academic research community, having authored and contributed to nearly 500 papers.

Sébastien Bubeck – Sr. Principal Research Manager, leading the Machine Learning Foundations group at Microsoft Research Redmond. He has won several awards at machine learning conferences for his work on online decision making, convex optimization, and adversarial robustness.

Gaël Varoquaux – Research director working on data science and health at Inria (the French national institute for computer science research). His research focuses on statistical-learning tools for data science and scientific inference. He co-founded scikit-learn, one of the reference machine-learning toolboxes, and helped build various central tools for data analysis in Python.

The educational sessions are available to all and can be accessed via the AI for Ukraine website. Upon registration, participants will be required to make a deposit of any amount ($1 minimum) to receive full access to all upcoming lectures and workshops. All collected funds will be donated to Ukraine's largest charity fund, comebackalive.in.ua.

In a series of online lectures and workshops, speakers will cover current AI topics. Some of the topics covered in the hands-on workshops include compression of models for deep learning, AutoML, evaluation of machine learning models and their diagnostic value, and much more. Participants will have the opportunity to ask questions, interact with the lecturers, and join discussions.

The first lecture will be held on the 17th of August, with Yoshua Bengio talking about "Bridging the gap between current deep learning and higher-level human cognitive abilities".

AI HOUSE is the AI community in Ukraine that brings together talents, experts, and investors and provides quality education in artificial intelligence and machine learning. We research the most exciting and prospective vectors of AI and ML and establish a partnership with key stakeholders locally and worldwide.

Roosh is a Ukrainian company that creates and invests in AI- and ML-focused projects. Its business ecosystem is composed of the venture studio Pawa, venture firm Roosh Ventures, Ukrainian AI/ML community AI HOUSE, tech university SET University, and the startups Reface and ZibraAI. Roosh aims to establish Ukraine as the European center for artificial intelligence.

aiforukraine.aihouse.club

Point of contact: Maryna Chernysh, PR Manager at Roosh (mc@roosh.tech)


Adept Builds a Powerful AI Teammate for Everyone with Oracle and NVIDIA – PR Newswire

Flexible natural-language computing interface built on Oracle Cloud Infrastructure (OCI) and NVIDIA technology enables people and computers to work together creatively

OCI's high performance and consumption-based pricing helps Adept scale

AUSTIN, Texas, Aug. 11, 2022 /PRNewswire/ -- Adept, a machine learning research and product lab, is using Oracle Cloud Infrastructure (OCI) and NVIDIA technology to develop a universal AI teammate capable of performing a range of tasks people execute on their computers or on the internet. Running thousands of NVIDIA GPUs on clusters of OCI bare metal compute instances and taking advantage of OCI's network bandwidth, Adept can train large-scale AI and ML models faster and more economically than before. As a result, Adept has been able to rapidly advance its general intelligence roadmap and develop its first product: a rich language interface for the tools knowledge workers use every day to be productive and creative.

A Highly Scalable, Performant, and Cost-Effective Platform for AI Innovation

With OCI as its preferred cloud platform, Adept obtains the scale and high performance necessary to run massive AI models without excessive compute costs. This has enabled Adept to develop a highly flexible and dynamic natural-language interface for all software that significantly streamlines the tasks knowledge workers execute daily. As a result, users can ask their computer to perform tedious, difficult, or abstract functions, as well as use the interface to test creative ideas.

To fully support Adept with the compute capacity it required, Oracle and NVIDIA customized their offerings to ensure Adept had access to the thousands of NVIDIA A100 Tensor Core GPUs needed to train its complex models. Adept, which recently closed a $65 million funding round, is training a giant AI model on OCI using NVIDIA's most powerful A100 GPUs connected with a best-of-breed RoCE network powered by NVIDIA NICs.

"AI continues to rapidly grow in scope but until now, AI models could only read and write text and images; they couldn't actually execute actions such as designing 3D parts or fetching and analyzing data," said David Luan, chief executive officer, Adept. "With the scalability and computing power of OCI and NVIDIA technology, we are training a neural network to use every software application, website, and API in existence building on the capabilities that software makers have already created. The universal AI teammate gives employees an 'extra set of hands' to create as fast as they think and reduce time spent on manual tasks. This in turn will help their organizations become more productive, and nimble in their decision making."

"Adept has exciting, bold ambitions for the future of AI, and we're honored that the company's team of AI and ML trailblazers recognized OCI's ability to support highly innovative and compute-heavy projects like Adept's universal AI assistant," said Karan Batta, vice president, product management, OCI. "With the combined computing power of OCI and NVIDIA, innovators like Adept are poised to unleash the full potential of AI as a technology that can transform how work is done and make every knowledge worker in the world much more productive."

"With brilliant minds from DeepMind, OpenAI, and other AI and ML pioneers, Adept is building the next generation of user interfaces for software applications," said Kari Briski, vice president, AI and high-performance computing (HPC) software development kits, NVIDIA."By workingwith Oracle to provide Adept with an industry-leading GPU engine and a wide range of AI and ML software tools, we're making innovative AI systems possible."

OCI Powers Next-Generation AI Models

OCI's bare metal NVIDIA GPU instances offer startups like Adept an HPC platform for applications that rely on machine learning, image processing, and massively parallel HPC jobs. In addition, HPC on OCI provides the elasticity and consumption-based costs of the cloud, offering on-demand potential to scale tens of thousands of cores simultaneously. As a result, with HPC on OCI, customers like Adept gain access to powerful processors, fast and dense local storage, high-throughput ultra-low-latency RDMA cluster networks, and the tools to automate and run jobs seamlessly.


About Oracle

Oracle offers integrated suites of applications plus secure, autonomous infrastructure in the Oracle Cloud. For more information about Oracle (NYSE: ORCL), please visit us at http://www.oracle.com.

Trademarks

Oracle, Java, and MySQL are registered trademarks of Oracle Corporation.

###

SOURCE Oracle


Quantum computing: Realising the potential – Verdict

Quantum computing is becoming more and more prevalent. IBM already runs its own quantum computing service, Qiskit, and Google offers its Sycamore quantum processors to research scientists with approved projects. Potential applications include cryptography, financial modelling, and logistics optimization.

The advance of quantum computers threatens to undermine the security of current cryptography. The encryption methods currently used by banks rely on multiplying massive prime numbers together and using them in key exchanges to secure bank details. For a classical computer, this would take trillions of years to crack. However, with Shor's algorithm (a quantum algorithm for finding the prime factors of an integer), a quantum computer could crack the commonly used 2048-bit RSA encryption in just 10 seconds.
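For intuition, here is the textbook toy version of the arithmetic the article describes, with two-digit primes standing in for the enormous ones used in practice; the security of the real 2048-bit scheme rests on how hard it is to recover p and q from N alone, which is exactly the step Shor's algorithm makes fast:

$$p = 61,\quad q = 53,\quad N = pq = 3233,\quad \varphi(N) = 60 \times 52 = 3120,\quad e = 17,\quad d = 2753 \;\;(17 \times 2753 \equiv 1 \bmod 3120)$$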

As a result of this advancement, SSH, which supports Windows and IBM platforms, released OpenSSH 9 this year, an open-source implementation that uses a hybrid post-quantum Streamlined NTRU Prime + X25519 key exchange. The hybrid scheme mixes a quantum-vulnerable algorithm with a post-quantum algorithm by combining key material agreed by both of them.
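On a client running OpenSSH 9.x, this hybrid exchange can be pinned explicitly in ~/.ssh/config; note that OpenSSH 9.0 already negotiates the method by default, so the line below is illustrative rather than required:

```
# ~/.ssh/config: prefer the hybrid Streamlined NTRU Prime + X25519 key exchange
Host *
    KexAlgorithms sntrup761x25519-sha512@openssh.com
```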

This will prevent "capture now, decrypt later" attacks, where hackers record and store ciphertext to be decrypted by quantum machines later. "Post-quantum" refers to a world where quantum computers are commonplace, and the release demonstrates that encryption companies are already aware of the risks and are starting to implement countermeasures to ensure client security.

One key difference between classical and quantum computers is that while the former would have to double the number of transistors working on a problem to double its power, the latter only needs one more qubit (or quantum bit, which is a basic unit of quantum information). As a result, complexity-heavy problems such as models that process large sets of variables to optimize portfolios could be tackled much more easily and quickly.
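The asymmetry is easiest to see in the state vector itself: n qubits carry complex amplitudes over 2^n basis states, so a single extra qubit doubles the space a computation works in, where a classical machine would need double the hardware:

$$|\psi\rangle = \sum_{i=0}^{2^{n}-1} \alpha_i\,|i\rangle, \qquad \sum_{i}|\alpha_i|^2 = 1$$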

Quantum computing is especially good at combinatorial optimization, which allows for faster searches of optimal solutions. An example of where this would be useful is in helping players select the highest bandwidth path across a network, which is very helpful for algorithmic traders involved in bandwidth trading.

Finally, according to IBM, quantum computing could even forecast financial crashes, which would lead to far greater global economic stability.

Quantum computing could even improve problems in logistics and supply chains. These have become more complex, especially with Covid-19 leading to further unexpected errors. Managers have to accurately predict demand, ensure that they have the right supply levels to avoid inventory space waste, and move products in the fastest and most agile way.

This is where quantum computing comes in. Constrained optimization addresses these problems, but even the basic traveling salesperson problem has 87 billion possible routes for just 15 stops, so analysts currently have to compress the information or use only part of the dataset. Quantum algorithms, which use qubits that can be in multiple states at once, can deliver multiple solutions, each more accurate than one current classical computers could produce.
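
The 87 billion figure is simple arithmetic: with a fixed starting point, 15 stops can be visited in 14! distinct orders, which one line of Python confirms:

    import math

    # Routes through 15 stops with a fixed start: 14! permutations.
    print(f"{math.factorial(14):,}")  # 87,178,291,200

    # A 16th stop multiplies the count by 15, to roughly 1.3 trillion,
    # the combinatorial explosion the article describes.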

Go here to read the rest:
Quantum computing: Realising the potential - Verdict

One of the biggest names in quantum computing could have just cracked open the multibillion-dollar market with a new breakthrough – Fortune

Quantinuum, the quantum computing company spun out from Honeywell, said this week that it had made a breakthrough in the technology that should help accelerate commercial adoption of quantum computers.

It has to do with real-time correction of errors.

One of the biggest issues with using quantum computers for any practical purpose is that the circuits in a quantum computer are highly susceptible to all kinds of electromagnetic interference, which causes errors in their calculations. These errors must be corrected, either in software, often after a calculation has run, or by using other physical parts of the quantum circuitry to check for and correct the errors in real time. So far, while scientists have theorized ways of doing this kind of real-time error correction, few of the methods have been demonstrated in practice on a real quantum computer.

The theoretically game-changing potential of quantum computers stems from their ability to harness the strange properties of quantum mechanics. These machines may also dramatically shorten the time it takes to run some calculations that can be done today on supercomputers but take hours or days. To achieve those results, though, ironing out the calculation errors is of utmost importance. In 2019, Google demonstrated that a quantum computer could perform one esoteric calculation in 200 seconds that it estimated would have taken a traditional supercomputer more than 10,000 years. In the future, scientists think quantum computers will help make the production of fertilizer much more efficient and sustainable, as well as create new kinds of space-age materials.

That's why it could be such a big deal that Quantinuum just said it has demonstrated two methods for doing real-time error correction of the calculations a quantum computer runs.

Tony Uttley, Quantinuum's chief operations officer, says the error-correction demonstration is an important proof point that the company is on track to deliver a quantum advantage for some real-world commercial applications in the next 18 to 24 months. That means businesses will be able to run some calculations, possibly for financial risk or logistics routing, significantly faster, and perhaps with better results, by using quantum computers for at least part of the calculation than they could with standard computer hardware alone. "This lends tremendous credibility to our road map," Uttley said.

There's a lot of money riding on Quantinuum's road map. This past February, the firm's majority shareholder, Honeywell, forecast revenue for Quantinuum of $2 billion by 2026. That future could have just drawn nearer.

Uttley says that today there is a wide disparity in the amount of money different companies, even direct competitors in the same industry, are investing in quantum computing expertise and pilot projects. The reason, he says, is that beliefs vary widely about how soon quantum computers will be able to run key business processes faster or better than existing methods on standard computers. Some people think it will happen in the next two years; others think these nascent machines will only start to realize their business potential a decade from now. Uttley says he hopes this week's error-correction breakthrough will help tip more of Quantinuum's potential customers into the two-year camp.

A $2 billion market opportunity

Honeywell's projection of at least $2 billion in revenue from quantum computing by 2026 was a revision: a year earlier than it had previously forecast. The error-correction breakthrough ought to give Honeywell more confidence in that projection.

Quantinuum is one of the most prominent players in the emerging quantum computer industry, with Honeywell having made a bold and so far successful bet on one particular way of creating a quantum computer. That method is based on using powerful electromagnets to trap and manipulate ions. Others, such as IBM, Google, and Rigetti Computing, have created quantum computers using superconducting materials. Microsoft has been trying to create a variation of the superconducting-based quantum computer using a slightly different technology that would be less prone to errors. Still others are creating quantum computers using lasers and photons. And some companies, such as Intel, have been working on quantum computers whose circuits are built with more conventional semiconductors.

The ability to perform real-time error correction could be a big advantage for Quantinuum and its trapped-ion-based quantum computers as it vies for a commercial edge over rival quantum computer companies. But Uttley points out that besides selling access to its own trapped-ion quantum computers through the cloud, Quantinuum also helps customers run algorithms on IBM's superconducting quantum computers. (IBM is also an investor in Quantinuum.)

Different kinds of algorithms and calculations may be better suited to one kind of quantum computer than another. Trapped ions tend to remain in a quantum state for relatively long periods of time, the record being an hour. Superconducting circuits, on the other hand, tend to stay in a quantum state for a millisecond or less. But this also means that a trapped-ion quantum computer takes much longer to run a calculation than a superconducting one, Uttley says. He envisions a future of hybrid computing in which different parts of an algorithm run on different machines in the cloud: partly on a traditional computer, partly on a trapped-ion quantum computer, and partly on a superconducting quantum computer.

In a standard computer, information is represented in a binary form, either a 0 or a 1, called a bit. Quantum computers use the principles of quantum mechanics to form their circuits, with each unit of the circuit called a qubit. Qubits can represent both 0 and 1 simultaneously. This means that each additional qubit involved in performing calculations doubles the power of a quantum computer. This doubling of power for every additional qubit is one reason that quantum computers will, in theory, be far more powerful than even today's largest supercomputers. But this is only true if the issue of error correction can be successfully tackled and if scientists can figure out how to link enough qubits together to exceed the power of existing standard high-performance computing clusters.

Quantinuum demonstrated two different error-correction methods: one called the five-qubit code and the other called the Steane code. Both use multiple physical qubits to encode one logical qubit, with some of those qubits carrying the calculation and the others checking for and correcting errors in it. As the names suggest, the five-qubit code uses five qubits, while the Steane code uses seven. Uttley says Quantinuum found that the Steane code worked significantly better than the five-qubit code.
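
The five-qubit and Steane codes are too involved to reproduce here, but the principle, redundantly encoding one logical qubit and measuring parity checks onto ancilla qubits to locate an error without disturbing the data, can be sketched with the simpler three-qubit bit-flip code in Qiskit (a toy stand-in, not Quantinuum's implementation):

    from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister

    data = QuantumRegister(3, "data")    # one logical qubit, stored redundantly
    anc = QuantumRegister(2, "ancilla")  # ancillas for the parity checks
    syn = ClassicalRegister(2, "syndrome")
    qc = QuantumCircuit(data, anc, syn)

    # Encode logical |1> as |111> across three physical qubits.
    qc.x(data[0])
    qc.cx(data[0], data[1])
    qc.cx(data[0], data[2])

    # Deliberately inject a bit-flip error on the middle qubit.
    qc.x(data[1])

    # Parity checks: ancilla 0 compares qubits 0 and 1; ancilla 1 compares 1 and 2.
    qc.cx(data[0], anc[0])
    qc.cx(data[1], anc[0])
    qc.cx(data[1], anc[1])
    qc.cx(data[2], anc[1])
    qc.measure(anc, syn)

    # A syndrome of (1, 1) pinpoints the middle qubit as faulty, so an X
    # gate on data[1] repairs it, all without ever measuring (and thereby
    # destroying) the encoded data itself.
    print(qc)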

That may mean it will become the dominant form of error correction, at least for trapped-ion quantum computers, going forward.

Read the original here:
One of the biggest names in quantum computing could have just cracked open the multibillion-dollar market with a new breakthrough - Fortune