How to use the intelligence features in iOS 16 to boost productivity and learning – TechRepublic

Apple has packed intelligence features into iOS 16 that enable translating text in videos, copying the subject of a photo while removing its background, and copying text from a paused video.

A few years ago, Apple began betting on local machine learning in iOS to boost the user experience. It started simply with Photos, but machine learning is now a mainstay in iOS and can help boost productivity at every turn. iOS 16 adds to these machine learning features with the ability to copy text from a video, perform quick text actions from photos and videos, and easily copy the subject of a photo with the background removed, creating an instant alpha layer.

We'll walk through these three new intelligence features in iOS 16, find out how to use them, and show you the ways you can use these features to boost your productivity and more.

SEE: iCloud vs. OneDrive: Which is best for Mac, iPad and iPhone users? (free PDF) (TechRepublic)

All of the features below work only on iPhones with an A12 Bionic processor or later, and the translation and text features are available only in the following languages: English, Chinese, French, Italian, German, Japanese, Korean, Portuguese, Spanish and Ukrainian.

One of the cooler features in iOS 16 is the ability to lift the subject of a photo off the photo, creating an instant alpha of the subject. This removes the background and leaves you with a cleanly cut-out photo subject that you can easily paste into a document, iMessage or anywhere else you can imagine (Figure A).

Figure A

This feature works on iPhones with A12 Bionic and later, and can be used by performing these steps inside the Photos app:

This doesn't only work in Photos: It also works in the Screenshot utility, Quick Look, Safari and, soon, other apps. This feature saves a lot of time compared to opening the photo in a photo editor and manually removing the background.

iOS 15 introduced Live Text, which lets you copy text from a photo or search through your Photo library using text that might be contained in a photo (Figure B). Apple is ramping up this feature in iOS 16 by allowing you to pause a video and copy text from it as well.

Figure B

It works like this:

This feature is great for online learning environments where students might need to copy an example and paste it into a document or other file.

Live Text has been around for two iterations of iOS, so Apple has started building additional features around it, namely the ability to perform actions on text from a photo or paused video frame (Figure C).

Figure C

When you select text in a photo or paused video, you now have the option of performing the following actions on the text:

You can do this by selecting the text from the photo or video, then selecting one of the quick actions presented. This works in the Camera app, Photos app, QuickLook and in the iOS video player.

Read the rest here:
How to use the intelligence features in iOS 16 to boost productivity and learning - TechRepublic

Deep learning algorithm predicts Cardano to trade above $2 by the end of August – Finbold – Finance in Bold

The price of Cardano (ADA) has mainly traded in the green in recent weeks as the network dubbed the "Ethereum killer" continues to record increased blockchain development.

Specifically, the Cardano community is projecting a possible rise in the token's value, especially with the upcoming Vasil hard fork.

Along these lines, NeuralProphet's PyTorch-based price prediction algorithm, built on an open-source machine learning framework, has predicted that ADA will trade at $2.26 by August 31, 2022.

The prediction model covers the period from July 31 to December 31, 2022. Although it is not an accurate indicator of future prices, its predictions had historically proven relatively accurate until the abrupt market collapse of the algorithm-based stablecoin project TerraUSD (UST).
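Finbold does not publish NeuralProphet's exact configuration, but for readers curious how such a forecast is typically produced, here is a minimal sketch using the open-source NeuralProphet library with default settings. The CSV file name and forecast horizon are assumptions for illustration.

```python
import pandas as pd
from neuralprophet import NeuralProphet

# Historical daily closes. NeuralProphet expects two columns:
# "ds" (datestamp) and "y" (the value to forecast).
# "ada_usd_daily.csv" is a hypothetical file name.
df = pd.read_csv("ada_usd_daily.csv")[["ds", "y"]]

model = NeuralProphet()          # defaults: trend + seasonality, trained with PyTorch
metrics = model.fit(df, freq="D")

# Extend roughly five months ahead (July 31 to December 31, 2022).
future = model.make_future_dataframe(df, periods=153)
forecast = model.predict(future)
print(forecast[["ds", "yhat1"]].tail())
```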

However, the prediction aligns with the generally bullish sentiment around ADA that stems from the network activity aimed at improving the asset's utility. As reported by Finbold, Cardano founder Charles Hoskinson revealed the highly anticipated Vasil hard fork is ready to be rolled out after delays.

It is worth noting that despite minor gains, ADA is yet to show any significant reaction to the upgrade, but the token's proponents are glued to the price movement as it shows signs of recovery. Similarly, the token has benefitted from the recent two-month-long rally across the general cryptocurrency market.

Elsewhere, the CoinMarketCap community is projecting that ADA will trade at $0.58 by the end of August. The prediction is supported by about 17,877 community members and represents a price growth of about 8.71% from the token's current value.

For September, the community has placed the prediction at $0.5891, a growth of about 9% from the current price. Interestingly, the NeuralProphet algorithm predicts that ADA will trade at $1.77 by the end of September. Overall, both prediction platforms indicate an increase from the digital asset's current price.

By press time, the token was trading at $0.53 with gains of less than 1% in the last 24 hours.

In general, multiple investors are aiming to capitalize on the Vasil hard fork, especially with Cardano clarifying the upgrade is going on according to plan.

Disclaimer: The content on this site should not be considered investment advice. Investing is speculative. When investing, your capital is at risk.

Read more from the original source:
Deep learning algorithm predicts Cardano to trade above $2 by the end of August - Finbold - Finance in Bold

Responsible use of machine learning to verify identities at scale – VentureBeat


In today's highly competitive digital marketplace, consumers are more empowered than ever. They have the freedom to choose which companies they do business with and enough options to change their minds at a moment's notice. A misstep that diminishes a customer's experience during sign-up or onboarding can lead them to replace one brand with another, simply by clicking a button.

Consumers are also increasingly concerned with how companies protect their data, adding another layer of complexity for businesses as they aim to build trust in a digital world. Eighty-six percent of respondents to a KPMG study reported growing concerns about data privacy, while 78% expressed fears related to the amount of data being collected.

At the same time, surging digital adoption among consumers has led to an astounding increase in fraud. Businesses must build trust and help consumers feel that their data is protected but must also deliver a quick, seamless onboarding experience that truly protects against fraud on the back end.

As such, artificial intelligence (AI) has been hyped as the silver bullet of fraud prevention in recent years for its promise to automate the process of verifying identities. However, despite all of the chatter around its application in digital identity verification, a multitude of misunderstandings about AI remain.


As the world stands today, true AI, in which a machine can successfully verify identities without human interaction, doesn't exist. When companies talk about leveraging AI for identity verification, they're really talking about using machine learning (ML), which is an application of AI. In the case of ML, the system is trained by feeding it large amounts of data and allowing it to adjust and improve, or "learn," over time.

When applied to the identity verification process, ML can play a game-changing role in building trust, removing friction and fighting fraud. With it, businesses can analyze massive amounts of digital transaction data, create efficiencies and recognize patterns that can improve decision-making. However, getting tangled up in the hype without truly understanding machine learning and how to use it properly can diminish its value and, in many cases, lead to serious problems. When using ML for identity verification, businesses should consider the following.

Bias in machine learning models can lead to exclusion, discrimination and, ultimately, a negative customer experience. Training an ML system using historical data will translate biases of the data into the models, which can be a serious risk. If the training data is biased or subject to unintentional bias by those building the ML systems, decisioning could be based on prejudiced assumptions.

When an ML algorithm makes erroneous assumptions, it can create a domino effect in which the system is consistently learning the wrong thing. Without human expertise from both data and fraud scientists, and oversight to identify and correct the bias, the problem will be repeated, thereby exacerbating the issue.

Machines are great at detecting trends that have already been identified as suspicious, but their crucial blind spot is novelty. ML models use patterns of data and therefore assume future activity will follow those same patterns or, at the least, a consistent pace of change. This leaves open the possibility for attacks to succeed simply because they have not yet been seen by the system during training.

Layering a fraud review team onto machine learning ensures that novel fraud is identified and flagged, and updated data is fed back into the system. Human fraud experts can flag transactions that may have initially passed identity verification controls but are suspected to be fraud and provide that data back to the business for a closer look. In this case, the ML system encodes that knowledge and adjusts its algorithms accordingly.

One of the biggest knocks against machine learning is its lack of transparency, which is a basic tenet in identity verification. One needs to be able to explain how and why certain decisions are made, as well as share with regulators information on each stage of the process and customer journey. Lack of transparency can also foster mistrust among users.

Most ML systems provide a simple pass or fail score. Without transparency into the process behind a decision, it can be difficult to justify when regulators come calling. Continuous data feedback from ML systems can help businesses understand and explain why decisions were made and make informed decisions and adjustments to identity verification processes.
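As an illustration of what a more transparent decision might look like, the sketch below attaches human-readable reason codes to each verification result instead of returning a bare pass/fail. This is illustrative only, not any vendor's actual API; the thresholds, field names and checks are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class VerificationResult:
    """A decision plus the evidence behind it, rather than a bare pass/fail."""
    passed: bool
    score: float                      # model confidence in [0, 1]
    reasons: list[str] = field(default_factory=list)

def verify(document_score: float, address_match: bool, velocity_flag: bool) -> VerificationResult:
    # Hypothetical checks; each failed check records a human-readable reason
    # that can later be shared with regulators or fraud reviewers.
    reasons = []
    if document_score < 0.80:
        reasons.append(f"document score {document_score:.2f} below 0.80 threshold")
    if not address_match:
        reasons.append("address did not match records")
    if velocity_flag:
        reasons.append("unusual number of recent sign-up attempts")
    return VerificationResult(passed=not reasons, score=document_score, reasons=reasons)

print(verify(0.72, address_match=True, velocity_flag=False))
# VerificationResult(passed=False, score=0.72,
#                    reasons=['document score 0.72 below 0.80 threshold'])
```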

There is no doubt that ML plays an important role in identity verification and will continue to do so in the future. However, it's clear that machines alone aren't enough to verify identities at scale without adding risk. The power of machine learning is best realized alongside human expertise and with data transparency to make decisions that help businesses build customer loyalty and grow.

Christina Luttrell is the chief executive officer of GBG Americas, comprising Acuant and IDology.


See more here:
Responsible use of machine learning to verify identities at scale - VentureBeat

How Heinz's Agency Used Machine Learning To Prove Its Ketchup Is The Dominant Condiment – The Drum

Heinz has tapped into the cultural excitement around text-to-image machine learning programs to prove the market dominance of its tomato sauce. The Drum ketched up with the creatives behind the 2001: A Space Odyssey-inspired campaign.

The latest iteration of the Draw Ketchup campaign gave a machine, rather than the general public, the task of drawing ketchup. Lo and behold, more often than not it sketched *almost* correct iterations of Heinz's product, albeit with the Bizarro World twist we've become used to from not-quite-there machine learning programs.

The campaign from Canadian creative agency Rethink set out to strengthen consumer affinity. "Heinz is an icon," explains Mike Dubrick, executive creative director at Rethink. "But we don't want it to be a heritage brand."

Last year, Kraft-owned Heinz conducted a social experiment asking people across five continents to draw the red condiment. One year later, and building on the campaign's success, Rethink took it one step further.

Dubrick, not to be confused with director Stanley Kubrick, to whom this campaign nods, says: "So, like many of our briefs, the task was to demonstrate Heinz's iconic role in today's pop culture." Pitching the idea to the brand was next. After the brief, we rarely wait until the formal presentation when we share something we think is great.

This idea was first pitched to the client informally by text before a formal presentation with a bit more meat on the bone was conducted, but Dubrick, excited by the idea, points out: "When you've got something powerful, why wait?"


After getting the go-ahead from Heinz, the next step of the creative process was to find the right tech for the job. The team landed on DALL-E 2, a new AI system that can create realistic images and art from a text description. There's been a buzz around such tools on social media these last few months.

"Once we had that, we were able to experiment and understand the capabilities," adds Dubrick. "We were getting to grips with machine learning and looking for ways to demonstrate that Heinz is ketchup. The AI gave us a completely unbiased opinion on the subject."

The team hilariously began putting the machine to the test, asking it to draw a "cow bus" and "fry castle," before finally entrusting the bot to draw ketchup, resulting in pictures of what appears to be a Heinz bottle, albeit in different artistic styles.

As always though, relying on tech can sometimes go awry. We asked if there was a plan B. "No. It either worked or it didn't," adds Dubrick.

"We love solving problems throughout the creative process, but we also think you have to be willing to kill an idea when it's simply not going to work. Thankfully, this one did. Of course, there were a few funny and odd creations along the way. The platform is experimental, and we wanted to embrace that."

A campaign like this perhaps dispels the notion that a focus on tech can erode the human aspect of creativity. "I think it enhances the human element. It allows the wildest, most imaginative creative thoughts that pop into people's heads to be transformed into vivid illustrations," he adds.

"And it does it in seconds. Like any creative tool, it's an opportunity for creative humans to do things they've never done before."


Read the rest here:
How Heinz's Agency Used Machine Learning To Prove Its Ketchup Is The Dominant Condiment - The Drum

Appen’s Annual State of AI and Machine Learning Report Identifies a Gap in Ideal Versus Reality of Data Quality – Business Wire

SAN FRANCISCO--(BUSINESS WIRE)--Appen Limited (ASX:APX), the global leader in data for the AI Lifecycle, providing data sourcing, data preparation, and model evaluation by humans at scale, today released its eighth annual State of AI and Machine Learning report. This year's report reveals that sourcing quality data is an obstacle to creating AI.

According to the report's findings, 51% of participants agree that data accuracy is critical to their AI use case. To successfully build AI models, organizations need accurate, high-quality data. Unfortunately, business leaders and technologists report a significant gap between the ideal and the reality of achieving data accuracy.

Appen's research also found that companies are shifting their focus to responsible AI and maturing in their use of AI. More business leaders and technologists are focusing on improving the data quality behind AI projects in order to promote more inclusive datasets and, as a result, unbiased and better AI. In fact, 80% of respondents stated data diversity is extremely important or very important, and 95% agree that synthetic data will be a key player when it comes to creating inclusive datasets.

"This year's State of AI report finds that 93% of respondents believe responsible AI is the foundation of all AI projects," said Mark Brayan, CEO at Appen. "The problem is, many are facing the challenges of trying to build great AI with poor datasets, and it's creating a significant roadblock to reaching their goals."

Additional key takeaways from the 2022 State of AI Report include:

Sourcing: 42% of technologists say the data sourcing stage of the AI lifecycle is very challenging.

Evaluation: 90% are retraining their models more than quarterly.

Adoption: Business leaders are split down the middle on whether their organization is ahead of (49%) or even with (49%) others in their industry.

"The majority of AI efforts are spent managing data for the AI lifecycle, which means it is an incredible undertaking for AI leads to handle alone and is the area many are struggling with," said Sujatha Sagiraju, chief product officer at Appen. "Sourcing high-quality data is critical to the success of AI solutions, and we are seeing organizations emphasize the importance of data accuracy."

The State of AI report was sourced from 504 interviews collected via The Harris Poll online survey of IT decision makers, business leaders and managers, and technical practitioners from the US, UK, Ireland, and Germany.

To learn more, download the full 2022 State of AI and Machine Learning report.

About Appen Limited

Appen is the global leader in data for the AI Lifecycle. With over 25 years of experience in data sourcing, data annotation, and model evaluation by humans, we enable organizations to launch the world's most innovative artificial intelligence systems. Our expertise includes a global crowd of over 1 million skilled contractors who speak over 235 languages, in over 70,000 locations and 170 countries, and the industry's most advanced AI-assisted data annotation platform. Our products and services give leaders in technology, automotive, financial services, retail, healthcare, and governments the confidence to launch world-class AI products. Founded in 1996, Appen has customers and offices globally.

Originally posted here:
Appen's Annual State of AI and Machine Learning Report Identifies a Gap in Ideal Versus Reality of Data Quality - Business Wire

Machine learning forecasting for COVID-19 pandemic-associated effects on paediatric respiratory infections – Archives of Disease in Childhood

What is already known on this topic?

The literature reports that (non-COVID-19) respiratory diagnoses broadly declined around the world during the periods of government interventions resulting from the COVID-19 pandemic.

These general reductions in respiratory infection diagnoses run counter to typical seasonal trends.

Research has predicted an increase in respiratory infections once government interventions and restrictions are removed.

This study analyses respiratory infections observed at a specialist children's hospital during and after the implementation of restrictions resulting from the COVID-19 pandemic.

The results show a significant reduction in rates of major respiratory diagnoses during restrictions but further illustrate the variation in responses post-restrictions.

The study demonstrates how open-source, cross-domain forecasting tools can be applied to routine health record activity data to provide evaluation of deviations from historical trends.

This study shows that, in our population, hypothesised excess post-COVID-19 respiratory syncytial virus infections did not occur, with implications for health policy planning.

The results indicate that rates for several respiratory infections continue to remain below typical pre-COVID-19 levels, and further research is required to model future effects.

The electronic health record data-based forecasting method, using cross-domain tools, is applicable to a range of health policy applications, including service usage planning and case surge detection.

The COVID-19 pandemic had a major impact on healthcare services, with significantly reduced service utilisation.1 In addition, the mitigation measures implemented, such as lockdowns, social distancing and personal protective/hygiene actions, have significantly reduced rates of other infectious agents, for example, transmission of norovirus.2 Previous pandemics, such as influenza, have demonstrated that associated public health measures can impact rates of other respiratory infections such as respiratory syncytial virus (RSV),3 and reduced rates of RSV infection and other respiratory pathogens have been reported in several countries during the COVID-19 pandemic.4–8

The value of routine electronic health record (EHR) data for research is increasingly recognised and has been highlighted by the pandemic,9–11 and the UK Government has recently published a data strategy emphasising the value of healthcare data for secondary purposes.12 The aim of this study is to analyse routine electronic patient record data from a specialist children's hospital to examine the effect of the COVID-19 pandemic mitigation measures on rates of seasonal respiratory infections compared with expected rates, using an openly available, transferable machine learning model.

We performed a retrospective longitudinal study of coded respiratory disorder diagnoses made at Great Ormond Street Hospital for Children (GOSH), a specialist paediatric hospital in London that typically receives 280,000 patient visits per year and includes several large paediatric intensive care units.

The respiratory disorder data were extracted and aggregated from the Epic patient-level EHR and legacy clinical data warehouses13 using a bespoke Digital Research Environment.14 Diagnoses were labelled with codes from the International Statistical Classification of Diseases and Related Health Problems 10th Revision (ICD-10).15 All diagnoses from inpatients and outpatients recorded between 1 January 2010 and 28 February 2022 were collected for the study.

The diagnosis rates and trends of four respiratory disease categories that are reported to be particularly prevalent during the UK winter were analysed in this study (Respiratory Infection due to the Respiratory Syncytial Virus (RSV), Respiratory Infection due to the Influenza Virus, Acute Nasopharyngitis due to any Virus and Acute Bronchiolitis due to any Virus (excluding RSV)). In addition, diagnoses were aggregated into categories based on respiratory hierarchical groupings of ICD-10 to provide a wider picture of diagnosis rates and seasonal trends15 (the full list of associated ICD-10 codes for each aggregated category is shown in online supplemental table 1).

Each diagnosis category was divided into three time periods, corresponding to before, during and after the enforcement of national restrictions in England in response to the COVID-19 pandemic. The prerestriction period was designated as 1 January 2010 to 25 March 2020. The during-restriction period was designated from 26 March 2020 (the date The Health Protection (Coronavirus, Restrictions) (England) Regulations legally came into force) to 18 July 2021. The postrestriction period was taken from 19 July 2021 (the date The Health Protection (Coronavirus, Restrictions) (Steps etc.) (England) Regulations were revoked) to 28 February 2022.16 England was subject to a range of interventions in the period during restrictions. At their most stringent, these restrictions included full national lockdowns where meeting was disallowed and it was a legal offence to leave your place of living except for a small range of essential activities. Conversely, at their least stringent, the restrictions permitted gatherings of up to 30 people and required only face coverings in enclosed spaces and minor personal social distancing measures.

All analysis and modelling for this study were carried out using the R programming language.17

All data were deidentified using the established digital research environment mechanisms, with analysis carried out in a secure virtual environment; no data left the hospital during the study.

For each respiratory disorder diagnosis category, data for the cohort of patients with an associated ICD-10 diagnosis were extracted, and the start date of the period of diagnosis was identified. The daily diagnosis frequency (diagnoses/day) was calculated for each diagnosis category by aggregating the diagnosis dates of all patients with a diagnosis in the category across the period.

The diagnosis rate data were sparse for some categories; therefore, a 30-day moving average filter18 with a centre-aligned, rectangular window was applied to the raw diagnosis frequency series to provide an averaged representation of the diagnosis rate trends, which was used for the subsequent analysis and modelling.
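In pandas, the filter described above amounts to a single rolling-mean call. This is a sketch only; the file and column names are assumptions for illustration.

```python
import pandas as pd

# Diagnoses/day indexed by date; file and column names assumed for illustration.
daily_counts = pd.read_csv("diagnosis_counts.csv", index_col="date", parse_dates=True)["count"]

# 30-day moving average with a centre-aligned, rectangular window, as described above.
trend = daily_counts.rolling(window=30, center=True, min_periods=1).mean()
```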

To understand the impact of restrictions on GOSH diagnosis rates for each category, a statistical model for the typical trend was built from the diagnosis rate trends for the prerestrictions period using the Prophet forecasting procedure.19 Prophet is a robust, open source tool that fits additive and multiplicative seasonal models to time-series data that have strong cyclical/seasonal effects. With Prophet, an input time-series is decomposed into a non-periodic trend that changes non-linearly over time, multiple periodic seasonalities, an irregular holiday effect and a noise signal. Prophet fits the model to the input time-series within the Bayesian statistical inference framework with Markov chain Monte Carlo (MCMC) sampling implemented in the Stan programming language.19

For this study, the diagnosis rate model was designed as a multiplicative model, as follows:

$$y(t) = g(t) \cdot s(t) \cdot \epsilon(t)$$

where \(y(t)\) is the diagnosis rate time series, \(g(t)\) is the non-periodic trend modelled as a piecewise linear trend with changepoints, \(s(t)\) is the annual periodic seasonal trend modelled as a five-term Fourier series, and \(\epsilon(t)\) is a normally distributed model error function. A multiplicative model, whereby the trends and seasonalities are multiplied together to model the time-series, was used because diagnosis rates clearly showed annual seasonality to be approximately proportional to the overall trend. Details of the implementation of \(g(t)\) and \(s(t)\) are available elsewhere.19

The multiplicative model was log-transformed and implemented as the following additive model:

$$\ell(y(t)) = \ell(g(t)) + \ell(s(t)) + \epsilon(t), \qquad \ell(x) = \log(x + \delta)$$

where \(x\) is the input diagnosis rate, and \(\ell(x)\) approximates the log transformation and is finite for zero-valued \(x\) for an arbitrarily small constant \(\delta\).
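The paper implements this in R; the sketch below uses Prophet's Python port to show the equivalent configuration implied by the description above (log transform with a small constant, piecewise linear trend, five-term annual Fourier series, MCMC sampling). The value of \(\delta\), the number of MCMC samples and the forecast horizon are assumptions.

```python
import numpy as np
import pandas as pd
from prophet import Prophet

# Prophet expects columns "ds" (date) and "y" (value); `trend` is the
# smoothed diagnoses/day series from the moving-average step above.
df = trend.reset_index()
df.columns = ["ds", "y"]

delta = 1e-3                       # arbitrary small constant; value assumed
df["y"] = np.log(df["y"] + delta)  # log transform turns the multiplicative model additive

model = Prophet(
    growth="linear",         # piecewise linear trend with changepoints
    yearly_seasonality=5,    # five-term Fourier series for the annual cycle
    weekly_seasonality=False,
    daily_seasonality=False,
    mcmc_samples=300,        # full Bayesian inference via MCMC in Stan; count assumed
)
model.fit(df[df["ds"] < "2020-03-26"])  # train on the prerestrictions period only

# Forecast through 28 February 2022 (roughly 705 days past 25 March 2020).
future = model.make_future_dataframe(periods=705)
forecast = model.predict(future)
forecast["rate"] = np.exp(forecast["yhat"]) - delta  # back to diagnoses/day
```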

To quantify the degree of seasonality in each diagnosis category, a Seasonality Amplitude score was calculated from the Prophet model generated for each diagnosis category. The score, \(S\), was calculated as the ratio of the peak-to-peak amplitude, \(A_{pp}\), and the peak amplitude, \(A_{peak}\), of the model forecast for the year immediately prior to the introduction of restrictions:

$$S = \frac{A_{pp}}{A_{peak}}$$
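Continuing the sketch above, the score reduces to a ratio computed over the forecast for the final prerestrictions year (dates from the paper; column names from Prophet's output plus the back-transformed `rate` column added earlier):

```python
# Forecast for the year immediately prior to restrictions.
year = forecast[(forecast["ds"] >= "2019-03-26") & (forecast["ds"] < "2020-03-26")]

a_peak = year["rate"].max()            # peak amplitude
a_pp = a_peak - year["rate"].min()     # peak-to-peak amplitude
seasonality_amplitude = a_pp / a_peak  # S > 0.5 indicates notable seasonality
```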

To understand the significance of any deviation in the observed diagnosis rate from that predicted by the Prophet models, discrete daily z-scores were calculated, as follows:

$$z_i = \Phi^{-1}\left(P\left(Y_i \le y_i\right)\right)$$

where \(z_i\) is the i-th observed diagnosis rate z-score, \(y_i\) is the i-th observed diagnosis rate, \(Y_i\) is the random variable defining the i-th value of the posterior predictive distribution from the raw MCMC samples in Prophet, and \(\Phi^{-1}\) is the mapping of probability quantiles to z-scores.
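A minimal way to realise this with the Python port, assuming an `observed` array of log-transformed rates aligned with the forecast rows: draw the posterior predictive samples Prophet already computes, take each observation's empirical quantile, and map it through the inverse normal CDF.

```python
import numpy as np
from scipy.stats import norm

# Posterior predictive draws: array of shape (n_days, n_samples).
samples = model.predictive_samples(future)["yhat"]

# `observed` is the log-transformed observed series aligned to `future` (assumed).
quantiles = (samples <= observed[:, None]).mean(axis=1)

# Clip so empirical quantiles of exactly 0 or 1 map to finite z-scores.
eps = 1.0 / samples.shape[1]
z_scores = norm.ppf(np.clip(quantiles, eps, 1.0 - eps))
```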

Data from 30,199 patients with a diagnosis from Chapter X (Diseases of the respiratory system) of ICD-10 at the centre between 1 January 2010 and 28 February 2022 were included in the study, with a total of 141,003 diagnosis records in the dataset (including repeats). Full summary statistics for the study population are shown in table 1.

Table of summary characteristics for the population of diagnoses analysed in the study

A total of 1060 diagnoses of RSV, 471 diagnoses of Influenza, 2214 diagnoses of Acute Nasopharyngitis and 1568 diagnoses of Acute Bronchiolitis (excl. RSV) were made across the period of study. Online supplemental table 1 shows the patient cohort summary for these diagnosis categories during the three time periods, in addition to those from the ICD-10 hierarchy.

The 30-day moving average diagnosis rates for the respiratory disorder diagnosis categories are shown in figure 1. The four diagnosis rate plots for the respiratory disorder diagnosis categories show clear seasonal trends and exhibit peaks in winter months and troughs in summer months.

Diagnosis frequency plots for the four commonly seasonal respiratory disease categories. The blue line shows the observed 30-day moving average of daily diagnosis rate between 2010 and 2022. The vertical dark red lines define the start and end of widespread restrictions in response to the COVID-19 pandemic in England, UK. The light red sections show the three periods of national lockdowns.

For RSV, the prerestrictions period maximum diagnosis frequency was 1.8 diagnoses/day. During the restrictions period, the maximum was 0.17 diagnoses/day, representing a 91% reduction. These results are shown for the other categories in table 2.

Table of peak diagnosis rate values for the respiratory disease categories across the three time periods: prerestrictions, during restrictions and postrestrictions

The Prophet seasonal model was calculated for all diagnosis categories based on the prerestriction period (figure 2, table 3). The seasonality amplitudes of all four seasonal diagnosis categories were greater than 0.5, demonstrating notable seasonality. Additionally, three respiratory infection categories from the ICD-10 hierarchy (acute upper respiratory infections, influenza and pneumonia, and other acute lower respiratory infections) were found to have seasonality amplitudes greater than 0.5. All categories had their seasonal peak identified between 26 November and 30 January annually (online supplemental table 2).

Diagnosis frequency forecast plots for the four seasonal respiratory disease categories: (A) RSV, (B) influenza, (C) acute nasopharyngitis and (D) acute bronchiolitis (excl. RSV), and three seasonal ICD-10 categories: (E) acute upper respiratory infections, (F) influenza and pneumonia and (G) other acute lower respiratory infections. In the diagnosis frequency plots, the blue line shows the observed 30-day moving average of daily diagnosis rate between 2010 and 2022. The white line shows the seasonal model forecast with the light blue 95% CIs. In the z-score plots, the blue line shows the observed diagnosis rate z-score against the forecast model. The light blue section shows the range for absolute z-score of less than 1.96 (95% CI). The vertical red lines define the start and end of widespread legal restrictions in response to the COVID-19 pandemic in England, UK. The light red sections show the three periods of national lockdowns. Specifically, note the marked reduction in rates for all respiratory infection groups during the pandemic restriction period but also the greater than expected rates for the period immediately postrestrictions relating to rising RSV infection rates. RSV, respiratory syncytial virus.

Table of the forecast and observed number of diagnoses for each respiratory disease category in the during and postrestrictions periods

Comparing observed diagnoses to forecast diagnoses across the restriction period for the four seasonal diagnoses, all showed a greater than 50% reduction from expected rates. This included a 73%, 84%, 70% and 55% reduction for RSV, influenza, acute nasopharyngitis and acute bronchiolitis (excl. RSV), respectively. These categories also had significant negative minimum z-scores of less than −10.0 during the restrictions period.

Across the restrictions period, there was a general reduction of 26% in all Diseases of the Respiratory System (J00–J99). Of the ICD-10 hierarchy categories considered in the study, all reduced against forecast rates except Influenza and pneumonia (which contains pneumonia as the result of coronavirus infections) and the aggregated category Other non-infectious diseases of the respiratory system. All categories had negative minimum z-scores of less than −2.0 (outside the 95% CI); however, values were generally closer to zero than observed for the typically seasonal categories.

During the postrestriction period, there were large differences in diagnosis categories' responses to the lifting of restrictions. Most categories have returned to, and remained, in line with prerestriction forecasts; however, some have not. RSV diagnosis rates rose most notably and were found to be consistently and significantly above the prerestrictions modelled forecast (maximum z-score 8.13); however, they subsequently returned to within forecast by the end of winter 2021/2022 (z-score <2.0). Additionally, both the influenza and acute nasopharyngitis categories continue to show significantly reduced diagnosis rates in comparison with prerestrictions forecasts (z-scores −4.0 and −2.9, respectively).

In this study we have demonstrated, first, that mitigation and prevention measures put in place during the COVID-19 pandemic period were associated with significant reductions in the rates of children with a diagnosis of specific respiratory infections, particularly due to RSV, influenza, acute nasopharyngitis and acute bronchiolitis, at a large children's hospital in England, UK. Furthermore, the removal of prevention measures has resulted in widely varied responses in subsequent months. Second, we demonstrate the feasibility of applying an openly available machine learning forecasting model from another domain to routine electronic healthcare data within a secure digital hospital environment. Third, we use our method in analysing the seasonality of respiratory infections to showcase the potential of this model for clinical phenomena that are cyclical (eg, seasonal/diurnal). Our findings are consistent with known epidemiological data, suggesting robustness of the approach. Finally, the use of such a forecasting tool can identify unexpected deviations from normal, in this case the increasing rates of RSV infection in mid-to-late 2021 beyond the expected, allowing modelling of the likely peak in future months, hence aiding resource planning and public health measures. Again, the potential utility of this approach extends beyond the seasonality of respiratory infection alone.

The almost complete absence of the seasonal RSV infection pattern during the COVID-19 pandemic has been previously reported internationally,4 7 20 with larger than expected numbers susceptible postpandemic,21 and based on simulated trajectories from past data, significant RSV outbreaks had been predicted for the winter of 2021–22.22 23 Indeed, a resurgence of RSV infections above normal levels and at different times of the season has been reported in several countries.24 25 The data presented here confirm the significant reduction in RSV and other acute respiratory infections in London during the restriction period and further confirm greater than normal (predicted) rates occurring immediately following the lifting of restrictions. However, the peak diagnosis frequency rate was largely equal to that predicted for a typical winter, based on our machine learning modelling, and by 28 February 2022 had returned to within the expected range. All other seasonal respiratory infection categories studied exhibited similar suppression in diagnoses during the restrictions period; however, (unlike RSV) they have all seen within or below forecast diagnosis rates postrestrictions. GOSH does not have an emergency department and is unique in relation to its patient population among children's hospitals in the UK. Our absolute numbers of diagnoses for different respiratory infections including RSV are relatively low compared with district general hospitals, though the same seasonal and restrictions-related effects have been widely observed.4 7 26 Despite this, the model was still able to forecast expected trends and deviations from previous years.

The results for diagnosis rate and number observed during winter 2021/2022, relative to forecast (particularly for RSV), are contrary to some previously published suggestions that a lack of population immunity due to the absence of cases during restrictions would lead to increased disease prevalence. Further study is required to explore whether this finding is observed in larger, less selective populations as global restrictions are fully removed. However, if replicated elsewhere, these findings could imply that elevated infections and resulting disease pose less of a risk of further increases in health service demand during periods when services are recovering from pandemic-related delays.

The study illustrates the value of using routine healthcare data for secondary analyses within a bespoke data infrastructure based around well-defined data definitions and data models allowing data harmonisation, combined with the use of open and commonly used analytic tools such as R and Python,17 27 within a cloud-based trusted research environment allowing secure and auditable collaborative data analysis of non-identifiable data. This approach supports transferability to other organisations, and all code is available at https://github.com/goshdrive/seasonality-analysis-forecasting.

By applying a seasonal forecasting model28 to diagnosis data, we show how it is possible to generate forecasts with narrow confidence intervals from routine healthcare data, even when the underlying healthcare indicators are highly variable throughout a periodic cycle and/or involve moving year-on-year trends. By using a forecasting model that explicitly includes cyclical components described as a Fourier series, instead of a more generalised machine learning model, the library was able to tightly model the data with few parameters requiring domain-specific configuration. Specifically, these results were achieved by setting just three parameters specific to the indicators being studied. For this reason, the Prophet forecasting model has been successfully used in diverse areas including finance,29 temperature prediction,30 cloud computing resource requirements31 and predicting COVID-19 infection rates.32 33

In conclusion, these data, based on routine EHR data combined with cross-domain time-series forecasting machine learning tools, demonstrate the near-complete absence of seasonal acute respiratory infection-related diagnoses in a specialist children's hospital during the period of the COVID-19 pandemic mitigation measures in 2020 and 2021. In addition, the data show an earlier-than-usual spike in RSV infection in 2021, which nonetheless remained within the forecast range. The study illustrates the value of curated real-world healthcare data to rapidly address clinical issues in combination with the use of openly available machine learning tools, which can be applied to a range of scenarios relating to forecasting cyclical time-series data.

No data are available. No individual participant data will be available.

Not applicable.

The use of such routine deidentified data for this study was approved under REC 17/LO/0008.

Read more:
Machine learning forecasting for COVID-19 pandemic-associated effects on paediatric respiratory infections - Archives of Disease in Childhood

Artificial Intelligence In Drug Discovery Global Market Report 2022: Rise in Demand for a Reduction in the Overall Time Taken for the Drug Discovery…

DUBLIN--(BUSINESS WIRE)--The "Artificial Intelligence (AI) In Drug Discovery Global Market Report 2022, By Technology, By Drug Type, By Therapeutic Type, By End-Users" report has been added to ResearchAndMarkets.com's offering.

The global artificial intelligence (AI) in drug discovery market is expected to grow from $791.83 million in 2021 to $1,042.30 million in 2022 at a compound annual growth rate (CAGR) of 31.6%. The market is expected to reach $2,994.52 million in 2026 at a CAGR of 30.2%.
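As a quick check of the arithmetic, both growth rates follow from the standard CAGR formula, (end/start)^(1/years) − 1:

```python
# Figures from the press release, in millions of dollars.
one_year_growth = 1042.30 / 791.83 - 1               # 2021 -> 2022
four_year_cagr = (2994.52 / 1042.30) ** (1 / 4) - 1  # 2022 -> 2026

print(f"{one_year_growth:.1%}")   # ~31.6%
print(f"{four_year_cagr:.1%}")    # ~30.2%
```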

The artificial intelligence (AI) in drug discovery market consists of sales of AI for drug discovery and related services. Artificial intelligence (AI) for drug discovery is technology that uses machine simulation of human intelligence processes to tackle complex problems in the drug discovery process. It helps find new molecules, identify drug targets and develop personalized medicines in the pharmaceutical industry.

The main technologies in artificial intelligence (AI) in drug discovery are deep learning and machine learning. Deep learning is a machine learning and artificial intelligence (AI) technique that mimics how humans acquire knowledge. Data science, which covers statistics and predictive modelling, incorporates deep learning as a key component.

The different drug types include small molecules and large molecules, and the market spans various therapeutic types, such as metabolic disease, cardiovascular disease, oncology and neurodegenerative diseases, among others. It serves several end-user groups, including pharmaceutical companies, biopharmaceutical companies, and academic and research institutes, among others.

The rise in demand for a reduction in the overall time taken for the drug discovery process is a key driver propelling the growth of the artificial intelligence (AI) in drug discovery market. Traditionally, it takes three to five years with animal models to identify and optimize molecules before they are evaluated in humans, whereas AI-based start-ups have been identifying and designing new drugs in a matter of days or months.

For instance, in 2020, the British start-up Exscientia and Japan's Sumitomo Dainippon Pharma used artificial intelligence to produce an obsessive-compulsive disorder (OCD) medication, decreasing the development time from four years to less than one year. The reduction in the overall time taken for the drug discovery process drives the artificial intelligence (AI) in drug discovery market's growth.

The shortage of skilled professionals is expected to hamper the AI in drug discovery market. Employees have to retrain or learn new skill sets to work efficiently with complex AI systems to get the desired results. The shortage of skills acts as a major hindrance to drug discovery through AI, discouraging companies from adopting AI-based systems for drug discovery.

Scope

Markets Covered:

1) By Technology: Deep Learning; Machine Learning

2) By Drug Type: Small Molecule; Large Molecules

3) By Therapeutic Type: Metabolic Disease; Cardiovascular Disease; Oncology; Neurodegenerative Diseases; Others

4) By End-Users: Pharmaceutical Companies; Biopharmaceutical Companies; Academic And Research Institutes; Others

Key Topics Covered:

1. Executive Summary

2. Artificial Intelligence (AI) In Drug Discovery Market Characteristics

3. Artificial Intelligence (AI) In Drug Discovery Market Trends And Strategies

4. Impact Of COVID-19 On Artificial Intelligence (AI) In Drug Discovery

5. Artificial Intelligence (AI) In Drug Discovery Market Size And Growth

6. Artificial Intelligence (AI) In Drug Discovery Market Segmentation

7. Artificial Intelligence (AI) In Drug Discovery Market Regional And Country Analysis

8. Asia-Pacific Artificial Intelligence (AI) In Drug Discovery Market

9. China Artificial Intelligence (AI) In Drug Discovery Market

10. India Artificial Intelligence (AI) In Drug Discovery Market

11. Japan Artificial Intelligence (AI) In Drug Discovery Market

12. Australia Artificial Intelligence (AI) In Drug Discovery Market

13. Indonesia Artificial Intelligence (AI) In Drug Discovery Market

14. South Korea Artificial Intelligence (AI) In Drug Discovery Market

15. Western Europe Artificial Intelligence (AI) In Drug Discovery Market

16. UK Artificial Intelligence (AI) In Drug Discovery Market

17. Germany Artificial Intelligence (AI) In Drug Discovery Market

18. France Artificial Intelligence (AI) In Drug Discovery Market

19. Eastern Europe Artificial Intelligence (AI) In Drug Discovery Market

20. Russia Artificial Intelligence (AI) In Drug Discovery Market

21. North America Artificial Intelligence (AI) In Drug Discovery Market

22. USA Artificial Intelligence (AI) In Drug Discovery Market

23. South America Artificial Intelligence (AI) In Drug Discovery Market

24. Brazil Artificial Intelligence (AI) In Drug Discovery Market

25. Middle East Artificial Intelligence (AI) In Drug Discovery Market

26. Africa Artificial Intelligence (AI) In Drug Discovery Market

27. Artificial Intelligence (AI) In Drug Discovery Market Competitive Landscape And Company Profiles

28. Key Mergers And Acquisitions In The Artificial Intelligence (AI) In Drug Discovery Market

29. Artificial Intelligence (AI) In Drug Discovery Market Future Outlook and Potential Analysis

30. Appendix

Companies Mentioned

For more information about this report visit https://www.researchandmarkets.com/r/43bdop

Read this article:
Artificial Intelligence In Drug Discovery Global Market Report 2022: Rise in Demand for a Reduction in the Overall Time Taken for the Drug Discovery...

AI for Ukraine is a new educational project from AI HOUSE to support the Ukrainian tech community – KDnuggets

Sponsored Post

AI for Ukraine is a series of workshops and lectures held by international artificial intelligence experts to support the development of Ukraine's tech community during the war. Yoshua Bengio (MILA/U. Montreal), Alex J. Smola (Amazon Web Services), Sébastien Bubeck (Microsoft), Gaël Varoquaux (INRIA), and many other well-known specialists have joined the initiative. This is a non-commercial educational project by AI HOUSE, a company focused on building the AI/ML community in Ukraine and part of the Roosh tech ecosystem. All proceeds collected upon registration will be donated to the biggest Ukrainian charity fund, Come Back Alive.

It's been five months of a completely new reality for every single Ukrainian, one with sirens, bombings, pain, and war. The AI community has also changed a lot, with many now on the front line and others dedicated to volunteer work. One thing is for certain: this war will end with Ukraine's victory, after which the country will need to be rebuilt in every aspect.

"War is one of the most tragic types of collective human behavior, and democracy is a defense against tyranny and the key to improving people's lives. It is essential to maintain and develop the flame of research and knowledge even in such a dark period, thinking about the post-war times and the importance of science and innovation to achieve progress." comments Yoshua Bengio, famous computer scientist who received the Turing Award in 2018 and is called as one of the Godfathers of AI.

The global AI community has been and is continuing to actively support Ukraine and its tech sector. The AI for Ukraine project is aimed at connecting international experts with the local AI community, sharing insight, and helping Ukraine on its journey to becoming Europe's AI hub in the near future.

"Ukraine must continue its path to a modern democratic country, economically prosperous, with free and educated people. Supporting the AI community in Ukraine will contribute to the development of the economy, increasing the value of local talent." adds Gal Varoquaux, ML-researcher and data scientist.

AI for Ukraine aims to:

"In the Roosh ecosystem, AI HOUSE is responsible for developing educational programs and network-building that helped to implement the AI for Ukraine initiative.

We develop artificial intelligence in Ukraine in all aspects, the most fundamental of which is education. Education is the foundation and driving force for every professional who seeks success. Thus, our goal of creating this initiative is primarily to provide access to unique knowledge from the best experts in the industry. We gathered world-renowned specialists that are keen to help Ukraine and its AI community. Together, we will be able to promote the field of artificial intelligence in Ukraine at a qualitatively new level and form conditions for the further development of technological talents." - emphasizes Serhii Tokarev, Founding Partner at Roosh.

Professors from Stanford, Cornell, Berkeley, and other renowned educational institutions, along with engineers and specialists from leading IT companies like Amazon, Samsung AI, Microsoft, and Hugging Face, have all joined to support the initiative and host educational sessions.

Yoshua Bengio – Professor at the University of Montreal, founder and scientific director of the Quebec Institute of Artificial Intelligence, head of the CIFAR Learning in Machines & Brains program, and one of the leading experts in the field of AI. In 2019, Yoshua was awarded the prestigious Killam Prize and, in 2022, became the computer scientist with the highest h-index in the world. He is best known for his pioneering work in deep learning, earning him the 2018 A.M. Turing Award, the "Nobel Prize of Computing," with Geoffrey Hinton and Yann LeCun.

Alex J. Smola – VP of machine learning at Amazon Web Services, professor at Carnegie Mellon University, and one of the world's top experts in the field of machine learning. Alex is also a prolific and widely cited author in the academic research community, having authored and contributed to nearly 500 papers.

Sébastien Bubeck – Senior Principal Research Manager, leading the Machine Learning Foundations group at Microsoft Research Redmond. He has won several awards at machine learning conferences for his work on online decision making, convex optimization, and adversarial robustness.

Gaël Varoquaux – Research director working on data science and health at Inria (the French national research institute for computer science). His research focuses on statistical-learning tools for data science and scientific inference. He co-founded scikit-learn, one of the reference machine-learning toolboxes, and helped build various central tools for data analysis in Python.

The educational sessions are available to all and can be accessed via the AI for Ukraine website. Upon registration, participants will be required to make a deposit of any amount ($1 minimum) to receive full access to all upcoming lectures and workshops. All collected funds will be donated to Ukraine's largest charity fund, comebackalive.in.ua.

In a series of online lectures and workshops, speakers will cover current AI topics. Some of the topics that will be covered in the hands-on workshops are compression of models for deep learning, AutoML, and evaluation of machine learning models and their diagnostic value, among much more. Participants will have the opportunity to ask questions, interact with the lecturers, and join discussions.

The first lecture will be held on the 17th of August, with Yoshua Bengio talking about "Bridging the gap between current deep learning and higher-level human cognitive abilities."

AI HOUSE is the AI community in Ukraine that brings together talents, experts, and investors and provides quality education in artificial intelligence and machine learning. We research the most exciting and prospective vectors of AI and ML and establish partnerships with key stakeholders locally and worldwide.

Roosh is a Ukrainian company that creates and invests in AI- and ML-focused projects. Its business ecosystem is composed of venture studio Pawa, venture firm Roosh Ventures, Ukrainian AI/ML community AI HOUSE, tech university SET University, and startups Reface and ZibraAI. Roosh aims to build Ukraine into the European center for artificial intelligence.

aiforukraine.aihouse.club

Point of contact: Maryna Chernysh, PR Manager at Roosh (mc@roosh.tech)

Follow this link:
AI for Ukraine is a new educational project from AI HOUSE to support the Ukrainian tech community - KDnuggets

Adept Builds a Powerful AI Teammate for Everyone with Oracle and NVIDIA – PR Newswire

Flexible natural-language computing interface built on Oracle Cloud Infrastructure (OCI) and NVIDIA technology enables people and computers to work together creatively

OCI's high performance and consumption-based pricing helps Adept scale

AUSTIN, Texas, Aug. 11, 2022 /PRNewswire/ -- Adept, a machine learning research and product lab, is using Oracle Cloud Infrastructure (OCI) and NVIDIA technology to develop a universal AI teammate capable of performing a range of tasks people execute on their computer or on the internet. Running thousands of NVIDIA GPUs on clusters of OCI bare metal compute instances and taking advantage of OCI's network bandwidth, Adept can train large-scale AI and ML models faster and more economically than before. As a result, Adept has been able to rapidly advance its general intelligence roadmap and develop its first product: a rich language interface for the tools knowledge workers use every day to be productive and creative.

A Highly Scalable, Performant, and Cost-Effective Platform for AI Innovation

With OCI as its preferred cloud platform, Adept obtains the scale and high performance necessary to run massive AI models without excessive compute costs. This has enabled Adept to develop a highly flexible and dynamic natural-language interface for all software that significantly streamlines the tasks knowledge workers execute daily. As a result, users can ask their computer to perform tedious, difficult, or abstract functions, as well as use the interface to test creative ideas.

To fully support Adept with the compute capacity it required, Oracle and NVIDIA customized their offerings to ensure Adept had access to the thousands of NVIDIA A100 Tensor Core GPUs needed to train its complex models. Adept, which recently closed a $65 million funding round, is training a giant AI model on OCI using NVIDIA's most powerful A100 GPUs connected with a best-of-breed RoCE network powered by NVIDIA NICs.

"AI continues to rapidly grow in scope but until now, AI models could only read and write text and images; they couldn't actually execute actions such as designing 3D parts or fetching and analyzing data," said David Luan, chief executive officer, Adept. "With the scalability and computing power of OCI and NVIDIA technology, we are training a neural network to use every software application, website, and API in existence building on the capabilities that software makers have already created. The universal AI teammate gives employees an 'extra set of hands' to create as fast as they think and reduce time spent on manual tasks. This in turn will help their organizations become more productive, and nimble in their decision making."

"Adept has exciting, bold ambitions for the future of AI, and we're honored that the company's team of AI and ML trailblazers recognized OCI's ability to support highly innovative and compute-heavy projects like Adept's universal AI assistant," said Karan Batta, vice president, product management, OCI. "With the combined computing power of OCI and NVIDIA, innovators like Adept are poised to unleash the full potential of AI as a technology that can transform how work is done and make every knowledge worker in the world much more productive."

"With brilliant minds from DeepMind, OpenAI, and other AI and ML pioneers, Adept is building the next generation of user interfaces for software applications," said Kari Briski, vice president, AI and high-performance computing (HPC) software development kits, NVIDIA."By workingwith Oracle to provide Adept with an industry-leading GPU engine and a wide range of AI and ML software tools, we're making innovative AI systems possible."

OCI Powers Next-Generation AI Models

OCI's bare metal NVIDIA GPU instances offer startups like Adept an HPC platform for applications that rely on machine learning, image processing, and massively parallel HPC jobs. In addition, HPC on OCI provides the elasticity and consumption-based costs of the cloud, offering on-demand potential to scale tens of thousands of cores simultaneously. As a result, with HPC on OCI, customers like Adept gain access to powerful processors, fast and dense local storage, high-throughput ultra-low-latency RDMA cluster networks, and the tools to automate and run jobs seamlessly.

Additional Resources

About Oracle

Oracle offers integrated suites of applications plus secure, autonomous infrastructure in the Oracle Cloud. For more information about Oracle (NYSE: ORCL), please visit us at http://www.oracle.com.

Trademarks

Oracle, Java, and MySQL are registered trademarks of Oracle Corporation.


SOURCE Oracle

Read the original post:
Adept Builds a Powerful AI Teammate for Everyone with Oracle and NVIDIA - PR Newswire

Auburn University research team receives grant to study coastal resilience along Gulf of Mexico – Office of Communications and Marketing

An Auburn University research team in the College of Forestry, Wildlife and Environment has been awarded a grant of $450,000 to develop a holistic platform that integrates multiscale observations, machine learning and systems modeling for coastal monitoring, assessment and prediction, or Coast-MAP, of ecosystem health, water resources and social resilience.

Led by Shufen Susan Pan, lead principal investigator, the team will consider multiple stressors including climate change, floods and droughts, hurricanes, land use, urbanization, nutrient uses, sewage and nutrient loads.

The Gulf of Mexico has been experiencing increased impacts of persistent climate stressors, including frequent floods, intense hurricanes and rising sea levels, and is likely to undergo further rapid climate change in the coming years.

"To address the combined effects of multiple stresses and to improve predictability, there is a critical need for methodological advancements that integrate multiple layers of geographic information and pursue a science-based approach to monitoring, understanding, predicting and responding to changes in coupled social-ecological systems along the Gulf of Mexico," said Pan.

As director of the college's GIS and Remote Sensing Laboratory, Pan has used emerging technologies in geospatial modeling, computer simulation, satellite observation and AI/machine learning to monitor, assess and predict, or MAP, multiscale dynamics of coupled social-ecological systems in the context of climate and global environmental change.

To achieve their goal, the team, comprising Pan and co-principal investigators Christopher Anderson, Hanqin Tian and the University of Alabama's Wanyun Shao, has proposed four objectives.

"First, we will evaluate the contemporary states of ecosystem health, water resources and social resilience through ground and satellite observations, machine learning and geospatial mapping," said Pan. "We then will assess and attribute the impacts over the past 30 years of multiple stresses on ecosystem health and water resources."

The team will predict potential impacts of climate and land use changes on ecosystem health and water resources in the next 30 years, as well as work to improve the understanding of the effectiveness of specific resilience-based assessments and decision-making tools with stakeholders.

"The methods and metrics used to measure coastal resilience will be context specific, validated with observed data and ground-truthed via stakeholder participation," said Pan.

In addition to numerous other methods, the team hopes to achieve its goals by holding stakeholder workshops to learn about stakeholders' risk perceptions of future climate conditions; assessing the impacts of multiple stresses on the ecosystem and water resources in the Alabama gulf; and collecting remote sensing observations from multiple sources to monitor the functions of different terrestrial ecosystems in Alabama's gulf.

"The work Pan and her team of researchers are conducting will help us to predict and respond to changes in coupled social-ecological systems along the Gulf of Mexico," said Janaki Alavalapati, dean of the College of Forestry, Wildlife and Environment. "This science-based approach will help predict potential impacts of climate and land use changes on ecosystem health and water resources."

Excerpt from:
Auburn University research team receives grant to study coastal resilience along Gulf of Mexico - Office of Communications and Marketing