Commonwealth deploys artificial intelligence-powered online tool to help Virginians self-screen for COVID-19 – Southwest Times

RICHMOND – Governor Ralph Northam on Friday announced that Virginians can now use COVIDCheck, a new online risk-assessment tool, to check their symptoms and connect with the appropriate health care resources, including COVID-19 testing.

"If you are feeling sick or think you may have been exposed to someone with COVID-19, it is important that you take action right away," said Governor Northam. "This online symptom-checking tool can help Virginians understand their personal risk for COVID-19 and get recommendations about what to do next from the safety of their homes. As we work to flatten the curve in our Commonwealth, telehealth services like this will be vital to relieving some of the strain on providers and health systems and making health care more convenient and accessible."

COVIDCheck is a free, web-based, artificial intelligence-powered telehealth tool that can help individuals displaying symptoms associated with COVID-19 self-assess their risk and determine the best next steps, such as self-isolation, seeing a doctor or seeking emergency care. This resource assists in identifying users who are at higher risk of COVID-19 and can help individuals find a nearby testing site. It is not to be used in place of emergency medical care.

COVIDCheck users who say they are experiencing symptoms commonly associated with COVID-19 are screened for occupational and medical risk factors and are assigned one of five care levels in accordance with the Virginia Department of Health's categories.

"Because COVID-19 can affect people differently and cause illness ranging from mild to severe, this personalized assessment tool can help people sort through symptoms and decide if they need to seek medical care," said State Health Commissioner M. Norman Oliver, MD, MA. "While COVIDCheck is not a substitute for medical advice, it can help people decide what steps to take next to protect themselves, their loved ones, and the community."

By answering a series of questions, an individual can receive a personalized, real-time self-assessment with information and recommendations on what to do next. The recommendations, based on the latest guidance from the Centers for Disease Control and Prevention, include advice on when to contact a medical professional or seek emergency care, next steps for care based on ZIP code, and a request for permission to follow up with the individual in three days to see how the person is doing.
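As a rough illustration of how a rules-based screening flow like this might map answers to care levels, consider the sketch below. The question names, thresholds, and care-level labels are all hypothetical; COVIDCheck's actual logic and the Virginia Department of Health's real categories are not public in this article.

```python
# Hypothetical sketch of a five-level symptom triage flow.
# Thresholds and category names are illustrative only and do not
# reflect COVIDCheck's actual logic or VDH's categories.

def triage(has_emergency_signs: bool, symptom_count: int,
           high_risk_occupation: bool, high_risk_medical: bool) -> str:
    """Map screening answers to one of five care levels."""
    if has_emergency_signs:
        return "seek emergency care"
    if symptom_count >= 3 and (high_risk_occupation or high_risk_medical):
        return "contact a medical professional for testing"
    if symptom_count >= 3:
        return "consider a telehealth visit"
    if symptom_count >= 1:
        return "self-isolate and monitor symptoms"
    return "practice routine precautions"

print(triage(False, 3, True, False))  # contact a medical professional for testing
```

The point of such a flow is that emergency signs short-circuit everything else, and risk factors only escalate the recommendation when symptoms are already present.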

"We're proud to partner with the Commonwealth of Virginia to mobilize our AI-powered health assistant to provide the most accurate and helpful information to all Virginians during this vital time," said Andrew Le, MD, CEO and co-founder of Buoy Health, which developed COVIDCheck. "And as the Commonwealth cautiously continues its phased approach to reopening, our primary goal at Buoy is to empower its residents to make the best decisions about their health so that they may re-enter society in a responsible way, for themselves, their loved ones, and the Virginia community at large."

Virginians can visit vdh.virginia.gov/coronavirus/covidcheck to learn more and use COVIDCheck.

Buoy is a digital health company developed out of the Harvard Innovation Labs by a team of doctors and data scientists, aimed at providing personalized clinical support through technology to individuals the moment they have a health care concern. Buoy helps remove the fear and complexity that often confront people as they enter the system by navigating and engaging patients intelligently. The all-in-one technology is able to deliver triage at scale with transparency, connecting individuals with the right care endpoints at the right time.



Durr brings artificial intelligence to the paint shop – Autocar Professional

The automotive industry is a key driver and user of artificial intelligence (AI), which is fast percolating to different work areas. Now German major Durr has developed Advanced Analytics, the first market-ready AI application for paint shops.

This intelligent solution, which combines the latest IT technology with mechanical engineering expertise, identifies sources of defects and determines optimal maintenance schedules. It also tracks down previously unknown correlations and uses that knowledge to adapt its algorithm to the plant on the principle of self-learning. Advanced Analytics is the latest module in the DXQanalyze product series, and early practical applications show that the software from Durr optimises plant availability and the surface quality of painted bodies.

Why do body parts exhibit the same defects with an unusually high frequency? What is the latest point at which a mixer in a robot can be replaced without causing a machine stoppage? "Precise answers are important for sustainable economic success, because every defect or unnecessary maintenance procedure that can be avoided saves money or improves product quality. Previously, there were very few precise conclusions that would enable the early detection of quality defects or failures, and those that existed were generally based on painstaking manual data evaluation or trial-and-error attempts. Artificial intelligence (AI) makes this much more accurate and automatic," explains Gerhard Alonso Garcia, Vice President MES & Control Systems at Dürr.

The new self-learning Advanced Analytics plant and process monitoring system joins the DXQanalyze digital product series, which already included the Data Acquisition module for acquiring production data, Visual Analytics for visualising it, and Streaming Analytics. The latter lets plant operators use a low-code platform to analyse, in close to real time, whether production is deviating from previously defined rules or target values.

Durr's AI application Advanced Analytics identifies sources of defects and determines optimal maintenance schedules.

AI application with its own memory

What makes Advanced Analytics special is that the module combines large quantities of data, including historical data, with machine learning. In a figurative sense, the self-learning AI application has a memory: it can use information from the past both to recognise complex correlations in large quantities of data and to predict a future event with a high degree of accuracy based on the current condition of a machine. There are multiple applications for this in paint shops, whether at component, process, or plant level.

Durr software reduces plant downtimes through predictive maintenance and repair information.

Reducing plant downtime with predictive maintenance

When it comes to components, Advanced Analytics reduces downtimes through predictive maintenance and repair information, for example by predicting the remaining service life of a mixer. Replacing a component too early increases spare-part costs and repair overhead unnecessarily, while leaving it too long can result in quality problems during coating and machine stoppages. Advanced Analytics starts by learning the wear indicators and the temporal pattern of the wear from high-frequency robot data. Since the data is continuously recorded and monitored, the machine learning module recognises ageing trends for each component individually, based on actual use, and calculates the optimum replacement time.
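In outline, an optimum-replacement-time calculation of this kind reduces to fitting a trend to a wear indicator and extrapolating to a failure threshold. The sketch below illustrates the idea with a simple linear fit; the sensor values, the threshold, and the linear-trend assumption are all hypothetical, since Dürr's actual models are not described in detail here.

```python
# Illustrative sketch: extrapolate a wear indicator to estimate when a
# component (e.g. a mixer) will reach its replacement threshold.
# Values and the linear-trend assumption are hypothetical.

def remaining_cycles(wear_history: list, threshold: float) -> float:
    """Fit a straight line to the wear readings (one per cycle) and
    project when it crosses the threshold, measured in cycles from
    the most recent reading."""
    n = len(wear_history)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(wear_history) / n
    slope = sum((x - x_mean) * (y - y_mean)
                for x, y in zip(xs, wear_history)) \
        / sum((x - x_mean) ** 2 for x in xs)
    if slope <= 0:
        return float("inf")  # no measurable wear trend
    return (threshold - wear_history[-1]) / slope

print(round(remaining_cycles([0.10, 0.20, 0.30, 0.40], 1.0), 1))  # 6.0
```

A production system would of course learn a far richer, component-specific ageing model from high-frequency data, but the output is the same kind of quantity: an estimated time-to-threshold that schedules the replacement neither too early nor too late.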

Machine learning simulates continuous temperature curves

Advanced Analytics improves quality at process level by identifying anomalies, for example by simulating the heat-up curve in the oven. Until now, manufacturers only had data determined by sensors during measurement runs. However, the heat-up curves that are of vital importance for the surface quality of the car bodies vary, since the oven ages during the intervals between measurement runs. This wear causes fluctuating ambient conditions, for example in the strength of the air flow.

"These days, thousands of bodies are produced without us knowing the temperatures to which the individual bodies were heated. Using machine learning, our Advanced Analytics module simulates how the temperature varies under different conditions. This gives our customers permanent proof of quality for each individual body and lets them identify anomalies," says Gerhard Alonso Garcia.

Higher first-run rate increases equipment effectiveness

At plant level, the DXQplant.analytics software is used with the Advanced Analytics module to increase overall equipment effectiveness. The artificial intelligence tracks systematic defects, such as recurring quality defects in particular model types, specific colours, or individual body parts. This permits conclusions about which step in the production process is responsible for the deviations. Such defect-and-cause correlations make it possible to increase the first-run rate by allowing intervention at a very early stage.

Plant and digital expertise expertly combined

Developing AI-capable data models is a very complex process. Machine learning does not work by feeding unspecified amounts of data into a 'smart' algorithm, which then spits out an intelligent result. Instead, relevant (sensor) signals must be collected, carefully selected, and supplemented with structured additional information from production. With Advanced Analytics, Durr has developed software that supports different use scenarios, provides a runtime environment for machine learning models, and initiates model training. "The challenge was that there was no generally valid machine learning model and no suitable runtime environment we could have used. To be able to use AI at plant level, we combined our knowledge of mechanical and plant engineering with the knowledge of our experts from the Digital Factory. This resulted in the first AI solution for paint shops," explains Gerhard Alonso Garcia.

With artificial intelligence, systematic errors in the painting process can be detected, and overall equipment effectiveness (OEE) can thus be increased by allowing intervention at a very early stage.

Interdisciplinary knowledge pays dividends

Advanced Analytics was developed by an interdisciplinary team of data scientists, computer scientists, and process experts. Durr also entered into cooperation partnerships with several leading automotive manufacturers, which allowed the developers to access real-life production data and beta-site environments in production for different application cases. First the algorithms were trained in the lab using a large number of test cases; then they continued learning on site in real-life operation, autonomously adapting to environmental and usage conditions. The beta phase was recently completed successfully and showed the potential of AI.

Durr's India connect

The Durr Group has had a direct representation in India since 1997, and Schenck RoTec since 1986. The Durr Group currently employs around 590 staff there, offering its entire portfolio including sales and service. Durr India, based in Chennai, offers painting, application, final assembly and energy efficiency technology products as well as air pollution control, noise abatement systems and coating systems for battery electrodes.

Since April 2015, Durr India has also been offering on-the-job as well as classroom training in paint and application systems to customers at its training center. Schenck RoTec India, in Noida, is responsible for balancing technology as well as for testing and filling technology. The HOMAG Group produces machinery and equipment for the woodworking industry; it has a presence in Bangalore, where it operates a production site and the sales and service company HOMAG India.

The Dürr Group is one of the world's leading mechanical and plant engineering firms, with extensive expertise in automation and digitalization/Industry 4.0. Its products, systems and services enable highly efficient manufacturing processes in different industries. The Dürr Group supplies sectors such as the automotive, mechanical engineering, chemical, pharmaceutical and woodworking industries. It generated sales of 3.92 billion euros in 2019. The company has around 16,500 employees and 112 business locations in 34 countries. The Group operates in the market with the brands Dürr, Schenck and HOMAG and with five divisions:

Paint and Final Assembly Systems: paint shops as well as final assembly, testing and filling technology for the automotive industry

Application Technology: robot technologies for the automated application of paint, sealants and adhesives

Clean Technology Systems: air pollution control, noise abatement systems and coating systems for battery electrodes

Measuring and Process Systems: balancing equipment and diagnostic technology

Woodworking Machinery and Systems: machinery and equipment for the woodworking industry




Telefonica SA: Telefónica offers start-ups its IoT, blockchain and artificial intelligence technology to help them boost their business –…

Madrid, 27th May 2020. – Telefónica presents the Telefónica Activation Programme, an initiative aimed at start-ups and SMEs in Germany, Spain and the UK seeking to enhance their technological solutions and accelerate their business development through the IoT, blockchain and big data/AI (artificial intelligence) technologies grouped under Telefónica Tech. To do so, it will give them the opportunity to get to know and take advantage of the company's platforms in each of these technologies, completely free of charge, for a period of six months. Start-ups from these three countries interested in participating in this initiative can submit their applications until 22 June through the website http://www.activationprogramme.telefonica.com.

In addition, the start-ups will have the chance to carry out a pilot with Telefónica and its corporate customer portfolio, as well as to be assessed by Wayra as a potential investment opportunity.

'Collaboration is more important than ever, which is why at Connected Open Innovation we want to help start-ups scale by giving them access to our technology platforms through the use of APIs, which are free, agile and simple,' said Irene Gómez, director of Connected Open Innovation at Telefónica.

IoT, blockchain and AI: three technologies for a technological present

Companies accepted in the IoT category will benefit from six months of free IoT connectivity, with access to Kite, an IoT connectivity platform developed by Telefónica that will allow the start-ups to manage their solutions in an integrated manner. Moreover, by requesting LPWA connectivity, they will also receive an IoT module and access to The Thinx laboratories in Madrid and Barcelona, where they will be able to build prototypes and even run tests in a real environment, saving time and optimising their investment.

With the blockchain welcome pack, the start-ups will enjoy unlimited access for the duration of the programme to the TrustOS modules, a platform that makes it easy for companies to incorporate into their value proposition the immutability and transparency inherent to the technology. Thanks to this hybrid solution developed by Telefónica, which combines public and private networks, companies can benefit from the transparency and confidence of public networks while retaining the performance and scalability necessary for business operations.

Finally, as far as big data/AI technology is concerned, participants will have access to the LUCA Suite, an in-house platform that automates data processing in minutes and integrates machine learning capabilities in an easy and intuitive way. This makes it possible to generate predictions that increase business opportunities without prior knowledge of machine learning.

Throughout the experience, a team of Telefnica experts provides personalised support adapted to the needs of each start-up, as well as additional training and networking services to get the most out of the programme.


Facebook's Artificial Intelligence Experts Claim They Have Developed A Better Way To Create 3D Images – Digital Information World

According to a group of artificial intelligence (AI) researchers from the National Tsing Hua University in Taiwan, Facebook, and Virginia Tech, they have developed a better way to create 3D images. The researchers state that their method is superior to Facebook's current 3D Photos feature and other already existing methods to generate 3D photos.

Facebook rolled out 3D Photos back in 2018 for dual-camera phones such as the iPhone X; the feature uses the TrueDepth camera to estimate depth in images. In the latest study, the researchers use several images captured with an iPhone to show how their approach eliminates the discontinuity and blur that other existing 3D methods produce.

The new method may make for better Facebook 3D Photos, and if it translates to other environments, it may also enable more convincing immersion in 3D-graphics environments. It can create 3D images from RGB-D imagery and also works with regular 2D images by using a pre-trained depth estimation system. The researchers also claim superior performance over LLFF and Nvidia's XView, assessing several 3D models using imagery from the RealEstate10K data set. Over the past few months, Nvidia, Microsoft, and Facebook have all introduced technologies to create 3D objects from 2D photos. The new method, however, uses inpainting, a process in which artificial intelligence predicts missing pixels in photos, to generate 3D images.

The new 3D image approach was published in a paper on the preprint server arXiv. According to the authors, their approach is different in that it relies heavily on inpainting for both depth and color predictions. Moreover, the new method does not require pre-determining a specific number of layers for generating 3D photos.



Coronavirus tests the value of artificial intelligence in medicine – FierceBiotech

Albert Hsiao, M.D., and his colleagues at the University of California, San Diego (UCSD) health system had been working for 18 months on an artificial intelligence program designed to help doctors identify pneumonia on a chest X-ray. When the coronavirus hit the U.S., they decided to see what it could do.

The researchers quickly deployed the application, which dots X-ray images with spots of color where there may be lung damage or other signs of pneumonia. "It has now been applied to more than 6,000 chest X-rays, and it's providing some value in diagnosis," said Hsiao, director of UCSD's augmented imaging and artificial intelligence data analytics laboratory.

His team is one of several around the country that have pushed AI programs developed in calmer times into the COVID-19 crisis to perform tasks like deciding which patients face the greatest risk of complications and which can be safely channeled into lower-intensity care.


The machine-learning programs scroll through millions of pieces of data to detect patterns that may be hard for clinicians to discern. Yet few of the algorithms have been rigorously tested against standard procedures. So while they often appear helpful, rolling out the programs in the midst of a pandemic could be confusing to doctors or even dangerous for patients, some AI experts warn.

"AI is being used for things that are questionable right now," said Eric Topol, M.D., director of the Scripps Research Translational Institute and author of several books on health IT.

Topol singled out a system created by Epic, a major vendor of electronic health record software, that predicts which coronavirus patients may become critically ill. Using the tool before it has been validated is "pandemic exceptionalism," he said.

Epic said the company's model had been validated with data from more than 16,000 hospitalized COVID-19 patients in 21 healthcare organizations. No research on the tool has been published but, in any case, it was developed to help clinicians make treatment decisions and is not a substitute for their judgment, said James Hickman, a software developer on Epic's cognitive computing team.

Others see the COVID-19 crisis as an opportunity to learn about the value of AI tools.

"My intuition is it's a little bit of the good, bad and ugly," said Eric Perakslis, Ph.D., a data science fellow at Duke University and former chief information officer at the FDA. "Research in this setting is important."

Nearly $2 billion poured into companies touting advancements in healthcare AI in 2019. Investments in the first quarter of 2020 totaled $635 million, up from $155 million in the first quarter of 2019, according to digital health technology funder Rock Health.

At least three healthcare AI technology companies have made funding deals specific to the COVID-19 crisis, including Vida Diagnostics, an AI-powered lung-imaging analysis company, according to Rock Health.

Overall, AI's implementation in everyday clinical care is less common than the hype over the technology would suggest. Yet the coronavirus crisis has inspired some hospital systems to accelerate promising applications.

UCSD sped up its AI imaging project, rolling it out in only two weeks.

Hsiao's project, with research funding from Amazon Web Services, the UC system and the National Science Foundation (NSF), runs every chest X-ray taken at its hospital through an AI algorithm. While no data on the implementation have been published yet, doctors report that the tool influences their clinical decision-making about a third of the time, said Christopher Longhurst, M.D., UCSD Health's chief information officer.

"The results to date are very encouraging, and we're not seeing any unintended consequences," he said. "Anecdotally, we're feeling like it's helpful, not hurtful."

AI has advanced further in imaging than in other areas of clinical medicine because radiological images contain huge amounts of data for algorithms to process, and more data makes the programs more effective, said Longhurst.

But while AI specialists have tried to get AI to do things like predict sepsis and acute respiratory distress (researchers at Johns Hopkins University recently won an NSF grant to use it to predict heart damage in COVID-19 patients), it has been easier to plug it into less risky areas such as hospital logistics.

In New York City, two major hospital systems are using AI-enabled algorithms to help them decide when and how patients should move into another phase of care or be sent home.

At Mount Sinai Health System, an artificial intelligence algorithm pinpoints which patients might be ready to be discharged from the hospital within 72 hours, said Robbie Freeman, vice president of clinical innovation at Mount Sinai.

Freeman described the AI's suggestion as "a conversation starter," meant to help clinicians working on patient cases decide what to do. AI isn't making the decisions.

NYU Langone Health has developed a similar AI model. It predicts whether a COVID-19 patient entering the hospital will suffer adverse events within the next four days, said Yindalon Aphinyanaphongs, M.D., Ph.D., who leads NYU Langone's predictive analytics team.

The model will be run in a four- to six-week trial with patients randomized into two groups: one whose doctors will receive the alerts, and another whose doctors will not. The algorithm should help doctors generate a list of things that may predict whether patients are at risk for complications after they're admitted to the hospital, Aphinyanaphongs said.

Some health systems are leery of rolling out a technology that requires clinical validation in the middle of a pandemic. Others say they didn't need AI to deal with the coronavirus.

Stanford Health Care is not using AI to manage hospitalized patients with COVID-19, said Ron Li, M.D., the center's medical informatics director for AI clinical integration. The San Francisco Bay Area hasn't seen the expected surge of patients who would have provided the mass of data needed to make sure AI works on a population, he said.

Outside the hospital, AI-enabled risk factor modeling is being used to help health systems track patients who aren't infected with the coronavirus but might be susceptible to complications if they contract COVID-19.

At Scripps Health in San Diego, clinicians are stratifying patients to assess their risk of getting COVID-19 and experiencing severe symptoms using a risk-scoring model that considers factors like age, chronic conditions and recent hospital visits. When a patient scores seven or higher, a triage nurse reaches out with information about the coronavirus and may schedule an appointment.
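The article gives only the threshold (seven or higher) and the broad factor categories; the weights in the sketch below are invented for illustration. Still, the mechanism of an additive risk score of this kind is simple:

```python
# Illustrative additive risk score in the style described above.
# The individual weights are hypothetical; only the
# score-of-seven-triggers-outreach mechanism reflects the article.

def covid_risk_score(age: int, chronic_conditions: int,
                     recent_hospital_visits: int) -> int:
    score = 0
    if age >= 65:
        score += 4
    elif age >= 50:
        score += 2
    score += 2 * min(chronic_conditions, 3)  # cap the contribution
    score += min(recent_hospital_visits, 2)
    return score

def needs_outreach(score: int) -> bool:
    """Per the article, a score of seven or higher triggers a call
    from a triage nurse."""
    return score >= 7

s = covid_risk_score(age=70, chronic_conditions=2, recent_hospital_visits=1)
print(s, needs_outreach(s))  # 9 True
```

Such transparent, point-based scores are a deliberately simple alternative to black-box models: clinicians can see exactly why a patient crossed the threshold.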

Though emergencies provide unique opportunities to try out advanced tools, it's essential for health systems to ensure doctors are comfortable with them and to use the tools cautiously, with extensive testing and validation, Topol said.

"When people are in the heat of battle and overstretched, it would be great to have an algorithm to support them," he said. "We just have to make sure the algorithm and the AI tool isn't misleading, because lives are at stake here."

Kaiser Health News (KHN) is a national health policy news service. It is an editorially independent program of the Henry J. Kaiser Family Foundation, which is not affiliated with Kaiser Permanente.

This KHN story first published on California Healthline, a service of the California Health Care Foundation.


Playing God: Why artificial intelligence is hopelessly biased – and always will be – TechRadar India

Much has been said about the potential of artificial intelligence (AI) to transform many aspects of business and society for the better. In the opposite corner, science fiction has the doomsday narrative covered handily.

To ensure AI products function as their developers intend - and to avoid a HAL9000 or Skynet-style scenario - the common narrative suggests that data used as part of the machine learning (ML) process must be carefully curated, to minimise the chances the product inherits harmful attributes.

According to Richard Tomsett, AI Researcher at IBM Research Europe, "our AI systems are only as good as the data we put into them. As AI becomes increasingly ubiquitous in all aspects of our lives, ensuring we're developing and training these systems with data that is fair, interpretable and unbiased is critical."

Left unchecked, the influence of undetected bias could also expand rapidly as appetite for AI products accelerates, especially if the means of auditing underlying data sets remain inconsistent and unregulated.

However, while the issues that could arise from biased AI decision making - such as prejudicial recruitment or unjust incarceration - are clear, the problem itself is far from black and white.

Questions surrounding AI bias are impossible to disentangle from complex and wide-ranging issues such as the right to data privacy, gender and race politics, historical tradition and human nature - all of which must be unraveled and brought into consideration.

Meanwhile, questions over who is responsible for establishing the definition of bias and who is tasked with policing that standard (and then policing the police) serve to further muddy the waters.

The scale and complexity of the problem more than justifies doubts over the viability of the quest to cleanse AI of partiality, however noble it may be.

Algorithmic bias can be described as any instance in which discriminatory decisions are reached by an AI model that aspires to impartiality. Its causes lie primarily in prejudices (however minor) found within the vast data sets used to train machine learning (ML) models, which act as the fuel for decision making.

Biases underpinning AI decision making could have real-life consequences for both businesses and individuals, ranging from the trivial to the hugely significant.

For example, a model responsible for predicting demand for a particular product, but fed data relating to only a single demographic, could plausibly generate decisions that lead to the loss of vast sums in potential revenue.

Equally, from a human perspective, a program tasked with assessing requests for parole or generating quotes for life insurance plans could cause significant damage if skewed by an inherited prejudice against a certain minority group.

According to Jack Vernon, Senior Research Analyst at IDC, the discovery of bias within an AI product can, in some circumstances, render it completely unfit for purpose.

"Issues arise when algorithms derive biases that are problematic or unintentional. There are two usual sources of unwanted biases: the data and the algorithm itself," he told TechRadar Pro via email.

"Data issues are self-explanatory enough, in that if features of a data set used to train an algorithm have problematic underlying trends, there's a strong chance the algorithm will pick up and reinforce these trends."

"Algorithms can also develop their own unwanted biases by mistake. Famously, an algorithm for identifying polar bears and brown bears had to be discarded after it was discovered that it based its classification on whether there was snow on the ground, and didn't focus on the bears' features at all."

Vernon's example illustrates the eccentric ways in which an algorithm can diverge from its intended purpose, and it's this semi-autonomy that can pose a threat if a problem goes undiagnosed.

The greatest issue with algorithmic bias is its tendency to compound already entrenched disadvantages. In other words, bias in an AI product is unlikely to result in a white-collar banker having their credit card application rejected erroneously, but may play a role in a member of another demographic (which has historically had a greater proportion of applications rejected) suffering the same indignity.

The consensus among the experts we consulted for this piece is that, in order to create the least prejudiced AI possible, a team made up of the most diverse group of individuals should take part in its creation, using data from the deepest and most varied range of sources.

The technology sector, however, has a long-standing and well-documented issue with diversity where both gender and race are concerned.

In the UK, only 22% of directors at technology firms are women - a proportion that has remained practically unchanged for the last two decades. Meanwhile, only 19% of the overall technology workforce are female, far from the 49% that would accurately represent the ratio of female to male workers in the UK.

Among big tech, meanwhile, the representation of minority groups has also seen little progress. Google and Microsoft are industry behemoths in the context of AI development, but the percentage of black and Latin American employees at both firms remains minuscule.

According to figures from 2019, only 3% of Google's 100,000+ employees were Latin American and 2% were black - both figures up by just one percentage point since 2014. Microsoft's record is only marginally better, with 5% of its workforce made up of Latin Americans and 3% black employees in 2018.

The adoption of AI in enterprise, on the other hand, skyrocketed during a similar period according to analyst firm Gartner, increasing by 270% between 2015-2019. The clamour for AI products, then, could be said to be far greater than the commitment to ensuring their quality.

Patrick Smith, CTO at data storage firm Pure Storage, believes businesses owe it not just to those that could be affected by bias to address the diversity issue, but also to themselves.

"Organisations across the board are at risk of holding themselves back from innovation if they only recruit in their own image. Building a diversified recruitment strategy, and thus a diversified employee base, is essential for AI because it allows organisations to have a greater chance of identifying blind spots that you wouldn't be able to see if you had a homogenous workforce," he said.

"So diversity and the health of an organisation relates specifically to diversity within AI, as it allows them to address unconscious biases that otherwise could go unnoticed."

Further, questions over precisely how diversity is measured add another layer of complexity. Should a diverse data set afford each race and gender equal representation, or should representation of minorities in a global data set reflect the proportions of each found in the world population?

In other words, should data sets feeding globally applicable models contain information relating to an equal number of Africans, Asians, Americans and Europeans, or should they represent greater numbers of Asians than any other group?

The same question can be raised with gender, because roughly 105 boys are born worldwide for every 100 girls.
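The two sampling policies can be made concrete with a small sketch. The population shares below are rough illustrative figures, and both helper functions are hypothetical:

```python
# Approximate shares of world population by region (illustrative only).
population_share = {"Asia": 0.60, "Africa": 0.17, "Americas": 0.13, "Europe": 0.10}

def equal_representation(groups, n):
    """Give every group the same number of records in an n-record data set."""
    return {g: n // len(groups) for g in groups}

def proportional_representation(shares, n):
    """Size each group's slice according to its share of the population."""
    return {g: round(s * n) for g, s in shares.items()}

# The two policies produce very different data sets.
print(equal_representation(population_share, 1000))
print(proportional_representation(population_share, 1000))
```

Neither policy is obviously "right": equal representation guards minority groups against being statistically drowned out, while proportional representation better matches the population a global model will actually serve.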

The challenge facing those whose goal it is to develop AI that is sufficiently impartial (or perhaps proportionally impartial) is the challenge facing societies across the globe. How can we ensure all parties are not only represented, but heard, when historical precedent is working all the while to undermine the endeavour?

The importance of feeding the right data into ML systems is clear, correlating directly with AI's ability to generate useful insights. But identifying the right versus wrong data (or good versus bad) is far from simple.

As Tomsett explains, data can be biased in a variety of ways: "the data collection process could result in badly sampled, unrepresentative data; labels applied to the data through past decisions or human labellers may be biased; or inherent structural biases that we do not want to propagate may be present in the data."

"Many AI systems will continue to be trained using bad data, making this an ongoing problem that can result in groups being put at a systemic disadvantage," he added.

It would be logical to assume that removing data types that could possibly inform prejudices - such as age, ethnicity or sexual orientation - might go some way to solving the problem. However, auxiliary or adjacent information held within a data set can also serve to skew output.

An individual's postcode, for example, might reveal much about their characteristics or identity. This auxiliary data could be used by the AI product as a proxy for the primary data, resulting in the same level of discrimination.
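A minimal illustration of the proxy effect, using a tiny and entirely hypothetical data set: dropping the protected attribute changes nothing, because postcode encodes it perfectly.

```python
# Hypothetical records in which ethnicity and postcode are perfectly
# correlated - a deliberately extreme version of a real-world pattern.
records = [
    {"ethnicity": "A", "postcode": "N1", "approved": True},
    {"ethnicity": "A", "postcode": "N1", "approved": True},
    {"ethnicity": "B", "postcode": "S9", "approved": False},
    {"ethnicity": "B", "postcode": "S9", "approved": False},
]

def approval_rate_by(records, feature):
    """Approval rate grouped by a single feature."""
    groups = {}
    for r in records:
        groups.setdefault(r[feature], []).append(r["approved"])
    return {k: sum(v) / len(v) for k, v in groups.items()}

# Removing 'ethnicity' from the data set would not help: grouping by
# postcode recovers exactly the same discriminatory split.
assert approval_rate_by(records, "ethnicity") == {"A": 1.0, "B": 0.0}
assert approval_rate_by(records, "postcode") == {"N1": 1.0, "S9": 0.0}
```

In practice the correlation is statistical rather than perfect, but the lesson holds: deleting a column does not delete the information it carried.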

Further complicating matters, there are instances in which bias in an AI product is actively desirable. For example, if using AI to recruit for a role that demands a certain level of physical strength - such as firefighter - it is sensible to discriminate in favor of male applicants, because biology dictates the average male is physically stronger than the average female. In this instance, the data set feeding the AI product is indisputably biased, but appropriately so.

This level of depth and complexity makes auditing for bias, identifying its source and grading data sets a monumentally challenging task.

To tackle the issue of bad data, researchers have toyed with the idea of bias bounties, similar in style to bug bounties used by cybersecurity vendors to weed out imperfections in their services. However, this model operates on the assumption that an individual is equipped to recognize bias against demographics other than their own - an assumption worthy of a whole separate debate.

Another compromise could be found in the notion of Explainable AI (XAI), which dictates that developers of AI algorithms must be able to explain in granular detail the process that leads to any given decision generated by their AI model.

"Explainable AI is fast becoming one of the most important topics in the AI space, and part of its focus is on auditing data before it's used to train models," explained Vernon.

"AI explainability tools can help us understand how algorithms have come to a particular decision, which should give us an indication of whether the biases the algorithm is following are problematic or not."
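One very simple form of such an audit can be sketched with a linear scoring model, where each feature's contribution to a decision can be read off directly. The weights, feature names and applicant below are purely hypothetical, not taken from any real tool:

```python
# A toy linear credit-scoring model: the score is a weighted sum of
# features, so the model is explainable by construction.
weights = {"income": 0.5, "postcode_N1": 1.2, "years_employed": 0.3}

def score(applicant):
    """Overall decision score for an applicant."""
    return sum(weights[f] * v for f, v in applicant.items())

def explain(applicant):
    """Per-feature contribution to the score, largest first."""
    contributions = {f: weights[f] * v for f, v in applicant.items()}
    return sorted(contributions.items(), key=lambda kv: -kv[1])

applicant = {"income": 1.0, "postcode_N1": 1.0, "years_employed": 2.0}
# The audit shows 'postcode_N1' dominates the decision - a red flag for
# possible proxy discrimination that would merit investigation.
print(explain(applicant))
```

Deep models need far heavier machinery (surrogate models, attribution methods) to produce the same kind of breakdown, but the goal of XAI is the same: surface which inputs actually drove a decision.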

Transparency, it seems, could be the first step on the road to addressing the issue of unwanted bias. If we're unable to prevent AI from discriminating, the hope is we can at least recognise when discrimination has taken place.

The perpetuation of existing algorithmic bias is another problem that bears thinking about. How many tools currently in circulation are fueled by significant but undetected bias? And how many of these programs might be used as the foundation for future projects?

When developing a piece of software, it's common practice for developers to draw from a library of existing code, which saves time and allows them to embed pre-prepared functionalities into their applications.

The problem, in the context of AI bias, is that the practice could serve to extend the influence of bias hiding away in the nooks and crannies of vast code libraries and data sets.

Hypothetically, if a particularly popular piece of open source code were to exhibit bias against a particular demographic, it's possible the same discriminatory inclination could embed itself at the heart of many other products, unbeknownst to their developers.

According to Kacper Bazyliński, AI Team Leader at software development firm Neoteric, it is relatively common for code to be reused across multiple development projects, depending on their nature and scope.

"If two AI projects are similar, they often share some common steps, at least in data pre- and post-processing. Then it's pretty common to transplant code from one project to another to speed up the development process," he said.

"Sharing highly biased open source data sets for ML training makes it possible that the bias finds its way into future products. It's a task for the AI development teams to prevent this from happening."

Further, Bazyliński notes that it's not uncommon for developers to have limited visibility into the kinds of data going into their products.

"In some projects, developers have full visibility over the data set, but it's quite often that some data has to be anonymized, or some features stored in the data are not described because of confidentiality," he noted.

This isn't to say code libraries are inherently bad - they are no doubt a boon for the world's developers - but their potential to contribute to the perpetuation of bias is clear.

"Against this backdrop, it would be a serious mistake to... conclude that technology itself is neutral," reads a blog post from Google-owned AI firm DeepMind.

"Even when bias does not originate with software developers, it is still repackaged and amplified by the creation of new products, leading to new opportunities for harm."

Bias is an inherently loaded term, carrying with it a host of negative baggage. But it is possible bias is more fundamental to the way we operate than we might like to think - inextricable from the human character and therefore anything we produce.

According to Alexander Linder, VP Analyst at Gartner, the pursuit of impartial AI is misguided and impractical, by virtue of this very human paradox.

"Bias cannot ever be totally removed. Even the attempt to remove bias creates bias of its own - it's a myth to even try to achieve a bias-free world," he told TechRadar Pro.

Tomsett, meanwhile, strikes a slightly more optimistic note, but also gestures towards the futility of an aspiration to total impartiality.

"Because there are different kinds of bias and it is impossible to minimize all kinds simultaneously, this will always be a trade-off. The best approach will have to be decided on a case-by-case basis, by carefully considering the potential harms from using the algorithm to make decisions," he explained.

Machine learning, by nature, is a form of statistical discrimination: we train machine learning models to make decisions (to discriminate between options) based on past data.

The attempt to rid decision making of bias, then, runs at odds with the very mechanism humans use to make decisions in the first place. Without a measure of bias, AI cannot be mobilised to work for us.

It would be patently absurd to suggest AI bias is not a problem worth paying attention to, given the obvious ramifications. But, on the other hand, the notion of a perfectly balanced data set, capable of rinsing all discrimination from algorithmic decision-making, seems little more than an abstract ideal.

Life, ultimately, is too messy. Perfectly egalitarian AI is unachievable, not because it's a problem that requires too much effort to solve, but because the very definition of the problem is in constant flux.

The conception of bias varies in line with changes to societal, individual and cultural preference - and it is impossible to develop AI systems within a vacuum, at a remove from these complexities.

To be able to recognize biased decision making and mitigate its damaging effects is critical, but to eliminate bias is unnatural - and impossible.

More here:
Playing God: Why artificial intelligence is hopelessly biased - and always will be - TechRadar India

Harness artificial intelligence and take control your health – Newswise

Newswise: Sedentary behaviours, poor sleep and questionable food choices are major contributors to chronic disease, including diabetes, anxiety, heart disease and many cancers. But what if we could prevent these through the power of smart technologies?

In a new University of South Australia research project announced today and funded by $1,118,593 from the Medical Research Future Fund (MRFF), researchers will help Australians tackle chronic disease through a range of digital technologies to improve their health.

Using apps, wearables, social media and artificial intelligence, the research will show whether technology can modify and improve peoples behaviours to create meaningful and lasting lifestyle changes that can ward off chronic disease.

Chronic disease is the leading cause of illness, disability and death in Australia, with about half of Australians having at least one of eight major conditions: CVD, cancer, arthritis, asthma, back pain, diabetes, pulmonary disease and mental health conditions.

Nearly 40 per cent of chronic disease is preventable through modifiable lifestyle and diet factors.

The research will assess the ability of digital technologies to improve health and wellbeing across a range of populations, health behaviours and outcomes, with a specific focus on how they can negate poor health outcomes associated with high-risk events such as school holidays or Christmas (when people are more likely to indulge and less likely to exercise); how technology could better track activity among hospital inpatients, outpatients and home-patients (to help recovery from illness and surgery, leading to improved patient outcomes); and how new artificial intelligence-driven virtual health assistants can boost health among high-risk groups, such as older adults.

Lead researcher, UniSA's Associate Professor Carol Maher, says the research aims to deliver accessible and affordable health solutions for all Australians.

"Poor lifestyle patterns (a lack of exercise, excess sedentary behaviour, a lack of sleep and poor diets) are leading modifiable causes of death and disease in Australia," Assoc Prof Maher says.

"Technology has a huge amount to offer in terms of improving lifestyle and health, especially in terms of personalisation and accessibility, but it has to be done thoroughly and it has to be done well."

"Research plays an important role in helping understand the products that are most effective, which will see us working with existing commercial technologies and applying and testing them in a new way, as well as developing bespoke software for specific, unmet needs."

"The great advantage of technology-delivered programs is that with careful design, once they are developed and evaluated, they can be delivered very affordably and on a massive scale."

"If we are to make any change in the prevalence of chronic disease in Australia, we must plan to do it en masse."

The research aims to bridge the gap between academic rigour and commercial offerings, to ensure that every Australian has access to the health supports they need.

"One of the challenges we face is that many people who could benefit from digital health technologies are intimidated by them: for example, older adults who are not that comfortable with technology, or health professionals who are just used to doing things a certain way," Assoc Prof Maher says.

"Change can be hard, but when we're making leaps in the right direction to improve the lifestyle and health of the Australian community, these changes are worth considering."

See the article here:
Harness artificial intelligence and take control your health - Newswise

Pentagon AI chief says the tech could help spot future pandemics earlier – Roll Call

The command is responsible for defending the continental United States and territories and provides military aid to non-military agencies such as the Federal Emergency Management Agency.

The command has deployed Army and Navy medical personnel to New York, sent Navy hospital ships to New York City and Los Angeles, and helped set up field hospitals in areas where local health care facilities were overwhelmed with patients.

Northern Command has said it has been working with several top U.S. technology companies, including Apple, Microsoft, mapping software maker Esri, and Monkton, a company that helps developers build secure apps for classified purposes, to help FEMA and other agencies during the pandemic.

Although the Pentagon has faced skepticism from some tech companies in its pursuit of artificial intelligence technologies, and an outright refusal by Google to continue collaborating on a Pentagon project to identify and label objects in drone videos, the pandemic appears to have changed the calculation, Shanahan said.

Since the Pentagon launched Project Salus to build predictive models of shortages during the pandemic, there has been an outpouring of support from private companies, as well as major universities, Shanahan said. The top tech companies in the country and their teams of artificial intelligence and machine learning specialists have shown a strong desire to work with the Defense Department, he said.

See the rest here:
Pentagon AI chief says the tech could help spot future pandemics earlier - Roll Call

BMW is using Artificial Intelligence to paint its cars for a perfect result – Hindustan Times

Artificial intelligence can bring even greater precision to controlling highly sensitive systems in automotive production, as a pilot project in the paint shop of the BMW Group's Munich plant has demonstrated.

Despite state-of-the-art filtration technology, the concentration of the finest dust particles in paint lines varies with the ambient air drawn in. If the dust content exceeds the threshold, the still-wet paint can trap particles, visually impairing the painted surface.

Artificial Intelligence (AI) specialists from central planning and the Munich plant have now found a way to avoid this situation altogether. Every freshly painted car body must undergo an automatic surface inspection in the paint shop. Data gathered in these inspections are used to develop a comprehensive database for dust particle analysis. The specialists are now applying AI algorithms to compare live data from dust particle sensors in the paint booths and dryers with this database.

"Data-based solutions help us secure and further extend our stringent quality requirements to the benefit of our customers. Smart data analytics and AI serve as key decision-making aids for our team when it comes to developing process improvements. We have filed for several patents relating to this innovative dust particle analysis technology," said Albin Dirndorfer, Senior Vice President Painted Body, Finish and Surface at the BMW Group.

(Also read: Ford is working on a car paint that can protect your vehicle from bird poop)

Two specific examples show the benefits of this new AI solution: Where dust levels are set to rise owing to the season or during prolonged dry periods, the algorithm can detect this trend in good time and is able to determine, for example, an earlier time for filter replacement.
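The kind of trend detection described here can be sketched with a simple moving average over dust-particle readings. All values, thresholds and function names below are hypothetical illustrations, not BMW's actual system:

```python
# Hypothetical early-warning check: flag the paint-line filter for
# earlier replacement once recent dust readings trend toward the limit.
def moving_average(readings, window):
    """Mean of the most recent `window` sensor readings."""
    return sum(readings[-window:]) / window

def filter_due_early(readings, threshold, window=5, margin=0.8):
    """True once the recent average passes 80% of the dust threshold."""
    return moving_average(readings, window) >= margin * threshold

# Dust levels rising during a hypothetical dry spell.
readings = [40, 50, 60, 70, 80, 90, 95, 100]
print(filter_due_early(readings, threshold=100))  # True
```

A production system would use far richer models over many sensors, but the principle is the same: act on the trend before the threshold is actually breached.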

Additional patterns can be detected where this algorithm is used alongside other analytical tools. For example, analysis could further show that the facility that uses ostrich feathers to remove dust particles from car bodies needs to be fine-tuned.

The BMW Group's AI specialists see enormous potential in dust particle analysis. Based on information from numerous sensors and data from surface inspections, the algorithm monitors over 160 features relating to the car body and is able to predict the quality of paint application very accurately.

This AI solution will be suitable for application in series production when an even broader database for the algorithm has been developed. In particular, this requires additional measuring points and even more precise sensor data for the car body cleaning stations. The AI experts are confident that once the pilot project at the parent plant in Munich has been completed, it will be possible to launch dust particle analysis also at other vehicle plants.

See the article here:
BMW is using Artificial Intelligence to paint its cars for a perfect result - Hindustan Times

UM partners with artificial intelligence leader Atomwise to pursue COVID-19 therapies – UM Today

May 22, 2020

Two University of Manitoba researchers have received support from Atomwise, the leader in using artificial intelligence (AI) for small molecule drug discovery, to explore broad-spectrum therapies for COVID-19 and other coronaviruses.

Faculty of Science professor Jörg Stetefeld (chemistry), Tier-1 Canada Research Chair in Structural Biology and Biophysics, and associate professor Mark Fry (biological sciences) received support through Atomwise's Artificial Intelligence Molecular Screen (AIMS) awards program, which seeks to democratize access to AI for drug discovery and enable researchers to accelerate the translation of their research into novel therapies.

"The current pandemic of COVID-19 is caused by a novel virus strain, SARS-CoV-2," says Stetefeld. "To develop the most efficient therapeutic strategies to counteract the SARS-CoV-2 infection, it is crucial to gain a molecular understanding of how one particularly attractive protein target, nsp12, interacts with another key protein named nsp8. Once learned, this knowledge can be used both to develop new drugs and to repurpose existing ones."

Professor Ben Bailey-Elkin, from the Stetefeld laboratory, will test compounds that Atomwise's AI team sends him after they perform an in silico screen of millions of compounds, and will carry out the subsequent biochemical and biophysical characterization, significantly reducing the time it would traditionally take to carry out this process. The Atomwise team will use their proprietary AI software to search for promising direct-acting antivirals, which interfere with the function of the virus's targeted proteins.

Professor Fry's laboratory will take advantage of Atomwise's cutting-edge AI to screen a panel of small molecules predicted to interfere with the cellular signaling pathway that is central to the cytokine storm associated with the development of the COVID-19 acute respiratory distress syndrome.

"Cytokines are a group of small proteins secreted by cells for the purpose of cell-to-cell communication, and in healthy individuals these cytokines regulate key activities such as immunity, cell growth and tissue repair," says Fry. "A large number of patients with COVID-19 will develop life-threatening pneumonia, accompanied by a so-called cytokine storm, where the body experiences excessive or uncontrolled release of a number of these molecules."

Fry adds, "The cytokine storm is thought to play a major role in the development of COVID-19, and there is some evidence that drugs which inhibit key cytokines such as interleukin-6 may reduce the severity of the disease. It's important to note that many of these inhibitors are part of a therapeutic class called biological drugs. These can be expensive to make, and supply may be limited. My hope is that we can develop a small molecule inhibitor of the cytokine storm that will be easy to synthesize and available to all who need it."

"Atomwise's patented AI technology has been proven in hundreds of projects to discover drug leads for a wide variety of diseases," said Dr. Stacie Calad-Thomson, vice president and head of Artificial Intelligence Molecular Screen (AIMS) Partnerships at Atomwise. "We're hopeful that the therapies discovered will not only target this pandemic, but also potential future pandemics."

Research at the University of Manitoba is partially supported by funding from the Government of Canada Research Support Fund.

UM Today Staff

Visit link:
UM partners with artificial intelligence leader Atomwise to pursue COVID-19 therapies - UM Today