Daily Archives: February 15, 2022

WisDems: Nicholson and Kleefisch want to turn back the clock on health care – WisPolitics.com

Posted: February 15, 2022 at 5:25 am

MADISON, Wis. - As Republican legislators this week continue to push an identical version of the unpopular and divisive Texas-style abortion ban, Kevin Nicholson and Rebecca Kleefisch have both doubled down on their promise to turn back the clock on health care in Wisconsin by slashing health care services, banning abortion care with no exceptions, and cutting cancer screenings for thousands of Wisconsinites.

Nicholson, who confirmed yesterday that he would sign the unpopular and divisive Texas-style bill if he somehow became governor, believes that no one should be able to access abortion care, with zero exceptions. During his last failed campaign, Nicholson received a perfect 100 percent rating from Pro-Life Wisconsin, which confirmed to the Associated Press that Nicholson promised to support all of its demands, including banning abortion in all cases, with no exceptions for rape, incest, or when a mother's life is in jeopardy.

Nicholson has long wanted to defund Planned Parenthood, which tens of thousands of Wisconsinites rely on for basic care like cancer screenings, testing, and access to contraception. He's even previously tweeted that Planned Parenthood "is not a healthcare provider; it's a eugenics shop."

As lieutenant governor, Kleefisch worked to pass five bills limiting critical reproductive health care access and even agreed that survivors of rape should "turn lemons into lemonade" instead of being empowered to make their own health care decisions.

"Kevin Nicholson and Rebecca Kleefisch want to insert themselves into some of the most personal decisions that Wisconsinites make," said Democratic Party of Wisconsin Communications Director Iris Riis. "Whether it's banning abortion care with no exceptions or cutting cancer screenings, the Republicans running for governor have staked out the most extreme and divisive positions on health care."

Learn more about the Republicans running for governor and their radical agendas at AntiChoiceKevin.com and RadicalRebecca.com.


NASA’s Hubble Space Telescope and its Asteroid Detection Ability Will Be Hampered by Starlink Gen2 – iTech Post

Posted: at 5:23 am

SpaceX has recently submitted an application to the Federal Communications Commission (FCC) for permission to deploy 30,000 Starlink Gen2 satellites.

However, NASA sent a letter to the FCC encouraging the agency to require more research and a careful deployment of these satellites.

NASA added that the Hubble Space Telescope might be affected by the deployment of such a plethora of satellites, since eight percent of Hubble's images are already impacted by satellites captured during exposures.

NASA noted that the license Starlink seeks approval for would place 10,000 satellites in or above Hubble's orbital range.

This would double the number of Hubble's degraded images.

NASA added that it estimates that the presence of a Starlink satellite will be spotted in every single asteroid image captured by the Hubble telescope.

The agency does not take this lightly, as it would make it more difficult to detect asteroids that could harm the planet, the satellites, and NASA's space missions. It could also leave numerous captured images unusable.

As reported by Ars Technica, NASA wants "additional information including spacecraft and laser specifications including deployed dimensions, communications plan, ground segment expansion, orbital spacing, and deployment schedule."

"This will inform a thorough analysis of risks and impacts to NASA's missions and enable a mitigation strategy," the report adds.


The letter NASA sent to the FCC does not urge the agency to reject SpaceX's Starlink Gen2 application. Rather, it pushes for meticulous oversight of the project to guarantee safe spaceflight in future missions and a long-term sustainable space environment.

It has been reported that NASA has formally expressed its concern that the significant increase in space satellites might cause collisions with crewed spacecraft missions.

Space traffic might further endanger space exploration as Earth's orbit becomes increasingly crowded.

According to Space.com, given the five-fold increase in satellites this would represent, NASA questioned whether SpaceX's automated collision avoidance system could handle such an enormous amount of traffic.

Conjunctions between satellites and other spacecraft would likely affect both crewed and uncrewed space missions, since there will be more objects in close proximity.

In response to these recurring concerns, SpaceX claims there is zero risk of Starlink satellites colliding with other spacecraft in orbit.

As reported previously here on iTechPost, NASA told the FCC that "the assumption of zero risk from a system-level standpoint lacks statistical substantiation."

The agency added that with "the potential for multiple constellations with thousands and tens of thousands of spacecraft, it is not recommended to assume propulsion systems, ground detection systems, and software are 100% reliable, or that manual operations (if any) are 100% error-free."

PC Mag also reported that SpaceX aims to launch its Starlink Gen2 satellites as soon as next month, leaving the company hoping the FCC will accept its proposal to deploy 30,000 satellites.



The Top 10 Movies to Help You Envision Artificial Intelligence – Inc.

Posted: at 5:23 am

Artificial intelligence has been with us for decades -- just throw on a movie if you don't believe it.

Even though A.I. may feel like a newer phenomenon, the groundwork for these technologies is older than you'd think. The English mathematician Alan Turing, considered by some the father of modern computer science, began questioning machine intelligence in 1950. Those questions resulted in the Turing Test, which gauges a machine's capacity to give the impression of "thinking" like a human.

The concept of A.I. can feel nebulous, and it doesn't fall under just one umbrella. From smart assistants and robotics to self-driving cars, A.I. manifests in different forms, some clearer than others. Spoiler alert! Here are 10 movies, in chronological order, that can help you visualize A.I.:

1. Metropolis (1927)

German director Fritz Lang's classic Metropolis showcases one of the earliest depictions of A.I. in film, with the robot Maria transformed into the likeness of a woman. The movie takes place in an industrial city called Metropolis that is strikingly divided by class, where Robot Maria wreaks havoc across the city.

2. 2001: A Space Odyssey (1968)

Stanley Kubrick's 2001 is notable for its early depiction of A.I. and is yet another cautionary tale in which technology takes a turn for the worse. A handful of scientists are aboard a spacecraft headed to Jupiter, where a supercomputer, HAL (IBM to the cynical), runs most of the spaceship's operations. After HAL makes a mistake and tries to attribute it to human error, the supercomputer fights back when those aboard the ship attempt to disconnect it.

3. Blade Runner (1982) and Blade Runner 2049 (2017)

The original Blade Runner (1982) featured Harrison Ford hunting down "replicants," or humanoids powered by A.I., which are almost indistinguishable from humans. In Blade Runner 2049 (2017), Ryan Gosling's character, Officer K, lives with an A.I. hologram, Joi. So at least we're getting along better with our bots.

4. The Terminator (1984)

The Terminator's plot focuses on a man-made artificial intelligence network referred to as Skynet -- despite Skynet being created for military purposes, the system ends up plotting to kill mankind. Arnold Schwarzenegger launched his acting career out of his role as the Terminator, a time-traveling cyborg killer that masquerades as a human. The film probes the question -- and consequences -- of what happens when robots start thinking for themselves.

5. The Matrix Series (1999-2021)

Keanu Reeves stars in this cult classic as Thomas Anderson/Neo, a computer programmer by day and hacker by night who uncovers the truth behind the simulation known as "the Matrix." The simulated reality is a product of artificially intelligent programs that enslaved the human race. Human beings are kept asleep in "pods," where they unwittingly participate in the simulated reality of the Matrix while their bodies are used to harvest energy.

6. I, Robot (2004)

This sci-fi flick starring Will Smith takes place in 2035, in a society where robots with human-like features serve humankind. An artificially intelligent supercomputer, dubbed VIKI (which stands for Virtual Interactive Kinetic Intelligence), is one to watch, especially once a programming defect surfaces. The defect in VIKI's programming leads the supercomputer to believe that the robots must take charge in order to protect mankind from itself.

7. WALL-E (2008)

Disney Pixar's WALL-E follows a robot of the same name whose main role is to compact garbage on a trash-ridden Earth. But after spending centuries alone, WALL-E evolves into a sentient piece of machinery who turns out to be very lonely. The movie takes place in 2805 and follows WALL-E and another robot, named Eve, whose job is to analyze whether a planet is habitable for humans.

8. Tron: Legacy (2010)

The Tron universe is filled to the brim with A.I., given that it takes place in a virtual world known as "the Grid." The movie's protagonist, Sam, finds himself accidentally uploaded to the Grid, where he embarks on an adventure that leads him face-to-face with algorithms and computer programs. The Grid is protected by programs such as Tron, but corrupt A.I. programs surface throughout the virtual network as well.

9. Her (2013)

Joaquin Phoenix plays Theodore Twombly, a professional letter writer going through a divorce. To help himself cope, Theodore picks up a new operating system with advanced A.I. features. He selects a female voice for the OS, naming the device Samantha (voiced by Scarlett Johansson), but it proves to have smart capabilities of its own. Or is it her own? Theodore spends a lot of time talking with Samantha, eventually falling in love. The film traces their budding relationship and confronts the notion of sentience and A.I.

10. Ex Machina (2014)

After winning a contest at his workplace, programmer Caleb Smith meets his company's CEO, Nathan Bateman. Nathan reveals to Caleb that he's created a robot with artificial intelligence capabilities. Caleb's task? Assess if the feminine humanoid robot, Ava, is able to show signs of intelligent human-like behavior: in other words, pass the Turing Test. Ava has a human-like face and physique, but her "limbs" are composed of metal and electrical wiring. It's later revealed that other characters aren't exactly human, either.


Tying Artificial intelligence and web scraping together [Q&A] – BetaNews

Posted: at 5:23 am

Artificial intelligence (AI) and machine learning (ML) seem to have piqued the interest of automated data collection providers. While web scraping has been around for some time, AI/ML implementations have appeared in the line of sight of providers only recently.

Aleksandras Šulenko, Product Owner at Oxylabs.io, who has been working with these solutions for several years, shares his insights on the importance of artificial intelligence, machine learning, and web scraping.

BN: How has the implementation of AI/ML solutions changed the way you approach development?

AS: AI/ML has an interesting work-payoff ratio. Good models can sometimes take months to write and develop. Until then, you don't really have anything. Dedicated scrapers or parsers, on the other hand, can take up to a day or two. When you have an ML model, however, maintaining it takes a lot less time for the amount of work it covers.

So, there's always a choice. You can build dedicated scrapers and parsers, which will take significant amounts of time and effort to maintain once they start stacking up. The other choice is to have "nothing" for a significant amount of time, but a brilliant solution later on, which will save you tons of time and effort.

There's some theoretical point where developing custom solutions is no longer worth it. Unfortunately, there's no mathematical formula to arrive at the correct answer. You have to make a decision when all the repetitive tasks become too much of a drain on resources.
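For concreteness, a dedicated parser of the kind described above might look like the minimal sketch below. The page layout and the `price` class name are invented for illustration; a real scraper would also fetch the HTML over HTTP rather than use an inline sample.

```python
from html.parser import HTMLParser

# A dedicated parser is hard-wired to one page layout: here, a hypothetical
# e-commerce listing where each price sits in a <span class="price"> element.
class PriceParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_price = False
        self.prices = []

    def handle_starttag(self, tag, attrs):
        # attrs arrives as a list of (name, value) pairs.
        if tag == "span" and ("class", "price") in attrs:
            self.in_price = True

    def handle_data(self, data):
        if self.in_price:
            self.prices.append(data.strip())

    def handle_endtag(self, tag):
        if tag == "span":
            self.in_price = False

# A static sample keeps the sketch self-contained; in practice this string
# would be the body of an HTTP response.
sample = '<div><span class="price">$19.99</span><span class="price">$4.50</span></div>'
parser = PriceParser()
parser.feed(sample)
print(parser.prices)  # ['$19.99', '$4.50']
```

The fragility Šulenko's trade-off alludes to is visible here: rename one CSS class on the target site and this parser silently returns nothing, which is exactly the maintenance burden that stacks up as dedicated scrapers multiply.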

BN: Have these solutions had a visible impact on the deliverability and overall viability of the project?

AS: Getting started with machine learning is tough, though. It's still, comparatively speaking, a niche specialization. In other words, you won't find many developers that dabble in ML, and knowing how hard it can be to find one for any discipline, it's definitely a tough river to cross.

Yet, if the business approach to scraping is based on a long-term vision, ML will definitely come in handy sometime down the road. Every good vision has scaling in it, and with scaling come repetitive tasks. These are best handled with machine learning.

Our Adaptive Parser is a great example of such an achievement. It was once almost unthinkable that a machine learning model could be of such high benefit. Now the solution can deliver parsed results from a multitude of e-commerce product pages, irrespective of the differences between them or any changes that happen over time. Such a solution is completely irreplaceable.

BN: In a previous interview, you've mentioned the importance of making things more user-friendly for web scraping solutions. Is there any particular reason you would recommend moving development towards no-code implementations?

AS: Even companies that have large IT departments may have issues with integration. Developers are almost always busy, and taking time out of their schedules for integration purposes is tough. Most end-users of the data Scraper APIs deliver, after all, aren't tech-savvy.

Additionally, the departments that need scraping the most, such as marketing and data analytics, might not have enough sway in deciding developers' roadmaps. As such, even relatively small hurdles can become impactful enough. Scrapers should now be developed with a non-tech user in mind.

There should be plenty of visuals that allow for a simplified construction of workflows, with a dashboard that's used to deliver information clearly. Scraping is becoming something done by everyone.

BN: What do you think lies in the future of scraping? Will websites become increasingly protective of their data, or will they eventually forego most anti-scraping sentiment?

AS: There are two answers I can give. One is "more of the same." Surely a boring one, but it's inevitable. Delving deeper into the scaling and proliferation of web scraping isn't as fun as the next topic -- the legal context.

Currently, it seems as if our position in the industry isn't perfectly decided. Case law forms the basis of how we think about and approach web scraping. Yet, it all might change on a whim. We're closely monitoring the developments due to the inherent fragility of the situation.

There's a possibility that companies will realize the value of their data and start selling it on third-party marketplaces. That would reduce the value of web scraping as a whole, as you could simply acquire what you need for a small price. Most businesses, after all, need the data and the insights, not web scraping. It's a means to an end.

There's a lot of potential in the grand vision of Web 3.0 -- the initiative to make the whole Web interconnected and machine-readable. If this vision came to life, the whole data gathering landscape would be vastly transformed: the Web would become much easier to explore and organize, parsing would become a thing of the past, and webmasters would get used to the idea of their data being consumed by non-human actors.

Finally, I think user-friendliness will be the focus in the future. I don't mean just the no-code part of scraping. A large part of getting data is exploration -- finding where and how it's stored and getting to it. Customers will often formulate an abstract request, and developers will follow up with methods to acquire what is needed.

In the future, I expect, the exploration phase will be much simpler. Maybe we'll be able to take those abstract requests and turn them into something actionable through an interface. In the end, web scraping is breaking away from its shell of being something code-ridden or hard to understand and evolving into a daily activity for everyone.

Photo Credit: Photon photo/Shutterstock


Inside the EU’s rocky path to regulate artificial intelligence – International Association of Privacy Professionals

Posted: at 5:23 am

In April last year, the European Commission published its ambitious proposal to regulate Artificial Intelligence. The regulation was meant to be the first of its kind, but the progress has been slow so far due to the file's technical, political and juridical complexity.

Meanwhile, the EU lost its first-mover advantage as other jurisdictions like China and Brazil have managed to pass their legislation first. As the proposal is entering a crucial year, it is high time to take stock of the state of play, the ongoing policy discussions, notably around data, and potential implications for businesses.

For the European Parliament, delays have been mainly due to more than six months of political disputes between lawmakers over who was to take the lead in the file. The result was a co-lead between the centrists and the center-left, sidelining the conservative European People's Party.

Members of European Parliament are now trying to make up for lost time. The first draft of the report is planned for April, with discussions on amendments throughout the summer. The intention is to reach a compromise by September and hold the final vote in November.

The timeline seems particularly ambitious since co-leads involve double the number of people, inevitably slowing down the process. The question will be to what extent the co-rapporteurs will remain aligned on the critical political issues as the center-right will try to lure the liberals into more business-friendly rules.

Meanwhile, the EU Council made some progress on the file, though limited by its highly technical nature. It is telling that even national governments, which have significantly more resources than MEPs, struggle to understand the new rules' full implications.

Slovenia, which led the diplomatic talks for the second half of 2021, aimed to develop a compromise for 15 articles, but only covered the first seven. With the beginning of the French presidency in January, the file is expected to move faster as Paris aims to provide a full compromise by April.

As the policy discussions made some progress in the EU Council, several sticking points emerged. The very definition of AI systems is problematic, as European governments distinguish them from traditional software programs or statistical methods.

The diplomats also added a new category for "general purpose" AI, such as synthetic data packages or language models. However, there is still no clear understanding of whether the responsibility should be attributed upstream, to the producer, or downstream, to the provider.

The use of real-time biometric recognition systems has primarily monopolized the public debate, as the commission's proposal falls short of a total ban for some crucial exceptions, notably terrorist attacks and kidnapping. In October, lawmakers adopted a resolution pushing for a complete ban, echoing the argument made by civil society that these exceptions provide a dangerous slippery slope.

By contrast, facial recognition technologies are increasingly common in Europe. A majority of member states wants to keep or even expand the exceptions to border control, with Germany so far relatively isolated in calling for a total ban.

"The European Commission did propose a set of criteria for updating the list of high-risk applications. However, it did not provide a justification for the existing list, which might mean that any update might be extremely difficult to justify," Lilian Edwards, a professor at Newcastle University, said.

Put differently, since the reasoning behind the lists of prohibited or high-risk AI uses is largely value-based, they are likely to remain heatedly debated points throughout the whole legislative process.

For instance, the Future of Life Institute has been arguing for a broader definition of manipulation, which might profoundly impact the advertising sector and the way online platforms currently operate.

A dividing line that is likely to emerge systematically in the debate is the tension between the innovation needs of the industry, as some member states already stressed, and ensuring consumer protection in the broadest sense, including the use of personal data.

This underlying tension is best illustrated in the ongoing discussions on the report of the parliamentary committee on Artificial Intelligence in a Digital Age, which are progressing in parallel to the AI Act.

In his initial draft, conservative MEP Axel Voss attacked the General Data Protection Regulation, presenting AI as part of a technological race where Europe risks becoming China's "economic colony" if it did not relax its privacy rules.

The report faced backlash from left-of-center policymakers, who saw it as an attempt to water down the EU's hard-fought data protection law. For progressive MEPs, data-hungry algorithms fed with vast amounts of personal data might not be desirable, and they draw a parallel with their activism in trying to curb personalized advertising.

"Which algorithms do we train with vast amounts of personal data? Likely those that automatically classify, profile or identify people based on their personal details often with huge consequences and risks of discrimination or even manipulation. Do we really want to be using those, let alone 'leading' their development?" MEP Kim van Sparrentak said.

However, the need to find a balance with data protection has also been underlined by Bojana Bellamy, president of the Centre for Information Policy Leadership, who notes how some fundamental principles of the GDPR would be in contradiction with the AI regulation.

In particular, a core principle of the GDPR is data minimization, namely that only the personal data strictly needed for completing a specific task is processed and should not be retained for longer than necessary. Conversely, the more AI-powered tools receive data, the more robust and accurate they become, leading (at least in theory) to a fairer and non-biased outcome.

For Bellamy, this tension is due to the lack of a holistic strategy in the EU's hectic digital agenda; she argues that policymakers should follow a more result-oriented approach to what they are trying to achieve. These contradicting notions might fall on industry practitioners, who may be required to deliver a fair and unbiased system while also minimizing the amount of personal data collected.

The draft AI law includes a series of obligations for system providers, namely the organizations that make AI applications available on the market or put them into service. These obligations will need to be operationalized: for instance, what it means to have a "fair" system, how far "transparency" should go, and how "robustness" is defined.

In other words, providers will have to put a system in place to manage risks and ensure compliance with support from their suppliers. For instance, a supplier of training data would need to detail how the data was selected and obtained, how it was categorized and the methodology used to ensure representativeness.
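As a sketch of what such supplier documentation could look like in practice, the record below captures the elements the article lists (how data was selected, obtained, and categorized, and the representativeness methodology). The field names and example values are invented for illustration and are not taken from the AI Act's actual annexes or any published standard.

```python
from dataclasses import dataclass, field

# Hypothetical shape for a training-data provenance record of the kind a
# supplier might hand to an AI system provider; all names are illustrative.
@dataclass
class TrainingDataRecord:
    source: str                       # how the data was obtained
    selection_criteria: str           # how the data was selected
    categories: list = field(default_factory=list)   # how it was categorized
    representativeness_method: str = ""              # methodology used

record = TrainingDataRecord(
    source="licensed e-commerce catalog export",
    selection_criteria="random sample of 2021 product listings",
    categories=["electronics", "apparel"],
    representativeness_method="stratified sampling across regions",
)
print(record.source)
```

The point of structuring the information this way is that it can be exchanged consistently along the value chain, which is exactly what the harmonized standards discussed below aim to make cost-efficient.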

In this regard, the AI Act explicitly refers to harmonized standards that industry practitioners must develop to exchange information to make the process cost-efficient. For example, the Global Digital Foundation, a digital policy network, is already working on an industry coalition to create a relevant framework and toolset to share information consistently across the value chain.

In this context, European businesses fear that if the EU's privacy rules are not effectively incorporated in the international standards, they could be put at a competitive disadvantage. The European Tech Alliance, a coalition of EU-born heavyweights such as Spotify and Zalando, voiced concerns that the initial proposal did not include an assessment for training dataset collected in third countries that might use data collected via practices at odds with the GDPR.

Adopting industry standards creates a presumption of conformity, minimizing the risk and costs of compliance. These incentives are so strong that harmonized standards tend to become universally adopted by industry practitioners, as the cost of departing from them becomes prohibitive. Academics have defined standardization as the "real rulemaking" of the AI regulation.

"The regulatory approach of the AI Act, i.e. standards compliance, is not a guarantee of low barriers for the SMEs. On the contrary, standards compliance is often perceived by SMEs as a costly exercise due to expensive conformity assessment that needs to be carried out by third parties," Sebastiano Toffaletti, secretary-general of the European DIGITAL SME Alliance, said.

By contrast, European businesses that are not strictly "digital" but that could embed AI-powered tools into their daily operations see the AI Act as a way to bring legal clarity and ensure consumer trust.

"The key question is to understand how can we build a sense of trust as a business and how can we translate it to our customers," Nozha Boujemaa, global vice president for digital ethics and responsible AI at IKEA, said.

Photo by Michael Dziedzic on Unsplash


Learning to improve chemical reactions with artificial intelligence – EurekAlert

Posted: at 5:23 am

Image: INL researchers perform experiments using the Temporal Analysis of Products (TAP) reactor system.

Credit: Idaho National Laboratory

If you follow the directions in a cake recipe, you expect to end up with a nice fluffy cake. In Idaho Falls, though, the elevation can affect these results. When baked goods don't turn out as expected, the troubleshooting begins. This happens in chemistry, too. Chemists must be able to account for how subtle changes or additions may affect the outcome, for better or worse.

Chemists make their version of recipes, known as reactions, to create specific materials. These materials are essential ingredients in an array of products found in healthcare, farming, vehicles and other everyday products, from diapers to diesel. When chemists develop new materials, they rely on information from previous experiments and predictions based on prior knowledge of how different starting materials interact with others and behave under specific conditions. There are a lot of assumptions, guesswork and experimentation in designing reactions using traditional methods. New computational methods like machine learning can help scientists better understand complex processes like chemical reactions. While it can be challenging for humans to pick out patterns hidden within the data from many different experiments, computers excel at this task.

Machine learning is an advanced computational tool where programmers give computers lots of data and minimal instructions about how to interpret it. Instead of incorporating human bias into the analysis, the computer is only instructed to pull out what it finds to be important in the data. This could be an image of a cat (if the input is all the photos on the internet) or information about how a chemical reaction proceeds through a series of steps, as is the case for a set of machine learning experiments that are ongoing at Idaho National Laboratory.

At the lab, researchers working with the innovative Temporal Analysis of Products (TAP) reactor system are trying to improve understanding of chemical reactions by studying the role of catalysts, which are components that can be added to a mixture of chemicals to alter the reaction process. Often catalysts speed up the reaction, but they can do other things, too. In baking and brewing, enzymes act as catalysts to speed up fermentation and break down sugars in wheat (glucose) into alcohol and carbon dioxide, which creates the bubbles that make bread rise and beer foam.

In the laboratory, perfecting a new catalyst can be expensive, time-consuming and even dangerous. According to INL researcher Ross Kunz, "Understanding how and why a specific catalyst behaves in a reaction is the holy grail of reaction chemistry." To help find it, scientists are combining machine learning with a wealth of new sensor data from the TAP reactor system.

The TAP reactor system uses an array of microsensors to examine the different components of a reaction in real time. For the simplest catalytic reaction, the system captures 8 unique measurements in each of the 5,000 timepoints that make up the experiment. Assembling the timepoints into a single data set provides 165,000 measurements for one experiment on a very simple catalyst. Scientists then use the data to predict what is happening in the reaction at a specific time and how different reaction steps work together in a larger chemical reaction network. Traditional analysis methods can barely scratch the surface of such a large quantity of data for a simple catalyst, let alone the many more measurements that are produced by a complex one.
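The bookkeeping step described here, assembling per-timepoint sensor readings into one data set, can be sketched as follows. The array shapes are illustrative stand-ins, not the TAP system's actual channel counts, and the random values merely take the place of real sensor readings.

```python
import numpy as np

# Stand-in for a TAP-style experiment: one reading vector per timepoint,
# with shapes chosen for illustration only.
rng = np.random.default_rng(seed=0)
n_timepoints, n_sensors = 5000, 8

# Readings as the sensors would emit them, one small vector per timepoint...
timepoints = [rng.random(n_sensors) for _ in range(n_timepoints)]

# ...stacked into a single (timepoints x sensors) array for analysis.
dataset = np.vstack(timepoints)
print(dataset.shape)  # (5000, 8)
```

Once the readings live in one array, column-wise or window-wise pattern searches of the kind the article attributes to machine learning become straightforward to express.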

Machine learning methods can take the TAP data analysis further. Using a type of machine learning called explainable artificial intelligence, or AI, the team can educate the computer about known properties of the reaction's starting materials and the physics that govern these types of reactions, a process called training. The computer can apply this training, and the patterns that it detects in the experimental data, to better describe the conditions in a reaction across time. The team hopes that the explainable AI method will produce a description of the reaction that can be used to accurately model the processes that occur during the TAP experiment.

In most AI experiments, a computer is given almost no training on the physics and simply detects patterns in the data based upon what it can identify, similar to how a baby might react to seeing something completely new. By contrast, the value of explainable AI lies in the fact that humans can understand the assumptions and information that lead to the computer's conclusions. This human-level understanding can make it easier for scientists to verify predictions and detect flaws and biases in the reaction description produced by explainable AI.

Implementing explainable AI is not as simple or straightforward as it might sound. With support from the Department of Energy's Advanced Manufacturing Office, the INL team has spent two years preparing the TAP data for machine learning, developing and implementing the machine learning program, and validating the results for a common catalyst in a simple reaction that occurs in the car you drive every day. This reaction, the transformation of carbon monoxide into carbon dioxide, occurs in a car's catalytic converter and relies on platinum as the catalyst. Since this reaction is well studied, researchers can check how well the results of the explainable AI experiments match known observations.

In April 2021, the INL team published their results validating the explainable AI method with the platinum catalyst in the article "Data driven reaction mechanism estimation via transient kinetics and machine learning" in Chemical Engineering Journal. Now that the team has validated the approach, they are examining TAP data from more complex industrial catalysts used in the manufacture of small molecules like ethylene, propylene and ammonia. They are also working with collaborators at the Georgia Institute of Technology to apply the mathematical models that result from the machine learning experiments to computer simulations called digital twins. This type of simulation allows the scientists to predict what will happen if they change an aspect of the reaction. When a digital twin is based on a very accurate model of a reaction, researchers can be confident in its predictions.

By giving the digital twin the task of simulating a modification to a reaction or a new type of catalyst, researchers can avoid doing physical experiments for modifications that are likely to lead to poor results or unsafe conditions. Instead, the digital twin simulation can save time and money by testing thousands of conditions, while researchers test only a handful of the most promising conditions in the physical laboratory.
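That screening loop can be sketched in miniature. The "twin" below is just an assumed Arrhenius-style conversion model with an invented catalyst-degradation penalty, not any real reaction model; it only illustrates the workflow of ranking many simulated conditions and sending a short list to the bench.

```python
# Hedged sketch of digital-twin screening: score hundreds of candidate
# temperatures in simulation, then shortlist the best few for the lab.
import math

def simulated_conversion(temp_k):
    """Toy twin model: conversion rises with temperature (Arrhenius-like)
    but is penalized above an assumed degradation threshold of 800 K.
    Purely illustrative, not a fitted model."""
    rate = 1e4 * math.exp(-8000.0 / temp_k)       # assumed kinetics
    penalty = max(0.0, (temp_k - 800.0) / 20.0)   # assumed degradation cost
    return rate - penalty

candidates = range(300, 1000)                 # screen 700 candidate temps
ranked = sorted(candidates, key=simulated_conversion, reverse=True)
lab_shortlist = ranked[:5]                    # only these reach the bench
print(lab_shortlist)
```

With this toy model, the shortlist clusters around the assumed 800 K sweet spot, so hundreds of unpromising or unsafe conditions never need a physical experiment.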

Plus, this machine learning approach can produce newer and more accurate models for each new catalyst and reaction condition tested with the TAP reactor system. In turn, applying these models to digital twin simulations gives researchers the predictive power to pick the best catalysts and conditions to test next in the TAP reactor. As a result, each round of testing, model development and simulation produces a greater understanding of how a reaction works and how to improve it.

"These tools are not only the foundation of a new paradigm in catalyst science but also pave the way for radical new approaches in chemical manufacturing," said Rebecca Fushimi, who leads the project team.

About Idaho National Laboratory
Battelle Energy Alliance manages INL for the U.S. Department of Energy's Office of Nuclear Energy. INL is the nation's center for nuclear energy research and development, and also performs research in each of DOE's strategic goal areas: energy, national security, science and the environment. For more information, visit www.inl.gov. Follow us on social media: Twitter, Facebook, Instagram and LinkedIn.

Chemical Engineering Journal

Data driven reaction mechanism estimation via transient kinetics and machine learning

18-Apr-2021



Posted in Artificial Intelligence | Comments Off on Learning to improve chemical reactions with artificial intelligence – EurekAlert

Life and health insurers to use advanced artificial intelligence to reduce benefits fraud – Canada NewsWire

Posted: at 5:23 am

TORONTO, Feb. 14, 2022 /CNW/ - The Canadian Life and Health Insurance Association (CLHIA) is pleased to announce the launch of an industry initiative to pool claims data and use advanced artificial intelligence tools to enhance the detection and investigation of benefits fraud.

Every insurer in Canada has its own internal analytics to detect fraud within its book of business. This new initiative, led by the CLHIA and its technology provider, Shift Technology, will deploy advanced AI to analyze industry-wide anonymized claims data. By identifying patterns across millions of records, the program is enhancing the effectiveness of benefits fraud investigations across the industry.
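To illustrate why pooling matters, here is a minimal sketch of cross-insurer pattern detection. The claim fields and the duplicate-billing rule are invented for illustration; CLHIA's and Shift Technology's actual analytics are far more sophisticated and are not public.

```python
# Hedged sketch: a pattern that no single insurer's internal analytics can
# see (the same service billed to two insurers) becomes visible once
# anonymized claims are pooled. All data and field names are invented.
from collections import defaultdict

# Pooled anonymized claims: (claimant_hash, service, date, insurer)
claims = [
    ("a1f3", "physio", "2022-01-10", "InsurerA"),
    ("a1f3", "physio", "2022-01-10", "InsurerB"),  # same service, twice
    ("c9d2", "dental", "2022-01-11", "InsurerA"),
]

# Group identical (claimant, service, date) keys across the whole pool.
by_key = defaultdict(set)
for claimant, service, date, insurer in claims:
    by_key[(claimant, service, date)].add(insurer)

# Flag any key billed to more than one insurer for investigation.
flagged = [key for key, insurers in by_key.items() if len(insurers) > 1]
print(flagged)
```

Each insurer sees only one legitimate-looking claim; only the pooled view exposes the duplicate.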

We expect that the initiative will expand in scope over the coming years to include even more industry data.

"Fraudsters are taking increasingly sophisticated steps to avoid detection," said Stephen Frank, CLHIA's President and CEO. "This technology will give insurers the edge they need to identify patterns and connect the dots across a huge pool of claims data over time, leading to more investigations and prosecutions."

"The capability for individual insurers to identify potential fraud has already proven incredibly beneficial," explained Jeremy Jawish, CEO and co-founder of Shift Technology. "Through the work Shift Technology is doing with the CLHIA, we are expanding that benefit across all member organizations, and providing a valuable fraud fighting solution to the industry at large."

Insurers paid out nearly $27 billion in supplementary health claims in 2020. Employers and insurers lose what is estimated to be millions of dollars each year to fraudulent group health benefits claims. The costs of fraud are felt by insurers, employers and employees and put the sustainability of group benefits plans at risk.

About CLHIA
The CLHIA is a voluntary association whose member companies account for 99 per cent of Canada's life and health insurance business. These insurers provide financial security products including life insurance, annuities (including RRSPs, RRIFs and pensions) and supplementary health insurance to over 29 million Canadians. They hold over $1 trillion in assets in Canada and employ more than 158,000 Canadians. For more information, visit http://www.clhia.ca.

About Shift Technology
Shift Technology delivers the only AI-native decision automation and optimization solutions built specifically for the global insurance industry. Addressing several critical processes across the insurance policy lifecycle, the Shift Insurance Suite helps insurers achieve faster, more accurate claims and policy resolutions. Shift has analyzed billions of insurance transactions to date and was presented Frost & Sullivan's 2020 Global Claims Solutions for Insurance Market Leadership Award. For more information, visit http://www.shift-technology.com.

SOURCE Canadian Life and Health Insurance Association Inc.

For further information: Kevin Dorse, Assistant Vice President, Strategic Communications and Public Affairs, CLHIA, (613) 691-6001, [emailprotected]; Rob Morton, Corporate Communications, Shift Technology, 617-416-9216, [emailprotected]


Posted in Artificial Intelligence | Comments Off on Life and health insurers to use advanced artificial intelligence to reduce benefits fraud – Canada NewsWire

Toronto tech institute tracking long COVID with artificial intelligence, social media – The Globe and Mail

Posted: at 5:23 am

The Vector Institute has teamed up with Telus Corp., Deloitte and Roche Canada to help health care professionals learn more about the symptoms of long COVID. (Photo: Nathan Denette/The Canadian Press)

A Toronto tech institute is using artificial intelligence and social media to track and determine which long-COVID symptoms are most prevalent.

The Vector Institute, an artificial intelligence organization based at the MaRS tech hub in Toronto, has teamed up with telecommunications company Telus Corp., consulting firm Deloitte and diagnostics and pharmaceuticals business Roche Canada to help health care professionals learn more about the symptoms that people with a long-lasting form of COVID experience.

They built an artificial intelligence framework that used machine learning to locate and process 460,000 Twitter posts from people with long COVID, which the Canadian government defines as having symptoms of COVID-19 for weeks or months after the initial recovery.


The framework parsed through tweets to determine which are first-person accounts about long COVID and then tallied up the symptoms described. It found fatigue, pain, brain fog, anxiety and headaches were the most common symptoms and that many with long COVID experienced several symptoms at once.
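A toy sketch of that pipeline follows, using simple keyword rules as stand-ins for the machine-learning classifiers the Vector framework actually uses; the posts, the first-person heuristic and the symptom list are all invented for illustration.

```python
# Hedged sketch: filter posts down to first-person accounts, then tally
# symptom mentions. A crude pronoun check stands in for the real classifier.
from collections import Counter

SYMPTOMS = ["fatigue", "pain", "brain fog", "anxiety", "headache"]

posts = [
    "i still have brain fog and fatigue 12 weeks after covid",
    "new study on long covid symptoms",  # not a first-person account
    "i can't shake these headaches and the fatigue is constant",
]

def first_person(text):
    # Stand-in for the ML classifier: keep posts containing "i".
    return "i" in text.lower().split()

tally = Counter()
for post in posts:
    if not first_person(post):
        continue
    for symptom in SYMPTOMS:
        if symptom in post.lower():
            tally[symptom] += 1

print(tally.most_common())
```

Scaling the same loop to 460,000 posts is trivial for a machine but, as the article notes, would take enormous staff time by hand.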

Replicating that research without AI would have required an enormous amount of staff time: manually locating hundreds of thousands of social-media posts, filtering out those that were not first-person accounts of long COVID, and counting the symptoms described.

"AI is very good at taking large sets of large amounts of data to find patterns," said Cameron Schuler, Vector's chief commercialization officer and vice-president of industry innovation. "It's for stuff that is way too big for any human to actually be able to hold this in their brain."

The framework speeds up the research process around a virus that is quickly evolving and still associated with so many unknowns.

So far, long COVID isn't well understood. There's no uniform way to diagnose it nor a single treatment to ease or cure it. Information is key to giving patients better outcomes and ensuring hospitals aren't overwhelmed in the coming years.

A survey conducted in May, 2021, of 1,048 Canadians with long COVID, also known as post-COVID syndrome, found more than 100 symptoms or difficulties with everyday activities.


About 80 per cent of adults surveyed by Viral Neuro Exploration, COVID Long Haulers Support Group Canada and Neurological Health Charities Canada reported one or more symptoms between four and 12 weeks after they were first infected.

Sixty per cent reported one or more symptoms in the long term. The symptoms were so severe that about 10 per cent are unable to return to work in the long term.

Researchers and those behind the technology are hopeful it will quickly contribute to the worlds fight against long COVID, but are already imagining ways they can advance the framework even further or apply it to other situations.

"This is a novel kind of tool," said Dr. Angela Cheung, a senior physician scientist at the University Health Network, who is running two large studies on long COVID.

"I'm not aware of anyone else having done this, and so I think it really may be quite useful going forward in health research."

Researchers say preliminary uses of the framework show it can help uncover patterns related to symptom frequencies, co-occurrence and distribution over time.

It could also be applied to other health events such as emerging infections or rare diseases or the effects of booster shots on infection.



Follow this link:

Toronto tech institute tracking long COVID with artificial intelligence, social media - The Globe and Mail

Posted in Artificial Intelligence | Comments Off on Toronto tech institute tracking long COVID with artificial intelligence, social media – The Globe and Mail

AION Labs, Powered by BioMed X, Launches Third Global Call for Application: Artificial Intelligence for Design and Optimization of Antibodies for…

Posted: at 5:23 am

REHOVOT, Israel and HEIDELBERG, Germany, Feb. 13, 2022 /PRNewswire/ -- AION Labs, a first-of-its-kind innovation lab spearheading the adoption of AI technologies and computational science to solve therapeutic challenges, and German independent research institute BioMed X, announced today the launch of the third global call for application to identify biomedical scientists and inventors to form a new startup at AION Labs' headquarters in Rehovot, Israel.

The chosen AION Labs startup team will be sponsored by several industry-leading partners and supported by the Israel Innovation Authority (IIA) and Digital Israel office. The sponsors of this call for application are AstraZeneca, Israel Biotech Fund, Merck, Pfizer and Teva Pharmaceuticals, with close support from Amazon Web Services (AWS).

Antibody treatments continue to be the standard of care for several disease areas and have emerged as cornerstone therapies during the current pandemic. However, despite being primary treatment modalities for over two decades, the cycle times for the discovery and optimization of therapeutic antibodies can still span several years. In order to achieve developable antibody therapeutics exhibiting target-specific binding, stability and scalability, several biophysical parameters need to be streamlined. The use of artificial intelligence (AI) has the potential to broaden the explored sequence space, accelerate the selection of fully optimized antibodies, and shorten overall lead discovery times by successfully predicting relevant parameters.

AION Labs is inviting computational biologists, bioinformatics and cheminformatics scientists, AI researchers, and antibody or protein engineers at academic and industry research labs worldwide to propose the development of a next-generation computational platform to optimize antibodies for targeted therapies with enhanced properties, including developability or manufacturability, stability, aggregation, immunogenicity, pharmacokinetics and tissue distribution. The ultimate solution is an AI platform that receives sequences of binders and generates novel variants with optimized IgG sequences, biophysical and targeting properties. The goal of the AI algorithm is to make an existing antibody a better drug while reducing design iterations, optimizing cycle times and lowering attrition rates. The AION Labs pharma partners involved in this project will provide a wealth of data for model training and their expertise in setting specifications and evaluating the outcome. Original ideas that go far beyond the current state-of-the-art are encouraged.
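The generate-and-rank idea behind such a platform can be sketched in miniature. The sequence fragment, the point-mutation scheme and the scoring function below are invented placeholders; a real platform would use trained AI models and the pharma partners' data rather than a hand-written rule.

```python
# Hedged sketch: enumerate single-point mutants of an antibody sequence
# fragment and rank them with a (toy) developability score. All names and
# rules here are illustrative assumptions.
import itertools

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
cdr = "GYTFTSYG"  # illustrative CDR-like fragment, not a real therapeutic

def score(seq):
    # Stand-in "developability" score: penalize assumed aggregation-prone
    # residues (W, F), purely for illustration.
    return -sum(seq.count(aa) for aa in "WF")

# Enumerate every single-point mutant (the original is included when the
# substituted residue equals the original one).
variants = {
    cdr[:i] + aa + cdr[i + 1:]
    for i, aa in itertools.product(range(len(cdr)), AMINO_ACIDS)
}

# Keep the top-scoring variants as candidates for the next design round.
best = sorted(variants, key=score, reverse=True)[:3]
```

In a real system the scorer would be a learned model predicting stability, immunogenicity and the other properties listed above, and the loop would run over far richer variant libraries.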

"AION Labs is eager to tackle yet another pharmaceutical R&D challenge," said Dr. Yair Benita, CTO of AION Labs. "We're anticipating another strong round of applications, and look forward to working together with the chosen startup to develop a cutting-edge solution to substantially improve the design and optimization of antibodies for targeted therapies."

As part of the online application procedure, interested candidates are requested to submit a competitive project proposal. After a preliminary short-listing round, candidates will be invited to a five-day innovation boot camp in Rehovot. With the support of experienced mentors from the pharma, tech and VC industries, the winning team of scientists will be trained and guided during a fully-funded incubation period of up to four years towards becoming an independent startup.

Further details about this call for application can be found on the AION Labs website: http://www.aionlabs.com. Interested candidates are invited to apply via the BioMed X Career Space at https://career.bio.mx/call/2022-AIL-C03 before April 10, 2022.

Sign up here to join us for an informative webinar to learn more about AION Labs and this challenge on March 10, 2022 at 11 AM EST: https://us02web.zoom.us/webinar/register/WN_Qu788xk8SfycAm9vaTCpZg

About AION Labs
AION Labs is a first-of-its-kind alliance of AstraZeneca, Merck, Pfizer, Teva, the Israel Biotech Fund and Amazon Web Services (AWS) that have come together with one clear mission: to create and adopt groundbreaking new AI technologies that will transform the process of drug discovery and development in order to contribute to the health and well-being of all people world-wide.

AION Labs is a unique venture hub where brilliant innovators and scientist-founders convene from around the world to solve the biggest R&D challenges guided by years of accumulated know-how, data and experience in pharma. The lab leverages its partners' wealth of knowledge and a new multidisciplinary mindset with the ingenuity, agility and innovative power of Israel's start-up ecosystem, to develop strong companies with clear long-term strategies, that will pave the way to the future of healthcare. AION Labs cultivates innovation from within; its unique venture creation process bridges the gap between outstanding academic research in the field of AI and the biggest R&D needs in the discovery and development of new medicines for the benefit of patients.

For more information, visit aionlabs.com

About BioMed X
BioMed X is an independent research institute located on the campus of the University of Heidelberg in Germany, with a world-wide network of partner locations. Together with our partners, we identify big biomedical research challenges and provide creative solutions by combining global crowdsourcing with local incubation of the world's brightest early-career research talents. Each of the highly diverse research teams at BioMed X has access to state-of-the-art research infrastructure and is continuously guided by experienced mentors from academia and industry. At BioMed X, we combine the best of two worlds, academia and industry, and enable breakthrough innovation by making biomedical research more efficient, more agile, and more fun.

For more information, visit bio.mx

Media Contact: Lior Feigin, FINN Partners for AION Labs, [emailprotected], @LiorFeigin, +1 929 588 2016, +972 54 282 4503

Logo - https://mma.prnewswire.com/media/1708278/AION_Labs_Logo.jpg

SOURCE AION Labs


Posted in Artificial Intelligence | Comments Off on AION Labs, Powered by BioMed X, Launches Third Global Call for Application: Artificial Intelligence for Design and Optimization of Antibodies for…

On Mars, a NASA Rover and Helicopters Year of Surprise and Discovery – The New York Times

Posted: at 5:22 am

A year ago, NASA's Perseverance rover was accelerating to a collision with Mars, nearing its destination after a 290-million-mile, seven-month journey from Earth.

On Feb. 18 last year, the spacecraft carrying the rover pierced the Martian atmosphere at 13,000 miles per hour. In just seven minutes, what NASA engineers call the "seven minutes of terror," it had to pull off a series of maneuvers to place Perseverance gently on the surface.

Given the minutes of delay for radio communications to crisscross the solar system, the people in mission control at NASA's Jet Propulsion Laboratory in California were merely spectators that day. If anything had gone wrong, they would not have had any time to attempt a fix, and the $2.7 billion mission, to search for evidence that something once lived on the red planet, would have ended in a newly excavated crater.

But Perseverance performed perfectly, sending home exhilarating video footage as it landed. And NASA added to its collection of robots exploring Mars.

"The vehicle itself is just doing phenomenally well," Jennifer Trosper, the project manager for Perseverance, said.

Twelve months later, Perseverance is nestled within a 28-mile-wide crater known as Jezero. From the topography, it is evident that more than three billion years ago, Jezero was a body of water roughly the size of Lake Tahoe, with rivers flowing in from the west and out to the east.

One of the first things Perseverance did was deploy Ingenuity, a small robotic helicopter and the first such flying machine to take off on another planet. Perseverance also demonstrated a technology for generating oxygen that will be crucial whenever astronauts finally make it to Mars.

The rover then set off on a diversion from the original exploration plans, to study the floor of the crater it landed in. The rocks there turned out not to be what scientists were expecting. It ran into trouble a couple of times when it tried to collect cores of rock, cylinders about the size of sticks of chalk, that are eventually to be brought back to Earth by a future mission. Engineers were able to solve the problems, and most everything is going well.

"It's been a very exciting year, exhausting at times," said Joel Hurowitz, a professor of geosciences at Stony Brook University in New York who is a member of the mission's science team. "The pace of work has been pretty incredible."

After months of scrutinizing the crater floor, the mission team is now preparing to head for the main scientific event: investigating a dried-up river delta along the west rim of Jezero.

That is where scientists expect to find sedimentary rocks that are most likely to contain blockbuster discoveries, maybe even signs of ancient Martian life, if any ancient life ever existed on Mars.

"Deltas are, at least on Earth, habitable environments," said Amy Williams, a professor of geology at the University of Florida and a member of the Perseverance science team. "There's water. There's active sediment being transported from a river into a lake."

Such sediments can capture and preserve carbon-based molecules that are associated with life. "That's an excellent place to look for organic carbon," Dr. Williams said. "So hopefully, organic carbon that's indigenous to Mars is concentrated in those layers."

Perseverance landed not much more than a mile from the delta. Even at a distance, the rover's eagle-eyed camera could make out the expected sedimentary layers. There were also boulders, some as large as cars, sitting on the delta, rocks that were washed into the crater.

"This all tells a fascinating story," said Jim Bell, a planetary scientist at Arizona State University.

The data confirm that what orbital images suggested was a river delta is indeed that and that the history of water here was complex. The boulders, which almost certainly came from the surrounding highlands, point to episodes of violent flooding at Jezero.

"It wasn't just slow, gentle deposition of fine-grained silt and sand and mud," said Dr. Bell, who serves as principal investigator for the sophisticated cameras mounted on Perseverance's mast.

Mission managers had originally planned to head directly to the delta from the landing site. But the rover set down in a spot where the direct route was blocked by sand dunes that it could not cross.

The geological formations to the south intrigued them.

"We landed in a surprising location, and made the best of it," said Kenneth Farley, a geophysicist at the California Institute of Technology who serves as the project scientist leading the research.

Because Jezero is a crater that was once a lake, the expectation was that its floor would be made of rocks that formed out of the sediments that settled to the bottom.

But at first glance, the lack of layers meant they did not look obviously sedimentary, said Kathryn Stack Morgan of NASA's Jet Propulsion Laboratory, the deputy project scientist. At the same time, nothing clearly suggested they were volcanic in origin, either.

"It's really turned into a detective story, sort of, about why this region is one of the most geologically unusual on the planet," said Nicholas Tosca, a professor of mineralogy and petrology at the University of Cambridge in England and a member of the science team.

As the scientists and engineers contemplated whether to circle around to the north or to the south, the team that built a robotic helicopter named Ingenuity got to try out their creation.

The helicopter was a late addition to the mission, meant as a proof-of-concept for flying through the thin air of Mars.

On April 18 last year, Ingenuity rose to a height of 10 feet, hovered for 30 seconds, and then descended back to the ground. The flight lasted 39.1 seconds.

Over the following weeks, Ingenuity made four more flights of increasing duration and speed.

That helped avoid wasting time driving to unexceptional rocks that had looked potentially interesting in images taken from orbit.

"We sent the helicopter and saw the images, and it looked very similar to where we were," Ms. Trosper said. "And so we chose not to drive."

The helicopter continues to fly. It just completed its 19th flight, and it remains in good condition. The batteries are still holding a charge. The helicopter has shown it can fly in the colder, thinner air of the winter months. It was able to shake off most of the dust that fell on it during a dust storm in January.

"Everything's looking green across the board," said Theodore Tzanetos, who leads the Ingenuity team at the Jet Propulsion Laboratory.

In exploring the rocks to the south of the landing site, scientists unlocked some of their secrets when the rover used its drill to grind shallow holes in a couple of them.

"Oh wow, these look volcanic," Dr. Stack Morgan said, remembering her reaction. "Exactly what you'd expect for a basaltic lava flow."

The tools that Perseverance carries to study the ingredients of Martian rocks can take measurements pinpointed on bits of rock as small as a grain of sand. And cameras on the robotic arm can take close-up pictures.

Those observations revealed large grains of olivine, an igneous mineral that can accumulate at the bottom of a large lava flow. Fractures later emerged between the olivine grains and were filled with carbonates, a mineral that forms through interactions with water.

The thinking now is that the Jezero crater floor is the same olivine-rich volcanic rock that orbiting spacecraft have observed in the region. It might have formed before the crater filled with water.

Sediments from the lake probably did cover the rock, with water percolating through the sediments to fill the fractures with carbonate. Then, slowly, over a few billion years, winds blew the sediments away.

That the wispy air on Mars could erode so much rock is hard for geologists on Earth to wrap their minds around.

"You don't find landscapes that are even close to that on Earth," Dr. Farley said.

The most troublesome moments during the first year have occurred during the collection of rock samples. For decades, planetary scientists have dreamed that pieces of Mars could be brought to Earth, where they could study them with state-of-the-art instruments in laboratories.

Perseverance is the first step in turning that dream into reality by drilling cores of rock and sealing them in tubes. The rover, however, has no means to get the rock samples off Mars and back to Earth; that awaits another mission known as Mars Sample Return, a collaboration between NASA and the European Space Agency.

During the development of Perseverances drill, engineers tested it with a wide variety of Earth rocks. But then the very first rock on Mars that Perseverance tried to drill turned out to be unlike all of the Earth rocks.

The rock in essence turned to dust during the drilling and slid out of the tube. After several successes, another drilling attempt ran into problems: pebbles fell out of the tube in an inconvenient part of the rover, the carousel where the drilling bits are stored, and it took weeks of troubleshooting to clean away the debris.

"That was exciting, not necessarily in the best way," Dr. Stack Morgan said. "The rest of our exploration has gone really well."

Perseverance will at some point drop off some of its rock samples for a rover on the Mars Sample Return mission to pick up. That is to prevent the nightmare scenario that Perseverance dies and there is no way to extricate the rocks it is carrying.

The top speed of Perseverance is the same as that of Curiosity, the rover NASA landed in another crater in 2012. But improved self-driving software means it can cover longer distances in a single drive. To get to the delta, Perseverance needs to retrace its path to the landing site and then take a route around the sand dunes to the north.

It could arrive at the delta by late May or early June. Ingenuity will try to stay ahead of Perseverance.

The helicopter flies faster than the rover can drive, but after each flight, its solar panels have to soak up several days of sunshine to recharge the batteries. Perseverance, powered by the heat from a hunk of plutonium, can drive day after day after day.

The helicopter, however, might be able to take a shortcut across the sand dunes.

"We're planning to get to the delta," Mr. Tzanetos said. "And we're discussing what happens beyond the river delta."

But, he added, every day could be the last for Ingenuity, which was designed to last only a month. "You hope that you're lucky enough to keep flying," he said, "and we're going to keep that streak going for as long as we can."

Once Perseverance gets to the delta, the most electrifying discovery would be images of what looked to be microscopic fossils. In that case, "we have to start asking whether some globs of organic matter are arranged in a shape that outlines a cell," said Tanja Bosak, a geobiologist at the Massachusetts Institute of Technology.

It is unlikely Perseverance will see anything that is unequivocally a remnant of a living organism. That is why it is crucial for the rocks to be brought to Earth for closer examination.

Dr. Bosak does not have a strong opinion on whether there was ever life on Mars.

"We are really trying to peer into the time where we have very little knowledge," she said. "We have no idea when chemical processes came together to form the first cell. And so we may be looking at something that was just learning to be life."


Posted in Mars | Comments Off on On Mars, a NASA Rover and Helicopters Year of Surprise and Discovery – The New York Times