The Prometheus League
Breaking News and Updates
- Abolition Of Work
- Ai
- Alt-right
- Alternative Medicine
- Antifa
- Artificial General Intelligence
- Artificial Intelligence
- Artificial Super Intelligence
- Ascension
- Astronomy
- Atheism
- Atheist
- Atlas Shrugged
- Automation
- Ayn Rand
- Bahamas
- Bankruptcy
- Basic Income Guarantee
- Big Tech
- Bitcoin
- Black Lives Matter
- Blackjack
- Boca Chica Texas
- Brexit
- Caribbean
- Casino
- Casino Affiliate
- Cbd Oil
- Censorship
- Cf
- Chess Engines
- Childfree
- Cloning
- Cloud Computing
- Conscious Evolution
- Corona Virus
- Cosmic Heaven
- Covid-19
- Cryonics
- Cryptocurrency
- Cyberpunk
- Darwinism
- Democrat
- Designer Babies
- DNA
- Donald Trump
- Eczema
- Elon Musk
- Entheogens
- Ethical Egoism
- Eugenic Concepts
- Eugenics
- Euthanasia
- Evolution
- Extropian
- Extropianism
- Extropy
- Fake News
- Federalism
- Federalist
- Fifth Amendment
- Financial Independence
- First Amendment
- Fiscal Freedom
- Food Supplements
- Fourth Amendment
- Free Speech
- Freedom
- Freedom of Speech
- Futurism
- Futurist
- Gambling
- Gene Medicine
- Genetic Engineering
- Genome
- Germ Warfare
- Golden Rule
- Government Oppression
- Hedonism
- High Seas
- History
- Hubble Telescope
- Human Genetic Engineering
- Human Genetics
- Human Immortality
- Human Longevity
- Illuminati
- Immortality
- Immortality Medicine
- Intentional Communities
- Jacinda Ardern
- Jitsi
- Jordan Peterson
- Las Vegas
- Liberal
- Libertarian
- Libertarianism
- Liberty
- Life Extension
- Macau
- Marie Byrd Land
- Mars
- Mars Colonization
- Mars Colony
- Memetics
- Micronations
- Mind Uploading
- Minerva Reefs
- Modern Satanism
- Moon Colonization
- Nanotech
- National Vanguard
- NATO
- Neo-eugenics
- Neurohacking
- Neurotechnology
- New Utopia
- New Zealand
- Nihilism
- Nootropics
- NSA
- Oceania
- Offshore
- Olympics
- Online Casino
- Online Gambling
- Pantheism
- Personal Empowerment
- Poker
- Political Correctness
- Politically Incorrect
- Polygamy
- Populism
- Post Human
- Post Humanism
- Posthuman
- Posthumanism
- Private Islands
- Progress
- Proud Boys
- Psoriasis
- Psychedelics
- Putin
- Quantum Computing
- Quantum Physics
- Rationalism
- Republican
- Resource Based Economy
- Robotics
- Rockall
- Ron Paul
- Roulette
- Russia
- Sealand
- Seasteading
- Second Amendment
- Seychelles
- Singularitarianism
- Singularity
- Socio-economic Collapse
- Space Exploration
- Space Station
- Space Travel
- Spacex
- Sports Betting
- Sportsbook
- Superintelligence
- Survivalism
- Talmud
- Technology
- Teilhard De Charden
- Terraforming Mars
- The Singularity
- Tms
- Tor Browser
- Trance
- Transhuman
- Transhuman News
- Transhumanism
- Transhumanist
- Transtopian
- Transtopianism
- Ukraine
- Uncategorized
- Vaping
- Victimless Crimes
- Virtual Reality
- Wage Slavery
- War On Drugs
- Waveland
- Ww3
- Yahoo
- Zeitgeist Movement
- Prometheism
- Forbidden Fruit
- The Evolutionary Perspective
Category Archives: Ai
AI slashes time and cost of drug discovery and development – Nikkei Asia
Posted: July 25, 2021 at 3:26 pm
TOKYO -- Artificial intelligence is transforming the landscape of drug discovery and development. The technology is helping to slash the time and money needed to develop new drugs for COVID-19 and other serious diseases by quickly identifying promising drug candidates.
In the case of COVID-19, the application of AI helped one company come up with a treatment that was approved in the U.S. in a lightning-fast nine months.
British AI startup BenevolentAI identified baricitinib, a drug developed by Eli Lilly for the treatment of rheumatoid arthritis, as a potentially effective COVID-19 drug in just a few days. The medication has been approved as a COVID-19 treatment in the U.S. and Japan. The European Medicines Agency has also begun evaluating baricitinib for use against the coronavirus.
A BenevolentAI specialist team was tasked with using the company's state-of-the-art AI to find an already approved drug that could be repurposed as a COVID-19 treatment. This approach made it possible to win emergency use authorization by the U.S. Food and Drug Administration to treat hospitalized COVID-19 patients in just nine months, instead of the several years typically required. Drug discovery usually involves a long search for candidates and animal testing to evaluate their safety.
BenevolentAI's technology identifies potential drug candidates using data from clinical trials, academic papers, and its own database on diseases, genes and pharmaceuticals. When a target protein is identified, AI finds candidate drugs that act on it.
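BenevolentAI's pipeline is proprietary, but the idea described here, a graph linking drugs, target proteins and disease processes that is queried for compounds acting on a target, can be illustrated with a minimal sketch. The nodes, edge weights and scoring rule below are invented for illustration and are not the company's method.

```python
# Minimal sketch of knowledge-graph drug repurposing: given a disease process,
# rank approved drugs by how strongly the literature/trial graph links them to it.
# All edges and confidence weights below are made-up illustrations.
import networkx as nx

g = nx.Graph()
g.add_edge("baricitinib", "JAK1", weight=0.9)             # drug -> kinase it inhibits
g.add_edge("baricitinib", "AAK1", weight=0.6)             # kinase implicated in viral endocytosis
g.add_edge("AAK1", "SARS-CoV-2 cell entry", weight=0.7)
g.add_edge("remdesivir", "RdRp", weight=0.8)
g.add_edge("RdRp", "SARS-CoV-2 replication", weight=0.9)

def score_drug(graph, drug, disease_process):
    """Multiply edge confidences along a shortest path, a crude proxy for link strength."""
    try:
        path = nx.shortest_path(graph, drug, disease_process)
    except nx.NetworkXNoPath:
        return 0.0
    score = 1.0
    for a, b in zip(path, path[1:]):
        score *= graph[a][b]["weight"]
    return score

for drug in ["baricitinib", "remdesivir"]:
    print(drug, round(score_drug(g, drug, "SARS-CoV-2 cell entry"), 3))
```

Running the sketch ranks baricitinib above remdesivir for the "cell entry" process simply because a weighted path exists, which is the flavor of reasoning the article describes, compressed into a toy.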
Applying AI to drug discovery and development is expected to sharply reduce the time required to create new drugs, a process that usually takes nine to 17 years. That time could be cut in half for approved drugs that are repurposed for other uses.
In February 2020, soon after the World Health Organization declared the COVID-19 outbreak to be a public health emergency of international concern, BenevolentAI's first paper on baricitinib as a candidate COVID-19 treatment, published in the British medical journal The Lancet, found that the drug may inhibit the ability of the virus to infect lung cells and cause inflammation in patients.
Eli Lilly, which owns the rights to baricitinib, and the National Institute of Allergy and Infectious Diseases launched a study in the U.S. to examine the efficacy and safety of the drug as a potential treatment for hospitalized COVID-19 patients. Because the study found that baricitinib can shorten recovery times and improve clinical outcomes for patients, the FDA granted emergency use authorization for the drug in November last year. The drug has been shown to reduce mortality in hospitalized patients by 38% when used in combination with remdesivir, an antiviral medication, according to data released by Eli Lilly.
BenevolentAI is also developing drugs on its own, focusing on treatments for more than 10 diseases, including atopic dermatitis and amyotrophic lateral sclerosis (ALS), also known as motor neurone disease or Lou Gehrig's disease.
The company started a clinical trial on a potential treatment for atopic dermatitis in February. It is also working with AstraZeneca to develop a treatment for chronic kidney disease.
Use of AI in drug discovery and development is spreading around the world. Sumitomo Dainippon Pharma, in partnership with Exscientia, an Oxford, England-based AI drug discovery startup, has found a candidate treatment for obsessive-compulsive disorder. Last year, the Japanese drugmaker began clinical trials of the drug candidate in Japan to evaluate its safety.
"We found the candidate in less than a year using AI for the process, which typically takes four and a half years," said an executive at Sumitomo Dainippon Pharma. In May, the company started Phase 1 clinical trials in the U.S. on an Alzheimer's disease psychosis drug candidate designed using Exscientia's AI technology.
AI does not do all the work. It is used to find candidate drugs and narrow down the design of new drugs by crunching huge amounts of data from scientific papers and experiments. People must work out which direction to take the research and development itself.
Exscientia is attracting the attention of pharmaceutical and biotechnology companies around the world. Evotec, a German biomedical company, has jointly developed a new cancer treatment with Exscientia. Human clinical trials of the A2a receptor antagonist began in April, according to Evotec. The candidate drug was discovered eight months after the two companies launched the project.
Taisho Pharmaceutical of Japan and Insilico Medicine, a Hong Kong-based AI startup, started a joint research project last fall to identify therapeutic compounds that may slow the cellular effects of aging. Insilico is using its AI networks to identify therapeutic targets and find druglike molecules that target senescent cells. The accumulation of these cells as people age is thought to be behind a variety of diseases.
Insilico's job is to identify the role senescence plays in specific cells, tissues and diseases, with different proteins implicated for each, and to design molecules to tackle those targets. Taisho will validate the computer-generated compounds through in vitro and in vivo testing.
Some Japanese AI startups are playing catch-up with such overseas leaders in the field. Hacarus, a Kyoto-based AI startup, and the University of Tokyo in June announced the start of a joint research project to develop cures for Alzheimer's disease and Parkinson's disease. Both are caused by the accumulation of certain proteins in the brain. Using AI to develop drugs for these types of disease is still rare.
The project is aimed at creating a system in one year to efficiently search for compounds that could be candidate drugs. The AI-driven approach will "dramatically improve the speed and accuracy of the research process, which has traditionally depended on human hands and eyes," said Taisuke Tomita, a professor at the University of Tokyo's Graduate School of Pharmaceutical Sciences.
More than 120 Japanese companies and universities have joined the Life Intelligence Consortium, an industry-academia collaboration aimed at applying AI to the life sciences. The consortium has already created around 20 prototype AI programs for drug discovery and development.
"AI will soon become an essential technology for drug discovery and development," said Yasushi Okuno, a Kyoto University professor. The technology offers the opportunity to "reconsider the conventional wisdom that developing a new drug takes 10 years."
Coming up with new drugs has become hugely expensive. The average cost to develop a prescription drug that makes it to market soared to some $2.9 billion in the 2000s, up from $180 million in the 1970s, according to an estimate by Tufts University in the U.S. The rise in drug development costs has been far steeper than overall inflation.
Many new drugs have been developed over years, especially treatments for cancers and "lifestyle diseases." Many substances that act on important molecules implicated in the development of diseases have already been identified and developed into drugs. As a result, it is becoming increasingly difficult to develop effective new drugs. And a greater emphasis on safety has stretched out the time needed for clinical trials.
Only one in about 30,000 candidate substances actually becomes a new drug, according to the Japan Pharmaceutical Manufacturers Association. The process can take nine to 17 years. R&D spending by pharmaceutical companies is equal to roughly 10% of annual sales, compared with around 4% for the manufacturing sector as a whole.
In their efforts to come up with new treatments, drug companies are devoting more of their resources to the development of biopharmaceuticals -- complex medicines made from living cells or organisms, often created using cutting-edge technologies, including antibody drug conjugates. Developing and manufacturing biopharmaceuticals is thus complicated and costly.
One such drug, Nivolumab, is sold under the brand name Opdivo. It was first used in Japan in 2014 to treat a variety of cancers and initially cost some 35 million yen ($318,000) a year to administer. This led to complaints that its use would further strain the government's already huge and growing health care spending.
AI is among the technologies that could, by drastically shortening drug development time and cost, help to keep prices in check, thereby improving access to new treatments for previously intractable diseases and enhancing the quality of life for everyone.
Read the original:
AI slashes time and cost of drug discovery and development - Nikkei Asia
Posted in Ai
Comments Off on AI slashes time and cost of drug discovery and development – Nikkei Asia
Why Government Needs More Women in AI – GovernmentCIO Media & Research
Posted: at 3:25 pm
Women are shaping AI advancement in federal IT and helping to remove biases in data sets.
Women in tech can supercharge teams' creativity and help them stay under budget, meet deadlines and improve outcomes, studies show, so it's time for more women to pursue tech careers, according to a lead Department of Labor official speaking at GovernmentCIO Media & Research's Women Tech Leaders event Thursday.
Kathy McNeill, who leads emerging technology strategy at the agency, said the federal government needs more women in AI to produce accurate data sets and data analysis.
"AI is a reflection of those who develop it and the data sets we use," she said during a fireside chat.
McNeill provided an example of how Google Translate took the phrase "she is a doctor and he is a babysitter" and translated it to "he is a doctor and she is a babysitter" in another language, to illustrate biases inherent in artificially intelligent algorithms.
"A lot of systems were developed 10 to 20 years ago," she said. "Think how we've evolved and changed since then. Some of our government systems are even older than that; think of the biases that must exist in those systems. There are data sets we use today that were developed in the '60s that had women tagged as homemakers when in fact they were teachers, or scientists, or lawyers. We need women, and we need women of diverse backgrounds, to make sure we're doing real technical work to minimize the biases in systems."
According to a 2015 American Association of University Women study, the number of women in tech careers dropped from 35% in 1990 to 26% in 2013.
McNeill believes one reason for this is that women can be questioned and treated with skepticism.
"Tracey Cho, she's a rock star in Silicon Valley, she's got technical chops at Google, Pinterest and Facebook," McNeill said. "She was quoted in an article in 2019 ... talking about computers and computer science. She said women are still questioned for their technical chops and treated skeptically, sometimes straight out [with] hostility. We need to create a culture that's inclusive for women."
Technology jobs in government are chronically underfilled, and with more women pursuing STEM education than any time in history, now is the time to pursue a career in federal IT, McNeill said. Women are already shaping AI advancement at federal agencies in significant ways.
Krista Kinnard, division director of technology at the Department of Labor, is pioneering use of AI to help screen candidate resumes for human resources, improve compliance training and boost cybersecurity controls.
"In government we have dimensional challenges," McNeill said. "When it comes to AI and machine learning and modern technology, we have a mix of aging and new technology, [so] we have very complex problems to solve. We're advancing so rapidly, there are so many ways to get involved. We, as women, bring such a rich perspective to the tech team."
McNeill advised women in tech or interested in tech careers to find a mentor, be a mentor, and be a lifelong learner.
"It's also important to be true to who you are," she added. "Always look for ways to learn about the new technologies and solutions. We put a lot of pressure on ourselves to get everything right. You're not going to get everything right. So be agile, start fast, fail, learn, and iterate. Speak up, volunteer for the tough project, the new position. Be tenacious. Don't take no for an answer."
View original post here:
Why Government Needs More Women in AI - GovernmentCIO Media & Research
Posted in Ai
Comments Off on Why Government Needs More Women in AI – GovernmentCIO Media & Research
Bezos Makes History, Common Sense AI And More In This Week's Top News – Analytics India Magazine
Posted: at 3:25 pm
On Tuesday morning, Jeff Bezos and three passengers reached the edge of space and safely returned after a flight of just over 10 minutes that the billionaire businessman hopes will kick-start an expansive new era for human space travel. "What we're doing is the first step of something big. Big things start small," Bezos said. Tuesday's space flight was Blue Origin's first with passengers on board. Bezos was accompanied by his brother Mark, 82-year-old aviator Wally Funk and 18-year-old student Oliver Daemen.
Founded in 2000, Blue Origin currently employs more than 3,500 people spread over Florida, California and other locations. Blue Origin and Virgin Galactic have been applauded for launching what are, in a sense, the world's first commercial astronauts. But it looks like those laurels will have to wait for a while. The Federal Aviation Administration (FAA) spoiled the party by releasing new rules on the day of Bezos' historic flight. According to the new rules, to qualify as commercial astronauts, space-goers must travel 50 miles (80 km) above the Earth's surface and must have demonstrated activities during flight that were essential to public safety or contributed to human space flight safety.
Alphabet's Moonshot Factory X has launched a new robotics company, Intrinsic, which develops software and AI tools that use sensor data from a robot's environment to learn from, and quickly adapt to, the real world. The team at Intrinsic features a diverse group of talent from robotics, film production, design, computer perception, and mechanical design.
Founded in 2010 by Google founders Larry Page and Sergey Brin, X was established with the goal to work on moonshots: far-out, sci-fi sounding technologies that could one day make the world a radically better place. So far, X has incubated hundreds of different moonshot projects in fields ranging from computational agriculture to cybersecurity; from seawater fuel to machine learning and more.
On Friday, NASA announced that it has awarded a new contract to Elon Musk's SpaceX to provide launch services for Earth's first mission to conduct detailed investigations of Jupiter's moon Europa. The Europa Clipper mission, which will launch in October 2024, will use SpaceX's Falcon Heavy rocket. The total contract award amount for launch services is approximately $178 million.
"Europa Clipper will conduct a detailed survey of Europa and use a sophisticated suite of science instruments to investigate whether the icy moon has conditions suitable for life. Key mission objectives are to produce high-resolution images of Europa's surface, determine its composition, look for signs of recent or ongoing geological activity, measure the thickness of the moon's icy shell, search for subsurface lakes, and determine the depth and salinity of Europa's ocean," said NASA in a statement.
On Wednesday, the Federal Trade Commission voted to enforce laws around the Right to Repair, thereby ensuring that US consumers will be able to repair their own electronic and automotive devices. Right to repair initiatives would require smartphone, laptop and other device makers to supply their products with a how-to-repair manual to assist their customers. They would also establish more transparency with regard to green initiatives and restrict makers from resorting to planned obsolescence. This resolution might also encourage responsible marketing and advertising.
At the 2021 International Conference on Machine Learning (ICML), DARPA, IBM, MIT and Harvard released a new dataset for benchmarking AI intuition, along with two machine learning models representing different approaches to the problem. According to the researchers, this work aims to accelerate the development of AI that exhibits common sense. These tools rely on testing techniques that psychologists use to study the behavior of infants, and the work is an extension of neuro-symbolic AI research, which combines the logic of symbolic AI algorithms with the deep learning capabilities found in neural networks.
Steve Jobs is out there challenging the status quo yet again. Jobs' job application is being auctioned, and this is the first time an item has been made available for auction in both physical and NFT formats. Dated 1973, this artifact will go out in only one form, either physical or digital, based on the highest bidder and their choice of format. Ever since their introduction, NFTs (non-fungible tokens) have been a rage, fetching as much as $60 million for artwork. But in the case of Jobs' application, interest in the physical format has surpassed the NFT cult by a huge margin. The auction, which is going to end in four days' time, is a key litmus test for the future of decentralised collectibles.
Uber's trucking division, Freight, has spent $2.25 billion to acquire Transplace, a company that makes shipping software. According to reports, Uber Freight will acquire Transplace from TPG Capital, the private equity platform of alternative asset firm TPG that acquired Transplace in 2017. Uber has recently shut down its AI research labs and is currently focusing on investing in talent houses that can boost its freight economy.
Go here to read the rest:
Bezos Makes History, Common Sense AI And More In This Week's Top News - Analytics India Magazine
Posted in Ai
Comments Off on Bezos Makes History, Common Sense AI And More In This Week's Top News – Analytics India Magazine
How AI Is Transforming the Technology Workplace: Lessons from JPMorgan Chase – EnterpriseAI
Posted: at 3:25 pm
July 23, 2021 by Ken Chiacchia, Pittsburgh Supercomputing Center/XSEDE
AI poses great potential for improving productivity in the technology workplace, Salwa Alamir of JPMorgan Chase said in her plenary session at PEARC21. A data-driven ML approach can help identify worker skills necessary for a job or a role in a given project, determine how projects can be sped up via improved task management and boost understanding of how code can be better maintained. ML- and data-driven analyses of three areas (people, tooling and process) have shown promise in improving the productivity of software developers, with further applications across the vast banking enterprise.
The PEARC conference series provides a forum for discussing challenges, opportunities and solutions among the broad range of participants in the research computing community. This community-driven effort builds on successes of the past, and aims to grow and be more inclusive by involving additional local, regional, national and international cyberinfrastructure and research computing partners spanning academia, government and industry. PEARC21, Evolution Across All Dimensions, was offered this year as a virtual event (July 19-22).
Tooling: Data Driven Approach
JPMorgan Chase employs more than 50,000 technologists, with an annual budget of more than $11 billion, Alamir said. The AI team began by asking whether the company's productivity-to-cost balance could be improved via machine learning. They devised a three-pronged approach that split the problem into three productivity enablers: people, process and tooling.
For the tooling analysis, the team analyzed the correlation between productivity tool adoption with industry standard metrics such as lead time for changes, change failure rates, deployment frequencies and mean time to recovery after failures.
Salwa Alamir of JPMorgan Chase
"We decided to use some of these metrics to decide what are the most useful tools developers can use," she said. The team also decided that unique incidents are not as bad as incidents that happen over and over, given that the latter imply developers may be addressing the symptoms and not the root causes of a given incident. They identified a number of tools associated with better metrics, including continuous integration/continuous delivery tools, cloud tools, training portals and task-management tools, whose use the company should prioritize among its developers.
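The four metrics named above (lead time for changes, change failure rate, deployment frequency and mean time to recovery) are the standard delivery metrics often called DORA metrics, and each reduces to simple arithmetic over deployment and incident records. The toy records below are invented to show the calculation; they are not JPMorgan Chase data.

```python
# Hypothetical deployment/incident records; timestamps in hours for simplicity.
from statistics import mean

deployments = [
    {"commit_at": 0,  "deployed_at": 30, "failed": False},
    {"commit_at": 10, "deployed_at": 52, "failed": True},
    {"commit_at": 40, "deployed_at": 60, "failed": False},
    {"commit_at": 70, "deployed_at": 95, "failed": False},
]
incidents = [{"opened_at": 52, "resolved_at": 55}]   # outage caused by the failed deploy
window_days = 7

lead_time = mean(d["deployed_at"] - d["commit_at"] for d in deployments)
change_failure_rate = sum(d["failed"] for d in deployments) / len(deployments)
deployment_frequency = len(deployments) / window_days
mttr = mean(i["resolved_at"] - i["opened_at"] for i in incidents)

print(f"lead time for changes: {lead_time:.1f} h")
print(f"change failure rate:   {change_failure_rate:.0%}")
print(f"deployments per day:   {deployment_frequency:.2f}")
print(f"mean time to recovery: {mttr:.1f} h")
```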
People: AI for Skills Understanding
"For people, we were interested in skill sets," she said. "How can we identify skill sets and retain the top talent we have in the firm?" The focus of the exercise was automated review of resumes.
At any time, JPMorgan Chase has numerous job positions open globally. The number of resumes it receives for these positions is on the order of a million; manual review of these resumes is often time-prohibitive. Aids to automate the process to date, such as keyword searches, have proved inadequate to the task.
Resume formats also proved a challenge, Alamir reported. For example, a resume written in columns can't be read line by line, or the AI risks conflating information pertaining to different topics. Skills also change over time, so no static list of required skills will remain relevant for long; the AI would have to detect new skills not present in the training data.
The team hit upon a four-dimensional rubric, featuring a deep soft-skill extractor, a sectional hard-skill extractor, a project-delivery extractor and an experience extractor designed to extract skills from any free text. While intended for situations of applicant surplus, they found that their algorithms could also be applied to the opposite problem of too few applicants by searching for skills that are relevant for other open positions.
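The talk describes the extractors only at a high level. A toy sketch of the general idea, matching hard skills, soft skills and years of experience against free resume text rather than a fixed job posting, might look like the following; the vocabularies and regular expressions are assumptions for illustration, not the bank's extractors.

```python
import re

# Illustrative vocabularies; a production system would learn these rather than hard-code them.
HARD_SKILLS = {"python", "java", "spark", "kubernetes", "sql"}
SOFT_SKILLS = {"mentoring", "communication", "leadership"}

resume = """
EXPERIENCE
Led a team of 5 engineers; 6 years building Spark and SQL pipelines.
SKILLS
Python, Kubernetes, mentoring junior developers.
"""

def extract(text):
    """Pull hard skills, soft skills and an experience estimate from free resume text."""
    tokens = set(re.findall(r"[a-z]+", text.lower()))
    years = [int(y) for y in re.findall(r"(\d+)\s+years?", text.lower())]
    return {
        "hard_skills": sorted(tokens & HARD_SKILLS),
        "soft_skills": sorted(tokens & SOFT_SKILLS),
        "experience_years": max(years, default=0),
    }

print(extract(resume))
```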
Process: AI for Task Management
The process productivity enabler portion of the effort focused in part on project management. The team developed a technique able to detect patterns of mismanagement among projects. These patterns may include "the cliff," in which projects languish for long periods and then the work is performed at the last minute; "the gap," in which the project proceeds according to schedule for a time but then enters a phase in which the remaining work isn't completed; and "the wave," in which successive bursts of task completion and addition of new tasks keep the project from reducing the amount of outstanding work. Notably, employees report that task planning and management is time consuming, with sudden workloads posed by unexpected events.
To improve on the current, manual planning method, Alamir and her colleagues leveraged historical records kept at JPMorgan Chase that tracked every development task ever created at the company. Learning from these data, their algorithm considers task difficulty, priority, precedence, duration, developer skills and additional constraints specific to a particular project to assign tasks optimally and minimize time to completion.
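The optimization itself is not described in detail, so the sketch below stands in with a simple greedy list scheduler: each ready task goes to a qualified developer who can start it earliest, respecting skills and precedence. The tasks, skills and durations are invented, and the greedy rule is an assumption, not the team's published method.

```python
# Greedy list scheduling over an (assumed acyclic) dependency graph.
tasks = {                     # name: (duration_days, required_skill, prerequisites)
    "design": (3, "architecture", []),
    "api":    (5, "backend", ["design"]),
    "ui":     (4, "frontend", ["design"]),
    "tests":  (2, "backend", ["api", "ui"]),
}
developers = {"ana": {"architecture", "backend"}, "bo": {"frontend", "backend"}}

free_at = {d: 0 for d in developers}   # when each developer next becomes free
finish = {}                            # completion day per finished task
schedule = []
while len(finish) < len(tasks):
    for name, (dur, skill, deps) in tasks.items():
        if name in finish or not all(d in finish for d in deps):
            continue                                   # already scheduled or not ready yet
        ready = max([finish[d] for d in deps], default=0)
        dev = min((d for d, s in developers.items() if skill in s),
                  key=lambda d: max(free_at[d], ready))
        start = max(free_at[dev], ready)
        finish[name] = start + dur
        free_at[dev] = finish[name]
        schedule.append((name, dev, start, finish[name]))

for row in schedule:
    print(row)
print("project completes on day", max(finish.values()))
```

A real optimizer would also weigh task difficulty and priority, as the article notes, but even this toy version shows how precedence and skills jointly determine the plan.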
The team has a paper on that work that has been accepted for publication. Moreover, an informal survey of the companys development teams elicited positive reviews of the manageability of the AI-generated project plans.
Process: AI for Code Maintenance
Another aspect of the process study was that of the vast network of scripts underlying the companys systems. Some of its code repositories are huge, containing more than 10 million lines created by thousands of developers. Development can be accelerated when common tasks are performed by the same piece of code in the repository. But this approach poses risks.
"It causes a dependency on scripts that developers may not be aware of," Alamir said. "If a developer decides to change a function, [they] don't know how many people are using it. It may happen that ten teams' [code] fails because of that change." Uncovering and correcting those failures manually can take months.
The AI team developed a multistep approach to solve the issue. By generating a network graph through static analysis of the code repository, they were able to visualize the directional code dependencies in a given program. By parsing release notes they were able to find deprecated functions requiring updates. The AI is able to automate that process when an update is available, or alert developers when an updated script isn't available at a high degree of confidence and the code must be corrected manually. Finally, unit tests of the updated scripts and scripts with dependencies can confirm a successful update.
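The article gives only the outline: build directed dependency edges through static analysis, then flag callers of functions that release notes mark as deprecated. A compressed sketch of that idea, with an invented two-file repository and release note, is shown below; it is not JPMorgan Chase's tooling.

```python
import ast
import re

# Two invented source files standing in for a large shared repository.
repo = {
    "util.py":  "def old_hash(x):\n    return hash(x)\n\ndef new_hash(x):\n    return hash((x, 1))\n",
    "batch.py": "from util import old_hash\n\ndef run(rows):\n    return [old_hash(r) for r in rows]\n",
}
release_notes = "Deprecated in v2.1: util.old_hash (use util.new_hash instead)."

# Step 1: static analysis -> directed "calls" edges (caller file -> called function name).
calls = []
for path, src in repo.items():
    for node in ast.walk(ast.parse(src)):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            calls.append((path, node.func.id))

# Step 2: parse release notes for deprecated names (naive pattern, illustration only).
deprecated = set(re.findall(r"Deprecated[^:]*:\s*util\.(\w+)", release_notes))

# Step 3: alert on every caller that still depends on a deprecated function.
for path, fn in calls:
    if fn in deprecated:
        print(f"{path}: calls deprecated function {fn}() - needs update")
```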
The J.P. Morgan team has to date not implemented their algorithms at a scale large enough to demonstrate improvement in the three productivity enablers. Alamir is confident, though, that the size of the enterprise will greatly multiply even modest gains.
"One hour more productivity per week per developer in such a large company will add up," she said. "We're applying economies of scale; even just a little bit of improvement could end up being huge."
Read additional coverage of PEARC21 on our sister site HPCwire.
Read the rest here:
How AI Is Transforming the Technology Workplace: Lessons from JPMorgan Chase - EnterpriseAI
Posted in Ai
Comments Off on How AI Is Transforming the Technology Workplace: Lessons from JPMorgan Chase – EnterpriseAI
Battlefield 2042 AI Bots Will Be "Really Hard" to Tell Apart From Humans Says Devs – MP1st
Posted: at 3:25 pm
With Battlefield 2042 confirmed to feature AI bots (that can't be turned off) to make sure servers constantly stay full, some might be wondering how these Battlefield 2042 AI bots will behave in-game. As it turns out, very well, it seems; at least according to Ripple Effect Studios, which is co-developing Battlefield 2042 alongside the main studio over at DICE Sweden.
Talking about the difficulty of the AI in Battlefield 2042's Battlefield Portal mode, Design Director Justin Wiebe of Ripple Effect Studios states that players will have a really hard time telling the difference between AI bots and real human players.
"We've tried to put a lot of effort into making them play just like a player would. So it would be really hard for people to tell the difference between an AI and a real human player, because they will run around, they will drive vehicles, they will pick each other up, they will drop [each other] off at objective locations and things like that ... it's a very intelligent system. And then you can tweak and tune some of the things they are and aren't allowed to do, including how difficult they are to play against."
This is very encouraging to hear. If the AI bots really are this good in-game, I suspect some players would rather have these as squadmates than human players, since some of the latter fail at the most basic of team-oriented tasks such as giving ammo, healing, and the like. That said, given how Battlefield 2042 AI bots are confirmed to not be able to use Specialties and Traits, they are severely limited in what they can do.
Source: VG247
Read more:
Battlefield 2042 AI Bots Will Be "Really Hard" to Tell Apart From Humans Says Devs - MP1st
Posted in Ai
Comments Off on Battlefield 2042 AI Bots Will Be "Really Hard" to Tell Apart From Humans Says Devs – MP1st
Surgeon and researcher innovate with mixed reality and AI for safer surgeries – Healthcare IT News
Posted: at 3:25 pm
A University of Oklahoma researcher and a surgeon at OU Health, based in Oklahoma City, had a vision of using AI to visualize superimposed and anatomically aligned 3D CT scan data during surgery. The mission was to augment every surgery.
THE PROBLEM
"Compared to a pilot flying a plane or even a regular Google Maps user on his way to work, surgeons today have their instruments clustered behind them hanging on the wall," said Mohammad Abdul Mukit, an MS student in electrical and computer engineering at the University of Oklahoma, and a graduate fellow and research assistant. His research focuses on applications of computer vision, extended reality and AI in medical surgeries.
"The Google Maps user or the pilot gets constant, real-time updates regarding where they are, what to do next, and other vital data that helps them make split-second decisions," he explained. "They don't have to plan the trip for days or memorize every turn and detail of every landmark along the way. They just do it."
On the other hand, surgeons today have to do rigorous surgical planning, memorize the specifics of each unique case, and know all the necessary steps to ensure the safest possible surgery. Then they engage in complex procedures for several hours, with no targeting or homing devices or head-mounted displays to assist them.
"They have to feel their way to their objective and hope everything goes as they planned," Mukit said. "Through our research, we aim to change this process forever. We are making the 'Google Maps for surgery.'"
PROPOSAL
To turn this vision into reality, Mukit and OU Health plastic and reconstructive surgeon Dr. Christian El Amm have been working together since 2019. This journey, however, started in 2018, with El Amm's collaboration with energy technology company Baker Hughes.
BH specializes in using augmented reality/mixed reality and computed tomography scans to create 3D reconstructions of rock specimens. For geologists and oil and gas companies, this visualization is extremely helpful as it assists them to efficiently plan and execute drilling operations.
Mohammad Abdul Mukit, University of Oklahoma
This technology caught the attention of El Amm. He envisioned that this technology combined with AI could allow him to visualize superimposed and anatomically aligned 3D CT scan data during surgery. This could also be used to see reconstruction steps he had planned during surgery while never losing sight of the patient.
However, several key challenges needed to be solved to get a prototype mixed reality system ready for use in surgery.
MEETING THE CHALLENGE
"During the year-long collaboration, the BH team created solutions for those challenges that, until that time, were unsolved," Mukit recalled. "They implemented a client/server system. The server a high-end PC equipped with RGBD cameras would do all the computer vision work to estimate the six DoF pose of the patient's head.
"It would then stream the stored CT scan data to the client device, a Microsoft Hololens-1, for anatomically aligned visualization," he continued. "BH developed a proprietary compression algorithm that enabled them to stream a high volume of CT scan data. BG also integrated a proprietary AI engine to do the pose estimation."
This was a complex engineering project done in a very short time. After this prototype was completed, the team had a better understanding of the limitations of such a setup and the need for a better system.
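The compression and pose-estimation engines were Baker Hughes' proprietary work, but the geometric core of the setup, applying an estimated six-degrees-of-freedom head pose to stored CT data so it renders anatomically aligned, is standard rigid-body math. The sketch below applies an assumed pose (a yaw rotation plus a translation) to a few invented CT landmark points; it is an illustration, not the prototype's code.

```python
import numpy as np

# Assumed 6-DoF pose from the head tracker: 30-degree yaw plus a translation (meters).
yaw = np.radians(30.0)
R = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
              [np.sin(yaw),  np.cos(yaw), 0.0],
              [0.0,          0.0,         1.0]])
t = np.array([0.10, -0.05, 0.30])

# 4x4 homogeneous transform: CT (scanner) coordinates -> headset (world) coordinates.
T = np.eye(4)
T[:3, :3] = R
T[:3, 3] = t

# A few invented CT landmark points (meters, homogeneous coordinates).
ct_points = np.array([[0.00,  0.00, 0.00, 1.0],    # nasion
                      [0.03,  0.07, 0.02, 1.0],    # left orbital rim
                      [-0.03, 0.07, 0.02, 1.0]])   # right orbital rim

aligned = (T @ ct_points.T).T[:, :3]   # where the hologram should be drawn
print(np.round(aligned, 3))
```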
"The prototype system was somewhat impractical for a surgical setting, but it was essential for better understanding our needs," Mukit said. "First, the system couldn't estimate the head pose in surgical settings when most of the patient's body was covered in clothing except the head. Next, the system needed time-consuming camera calibration steps every time we exited the app.
"This was a problem since according to our experience, surgeons accept only those devices that just work from the get-go," he continued. "They don't have the time to fiddle around with technology while they are concentrating on life-altering procedures. We also deeply felt the need for the options to control the system via voice commands. This is an essential element when it comes to surgical settings as the surgeons will always have their hands busy."
Surgeons will not be contaminating their hands by touching a computer for controlling the system or by taking off the device for recalibration. The team realized that a new, more convenient and seamless system was essential.
"I started working on building a better system from scratch in 2019, once the official collaboration ended with BH," Mukit said. "Since then, we have moved most of the essential tasks to the edge, the head-mounted display itself. We also leveraged CT scan data to train and deploy machine learning models, which are more robust in head pose estimation than before.
"We developed 'marker-less tracking,' which allows the CT scan or other images to be superimposed using artificial intelligence instead of cumbersome markers to guide the way," he added. "We then eliminated the need for any manual camera calibration."
Finally, they added voice commands. All these moves made the apps/system plug-and-play for surgeons, Mukit said.
"Due to their convenience and usefulness, the apps were very warmly welcomed by the OU-Medicine surgeons," he noted. "Suddenly ideas, feature requests, queries were just pouring in from different medical experts. I realized then that we had something really special in our hands and that we had only scratched the surface. We started developing these features for each unique genre of surgery."
Gradually, this made the system enriched with various useful features and led to unique innovations, he added.
RESULTS
El Amm has begun using the device during surgical cases to enhance the safety and efficiency of complex reconstructions. Many of his patients come to him for craniofacial reconstruction after a traumatic injury; others have congenital deformities.
Thus far, he has used the device for several cases, including reconstructing a patient's ear. The system took a mirror image of the patient's other ear, then the device overlaid it on the other side, allowing El Amm to precisely attach a reconstructed ear. In the past, he would cut a template of the ear and aim for precision using the naked eye.
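Geometrically, the mirroring step amounts to reflecting the surface points of the intact ear across the mid-sagittal plane. A minimal sketch of that operation follows; the points and the choice of x = 0 as the mirror plane are illustrative assumptions, not the actual surgical planning software.

```python
import numpy as np

# Invented surface points of the intact right ear (meters; x is the left-right axis).
right_ear = np.array([[0.072, 0.010,  0.015],
                      [0.075, 0.018,  0.002],
                      [0.070, 0.005, -0.010]])

# Reflect across the mid-sagittal plane (x = 0) to get a template for the left side.
mirror = np.diag([-1.0, 1.0, 1.0])
left_ear_template = right_ear @ mirror

print(left_ear_template)
```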
In another surgical case, which required an 18-step reconstruction of the face, the device overlaid the patient's CT scan on top of his real bones.
"Each one of those bones needed to be cut and moved in a precise direction," El Amm said. "The device allowed us to see the bones individually, then it displayed each of the cuts and each of the movements, which allowed the surgeon to verify that he had gone through all those steps. It's basically walking through the steps of surgery in virtual reality."
ADVICE FOR OTHERS
"When you change the way you see the world, you change the world you see," Mukit said. "That is what mixed reality was made for. MR is the next general-purpose computer. Powerful technology will no longer be in your pockets or at your desks.
"Through MR, it will be integrated with your human self," he continued. "It will change how you solve problems, which in turn will lead to new creative ways of solving problems with AI. I think that within the next few years we are going to see another technology revolution. Especially after a mixed reality head-set is unveiled in 2023, which is reported to be lighter than any other visors in the market."
Currently, almost every industry is integrating mixed reality headsets into its business, and rightly so, as the gains are evident, he added.
"This technology is now mature enough for countless possible applications in almost every industry and especially in healthcare," he concluded. "Mixed reality has not made its way fully into this industry yet. We have only scratched the surface, and already in a few months, we have seen such an overwhelming tsunami of ideas from experts. Ideas that now can be implemented with ease.
"These application scenarios range from education and training to making surgeries safer, faster and more economical for both the surgeons and patients. The time to jump into mixed reality is now."
Twitter: @SiwickiHealthIT. Email the writer: bsiwicki@himss.org. Healthcare IT News is a HIMSS Media publication.
Visit link:
Surgeon and researcher innovate with mixed reality and AI for safer surgeries - Healthcare IT News
Posted in Ai
Comments Off on Surgeon and researcher innovate with mixed reality and AI for safer surgeries – Healthcare IT News
How AI helped Israel defeat Hamas in the recent war in Gaza – The Jerusalem Post
Posted: at 3:25 pm
A buzzword for the past few years, artificial intelligence (AI), is changing not only the civilian world but militaries and battlefields across the globe, with the Israel Defense Forces at the forefront.
The IDF has been working on AI for decades after troops and officers first recognized the need and realized that the military and defense establishment had to invest time and manpower in the development of the technology.
"The world has changed; we are living in a world full of data," Maj. M., a senior officer in the C4I Directorate, told The Jerusalem Post as we sat in his office in a nondescript base in central Israel.
In the small base with old buildings, his office is full of plaques and awards for his unit, which has been at the forefront of the IDF's digital revolution.
"Since 2005 the technology has made a revolution in the military. It's allowed us to acquire a lot more intelligence with a lot more velocity."
With battlefields changing, a central part of the IDF's Momentum multiyear plan is to transform the IDF into a smart army, holistic and tech-friendly, using simulators for more and more battalions and using AI to significantly increase its target bank.
"The IDF is now data-driven," Maj. M. said, adding that it's no small challenge to take the data, use algorithms to analyze them, and get them to the troops on the front lines.
While the Israeli military had previously relied on what was already available on the civilian market, adapted for military purposes, in the years before the fighting the IDF established an advanced AI platform that centralized all data on terrorist groups in the Strip in one place and enabled the analysis and extraction of intelligence.
"For the first time, a multidisciplinary center was created that produces hundreds of targets relevant to developments in the fighting, allowing the military to continue to fight as long as it needs to with more and more new targets," a senior officer said at the time.
While the IDF had gathered thousands of targets in the densely populated coastal enclave over the past two years, hundreds were gathered in real time thanks to programs developed by soldiers in Unit 8200 who pioneered algorithms and code.
Troops from Maj. M.s unit were sent to the Gaza Division to help soldiers and commanders understand all the data they had been given.
The military believes that using AI helped shorten the length of the fighting, having been effective and quick in gathering targets using super-cognition.
And in the North, using innovative intelligence and advanced technology, the IDF's target bank in the Northern Command is 20 times larger than the target bank the military had in 2006, with thousands of targets ready to be attacked, including headquarters, strategic assets and weapons storehouses.
In addition to AI being used to gather intelligence and targets, the IDF is also using more robotic platforms and drones along its borders.
"We want our borders to be smart and deadly. Instead of putting troops at risk, we can deploy a semiautonomous vehicle with sensors and cameras to do the same job," Maj. M. said. "But there's always a person sitting in the command room operating it."
And while troops did not maneuver inside Gaza during the fighting in May, future battlefields will see troops on the front lines with all the data they need in real time, such as what weapons they need to hit targets, who will support them, and more, Maj. M. said.
Along with the IDF, Israeli defense companies such as Israel Aerospace Industries and Rafael Advanced Defense Systems have also been pioneering AI technology for years.
"IT'S BEEN the buzzword for the past few years; everyone is using it," Dr. Irit Idan, executive VP of R&D at Rafael, told the Post.
While only recently has it been in the spotlight, AI has been around since the 1950s, Idan explained.
"It's not something new, but there are waves where we see several jumps in capabilities," she said.
The first wave, from the 1950s to the 2000s, laid down the rules that are still being used today, but intelligence was gathered in a fairly simple manner. The second wave, from 2000 to 2020, dealt with machine learning and statistical intelligence, but the machine was unable to explain how it arrived at a connection or answer.
Idan said that we are now in the third wave, where companies want the machine to explain the rules that it is using to make the decision.
"It's very important to be able to explain the decision-making, because if you want to rely on the machine's decision and base your action on AI, you have to really understand why it says what it says," she said, adding that it's very important for the civilian market, but even more important in the defense industry and on the battlefield.
With a lot of challenges remaining in this third wave, Idan told the Post that there are some places where there are no humans involved, although people will still need to be in the loop for most decisions.
Using the examples of two well-known Rafael systems that use AI, the Iron Dome and Windbreaker, Idan said that time is a crucial aspect.
"A shell is launched towards a tank from a short distance, and the system needs to identify the launch and what kind of shell was launched, and destroy it within a few seconds. No human brain can do that in the few seconds that you have between the launch and the hit," she said. "We have to rely on the AI in the system."
But with the Iron Dome there's a bit more time involved, and therefore a human is involved in the decision-making process.
"And that's what we are going to see in the coming year, where man and machine work together and know their strengths and weaknesses and how to get the best result," Idan added.
"We have our hands on the pulse of what's going on across the world in AI," Idan said, explaining that Rafael uses AI for both civilian and military needs.
Pointing to companies like Google, Amazon and Facebook, Idan said that the AI market is worth billions of dollars, and it is all based on data. And whoever has the data leads AI.
Citing the examples of China and Russia, Maj. M. said that the IDF wants "AI superiority; we want to be quicker, more precise, effective, and not at a high cost."
According to Idan, while Israel is a groundbreaker in the field, China is a leader in AI because there are no regulations in China when it comes to such technology.
Russia is also at the forefront, and in 2017 Russian President Vladimir Putin said that artificial intelligence "is the future, not only for Russia but for all humankind. ... Whoever becomes the leader in this sphere will become the ruler of the world."
As technology continues to break barriers, AI will continue to be the buzzword, both in the military and in civilian spheres around the world.
And Israel and the IDF will continue to aim for AI superiority.
Read this article:
How AI helped Israel defeat Hamas in the recent war in Gaza - The Jerusalem Post
Posted in Ai
Comments Off on How AI helped Israel defeat Hamas in the recent war in Gaza – The Jerusalem Post
How the National Science Foundation is taking on fairness in AI – Brookings Institution
Posted: July 23, 2021 at 4:14 am
Most of the public discourse around artificial intelligence (AI) policy focuses on one of two perspectives: how the government can support AI innovation, and how the government can deter its harmful or negligent use. Yet there can also be a role for government in making it easier to use AI beneficially; in this niche, the National Science Foundation (NSF) has found a way to contribute. Through a grant-making program called Fairness in Artificial Intelligence (FAI), the NSF is providing $20 million in funding to researchers working on difficult ethical problems in AI. The program, a collaboration with Amazon, has now funded 21 projects in its first two years, with an open call for applications in its third and final year. This is an important endeavor, furthering a trend of federal support for the responsible advancement of technology, and the NSF should continue this important line of funding for ethical AI.
The FAI program is an investment in what the NSF calls "use-inspired research," where scientists attempt to address fundamental questions inspired by real world challenges and pressing scientific limitations. Use-inspired research is an alternative to traditional basic research, which attempts to make fundamental advances in scientific understanding without necessarily a specific practical goal. NSF is better known for basic research in computer science, where the NSF provides 87% of all federal basic research funding. Consequently, the FAI program is a relatively small portion of the NSF's total investment in AI: around $3.3 million per year, considering that Amazon covers half of the cost. In total, the NSF requested $868 million in AI spending, about 10% of its entire budget for 2021, and Congress approved every penny. Notably, this is a broad definition of AI spending that includes many applications of AI to other fields; funding for fundamental advances in AI itself is likely closer to $100 or $150 million, by rough estimation.
The FAI program is specifically oriented towards the ethical principle of fairness (more on this choice in a moment). While this may seem unusual, the program is a continuation of prior government-funded research into the moral implications and consequences of technology. Starting in the 1970s, the federal government started actively shaping bioethics research in response to public outcry following the AP's reporting on the Tuskegee Syphilis Study. While the original efforts may have been reactionary, they precipitated decades of work towards improving the biomedical sciences. Launched alongside the Human Genome Project in 1990, there was an extensive line of research oriented towards the ethical, legal, and social implications of genomics. Starting in 2018, the NSF funded 21 exploratory grants on the impact of AI on Society, a precursor to the current FAI program. Today, it's possible to draw a rough trend line through these endeavors, in which the government is becoming more concerned with first pure science, then the ethics of the scientific process, and now the ethical outcomes of the science itself. This is a positive development, and one worth encouraging.
NSF made a conscious decision to focus on fairness rather than other prevalent themes like trustworthiness or human-centered design. Dr. Erwin Gianchandani, an NSF deputy assistant director, has described four categories of problems in FAI's domain, and these can each easily be tied to present and ongoing challenges facing AI. The first category is focused on the many conflicting mathematical definitions of fairness and the lack of clarity around which are appropriate in what contexts. One funded project studied the human perceptions of what fairness metrics are most appropriate for an algorithm in the context of bail decisions (the same application as the infamous COMPAS algorithm). The study found that survey respondents slightly preferred an algorithm that had a consistent rate of false positives (how many people were unnecessarily kept in jail pending trial) between two racial groups, rather than an algorithm which was equally accurate for both racial groups. Notably, this is the opposite quality of the COMPAS algorithm, which was fair in its total accuracy, but resulted in more false positives for Black defendants.
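The bail example turns on the difference between two formal criteria: equal false positive rates across groups versus equal overall accuracy across groups (the property COMPAS satisfied). The sketch below computes both on a tiny invented set of predictions to show that a model can satisfy one while violating the other.

```python
# Toy predictions: (group, predicted_high_risk, actually_reoffended). Invented data.
records = [
    ("A", 1, 1), ("A", 0, 0), ("A", 0, 0), ("A", 0, 1),
    ("B", 1, 1), ("B", 1, 0), ("B", 0, 0), ("B", 0, 0),
]

def rates(group):
    rows = [(p, y) for g, p, y in records if g == group]
    accuracy = sum(p == y for p, y in rows) / len(rows)
    negatives = [(p, y) for p, y in rows if y == 0]        # people who did not reoffend
    fpr = sum(p for p, _ in negatives) / len(negatives)    # flagged high-risk unnecessarily
    return accuracy, fpr

for group in ("A", "B"):
    acc, fpr = rates(group)
    print(f"group {group}: accuracy={acc:.2f}, false positive rate={fpr:.2f}")
```

In this invented data both groups see 75% accuracy, yet only group B's non-reoffenders are ever flagged as high risk, which is exactly the kind of disagreement the survey respondents were asked to weigh.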
The second category, Gianchandani writes, is to understand how an AI system produces a given result. The NSF sees this as directly related to fairness because giving an end-user more information about an AI's decision empowers them to challenge that decision. This is an important point; by default, AI systems disguise the nature of a decision-making process and make it harder for an individual to interrogate the process. Maybe the most novel project funded by NSF FAI attempts to test the viability of crowdsourcing audits of AI systems. In a crowdsourced audit, many individuals might sign up for a tool (e.g., a website or web browser extension) that pools data about how those individuals were treated by an online AI system. By aggregating this data, the crowd can determine if the algorithm is being discriminatory, which would be functionally impossible for any individual user.
The third category seeks to use AI to make existing systems fairer, an especially important task as governments around the world are continuing to consider if and how to incorporate AI systems into public services. One project from researchers at New York University seeks, in part, to tackle the challenge of fairness when an algorithm is used in support of a human decision-maker. This is perhaps inspired by a recent evaluation of judges using algorithmic risk assessments in Virginia, which concluded that the algorithm failed to improve public safety and had the unintended effect of increasing incarceration of young defendants. The NYU researchers have a similar challenge in mind: developing a tool to identify and reduce systemic biases in prosecutorial decisions made by district attorneys.
The fourth category is perhaps the most intuitive, as it aims to remove bias from AI systems, or alternatively, make sure AI systems work equivalently well for everyone. One project looks to create common evaluation metrics for natural language processing AI, so that their effectiveness can be compared across many different languages, helping to overcome a myopic focus on English. Other projects look at fairness in less studied methods, like network algorithms, and still more look to improve fairness in specific applications, such as medical software and algorithmic hiring. These last two are especially noteworthy, since the prevailing public evidence suggests that algorithmic bias in health-care provisioning and hiring is widespread.
Critics may lament that Big Tech, which plays a prominent role in AI research, is present even in this federal program; Amazon is matching the support of the NSF, so each organization is paying around $10 million. Yet there is no reason to believe the NSF's independence has been compromised. Amazon is not playing any role in the selection of the grant applications, and none of the grantees contacted had any concerns about the grant-selection process. NSF officials also noted that any working collaboration with Amazon (such as receiving engineering support) is entirely optional. Of course, it is worth considering what Amazon has to gain from this partnership. Reading the FAI announcement, it sticks out that the program seeks to contribute to trustworthy AI systems that are "readily accepted" and that projects will enable "broadened acceptance of AI systems." It is not a secret that the current generation of large technology companies would benefit enormously from increased public trust in AI. Still, corporate funding towards genuinely independent research is good and unobjectionable, especially relative to other options like companies directly funding academic research.
Beyond the funding contribution, there may be other societal benefits from the partnership. For one, Amazon and other technology companies may pay more attention to the results of the research. For a company like Amazon, this might mean incorporating the results into its own algorithms, or into the AI systems that it sells through Amazon Web Services (AWS). Adoption into AWS cloud services may be especially impactful, since many thousands of data scientists and companies use those services for AI. As just an example, Professor Sandra Wachter of the Oxford Internet Institute was elated to learn that a metric of fairness she and co-authors had advocated for had been incorporated into an AWS cloud service, making it far more accessible for data science practitioners. Generally speaking, having an expanded set of easy-to-use features for AI fairness makes it more likely that data scientists will explore and use these tools.
In its totality, FAI is a small but mighty research endeavor. The myriad challenges posed by AI are all improved with more knowledge and more responsible methods driven by this independent research. While there is an enormous amount of corporate funding going into AI research, it is neither independent nor primarily aimed at fairness, and may entirely exclude some FAI topics (e.g., fairness in the government use of AI). While this is the final year of the FAI program, one of NSF FAI's program directors, Dr. Todd Leen, stressed when contacted for this piece that the NSF is not walking away from these important research issues, and that FAI's mission will be absorbed into the general computer science directorate. This absorption may come with minor downsides, for instance, a lack of a clearly specified budget line and no consolidated reporting on the funded research projects. The NSF should consider tracking these investments and clearly communicating to the research community that AI fairness is an ongoing priority of the NSF.
The Biden administration could also specifically request additional NSF funding for fairness and AI. For once, this funding would not be a difficult sell to policymakers. Congress funded the totality of the NSFs $868 million budget request for AI in 2021, and President Biden has signaled clear interest in expanding science funding; his proposed budget calls for a 20% increase in NSF funding for fiscal year 2022, and the administration has launched a National AI Research Taskforce co-chaired by none other than Dr. Erwin Gianchandani. With all this interest, bookmarking $5 to $10 million per year explicitly for the advancement of fairness in AI is clearly possible, and certainly worthwhile.
The National Science Foundation and Amazon are donors to The Brookings Institution. Any findings, interpretations, conclusions, or recommendations expressed in this piece are those of the author and are not influenced by any donation.
Read the rest here:
How the National Science Foundation is taking on fairness in AI - Brookings Institution
Posted in Ai
Comments Off on How the National Science Foundation is taking on fairness in AI – Brookings Institution
Getting Industrial About The Hybrid Computing And AI Revolution – The Next Platform
Posted: at 4:14 am
For oil and gas companies looking at drilling wells in a new field, the issue becomes one of return versus cost. The goal is simple enough: install the fewest wells that will draw the most oil or gas from the underground reservoirs for the longest amount of time. The more wells installed, the higher the cost and the larger the impact on the environment.
However, finding the right well placements quickly becomes a highly complex math problem. Too few wells sited in the wrong places leaves a lot of resources in the ground. Too many wells placed too close together not only sharply increases the cost but can also cause wells to pump from the same area.
Shahram Farhadi knows how complex the challenge is. Farhadi is the chief technology officer for industrial AI at Beyond Limits, a startup spun off by Caltech and NASA's Jet Propulsion Lab to commercialize technologies built for space exploration and adapt them to industrial settings. The company, founded in 2014, aims to leverage cognitive AI, machine learning, and deep learning techniques in industries like oil and gas, manufacturing and industrial Internet of Things (IoT), power and natural resources, and healthcare and other evolving markets, many of which have already been using HPC environments to run their most complicated programs.
Placing wells within a reservoir is one of those problems that involves a sequential decision-making process that changes and grows with each decision made. Farhadi notes that in chess, there are almost 5 million possible moves after the first five are made. For the game Go, that number becomes 10 to the 12th power. When optimizing well placement in a small reservoir, covering everything from where and when to drill to how many producer and injector wells to use, there can be as many as 10 to the 20th power possible combinations after just five sequential, non-commutative choices of vertical drilling locations.
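To get a feel for that figure, the back-of-the-envelope calculation below shows how quickly the search space explodes as sequential, order-dependent choices stack up; the grid size is an assumption chosen only to illustrate the order of magnitude.

```python
import math

# Hypothetical reservoir grid: 100 x 100 candidate surface locations (10,000 cells).
candidate_cells = 100 * 100

# Five sequential, non-commutative vertical drilling decisions: each choice
# multiplies the number of possible sequences by the cells still available.
sequences = math.perm(candidate_cells, 5)
print(f"{sequences:.2e}")  # on the order of 1e20
```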
The combination of advanced AI frameworks with HPC can greatly reduce the challenge.
"Anything the AI can learn, such as basic rules for how far the wells should be separated, and apply to the problem will help decrease the number of computations, to hammer them down to something that is more tangible," Farhadi tells The Next Platform.
Where to place wells has been a challenge for oil and gas companies for years, during which time they have developed seismic imaging capabilities and simulation models, run on HPC systems, that describe reservoirs beneath the ground. They also use optimizers to run variations of the model to determine how many of which kinds of wells should be placed where. "There have been at least two generations of engineers who worked to perfect these equations and their nuances, tuning and learning from the data," Farhadi says.
The problem is that they have worked on these computations using a combination of brute force and optimization methods such as particle swarm and genetic algorithms running atop computationally expensive reservoir simulators, which makes an already complex problem even more challenging. That's where Beyond Limits' advanced AI frameworks can come in.
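To see why that traditional approach gets expensive, here is a bare-bones genetic-algorithm loop of the kind described above. The objective function is a toy surrogate standing in for a full reservoir simulator, and nothing here reflects any particular company's implementation; the point is that every candidate plan in every generation requires its own simulator run.

```python
import random

def evaluate_npv(well_plan):
    """Toy surrogate objective. In practice, each call here would be a full,
    computationally expensive reservoir simulation returning the plan's NPV."""
    return -sum((x - 50) ** 2 + (y - 50) ** 2 for x, y in well_plan)

def genetic_search(n_wells=5, grid=100, population=40, generations=50, mutation_rate=0.2):
    # Each individual is a candidate plan: a list of (x, y) well locations.
    def random_plan():
        return [(random.randrange(grid), random.randrange(grid)) for _ in range(n_wells)]

    pool = [random_plan() for _ in range(population)]
    for _ in range(generations):
        pool.sort(key=evaluate_npv, reverse=True)        # rank plans by simulated NPV
        survivors = pool[: population // 2]              # keep the better half
        children = []
        while len(survivors) + len(children) < population:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n_wells)
            child = a[:cut] + b[cut:]                    # crossover: splice two parent plans
            if random.random() < mutation_rate:          # mutation: relocate one well
                child[random.randrange(n_wells)] = (random.randrange(grid), random.randrange(grid))
            children.append(child)
        pool = survivors + children
    return max(pool, key=evaluate_npv)

print(genetic_search())  # best plan found after many simulator evaluations
```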
"The industry is really equipped with really good simulations and the opportunity of a high-performance AI could be, how about we use the simulations to generate the data and then learn from that generated data?" he says. "In that sense, you are going some good miles. Other industries are also doing this now, like with the auto industry, this is happening more or less. But from the energy industry standpoint, these simulations are fairly rich."
Beyond Limits is applying such techniques as deep reinforcement learning (DRL), using a framework to train a reinforcement learning agent to make optimal sequential recommendations for placing wells. The approach also relies on reservoir simulations and novel deep convolutional neural networks. The agent takes in the data and learns from the various iterations of the simulator, allowing it to reduce the number of possible combinations of moves after each decision is made. By remembering what it learned from the previous iterations, the system can more quickly whittle the choices down to the one best answer.
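As a rough illustration of that loop, the sketch below shows a minimal agent-and-simulator interaction: the agent proposes one well at a time, a toy simulator returns the incremental value, and the agent remembers which choices paid off so later episodes explore less. Everything here, from the `ReservoirSimulator` stand-in to the epsilon-greedy table of learned values, is a simplification for illustration; Beyond Limits' actual DRL framework uses deep convolutional neural networks and real reservoir simulations rather than a lookup table.

```python
import random

class ReservoirSimulator:
    """Toy stand-in for a reservoir simulator: returns the incremental value
    (in $M) of adding a well at a grid cell, penalizing wells placed too
    close together. A real simulator models subsurface fluid flow instead."""
    def __init__(self, size=10, seed=0):
        rng = random.Random(seed)
        self.size = size
        self.richness = [[rng.uniform(0, 10) for _ in range(size)] for _ in range(size)]

    def incremental_npv(self, placed, cell):
        x, y = cell
        value = self.richness[x][y]
        for px, py in placed:                  # nearby wells drain the same area
            if abs(px - x) + abs(py - y) < 3:
                value *= 0.4
        return value

def train(episodes=2000, wells_per_episode=5, epsilon=0.1, lr=0.1):
    sim = ReservoirSimulator()
    cells = [(x, y) for x in range(sim.size) for y in range(sim.size)]
    values = {c: 0.0 for c in cells}           # learned estimate of each cell's worth
    best_plan, best_npv = None, float("-inf")

    for _ in range(episodes):
        placed, total = [], 0.0
        for _ in range(wells_per_episode):
            options = [c for c in cells if c not in placed]
            if random.random() < epsilon:      # explore occasionally...
                cell = random.choice(options)
            else:                              # ...but mostly exploit what was learned
                cell = max(options, key=lambda c: values[c])
            reward = sim.incremental_npv(placed, cell)
            values[cell] += lr * (reward - values[cell])   # running-average update
            placed.append(cell)
            total += reward
        if total > best_npv:
            best_plan, best_npv = placed, total
    return best_plan, best_npv

plan, npv = train()
print("best plan:", plan, "estimated value ($M):", round(npv, 1))
```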
"One area that we looked at specifically is the simulation of subsurface movement of fluids," Farhadi says. "Think of a body of a rock that is found somewhere that has oil in it. It also has water that has come to it, and as you take out this hydrocarbon, this whole dynamic changes. Things will kick in. You might have water breaking through, but it's quite a delicate process that is happening down there. A lot of time goes into building this image because you have limited information. But let's say you have built the image and you have a simulator now; if you tell this simulator, 'I want to place a well here [and] a well here,' the simulator can evolve this in time and give you the flow rates and say, 'If you do this, this is what you're going to get.' Now if I operate this asset, the question for me is just exactly that: How many wells do I put in this? What kind of wells do I want to put, vertical [and] horizontal? Do I want to inject water from the beginning? Do I want to inject gas? This is basically the expertise of reservoir engineering. It's playing the game of how to optimally extract this natural resource from these assets, and the assets are usually billions of dollars of value. This is a very, very precious asset for any company that is producing oil and gas. The question is, how do you extract the max out of it now?"
The goal is to arrive at a high net present value (NPV) score: essentially, the amount of oil or gas that will be captured (and sold), and the amount of money made after costs are figured in. The fewest wells needed to extract the most resources will mean more profit.
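For context, net present value is simply future cash flow discounted back to today, minus the upfront cost; the toy calculation below shows the idea. The production profile, discount rate, and drilling cost are made-up numbers for illustration, not figures from Beyond Limits.

```python
def npv(cash_flows, discount_rate=0.10, upfront_cost=0.0):
    """Net present value: yearly cash flows discounted back to today, minus upfront cost."""
    discounted = sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(cash_flows, start=1))
    return discounted - upfront_cost

# Hypothetical five-well plan: $25M/year of production revenue for 10 years,
# against $60M of drilling and completion cost up front.
print(round(npv([25.0] * 10, discount_rate=0.10, upfront_cost=60.0), 1))  # ~93.6 ($M)
```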
"The NPV initially does some iteration, but after about 150,000 times of interacting with the simulator, it can get to something like $40 million dollars of NPV," he says. "The key thing here is the fact that this simulation on its own can be expensive to run, so you optimize it, be smart and use it efficiently."
That included creating a system that would allow Beyond Limits to most efficiently scale the model to where the oil and gas companies needed it. The company tested it using three systems two of which were CPU-only and one that was a hybrid running CPUs and GPUs. Beyond Limits used an on-premises 20-core CPU system running Intel Core i9-7900X chips, a cloud-based 96-core CPU system with the same processors, and the hybrid setup, with a 20-core CPU and two Nvidia Ampere A100 GPU accelerators on a p4d.24xlarge Amazon Web Services instance.
The company also took it a step further by including a 36-hour run on a p4d.24xlarge AWS instance using a setup with 90 CPU cores and eight A100 GPUs.
The benchmarked metrics were the instantaneous rate of reinforcement learning calculation, the number of episodes and forward action-explorations completed as the reinforcement learning progressed, and the value of the best solution found in terms of NPV.
What Beyond Limits found was that the hybrid setup outperformed both CPU-only systems. In terms of benchmarks, the hybrid setup delivered a peak processing speed of 184.3 percent over the 96-core system and 1,169.5 percent over the 20-core operation. To reach the same number of actions explored at the end of 120,000 seconds, the CPU-GPU hybrid had an improvement in time elapsed of 245.4 percent over the 20 CPU cores and 152.9 percent of the 96 CPU cores. Regarding NPV, the hybrid instance had a boost of about 109 percent compared to the 20-core CPU setup for vertical wells.
Scale and efficiency are key when trying to reach optimal NPV, because not only do calculations such as the number and types of wells used add to the costs, but so do computational needs.
"This problem is very, very complicated in terms of the number of possible combinations, so the more hardware you throw at it, the higher you get and obviously there are physical limits to that," Farhadi says. "The GPU becomes a real value-add because you can now achieve NPVs that are higher. Just because you were able to have higher grades, you would be able to have more FLOPs or you could compute more. You have a higher chance of finding better configurations. The idea here was to show that there is this technology that can help with highly combinatorial simulation-based optimizations, called reinforcement learning, and we have benchmarked it on simple, smaller reservoir models. But if you were to take it to the actual field models with this number of cells, it's going to be, on its own, like a massive high-performance training system."
Beyond Limits is also building advanced AI systems for other industries. One example is a system designed to help with planning of a refinery. Another AI system helps chemists more quickly and efficiently build formulas for engine oil and other lubricants, he says.
"For the practices that you have relied on a human expert to come up with a framework and [to] solve a problem, it is important for them that whatever system you build is honoring that and can digest that," Farhadi says. "It's not only data, it's also that knowledge that's human. How do we incorporate and then bring this together? For example, how do you make the knowledge that your engineer learned about from the data, or how do you use the physics as a constraint for your AI? It's an interesting field. Even in the frontiers of deep learning [and] machine learning, this is now being looked at. Instead of just looking at the pixels, now let's see if we can have more robust representations of hierarchical understandings of the objects that come our way. We really started this way earlier than 2014, because one big motivation was that the industries we went to required it. That was what they had and they needed to augment it, maybe with digital assistants. It has data elements to it, but they were not quite competent."
View post:
Getting Industrial About The Hybrid Computing And AI Revolution - The Next Platform
Posted in Ai
Comments Off on Getting Industrial About The Hybrid Computing And AI Revolution – The Next Platform
Diverse AI teams are key to reducing bias – VentureBeat
Posted: at 4:14 am
An Amazon-built resume-rating algorithm, when trained on men's resumes, taught itself to prefer male candidates and penalize resumes that included the word "women."
A major hospitals algorithm, when asked to assign risk scores to patients, gave white patients similar scores to Black patients who were significantly sicker.
"If a movie recommendation is flawed, that's not the end of the world. But if you are on the receiving end of a decision [that] is being used by AI, that can be disastrous," Huma Abidi, senior director of AI software products and engineering at Intel, said during a session on bias and diversity in AI at VentureBeat's Transform 2021 virtual conference. Abidi was joined by Yakaira Nuñez, senior director of research and insights at Salesforce, and Fahmida Y Rashid, executive editor of VentureBeat.
In order to produce fair algorithms, the data used to train AI needs to be free of bias. For every dataset, you have to ask yourself where the data came from, if that data is inclusive, if the dataset has been updated, and so on. And you need to utilize model cards, checklists, and risk management strategies at every step of the development process.
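One lightweight way to operationalize those questions is to attach a short, machine-readable checklist to every training dataset. The sketch below shows one possible shape for such a record; the field names and the readiness rule are assumptions for illustration, not a reference to any particular model-card standard or tool.

```python
from dataclasses import dataclass, field

@dataclass
class DatasetChecklist:
    """Minimal provenance-and-bias checklist to complete before a dataset is used for training."""
    name: str
    source: str                                        # where the data came from
    last_updated: str                                  # has the dataset been refreshed?
    groups_represented: list = field(default_factory=list)
    known_gaps: list = field(default_factory=list)     # populations or cases that are missing
    reviewed_by: list = field(default_factory=list)    # who signed off, ideally a diverse group

    def ready_for_training(self) -> bool:
        # Not ready until provenance, freshness, and at least one reviewer are recorded.
        return bool(self.source and self.last_updated and self.reviewed_by)

# Hypothetical example loosely modeled on the resume-screening scenario above.
resumes = DatasetChecklist(
    name="historical-hiring-outcomes",
    source="internal applicant-tracking export, 2014-2020",
    last_updated="2021-06",
    groups_represented=["male applicants (majority)", "female applicants (minority)"],
    known_gaps=["few resumes from women in technical roles"],
    reviewed_by=["hiring-fairness working group"],
)
print(resumes.ready_for_training())  # True once the basics are documented
```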
"The best possible framework is that we were actually able to manage that risk from the outset: we had all of the actors in place to be able to ensure that the process was inclusive, bringing the right people in the room at the right time that were representative of the level of diversity that we wanted to see and the content. So risk management strategies are my favorite. I do believe, in order for us to really mitigate bias, that it's going to be about risk mitigation and risk management," Nuñez said.
Make sure that diversity is more than just a buzzword and that your leadership teams and speaker panels are reflective of the people you want to attract to your company, Nuñez said.
When thinking about diversity, equity, and inclusion work, or bias and racism, the most impact tends to be in areas in which individuals are most at risk, Nuñez said. Health care, finance, and legal situations (anything involving police and child welfare) are all sectors where bias causes the most harm when it shows up. So when people are working on AI initiatives in these spaces to increase productivity or efficiencies, it is even more critical that they are thinking deliberately about bias and the potential for harm. Each person is accountable and responsible for managing that bias.
Nuñez discussed how the responsibility of a research and insights leader is to curate data so executives can make informed decisions about product direction. Nuñez thinks not only about the people pulling the data together, but also about the people who may not be in the target market, to gain insight into people Salesforce would not otherwise have known anything about.
Nuñez regularly asks the team to think about bias and whether it is present in the data, like asking whether the panel of individuals for a project is diverse. If the feedback is not from an environment that is representative of the target ecosystem, then that feedback is less useful.
"Those questions are the small little things that I can do at the day-to-day level to try to move the needle a bit at Salesforce," Nuñez said.
Research has shown that minorities often have to "whiten" their résumés in order to get callbacks and interviews. Companies and organizations can weave diversity and inclusion into their stated values to address this issue.
"If it's already not part of your core mission statement, it's really important to add those things: diversity, inclusion, equity. Just doing that, by itself, will help a lot," Abidi said.
It's important to integrate these values into corporate culture because of the interdisciplinary nature of AI: "It's not just engineers; we work with ethicists, we have lawyers, we have policymakers. And all of us come together in order to fix this problem," Abidi said.
Additionally, commitments by companies to help fix gender and minority imbalances also provide an end goal for recruitment teams: Intel wants women in 40% of technical roles by 2030. Salesforce is aiming to have 50% of its U.S. workforce made up of underrepresented groups, including women, people of color, LGBTQ+ employees, people with disabilities, and veterans.
Original post:
Posted in Ai
Comments Off on Diverse AI teams are key to reducing bias – VentureBeat