The Prometheus League
Breaking News and Updates
- Abolition Of Work
- Ai
- Alt-right
- Alternative Medicine
- Antifa
- Artificial General Intelligence
- Artificial Intelligence
- Artificial Super Intelligence
- Ascension
- Astronomy
- Atheism
- Atheist
- Atlas Shrugged
- Automation
- Ayn Rand
- Bahamas
- Bankruptcy
- Basic Income Guarantee
- Big Tech
- Bitcoin
- Black Lives Matter
- Blackjack
- Boca Chica Texas
- Brexit
- Caribbean
- Casino
- Casino Affiliate
- Cbd Oil
- Censorship
- Cf
- Chess Engines
- Childfree
- Cloning
- Cloud Computing
- Conscious Evolution
- Corona Virus
- Cosmic Heaven
- Covid-19
- Cryonics
- Cryptocurrency
- Cyberpunk
- Darwinism
- Democrat
- Designer Babies
- DNA
- Donald Trump
- Eczema
- Elon Musk
- Entheogens
- Ethical Egoism
- Eugenic Concepts
- Eugenics
- Euthanasia
- Evolution
- Extropian
- Extropianism
- Extropy
- Fake News
- Federalism
- Federalist
- Fifth Amendment
- Financial Independence
- First Amendment
- Fiscal Freedom
- Food Supplements
- Fourth Amendment
- Free Speech
- Freedom
- Freedom of Speech
- Futurism
- Futurist
- Gambling
- Gene Medicine
- Genetic Engineering
- Genome
- Germ Warfare
- Golden Rule
- Government Oppression
- Hedonism
- High Seas
- History
- Hubble Telescope
- Human Genetic Engineering
- Human Genetics
- Human Immortality
- Human Longevity
- Illuminati
- Immortality
- Immortality Medicine
- Intentional Communities
- Jacinda Ardern
- Jitsi
- Jordan Peterson
- Las Vegas
- Liberal
- Libertarian
- Libertarianism
- Liberty
- Life Extension
- Macau
- Marie Byrd Land
- Mars
- Mars Colonization
- Mars Colony
- Memetics
- Micronations
- Mind Uploading
- Minerva Reefs
- Modern Satanism
- Moon Colonization
- Nanotech
- National Vanguard
- NATO
- Neo-eugenics
- Neurohacking
- Neurotechnology
- New Utopia
- New Zealand
- Nihilism
- Nootropics
- NSA
- Oceania
- Offshore
- Olympics
- Online Casino
- Online Gambling
- Pantheism
- Personal Empowerment
- Poker
- Political Correctness
- Politically Incorrect
- Polygamy
- Populism
- Post Human
- Post Humanism
- Posthuman
- Posthumanism
- Private Islands
- Progress
- Proud Boys
- Psoriasis
- Psychedelics
- Putin
- Quantum Computing
- Quantum Physics
- Rationalism
- Republican
- Resource Based Economy
- Robotics
- Rockall
- Ron Paul
- Roulette
- Russia
- Sealand
- Seasteading
- Second Amendment
- Seychelles
- Singularitarianism
- Singularity
- Socio-economic Collapse
- Space Exploration
- Space Station
- Space Travel
- Spacex
- Sports Betting
- Sportsbook
- Superintelligence
- Survivalism
- Talmud
- Technology
- Teilhard De Chardin
- Terraforming Mars
- The Singularity
- Tms
- Tor Browser
- Trance
- Transhuman
- Transhuman News
- Transhumanism
- Transhumanist
- Transtopian
- Transtopianism
- Ukraine
- Uncategorized
- Vaping
- Victimless Crimes
- Virtual Reality
- Wage Slavery
- War On Drugs
- Waveland
- Ww3
- Yahoo
- Zeitgeist Movement
- Prometheism
- Forbidden Fruit
- The Evolutionary Perspective
Daily Archives: March 31, 2021
Solving The Mystery Of The Pandemic’s Origin Story – WBUR
Posted: March 31, 2021 at 5:42 am
Scientists investigating the pandemic still can't pinpoint the exact origin of the coronavirus that caused it. The WHO is poised to report on its search but some are skeptical about their conclusions.
Joseph B. McCormick and Susan P. Fisher-Hoch, professors of Epidemiology, Human Genetics & Environmental Sciences at the University of Texas Health Science Center at Houston. Authors of "Level 4: Virus Hunters of the CDC - Tracking Ebola and the World's Deadliest Viruses." Jamie Metzl, senior fellow at the Atlantic Council. Former NSC official during the Clinton administration and adviser to the WHO on human genome editing. (@JamieMetzl)
Alison Young, investigative reporter. Professor of Public Affairs Reporting and director of the Missouri School of Journalism's Washington Program. (@alisonannyoung)
AP: "WHO report says animals likely source of COVID" "A joint WHO-China study on the origins of COVID-19 says that transmission of the virus from bats to humans through another animal is the most likely scenario and that a lab leak is extremely unlikely, according to a draft copy obtained by The Associated Press."
MIT Technology Review: "No one can find the animal that gave people Covid-19" "More than a year after covid-19 began, no food animal has been identified as a reservoir for the pandemic virus. That's despite efforts by China to test tens of thousands of animals, including pigs, goats, and geese, according to Liang Wannian, who leads the Chinese side of the research team. No one has found a direct progenitor of the virus, he says, and therefore the pandemic 'remains an unsolved mystery.'"
The Wall Street Journal: "How the WHO's Hunt for Covid's Origins Stumbled in China" "China resisted international pressure for an investigation it saw as an attempt to assign blame, delayed the probe for months, secured veto rights over participants and insisted its scope encompass other countries as well, the Journal found."
USA Today Opinion: "Could an accident have caused COVID-19? Why the Wuhan lab-leak theory shouldn't be dismissed" "As members of a World Health Organization expert team have made international headlines recently dismissing as 'extremely unlikely' the possibility that a laboratory accident in Wuhan, China, could have sparked the COVID-19 pandemic, I can't stop thinking of the hundreds of lab accidents that are secretly occurring just in the United States."
Posted in Human Genetics
Variants in Three Genes Linked with Increased Cervical Cancer Risk – Clinical OMICs News
Posted: at 5:42 am
Research led by Imperial College London has revealed three genes, PAX8, CLPTM1L, and HLA-DQA1, containing variants that increase a woman's risk for cervical cancer.
The genome-wide association study, one of the first of its kind for cervical cancer or precancer, was carried out in samples from more than 150,000 women of European descent from the UK Biobank cohort and validated in a second Finnish cohort.
Cervical cancer impacts approximately 570,000 women around the world, with more than 13,000 cases diagnosed in the U.S. each year. It remains one of the most common female cancers despite extensive screening and vaccination against the human papilloma virus (HPV), which is known to be the main cause of cervical cancer.
"HPV causes cervical cancer, but what we haven't understood until now is why many people are infected with HPV, yet very few develop cervical cancer," said Sarah Bowden, M.D., a researcher from the Department of Surgery and Cancer at Imperial College London and lead author on the paper describing the study, published in The Lancet Oncology.
"Over 70% of people are infected with HPV over their lifetime, yet most women clear the infection, and only a small fraction go on to develop abnormal pre-cancerous cervical cells; even fewer develop cervical cancer."
There has long been some uncertainty about the degree to which genetic variants can affect a woman's risk of developing cervical cancer. Previous studies suggest that the genetic contribution to the risk of cervical cancer ranges from 27% to 36%, but only seven earlier studies have looked at genetic variants that could contribute to this risk, and these have all been fairly modest in size.
This study included 4,769 women with invasive cervical cancer or precancerous neoplasms and 145,545 controls. The women were aged 40-69 years and of European origin.
Out of 9,600,464 SNPs included in the GWAS, six variants were linked with increased risk for cervical cancer or precancerous lesions that could lead to cervical cancer. In a Finnish replication cohort of 128,123 individuals, FinnGen, three of these associations were successfully replicated.
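The core computation behind a GWAS like this can be sketched as a per-SNP case-control association test. The snippet below runs a 1-degree-of-freedom chi-square test on a 2x2 table of risk-allele counts; all counts are invented for illustration (they are not figures from the study), and real pipelines fit regression models with ancestry covariates rather than a bare chi-square test.

```python
# Minimal sketch of one GWAS association test: a chi-square statistic
# on a 2x2 table of allele counts (cases vs controls). Hypothetical data.

def chi_square_2x2(case_alt, case_ref, ctrl_alt, ctrl_ref):
    """Chi-square statistic (1 df) for risk-allele counts in cases vs controls."""
    table = [[case_alt, case_ref], [ctrl_alt, ctrl_ref]]
    total = case_alt + case_ref + ctrl_alt + ctrl_ref
    row_totals = [sum(table[0]), sum(table[1])]
    col_totals = [case_alt + ctrl_alt, case_ref + ctrl_ref]
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = row_totals[i] * col_totals[j] / total
            stat += (table[i][j] - expected) ** 2 / expected
    return stat

# Hypothetical SNP whose alternate allele is enriched in cases.
# 4,769 cases and 145,545 controls each contribute two alleles.
stat = chi_square_2x2(case_alt=1200, case_ref=8338,
                      ctrl_alt=26000, ctrl_ref=265090)
print(f"chi-square = {stat:.1f}")
# Genome-wide significance (p < 5e-8) corresponds to roughly stat > 29.7.
```

This is only the conceptual core; tools actually used for studies of this size additionally adjust for population structure and test millions of SNPs with strict multiple-testing thresholds.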
The three significant associations were found with SNPs in the PAX8, CLPTM1L and HLA-DQA1 genes. While gene variants in the HLA region have previously been linked with cervical cancer, the association with the PAX8 and CLPTM1L genes was new.
HLA genes are strongly linked to immune system function, whereas the PAX8 protein triggers hormones important for growth regulation, brain development and metabolism and is expressed in the endometrium and ovaries. CLPTM1L is often overexpressed in lung tumors and the gene lies within a cancer susceptibility region. Inhibition of this protein can help stop tumor formation in some cancers.
The research team also looked at other factors that could predispose women to cervical cancer and confirmed previous links with smoking, age at pregnancy and number of sexual partners.
While more work is needed to investigate the newly discovered genetic links, this study is a step in the right direction to understanding more about cervical cancer and how it could be treated.
"Once genetic testing becomes more widespread, looking at a patient's genetic information alongside cervical screening could help identify individuals who need close monitoring or treatment," said Bowden.
"Increased genetic information could also lead to new drugs in the future. At the moment, if a woman is found to have a pre-cancerous cervical abnormality, the options are to watch and wait, which means regular check-ups, or a treatment to surgically remove part of the cervix. This can increase the risk of a late miscarriage or preterm birth in future pregnancies. But if we knew more about the interaction between genetics and HPV, we might be able to develop new drugs to treat these abnormalities at an early stage."
Professor Sir Peter Harper, clinical geneticist who shed light on inherited diseases – obituary – Telegraph.co.uk
Posted: at 5:42 am
Professor Sir Peter Harper, who has died aged 81, was one of the world's most respected clinical geneticists.
As Professor of Medical Genetics at Cardiff University he focused on muscular dystrophies and Huntington's disease, and Cardiff soon became an internationally renowned centre for both.
Harper's research showed that both of the conditions he focused on, myotonic dystrophy and Huntington's disease, resulted from the expansion of unstable repetitive DNA sequences, explaining how they tend to get worse as the condition passes down the generations, a phenomenon termed genetic anticipation.
In 1987, following years of planning, his vision for an integrated academic and NHS centre for medical genetics was realised with the opening of the Institute of Medical Genetics.
The relatively modest building eventually housed outpatient clinics, clinicians and counsellors, NHS and university molecular genetics teams, cytogeneticists, a newborn biochemical screening lab, foetal pathology, experts in computer programming and mathematical genetics, social scientists, psychiatrists and psychologists.
This professional diversity created a unique atmosphere in which many different perspectives were brought to bear on inherited conditions. The genes for myotonic dystrophy (characterised by progressive muscle wasting) and Huntington's disease (a progressive brain disorder) were identified and, remarkably, each was shown to result from different unstable expansions affecting repetitive DNA sequences.
Evidence-based approaches to predictive genetic testing were developed and contentious areas such as genetics and insurance and genetic testing in children were explored.
Throughout these endeavours the views of patients and their families were given priority, and organisations including the Myotonic Dystrophy Support Group and the Huntington's Disease Association were involved as equal partners. Harper's guide, Practical Genetic Counselling, was translated into many languages and has run to eight editions.
Peter Stanley Harper was born on April 28 1939 and brought up in Barnstaple, Devon. His father Richard was a GP; his mother, Margery (née Elkington), was a talented French scholar who sacrificed a promising academic career to follow her husband's work.
From Blundell's School, Tiverton, Peter won a scholarship to read Medicine at Exeter College, Oxford, where he also attended zoology lectures in genetics and biology.
Determined to combine genetics and medicine in his future career, in 1967 he moved to Liverpool to work with Cyril (later Sir Cyril) Clarke, who had just established a new unit for medical genetics.
There, Harper worked on inherited oesophageal cancer while also investigating insect genetics at the university zoology department. In 1968 he married Elaine and they moved to Johns Hopkins University, Baltimore, in 1969. There he completed a doctorate on myotonic dystrophy, a condition that became a clinical and research focus throughout his professional life.
Returning to Britain in 1971, Harper gained a clinical academic post in the Department of Medicine in Cardiff, where he remained until his retirement.
Before retiring, Harper started a project to record the history of medical genetics. This turned into a major undertaking, occupying him until the end of his life and involving much international travel.
The endeavour also included careful documentation of historical and contemporary abuses of genetics in Europe, America, Russia and China. Much of the material he accumulated can be read in his books, including A Short History of Medical Genetics (2008) and Evolution of Medical Genetics: A British Perspective (2020), and accessed online at http://www.genmedhist.org.
Harper was active nationally and internationally, through roles with the Clinical Genetics Society and the Royal College of Physicians, the European Society of Human Genetics, the American College of Medical Genetics (which awarded him its lifetime achievement award in 1994) and the American Society of Human Genetics. He was chief editor of the Journal of Medical Genetics (1986-96), and a member of the Human Genetics Commission and the Nuffield Council for Bioethics.
He was appointed CBE in 1994 and knighted in 2004, although he never used the title. He particularly enjoyed sharing his passion and knowledge of nature with his family; he is survived by his wife Elaine, and by three daughters and two sons.
Sir Peter Harper, born April 28 1939, died January 23 2021
Level-up your career with a master’s from Örebro University – Study International News
Posted: at 5:42 am
Countries such as the US, Canada and the UK are seen as bastions of higher education. If you're willing to look past these popular study abroad destinations, Sweden is also home to innovative universities that offer students a world-class education.
The country's government invests heavily in education, paving the way for the rise of research-intensive universities for aspiring researchers. There are part-time work opportunities for international students to support themselves as they work towards completing their degree, while the country also offers a high standard of living, further bolstering its appeal as a study abroad destination.
One such university that offers all this and more is Örebro University, which provides a wide range of master's programmes to bolster your personal and professional development. Its research spans no fewer than 36 different subjects across the humanities and social sciences, medicine and health, and science and technology fields.
Attesting to the university's prowess are its rankings. Örebro University is ranked in the top 80 of the Times Higher Education (THE) Young University Rankings 2020 and in the top 400 of the THE World University Rankings 2021.
The university's convenient location is also a major allure. Örebro is a bustling city home to panoramic woodlands and pristine nature while still being close to the town centre for all your modern needs. Despite Örebro's modern appearance, the city is also home to popular attractions such as Örebro Castle, an ancient castle made of weathered grey stones.
Learning here will also be culturally immersive: high standards of living aside, the university is located just two hours outside of Stockholm, a city known for its Michelin-starred restaurants, picturesque hiking trails and public art.
Örebro University is known for its world-class education and research expertise.
Source: Örebro University
Örebro University offers master's programmes across a wide range of subjects to cater to varied interests, from economics to mathematics and science to the latest in AI and robotics.
With sustainability high on the global agenda, Örebro University's MSc Programme in Chemistry in Environmental Forensics will be ideal for those aspiring to solve some of the world's most pressing challenges. The course offers a broad syllabus, with a focus on chemical safety, health and the environment. It also provides insight into several related research areas, including bioanalytical and environmental analysis, looking at the source and amount of environmental chemical contaminants and their history.
Meanwhile, LinkedIn's 2020 Emerging Jobs Report notes that demand for AI experts has grown 74% annually over the past five years. If you're interested in pursuing an exciting career in the industry, the university's MSc Programme in AI and Robotics will teach you the methods used to power the latest generation of autonomous vehicles, how navigation software in your phone finds the fastest routes in real time, and how sensors in robots and intelligent systems are used to perceive the world.
During your thesis project, you'll have the opportunity to interact with future employers. Many graduates have gone on to positions within academia and at leading companies in the field, including Volvo and Epiroc.
Those interested in experimental medicine might want to enrol in the MSc Programme in Experimental Medicine, which offers broad and specialised knowledge in the field. The main focus of the programme is inflammatory mechanisms and their implications for public health, as many of the most common diseases worldwide share an inflammation process as a common denominator. You will gain knowledge and skills in modern experimental medical and laboratory science, as well as in-depth knowledge of cell biology, immunology, human genetics and bioinformatics, and translational medicine.
For students with a passion for music and its effect on individuals and society, Örebro University's Master's Programme in Musicology – Music and Human Beings would be the ideal step for those who already have a bachelor's degree in the humanities, social sciences, musicology, or a related field. The programme prepares students for a career in doctoral research on music in academia, but it can also help students across a wide variety of music-related professions.
Make friends that will last a lifetime at this university.
Source: Örebro University
High-quality master's programmes aside, there are many other appealing aspects of studying at this university.
Örebro University guarantees its master's students affordable accommodation: a studio flat of their own, equipped with a kitchen and bathroom, for just 350 euros a month. Students with citizenship in a European Union (EU) or European Economic Area (EEA) country, or Switzerland, are exempt from paying an application fee or tuition fees for any of their courses.
Coupled with its accredited faculties, leading research centres, and opportunities for close contact with professors, your time as a postgraduate student at Örebro University is sure to be memorable. To find out more about the university's programmes, click here.
Follow Örebro University on Facebook, LinkedIn and YouTube
AI in Cardiology: Where We Are Now and Where to Go Next – TCTMD
Posted: at 5:41 am
Artificial intelligence (AI) has become a buzzword in every cardiology subspecialty from imaging to electrophysiology, and researchers across the field's spectrum are increasingly using it to look at big data sets and mine electronic health records (EHRs).
But in concrete terms, what is the potential for AI, as well as its subcategories like machine learning, to treat heart disease or improve research? And what are its greatest challenges?
"There are so many unknowns that we capture in our existing data sources, so the exciting part of AI is that we have now the tools to leverage our current diagnostic modalities to their extreme, to the entirety," Rohan Khera, MBBS (Yale School of Medicine, New Haven, CT), told TCTMD. "We don't have to gather new data per se, but just the way to incorporate that data into our clinical decision-making process and the research process is exciting."
"Generally speaking, I'm excited about anything that allows doctors to be more of a doctor and that would make our job easier, so that we can focus our time on the hard part," Ann Marie Navar, MD, PhD (UT Southwestern Medical Center, Dallas, TX), commented to TCTMD. "If a computer-based algorithm can help me more quickly filter out what therapy somebody should or shouldn't be on and help me estimate the potential benefit of those therapies for an individual patient in a way that helps me explain it to a patient, that's great. That saves me a lot of cognitive load and frankly time that I can then spend in the fun part of being a doctor, which is the patient-physician interface."
"There should not be too much hype, hope, or phobia." – Jai Nahar
Although a proponent of AI in medicine, Jai Nahar, MD (George Washington School of Medicine, Washington, DC), who co-chairs the American College of Cardiology's Advanced Healthcare and Analytics Work Group, said the technology faces hurdles related to clinical validation, implementation, regulation, and reimbursement before it can be readily adopted.
"Cardiologists should be balanced in their approach to AI," he stressed to TCTMD. "There should not be too much hype, hope, or phobia. Like any advanced technology, we have to be cognizant about what the technology stands for and how it could be used in a constructive and equitable manner."
A Foothold in Imaging
Nico Bruining, PhD (Erasmus Medical Center, Rotterdam, the Netherlands), editor-in-chief of the European Heart Journal Digital Health, told TCTMD there are two main places in hospitals where he sees AI technologies flourishing in the near future: image analysis and real-time monitoring in conjunction with EHRs.
"What we also envision is that some of the measurements we now do in the hospital, like taking the EKG and making a decision about it, can move more to the general practitioner, so closer to the patient at home," he said. Since it's already possible to get a hospital-grade measurement onsite, AI will be applied more in primary and secondary prevention, Bruining predicted. "There we see a lot of possibilities, which we see sometimes deferred to a hospital."
On the imaging front, Manish Motwani, MBChB, PhD (Central Manchester University Hospital NHS Trust, England), who recently co-chaired a Society of Cardiovascular Computed Tomography webinar on the intersection of AI and cardiac imaging, observed to TCTMD that the field has "really exploded over the past 5 years." "Across all the modalities, people are looking at how they can extract more information from the images we acquire, faster, with less human input," he noted.
Already, most cardiac MRI vendors have embedded AI-based technologies to aid in calculating LV mass and ejection fraction by automatically drawing contours on images, allowing readers to merely make tweaks instead of starting from scratch, Motwani explained. Also, a field called radiomics is digging deeper into images and extracting data that would be impossible for humans to glean.
"When you're looking at a CT in black and white, your human eye can only resolve so much distinction between the pixels," Motwani said. "But it's actually acquired at a much higher spatial resolution than your monitor can display. This technique, radiomics, actually looks at the heterogeneity of all the pixels and the texture that is coming out with data and metrics that you can't even imagine. And for things like tumors, this can indicate mitoses and replication and how malignant something may be. So we're really starting to think about AI as finding things out that you didn't even know existed."
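The radiomics idea Motwani describes, quantifying pixel-level heterogeneity the eye cannot resolve, can be illustrated with a toy texture feature. The sketch below computes the Shannon entropy of an image patch's intensity histogram; real radiomics pipelines compute dozens of standardized features (co-occurrence, run-length, and so on), so this single number is purely illustrative.

```python
import numpy as np

def intensity_entropy(patch, bins=16):
    """Shannon entropy (bits) of the pixel-intensity histogram.

    Higher values indicate a more heterogeneous ("textured") patch.
    Intensities are assumed normalized to [0, 1].
    """
    counts, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    p = counts / counts.sum()
    p = p[p > 0]                      # drop empty bins (0 * log 0 := 0)
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
uniform_patch = np.full((32, 32), 0.5)   # homogeneous "tissue": entropy 0
textured_patch = rng.random((32, 32))    # heterogeneous "tissue": high entropy

print(f"uniform:  {intensity_entropy(uniform_patch):.2f} bits")
print(f"textured: {intensity_entropy(textured_patch):.2f} bits")
```

Two patches that look identically grey at display resolution can differ sharply on a measure like this, which is the point Motwani makes about texture carrying information beyond what a monitor shows.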
Yet another burgeoning sector of AI use in cardiology is in ECG analysis, where recent studies have shown machine-learning models to be better than humans at identifying conditions such as long QT syndrome and atrial fibrillation, calculating coronary calcium, and estimating physiologic age.
Peter Noseworthy, MD (Mayo Clinic, Rochester, MN), who studies AI-enabled ECG identification of hypertrophic cardiomyopathy, told TCTMD that although the term "low-hanging fruit" is overused, "[this ECG approach] is basically readily available data and an opportunity that we wanted to take advantage of."
Based on his findings and others', his institution now offers a research tool that allows any Mayo clinician to enable the AI-ECG to look for a variety of conditions. Noseworthy said the dashboard has been used thousands of times since it was made available a few months ago. It is not currently available to clinicians at other institutions, but "eventually, if it is approved by the US Food and Drug Administration, then we could probably build a widget in the Epic [EHR] that would allow other people to be able to run these kinds of dashboards."
"The exciting thing is that the AI-ECG allows us to truly do preventative medicine, because we can now start to anticipate the future development of diseases," said Michael Ackerman, MD, PhD (Mayo Clinic), a colleague of Noseworthy's who led the long QT study. "We can start to see diseases that nobody knew were there. . . . AI technology has the ability to sort of shake up all our standard excuses as to why we can't screen people for these conditions and change the whole landscape of the early detection of these very treatable, sudden-death-associated syndromes."
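The machine-learning setup behind ECG studies like these can be sketched in miniature. The snippet below trains a logistic-regression classifier by gradient descent on synthetic data standing in for ECG-derived features; the two features, the labels, and the noise model are all invented for illustration, and the published models are deep networks trained on raw waveforms rather than anything this simple.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "patients": two hypothetical ECG-derived features per patient,
# with label 1 when the (noisy) underlying condition is present.
n = 400
X = rng.normal(size=(n, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.normal(size=n) > 0).astype(float)

# Logistic regression fit by plain batch gradient descent.
w = np.zeros(2)
b = 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
    w -= 0.5 * (X.T @ (p - y)) / n           # gradient step on weights
    b -= 0.5 * (p - y).mean()                # gradient step on intercept

accuracy = (((X @ w + b) > 0) == (y > 0.5)).mean()
print(f"training accuracy: {accuracy:.2f}")
```

The same train-on-labeled-examples loop, scaled up to millions of ECGs and far richer models, is what lets studies like the long QT work detect patterns "that nobody knew were there."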
New Era of Research
Ackerman believes AI has also opened the door to a new era of research. "Every one of our papers combined from 2000 until this year [didn't include] 1.5 million ECGs," he said, noting that a single AI study can encompass that level of data. "It's kind of mind-boggling really."
Bruining said this added capability will allow researchers to collect "the data from not only your own hospital but also over a country or perhaps even over multiple countries. It's a little bit different in Europe than it is in the United States. In the United States you are one big country, although different states, and one language. For us in Europe, we are smaller countries and that makes it a little more difficult to combine all the data from all the different countries. But that is the size you need to develop robust and trustworthy algorithms. Because the higher the number, the more details you will find."
"AI technology has the ability to sort of shake up all our standard excuses as to why we can't screen people for these conditions and change the whole landscape of the early detection of these very treatable, sudden-death-associated syndromes." – Michael Ackerman
Wei-Qi Wei, MD, PhD (Vanderbilt University Medical Center, Nashville, TN), who has used AI to mine EHR data, told TCTMD that while messy, the information found in EHRs provides a much more comprehensive look at patient cohorts compared to the cross-sectional data typically used by clinical trials. "We have huge data here," he said. "The important thing for us is to understand the data and to try to use the data to improve the health care quality. That's our goal."
In order to do that, algorithms must first be able to bridge information from various systems, as otherwise the end result will not be relevant, Wei continued. "A very common phrase in the machine-learning world [is] 'garbage in, garbage out,'" he said.
Siloes and Generalizability
The development of different AI platforms has mostly emerged within institutions, with researchers often validating algorithms on internal data.
That scenario has transpired because, to develop these tools, researchers need two things often found only at large academic institutions, said Ackerman. "You need to have the AI scientists, the people who play in the sandbox with all their computational power, and you need a highly integrated, accessible electronic health record that you can then data mine, and marry the two."
Though productive, this system may build in inherent limits to AI algorithms in terms of generalizability.
Motwani pointed to the UK's National Health Service (NHS) to illustrate the point. The NHS, in theory, stores all patient data in a single database, Motwani said. "There's an opportunity for big data mining and you have the database ready to retrospectively test these programs. But on the other hand, in the US you have huge resources but a very fragmented healthcare system, and you've got one institution that might find it difficult to get the numbers and therefore need to interact with another institution, but there isn't a combined database."
This has resulted in a situation where much of the data used to develop AI tools are coming only from a handful of institutions and not a broad spectrum, he continued. "That makes it difficult, necessarily, to say, well, all of this applies to a population on the other end of the world."
Still, "the beauty of AI and machine learning is that it's constantly learning," Motwani said. "Most vendors, in whatever task they're doing, have a constant feedback loop so their algorithms are always updating in theory. As more sites come onboard, the populations the algorithms are being used on are more diverse. There's this positive feedback loop for it to constantly become more and more accurate."
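The "constant feedback loop" Motwani describes can be sketched as online learning: a model whose parameters take a small gradient step on each new labeled case as it arrives, instead of being refit from scratch. This is a schematic sketch only; the data-generating rule is invented, and a deployed medical algorithm would wrap any such updates in validation, drift monitoring, and regulatory controls.

```python
import numpy as np

class OnlineLogisticModel:
    """Logistic classifier updated one labeled case at a time (online SGD)."""

    def __init__(self, n_features, lr=0.1):
        self.w = np.zeros(n_features)
        self.b = 0.0
        self.lr = lr

    def predict_proba(self, x):
        return 1.0 / (1.0 + np.exp(-(x @ self.w + self.b)))

    def update(self, x, label):
        """One gradient step on a single (features, label) pair."""
        err = self.predict_proba(x) - label
        self.w -= self.lr * err * x
        self.b -= self.lr * err

rng = np.random.default_rng(1)
model = OnlineLogisticModel(n_features=2)

# Stream of new "cases" arriving one at a time; the ground-truth rule
# (condition present when the first feature is positive) is made up.
for _ in range(5000):
    x = rng.normal(size=2)
    model.update(x, float(x[0] > 0))

# Evaluate on fresh cases the model has never seen.
test_X = rng.normal(size=(200, 2))
accuracy = ((model.predict_proba(test_X) > 0.5) == (test_X[:, 0] > 0)).mean()
print(f"accuracy after streaming updates: {accuracy:.2f}")
```

The appeal of this design is exactly the feedback loop in the quote: as more diverse sites contribute cases, each update nudges the model toward the broader population it is actually used on.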
It would be helpful, said Bruining, if there were more open access to algorithms so that they could be more easily reproduced and evolved. That would help grow interest in these kinds of developments, he suggested, "because the medical field is still very conservative and not everybody will trust a deep-learning algorithm which teaches itself. So we have to overcome that."
Navar acknowledged the siloes that many AI researchers are working in, but said she has no doubt that the technology will be able to merge and become more generally disseminated with time. Once an institution learns something and figures something out, "[they will be able] to package that knowledge up and spread it in a way that everybody else can share in it," she predicted, adding that this will manifest in many ways. "There aren't that many different EHR companies in the US market. Although there's competition, there's not an infinite number of technology vendors for things like EKG reading software or echo reading software. So if we're talking about the cardiology space, here this is where I think that there's probably going to be a role for a lot of the companies in the space to adopt and help spread this technology."
While no large consortium of AI researchers in cardiology yet exists, Bruining said he is petitioning for one within the European Society of Cardiology. For now, initiatives like the National Institutes of Health (NIH)-funded eMERGE network and BigData@Heart in Europe are enabling scientists to collaborate and work toward common goals.
Regulation Speculation
It's one thing to use technology to help increase efficiency or keep track of patient data, but regulation will be required once physicians start using AI to help in clinical decision-making.
Bruining said many of these AI tools should be validated and regulated in the same way as any other medical device. "If you have software that does make a diagnosis, then you have to go through a whole regulation and you have to inform notified bodies, who will then look. The same thing as with drugs," he said. "Software is a device."
Ethically, Khera referred to AI-based predictive models as a "digital assay" of sorts that is no different than any other lab test. "So I think the bar for some of these tools, especially if they alter the way we deliver care, will probably be the same as any other assay," he said.
Navar agreed. "We're probably going to need to categorize AI-based interventions into those that are direct-to-patient or impacting patient care immediately versus those that are sort of filtered through a human or physician interaction," she said. "For example, an AI-based prediction model that's used to immediately recommend something to a patient that doesn't go through a doctor, that's like a medical device and needs to be really highly scrutinized, because we're depending on the accuracy of that to guide a treatment or a therapy for a patient. If it's something that is augmenting the workflow, maybe suggesting to the physician to consider something, then I think there's a bit more safeguard there because it's being filtered through the physician, and so may not be quite as scrutinized."
Up until now, however, regulatory approval has not kept pace with these technologies, Motwani noted, and therein lies a challenge for this expanding field. "At the moment, a lot of these technologies are being put through as devices, when actually if you think about it, they're not really."
Additionally, he continued, many AI-based tools are not prospectively tested but rather developed on historical databases.
"It's very hard to actually prospectively test AI," Motwani explained. "Are we going to say, 'OK, all these X-rays are now going to be read by a computer,' and then compare that to humans?" You'd have to get ethical approval for such a trial, and that's where it gets difficult. "The way AI is transitioning at the moment is that it's more of an assistant to human reads. [This] is probably a natural kind of transition that's helpful, rather than saying all of a sudden all of this is read by computers, which I think most people would feel uncomfortable with. In some ways, the lack of regulatory clarity is helpful because people are just dipping in and out, they're beginning to get comfortable with it and seeing whether it's useful or not."
Who Pays?
Who will bear the cost for development and application of these new technologies remains an open question.
Right now, Khera said most tools related to clinical decision support are institutionally supported, although the NIH has funded some research. However, implementation is more often paid for by health systems in a bid for efficiency in healthcare, he said. Still other algorithms are covered by the Centers for Medicare & Medicaid Services like any other assay.
In the imaging space, Motwani said, most of the companies involved are developing fee-to-scan pricing. "If you're a large institution and you have a huge faculty, is that really attractive? Probably not. But if you're a smaller institution that maybe doesn't have cardiac MRI readers, for example, it might be cost-effective rather than employing staff to get some of this processing done."
Yet then questions arise, such as "Are insurers going to pay for this if a human can do it?" and "Is it then that those human staff can then be deployed to other stuff?" Motwani said. "I think the business model will be that companies will make the case that your throughput of cases will be higher using some of these technologies. Read times will be shortened. Second opinions can be reduced. So that's the financial incentive."
This will vary depending on geography and how a country's health system is set up, added Motwani. "For example, in the US, the more [scans] that you're doing, the more you're billing, the more money you're making, and therefore it makes sense. Other healthcare systems like the UK where it doesn't work necessarily in that way, . . . there the case would be that you just have to get all the cases done as quickly and effectively as possible, so we'll pay for these tools because the human resources may not be available."
There are also cost savings to be had, added Bruining. "If you can reduce the workload, that can save costs, and also other costs could be saved if you can use AI more in primary prevention and secondary prevention, because the patients are then coming less to the hospitals."
Overall, Nahar observed, "once we have robust evidence that this works, that it decreases the healthcare spending and increases the quality of care, then I think we'll have more payers buy into this and they'll buy into paying for the cost."
Fear and Doctoring
The overarching worry about AI replacing human jobs is present in the cardiovascular medicine sector as well, but everyone who spoke with TCTMD said cardiologists should have no anxiety about adopting new technology.
"There's an opportunity to replace an old model," Wei said. "Yeah, it might replace some human jobs, but from another way, it creates some jobs as well. . . . Machine learning improves the efficiency or efficacy of learning from data, but at the same time more and more jobs will be available for people who understand artificial intelligence."
Bruining said that technologies come and go, citing the stethoscope as one example of something that has more or less disappeared within cardiology. "Physicians and also the nurses and other allied health professionals very much want a reduction in the workload and a reduction in the amount of data they have to type over from one system to another, and that can be handled by AI," he said.
"Is there scope for automation in some of our fields? Definitely so," said Khera, who recently led a study looking at machine learning and post-MI risk prediction. "But would it supersede our ability to provide care, especially in healthcare? I don't think that will actually happen. . . . I think [AI is] just expanding our horizons rather than replacing our existing framework, because our framework needs a lot of work."
Navar agreed. "I have yet to see a technology come along that is so transformative that it takes away so much of our work that we have nothing to do and now we're all going to be unemployed," she said. "I don't worry that computers are going to replace physicians at all. I'm actually excited that a lot of these AI-based tools can help us reduce the time that we spend doing things that we don't need to and spend more time doing the fun or harder part of being a doctor. . . . I think that there's always going to be limitations with any of these technologies, and I think that as humans, we're always going to need the human part of doctoring."
That notion, that AI might allow for deeper connection with patients rather than taking doctors out of the equation, has been dubbed "deep medicine" after a book of the same name by cardiologist Eric Topol, MD.
COVID-19, Prevention, and Opportunity
In light of the COVID-19 pandemic, Bruining said, AI research has been one of the fields to get a bump this past year. "It has torn down a lot of walls, and it also shows how important it is to share your scientific data," he said.
Motwani agreed. "On a quick Google search, you'll find hundreds and hundreds of AI-based algorithms for COVID," he said. "I think the people who had the infrastructure and databases and algorithms rapidly pivoted toward the COVID. Now whether they've made a difference in it prospectively is probably uncertain, but the tool set is there. It shows how they can actually use that data for a new disease and potentially if there were something to happen again."
Looking forward, Nahar sees the largest opportunity for AI in prediction of disease and precision medicine. "Just before the disease or the adverse events manifest, we have a big window for opportunity, and that's where we could use digital biomarkers, computational biomarkers for prediction of which patients are at risk, and target early intervention based on the risk intervention to the patient so they would not manifest their disease or the disease progression would be halted," he said. "They would not go to the ER as frequently, they would not get hospitalized, and the disease would not advance."
Navar said that for now cardiologists should think of AI more as augmented intelligence rather than artificial, given that humans are still an integral part of its application. In the future, she predicted, this technology will be viewed through a different lens. "We don't talk about statistics as a field. We don't say, 'What are the challenges of statistics?' because there's like millions of applications of statistics and it depends on both where you're using it, which statistical technique you're using, and how good you are at that particular technique," she said. "We're going to look back on how we're talking about AI now and laugh a little bit at trying to capture basically an entirely huge set of different technologies with a single paintbrush, which is pretty impossible."
AI in Cardiology: Where We Are Now and Where to Go Next - TCTMD
Allscripts CEO talks EHR innovation, AI and the cloud – Healthcare IT News
Posted: at 5:41 am
Electronic health records have come a long way, but for many users, they have a long way yet to go.
Physicians and nurses who are tasked with using the complex tools day in and day out have usability issues that stand in the way of spending quality time with their patients. They often ask, 'Am I taking care of the patient or the computer?'
But in recent years, EHR vendors have been innovating with their technologies and making usability strides with tools such as artificial intelligence and the cloud. Innovation with EHRs is key to making the health IT experience and the overall healthcare experience better for patients and caregivers.
Healthcare IT News interviewed Paul Black, CEO of Allscripts, an EHR giant, to discuss EHR innovation and the state of electronic health records today.
Q: On the subject of innovation in EHRs, technology should make the human experience easier for providers and health systems. Where is health IT failing healthcare organizations today with EHRs?
A: It isn't that health IT is necessarily failing, but priorities have been directed toward less innovative endeavors, such as meaningful use requirements. For the last several years, and since the advent of meaningful use, some health IT development has been more about checking the box with technology, due to tight timelines and the changing scope of the rules. This has had a profound effect on innovation and only over the past few years has the effect of physician burnout been made clear.
Q: On the flipside of this coin, how is health IT today successfully innovating and making the EHR experience for providers better?
A: Health IT companies that are embracing technology to deliver human-centered design features and products will be the software suppliers that are most sought after.
These software suppliers will need to look within, create design expertise and hierarchies that support human-centric design thinking, support industry-leading training, and enable anyone who has a product role to lead by design and revamp clinical interactions with the products by doing practice-in-motion studies to inform the design.
Health IT companies need to look beyond what is typically in use today; they should look to the consumer market and pull in those technologies to make EHRs more holistic in addressing the healthcare industry's challenges.
For example, socioeconomic barriers to healthcare are a large challenge for organizations today. Let's take one specific issue, which is inadequate transportation for patients to get to their follow-up appointments, which creates gaps in care and potentially adds up to 22 million missed appointments per year.
Integrating ride-share capabilities into the EHR makes it very easy to ensure patients have reliable transportation and incorporates a consumer-familiar product that already is trusted by patients, outside of healthcare.
Allscripts has successfully done this with our Lyft integration into Sunrise. Software suppliers, like Allscripts, who make the decision to prioritize creating solutions to these types of challenges, are effectively delivering a human-centric technology experience.
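As a toy illustration of the kind of integration described above, the sketch below wires a ride request into appointment booking. The `RideShareAPI` class, its `schedule` method, and all the data are invented placeholders for illustration, not Allscripts' Sunrise or Lyft's actual interfaces.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Appointment:
    patient_id: str
    clinic_address: str
    time: datetime

class RideShareAPI:
    """Stand-in for a real vendor SDK; a production integration would
    call the ride-share service's partner API instead."""
    def schedule(self, pickup, dropoff, arrive_by):
        return {"pickup": pickup, "dropoff": dropoff, "arrive_by": arrive_by}

def book_follow_up(api, appt, home_address):
    # Request a ride timed to arrive 15 minutes before the visit, so the
    # transportation gap is closed at the moment the appointment is made.
    return api.schedule(home_address, appt.clinic_address,
                        appt.time - timedelta(minutes=15))

appt = Appointment("p-001", "12 Clinic Way", datetime(2021, 5, 3, 9, 0))
ride = book_follow_up(RideShareAPI(), appt, "34 Home St")
```

The design point is that the booking happens inside the scheduling workflow, so no one has to remember to arrange transportation afterward.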
Q: What role does artificial intelligence play in innovation with EHRs today?
A: Currently, artificial intelligence is playing a small role in EHRs. However, over the next two to three years, this role will increase exponentially. On the clinical side, AI is still getting its "sea legs" and has slower adoption, only due to the refining and training of the AI models.
One area where it is being heavily adopted is patient summarization. This is the concept of organizing the patient's clinical data in a way that makes it easy for a provider to consume it. They don't have to manually gather the pertinent information, as it's fed to the provider right in their workflow.
AI will be used to provide curated clinical decision insights at the point of care, serve up critical information that will help clinicians make faster decisions, and automate clinical tasks that have bogged down clinicians today and have led to clinician burnout.
On the revenue cycle side, AI and RPA (robotic process automation) are being used today to ensure accurate and timely claims, reducing the workloads on the back-end processes and driving down revenue-cycle administrative costs.
The use of these tools, as well as AI bots, will increase significantly and eventually automate the revenue-cycle process, further driving down costs and providing increased revenue to power hospital growth initiatives.
Allscripts is excited to be moving along a development road map that includes these AI innovations, as well as additional machine learning and cognitive support, through our healthcare IT.
Q: What role does the cloud play in innovation with EHRs today?
A: The cloud plays a significant role with innovation. The ability to do complex computing in the cloud will enable healthcare IT advances that have not been achieved on local computation stacks.
Healthcare IT software suppliers that can take advantage of these cloud innovations will be poised to deliver point-of-care cognitive support, quickly, to their provider organizations.
Allscripts recognized early on that the future was in the cloud and invested heavily in an extended strategic partnership with Microsoft and the use of the Azure cloud-based capabilities to enable the best possible healthcare IT cloud experience.
Among other things, these types of partnerships can bring healthcare organizations innovations like Microsoft Text Analytics, part of Azure Cognitive Services, which automates clinicians' workflows in areas like problem-list management, actionable orders and progress-note creation, reducing cognitive burden and driving better outcomes for patients.
With the ability to amass large amounts of federated data and the ability to perform advanced analytics, cloud computing can provide faster evidence-based treatment protocols that have typically taken up to 10 years to get from bench to bedside.
This will now take a fraction of that time, providing personalized care plans that will increase the speed to wellness and reduce the cost of healthcare compared with the sometimes trial-and-error approach to medication and treatment management.
Twitter: @SiwickiHealthIT. Email the writer: bsiwicki@himss.org. Healthcare IT News is a HIMSS Media publication.
Arm's first new architecture in a decade is designed for security and AI – The Verge
Chip designer Arm has announced Armv9, its first new chip architecture in a decade, following Armv8 way back in 2011. According to Arm, Armv9 offers three major improvements over the previous architecture: security, better AI performance, and faster performance in general. These benefits should eventually trickle down to devices with processors based on Arm's designs.
It's an important milestone for the company, whose designs power almost every smartphone sold today, as well as increasing numbers of laptops and even servers. Apple announced its Mac computers' transition to its own Arm-based processors last year, and its first Apple Silicon Macs were released later in the year. Other manufacturers like Microsoft have also released Arm-based laptops in recent years.
First of the big three improvements coming with Armv9 is security. The new Arm Confidential Compute Architecture (CCA) attempts to protect sensitive data with a secure, hardware-based environment. These so-called Realms can be dynamically created to protect important data and code from the rest of the system.
Next up is AI processing. Armv9 will include Scalable Vector Extension 2 (SVE2), a technology that is designed to help with machine learning and digital signal processing tasks. This should benefit everything from 5G systems to virtual and augmented reality and machine learning workloads like image processing and voice recognition. AI applications like these are said to be a key reason why Nvidia is currently in the process of buying Arm for $40 billion.
But away from these more specific improvements, Arm also promises more general performance increases from Armv9. It expects CPU performance to increase by over 30 percent across the next two generations, with further performance boosts coming from software and hardware optimizations. Arm says all existing software will run on Armv9-based processors without any problems.
With the architecture announced, the big question is when the processors using the architecture might release and find their way into consumer products. Arm says it expects the first Armv9-based silicon to ship before the end of the year.
Silicon Valley leaders think A.I. will one day fund free cash handouts. But experts aren't convinced – CNBC
Artificial intelligence companies could become so powerful and so wealthy that they're able to provide a universal basic income to every man, woman and child on Earth.
That's how some in the AI community have interpreted a lengthy blog post from Sam Altman, the CEO of research lab OpenAI, that was published earlier this month.
In as little as 10 years, AI could generate enough wealth to pay every adult in the U.S. $13,500 a year, Altman said in his 2,933-word piece called "Moore's Law for Everything."
"My work at OpenAI reminds me every day about the magnitude of the socioeconomic change that is coming sooner than most people believe," said Altman, the former president of renowned start-up accelerator Y-Combinator earlier this month. "Software that can think and learn will do more and more of the work that people now do."
But critics are concerned that Altman's views could cause more harm than good, and that he's misleading the public on where AI is headed.
Glen Weyl, an economist and a principal researcher at Microsoft Research, wrote on Twitter: "This beautifully epitomizes the AI ideology that I believe is the most dangerous force in the world today."
One industry source, who asked to remain anonymous due to the nature of the discussion, told CNBC that Altman "envisions a world wherein he and his AI-CEO peers become so immensely powerful that they run every non-AI company (employing people) out of business and every American worker to unemployment. So powerful that a percentage of OpenAI's (and its peers') income could bankroll UBI for every citizen of America."
Altman will be able to "get away with it," the source said, because "politicians will be enticed by his immense tax revenue and by the popularity that paying their voter's salaries (UBI) will give them. But this is an illusion. Sam is no different from any other capitalist trying to persuade the government to allow an oligarchy."
One of the main thrusts of the essay is a call to tax capital, in the form of companies and land, instead of labor. That's where the UBI money would come from.
"We could do something called the American Equity Fund," wrote Altman. "The American Equity Fund would be capitalized by taxing companies above a certain valuation 2.5% of their market value each year, payable in shares transferred to the fund, and by taxing 2.5% of the value of all privately-held land, payable in dollars."
He added: "All citizens over 18 would get an annual distribution, in dollars and company shares, into their accounts. People would be entrusted to use the money however they needed or wanted for better education, healthcare, housing, starting a company, whatever."
Altman said every citizen would get more money from the fund each year, provided the country keeps doing better.
"Every citizen would therefore increasingly partake of the freedoms, powers, autonomies, and opportunities that come with economic self-determination," he said. "Poverty would be greatly reduced and many more people would have a shot at the life they want."
Matt Clifford, the co-founder of start-up builder Entrepreneur First, wrote in his "Thoughts in Between" newsletter: "I don't think there is anything intellectually radical here ... these ideas have been around for a long time but it's fascinating as a showcase of how mainstream these previously fringe ideas have become among tech elites."
Meanwhile, Matt Prewitt, president of non-profit RadicalxChange, which describes itself as a global movement for next-generation political economies, told CNBC: "The piece sells a vision of the future that lets our future overlords off way too easy, and would likely create a sort of peasant class encompassing most of society."
He added: "I can imagine even worse futures but this the wrong direction in which to point our imaginations. By focusing instead on guaranteeing and enabling deeper, broader participation inpolitical and economic life, I think we can do far better."
Richard Miller, founder of tech consultancy firm Miller-Klein Associates, told CNBC that Altman's post feels "muddled," adding that "the model is unfettered capitalism."
Michael Jordan, an academic at the University of California, Berkeley, told CNBC the blog post is so far from anything intellectually reasonable, either from a technology point of view or an economic point of view, that he'd prefer not to comment.
In Altman's defense, he wrote in his blog that the idea is designed to be little more than a "conversation starter." Altman did not immediately reply to a CNBC request for an interview.
An OpenAI spokesperson encouraged people to read the essay for themselves.
Not everyone disagreed with Altman. "I like the suggested wealth taxation strategies," wrote Deloitte worker Janine Moir on Twitter.
Founded in San Francisco in 2015 by a group of entrepreneurs including Elon Musk, OpenAI is widely regarded as one of the top AI labs in the world, along with Facebook AI Research, and DeepMind, which was acquired by Google in 2014.
The research lab, backed by Microsoft with $1 billion in July 2019, is best known for creating an AI image generator, called Dall-E, and an AI text generator, known as GPT-3. It has also developed agents that can beat the best humans at games like Dota 2. But it's nowhere near creating the AI technology that Altman describes, experts told CNBC.
Daron Acemoglu, an economist at MIT, told CNBC: "There is an incredible mistaken optimism of what AI is capable of doing."
Acemoglu said algorithms are good at performing some "very, very narrow tasks" and that they can sometimes help businesses to cut costs or improve a product.
"But they're not that revolutionary, and there's no evidence that any of this is going to be revolutionary," he said, adding that AI leaders are "waxing lyrical about what AI is doing already and how it's revolutionizing things."
In terms of the measures that are standard for economic success, like total factor productivity growth, or output per worker, many sectors are having the worst time they've had in about 100 years, Acemoglu said. "It's not comparable to previous periods of rapid technological progress," he said.
"If you look at the 1950s and the 1960s, the rate of TFP (total factor productivity) growth was about 3% a year," said Acemoglu. "Today it's about 0.5%. What that means is you're losing about a point and a half percentage growth of GDP (gross domestic product) every year so it's a really huge, huge, huge productivity slowdown. It's completely inconsistent with this view that we're just getting an enormous amount of benefits (from AI)."
Technology evangelists have been saying AI will change the world for years, with some speculating that "artificial general intelligence" and "superintelligence" aren't far away.
AGI is the hypothetical ability of an AI to understand or learn any intellectual task that a human being can, while superintelligence is defined by Oxford professor Nick Bostrom as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest."
But some argue that we're no closer to AGI or superintelligence than we were at the start of the century.
"One can say, and some do, 'oh it's just around the corner.' But the premise of that doesn't seem to be very well articulated. It was just around the corner 10 years ago and it hasn't come," said Acemoglu.
Regulators Want to Know How Financial Institutions Use AI and How They’re Mitigating Risks – Nextgov
A group of federal financial regulators says they know U.S. financial institutions are using artificial intelligence but wants more information on where the technology is being deployed and how those organizations are accounting for the risks involved.
The financial sector is using forms of AIincluding machine learning and natural language processingto automate rote tasks and spot trends humans might miss. But new technologies always carry inherent risks, and AI has those same issues, as well as a host of its own.
On Wednesday, the Board of Governors of the Federal Reserve System, the Bureau of Consumer Financial Protection, the Federal Deposit Insurance Corporation, the National Credit Union Administration and the Office of the Comptroller of the Currency will publish a request for information in the Federal Register seeking feedback on AI uses and risk management in the financial sector.
All of these agencies have some regulatory oversight responsibility, including the use of new technologies and techniques, and the associated risks involved with any kind of innovation.
"With appropriate governance, risk management, and compliance management, financial institutions' use of innovative technologies and techniques, such as those involving AI, has the potential to augment business decision-making, and enhance services available to consumers and businesses," the request states.
Financial organizations are already using some AI technologies to identify fraud and unusual transactions, personalize customer service, help make decisions on creditworthiness, process text documents with natural language processing, and support cybersecurity and general risk management.
AIlike most other technologieshas the potential to automate some tasks and can help identify trends human analysts might have missed.
"AI can identify relationships among variables that are not intuitive or not revealed by more traditional techniques," the RFI states. "AI can better process certain forms of information, such as text, that may be impractical or difficult to process using traditional techniques. AI also facilitates processing significantly large and detailed datasets, both structured and unstructured, by identifying patterns or correlations that would be impracticable to ascertain otherwise."
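A minimal illustration of the kind of pattern-spotting the RFI describes: flagging a transaction that deviates sharply from an account's history. The threshold and data below are toy placeholders, far simpler than any production fraud model.

```python
import statistics

def flag_unusual(amounts, new_amount, z_threshold=3.0):
    """Flag a new transaction whose z-score against the account's
    history exceeds the threshold."""
    mean = statistics.fmean(amounts)
    stdev = statistics.stdev(amounts)
    z = (new_amount - mean) / stdev
    return abs(z) > z_threshold

history = [42.0, 38.5, 45.0, 40.2, 39.9, 44.1]
print(flag_unusual(history, 41.0))   # False: in line with history
print(flag_unusual(history, 950.0))  # True: a large outlier
```

Real systems learn far richer patterns (merchant, timing, geography) across millions of accounts, but the principle of scoring deviations from learned behavior is the same.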
That said, there are risks to deploying the new technologyas there are with any innovation disrupting a sectorsuch as automating discriminatory processes and policies, creating data leakage and sharing problems and new cybersecurity weaknesses.
But AI also carries its own specific challenges. The financial agencies cite explainability, broader or more intensive data usage and dynamic updating as examples.
The request for information seeks to understand respondents' views on the use of AI by financial institutions in their provision of services to customers and for other business or operational purposes; appropriate governance, risk management, and controls over AI; and any challenges in developing, adopting, and managing AI.
The request also turns the tables on the agencies themselves, asking institutions about what assistance, regulations, laws and the like would help the sector better manage the promise and risk of AI.
The RFI includes 17 detailed questions. Responses are due 60 days after the publication date, or June 30.
A.I. researchers urge regulators not to slam the brakes on its development – CNBC
LONDON – Artificial intelligence researchers argue that there's little point in imposing strict regulations on its development at this stage, as the technology is still in its infancy and red tape will only slow down progress in the field.
AI systems are currently capable of performing relatively "narrow" tasks such as playing games, translating languages, and recommending content.
But they're far from being "general" in any way, and some argue that experts are no closer to the holy grail of AGI (artificial general intelligence) – the hypothetical ability of an AI to understand or learn any intellectual task that a human being can – than they were in the 1960s when the so-called "godfathers of AI" had some early breakthroughs.
Computer scientists in the field have told CNBC that AI's abilities have been significantly overhyped by some. Neil Lawrence, a professor at the University of Cambridge, told CNBC that the term AI has been turned into something that it isn't.
"No one has created anything that's anything like the capabilities of human intelligence," said Lawrence, who used to be Amazon's director of machine learning in Cambridge. "These are simple algorithmic decision-making things."
Lawrence said there's no need for regulators to impose strict new rules on AI development at this stage.
People say "what if we create a conscious AI and it's sort of a freewill" said Lawrence. "I think we're a long way from that even being a relevant discussion."
The question is, how far away are we? A few years? A few decades? A few centuries? No one really knows, but some governments are keen to ensure they're ready.
In 2014, Elon Musk warned that AI could "potentially be more dangerous than nukes" and the late physicist Stephen Hawking said in the same year that AI could end mankind. In 2017, Musk again stressed AI's dangers, saying that it could lead to a third world war and he called for AI development to be regulated.
"AI is a fundamental existential risk for human civilization, and I don't think people fully appreciate that," Musk said. However, many AI researchers take issue with Musk's views on AI.
In 2017, Demis Hassabis, the polymath founder and CEO of DeepMind, agreed with AI researchers and business leaders (including Musk) at a conference that "superintelligence" will exist one day.
Superintelligence is defined by Oxford professor Nick Bostrom as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest." He and others have speculated that superintelligent machines could one day turn against humans.
A number of research institutions around the world are focusing on AI safety, including the Future of Humanity Institute in Oxford and the Centre for the Study of Existential Risk in Cambridge.
Bostrom, the founding director of the Future of Humanity Institute, told CNBC last year that there are three broad categories of ways in which AI could end up causing harm if it somehow became much more powerful.
"Each of these categories is a plausible place where things could go wrong," said the Swedish philosopher.
Skype co-founder Jaan Tallinn sees AI as one of the most likely existential threats to humanity. He's spending millions of dollars to try to ensure the technology is developed safely, including making early investments in AI labs like DeepMind (partly so that he can keep tabs on what they're doing) and funding AI safety research at universities.
Tallinn told CNBC last November that it's important to look at how strongly AI development will feed back into its own development.
"If one day humans are developing AI and the next day humans are out of the loop then I think it's very justified to be concerned about what happens," he said.
But Joshua Feast, an MIT graduate and the founder of Boston-based AI software firm Cogito, told CNBC: "There is nothing in the (AI) technology today that implies we will ever get to AGI with it."
Feast added that it's not a linear path, and the world isn't steadily progressing toward AGI.
He conceded that there could be a "giant leap" at some point that puts us on the path to AGI, but he doesn't view us as being on that path today.
Feast said policymakers would be better off focusing on AI bias, a major issue with many of today's algorithms. In some instances, algorithms have learned to do things like identify someone in a photo from datasets that have racist or sexist human biases baked into them.
The regulation of AI is an emerging issue worldwide and policymakers have the difficult task of finding the right balance between encouraging its development and managing the associated risks.
They also need to decide whether to try to regulate "AI as a whole" or whether to try to introduce AI legislation for specific areas, such as facial recognition and self-driving cars.
Tesla's self-driving technology is perceived as some of the most advanced in the world, but the company's vehicles still crash into things. Earlier this month, for example, a Tesla collided with a police car in the U.S.
"For it (legislation) to be practically useful, you have to talk about it in context," said Lawrence, adding that policymakers should identify what "new thing" AI can do that wasn't possible before and then consider whether regulation is necessary.
Politicians in Europe are arguably doing more to try to regulate AI than anyone else.
In February 2020, the EU published its draft strategy paper for promoting and regulating AI, while the European Parliament put forward recommendations in October on what AI rules should address with regard to ethics, liability and intellectual property rights.
The European Parliament said "high-risk AI technologies, such as those with self-learning capacities, should be designed to allow for human oversight at any time." It added that ensuring AI's self-learning capacities can be "disabled" if it turns out to be dangerous is also a top priority.
Regulation efforts in the U.S. have largely focused on how to make self-driving cars safe and whether or not AI should be used in warfare. In a 2016 report, the National Science and Technology Council set a precedent to allow researchers to continue to develop new AI software with few restrictions.
The National Security Commission on AI, led by ex-Google CEO Eric Schmidt, issued a 756-page report this month saying the U.S. is not prepared to defend or compete in the AI era. The report warns that AI systems will be used in the "pursuit of power" and that "AI will not stay in the domain of superpowers or the realm of science fiction."
The commission urged President Joe Biden to reject calls for a global ban on autonomous weapons, saying that China and Russia are unlikely to keep to any treaty they sign. "We will not be able to defend against AI-enabled threats without ubiquitous AI capabilities and new warfighting paradigms," wrote Schmidt.
Meanwhile, there are also global AI regulation initiatives underway.
In 2018, Canada and France announced plans for a G-7-backed international panel to study the global effects of AI on people and economies while also directing AI development. The panel would be similar to the Intergovernmental Panel on Climate Change. It was renamed the Global Partnership on AI in 2019. The U.S. has yet to endorse it.