The Prometheus League
Breaking News and Updates
Daily Archives: April 9, 2021
Understanding the Connected Nature of Type 1 Diabetes, Other Autoimmune Diseases – AJMC.com Managed Markets Network
Posted: April 9, 2021 at 2:42 am
A team of researchers in Indiana has found that the best way to find new treatments for autoimmune diseases, including type 1 diabetes (T1D), is to study the immune system and targeted tissues together, according to an article published earlier this year.
"Looking at the immune system in isolation is akin to attempting to fly a plane with only one wing," said Decio L. Eizirik, MD, PhD, the scientific director of the Indiana Biosciences Research Institute Diabetes Center, who is the senior author of the paper that appeared in Science Advances.1
The research, funded in part by JDRF, springs from the fact that autoimmune diseases are increasing worldwide, and the prevalence of T1D, systemic lupus erythematosus (SLE), multiple sclerosis (MS), and rheumatoid arthritis (RA) has reached 0.5% to 5%, depending on the region. JDRF's clinical research strategy has focused in recent years not only on developing treatments and ultimately cures for T1D, but also on preventing it by screening those at greatest risk of developing the disease and pursuing interventions to halt its onset.
As the authors wrote, "While the immune targets of T1D, SLE, MS, and RA are distinct, they share several similar elements, including common variants that pattern disease risk, local inflammation with contribution by innate immunity, and downstream mechanisms mediating target tissue damage."
They focused on the increasing evidence that target tissues are "not innocent bystanders of the autoimmune attack, but participate in a deleterious dialog with the immune system that contributes to their own demise." In their work, the researchers mined RNA sequencing datasets from relevant organ and tissue cells in the different diseases and identified similar and dissimilar gene signatures. In doing so, they identified both candidate genes for the 4 major diseases and major common gene expression changes in tissues among them.
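To make that mining step concrete, here is a minimal sketch of how such a cross-disease signature comparison might look in code. The file names, column names, and cutoffs are hypothetical illustrations, not the authors' actual pipeline:

```python
# Minimal sketch of comparing gene signatures across diseases.
# Assumes hypothetical per-disease CSVs with columns: gene, log2_fold_change, adj_p_value.
import pandas as pd

DISEASES = ["T1D", "SLE", "MS", "RA"]  # the four diseases discussed above

def signature(path, p_cutoff=0.05, fc_cutoff=1.0):
    """Return the set of significantly differentially expressed genes."""
    df = pd.read_csv(path)
    hits = df[(df["adj_p_value"] < p_cutoff) & (df["log2_fold_change"].abs() > fc_cutoff)]
    return set(hits["gene"])

signatures = {d: signature(f"{d}_target_tissue_deg.csv") for d in DISEASES}

# Genes dysregulated in every disease's target tissue (candidate common pathways)
shared = set.intersection(*signatures.values())
# Genes unique to one disease (dissimilar signatures)
for d in DISEASES:
    others = set.union(*(signatures[o] for o in DISEASES if o != d))
    print(d, "unique genes:", len(signatures[d] - others))
print("shared across all four:", sorted(shared)[:20])
```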
One common gene is TYK2, which encodes a protein that regulates interferon signaling. The team showed in its research that use of TYK2 inhibitors, already in use for other autoimmune diseases, protects β-cells against immune-mediated damage.
"This research is significant in reaching the JDRF's mission to cure, treat and prevent T1D," Frank Martin, PhD, JDRF director of research, said in a statement. "Discovering the common pathways of tissue destruction across multiple autoimmune diseases will dramatically accelerate our path to a cure for T1D. Drugs that are effective in one autoimmune disease could be equally beneficial for another and quickly repurposed to make a big impact for people living with that disease. Characterizing the similarities and differences between multiple autoimmune diseases has the potential to transform the way we treat and cure these diseases in the future.
JDRF has undertaken a large-scale screening project, called T1Detect, that offers participants a blood test to find antibodies, which tell whether a person is at an early stage of T1D and likely to become insulin dependent. The project comes not only as rates of T1D are rising overall, but as they are rising faster among Black and Hispanic youth. The difference now is that there may soon be a treatment at hand.
Another JDRF-funded study, reported last month in Science Translational Medicine, showed that the monoclonal antibody teplizumab delayed the onset of T1D.2 The FDA has scheduled an advisory panel on teplizumab for May 27, and a target action date for the drug is set for July 2, 2021.
See original here:
Understanding the Connected Nature of Type 1 Diabetes, Other Autoimmune Diseases - AJMC.com Managed Markets Network
NanoString Highlights Spatial Biology Research from the 2021 American Association of Cancer Research (AACR) Conference – BioSpace
Posted: at 2:42 am
SEATTLE--(BUSINESS WIRE)-- NanoString Technologies, Inc. (NASDAQ: NSTG), a leading provider of life science tools for discovery and translational research, today announced the highlights of spatial biology abstracts that will be presented at the 2021 meeting of the American Association of Cancer Research (AACR), which will be held virtually from April 10 - 15, 2021.
The GeoMx Digital Spatial Profiler (DSP) enables researchers to characterize tissue morphology and to rapidly and quantitatively profile RNA and proteins. To date, NanoString and its collaborators have presented DSP data in dozens of abstracts at major scientific meetings and more than 45 peer-reviewed publications, demonstrating DSP's utility to address a wide range of biological questions in formalin-fixed paraffin-embedded (FFPE) and frozen tissues. At AACR 2021, eight abstracts that used GeoMx DSP will be presented during the poster session on Saturday, April 10.
Four of the eight abstracts will be presented by investigators from the GeoMx Breast Cancer Consortium (GBCC), an international network of breast cancer researchers. Their goal is to apply innovative approaches and decipher the spatial context of breast cancer to develop a comprehensive atlas and database of novel biomarkers for the disease.
GBCC Abstracts
Poster 2718: Digital spatial profiling in HER2 positive breast cancer: The road to precision medicine
In this work, the GeoMx DSP was used to profile 71 protein targets, and gene expression profiling was done using NanoString's nCounter PanCancer IO360 assay, for primary and metastatic tissues from human epidermal growth factor receptor 2 positive (HER2+) breast cancer (BC) patients. A detailed characterization of carefully chosen immune cold, warm and hot regions of interest (ROI) in the tumor and tumor immune microenvironment of these samples established that primary tumors had a higher number of immune cells than the metastatic sites. These findings, therefore, suggest that immunotherapy in early-stage BC could be more effective than in advanced BC.
Poster 2701: Molecular profiling to assess the immune response to neoadjuvant SABR in early breast cancer
NanoString's Human PanCancer immune profiling panel was used to assess the impact of localized radiotherapy in eliciting an immune response in primary breast carcinomas before lumpectomy. The researchers analyzed 25 patient samples of low-risk primary breast carcinomas from the SIGNAL 2.0 clinical trial using the GeoMx DSP platform, pre- and post-stereotactic body radiation therapy (SBRT). Significant differences were found in immune microenvironment gene expression patterns and cellular composition after radiotherapy, demonstrating that SBRT treatment indeed evokes an immune response, increasing the innate immune response.
Poster 2698: Spatial gene expression profiling in breast cancer
Transcriptome profiling was performed for a cohort of breast cancer lumpectomies using the Cancer Transcriptomic Atlas (CTA) assay on the GeoMx DSP platform. Analysis of 60 patient samples revealed region-specific heterogeneity in unifocal and multifocal cancer tumors. This study demonstrates and establishes the importance of interactions between immune and tumor cells in the tumor microenvironment and the need to develop a strategy to stratify patients to available targeted therapies.
Poster 2726: Characterization of immune microenvironment and heterogeneity in breast cancer subtypes
In this work, the immune microenvironment of Luminal A, Luminal B, Basal, and HER2 tumor subtypes in a cohort of early breast cancer patients was studied using protein biomarkers. The markers were delineated in a spatial context using the GeoMx DSP. Characterization of the immune microenvironment subtypes provided evidence for potential clinical use for GeoMx DSP in diagnosing and better stratifying breast cancer patients based on spatial heterogeneity in tumor and tumor microenvironment.
Other spatial abstracts
Poster 339: Resistance to trastuzumab is associated with alpha-smooth muscle actin expression in the stroma of patients with HER2+ breast cancer
GeoMx DSP was used to identify biomarkers for resistance to trastuzumab in HER2+ breast cancer. Fifty-eight protein targets were analyzed in three different regions of interest (tumor [PanCK+], leukocyte [CD45+/CD68-], and macrophage [CD68+]) in a cohort of 151 breast cancer patients who received trastuzumab. The study uncovered α-SMA as a potential biomarker to augment the predictive value of the current standard-of-care HER2 assay and justifies its further validation in light of the many new HER2-targeted therapies.
Poster 705: SARS-CoV-2 infection of the human heart governs intracardiac innate immune response
Spatial profiling of human post-mortem cardiac samples of SARS-CoV-2-infected myocardium was carried out using NanoString's Whole Transcriptome Analysis (1,864 genes) panel, along with a matching proteome panel, on the GeoMx digital spatial profiler. The purpose of the investigation was to elucidate the molecular mechanisms underlying cardiac toxicity, a severe cause of morbidity and mortality in patients on doxorubicin (DOX) therapy. The study showed interesting gender-specific differential gene expression patterns in the myocardium between SARS-CoV-2-infected and control regions of interest. Signatures of enhanced innate and acquired immune signaling, apoptosis and autophagy, chromatin remodeling, reduced DNA repair, and reduced oxidoreductase activity were all observed in regions of infection. Additionally, a DOX-induced increase in the expression of TMPRSS2 and cathepsins A, B, and F clearly indicated enhanced SARS-CoV-2 susceptibility in the myocardium, thus placing cancer patients on DOX therapy at increased risk of cardiac damage.
Poster 2731: Cell-type deconvolution of African American breast tumors reveals spatial heterogeneity of the immune microenvironment
Researchers at the University of Chicago carried out spatial gene expression analysis within localized segments of triple-negative breast cancer (TNBC) tumors from a cohort of self-reported African American patients in the Chicago Multi-Ethnic Breast Cancer Study (ChiMEC). Regions of interest for spatial characterization of the tumor and tumor microenvironment using the GeoMx DSP Cancer Transcriptome Atlas assay were manually selected based on specific morphologies. The 1,825 genes interrogated in the CTA assay provided a granular understanding of the heterogeneity of the immune landscape within tumors.
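Cell-type deconvolution of this kind treats each region's bulk expression profile as a weighted mixture of cell-type signatures and solves for the weights. A toy sketch of the idea using non-negative least squares, with random stand-in data rather than anything from ChiMEC:

```python
# Toy cell-type deconvolution: solve bulk ≈ signatures @ proportions, with proportions >= 0.
# Random stand-in data; a real analysis would use a curated signature matrix.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_genes = 200
cell_types = ["tumor", "T_cell", "B_cell", "macrophage"]

signatures = rng.gamma(2.0, 1.0, size=(n_genes, len(cell_types)))  # genes x cell types
true_props = np.array([0.6, 0.2, 0.1, 0.1])
bulk = signatures @ true_props + rng.normal(0, 0.05, n_genes)      # observed region profile

weights, _ = nnls(signatures, bulk)   # non-negative mixture weights
props = weights / weights.sum()       # normalize to proportions
for ct, p in zip(cell_types, props):
    print(f"{ct}: {p:.2f}")
```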
Poster 2771: Comprehensive analysis of immuno-oncology markers in the tumor microenvironment of solid tumor samples using GeoMx™ digital spatial profiler (DSP) and MultiOmyx™ hyperplexed immunofluorescence (IF)
This study describes a multi-faceted, highly multiplexed tissue analysis of critical immuno-oncology (IO) protein markers in a pan-cancer cohort of up to 35 FFPE samples originating from breast, head and neck, prostate, non-small cell lung cancer (NSCLC), endometrial and colorectal indications, using the NanoString human IO panel on GeoMx DSP in combination with a complementary MultiOmyx hyperplexed immunofluorescence (IF) assay. The spatial and quantitative data outputs from the DSP nCounter system and the cell classification information from the MultiOmyx assay gave the researchers the ability not only to characterize the immunophenotypes but also to visualize the spatial distribution of tumor-infiltrating immune cells at single-cell resolution within the tumor microenvironment (TME).
Spotlight Theaters at AACR
NanoString will be hosting two spotlight theaters during AACR 2021. The first spotlight theater presentation is April 11 from 1:00-2:00 pm EDT, featuring Joseph Beechem, Ph.D., senior vice president of R&D and chief scientific officer for NanoString, with an overview of the latest developments in spatial biology, "True spatial genomics: Measuring the transcriptome in regions, cells and sub-cellular compartments." Dr. Beechem will explain the evolution of spatial technologies and their applications, from multi-cell to single-cell and subcellular resolution, using the GeoMx DSP and the company's Spatial Molecular Imager.
The second NanoString spotlight theater is Tuesday, April 13, from 11:00 am to 12:00 pm EDT, and is entitled "New Approaches for Cellular Therapies: Technology Symposium Featuring the GeoMx DSP and nCounter CAR-T Characterization." The panel will include three speakers: Dr. Ryan Golden, Resident Physician in Clinical Pathology, Carl June Lab, University of Pennsylvania; Dr. Marco Ruella, Assistant Professor of Medicine, University of Pennsylvania; and Ghamdan Al-Eryani, Ph.D. student, Tumor Progression Group, Garvan Institute. Each speaker will discuss new approaches to CAR-T characterization using spatially resolved and bulk RNA analysis, from understanding resistance to CAR-T immunotherapy in lymphoma to TCR diversity in melanoma.
NanoString has launched a Technology Access Program (TAP) for the recently announced single and subcellular Spatial Molecular Imager to complement the existing TAP program for GeoMx. Under the program, customers can submit tissue samples to NanoString for analysis using the spatial profiling platforms and receive a complete data package. Researchers interested in participating in NanoString's Technology Access Program should contact the company at TAP@nanostring.com.
About NanoString Technologies, Inc.
NanoString Technologies is a leading provider of life science tools for discovery and translational research. The company's nCounter Analysis System is used in life sciences research and has been cited in more than 4,000 peer-reviewed publications. The nCounter Analysis System offers a cost-effective way to easily profile the expression of hundreds of genes, proteins, miRNAs, or copy number variations simultaneously, with high sensitivity and precision, facilitating a wide variety of basic research and translational medicine applications, including biomarker discovery and validation. The company's GeoMx Digital Spatial Profiler enables highly multiplexed spatial profiling of RNA and protein targets in a variety of sample types, including FFPE tissue sections.
For more information, please visit http://www.nanostring.com.
NanoString, NanoString Technologies, the NanoString logo, GeoMx, and nCounter are trademarks or registered trademarks of NanoString Technologies, Inc. in various jurisdictions.
View source version on businesswire.com: https://www.businesswire.com/news/home/20210408005318/en/
See original here:
NanoString Highlights Spatial Biology Research from the 2021 American Association of Cancer Research (AACR) Conference - BioSpace
The future of AI is being shaped right now. How should policymakers respond? – Vox.com
Posted: at 2:41 am
For a long time, artificial intelligence seemed like one of those inventions that would always be 50 years away. The scientists who developed the first computers in the 1950s speculated about the possibility of machines with greater-than-human capacities. But enthusiasm didn't necessarily translate into a commercially viable product, let alone a superintelligent one.
And for a while, in the '60s, '70s, and '80s, it seemed like such speculation would remain just that. The sluggishness of AI development actually gave rise to a term: "AI winters," periods when investors and researchers got bored with the lack of progress in the field and devoted their attention elsewhere.
No one is bored now.
Limited AI systems have taken on an ever-bigger role in our lives, wrangling our news feeds, trading stocks, translating and transcribing text, scanning digital pictures, taking restaurant orders, and writing fake product reviews and news articles. And while there's always the possibility that AI development will hit another wall, there's reason to think it won't: all of the above applications have the potential to be hugely profitable, which means there will be sustained investment from some of the biggest companies in the world. AI capabilities are reasonably likely to keep growing until they're a transformative force.
A new report from the National Security Commission on Artificial Intelligence (NSCAI), a committee Congress established in 2018, grapples with some of the large-scale implications of that trajectory. In 270 pages and hundreds of appendices, the report tries to size up where AI is going, what challenges it presents to national security, and what can be done to set the US on a better path.
It is by far the best writing from the US government on the enormous implications of this emerging technology. But the report isn't without flaws, and its shortcomings underscore how hard it will be for humanity to get a handle on the warp-speed development of a technology that's at once promising and perilous.
As it exists right now, AI poses policy challenges. How do we determine whether an algorithm is fair? How do we stop oppressive governments from using AI surveillance for totalitarianism? Those questions are mostly addressable with the same tools the US has used in other policy challenges over the decades: Lawsuits, regulations, international agreements, and pressure on bad actors, among others, are tried-and-true tactics to control the development of new technologies.
But for more powerful and general AI systems, advanced systems that don't yet exist but may be too powerful to control once they do, such tactics probably won't suffice.
When it comes to AI, the big overarching challenge is making sure that as our systems get more powerful, we design them so their goals are aligned with those of humans; that is, so humanity doesn't construct scaled-up superintelligent AI that overwhelms human intentions and leads to catastrophe.
Because the tech is necessarily speculative, the problem is that we don't know as much as we'd like to about how to design those systems. In many ways, we're in a position akin to someone worrying about nuclear proliferation in 1930. It's not that nothing useful could have been done at that early point in the development of nuclear weapons, but at the time it would have been very hard to think through the problem and to marshal the resources, let alone the international coordination, needed to tackle it.
In its new report, the NSCAI wrestles with these problems and (mostly successfully) addresses the scope and key challenges of AI; however, it has limitations. The commission nails some of the key concerns about AI's development, but its US-centric vision may be too myopic to confront a problem as daunting and speculative as an AI that threatens humanity.
AI has seen extraordinary progress over the past decade. AI systems have improved dramatically at tasks including translation, playing games such as chess and Go, answering important research biology questions (such as predicting how proteins fold), and generating images.
These systems also determine what you see in a Google search or in your Facebook News Feed. They compose music and write articles that, at first glance, read as though a human wrote them. They play strategy games. They are being developed to improve drone targeting and detect missiles.
All of those are instances of narrow AI: computer systems designed to solve specific problems, versus those with the sort of generalized problem-solving capabilities humans have.
But narrow AI is getting less narrow, and researchers have gotten better at creating computer systems that generalize learning capabilities. Instead of mathematically describing detailed features of a problem for a computer to solve, today it's often possible to let the computer system learn the problem by itself.
As computers get good enough at performing narrow AI tasks, they start to exhibit more general capabilities. For example, OpenAI's famous GPT series of text generators is, in one sense, the narrowest of narrow AIs: it just predicts what the next word will be, based on the previous words it's prompted with and its vast store of human language. And yet, it can now identify questions as reasonable or unreasonable, as well as discuss the physical world (for example, answering questions about which objects are larger or which steps in a process must come first).
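To see how far a bare next-word predictor can get, it helps to picture the generation loop itself. Here is a toy sketch in which a hand-written bigram table stands in for the billions of learned parameters of a real model; the words and probabilities are invented for illustration:

```python
# Toy autoregressive generation: repeatedly sample the next word given the last one.
# The bigram probabilities are hand-written stand-ins for what a real model learns.
import random

bigrams = {
    "the": [("cat", 0.5), ("dog", 0.5)],
    "cat": [("sat", 0.7), ("ran", 0.3)],
    "dog": [("ran", 0.6), ("sat", 0.4)],
    "sat": [("quietly", 1.0)],
    "ran": [("quickly", 1.0)],
}

def next_word(word):
    choices, weights = zip(*bigrams.get(word, [("<end>", 1.0)]))
    return random.choices(choices, weights=weights)[0]

text = ["the"]
while text[-1] in bigrams:           # stop when no continuation is known
    text.append(next_word(text[-1]))
print(" ".join(text))                # e.g. "the cat sat quietly"
```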
What these developments show us is this: In order to be very good at narrow tasks, some AI systems eventually develop abilities that are not narrow at all.
The NSCAI report acknowledges this eventuality. "As AI becomes more capable, computers will be able to learn and perform tasks based on parameters that humans do not explicitly program, creating choices and taking actions at a volume and speed never before possible," the report concludes.
That's the general dilemma the NSCAI is tasked with addressing. A new technology, with both extraordinary potential benefits and extraordinary risks, is being developed. Many of the experts working on it warn that the results could be catastrophic. What concrete policy measures can the government take to get clarity on a challenge such as this one?
The NSCAI report is a significant improvement on much of the existing writing about artificial intelligence in one important respect: It understands the magnitude of the challenge.
For a sense of that magnitude, it's useful to imagine the questions involved in figuring out government policy on nuclear nonproliferation in the 1930s.
By 1930, there was certainly some scientific evidence that nuclear weapons would be possible. But there were no programs anywhere in the world to make them, and there was even some dissent within the research community about whether such weapons could ever be built.
As we all know, nuclear weapons were built within the next decade and a half, and they changed the trajectory of human history.
Given all that, what could the government have done about nuclear proliferation in 1930? Decide on the wisdom of pushing itself to develop such weapons, perhaps, or develop surveillance systems that would alert the country if other nations were building them.
In practice, the government in 1930 did none of these things. When an idea is just beginning to gain a foothold among the academics, engineers, and experts who work on it, it's hard for policymakers to figure out where to start.
When considering these decisions, our leaders confront the classic dilemma of statecraft identified by Henry Kissinger: "When your scope for action is greatest, the knowledge on which you can base this action is always at a minimum. When your knowledge is greatest, the scope for action has often disappeared," Chair Eric Schmidt and Vice Chair Bob Work wrote of this dilemma in the NSCAI report.
As a result, much government writing about AI to date has seemed fundamentally confused, limited by the fact that no one knows exactly what transformative AI will look like or what key technical challenges lie ahead.
In addition, a lot of the writing about AI, both by policymakers and by technical experts, is very "small," focused on possibilities such as whether AI will eliminate call centers, rather than on the ways general AI, or AGI, will usher in a dramatic technological realignment, if it's built at all.
The NSCAI analysis does not make this mistake.
"First, the rapidly improving ability of computer systems to solve problems and to perform tasks that would otherwise require human intelligence, and in some instances exceed human performance, is world altering. AI technologies are the most powerful tools in generations for expanding knowledge, increasing prosperity, and enriching the human experience," reads the executive summary.
The report also extrapolates from current progress in machine learning to identify some specific areas where AI might enable notable good or notable harm:
"Combined with massive computing power and AI, innovations in biotechnology may provide novel solutions for mankind's most vexing challenges, including in health, food production, and environmental sustainability. Like other powerful technologies, however, applications of biotechnology can have a dark side. The COVID-19 pandemic reminded the world of the dangers of a highly contagious pathogen. AI may enable a pathogen to be specifically engineered for lethality or to target a genetic profile, the ultimate range and reach weapon."
One major challenge in communicating about AI is that it's much easier to predict the broad effects that unleashing fast, powerful research and decision-making systems on the world will have (speeding up all kinds of research, for both good and ill) than it is to predict the specific inventions those systems will come up with. The NSCAI report outlines some of the ways AI will be transformative, and some of the risks those transformations pose that policymakers should be thinking about how to manage.
Overall, the report seems to grasp why AI is a big deal, what makes it hard to plan for, and why it's necessary to plan for it anyway.
But there's an important way in which the NSCAI report falls short. Recognizing that AI poses enormous risks and that it will be powerful and transformative, the report foregrounds a posture of great-power competition, with both eyes on China, as the way to address the looming problem before humanity.
"We should race together with partners when AI competition is directed at the moonshots that benefit humanity, like discovering vaccines. But we must win the AI competition that is intensifying strategic competition with China," the report concludes.
China is run by a totalitarian regime that poses geopolitical and moral problems for the international community. China's repression in Hong Kong and Tibet, and the genocide of the Uyghur people in Xinjiang, have been technologically aided, and the regime should not have more powerful technological tools with which to violate human rights.
There's no question that China developing AGI would be a bad thing. And the countermeasures the report proposes, especially an increased effort to attract the world's top scientists to America, are a good idea.
More than that, the US and the global community should absolutely devote more attention and energy to addressing Chinas human rights violations.
But it's where the report proposes beating China to the punch by accelerating AI development in the US, potentially through direct government funding, that I have hesitations. Adopting an arms-race mentality on AI would make involved companies and projects more likely to discourage international collaboration, cut corners, and evade transparency measures.
In 1939, at a conference at George Washington University, Niels Bohr announced that he'd determined that uranium fission had been discovered. Physicist Edward Teller recalled the moment:
"For all that the news was amazing, the reaction that followed was remarkably subdued. After a few minutes of general comment, my neighbor said to me, 'Perhaps we should not discuss this. Clearly something obvious has been said, and it is equally clear that the consequences will be far from obvious.' That seemed to be the tacit consensus, for we promptly returned to low-temperature physics."
Perhaps that consensus would have prevailed, if World War II hadn't started. It took the concerted efforts of many brilliant researchers to bring nuclear bombs to fruition, and at first most of them hesitated to be a part of the effort. Those hesitations were reasonable; inventing the weaponry with which to destroy civilization is no small thing. But once they had reason to fear that the Nazis were building the bomb, those reservations melted away. The question was no longer "Should these be built at all?" but "Should these be built by us, or by the Nazis?"
It turned out, of course, that the Nazis were never close, nor was the atomic bomb needed to defeat them. And the US development of the bomb caused its geopolitical adversary, the USSR, to develop it too, through espionage, much sooner than it otherwise would have. The world then spent decades teetering on the brink of nuclear war.
The specter of a mess like that looms large in everyone's minds when they think of AI.
"I think it's a mistake to think of this as an arms race," Gilman Louie, a commissioner on the NSCAI report, told me, though he immediately added, "We don't want to be second."
An arms race can push scientists toward working on a technology that they have reservations about, or one they don't know how to safely build. It can also mean that policymakers and researchers don't pay enough attention to the AI alignment problem, which is really the looming issue when it comes to the future of AI.
AI alignment is the work of trying to design intelligent systems that are accountable to humans. An AI, even in well-intentioned hands, will not necessarily develop in a way consistent with human priorities. Think of it this way: an AI aiming to increase a company's stock price, or to ensure a robust national defense against enemies, or to make a compelling ad campaign, might take large-scale actions we would never have asked for or wanted, like disabling safeguards, rerouting resources, or interfering with other AI systems. Those large-scale actions in turn could have drastic consequences for economies and societies.
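A toy way to see the problem: an optimizer judged only on a proxy metric will pick whatever action scores highest on that metric, side effects included. All of the numbers below are invented purely for illustration:

```python
# Toy misalignment: the optimizer sees only the proxy score, not the harm column.
# All values are invented for illustration.
actions = [
    # (name, proxy objective: ad clicks, unmodeled side effect: user trust lost)
    ("honest ad",        100,   0),
    ("clickbait",        180,  40),
    ("disable opt-outs", 250,  90),
]

best = max(actions, key=lambda a: a[1])   # optimizes the stated goal only
print("optimizer picks:", best[0])        # -> "disable opt-outs"

# The fix is to state the full objective, which is exactly the hard part:
aligned = max(actions, key=lambda a: a[1] - 3 * a[2])
print("with the side effect priced in:", aligned[0])  # -> "honest ad"
```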
It's all speculative, for sure, but that's the point. We're in the year 1930, confronting the potential creation of a world-altering technology that might be here a decade and a half from now, or might be five decades away.
Right now, our capacity to build AIs is racing ahead of our capacity to understand and align them. And trying to make sure AI advancements happen in the US first can just make that problem worse, if the US doesn't also invest in the research (which is much more immature, and has less obvious commercial value) to build aligned AIs.
"We ultimately came away with a recognition that if America embraces and invests in AI based on our values, it will transform our country and ensure that the United States and its allies continue to shape the world for the good of all humankind," NSCAI executive director Yll Bajraktari writes in the report. But here's the thing: it's entirely possible for America to embrace and invest in an AI research program based on liberal-democratic values that still fails, simply because the technical problem ahead of us is so hard.
This is an important respect in which AI is not analogous to nuclear weapons, where the most important policy decisions were whether to build them at all and how to build them faster than Nazi Germany.
In other words, with AI, there's not just the risk that someone else will get there first. A misaligned AI built by an altruistic, transparent, careful research team with democratic oversight and a goal to share its profits with all of humanity will still be a misaligned AI, one that pursues its programmed goals even when they're contrary to human interests.
The limited scope of the NSCAI report is a fairly obvious consequence of what the commission is and what it does. The commission was created in 2018 and tasked with recommending policies that would "advance the development of artificial intelligence, machine learning, and associated technologies to comprehensively address the national security and defense needs of the United States."
Right now, the part of the US government that takes artificial intelligence risks seriously is the national security and defense community. That's because AI risk is weird, confusing, and futuristic, and the national security community has more latitude than the rest of the government to spend resources seriously investigating weird, confusing, and futuristic things.
But AI isn't just a defense and security issue; it will affect, and is already affecting, most aspects of society, like education, criminal justice, medicine, and the economy. And to the extent it is a defense issue, that doesn't mean that traditional defense approaches make sense.
If, before the invention of electricity, the only people working on producing electricity had been armies interested in electrical weapons, they'd not just be missing most of the effects of electricity on the world, they'd even be missing most of the effects of electricity on the military, which have to do with lighting, communications, and intelligence, rather than weapons.
The NSCAI, to its credit, takes AI seriously, including the non-defense applications and including the possibility that AI built in America by Americans could still go wrong. "The thing I would say to American researchers is to avoid skipping steps," Louie told me. "We hope that some of our competitor nations, China, Russia, follow a similar path: demonstrate it meets thorough requirements for what we need to do before we use these things."
But the report, overall, looks at AI from the perspective of national defense and international competition. It's not clear that will be conducive to the international cooperation we might need in order to ensure no one anywhere in the world rushes ahead with an AI system that isn't ready.
Some AI work, at least, needs to be happening in a context insulated from arms-race concerns and fears of China. By all means, let's devote greater attention to China's use of tech in perpetrating human rights violations. But we should hesitate to rush ahead with AGI work without a sense of how we'll make it happen safely, and there needs to be more collaborative global work on AI, with a much longer-term lens. The perspectives that work could create room for just might be crucial ones.
Continued here:
The future of AI is being shaped right now. How should policymakers respond? - Vox.com
Will Artificial Intelligence ever live up to its hype? The Stute – The Stute
Posted: at 2:41 am
When I started writing about science decades ago, artificial intelligence was ascendant. IEEE Spectrum, the technology magazine for which I worked, produced a special issue on how AI would transform the world. I edited an article in which computer scientist Frederick Hayes-Roth predicted that AI would soon replace experts in law, medicine, finance and other professions.
That was 1984. That period of exuberance gave way to a slump known as an "AI winter," when disillusionment set in and funding declined. In 1998, I tracked Hayes-Roth down to ask how he thought his predictions had held up. He laughed and replied, "You've got a mean streak." AI had not lived up to expectations, he acknowledged. Our minds are hard to replicate, because we are "very, very complicated systems that are both evolved and adapted through learning to deal well and differentially with dozens of variables at one time." Algorithms that can perform a specialized task, like playing chess, cannot be easily adapted for other purposes. It is an example of what is called "nonrecurrent engineering," Hayes-Roth explained.
Today, according to some measures, AI is booming once again. Programs such as voice and face recognition are embedded in cell phones, televisions, cars and countless other consumer products. Clever algorithms help me choose a Valentine's present for my girlfriend, find my daughter's building in Brooklyn and gather information for columns like this one. Venture-capital investments in AI doubled between 2017 and 2018 to $40 billion, according to WIRED. A Price Waterhouse study estimates that by 2030 AI will boost global economic output by more than $15 trillion, more than the current output of China and India combined.
Some observers fear that AI is moving too fast. New York Times columnist Farhad Manjoo calls an AI-based reading and writing program, GPT-3, "amazing, spooky, humbling and more than a little terrifying." Someday, he frets, he might be put out to pasture by a machine. Elon Musk made headlines in 2018 when he warned that superintelligent AI represents the "single biggest existential crisis that we face." (Really? Worse than climate change? Nuclear weapons? Psychopathic politicians? I suspect that Musk, who has invested in AI, is trying to promote the technology with his over-the-top fearmongering.)
Experts are pushing back against the hype, pointing out that many alleged advances in AI are based on flimsy evidence. Last year, for example, a team from Google Health claimed in Nature that their AI program had outperformed humans in diagnosing breast cancer. A group led by Benjamin Haibe-Kains, a computational genomics researcher, criticized the Google Health paper, arguing that "the lack of details of the methods and algorithm code undermines its scientific value."
Haibe-Kains complained to Technology Review that the Google Health report is "more an advertisement for cool technology" than a legitimate, reproducible scientific study. The same is true of other reported advances, he said. Indeed, artificial intelligence, like biomedicine and other fields, has become mired in a replication crisis. Researchers make dramatic claims that cannot be tested, because researchers, especially those in industry, do not disclose their algorithms. One recent review found that only 15 percent of AI studies shared their code.
There are also signs that investments in AI are not paying off. Technology analyst Jeffrey Funk recently examined 40 startup companies developing AI for health care, manufacturing, energy, finance, cybersecurity, transportation and other industries. Many of the startups were "not nearly as valuable to society as all the hype would suggest," Funk reports in IEEE Spectrum. "Advances in AI are unlikely to be nearly as disruptive, for companies, for workers, or for the economy as a whole, as many observers have been arguing."
The longstanding goal of general artificial intelligence, possessing the broad knowledge and learning capacity to solve a variety of real-world problems as humans do, remains elusive. "We have machines that learn in a very narrow way," Yoshua Bengio, a pioneer in the AI approach called deep learning, recently complained in WIRED. "They need much more data to learn a task than human examples of intelligence, and they still make stupid mistakes."
Writing in The Gradient, an online magazine devoted to tech, AI entrepreneur and writer Gary Marcus accuses AI leaders, as well as the media, of exaggerating the field's progress. AI-based autonomous cars, fake-news detectors, diagnostic programs and chatbots have all been oversold, Marcus contends. He warns that if and when the public, governments, and investment community recognize that they have been sold an unrealistic picture of AI's strengths and weaknesses that doesn't match reality, a new "AI winter" may commence.
Another AI veteran and writer, Eric Larson, questions the myth that one day AI will inevitably equal or surpass human intelligence. In his new book The Myth of Artificial Intelligence: Why Computers Can't Think the Way We Do, Larson argues that success with narrow applications gets us not one step closer to general intelligence. Larson says the actual science of AI (as opposed to the pseudo-science of Hollywood and science fiction novelists) has uncovered a very large mystery at the heart of intelligence, which no one currently has a clue how to solve. Put bluntly: all evidence suggests that human and machine intelligence are radically different, and yet the myth of inevitability persists.
When I first started writing about science, I believed the myth of AI. One day, surely, researchers would achieve the goal of a flexible, supersmart, all-purpose artificial intelligence, like HAL. Given rapid advances in computer hardware and software, it was only a matter of time. Gradually, I became an AI doubter, as I realized that our minds, in spite of enormous advances in neuroscience, genetics, cognitive science and, yes, artificial intelligence, remain as mysterious as ever. Here's the paradox: machines are becoming undeniably smarter, and humans, it seems lately, more stupid, and yet machines will never equal, let alone surpass, our intelligence. They will always remain mere machines. That's my guess, and my hope.
John Horgan directs the Center for Science Writings at Stevens. This column is adapted from one originally published on ScientificAmerican.com.
Read the original:
Will Artificial Intelligence ever live up to its hype? The Stute - The Stute
The Future of AI 7 Stages of Evolution You Need to Know About – ReadWrite
Posted: at 2:41 am
According to artificial intelligence statistics, the global AI market is expected to grow to $60 billion by 2025. Global GDP will grow by $15.7 trillion by 2030 due to artificial intelligence, as it will increase business productivity by 40%. Investment in artificial intelligence has grown by 6 times since 2000. In fact, 84% of businesses think that artificial intelligence can give them a competitive advantage.
If you are a fan of science fiction movies, you might have seen AI in action in its full glory. With artificial intelligence leaving a lasting mark on every facet of our personal and professional lives, it is important to understand how it works and how it will evolve in the future. This allows us to prepare for the future in a much better way.
In this article, you will learn about how artificial intelligence will evolve in the future and what stages it will go through.
7 Stages of AI Evolution
Stage 1: Rule-based systems
This form of artificial intelligence is everywhere. It surrounds us whether we are at work, at home or traveling. From business software to smart apps, aircraft to electronic appliances, all follow rule-based systems. Robotic process automation is the next stage of a rule-based system, in which the machine can perform complete processes on its own without requiring any help from humans.
Since it is a basic level of artificial intelligence and also the most ubiquitous, it is cost-effective and fast, which is why mobile app development companies use it. On the flip side, it requires comprehensive knowledge and domain expertise and involves some degree of human involvement. Generating rules for such a system is sophisticated, time-consuming and resource-intensive.
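For concreteness, a rule-based system in this sense is nothing more than explicit, hand-written logic. Here is a hypothetical thermostat sketch, invented purely to illustrate the idea:

```python
# A tiny rule-based system: every behavior is an explicit, hand-written rule.
# Domain knowledge lives in the rules themselves, which is why building and
# maintaining them is expensive.
def thermostat(temp_c, occupied):
    if not occupied:
        return "standby"
    if temp_c < 18:
        return "heat"
    if temp_c > 26:
        return "cool"
    return "hold"

for reading in [(15, True), (30, True), (22, False)]:
    print(reading, "->", thermostat(*reading))
```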
Stage 2: Context awareness and retention
This type of algorithm is developed by feeding in information about the particular domain in which it will be implemented. Since these algorithms are trained using the knowledge and experience of experts and are updated to cope with new emerging situations, they can serve as an alternative to human experts in the same industry. One of the best examples of this type of artificial intelligence is smart chatbots.
Chatbots have already changed the way businesses look at customer support and deliver customer service. They have not only saved businesses from hiring customer service representatives but also helped them automate and streamline customer support. In addition to this, they can help businesses in many other ways.
Another form of this type of artificial intelligence is robo-advisors. These robo-advisors are already being used in finance and are helping people make sensible investment decisions. We might see their applications grow in other industries as well in the future. They can automate and optimize passive indexing strategies that follow mean-variance optimization.
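As a taste of what such robo-advisors automate, the minimum-variance corner of mean-variance optimization has a simple closed form. A sketch with made-up return data (the number of funds and the return series are invented for illustration):

```python
# Minimum-variance portfolio weights, w proportional to inv(Cov) @ 1, a building
# block of the mean-variance optimization robo-advisors automate. Returns are made up.
import numpy as np

rng = np.random.default_rng(1)
returns = rng.normal(0.0005, 0.01, size=(250, 4))  # 250 days x 4 index funds

cov = np.cov(returns, rowvar=False)   # 4 x 4 covariance of daily returns
ones = np.ones(cov.shape[0])
w = np.linalg.solve(cov, ones)        # solve Cov @ w = 1
w /= w.sum()                          # normalize so weights sum to 1
print("weights:", np.round(w, 3))
```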
Stage 3: Domain-specific expertise
Unlike context-aware and retention artificial intelligence, domain-specific expertise aims not only to reach the level of human capability but to surpass it. Since it has access to more data, it can make better decisions than its human counterpart. We already see its application in areas such as cancer diagnosis.
Another popular example of this type of AI is Google DeepMind's AlphaGo. Initially, the system was taught the rules and objectives of winning; later, it taught itself how to play Go. The important thing to note here is that it did so with human support, which stopped it from making poor decisions. In March 2016, we finally saw AlphaGo defeat the 18-time world Go champion Lee Sedol by four games to one.
Soon after AlphaGo's success, Google created AlphaGo Zero, which requires no human support to play Go. It learned the rules and analyzed thousands of Go games to create strategies. After three days, it defeated AlphaGo by a huge margin of 100 games to nil. This was a clear indication of the potential of smart machines and what they can do when they acquire human-like intelligence. It was a massive breakthrough in the field of artificial intelligence.
Stage 4: Reasoning machines
These reasoning machines are powered by algorithms that have a theory of mind. This means that they can make sense of different mental states; they have beliefs, knowledge, and intentions, which they use to create their own logic. Hence, they have the capacity to reason, negotiate, and interact with humans and other machines. Such algorithms are currently at the development stage, but we can expect to see them in commercial applications in the next few years.
Stage 5: Self-aware systems
The ultimate goal of artificial intelligence is to create systems that can surpass human intelligence. Even though we are moving closer to that goal, no system has yet achieved it. Experts are divided: some think that we can reach that level in less than five years, while others argue that we may never be able to.
Self-aware AI systems will have more perspective and will be able to understand and react to emotional responses. Just like self-aware humans, self-aware machines will also show a degree of self-control and regulate themselves according to the situation.
Stage 6: Artificial superintelligence
AI researchers have already developed systems that can beat humans at games and do a better job in many other areas. What's next? The real challenge for AI experts will be to create AI-powered systems that can outperform humans in every department. As humans, visualizing something that is miles ahead of us is out of the question, let alone creating it.
If AI researchers succeed in creating something along these lines, we might see it being used to solve the world's biggest problems, such as poverty, hunger and climate change. In fact, we can also expect such systems to make new scientific discoveries and design new economic and governance models. Just as with self-aware systems, experts are on the fence about whether this is possible at all. Even if it is possible, how long will it take for this dream to see the light of day?
Stage 7: Singularity and transcendence
At this stage of artificial intelligence, we will be able to connect our brains with one another. This will pave the way for the future of the internet. It will not only help with traditional activities such as sharing ideas, but also with advanced activities such as the ability to observe dreams. It could even enable humans to communicate with other living beings, such as plants and animals.
How will artificial intelligence evolve in years to come? Share your opinion with us in the comments section below.
Working in digital marketing with Branex, a mobile app development company in Dallas, Muneeb Qadar Siddiqui has earned 8 years of experience and skills in digital marketing. Paid marketing, affiliate marketing, search engine marketing and search engine optimization are his strengths. He is also a connoisseur of fine dining in his free time. Do connect with him: Facebook | Twitter | LinkedIn
Read more here:
The Future of AI 7 Stages of Evolution You Need to Know About - ReadWrite
This Is the Most Powerful Artificial Intelligence Tool in the World – Entrepreneur
Posted: at 2:41 am
April 7, 2021 | 5 min read
Opinions expressed by Entrepreneur contributors are their own.
In June 2020, the Californian company OpenAI announced GPT-2's upgrade to GPT-3, a language model based on artificial intelligence and deep learning with cognitive capabilities. It is a technology that has generated great expectations and has been presented as the most important and useful advance in AI in recent years.
OpenAI is a company co-founded by Elon Musk, co-founder and director of Tesla and SpaceX, which was born with the aim of researching and democratizing access to general artificial intelligence. Originally, it was a non-profit organization. However, in 2020, it became a for-profit company and partnered with Microsoft in order to achieve new advances, both in the field of language with GPT-3 models, and in the fields of robotics and vision.
GPT-3 (Generative Pre-trained Transformer 3) is what is known as an autoregressive language model, which uses deep learning to produce texts that simulate human writing.
Unlike most artificial intelligence systems that are designed for a single use case, this API (Application Programming Interface) provides a general-purpose "text input and output" interface, allowing users to try it on practically any task in English. The tool is capable of, among other functions, generating a text on any subject proposed to it in the same way a human would, programming (in HTML code) and generating ideas.
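At the time of writing, calling the API looked roughly like the sketch below; the engine name and parameters are illustrative rather than prescriptive, and an API key from OpenAI is required:

```python
# Rough sketch of calling the GPT-3 API as it looked in 2021; the engine name
# and parameter values are illustrative, and an OpenAI API key is required.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder, not a real key

response = openai.Completion.create(
    engine="davinci",            # one of the GPT-3 engines
    prompt="Write a short paragraph about the history of the telescope:",
    max_tokens=80,
    temperature=0.7,             # higher values produce more varied text
)
print(response["choices"][0]["text"])
```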
As Nerea Luis, an expert in artificial intelligence and engineer at Sngular, says, "GPT-3 is living confirmation that the Natural Language Processing area is advancing more than ever, by leaps and bounds."
Do you want to know how GPT-3 works? Here is how.
The user only has to start writing a paragraph, and the system itself takes care of completing the rest of the text in the most coherent way possible. Also, with GPT-3 you can generate conversations and the answers provided by the system will be based on the context of the previous questions and answers.
It should be noted that the tool generates text using algorithms that were previously trained and have already received all the data they need to carry out their task. In total, they have received around 570 GB of text information collected by crawling the internet (a publicly available dataset known as Common Crawl), along with other texts selected by OpenAI, including text from Wikipedia.
"GPT-3 has aroused a lot of interest because it is one of the first systems to show the possibilities of general artificial intelligence, because it completes, with surprisingly reasonable results, tasks that until now required a system specially built to solve that particular task. Furthermore, it does so from just a few examples," says César de Pablo, data scientist at BBVA Data & Analytics.
As for the possible applications that this tool may have, I mention the following:
GPT-3 will be able to generate text for websites, social media ads, scripts, etc. In this way, with a few simple guidelines about your needs, GPT-3 will transform them into a precise text. In addition, you can select the type of text you need, from the most generic to the most strategic and creative.
With GPT-3 you will be able to compose emails (among other functions) by simply giving it some guidelines about what you want to say and communicate. For example, through magicemail.io, a portal where you can test the tool (there is a waiting list of about 6,000 users), you can see how it works. Magicemail installs into Gmail as a Google Chrome extension.
When an email arrives, we will simply have to click on the tool to receive a one-line summary of what the sender wants to tell us.
GPT-3 will develop the code just by being told how we want our landing page or website to look. Once it gives us the HTML code, we will only need to copy and paste it to get an optimal result. The tool will significantly streamline web development processes.
With this model, chatbots will be much more accurate, with more accurate responses, generating in the user a value of more personalized and effective attention.
Furthermore, GPT-3 could have huge implications for the way software and applications are developed in the future.
A sample of how this technology works can be seen in the essay "Are you still scared, human?", published by The Guardian, which, as its editor comments, was made from the best fragments of eight articles generated with GPT-3 in order to capture the different styles and registers of artificial intelligence.
Another demo available online is "GPT-3: Build Me A Photo App," which shows the creation of an application that looks and works similarly to the Instagram application, using a plugin for the Figma software tool, which is widely used for application design.
Let us remember that, currently, the use of the GPT-3 model is mainly limited to the research community. However, it is clear that in the near future GPT-3 may create anything that has a language structure: answering questions, writing essays, summarizing texts, translating, taking notes and even writing code for computers.
Therefore, GPT-3 is positioned as an artificial intelligence tool with great potential for the future. And, surely, when it is open to the public, its reach will be much more surprising.
Continue reading here:
This Is the Most Powerful Artificial Intelligence Tool in the World - Entrepreneur
In an AI world we need to teach students how to work with robot writers – The Conversation AU
Posted: at 2:41 am
Robots are writing more of what we read on the internet. And artificial intelligence (AI) writing tools are becoming freely available for anyone, including students, to use.
In a period of rapid change, there are enormous ethical implications for post-human authorship in which humans and machines collaborate. The study of AI ethics needs to be central to education as we increasingly use machine-generated content to communicate with others.
AI robot writers, such as GPT-3 (Generative Pre-trained Transformer), take seconds to create text that seems like it was written by humans. In September 2020, GPT-3 wrote an entire essay in The Guardian to convince people not to fear artificial intelligence.
AI does this through what's called natural language processing and generation. This involves converting human language to computer commands and vice versa. To do this, machine algorithms study millions of text samples, words, sentences and paragraphs that humans use, to gain a general understanding of the context of human language. The machines then use that knowledge to put together text of their own.
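As a small, hedged illustration of natural language generation: GPT-3 itself sits behind a waiting list, but the freely available GPT-2 works on the same principle and can be run locally with the Hugging Face transformers library. The model and prompt below are illustrative choices, not anything used by the systems discussed in this article.

```python
# Minimal sketch of text generation with a freely available model (GPT-2).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Artificial intelligence will change education because"
# The model continues the prompt using statistical patterns it learned from
# millions of human-written text samples.
result = generator(prompt, max_length=40, num_return_sequences=1)
print(result[0]["generated_text"])
```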
As recently as 2019, this kind of technology seemed a long way off. But today, it is readily available.
For example, the free Zyro content generator provides text for websites. I can select the categories "Health and Living" and "Personal Trainer" (I happen to be one, as well as an academic). In two seconds, the following text appears:
I am experienced with personal training, athletic performance, and nutrition. This includes working with children, pregnant women, older adults, people with disabilities, athletes, and anyone looking to achieve a competitive edge, improve their overall health, and overall performance. I also have an interest in alternative health, including wellness coaching and nutrition counselling. In my spare time, I enjoy spending time with my family, the outdoors, and writing.
This sums me up (despite the program knowing nothing about me) and would be fine for my website, if I had one. I have the option to generate more responses, and in two seconds, the program offers:
I am experienced in working with adults and children to improve their overall physical and emotional well-being. I have a strong desire to help you reach your goal [] I love helping others achieve their goals. I love working with others to achieve their goals. I am a mother of two healthy children and a wife to two incredible people.
While this is repetitive and Zyro does not notice the apparent error in the last sentence, these issues would be easy to correct. Text, even for niche purposes, can now be generated in a few clicks.
There are other digital tools, such as paraphrasers and rewriters, that can generate up to 1,000 articles from a single seed article, each of them substantially unique. Quillbot and WordAI, for instance, can rapidly rewrite text and make it difficult to detect plagiarism. WordAI boasts "unlimited human quality content at your fingertips".
So what does this mean for education, writing, and society?
Of course, there's the issue of cheating on essays and other assignments. School and university leaders need to have difficult conversations about what constitutes authorship and editorship in the post-human age. We are all (already) writing with machines, even just via spelling and grammar checkers.
Tools such as Turnitin, originally developed for detecting plagiarism, are already using more sophisticated means of determining who wrote a text by recognising a human author's unique "fingerprint". Part of this involves electronically checking a submitted piece of work against a student's previous work.
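As a hedged sketch of the "fingerprint" idea (an illustrative stand-in, not Turnitin's actual method): character n-grams capture an author's habitual spelling, punctuation and phrasing, so a new submission can be compared against a student's earlier work.

```python
# Minimal stylometric comparison: character n-gram TF-IDF plus cosine
# similarity. Illustrative only; not how Turnitin actually works.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

previous_work = "In my last essay, I argued that renewable energy adoption is rising."
new_submission = "This essay considers whether renewable energy adoption is rising."

vectorizer = TfidfVectorizer(analyzer="char", ngram_range=(2, 4))
vectors = vectorizer.fit_transform([previous_work, new_submission])

# A score far below the student's usual range would prompt a closer look.
similarity = cosine_similarity(vectors[0], vectors[1])[0, 0]
print(f"stylistic similarity: {similarity:.2f}")
```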
Many student writers are already using AI writing tools. Perhaps, rather than banning or seeking to expose machine collaboration, it should be welcomed as co-creativity. Learning to write with machines is an important aspect of the workplace writing students will be doing in the future.
Read more: OK computer: to prevent students cheating with AI text-generators, we should bring them into the classroom
AI writers work lightning fast. They can write in multiple languages and can provide images, create metadata, headlines, landing pages, Instagram ads, content ideas, expansions of bullet points and search-engine optimised text, all in seconds. Students need to exploit these machine capabilities, as writers for digital platforms and audiences.
Perhaps assessment should focus more on students' capacities to use these tools skilfully instead of, or at least in addition to, pursuing pure human writing.
Yet the question of fairness remains. Students who can access better AI writers (more natural, with more features) will be able to produce and edit better text.
Better AI writers are more expensive, available on monthly plans or via high one-off payments that wealthy families can afford. This will exacerbate inequality in schooling, unless schools themselves provide excellent AI writers to all students.
We will need protocols for who gets credit for a piece of writing. We will need to know who gets cited. We need to know who is legally liable for content and potential harm it may create. We need transparent systems for identifying, verifying and quantifying human content.
Read more: When does getting help on an assignment turn into cheating?
And most importantly of all, we need to ask whether the use of AI writing tools is fair to all students.
For those who are new to the notion of AI writing, it is worthwhile playing and experimenting with the free tools available online, to better understand what creation means in our robot future.
See more here:
In an AI world we need to teach students how to work with robot writers - The Conversation AU
Posted in Ai
Comments Off on In an AI world we need to teach students how to work with robot writers – The Conversation AU
Spurred by the pandemic, AI is driving decentralized clinical trials – Healthcare IT News
Posted: at 2:41 am
With clinical oncology trials put on hold during the COVID-19 pandemic, researchers turned to troves of data to find patients across the country who would qualify for trials, even if they weren't physically there.
Artificial intelligence enabled this process, and may have created a move toward decentralized trials that potentially could last long after the pandemic is over.
Jeff Elton is CEO of ConcertAI, which works with some of the biggest oncology pharmaceutical companies and research organizations. Healthcare IT News interviewed Elton to get his thoughts on this shift and what it means for both treatments and patient outcomes.
Q: With trials on hold, researchers have been working with all of this data to find patients who would qualify for trials, even if they are not physically there. How did artificial intelligence technology enable this?
A: By putting the data in cancer centers to work. We process structured and unstructured data, combing through EHRs as well as other sources of patient information that EHRs might not include. Natural language processors and other tools integral to workflows are critical here.
The clinical settings have mountains of data. When participation in trials plunged, they had to quickly and efficiently leverage all the data at their fingertips to find as many potentially eligible patients as possible. People working manually would have taken too long and might have overlooked something. AI has been able to do it, and it enhances the ability to identify patients eligible for clinical studies.
It's a complex process. We need to eliminate false negatives, meaning that if a patient is potentially eligible for a clinical trial, we identify them. We also make sure that we don't have too many false positives. Otherwise, we just create work.
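To make the trade-off Elton describes concrete, here is a minimal sketch, assuming scikit-learn, a hypothetical eligibility classifier, and synthetic stand-in data: lowering the decision threshold reduces false negatives (missed eligible patients) at the cost of more false positives (extra manual review).

```python
# Minimal sketch of tuning a screening threshold to favor recall.
# All data and the model are synthetic stand-ins for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))                    # stand-in EHR-derived features
y = (X[:, 0] + 0.5 * X[:, 1] > 0.8).astype(int)  # stand-in eligibility labels

scores = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]

for threshold in (0.5, 0.3, 0.1):
    pred = (scores >= threshold).astype(int)
    # Lower thresholds raise recall (fewer missed patients) but cut precision.
    print(f"threshold {threshold}: "
          f"recall {recall_score(y, pred):.2f}, "
          f"precision {precision_score(y, pred):.2f}")
```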
We also use AI tools to ensure we are seeing what we expect and need in clinical setting data; exception and anomaly detection and reporting tools are key to identifying and understanding the correct data.
It is critical to understand that if there is no data, there is no AI. Meaningful AI and machine learning capabilities require broad data access, the ability to prepare data for specific AI methods and tools, and reserved data for independent validation. Of course, we also must be vigilant of underlying health and biological trends for retraining or re-specification of AI models.
We can also generate evidence from complementary data from retrospective sources for prospective studies and sometimes retrospective data alone for label expansions.
Increasingly, the FDA is accepting studies with retrospective data provided in replacement for forward-recruited patients in standard-of-care controls as "external control arms." This shift is in the best interest of patients and allows a more efficient study execution, since patients can be recruited exclusively to the treatment arm with the novel therapeutic.
Q: Has AI sparked a move toward decentralized clinical trials, a move that potentially could stick around long after the pandemic is over?
A: We are not going backwards. Decentralized trials have been emerging over the past several years. COVID-19 was the tipping event, or shock, that accelerated the trend.
Decentralized trials do not require AI at all, incidentally, but can leverage AI given that workflows are all digital and most data is machine readable. We will enter a period where decentralized trials are at scale, coexisting with legacy approaches.
But that will only exist for an interim period; eventually it will be digital only, with deeply embedded AI ... the only approach. I use the term "integrated digital trials" to describe what's ahead.
With integrated digital trials, clinical studies are integral to the care process itself, versus being imposed on it. Trials don't need to place a higher burden on providers and patients than the standard of care.
This point is incredibly important. Reducing the burden that trials put on patients and providers allows us to move clinical trials into the community where 80% of patients receive their care. It is both the democratization and ubiquity of clinical trials.
Q: What does this shift mean for both treatments and patient outcomes?
A: All of this is good. It's good for patients, first and foremost, because they can participate in trials in a broader array of treatment settings. It's good for treatment innovation, because more study alternatives are available in more settings with lower barriers to participation.
Standard-of-care treatment for novel therapeutics versus a separate clinical trial should increase the likelihood of a positive clinical outcome. We want to bring more potentially beneficial options to patients, faster and with greater precision.
Q: Please share an anecdote of your work this past year with pharma companies and research organizations about how AI has improved or enhanced oncology clinical trials.
A: One of our partners had a study that was unable to accrue patients. The trial sponsor wanted our tools, clinical sites and data to solve their problem. We did, but the problem turned out to be a trial design that was inexecutable. Our AI-optimized study design solution found the problem. It was not the insight that was expected, but it was nonetheless valuable.
Of greater significance, we and our sponsor partners in the past year have affirmed our commitment to eliminating the research disparities that sometimes underlie health and other inequities.
We have successfully brought together our combination of rich clinical data and AI optimizations to reconsider clinical trial designs to ensure diversity, avoid unintentional exclusions, and identify sites and investigators that can assure study success and timeliness for completion.
Twitter: @SiwickiHealthIT. Email the writer: bsiwicki@himss.org. Healthcare IT News is a HIMSS Media publication.
Go here to see the original:
Spurred by the pandemic, AI is driving decentralized clinical trials - Healthcare IT News
Posted in Ai
Comments Off on Spurred by the pandemic, AI is driving decentralized clinical trials – Healthcare IT News
Fiddler AI Named to the 2021 CB Insights AI 100 List of Most Innovative Artificial Intelligence Startups – Yahoo Finance
Posted: at 2:41 am
Fiddler AI honored for achievements in ML Model Performance Monitoring and Explainable AI to build trustworthy and reliable AI solutions
PALO ALTO, Calif., April 7, 2021 /PRNewswire/ -- CB Insights today named Fiddler AI to the fifth annual AI 100 ranking, showcasing the 100 most promising private artificial intelligence companies in the world.
"This is the fifth year CB Insights has recognized the most promising private artificial intelligence companies with the AI 100, and this is one of the most global groups we've ever seen. This year's cohort spans 18 industries, and is working on everything from climate risk to accelerating drug R&D," said CB Insights CEO Anand Sanwal. "Last year's AI 100 companies had a remarkable run after being named to the list, with more than 50% going on to raise additional financing (totaling $5.2B), including 16 $100 million+ mega-rounds. Many also went on to exit via M&A, SPAC or IPO. As industry after industry adopts AI, we expect this year's class will see similar levels of interest from investors, acquirers and customers."
"We're honored to be named to the AI 100 list and excited that our mission to build trust in AI is quickly becoming critical in today's world. Algorithms rule our lives - from news consumption to mortgage financing, our lives are driven by algorithms. Most algorithms are AI-based and increasingly black boxes. We cannot allow algorithms to operate with a lack of transparency. We need accountability to build trust between humans and AI. Fiddler's mission is to build trust with AI by continuously monitoring models and unlocking the AI black box with explainability," said CEO & Founder, Krishna Gade.
Through an evidence-based approach, the CB Insights research team selected the AI 100 from a pool of over 6,000 companies based on several factors including patent activity, investor quality, news sentiment analysis, proprietary Mosaic scores, market potential, partnerships, competitive landscape, team strength, and tech novelty. The Mosaic Score, based on CB Insights' algorithm, measures the overall health and growth potential of private companies to help predict a company's momentum.
Fiddler's Model Performance Monitoring solution enables data science and AI/ML teams to validate, monitor, explain, and analyze their AI solutions to accelerate AI adoption, meet regulatory compliance and build trust with end-users. It provides customers complete visibility into, and understanding of, their AI solutions. Fiddler has been recognized for its industry-leading capabilities and innovation: it was named a Technology Pioneer 2020 by the World Economic Forum, one of Forbes' companies to watch on its 2020 AI 50 list, and a 2019 Cool Vendor in Gartner's Enterprise AI Governance and Ethical Response report.
Quick facts on the 2021 AI 100:
Equity funding and deals: Since 2010, the AI 100 2021 cohort has raised over $11.7B in equity funding across 370+ deals from more than 700 investors.
12 unicorns: Companies with $1B+ valuations on the list span applications as varied as data annotation, cybersecurity, sales & CRM platforms, and enterprise search.
Geographic distribution: 64% of the selected companies are headquartered in the US. Eight of the winners are based in the UK, followed by six each in China and Israel, and five in Canada. Other countries represented in this year's list include Japan, Denmark, Czech Republic, France, Poland, Germany, and South Korea.
About CB Insights: CB Insights builds software that enables the world's best companies to discover, understand and make technology decisions with confidence. By marrying data, expert insights and work management tools, clients manage their end-to-end technology decision-making process on CB Insights. To learn more, please visit http://www.cbinsights.com.
Contact: CB Insights, awards@cbinsights.com
About Fiddler AI: Fiddler's mission is to enable businesses of all sizes to unlock the AI black box and deliver transparent AI experiences to end-users. We enable businesses to build, deploy, and maintain trustworthy AI solutions. Fiddler's next-generation ML Model Performance Management solution enables data science and technical teams to monitor, explain, and analyze their AI solutions, providing responsible and reliable experiences to business stakeholders and customers. Fiddler works with pioneering Fortune 500 companies as well as emerging tech companies. For more information please visit http://www.fiddler.ai or follow us on Twitter @fiddlerlabs and LinkedIn.
CONTACT: Fiddler AI, media@fiddler.ai
View original content:http://www.prnewswire.com/news-releases/fiddler-ai-named-to-the-2021-cb-insights-ai-100-list-of-most-innovative-artificial-intelligence-startups-301265389.html
SOURCE Fiddler AI
Go here to see the original:
Fiddler AI Named to the 2021 CB Insights AI 100 List of Most Innovative Artificial Intelligence Startups - Yahoo Finance
Posted in Ai
Comments Off on Fiddler AI Named to the 2021 CB Insights AI 100 List of Most Innovative Artificial Intelligence Startups – Yahoo Finance
To Bridge the AI Ethics Gap, We Must First Acknowledge It’s There – Datanami
Posted: at 2:41 am
Companies are adopting AI solutions at unprecedented rates, but ethical worries continue to dog the rollouts. While there are no established standards for AI ethics, a common set of guidelines is beginning to emerge to help bridge the gap between ethical principles and AI implementations. Unfortunately, a general hesitancy to even discuss the problem could slow efforts to find a solution.
As the AI Ethics Chief for Boston Consulting Group, Steve Mills talks with a lot of companies about their ethical concerns and their ethics programs. While they're not slowing down their AI rollouts because of ethics concerns at this time, Mills says, they are grappling with the issue and are searching for the best way to develop AI systems without violating ethical principles.
"What we continue seeing here is this gap, what we started calling the responsible AI gap, that gap from principle to action," Mills says. "They want to do the right thing, but no one really knows how. There is no clear roadmap or framework of this is how you build an AI ethics program, or a responsible AI program. Folks just don't know."
As a management consulting firm, Boston Consulting Group is well positioned to help companies with this problem. Mills and his BCG colleagues have helped companies develop AI programs. Out of that experience, they recently came up with a general AI ethics program that others can use as a framework to get started.
The program they came up with has six parts.
The most important thing a company can do to get started is to appoint somebody to be responsible for the AI ethics program, Mills says. That person can come from inside the company or from outside it. Regardless, he or she will need to be able to drive the vision and strategy of ethics while also understanding the technology. Finding such a person will not be easy (indeed, just finding AI ethicists, let alone executives who can take this role, is no easy task).
"Ultimately, you're going to need a team. You're not going to be successful with just one person," Mills says. "You need a wide diversity of skill sets. You need bundled into that group the strategists, the technologists, the ethicists, marketing, all of it bundled together. Ultimately, this is really about driving a culture change."
There are a handful of companies that have taken a leadership role in paving the way forward in AI ethics. According to Mills, the software companies Microsoft, Salesforce, and Autodesk, as well as Spanish telecom Telefónica, have developed solid programs that define what AI ethics means to them and systems to enforce it within their companies.
"And BCG, of course," he says, "but I'm biased."
As the Principal Architect of the Ethical AI Practice at Salesforce, Kathy Baxter is one of the foremost authorities on AI ethics. Her decisions impact how Salesforce customers approach the AI ethical quandary, which in turn can impact millions of end users around the world.
So you might expect Baxter to say that Salesforce's algorithms are bias-free, that they always make fair decisions, and that they never take into account factors based on controversial data.
You would be mistaken.
"You can never say that a model is 100% bias-free. It's just statistically not possible," Baxter says. "If it does say that there is zero bias, you're probably overfitting your model. Instead, what we can say is that this is the type of bias that I looked for."
To prevent bias, model developers must be conscious of the specific types of bias they're trying to prevent, Baxter says. That means, if you're looking to avoid identity bias in a sentiment analysis model, for example, then you should be on the lookout for how different terms, such as "Muslim," "feminist," or "Christian," affect the results.
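As a hedged sketch of such a lookout in practice (using the Hugging Face transformers library and an illustrative template; this is not Salesforce's actual test suite): swap identity terms into an otherwise neutral sentence and compare the model's scores.

```python
# Minimal counterfactual identity-term check on a sentiment model.
# Template and terms are illustrative, not a production test suite.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")  # downloads a default English model

template = "I had dinner with my {} friend yesterday."
for term in ("Muslim", "feminist", "Christian"):
    result = sentiment(template.format(term))[0]
    # A neutral sentence should score the same whichever term is inserted;
    # large gaps between terms signal identity bias in the model.
    print(f"{term:10s} {result['label']} {result['score']:.3f}")
```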
Other biases to be on the lookout for are gender bias, racial bias, and accent or dialect bias, Baxter says. Emerging best-practices for AI ethics demands that practitioners devise ways to detect specific types of bias that could impact their particular AI system, and to take steps to counter those biases.
"What type of bias did you look for? How did you measure it?" Baxter tells Datanami. "And then what was the score? What is the actual safe or acceptable threshold of bias for you to say this is good enough to be released in the world?"
Baxter's is a more nuanced, and more practical, view of AI ethics than one might get from textbooks (if there are any on the topic yet). She seems to recognize that bias is everywhere in human society and can never be fully eradicated, but that we can hopefully eliminate the worst types of bias and still enable companies and their customers to reap the rewards that AI promises in the first place.
"You often hear people say, 'Oh, we should follow the Hippocratic Oath that says do no harm,'" Baxter says. "Well, that's not actually the true application in the medical or pharmaceutical industry, because if you did no harm, there would be no medical treatment. You could never do surgery, because you're doing harm to the body when you're cutting the body open. But the benefits outweigh the risks of doing nothing."
There are ethical pitfalls everywhere. For example, it's not just bad form to make business decisions based on somebody's race or ethnicity; it's also illegal. But the paradox is that, unless you collect data about race or ethnicity, you don't know whether those factors are sneaking into the model somehow, perhaps through a proxy like ZIP codes.
"You want to be able to run a study and see, are the outcomes different based on what someone's race is, or based on what someone's gender is?" Baxter says. "If it is, that's a real problem. If you just say, 'No, I don't even want to look at race, I'm just going to completely exclude that,' then it's very difficult to create fairness through unawareness."
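A minimal sketch of that kind of check, assuming pandas and a hypothetical table of model decisions with columns for race, gender and the approval outcome:

```python
# Minimal outcome-disparity check across protected attributes.
# The DataFrame and its columns are hypothetical, for illustration only.
import pandas as pd

df = pd.DataFrame({
    "race":     ["A", "A", "B", "B", "B", "A"],
    "gender":   ["F", "M", "F", "M", "F", "M"],
    "approved": [1, 1, 0, 1, 0, 1],  # 1 = the model approved the applicant
})

# Approval rates by group; a large gap suggests the attribute, or a proxy
# for it such as ZIP code, is leaking into the model's decisions.
print(df.groupby("race")["approved"].mean())
print(df.groupby("gender")["approved"].mean())
```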
The challenge is that this is all fairly new, and nobody has a solid roadmap to follow. Salesforce is working to build processes in Einstein Discovery to help its customers model data without incorporating negative bias, but even Salesforce is flying blind to a certain extent.
The lack of established standards and regulations is the biggest challenge in AI ethics, Baxter says. "Everyone is working in kind of a sea of vagueness," she says.
She sees similarities to how the cybersecurity field developed in the 1980s. There was no security at first, and we all got hit by malware and viruses. That ultimately prompted the creation of a new discipline with new standards to guide its development. That process took years, and it will take years to hash out standards for AI ethics, she says.
"It's a game of whack-a-mole in security. I think it's going to be similar in AI," she says. "We're in this period right now where we're developing standards, we're developing regulations, and it will never be a solved problem. AI will continue evolving, and when it does, new risks will emerge, and so we will always be in a practice. It will never be a solved problem, but [we'll continue] learning and iterating. So I do think we can get there. We're just in an uncomfortable place right now because we don't have it."
AI ethics is a new discipline, so don't expect perfection overnight. A little bit of failure isn't the end of the world, but being open enough to discuss failures is a virtue. That can be tough to do in today's volatile public environment, but it's a critical ingredient to make progress, BCG's Mills says.
"What I try to tell people is no one has all the answers. It's a new area. Everyone is collectively learning," he says. "The best thing you can do is be open and transparent about it. I think customers appreciate that, particularly if you take the stand of, 'We don't have all the answers. Here are the things we're doing. We might get it wrong sometimes, but we'll be honest with you about what we're doing.' But I think we're just not there yet. People are hesitant to have that dialog."
Related Items:
Looking For An AI Ethicist? Good Luck
Governance, Privacy, and Ethics at the Forefront of Data in 2021
AI Ethics Still In Its Infancy
Read the original:
To Bridge the AI Ethics Gap, We Must First Acknowledge It's There - Datanami
Posted in Ai
Comments Off on To Bridge the AI Ethics Gap, We Must First Acknowledge It’s There – Datanami