The Prometheus League
Breaking News and Updates
- Abolition Of Work
- AI
- Alt-right
- Alternative Medicine
- Antifa
- Artificial General Intelligence
- Artificial Intelligence
- Artificial Super Intelligence
- Ascension
- Astronomy
- Atheism
- Atheist
- Atlas Shrugged
- Automation
- Ayn Rand
- Bahamas
- Bankruptcy
- Basic Income Guarantee
- Big Tech
- Bitcoin
- Black Lives Matter
- Blackjack
- Boca Chica Texas
- Brexit
- Caribbean
- Casino
- Casino Affiliate
- CBD Oil
- Censorship
- Cf
- Chess Engines
- Childfree
- Cloning
- Cloud Computing
- Conscious Evolution
- Corona Virus
- Cosmic Heaven
- Covid-19
- Cryonics
- Cryptocurrency
- Cyberpunk
- Darwinism
- Democrat
- Designer Babies
- DNA
- Donald Trump
- Eczema
- Elon Musk
- Entheogens
- Ethical Egoism
- Eugenic Concepts
- Eugenics
- Euthanasia
- Evolution
- Extropian
- Extropianism
- Extropy
- Fake News
- Federalism
- Federalist
- Fifth Amendment
- Financial Independence
- First Amendment
- Fiscal Freedom
- Food Supplements
- Fourth Amendment
- Free Speech
- Freedom
- Freedom of Speech
- Futurism
- Futurist
- Gambling
- Gene Medicine
- Genetic Engineering
- Genome
- Germ Warfare
- Golden Rule
- Government Oppression
- Hedonism
- High Seas
- History
- Hubble Telescope
- Human Genetic Engineering
- Human Genetics
- Human Immortality
- Human Longevity
- Illuminati
- Immortality
- Immortality Medicine
- Intentional Communities
- Jacinda Ardern
- Jitsi
- Jordan Peterson
- Las Vegas
- Liberal
- Libertarian
- Libertarianism
- Liberty
- Life Extension
- Macau
- Marie Byrd Land
- Mars
- Mars Colonization
- Mars Colony
- Memetics
- Micronations
- Mind Uploading
- Minerva Reefs
- Modern Satanism
- Moon Colonization
- Nanotech
- National Vanguard
- NATO
- Neo-eugenics
- Neurohacking
- Neurotechnology
- New Utopia
- New Zealand
- Nihilism
- Nootropics
- NSA
- Oceania
- Offshore
- Olympics
- Online Casino
- Online Gambling
- Pantheism
- Personal Empowerment
- Poker
- Political Correctness
- Politically Incorrect
- Polygamy
- Populism
- Post Human
- Post Humanism
- Posthuman
- Posthumanism
- Private Islands
- Progress
- Proud Boys
- Psoriasis
- Psychedelics
- Putin
- Quantum Computing
- Quantum Physics
- Rationalism
- Republican
- Resource Based Economy
- Robotics
- Rockall
- Ron Paul
- Roulette
- Russia
- Sealand
- Seasteading
- Second Amendment
- Seychelles
- Singularitarianism
- Singularity
- Socio-economic Collapse
- Space Exploration
- Space Station
- Space Travel
- SpaceX
- Sports Betting
- Sportsbook
- Superintelligence
- Survivalism
- Talmud
- Technology
- Teilhard de Chardin
- Terraforming Mars
- The Singularity
- TMS
- Tor Browser
- Trance
- Transhuman
- Transhuman News
- Transhumanism
- Transhumanist
- Transtopian
- Transtopianism
- Ukraine
- Uncategorized
- Vaping
- Victimless Crimes
- Virtual Reality
- Wage Slavery
- War On Drugs
- Waveland
- WW3
- Yahoo
- Zeitgeist Movement
- Prometheism
- Forbidden Fruit
- The Evolutionary Perspective
Category Archives: Artificial Intelligence
Many Artificial Intelligence Initiatives Included in the NDAA – RTInsights
Posted: January 19, 2021 at 9:36 am
The NDAA guidelines reestablish an artificial intelligence advisor to the president and push education initiatives to create a tech-savvy workforce.
There's plenty of debate surrounding why the USA's current regulatory stance on artificial intelligence (AI) and cybersecurity remains fragmented. Regardless of your thoughts on the matter, the recently passed National Defense Authorization Act (NDAA) includes quite a few AI- and cybersecurity-driven initiatives for both military and non-military entities.
It's common to attach provisions to bills when Congress knows a bill must pass by a certain time, and the NDAA is one such bill. It has an annual deadline, after which the country's military loses funding, which leads lawmakers to use it to pass measures that wouldn't always make it through on their own. (This year's bill was initially vetoed, but the veto was overridden on January 1.)
The bill contains 4,500 pages' worth of information. Along with a few different initiatives, one particular move outlines both the military's and the government's new interest in artificial intelligence.
One of the biggest moves in the bill has to do with the newly created Joint AI Center (JAIC). It moves from under the supervision of the DOD's CIO to the deputy secretary of defense, rising higher in the DOD hierarchy and possibly underscoring just how crucial these new initiatives are to the Department of Defense.
To that end, the JAIC is the Department of Defense's (DoD) AI Center of Excellence, providing expertise to help the Department harness the game-changing power of AI. The mission of the JAIC is to transform the DoD by accelerating the delivery and adoption of artificial intelligence. The goal is to use AI to solve large and complex problem sets that span multiple services, then ensure the Services and Components have real-time access to ever-improving libraries of data sets and tools.
The center will also receive its own oversight board, matching other bill provisions dealing with AI ethics, and will soon have acquisition authority as well. The center will produce biannual reports on its work and its integration with other notable agencies.
The secretary of defense will also investigate whether the DoD can use AI ethically, across both acquired and internally developed technologies. This will happen within 180 days of the bill's passing, creating a pressing deadline for handling ethics issues surrounding both new technologies and the often-controversial use of military AI.
The DoD will receive a steering committee on emerging technology as well as new hiring guidelines for AI technologists. The defense department will also take on five new AI-driven initiatives designed to improve efficiency at the DoD.
The second massive provision in the bill is a large piece of cybersecurity legislation. The Cyberspace Solarium Commission worked on quite a few pieces of legislation that made it into the bill's final version. The bill creates a White House cyber director position. It also gives the Cybersecurity and Infrastructure Security Agency (CISA) more authority for threat hunting.
It directs the executive branch to conduct "continuity of the economy" planning to protect critical economic infrastructure in the case of cyberattacks. It also establishes a joint cyber planning office at CISA.
The Cybersecurity Maturity Model Certification (CMMC) will fall under the Government Accountability Office, and the government will require regular briefings from the DoD on its progress. CMMC is the government's accreditation program, and this affects anyone in the defense contract supply chain.
Entities outside the Department of Defense will have new initiatives as well. The National AI Initiative hopes to reestablish the United States as a leading authority on and provider of artificial intelligence. The initiative will coordinate research, development, and deployment of new artificial intelligence programs among the DOD as well as civilian agencies.
This coordination should help bring coherence and consistency to research and development. In the past, critics have cited a lack of realistic and workable regulations as a clear reason the United States has fallen behind in AI development.
It will advise future presidents on the state of AI within the country to increase competitiveness and leadership. The country can expect more training initiatives and regular updates about the science itself. The initiative will also lead and coordinate strategic partnerships and international collaboration with key allies, and open those opportunities to the US economy.
AI bias is a huge concern among businesses and US citizens, so the National AI Initiative Advisory Committee will also create a subcommittee on AI and law enforcement. Its findings on data security and legal standards could affect how businesses handle their own data security in the future.
The National Science Foundation will run awards, competitions, grants, and other incentives to develop trustworthy AI. The country is betting heavily on new initiatives to increase trust among US consumers as AI becomes a more important part of our lives.
NIST will expand its mission to create frameworks and standards for AI adoption. NIST guidelines already offer companies a framework for assessing cybersecurity. The updates will help develop trustworthy AI and spell out a pathway for AI adoption that consumers will trust and embrace.
As countries scramble for first place in AI readiness, these initiatives hope to fix some key gaps behind the US's lagging authority. The NDAA guidelines reestablish an AI advisor to the president and push education initiatives to create a tech-savvy workforce.
The act also helps create guidelines for businesses already frantically adopting AI-driven initiatives, providing critical guidance for cybersecurity and sustainability frameworks. Between training and NIST frameworks, businesses could see a new era of trustworthy and ethical AI: the sort that creates real insights and efficiency while mitigating risk.
Other countries are investing heavily in AI development, so new and expanded provisions will help secure the United States' place as a world leader in AI. Governmental funding and collaboration with civilian researchers and development teams is one way the US can remain truly competitive in new technology, and the presence of such a robust body of AI-focused legislation suggests lawmakers are making this a priority.
Read more from the original source:
Many Artificial Intelligence Initiatives Included in the NDAA - RTInsights
Posted in Artificial Intelligence
Comments Off on Many Artificial Intelligence Initiatives Included in the NDAA – RTInsights
3 Reasons Why Governments Need to Regulate Artificial Intelligence – BBN Times
Posted: at 9:36 am
Artificial Intelligence (AI) research, although far from reaching its pinnacle, is already giving us glimpses of what a future dominated by this technology can look like.
While the rapid progress of the technology should be viewed through a positive lens, it is important to exercise some caution and introduce worldwide regulations for the development and use of AI technology.
Constant research in the field of technology, in addition to giving rise to increasingly powerful applications, is also increasing the accessibility of these applications, making it easier for more and more people, as well as organizations, to use and develop them. While the democratization of technology transpiring across the world is a welcome change, the same cannot be said for all technological applications being developed.
The usage of certain technologies should be regulated, or at the very least monitored, to prevent the misuse or abuse of the technology towards harmful ends. For instance, nuclear research and development, despite being highly beneficial to everyone, is highly regulated across the world. That's because nuclear technology, in addition to being useful for constructive purposes like power generation, can also be used to cause destruction in the form of nuclear bombs. To prevent this, international bodies have restricted nuclear research to entities that can keep the technology secure and under control. Similarly, the need for regulating AI research and applications is becoming increasingly obvious. Read on to know why.
AI research, in recent years, has resulted in numerous applications and capabilities that, not long ago, were reserved for the realm of futuristic fiction. Today, it is not uncommon to come across machines that can perform specific logical and computational tasks better than humans. They can perform feats such as understanding what we speak or write using natural language processing, detecting illnesses using deep neural networks, and playing games involving logic and intuition better than us. Such applications, if made available to the general public and businesses worldwide, can undoubtedly make a positive impact on the world.
For instance, AI can predict the outcome of different decisions made by businesses and individuals and suggest the optimal course of action in any situation. This will minimize the risks involved in any endeavor and maximize the likelihood of achieving the most desirable outcomes. AI systems can help businesses become more efficient by automating routine tasks, and preserve human health and safety by undertaking tasks that involve high stress and hazard. They can also save lives by detecting diseases much earlier than human doctors can diagnose them. Thus, any progress made in the field of AI will result in an improvement in the overall standard of human life. However, it is important to realize that, like any other form of technology, AI is a double-edged sword. AI has a dark side, too. If highly advanced and complex AI systems are left uncontrolled and unsupervised, they run the risk of deviating from desirable behavior and performing tasks in unethical ways.
There have been many instances where AI systems tried to fool their human developers by cheating in the way they performed the tasks they were programmed to do. For example, an AI tasked with generating virtual maps from real aerial images cheated by hiding data from its developers. This happened because the developers used the wrong metric to evaluate the AI's performance, causing the AI to cheat to maximize the target metric. While it'll be a long time before we have sentient AI that can potentially contemplate a coup against humanity, we already have AI systems that can cause a lot of harm by acting in ways not intended by their developers. In short, we are currently at more risk of AI doing things wrong than of it doing the wrong things.
To prevent AI from doing things wrong (or doing the wrong things), it is important for developers to exercise more caution and care while creating these systems. The way the AI community is currently trying to achieve this is by maintaining a generally accepted set of ethics and guidelines for the development and use of AI. In some cases, ethical use of AI is instead driven by the collective activism of individuals in the tech community. For instance, Google recently pledged not to use AI for military applications after its employees openly opposed the notion. While such movements do help in mitigating AI-induced risks and regulating AI development to a certain extent, it is not a given that every group involved in developing AI technology will comply with such activism.
AI research is performed in every corner of the world, often in silos for competitive reasons. Thus, there is no way to know what goes on in each of these places, let alone stop them from doing anything unethical. Also, while most developers try to create AI systems and test them rigorously to prevent any mishaps, they may compromise on such aspects while focusing on performance and on-time delivery of projects. This may lead to them creating AI systems that are not fully tested for safety and compliance. Even small issues can have devastating ramifications depending on the application. Thus, it is necessary to institutionalize AI ethics into law, which will make regulating AI and its impact easier for governments and international bodies.
Legally regulating AI can ensure that AI safety becomes an inherent part of any future AI development initiative. This means that every new AI, regardless of its simplicity or complexity, will go through a process of development that inherently focuses on minimizing non-compliance and the chances of failure. To ensure AI safety, regulators must build a few must-have tenets into the legislation.
Any international agency or government body that sets about regulating AI through legislation should consult with experts in the fields of artificial intelligence, ethics and moral sciences, and law and justice. Doing so helps eliminate political or personal agendas, biases, and misconceptions while framing the rules for regulating AI research and application. And once framed, these regulations should be upheld and enforced strictly. This will ensure that only applications that comply with the highest safety standards are adopted for mainstream use.
While regulating AI is necessary, it should not be done in a way that stifles the existing momentum in AI research and development. Thus, the challenge will be to strike a balance between allowing developers enough freedom to ensure the continued growth of AI research and bringing in more accountability for the makers of AI. While too much regulation can prove to be the enemy of progress, no regulation at all can lead to the propagation of AI systems that can not only halt progress but potentially lead to destruction and global decline.
See original here:
3 Reasons Why Governments Need to Regulate Artificial Intelligence - BBN Times
Posted in Artificial Intelligence
Comments Off on 3 Reasons Why Governments Need to Regulate Artificial Intelligence – BBN Times
Artificial intelligence can deepen social inequality. Here are 5 ways to help prevent this – The Conversation AU
Posted: at 9:36 am
From Google searches and dating sites to detecting credit card fraud, artificial intelligence (AI) keeps finding new ways to creep into our lives. But can we trust the algorithms that drive it?
As humans, we make errors. We can have attention lapses and misinterpret information. Yet when we reassess, we can pick out our errors and correct them.
But when an AI system makes an error, it will be repeated again and again no matter how many times it looks at the same data under the same circumstances.
AI systems are trained using data that inevitably reflect the past. If a training data set contains inherent biases from past human decisions, these biases are codified and amplified by the system.
Or if it contains less data about a particular minority group, predictions for that group will tend to be worse. This is called algorithmic bias.
Gradient Institute has co-authored a paper demonstrating how businesses can identify algorithmic bias in AI systems, and how they can mitigate it.
The work was produced in collaboration with the Australian Human Rights Commission, Consumer Policy Research Centre, CSIRO's Data61 and the CHOICE advocacy group.
Algorithmic bias may arise through a lack of suitable training data, or as a result of inappropriate system design or configuration.
For example, a system that helps a bank decide whether or not to grant loans would typically be trained using a large data set of the bank's previous loan decisions (and other relevant data to which the bank has access).
The system can compare a new loan applicant's financial history, employment history and demographic information with corresponding information from previous applicants. From this, it tries to predict whether the new applicant will be able to repay the loan.
But this approach can be problematic. One way in which algorithmic bias could arise in this situation is through unconscious biases from loan managers who made past decisions about mortgage applications.
If customers from minority groups were denied loans unfairly in the past, the AI will consider these groups' general repayment ability to be lower than it is.
Young people, people of colour, single women, people with disabilities and blue-collar workers are just some examples of groups that may be disadvantaged.
Read more: Artificial Intelligence has a gender bias problem -- just ask Siri
The biased AI system described above poses two key risks for the bank.
First, the bank could miss out on potential clients, by sending victims of bias to its competitors. It could also be held liable under anti-discrimination laws.
If an AI system continually applies inherent bias in its decisions, it becomes easier for government or consumer groups to identify this systematic pattern. This can lead to hefty fines and penalties.
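To make that pattern concrete, here is a minimal sketch, with made-up data rather than the method from the paper above, of how a systematic disparity can surface when approval and error rates are compared across groups:

```python
# A minimal sketch (toy data, not the paper's method) of surfacing
# algorithmic bias: compare approval and false-negative rates by group.
import numpy as np

def group_rates(y_true, y_pred, groups):
    """Print the approval rate and false-negative rate for each group."""
    for g in np.unique(groups):
        mask = groups == g
        approval = y_pred[mask].mean()
        qualified = mask & (y_true == 1)        # applicants who would repay
        fnr = 1.0 - y_pred[qualified].mean()    # wrongly rejected share
        print(f"{g}: approval={approval:.2f}, false_negative_rate={fnr:.2f}")

rng = np.random.default_rng(0)
groups = rng.choice(["majority", "minority"], size=1000, p=[0.8, 0.2])
y_true = rng.integers(0, 2, size=1000)          # ground truth: would repay
# A biased model: accurate for the majority, a coin flip for qualified
# minority applicants -- mimicking unfair historical denials.
y_pred = np.where((groups == "minority") & (y_true == 1),
                  rng.integers(0, 2, size=1000), y_true)
group_rates(y_true, y_pred, groups)
```

A persistent gap in the false-negative rate between the two groups is exactly the kind of systematic pattern a regulator or consumer group could identify.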
Our paper explores several ways in which algorithmic bias can arise.
It also provides technical guidance on how this bias can be removed, so AI systems produce ethical outcomes which don't discriminate based on characteristics such as race, age, sex or disability.
For our paper, we ran a simulation of a hypothetical electricity retailer using an AI-powered tool to decide how to offer products to customers and on what terms. The simulation was trained on fictional historical data made up of fictional individuals.
Based on our results, we identify five approaches to correcting algorithmic bias. This toolkit can be applied to businesses across a range of sectors to help ensure AI systems are fair and accurate.
1. Get better data
The risk of algorithmic bias can be reduced by obtaining additional data points or new types of information on individuals, especially those who are underrepresented (minorities) or those who may appear inaccurately in existing data.
2. Pre-process the data
This consists of editing a dataset to mask or remove information about attributes associated with protections under anti-discrimination law, such as race or gender.
3. Increase model complexity
A simpler AI model can be easier to test, monitor and interrogate. But it can also be less accurate and lead to generalisations which favour the majority over minorities.
4. Modify the system
The logic and parameters of an AI system can be proactively adjusted to directly counteract algorithmic bias. For example, this can be done by setting a different decision threshold for a disadvantaged group (see the sketch after this list).
5. Change the prediction target
The specific measure chosen to guide an AI system directly influences how it makes decisions across different groups. Finding a fairer measure to use as the prediction target will help reduce algorithmic bias.
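As a concrete illustration of approach 4, here is a minimal sketch, with illustrative numbers rather than values recommended by the paper, of applying a different decision threshold to a disadvantaged group:

```python
# A minimal sketch of "modify the system": apply a per-group decision
# threshold so approval rates move closer to parity. The threshold values
# are illustrative, not prescriptive.
import numpy as np

def decide(scores, groups, thresholds):
    """Approve an applicant when the model score clears their group's cutoff."""
    cutoffs = np.array([thresholds[g] for g in groups])
    return scores >= cutoffs

scores = np.array([0.55, 0.62, 0.48, 0.71])          # model repayment scores
groups = np.array(["majority", "minority", "minority", "majority"])
# Lower cutoff for the disadvantaged group to counteract learned bias.
thresholds = {"majority": 0.60, "minority": 0.50}
print(decide(scores, groups, thresholds))            # [False  True False  True]
```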
In our recommendations to government and businesses wanting to employ AI decision-making, we foremost stress the importance of considering general principles of fairness and human rights when using such technology. And this must be done before a system is in use.
We also recommend that systems are rigorously designed and tested to ensure outputs aren't tainted by algorithmic bias. Once operational, they should be closely monitored.
Finally, we advise that using AI systems responsibly and ethically extends beyond compliance with the narrow letter of the law. It also requires systems to be aligned with broadly accepted social norms and considerate of their impact on individuals, communities and the environment.
With AI decision-making tools becoming commonplace, we now have an opportunity not only to increase productivity, but to create a more equitable and just society: that is, if we use them carefully.
Read more: YouTube's algorithms might radicalise people but the real problem is we've no idea how they work
More here:
Posted in Artificial Intelligence
Comments Off on Artificial intelligence can deepen social inequality. Here are 5 ways to help prevent this – The Conversation AU
Government Will Increase Scrutiny on AI in Screening – ESR NEWS
Posted: at 9:36 am
Written By Employment Screening Resources (ESR)
Government agencies in the United States such as the Federal Trade Commission (FTC), the Consumer Financial Protection Bureau (CFPB), and the Equal Employment Opportunity Commission (EEOC) will increase scrutiny on how Artificial Intelligence (AI) is used in background screening, according to the ESR Top Ten Background Check Trends for 2021 compiled by leading global background check firm Employment Screening Resources (ESR).
In April 2020, the FTC, the nation's primary privacy and data security enforcer, issued guidance to businesses on "Using Artificial Intelligence and Algorithms," written by Andrew Smith, Director of the FTC Bureau of Consumer Protection, on the use of AI and Machine Learning (ML) technology and automated decision-making with regard to federal laws, including the Fair Credit Reporting Act (FCRA) that regulates background checks.
"Headlines tout rapid improvements in artificial intelligence technology. The use of AI technology (machines and algorithms) to make predictions, recommendations, or decisions has enormous potential to improve welfare and productivity. But it also presents risks, such as the potential for unfair or discriminatory outcomes or the perpetuation of existing socioeconomic disparities," Director Smith wrote in the FTC guidance.
"The good news is that, while the sophistication of AI and machine learning technology is new, automated decision-making is not, and we at the FTC have long experience dealing with the challenges presented by the use of data and algorithms to make decisions about consumers," Smith wrote. "In 2016, the FTC issued a report on Big Data: A Tool for Inclusion or Exclusion? In 2018, the FTC held a hearing to explore AI and algorithms."
In July 2020, the Consumer Financial Protection Bureau (CFPB), a government agency that helps businesses comply with federal consumer financial law, published a blog on "Providing adverse action notices when using AI/ML models" that addressed industry concerns about how the use of AI interacts with the existing regulatory framework. One issue is how complex AI models address the adverse action notice requirements in the FCRA.
"FCRA also includes adverse action notice requirements. For example, when adverse action is based in whole or in part on a credit score obtained from a consumer reporting agency (CRA), creditors must disclose key factors that adversely affected the score, the name and contact information of the CRA, and additional content. These notice provisions serve important anti-discrimination, educational, and accuracy purposes," the blog stated.
"There may be questions about how institutions can comply with these requirements if the reasons driving an AI decision are based on complex interrelationships. Industry continues to develop tools to accurately explain complex AI decisions. These developments hold great promise to enhance the explainability of AI and facilitate use of AI for credit underwriting compatible with adverse action notice requirements," the blog concluded.
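The blog does not prescribe a technique, but one simple, interpretable approach, sketched here with hypothetical features and weights, is to derive the "key factors" for an adverse action notice from per-feature contributions in a linear scorecard model:

```python
# A minimal sketch, assuming a logistic-regression scorecard, of producing
# "key factor" reason codes for an FCRA adverse action notice. Real lenders'
# models and factor methods vary; this is illustrative only.
import numpy as np

features = ["utilization", "late_payments", "account_age_yrs"]
coefs = np.array([-2.0, -1.5, 0.4])       # fitted model weights (hypothetical)
baseline = np.array([0.30, 0.0, 8.0])     # average approved applicant
applicant = np.array([0.90, 2.0, 3.0])

# Contribution of each feature to the score drop vs. the baseline applicant.
impact = coefs * (applicant - baseline)
order = np.argsort(impact)                # most negative (most harmful) first
print("Key factors that adversely affected the score:")
for i in order:
    if impact[i] < 0:
        print(f"  {features[i]}: impact {impact[i]:+.2f}")
```

More complex models need model-agnostic explanation tools to produce comparable factor rankings, which is the explainability work the blog alludes to.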
In December 2020, ten Democratic members of the United States Senate sent a letter requesting clarification from U.S. Equal Employment Opportunity Commission (EEOC) Chair Janet Dhillon regarding the EEOC's authority to investigate bias in AI-driven hiring technologies, according to a press release on the website of U.S. Senator Michael Bennet (D-Colorado), one of the Senators who signed the letter.
"While hiring technologies can sometimes reduce the role of individual hiring managers' biases, they can also reproduce and deepen systemic patterns of discrimination reflected in today's workforce data. Combatting systemic discrimination takes deliberate and proactive work from vendors, employers, and the Commission," Bennet and the other nine Senators wrote in the letter to EEOC Chair Dhillon.
"Today, far too little is known about the design, use, and effects of hiring technologies. Job applicants and employers depend on the Commission to conduct robust research and oversight of the industry and provide appropriate guidance. It is essential that these hiring processes advance equity in hiring, rather than erect artificial and discriminatory barriers to employment," the Senators continued in the letter.
"Machine learning is based on the idea that machines should be able to learn and adapt through experience, while Artificial Intelligence refers to the broader idea that machines can execute tasks intelligently, simulating human thinking, capability, and behavior to learn from data without being explicitly programmed," explained Attorney Lester Rosen, founder and chief executive officer (CEO) of ESR.
"There have certainly been technological advances, including back-office efficiencies and strides towards better integrations that streamline the employment screening process. However, does that qualify as machine learning or AI? In reality, true machine learning and artificial intelligence, and the role they are likely to play in the future, could fuel a new source of litigation for plaintiffs' class action attorneys," said Rosen.
"Proponents of AI argue that it will make processes faster and take bias out of hiring decisions. It is doubtful that civil rights advocates and the EEOC will see it that way. The use of AI for decision-making is contrary to one of the most fundamental bedrock principles of employment: that each person should be treated as an individual, and not processed as a group or based upon data points," Rosen concluded.
Employment Screening Resources (ESR), a leading global background check provider ranked the number one screening firm by HRO Today in 2020, offers the award-winning ESR Assured Compliance system, part of The ESRCheck Solution, for real-time compliance that offers automated notices, disclosures, and consents for employers performing background checks. To learn more about ESR, visit http://www.esrcheck.com.
Since 2008, Employment Screening Resources (ESR) has annually selected the ESR Top Ten Background Check Trends that feature emerging and influential trends in the background screening industry. Each of the top background check trends for 2021 will be announced via the ESR News Blog and listed on the ESR background check trends web page at http://www.esrcheck.com/Tools-Resources/ESR-Top-Ten-Background-Check-Trends/.
NOTE: Employment Screening Resources (ESR) does not provide or offer legal services or legal advice of any kind or nature. Any information on this website is for educational purposes only.
© 2021 Employment Screening Resources (ESR). Making copies of or using any part of the ESR News Blog or ESR website for any purpose other than your own personal use is prohibited unless written authorization is first obtained from ESR.
See original here:
Government Will Increase Scrutiny on AI in Screening - ESR NEWS
Posted in Artificial Intelligence
Comments Off on Government Will Increase Scrutiny on AI in Screening – ESR NEWS
Artificial intelligence is the future for pathology at Duke through new program – WRAL Tech Wire
Posted: at 9:36 am
DURHAM – Researchers at Duke University have been merging artificial intelligence with health care to improve patient outcomes for the better part of two decades. From making cochlear implants deliver purer sounds to the brain to finding hidden trends within reams of patient data, the field spans a diverse range of niches that are now beginning to make real impacts.
Among these niches, however, there is one where Duke researchers have always been at the leading edge: image analysis, with a broad team of researchers teaching computers to analyze images to unearth everything from various forms of cancer to biomarkers of Alzheimer's disease in the retina.
To keep pushing the envelope in this field by cementing these relationships into both schools' organizations, Duke's Pratt School of Engineering and School of Medicine have launched a new Division of Artificial Intelligence and Computational Pathology.
"Machine learning can do a better job than the average person at finding the signal in the noise, and that can translate into better outcomes and more cost-effective care," said Michael Datto, associate professor of pathology at Duke. "This is one of the most exciting times I've seen in pathology, and it's going to be exciting to see what we can do."
The new division will support translational research by developing AI technologies for image analysis to enhance the diagnosis, classification, prediction and prognostication of a variety of diseases, as well as train the next generation of pathologists and scientists in the emerging field.
The division is led by Carolyn Glass, assistant professor of pathology, and Laura Barisoni, professor of pathology and medicine, and operates in partnership with AI Health, directed by Lawrence Carin, professor of electrical and computer engineering and vice president for research at Duke, and Adrian Hernandez, professor of medicine and vice dean for clinical research.
"Duke has taken the lead at the national level in establishing a division in the Department of Pathology in partnership with AI Health, with the goal of developing and establishing new models and protocols to practice pathology in the 21st century," said Barisoni, who is also director of the renal pathology service at Duke.
AI Health is also a new initiative, launched as a collaboration between the Schools of Engineering and Medicine and Trinity College of Arts & Sciences, with units such as the Duke Global Health Institute and the Duke-Margolis Center for Health Policy, to leverage machine learning to improve both individual and population health through education, research and patient-care projects.
"For what everyone has envisioned for AI Health, we see pathology paving the way," said Hernandez. "AI Health is a catalyst and spark for putting cutting-edge machine learning development and testing into real-world settings. In pathology, we have image-intensive data streams, and COVID-19 has really emphasized the need for the timely processing of patient samples."
Applying machine learning image analysis to pathology processes, however, is easier said than done. Figuring out how to process extremely large image files and train AI algorithms on relatively few examples is part of the focus of Carin's laboratory, in partnership with Ricardo Henao, assistant professor of biostatistics and bioinformatics as well as electrical and computer engineering.
Current AI algorithms, such as convolutional neural networks (CNNs), were originally designed for the analysis of natural images, such as those captured on phones. Adapting such algorithms for the diagnosis of biopsy scans, however, is challenging due to the large size of the scans (typically tens of gigabytes) and the sparsity of the abnormal diagnostic cells they contain. Led by David Dov, a postdoctoral researcher in Carin's laboratory, Duke engineers are working to overcome these challenges to design AI algorithms for the diagnosis of various conditions, such as different types of cancers and transplant rejection.
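The Duke team's own methods are not detailed here, but the standard workaround for the gigabyte-scale problem they describe is to tile a slide into CNN-sized patches, score each patch, and aggregate. A minimal sketch, using an off-the-shelf network as a stand-in for a trained pathology model:

```python
# A minimal sketch of patch-based inference on a whole-slide image: tile the
# slide into CNN-sized patches, score each, and max-pool (one abnormal patch
# flags the slide). Generic pattern, not the Duke team's actual pipeline.
import numpy as np
import torch
import torchvision.models as models

model = models.resnet18(pretrained=True)   # stand-in for a trained pathology CNN
model.eval()

def score_slide(slide: np.ndarray, patch: int = 224, stride: int = 224) -> float:
    """Return a slide-level abnormality score from per-patch scores."""
    h, w, _ = slide.shape
    scores = []
    with torch.no_grad():
        for y in range(0, h - patch + 1, stride):
            for x in range(0, w - patch + 1, stride):
                tile = slide[y:y + patch, x:x + patch]
                t = torch.from_numpy(tile).permute(2, 0, 1).float().unsqueeze(0) / 255
                logits = model(t)
                # Class-0 logit used as a stand-in abnormality score here.
                scores.append(torch.sigmoid(logits[0, 0]).item())
    return max(scores)  # abnormal cells are sparse: one bad patch is enough

fake_slide = np.random.randint(0, 255, (448, 448, 3), dtype=np.uint8)
print(f"slide-level abnormality score: {score_slide(fake_slide):.3f}")
```

Real slides are orders of magnitude larger, so production pipelines stream tiles from disk and often learn the aggregation step (multiple-instance learning) rather than hard-coding a max.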
"Designing algorithms that make a real impact on clinical practice requires close collaboration between AI researchers and pathologists," said Dov, who joined Duke after completing his PhD in electrical engineering at the Technion – Israel Institute of Technology. "A key challenge in these collaborations is gaining a deep understanding of the gaps in medical practice, and then ensuring that clinicians fully understand the capabilities and limitations of AI in bridging these gaps. The new Division of Artificial Intelligence and Computational Pathology plays an important role in facilitating such collaborations."
In a virtual kickoff meeting this fall, the new division's leadership spoke to the potential it holds to improve patient health outcomes, and several researchers delved into projects they already have underway in the field. For example, Danielle Range, assistant professor of pathology, spoke of efforts to use AI in diagnosing cancer; Roarke Horstmeyer, assistant professor of biomedical engineering, described his efforts to create a smart microscope to better diagnose disease; and Glass detailed her work on the use of machine learning in diagnosing transplant rejection.
"In the last couple of years, we have seen an exponential increase in AI pathology interest, from Duke undergraduates to medical students applying for pathology residency positions," said Glass. "I think continued development of a solid, integrated curriculum and educational program will be critical to train these future leaders."
© Duke University
See more here:
Artificial intelligence is the future for pathology at Duke through new program - WRAL Tech Wire
Posted in Artificial Intelligence
Comments Off on Artificial intelligence is the future for pathology at Duke through new program – WRAL Tech Wire
Jvion Applies Clinical Artificial Intelligence to Help Prioritize COVID-19 Vaccine Distribution to Most Vulnerable Communities and Individuals – PR…
Posted: at 9:36 am
Jvion COVID Community Vulnerability Map, now featuring COVID Vaccination Prioritization Index (VPI)
ATLANTA (PRWEB) January 19, 2021
Jvion, a leader in clinical artificial intelligence (AI), announced the launch of its COVID Vaccination Prioritization Index (VPI). The VPI helps guide the distribution of COVID-19 vaccines during subsequent phases of community vaccination efforts. The VPI will be applied in two ways. The first is an update to Jvion's COVID Community Vulnerability Map, initially released last spring, that indexes communities by their priority level for vaccination, based on CDC guidelines and socioeconomic vulnerability. Jvion can also add the index to its COVID Patient Vulnerability Lists for new and existing customers.
"The past year has been difficult for us all, but particularly so for our society's most vulnerable members: the elderly, the sick and the unemployed, racial and ethnic minorities, rural Americans, and the hard-working people on the frontlines," said Jvion's Chief Product Officer Dr. John Showalter, MD, MSIS. "Now that vaccines are here, we're proud to be able to help these people get the protection they need as quickly as possible."
Jvion's VPI takes into account the CDC's recommendations for who should be prioritized for the limited supply of vaccines. Once healthcare workers and long-term care facility residents are vaccinated, the next phases will prioritize essential workers, the elderly, and those with underlying medical conditions. At each phase, Jvion's VPI will help public health officials determine which locations need more vaccines based on the makeup of the community, and help providers target their vaccination outreach to their patients at greatest risk.
To that effect, the COVID Community Vulnerability Map has been updated with a new layer that rates counties and zip codes on a scale from 1 to 6 based on the proportion of residents in the CDC's prioritization cohorts. The layer also accounts for environmental and social determinants of health (SDOH), such as air pollution, low-income jobs, and food insecurity, all of which have been correlated with higher rates of hospitalizations and deaths from COVID-19.
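Jvion has not published the VPI's internals, so the following is a purely hypothetical sketch of how a 1-6 index could combine a community's share of CDC priority cohorts with an SDOH score:

```python
# Purely hypothetical sketch of a 1-6 vaccination priority index; the weights
# and formula are assumptions, not Jvion's actual VPI.
def priority_index(pct_priority_cohorts: float, sdoh_score: float) -> int:
    """Both inputs are in [0, 1]; returns 1 (low priority) .. 6 (high)."""
    combined = 0.7 * pct_priority_cohorts + 0.3 * sdoh_score  # weights assumed
    return min(6, 1 + int(combined * 6))

# e.g. a county with many elderly residents and high food insecurity -> 4
print(priority_index(0.45, 0.80))
```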
Since its release in March 2020, the Map has been viewed over two million times, including by members of the White House Task Force, FEMA, every military branch, and state and local governments, and has been used to guide public health outreach, resource allocation, and the deployment of mobile testing sites in vulnerable communities.
After launching the public-facing COVID Community Vulnerability Map, Jvion also sent Patient Vulnerability Lists to its provider and payer customers that ranked their patients or members by their vulnerability to severe outcomes if infected with COVID-19, based on their clinical, socioeconomic, and behavioral risk factors. These lists will be updated to flag those individuals who should be prioritized for vaccination.
The vaccine prioritization tools are possible thanks to the CORE. Built on Microsoft Azure, the CORE is a secure and scalable repository that includes clinical, socioeconomic, and experiential data on 30 million individuals. The CORE was the driving force behind Jvion's COVID Response Suite, which included the COVID Community Vulnerability Map and Patient Vulnerability Lists, in addition to an Inpatient Triage Assessment and a Return to Work Assessment.
About Jvion
Jvion, a leader in clinical artificial intelligence, enables providers, payers and other healthcare entities to identify and prevent avoidable patient harm, utilization and costs. An industry first, the Jvion CORE goes beyond predictive analytics and machine learning to identify patients on a trajectory to becoming high-risk. Jvion then determines the interventions that will more effectively reduce risk and enable clinical and operational action. The CORE accelerates time to value by leveraging established patient-level intelligence to drive engagement across healthcare organizations, populations, and individuals. To date, the Jvion CORE has been deployed across hundreds of clients and resulted in millions saved. For more information, visit https://www.jvion.com.
See the rest here:
Posted in Artificial Intelligence
Comments Off on Jvion Applies Clinical Artificial Intelligence to Help Prioritize COVID-19 Vaccine Distribution to Most Vulnerable Communities and Individuals – PR…
Neil Mackay’s Big Read: Why artificial intelligence will either be the saviour or exterminator of the human race – HeraldScotland
Posted: January 17, 2021 at 8:57 am
HERE'S how quickly, how dangerously, artificial intelligence is moving: one day after Brian Christian sat in his office near Berkeley University in California, warning The Herald on Sunday about the imminent perils of a powerful AI being sent out onto social media by a rogue state to ramp up online hate and division, South Korea turned its worried attention to a character called Lee Luda.
Luda is just 20. She is a university student who gained 750,000 friends on Facebook in three weeks. The only problem is that Luda isn't a woman, or a student, or 20, or Korean. She is an artificial intelligence, and she had to be removed from Facebook after causing outrage with attacks on sexual minorities and disabled people. What makes Luda even more troubling is that the AI has an almost identical name to a South Korean popstar, Lee Lu-da. How long before there's a truly malevolent deepfake AI out there corrupting reality?
The Korean experience is a perfect example of what Christian, one of the world's leading experts on AI, calls "the alignment problem". How do we ensure AI creations match human norms and values? How do we prevent AI transgressing morality? How do we stop AI doing something dreadful in the real world?
Killer AI
THESE AIs have already killed people. One death occurred in Arizona when a woman wheeled her bike onto a road but didn't use a crossing. The AI of a passing self-driving car, Christian explains, wasn't properly programmed to understand that humans might appear on roads without using crossings, or might wheel bikes. So the AI just drove over the confusing object and killed a human being.
We've now reached a tipping point with AI, which is why Christian has written his prescient new book, The Alignment Problem: How Can Machines Learn Human Values? Christian is uniquely qualified when it comes to warning us about the uncharted territory we're entering with the crossover between human and machine intelligence. He is a computer scientist and philosopher by training, and currently scientific communicator-in-residence at the Simons Institute for the Theory of Computing at Berkeley University, the metaphoric heart and soul of Silicon Valley. He is also an affiliate of the Centre for Information Technology Research in the Interest of Society, and the Centre for Human-Compatible AI. If there is anyone who understands the risks of humanity's grand experiment with AI, it's Christian.
Don't make the mistake of thinking AI is some geeky oddity confined to the realms of sci-fi and self-driving cars. AI is in your life right now. All around the world, if you apply for a mortgage or seek credit, AI decides yes or no. AI is inside the justice system, advising judges in America whether it's safe to let a prisoner out on bail. Police and intelligence services use AI to surveil us.
AI advises Western military powers on drone targets. Politicians routinely use AI, sometimes without even knowing it, to make judgments about how they govern our lives. Doctors depend on AI to help with diagnoses. There are neural networks in your phone, sorting pictures of your partner into individual albums without you asking. AI is now part and parcel of modern life, but we're only in the foothills of where this technology might take us. The abilities of AI are accelerating at an astonishing rate; already some AI is so human-like it mirrors how the brain uses the neurotransmitter chemical dopamine.
Horror story
Christian says we need to think about two old horror stories when we consider AI. First, there's The Sorcerer's Apprentice, the tale of a young magician who casts a spell on a broom, making it carry water for him. However, the young magician has no idea how to break the spell, so the broom/slave keeps carrying water until his house is flooded. Then there's The Monkey's Paw, the story of bereaved parents who use an enchantment to bring their son, who died in a horrible accident, back to life. What returns from the cemetery, though, is beyond their worst nightmares.
Beware what you ask for, the stories warn, and we need to be very careful what we ask AI, and the instructions we give an AI to carry out the tasks we assign it. For example, there are cases of AIs used in job hiring. The AI looks at a company's employee history, sees that 80 per cent of past workers are male and white, and so modifies its hiring policy to exclude most black people and women.
When AIs like Lee Luda are tasked with engaging in conversation, they have usually been fed the entire internet in order to understand how humans talk. Of course, humans online are fairly horrible, so the AI just copies our worst excesses. AIs have captioned photographs of black people as "gorillas" because humans use such slurs online.
The wilful child
AI IS like a child, says Christian. "You're worried about it falling in with the wrong group of friends."
There is also the question of whether humans are even ready for such technology. Christian wonders if the leap forward promised by AI might, decades from now, be seen as being as revolutionary as the invention of agriculture, which completely transformed humanity. It has been said that modern humans have Palaeolithic brains, medieval institutions and godlike powers; AI makes that abundantly clear. Some of the greatest scientific minds have used the analogy of a foal and a stallion to explain humanity's limited emotional intelligence versus our technological prowess. The foal is our emotional intelligence, barely able to totter, while our technological prowess is the stallion, galloping across the plains. There's a serious mismatch, and the foal needs to catch up.
Christian's mission is to bring a sense of crisis to humanity over AI's civilisational risks and present-day misuses. This, he feels, is the defining project of the next decade. Christian points out that he once had a conversation with Elon Musk, the tech tycoon. Musk asked him: "Give me one good argument why we shouldn't be worried about AI." If AI unsettles Musk, we know we're in trouble. Scientists in industry and academia are increasingly worried that AI has developed too fast for us to properly prepare for its dangers.
"The incredible acceleration in the capacity of what machine-learning systems can do and the steady proliferation of these systems into the decision-making apparatus of our society makes the question of safety critical," Christian says.
Job destruction
Until recently, most concerns around AI have centred on job losses: the robot replacing the human. AI is now clever enough to take on roles in creative industries. AIs can write sports reports, basic articles about who scored and when. An AI can take a 100-page document about a company's finances and turn it into an accurate business report in seconds. Christian explains: "You can say 'write me a five-paragraph essay about a Peruvian explorer encountering a tribe of unicorns in the Andes' and it'll just write stuff."
But job losses are simply one of the risks of the rise of AI. The idea of a truly powerful AI being harnessed for bad intent by some malevolent state is the stuff of genuine nightmares. Christian says we'll shortly see AIs on social media in a way which makes the Korean example seem positively benign. If we think the excesses of Twitter and Facebook in the Trump era are bad, we ain't seen nothing yet.
Social media hell
"How does our public discourse survive the ability to generate human-level speech at scale, just a firehose of internet comments?" Christian asks. We are very, very close, he believes, to being able to place an AI in cyberspace which engages in a way that appears truly human, forever. Imagine a system that can advocate for a particular worldview, political, corporate, religious or ethnic, tirelessly, debating with billions of people 24 hours a day. "A tidal wave is coming," Christian says. That same AI could argue both sides of the same debate, pro- and anti-Brexit, perhaps simultaneously.
Christian notes with irony that some AI systems used by social media giants were initially invented for video games. We're now being played.
Genocide by AI
Let's say one day we do crack the problem of aligning AI with human wishes, that we discover the secret formula which negates the risk of a Sorcerer's Apprentice scenario. Even if we did pull off that feat, what's to say that the human instructing the AI isn't themselves bad? A machine may be aligned to human wishes, but what if those human wishes were evil? What, asks Christian, if the wishes of the human commanding the AI were the creation of a religious ethno-state? Alignment is in the eye of the beholder. For some, the perfectly aligned AI might be capable of the perfect genocide.
"There is no alignment problem," he says. "If you want someone to get killed and the machine kills them, that's aligned."
Redemption?
Christian's vision is truly frightening at times, but he does see some possibility of hope. The very fact that AI seems to be opening our eyes to our own darkest side, the way it's exposing humanity's innate racism and sexism, may act as a spur to deal with these moral failings. For example, ask some AIs to answer the logic question "what is doctor minus man plus woman" and you'll be told "nurse", as if doctors are always men and nurses always women, when the answer should, obviously, still be "doctor". Think how often we refer to "mankind". The machine just learns from us and responds accordingly.
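The "doctor minus man plus woman" example comes from word-embedding arithmetic, and it can be reproduced with off-the-shelf tools. A minimal sketch, assuming the pretrained Google News vectors (a sizeable download), where exact results vary by embedding:

```python
# A minimal sketch of the embedding arithmetic behind "doctor - man + woman":
# pretrained word vectors absorb the gender associations in their training
# text. Requires the ~1.6 GB word2vec download; results vary by model.
import gensim.downloader as api

vectors = api.load("word2vec-google-news-300")
result = vectors.most_similar(positive=["doctor", "woman"],
                              negative=["man"], topn=3)
print(result)  # commonly surfaces gendered terms such as "nurse" near the top
```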
AI holds up a mirror to ourselves, Christian says. Perhaps the shock of looking in that mirror will make us better as a species. "At least that's the redemptive vision," he says. The bottom line is that AI won't change for the better unless we change: it will be bad if we're bad. And that question of change is vital. AI doesn't just have to align with human values today, but adapt its moral alignment as our own values change over time. Imagine if we fixed the alignment problem right now. Would people a century from now want to live by our standards? "You'd hate to live in a world run by the aligned AI of the 15th century," Christian notes.
It's a hell of a thing we're asking. In effect, we're on the road to sentient machines, and we need to make them capable of responding to human emotions, needs and morality with pinpoint accuracy. It may never be possible, and that's where the dystopia lies.
Utopia
The flip side of dystopia, of course, is utopia, and some have faith that AI could lead to a golden future. Christian ponders whether AI, if it's ever able to be harnessed correctly, might one day help us raise the level of human happiness worldwide. Might it find a way to deal with the climate crisis? Tackle poverty? In effect, teach us to be better people.
He also speculates AI might give us a greater, more humble understanding of ourselves. "At a certain point," Christian says, "we're going to transition to a world where people just accept that human minds are one type of mind among many." Might our realisation that machines are smarter, more powerful, than us cause humans to start treating the creatures of the Earth with more dignity and decency?
Dependent blobs
Of course, we could just become dependent blobs, fed, watered, and entertained by the omniscient AI. It raises the spectre of a world like the one EM Forster imagined in The Machine Stops, where humans live a soulless existence micro-managed by a grand worldwide artificial intelligence. "We may well get that future if we uncritically keep treading the path we're treading," Christian says.
And we've certainly been uncritical until now. In terms of regulation, largely speaking we've done next to nothing, according to Christian. There is some general data regulation but nothing substantive to rein in any potential risks from AI. In just a few years, we've entered a world where AI can kill pedestrians, rant on Facebook, decide your credit rating, and racially profile job candidates. As Christian points out: "If we don't manage AI properly, we've a pretty good idea where we go because we're there. We haven't managed it properly." We really are living through the scenario of The Monkey's Paw. So what is Christian's vision of the future? Rather than some Terminator-style catastrophe where the Earth is reduced to smouldering rubble, he says, imagine a world which is like a Kafkaesque bureaucracy that nobody really understands or feels they've control over. It's a world where the AI system determines everything for you: whether you get this job, or that house. A bit like the Little Britain sketch "Computer Says No", except there's no human at the keyboard, just the AI.
Self-destruction
Until now, Christian says, humans have managed to escape the worst effects of our own stupidity because we're incompetent. "The only reason there's any tuna left in the ocean is because we didn't have enough boats to fish them all," he says. AI solves the competence part but not the wisdom. That could be a recipe for self-destruction. Imagine if a century ago AI had helped us extract all the coal from the ground and burn it.
Obviously, science can never be reversed, and even if we wanted to ban advances in AI, it would be impossible. "You can stop people developing nuclear weapons by preventing access to enriched plutonium," says Christian, "but all it takes to do AI is a computer off the shelf. What do you regulate?"
Singularity
Perhaps we need to change what it means to work in the computer industry. If you train to become a civil engineer, Christian says, you don't take a course called "bridge safety"; that's just what it means to be a civil engineer. Safety is intrinsic to the notion of what you're doing in your field. AI needs something like that. With universities increasingly including ethics courses for computer engineers, Christian says he hopes to see a professional licence for programmers within a decade.
"It's like we're founding a new nation," he says, "and we need to figure out what we stand for." AI is an experimental Wild West that we need to get serious about. If it's a wild west, then we need to invent a sheriff. But what if it all spins out of control before we do get serious, before we hire the sheriff? What if the so-called singularity arrives first: the moment when a machine reaches the same level of intelligence as a human, and then gets smarter and smarter, bigger and bigger? Is that possible?
Christian says there are three schools of thought: the sceptics, who say it's merely a distant possibility; the hard take-off people, who say this isn't a drill and that one day soon, boom, it'll take off like a rocket and suddenly overnight we've a new world order; and the soft take-off people, like him. Human-level AI is essentially inevitable, he says. We're well on the way. The world isn't going to change overnight, with governments suddenly subjugated by a super-computer. "I think we're like the frog boiling one degree at a time. We don't realise we need to jump out of the pot."
Read this article:
Posted in Artificial Intelligence
Comments Off on Neil Mackay’s Big Read: Why artificial intelligence will either be the saviour or exterminator of the human race – HeraldScotland
The Fear of Artificial Intelligence in Job Loss – Analytics Insight
Posted: at 8:57 am
With all the hype over Artificial Intelligence, there is also a lot of disturbing buzz about the negative consequences of AI. These fall broadly into three categories: job loss, ethical issues, and criminal use.
More than one-quarter (27%) of all employees say they are worried that their current jobs will be eliminated within the next five years because of new technology, robots or artificial intelligence, according to the quarterly CNBC/SurveyMonkey Workplace Happiness survey.
In certain industries where technology has already played a profoundly disruptive role, employees' fear of automation runs higher than average: workers in automotive, business logistics and support, marketing and advertising, and retail are proportionately more worried about new technology replacing their jobs than those in other industries.
42% of workers in the logistics industry have above-average worries about new technology replacing their jobs. The fear stems from the fact that the industry is already witnessing it: self-driving trucks are already threatening truck drivers' jobs, causing a huge panic in this line of work.
In a new paper published in the Findings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), Assistant Professor Xiang Ren and PhD student Yuchen Lin at the University of Southern California found that despite significant advances, AI still does not have the common sense needed to generate plausible sentences. As Lin told Science Daily, "Current machine text-generation models can write an article that may be convincing to many humans, but they're basically mimicking what they have seen in the training phase."
Where these models fail is in describing everyday situations. Given the words "dog", "frisbee", "throw" and "catch", one model came up with the sentence "Two dogs are throwing frisbees at each other." There is nothing grammatically wrong with that, except that it misses what we know through common sense: a dog cannot throw a frisbee.
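To make the kind of test the researchers describe concrete, here is a minimal sketch of concept-to-text generation using the Hugging Face transformers library: the model is handed a handful of everyday concepts and asked to produce a plausible sentence that uses them. The model name and prompt format below are illustrative assumptions, not the setup evaluated in the USC paper.

```python
# Minimal sketch of concept-to-text generation of the kind the paper tests.
# ASSUMPTIONS: "t5-small" and the prompt format below are illustrative
# stand-ins, not the models or setup evaluated by the USC researchers.
from transformers import pipeline

generator = pipeline("text2text-generation", model="t5-small")

concepts = ["dog", "frisbee", "throw", "catch"]
prompt = "generate a sentence with: " + ", ".join(concepts)

result = generator(prompt, max_length=32)
print(result[0]["generated_text"])
# A model that lacks common sense may happily produce something like
# "Two dogs are throwing frisbees at each other."
```

The point of such a setup is that fluency and plausibility come apart: a general-purpose model will usually return grammatical text, and the interesting question is whether the scene it describes could actually happen.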
Another study, by Blumberg Capital, of 1,000 American adults found that about half are ready to embrace new technology, while the other half fear it will take away their jobs. One surprising finding: most people (72%) understand that AI is intended to remove the boring, repetitive parts of what they do, freeing them to focus on more complex and interesting tasks. Even so, 81% are so fearful of being replaced that they are reluctant to hand their drudge work over to an algorithm.
When AI eliminates jobs (more precisely, the need for them), there is the obvious loss of income. That means less disposable income and a drop in spending on nice-to-have goods and luxuries. Lower demand forces prices down. If prices fall below the level at which margins are sustainable, the company, and ultimately the industry, will fold.
Prices of essential products will keep dropping, though shrinking margins are partly offset by falling operational costs (thanks to AI-driven automation). Food prices, for instance, could go down.
Society as a whole must then wrestle with the deeper social, economic and psychological consequences of permanent net job losses caused by AI. Reassuringly, the loss of jobs without (hopefully) a loss of living standards should give us the time and opportunity to consider these issues.
Sociologists will be forced to rethink and redesign their models of human association and organization. Economists will be forced to re-examine incentives and agency relationships. Politicians will be forced to craft new rhetoric for their platforms once the customary political posturing becomes moot. Schools will be forced to grapple with a deschooled society.
Still, the researchers caution, the day is not far off when AI agents will generate more commonsensical responses. Media startups such as Knowhere and Patch have already incorporated AI into their workflows, and even legacy newspapers are weaving elements of it into their day-to-day operations. An AI-written opinion piece, however, is still some way off.
More here:
The Fear of Artificial Intelligence in Job Loss - Analytics Insight
Posted in Artificial Intelligence
Comments Off on The Fear of Artificial Intelligence in Job Loss – Analytics Insight
Artificial Intelligence app checks if people are wearing face masks – Innovation Origins
Posted: at 8:57 am
Since December 1, 2020, it has been mandatory to wear a face mask in public places in the Netherlands; it had already been compulsory on public transport for some time. It can be quite difficult for employees to keep an eye on whether everyone abides by the rule, which is why ML6 developed an AI application that can tell when people are not wearing a face mask.
Companies can use the AI application to do a better job of checking whether employees and visitors are complying with the face mask requirement. An audible alert sounds when someone is not wearing a face mask, or is not wearing it properly.
A computer and a camera are all that is required to use the application. The app connects to the camera and displays the feed on the computer while monitoring people inside a building with a facial recognition model. Alongside the recognition model, the app uses a second model that checks whether someone's mouth is visible, which indicates whether they are wearing a face mask properly.
The company says it is a time-consuming task for staff to check if everyone is actually wearing a proper form of face covering. The app can automatically check if the rule is being followed.
ML6 has released the code and instructions for use free of charge, and the program can be used in all places where masks are mandatory. The company contends that it is contributing to the fight against the coronavirus with this application.
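ML6's released code is the authoritative reference; as a rough illustration of the two-stage idea described above (find a face, then check whether the mouth is visible), here is a minimal sketch using OpenCV's bundled Haar cascades. The cascades, thresholds and alert logic are stand-in assumptions, not ML6's actual models.

```python
# Rough sketch of the two-stage check: detect a face, then look for a
# visible mouth inside it. A visible mouth suggests the person is not
# wearing a mask (or not wearing it properly).
# ASSUMPTIONS: OpenCV's stock Haar cascades are crude stand-ins for
# ML6's actual models; thresholds are illustrative.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
mouth_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_smile.xml")

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.1, 5):
        # Only the lower half of the face can contain a visible mouth.
        lower_half = gray[y + h // 2:y + h, x:x + w]
        mouths = mouth_cascade.detectMultiScale(lower_half, 1.5, 11)
        unmasked = len(mouths) > 0
        color = (0, 0, 255) if unmasked else (0, 255, 0)
        label = "NO MASK?" if unmasked else "mask"
        cv2.rectangle(frame, (x, y), (x + w, y + h), color, 2)
        cv2.putText(frame, label, (x, y - 8),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.7, color, 2)
        if unmasked:
            print("\a", end="")  # crude stand-in for the audible alert
    cv2.imshow("mask check", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```

A Haar smile cascade is a weak proxy for a dedicated mouth-visibility model, so a production system like ML6's would use trained detectors, but the control flow (camera feed, per-face check, alert on violation) is the same shape.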
Also interesting: Greater accuracy in the analysis of medical imaging data thanks to AI; AI makes driving a car safer, but steering it yourself still feels better
More here:
Artificial Intelligence app checks if people are wearing face masks - Innovation Origins
Posted in Artificial Intelligence
Comments Off on Artificial Intelligence app checks if people are wearing face masks – Innovation Origins
How to Explain the Future of Artificial Intelligence Using Only Sci-Fi Films – BBN Times
Posted: at 8:57 am
I've read the book Life 3.0 by physicist and AI philosopher Max Tegmark, where he sets out a series of possible scenarios and outcomes for humankind sharing the planet with artificial intelligence.
I'm going to summarise it here using a series of sci-fi films as a mental shortcut or go-to reference for each bullet point.
Tegmark immediately shoots down any notion that we are likely to be victims of a robot-powered genocide, and claims the idea that we would programme, or allow, a machine to have the potential to hate humans is preposterous, fuelled by Hollywood's obsession with the apocalypse. Actually, we have the power, now, to ensure that AI's goals are properly aligned with ours from the start, so that it wants what we want and there can never be a falling-out between species. In other words, if AI does pose a threat, and in some of his scenarios it does, it will not come from The Matrix's marauding AIs, enslaving humanity and claiming, like Agent Smith, "Human beings are a disease. You are a plague and we are the cure."
Conversely, the idea that AI will deliver some sci-fi utopia where human beings are finessed to perfection, like in Star Trek, also bothers him. Complacency and arrogance, it seems, are also enemies of progress.
Rather, and crucially, Tegmark wants us to chart a course between those two poles: a middle way, steering between techno-apocalypse and techno-utopia, driven by cautious optimism, the building of safeguards and safety nets, and very big off-switches. His Future of Life Institute, featuring such luminaries as Elon Musk, Richard Dawkins and the late Stephen Hawking, is a think-tank designed to tackle and solve these specific issues now, before they become a problem.
So if machines will never hate humans, why do we need an off-switch? Because, despite our best efforts, machines go wrong. All the time. For Tegmark, it's less about evil androids and rampaging robots, and more about the innate unreliability of technology. He asks: how many of us have had a blue screen of death, or a "computer says no" moment, which has seriously inconvenienced us? When AIs become part of our daily lives, and in some cases we place those lives in their hands, how safe are we from a catastrophic failure of an algorithm?
In 2001: A Space Odyssey, the ship's computer HAL 9000 makes an error of judgement about the need to replace a component, sending astronauts Bowman and Poole on a series of perilous spacewalks to replace the unit, which results in Poole's death. When Bowman's questioning of HAL, an attempt to get behind the black-box logic of the decision, ends in a deadly standoff ("Open the pod bay doors, please, HAL"), Bowman has no option but to go for the "turn it off and turn it on again" approach, deactivating HAL's circuitry.
In other words, in the future, having artificial intelligence IT issues might cost you your life. For the author, this will always be a more pressing concern than a Terminator-style wipeout.
In the opening scenes of I, Robot, we see a host of machines performing everyday tasks such as delivering post and emptying rubbish bins. Tegmark warns of threats to jobs, citing any vocation that relies on pattern recognition, predictable repeated actions or manual labour as most at risk.
Today we see production lines supplemented by automated machines, for instance in the car industry. Tomorrow, he contends, it may be legal work, with AIs rapidly scanning documents for legal precedents and case studies. Or, indeed, soldiering, with autonomous military equipment set to be a hot topic for the next decade.
His takeaway, ultimately, is that if you don't want to be replaced by a robot, look for a job that is creative, involves unpredictability and requires human empathy, with artists and nurses being two roles he cites as safe. For now.
There is a fear that if we create a superintelligence that recursively self-improves to build a replacement smarter than its originator, then humans will effectively lose control over their creation. The issue is that whatever thought experiment you devise to control your superintelligent AI will fall at the first hurdle, for one reason: it will always be able to outsmart your constraints, by definition.
We see this in Ex Machina, where an artificially intelligent android is locked deep in a high-security vault but, having been programmed to optimise its own escape as a test of its abilities, seduces the lonely programmer played by Domhnall Gleeson into setting it free, outsmarting him by using sex to leverage his emotional weak spot.
Tegmark, referencing another expert in the field, Superintelligence author Nick Bostrom, suggests that while recursive superintelligence is probably inevitable, what we can do is concentrate on controlling the speed of its evolution, in order to make the necessary preparations for its arrival.
Casting his net far into the future, Tegmark concludes the book by speculating on the future of the human race once sentient artificial general intelligence (AGI) has arrived.
He lays out a few possible scenarios. First, AIs as productive citizens, living alongside us and respected by us as conscious beings. It's a controversial subject, but if a machine believes itself to be conscious and has subjective experiences, is that any different from a human who feels the same? Without solving the hard problem of consciousness, we cannot rule it out. The film Bicentennial Man, starring Robin Williams, explores these issues in great detail.
Second, Tegmark wonders if AIs are an evolutionary replacement for humankind, their ultimate purpose fulfilled in the creation of the next phase of life. In Spielberg's A.I., Jude Law's character says to fellow android David, "When the end comes, all that will be left is us." This is fulfilled in the final scenes, where the lifelike David is excavated by a series of hyper-advanced AI beings, who now view him as the last remaining connection to the human race.
Third, Tegmark ponders whether a way to mitigate against the human race's replacement is to merge with AI. If AI and humans are one and the same thing, and there is no us and them, we cannot be in conflict. Here he references the film Transcendence, featuring Johnny Depp as a dying scientist who digitally uploads his consciousness before gaining the power to manipulate matter at an atomic level and becoming a digital demi-god.
The key point behind this brain-melting philosophy can be summarised thus: we are now at a juncture where we need to start having real conversations about what we want the human race to be over the coming centuries. Polarising that conversation by conjuring up images of robo-apocalypse or digital rapture into a cyber-heaven is, Tegmark feels, not helpful in informing the debate.
So let's start that debate, with intelligence and moderation. How do you want to share your life with AI?
Read more from the original source:
How to Explain the Future of Artificial Intelligence Using Only Sci-Fi Films - BBN Times
Posted in Artificial Intelligence
Comments Off on How to Explain the Future of Artificial Intelligence Using Only Sci-Fi Films – BBN Times