The Prometheus League
Breaking News and Updates
Category Archives: Artificial Intelligence
Evolving Relationship Between Artificial Intelligence and Big Data – ReadWrite
Posted: January 18, 2020 at 11:20 am
This article explores the evolving relationship between big data and artificial intelligence. The growing popularity of these technologies makes it possible to deliver a more engaging audience experience and encourages newcomers to come up with a stronger plan.
AI and big data help you transform an idea into substance. They let you make full use of visuals, graphs, and multimedia to give your target audience a great experience. According to Markets and Markets, the worldwide market for AI in accounting is expected to grow from $666 million in 2019 to $4,791 million by 2024.
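For context, that forecast implies a compound annual growth rate of roughly 48% over the five-year span. A quick back-of-envelope sketch of the arithmetic (the dollar figures are the ones quoted above from the Markets and Markets forecast; the formula is the standard CAGR definition):

```python
# Rough CAGR implied by the Markets and Markets forecast cited above:
# $666M (2019) growing to $4,791M (2024), i.e. over five years.
start, end, years = 666.0, 4791.0, 5

cagr = (end / start) ** (1 / years) - 1
print(f"Implied compound annual growth rate: {cagr:.1%}")  # -> 48.4%
```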
The critical component of delivering an outstanding pitch is going a step further with a credible plan for assuring success. Big data and artificial intelligence help you build an effective plan across multiple industries, one that speaks directly to investors and your target audience, covers the essential aspects, and presents your idea in a nutshell.
According to Techjury, the big data analytics market is set to reach $103 billion by 2023, and in 2019 the big data market was expected to grow by 20%.
From transformation to phenomenal growth, AI and big data give you access to relevant information. Big data holds data from multiple sources, such as social media platforms and search data, which can be structured or unstructured. Artificial intelligence, by contrast, is intelligence demonstrated by machines, as opposed to the natural intelligence displayed by humans.
The first thing to do is to identify the problem, that is, to know what prevents people from reaching their goal. Whatever product or service you use to capture your target audience's attention, it must solve a problem for potential customers. The problem can be anything from simple to complicated, as long as customers need a solution for it.
For every problem, there is a solution. Once you have understood the problem and are willing to bring change, you can solve it in well-defined ways. Artificial intelligence is a true reflection of technological advancement, and with big data you can make full use of vital information by extracting exactly what you need.
One can arrive at accurate solutions using AI and big data. If appropriately coded, AI delivers a lower error rate than humans, because it makes decisions based on data and a defined set of algorithms, which decreases the chance of error. Used together, big data and AI can really help you solve the problem by addressing the potential issues and producing an effective solution.
To solve any kind of problem, one must know the potential market. Divide your target market into the segments from which you expect a positive response; this helps you focus on what you need to do. These advanced technologies have a strong foundation and outstanding capabilities for capturing the potential market, and learning to apply them yields better results in transforming the overall customer experience.
Capturing the target audience's attention is as important as solving the problem. Once you know how big your potential market is and what your target audience wants, you can use these advanced technologies to pitch and get the desired result. That is only possible if you work your segment creatively and build your own identity for targeting customers while working on your business plan.
Every industry has its own competition with a particular set of competitors. One must invest in something that can really help people, bring them the best solution with beneficial results, and stand out in the real competition.
To stay in the market and promote your service, one must invest in providing customers with alternative solutions. These AI solutions can help you increase your customer base. Give your customers a reason to choose your solution over someone else's. That reason will be the identity you create in the market. Build a unique solution that helps you focus on growing your business and staying ahead of the competition.
Mark your presence in the market by accomplishing the specific goals you set out to achieve. Make your business a reality by setting realistic goals and notable milestones, and by performing against them, to achieve greater success. The core of running a smooth business and getting all that you desire is accomplishing the milestones you set.
Accomplishing set milestones can really help you get the desired results and gain positive support from a trusted and reliable model. By doing this, you can strategize your small business plan around changing times and market demand, and gain an ideal position in the market with better results and in-depth data.
Achieving a milestone can be a tough task. However, with AI and big data, it has become possible to get predictive analysis for better results and a position of control. Consider all the options that make you stand out in the competition and help you grow your business.
AI can help you analyze consumer data patterns and, with the help of big data, predict what users would be willing to pay for. Both technologies are compelling and provide useful results that can boost your sales and increase business revenue.
Nitin Garg is the CEO and Co-founder of BR Softech Mobile App Development Company. He likes to share his opinions on the IT industry via blogs. His interest is in writing about the latest and most advanced IT technologies, including IoT, VR & AR app development, and web and app development services.
Here is the original post:
Evolving Relationship Between Artificial Intelligence and Big Data - ReadWrite
Artificial intelligence jobs on the rise, along with everything else AI – ZDNet
Posted: December 25, 2019 at 6:51 am
AI jobs are on the upswing, as are the capabilities of AI systems. The speed of deployments has also increased exponentially. It's now possible to train an image-processing algorithm in about a minute -- something that took hours just a couple of years ago.
These are among the key metrics of AI tracked in the latest release of theAI Index, an annual data update from Stanford University'sHuman-Centered Artificial Intelligence Institutepublished in partnership with McKinsey Global Institute. The index tracks AI growth across a range of metrics, from papers published to patents granted to employment numbers.
Here are some key measures extracted from the 290-page index:
AI conference attendance: One important metric is conference attendance, for starters. That's way up. Attendance at AI conferences continues to increase significantly. In 2019, the largest, NeurIPS, expects 13,500 attendees, up 41% over 2018 and over 800% relative to 2012. Even conferences such as AAAI and CVPR are seeing annual attendance growth around 30%.
AI jobs: Another key metric is the number of AI-related jobs opening up. This is also on the upswing, the index shows. Looking at Indeed postings between 2015 and October 2019, the share of AI jobs in the US increased five-fold since 2010, with the fraction rising from 0.26% of total jobs posted to 1.32% in October 2019. While this is still a small fraction of total jobs, it's worth mentioning that these are only technology-related positions working directly in AI development, and there is likely an increasingly large share of jobs being enhanced or re-ordered by AI.
Among AI technology positions, the leading category is job postings mentioning "machine learning" (58% of AI jobs), followed by artificial intelligence (24%), deep learning (9%), and natural language processing (8%). Deep learning is the fastest-growing job category, growing 12-fold between 2015 and 2018; artificial intelligence grew five-fold, machine learning four-fold, and natural language processing two-fold.
Compute capacity: Moore's Law has gone into hyperdrive, the AI Index shows, with substantial progress in ramping up the computing capacity required to run AI. Prior to 2012, AI results closely tracked Moore's Law, with compute doubling every two years. Post-2012, compute has been doubling every 3.4 months -- a mind-boggling net increase of 300,000x. By contrast, the typical two-year doubling period that previously characterized Moore's Law would have yielded only about a 7x increase over the same span, the index's authors point out.
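To make the comparison concrete, here is a small sketch of the arithmetic behind those two figures. The 3.4-month and two-year doubling periods come from the index as quoted above; the roughly 62-month window is an assumption chosen to reproduce the 300,000x figure, since the excerpt does not state the exact span (the index's 7x figure corresponds to a slightly longer window).

```python
# Growth factor after `months` of exponential growth with a given doubling period.
def growth_factor(months: float, doubling_period_months: float) -> float:
    return 2 ** (months / doubling_period_months)

WINDOW_MONTHS = 62  # assumed span (~5 years), chosen to reproduce the index's 300,000x figure

print(f"{growth_factor(WINDOW_MONTHS, 3.4):,.0f}x")  # ~309,000x with a 3.4-month doubling period
print(f"{growth_factor(WINDOW_MONTHS, 24):.1f}x")    # ~6x with a Moore's-Law-style two-year doubling
```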
Training time: The amount of time it takes to train AI algorithms has dropped dramatically -- training a large image classification system on cloud infrastructure now takes roughly 1/180th of the time it took just two years ago. Two years ago, it took three hours to train such a system, but by July 2019, that time had shrunk to 88 seconds.
Commercial machine translation: One indicator of where AI hits the ground running is machine translation -- for example, English to Chinese. The number of commercially available systems with pre-trained models and public APIs has grown rapidly, the index notes, from eight in 2017 to over 24 in 2019. Increasingly, machine-translation systems provide a full range of customization options: pre-trained generic models, automatic domain adaptation to build models and better engines with their own data, and custom terminology support.
Computer vision: Another benchmark is accuracy of image recognition. The index tracked reporting through ImageNet, a public dataset of more than 14 million images created to address the issue of scarcity of training data in the field of computer vision. In the latest reporting, the accuracy of image recognition by systems has reached about 85%, up from about 62% in 2013.
Natural language processing: AI systems keep getting smarter, to the point that they are surpassing low-level human responsiveness in natural language processing. As a result, there are also stronger standards for benchmarking AI implementations. GLUE, the General Language Understanding Evaluation benchmark, was only released in May 2018, intended to measure AI performance on text-processing capabilities. Submitted systems crossed the threshold of non-expert human performance in June 2019, the index notes. In fact, the performance of AI systems has been so dramatic that industry leaders had to release a higher-level benchmark, SuperGLUE, "so they could test performance after some systems surpassed human performance on GLUE."
Continued here:
Artificial intelligence jobs on the rise, along with everything else AI - ZDNet
Why Cognitive Technology May Be A Better Term Than Artificial Intelligence – Forbes
Posted: at 6:51 am
One of the challenges for those tracking the artificial intelligence industry is that, surprisingly, there's no accepted, standard definition of what artificial intelligence really is. AI luminaries all have slightly different definitions of what AI is. Rodney Brooks says that artificial intelligence doesn't mean one thing; it's a collection of practices and pieces that people put together. Of course, that's not particularly settling for companies that need to understand the breadth of what AI technologies are and how to apply them to their specific needs.
In general, most people would agree that the fundamental goals of AI are to enable machines to have cognition, perception, and decision-making capabilities that previously only humans or other intelligent creatures have. Max Tegmark simply defines AI as intelligence that is not biological. Simple enough, but we don't fully understand what biological intelligence itself means, and so trying to build it artificially is a challenge.
At the most abstract level, AI is machine behavior and functions that mimic the intelligence and behavior of humans. Specifically, this usually refers to what we come to think of as learning, problem solving, understanding and interacting with the real-world environment, and conversations and linguistic communication. However, the specifics matter, especially when we're trying to apply that intelligence to solve very specific problems businesses, organizations, and individuals have.
Saying AI but meaning something else
There is certainly a subset of those pursuing AI technologies with the goal of solving the ultimate problem: creating artificial general intelligence (AGI) that can handle any problem, situation, and thought process that a human can. AGI is certainly the goal for much of the AI research being done in academic and lab settings, as it gets to the heart of answering the basic question of whether intelligence is something only biological entities can have. But the majority of those who are talking about AI in the market today are not talking about AGI or solving these fundamental questions of intelligence. Rather, they are looking at applying very specific subsets of AI to narrow problem areas. This is the classic Broad / Narrow (Strong / Weak) AI discussion.
Since no one has successfully built an AGI solution, it follows that all current AI solutions are narrow. While there certainly are a few narrow AI solutions that aim to solve broader questions of intelligence, the vast majority of narrow AI solutions are not trying to achieve anything greater than the specific problem the technology is being applied to. What we mean to say here is that we're not doing narrow AI for the sake of solving a general AI problem, but rather narrow AI for the sake of narrow AI. It's not going to get any broader for those particular organizations. In fact, it should be said that many enterprises don't really care much about AGI, and the goal of AI for those organizations is not AGI.
If that's the case, then it seems that the industry's perception of what AI is and where it is heading differs from what many in research or academia think. What interests enterprises most about AI is not that it's solving questions of general intelligence, but rather that there are specific things that humans have been doing in the organization that they would now like machines to do. The range of those tasks differs depending on the organization and the sort of problems they are trying to solve. If this is the case, then why bother with an ill-defined term in which the original definition and goals are diverging rapidly from what is actually being put into practice?
What are cognitive technologies?
Perhaps a better term for narrow AI being applied for the sole sake of those narrow applications is cognitive technology. Rather than trying to build an artificial intelligence, enterprises are leveraging cognitive technologies to automate and enable a wide range of problem areas that require some aspect of cognition. Generally, you can group these aspects of cognition into three P categories, borrowed from the autonomous vehicles industry:
From this perspective, it's clear that cognitive technologies are indeed a subset of artificial intelligence technologies, the main difference being that AI can be applied both toward the goals of AGI and toward narrowly focused applications. On the other hand, using the term cognitive technology instead of AI is an acceptance of the fact that the technology being applied borrows from AI capabilities but doesn't have ambitions of being anything other than technology applied to a narrow, specific task.
Surviving the next AI winter
The mood in the AI industry is noticeably shifting. Marketing hype, venture capital dollars, and government interest are all helping to push demand for AI skills and technology to its limits. We are still very far away from the end vision of AGI. Companies are quickly realizing the limits of AI technology, and we risk industry backlash as enterprises push back on what is being overpromised and underdelivered, just as we experienced in the first AI Winter. The big concern is that interest will cool too much and AI investment and research will again slow, leading to another AI Winter. However, perhaps the issue never has been with the term Artificial Intelligence. AI has always been a lofty goal upon which to set the sights of academic research and interest, much like building settlements on Mars or interstellar travel. However, just as the Space Race has resulted in technologies with broad adoption today, so too will the AI Quest result in cognitive technologies with broad adoption, even if we never achieve the goals of AGI.
Continue reading here:
Why Cognitive Technology May Be A Better Term Than Artificial Intelligence - Forbes
Artificial Intelligence Is Rushing Into Patient Care – And Could Raise Risks – Scientific American
Posted: at 6:51 am
Health products powered by artificial intelligence, or AI, are streaming into our lives, from virtual doctor apps to wearable sensors and drugstore chatbots.
IBM boasted that its AI could outthink cancer. Others say computer systems that read X-rays will make radiologists obsolete.
There's nothing that I've seen in my 30-plus years studying medicine that could be as impactful and transformative as AI, said Eric Topol, a cardiologist and executive vice president of Scripps Research in La Jolla, Calif. AI can help doctors interpret MRIs of the heart, CT scans of the head and photographs of the back of the eye, and could potentially take over many mundane medical chores, freeing doctors to spend more time talking to patients, Topol said.
Even the U.S. Food and Drug Administration, which has approved more than 40 AI products in the past five years, says the potential of digital health is nothing short of revolutionary.
Yet many health industry experts fear AI-based products won't be able to match the hype. Many doctors and consumer advocates fear that the tech industry, which lives by the mantra fail fast and fix it later, is putting patients at risk and that regulators aren't doing enough to keep consumers safe.
Early experiments in AI provide reason for caution, said Mildred Cho, a professor of pediatrics at Stanford's Center for Biomedical Ethics.
Systems developed in one hospital often flop when deployed in a different facility, Cho said. Software used in the care of millions of Americans has been shown to discriminate against minorities. And AI systems sometimes learn to make predictions based on factors that have less to do with disease than the brand of MRI machine used, the time a blood test is taken or whether a patient was visited by a chaplain. In one case, AI software incorrectly concluded that people with pneumonia were less likely to die if they had asthma, an error that could have led doctors to deprive asthma patients of the extra care they need.
It's only a matter of time before something like this leads to a serious health problem, said Steven Nissen, chairman of cardiology at the Cleveland Clinic.
Medical AI, which pulled in $1.6 billion in venture capital funding in the third quarter alone, is nearly at the peak of inflated expectations, concluded a July report from the research company Gartner. As the reality gets tested, there will likely be a rough slide into the trough of disillusionment.
That reality check could come in the form of disappointing results when AI products are ushered into the real world. Even Topol, the author of Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again, acknowledges that many AI products are little more than hot air. It's a mixed bag, he said.
Experts such as Bob Kocher, a partner at the venture capital firm Venrock, are more blunt. Most AI products have little evidence to support them, Kocher said. Some risks won't become apparent until an AI system has been used by large numbers of patients. We're going to keep discovering a whole bunch of risks and unintended consequences of using AI on medical data, Kocher said.
None of the AI products sold in the U.S. have been tested in randomized clinical trials, the strongest source of medical evidence, Topol said. The first and only randomized trial of an AI system, which found that colonoscopy with computer-aided diagnosis found more small polyps than standard colonoscopy, was published online in October.
Few tech startups publish their research in peer-reviewed journals, which allow other scientists to scrutinize their work, according to a January article in the European Journal of Clinical Investigation. Such stealth research, described only in press releases or promotional events, often overstates a company's accomplishments.
And although software developers may boast about the accuracy of their AI devices, experts note that AI models are mostly tested on computers, not in hospitals or other medical facilities. Using unproven software may make patients into unwitting guinea pigs, said Ron Li, medical informatics director for AI clinical integration at Stanford Health Care.
AI systems that learn to recognize patterns in data are often described as black boxes because even their developers don't know how they have reached their conclusions. Given that AI is so new, and many of its risks unknown, the field needs careful oversight, said Pilar Ossorio, a professor of law and bioethics at the University of Wisconsin-Madison.
Yet the majority of AI devices dont require FDA approval.
None of the companies that I have invested in are covered by the FDA regulations, Kocher said.
Legislation passed by Congress in 2016 and championed by the tech industry exempts many types of medical software from federal review, including certain fitness apps, electronic health records and tools that help doctors make medical decisions.
There's been little research on whether the 320,000 medical apps now in use actually improve health, according to a report on AI published Dec. 17 by the National Academy of Medicine.
Almost none of the [AI] stuff marketed to patients really works, said Ezekiel Emanuel, professor of medical ethics and health policy in the Perelman School of Medicine at the University of Pennsylvania.
The FDA has long focused its attention on devices that pose the greatest threat to patients. And consumer advocates acknowledge that some devices, such as ones that help people count their daily steps, need less scrutiny than ones that diagnose or treat disease.
Some software developers don't bother to apply for FDA clearance or authorization, even when legally required, according to a 2018 study in Annals of Internal Medicine.
Industry analysts say that AI developers have little interest in conducting expensive and time-consuming trials. It's not the main concern of these firms to submit themselves to rigorous evaluation that would be published in a peer-reviewed journal, said Joachim Roski, a principal at Booz Allen Hamilton, a technology consulting firm, and co-author of the National Academy's report. That's not how the U.S. economy works.
But Oren Etzioni, chief executive officer at the Allen Institute for AI in Seattle, said AI developers have a financial incentive to make sure their medical products are safe.
If failing fast means a whole bunch of people will die, I don't think we want to fail fast, Etzioni said. Nobody is going to be happy, including investors, if people die or are severely hurt.
Relaxed AI Standards At The FDA
The FDA has come under fire in recent years for allowing the sale of dangerous medical devices, which have been linked by the International Consortium of Investigative Journalists to 80,000 deaths and 1.7 million injuries over the past decade.
Many of these devices were cleared for use through a controversial process called the 510(k) pathway, which allows companies to market moderate-risk products with no clinical testing as long as they're deemed similar to existing devices. In 2011, a committee of the National Academy of Medicine concluded the 510(k) process is so fundamentally flawed that the FDA should throw it out and start over.
Instead, the FDA is using the process to greenlight AI devices.
Of the 14 AI products authorized by the FDA in 2017 and 2018, 11 were cleared through the 510(k) process, according to a November article in JAMA. None of these appear to have had new clinical testing, the study said. The FDA cleared an AI device designed to help diagnose liver and lung cancer in 2018 based on its similarity to imaging software approved 20 years earlier. That software had itself been cleared because it was deemed substantially equivalent to products marketed before 1976.
AI products cleared by the FDA today are largely locked, so that their calculations and results will not change after they enter the market, said Bakul Patel, director for digital health at the FDA's Center for Devices and Radiological Health. The FDA has not yet authorized unlocked AI devices, whose results could vary from month to month in ways that developers cannot predict.
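To illustrate the distinction the article is drawing, here is a minimal conceptual sketch, not the FDA's definition and not any vendor's actual implementation: a "locked" model's parameters are frozen before it reaches the market, so identical inputs always give identical outputs, whereas an "unlocked" model keeps updating on new data, so its behavior can drift after deployment. The class names, weights, and update rule below are all invented for the example.

```python
import numpy as np

class LockedModel:
    """Parameters are frozen at clearance time: identical inputs always give identical outputs."""
    def __init__(self, weights):
        self.weights = np.asarray(weights, dtype=float)  # never updated after deployment

    def predict(self, x):
        return float(np.dot(self.weights, x))

class UnlockedModel(LockedModel):
    """Keeps adapting to data seen after deployment, so its behavior can drift over time."""
    def update(self, x, y, lr=0.01):
        error = y - self.predict(x)                       # simple online-learning step
        self.weights = self.weights + lr * error * np.asarray(x, dtype=float)

locked, unlocked = LockedModel([0.5, 1.0]), UnlockedModel([0.5, 1.0])
unlocked.update([1.0, 2.0], y=3.0)  # only the unlocked model changes after market entry
```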
To deal with the flood of AI products, the FDA is testing a radically different approach to digital device regulation, focusing on evaluating companies, not products.
The FDA's pilot pre-certification program, launched in 2017, is designed to reduce the time and cost of market entry for software developers, imposing the least burdensome system possible. FDA officials say they want to keep pace with AI software developers, who update their products much more frequently than makers of traditional devices, such as X-ray machines.
Scott Gottlieb said in 2017 while he was FDA commissioner that government regulators need to make sure its approach to innovative products is efficient and that it fosters, not impedes, innovation.
Under the plan, the FDA would pre-certify companies that demonstrate a culture of quality and organizational excellence, which would allow them to provide less upfront data about devices.
Pre-certified companies could then release devices with a streamlined review or no FDA review at all. Once products are on the market, companies will be responsible for monitoring their own products' safety and reporting back to the FDA. Nine companies have been selected for the pilot: Apple, FitBit, Samsung, Johnson & Johnson, Pear Therapeutics, Phosphorus, Roche, Tidepool and Verily Life Sciences.
High-risk products, such as software used in pacemakers, will still get a comprehensive FDA evaluation. We definitely don't want patients to be hurt, said Patel, who noted that devices cleared through pre-certification can be recalled if needed. There are a lot of guardrails still in place.
But research shows that even low- and moderate-risk devices have been recalled due to serious risks to patients, said Diana Zuckerman, president of the National Center for Health Research. People could be harmed because something wasn't required to be proven accurate or safe before it is widely used.
Johnson & Johnson, for example, has recalled hip implants and surgical mesh.
In a series of letters to the FDA, the American Medical Association and others have questioned the wisdom of allowing companies to monitor their own performance and product safety.
The honor system is not a regulatory regime, said Jesse Ehrenfeld, who chairs the physician group's board of trustees. In an October letter to the FDA, Sens. Elizabeth Warren (D-Mass.), Tina Smith (D-Minn.) and Patty Murray (D-Wash.) questioned the agency's ability to ensure company safety reports are accurate, timely and based on all available information.
When Good Algorithms Go Bad
Some AI devices are more carefully tested than others.
An AI-powered screening tool for diabetic eye disease was studied in 900 patients at 10 primary care offices before being approved in 2018. The manufacturer, IDx Technologies, worked with the FDA for eight years to get the product right, said Michael Abramoff, the company's founder and executive chairman.
The test, sold as IDx-DR, screens patients for diabetic retinopathy, a leading cause of blindness, and refers high-risk patients to eye specialists, who make a definitive diagnosis.
IDx-DR is the first autonomous AI product, one that can make a screening decision without a doctor. The company is now installing it in primary care clinics and grocery stores, where it can be operated by employees with a high school diploma. Abramoff's company has taken the unusual step of buying liability insurance to cover any patient injuries.
Yet some AI-based innovations intended to improve care have had the opposite effect.
A Canadian company, for example, developed AI software to predict a person's risk of Alzheimer's based on their speech. Predictions were more accurate for some patients than others. Difficulty finding the right word may be due to unfamiliarity with English, rather than to cognitive impairment, said co-author Frank Rudzicz, an associate professor of computer science at the University of Toronto.
Doctors at New York's Mount Sinai Hospital hoped AI could help them use chest X-rays to predict which patients were at high risk of pneumonia. Although the system made accurate predictions from X-rays shot at Mount Sinai, the technology flopped when tested on images taken at other hospitals. Eventually, researchers realized the computer had merely learned to tell the difference between that hospital's portable chest X-rays, taken at a patient's bedside, and those taken in the radiology department. Doctors tend to use portable chest X-rays for patients too sick to leave their room, so it's not surprising that these patients had a greater risk of lung infection.
DeepMind, a company owned by Google, has created an AI-based mobile app that can predict which hospitalized patients will develop acute kidney failure up to 48 hours in advance. A blog post on the DeepMind website described the system, used at a London hospital, as a game changer. But the AI system also produced two false alarms for every correct result, according to a July study in Nature. That may explain why patients' kidney function didn't improve, said Saurabh Jha, associate professor of radiology at the Hospital of the University of Pennsylvania. Any benefit from early detection of serious kidney problems may have been diluted by a high rate of overdiagnosis, in which the AI system flagged borderline kidney issues that didn't need treatment, Jha said. Google had no comment in response to Jha's conclusions.
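A quick way to see why that false-alarm rate matters: if the system produces two false alarms for every correct result, as the Nature study reported, then only about one alert in three points to a real case. A small sketch of that arithmetic (the 2:1 ratio is the only input; the rest follows from the standard definition of precision):

```python
# Precision (positive predictive value) implied by "two false alarms
# for every correct result", as reported in the Nature study cited above.
true_alerts_per_correct = 1
false_alarms_per_correct = 2

precision = true_alerts_per_correct / (true_alerts_per_correct + false_alarms_per_correct)
print(f"Roughly {precision:.0%} of alerts point to a real case")  # ~33%
```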
False positives can harm patients by prompting doctors to order unnecessary tests or withhold recommended treatments, Jha said. For example, a doctor worried about a patient's kidneys might stop prescribing ibuprofen, a generally safe pain reliever that poses a small risk to kidney function, in favor of an opioid, which carries a serious risk of addiction.
As these studies show, software with impressive results in a computer lab can founder when tested in real time, Stanford's Cho said. That's because diseases are more complex and the health care system far more dysfunctional than many computer scientists anticipate.
Many AI developers cull electronic health records because they hold huge amounts of detailed data, Cho said. But those developers often aren't aware that they're building atop a deeply broken system. Electronic health records were developed for billing, not patient care, and are filled with mistakes or missing data.
A KHN investigation published in March found sometimes life-threatening errors in patients' medication lists, lab tests and allergies.
In view of the risks involved, doctors need to step in to protect their patients' interests, said Vikas Saini, a cardiologist and president of the nonprofit Lown Institute, which advocates for wider access to health care.
While it is the job of entrepreneurs to think big and take risks, Saini said, it is the job of doctors to protect their patients.
Kaiser Health News (KHN) is a nonprofit news service covering health issues. It is an editorially independent program of the Kaiser Family Foundation that is not affiliated with Kaiser Permanente.
Here is the original post:
Artificial Intelligence Is Rushing Into Patient Care - And Could Raise Risks - Scientific American
What Is The Artificial Intelligence Of Things? When AI Meets IoT – Forbes
Posted: at 6:51 am
Individually, the Internet of Things (IoT) and Artificial Intelligence (AI) are powerful technologies. When you combine AI and IoT, you get AIoT, the artificial intelligence of things. You can think of internet of things devices as the digital nervous system, while artificial intelligence is the brain of a system.
What is AIoT?
To fully understand AIoT, you must start with the internet of things. When things such as wearable devices, refrigerators, digital assistants, sensors and other equipment are connected to the internet, can be recognized by other devices and collect and process data, you have the internet of things. Artificial intelligence is when a system can complete a set of tasks or learn from data in a way that seems intelligent. Therefore, when artificial intelligence is added to the internet of things it means that those devices can analyze data and make decisions and act on that data without involvement by humans.
These are "smart" devices, and they help drive efficiency and effectiveness. The intelligence of AIoT enables data analytics that is then used to optimize a system and generate higher performance and business insights and create data that helps to make better decisions and that the system can learn from.
Practical Examples of AIoT
The combo of internet of things and smart systems makes AIoT a powerful and important tool for many applications. Here are a few:
Smart Retail
In a smart retail environment, a camera system equipped with computer vision capabilities can use facial recognition to identify customers when they walk through the door. The system gathers intel about customers, including their gender, product preferences, traffic flow and more, analyzes the data to accurately predict consumer behavior, and then uses that information to make decisions about store operations, from marketing to product placement. For example, if the system detects that the majority of customers walking into the store are Millennials, it can push out product advertisements or in-store specials that appeal to that demographic, thereby driving up sales. Smart cameras could also identify shoppers and allow them to skip the checkout, as happens in the Amazon Go store.
Drone Traffic Monitoring
In a smart city, there are several practical uses of AIoT, including traffic monitoring by drones. If traffic can be monitored in real-time and adjustments to the traffic flow can be made, congestion can be reduced. When drones are deployed to monitor a large area, they can transmit traffic data, and then AI can analyze the data and make decisions about how to best alleviate traffic congestion with adjustments to speed limits and timing of traffic lights without human involvement.
The ET City Brain, a product of Alibaba Cloud, optimizes the use of urban resources by using AIoT. This system can detect accidents, illegal parking, and can change traffic lights to help ambulances get to patients who need assistance faster.
Office Buildings
Another area where artificial intelligence and the internet of things intersect is in smart office buildings. Some companies choose to install a network of smart environmental sensors in their office building. These sensors can detect what personnel are present and adjust temperatures and lighting accordingly to improve energy efficiency. In another use case, a smart building can control building access through facial recognition technology. The combination of connected cameras and artificial intelligence that can compare images taken in real time against a database to determine who should be granted access to a building is AIoT at work. In a similar way, employees wouldn't need to clock in, and attendance at mandatory meetings wouldn't have to be taken, since the AIoT system takes care of it.
Fleet Management and Autonomous Vehicles
AIoT is used in fleet management today to help monitor a fleet's vehicles, reduce fuel costs, track vehicle maintenance, and identify unsafe driver behavior. Through IoT devices such as GPS and other sensors and an artificial intelligence system, companies are able to manage their fleet better thanks to AIoT.
Another way AIoT is used today is with autonomous vehicles such as Tesla's autopilot systems that use radars, sonars, GPS, and cameras to gather data about driving conditions and then an AI system to make decisions about the data the internet of things devices are gathering.
Autonomous Delivery Robots
Similar to how AIoT is used with autonomous vehicles, autonomous delivery robots are another example of AIoT in action. Robots have sensors that gather information about the environment the robot is traversing and then make moment-to-moment decisions about how to respond through its onboard AI platform.
See more here:
What Is The Artificial Intelligence Of Things? When AI Meets IoT - Forbes
One key to artificial intelligence on the battlefield: trust – C4ISRNet
Posted: at 6:51 am
To understand how humans might better marshal autonomous forces during battle in the near future, it helps to first consider the nature of mission command in the past.
Derived from a Prussian school of battle, mission command is a form of decentralized command and control. Think about a commander who is given an objective and then trusted to meet that goal to the best of their ability and to do so without conferring with higher-ups before taking further action. It is a style of operating with its own advantages and hurdles, obstacles that map closely onto the autonomous battlefield.
At one level, mission command really is a management of trust, said Ben Jensen, a professor of strategic studies at the Marine Corps University. Jensen spoke as part of a panel on multidomain operations at the Association of the United States Army AI and Autonomy symposium in November. We're continually moving choice and agency from the individual because of optimized algorithms helping [decision-making]. Is this fundamentally irreconcilable with the concept of mission command?
The problem for military leaders then is two-fold: can humans trust the information and advice they receive from artificial intelligence? And, related, can those humans also trust that any autonomous machines they are directing are pursuing objectives the same way people would?
To the first point, Robert Brown, director of the Pentagons multidomain task force, emphasized that using AI tools means trusting commanders to act on that information in a timely manner.
A mission command is saying: you're going to provide your subordinates the depth, the best data you can get them, and you're going to need AI to get that quality data. But then that's balanced with their own ground and then the art of what's happening, Brown said. We have to be careful. You certainly can lose that speed and velocity of decision.
Before the tools ever get to the battlefield, before the algorithms are ever bent toward war, military leaders must ensure the tools as designed actually do what service members need.
How do we create the right type of decision aids that still empower people to make the call, but gives them the information content to move faster? said Tony Frazier, an executive at Maxar Technologies.
An intelligence product, using AI to provide analysis and information to combatants, will have to fall in the sweet spot of offering actionable intelligence, without bogging the recipient down in details or leaving them uninformed.
One thing that's remained consistent is folks will do one of three things with overwhelming information, Brown said. They will wait for perfect information. They'll just wait, wait, wait; they'll never have perfect information and adversaries [will have] done 10 other things, by the way. Or they'll be overwhelmed and disregard the information.
The third path users will take, Brown said, is the very task commanders want them to follow: find golden needles in eight stacks of information to help them make a decision in a timely manner.
Getting there, however, where information is empowering instead of paralyzing or disheartening, is the work of training. Adapting for the future means practicing in the future environment, and that means getting new practitioners familiar with the kinds of information they can expect on the battlefield.
Our adversaries are going to bring a lot of dilemmas our way and so our ability to comprehend those challenges and then hopefully not just react but proactively do something to prevent those actions, is absolutely critical, said Brig. Gen. David Kumashiro, the director of Joint Force Integration for the Air Force.
When a battle has thousands of kill chains, and analysis that stretches over hundreds of hours, humans have a difficult time comprehending what is happening. In the future, it will be the job of artificial intelligence to filter these threats. Meanwhile, it will be the role of the human in the loop to take that filtered information and respond as best they can to the threats arrayed against them.
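As a rough sketch of the workflow described above (entirely hypothetical; the scoring function, data shapes, and shortlist size are invented for illustration), an AI filter can score a large stream of detections and surface only the highest-priority items, leaving the final call to a human operator:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    track_id: int
    threat_score: float  # assumed to come from an upstream model

def filter_for_operator(detections: list[Detection], top_k: int = 5) -> list[Detection]:
    """AI side of the loop: rank thousands of detections and surface only a handful."""
    return sorted(detections, key=lambda d: d.threat_score, reverse=True)[:top_k]

# Simulated stream of detections; the engage/ignore decision on the short list stays with a person.
stream = [Detection(i, (i * 37 % 100) / 100) for i in range(5000)]
for d in filter_for_operator(stream):
    print(f"Track {d.track_id}: score {d.threat_score:.2f} -> refer to operator")
```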
What does it mean to articulate mission command in that environment, the understanding, the intent, and the trust? said Kumashiro, referring to the fast pace of AI filtering. When the highly contested environment disrupts those connections, when we are disconnected from the hive, those authorities need to be understood so that our war fighters at the farthest reaches of the tactical edge can still perform what they need to do.
Planning not just for how these AI tools work in ideal conditions, but how they will hold up under the degradation of a modern battlefield, is essential for making technology an aide, and not a hindrance, to the forces of the future.
If the data goes away, and you still got the mission, you've got to attend to it, said Brown. That's a huge factor as well for practice. If you're relying only on the data, you'll fail miserably in degraded mode.
Original post:
One key to artificial intelligence on the battlefield: trust - C4ISRNet
How is Artificial Intelligence (AI) Changing the Future of Architecture? – AiThority
Posted: at 6:51 am
Artificial Intelligence (AI) has always been a topic of discussion: is it good for us? Will getting deeper into this high-technology world give us a better future or not? According to recent research, almost everyone has a different requirement for automation, and much of the work once done by humans is now done by the latest high-intelligence computers. You are likely familiar with how Artificial Intelligence is changing industries like Medicine, Automobiles, and Manufacturing. Well, what about Architecture?
The main question is whether these high-tech machines will actually replace the creator. These computers are still not good at generating certain ideas, and you have to rely on human intelligence for that. However, they can save a lot of time by handling time-consuming tasks, and that time can be used to create other designs.
Artificial Intelligence is a high-technology system that can perform many tasks but still needs some human input, such as visual interpretation or design decisions. AI works and gives the best possible results by analyzing tons of data, and that's how it can excel in architecture.
While creating new designs, architects usually go through past designs and the data produced throughout the making of a building. Instead of the architect investing a lot of time and energy to create something new, a computer should be able to analyze that data in a short period and give recommendations accordingly. With this, an architect will be able to do testing and research simultaneously, sometimes even without pen and paper. It seems likely that organizations and clients will increasingly turn to computers for masterplans and construction.
However, the value of architects and human efforts of analyzing a problem and finding the perfect solutions will always remain unchallenged.
Parametric architecture is a hidden weapon that allows an architect to change specific parameters to create various types of output designs and build structures that could not have been imagined earlier. It is like an architect's programming language.
It allows an architect to consider a building and reframe it to fit into some other requirements. A process like this allows Artificial Intelligence to reduce the effort of an architect so that the architect can freely think about different ideas and create something new.
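As a toy illustration of what "changing specific parameters to create various output designs" can look like in practice (a hypothetical sketch, not any real parametric-design tool or API; the parameters and formulas are invented for the example), a handful of inputs can generate a whole family of design variants:

```python
from dataclasses import dataclass
from itertools import product

@dataclass
class TowerVariant:
    floors: int
    floor_height_m: float
    twist_per_floor_deg: float

    @property
    def total_height_m(self) -> float:
        return self.floors * self.floor_height_m

    @property
    def total_twist_deg(self) -> float:
        return self.floors * self.twist_per_floor_deg

# Sweep a few parameters to produce a family of candidate designs.
variants = [
    TowerVariant(floors, height, twist)
    for floors, height, twist in product([20, 30, 40], [3.2, 3.6], [0.0, 1.5])
]

for v in variants[:4]:
    print(f"{v.floors} floors, {v.total_height_m:.0f} m tall, {v.total_twist_deg:.0f} deg twist")
```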
Constructing a building is not a one-day task; it needs a lot of pre-planning. Sometimes, however, this pre-planning is not enough, and a little more effort is needed to bring an architect's vision to life. Artificial Intelligence will make an architect's work significantly easier by analyzing all the data and creating models that save the architect a lot of time and energy.
All in all, AI can be called an estimation tool for various aspects while constructing a building. However, when it comes to the construction part, AI can help so that human efforts become negligible.
The countless hours of research at the start of any new project are where AI steps in, making things easier for the architect by analyzing the aggregate data in milliseconds and recommending models so the architect can think about the conceptual design without even using pen and paper.
For example, when building a home for a family, if you have complete information about the family's requirements, you can simply pull all the zoning data using AI and generate design variations in a short time.
This era of modernization demands that everything be smartly designed. Just like smart cities, today's high-technology society demands smart homes. Architects now need to think not only about how to use AI to create home designs, but also about making the user's experience worth paying for.
Change is the one thing that should never change. The way your city looks today will be very different in the years to come. The most challenging task for an architect is city planning, which needs a great deal of precision. The primary task is to analyze all the relevant aspects and understand how a city will flow and how the population will change in the coming years.
All these factors indicate one thing: future architects will put less effort into the business of drawing and more into satisfying the user's requirements with the help of Artificial Intelligence.
Originally posted here:
How is Artificial Intelligence (AI) Changing the Future of Architecture? - AiThority
Chanukah and the Battle of Artificial Intelligence – The Ultimate Victory of the Human Being – Chabad.org
Posted: at 6:51 am
Chanukah is generally presented as a commemoration of a landmark victory for religious freedom and human liberty in ancient times. Big mistake. Chanukah's greatest triumph is still to come: the victory of the human soul over artificial intelligence.
Jewish holidays are far more than memories of things that happened in the distant past; they are live events taking place right now, in the ever-present. As we recite on Chanukah's parallel celebration, Purim, These days will be remembered and done in every generation. The Arizal explains: When they are remembered, they reenact themselves.
And indeed, the battle of the Maccabees is an ongoing battle, one embedded deep within the fabric of our society, one that requires constant vigilance lest it sweep away the foundations of human liberty. It is the struggle between the limitations of the mind and the infinite expanse that lies beyond the mind's restrictive boxes, between perception and truth, between the apparent and the transcendental, between reason and revelation, between the mundane and the divine.
Today, as AI development rapidly accelerates, we may be participants in yet a deeper formalization of society, the struggle between man and machine.
Let me explain what I mean by the formalization of society. Formalization is something the manager within us embraces, and something the incendiary, creative spark within that manager defies. It's why many bright kids don't do well in school, why our most brilliant, original minds are often pushed aside for promotions while the survivors who follow the book climb high, why ingenuity is lost in big corporations, and why so many of us are debilitated by migraines. It's also a force that bars anything transcendental or divine from public dialogue.
Formalization is the strangulation of life by reduction to standard formulas. Scientists reduce all change to calculus, sociologists reduce human behavior to statistics, AI technologists reduce intelligence to algorithms. That's all very useful, but it is no longer reality. Reality is not reducible, because the only true model of reality is reality itself. And what else is reality but the divine, mysterious and wondrous space in which humans live?
Formalization denies that truth. To reduce is useful, to formalize is to kill.
Formalization happens in a mechanized society because automation demands that we state explicitly the rules by which we work and then set them in silicon. It reduces thought to executable algorithms, behaviors to procedures, ideas to formulas. That's fantastic, because it potentially liberates us warm, living human beings from repetitive tasks that can be performed by cold, lifeless mechanisms, so we may spend more time on those activities that no algorithm or formula could perform.
Potentially. The default, however, without deliberate intervention, is the edifice complex.
The edifice complex is what takes place when we create a device, institution or any other formal structure, an edifice, to more efficiently execute some mandate. That edifice then develops a mandate of its own: the mandate to preserve itself by the most expedient means. And then, true to the complex it sounds like, The Edifice Inc., with its new mandate, turns around and suffocates to death the original mandate for which it was created.
Think of public education. Think of many of our religious institutions and much of our government policy. But also think of the general direction in which industrialization and mechanization have led us since the Industrial Revolution took off 200 years ago.
It's an ironic formula. Ever since Adam named the animals and harnessed fire, humans have built tools and machines to empower themselves, to increase their dominion over their environment. And, yes, in many ways we have managed to increase the quality of our lives. But in many other ways, we have enslaved ourselves to our own servants: to the formalities of those machines, factories, assembly lines, cost projections, policies, etc. We have coerced ourselves into ignoring the natural rhythms of human life, the natural bonds and covenants of human community, and the spectrum of variation across human character along with our natural tolerance for that wide deviance, all to conform to the tight formalities our own machinery demands in the name of efficacy.
In his personal notes from the summer of 1944, having barely escaped from occupied France, the Rebbe, Rabbi Menachem M. Schneerson of righteous memory, described a world torn by a war between two ideologies: between those for whom the individual was nothing more than a cog in the machinery of the state, and those who understood that there can be no benefit to the state in trampling the rights of any individual. The second ideology, that held by the western Allies, is, the Rebbe noted, a Torah one: "If the enemy says, 'Give us one of you, or we will kill you all!'" declared the sages of the Talmud, "not one soul shall be deliberately surrendered to its death."
Basically, the life of the individual is equal to the whole. Go make an algorithm from that. The math doesn't work. Try to generalize it. You can't. It will generate what logicians call a deductive explosion. Yet it summarizes a truth essential to the sustainability of human life on this planet, as that world war demonstrated with nightmarish poignance.
That war continued into the Cold War. It presses on today with the rising economic dominance of the Communist Party of China.
In the world of consumer technology, total dominance of The Big Machine was averted when a small group of individuals pressed forward against the tide by advancing the human-centered digital technology we now take for granted. But yet another round is coming, and it rides on the seductive belief that AI can do its best job by adding yet another layer of formalization to all of society's tasks.
Don't believe that for a minute. The telos of technology is to enhance human life, not to restrict it; to provide human beings with tools and devices, not to render them as such.
Technology's ultimate purpose will come in a time of which Maimonides writes, when "the occupation of the entire world will be only to know the divine." AI can certainly assist us in attaining that era and living it, as long as we remain its masters and do not surrender our dignity as human beings. And that is the next great battle of humanity.
To win this battle, we need once again only a small army, but an army armed with more than vision. They must be people with faith, faith in the divine spark within the human being. For that is what underpins the security of the modern world.
Pundits will tell you that our modern world is secular. Don't believe them. They will tell you that religion is not taught in American public schools. It's a lie. Western society is sustained on the basis of a foundational, religious belief: that all human beings are equal. That's a statement with no empirical or rational support, because it is neither empirical nor rational. It is a statement of faith. Subliminally, it means: the value of a single human life cannot be measured.
In other words, every human life is divine.
No, we don't say those words; there is no class in school discussing our divine image. Yet it is a tacit, unspoken belief. Western society is a church without walls, a religion whose dogmas are never spoken, yet guarded jealously, mostly by those who understand them the least. Pull that belief out from between the bricks and the entire edifice collapses to the ground.
It is also a ubiquitous theme in Jewish practice. As I've written elsewhere, leading a Jewish way of life in the modern era is an outright rebellion against the materialist reductionism of a formalized society.
We liberate ourselves from interaction with our machines once a week, on Shabbat, and rise to an entirely human world of thought, prayer, meditation, learning, songs, and good company. We insist on making every instance of food consumption into a spiritual, even mystical event, by eating kosher and saying blessings before and after. We celebrate and empower the individual through our insistence that every Jew must study and enter the discussion of the hows and whys of Jewish practice. And on Chanukah, we insist that every Jew must create light and increase that light each day; that none of us can rely on any grand institution to do so as our proxy.
Because each of us is an entire world, as our sages state in the Mishnah: "Every person must say, 'On my account, the world was created.'"
This is what the battle of Chanukah is telling us. The flame of the menorah, that is the human soul, "a candle of G-d." The war-machine of Antiochus, elephants under heavy armor, that is the rule of formalization and expedience coming to suffocate the flame. The Maccabee rebels are a small group of visionaries, those who believe there is more to heaven and earth than all science and technology can contain, more to the human soul than any algorithm can grind out, more to life than efficacy.
How starkly poignant it is indeed that practicing, religious Jews were by far the most recalcitrant group in the Hellenist world of the Greeks and Romans.
Artificial intelligence can be a powerful tool for good, but only when wielded by those who embrace a reality beyond reason. And it is that transcendence that Torah preserves within us. Perhaps all of Torah and its mitzvahs were given for this, the final battle of humankind.
Posted in Artificial Intelligence
Artificial Intelligence, Foresight, and the Offense-Defense Balance – War on the Rocks
Posted: at 6:51 am
There is a growing perception that AI will be a transformative technology for international security. The current U.S. National Security Strategy names artificial intelligence as one of a small number of technologies that will be critical to the country's future. Senior defense officials have commented that the United States is at an inflection point in the power of artificial intelligence, and even that AI might be the first technology to change the fundamental nature of war.
However, there is still little clarity regarding just how artificial intelligence will transform the security landscape. One of the most important open questions is whether applications of AI, such as drone swarms and software vulnerability discovery tools, will tend to be more useful for conducting offensive or defensive military operations. If AI favors the offense, then a significant body of international relations theory suggests that this could have destabilizing effects. States could find themselves increasingly able to use force and increasingly frightened of having force used against them, making arms-racing and war more likely. If AI favors the defense, on the other hand, then it may act as a stabilizing force.
Anticipating the impact of AI on the so-called offense-defense balance across different military domains could be extremely valuable. It could help us to foresee new threats to stability before they arise and act to mitigate them, for instance by pursuing specific arms agreements or prioritizing the development of applications with potential stabilizing effects.
Unfortunately, the historical record suggests that attempts to forecast changes in the offense-defense balance are often unsuccessful. It can even be difficult to detect the changes that newly adopted technologies have already caused. In the lead-up to the First World War, for instance, most analysts failed to recognize that the introduction of machine guns and barbed wire had tilted the offense-defense balance far toward defense. The years of intractable trench warfare that followed came as a surprise to the states involved.
While there are clearly limits on the ability to anticipate shifts in the offense-defense balance, some forms of technological change have more predictable effects than others. In particular, as we argue in a recent paper, changes that essentially scale up existing capabilities are likely to be much easier to analyze than changes that introduce fundamentally new capabilities. Substantial insight into the impacts of AI can be achieved by focusing on this kind of quantitative change.
Two Kinds of Technological Change
In a classic analysis of arms races, Samuel Huntington draws a distinction between qualitative and quantitative changes in military capabilities. A qualitative change involves the introduction of what might be considered a new form of force. A quantitative change involves the expansion of an existing form of force.
Although this is a somewhat abstract distinction, it is easy to illustrate with concrete examples. The introduction of dreadnoughts in naval surface warfare in the early twentieth century is most naturally understood as a qualitative change in naval technology. In contrast, the subsequent naval arms race which saw England and Germany competing to manufacture ever larger numbers of dreadnoughts represented a quantitative change.
Attempts to understand changes in the offense-defense balance tend to focus almost exclusively on the effects of qualitative changes. Unfortunately, the effects of such qualitative changes are likely to be especially difficult to anticipate. One particular reason why foresight about such changes is difficult is that the introduction of a new form of force, from the tank to the torpedo to the phishing attack, will often warrant the introduction of substantially new tactics. Since these tactics emerge at least in part through a process of trial and error, as both attackers and defenders learn from the experience of conflict, there is a limit to how much can ultimately be foreseen.
Although quantitative technological changes are given less attention, they can also in principle have very large effects on the offense-defense balance. Furthermore, these effects may exhibit certain regularities that make them easier to anticipate than the effects of qualitative change. Focusing on quantitative change may then be a promising way forward to gain insight into the potential impact of artificial intelligence.
How Numbers Matter
To understand how quantitative changes can matter, and how they can be predictable, it is useful to consider the case of a ground invasion. If the sizes of two armies double in the lead-up to an invasion, for example, then it is not safe to assume that the effect will simply cancel out and leave the balance of forces the same as it was prior to the doubling. Rather, research on combat dynamics suggests that increasing the total number of soldiers will tend to benefit the attacker when force levels are sufficiently low and benefit the defender when force levels are sufficiently high. The reason is that the initial growth in numbers primarily improves the attacker's ability to send soldiers through poorly protected sections of the defender's border. Eventually, however, the border becomes increasingly saturated with ground forces, eliminating the attacker's ability to exploit poorly defended sections.
Figure 1: A simple model illustrating the importance of force levels. The ability of the attacker (in red) to send forces through poorly defended sections of the border rises and then falls as total force levels increase.
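To see how this kind of saturation can produce a rise-and-fall pattern, consider the toy calculation below. It is only an illustrative sketch, not the model behind Figure 1: it assumes a border split into sectors, a hypothetical garrison size needed to seal each sector against infiltration, and a cap on how much attacking force can slip through each unsealed gap.

```python
# Toy sketch of offensive-then-defensive scaling on a ground border.
# All parameters are hypothetical and chosen only to illustrate the shape of the curve.

SECTORS = 100          # sections of the defender's border
SEAL_COST = 50         # defenders needed to fully garrison one sector
GAP_THROUGHPUT = 30    # attackers that can slip through one unsealed sector

def infiltration(total_force, attacker_share=0.5):
    """Attacking force that gets through poorly defended sectors."""
    attackers = total_force * attacker_share
    defenders = total_force - attackers
    sealed = min(SECTORS, int(defenders // SEAL_COST))   # sectors the defender can garrison
    open_sectors = SECTORS - sealed
    return min(attackers, GAP_THROUGHPUT * open_sectors)  # capped by the gaps available

for total in (500, 2000, 4000, 6000, 8000, 10000):
    print(f"total forces {total:>6}: {infiltration(total):7.0f} attackers get through")
```

The specific numbers mean nothing; what matters is the shape. Infiltration grows while the border is mostly gaps, then collapses once the defender can garrison every sector, mirroring the curve in Figure 1.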
This phenomenon is also likely to arise in many other domains where there are multiple vulnerable points that a defender hopes to protect. For example, in the cyber domain, increasing the number of software vulnerabilities that an attacker and defender can each discover will benefit the attacker at first. The primary effect will initially be to increase the attacker's ability to discover vulnerabilities that the defender has failed to discover and patch. In the long run, however, the defender will eventually discover every vulnerability that can be discovered and leave behind nothing for the attacker to exploit.
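An even simpler back-of-the-envelope model captures the same shape for the cyber case. Suppose, purely for illustration, that a codebase contains a fixed number of discoverable bugs and that better tooling raises the probability that each side independently finds any given one; the attacker's expected stock of unpatched, exploitable bugs then peaks at an intermediate discovery rate and vanishes as discovery approaches completeness.

```python
# Toy model: N latent bugs; attacker and defender each find any given bug
# independently with probability p (a stand-in for tooling capability).
# Exploitable bugs are those the attacker finds but the defender has not patched.

N_BUGS = 1000

def expected_exploitable(p):
    return N_BUGS * p * (1 - p)   # found by the attacker, missed by the defender

for p in (0.05, 0.25, 0.50, 0.75, 0.95, 1.00):
    print(f"discovery probability {p:4.2f}: ~{expected_exploitable(p):5.0f} exploitable bugs")
```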
In general, growth in numbers will often benefit the attacker when numbers are sufficiently low and benefit the defender when they are sufficiently high. We refer to this regularity as offensive-then-defensive scaling and suggest that it can be helpful for predicting shifts in the offense-defense balance in a wide range of domains.
Artificial Intelligence and Quantitative Change
Applications of artificial intelligence will undoubtedly be responsible for an enormous range of qualitative changes to the character of war. It is easy to imagine states such as the United States and China competing to deploy ever more novel systems in a cat-and-mouse game that has little to do with quantity. An emphasis on qualitative advantage over quantitative advantage is a fairly explicit feature of the American military strategy and has been since at least the so-called Second Offset strategy that emerged in the middle of the Cold War.
However, some emerging applications of artificial intelligence do seem to lend themselves most naturally to competition on the basis of rapidly increasing quantity. Armed drone swarms are one example. Paul Scharre has argued that the military utility of these swarms may lie in the fact that they offer an opportunity to substitute quantity for quality. A large swarm of individually expendable drones may be able to overwhelm the defenses of individual weapon platforms, such as aircraft carriers, by attacking from more directions or in more waves than the platform's defenses are capable of managing. If this method of attack is in fact viable, one could see a race to build larger and larger swarms that ultimately results in swarms containing billions of drones. The phenomenon of offensive-then-defensive scaling suggests that growing swarm sizes could initially benefit attackers who can focus their attention increasingly intensely on less well-defended targets and parts of targets before potentially allowing defensive swarms to win out if sufficient growth in numbers occurs.
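A rough sketch can illustrate why raw swarm size matters against point defenses. The numbers below are invented: if a platform's defenses can engage only a fixed number of incoming drones per attack wave, everything beyond that ceiling leaks through, so a sufficiently large swarm saturates the defense.

```python
# Toy saturation check for a drone swarm attacking a single defended platform.
# Hypothetical numbers; real engagement dynamics are far more complicated.

def leakers(swarm_size, waves, interceptors, shots_per_wave):
    """Drones that get past point defenses when the swarm attacks in equal waves."""
    per_wave = swarm_size // waves
    capacity = interceptors * shots_per_wave        # max drones engaged per wave
    return sum(max(0, per_wave - capacity) for _ in range(waves))

for size in (100, 400, 1000, 5000):
    print(f"swarm of {size:>5}: {leakers(size, waves=5, interceptors=40, shots_per_wave=2):>5} leakers")
```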
Automated vulnerability discovery tools, which have the potential to vastly increase the number of software vulnerabilities that both attackers and defenders can discover, stand out as another relevant example. The DARPA Cyber Grand Challenge recently showcased machine systems autonomously discovering, patching, and exploiting software vulnerabilities. Recent work on novel techniques such as deep reinforcement fuzzing also suggests significant promise. The computer security expert Bruce Schneier has suggested that continued progress will ultimately make it feasible to discover and patch every single vulnerability in a given piece of software, shifting the cyber offense-defense balance significantly toward defense. Before this point, however, there is reason for concern that these new tools could initially benefit attackers most of all.
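For readers unfamiliar with what automated vulnerability discovery looks like mechanically, the fragment below sketches the simplest possible version of the idea: random-mutation fuzzing against a deliberately buggy toy parser. It bears no resemblance to the systems fielded in the Cyber Grand Challenge or to deep reinforcement fuzzing; it only shows how crash discovery can be reduced to a loop a machine can run at scale.

```python
# Minimal random-mutation fuzzer against a toy parser with a planted bug.
# Purely illustrative; real tools add coverage feedback, triage, and patching.
import random

def parse_record(data: bytes) -> int:
    """Toy target: a length byte, that many payload bytes, then a checksum byte."""
    length = data[0]
    payload = data[1:1 + length]
    checksum = data[1 + length]          # bug: crashes if the buffer is shorter than declared
    return (sum(payload) + checksum) % 256

def fuzz(seed: bytes, trials: int = 10_000):
    """Mutate the seed at random and collect inputs that crash the target."""
    crashes = []
    for _ in range(trials):
        mutated = bytearray(seed)
        for _ in range(random.randint(1, 3)):
            mutated[random.randrange(len(mutated))] = random.randrange(256)
        try:
            parse_record(bytes(mutated))
        except Exception:
            crashes.append(bytes(mutated))
    return crashes

seed = bytes([4, 1, 2, 3, 4, 99])        # well-formed: length 4, payload, checksum
print(f"{len(fuzz(seed))} crashing inputs found out of 10,000 trials")
```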
Forecasting the Impact of Technology
The impact of AI on the offense-defense balance remains highly uncertain. The greatest impact might come from an as-yet-unforeseen qualitative change. Our contribution here is to point out one particularly precise way in which AI could impact the offense-defense balance, through quantitative increases of capabilities in domains that exhibit offensive-then-defensive scaling. Even if this idea is mistaken, it is our hope that by understanding it, researchers are more likely to see other impacts. In foreseeing and understanding these potential impacts, policymakers could be better prepared to mitigate the most dangerous consequences, through prioritizing the development of applications that favor defense, investigating countermeasures, or constructing stabilizing norms and institutions.
Work to understand and forecast the impacts of technology is hard and should not be expected to produce confident answers. However, the importance of the challenge means that researchers should still try, and do so in a scientific, humble way.
This publication was made possible (in part) by a grant to the Center for a New American Security from Carnegie Corporation of New York. The statements made and views expressed are solely the responsibility of the author(s).
Ben Garfinkel is a DPhil scholar in International Relations, University of Oxford, and research fellow at the Centre for the Governance of AI, Future of Humanity Institute.
Allan Dafoe is associate professor in the International Politics of AI, University of Oxford, and director of the Centre for the Governance of AI, Future of Humanity Institute. For more information, see http://www.governance.ai and http://www.allandafoe.com.
Image: U.S. Air Force (Photo by Tech. Sgt. R.J. Biermann)
Read more:
Artificial Intelligence, Foresight, and the Offense-Defense Balance - War on the Rocks
Posted in Artificial Intelligence
AI Warning: Compassionless world-changing A.I. already here - You WON'T see them coming – Express.co.uk
Posted: at 6:51 am
Fear surrounding artificial intelligence has remained prevalent as society has witnessed the huge leaps the technology sector has made in recent years. Shadow Robot Company director Rich Walker explained that it is not evil A.I. people should necessarily be afraid of, but rather the companies it could masquerade behind. During an interview with Express.co.uk, Mr Walker explained that advanced A.I. with nefarious intent towards mankind would not openly show itself.
He noted that companies which actively do harm to society and the people within it would be more appealing to an A.I. that had goals of destroying humanity.
He said: "There is the kind of standard fear of A.I. that comes from science fiction.
"Which is either the humanoid robot, like from the Terminator, that takes over and tries to destroy humanity.
"Or it is the cold, compassionless machine that changes the world around it in its own image, and there is no space for humans in there.
"There is actually quite a good argument that there are cold, compassionless machines that change the world around us in their own image.
"They are called corporations.
"We shouldn't necessarily worry about A.I. as something that will come along and change everything.
"We already have these organisations that will do that.
"They operate outside of national rules of law and societal codes of conduct.
"So, A.I. is not the bit that makes that happen; the bits that make that happen are already in place."
He later added: "I guess you could say that a company that has known for 30 years that climate change was inevitable, and has systematically defunded research into climate change and funded research that shows climate change isn't happening, is the kind of organisation I am thinking of.
"That is the kind of behaviour you have to say: 'That is trying to destroy humanity.'
"They would argue no, they are not trying to do that, but the fact would be that the effect of what they are doing is trying to destroy humanity.
"If you wanted to have an Artificial Intelligence that was a bad guy, a large corporation that profits from fossil fuels and systematically hid the information that fossil fuels were bad for the planet, that would be an A.I. bad guy in my book."
The Shadow Robot Company has directed its focus toward creating complex, dexterous robot hands that mimic human hands.
The robotics company uses its tactile Telerobot technology to demonstrate how A.I. programmes can be used alongside human interaction to create complex robotic relationships.
Posted in Artificial Intelligence