The Prometheus League
Category Archives: Artificial Intelligence
Is Artificial Intelligence Made in Humanity’s Image? Lessons for an AI Military Education – War on the Rocks
Posted: May 21, 2022 at 6:55 pm
Artificial intelligence is not like us. For all of AI's diverse applications, human intelligence is not at risk of losing its most distinctive characteristics to its artificial creations.
Yet, when AI applications are brought to bear on matters of national security, they are often subjected to an anthropomorphizing tendency that inappropriately associates human intellectual abilities with AI-enabled machines. A rigorous AI military education should recognize that this anthropomorphizing is irrational and problematic, reflecting a poor understanding of both human and artificial intelligence. The most effective way to mitigate this anthropomorphic bias is through engagement with the study of human cognition: cognitive science.
This article explores the benefits of using cognitive science as part of an AI education in Western military organizations. Tasked with educating and training personnel on AI, military organizations should convey not only that anthropomorphic bias exists, but also that it can be overcome to allow better understanding and development of AI-enabled systems. This improved understanding would aid both the perceived trustworthiness of AI systems by human operators and the research and development of artificially intelligent military technology.
For military personnel, having a basic understanding of human intelligence allows them to properly frame and interpret the results of AI demonstrations, grasp the current nature of AI systems and their possible trajectories, and interact with AI systems in ways that are grounded in a deep appreciation for human and artificial capabilities.
Artificial Intelligence in Military Affairs
AI's importance for military affairs is the subject of increasing focus by national security experts. Harbingers of a "new revolution in military affairs" are out in force, detailing the myriad ways in which AI systems will change the conduct of wars and how militaries are structured. From microservices such as unmanned vehicles conducting reconnaissance patrols to swarms of lethal autonomous drones and even spying machines, AI is presented as a comprehensive, game-changing technology.
As the importance of AI for national security becomes increasingly apparent, so too does the need for rigorous education and training for the military personnel who will interact with this technology. Recent years have seen an uptick in commentary on this subject, including in War on the Rocks. Mick Ryan's "Intellectual Preparation for War," Joe Chapa's "Trust and Tech," and Connor McLemore and Charles Clark's "The Devil You Know," to name a few, each emphasize the importance of education and trust in AI in military organizations.
Because war and other military activities are fundamentally human endeavors, requiring the execution of any number of tasks on and off the battlefield, the uses of AI in military affairs will be expected to fill these roles at least as well as humans could. So long as AI applications are designed to fill characteristically human military roles (ranging from arguably simpler tasks like target recognition to more sophisticated tasks like determining the intentions of actors), the dominant standard used to evaluate their successes or failures will be the ways in which humans execute these tasks.
But this sets up a challenge for military education: how exactly should AIs be designed, evaluated, and perceived during operation if they are meant to replace, or even accompany, humans? Addressing this challenge means identifying anthropomorphic bias in AI.
Anthropomorphizing AI
Identifying the tendency to anthropomorphize AI in military affairs is not a novel observation. U.S. Navy Commander Edgar Jatho and Naval Postgraduate School researcher Joshua A. Kroll argue that AI is often "too fragile to fight." Using the example of an automated target recognition system, they write that to describe such a system as engaging in "recognition" effectively "anthropomorphizes algorithmic systems that simply interpret and repeat known patterns."
But the act of human recognition involves distinct cognitive steps occurring in coordination with one another, including visual processing and memory. A person can even choose to reason about the contents of an image in a way that has no direct relationship to the image itself yet makes sense for the purpose of target recognition. The result is a reliable judgment of what is seen even in novel scenarios.
An AI target recognition system, in contrast, depends heavily on its existing data or programming, which may be inadequate for recognizing targets in novel scenarios. This system does not process images and recognize targets within them as humans do. Anthropomorphizing this system means oversimplifying the complex act of recognition and overestimating the capabilities of AI target recognition systems, as the sketch below illustrates.
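To make the brittleness concrete, here is a minimal, hypothetical sketch in Python (scikit-learn, with invented two-dimensional "features" and class names). It is not any fielded recognition system; it only shows how a trained classifier will confidently label an input utterly unlike anything in its training data:

```python
# A toy classifier trained on two known target types stays confident
# even on inputs far outside its training distribution.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
tanks = rng.normal(loc=[2.0, 2.0], scale=0.5, size=(200, 2))    # invented features
trucks = rng.normal(loc=[-2.0, -2.0], scale=0.5, size=(200, 2))
X = np.vstack([tanks, trucks])
y = np.array([1] * 200 + [0] * 200)  # 1 = tank, 0 = truck

model = LogisticRegression().fit(X, y)

novel = np.array([[30.0, 28.0]])      # far outside anything seen in training
print(model.predict(novel))           # still outputs a hard label
print(model.predict_proba(novel))     # and with near-total "confidence"
```

A human analyst faced with something so far outside experience would flag it as unfamiliar; the model, by construction, cannot.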
By framing and defining AI as a counterpart to human intelligence (as a technology designed to do what humans have typically done themselves), concrete examples of AI are measured by "[their] ability to replicate human mental skills," as De Spiegeleire, Maas, and Sweijs put it.
Commercial examples abound. AI applications like IBM's Watson, Apple's Siri, and Microsoft's Cortana each excel in natural language processing and voice responsiveness, capabilities which we measure against human language processing and communication.
Even in military modernization discourse, the Go-playing AI AlphaGo caught the attention of high-level People's Liberation Army officials when it defeated professional Go player Lee Sedol in 2016. AlphaGo's victories were viewed by some Chinese officials as a turning point that demonstrated the potential of AI to engage in complex analyses and strategizing comparable to that required to wage war, as Elsa Kania notes in a report on AI and Chinese military power.
But, like the attributes projected onto the AI target recognition system, some Chinese officials imposed an oversimplified version of wartime strategies and tactics (and the human cognition they arise from) onto AlphaGo's performance. One strategist in fact noted that "Go and warfare are quite similar."
Just as concerningly, the fact that AlphaGo was anthropomorphized by commentators in both China and America means that the tendency to oversimplify human cognition and overestimate AI is cross-cultural.
The ease with which human abilities are projected onto AI systems like AlphaGo is described succinctly by AI researcher Eliezer Yudkowsky: "Anthropomorphic bias can be classed as insidious: it takes place with no deliberate intent, without conscious realization, and in the face of apparent knowledge." Without realizing it, individuals in and out of military affairs ascribe human-like significance to demonstrations of AI systems. Western militaries should take note.
For military personnel who are in training for the operation or development of AI-enabled military technology, recognizing this anthropomorphic bias and overcoming it is critical. This is best done through an engagement with cognitive science.
The Relevance of Cognitive Science
The anthropomorphizing of AI in military affairs does not mean that AI is always given high marks. It is now cliché for some commentators to contrast human creativity with the fundamental brittleness of machine learning approaches to AI, often with a frank recognition of the narrowness of machine intelligence. This cautious commentary on AI may lead one to think that the overestimation of AI in military affairs is not a pervasive problem. But so long as the dominant standard by which we measure AI is human abilities, merely acknowledging that humans are creative is not enough to mitigate unhealthy anthropomorphizing of AI.
Even commentary on AI-enabled military technology that acknowledges AI's shortcomings fails to identify the need for an AI education grounded in cognitive science.
For example, Emma Salisbury writes in War on the Rocks that existing AI systems rely heavily on brute-force processing power, yet fail to interpret data and determine whether the data are actually meaningful. Such AI systems are prone to serious errors, particularly when they are moved outside their narrowly defined domain of operation.
Such shortcomings reveal, as Joe Chapa writes on AI education in the military, that "an important element in a person's ability to trust technology is learning to recognize a fault or a failure." So, human operators ought to be able to identify when AIs are working as intended, and when they are not, in the interest of trust.
Some high-profile voices in AI research echo these lines of thought and suggest that the cognitive science of human beings should be consulted to carve out a path for improvement in AI. Gary Marcus is one such voice, pointing out that just as humans can think, learn, and create because of their innate biological components, so too do AIs like AlphaGo excel in narrow domains because of their innate components, richly specific to tasks like playing Go.
Moving from narrow to general AI (the distinction between an AI capable only of target recognition and an AI capable of reasoning about targets within scenarios) requires a deep look into human cognition.
The results of AI demonstrations, like the performance of an AI-enabled target recognition system, are data. Just like the results of human demonstrations, these data must be interpreted. The core problem with anthropomorphizing AI is that even cautious commentary on AI-enabled military technology hides the need for a theory of intelligence. To interpret AI demonstrations, theories that borrow heavily from the best example of intelligence available (human intelligence) are needed.
The relevance of cognitive science for an AI military education goes well beyond revealing contrasts between AI systems and human cognition. Understanding the fundamental structure of the human mind provides a baseline account from which artificially intelligent military technology may be designed and evaluated. It has implications for the narrow-versus-general distinction in AI, the limited utility of human-machine confrontations, and the developmental trajectories of existing AI systems.
The key for military personnel is being able to frame and interpret AI demonstrations in ways that can be trusted for both operation and research and development. Cognitive science provides the framework for doing just that.
Lessons for an AI Military Education
It is important that an AI military education not be pre-planned in such detail as to stifle innovative thought. Some lessons for such an education, however, are readily apparent using cognitive science.
First, we need to reconsider narrow and general AI. The distinction between narrow and general AI is a distraction: far from dispelling the unhealthy anthropomorphizing of AI within military affairs, it merely tempers expectations without engendering a deeper understanding of the technology.
The anthropomorphizing of AI stems from a poor understanding of the human mind. This poor understanding is often the implicit framework through which a person interprets AI. Part of this poor understanding is taking a reasonable line of thought (that the human mind should be studied by dividing it up into separate capabilities, like language processing) and transferring it to the study and use of AI.
The problem, however, is that these separate capabilities of the human mind do not represent the fullest understanding of human intelligence. Human cognition is more than these capabilities acting in isolation.
Much of AI development thus proceeds under the banner of engineering, as an endeavor not to re-create the human mind in artificial ways but to perform specialized tasks, like recognizing targets. A military strategist may point out that AI systems do not need to be human-like in the general sense, but rather that Western militaries need specialized systems which can be narrow yet reliable during operation.
This is a serious mistake for the long-term development of AI-enabled military technology. Not only is the narrow-and-general distinction a poor way of interpreting existing AI systems, but it clouds their trajectories as well. The fragility of existing AIs, especially deep-learning systems, may persist so long as a fuller understanding of human cognition is absent from their development. For this reason (among others), Gary Marcus points out that deep learning is "hitting a wall."
An AI military education would not avoid this distinction but incorporate a cognitive science perspective on it that allows personnel in training to re-think inaccurate assumptions about AI.
Human-Machine Confrontations Are Poor Indicators of Intelligence
Second, pitting AIs against exceptional humans in domains like chess and Go is considered an indicator of AI's progress in commercial domains. The U.S. Defense Advanced Research Projects Agency participated in this trend by pitting Heron Systems' F-16 AI against a skilled Air Force F-16 pilot in simulated dogfighting trials. The goals were to demonstrate AI's ability to learn fighter maneuvers while earning the respect of a human pilot.
These confrontations do reveal something: some AIs really do excel in certain, narrow domains. But anthropomorphizing's insidious influence lurks just beneath the surface: there are sharp limits to the utility of human-machine confrontations if the goals are to gauge the progress of AIs or gain insight into the nature of wartime tactics and strategies.
The idea of training an AI to confront a veteran-level human in a clear-cut scenario is like training humans to communicate like bees by learning the waggle dance. It can be done, and some humans may dance like bees quite well with practice, but what is the actual utility of this training? It does not tell humans anything about the mental life of bees, nor does it yield insight into the nature of communication. At best, any lessons learned from the experience will be tangential to the actual dance and better pursued through other means.
The lesson here is not that human-machine confrontations are worthless. However, whereas private firms may benefit from commercializing AI by pitting AlphaGo against Lee Sedol or Deep Blue against Garry Kasparov, the benefits for militaries may be less substantial. Cognitive science keeps the individual grounded in an appreciation for the limited utility of such confrontations without losing sight of their benefits.
Human-Machine Teaming Is an Imperfect Solution
Human-machine teaming may be considered one solution to the problems of anthropomorphizing AI. To be clear, it is worth pursuing as a means of offloading some human responsibility to AIs.
But the problem of trust, perceived and actual, surfaces once again. Machines designed to take on responsibilities previously underpinned by the human intellect will need to overcome the hurdles already discussed to become reliable and trustworthy for human operators: understanding the human element still matters.
Be Ambitious but Stay Humble
Understanding AI is not a straightforward matter. Perhaps it should not come as a surprise that a technology with the name "artificial intelligence" conjures up comparisons to its natural counterpart. For military affairs, where the stakes in effectively implementing AI are far higher than for commercial applications, ambition grounded in an appreciation for human cognition is critical for AI education and training. Part of a baseline literacy in AI within militaries needs to include some level of engagement with cognitive science.
Even granting that existing AI approaches are not intended to be like human cognition, both anthropomorphizing and the misunderstandings about human intelligence it carries are prevalent enough across diverse audiences to merit explicit attention for an AI military education. Certain lessons from cognitive science are poised to be the tools with which this is done.
Vincent J. Carchidi holds a Master of Political Science from Villanova University, specializing in the intersection of technology and international affairs, with an interdisciplinary background in cognitive science. Some of his work has been published in AI & Society and the Human Rights Review.
Image: Joint Artificial Intelligence Center blog
Iterative Scopes to Present Three Abstracts on Artificial Intelligence Applications for GI Endoscopy at DDW 2022 – Business Wire
Posted: at 6:55 pm
CAMBRIDGE, Mass.--(BUSINESS WIRE)--Iterative Scopes, a pioneer in precision medicine technologies for gastroenterology, announced today that its artificial intelligence platforms will be featured in three abstract presentations at the upcoming Digestive Disease Week 2022 (DDW 2022). The meeting will take place virtually and onsite at the San Diego Convention Center in San Diego, CA, from May 21 to May 24.
Experts in inflammatory bowel disease (IBD) and artificial intelligence (AI) will present two abstracts discussing data on the company's endoscopic scoring algorithms in ulcerative colitis (UC), a condition included under the umbrella of IBD, developed in collaboration with Eli Lilly and Company.
The data are drawn from an innovative partnership between Iterative Scopes and Lilly focused on studying the effectiveness of machine learning (ML) models in automatically scoring endoscopic disease severity in UC. Progress in IBD research is hindered by variability in the human interpretation of endoscopic severity. This unique ML approach incorporates novel methods of interpreting and integrating visual data into the assessment of clinical trial endoscopic endpoints. These models have the potential to serve as a substitute for human central readers, which may reduce clinical trial costs and accelerate IBD research.
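The release does not describe the model architecture. Purely as an illustration of the general approach (grading disease severity from endoscopic images), here is a hypothetical PyTorch sketch; it is not Iterative Scopes' actual model, data, or training procedure:

```python
# Illustrative only: fine-tune a stock image classifier to predict an
# endoscopic severity grade (e.g., the 0-3 Mayo endoscopic subscore in UC).
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 4)  # 4 severity grades, 0-3

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, grades: torch.Tensor) -> float:
    """One step: images are (N, 3, 224, 224) tensors; grades are expert labels 0-3."""
    optimizer.zero_grad()
    loss = loss_fn(model(images), grades)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The interesting research question the abstracts address is upstream of this sketch: whether a single central reader's labels are a reliable enough ground truth for the `grades` tensor at all.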
Aasma Shaukat, MD, MPH, Robert M. and Mary H. Glickman Professor of Medicine and Gastroenterology at NYU Grossman School of Medicine and a leader of the Iterative Scopes advisory board, will present the first publicly available registration trial data on SKOUT, the company's automated polyp detection algorithm for colorectal cancer screening, in a plenary session on Late-Breaking Clinical Science Abstracts. The plenary sessions at DDW are the forum for highlighting some of the year's best research abstracts, as determined by the conference organizers. In her discussion, Dr. Shaukat will highlight results of a multicenter, randomized clinical trial in the US assessing whether SKOUT is superior to standard colonoscopy in increasing adenomas detected per colonoscopy.
SKOUT has a pending 510(k) and is not available for sale in the United States. SKOUT received its CE Mark certification in 2021.
"We founded Iterative Scopes four years ago to change the trajectory of GI drug development and clinical care, and we are extremely excited to share results of Iterative Scopes' work in applying cutting-edge computational approaches towards achieving this goal," said Jonathan Ng, MBBS, the founder and CEO of Iterative Scopes. "We are excited to share our work with the clinical community at DDW, through these presentations and the other events surrounding DDW."
Iterative Scopes Presentations at DDW 2022:
Endoscopic Scoring Solutions in Ulcerative Colitis
Title: Can a single central reader provide a reliable ground truth (GT) for training a machine learning (ML) model that predicts endoscopic disease activity in ulcerative colitis (UC)?
Date & Time: May 21, 5:15-5:30 PM PDT
Session Type: Research Forum
Presenter: Klaus Gottlieb, MD, JD (Senior Medical Fellow, Lilly)
Presentation No: 278
Location: Room 23 - San Diego Convention Center
Title: Development of a novel ulcerative colitis (UC) endoscopic activity prediction model using machine learning (ML)
Date & Time: May 23, 12:30-1:30 PM PDT
Session Type: Poster
Presenter: David T. Rubin, MD (Joseph B. Kirsner Professor of Medicine and Chief, Section of Gastroenterology, Hepatology and Nutrition, UChicago Medicine and Chair of Iterative Scopes Advisory Board)
Presentation No: Mo 1639
Location: Poster Hall - San Diego Convention Center
SKOUT, Polyp Detection in Colonoscopy
Title: Increased adenoma detection with the use of a novel computer-aided detection device, SKOUT™: Results of a multicenter randomized clinical trial in the US
Date & Time: May 24, 8:15-8:30 AM PDT
Session Type: Plenary
Presenter: Aasma Shaukat, MD, MPH (Robert M. and Mary H. Glickman Professor of Medicine and Gastroenterology, Department of Medicine, NYU Grossman School of Medicine and a leader of Iterative Scopes Advisory Board)
Presentation No: 5095
Location: Room 3 - San Diego Convention Center
Iterative Scopes was founded in 2017 as a spin-out of the Massachusetts Institute of Technology (MIT) by Dr. Ng, a physician-entrepreneur who developed the company's foundational concepts while he was a student at MIT and Harvard. In December 2021, the company and its investors closed a $150 million Series B financing, which attracted a roster of A-list venture capitalists, big pharmaceutical companies' venture arms, and individual leaders in healthcare.
About Iterative Scopes
Iterative Scopes is a pioneer in the application of artificial intelligence-based precision medicine for gastroenterology with the aim of helping to optimize clinical trials investigating treatment of inflammatory bowel disease (IBD). The technology is also designed to potentially enhance colorectal cancer screenings. Its powerful, proprietary artificial intelligence and computer vision technologies have the potential to improve the accuracy and consistency of endoscopy readings. Iterative Scopes is initially applying these advances to impact polyp detection for colorectal cancer screenings and working to standardize disease severity characterization for inflammatory bowel disease. Longer term, the company plans to establish more meaningful endpoints for GI diseases, which may be better predictors of therapeutic response and disease outcomes. Spun out of MIT in 2017, the company is based in Cambridge, Massachusetts.
About Digestive Disease Week (DDW)
Digestive Disease Week (DDW) is the largest international gathering of physicians, researchers and academics in the fields of gastroenterology, hepatology, endoscopy and gastrointestinal surgery. Jointly sponsored by the American Association for the Study of Liver Diseases (AASLD), the American Gastroenterological Association (AGA) Institute, the American Society for Gastrointestinal Endoscopy (ASGE) and the Society for Surgery of the Alimentary Tract (SSAT), DDW is an in-person and virtual meeting from May 21-24, 2022. The meeting showcases more than 5,000 abstracts and hundreds of lectures on the latest advances in GI research, medicine and technology. More information can be found at http://www.ddw.org.
Metanomic Acquires Intoolab, Developers of the First Bayesian Network Artificial Intelligence Engine – Business Wire
Posted: at 6:55 pm
EDINBURGH, Scotland--(BUSINESS WIRE)--Today, Metanomic (https://www.metanomic.net/) announces it has acquired Intoolab AI (https://www.intoolab.com/), a Bayesian Network artificial intelligence company, to expand and improve data analysis and AI across video games and Web3. This acquisition augments Metanomic's current game economy infrastructure, which allows developers to build, simulate and run balanced game economies and core gameplay loops in a live, real-time environment.
The addition of artificial intelligence helps developers create better experiences through intelligent insight into player behaviour at run-time. As part of the acquisition, the existing solution will be rebranded as Thunderstruck and will extend the capabilities of the Metanomic Engine.
A more intelligent approach to video game development and Web3 analytics
Thunderstruck is built on a new approach to artificial intelligence: rather than off-the-shelf machine learning algorithms, it uses a powerful combination of deep learning and Dynamic Bayesian Networks. The solution has already been successfully proven in healthcare, and the technology is now being applied to video games and Web3 in a number of core applications; a simplified sketch of the underlying idea follows.
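As a minimal, hypothetical sketch of what a Dynamic Bayesian Network does, here is hand-rolled forward filtering over an invented two-state player model ("engaged" vs. "churning"). This is illustrative only, not Intoolab's Tzager or the Thunderstruck implementation:

```python
# Belief about a hidden state is updated over time from noisy observations.
import numpy as np

# P(state_t | state_{t-1}): rows = previous state, cols = next state
transition = np.array([[0.9, 0.1],    # engaged -> engaged / churning
                       [0.2, 0.8]])   # churning -> engaged / churning
# P(observation | state): cols index the observation "played today?" (no=0, yes=1)
emission = np.array([[0.2, 0.8],      # engaged players usually play
                     [0.7, 0.3]])     # churning players usually don't

belief = np.array([0.5, 0.5])         # prior over [engaged, churning]
for played_today in [1, 1, 0, 0, 0]:  # one observation per day
    belief = belief @ transition              # predict step
    belief *= emission[:, played_today]       # Bayesian update on evidence
    belief /= belief.sum()                    # renormalize
    print(f"P(churning) = {belief[1]:.2f}")
```

The appeal of this style of model over a pure deep-learning classifier is that the belief is an explicit, inspectable probability that evolves with each new observation.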
"Web3 and the metaverse are the next big thing that will affect our everyday lives in so many dimensions, and we couldn't be at a better place and time with our technology to take the next step and introduce Dynamic Bayesian Networks to this untapped and exciting new market," said Nikos Tzagkarakis, CEO and co-founder of Intoolab, who joins Metanomic as its incoming Chief AI Officer.
"We knew from the first time we met Theo and the rest of the team that joining the incredible Metanomic powerhouse is our best bet to be part of an unmatched platform for Web3, game economies and the metaverse."
"The incredible traction we've seen across the traditional games industry, Web3, and play-and-earn games for the beta of our economy platform already puts us in a strong position to help developers build more fun and engaging experiences," said Theo Priestley, CEO of Metanomic.
"With the acquisition of an artificial intelligence platform we can deliver more power to developers through real-time player insight as well as the most advanced game economy solution in the industry.
With the acquisition also comes a commitment to continue building the company's development hub in Greece, supporting the local startup community and tech economy.
Thunderstruck is available for trial and partner integration from today. The Metanomic Engine is a complete, free-to-use game economy design and run-time solution for game developers across Web3 and traditional games and is currently in a closed beta. To sign up for the waiting list, email hello@metanomic.net.
About Metanomic
Metanomic is the first and only complete real-time economy-as-a-service platform for developers. Built by professional economists and game designers, the platform utilizes patented algorithms to easily deploy plug-and-play, interoperable and infinitely scalable game and creator economies ready for Web3, metaverse, and play-and-earn games. The platform allows developers to easily and quickly build a fully scalable and configurable play-and-earn economy that features asset creation engines and fully balanced player markets.
About Intoolab
Founded in 2017, Intoolab has always been technology-first, building the first production-ready Dynamic Bayesian Network intelligence that can be a milestone neurosymbolic step towards AI frameworks that understand human concepts structurally, not just correlationally.
Tzager is the first Bayesian inference AI framework built for biomedical research, drug discovery, and personalized medicine, with main features including Pathway Simulations, Predictor Research/Models, and Literature Intelligence. Tzager is not just another deep learning algorithm trained to solve very specific problems, but an intelligent system with its own framework.
Virtual hospital operations summit to focus on role of Artificial Intelligence in achieving ROI for system wide impact – Becker’s Hospital Review
Posted: at 6:55 pm
While the healthcare industry has faced unprecedented operational constraints in recent years, including limited physical capacity, vulnerable patients, loss of revenue, and shortages of staff, it has also reaped opportunities to adapt and excel.
Health systems and hospitals have been especially primed to rapidly adopt digital transformation and technology initiatives to predict and manage optimal scheduling, staffing, and patient flow.
With the support of AI-based analytics tools, health systems can better use the assets in which they have already invested. By utilizing critical resources like operating rooms, infusion clinics, and inpatient bed units effectively, they can improve financial performance, relieve burden on staff, and treat higher volumes of patients in shorter periods of time. Many healthcare organizations have already deployed analytics to achieve these results, quickly seeing a large return on a relatively small investment in implementation.
With their upcoming Transform Hospital Operations Summit, hosted in partnership with Becker's, healthcare analytics expert LeanTaaS will share these providers' journeys, results, and stories. Driven by a focus on deploying AI to achieve better return on investment, the two-day program will connect over 1,000 attendees with health system executives, technology leaders, and industry experts to discuss how hospitals across the U.S. use AI and predictive and prescriptive analytics tools to solve critical challenges arising from case backlogs, provider burnout and staffing shortages, and increased patient wait times.
Summit attendees will learn about success stories from C-suite hospital and health system leaders who have transformed operations and unlocked revenue by using AI and machine learning solutions. These sessions will encompass a wide range of perspectives on this topic, including strategies for breaking through operational barriers with partnerships, the urgency behind implementing high-powered AI, best practices for rolling out disruptive new technology at scale, the potential for AI to revolutionize the healthcare industry, and more.
Primary speakers include Dr. Patrick McGill, EVP, Chief Transformation Officer at Community Health Network; Dr. Douglas Flora, Executive Medical Director of Oncology Services at St. Elizabeth Healthcare; and Dr. Eric Eskioglu, Executive Vice President, Chief Medical and Scientific Officer and Co-Director of the Institute of Innovation and Artificial Intelligence at Novant Health.
"We're excited to speak at Transform and share our experiences with AI and analytics, but just as importantly, about how we're building a culture that supports transformation through a commitment to clinical excellence, workforce development, and process improvement," shared Aaron Miri, Senior Vice President and Chief Digital and Information Officer, and Amy Huveldt, VP of Performance Excellence, both of Baptist Health, who will also be primary speakers.
Further speakers include healthcare leaders and experts from Cone Health, Mount Nittany Medical Center, MultiCare, UCHealth, University of Utah Health, Vanderbilt-Ingram Cancer Center, and Yale New Haven Health. These sessions will feature healthcare executives highlighting the results they have achieved by leveraging AI in their operations, including increasing surgical case-length accuracy by 4%, reducing infusion patient wait times by 30%, and decreasing inpatient time-to-admit by 16%, despite an 18% increase in COVID-19 census. Attendees can build a hospital operations summit schedule based on interest and specialty, choosing from three Learning Tracks: Perioperative, Infusion Centers, and Inpatient Beds.
"As the healthcare industry continues to grapple with lingering effects of the pandemic, it's no secret that health systems need to do more with less while also prioritizing the valuable time and wellbeing of staff. We're looking forward to our June Transform event, as it will home in on critical healthcare issues and how AI can support hardworking hospital leaders," said Mohan Giridharadas, LeanTaaS founder and CEO. "This event will provide all attendees with the resources needed to compete and thrive by making smarter capacity management decisions every single day."
Transform registration is free for all attendees. To register and learn more about the sessions and speakers that will be featured at the summit, view the conference agenda here.
Artificial intelligence to be UAE’s top sector over next decade, survey finds – The National
Posted: at 6:55 pm
Artificial intelligence is being tipped to be the UAE's most important industry over the next 10 years, with universities urged to step up efforts to prepare the next generation of high-tech workers.
The fast-rising sector was ranked ahead of construction, electronics, aerospace, robotics, design engineering and IT and cybersecurity in a poll of technology and engineering employees in the Emirates.
The UAE government is driving forward with ambitious plans to establish itself as a global AI hub.
In 2017, the country appointed Omar Al Olama as its first Minister of State for Artificial Intelligence and later adopted the National Artificial Intelligence Strategy 2031 to promote the growth of the cutting-edge technology.
The Mohamed bin Zayed University of Artificial Intelligence in Abu Dhabi was established in 2019 to develop the skills of top talent from across the world to lead workplaces of the future.
The survey, commissioned by the UK-based Institution of Engineering and Technology (IET), and carried out by YouGov, polled 325 employers and employees in the UAE in December 2021 and January 2022.
Julian Young, IET president, said artificial intelligence would most certainly continue to grow in prominence.
"Alongside that, I would almost add everything to do with digitalisation. Everything in the future, in a highly advanced technological community, will be about digitalisation and getting computers to do far more work for us," said Mr Young.
"So if one has a skilled workforce in this field, one would be able to make a profitable company and a profitable organisation and be a truly global player.
"I'm not surprised to see that these are the skill sets that are required in three years' time, that these are the skill sets required in 10 years' time.
Sir Julian Young, President of the Institution of Engineering and Technology, said artificial intelligence would most certainly continue to grow in prominence. Photo: the Institution of Engineering and Technology
"If you pick up artificial intelligence and then think about the type of courses that people are undertaking. Do we need courses in artificial intelligence? Yes, of course. But in the more traditional areas of engineering, mechanical, electrical, electronic aerospace, there needs to be a digital component."
He said all of the traditional industries need digital and software and computer science inputs to be able to make the best of their workforce.
Ian Mercer, head of international operations for the Institution of Engineering and Technology, said universities could use the findings to ensure their courses run parallel to the immediate and future demands of the economy.
"If I were an academician, then I would be thinking, 'if that's where the where the industry is going to go then the courses that we're going to offer to students probably need to be ramped up to be where the need is going to be'," Mr Mercer said.
"At the end of the day, universities want jobs to be available for the people that they put through the system.
"If you look at the the ambitions of the UAE government, they want to become a tech hub of the world."
He said that in a post-oil and gas economy, technology may be one of the main workforce providers in the region.
A graduation ceremony at the Mohamed bin Zayed University of Artificial Intelligence in Abu Dhabi. All photos: Khushnum Bhandari / The National
The UAE is continuing to explore ways in which artificial intelligence can be used to boost business, make government departments more agile and efficient, and support health services.
Artificial intelligence could soon be used to tailor UAE government employees' working hours to their own personal productivity.
The initiative, which is being studied by the Federal Authority for Government Human Resources, is one of a host of practical applications for AI in everyday life.
In March, 41 business leaders who took a three-month course at Mohamed bin Zayed University of Artificial Intelligence celebrated their graduation.
The course aimed to support UAE government and business sectors. Participants were required to complete 12 rigorous weeks of coursework, lectures and collaborative project work.
Dr Jamal Al Kaabi, undersecretary at the Department of Health in Abu Dhabi, joined the programme after the Covid-19 pandemic made him realise the potential of artificial intelligence.
He believes wearable technology and AI could be crucial in providing home services and follow-up care for the elderly.
Updated: May 20, 2022, 5:20 AM
The Accuracy of Artificial Intelligence in the Endoscopic Diagnosis of Early Gastric Cancer: Pooled Analysis Study – Newswise
Posted: at 6:55 pm
Background: Artificial intelligence (AI) for gastric cancer diagnosis has been discussed in recent years. The role of AI in early gastric cancer is more important than in advanced gastric cancer since early gastric cancer is not easily identified in clinical practice. However, to our knowledge, past syntheses appear to have limited focus on the populations with early gastric cancer.

Objective: The purpose of this study is to evaluate the diagnostic accuracy of AI in the diagnosis of early gastric cancer from endoscopic images.

Methods: We conducted a systematic review from database inception to June 2020 of all studies assessing the performance of AI in the endoscopic diagnosis of early gastric cancer. Studies not concerning early gastric cancer were excluded. The outcome of interest was the diagnostic accuracy (comprising sensitivity, specificity, and accuracy) of AI systems. Study quality was assessed on the basis of the revised Quality Assessment of Diagnostic Accuracy Studies. Meta-analysis was primarily based on a bivariate mixed-effects model. A summary receiver operating curve and a hierarchical summary receiver operating curve were constructed, and the area under the curve was computed.

Results: We analyzed 12 retrospective case control studies (n=11,685) in which AI identified early gastric cancer from endoscopic images. The pooled sensitivity and specificity of AI for early gastric cancer diagnosis were 0.86 (95% CI 0.75-0.92) and 0.90 (95% CI 0.84-0.93), respectively. The area under the curve was 0.94. Sensitivity analysis of studies using support vector machines and narrow-band imaging demonstrated more consistent results.

Conclusions: For early gastric cancer, to our knowledge, this was the first synthesis study on the use of endoscopic images in AI in diagnosis. AI may support the diagnosis of early gastric cancer. However, the collocation of imaging techniques and optimal algorithms remain unclear. Competing models of AI for the diagnosis of early gastric cancer are worthy of future investigation.

Trial Registration: PROSPERO CRD42020193223; https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=193223
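As a rough illustration of the pooling step: the study itself used a bivariate mixed-effects model (pooling sensitivity and specificity jointly), but a simplified univariate random-effects sketch on the logit scale, with invented study counts, conveys the idea:

```python
# DerSimonian-Laird random-effects pooling of per-study sensitivity on the
# logit scale. Hypothetical (tp, fn) counts; not the paper's actual data.
import numpy as np

studies = [(90, 10), (75, 25), (160, 30), (45, 5)]  # (true pos, false neg)

logits, variances = [], []
for tp, fn in studies:
    tp, fn = tp + 0.5, fn + 0.5          # continuity correction
    p = tp / (tp + fn)                   # per-study sensitivity
    logits.append(np.log(p / (1 - p)))
    variances.append(1 / tp + 1 / fn)    # variance of the log-odds

logits, variances = np.array(logits), np.array(variances)
w = 1 / variances                        # fixed-effect weights
mu_fe = np.sum(w * logits) / np.sum(w)
Q = np.sum(w * (logits - mu_fe) ** 2)    # heterogeneity statistic
tau2 = max(0.0, (Q - (len(studies) - 1))
           / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

w_re = 1 / (variances + tau2)            # random-effects weights
mu_re = np.sum(w_re * logits) / np.sum(w_re)
pooled_sens = 1 / (1 + np.exp(-mu_re))   # back-transform from logit scale
print(f"Pooled sensitivity: {pooled_sens:.2f}")
```

The bivariate model the authors used additionally estimates the correlation between sensitivity and specificity across studies, which is what makes the summary receiver operating curve possible.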
Artificial Intelligence/Machine Learning and the Future of National Security – smallwarsjournal
Posted: May 11, 2022 at 11:50 am
Artificial Intelligence/Machine Learning and the Future of National Security
AI is a once-in-a lifetime commercial and defense game changer
By Steve Blank
Hundreds of billions in public and private capital is being invested in AI and Machine Learning companies. The number of patents filed in 2021 is more than 30 times higher than in 2015 as companies and countries across the world have realized that AI and Machine Learning will be a major disruptor and potentially change the balance of military power.
Until recently, the hype exceeded reality. Today, however, advances in AI in several important areas equal and even surpass human capabilities.
If you haven't paid attention, now's the time.
AI and the DoD
The Department of Defense considers AI such a foundational set of technologies that it started a dedicated organization -- the JAIC (Joint Artificial Intelligence Center) -- to enable and implement artificial intelligence across the Department. The JAIC provides the infrastructure, tools, and technical expertise for DoD users to successfully build and deploy their AI-accelerated projects.
Some specific defense-related AI applications are listed later in this document.
We're in the Middle of a Revolution
Imagine it's 1950, and you're a visitor who traveled back in time from today. Your job is to explain the impact computers will have on business, defense and society to people who are using manual calculators and slide rules. You succeed in convincing one company and a government to adopt computers and learn to code much faster than their competitors/adversaries. And they figure out how they could digitally enable their business: supply chain, customer interactions, etc. Think about the competitive edge they'd have by today in business or as a nation. They'd steamroll everyone.
That's where we are today with artificial intelligence and machine learning. These technologies will transform businesses and government agencies. Today, hundreds of billions of dollars in private capital have been invested in thousands of AI startups. The U.S. Department of Defense has created a dedicated organization to ensure its deployment.
But What Is It?
Compared to the classic computing weve had for the last 75 years, AI has led to new types of applications, e.g. facial recognition; new types of algorithms, e.g. machine learning; new types of computer architectures, e.g. neural nets; new hardware, e.g. GPUs; new types of software developers, e.g. data scientists; all under the overarching theme of artificial intelligence. The sum of these feels like buzzword bingo. But they herald a sea change in what computers are capable of doing, how they do it, and what hardware and software is needed to do it.
This brief will attempt to describe all of it.
New Words to Define Old Things
One of the reasons the world of AI/ML is confusing is that its created its own language and vocabulary. It uses new words to define programming steps, job descriptions, development tools, etc. But once you understand how the new world maps onto the classic computing world, it starts to make sense. So first a short list of some key definitions.
AI/ML - a shorthand for Artificial Intelligence/Machine Learning
Artificial Intelligence (AI) - a catchall term used to describe Intelligent machines which can solve problems, make/suggest decisions and perform tasks that have traditionally required humans to do. AI is not a single thing, but a constellation of different technologies.
Machine Learning (ML) - a subfield of artificial intelligence. Humans combine data with algorithms to train a model using that data. This trained model can then make predictions on new data (is this picture a cat, a dog or a person?) or carry out decision-making processes (like understanding text and images) without being explicitly programmed to do so. (A minimal train-then-predict sketch appears after these definitions.)
Machine learning algorithms - computer programs that adjust themselves to perform better as they are exposed to more data.
The "learning" part of machine learning means these programs change how they process data over time. In other words, a machine-learning algorithm can adjust its own settings, given feedback on its previous performance in making predictions about a collection of data (images, text, etc.).
Deep Learning/Neural Nets - a subfield of machine learning. Neural networks make up the backbone of deep learning. (The "deep" in deep learning refers to the depth of layers in a neural network.) Neural nets are effective at a variety of tasks (e.g., image classification, speech recognition). A deep learning neural net algorithm is given massive volumes of data and a task to perform, such as classification. The resulting model is capable of solving complex tasks such as recognizing objects within an image and translating speech in real time. (In reality, the neural net is a logical concept that gets mapped onto a physical set of specialized processors.)
Data Science - a new field of computer science. Broadly, it encompasses data systems and processes aimed at maintaining data sets and deriving meaning out of them. In the context of AI, it's the practice of people who are doing machine learning.
Data Scientists - responsible for extracting insights that help businesses make decisions. They explore and analyze data using machine learning platforms to create models about customers, processes, risks, or whatever they're trying to predict.
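As a concrete companion to the definitions above, here is a minimal train-then-predict sketch using scikit-learn's bundled iris dataset; it is illustrative only:

```python
# Train a model on labeled examples, then predict on data it has never seen.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Training": the algorithm adjusts itself to fit the labeled examples.
model = RandomForestClassifier().fit(X_train, y_train)

# "Inference": the trained model makes predictions on held-out data.
print(model.predict(X_test[:5]))
print(f"accuracy: {model.score(X_test, y_test):.2f}")
```

Note that nobody wrote a rule like "petal length over 2.5 cm means versicolor"; the rules live inside the trained model.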
What's Different? Why is Machine Learning Possible Now?
To understand why AI/machine learning can do these things, let's compare them to computers before AI came on the scene. (Warning: simplified examples below.)
Classic Computers
For the last 75 years, computers (we'll call these "classic computers") have both shrunk to pocket size (iPhones) and grown to the size of warehouses (cloud data centers), yet they all continued to operate essentially the same way.
Classic Computers - Programming
Classic computers are designed to do anything a human explicitly tells them to do. People (programmers) write software code (programming) to develop applications, thinking a priori about all the rules, logic and knowledge that need to be built into an application so that it can deliver a specific result. These rules are explicitly coded into a program using a software language (Python, JavaScript, C#, Rust, ...). A toy example of this style follows.
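Here is a toy illustration (invented for this piece, not from the original article) of explicit-rule programming: every behavior below comes from a rule a person thought up and hand-coded.

```python
# "Classic" programming: fixed, human-written rules, nothing learned from data.
def is_spam(subject: str) -> bool:
    """Flag an email subject using hand-coded rules."""
    if subject.isupper():                # rule 1: ALL-CAPS subjects are suspicious
        return True
    lowered = subject.lower()
    if "free money" in lowered:          # rule 2: a known spammy phrase
        return True
    if lowered.count("!") > 3:           # rule 3: excessive exclamation marks
        return True
    return False                         # anything the rules don't cover passes

print(is_spam("FREE MONEY NOW!!!!"))     # True
print(is_spam("Meeting notes"))          # False
```

The machine learning examples later in this piece invert this arrangement: the rules come from data, not from a programmer's imagination.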
Classic Computers - Compiling
The code is then compiled, using software to translate the programmer's source code into a version that can be run on a target computer/browser/phone. For most of today's programs, the computer used to develop and compile the code does not have to be that much faster than the one that will run it.
Classic Computers - Running/Executing Programs
Once a program is coded and compiled, it can be deployed and run (executed) on a desktop computer, phone, in a browser window, a data center cluster, in special hardware, etc. Programs/applications can be games, social media, office applications, missile guidance systems, bitcoin mining, or even operating systems e.g. Linux, Windows, IOS. These programs run on the same type of classic computer architectures they were programmed in.
Classic Computers - Software Updates, New Features
For programs written for classic computers, software developers receive bug reports, monitor for security breaches, and send out regular software updates that fix bugs, increase performance and at times add new features.
Classic Computers - Hardware
The CPUs (Central Processing Units) used to write and run these classic computer applications all have the same basic design (architecture). The CPUs are designed to handle a wide range of tasks quickly in a serial fashion. These CPUs range from Intel x86 chips and the ARM cores on Apple's M1 SoC to the z15 in IBM mainframes.
Machine Learning
In contrast to programming classic computers with fixed rules, machine learning is just like it sounds: we can train/teach a computer to "learn by example" by feeding it lots and lots of examples. (For images, a rule of thumb is that a machine learning algorithm needs at least 5,000 labeled examples of each category in order to produce an AI model with decent performance.) Once it is trained, the computer runs on its own and can make predictions and/or complex decisions.
Just as traditional programming has three steps - first coding a program, next compiling it and then running it - machine learning also has three steps: training (teaching), pruning and inference (predicting by itself).
Machine Learning - Training
Unlike programming classic computers with explicit rules, training is the process of teaching a computer to perform a task, e.g. recognize faces, classify signals, or understand text. (Now you know why you're asked to click on images of traffic lights, crosswalks, stop signs, and buses, or to type the text of a scanned image, in reCAPTCHA.) Humans provide massive volumes of training data (the more data, the better the model's performance) and select the appropriate algorithm to find the best optimized outcome.
(See the detailed machine learning pipeline later in this section for the gory details.)
By running an algorithm selected by a data scientist on a set of training data, the machine learning system generates the rules embedded in a trained model. The system learns from examples (training data), rather than being explicitly programmed. (See the Types of Machine Learning section for more detail.) This self-correction is pretty cool: an input to a neural net results in a guess about what that input is. The neural net then takes its guess and compares it to a ground truth about the data, effectively asking an expert, "Did I get this right?" The difference between the network's guess and the ground truth is its error. The network measures that error and walks the error back over its model, adjusting weights to the extent that they contributed to the error.
Just to make the point again: the algorithms, combined with the training data - not external human computer programmers - create the rules that the AI uses. The resulting model is capable of solving complex tasks such as recognizing objects it's never seen before, translating text or speech, or controlling a drone swarm. (A bare-bones numeric sketch of this error-correction loop follows.)
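The measure-error-and-adjust-weights loop described above can be shown in a few lines. This sketch is a single artificial "weight" learning y = 2x by gradient descent; real systems run the same shape of loop over millions of weights:

```python
# One "neuron" learning y = 2x from examples via gradient descent.
examples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, ground truth)

weight = 0.0           # the model starts out knowing nothing
learning_rate = 0.05

for epoch in range(200):
    for x, truth in examples:
        guess = weight * x                    # the network's guess
        error = guess - truth                 # compare guess to ground truth
        weight -= learning_rate * error * x   # walk the error back over the model
print(f"learned weight: {weight:.3f}")        # converges near 2.0
```

No one ever told the program "the answer is 2"; the rule emerged from the data and the feedback loop.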
(Instead of building a model from scratch, for common machine learning tasks you can now buy pretrained models from others, much like chip designers buying IP cores.)
Machine Learning Training - Hardware
Training a machine learning model is a very computationally intensive task. AI hardware must be able to perform thousands of multiplications and additions in a mathematical process called matrix multiplication. It requires specialized chips to run fast. (See the AI hardware section for details.)
Machine Learning - Simplification via pruning, quantization, distillation
Just like classic computer code needs to be compiled and optimized before it is deployed on its target hardware, machine learning models are simplified and modified (pruned) to use less computing power, energy, and memory before they're deployed to run on their hardware. A toy sketch of pruning and quantization follows.
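Here is a toy NumPy sketch of two of the simplifications named above, applied to an invented weight matrix: magnitude pruning (zero out the smallest weights) and 8-bit quantization (store weights as small integers plus one scale factor). Production toolchains do this far more carefully, but the arithmetic is the same idea:

```python
import numpy as np

weights = np.random.default_rng(0).normal(size=(4, 4)).astype(np.float32)

# Pruning: drop the 50% of weights with the smallest magnitude.
threshold = np.quantile(np.abs(weights), 0.5)
pruned = np.where(np.abs(weights) >= threshold, weights, 0.0)

# Quantization: map float32 weights onto int8, keeping one float scale factor.
scale = np.abs(pruned).max() / 127.0
quantized = np.round(pruned / scale).astype(np.int8)   # 4x smaller storage
restored = quantized.astype(np.float32) * scale        # approximate weights

print(f"nonzero weights: {np.count_nonzero(pruned)}/{weights.size}")
print(f"max quantization error: {np.abs(restored - pruned).max():.4f}")
```

The trade is deliberate: a slightly less accurate model in exchange for one that fits on a phone, a drone, or an embedded sensor.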
Eigen Technologies Named to Forbes AI 50 List of Top Artificial Intelligence Companies of 2022 – Business Wire
Posted: at 11:50 am
NEW YORK--(BUSINESS WIRE)--Eigen Technologies (Eigen), the global intelligent document processing (IDP) provider, is proud to announce that the company has been named on the fourth annual Forbes AI 50 list 2022 for North America. Produced in partnership with Sequoia Capital, this list recognizes the standout privately held companies in North America that are making the most interesting and impactful uses of AI.
In selecting honorees for this year's list, Forbes evaluated hundreds of submissions, handpicking the top 50 most compelling companies. These are the businesses that are leading in the development and use of AI technology. With its focus on no-code, easy-to-use AI-powered IDP software with a "small data" approach, Eigen is a standout example of the type of business that embodies these qualities.
Dr. Lewis Z. Liu, Co-Founder & CEO of Eigen Technologies, said:
"Eigen has always been focused on taking cutting-edge technology and applying it to solve real-world business problems, so we are absolutely thrilled to be recognized by Forbes as one of the most impactful AI businesses. We have won many awards over the years, but being listed among these AI innovators is particularly special, as it recognizes the very qualities that we seek to live by at Eigen. IDP technology, such as ours, is at the forefront of the next revolution in how organizations make use of the 80-90% of their data that is currently trapped and unusable. We pioneered the small data approach that is essential to turning this information into structured, usable data, and as a result we're seeing fantastic traction in the market. We see this award as a recognition of our pioneering work that shows we're on the right path as we scale."
About Eigen Technologies
Eigen is an intelligent document processing (IDP) company that enables its clients to quickly and precisely extract answers from their documents, so they can better manage risk, scale operations, automate processes and navigate dynamic regulatory environments.
Eigen's customizable, no-code AI-powered platform uses machine learning to automate the extraction of answers from documents and can be applied to a wide variety of use cases. It understands context and delivers better accuracy on far fewer training documents, while protecting the security of clients' data. (A generic sketch of this style of extraction follows.)
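As a hypothetical illustration of the extraction style IDP tools automate, here is a sketch using the open-source Hugging Face transformers question-answering pipeline. This shows the concept only; it is not Eigen's platform, model, or API:

```python
# Extractive question answering over a document: pull the answer span
# (with a confidence score) straight out of the source text.
from transformers import pipeline

qa = pipeline("question-answering")  # downloads a default extractive QA model

contract = (
    "This Agreement commences on 1 June 2022 and terminates on 31 May 2025. "
    "The governing law of this Agreement is the law of England and Wales."
)

for question in ["When does the agreement terminate?",
                 "What is the governing law?"]:
    answer = qa(question=question, context=contract)
    print(f"{question} -> {answer['answer']} (score {answer['score']:.2f})")
```

The confidence score matters in practice: low-scoring extractions can be routed to a human reviewer instead of flowing straight into downstream systems.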
Our clients include some of the best-known and respected names in finance, insurance, law and professional services, including Goldman Sachs, ING, BlackRock, Aviva and Allen & Overy. Almost half of all global systemically important banks (G-SIBs) use Eigen to overcome their document and data challenges. Eigen is backed by Goldman Sachs, Temasek, Lakestar, Dawn Capital, ING Ventures, Anthemis and the Sony Innovation Fund by IGV.
Follow this link:
Posted in Artificial Intelligence
Comments Off on Eigen Technologies Named to Forbes AI 50 List of Top Artificial Intelligence Companies of 2022 – Business Wire
National Technology Day: How artificial intelligence is helping MSMEs to optimize processes, accelerate growth – The Financial Express
Posted: at 11:50 am
Technology for MSMEs: A shift from the good old websites and emails towards more efficient technology solutions, in the name of digital adoption, is apparent among MSMEs that had long shied away from evolving technologies. The shift has largely been driven by better affordability, as India's growing on-demand, pay-as-you-go software-as-a-service (SaaS) ecosystem liberates small businesses from the cost conundrum to some extent.
As India observes National Technology Day on Wednesday, commemorating its entry into the elite club of nuclear-weapon states with the Pokhran tests in 1998, the day is also an occasion to remember the country's achievements in science and innovation. While a large number of MSMEs are yet to fully benefit from the technology revolution, some of them have certainly been warming up to new-age solutions such as artificial intelligence (AI) and using them for better growth.
AI is being implemented across multiple use cases. For instance, Delhi-based long-haul logistics provider JCCI Logistics has deployed AI and internet of things (IoT) solutions to manage its fleet of around 150 trucks. The company, launched in 2004, uses on-demand fleet management software for GPS tracking of vehicles, fuel management, driver analytics, and route planning.
Vehicles need to run as much as possible and that's what matters. Before deploying this solution in 2020, our monthly cumulative running was around 8,000 to 10,000 kilometres. It has increased by around 20 per cent now. The jump, I think, is primarily because of the on-board diagnostics (OBD) device that you can fit in a vehicle to get data related to fuel consumption, the driver's driving behaviour, whether there is unnecessary hard acceleration or not, etc., Sachin Jain, Founder, JCCI Logistics, told Financial Express Online.
OBD is essentially a machine learning (ML) and internet of things (IoT) based device that gets signals from different sensors in a vehicle and conveys them to the user's dashboard with the help of the software.
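The article does not describe the device's internals. Purely as an illustration of the kind of rule such a system might apply, here is a minimal Python sketch that flags hard-acceleration events from timestamped speed readings; the 8 km/h-per-second threshold and the sample data are hypothetical.

```python
# Hypothetical sketch: flag hard-acceleration events from OBD speed samples.
# Readings are (timestamp_seconds, speed_kmh) pairs; threshold is illustrative.
HARD_ACCEL_KMH_PER_S = 8.0

def hard_accel_events(readings):
    events = []
    for (t0, v0), (t1, v1) in zip(readings, readings[1:]):
        dt = t1 - t0
        if dt <= 0:
            continue  # skip out-of-order or duplicate samples
        accel = (v1 - v0) / dt  # km/h gained per second
        if accel > HARD_ACCEL_KMH_PER_S:
            events.append((t1, accel))
    return events

samples = [(0, 20), (1, 24), (2, 37), (3, 40)]  # 13 km/h gained in 1 s
print(hard_accel_events(samples))  # [(2, 13.0)]
```

A production system would work on a continuous sensor stream and feed such events into the driver-analytics dashboard.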
JCCI Logistics has been among the post-Covid adopters of deep technology solutions, as the pandemic perhaps necessitated the use of software and digital tools for sustenance.
Covid might have caused a faster switch to some AI/ML applications since the labor force was locked up. AI/ML provides a significant opportunity for reduction in input costs, particularly those of human capital. The advent of edge AI/ML will further hasten adoption, particularly as it gets married to IoT on small devices and sensors that are available at scale and used routinely by businesses of all sizes, Utkarsh Sinha, Managing Director at advisory firm Bexley Advisors, told Financial Express Online.
Among the top sectors where the use of AI accelerated during the pandemic were restaurants, as the crisis pushed eateries to look for ways to optimize their processes, from sales to inventory management and more.
Kabir Suri, who runs Azure Hospitality, which owns restaurant chains like Mamagoto, Dhaba and Speedy Chow, has been using AI in the company's operations for the past five years, while Covid only reinforced his commitment to AI for efficiency and growth. We have had a direct saving of 30 per cent in the past five years, along with getting customer insights due to AI, which has led to an uptick in revenue as well. Five years back we had around 10 outlets and now have 60 across India, Suri told Financial Express Online.
The company has an in-house AI solution that shows live sales, total transactions, menus, items sold, total consumption per restaurant, etc. The solution captures data from every restaurant throughout the day on a real-time basis and consolidates it for analysis on its dashboard. This becomes important for restaurant chains to understand consumer-behaviour patterns and the impact of different occasions on business, like festivals such as Navratras particularly in the North, Christmas in Goa, and some other festivals in the South, said Suri.
Moreover, the AI solution at Azure Hospitality helps Suri control the HR module as well. You can look at your salary component, leaves, attendance, holidays, payslips, etc., through a single system every day whenever you want. Basically, AI helps you make better decisions as you grow bigger by minimizing the impact of any uncertainty, Suri added.
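Azure's system is proprietary; purely as an illustration of the consolidation step described above, here is a small pandas sketch that rolls per-outlet transactions up into a dashboard-style summary. The outlet names and figures are made up.

```python
import pandas as pd

# Made-up transactions captured from outlets through the day.
sales = pd.DataFrame({
    "outlet": ["Mamagoto Delhi", "Mamagoto Delhi", "Dhaba Goa", "Dhaba Goa"],
    "item":   ["noodles", "dumplings", "dal makhani", "lassi"],
    "amount": [450.0, 320.0, 380.0, 120.0],
})

# Consolidate per outlet: live sales totals and transaction counts,
# the kind of figures a chain's dashboard would display.
summary = sales.groupby("outlet").agg(
    total_sales=("amount", "sum"),
    transactions=("amount", "count"),
)
print(summary)
```

The real system would refresh this continuously and layer analytics (seasonality, festival effects) on top of the consolidated data.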
Another sector that depends heavily on technology, and AI in particular, is tourism, for purposes ranging from travel booking via chatbots and flight-price forecasting (the current best price and likely future prices) to hotel and cab recommendations based on travel-related searches.
There is AI at every stage in tourism and aviation, Subhas Goyal, Founder and Chairman at B2B travel company STIC Travel, told Financial Express Online. The company is the exclusive General Sales Agent (GSA), a sales representative of a company in a specific region or country, for 11 international airlines in India, including United Airlines, Air China and Croatia Airlines.
For the past five years, STIC has been using the AI-based Microsoft Dynamics CRM to manage customer relationships, track sales leads and marketing, and streamline administrative processes in sales and marketing. The company is now also implementing a chatbot assistant to answer customer queries on its platform. Goyal noted that standard queries around bookings and holiday searches can be answered by the AI bot, while further details and feedback would involve manual intervention.
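STIC's bot is not described in detail; here is a minimal sketch of the routing pattern Goyal describes, where standard queries get automatic answers and everything else is handed to a person. The keywords and replies are invented for illustration.

```python
# Hypothetical sketch of the routing pattern described above: the bot
# answers standard queries and escalates everything else to a human agent.
CANNED_REPLIES = {
    "booking": "You can view or change your booking under 'My Trips'.",
    "holiday": "Our current holiday packages are listed on the Deals page.",
}

def answer(query: str) -> str:
    text = query.lower()
    for keyword, reply in CANNED_REPLIES.items():
        if keyword in text:
            return reply
    return "Connecting you to an agent for further details..."

print(answer("How do I change my booking?"))      # canned reply
print(answer("I have feedback about my last trip"))  # human fallback
```

Production chatbots replace the keyword match with an intent classifier, but the escalation path for non-standard queries is the same.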
Post-Covid, more MSMEs have started to use at least basic technology tools such as social media, online service aggregators, and company websites. According to a Crisil survey of around 540 micro and small units released in April this year, over 65 per cent of respondents adopted or upgraded their use of online aggregators, social media platforms, and company websites. Among sectors, manufacturing reported higher adoption, with 71 per cent of respondents adopting or upgrading their use of digital platforms, compared to 66 per cent in the services sector.
Good technology is invisible. AI/ML will soon form a fundamental layer in all operations and interactions for small businesses. As technology offerings scale, it will soon be easier to get good AI to do certain tasks than to get a human to do it. The impact of this on labor force utilization will be significant, added Sinha.
Read the rest here:
Posted in Artificial Intelligence
Comments Off on National Technology Day: How artificial intelligence is helping MSMEs to optimize processes, accelerate growth – The Financial Express
Artificial intelligence drives the way to net-zero emissions – Sustainability Magazine
Posted: at 11:50 am
Op-ed: Aaron Yeardley, Carbon Reduction Engineer, Tunley Engineering
The fourth industrial revolution (Industry 4.0) is already happening, and it's transforming the way manufacturing operations are carried out. Industry 4.0 is a product of the digital era, as automation and data exchange in manufacturing technologies shift the central industrial control system to a smart setup that bridges the physical and digital worlds, addressed via the Internet of Things (IoT).
Industry 4.0 is creating cyber-physical systems that can network a production process, enabling value creation and real-time optimisation. The main factor driving the revolution is advances in artificial intelligence (AI) and machine learning. The complex algorithms involved in AI use the data collected from cyber-physical systems, resulting in smart manufacturing.
The impact that Industry 4.0 will have on manufacturing will be astronomical, as operations can be automatically optimised to increase profit margins. However, the use of AI and smart manufacturing can also benefit the environment. The technologies used to optimise profits can likewise produce insights into a company's carbon footprint and accelerate its sustainability. Some of these methods are available to help companies reduce their GHG emissions now; others have the potential to reduce global GHG emissions in the future.
Scope 3 emissions are the emissions from a company's supply chain, covering both upstream and downstream activities. This means Scope 3 covers all of a company's GHG emission sources except those the company creates directly and those created by the electricity it uses. It comes as no surprise that, on average, Scope 3 emissions are 5.5 times greater than Scope 1 and Scope 2 emissions combined. Therefore, companies should ensure all three scopes are quantified in their GHG emissions baseline.
However, in comparison to Scope 1 and Scope 2 emissions, Scope 3 emissions are difficult to measure and calculate because of a lack of transparency in supply chains, weak connections with suppliers, and complex industrial standards that provide misleading information.
AI-based tools can help companies establish a baseline for Scope 3 emissions because they can model an entire supply chain, quickly and efficiently sorting through the large volumes of data collected from sensors. If a company deploys enough sensors across its whole area of operations, it can identify sources of emissions and even detect methane plumes.
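As a toy illustration of what the underlying baseline calculation looks like, here is a sketch using the standard activity-data-times-emission-factor formula; the suppliers, quantities, and factors are invented, and in practice the factors come from the emissions databases such tools sort through.

```python
# Toy Scope 3 baseline: emissions = activity data x emission factor.
# Suppliers, quantities, and factors (kgCO2e per unit) are invented.
supply_chain = [
    {"supplier": "Steel Co",    "activity": 120.0,  "factor": 1.85},  # tonnes
    {"supplier": "Freight Ltd", "activity": 9500.0, "factor": 0.11},  # tonne-km
    {"supplier": "Packaging",   "activity": 40.0,   "factor": 0.95},  # tonnes
]

for s in supply_chain:
    print(f'{s["supplier"]}: {s["activity"] * s["factor"]:.1f} kgCO2e')

baseline_kg = sum(s["activity"] * s["factor"] for s in supply_chain)
print(f"Scope 3 baseline: {baseline_kg:.1f} kgCO2e")
```

The hard part AI addresses is not this arithmetic but assembling trustworthy activity data and factors across thousands of suppliers.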
A digital twin is an AI model that works as a digital representation of a physical piece of equipment or an entire system. A digital twin can help industry optimise energy management by using AI surrogate models to better monitor and distribute energy resources and provide forecasts that allow for better preparation. A digital twin brings many sources of data onto a dashboard so that users can visualise them in real time. For example, a case study at Nanyang Technological University used digital twins across 200 campus buildings over five years and saved 31% in energy and 9,600 tCO2e. The research used IES's ICL technology to plan, operate, and manage campus facilities to minimise energy consumption.
Digital twins can serve as virtual replicas of building systems, industrial processes, vehicles, and much else. The virtual environment enables more testing and iteration, so that everything can be optimised to its best performance. This means digital twins can be used to optimise building management, producing smart strategies based on carbon reduction.
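As a minimal illustration of the digital-twin idea, here is a sketch of a first-order thermal model of a room run alongside "measured" readings to flag drift; the physics, constants, and data are simplified assumptions, not IES's technology.

```python
# Minimal digital-twin sketch: a first-order thermal model of a room,
# compared against sensor readings to flag drift. Constants are illustrative.
TAU = 3.0        # building time constant (hours), assumed
OUTSIDE = 12.0   # outside temperature (deg C), assumed

def twin_step(indoor, heater_gain, dt=1.0):
    """Predict the indoor temperature after dt hours."""
    return indoor + dt * ((OUTSIDE - indoor) / TAU + heater_gain)

measured = [20.0, 20.6, 21.4, 23.9]  # hourly readings; last one is anomalous
predicted = measured[0]
for hour, actual in enumerate(measured[1:], start=1):
    predicted = twin_step(predicted, heater_gain=3.5)
    drift = actual - predicted
    flag = "  <-- investigate" if abs(drift) > 1.0 else ""
    print(f"hour {hour}: predicted {predicted:.1f}C, measured {actual:.1f}C{flag}")
```

A real twin calibrates such a model continuously against sensor data and uses it to test control strategies virtually before applying them to the building.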
Predictive maintenance of machines and equipment used in industry is now becoming common practice because it saves the costs of blanket scheduled maintenance and of fixing equipment after it breaks. The AI-based tool uses machine learning to learn how historical sensor data maps to historical maintenance records. Once a machine learning algorithm is trained on the historical data, it can successfully predict when maintenance is required based on live sensor readings in a plant. Predictive maintenance accurately models the wear and tear of machinery that is currently in use.
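Here is a minimal scikit-learn sketch of the mapping just described, from historical sensor readings to maintenance outcomes; the synthetic data and the two feature names are placeholders for real plant telemetry.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for historical telemetry: [vibration, temperature].
# Label 1 = maintenance was needed shortly after the reading was taken.
X_hist = rng.normal(loc=[1.0, 60.0], scale=[0.3, 5.0], size=(500, 2))
y_hist = ((X_hist[:, 0] > 1.3) & (X_hist[:, 1] > 63)).astype(int)

# Learn how sensor data maps to maintenance records.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_hist, y_hist)

# Score a live reading with high vibration and temperature.
live = np.array([[1.5, 68.0]])
prob = model.predict_proba(live)[0, 1]
print(f"probability maintenance is required: {prob:.2f}")
```

In practice the features would be windows of telemetry rather than single readings, but the train-on-history, score-live-data loop is the same.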
The best part of predictive maintenance is that it does not require additional costs for extra monitoring: algorithms have been created that provide accurate predictions based on operational telemetry data that is already available. Predictive maintenance combined with other AI-based methods, such as maintenance time estimation and maintenance task scheduling, can be used to create an optimal maintenance workflow for industrial processes. Moreover, improving current maintenance regimes, which often contribute to unplanned downtime, quality defects and accidents, is appealing for everybody.
An optimal maintenance schedule produced from predictive maintenance prevents work that is often not required. Carbon savings will be made through the controlled deployment of spare parts, less travel for people to come to the site, and less hot-shotting (expedited shipping) of spare parts. Intervening with maintenance only when required, and not a moment too late, saves electricity, preserves efficiency (by preventing declining performance) and reduces human labour. Additionally, systems can employ predictive maintenance on pipes that are liable to spring leaks, minimising the direct release of GHGs such as HFCs and natural gas. Thus, it has huge potential for carbon savings.
Research has shown that basing the scheduling of maintenance activities on predictive maintenance and maintenance time estimation can produce an optimal maintenance schedule (Yeardley, Ejeh, Allen, Brown, & Cordiner, 2021). That work optimised the schedule by minimising costs based on plant layout, downtime, and labour constraints. However, the schedule can also be optimised with respect to carbon emissions; in that case, maintenance activities are grouped so that fewer journeys are made and GHG emissions are saved.
The internet of things (IoT) is the digital industrial control system: a network of physical objects connected over the internet by sensors, software and other technologies that exchange data with each other. In time, the implementation of the IoT will be worldwide, and every single production process and supply chain will be available as a virtual image.
Open access to a worldwide implementation of the IoT has the potential to deliver a truly circular economy. Product designers can use the information available from the IoT to create value from other people's waste. Theoretically, we could establish a world where manufacturing processes are all linked, so that zero raw materials are extracted, zero waste is disposed of, and emissions are net-zero.
Currently, the world has developed manufacturing processes one at a time, not as interconnected value chains across industries. It may be a long time until the IoT creates the worldwide virtual image required, but once it does, the technology is powerful enough to address losses from each process and exchange materials between connected companies. Both materials and energy consumption can be shared to lower CO2 emissions drastically. It may take decades, but the IoT provides the technology to create a circular economy.
Conclusion
AI has enormous potential to benefit the environment and drive the world to net-zero. The current portfolio of research at the Alan Turing Institute (the UK's national centre for data science) includes projects that explore how machine learning can be part of the solution to climate change. For example, an electricity control room algorithm is being developed to provide decision support and ensure energy security for a decarbonised system; the national grid's electricity planning is improved by forecasting electricity demand and optimising the schedule. Further, Industry 4.0 can help plan for the impact that global warming and decarbonisation strategies will have on our lives.
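As a toy illustration of the demand-forecasting component mentioned above (not the Turing Institute's actual algorithm), here is a one-step forecast that combines yesterday's seasonal pattern with today's trend; all load figures are made up.

```python
# Toy electricity demand forecast (not the Turing Institute's algorithm):
# predict the next hour from the same hour yesterday plus today's trend.
hourly_load_mw = {  # made-up figures
    "yesterday": [310, 295, 280, 300, 350, 420],
    "today":     [325, 310, 292, 315, 368],
}

def forecast_next_hour(data):
    h = len(data["today"])                   # the hour to forecast
    seasonal = data["yesterday"][h]          # same hour yesterday
    trend = data["today"][-1] - data["yesterday"][h - 1]  # day-on-day shift
    return seasonal + trend

print(f"forecast for hour 5: {forecast_next_hour(hourly_load_mw)} MW")  # 438 MW
```

Real control-room forecasters use far richer models (weather, calendar effects, probabilistic outputs), but the structure, seasonal baseline plus recent trend, is a common starting point.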
See original here:
Artificial intelligence drives the way to net-zero emissions - Sustainability Magazine
Posted in Artificial Intelligence
Comments Off on Artificial intelligence drives the way to net-zero emissions – Sustainability Magazine