Category Archives: Artificial Intelligence
AI Is Shaping The Future Of War Analysis – Eurasia Review
Posted: November 25, 2021 at 12:06 pm
By Amir Husain*
Several years ago, before many were talking about artificial intelligence (AI) and its practical applications to the field of battle, retired United States Marine Corps General John Allen and I began a journey not only to investigate the art of the possible with AI, but also to identify its likely implications for the character and conduct of war. We wrote about how developments in AI could lead to what we referred to as "hyperwar": a type of conflict and competition so automated that it would collapse the decision-action loop, eventually minimizing human control over most decisions. Since then, my goal has been to encourage the organizational transformation necessary to adopt safer, more explainable AI systems to maintain our competitive edge, now that the technical transformation is at our doorstep.
Through hundreds of interactions with defense professionals, policymakers, national leaders, and defense industry executives, General Allen and I have taken this message to our defense community: a great change is coming, one that might see us lose our pole position. During the course of these exchanges, one fact became increasingly clear: artificial intelligence, and the effects it is capable of unleashing, has been gravely misunderstood. On one hand, there are simplistic caricatures that go too far: the Terminator running amok, an instantiation of artificial intelligence as a single computer system with a personality and a self-appointed goal, much like the fictionalized Skynet; or an intelligent robot so powerful and skilled that it would render us humans useless. On the other hand, there are simplifications of AI as a feature: trivializations in the name of practicality by those who cannot see beyond today and misconstrue AI's holistic potential as the specific capabilities of one or two products they have used, or, most likely, merely seen. I would hear from some that fully autonomous systems should (and, more amusingly, could) be banned, and that this would somehow take care of the problem. Others thought the proponents of artificial intelligence had overstated the case and that there would never be synthetic intelligence superior to humans in the conduct of war.
But artificial intelligence is not like a nuclear weapon, a great big tangible thing that can be easily detected, monitored, or banned. It is a science, much like physics or mathematics. Its applications will lead not merely to incremental enhancements in weapon systems capability but will require a fundamental recalculation of what constitutes deterrence and military strength. For example, the combination of AI elements (visual recognition, language analysis, the automated extraction of topical hierarchies or ontologies, control of systems with reinforcement learning, simulation-based prediction, and advanced forms of search) with existing technologies and platforms can rapidly yield entirely new and unforeseen capabilities. The integration of new AI into an existing platform represents a surprise in its own right. But the complex interactions of such platforms with others like them can create exponential, insurmountable surprise. Which current conventional system deters such an AI creation?
These reactions were all telling. Rather than seeing artificial intelligence as a science, people were reacting to caricatures or linear projections based on the past; specifically, the contention that since no AI has been built thus far that can exhibit long-term autonomy in battle, such an AI could never be built, or that if it were, it would take over the world of its own volition. These reactions would not be as problematic if they were coming from ordinary people playing the role of observers. But seeing people in positions of power and authority, participants rather than observers, espouse such thinking was worrisome. Why? Simply because artificial intelligence will lead to the most important capabilities and technologies yet built by humankind, and a failure to understand its nature will cause us to fall behind in taking advantage of all it has to offer in the near, medium, and long term. The stakes are high beyond description.
Earlier in this piece, I described hyperwar as a type of automated, potentially autonomous, conflict. But a deeper understanding of the concepts underpinning hyperwar requires exposure to the idea of the Observe-Orient-Decide-Act (OODA) loop: a cyclical process governing action both in the realm of war and, as many have recently pointed out, in commerce,[1] engineering,[2] and other peacetime pursuits.
Where did the idea of the OODA loop come from? While researchers in various fields throughout history have articulated the idea of a cognitive decision/action loop, the modern-day conception of the OODA loop in a military context came from USAF Colonel John Boyd. Col. Boyd is famous both for the OODA loop and for his key role in developing the F-16 program. He is also remembered as the famed military strategist whose conceptual and doctrinal contributions, some would argue, quite directly led to the overwhelming U.S. victory in the first Gulf War. Acknowledging the impact of Boyd's work, then-Commandant of the Marine Corps General Charles Krulak said these words in Boyd's eulogy: "John Boyd was an architect of [the Gulf War] victory as surely as if he'd commanded a fighter wing or a maneuver division in the desert. His thinking, his theories, his larger-than-life influence were there with us in Desert Storm."
Of all Boyd's considerable contributions, perhaps the idea of the OODA loop is the most potent and long-lasting. OODA governs how a combatant directs energy to defeat an opposing force. Each phase of the OODA loop is itself a cycle: small OODA loops curled up within larger ones. As the OODA loop progresses, information processes feed decision processes that filter out irrelevant data and boil outputs down to those that are necessary and of the highest quality. In turn, these outputs become inputs to another mini OODA loop. Seen in this way, the macro OODA loop of war is a massively parallel collection of perception, decision, and action processes; exactly the types of tasks AI is so well suited to, running at a scale at which machines possess an inherent advantage.
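Boyd's nested loops can be read as a pipeline in which each phase compresses its input for the next. The following Python sketch of a single cycle is purely illustrative; all function names, data structures, and thresholds are my own invention, not drawn from Boyd's work or this article:

```python
# A minimal, illustrative model of one OODA cycle: each phase filters raw
# observations down to a smaller set of high-quality outputs, which then
# feed the next phase -- "small OODA loops curled up within larger ones."

def observe(sensors):
    # Gather raw readings from every available sensor feed.
    return [reading for sensor in sensors for reading in sensor()]

def orient(observations):
    # Filter out irrelevant data; keep only actionable signals.
    return [o for o in observations if o["relevance"] > 0.5]

def decide(oriented):
    # Choose the highest-value option among the filtered signals.
    return max(oriented, key=lambda o: o["value"], default=None)

def act(decision):
    # Execute and report the outcome, which becomes the next loop's input.
    return {"executed": decision is not None, "target": decision}

def ooda_cycle(sensors):
    return act(decide(orient(observe(sensors))))

# Example: two hypothetical sensor feeds with invented readings.
radar = lambda: [{"relevance": 0.9, "value": 3}, {"relevance": 0.2, "value": 9}]
sigint = lambda: [{"relevance": 0.7, "value": 5}]
result = ooda_cycle([radar, sigint])
```

The point of the sketch is structural: each phase is an independent, parallelizable filtering step, which is why the author argues the loop as a whole maps so naturally onto machine-scale processing.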
Just how good has AI become at these perception, decision, and action tasks? Take perception, an area where machines and the algorithms they host have made great strides over the past few years. AI systems can now beat Stanford radiologists in reading chest X-rays,[3] discern and read human handwriting faster than any human,[4] and detect extrasolar planets at scale, from murky data that would be a challenge for human astronomers to interpret.[5] The AI perception game is hard to beat, and it operates at a scale and speed unfathomable to a human being.
The combined effect of millions of sensors deployed in space, in the air, on land, on the surface of the sea, and under it, all being routed to a scalable AI perception system, will be transformative. We are beginning to see shades of what this will feel like to military commanders. When the Russian military conducted a test of 80 UAVs simultaneously flying over Syrian battlefields with unified visualization,[6] Russian Defense Minister Sergei Shoigu commented that the experience was like "a semi-fantastic film" and that they "saw all the targets, saw the launches and tracked the trajectory." This, of course, is just the beginning.
What about decisionmaking? How would AI fare in that domain? Today, planners use tools such as Correlation of Forces (COF) calculators[7] to determine the outcome of a confrontation based on the calculated capability of a blue force versus a red force. They use these calculations and projections to make logistical and strategic decisions. If you divide the battlespace into a grid that constrains both space and time, in some sense the only COF calculation that matters inside each cell is the COF calculation for the cell itself, not for the entire grid. Taking this idea further, given the presence of assets in each cell, one could calculate their area of impact under the constraint of a time bound. Obviously, a hypersonic missile will have a larger area of impact for a smaller time bound than a tank will. An AI trying to solve this problem would use sensors to identify the assets present in each cell, calculate COF coefficients for each cell for a given time bound, and then seek to generate and optimize a plan of action in which the smallest own force maneuvers most efficiently to inflict maximum attrition on the enemy, all while suffering the least damage itself. A proxy for determining how much damage you could inflict while minimizing your own losses is the COF coefficient itself: the larger your advantage over the enemy, the greater the chances of a swift victory. An AI could also play this per-cell COF optimization game with itself millions of times to learn better ways of calculating COF coefficients.
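The per-cell idea can be made concrete with a toy example. The Python sketch below is my own illustration, not the author's method or any doctrinal calculator: it scores each grid cell by a simple blue-to-red combat-power ratio and ranks the cells where blue holds the largest local advantage. The grid, power scores, and threshold are all invented:

```python
# Illustrative per-cell Correlation of Forces (COF) sketch: divide the
# battlespace into a grid, score each cell as blue combat power divided
# by red combat power, and rank the cells where blue's local advantage
# is greatest. All values below are notional.

GRID = {
    # (row, col): (blue_power, red_power) -- invented combat-power scores
    (0, 0): (12.0, 4.0),
    (0, 1): (3.0, 9.0),
    (1, 0): (6.0, 6.0),
    (1, 1): (10.0, 2.0),
}

def cof(blue, red, eps=1e-9):
    # COF coefficient for one cell: ratio of blue to red combat power.
    # eps guards against an empty (zero-strength) red cell.
    return blue / (red + eps)

def best_cells(grid, threshold=1.0):
    # Cells where blue's local COF exceeds the threshold, largest first.
    scored = {cell: cof(b, r) for cell, (b, r) in grid.items()}
    favorable = [c for c, score in scored.items() if score > threshold]
    return sorted(favorable, key=lambda c: scored[c], reverse=True)

print(best_cells(GRID))
```

A real system would, as the paragraph describes, recompute these coefficients continuously from sensor data under a time bound and feed them to a plan optimizer; the ranking step above is only the innermost kernel of that loop.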
This is one simple example of how a strategic hyperwar AI could seek advantage. There are others. The key point is that no human commander could even properly process thousands of fast-changing, per-cell COF calculations, much less act on them with the speed of a purpose-built machine running a rapidly improving algorithm.
Finally, let us come to action. In 2020, the Defense Advanced Research Projects Agency (DARPA) organized a dogfight competition between human F-16 pilots and various AI algorithms, called AlphaDogfight.[8] The result was a landslide: AI won 5-1. There are many points of view about this competition, and questions have been raised as to whether the rules of engagement were determined fairly. From my own personal experience applying AI to autonomous piloting applications, I know this: AI eventually wins. In 2017, SparkCognition, the AI company I founded, worked to develop technology to identify the conditions for an automated takeoff rejection. Using reinforcement learning, the AI we developed exceeded human performance both in the timeliness and in the accuracy of the decisions made. The following year we worked on multi-ship defensive counter-air (DCA) scenarios and found that, once again, AI performed amazingly well. In time, AI will win. Is someone making bets to the contrary? And if not, why aren't we moving faster to embrace the inevitable?
The fusion of distributed artificial intelligence with highly autonomous military systems has the potential to usher in a type of lightning-quick conflict that has never been seen before. The essential finding of my work in collaboration with General Allen, discussed above, was that if artificial intelligence were aggressively applied to every element of the OODA loop, the loop could, in essence, collapse on itself. Artificially intelligent systems would enable massive concurrent coordination of forces and the application of force in optimized ways. As a result, a small, highly mobile force (e.g., drones) under the control of AI could always outmaneuver and outmass a much larger conventional force at critical points. Consequently, the effect of platforms under AI control would be multiplied manyfold, ultimately making it impossible for an enemy executing a much slower OODA loop to contend or respond.
What, then, are the larger implications of AIs dominance in perception, decision, and action tasks? What happens when the OODA loop collapses? Let us examine a few implications.
Previous work indicates that AI would provide a significant increase in the latitude of action available to both nation-states and non-state actors. Smaller-scale autonomous operations have an inherent quality of deniability in that there are no humans to capture or interrogate. And it is not just conventional, kinetic actions that AI can control, but also cyber operations. The applications of AI to cyber are tremendous, ranging from the automatic development of cyber weapons, to the continuous, intelligent scanning of enemy targets to identify pathways for exploitation, to the autonomous conduct of large-scale, distributed cyber operations.
The onset of hyperwar-type conflicts will have a great effect on almost all our current military planning and the calculations on which these plans are based. The most potent teeth-to-tail ratios sustainable by a human force will seem trivial when autonomous systems are widely deployed. The idea that training will always enable dominance will have to be questioned. And the already outdated notion of platform-versus-platform comparisons will become completely extinct.
Most of the scenarios described in Hyperwar: Conflict and Competition in the AI Century have already come to pass. In one conceptual vignette, we outlined how autonomous drones could be used to attack oil installations. Two years later, this actually happened against a Saudi oil facility in Abqaiq. We also highlighted how existing conventional aircraft would be reused as autonomous drones. The Chinese did exactly that with their J-6 and J-7 aircraft. Integrating AI into current systems presents the opportunity to build a potent capability at low cost and to create significant complications for planners looking to counter these threats.
When kinetic or cyber effects can be employed over great distances, with great precision and with no human involvement, the likelihood that countries and groups will use these capabilities increases. And when autonomous systems begin to blunt the training-enabled human edge, the potency of such actions is amplified.
Every day brings with it new announcements in military technology developments. And most of these are not taking place in the United States. Consider just the following recent news from around the world:
There is also a considerable amount of work going on in Pakistan, India, Israel, South Korea, Brazil, and elsewhere. The list truly goes on and on. In a world where strategic competition between near-peers is once again at the fore, the pace of military innovation is skyrocketing.
While the volume and pace of these developments are impressive, nothing in the list above should be truly surprising. For years, General John Allen, former Deputy Secretary of Defense Robert O. Work, and others have been pointing to the potential of autonomous technologies, inexpensive sensors, and fast-spreading technical knowledge combining to yield potent and inexpensive capabilities.
Countries across the globe are leveraging low-cost frameworks for innovation, combining open-source software and systems with inexpensive, commercial-grade electronics, domestic software prowess, and a willingness to experiment and rapidly iterate using methodologies often referred to as Agile. Not only does this result in lower development costs; it also leads to speed of innovation.
In contrast, in the United States we spend large sums of money on incredibly expensive platforms that work well when they are maintained at great cost, and that perform when they are piloted or controlled by humans in whom we have invested millions of additional dollars of training time. Is this the best strategy? Or are we doing to ourselves what we did to the Soviet Union in the 1960s and 1970s: encouraging military spending into broader economic oblivion?
Our opponents will increasingly use inexpensive technologies that are easily produced, employable in large quantities, and that continue to deliver results even when they are left to their own devices without any need for a highly trained human operator.
While the United States is the richest nation on earth, too great a disparity in cost-per-capability cannot be sustained even by the world's apex military power. We are walking a dangerous path if we continue to pay lip service to emerging, disruptive technologies while making the real, significant investments in legacy platforms. It is not enough to talk about technological disruption; we must actually disrupt our funding and spending patterns.
Let us apply the cost-per-capability lens to just a few of our high-end platforms that have traditionally been force multipliers and differentiators for our forces. U.S. attack helicopters are the most potent in the world, but recent export orders show that they now cost between $100 and $125 million per aircraft.[15] While capabilities vary by platform, in general these helicopters carry between 8 and 16 anti-tank guided missiles (ATGMs), enjoy a loiter time of about 2.5 hours, and carry two pilots on board. In contrast, the Bayraktar TB2 currently being used in Libya and Nagorno-Karabakh has a loiter time of 24 hours, carries 2 ATGMs, requires zero on-board pilots, and costs about $2 million.[16] It's quite apparent that armor is vulnerable to these drones, much as it is to attack helicopters. But have we considered how these drones can be employed in swarms as an alternative to the expensive attack helicopter? How many TB2s can be delivered via a single transport aircraft? How many conventional attack helicopters? How much training is required for on-board pilots versus for an autonomous system complemented by a remote operator? A new, distributed-lethality alternative to attack helicopters has advantages beyond the obvious lower cost.
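As a back-of-the-envelope illustration, the quoted figures can be folded into a single rough metric. The sketch below uses "ATGM-hours on station per million dollars," an invented measure for illustration only (not a doctrinal one), and assumes a $110 million midpoint for the helicopter's cost and its upper 16-missile loadout:

```python
# Back-of-the-envelope cost-per-capability comparison using the figures
# quoted above. "ATGM-hours on station per million dollars" is an
# invented metric for illustration, not a doctrinal measure.

def atgm_hours_per_million(unit_cost_millions, atgms, loiter_hours):
    # Missiles held on station, weighted by endurance, per unit of cost.
    return (atgms * loiter_hours) / unit_cost_millions

# Attack helicopter: ~$110M midpoint, 16 ATGMs, ~2.5 hours of loiter.
helo = atgm_hours_per_million(unit_cost_millions=110, atgms=16, loiter_hours=2.5)

# Bayraktar TB2: ~$2M, 2 ATGMs, ~24 hours of loiter.
tb2 = atgm_hours_per_million(unit_cost_millions=2, atgms=2, loiter_hours=24)

print(f"attack helicopter: {helo:.2f}, TB2: {tb2:.2f}")
```

By this crude measure the helicopter yields roughly 0.36 ATGM-hours per million dollars against the TB2's 24, a gap of well over an order of magnitude even before training and transport costs, which is the distributed-lethality argument in numeric form.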
It might be tempting to look at tactical drones and dismiss them as relatively simple systems that were bound to proliferate. Of course, I agree with both those points; many are simple systems and they have indeed proliferated. However, the drones now being developed in a number of countries are not necessarily just tactical or low-end. Complex high-end capabilities are proliferating, too. AI is being applied to other complementary areas, such as jamming, to create cognitive EW (Electronic Warfare) pods that can be flown into action by a UAV.
And it is not just about the drones alone, but rather the fact that their employment in real theatres of conflict also entails a significant shift in the entire concept of operations. For example, it has been theorized that TB2 drones over Azerbaijan were controlled from Turkey, with larger Akinci drones acting as relays. ATGMs delivered at scale against a peer force by attritable, long-endurance platforms controlled by pilots hundreds of miles away: never before was this concept of operations employed. But even newer methods of employment are coming.
Turkish Aerospace and Bayraktar are collaborating with Aselsan to incorporate the Koral EW system onto their drones. Russia's Uran-9 UGVs have been improved after their performance in Syria was studied and gaps were identified. Chinese UAV developments are progressing at such a significant rate that it is difficult to capture them in anything short of a book-length work. Sensors, control systems, vehicles, and concepts of operations are all evolving fast on the global scene, and this means complex, multi-system threats employed in surprising ways.
Michael Peck, writing in National Interest, suggests that Turkey may have won the laser weapons race when it deployed a laser weapon system in Libya that was able to shoot down a Chinese Wing Loong drone. He goes on to quote Alexander Timokhin of Army Recognition: "the interesting thing in this whole story is how essentially newcomers to the laser theme occupy that niche in which the grandees of laser business, such as Russia and the USA, do not even think to climb." Indeed, space that is ceded will be occupied. Technological gaps between the leading nations of the world are no longer so insurmountable as to allow complacency. And cost matters! How is it that Turkey, with a $22 billion defense budget, is able to drive so much innovation in air-to-air missiles, lasers, EW, drones, and many other areas, whereas our dollars do not quite seem to go as far in the United States?
Cost is a critical feature, too! Big, expensive, slow-to-evolve, slow-to-build, and complex-to-maintain platforms need to be rethought in an age where software is the most lethal weapon, one that is growing exponentially in capability over months, not years. You cannot bend new metal fast enough to keep up. It is the relationship between the software and the metal that truly matters. In this context, how does the $35 billion carrier strike group evolve in the age of inexpensive DF-21D missiles and next-generation AI-powered cruise missiles? What about the tank? General Tony "T2" Thomas, the former commander of the United States Special Operations Command (USSOCOM), recently discussed this point with me and wondered whether Nagorno-Karabakh pointed us to the end of the tank as a platform. General Thomas has also publicly tweeted his views on this topic: "The real debate is the role of massed armor in future warfare (there is a reason the Marines just gave up their tanks)."
There are signs of progress and improvement. Certainly, the United States has not been sitting entirely still. The Air Force's announcement of the first test of a sixth-generation platform is encouraging, in particular because it was developed so quickly. Also encouraging are the three Skyborg prototype development efforts by Boeing, General Atomics, and Kratos for "loyal wingman" drones. But given history, one wonders how expensive new systems will be by the time they are deployed. Will future programs be able to avoid the types of issues that the F-35 program encountered? A $120 million, fifth-generation stealth platform built for use against near-peer threats, but only used in anger with non-stealthy, externally mounted munitions to conduct missions in uncontested airspace. Are these missions not better suited to a 40-year-old F-16 or A-10? Consider further the case of our B-1s, which are exquisitely complex aircraft designed for low-altitude, high-speed penetration of highly defended airspace. To find some use, they were eventually employed to drop conventional bombs in Afghanistan: mundane, low-end work for a high-end platform.
It is high time we got over the platform and focused on the mission. If we keep buying $120 million jets with $44,000/hr flight costs to use them on missions better suited to $2 million drones that could cost us $2,000/hr, we will eventually find that financial oblivion we seem to be looking for. We do not need all high-end, all the time. And there are more imaginative ways of employing our existing high-end platforms than as frontline bomb trucks.
While AI will play a huge role in augmenting conventional platforms, it will also play four additional roles. First, it has the potential to automate planning and strategy. Second, it can revolutionize sensor technology by fusing and interpreting signals more efficiently than ever before. Third, it has a massive role to play in space-based systems, particularly around information fusion to counter hypersonics. Fourth, it can enable next-generation cyber and information warfare capabilities.
Imagine an ocean in which submarines cannot hide effectively, negating one leg of the triad. Imagine middle powers fielding far more competent forces because, while they lack the resources to train human pilots to the level of the United States Air Force, they possess the design expertise required to field AI-powered platforms. Imagine cyber attacks engineered by AI and executed by AI at scale. Imagine long-running, fully automated information warfare and espionage programs run by AI systems. If AI is applied creatively in nation-state competitions, it has the potential to create significant, lasting impact and deliver a game-changing edge.
Software, AI, autonomy: these are the ultimate weapons. These technologies are the difference between hundreds of old MiG-19 and MiG-21 fighter jets lying in scrap yards and their transformation into autonomous, maneuverable, and so-called attritable, or expendable, supersonic drones built from abundant airframes, equipped with swarm coordination and the ability to operate in contested airspace. Gone are the days when effectiveness and capability could be ascribed to individual systems and platforms. Now, it's all about the network of assets, how they communicate, how they decide to act, and how efficiently they counter the system that is working in opposition to them. An individual aircraft carrier or a squadron of strategic bombers is no longer as independently meaningful as it once was.
In the emerging environment, network-connected, cognitive systems of war will engage each other. They will be made up principally of software, but also of legacy weapons platforms, humans, and newer assets capable of autonomous decision and action. The picture of the environment in which they operate across time and space will only be made clear by intelligent systems capable of fusing massive amounts of data and automatically interpreting them to identify and simulate forward the complex web of probabilities that result. Which actions are likely to be successful? With what degree of confidence? What are the adversarys most likely counter-moves? The large scale, joint application of autonomously coordinated assets by a cognitive system will be unlike anything that has come before. It is this fast-evolving new paradigm, powered by artificial intelligence at every level, from the tactical to the strategic, that demands our attention. We must no longer focus on individual platforms or stand-alone assets, but on the cognitive system that runs an autonomous Internet of War.
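The "simulate forward" capability described above is, at its core, Monte Carlo rollout: estimate each candidate action's probability of success, with a measurable degree of confidence, by sampling many simulated futures. The toy Python sketch below is my own illustration; the action names and their success probabilities are entirely invented, and a real system would replace the hidden-probability stand-in with an actual battle simulation:

```python
import random

# Toy Monte Carlo "simulate forward" sketch: estimate each candidate
# action's success probability by rolling many simulated futures, then
# pick the action with the best estimate. Probabilities are invented.

TRUE_P = {"strike": 0.7, "jam": 0.5, "hold": 0.9}  # stand-in for a real simulator

def estimate_success(action, trials=10_000, rng=random.Random(0)):
    # Each trial is one simulated future; it succeeds with the action's
    # (hidden) true probability. The estimate converges as trials grow.
    wins = sum(rng.random() < TRUE_P[action] for _ in range(trials))
    return wins / trials

estimates = {a: estimate_success(a) for a in ("strike", "jam", "hold")}
best = max(estimates, key=estimates.get)
print(best, estimates)
```

With 10,000 trials per action the sampling error is a fraction of a percent, which is what lets such a system attach a confidence figure to each recommendation rather than a bare prediction.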
Integrating the LEGO bricks of intelligence and autonomy into conventional platforms results in unconventional upgrades. A Chinese-built Shenyang J-6 Farmer fighter jet with autonomy is not just a 1950s era write-off. It becomes a system with new potential, diminished logistics dependencies, and an enhanced efficacy that goes far beyond an engine or radar upgrade. Broadly, the consequences of the use of AI to revitalize and reinvent conventional platforms will be hard to ignore.
Despite the global shift in value from the physical to the digital, and the tremendous latent potential of AI, the U.S. Department of Defense has not traditionally been at its best when it comes to understanding, acquiring, or deploying software capabilities. Hardware platforms come far more naturally to our acquisition professionals. We can hope for a change of heart and perspective, but absent that, in order for AI to be meaningful to them in the near term, we must reinvent, enhance, and reimagine existing platforms just as we build new ones. Only then will we cost-effectively fulfill needs and create significant new capabilities that open the door to even greater future potential. Briefing after briefing on the potential of AI, or distributing primers on machine learning inside the confines of the Pentagon, will not lead to critical adoption; the performance gains that result when AI is integrated into platforms will be the proverbial proof of the pudding.
We have made the mistake of being too slow to adapt, and of not predicting the next conflict well enough to be prepared. Perhaps some of our allies have made the same mistake. In fact, a report from the European Council on Foreign Relations (ECFR) concluded that the advanced European militaries would perform badly against Azerbaijan's current UAS-led strategy.[17] The truth is that we have developed an inflated opinion of the quality of our readiness because over the past 40 years we have not had to face opponents that were able to turn our omissions into unforgivable sins. The future may not be so kind.
To compete in this new era of exponential technologies, the U.S. military and our intelligence agencies need to go all-in on digital and physical systems powered by artificial intelligence. Imbued with synthetic cognition, such systems can make a meaningful difference to every branch of our armed services and our government organizations. A serious effort to fuel the development of such systems will lay the groundwork for true, full-spectrum AI adoption across government. But for any of this to become reality, long held views and processes in the Defense Department must change. In order to turn the tide, at a minimum, we need to:
If we are to remain competitive, an aggressive, fast-track effort to incorporate AI into existing and new platforms must be adopted. In the age of hyperwar, our willingness to embrace commercial innovation, our decisiveness in acknowledging that we live in a post-platform era, and most importantly, the speed with which we operationalize new investments, will be the attributes that lead to victory.
*About the author: Amir Husain is a serial entrepreneur, inventor, technologist, and author based in Austin, Texas. He is the Founder and CEO of an award-winning artificial intelligence company, SparkCognition, and is the founding CEO of SkyGrid, a Boeing and SparkCognition joint venture.
Source: This article was published in PRISM Vol. 9, No. 3, which is published by the National Defense University.
Notes
1. "What Do AI And Fighter Pilots Have To Do With E-Commerce? Sentient's Antoine Blondeau Explains," GE News.
2. "How Great Engineering Managers Identify and Respond to Challenges: the OODA Loop Model," Waydev.
3. https://hitconsultant.net/2019/08/22/ai-tech-beats-radiologists-in-stanford-chest-x-ray-diagnostic-competition/.
4. https://www.labroots.com/trending/technology/8347/ai-reads-handwriting.
5. https://news.sky.com/story/ai-algorithm-identifies-50-new-planets-from-old-nasa-data-12057528.
6. http://newsreadonline.com/russia-in-syria-simultaneously-launched-up-to-80-drones/.
7. "Demystifying the Correlation of Forces Calculator" (army.mil).
8. "AlphaDogfight Trials Go Virtual for Final Event" (darpa.mil).
9. "Russia bets big on Mini Drones for Attack Helicopter, Combat Troops" (defenseworld.net).
10. Military Watch Magazine.
11. "China's Autoflight puts a canard twist on its latest long-range eVTOL" (newatlas.com).
12. "Iran showcases Shahed 181 and 191 drones during Great Prophet 14 Exercise," The Aviationist.
13. "Iranian press review: Revolutionary Guard equips speed boats with suicide drones," Middle East Eye.
14. "Ukraine Forming Venture with Turkey to Produce 48 Bayraktar TB2 Drones" (thedefensepost.com).
15. "Apache attack helicopters and weapons: $930 million price tag is unreal" (nationalheraldindia.com).
16. "UK eyes cheaper armed drones after Turkey's successful UAV program," IRIA News (ir-ia.com).
17. Air Forces Monthly, January 2021.
[Webinar] Balancing Compliance with AI Solutions – How Artificial Intelligence Can Drive the Future of Work by Enabling Fair, Efficient, and Auditable…
Posted: at 12:06 pm
December 7th, 2021
2:00 PM - 3:00 PM EDT
*Eligible for HRCI and SHRM recertification credits
With the expansion of Talent Acquisition responsibilities and the complex landscape created by the hiring recovery, talent redeployment, the great resignation, and DE&I initiatives, there has never been a greater need for intelligent augmentation and automation solutions for recruiters, managers, and sourcers. There is also growing awareness of problematic artificial intelligence solutions being used across the HR space, and of the perils of efficiency and effectiveness solutions that come at the cost of fairness and diversity goals. These concerns are compounded by increased inquiries from employees and candidates about the AI solutions used to determine or influence their careers, particularly what's inside the AI and how it is tested for bias. Join this one-hour webinar hosted by HiredScore CEO & Founder Athena Karp as she shares:
Speakers
Athena Karp
CEO & Founder @HiredScore
Athena Karp is the founder and CEO of HiredScore, an artificial intelligence HR technology company that powers the global Fortune 500. HiredScore leverages the power of data science and machine learning to help companies reach diversity and inclusion goals, adapt for the future of work, provide talent mobility and opportunity, and deliver HR efficiencies. HiredScore has won best-in-class industry recognition and honors for delivering business value, accelerating HR transformations, and leading innovation around bias mitigation and ethical AI.
Which beverages companies are leading the way in artificial intelligence? – data – just-drinks.com
Unilever and Suntory are among the beverage brand owners best positioned to take advantage of future artificial intelligence disruption in the industry, according to recent research.
The assessment comes from GlobalData's Thematic Research ecosystem, which ranks companies on a scale of one to five based on their likelihood to tackle challenges like AI. According to the analysis, Unilever is well placed to benefit from its investments in artificial intelligence. The group, which operates the Lipton iced tea brand in partnership with PepsiCo, was the only company to attain the top score in GlobalData's non-alcoholic beverages Thematic Scorecard.
In the 12 months to the end of September, Unilever advertised for 323 new artificial intelligence-related roles and mentioned artificial intelligence five times in its filings.
The table below shows how GlobalData scored the biggest companies in beverages on their artificial intelligence performance, as well as the number of new AI jobs, deals, patents and mentions in company reports since October 2020.
Higher numbers usually indicate that a company has spent more time and resources on improving its artificial intelligence performance, or that artificial intelligence is at least high up executives' lists of priorities.
A high number of mentions of artificial intelligence in quarterly company filings suggests the company is either reaping the rewards from previous investments or needs to invest more to catch up with the rest of the industry. Similarly, a high number of deals could indicate a company is dominating the market, or that it is using M&A to plug gaps in its capabilities.
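As a rough illustration of how counts like these might be rolled up into a one-to-five thematic score, the sketch below normalises each activity metric against the best-in-class count and averages the results. This is a hypothetical construction, not GlobalData's actual methodology; the metric names and the second company's figures are made up (only Unilever's 323 AI job ads comes from the article).

```python
def score_companies(companies):
    """companies: {name: {metric: raw count}} -> {name: score from 1.0 to 5.0}.

    Each metric is min-max normalised against the highest count seen for that
    metric, the normalised values are averaged, and the 0..1 result is mapped
    onto a 1..5 scale, mimicking a 'thematic scorecard' style ranking.
    """
    metrics = sorted({m for counts in companies.values() for m in counts})
    # Per-metric maxima; the 'or 1' guards against division by zero when
    # no company has any activity on a metric.
    maxima = {m: max(c.get(m, 0) for c in companies.values()) or 1 for m in metrics}
    scores = {}
    for name, counts in companies.items():
        normalised = sum(counts.get(m, 0) / maxima[m] for m in metrics) / len(metrics)
        scores[name] = round(1 + 4 * normalised, 1)  # map 0..1 onto 1..5
    return scores

# Illustrative input: 'jobs' and 'mentions' are assumed metric names.
example = {
    "Unilever": {"jobs": 323, "mentions": 5},
    "Rival":    {"jobs": 40,  "mentions": 1},
}
print(score_companies(example))
```

The company with the top count on every metric lands at 5.0; a company with no recorded activity lands at 1.0.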
This article is based on GlobalData research figures as of 10 November.
Who Says AI Is Not For Women? Here Are 6 Women Leading AI Field In India – SheThePeople
"I don't see tech or AI as hostile to women. There are many successful women in AI, both at the academic and industry levels," says Ramya Joseph, the founder of AI-based start-up Pefin, the world's first AI financial advisor. "And even on my team at Pefin, women hold senior technology positions. There tends to be a misconception that tech attracts a geeky or techy kind of personality, which is not the case at all."
Joseph has a bachelor's degree in computer science and master's degrees in Artificial Intelligence, Machine Learning and Financial Engineering. As a wife, mother and daughter, Joseph could closely relate to the need for financial advice to plan for the future. She came up with the idea of founding Pefin after her father, lacking financial advice when he lost his job, jeopardised his retirement plans. In navigating and solving his problems, Joseph realised that many people faced the same issue. Hence she came up with the idea of an AI-driven financial adviser.
No doubt Artificial Intelligence is one of the fastest-growing professional fields. As new inventions and developments knock at our doors, the relationship between humans and computers is being reassessed. With the expansion of AI, new skills and exceptional human talent are in high demand. But the problem is that, despite the evolution in society, the gender gap is not shrinking. According to the World Economic Forum, only 22 per cent of AI professionals are women; the report suggests a gender gap of around 72 per cent.
Despite this, many women are breaking the glass ceiling and reforming the field of Artificial Intelligence. Through their skills and leadership, these women are carving the path for other women to participate as AI professionals. So in this article, I am going to list some women AI professionals in India who are changing gender dynamics through their excellence.
Amarjeet Kaur is a research scientist at TechMahindra. She has a PhD in Computer Science and Technology. Kaur specialises in research techniques and technologies like graph-based text analysis, latent semantic analysis and concept maps among others. She also has expertise in experimentation and field research, data collection and analysis and project management. She is known for her organisational skills and willingness to take charge.
Kaur has also worked with the Department of Science and Technology at Women Scientist Scheme. As a part of the scheme, she helped in developing a technique to automatically evaluate long descriptive answers. With more than ten years of research and teaching experience, Kaur has excellent academic skills. Her academic skills and innovative techniques have gained her a gold medal and a toppers position at Mumbai University. Her innovative skills and course material has also received a place in Mumbai Universitys artificial intelligence and machine learning courses.
Sanghamitra Bandyopadhyay works at the Machine Intelligence Unit of the Indian Statistical Institute. She completed her PhD at the institute and served as its director from 2015 to 2020. Bandyopadhyay is also a member of the Science, Technology and Innovation Advisory Council of the Prime Minister of India (PM-STIAC). She specialises in fields like machine learning, bioinformatics, data mining, and soft and evolutionary computation.
She has received several awards for her work, including the Bhatnagar Prize, the Infosys Prize, the TWAS Prize, and the DBT National Women Bioscientist Award (Young). She has written around 300 research papers and has edited three books.
Ashwini Ashokan is the founder of MadStreetDen, an artificial intelligence company whose image-recognition platform powers retail, education, health, media and more. Founded in 2014, the venture is headquartered in California with offices across Chennai, Bangalore, Tokyo, London and more. She co-founded the platform along with her husband. Speaking to SheThePeople, Ashokan said, "It's only natural that the AI we build mimics what we've fed it, until it develops agency of its own, which could be good or bad. As an industry, we need to think about what we're teaching our AI." She also added, "Every line of code we write, every feature we put in products, we need to ask ourselves: what effect does this have on the way the world will be interacting with it?"
Apurva Madiraju is a vice president at Swiss Re Global Business Solutions India in Bangalore. She leads the data analytics and data science team of the audit function, where she is responsible for building machine learning and text analytics solutions to deal with audit compliance risk.
Madiraju has 11 years of experience across diverse fields like artificial intelligence, data science, machine learning and data engineering. She has developed multiple AI- and ML-driven solutions, such as ticket volume forecasting models and turn-around-time prediction solutions. She has worked across companies globally to lead the conceptualisation, development and deployment of many AI and ML-based solutions for enterprises.
With more than 20 years of experience as a Data Scientist, Bindu Narayan serves as a Senior Manager in Advanced Analytics and AI at the EY GDS Data and Analytics Practice. At EY, Narayan is the AI competency leader for EY's Global Delivery Services. She and her team offer virtual assistant solutions to clients across the industry. Moreover, Narayan has developed many innovative AI solutions and leads in the fields of machine learning, customer and marketing analytics, and predictive modelling. She completed her PhD at IIT Madras on modelling Customer Satisfaction and Loyalty.
Sunnybrook launches innovative new artificial intelligence research lab with $1-million gift from TD Bank Group – Canada NewsWire
TORONTO, Nov. 25, 2021 /CNW/ - TD Bank Group has donated a $1-million gift to establish the Augmented Precision Medicine Lab at Sunnybrook Health Sciences Centre. The Augmented Precision Medicine Lab will develop cutting-edge artificial intelligence (AI) systems to help improve the clinical care that patients receive in the fields of cardiology, cancer and other chronic diseases. Sunnybrook's rich and complex data stores will be harnessed to develop clinical risk prediction models that will enable physicians to provide personalized care to patients and potentially improve outcomes.
With this investment, Sunnybrook will have the resources it needs to build technological infrastructure, attract more talent, and accelerate a number of innovative projects either planned or underway.
"This generous gift will unite medical experts, computer scientists and industry partners to harness the power of big data and machine learning to drive personalized approaches to medicine," says Kelly Cole, President and CEO, Sunnybrook Foundation. "TD has long been a dedicated supporter of innovation at Sunnybrook and we are delighted to take this next step together."
The Augmented Precision Medicine Lab will work closely with industry partners to develop powerful new diagnostic tools, bring them to communities across Canada, and ultimately improve health outcomes.
"AI in medicine will undoubtedly improve the quality of care that patients receive, and, perhaps more importantly, it will improve health-care equity by dramatically widening access to underserved communities and populations," says Dr. Alexander Bilbily, a physician and computer scientist at Sunnybrook who will serve as the director of the new lab. "And by recognizing the essential role that industry plays in health care, we create a clear path from the lab to the patient where these tools can have a real impact on the patient journey."
The Augmented Precision Medicine Lab's first project aims to leverage Sunnybrook's extensive experience with patients with COVID-19 to create AI tools that can identify which patients are more likely to deteriorate. As a result, doctors will be empowered to closely monitor and improve care for these patients. The tool is being developed for use in smaller community hospitals, which demonstrates how AI can extend the reach of medical knowledge to smaller centres with less experience, thereby improving health-care equity for patients in underserved areas.
"The funding announced today will help Sunnybrook enhance its research and develop AI technologies to advance quality health care for patients who need it most," says Janice Farrell Jones, Senior Vice President, Sustainability and Corporate Citizenship, TD Bank Group. "Through the TD Ready Commitment, the Bank's corporate citizenship platform, we are proud to support this important initiative that will ultimately help patients living with cardiac conditions, cancer and other chronic diseases access equitable and personalized care."
Together, Sunnybrook and TD Bank Group are inventing the future of health care.
About Sunnybrook
Sunnybrook Health Sciences Centre is inventing the future of health care for the 1.3 million patients the hospital cares for each year through the dedication of its more than 10,000 staff and volunteers. An internationally recognized leader in research and education and a full affiliation with the University of Toronto distinguishes Sunnybrook as one of Canada's premier academic health sciences centres. Sunnybrook specializes in caring for high-risk pregnancies, critically ill newborns and adults, offering specialized rehabilitation, and treating and preventing cancer, cardiovascular disease, neurological and psychiatric disorders, orthopaedic and arthritic conditions and traumatic injuries. The hospital also has a unique and national leading program for the care of Canada's war veterans.
SOURCE Sunnybrook Health Sciences Centre
For further information: Media contact: Samantha Sexton, Sunnybrook Health Sciences Centre, 416.480.4040, [emailprotected]
http://www.sunnybrook.ca/foundation
At LI hospitals, the artificial intelligence revolution has already begun – Newsday
The words "artificial intelligence" evoke a futuristic world, but at certain Long Island hospitals, the future is here and now.
At some hospitals, nurses track the severity of patients' symptoms with help from artificial intelligence, a broad term that encompasses computer programs that can be fed huge volumes of data and trained to analyze new data.
Others use A.I. to predict which patients are at risk of becoming ill again because they don't follow instructions after they're discharged, or which patients are healthy enough to be allowed to sleep through the night instead of being awakened to have their vital signs checked. Still others use the technology to speed the analysis of sleep studies that help diagnose conditions such as sleep apnea.
The ventures vary widely in their origins, scope and funding. One is a new company called Truveta, formed in an unusual alliance between New Hyde Park-based Northwell Health and 19 other health systems across the country. The company, which recently announced $200 million in new private funding, pulls information from millions of the networks' patient records, anonymized to protect confidentiality, and provides real-time analysis to health care providers.
Northwell Health has joined forces with 19 other health systems to start a company called Truveta, which recently announced $200 million in new private funding from its member networks and its CEO, Terry Myerson. Using information from millions of the networks' anonymized patient records, the company provides real-time analysis to health-care providers.
NYU Langone Hospital-Long Island in Mineola has launched an A.I.-powered program that tracks COVID-19 patients' vital signs, lab results and other information, recording 17 data points every 30 minutes to detect signs of potential deterioration.
Mount Sinai South Nassau in Oceanside uses A.I. to detect patients' risks of falling or becoming severely ill, and to predict how much nursing care they will need.
Stony Brook University's Department of Biomedical Informatics has received more than $5 million in federal grants to research the potential use of A.I. in diagnosing and treating cancer.
Catholic Health uses A.I. to analyze patients' brain waves, breath patterns, cardiac signals, leg movements and other data points recorded during sleep studies, speeding up the completion of reports that are reviewed by board-certified physicians.
Sources: Northwell Health, NYU Langone Health, Mount Sinai South Nassau, Stony Brook University, Catholic Health
Northwell sees "revolutionary potential" in A.I., Dr. Martin Doerfler, Northwell's senior vice president of clinical strategy and development, said in an interview, "and we wanted to be part of it."
On a different scale, another new program got its start on a local nurse's laptop during the coronavirus surge last year. After months of research and development, it evolved into an A.I. tool that flags COVID-19 patients at NYU Langone Hospital-Long Island who are at high risk of becoming severely ill in the next 12 hours.
The A.I. program "doesn't take over your decision-making and it never should," said Jeanmarie Moorehead, senior director of operations at the Mineola hospital. "But it is definitely value-added, tremendous value-added to the clinician."
What the A.I. efforts have in common is an ambitious effort to use specialized computer programs to comb through columns of data too vast to be understood by a human being, detect patterns and use that information to guide health care providers in diagnosing and treating patients.
The use of A.I. in health care is on the rise, with global funding in the sector reaching $8.5 billion from January through September, nearly double the amount in all of 2019, according to CB Insights, a company that tracks A.I. investments. The United States was the biggest spender, with investments in A.I. in health care totaling $5.45 billion from January through September, the company reported.
Health care technology, including A.I., "is clearly seeing an increased level of investment," especially over the last year and a half, said Peter Micca, a partner and national health tech leader with Deloitte & Touche LLP in Manhattan. "COVID has only accelerated the awareness around the importance of technology in health care."
One hurdle is that, in contrast with industries such as finance and social media, health care data "is completely fragmented," Doerfler said. "We need to know the answers that are hidden inside the fragmented data, and you don't get the answers until you get the data sets large enough that you can find the answers quickly."
Incomplete data sets often lack diversity of race, gender, socioeconomic status and other characteristics, and overrepresent middle-aged white men with health insurance, Doerfler said. By contrast, said Terry Myerson, Truveta's CEO, the data set drawn from its 20 networks represents 16% of all clinical care provided in the United States and reflects "the diversity of our country."
The goal of Truveta, Myerson said, is to "empower our clinicians to be experts" and "help families make the most informed decisions about their care."
Some industry analysts warn of potential pitfalls in the adoption of A.I. At the annual conference of Stony Brook University's Center of Excellence in Wireless and Information Technology this month, Daniel Holewienko, executive director, big data and business intelligence at Henry Schein in Melville, said failing to embrace A.I. would put health care companies "at a competitive disadvantage."
Still, he said, those adopting the new technology can face high costs and difficulties integrating it into their current systems, among other challenges. Protecting privacy, preventing bias and making sure clinicians do not place excessive faith in the machines are among the other concerns, health care providers say.
Dr. Joel Saltz, founding chair of the Department of Biomedical Informatics at Stony Brook University, said the industry has proceeded cautiously in adopting A.I. The advanced technology has become more widely used in the last five years or so, he said.
"These things are incremental, especially in health care, because you've got to make sure they're safe and effective," said Saltz, who is working with colleagues on a project led by the federal Food and Drug Administration, focusing on the use of A.I. in digital pathology. Such tools, he said, are used for "decision support," to aid doctors and nurses rather than replace their work.
Stony Brook's biomedical informatics department is working on three projects funded by more than $5 million in federal grants to research the potential use of A.I. in diagnosing and treating cancer. An A.I. program can examine hundreds of slides and analyze millions of cells, complementing doctors' ability to visually classify tumors, Saltz said. "Think about the difference between a paper map and Google Earth," Saltz said. "It really opens up a whole new way of doing things."
It's possible that some of the research could be put into clinical practice within 10 years, he said.
In some cases, the COVID-19 crisis has sparked innovation by doctors, researchers and nurses as they raced to understand the new virus and find ways to save patients' lives. Nurses have been key players in using and, in at least one case, helping to develop the new technology.
At NYU Langone Hospital-Long Island in Mineola, for instance, computers are running a new A.I.-powered program that keeps an eye on COVID-19 patients' vital signs, lab results and other information, using patients' electronic medical records to monitor 17 data points every 30 minutes and detect signs of impending danger.
A paper version of the program was born of necessity during the first COVID surge in early 2020. At the time, nurse clinician Cathrine Abbate was seeking a rapid, consistent way to communicate with her fellow nurses and doctors about the severely ill patients suffering from a new and brutal virus.
On video conference calls before and after their shifts, Abbate and other nurses brainstormed about the warning signs that tended to precede a rapid decline in patients' condition, such as needing large amounts of oxygen or not being able to eat or move. With that information, she used Microsoft Word to create a blank grid that she printed out at her home in Huntington Station. The grid included seven columns, tracking information about the patient's condition. In the hospital, using copies of the grid made it easier for nurses to quickly rank the severity of each symptom and give an overall rating from 1 to 10, with 10 being the worst, she said.
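The logic of such a grid can be sketched in a few lines. The seven column names below are hypothetical stand-ins for the warning signs the nurses tracked, and rolling the row up to the worst single symptom is one plausible rule for the overall rating, not necessarily the one the unit actually used:

```python
# Illustrative sketch of the nurses' paper grid: seven symptom columns, each
# scored 1-10 (10 = worst), rolled up into a single overall rating.
# The column names and the "worst symptom wins" roll-up are assumptions
# made for illustration, not the hospital's documented method.

COLUMNS = [
    "oxygen_requirement", "respiratory_effort", "mobility",
    "oral_intake", "mental_status", "vital_sign_trend", "lab_trend",
]

def overall_rating(row):
    """row: {column: severity 1-10} -> overall 1-10 rating for the patient."""
    for col, severity in row.items():
        if col not in COLUMNS:
            raise ValueError(f"unknown column: {col}")
        if not 1 <= severity <= 10:
            raise ValueError(f"severity out of range: {col}={severity}")
    # The patient's overall rating is driven by the worst single symptom.
    return max(row.values())

row = {"oxygen_requirement": 8, "mobility": 4, "oral_intake": 6}
print(overall_rating(row))  # the high oxygen need dominates the rating
```

Encoding the columns once and validating the 1-10 range is what turns an ad-hoc Word grid into something a rapid-response team can read consistently across shifts.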
"We needed to be able to fluidly communicate with each other about how the patients were doing," Abbate recalled. "It was just a way to create a language for ourselves."
Nurse manager Sarojini Seemungal helped implement the new system on the 30-bed unit, and alerted her own managers. Moorehead brought it to the attention of researchers at NYU Langone in Manhattan who specialize in analyzing data.
The researchers spent months meeting weekly with nurses and developing an A.I. program that provides information to a rapid response team of critical care nurses at the Long Island hospital who give special attention to the highest-risk patients, said Dr. Yindalon Aphinyanaphongs, director of operational data science and machine learning at NYU Langone Health.
The program acts as a "tireless monitor," taking information about thousands of previous patients including many whose conditions deteriorated and using it to predict whether current patients are likely to decline, he said.
There's a lot of "hype" about A.I. and its subset machine learning, a term that refers to computers learning from examples, Aphinyanaphongs said.
"A lot of times when people think of artificial intelligence, they think of, you know, WALL-E," he said, in a reference to the 2008 animated movie about a lonely robot. But in fact, "the value in some of these models has to do with, not doing something better than humans, but doing things faster than humans can do," and more consistently, he said.
A tool like the one developed by the nurses and researchers, he said, can take a health care provider who has little experience with COVID, and it "can help elevate their experience and their expertise to the point where they're functioning at the same sort of assessment level as someone who has seen a lot of COVID patients."
The program can be downloaded for free by other hospitals that use the Epic electronic medical records system, Aphinyanaphongs said.
At Mount Sinai South Nassau in Oceanside, computers use A.I. to make sure patients receive precise, personalized care, taking into account the severity of their illnesses and other factors, said Stacey Conklin, chief nursing officer and senior vice president of patient care services. Those at higher risk of falls, for example, get extra help moving around if needed, she said.
A.I. "takes a lot of the subjectivity away from staffing, and allows us to really put the resources where they're needed most," Conklin said. "If I as a manager am trying to figure out where to put all of my resources, it's very helpful for me to be able to look broadly across the unit and see what's going on with all the patients so that I can ensure that the patients are getting the best care."
At the Catholic Health system's six sleep labs, A.I. is used to analyze the sleep studies of patients who spend the night hooked up to machines that record brain waves, breath patterns, cardiac signals, leg movements and other data points to diagnose conditions such as sleep apnea, said Brendan Duffy, director of sleep services at the network.
The data can fill hundreds of pages, and analyzing the information is "a very time-consuming, very meticulous" process that used to take one to two hours for each report, Duffy said.
Once the health system started using the A.I. program about three months ago, that time was reduced to about 20 minutes, he said.
The new system means the sleep labs can get patients on the calendar for follow-up appointments more quickly, so patients spend less time driving while drowsy or suffering compromised immune systems due to sleep deprivation, he said.
But despite their remarkable efficiency, he said, the computers cant have the last word.
A board-certified physician reviews the sleep reports "each and every time, and that's nonnegotiable," he said.
At Northwell's Feinstein Institutes for Medical Research in Manhasset, researchers used A.I. to analyze 24 million patient vital sign measurements. The results helped them predict which patients were low-risk enough to sleep through the night with a nurse looking in on them periodically, instead of being awakened to have their vitals checked, according to an article published last year in the journal npj Digital Medicine.
The health system also is using A.I. to identify certain high-risk patients, said Dr. Jamie Hirsch, director of Northwell's data science program.
In presentations about A.I., Hirsch tells his fellow physicians the technology can help identify people such as a fictional patient he has dubbed "Ethel," a sprightly 87-year-old grandmother who is "fiercely independent," but who feels overwhelmed in the hospital, lives alone and might need more assistance than she realizes.
In a busy hospital filled with hundreds of patients, a patient like Ethel might not get the hand-holding she needs, he said.
But when an A.I. program is trained to flag patients who are older, live alone and are coping with a bewildering array of medications and discharge instructions, he said, "now you have a patient experience specialist that's going to come in and say, 'How are you? Let's sit down, let's talk, you know, how can we make your experience better? How do we get you home, so you can continue living that independent life that you so value?'"
He said, "It allows us to focus our energies in the right way, to the right person, at the right time."
Maura McDermott covers health care and other business news on Long Island.
Artificial intelligence in the healthcare sector: Lindera successfully closes financing round of six million euros – KULR-TV
Lindera, one of the leading deep-tech companies in the field of computer vision, has successfully closed a Series A financing round. The Berlin-based health-tech company is receiving additional growth capital from new investors as well as from its existing shareholders from the Rheingau Founders circle. With its technology, Lindera is democratising the use of high-precision 3D motion tracking in the healthcare sector. Lindera's scientifically tested and validated solution makes it possible to create motion analyses with a smartphone camera, comparable in measurement accuracy to the gold standard (GAITRite).
Karsten Wulf, Co-founder of buw Holding and Shareholder of family office zwei.7, comments on his investment: "Given the demographic developments and ongoing shortage of skilled care professionals, we see enormous potential in digital health and care applications. This is not only about the sustainable improvement of efficiency but also about increasing the quality of patient care. We are convinced that Lindera, with the cutting-edge digital technology it has developed in-house and its scientific excellence, will play an important role in this area while at the same time keeping the focus on people." Commenting on the successful financing round, Diana Heinrichs, Founder and CEO of Lindera, says: "Similar to how Amazon has evolved from a pioneer in online book retail to one of the leading tech companies, backed by zwei.7 we are now developing from an AI pioneer in care into a movement specialist along the entire health supply chain."
The Berlin-based company's AI-based mobility analysis, the Lindera SturzApp, is already in successful use in more than 350 care facilities and therapy centres throughout Germany. Its customer base includes some of the largest German care facility operators. Lindera is also planning to expand internationally via a pilot project in Paris. In addition, long-term cooperations with customers and health insurance companies, as well as deep roots in the care structures, have created the basis for further growth.
In addition to nursing care, Lindera has been deploying its technology in other medical areas for a long time. The company is using patented, self-learning computer vision technology to address inefficiencies in care structures and to standardise billing-relevant movement assessments at the highest level, with the goal of measurably increasing the quality of care. As a result, Lindera aims to use its AI-driven medical devices to make lasting changes in other healthcare areas, such as orthopaedics, geriatrics, neurology, and physical rehabilitation. With "LTech", its own software development kit, Lindera also provides its smart 3D algorithm to developers of other healthcare applications, contributing to the development of apps, for example, in the field of physiotherapy.
Within the care sector, Lindera has now received one of the largest investments in the DACH region to date. The team intends to use the additional capital to establish an objective, patient-centred quality standard in care, grow internationally, and advance the development for admission, treatment, and discharge management in hospitals.
Issued by news aktuell/ots on behalf of Lindera GmbH
See the article here:
Posted in Artificial Intelligence
Comments Off on Artificial intelligence in the healthcare sector: Lindera successfully closes financing round of six million euros – KULR-TV
The AI Act – does this mark a turning point for the regulation of artificial intelligence? An overview – Lexology
Posted: at 12:06 pm
For some time now, the EU has been preoccupied with the question of artificial intelligence, in particular the creation of an appropriate legal framework. At the end of April 2021, the EU Commission finally presented a Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts (hereinafter "AI Act"), which constituted, in its own words, the world's first legal framework for AI. But what is behind it? What is to be regulated, and how? What effects can the AI Act have? These are the questions we will explore in this Plugin edition dedicated to the AI Act. We will start off with an overview.
Background to the legislation
The European Parliament and the European Council have in the past explicitly and repeatedly called for legislative action or adopted resolutions in relation to artificial intelligence (AI) systems. In 2018, the EU Commission published its European AI strategy, "Artificial Intelligence for Europe", and a "Coordinated Plan for Artificial Intelligence", set up a high-level expert group on artificial intelligence and, on this basis, published "Guidelines for Trustworthy AI" in 2019. In 2020, against this background, the EU Commission finally published its "White Paper on Artificial Intelligence: A European Concept for Excellence and Trust" (White Paper), which for the first time developed a specific concept for regulating AI. These measures in particular form the basis for the current proposal of the AI Act.
Basic information on the AI Act
The rapid development of AI technologies is witnessed daily. On the one hand, they are said to bring multiple benefits to the economy and society across the entire spectrum of industrial and social activities. On the other hand, their use can also potentially result in new or changed risks or disadvantages for individuals or society, for example in connection with AI-based social scoring or biometric facial recognition. In this respect, the AI Act is basically intended to balance the benefits and risks of AI technologies. According to the explanatory memorandum to the proposal, the AI Act contains a regulatory approach to AI that respects proportionality and is limited to the minimum requirements necessary to address the risks and problems associated with AI without unduly restricting or hindering technological development or otherwise disproportionately increasing the costs of placing AI solutions on the market.
The proposal accordingly sets out harmonised rules for the development, placing on the market and use of AI systems in the Union. The main objective is to ensure the smooth functioning of the internal market by setting such harmonised rules. Weighing up various policy options, the EU Commission opted in the proposal for a horizontal EU legislative instrument based on proportionality and a risk-based approach, complemented by a code of conduct for AI systems that do not pose a high risk. In this respect, the proposal follows the risk-based approach already laid out in the White Paper, according to which AI systems are grouped into categories according to their potential risk: unacceptable risk, high risk, and low or minimal risk.
Overview of the main regulatory content
The AI Act first defines its scope of application. Two aspects of regulation are particularly striking here: the definition of AI systems and the extensive territorial scope of application. AI systems are legally defined as software developed using one or more of the techniques and concepts listed in Annex I and capable of producing results such as content, predictions, recommendations or decisions that influence the environment with which they interact, in relation to a set of goals defined by humans. The techniques and concepts listed in Annex I include machine learning, logic- and knowledge-based approaches, as well as statistical approaches, Bayesian estimation, and search and optimisation methods. Critics point to several features here that require interpretation and specification. In a sense, all kinds of more complex software could be included, so the definition could be described as rather imprecise.
The AI Act also applies the so-called place-of-market principle to determine the territorial scope. Providers that place AI systems on the market or put them into service in the Union are covered by the territorial scope of the AI Act, regardless of whether those providers are established in the Union or in a third country. Furthermore, users of AI systems located in the Union, as well as providers and users of AI systems established or located in a third country, are covered if the result produced by the system is used in the Union. Consequently, there is a kind of extraterritorial claim of EU law and a wide territorial scope; the AI Act mirrors the General Data Protection Regulation in this respect.
A core element of the AI Act is the risk-based approach: some AI practices classified as particularly harmful are to be banned (please see the Plugin article "Prohibited practices under the draft AI Act: Does the European Commission want to ban Instagram?"). Furthermore, the proposal contains extremely extensive regulation of high-risk AI systems, i.e. those systems that pose significant risks to the health and safety or fundamental rights of individuals (see the Plugin article on this subject "High-risk systems: A danger foreseen is half avoided, or is it?"). For certain AI systems that do not fall under the aforementioned risk categories, only minimal transparency obligations are proposed, in particular for the use of chatbots or so-called deepfakes. Finally, AI systems without an inherent risk that requires regulation are not to be covered by the AI Act at all. The EU Commission assumes that the vast majority of AI systems fall into this category and cites applications such as AI-supported video games or spam filters as examples.
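The risk pyramid described above can be summarised schematically. The tier descriptions follow the proposal; the lookup function and its example systems are purely illustrative, since real classification turns on the Act's Annexes and legal analysis:

```python
from enum import Enum

class RiskTier(Enum):
    """The draft AI Act's risk pyramid (simplified for illustration)."""
    UNACCEPTABLE = "prohibited practice"
    HIGH = "high-risk: conformity assessment and extensive obligations"
    LIMITED = "limited risk: transparency obligations only"
    MINIMAL = "minimal risk: outside the AI Act"

def classify(system: str) -> RiskTier:
    """Toy mapping of example systems mentioned in the proposal to tiers.
    A real determination depends on Annexes II/III and legal analysis."""
    prohibited = {"social scoring", "subliminal manipulation"}
    high_risk = {"credit scoring", "recruitment screening", "medical device AI"}
    transparency = {"chatbot", "deepfake generator"}
    if system in prohibited:
        return RiskTier.UNACCEPTABLE
    if system in high_risk:
        return RiskTier.HIGH
    if system in transparency:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL
```

For instance, `classify("chatbot")` lands in the limited-risk tier, while a spam filter falls through to minimal risk, matching the Commission's own examples.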
The regulation of high-risk AI systems could be considered the centrepiece of the proposal. They will have to comply with horizontal requirements for trustworthy AI and undergo conformity assessment procedures before being placed on the market in the Union. In order to ensure safety and compliance with existing legislation protecting fundamental rights throughout the lifecycle of AI systems, the obligations imposed on providers and users of these systems are extremely comprehensive. These include, for example, conformity assessments, risk management systems, technical documentation, record-keeping obligations, transparency and provision of information to users, human oversight, accuracy, robustness and cybersecurity, quality management systems, post-market monitoring, notification of serious incidents and malfunctions, and corrective actions. In this context, special attention must also be given to compliance with data quality criteria and data governance (see the Plugin article on this topic "Data governance in the AI Regulation: in conflict with the GDPR?"). Affected companies are likely to face an extremely comprehensive and complex implementation effort.
The AI Act also clearly aims to establish a comprehensive framework for AI product compliance (see the Plugin article on this topic "CE mark for AI systems: extension of product safety law to artificial intelligence"). With regard to high-risk AI systems that are safety components of products, the proposal will be integrated into existing sector-specific safety legislation to maintain coherence, avoid overlap and reduce administrative burden. Thus, the requirements for high-risk AI systems associated with products covered by the New Legislative Framework (NLF) (such as machinery, medical devices, toys) will be assessed under the existing conformity assessment procedures of the relevant NLF legislation. According to the explanatory memorandum, the interplay is that the safety risks specific to the respective AI system are subject to the requirements of the AI Act, while NLF legislation ensures the safety of the end product as a whole.
Furthermore, the rules of the AI Act are to be enforced by the Member States through a governance structure and a cooperation mechanism at Union level, including the establishment of a European Artificial Intelligence Board. In addition, measures to support innovation are proposed, especially in the form of AI regulatory sandboxes ("real-world laboratories"; see the Plugin article on this "Innovation meets regulation: A sandbox for artificial intelligence (AI)").
On the basis of the AI Act, the Member States will also adopt rules on sanctions applicable to infringements of the AI Act, with fines specifically mentioned. The sanctions provided for must be effective, proportionate and dissuasive. Depending on the infringement, fines are to reach up to 30,000,000 euros or, in the case of companies, up to 6 percent of total worldwide annual turnover of the preceding business year, or up to 10,000,000 euros or, in the case of companies, up to 2 percent, whichever amount is higher (see the Plugin article "Fines under the AI Act: A bottomless pit?"). Article 10 of the AI Act plays a special role here: there are obvious parallels with the GDPR and the heavy fines that data protection authorities have recently imposed.
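The "whichever is higher" mechanism can be made concrete with a small calculation. The tier amounts below are taken from the proposal as described above; the function itself is just an illustrative sketch, not legal guidance:

```python
def max_fine(fixed_cap_eur: float, turnover_pct: float,
             annual_turnover_eur: float) -> float:
    """Return the applicable cap: the fixed amount or the turnover-based
    amount, whichever is higher (the same mechanism as GDPR Art. 83)."""
    return max(fixed_cap_eur, turnover_pct * annual_turnover_eur / 100.0)

# Top tier of the proposal: up to EUR 30m or 6% of worldwide annual turnover.
# For a company with EUR 1bn turnover, the 6% figure (EUR 60m) is the cap;
# for a small company with EUR 100m turnover, the fixed EUR 30m cap applies.
top_tier_large = max_fine(30_000_000, 6, 1_000_000_000)
top_tier_small = max_fine(30_000_000, 6, 100_000_000)
```

This is why the turnover-based prong matters mainly for large companies: below roughly EUR 500m in turnover, the fixed 30m cap of the top tier dominates.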
Outlook
With the proposed regulation, the EU Commission has laid a foundation for the regulation of AI in the EU. It has the potential to subject the development, placing on the market and use of a large proportion of AI systems in the Union to comprehensive and complex regulation. This applies both in general and in specific sectors, for example in the area of Work 4.0 (see the Plugin article on this "The impact of the AI Act on HR technology") or InsurTech (see the Plugin article on this "Regulation of the use of Big Data and AI by insurance undertakings"). Accordingly, the first draft of the proposal is already facing harsh criticism, for example from industry associations. For others, however, the draft does not go far enough: critics argue that far too few applications fall under the prohibited practices in the field of artificial intelligence. The EU still has a mammoth task ahead of it before a final legal framework is achieved: the draft regulation must now pass through the European Parliament and other EU bodies in the legislative process, which is likely to involve amendments and years of tough negotiation.
Read more from the original source:
Posted in Artificial Intelligence
Comments Off on The AI Act – does this mark a turning point for the regulation of artificial intelligence? An overview – Lexology
Staten Island Family Advocating For New Artificial Intelligence Program That Aims To Prevent Drug Overdoses – CBS New York
Posted: at 12:06 pm
NEW YORK (CBSNewYork) - So many families have felt the pain of losing a loved one to a drug overdose, and now, new artificial intelligence technology is being used to help prevent such tragedies.
"When you have a family member who lives this lifestyle, it's a call you always know could come," Megan Wohltjen said.
Wohltjen's brother, Samuel Grunlund, died of an overdose in March 2020, just two days after leaving a treatment facility. He was 27.
"Very happy person. He was extremely athletic. Really intelligent, like, straight-A student ... He started, you know, smoking marijuana and then experimenting with other drugs," Wohltjen told CBS2's Natalie Duddridge.
"He wanted to get clean and addiction just destroyed his life," said Maura Grunlund, Sam's mother.
Since Sam's death, his mother and sister have been advocating for a new program they believe could have saved him. It's called "Hotspotting the Opioid Crisis."
Researchers at MIT developed artificial intelligence that aims to stop an overdose before it happens.
"This project has never been tried before, and it's an effort to combine highly innovative predictive analytics and an AI-based algorithm to identify those who are most at risk of an overdose," said former congressman Max Rose, with the Secure Future Project.
The technology screens thousands of medical records through data sharing with doctors, pharmacies and law enforcement.
For example, over time, it might flag if a known drug user missed a treatment session, didn't show up to court or, in Sam's case, just completed a rehab program. It then alerts health care professionals.
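The MIT model itself is not public and is described as predictive analytics; purely as a sketch of the rule-based signals this paragraph lists (all record fields, thresholds, and names below are invented for illustration), a flagging routine might look like:

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

@dataclass
class PatientRecord:
    """Hypothetical, simplified view of the shared medical/court data."""
    patient_id: str
    missed_treatment_sessions: int
    missed_court_date: bool
    rehab_discharge: Optional[date]  # most recent program completion, if any

def flag_for_outreach(rec: PatientRecord, today: date) -> bool:
    """Flag a record for peer-advocate outreach when any of the signals
    described in the article appears: a missed treatment session, a missed
    court date, or a recent rehab discharge (a known high-risk window)."""
    recently_discharged = (
        rec.rehab_discharge is not None
        and (today - rec.rehab_discharge) <= timedelta(days=14)
    )
    return (rec.missed_treatment_sessions > 0
            or rec.missed_court_date
            or recently_discharged)
```

The point of the sketch is the design choice the article describes: the output is not a diagnosis but a prompt for a human peer advocate to reach out.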
"I'm just calling to check in to see how things are going," said Dr. Joseph Conte, executive director of Staten Island Performing Provider System.
Conte says the program trains dozens of peer advocates who themselves are recovering addicts. They reach out to at-risk individuals and find out what they need from jobs to housing to therapy.
There's no pressure on the patient to enter rehab. The goal is to keep them alive.
"We can't help them if they're dead ... If you're not ready for treatment, you should be ready for harm reduction. You should have Narcan available if you or a friend overdoses," Conte said.
Health officials say a record number of people, 100,000, died of overdoses in 2020.
This year alone on Staten Island, more than 70 people have fatally overdosed.
The number of opioid deaths per 100,000 people on Staten Island is about 170% higher than the national rate. Officials say fentanyl is largely to blame, and the lethal drug was found in 80% of Staten Island toxicology reports.
"I believe that my son would be alive today if he hadn't used fentanyl ... I really feel that if this was any other disease, people would be up in arms," Maura Grunlund said.
Wohltjen says her brother always encouraged her to run the New York City Marathon, so this year, she did it, wearing his Little League baseball hat and raising thousands of dollars for the Partnership to End Addiction.
"If we could save one life, it would make a difference," Wohltjen said.
More:
Posted in Artificial Intelligence
Comments Off on Staten Island Family Advocating For New Artificial Intelligence Program That Aims To Prevent Drug Overdoses – CBS New York
6 positive AI visions for the future of work – World Economic Forum
Posted: at 12:06 pm
Current trends in AI are nothing if not remarkable. Day after day, we hear stories about systems and machines taking on tasks that, until very recently, we saw as the exclusive and permanent preserve of humankind: making medical diagnoses, drafting legal documents, designing buildings, and even composing music.
Our concern here, though, is with something even more striking: the prospect of high-level machine intelligence systems that outperform human beings at essentially every task. This is not science fiction. In a recent survey of leading computer scientists, the median estimate gave a 50% chance of this technology arriving within 45 years.
Importantly, that survey also revealed considerable disagreement. Some see high-level machine intelligence arriving much more quickly, others far more slowly, if at all. Such differences of opinion abound in the recent literature on the future of AI, from popular commentary to more expert analysis.
Yet despite these conflicting views, one thing is clear: if we think this kind of outcome might be possible, then it ought to demand our attention. Continued progress in these technologies could have extraordinarily disruptive effects: it could exacerbate recent trends in inequality, undermine work as a force for social integration, and weaken a source of purpose and fulfilment for many people.
In April 2020, an ambitious initiative called Positive AI Economic Futures was launched by Stuart Russell and Charles-Edouard Bouée, both members of the World Economic Forum's Global AI Council (GAIC). In a series of workshops and interviews, over 150 experts from a wide variety of backgrounds gathered virtually to discuss these challenges, as well as possible positive artificial intelligence visions and their implications for policymakers.
Those included Madeline Ashby (science fiction author and expert in strategic foresight), Ken Liu (Hugo Award-winning science fiction and fantasy author), and economists Daron Acemoglu (MIT) and Anna Salomons (Utrecht), among many others. What follows is a summary of these conversations, developed in the Forum's report "Positive AI Economic Futures".
Participants were divided on whether the end of traditional work would be a good thing. One camp thought that, freed from the shackles of traditional work, humans could use their new freedom to engage in exploration, self-improvement, volunteering, or whatever else they find satisfying. Proponents of this view usually supported some form of universal basic income (UBI), while acknowledging that our current system of education hardly prepares people to fashion their own lives, free of any economic constraints.
The second camp in our workshops and interviews believed the opposite: traditional work might still be essential. To them, UBI is an admission of failure: it assumes that most people will have nothing of economic value to contribute to society. They can be fed, housed, and entertained mostly by machines but otherwise left to their own devices.
People will be engaged in supplying interpersonal services that can be provided, or that we prefer to have provided, only by humans. These include therapy, tutoring, life coaching, and community-building. That is, if we can no longer supply routine physical labour and routine mental labour, we can still supply our humanity. For these kinds of jobs to generate real value, we will need to be much better at being human, an area where our education system and scientific research base are notoriously weak.
So, whether we think that the end of traditional work would be a good thing or a bad thing, it seems that we need a radical redirection of education and science to equip individuals to live fulfilling lives or to support an economy based largely on high-value-added interpersonal services. We also need to ensure that the economic gains born of AI-enabled automation will be fairly distributed in society.
One of the greatest obstacles to action is that, at present, there is no consensus on what future we should target, perhaps because there is hardly any conversation about what might be desirable. This lack of vision is a problem because, if high-level machine intelligence does arrive, we could quickly find ourselves overwhelmed by unprecedented technological change and implacable economic forces. This would be a vast opportunity squandered.
For this reason, the workshop attendees and interview participants, from science-fiction writers to economists and AI experts, attempted to articulate positive visions of a future where Artificial Intelligence can do most of what we currently call work.
These scenarios represent possible trajectories for humanity. None of them, though, is unambiguously achievable or desirable. And while there are elements of important agreement and consensus among the visions, there are often revealing clashes, too.
The economic benefits of technological progress are widely shared around the world. The global economy is 10 times larger because AI has massively boosted productivity. Humans can do more and achieve more by sharing this prosperity. This vision could be pursued by adopting various interventions, from introducing a global tax regime to improving insurance against unemployment.
Large companies focus on developing AI that benefits humanity, and they do so without holding excessive economic or political power. This could be pursued by changing corporate ownership structures and updating antitrust policies.
Human creativity and hands-on support give people time to find new roles. People adapt to technological change and find work in newly created professions. Policies would focus on improving educational and retraining opportunities, as well as strengthening social safety nets for those who would otherwise be worse off due to automation.
Society decides against excessive automation. Business leaders, computer scientists, and policymakers choose to develop technologies that increase rather than decrease the demand for workers. Incentives to develop human-centric AI would be strengthened and automation taxed where necessary.
New jobs are more fulfilling than those that came before. Machines handle unsafe and boring tasks, while humans move into more productive, fulfilling, and flexible jobs with greater human interaction. Policies to achieve this include strengthening labour unions and increasing worker involvement on corporate boards.
In a world with less need to work and basic needs met by UBI, well-being increasingly comes from meaningful unpaid activities. People can engage in exploration, self-improvement, volunteering or whatever else they find satisfying. Greater social engagement would be supported.
The intention is that this report starts a broader discussion about what sort of future we want and the challenges that will have to be confronted to achieve it. If technological progress continues its relentless advance, the world will look very different for our children and grandchildren. Far more debate, research, and policy engagement are needed on these questions; they are now too important for us to ignore.
Written by
Stuart Russell, Professor of Computer Science and Director of the Center for Human-Compatible AI, University of California, Berkeley
Daniel Susskind, Fellow in Economics, Oxford University, and Visiting Professor, King's College London
The views expressed in this article are those of the author alone and not the World Economic Forum.
Original post:
6 positive AI visions for the future of work - World Economic Forum
Posted in Artificial Intelligence
Comments Off on 6 positive AI visions for the future of work – World Economic Forum