In 1946 the New York Times revealed one of World War II's top secrets: "an amazing machine which applies electronic speeds for the first time to mathematical tasks hitherto too difficult and cumbersome for solution." One of the machine's creators offered that its purpose was "to replace, as far as possible, the human brain." While this early version of a computer did not replace the human brain, it did usher in a new era in which, according to the historian Jill Lepore, "technological change wildly outpaced the human capacity for moral reckoning."
That era continues with the application of machine learning to questions of command and control. Machine learning is in some areas already a reality: the U.S. Air Force, for example, has used it as a "working aircrew member" on a military aircraft, and the U.S. Army is using it to choose the "right shooter" for a target identified by an overhead sensor. The military is making strides toward using machine learning algorithms to direct robotic systems, analyze large sets of data, forecast threats, and shape strategy. Using algorithms in these areas and others offers awesome military opportunities, from saving person-hours in planning to outperforming human pilots in dogfights to using a "multihypothesis semantic engine" to improve our understanding of global events and trends. Yet with the opportunity of machine learning comes ethical risk: the military could surrender life-and-death choice to algorithms, and surrendering choice abdicates one's status as a moral actor.
So far, the debate about algorithms' role in battlefield choice has been either-or: Either algorithms should make life-and-death choices because there is no other way to keep pace on an increasingly autonomous battlefield, or humans should make life-and-death choices because there is no other way to maintain moral standing in war. This is a false dichotomy. Choice is not a unitary thing to be handed over either to algorithms or to people. At all levels of decision-making (tactical, operational, and strategic), choice is the result of a several-step process. The question is not whether algorithms or humans should make life-and-death choices, but rather which steps in the process each should be responsible for. By breaking choice into its constituent parts and training servicemembers in decision science, the military can both increase decision speed and maintain moral standing. This article proposes how it can do both. It describes the constituent components of a choice, then discusses which of those components should be performed by machine learning algorithms and which require human input.
What Decisions Are and What It Takes To Make Them
Consider a fighter pilot hunting surface-to-air missiles. When the pilot attacks, she is determining that her choice, relative to the other possibilities before her, maximizes expected net benefit, or utility. She may not consciously process the decision in these terms, and may not make the calculation perfectly, but she is nonetheless determining which decision optimizes expected costs and benefits. To be clear, the example of the fighter pilot is not meant to bound the discussion. The basic conceptual process is the same whether the decision-makers are trigger-pullers on the front lines or commanders in distant operations centers. The scope and particulars of a decision change at higher levels of responsibility, of course, from risking one unit to many, or from risking one bystander's life to risking hundreds. Regardless of where the decision-maker sits (or, rather, where the authority to lawfully choose to employ force resides), choice requires the same four fundamental steps.
The first step is to list the alternatives available to the decision-maker. The fighter pilot, again just for example, might have two alternatives: attack the missile system from a relatively safe long-range approach, or attack from closer range with more risk but a higher probability of a successful attack. The second step is to take each of these alternatives and define the relevant possible results. In this case, the pilot's relevant outcomes might include killing the missile while surviving, killing the missile without surviving, failing to kill the missile but surviving, and, lastly, failing to kill the missile while also failing to survive.
The third step is to make a conditional probability estimate, or an estimate of the likelihood of each result assuming a given alternative. If the pilot goes in close, what is the probability that she kills the missile and survives? What is the same probability for the attack from long range? And so on for each outcome of each alternative.
So far the pilot has determined what she can do, what may happen as a result, and how likely each result is. She now needs to say how much she values each result. To do this she needs to identify how much she cares about each dimension of value at play in the choice, which, in highly simplified terms, are the benefit to the mission that comes from killing the missile and the cost that comes from sacrificing her life, the lives of targeted combatants, and the lives of bystanders. It is not enough to say that killing the missile is beneficial and sacrificing life is costly. She needs to put benefit and cost into a single common metric, sometimes called a "utility," so that the value of one can be directly compared to the value of the other. This relative comparison is known as a value trade-off, the fourth step in the process. Whether the decision-maker is on the tactical edge or making high-level decisions, the trade-off takes the same basic form: The decision-maker weighs the value of attaining a military objective against the cost in dollars and lives (friendly, enemy, and civilian) needed to attain it. This trade-off is at once an ethical and a military judgment: it puts a price on life at the same time that it puts a price on a military objective.
Once these four steps are complete, rational choice is a matter of fairly simple math. Utilities are weighted by an outcome's likelihood: high-likelihood outcomes get more weight and are more likely to drive the final choice.
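The four steps reduce to a short calculation. The sketch below encodes the pilot's choice: steps one and two are the alternatives and their outcomes, step three is the probability attached to each outcome, and step four is its utility. Every number here is an invented assumption for illustration, not doctrine.

```python
# Step 1: alternatives. Step 2: possible results of each alternative.
# Step 3: P(result | alternative). Step 4: utility of each result.
# All probabilities and utilities are hypothetical.
alternatives = {
    "attack_close": [
        # (outcome, probability, utility)
        ("kill missile, survive",       0.60,  500),
        ("kill missile, don't survive", 0.15, -100),
        ("miss, survive",               0.20,  -50),
        ("miss, don't survive",         0.05, -600),
    ],
    "attack_far": [
        ("kill missile, survive",       0.35,  500),
        ("kill missile, don't survive", 0.02, -100),
        ("miss, survive",               0.60,  -50),
        ("miss, don't survive",         0.03, -600),
    ],
}

def expected_utility(outcomes):
    # Weight each outcome's utility by its likelihood, then sum.
    return sum(p * u for _, p, u in outcomes)

# Rational choice: pick the alternative with the highest expected utility.
best = max(alternatives, key=lambda a: expected_utility(alternatives[a]))
for name, outcomes in alternatives.items():
    print(name, round(expected_utility(outcomes), 1))
print("rational choice:", best)
```

Under these particular numbers the close-range attack wins despite its higher risk, because the added probability of killing the missile outweighs the added chance of losing the aircraft. Change the utilities and the answer can flip, which is exactly why the value inputs matter.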
It is important to note that, for both human and machine decision-makers, "rational" is not necessarily the same thing as "ethical" or "successful." The rational choice process is the best way, given uncertainty, to optimize what decision-makers say they value. It is not a way of saying that one has the right values, and it does not guarantee a good outcome. Good decisions will still occasionally lead to bad outcomes, but this decision-making process optimizes results in the long run.
At least in the U.S. Air Force, pilots do not consciously step through expected utility calculations in the cockpit. Nor is it reasonable to assume that they should: performing the mission is challenging enough. For human decision-makers, explicitly working through the steps of expected utility calculations is impractical, at least on a battlefield. It's a different story, however, with machines. If the military wants to use algorithms to achieve decision speed in battle, then it needs to make the components of a decision computationally tractable; that is, the four steps above need to reduce to numbers. The question becomes whether it is possible to provide the numbers in a way that combines the speed that machines can bring with the ethical judgment that only humans can provide.
Where Algorithms Are Better and Where Human Judgment Is Necessary
Computer and data science have a long way to go to exercise the power of machine learning and data representation assumed here. The Department of Defense should continue to invest heavily in the research and development of modeling and simulation capabilities. However, as it does that, we propose that algorithms list the alternatives, define the relevant possible results, and give conditional probability estimates (the first three steps of rational decision-making), with occasional human inputs. The fourth step of determining value should remain the exclusive domain of human judgment.
Machines should generate alternatives and outcomes because they are best suited for the complexity and rule-based processing that those steps require. In the simplified example above there were only two possible alternatives (attack from close or far) with four possible outcomes (kill the missile and survive, kill the missile and don't survive, don't kill the missile and survive, and don't kill the missile and don't survive). The reality of future combat will, of course, be far more complicated. Machines will be better suited for handling this complexity, exploring numerous solutions, and illuminating options that warfighters may not have considered. This is not to suggest, though, that humans will play no role in these steps. Machines will need to make assumptions and pick starting points when generating alternatives and outcomes, and it is here that human creativity and imagination can add value.
Machines are hands-down better suited for the third step: estimating the probabilities of different outcomes. Human judgments of probability tend to rely on heuristics, such as how readily examples come to mind, rather than on more accurate indicators like relevant base rates, or how often a given event has historically occurred. People are even worse when it comes to understanding probabilities for a chain of events. Even a relatively simple combination of two conditional probabilities is beyond the reach of most people. There may be openings for human input when unrepresentative training data encodes bias into the resulting algorithms, something humans are better equipped to recognize and correct. But even then, the departures should be marginal rather than a complete abandonment of algorithmic estimates in favor of intuition. Probability, like long division, is an arena best left to machines.
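Both failure modes named above, misjudged chains of conditional probability and neglected base rates, are easy to show numerically. The numbers below are invented purely for illustration:

```python
# Chained conditional probabilities: a two-stage mission must first
# penetrate air defenses, then hit the target. (Hypothetical numbers.)
p_penetrate = 0.9
p_hit_given_penetrate = 0.8
# The chance the whole chain succeeds is the product, lower than
# either step alone, which intuition often fails to register.
p_mission = p_penetrate * p_hit_given_penetrate
print(round(p_mission, 2))  # 0.72

# Base-rate neglect: a "95 percent accurate" sensor flags a vehicle as a
# missile launcher, but launchers are rare (1 percent base rate).
# Bayes' rule gives the probability the flag is a true positive.
p_launcher = 0.01
p_flag_given_launcher = 0.95
p_flag_given_other = 0.05
p_flag = (p_launcher * p_flag_given_launcher
          + (1 - p_launcher) * p_flag_given_other)
p_launcher_given_flag = p_launcher * p_flag_given_launcher / p_flag
print(round(p_launcher_given_flag, 2))  # 0.16, not 0.95
```

Most people read the sensor's "95 percent accuracy" as the answer; the correct figure, roughly 16 percent, is the kind of arithmetic an algorithm never gets wrong.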
While machines take the lead, with occasional human input, in steps one through three, the opposite is true for the fourth step of making value trade-offs. This is because value trade-offs capture both ethical and military complexity, as many commanders already know. Even with perfect information (e.g., knowing that the mission will succeed but will cost the pilot's life), commanders can still find themselves torn over which decision to make. Indeed, whether and how one should make such trade-offs is the essence of ethical theories like deontology and consequentialism. And prioritizing which military objectives will most efficiently lead to success (however defined) is an always-contentious and critical part of military planning.
As long as commanders and operators remain responsible for trade-offs, they can maintain control and responsibility for the ethicality of the decision even as they become less involved in the other components of the decision process. Of note, this control and responsibility can be built into the utility function in advance, allowing systems to execute at machine speed when necessary.
A Way Forward
Incorporating machine learning and AI into military decision-making processes will be far from easy, but it is possible and a military necessity. China and Russia are using machine learning to speed their own decision-making, and unless the United States keeps pace it risks finding itself at a serious disadvantage on future battlefields.
The military can ensure the success of machine-aided choice by ensuring that the appropriate division of labor between human and machines is well understood by both decision-makers and technology developers.
The military should begin by expanding developmental education programs so that they rigorously and repeatedly cover decision science, something the Air Force has started to do in its Pinnacle sessions, its executive education program for two- and three-star generals. Military decision-makers should learn the steps outlined above, and also learn to recognize and control for inherent biases, which can shape a decision as long as there is room for human input. Decades of decision science research have shown that intuitive decision-making is replete with systematic biases like overconfidence, irrational attention to sunk costs, and changes in risk preference based merely on how a choice is framed. These biases are not restricted just to people. Algorithms can show them as well when training data reflects biases typical of people. Even when algorithms and people split responsibility for decisions, good decision-making requires awareness of and a willingness to combat the influence of bias.
The military should also require technology developers to address ethics and accountability. Developers should be able to show that algorithmically generated lists of alternatives, results, and probability estimates are not biased in such a way as to favor wanton destruction. Further, any system addressing targeting, or the pairing of military objectives with possible means of affecting those objectives, should be able to demonstrate a clear line of accountability to a decision-maker responsible for the use of force. One means of doing so is to design machine learning-enabled systems around the decision-making model outlined in this article, which maintains the accountability of human decision-makers through their enumerated values. To achieve this, commanders should insist on retaining the ability to tailor value inputs. Unless input opportunities are intuitive, commanders and troops will revert to simpler, combat-tested tools with which they are more comfortable: the same old radios or weapons or, for decision purposes, slide decks. Developers can help make probability estimates more intuitive by providing them in visual form. Likewise, they can make value trade-offs more intuitive by presenting different hypothetical (but realistic) choices to assist decision-makers in refining their value judgments.
The unenviable task of commanders is to imagine a number of potential outcomes given their particular context and assign each a numerical score, or utility, such that meaningful comparisons can be made between them. For example, a commander might place a value of 1,000 points on the destruction of an enemy aircraft carrier and -500 points on the loss of a fighter jet. If this is an accurate reflection of the commander's values, she should be indifferent between an attack that destroys one enemy carrier with no fighter losses and one that destroys two carriers but costs her two fighters. Both are valued equally at 1,000 points. If the commander strongly prefers one outcome over the other, then the points should be adjusted to better reflect her actual values, or else an algorithm using that point system will make choices inconsistent with the commander's values. This is just one example of how to elicit trade-offs, but the key point is that the trade-offs need to be given in precise terms.
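A point system like this can be checked mechanically for consistency. The sketch below encodes the carrier example; the point values are those given above, while the helper function is a hypothetical illustration:

```python
# Hypothetical encoding of the commander's stated value trade-offs.
CARRIER_DESTROYED = 1000  # points per enemy carrier destroyed
FIGHTER_LOST = -500       # points per friendly fighter lost

def outcome_points(carriers_destroyed, fighters_lost):
    # Total value of an outcome under the commander's point system.
    return (carriers_destroyed * CARRIER_DESTROYED
            + fighters_lost * FIGHTER_LOST)

option_a = outcome_points(carriers_destroyed=1, fighters_lost=0)
option_b = outcome_points(carriers_destroyed=2, fighters_lost=2)
# Both score 1,000 points: the point system implies indifference.
print(option_a, option_b)
```

If the commander in fact strongly prefers option A, the weights are wrong: raising the cost of a lost fighter to, say, -700 points drops option B to 600 points and breaks the tie in A's favor, bringing the algorithm's choices back in line with her actual values.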
Finally, the military should pay special attention to helping decision-makers become proficient in their roles as appraisers of value, particularly with respect to decisions about whose life to risk, when, and for what objective. In the command-and-control paradigm of the future, decision-makers will likely be required to document such trade-offs in explicit forms so machines can understand them (e.g., "I recognize there is a 12 percent chance that you won't survive this mission, but I judge the value of the target to be worth the risk").
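Once such a statement is documented in explicit terms, a machine can apply it. A minimal sketch, using invented numbers for the commander's stated risk tolerance and target value, and simplistically assuming the target is destroyed whenever the aircraft survives:

```python
# Hypothetical machine-readable form of a documented trade-off.
p_loss = 0.12        # commander-acknowledged chance of losing the aircraft
target_value = 800   # commander's stated utility of destroying the target
aircraft_cost = 500  # commander's stated utility cost of a lost aircraft

# Crude approval rule: proceed only if the commander's own numbers
# imply positive expected value for the mission.
expected_value = (1 - p_loss) * target_value - p_loss * aircraft_cost
approve = expected_value > 0
print(approve)
```

The point is not this particular rule, which is far too simple for real operations, but that the documented trade-off, not the machine, is what carries the ethical weight of the decision.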
If decision-makers at the tactical, operational, or strategic levels are not aware of or are unwilling to pay these ethical costs, then the construct of machine-aided choice will collapse: either because machines cannot assist human choice without explicit trade-offs, or because decision-makers and their institutions will be ethically compromised by allowing machines to obscure the trade-offs implied by their value models. Neither is an acceptable outcome. Rather, as an institution, the military should embrace the requisite transparency that comes with the responsibility to make enumerated judgments about life and death. Paradoxically, documenting risk tolerance and value assignment may serve to increase subordinate autonomy during conflict. A major advantage of formally modeling a decision-maker's value trade-offs is that it allows subordinates, and potentially even autonomous machines, to take action in the absence of the decision-maker. This machine-aided decision process enables decentralized execution at scale that reflects the leader's values better than even the most carefully crafted rules of engagement or commander's intent. As long as trade-offs can be tied back to a decision-maker, ethical responsibility lies with that decision-maker.
Keeping Values Preeminent
The Electronic Numerical Integrator and Computer, now an artifact of history, was the top secret that the New York Times revealed in 1946. Though important as a machine in its own right, the computers true significance lay in its symbolism. It represented the capacity for technology to sprint ahead of decision-makers, and occasionally pull them where they did not want to go.
The military should race ahead with investment in machine learning, but with a keen eye on the primacy of commanders' values. If the U.S. military wishes to keep pace with China and Russia on this issue, it cannot afford to delay in developing machines designed to execute the complicated but unobjectionable components of decision-making: identifying alternatives, outcomes, and probabilities. Likewise, if it wishes to maintain its moral standing in this algorithmic arms race, it should ensure that value trade-offs remain the responsibility of commanders. The U.S. military's professional development education should also begin training decision-makers in how to most effectively maintain accountability for the straightforward but vexing components of value judgments in conflict.
We stand encouraged by the continued debate and hard discussions on how best to leverage the incredible advancements in AI, machine learning, computer vision, and like technologies to unleash the military's most valuable weapon system: the men and women who serve in uniform. The military should take steps now to ensure that those people and their values remain the key players in warfare.
Brad DeWees is a major in the U.S. Air Force and a tactical air control party officer. He is currently the deputy chief of staff for 9th Air Force (Air Forces Central). An alumnus of the Air Force Chief of Staff's Strategic Ph.D. program, he holds a Ph.D. in decision science from Harvard University.
Chris "FIAT" Umphres is a major in the U.S. Air Force and an F-35A pilot. An alumnus of the Air Force Chief of Staff's Strategic Ph.D. program, he holds a Ph.D. in decision science from Harvard University and a Master's in management science and engineering from Stanford University.
Maddy Tung is a second lieutenant in the U.S. Air Force and an information operations officer. A Rhodes Scholar, she is completing dual degrees at the University of Oxford. She recently completed an M.Sc. in computer science and began an M.Sc. in social science of the internet.
The views expressed here are the authors' alone and do not necessarily reflect those of the U.S. government or any part thereof.
Image: U.S. Air Force (Photo by Staff Sgt. Sean Carnes)
See the original post here:
Machine Learning and Life-and-Death Decisions on the Battlefield - War on the Rocks