Who is Edward Snowden and Where is he Now? Life, Whistleblower & Prison Time – Stanford Arts Review

Edward Snowden, born June 21, 1983, in Elizabeth City, North Carolina, U.S., is a former American intelligence contractor who in 2013 disclosed the existence of secret data-collection programs developed by the National Security Agency (NSA), sparking a cultural discussion about national security and individual privacy.

Snowden was born in North Carolina, and his family moved to central Maryland, a short distance from NSA headquarters at Fort Meade, when he was a child. He dropped out of high school and studied part-time between 1999 and 2005 at a community college; he completed his GED but did not earn a college degree.

He enlisted in the U.S. Army as a Special Forces recruit in May 2004 but was discharged four months later. In 2005 he worked as a security guard at the Center for Advanced Study of Language, a University of Maryland research center affiliated with the NSA.

Despite his lack of formal education and training, Snowden demonstrated strong computer skills and was hired by the Central Intelligence Agency in 2006. He was granted a top-secret clearance and in 2007 was posted to Geneva, where he worked undercover as a computer security specialist.

Snowden left the CIA for the NSA in 2009, working as a contractor for Dell and Booz Allen Hamilton. During this time he began collecting information about many of the NSA's activities, in particular its secret surveillance programs, which he believed were excessively broad in size and scope.

In May 2013 Snowden took a leave of absence and traveled to Hong Kong, where the following month he conducted a series of interviews with journalists from The Guardian newspaper. Footage from those interviews later appeared in the documentary Citizenfour (2014).

Among the NSA secrets revealed by Snowden was a court order compelling the telecommunications company Verizon to turn over metadata (such as numbers dialed and the duration of calls) for millions of its subscribers.

Snowden also revealed the existence of PRISM, a data-mining program that reportedly gave the NSA, the Federal Bureau of Investigation, and Government Communications Headquarters (GCHQ), Britain's counterpart to the NSA, direct access to the servers of major Internet companies such as Google, Facebook, Microsoft, and Apple.

In August 2014, as Snowden's provisional asylum neared expiration, the Russian government granted him a three-year residence permit (effective August 1), which allowed him to leave the country for up to three months at a time. He was also allowed to request an extension of that permit and, after five years of residence, to apply for Russian citizenship if he chose to do so.

In September 2019 Snowden published a memoir, Permanent Record. On the same day, the U.S. Department of Justice sued to recover all proceeds he received from the book, claiming that he had violated his confidentiality agreements with the CIA and the NSA by not submitting the work for review before publication.


The Utter Familiarity of Even the Strangest Vaccine Conspiracy Theories – The Atlantic

The multi-domain quality of the conspiracy theory also helps to explain its cyclical and adaptable nature: Once a narrative has established a pattern of creating such large leaps, the creation of further or newer leaps to even more disparate domains is considerably eased.

A deeper question is why these disease narratives circulate at all. One argument, advanced in the book Covid-19 Conspiracy Theories, is that conspiracy theories are often shared among people who lack, or feel that they lack, social power. In an age of wealth inequality and partisan politics, the majority of Americans potentially fall into this category.

Another, more general, answer is that the amount of time between the start of an epidemic and the point at which science can provide clear explanations creates an information vacuum for a concerned public that demands immediate response. These vacuums are easily filled both by the individual turning to familiar narratives from previous epidemics, and by anti-vaccination and conspiracy-theory groups actively working to promote their own narratives.

If spreading rumors is easy, combatting them is hard. As folklorists such as Bill Ellis have proposed, some legends may not die so much as they dive, remaining latent for long periods of time until a new situation arises that fits the scope and nature of the narrative. It is equally the case, as the sociologist John Gagnon has argued, that the difference between a scientific theory and a conspiracy theory is that a scientific theory has holes in it.

Just as problematic, whether you want to call the current era "postmodern" or "post-truth": Public trust in both government and fellow citizens is at or near historic lows. In the face of such opposition, public figures may not be capable of turning the tide. A recent Pew Research Center poll of U.S. adults found that 39 percent definitely or probably would not get a coronavirus vaccine, and that 21 percent do not intend to get vaccinated and are pretty certain more information will not change their mind. How many of these respondents were reacting to any given narrative, whether false claim, conspiracy theory, or otherwise, is unclear, but the narratives are certainly massaging these responses.

That doesn't mean community leaders shouldn't try to debunk conspiracy theories and chip away at resistance. Pastors, prominent business owners, local sports figures, and so on should work in conjunction with local doctors to provide solid information. Such efforts should be frequent and, for best results, done in person, as when Anthony Fauci personally Zoomed into a Boston-area church to talk directly to parishioners.

Conspiracy theories will always be among us, but the pandemic doesn't have to be.


Improving invoice anomaly detection with AI and machine learning – Ericsson

Anomalies are a common issue in many industries, and the telecom industry has its fair share of them. Telecom anomalies can be related to network performance, security breaches, or fraud, and can occur anywhere in a multitude of telecom processes. In recent times, AI has increasingly been used to solve these problems.

Telecom invoices represent one of the most complex types of invoices generated in any industry. With the huge number and variety of possible products and services, errors are inevitable. Products are built up of product characteristics, and it's the sheer number of these characteristics, and the various combinations of them, which leads to such variety.

Added to this is the complexity of the billing process, which presents a variety of challenges. A periodic bill follows the regular billing process, but all other bills or bill requests deviate from the standard process and may result in anomalies. Convergent billing also means it's possible to have a single bill for multi-play contracts with cross-promotions and volume discounts, which can make billing a challenging task. Moreover, many organizations have a setup where different departments have different invoice and payment responsibilities, which further complicates the billing process. Have a headache yet? Me too.

As you can see, the usage-to-bill, or invoice, journey is a multi-step process filled with pitfalls. And here comes even more trouble.

With 5G, products and services, and subsequently the billing process, become even more complicated. Service providers are gearing up to address varied enterprise models, such as ultra-reliable low-latency communication (URLLC), enhanced mobile broadband (eMBB), or massive machine-type communication (mMTC). The rollout of 5G also heralds a revolution of IoT devices.

With 5G, 3GPP has introduced the concept of network slicing (NW slice) and the associated service-level agreements (SLAs), another dimension that will add to the complexity of the billing process.

It's a known fact in the telecom industry that billing errors lead to billing disputes, which are one of the leading causes of customer churn.

Fixing billing errors has a big cost and time impact on a service provider's financials. Most service providers have a mix of manual and automated processes to detect invoice anomalies. The manual process usually relies on sampling techniques based on organizational policies, resource availability, and individual skills and experience. It's slow and lacks coverage across the entire set of generated invoices. With the introduction of IT into business processes, such audits can leverage rule-based automation to find patterns and give insights on larger data sets; however, rules are nothing but encoded experience, which may result in high numbers of false-positive alerts and the incorrect flagging of legitimate behavior as suspicious. Rule identification is done by a domain expert, and because of the dynamic nature of the telecom industry, keeping the rules current would mean slowing down the launch of new products and services. As a result, I think it's fair to say that traditional approaches are ineffective and inefficient.

An AI-based solution can more accurately identify invoice anomalies and reduce false positives. AI is also able to more easily identify noncompliant behaviors with hidden patterns that are difficult for humans to spot. An AI agent learns to identify anomalous invoice behavior from a supplied set of data in a series of steps.

Before going deep into the aspects of AI, it is important to establish some boundaries on the definition of an anomaly. Anomalies can be broadly categorized as:

Point anomalies: A single instance of data is anomalous if it is too far off from the rest, for instance, an unusually low or high invoice amount.

Contextual anomalies: A data point that is otherwise normal becomes an anomaly when a particular context is applied, for instance, an invoice carrying usage charges for a period when the user was in an inactive status.

Collective anomalies: A collection of related data instances that is anomalous with respect to the entire dataset, even though the individual values are not, for instance, a set of invoices with missing usage data or higher-than-usual charges for voice calls over a day or a longer period. A set of point anomalies can become a collective anomaly if multiple point anomalies are joined together.

Identifying the category of anomaly helps in the identification of suitable artificial intelligence or machine learning approaches. Machine learning has four common classes of applications: classification, predicting a value using a regression model, clustering or anomaly detection, and dimensionality reduction or discovering structure. While the first two are supervised learning, the latter two belong to unsupervised learning. In machine learning, there are also two categories based on the number of variables used to predict: univariate (one variable) or multivariate (where an outlier is a combined unusual score on at least two variables). Most of the analyses that we end up doing are multivariate due to the complexity of the billing process. The image below gives us a deeper look into machine learning's approach to anomaly detection.
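
To make the unsupervised, multivariate case concrete, here is a minimal sketch using scikit-learn's IsolationForest on a few hypothetical invoice features. The feature names, the injected outliers, and the contamination rate are illustrative assumptions only, not part of any Ericsson product.

```python
# Minimal sketch: unsupervised, multivariate invoice anomaly detection.
# Feature names, injected outliers, and contamination rate are illustrative.
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
invoices = pd.DataFrame({
    "total_amount": rng.normal(100, 15, 1000),
    "voice_minutes": rng.normal(300, 60, 1000),
    "data_gb": rng.normal(20, 5, 1000),
    "discount_pct": rng.uniform(0, 0.2, 1000),
})
# Inject a few obvious point anomalies for demonstration.
invoices.loc[:4, "total_amount"] = [900.0, 0.0, 750.0, 5.0, 1200.0]

model = IsolationForest(contamination=0.01, random_state=0)
invoices["anomaly"] = model.fit_predict(invoices)  # -1 flags an outlier

flagged = invoices[invoices["anomaly"] == -1]
print(f"Flagged {len(flagged)} of {len(invoices)} invoices for review")
```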

During the past few years, all industries have seen a strong focus on AI/ML technologies, and there's certainly a reason why: AI/ML leverages data-driven programming to unearth value latent in data. Previously unknown insights can now be discovered using AI/ML, which is the main driver behind using it for invoice anomaly detection and a big part of what makes it so appealing. It can help service providers understand the unknown reasons behind invoice anomalies. Moreover, it can offer real-time analysis, greater accuracy, and much wider coverage.

The other benefit of AI/ML is the ability to learn from its own predictions, which are fed back into the system as reward or penalty. This helps it learn not only the patterns of today but also new patterns which may arise in the future.

An AI/ML model is only as good as the data that's fed into it. This means that the invoice anomaly model needs to be adapted to telecom data when the model is deployed in production. The model's drift needs to be continuously monitored for effective prediction, owing to the dynamic nature of real-world data. Real-world data may change its characteristics or undergo structural changes; the model needs to continue to align with such changes in the data. This means model life-cycle management must be continuously ongoing and closely monitored.
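
One common way to implement the drift monitoring described above is a per-feature two-sample Kolmogorov-Smirnov test that compares the training-time distribution of a feature against recent production data. The sketch below is a generic illustration under that assumption; the 0.05 threshold is an arbitrary choice, not a standard.

```python
# Minimal sketch: per-feature data-drift check with a two-sample KS test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train_amounts = rng.normal(100, 15, 5000)    # feature distribution at training time
recent_amounts = rng.normal(115, 20, 1000)   # same feature in recent production data

stat, p_value = ks_2samp(train_amounts, recent_amounts)
if p_value < 0.05:  # illustrative significance threshold
    print(f"Drift suspected (KS={stat:.3f}, p={p_value:.4f}); consider retraining")
else:
    print("No significant drift detected")
```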

A lack of trust and data bias are also two common challenges in this field. It's important that an organization's policies are designed to avoid data bias as much as possible. Lack of awareness seeds lack of trust; transparency and explainability of model predictions can help, especially in the case of invoice anomalies, where it is important to explain the reason for an invoice being anomalous.

Yes, I believe it is. Ensuring that the billing process is foolproof and that the generated invoices are correct is a big task, particularly in the telecoms industry. The current process of sampling invoices for manual verification, or of using static rule-based software for invoice anomaly detection, has limitations: either in the share of invoices covered or in the inability to discover errors which have not been configured as rules.

AI/ML can help here, as it can not only provide full coverage of all invoice data but can also learn new anomalies over time. Ericsson's Billing product has been in the process of incorporating AI/ML technology to discover invoice anomalies and other appropriate use cases. Beyond just invoice anomalies, we are seeing a trend where a growing number of service providers have started to effectively and efficiently use AI/ML technology for various use cases.


Which Technology Jobs Will Require AI and Machine Learning Skills? – Dice Insights

Artificial intelligence (A.I.) and machine learning seem poised to dominate the future. Companies everywhere are pouring resources into making their apps and services smarter. But which technology jobs will actually require A.I. skills?

For an answer to that question, we turn to Burning Glass, which collects and analyzes millions of job postings from across the country. Specifically, we wanted to see which professions had the highest percentage of job postings requesting A.I. skills. Here's that breakdown; as the cliché goes, some of these results may surprise you:

What can we conclude from this breakdown? Although you might think that artificial intelligence skills are very much in demand among software developers and engineers (after all, someone needs to build a smarter chatbot), data science is clearly the profession where A.I. is most in vogue.

Indeed, there's a lot of overlap between A.I. and data science. Both disciplines involve collecting, wrangling, cleaning, and analyzing massive amounts of data. But whereas a data scientist will analyze data for insights that they present to the broader organization, artificial intelligence and machine learning experts will use those datasets to train A.I. platforms to become smarter. Once sufficiently trained, those platforms can then make their own (hopefully correct) inferences about data.

Given that intersection of artificial intelligence and data science, many machine-learning and A.I. experts become data scientists, and vice versa. That relationship will likely only deepen in the years ahead. Burning Glass suggests that machine learning is a defining skill among data scientists, necessary for day-to-day tasks; if you're aiming for a job as a data scientist, having extensive knowledge of artificial intelligence and machine-learning tools and platforms can give you a crucial advantage in a crowded market.

Many other technologist roles will see the need for artificial intelligence skills increase in the years ahead. If you're involved in software development, for instance, learning A.I. skills now will prepare you for a future in which A.I. tools and platforms are a prevalent element in many companies' tech stacks. And make no mistake about it: managers and executives will also need to become familiar with A.I. concepts and skills. "A.I. is not going to replace managers, but managers that use A.I. will replace those that do not," Rob Thomas, senior vice president of IBM's cloud and data platform, recently told CNBC.

Overall, jobs utilizing artificial intelligence skills are projected to grow 43.4 percent over the next decade; the current median salary for jobs that heavily utilize A.I. skills is $105,000, higher than for many other professions. It must be noted, though, that A.I. and machine learning are areas where you really need to know your stuff, and hiring managers will surely test you on both your knowledge of fundamental concepts and your ability to execute. When applying for A.I.-related jobs, a portfolio of previous projects can only help your prospects.

Granted, it's still early days for A.I.: Despite all the hype, relatively few companies have integrated A.I. into either their front-end products or back-end infrastructure. Nonetheless, it's clear that employers are already interested in technologists who are familiar with the A.I. and machine learning platforms that will help determine the future.


Machine Learning and Life-and-Death Decisions on the Battlefield – War on the Rocks

In 1946 the New York Times revealed one of World War II's top secrets: "an amazing machine which applies electronic speeds for the first time to mathematical tasks hitherto too difficult and cumbersome for solution." One of the machine's creators offered that its purpose was to replace, "as far as possible, the human brain." While this early version of a computer did not replace the human brain, it did usher in a new era in which, according to the historian Jill Lepore, technological change "wildly outpaced the human capacity for moral reckoning."

That era continues with the application of machine learning to questions of command and control. The application of machine learning is in some areas already a reality: the U.S. Air Force, for example, has used it as a working aircrew member on a military aircraft, and the U.S. Army is using it to choose the right shooter for a target identified by an overhead sensor. The military is making strides toward using machine learning algorithms to direct robotic systems, analyze large sets of data, forecast threats, and shape strategy. Using algorithms in these areas and others offers awesome military opportunities, from saving person-hours in planning to outperforming human pilots in dogfights to using a multihypothesis semantic engine to improve our understanding of global events and trends. Yet with the opportunity of machine learning comes ethical risk: the military could surrender life-and-death choice to algorithms, and surrendering choice abdicates one's status as a moral actor.

So far, the debate about algorithms' role in battlefield choice has been either/or: either algorithms should make life-and-death choices, because there is no other way to keep pace on an increasingly autonomous battlefield, or humans should make life-and-death choices, because there is no other way to maintain moral standing in war. This is a false dichotomy. Choice is not a unitary thing to be handed over either to algorithms or to people. At all levels of decision-making (i.e., tactical, operational, and strategic), choice is the result of a several-step process. The question is not whether algorithms or humans should make life-and-death choices, but rather which steps in the process each should be responsible for. By breaking choice into its constituent parts, and by training servicemembers in decision science, the military can both increase decision speed and maintain moral standing. This article proposes how it can do both. It describes the constituent components of a choice, then discusses which of those components should be performed by machine learning algorithms and which require human input.

What Decisions Are and What It Takes To Make Them

Consider a fighter pilot hunting surface-to-air missiles. When the pilot attacks, she is determining that her choice, relative to other possibilities before her, maximizes expected net benefit, or utility. She may not consciously process the decision in these terms and may not make the calculation perfectly, but she is nonetheless determining which decision optimizes expected costs and benefits. To be clear, the example of the fighter pilot is not meant to bound the discussion. The basic conceptual process is the same whether the decision-makers are trigger-pullers on the front lines or commanders in distant operations centers. The scope and particulars of a decision change at higher levels of responsibility, of course, from risking one unit to many, or risking one bystander's life to risking hundreds. Regardless of where the decision-maker sits (or rather, where the authority to choose to employ force lawfully resides), choice requires the same four fundamental steps.

The first step is to list the alternatives available to the decision-maker. The fighter pilot, again just for example, might have two alternatives: attack the missile system from a relatively safer long-range approach, or attack from closer range with more risk but a higher probability of a successful attack. The second step is to take each of these alternatives and define the relevant possible results. In this case, the pilot's relevant outcomes might include killing the missile while surviving, killing the missile without surviving, failing to kill the system but surviving, and, lastly, failing to kill the missile while also failing to survive.

The third step is to make a conditional probability estimate, or an estimate of the likelihood of each result assuming a given alternative. If the pilot goes in close, what is the probability that she kills the missile and survives? What is the same probability for the attack from long range? And so on for each outcome of each alternative.

So far the pilot has determined what she can do, what may happen as a result, and how likely each result is. She now needs to say how much she values each result. To do this she needs to identify how much she cares about each dimension of value at play in the choice, which in highly simplified terms are the benefit to mission that comes from killing the missile, and the cost that comes from sacrificing her life, the lives of targeted combatants, and the lives of bystanders. It is not enough to say that killing the missile is beneficial and sacrificing life is costly. She needs to put benefit and cost into a single common metric, sometimes called a utility, so that the value of one can be directly compared to the value of the other. This relative comparison is known as a value trade-off, the fourth step in the process. Whether the decision-maker is on the tactical edge or making high-level decisions, the trade-off takes the same basic form: The decision-maker weighs the value of attaining a military objective against the cost of dollars and lives (friendly, enemy, and civilian) needed to attain it. This trade-off is at once an ethical and a military judgment: it puts a price on life at the same time that it puts a price on a military objective.

Once these four steps are complete, rational choice is a matter of fairly simple math. Utilities are weighted by an outcome's likelihood: high-likelihood outcomes get more weight and are more likely to drive the final choice.
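
To make that "fairly simple math" concrete, here is a small sketch of the expected-utility calculation for the two-alternative pilot example. Every probability and utility value below is invented for illustration; none comes from the article.

```python
# Illustrative expected-utility calculation for the two-alternative example.
# All probabilities and utility values are invented for demonstration.
alternatives = {
    "attack_close": [
        # (conditional probability, utility) for each outcome of this alternative
        (0.70, 1000),   # kill missile, survive
        (0.10, -4000),  # kill missile, don't survive
        (0.15, -200),   # miss, survive
        (0.05, -5000),  # miss, don't survive
    ],
    "attack_long_range": [
        (0.45, 1000),
        (0.02, -4000),
        (0.50, -200),
        (0.03, -5000),
    ],
}

def expected_utility(outcomes):
    # Weight each outcome's utility by its conditional probability and sum.
    return sum(p * u for p, u in outcomes)

for name, outcomes in alternatives.items():
    print(name, expected_utility(outcomes))

best = max(alternatives, key=lambda a: expected_utility(alternatives[a]))
print("Rational choice:", best)
```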

It is important to note that, for both human and machine decision-makers, rational is not necessarily the same thing as ethical or successful. The rational choice process is the best way, given uncertainty, to optimize what decision-makers say they value. It is not a way of saying that one has the right values and does not guarantee a good outcome. Good decisions will still occasionally lead to bad outcomes, but this decision-making process optimizes results in the long run.

At least in the U.S. Air Force, pilots do not consciously step through expected utility calculations in the cockpit. Nor is it reasonable to assume that they should: performing the mission is challenging enough. For human decision-makers, explicitly working through the steps of expected utility calculations is impractical, at least on a battlefield. It's a different story, however, with machines. If the military wants to use algorithms to achieve decision speed in battle, then it needs to make the components of a decision computationally tractable; that is, the four steps above need to reduce to numbers. The question becomes whether it is possible to provide the numbers in such a way that combines the speed that machines can bring with the ethical judgment that only humans can provide.

Where Algorithms Are Better and Where Human Judgment Is Necessary

Computer and data science have a long way to go to exercise the power of machine learning and data representation assumed here. The Department of Defense should continue to invest heavily in the research and development of modeling and simulation capabilities. However, as it does that, we propose that algorithms list the alternatives, define the relevant possible results, and give conditional probability estimates (the first three steps of rational decision-making), with occasional human inputs. The fourth step of determining value should remain the exclusive domain of human judgment.

Machines should generate alternatives and outcomes because they are best suited for the complexity and rule-based processing that those steps require. In the simplified example above there were only two possible alternatives (attack from close or far) with four possible outcomes (kill the missile and survive, kill the missile and don't survive, don't kill the missile and survive, and don't kill the missile and don't survive). The reality of future combat will, of course, be far more complicated. Machines will be better suited for handling this complexity, exploring numerous solutions, and illuminating options that warfighters may not have considered. This is not to suggest, though, that humans will play no role in these steps. Machines will need to make assumptions and pick starting points when generating alternatives and outcomes, and it is here that human creativity and imagination can help add value.

Machines are hands-down better suited for the third step: estimating the probabilities of different outcomes. Human judgments of probability tend to rely on heuristics, such as how available examples are in memory, rather than more accurate indicators like relevant base rates, or how often a given event has historically occurred. People are even worse when it comes to understanding probabilities for a chain of events. Even a relatively simple combination of two conditional probabilities is beyond the reach of most people. There may be openings for human input when unrepresentative training data encodes bias into the resulting algorithms, something humans are better equipped to recognize and correct. But even then, the departures should be marginal, rather than the complete abandonment of algorithmic estimates in favor of intuition. Probability, like long division, is an arena best left to machines.
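
As a tiny worked example of chaining just two conditional probabilities (with invented numbers), the product below is exactly the kind of calculation machines handle trivially and intuition routinely gets wrong.

```python
# Illustrative chain of two conditional probabilities (numbers are invented).
p_kill_given_close = 0.80      # P(kill | close-range attack)
p_survive_given_kill = 0.85    # P(survive | close-range attack and kill)

# P(kill and survive | close-range attack) = product of the two conditionals
p_kill_and_survive = p_kill_given_close * p_survive_given_kill
print(f"{p_kill_and_survive:.2f}")  # 0.68, lower than either number alone
```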

While machines take the lead with occasional human input in steps one through three, the opposite is true for the fourth step of making value trade-offs. This is because value trade-offs capture both ethical and military complexity, as many commanders already know. Even with perfect information (e.g., the mission will succeed but it will cost the pilot's life) commanders can still find themselves torn over which decision to make. Indeed, whether and how one should make such trade-offs is the essence of ethical theories like deontology or consequentialism. And prioritization of which military objectives will most efficiently lead to success (however defined) is an always-contentious and critical part of military planning.

As long as commanders and operators remain responsible for trade-offs, they can maintain control and responsibility for the ethicality of the decision even as they become less involved in the other components of the decision process. Of note, this control and responsibility can be built into the utility function in advance, allowing systems to execute at machine speed when necessary.

A Way Forward

Incorporating machine learning and AI into military decision-making processes will be far from easy, but it is possible and a military necessity. China and Russia are using machine learning to speed their own decision-making, and unless the United States keeps pace it risks finding itself at a serious disadvantage on future battlefields.

The military can ensure the success of machine-aided choice by ensuring that the appropriate division of labor between humans and machines is well understood by both decision-makers and technology developers.

The military should begin by expanding developmental education programs so that they rigorously and repeatedly cover decision science, something the Air Force has started to do in its Pinnacle sessions, its executive education program for two- and three-star generals. Military decision-makers should learn the steps outlined above, and also learn to recognize and control for inherent biases, which can shape a decision as long as there is room for human input. Decades of decision science research have shown that intuitive decision-making is replete with systematic biases like overconfidence, irrational attention to sunk costs, and changes in risk preference based merely on how a choice is framed. These biases are not restricted just to people. Algorithms can show them as well when training data reflects biases typical of people. Even when algorithms and people split responsibility for decisions, good decision-making requires awareness of and a willingness to combat the influence of bias.

The military should also require technology developers to address ethics and accountability. Developers should be able to show that algorithmically generated lists of alternatives, results, and probability estimates are not biased in such a way as to favor wanton destruction. Further, any system addressing targeting, or the pairing of military objectives with possible means of affecting those objectives, should be able to demonstrate a clear line of accountability to a decision-maker responsible for the use of force. One means of doing so is to design machine learning-enabled systems around the decision-making model outlined in this article, which maintains accountability of human decision-makers through their enumerated values. To achieve this, commanders should insist on retaining the ability to tailor value inputs. Unless input opportunities are intuitive, commanders and troops will revert to simpler, combat-tested tools with which they are more comfortable: the same old radios or weapons or, for decision purposes, slide decks. Developers can help make probability estimates more intuitive by providing them in visual form. Likewise, they can make value trade-offs more intuitive by presenting different hypothetical (but realistic) choices to assist decision-makers in refining their value judgements.

The unenviable task of commanders is to imagine a number of potential outcomes given their particular context and assign a numerical score or utility such that meaningful comparisons can be made between them. For example, a commander might place a value of 1,000 points on the destruction of an enemy aircraft carrier and -500 points on the loss of a fighter jet. If this is an accurate reflection of the commander's values, she should be indifferent between an attack with no fighter losses and one enemy carrier destroyed and one that destroys two carriers but costs her two fighters. Both are valued equally at 1,000 points. If the commander strongly prefers one outcome over the other, then the points should be adjusted to better reflect her actual values or else an algorithm using that point system will make choices inconsistent with the commander's values. This is just one example of how to elicit trade-offs, but the key point is that the trade-offs need to be given in precise terms.
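
A quick check of the arithmetic behind that indifference claim, using only the illustrative point values given above:

```python
# Both outcomes score 1,000 points under the commander's illustrative values
# (carrier destroyed = +1,000, fighter lost = -500).
CARRIER_DESTROYED = 1000
FIGHTER_LOST = -500

def outcome_value(carriers_destroyed, fighters_lost):
    return carriers_destroyed * CARRIER_DESTROYED + fighters_lost * FIGHTER_LOST

print(outcome_value(1, 0))  # 1 carrier, no losses        -> 1000
print(outcome_value(2, 2))  # 2 carriers, 2 fighters lost -> 2000 - 1000 = 1000
```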

Finally, the military should pay special attention to helping decision-makers become proficient in their roles as appraisers of value, particularly with respect to decisions focused on whose life to risk, when, and for what objective. In the command-and-control paradigm of the future, decision-makers will likely be required to document such trade-offs in explicit forms so machines can understand them (e.g., "I recognize there is a 12 percent chance that you won't survive this mission, but I judge the value of the target to be worth the risk").

If decision-makers at the tactical, operational, or strategic levels are not aware of or are unwilling to pay these ethical costs, then the construct of machine-aided choice will collapse. It will collapse either because machines cannot assist human choice without explicit trade-offs, or because decision-makers and their institutions will be ethically compromised by allowing machines to obscure the trade-offs implied by their value models. Neither is an acceptable outcome. Rather, as an institution, the military should embrace the requisite transparency that comes with the responsibility to make enumerated judgements about life and death. Paradoxically, documenting risk tolerance and value assignment may serve to increase subordinate autonomy during conflict. A major advantage of formally modeling a decision-maker's value trade-offs is that it allows subordinates, and potentially even autonomous machines, to take action in the absence of the decision-maker. This machine-aided decision process enables decentralized execution at scale that reflects the leader's values better than even the most carefully crafted rules of engagement or commander's intent. As long as trade-offs can be tied back to a decision-maker, then ethical responsibility lies with that decision-maker.

Keeping Values Preeminent

The Electronic Numerical Integrator and Computer, now an artifact of history, was the top secret that the New York Times revealed in 1946. Though important as a machine in its own right, the computer's true significance lay in its symbolism. It represented the capacity for technology to sprint ahead of decision-makers, and occasionally pull them where they did not want to go.

The military should race ahead with investment in machine learning, but with a keen eye on the primacy of commander values. If the U.S. military wishes to keep pace with China and Russia on this issue, it cannot afford to delay in developing machines designed to execute the complicated but unobjectionable components of decision-making: identifying alternatives, outcomes, and probabilities. Likewise, if it wishes to maintain its moral standing in this algorithmic arms race, it should ensure that value trade-offs remain the responsibility of commanders. The U.S. military's professional development education should also begin training decision-makers on how to most effectively maintain accountability for the straightforward but vexing components of value judgements in conflict.

We stand encouraged by the continued debate and hard discussions on how to best leverage the incredible advancement in AI, machine learning, computer vision, and like technologies to unleash the military's most valuable weapon system, the men and women who serve in uniform. The military should take steps now to ensure that those people and their values remain the key players in warfare.

Brad DeWees is a major in the U.S. Air Force and a tactical air control party officer. He is currently the deputy chief of staff for 9th Air Force (Air Forces Central). An alumnus of the Air Force Chief of Staff's Strategic Ph.D. program, he holds a Ph.D. in decision science from Harvard University. LinkedIn.

Chris "FIAT" Umphres is a major in the U.S. Air Force and an F-35A pilot. An alumnus of the Air Force Chief of Staff's Strategic Ph.D. program, he holds a Ph.D. in decision science from Harvard University and a master's in management science and engineering from Stanford University. LinkedIn.

Maddy Tung is a second lieutenant in the U.S. Air Force and an information operations officer. A Rhodes Scholar, she is completing dual degrees at the University of Oxford. She recently completed an M.Sc. in computer science and began the M.Sc. in social science of the internet. LinkedIn.

The views expressed here are the authors' alone and do not necessarily reflect those of the U.S. government or any part thereof.

Image: U.S. Air Force (Photo by Staff Sgt. Sean Carnes)


Machine Learning Tools and Algorithms are Now being Used by Canada based Scotiabank to Help Clients Impacted by COVID-19 – Crowdfund Insider

Canadian banking group Scotiabank has noted that its strategic investments in machine learning (ML) technologies are beginning to pay off during the Coronavirus crisis, allowing it to effectively serve clients as they try to cope with these uncertain and challenging times.

Analysts at the bank's global risk management department have been using ML tools to develop a cashflow prediction software program called Sofia, or Strategic Operating Framework for Insights and Analytics.

Sofia makes use of historical commercial banking information, like customer deposits and various trends from previous years, along with machine learning to predict what clients might expect in the coming weeks. This rolling average, which gets updated in real time, provides the banking platform with a better understanding of which customers are more likely to be impacted by the economic downturn and how to effectively address their requirements.
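
For illustration only, here is a minimal pandas sketch of the rolling-average idea described above. It is a generic example with invented numbers and thresholds, not Scotiabank's actual Sofia implementation.

```python
# Generic sketch: flag clients whose rolling-average deposits fall well below
# their own historical baseline. Numbers and thresholds are invented.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
days = pd.date_range("2020-01-01", periods=120, freq="D")
deposits = pd.Series(rng.normal(50_000, 8_000, len(days)), index=days)
deposits.iloc[-30:] *= 0.6  # simulate a client whose inflows drop sharply

rolling = deposits.rolling(window=28).mean()  # four-week rolling average
baseline = deposits.iloc[:60].mean()          # pre-downturn baseline

if rolling.iloc[-1] < 0.75 * baseline:
    print("Flag for relationship manager: cashflow trending down")
```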

This means that relationship managers are able to proactively approach and work with clients whose cashflow might be under pressure. They can offer assistance to these customers, like providing information about different customer help programs or options such as short-term lending.

John Phillips, Director and Head, Credit Solutions Group at Scotiabank, stated:

"It's intended to provide us with insights into accounts which may be trending down so that we can get in front of it and have discussions with our customers that are informed by the data."

The ML tool was developed before the Coronavirus pandemic began. It's meant to expedite the review process for managing commercial banking accounts. But the COVID crisis has given these ML tools a new purpose, helping to effectively assess and predict which customers will require assistance during these unprecedented times, which have seen record levels of volatility in financial markets. These tools have been launched throughout Canada for commercial and retail clients.

Daniel Moore, Chief Risk Officer, Scotiabank, remarked:

"Developing these kinds of tools and analytics had already been on our roadmap, but what has been supercharged by the pandemic is the demand side for those analytics. Either as individual or business owner, if your bank comes to you and says your account balance is showing stretched liquidity, we'd like to sit down with you and discuss how we can help you out, that's a highly different conversation than six months later when the client is having difficulties."

Moore added:

"An early conversation is good for the Bank and good for the customer. That's how we should be using data."


Deep learning doesnt need to be a black box – TechTalks

This article is part of our reviews of AI research papers, a series of posts that explore the latest findings in artificial intelligence.

Deep neural networks can perform wonderful feats thanks to their extremely large and complicated web of parameters. But their complexity is also their curse: The inner workings of neural networks are often a mystery, even to their creators. This is a challenge that has been troubling the artificial intelligence community since deep learning started to become popular in the early 2010s.

In tandem with the expansion of deep learning in various domains and applications, there has been a growing interest in developing techniques that try to explain neural networks by examining their results and learned parameters. But these explanations are often erroneous and misleading, and they provide little guidance in fixing possible misconceptions embedded in deep learning models during training.

In a paper published in the peer-reviewed journal Nature Machine Intelligence, scientists at Duke University propose concept whitening, a technique that can help steer neural networks toward learning specific concepts without sacrificing performance. Concept whitening bakes interpretability into deep learning models instead of searching for answers in millions of trained parameters. The technique, which can be applied to convolutional neural networks, shows promising results and can have great implications for how we perceive future research in artificial intelligence.

Given enough quality training examples, a deep learning model with the right architecture should be able to discriminate between different types of input. For instance, in the case of computer vision tasks, a trained neural network will be able to transform the pixel values of an image into their corresponding class. (Since concept whitening is meant for image recognition, we'll stick to this subset of machine learning tasks. But many of the topics discussed here apply to deep learning in general.)

During training, each layer of a deep learning model encodes the features of the training images into a set of numerical values and stores them in its parameters. This is called the latent space of the AI model. In general, the lower layers of a multilayered convolutional neural network will learn basic features such as corners and edges. The higher layers of the neural network will learn to detect more complex features such as faces, objects, full scenes, etc.

Ideally, a neural network's latent space would represent concepts that are relevant to the classes of images it is meant to detect. But we don't know that for sure, and deep learning models are prone to learning the most discriminative features, even if they're the wrong ones.

For instance, the following data set contains images of cats that happen to have a logo in the lower right corner. A human would easily dismiss the logo as irrelevant to the task. But a deep learning model might find it to be the easiest and most efficient way to tell the difference between cats and other animals. Likewise, if all the images of sheep in your training set contain large swaths of green pastures, your neural network might learn to detect green farmlands instead of sheep.

So, aside from how well a deep learning model performs on training and test data sets, it is important to know which concepts and features it has learned to detect. This is where classic explanation techniques come into play.

Many deep learning explanation techniques are post hoc, which means they try to make sense of a trained neural network by examining its output and its parameter values. For instance, one popular technique to determine what a neural network sees in an image is to mask different parts of an input image and observe how these changes affect the output of the deep learning model. This technique helps create heatmaps that highlight the features of the image that are more relevant to the neural network.
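
As a rough illustration of that masking idea (often called occlusion sensitivity), here is a minimal PyTorch sketch. The model, patch size, and zero-fill value are placeholder assumptions, not the specific tools the article describes.

```python
# Sketch of occlusion-based sensitivity: zero out patches of the input and
# record how much the model's confidence in the target class drops.
import torch

def occlusion_heatmap(model, image, target_class, patch=16):
    # image: tensor of shape (1, C, H, W); model returns class logits.
    model.eval()
    _, _, H, W = image.shape
    rows, cols = (H + patch - 1) // patch, (W + patch - 1) // patch
    heatmap = torch.zeros(rows, cols)
    with torch.no_grad():
        base = torch.softmax(model(image), dim=1)[0, target_class]
        for i in range(0, H, patch):
            for j in range(0, W, patch):
                occluded = image.clone()
                occluded[:, :, i:i + patch, j:j + patch] = 0.0
                prob = torch.softmax(model(occluded), dim=1)[0, target_class]
                heatmap[i // patch, j // patch] = base - prob  # big drop = important region
    return heatmap
```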

Other post hoc techniques involve turning different artificial neurons on and off and examining how these changes affect the output of the AI model. These methods can help find hints about relations between features and the latent space.

While these methods are helpful, they still treat deep learning models like black boxes and don't paint a definite picture of the workings of neural networks.

"Explanation methods are often summary statistics of performance (e.g., local approximations, general trends on node activation) rather than actual explanations of the model's calculations," the authors of the concept whitening paper write.

For instance, the problem with saliency maps is that they often miss showing the wrong things that the neural network might have learned. And interpreting the role of single neurons becomes very difficult when the features of a neural network are scattered across the latent space.

"Deep neural networks (NNs) are very powerful in image recognition but what is learned in the hidden layers of NNs is unknown due to its complexity. Lack of interpretability makes NNs untrustworthy and hard to troubleshoot," Zhi Chen, Ph.D. student in computer science at Duke University and lead author of the concept whitening paper, told TechTalks. "Many previous works attempt to explain post hoc what has been learned by the models, such as what concept is learned by each neuron. But these methods heavily rely on the assumption that these concepts are actually learned by the network (which they are not) and concentrated on one neuron (again, this is not true in practice)."

Cynthia Rudin, professor of computer science at Duke University and co-author of the concept whitening paper, had previously warned about the dangers of trusting black-box explanation techniques and had shown how such methods could provide erroneous interpretations of neural networks. In a previous paper, also published in Nature Machine Intelligence, Rudin had encouraged the use and development of AI models that are inherently interpretable. Rudin, who is also Zhi's Ph.D. advisor, directs Duke University's Prediction Analysis Lab, which focuses on interpretable machine learning.

The goal of concept whitening is to develop neural networks whose latent space is aligned with the concepts that are relevant to the task it has been trained for. This approach will make the deep learning model interpretable and makes it much easier to figure out the relations between the features of an input image and the output of the neural network.

"Our work directly alters the neural network to disentangle the latent space so that the axes are aligned with known concepts," Rudin told TechTalks.

Deep learning models are usually trained on a single data set of annotated examples. Concept whitening introduces a second data set that contains examples of the concepts. These concepts are related to the AI model's main task. For instance, if your deep learning model detects bedrooms, relevant concepts would include bed, fridge, lamp, window, door, etc.

"The representative samples can be chosen manually, as they might constitute our definition of interpretability," Chen says. "Machine learning practitioners may collect these samples by any means to create their own concept datasets suitable for their application. For example, one can ask doctors to select representative X-ray images to define medical concepts."

With concept whitening, the deep learning model goes through two parallel training cycles. While the neural network tunes its overall parameters to represent the classes in the main task, concept whitening adjusts specific neurons in each layer to align them with the classes included in the concept data set.

The result is a disentangled latent space, where concepts are neatly separated in each layer and the activation of neurons corresponds with their respective concepts. "Such disentanglement can provide us with a much clearer understanding of how the network gradually learns concepts over layers," Chen says.

To evaluate the effectiveness of the technique, the researchers ran a series of validation images through a deep learning model with concept whitening modules inserted at different layers. They then sorted the images based on which concept neurons they had activated at each layer. In the lower layers, the concept whitening module captures low-level characteristics such as colors and textures. For instance, the lower layers of the network can learn that blue images with white objects are closely associated with the concept airplane and images with warm colors are more likely to contain the concept bed. In the higher layers, the network learns to classify the objects that represent the concept.

One of the benefits of concept disentanglement and alignments is that the neural network becomes less prone to making obvious mistakes. As an image runs through the network, the concept neurons in the higher layers correct the errors that might have happened in the lower layers. For instance, in the image below, the lower layers of the neural network mistakenly associated the image with the concept airplane because of the dense presence of blue and white pixels. But as the image moves through the higher layers, the concept neurons help steer the results in the right direction (visualized in the graph to the right).

Previous efforts in the field involved creating classifiers that tried to infer concepts from the values in a neural network's latent space. But, according to Chen, without a disentangled latent space, the concepts learned by these methods are not pure, because the prediction scores of the concept neurons can be correlated. "Some people have tried to disentangle neural networks in supervised ways before, but not in a way that actually worked to disentangle the space. CW, on the other hand, truly disentangles these concepts by decorrelating the axes using a whitening transformation," he says.
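
To give a sense of what "decorrelating the axes using a whitening transformation" means mathematically, here is a minimal NumPy sketch of ZCA whitening applied to a batch of latent activations. It shows only the decorrelation step and none of the concept-alignment machinery of the actual CW module.

```python
# Minimal sketch of ZCA whitening on a batch of latent activations.
# After whitening, the covariance of the activations is close to the identity.
import numpy as np

def zca_whiten(z, eps=1e-5):
    # z: (batch, features) latent activations
    z_centered = z - z.mean(axis=0)
    cov = np.cov(z_centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    # W = U diag(1/sqrt(lambda)) U^T maps the covariance to (roughly) identity
    W = eigvecs @ np.diag(1.0 / np.sqrt(eigvals + eps)) @ eigvecs.T
    return z_centered @ W

z = np.random.default_rng(0).normal(size=(256, 32))
z_white = zca_whiten(z)
print(np.round(np.cov(z_white, rowvar=False)[:3, :3], 2))  # near-identity block
```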

Concept whitening is a module that can be inserted into convolutional neural networks instead of the batch normalization module. Introduced in 2015, batch normalization is a popular technique that adjusts the distribution of the data used to train the neural network to speed up training and avoid artifacts such as overfitting. Most popular convolutional neural networks use batch normalization in various layers.

In addition to the functions of batch normalization, concept whitening also aligns the data along several axes that represent relevant concepts.

The benefit of concept whitening's architecture is that it can be easily integrated into many existing deep learning models. During their research, the scientists modified several popular pre-trained deep learning models by replacing batch norm modules with concept whitening, and they achieved the desired results with just one epoch of training. (One epoch is a round of training on the full training set. Deep learning models usually undergo many epochs when trained from scratch.)

"CW could be applied to domains like medical imaging where interpretability is very important," Rudin says.

In their experiments, the researchers applied concept whitening to a deep learning model for skin lesion diagnosis. "Concept importance scores measured on the CW latent space can provide practical insights on which concepts are potentially more important in skin lesion diagnosis," they write in their paper.

"For future direction, instead of relying on predefined concepts, we plan to discover the concepts from the dataset, especially useful undefined concepts that are yet to be discovered," Chen says. "We can then explicitly represent these discovered concepts in the latent space of neural networks, in a disentangled way, for better interpretability."

Another direction of research is organizing concepts in hierarchies and disentangling clusters of concepts rather than individual concepts.

With deep learning models becoming larger and more complicated every year, there are different discussions on how to deal with the transparency problem of neural networks.

One of the main arguments is to observe how AI models behave instead of trying to look inside the black box. This is the same way we study the brains of animals and humans, conducting experiments and recording activations. Any attempt to impose interpretability design constraints on neural networks will result in inferior models, proponents of this theory argue. If the brain evolved through billions of iterations without intelligent top-down design, then neural networks should also reach their peak performance through a pure evolutionary path.

Concept whitening refutes this theory and proves that we can impose top-down design constraints on neural networks without causing any performance penalties. Interestingly, experiments show that deep learning models with concept whitening modules provide interpretability without a significant drop in accuracy on the main task.

"CW and many other works from our lab (and many other labs) clearly show the possibility of building an interpretable model without hurting the performance," Rudin says. "We hope our work can shift people's assumption that a black box is necessary for good performance, and can attract more people to build interpretable models in their fields."


The future of software testing: Machine learning to the rescue – TechBeacon

The last decade has seen a relentless push to deliver software faster. Automated testing has emerged as one of the most important technologies for scaling DevOps, companies are investing enormous time and effort to build end-to-end software delivery pipelines, and containers and their ecosystem are holding up on their early promise.

The combination of delivery pipelines and containers has helped high performers to deliver software faster than ever. That said, many organizations are still struggling to balance speed and quality. Many are stuck trying to make headway with legacy software, large test suites, and brittle pipelines. So where do you go from here?

In the drive to release quickly, end users have become software testers. But they no longer want to be your testers, and companies are taking note. Companies now want to ensure that quality is not compromised in the pursuit of speed.

Testing is one of the top DevOps controls that organizations can leverage to ensure that their customers engage with a delightful brand experience. Others include access control, activity logging, traceability, and disaster recovery. Our company's research over the past year indicates that slow feedback cycles, slow development loops, and developer productivity will remain the top priorities over the next few years.

Quality and access control are preventative controls, while others are reactive. There will be an increasing focus on quality in the future because it prevents customers from having a bad experience. Thus, delivering value fast, or better yet, delivering the right value at the right quality level fast, is the key trend that we will see this year and beyond.

Here are the five key trends to watch.

Test automation efforts will continue to accelerate. A surprising number of companies still have manual tests in their delivery pipeline, but you can't deliver fast if you have humans in the critical path of the value chain, slowing things down. (The exception is exploratory testing, where humans are a must.)

Automating manual tests is a long process that requires dedicated engineering time. While many organizations have at least some test automation, there's more that needs to be done. That's why automated testing will remain one of the top trends going forward.

As teams automate tests and adopt DevOps, quality must become part of the DevOps mindset. That means quality will become a shared responsibility of everyone in the organization.

Figure 2. Top performers shift tests around to create new workflows. They shift left for earlier validation and right to speed up delivery. Source: Launchable

Teams will need to become more intentional about where tests land. Should they shift tests left to catch issues much earlier, or should they add more quality controls to the right? On the "shift-right" side of the house, practices such as chaos engineering and canary deployments are becoming essential.

Shifting large test suites left is difficult because you don't want to introduce long delays while running tests in an earlier part of your workflow. Many companies tag some tests from a large suite to run in pre-merge, but the downside is that these tests may or may not be relevant to a specific change set. Predictive test selection (see trend 5 below) provides a compelling solution for running just the relevant tests.

Over the past six to eight years, the industry has focused on connecting various tools by building robust delivery pipelines. Each of those tools generates a heavy exhaust of data, but that data is being used minimally, if at all. We have moved from "craft" or "artisanal" solutions to the "at-scale" stage in the evolution of tools in delivery pipelines.

The next phase is to bring smarts to the tooling. Expect to see an increased emphasis by practitioners on making data-driven decisions.

There are two key problems in testing: not enough tests, and too many of them. Test-generation tools take a shot at the first problem.

To create a UI test today, you must either write a lot of code or have a tester click through the UI manually, which is an incredibly painful and slow process. To relieve this pain, test-generation tools use AI to create and run UI tests on various platforms.

For example, one tool my team explored uses a "trainer" that lets you record actions on a web app to create scriptless tests. While scriptless testing isn't a new idea, what is new is that this tool "auto-heals" tests in lockstep with the changes to your UI.
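To make the "auto-healing" idea concrete, here is a minimal, library-free Python sketch under simplified assumptions: each UI element is represented as a plain dictionary of attributes, and a broken locator is "healed" by falling back to the element that looks most similar to the one recorded originally. Real tools work against live DOM trees and use far richer signals.

```python
from difflib import SequenceMatcher

def similarity(a: dict, b: dict) -> float:
    """Average string similarity across the union of attribute keys."""
    keys = set(a) | set(b)
    return sum(SequenceMatcher(None, str(a.get(k, "")), str(b.get(k, ""))).ratio()
               for k in keys) / max(len(keys), 1)

def find_element(page_elements: list, recorded: dict, threshold: float = 0.6):
    # 1) The recorded id still matches: nothing to heal.
    for el in page_elements:
        if el.get("id") == recorded.get("id"):
            return el
    # 2) Otherwise "heal" by picking the closest-looking element.
    best = max(page_elements, key=lambda el: similarity(el, recorded))
    return best if similarity(best, recorded) >= threshold else None

# Usage: the button's id changed from "submit" to "submit-btn", but its
# text and class keep it recognisable, so the test keeps working.
page = [{"id": "submit-btn", "text": "Submit", "class": "btn primary"},
        {"id": "cancel", "text": "Cancel", "class": "btn"}]
recorded = {"id": "submit", "text": "Submit", "class": "btn primary"}
print(find_element(page, recorded))
```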

Another tool that we explored has AI bots that act like humans. They tap buttons, swipe images, type text, and navigate screens to detect issues. Once they find an issue, they create a ticket in Jira for the developers to take action on.

More testing tools that use AI will gain traction in 2021.

AI has other uses for testing apart from test generation. For organizations struggling with the runtimes of large test suites, an emerging technology called predictive test selection is gaining traction.

Many companies have thousands of tests that run all the time. Testing a small change might take hours or even days to get feedback on. While more tests are generally good for quality, it also means that feedback comes more slowly.

To date, companies such as Google and Facebook have developed machine-learning algorithms that process incoming changes and run only the tests that are most likely to fail. This is predictive test selection.

What's amazing about this technology is that you can run between 10% and 20% of your tests to reach 90% confidence that a full run will not fail. This allows you to reduce a five-hour test suite that normally runs post-merge to 30 minutes on pre-merge, running only the tests that are most relevant to the source changes. Another scenario would be to reduce a one-hour run to six minutes.
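As a rough illustration of how such a system might work, the following Python sketch (using scikit-learn) trains a classifier on historical (change, test) pairs and keeps only the tests with the highest predicted failure probability. The feature names and numbers are hypothetical; production systems at the companies mentioned rely on much richer signals, such as code-to-test dependency graphs.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Hypothetical historical data: one row per (change, test) pair.
# Placeholder features: files touched, past failure rate, path-overlap score.
X_history = rng.random((5000, 3))
y_history = (rng.random(5000) < 0.05).astype(int)     # 1 = the test failed for that change

model = GradientBoostingClassifier().fit(X_history, y_history)

def select_tests(change_features: np.ndarray, test_ids: list, budget: float = 0.15):
    """Return the subset of tests most likely to fail for this change set."""
    fail_prob = model.predict_proba(change_features)[:, 1]
    k = max(1, int(len(test_ids) * budget))            # run roughly 15% of the suite
    ranked = np.argsort(fail_prob)[::-1][:k]
    return [test_ids[i] for i in ranked]

# Usage: pick the most relevant slice of a 1,000-test suite for a new change.
new_change = rng.random((1000, 3))
subset = select_tests(new_change, [f"test_{i}" for i in range(1000)])
print(len(subset), subset[:5])
```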

Expect predictive test selection to become more mainstream in 2021.

Automated testing is taking over the world. Even so, many teams are struggling to make the transition. Continuous quality culture will become part of the DevOps mindset. Tools will continue to become smarter. Test-generation tools will help close the gap between manual and automated testing.

But as teams add more tests, they face real problems with test execution time. While more tests help improve quality, they often become a roadblock to productivity. Machine learning will come to the rescue as we roll into 2021.

See the original post here:
The future of software testing: Machine learning to the rescue - TechBeacon

How machine learning is contributing to the evolution of online education space – India Today

Machine learning has without doubt made a massive difference in every sector we can imagine. Automation has become a necessity for any business that wants to stay technologically current and competitive.

The benefits of automated operations include higher productivity, reliability, availability, increased performance, and reduced operating costs. In education, this takes the form of learning analytics that build statistical models of student knowledge to give students and their instructors computerised, personalised feedback on learning progress; scheduling algorithms that search for an optimal, adaptive teaching policy that helps students learn more efficiently; and so on.
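As one concrete, illustrative example of what a "statistical model of student knowledge" can look like, the short Python sketch below implements a Bayesian Knowledge Tracing update: after each answer, the estimated probability that a student has mastered a skill is revised. The parameter values are placeholders and are not taken from any particular platform.

```python
def bkt_update(p_known: float, correct: bool,
               p_slip: float = 0.1, p_guess: float = 0.2,
               p_learn: float = 0.15) -> float:
    """Update the probability that a student has mastered a skill
    after observing one answer (Bayesian Knowledge Tracing)."""
    if correct:
        posterior = p_known * (1 - p_slip) / (
            p_known * (1 - p_slip) + (1 - p_known) * p_guess)
    else:
        posterior = p_known * p_slip / (
            p_known * p_slip + (1 - p_known) * (1 - p_guess))
    # The student may also learn something from the attempt itself.
    return posterior + (1 - posterior) * p_learn

# Usage: track the mastery estimate across a sequence of answers.
p = 0.3
for answer in [True, True, False, True]:
    p = bkt_update(p, answer)
    print(round(p, 3))
```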

Now, that being said, if we have to single out one industry, the education space has embraced this technological upgrade in a thought-provoking way. From choosing a course of your choice to earning a full-fledged degree, online education has come a long way.

2020 has been the year of the online education space; it wouldn't be wrong to call it a revolution. Thanks to the pandemic, every educational organisation has taken the online teaching route, and providers need to give their students as close to a classroom experience as possible.

This is where machine learning algorithms like speech recognition, image recognition and text analysis come into the picture.

The dimensions of the business change as additional features are added to the model. With the option to pre-record lectures, students can watch them at a convenient time and keep pace with the course, which has been made possible by custom video streaming. This also gives educators an opportunity to create personalised content based on each student's performance.

As a result, the quality of teaching increases, which is in turn beneficial for the business itself.

As a website, you also get a good look at data about your consumers, from the number of views per page to the number of people who have signed up for your programmes. This helps you make better business decisions and understand customer behaviour.

As the world progresses, a degree has become the minimum qualification required for most quality jobs, and college degrees are getting extremely expensive, which makes education difficult to afford. E-learning allows providers to deviate from a set syllabus and give students knowledge that is always relevant, and advances in machine learning have helped make online education highly popular, effective and affordable.

This education software can focus on specific areas where students need to improve, which makes the teaching and learning experience more learner-centric. The online education system has also brought a personal touch to teaching and learning, offering a positive interface between instructors and learners.

Machine learning, being so much in demand, can be a business model in itself. The number of people who want to study machine learning is increasing day by day, which gives education providers an incentive to offer it as a course. Today, there is a gap between students and educators, and a need for an MLPaaS (Machine Learning Platform as a Service).

This would be an all-in-one space where educators can interact with their students to make learning more interactive and less one-way.

Such a platform can expedite the development of new and more innovative forms of online education and can adapt and adjust to the individual learning requirements of every student. Its algorithms help analyse each student's capacity and modify teaching approaches, enabling instructors to cultivate best academic practices and boost the teaching and learning experience in a globalised classroom.

All in all, when done right, building ML into your teaching platforms can only have a positive impact on the customers who use them. This is the best time for the industry to gravitate towards these algorithms, because they reduce clerical work and make the user experience much better.

Authored by Deepak Mishra, Founder and CEO, Prodevans Technologies.


Continued here:
How machine learning is contributing to the evolution of online education space - India Today

Five real world AI and machine learning trends that will make an impact in 2021 – IT World Canada

Experts predict artificial intelligence (AI) and machine learning will enter a golden age in 2021, solving some of the hardest business problems.

Machine learning trains computers to learn from data with minimal human intervention. "The science isn't new, but recent developments have given it fresh momentum," said Jin-Whan Jung, Senior Director & Leader, Advanced Analytics Lab at SAS. "The evolution of technology has really helped us," said Jung. "The real-time decision making that supports self-driving cars or robotic automation is possible because of the growth of data and computational power."

The COVID-19 crisis has also pushed the practice forward, said Jung. "We're using machine learning more for things like predicting the spread of the disease or the need for personal protective equipment," he said. Lifestyle changes mean that AI is being used more often at home, such as when Netflix makes recommendations on the next show to watch, noted Jung. As well, companies are increasingly turning to AI to improve their agility to help them cope with market disruption.

Jung's observations are backed by the latest IDC forecast. It estimates that global AI spending will double to $110 billion over the next four years. How will AI and machine learning make an impact in 2021? Here are the top five trends identified by Jung and his team of elite data scientists at the SAS Advanced Analytics Lab:

Canada's Armed Forces rely on Lockheed Martin's C-130 Hercules aircraft for search and rescue missions. Maintenance of these aircraft has been transformed by the marriage of machine learning and IoT. Six hundred sensors located throughout the aircraft produce 72,000 rows of data per flight hour, including fault codes on failing parts. By applying machine learning, the system develops real-time best practices for the maintenance of the aircraft.

"We are embedding the intelligence at the edge, which is faster and smarter, and that's the key to the benefits," said Jung. Indeed, the combination is so powerful that Gartner predicts that by 2022, more than 80 per cent of enterprise IoT projects will incorporate AI in some form, up from just 10 per cent today.
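As a simplified illustration of the predictive-maintenance pattern described above, the Python sketch below (using scikit-learn) fits an unsupervised anomaly detector to historical sensor readings and flags unusual new readings. The sensor names, scales and thresholds are hypothetical and are not drawn from the Hercules programme.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical historical sensor log: vibration, temperature, oil pressure.
normal_flights = rng.normal(loc=[1.0, 80.0, 40.0],
                            scale=[0.1, 3.0, 2.0], size=(5000, 3))
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_flights)

def flag_readings(readings: np.ndarray) -> np.ndarray:
    """Boolean mask of readings that look anomalous (potential failing part)."""
    return detector.predict(readings) == -1

# Usage: a new batch of readings with one obviously abnormal row appended.
new_data = np.vstack([rng.normal([1.0, 80.0, 40.0], [0.1, 3.0, 2.0], (10, 3)),
                      [[3.0, 120.0, 10.0]]])
print(flag_readings(new_data))
```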

Computer vision trains computers to interpret and understand the visual world. Using deep learning models, machines can accurately identify objects in videos, or images in documents, and react to what they see.

The practice is already having a big impact on industries like transportation, healthcare, banking and manufacturing. "For example, a camera in a self-driving car can identify objects in front of the car, such as stop signs, traffic signals or pedestrians, and react accordingly," said Jung. Computer vision has also been used to analyze scans to determine whether tumors are cancerous or benign, avoiding the need for a biopsy. In banking, computer vision can be used to spot counterfeit bills or to process document images, rapidly robotizing cumbersome manual processes. In manufacturing, it can improve defect detection rates by up to 90 per cent. It is even helping to save lives: cameras monitor and analyze power lines to enable early detection of wildfires.
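For readers who want to see what computer-vision inference looks like in practice, here is a minimal Python sketch that classifies an image with a pretrained model, assuming PyTorch and torchvision (0.13 or later) are installed. Real systems in vehicles or hospitals use purpose-built detection and segmentation models rather than this generic classifier, and the image path is a hypothetical placeholder.

```python
import torch
from PIL import Image
from torchvision import models

weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights).eval()
preprocess = weights.transforms()            # the resizing/normalisation the model expects

def classify(path: str, top_k: int = 3):
    """Return the top-k ImageNet labels and probabilities for one image."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = model(img).softmax(dim=1)[0]
    top = probs.topk(top_k)
    return [(weights.meta["categories"][int(i)], float(p))
            for p, i in zip(top.values, top.indices)]

# Usage (hypothetical image file):
# print(classify("street_scene.jpg"))
```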

At the core of machine learning is the idea that computers are not simply trained based on a static set of rules but can learn to adapt to changing circumstances. "It's similar to the way you learn from your own successes and failures," said Jung. "Business is going to be moving more and more in this direction."

Currently, adaptive learning is often used in fraud investigations. Machines can use feedback from the data or from investigators to fine-tune their ability to spot fraudsters. It will also play a key role in hyper-automation, a top technology trend identified by Gartner: the idea is that businesses should automate processes wherever possible. If it's going to work, however, automated business processes must be able to adapt to different situations over time, Jung said.
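A minimal sketch of what "adapting to feedback" can mean in code is shown below, assuming a recent scikit-learn: a linear model is updated incrementally with each new batch of investigator-labelled transactions instead of being retrained from scratch. The features and labels are synthetic placeholders.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)

# Incrementally trainable model with logistic loss.
model = SGDClassifier(loss="log_loss")

# Initial batch of labelled transactions; the four features are placeholders.
X0 = rng.random((500, 4))
y0 = (rng.random(500) < 0.1).astype(int)     # 1 = confirmed fraud
model.partial_fit(X0, y0, classes=np.array([0, 1]))

def incorporate_feedback(X_new: np.ndarray, y_new: np.ndarray) -> None:
    """Fold investigator verdicts back into the model without a full retrain."""
    model.partial_fit(X_new, y_new)

# Usage: each week's reviewed cases nudge the decision boundary,
# and new transactions are scored with the updated model.
incorporate_feedback(rng.random((50, 4)), (rng.random(50) < 0.1).astype(int))
scores = model.predict_proba(rng.random((5, 4)))[:, 1]
print(scores)
```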

To deliver a return for the business, AI cannot be kept solely in the hands of data scientists, said Jung. In 2021, organizations will want to build greater value by putting analytics in the hands of the people who can derive insights to improve the business. "We have to make sure that we not only make a good product, we want to make sure that people use those things," said Jung. As an example, Gartner suggests that AI will increasingly become part of the mainstream DevOps process to provide a clearer path to value.

Responsible AI will become a high priority for executives in 2021, said Jung. In the past year, ethical issues have been raised in relation to the use of AI for surveillance by law enforcement agencies, or by businesses for marketing campaigns. There is also talk around the world of legislation related to responsible AI.

"There is a possibility for bias in the machine, the data or the way we train the model," said Jung. "We have to make every effort to have processes and gatekeepers to double- and triple-check to ensure compliance, privacy and fairness." Gartner also recommends the creation of an external AI ethics board to advise on the potential impact of AI projects.

Large companies are increasingly hiring Chief Analytics Officers (CAOs) and building the resources to determine the best way to leverage analytics, said Jung. However, organizations of any size can benefit from AI and machine learning, even if they lack in-house expertise.

Jung recommends that organizations without experience in analytics consider getting an assessment of how to turn data into a competitive advantage. For example, the Advanced Analytics Lab at SAS offers an innovation and advisory service that provides guidance on value-driven analytics strategies, helping organizations define a roadmap that aligns with business priorities, from data collection and maintenance through analytics deployment to execution and monitoring, to fulfill the organization's vision, said Jung. As we progress into 2021, organizations will increasingly discover the value of analytics to solve business problems.

SAS highlights a few top trends in AI and machine learning in this video.

Jim Love, Chief Content Officer, IT World Canada

Read more from the original source:
Five real world AI and machine learning trends that will make an impact in 2021 - IT World Canada