High Five: Artificial Intelligence-Generated Campaigns and Experiments | LBBOnline – Little Black Book

I can't stop playing with Midjourney. It may signal the end of human creativity or the start of an exciting new era, but here's me, like a monkey at a typewriter, chucking random words into the algorithm for an instant hit of this-shouldn't-be-as-good-as-it-is art.

For those who don't know, Midjourney is one of a number of image-generating AI models that can turn written prompts into otherworldly pictures. It, along with OpenAI's DALL-E 2, has been having something of a moment in the last month as people get their hands on these tools and try to push them to their limits. Craiyon - formerly DALL-E mini - is an older, less refined and very much wobblier platform to try too. It's worth having a go just to get a feel for what these algorithms can and can't do - though be warned, the dopamine hit of seeing some silly words turn into something strange, beautiful, terrifying or cool within seconds is quite addictive. A confused dragon playing chess. A happy apple. A rat transcends and perceives the oneness of the universe, pulsing with life. Yes Sir, I can boogie.

Within the LBB editorial team, we've been having lots of discussions about the implications of these art-generating algorithms. What are the legal and IP ramifications for those artists whose works are mined and drawn into the data set? (On my Midjourney server, Klimt and HR Giger seem to be the most popular artists to replicate, but what of more contemporary artists?) Will the industry use this to find unexpected new looks that go beyond the human creative habits and rules - or will we see content pulled directly from the algorithm? How long will it take for the algorithms to iron out the wonky weirdness that can sometimes take the human face way beyond the uncanny valley to a nightmarish, distorted abyss? What are the keys to writing prompts when you are after something very specific? Why does the algorithm seem to struggle when two different objects are requested in the same image?

Unlike other technologies that have shaken up the advertising industry, these image-generating algorithms are relatively accessible and easy to use (DALL-E 2's waitlist aside). The results are almost instant - and the possibilities, for now, seem limitless. We've already seen a couple of brands have a go with campaigns that are definitely playing on the novelty and PR angle of this new technology - and also a few really intriguing art projects too...

Agency: Rethink

The highest profile commercial campaign of the bunch is Rethink's new Heinz campaign. It's a follow-up to a previous campaign, in which humans were asked to draw a bottle of ketchup and ended up all drawing a bottle of Heinz. This time around, the team asked DALL-E 2 - and the algorithm, like its human predecessors, couldn't help but create images that looked like Heinz-branded bottles (albeit with a funky AI spin). In this case, the AI is used to reinforce and revisit the original idea - but how long will it take before we're using AIs to generate ideas for boards or pitch images?

Agency: 10 Days

Animation: Jeremy Higgins

This artsy animated short by art director and designer Jeremy Higgins is a delight and shows how a sequence of similar AI-generated images can serve as frames in a film. The flickering effect ironically gives the animation a very hand-made stop motion style, reminding me of films that use individual oil paintings as frames. It's a really vivid encapsulation of what it feels like to be sucked into a Midjourney rabbit hole, too... I also have to tip my hat to Stefan Sagmeister, who shared this film on his Instagram account.

For the latest issue of Cosmopolitan, creative Karen X Cheng used DALL-E 2 to create a dramatic and imposing cover - using the prompt: 'a strong female president astronaut warrior walking on the planet Mars, digital art synthwave'. There's a deep dive into the creative process on the Cosmopolitan website, one that also examines some of the technology's potential ramifications, and it's well worth a read.

Studio: T&DA

Here's a cheeky sixth entry to High Five. This execution is part of a wider summer platform for BT Sport, centred around belief - in this case, football pundit Robbie Savage is served up a DALL-E 2 image of striker Aleksandar Mitrović lifting the Golden Boot. Fulham has just been promoted to the Premier League - but though Robbie can see it, he can't quite believe it.

See the original post here:
High Five: Artificial Intelligence-Generated Campaigns and Experiments | LBBOnline - Little Black Book - LBBonline

InMoment Recognized for Artificial Intelligence Innovation in 2022 AI Breakthrough Awards for Best AI-Based Solution for Retail – Business Wire

SOUTH JORDAN, Utah--(BUSINESS WIRE)--InMoment, the leading provider of Experience Improvement (XI) solutions, today announced that it has been selected as the winner of the Best AI-based Solution for Retail award in the 5th annual AI Breakthrough Awards program conducted by AI Breakthrough, a leading market intelligence organization that recognizes the top companies, technologies, and products in the global Artificial Intelligence (AI) market.

The mission of the AI Breakthrough Awards is to honor excellence and recognize the innovation, hard work, and success in a range of AI and machine learning related categories, including AI Platforms, Deep Learning, Smart Robotics, Business Intelligence, Natural Language Processing, industry-specific AI applications, and many more. This year's program attracted more than 2,950 nominations from over 18 countries worldwide.

InMoment has been recognized for its work with one of the world's largest outdoor clothing, equipment, and services retailers. Using InMoment's XI Spotlight, a sophisticated, AI-based natural language processing (NLP) and text analytics application, this retailer can now break down silos and combine data from different types of text sources, including surveys, reviews, social media, news articles, forums and communities, chat and IM conversations, phone logs, and email, to get an integrated, holistic view of its customers.

"A positive customer experience is the lifeblood of any company, especially in retail, and the retail landscape has evolved faster than anyone could have imagined, forcing brands to rethink how to serve their customers," said James Johnson, managing director, AI Breakthrough. "InMoment incorporates breakthrough AI technology to empower this positive customer experience, helping retailers know where customers are and what they need from their experiences while providing them with in-depth customer understanding, so they can drive acquisition, retention, and spend. We extend our sincere congratulations to the InMoment team for their well-deserved 2022 AI Breakthrough Award."

The ROI that the retailer realized working with InMoment includes significant cost savings over competitive offerings or the cost of developing the solution in-house; optimized data by funneling siloed data into the same system and creating 100+ granular categories for use across business units; and seamless integration with the retailer's existing BI tools used by the CX team for a truly 360-degree view of the customer.

"Modern shoppers are not only flooded with choices, but they also are savvier than ever - and our technology and services help to tailor their experiences with in-depth customer understanding," said Mehul Nagrani, General Manager, AI Product & Technology at InMoment. "InMoment's XI Spotlight integrates and analyzes data from all types of feedback sources via an API, with visualization and management tools that ensure granular and sophisticated insight across touchpoints. We're proud of our ability to use artificial intelligence to help retailers and businesses of all types improve experiences."

With InMoment, retailers gain complex intention analysis, identifying whether someone intends to buy, sell, quit, or recommend a product or service. In doing so, InMoment empowers the company or sales team with immediately actionable information on the state of potential customers, customers, and employees. InMoment also allows retailers to access customer and employee feedback to understand their sentiment and what can be done to improve it. This gives retailers the intelligence they need to increase customer retention and loyalty, acquire new customers, and identify areas within their business for cost reduction to ultimately drive better overall business results.
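As a generic illustration of what this kind of intent analysis can look like (a sketch only, not InMoment's actual implementation), an off-the-shelf zero-shot classifier from the open-source Hugging Face transformers library can score a piece of feedback against candidate intent labels:

```python
# A sketch of intent analysis: score customer feedback against candidate
# intent labels with a zero-shot classifier. This illustrates the general
# technique only; it is not InMoment's implementation.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

feedback = "The checkout was so slow that I'm switching to another store."
intents = ["buy", "quit", "recommend", "complain"]  # illustrative labels

result = classifier(feedback, candidate_labels=intents)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.2f}")  # the top-scoring label is the predicted intent
```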

About InMoment

Improving experiences is why InMoment exists. Our mission is to help our clients improve experiences at the intersection of value, where customer, employee, and business needs come together. The heart of what we do is connect our clients with what matters most through a unique combination of data, technology, and human expertise. With our hyper-modern technology platform, decades of domain authority, and global teams of experts, we uniquely deliver a focus on Experience Improvement (XI) to help our clients own the moments that matter. Take a moment and learn more at inmoment.com.

About AI Breakthrough

Part of Tech Breakthrough, a leading market intelligence and recognition platform for global technology innovation and leadership, the AI Breakthrough Awards program is devoted to honoring excellence in Artificial Intelligence technologies, services, companies, and products. The AI Breakthrough Awards provide public recognition for the achievements of AI companies and products in categories including AI Platforms, Robotics, Business Intelligence, AI Hardware, NLP, Vision, Biometrics, and more. For more information, visit AIBreakthroughAwards.com.

See the original post:
InMoment Recognized for Artificial Intelligence Innovation in 2022 AI Breakthrough Awards for Best AI-Based Solution for Retail - Business Wire

Artificial Intelligence is the Future of the Banking Industry – Are You Prepared for It? – International Banker

By Pritham Shetty, Consulting Director, Propel

Our world is moving at a fast pace. Though banks originally built their foundations to be run solely by humans, the time has come for artificial intelligence in the banking industry. In 2020, the global AI banking market was valued at $3.88 billion, and it is projected to reach $64.03 billion by the end of the decade, with a compound annual growth rate of 32.6%. However, when it comes to implementing even the best strategies, the application of artificial intelligence in the banking industry is susceptible to weak core tech and poor data backbones.
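As a quick sanity check on those figures (assuming "the end of the decade" means 2030, a 10-year horizon from the 2020 valuation), the implied compound annual growth rate can be computed directly:

```python
# Verify the quoted market growth figures. The 2030 end year is an
# assumption; the article says only "by the end of the decade".
start, end, years = 3.88, 64.03, 10  # market size in USD billions

cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # ~32.4%, in line with the cited 32.6%
```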

By my count, there were 20,000 new banking regulatory requirements created in 2015 alone. Chances are your business won't find a one-size-fits-all solution to dealing with this. The next-best option is to be nimble. You need to be able to break down the business process into small chunks. By doing so, you can come up with digital strategies that work with new and existing regulations.

AI can take you a long way in this process, but you must know how to harness its power. Take originating home loans, for instance. This can be an important, sometimes tedious, process for the loan seeker and the bank. With an AI solution, loan origination can happen more quickly and be more beneficial to both parties.

As the world of banking moves toward AI, it is integral to note that the crucial working element for AI is data. The trick to using that data is to understand how to leverage it best for your business value. Data with no direction won't lead to progress, nor will it lead to the proper deployment of your AI. That is one of the top reasons it is so challenging to implement AI in banks: there has to be a plan.

Even if you come up with a poor strategy, those mistakes can be course-corrected over time. It takes some time and effort, but it is doable. If you home in on how customer information can be used, you can utilize AI for banking services in a way that is scalable and actionable. Once you understand how to use the data you collect, you can develop technical solutions that work with each other, identify specific needs, and build data pipelines that will lead you down the road to AI.

How is artificial intelligence changing the banking sector?

Due to the increasingly digital world, customers have more access to their banking information than ever. Of course, this can lead to other problems. Because there is so much access to data, there are also prime opportunities for fraudulent activities, and this is one example of how AI is changing the banking sector. With AI, you can train systems to learn, understand, and recognize when these activities happen. In fact, there was a 5% decrease in record exposure from 2020 to 2021.

AI also safeguards against data theft or abuse. Not only can AI recognize breaches from outside sources, but it can also recognize internal threats. Once an AI system is trained, it can identify these problems and even offer solutions to them. For instance, a customer support call center can have traffic directed by AI to handle an influx of calls during high-volume periods.

Another great example of this is the development of conversational AI platforms. The ubiquity of social media and other online platforms can be used to tailor customer experiences directly led by AI. By using the data gathered from all sources, AI can greatly improve the customer experience overall.

For example, a loan might take anywhere from seven to 45 days to be granted. But with AI, the process can be expedited not only for the customer, but also for the bank. By using AI in a situation such as this, your bank can assess the risk it is taking on by servicing loans. It can also make the process faster by performing underwriting, document scanning, and other manual processes previously associated with data collection. On top of all that, AI can gather and analyze data about your customers' behaviors throughout their banking lives.

In the past, so much of this work was done solely by people. Although automation has certainly helped speed up and simplify tasks, it is used for tedium and doesn't have the complexity of AI. AI saves time and money by freeing up your employees to do other processes and provides valuable insights to your customers. And customers can budget better and have a clearer idea of where their money is going.

Even the most traditional banks will want to adopt AI to save time and money and allow employees more opportunities to have positive one-on-one relationships with customers. Look no further than fintech companies such as Credijusto, Nubank, and Monzo that have digitized traditional banking services through the power of cutting-edge tech.

Are you ready to put AI to work for your business?

Today, it's not a question of how AI is impacting financial services. Now, it's about how to implement it. That all starts with you. You must ask the right questions: What are your goals for implementing AI? Do you want to improve your internal processes? Simply provide a better customer service experience? If so, how should you implement AI for your banking services? Start with these strategies:

By making realistic short-term goals, you set yourself up for future success. These are the solutions that will be the building blocks for the type of AI everyone will aspire to use.

You want to ensure that you know how you currently use data and how you plan to use it in the future. Again, this sets your organization up for success in the long run. If you don't have the right practices now, you certainly won't have them going forward.

As you implement AI into your banking practices, you should know how exactly you generate data. Then, you must understand how you interpret it. What is the best use for it? After that, you can make decisions that will be scalable, useful, and seamless.

Technology has not only made the world around us move faster, but it has also made it better in so many ways. Traditional institutions such as banks might be slow to adopt, but we've already seen how artificial intelligence is changing the banking sector. By taking the proper steps, you could be moving right along with it into the future.

Visit link:
Artificial Intelligence is the Future of the Banking Industry Are You Prepared for It? - International Banker

Researchers use artificial intelligence to create a treasure map of undiscovered ant species – EurekAlert

Image: Map detailing ant diversity centers in Africa, Madagascar, and Mediterranean regions.

Credit: Kass et al., 2022, Science Advances

E. O. Wilson once referred to invertebrates as "the little things that run the world," without whom the human species "[wouldn't] last more than a few months." Although small, invertebrates have an outsized influence on their environments, pollinating plants, breaking down organic matter and speeding up nutrient cycling. And what they lack in stature, they make up for in diversity. With more than one million known species, insects alone vastly outnumber all other invertebrates and vertebrates combined.

Despite their importance and ubiquity, some of the most basic information about invertebrates, such as where they're most diverse and how many of them there are, still remains a mystery. This is especially problematic for conservation scientists trying to stave off global insect declines; you can't conserve something if you don't know where to look for it.

In a new study published this Wednesday in the journal Science Advances, researchers used ants as a proxy to help close major knowledge gaps and hopefully begin reversing these declines. Working for more than a decade, researchers from institutions around the world stitched together nearly one-and-a-half million location records from research publications, online databases, museums and scientific field work. They used those records to help produce the largest global map of insect diversity ever created, which they hope will be used to direct future conservation efforts.

"This is a massive undertaking for a group known to be a critical ecosystem engineer," said co-author Robert Guralnick, curator of biodiversity informatics at the Florida Museum of Natural History. "It represents an enormous effort not only among all the co-authors but the many naturalists who have contributed knowledge about distributions of ants across the globe."

Creating a map large enough to account for the entirety of ant biodiversity presented several logistical challenges. All of the more than 14,000 currently known ant species were included, and each one varied dramatically in the amount of data available.

The majority of the records used contained a description of the location where an ant was collected or spotted but did not always have the precise coordinates needed for mapping. Inferring the extent of an ant's range from incomplete records required some clever data wrangling.

Co-author Kenneth Dudley, a research technician with the Okinawa Institute of Science and Technology, built a computational workflow to estimate the coordinates from the available data, which also checked the data for errors. This allowed the researchers to make different range estimates for each species of ant depending on how much data was available. For species with less data, they constructed shapes surrounding the data points. For species with more data, the researchers predicted the distribution of each species using statistical models that they tuned to reduce as much noise as possible.

The researchers brought these estimates together to form a global map, divided into a grid of 20 km by 20 km squares, that showed an estimate of the number of ant species per square (called the species richness). They also created a map that showed the number of ant species with very small ranges per square (called the species rarity). In general, species with small ranges are particularly vulnerable to environmental changes.
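As a rough sketch of that gridding step (illustrative only; the species names, coordinates, and small-range cutoff below are invented, not the study's actual data or parameters), richness and rarity per cell can be tallied like this:

```python
# Bin georeferenced occurrence records into 20 km x 20 km cells, then count
# species richness per cell and rarity (species occupying few cells).
# All data and the SMALL_RANGE_CELLS cutoff are illustrative assumptions.
from collections import defaultdict

CELL_KM = 20
SMALL_RANGE_CELLS = 5  # hypothetical cutoff for a "small-ranged" species

# records: (species, x_km, y_km) in an equal-area projection
records = [("Pheidole sp1", 103.0, 41.5), ("Pheidole sp1", 138.2, 44.0),
           ("Camponotus sp2", 103.9, 41.1)]

cells_per_species = defaultdict(set)
species_per_cell = defaultdict(set)
for species, x, y in records:
    cell = (int(x // CELL_KM), int(y // CELL_KM))
    cells_per_species[species].add(cell)
    species_per_cell[cell].add(species)

richness = {cell: len(spp) for cell, spp in species_per_cell.items()}
rarity = {cell: sum(len(cells_per_species[s]) <= SMALL_RANGE_CELLS for s in spp)
          for cell, spp in species_per_cell.items()}
print(richness)
print(rarity)
```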

However, there was another problem to overcome: sampling bias.

"Some areas of the world that we expected to be centers of diversity were not showing up on our map, but ants in these regions were not well-studied," explained co-first author Jamie Kass, a postdoctoral fellow at the Okinawa Institute of Science and Technology. "Other areas were extremely well-sampled, for example, parts of the USA and Europe, and this difference in sampling can impact our estimates of global diversity."

So, the researchers utilized machine learning to predict how their diversity estimates would change if they sampled all areas around the world equally, and in doing so, they identified areas where they estimate many unknown, unsampled species exist.

"This gives us a kind of treasure map, which can guide us to where we should explore next and look for new species with restricted ranges," said senior author Evan Economo, a professor at the Okinawa Institute of Science and Technology.

When the researchers compared the rarity and richness of ant distributions to the comparatively well-studied amphibians, birds, mammals and reptiles, they found that ants were about as different from these vertebrate groups as the vertebrate groups were from each other.

This was unexpected given that ants are evolutionarily highly distant from vertebrates, and it suggests that priority areas for vertebrate diversity may also have a high diversity of invertebrate species. The authors caution, however, that ant biodiversity patterns have unique features. For example, the Mediterranean and East Asia show up as diversity centers for ants more than the vertebrates.

Finally, the researchers looked at how well-protected these areas of high ant diversity are. They found that the percentage was low: only 15% of the top 10% of ant rarity centers had some sort of legal protection, such as a national park or reserve, which is less than the existing protection for vertebrates.

"Clearly, we have a lot of work to do to protect these critical areas," Economo concluded.

The global distribution of known and undiscovered ant biodiversity

3-Aug-2022


Read the original here:
Researchers use artificial intelligence to create a treasure map of undiscovered ant species - EurekAlert

CSforALL Urges Greater Focus on AI and Data Science – Government Technology

(TNS) If you're not in the know, artificial intelligence and data science may sound like especially nerdy subsets of the already pocket-protector-infused field of computer science.

But anyone who is serious about expanding computer science education (a list that includes Fortune 500 company CEOs and policymakers on both sides of the aisle) should be thinking carefully about emphasizing AI, in which machines are trained to perform tasks that simulate some of what the human brain can do, and data science, in which students learn to record, store, and analyze data.

That means making sure kids have access to well-designed resources to learn those subjects, bolstering professional development for those who teach them, exposing career counselors to information about how to help students pursue jobs in those fields, and much more.

Leigh Ann DeLyser, CSforALL's co-founder and executive director, spoke with Education Week about some big picture ideas around the push for a greater focus on AI and data science within computer science education. Here are some key takeaways from that conversation.

Teaching computer science, including AI and data science, can help the next generation grapple with big societal problems.

"Our world is complex and messy and full of big problems," DeLyser said. AI and data science are fast- growing areas when it comes to employment, but "they're also the fastest-growing tools that are being used by business people, nonprofits, and governments every single day. No matter what you do in life, if you want to tackle the big problems we have in the world, you're going to need to understand these things and how they can be used, even if you're not the programmer who is writing the code that makes them go."

Students from all different backgrounds must get grounding in computer science.

It's especially important to increase socioeconomic, racial, and gender diversity in the field.

"Research shows that teams that have different backgrounds are better problem solvers, because they think about problems from different ways," DeLyser said. "When everybody comes with the same perspective, you tend to miss out on some of the ideas or the big challenges that pop up along the way. ... We [want to] give equal access, no matter what ZIP code [students] grow up in, to those high-paying careers and opportunities later in their life."

There are already good models of how to teach AI and data science.

It's possible to see school districts already experimenting with how to do this well, if you know where to look, DeLyser said. "Often, we frame [computer science access] as a deficit narrative. There's nothing happening in education, or education is failing."

But that's not the case, she added. For instance, the large Gwinnett County school district outside Atlanta is getting ready to open a high school that will focus on artificial intelligence. And in Bentonville, Ark., where Walmart is headquartered, local high school students interning with the company get a first-hand look at how the retail giant uses AI to configure store layouts, with an eye towards maximizing profit.

It's never too early to start teaching artificial intelligence.

Believe it or not, kids as young as kindergarten or even preschool can become familiar with the basics of AI, DeLyser said.

"AI is pattern recognition. One of the most important pre skills for algebra and math development for kids in kindergarten, and even preschool, is pattern recognition. 'This is a circle, this is a square,'" DeLyser said. Teaching AI is "having them take that learning that they're doing for the pattern recognition just one step further. It's like, OK, 'I'm going to teach you, you're going to teach a friend. Now I'm going to teach a computer.' It's not that far off from the work that they're already doing."

© 2022 Education Week (Bethesda, Md.). Distributed by Tribune Content Agency, LLC.

Read the rest here:
CSforALL Urges Greater Focus on AI and Data Science - Government Technology

Artificial Intelligence Chipsets Market Is Booming with Progressive Trends and Exciting Opportunities by 2028| IBM Corp. (U.S.), Microsoft Corp….

Rising progress in the innovation and adoption of AI, which is improving consumer services, is the key factor driving the growth of artificial intelligence chipsets in the market. The rising number of AI applications, and the processing power needed to handle huge and complex datasets, is adding to the Artificial Intelligence Chipsets Market's development. The Global Artificial Intelligence Chipsets Market report gives a comprehensive assessment of the market, offering an exhaustive examination of key segments, trends, drivers, restraints, the competitive landscape, and the elements playing a significant part in the market.

An artificial intelligence chipset is specifically designed to run AI tasks more efficiently, relying on simpler hardware for low-precision arithmetic, which yields gains in power and space. Artificial neural networks, machine learning, and machine vision can all benefit from such chipsets. Lately, increased government involvement in the protection of critical infrastructure and sensitive data has resulted in the use of AI chipsets in security applications. The expansion of AI chipsets in the automotive sector is being fueled by government backing, especially in the United States. AI chipsets are used for various undertakings, including quantum computing, fraud detection, risk management, and applications that demand real-time data. These capabilities are expected to support the Artificial Intelligence Chipsets Market over the forecast period.

Get Sample Copy of this Report: https://www.infinitybusinessinsights.com/request_sample.php?id=877556

The worldwide Artificial Intelligence Chipsets market is expected to grow at a booming CAGR over 2022-2028, rising from USD billion in 2021 to USD billion in 2028. The report also shows the importance of the Artificial Intelligence Chipsets market's main players, including their business overviews, financial summaries, and SWOT assessments.

Artificial Intelligence Chipsets Market, By Segmentation:

Artificial Intelligence Chipsets Market segment by Type:

Deep Learning, Robot Technology, Digital Personal Assistant, Querying Method, Natural Language Processing, Context-Aware Processing

Artificial Intelligence Chipsets Market segment by Application:

Retail, Transportation, Automation, Manufacturing, Others

The years examined in this study are the following to estimate the Artificial Intelligence Chipsets market size:

History Year: 2015-2019
Base Year: 2021
Estimated Year: 2022
Forecast Year: 2022 to 2028

Surging COVID-19 cases, cautious travel restrictions, a brief halt to gatherings, and breakdowns in supply chains and standard medicine supply created substantial room for market progression in 2020 and since. Even once the COVID-19 pandemic is resolved, active innovative effort in the space may help support the market's trajectory somewhat later.

The principal regions covered for the Artificial Intelligence Chipsets market are: North America (United States, Canada, and Mexico), Europe (Germany, France, United Kingdom, Russia, Italy, and the rest of Europe), Asia-Pacific (China, Japan, Korea, India, Southeast Asia, and Australia), South America (Brazil, Argentina, Colombia, and the rest of South America), and the Middle East and Africa (Saudi Arabia, UAE, Egypt).

The key companies profiled in the Artificial Intelligence Chipsets Market report are IBM Corp. (U.S.), Microsoft Corp. (U.S.), Google Inc. (U.S.), FinGenius Ltd. (U.K.), NVIDIA Corporation (U.S.), Intel Corporation (U.S.), General Vision, Inc. (U.S.), Numenta, Inc. (U.S.), Sentient Technologies (U.S.), and Inbenta Technologies, Inc. (U.S.).

Some of the key questions answered in this report:
1. Analysis of the Artificial Intelligence Chipsets market (preceding, present, and future) to calculate the rate of growth and market size.
2. Market risk, market opportunities, driving forces.
3. New technologies and issues to investigate market dynamics.
4. Market forecast.
5. Close evaluation of current and rising market segments.

Then, the report describes the Artificial Intelligence Chipsets market division based on various parameters and attributes that are based on geographical distribution, product types, and applications. The market segmentation clarifies further regional distribution for the market, business trends, potential revenue sources, and upcoming market opportunities.

If you need anything more than these then let us know and we will prepare the report according to your requirement.

For More Details On this Artificial Intelligence Chipsets Market Report @:https://www.infinitybusinessinsights.com/request_sample.php?id=877556

Table of Contents:
Chapter 1. List of Data Sources
Chapter 2. Executive Summary
Chapter 3. Industry Outlook
3.1. Artificial Intelligence Chipsets Market Industry segmentation
3.2. Artificial Intelligence Chipsets Market Industry size and growth prospects, 2015-2026
3.3. Artificial Intelligence Chipsets Market Industry Value Chain Analysis
3.3.1. Vendor landscape
3.4. Regulatory Framework
3.5. Market Dynamics
3.5.1. Market Driver Analysis
3.5.2. Market Restraint Analysis
3.6. Porter's Analysis
3.6.1. Threat of New Entrants
3.6.2. Bargaining Power of Buyers
3.6.3. Bargaining Power of Suppliers
3.6.4. Threat of Substitutes
3.6.5. Internal Rivalry
3.7. PESTEL Analysis
Chapter 4. Artificial Intelligence Chipsets Market Industry Product Outlook
Chapter 5. Artificial Intelligence Chipsets Market Industry Application Outlook
Chapter 6. Artificial Intelligence Chipsets Market Industry Geography Outlook
6.1. Artificial Intelligence Chipsets Industry Share, by Geography, 2022 & 2028
6.2. North America
6.2.1. Market 2022-2028 estimates and forecast, by product
6.2.2. Market 2022-2028 estimates and forecast, by application
6.2.3. The U.S.
6.2.3.1. Market 2022-2028 estimates and forecast, by product
6.2.3.2. Market 2022-2028 estimates and forecast, by application
6.2.4. Canada
6.2.4.1. Market 2022-2028 estimates and forecast, by product
6.2.4.2. Market 2022-2028 estimates and forecast, by application
6.3. Europe
6.3.1. Market 2022-2028 estimates and forecast, by product
6.3.2. Market 2022-2028 estimates and forecast, by application
6.3.3. Germany
6.3.3.1. Market 2022-2028 estimates and forecast, by product
6.3.3.2. Market 2022-2028 estimates and forecast, by application
6.3.4. The UK
6.3.4.1. Market 2022-2028 estimates and forecast, by product
6.3.4.2. Market 2022-2028 estimates and forecast, by application
6.3.5. France
6.3.5.1. Market 2022-2028 estimates and forecast, by product
6.3.5.2. Market 2022-2028 estimates and forecast, by application
Chapter 7. Competitive Landscape
Chapter 8. Appendix

About Us:
Infinity Business Insights is a market research company that offers market and business research intelligence all around the world. We specialize in offering services across various industry verticals to help clients recognize their highest-value opportunities, address their most analytical challenges, and transform their work. We serve particular and niche industry demands while maintaining a consistent standard of quality within specified timeframes, tracing crucial movements at both the domestic and global levels. The products and services provided by Infinity Business Insights cover vital technological, scientific, and economic developments in industrial, pharmaceutical, and high-technology companies.

Contact us:
Amit J
Sales Co-Ordinator
International: +1-518-300-3575
Email: [emailprotected]
Website: https://www.infinitybusinessinsights.com
Facebook: https://facebook.com/Infinity-Business-Insights-352172809160429
LinkedIn: https://www.linkedin.com/company/infinity-business-insights
Twitter: https://twitter.com/IBInsightsLLP

See the rest here:
Artificial Intelligence Chipsets Market Is Booming with Progressive Trends and Exciting Opportunities by 2028| IBM Corp. (U.S.), Microsoft Corp....

What is Artificial Intelligence? Guide to AI | eWEEK – eWeek

By any measure, artificial intelligence (AI) has become big business.

According to Gartner, customers worldwide will spend $62.5 billion on AI software in 2022. And it notes that 48 percent of CIOs have either already deployed some sort of AI software or plan to do so within the next twelve months.

All that spending has attracted a huge crop of startups focused on AI-based products. CB Insights reported that AI funding hit $15.1 billion in the first quarter of 2022 alone. And that came right after a quarter that saw investors pour $17.1 billion into AI startups. Given that data drives AI, it's no surprise that related fields like data analytics, machine learning and business intelligence are all seeing rapid growth.

But what exactly is artificial intelligence? And why has it become such an important and lucrative part of the technology industry?

Also see: Top AI Software

In some ways, artificial intelligence is the opposite of natural intelligence. If living creatures can be said to be born with natural intelligence, man-made machines can be said to possess artificial intelligence. So from a certain point of view, any thinking machine has artificial intelligence.

And in fact, one of the early pioneers of AI, John McCarthy, defined artificial intelligence as "the science and engineering of making intelligent machines."

In practice, however, computer scientists use the term artificial intelligence to refer to machines doing the kind of thinking that humans have taken to a very high level.

Computers are very good at making calculations: taking inputs, manipulating them, and generating outputs as a result. But in the past they have not been capable of other types of work that humans excel at, such as understanding and generating language, identifying objects by sight, creating art, or learning from past experience.

But that's all changing.

Today, many computer systems have the ability to communicate with humans using ordinary speech. They can recognize faces and other objects. They use machine learning techniques, especially deep learning, in ways that allow them to learn from the past and make predictions about the future.

So how did we get here?

Also see: How AI is Altering Software Development with AI-Augmentation

Many people trace the history of artificial intelligence back to 1950, when Alan Turing published "Computing Machinery and Intelligence." Turing's essay began, "I propose to consider the question, 'Can machines think?'" It then laid out a scenario that came to be known as a Turing Test. Turing proposed that a computer could be considered intelligent if a person could not distinguish the machine from a human being.

In 1956, John McCarthy and Marvin Minsky hosted the first artificial intelligence conference, the Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI). It convinced computer scientists that artificial intelligence was an achievable goal, setting the foundation for several decades of further research. And early forays into AI technology developed bots that could play checkers and chess.

The 1960s saw the development of robots and several problem-solving programs. One notable highlight was the creation of ELIZA, a program that simulated psychotherapy and provided an early example of human-machine communication.

In the 1970s and 80s, AI development continued but at a slower pace. The field of robotics in particular saw significant advances, such as robots that could see and walk. And Mercedes-Benz introduced the first (extremely limited) autonomous vehicle. However, government funding for AI research decreased dramatically, leading to a period some refer to as the AI winter.

Interest in AI surged again in the 1990s. The Artificial Linguistic Internet Computer Entity (ALICE) chatbot demonstrated that natural language processing could lead to human-computer communication that felt far more natural than what had been possible with ELIZA. The decade also saw a surge in analytic techniques that would form the basis of later AI development, as well as the development of the first recurrent neural network architecture. This was also the decade when IBM rolled out its Deep Blue chess AI, the first to win against the current world champion.

The first decade of the 2000s saw rapid innovation in robotics. The first Roombas began vacuuming rugs, and robots launched by NASA explored Mars. Closer to home, Google was working on a driverless car.

The years since 2010 have been marked by unprecedented increases in AI technology. Both hardware and software developed to a point where object recognition, natural language processing, and voice assistants became possible. IBM's Watson won Jeopardy!. Siri, Alexa, and Cortana came into being, and chatbots became a fixture of modern retail. Google DeepMind's AlphaGo beat human Go champions. And enterprises in all industries have begun deploying AI tools to help them analyze their data and become more successful.

Now AI is truly beginning to evolve past some of the narrow and limited types into more advanced implementations.

Also see:The History of Artificial Intelligence

Different groups of computer scientists have proposed different ways of classifying the types of AI. One popular classification uses three categories: narrow (or weak) AI, general AI, and superintelligent AI.

Another popular classification uses four different categories: reactive machines, limited memory, theory of mind, and self-aware AI.

While these classifications are interesting from a theoretical standpoint, most organizations are far more interested in what they can do with AI. And that brings us to the aspect of AI that is generating a lot of revenue: the AI use cases.

Also see: Three Ways to Get Started with AI

The possible AI use cases and applications for artificial intelligence are limitless. Some of today's most common AI use cases include the following:

Of course, these are just some of the more widely known use cases for AI. The technology is seeping into daily life in so many ways that we often aren't fully aware of them.

Also see: Best Machine Learning Platforms

So where is the future of AI? Clearly it is reshaping consumer and business markets.

The technology that powers AI continues to progress at a steady rate. Future advances like quantum computing may eventually enable major new innovations, but for the near term, it seems likely that the technology itself will continue along a predictable path of constant improvement.

What's less clear is how humans will adapt to AI. It is a question that looms large over human life in the decades ahead.

Many early AI implementations have run into major challenges. In some cases, the data used to train models has allowed bias to infect AI systems, rendering them unusable.

In many other cases, businesses have not seen the financial results they hoped for after deploying AI. The technology may be mature, but the business processes surrounding it are not.

"The AI software market is picking up speed, but its long-term trajectory will depend on enterprises advancing their AI maturity," said Alys Woodward, senior research director at Gartner.

"Successful AI business outcomes will depend on the careful selection of use cases," Woodward added. "Use cases that deliver significant business value, yet can be scaled to reduce risk, are critical to demonstrate the impact of AI investment to business stakeholders."

Organizations are turning to approaches like AIOps to help them better manage their AI deployments. And they are increasingly looking for human-centered AI that harnesses artificial intelligence to augment rather than to replace human workers.

In a very real sense, the future of AI may be more about people than about machines.

Also see: The Future of Artificial Intelligence

Read the rest here:
What is Artificial Intelligence? Guide to AI | eWEEK - eWeek

Artificial Intelligence in Cyber Security: Benefits and Drawbacks. – TechGenix

AI for cybersecurity; it's everywhere else!

You can use artificial intelligence (AI) to automate complex repetitive tasks much faster than a human. AI technology can sort complex, repetitive input logically. That's why AI is used for facial recognition and self-driving cars. But this ability also paved the way for AI cybersecurity. This is especially helpful in assessing threats in complex organizations. When business structures are continually changing, admins can't identify weaknesses using traditional methods.

Additionally, businesses are becoming more complex in network structure. This means cybercriminals have more exploits to use against you. You can see this in highly automated manufacturing 3.0 businesses or integrated companies like the oil and gas industry. To this end, various security companies have developed AI cybersecurity tools to help protect businesses.

In this article, I'll delve into what AI is and how it applies to cybersecurity. You'll also learn the benefits and drawbacks of this promising technology. First, let's take a look at what AI is!

Artificial intelligence is a rationalization method using a statistically weighted matrix. This matrix is also called a neural net. You can think of this net as a decision matrix with nodes that have a weighted bias for each filtering process. The neural net will receive a database of precompiled data. This data will also contain answers to the underlying question the AI solves. This way, the AI will create a bias.

For example, let's consider a database containing different images. Let's say it has images of a person's face and other images of watermelons. Additionally, each image has a tag to check each item. As the AI learns whether it guessed correctly or not, the system increments node weightings. This process continues until the system reaches a predefined error percentage. This is often referred to as deep learning, which refers to the decision layers creating the depth.

Now, let's take a look at the steps used to process data.

We can condense the overall data workflow into the following process:

1. Receive the input data.
2. Filter the data into a usable format.
3. Run the data through the weighted decision nodes.
4. Output a decision.

However, this process is slightly different with deep learning. The first step would include data from a precompiled database tagged with the correct response. Additionally, deep learning will repeat steps 1 through 4 to reach a predefined error tolerance value.

Let's take a look at this with an example of how AI data is processed.

Let's say a picture has reached an AI node. The node will filter the data into a usable format, like 255-level grayscale. Then, it'll run a script to identify features, for example. If these features match others from a filter, the node can make a decision. For instance, it'll say whether it found a face or a watermelon.

Then, the data goes to the next node down. This specific node could have a color filter to confirm the first decision. The process continues until the data reaches the last node. At that point, the AI will have made a final decision, ensuring whether it found a face or a watermelon.
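To make that loop concrete, here is a toy sketch of a single weighted node being trained the way described above: guess, compare against the tag, and increment the weightings until the error rate falls below a predefined tolerance. The features, data, and tolerance are invented for illustration; real systems use far larger nets and datasets.

```python
# Toy single-node "neural net": nudge weights after each wrong guess until
# the error rate reaches a predefined tolerance. Purely illustrative.
import random

random.seed(0)

# Each sample: ([roundness, green_ratio, has_eyes], tag); 1 = watermelon, 0 = face
data = [([0.9, 0.8, 0.0], 1), ([0.8, 0.1, 1.0], 0),
        ([0.7, 0.9, 0.0], 1), ([0.6, 0.2, 1.0], 0)]

weights = [random.uniform(-1, 1) for _ in range(3)]
bias = 0.0
LEARNING_RATE, TOLERANCE = 0.1, 0.25  # stop once error rate <= 25%

while True:
    errors = 0
    for features, tag in data:
        score = sum(w * x for w, x in zip(weights, features)) + bias
        guess = 1 if score > 0 else 0
        if guess != tag:  # increment the weightings toward the tagged answer
            errors += 1
            for i, x in enumerate(features):
                weights[i] += LEARNING_RATE * (tag - guess) * x
            bias += LEARNING_RATE * (tag - guess)
    if errors / len(data) <= TOLERANCE:
        break

print("trained weights:", [round(w, 2) for w in weights], "bias:", round(bias, 2))
```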

Importantly, AI systems will always have a degree of error to them. None are infallible, and they never will be. But sometimes, the error percentages could be acceptable.

Now that you know how AI works, let's take a look at AI cybersecurity solutions.

AI cybersecurity addresses the need to automate the assessment of threats in complex environments. Specifically, here are 2 use-cases for AI in cybersecurity:

1. Classifying data intelligently, to speed up tasks like identifying malware.
2. Assessing anomalies against baseline data, to flag threats in complex networks.

Now that you know the two main uses of AI in cybersecurity, let's take a look at its benefits and drawbacks!

As mentioned, AI has a lot of benefits. It runs repetitive tasks to identify anomalies or to classify data in your business. That said, a few large drawbacks may offset its benefits. Here, we'll look at the drawbacks.

The first drawback is the AI cybersecurity solution's accuracy. This accuracy also depends on many factors. These include the neural net's size and the decisions defined for filtering. It also depends on the number of iterations used to reach the predefined error percentage.

Imagine you have a decision tree with three layers. And each layer has several nodes for each decision route. Even though this is a fairly simple matrix, it needs a lot of calculations. Your system's finite resources will compromise your solution's intelligence.

An AI cybersecurity solution provider may stunt its solution's intelligence/accuracy to meet the target demographic. But sometimes, the problem isn't intelligence. Instead, it's low latency and security vulnerabilities. When searching for an AI cybersecurity solution, consider how secure it is in your network.

Once trained, an AI statistical weighted matrix is often not re-trained in service. You'll find this is due to the lack of processing resources available in hardware. Sometimes, the system learns something that makes it worse, reducing effectiveness. Conversely, humans learn iteratively. This means they cause a lot of accidents. As a result, solution providers must ensure the software meets specification requirements during use.

Cybersecurity often requires updates to counter new exploits. To this end, it takes a lot of power to train your AI. Additionally, your AI cybersecurity vendor will need to update regularly to address cyber threats.

That said, the AI component of an AI cybersecurity solution is for classifying data and assessing anomalies in baseline data. As a result, it doesn't cause an issue for malware list updates. This means you can still use AI cybersecurity.

Now that you know the benefits and drawbacks of AI cybersecurity, let's take a look at some uses for this technology!

As mentioned, highly automated businesses have the weakest cybersecurity. Generally, automated environments will overlap information technology (IT), operational technology (OT), and the Internet of things (IoT). This is to improve productivity, reduce the unit cost of a product, and undercut the competition.

But this also creates vulnerabilities. To this end, AI cybersecurity is great for finding potential exploits in these companies. Solutions either inform the administrator or automatically apply patches.

However, this may not be enough. Cybercriminals are currently attacking large, highly integrated companies. To do that, they exploit OT, which has no security. This OT was meant for wired networks to send commands to hardware like plant equipment. This means it never posed a security weakness. But today, attackers use OT to access the rest of a network or take plant equipment offline.

OT risk management tools are becoming popular for the reasons mentioned above. These systems effectively take a real-time clone of the production environment. Then, they run countless simulations to find exploits.

The AI part of the system generally finds exploits. In that case, an administrator provides a solution. OT risk management software continually runs as manufacturing plant arrangements change to meet orders, projects, or supply demands.

In this scenario, AI systems use known malware from antivirus lists to try and find an entry route into the system. The task requires automated repetitive functions of a complex system. And this makes it perfect for AI.

So when should you implement AI cybersecurity? Let's find out.

As discussed above, businesses that use manufacturing and plant equipment should use AI cybersecurity. In most cases, you'll also need to look for an OT risk management solution to reduce risks associated with OT.

You also can use AI cybersecurity if your business uses IoT and IT. This way, you can reduce the risk to the network from exploits. IoT devices are generally built to undercut competitors on price, bypassing the cost of adding adequate security measures.

Finally, you can use AI even if your company only uses IT. AI helps assess irregular traffic, so it protects your gateways. Additionally, you can leverage AI's data analytics. This way, you'll know if someone is using your hardware for malicious purposes.

Now that you know all you need to get started with AI cybersecurity, let's wrap things up!

You'll likely use AI wherever you need automated repetitive tasks. AI also helps make decisions on complex tasks. This is why many cybersecurity solution providers use AI. In fact, these providers' tools help meet the challenge of highly complex systems that have very poor security.

You can always benefit from AI cybersecurity. It doesn't matter how integrated your business technology is. AI functionality is also great for classifying data using intelligent operations. This way, you can speed up your search for malware. AI cybersecurity is also beneficial for finding abnormal use of the network.

Do you have more questions about AI cybersecurity? Check out the FAQ and Resources sections below!

An AI neural net is a statistical weighted matrix. This matrix helps process input data based on decisions made at nodes with a calibrated bias. To optimize this bias, data gets iteratively passed through the matrix. After that, the success rate is assessed, and each weighting value brings incremental changes. This process is called deep learning.

AI intelligence refers to the AI's error tolerance and decision layers. In theory, you could have as many layers as needed to make an intelligent AI. However, training it with data to reach a high error tolerance could be processor-intensive. This training may also take too long to produce. As a result, the solution becomes ineffective.

AI is trained using data to meet a predefined error tolerance level. For instance, a self-driving car lasts 1,000,000 miles by design. In this case, the car's service life determines the AI error tolerance. The AI's decision-making must likely be 99.99% correct to meet that service life.

Operations technology (OT) risk assessment software assesses the security risks of plant equipment. Plants, integrated oil supply chains, and manufacturing 3.0 or above are also prime targets for attacks. AI cybersecurity can help assess threats using a clone of the production system. This helps check routes from OT systems to the rest of the system.

Yes, AI cybersecurity works in real-time. This helps detect weaknesses in your network or cyber threats. For example, you can find weaknesses by assessing traffic data through gateways and other hardware. You also can use AI as a centralized OT risk assessment software. This will let you assess the network structure for threats.

Learn about the different types of malware your AI cybersecurity solution will have to deal with.

Find out more about AI cybersecurity.

Discover more about AI and deep learning.

Understand how you can protect your organization by following GRC.

Learn how you can make your OPSEC better.

Read more from the original source:
Artificial Intelligence in Cyber Security: Benefits and Drawbacks. - TechGenix

A technique to improve both fairness and accuracy in artificial intelligence – MIT News

For workers who use machine-learning models to help them make decisions, knowing when to trust a model's predictions is not always an easy task, especially since these models are often so complex that their inner workings remain a mystery.

Users sometimes employ a technique, known as selective regression, in which the model estimates its confidence level for each prediction and will reject predictions when its confidence is too low. Then a human can examine those cases, gather additional information, and make a decision about each one manually.

But while selective regression has been shown to improve the overall performance of a model, researchers at MIT and the MIT-IBM Watson AI Lab have discovered that the technique can have the opposite effect for underrepresented groups of people in a dataset. As the model's confidence increases with selective regression, its chance of making the right prediction also increases, but this does not always happen for all subgroups.

For instance, a model suggesting loan approvals might make fewer errors on average, but it may actually make more wrong predictions for Black or female applicants. One reason this can occur is that the model's confidence measure is trained using overrepresented groups and may not be accurate for these underrepresented groups.

Once they had identified this problem, the MIT researchers developed two algorithms that can remedy the issue. Using real-world datasets, they show that the algorithms reduce performance disparities that had affected marginalized subgroups.

"Ultimately, this is about being more intelligent about which samples you hand off to a human to deal with. Rather than just minimizing some broad error rate for the model, we want to make sure the error rate across groups is taken into account in a smart way," says senior MIT author Greg Wornell, the Sumitomo Professor in Engineering in the Department of Electrical Engineering and Computer Science (EECS), who leads the Signals, Information, and Algorithms Laboratory in the Research Laboratory of Electronics (RLE) and is a member of the MIT-IBM Watson AI Lab.

Joining Wornell on the paper are co-lead authors Abhin Shah, an EECS graduate student, and Yuheng Bu, a postdoc in RLE; as well as Joshua Ka-Wing Lee SM '17, ScD '21, and Subhro Das, Rameswar Panda, and Prasanna Sattigeri, research staff members at the MIT-IBM Watson AI Lab. The paper will be presented this month at the International Conference on Machine Learning.

To predict or not to predict

Regression is a technique that estimates the relationship between a dependent variable and independent variables. In machine learning, regression analysis is commonly used for prediction tasks, such as predicting the price of a home given its features (number of bedrooms, square footage, etc.). With selective regression, the machine-learning model can make one of two choices for each input: it can make a prediction, or it can abstain from a prediction if it doesn't have enough confidence in its decision.

When the model abstains, it reduces the fraction of samples it is making predictions on, which is known as coverage. By only making predictions on inputs that it is highly confident about, the overall performance of the model should improve. But this can also amplify biases that exist in a dataset, which occur when the model does not have sufficient data from certain subgroups. This can lead to errors or bad predictions for underrepresented individuals.

The MIT researchers aimed to ensure that, as the overall error rate for the model improves with selective regression, the performance for every subgroup also improves. They call this monotonic selective risk.

"It was challenging to come up with the right notion of fairness for this particular problem. But by enforcing this criterion, monotonic selective risk, we can make sure the model performance is actually getting better across all subgroups when you reduce the coverage," says Shah.
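A rough way to see whether that property holds is to sweep the coverage level and track each subgroup's error on the retained cases. The sketch below is a hypothetical diagnostic written for this article, not the authors' code; `selective_risks` and all of its inputs are assumed names.

```python
import numpy as np

def selective_risks(y_true, y_pred, confidence, groups, coverages):
    """Mean squared error on the retained set, overall and per subgroup,
    as the model abstains on its least confident predictions."""
    order = np.argsort(-confidence)  # most confident first
    rows = []
    for c in coverages:
        kept = order[: max(1, int(np.ceil(c * len(y_true))))]
        err = (y_true[kept] - y_pred[kept]) ** 2
        row = {"coverage": c, "overall": err.mean()}
        for g in np.unique(groups):
            mask = groups[kept] == g
            # A subgroup can vanish entirely at low coverage, which is
            # itself a warning sign.
            row[g] = err[mask].mean() if mask.any() else float("nan")
        rows.append(row)
    return rows

# Monotonic selective risk holds if, reading each subgroup's column as
# coverage decreases, the risk never goes up.
```

If a subgroup's error climbs as coverage falls, selective regression is amplifying bias for that group even while the headline number improves.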

Focus on fairness

The team developed two neural network algorithms that impose this fairness criterion to solve the problem.

One algorithm guarantees that the features the model uses to make predictions contain all information about the sensitive attributes in the dataset, such as race and sex, that is relevant to the target variable of interest. Sensitive attributes are features that may not be used for decisions, often due to laws or organizational policies. The second algorithm employs a calibration technique to ensure the model makes the same prediction for an input, regardless of whether any sensitive attributes are added to that input.
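The sketch below illustrates only the property the second algorithm targets, not the algorithm itself: train one model without the sensitive attribute and one with it appended, then measure how far the predictions move. All data and names here are synthetic assumptions; in a model with the calibration property described above, the gap would be near zero.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Synthetic data in which the target genuinely depends on a sensitive
# attribute, so the two models below will disagree.
rng = np.random.default_rng(1)
n = 2000
X = rng.normal(size=(n, 3))            # non-sensitive features
s = rng.integers(0, 2, size=(n, 1))    # sensitive attribute (e.g., a protected class)
y = X @ np.array([1.0, -2.0, 0.5]) + 0.8 * s[:, 0] + rng.normal(scale=0.2, size=n)

base = Ridge().fit(X, y)                       # features only
augmented = Ridge().fit(np.hstack([X, s]), y)  # features plus sensitive attribute

# How much do predictions shift when the sensitive attribute is added?
gap = np.abs(base.predict(X) - augmented.predict(np.hstack([X, s])))
print(f"mean prediction shift: {gap.mean():.3f}")
```

A nonzero gap is the symptom the calibration technique is meant to remove: the model's output should not change just because a sensitive attribute was supplied.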

The researchers tested these algorithms by applying them to real-world datasets that could be used in high-stakes decision making. One, an insurance dataset, is used to predict total annual medical expenses charged to patients using demographic statistics; another, a crime dataset, is used to predict the number of violent crimes in communities using socioeconomic information. Both datasets contain sensitive attributes for individuals.

When they implemented their algorithms on top of a standard machine-learning method for selective regression, they were able to reduce disparities by achieving lower error rates for the minority subgroups in each dataset. Moreover, this was accomplished without significantly impacting the overall error rate.

"We see that if we don't impose certain constraints, in cases where the model is really confident, it could actually be making more errors, which could be very costly in some applications, like health care. So if we reverse the trend and make it more intuitive, we will catch a lot of these errors. A major goal of this work is to avoid errors going silently undetected," Sattigeri says.

The researchers plan to apply their solutions to other applications, such as predicting house prices, student GPAs, or loan interest rates, to see if the algorithms need to be calibrated for those tasks, says Shah. They also want to explore techniques that use less sensitive information during model training to avoid privacy issues.

And they hope to improve the confidence estimates in selective regression to prevent situations where the model's confidence is low but its prediction is correct. This could reduce the workload on humans and further streamline the decision-making process, Sattigeri says.

This research was funded, in part, by the MIT-IBM Watson AI Lab and its member companies Boston Scientific, Samsung, and Wells Fargo, and by the National Science Foundation.

Read more:
A technique to improve both fairness and accuracy in artificial intelligence - MIT News

The ADF could be doing much more with artificial intelligence | The Strategist

Artificial intelligence is a general-purpose technology that is steadily becoming pervasive across global society. AI is now beginning to interest the world's defence forces, but the military comes late to the game. Given this, defence forces globally are fundamentally uncertain about AI's place in warfighting. Accordingly, there's considerable experimentation in defence AI underway worldwide.

This process is being explored in a new series sponsored by the Defense AI Observatory at the Helmut Schmidt University/University of the Federal Armed Forces in Germany. Unlike other defence AI studies, the series does not focus solely on technology but looks more broadly across what the Australian Defence Force terms the "fundamental inputs to capability". The first study examines Australian defence AI, and another 17 country studies have already been commissioned.

The ADF conceives of AI as mainly being used in human-machine teams to improve efficiency, increase combat power and achieve decision superiority, while lowering the risk to personnel. For a middle power, Australia is following a fairly active AI development program with a well-defined innovation pathway and numerous experimentation projects underway.

There is also a reasonable level of force structure ambition. The latest major equipment acquisition plan, covering the next 10 to 20 years, sets out six defence AI-relevant projects: one navy, one army, three air force and one in the information and cyber domain. Even in this decade, the AI-related projects are quite substantial; they include teaming air vehicles (with an estimated cost of $9.1 billion), an integrated undersea surveillance system ($6.2 billion), a joint air battle management system ($2.3 billion) and a distributed ground station ($1.5 billion).

Associated with this investment is a high expectation that Australian AI companies will have considerable involvement in the projects. Indeed, the government recently added AI to its set of priorities for sovereign industrial capability. The Australian defence AI sector, though, consists mainly of small and medium-sized companies that individually lack the scale to undertake major equipment projects and would need to partner with large prime contractors to achieve the requisite industrial heft.

There are also wider national concerns about whether Australia will have a large enough AI workforce over the next decade to handle commercial demands, even without Defence drawing people away for its requirements. Both factors suggest Defence could end up buying its AI offshore and relying principally on long-term foreign support, as it does for many other major equipment projects.

An alternative might be funding collaborative AI developments with the US. A harbinger of this may be the Royal Australian Navy's new experimentation program involving a recently decommissioned patrol boat being fitted with Austal-developed autonomous vessel technology featuring AI. Austal is simultaneously involved in a much larger US Navy program fitting its system to one of the company's expeditionary fast transport ships, USNS Apalachicola, currently being built. In this case, Austal is an Australian company with a large US footprint and so can work collaboratively in both countries. The RAN, simply because of economies of scale, is probably more likely to adopt the US Navy variant rather than a uniquely Australian version.

The outlier to this acquisition strategy might be the Boeing Australia Ghost Bat program that could see AI-enabled "loyal wingman" uncrewed air vehicles in limited ADF service in 2024-25, before the US. The US Air Force is running several experimentation programs aiming to develop suitable technologies, some of which also involve the Boeing parent company. There's a high likelihood of cross-fertilisation between the Australian and US programs. This raises the tantalising possibility of a two-nation support system of a scale that would allow the Australian companies involved to grow to a size suitable for long-term sustainment of the relevant ADF AI capabilities. This possibility might be a one-off, however, as there seem to be no other significant Australian defence AI programs.

Australia collaborating with the US on AI, or buying US AI products, can ensure interoperability. But in seeking that objective there's always a tension between each Australian service being interoperable with its US counterpart and the services being interoperable with one another across the ADF. This tension is likely to remain as AI enters service, especially given its demands for task-related big data.

Interoperability and domestic industry support are traditionally important issues, but they may need to be counterbalanced by emerging geostrategic uncertainties and ADF capability considerations. Australia is worried about the possibility of conflict in the Indo-Pacific region given Chinese assertiveness coupled with the example of Russia's invasion of Ukraine. To offset the numerically large military forces of the more bellicose Indo-Pacific states, some advocate developing a higher-quality, technologically superior ADF that can help deter regional adventurism.

As a general-purpose technology, AI can potentially provide a boost across the whole ADF, not just one or two elements within it. But such a vision is not what is being pursued. Defence's current AI plans will most likely lead to evolutionary improvements, not revolutionary changes. AI is envisaged as being used to enhance, augment or replace existing capability; this approach means the future ADF will do things better, but it won't necessarily be able to do better things.

A revolution in Australian military affairs seems unlikely under current schemes. For that, defence AI would need to be reconceptualised as a disruptive technology rather than a sustaining innovation. Embracing disruptive innovation would be intellectually demanding and, in suggesting the adoption of unproven force structures, could involve taking strategic risks. These are reasonable concerns that would need careful management.

Against such worries, though, China looms large. The strategically intelligent choice for the ADF might be embracing disruptive AI.

Go here to read the rest:
The ADF could be doing much more with artificial intelligence | The Strategist