
Category Archives: Ai

C-store Retailers Can Personalize the Customer Experience With AI – CSNews Online

Posted: May 28, 2022 at 8:34 pm

CHICAGO - To some, artificial intelligence (AI) may sound futuristic, but AI is here today and the technology can help convenience store retailers with their marketing campaigns.

"Using AI saves you time, effort and money," said Ryan DiLello, content specialist for Paytronix Systems Inc., during a recentConvenience Store Newswebinar."It ensures you do not have one-size-fits-all campaigns. It allows you to learn more about your customers and meet their needs."

Specifically, AI can help c-store retailers learn more about their customers' behavior and value; segment customers in more exacting ways; make more compelling, personalized offers; maximize channel efficiency; and find ideal customers in the marketplace using AI-constructed profiles.

AI-driven marketing operates through four stages, stated DiLello. They are:

One thing AI can do is provide data regarding how likely customers are to visit a c-store, as well as how likely they are to open emails containing targeted offers. The data can, for example, show which days a customer visits a store. If a customer visits exclusively on weekdays, AI can generate targeted offers to encourage that customer to visit on the weekend.

AI can also help launch "Missed Visit Campaigns," which recognize individual lapses in guest behavior and identify guests "out of their rhythm." According to DiLello, results from the first seven days of a Missed Visit Campaign revealed guest visits increased by 42 percent and in-store spending rose by 19 percent.

Geofencing, in which AI detects when a customer is near a store and delivers targeted offers and promotions, is another way to draw in-store traffic.

"We have seen great results using geofencing," he said.

C-store retailers have also used K-Means Clustering. Through this method, AI looks at popular pairing items and makes recommendations based on trends in the data. Further, K-Means Clustering helps develop unique and personalized guest experiences intended to keep customers returning to the store.
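To make the idea concrete, here is a minimal sketch of K-Means customer segmentation using scikit-learn. The library choice, the features (visit frequency, basket size, weekday share) and the synthetic data are illustrative assumptions, not Paytronix's actual model or data pipeline.

```python
# Illustrative sketch only: K-Means segmentation of loyalty customers with
# scikit-learn. Feature names and data are hypothetical.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical per-customer features: visits per month, average basket size ($),
# and share of visits that fall on weekdays.
X = np.column_stack([
    rng.poisson(8, 500),          # visits_per_month
    rng.normal(12.0, 4.0, 500),   # avg_basket_dollars
    rng.uniform(0.0, 1.0, 500),   # weekday_visit_share
])

# Scale features so no single unit dominates the distance metric.
X_scaled = StandardScaler().fit_transform(X)

# Group customers into a handful of segments; each segment can then be
# targeted with its own offer (e.g., weekend coupons for weekday-only guests).
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X_scaled)
print(kmeans.labels_[:10])        # segment assignment for the first 10 customers
print(kmeans.cluster_centers_)    # segment profiles in scaled feature space
```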

"AI recommends coupon offers," DiLello said. "It is a low-risk way to win back customers."

AI can help c-store retailers beyond just marketing, making the implementation well worth the investment, DiLello stressed. The technology takes it one step further by helping to determine a key metric: customer lifetime value (CLV). AI calculates how long a person has been in a loyalty program, what the average visit cadence looks like, when the most recent visit was, how much customers spend per visit and how long the consumer is likely to stay active. These data sets are important to identify top customers, segment more effectively, optimize acquisition and realize lift over a customer's lifetime journey, DiLello pointed out.

The objectives of CLV analysis are to identify and reward the most valuable customers, and to find lower-value customers and boost their CLV.

"People often ask what a good CLV to customer acquisition cost (CAC) ratio is. We often say a three-to-one ratio is good," DiLello explained.

Retailers can lower CAC by retaining customers longer, reducing media and advertising expenses, and reducing third-party marketplace fees, he added.

In the future, DiLello expects AI to provide c-store retailers with practical uses in operations. For example, robotic servers, kiosks with facial recognition, food waste reduction management, inventory management and smart routing for delivery are ways AI can benefit c-store operators down the road.

"Inventory management is big for c-stores,"he said. "AI will allow retailers to cruise through an unsteady supply chain to order items months in advance."

An on-demand replay of this webinar, "Get Smart: How You Can Personalize the Customer Experience With AI," is available here.


Danny’s workmate is called GPT-3. You’ve probably read its work without realising it’s an AI – ABC News

Posted: at 8:34 pm

Two years ago this weekend, GPT-3 was introduced to the world.

You may not have heard of GPT-3, but there's a good chance you've read its work, used a website that runs its code, or even conversed with it through a chatbot or a character in a game.

GPT-3 is an AI model, a type of artificial intelligence, and its applications have quietly trickled into our everyday lives over the past couple of years.

In recent months, that trickle has picked up force: more and more applications are using AI like GPT-3, and these AI programs are producing greater amounts of data, from words, to images, to code.

A lot of the time, this happens in the background; we don't see what the AI has done, or we can't tell if it's any good.

But there are some things that are easy for us to judge: writing is one of those.

From student essays to content marketing, AI writing tools are doing what only a few years ago seemed impossible.

In doing so, the technology is changing how we think about what has been considered a uniquely human activity.

And we have no idea how the AI models are doing it.

Danny Mahoney's workmate never leaves, sleeps, or takes a break.

Day after day, the AI writing assistant churns out blog posts, reviews, company descriptions and the like for clients of Andro Media, Mr Mahoney's digital marketing company in Melbourne.

"Writers are expensive. And there's a limit to how much quality content a human can produce," Mr Mahoney says.

"You can get the same quality of content using AI tools. You just get it faster."

How much faster? About three times, he estimates.

He still has to check and edit the AI-generated text, but it's less work and he's cut his rates by half.

"Every SEO [Search Engine Optimisation] agency that I've spoken with uses AI to some extent."

In Perth, Sebastian Marks no longer bothers with content agencies at all.

About a year ago, he saw an ad for an AI writing assistant and signed up.

The AI tool now writes pretty much everything for his company, Moto Dynamics, which sells motorcycles and organises racing events.

Its output includes employee bios, marketing copy, social media posts, and business proposals.

"Once we'd started feeding data into it and teaching it how to work for us, it became more and more user-friendly," he says.

"Now weuse it essentially as an admin."

The particular AI writing tool Mr Mahoney uses is called ContentBot, which like many of its competitors was launched early last year.

"It was very exciting," says Nick Duncan, the co-founder of ContentBot, speaking from Johannesburg.

"There was a lot of word of word of mouth with this technology. It just sort of exploded."

The trigger for this explosion was OpenAI's November 2021 decision to make its GPT-3 AI universally available for developers.

It meant anyone could pay to access the AI tool, which had been introduced in May 2020 for a limited number of clients.

Dozens of AI writing tools launched in early 2021.

LongShot AI is only a year old, but claims to have 12,000 users around the world, including in Australia.

"And there are other products that would have ten-fold the number of clients we have,"says its co-founder,Ankur Pandey, speaking from Mumbai.

"Revolutionary changes in AI happened in the fall of 2020.This whole field has completely skyrocketed."

Companies like ContentBot and LongShot pay OpenAI for access to GPT-3: the rate of the most popular model (Davinci) is about $US0.06 per 750 words.

In March 2021, GPT-3 was generating an average of 4.5 billion words per day.

We don't know the current figure, but it would be much higher given the AI is being more widely used.

"It's been a game changer," Mr Duncan says.

There are dozens of AI writing tools that advertise to students.

Among them is Article Forge, a GPT-3 powered tool that claims its essays can pass the plagiarism checkers used by schools and universities.

Demand for the product has increased five-fold in two years, chief executive officer Alex Cardinell says.

"It's the demand for cheaper content with shorter turnaround times that requires less overall effort to produce.

"People do not want AI, they want what AI can do for their business."

Lucinda McKnight, a curriculum expert at Deakin University, confirms that students are early adopters of AI writing tools.

"I can tell you without doubt that kids are very widely using these things, especially spinners on the internet."

Spinners are automated tools that rephrase and rewrite content so it won't be flagged for plagiarism.

"It can produce in a matter of seconds multiple different copies of the same thing, but worded differently."

These developments are shifting ideas around student authorship. If it becomes impossible to distinguish AI writing from human, what's the point in trying to detect plagiarism?

"We should be getting studentsto acknowledge how they've used AI as another kind of source for their writing," Dr McKnight says.

"That is the way to move forwards, rather than to punish students for using them."

When GPT-3 launched two years ago, word spread of its writing proficiency, but access was limited.


Recently, OpenAI has thrown open the doors to anyone with a guest login, which takes a few minutes to acquire.

Given the prompt "Write a news story about AI", the AI tool burped out three paragraphs. Here's the first:

"The world is on the brink of a new era of intelligence. For the first time in history, artificial intelligence (AI) is about to surpass human intelligence. This momentous event is sure to change the course of history, and it is all thanks to the tireless work of AI researchers."

In general, GPT-3 is remarkably good at stringing sentences together, though it plays fast and loose with the facts.

Asked to write about the 2022 Australian election, it claimed the vote would be held on July 2.

But it still managed to sound like it knew what it was talking about:

"Whoever wins the election, it is sure to be a close and hard-fought contest. With the country facing challenges on many fronts, the next government will have its work cut out for it."

Mr Duncan says you "can't just let the AI write whatever it wants to write".

"It's terrible atfact-checking. It actually makes up facts."

He uses the tool as a creative prompt: the slog of writing from scratch is replaced by editing and fact-checking.

"It helps you overcome the blank-page problem."

Mr Mahoney agrees.

"If you produce content purely by an AI, it's very obvious that it's written by one.

"It's either too wordy or just genuinely doesn't make sense."


But with proper guidance, GPT-3 (and other AI writing tools) can be good enough for standard professional writing tasks like work emails or content marketing, where speed is more important than style.

"People who create content for marketing tend to use it every day,"Longshot'sAnkur Pandey says.

"Most of the focus of this industry is content writers,content marketers and copywriters, because this is mission critical for them."

Then there's coding: In November 2021, a third of the code on GitHub, a hosting platform for code, was being written with Copilot, a GPT-3 powered coding tool that had been launched five months earlier.

US technological research and consulting firm Gartner predicts that by 2025, generative AI (like GPT-3) will account for 10 per cent of all data produced, up from less than 1 per cent today.

That data includes everything from website code and chatbot platforms to image generation and marketing copy.

"At the moment, content creation is mostly using generative AI to assist as part of the pipeline," says Anthony Mullen, research director for AI at Gartner.

"I think that will persist for a while, but it does shift the emphasis more towards ideas, rather than craft.

"Whether it is producing fully completed work or automating tasks in the creative process,generative AI will continue to reshape the creative industries.

"This technology is a massive disruptor."

Until recently, decent text generation AI seemed a long way away.

Progress in natural language processing (NLP),or the ability of a computer program to understand human language, appeared to be getting bogged down in the complexity of the task.

Then, in 2017, a series of rapid advancements culminated in a new kind of AI model.

In traditional machine learning, a programmer teaches a computer to, for instance, recognise if an image does or does not contain a dog.

In deep learning, the computer is provided with a set of training data (e.g. images tagged "dog" or "not dog") that it uses to create a feature set for dogs.

With this set, it creates a model that can then predict whether untagged images do or do not contain a dog.

These deep learning models are the technology behind, for instance, the computer vision that's used in driverless cars.

While working on ways to improve Google Translate, researchers at the company stumbled upon a deep learning model that proved to be good at predicting what word should come next in a sentence.

Called Transformer, it's like a supercharged version of text messaging auto-complete.

"Transformer isa very, very good statistical guesser," says Alan Thompson, an independent AI researcher and consultant.

"It wants to know what is coming next in your sentence or phrase or piece of language, or in some cases, piece of music or image or whatever else you've fed to the Transformer."

At the same time, in parallel to Google, an Australian tech entrepreneur and data scientist, Jeremy Howard, was finding new ways to train deep learning models on large datasets.

Professor Howard, who would go on to become an honorary professor at the University of Queensland, had moved to San Francisco six years earlier, from Melbourne.

He proposed feeding Transformer a big chunk of text data and seeing what happened.

"So in 2018, the OpenAI team actually tookProfessor Jeremy Howard's advice and fed the original GPTwith a whole bunch of book data into this Transformer model,"Dr Thompson says.

"And they watched as it was able to complete sentences seemingly out of nowhere."

Transformer is the basis for GPT (which stands for Generative Pre-trained Transformer), as well as other current language models.

Professor Howard's contribution is widely recognised in Silicon Valley, but not so much in Australia, to which he recently returned.

"In Australia, people will ask what do you do and I'll be like, 'I'm aprofessor in AI'. And they say, 'Oh well, how about the footy?'" he says.

"It's very, very different."

So how do these models actually do what they do? The short answer is that, beyond a certain point, we don't know.

AI models like GPT-3 are known as "black boxes", meaning it's impossible to see their internal process of computation.

The AI has trained itself to do a task, but how it actually performs that task is largely a mystery.

"We've given it this training data and we've let it kind of macerate that data for months, which is the equivalent of many human years, or decades even," Dr Thompson says.

"And it can do things that it shouldn't be able to do. It taught itselfcoding and programming. It can write new programmes that haven't existed."

As you might guess, this inability to understand exactly how the technology works is a problem for driverless cars, which rely on AI to make life-and-death decisions.


AI Attempts Converting Python Code To C++ – Hackaday

Posted: at 8:34 pm

[Alexander] created codex_py2cpp as a way of experimenting with Codex, an AI intended to translate natural language into code. [Alexander] had slightly different ideas, however, and used the project to play with the idea of automagically converting Python into C++. It's not really intended to create robust code conversions, but as far as experiments go, it's pretty neat.

The program works by reading a Python script as an input file, setting up a few parameters, then making a request to OpenAI's Codex API for the conversion. It then attempts to compile the result. If compilation is successful, then hopefully the resulting executable actually works the same way the input file did. If not? Well, learning is fun, too. If you give it a shot, maybe start simple and don't throw it too many curveballs.
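As a rough sketch of that request-and-compile loop (not [Alexander]'s actual code), the flow might look like the following, assuming the legacy OpenAI Python client and a Codex completion model; the model name, prompt format, and file names are all assumptions.

```python
# Minimal sketch of the approach described above: send Python source to a
# Codex-style completion endpoint, write out the C++ it returns, and try to
# compile it. Not codex_py2cpp itself; model name and prompt are assumptions.
import subprocess
import openai  # legacy OpenAI Python client

openai.api_key = "YOUR_API_KEY"  # placeholder

python_source = open("input.py").read()
prompt = f"# Translate this Python program to C++\n# Python:\n{python_source}\n# C++:\n"

response = openai.Completion.create(
    model="code-davinci-002",   # assumed Codex model name
    prompt=prompt,
    max_tokens=1024,
    temperature=0,
)
cpp_source = response["choices"][0]["text"]

with open("output.cpp", "w") as f:
    f.write(cpp_source)

# Attempt to compile the result; success here is no guarantee the program
# behaves like the original Python input.
result = subprocess.run(["g++", "output.cpp", "-o", "output"], capture_output=True)
print("compiled OK" if result.returncode == 0 else result.stderr.decode())
```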

Codex is an interesting idea, and this isn't the first experiment we've seen that plays with the concept of using machine learning in this way. We've seen a project that generates Linux commands based on a verbal description, and our own [Maya Posch] took a close look at GitHub Copilot, a project high on promise and concept, but at least at the time considerably less so when it came to actual practicality or usefulness.


Ireland gets its first AI ambassador. Will other countries follow suit? – Analytics India Magazine

Posted: at 8:34 pm

Ireland has appointed Dr Patricia Scanlon as its first AI Ambassador to facilitate the Government's AI adoption strategy launched last year. Patricia is the founder and former executive chairperson of the speech recognition tech firm called SoapBox Labs. Ireland's AI Here for Good strategy focuses on how technology can be utilised in human-centric and ethical ways to improve the lives of its citizens.

Dr Scanlon, a member of the Enterprise Digital Advisory Forum (EDAF), will work closely with the Department of Enterprise, Trade and Employment.

She will work on demystifying AI and promoting the positive impacts it can have in areas such as transport, agriculture, health and education.

The Department of Trade, Enterprise and Employment announced the national AI strategy, AI Here for Good, on 8th July 2021.

The strategy outlined Ireland's plan to become a global leader in artificial intelligence to benefit its economy and society, with a people-centred, ethical approach to AI development, adoption and use. Further, Ireland will join the Global Partnership on AI and continue to take part in EU discussions and define a framework for trustworthy AI.

The government also plans to identify areas where AI researchers from Ireland could collaborate with other countries. As part of a broader strategy, higher education institutions are encouraged to design AI-related courses and employers are urged to facilitate workplace-focused AI upskilling and reskilling.

Additionally, a National Youth Assembly on Artificial Intelligence is set to take place in September 2022 to address the concerns of youth around AI and to promote STEM careers.

The AI Ambassador appointment came weeks after the United States Department of Defense appointed Dr Craig Martell as the Chief Digital and Artificial Intelligence Officer (CDAO), a newly created position. The role was created to monitor data and AI initiatives under one official at the highest levels of the Pentagon. The CDAO reports directly to the deputy secretary of defence.

Countries worldwide are engaged in an AI arms race, sometimes literally. For example, in the ongoing Russia-Ukraine war, the former had used AI-based drones to unleash terror on Ukrainian cities. Ukraine, on its part, has also taken help from the US firm Clearview AI to uncover the Russian assailants and combat misinformation.

In Xinjiang and Tibet, China uses AI-powered technology to combine multiple streams of information, including individual DNA samples, online chat history, social media posts, medical records, and bank account information, to track citizens.

India is behind the US, China, the UK, France, Japan and Germany in the top AI adopters list. Canada, South Korea and Italy round out the top 10.

In October 2016, the Obama administration released a report titled Preparing for the Future of Artificial Intelligence, addressing concerns around AI like its application for the public good, economic impact, regulation, fairness and global security. In addition, the US government also released a companion document called the National Artificial Intelligence Research and Development Strategic Plan, which formed the benchmark for federally funded research and development in AI. These documents were the first of a series of policy documents released by the US regarding the role of AI.

The United Kingdom announced its national development strategy in 2020 and issued a report to accelerate the application of AI by government agencies. In 2018, the Department for Business, Energy, and Industrial Strategy released the Policy Paper AI Sector Deal. The Japanese government released its paper on Artificial Intelligence Technology Strategy in 2017. The European Union launched SPARC, the world's largest civilian robotics R&D program, in 2014. Developing countries such as Mexico and Malaysia are in the process of creating their national AI strategies.

In recent years, the Indian government has launched several initiatives at state and national levels.

In 2018, the Indian government published two AI roadmaps: the Report of Task Force on Artificial Intelligence, by the AI Task Force constituted by the Ministry of Commerce and Industry, and the National Strategy for Artificial Intelligence, by NITI Aayog.

The National Roadmap for Artificial Intelligence by NITI Aayog proposed creating a National AI marketplace. In particular, the data marketplace would be based on blockchain technology and offer features like traceability, access controls, compliance with local and international regulations, and a robust price discovery mechanism for data. In 2022, the government increased the budget expenditure from INR 6,388 crores to INR 10,676.18 crores for the Digital India programme to boost AI, machine learning, IoT, big data, cybersecurity and robotics. India's flagship digital initiative plans to make the internet more accessible, promoting e-governance, e-banking, e-education and e-health.


Microsoft Build Showcases 4-Processor PCs and Useful AI Apps – IT Business Edge

Posted: at 8:34 pm

As vendor events go, Microsoft Build is one of the more interesting because it focuses on the people who create things.

While Build is mostly about software, there's usually a considerable amount of information on hardware that can be, at times, revolutionary. Major breakthroughs for both software and hardware don't typically happen at the same show, but this year we had new ARM-based, four-processor PCs and AI applications that address the most pervasive problem in computing, one that has gone largely unaddressed since its creation: enabling users to interact easily and naturally with PCs.

Also read: Top Artificial Intelligence (AI) Software 2022

The hardware announcement was Project Volterra, which boasts four processors, two more than the typical CPU and GPU we've known for years. The third processor is called a Neural Processing Unit (NPU); it is focused on AI workloads and handles them faster while using far less energy than CPUs or GPUs, according to Microsoft.

The fourth processor I'm calling an ACU, or Azure Compute Unit, and it is in the Azure Cloud. This is arguably the first hybrid PC, sharing load between the cloud and the device, and it is stackable if more localized performance is needed. Volterra may look like a well-provisioned small-form-factor PC. However, while it's targeted at creating native Windows ARM code, it is predictive of the ARM PCs we'll see on the market once this code is available.

As fantastic as this new hardware is, Microsoft is a software company with a deep history in development tools that goes all the way back to its roots. A huge problem computing has had since its inception is that people have to learn how to interact with the machines, which makes no sense in an ideal world.

Why would you build a tool that people have to work with and then create programming languages that require massive amounts of training? Why not put in the extra work and do it so we can communicate with them like we communicate with each other? Why not create a system to which we can explain what we want and have the computer create it?

Granted a lot of us have trouble explaining what we want, but at least getting training in doing that better would have broad positive implications for our ability to communicate overall, not just communicate with computers. In short, having computers respond to natural language requests would force us to train people how to generally communicate better, leading to fewer conflicts, fewer mistakes, and far deeper and more understanding relationships, not just with computers, but with each other. Something I think you can agree we need now.

Also read: Microsoft Embraces the Significance of Developers

The featured offering is a release coming from GitHub called Co-Pilot, which collaboratively builds code using an AI. It will anticipate what needs to be done and suggest it, and it will provide written code that corresponds to the coder's request. Not sure how to write a command? Just ask how one would be done and Co-Pilot will provide the answer.

Microsoft provided examples of several targeted AI-driven Codex prototypes as well. One seemed to go farther by creating more complete code, while another, used for web research, didn't just identify the source but would pull out the relevant information and summarize it. I expect this capability will find its way underneath digital assistants, making them far more capable of providing complete answers in the future.

A demonstration that really caught my attention was on OpenAI's DALL-E (pronounced Dolly). This is a prototype program that will create an image based on your description. One use: Young schoolchildren who use their imaginations to describe a picture of an invention they had thought up, which led to shoes made of recycled trash, a robotic space trash collector, and even a house kind of like the Jetsons' apartment that could be raised or lowered according to the weather.

Right now, due to current events, I'm a bit more focused on children this week, but I think a tool like this could be an amazing way to visualize ideas and convey ideas better. They say a picture is worth a thousand words; this AI could create that picture with just a few words. While cartoonish initially (this can be addressed with several upscaling tools from companies like AMD and NVIDIA), they nevertheless excited and enthralled the kids. It was also, I admit, magical for me.

Microsoft Build showed me the best future of AI. Applied not for weapons or to convince me to buy something I don't want (extended car insurance anyone?), but to remove the drudgery from coding, enabling more people with less training to create high-quality code, translate imagination into images and make digital assistants much more useful.

I've also seen the near-term future of PCs, with quad processors, access to the near unlimited processing power of the web (including Microsoft Azure Supercomputers when needed), and an embedded AI that could use the technology above to help that computer learn, for once, how to communicate with us and not the other way around.

This year's Microsoft Build was, in a word, extraordinary. The things they talked about will have a significant, and largely positive, impact on our future.

Read next: Using Responsible AI to Push Digital Transformation


Feds Facing Uphill Road to Deploy AI Tech in Cyber Fight – MeriTalk

Posted: at 8:34 pm

While the use of AI technologies is proving effective as a tool to help stop cyber criminals, the Federal government continues to face an uphill road in deploying the technology, a U.S. Secret Service official said this week.

Roy Dotson Jr., Acting Special Agent in Charge, USSS National Pandemic Fraud Recovery Coordinator, at the U.S. Secret Service, said at a May 26 ATARC event entitled "Impact of AI and Machine Learning on Financial Crime Investigation" that the Federal government still lacks some of the professional resources it needs to further implement AI tech to deter financial crimes.

"We're extremely limited; it's very difficult for us to hire experienced data scientists, forensic accountants, those [people] in the fields that would be very beneficial to us," he said.

Dotson also talked about strategic options for AI deployments that would better help deter cyber criminals.

"I'm a big proponent of being proactive instead of reactive," he said. "That's what I would love to see, so that we can be on the same playing field as the more complex cybercriminal; that would give us a leg up," Dotson said. "There is also different AI that I'd love to see be used as well," he added.

While the Federal government is still facing AI implementation issues, Dotson explained how the technology has already been helping to stop cyber crimes.

"It gives us a better chance of working cases faster, identifying suspects quicker, and that helps us to possibly apprehend people that we might not have a chance to because of the time delay of other traditional means that take longer going through data," he said.


Auterion Delivers Supercomputer Performance Onboard Drones and Mobile Robots With AI Node – sUAS News

Posted: at 8:34 pm

Auterion, the company building an open and software-defined future for enterprise drone fleets, today announced the availability of AI Node for pre-order, an onboard computer for drones and other mobile robots that adds supercomputer performance to Skynode, right at the edge. AI Node easily integrates with Auterion's open ecosystem, resulting in a greater choice of powerful solutions for end users.

AI Node is equipped with the NVIDIA Jetson Xavier NX, the world's smallest AI supercomputer for embedded and edge systems, which enables the direct processing of high bandwidth sensor data for better decision-making during operations. Compute-heavy AI and ML algorithms for object recognition, tracking and counting can be used, on mission, in advanced applications for public safety, security, and wildlife conservation, and across industry use cases.

"As enterprises leverage more powerful cameras and sensors on drones, the huge amount of data being created will overload any current data link, including 5G," said Markus Achtelik, vice president of engineering at Auterion. "It's much more efficient to process raw data onboard via supercomputer, so that the operator or even the software itself can engage in real-time decision-making. AI Node delivers the horsepower to run modern neural networks in parallel and distill data from multiple high-resolution sensors, which translates into faster innovation for enterprises and other organizations."

With AI Node, drone manufacturers, like Watts Innovations, can now easily build systems capable of running high performance AI algorithms onboard.

"Some of our customers have very specific requirements necessitating a considerable amount of onboard computing," said Bobby Watts, CEO and principal engineer at Watts Innovations. "For this, we turned to Auterion and AI Node, which allows us to run GPU-intensive software onboard for applications such as vision-based precision landing, real-time mission navigation and other vision-based capabilities. Because AI Node is a part of Auterion's tight ecosystem, the integration and implementation is as clean as could be."

Other benefits of AI Node include:

Software developers, like Spleenlab, are enabling new and more advanced solutions for end customers. "With AI Node, drone manufacturers and integrators can deploy Spleenlab's compute-heavy VISIONAIRY software and advanced AI algorithms directly onboard drones," said Stefan Milz, CEO of Spleenlab. "This allows them to increase the safety of systems by executing tasks like ground-risk estimations. Working with Auterion's AI Node is a win-win for us as providers and users of their software. We especially appreciate how easy it is to connect and collaborate with other manufacturers within their ecosystem."

Get in touch for a demo and early access to AI Node for Skynode.

About Auterion

Auterion is building the world's leading autonomous mobility platform for enterprise and government users to better capture data, carry out high-risk work remotely, and deliver goods with drones. Auterion's open-source-based platform was nominated by the U.S. government as the standard for its future drone program. With 70+ employees across offices in California, Switzerland, and Germany, Auterion's global customer base includes GE Aviation, Quantum-Systems, Freefly Systems, Avy, Watts Innovations, and the U.S. government.

Learn more about Auterion at https://auterion.com/.


Scotiabank Is Leading The Way With Advanced Analytics And AI – Forbes

Posted: at 8:34 pm

This is a five-part blog series from an interview that I recently had with Grace Lee, Chief Data and Analytics Officer and Dr. Yannick Lallement, Vice President, AI & ML Solutions at Scotiabank.

Scotiabank is a Canadian multinational banking and financial services company headquartered in Toronto, Ontario. One of Canada's Big Five banks, it is the third largest Canadian bank by deposits and market capitalization. With over 90,000 employees globally and assets of approximately $1.3 trillion, Scotiabank has invested heavily in AI, Analytics and Data and aligned an integrated function that is well supported by all business lines. Although its journey has zigzagged in impact along the way, the organization now has a strong foothold in bringing consistent value and impact to the business. We can all learn a great deal from these words of wisdom in what it takes to advance AI successfully in a large enterprise.

This five-part blog series answers these five questions:

Blog One: How is the advanced analytics function structured and what have been some of the most significant operational challenges in your journey?

Blog Two: What does it take to set up an AI/ML Solutioning Competency Center?

Blog Three: How are some of the operational challenges like Digital Literacy impacting your journey?

Blog Four: What are some of the operational lessons learned?

Blog Five: What does the future hold for Scotiabanks Advanced Analytics and AI function?

Advanced Analytics and AI at Scotiabank

How is your organization structured in terms of analytics, data and AI?

If you've been following our recent history, you would know that we've had a lot of fits and starts. We've had some aborted attempts to bring analytics and data to the Bank in a meaningful way. And through this journey, we have learned from our mistakes to enable us to move from siloed analytics, data, and AI professionals into a unified centre of excellence where we have integrated teams across the various business lines and functions. Prior, we had data in a primarily governance function in our risk management function where they were primarily focused on data quality but did not do much data enablement or delivery.

We currently have over 500 analytics, data, and AI professionals, and about half are skilled in AI. We have quite a diverse team in terms of skills, ranging from business analysts, user-centric designers, data scientists, data engineers, NLP specialists, ModelOps engineers, as well as resources skilled in data and AI ethics. Our people are primarily in North America (75%) and the balance of our talent is located in different global regions, in Mexico, South America (Peru, Chile, Colombia), the Caribbean, etc.

We are proud that our team consists of people that can ensure that our AI modelling and ML solutions are designed and deployed effectively from inception to consumption (Verbatim: Grace Lee).

What were some of the most significant lessons learned in your organizational restructuring journey?

Simply having AI, analytics, and data as capabilities does not mean that we are driving value, and if we don't drive value, we don't have a place in the Bank. So, one of the things we said we must do differently is, rather than put the function in technology, in operations, or in marketing where these teams often live, we will have data and analytics report directly to the business lines. We had to ensure that the value was from the business users using the solutions and driving tangible value (Verbatim: Grace Lee).

What were some of the technical lessons learned?

We learned that by bringing data and analytics tightly together, aligned with technology, and by having priorities and shared goals set by the business, it's less about the sophistication of the model and it's more about the meaningfulness of the outcome.

We have learned that we must work together closely in this ecosystem that we've built. This allows us to activate the virtuous cycle between data, analytics, and technology because technology is necessary to make data; data is necessary to make models; and models must be reintegrated into technology in order to get in front of a customer and employee by being embedded into the operating process. If we don't ensure process integration, we are not working in harmony.

For example, if we built an AI model where data pipelines are built one-off and not sustainable, when something changes in the technology, the models will stop properly functioning and supporting the business; this scenario is antithetical to the way that we think about delivering value. When we talk about bringing data and analytics together, it's not just data governance, it's data delivery. Our concept of a reusable authoritative data set underpinning models to ensure operational sustainability is factored in from the onset and is core to our strategy.

This allows us to provide an abstraction layer that allows the end-user data to remain consistent and persist - so if something changes in the systems upstream, we are still able to deliver that same high quality data to all of our models. This means our reports and our processes are, in a way, relatively insulated from technology change. In other words, as you know in AI, often 80% of the problem is in the data sourcing; with well-managed and accessible data, we expect it to be closer to 20%. (Verbatim: Dr. Yannick Lallement)

Note: See Blog Two: What does it take to set up an AI/ML Solutioning Competency Center?


Explainable AI – Times of India

Posted: at 8:34 pm

AI is transforming engineering in nearly every industry and application area. With that, comes requirements for highly accurate AI models. Indeed, AI models can often be more accurate as they replace traditional methods, yet this can sometimes come at a price: how is this complex AI model making decisions, and how can we, as engineers, verify the results are working as expected?

Enter explainable AI: a set of tools and techniques that help us to understand model decisions and uncover problems with black-box models, like bias or susceptibility to adversarial attacks. Explainability can help those working with AI to understand how machine learning models arrive at predictions, which can be as simple as understanding which features drive model decisions but more difficult when trying to explain complex models.

Evolution of AI models

Why the push for explainable AI? Models weren't always this complex. In fact, let's start with a simple example of a thermostat in winter. The rule-based model is as follows:

Is the thermostat working as expected? The variables are current room temperature and whether the heater is working, so it is very easy to verify based on the temperature in the room.
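As an illustration, since the exact rule isn't spelled out above, such a rule-based thermostat model might look like the following sketch; the 20-degree set-point is an arbitrary choice, not a value from the article.

```python
# A minimal sketch of a rule-based thermostat model of the kind described
# above. The 20 degree set-point is an illustrative assumption.
def heater_should_run(room_temp_c: float, heater_working: bool, set_point_c: float = 20.0) -> bool:
    """Turn the heater on only when it works and the room is below the set-point."""
    return heater_working and room_temp_c < set_point_c

# Because the rule is explicit, verifying it is trivial: compare the output
# against the room temperature and the heater status.
assert heater_should_run(17.5, True) is True
assert heater_should_run(22.0, True) is False
assert heater_should_run(17.5, False) is False
```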

Certain models, such as temperature control, are inherently explainable due to either the simplicity of the problem, or an inherent, common sense understanding of the physical relationships. In general, for applications where black-box models aren't acceptable, using simple models that are inherently explainable may work and be accepted as valid if they are sufficiently accurate.

However, moving to more advanced models has advantages:

Figure 1: Evolution of AI models. A simple model may be more transparent, while a more sophisticated model can improve performance.

Why Explainability?

AI models are often referred to as black-boxes, with no visibility into what the model learned during training, or how to determine whether the model will work as expected in unknown conditions. The focus on explainable models aims to ask questions about the model to uncover any unknowns and explain their predictions, decisions, and actions.

Complexity vs. Explainability

For all the positives about moving to more complex models, the ability to understand what is happening inside the model becomes increasingly challenging. Therefore, engineers need to arrive at new approaches to make sure they can maintain confidence in the models as predictive power increases.

Figure 2: The tradeoff between explainability and predictive power. In general, more powerful models tend to be less explainable, and engineers will need new approaches to explainability to make sure they can maintain confidence in the models as predictive power increases.

Using explainable models can provide the most insight without adding extra steps to the process. For example, using decision trees or linear weights can provide exact evidence as to why the model chose a particular result.
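For instance, here is a brief sketch of both ideas using scikit-learn; the library and the bundled breast cancer dataset are illustrative choices, not named in the article. A shallow decision tree prints its splits as if-then rules, and a linear model's coefficients show exactly how each (scaled) feature pushes the prediction.

```python
# Sketch of inherently explainable models, as discussed above.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

X, y = load_breast_cancer(return_X_y=True)
feature_names = list(load_breast_cancer().feature_names)

# A shallow decision tree: its splits can be printed and read as if-then rules.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=feature_names))

# A linear model: each coefficient is exact evidence of how a scaled feature
# pushes the prediction up or down.
linear = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)).fit(X, y)
coefs = linear.named_steps["logisticregression"].coef_[0]
for name, weight in sorted(zip(feature_names, coefs), key=lambda p: abs(p[1]), reverse=True)[:5]:
    print(f"{name}: {weight:+.2f}")
```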

Engineers who require more insight into their data and models are driving explainability research for:

Current Explainability Methods

Explainable methods fall into two categories:

Figure 3: The difference between global and local methods. Local methods focus on a single prediction, while global methods focus on multiple predictions.

Understanding feature influence

Global methods include feature ranking, which sorts features by their impact on model predictions, and partial dependence plots, which home in on one specific feature and indicate its impact on model predictions across the whole range of its values.
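A short sketch of both global methods follows, again assuming scikit-learn and a synthetic dataset purely for illustration; permutation importance is used here as one common way to produce a feature ranking.

```python
# Sketch of the two global methods mentioned above: feature ranking and
# partial dependence. Library and data are illustrative assumptions.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance, partial_dependence

X, y = make_regression(n_samples=500, n_features=6, n_informative=3, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Feature ranking: shuffle each feature and measure how much the score drops.
ranking = permutation_importance(model, X, y, n_repeats=5, random_state=0)
order = ranking.importances_mean.argsort()[::-1]
for idx in order:
    print(f"feature {idx}: importance {ranking.importances_mean[idx]:.3f}")

# Partial dependence: the model's average prediction across the value range
# of the single most important feature.
pd_result = partial_dependence(model, X, features=[int(order[0])])
print(pd_result["average"][0][:5])  # predicted response at the first few grid points
```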

The most popular local methods are:

Visualizations

When building models for image processing or computer vision applications, visualizations are one of the best ways to assess model explainability.

Model visualizations: Local methods like Grad-CAM and occlusion sensitivity can identify locations in images and text that most strongly influenced the prediction of the model.

Figure 4: Visualizations that provide insight into the incorrect prediction of the network.
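Occlusion sensitivity in particular is simple enough to sketch directly: slide a grey patch across the image and measure how much the predicted probability of the target class drops. In the sketch below, `predict_proba` is a placeholder for any image classifier that returns class probabilities, and the patch size and stride are arbitrary choices.

```python
# Sketch of occlusion sensitivity, as described above. `predict_proba` is a
# stand-in for any image classifier; patch size and stride are assumptions.
import numpy as np

def occlusion_map(image, predict_proba, target_class, patch=16, stride=8):
    h, w = image.shape[:2]
    baseline = predict_proba(image)[target_class]
    heatmap = np.zeros(((h - patch) // stride + 1, (w - patch) // stride + 1))
    for i, top in enumerate(range(0, h - patch + 1, stride)):
        for j, left in enumerate(range(0, w - patch + 1, stride)):
            occluded = image.copy()
            occluded[top:top + patch, left:left + patch] = 0.5  # grey patch
            # Large drops mean this region strongly influenced the prediction.
            heatmap[i, j] = baseline - predict_proba(occluded)[target_class]
    return heatmap
```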

Feature comparisons and groupings: The global method T-SNE is one example of using feature groupings to understand relationships between categories. T-SNE does a good job of showing high-dimensional data in a simple two-dimensional plot.
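A minimal sketch of such a T-SNE projection with scikit-learn follows, using the bundled digits dataset as a stand-in for real model features (an illustrative choice).

```python
# Sketch of the T-SNE grouping described above: project high-dimensional
# features to two dimensions and see whether categories form visible clusters.
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

X, y = load_digits(return_X_y=True)          # 64-dimensional image features
embedding = TSNE(n_components=2, init="pca", random_state=0).fit_transform(X)

plt.scatter(embedding[:, 0], embedding[:, 1], c=y, cmap="tab10", s=8)
plt.colorbar(label="digit class")
plt.title("T-SNE projection of 64-dimensional digit features")
plt.show()
```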

These are only a few of the many techniques currently available to help model developers with explainability. Regardless of the details of the algorithm, the goal is the same: to help engineers gain a deeper understanding about the data and model. When used during AI modeling and testing, these techniques can provide more insight and confidence into AI predictions.

Beyond Explainability

Explainability helps overcome an important drawback of many advanced AI models and their black-box nature. But overcoming stakeholder or regulatory resistance against black-box models is only one step towards confidently using AI in engineered systems. AI used in practice requires models that can be understood, that were constructed using a rigorous process, and that can operate at a level necessary for safety-critical and sensitive applications.

Continuing areas of focus and improvement include:

Is Explainability Right for Your Application?

The future of AI will have a strong emphasis on explainability. As AI is incorporated into safety-critical and everyday applications, scrutiny from both internal stakeholders and external users is likely to increase. Viewing explainability as essential benefits everyone. Engineers have better information to use to debug their models to ensure the output matches their intuition. They gain more insight to meet requirements and standards. And, they're able to focus on increased transparency for systems that keep getting more complex.

Views expressed above are the author's own.

END OF ARTICLE


AI Inventing Its Own Culture, Passing It On to Humans, Sociologists Find – VICE

Posted: at 8:34 pm

A new study shows that humans can learn new things from artificial intelligence systems and pass them to other humans, in ways that could potentially influence wider human culture.

The study, published on Monday by a group of researchers at the Center for Humans and Machines at the Max Planck Institute for Human Development, suggests that while humans can learn from algorithms how to better solve certain problems, human biases prevented performance improvements from lasting as long as expected. Humans tended to prefer solutions from other humans over those proposed by algorithms, because they were more intuitive, or were less costly upfront, even if they paid off more later.

"Digital technology already influences the processes of social transmission among people by providing new and faster means of communication and imitation," the researchers write in the study. "Going one step further, we argue that rather than a mere means of cultural transmission (such as books or the Internet), algorithmic agents and AI may also play an active role in shaping cultural evolution processes online where humans and algorithms routinely interact."

The crux of this research rests on a relatively simple question: If social learning, or the ability of humans to learn from one another, forms the basis of how humans transmit culture or solve problems collectively, what would social learning look like between humans and algorithms? Considering scientists don't always know and often can't reproduce how their own algorithms work or improve, the idea that machine learning could influence human learning, and culture itself, throughout generations is a frightening one.

"There's a concept called cumulative cultural evolution, where we say that each generation is always pulling up on the next generation, all throughout human history," Levin Brinkmann, one of the researchers who worked on the study, told Motherboard. "Obviously, AI is pulling up on human historythey're trained on human data. But we also found it interesting to think about the other way around: that maybe in the future our human culture would be built up on solutions which have been found originally by an algorithm."

One early example cited in the research is Go, a Chinese strategy board game that saw an algorithmAlphaGobeat the human world champion Lee Sedol in 2016. AlphaGo made moves that were extremely unlikely to be made by human players and were learned via self-play instead of analyzing human gameplay data. The algorithm was made public in 2017 and such moves have become more common among human players, suggesting that a hybrid form of social learning between humans and algorithms was not only possible but durable.

We already know that algorithms can and do significantly affect humans. They're not only used to control workers and citizens in physical workplaces, but also control workers on digital platforms and influence the behavior of individuals who use them. Even studies of algorithms have previewed the worrying ease with which these systems can be used to dabble in phrenology and physiognomy. A federal review of facial recognition algorithms in 2019 found that they were rife with racial biases. One 2020 Nature paper used machine learning to track historical changes in how "trustworthiness" has been depicted in portraits, but created diagrams indistinguishable from well-known phrenology booklets and offered universal conclusions from a dataset limited to European portraits of wealthy subjects.

"I don't think our work can really say a lot about the formation of norms or how much AI can interfere with that," Brinkmann said. "We're focused on a different type of culture, what you could call the culture of innovation, right? A measurable value or performance where you can clearly say, 'Okay, this paradigm, like with AlphaGo, is maybe more likely to lead to success or less likely.'"

For the experiment, the researchers used transmission chains, where they created a sequence of problems to be solved and participants could observe the previous solution (and copy it) before solving it themselves. Two chains were created: one with only humans, and a hybrid human-algorithm one where algorithms followed humans but didn't know if the previous player was a human or algorithm.

The task to solve was to find "an optimal sequence of moves" to navigate a network of six nodes and receive awards with each move.

As expected, we found evidence of a performance improvement over generations due to social learning, the researchers wrote. Adding an algorithm with a different problem-solving bias than humans temporarily improved human performance but improvements were not sustained in following generations. While humans did copy solutions from the algorithm, they appeared to do so at a lower rate than they copied other humans' solutions with comparable performance.

Brinkmann told Motherboard that while they were surprised superior solutions weren't more commonly adopted, this was in line with other research suggesting human biases in decision-making persist despite social learning. Still, the team is optimistic that future research can yield insight into how to amend this.

"One thing we are looking at now is what collective effects might play a role here," Brinkmann said. "For instance, there is something called 'context bias.' It's really about social factors which may also play a role, about unintuitive or alien solutions for a group can be sustained. We are also quite excited about the question of communication between algorithms and humans: what does that actually look like, what kind of features do we need from AI to learn or imitate solutions from AI?"

