
Category Archives: Ai

Risks posed by AI are real: EU moves to beat the algorithms that ruin lives – The Guardian

Posted: August 8, 2022 at 12:34 pm

It started with a single tweet in November 2019. David Heinemeier Hansson, a high-profile tech entrepreneur, lashed out at Apple's newly launched credit card, calling it "sexist" for offering his wife a credit limit 20 times lower than his own.

The allegations spread like wildfire, with Hansson stressing that artificial intelligence, now widely used to make lending decisions, was to blame. "It does not matter what the intent of individual Apple reps are, it matters what THE ALGORITHM they've placed their complete faith in does. And what it does is discriminate. This is fucked up."

While Apple and its underwriter Goldman Sachs were ultimately cleared by US regulators of violating fair lending rules last year, the episode rekindled a wider debate around AI use across public and private industries.

Politicians in the European Union are now planning to introduce the first comprehensive global template for regulating AI, as institutions increasingly automate routine tasks in an attempt to boost efficiency and ultimately cut costs.

That legislation, known as the Artificial Intelligence Act, will have consequences beyond EU borders, and like the EU's General Data Protection Regulation, will apply to any institution, including UK banks, that serves EU customers. "The impact of the act, once adopted, cannot be overstated," said Alexandru Circiumaru, European public policy lead at the Ada Lovelace Institute.

Depending on the EU's final list of "high risk" uses, there is an impetus to introduce strict rules around how AI is used to filter job, university or welfare applications, or, in the case of lenders, assess the creditworthiness of potential borrowers.

EU officials hope that with extra oversight and restrictions on the type of AI models that can be used, the rules will curb the kind of machine-based discrimination that could influence life-altering decisions such as whether you can afford a home or a student loan.

"AI can be used to analyse your entire financial health including spending, saving, other debt, to arrive at a more holistic picture," said Sarah Kocianski, an independent financial technology consultant. "If designed correctly, such systems can provide wider access to affordable credit."

But one of the biggest dangers is unintentional bias, in which algorithms end up denying loans or accounts to certain groups including women, migrants or people of colour.

Part of the problem is that most AI models can only learn from the historical data they have been fed, meaning they will learn which kind of customer has previously been lent to and which customers have been marked as unreliable. "There is a danger that they will be biased in terms of what a good borrower looks like," Kocianski said. "Notably, gender and ethnicity are often found to play a part in the AI's decision-making processes based on the data it has been taught on: factors that are in no way relevant to a person's ability to repay a loan."

Furthermore, some models are designed to be blind to so-called protected characteristics, meaning they are not meant to consider the influence of gender, race, ethnicity or disability. But those AI models can still discriminate as a result of analysing other data points such as postcodes, which may correlate with historically disadvantaged groups that have never previously applied for, secured, or repaid loans or mortgages.

And in most cases, when an algorithm makes a decision, it is difficult for anyone to understand how it came to that conclusion, resulting in what is commonly referred to as black-box syndrome. It means that banks, for example, might struggle to explain what an applicant could have done differently to qualify for a loan or credit card, or whether changing an applicant's gender from male to female might result in a different outcome.

Circiumaru said the AI act, which could come into effect in late 2024, would benefit tech companies that managed to develop what he called "trustworthy AI" models that are compliant with the new EU rules.

Darko Matovski, the chief executive and co-founder of London-headquartered AI startup causaLens, believes his firm is among them.

The startup, which publicly launched in January 2021, has already licensed its technology to the likes of the asset manager Aviva and the quant trading firm Tibra, and says a number of retail banks are in the process of signing deals with the firm before the EU rules come into force.

The entrepreneur said causaLens offers a more advanced form of AI that avoids potential bias by accounting and controlling for discriminatory correlations in the data. "Correlation-based models are learning the injustices from the past and they're just replaying it into the future," Matovski said.

He believes the proliferation of so-called causal AI models like his own will lead to better outcomes for marginalised groups who may have missed out on educational and financial opportunities.

"It is really hard to understand the scale of the damage already caused, because we cannot really inspect this model," he said. "We don't know how many people haven't gone to university because of a haywire algorithm. We don't know how many people weren't able to get their mortgage because of algorithm biases. We just don't know."

Matovski said the only way to protect against potential discrimination was to use protected characteristics such as disability, gender or race as an input but guarantee that regardless of those specific inputs, the decision did not change.
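What Matovski describes is, in effect, a counterfactual invariance test. Below is a minimal sketch of that idea in Python, not causaLens's actual implementation: the model `score_applicant` and the applicant fields are hypothetical placeholders, and the check simply verifies that swapping the protected attribute never flips the decision.

```python
# A minimal sketch (not causaLens's method) of the invariance check described
# above: run the same application through a trained model with the protected
# characteristic swapped, and require the decision not to change.

def score_applicant(applicant: dict) -> bool:
    """Hypothetical stand-in for a trained credit model: True means approved."""
    # Placeholder logic; a real model would be a trained classifier.
    return applicant["income"] > 3 * applicant["monthly_repayment"]

def decision_is_invariant(applicant: dict, attribute: str, values: list) -> bool:
    """Check that swapping a protected attribute never flips the decision."""
    baseline = score_applicant(applicant)
    for value in values:
        variant = {**applicant, attribute: value}
        if score_applicant(variant) != baseline:
            return False  # the decision depends on the protected attribute
    return True

applicant = {"income": 4200, "monthly_repayment": 900, "gender": "female"}
print(decision_is_invariant(applicant, "gender", ["female", "male", "non-binary"]))
```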

He said it was a matter of ensuring AI models reflected our current social values and avoided perpetuating any racist, ableist or misogynistic decision-making from the past. "Society thinks that we should treat everybody equal, no matter what gender, what their postcode is, what race they are. So then the algorithms must not only try to do it, but they must guarantee it," he said.


While the EUs new rules are likely to be a big step in curbing machine-based bias, some experts, including those at the Ada Lovelace Institute, are pushing for consumers to have the right to complain and seek redress if they think they have been put at a disadvantage.

"The risks posed by AI, especially when applied in certain specific circumstances, are real, significant and already present," Circiumaru said.

"AI regulation should ensure that individuals will be appropriately protected from harm by approving or not approving uses of AI and have remedies available where approved AI systems malfunction or result in harms. We cannot pretend approved AI systems will always function perfectly and fail to prepare for the instances when they won't."

See the rest here:

Risks posed by AI are real: EU moves to beat the algorithms that ruin lives - The Guardian


Artificial Intelligence: 3 ways the pandemic accelerated its adoption – The Enterprisers Project

Posted: at 12:34 pm

The need for organizations to quickly create new business models and marketing channels has accelerated AI adoption throughout the past couple of years. This is especially true in healthcare, where data analytics accelerated the development of COVID-19 vaccines. In consumer-packaged goods, Harvard Business Review reported that Frito-Lay created an e-commerce platform, Snacks.com, in just 30 days.

The pandemic also accelerated AI adoption in education, as schools were forced to enable online learning overnight. And wherever possible, the world shifted to touchless transactions, completely transforming the banking industry.

Three technology developments during the pandemic accelerated AI adoption: increasing computing power, new data architectures such as data fabric and data mesh, and the continued explosive growth of data.


Let's look at the pros and cons of these developments for IT leaders.

Even 60 years after Moore's Law was first articulated, computing power is still increasing, with more powerful machines and more processing power delivered through new chips from companies like Nvidia. AI Impacts reports that computing power available per dollar has probably increased by a factor of ten roughly every four years over the last quarter of a century (measured in FLOPS or MIPS). However, the rate has been slower over the past 6-8 years.
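As a quick sanity check on that figure, a tenfold improvement every four years works out to roughly a 78% gain per year, or a doubling of compute per dollar about every 14 months. The short sketch below is just the implied arithmetic, not AI Impacts' own methodology.

```python
import math

# Back-of-the-envelope check on the AI Impacts figure quoted above:
# a 10x improvement in compute per dollar every 4 years.
annual_factor = 10 ** (1 / 4)                          # ~1.78x per year
doubling_time_years = 4 * math.log(2) / math.log(10)   # ~1.2 years

print(f"Implied annual improvement: {annual_factor:.2f}x")
print(f"Implied doubling time: {doubling_time_years * 12:.0f} months")
```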

Pros: More for less

Inexpensive computing gives IT leaders more choices, enabling them to do more with less.

Cons: Too many choices can lead to wasted time and money

Consider big data. With inexpensive computing, IT pros want to wield its power. There is a desire to start ingesting and analyzing all available data in hopes of better insights, analysis, and decision-making.

But if you are not careful, you could end up with massive computing power and not enough real-life business applications.

As networking, storage, and computing costs drop, the human inclination is to use them more. But more usage doesn't necessarily deliver business value everywhere.

Before the pandemic, the terms "data warehouse" and "data lake" were standard, and they remain so today. But new data architectures like data fabric and data mesh were almost non-existent. Data fabric supports AI adoption because it lets enterprises use data to maximize their value chain by automating data discovery, governance, and consumption. Organizations can provide the right data at the right time, regardless of where it resides.

Pros: IT leaders will have the opportunity to rethink data models and data governance

It provides a chance to buck the trend toward centralized data repositories or data lakes. This might mean more edge computing, with data available where it is most relevant. These advancements result in appropriate data being automatically available for decisioning, which is critical to AI operability.

Cons: Not understanding the business need

IT leaders need to understand the business and AI aspects of new data architectures. If they don't know what each part of the business needs, including the kind of data and where and how it will be used, they may not create the correct type of data architecture and data consumption for proper support. IT's understanding of the business needs, and the business models that go with that data architecture, will be essential.

Statista research underscores the growth of data: the total amount of data created, captured, copied, and consumed globally was 64.2 zettabytes in 2020 and is projected to reach more than 180 zettabytes in 2025. Statista research from May 2022 reports that "the growth was higher than previously expected, caused by the increased demand due to the COVID-19 pandemic." Big data sources include media, cloud, IoT, the web, and databases.
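For context, those two data points imply a compound annual growth rate of roughly 23%. The snippet below only restates the Statista figures quoted above; it adds no new data.

```python
# The figures quoted above imply roughly a 23% compound annual growth rate
# in global data volume between 2020 and 2025.
data_2020_zb = 64.2
data_2025_zb = 180.0
years = 5

cagr = (data_2025_zb / data_2020_zb) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")   # roughly 23% per year
```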

Pros: Data is powerful

Every decision and transaction can be traced back to a data source. If IT leaders can use AIOps/MLOps to zero in on data sources for analysis and decision-making, they are empowered. Proper data can deliver instant business analysis and provide deep insights for predictive analytics.

Cons: How do you know what data to use?


Besieged by data from IoT and edge computing, formatted and unformatted, intelligible and unintelligible, IT leaders are dealing with the 80/20 rule: which are the 20 percent of credible data sources that deliver 80 percent of the business value? How do you use AIOps/MLOps to determine the credible data sources, and which data source should be used for analysis and decision-making? Every organization needs to find answers to these questions.

AI is becoming ubiquitous, powered by new algorithms and increasingly plentiful and inexpensive computing power. AI technology has been on an evolutionary road for more than 70 years. The pandemic did not accelerate the development of AI; it accelerated its adoption.

Harnessing AI is the challenge ahead.


Visit link:

Artificial Intelligence: 3 ways the pandemic accelerated its adoption - The Enterprisers Project


You Need To Stop Doing This On Your AI Projects – Forbes

Posted: at 12:34 pm

It's easy to get excited about AI projects, especially when you hear about all the amazing things people are doing with AI, from conversational and natural language processing (NLP) systems to image recognition, autonomous systems, predictive analytics, and pattern and anomaly detection. However, when people get excited about AI projects, they tend to overlook some significant red flags. And it's those red flags that are causing over 80% of AI projects to fail.

One of the biggest reasons for AI project failure is that companies don't justify the use of AI from a return on investment (ROI) perspective. Simply put, the projects are not worth the time and expense given the cost, complexity, and difficulty of implementing the AI systems.

Organizations rush past the exploration phase of AI adoption, jumping from simple proof-of-concept demos right to production without first assessing whether the solution will provide any positive return. One big reason for this is that measuring AI project ROI can prove more difficult than first expected. Far too often, teams are getting pressure from upper management, colleagues, or external teams to just get started with their AI efforts, and projects move forward without a clear answer to the problem they are actually trying to solve or the ROI that's going to be seen. When companies struggle to develop a clear understanding of what to expect when it comes to the ROI of AI, misalignment of expectations is always the result.

Missing and Misaligned ROI Expectations

So, what happens when the ROI of an AI project isn't aligned with expectations from management? One of the most common reasons why AI projects fail is that the ROI is not justified by the investment of money, resources, and time. If you're going to be spending your time, effort, human resources, and money implementing an AI system, you want to get a well-identified positive return.

Even worse than a misaligned ROI is the fact that many organizations aren't even measuring or quantifying ROI to begin with. ROI can be measured in a variety of ways: as a financial return, such as generating income or reducing expenses, but also as a return on time, the shifting or reallocating of critical resources, improved reliability and safety, reduced errors and better quality control, or improved security and compliance. It's easy to see how an AI project could provide a positive ROI: if you spend a hundred thousand dollars on an AI project to eliminate two million dollars of potential cost or liability, then it's worth every dollar spent to reduce the liability. But you'll only see that ROI if you actually plan for it ahead of time and manage it.
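Worked out explicitly, that hypothetical is roughly a 19x (1,900%) return. The tiny sketch below only restates the article's illustrative numbers.

```python
# Worked version of the example above: $100,000 spent to eliminate
# $2,000,000 of potential cost or liability.
cost = 100_000
avoided_liability = 2_000_000

roi = (avoided_liability - cost) / cost
print(f"ROI: {roi:.0%}")   # 1900%, i.e. a 19x return on the spend
```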

Management guru Peter Drucker once famously said, "you can't manage what you don't measure." The act of measuring and managing AI ROI is what sets apart those who see positive value from AI from those who end up canceling their projects years and millions of dollars into their efforts.

Boiling the Ocean and Biting off More than You Can Chew

Another big reason why companies aren't seeing the ROI they are expecting is that projects are trying to bite off way too much all at once. Iterative, agile best practices, especially those employed by best-practice AI methodologies such as CPMAI, clearly advise project owners to "Think big. Start small. Iterate often." There are unfortunately many unsuccessful AI implementations that have taken the opposite approach: thinking big, starting big, and iterating infrequently. One case in point is Walmart's investment in AI-powered robots for inventory management. In 2017, Walmart invested in robots to scan store shelves, and by 2022 it had pulled them out of stores.

Clearly, Walmart had sufficient resources and smart people, so you can't blame the failure on bad people or bad technology. Rather, the main issue was a bad solution to the problem: Walmart realized that it was just cheaper and easier to use the human employees it already had working in the stores to complete the same tasks the robots were supposed to do. Another example of a project not returning the expected results can be found in the various applications of the Pepper robot in supermarkets, museums, and tourist areas. Better people or better technology wouldn't have solved this problem, just a better approach to managing and evaluating AI projects. Methodology, folks.

Adopting a Step-by-Step Approach to Running AI and Machine Learning Projects

Did these companies get caught up in the hype of the technology? Were they just looking to have a robot roaming the halls for the cool factor? Because being cool isn't solving any real business problem or addressing a pain point. Don't do AI for the sake of AI, and if you do AI just for the sake of AI, then don't be surprised when you don't see a positive ROI.

So, what can companies do to ensure positive ROI for their projects? First, stop implementing AI projects for AI's sake. Successful companies are adopting a step-by-step approach to running AI and machine learning projects. As mentioned earlier, methodology is often the missing secret sauce of successful AI projects. Organizations are now seeing the benefit in employing approaches such as the Cognitive Project Management for AI (CPMAI) methodology, built upon decades-old data-centric project approaches such as CRISP-DM and incorporating established best-practice agile approaches to provide short, iterative sprints for projects.

These approaches all start with the business user and requirements in mind. The very first step of CRISP-DM, CPMAI, and even Agile is to figure out whether you should even move forward with an AI project at all. These methodologies acknowledge that alternate approaches, such as automation, straightforward programming, or even just more people, might be more appropriate to solve the problem at hand.

The AI Go No Go Analysis

AI Go/No-Go decisions, CPMAI methodology (source: Cognilytica)

If AI is the right solution, then you need to make sure that you answer yes to a variety of different questions to assess whether you're ready to embark on your AI project. The set of questions you need to ask to determine whether to move forward with an AI project is called the AI Go/No-Go analysis, and it is part of the very first phase of the CPMAI methodology. The AI Go/No-Go analysis has users ask a series of nine questions in three general categories. In order for an AI project to actually go forward, you need three things in alignment: business feasibility, data feasibility, and technology/execution feasibility. The first of the three general categories covers business feasibility and asks whether there is a clear problem definition, whether the organization is actually willing to invest in the change once created, and whether there is sufficient ROI or impact.

These may seem like very basic questions, but far too often these very simple questions are skipped. The second set of questions deals with data, including data quality, data quantity, and data access considerations. The third set of questions is around implementation, including whether you have the correct team and skill sets, whether you can execute the model as required, and whether the model can be used where planned.
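To make the structure concrete, here is a minimal sketch of such a go/no-go checklist in Python. The question wording is paraphrased from the descriptions above rather than taken from official CPMAI materials; only the pass/fail logic (every answer in every category must be yes) follows the article's guidance.

```python
# A minimal sketch of an AI Go/No-Go checklist in the spirit described above.
# Question wording is paraphrased from this article, not official CPMAI text.

GO_NO_GO_QUESTIONS = {
    "business": [
        "Is there a clear problem definition?",
        "Is the organization willing to invest in the change once built?",
        "Is there sufficient ROI or impact?",
    ],
    "data": [
        "Is the data of sufficient quality?",
        "Is there enough data?",
        "Can the team actually access the data?",
    ],
    "implementation": [
        "Do we have the right team and skill sets?",
        "Can we execute the model as required?",
        "Can the model be used where planned?",
    ],
}

def go_no_go(answers: dict) -> str:
    """Return 'GO' only if every question in every category is answered yes."""
    for category, questions in GO_NO_GO_QUESTIONS.items():
        category_answers = answers.get(category, [])
        if len(category_answers) != len(questions) or not all(category_answers):
            return f"NO GO (blocked on {category} feasibility)"
    return "GO"

print(go_no_go({
    "business": [True, True, True],
    "data": [True, True, False],   # e.g. no data access yet
    "implementation": [True, True, True],
}))
```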

The most difficult part of asking these questions is being honest with the answers. It's important to be really honest when addressing whether to move forward with the project, and if you answer no to one or more of these questions, it means either you're not ready to move forward yet or you should not move forward at all. Don't just plow ahead and do it anyway, because if you do, don't be surprised when you've wasted a lot of time, energy, and resources and don't get the ROI you were hoping for.

Here is the original post:

You Need To Stop Doing This On Your AI Projects - Forbes


How can CIOs build the next generation of AI talent? – Wire19

Posted: at 12:34 pm

As technological innovation continues to accelerate and artificial intelligence (AI) becomes more prevalent, businesses are looking for ways to build the next generation of AI talent. According to Gartner, over 80% of Internet of Things (IoT) activities in enterprises will employ AI and machine learning. Skilled workers are the most important factor in AI development: although technology and algorithms have become commoditized, there is a big demand for workers who can solve problems with AI.

Here are a few things CIOs can do in order to make this happen.

Nurture the next generation of AI talent

CIOs should nurture and grow next-gen AI talent through continuous innovation where industry, science, engineering, and human ingenuity intersect. They need to give talented AI professionals a good place to work and make sure those professionals have the freedom to create value and meet their expectations. Creating tech hubs to grow local ecosystems is a practical way to start building the next generation of AI talent now.

Merit to education in AI

AI workshops, certifications, and bootcamps do not have any educational merit and do not build practitioner-level skills. AI education needs to be built around the intellectual infrastructure that already exists in local academic communities, and centers of excellence that engage via an eight-stakeholder model must form out of those communities to make AI education effective and bring merit to education in AI. CIOs need to identify the areas where the local ecosystem is lacking and use this as an opportunity to create value. This means that the technology and academic communities in each region need to work together to build local AI centers of excellence. Education is required to lead in the field of artificial intelligence, which is why it is so important to make sure the academic system for this discipline is strong.

Support from national governments

National governments need to support AI ecosystems from the grassroots level. Each stakeholder in an AI ecosystem has a role to play in building a value network that runs from local government up to federal policymakers. An AI ecosystem is made up of eight different stakeholders, and each one has different goals; for these stakeholders to achieve their goals, they need the support of the government. National governments should immediately recognize AI degrees and education at the graduate school level.

In what seems to align with Indian Prime Minister Narendra Modi's Digital India vision, Deloitte and IIT Roorkee have announced a collaboration to empower and build the next generation of Indian talent in the field of AI. Deloitte and IIT Roorkee will together deliver rigorous, immersive programs in AI and machine learning designed to build the next-generation workforce. This will revolutionize how organizations and academia work together to overcome the AI talent gap by imparting industry-relevant skills in new-age tools to Indian talent and developing future leaders who are highly proficient in AI.

The future of AI is bright, and businesses need to start preparing now for the talent they will need in the future. By considering the suggestions we've outlined, CIOs can make sure their business is at the forefront of this exciting industry. Are you ready to build the next generation of AI talent?


More:

How can CIOs build the next generation of AI talent? - Wire19


AI asked to create an image of what death looks like – TweakTown

Posted: at 12:34 pm

An artificial intelligence has been asked to create an image of what death looks like, and the results are simply stunning.

The artificial intelligence (AI) that was asked to create the images seen in the above video is called MidJourney. It was created by David Holz, co-founder of Leap Motion, and is currently run by a small self-funded team with several well-known advisors: Jim Keller, known for his work at AMD, Apple, Tesla, and Intel; Nat Friedman, the CEO of GitHub; and Bill Warner, the founder of Avid Technology and inventor of nonlinear video editing.

MidJourney is an incredible piece of technology, and it recently went into open beta, which means anyone can try it by simply heading over to its dedicated Discord server. Users can enter "/imagine" followed by a text prompt of what they want the AI to produce. Users have been testing the AI's capabilities by entering descriptive words such as HD, hyper-realistic, 4K, and wallpaper, all of which work perfectly.

As for the predictive capability of MidJourney, none of the images seen in this article or any other source should be taken as a prediction. MidJourney was created to expand the human species' imaginative powers, not to make predictions.

Using MidJourney's image generation algorithms, users are able to create ultra-realistic images of whatever they wish. The possibilities are truly endless, and with accurate text inputs, you can create wallpaper-worthy images. I tested the AI and created several images that are now being used as wallpapers, but what was more impressive was what the other users in the Discord were making. Below are some examples of what I found, along with what each user inputted into the AI to get the result.

Use MidJourney AI here.


- A detailed futuristic soldier portrait gas mask, slightly visible shoulders, explosion in background

- A detailed oil painting of final fantasy XIII versus battle of light and darkness

- Universe

- A young boy sleeping on a mat, smiling at the camera, big brown eyes, hyper realistic, 4K, very clear

- Cyberpunk cat, 4K, red glasses, ultra realistic

The rest is here:

AI asked to create an image of what death looks like - TweakTown


The Computer Scientist Trying to Teach AI to Learn Like We Do – Quanta Magazine

Posted: at 12:34 pm

Kanan has been toying with machine intelligence nearly all his life. As a kid in rural Oklahoma who just wanted to have fun with machines, he taught bots to play early multiplayer computer games. That got him wondering about the possibility of artificial general intelligence: a machine with the ability to think like a human in every way. This made him interested in how minds work, and he majored in philosophy and computer science at Oklahoma State University before his graduate studies took him to the University of California, San Diego.

Now Kanan finds inspiration not just in video games, but also in watching his nearly 2-year-old daughter learn about the world, with each new learning experience building on the last. Because of his and others' work, catastrophic forgetting is no longer quite as catastrophic.

Quanta spoke with Kanan about machine memories, breaking the rules of training neural networks, and whether AI will ever achieve human-level learning. The interview has been condensed and edited for clarity.

It has served me very well as an academic. Philosophy teaches you, "How do you make reasoned arguments?" and "How do you analyze the arguments of others?" That's a lot of what you do in science. I still have essays from way back then on the failings of the Turing test, and things like that. And so those things I still think about a lot.

My lab has been inspired by asking the question: Well, if we can't do X, how are we going to be able to do Y? We learn over time, but neural networks, in general, don't. You train them once. It's a fixed entity after that. And that's a fundamental thing that you'd have to solve if you want to make artificial general intelligence one day. If it can't learn without scrambling its brain and restarting from scratch, you're not really going to get there, right? That's a prerequisite capability to me.

The most successful method, called replay, stores past experiences and then replays them during training with new examples, so they are not lost. It's inspired by memory consolidation in our brain, where during sleep the high-level encodings of the day's activities are replayed as the neurons reactivate.

In other words, for the algorithms, new learning can't completely eradicate past learning since we are mixing in stored past experiences.

There are three styles for doing this. The most common style is veridical replay, where researchers store a subset of the raw inputs (for example, the original images for an object recognition task) and then mix those stored images from the past in with new images to be learned. The second approach replays compressed representations of the images. A third, far less common, method is generative replay. Here, an artificial neural network actually generates a synthetic version of a past experience and then mixes that synthetic example with new examples. My lab has focused on the latter two methods.
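To make the veridical-replay idea concrete, here is a minimal sketch of a replay buffer that mixes stored past examples into each new training batch. It illustrates the general technique described above, not code from Kanan's lab; the buffer size and mixing ratio are arbitrary illustrative choices.

```python
import random

# A minimal sketch of veridical replay: keep a small buffer of raw past
# examples and mix them into each new training batch so earlier learning
# is not simply overwritten by new data.

class ReplayBuffer:
    def __init__(self, capacity: int = 1000):
        self.capacity = capacity
        self.storage = []

    def add(self, example):
        """Store a raw past example, evicting a random one when full."""
        if len(self.storage) >= self.capacity:
            self.storage[random.randrange(self.capacity)] = example
        else:
            self.storage.append(example)

    def sample(self, k: int):
        """Draw up to k stored examples to replay alongside new data."""
        return random.sample(self.storage, min(k, len(self.storage)))

def make_training_batch(new_examples, buffer: ReplayBuffer, replay_ratio: float = 0.5):
    """Mix new examples with replayed old ones, then remember the new ones."""
    replayed = buffer.sample(int(len(new_examples) * replay_ratio))
    for example in new_examples:
        buffer.add(example)
    batch = new_examples + replayed
    random.shuffle(batch)
    return batch

buffer = ReplayBuffer()
first_task = [("image_of_cat", "cat"), ("image_of_dog", "dog")]
second_task = [("image_of_car", "car"), ("image_of_bike", "bike")]
make_training_batch(first_task, buffer)
print(make_training_batch(second_task, buffer))  # new examples plus a replayed old one
```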

Unfortunately, though, replay isn't a very satisfying solution.

Read more:

The Computer Scientist Trying to Teach AI to Learn Like We Do - Quanta Magazine


Here’s Why Businesses Are Having A Tumultuous Love-Hate Relationship With AI Ethics Boards – Forbes

Posted: at 12:34 pm

AI Ethics advisory boards are essential but also require focus and attention, else they can fall apart and be untoward for all concerned.

Should a business establish an AI Ethics advisory board?

You might be surprised to know that this is not an easy yes-or-no answer.

Before I get into the complexities underlying the pros and cons of putting in place an AI Ethics advisory board, let's make sure we are all on the same page as to what an AI Ethics advisory board consists of and why it has risen to headline-level prominence.

As everyone knows, Artificial Intelligence (AI) and the practical use of AI for business activities have gone through the roof as a must-have for modern-day companies. You would be hard-pressed to argue otherwise. To some degree, the infusion of AI has made products and services better, plus at times led to lower costs associated with providing said products and services. A nifty list of efficiencies and effectiveness boosts can be potentially attributed to the sensible and appropriate application of AI. In short, the addition or augmenting of what you do by incorporating AI can be a quite profitable proposition.

There is also the, shall we say, big splash that comes with adding AI into your corporate endeavors.

Businesses are loud and proud about their use of AI. If the AI just so happens to also improve your wares, that's great. Meanwhile, claims of using AI are sufficiently attention-grabbing that you can pretty much be doing the same things you did before, yet garner a lot more bucks or eyeballs by tossing around the banner of AI as being part of your business strategy and out-the-door goods.

That last point, about sometimes fudging a bit on whether AI is really being used, gets us edging into the arena of AI Ethics. There is all manner of outright false claims being made about AI by businesses. Worse still, perhaps, is the use of AI that turns out to be the so-called AI For Bad.

For example, you've undoubtedly read about the many instances of AI systems using Machine Learning (ML) or Deep Learning (DL) that have ingrained racial biases, gender biases, and other undue improper discriminatory practices. For my ongoing and extensive coverage of these matters relating to adverse AI and the emergence of clamoring calls for AI Ethics and Ethical AI, see the link here and the link here, just to name a few.

So, we have these sour drivers hidden within the seemingly all-rosy use of AI by businesses: overstated or outright false claims of using AI, and AI that quietly discriminates or otherwise does harm.

How do these kinds of thoughtless or disgraceful practices arise in companies?

One notable piece of the puzzle is a lack of AI Ethics awareness.

Top executives might be unaware of the very notion of devising AI that abides by a set of Ethical AI precepts. The AI developers in such a firm might have some awareness of the matter, though perhaps they are only familiar with AI Ethics theories and do not know how to bridge the gap in day-to-day AI development endeavors. There is also the circumstance of AI developers that want to embrace AI Ethics but then get a strong pushback when managers and executives believe that this will slow down their AI projects and bump up the costs of devising AI.

A lot of top executives do not realize that a lack of adhering to AI Ethics is likely to end up kicking them and the company in their posterior upon the release of AI which is replete with thorny and altogether ugly issues. A firm can get caught with bad AI in its midst that then woefully undermines the otherwise long-time built-up reputation of the firm (reputational risk). Customers might choose to no longer use the company's products and services (customer loss risk). Competitors might capitalize on this failure (competitive risk). And there are lots of attorneys ready to aid those that have been transgressed, aiming to file hefty lawsuits against firms that have allowed rotten AI into their company wares (legal risk).

In brief, the ROI (return on investment) for making suitable use of AI Ethics is almost certainly more beneficial than in comparison to the downstream costs associated with sitting atop a stench of bad AI that should not have been devised nor released.

Turns out that not everyone has gotten that memo, so to speak.

AI Ethics is only gradually gaining traction.

Some believe that inevitably the long arm of the law might be needed to further inspire the adoption of Ethical AI approaches.

Besides employing AI Ethics precepts in general, there is a corresponding question of whether we should have distinctive laws to govern various development and uses of AI. New laws are indeed being bandied around at the international, federal, state, and local levels that concern the range and nature of how AI should be devised. The effort to draft and enact such laws is a measured one. AI Ethics serves as a considered stopgap, at the very least, and will almost certainly to some degree be directly incorporated into those new laws.

Be aware that some adamantly argue that we do not need new laws that cover AI and that our existing laws are sufficient. In fact, they forewarn that if we do enact some of these AI laws, we will be killing the golden goose by clamping down on advances in AI that proffer immense societal advantages. See for example my coverage at the link here and the link here.

Let's make sure we are all on the same page about what the basics of AI Ethics contain.

In my column coverage, I've previously discussed various collective analyses of AI Ethics principles, such as this assessment at the link here, which proffers a helpful keystone list of Ethical AI criteria or characteristics for AI systems, spanning facets such as transparency, justice and fairness, non-maleficence, responsibility, and privacy.

Those AI Ethics principles are earnestly supposed to be utilized by AI developers, along with those that manage AI development efforts, and even those that ultimately field and perform upkeep on AI systems.

All stakeholders throughout the entire AI life cycle of development and usage are considered within the scope of abiding by the being-established norms of Ethical AI. This is an important highlight since the usual assumption is that only coders or those that program the AI are subject to adhering to the AI Ethics notions. It takes a village to devise and field AI, and for which the entire village has to be versed in and abide by AI Ethics precepts.

A means of trying to introduce and keep sustained attention regarding the use of AI Ethics precepts can be partially undertaken via establishing an AI Ethics advisory board.

We will unpack the AI Ethics advisory board facets next.

AI Ethics Boards And How To Do Them Right

Companies can be at various stages of AI adoption, and likewise at differing stages of embracing AI Ethics.

Envision a company that wants to get going on AI Ethics embracement but isn't sure how to do so. Another scenario might be a firm that already has dabbled with AI Ethics but seems unsure of what needs to be done in furtherance of the effort. A third scenario could be a firm that has been actively devising and using AI and internally has done a lot to embody AI Ethics, though they realize that there is a chance that they are missing out on other insights, perhaps due to internal groupthink.

For any of those scenarios, putting in place an AI Ethics advisory board might be prudent.

The notion is rather straightforward (well, to clarify, the overall notion is the proverbial tip of the iceberg and the devil is most certainly in the details, as we will momentarily cover).

An AI Ethics advisory board typically consists of primarily external advisors that are asked to serve on a special advisory board or committee for the firm. There might also be some internal participants included in the board, though usually the idea is to garner advisors from outside the firm and that can bring a semi-independent perspective to what the company is doing.

I say semi-independent since there are undoubtedly going to be some potential independence conflicts that can arise with the chosen members of the AI Ethics advisory board. If the firm is paying the advisors, it raises the obvious question of whether the paid members feel reliant on the firm for a paycheck or that they might be uneasy criticizing the gift horse they have in hand. On the other hand, businesses are used to making use of outside paid advisors for all manner of considered independent opinions, so this is somewhat customary and expected anyway.

The AI Ethics advisory board is usually asked to meet periodically, either in person or on a virtual remote basis. They are used as a sounding board by the firm. The odds are, too, that the members are being provided with various internal documents, reports, and memos about the AI-related efforts afoot at the firm. Particular members of the AI Ethics advisory board might be asked to attend internal meetings as befits their specific expertise. Etc.

Besides being able to see what is going on with AI within the firm and provide fresh eyes, the AI Ethics advisory board usually has a dual role of being an outside-to-inside purveyor of the latest in AI and Ethical AI. Internal resources might not have the time to dig into what is happening outside of the firm and ergo can get keenly focused and tailored state-of-the-art viewpoints from the AI Ethics advisory board members.

There are also the inside-to-outside uses of an AI Ethics advisory board too.

This can be tricky.

The concept is that the AI Ethics advisory board is utilized to let the outside world know what the firm is doing when it comes to AI and AI Ethics. This can be handy as a means of bolstering the reputation of the firm. The AI-infused products and services might be perceived as more trustworthy due to the golden seal of approval from the AI Ethics advisory board. In addition, calls for the firm to be doing more about Ethical AI can be somewhat blunted by pointing out that an AI Ethics advisory board is already being utilized by the company.

Questions that usually are brought to an AI Ethics advisory board by the firm utilizing such a mechanism often include:

Tapping into an AI Ethics advisory board assuredly makes sense and firms have been increasingly marching down this path.

Please be aware that there is another side to this coin.

On one side of the coin, AI Ethics advisory boards can be the next best thing since sliced bread. Do not neglect the other side of the coin, namely that they can also be a monumental headache and you might regret that you veered into this dicey territory (as you'll see in this discussion, the downsides can be managed, if you know what you are doing).

Companies are beginning to realize that they can find themselves in a bit of a pickle when opting to go the AI Ethics advisory board route. You could assert that this machination is somewhat akin to playing with fire. You see, fire is a very powerful element that you can use to cook meals, protect you from predators whilst in the wilderness, keep you warm, bring forth light, and provide a slew of handy and vital benefits.

Fire can also get you burned if you aren't able to handle it well.

There have been various news headlines of recent note that vividly demonstrate the potential perils of having an AI Ethics advisory board. If a member summarily decides that they no longer believe that the firm is doing the right Ethical AI activities, the disgruntled member might quit in a huge huff. Assuming that the person is likely to be well-known in the AI field or industry all-told, their jumping ship is bound to catch widespread media attention.

A firm then has to go on the defense.

Why did the member leave?

What is the company nefariously up to?

Some firms require that the members of the AI Ethics advisory board sign NDAs (non-disclosure agreements), which seemingly will protect the firm if the member decides to go rogue and trash the company. The problem though is that even if the person remains relatively silent, there is nonetheless a likely acknowledgment that they no longer serve on the AI Ethics advisory board. This, by itself, will raise all kinds of eyebrow-raising questions.

Furthermore, even if an NDA exists, sometimes the member will try to skirt around the provisions. This might include referring to unnamed wink-wink generic case studies to highlight AI Ethics anomalies that they believe the firm insidiously was performing.

The fallen member might be fully brazen and come out directly naming their concerns about the company. Whether this is a clear-cut violation of the NDA is somewhat perhaps less crucial than the fact that the word is being spread of Ethical AI qualms. A firm that tries to sue the member for breach of the NDA can brutally bring hot water onto themselves, stoking added attention to the dispute and appearing to be the classic David versus Goliath duel (the firm being the large monster).

Some top execs assume that they can simply reach a financial settlement with any member of the AI Ethics advisory board that feels the firm is doing the wrong things including ignoring or downplaying voiced concerns.

This might not be as easy as one assumes.

Oftentimes, the members are devoutly ethically minded and will not readily back down from what they perceive to be an ethical right-versus-wrong fight. They might also be otherwise financially stable and not willing to shave their ethical precepts or they might have other employment that remains untouched by their having left the AI Ethics advisory board.

As might be evident, some later realize that an AI Ethics advisory board is a dual-edged sword. There is a tremendous value and important insight that such a group can convey. At the same time, you are playing with fire. It could be that a member or members decide they no longer believe that the firm is doing credible Ethical AI work. There have been news reports of an entire AI Ethics advisory board quitting together, all at once, or of a preponderance of the members announcing that they are leaving.

Be ready for the good and the problems that can arise with AI Ethics advisory boards.

Of course, there are times that companies are in fact not doing the proper things when it comes to AI Ethics.

Therefore, we would hope and expect that an AI Ethics advisory board at that firm would step up to make this known, presumably internally within the firm first. If the firm continues on the perceived bad path, the members would certainly seem ethically bound (possibly legally too) to take other action as they believe is appropriate (members should consult their personal attorney for any such legal advice). It could be that this is the only way to get the company to change its ways. A drastic action by a member or set of members might seem to be the last resort that the members hope will turn the tide. In addition, those members likely do not want to be part of something that they ardently believe has gone astray from AI Ethics.

A useful way to consider these possibilities is this:

The outside world won't necessarily know whether the member that exits has a bona fide basis for concern about the firm or whether it might be some idiosyncratic take or misimpression on the member's part. There is also the rather straightforward possibility of a member leaving the group due to other commitments or for personal reasons that have nothing to do with what the firm is doing.

The gist is that it is important for any firm adopting an AI Ethics advisory board to mindfully think through the entire range of life cycle phases associated with the group.

With all that talk of problematic aspects, I don't want to convey the impression of staying clear of having an AI Ethics advisory board. That is not the message. The real gist is to have an AI Ethics advisory board and make sure you do so the right way. Make that into your cherished mantra.

Here are some of the oft-mentioned benefits of an AI Ethics advisory board:

Here are common ways that firms mess up and undercut their AI Ethics advisory board (don't do this!):

Another frequently confounding problem involves the nature and demeanor of the various members that are serving on an AI Ethics advisory board, which can sometimes be problematic in these ways:

Some firms just seem to toss together an AI Ethics advisory board on a somewhat willy-nilly basis. No thought goes toward the members to be selected. No thought goes toward what they each bring to the table. No thought goes toward the frequency of meetings and how the meetings are to be conducted. No thought goes toward running the AI Ethics advisory board, all told. Etc.

In a sense, by your own lack of resourcefulness, you are likely putting a train wreck in motion.

Don't do that.

Perhaps the right things to do are now ostensibly obvious to you based on the discourse so far, but you would perhaps be shocked to know that few firms seem to get this right.

Conclusion

A few years ago, many of the automakers and self-driving tech firms that are embarking upon devising AI-based self-driving cars were suddenly prompted into action to adopt AI Ethics advisory boards. Until that point in time, there had seemed to be little awareness of having such a group. It was assumed that the internal focus on Ethical AI would be sufficient.

I've discussed at length in my column the various unfortunate AI Ethics lapses or oversights that have at times led to self-driving car issues such as minor vehicular mishaps, overt car collisions, and other calamities; see my coverage at the link here. The importance of AI safety and like protections has to be the topmost consideration for those making autonomous vehicles. AI Ethics advisory boards in this niche are helping to keep AI safety a vital top-of-mind priority.

My favorite way to express this kind of revelation about AI Ethics is to liken the matter to earthquakes.

Californians are subject to earthquakes from time to time, sometimes rather hefty ones. You might think that being earthquake prepared would be an ever-present consideration. Not so. The cycle works this way: a substantive earthquake happens and people get reminded of being earthquake prepared. For a short while, there is a rush to undertake such preparations. After a while, the attention to this wanes. The preparations fall by the wayside or are otherwise neglected. Boom, another earthquake hits, and all those that should have been prepared are caught unawares, as though they hadn't realized that an earthquake could someday occur.

Firms often do somewhat the same about AI Ethics advisory boards.

They don't start one, and then suddenly, upon some catastrophe involving their AI, they are reactively spurred into action. They flimsily start an AI Ethics advisory board, and it has many of the troubles I've cited earlier herein. The AI Ethics advisory board falls apart. Oops, a new AI calamity within the firm reawakens the need for the AI Ethics advisory board.

Wash, rinse, and repeat.

Businesses definitely find that they sometimes have a love-hate relationship with their AI Ethics advisory board efforts. When it comes to doing things the right way, love is in the air. When it comes to doing things the wrong way, hate ferociously springs forth. Make sure you do what is necessary to keep the love going and avert the hate when it comes to establishing and maintaining an AI Ethics advisory board.

Let's turn this into a love-love relationship.

See the rest here:

Here's Why Businesses Are Having A Tumultuous Love-Hate Relationship With AI Ethics Boards - Forbes


Artificial Intelligence Revolutionizing Content Writing – Entrepreneur

Posted: at 12:34 pm

Opinions expressed by Entrepreneur contributors are their own.

You're reading Entrepreneur India, an international franchise of Entrepreneur Media.

The idea of Pepper Content germinated in a dormitory of BITS, Pilani. The story of the founders was similar to that of average Indian teenagers who wanted to pursue engineering.

The founders realized a shared passion for content. It was clear that, for brands, smartphones and the Internet had changed the principles of customer engagement and experience. More than 700 million Internet users, businesses included, were accessing and consuming different forms of content daily. However, access to quality content was not as easy.

"We asked ourselves that if, in this instant noodle economy, items like food and medicine get ordered and delivered at the tap of a button, then why can't content be treated and delivered the same way? Every company in the world has a content need. In today's day and age, this opportunity stands at a staggering $400 billion globally. This was when we began the B2B content marketplace, Pepper Content, in 2017," said Anirudh Singla, co-founder and CEO, Pepper Content.

The co-founders, with limited resources, ongoing classes, assignments, and exams, persisted in pursuing their dream. In 2017, the company received its first order: 250 articles on automotive topics. Pepper Content enables marketers to connect with the best writers, designers, translators, videographers, editors, and illustrators, and vets the marketplace's creative professionals using its AI algorithms to make the right match between businesses and creative professionals. To support its creators, Pepper Content has invested in building tools that augment their ability and make them more productive; one of its key products, Peppertype.ai, is currently being used by over 200,000 users across 150 countries. The company has onboarded over 1,000 enterprises and fast-growing startups and works with over 2,500 customers, including organizations such as Adani Enterprises, NPS Trust, Hindustan Unilever, and P&G; financial services and insurance companies such as HDFC Bank, CRED, Groww, SBI Mutual Fund, and TATA Capital; and technology firms such as Binance, Google, and Adobe.

According to the co-founders, Pepper Content is not a startup or an agency but a platform that connects people seamlessly. The company aims to create the perfect symphony between creators and brands when it comes to content. The company is enabling strategic collaboration that will have a tangible, on-ground impact.

The co-founders always wanted to take a product-first approach, which meant understanding the nuances and solving for every use case. The first products were hyper-customised sheets with deep linking of formulae and scripts that enabled the company to piece together workflows. The team worked on 25,000 content pieces in Google Sheets and Docs in the initial stages, which helped the co-founders understand the customer workflow.

Businesses can directly order quality content on the platform with faster turnaround times and complete transparency on the project's progress. The company's intelligent algorithms take care of all the management aspects: from finding the best creator-project match to running agile workflows and driving integrated tool-supported editorial checks for quality content delivery.

"The content marketing industry stands at $400 billion, globally and it is only going to scale further. However, no organised players are enabling seamless workflow for brands. Every company produces and outsources content in written, image, audio, and video formats. To date, companies are required to post requirements, bid for projects and choose from a large list of bidders, and negotiate pay, making it cumbersome and, frankly, unscalable. We are solving this by offering a managed marketplace. We take care of entire content operations, right from the ordering flow to end-to-end delivery. For companies, quality content delivery creates trust and for creators, takes care of timely payments and operational inefficiencies," said Rishabh Shekhar, co-founder and COO, Pepper Content.

The co-founders struggled in the initial days since they did not know anyone from the investor community. "We cold-emailed 80 VC and angel investors! There were a lot of questions and conversations about the company's scale and our age. It took us three months but we persisted and were oversubscribed for the seed funding round. Over the years we scaled a B2B content marketplace, built a product that was unheard of, and have credible investors backing us. We realized that age is no hindrance if your vision is clear and you have a product that creates real impact."

More:

Artificial Intelligence Revolutionizing Content Writing - Entrepreneur


How Rethink Proved Even AI knows "It Has To Be Heinz" | LBBOnline – Little Black Book – LBBonline

Posted: at 12:34 pm

After the success of Heinz's "Draw Ketchup" campaign last year, where creative agency Rethink organised a social experiment that saw participants draw Heinz bottles when asked to draw ketchup, the condiment giant has followed up the campaign with a futuristic and tech-forward next step. Using the image-generating AI system DALL-E 2, Rethink and Heinz fed prompts such as "ketchup bottle stained glass" and "ketchup bottle in psychedelic art" to the AI and produced some fantastically imaginative, but undoubtedly Heinz, pieces of art.

This latest development in the campaign is inspired by the recent rise of AI's popularity in all manner of creative communities online; you can't scroll through Twitter or Instagram without seeing someone's latest AI-generated masterpiece, short story or, unfortunately, ape-based NFT. AI systems and online tools such as DALL-E and Midjourney have provided users with hours of entertainment in recent months, allowing people with no artistic ability or resources to produce artwork seemingly pulled straight from their imaginations. Especially for those lucky few with access to the enhanced capabilities of the newly updated DALL-E 2, the results can be frighteningly good, or just frightening.

LBB's Ben Conway spoke with Heinz's senior brand manager for brand communications, Jacqueline Chao, and Rethink associate creative director, Geoff Baillie, to discuss how they utilised DALL-E 2 to draw ketchup, why this brand platform showcases the strong emotional connection Heinz-lovers have with the brand, and how AI could one day be implemented anywhere there's a need to explore and create visual concepts.

Jacqueline> Last year, we conducted a social experiment and asked people around the world to draw ketchup and our hunch was proven when they all drew Heinz. We were overwhelmed by the reaction to Draw Ketchup. The positive response from consumers and the ad world not only proved the instinctual association between ketchup and Heinz at the consumer level but showcased the strong emotional connections to our brand beyond how we satisfy appetites. Heinz ketchup fans are passionate lovers of ketchup and rarely miss an opportunity to engage with the brand.

Jacqueline> Since innovation is always at the forefront of everything we do, when we saw AI image generation quickly becoming the latest viral sensation and people taking to social to discuss the alarming accuracy of AI drawings, we quickly knew we needed to test this as we did with Draw Ketchup. Heinz always strives to move at the speed of culture and this is just another example of how we push the boundaries and look for ways to engage our fans in first-to-market ways.

Jacqueline> Together, with the support of all our agencies, we come up with concepts and strategies that are exciting and ownable for Heinz. That's exactly the type of work we love to create together, and having partners that can help us uncover culturally relevant moments and generate clever, groundbreaking ideas is key to continuing to engage our fans and remaining an iconic brand.

Geoff> You couldn't ask for a better partner. Heinz always encourages the most innovative and exciting ideas, and pushes us to make those ideas even better. Every campaign we've done with them has been a creative collaboration. And it's not just a dream client, it's a dream brand. We always strive to ground our insights and ideas in brand truths, and working with a brand as iconic as Heinz makes that easy.

Geoff> DALL-E 2's artificial intelligence was trained to recognise different art styles, objects, and compositions, which lets you ask for something that never could have existed before - for example, a Van Gogh painting of Marilyn Monroe in space - and it recognises those individual components and pieces them together. When it came to Heinz, we searched for ketchup bottles in different styles and scenarios - 'Ketchup in Space', 'Ketchup bottle stained glass', and 'Ketchup bottle in psychedelic art' are a few examples. It helps to try lots of styles and scenarios to get the best variety of images.
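
To make that workflow concrete, here is a minimal sketch of how a team might script prompt variations like the ones quoted above against an image-generation API. The prompt strings come from the article; the openai client call, API-key handling, and image size are assumptions for illustration and may differ by library version - this is not a description of Rethink's actual pipeline.

```python
# Illustrative only: loop a handful of style prompts through an image-generation API,
# in the spirit of the "ketchup bottle in <style>" experiment described above.
# The openai client call is an assumption (names vary by library version).
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # assumes a valid API key is set

style_prompts = [
    "ketchup bottle stained glass",
    "ketchup bottle in psychedelic art",
    "ketchup in space",
]

for prompt in style_prompts:
    # Request two candidate images per prompt and print their URLs for review.
    response = openai.Image.create(prompt=prompt, n=2, size="1024x1024")
    for item in response["data"]:
        print(prompt, "->", item["url"])
```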

Geoff> Everyone sensed there was something special about these text-to-image generators, especially DALL-E 2's program. Before we dove into developing the AI ketchup campaign, we let ourselves have fun with the program for a little while. As we toyed around with making bizarre images, we realised how sophisticated this technology is. When it comes to integrating artificial intelligence and creativity, we think this is just the beginning.

The future is coming up fast. Anywhere there's a need to explore and create visual concepts, AI could one day be implemented. As it grows and becomes more advanced we think it'll be similar to Photoshop or 3D printing - a tech tool that helps creatives and brands to express themselves and bring ideas to life. It's already happening with artists and campaigns like ours but there's so much space for it to be applied.

Geoff> It's pretty user-friendly, so it didn't take too long to get the hang of it. But it helped to learn from AI artists and the resources the AI art community is sharing online. It's so new that it feels like the whole world is learning together. There are a lot of ways to make magic, but we found that the most effective method was to start with simple prompts and go from there.

Jacqueline> Watching these unique, surreal images being created in seconds was overwhelmingly impressive. We've long said that when people think of ketchup they think of Heinz. DALL-E 2's creations proved that no matter what we were asking or how many different variations were tested, even AI knows that when it comes to ketchup, it has to be Heinz.

Jacqueline> It's hard to pick favourites because we loved them all, but the bottle in a pool, the ketchup bear and the vaporwave ketchup were at the top; they all look distinctly different while still being quintessentially Heinz. Looking at these images, we're seeing a lot of Heinz(isms). They're not totally perfect, but whether it's the keystone label or our iconic slow-pouring glass bottle, there's no question that our distinct brand elements are incorporated.

Jacqueline> We used DALL-E 2 - one of the most advanced AI image generators available. DALL-E 2 is far more advanced than many of the other AI generators that people are using, but not everyone has access to it yet. By asking our fans to tell us what other AI ketchup creations they'd like us to try out and sharing the most unique, interesting results on our social channels, we're giving ketchup fans an opportunity to try DALL-E 2 for themselves.

Jacqueline> We've long said that when people think of ketchup they think of Heinz, and now we know that something as unbiased as AI does, too. It's very clear that Heinz ketchup has strong brand affinity globally because campaigns such as Draw Ketchup and Heinz Hot Dog Pact resonate on an international level. This allows us to leverage our status as the most iconic ketchup to create innovative programs that people can connect with, keeping Heinz top of mind with consumers and in culture.

Jacqueline> At Heinz, we are always looking for ways to put creativity and culture at the forefront of our campaigns in contextually relevant ways. As part of any campaign, we look to find the right intersection between our brand and relevant culture in order to drive consumer engagement. Our goal is to insert our brands at the speed of culture and uncover unique ways to engage fans in territories that are completely ownable to Heinz. So if there's an opportunity to build on this insight further in a way that resonates with the current moment, that's definitely something we will continue to explore.

Read the original post:

How Rethink Proved Even AI knows "It Has To Be Heinz" | LBBOnline - Little Black Book - LBBonline

Posted in Ai | Comments Off on How Rethink Proved Even AI knows "It Has To Be Heinz" | LBBOnline – Little Black Book – LBBonline

IBM Research Rolls Out A Comprehensive AI And ML Edge Research Strategy Anchored By Enterprise Partnerships And Use Cases – Forbes

Posted: at 12:34 pm


I recently met with Dr. Nick Fuller, Vice President, Distributed Cloud, at IBM Research for a discussion about IBM's long-range plans and strategy for artificial intelligence and machine learning at the edge.

Dr. Fuller is responsible for providing AI and platform-based innovation for enterprise digital transformation spanning edge computing and distributed cloud management. He is an IBM Master Inventor with over 75 patents and co-author of 75 technical publications. Dr. Fuller obtained his Bachelor of Science in Physics and Math from Morehouse College and his PhD in Applied Physics from Columbia University.

Edge In, not Cloud Out

In general, Dr. Fuller told me that IBM is focused on developing an "edge in" position versus a "cloud out" position with data, AI, and Kubernetes-based platform technologies to scale hub and spoke deployments of edge applications.

A hub plays the role of a central control plane used for orchestrating the deployment and management of edge applications in a number of connected spoke locations such as a factory floor or a retail branch, where data is generated or locally aggregated for processing.

"Cloud out" refers to the paradigm in which cloud service providers extend their cloud architecture out to edge locations. In contrast, "edge in" refers to a provider-agnostic, cloud-independent architecture that treats the data plane as a first-class citizen.
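
As a rough illustration of the hub-and-spoke idea described above, the sketch below models a hub that holds the desired application state and pushes it to registered spoke locations from a single control point. All class, location, and application names are hypothetical; this is not IBM's control-plane API.

```python
# Minimal, hypothetical sketch of a hub-and-spoke control plane: the hub holds the
# desired state for edge applications and "deploys" it to each registered spoke
# (a factory floor, a retail branch, ...). Names are illustrative, not IBM's API.
from dataclasses import dataclass, field

@dataclass
class Spoke:
    name: str                               # e.g. "factory-floor-7"
    deployed: dict = field(default_factory=dict)

@dataclass
class Hub:
    spokes: list = field(default_factory=list)

    def register(self, spoke: Spoke) -> None:
        self.spokes.append(spoke)

    def deploy(self, app: str, version: str) -> None:
        # Push the same desired state to every spoke from one control plane.
        for spoke in self.spokes:
            spoke.deployed[app] = version
            print(f"{spoke.name}: {app}=={version} deployed")

hub = Hub()
hub.register(Spoke("factory-floor-7"))
hub.register(Spoke("retail-branch-12"))
hub.deploy("defect-detector", "1.4.2")
```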

IBM's overall architectural principles are scalability, repeatability, and full-stack solution management, with everything managed from a single unified control plane.

IBM's Red Hat platform and infrastructure strategy anchors the application stack with a unified, scalable, and managed OpenShift-based control plane equipped with a high-performance storage appliance and self-healing system capabilities (including semi-autonomous operations).

IBM's strategy also includes several in-progress platform-level technologies for scalable data, AI/ML runtimes, accelerator libraries for Day-2 AI operations, and scalability for the enterprise.

It is important to mention that IBM is designing its edge platforms with labor costs and the technical workforce in mind. Data scientists with PhDs are in high demand, making them difficult to find and expensive to hire. IBM is designing its edge system capabilities and processes so that domain experts, rather than PhDs, can deploy new AI models and manage Day-2 operations.

Why edge is important

Advances in computing and storage have made it possible for AI to process mountains of accumulated data to provide solutions. By bringing AI closer to the source of the data, edge computing is faster and more efficient than the cloud. While cloud data accounts for 60% of the world's data today, vast amounts of new data are being created at the edge by industrial applications, traffic cameras, and order management systems, all of which can be processed at the edge quickly and in a timely manner.

Public cloud and edge computing differ in capacity, technology, and management. An advantage of edge computing is that data is processed and analyzed at or near its collection point. In the case of the cloud, data must be transferred from a local device into the cloud for analytics and then transferred back to the edge. Moving data through the network consumes capacity and adds latency to the process. It's easy to see why executing a transaction at the edge reduces latency and eliminates unnecessary load on the network.
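
The latency argument can be illustrated with a toy calculation: a cloud round trip pays the network in both directions, while edge processing pays only the local inference cost. The millisecond figures below are invented purely for illustration; real numbers depend entirely on the network and workload.

```python
# Toy illustration of the round-trip cost described above. All latency figures
# are invented; they are not measurements from any IBM deployment.
def cloud_path(inference_ms: float, uplink_ms: float = 40.0, downlink_ms: float = 40.0) -> float:
    # Device -> cloud -> device: the data pays the network both ways.
    return uplink_ms + inference_ms + downlink_ms

def edge_path(inference_ms: float) -> float:
    # Processed at or near the point of collection: no WAN round trip.
    return inference_ms

for infer in (5.0, 20.0):
    print(f"inference={infer}ms  cloud={cloud_path(infer):.0f}ms  edge={edge_path(infer):.0f}ms")
```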

Increased privacy is another benefit of processing data at the edge. Analyzing data where it originates limits the risk of a security breach. Most of the communication between the edge and the cloud is then confined to things such as reporting, data summaries, and AI models, without ever exposing the raw data.

IBM at the Edge

In our discussion, Dr. Fuller provided a few examples to illustrate how IBM plans to provide new and seamless edge solutions for existing enterprise problems.

Example #1 McDonald's drive-thru

An ordering system using AI and NLP for QSR applications has a global market. A car orders lunch at ... the McDonald's drive-thru in Charnwood, Australian Capital Territory

Dr. Fuller's first example centered on the Quick Service Restaurant (QSR) problem of drive-thru order taking. Last year, IBM acquired an automated order-taking system from McDonald's. As part of the acquisition, IBM and McDonald's established a partnership to perfect voice-ordering methods using AI. Drive-thru orders are a significant percentage of total QSR orders for McDonald's and other QSR chains.

McDonald's and other QSR restaurants would like every order to be processed as quickly and accurately as possible. For that reason, McDonald's conducted trials at ten Chicago restaurants using an edge-based AI ordering system with NLP (Natural Language Processing) to convert spoken orders into a digital format. It was found that AI had the potential to reduce ordering errors and processing time significantly. Since McDonald's sells almost 7 million hamburgers daily, shaving a minute or two off each order represents a significant opportunity to address labor shortages and increase customer satisfaction.
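
As a sketch of the "spoken order to digital order" step described above, the snippet below turns a transcribed utterance into a structured order using simple pattern matching. A production system would use trained NLP models for intent and entity extraction; the menu, prices, and phrasing here are invented for illustration.

```python
# Minimal sketch of converting a transcribed drive-thru utterance into a structured
# order. Real systems use trained NLP models; this uses regex over an invented menu
# purely to illustrate the "spoken order -> digital order" step.
import re

MENU = {"hamburger": 1.99, "cheeseburger": 2.49, "fries": 1.49, "shake": 2.99}
NUMBERS = {"a": 1, "an": 1, "one": 1, "two": 2, "three": 3, "four": 4}

def parse_order(utterance: str) -> dict:
    """Turn a transcribed utterance into {item: quantity}."""
    order = {}
    pattern = r"\b(\w+)\s+(hamburgers?|cheeseburgers?|fries|shakes?)\b"
    for qty_word, item in re.findall(pattern, utterance.lower()):
        qty = NUMBERS.get(qty_word)
        if qty is None and qty_word.isdigit():
            qty = int(qty_word)
        if qty is None:
            continue                      # ignore phrases without a clear quantity
        key = "fries" if item == "fries" else item.rstrip("s")
        order[key] = order.get(key, 0) + qty
    return order

def total(order: dict) -> float:
    """Price the order against the (invented) menu."""
    return sum(MENU[item] * qty for item, qty in order.items())

order = parse_order("I'll take two cheeseburgers, one fries and a shake")
print(order)                              # {'cheeseburger': 2, 'fries': 1, 'shake': 1}
print(f"total: ${total(order):.2f}")      # total: $9.46
```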

Example #2 Boston Dynamics and Spot the agile mobile robot

The author with Boston Dynamics' Spot, the agile mobile robot, at IBM Think 2022

According to an earlier IBM survey, many manufacturers have already implemented AI-driven robotics with autonomous decision-making capability. The study also indicated that over 80 percent of companies believe AI can help improve future business operations. However, some companies expressed concern about the limited mobility of edge devices and sensors.

Mobile readings with Boston Dynamics' mobile robot

To develop a mobile edge solution, IBM teamed up with Boston Dynamics. The partnership created an agile mobile robot using IBM Research and IBM Sustainability Software AI technology. The device can analyze visual sensor readings in hazardous and challenging industrial environments such as manufacturing plants, warehouses, electrical grids, and waste treatment plants. The value proposition that Boston Dynamics brought to the partnership was Spot, the agile mobile robot: a walking, sensing, and actuation platform. Like all edge applications, the robot's wireless mobility uses self-contained AI/ML that doesn't require access to cloud data. It uses cameras to read analog devices, visually monitor fire extinguishers, and conduct visual inspections of human workers to determine if required safety equipment is being worn.

IBM was able to show up to a 10X speedup by automating some manual tasks, such as converting the detection of a problem into an immediate work order in IBM Maximo to correct it. A fast automated response was not only more efficient, but it also improved the safety posture and risk management for these facilities. Similarly, some factories need to thermally monitor equipment to identify any unexpected hot spots that may show up over time, indicative of a potential failure.
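
The "detection becomes a work order" automation described above can be sketched as a simple threshold check that emits a work-order payload when a hot spot is found. The threshold, asset names, endpoint, and payload fields below are placeholders for illustration, not the actual IBM Maximo API.

```python
# Hedged sketch of "detection -> work order" automation in the spirit of the Maximo
# integration described above. Threshold, asset IDs, and payload fields are placeholders.
import json

HOT_SPOT_THRESHOLD_C = 85.0   # illustrative threshold, not a real specification

def create_work_order(asset_id: str, temperature_c: float) -> dict:
    """Build the work-order payload a real integration would submit to Maximo."""
    payload = {
        "asset": asset_id,                                   # placeholder asset ID
        "description": f"Thermal hot spot detected: {temperature_c:.1f} C",
        "priority": 1,
    }
    # A real deployment would POST this to the Maximo work-order API with proper
    # authentication and error handling; here we only print it for illustration.
    print("work order:", json.dumps(payload))
    return payload

def inspect(readings: dict) -> None:
    """Check the latest thermal readings and raise work orders for hot spots."""
    for asset_id, temperature_c in readings.items():
        if temperature_c > HOT_SPOT_THRESHOLD_C:
            create_work_order(asset_id, temperature_c)

inspect({"transformer-12": 91.3, "connector-7a": 64.8})
```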

Thermal Inspection of Planar & Non-Planar Assets

IBM is working with National Grid, an energy company, to develop a mobile solution using Spot, the agile mobile robot, for image analysis of transformers and thermal connectors. As shown in the above graphic, Spot also monitored connectors on both flat and 3D surfaces. IBM was able to show that Spot could detect excessive heat build-up in small connectors, potentially avoiding unsafe conditions or costly outages. This AI/ML edge application can produce faster response times when an issue is detected, which is why IBM believes significant gains are possible by automating the entire process.

IBM market opportunities

Edge Market & Use Cases

Drive-thru orders and mobile robots are just a few examples of the millions of potential AI applications that exist at the edge and are driven by several billion connected devices.

Edge computing is an essential part of enterprise digital transformation. Enterprises seek ways to demonstrate the feasibility of solving business problems using AI/ML and analytics at the edge. However, once a proof of concept has been successfully demonstrated, companies commonly struggle with scalability, data governance, and full-stack solution management.

Challenges with scaling

Challenges in scaling AI Application deployments

"Determining entry points for AI at the edge is not the difficult part," Dr. Fuller said. "Scale is the real issue."

Scaling edge models is complicated because there are so many edge locations with large amounts of diverse content and high device density. Because large amounts of data are required for training, data gravity is a potential problem. Further, in many scenarios, vast amounts of data are generated quickly, leading to potential data storage and orchestration challenges. AI models are also rarely "finished"; monitoring and retraining are necessary to keep up with changes in the environment.

Through IBM Research, IBM is addressing the many challenges of building an all-encompassing edge architecture and horizontally scalable data and AI technologies. IBM has a wealth of edge capabilities and an architecture to create the appropriate platform for each application.

IBM AI entry points at the edge

IBM sees Edge Computing as a $200 billion market by 2025. Dr. Fuller and his organization have identified four key market entry points for developing and expanding IBM's edge compute strategy. In order of size, IBM believes its priority edge markets to be intelligent factories (Industry 4.0), telcos, retail automation, and connected vehicles.

IBM and its Red Hat portfolio already have an established presence in each market segment, particularly in intelligent operations and telco. Red Hat is also active in the connected vehicles space.

Industry 4.0

There have been three prior industrial revolutions, beginning in the 1700s, leading up to the current, in-progress fourth revolution, Industry 4.0, which promotes digital transformation.

Manufacturing is the fastest-growing and the largest of IBM's four entry markets. In this segment, AI at the edge can improve quality control, production optimization, asset management, and supply chain logistics. IBM believes there are opportunities to achieve a 4x speedup in implementing edge-based AI solutions for manufacturing operations.

Major Automotive OEM

For its Industry 4.0 use case development, IBM, through product, development, research and consulting teams, is working with a major automotive OEM. The partnership has established the following joint objectives:

Maximo Application Suite

IBM's Maximo Application Suite plays an important part in implementing large manufacturers' current and future IBM edge solutions. Maximo is an integrated public or private cloud platform that uses AI, IoT, and analytics to optimize performance, extend asset lifecycles, and reduce operational downtime and costs. IBM is working with several large manufacturing clients currently using Maximo to develop edge use cases, and even uses it within its own manufacturing operations.

IBM has research underway to develop a more efficient method of handling life cycle management of large models that require immense amounts of data. Day 2 AI operations tasks can sometimes be more complex than initial model training, deployment, and scaling. Retraining at the edge is difficult because resources are typically limited.

Once a model is trained and deployed, it is important to monitor it for drift caused by changes in data distributions or anything that might cause a model to deviate from original requirements. Inaccuracies can adversely affect model ROI.
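
One common way to implement that kind of drift monitoring is to compare a feature's recent production distribution against the training-time reference distribution, for example with a two-sample Kolmogorov-Smirnov test. The sketch below uses synthetic data and an arbitrary significance threshold; it is only an illustration of the idea, not IBM's monitoring stack.

```python
# Minimal sketch of drift monitoring for a deployed model: compare a feature's recent
# production values against the training-time reference distribution with a two-sample
# Kolmogorov-Smirnov test. Data and threshold are synthetic and illustrative.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
reference = rng.normal(loc=0.0, scale=1.0, size=5_000)   # feature seen at training time
production = rng.normal(loc=0.4, scale=1.0, size=1_000)  # recent edge traffic, shifted

statistic, p_value = ks_2samp(reference, production)
if p_value < 0.01:
    print(f"drift detected (KS={statistic:.3f}, p={p_value:.1e}); schedule retraining")
else:
    print("no significant drift detected")
```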

Day-2 AI Operations (retraining and scaling)

Day-2 AI operations consist of continual updates to AI models and applications to keep up with changes in data distributions, changes in the environment, a drop in model performance, availability of new data, and/or new regulations.

IBM recognizes the advantages of performing Day-2 AI Operations, which includes scaling and retraining at the edge. It appears that IBM is the only company with an architecture equipped to effectively handle Day-2 AI operations. That is a significant competitive advantage for IBM.

A company using an architecture that requires data to be moved from the edge back into the cloud for Day-2 related work will be unable to support many factory AI/ML applications because of the sheer number of AI/ML models to support (100s to 1000s).

"There is a huge proliferation of data at the edge that exists in multiple spokes," Dr. Fuller said. "However, all that data isn't needed to retrain a model. It is possible to cluster data into groups and then use sampling techniques to retrain the model. There is much value in federated learning from our point of view."

Federated learning is a promising training solution being researched by IBM and others. It preserves privacy by using a collaboration of edge devices to train models without sharing the data with other entities. It is a good framework to use when resources are limited.
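
A minimal sketch of the federated idea, assuming simple linear models and synthetic data: each spoke trains locally, and only the fitted weights travel to the hub, which averages them. Real federated learning adds secure aggregation, client weighting, and many training rounds; this is only meant to show why the raw data never has to leave the edge.

```python
# Minimal federated-averaging sketch: each spoke fits a local linear model on its own
# data and only the model weights (never the raw data) are sent to the hub, which
# averages them. Synthetic data; not a production federated-learning system.
import numpy as np

rng = np.random.default_rng(42)
true_w = np.array([2.0, -1.0])

def local_fit(n_samples: int) -> np.ndarray:
    """Train locally at one spoke; only the fitted weights leave the device."""
    X = rng.normal(size=(n_samples, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n_samples)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

spoke_weights = [local_fit(n) for n in (200, 500, 150)]   # three edge locations
global_w = np.mean(spoke_weights, axis=0)                  # aggregation at the hub
print("federated estimate:", np.round(global_w, 3))
```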

Dealing with limited resources at the edge is a challenge. IBM's edge architecture accommodates the need to ensure that resource budgets for AI applications are met, especially when deploying multiple applications and multiple models across edge locations. For that reason, IBM developed a method to deploy data and AI applications that scales Day-2 AI operations using the hub-and-spoke model.

Data and AI Platform: Scaling Day 2 - AI Operations

The graphic above compares the status quo method of performing Day-2 operations, using centralized applications and a centralized data plane, with the more efficient managed hub-and-spoke method, which uses distributed applications and a distributed data plane. The hub allows it all to be managed from a single pane of glass.

Data Fabric Extensions to Hub and Spokes

Extending Data Fabric to Hub and Spokes: Key Capabilities

IBM uses hub and spoke as a model to extend its data fabric. The model should not be thought of in the context of a traditional hub and spoke. IBM's hub provides centralized capabilities to manage clusters and create multiple hubs that can be aggregated to a higher level. This architecture has four important data management capabilities.

In addition to AI deployments, the hub-and-spoke architecture and the previously mentioned capabilities can be employed more generally to tackle challenges many enterprises face in consistently managing an abundance of devices within and across their enterprise locations. Managing the software delivery lifecycle or addressing security vulnerabilities across a vast estate are cases in point.

Multicloud and Edge platform

Multicloud and Edge Platform

In the context of its strategy, IBM sees edge and distributed cloud as an extension of its hybrid cloud platform built around Red Hat OpenShift. One of the newer and more useful options created by the Red Hat development team is Single Node OpenShift (SNO), a compact version of OpenShift that fits on a single server. It is suited to locations that still run on servers but require only a single-node, non-clustered deployment.

For smaller footprints such as industrial PCs or computer vision boards (for example, the Nvidia Jetson Xavier), Red Hat is working on a project that builds an even smaller version of OpenShift, called MicroShift, which provides full application deployment and Kubernetes management capabilities. It is packaged so that it can be used for edge-device deployments.

Overall, IBM and Red Hat have developed a full complement of options to address a large spectrum of deployments across different edge locations and footprints, ranging from containers to full-blown Kubernetes applications, managed with MicroShift, OpenShift, and IBM Edge Application Manager.

Much is still in the research stage. IBM's objective is to achieve greater consistency in how locations and application lifecycles are managed.

First, Red Hat plans to introduce hierarchical layers of management with Red Hat Advanced Cluster Management (RHACM), to scale the number of edge locations managed by this product by two to three orders of magnitude. Additionally, securing edge locations is a major focus. Red Hat is continuously expanding platform security features, for example by recently including Integrity Measurement Architecture in Red Hat Enterprise Linux, or by adding Integrity Shield to protect policies in RHACM.

Red Hat is partnering with IBM Research to advance technologies that will permit it to protect platform integrity and the integrity of client workloads through the entire software supply chain. In addition, IBM Research is working with Red Hat on analytic capabilities to identify and remediate vulnerabilities and other security risks in code and configurations.

Telco network intelligence and slice management with AI/ML

Communication service providers (CSPs) such as telcos are key enablers of 5G at the edge. 5G benefits for these providers include:

The end-to-end 5G network comprises the Radio Access Network (RAN), transport, and core domains. Network slicing in 5G is an architecture that enables multiple virtual, independent end-to-end logical networks with different characteristics, such as low latency or high bandwidth, to be supported on the same physical network. This is implemented using cloud-native technology enablers such as software-defined networking (SDN), virtualization, and multi-access edge computing. Slicing offers the necessary flexibility by allowing the creation of specific applications, unique services, and defined user groups or networks.
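
One way to picture slicing is as a set of logical networks, each with its own QoS characteristics, plus a selector that matches an application's requirements to a slice. The slice names and figures below are invented for illustration and are not drawn from IBM's or any CSP's configuration.

```python
# Illustrative sketch of network slices as independent logical networks with different
# QoS characteristics, and a simple selector that matches an application's requirements
# to a slice. Slice names and numbers are invented.
from dataclasses import dataclass
from typing import List, Optional

@dataclass(frozen=True)
class Slice:
    name: str
    max_latency_ms: float
    min_bandwidth_mbps: float

SLICES: List[Slice] = [
    Slice("urllc-factory", max_latency_ms=5.0, min_bandwidth_mbps=50.0),   # low latency
    Slice("embb-video", max_latency_ms=50.0, min_bandwidth_mbps=500.0),    # high bandwidth
    Slice("mmtc-sensors", max_latency_ms=200.0, min_bandwidth_mbps=1.0),   # massive IoT
]

def pick_slice(latency_budget_ms: float, bandwidth_mbps: float) -> Optional[Slice]:
    """Return a slice that satisfies the application's latency and bandwidth needs."""
    candidates = [s for s in SLICES
                  if s.max_latency_ms <= latency_budget_ms
                  and s.min_bandwidth_mbps >= bandwidth_mbps]
    # Prefer the tightest-latency slice that still meets the bandwidth requirement.
    return min(candidates, key=lambda s: s.max_latency_ms, default=None)

print(pick_slice(latency_budget_ms=10.0, bandwidth_mbps=20.0))  # -> urllc-factory slice
```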

An important aspect of enabling AI at the edge requires IBM to provide CSPs with the capability to deploy and manage applications across various enterprise locations, possibly spanning multiple end-to-end network slices, using a single pane of glass.

5G network slicing and slice management

5G Network Slice Management

Network slices are an essential part of IBM's edge infrastructure that must be automated, orchestrated and optimized according to 5G standards. IBM's strategy is to leverage AI/ML to efficiently manage, scale, and optimize the slice quality of service, measured in terms of bandwidth, latency, or other metrics.

5G and AI/ML at the edge also represent a significant opportunity for CSPs to move beyond traditional cellular services and capture new sources of revenue with new services.

Communications service providers need management and control of 5G network slicing enabled with AI-powered automation.

Dr. Fuller sees a variety of opportunities in this area. "When it comes to applying AI and ML on the network, you can detect things like intrusions and malicious actors," he said. "You can also determine the best way to route traffic to an end user. Automating 5G functions that run on the network using IBM network automation software also serves as an entry point."

In IBM's current telecom trial, IBM Research is spearheading the development of a range of capabilities targeted for the IBM Cloud Pak for Network Automation product, using AI and automation to orchestrate, operate, and optimize multivendor network functions and services that include:

Future leverage of these capabilities by existing IBM clients that use the Cloud Pak for Network Automation (e.g., DISH) can offer further differentiation for CSPs.

5G radio access

Intelligence @ the Edge of 5G networks

Open radio access networks (O-RANs) are expected to significantly impact telco 5G wireless edge applications by allowing a greater variety of units to access the system. The O-RAN concept separates the DU (Distributed Unit) and CU (Centralized Unit) from the Baseband Unit used in 4G and connects them with open interfaces.

The O-RAN system is more flexible. It uses AI to establish connections over open interfaces, optimizing how a device is categorized by analyzing information about its prior use. Like other edge models, the O-RAN architecture provides an opportunity for continuous monitoring, verification, analysis, and optimization of AI models.

The IBM-telco collaboration is expected to advance O-RAN interfaces and workflows. Areas currently under development are:

IBM Cloud and Infrastructure

The cornerstone for delivering IBM's edge solutions as a service is IBM Cloud Satellite. It presents a consistent, cloud-ready, cloud-native operational view with OpenShift and IBM Cloud PaaS services at the edge. In addition, IBM's integrated hardware and software edge systems will provide RHACM-based management of the platform when clients or third parties have existing managed-as-a-service models. In either case, this is done within a single control plane for hubs and spokes, which helps optimize execution and management from any cloud to the edge in the hub-and-spoke model.

Secure Decentralized Edge Data Lake

IBM's "edge in" focus means it can provide the underlying infrastructure, such as the software-defined storage for a federated-namespace data lake shown above, which can span other hyperscaler clouds. Additionally, IBM is exploring integrated full-stack edge storage appliances based on hyperconverged infrastructure (HCI), such as Spectrum Fusion HCI, for enterprise edge deployments.

As mentioned earlier, data gravity is one of the main driving factors of edge deployments. IBM has designed its infrastructure to meet those data gravity requirements, not just for the existing hub and spoke topology but also for a future spoke-to-spoke topology where peer-to-peer data sharing becomes imperative (as illustrated with the wealth of examples provided in this article).

Wrap up

Edge is a distributed computing model. One of its main advantages is that computing, data storage, and processing happen close to where data is created. Without the need to move data to the cloud for processing, applying analytics and AI capabilities in real time provides immediate solutions and drives business value.

IBM's goal is not to move the entirety of its cloud infrastructure to the edge. That would add little value; the edge would simply function as a spoke in a hub-to-spoke model, acting on actions and configurations dictated by the hub.

IBM's architecture will provide the edge with autonomy to determine where data should reside and from where the control plane should be exercised.

Equally important, IBM foresees this architecture evolving into a decentralized model capable of edge-to-edge interactions. IBM has no firm designs for this as yet. However, the plan is to make the edge infrastructure and platform a first-class citizen instead of relying on the cloud to drive what happens at the edge.

Developing a complete and comprehensive AI/ML edge architecture - and in fact, an entire ecosystem - is a massive undertaking. IBM faces many known and unknown challenges that must be solved before it can achieve success.

However, IBM is one of the few companies with the necessary partners and the technical and financial resources to undertake and successfully implement a project of this magnitude and complexity.

Read more:

IBM Research Rolls Out A Comprehensive AI And ML Edge Research Strategy Anchored By Enterprise Partnerships And Use Cases - Forbes

Posted in Ai | Comments Off on IBM Research Rolls Out A Comprehensive AI And ML Edge Research Strategy Anchored By Enterprise Partnerships And Use Cases – Forbes
