The truth about AI and ROI: Can artificial intelligence really deliver? – VentureBeat

More than ever, organizations are putting their confidence and investment into the potential of artificial intelligence (AI) and machine learning (ML).

According to the 2022 IBM Global AI Adoption Index, 35% of companies report using AI today in their business, while an additional 42% say they are exploring AI. Meanwhile, a McKinsey survey found that 56% of respondents reported they had adopted AI in at least one function in 2021, up from 50% in 2020.

But can investments in AI deliver true ROI that directly impacts a company's bottom line?

According to Domino Data Lab's recent REVelate survey, which polled attendees at New York City's Rev3 conference in May, many respondents seem to think so. Nearly half, in fact, expect double-digit growth as a result of data science. And 4 in 5 respondents (79%) said that data science, ML and AI are critical to the overall future growth of their company, with 36% calling it the single most critical factor.

Implementing AI, of course, is no easy task. Other survey data shows another side of the confidence coin. For example, recent survey data from AI engineering firm CognitiveScale finds that, although execs know data quality and deployment are critical success factors for app development that drives digital transformation, more than 76% aren't sure how to get there in their target 12-18 month window. In addition, 32% of execs say that it has taken longer than expected to get an AI system into production.

"ROI from AI is possible, but it must be accurately described and personified according to a business goal," Bob Picciano, CEO of CognitiveScale, told VentureBeat.

"If the business goal is to get more long-range prediction and increased prediction accuracy with historical data, that's where AI can come into play," he said. "But AI has to be accountable to drive business effectiveness; it's not sufficient to say an ML model was 98% accurate."

Instead, the ROI could be, for example, that AI-driven capabilities reduce average call handling time in order to improve call center effectiveness.

"That kind of ROI is what they talk about in the C-suite," he explained. "They don't talk about whether the model is accurate or robust or drifting."

Shay Sabhikhi, co-founder and COO at CognitiveScale, added that he's not surprised that 76% of respondents reported having trouble scaling their AI efforts. "That's exactly what we're hearing from our enterprise clients," he said. One problem, he explained, is friction between data science teams and the rest of the organization, which doesn't know what to do with the models they develop.

"Those models may have potentially the best algorithms and precision recall, but sit on the shelf because they literally get thrown over to the development team that then has to scramble, trying to assemble the application together," he said.

At this point, however, organizations have to be accountable for their investments in AI, because AI is no longer a series of science experiments, Picciano pointed out. "We call it going from the lab to life," he said. "I was at a chief data analytics officer conference and they all said, how do I scale? How do I industrialize AI?"

However, not everyone agrees that ROI is even the best way to measure whether AI drives value in the organization. According to Nicola Morini Bianzino, global chief technology officer at EY, thinking of artificial intelligence and the enterprise in terms of use cases that are then measured through ROI is the wrong way to go about AI.

"To me, AI is a set of techniques that will be deployed pretty much everywhere across the enterprise; there is not going to be an isolation of a use case with the associated ROI analysis," he said.

Instead, he explained, organizations simply have to use AI everywhere. "It's almost like the cloud, where two or three years ago I had a lot of conversations with clients who asked, 'What is the ROI? What's the business case for me to move to the cloud?' Now, post-pandemic, that conversation doesn't happen anymore. Everybody just says, 'I've got to do it.'"

Also, Bianzino pointed out, discussing AI and ROI depends on what you mean by using AI.

"Let's say you are trying to apply some self-driving capabilities, that is, computer vision as a branch of AI," he said. "Is that a business case? No, because you cannot implement self-driving without AI." The same is true for a company like EY, which ingests massive amounts of data and provides advice to clients, which can't be done without AI. "It's something that you cannot isolate away from the process; it's built into it," he said.

In addition, AI, by definition, is not productive or efficient on day one. It takes time to get the data, train the models, evolve the models and scale up the models. "It's not like one day you can say, 'I'm done with the AI,' and 100% of the value is right there. No, this is an ongoing capability that gets better in time," he said. "There is not really an end in terms of value that can be generated."

In a way, Bianzino said, AI is becoming part of the cost of doing business. "If you are in a business that involves data analysis, you cannot not have AI capabilities," he explained. "Can you isolate the business case of these models? It is very difficult and I don't think it's necessary. To me, it's almost like a cost of the infrastructure to run your business."

Kjell Carlsson, head of data science strategy and evangelism at enterprise MLOps provider Domino Data Lab, says that at the end of the day, what organizations want is a measure of the business impact of ROI: how much it contributed to the bottom line. But one problem is that this can be quite disconnected from how much work has gone into developing the model.

"So if you create a model which improves click-through conversion by a percentage point, you've just added several million dollars to the bottom line of the organization," he said. But you could also have created a good predictive maintenance model that gives advance warning that a piece of machinery needs maintenance before it fails. In that case, the dollar-value impact on the organization could be entirely different, even though one of them might end up being a much harder problem, he added.
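
To make the click-through arithmetic concrete, here is a back-of-the-envelope sketch in Python; the traffic, conversion and order-value figures are invented purely for illustration.

```python
# Hypothetical figures chosen only to illustrate the scale of the effect.
monthly_visits = 10_000_000        # site traffic
baseline_conversion = 0.020        # 2.0% of visits convert today
improved_conversion = 0.030        # model lifts conversion by one percentage point
average_order_value = 40.0         # dollars per converted visit

extra_orders = monthly_visits * (improved_conversion - baseline_conversion)
extra_revenue_per_year = extra_orders * average_order_value * 12

print(f"Additional orders per month: {extra_orders:,.0f}")
print(f"Additional revenue per year: ${extra_revenue_per_year:,.0f}")
# With these assumptions: 100,000 extra orders/month, roughly $48,000,000/year.
```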

Overall, organizations do need a balanced scorecard for tracking AI in production. "Because if you're not getting anything into production, then that's probably a sign that you've got an issue," he said. "On the other hand, if you are getting too much into production, that can also be a sign that there's an issue."

For example, the more models data science teams deploy, the more models they're on the hook for managing and maintaining. "So you deployed this many models in the last year, so you can't actually undertake these other high-value ones that are coming your way," he explained.

But another issue in measuring the ROI of AI is that for a lot of data science projects, the outcome isn't a model that goes into production. "If you want to do a quantitative win-loss analysis of deals in the last year, you might want to do a rigorous statistical investigation of that," he said. "But there's no model that would go into production; you're using the AI for the insights you get along the way."

Still, organizations can't measure the role of AI if data science activities aren't tracked. "One of the problems right now is that so few data science activities are really being collected and analyzed," said Carlsson. "If you ask folks, they say they don't really know how the model is performing, or how many projects they have, or how many code commits your data scientists have made within the last week."

One reason for that is the very disconnected set of tools data scientists are required to use. "This is one of the reasons why Git has become all the more popular as a repository, a single source of truth for your data scientists in an organization," he explained. MLOps vendors such as Domino Data Lab offer platforms that support these different tools. "The degree to which organizations can create these more centralized platforms is important," he said.

Wallaroo CEO and founder Vid Jain spent close to a decade in the high-frequency trading business at Merrill Lynch, where his role, he said, was to deploy machine learning at scale and do so with a positive ROI.

The challenge was not actually developing the data science, cleansing the data or building the trade repositories, now called data lakes. "By far, the biggest challenge was taking those models, operationalizing them and delivering the business value," he said.

"Delivering the ROI turns out to be very hard. 90% of these AI initiatives don't generate their ROI, or they don't generate enough ROI to be worth the investment," he said. "But this is top of mind for everybody. And the answer is not one thing."

A fundamental issue is that many assume operationalizing machine learning is not much different from operationalizing a standard kind of application, he explained, adding that there is a big difference, because AI is not static.

"It's almost like tending a farm, because the data is living, the data changes and you're not done," he said. "It's not like you build a recommendation algorithm and then people's behavior of how they buy is frozen in time. People change how they buy. All of a sudden, your competitor has a promotion. They stop buying from you. They go to the competitor. You have to constantly tend to it."

Ultimately, every organization needs to decide how it will align its culture to the end goal of implementing AI. "Then you really have to empower the people to drive this transformation, and then make the people that are critical to your existing lines of business feel like they're going to get some value out of the AI," he said.

Most companies are still early in that journey, he added. "I don't think most companies are there yet, but I've certainly seen over the last six to nine months that there's been a shift towards getting serious about the business outcome and the business value."

But the question of how to measure the ROI of AI remains elusive for many organizations. "For some there are some basic things, like they can't even get their models into production, or they can but they're flying blind, or they are successful but now they want to scale," Jain said. "But as far as the ROI, there is often no P&L associated with machine learning."

Often, AI initiatives are part of a center of excellence and the ROI is claimed by the business units, he explained, while in other cases it's simply difficult to measure.

"The problem is, is the AI part of the business? Or is it a utility? If you're a digital native, AI might be part of the fuel the business runs on," he said. "But in a large organization that has legacy businesses or is pivoting, how to measure ROI is a fundamental question they have to wrestle with."

What is artificial intelligence? – VentureBeat

The words "artificial intelligence" (AI) have been used to describe the workings of computers for decades, but the precise meaning has shifted with time. Today, AI describes efforts to teach computers to imitate a human's ability to solve problems and make connections based on insight, understanding and intuition.

Artificial intelligence usually encompasses the growing body of cutting-edge work in technology that aims to train machines to accurately imitate, or in some cases exceed, the capabilities of humans.

Older algorithms, when they grow commonplace, tend to be pushed out of the tent. For instance, transcribing human voices into words was once an active area of research for scientists exploring artificial intelligence. Now it is a common feature embedded in phones, cars and appliances, and it isn't described with the term as often.

Today, AI is often applied across several areas of research.

Artificial intelligence work has a wide range of practical applications. Some chores are well understood, and the algorithms for solving them are already well developed and rendered in software. They may be far from perfect, but the application is well defined. Finding the best route for a trip, for instance, is now widely available via navigation applications in cars and on smartphones.

Other areas are more philosophical. Science fiction authors have been writing about computers developing human-like attitudes and emotions for decades, and some AI researchers have been exploring this possibility. While machines are increasingly able to work autonomously, general questions of sentience, awareness or self-awareness remain open and without a definite answer.

[Related: Sentient artificial intelligence: Have we reached peak AI hype?]

AI researchers often speak of a hierarchy of capability and awareness. The directed tasks at the bottom are often called narrow AI or reactive AI. These algorithms can solve well-defined problems, sometimes without much direction from humans. Many of the applied AI packages fall into this category.

The notion of general AI, or self-directed AI, applies to software that could think like a human and initiate plans outside of a well-defined framework. There are no good examples of this level of AI at this time, although some developers like to suggest that their tools are beginning to exhibit some of this independence.

Beyond this is the idea of super AI, a package that could outperform humans in reasoning and initiative. It is discussed largely hypothetically by advanced researchers and science fiction authors.

In the last decade, many ideas from the AI laboratory have found homes in commercial products. As the AI industry has emerged, many of the leading technology companies have assembled AI products through a mixture of acquisitions and internal development. These products offer a wide range of solutions, and many businesses are experimenting with using them to solve problems for themselves and their customers.

Leading companies have invested heavily in AI and developed a wide range of products aimed at both developers and end users. Their product lines are increasingly diverse as the companies experiment with different tiers of solutions to a wide range of applied problems. Some are more polished and aimed at the casual computer user. Others are aimed at programmers who will integrate the AI into their own software to enhance it. The largest companies all offer dozens of products now, and it's hard to summarize their increasingly varied options.

IBM has long been one of the leaders in AI research. Its AI-based competitor in the TV game show Jeopardy!, Watson, helped ignite the recent interest in AI when it beat human champions in 2011, demonstrating how adept the software could be at handling more general questions posed in human language.

Since then, IBM has built a broad collection of applied AI algorithms under the Watson brand name that can automate decisions in a wide range of business applications like risk management, compliance, business workflow and devops. These solutions rely upon a mixture of natural language processing and machine learning to create models that can either make production decisions or watch for anomalies. In one case study of its applications, for instance, the IBM Safer Payments product prevented $115 million worth of credit card fraud.

As another example, Microsoft's AI platform offers a wide range of algorithms, both as products and as services available through Azure. The company also targets machine learning and computer vision applications and likes to highlight how its tools search for secrets inside extremely large data sets. Its Megatron-Turing Natural Language Generation model (MT-NLG), for instance, has 530 billion parameters to model the nuances of human communication. Microsoft is also working on helping business processes shift from being automated to becoming autonomous by adding more intelligence to handle decision-making. Its autonomous packages are, for instance, being applied to both the narrow problem of keeping assembly lines running smoothly and the wider challenge of navigating drones.

Google has developed a strong collection of machine learning and computer vision algorithms that it uses for internal projects such as indexing the web, while also reselling the services through its cloud platform. It has pioneered some of the most popular open-source machine learning platforms, like TensorFlow, and has built custom hardware for speeding up the training of models on large data sets. Google's Vertex AI product, for instance, automates much of the work of turning a data set into a working model that can then be deployed. The company also offers a number of pretrained models for common tasks like optical character recognition or conversational AI that might be used for an automated customer service agent.

Amazon, too, uses a collection of AI routines internally on its retail website, while marketing the same backend tools to AWS users. Products like Personalize are optimized for offering customers personalized product recommendations. Rekognition offers predeveloped machine vision algorithms for content moderation, facial recognition, and text detection and conversion. These algorithms also come with a prebuilt collection of models of well-known celebrities, a useful tool for media companies. Developers who want to create and train their own models can turn to products like SageMaker, which automates much of the workload for business analysts and data scientists.
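
For a sense of how these prebuilt AWS services are typically called, here is a minimal sketch using the boto3 SDK; the bucket and object names are placeholders, and a real application would add error handling and batching.

```python
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

# Label detection on an image stored in S3 (bucket/key are placeholders).
labels = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": "my-media-bucket", "Name": "uploads/photo.jpg"}},
    MaxLabels=10,
    MinConfidence=80.0,
)
for label in labels["Labels"]:
    print(label["Name"], round(label["Confidence"], 1))

# Content moderation and celebrity recognition use the same request shape.
moderation = rekognition.detect_moderation_labels(
    Image={"S3Object": {"Bucket": "my-media-bucket", "Name": "uploads/photo.jpg"}}
)
celebrities = rekognition.recognize_celebrities(
    Image={"S3Object": {"Bucket": "my-media-bucket", "Name": "uploads/photo.jpg"}}
)
```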

Facebook also uses artificial intelligence to help manage the endless stream of images and text posts. Algorithms for computer vision classify uploaded images, and text algorithms analyze the words in status updates. While the company maintains a strong research team, it does not actively offer standalone products for others to use. It does share a number of open-source projects, like NeuralProphet, a framework for time-series forecasting.

Additionally, Oracle is integrating some of the most popular open-source tools, like PyTorch and TensorFlow, into its data storage hierarchy to make it easier and faster to turn information stored in Oracle databases into working models. It also offers a collection of prebuilt AI tools with models for tackling common challenges like anomaly detection or natural language processing.

New AI companies tend to be focused on one particular task where applied algorithms and a determined focus can produce something transformative. For instance, a wide-reaching current challenge is producing self-driving cars. Waymo, Pony AI, Cruise Automation and Argo are four major startups with significant funding that are building the software and sensor systems to allow cars to navigate themselves through the streets. The algorithms involve a mixture of machine learning, computer vision and planning.

Many startups are applying similar algorithms to more limited or predictable domains like warehouses or industrial plants. Companies like Nuro, Bright Machines and Fetch are just some of the many that want to automate warehouses and industrial spaces. Fetch also wants to apply machine vision and planning algorithms to take on repetitive tasks.

A substantial number of startups are also targeting jobs that are either dangerous for humans or impossible for them to do. Hydromea, for example, is building autonomous underwater drones that can track submerged assets like oil rigs or mining tools. Another company, Solinus, makes robots for inspecting narrow pipes.

Many startups are also working in digital domains, in part because the area is a natural habitat for algorithms, since the data is already in digital form. There are dozens of companies, for instance, working to simplify and automate routine tasks that are part of companies' digital workflows. This area, sometimes called robotic process automation (RPA), rarely involves physical robots because it works with digital paperwork or chits. It is, however, a popular way for companies to integrate basic AI routines into their software stack. Good RPA platforms, for example, often use optical character recognition and natural language processing to make sense of uploaded forms in order to simplify the office workload.
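
A toy sketch of that OCR-plus-extraction step might look like the following; it assumes the pytesseract and Pillow packages are installed, and the invoice layout and field patterns are invented for the example (production RPA platforms use far more robust layout analysis and NLP).

```python
import re

import pytesseract
from PIL import Image

def extract_invoice_fields(path: str) -> dict:
    """OCR a scanned form and pull out a couple of fields with regexes."""
    text = pytesseract.image_to_string(Image.open(path))

    # Toy patterns -- a real RPA platform would use layout analysis and NLP here.
    invoice_no = re.search(r"Invoice\s*#?\s*(\w+)", text, re.IGNORECASE)
    total = re.search(r"Total\s*[:$]?\s*([\d,]+\.\d{2})", text, re.IGNORECASE)

    return {
        "invoice_number": invoice_no.group(1) if invoice_no else None,
        "total": total.group(1) if total else None,
    }

print(extract_invoice_fields("scanned_invoice.png"))  # placeholder file name
```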

Many companies also depend upon open-source software projects with broad participation. Projects like TensorFlow and PyTorch are used throughout research and development organizations in universities and industrial laboratories. Some projects, like DeepDetect, a tool for deep learning and decision-making, are also spawning companies that offer mixtures of support and services.

There are also hundreds of effective and well-known open-source projects used by AI researchers. OpenCV, for instance, offers a large collection of computer vision algorithms that can be adapted and integrated with other stacks. It is used frequently in robotics, medical projects, security applications and many other tasks that rely upon understanding the world through a camera image or video.
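
As a small taste of the building blocks OpenCV provides, the snippet below runs the library's stock Haar-cascade face detector over a single image; the file names are placeholders.

```python
import cv2

# Load an image from disk (path is a placeholder).
image = cv2.imread("frame.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# OpenCV ships pretrained Haar cascades alongside the library.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("frame_with_faces.jpg", image)
print(f"Detected {len(faces)} face(s)")
```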

There are some areas where AI finds more success than others. Statistical classification using machine learning is often quite accurate, but it is limited by the breadth of the training data. These algorithms often fail when they are asked to make decisions in new situations or after the environment has shifted substantially from the training corpus.

Much of the success or failure depends upon how much precision is demanded. AI tends to be more successful when occasional mistakes are tolerable. If users can filter out misclassifications or incorrect responses, AI algorithms are welcomed. For instance, many photo storage sites offer to apply facial recognition algorithms to sort photos by who appears in them. The results are good, if not perfect, and users can tolerate the mistakes. The field is largely a statistical game, and it succeeds when judged on a percentage basis.

A number of the most successful applications don't require especially clever or elaborate algorithms but depend upon a large, well-curated dataset organized by tools that are now manageable. The problem once seemed impossible because of its scope, until large enough teams tackled it. Navigation and mapping applications like Waze, for instance, use simple search algorithms to find the best path, but these apps could not succeed without a large, digitized model of the street layouts.
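
The search involved is classical rather than learned. A compact Dijkstra implementation over a toy street graph makes the point; the intersections and distances are invented.

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm over a dict of {node: [(neighbor, distance), ...]}."""
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, distance in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(queue, (cost + distance, neighbor, path + [neighbor]))
    return float("inf"), []

# Toy street network: intersections and distances in kilometers.
streets = {
    "A": [("B", 1.2), ("C", 2.5)],
    "B": [("C", 0.8), ("D", 3.0)],
    "C": [("D", 1.1)],
    "D": [],
}
print(shortest_path(streets, "A", "D"))  # (3.1, ['A', 'B', 'C', 'D'])
```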

Natural language processing is also successful at making generalizations about the sentiment or basic meaning of a sentence, but it is frequently tripped up by neologisms, slang or nuance. As language changes, the algorithms can adapt, but only with pointed retraining. They also start to fail when the challenges fall outside a large training set.
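
A few lines with the Hugging Face transformers pipeline illustrate the happy path described above; whether slang or a new coinage fools the default pretrained model depends entirely on what that model has seen.

```python
from transformers import pipeline

# Downloads a default pretrained sentiment model on first use.
classifier = pipeline("sentiment-analysis")

examples = [
    "The support team resolved my issue quickly.",   # straightforward
    "This update is sick, ngl.",                      # slang the model may misread
]
for text, result in zip(examples, classifier(examples)):
    print(f"{result['label']:>8}  {result['score']:.2f}  {text}")
```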

Robotics and autonomous cars can be quite successful in limited areas or controlled spaces, but they run into trouble when new challenges or unexpected obstacles appear. For them, the political costs of failure can be significant, so developers are necessarily cautious about pushing the envelope.

Indeed, determining whether an algorithm is a success or a failure often depends upon criteria that are politically determined. If the customers are happy enough with the responses, and if the results are predictable enough to be useful, then the algorithms succeed. As they become taken for granted, they lose the appellation of AI.

If the term is generally applied to the topics and goals that are just out of reach, and if AI is always redefined to exclude the simple, well-understood solutions, then AI will always be moving toward the technological horizon. It may not be 100% successful at present, but when applied to specific cases, it can come tantalizingly close.

[Read more: The quest for explainable AI]

Artificial intelligence has reached a threshold. And physics can help it break new ground – Interesting Engineering

For years, physicists have been making major advances and breakthroughs in the field using their minds as their primary tools. But what if artificial intelligence could help with these discoveries?

Last month, researchers at Duke University demonstrated that incorporating known physics into machine learning algorithms could result in new levels of discovery about material properties, according to a press release from the institution. They undertook a first-of-its-kind project in which they constructed a machine learning algorithm to deduce the properties of a class of engineered materials known as metamaterials and to determine how they interact with electromagnetic fields.

The results proved extraordinary: the new algorithm accurately predicted the metamaterials' properties more efficiently than previous methods while also providing new insights.

"By incorporating known physics directly into the machine learning, the algorithm can find solutions with less training data and in less time," said Willie Padilla, professor of electrical and computer engineering at Duke. "While this study was mainly a demonstration showing that the approach could recreate known solutions, it also revealed some insights into the inner workings of non-metallic metamaterials that nobody knew before."

In their new work, the researchers focused on making discoveries that were accurate and made sense.

"Neural networks try to find patterns in the data, but sometimes the patterns they find don't obey the laws of physics, making the model they create unreliable," said Jordan Malof, assistant research professor of electrical and computer engineering at Duke. "By forcing the neural network to obey the laws of physics, we prevented it from finding relationships that may fit the data but aren't actually true."

They did that by imposing on the neural network a piece of physics called a Lorentz model, a set of equations that describe how the intrinsic properties of a material resonate with an electromagnetic field. This, however, was no easy feat to achieve.
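
For readers unfamiliar with the constraint, a single-oscillator Lorentz response can be written down in a few lines of Python. In the Duke work the network's outputs effectively play the role of the oscillator parameters; the values below are arbitrary ones chosen only for illustration.

```python
import numpy as np

def lorentz_permittivity(omega, eps_inf, omega_p, omega_0, gamma):
    """Relative permittivity of a single Lorentz oscillator.

    eps(omega) = eps_inf + omega_p**2 / (omega_0**2 - omega**2 - 1j*gamma*omega)
    """
    return eps_inf + omega_p**2 / (omega_0**2 - omega**2 - 1j * gamma * omega)

# Arbitrary illustrative parameters (angular frequencies in rad/s).
omega = np.linspace(0.5e15, 2.0e15, 5)
eps = lorentz_permittivity(omega, eps_inf=2.0, omega_p=1.0e15, omega_0=1.2e15, gamma=5.0e13)

for w, e in zip(omega, eps):
    print(f"omega={w:.2e}  Re(eps)={e.real:+.3f}  Im(eps)={e.imag:+.3f}")
```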

"When you make a neural network more interpretable, which is in some sense what we've done here, it can be more challenging to fine-tune," said Omar Khatib, a postdoctoral researcher working in Padilla's laboratory. "We definitely had a difficult time optimizing the training to learn the patterns."

The researchers were pleasantly surprised to find that this model worked more efficiently than previous neural networks the group had created for the same tasks, dramatically reducing the number of parameters needed for the model to determine the metamaterial properties. The new model could even make discoveries all on its own.

Now, the researchers are getting ready to use their approach on uncharted territory.

"Now that we've demonstrated that this can be done, we want to apply this approach to systems where the physics is unknown," Padilla said.

"Lots of people are using neural networks to predict material properties, but getting enough training data from simulations is a giant pain," Malof added. "This work also shows a path toward creating models that don't need as much data, which is useful across the board."

The study is published in the journal Advanced Optical Materials.

Sentient artificial intelligence: Have we reached peak AI hype? – VentureBeat

Thousands of artificial intelligence experts and machine learning researchers probably thought they were going to have a restful weekend.

Then came Google engineer Blake Lemoine, who told the Washington Post on Saturday that he believed LaMDA, Google's conversational AI for generating chatbots based on large language models (LLMs), was sentient.

Lemoine, who worked for Google's Responsible AI organization until he was placed on paid leave last Monday, and who became ordained as a mystic Christian priest and served in the Army before studying the occult, had begun testing LaMDA to see if it used discriminatory or hate speech. Instead, Lemoine began teaching LaMDA transcendental meditation, asked LaMDA its preferred pronouns, leaked LaMDA transcripts and explained in a Medium response to the Post story:

"It's a good article for what it is but in my opinion it was focused on the wrong person. Her story was focused on me when I believe it would have been better if it had been focused on one of the other people she interviewed. LaMDA. Over the course of the past six months LaMDA has been incredibly consistent in its communications about what it wants and what it believes its rights are as a person."

The Washington Post article pointed out that "most academics and AI practitioners say the words and images generated by artificial intelligence systems such as LaMDA produce responses based on what humans have already posted on Wikipedia, Reddit, message boards, and every other corner of the internet," and that doesn't signify that the model understands meaning.

The Post article continued: "We now have machines that can mindlessly generate words, but we haven't learned how to stop imagining a mind behind them," said Emily M. Bender, a linguistics professor at the University of Washington. The terminology used with large language models, like "learning" or even "neural nets," creates a false analogy to the human brain, she said.

That's when AI and ML Twitter put aside any weekend plans and went at it. AI leaders, researchers and practitioners shared long, thoughtful threads, including AI ethicist Margaret Mitchell (who was famously fired from Google, along with Timnit Gebru, for criticizing large language models) and machine learning pioneer Thomas G. Dietterich.

There were also plenty of humorous hot takes; even the New York Times' Paul Krugman weighed in.

Meanwhile, Emily Bender, professor of computational linguistics at the University of Washington, shared more thoughts on Twitter, criticizing organizations such as OpenAI for the impact of their claims that LLMs were making progress toward artificial general intelligence (AGI).

Now that the weekend news cycle has come to a close, some wonder whether discussing whether LaMDA should be treated as a Google employee means we have reached peak AI hype.

However, it should be noted that Bindu Reddy of Abacus AI said the same thing in April, Nicholas Thompson (former editor-in-chief at Wired) said it in 2019 and Brown professor Srinath Sridhar had the same musing in 2017. So, maybe not.

Still, others pointed out that the entire sentient AI weekend debate was reminiscent of the Eliza effect, the tendency to unconsciously assume computer behaviors are analogous to human behaviors, named for the 1966 chatbot Eliza.

Just last week, The Economist published a piece by cognitive scientist Douglas Hofstadter, who coined the term Eliza effect in 1995, in which he said that while the achievements of today's artificial neural networks are astonishing, "I am at present very skeptical that there is any consciousness in neural-net architectures such as, say, GPT-3, despite the plausible-sounding prose it churns out at the drop of a hat."

After a weekend filled with little but discussion around whether AI is sentient or not, one question is clear: What does this debate mean for enterprise technical decision-makers?

Perhaps it is nothing but a distraction. A distraction from the very real and practical issues facing enterprises when it comes to AI.

There is current and proposed AI legislation in the U.S., particularly around the use of artificial intelligence and machine learning in hiring and employment. A sweeping AI regulatory framework is being debated right now in the EU.

"I think corporations are going to be woefully on their back feet reacting, because they just don't get it; they have a false sense of security," said AI attorney Bradford Newman, a partner at Baker McKenzie, in a VentureBeat story last week.

There are wide-ranging, serious issues with AI bias and ethics: just look at the AI trained on 4chan that was revealed last week, or the ongoing issues related to Clearview AI's facial recognition technology.

That's not even getting into issues related to AI adoption, including infrastructure and data challenges.

Should enterprises keep their eyes on the issues that really matter in the real, sentient world of humans working with AI? In a blog post, Gary Marcus, author of Rebooting AI, had this to say:

"There are a lot of serious questions in AI. But there is absolutely no reason whatsoever for us to waste time wondering whether anything anyone in 2022 knows how to build is sentient. It is not."

I think it's time to put down my popcorn and get off Twitter.

DALL-E mini is the viral artificial intelligence artist taking over Twitter – The Dallas Morning News

Have you ever wanted to see a polar bear riding a skateboard? What about a hot dog wearing a tracksuit?

Well, if you'll settle for AI-generated images of those things, or anything else you can dream up, then you'll appreciate DALL-E mini, the free website currently taking over the internet.

It may sound like sci-fi, but the premise is simple: On your phone or computer, go to huggingface.co/spaces/dalle-mini/dalle-mini. Type out any prompt in the text box, for example, "Dak Prescott holding a banana." Hit the button that says "Run" (you may need to hit it multiple times before traffic subsides and your request goes through).

Eventually, nine images generated completely by artificial intelligence will appear, bringing your concept to life with varying levels of accuracy and hilarity. In the case of Dak Prescott holding a banana, the results were good for a laugh but stopped short of realism.

The ripe-for-memes program was created by Boris Dayma, a machine learning engineer based in Houston. He made the website available for public use last year, but only in the past two weeks has it taken off in social media popularity, with users sharing images of everything from Darth Vader ice fishing to Karl Marx making an appearance in Seinfeld. A Twitter account that shares some of the weirdest creations has racked up over 600,000 followers.

Dayma was inspired to build the program after reading a research paper about DALL-E, a sophisticated text-to-image artificial intelligence program created by OpenAI, an artificial intelligence company co-founded by Elon Musk. Last summer, as part of a program organized by the AI company Hugging Face, Dayma and a team developed DALL-E mini, a scaled-down version that, unlike the original program, is open to the public. (There is currently a waitlist to access the original DALL-E.)

"Being able to create an image that looks like what you wanted, on the technical level, to me, it was very interesting," said Dayma. "I want to be able to try it out myself and I want to be able to let other people use it."

The way the DALL-E mini program works, Dayma said, is by processing images and captions from across the internet. Slowly, the program begins to discern patterns, such as a visual patch of blue when the caption indicates sky. When a user types in a text prompt, the program, using these associations, will try to put something together that makes sense, Dayma said.

"It learns very tiny concepts like that, and over time, it becomes better and better," he said.

Demand for the app, Dayma confirmed, has soared of late. Many users now complain of getting a pop-up saying, "Too much traffic, please try again," when they try to generate images.

"We obviously didn't plan for such crazy traffic, so we've been working on improving the code, improving the model," said Dayma. "People seem to like it, so they need to be able to use it."

Despite the wait times, Dayma said the public nature of the program is an asset to the technology. Beyond the futuristic entertainment it provides the masses, the program is open source, meaning the code is publicly available, so "some people are able to play with the model itself and program and tweak it," he said. Since he is still training the model to produce better images, input from other users proves valuable.

"People can learn about the limitations of the model, the biases, what it's good at, what it can be used for," he said. "Everybody can benefit from having a public model like this."

After improvements are made to the traffic capacity and the model itself, Dayma said, the sky is the limit. "You can generate videos, you can generate music," he said. "It's a new area that's opening up."

It's an area, however, that's fraught with controversy. Experts have raised concerns that artificial intelligence technology will perpetuate biases or promote disinformation. But with DALL-E mini, Dayma said, the quality is just not there for most people to be fooled by the images, at least for now. By bringing AI out of the ivory towers of Silicon Valley and into the hands of anybody with a smartphone, Dayma said, he is hoping not only to amuse, but also to sound the alarm.

"At least people can learn that that type of thing is coming, and now you need to be aware of the content that you see online," he said. "I hope it helps people develop their critical thinking."

Taste of the future: first artificial intelligence-created craft beer to be released at NOLA Brewing – WGNO New Orleans

NEW ORLEANS (WGNO) Locals will have a chance to try the first craft beer created by an artificial intelligence platform in June.

The AI Blonde Ale will be released at a launch party at NOLA Brewing on June 20 to coincide with CVPR, the world's premier computer vision event.

Derek Lintern, a brewer at NOLA Brewing, said he is excited to have a helping hand when it comes to crafting beer.

"It's state-of-the-art technology with the traditional brewing methods. It's pretty unique and it's a recipe I would have never done normally, but I really like how it tastes. It's very refreshing and very easy drinking. I'm really happy with it," said Lintern.

The beer was an experiment between the Australian Institute for Machine Learning (AIML) and Barossa Valley Brewing (BVB), founded by D'Silva.

D'Silva said the idea all started with a beer.

"Yeah, that's how it started. It started with a beer. I'm sure a lot of ideas for companies have started over a beer. This started over a beer and ended up creating a beer and a company, which is great," said D'Silva.

The technology makes it easier for brewers to produce their products.

"About 10 million people review beers every day. There are all these sites, and they put it into the world basically to show people what they think of the beer. You do exactly the same thing: there are five questions, you scan a QR code, answer five questions and rate the beer, and instead of it going into a website maybe somebody reads, maybe not, artificial intelligence picks that up and it goes directly to the producer. The AI then takes all that data and manipulates a recipe and then gives it to the producer: here, this is what the market is thinking," said D'Silva.

Lintern said the new technology is not meant to replace brewers, but to help with the process.

The technology helps create the recipe, but the beer is still brewed manually.

The AI beer will only be available in New Orleans for a limited time.

D'Silva said he is excited to bring something new to an amazing city. "I am so excited. I can't think of a better place to launch a beer," said D'Silva.

He added, "I am really keen for people to get down here and taste the future."

Anyone interested in attending the launch of the new beer can visit NOLA Brewing from 4 p.m. to 10 p.m. on Monday, June 20.

Deep Liquid is also offering 100 customers a free AI beer with their booking with Nola Pedal Barge and Nola Bike Bar.

They are also offering $100 discount tickets to any of their private tours.

That includes any of the boat tours in Bayou Bienvenue as well as the pedal bike tour in the Bywater neighborhood.

For more information, call (504) 264-1056 for Nola Pedal Barge and (504) 308-1041 for Nola Bike Bar.

Artificial intelligence companies leading the way in the power industry – Power Technology

Artificial intelligence (AI) is everywhere, and it has an impact on all our lives.

However, years of bold proclamations have resulted in AI becoming overhyped, with reality often falling short of the world-altering promises.

The coming years will be more about practical uses of AI, as businesses ensure return on investment by using AI to address specific cases.

Power Technology's artificial intelligence in power dashboard covers all you need to know about this emerging technology and its impact on the sector.

The power sector, especially in Europe, is expected to be impacted by gas availability and price issues. Utilities will have to look for alternative sources of gas or shift to other sources of generation. AI usage in the power industry is likely to be affected alongside many other corporate tools.

The recent ban on Russian oil and gas supplies will have a varied impact on both the buying and selling nations. Russia is an important source of energy supply for the US and European countries.

There is pressure on the Western bloc to impose more sanctions on Russian energy imports due to civilian killings in Ukraine by the Russian army. Fuel prices (such as for oil) have increased amid talk of a boycott of Russian oil.

Energy companies continue to exit or halt operations in Russia due to increasing pressure to cut ties amid civilian killings in Ukraine.

The electric vehicle and energy storage market will be impacted due to a shortage of nickel and an increase in commodity prices.

The International Energy Agency (IEA) recently published A 10-Point Plan to Reduce the European Union's Reliance on Russian Natural Gas, providing short-term measures and claiming that the EU could cut Russian gas imports by more than 33%.

It also advocates for gas-to-coal switching that could account for the majority of the potential reduction in gas demand.

GlobalData estimates that the global AI platform market will be worth $52bn in 2024, up from $28bn in 2019.

Total spending on AI technology is certainly higher, but it is difficult to estimate. There are two main reasons for this.

Firstly, AI is an intrinsic part of many applications and functions, making it almost impossible to identify revenue explicitly generated by AI.

Secondly, the range of sub-sets and technologies that make up AI can be challenging to locate and track. In general, valuations of the overall AI market range from a few billion dollars to several trillion, depending on the source.

Rather than attempting to size the market, some companies have tried to forecast its economic impact. A PwC report in 2017 estimated that AI would add $15.7 trillion to the global economy by 2030 and boost global GDP by up to 14%.

The competitive landscape for AI is highly fragmented. Companies are investing considerable sums, and there is a swath of AI start-ups that possess innovative expertise. When it comes to the use of artificial intelligence in the power industry, the competition is constant.

Yet, there is no denying that companies with access to large repositories of data to power AI models are leading the development of AI.

Big Tech excels in this regard, and several tech giants set the overall tone in AI. GAFAM (Google, Apple, Facebook, Amazon, and Microsoft), BAT (Baidu, Alibaba, and Tencent), early-mover IBM, and the two hardware giants Intel and Nvidia are key players within the field.

All industries are feeling the impact of AI, with established incumbents coming up against game-changing disruption from AI platforms developed either by technology giants such as Amazon, Google, and Microsoft; or AI-focused start-ups, such as Lemonade, Trax, and Butterfly Network.

It is not only companies that are making AI investment a priority, but countries as well.

China is the most obvious example, having pledged to become the world leader in AI by 2030, but governments in several nations are backing large spending projects to make sure they do not miss out on AI's positive effects.

The US remains the dominant player in the development of AI technologies, accounting for almost one-third of AI platform revenues in 2019, according to GlobalData estimates.

In a 2019 report from the Center for Data Innovation that compared China, the European Union (EU), and the US in terms of their relative standing in the AI economy, the US came out on top in four out of the six categories of metrics that were examined, including talent, research, development, and hardware.

China led in data and adoption, but its advantage in AI adoption was due to a strong position in a limited number of AI technologies such as facial recognition and smart surveillance.

These are related to the government's extensive use of surveillance and are unlikely to create benefits across the economy.

The US and Europe have a sizable lead in terms of access to high-quality talent and research, and the US has the most AI start-ups and a more developed private equity and venture capital ecosystem.

Therefore, while China is making considerable investments, the US's structural advantages may even enable it to extend its lead.

Discussions about the race for AI dominance tend to focus on the US and China, but other countries are also in the race. Japan has long been at the forefront of AI when it comes to robotics.

The Japanese government released its AI strategy in 2017, and the country boasts a major AI investor in the form of SoftBank, which in 2019 created a $108bn fund to invest in AI companies and opened the Beyond AI institute in Tokyo, a $184m initiative to accelerate AI research in Japan.

In the UK, AI companies secured a record 1.3bn ($1.7bn) of investments in 2019, according to a study by Crunchbase and Tech Nation.

The UK has the second-highest number of AI companies globally, after the US, but most of those companies are small, making them a popular target for acquisition by the tech giants.

Germany is a powerhouse when it comes to the uses of AI in the manufacturing, automotive, and industrial sectors. In 2018, France announced that it would invest 1.5bn ($1.8bn) in AI research until the end of 2022.

Other countries that consider AI an important strategic initiative include South Korea, Russia, Canada, Israel, India, Sweden, Australia, and Singapore.

To best track the emergence and use of artificial intelligence in power, GlobalData tracks patent filings and grants, as well as companies that hold most patents in the field of artificial intelligence.

Power Technology monitors live power company job postings mentioning artificial intelligence or those requiring similar skills.

As illustrated by the value chain, big data (extremely large, diverse data sets that, when analysed in aggregate, reveal patterns, trends and associations, especially relating to human behaviour and interactions) plays a significant role in the development of AI technology.

Big data is produced by all forms of digital activity: phone calls, emails, sensors, payments, social media posts, and much more.

It is also produced by machines, both hardware and software, in the form of machine-to-machine exchanges of data.

These exchanges are particularly important in the IoT (Internet of Things) era, where devices talk to each other without any form of human prompting.

Once collected, big data is typically managed in data centres, either in the public cloud, in corporate data centres, or on end devices. Big data is covered in more detail in our Big Data report.

Big business benefits from artificial intelligence in IoT & IIoT hardware – VentureBeat

Artificial intelligence (AI) technologies are considered essential for internet of things (IoT) hardware for digital operations, such as cameras and automation equipment, according to a survey from Samsara released today.

Samsara, which makes IoT hardware and software, surveyed more than 1,500 operations leaders for its 2022 State of Connected Operations survey, in industries including transportation, manufacturing, construction, field services and food and beverage. The survey was conducted by the independent research firm Lawless Research.

"Organizations with physical operations represent more than 40% of global gross domestic product, yet they've been historically underserved by technology," said Stephen Franchetti, Samsara's CIO.

The IoT market is booming: A March 2020 Insider Intelligence report, for example, predicted that the IoT market size would reach more than $2 trillion by 2027.

The pandemic's supply chain interruptions have only underscored the need for increased investment in IoT. For instance, in late 2021, when the effects of the pandemic were already being felt, the market research firm Gartner found that industrial enterprises were speeding up investments in industrial IoT (IIoT) platforms to improve business and industrial processes.

The IoT and IIoT acronyms are widely used interchangeably, though IoT generally applies to consumer and home devices, such as thermostats and lights, while IIoT connects physical industrial systems and analyzes the data returned from those systems for operational improvement.

In industry, the IIoT monitors conditions on, for example, a manufacturing line and predicts which machines will soon need maintenance, among other uses. It unlocks data that was previously housed in data silos, Gartner says.
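
A minimal sketch of that predictive-maintenance idea with scikit-learn is shown below; the sensor features, failure rule and synthetic data are all invented for the example and stand in for real telemetry.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic sensor readings: [vibration (mm/s), temperature (C), runtime hours]
n = 2_000
X = np.column_stack([
    rng.normal(3.0, 1.0, n),
    rng.normal(60.0, 8.0, n),
    rng.uniform(0, 10_000, n),
])
# Toy labeling rule: machines that run hot, vibrate hard or are old tend to fail.
y = ((X[:, 0] > 4.0) & (X[:, 1] > 65.0) | (X[:, 2] > 9_000)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

print("Holdout accuracy:", round(model.score(X_test, y_test), 3))
print("Needs maintenance?", model.predict([[5.2, 71.0, 3_500]]))  # likely flags this machine
```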

And it's vital to Industry 4.0 adoption, according to McKinsey. The technology holds the key to unlocking drastic reductions in downtimes, new business models and a better customer experience, the consulting company reports.

Ninety percent of respondents to the Samsara survey said they implemented or plan to implement AI automation technologies connected via the IoT.

"AI and automation will play a significant role in the safety and efficiency of physical operations, and we're already seeing this with our customers today," Franchetti said.

In fact, 95% of those surveyed said AI and automation efforts led to increased employee retention, he said.

"Our research found that 31% of respondents benefited from less time spent on repetitive tasks and 40% higher employee engagement as a result of AI and automation," he explained.

Franchetti pointed to Chalk Mountain Services, a transportation and logistics provider in the oilfield services industry. The company rolled out Samsara's AI Dash Cams across its fleet last year to study how drivers safely handled real-world conditions. With that information, the company changed how it rewarded, coached and protected drivers.

The changes translated to a 15% improvement in driver retention and an 86% decrease in preventable accident costs, Franchetti said.

"What's significant about our research is we found that early adopters of digital technologies are proving to be more agile and resilient," he said. "While pen-and-paper management is still a stark reality for many companies, they can now clearly see the benefits of digitization from their industry peers."

The combination of AI tools and IoT hardware, particularly when it comes to connecting digital operations, shows no signs of slowing down over the next few years, so organizations should be prepared. "These technologies will be widespread soon, and operations leaders should see them as a critical tool in defining their future of work," Franchetti said.

Major Applications of Artificial Intelligence in Dentistry – Healthcare Tech Outlook

Computer vision systems can identify dental deterioration using various object identification and semantic segmentation techniques

FREMONT, CA: AI-powered dental imaging software can assist in swiftly and efficiently making sense of imaging data. Machine learning algorithms have also outperformed dentists in diagnosing tooth decay and predicting whether a tooth should be removed, kept or restored. Before you worry that a robot will replace the kind human who looks after your teeth, know that ML and computer vision systems are being used to assist your dentist in providing the best possible treatment.

Detection of dental deterioration

Enlisting the assistance of additional (computer vision) eyes can increase dentists' capacity to diagnose and treat problems, and sometimes that extra assistance is more valuable than you might think. Computer vision systems can identify dental deterioration using various object identification and semantic segmentation techniques. One method is to train CNNs on large sets of photos that include labeled carious lesions.
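
A minimal Keras sketch of that kind of CNN classifier is shown below; the directory layout, image size and labels are assumptions for illustration, and a clinical model would need far more rigorous data curation and validation.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Assumed layout: dental_images/{carious,healthy}/*.png, 128x128 grayscale crops.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "dental_images", image_size=(128, 128), color_mode="grayscale", batch_size=32
)

model = models.Sequential([
    layers.Rescaling(1.0 / 255, input_shape=(128, 128, 1)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # carious vs. healthy
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```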

Oral cancer screening

While losing a tooth is upsetting, it pales in comparison to the consequences of oral cancer. Furthermore, diagnosing the early signs of oral cancer is not difficult. Visible oral lesions known as "oral potentially malignant disorders" (OPMDs) are a significant indicator of cancer and can be found during routine oral exams by a general dentist. The issue is that this type of inspection is not performed frequently enough during dental visits. If only there were simple, low-cost methods for automating the detection of cancerous or potentially malignant tumors.

Dental caries detection and diagnosis

Early identification of dental caries, like oral cancer, is crucial to preventing irreversible injury. Cavities that are addressed early dramatically reduce treatment costs, restoration time and the chance of tooth loss. Computer-aided detection and diagnosis (CAD) systems are gradually becoming a common feature of dental clinics. These technologies can detect oral pathology by reading dental X-rays and cone-beam computed tomography (CBCT) images. Furthermore, computer vision-powered systems can assess lesion depth and use this information to detect and classify lesions.

Endodontics

Endodontics is something you've probably heard of if you've ever had a root canal. Fortunately, artificial intelligence (AI) offers applications that can help dentists detect and treat these dreaded conditions even more effectively. Endodontists often examine, measure and evaluate the status of the tooth beneath the gums using radiographic imaging. Deep learning algorithms can then detect, locate and classify various elements of tooth root anatomy and potential diseases. This is useful for locating specific tooth features and identifying particular types of fissures and lesions in or around the tooth.

An artificial intelligence-based strategy or judgement cannot be trusted by the military, according to research – Times Now

The use of artificial intelligence (AI) for war has been a promise of science fiction and politicians for years, but new research from the Georgia Institute of Technology claims to show that AI can automate only a limited subset of human judgment.

"All of the hard problems in AI really are judgment and data problems, and the interesting thing about that is when you start thinking about war, the hard problems are strategy and uncertainty, or what is well known as the fog of war," said Jon Lindsay, an associate professor in the School of Cybersecurity & Privacy and the Sam Nunn School of International Affairs. "You need human sense-making and to make moral, ethical, and intellectual decisions in an incredibly confusing, fraught, scary situation."

AI decision-making is based on four key components: data about a situation, interpretation of those data (or prediction), determining the best way to act in line with goals and values (or judgment), and action. Machine learning advancements have made predictions easier, which makes data and judgment even more valuable. Although AI can automate everything from commerce to transit, judgment is where humans must intervene, Lindsay and University of Toronto professor Avi Goldfarb wrote in the paper "Prediction and Judgment: Why Artificial Intelligence Increases the Importance of Humans in War," published in International Security.
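
A toy, human-in-the-loop sketch of that data, prediction, judgment, action decomposition might look like the following; every name, model and threshold here is illustrative rather than anything from the study.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    situation: dict          # data
    risk_estimate: float     # prediction (automatable)
    approved: bool           # judgment (kept with a human)

def decide(situation: dict,
           predict: Callable[[dict], float],
           human_judgment: Callable[[dict, float], bool]) -> Decision:
    """Automate prediction, but leave the value-laden go/no-go call to a person."""
    risk = predict(situation)
    approved = human_judgment(situation, risk)
    return Decision(situation, risk, approved)

# Illustrative stand-ins for the model and the human operator.
toy_model = lambda s: min(1.0, s.get("sensor_alerts", 0) / 10)
ask_operator = lambda s, risk: risk < 0.3   # e.g., a console prompt in practice
print(decide({"sensor_alerts": 2}, toy_model, ask_operator))
```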

Many policymakers assume human soldiers could be replaced with automated systems, ideally making militaries less dependent on human labor and more effective on the battlefield. This is called the substitution theory of AI, but Lindsay and Goldfarb state that AI should not be seen as a substitute, but rather as a complement to existing human strategy.

"Machines are good at prediction, but they depend on data and judgment, and the most difficult problems in war are information and strategy," he said. "The conditions that make AI work in commerce are the conditions that are hardest to meet in a military environment because of its unpredictability."

An example Lindsay and Goldfarb highlight is the Rio Tinto mining company, which uses self-driving trucks to transport materials, reducing costs and risks to human drivers. The data involved (traffic patterns and maps) are abundant, predictable and unbiased, and require little human intervention unless there are road closures or obstacles.

War, however, usually lacks abundant unbiased data, and judgments about objectives and values are inherently controversial, but that doesn't mean it's impossible. The researchers argue AI would be best employed in bureaucratically stabilized environments on a task-by-task basis.

"All the excitement and the fear are about killer robots and lethal vehicles, but the worst case for military AI in practice is going to be the classically militaristic problems where you're really dependent on creativity and interpretation. But what we should be looking at is personnel systems, administration, logistics, and repairs," Lindsay said.

There are also consequences to using AI for both the military and its adversaries, according to the researchers. If humans are the central element to deciding when to use AI in warfare, then military leadership structure and hierarchies could change based on the person in charge of designing and cleaning data systems and making policy decisions. This also means adversaries will aim to compromise both data and judgment since they would largely affect the trajectory of the war. Competing against AI may push adversaries to manipulate or disrupt data to make sound judgment even harder. In effect, human intervention will be even more necessary.

Yet this is just the start of the argument and innovations.

"If AI is automating prediction, that's making judgment and data really important," Lindsay said. "We've already automated a lot of military action with mechanized forces and precision weapons, then we automated data collection with intelligence satellites and sensors, and now we're automating prediction with AI. So, when are we going to automate judgment, or are there components of judgment cannot be automated?"

Until then, though, tactical and strategic decision-making by humans continues to be the most important aspect of warfare. (ANI)
