Monthly Archives: May 2023

The Boring Future of Generative AI | WIRED – WIRED

Posted: May 18, 2023 at 2:01 am

This week, at its annual I/O developer conference in Mountain View, Google showcased a head-spinning number of projects and products powered by or enhanced by AI. They included a new-and-improved version of its chatbot Bard, tools to help you write emails and documents or manipulate images, devices with AI baked in, and a chatbot-like experimental version of Google search. For a full recap of the event, complete with insightful and witty commentary from my WIRED colleagues, check out our Google I/O liveblog.

Google's big pivot is, of course, largely fueled not by algorithms but by generative AI FOMO.

The appearance last November of ChatGPT, the remarkably clever but still rather flawed chatbot from OpenAI, combined with Microsoft adding the technology to its search engine Bing a few months later, triggered something of a panic at Google. ChatGPT proved wildly popular with users, demonstrating new ways to serve up information that threatened Google's vice grip on the search business and its reputation as the leader in AI.

The capabilities of ChatGPT and AI language algorithms like those powering it are so striking that some experts, including Geoffrey Hinton, a pioneering researcher who recently left Google, have felt compelled to warn that we might be building systems that we will someday struggle to control. OpenAI's chatbot is often astonishingly good at generating coherent text on a given subject, summarizing information from the web, and even answering extremely tricky questions that require expert knowledge.

And yet, unfettered AI language models are also silver-tongued agents of chaos. They will gladly fabricate facts, express unpleasant biases, and say unpleasant or disturbing things with the right prompting. Microsoft was forced to limit the capabilities of Bing chat shortly after launch to avoid such embarrassing misbehavior, in part because its bot divulged its secret codename, "Sydney," and accused a New York Times columnist of not loving his spouse.

Google worked hard to tone down the chaotic streak of text-generation technology as it prepared the experimental search feature announced yesterday that responds to search queries with chat-style answers synthesizing information from across the web.

Google's smarter version of search is impressively narrow-minded, refusing to use the first person or talk about its thoughts or feelings. It completely avoids topics that might be considered risky, refusing to dispense medical advice or offer answers on potentially controversial topics such as US politics.


OpenAI readies new open-source AI model, The Information reports – Reuters.com

Posted: at 2:01 am

May 15 (Reuters) - OpenAI is preparing to release a new open-source language model to the public, The Information reported on Monday, citing a person with knowledge of the plan.

OpenAI's ChatGPT, known for producing prose or poetry on command, has gained widespread attention in Silicon Valley as investors see generative AI as the next big growth area for tech companies.

In January, Microsoft Corp (MSFT.O) announced a multi-billion dollar investment in OpenAI, deepening its ties with the startup and setting the stage for more competition with rival Alphabet Inc's (GOOGL.O) Google.

Meta Platforms Inc (META.O) is now rushing to join competitors Microsoft and Google in releasing generative AI products capable of creating human-like writing, art and other content.

OpenAI is unlikely to release a model that is competitive with GPT, the report said.

The company did not immediately respond to Reuters' request for a comment.

Reporting by Ananya Mariam Rajesh in Bengaluru; Editing by Shinjini Ganguli



What every CEO should know about generative AI – McKinsey

Posted: at 2:01 am

Amid the excitement surrounding generative AI since the release of ChatGPT, Bard, Claude, Midjourney, and other content-creating tools, CEOs are understandably wondering: Is this tech hype, or a game-changing opportunity? And if it is the latter, what is the value to my business?

The public-facing version of ChatGPT reached 100 million users in just two months. It democratized AI in a manner not previously seen while becoming by far the fastest-growing app ever. Its out-of-the-box accessibility makes generative AI different from all AI that came before it. Users don't need a degree in machine learning to interact with or derive value from it; nearly anyone who can ask questions can use it. And, as with other breakthrough technologies such as the personal computer or iPhone, one generative AI platform can give rise to many applications for audiences of any age or education level and in any location with internet access.

All of this is possible because generative AI chatbots are powered by foundation models, which are expansive neural networks trained on vast quantities of unstructured, unlabeled data in a variety of formats, such as text and audio. Foundation models can be used for a wide range of tasks. In contrast, previous generations of AI models were often narrow, meaning they could perform just one task, such as predicting customer churn. One foundation model, for example, can create an executive summary for a 20,000-word technical report on quantum computing, draft a go-to-market strategy for a tree-trimming business, and provide five different recipes for the ten ingredients in someone's refrigerator. The downside to such versatility is that, for now, generative AI can sometimes provide less accurate results, placing renewed attention on AI risk management.

With proper guardrails in place, generative AI can not only unlock novel use cases for businesses but also speed up, scale, or otherwise improve existing ones. Imagine a customer sales call, for example. A specially trained AI model could suggest upselling opportunities to a salesperson, but until now those were usually based only on static customer data obtained before the start of the call, such as demographics and purchasing patterns. A generative AI tool might suggest upselling opportunities to the salesperson in real time based on the actual content of the conversation, drawing from internal customer data, external market trends, and social media influencer data. At the same time, generative AI could offer a first draft of a sales pitch for the salesperson to adapt and personalize.

The preceding example demonstrates the implications of the technology on one job role. But nearly every knowledge worker can likely benefit from teaming up with generative AI. In fact, while generative AI may eventually be used to automate some tasks, much of its value could derive from how software vendors embed the technology into everyday tools (for example, email or word-processing software) used by knowledge workers. Such upgraded tools could substantially increase productivity.

CEOs want to know if they should act now and, if so, how to start. Some may see an opportunity to leapfrog the competition by reimagining how humans get work done with generative AI applications at their side. Others may want to exercise caution, experimenting with a few use cases and learning more before making any large investments. Companies will also have to assess whether they have the necessary technical expertise, technology and data architecture, operating model, and risk management processes that some of the more transformative implementations of generative AI will require.

The goal of this article is to help CEOs and their teams reflect on the value creation case for generative AI and how to start their journey. First, we offer a generative AI primer to help executives better understand the fast-evolving state of AI and the technical options available. The next section looks at how companies can participate in generative AI through four example cases targeted toward improving organizational effectiveness. These cases reflect what we are seeing among early adopters and shed light on the array of options across the technology, cost, and operating model requirements. Finally, we address the CEO's vital role in positioning an organization for success with generative AI.

Excitement around generative AI is palpable, and C-suite executives rightfully want to move ahead with thoughtful and intentional speed. We hope this article offers business leaders a balanced introduction into the promising world of generative AI.

Generative AI technology is advancing quickly (Exhibit 1). The release cycle, number of start-ups, and rapid integration into existing software applications are remarkable. In this section, we will discuss the breadth of generative AI applications and provide a brief explanation of the technology, including how it differs from traditional AI.

Generative AI can be used to automate, augment, and accelerate work. For the purposes of this article, we focus on ways generative AI can enhance work rather than on how it can replace the role of humans.

While text-generating chatbots such as ChatGPT have been receiving outsize attention, generative AI can enable capabilities across a broad range of content, including images, video, audio, and computer code. And it can perform several functions in organizations, including classifying, editing, summarizing, answering questions, and drafting new content. Each of these actions has the potential to create value by changing how work gets done at the activity level across business functions and workflows. Following are some examples.

As the technology evolves and matures, these kinds of generative AI can be increasingly integrated into enterprise workflows to automate tasks and directly perform specific actions (for example, automatically sending summary notes at the end of meetings). We already see tools emerging in this area.

As the name suggests, the primary way in which generative AI differs from previous forms of AI or analytics is that it can generate new content, often in unstructured forms (for example, written text or images) that aren't naturally represented in tables with rows and columns (see sidebar "Glossary" for a list of terms associated with generative AI).

The underlying technology that enables generative AI to work is a class of artificial neural networks called foundation models. Artificial neural networks are inspired by the billions of neurons that are connected in the human brain. They are trained using deep learning, a term that alludes to the many (deep) layers within neural networks. Deep learning has powered many of the recent advances in AI.

However, some characteristics set foundation models apart from previous generations of deep learning models. To start, they can be trained on extremely large and varied sets of unstructured data. For example, a type of foundation model called a large language model can be trained on vast amounts of text that is publicly available on the internet and covers many different topics. While other deep learning models can operate on sizable amounts of unstructured data, they are usually trained on a more specific data set. For example, a model might be trained on a specific set of images to enable it to recognize certain objects in photographs.

In fact, other deep learning models often can perform only one such task. They can, for example, either classify objects in a photo or perform another function such as making a prediction. In contrast, one foundation model can perform both of these functions and generate content as well. Foundation models amass these capabilities by learning patterns and relationships from the broad training data they ingest, which, for example, enables them to predict the next word in a sentence. That's how ChatGPT can answer questions about varied topics and how DALL-E 2 and Stable Diffusion can produce images based on a description.
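The "predict the next word" idea can be illustrated with a deliberately tiny sketch. Real foundation models use deep neural networks trained on internet-scale text, not frequency counts; the toy bigram model below (all names and the mini-corpus are invented for illustration) only shows the underlying intuition that next-word prediction is learned from patterns in training data.

```python
from collections import Counter, defaultdict

def train_bigram_model(corpus):
    """Count how often each word follows each other word in the corpus."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next_word(model, prev_word):
    """Return the continuation seen most often after prev_word in training."""
    followers = model.get(prev_word.lower())
    if not followers:
        return None  # word never seen: the toy model has no prediction
    return followers.most_common(1)[0][0]

corpus = [
    "the model predicts the next word",
    "the model learns patterns from text",
]
model = train_bigram_model(corpus)
print(predict_next_word(model, "the"))  # prints "model"
```

A foundation model generalizes this idea: instead of a lookup table over word pairs, a neural network conditions on long contexts, which is what lets one model summarize reports, draft strategies, and answer questions.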

Given the versatility of a foundation model, companies can use the same one to implement multiple business use cases, something rarely achieved using earlier deep learning models. A foundation model that has incorporated information about a company's products could potentially be used both for answering customers' questions and for supporting engineers in developing updated versions of the products. As a result, companies can stand up applications and realize their benefits much faster.

However, because of the way current foundation models work, they aren't naturally suited to all applications. For example, large language models can be prone to hallucination, or answering questions with plausible but untrue assertions (see sidebar "Using generative AI responsibly"). Additionally, the underlying reasoning or sources for a response are not always provided. This means companies should be careful of integrating generative AI without human oversight in applications where errors can cause harm or where explainability is needed. Generative AI is also currently unsuited for directly analyzing large amounts of tabular data or solving advanced numerical-optimization problems. Researchers are working hard to address these limitations.

While foundation models serve as the brain of generative AI, an entire value chain is emerging to support the training and use of this technology (Exhibit 2). Specialized hardware provides the extensive compute power needed to train the models. Cloud platforms offer the ability to tap this hardware. MLOps and model hub providers offer the tools, technologies, and practices an organization needs to adapt a foundation model and deploy it within its end-user applications. Many companies are entering the market to offer applications built on top of foundation models that enable them to perform a specific task, such as helping a companys customers with service issues.

The first foundation models required high levels of investment to develop, given the substantial computational resources required to train them and the human effort required to refine them. As a result, they were developed primarily by a few tech giants, start-ups backed by significant investment, and some open-source research collectives (for example, BigScience). However, work is under way on both smaller models that can deliver effective results for some tasks and training that's more efficient. This could eventually open the market to more entrants. Some start-ups have already succeeded in developing their own models; for example, Cohere, Anthropic, and AI21 Labs build and train their own large language models.

CEOs should consider exploration of generative AI a must, not a maybe. Generative AI can create value in a wide range of use cases. The economics and technical requirements to start are not prohibitive, while the downside of inaction could be quickly falling behind competitors. Each CEO should work with the executive team to reflect on where and how to play. Some CEOs may decide that generative AI presents a transformative opportunity for their companies, offering a chance to reimagine everything from research and development to marketing and sales to customer operations. Others may choose to start small and scale later. Once the decision is made, there are technical pathways that AI experts can follow to execute the strategy, depending on the use case.

Much of the use (although not necessarily all of the value) from generative AI in an organization will come from workers employing features embedded in the software they already have. Email systems will provide an option to write the first drafts of messages. Productivity applications will create the first draft of a presentation based on a description. Financial software will generate a prose description of the notable features in a financial report. Customer-relationship-management systems will suggest ways to interact with customers. These features could accelerate the productivity of every knowledge worker.

But generative AI can also be more transformative in certain use cases. Following, we look at four examples of how companies in different industries are using generative AI today to reshape how work is done within their organization. The examples range from those requiring minimal resources to resource-intensive undertakings. (For a quick comparison of these examples and more technical detail, see Exhibit 3.)

The use cases outlined here offer powerful takeaways for CEOs as they embark on the generative AI journey:

The CEO has a crucial role to play in catalyzing a company's focus on generative AI. In this closing section, we discuss strategies that CEOs will want to keep in mind as they begin their journey. Many of them echo the responses of senior executives to previous waves of new technology. However, generative AI presents its own challenges, including managing a technology moving at a speed not seen in previous technology transitions.

Many organizations began exploring the possibilities for traditional AI through siloed experiments. Generative AI requires a more deliberate and coordinated approach given its unique risk considerations and the ability of foundation models to underpin multiple use cases across an organization. For example, a model fine-tuned using proprietary material to reflect the enterprise's brand identity could be deployed across several use cases (for example, generating personalized marketing campaigns and product descriptions) and business functions, such as product development and marketing.

To that end, we recommend convening a cross-functional group of the company's leaders (for example, representing data science, engineering, legal, cybersecurity, marketing, design, and other business functions). Such a group can not only help identify and prioritize the highest-value use cases but also enable coordinated and safe implementation across the organization.

Generative AI is a powerful tool that can transform how organizations operate, with particular impact in certain business domains within the value chain (for example, marketing for a retailer or operations for a manufacturer). The ease of deploying generative AI can tempt organizations to apply it to sporadic use cases across the business. It is important to have a perspective on the family of use cases by domain that will have the most transformative potential across business functions. Organizations are reimagining the target state enabled by generative AI working in sync with other traditional AI applications, along with new ways of working that may not have been possible before.

A modern data and tech stack is key to nearly any successful approach to generative AI. CEOs should look to their chief technology officers to determine whether the company has the required technical capabilities in terms of computing resources, data systems, tools, and access to models (open source via model hubs or commercial via APIs).

For example, the lifeblood of generative AI is fluid access to data honed for a specific business context or problem. Companies that have not yet found ways to effectively harmonize and provide ready access to their data will be unable to fine-tune generative AI to unlock more of its potentially transformative uses. Equally important is to design a scalable data architecture that includes data governance and security procedures. Depending on the use case, the existing computing and tooling infrastructure (which can be sourced via a cloud provider or set up in-house) might also need upgrading. A clear data and infrastructure strategy anchored on the business value and competitive advantage derived from generative AI will be critical.

CEOs will want to avoid getting stuck in the planning stages. New models and applications are being developed and released rapidly. GPT-4, for example, was released in March 2023, following the release of ChatGPT (GPT-3.5) in November 2022 and GPT-3 in 2020. In the world of business, time is of the essence, and the fast-paced nature of generative AI technology demands that companies move quickly to take advantage of it. There are a few ways executives can keep moving at a steady clip.

Although generative AI is still in the early days, it's important to showcase internally how it can affect a company's operating model, perhaps through a "lighthouse" approach. For example, one way forward is building a virtual expert that enables frontline workers to tap proprietary sources of knowledge and offer the most relevant content to customers. This has the potential to increase productivity, create enthusiasm, and enable an organization to test generative AI internally before scaling to customer-facing applications.

As with other waves of technical innovation, there will be proof-of-concept fatigue and many examples of companies stuck in pilot purgatory. But encouraging a proof of concept is still the best way to quickly test and refine a valuable business case before scaling to adjacent use cases. By focusing on early wins that deliver meaningful results, companies can build momentum and then scale out and up, leveraging the multipurpose nature of generative AI. This approach could enable companies to promote broader AI adoption and create the culture of innovation that is essential to maintaining a competitive edge. As outlined above, the cross-functional leadership team will want to make sure such proofs of concept are deliberate and coordinated.

As our four detailed use cases demonstrate, business leaders must balance value creation opportunities with the risks involved in generative AI. According to our recent Global AI Survey, most organizations don't mitigate most of the risks associated with traditional AI, even though more than half of organizations have already adopted the technology. Generative AI brings renewed attention to many of these same risks, such as the potential to perpetuate bias hidden in training data, while presenting new ones, such as its propensity to hallucinate.

As a result, the cross-functional leadership team will want to not only establish overarching ethical principles and guidelines for generative AI use but also develop a thorough understanding of the risks presented by each potential use case. It will be important to look for initial use cases that both align with the organization's overall risk tolerance and have structures in place to mitigate consequential risk. For example, a retail organization might prioritize a use case that has slightly lower value but also lower risk, such as creating initial drafts of marketing content and other tasks that keep a human in the loop. At the same time, the company might set aside a higher-value, high-risk use case such as a tool that automatically drafts and sends hyperpersonalized marketing emails. Such risk-forward practices can enable organizations to establish the controls necessary to properly manage generative AI and maintain compliance.

CEOs and their teams will also want to stay current with the latest developments in generative AI regulation, including rules related to consumer data protection and intellectual property rights, to protect the company from liability issues. Countries may take varying approaches to regulation, as they often already do with AI and data. Organizations may need to adapt their working approach to calibrate process management, culture, and talent management in a way that ensures they can handle the rapidly evolving regulatory environment and risks of generative AI at scale.

Business leaders should focus on building and maintaining a balanced set of alliances. A companys acquisitions and alliances strategy should continue to concentrate on building an ecosystem of partners tuned to different contexts and addressing what generative AI requires at all levels of the tech stack, while being careful to prevent vendor lock-in.

Partnering with the right companies can help accelerate execution. Organizations do not have to build out all applications or foundation models themselves. Instead, they can partner with generative AI vendors and experts to move more quickly. For instance, they can team up with model providers to customize models for a specific sector, or partner with infrastructure providers that offer support capabilities such as scalable cloud computing.

Companies can use the expertise of others and move quickly to take advantage of the latest generative AI technology. But generative AI models are just the tip of the spear: multiple additional elements are required for value creation.

To effectively apply generative AI for business value, companies need to build their technical capabilities and upskill their current workforce. This requires a concerted effort by leadership to identify the required capabilities based on the company's prioritized use cases, which will likely extend beyond technical roles to include a talent mix across engineering, data, design, risk, product, and other business functions.

As demonstrated in the use cases highlighted above, technical and talent needs vary widely depending on the nature of a given implementation, from using off-the-shelf solutions to building a foundation model from scratch. For example, to build a generative model, a company may need PhD-level machine learning experts; on the other hand, to develop generative AI tools using existing models and SaaS offerings, a data engineer and a software engineer may be sufficient to lead the effort.

In addition to hiring the right talent, companies will want to train and educate their existing workforces. Prompt-based conversational user interfaces can make generative AI applications easy to use. But users still need to optimize their prompts, understand the technology's limitations, and know where and when they can acceptably integrate the application into their workflows. Leadership should provide clear guidelines on the use of generative AI tools and offer ongoing education and training to keep employees apprised of their risks. Fostering a culture of self-driven research and experimentation can also encourage employees to innovate processes and products that effectively incorporate these tools.

Businesses have been pursuing AI ambitions for years, and many have realized new revenue streams, product improvements, and operational efficiencies. Much of the success in these areas has stemmed from AI technologies that remain the best tool for a particular job, and businesses should continue scaling such efforts. However, generative AI represents another promising leap forward and a world of new possibilities. While the technology's operational and risk scaffolding is still being built, business leaders know they should embark on the generative AI journey. But where and how should they start? The answer will vary from company to company as well as within an organization. Some will start big; others may undertake smaller experiments. The best approach will depend on a company's aspiration and risk appetite. Whatever the ambition, the key is to get under way and learn by doing.


AI creates images of the ‘perfect’ man and woman – Sky News

Posted: at 2:01 am

Wednesday 17 May 2023 10:55, UK

Artificial intelligence has produced its idea of what the "ideal" man and woman look like, based on social media data and results on the World Wide Web.

The AI images of men and women were created through engagement analytics on social media, using tools to look at billions of images of people.

The Bulimia Project, an eating disorder awareness group, monitored the findings and warned the results are "largely unrealistic" in their depiction of body types.


It said the images of women tended to have a bias toward blonde hair, brown eyes and olive skin - while for men, there was a bias toward brown hair, brown eyes and olive skin.

It also found that AI's collection of social media-inspired images were "far more sexually charged" than those based on everything else it found on the World Wide Web.

The study also showed there was some variation between body preferences for men and women.

The images generated of the "perfect" female body according to social media in 2023 featured tanned and Caucasian-looking women with slim figures and small waists.

For women, 37% of the AI-generated images included blonde hair, while 53% of the images included women with olive skin.

Images of the "perfect" male body featured muscly men with a six-pack, wearing tight t-shirts.

The images were created using the AI image generators Dall-E 2, Stable Diffusion, and Midjourney.

For men, 67% of the AI-generated images included brown hair and 63% of the images included olive skin.

The Bulimia Project then asked AI to share its perspective based on images from across the internet.


For the "perfect" woman in 2023 - AI generated images of women mainly with brown eyes, brown hair and tanned skin. For men with the same prompt, it produced images of men with facial hair, predominantly with brown eyes and hair.

The Bulimia Project said: "Considering that social media uses algorithms based on which content gets the most lingering eyes, it's easy to guess why AI's renderings would come out more sexualised.

"But we can only assume that the reason AI came up with so many oddly shaped versions of the physiques it found on social media is that these platforms promote unrealistic body types, to begin with."


Audit AI search tools now, before they skew research – Nature.com

Posted: at 2:01 am

Search tools assisted by large language models (LLMs) are changing how researchers find scholarly information. One tool, scite Assistant, uses GPT-3.5 to generate answers from a database of millions of scientific papers. Another, Elicit, uses an LLM to write its answers to searches for articles in a scholarly database. Consensus finds and synthesizes research claims in papers, whereas SciSpace bills itself as an AI research assistant that can explain mathematics or text contained in scientific papers. All of these tools give natural-language answers to natural-language queries.

Search tools tailored to academic databases can use LLMs to offer alternative ways of identifying, ranking and accessing papers. In addition, researchers can use general artificial intelligence (AI)-assisted search systems, such as Bing, with queries that target only academic databases such as CORE, PubMed and Crossref.

All search systems affect scientists' access to knowledge and influence how research is done. All have unique capabilities and limitations. I'm intimately familiar with this from my experience building Search Smart, a tool that allows researchers to compare the capabilities of 93 conventional search tools, including Google Scholar and PubMed. AI-assisted, natural-language search tools will undoubtedly have an impact on research. The question is: how?

The time remaining before LLMs' mass adoption in academic search must be used to understand the opportunities and limitations. Independent audits of these tools are crucial to ensure the future of knowledge access.

All search tools assisted by LLMs have limitations. LLMs can hallucinate: making up papers that don't exist, or summarizing content inaccurately by inventing facts. Although dedicated academic LLM-assisted search systems are less likely to hallucinate because they are querying a set scientific database, the extent of their limitations is still unclear. And because AI-assisted search systems, even open-source ones, are black boxes (their mechanisms for matching terms, ranking results and answering queries aren't transparent), methodical analysis is needed to learn whether they miss important results or systematically favour specific types of papers, for example. Anecdotally, I have found that Bing, scite Assistant and SciSpace tend to yield different results when a search is repeated, leading to irreproducibility. The lack of transparency means there are probably many limitations still to be found.
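The irreproducibility described here is one of the few black-box properties anyone can measure directly: issue the identical query several times and quantify how much the result lists agree. A minimal sketch, assuming a hypothetical `search(query)` callable standing in for whatever API a given tool exposes (the stub below is illustrative, not any real tool's interface):

```python
# Sketch of a repeatability audit for a black-box search tool.
# `search` is a stand-in for whatever API the tool exposes; here it is
# simulated with a deterministic stub so the example is self-contained.

def jaccard(a, b):
    """Overlap between two result lists as sets: 1.0 means identical."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def repeatability(search, query, runs=5):
    """Issue the same query repeatedly; report the mean pairwise overlap."""
    results = [search(query) for _ in range(runs)]
    pairs = [(results[i], results[j])
             for i in range(runs) for j in range(i + 1, runs)]
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

# A perfectly deterministic engine scores 1.0; the tools named above
# would anecdotally score lower.
stable = lambda q: ["paper-1", "paper-2", "paper-3"]
print(repeatability(stable, "llm search audit"))  # 1.0
```

A score well below 1.0 on repeated identical queries is exactly the irreproducibility the author observed with Bing, scite Assistant and SciSpace.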

Already, Twitter threads and viral YouTube videos promise that AI-assisted search can speed up systematic reviews or facilitate brainstorming and knowledge summarization. If researchers are not aware of the limitations and biases of such systems, then research outcomes will deteriorate.

Regulations exist for LLMs in general, some within the sphere of the research community. For example, publishers and universities have hammered out policies to prevent LLM-enabled research misconduct such as misattribution, plagiarism or faking peer review. Institutions such as the US Food and Drug Administration rate and approve AIs for specific uses, and the European Commission is proposing its own legal framework on AI. But more-focused policies are needed specifically for LLM-assisted search.

In working on Search Smart, I developed a way to assess the functionalities of databases and their search systems systematically and transparently. I often found capabilities or limitations that were omitted or inaccurately described in the search tools' own frequently asked questions. At the time of our study, Google Scholar was researchers' most widely used search engine. But we found that its ability to interpret Boolean search queries, such as ones involving OR and AND, was both inadequate and inadequately reported. On the basis of these findings, we recommended not relying on Google Scholar for the main search tasks in systematic reviews and meta-analyses (M. Gusenbauer & N. R. Haddaway Res. Synth. Methods 11, 181-217; 2020).
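The Boolean finding suggests a simple style of audit that needs no access to a tool's internals: correct OR/AND semantics imply set relations between result lists that can be checked from the outside. A toy sketch, with `corpus` and `hits` as illustrative stand-ins rather than any real engine's API:

```python
# Sketch of a Boolean-semantics check of the kind behind the Google
# Scholar finding: under correct semantics, hits("A OR B") must contain
# every hit for "A", and hits("A AND B") only documents matching both.

corpus = {
    "doc1": "meta analysis of sleep",
    "doc2": "systematic review of sleep",
    "doc3": "meta analysis of diet",
}

def hits(query):
    """A toy engine with correct OR/AND handling over the toy corpus."""
    if " OR " in query:
        a, b = query.split(" OR ")
        return {d for d, text in corpus.items() if a in text or b in text}
    if " AND " in query:
        a, b = query.split(" AND ")
        return {d for d, text in corpus.items() if a in text and b in text}
    return {d for d, text in corpus.items() if query in text}

# The audit: these invariants must hold for any engine that truly
# supports Boolean queries, whatever its internals.
assert hits("sleep") <= hits("sleep OR diet")
assert hits("meta AND sleep") <= hits("meta")
```

An engine that violates either invariant, as Google Scholar's documentation-versus-behaviour gap suggested, cannot be trusted for the exhaustive queries that systematic reviews require.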

Even if search AIs are black boxes, their performance can still be evaluated using metamorphic testing. This is a bit like a car-crash test: it asks only whether and how passengers survive varying crash scenarios, without needing to know how the car works internally. Similarly, AI testing should prioritize assessing performance in specific tasks.
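In code, one metamorphic relation for a search tool might be "paraphrasing a query should not substantially change the result set". A sketch under that assumption, with a hypothetical black-box `search` callable; the stub engine and the 0.5 threshold are illustrative choices, not established standards:

```python
# A minimal metamorphic test: we never inspect the engine's internals,
# we only check a relation that should hold between outputs for related
# inputs, e.g. paraphrased queries returning largely the same papers.

def metamorphic_paraphrase_test(search, query, paraphrase, min_overlap=0.5):
    """Pass if the two phrasings agree on at least `min_overlap` of results."""
    a, b = set(search(query)), set(search(paraphrase))
    union = a | b
    overlap = len(a & b) / len(union) if union else 1.0
    return overlap >= min_overlap

# Stub engine keyed on crude keyword matching; a robust tool should not
# change its answer because of wording alone.
def stub_search(query):
    key = frozenset(query.lower().split()) & {"llm", "hallucination"}
    return ["p1", "p2"] if key else ["p9"]

print(metamorphic_paraphrase_test(
    stub_search,
    "LLM hallucination rates",
    "rates of hallucination in LLMs",
))
```

Like the crash test, the relation is defined purely over inputs and outputs, so the same harness works on any tool, open or closed.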

LLM creators should not be relied on to do these tests. Instead, third parties should conduct a systematic audit of these systems' functionalities. Organizations that already synthesize evidence and advocate for evidence-based practices, such as Cochrane or the Campbell Collaboration, would be ideal candidates. They could conduct audits themselves or jointly with other entities. Third-party auditors might want to partner with librarians, who are likely to have an important role in teaching information literacy around AI-assisted search.

The aim of these independent audits would not be to decide whether or not LLMs should be used, but to offer clear, practical guidelines so that AI-assisted searches are used only for tasks of which they are capable. For example, an audit might find that a tool can be used for searches that help to define the scope of a project, but can't reliably identify papers on the topic because of hallucination.

AI-assisted search systems must be tested before researchers inadvertently introduce biased results on a large scale. A clear understanding of what these systems can and cannot do can only improve scientific rigour.

M.G. is the founder of Search Smart, a free website that tests academic search systems.

Continued here:

Audit AI search tools now, before they skew research - Nature.com

Posted in Ai | Comments Off on Audit AI search tools now, before they skew research – Nature.com

3 Reasons C3.ai Stock Could Be Your Golden Ticket to the AI … – InvestorPlace

Posted: at 2:01 am

It's understandable if some financial traders are skeptical of enterprise artificial intelligence (AI) company C3.ai (NYSE:AI). After all, AI stock rallied hard in early 2023. Yet, C3.ai's growth story isn't over yet. There are still reasons to think about investing in this highly touted software startup.

It seems like every publicly listed technology company is jumping on the machine-learning bandwagon nowadays. CEOs are purposely mentioning AI multiple times during conference calls, just to drum up investor interest.

In contrast, C3.ai definitely isn't a bandwagon jumper. The company has been a machine-learning mainstay since before the trend picked up steam in 2023. So, let's recap three great reasons to think about buying AI stock now.

By a long shot, C3.ai isnt the biggest company involved in machine learning. As of this writing, C3.ai is No. 13 among the largest AI businesses based on market capitalization.

On the other hand, you're definitely not getting pure-play machine-learning exposure if you invest in Microsoft (NASDAQ:MSFT) or Nvidia (NASDAQ:NVDA). Unlike those tech titans, C3.ai is, to quote Alex Sirois, considered to be among the most direct ways to play the AI boom.

Sure, Microsoft invested in the technology of an AI company (specifically, OpenAI's ChatGPT chatbot). However, C3.ai actually is an AI company first and foremost. This isn't to suggest that you shouldn't invest in Microsoft, Nvidia and so on. It's possible to own shares of a variety of technology companies, while also boosting your portfolio's machine-learning exposure with AI stock.

C3.ai serves the public and private sectors and has significant clients in both of those categories. The company's public-sector clients include the Sheriff's Office of San Mateo County, Calif., and even the U.S. Air Force.

Furthermore, C3.ai's private-sector clients include such corporate giants as Shell (NYSE:SHEL), Consolidated Edison (NYSE:ED) and Raytheon Technologies (NYSE:RTX). With heavy hitters like those on C3.ai's roster of clients, one might expect the company to generate robust revenue.

And indeed, C3.ai has proven itself in that regard. During the third fiscal quarter of 2023, C3.ai generated $66.7 million in total revenue, exceeding the company's guidance of $63 million to $65 million.

I'll admit, folks who took a share position in C3.ai in early April entered into a crowded trade. If they held on to their stake in C3.ai, they're surely underwater on their investment now.

You might hear analysts warning financial traders about chasing the rally in AI stock. Yet, that rally is old news by now. The stock has pulled back, thereby allowing new investors to get on board and prior shareholders to reduce their cost basis.

In other words, you don't have to worry about being a hype-chaser if you choose to invest in C3.ai now. The C3.ai share price is close to where it was in February of this year, before machine-learning mania took over the financial markets. Therefore, don't hesitate to give C3.ai a chance, as the company deserves a place in your AI-friendly portfolio right now.

On the date of publication, David Moadel did not have (either directly or indirectly) any positions in the securities mentioned in this article. The opinions expressed in this article are those of the writer, subject to the InvestorPlace.com Publishing Guidelines.

David Moadel has provided compelling content and crossed the occasional line on behalf of Motley Fool, Crush the Street, Market Realist, TalkMarkets, TipRanks, Benzinga, and (of course) InvestorPlace.com. He also serves as the chief analyst and market researcher for Portfolio Wealth Global and hosts the popular financial YouTube channel Looking at the Markets.

Read more from the original source:

3 Reasons C3.ai Stock Could Be Your Golden Ticket to the AI ... - InvestorPlace

Posted in Ai | Comments Off on 3 Reasons C3.ai Stock Could Be Your Golden Ticket to the AI … – InvestorPlace

Zoom makes a big bet on AI with investment in Anthropic – VentureBeat

Posted: at 2:01 am

Zoom is going all in on generative AI. After announcing a partnership with OpenAI in March, the enterprise communication company today said it is teaming up with AI startup Anthropic to integrate Anthropic's Claude AI assistant into Zoom's productivity platform. The company has also made an investment of an undisclosed amount in Google-backed Anthropic through its global investment arm.

The partnership, a part of Zoom's federated approach to AI, comes as Microsoft continues to roll out AI-powered smarts in Teams, Google brings AI into Workspace and Salesforce focuses on Slack GPT.

However, Zoom says it will first incorporate Claude to evolve its omnichannel contact center offerings before moving on to other segments of the platform. It did not share when or how the broader integration would be executed.

Zoom's Contact Center is a video-first support hub that improves customer support for enterprises. It includes multiple products, including Zoom Virtual Agent and Zoom Workforce Management.

With the Anthropic partnership, Zoom plans to integrate Claude across the entire Contact Center portfolio to build self-service features that not only improve end-user outcomes but also enable superior agent experiences.

For instance, it will be able to understand customers' intent from their inputs and guide them to the best solution, as well as provide actionable insights that managers can use to coach agents.

"Anthropic's Constitutional AI model is primed to provide safe and responsible integrations for our next-generation innovations, beginning with the Zoom Contact Center portfolio," said Smita Hashim, chief product officer at Zoom. "With Claude guiding agents toward trustworthy resolutions and powering self-service for end users, companies will be able to take customer relationships to another level."

Moving ahead, Zoom Contact Center will also use Claude "to provide the right resources to agents, enabling them to deliver improved customer service," a company spokesperson told VentureBeat. They added that Claude's capabilities will be expanded across the Zoom platform (which includes Team Chat, Meetings, Phone and Whiteboard) but did not share specific details.

The partnership with Anthropic is Zoom's latest move in its federated approach to AI, where it is using its own proprietary AI models along with those from leading AI companies and select customers' own models.

"With this flexibility to incorporate multiple types of models, our goal is to provide the most value for our customers' diverse needs. These models are also customizable, so they can be tuned to a given company's vocabulary and scenarios for better performance," Hashim said in a blog post.

Zoom has already been working with OpenAI for IQ, its conversational intelligence product. In fact, back in March, Zoom announced multiple AI-powered capabilities for the product with OpenAI, including the ability to generate draft messages and emails and provide summaries for chat threads. The capabilities started rolling out for select customers in April.

Visit link:

Zoom makes a big bet on AI with investment in Anthropic - VentureBeat

Posted in Ai | Comments Off on Zoom makes a big bet on AI with investment in Anthropic – VentureBeat

AI voice phone scams are on the rise. Here’s how to avoid them – USA TODAY

Posted: at 2:01 am

Jennifer Jolly | Special to USA TODAY

The most powerful people on the planet don't quite know what to make of AI as it quickly becomes one of the most significant new technologies in history.

But criminals sure do.

In the six months since OpenAI first unleashed ChatGPT on the masses and ignited an artificial intelligence arms race with the potential to reshape history, a new strain of cybercriminals has been among the first to cash in.

These next-gen bandits come armed with sophisticated new tools and techniques to steal hundreds of thousands of dollars from people like you and me.

"I am seeing a highly concerning rise in criminals using advanced technology, AI-generated deepfakes and cloned voices, to perpetrate very devious schemes that are almost impossible to detect," Haywood Talcove, CEO of LexisNexis Risk Solutions' Government Group, a multinational information and analytics company based in Atlanta, told me over Zoom.

"If you get a call in the middle of the night and it sounds exactly like your panicked child or grandchild saying, 'help, I was in a car accident, the police found drugs in the car, and I need money to post bail (or for a retainer for a lawyer),' it's a scam," Talcove explained.

Earlier this year, law enforcement officials in Canada say one man used AI-generated voices he likely cloned from social media profiles to con at least eight senior citizens out of $200,000 in just three days.

Similar scams preying on parents and grandparents are also popping up in nearly every state in America. This month, several Oregon school districts warned parents about a spate of fake kidnapping calls.

The calls come in from an unknown caller ID (though even cell phone numbers are easy to spoof these days). A voice comes on that sounds exactly like your loved one saying theyre in trouble. Then they get cut off, you hear a scream, and another voice comes on the line demanding ransom, or else.

The FBI, FTC, and even the NIH warn of similar scams targeting parents and grandparents across the United States. In the last few weeks, it's happened in Arizona, Illinois, New York, New Jersey, California, Washington, Florida, Texas, Ohio, Virginia, and many other states.

An FBI special agent in Chicago told CNN that families in America lose an average of $11,000 in each fake-kidnapping scam.

Talcove recommends having a family password that only you and your closest inner circle share. Don't make it anything easily discovered online either: no names of pets, favorite bands, etc. Better yet, make it two or three words that you discuss and memorize. If you get a call that sounds like a loved one, ask them for the code word or phrase immediately.

If the caller pretends to be law enforcement, tell them you have a bad connection and will call them back. Ask the name of the facility they're calling from (campus security, local jail, the FBI), and hang up (even though scammers will say just about anything to get you to stay on the line). If you can't reach your loved one, look up the phone number of that facility or call your local law enforcement and tell them what's going on.

Remember, these criminals use fear, panic, and other proven tactics to get you to share personal information or send money. Usually, the caller wants you to wire money, transfer it directly via Zelle or Venmo, send cryptocurrency, or buy gift cards and give them the card numbers and PINs. These are all giant red flags.

Also, be more careful than ever about what information you put out into the world.

An FTC alert also suggests calling the person who supposedly contacted you to verify the story, using a phone number you know is theirs. "If you can't reach your loved one, try to get in touch with them through another family member or their friend," it says on its website.

"A criminal only needs three seconds of audio of your voice to clone it," Talcove warns. "Be very careful with social media. Consider making your accounts private. Don't reveal the names of your family or even your dog. This is all information that a criminal armed with deepfake technology could use to fool you or your loved ones into a scam."

Talcove shared a half dozen how-to video clips he says he pulled from the dark web showing these scams in action. He explained that criminals often sell information on how to create these deepfakes to other fraudsters.

"I keep my eyes on criminal networks and emerging tactics. We literally monitor social media and the dark web and infiltrate criminal groups," he added. "It's getting scary. For example, filters can be applied over Zoom to change somebody's voice and appearance. A criminal who grabs just a few seconds of audio from your [social media feeds], for example, can clone your voice and tone."

I skipped all the organized crime parts and just Googled "AI voice clone." I won't say exactly which tool I used, but it took me less than ten minutes to upload 30 seconds of my husband's voice from a video saved on my smartphone to an AI audio generator online, for free. I typed in a few funny lines I wanted him to say, saved it on my laptop, and texted it to our family. The most challenging part was transferring the original clip from a .mov to a .wav file (and that's easy too).
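The column doesn't name the tool it used, but the .mov-to-.wav step it calls the hardest part is scriptable with the widely available ffmpeg utility, assuming it is installed. A small sketch; the filenames and the `mov_to_wav` helper are illustrative:

```python
# Sketch: extract the audio track of a .mov clip to a .wav file via ffmpeg.
# `-y` overwrites the output if it exists; `-vn` drops the video stream.
import subprocess

def mov_to_wav(src, dst, run=False):
    """Build (and optionally execute) the ffmpeg command for src -> dst."""
    cmd = ["ffmpeg", "-y", "-i", src, "-vn", dst]
    if run:
        subprocess.run(cmd, check=True)  # requires ffmpeg on the PATH
    return cmd

print(mov_to_wav("clip.mov", "clip.wav"))
```

That one command is the whole conversion, which underscores the column's point about how low the barrier to voice cloning has become.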

It fooled his mom, my parents, and our children.

"We're all vulnerable, but the most vulnerable among us are our parents and grandparents," Talcove says. "Ninety-nine in 100 people couldn't detect a deepfake video or voice clone. But our parents and grandparents, categorically, are less familiar with this technology. They would never suspect that the voice on the phone, which sounds exactly like their child screaming for help during a kidnapping, might be completely artificial."

Jennifer Jolly is an Emmy Award-winning consumer tech columnist. The views and opinions expressed in this column are the author's and do not necessarily reflect those of USA TODAY.

View original post here:

AI voice phone scams are on the rise. Here's how to avoid them - USA TODAY

Posted in Ai | Comments Off on AI voice phone scams are on the rise. Here’s how to avoid them – USA TODAY

Amazon is building an AI-powered conversational experience for … – The Verge

Posted: at 2:01 am

Amazon is pitching these changes to search as absolutely massive. "This will be a once in a generation transformation for Search, just like the Mosaic browser made the Internet easier to engage with three decades ago," Amazon wrote. "If you missed the '90s (WWW, Mosaic, and the founding of Amazon and Google), you don't want to miss this opportunity." And we might be seeing the changes sooner rather than later, as Amazon wants to deliver this vision "to our customers right away."

It's understandable why Amazon seems to be racing here. A chatbot can be a useful starting point when you're looking to buy something with specific parameters. And just last week, Google showed how its new AI-powered Search Generative Experience can create buying guides from a single search. Amazon certainly doesn't want to lose any ground in shopping, so it's not surprising that the company wants to introduce its own chatbot very soon.

That said, it's unclear when this new experience might actually be released or what it might look like. When we asked for comment, Amazon spokesperson Keri Bertolino only shared this: "We are significantly investing in generative AI across all of our businesses." And given the general state of AI chatbots right now, I'm not confident the chatbot will be all that good. (In our comparison from March, ChatGPT generally beat out Microsoft's Bing and Google's Bard.)

Still, it seems extremely likely that conversational shopping is coming to Amazon in the not-too-distant future, so you might as well get prepared for the search experience on Amazon to get even more cluttered. Hopefully, Amazon makes this new experience optional, as Google has for its own generative AI search.

Original post:

Amazon is building an AI-powered conversational experience for ... - The Verge

Posted in Ai | Comments Off on Amazon is building an AI-powered conversational experience for … – The Verge

AI speculators need to ‘differentiate between actual spending and investment’ and hype: Strategist – Yahoo Finance

Posted: at 2:01 am

Tematica Research CIO Chris Versace and J.P. Morgan Asset Management Global Market Strategist Meera Pandit discuss how the proliferation of AI speculation has impacted markets at large.

BRAD SMITH: Just this morning, Alphabet hit $1.5 trillion in market cap for the first time in over a year. The tech giant getting a boost from the AI hype. And shares of Alphabet are up 35% this year after a dismal 2022, right alongside Microsoft, Meta, and Amazon, which are all enjoying double-digit gains for the year. So is this rally all AI hype here? I mean, have we seen so many companies just mention AI and the market say, all right, yeah, automatically there's a multiplier effect--

CHRIS VERSACE: So I'm going--

BRAD SMITH: --that you have to benefit from?

CHRIS VERSACE: So I'm going to use one of my favorite words called hopium. There is, I think, a lot of that in there because when you look at the end markets, like-- like for Microsoft, you know, data center, OK, moving along, not necessarily shooting the lights out, PCs continue to be weak, and you look at some of the other end markets also for NVIDIA, for example, what has been the common thread here? It's all been, what you just said, the mention of AI.

And what bothers me a little bit about it is how companies like PepsiCo, Wendy's, and others are starting to talk about how they will be using AI and how that's going to really change their business. And then all of a sudden, I flash back to 1999, 2000 in the dot-com era when all sorts of companies were like, well, we're no longer X company. We're x.com company. And it's almost like I got to be in the game. I got to say something.

BRAD SMITH: Which was actually different for Apple because they didn't bring it up voluntarily. It got brought up in a question for them.

CHRIS VERSACE: Yes. That's 100% correct.

JULIE HYMAN: Although for Apple, it makes more sense than--

BRAD SMITH: It does. It makes way more sense than--

JULIE HYMAN: --Coke or Pepsi--

BRAD SMITH: --Wendy's.

JULIE HYMAN: --for example, where it's, like, a random bolt on.

CHRIS VERSACE: Well, but I mean, if you think about your iPhone, you're a carrier-- you are already carrying AI around with you day in, day out. So I'm concerned a little bit that this is overdone, right? You know, typically, when we have some new-new thing on the technology front, expectations do get big and company shares can get out over their skis. So the question to me is, what's the pop in that potential bubble? That's what I'm watching for.

JULIE HYMAN: I want to ask you about a specific one of those names that you mentioned because I know, Meera, you talk sort of more broadly. But I want to ask about Nvidia because this is a stock that has doubled--

CHRIS VERSACE: Yes.

JULIE HYMAN: --this year. It hasn't even reported its earnings yet. That's set for, what--

CHRIS VERSACE: Next week.

JULIE HYMAN: --a week from today.

CHRIS VERSACE: Next week. Next week.

JULIE HYMAN: So is this-- I mean, AI is sort of really knit into the fabric of what Nvidia chips do and what they want them to do. Does it make more sense for a company like this to be up that much or is it also too much?

CHRIS VERSACE: I would say that it's-- going into their earnings, it's probably priced to perfection, which means that they need not only to deliver, they need to beat and raise in order for the stock to-- to move further higher.

BRAD SMITH: AI as a theme, if investors were to even try and position their portfolio or have some type of exposure to AI right now-- from what we've heard even in the mentions over the earnings season, I think Mark Zuckerberg actually did the best of laying this out that it's applications that live on top of the language models that live on top of chips-- is there a strong thematic play that's emerging right now as a subset of AI perhaps?

MEERA PANDIT: I think we need to not put the cart before the horse here because I see a couple of different headwinds from an AI perspective if we bring in the macro story because one, if we think about what AI is going to require, it's going to require businesses to spend more precisely at a time where profits are weakening, companies are trying to batten down the hatches a bit and preserve the profitability. So that's a little bit of a headwind there in terms of how much additional spend can go towards this in the near term.

The other thing I'd say is I think with a lot of these technologies, they actually require more workers up front even if eventually they will save on the labor force. So if we think about the worker's position, the shortage of workers we have, the very specific training that might be required, I think there are some headwinds when we think about the supply of workers and companies' ability to put capital in the near term.

JULIE HYMAN: And sorry to interrupt, Meera, but as you're talking, it occurs to me, do companies also risk putting-- over-resourcing AI at the expense because of the hype, because of the push by investors at the expense of core businesses?

MEERA PANDIT: That's the risk.

JULIE HYMAN: Yeah.

MEERA PANDIT: You don't want businesses to be playing in too many sandboxes at the same time. So I do think that this is the time where businesses need to really focus. And businesses have been good about focusing on wage pressures, cost pressures, higher dollar over the last year or so and making some tough decisions. I don't think company managers should lose that discipline in the face of the newest shiny object.

Now, I think long term, to your point, this is a huge theme that will play out over many years. But we might need to be a little bit patient. Think about, again, some of the ancillary technology required, the inputs required over a longer period of time that can fuel this theme. But I think that the huge run-up in markets solely around AI enthusiasm might be a little bit beyond its skis.

BRAD SMITH: Chris, I saw you leaning in, about to jump on the table.

CHRIS VERSACE: Yeah. Yeah, I was just going to say that we have to kind of differentiate between actual spending and investment on this compared to companies talking about it because in this environment, if you trace it back over the last several weeks, Microsoft shares took off, right, and Google-- kind of Google shares lagged behind really until very recently, especially coming out of their I/O event, where they really talked about how they're incorporating AI not only into Bard, but in other areas, saying that, hey, we are in this game. And again, if you look at that as the model, companies that don't really talk about it, there's going to be this perception that, oh, maybe they're falling behind, and they won't want that.

BRAD SMITH: Are we underestimating what the regulatory framework for AI may look like at this point?

CHRIS VERSACE: My suspicion is yes.

BRAD SMITH: Yeah.

JULIE HYMAN: Yeah, I mean, at the same time, if we have companies that risk not talking about it, like is that-- and falling behind, is that an opportunity for investors? In other words, just because a company isn't talking about it doesn't mean it's not doing it. It doesn't mean-- do you know what I mean?

CHRIS VERSACE: Yeah. Yeah. Yeah. Well, take your point on Apple-- or Brad's point on Apple, right? They are doing it. They are investing in it. But they're not necessarily talking it up. I think the one thing I will say is not for investors, I think for traders.

JULIE HYMAN: Gotcha. An important distinction always to make.

BRAD SMITH: Great to have you both here with us today. We've got Chris Versace, Tematica Research chief investment officer, Chris, great to see you, as always, as well as Meera Pandit, who is the JP Morgan Asset Management global market strategist. We appreciate the time this morning.

CHRIS VERSACE: Thank you.

See the article here:

AI speculators need to 'differentiate between actual spending and investment' and hype: Strategist - Yahoo Finance

Posted in Ai | Comments Off on AI speculators need to ‘differentiate between actual spending and investment’ and hype: Strategist – Yahoo Finance