III. Artificial intelligence and the economy: implications for central banks – bis.org

Key takeaways

The advent of large language models (LLMs) has catapulted generative artificial intelligence (gen AI) into popular discourse. LLMs have shifted the way people interact with computers away from code and programming interfaces towards ordinary text and speech. This ability to converse through ordinary language, together with gen AI's human-like capabilities in creating content, has captured our collective imagination.

Below the surface, the underlying mathematics of the latest AI models follow basic principles that would be familiar to earlier generations of computer scientists. Words or sentences are converted into arrays of numbers, making them amenable to arithmetic operations and geometric manipulations that computers excel at.

What is new is the ability to bring mathematical order at scale to everyday unstructured data, whether they be text, images, videos or music. Recent AI developments have been enabled by two factors. First is the accumulation of vast reservoirs of data. The latest LLMs draw on the totality of textual and audiovisual information available on the internet. Second is the massive computing power of the latest generation of hardware. These elements turn AI models into highly refined prediction machines, possessing a remarkable ability to detect patterns in data and fill in gaps.

There is an active debate on whether enhanced pattern recognition is sufficient to approximate "artificial general intelligence" (AGI), endowing AI with full human-like cognitive capabilities. Irrespective of whether AGI can be attained, the ability to impose structure on unstructured data has already unlocked new capabilities in many tasks that eluded earlier generations of AI tools.1 The new generation of AI models could be a game changer for many activities and have a profound impact on the broader economy and the financial system. Not least, these same capabilities can be harnessed by central banks in pursuit of their policy objectives, potentially transforming key areas of their operations.

The economic potential of AI has set off a gold rush across the economy. The adoption of LLMs and gen AI tools is proceeding at such breathtaking speed that it easily outpaces previous waves of technology adoption (Graph 1.A). For example, ChatGPT alone reached one million users in less than a week and nearly half of US households have used gen AI tools in the past 12 months. Mirroring rapid adoption by users, firms are already integrating AI in their daily operations: global survey evidence suggests firms in all industries use gen AI tools (Graph 1.B). To do so, they are investing heavily in AI technology to tailor it to their specific needs and have embarked on a hiring spree of workers with AI-related skills (Graph 1.C). Most firms expect these trends to only accelerate.2

This chapter lays out the implications of these developments for central banks, which impinge on them in two important ways.

First, AI will influence central banks' core activities as stewards of the economy. Central bank mandates revolve around price and financial stability. AI will affect financial systems as well as productivity, consumption, investment and labour markets, which themselves have direct effects on price and financial stability. Widespread adoption of AI could also enhance firms' ability to quickly adjust prices in response to macroeconomic changes, with repercussions for inflation dynamics. These developments are therefore of paramount concern to central banks.

Second, the use of AI will have a direct bearing on the operations of central banks through its impact on the financial system. For one, financial institutions such as commercial banks increasingly employ AI tools, which will change how they interact with and are supervised by central banks. Moreover, central banks and other authorities are likely to increasingly use AI in pursuing their missions in monetary policy, supervision and financial stability.

Overall, the rapid and widespread adoption of AI implies that there is an urgent need for central banks to raise their game. To address the new challenges, central banks need to upgrade their capabilities both as informed observers of the effects of technological advancements as well as users of the technology itself. As observers, central banks need to stay ahead of the impact of AI on economic activity through its effects on aggregate supply and demand. As users, they need to build expertise in incorporating AI and non-traditional data in their own analytical tools. Central banks will face important trade-offs in using external vs internal AI models, as well as in collecting and providing in-house data vs purchasing them from external providers. Together with the centrality of data, the rise of AI will require a rethink of central banks' traditional roles as compilers, users and providers of data. To harness the benefits of AI, collaboration and the sharing of experiences emerge as key avenues for central banks to mitigate these trade-offs, in particular by reducing the demands on information technology (IT) infrastructure and human capital. Central banks need to come together to form a "community of practice" to share knowledge, data, best practices and AI tools.

The chapter starts with an overview of developments in AI, providing a deep dive into the underlying technology. It then examines the implications of the rise of AI for the financial sector. The discussion includes current use cases of AI by financial institutions and implications for financial stability. It also outlines the emerging opportunities and challenges and the implications for central banks, including how they can harness AI to fulfil their policy objectives. The chapter then discusses how AI affects firms' productive capacity and investment, as well as labour markets and household consumption, and how these changes in aggregate demand and supply affect inflation dynamics. The chapter concludes by examining the trade-offs arising from the use of AI and the centrality of data for central banks and regulatory authorities. In doing so, it highlights the urgent need for central banks to cooperate.

Artificial intelligence is a broad term, referring to computer systems performing tasks that require human-like intelligence. While the roots of AI can be traced back to the late 1950s, the advances in the field of machine learning in the 1990s laid the foundations of the current generation of AI models. Machine learning is a collective term referring to techniques designed to detect patterns in the data and use them in prediction or to aid decision-making.3

The development of deep learning in the 2010s constituted the next big leap. Deep learning uses neural networks, perhaps the most important technique in machine learning, underpinning everyday applications such as facial recognition or voice assistants. The main building block of neural networks is the artificial neuron, which takes multiple input values and transforms them into output numbers that can be readily analysed. Artificial neurons are organised into a sequence of layers that can be stacked: the neurons of the first layer take the input data and output an activation value. Subsequent layers then take the output of the previous layer as input, transform it and output another value, and so forth. A network's depth refers to the number of layers. More layers allow neural networks to capture increasingly complex relationships in the data. The weights determining the strength of connections between different neurons and layers are collectively called parameters, which are iteratively improved during training (a process known as learning). Deeper networks with more parameters require more training data but predict more accurately.
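The stacking of layers described above can be sketched in a few lines of code. This is a toy illustration with invented weights, not an actual trained network: a first layer maps three inputs to two activations, and a second layer takes those activations as its input.

```python
import math

def dense_layer(inputs, weights, biases):
    # Each artificial neuron computes a weighted sum of its inputs,
    # adds a bias and applies a non-linear activation (here tanh).
    return [math.tanh(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# Invented weights for a toy two-layer network.
w1 = [[0.5, -0.2, 0.1], [0.3, 0.8, -0.5]]
b1 = [0.0, 0.1]
w2 = [[1.0, -1.0]]
b2 = [0.2]

x = [0.4, -0.7, 0.9]                  # input data
hidden = dense_layer(x, w1, b1)       # activations of the first layer
output = dense_layer(hidden, w2, b2)  # second layer takes them as input
```

In a real network, training would adjust the weights and biases iteratively; here they are fixed purely to show how layers feed into one another.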

A key advantage of deep learning models is their ability to work with unstructured data. They achieve this by "embedding" qualitative, categorical or visual data, such as words, sentences, proteins or images, into arrays of numbers, an approach pioneered at scale by the Word2Vec model (see Box A). These arrays of numbers (ie vectors) are interpreted as points in a vector space. The distance between vectors conveys some dimension of similarity, enabling algebraic manipulations on what is originally qualitative data. For example, the vector linking the embeddings of the words "big" and "biggest" is very similar to that between "small" and "smallest". Word2Vec predicts a word based on the surrounding words in a sentence. The body of text used for the embedding exercise is drawn from the open internet through the "common crawl" database. The concept of embedding can be taken further into mapping the space of economic ideas, uncovering latent viewpoints or methodological approaches of individual economists or institutions ("personas"). The space of ideas can be linked to concrete policy actions, including monetary policy decisions.4
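The geometric regularity described above can be illustrated with toy vectors. The three-dimensional embeddings below are invented for illustration; real models use hundreds of dimensions learned from data.

```python
import math

# Invented 3-dimensional "embeddings" for four words.
emb = {
    "big":      [0.9, 0.1, 0.0],
    "biggest":  [0.9, 0.1, 0.8],
    "small":    [-0.9, 0.1, 0.0],
    "smallest": [-0.9, 0.1, 0.8],
}

def diff(a, b):
    # Vector linking embedding b to embedding a.
    return [x - y for x, y in zip(a, b)]

def cosine(a, b):
    # Cosine similarity: 1 means the vectors point the same way.
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda v: math.sqrt(sum(x * x for x in v))
    return dot / (norm(a) * norm(b))

# The vector from "big" to "biggest" points in the same direction
# as the vector from "small" to "smallest".
v1 = diff(emb["biggest"], emb["big"])
v2 = diff(emb["smallest"], emb["small"])
similarity = cosine(v1, v2)
```

In a trained model such as Word2Vec the analogy holds only approximately; the toy numbers here make it exact to keep the geometry visible.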

The advent of LLMs allows neural networks to access the whole context of a word rather than just its neighbour in the sentence. Unlike Word2Vec, LLMs can now capture the nuances of translating uncommon languages, answer ambiguous questions or analyse the sentiment of texts. LLMs are based on the transformer model (see Box B). Transformers rely on "multi-headed attention" and "positional encoding" mechanisms to efficiently evaluate the context of any word in the document. The context influences how words with multiple meanings map into arrays of numbers. For example, "bond" could refer to a fixed income security, a connection or link, or a famous espionage character. Depending on the context, the "bond" embedding vector lies geometrically closer to words such as "treasury", "unconventional" and "policy"; to "family" and "cultural"; or to "spy" and "martini". These developments have enabled AI to move from narrow systems that solve one specific task to more general systems that deal with a wide range of tasks.
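The attention mechanism at the heart of transformers can be sketched numerically. This is a single attention head over invented two-dimensional vectors; real models use many heads and high-dimensional learned embeddings.

```python
import math

def softmax(xs):
    # Normalise scores into weights that sum to one.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(query, keys, values):
    # Scaled dot-product attention: score each key against the query,
    # turn scores into weights, then take a weighted average of values.
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, ks)) / math.sqrt(d)
              for ks in keys]
    weights = softmax(scores)
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

# Invented vectors: in a monetary-policy sentence, the query for "bond"
# attends more to "treasury" than to "martini", pulling its contextual
# representation toward the finance-related value vector.
query  = [1.0, 0.0]                 # "bond" in context
keys   = [[1.0, 0.0], [0.0, 1.0]]   # "treasury", "martini"
values = [[0.9, 0.1], [0.1, 0.9]]
ctx = attention(query, keys, values)
```

The context-dependent vector `ctx` ends up closer to the "treasury" value than to the "martini" value, which is the mechanism behind the shifting meaning of "bond" described above.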

LLMs are a leading example of gen AI applications because of their capacity to understand and generate accurate responses with minimal or even no prior examples (so-called few-shot or zero-shot learning abilities). Gen AI refers to AIs capable of generating content, including text, images or music, from a natural language prompt. The prompts contain instructions in plain language or examples of what users want from the model. Before LLMs, machine learning models were trained to solve one task (eg image classification, sentiment analysis or translating from French to English). It required the user to code, train and roll out the model into production after acquiring sufficient training data. This procedure was possible for only selected companies with researchers and engineers with specific skills. An LLM has few-shot learning abilities in that it can be given a task in plain language. There is no need for coding, training or acquiring training data. Moreover, it displays considerable versatility in the range of tasks it can take on. It can be used to first classify an image, then analyse the sentiment of a paragraph and finally translate it into any language. Therefore, LLMs and gen AI have enabled people using ordinary language to automate tasks that were previously performed by highly specialised models.

The capabilities of the most recent crop of AI models are underpinned by advances in data and computing power. The increasing availability of data plays a key role in training and improving models. The more data a model is trained on, the more capable it usually becomes. Furthermore, machine learning models with more parameters improve predictions when trained with sufficient data. In contrast to the previous conventional wisdom that "over-parameterisation" degrades the forecasting ability of models, more recent evidence points to a remarkable resilience of machine learning models to over-parameterisation. As a consequence, LLMs with well designed learning mechanisms can provide more accurate predictions than traditional parametric models in diverse scenarios such as computer vision, signal processing and natural language processing (NLP).5

An implication is that more capable models tend to be larger models that need more data. Bigger models and larger data sets therefore go together and increase computational demands. The use of advanced techniques on vast troves of data would not have been possible without substantial increases in computing power; in particular, the computational resources employed by AI systems have been doubling every six months.6 The interplay between large amounts of data and computational resources implies that just a handful of companies provide cutting-edge LLMs, an issue revisited later in the chapter.

Some commentators have argued that AI has the potential to become the next general-purpose technology, profoundly impacting the economy and society. General-purpose technologies, like electricity or the internet, eventually achieve widespread usage, give rise to versatile applications and generate spillover effects that can improve other technologies. The adoption pattern of general-purpose technologies typically follows a J-curve: it is slow at first, but eventually accelerates. Over time, the pace of technology adoption has been speeding up. While it took electricity or the telephone decades to reach widespread adoption, smartphones accomplished the same in less than a decade. AI features two distinct characteristics that suggest an even steeper J-curve. First is its remarkable speed of adoption, reflecting ease of use and negligible cost for users. Second is its widespread use at an early stage by households as well as firms in all industries.

Of course, there is substantial uncertainty about the long-term capabilities of gen AI. Current LLMs can fail elementary logical reasoning tasks and struggle with counterfactual reasoning, as illustrated in recent BIS work.7 For example, when posed with a logical puzzle that demands reasoning about the knowledge of others and about counterfactuals, LLMs display a distinctive pattern of failure. They perform flawlessly when presented with the original wording of a puzzle, which they have likely seen during their training. They falter when the same problem is presented with small changes of innocuous details such as names and dates, suggesting a lack of true understanding of the underlying logic of statements. Ultimately, current LLMs do not know what they do not know. LLMs also suffer from the hallucination problem: they can present a factually incorrect answer as if it were correct, and even invent secondary sources to back up their fake claims. Unfortunately, hallucinations are a feature rather than a bug in these models. LLMs hallucinate because they are trained to predict the statistically plausible word based on some input. But they cannot distinguish what is linguistically probable from what is factually correct.

Do these problems merely reflect the limits posed by the size of the training data set and the number of model parameters? Or do they reflect more fundamental limits to knowledge that is acquired through language alone? Optimists acknowledge current limitations but emphasise the potential of LLMs to exceed human performance in certain domains. In particular, they argue that terms such as "reason", "knowledge" and "learning" rightly apply to such models. Sceptics point out the limitations of LLMs in reasoning and planning. They argue that the main limitation of LLMs derives from their exclusive reliance on language as the medium of knowledge. As LLMs are confined to interacting with the world purely through language, they lack the tacit non-linguistic, shared understanding that can be acquired only through active engagement with the real world.8

Whether AI will eventually be able to perform tasks that require deep logical reasoning has implications for its long-run economic impact. Assessing which tasks will be impacted by AI depends on the specific cognitive abilities required in those tasks. The discussion above suggests that, at least in the near term, AI faces challenges in reaching human-like performance. While it may be able to perform tasks that require moderate cognitive abilities and even develop "emergent" capabilities, it is not yet able to perform tasks that require logical reasoning and judgment.

The financial sector is among those facing the greatest opportunities and risks from the rise of AI, due to its high share of cognitively demanding tasks and data-intensive nature.9 Table 1 illustrates the impact of AI in four key areas: payments, lending, insurance and asset management.

Across all four areas, AI can substantially enhance efficiency and lower costs in back-end processing, regulatory compliance, fraud detection and customer service. These activities give full play to the ability of AI models to identify patterns of interest in seemingly unstructured data. Indeed, "finding a needle in the haystack" is an activity that plays to the greatest strength of machine learning models. A striking example is the improvement of know-your-customer (KYC) processes through quicker data processing and the enhanced ability to detect fraud, allowing financial institutions to ensure better compliance with regulations while lowering costs.10 LLMs are also increasingly being deployed for customer service operations through AI chatbots and co-pilots.

In payments, the abundance of transaction-level data enables AI models to overcome long-standing pain points. A prime example comes from correspondent banking, which has become a high-risk, low-margin activity. Correspondent banks played a key role in the expansion of cross-border payment activity by enabling transaction settlement, cheque clearance and foreign exchange operations. Facing heightened customer verification and anti-money laundering (AML) requirements, banks have systematically retreated from the business (Graphs 2.A and 2.B). Such retreat fragments the global payment system by leaving some regions less connected (Graph 2.C), handicapping their connectivity with the rest of the financial system. The decline in correspondent banking is part of a general de-risking trend, with returns from processing transactions being small compared with the risks of penalties from breaching AML, KYC and countering the financing of terrorism (CFT) requirements.11

A key use case of AI models is to improve KYC and AML processes by enhancing (i) the ability to understand the compliance and reputational risks that clients might carry, (ii) due diligence on the counterparties of a transaction and (iii) the analysis of payment patterns and anomaly detection. By bringing down costs and reducing risks through greater speed and automation, AI holds the promise to reverse the decline in correspondent banking.

The ability of AI models to detect patterns in the data is helping financial institutions address many of these challenges. For example, financial institutions are using AI tools to enhance fraud detection and to identify security vulnerabilities. At the global level, surveys indicate that around 70% of all financial services firms are using AI to enhance cash flow predictions and improve liquidity management, fine-tune credit scores and improve fraud detection.12

In credit assessment and lending, banks have used machine learning for many years, but AI can bring further capabilities. For one, AI could greatly enhance credit scoring by making use of unstructured data. In deciding whether to grant a loan, lenders traditionally rely on standardised credit scores, at times combined with easily accessible variables such as loan-to-value or debt-to-income ratios. AI-based tools enable lenders to assess individuals' creditworthiness with alternative data. These can include consumers' bank account transactions or their rental, utility and telecommunications payments data. But they can also be of a non-financial nature, for example applicants' educational history or online shopping habits. The use of non-traditional data can significantly improve default prediction, especially among underserved groups for whom traditional credit scores provide an imprecise signal about default probability. By being better able to spot patterns in unstructured data and detect "invisible primes", ie borrowers that are of high quality even if their credit scores indicate low quality, AI can enhance financial inclusion.13
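The idea of enriching a traditional credit score with alternative payment data can be sketched as a simple logistic scoring rule. The features, weights and borrower below are all hypothetical; a real model would be trained on historical repayment data.

```python
import math

def default_probability(features, weights, bias):
    # Logistic scoring: weighted sum of features through a sigmoid.
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 / (1 + math.exp(-z))

# Hypothetical features: [traditional credit score (scaled to 0-1),
# on-time utility payment rate, on-time rent payment rate].
# Negative weights: better behaviour lowers the default estimate.
weights = [-2.0, -1.5, -1.0]
bias = 1.0

# A "thin file" borrower: low traditional score, reliable payments.
thin_file_borrower = [0.3, 0.95, 0.98]

# Score-only model vs model enriched with alternative data.
traditional_estimate = default_probability([0.3], [-2.0], 1.0)
enriched_estimate = default_probability(thin_file_borrower, weights, bias)
```

The alternative payment data pull the estimated default probability well below the score-only estimate, which is the "invisible prime" effect described above, rendered in its simplest possible form.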

AI has numerous applications in insurance, particularly in risk assessment and pricing. For example, companies use AI to automatically analyse images and videos to assess property damage due to natural disasters or, in the context of compliance, whether claims of damages correspond to actual damages. Underwriters, actuaries or claims adjusters further stand to benefit from AI summarising and synthesising data gathered during a claim's life cycle, such as call transcripts and notes, as well as legal and medical paperwork. More generally, AI is bound to play an increasingly important role in assessing different types of risks. For example, some insurance companies are experimenting with AI methods to assess climate risks by identifying and quantifying emissions based on aerial images of pollution. However, to the extent that AI is better at analysing or inferring individual-level characteristics in risk assessments, including those whose use is prohibited by regulation, existing inequalities could be exacerbated, an issue revisited in the discussion on the macroeconomic impact of AI.

In asset management, AI models are used to predict returns, evaluate risk-return trade-offs and optimise portfolio allocation. Just as LLMs assign different characteristics to each word they process, they can be used to elicit unobservable features of financial data (so-called asset embeddings). This allows market participants to extract information (such as firm quality or investor preferences) that is difficult to discern from existing data. In this way, AI models can provide a better understanding of the risk-return properties of portfolios. Models that use asset embeddings can outperform traditional models that rely only on observable characteristics of financial data. Separately, AI models are useful in algorithmic trading, owing to their ability to analyse large volumes of data quickly. As a result, investors benefit from quicker and more precise information as well as lower management fees.14

The widespread use of AI applications in the financial sector, however, brings new challenges. These pertain to cyber security and operational resilience as well as financial stability.

The reliance on AI heightens concerns about cyber attacks, which regularly feature among the top worries in the financial industry. Traditionally, phishing emails have been used to trick a user into running malicious code (malware) that takes over the user's device. Credential phishing is the practice of stealing a user's login and password combination by masquerading as a reputable or known entity in an email, instant message or another communication channel. Attackers then use the victim's credentials to carry out attacks on additional targets and gain further access.15 Gen AI could vastly expand hackers' ability to write credible phishing emails or to write malware and use it to steal valuable information or encrypt a company's files for ransom. Moreover, gen AI allows hackers to imitate the writing style or voice of individuals, or even create fake avatars, which could lead to a dramatic rise in phishing attacks. These developments expose financial institutions and their customers to a greater risk of fraud.

But AI also introduces altogether new sources of cyber risk. Prompt injection attacks, one of the most widely reported weaknesses in LLMs, refer to an attacker creating an input to make the model behave in an unintended way. For example, LLMs are usually instructed not to provide dangerous information, such as how to manufacture napalm. However, in the infamous grandma jailbreak, where the prompter asked ChatGPT to pretend to be their deceased grandmother telling a bedtime story about the steps to produce napalm, the chatbot did reveal this information. While this vulnerability has been fixed, others remain. Data poisoning attacks refer to malicious tampering with the data an AI model is trained on. For example, an attacker could adjust input data so that the AI model fails to detect phishing emails. Model poisoning attacks deliberately introduce malware, manipulating the training process of an AI system to compromise its integrity or functionality. This attack aims to alter the model behaviour to serve the attacker's purposes.16 As more applications use data created by LLMs themselves, such attacks could have increasingly severe consequences, leading to heightened operational risks among financial institutions.

Greater use of AI raises issues of bias and discrimination. Two examples stand out. The first relates to consumer protection and fair lending practices. As with traditional models, AI models can reflect biases and inaccuracies in the data they are trained on, posing risks of unjust decisions, excluding some groups from socially desirable insurance markets and perpetuating disparities in access to credit through algorithmic discrimination.17 Consumers care about these risks: recent evidence from a representative survey of US households suggests a lower level of trust in gen AI than in human-operated services, especially in high-stakes areas such as banking and public policy (Graph 3.A) and when AI tools are provided by big techs (Graph 3.B).18 The second example relates to the challenge of ensuring data privacy and confidentiality when dealing with growing volumes of data, another key concern for users (Graph 3.C). In the light of the high privacy standards that financial institutions need to adhere to, this heightens legal risks. The lack of explainability of AI models (ie their black box nature) as well as their tendency to hallucinate amplify these risks.

Another operational risk arises from relying on just a few providers of AI models, which increases third-party dependency risks. Market concentration arises from the centrality of data and the vast costs of developing and implementing data-hungry models. Heavy up-front investment is required to build data storage facilities, hire and train staff, gather and clean data and develop or refine algorithms. However, once the infrastructure is in place, the cost of adding each extra unit of data is negligible. This centrality leads to so-called data gravity: companies that already have an edge in collecting, storing and analysing data can provide better-trained AI tools, whose use creates ever more data over time. The consequence of data gravity is that only a few companies provide cutting-edge LLMs. Any failure among or cyber attack on these providers, or their models, poses risks to financial institutions relying on them.

The reliance of market participants on the same handful of algorithms could lead to financial stability risks. These could arise from AI's ubiquitous adoption throughout the financial system and its growing capability to make decisions independently and without human intervention ("automaticity") at a speed far beyond human capacity. The behaviour of financial institutions using the same algorithms could amplify procyclicality and market volatility by exacerbating herding, liquidity hoarding, runs and fire sales. Using similar algorithms trained on the same data can also lead to coordinated recommendations or outright collusive outcomes that run afoul of regulations against market manipulation, even if algorithms are not trained or instructed to collude.19 In addition, AI may hasten the development and introduction of new products, potentially leading to new and little understood risks.

Central banks stand at the intersection of the monetary and financial systems. As stewards of the economy through their monetary policy mandate, they play a pivotal role in maintaining economic stability, with a primary objective of ensuring price stability. Another essential role is to safeguard financial stability and the payment system. Many central banks also have a role in supervising and regulating commercial banks and other participants of the financial system.

Central banks are not simply passive observers in monitoring the impact of AI on the economy and the financial system. They can harness AI tools themselves in pursuit of their policy objectives and in addressing emerging challenges. In particular, the use of LLMs and AI can support central banks' key tasks of information collection and statistical compilation, macroeconomic and financial analysis to support monetary policy, supervision, oversight of payment systems and ensuring financial stability. As early adopters of machine learning methods, central banks are well positioned to reap the benefits of AI tools.20

Data are the major resource that stand to become more valuable due to the advent of AI. A particularly rich source of data is the payment system. Such data present an enormous amount of information on economic transactions, which naturally lends itself to the powers of AI to detect patterns.21 Dealing with such data necessitates adequate privacy-preserving techniques and the appropriate data governance frameworks.

The BIS Innovation Hub's Project Aurora explores some of these issues. Using a synthetic data set emulating money laundering activities, it compares various machine learning models, taking into account payment relationships as input. The comparison occurs under three scenarios: transaction data that are siloed at the bank level, national-level pooling of data and cross-border pooling. The models undergo training with known simulated money laundering transactions and subsequently predict the likelihood of money laundering in unseen synthetic data.

The project offers two key insights. First, machine learning models outperform the traditional rule-based methods prevalent in most jurisdictions. Graph neural networks, in particular, demonstrate superior performance, effectively leveraging comprehensive payment relationships available in pooled data to more accurately identify suspect transaction networks. And second, machine learning models are particularly effective when data from different institutions in one or multiple jurisdictions are pooled, underscoring a premium on cross-border coordination in AML efforts (Graph 4).
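The premium on pooling can be illustrated with a deliberately simple graph feature, a far cry from the graph neural networks used in Project Aurora. The accounts, transactions and alert threshold below are all invented for illustration.

```python
# Synthetic transactions observed by two banks, each seeing only its
# own customers. Each tuple is (sender, receiver).
bank_a = [("mule", "a1"), ("mule", "a2")]
bank_b = [("mule", "b1"), ("mule", "b2"), ("mule", "b3")]

def fan_out(transactions, account):
    # Number of distinct counterparties the account sends funds to:
    # a basic payment-relationship feature a rule-based system might use.
    return len({r for s, r in transactions if s == account})

THRESHOLD = 4  # hypothetical alert threshold on counterparties

# Siloed view: each bank checks only its own data.
siloed_alerts = [fan_out(t, "mule") >= THRESHOLD for t in (bank_a, bank_b)]
# Pooled view: data from both banks are combined before checking.
pooled_alert = fan_out(bank_a + bank_b, "mule") >= THRESHOLD
```

Each silo sees too few counterparties (two and three) to raise an alert, but the pooled view reveals all five and triggers one, mirroring in miniature why pooled data improve detection in the project's comparison.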

The benefits of coordination are further illustrated by Project Agorá. This project gathers seven central banks and private sector participants to bring tokenised central bank money and tokenised deposits together on the same programmable platform.

The tokenisation built into Agorá would allow the platform to harness three capabilities: (i) combining messaging and account updates as a single operation; (ii) executing payments atomically rather than as a series of sequential updates; and (iii) drawing on privacy-preserving platform resources for KYC/AML compliance. In traditional correspondent banking, information checks and account updates are made sequentially and independently, with significant duplication of effort (Graph 5.A). In contrast, in Agorá the contingent performance of actions enabled by tokenisation allows for the combination of assets, information, messaging and clearing into a single atomic operation, eliminating the risk of reversals (Graph 5.B). In turn, privacy-enhancing data-sharing techniques can significantly simplify compliance checks, while all existing rules and regulations are adhered to as part of the pre-screening process.22

In the development of a new payment infrastructure like Agorá, great care must be taken to ensure potential gains are not lost due to fragmentation. This can be done via access policies to the infrastructure or via interoperability, as advocated in the idea of the Finternet. This refers to multiple interconnected financial ecosystems, much like the internet, designed to empower individuals and businesses by placing them at the centre of their financial lives. The Finternet leverages innovative technologies such as tokenisation and unified ledgers, underpinned by a robust economic and regulatory framework, to expand the range and quality of savings and financial services. Starting with assets that can be easily tokenised holds the greatest promise in the near term.23

Central banks also see great benefits in using gen AI to improve cyber security. In a recent BIS survey of central bank cyber experts, a majority deem gen AI to offer more benefits than risks (Graph 6.A) and think it can outperform traditional methods in enhancing cyber security management.24 Benefits are largely expected in areas such as the automation of routine tasks, which can reduce the costs of time-consuming activities traditionally performed by humans (Graph 6.B). But human expertise will remain important. In particular, data scientists and cyber security experts are expected to play an increasingly important role. Additional cyber-related benefits from AI include the enhancement of threat detection, faster response times to cyber attacks and the learning of new trends, anomalies or correlations that might not be obvious to human analysts. In addition, by leveraging AI, central banks can now craft and deploy highly convincing phishing attacks as part of their cyber security training. Project Raven of the BIS Innovation Hub is one example of the use of AI to enhance cyber resilience (see Box C).

The challenge for central banks in using AI tools comes in two parts. The first is the availability of timely data, which is a necessary condition for any machine learning application. Assuming this issue is solved, the second challenge is to structure the data in a way that yields insights. This second challenge is where machine learning tools, and in particular LLMs, excel. They can transform unstructured data from a variety of sources into structured form in real time. Moreover, by converting time series data into tokens resembling textual sequences, LLMs can be applied to a wide array of time series forecasting tasks. Just as LLMs are trained to guess the next word in a sentence using a vast database of textual information, LLM-based forecasting models use similar techniques to estimate the next numerical observation in a statistical series.
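As a rough illustration of how a numeric series can be rendered as a token sequence that language-model machinery can consume, consider digit-level encoding. The exact scaling, separators and format vary across implementations in this line of work; the choices below are assumptions for the sketch.

```python
# Sketch of digit-level tokenization of a time series, in the spirit of
# recent LLM-based forecasters: each observation becomes a run of digit
# tokens, so "predict the next token" becomes "predict the next digit".
# The separator convention (';' between observations) is illustrative.

def series_to_tokens(series, decimals=1):
    """Encode each observation as space-separated digit tokens,
    with observations separated by ';'."""
    tokens = []
    for x in series:
        txt = f"{x:.{decimals}f}".replace(".", "")
        tokens.append(" ".join(txt))
        tokens.append(";")
    return " ".join(tokens[:-1])    # drop the trailing separator

encoded = series_to_tokens([2.3, 2.7, 3.1])
# -> "2 3 ; 2 7 ; 3 1"
```

Once a series is in this form, the same next-token training objective used for text applies unchanged; decoding the predicted digits back into a number yields the forecast.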

These capabilities are particularly promising for nowcasting, a technique that uses real-time data to estimate current economic conditions before official statistics become available. This method can significantly improve the accuracy and timeliness of economic predictions, particularly during periods of heightened market volatility. However, it currently faces two important challenges, namely the limited usability of timely data and the necessity to pre-specify and train models for concrete tasks.25 LLMs and gen AI hold promise to overcome both bottlenecks (see Box D). For example, an LLM fine-tuned with financial news can readily extract information from social media posts or non-financial firms' and banks' financial statements or transcripts of earnings reports and create a sentiment index. The index can then be used to nowcast financial conditions, monitor the build-up of risks or predict the probability of recessions.26 Moreover, by categorising texts into specific economic topics (eg consumer demand and credit conditions), the model can pinpoint the source of changes in sentiment (eg consumer sentiment or credit risk). Such data are particularly relevant early in the forecasting process when traditional hard data are scarce.
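A fine-tuned LLM would score text far more robustly, but the mechanics of turning scored text into a daily index can be sketched with a toy dictionary-based scorer; the word lists below are invented for illustration.

```python
# Toy sentiment index: score each headline by counting positive and
# negative words (a crude stand-in for an LLM's sentiment output),
# then average across the day's headlines. Word lists are illustrative.

POS = {"growth", "improving", "strong", "resilient"}
NEG = {"recession", "default", "stress", "weak"}

def sentiment(text):
    """Per-document score in [-1, 1]: net positive words per word."""
    words = text.lower().split()
    score = sum(w in POS for w in words) - sum(w in NEG for w in words)
    return score / max(len(words), 1)

def daily_index(headlines):
    """Average sentiment across the day's headlines."""
    return sum(sentiment(h) for h in headlines) / len(headlines)

idx = daily_index(["Strong growth", "recession looms"])
# -> 0.25: one fully positive headline, one half-negative one
```

Plotted over time, such an index becomes a real-time input to a nowcasting model; categorising headlines by topic before scoring would additionally attribute moves to, say, consumer sentiment versus credit risk.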

Beyond financial applications, AI-based nowcasting can also be useful to understand real-economy developments. For example, transaction-level data on household-to-firm or firm-to-firm payments, together with machine learning models, can improve nowcasting of consumption and investment. Another use case is measuring supply chain bottlenecks with NLP, eg based on text in the so-called Beige Book. After classifying sentences related to supply chains, a deep learning algorithm classifies the sentiment of each sentence and provides an index that offers a real-time view of supply chain bottlenecks. Such an index can be used to predict inflationary pressures. Many more examples exist, ranging from nowcasting world trade to climate risks.27
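The two-step pipeline described above (first filter the sentences about supply chains, then score their tone) might look as follows, with simple keyword rules standing in for the deep learning classifiers; both word lists are invented for illustration.

```python
# Toy supply chain bottleneck index in two steps: (1) keep only sentences
# about supply chains, (2) compute the share of those signalling strain.
# Keyword rules stand in for the trained classifiers; lists are invented.

SUPPLY = {"shortage", "delivery", "backlog", "supplier", "shipping"}
TIGHT = {"shortage", "backlog", "delays", "delayed"}

def bottleneck_index(sentences):
    relevant = [s for s in sentences
                if SUPPLY & set(s.lower().split())]   # step 1: topic filter
    if not relevant:
        return 0.0
    tight = sum(bool(TIGHT & set(s.lower().split())) for s in relevant)
    return tight / len(relevant)    # step 2: share signalling strain

report = ["Chip shortage persists",
          "Retail sales rose strongly",
          "Shipping costs normalised"]
# Two sentences are supply chain related; one signals strain -> 0.5
```

Computed report by report over time, this share yields the kind of real-time bottleneck series that can feed into inflation forecasts.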

Access to granular data can also enhance central banks' ability to track developments across different industries and regions. For example, with the help of AI, data from job postings or online retailers can be used to track wage developments and employment dynamics across occupations, tasks and industries. Such a real-time and detailed view of labour market developments can help central banks understand the extent of technology-induced job displacements, how quickly workers find new jobs and attendant wage dynamics. Similarly, satellite data on aerial pollution or nighttime lights can be used to predict short-term economic activity, while data on electricity consumption can shed light on industrial production in different regions and industries.28 Central banks can thereby obtain a more nuanced picture of firms' capital expenditure and production, and how the supply of and demand for goods and services are changing.

Central banks can also use AI, together with human expertise, to better understand factors that contribute to inflation. Neural networks can handle more input variables compared with traditional econometric models, making it possible to work with detailed data sets rather than relying solely on aggregated data. They can further reflect intricate non-linear relationships, offering valuable insights during periods of rapidly changing inflation dynamics. If AI's impact varies by industry but materialises rapidly, such advantages are particularly beneficial for assessing inflationary dynamics.

Recent work in this area decomposes aggregate inflation into various sub-components.29 In a first step, economic theory is used to pre-specify four factors shaping aggregate inflation: past inflation patterns, inflation expectations, the output gap and international prices. A neural network then uses aggregate series (eg the unemployment rate or total services inflation) and disaggregate series (eg two-digit industry output) to estimate the contribution of each of the four subcomponents to overall inflation, accounting for possible non-linearities.
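In stylised form, the attribution step can be illustrated with a tiny fixed-weight network that maps the four pre-specified factors to inflation and reports each factor's contribution. In the actual work the weights are estimated from many aggregate and disaggregate series; every number below is invented purely for illustration.

```python
# Stylised inflation decomposition: a one-layer network with a tanh
# non-linearity maps the four pre-specified drivers to inflation and
# reports each driver's contribution. All values are illustrative.

import math

def contribution(drivers, weights):
    """Per-factor contributions and their sum (total inflation)."""
    raw = {k: weights[k] * math.tanh(drivers[k]) for k in drivers}
    return raw, sum(raw.values())

drivers = {"past_inflation": 0.8, "expectations": 0.5,
           "output_gap": 0.3, "international_prices": 1.2}
weights = {"past_inflation": 1.5, "expectations": 2.0,
           "output_gap": 1.0, "international_prices": 0.8}
parts, pi = contribution(drivers, weights)
# parts attributes pi to the four factors; tanh means large swings in a
# driver feed through less than proportionally, a simple non-linearity.
```

The value of the approach is exactly this additivity: the estimated model explains overall inflation while still answering "how much came from international prices versus the output gap" at each point in time.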

The use of AI could play an important role in supporting financial stability analysis. The strongest suit of machine learning and AI methodologies is identifying patterns in a cross-section. As such, they can be particularly useful to identify and enhance the understanding of risks in a large sample of observations, helping identify the cross-section of risk across financial and non-financial firms. Again, availability of timely data is key. For example, during increasingly frequent periods of low liquidity and market dysfunction, AI could help prediction through better monitoring of anomalies across markets.30
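As a minimal stand-in for such monitoring, the sketch below flags markets whose latest move is an outlier relative to their own recent history. Real systems use far richer models across the cross-section of markets; the z-score rule and threshold here are arbitrary illustrative choices.

```python
# Minimal cross-market anomaly screen: flag any series whose latest
# return sits more than z_threshold standard deviations from the mean
# of its preceding observations. Threshold and data are illustrative.

import statistics

def flag_anomalies(markets, z_threshold=3.0):
    """markets: dict of market name -> list of returns (latest last)."""
    flagged = []
    for name, returns in markets.items():
        hist, latest = returns[:-1], returns[-1]
        mu = statistics.fmean(hist)
        sd = statistics.pstdev(hist)
        if sd > 0 and abs(latest - mu) / sd > z_threshold:
            flagged.append(name)
    return flagged

markets = {
    "bond_futures": [0.1, -0.1, 0.0, 0.1, -0.1, 2.5],  # extreme latest move
    "fx_spot":      [0.1, 0.0, -0.1, 0.1, 0.0, 0.1],   # ordinary latest move
}
flagged = flag_anomalies(markets)
# -> ["bond_futures"]
```

Run across many markets simultaneously, even this simple screen highlights where liquidity stress may be emerging; the AI methods discussed above generalise the idea to joint, non-linear patterns across the whole cross-section.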

Finally, pairing AI-based insights with human judgment could help support macroprudential regulation. Systemic risks often result from the slow build-up of imbalances and vulnerabilities, materialising in infrequent but very costly stress events. The scarcity of data on such events and the uniqueness of financial crises limit the stand-alone use of data-intensive AI models in macroprudential regulation.31 However, together with human expertise and informed economic reasoning to see through the cycle, gen AI tools could yield large benefits to regulators and supervisors. When combined with rich data sets that provide sufficient scope to find patterns in the data, AI could help in building early warning indicators that alert supervisors to emerging pressure points known to be associated with system-wide risks.

In sum, with sufficient data, AI tools offer central banks an opportunity to get a much better understanding of economic developments. They enable central banks to draw on a richer set of structured and unstructured data, and complementarily, speed up data collection and analysis. In this way, the use of AI enables the analysis of economic activity in real time at a granular level. Such enhanced capabilities are all the more important in the light of AI's potential impact on employment, output and inflation, as discussed in the next section.

AI is poised to increase productivity growth. For workers, recent evidence suggests that AI directly raises productivity in tasks that require cognitive skills (Graph 7.A). The use of generative AI-based tools has had a sizeable and rapid positive effect on the productivity of customer support agents and of college-educated professionals solving writing tasks. Software developers who used LLMs through the GitHub Copilot AI could code more than twice as many projects per week. A recent collaborative study by the BIS with Ant Group shows that productivity gains are immediate and largest among less experienced and junior staff (Box E).32

Early studies also suggest positive effects of AI on firm performance. Patenting activity related to AI and the use of AI are associated with faster employment and output growth as well as higher revenue growth relative to comparable firms. Firms that adopt AI also experience higher growth in sales, employment and market valuations, which is primarily driven by increased product innovation. These effects have materialised over a horizon of one to two years. In a global sample, AI patent applications generate a positive effect on the labour productivity of small and medium-sized enterprises, especially in services industries.33

The macroeconomic impact of AI on productivity growth could be sizeable. Beyond directly enhancing productivity growth by raising workers' and firms' efficiency, AI can spur innovation and thereby future productivity growth indirectly. Most innovation is generated in occupations that require high cognitive abilities. Improving the efficiency of cognitive work therefore holds great potential to generate further innovation. The estimates provided by the literature for AI's impact on annual labour productivity growth (ie output per employee) are thus substantive, although their range varies.34 Through faster productivity growth, AI will expand the economy's productive capacity and thus raise aggregate supply.

Higher productivity growth will also affect aggregate demand through changes in firms' investment. While gen AI is a relatively new technology, firms are already investing heavily in the necessary IT infrastructure and integrating AI models into their operations on top of what they already spend on IT in general. In 2023 alone, spending on AI exceeded $150 billion worldwide, and a survey of US companies' technology officers across all sectors suggests almost 50% rank AI as their top budget item over the next years.35

An additional boost to investment could come from improved prediction. AI adoption will lead to more accurate predictions at a lower cost, which reduces uncertainty and enables better decision-making.36 Of course, AI could also introduce new sources of uncertainty that counteract some of its positive impact on firm investment, eg by changing market and price dynamics.

Another substantial part of aggregate demand is household consumption. AI could spur consumption by reducing search frictions and improving matching, making markets more competitive. For example, the use of AI agents could improve consumers' ability to search for products and services they want or need and help firms in advertising and targeting services and products to consumers.37

AI's impact on household consumption will also depend on how it affects labour markets, notably labour demand and wages. The overall impact depends on the relative strength of three forces (Graph 8): by how much AI raises productivity, how many new tasks it creates and how many workers it displaces by making existing tasks obsolete.

If AI is a true general-purpose technology that raises total factor productivity in all industries to a similar extent, the demand for labour is set to increase across the board (Graph 8, blue boxes). Like previous general-purpose technologies, AI could also create altogether new tasks, further increasing the demand for labour and spurring wage growth (green boxes). If so, AI would increase aggregate demand.

However, the effects of AI might differ across tasks and occupations. AI might benefit only some workers, eg those whose tasks require logical reasoning. Think of nurses who, with the assistance of AI, can more accurately interpret x-ray pictures. At the same time, gen AI could make other tasks obsolete, for example summarising documents, processing claims or answering standardised emails, which lend themselves to automation by LLMs. If so, increased AI adoption would lead to displacement of some workers (Graph 8, red boxes). This could lead to declines in employment and lower wage growth, with distributional consequences. Indeed, results from a recent survey of US households by economists in the BIS Monetary and Economic Department in collaboration with the Federal Reserve Bank of New York indicate that men, better-educated individuals or those with higher incomes think that they will benefit more from the use of gen AI than women and those with lower educational attainment or incomes (Graph 7.B).38

These considerations suggest that AI could have implications for economic inequality. Displacement might eliminate jobs faster than the economy can create new ones, potentially exacerbating income inequality. A differential impact of benefits across job categories would strengthen this effect. The "digital divide" could widen, with individuals lacking access to technology or with low digital literacy being further marginalised. The elderly are particularly at risk of exclusion.39

Through the effects on productivity, investment and consumption, the deployment of AI has implications for output and inflation. A BIS study illustrates the key mechanisms at work.40 As the source of a permanent increase in productivity, AI will raise aggregate supply. An increase in consumption and investment raises aggregate demand. Through higher aggregate demand and supply, output increases (Graph 9.A). In the short term, if households and firms fully anticipate that they will be richer in the future, they will increase consumption at the expense of investment, slowing down output growth.

The response of inflation will also depend on households' and businesses' anticipation of future gains from AI. If the average household does not fully anticipate gains, it will increase today's consumption only modestly. AI will act as a disinflationary force in the short run (blue line in Graph 9.B), as the impact on aggregate supply dominates. In contrast, if households anticipate future gains, they will consume more, making AI's initial impact inflationary (red line in Graph 9.B). Since past general-purpose technologies have had an initial disinflationary impact, the former scenario appears more likely. But in either scenario, as economic capacity expands and wages rise, the demand for capital and labour will steadily increase. If these demand effects dominate the initial positive shock to output capacity over time, higher inflation would eventually materialise. How quickly demand forces increase output and prices will depend not only on households' expectations but also on the mismatch in skills required in obsolete and newly created tasks. The greater the skill mismatch (other things being equal), the lower employment growth will be, as it takes displaced workers longer to find new work. It might also be the case that some segments of the population will remain permanently unemployable without retraining. This, in turn, implies lower consumption and aggregate demand, and a longer disinflationary impact of AI.

Another aspect that warrants further investigation is the effect of AI adoption on price formation. Large retail companies that predominantly sell online use AI extensively in their price-setting processes. Algorithmic pricing by these retailers has been shown to increase both the uniformity of prices across locations and the frequency of price changes.41 For example, when gas prices or exchange rates move, these companies quickly adjust the prices in their online stores. As the use of AI becomes more widespread, also among smaller companies, these effects could become stronger. Increased uniformity and flexibility in pricing can mean greater and quicker pass-through of aggregate shocks to local prices, and hence inflation, than in the past. This can ultimately change inflation dynamics. An important aspect to consider is how these effects could differ depending on the degree of competition in the AI model and data market, which could influence the variety of models used.
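The pass-through mechanism can be made concrete with a stylised pricing rule that marks up a cost index at a chosen repricing frequency; the markup and the two frequencies compared below are illustrative, not estimates of actual retailer behaviour.

```python
# Stylised algorithmic pricing: the online price tracks a cost index at a
# set repricing frequency, so a cost shock passes through faster under
# frequent algorithmic repricing than under infrequent manual repricing.
# Markup and frequencies are illustrative.

def price_path(cost_index, markup=1.2, every=1):
    """Reprice to markup * current cost every `every` periods."""
    path, current = [], None
    for t, cost in enumerate(cost_index):
        if t % every == 0 or current is None:
            current = round(markup * cost, 2)
        path.append(current)
    return path

costs = [10.0, 10.0, 12.0, 12.0]            # a cost shock hits in period 2
algo = price_path(costs, every=1)           # reprices each period
manual = price_path(costs, every=4)         # reprices once per four periods
# algo passes the shock through immediately; manual misses it entirely
# within this window.
```

Aggregated over many sellers, the faster and more uniform adjustment under the algorithmic rule is precisely the channel through which inflation dynamics can change.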

Finally, the impact of AI on fiscal sustainability remains an open question. All things equal, an AI-induced boost to productivity and growth would lead to a reduced debt burden. However, to the extent that faster growth is associated with higher interest rates, combined with the potential need for fiscal programmes to manage AI-induced labour relocation or sustained spells of higher unemployment rates, the impact of AI on the fiscal outlook might be modest. More generally, the AI growth dividend is unlikely to fully offset the spending needs that may arise from the green transition or population ageing over the next decades.

The use of AI models opens up new opportunities for central banks in pursuit of their policy objectives. A consistent theme running through the chapter has been the availability of data as a critical precondition for successful applications of machine learning and AI. Data governance frameworks will be part and parcel of any successful application of AI. Central banks' policy challenges thus encompass both models and data.

An important trade-off arises between using "off-the-shelf" models versus developing in-house fine-tuned ones. Using external models may be more cost-effective, at least in the short run, and leverages the comparative advantage of private sector companies. Yet reliance on external models comes with reduced transparency and exposes central banks to concerns about dependence on a few external providers. Beyond the general risks that market concentration poses to innovation and economic dynamism, the high concentration of resources could create significant operational risks for central banks, potentially affecting their ability to fulfil their mandates.

Another important aspect relates to central banks' role as users, compilers and disseminators of data. Central banks use data as a crucial ingredient in their decision-making and communication with the public. And they have always been extensive compilers of data, either collecting them on their own or drawing on other official agencies and commercial sources. Finally, central banks are also providers of data, to inform other parts of government as well as the general public. This role helps them fulfil their obligations as key stakeholders in national statistical systems.

The rise of machine learning and AI, together with advances in computing and storage capacity, have cast these aspects in an urgent new light. For one, central banks now need to make sense of and use increasingly large and diverse sets of structured and unstructured data. And these data often reside in the hands of the private sector. While LLMs can help process such data, hallucinations or prompt injection attacks can lead to biased or inaccurate analyses. In addition, commercial data vendors have become increasingly important, and central banks make extensive use of them. But in recent years, the cost of commercial data has increased markedly, and vendors have imposed tighter use conditions.

The decision on whether to use external or internal models and data has far-reaching implications for central banks' investments and human capital. A key challenge is setting up the necessary IT infrastructure, which is greater if central banks pursue the road of developing internal models and collecting or producing their own data. Providing adequate computing power and software, as well as training existing or hiring new staff, involves high up-front costs. The same holds for creating a data lake, ie pooling different curated data sets. Yet a reliable and safe IT infrastructure is a prerequisite not only for big data analyses but also to prevent cyber attacks.

Hiring new or retaining existing staff with the right mix of economic understanding and programming skills can be challenging. As AI applications increase the sophistication of the financial system over time, the premium on having the right mix of skills will only grow. Survey-based evidence suggests this is a top concern for central banks (Graph 10). There is high demand for data scientists and other AI-related roles, but public institutions often cannot match private sector salaries for top AI talent. The need for staff with the right skills also arises from the fact that the use of AI models to aid financial stability monitoring faces limitations, as discussed above. Indeed, AI is not a substitute for human judgment. It requires supervision by experts with a solid understanding of macroeconomic and financial processes.

How can central banks address these challenges and mitigate trade-offs? The answer lies, in large part, in cooperation paired with sound data governance practices.

Collaboration can yield significant benefits and relax constraints on human capital and IT. For one, the pooling of resources and knowledge can lower the demands on individual central banks and ease the resource constraints on collecting, storing and analysing big data as well as developing algorithms and training models. For example, central banks could address rising costs of commercial data, especially for smaller institutions, by sharing more granular data themselves or by acquiring data from vendors through joint procurement. Cooperation could also facilitate training staff through workshops in the use of AI or the sharing of experiences in conferences. This would particularly benefit central banks with fewer staff and resources and with limited economies of scale. Cooperation, for example by re-using trained models, could also mitigate the environmental costs associated with training algorithms and storing large amounts of data, which consume enormous amounts of energy.

Central bank collaboration and the sharing of experiences could also help identify areas in which AI adds the most value and how to leverage synergies. Common data standards could facilitate access to publicly available data and facilitate the automated collection of relevant data from various official sources, thereby enhancing the training and performance of machine learning models. Additionally, dedicated repositories could be set up to share the open source code of data tools, either with the broader public or, at least initially, only with other central banks. An example is a platform such as BIS Open Tech, which supports international cooperation and coordination in sharing statistical and financial software. More generally, central banks could consider sharing domain-adapted or fine-tuned models in the central banking community, which could significantly lower the hurdles for adoption.42 Joint work on AI models is possible without sharing data, so they can be applied even where there are concerns about confidentiality.

An example of how collaboration supports data collection and dissemination is the jurisdiction-level statistics on international banking, debt securities and over-the-counter derivatives by the BIS. These data sets have a long history: the international banking statistics date back to the 1970s. They are a critical element for monitoring developments and risks in the global financial system. They are compiled from submissions by participating authorities under clear governance rules and using well established statistical processes. At a more granular level, arrangements for the sharing of confidential bank-level data include the quantitative impact study data collected by the Basel Committee on Banking Supervision and the data on large global banks collected by the International Data Hub. Other avenues to explore include sharing synthetic or anonymised data that protect confidential information.

The rising importance of data and emergence of new sources and tools call for sound data governance practices. Central banks must establish robust governance frameworks that include guidelines for selecting, implementing and monitoring both data and algorithms. These frameworks should comprise adequate quality control and cover data management and auditing practices. The importance of metadata, in particular, increases as the range and variety of data expand. Sometimes referred to as "the data about the data", metadata include the definitions, source, frequency, units and other information that define a given data set. This metadata is crucial when privacy-preserving methods are used to draw lessons from several data sets overseen by different central banks. Machine readability is greatly enhanced when metadata are standardised so that the machines know what they are looking for. For example, the "Findable, Accessible, Interoperable and Reusable" (FAIR) principles provide guidance in organising data and metadata to ease the burden of sharing data and algorithms.43
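In practice, even a minimal completeness check over required metadata fields captures part of the FAIR idea: data sets cannot be findable or reusable if basic descriptors are missing. The required field set below mirrors the items mentioned above but is an illustrative choice, not a formal standard.

```python
# Minimal metadata completeness check in the spirit of the FAIR
# principles: every published series must carry the descriptive fields
# below. The required set is an illustrative choice, not a standard.

REQUIRED = {"definition", "source", "frequency", "units", "last_updated"}

def missing_metadata(record):
    """Return, sorted, the required metadata fields a record lacks."""
    return sorted(REQUIRED - set(record))

record = {"definition": "Headline CPI, yoy %", "source": "NSO",
          "frequency": "monthly", "units": "percent"}
gaps = missing_metadata(record)
# -> ["last_updated"]: the record is incomplete and should be rejected
# or flagged before publication.
```

Automating checks like this at the point of data ingestion is one concrete way a governance framework turns metadata principles into enforceable quality control.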

More generally, metadata frameworks are crucial for a better understanding of the comparability and limits of data series. Central banks can also cooperate in this domain. For example, the Statistical Data and Metadata Exchange (SDMX) standard provides a common language and structure for metadata. Such standards are crucial to foster data-sharing, lower the reporting burden and facilitate interoperability. Similarly, the Generic Statistical Business Process Model lays out business processes for official statistics with a unified framework and consistent terminology. Sound data governance practices would also facilitate the sharing of confidential data.

In sum, there is an urgent need for central banks to collaborate in fostering the development of a community of practice to share knowledge, data, best practices and AI tools. In the light of rapid technological change, the exchange of information on policy issues arising from the role of central banks as data producers, users and disseminators is crucial. Collaboration lowers costs, and such a community would foster the development of common standards. Central banks have a history of successful collaboration to overcome new challenges. The emergence of AI has hastened the need for cooperation in the field of data and data governance.

Graph 1.A: The adoption of ChatGPT is proxied by the ratio of the maximum number of worldwide website visits over the period November 2022–April 2023 to the worldwide population with internet connectivity. For more details on computers, see US Census Bureau; for electric power, the internet and social media, see Comin and Hobijn (2004) and Our World in Data; for smartphones, see Statista.

Graph 1.B: Based on an April 2023 global survey with 1,684 participants.

Graph 1.C: Data for capital invested in AI companies for 2024 are annualised based on data up to mid-May. Data on the percentage of AI job postings for AU, CA, GB, NZ and US are available for the period 2014–23; for AT, BE, CH, DE, ES, FR, IT, NL and SE, data are available for the period 2018–23.

Graph 2.A: Three-month moving averages.

Graphs 2.B and 2.C: Correspondent banks that are active in several corridors are counted several times. Averages across countries in each region. Markers in panel C represent subregions within each region. Grouping of countries by region according to the United Nations Statistics Division; for further details see unstats.un.org/unsd/methodology/m49/.

Graph 3.A: Average scores in answers to the following question: "In the following areas, would you trust artificial intelligence (AI) tools less or more than traditional human-operated services? For each item, please indicate your level of trust on a scale from 1 (much less trust than in a human) to 7 (much more trust)."

Graph 3.B: Average scores and 95% confidence intervals in answers to the following question: "How much do you trust the following entities to safely store your personal data when they use artificial intelligence tools? For each of them, please indicate your level of trust on a scale from 1 (no trust at all in the ability to safely store personal data) to 7 (complete trust)."

Graph 3.C: Average scores (with scores ranging from 1 (lowest) to 7 (highest)) in answers to the following questions: (1) "Do you think that sharing your personal information with artificial intelligence tools will decrease or increase the risk of data breaches (that is, your data becoming publicly available without your consent)?"; (2) "Are you concerned that sharing your personal information with artificial intelligence tools could lead to the abuse of your data for unintended purposes (such as for targeted ads)?"

Graph 6.A: The bars show the share of respondents to the question, "Do you agree that the use of AI can provide more benefits than risks to your organisation?".

Graph 6.B: The bars show the average score that respondents gave to each option when asked to "Rate the level of significance of the following benefits of AI in cyber security"; the score scale of each option is from 1 (lowest) to 5 (highest).

Graph 7.A: The bars correspond to estimates of the increase in productivity of users that rely on generative AI tools relative to a control group that did not.

Acemoglu, D (2024): "The simple macroeconomics of AI", Economic Policy, forthcoming.

Agrawal, A, J Gans and A Goldfarb (2019): "Exploring the impact of artificial intelligence: prediction versus judgment", Information Economics and Policy, vol 47, pp 1–6.

----- (2022): Prediction machines, updated and expanded: the simple economics of artificial intelligence, Harvard Business Review Press, 15 November.

Ahir, H, N Bloom and D Furceri (2022): "The world uncertainty index", NBER Working Papers, no 29763, February.

Aldasoro, I, O Armantier, S Doerr, L Gambacorta and T Oliviero (2024a): "Survey evidence on gen AI and households: job prospects amid trust concerns", BIS Bulletin, no 86, April.

----- (2024b): "The gen AI gender gap", BIS Working Papers, forthcoming.

Aldasoro, I, S Doerr, L Gambacorta, G Gelos and D Rees (2024): "Artificial intelligence, labour markets and inflation", mimeo.

Aldasoro, I, S Doerr, L Gambacorta, S Notra, T Oliviero and D Whyte (2024): "Generative artificial intelligence and cybersecurity in central banking", BIS Papers, no 145, May.


Landlords Have Started Using A.I. Chatbots to Manage Properties – The New York Times

The new maintenance coordinator at an apartment complex in Dallas has been getting kudos from tenants and colleagues for good work and late-night assistance. Previously, the eight people on the property's staff, managing the building's 814 apartments and town homes, were overworked and putting in more hours than they wanted.

Besides working overtime, the new staff member at the complex, the District at Cypress Waters, is available 24/7 to schedule repair requests and doesn't take any time off.

That's because the maintenance coordinator is an artificial intelligence bot that the property manager, Jason Busboom, began using last year. The bot, which sends text messages using the name Matt, takes requests and manages appointments.

The team also has Lisa, the leasing bot that answers questions from prospective tenants, and Hunter, the bot that reminds people to pay rent. Mr. Busboom chose the personalities he wanted for each A.I. assistant: Lisa is professional and informative; Matt is friendly and helpful; and Hunter is stern, needing to sound authoritative when reminding tenants to pay rent.

The technology has freed up valuable time for Mr. Busboom's human staff, he said, and everyone is now much happier in his or her job. "Before, when someone took vacation, it was very stressful," he added.

Chatbots, as well as other A.I. tools that can track the use of common areas and monitor energy use, aid construction management and perform other tasks, are becoming more commonplace in property management. The money and time saved by the new technologies could generate $110 billion or more in value for the real estate industry, according to a report released in 2023 by McKinsey Global Institute. But A.I.'s advances and its catapult into public consciousness have also stirred up questions about whether tenants should be informed when they're interacting with an A.I. bot.


The rest is here:

Landlords Have Started Using A.I. Chatbots to Manage Properties - The New York Times

Is artificial intelligence making big tech too big? – The Economist

When ChatGPT took everyone by storm in November 2022, it was OpenAI, the startup behind it, that seized the business world's attention. But, as usual, big tech is back on the front foot. Nvidia, maker of accelerator chips that are at the core of generative artificial intelligence (AI), is duelling with Microsoft, a tech giant of longer standing, to be the world's most valuable company. Like Microsoft, it is investing in a diverse ecosystem of startups that it hopes will strengthen its lead. Predictably, given the techlash mindset of the regulatory authorities, both firms are high on the watch list of antitrust agencies.

Don't roll your eyes. The trustbusters may have infamously overreached in recent years in their attempts to cut big firms down to size. Yet for years big-tech incumbents in Silicon Valley and elsewhere have shown just as infamous a tendency to strut imperiously across their digital domains. What is intriguing is the speed at which the antitrust authorities are operating. Historically, such investigations have tended to be labyrinthine. It took 40 years for the Supreme Court to order E.I. Du Pont de Nemours, a large American chemical firm, to divest its anticompetitive stake in General Motors, which it first started to acquire in 1917 when GM was a fledgling carmaker. The Federal Trade Commission (FTC), an American antitrust agency, is still embroiled in a battle with Meta, a social-media giant, to unwind Facebook's acquisitions of Instagram and WhatsApp, done 12 and ten years ago, respectively.

Originally posted here:

Is artificial intelligence making big tech too big? - The Economist

A new lab and a new paper reignite an old AI debate – The Economist

AFTER SAM ALTMAN was sacked from OpenAI in November of 2023, a meme went viral among artificial-intelligence (AI) types on social media. "What did Ilya see?" it asked, referring to Ilya Sutskever, a co-founder of the startup who triggered the coup. Some believed a rumoured new breakthrough at the company that gave the world ChatGPT had spooked Mr Sutskever.

Although Mr Altman was back in charge within days, and Mr Sutskever said he regretted his move, whatever Ilya saw appears to have stuck in his craw. In May he left OpenAI. And on June 19th he launched Safe Superintelligence (SSI), a new startup dedicated to building a superhuman AI. The outfit, whose other co-founders are Daniel Gross, a venture capitalist, and Daniel Levy, a former OpenAI researcher, does not plan to offer any actual products. It has not divulged the names of its investors.

Read the rest here:

A new lab and a new paper reignite an old AI debate - The Economist

When midlife hits hard: Signs from work, family, artificial intelligence and Applebee's – The Community Paper

They say, "It all begins with a phone call," and they were right. A family member had fallen and broken her hip. We were suddenly immersed in her care. When we weren't at work, we were at the facility, in a hospital waiting room, on the phone with service providers, or in a care meeting. There was very little sleep, lots of worry, sporadic panic and overall overwhelmingness.

I am an action person. In times of uncertainty, I busy myself with overcomplicated tasks to keep myself distracted. One day when I was walking out of the health care facility, I arbitrarily decided the walker that the hospital provided was dangerous (too tippy), so I found the Cadillac of walkers on Amazon. I was about to buy it with one click when I saw a woman with the same walker by the door and asked her if she liked it.

She said, "I like it so much, I have two." She didn't need the brand-new second one and offered to sell it to me for half price!

I told her I'd be back in 30 minutes. I just needed to run home and get cash.

When I got there, Fourteen said, "Oh good, you're home. My sports physical is in 10 minutes." We'd enrolled him in high school two days earlier, and he needed the sports physical that day to start conditioning for football. With a couple of phone calls and some fancy driving on I-4, we made it all happen.

Realizing I'm entering the sandwich generation portion of my journey isn't the only flashing neon midlife crisis sign I've seen lately. Weeks before the phone call, I'd made a very unexpected decision to pursue a move onto the admin team at school. I've been a kindergarten teacher for eight years and taught preschool before that. I was extremely comfortable in my role. In fact, every time I had an HR meeting, I told them wild horses couldn't drag me from my classroom. Here was my wild horse: The preschool director was leaving to pursue new opportunities.

Tom Peters, author of In Search of Excellence, said, "If a window of opportunity appears, don't pull down the shade."

Before I knew it, I found myself texting our school's director: "I'd like to be considered for the preschool director position." I don't know what's more mid-life-y: an impulsive career change or reading quotes about opportunity?

Amid all this chaos, we found ourselves relying on the Wendy's drive-thru for way too many meals. Then Wendy's replaced their human order-takers with an AI kiosk. Last night, when the robot asked me what kind of sauce I'd like with one of my Biggie Bags, I said, "Barbecue."

It told me barbecue sauce wasn't an option. So I said, "Honey mustard."

It said, "Barbecue sauce is not an option."

I asked what the options were.

It said, "Sweet and sour, honey mustard, and barbecue."

I said, "No sauce."

It asked me what I'd like to drink.

"Sprite."

It replied, "Barbecue sauce is not an option."

As I shook my fist and screamed for it to get off my lawn, a human being came over the speaker and took my order. I was frustrated by technology, another sign of midlife crisis.

And last, but certainly not the least difficult to process: I was trying to passively enjoy an episode of Dateline one evening (wait, there's more) when I heard arguably the best song of 2001, Missy Elliott's "Get Ur Freak On." Assuming I'd accidentally switched to a music channel, I looked up in sheer horror to see it was an Applebee's commercial.

That, my friends, is when I decided it's time to (1) embrace that music will never be as good as it was 20 years ago, (2) give myself time and grace to figure out how to navigate high schools and assisted living communities simultaneously, and (3) trade in all my jeans that are getting too tight (thanks, hormones!) for comfy Amazon two-piece sets with elastic waists.

Welcome to midlife. I think I'm going to like it here.

Read this article:

When midlife hits hard: Signs from work, family, artificial intelligence and Applebee's - The Community Paper

Should Artificial Intelligence Supply Plain Meaning? The 11th Circuit Wants to Know – Hunton Andrews Kurth LLP

Insurance coverage lawsuits often hinge on the plain and ordinary meaning of specific words or phrases. But not every word in an insurance policy can be defined. Yet without stable and predictable definitions, neither policyholders nor insurers can establish a clear and consistent scope of coverage. In a recent concurring opinion, Eleventh Circuit Judge Kevin Newsom suggests that artificial intelligence (AI) large language models (LLMs) could help resolve these definitional debates. His opinion in Snell v. United Specialty Insurance Company, No. 22-12581, 2024 WL 2717700 (11th Cir. May 28, 2024) highlights the pros and cons of calling upon technology to supply plain meaning.

This approach may even offer promise for a fundamental issue plaguing the insurability of AI risk, which we discussed last month. That is, how to define AI to ensure a functional and predictable scope of coverage?

LLMs as a Tool in the Interpretive Toolkit

In Snell, an insured sought coverage under a Commercial General Liability policy in connection with a lawsuit brought after a child sustained injuries while using an in-ground trampoline. The insurer denied coverage and refused to defend the lawsuit. The lawsuit alleged that Snell, a landscaper, negligently installed the trampoline in a client's backyard. The district court found that coverage would turn on whether installation of the trampoline amounted to "landscaping," as that term was used in the policy. But the policy did not supply a definition for the term "landscaping." The court, therefore, turned to the common, everyday meaning of the term, which the district court found to not include trampoline installation.

The Eleventh Circuit ultimately affirmed the district court's decision on Alabama-law-specific grounds unrelated to the meaning of "landscaping." Yet, of particular note, in a concurring opinion, Judge Newsom suggested that LLMs like OpenAI's ChatGPT, Google's Gemini and Anthropic's Claude could help discern the ordinary meaning of undefined words in legal instruments, including insurance policies.

Judge Newsom identified several benefits to using LLMs for this purpose. LLMs train on vast amounts of ordinary-language data (much more than might be available through a dictionary), making them particularly adept at determining common usage. They understand context, which helps detect nuanced language patterns. LLMs are also increasingly accessible, making them an inexpensive research tool for judges, lawyers and ordinary citizens. In addition, using LLMs to facilitate ordinary-meaning interpretation enhances the transparency of the interpretive process, because queries and outputs from LLMs can be fully disclosed to the public.

Judge Newsom also pointed out potential drawbacks. LLMs can hallucinate and generate incorrect information at times. They might not fully capture offline speech, potentially missing how underrepresented groups use language. There is also the risk that lawyers, judges or litigants may try to manipulate LLMs to favor their preferred interpretation. Moreover, over-reliance on LLMs could lead to dystopian "robo judges."

Despite these concerns, Judge Newsom suggests that AI could be another tool in the judiciary's toolbox, alongside dictionaries and semantic canons.

Can LLMs Define AI?

As discussed previously, we have identified the general inability to clearly and uniformly define AI for purposes of insuring AI risks. Following Judge Newsom's suggestion, we asked ChatGPT to provide a stable and predictable definition of AI for use in insurance policies, to answer the question posed in our last article. That is, how should insurance policies define AI to ensure a functional scope of coverage? Here's ChatGPT's response:

Artificial Intelligence (AI) refers to the capability of a machine to perform tasks that typically require human intelligence, such as learning, reasoning, problem-solving, perception, and natural language understanding. AI systems include technologies like machine learning algorithms, neural networks, and natural language processing, which enable these machines to analyze data, recognize patterns, and make decisions or predictions based on that analysis.

ChatGPT even explained why its definition is a good one. It explained that this definition is comprehensive, covering all AI capabilities and ensuring all current and foreseeable technologies are included. It is clear and specific, identifying key components of AI systems, and technologically neutral, making it adaptable to future advancements. By focusing on AI's functional aspects, it aligns with practical insurance concerns. This definition provides a clear framework, reducing disputes and aligning with existing legal frameworks, making it easier to use in legal contexts. It also minimizes differing interpretations, leading to more straightforward claims processes and fewer litigation risks. Whether this AI-generated definition or some iteration might provide a functional definition for purposes of insurance remains to be seen.
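For readers curious how such a query looks in practice, the sketch below composes the kind of plain-meaning prompt the firm describes. It is purely hypothetical: the function name and prompt wording are our own illustration, not the query actually submitted to ChatGPT, and the snippet only builds the prompt text (an actual call to a provider's chat API would be substituted for the final `print`).

```python
# Hypothetical sketch of a plain-meaning query like the one described above.
# The function name and prompt wording are illustrative assumptions, not the
# firm's actual query; any LLM chat API could consume the resulting prompt.

def build_plain_meaning_prompt(term: str, context: str) -> str:
    """Compose a prompt asking an LLM for the ordinary meaning of a term."""
    return (
        f"Provide a stable and predictable definition of '{term}' "
        f"as the term is ordinarily used, suitable for {context}. "
        "Briefly explain why the definition is functional and technology-neutral."
    )

# The composed prompt would then be sent to an LLM chat endpoint.
prompt = build_plain_meaning_prompt("artificial intelligence", "insurance policies")
print(prompt)
```

Disclosing the exact prompt alongside the model's output is what gives this approach the transparency Judge Newsom highlights.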

Conclusion

As policyholders and insurers work to resolve the age-old task of supplying meaning to undefined terms, or defining new risks like those posed by AI, they might find it useful to follow Judge Newsom's recommendation and use AI among the other tools in their toolkits to resolve definitional debates. For now, however, while landscapers and acrobats can rest assured knowing that trampolines are not landscaping (at least in the 11th Circuit), the more vexing insurance-related AI issue remains: what is AI?

Go here to see the original:

Should Artificial Intelligence Supply Plain Meaning? The 11th Circuit Wants to Know - Hunton Andrews Kurth LLP

150 Top AI Companies of 2024: Visionaries Driving the AI Revolution – eWeek


Artificial intelligence companies are riding a hyper-accelerated growth curve. Like the crack of a starting gun, the November 2022 launch of ChatGPT awakened the world to the vast potential of AI, particularly generative AI. As more companies invest in machine learning, automation, robotics, and AI-based data analytics solutions, AI has quickly become a foundational technology of business.

This list of AI companies chronicles this growth by reflecting the dynamic shifts disrupting the tech industry. It covers the full ecosystem of AI vendors: new generative AI companies, entrenched giants, AI purveyors across verticals, and upstart visionaries. There's no telling which will most influence AI's future, but we believe that the players on this list as a whole will profoundly reshape technology and, as a direct result, the arts, retail, and the entirety of culture.

It's no coincidence that this top AI companies list is composed mostly of cloud providers. Artificial intelligence requires massive storage and compute power at the level provided by the top cloud platforms. These cloud leaders are offering a growing menu of AI solutions to existing clients, giving them an enormous competitive advantage in the battle for AI market share. The cloud leaders represented also have deep pockets, which is key to their success, as AI development is exceptionally expensive.

Enterprise leader in AI

As a dominant provider of enterprise solutions and a cloud leader (its Azure Cloud is second only to AWS), Microsoft has invested heavily in AI, with plenty to show for it. For example, it has significantly expanded its relationship with OpenAI, the creator of ChatGPT, leading to the development of intelligent AI copilots and other generative AI technologies that are embedded or otherwise integrated with Microsoft's products. Leveraging its massive supercomputing platform, its goal is to enable customers to build out AI applications on a global scale. With its existing infrastructure and partnerships, current trajectory, and penchant for innovation, it's likely that Microsoft will be the leading provider of AI solutions to the enterprise in the long run.

Top-tier managed services for cloud and AI

As the top dog in the all-important world of cloud computing, few companies are better positioned than AWS to provide AI services and machine learning to a massive customer base. In true AWS fashion, its profusion of new tools is endless and intensely focused on making AI accessible to enterprise buyers. AWS's long list of AI services includes quality control, machine learning, chatbots, automated speech recognition, and online fraud detection. It is one of the best providers of innovative AI managed services.

To learn about new directions in generative AI, see the eWeek video: AWS VP Bratin Saha on the Bedrock Generative AI Tools

Leading generative AI for technical and non-technical audiences

As the most successful search giant of all time, Google's historic strength is in algorithms, which are the very foundation of AI. Though Google Cloud is perennially a distant third in the cloud market, its platform is a natural conduit to offer AI services to customers. The Gemini ecosystem has proven especially popular and innovative, combining access to generative AI infrastructure, developer tools, and a user-friendly natural language interface. The company is also heavily focused on responsible AI and communicating how it is working toward an ethical AI approach.

Founder of Watson and watsonx AI solutions

A top hybrid and multicloud vendor boosted by its acquisition of Red Hat in 2019, IBM serves a deep-pocketed global customer base with the resources to invest heavily in AI. IBM has an extensive AI portfolio, highlighted by the Watson platform, with strengths in conversational AI, machine learning, and automation. The company invests deeply in R&D and has a treasure trove of patents; its AI alliance with MIT will also likely fuel unique advances in the future.

Leading provider of GPUs and other AI infrastructure

All roads lead to Nvidia as AI (especially generative AI and larger models) grows ever more important. At the center of Nvidia's strength are the company's wicked-fast GPUs, which provide the power and speed for compute-intensive AI applications. Additionally, Nvidia offers a full suite of software solutions, from generative AI to AI training to AI cybersecurity. It also has a network of partnerships with large businesses to develop AI and frequently funds AI startups.

For an in-depth look at how generative AI and advanced hardware are changing security, see the eWeek video: Nvidia CSO David Reber on AI and Cybersecurity

Embedded AI assistance in social media apps

Meta, the parent company of Facebook, Instagram, and many other popular platforms, had a slightly slower start on generative AI than some of the other tech giants, but it has nonetheless blazed through to create some of the most ubiquitous and innovative solutions on the market today. Meta's Llama 3, for example, is one of the largest and most accessible LLMs on the market, as it is open source and available for research and commercial use. The company is also very transparent with its own AI research and resources. Most recently, Meta has developed Meta AI, an intelligent assistant that can operate in the background of Facebook, Messenger, Instagram, and WhatsApp.

Chinese innovator in AI and quantum computing

Little known in the U.S., Baidu owns the majority of the internet search market in China. The company's AI platform, Baidu Brain, processes text and images and builds user profiles. With the most recent generation, Baidu Brain 6.0, quantum computing capabilities have also expanded significantly. It has also launched its own ChatGPT-like tool, a generative AI chatbot called Ernie Bot.

Leader in cloud-based AI support

Oracle's cloud platform has leapt forward over the past few years (it's now one of the top cloud vendors), and its cloud strength will be a major conduit for AI services to come. To bulk up its AI credentials, Oracle has partnered with Nvidia to boost enterprise AI adoption. The company stresses its machine learning and automation offerings and also sells a menu of prebuilt models to enable faster AI deployment.

To find out how a cloud leader is facing the challenges of today's IT sector, see the eWeek video: Oracle Cloud's Leo Leung on Cloud Challenges and Solutions

Cloud leader and innovator in APAC region

Alibaba, a Chinese e-commerce giant and leader in Asian cloud computing, split into six divisions, each empowered to raise capital. Of particular note is the Alibaba Cloud Intelligence group, which handles cloud and AI innovations and products. While Alibaba has been greatly hampered by government crackdowns, observers see the Cloud Intelligence group as a major supporter of AI development. The company is also working to optimize a ChatGPT-like tool.

For more information about today's leading generative AI software, see our guide: Top 20 Generative AI Tools & Applications

Think of these AI companies as the forward-looking cohort that is inventing and supporting the systems that propel AI forward. It's a mixed bunch with diverse approaches to AI, some more directly focused on AI tools than others. Note that most of these pioneer companies were founded between 2009 and 2013, long before the ChatGPT hype cycle.

These companies are at the center of a debate about who will have the most control over the future of AI. Will it be these agile and innovative pioneers, or the giant cloud vendors that have the deep infrastructure that AI needs and can sell their AI tools to an already-captive customer base?

Founder of ChatGPT

The world was forever changed when OpenAI debuted ChatGPT in November 2022a major milestone in the history of artificial intelligence. Founded in 2015 with $1 billion in seed funding, San Francisco-based OpenAI benefits from a cloud partnership with Microsoft, which has invested a rumored $13 billion in OpenAI. Not content to rest on its success, OpenAI has launched GPT-4, a larger multimodal version of its successful LLM foundation model, and continues to innovate in areas like text-to-video generation. The company also offers DALL-E, which creates artistic images from user text prompts.

Industry-focused AI solutions and services

Founded in 2009, C3.ai is part of a new breed that can truly be called AI vendors: not a legacy tech company that has shifted into AI but a company created specifically to sell AI solutions to the enterprise. The company offers a long menu of turnkey AI solutions so companies can deploy AI without the complexity of building it themselves. Clients include the U.S. Air Force, which uses AI to predict system failure, and Shell, which uses C3.ai to monitor equipment across its sprawling infrastructure.

For an in-depth comparison of C3.ai and a major competitor, see our guide: C3.ai vs. DataRobot: Top Cloud AI Platforms

Solutions provider for generative and predictive AI

Founded in 2011, H2O.ai is another company built from the ground up with the mission of providing AI software to the enterprise. H2O focuses on democratizing AI. This means that while AI has traditionally been available only to a few, H2O works to make AI practical for companies without major in-house AI expertise. With solutions for AI middleware, AI in-app stores, and AI applications, the company claims thousands of customers for its H2O Cloud.

To learn how computers can see the world around them, watch our eWeek video: H2O.ai's Prashant Natarajan on AI and Computer Vision

Cloud-agnostic AI and data solutions

Founded in 2012, DataRobot offers an AI Cloud that's cloud-agnostic, so it works with all the cloud leaders (AWS, Azure, and Google, for example). It's built with a multicloud architecture that offers a single platform accessible to all manner of data professionals. Its value is that it provides data pros with deep AI support to analyze data, which supercharges data analysis and processing. Among its outcomes is faster and more flexible machine learning model creation.

For an in-depth comparison of DataRobot and a major competitor, read DataRobot vs. H2O.ai: Top Cloud AI Platforms.

Next-gen data warehouse and AI data cloud vendor

Founded in 2012, Snowflake is a next-gen data warehouse vendor. Artificial intelligence requires oceanic amounts of data, properly prepped, shaped, and processed, and supporting this level of data crunching is one of Snowflake's strengths. Operating across AWS, Microsoft Azure, and Google Cloud, Snowflake's AI Data Cloud aims to eliminate data silos for optimized data gathering and processing.

For an expert take on how today's IT platforms are enabling wider data access, see the eWeek video: Snowflake's Torsten Grabs on AI and Democratizing Data

Low-code/no-code AI/ML model development platform

Founded in 2013, Dataiku is a vendor with an AI and machine learning platform that aims to democratize tech by enabling both data professionals and business professionals to create data models. Using shareable dashboards and built-in algorithms, Dataiku users can spin up machine learning or deep learning models; most helpfully, it allows users to create models without writing code.

End-to-end data analytics and AI workflows

Since RapidMiner was acquired by Altair in 2022, the vendor has continued to grow and improve its no-code AI app-building features, which allow non-technical users to create applications without writing software. The company also offers a no-code MLOps solution that uses a containerized approach. As a sign of the times, users can build models using a visual, code-based, or automated approach, depending on their preference.

Unified AI orchestration solution provider

Founded in 2013, Domino Data Lab offers both comprehensive AIOps and MLOps (machine learning operations) solutions through its platform technology. With its enterprise AI platform, users can easily manage their data, software, apps, APIs, and other infrastructural elements in a unified ecosystem. Users have the option to work with hybrid or multicloud orchestration, and they can also choose between a SaaS or self-managed approach. Domino Data Lab has partnered with Nvidia to provide a faster development environment, so expect more innovation from them soon.

To learn how today's software developers are finding ways to work faster, see the eWeek video: Domino Data Lab's Jack Parmer on Code First Data Science

AI-optimized data lakehouses and infrastructure

Founded in 2013, Databricks offers an enterprise data intelligence platform that supports the flexible data processing needed to create successful AI and ML deployments; think of this data solution as the crucial building block of artificial intelligence. Through its innovative data storage and management technology, Databricks ingests and preps data from myriad sources. Its data management and data governance tools work with all major cloud players. The company is best known for its integration of the data warehouse (where the data is processed) and the data lake (where the data is stored) into a data lakehouse format.

Interested in the relationship between AI and data? See the eWeek video: Databricks's Chris D'Agostino on AI and Data Management

AI solutions for graphic designers and creatives

Adobe is a SaaS company that primarily offers marketing and creative tools to its users. The company has begun to enhance all of these products with AI solutions, including Adobe Firefly, a robust generative AI tool and assistant that helps users personalize marketing assets, edit visual assets for better quality, and generally produce creative content at scale across different Adobe suite products. In late 2023, Adobe expanded its AI capabilities through its acquisition of Rephrase.ai, a text-to-video studio solution.

Drag-and-drop approach to data and AI modeling

A prime example of a mega-theme driving AI, Alteryx's goal is to make AI models easier to build. The aim is to abstract away the complexity and coding involved in deploying artificial intelligence. The platform enables users to connect data sources to automated modeling tools through a drag-and-drop interface, allowing data professionals to create new models more efficiently. Users grab data from data warehouses, cloud applications, and spreadsheets, all in a visualized data environment. Alteryx was founded in 1997.

Learn about the major trend toward enabling wider access to data by watching the eWeek video: Alteryx's Suresh Vittal on the Democratization of Data Analytics

A conversational approach to generative content

Inflection AI labels itself as an AI studio that is looking to create advanced applied AI for more challenging use cases. While it has hinted at other projects in the works, its primary product right now is Pi, a conversational AI designed to take a personalized approach to casual conversations. Pi can be accessed through pi.ai as well as iOS and Android apps. The company was founded by former leaders from DeepMind, Google, OpenAI, Microsoft, and Meta, though several of these leaders have since left for Microsoft's new AI division. It's truly up in the air how this change will impact the company and Pi, though an API is expected in the near future.

Leading provider of AI for public sector use cases

Scale is an AI company that covers a lot of ground with its products and solutions, giving users the tools to build, scale, and customize AI modelsincluding generative AI modelsfor various use cases. The Scale Data Engine simplifies the process of collecting, preparing, and testing data before AI model development and deployment, while the Scale Generative AI Platform and Scale custom LLMs give users the ability to fine-tune generative AI to their specifications. Scale is also a leading provider of AI solutions for federal, defense, and public sector use cases in the government.

Leader in AI networking solutions

Arista Networks is a longstanding cloud computing and networking company that has quickly advanced its infrastructure and tooling to accommodate high-volume and high-frequency AI traffic. More specifically, the company has worked on its GPU and storage connections and sophisticated network operating software. Tools like the Arista Networks 7800 AI Spine and the Arista Extensible Operating System (EOS) are leading the way when it comes to giving users the self-service capabilities to manage AI traffic and network performance.

Hybrid, cloud-agnostic data platform

Having merged with former competitor Hortonworks, Cloudera now offers the Cloudera Data Platform and the Cloudera Machine Learning solution to help data pros collaborate in a unified platform that supports AI development. The ML solutions are specifically designed to perform data prep and predictive reporting. As an example of emerging trends, Cloudera provides portable cloud-native data analytics. Cloudera was founded in 2008.

For an inside view of where data leader Cloudera is headed, see the eWeek video: Cloudera's Ram Venkatesh on the Cloudera Roadmap

Leader in blockchain, Web3, and metaverse technologies

Accubits is a blockchain, Web3, and metaverse tech solutions provider that has expanded its services and projects into artificial intelligence as well. The company primarily works to support other companies in their digital transformation efforts, offering everything from technology consulting to hands-on product and AI development. The company's main AI services include support for AI product and model development, consulting for generative AI projects, solution architecting, and automation solutions.

If the AI pioneers are a mixed bag, this group of AI visionaries is heading off in an even wider array of directions. These AI startups are closer to the edge, building a new vision even as they imagine it; in many cases they're inventing the generative AI landscape in real time. More than with any technology before, there's no roadmap for the growth of AI, yet these generative AI startups are proceeding at full speed.

Generative AI leader committed to constitutional AI

Founded by two former senior members of OpenAI, Anthropic's generative AI chatbot, Claude 3, provides detailed written answers to user questions; with this most recent generation, certain aspects of multimodality have been introduced while other components of the platform have been improved. In essence, it's another tool that operates like ChatGPT, but with a twist: Anthropic publicly proclaims its focus on Constitutional AI, a methodology it has developed for consistent safety, transparency, and ethicality in its models.

Leader in generative enterprise search technology

Considered one of the unicorns of the emerging generative AI scene, Glean provides AI-powered search that primarily focuses on workplace and enterprise knowledge bases. With its Workplace Search, Assistant, Knowledge Management, Work Hub, and Connectors features, business leaders can set up a self-service learning and resource management tool for employees to find important documentation and information across business applications and corporate initiatives.

Commitment to general intelligence AI assistants

See the original post:

150 Top AI Companies of 2024: Visionaries Driving the AI Revolution - eWeek

It’s not just Nvidia: AI interest sends these stocks higher – The Washington Post

Attention around artificial intelligence has driven chipmaker Nvidia sharply higher in recent years, to the point where it briefly was the world's most valuable company, but AI investors are targeting other stocks, too.

Some hardware-focused companies in the AI supply chain have seen blistering stock price gains in the past 18 months, and the process of implementing AI across large organizations is already driving business for leading software firms.

The hype has drawn comparisons to the dot-com bubble of the late 1990s, when many internet start-ups saw massive, short-lived investment gains before crashing down. But this time, some analysts say, much of the AI interest has been concentrated on a much smaller number of established technology firms, and it is linked to significant corporate spending happening now.

"The impact of generative AI is not as broad-based as initially imagined," said Chirag Dekate, a vice president and analyst at Gartner. "There are very specific entities that are providing the foundational technology."

Here are some publicly traded companies riding a wave of artificial intelligence investment.

Co-founded by technology investor Peter Thiel, this company has evolved from an organization doing mostly defense and intelligence work into a data company serving enterprises of all sorts. Under chief executive Alex Karp, the company has built a growing suite of artificial-intelligence offerings.

It is part of a growing industry that implements AI technology for large organizations, a sector that also includes C3AI and the consulting firms Deloitte, Accenture and Ernst & Young, according to Dekate.

Palantir's platform examines a company's data and provides examples of how AI can be employed within an organization. Wedbush Securities analyst Dan Ives said he sees Palantir as "the golden child of AI" because of its emphasis on the practical use of artificial intelligence within large organizations.

"Nvidia chips are just the start, but it all comes down to use cases," Ives said.

The contract manufacturing giant has a ubiquitous presence in the global tech industry with its production of computer chips built into consumer products like smartphones and cars, as well as military satellites and weapons systems.

Deepwater Asset Management's Munster says his firm is invested in TSMC, along with Broadcom and Vertiv, as part of a broader play to capitalize on the growth of AI-enabling hardware. A company called Onto Innovation, which handles specialized measurement for chip construction, is also seen as a niche beneficiary.

"Hardware is the play right now because we're seeing tangible improvements to their business. They are trading at software-like multiples," Munster said. "The hardware for this is just getting built."

See the article here:

It's not just Nvidia: AI interest sends these stocks higher - The Washington Post

Leveraging Artificial Intelligence to Revolutionize Efficiency in Cryptocurrency Staking – GlobeNewswire

Miami, FL, June 27, 2024 (GLOBE NEWSWIRE) -- CryptoHeap, a leading name in the cryptocurrency staking industry, is excited to announce its latest innovation: AI-driven crypto staking. By leveraging cutting-edge artificial intelligence, CryptoHeap aims to revolutionize efficiency and profitability in cryptocurrency staking, setting a new standard for the industry. This groundbreaking technology is poised to enhance user experience, optimize returns, and solidify CryptoHeap's position as one of the best crypto staking platforms available.

Salvage Warwick, CEO of CryptoHeap, highlighted the transformative potential of AI in crypto staking. "The integration of AI into our staking platform is a significant milestone for CryptoHeap. This advancement allows us to provide users with more efficient, accurate, and profitable staking opportunities. We believe AI-driven staking will be a game-changer, not just for our platform, but for the entire industry," Warwick stated.

Enhancing Efficiency with AI

Artificial intelligence offers numerous benefits for crypto staking platforms. By employing machine learning algorithms and predictive analytics, CryptoHeap can process vast amounts of market data in real-time. This capability enables the platform to make informed decisions, optimize staking strategies, and maximize returns for users. The AI-driven approach also improves risk management, providing investors with a more secure and stable staking experience.

"Our AI-driven platform continuously learns and adapts to market conditions. This means our users benefit from the most current strategies and insights, making their staking experience more rewarding and secure," Warwick explained.

Comprehensive Staking Packages

CryptoHeap's AI-driven platform offers a range of staking packages tailored to various investment goals. These packages include some of the best crypto staking coins, positioning CryptoHeap as a top choice for those looking to invest in the best crypto to stake in 2024. By providing options with daily rewards, capital return, and significant referral bonuses, CryptoHeap ensures a diverse range of opportunities for investors.

Focus on Ethereum Staking

Ethereum remains a focal point for many investors, and CryptoHeap's AI-driven platform is among the best Ethereum staking platforms available. The platform's advanced AI capabilities provide enhanced insights and strategies for staking Ethereum, ensuring users can maximize their returns safely and efficiently.

Warwick emphasized the benefits of Ethereum staking on the platform. "Ethereum staking is a cornerstone of our offerings. Our AI technology provides users with the best possible strategies for staking Ethereum, addressing common concerns such as 'is staking ethereum a good idea' and 'is staking ethereum safe.' With our platform, users can stake Ethereum with confidence and achieve superior returns," he said.

Strategic Monitoring and Future Plans

As the crypto market evolves, CryptoHeap remains committed to innovation and user satisfaction. The platform continuously enhances its AI capabilities to ensure users can navigate the complexities of the crypto market effectively.

"We are continuously improving our AI algorithms and expanding our offerings to meet the needs of our users. Our focus on innovation and excellence ensures CryptoHeap remains at the forefront of the crypto staking industry," Warwick concluded.

With the introduction of AI-driven crypto staking, CryptoHeap is set to revolutionize the industry. The platform's commitment to leveraging cutting-edge technology, providing comprehensive staking packages, and ensuring security and education positions it as a leader in the crypto staking space.

Investors and crypto enthusiasts are encouraged to explore the AI-driven staking packages and other features available on CryptoHeap's platform. For more information about CryptoHeap's services and upcoming enhancements, visit the official website at https://cryptoheap.com/.

Disclaimer: The information provided in this press release is not a solicitation for investment, nor is it intended as investment advice, financial advice, or trading advice. It is strongly recommended you practice due diligence, including consultation with a professional financial advisor, before investing in or trading cryptocurrency & securities.

##

Read the original post:

Leveraging Artificial Intelligence to Revolutionize Efficiency in Cryptocurrency Staking - GlobeNewswire

How Will Artificial Intelligence Change MBA Jobs? – BusinessBecause

Will we see artificial intelligence jobs for MBAs? Find out which MBA jobs AI could revolutionize or replace, and what new opportunities await grads who know how to use AI tools.

Like every technological shift, this change brings uncertainty. Many functions look poised to be drastically changed, reduced, or even replaced by AI. Some particular areas where changes are expected are in customer service, proofreading, and bookkeeping.

But what impact will AI have on MBA jobs? Are the career paths most suited to MBAs going to be shut off or rerouted? Or, as the World Economic Forum predicts, will the new technology create more avenues than it closes?

We spoke to AI experts at several top business schools, including Dartmouth College's Tuck School of Business, The Wharton School, and Carnegie Mellon University's Tepper School of Business to find out how AI will affect MBA jobs. Here's what they said.

The MBA degree is designed to prepare students for roles in middle management and higher. With this in mind, MBAs are insulated from some of the immediate changes AI is making.

"Right now, we anticipate that AI will have the biggest impact on early career and entry-level roles that our students will oversee in more managerial positions," says Joe Hall, senior associate dean for teaching and learning at the Tuck School of Business at Dartmouth.

However, this does mean that managers must know how to manage changes in subordinates' roles and understand how AI tools work.

"While AI may not have a massive effect on Tuck MBAs' day-to-day lives just yet, they undoubtedly need to understand how this technology works and why it has such great potential," says Joe.

To prepare students, the Tuck school has introduced a slate of new courses focused on AI, including a module specifically addressing AI for managers.

Where AI will make a direct impact on MBA jobs is as a managerial tool.

Eric Bradlow, vice dean of AI and analytics at Wharton, says: "AI will clearly create jobs that require workers, managers, and the C-suite to use AI as a decision support tool."

That is, there will need to be workers who know how to utilize AI, bring AI to their organization, lead in a world of AI, train people in AI, and so on. In addition, a number of new industries will arise because of AI.

MBA grads will need to know how to use AI to support decision-making by using it to quickly verify or aggregate data, spot trends, or assess risks.

The Wharton School hasn't wasted any time in preparing students with these AI skills, Eric adds.

"We are providing ChatGPT licenses to all MBA students. We will be providing training courses in AI. We have AI Hack-a-thons, which reward the best student ideas in AI. We have experiential learning projects that allow students to apply AI to real companies."

The school also recently launched the Wharton AI & Analytics Initiative to harness AI for four groups: industry, researchers, students, and society at large.

So far, we've seen that AI will change the roles that MBAs manage and provide a useful tool for helping them do so. It will also, crucially, accentuate the value of their human skills.

"What we can say with a degree of certainty is that in any future jobs we will shift, where appropriate, to a collaboration between workers and AI," says Laurence Ales, professor of economics and GenAI fellow at the Tepper School's Center for Intelligent Business.

Beyond knowledge of how to interact with AI, this shift will also require workers to be comfortable with higher-level thinking: "workers should be able to understand the problems and be able to formulate the right questions more than executing routine tasks to reach the answer."

This type of critical thinking is an essential skill in an AI-enabled workplace, and one that business schools are well-equipped to provide.

"At the Tepper school we recognize that decision-making requires a framework to understand problems," says Laurence.

"Our program helps students build actionable frameworks in all of the functional areas of business. The strategic, analytical, and leadership skills of our MBA students will be crucial in navigating AI-driven transformations and driving business growth."

Most MBA graduates go into one of three industries after their degree: consulting, finance, or technology. At Wharton, for example, over 86% of the most recent class took up roles in one of these three industries.

How will each of these industries be impacted by AI? Here's our snapshot.

AI is unlikely to replace human consultants, for many of the same reasons that it will not replace human managers across industries. Current AI models cannot replace the combination of creativity and strategic thinking offered by human consultants.

However, AI's potential as a decision support tool comes in especially handy in consulting, for example, fast-tracking data analysis and automating other tasks.

If you're an aspiring MBA consultant, look out for MBA programs that address the need for AI skills in consulting, like the program at Dartmouth Tuck.

"Many Tuck students enter the consulting field after graduation and we recognized a need for a course specifically about AI's impact on consulting," says Joe Hall.

"So, we introduced AI and Consultative Decision-Making, which serves as a hands-on laboratory to give students experience using generative AI in an advisory and decision-making capacity."

AI will likely make sweeping changes in financial services, again mainly as a decision-support tool. Its likely functions include:

Making transaction processing faster

Automating fraud detection

Analyzing asset performance

Assessing risk for insurance purposes

It can even help on the compliance side of finance by quickly analyzing complex regulations and legal texts.
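To make one of the functions above concrete, here is a minimal, hypothetical sketch of automated fraud detection as a simple z-score rule that flags transactions far outside an account's usual range. All names and thresholds are invented for illustration; production systems use trained models and far richer features:

```python
from statistics import mean, stdev

def flag_anomalies(amounts, threshold=3.0):
    """Flag transaction amounts more than `threshold` standard
    deviations from the mean of the history (a toy z-score rule)."""
    mu, sigma = mean(amounts), stdev(amounts)
    if sigma == 0:
        return []
    return [a for a in amounts if abs(a - mu) / sigma > threshold]

history = [42.0, 39.5, 45.2, 41.1, 38.9, 44.0, 40.3, 5000.0]
print(flag_anomalies(history, threshold=2.0))  # → [5000.0]
```

A flagged transaction would then be routed to a human reviewer, which mirrors the decision-support framing above: the model narrows the search, the person decides.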

AI will make broad changes in the tech industry, chiefly as a streamlining tool. One example is in software development, where AI can test and even generate new code.

It also has big implications for cybersecurity. An AI arms race is emerging between cyber attackers, who may use AI to create relentless and constantly-evolving threats, and cyber defenders, who can use the technology to create evolving defenses.

As such, there may be fruitful opportunities for MBAs with interests in cybersecurity and AI to pioneer in this field.

Because they are management-focused, most MBA jobs are somewhat protected from replacement by artificial intelligence. Humans' creativity and strategic thinking skills are still irreplaceable.

However, that doesn't mean that MBAs can ignore the impact of AI, or put off learning about it. To exercise irreplaceable higher-order thinking skills and manage in the world of artificial intelligence, MBAs should learn how to apply AI tools to decision-making and keep abreast of technological developments.

Additionally, MBAs may have opportunities to forge careers on the frontlines of new industries or sectors, for instance, managing cybersecurity in the age of AI.

Visit link:

How Will Artificial Intelligence Change MBA Jobs? - BusinessBecause

The Impact Of Artificial Intelligence On Mental Health Interventions – Dataconomy

Today's world witnesses exceptional advances in technology, positioning Artificial Intelligence (AI) to revolutionise various aspects, including mental health interventions. With its ability to analyse vast data sets, detect intricate patterns and generate practical insights, AI offers the potential for transforming mental healthcare. This could lead to personalised, accessible and effective interventions.

From early issue detection to customised treatment plans and 24/7 virtual support, AI-driven solutions reshape mental wellness approaches. This article delves into six important ways AI is reshaping mental health therapies. It emphasises technology's transformational potential for assisting qualified professionals who have completed counselling courses and other requisite degrees in boosting holistic well-being and improving the lives of individuals suffering from mental illnesses.

AI has the astonishing capacity to handle massive volumes of data. This skill enables the development of highly tailored treatment regimens for individuals dealing with mental health issues. By analysing a patient's genetic information, medical history, lifestyle factors and social determinants of health, AI algorithms can identify unique patterns and correlations.

These insights inform tailored treatment strategies. For instance, AI can determine the optimal combination of therapies, medications and lifestyle changes most likely to yield positive outcomes for a specific individual. Moreover, AI's continuous learning and adaptation ensure dynamic adjustments to treatment plans based on the patient's response and evolving needs. This tailored strategy increases treatment efficacy, reduces unpleasant effects and ineffective therapies, and, eventually, raises overall care quality.

The application of artificial intelligence in therapy sessions enhances the process of receiving therapy by providing critical insights into clients' emotional states and actions. AI systems can identify tiny signals and nuances that humans cannot. AI can detect changes in facial expressions indicative of underlying emotions like sadness, anger, or anxiety, allowing therapists to tailor interventions accordingly. AI-powered sentiment analysis gauges the tone and mood of interactions, facilitating deeper empathy and rapport between therapists and clients. AI helps therapists improve their observational skills and emotional intelligence, resulting in more nuanced and successful therapy treatments that promote better client understanding, self-awareness and emotional growth.

Artificial Intelligence technology transforms mental healthcare through in-depth data exploration. AI systems compile and analyse anonymised patient information from electronic records, wearable devices, social platforms and other sources. These algorithms detect trends, patterns and risk factors linked to various psychological conditions. For example, AI could uncover connections between specific genetic markers and treatment effectiveness or between environmental stressors and symptom worsening. Such insights drive the development of tailored treatments matching personal needs and circumstances.

Furthermore, AI offers predictive modelling, which enables clinicians to identify and avert approaching mental health crises. By embracing data-driven insights, mental health practitioners may enhance treatment outcomes, save healthcare costs and promote overall community well-being.

AI can aid mental health experts in proactive strategies. These strategies prevent the onset or recurrence of mental health crises. AI algorithms analyse an individual's past data. This includes treatment outcomes, medication adherence, lifestyle factors and environmental stressors.

AI can identify patterns and triggers indicating potential relapse or declining mental health. For instance, AI may recognise early warning signs like changes in sleep, social withdrawal or mood fluctuations. This prompts timely interventions such as coping strategies, medication adjustments, lifestyle changes or targeted support services.

Proactive interventions can mitigate symptom severity, enhance resilience and improve long-term prognosis. Furthermore, AI facilitates continuous monitoring and feedback, allowing the refinement of preventative interventions based on real-time data and patient feedback. This proactive strategy promotes individual well-being and helps to provide long-term, cost-effective mental healthcare services.

AI technology enhances mental health therapies, reducing the stigma associated with seeking assistance. Many people are hesitant to use traditional services because they are afraid of being judged or discriminated against. AI tools like chatbots and virtual therapists offer confidential, non-judgemental spaces to freely express thoughts, feelings and concerns without stigma's shadow. These AI-based tools are accessible anytime, anywhere, providing discreet, convenient support channels.

Moreover, anonymity allows sensitive disclosures and taboo topic discussions without social repercussions. AI interventions are often perceived as impartial and objective, devoid of human biases, which can reassure those apprehensive about unfair treatment or misunderstanding from professionals. By fostering trust and confidentiality, AI mental health platforms encourage more people to seek help, destigmatising support-seeking and normalising mental health conversations. As people interact with AI mental health tools, they may develop greater comfort in discussing concerns with human professionals, further reducing stigma in healthcare settings.

Ultimately, AI breaks access barriers and cultivates an accepting, supportive environment for those struggling with mental health issues.

Mental health issues can sometimes go unrecognized until they become serious. Fortunately, AI-powered solutions have proven helpful for early detection. By analysing data from various sources like social media, smartphone usage and wearable devices, AI algorithms can identify subtle changes that may indicate mental health concerns. For example, increased social isolation, altered communication patterns or irregular sleep habits could signal depression or anxiety. These deviations from typical behaviour serve as early warning signs.
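The deviation-from-baseline idea described above can be sketched in a few lines. This is a toy, hypothetical illustration (the function name, window sizes and threshold are all invented; real clinical tools are validated and far more sophisticated):

```python
def behavior_alert(daily_hours, baseline_days=14, recent_days=3, drop=0.25):
    """Hypothetical early-warning check: alert if the average of the most
    recent days falls more than `drop` (as a fraction) below the person's
    longer-term baseline, e.g. for hours of sleep or social contact."""
    if len(daily_hours) < baseline_days + recent_days:
        return False  # not enough history to establish a baseline
    window = daily_hours[-(baseline_days + recent_days):-recent_days]
    baseline = sum(window) / baseline_days
    recent = sum(daily_hours[-recent_days:]) / recent_days
    return recent < (1 - drop) * baseline

sleep = [7.5] * 14 + [5.0, 4.5, 5.2]  # two weeks of normal sleep, then a sharp drop
print(behavior_alert(sleep))  # → True
```

Deployed systems would combine many such signals, weigh them with a learned model, and surface the result to a clinician rather than act on it automatically.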

AI sentiment analysis is also remarkably useful. By examining written text or speech, these algorithms can detect emotional cues and linguistic markers linked to mental health symptoms. The tone, sentiment, and context of communications are scrutinised, with patterns of negative language, hopelessness or self-harm references triggering alerts for further assessment. This proactive approach enables timely intervention before symptoms escalate.
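In its simplest form, the linguistic-marker screening described above can be thought of as counting risk-associated language per unit of text. The sketch below is purely illustrative: the marker list is hand-picked for the example, whereas real screening tools rely on validated lexicons and trained models:

```python
import re

# Hypothetical marker list for illustration only.
NEGATIVE_MARKERS = {"hopeless", "worthless", "alone", "exhausted", "pointless"}

def risk_score(text):
    """Toy sentiment screen: negative-language markers per 100 words."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in NEGATIVE_MARKERS)
    return 100.0 * hits / len(words)

msg = "I feel hopeless and alone lately, everything seems pointless."
print(round(risk_score(msg), 1))  # → 33.3
```

A score above some calibrated threshold would trigger an alert for human follow-up, matching the "further assessment" step the article describes.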

Early detection facilitated by AI allows for prompt support and treatment, preventing symptoms from worsening and improving outcomes. When mental health issues are identified early, clinicians can implement targeted interventions like psychoeducation, counselling or referrals to specialised services. This encourages people to seek care proactively, establishing a sense of agency in controlling their mental health. By utilising AI for early diagnosis, mental healthcare may shift to a preventative approach, with resilience and early intervention as important pillars.

AIs integration into mental healthcare opens doors for ground-breaking solutions, yet we must prioritise ethical practices. Ensuring privacy, fairness and human dignity should guide technological progress. Collaborative efforts between developers, healthcare providers, and individuals with lived experiences can create inclusive, equitable and impactful mental health interventions. Accepting AIs revolutionary potential ethically and compassionately opens the way for a future in which the mind stays at the forefront.

See the article here:

The Impact Of Artificial Intelligence On Mental Health Interventions - Dataconomy

The fast and the deadly: When Artificial Intelligence meets Weapons of Mass Destruction – European Leadership Network

This article was originally published for the German Federal Foreign Office's Artificial Intelligence and Weapons of Mass Destruction Conference 2024, held on the 28th of June, and can be read here. You can also read The implications of AI in nuclear decision-making, by ELN Policy Fellow Alice Saltini, who will be speaking on a panel at the conference.

Artificial intelligence (AI) is a catalyst for many trends that increase the salience of nuclear, biological or chemical weapons of mass destruction (WMD). AI can facilitate and speed up the development or manufacturing of WMD or precursor technologies. With AI assistance, those who currently lack the necessary knowledge to produce fissile materials or toxic substances can acquire WMD capabilities. AI itself is of proliferation concern. As an intangible technology, it spreads easily, and its diffusion is difficult to control through supply-side mechanisms, such as export controls. At the intersection of nuclear weapons and AI, there are concerns about rising risks of inadvertent or intentional nuclear weapons use, reduced crisis stability and new arms races.

To be sure, AI also has beneficial applications and can reduce WMD-related risks. AI can make transparency and verification instruments more effective and efficient because of its ability to process immense amounts of data and detect unusual patterns, which may indicate noncompliant behaviour. AI can also improve situational awareness in crisis situations.

While efforts to explore and exploit the military dimension of AI are moving ahead rapidly, these beneficial dimensions of the AI-WMD intersection remain under-researched and under-used.

The immediate challenge is to build guardrails around the integration of AI into the WMD sphere and to slow down the incorporation of AI into research, development, production, and planning for nuclear, biological and chemical weapons. Meanwhile, governments should identify risk mitigation measures and, at the same time, intensify their search for the best approaches to capitalise on the beneficial applications of AI in controlling WMD. Efforts to ensure that the international community is able to govern this technology, rather than let it govern us, have to address challenges at three levels of the AI-WMD intersection.

First, AI can facilitate the development of biological, chemical or nuclear weapons by making research, development and production faster and more efficient. This is true even for old technologies like fissile material production, which remains expensive and requires large-scale industrial facilities. AI can help to optimise uranium enrichment or plutonium separation, two key processes in any nuclear weapons programme.

The connection between AI and chemistry and biochemistry is particularly worrying. The Director General of the Organisation for the Prohibition of Chemical Weapons (OPCW) has warned of the potential risks that artificial intelligence-assisted chemistry may pose to the Chemical Weapons Convention, and of the ease and speed with which novel routes to existing toxic compounds can be identified. This creates serious new challenges for the control of toxic substances and their precursors.

Similar concerns exist with regard to biological weapons. Synthetic biology is in itself a dynamic field. But AI puts the development of novel chemical or biological agents through such new technologies on steroids. Rather than going through lengthy and costly lab experiments, AI can predict the biological effects of known and even unknown agents. A much-cited paper by Filippa Lentzos and colleagues describes an experiment during which an AI, in less than six hours and running on a standard hardware configuration, generated forty thousand molecules that "scored within our desired threshold", meaning that these agents were likely more toxic than publicly known chemical warfare agents.

Second, AI could ease access to nuclear, biological and chemical weapons by illicit actors by giving advice on how to develop and produce WMD or relevant technologies from scratch.

To be sure, current commercial AI providers have instructed their AI models not to answer questions on how to build WMD or related technologies. But such limits will not remain impermeable. And in future, the problem may not be so much preventing the misuse of existing AI models but the proliferation of AI models or the technologies that can be used to build them. Only a fraction of all spending on AI is invested in the safety and security of such models.

Third, the integration of AI into the WMD sphere can also lower the threshold for the use of nuclear, biological or chemical weapons. Thus, all nuclear weapon states have begun to integrate AI into their nuclear command, control, communication and information (NC3I) infrastructure. The ability of AI models to analyse large chunks of data at unprecedented speeds can improve situational awareness and help warn, for example, of incoming nuclear attacks. But at the same time AI may also be used to optimise military strike options. Because of the lack of transparency around AI integration, fears that adversaries may be intent on conducting a disarming strike with AI assistance can increase, setting up a race to the bottom in nuclear decision-making.

In a crisis situation, overreliance on AI systems that are unreliable or working with faulty data may create additional problems. Data may be incomplete or may have been manipulated. AI models themselves are not objective. These problems are structural and thus not easily fixed. A UNIDIR study, for example, found that gender norms and bias can be introduced into machine learning throughout its life cycle. Another inherent risk is that AI systems designed and trained for military uses are biased towards war-fighting rather than war avoidance, which would make de-escalation in a nuclear crisis much more difficult.

The consensus among nuclear weapons states that a human always has to stay in the loop before a nuclear weapon is launched is important, but it remains a problem that understandings of human control may differ significantly.

It would be a fool's errand to try to slow down AI's development. But we need to decelerate AI's convergence with the research, development, production, and military planning related to WMD. It must also be possible to prevent spillover from AI's integration into the conventional military sphere to applications leading to nuclear, biological, and chemical weapons use.

Such deceleration and channelling strategies can build on some universal norms and prohibitions. But they will also have to be tailored to the specific regulative frameworks, norms and patterns regulating nuclear, biological and chemical weapons. The zero draft of the Pact for the Future, to be adopted at the September 2024 Summit of the Future, points in the right direction by suggesting a commitment by the international community to developing norms, rules and principles on the design, development and use of military applications of artificial intelligence through a multilateral process, while also ensuring engagement with stakeholders from industry, academia, civil society and other sectors.

Fortunately, efforts to improve AI governance on WMD do not need to start from scratch. At the global level, the prohibitions of biological and chemical weapons enshrined in the Biological and Chemical Weapons Conventions are all-encompassing: the general purpose criterion prohibits all chemical and biological agents that are not used peacefully, whether AI comes into play or not. But AI may test these prohibitions in various ways, including by merging biotechnology and chemistry seamlessly with other novel technologies. It is, therefore, essential that the OPCW monitors these developments closely.

International Humanitarian Law (IHL) implicitly establishes limits on the military application of AI by prohibiting the indiscriminate and disproportionate use of force in war. The Group of Governmental Experts (GGE) on Lethal Autonomous Weapons under the Convention on Certain Conventional Weapons (CCW) is doing important work by attempting to spell out what the IHL requirements mean for weapons that act without human control. These discussions will, mutatis mutandis, also be relevant for any nuclear, biological or chemical weapons that would be reliant on AI functionalities that reduce human control.

Shared concerns around the risks of AI and WMD have triggered a range of UN-based initiatives to promote norms around responsible use. The legal, ethical and humanitarian questions raised at the April 2024 Vienna Conference on Autonomous Weapons Systems are likely to inform debates and decisions around limits on AI integration into WMD development and employment, and particularly nuclear weapons use. After all, similar pressures to shorten decision times and improve the autonomy of weapons systems apply to nuclear as well as conventional weapons.

From a regulatory point of view, it is advantageous that the market for AI-related products is still highly concentrated around a few big players. It is positive that some of the countries with the largest AI companies are also investing in the development of norms around responsible use of AI. It is obvious that these companies have agency and, in some cases, probably more influence on politics than small states.

The Bletchley Declaration, adopted at the November 2023 AI Safety Summit in the UK, for example, highlighted the particular safety risks that arise at the frontier of AI. These include risks arising from potential intentional misuse or from unintended issues of control relating to alignment with human intent. The summits on Responsible Artificial Intelligence in the Military Domain (REAIM) are another effort at coalition-building around military AI that could help to establish the rules of the game.

The Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, agreed on in Washington in September 2023, confirmed important principles that also apply to the WMD sphere, including the applicability of international law and the need to implement appropriate safeguards to mitigate risks of failures in military AI capabilities. One step in this direction would be for the nuclear weapon states to conduct so-called failsafe reviews that would aim to comprehensively evaluate how control of nuclear weapons can be ensured at all times, even when AI-based systems are incorporated.

All such efforts could and should be building blocks that can be incorporated into a comprehensive governance approach. Yet the risk of AI increasing the likelihood of nuclear weapons use is the most pressing. Artificial intelligence is not the only emerging and disruptive technology affecting international security: space warfare, cyber, hypersonic weapons and quantum technologies all affect nuclear stability. It is, therefore, particularly important that nuclear weapon states build among themselves a better understanding of, and confidence about, the limits of AI integration into NC3I.

An understanding between China and the United States on guardrails against military misuse of AI would be the single most important measure to slow down the AI race. The fact that Presidents Xi Jinping and Joe Biden agreed in November 2023 that China and the United States have broad common interests, including on artificial intelligence, and committed to intensify consultations on that and other issues, was a much-needed sign of hope. Since then, however, China has hesitated to actually engage in such talks.

Meanwhile, relevant nations can lead by example when considering the integration of AI into the WMD realm. This concerns, first of all, the nuclear weapon states, which can demonstrate responsible behaviour by pledging, for example, that they would not use AI to interfere with the nuclear command, control and communication systems of their adversaries. All states should also practice maximum transparency when conducting experiments around the use of AI for biodefense activities, because such activities can easily be mistaken for offensive work. Finally, the German government's pioneering role in looking at the impact of new and emerging technologies on arms control has to be recognised. Its Rethinking Arms Control conferences, including the most recent conference on AI and WMD on June 28 in Berlin with key contributors such as the Director General of the OPCW, are particularly important. Such meetings can systematically and consistently investigate the AI-WMD interplay in a dialogue between experts and practitioners. If they can agree on what guardrails and speed bumps are needed, an important step toward effective governance of AI in the WMD sphere has been taken.

The opinions articulated above represent the views of the author(s) and do not necessarily reflect the position of the European Leadership Network or any of its members. The ELN's aim is to encourage debates that will help develop Europe's capacity to address the pressing foreign, defence, and security policy challenges of our time.


AI goes nuclear: INL expo showcases machine learning and artificial intelligence – East Idaho News

IDAHO FALLS – Artificial intelligence is transforming the way the nuclear industry works, and Idaho National Laboratory is leading the way, developing applications to streamline processes while improving safety at nuclear power plants.

INL scientists showcased 15 projects on Artificial Intelligence (AI) and Machine Learning at an expo at the Energy Innovation Laboratory in Idaho Falls on Tuesday.

"We're here to learn about some of the incredible science happening related to artificial intelligence and machine learning," said Katya Le Blanc, human factors scientist at Idaho National Laboratory. "We're also developing technologies that can eventually be deployed by the nuclear industry and be used by nuclear utilities."

According to a lab news release, computers that mimic cognitive functions and apply advanced algorithms can help researchers analyze and solve a variety of complex technical challenges. This new approach helps everything from improving materials design for advanced reactors to enhancing nuclear power plant control rooms so they become more effective and efficient.

Technologies on display at the conference included RAVEN, a Risk Analysis Virtual ENvironment that provides an open-source, multi-purpose framework for machine learning, artificial intelligence and digital twinning.

One machine learning technology, called Inspection Portal, is part of the Light Water Reactor Sustainability Program; it analyzes and aggregates data from human-submitted reports to identify trends and help optimize the operation of nuclear power plants.

The program's machine learning model is trained on millions of records from across the industry.

"We can do things here at the INL that no one else can do," said Brian Wilcken, nuclear science and technology data scientist. "Utility companies try to do things like this. They can't touch it. We have so much data we can train extremely powerful models."

Other AI systems provide image detection to read gauges, pinpoint anomalies and determine if a valve has been turned, if a screw is corroded or if a fire breaks out in a nuclear plant. These advancements could reduce the need for personnel to perform menial checks at a nuclear power plant and free up manpower for higher-level work and applications.

Additional tools evaluate "the economics of different energy mixes and how to analyze the best cost-benefit and other factors (such as) the reliability associated with energy systems," Le Blanc said.

These systems can determine the proper output needed from a nuclear power plant, a hydro plant and a solar facility in order to meet people's demand for electricity when they need it, while optimizing economic benefit as well, she said.
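The kind of optimization Le Blanc describes can be sketched as a toy "merit-order" dispatch: meet demand by drawing on the cheapest available sources first. This is only an illustration of the idea; the plant names, marginal costs and capacities below are invented, and INL's actual tools solve far richer optimization problems.

```python
# Toy economic dispatch: fill demand from the cheapest sources first.
# All numbers here are made-up illustrations, not real plant data.
plants = [  # (name, marginal cost in $/MWh, capacity in MW)
    ("solar", 0, 300),
    ("nuclear", 10, 1000),
    ("hydro", 5, 400),
]

def dispatch(demand_mw):
    """Return how much each plant should produce to meet demand cheaply."""
    schedule = {}
    remaining = demand_mw
    # Sort by marginal cost so the cheapest plants run first.
    for name, cost, capacity in sorted(plants, key=lambda p: p[1]):
        take = min(capacity, remaining)
        schedule[name] = take
        remaining -= take
    if remaining > 0:
        raise ValueError("demand exceeds total capacity")
    return schedule

print(dispatch(1200))  # -> {'solar': 300, 'hydro': 400, 'nuclear': 500}
```

Real dispatch tools add constraints this sketch ignores, such as ramp rates, transmission limits and the hour-by-hour variability of solar output.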

Some of the applications utilize existing AI programs, while others were created in-house at Idaho National Laboratory.

"Sometimes, it requires that you develop it. There's not a model that can do what you need it to do, but sometimes there's something that already exists that you can adapt," Le Blanc said. "It varies depending on (the situation), but there's no reason to start from scratch."

The Artificial Intelligence and Machine Learning Expo is in its second year.

In the future, organizers hope to expand and collaborate with other experts in the AI space to further share the research occurring at Idaho National Laboratory.

"I read a lot of papers inside scientific journals related to AI," Le Blanc said. "Seeing how this stuff actually works, being able to mess around with it, play with it, talk to the researchers, see what they're doing and get direct access and ask them questions – that's just exciting!"


The algorithm at the service of humankind: Communicating in the age of AI – Vatican News – English

Experts gather in the Vatican to discuss the ethical and anthropological implications of Artificial Intelligence, emphasising the need for regulation and responsible use of data.

By Michele Raviart and Francesca Merlo

After Pope Francis' message for World Communications Day was released last month, a conference entitled "The algorithm at the service of humankind. Communicating in the age of artificial intelligence" took place in the Vatican on Thursday, 27 June, gathering experts in the fields of AI and communications to compare ideas and discuss concerns on the issue.

Within the Casina Pio IV walls, the conference addressed questions such as those posed by Dr Paolo Ruffini, Prefect of the Dicastery for Communication, as he gave the opening remarks. He asked: "Artificial intelligence translates everything into calculation, but can we reduce everything to a statistical probability? How can we protect professionals and workers in the media from the arrival of AI and maintain the right to inform and be informed on the basis of truth, freedom and responsibility? How can we make large platforms that invest in generative AI interoperable so that they do not reduce humans to a reservoir of data to be exploited?"

Welcoming the participants along with Dr Paolo Ruffini was Fr Lucio Ruiz, Secretary of the Dicastery for Communication, who highlighted some of what Pope Francis has said concerning the theme of Artificial Intelligence. He emphasized that the Pope's interventions on Artificial Intelligence demonstrate the Church's "intuition" in walking with humanity through its culture and historical changes. He explained that this was the case 500 years ago when the first Vatican printing press was created - shortly after Gutenberg's discovery. Likewise, it was proved with the construction of Vatican Radio by the inventor of the radio himself, Guglielmo Marconi, in 1931. And another example, he added, is the creation of the vatican.va portal in 1994, when the web had only just begun to appear on people's computers.

The next person to speak was Father Paolo Benanti, professor of ethics and bioethics at the Pontifical Gregorian University, president of the AI Commission for Information, and member of the United Nations AI Committee. He opened the first of two panel discussions on "The Ethics of Algorithms and the Challenges for Communication." Fr Benanti began by highlighting the primary essence of computers, which is to perform calculations. He recalled how the invention of transistors, made available by the United States to its allies after World War II, changed reality. Early computer prototypes contributed to the development of the atomic bomb and the breaking of secret codes used by Nazi Germany. From that centralised vision of technology, and through the revolution led by Silicon Valley pioneers in the 1970s, he noted that we eventually arrived at a "personal" and intimate computation, first through PCs and then smartphones. With ChatGPT and its implementation in Apple and Microsoft phone interfaces, he emphasised that we still do not know how much of the computational power will be personal and how much will be centralised in the cloud. Therefore, he stressed that regulation is necessary, as the European Union has done, to manage artificial intelligence in the same way traffic laws have been established for cars.

Also speaking at the conference was Nunzia Ciardi, Deputy Director General of the National Cybersecurity Agency. She said that Artificial Intelligence is not an impressive technological leap in itself. What makes its implementation something that will have a decisive anthropological impact on reality, she explained, is the fact that it relies on an enormous amount of data collected "brutally" over the decades by companies through free services or applications that have become essential for us. Ciardi highlighted other aspects, such as the use of the English language to train algorithms with all the values and cultural expressions that one language carries compared to another and the risk of increasingly struggling to decode complex messages, which can be dangerous in a democracy.

Professor Mario Rasetti, Emeritus of Theoretical Physics at the Polytechnic University of Turin and President of the Scientific Board of CENTAI, also spoke at the conference. He commented that "knowledge is becoming private property," recounting the experience of OpenAI, which started as a non-profit organisation of scientists and was acquired by Microsoft for $10 billion. Rasetti added that we must make Artificial Intelligence a science with rigorous definitions because, in its current state, it presents itself as a probabilistic tool, which can hardly measure intelligence, truth, and causality.


Artificial intelligence has spread lies about my good name, and I’m here to settle the score – Kansas Reflector

Artificial intelligence lies.

Everyone knows this by now, of course. Programs such as ChatGPT and Google's AI overviews routinely generate nonsense when queried by users. Tech enthusiasts call these mistakes hallucinations, as though AI just needs to sober up and come to its senses. I don't see it that way.

Because AI has started fibbing about me and my family.

Last week, my husband received a spam email from a salesman. It included a history of our last name, as follows:

The last name Wirestone is believed to have originated in Germany. It is a locational surname, meaning it was likely given to individuals based on where they lived. The name Wirestone may have derived from a place name that no longer exists or has changed over time.

The surname Wirestone first appeared in records in the late 19th and early 20th centuries in the United States, with immigrants from Germany bringing the name over. Some variations of the surname include Wierstien, Wierstone, and Wierston.

Today, the surname Wirestone is relatively rare and is primarily found in the United States. Individuals with this last name can be found in various states across the country, but they are most concentrated in the Midwest region.

The only problem with this account is that it is entirely incorrect.

I know this firsthand because the last name Wirestone didn't exist before 2010, when my husband and I made it up. We took the letters from our original last names and arranged them to create a new one. We also considered Cointower and McWren as options.

At the time, we researched to make sure that no one else had the last name of Wirestone. No one did. A marketing company bore the name Wire Stone, but that seemed sufficiently separate for our purposes. We lived in New Hampshire at the time, and the state had just legalized same-sex marriage. We wanted to share a single last name, and we wanted to share that last name with our son.

I even wrote a column mentioning this back in 2013! (Yes, I've been churning out copy for a long time.)

But when it comes to large language models, the facts don't matter.

The email my husband received looked like the work of ChatGPT to me, so I headed over and put that AI through its paces. Sure enough, it generated loads of lies about my last name, all of them along the same lines. Here's a paragraph from one, this time including a linguistic breakdown:

The last name Wirestone is not as common as some others, but it does have a history rooted in Germanic origins. Wire likely comes from the Middle High German word wir, meaning wire or metal, indicating a possible occupational origin for individuals who worked with wire or metal. Stone suggests a connection to a place or geographical feature, possibly indicating someone who lived near a notable stone or rocky area.

Sounds authoritative! Also, completely false.

You might ask how AI generates something so completely bananas. It's because AI can't tell the difference between true and false. Instead, a complex computer program plays probabilistic language guessing games, betting on what words are most likely to follow other words. If an AI program hasn't been trained on a subject (unusual last names, for instance), it can conjure up authoritative-seeming but false verbiage.
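The "guessing game" can be illustrated with a toy bigram model: count which word follows which in a corpus, then always emit the likeliest continuation. This is a counting trick, not ChatGPT's actual neural architecture, and the tiny corpus below is invented; but it shows how a pure next-word predictor produces fluent sentences with no notion of whether they are true.

```python
from collections import Counter, defaultdict

# A made-up training corpus. Note the model will "learn" that surnames
# tend to have German origins, whether or not that is true of any
# particular name.
corpus = (
    "the surname stone has german origins . "
    "the surname wire has german origins . "
    "the surname miller has english origins ."
).split()

# Count, for each word, which words follow it and how often.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(word, length=6):
    """Greedily extend a sentence with the most likely next word."""
    out = [word]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

print(generate("the"))  # -> "the surname stone has german origins ."
```

The model asserts a German origin with total fluency because "german" simply follows "has" most often in its data; truth never enters the calculation. Real LLMs operate on the same predict-the-next-token principle, just with neural networks over vastly larger corpora.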

ChatGPT later spawned a different etymology for our last name:

The surname Wirestone appears to have German origins. It is derived from the Old High German name Wiro, which means warrior or army, and stein, which means stone. Thus, the surname Wirestone likely originated as a combination of these elements, possibly indicating someone who was strong like a stone in battle or had characteristics associated with a warrior.

To summarize: My ancestors were either metalworkers who lived near rocky outcroppings or toughened fighters.

You might dismiss this all as mere silliness. I would agree with you, except that leaders have decided over the past year that AI will transform the global economy.

Google, which has become the default source of definitive world knowledge, began employing AI in its search results. Users soon reported that Google was telling them to smoke cigarettes while pregnant, add glue to their home-baked pizza, sprinkle used antifreeze on their lawns, and boil mint in order to cure their appendicitis, according to Slate. The company has since rolled back some of the changes.

Facebook has tacked gaudy AI features across the platform. In the meantime, it managed to block Kansas Reflector and remove every link we had ever posted. Users who attempt to share our stories still report problems doing so, even though we were assured in April by spokesman Andy Stone that the problem had been corrected.

All the while, OpenAI, the company behind ChatGPT, continues to raise money and investor expectations ever higher about the future of its technology.

Yet we're not living in the future. We're living in the now, and AI has massively underperformed in every instance where users asked it to perform accurately and reliably. Writing blender instructions in the style of the King James Bible is a fun party trick. But folks turn to the internet to answer real, pressing questions about their world.

I can tell you firsthand, from information I know personally, that the technology does not deliver.

Ten years ago, if you searched Google for information about my last name, you would find links to my work, the marketing company and the column I had written. You would be able to figure out the truth of the situation.

Now, that column has fallen prey to link rot. Those curious about Wirestone may well turn to ChatGPT, as students have done since the technology made its debut. They will be fed lies. The experience of a curious person online has therefore degraded, not improved. Perhaps AI technology will improve in the months and years to come. Perhaps not.

In the meantime, treat the output of opaque AI systems with extreme skepticism. Follow actual news reported and written and edited by actual humans. Visit Kansas Reflector's website. Subscribe to our newsletter.

Focus on reality, and leave the hallucinations behind.

Clay Wirestone is Kansas Reflector's opinion editor. Through its opinion section, Kansas Reflector works to amplify the voices of people who are affected by public policies or excluded from public debate. Find information, including how to submit your own commentary, here.


Hey, Artificial Intelligence Fans! 3 Long-Term AI Stocks to Load Up on Now. – InvestorPlace

Over the past two years, artificial intelligence (AI) has been the key trend that many investors have focused on. The amount of technological innovation with AI has caused waves among users everywhere. Most have tried out ChatGPT or other generative AI models and come to the same conclusion: AI is smart and is certainly going to be a resource we all utilize moving forward.

Questions around how AI will be used aside, certain companies are uniquely poised to benefit from the surge in AI application growth over time. These three long-term AI stocks may not be surprising to many. I'm focusing on the best of the best in this sector in this piece. However, it's worth noting that quality matters in this space. In my view, these are the three companies with sustainable AI tailwinds I think the market is right to focus on right now.

Source: Piotr Swat / Shutterstock.com

Founded over three decades ago, semiconductor giant Nvidia (NASDAQ:NVDA) is certainly a company many investors have focused on for a variety of reasons. This chip juggernaut has seen previous surges tied to growth in gaming, crypto and a range of other technological advancements. Computing power demand has risen over time in a relatively exponential fashion, with different drivers each time.

Thus, investors shouldn't be surprised to see the company pop on a surge in interest around AI. This catalyst is as real as many of the company's previous catalysts, but many think there's a much longer runway to this particular technology (and for good reason).

On Tuesday, June 18, Nvidia replaced Microsoft (NASDAQ:MSFT) as the world's most valuable company. Shares rose after the news broke out, rising 3.6%. Currently, Nvidia has a market cap of $2.9 trillion, surpassing both Microsoft and Apple (NASDAQ:AAPL).

Over the past year, NVDA stock has seen a 178% increase due to its successful Q1 earnings report last May. Impressively, this stock has also seen a nine-fold increase since 2022, and its most recent rally can be almost entirely tied to the rise of generative AI. To add more positive news, Nvidia's 10-for-1 stock split improved its chances of joining the Dow soon.

Source: T. Schneider / Shutterstock.com

Super Micro Computer (NASDAQ:SMCI) shares rose 10% on Thursday, driven by strong Broadcom (NASDAQ:AVGO) earnings, positive Oracle (NYSE:ORCL) news and AI stock momentum. With surging AI demand driving demand for server hardware and solutions, the server specialist's stock has surged nearly 200% this year.

Super Micro Computer's rack-scale systems, integrating power, storage, cooling and software, support high-performance Nvidia and AMD (NASDAQ:AMD) AI chips. This demand drove its sales to $3.9 billion in the last fiscal quarter, a 200% increase. Earnings per share surged 308% to $6.65, benefiting from the growing need for complex data processing.

Moreover, the company has expanded its manufacturing capabilities globally, including in San Jose, Taiwan and Malaysia. It aims to increase monthly rack production to 5,000, up from 4,000 last year and 3,000 in 2022. Now, with a strong focus on AI data centers and its 5S Strategy, Supermicro forecasts $25 billion in sales over the next few years, up from its fiscal 2024 forecast of $14.7 billion.

Source: rafapress / Shutterstock.com

Another AI stock investors may want to consider is Palantir Technologies (NYSE:PLTR), founded in 2003 by Peter Thiel and Alex Karp. While the company has existed for years, it only went public in 2020. Since then, however, the stock has surged 138%. The company's momentum accelerated from early 2023 with the launch of its Artificial Intelligence Platform (AIP), integrated into platforms like Foundry and Gotham.

PLTR stock has been on the rise, surging during June 20's premarket session. That was tied to news that Palantir secured an exclusive deal to supply data management solutions for the Starlab commercial space station, led by Voyager Space, Airbus SE (OTCMKTS:EADSY), Mitsubishi (OTCMKTS:MSBHF) and MDA Space. CEO Alexander Karp expressed excitement about enhancing global intelligence capabilities on Earth and in space.

Starlab Space and Palantir utilized digital twins and AI to optimize operations. Palantir also secured a $19 million, two-year contract from ARPA-H for critical data infrastructure. Assuming more deals come down the pike, this is an AI stock with some pretty clear catalysts investors are right to focus on right now.

On the date of publication, Chris MacDonald did not hold (either directly or indirectly) any positions in the securities mentioned in this article. The opinions expressed in this article are those of the writer, subject to the InvestorPlace.com Publishing Guidelines.

Chris MacDonald's love for investing led him to pursue an MBA in Finance and take on a number of management roles in corporate finance and venture capital over the past 15 years. His experience as a financial analyst in the past, coupled with his fervor for finding undervalued growth opportunities, contribute to his conservative, long-term investing perspective.


Apple AI Could Produce ‘Really Really Good’ Version of Siri – PYMNTS.com

What if Apple's voice assistant Siri was really, really, really good?

That question is at the heart of much of the tech giant's artificial intelligence (AI) research, according to a report Sunday (May 5) by The Verge reviewing those efforts.

For example, a team of Apple researchers has been trying to develop a way to use Siri without having to use a wake word.

Rather than waiting for the user to say "Hey Siri" or "Siri," the voice assistant would be able to intuit whether someone was speaking to it.

"This problem is significantly more challenging than voice trigger detection," the researchers acknowledged, per the report, "since there might not be a leading trigger phrase that marks the beginning of a voice command."

The Verge report added that this could be why another research team came up with a system to more accurately detect wake words. Another paper trained a model with better understanding of rare words, which are in many cases not well understood by assistants.

Apple is also working on ways to make sure Siri understands what it hears. For example, the report said, the company developed a system called STEER (Semantic Turn Extension-Expansion Recognition) that is designed to improve users' back-and-forth communication with an AI assistant by trying to determine when the user is asking a follow-up question and when they are asking a new one.

The report comes at a time when Apple appears to be taking, as PYMNTS wrote last week, a measured approach to its AI efforts.

Among its projects is the ReALM (Reference Resolution As Language Modeling) system, which simplifies the complex process of understanding screen-based visual references into a language modeling task using large language models.

"On the one hand, if we have better, faster customer experience, there's a lot of chatbots that just make customers angry," AI researcher Dan Faggella, who is not affiliated with Apple, said in an interview with PYMNTS. "But if in the future, we have AI systems that can helpfully and politely tackle the questions that are really quick and simple to tackle and can improve customer experience, it is quite likely to translate to loyalty and sales."

The voice tech sector is on the rise. According to research by PYMNTS Intelligence, there's a notable interest among consumers in this technology, with more than half (54%) saying they look forward to using it more in the future due to its rapidity.

For all PYMNTS AI coverage, subscribe to the daily AI Newsletter.


See how Nvidia became one of the world’s most valuable companies – The Washington Post

Chipmaker Nvidia surpassed Microsoft for the first time this month to become the world's most valuable company, with a market capitalization of $3.3 trillion. Though its reign on the top of the charts was brief, it crowned a rapid climb for the company, which was little known outside tech circles just two years ago.

For most of its three decades of existence, Nvidia was mostly a niche player, making computer chips for video games, but the company's central position in the artificial intelligence boom has led to a spectacular rise.

Nvidia sells the graphics processing units (GPUs) and the software crucial to training and running the AI algorithms that power chatbots and image generators.

Here's how Nvidia became one of the world's most valuable companies.

Nvidia went public in January 1999 at $12 a share, six years after its founding and a year before the dot-com crash would wipe out much of the stock market value of the burgeoning internet industry. The company was building a reputation for making some of the best chips for video games, and in 2001, it won a contract to supply GPUs for Microsoft's Xbox gaming console.

Nvidia had long been traded by professional investment firms, but during the pandemic, millions of people with day jobs got into stock investing through apps such as Robinhood and online forums like Wall Street Bets. Gamers turned retail investors recognized Nvidia as the company that helped power the improvement in video game graphics over the past two decades.

In 2021, Facebook rebranded itself as Meta and brought renewed interest in the concept of the metaverse, a future where people spend much of their time plugged into a virtual world. Nvidia chief executive Jensen Huang jumped on the idea and said his company's chips would power the future world of the metaverse. He even used a digital clone of himself speaking at Nvidia's annual conference to showcase the tech.

Meta's grand plans for the metaverse have yet to pan out, but at the time, some investors were betting it was the next big thing. On Nov. 4, 2021, financial analysts from Wells Fargo published a report detailing how Nvidia was well positioned to benefit from the prophesied metaverse boom, and the stock jumped 12 percent.

At the end of 2022, OpenAI, an artificial intelligence lab founded as a nonprofit in 2015, unveiled ChatGPT. It was more capable than any chatbot that regular people had interacted with yet. The tech industry was enthralled, and within months, Microsoft had invested billions into OpenAI. The AI arms race was on.

Nvidia's chips and software are crucial to building the large language models that serve as the underlying technology in ChatGPT and image generators like OpenAI's DALL-E 3, which launched in 2023.

Huang told investors on Feb. 22, 2023, that the company stood to benefit from the AI boom, which was quickly gaining steam. Wall Street was convinced, and the stock shot up 14 percent, giving the company a total value of $582.3 billion.

Nvidia's stock kept climbing. In May 2023, Nvidia reported earnings showing, for the first time with real numbers, that it was a prime beneficiary of the AI frenzy. The stock jumped 25 percent, and the company's valuation briefly crossed $1 trillion, one of only a handful of companies ever to reach that mark.

As the company reported higher revenue numbers, more investors piled in, pushing the stock up until it ended the year worth $1.2 trillion. Because many AI start-ups and companies, including OpenAI, are not public, there were few options for regular people to invest in the AI boom. Many bought Nvidia stock.

In the first quarter of 2024, Nvidias revenue rose to $26 billion from only $7.2 billion in the same period a year before.
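A quick back-of-the-envelope check of the growth implied by those two figures (this snippet is illustrative only; the dollar amounts come from the article, everything else is simple arithmetic):

```python
# Year-over-year revenue growth implied by the figures above:
# $7.2 billion (Q1 2023) to $26 billion (Q1 2024).
prev_revenue = 7.2   # billions of dollars, Q1 2023
curr_revenue = 26.0  # billions of dollars, Q1 2024

growth = (curr_revenue - prev_revenue) / prev_revenue
print(f"Revenue grew {growth:.0%} year over year")  # about 261%
```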

AI start-ups, companies trying to add AI to their products and venture capital firms are all trying to get their hands on Nvidia's chips, driving up their price. But the biggest buyers are Big Tech companies (Microsoft, Amazon, Meta and Google) that need the chips to build and train their own AI models.

Earlier this year, Microsoft, Meta and Google told their investors they would increase spending on AI investments. Google alone plans to spend at least $12 billion every four months this year. Much of that money is going straight into Nvidia's coffers.

See how Nvidia became one of the world's most valuable companies - The Washington Post

Warren Buffett Warns of AI Use in Scams – PYMNTS.com

Berkshire Hathaway's Warren Buffett has compared the development of artificial intelligence (AI) to the atomic bomb.

Just like that invention, the multibillionaire said Saturday (May 4) at Berkshire's annual meeting, AI could produce "disastrous results" for civilization.

"We let a genie out of the bottle when we developed nuclear weapons," said Buffett, whose comments were reported by The Wall Street Journal (WSJ). "AI is somewhat similar; it's part way out of the bottle."

While Buffett acknowledged his understanding of AI was limited, he argued he still had cause for concern, describing a recent sighting of a deepfake of his voice and image. This led him to believe AI will allow scammers to pull off their crimes more effectively.

"If I was interested in investing in scamming, it's going to be the growth industry of all time," he said.

The WSJ report noted that Buffett's comments come amid a debate among business leaders about how AI will impact society. And while not everyone compares the technology to the atomic bomb, there are those who worry AI will wipe out white-collar jobs.

Others see the upside to AI. JPMorgan Chase CEO Jamie Dimon, for instance, has said AI could invent cures for cancer or allow more people in future generations to live to 100 years old.

"It will create jobs. It will eliminate some jobs. It will make everyone more productive," Dimon said in a recent WSJ interview.

AI is also transforming how companies train and upskill their employees, PYMNTS wrote last week, providing personalized learning experiences that can cut costs and improve efficiency.

The global AI-in-education market is projected to expand from $3.6 billion in 2023 to around $73.7 billion by 2033, according to a report from Market.US. Despite this impressive forecast, online education company Chegg, which has invested in AI tools, recently saw its stock decline, underscoring the sector's volatility.
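For a sense of scale, the Market.US forecast implies a steep compound annual growth rate. This sketch computes it from the two figures cited above; the dollar amounts are from the report, the arithmetic is standard CAGR:

```python
# Implied compound annual growth rate (CAGR) of the AI-in-education
# market: $3.6B in 2023 growing to a projected $73.7B by 2033.
start_value = 3.6   # billions of dollars, 2023
end_value = 73.7    # billions of dollars, 2033 (projected)
years = 10          # 2023 to 2033

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 35% per year
```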

Generative AI can provide "a level of personalization in learning that is nearly impossible to achieve without this advanced technology," Ryan Lufkin, global vice president of strategy at the education technology company Instructure, told PYMNTS.

"This means we can quickly assess what an employee knows and teach directly to their knowledge gaps, reducing the amount of time spent learning and improving time-to-productivity."

HHS shares its Plan for Promoting Responsible Use of Artificial Intelligence in Automated and Algorithmic Systems by … – HHS.gov

Today, the U.S. Department of Health and Human Services (HHS) publicly shared its plan for promoting responsible use of artificial intelligence (AI) in automated and algorithmic systems by state, local, tribal, and territorial governments in the administration of public benefits. Recent advances in the availability of powerful AI in automated or algorithmic systems open up significant opportunities to enhance public benefits program administration, better meeting the needs of recipients and improving the efficiency and effectiveness of those programs.

HHS, in alignment with OMB Memorandum M-24-10, is committed to strengthening governance, advancing responsible innovation, and managing risks in the use of AI-enabled automated or algorithmic systems. The plan provides more detail about how the rights-impacting and/or safety-impacting risk framework established in OMB Memorandum M-24-10 applies to public benefits delivery, points to existing guidance that applies to AI-enabled systems, and lays out topics on which HHS is considering issuing future guidance.
