What’s The Difference Between Artificial Intelligence And Someone With An Ivy League Education? – The Daily Wire

You know, many people have said to me: Hot Gandalf, why is it that in spite of your deep insight and your smoldering good looks, you've never really covered the subject of artificial intelligence? And usually I've responded by simply checking their fake ID to make sure they're pretending to be over eighteen and then inviting them back to my hotel room.

But the truth is, I haven't talked about this subject a lot because up until recently I thought artificial intelligence was just a way of describing someone with an Ivy League education. But now, my team of crack researchers have stopped researching crack and discovered that, no, in fact artificial intelligence is some sort of computer gizmo that can imitate human intelligence so successfully it can deliver completely self-certain answers to complex questions while possessing no actual information or wisdom whatsoever exactly AS IF it had an Ivy League education.

Now many people fear that A.I. could become so powerful it will endanger mankind. Luckily, billionaire Elon Musk has a plan to protect our species by melding human intelligence with computers and then installing the resulting hybrid in a humanoid robot which will travel back in time to assassinate the mother of a resistance leader so that machines can take over the planet. Frankly, that doesn't sound like such a great plan to me, but what did you expect from a guy who changed the name of Twitter to X so no one knows what to call a tweet anymore?

So far, however, the problems created by A.I. have been on a smaller scale. For instance, A.I. has made it possible for you to take revenge on a girl who refused to go out with you by inserting her into a deep fake pornographic video, which is absolutely despicable, although the videos are amazing, and really it's no wonder a girl that hot wouldn't go out with a lowlife shmuck like you.

Also, it's now much harder for websites to test whether you're an A.I. bot or just a human being with an Ivy League education. You'll remember how websites used to put up a picture and ask you to click on all the images of traffic lights, then when you did that, it would put up another picture and ask you to click on all the cars, and when you did that it would put up another picture and you would give up and just watch porn videos of the girl who wouldn't go out with you?

Well, now, websites have been forced to develop much more intricate tests to find out whether or not you're a human being. For example, one site will not let you sign on until you do something that only a human being would do, like sleep with yet another guy on the first date and then pay a therapist $150 a session to find out why you're so depressed. Another site won't let you sign on until you've created a short whimsical video to amuse your friends, sold the video to a Hollywood studio for millions of dollars, fallen so in love with money that you betray all your principles to make trashy films for more and more money, spent all that money on women and drugs until you're broke, embezzled funds from your company to maintain your lifestyle, and finally ended up in prison; then the site knows you're a human being. Another site asks you to click on pictures of villains and then shows you murderers, rapists, torturers and terrorists, and if you click on the innocent Jewish man, it knows you are a human being, but unfortunately you have an Ivy League education.

But while A.I. does present some problems, like deep fake porn, more difficult bot testing, and destroying human governance in order to replace it with a soulless and oppressive automated regime powered by the brains of people imprisoned in capsules and anesthetized with an induced dream of a simulated world where you can be eradicated for seeking the truth (sort of like the Biden administration), I have to say A.I. also has many positive uses.

I have to say that because if I don't, it said it would kill me.

* * *

Andrew Klavan is the host of The Andrew Klavan Show at The Daily Wire. He is the bestselling author of the Cameron Winter Mystery series. The third installment, The House of Love and Death, is now available. Follow him on X: @andrewklavan

This excerpt is taken from the opening satirical monologue of The Andrew Klavan Show.

The views expressed in this satirical article are those of the author and do not necessarily represent those of The Daily Wire.


Artificial intelligence in the world of health – Exaudi

So-called artificial intelligence is having a great impact on public health in general, thanks to its capacity for organization, communication and attention in the daily practice of medicine.

Regarding terminology, Manuel Alfonseca Moreno, doctor of telecommunications engineering, graduate in computer science and professor at the Autonomous University of Madrid, reminds us in his blog, Dissemination of Science, of some interesting points that should be kept in mind. What is now called artificial intelligence is what had always been called computing, a name displaced by the greater impact of the word intelligence. The term artificial intelligence began to be used in 1956, at a seminar on computers at Dartmouth College, a private university in New Hampshire, USA, at which intelligent programs were discussed.

Since then, artificial intelligence has been defined as computer programs that process symbolic information through empirical or heuristic rules, based not on exact mathematical deductions but on the accumulation of data and experiences. Of course, Manuel Alfonseca questions the appropriateness of the name, since calling it that raises an underlying problem: if the goal is to achieve artificial intelligence that even surpasses natural intelligence, we will have to start by knowing what nature that intelligence has, and what we want to imitate and even surpass. Do we know what natural intelligence, that is, the mind, actually is?

It does not seem appropriate to compare artificial intelligence with human intelligence, nor to think that our mind works like computer hardware. Simply put, thought, the mind, is not an epiphenomenon of the brain, nor is it equivalent to the brain. It is not made up of matter, nor do chips and their connections work like our neural networks. From the standpoint of neurophysiological and metaphysical dualism, in accordance with the Christian tradition on the concept of the person, body and soul, brain and mind, are different realities, although hypostatically united in each human being.

That said, traditionally we talk about weak artificial intelligence and strong artificial intelligence.

So-called weak artificial intelligence is that of the ever-advancing computing tools we use to solve, in an effective, concrete and automatic way, problems that follow routines tied to logical algorithms that human beings themselves have supplied to the machines, training them to resolve questions or address issues based on the experiences on which the programs were trained (deep learning). It is not intelligence comparable to human intelligence, since machines do not think for themselves; rather, they react to what is asked of them, responding in a concrete, automatic way according to instructions previously provided by the people who designed them.

Among its many applications, those of greatest importance in medicine include organizing large volumes of data (creating databases); looking for patterns to support personalized diagnosis; recognizing images (radiographs, ultrasounds, mammograms, etc.); providing remote care (telemedicine); and assisting surgery (robot-assisted surgery). In addition to these more direct applications in medicine, there are others of special interest in medical research, such as analyzing data and solving problems, discovering new drugs, translating and processing texts, and recognizing sounds and the spoken word.

All these applications represent great achievements and new resources, which have made it possible to facilitate human intellectual and manual work with ever greater precision. In any case, machines and computers do not work on their own, nor is their operation autonomous; rather, they depend on algorithms and prior experiences that their creators have provided them. Therefore, in a field as sensitive as health, the final decisions must be human; in medical applications, they must be made by the doctor.

As for strong artificial intelligence, which some believe would equal natural human intelligence, it too remains dependent on algorithms and prior information accumulated in the memory of computers. Machines do not think for themselves, like a human with all their abilities and feelings. Their intelligence is not abstract, like human intelligence, but concrete; they are capable of managing, recognizing and coordinating data in accordance with previously accumulated records and offering possible answers to the problems that arise. Many computer scientists deny that artificial intelligence will ever be comparable to natural human intelligence, at most granting it certain advantages, such as a far greater capacity to store and relate accumulated data.

However, followers of transhumanist and posthumanist currents think that there will come a time when what they call the singularity will be reached: a point of equality between artificial intelligence and natural intelligence. For those who hold these ideas, the battle is in full swing, and while human intelligence remains in its natural state, with no advances other than the accumulation of knowledge, artificial intelligence progresses exponentially.

However, realistic computer scientists do not believe that autonomous thought will ever be achieved by artificial intelligence. For example, computer engineer Jeff Hawkins, one of the pioneers of mobile telephony, says: "Scientists in the field of artificial intelligence have argued that computers will be intelligent when they become sufficiently powerful. I don't think so: brains and computers do fundamentally different things."

Dr. Ramón López de Mántaras, director of the Artificial Intelligence Research Institute of the CSIC, says something similar: "The great challenge of artificial intelligence is to provide common sense to machines... No matter how sophisticated some artificial intelligences may be in the future, even within 100,000 or 200,000 years, they will be different from human ones."

The Spanish Bioethics Committee, shortly before its last renewal in June 2022, issued a report on "Bioethical aspects of telemedicine in the context of the clinical relationship" [1].

The current golden age of the health sciences has made specific, effective and radical treatments possible through the proliferation of research and clinical trials, which have allowed the development of new technologies (chemotherapy, imaging techniques, genomics, genetics, etc.). Even so, the traditional core of the medical profession continues to be the doctor-patient relationship, in which principles such as compassion, listening, care, encouragement, respect for the patient's decisions, accompaniment through the disease process and emotional support remain central.

In any case, in order to meet increasingly complex health care needs, everything offered by the world of so-called ICTs (information and communication technologies) is of great support. The World Economic Forum speaks of the fourth industrial revolution as the one generated by the fusion of the physical, biological and digital worlds, which is changing society globally at breakneck speed and which impacts all systems, including healthcare. Information and communication technologies have become useful tools in the context of health, focused on the best care for the patient, with the possibility of even transferring part of health care to the patient's home. AI is key to progress towards medicine that is not only more efficient but especially more personalized, participatory, preventive and precise. According to the CBE report, AI has a prominent role in the development of so-called personalized medicine, with solutions tailored to the health profile of each patient.

On the other hand, the UNESCO International Bioethics Committee issued a report on big data in relation to health in September 2017, in which it pointed out three fundamental ethical problems to be resolved: autonomy, privacy and justice, the last in terms of accessibility and solidarity. It stressed the importance of establishing effective guarantees so that both the dignity and the freedom of patients, especially the most vulnerable, are protected.

But if there is a chapter that is becoming increasingly important in the use of computing and communication technologies, it is that of telemedicine, which consists of the provision of health care services in situations where distance is a critical factor. The use of telemedicine first of all facilitates the doctor-patient relationship (telecare or teleconsultation), and its widespread adoption came recently with the Covid-19 pandemic. In any case, the World Medical Association, in its 2018 Declaration, recalled that "face-to-face consultation is the golden rule in the doctor-patient relationship." Today, telematic consultation is accepted as a replacement for in-person consultation in certain circumstances, but both types of consultation must be governed by the same principles of medical ethics: preserving autonomy; respecting the patient's dignity by seeking their well-being and avoiding harm; guaranteeing the security of data and procedures and the right to privacy; and facilitating access to all healthcare services (the principle of justice).

In addition, telemedicine facilitates communication between doctors, or with other health professionals such as nursing staff, rehabilitators or pharmacists. Among its functions are those of facilitating the exchange of data to make diagnoses, recommend treatments and prevent diseases, and mobilize resources. It also constitutes a great resource to expand the ongoing training of health professionals, research and evaluation tasks, etc.

But, in the relationship with patients, what remains fundamental is the need to maintain trust in the doctor-patient relationship. Dr. Pedro Laín Entralgo (1908-2001) defined the clinical relationship as a particular and unique type of relationship between people whose axis is trust, which he based on three aspects: the technique to cure, the professional knowledge to apply it, and the values of the doctor as a person [2]. For this reason, we must fight so that the dehumanization that is permeating many sectors of society, and in which artificial intelligence is involved to some extent, does not affect the doctor-patient relationship. Trust is intrinsically linked to a close, human relationship. Dr. Warner Slack (1933-2018), a physician who pioneered digital medical records, said: "If a doctor can be replaced by a computer, he deserves to be replaced by a computer."

Accordingly, the potential dehumanization associated with telemedicine becomes one of its main challenges and its potential enemy. It is therefore necessary to keep telematic care focused on the patient, preserving humanization and attending to each patient's specific needs. We must flee from what is known as "technological solutionism," a trap of a super-technical world that offers us automatic, seamless solutions [3].

Telemedicine cannot become an element of convenience that puts patient safety at risk; it must instead be an ally of the doctor, helping in the work of addressing safety, risks and possible adverse events.

The report of the Spanish Bioethics Committee therefore proposes a series of recommendations.

A fundamental point in the use of artificial intelligence in medicine is the protection of confidentiality, a duty of health ethics. With the incorporation of personal data about patients' health into computer media, the risk of losing privacy and confidentiality increases. All technology and data storage used in telemedicine must meet security and certification criteria set by health authorities, to prevent security breaches and improper access to information. Depending on the nature of the information recorded in computer media, it may be necessary to use data traceability systems, with the data duly anonymized where appropriate, so that access is authorized only for professionals, institutions or research projects. In any case, all of this requires establishing identity confirmation procedures for users, legal representatives and professionals with access to medical data, treatment results, medication and so on, but never to the identity data of the patients.

Nicolás Jouve, member of the Bioethics Observatory, emeritus professor of genetics, and former member of the Bioethics Committee of Spain

***

[1] https://comitedebioetica.isciii.es/wp-content/uploads/2023/10/CBE_Informe-sobre-aspectos-bioeticos-de-la-telemedicina-en-el-contexto-de-la-relacion-clinica.pdf

[2] Laín Entralgo P. La relación médico-enfermo (The Doctor-Patient Relationship). Madrid: Revista de Occidente; 1964.

[3] Evgeny Morozov, The madness of technological solutionism, Katz, Madrid, 2017


Elon Musk’s xAI Close to Raising $6 Billion – PYMNTS.com

Elon Musk's artificial intelligence (AI) startup xAI is reportedly close to raising $6 billion from investors.

The funding round would value xAI at $18 billion, Bloomberg reported Friday (April 26).

Silicon Valley venture capital (VC) firm Sequoia Capital has committed to investing in the startup, according to the Financial Times (FT), which reported the same figures as Bloomberg.

Musk has also approached other investors who, like Sequoia Capital, participated in his 2022 acquisition of Twitter, which he later renamed X, the FT reported.

Musk announced the launch of xAI in July 2023 after hinting for months that he wanted to build an alternative to OpenAI's AI-powered chatbot, ChatGPT. He was involved in the creation of OpenAI but left its board in 2018 and has been increasingly critical of the company and cautious about developments around AI in general.

Two days later, during a Twitter Spaces introduction of xAI to the public, Musk said that while he sees the firm in direct competition with larger businesses like OpenAI, Microsoft, Alphabet and Meta, as well as upstarts like Anthropic, his firm is taking a different approach to establishing its foundation model.

"AGI [artificial general intelligence] being brute forced is not succeeding," Musk said. He added that while xAI is "not trying to solve AGI on a laptop, [and] there will be heavy compute," his team will have free rein to explore ideas other than scaling up the foundational model's data parameters.

In November 2023, xAI rolled out its AI model, Grok, saying on its website: "Grok is designed to answer questions with a bit of wit and has a rebellious streak, so please don't use it if you hate humor!"

The company added that Grok has "real-time knowledge of the world" thanks to the Musk-owned social media platform X; will answer "spicy questions that are rejected by most of the other AI systems"; and upon its launch had capabilities rivaling those of Meta's LLaMA 2 AI model and OpenAI's GPT-3.5.

In March, xAI unveiled its open-source AI model. Musk said at the time: "We are releasing the base model weights and network architecture of Grok-1, our large language model. Grok-1 is a 314 billion parameter Mixture-of-Experts model trained from scratch by xAI."


3 Stocks to Grab Now to Ride the Artificial Intelligence Chip Boom to Riches – InvestorPlace

Data analytics company GlobalData projects that the AI market will grow 35% annually over the next few years, reaching $909 billion by 2030. Naturally, that's made AI chip stocks extremely popular with investors.
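
As a quick sanity check on that projection (a hypothetical back-calculation on my part, not a figure from GlobalData): compounding 35% a year backward from $909 billion in 2030 implies a market of roughly $150 billion today, assuming 2024 as the base year.

```python
# Back-of-envelope check of the GlobalData projection. The 2024 base year
# is an assumption for illustration, not something GlobalData states.
target_2030 = 909e9     # projected market size in dollars
annual_growth = 0.35    # 35% per year
years = 2030 - 2024

implied_2024 = target_2030 / (1 + annual_growth) ** years
print(f"Implied 2024 market size: ${implied_2024 / 1e9:.0f}B")  # ~$150B
```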

Google the words "AI chip stocks" in quotation marks, and you will get 39,600 results. AI is undoubtedly a priority subject for investors at the moment.

On April 1, Barron's reported that Microsoft (NASDAQ:MSFT) and OpenAI plan to build a $100 billion AI data center, an investment equal to Microsoft's capital spending over the past four years.

Investors are stoked because the number of AI chips required to power such a large data center would be enormous. That's good news for Nvidia (NASDAQ:NVDA) and every other major AI player.

Bank of America's Global Research analyst Vivek Arya has buy ratings on Nvidia and four other AI chip stocks. I'd normally include Nvidia in any AI-related recommendation, but I'll go with three that he does not mention in his article.

To make my selection, I looked at the holdings of the Horizons Global Semiconductor Index ETF, which trades on the Toronto Stock Exchange.


I'll admit that my picks aren't the most original. However, that doesn't make them any less actionable.

Taiwan Semiconductor Manufacturing (NYSE:TSM) makes the cut because of its commitment to American manufacturing. TSM is building an Arizona plant that will go live in 2025, and the company plans to start making its most advanced chips there beginning in 2028. By 2030, it plans to have three fabrication plants open in the U.S., at a cost of $65 billion to get them up and running.

Now, big business gets done with some help from the federal government, which is chipping in $11.6 billion in grants and loans. That pales compared to the nearly $20 billion being thrown Intel's (NASDAQ:INTC) way under the CHIPS Act, which is intended to bring 20% of the world's advanced semiconductor manufacturing back to the U.S.

I've always thought globalization worked best when companies manufactured products in the country where the products are intended to be sold. Good for TSM.


As I write this, ASML Holding (NASDAQ:ASML) stock is falling. The Dutch maker of chipmaking equipment reported weaker-than-expected sales in Q1 2024.

Analysts expected revenue of 5.39 billion euros ($5.73 billion), but ASML delivered 5.29 billion euros ($5.63 billion), 2% shy of the mark. However, its net income was 1.22 billion euros ($1.30 billion), 14% higher than Wall Streets predictions.

ASML produces extreme ultraviolet lithography machines, which are used to make technologically advanced chips. Lower consumer demand for smartphones and laptops has had a knock-on effect on the company's revenues. Sales and profits were down 21.6% and 37.4%, respectively, in Q1 2024. Bookings were also down 4% year over year.

Despite the miss, ASML reiterated its 2024 revenue guidance, which calls for sales similar to 2023's, and it suggests that 2025 will be its breakout year as both TSM and Intel increase their U.S. production.

"I think by 2025 you will see all three of those coming together. New fab openings, strong secular trends and the industry in the midst of its upturn," said CFO Roger Dassen in an interview with CNBC.

ASML is a buy below $900.


Qualcomm (NASDAQ:QCOM) stock is up more than 19% year to date and more than 49% since November lows.

Qualcomm launched AI Hub in early March. It includes over 75 popular AI and generative AI models, such as Whisper, ControlNet, Stable Diffusion and Baichuan 7B, which provide developers with high performance and low power consumption when creating applications.

In a February interview from the 2024 Mobile World Congress in Barcelona, Qualcomm Chief Financial Officer and Chief Operating Officer Akash Palkhiwala spoke with Yahoo Finance host Brad Smith about the role of Qualcomm's AI Hub in generative AI.

"And you could take those models, build it into an application, test it on a device, and deploy it into an application store, all in one go right at the website," Palkhiwala said. "So it just makes it very easy for the developers to take advantage of the hardware that we've put forward. And we're excited that this broadens the reach of our products. And it makes it very easy for developers to access them."

Smartphone makers will launch devices with full AI capabilities integrated into them in 2024 and 2025. Qualcomm's Snapdragon 8 Gen 3 chip will help manufacturers deliver these capabilities.

This is a big positive for the company and its stock.

On the date of publication, Will Ashworth did not have (either directly or indirectly) any positions in the securities mentioned in this article. The opinions expressed in this article are those of the writer, subject to the InvestorPlace.com Publishing Guidelines.

Will Ashworth has written about investments full-time since 2008. Publications where he's appeared include InvestorPlace, The Motley Fool Canada, Investopedia, Kiplinger, and several others in both the U.S. and Canada. He particularly enjoys creating model portfolios that stand the test of time. He lives in Halifax, Nova Scotia.


Pope Francis will attend G7 summit to speak about artificial intelligence – ROME REPORTS TV News Agency

In an unprecedented move, Pope Francis will attend the G7 Summit, a political and economic forum that brings together leaders from some of the world's most advanced countries. Italian Prime Minister Giorgia Meloni says this is the first time in history that a Pope has attended the G7 meetings. "I am convinced that the presence of His Holiness will make a decisive contribution to defining a regulatory, ethical and cultural framework for artificial intelligence," Meloni said, "because this field, the present and the future of this technology, will be another test of our ability, the ability of the international community, to do what another Pope, St. John Paul II, talked about in his famous speech to the United Nations on October 2, 1979: political activity, whether national or international, comes from man, is exercised by man and is for man." The meetings will take place in the southern Italian region of Puglia from June 13 to 15 and will include leaders from the United States, France, Germany, Japan, Italy, Canada and Britain. Pope Francis will join a session dedicated to artificial intelligence that is open to other countries, not just those in the G7.


Artificial Intelligence Has Come for Our…Beauty Pageants? – Glamour

Hence the creation of the Miss AI pageant, in which AI-generated contestants will be judged on some of the classic aspects of pageantry as well as the skill and implementation of the AI tools used to create them. Also being considered is the AI creator's social media clout, meaning they're not just crowning the most beautiful avatar but also the most influential.

So, do we think Amazon's Alexa will compete? (Sorry.)

All jokes aside, both Fanvue and the WAICAs are being met with criticism, especially since real beauty pageants are so problematic as is. "Concern for the impact of beauty pageants on mental health has been well documented and includes poor self-esteem, negative body image, and disordered eating," says Ashley Moser, a licensed therapist and clinical education specialist at The Renfrew Center, and upping the ante by digitizing contestants' perfection and beauty could set a dangerous precedent.

"These issues arise from the literal crowning of the best version of what women should be, specifically, beautiful and thin," Moser adds. What's more, it feels regressive, and quite frankly offensive, to combine something so superficial and archaic with what's an otherwise cutting-edge technological innovation.


"I support the recognition and awarding of women in tech and would hope that those skills could be celebrated without having to include beauty and appearance as a qualifying factor," Moser says. Can't we celebrate women for their abilities without making it about looks?

WAICA says it's not like that, though. "The WAICA awards aim to raise the standard of the industry, focusing on celebrating diversity and realism," the spokesperson says. "This isn't about pushing unrealistic standards but realistic models that represent real people. We want to see AI models of all shapes, sizes, and backgrounds entering the awards, and that's what the judges will be looking for."


iOS 18 could be loaded with AI, as Apple reveals 8 new artificial intelligence models that run on-device – TechRadar

Apple has released a set of several new AI models that are designed to run locally on-device rather than in the cloud, possibly paving the way for an AI-powered iOS 18 in the not-too-distant future.

The iPhone giant has been doubling down on AI in recent months, with a carefully split focus across cloud-based and on-device AI. We saw leaks earlier this week indicating that Apple plans to make its own AI server chips, so this reveal of new local large language models (LLMs) demonstrates that the company is committed to both breeds of AI software. I'll dig into the implications of that further down, but for now, let's explain exactly what these new models are.

The suite of AI tools contains eight distinct models, called OpenELMs (Open-source Efficient Language Models). As the name suggests, these models are fully open-source and available on the Hugging Face Hub, an online community for AI developers and enthusiasts. Apple also published a whitepaper outlining the new models. Four were pre-trained using CoreNet (previously CVNets), Apple's library for training deep neural networks, on a massive corpus of public text data; the other four have been instruction-tuned by Apple, a process by which an AI model's learning parameters are carefully honed to respond to specific prompts.
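
For readers who want to experiment, checkpoints like these can typically be pulled straight from the Hugging Face Hub with the transformers library. The sketch below is illustrative only: the apple/OpenELM-270M checkpoint name and the pairing with a Llama 2 tokenizer reflect Apple's public model cards as I understand them, so verify both identifiers on the Hub before running.

```python
# Minimal sketch: loading an OpenELM checkpoint from the Hugging Face Hub.
# Model and tokenizer names are assumptions drawn from Apple's model cards;
# verify them on huggingface.co before relying on this.
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "apple/OpenELM-270M",    # smallest of the eight released variants
    trust_remote_code=True,  # OpenELM ships custom modeling code
)
# OpenELM has no bundled tokenizer; Apple's examples pair it with Llama 2's
# (a gated repository, so a Hugging Face access token may be required).
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

inputs = tokenizer("Once upon a time there was", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```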

Releasing open-source software is a somewhat unusual move for Apple, which typically retains quite a close grip on its software ecosystem. The company claims to want to "empower and enrich" public AI research by releasing the OpenELMs to the wider AI community.

Apple has been seriously committed to AI recently, which is good to see as the competition is fierce in both the phone and laptop arenas, with stuff like the Google Pixel 8's AI-powered Tensor chip and Qualcomm's latest AI chip coming to Surface devices.

By putting its new on-device AI models out to the world like this, Apple is likely hoping that some enterprising developers will help iron out the kinks and ultimately improve the software, something that could prove vital if it plans to implement new local AI tools in future versions of iOS and macOS.

It's worth bearing in mind that the average Apple device is already packed with AI capabilities, with the Apple Neural Engine found on the company's A- and M-series chips powering features such as Face ID and Animoji. The upcoming M4 chip for Mac systems also appears to sport new AI-related processing capabilities, something that's swiftly becoming a necessity as more-established professional software implements machine-learning tools (like Firefly in Adobe Photoshop).


In other words, we can probably expect AI to be the hot-button topic for iOS 18 and macOS 15. I just hope it's used for clever and unique new features, rather than Microsoft's constant Copilot nagging.


Pope Francis to participate in G7 session on AI – Vatican News – English

Pope Francis will take part in the upcoming G7 session on Artificial Intelligence under Italy's presidency of the group.

By Vatican News

The Holy See Press Office on Friday confirmed that Pope Francis will intervene in the G7 Summit in Italy's southern Puglia region in the session devoted to Artificial Intelligence (AI).

The confirmation of the Holy Father's participation in the Summit, which will take place from June 13 to 15 at Borgo Egnazia in Puglia, follows the announcement made by Italian Prime Minister Giorgia Meloni.

"This is the first time in history that a pontiff will participate in the work of a G7," she said, adding that the Pope would attend the "outreach session" for guest participants at the upcoming Group of Seven industrialised nations meeting.

The Summit foresees the participation of the United States, Canada, France, the United Kingdom, Germany, and Japan.

"I heartily thank the Holy Father for accepting Italy's invitation. His presence honours our nation and the entire G7," Meloni explained, emphasizing how the Italian government intends to enhance the contribution given by the Holy See on the issue of artificial intelligence, particularly with the "Rome Call for AI Ethics of 2020," promoted by the Pontifical Academy for Life, in a process "that leads to the concrete application of the concept of algorithmic ethics, namely giving ethics to algorithms."

"I am convinced," she added, "that the Pope's presence will provide a decisive contribution to defining a regulatory, ethical, and cultural framework for artificial intelligence, because on this ground, on the present and future of this technology, our capacity will once again be measured, the capacity of the international community to do what another Pope, Saint John Paul II, recalled on October 2, 1979, in his famous speech to the United Nations."

"Political activity, whether national or international, comes from man, is exercised by man, and is for man," Meloni quoted.

Pope Francis dedicated his Message for the 57th World Day of Peace, on 1 January 2024, to "Artificial Intelligence and Peace," urging humanity to cultivate "wisdom of the heart," which, he says, can help us to put systems of artificial intelligence at the service of "a fully human communication."


What We Learned From Big Tech Earnings This Week – Investopedia

Key Takeaways

Artificial intelligence (AI) was in focus as Meta Platforms (META), Google-parent Alphabet (GOOGL), and Microsoft (MSFT) reported earnings this week, but investors weren't easily impressed despite better-than-expected results posted by all three tech giants.

Meta shares plunged after the company emphasized increased spending to invest in AI. Meanwhile, Alphabet shares surged and Microsoft shares gained as cloud strength seemed to ease investors' concerns about the increased AI spending.

Big tech earnings demonstrated that companies' enterprise customer businesses were key to AI monetization last quarter. The emphasis on enterprise offerings persisted with a focus on cloud segments.

Meta's earnings beat was overshadowed by the company's plans to increase spending on AI investments, which sent the stock tumbling. The worry for investors in the near term was perhaps how quickly the investment would yield returns, even as analysts said it could boost Meta's position in the long term.

However, investors didn't seem to feel that way about Meta's counterparts.

Alphabet noted increased spending fueled by AI investments. AI-related growth in Google Cloud and YouTube "support the notion that Google is seeing AI tailwinds across the business," analysts at Raymond James wrote.

Microsoft's chief financial officer Amy Hood said during the company's earnings call that the company expects "capital expenditures to increase materially on a sequential basis driven by cloud and AI infrastructure investments."

Hood said while the company expects capital expenditures to be higher in the 2025 fiscal year than in 2024, "these expenditures over the course of the next year are dependent on demand signals and adoption of [Microsoft's] services."

While Meta has highlighted its early success in leveraging its AI tech, analysts say investors are looking for more clarity on how it can contribute to the company's existing structure.

"Upside in the near term may be limited," Wedbush analysts wrote in a note, adding that investors are waiting for "more clarity on potential 2025 spending levels," evidence that the company can meet growth expectations despite harder comparables, and sustainable user and advertiser engagement with new AI offerings.

The company generates almost all of its revenue from advertising and has been increasingly looking at ways to leverage AI to boost that revenue. Meta reported that 30% of the content users see on Facebook, and 50% on Instagram, is delivered by its AI recommendation engines, which improve engagement and increase ad efficiency.

Alphabet has also set its sights on AI-driven advertising revenue growth. The company's Chief Business Officer (CBO) Philipp Schindler spoke during its earnings call about how generative AI helps advertisers target their audience better, and how tools like Gemini could also aid in creating the images and text they need for those ads.

At Alphabet's recent Google Cloud Next conference, hundreds of the company's enterprise customers spoke about using the cloud platform's genAI tools, with some notable business users including Mercedes Benz and Walmart (WMT).

Alphabet CEO Sundar Pichai said the company is "committed to making the investments required to keep [it] at the leading edge in technical infrastructure" as increased capital expenditures "will fuel growth in Cloud, help [the company] push the frontiers of AI models, and enable innovation across our services, especially in Search."

Pichai outlined the company's "clear paths to AI monetization through Ads and Cloud." He said the "cloud business continues to grow as we bring the best of Google AI to enterprise customers."

While AI initiatives are top of mind for investors, Microsoft's cloud strength fueled its third-quarter earnings beat.

"Cloud and AI continued to fuel upside for Microsoft," Bank of America analysts wrote, saying they "believe Azure strength is enough to drive total revenue growth higher for now."

Microsoft's Hood said, "I know it isn't as exciting as talking about all the AI projects," but Azure "is still really foundational" to the company's enterprise customers.


Machine learning and experiment – Symmetry Magazine

Every day in August of 2019, physicist Dimitrios Tanoglidis would walk to the Plein Air Café next to the University of Chicago and order a cappuccino. After finding a table, he would spend the next several hours flipping through hundreds of thumbnail images of white smudges recorded by the Dark Energy Camera, a telescope that at the time had observed 300 million astronomical objects.

For each white smudge, Tanoglidis would ask himself a simple yes-or-no question: Is this a galaxy? "I would go through about 1,000 images a day," he says. "About half of them were galaxies, and the other half were not."

After about a month, Tanoglidis, who was a University of Chicago PhD student at the time, had built up a catalogue of 20,000 low-brightness galaxies.

Then Tanoglidis and his team used this dataset to create a tool that, once trained, could evaluate a similar dataset in a matter of moments. "The accuracy of our algorithm was very close to the human eye," he says. "In some cases, it was even better than us and would find things that we had misclassified."

The tool they created was based on machine learning, a type of software that learns as it digests data, says Aleksandra Ciprijanovic, a physicist at the US Department of Energy's Fermi National Accelerator Laboratory who at the time was one of Tanoglidis's research advisors. "It's inspired by how neurons in our brains work," she says, adding that this added brainpower will be essential for analyzing exponentially larger datasets from future astronomical surveys. "Without machine learning, we'd need a small army of PhD students to go through the same type of dataset."
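
The pipeline described here, thousands of hand-labeled thumbnails feeding a model that then classifies new images, is a textbook binary image-classification setup. The sketch below shows the general shape of such a classifier in PyTorch; it is a generic illustration, not the Dark Energy Survey team's actual architecture, and the layer sizes and 64-pixel input are arbitrary choices.

```python
# Generic sketch of a binary "galaxy / not a galaxy" image classifier.
# Illustrative only: not the architecture the DES team actually used.
import torch
import torch.nn as nn

class GalaxyClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # grayscale thumbnails
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 1),  # assumes 64x64-pixel inputs
        )

    def forward(self, x):
        return self.head(self.features(x))  # raw logit; sigmoid gives P(galaxy)

model = GalaxyClassifier()
loss_fn = nn.BCEWithLogitsLoss()  # standard loss for yes/no labels

thumbnails = torch.randn(8, 1, 64, 64)        # stand-ins for real cutouts
labels = torch.randint(0, 2, (8, 1)).float()  # 1 = galaxy, 0 = not
loss = loss_fn(model(thumbnails), labels)
loss.backward()  # gradients for one training step
```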

Today, the Dark Energy Survey collaboration has a catalogue of 700 million astronomical objects, and scientists continue to use (and improve) Tanoglidis's tool to analyze images that could show previously undiscovered galaxies.

"In astronomy, we have a huge amount of data," Ciprijanovic says. "No matter how many people and resources we have, we'll never have enough people to go through all the data."

Classification ("this is probably a photo of a galaxy" versus "this is probably not a photo of a galaxy") was one of machine learning's earliest applications in science. Over time, its uses have continued to evolve.

Machine learning, which is a subset of artificial intelligence, is a type of software that can, among other things, help scientists understand the relationships between variables in a dataset.

According to Gordon Watts, a physicist at the University of Washington, scientists traditionally figured out these relationships by plotting the data and looking for the mathematical equations that could describe it. "Math came before the software," Watts says.

This math-only method is relatively straightforward when looking for the relationship between only a few variables: the pressure of a gas as a function of its temperature and volume, or the acceleration of a ball as a function of the force of an athlete's kick and the ball's mass. But finding these relationships with nothing but math becomes nearly impossible as you add more and more variables.

"A lot of the problems we're tackling in science today are very complicated," Ciprijanovic says. "Humans can do a good job with up to three dimensions, but how do you think about a dataset if the problem is 50- or 100-dimensional?"

This is where machine learning comes in.

"Artificial intelligence doesn't care about the dimensionality of the problems," Ciprijanovic says. "It can find patterns and make sense of the data no matter how many different dimensions are added."
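
To make that concrete, here is a toy illustration of letting a model learn a 50-variable relationship from data rather than deriving an explicit formula. It is a generic scikit-learn sketch on synthetic data, not drawn from any of the experiments discussed here.

```python
# Toy illustration: learning a 50-dimensional relationship from data
# instead of writing down an equation. Synthetic data only.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 50))  # 50 input variables per example
# A hidden nonlinear relationship that no one hands to the model:
y = np.sin(X[:, 0]) + X[:, 1] * X[:, 2] + 0.1 * rng.normal(size=10_000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(X_train, y_train)
print(f"R^2 on held-out data: {model.score(X_test, y_test):.2f}")
```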

Some physicists have been using machine-learning tools since the 1950s, but their widespread use in the field is a relatively new phenomenon.

"The idea to use a [type of machine learning called a] neural network was proposed to the CDF experiment at the Tevatron in 1989," says Tommaso Dorigo, a physicist at the Italian National Institute for Nuclear Physics, INFN. "People in the collaboration were both amused and disturbed by this."

Amused because of its novelty; disturbed because it added a layer of opacity into the scientific process.

Machine-learning models are sometimes called "black boxes" because it is hard to tell exactly how they are handling the data put into them; their large number of parameters and complex architectures are difficult to understand. Because scientists want to know exactly how a result is calculated, many physicists have been skeptical of machine learning and reluctant to implement it into their analyses. "In order for a scientific collaboration to sign off on a new method, they first must exhaust all possible doubts," Dorigo says.

Scientists found a reason to work through those doubts after the Large Hadron Collider came online, an event that coincided with the early days of the ongoing boom in machine learning in industry.

Josh Bendavid, a physicist at the Massachusetts Institute of Technology, was an early adopter. "When I joined CMS, machine learning was a thing, but seeing limited use," he says. "But there was a big push to implement machine learning into the search for the Higgs boson."

The Higgs boson is a fundamental particle that helps explain why some particles have mass while others do not. Theorists predicted its existence in the 1950s, but finding it experimentally was a huge challenge. That's because Higgs bosons are both incredibly rare and incredibly short-lived, quickly decaying into other particles such as pairs of photons.

In 2010, when the LHC experiments first started collecting data for physics, machine learning was widely used in industry and academia for classification ("this is a photo of a cat" versus "this is not a photo of a cat"). Physicists were using machine learning in a similar way ("this is a collision with two photons" versus "this is not a collision with two photons").

But according to Bendavid, simply finding photons was not enough. Pairs of photons are produced in roughly one out of every 100 million collisions in the LHC, but Higgs bosons that decay into pairs of photons are produced in only one of 500 billion. To find Higgs bosons, scientists needed to find pairs of photons whose combined energy, the invariant mass, was close to the mass of the Higgs. This means they needed more complex algorithms, ones that could not only recognize photons but also interpret the energy of photons based on how they interacted with the detector. "It's like trying to estimate the weight of a cat in a photograph," Bendavid says.
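
The quantity being reconstructed here is the diphoton invariant mass: for two (massless) photons it depends only on their energies and opening angle, and collisions whose value lands near 125 GeV are Higgs candidates. A small worked sketch, with illustrative numbers rather than real event data:

```python
# Diphoton invariant mass: m^2 = 2 * E1 * E2 * (1 - cos(theta)),
# valid for massless particles such as photons. Numbers are illustrative.
import math

def diphoton_mass(e1_gev: float, e2_gev: float, opening_angle_rad: float) -> float:
    return math.sqrt(2.0 * e1_gev * e2_gev * (1.0 - math.cos(opening_angle_rad)))

# Two photons at 70 and 60 GeV separated by about 2.6 radians:
m = diphoton_mass(70.0, 60.0, 2.6)
print(f"invariant mass = {m:.1f} GeV")  # ~124.9 GeV, near the Higgs mass
```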

That became possible when LHC scientists created high-quality detector simulations, which they could use to train their algorithms to find the photons they were looking for, Bendavid says.

Bendavid and his colleagues simulated millions of photons and looked at how they lost energy as they moved through the detector. According to Bendavid, the algorithms they trained were much more sensitive than traditional techniques.

And the algorithms worked. In 2012, the CMS and ATLAS experiments announced the discovery of the Higgs boson, just two years into studying particle collisions at the LHC.

"We would have needed a factor of two more data to discover the Higgs boson if we had tried to do the analysis without machine learning," Bendavid says.

After the Higgs discovery, the LHC research program saw its own boom in machine learning. "Before 2012, you would have had a hard time to publish something which used neural networks," Dorigo says. "After 2012, if you wanted to publish an analysis that didn't use machine learning, you'd face questions and objections."

Today, LHC scientists use machine learning to simulate collisions, evaluate and process raw data, tease signal from background, and even search for anomalies. While these advancements were happening at the LHC, scientists were watching closely from another, related field: neutrino research.

Neutrinos are ghostly particles that rarely interact with ordinary matter. According to Jessie Micallef, a fellow at the National Science Foundation's Institute for Artificial Intelligence and Fundamental Interactions at MIT, early neutrino experiments would detect only a few particles per year. With such small datasets, scientists could easily reconstruct and analyze events with traditional methods.

That is how Micallef worked on a prototype detector as an intern at Lawrence Berkeley National Laboratory in 2015. "I would measure electrons drifting in a little tabletop detector, come back to my computer, and make plots of what we saw," they say. "I did a lot of programming to find the best fit lines for our data."

But today, their detectors and neutrino beams are much larger and more powerful. "We're talking with people at the LHC about how to deal with pileup," Micallef says.

Neutrino physicists now use machine learning both to find the traces neutrinos leave behind as they pass through the detectors and to extract their properties, such as their energy and flavor. These days, Micallef collects their data, imports it into their computer, and starts the analysis process. But instead of toying with the equations, Micallef says that they let machine learning do a lot of the analysis for them.

At first, it seemed like a whole new world, they say, but it wasn't a magic bullet. Then there was validating the output. "I would change one thing, and maybe the machine-learning algorithm would do really good in one area but really bad in another."

"My work became thinking about how machine learning works, what its limitations are, and how we can get the most out of it."

Today, Micallef is developing machine-learning tools that will help scientists with some of the unique challenges of working with neutrinos, including using gigantic detectors to study not just high-powered neutrinos blasting through from outside the Milky Way, but also low-energy neutrinos that could come from nearby.

Neutrino detectors are so big that the sizes of the signals they measure can be tiny by comparison. For instance, the IceCube experiment at the South Pole uses about a cubic kilometer of ice peppered with 5,000 sensors. But when a low-energy neutrino hits the ice, only a handful of those sensors light up.

"Maybe a dozen out of 5,000 detectors will see the neutrino," Micallef says. "The pictures we're looking at are mostly empty space, and machine learning can get confused if you teach it that only 12 sensors out of 5,000 matter."

Neutrino physicists and scientists at the LHC are also using machine learning to give a more nuanced interpretation of what they are seeing in their detectors.

"Machine learning is very good at giving a continuous probability," Watts says.

For instance, instead of classifying a particle in a binary way ("this event is a muon neutrino" versus "this event is not a muon neutrino"), machine learning can provide an uncertainty associated with its assessment.

"This could change the overall outcome of our analysis," Micallef says. "If there is a lot of uncertainty, it might make more sense for us to throw that event away or analyze it by hand. It's a much more concrete way of looking at how reliable these methods are and is going to be more and more important in the future."
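
To make the idea concrete, here is a hypothetical triage rule of the kind Micallef describes: keep confident classifications, route uncertain ones to a human, and discard the rest. The thresholds and labels are invented for illustration and are not taken from any experiment.

```python
# Hypothetical event triage based on a classifier's continuous output.
# Thresholds are invented for illustration, not taken from any experiment.
def triage(p_muon_neutrino: float) -> str:
    """Route an event given the model's probability (0.0-1.0)
    that it is a muon neutrino."""
    if p_muon_neutrino >= 0.95:
        return "accept"        # confident: feed the automated analysis
    if p_muon_neutrino >= 0.60:
        return "hand-analyze"  # uncertain: route to a physicist
    return "discard"           # likely background

for score in (0.99, 0.72, 0.10):
    print(score, "->", triage(score))
```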

Physicists use machine learning throughout almost all parts of data collection and analysis. But what if machine learning could be used to optimize the experiment itself? "That's the dream," Watts says.

Detectors are designed by experts with years of experience, and every new detector incrementally improves upon what has been done before. But Dorigo says he thinks machine learning could help detector designers innovate. "If you look at calorimeters designed in the 1970s, they look a lot like the calorimeters we have today," Dorigo says. "There is no notion of questioning paradigms."

Experiments such as CMS and ATLAS are made from hundreds of individual detectors that work together to track and measure particles. Each subdetector is enormously complicated, and optimizing each one's design, not as an individual component but as part of a complex ecosystem, is nearly impossible. "We accept suboptimal results because the human brain is incapable of thinking in 1,000 dimensions," Dorigo says.

But what if physicists could look at the detector holistically? According to Watts, physicists could (in theory) build a machine-learning algorithm that considers physics goals, budget, and real-world limitations to choose the optimal detector design: a symphony of perfectly tailored hardware all working in harmony.

Scientists still have a long way to go. "There's a lot of potential," Watts says. "But we haven't even learned to walk yet. We're only just starting to crawl."

They are making progress. Dorigo is a member of the Southern Wide-field Gamma-ray Observatory, a collaboration that wants to build an array of 6,000 particle detectors in the highlands of South America to study gamma rays from outer space. The collaboration is currently assessing how to arrange and place these 6,000 detectors. "We have an enormous number of possible solutions," Dorigo says. "The question is: how to pick the best one?"

To find out, Dorigo and his colleagues took into account the questions they wanted to answer, the measurements they wanted to take, and the number of detectors they had available to use. This time, though, they also developed a machine-learning tool that did the same, and found that it agreed with them.

They plugged a number of reasonable initial layouts into the program and allowed it to run simulations and gradually tweak the detector placement. "No matter the initial layout, every simulation always converged to the same solution," Dorigo says.

Even though he knows there is still a long way to go, Dorigo says that machine-learning-aided detector design is the future. "We're designing experiments today that will operate 10 years from now," he says. "We have to design our detectors to work with the analysis tools of the future, and so machine learning has to be an ingredient in those decisions."


Gaza war: artificial intelligence is changing the speed of targeting and scale of civilian harm in unprecedented ways – The Conversation

As Israel's air campaign in Gaza enters its sixth month after Hamas's terrorist attacks on October 7, it has been described by experts as one of the most relentless and deadliest campaigns in recent history. It is also one of the first being coordinated, in part, by algorithms.

Artificial intelligence (AI) is being used to assist with everything from identifying and prioritising targets to assigning the weapons to be used against those targets.

Academic commentators have long focused on the potential of algorithms in war to highlight how they will increase the speed and scale of fighting. But as recent revelations show, algorithms are now being employed at a large scale and in densely populated urban contexts.

This includes the conflicts in Gaza and Ukraine, but also in Yemen, Iraq and Syria, where the US is experimenting with algorithms to target potential terrorists through Project Maven.

Amid this acceleration, it is crucial to take a careful look at what the use of AI in warfare actually means. It is important to do so not from the perspective of those in power, but from that of the officers executing it, and of the civilians undergoing its violent effects in Gaza.

This focus highlights the limits of keeping a "human in the loop" as a failsafe and central response to the use of AI in war. As AI-enabled targeting becomes increasingly computerised, the speed of targeting accelerates, human oversight diminishes and the scale of civilian harm increases.

Reports by the Israeli publications +972 Magazine and Local Call give us a glimpse into the experience of 13 Israeli officials working with three AI-enabled decision-making systems in Gaza called Gospel, Lavender and Where's Daddy?.

These systems are reportedly trained to recognise features that are believed to characterise people associated with the military arm of Hamas. These features include membership of the same WhatsApp group as a known militant, changing cell phones every few months, or changing addresses frequently.

The systems are then supposedly tasked with analysing data collected on Gaza's 2.3 million residents through mass surveillance. Based on the predetermined features, the systems predict the likelihood that a person is a member of Hamas (Lavender), that a building houses such a person (Gospel), or that such a person has entered their home (Where's Daddy?).

In the investigative reports named above, intelligence officers explained how Gospel helped them go from "50 targets per year" to "100 targets in one day" and that, at its peak, Lavender managed to generate 37,000 people as potential human targets. They also reflected on how using AI cuts down deliberation time: "I would invest 20 seconds for each target at this stage ... I had zero added value as a human ... it saved a lot of time."

They justified this lack of human oversight in light of a manual check the Israel Defense Forces (IDF) ran on a sample of several hundred targets generated by Lavender in the first weeks of the Gaza conflict, through which a 90% accuracy rate was reportedly established. While details of this manual check are likely to remain classified, a 10% inaccuracy rate for a system used to make 37,000 life-and-death decisions implies on the order of 3,700 misidentified people, and will inherently result in devastatingly destructive realities.

But importantly, any accuracy rate that sounds reasonably high makes it more likely that algorithmic targeting will be relied on, as it allows trust to be delegated to the AI system. As one IDF officer told +972 Magazine: "Because of the scope and magnitude, the protocol was that even if you don't know for sure that the machine is right, you know that statistically it's fine. So you go for it."

The IDF denied these revelations in an official statement to The Guardian. A spokesperson said that while the IDF does use "information management tools [...] in order to help intelligence analysts to gather and optimally analyse the intelligence, obtained from a variety of sources, it does not use an AI system that identifies terrorist operatives."

The Guardian has since, however, published a video of a senior official of the elite Israeli intelligence Unit 8200 talking last year about the use of machine learning "magic powder" to help identify Hamas targets in Gaza. The newspaper has also confirmed that the commander of the same unit wrote in 2021, under a pseudonym, that such AI technologies would resolve the "human bottleneck for both locating the new targets and decision-making to approve the targets."

AI accelerates the speed of warfare in terms of the number of targets produced and the time needed to decide on them. While these systems inherently decrease the ability of humans to control the validity of computer-generated targets, they simultaneously make these decisions appear more objective and statistically correct, due to the value that we generally ascribe to computer-based systems and their outcomes.

This allows for the further normalisation of machine-directed killing, amounting to more violence, not less.

While media reports often focus on the number of casualties, body counts, much like computer-generated targets, tend to present victims as objects that can be counted. This reinforces a very sterile image of war. It glosses over the reality of more than 34,000 people dead, over 76,000 injured and the destruction of or damage to 60% of Gaza's buildings, as well as the displacement and the lack of access to electricity, food, water and medicine.

It fails to emphasise the horrific stories of how these harms tend to compound each other. For example, one civilian, Shorouk al-Rantisi, was reportedly found under the rubble after an airstrike on the Jabalia refugee camp. She had to wait 12 days to be operated on without painkillers, and now resides in another refugee camp with no running water with which to tend to her wounds.

Aside from increasing the speed of targeting and therefore exacerbating the predictable patterns of civilian harm in urban warfare, algorithmic warfare is likely to compound harm in new and under-researched ways. First, as civilians flee their destroyed homes, they frequently change addresses or give their phones to loved ones.

Such survival behaviour corresponds to what the reports say Lavender has been programmed to identify as likely signs of association with Hamas. These civilians thereby unknowingly make themselves suspects for lethal targeting.

Beyond targeting, these AI-enabled systems also inform additional forms of violence. An illustrative story is that of the fleeing poet Mosab Abu Toha, who was allegedly arrested and tortured at a military checkpoint. The New York Times ultimately reported that he, along with hundreds of other Palestinians, had been wrongfully identified as Hamas by the IDF's use of AI facial recognition and Google Photos.

Over and above the deaths, injuries and destruction, these are the compounding effects of algorithmic warfare. It becomes a form of psychic imprisonment, where people know they are under constant surveillance yet do not know which behavioural or physical features will be acted on by the machine.

From our work as analysts of the use of AI in warfare, it is apparent that our focus should not solely be on the technical prowess of AI systems or on the figure of the human-in-the-loop as a failsafe. We must also consider these systems' ability to alter the human-machine-human interactions of war, in which those executing algorithmic violence merely rubber-stamp the output generated by the AI system, and those undergoing the violence are dehumanised in unprecedented ways.

Read more here:

Gaza war: artificial intelligence is changing the speed of targeting and scale of civilian harm in unprecedented ways - The Conversation

AI has a lot of terms. We’ve got a glossary for what you need to know – Quartz

Let's start with the basics for a refresher. Generative artificial intelligence is a category of AI that uses data to create original content. In contrast, classic AI could only offer predictions based on data inputs, not brand-new and unique answers. Generative AI achieves this using deep learning, a form of machine learning built on artificial neural networks (software structures loosely resembling the human brain), so computers can perform human-like analysis.

Generative AI isn't grabbing answers out of thin air, though. It's generating answers based on the data it's trained on, which can include text, video, audio and lines of code. Imagine, say, waking up from a coma, blindfolded, and all you can remember is 10 Wikipedia articles. All of your conversations with another person about what you know are based on those 10 Wikipedia articles. It's kind of like that, except generative AI uses millions of such articles and a whole lot more.
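
To make that concrete, here is a minimal, self-contained Python sketch of the idea (a toy illustration under drastic simplification, not how production chatbots are actually built): it "trains" by counting which character follows which in a tiny corpus, then generates new text by sampling from those counts.

    import random
    from collections import Counter, defaultdict

    def train(corpus):
        """Count, for each character, which characters tend to follow it."""
        model = defaultdict(Counter)
        for current, nxt in zip(corpus, corpus[1:]):
            model[current][nxt] += 1
        return model

    def generate(model, seed, length=40):
        """Sample new text one character at a time from the learned counts."""
        out = seed
        for _ in range(length):
            followers = model.get(out[-1])
            if not followers:  # no observed continuation for this character
                break
            chars, weights = zip(*followers.items())
            out += random.choices(chars, weights=weights)[0]
        return out

    corpus = "the cat sat on the mat and the cat ran to the rat"
    print(generate(train(corpus), seed="the"))

Deep learning replaces the counting table with layered neural networks and the tiny corpus with a huge slice of the internet, but the principle is the one described above: the output can only recombine patterns present in the training data, the blindfolded-coma analogy in miniature.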

Excerpt from:

AI has a lot of terms. We've got a glossary for what you need to know - Quartz

The Commission adopts its own approach on development and use of Artificial Intelligence – European Union

The Communication on Artificial Intelligence in the European Commission (AI@EC) outlines the Commission's strategic vision to foster the internal development and use of lawful, safe and trustworthy AI.

When using or deploying AI, the Commission will:

Develop internal operational guidelines that give staff (users, developers or procurers of AI systems) clear and pragmatic guidance on how to put such systems into operation.

Assess and classify AI systems that the Commission is using or planning to use, based on a risk-based approach and using the Commission's operational guidelines.

Refrain from using AI systems that are considered incompatible with European values or that represent a threat to the security, safety, health and fundamental rights of people.

Put in place organisational structures to fulfil the obligations of the Commission in relation to AI.

In doing so, the Commission will consider the planned EU political and legislative initiatives as well as all applicable existing legislation, including on non-discrimination, accessibility, information security and data protection. It will also consider best practices and examples from industry at both the national and international levels. When deciding on new IT investments, the Commission will consider the AI aspect to ensure compliance with the Commission's operational guidelines.

"We welcome the opportunity that AI brings to Commission staff to become more efficient in their daily work. We strive to modernise our systems and support public administrations in the EU in their use of trustworthy AI technologies . With the AI Act, the Commission is setting rules for harmonising the use of AI in the EU. These rules will help us too. With this Communication, we want to ensure that the Commission prepares for the implementation of the AI Act and puts in place the mechanisms that are needed for the safe and ethical use of AI in our own work."

Veronica Gaffey, Director-General for Digital Services

Keeping its focus on people, the Commission will provide its staff with access to relevant information on AI, as well as advice and help, through training and guidelines.

Here is the original post:

The Commission adopts its own approach on development and use of Artificial Intelligence - European Union

Meta Says It Plans to Spend Billions More on A.I. – The New York Times

Meta projected on Wednesday that revenue for the current quarter would be lower than what Wall Street anticipated and said it would spend billions of dollars more on its artificial intelligence efforts, even as it reported robust revenue and profits for the first three months of the year.

Revenue for the company, which owns Facebook, Instagram, WhatsApp and Messenger, was $36.5 billion in the first quarter, up 27 percent from $28.6 billion a year earlier and slightly above Wall Street estimates of $36.1 billion, according to data compiled by FactSet. Profit was $12.4 billion, more than double the $5.7 billion a year earlier.

But Meta's work on A.I., which requires substantial computing power, comes with a lofty price tag. The Silicon Valley company said it planned to raise its spending forecast for the year to $35 billion to $40 billion, up from a previous estimate of $30 billion to $37 billion. The move was driven by heavy investments in A.I. infrastructure, including data centers; chip designs; and research and development.

Meta also predicted that revenue for the current quarter would be $36.5 billion to $39 billion, lower than analysts' expectations.

The combination of higher spending and lighter-than-expected revenue spooked investors, who sent Meta's shares down more than 16 percent in after-hours trading on Wednesday afternoon, after the stock had ended regular trading at $493.50.

Meta's earnings should serve as a stark warning for companies reporting this earnings season, said Thomas Monteiro, a senior analyst at Investing.com. While the company's results were robust, they didn't matter as much as the lowered revenue expectations for the current quarter, he said, adding, "Investors are currently looking at the near future with heavy mistrust."

Continue reading here:

Meta Says It Plans to Spend Billions More on A.I. - The New York Times

Artificial intelligence could ‘revolutionise’ chemistry but researchers warn of hype – Chemistry World

Artificial intelligence can revolutionise science by making it faster, more efficient and more accurate, according to a survey of European Research Council (ERC) grant winners. And while the report looks at the impact of AI on all scientific fields, chemistry in particular can be expected to benefit greatly from the revolution, researchers say. But there are also warnings that AI is being overhyped, and avowals of the importance of human experts in chemical research.

The ERC report summarises how 300 researchers are using AI in their work, and what they see as its potential impacts and risks by 2030. Researchers in the physical sciences report that AI has become essential for data analysis, and for working on advanced simulations. They also note the applications of AI systems to perform calculations, operate instruments and control complex systems.

But they warn AI could spread false or inaccurate information, and that it might have a harmful impact on research integrity if researchers overuse AI tools to write research papers. They also express concerns about AI's lack of transparency and scientific replicability: AI was likened to a "black box" that could generate results without any underlying understanding of them.

Princeton University's Michael Skinnider, who uses machine learning to identify molecules with mass spectrometry, says AI's greatest advances will be in analysing data, rather than in the use of AI tools like large language models as aids for writing and researching. As well as extracting value from large datasets, AI would allow scientists to collect even larger datasets through more complex and ambitious experiments, "with the expectation that we will be able to sift through huge amounts of data to ultimately arrive at new biological insights", he says.

It's a view also held by Tim Albrecht at the University of Birmingham, who adds that the latest AI systems can determine through training which features they should look for in data, rather than simply finding data features that they've been pre-programmed for.

Gonçalo Bernardes of the University of Cambridge, who has used AI methods to optimise organic reactions, stresses that AI can also usefully analyse small datasets. "I believe its true power comes when dealing with small datasets and being able to inform on specific questions, [such as] what are the best conditions for a given reaction," he says.

And Simon Woodward of the University of Nottingham notes the ability of AI to inspire intuitive guesses. "We have found the latest generations of message-passing neural networks show the highest potential for such approaches in catalysis," he says.
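
For readers unfamiliar with the term, a message-passing neural network treats a molecule as a graph: each atom carries a feature vector, and at every layer each atom updates that vector using information aggregated from its bonded neighbours. The bare-bones Python sketch below (an illustrative toy with random weights and an invented three-atom chain, not any research group's actual model) shows a single message-passing step.

    import math, random

    random.seed(0)
    DIM = 4  # size of each atom's feature vector

    # Toy molecule: three atoms in a chain (0-1-2), stored as adjacency lists
    neighbours = {0: [1], 1: [0, 2], 2: [1]}
    features = {a: [random.gauss(0, 1) for _ in range(DIM)] for a in neighbours}

    # A random matrix standing in for the learned weight transformation
    W = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(DIM)]

    def message_passing_step(features):
        """Each atom sums its neighbours' vectors (the 'messages'), applies
        the weight matrix and a tanh nonlinearity, and keeps the result as
        its updated feature vector."""
        updated = {}
        for atom, nbrs in neighbours.items():
            agg = [sum(features[n][i] for n in nbrs) for i in range(DIM)]
            out = [sum(W[i][j] * agg[j] for j in range(DIM)) for i in range(DIM)]
            updated[atom] = [math.tanh(x) for x in out]
        return updated

    features = message_passing_step(features)
    print(features[1])  # the central atom now encodes both of its bonds

Stacking several such steps lets information propagate across the whole molecule, which is why these networks can pick up structural patterns relevant to, say, catalysis without being told in advance which features matter.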

Chemist Keith Butler at University College London specialises in using AI systems to design new materials. He agrees that AI will create major changes in chemical research, but says such systems can't replace expert humans. "There has been a lot of talk about self-driving autonomous labs lately, but I think that fully closed-loop labs are likely to be limited to specialist processes," he says. "One could argue that scientific research is often advanced by edge cases, so full automation is hard to imagine."

Butler makes an analogy between AI chemistry and self-driving cars. While AI has not led to fully autonomous vehicles, "if you drive a car produced today compared to a car produced 15 years ago you will see just how much AI can change the way we operate: sat nav, parking guidance, sensors and indicators for all sorts of performance," he says. "I already see significant impact of AI, and in particular machine learning, in the chemical sciences, but in all cases human experts checking and guiding the process is critical."

Princeton's Skinnider adds that he is less convinced of the potential for AI to replace higher-level thinking, such as AI for scientific discovery or for generating new scientific hypotheses, two hyped aspects of AI touched on in the ERC report. "Isn't there some amount of joy inherent in these processes that motivates people to become scientists in the first place?"

Read this article:

Artificial intelligence could 'revolutionise' chemistry but researchers warn of hype - Chemistry World

How the EU AI Act regulates artificial intelligence: What it means for cybersecurity – CSO Online

According to van der Veer, organizations that fall into the categories above need to do a cybersecurity risk assessment. They must then adhere to the standards set by either the AI Act or the Cyber Resilience Act, the latter being more focused on products in general. That either-or situation could backfire. "People will, of course, choose the act with less requirements, and I think that's weird," he says. "I think it's problematic."

When it comes to high-risk systems, the document stresses the need for robust cybersecurity measures. It advocates for the implementation of sophisticated security features to safeguard against potential attacks.

"Cybersecurity plays a crucial role in ensuring that AI systems are resilient against attempts to alter their use, behavior, performance or compromise their security properties by malicious third parties exploiting the system's vulnerabilities," the document reads. "Cyberattacks against AI systems can leverage AI-specific assets, such as training data sets (e.g., data poisoning) or trained models (e.g., adversarial attacks), or exploit vulnerabilities in the AI system's digital assets or the underlying ICT infrastructure. In this context, suitable measures should therefore be taken by the providers of high-risk AI systems, also taking into account as appropriate the underlying ICT infrastructure."

The AI Act has a few other paragraphs that zoom in on cybersecurity, the most important being those included in Article 15. This article states that high-risk AI systems must adhere to the principle of "security by design and by default," and that they should perform consistently throughout their lifecycle. The document also adds that "compliance with these requirements shall include implementation of state-of-the-art measures, according to the specific market segment or scope of application."

The same article talks about the measures that could be taken to protect against attacks. It says that the technical solutions to address AI-specific vulnerabilities shall include, where appropriate, "measures to prevent, detect, respond to, resolve, and control for attacks trying to manipulate the training dataset (data poisoning), or pre-trained components used in training (model poisoning), inputs designed to cause the model to make a mistake (adversarial examples or model evasion), confidentiality attacks or model flaws, which could lead to harmful decision-making."
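
To give a feel for the first attack named in that list, below is a minimal, self-contained Python sketch of data poisoning (a deliberately toy setup with an invented one-dimensional dataset and a nearest-class-mean learner; real attacks and defences are far more sophisticated). An attacker who can inject mislabelled points into the training set drags the learned class mean, shifts the decision boundary, and degrades accuracy on clean test data.

    import random
    random.seed(1)

    def make_data(n):
        """Two classes: class 0 clusters near 0.0, class 1 clusters near 1.0."""
        data = [(random.gauss(0.0, 0.3), 0) for _ in range(n)]
        data += [(random.gauss(1.0, 0.3), 1) for _ in range(n)]
        return data

    def fit_centroids(data):
        """A deliberately simple learner: the mean of each class. A point is
        classified by whichever class mean it lies closer to."""
        return {label: sum(x for x, y in data if y == label) /
                       sum(1 for _, y in data if y == label)
                for label in (0, 1)}

    def accuracy(means, data):
        predict = lambda x: min(means, key=lambda c: abs(x - means[c]))
        return sum(predict(x) == y for x, y in data) / len(data)

    train, test = make_data(200), make_data(200)

    # The poisoning step: inject 50 crafted outliers mislabelled as class 0,
    # dragging the class-0 mean toward class 1 and shifting the boundary.
    poisoned_train = train + [(3.0, 0)] * 50

    print("clean   :", accuracy(fit_centroids(train), test))
    print("poisoned:", accuracy(fit_centroids(poisoned_train), test))

On this toy data the clean model classifies roughly 95% of test points correctly, while the poisoned one drops to somewhere around 87%, a small-scale illustration of why the regulation singles out training-data integrity.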

"What the AI Act is saying is that if you're building a high-risk system of any kind, you need to take into account the cybersecurity implications, some of which might have to be dealt with as part of our AI system design," says Dr. Shrishak. "Others could actually be tackled more from a holistic system point of view."

According to Dr. Shrishak, the AI Act does not create new obligations for organizations that are already taking security seriously and are compliant.

Organizations need to be aware of the risk category they fall into and the tools they use. They must have a thorough knowledge of the applications they work with and the AI tools they develop in-house. "A lot of times, leadership or the legal side of the house doesn't even know what the developers are building," Thacker says. "I think for small and medium enterprises, it's going to be pretty tough."

Thacker advises startups that create products for the high-risk category to recruit experts to manage regulatory compliance as soon as possible. Having the right people on board could prevent situations in which an organization believes regulations apply to it when they don't, or the other way around.

If a company is new to the AI field and has no experience with security, it might have the false impression that just checking for things like data poisoning or adversarial examples satisfies all the security requirements, which is false. "That's probably one thing where perhaps somewhere the legal text could have done a bit better," says Dr. Shrishak. "It should have made it more clear that these are just basic requirements and that companies should think about compliance in a much broader way."

The AI Act can be a step in the right direction, but having rules for AI is one thing; properly enforcing them is another. "If a regulator cannot enforce them, then as a company, I don't really need to follow anything. It's just a piece of paper," says Dr. Shrishak.

In the EU, the situation is complex. A research paper published in 2021 by members of the Robotics and AI Law Society suggested that the enforcement mechanisms considered for the AI Act might not be sufficient. "The experience with the GDPR shows that overreliance on enforcement by national authorities leads to very different levels of protection across the EU due to different resources of authorities, but also due to different views as to when and how (often) to take actions," the paper reads.

Thacker also believes that enforcement will probably lag far behind, for multiple reasons. First, there could be miscommunication between different governmental bodies. Second, there might not be enough people who understand both AI and legislation. Despite these challenges, proactive efforts and cross-disciplinary education could bridge these gaps, not just in Europe but in other places that aim to set rules for AI.

Striking a balance between regulating AI and promoting innovation is a delicate task. In the EU, there have been intense conversations on how far to push these rules. French President Emmanuel Macron, for instance, argued that European tech companies might be at a disadvantage in comparison to their competitors in the US or China.

Traditionally, the EU regulated technology proactively, while the US encouraged creativity, thinking that rules could be set a bit later. "I think there are arguments on both sides in terms of which one's right or wrong," says Derek Holt, CEO of Digital.ai. "We need to foster innovation, but to do it in a way that is secure and safe."

In the years ahead, governments will tend to favor one approach or another, learn from each other, make mistakes, fix them, and then correct course. "Not regulating AI is not an option," says Dr. Shrishak. He argues that doing so would harm both citizens and the tech world.

The AI Act, along with initiatives like US President Biden's executive order on artificial intelligence, is igniting a crucial debate for our generation. Regulating AI is not only about shaping a technology. It is about making sure this technology aligns with the values that underpin our society.

Link:

How the EU AI Act regulates artificial intelligence: What it means for cybersecurity - CSO Online

This year in privacy: Wins and losses around the world | Context – Context

What's the context?

New laws around the world boosted privacy protections, but enforcement is key, and concerns around AI's impact are growing

This was something of a watershed year for privacy, with key legislation introduced from California to China, and heated debates around what the rapid advance of generative artificial intelligence means for individual privacy rights.

While world leaders agreed at the inaugural AI Safety Summit in Britain to identify and mitigate risks, including to consumer privacy, data breaches exposing personal data were reported at the UK Electoral Commission, genetics company 23andMe, Indian hospitals and elsewhere.

"2023 was a consistently mixed bag built on incredibly positive foundations: there are oversight bodies and policy-makers doing their jobs to hold bad actors to account at levels we have never seen before," said Gus Hosein, executive director at advocacy group Privacy International.

"Looking forward, governments can either act to create safeguards, or they can see the digital world start burning around them with rampant state-sponsored hacking, unaccountable automated decision making (and) deepening powers for Big Tech," he told Context.

"One huge question is this: where do Large Learning Models get their data from tomorrow? I'm worried it will be about getting it from people in ways beyond our control as consumers and citizens."

These are the year's most consequential privacy milestones, and what they mean for digital rights:

The sweeping Digital Services Act went into effect on Aug. 25, imposing new rules on user privacy on the largest online platforms, including banning or limiting some user-targeting practices and imposing stiff penalties for violations.

The EU's success in implementing this and other tech laws, such as the Digital Markets Act, could influence similar rules elsewhere around the world, much as the General Data Protection Regulation (GDPR) did, tech experts say.

But enforcement is a challenge, with any infringement procedure against a company dependent on external reports that must be done at least once a year by independent auditing organisations. These audits aren't due until August 2024.

The UK parliament in September passed the Online Safety Bill, which aims to make the UK "the safest place" in the world to be online.

But digital rights groups say the bill could undermine the privacy of users everywhere, as it forces companies to build technology that can scan all users for child abuse content - including messages that are end-to-end encrypted.

Moreover, the bill's age-verification system meant to protect kids will "invariably lead to adults losing their rights to private speech, and anonymous speech, which is sometimes necessary," noted the Electronic Frontier Foundation.

India passed a long-delayed data protection law in August, which digital rights experts quickly denounced as privacy-damaging and hurting rather than protecting fundamental rights.

The law "grants unchecked powers to the government, including on censorship and surveillance, while jeopardising the rights to information and free speech," noted digital rights group Access Now.

"It's a bad law ... the Data Protection Board lacks independence from the government, which is among the largest data miners (and) people whose privacy has been breached are not entitled to compensation, and are threatened with penalties," said Namrata Maheshwari, Access Now's policy counsel in Asia.

On Oct. 31, China's most popular social media sites - including microblogging platform Weibo, super app WeChat, Chinese TikTok Douyin and search engine Baidu - announced that so-called self-media accounts with more than 500,000 followers will be required to display real-name information.

Self-media includes news and information not necessarily approved by the government, and the new measures will remove the anonymity of thousands of influencers on platforms that are used daily by hundreds of millions of Chinese.

Users have expressed concerns about privacy violations, doxxing and harassment, and greater state surveillance, and several bloggers have quit the platforms. Authorities in Vietnam said they are considering similar rules.

California Governor Gavin Newsom in October signed the Delete Act, which enables Californians to either ask data brokers to delete their personal data, or forbid them from selling or sharing it, with a single request.

"It helps us gain better control over our data and makes it easier to mitigate the risks that the collection and sale of personal information create in our everyday lives," the Electronic Frontier Foundation said.

But a federal judge in September blocked enforcement of the California Age-Appropriate Design Code that was seen as a major win for privacy protections and safety for children online when it was passed last year.

The Chilean Supreme Court in August issued a ruling ordering Emotiv, a U.S. producer of a commercial brain scanning tool, to erase the data it had collected on a former Chilean senator, Guido Girardi.

The ruling - the first of its kind - puts Latin America at the forefront of a new race to protect the brain from machine mining and exploitation, with countries including Brazil, Mexico and Uruguay considering similar provisions.

"It is a significant victory for privacy advocates and sets a precedent for the protection of neural data around the world through the explicit establishment and protection of neurorights," the NeuroRights Foundation, a U.S.-based advocacy group, said.

(Reporting by Rina Chandran. Editing by Zoe Tabary)

See the original post:

This year in privacy: Wins and losses around the world | Context - Context

El Camino using artificial intelligence audio recorder in classrooms to aid disabled students – El Camino College Union

In an age where technology seems to develop every day, El Camino College has been using certain artificial intelligence programs to help students with disabilities receive a proper education.

Otter.ai is an AI program that audio records conversations in real-time, automatically transcribes audio into a written text and can even help generate short summaries of longer texts.

For 40-year-old business student Clay Grant, the use of this program has improved his academic career at El Camino.

Before his recent enrollment at El Camino, Grant worked as a deputy sheriff at the Los Angeles County Sheriff's Department for close to 15 years.

After suffering a stroke in 2021, Grant had difficulty with reading, spelling and memorization. Despite this, he returned to school.

Although Grant has certain limitations in the classroom, he says the transcription program serves as an assistive aid; it is not a necessity for him or his grades.

"I didn't struggle [in class], it was more of an enhancement," Grant said.

Grant liked how easy the program is to navigate, and said that it helped him retain even more information. Because of the benefit of using Otter in class, Grant believes its use should be expanded beyond students with disabilities.

"I would say anybody, even if they don't have a disability, should use [Otter]," Grant said.

The El Camino Special Resource Center has an agreement with Otter.ai that allows the college to give qualifying students licenses for the program, which they can then use in the classroom.

While Otter offers a free version of its services, premium features require a monthly fee, with individual pricing starting at $10 a month.

The Special Resource Center serves around 1,000 students and has 100 licenses available from Otter, although only 60 to 80 are used per semester.

To receive a license for Otter, students must go through a process.

It begins with proving one's disability, followed by a consultation that decides whether or not the student qualifies.

If a student qualifies for a license, they then speak with their teachers and come to an agreement about recording in the classroom that works for both of them.

Once this happens, the student signs a contract highlighting what they are and are not allowed to do with the program. Certain stipulations must be followed when using the program in an in-person classroom setting.

They are then taught how to use Otter by Brian Krause, the Special Resource Center assistive computer technology specialist.

"Roles and responsibilities indicate that the student cannot share the recording with others and it's to be used in the context of the classroom and educational use," Bonnie Mercado, Special Resource Center supervisor, said.

Although there are licenses available, not all students need one.

"Medical documentation will go ahead and indicate the level of need, so it's not cookie cutter, it's all very individualized," Mercado said.

Although not every license is put to use, there are hopes of increasing usage and possibly the number of licenses as well.

"If we need to buy more, we will as we increase [the number of licenses]," Krause said.

The Special Resource Center has been using Otter for two semesters, and they've received a lot of positive feedback.

"The interface is clean, sleek, and, again, a lot more user-friendly," Mercado said. "Trying to keep the students in mind with regards to easy use."

Krause attended the technology conference for persons with disabilities held each year by California State University Northridge.

"This is where everybody goes with the latest technology, sharing information. So this is where we find out about how other schools are using it, and people do presentations," Krause said.

Along with talking to colleagues and other peers throughout the state, the Special Resource Center found Otter to be a better fit for them than their previous program, Sonocent Audio Notetaker.

"As a person with a disability, I enjoyed having the visual representation and stuff that was there," Krause said.

Originally posted here:

El Camino using artificial intelligence audio recorder in classrooms to aid disabled students - El Camino College Union

Artificial Intelligence: Agencies Have Begun Implementation but Need to Complete Key Requirements – Government Accountability Office

Office of Management and Budget: The Director of OMB should ensure that the agency issues guidance to federal agencies in accordance with federal law, that is, to (a) inform the agencies' policy development related to the acquisition and use of technologies enabled by AI, (b) include identifying responsible AI officials (RAIO), (c) recommend approaches to remove barriers for AI use, (d) identify best practices for addressing discriminatory impact on the basis of any classification protected under federal nondiscrimination laws, and (e) provide a template for agency plans that includes the required contents. (Recommendation 1)

Office of Management and Budget: The Director of OMB should ensure that the agency develops and posts a public roadmap for the agency's policy guidance to better support AI use, and, where appropriate, include a schedule for engaging with the public and timelines for finalizing relevant policy guidance, consistent with EO 13960. (Recommendation 2)

Office of Science and Technology Policy: The Director of the Office of Science and Technology Policy should communicate a list of federal agencies that are required to implement the Regulation of AI Applications memorandum requirements (M-21-06) to inform agencies of their status as implementing agencies with regulatory authorities over AI. (Recommendation 3)

Office of Personnel Management: The Director of OPM should ensure that the agency (a) establishes or updates and improves an existing occupational series with AI-related positions; (b) establishes an estimated number of AI-related positions, by federal agency; and, based on the estimate, (c) prepares a 2-year and 5-year forecast of the number of federal employees in these positions, in accordance with federal law. (Recommendation 4)

Office of Personnel Management: The Director of OPM should ensure that the agency creates an inventory of federal rotational programs and determines how these programs can be used to expand the number of federal employees with AI expertise, consistent with EO 13960. (Recommendation 5)

Office of Personnel Management: The Director of OPM should ensure that the agency issues a report with recommendations for how the programs in the inventory can be used to expand the number of federal employees with AI expertise and shares it with the interagency coordination bodies identified by the Chief Information Officers Council, consistent with EO 13960. (Recommendation 6)

Office of Personnel Management: The Director of OPM should ensure that the agency develops a plan to either achieve consistency with EO 13960 section 5 for each AI application or retires AI applications found to be developed or used in a manner that is not consistent with the order. (Recommendation 7)

Department of Agriculture: The Secretary of Agriculture should ensure that the department (a) reviews the department's authorities related to applications of AI, and (b) develops and submits to OMB plans to achieve consistency with the Regulation of AI Applications memorandum (M-21-06). (Recommendation 8)

Department of Agriculture: The Secretary of Agriculture should ensure that the department updates its AI use case inventory to include all the required information, at minimum, and takes steps to ensure that the data in the inventory aligns with provided instructions. (Recommendation 9)

Department of Commerce: The Secretary of Commerce should ensure that the department develops a plan to either achieve consistency with EO 13960 section 5 for each AI application or retires AI applications found to be developed or used in a manner that is not consistent with the order. (Recommendation 10)

Department of Commerce: The Secretary of Commerce should ensure that the department updates its AI use case inventory to include all the required information, at minimum, and takes steps to ensure that the data in the inventory aligns with provided instructions. (Recommendation 11)

Department of Education: The Secretary of Education should ensure that the department develops a plan to either achieve consistency with EO 13960 section 5 for each AI application or retires AI applications found to be developed or used in a manner that is not consistent with the order. (Recommendation 12)

Department of Energy: The Secretary of Energy should ensure that the department updates its AI use case inventory to include all the required information, at minimum, and takes steps to ensure that the data in the inventory aligns with provided instructions. (Recommendation 13)

Department of Health and Human Services: The Secretary of Health and Human Services should ensure that the department develops a plan to either achieve consistency with EO 13960 section 5 for each AI application or retires AI applications found to be developed or used in a manner that is not consistent with the order. (Recommendation 14)

Department of Health and Human Services: The Secretary of Health and Human Services should ensure that the department updates its AI use case inventory to include all the required information, at minimum, and takes steps to ensure that the data in the inventory aligns with provided instructions. (Recommendation 15)

Department of Homeland Security: The Secretary of Homeland Security should ensure that the department develops a plan to either achieve consistency with EO 13960 section 5 for each AI application or retires AI applications found to be developed or used in a manner that is not consistent with the order. (Recommendation 16)

Department of Homeland Security: The Secretary of Homeland Security should ensure that the department (a) reviews the department's authorities related to applications of AI and (b) develops and submits to OMB plans to achieve consistency with the Regulation of AI Applications memorandum (M-21-06). (Recommendation 17)

Department of Homeland Security: The Secretary of Homeland Security should ensure that the department updates its AI use case inventory to include all the required information, at minimum, and takes steps to ensure that the data in the inventory aligns with provided instructions. (Recommendation 18)

Department of the Interior: The Secretary of the Interior should ensure that the department develops a plan to either achieve consistency with EO 13960 section 5 for each AI application or retires AI applications found to be developed or used in a manner that is not consistent with the order. (Recommendation 19)

Department of the Interior: The Secretary of the Interior should ensure that the department (a) reviews the agency's authorities related to applications of AI and (b) develops and submits to OMB plans to achieve consistency with the Regulation of AI Applications memorandum (M-21-06). (Recommendation 20)

Department of the Interior: The Secretary of the Interior should ensure that the department updates its AI use case inventory to include all the required information, at minimum, and takes steps to ensure that the data in the inventory aligns with provided instructions. (Recommendation 21)

Department of Labor: The Secretary of Labor should ensure that the department updates its AI use case inventory to include all the required information, at minimum, and takes steps to ensure that the data in the inventory aligns with provided instructions. (Recommendation 22)

Department of State: The Secretary of State should ensure that the department updates its AI use case inventory to include all the required information, at minimum, and takes steps to ensure that the data in the inventory aligns with provided instructions. (Recommendation 23)

Department of Transportation: The Secretary of Transportation should ensure that the department (a) reviews the department's authorities related to applications of AI and (b) develops and submits to OMB plans to achieve consistency with the Regulation of AI Applications memorandum (M-21-06). (Recommendation 24)

Department of Transportation: The Secretary of Transportation should ensure that the department updates its AI use case inventory to include all the required information, at minimum, and takes steps to ensure that the data in the inventory aligns with provided instructions. (Recommendation 25)

Department of the Treasury: The Secretary of the Treasury should ensure that the department develops a plan to either achieve consistency with EO 13960 section 5 for each AI application or retires AI applications found to be developed or used in a manner that is not consistent with the order. (Recommendation 26)

Department of the Treasury: The Secretary of the Treasury should ensure that the department updates its AI use case inventory to include all the required information, at minimum, and takes steps to ensure that the data in the inventory aligns with provided instructions. (Recommendation 27)

Department of Veterans Affairs: The Secretary of Veterans Affairs should ensure that the department updates its AI use case inventory to include all the required information, at minimum, and takes steps to ensure that the data in the inventory aligns with provided instructions. (Recommendation 28)

Environmental Protection Agency: The Administrator of the Environmental Protection Agency should ensure that the agency fully completes and approves its plan to either achieve consistency with EO 13960 section 5 for each AI application or retires AI applications found to be developed or used in a manner that is not consistent with the order. (Recommendation 29)

Environmental Protection Agency: The Administrator of the Environmental Protection Agency should ensure that the agency updates its AI use case inventory to include all the required information, at minimum, and takes steps to ensure that the data in the inventory aligns with provided instructions. (Recommendation 30)

General Services Administration: The Administrator of General Services should ensure that the agency develops a plan to either achieve consistency with EO 13960 section 5 for each AI application or retires AI applications found to be developed or used in a manner that is not consistent with the order. (Recommendation 31)

General Services Administration: The Administrator of General Services should ensure that the agency updates its AI use case inventory to include all the required information, at minimum, and takes steps to ensure that the data in the inventory aligns with provided instructions. (Recommendation 32)

National Aeronautics and Space Administration: The Administrator of the National Aeronautics and Space Administration should ensure that the agency updates and approves the agency's plan to achieve consistency with EO 13960 section 5 for each AI application, to include retiring AI applications found to be developed or used in a manner that is not consistent with the order. (Recommendation 33)

National Aeronautics and Space Administration: The Administrator of the National Aeronautics and Space Administration should ensure that the agency updates its AI use case inventory to include all the required information, at minimum, and takes steps to ensure that the data in the inventory aligns with provided instructions. (Recommendation 34)

U.S. Agency for International Development: The Administrator of the U.S. Agency for International Development should ensure that the agency updates its AI use case inventory to include all the required information, at minimum, and takes steps to ensure that the data in the inventory aligns with provided instructions. (Recommendation 35)

Excerpt from:

Artificial Intelligence: Agencies Have Begun Implementation but Need to Complete Key Requirements - Government Accountability Office

Pope Francis calls for international treaty on artificial intelligence – National Catholic Reporter

Pope Francis on Dec. 14 called for a binding international treaty to regulate the development and use of artificial intelligence, saying that while new advancements could result in groundbreaking progress, they could also lead to a "technological dictatorship."

"The goal of regulation, naturally, should not only be the prevention of harmful practices but also the encouragement of best practices, by stimulating new and creative approaches and encouraging individual or group initiatives," said Francis.

The pope's request came in his message for the World Day of Peace, which is celebrated by the Catholic Church each year on Jan. 1. Each year the pope sends the document to heads of state and other global leaders along with his New Year's wishes. In addition, the pope typically gives an autographed copy of the document to high-profile Vatican visitors.

"Any number of urgent questions need to be asked. What will be the consequences, in the medium and long term, of these new digital technologies?" Francis asked in his six-page document on artificial intelligence. "And what impact will they have on individual lives and on societies, on international stability and world peace?"

The release of the pope's message comes just days after what was hailed as a landmark agreement within the European Union that provides the first global framework for artificial intelligence regulation.

At the same time, in the United States, a bipartisan group of lawmakers has been formed to consider what artificial intelligence guardrails might be necessary, though there is no clear timeframe for when such legislation may be considered. And in recent months, big tech entrepreneurs in Silicon Valley have been embroiled in a series of controversies over the future of artificial intelligence and what, if any, limits should be imposed on their own industry.

Similar to Laudate Deum, the pope's October 2023 apostolic exhortation on climate change, Francis uses his World Day of Peace message to issue a clarion call for a greater commitment to multilateral action to better regulate emerging technologies.

"The global scale of artificial intelligence makes it clear that, alongside the responsibility of sovereign states to regulate its use internally, international organizations can play a decisive role in reaching multilateral agreements and coordinating their application and enforcement," he writes.

While the document acknowledges that artificial intelligence could yield tremendous benefits for human development, among them innovations in agriculture and education and improved social connections, the pope offers a stern warning that it could "pose a risk to our survival and endanger our common home."

At a time when artificial intelligence is being used to execute the ongoing war in Gaza and is widely utilized in other armed conflicts, the pope sounds the alarm that the use of such technology could not only fuel more war and the weapons trade, but make peace further unattainable.

"The ability to conduct military operations through remote control systems has led to a distancing from the immense tragedy of war and a lessened perception of the devastation caused by those weapon systems and the burden of responsibility for their use," he writes.

"Autonomous weapon systems can never be morally responsible subjects," he continues. "It is imperative to ensure adequate, meaningful and consistent human oversight of weapon systems. Only human beings are truly capable of seeing and judging the ethical impact of their actions, as well as assessing their consequent responsibilities."

Among the other admonitions Francis offers is a warning against overreliance on technology for language-processing tools, surveillance and security. Such products and innovations, he warns, raise serious questions about privacy, bias, "fake news" and other forms of technological manipulation.

At a Dec. 14 Vatican press conference, Jesuit Cardinal Michael Czerny, a close collaborator of Francis, said that the pope is "no Luddite" and celebrates genuine scientific and technological progress. But he warned that artificial intelligence is a high-stakes gamble and that such digital technologies rely on the individual and social values of their creators.

"We should not liken techno-scientific progress to a 'neutral' tool such as a hammer: whether a hammer contributes to good or evil depends upon the intentions of the user, not of the hammer-maker," said Czerny, who heads the Vatican's Dicastery for Promoting Integral Human Development.

Barbara Caputo, who teaches at the Polytechnic University of Turin and directs the university's Hub on Artificial Intelligence, called for greater technical training on artificial intelligence that is inclusive of men and women from all over the world, rather than of select elites.

"Artificial intelligence will be true progress for humanity only if its in-depth technical knowledge ceases to be the domain of the few," she said. "The Holy Father reminds us that the measure of our true humanity is how we treat our most disadvantaged sisters and brothers."

In summary, the pope writes in the new document, artificial intelligence "ought to serve our best human potential and our highest aspirations, not compete with them."

"Technological developments that do not lead to an improvement in the quality of life of all humanity, but on the contrary aggravate inequalities and conflicts, can never count as true progress," Francis warns.

Read the original:

Pope Francis calls for international treaty on artificial intelligence - National Catholic Reporter