The Prometheus League
Category Archives: Artificial Intelligence
2023 – Artificial Intelligence and Higher Ed – The Seattle U Newsroom
Posted: April 27, 2023 at 2:53 pm
Seattle University President Eduardo Peñalver and College of Science and Engineering Dean Amit Shukla, PhD, penned an opinion piece for the Puget Sound Business Journal weighing the impacts and implications of generative AI in higher education.
Here is the article as it appears in the publication:
Opinion: Generative AI is a powerful tool that requires a human touch
Generative artificial intelligence (AI) is at once intriguing, exciting and, yes, a little disturbing.
For those of us in higher education, these technologies have apparent potential to disrupt traditional teaching and learning models. There is well-founded concern about generative AI's implications for academic integrity, along with a recognition that these new technologies can enhance student learning and experience.
We are always looking at ways to help students develop their skills in critical thinking, problem solving, communication, leadership and teamwork so they can continue to shape the world. Far from rendering these sorts of capabilities superfluous, emergent AI technologies only underscore their importance.
The world faces numerous grand challenges around sustainability, public health, access to clean water, energy, food, security and many others. Successfully confronting these challenges requires an education system deeply rooted in the recognition that we all have a responsibility to make the world a better place. We need to educate future leaders who approach these challenges with morality and ethics at the heart of any solutions.
As a university in the Jesuit tradition, we believe that effective learning is always situated in a specific context rooted in previous experience and dependent upon reflection about those experiences. Education becomes most meaningful when it is put into action and reinforced by further reflection. Repeating this cycle over and over again is how transformative learning happens. It is remarkable that some of these same traits of the Jesuit educational model are shared by the reinforcement learning methods used for artificial intelligence.
Early reviews of ChatGPT, an artificial intelligence chatbot, were giddy about its astonishing capabilities. Users regaled us with computer-generated stories about lost socks written in the style of the Declaration of Independence or about removing a peanut butter and jelly sandwich from a VCR in the form of a Biblical verse.
The power of this technology is genuinely impressive, and its ability to mimic human language across a broad range of domains is unprecedented. But the technology's basic architecture is untethered to actual meaning. Additionally, these models can be biased by their training data, and they can be sensitive to paraphrasing as well as to the need to guess user intent. The power of reinforcement learning is therefore also the source of its greatest weakness.
Although AI models are constantly taking in new information, that information takes the form of new symbolic data without any context. They have no experience (or even conception) of reality. Their sole reality is a world of perceived regularities among symbolic representations and, as a result, they have no way to conceive of concepts like truth and accuracy.
Recent reports have unearthed troubling tendencies. In an essay for Scientific American, NYU psychologist Gary Marcus observed that ChatGPT was prone to "hallucinations," made-up facts that ChatGPT would nonetheless assert with great confidence.
One law professor asked ChatGPT to summarize some leading decisions of the National Labor Relations Board and it conjured fictitious cases out of thin air.
In another case, ChatGPT asserted that former Vice President Walter Mondale challenged his own president, Jimmy Carter, for the Democratic nomination in the 1980 election. (For those not alive in 1980, this did not happen, and such assertions will not help students learn history or U.S. electoral politics).
Closer to home, in an essay submitted for one of our classes at Seattle University, ChatGPT described a 2005 Supreme Court case as the cause of another case that had occurred several decades earlier.
On the other hand, many educators are effectively using these tools to supplement and enhance student learning and mastery of concepts from coding to rhetoric.
Generative AI is no replacement for human intelligence. The recent technology is based on a system of machine learning known as Reinforcement Learning from Human Feedback (RLHF). Machine learning does not yet generate what we might call understanding or comprehension.
These RLHF models are based on massive quantities of training data, a reward model for reinforcement, and an optimization algorithm for fine-tuning the language model. Their use of language emerges from deep statistical analysis of the massive data sets they use to predict the most probable sequence of words in response to the prompt they receive.
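To make the prediction step concrete, here is a toy sketch of choosing the most probable next words. The tiny hand-written bigram table is a hypothetical stand-in for the deep statistical analysis described above; the reward model and optimization algorithm of RLHF are deliberately left out.

```python
# Toy next-word prediction: a hypothetical bigram table stands in for a
# trained language model. Real systems score continuations over huge
# vocabularies with deep neural networks; this only shows the idea of
# picking the most probable next word given what came before.
bigram_counts = {
    "artificial": {"intelligence": 9, "sweetener": 1},
    "intelligence": {"is": 6, "requires": 4},
    "is": {"powerful": 6, "limited": 4},
}

def next_word(previous: str) -> str:
    """Greedy decoding: return the most probable word after `previous`."""
    candidates = bigram_counts.get(previous)
    if not candidates:
        return "<end>"
    total = sum(candidates.values())
    probs = {word: count / total for word, count in candidates.items()}
    return max(probs, key=probs.get)

sequence = ["artificial"]
while sequence[-1] != "<end>" and len(sequence) < 6:
    sequence.append(next_word(sequence[-1]))
print(" ".join(sequence))  # artificial intelligence is powerful <end>
```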
Clearly, there are limits to this generative technology and we must be mindful of that.
What ChatGPT and other AI engines based on this technology require is the guidance of educated human beings rooted in the reality and constraints of the world. In an increasingly complex and technologically driven world, the challenges we face are inherently multidisciplinary. They require us to incorporate context and perspectives, learn from our experiences, take ethical actions and evaluate and reflect with empathy to create a more just and humane world. They require leaders to be innovative, inclusive and committed to the truth.
As we continue to build and improve these tools, we must recognize that they will continue to reflect the limitations of the human beings who have created them, as well as the limitations intrinsic to their architecture. Even while they reduce the challenges of certain kinds of work, they generate the need for new kinds of work and reflection.
As these models proliferate and continue to grow in capability, it will become the task of institutions like ours to train future leaders who can understand and manage them by developing, implementing and managing policy for responsible use that is grounded in ethics and morality and in service of humanity.
Artificial intelligence tools are designed by human beings and use learning models trained by the data we provide. It is therefore our responsibility to ensure that AI's use of those inputs contributes to the betterment of the world. It is our responsibility to question the results AI generates and, applying our ethically informed judgment, to correct its biases and inaccuracies. Doing this will continue to require substantial human input, attention and care.
The future demands leaders who are innovative and creative, who can understand and effectively wield the new tools that generative AI is making available. Rather than seeking to suppress or hide from these technologies, higher education needs to respond in a collaborative way to these emerging technologies so we can help our students to use them to augment their own capabilities and enhance their learning.
Finally, we feel it necessary to make clear that this commentary was not written by artificial intelligence. Instead, it was composed by two higher education leaders who are thinking about this subject a lot these days.
We are confident that, no matter what the future of these technologies entails, there will always be a need for thoughtful reflections produced by real people. If higher education responds to emergent technologies in a wise and thoughtful way, it can and will continue to be at the forefront of forming such human beings.
Save the Date: Seattle University will host an Ethics and Technology conference in late June, bringing together great minds in science, tech, ethics and religion, including academic, business and nonprofit leaders.
Current Applications of Artificial Intelligence in Oncology – Targeted Oncology
Posted: at 2:53 pm
The evolution of artificial intelligence (AI) is reshaping the field of oncology by providing new devices to detect cancer, individualize treatments, manage patients, and more.
Given the large number of patients diagnosed with cancer and the amount of data produced during cancer treatment, interest in the application of AI to improve oncologic care is expanding and holds potential.
"An aspect of care delivery where AI is exciting and holds so much promise is democratizing knowledge and access to knowledge. Generating more data, bringing together the patient data with our knowledge and research, and developing these advanced clinical decision support systems that use AI are going to be ways in which we can make sure clinicians can provide the best care for each individual patient," Tufia C. Haddad, MD, told Targeted Oncology™.
While cancer treatment options have only improved over past decades, there is an unmet medical need to make these cancer treatments more affordable and personalized for each patient with cancer.1
As we continue to learn about and better understand the use of AI in oncology, experts can improve outcomes, develop approaches to solve problems in the space, and advance the development of treatments that are made available to patients.
AI is a branch of computer science concerned with the simulation of intelligent behavior in computers. These computers follow algorithms that are established by humans or learned by the computer to support decisions and complete certain tasks. Under the AI umbrella lie important subfields.
Machine learning is the process by which a computer improves its own performance by continually incorporating newly generated data into an existing iterative model. According to the FDA, one of the potential benefits of machine learning is its ability to create new insights from the vast amount of data generated during the delivery of health care every day.2
"Sometimes, we can use machine learning techniques in a way where we are training the computer to, for example, discern benign pathology from malignant pathology, and so we train the computer with annotated datasets, where we are showing the different images of benign vs. malignant. Ultimately, the computer will bring forward an algorithm, and we then take separate data sets that are no longer labeled as benign or malignant. Then we continue to train that algorithm and fine-tune the algorithm," said Haddad, a medical oncologist and associate professor of oncology at the Rochester, Minnesota, campus of the Mayo Clinic.
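As a rough sketch of the annotate-then-train workflow Haddad describes, the snippet below fits a classifier on labeled synthetic feature vectors ("benign" vs. "malignant") and then applies it to held-out cases. The features, the scikit-learn logistic-regression model, and all numbers are illustrative assumptions, not anything drawn from a clinical system.

```python
# Minimal supervised-learning sketch: train on annotated cases, then
# run inference on unseen ones. Synthetic vectors stand in for
# image-derived features (e.g., lesion size, texture statistics).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
benign = rng.normal(loc=0.0, scale=1.0, size=(100, 5))
malignant = rng.normal(loc=1.5, scale=1.0, size=(100, 5))
X = np.vstack([benign, malignant])
y = np.array([0] * 100 + [1] * 100)  # annotations: 0 = benign, 1 = malignant

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression().fit(X_train, y_train)   # "training the computer"
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
print("predictions on unlabeled cases:", clf.predict(X_test[:5]))
```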
Deep learning is a subset of machine learning in which mathematical algorithms are built from multi-layered computational units that resemble human cognition. These include neural networks of different architecture types, including recurrent neural networks, convolutional neural networks, and long short-term memory networks.
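The following sketch shows minimal PyTorch instances of two of the architecture families just named: a convolutional network and a long short-term memory network. All layer sizes are arbitrary placeholders.

```python
# Minimal examples of two deep learning architecture types in PyTorch.
import torch
import torch.nn as nn

# Convolutional network: multi-layered units suited to image data.
cnn = nn.Sequential(
    nn.Conv2d(in_channels=1, out_channels=8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(8 * 16 * 16, 2),   # two output classes
)
print(cnn(torch.randn(1, 1, 32, 32)).shape)   # torch.Size([1, 2])

# Long short-term memory: a recurrent unit suited to sequential data.
lstm = nn.LSTM(input_size=10, hidden_size=16, batch_first=True)
outputs, (hidden, cell) = lstm(torch.randn(1, 20, 10))  # 20 time steps
print(outputs.shape)                          # torch.Size([1, 20, 16])
```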
"Many of the applications integrated into commercial systems are proprietary, so it is hard to know what specific AI methods underlie their system. For some applications, even simple rules-based systems still hold value. However, the recent surge in AI advances is primarily driven by more advanced machine learning methods, especially neural network-based deep learning, in which the AI teaches itself to learn patterns from complex data," Danielle S. Bitterman, MD, told Targeted Oncology™. "For many applications, deep learning methods have better performance, but come at a trade-off of being 'black boxes,' meaning it is difficult for humans to understand how they arrive at their decision. This creates new challenges for safety, trust, and reliability."
Utilizing AI is important because the capacity of the human brain to process information is limited, creating an urgent need for alternative strategies to process big data. With machine learning and AI, clinicians can access more data and benefit from greater storage and computing power.
As of October 5, 2022, the FDA had approved 521 medical devices which utilize AI and/or machine learning, with the majority of devices in the radiology space.2
"Primarily, where it is being more robustly developed and, in some cases, now at the point of receiving FDA approval and starting to be applied and utilized in the hospitals and clinics, is in the cancer diagnostic space. This includes algorithms to help improve the efficiency and accuracy of, for example, interpreting mammograms. Radiology services, and to some extent, pathology, are where some of these machine learning and deep learning algorithms and AI models are being used," said Haddad.
In radiology, there are many applications of AI, including deep learning algorithms to analyze imaging data that is obtained during routine cancer care. According to Haddad, some of this can include evaluating disease classification, detection, segmentation, characterization, and monitoring a patient with cancer.
According to radiation oncologist Matthew A. Manning, MD, AI is already a backbone of some clinical decision support tools.
"The use of AI in oncology is rapidly increasing, and it has the potential to revolutionize cancer diagnosis, treatment, and research. It helps with driving automation. In radiation oncology, there are different medical record platforms necessary for the practice that are often separate from the hospital medical record. Creating these interfaces that allow reductions in the redundancy of work for both clinicians and administrative staff is important. Tools using AI and business intelligence are accelerating our efforts in radiation oncology," Manning, former chief of Oncology at Cone Health, told Targeted Oncology™ in an interview.
Through combining AI with human expertise, mammography screening has been improved for patients with breast cancer. Additionally, deep learning models have been trained to classify and detect disease subtypes based on images and genetic data.
To find lung nodules or brain metastases on MRI readouts, AI uses bounding boxes to locate a lesion or object of interest and classify it. Detection using AI supports physicians when they read medical images.
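Detection outputs like these are commonly scored by intersection-over-union (IoU) between a predicted box and an expert's annotation. The sketch below uses made-up coordinates; the article does not say which scoring any particular clinical tool uses.

```python
# Compare a predicted bounding box against a radiologist's annotation.
def iou(box_a, box_b):
    """Intersection-over-union for boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)   # overlap rectangle
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union else 0.0

predicted = (30, 30, 70, 70)   # model's proposed nodule location
annotated = (35, 35, 75, 75)   # ground-truth box from a radiologist
print(f"IoU = {iou(predicted, annotated):.2f}")   # IoU = 0.62
```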
Segmentation involves recognizing these lesions and assessing their volume and size by classifying individual pixels as belonging to an organ or a lesion. An example is brain gliomas, which require quantitative metrics for management, risk stratification, and prognostication.
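At the pixel level, agreement between a predicted mask and a reference mask is often summarized with the Dice coefficient. The sketch below uses tiny synthetic masks purely for illustration.

```python
# Dice coefficient: 2|A ∩ B| / (|A| + |B|) over binary pixel masks.
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

predicted_mask = np.zeros((8, 8), dtype=bool)
reference_mask = np.zeros((8, 8), dtype=bool)
predicted_mask[2:6, 2:6] = True   # model's segmented lesion pixels
reference_mask[3:7, 3:7] = True   # expert-annotated lesion pixels
print(f"Dice = {dice(predicted_mask, reference_mask):.2f}")   # Dice = 0.56
```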
Deep learning methods have been applied to medical images to determine a large number of features that are undetectable by humans.3 An example of using AI to characterize tumors comes from the study of radiomics, which combines disease features with clinicogenomic information. These methods can inform models that successfully predict treatment response and/or adverse effects from cancer treatments.
Radiomics can be applied to a variety of cancer types, including liver, brain, and lung tumors. According to research in Future Science OA,1 deep learning using radiomic features from brain MRI can also help differentiate brain gliomas from brain metastases with performance similar to that of trained neuroradiologists.
Utilizing AI can dramatically change the ways patients with cancer are monitored. It can detect a multitude of discriminative features in imaging that are unreadable by humans. One process that is normally performed by radiologists and that plays a major role in determining patient outcomes is measuring how tumors react to cancer treatment.4 However, the process is known to be labor-intensive, subjective, and prone to inconsistency.
To alleviate this frequent problem, researchers developed a deep learning-based method that is able to automatically annotate tumors in patients with cancer. In a small study, researchers from Johns Hopkins Kimmel Comprehensive Cancer Center and its Bloomberg~Kimmel Institute for Cancer Immunotherapy successfully trained a machine learning algorithm to predict which patients with melanoma would respond to treatment and which would not. This open-source program, DeepTCR, was valuable as a predictive tool and helped researchers understand the biological mechanisms and responses to immunotherapy.
This program can also help clinicians monitor patients by stratifying patient outcomes, identifying predictive features, and helping them manage patients with the best treatments.
Proper screening for early diagnosis and treatment is a big factor when combating cancer. In the current space, AI makes obtaining results easier and more convenient.
"One of the important things to think about with AI, or the capabilities of AI in oncology, is the ability to see what the human eye and the human mind cannot see or interpret today. It is gathering all these different data points and developing or recognizing patterns in the data to help with interpretation. This can augment some of the accuracy for cancer diagnostics," added Haddad.
AI may also provide faster, more accurate results, especially in breast cancer screening. While the incorporation of AI into screening methods is a relatively new and emerging field, it is promising for the early detection of breast cancer, thus resulting in a better prognosis. Mammography remains the most popular method of breast cancer screening.
Another example of AI in the current treatment landscape, for patients with colon cancer, is the colonoscopy. Colon cancer screening uses a camera to give the gastroenterologist the ability to see inside the colon and bowel. By taking those images and applying machine learning and deep learning neural network techniques, there is an ability to develop algorithms that not only help to better detect polyps or precancerous lesions, but also discern early-stage from advanced cancers.
In addition, deep learning models can also help clinicians predict the future development of cancer and some AI applications are already being implemented in clinical practice. With further development, as well as refinement of the already created devices, AI will be further applied.
"In terms of improving cancer screening, AI has been applied in radiology to analyze and identify tumors on scans. In the current state, AI is making its way into computer-assisted detection on diagnostic films. Looking at a chest CT, trying to find a small nodule, we see that AI is very powerful at finding spots that the human eye may miss. In terms of radiation oncology, we anticipate AI will be very useful ultimately in the setting of clinical decision support," said Manning.
For oncologists, the emergence of the COVID-19 pandemic and the time spent on clinical documentation have only heightened the feeling of burnout. However, Haddad notes that a potential solution to help mitigate feelings of burnout is the development and integration of precision technologies, including AI, as they can help reduce the large workload and increase productivity.
"There are challenges with workforce shortages as a consequence of the COVID-19 pandemic, with a lot of burnout at unprecedented rates. Thinking about how artificial intelligence can help make [clinicians'] jobs easier and make them more efficient. There are smart hospitals, smart clinic rooms, where just from the ingestion of voice, conversations between the physician and patient can be translated into clinical documentation to help reduce the time that clinicians need to spend doing the tedious work that we know contributes to burnout, including doing the clinical documentation, prior authorizations, order sets, etc," said Haddad.
Numerous studies have been published regarding the potential of machine learning and AI for the prognostication of cancer. Results from these trials have suggested that the performance and productivity of oncologists can be improved with the use of AI.5
An example is the prediction of recurrences and overall survival. Deep learning can enhance precision medicine and improve clinical decisions, and with this, oncologists may feel emotional satisfaction, reduced depersonalization, and increased professional efficacy. This leaves clinicians with the potential for increased job satisfaction and a reduced feeling of burnout.
Research also has highlighted that the intense workload contributes to occupational stress. This in turn has a negative effect on the quality of care that is offered to patients.
Additionally, it has been reported that administrative tasks, such as collecting clinical, billing, or insurance information, contribute to the workload faced by clinicians, and this leads to a significantly limited time for direct face-to-face interaction between patients and their physicians. Thus, AI has helped significantly reduce this administrative burden.
Overall, if clinicians can do less of the tedious clerical work and spend more time doing the things they were trained to do, like having time with the patient, their overall outlook on their job will be more positive.
"AI will help to see that joy restored and to have a better experience for our patients. I believe that AI is going to transform most aspects of medicine over the coming years. Cancer care is extremely complex and generates huge amounts of varied digital data which can be tapped into by computational methods. Lower-level tasks, such as scheduling and triaging patient messages, will become increasingly automated. I think we will increasingly see clinical decision-support applications providing diagnostic and treatment recommendations to physicians. AI may also be able to generate novel insights that change our overall approach to managing cancers," said Haddad.
While there have been increasing amounts of updates and developments for AI in the oncology space, according to Bitterman, a large gap remains between AI research and what is already being used.
To bridge this gap, Bitterman notes that there must be further understanding by both clinicians and patients regarding how to properly interact with AI applications, and best optimize interactions for safety, reliability, and trust.
Digital data is still very siloed within institutions, and so regulatory changes are going to be needed before we can realize the full value of AI. We also need better standards and methods to assess bias and generalizability of AI systems to make sure that advances in AI don't leave minority populations behind and worsen health inequities.
Additionally, there is a concern that patients' voices are being left out of the AI conversation. According to Bitterman, AI applications are developed using patients' data and, as a result, will likely transform their care journey. To further improve the use of AI for patients with cancer, it is key to seek out patients' opinions.
With further research, it should be possible to overcome the current challenges being faced with AI to continue to improve its use, make AI more popular, and improve the overall quality-of-life for patients with cancer.
"We need to engage patients at every step of the AI development/implementation lifecycle, and make sure that we are developing applications that are patient-centered and prioritize trust, safety, and patients' lived experiences," concluded Bitterman.
The Case for Realistic Action to Regulate Artificial Intelligence – The Information
Posted: at 2:53 pm
The overnight success of ChatGPT and GPT-4 marks a clear turning point for artificial intelligence. It also marks an inflection point for public discourse about the risks and benefits of AI for our society. Practitioners, policymakers and pundits alike have voiced loud concerns, ranging from fear of a potential flood of AI-generated disinformation to the existential risks of superhuman intelligence whose goals may not align with humanity's best interests.
The speed of AI advances is now measured in days and weeks, while government regulation generally takes years or even decades. To wit, we still don't have a federal privacy law after more than 20 years of public discussion. Record levels of lobbying by the tech industry have lined the pockets of Washington influence peddlers and ground the gears of technology regulation to a halt, even though distrust of big tech is as bipartisan an issue as they come.
WEIRD AI: Understanding what nations include in their artificial intelligence plans – Brookings Institution
Posted: at 2:53 pm
In 2021 and 2022, the authors published a series of articles on how different countries are implementing their national artificial intelligence (AI) strategies. In these articles, we examined how different countries view AI and looked at their plans for evidence to support their goals. In later papers in the series, we examined who was winning and who was losing the race to national AI governance, as well as the importance of people skills versus technology skills, and concluded with what the U.S. needs to do to become competitive in this domain.
Since these publications, several key developments have occurred in national AI governance and international collaborations. First, one of our key recommendations was that the U.S. and India create a partnership to work together on a joint national AI initiative. Our argument was as follows: India produces far more STEM graduates than the U.S., and the U.S. invests far more in technology infrastructure than India does. A U.S.-India partnership eclipses China in both dimensions, and a successful partnership could allow the U.S. to quickly leapfrog China in all meaningful aspects of AI. In early 2023, U.S. President Biden announced a formal partnership with India to do exactly what we recommended to counter the growing threat of China and its AI supremacy.
Second, as we observed in our prior paper, the U.S. federal government has invested in AI, but largely in a decentralized approach. We warned that this approach, while it may ultimately develop the best AI solution, requires a long ramp up and hence may not achieve all its priorities.
Finally, we warned that China is already in the lead on the achievement of its national AI goals and predicted that it would continue to surpass the U.S. and other countries. News has now come that China is planning on doubling its investment in AI by 2026, and that the majority of the investment will be in new hardware solutions. The U.S. State Department also is now reporting that China leads the U.S. in 37 out of 44 key areas of AI. In short, China has expanded its lead in most AI areas, while the U.S. is falling further and further behind.
Considering these developments, our current blog shifts focus away from national AI plan achievement to a more micro-level view: understanding the elements of the particular plans of the countries included in our research and what drove their strategies. At a macro level, we also seek to understand whether groups of like-minded countries, which we have grouped by cultural orientation, are taking the same or different approaches to AI policies. This builds upon our previous posts by seeking and identifying consistent themes across national AI plans from the perspective of underlying national characteristics.
In this blog, the countries that are part of our study include 34 nations that have produced public AI policies, as identified in our previous blog posts: Australia, Austria, Belgium, Canada, China, Czechia, Denmark, Estonia, Finland, France, Germany, India, Italy, Japan, South Korea, Lithuania, Luxembourg, Malta, Mexico, Netherlands, New Zealand, Norway, Poland, Portugal, Qatar, Russia, Serbia, Singapore, Spain, Sweden, UAE, UK, Uruguay, and USA.
For each, we examine six key elements in these national AI plans: data management, algorithmic management, AI governance, research and development (R&D) capacity development, education capacity development, and public service reform capacity development. These elements provide insight into how individual countries approach AI deployment. In doing so, we examine commonalities between culturally similar nations, which can lead to both higher and lower levels of investment in each area.
We do this by exploring similarities and differences through what is commonly referred to as the WEIRD framework, a typology of countries based on how Western, Educated, Industrialized, Rich, and Democratic they are. In 2010, the concept of WEIRD-ness originated with Joseph Henrich, a professor of human evolutionary biology at Harvard University. The framework describes a set of countries with a particular psychology, motivation, and behavior that can be differentiated from other countries. WEIRD is, therefore, one framework by which countries can be grouped and differentiated to determine if there are commonalities in their approaches to various issues based on similar decision-making processes developed through common national assumptions and biases.
Below are our definitions of each element of national AI plans, followed by where they fall along the WEIRD continuum.
Data management refers to how the country envisages capturing and using the data derived from AI. For example, the Singapore plan defines data management as follows: "[A]s the nation's custodian of personal and administrative data, the Government holds a data resource that many companies find valuable. The Government can help drive cross-sectoral data sharing and innovation by curating, cleaning, and providing the private sector with access to Government datasets."
Algorithmic management addresses the country's awareness of algorithmic issues. For example, the German plan states that "[t]he Federal Government will assess how AI systems can be made transparent, predictable and verifiable so as to effectively prevent distortion, discrimination, manipulation and other forms of improper use, particularly when it comes to using algorithm-based prognosis and decision-making applications."
AI governance refers to the inclusivity, transparency and public trust in AI and the need for appropriate oversight. The language in the French plan asserts: "[I]n a world marked by inequality, artificial intelligence should not end up reinforcing the problems of exclusion and the concentration of wealth and resources. With regards to AI, a policy of inclusion should thus fulfill a dual objective: ensuring that the development of this technology does not contribute to an increase in social and economic inequality; and using AI to help genuinely reduce these problems."
Overall, capacity development is the process of acquiring, updating and reskilling human, organizational and policy resources to adapt to technological innovation. We examine three types of capacity development: R&D, education, and public service reform.
R&D capacity development focuses on government incentive programs for encouraging private sector investment in AI. For example, the Luxembourg plan states: "[T]he Ministry of the Economy has allocated approximately €62M in 2018 for AI-related projects through R&D grants, while granting a total of approximately €27M in 2017 for projects based on this type of technology. The Luxembourg National Research Fund (FNR), for example, has increasingly invested in research projects that cover big data and AI-related topics in fields ranging from Parkinson's disease to autonomous and intelligent systems: approximately €200M over the past five years."
Education capacity development focuses on learning in AI at the post-secondary, vocational and secondary levels. For example, the Belgian plan states: "Overall, while growing, the AI offering in Belgium is limited and insufficiently visible. [W]hile university-college PXL is developing an AI bachelor programme, to date, no full AI Master or Bachelor programmes exist."
Public service reform capacity development focuses on applying AI to citizen-facing or supporting services. For example, the Finnish plan states: "Finland's strengths in piloting [AI projects] include a limited and harmonised market, neutrality, abundant technology resources and support for legislation. Promoting an experimentation culture in public administration has brought added agility to the sector's development activities."
In the next step of our analysis, we identify the level of each country and then group countries by their WEIRD-ness. Western uses the World Population Review's definition of the Latin West, and is defined by being in or out of this group, which is a group of countries sharing a common linguistic and cultural background, centered on Western Europe and its post-colonial footprint. Educated is based on the mean years of schooling in the UN Human Development Index, where 12 years (high school graduate) is considered the dividing point between high and low education. Industrialized adopts the World Bank industry value added of GDP, where a median value of $3500 USD per capita of value added separates high from low industrialization. Rich uses the Credit Suisse Global Wealth Databook mean wealth per adult measure, where $125k USD wealth is the median amongst countries. Democratic applies the Democracy Index of the Economist Intelligence Unit, which differentiates between shades of democratic and authoritarian regimes and where the midpoint of hybrid regimes (5.0 out of 10) is the dividing point between democratic and non-democratic. For example, Australia, Austria, and Canada are considered Western, while China, India and Korea are not. Germany, the U.S., and Estonia are seen as Educated, while Mexico, Uruguay and Spain are not. Canada, Denmark, and Luxembourg are considered Industrialized, while Uruguay, India and Serbia are not. Australia, France, and Luxembourg are determined to be Rich while China, Czechia and India are not. Finally, Sweden, the UK and Finland are found to be Democratic, while China, Qatar and Russia are not.
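Taken together, these cut-offs amount to a simple classification rule. The sketch below encodes only the thresholds stated above; the sample country figures are invented for illustration.

```python
# Classify a country on each WEIRD dimension using the stated cut-offs.
from dataclasses import dataclass

@dataclass
class Country:
    name: str
    latin_west: bool          # in the "Latin West" group
    schooling_years: float    # UN HDI mean years of schooling
    industry_va_pc: float     # industry value added, USD per capita
    wealth_per_adult: float   # mean wealth per adult, USD
    democracy_index: float    # EIU Democracy Index, 0-10

def weird_code(c: Country) -> str:
    """Uppercase = high on a dimension, lowercase = low (e.g., 'wEIrd')."""
    flags = [
        ("W", c.latin_west),
        ("E", c.schooling_years >= 12),
        ("I", c.industry_va_pc >= 3500),
        ("R", c.wealth_per_adult >= 125_000),
        ("D", c.democracy_index >= 5.0),
    ]
    return "".join(ch if high else ch.lower() for ch, high in flags)

# Invented figures, chosen only to show how the coding works.
print(weird_code(Country("Exampleland", True, 13.2, 9_000, 180_000, 8.1)))  # WEIRD
print(weird_code(Country("Samplestan", False, 12.5, 6_000, 40_000, 3.0)))   # wEIrd
```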
Figure 1 maps the 34 countries in our sample as follows. Results ranged from the pure WEIRD countries, including many Western European nations and some close trading partners and allies such as the United States, Canada, Australia, and New Zealand.
Figure 1: Countries classified by WEIRD framework[1]
By comparing each grouping of countries with the presence or absence of our six plan elements (data management, algorithmic management, AI governance, and the three types of capacity development), we can understand how each country views AI, alone and within its particular grouping. For example, wEIRD Japan and Korea are high in all areas except Western, and both invest heavily in R&D capacity development but not education capacity development.
The methodology used for this blog was Qualitative Comparative Analysis (QCA), which seeks to identify causal "recipes" of conditions related to the occurrence of an outcome in a set of cases. In QCA, each case is viewed as a configuration of conditions (such as the five elements of WEIRD-ness) where each condition does not have a unique impact on the outcome (an element of AI strategy), but rather acts in combination with all other conditions. Application of QCA can provide several configurations for each outcome, including identifying core conditions that are vital for the outcome and peripheral conditions that are less important. The analysis for each plan element is described below.
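As a cartoon of this configurational logic, the sketch below groups invented cases by their WEIRD code and checks how consistently each configuration shows an outcome; it is an illustration of QCA's consistency idea, not the authors' actual analysis or data.

```python
# Toy QCA-style grouping: each case is a configuration of conditions
# (its WEIRD code); a configuration "explains" an outcome when its
# cases consistently show that outcome. All data below are invented.
from collections import defaultdict

# (country, WEIRD code, outcome: has a strong data management plan?)
cases = [
    ("A", "WEIRD", True), ("B", "WEIRD", False),
    ("C", "WEIrD", True), ("D", "WEIrD", True),
    ("E", "weIrd", False), ("F", "weIrd", False),
]

by_config = defaultdict(list)
for _, code, outcome in cases:
    by_config[code].append(outcome)

for code, outcomes in by_config.items():
    consistency = sum(outcomes) / len(outcomes)
    print(f"{code}: consistency = {consistency:.2f}")
# A configuration with consistency 1.00 (here WEIrD) is a candidate
# "recipe" for the outcome; mixed configurations (WEIRD) are not.
```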
Data management has three different configurations of countries that have highly developed plans. In the first configuration, for WeIRD countries, those that are Western, Industrialized, Rich, and Democratic but not Educated (e.g., France, Italy, Portugal, and Spain), being Western was the best predictor of having data management as part of their AI plan, and the other components were of much less importance. Of interest, not being Educated was also core, making it more likely that these countries would have data management as part of their plan. This would suggest that these countries recognize that they need to catch up on data management and have put plans in place that exploit their western ties to do so.
In the second configuration, which features WEIrD Czechia, Estonia, Lithuania, and Poland, being Democratic was the core and hence most important predictor and Western, Educated, and Industrialized were peripheral and hence less important. Interestingly, not being Rich made it more likely to have this included. This would suggest that these countries have developed data management plans efficiently, again leveraging their democratic allies to do so.
In the third and final configuration, which includes the WeirD countries of Mexico, Serbia, Uruguay, and weirD India, the only element whose presence mattered was the level of Democracy. That these countries were able to do so in low wealth, education, and industrialization contexts demonstrates the importance of investment in AI data management as a low-cost intervention in building AI policy.
Taken together, there are many commonalities, but a country being Western and/or Democratic were the best predictors of a country having a data governance strategy in its plan. In countries that are Western or Democratic, there is often a great deal of public pressure (and worry) about data governance, and we suspect these countries included data governance to satisfy the demands of their populace.
We also examined what conditions led to the absence of a highly developed data management plan. There were two configurations that had consistently low development of data management. In the first configuration, which features wEIrd Russia and the UAE and weIrd China, being neither Rich nor Democratic were core conditions. In the second configuration, which includes wEIRD Japan and Korea, the core conditions were being not Western but highly Educated. Common across both configurations was that all countries were Industrialized but not Western. This would suggest that data management is more a concern of western countries than non-western countries, whether they are democratic or not.
However, we also found that the largest grouping of countries, the 15 WEIRD countries in the sample, were not represented, falling in neither the high nor the low configurations. We believe that this is due to there being multiple different paths for AI policy development, and hence they do not all stress data governance and management. For example, Australia, the UK, and the US have strong data governance, while Canada, Germany and Sweden do not. Future investigation is needed to differentiate between the WEIRDest countries.
For algorithmic management, except for WeirD Mexico, Serbia, and Uruguay, there was no discernable pattern in terms of which countries included an acknowledgment of the need and value of algorithmic management. We had suspected that more WEIRD countries would be sensitive to this, but our data did not support this belief.
We examined the low outcomes for algorithmic management and found two configurations. The first was wEIRD Japan and Korea and weIRD Singapore, where the core conditions were being not Western but Rich and Democratic. The second was wEIrd Russia and the UAE and weIrd China, where the core elements were not Rich and not Democratic. Common across the two configurations, with six countries, was being not Western but Industrialized. Again, this suggests that algorithmic management is more a concern of western nations than non-western ones.
For AI governance, we again found that, except for WeirD Mexico, Serbia, and Uruguay, there was no discernable pattern for which countries included this in their plans and which countries did not. We believed AI governance and algorithmic management to be more advanced in WEIRD nations, and hence this was an unexpected result.
We examined the low outcomes for AI governance and found three different configurations. The first was wEIRD Japan and Korea and weIRD Singapore, where the core conditions were being not Western but Rich and Democratic. The second was wEIrd Russia and the UAE, where the core elements were not Western but Educated. The third was weirD India, where the core elements were being not Western but Democratic. Common across the three configurations, with six countries, was not being of western classification. Again, this suggests that AI governance is more a concern of western nations than non-western ones.
There was a much clearer picture of high R&D development, where we found four configurations. The first configuration was the 15 WEIRD countries plus the WEIrD ones: Czechia, Estonia, Lithuania, and Poland. For the latter, while they are not among the richer countries, they still manage to invest heavily in developing their R&D.
The second configuration included WeirD Mexico, Serbia, Uruguay, and weirD India. Like data governance, these countries were joined by their generally democratic nature but lower levels of education, industrialization, and wealth.
Conversely, the third configuration included the non-western, non-democratic nations such as weIRd Qatar and weIrd China. This would indicate that capability development is of primary importance for such nations at the expense of other policy elements. The implication is that investment in application of AI is much more important to these nations than its governance.
Finally, the fourth configuration included the non-western but democratic nations such as wEIRD Japan, Korea, and weIRD Singapore. This would indicate that the East, whether democratic or not, is as equally focused on capability development and R&D investment as the West.
We did not find any consistent configurations for low R&D development across the 34 nations.
For high education capacity development, we found two configurations, both with Western but not Rich as core conditions. The first includes WEIrD Czechia, Estonia, Lithuania, and Poland, while the second includes WeirD Mexico, Serbia, and Uruguay. Common conditions for these seven nations were being Western and Democratic but not Rich; the former countries were Educated and Industrialized, while the latter were not. These former eastern-bloc and colonial nations appear to be focusing on creating educational opportunities to catch up with other nations in the AI sphere.
Conversely, we found four configurations of low education capacity development. The first includes wEIRD Japan and Korea and weIRD Singapore, representing the non-Western but Industrialized, Rich, and Democratic nations. The second was weIRd Qatar, not Western or Democratic but Rich and Industrialized, while the third was wEIrd Russia and the UAE. The last was weirD India, being Democratic but low in all other areas. The common factor across these countries was being non-western, demonstrating that educational investment to improve AI outcomes is a primarily western phenomenon, irrespective of other plan elements.
We did not find any consistent configurations for high public service reform capacity development, but we did find three configurations for low investment in such plans. The first includes wEIRD Japan and Korea, the second was weIRd Qatar, and the last was weirD India. The common core factor across these three configurations was that they were not western countries, further highlighting the different approaches taken by western and non-western countries.
Overall, we expected more commonality in which countries included certain elements, and the fragmented nature of our results likely reflects a very early stage of AI adoption and countries simply trying to figure out what to do. We believe that, over time, WEIRD countries will start to converge on what is important and those insights will be reflected in their national plans.
There is one other message that our results pointed out: the West and the East are taking very different approaches to AI development in their plans. The East is almost exclusively focused on building up its R&D capacity and is largely ignoring the traditional guardrails of technology management (e.g., data governance, data management, education, public service reform). By contrast, the West is almost exclusively focused on ensuring that these guardrails are in place and is spending relatively less effort on building the R&D capacity that is essential to AI development. This is perhaps the reason why many Western technology leaders are calling for a six-month pause on AI development, as that pause could allow suitable guardrails to be put in place. However, we are extremely doubtful that countries like China will see the wisdom in taking a six-month pause and will likely use the pause to create even more space between their R&D capacity and the rest of the world. This "all gas, no brakes" Eastern philosophy has the potential to cause great global harm but will undeniably increase their domination in this area. We have little doubt about the need for suitable guardrails in AI development but are also equally convinced that a six-month pause is unlikely to be honored by China. Because of China's lead, the only prudent strategy is to build the guardrails while continuing to engage in AI development. Otherwise, the West will continue to fall further behind, resulting in the development of a great set of guardrails but with nothing of value to guard.
[1] A capital letter denotes being high in an element of WEIRD-ness while a lowercase letter denotes being low in that element. For example, W means western while w means not western. (Back to top)
The AI Arms Race: Investing in the Future of Artificial Intelligence … – The Motley Fool
Posted: at 2:53 pm
The release of ChatGPT, a generative chatbot developed by the company OpenAI, caused quite a stir. It moved the artificial intelligence (AI) conversation from the tech world to the mainstream seemingly overnight. AI is making headlines, and investors wonder which companies have the upper hand in the arms race.
ChatGPT is innovative because of its ability to communicate using natural language processing and because it is generative, capable of producing various types of content. You've probably experienced basic customer service bots that can give canned responses to limited queries. But generative chatbots can develop original responses. Its capabilities include answering questions, assisting with composition, summarizing content, and more. This is why Microsoft (MSFT 2.80%) has made a multiyear and multibillion-dollar investment in OpenAI.
The reason is simple. Microsoft is eying the vast search advertising market currently dominated by Alphabet's (GOOG 4.29%) (GOOGL 4.33%) Google Search, as shown below. It is using ChatGPT tech to get there.
The chasm between Google Search and Microsoft Bing is vast, so Microsoft has everything to gain. After all, Google Search brought in $160 billion in revenue for Alphabet in 2022, 80% of Microsoft's total fiscal 2022 sales.
Bing isn't Microsoft's only AI initiative. The company's comprehensive cybersecurity offerings leverage AI to fight against bad actors, and Microsoft CoPilot embeds into Microsoft Office apps to generate presentations, draft emails, and summarize texts. CEO Satya Nadella appears to be all-in on AI.
Microsoft's results for the fiscal third quarter of 2023 are simply outstanding: $52.9 billion in sales on 7% growth. Operating income for the quarter was $22.4 billion (up 10%), with a fantastic 42% margin.
The stock does not come cheap, as you'd probably expect. It trades near its 52-week high, and its price-to-earnings (P/E) ratio over 32 is higher than its single-year and three-year averages. Because of this, it might behoove new Microsoft investors to keep an eye out for a pullback in the stock price.
Microsoft is making dynamic moves in AI, but don't write off Alphabet just yet.
Some were quick to declare Microsoft the AI leader with its investment in ChatGPT, but this is like declaring a winner after the first inning of a baseball game. For years Alphabet has developed its own AI tools, including its answer to ChatGPT, named Bard. I tested Bard to inquire about Alphabet's other AI initiatives, like better translation services, search by photo, speech recognition, and others.
Google Lens is an excellent example of a practical application of AI. This allows the user to search from a cellphone camera. For example, users can translate a menu written in another language just by pointing their camera at it. Other applications include copying text or identifying unknown objects.
Alphabet just announced it is combining its Google Brain and DeepMind research programs into one entity called Google DeepMind. Both have been studying AI for years with some of the most brilliant minds in the business. The push from Microsoft might create urgency for Alphabet to kick these initiatives into high gear.
The slowing economy has investors concerned that Alphabet's advertising revenue will suffer. But first-quarter earnings announced on April 25 had many breathing a sigh of relief. Revenue rose to $69.8 billion on 3% growth (6% in constant currency). Operating income fell from $20.1 billion to $17.4 billion; however, $2.6 billion of the dip is due to one-time charges relating to layoffs and office space reductions. CEO Sundar Pichai expressed on the earnings conference call a commitment to reining in costs moving forward.
Alphabet's stock is more than 10% off its 52-week high and more than 25% below where it stood at the beginning of 2022.
GOOG data by YCharts
The company uses the share price reduction to benefit stockholders by aggressively repurchasing shares. A total of $73.8 billion of shares (5.5% of the current market cap) was retired in 2022 and Q1 2023. And another $70 billion was authorized with this earnings release.
The encouraging results do not mean the company is out of the woods. The economy is an ongoing headwind, YouTube sales were down in Q1 year over year, and Microsoft's search competition will be a test. But investors don't beat the market by buying only when everything is rosy. They need to look beyond current challenges to identify long-term potential. This potential is why Alphabet's beaten-down stock could make investors higher long-term profits.
Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Bradley Guichard has positions in Alphabet and Microsoft. The Motley Fool has positions in and recommends Alphabet and Microsoft. The Motley Fool has a disclosure policy.
Can Compute-In-Memory Bring New Benefits To Artificial … – SemiEngineering
Posted: at 2:53 pm
While CIM can speed up multiplication operations, it comes with added risk and complexity.
Compute-in-memory (CIM) is not necessarily an Artificial Intelligence (AI) solution; rather, it is a memory management solution. CIM could bring advantages to AI processing by speeding up the multiplication operation at the heart of AI model execution. However, for that to be successful, an AI processing system would need to be explicitly architected to use CIM. The change would entail a shift from all-digital design workflows into a mixed-signal approach which would require deep design expertise and specialized semiconductor fabrication processes.
Compute-in-memory eliminates weight coefficient buffers and streamlines the primitive multiply operations, striving for increased AI inference throughput. However, it does not perform neural network processing by itself. Other functions, like input data streaming, sequencing, accumulation buffering, activation buffering, and layer organization, may become more vital factors in overall performance as model hardware mapping unfolds and complexity increases. More robust NPUs (Neural Processing Units) incorporate all of those functions.
Fundamentally, compute-in-memory embeds a multiplier unit in a memory unit. A conventional digital multiplier takes two operands as digital words and produces a digital result, handling signing and scaling. Compute-in-memory uses a different approach, storing a weight coefficient as analog values in a specially designed transistor cell sub-array with rows and columns. The incoming digital data words enter the rows of the array, triggering analog voltage multiplies, then analog current summations occur along columns. An analog-to-digital converter creates the final digital word outputs from the summed analog values.
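The dataflow just described can be mimicked numerically. A minimal sketch follows, assuming illustrative array sizes, drift noise, and an idealized ADC; it is not any vendor's implementation.

```python
# Numerical sketch of a CIM multiply-accumulate: weights stored as
# analog values in a cell array, digital inputs driving the rows,
# current summation down each column, and an ADC quantizing the
# analog column sums back into digital words.
import numpy as np

rng = np.random.default_rng(42)

# Weight coefficients "programmed" into a 4-row x 3-column cell array.
weights = rng.uniform(-1, 1, size=(4, 3))

# Programming and read-out are analog, so stored values drift slightly
# (0.02 is an assumed, illustrative noise level).
analog_weights = weights + rng.normal(scale=0.02, size=weights.shape)

inputs = rng.uniform(0, 1, size=4)        # digital words on the rows

# Analog multiply in each cell, then summation along each column.
column_sums = inputs @ analog_weights

def adc(values, bits=8, full_scale=4.0):
    """Idealized ADC: quantize analog sums to signed digital codes."""
    levels = 2 ** (bits - 1) - 1
    codes = np.clip(np.round(values / full_scale * levels), -levels, levels)
    return codes / levels * full_scale    # back to engineering units

print("ideal digital result: ", inputs @ weights)
print("CIM result after ADC: ", adc(column_sums))
```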
An individual memory cell can be straightforward in theory, and several candidate cell types have been proposed.
Still, operating these cells presents mixed-signal challenges and a technology gap that is not closing anytime soon. So, why the intense interest in compute-in-memory for AI inference chips?
First, it can be fast. This is because analog multiplication happens quickly as part of the memory read cycle, transparent to the rest of the surrounding digital logic. It can also be lower power since fewer transistors switch at high frequencies. But there are some limitations from a system viewpoint. Additional steps needed for programming the analog values into the memory cells are a concern. Inaccuracy of the analog voltages, which may change over time, can inject bit errors into results showing up as detection errors or false alarm rates.
Aside from its analog nature, the biggest concern for compute-in-memory may be bit precision and AI training requirements. Researchers seem confident in 4-bit implementations; however, more training cycles must be run for reliable inference at low precision. Raising the precision to 8-bit lowers training demands, but it also increases the complexity of the arrays and of the analog-to-digital converter for each array, offsetting area and power savings and worsening the chance of bit errors in the presence of system noise.
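As a rough, self-contained illustration of that precision tradeoff (the weight distribution and uniform quantizer here are assumptions, not a model of any specific CIM cell), the snippet below compares the reconstruction error of 4-bit and 8-bit weight quantization:

import numpy as np

def quantize(w, bits):
    # Uniform symmetric quantizer: snap weights onto 2**bits evenly spaced levels.
    scale = np.abs(w).max() / (2**(bits - 1) - 1)
    return np.round(w / scale) * scale

rng = np.random.default_rng(1)
w = rng.normal(0.0, 1.0, 10_000)  # stand-in weight distribution

for bits in (4, 8):
    err = np.abs(w - quantize(w, bits)).mean()
    print(f"{bits}-bit mean absolute quantization error: {err:.4f}")

The 4-bit error comes out more than an order of magnitude larger than the 8-bit error, which is why low-precision deployments typically demand extra training effort to recover accuracy.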
So is compute-in-memory worthy of consideration? There likely are niche applications where it could speed up AI inference. A more critical question: is the added risk and complexity of compute-in-memory worth the effort? A well-conceived NPU strategy and implementation may nullify any advantage of moving to compute-in-memory. We can contrast the tradeoffs for AI inference in four areas: power/performance/area (PPA), flexibility, quantization, and memory technology.
The answer to the original question may be that designers should consider CIM only if more established AI inference platforms (NPUs) cannot meet their requirements. Because CIM is riskier, costlier, and harder to implement, many designers will treat it as a last-resort solution.
Expedera explores this topic in much more depth in a recent white paper, which can be found at: https://www.expedera.com/architectural-considerations-for-compute-in-memory-in-ai-inference/
Read this article:
Can Compute-In-Memory Bring New Benefits To Artificial ... - SemiEngineering
Posted in Artificial Intelligence
Comments Off on Can Compute-In-Memory Bring New Benefits To Artificial … – SemiEngineering
What Can ChatGPT Tell Us About the Evolution of Artificial … – Unite.AI
Posted: at 2:53 pm
In the last decade, artificial intelligence (AI) has elicited both dreams of a massive transformation in the tech industry and deep anxiety about its potential ramifications. Elon Musk, a leading voice in the tech industry, embodies this duality: he promises a world of autonomous AI-powered cars while warning us of the risks of AI, even calling for a pause in its development. This is especially ironic considering Musk was an early investor in OpenAI, founded in 2015.
One of the most exciting and concerning developments riding the current wave of AI research is autonomous AI. Autonomous AI systems can perform tasks, make decisions, and adapt to new situations on their own, without continual human oversight or task-by-task programming. One of the best-known examples at the moment is ChatGPT, a major milestone in the evolution of artificial intelligence. Let's look at how ChatGPT came about, where it's headed, and what the technology can tell us about the future of AI.
The tale of artificial intelligence is a captivating one of progress and collaboration across disciplines. It began in the early 20th century with the pioneering efforts of Santiago Ramón y Cajal, a neuroscientist whose studies of the brain's neuronal structure laid the groundwork for the concept of neural networks, a cornerstone of modern AI. Neural networks are computer systems that emulate the structure of the human brain and nervous system to produce machine-based intelligence. Some time later, Alan Turing was busy developing the modern computer and proposing the Turing Test, a means of evaluating whether a machine can display human-like intelligent behavior. These developments spurred a wave of interest in AI.
As a result, the 1950s saw John McCarthy, Marvin Minsky, and Claude Shannon explore the prospects of AI; McCarthy coined the term artificial intelligence, and Frank Rosenblatt built the perceptron, an early neural network. The following decades saw two major breakthroughs. The first was expert systems: AI systems individually designed to perform niche, industry-specific tasks. The second was natural language processing applications, like early chatbots. With the arrival of large datasets and ever-improving computing power in the 2000s and 2010s, machine learning techniques flourished, leading us to autonomous AI.
This significant step enables AI systems to perform complex tasks without the need for case-by-case programming, opening them to a wide range of uses. One such autonomous system, ChatGPT from OpenAI, has of course recently become widely known for its remarkable ability to learn from vast amounts of data and generate coherent, human-like responses.
So what is the basis of ChatGPT? We humans have two basic capabilities that enable us to think: we possess knowledge, whether of physical objects or of concepts, and we possess an understanding of those things in relation to complex structures like language and logic. Transferring that knowledge and understanding to machines is one of the toughest challenges in AI.
With knowledge alone, OpenAI's GPT-4 model couldn't handle more than a single piece of information. With context alone, the technology couldn't understand anything about the objects or concepts it was contextualizing. Combine both, however, and something remarkable happens: the model can become autonomous. It can understand and learn. Apply that to text and you have ChatGPT; apply it to cars and you have autonomous driving; and so on.
OpenAI isn't alone in its field; many companies have spent decades developing machine learning algorithms and using neural networks to produce systems that can handle both knowledge and context. So what changed when ChatGPT came to market? Some have pointed to the staggering amount of data provided by the internet as the big change that fueled ChatGPT. But if that were all that was needed, Google would likely have beaten OpenAI, given its dominance over that data. So how did OpenAI do it?
One of OpenAI's secret weapons is a technique called reinforcement learning from human feedback (RLHF). OpenAI used RLHF to train its algorithm to understand both knowledge and context. OpenAI didn't create the idea of RLHF, but the company was among the first to rely on it so heavily for the development of a large language model (LLM) like ChatGPT.
RLHF allows the algorithm to self-correct based on feedback. So while ChatGPT is autonomous in how it produces an initial response to a prompt, a feedback system lets it know whether its response was accurate or in some way problematic. That means it can keep getting better without significant programming changes. The result was a fast-learning chat system that quickly took the world by storm.
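That feedback loop can be pictured in miniature. The toy sketch below is purely illustrative: the canned responses, reward function, and update rule are all stand-ins, not OpenAI's implementation, which trains a reward model on human preference data and then optimizes the LLM against it. It simply shows how rewarding preferred outputs shifts a policy toward producing them:

import random

RESPONSES = ["unhelpful reply", "partially helpful reply", "helpful reply"]

def human_feedback(response: str) -> float:
    # Stand-in for a reward model trained on human preference comparisons.
    return {"unhelpful reply": 0.0,
            "partially helpful reply": 0.5,
            "helpful reply": 1.0}[response]

weights = [1.0, 1.0, 1.0]  # the "policy" starts with no preference

for step in range(1000):
    # Sample a response in proportion to the current policy weights.
    response = random.choices(RESPONSES, weights=weights)[0]
    # Reinforce: add probability mass to responses that score well.
    weights[RESPONSES.index(response)] += 0.1 * human_feedback(response)

total = sum(weights)
print([round(w / total, 3) for w in weights])  # mass concentrates on "helpful reply"

After a thousand iterations, nearly all of the sampling probability sits on the highest-rewarded response; real RLHF does the analogous thing over a language model's outputs.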
The new age of autonomous AI has begun. In the past, we had machines that could understand various concepts to a degree, but only in highly specific domains and industries; industry-specific AI software has been used in medicine for some time, for example. But the search for autonomous or general AI, meaning AI that can function on its own to perform a wide variety of tasks across fields with a degree of human-like intelligence, finally produced globally noteworthy results in 2022, when ChatGPT handily and decisively passed the Turing test.
Understandably, some people are starting to fear that their expertise, jobs, and even uniquely human qualities may be replaced by intelligent AI systems like ChatGPT. On the other hand, passing the Turing test isn't an ideal indicator of how human-like a particular AI system may be.
For example, Roger Penrose, who won the Nobel Prize in Physics in 2020, argues that passing the Turing test does not necessarily indicate true intelligence or consciousness. In his view, there is a fundamental difference between the way computers and humans process information, and machines will never be able to replicate the kind of human thought processes that give rise to consciousness.
By that argument, passing the Turing test is not a true measure of intelligence, because it merely tests a machine's ability to imitate human behavior rather than its ability to truly understand and reason about the world. True intelligence would require consciousness and an understanding of the nature of reality, which a machine cannot replicate. If that is right, then far from replacing us, ChatGPT and similar software will simply provide tools that help us improve and increase efficiency in a variety of fields.
So, machines will be able to complete many tasks autonomously, in ways we never thought possible, from understanding and writing content to securing vast amounts of information, performing delicate surgeries, and driving our cars. But for now, at least in this current age of technology, capable workers needn't fear for their jobs. Even autonomous AI systems don't have human intelligence; they simply understand and perform certain tasks better than we do. They aren't more intelligent than us overall, and they don't pose a significant threat to our way of life; at least, not in this wave of AI development.
Read the original:
What Can ChatGPT Tell Us About the Evolution of Artificial ... - Unite.AI
Posted in Artificial Intelligence
Comments Off on What Can ChatGPT Tell Us About the Evolution of Artificial … – Unite.AI
New artificial intelligence feature comes to Snapchat – Inklings News
Posted: at 2:53 pm
"I'm here to chat with you and keep you company! Is there anything you'd like to talk about?" my artificial intelligence asked me. From the moment I questioned its arrival, I didn't enjoy the idea of this robot keeping me company. The first thing I did was try to get rid of the pest; that's when I realized I couldn't. The bot was stuck at the top of my feed, its colorful, alien-like Bitmoji always glaring back at me.
Artificial intelligence, or AI, used to be a feature only for Snapchat+ users, who pay either $3.99 per month or $29.99 per year for access to extra features. But beginning on April 19, all Snapchat users were greeted by a new user at the top of the screen when they opened the app. Users never got the option to choose whether to add the AI; there it was, automatically pinned to the top of the screen.
You can communicate with your AI the same way you can with your friends, including by sending pictures that appear to delete immediately. Once you send an image, the AI responds with a chat message guessing what your image shows. Sending a picture of a car window, for example, results in the message, "Looks like you're on the move! Hope you're having a safe and fun journey."
Staples students have mixed feelings regarding the new Snapchat feature.
"It's very futuristic," Avery Johnson '25 said. "I feel like I'm being stalked or hacked by Snapchat."
Though many would like to eliminate it, some feel it is not worth the hassle.
"I don't think I would consider paying to remove this feature," Noah Wolff '25 said. "It doesn't bother me enough."
Elijah Debrito '25 originally enjoyed the new feature, but after a couple of days it got old. "I don't like it anymore," Debrito said. "It's creepy. It says it can't see your photos, but then you send it a snap and it can tell what you're doing."
To improve the functionality of the AI for everyone, students have recommendations for the new technology.
"I would recommend it develop more diverse responses," Johnson said. "Sometimes I just want to strangle the AI through the screen when it doesn't understand what I'm saying."
According to Snapchat's support page, "Just like real friends, the more you interact with My AI the better it gets to know you, and the more relevant the responses will be." The same page also states, "You should also avoid sharing confidential or sensitive information with My AI."
Overall, having the option to decline this new friend would be preferred by many.
"I think that they should make it so you can get rid of it," Julia Coda '25 said.
Original post:
New artificial intelligence feature comes to Snapchat - Inklings News
Posted in Artificial Intelligence
Comments Off on New artificial intelligence feature comes to Snapchat – Inklings News
Artificial intelligence poised to hinder, not help, access to justice – Reuters
Posted: at 2:53 pm
April 25 (Reuters) - The advent of ChatGPT, the fastest-growing consumer application in history, has sparked enthusiasm and concern about the potential for artificial intelligence to transform the legal system.
From chatbots that conduct client intake to tools that assist with legal research, document management and even the writing of legal briefs, AI has been touted for its potential to increase efficiency in the legal industry. It's also been recognized for its ability to help close the access-to-justice gap by making legal help and services more broadly accessible to marginalized groups.
Most low-income U.S. households deal with at least one civil legal problem a year, concerning matters like housing, healthcare, child custody and protection from abuse, according to the Legal Services Corp. They don't receive legal help for 92% of those problems.
Moreover, our poorly funded public defense system for criminal matters has been broken for decades.
AI and similar technologies show promise in their ability to democratize legal services, including applications such as online dispute resolution and automated document preparation.
For example, A2J Author uses decision trees, a simple form of AI, to build document preparation tools for complex filings in housing law, public benefits law and more. The nonprofit JustFix provides online tools that help with a variety of landlord-tenant issues. Apps have also been developed to help people with criminal expungement, to prepare for unemployment hearings, and even to get divorced.
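A guided-interview decision tree of the sort A2J Author builds can be as simple as a nested set of yes/no questions that route a user to the right next step. The sketch below is a hypothetical illustration of that pattern; the questions, branches, and "Form X" are invented for the example and are not A2J Author's actual content:

# A hypothetical guided-interview tree for a landlord-tenant issue, in the
# spirit of tools like A2J Author. All questions and outcomes are invented.
TREE = {
    "question": "Did you receive an eviction notice?",
    "yes": {
        "question": "Was the notice served more than 30 days ago?",
        "yes": {"outcome": "Prepare an answer to the eviction complaint."},
        "no": {"outcome": "You may still have time to respond; see Form X."},
    },
    "no": {
        "question": "Is your landlord refusing to make repairs?",
        "yes": {"outcome": "Prepare a repair-request letter."},
        "no": {"outcome": "Describe your issue to find the right form."},
    },
}

def run_interview(node: dict) -> str:
    # Walk the tree by asking yes/no questions until an outcome is reached.
    while "outcome" not in node:
        answer = input(node["question"] + " (yes/no): ").strip().lower()
        node = node[answer] if answer in node else node["no"]
    return node["outcome"]

if __name__ == "__main__":
    print(run_interview(TREE))

The appeal of this approach is its transparency: every path through the tree is authored and auditable, unlike the open-ended output of a generative model.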
Still, there's more reason to be wary than optimistic about AI's potential effects on access to justice.
Much of the existing technology and breakneck momentum in the industry is simply not geared toward the interests of underserved populations, according to several legal industry analysts and experts on the intersection of law and technology. Despite the technology's potential, some warned that the current trajectory actually runs the risk of exacerbating existing disparities.
Rashida Richardson, an assistant professor at Northeastern University School of Law, told me that AI "has lots of potential," while stressing that there hasn't been enough public discussion "of the many limitations of AI and of data itself." Richardson has served as a technology adviser to the White House and the Federal Trade Commission.
"Fundamentally, problems of access to justice are about deeper structural inequities, not access to technology," Richardson said.
It's critical to recognize that the development of AI technology is overwhelmingly unregulated and is driven by market forces, which categorically favor powerful, wealthy actors. After all, tech companies are not developing AI for free, and their interest is in creating a product attractive to those who can pay for it.
"Your ability to enjoy the benefits of any new technology corresponds directly to your ability to access that technology," said Jordan Furlong, a legal industry analyst and consultant, noting that ChatGPT Plus costs $20 a month, for example.
Generative AI has fueled a new tech gold rush in "big law" and other industries, and those projects can sometimes cost millions, Reuters reported on April 4.
Big law firms and legal service providers are integrating AI search tools into their workflows and some have partnered with tech companies to develop applications in-house.
Global law firm Allen & Overy announced in February that its lawyers are now using chatbot-based AI technology from a startup called Harvey to automate some legal document drafting and research, for example. Harvey received a $5 million investment last year in a funding round, Reuters reported in February. Last month, PricewaterhouseCoopers said 4,000 of its legal professionals will also begin using the generative AI tool.
Representatives of PricewaterhouseCoopers and Allen & Overy did not respond to requests for comment.
But legal aid organizations, public defenders and civil rights lawyers who serve minority and low-income groups simply don't have the funds to develop or co-develop AI technology, nor to contract for AI applications at scale.
The resources problem is reflected in the contours of the legal market itself, which essentially comprises two distinct sectors: one that represents wealthy organizational clients, and another that works for consumers and individuals, said William Henderson, a professor at the Indiana University Maurer School of Law.
Americans spent about $84 billion on legal services in 2021, according to Henderson's research and U.S. Census Bureau data. By contrast, businesses spent $221 billion, generating nearly 70% of legal services industry revenue.
Those disparities seem to be reflected in the development of legal AI thus far.
A 2019 study of digital legal technologies in the U.S. by Rebecca Sandefur, a sociologist at Arizona State University, identified more than 320 digital technologies that assist non-lawyers with justice problems. But Sandefur's research also determined that the applications don't make a significant difference in terms of improving access to legal help for low-income and minority communities. Those groups were less likely to be able to use the tools due to fees charged, limited internet access, language or literacy barriers, and poor technology design.
Sandefur's report identified other hurdles to innovation, including the challenges of coordination among innumerable county, state and federal court systems, and "the legal profession's robust monopoly on the provision of legal advice" -- referring to laws and rules restricting non-lawyer ownership of businesses that engage in the practice of law.
Drew Simshaw, a Gonzaga University School of Law professor, told me that many non-lawyers are "highly-motivated" to develop in this area but are concerned about crossing the line into the unauthorized practice of law. And there isn't a uniform definition of what constitutes unauthorized practice across jurisdictions, Simshaw said.
On balance, it's clear that AI has great potential to disrupt and improve access to justice. But it's much less clear that we have the infrastructure or political will to make that happen.
Our Standards: The Thomson Reuters Trust Principles.
Opinions expressed are those of the author. They do not reflect the views of Reuters News, which, under the Trust Principles, is committed to integrity, independence, and freedom from bias.
Thomson Reuters
Hassan Kanu writes about access to justice, race, and equality under law. Kanu, who was born in Sierra Leone and grew up in Silver Spring, Maryland, worked in public interest law after graduating from Duke University School of Law. After that, he spent five years reporting on mostly employment law. He lives in Washington, D.C. Reach Kanu at hassan.kanu@thomsonreuters.com
Original post:
Artificial intelligence poised to hinder, not help, access to justice - Reuters
Posted in Artificial Intelligence
Comments Off on Artificial intelligence poised to hinder, not help, access to justice – Reuters
How to play the artificial intelligence boom – Investors Chronicle
Posted: at 2:53 pm
A general-purpose technology is one that impacts the whole economy. The invention of the steam engine, for example, changed what people consumed, how they travelled and where they lived. Centuries later, after a variety of modern technologies have had significant impacts of their own, today it is artificial intelligence (AI) for which the biggest promises are being made.
Strange as it may sound, there are potential parallels, in terms of both efficiency gains and investment manias, between steam power and AI. In 1698 the original steam pump was patented by Thomas Savery, but it wasn't until James Watt patented the improved Boulton-Watt engine in 1769 that steam started to transform the economy.
Factories had previously been powered by horses, wind or water, which imposed physical restrictions on where they could be located. Steam power removed those constraints, and the clustering it enabled, combined with the invention of the train, allowed the efficient transfer of materials, goods and ideas between hubs. A flywheel effect was set in motion. Between 1800 and 1900, UK gross domestic product (GDP) per capita rose 109 per cent, from £2,331 to £4,930 (figures adjusted to 2013 prices).
Visit link:
How to play the artificial intelligence boom - Investors Chronicle
Posted in Artificial Intelligence
Comments Off on How to play the artificial intelligence boom – Investors Chronicle