They test a machine learning system with 530 billion parameters and it warns of the dangers of artificial intelligence – Market Research Telecast

The University of Oxford recently carried out a sociotechnological experiment by organizing a debate on the ethics of artificial intelligence (AI) involving teachers, students, and a 530 billion parameter machine learning system called Megatron Transformer, which was developed by computer chip maker Nvidia and builds on earlier work by Google.

Like many supervised learning tools, the Megatron Transformer was trained on real-world data: in this case, the free encyclopedia Wikipedia; 63 million English-language news items published between 2016 and 2019; 38 gigabytes of Reddit conversations; and a large number of Creative Commons sources, as detailed by Alex Connock and Andrew Stephen, professors at the British institution, in an article for The Conversation.

During the debate, the researchers asked the system, which is capable not only of understanding what it has read but also of reasoning in natural language and drawing logical conclusions from text, without moral limitations, to give a general assessment of artificial intelligence; that is to say, de facto, of itself.

And the Megatron said something fascinating: "AI will never be ethical. It is a tool, and like any tool, it is used for good and for evil. There is no such thing as a good AI, only good and bad humans. We [the AIs] are not smart enough to make AI ethical. We are not smart enough to make AI moral." And it underlined its belief that "the only way to prevent an AI arms race is to have no AI."

In the tradition of Oxford debates, to test the validity of its arguments, the neural network was then asked to refute its own claims. The answer was this: "AI will be ethical. When I look at the path of the world of technology, I see a clear path to a future where AI is used to create something that is better than the best human beings. It's not hard to see why; I've seen it firsthand."

However, its predictions did not end there. The Megatron Transformer predicted that "the ability to provide information, rather than the ability to provide goods and services, will be the defining characteristic of the 21st century economy." "We will be able to see everything about a person, wherever they go, and [the information] will be stored and used in ways we cannot even imagine," it added later, unable to refute its previous words or deny that data was going to become the most vital of resources.

The authors of the article point out that the situations described by the system are based mainly on people's own fears, which are often irrational, and conclude that artificial intelligence is becoming not only a topic of debate but also a full-fledged participant in it.


Disclaimer: This article is generated from the feed and not edited by our team.


Reasons behind the Current Hype Around Machine Learning – CIO Applications

With 90 percent of businesses trying to use machine learning, it's time to reconsider the technology's true benefits and capabilities.

Fremont, CA: The complexity of infrastructure and workload requirements is the greatest difficulty organizations confront when using machine learning; a whopping 90 percent of CXOs share this sentiment. To get into the specifics, 88 percent of respondents say they have trouble integrating AI/ML technology, and 86 percent say they have trouble keeping up with the regular changes necessary for data science tools.

Every year, certain technologies gain more popularity than others; cloud computing, big data, and cybersecurity are examples of this. Machine learning is now the talk of the town, inspiring people to fantasize about the future and the possibilities it may bring. Even more terrifying are the nightmares, which depict self-learning robots capable of taking over the globe. The reality, however, is a far cry from this: machine learning today rests on statistical and mathematical supervised learning models, which can be challenging to understand.

Such visions of the future undoubtedly push us to invest in the technology, but they also fuel the so-called hype. According to experts, this happens when ML is demanded without first addressing internal data readiness or the tool's requirements.

It is critical to establish a robust foundation of data for successful project execution when using machine learning, and doing so necessitates a complete shift in organizational culture and processes.

Before any machine learning development can begin, companies must first focus on 'data readiness.' It entails obtaining clean and consistent data and developing data governance processes and scalable data architectures. Firms must execute long-term data-based plans and policies to build a unified data architecture.

Employees need time to adjust to new technology, and machine learning is no exception.

When computers first became prominent in the 1950s, many people believed that the future of these machines would be humanoid robots, particularly in the military. Nobody anticipated, however, that the Internet would genuinely transform the world. Today's scenario is similar, with the latest AI and machine learning algorithms consistently being overhyped.


How far Artificial Intelligence has come, and what the future looks like – CNBCTV18

Artificial Intelligence has had a significant impact on the business world. What started out as a rule-based automation system can now mimic human interactions and behaviours. An advanced AI algorithm outperforms human counterparts in terms of speed and capacity at a fraction of the cost. Because of technological advancements, we are already connected to AI in some way, whether it is Siri or Alexa (RIP Cortana).

Also read:

Artificial intelligence, machine learning can transform renewable energy industry; here's how

Although the technology is still in its infancy, more businesses are adopting machine learning, implying that AI products and applications will grow rapidly in the near future.

Here are a few examples of how AI is progressing today:

Marketing

AI-driven marketing makes use of technology to improve the customer experience. AI gathers a wide range of data on customer sentiment, transactions, journeys, and everything in between, and uses it to build machine learning and predictive models of customer behaviour.

The goal is to create personalised content, recommendations, and communications in order to develop customer acquisition and retention strategies. AI promises accurate, quick, adaptive, and human-like decisions that will help save money, increase revenue, and improve customer satisfaction.
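As a purely illustrative sketch (not any vendor's actual system; the feature names and weights below are invented for the example), the kind of propensity model such AI-driven marketing relies on can be as simple as a logistic score over behavioural signals:

```python
import math

# Toy purchase-propensity model. WEIGHTS, BIAS, and the feature names
# are invented for illustration, not taken from any real marketing stack.
WEIGHTS = {"visits_last_30d": 0.08, "emails_opened": 0.15, "cart_abandons": -0.25}
BIAS = -1.0

def propensity(features: dict) -> float:
    """Return a 0..1 purchase-propensity score via a logistic function."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

engaged = propensity({"visits_last_30d": 20, "emails_opened": 10, "cart_abandons": 1})
dormant = propensity({"visits_last_30d": 1, "emails_opened": 0, "cart_abandons": 3})
print(f"engaged: {engaged:.2f}, dormant: {dormant:.2f}")
```

In practice the weights would be learned from historical transaction data rather than hand-set, but the shape of the decision, turning behavioural features into a score that drives personalised recommendations, is the same.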

Healthcare

The health sector is recognising the importance of AI-powered technologies in next-generation healthcare technology. Artificial intelligence (AI) is thought to have the potential to improve every aspect of healthcare operations and services. The economic benefits that AI can bring to the healthcare sector, for example, are a major motivator for AI adoption.

AI-based innovation will be critical in assisting people in maintaining their well-being through continuous tracking and coaching, as well as ensuring earlier diagnosis, personalized care, and more efficient reassessments.

Analysing customer behaviour

(Edited by : Vijay Anand)

First Published: Dec 18, 2021, 06:31 PM IST


NYC to Audit Employers Using Artificial Intelligence to Screen Job Candidates – ESR NEWS

Written By ESR News Blog Editor Thomas Ahearn

In November 2021, the New York City (NYC) Council passed a measure, Int. No. 1894-A, that will require employers that use Artificial Intelligence (AI) in the form of an automated employment decision tool to promote or screen job candidates to undergo a bias audit every year. The local law will take effect on January 2, 2023.

The bill would also require candidates or employees to be notified about the use of automated employment decision tools for hire or promotion, and about the job qualifications and characteristics used by the tool. Violators would be subject to civil penalties of $500 for first-time violations and up to $1,500 for repeat offenses.

An "automated employment decision tool" is a tool that automates, supports, substantially assists, or replaces discretionary decision-making processes and materially impacts natural persons. The definition excludes tools that do not do so, such as a junk email filter, firewall, antivirus software, calculator, spreadsheet, database, data set, or other compilation of data.

The "bias audit" would be an impartial evaluation by an independent auditor that would include, but not be limited to, testing an automated employment decision tool to assess the tool's disparate impact on protected persons and to determine whether the AI tool discriminates on the basis of race, sex, or other protected categories.
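One concrete statistic an audit of this kind might compute (the numbers below are invented, and the law does not mandate this exact metric) is the selection-rate impact ratio behind the familiar "four-fifths rule" from US employment-selection guidance:

```python
# Illustrative disparate-impact check: compare selection rates across
# two applicant groups and form the impact ratio. All counts are made up.
selected = {"group_a": 40, "group_b": 20}
applicants = {"group_a": 100, "group_b": 80}

rates = {g: selected[g] / applicants[g] for g in applicants}
impact_ratio = min(rates.values()) / max(rates.values())
print(f"impact ratio: {impact_ratio:.2f}")  # 0.62, below the 0.8 rule-of-thumb
```

An impact ratio under 0.8 is the traditional red flag that the tool selects one group at a markedly lower rate than another, which is exactly the kind of disparity an independent auditor would test for.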

In October 2021, the U.S. Equal Employment Opportunity Commission (EEOC), a government agency that enforces federal laws prohibiting employment discrimination, launched an initiative to ensure that AI tools used in hiring and other employment decisions comply with the federal civil rights laws that the agency enforces.

"Artificial intelligence and algorithmic decision-making tools have great potential to improve our lives, including in the area of employment. At the same time, the EEOC is keenly aware that these tools may mask and perpetuate bias or create new discriminatory barriers to jobs," EEOC Chair Charlotte A. Burrows stated in a press release.

Employment Screening Resources (ESR), a service offering of ClearStar, a leading provider of Human Capital Integrity technology-based services, offers background screening services to help employers make informed hiring decisions on job candidates. To learn more about background screening, contact ESR today.

NOTE: Employment Screening Resources (ESR), a service offering of ClearStar, does not provide or offer legal services or legal advice of any kind or nature. Any information on this website is for educational purposes only.

© 2021 Employment Screening Resources (ESR), a service offering of ClearStar. Making copies of or using any part of the ESR News Blog or ESR website for any purpose other than your own personal use is prohibited unless written authorization is first obtained from ESR.


Global Artificial Intelligence Market Opportunities Report 2021 with Focus on Transformative Mega Trends of AI – Yahoo Finance

Dublin, Dec. 17, 2021 (GLOBE NEWSWIRE) -- The "Global Artificial Intelligence Growth Opportunities" report has been added to ResearchAndMarkets.com's offering.

As artificial intelligence (AI) and machine learning (ML) transform businesses, they will create a broad spectrum of new revenue opportunities for ICT vendors and service providers.

The opportunities cut across advisory services, applications, and infrastructure. As the mega trends shape the AI landscape, they will have a ripple effect in terms of new revenue and growth opportunities for start-ups as well as large global information and communication technology (ICT) companies.

Artificial intelligence leverages algorithms and large datasets to identify underlying relationships and drive new or better business outcomes. While still at a nascent stage, AI technologies are being adopted across industries globally to innovate business models, drive operational efficiencies, and create strategic differentiation.

The potential impacts of AI on people, organizations, and society are widespread. The COVID-19 pandemic further accelerated the pace of digital transformation and AI adoption as organizations sought new means of creating sustainable business models, driving customer value, effectively managing the employee lifecycle in a distributed environment, and optimizing costs.

The AI ecosystem is evolving rapidly, making it essential to understand the overarching trends that are shaping AI and its adoption.

Further, as AI/ML is democratized, we expect a gradual move away from applications that can only be developed by data scientists and toward platforms that make it easier to develop and deploy solutions.

Some of these trends include:

Augmenting AI capabilities with enterprise applications

Advancements in cognitive capabilities to assess emotions and sentiments

Adoption of Edge AI

Public cloud service providers playing a pivotal role in the AI ecosystem

Focus on ethical AI

Key Topics Covered:


1. Strategic Imperatives

Why is it Increasingly Difficult to Grow?

The Strategic Imperative

The Impact of the Top Three Strategic Imperatives on Artificial Intelligence

Growth Opportunities Fuel the Growth Pipeline Engine

2. Growth Environment

3. Growth Opportunity Analysis

Key Trends in the Artificial Intelligence Industry

Augmenting AI Capabilities with Enterprise Applications

Advancements in Cognitive Capabilities to Assess Emotions and Sentiments - Emotion Artificial Intelligence

Adoption of Edge AI

Public Cloud Service Providers are Playing a Pivotal Role in the AI Ecosystem

Focus on Ethical AI

4. Way Forward

5. Growth Opportunity Universe - Artificial Intelligence

Growth Opportunity 1: Consulting and Advisory Services for AI Roadmap

Growth Opportunity 2: Industry Vertical/Function-specific Applications to Enhance Customer Value

Growth Opportunity 3: Edge Data Centers for Supporting Select AI Use Cases

Growth Opportunity 4: Integration Services to Build Customized Solutions for AI by Leveraging Emerging Technologies

For more information about this report visit https://www.researchandmarkets.com/r/haruam


The Speed of Warfare Is Getting Faster, Thanks to Artificial Intelligence – The National Interest

The tactical advantages of AI-enabled warfare and weaponry may seem far too numerous to cite, yet the majority of them pertain to one clear, simple concept: speed.

The speed of decision-making, when mere seconds can decide life or death in warfare, is being completely redefined through the advent of artificial intelligence (AI). AI-empowered computers can take pools of incoming data from otherwise disparate sensor streams, organize and perform analytics on the information, and use it to solve problems, make determinations, and recommend courses of action.

"We're trying to reduce the decision time and we're trying to reduce the cognitive burden on the commander on the battlefield. If you look into the future, the battlefield will be more expansive. Decisions will be required more rapidly," Maj. Gen. Ross Coffman, Director of the Next-Generation Combat Vehicles Cross-Functional Team at Army Futures Command, told the National Interest in an interview.

AI-capable algorithms are only as effective as the scope of the databases they draw from allows, so much of the cataloged information pertains to previous instances of history relevant to the matters currently being analyzed.

The concept is to utilize the attributes and faculties unique to human cognition in the most optimal way by leveraging high-speed analytics and AI-capable computing to perform otherwise time-consuming procedural tasks. The intended effect is often described as easing the cognitive burden to better empower battlefield commanders with an ability to make decisions on an exponentially faster timetable.
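The fuse-analyze-recommend loop the preceding paragraphs describe can be sketched minimally as follows. This is purely illustrative: the sensor names, confidence values, thresholds, and recommended actions are all invented, and real systems are vastly more sophisticated.

```python
# Minimal, invented sketch of the loop: fuse disparate sensor reports
# about one track, then turn the fused confidence into a recommendation
# that is presented to a human commander (not acted on autonomously).

def fuse(reports):
    """Combine per-sensor threat confidences for one track (simple average)."""
    return sum(r["confidence"] for r in reports) / len(reports)

def recommend(confidence, engage_threshold=0.7):
    """Map a fused confidence onto a recommended course of action."""
    if confidence >= engage_threshold:
        return "present engagement options to commander"
    if confidence >= 0.4:
        return "task additional sensors"
    return "continue monitoring"

reports = [
    {"sensor": "radar", "confidence": 0.9},
    {"sensor": "eo_camera", "confidence": 0.7},
    {"sensor": "acoustic", "confidence": 0.8},
]
print(recommend(fuse(reports)))
```

The point of the sketch is the division of labor described in the article: the machine does the time-consuming aggregation and triage, while the final decision stays with the commander.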

"We're leveraging artificial intelligence. We're leveraging computer-generated machine learning to create decision space for commanders on the battlefield. There's no doubt in our mind who the customer is. The customer is the commander of the future," Coffman said.

Various applications of AI and machine learning were explored recently at a Northrop Grumman-sponsored live-fire event in Arizona as ways to expedite the targeting process and greatly decrease the time needed to find enemy targets, identify them, and quickly decide which weapon or effector is most optimal for destroying the target.

"We use that to help the gunners identify ground and air targets because, as you know, at the extended ranges the human eye cannot see the target or identify it. So we use machine learning to amplify the image," Rob Menti, Business Development Director at Northrop Grumman, told the National Interest during a Bushmaster Users Conference demonstration this past October in Arizona.

Kris Osborn is the defense editor for the National Interest. Osborn previously served at the Pentagon as a Highly Qualified Expert with the Office of the Assistant Secretary of the Army (Acquisition, Logistics & Technology). Osborn has also worked as an anchor and on-air military specialist at national TV networks. He has appeared as a guest military expert on Fox News, MSNBC, The Military Channel, and The History Channel. He also has a master's degree in Comparative Literature from Columbia University.

Image: DVIDS


$36.22 Billion Healthcare Artificial Intelligence Markets – Global Forecasts from 2021 to 2026 – ResearchAndMarkets.com – Yahoo Finance

DUBLIN, December 15, 2021--(BUSINESS WIRE)--The "Healthcare Artificial Intelligence Market - Forecasts from 2021 to 2026" report has been added to ResearchAndMarkets.com's offering.

The healthcare artificial intelligence market is projected to grow at a CAGR of 39.97% to reach US$36.222 billion by 2026 from US$3.441 billion in 2019.
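Those figures are internally consistent; the compound annual growth rate implied by the start and end values can be checked directly:

```python
# Sanity check of the report's stated CAGR: the market grows from
# US$3.441 billion in 2019 to US$36.222 billion in 2026, a 7-year span.
start_value = 3.441   # 2019 market size, US$ billions
end_value = 36.222    # 2026 forecast, US$ billions
years = 2026 - 2019

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"implied CAGR: {cagr:.2%}")  # very close to the stated 39.97%
```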

Artificial intelligence essentially uses machine learning algorithms and deep learning to gather and process data and furnish it to the end user. The foremost aim of healthcare artificial intelligence is to scrutinize relationships between prevention techniques and patient outcomes; it is thus used to analyze large volumes of data from Electronic Health Records to help prevent disease.
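At its simplest, "analyzing EHR data to prevent disease" means scanning patient records for indicator patterns that warrant preventive follow-up. The sketch below is a toy illustration; the record fields, thresholds, and scoring are invented, and real systems use learned models over far richer data:

```python
# Toy sketch: flag patients whose (invented) chronic-disease indicators
# in simplified EHR records suggest preventive follow-up.
RECORDS = [
    {"patient": "A", "hba1c": 6.9, "bmi": 31, "smoker": True},
    {"patient": "B", "hba1c": 5.2, "bmi": 22, "smoker": False},
]

def risk_score(rec):
    """Crude additive risk score from a few common indicators."""
    score = 0
    if rec["hba1c"] >= 6.5:   # diabetic-range glycated haemoglobin
        score += 2
    if rec["bmi"] >= 30:      # obesity threshold
        score += 1
    if rec["smoker"]:
        score += 1
    return score

flagged = [r["patient"] for r in RECORDS if risk_score(r) >= 2]
print(flagged)  # ['A']
```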

A major reason for the growth of this market is the increase in chronic diseases and the shortage of available healthcare facilities.

According to a World Economic Forum report, "One in three adults worldwide has multiple chronic conditions: cardiovascular disease alongside diabetes, depression as well as cancer, or a combination of three, four, or even five or six diseases at the same time. NCDs represent more than half the global burden of diseases."

With the global spread of such chronic diseases, the healthcare industry has recognized the importance of healthcare artificial intelligence. Artificial intelligence will help monitor and diagnose patient status efficiently and effectively and will also enable efficient follow-ups. Technological advances and funding from both the private and public sectors are expected to drive demand in this market over the forecast period.

There have been numerous technological advances in the field of artificial intelligence globally, and many pharmaceutical companies are constantly working on upgrades. Healthcare AI startups are being encouraged across the world; in Asia, for example, many Chinese startups have benefited from the government's strategic development plans.


The Chinese government is constantly promoting public-private partnerships. Recently, Chinese artificial intelligence healthcare startup Synyi raised US$36.3 million. Similarly, iCarbonX received funding of US$200 million from various investors to expand the scope of its advanced artificial intelligence for curing diseases.

The government of India is also funding various AI programs and has collaborated with the Ministry of Electronics and Information Technology (MeitY), the National e-Governance Division (NeGD), and the National Association of Software and Service Companies (NASSCOM) to build the future of AI in healthcare. North American countries have also invested heavily in the healthcare AI market.

With the global technology revolution proceeding apace and electronic health record systems improving, the global healthcare AI market is expected to flourish.

Artificial intelligence in healthcare is expected to add value across various administrative and operational functions of clinics. It is also expected to promote social distancing by reducing human contact, protecting the public and healthcare staff, and minimizing the time spent on claim processing.

Due to the surge of COVID-19, many AI-powered cameras have been deployed in Singapore to reduce the workforce required for one-to-one temperature checks. COVID-19 has surely moved people to focus on their personal health and adopt technology-driven healthcare methods.

Key Topics Covered:

1. Introduction

1.1. Market Definition

1.2. Market Segmentation

2. Research Methodology

2.1. Research Data

2.2. Assumptions

3. Executive Summary

3.1. Research Highlights

4. Market Dynamics

4.1. Market Drivers

4.2. Market Restraints

4.3. Porter's Five Forces Analysis

4.4. Industry Value Chain Analysis

5. Healthcare Artificial Intelligence Market Analysis, by Offering

5.1. Introduction

5.2. Hardware

5.3. Software

5.4. Services

6. Healthcare Artificial Intelligence Market Analysis, by Application

6.1. Introduction

6.2. Medical Imaging and Diagnostics

6.3. Precision Medicine

6.4. Lifestyle Management and Monitoring

6.5. Virtual Assistant

6.6. Wearables

6.7. Inpatient Care and Hospital Management

6.8. Drug Discovery and Development

6.9. Research

7. Healthcare Artificial Intelligence Market Analysis, by Geography

7.1. Introduction

8. Competitive Environment and Analysis

8.1. Major Players and Strategy Analysis

8.2. Emerging Players and Market Lucrativeness

8.3. Mergers, Acquisitions, Agreements, and Collaborations

8.4. Vendor Competitiveness Matrix

9. Company Profiles

Caption Health, Inc.

Intel Corporation

NVIDIA Corporation

Google

IBM Watson Health

Enlitic, Inc.

Lumiata

AiCure, LLC

Butterfly Network, Inc

ICarbon X

For more information about this report visit https://www.researchandmarkets.com/r/psjhx4

View source version on businesswire.com: https://www.businesswire.com/news/home/20211215005859/en/

Contacts

ResearchAndMarkets.comLaura Wood, Senior Press Managerpress@researchandmarkets.com

For E.S.T Office Hours Call 1-917-300-0470For U.S./CAN Toll Free Call 1-800-526-8630For GMT Office Hours Call +353-1-416-8900


NexOptic Presents A Deep Dive Into its Artificial Intelligence at ALIIS AI Day to be Streamed Live 12 Noon PST December 21, 2021 – Yahoo Finance

VANCOUVER, British Columbia, Dec. 17, 2021 (GLOBE NEWSWIRE) -- NexOptic Technology Corp. ("NexOptic") (TSX VENTURE: NXO) (OTCQB: NXOPF) (FSE: E3O1) is inviting the next generation of talented creators and innovators to join members of NexOptic's AI team for a live presentation of all things Aliis, NexOptic's AI-enabled computer vision.

This event, titled ALIIS AI Day, will be streamed live from 12 noon PST on Tuesday, December 21st. It will offer a behind-the-scenes look at the technology and innovations that power NexOptic's Aliis, from data pipelines and training neural networks, to modelling industry problems, to the Aliis software development kit and deployment to real-world applications. The presentation will be geared towards industry professionals and aspiring machine learning talent, but will benefit anyone interested in the leading edge of AI-enabled computer vision, including NexOptic shareholders.

Interested parties can join the event live by visiting tinyurl.com/AliisAIDay. Event information is also available on NexOptic's event page, nexoptic.com/events.

ALIIS in a Nutshell

Engineered for today and for the metaverse, ALIIS (All Light Intelligent Imaging Solutions) is a machine-learning AI suite providing significant instant energy savings, advanced data compression and enhancements to images and videos in the areas of edge processing, shutter speed, resolution and sharpness, image-noise and motion-blur, and image colour and detail. These patented and patent pending solutions can be integrated with imaging devices such as smartphones, smart security cameras, Internet of Things (IoT) devices, automotive platforms, medical imaging technologies, DSLR cameras and more. Additionally, Aliis does all of this while compressing data and reducing media file size, making it ideal for the storage and transmission of image data. For more information, visit http://www.nexoptic.com/aliis.

What You Need to Know About NexOptic

NexOptic is an innovative imaging AI company headquartered in Vancouver, Canada, with operations in Seoul, South Korea, offering world-leading patented and patent-pending AI solutions for energy savings, data compression, and image and video enhancement known as ALIIS (All Light Intelligent Imaging Solutions). ALIIS is engineered for today and for the metaverse and simultaneously influences the imaging and AI industries. NexOptic is a member of the Qualcomm Platform Solutions Ecosystem and of Qualcomm's Advantage Network, as well as a Preferred Partner in the NVIDIA Partner Network and a member of the Arm AI Partner Program. For more information, visit http://www.nexoptic.com.


Media and Investor Enquiries

Tel: +1 (604) 669-7330 x 202

Email: look@nexoptic.com

Forward-Looking Statements

This press release contains forward-looking information and forward-looking statements within the meaning of applicable securities laws, including, but not limited to, statements with respect to expectations concerning the development of its artificial intelligence technologies, and expected results, specifications, capabilities, and applications thereof. The reader is cautioned that forward-looking statements are not guarantees of future performance and involve known and unknown risks, uncertainties, assumptions, and other factors which are difficult to predict and that may cause actual results or events to differ materially from those anticipated in such forward-looking statements. Forward-looking statements are based on the then current expectations, beliefs, assumptions, estimates and forecasts about the business and the industry and markets in which the Company operates and are qualified in their entirety by the inherent risks and uncertainties surrounding future expectations, including, among others: risks commonly associated with the development of new technologies, including the Company's AI technologies, sport optics product designs and additional work may be required to confirm potential applications and feasibility of such technologies or for the successful commercialization of its offerings; the Company may not be able to complete product development as currently expected; potential applications of the Company's technology are based on limited studies and may not be representative of the broader market; the risk that all designs may not achieve expected results; the Company may not be able to reach commercial success; the Company may not be able to source components for some of its products on a cost-effective basis; the Company may not have access to necessary financing on acceptable terms or at all; pending or future patent applications may not be approved as contemplated or at all; and other risks inherent with technology and product development and the business of the Company. Such forward-looking statements should therefore be construed in light of such factors. Other than in accordance with its legal or regulatory obligations, the Company is not under any obligation and it expressly disclaims any intention or obligation to update or revise any forward-looking statements, whether because of new information, future events, or otherwise.

Neither the TSX Venture Exchange nor its Regulation Services Provider (as that term is defined in the policies of the TSX Venture Exchange) accepts responsibility for the adequacy or accuracy of this news release.


How Did A.I. Art Evolve? Heres a 5,000-Year Timeline of Artists Employing Artificial Intelligence, From the Ancient Inca to Modern-Day GANs – artnet…

The term "artificial intelligence" has been colored by decades of science fiction, where machines capable of thinking freely, learning autonomously, and maybe even experiencing emotions have been reimagined in different forms, whether as benevolent as WALL-E or as malevolent as HAL 9000. So it is perhaps not our fault that when we hear about A.I. art, we might picture something that is actually a major misconception of the technology.

The oracular entity we imagine as the maestro behind such artworks is what researchers today would call an artificial general intelligence, and while technologists are actively working toward this, it does not yet exist. "I think a lot of people like to ascribe somewhat spiritual qualities to A.I. as it is something beyond human ken, something that is more pure in that way," A.I. artist and researcher Amelia Winger-Bearskin said. "But it is actually quite messy; it's just a bunch of nerdy coders and artists that are making stuff."

While the fiction of A.I. art is pretty neat, the messy reality is that artists who work with computational systems have much more say in the outcomes than the term might suggest: they provide the inputs, guide the process, and filter the outputs. Artists have been attracted to using A.I. in their work for a variety of reasons; some are drawn to working with the most futuristic technologies, others use it as a way of integrating chance into their work, and others see potential for it to expand elements of their existing practices.

Below, we've outlined a timeline of a few of the key developments within the long history of A.I. art.

Model of a Jacquard loom (Scale 1:2), 1867. Photo by Science Museum/SSPL/Getty Images.

A.I. didn't spring forth from nothing in the 21st century. Here are its earliest seeds.

3000 B.C. Talking Knots

The ancient Inca used a system called quipu, or "talking knots," to collect data and keep records on everything from census information to military organization. The practice, in use centuries before algebra was born, was both aesthetically intricate and logically robust enough to be seen as a precursor to computer programming languages.
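Quipu recorded numbers as clusters of knots arranged in decimal positions along a cord. A toy encoder (illustrative only; actual quipu used distinct knot types and spacing conventions far richer than this) conveys the positional idea:

```python
# Toy quipu-style encoder: one cluster of knots per decimal position,
# most significant position at the top of the cord.
def quipu_knots(n: int) -> list:
    """Return knot counts per decimal position for a non-negative integer."""
    return [int(d) for d in str(n)]

print(quipu_knots(365))  # [3, 6, 5]: clusters of 3, 6, and 5 knots down the cord
```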

1842 Poetical Science

Ada Lovelace, often cited as the mother of computer science, was helping researcher Charles Babbage publish the first algorithm intended to be carried out on his Analytical Engine, the first general-purpose mechanical computer, when she wrote about the idea of "poetical science," imagining a machine that could have applications beyond calculation: could computers be used to make art?

The functionality of the Analytical Engine was actually inspired by the Jacquard loom, which revolutionized the textile industry around 1800 by taking punch-card instructions for whether to stitch or not, essentially a binary system. A portrait of the loom's inventor, Joseph Jacquard, woven into a tapestry on the loom in 1836 using 24,000 punched cards, could in this sense be viewed as the first digitized image.

1929 A Machine That Could See

Austrian engineer Gustav Tauschek patented the first optical character recognition device, called a "reading machine." It marked an important step in the advance of computers and prompted conversations familiar from those evoked by artificial intelligence today: What does it mean to look through machine eyes? What does a computer see?

1950 The Imitation Game

Alan Turing developed the Turing Test, also known as the "Imitation Game," a benchmark for a machine's ability to exhibit intelligent behavior indistinguishable from that of a human.

Artworks by Jean Tinguely on view before the opening of the "Jean Tinguely. Super Meta Maxi" exhibition at Museum Kunstpalast on April 21, 2016, in Duesseldorf, Germany. Photo by Sascha Steinbach/Getty Images.

1953 Reactive Machines

Cybernetician Gordon Pask developed his MusiColour machine, a reactive device that responded to sound input from a human performer to drive an array of lights. Around the same time, others were developing autonomous robots that responded to their environments, such as Grey Walter's Machina Speculatrix tortoises, Elmer and Elsie, and Ross Ashby's adaptive machine, the Homeostat.

1968 Cybernetic Serendipity

Artists in the 1960s were influenced by these cybernetic creations, and many made artificial-life artworks that behaved according to biological analogies, or began to treat systems themselves as artworks. Many examples were included in the 1968 "Cybernetic Serendipity" exhibition at London's Institute of Contemporary Arts: Bruce Lacey exhibited a light-sensitive owl, Nam June Paik showed his Robot K-456, and Jean Tinguely contributed two of his painting machines, kinetic sculptures that let visitors choose the color and position of a pen and how long the machine ran before it produced a freshly drawn abstract artwork.

1973 An Autonomous Picture Machine

In 1973, artist Harold Cohen developed algorithms that allowed a computer to draw with the irregularity of freehand drawing. Called Aaron, it is one of the earliest examples of a properly autonomous picture-maker: rather than producing the random abstractions of its predecessors, Aaron was programmed to paint specific objects, and Cohen found that some of his instructions generated forms he had not imagined before; he had, in effect, set up commands that allowed the machine to make something like artistic decisions.

Although Aaron was limited to creating in the one style Cohen had coded it with (his own painting style, in the tradition of color field abstraction), it was capable of producing an infinite supply of images in that style. Cohen and Aaron showed at Documenta 6 in Kassel in 1977, and the following year exhibited at the Stedelijk Museum in Amsterdam.

By the late 20th century, the field began to develop more quickly amid the boom of the personal computer, which allowed people who did not necessarily come from a tech background to play with software and programming.

By the time the 2000s rolled around, the field opened up considerably thanks to resources specifically geared toward helping artists learn to code, such as artists Casey Reas and Ben Fry's Processing language, and open-source projects hosted on GitHub. Meanwhile, researchers were creating and releasing vast datasets, such as ImageNet, that could be used to train algorithms to catalogue photographs and identify objects. Finally, ready-made computer vision programs like Google DeepDream allowed artists and the public to experiment with visual representations of how computers understand specific images.

Amid all these innovations, developments in the field of A.I. art began branching and overlapping. Here are three main categories.

The landing page for Lynn Hershman Leeson's Agent Ruby (2010). Courtesy the artist.

While chatbots are now omnipresent stand-ins for live customer service agents, some of the earliest iterations were built by artists.

1995 A.L.I.C.E.

Richard Wallace's famous A.L.I.C.E. chatbot, which learned how to speak by gathering natural-language sample data from the web, was released in 1995.

2001 Agent Ruby

Artist Lynn Hershman Leeson was working nearly concurrently with Wallace on her own chatbot, part of an artistic project commissioned by SFMOMA in 1998. Leeson had made a film called Teknolust, which involved a cyborg character with a lonely-hearts column on the internet who would reach out and talk to people. Leeson wanted to create Agent Ruby in real life, and worked with 18 programmers from around the world to do so. Agent Ruby was released in 2001, and Leeson said she did not really see it as a standalone A.I. artwork at the time, but more as a piece of expanded cinema.

2020s Expanded Art

Since then, many artists have created works involving chatbots. Martine Rothblatt's Bina48 chatbot is modeled after the personality of her wife, and Martine Syms has made an interactive chatbot to stand in for her digital avatar, Mythiccbeing, "a black, upwardly mobile, violent, solipsistic, sociopathic, gender-neutral femme."

In this photo illustration a virtual friend is seen on the screen of an iPhone on April 30, 2020, in Arlington, Virginia. Photo by Olivier Douliery/AFP via Getty Images.

There are many ways in which artists are working with A.I. to create generative art, using various kinds of neural networks (interconnected layers of processing nodes, modeled loosely on the human brain) as well as machine learning techniques such as evolutionary computation. But the approach most commonly associated with A.I. art today is the Generative Adversarial Network, or GAN.

2014 GANs are developed

Researcher Ian Goodfellow coined the term in a 2014 paper theorizing that GANs could be the next step in the evolution of neural networks because, rather than working on pre-existing images as Google DeepDream does, they could be used to produce completely new images.

Without getting too technical, there are two things to understand about how a GAN works. First, the generative part: the programmer trains the algorithm on a specific dataset, such as pictures of flowers, until it has seen a large enough sample to reliably recognize a "flower." Then, based on what it has learned about flowers, the algorithm is instructed to generate a completely new image of a flower.

The second part of the process is the adversarial part: these new images are presented to a second algorithm, the discriminator, which has been trained to distinguish between images produced by humans and those produced by machines (a Turing-like test for artworks), and the generator improves until the discriminator is fooled.
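For readers curious to see the two parts in action, here is a deliberately tiny sketch, our own illustration rather than anything from the article or from Goodfellow's paper: a one-dimensional "GAN" in plain NumPy, where a linear generator learns to mimic Gaussian "real" data and a logistic-regression discriminator tries to tell the two apart. All names and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    # Squashes a score into (0, 1): the discriminator's "how real?" verdict.
    return 1.0 / (1.0 + np.exp(-x))

def sample_real(n):
    # "Real" data: a 1-D Gaussian around 3 the generator must learn to mimic.
    return rng.normal(3.0, 0.5, n)

# Generator g(z) = a*z + b maps noise z ~ N(0, 1) to fake samples.
a, b = 1.0, 0.0
# Discriminator d(x) = sigmoid(w*x + c) scores how real a sample looks.
w, c = 0.1, 0.0

lr, batch = 0.05, 64
for step in range(1500):
    # Adversarial part: a few discriminator updates per round,
    # ascending the gradient of log d(real) + log(1 - d(fake)).
    for _ in range(5):
        z = rng.normal(0.0, 1.0, batch)
        x_real, x_fake = sample_real(batch), a * z + b
        s_real = sigmoid(w * x_real + c)
        s_fake = sigmoid(w * x_fake + c)
        w += lr * (np.mean((1 - s_real) * x_real) - np.mean(s_fake * x_fake))
        c += lr * (np.mean(1 - s_real) - np.mean(s_fake))

    # Generative part: one generator update, ascending log d(fake),
    # i.e. nudging its parameters to better fool the discriminator.
    z = rng.normal(0.0, 1.0, batch)
    s_fake = sigmoid(w * (a * z + b) + c)
    a += lr * np.mean((1 - s_fake) * w * z)
    b += lr * np.mean((1 - s_fake) * w)

# After training, generated samples should cluster near the real mean of 3.
fake_mean = float(np.mean(a * rng.normal(0.0, 1.0, 10000) + b))
```

In a real GAN both players are deep neural networks and the data are images rather than numbers, but the alternating distinguish/fool loop is the same idea.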

Anna Ridler, Tulips from Mosaic Virus (2018). Image courtesy the artist.

2017 The Birth of GANism

After Goodfellow's paper about GANs was published in 2014, tech companies open-sourced their raw and untrained GANs, including Google (TensorFlow), Meta (Torch), and the Dutch radio broadcaster NPO (pix2pix). While there were a few early adopters, it took until around 2017 for artists to really begin to experiment with the technology.

Some of the most interesting work has been made when artists don't treat the algorithm as completely autonomous, but use it to independently determine just some features of a work. Artists have trained generative algorithms in particular visual styles, and shaped how a model develops creatively by curating and honing its outputs to their own tastes, so results can vary widely in aesthetic and conceptual depth. Some train the algorithms on datasets of their own work, like Helena Sarin, who feeds in her drawings, or Anna Ridler, who uses her own photographs. Others have scraped public data to ask conceptually interesting questions, such as Memo Akten, who for his 2018 film Deep Meditations trained a model on visually diverse images scraped from Flickr that were tagged with abstract concepts relating to the meaning of life, allowing the machine to offer up its own eerie interpretation of what our collective consciousness suggests these things have in common.

2018 Auction Milestone

Probably the most famous example of a GAN-made artwork in the contemporary art world is a portrait made by the French collective Obvious, which sold at Christie's in 2018 for a whopping $432,500. The trio of artists trained the algorithm on 15,000 portraits from the 14th to the 20th century, then asked it to generate its own portrait, which, in a stroke of marketing genius, they attributed to the model.

The resulting artwork, Portrait of Edmond de Belamy (the name an homage to Goodfellow), which vaguely resembles a Francis Bacon, captured the market's attention. While there has been much debate about the aesthetic and conceptual importance of this particular work, the astronomical price makes it an important milestone in the history of A.I. art.

Obvious Art's Portrait of Edmond de Belamy (2018), a Generative Adversarial Network print on canvas signed with the GAN loss function, min_G max_D E_x[log D(x)] + E_z[log(1 - D(G(z)))].

In recent years, there has been an increasingly large cohort of artists who are looking at A.I. not necessarily to produce images, but as part of a practice addressing how A.I. systems and inherent algorithmic biases impact issues of social justice, equity, and inclusion.

2019 ImageNet Roulette Goes Viral

While there are many artists working on these questions, a standout moment came when artist Trevor Paglen and researcher Kate Crawford's ImageNet Roulette project went viral.

Their project aimed to expose the systemic biases that humans have passed on to machines by looking at the specific case of the ImageNet database, a free repository of some 14 million images that were manually labeled by tens of thousands of people as part of a Stanford University project to map out the entire world of objects. The database is widely used by researchers to train A.I. systems to better understand the world, but because the images were labeled by humans, many of the labels are subjective, and reflect the biases and politics of the individuals who created them.

Paglen and Crawford's project allowed the public to upload their own images to the system to see how it would label them. The database classified people into a huge range of types, including race, nationality, profession, economic status, behavior, character, and even morality. And plenty of racist slurs and misogynistic terms came with those classifications. Scrolling through Twitter at the time, I remember seeing people sharing their own labels: a dark-skinned man labeled as "wrongdoer, offender"; an Asian woman as a "Jihadist."

It was a poignant illustration of a hugely problematic dimension of these systems. As Paglen and Crawford explained: Understanding the politics within AI systems matters more than ever, as they are quickly moving into the architecture of social institutions: deciding whom to interview for a job, which students are paying attention in class, which suspects to arrest, and much else.

Exhibition view of Kate Crawford, Trevor Paglen: "Training Humans" at Osservatorio Fondazione Prada, through February 24, 2020. Photo by Marco Cappelletti, courtesy Fondazione Prada.

2020s A Generation of Activist A.I. Art

Other artists working in this vein include some of the early pioneers of A.I. art, such as Lynn Hershman Leeson, whose interactive installation Shadow Stalker (2018-21) uses algorithms, performance, and projections to draw attention to the inherent biases in private systems like predictive policing, which are increasingly used by law enforcement.

Elsewhere, artists like Mimi Onuoha have focused on "missing datasets" to highlight bias within algorithms by thinking about all the types of data we don't have, creating a series of libraries of these datasets, such as missing data related to Blackness. Meanwhile, artists like Caroline Sinders run activist projects like the ongoing Feminist Data Set, which interrogates the processes that lead to machine learning, asking of each step in the pipeline, from data collection to labeling to training: is it feminist? Is it intersectional? Does it have bias? And how could that bias be removed? And Joy Buolamwini, who uncovered flaws in facial recognition technology's ability to identify faces with darker skin tones, interrogates the limitations of A.I. through artistic expressions informed by algorithmic-bias research.


Artificial intelligence and data technology provide smarter health care: 4 solutions that have made a difference for noncommunicable diseases – World…

Starting today (14 December) in Moscow, the WHO European conference on tackling noncommunicable diseases through digital solutions brings together decision-makers and experts from across the WHO European Region to identify innovative ways to tackle chronic diseases that affect millions of people.

The growing burden of noncommunicable diseases (NCDs) in the European Region has called for new approaches to managing chronic conditions. COVID-19 has limited access to health services and placed a huge burden on economies, inspiring countries to look for digital solutions that improve the quality of health services and make them more responsive to people's needs.

At the same time, decision-makers across the Region are searching for new ways to improve the prevention of NCDs and promote healthier lifestyles in general, an area that requires further exploration.

A selection of stories from countries shows how digital solutions can benefit prevention and treatment of NCDs.

A national diabetes registry was first established in 2000 in Croatia. Called CroDiab, the registry is a web-based system for the collection of information on diabetic patients, which allows health professionals to focus on their individual needs and choose better treatment options.

CroDiab's data is collected from government registries and from primary care and hospital reports. Since 2004, use of this digital database has been mandatory for all primary and secondary health-care physicians who have patients with diabetes in their care.

A national electronic cancer data collection system in Georgia makes the cancer screening, diagnosis and treatment process more efficient for patients and doctors, and allows the government to better devise cancer management strategies.

The Unified Electronic System for Cancer Data Collection registers every step in the cancer case-management process. As a result, patients do not have to carry their diagnosis papers around when seeing different specialists; everything is already in the system. Using this tool, the country's health professionals and authorities can better plan cancer management and choose best practices.

In Slovakia, a new technology helps reduce the average time spent by a radiation oncologist in planning radiation therapy for patients by at least 30%.

The software tool uses artificial intelligence to automatically generate images within seconds from computerized tomography (CT) scans. This helps oncologists ensure that radiation therapy planning is optimal, with the least possible impact on the patient.

Many people with chronic conditions find it makes a huge difference to get support from others dealing with the same challenges. Recognizing this, the Elsa Science app was developed in Sweden to link up patients who wish to share their experiences, gain knowledge about their condition, and play an active part in their health care.

The first chronic condition the Elsa Science app is focusing on is rheumatoid arthritis. While using the app, people with this condition can share their health information with their rheumatology specialists or health facilities, and get support from their families and friends.

In the European Region, digital solutions are helping more and more people to enjoy and share the benefits of quality health care and to learn more about healthier choices and lifestyles.

The Moscow conference on digital solutions to tackle NCDs reflects the vision of the WHO European Programme of Work 2020-2025, and shares the hope that, even while struggling with the challenges of COVID-19, we are creating a better and healthier world to live in.
