What is Artificial Intelligence? – Definition & History | Study.com

Brief History

The field of artificial intelligence as we know it today began in the 1940s. World War II, with its demand for rapid technological advancement, spurred the field's creation thanks to the likes of mathematician Alan Turing and neurophysiologist Grey Walter. These men, and many others like them, began to exchange ideas about the various possibilities of intelligent machines and what would count as an intelligent machine.

It wasn't until the 1950s, however, that the actual term 'artificial intelligence' was coined by computer scientist John McCarthy. During this time, scientist Marvin Minsky's ideas on how to pre-program computers with rules of intelligence would come to dominate the following decades. In fact, he and McCarthy received a lot of funding to develop AI in the hope of gaining an upper hand over the Soviet Union. However, Minsky's predictions about artificial intelligence (namely, the pace of its advancement) fell woefully flat over time.

It was also in the late 1960s that the first mobile, decision-making robot capable of various actions was made. Its name was Shakey. Shakey could create a map of its surroundings prior to moving. However, it was very slow at sensing its surrounding environment. Shakey was a good example of the shaky ground AI was on at the time.

This is because in the 1970s, owing to a derisive, and ultimately wrong, conclusion by mathematician Sir James Lighthill about AI's capabilities, AI hit a snag. Funding for AI projects was massively slashed, and very little development occurred during this decade.

But by the early 1980s, AI started to receive funding for commercial projects as companies noticed that AI had uses in specific niches that could save them money. In the 1990s, AI had a mini-revolution of sorts. Many in the field discarded Minsky's approach to AI and instead adopted the approach pushed by Rodney Brooks. Instead of pre-programming a computer with algorithms of intelligence, as Minsky advised, Brooks argued that AI should be built with neural networks that worked like brain cells and thus learned new behaviors. Brooks didn't come up with this idea himself, but he did help bring it back to life. In fact, you can thank Brooks' company for the first widely used robot for the home, the Roomba vacuum.

Besides the Roomba vacuum, the 2000s had a lot going on in AI. Maybe you've seen YouTube clips of the robot BigDog? It looks like a big, scary, metallic dog-horse of some sort. It was built to function as an artificial pack animal for the military in rough terrain. Or perhaps you've heard of PackBot? This is a bomb disposal robot that has been used in the Middle East by U.S. troops.

Even if you haven't heard of these incredible machines, you've almost certainly heard of speech recognition on your cell phone: speech recognition that learns your voice and becomes better over time. That's another great example of AI in the modern world.

If you're a fan of Jeopardy!, then you saw AI function under the name 'Watson', a machine system that beat the two top Jeopardy! champions of all time at answering a wide variety of questions. Watson's technology now helps give doctors recommendations about their patients.

Today's artificial intelligence touches almost every aspect of society, from the military and entertainment to your cell phone and driverless cars, from real-time voice translation to a vacuum that knows where and how to clean your floor without you, from your own computer to your doctor's office.

So where is AI going in the future? No one can tell you for sure, but here are some possible ideas:

Some people claim that, no matter what, machines will never be truly intelligent. However, it's a matter of debate as to what intelligence actually is and how you can gauge it. So far, AI has been limited to very specific tasks, and in some of those tasks, such as playing chess, it has become better than humans. In more complex tasks, like speech recognition, it's not as good as you and I (at least not yet). In some limited ways, computers are already more intelligent than people. For instance, unlike people, they aren't influenced by unintelligent superstitions (unless programmed to be). Whether a machine will ever truly surpass all of your intellectual abilities, learning new things and making decisions on par with or better than humans, is simply unknown. Many will argue yes, and many will argue no. Perhaps there will be no actual delineation between AI and human in the future. We may simply, albeit slowly, merge and become completely inseparable.

Artificial intelligence (AI) is the ability of a computer to perform tasks similar (at least in a limited sense) to human learning and decision making. AI's roots go back to the 1940s, with Alan Turing and Grey Walter. In the 1950s, John McCarthy coined the term 'artificial intelligence', and Marvin Minsky was a well-known scientist in the field. In the 1980s, companies began using AI to save money, and in the 1990s and 2000s the field of AI really took off with the likes of Watson, speech recognition, and a lot more.


Artificial intelligence and its potential to change healthcare – Chief Healthcare Executive

A panel of physicians and leaders in the field expressed enthusiasm for AI's possible benefits for patients. They also said solutions must be designed with health equity in mind.

Many have hailed the potential of artificial intelligence to transform healthcare.

Michael Howell, Google's chief clinical officer and deputy chief health officer, says, "It's hard to imagine a technology that is more hyped than AI."

Even so, Stephen Parodi, executive vice president of The Permanente Federation, says, "Widespread AI use in healthcare is still in its infancy."

Still, many are projecting significant growth in the prevalence of AI in medicine in the near future.

During a one-hour forum hosted by The Permanente Federation Monday, healthcare leaders, all physicians, assessed the possibilities of AI, the keys to success, and expectations for its future uses.

Even in a forum where leaders talked about potential challenges, including designing technology with patients in mind and the urgent need to focus on equity, the participants spoke with enthusiasm, even excitement, about the growing role of artificial intelligence in medicine.

"It's appropriate to bring some healthy skepticism and ask questions about the potential of artificial intelligence in healthcare," Howell said.

However, Howell also said he expected "AI will do things we didn't think were possible."

Earlier interventions

Edward Lee, executive vice president and chief information officer of The Permanente Federation, talked about how AI is being used across the Kaiser Permanente system.

At Kaiser Permanente, researchers have used AI to examine retinal images of patients with diabetes, to possibly determine if patients are more likely to lose their vision, Lee said.

In addition, Kaiser Permanente is using AI-powered models to analyze which patients in hospitals may be at higher risk of deteriorating or could require intensive care. "This gives us a chance to intervene before patients get sicker," Lee said.

Hundreds of patients have likely been saved, he said, and that's a conservative estimate.

The system is using AI to analyze emails to make sure they are getting to the right member of the care team. "This helps our patients get timely responses to their health concerns," Lee said.

John Halamka, president of Mayo Clinic Platform, said he expected that within the next six quarters, artificial intelligence is going to be brought into the workflow of electronic health records.

The Mayo Clinic has been increasingly using AI in research. Mayo Clinic researchers have been studying the use of artificial intelligence to identify pregnant patients who may be at risk for complications, as well as patients who could have greater likelihood of suffering a stroke.

When asked about when AI would gain greater prevalence, Halamka cited the author William Gibson, who once said, "The future is already here; it's just not evenly distributed."

"I believe the perfect storm for innovation requires technology that's good enough, policy that's enabling, and cultural change that creates a sense of urgency," Halamka said.

Patients have greater expectations of healthcare, and that will help expand the use of AI in medicine, panelists said. "The cultural demands of our patients will drive us forward," Halamka added.

Google Health is using artificial intelligence to bring better technology to care teams, and also to reach out to consumers when they're searching for health information online, steering them to relevant and accurate results and away from misinformation, Howell said. The tech giant is also using AI in a community context, he said, such as better projections of flood threats.

Vivian Lee, president of health platforms at Verily, a sister company of Google, talked about the use of AI algorithms to identify patients at higher risk of hypertension, substance use, or a longer hospital stay. She said the goal is getting that information to the clinicians to make that data more actionable.

Artificial intelligence also presents opportunities to engage patients in different ways, and that goes beyond just personalized medicine, Vivian Lee said. With AI, she said, the question becomes, "How do we move to precision health and precision engagement?"

"I really believe the advances we are making now will enable us to do personalized care at scale," Vivian Lee said.

During the forum, participants, including the audience, weighed in on where AI would have the most potential to improve healthcare. Most said it would be the use of artificial intelligence to predict potential health risks.

"I think the thing about risk prediction is it can affect not only individual patients; it can affect entire populations, entire communities," Edward Lee said. "We can positively contribute to the health of many, many patients."

Focusing on health equity

Even as the panelists touted AI's promise, they also said health systems aiming to use artificial intelligence must focus on closing healthcare disparities.

"There is deep evidence that care that isn't equitable just isn't high quality," Howell said.

"Everyone should have the opportunity to receive the full benefits of AI. We should work systematically to make sure that happens," he said.

Researchers are using artificial intelligence to predict risks in patients, but as Howell noted, the problem is some data is missing when it comes to patients from underrepresented communities. In a sense, disparities can be baked into the data being analyzed.

Vivian Lee shared similar concerns. "We need to be attentive to bias and health equity," she said.

Fatima Paruk, chief health officer and senior vice president of Salesforce, said AI could be either an enabler or a barrier. But she said, "It leaves me thinking we can deliver more equitable care."

"The technology of AI in and of itself is only so useful," Edward Lee said.

"Combining it with expertise is when you can really make a difference in the lives of patients," he said.

The panel's members said they were hopeful in part because much of the research in AI, and many of the new artificial intelligence tools, are being developed by those in the healthcare industry.

Paruk touted AIs potential, combined with remote patient monitoring, in helping older patients potentially live at home longer. Health systems could eventually use data to get a sense of when those older patients may need more assistance.

That would also be a boon to many in the sandwich generation, who are caring for both their children and aging parents. "There's a huge amount of potential there," she said.

While panel members noted that similar predictions were once made about electronic medical records reducing demands on physicians, Paruk and others said AI could reduce burnout among clinicians.

But ultimately, the panel members expressed the most enthusiasm for how artificial intelligence could transform patient care.

"I'm incredibly hopeful for the future," Paruk said.


Chipotle Is Testing More Artificial Intelligence Solutions To Improve Operations – Forbes

Chipotle's Chippy, an autonomous kitchen assistant that integrates culinary traditions with artificial intelligence to make tortilla chips, is moving into the next phase of testing and will be integrated in a restaurant next month.

During Chipotle's Q2 earnings call in late July, executives made it clear the system needed to refine some of its operational processes as dine-in business returns while off-premise business remains elevated.

In doing so, Chief Restaurant Officer Scott Boatwright touted the company's Project Square One, a game plan focused on employee training to execute orders more efficiently. Today, the company announced it's also getting more technology involved.

Chipotle is testing two technologies specifically to streamline operations and reduce friction: a kitchen management system and an advanced location-based platform.

In eight Southern California restaurants, Chipotle is testing PreciTaste's kitchen management system, which provides demand-based cooking and ingredient preparation forecasts by leveraging artificial intelligence and machine learning. According to Chipotle, the system monitors ingredient levels in real time and notifies employees how much to prep and cook, and when to start cooking. The system was created not only to optimize throughput but also to minimize food waste.

"The new kitchen management system has alleviated manual tasks for our crew and given restaurant managers the tools they need to make informed in-the-moment decisions, ultimately enabling them to focus on an exceptional culinary and an outstanding guest experience," Chief Technology Officer Curt Garner said in a statement.

This isn't Chipotle's first foray into AI. Earlier this year, Chipotle announced a test with Miso Robotics to bring its artificial intelligence-driven Chippy into its Cultivate [innovation] Center to replicate the chain's signature tortilla chips. That test is now expanding, with Chippy making its first restaurant debut next month in a Fountain Valley, California, location.

From there, the company will gauge employee and guest feedback before developing a broader rollout plan.

During a recent interview, Garner said the company is looking at everything from the internet of things to machine learning to run its restaurants more efficiently and enable crew members to focus on other tasks.

"When you see us leaning into this space, it will be a question of: are there better tools to help our crews versus removing a task? Those are the kinds of things we're looking at," Garner said.

The company is also currently testing Radius Networks' Flybuy, a contextual restaurant program designed to identify Chipotle app users' intent upon arrival, at 73 Cleveland-area restaurants. The location-based technology utilizes real-time data to let customers know their orders are ready, to remind them to scan the Chipotle Rewards QR code at checkout, and more. It even alerts customers if they're in the wrong pick-up location.

The program has yielded positive results so far, according to Chipotle, including improved in-store rewards engagement and delivery efficiencies.

"Empowering our restaurants with advanced technologies is critical for operational excellence and better positions our teams for our ambitious growth plans," Boatwright said in a statement.

Notably, Chipotle isn't the only chain exploring AI technology to improve operations. White Castle has been testing Miso Robotics' Flippy in the back of the house for about two years, for instance, while Jamba has partnered with autonomous food platform Blendid to automate smoothies. Several restaurant chains, including Applebee's, IHOP and Tropical Smoothie Cafe, leverage Flybuy.

In fact, a new survey from Capterra found that 76% of restaurants are currently using automation in three or more areas of operation, while 96% are using some type of automation tool in the back of the house. As such, the cooking robotics space is expected to grow by over 16% a year, reaching an estimated worth of $322 million by 2028.

That said, Chipotle's scale, company-owned model and zero-debt balance sheet add a bit more intrigue to this trend. Chipotle has some latitude to pilot new solutions without franchisee investment or pushback, and any proven return on investment will likely provide a strong case for adoption across an industry still very much struggling with labor shortages.

Further, all of these technologies enhance throughput, a major focus for Chipotle to drive more sales. During the company's Q2 call, for example, CEO Brian Niccol said order fulfillment was in the low 30s on a per-15-minute basis nearly 10 years ago, a level of throughput that adds a full percent to comp sales on the day.

"On a 15-minute basis, that's what we're going after," he said during the earnings call.

Chipotle's announcements today come on the heels of the company's Cultivate Next venture fund launch, created to identify strategically aligned companies for early-stage investments. As part of this $50 million fund, Chipotle has already invested in Hyphen, a foodservice platform that automates kitchen operations, and Meati Foods, a company that provides plant-based proteins.

Chipotle is also leveraging a new scheduling tool, has invested in autonomous delivery company Nuro, and is testing radio-frequency identification to trace and track ingredients in its restaurants.

In a recent statement, Garner said the company is exploring investments in innovations that will enhance the employee and guest experience and quite possibly revolutionize the restaurant industry.

"Investing in forward-thinking ventures that are looking to drive meaningful change at scale will help accelerate Chipotle's aggressive growth plans," he said.

Chipotle currently has about 3,000 locations, with plans to grow to about 7,000 in the coming years.


Are We Heading Towards a Post-Responsibility Era? Artificial Intelligence and the Future of Morality | Practical Ethics – Practical Ethics

By Maximilian Kiener. First published on the Public Ethics Blog

AI, Today and Tomorrow

77% of our electronic devices already use artificial intelligence (AI). By 2025, the global market of AI is estimated to grow to 60 billion US dollars. By 2030, AI may even boost global GDP by 15.7 trillion US dollars. And, at some point thereafter, AI may come to be the last human invention, provided it optimises itself and takes over research and innovation, leading to what some have termed an intelligence explosion. In the grand scheme of things, as Google CEO Sundar Pichai thinks, AI will then have a greater impact on humanity than electricity and fire did.

Some of these latter statements will remain controversial. Yet, it is also clear that AI increasingly outperforms humans in many areas that no machine has ever entered before, including driving cars, diagnosing illnesses, selecting job applicants, and more. Moreover, AI also promises great advantages, such as making transportation safer, optimising health care, and assisting scientific breakthroughs, to mention only a few.

There is, however, a lingering concern. Even the best AI is not perfect, and when things go wrong, e.g. when an autonomous car hits a pedestrian, when Amazon's Alexa manipulates a child, or when an algorithm discriminates against certain ethnic groups, we may face a responsibility gap, a situation in which no one is responsible for the harm caused by AI. Responsibility gaps may arise because current AI systems themselves cannot be morally responsible for what they do, and the humans involved may no longer satisfy key conditions of moral responsibility, such as the following three.

AI & The Conditions of Responsibility

First, many scholars argue that a key condition of responsibility is control: one can be responsible for something only if one had meaningful control over it. Yet, AI systems afford very little control to humans. Once in use, AI systems can operate at a speed and level of complexity that make it impossible for humans to intervene. Admittedly, people may be able to decide whether to apply AI in the first place, but once this decision has been made, and justifiably so, there is not much control left. The mere decision to risk a bad outcome, if it is itself justified and not negligent or reckless, may not be sufficient for genuine moral responsibility. Another reason for the lack of control is the increasing autonomy of AI. Autonomy here means the ability of AI systems not only to execute tasks independently of immediate human control, but also (via machine learning) to shape the principles and algorithms that govern the operation of these systems; such autonomy significantly disconnects AI from human control and oversight. Lastly, there is also the so-called problem of many hands: a vast number of people are involved in the development and use of AI, and each of them has, at most, only a very marginal degree of control. Hence, insofar as control is required for responsibility, responsibility for the outcome of AI may be lacking.

Second, scholars have argued that responsibility has an epistemic condition: one can be responsible for something only if one could have reasonably foreseen or known what would happen as a result of one's conduct. But again, AI makes it very difficult to meet this condition. The best AI systems tend to be those that are extremely opaque. We may understand what goes into an AI system as its input data, and also what comes out as either a recommendation or action, but often we cannot understand what happens in between. For instance, deep neural networks can base a single decision on over 20 million parameters, e.g. the image recognition model Inception v3 developed by Google, which makes it impossible for humans to examine the decision-making process. In addition, AI systems' ways of processing information and making decisions are becoming increasingly different from human reasoning, so that even scrutinising all the steps of a system's internal working processes wouldn't necessarily lead to an explanation that seems sensible to a human mind. Finally, AI systems are learning systems and constantly change their algorithms in response to their environment, so that their code is in constant flux, leading to some sort of technological panta rhei. For these reasons, we often cannot understand what an AI will do, why it will do it, and what may happen as a further consequence. And insofar as the epistemic condition of responsibility requires the foreseeability of harm to some degree of specificity, rather than only in very general terms (e.g. that autonomous cars sometimes hit people), meeting the epistemic condition presents a steep challenge too.
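The scale behind that opacity is easy to see with a little arithmetic. The following sketch is illustrative only: it is plain Python, the layer widths are invented, and it counts parameters for a generic fully connected network, not for Inception v3 itself.

```python
# Count the learnable parameters (weights + biases) of a fully
# connected network, layer by layer.
def dense_param_count(layer_sizes):
    total = 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        total += n_in * n_out + n_out  # weight matrix plus bias vector
    return total

# A very modest image model: a flattened 224x224 RGB input feeding
# two small hidden layers and a 10-way output.
sizes = [224 * 224 * 3, 128, 64, 10]
print(dense_param_count(sizes))  # roughly 19.3 million parameters
```

Even this toy network, far smaller than any production model, already carries tens of millions of individual numbers, which is why tracing how each one contributed to a single decision is not feasible for a human reviewer.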

Third, some theorists argue that one is responsible for something when it reflects one's quality of will, which could be either one's character, one's judgment, or one's regard for others. On this view, control and foresight may not be strictly necessary, but even then, the use of AI poses problems. When an autonomous car hits a pedestrian, for instance, it may well be that the accident does not reflect the will of any human involved. We can imagine a case in which there is no negligence but just bad luck, so that the accident would not reflect poorly on anyone's character, judgment, or regard for others.

Thus, various approaches to responsibility suggest that no one may be morally responsible for the harm caused by AI. But even if this is correct, a further important question remains: why should we care about a responsibility gap in the first place? What would be so bad about a future without, or with significantly diminished, human responsibility?

AI & The Point of Responsibility

To address this question, we need to distinguish between at least two central ideas about responsibility. The first explains responsibility in terms of liability to praise or blame.[1] On some of these views, being responsible for some harm means deserving blame for it. Thus, a responsibility gap would mean that no one could be blamed for the harm caused by AI. But would this be so bad? Of course, people may have the desire to blame and punish someone in the aftermath of harm. In addition, scholars argue that blaming practices can be valuable for us, e.g. by helping us to defend and maintain shared values.[2] Yet, the question remains as to whether, in the various contexts of AI, people's desire to blame really ought to be satisfied, rather than overcome, and also what value blaming practices ultimately hold in these different contexts. Depending on our answers to these questions, we may conclude that a gap of responsibility in terms of blameworthiness may not be so disadvantageous in some areas, but may still matter in others.

The second idea identifies responsibility with answerability, where an answerable person is one who can rightly be asked to provide an explanation of their conduct.[3] Being answerable for something does not imply any liability to blame or praise. It is at most an obligation to explain one's conduct to (certain) others. Blame would be determined by the quality of one's answer, e.g. by whether one has a justification or excuse for causing harm. This approach to responsibility features the idea of an actual or hypothetical conversation, based on mutual respect and equality, where the exchanged answers are something that we owe each other as fellow moral agents, citizens, or friends. Here, the question of a responsibility gap arises in a different way and concerns the loss of a moral conversation. Depending on our view on this matter, we may conclude that losing responsibility as answerability could indeed be a serious concern for our moral and social relations, at least in those contexts where moral conversations are important. But in any case, the value and role of answerability may be quite different from the value and role of blame, and thus addressing the challenge of responsibility gaps requires a nuanced approach too.

[1] Cf. Pereboom, D. Free Will, Agency, and Meaning in Life. Oxford University Press, 2014.

[2] Cf. Franklin, C. "Valuing Blame". In Coates, D. J., Tognazzini, N. (Eds.), Blame: Its Nature and Norms, 207-223. Oxford University Press, 2012.

[3] Smith, A. (2015). Responsibility as Answerability. Inquiry 58(2): 99-126.


Faroe Islands’ National Gallery Becomes the First to Launch Artificial Intelligence Exhibit – Skift Travel News

The National Gallery of the Faroe Islands launched an exhibition this week containing 40 images of the archipelago developed by artificial intelligence program Midjourney, becoming the first national gallery to feature a fully produced show created by artificial intelligence.

The exhibit, which runs from September 29 through October 30, reveals how prominent artists such as Vincent Van Gogh, Claude Monet and Pablo Picasso might have depicted the landscape of the remote archipelago in the North Atlantic. Visitors to the national gallery will also have the opportunity to create their own images of the Faroe Islands using Midjourney.

"When I first heard of (artificial intelligence) and Midjourney and how it is possible to create new pictures just like individual artists might have done, it immediately intrigued me," said Karina Lykke Grand, director of the National Gallery of the Faroe Islands.

"It was fascinating to see how, by giving prompts, the system can get an idea of how an artist such as Van Gogh or Picasso might have painted the Faroe Islands."


A.I. is solving traffic problems to get you where you're going safely – Fortune

"I haven't met anyone that really loves traffic," says Karina Ricks of the Federal Transit Administration.

Except, possibly, professionals like her who are tasked with reducing it.

Ricks has made her career out of caring about traffic patterns. Before her current role as the associate administrator for research, innovation, and demonstration at the FTA, she was the director of mobility and infrastructure for the City of Pittsburgh in Pennsylvania. She has spent countless hours thinking about cars, public transit, roads, and pedestrians, and how to make it all flow more smoothly.

"When you're in the peak times for travel, when the system is so full, it only takes a small disruption to cause really big problems," Ricks says. "The work is to quickly flag those disruptions and rapidly retool the system to operate around them."

What Ricks aims to optimize affects anyone moving from point A to point B, especially in cities. She explained that congestion is the number one problem when it comes to traffic, and a common occurrence in metropolitan areas. Add to that the number of variables at any given time, including human operators of vehicles and geography, and it results in a mind-boggling puzzle to even attempt to solve.

"If there were an easy way to reduce traffic, it would have been actioned in the past 50 years," she said. Instead, she, government organizations, and startups in the space, such as Lyt, are all looking at the immense amount of traffic data available, from traffic sensors to rideshare data and even bike and scooter data from smartphones, and using it to inform decisions on how to get people to work, home, and the grocery store safely and quickly.

That solution involves artificial intelligence and machine learning.

"There are tasks that humans just aren't good at that machinery is, and that's recognizing patterns," explains Tim Menard, founder and chief executive officer of Lyt, a software technology platform providing mobility solutions for cities. "A.I. is a great technology to use, because you're looking at all parts of the system. You can start feeding it different information, and you can put that into a system that can make operational changes."

Menard started Lyt after studying intelligent transportation systems for more than 13 years. His company uses vehicle data to solve traffic problems, especially when it comes to the efficiency of public transit options. For Menard, the end goal is to make more cities equitable by making public transit reliable, predictable, and faster.

Both Ricks and Menard believe that the way to reduce traffic is to get more people onto public transportation, such as buses, subways, and light rail systems. Public transportation is the safest surface transportation mode, with fewer injuries and fatalities. Its also a speedier way to move a larger number of people.

Ricks explained that most congestion is caused by low-occupancy vehicles, i.e., single-occupant cars. Those drivers are human; some drive faster, some slower; some change lanes often, others stop abruptly when a traffic light flashes yellow before red. Because humans behave so differently, there is a level of unpredictability in the traffic system. Much of her work aims to make mass transit more enticing for commuters.

"You're reducing the rate of crashes that might occur when you're reducing the number of vehicles that are there," Ricks added.

With that in mind, Menard started looking at the Internet of Things for his cloud platform, pulling data from smartphones, automotive sensors, public transportation logs, and delivery vehicles to understand traffic patterns at various times of the day as well as during special one-off events, such as a sports game at a local stadium. He said that the first hurdle was to operate from a place of known information rather than guessing; in the past, he explained, it took a human looking at a video screen for hours and hours to even begin to make an estimate on next steps.

He launched in San Jose, Calif., where for the past three years he has collaborated with the city to improve bus-route efficiency by 20%, thereby reducing fuel consumption by 14% and emissions at intersections by 12%. Using a predictive estimated time of arrival at each traffic light, his platform reduced the travel time between bus stops by optimizing bus lanes and traffic lights to ensure buses could move as effectively as possible without disrupting other traffic. He now works in other northern California cities, including additional Bay Area towns and Sacramento, as well as in the Pacific Northwest: Seattle and Portland, Ore.
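The predictive-ETA idea behind this kind of transit signal priority can be sketched in a few lines. The decision rule, function name, and thresholds here are invented for illustration, not Lyt's actual logic:

```python
def green_extension_s(bus_eta_s, green_remaining_s, max_extension_s=10):
    """Seconds to hold a green light for an approaching bus.

    If the predicted bus arrival falls just after the light would turn red,
    extend the green by the shortfall, capped so cross traffic is not
    disrupted. If the bus already makes the light, or the shortfall exceeds
    the cap, do nothing.
    """
    if bus_eta_s <= green_remaining_s:
        return 0  # bus arrives while the light is still green
    needed = bus_eta_s - green_remaining_s
    return needed if needed <= max_extension_s else 0

# A bus predicted 12 s out against 8 s of remaining green gets a 4 s hold;
# a bus 30 s out gets none, since holding that long would block cross traffic.
```

The cap is the key design choice: signal priority only pays off when the extension is small relative to the cycle, which is why an accurate ETA prediction matters more than an aggressive hold.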

Menard is also looking at bicycle and pedestrian traffic, something he says is of interest and priority to many transit authorities. He has worked to make bicycling safer by creating dedicated, curbed bike lanes with their own traffic signals synced with those of vehicle traffic to help avoid car-bicycle collisions. For pedestrians, Ricks explained that foot traffic uses sensors and adaptive controls to adjust settings in real time based on needs, a moment when the A.I. algorithm and real-time data intersect.

Another benefit of A.I. technology for traffic patterns involves first responders. Menard employed machine learning to analyze data from emergency vehicles like ambulances and fire trucks to improve their speed. He noted that in many urban environments, congestion and traffic patterns prevent first responders from promptly arriving on scene, or at a hospital in a life-or-death situation. In Sacramento, Calif., he tackled this problem.

"It was literally night and day better in under 15 minutes," he said of examining amassed data from all the relevant stakeholders in the city. There, he improved the speed of the slowest 10% of emergency vehicles by more than 10 miles per hour, allowing them to arrive 70% faster on any response. Even the top-performing 10% of vehicles saw an improvement of 6 miles per hour.

For every single-occupant car that swaps to public transit, there is one less vehicle on the road causing congestion. Menard regularly reminds people that when they are sitting in their car, stuck in traffic, they are surrounded by many other people doing the exact same thing. If they switched to a shared vehicle, a high-occupancy mode of transit, they might speed along very quickly.

But it's always challenging to inspire commuters to change habits, so the new option needs to be compelling enough to motivate them to adjust the way they operate. "What you want in a transit system is to show up now [and] there's a bus ready to get you in a timely fashion," Ricks said. "We need to address traffic in order for transit to be that attractive alternative. There's quite a bit of work still to do."

Excerpt from:
A.I. is solving traffic problems to get you where youre going safely - Fortune

Artificial Intelligence Theory Into Practice (and Into Controversy) | LBBOnline – Little Black Book – LBBonline

Art contests are usually controversial solely because the aesthetics of art are so subjective. However, the latest art controversy arose because of a cutting-edge technical question: is art created by artificial intelligence really "art" at all?

This story is interesting because, despite the sensationalist headlines that artificial intelligence is going to take over all our jobs, the fundamental truth, even in the area of creativity, is that humans are still needed to determine what image works and to fine-tune the end product (although AI will probably get better at this). The work of an artist or a creative person may change in its technique and technology, but creativity itself will never go away.

Already, people have been racing to redefine art in the context of digital technologies, now even more so within the context of tools like Midjourney and DALL-E 2. This is important because these AI engines are acquiring artistic styles from learning about existing art and then generating artwork that resembles the desired artist's style. You can now use these programs to create an ad featuring a yellow unicorn on the beach in the style of Jackson Pollock or a portrait of a brand spokesperson as if it were rendered by Rembrandt.

Are you feeling uncomfortable at the thought of AI being used this broadly? You are not alone. In a recent conversation, my colleague's first reaction to employing AI was that we shouldn't use it because our clients generally pay by rate card: if a copywriter or an art director were using AI to work faster or better, we would miss out on a higher fee.

There is also a potential danger to genuineness inherent in using AI technologies. These content generators can be, and have been, used to create and spread fake news or misinformation, relying on the fact that it can be virtually impossible to distinguish real images from fakes. There is growing public scrutiny when sensational or controversial images or videos are presented, with everyone looking for evidence that a deepfake, filter, or Photoshop trick has been used. While brands such as Hulu, State Farm, and ESPN have used these techniques for eye-catching ads, we should always be mindful of the dangers and limits of these technologies. And as we learn how to mitigate these risks, we will also be forming new ways of generating and interacting with creative content.

We shouldn't be afraid of the change and challenges presented by artificial intelligence. Yes, the advent of AI in the creative space means we probably need to change the way we approach many issues, from the concept of genuineness to the way we are compensated by our clients for work that can be performed faster and more efficiently. However, limiting the way we embrace technology because of our reliance on an outdated business model is not a win in any situation. If we are going to see our business transformed by AI technologies, and it will happen, then from our client meetings all the way to the judging tent at the state fair, we need to understand that our human creativity is still front and centre, and is still the premium we offer our clients. Through AI, it will just gain a little help.

---

More here:
Artificial Intelligence Theory Into Practice (and Into Controversy) | LBBOnline - Little Black Book - LBBonline

The Coming Menace of Artificial Intelligence And How We Can Respond As Artists – Fstoppers

The future of art has arrived. And it isn't pretty.

Like, I suspect, many people reading this article, I enjoy the late night comedy shows. When I was a kid, late night comedy was rather dry, filled with stale jokes designed to appeal to the broadest possible demographic. But as we've reached the age of peak T.V., so too have we reached the age of peak late night T.V. In our stratified media landscape, where we no longer turn in unison to the same broadcast and news outlets and the idea of a common knowledge base is quickly fading, late night comedians have also taken on the somewhat unfair mantle of often being a source for breaking news. I shouldn't have to watch John Oliver's monologue to get a deep dive into real issues affecting the world today. But, with the major networks taking a USA Today approach to most stories these days (only the minimum amount of ink required), late night comedy is often the only place devoting any amount of time to the pressing issues of the day.

This is not meant as an advertisement for late night comedy or a rant against modern news. Rather, it is meant to explain why I found myself watching a story on Last Week Tonight with John Oliver last week about a new service called Midjourney, which allows people to use artificial intelligence to create digital artwork by typing in a series of keywords. While users themselves aren't doing any of the drawing or painting, the words they choose to enter result in the computer generating its best approximation of their intent. Predictably, the results range from sublime to awful. And the segment is played mostly for laughs. But the fact that something like this not only could exist, but already does exist, should raise an artist's hackles regardless of discipline.

Only a day or two after watching that segment, I saw a news story about an A.I. rapper that lost its record deal. Yes, you read that right. And I have so many questions. One, how did a rapper that literally doesn't exist get an actual record deal? Two, how did they teach the A.I. to rap? Three, who is behind the A.I. rapper? As it turns out, it's that last part that led to the rapper's contract being rescinded, and it raises the more pressing question of why it was offered a contract in the first place. The digital gangster rapper, decked out in every stereotype of what certain people seem to think of African-Americans in general, was the brainchild of two non-African-Americans. I'm not going to go into the entire history of blackface and minstrel shows in this essay. That is a topic for its own essay, book, and/or documentary series. But let's just agree that a digital minstrel show is no better than a live one.

Of course, the echoes of history didn't stop there. The voice of the A.I. avatar who secured the record deal was actually that of a real rapper. But, much like many a shady music producer throughout the history of the industry, the producers had manipulated the situation to take the talent from the human artist, convert it into more easily manipulated ones and zeros, capitalize financially on the artist's work, then cut him out of the financial profits altogether.

As more facts about this A.I. rapper came to light and more outrage began to build, the record company canceled the contract and scrapped their plans. But why could they not have seen the problem with the plan upfront?

And, stepping away from any cultural appropriation issues for a moment, what implications does that have for the music industry going forward? We already live in a world of Auto-Tune, which can make almost any average shower singer into a songbird. As someone who is musically challenged, I appreciate the idea that a computer can make me sound like Marvin Gaye. But does that mean that I can actually sing? I'd say no. But what the story of the canceled rapper tells us is that we've rapidly reached a point where the human being might not be needed at all. Just plug in a handful of appropriate hashtags and voice samples and voila! You've got the next Whitney Houston. And, better yet, you don't have to pay her a cent.

The same thing is coming to the visual arts as well. I was watching Trevor Noah's show last night (yes, another late night comedian), and he had a story about an A.I. artist who won an art contest. The winning image was 90 percent created by artificial intelligence, according to the artist. He submitted the original keywords and did some finishing work in Photoshop. But the bulk of the heavy lifting was done by an algorithm. Without commenting at all on the quality of the result, what larger questions does that pose? Is the submitter really the artist behind the image at all if he only contributed 10% of the work? He did come up with the original set of keywords to generate the image. So, it's fair to say the image wouldn't exist without him. But all the intricate brushwork, lighting, and composition that can take a human being decades to perfect were all done by the computer. I don't know if his winning image would technically be termed a painting or a photograph. It's very photorealistic. But, even if it's technically a painting, the implications for photographers are obvious.

Why would a client pay you thousands of dollars in creative fees and licensing to take a picture of their products when they can have a computer generate an image for them? True, it probably won't be totally free. No doubt, a cottage industry will sprout up around A.I. images similar to creating VFX for film and television, and there will be new leaders in the emerging field demanding higher rates. That's how capitalism works. But does that mean that we are on a precipice of seeing our entire professions replaced by machines?

Let's take the example of a product photographer, for instance. Let's say Coca-Cola needs to make a new ad to promote the introduction of its new flavor of Coke. The ad is going to be a spinning soda can, with music and text. It is entirely possible that it would be far more cost effective for Coca-Cola to type the dimensions and hex values of its can into a computer and have it generate the photorealistic imagery of the spinning can than it would be to hire a full video crew. As mentioned earlier, they can even turn to an A.I. musical artist to generate a jingle for the spot that won't require them to pay royalties.

As someone whose work generally centers around human beings rather than products, I could easily delude myself into thinking the computer couldn't possibly do what I do. But that would clearly be presumptuous on my part, assuming that others share my aversion to ones and zeros. I mean, honestly, even if you're a fan of Marvel movies, what are they really if not 2.5-hour tributes to special effects? And, based on ticket sales, people don't seem to mind.

We've been debating ethical issues surrounding digitally manipulated human performances since back when Ridley Scott was forced to use a CGI version of actor Oliver Reed for parts of the film Gladiator following the actor's sudden death three weeks before the end of principal photography. In the ensuing 22 years, the technology has only gotten better, to the point where the majority of blockbuster movies these days contain large numbers of avatar stuntmen and women doing feats of strength that defy human ability. I won't even get started on the absolute mediocrity of the modern superhero film, which seems to not only utilize A.I. avatars on screen but apparently uses A.I. algorithms to write the scripts as well. Okay, I'm being a little snarky there. But, honestly, are you truly sure that Netflix doesn't have some kind of reverse algorithm that knows exactly what movies you are watching and directly feeds this information into another A.I. machine that is writing scripts designed to hit the broadest common denominator?

So, it's not even remotely out of the question that in a society which increasingly views art as mere content, sheer practicality will dictate that the majority of content we consume within 10 to 15 years, if not sooner, will be created almost entirely in a computer. The question is: as artists, what do we do about it?

The enterprising capitalist may look at that scenario and decide the only way to make money as an artist in the future is to become the puppet master behind the creation of A.I. art. If I were Benjamin Braddock's plastic-loving neighbor in 2022, it would probably be A.I. that I would be urging the hapless hero to get into. But as an artist who straddles both the analog and digital generations, the idea of giving up even an inkling of my artistic output to a machine just somehow seems wrong. That's not to say that it is wrong. We all have our own levels of comfort with how much of our art should be computer generated and how much should be done in camera, so to speak. I personally tend to reject computer-generated art instinctively, like a body rejecting a blood transfusion. But that's entirely subjective, no doubt affected by my age and personal preferences. And, as CGI gets better and better, the line between computer-generated art and the real world continues to blur. So my own red line will undoubtedly shift.

The other day, I was rewatching one of my favorite movies, Braveheart. The film is famous for its massive battle scenes, with hordes of opposing armies going toe to toe across sweeping vistas. I didn't know it when I first saw it in the theater, but in order to have that many soldiers in the battle scenes, a large number of the soldiers were created digitally. This is commonplace now, but at the time it was revolutionary. They did have a significant number of extras to play soldiers. But they essentially doubled and tripled those extras to fill out the scene. To me, this feels like a good use of digital technology. The filmmakers still accomplished the bulk of the work practically. They simply rounded out the hard work they had already done with these digital soldiers so that they wouldn't break the budget. I never noticed it in the film, and it didn't bother me.

Compare that with the modern superhero movie, where not only the actors but the environment and half of the props are created whole cloth from digital assets. The actors give performances against green screens (or the newer, more immersive virtual LED walls), and everything around them is literally created by a computer. There are still, at this moment, real human beings serving as VFX artists to create those digital worlds. So, it's not the same as an A.I.-generated environment. The human VFX artists are truly digital gods. But I do find myself emotionally disengaging from the majority of these modern action films simply because it's impossible for my mind to divorce itself from the fact that I'm watching ones and zeros turned into avatars rather than actual people.

People, as imperfect as we are, have value. In fact, it's our flaws that give us that little extra something that allows an audience to relate to a character and see a little something of themselves. Human beings have oddities that make them special. Audrey Hepburn complained about having an extra-long neck, but I defy anyone to see her as anything less than beautiful. Humphrey Bogart had a lisp. Jimmy Stewart's awkward and stilting way of talking not only didn't hurt his career, but became one of his trademarks.

A few months ago, I was revisiting John Woo's 1997 action film Face/Off. There's a scene in the film where the two characters, played by Nicolas Cage and John Travolta, are in a speedboat race. At some point, there's a boat crash and the two characters are thrown from the boat, flying high through the air. As many times as I've seen the film, this particular time it became painfully clear to me that the people flying out of the boat in the wide shot were quite decidedly not Nicolas Cage and John Travolta. This makes sense. This is why the profession of stuntman exists. And it's not like, deep in my brain, I didn't already realize that there's no way the two stars would do that stunt themselves. But, in the context of the film, I had suspended disbelief long enough to just go with it. And, you know what? It totally worked.

Nowadays, those stuntmen would have been replaced by digital avatars. The filmmakers would have been able to superimpose Cage's and Travolta's faces onto those avatars and likely created the boats and the flailing bodies with VFX. But, somehow, I seriously doubt it would have felt as exciting. There's something about seeing a real person, even an imperfect one, in peril. One of the main reasons the new Top Gun: Maverick sequel works so well is Tom Cruise's dedication to practical stunts. Surely, the movie has its fair share of digital effects. But by keeping those to a minimum, you allow your audience a greater ability to connect to the story. Because what you are seeing is real. It's human. It's relatable.

So, as we continue to hurtle toward a world where A.I. is going to start taking away a larger and larger chunk of the creative jobs by which many of us make our living, I suspect that the most powerful tool we will have in our defense is our very imperfection. This sounds counterintuitive, but what's the one thing about us that a computer will never be able to replicate? Our humanity.

If your entire artistic voice is based on your ability to expose perfectly to your light meter, you may find yourself in trouble. Computers can already perform that trick. Just think about all the tasks your camera is already capable of doing for you. But your camera, no matter how many megapixels, can't replicate your artistic voice. Your artistic voice is the culmination of all the experiences that you've had in life, art-related or otherwise. Your artistic voice is the grand sum of your life's emotions. It is when you put yourself, all of yourself, into your work and art that you can create greatness.

There will no doubt be more and more pressure to turn over more and more of your artistic creation to technology as time goes on. And, no doubt, new technologies will come along that will genuinely afford you a chance to be a more effective storyteller and artist. But ask yourself why you wanted to be an artist in the first place. Was it just to get a result as quickly and as cheaply as possible? Was it just as a shortcut to fame and fortune? Or, did you become an artist because you had something to say? Did you look at the great ballad of life and feel that you had a right to contribute a verse?

There's no question that the machines are coming for us as artists. But trends and technologies can never replace the value of human intuition. Our creativity will sustain us. Our humanity will sustain us. And we, as artists, will survive. No matter what SkyNET says.

Read more:
The Coming Menace of Artificial Intelligence And How We Can Respond As Artists - Fstoppers

New artificial intelligence lab at SMU’s AT&T Center for Virtualization to hunt bias in automated systems – EurekAlert

The Intelligent Systems and Bias Examination Lab (ISaBEL) will pair industry and academic research to equalize impact in automated systems

DALLAS (SMU): Quantifying and minimizing bias in artificial intelligence systems is the goal of a new lab established within SMU's AT&T Center for Virtualization. Pangiam, a global leader in artificial intelligence, is the first industry partner for the Intelligent Systems and Bias Examination Lab (ISaBEL).

ISaBEL's mission is to understand how artificial intelligence (AI) systems, such as facial recognition algorithms, perform on diverse populations of users. The lab will examine how existing bias can be mitigated in these systems using the latest research, standards, and other peer-reviewed scientific studies.

Algorithms provide instructions for computers to follow in performing certain tasks, and bias can be introduced through such things as incomplete data or reliance on flawed information. As a result, the automated decisions propelled through algorithms that support everything from airport security to judicial sentencing guidelines can inadvertently create disparate impact across certain groups.

ISaBEL will design and execute experiments using a variety of diverse datasets that will quantify AI system performance across demographic groups.
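The core measurement such experiments rely on can be sketched briefly. The data, function names, and the choice of false-match rate as the metric are illustrative assumptions, not ISaBEL's published methodology:

```python
def per_group_false_match_rates(results):
    """results: list of (group, predicted_match, truly_match) tuples from a
    matching system such as face recognition. Returns each demographic
    group's false-match rate: the share of truly non-matching pairs that
    the system wrongly declared a match."""
    stats = {}  # group -> (false_positives, non_match_pairs)
    for group, predicted, truth in results:
        if truth:
            continue  # only truly non-matching pairs count toward this rate
        fp, n = stats.get(group, (0, 0))
        stats[group] = (fp + (1 if predicted else 0), n + 1)
    return {g: fp / n for g, (fp, n) in stats.items() if n}

def disparity_ratio(rates):
    """Worst group's error rate over the best group's; 1.0 means parity."""
    return max(rates.values()) / min(rates.values())
```

Running this over a labeled, demographically diverse dataset turns "is the algorithm biased?" into a number a certification process can track across vendors and releases.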

As the lab grows, ISaBEL will seek additional industry partners to submit their algorithms for certification. "SMU's AT&T Center for Virtualization is the perfect place to work on these issues with its focus on cross-disciplinary research, education and training, and community outreach," said center director Suku Nair.

Both artificial intelligence and computer vision, which enables computers to pull information from digital images and videos, are quickly evolving and becoming increasingly accessible and adopted.

"How to study and mitigate bias in AI systems is a fast-moving area, with pockets of researchers all over the world making important contributions," said John Howard, an AT&T Center research fellow and biometrics expert. "Labs like ISaBEL will help ensure these breakthroughs make their way into the products where they can do the most good and also educate the next generation of computer scientists about these important issues."

AI industry leaders know that end users must clearly understand bias measurement and the progress of bias mitigation to build trust among the general public and drive full market adoption.

"At Pangiam, we are fundamentally committed to driving the industry forward with impactful efforts such as this," said Pangiam Chief AI Officer and SMU alumnus Shaun Moore. "Bias mitigation has been a paramount focus for our team since 2018, and we set out to demonstrate publicly our effort toward parity of performance across countries and ethnicities. SMU is the perfect institution for this research."

ISaBEL is currently recruiting graduate and undergraduate students to participate in the lab's AI research. Please contact Suku Nair at attcenter@smu.edu if interested.

About SMU

SMU is the nationally ranked global research university in the dynamic city of Dallas. SMU's alumni, faculty and more than 12,000 students in eight degree-granting schools demonstrate an entrepreneurial spirit as they lead change in their professions, communities and the world.

About Pangiam

At Pangiam, we are innovators, problem solvers, technologists and experts. Founded by a team of global trade, travel, aviation and homeland security leaders with a passion for what's possible, Pangiam brings together visionaries from government, technology and commercial sectors to drive change. Through innovation, emerging technologies and the power of data, we solve the operational, facilitation and security challenges facing organizations today to get their world moving.

Disclaimer: AAAS and EurekAlert! are not responsible for the accuracy of news releases posted to EurekAlert! by contributing institutions or for the use of any information through the EurekAlert system.

Continued here:
New artificial intelligence lab at SMU's AT&T Center for Virtualization to hunt bias in automated systems - EurekAlert

Artificial intelligence and augmented reality freshen up men’s fragrance – Packaging Europe

"Created by robots for humans" is a special edition body spray for young men that has been developed using augmented reality and artificial intelligence in a bid to produce the perfect scent and packaging.

Axe A.I. (Lynx A.I. in the UK) is the result of a specially designed AI applied to analyze 6,000 perfume ingredients with 3.5 million potential combinations, with the goal of discovering the ideal fragrance. The result combines an aromatic floral blend of sage, artemisia, and mint, refreshed with marine, apple, and citrus notes and finished with a woody, ambery, and mossy background.

Not only did the brand use AI algorithms to help create the scent, but it is doubling down on technology and using augmented reality (AR) to help market it.

Powered by Zappar's WebAR technology, all limited edition Lynx/Axe A.I. packs will feature a smartphone-scannable QR code that launches a web page where British rapper Aitch introduces the product and asks the user to spray an AR can of Lynx/Axe in the air to reveal a code allowing entry into a special competition. Six lucky winners will be invited to a memorable house party hosted by a hologram of Aitch himself.

Senior brand manager for Lynx at Unilever, Josh Plimmer, said: "Lynx has always been at the cutting edge of fragrance. The launch of Lynx A.I., which was created by crunching 46 terabytes of data, unlocks the code to smell iconic." This groundbreaking new scent was created in collaboration with Swiss firm Firmenich, an expert in fragrance and taste.

Caspar Thykier, CEO and co-founder at Zappar, added: "The Lynx AI concept and campaign clearly demonstrate how innovative brands are leveraging technology to create and market new products, getting ahead in the new connected pack revolution as an always-on platform and part of their owned-media strategy. AR and AI are driving more engaging product experiences and better connecting young customers with brands."

"It is daring, courageous and innovative to combine humans and technology in such an emotional field as the sense of smell," commented Firmenich's chief information officer, Eric Saracchi. "We are very excited to launch this game-changing fragrance with Lynx to power up guys' daily routines. Over 50 years of fragrance data and knowledge, along with over 46 terabytes of data, has led to the creation of Lynx A.I."

The special edition Lynx/Axe A.I. campaign will run through to March 2023.

This article was created in collaboration with AIPIA (the Active and Intelligent Packaging Industry Association). Packaging Europe and AIPIA are joining forces to bring news and commentary about the active and intelligent packaging landscape to a larger audience. To learn more about this partnership, click here.

Originally posted here:
Artificial intelligence and augmented reality freshen up men's fragrance - Packaging Europe