
Category Archives: Ai

DataX is funding new AI research projects at Princeton, across disciplines – Princeton University

Posted: November 21, 2021 at 10:17 pm

Graphic courtesy of the Center for Statistics and Machine Learning

Ten interdisciplinary research projects have won funding from Princeton University's Schmidt DataX Fund, with the goal of spreading and deepening the use of artificial intelligence and machine learning across campus to accelerate discovery.

The 10 faculty projects, supported through a major gift from Schmidt Futures, involve 19 researchers and several departments and programs, from computer science to politics.

The projects explore a variety of subjects, including an analysis of how money and politics interact, discovering and developing new materials exhibiting quantum properties, and advancing natural language processing.

"We are excited by the wide range of projects that are being funded, which shows the importance and impact of data science across disciplines," said Peter Ramadge, Princeton's Gordon Y.S. Wu Professor of Engineering and the director of the Center for Statistics and Machine Learning (CSML). "These projects are using artificial intelligence and machine learning in multifaceted ways: to unearth hidden connections or patterns, model complex systems that are difficult to predict, and develop new modes of analysis and processing."

CSML is overseeing a range of efforts made possible by the Schmidt DataX Fund to extend the reach of data science across campus. These efforts include the hiring of data scientists and overseeing the awarding of DataX grants. This is the second round of DataX seed funding, with the first in 2019.

Discovering developmental algorithms: Bernard Chazelle, the Eugene Higgins Professor of Computer Science; Eszter Posfai, the James A. Elkins, Jr. '41 Preceptor in Molecular Biology and an assistant professor of molecular biology; Stanislav Y. Shvartsman, professor of molecular biology and the Lewis Sigler Institute for Integrative Genomics, and also a 1999 Ph.D. alumnus

Natural algorithms is a term used to describe dynamic, biological processes built over time via evolution. This project seeks to explore and understand through data analysis one type of natural algorithm, the process of transforming a fertilized egg into a multicellular organism.

MagNet: Transforming power magnetics design with machine learning tools and SPICE simulations - Minjie Chen, assistant professor of electrical and computer engineering and the Andlinger Center for Energy and the Environment; Niraj Jha, professor of electrical and computer engineering; Yuxin Chen, assistant professor of electrical and computer engineering

Magnetic components are typically the largest and least efficient components in power electronics. To address these issues, this project proposes the development of an open-source, machine learning-based magnetics design platform to transform the modeling and design of power magnetics.

Multi-modal knowledge base construction for commonsense reasoning: Jia Deng and Danqi Chen, assistant professors of computer science

To advance natural language processing, researchers have been developing large-scale, text-based commonsense knowledge bases, which help programs understand facts about the world. But these data sets are laborious to build and have issues with spatial relationships between objects. This project seeks to address these two limitations by using information from videos along with text in order to automatically build commonsense knowledge bases.

Generalized clustering algorithms to map the types of COVID-19 response: Jason Fleischer, professor of electrical and computer engineering

Clustering algorithms are made to group objects but fall short when the objects have multiple labels, the groups require detailed statistics, or the data sets grow or change. This project addresses these shortcomings by developing networks that make clustering algorithms more agile and sophisticated. Improved performance on medical data, especially patient response to COVID-19, will be demonstrated.

New framework for data in semiconductor device modeling, characterization and optimization suitable for machine learning tools: Claire Gmachl, the Eugene Higgins Professor of Electrical Engineering

This project is focused on developing a new, machine learning-driven framework to model, characterize and optimize semiconductor devices.

Individual political contributions: Matias Iaryczower, professor of politics

To answer questions on the interplay of money and politics, this project proposes to use micro-level data on the individual characteristics of potential political contributors, characteristics and choices of political candidates, and political contributions made.

Building a browser-based data science platform: Jonathan Mayer, assistant professor of computer science and public affairs, Princeton School of Public and International Affairs

Many research problems at the intersection of technology and public policy involve personalized content, social media activity and other individualized online experiences. This project, which is a collaboration with Mozilla, is building a browser-based data science platform that will enable researchers to study how users interact with online services. The initial study on the platform will analyze how users are exposed to, consume, share, and act on political and COVID-19 information and misinformation.

Adaptive depth neural networks and physics hidden layers: Applications to multiphase flows - Michael Mueller, associate professor of mechanical and aerospace engineering; Sankaran Sundaresan, the Norman John Sollenberger Professor in Engineering and a professor of chemical and biological engineering

This project proposes to develop data-based models for complex multi-physics fluid flows using neural networks in which physics constraints are explicitly enforced.

Seeking to greatly accelerate the achievement of quantum many-body optimal control utilizing artificial neural networks: Herschel Rabitz, the Charles Phelps Smyth '16 *17 Professor of Chemistry; Tak-San Ho, research chemist

This project seeks to harness artificial neural networks to design, model, understand and control quantum dynamics phenomena between different particles, such as atoms and molecules. (Note: This project also received DataX funding in 2019.)

Discovery and design of the next generation of topological materials using machine learning: Leslie Schoop, assistant professor of chemistry; Bogdan Bernevig, professor of physics; Nicolas Regnault, visiting research scholar in physics

This project aims to use machine learning techniques to uncover and develop topological matter, a type of matter that exhibits quantum properties, whose future applications can impact energy efficiency and the rise of super quantum computers. Current applications of topological matter are severely limited because its desired properties only appear at extremely low temperatures or high magnetic fields.

How Mitchell vs. Machines, Belle and Ron's Gone Wrong Filmmakers Tackled Tech, AI in Their Films – Hollywood Reporter

Posted: at 10:17 pm

This year, three animated movies aim to talk to children and adults about the progress (and perils) exhibited by AI, social media and the internet, and the filmmakers all agree that animation is an ideal medium with which to translate these ideas into something visual.

"You get to invent everything," says Michael Rianda, writer-director of The Mitchells vs. the Machines. "And because you have so much control, you can really caricature things in a way that you can't in real life."

Netflix's Mitchells, produced by Phil Lord and Christopher Miller and Sony Pictures Animation, features a robot apocalypse led by a maniacal virtual assistant, PAL (voiced by Olivia Colman), who is enraged that her owner has upgraded to newer technology.

In depicting PAL, Rianda wanted to contrast imperfect humans with more symmetrical robots by looking at James Turrell installations and Stanley Kubrick movies, "anything that felt perfect and symmetrical," says Rianda. "The visuals were matching the themes of the movie, which is that these machines are maybe a little too perfect."

Rianda adds that one animator even kept a 1999 Microsoft news conference queued up for inspiration, as the goal was to be observational about what tech companies do, "because some of the stuff they do is scary, and they want to put a clean, cute face on it. We wanted to do the same thing with PAL's face." But Rianda says that ultimately Mitchells is as much about family as it is a reflection on tech. "I think there is this innate fear of technology, but also on the other hand during COVID, we couldn't communicate with each other without technology. It was imperative to be even-handed. This technology is here in many ways. It's already part of the family, it's at your dinner table."

Writer-director Mamoru Hosoda's Beauty and the Beast-inspired Belle (which gets a theatrical release Jan. 14 from GKIDS) follows Suzu, a shy 17-year-old from a rural village who becomes an international singing sensation when she enters a virtual world known as U, depicted as a vast metropolis. Speaking through a translator, Hosoda notes that earlier depictions of the internet (including his own) tended to be "full of white backdrops, a little more fun. In 2021, I think that imagery has shifted, and you can see that in Belle. It's becoming another reality. This is an issue a lot of our younger generations are facing today where the internet has already grown into this second reality."

Hosoda suggests that the earlier hopefulness "has waned a little bit. It's become more of a toxic culture." In spite of this, he says, "I still think the newer generations should approach it with some amount of hopefulness. That's what I wanted to say in Belle."

Also set in the social media age, 20th Century Studios and Locksmith Animation's Ron's Gone Wrong follows Barney (Jack Dylan Grazer), a socially awkward middle schooler, and Ron (Zach Galifianakis), his new digitally connected best friend, who causes chaos when he malfunctions. "It feels really important that we're talking to our kids and helping them reflect upon the experiences that they're having in the virtual and the online world," says Sarah Smith, a writer and director on the film, noting that these can be both good and bad: "the extraordinary explosion of creativity, connection and friendship but also the loneliness and isolation and dangers of it as well."

Smith admits there are no easy answers, but she does hope to convey through the movie that "you need the uncurated friendship, you need relationships between people who no algorithm would put together. That is an essential part of life."

This story first appeared in the Nov. 17 issue of The Hollywood Reporter magazine.

Pony.ai opens R&D Center in Shenzhen, Broadening the Reach of its Global R&D Sites to Cover all of China's Tier-1 Cities – Yahoo Finance

Posted: at 10:17 pm

Pony.ai also announces partnership with ONTIME, GAC's ride-hailing app

GAC is one of China's largest automakers and a longtime Pony.ai partner

SHENZHEN, China, November 20, 2021--(BUSINESS WIRE)--Pony.ai, a leading global autonomous driving technology company, announced that it has established a research and development center in Shenzhen, its fifth R&D site globally.

Pony.ai - ONTIME show car at the Guangzhou Auto Show, November 19, 2021 (Photo: Business Wire)

The new center marks the official commencement of a strategic layout and R&D network in the Guangdong-Hong Kong-Macao Bay Area to promote the large-scale development of autonomous driving. Located in the Qianhai Cooperation Zone, Pony.ai's Shenzhen R&D center enables the company to fully cover all four Tier-1 cities in China - Beijing, Shanghai, Guangzhou, and Shenzhen - while linking the two key cities of Guangzhou and Shenzhen in the Guangdong-Hong Kong-Macao Bay Area.

"The Greater Bay Area offers tremendous opportunity and with this office, Pony.ai links the twin cities of Guangzhou and Shenzhen, strengthening road testing and commercialization of our two core businesses - autonomous driving passenger cars and trucks - to accelerate the deployment of autonomous driving software and hardware systems," said James Peng, co-founder and CEO of Pony.ai.

Today Pony.ai also announced a collaboration with ONTIME (Ruqi Chuxing), the leading Chinese mobility technology platform launched by GAC Group, one of China's largest automakers. GAC has been a longtime strategic partner for Pony.ai. As part of the announced collaboration, Pony.ai and ONTIME will jointly accelerate the commercial deployment of autonomous driving technologies to a larger scale in China. As mass commercial deployment gets closer, Pony.ai and ONTIME will be able to test various forms of collaboration to rapidly deploy the Pony.ai Robotaxi on the ONTIME platform.

ONTIME is one of China's leading ride-hailing providers, offering a wide range of app-based services across core cities in China's Greater Bay Area, such as Guangzhou, Foshan, Zhuhai, Shenzhen and Dongguan. Pony.ai was the first company to offer Robotaxi services in China and is a leader in autonomous driving technology. The close collaboration will allow the two companies to leverage their own unique advantages and facilitate the commercial deployment of autonomous driving technologies.

About Pony.ai

Silicon Valley-based Pony.ai, Inc. ("Pony.ai") is pursuing an ambitious vision for autonomous mobility. We aim to bring safe, sustainable, and accessible mobility to the entire world. We believe that autonomous technology can make our roads exponentially safer for travelers. Founded in late 2016, Pony.ai has been a pioneer in autonomous mobility technologies and services across the U.S. and China, spearheading public-facing Robotaxi pilots in both markets. Pony.ai has formed partnerships with leading OEMs including Toyota, Hyundai, GAC Group and FAW Group.

Haptics, AI, and robots-to-rent will make up business of the future – Daily Express

Posted: at 10:17 pm

Smart AI fashion boutiques will use scanners to create individual made-to-measure clothes, reducing the need for retailers to house expensive stock as well as ensuring the perfect fit. Haptics - wearable tech allowing you to feel heat and touch - were cited as the future of gaming and education industries. These are the predictions of four of Britain's leading futurists and consumer business experts behind the NatWest Future Businesses Report, which offers a vision of how British industry could look by the year 2036.

The report was commissioned by the bank to inspire the next generation of start-ups and SMEs and authored by leading futurist Dr Ian Pearson, consumer business guru Kate Hardcastle, Shivvy Jervis, founder of the award-winning human-centred innovation lab FutureScape248, and futurist and author Tom Cheesewright. The panel revealed that in the near future, travel agents could let holidaymakers "try-before-you-fly" through virtual reality experiences and the daily commute could take place in high-speed personal travel pods to beat city congestion.

Protein-rich insects served by bug burger bars are predicted to become the fast foods of choice, with fried locust or a worm burger replacing the late-night kebab, while AI fashion boutiques could use technology to design perfectly made-to-measure clothes.

Futurist Dr Ian Pearson, one of the authors of the NatWest Future Businesses Report, said: "The NatWest Future Business Report helps to paint a picture of what changes we may see in the business environment over the next 15 years. What was clear to all of us was how a greater interaction with technology is going to revolutionise businesses and transform almost all industries."

"One thing the panel all agreed on is this is not the end of our high streets, which will thrive if businesses can offer good enough reasons to go there. Long term, more than 50 per cent of retail will still be in high street shops, with predictions like AI tailoring and insect food outlets showing how businesses could adapt in future."

The report stated robot farmers or drones, like those seen in the 2014 film Interstellar, will help to meet humanity's soaring food demands, estimated to spike by up to 98 percent by 2050. Human health monitoring will get a boost from smart toilets with the power to analyse urine and faeces for killer diseases such as diabetes and cancer.

Despite fears of robots robbing humans of jobs, only four percent of respondents thought their work would become completely obsolete by 2036. Almost two thirds (64 percent) admitted their skill sets would need to adapt in the next 15 years to keep up with technology and 52 percent claimed they wanted to work with robots by 2036.

Andrew Harrison, managing director for business banking at NatWest, added: "As this landscape evolves, NatWest continues to be the biggest supporter of UK small business at all stages of development."

"From our Dream Bigger programme in schools encouraging young people to explore an entrepreneurial mindset; our fully funded Business Builder initiative for early-stage entrepreneurs; and our Entrepreneur Accelerator hubs for high growth, green and diverse businesses, our vision is to help more companies start, scale and succeed."

The NatWest Future Businesses Report is available here. To see how NatWest is backing businesses across the UK you can also watch Alison Hammond spend a day as an intern with a selection of businesses from across the UK as part of NatWest's Backing Businesses partnership with ITV.

Scientists call on AI to save bristlebird | The Times | Victor Harbor, SA – Victor Harbor Times

Posted: at 10:17 pm

Scientists are using a high-tech call recognition tool to map and save the bushfire-ravaged eastern bristlebird, a melodic but shy ground-dwelling species.

The Artificial Intelligence (AI) pattern recognition tool is one of eight recovery projects getting taxpayer funding through a dedicated federal national species co-ordinator.

"One of the key learnings from the Black Summer bushfires was a need for co-ordinated on-ground action, monitoring and research, across the entire range of a species, to support its recovery," Environment Minister Sussan Ley said on Monday.

The endangered eastern bristlebird can be easily recognised by its song and alarm-call.

By creating a tool that automatically and accurately detects the bird's calls from remote field recordings, and using updated radio transmission methods, the remaining populations can be tracked.

"We will also be using highly skilled volunteer scientists to collect data that will inform the future recovery actions we need to take to support the recovery of the bristlebird across its entire range," she said.

Other projects for the eastern bristlebird include habitat restoration, health and disease research, and support for the establishment of a new genetically viable population in Victoria as a safety net in case of extreme weather events or the spread of disease.

The $10 million funding initiative for the long-term recovery of more than 70 species will target the most fire-affected regions across New South Wales, the ACT, Victoria and Queensland.

Australian Associated Press

Security AI is the next big thing – VentureBeat

Posted: November 1, 2021 at 6:40 am

In the world of cybersecurity, speed kills. In less than 20 minutes, a skilled adversary can break into an organization's network and start exfiltrating critical data assets, and as the volume of data modern companies produce increases, it's becoming ever more difficult for human analysts to spot malicious activity until it's too late. This is where cybersecurity AI can come to the rescue.

This hostile threat landscape has led organizations such as Microsoft to use AI as part of their internal and external cybersecurity strategy. "We're seeing this incredible increase in the volume of attacks, from human-operated ransomware through all different kinds of zero-day attacks," said Ann Johnson, corporate vice president of security, compliance, and identity at Microsoft.

"Given the complexity of modern attacks, there is absolutely no way that human defenders can keep up with it, so we must have artificial intelligence capabilities in the technologies and solutions we're providing," Johnson said. For modern organizations, AI is now vital for keeping up with the fast-moving threat landscape and offers a variety of use cases that enterprises can leverage to improve their security posture.

Perhaps the most compelling use case for AI in cybersecurity is incident response. AI enables organizations to automatically detect anomalous behavior within their environments and conduct automated responses to contain intrusions as quickly as possible.

One of the most high-profile uses of AI this year occurred at the Olympic Games in Tokyo, when Darktrace AI identified a malicious Raspberry Pi IoT device that an intruder had planted into the office of a national sporting body directly involved in the Olympics. The solution detected the device port scanning nearby devices, blocked the connections, and supplied human analysts with insights into the scanning activity so they could investigate further.

"Darktrace was able to weed out that there was something new in the environment that was displaying interesting behavior," Darktrace's chief information security officer (CISO) Mike Beck said. Beck noted there was "a distinct change in behavior in terms of the communication profiles that exist inside that environment."

When considering the amount of data the national body was processing in the run-up to the Olympics, it would have been impossible for a human analyst to spot such an attack at the same speed as the AI, Beck said.

"In 2021, and going forward, there is too much digital data. That is the raw reality," Beck said. "You have to be using intelligent AI to find these attacks, and if you don't, there's going to be a long period of dwell time, and those attackers are going to have free rein."
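The article does not describe how Darktrace's models work internally, but the behaviour it flagged (a device suddenly probing many nearby hosts and ports) can be illustrated with a minimal Python sketch. The device names, log format and thresholds below are assumptions made up for illustration; a real system would learn a statistical baseline of normal behaviour per device rather than rely on a fixed cutoff.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical connection records: (timestamp, source device, destination host, destination port)
EVENTS = [
    ("2021-07-20 10:00:01", "raspberrypi-3f", "10.0.0.12", 22),
    ("2021-07-20 10:00:02", "raspberrypi-3f", "10.0.0.12", 23),
    ("2021-07-20 10:00:02", "raspberrypi-3f", "10.0.0.13", 80),
    ("2021-07-20 10:00:03", "raspberrypi-3f", "10.0.0.13", 443),
    ("2021-07-20 10:00:04", "raspberrypi-3f", "10.0.0.14", 8080),
    ("2021-07-20 10:00:05", "laptop-ann", "10.0.0.50", 443),
]

WINDOW = timedelta(seconds=60)   # observation window
PORT_THRESHOLD = 4               # distinct host:port pairs treated as "scanning" (illustrative)

def flag_port_scanners(events, window=WINDOW, threshold=PORT_THRESHOLD):
    """Return devices that touch an unusually large number of distinct
    host:port pairs within one time window -- a crude stand-in for the
    behavioural profiling a commercial tool would perform."""
    per_device = defaultdict(list)
    for ts, device, host, port in events:
        per_device[device].append((datetime.strptime(ts, "%Y-%m-%d %H:%M:%S"), (host, port)))

    flagged = {}
    for device, records in per_device.items():
        records.sort()
        for i, (start, _) in enumerate(records):
            # Collect the distinct targets this device contacted within the window.
            targets = {target for ts, target in records[i:] if ts - start <= window}
            if len(targets) >= threshold:
                flagged[device] = sorted(targets)
                break
    return flagged

if __name__ == "__main__":
    for device, targets in flag_port_scanners(EVENTS).items():
        print(f"ALERT: {device} probed {len(targets)} host:port pairs -> {targets}")
```

Running the sketch prints an alert only for the device whose connection pattern looks like a scan, which is the kind of signal a human analyst would then be handed for investigation.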

Keeping up with the latest threats isn't the only compelling use case that AI has within cybersecurity. AI also offers the ability to automatically process and categorize protected data so that organizations can have complete transparency over how they process this data; it also ensures that they remain compliant with data privacy regulations within an ever-more-complex regulatory landscape.

"Our regulatory department tells me we evaluate 250 new regulations daily across the world to see what we need to be in compliance, so then take all of that and think about all the different laws that are being passed in different countries around data; you need machine-learning capabilities," Johnson said.

In practice, Johnson said, that means using a lot of artificial intelligence and machine learning "to understand what the data actually is and to make sure we have the commonality of labeling, to make sure we understand where the data is transiting," a task too monumental for even the largest team of security analysts.

"It's up to AI to decide: Is this a U.S. Social Security number, or just [nine] characters that are something else?" Johnson said.

By categorizing and labeling sensitive data, AI makes it easier for an organization to take inventory of what protected information is transiting where, so admins can accurately report to regulators on how that data is handled and prevent exposure to unauthorized individuals.
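Johnson's Social Security number example boils down to pattern matching plus context. Below is a minimal, hypothetical sketch in Python of that first rule-based pass; the regex, context hints and confidence values are assumptions for illustration, and a production classifier would combine such features with trained models covering many data types.

```python
import re

# SSNs are written ddd-dd-dddd; certain prefixes (000, 666, 9xx) are never issued.
SSN_PATTERN = re.compile(r"\b(?!000|666|9\d{2})\d{3}-(?!00)\d{2}-(?!0000)\d{4}\b")
CONTEXT_HINTS = ("ssn", "social security", "taxpayer id")

def label_ssn_candidates(text: str) -> list:
    """Label substrings that look like U.S. Social Security numbers.
    Nearby context words raise confidence; a real pipeline would feed these
    features into a trained classifier rather than use fixed scores."""
    labels = []
    lowered = text.lower()
    for match in SSN_PATTERN.finditer(text):
        # Look at a small window of surrounding text for corroborating hints.
        window = lowered[max(0, match.start() - 40): match.end() + 40]
        has_hint = any(hint in window for hint in CONTEXT_HINTS)
        labels.append({
            "value": match.group(),
            "label": "US_SSN",
            "confidence": 0.9 if has_hint else 0.6,
        })
    return labels

# The second number starts with 9, which the format rule rejects -- the kind
# of "nine characters that are something else" Johnson describes.
print(label_ssn_candidates("Employee SSN: 123-45-6789, order ref 987-65-4321"))
```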

At the same time, the ability to build automated zero-trust architectures and to ensure that only authorized users and devices have access to privileged information is emerging as one of the most novel use cases of AI. AI-driven authentication can ensure that nobody except authorized users has access to sensitive information.

As Ann Cleaveland, executive director of the Center for Long-Term Cybersecurity at UC Berkeley, explained, "One of the most powerful emerging use cases is the implementation of so-called zero-trust architectures and continuous or just-in-time authentication of users on the system and verification of devices."

Zero-trust AI systems leverage a range of data points to accurately identify and authenticate authorized users at machine speed. "These systems are underpinned by machine-learning models that take time, location, behavior data, and other factors to assign a risk score that is used to grant or deny access," Cleaveland said.

When utilized correctly, these solutions can detect when an unauthorized individual attempts to access privileged information and block the connection. Cleaveland said that these capabilities are becoming more important following the mass shift to remote or hybrid work environments that has taken place throughout the COVID-19 pandemic.
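To make Cleaveland's description concrete, here is a minimal sketch in Python of a risk score assembled from time, device, location and behaviour signals and then used to grant, step up, or deny access. The weights and thresholds are invented for illustration; a real zero-trust deployment would learn them from historical data and draw on far richer telemetry.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    hour: int              # local hour of the request, 0-23
    known_device: bool     # device previously verified for this user
    usual_location: bool   # request comes from a typical location for this user
    failed_logins: int     # recent failed authentication attempts

def risk_score(req: AccessRequest) -> float:
    """Combine a few signals into a 0.0-1.0 risk score (illustrative weights)."""
    score = 0.0
    if req.hour < 6 or req.hour > 22:
        score += 0.3                                 # unusual working hours
    if not req.known_device:
        score += 0.35                                # unrecognised device
    if not req.usual_location:
        score += 0.25                                # atypical location
    score += min(req.failed_logins * 0.1, 0.3)       # repeated failures
    return min(score, 1.0)

def decide(req: AccessRequest, deny_above: float = 0.6, step_up_above: float = 0.3) -> str:
    """Grant, require step-up authentication, or deny based on the score."""
    score = risk_score(req)
    if score >= deny_above:
        return f"deny (risk={score:.2f})"
    if score >= step_up_above:
        return f"step-up auth required (risk={score:.2f})"
    return f"grant (risk={score:.2f})"

# A routine request is granted; an off-hours request from an unknown device is denied.
print(decide(AccessRequest(hour=14, known_device=True, usual_location=True, failed_logins=0)))
print(decide(AccessRequest(hour=2, known_device=False, usual_location=False, failed_logins=3)))
```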

One of the main drivers of adoption for some organizations is AIs ability to bridge the IT skills gap by enabling in-house security teams to do more with less through the use of automation. AI can automatically complete tedious manual tasks, such as processing false-positive alerts so that analysts have a more manageable workload and additional time to focus on more productive and rewarding high-level tasks.

"We've been able to automate 97% of routine tasks that occupied a defender's time just a few years ago, and we can help them respond 50 percent faster," Johnson said. "And the reason is that we can do a lot of automated threat hunting across all of the platforms in a much quicker way than a human could actually do them."

"This isn't a takeover by AI," Beck said. "AI is there to be a force multiplier for security teams. It's doing a whole load of digital work behind the scenes now to present to human teams genuine decisions that they have to make so that we have a point where those human teams can decide how to take action."

Ultimately, humans have control over the types of tasks they automate, choosing what tasks are automated and how they use AI solutions. While AI is essential to cybersecurity for modern organizations, so are human analysts, and guess what? They're not going away anytime soon.

AI and human rights – a different take on an old debate – Diginomica

Posted: at 6:40 am

On September 15 2021, the UN issued a statement that AI must not interfere with human rights. This isn't a new sentiment - last year, a similar pronouncement was issued:

Michelle Bachelet, the UN High Commissioner for Human Rights, is calling for a moratorium on the use of artificial intelligence technology that poses a serious risk to human rights, including face-scanning systems that track people in public spaces. It also said Wednesday that countries should expressly ban AI applications which don't comply with international human rights law.

The September 15 announcement also comes with a new report:

As part of its work on technology and human rights, the UN Human Rights Office has today published a report that analyses how AI, including profiling, automated decision-making and other machine-learning technologies, affects people's right to privacy and other rights, including the rights to health, education, freedom of movement, freedom of peaceful assembly and association, and freedom of expression. [The report can be downloaded as a Word document via this link].

Bachelet has further said that:

Applications that should be prohibited include government "social scoring" systems that judge people based on their behavior and certain AI-based tools that categorize people into clusters such as by ethnicity or gender.

When I read the new report, I first thought about how many ethical AI principles and proclamations emanate from European governments and UN agencies based in Switzerland. The only detailed report I read coming from the US Federal government was from the JAIC (the Joint Artificial Intelligence Center, the Department of Defense's AI Center of Excellence). This is a pattern of the past few years, where guidelines for responsible AI emanate from the EU, UNESCO, and the World Economic Forum. And it raises the question:

After a calamitous, genocidal twentieth century, why is Europe in the vanguard of human rights, and in particular of ensuring that AI does not interfere with human rights, while the US is mute?

The first thing I wanted to understand is why the US government is so reticent about AI. The second thing is why Europe seems to have agency in pushing the problem with AI and human rights. But there is a third thing. Is it unusual for a significant technology to burst on the scenes and NOT be used to suppress people?

First Question: In 2020, the United States tech sector contributed around 1.99 trillion U.S. dollars to the country's overall gross domestic product (GDP), making up approximately 10.5 percent of total GDP. Too big to fail. As I see it, the government in the US is more concerned with protecting business than protecting people.

I believe that the EU, which does not allow campaign financing from corporations, labor unions, etc. is less susceptible to the needs of business and to a certain extent, is more focused on the welfare of its population than the US.

The second question more or less follows from the first. But also, the USA isn't a single country. It's a federal republic and the states are very powerful. Germany, for example, or the entire EU, can issue a policy about AI and human rights, but the US federal government would have to campaign to all 50 states where it would likely be rejected by a fair number. We have one Department of Defense, so they are free to draft their own regulations.

Third question: Is it unusual for a significant technology to burst on the scenes and NOT be used to suppress people?

In a paper, The Impact of Technology on Human Rights, C.G. Weeramantry, Judge, International Court of Justice, The Hague, Netherlands, writes:

In English history, William the Conqueror launched the project of the Domesday Book, which recorded every farm in England, the number of cattle on the farm, its extent and value and debts, who owned it and how many people there were on each farm. If he could have done this in 1086, imagine what he would have done if he had lived in the computer age nine centuries later. In fact, there is technology now available to a dictator by which you can bounce off a light beam from the window pane of a house and capture and record the conversation that goes on within it. With all those devices available, imagine the life of people in that society.

As the world gets faster and more information-centered, it also gets meaner: disparities of wealth and power strengthen; opportunities change and often fade away. This isn't new, however. Since the discovery of the New World, the history of African-Americans and their encounter with technology has been irremediably devastating to their hopes, dreams, and possibilities.

After the War of 1812, the proliferation of steamboats "put the interior South on the cutting edge of technological advance in America." [PDF link]. The growth of steamboat traffic and of river-based trade also revealed "a southern willingness to adopt new technology as a way to modernize slavery." The South developed a booming cotton market as well as markets for foodstuffs and lumber. Hundreds of thousands of slaves were pushed west to support the economy.

But by the end of the eighteenth century, the efficiency of the slave economy on cotton plantations was being questioned. Technology changed the picture and made life for plantation slaves worse. Eli Whitney invented a simple machine to allow harvested cotton to be picked clean of seeds - an essential step before milling - on a far greater scale than had previously been possible. Suddenly rendered cost-efficient, cotton farming became a way to get rich quick. Thousands of black Africans were imported to do the work; in Mississippi alone the number of slaves increased from 498 in 1784 to 195,211 by 1840.

It's not possible to summarize all the relevant examples in one post. But if history teaches us anything, it warns us that new technology is rarely neutral - it is often deployed by the powerful in a way that has far-reaching, and sometimes tragic impacts.

The concern over AI as a destructive force against human rights is not only palpable, it's inevitable. If history tells us anything at all, the answer is a resounding yes - it already is. Consider the way algorithmically-driven social media platforms have affected our elections and our faith in them, damaged the mental health of young people, and dispensed an endless stream of disinformation. There are over 400 million surveillance cameras in China, and despite their lofty AI ethics manifestos, make no mistake, China is a country that keeps a firm grip on what its population sees.

Bottom line: AI isn't the problem. We are. History shows us that technology innovation consistently aids the powerful and oppresses the weak. My suggestion to the UN High Commissioner is that the next planned doctrine on Human Rights should be mindful of history.

ANALYSIS: Key Patent Trends and Priority Shifts in AI Inventions – Bloomberg Law

Posted: at 6:40 am

Artificial intelligence is one of the most important technologies of this era, poised to become the next general-purpose technology, like electricity. As AI technology advances rapidly, AI patent activity is experiencing accelerated growth and broad diffusion across industries. A novel Artificial Intelligence Patent Dataset (AIPD), released in July by the U.S. Patent and Trademark Office, identified AI in more than 13.2 million U.S. patents and pre-grant publications, citing a more-than-twofold increase in annual AI patent applications from 2002 to 2018. This surge in AI patenting activity is expected to continue in 2022.

AI technology is complex and spans many different fields. Inventors and patent attorneys often face the challenge of effectively protecting new AI technology development. Much of the public's attention on patenting AI inventions has centered around the issue of inventorship. The Eastern District of Virginia's Thaler v. Hirshfeld ruling is the first U.S. court decision in the global dispute over AI inventions, finding that an AI machine cannot be an inventor under current U.S. patent law. This ruling is currently on appeal at the Federal Circuit.

With the profound explosion in the adoption of AI-based technologies, we can expect to see a priority shift in patenting considerations for AI inventions: one that ensures that patent law's governance and treatment of AI is comprehensive and adaptive. Rather than directing efforts toward a potential expansion of patent law based on, for example, inventorship, engaging in proactive patent examination procedures that promote assurance in quality and enforceability of patents, applying existing patent laws through the lens of technology, and acting with openness toward new forms of IP protection will minimize disruptions to legal frameworks as well as promote innovation.

Given how rapidly AI technology is advancing, stakeholders must proactively engage and cautiously address ways for the patent system to promote innovation. Uncertainty with regard to validity and enforceability can weaken a patent's market value. We can expect to see a shift to prioritize quality of AI patents to address this concern.

In October 2020, the USPTO published a report titled Public Views on Artificial Intelligence and Intellectual Property Policy, which summarizes responses to patent-related questions regarding AI and similarly reflects the shift in focus to the quality, and in turn, enforceability of AI patents.

Written description and enablement are likely to be areas of focus in achieving high-quality and enforceable AI patents. AI inventions pose significant challenges in satisfying the disclosure requirement, which stem from the complexities of the AI technology itself and a lack of transparency in how AI tools function. For many AI systems, there is an inability to explain how the technology operates, because the specific AI logic is in some respects unknown. The critical need for the USPTO to police these requirements for the purpose of ensuring patent quality is confirmed by comments in the USPTO report. Similarly, an exploration of enablement can be expected, especially as it may be difficult to enable certain AI inventions seeking patent protection. Such disclosure deficits may necessitate development of an enhanced AI patent disclosure.

The quantity and accessibility of prior art is also likely to be an area of emphasis. Specifically, the issues of what can be considered prior art, the quantity of prior art, and the accessibility of prior art will have significant impacts on patent quality and enforceability. As AI technologies evolve, massive amounts of prior art may be generated. While standard AI techniques may be described in traditional prior art literature, there is still a significant proportion of AI technology that is only documented in source code, which may or may not be available and is generally considered difficult to search for. The USPTO will likely push for additional resources for identifying the adequate AI-related prior art necessary to perform a comprehensive examination and issuance of quality patents.

A common theme in the USPTO report is the importance of examiner training. In addition to a likely push for the USPTO to proactively provide prior art to examiners, a similar push for examiner technical training is also imminent. At some point a heightened obviousness standard may be essential, but looking ahead to 2022, a tactical evaluation of these examination considerations is the likely means to uphold patent quality and enforceability.

As with other fields of technology, the development of AI presents many opportunities for invention. For example, designing an AI algorithm, implementing hardware to enhance an AI algorithm, or applying methods of preparing inputs to an AI algorithm present a variety of patent considerations ranging from subject matter eligibility to written description and enablement. The Thaler v. Hirshfeld ruling tied its holding that U.S. patent law requires an inventor to be a natural person to the current state of AI technology, acknowledging that [a]s technology evolves, there may come a time when artificial intelligence reaches a level of sophistication that might satisfy accepted meanings of inventorship.

In 2022, much of the focus of AI technology will be on running AI models on user devices. Rising privacy concerns about having personal data sent, processed, and stored in the cloud has led to this shift in the AI industry. Developments in AI technology should be monitored to ensure that patent interests are keeping pace with AI technology developments, and patenting considerations must be made through the lens of the current state of technology.

Data is a foundational component of AI. We can expect to see a shift in focus from protecting not just the AI invention itself, but also AI data.

AI data, including its collection and compiling, has value, and big data in particular may be expensive to acquire. For example, input training data may require access to millions of users' internet activity and human medical data, and such data may be geographically dispersed and in different formats. If gaps arise in current IP protection's ability to effectuate and keep up with the rapid development in AI technology, new forms of IP may be considered, including an IP right for nonpublic data. This may take the form of data exclusivity rights similar to the regulatory data protection rights of proprietary clinical data that's submitted to the FDA and other regulatory authorities.

In 2022, AI is expected to enable such innovations that would otherwise be impossible through human efforts alone. Given that AI technology is advancing so expeditiously, a focus on proactive, technology-driven, and comprehensive legal protections will prioritize and encourage further innovation in and around this critical area.

McDonald’s looks to supersize its AI prowess via a new partnership with IBM – Diginomica

Posted: at 6:40 am

McDonald's decision to sell off its AI tech labs to IBM is a gambit that reflects the fast food firm's digitally forward thinking, but is also set to provide a useful fix for some of its staffing issues.

The hamburger behemoth has seen salary costs soar as well as finding that labor shortages are forcing outlets to close earlier and impacting on the speed of service. This is something that CEO Chris Kempczinski fears is unlikely to change over the next few quarters:

It's a very challenging staffing environment in the US. A little bit less so in Europe, but still challenging in Europe. In the US for us, we are seeing that there is wage inflation. Our franchisees are increasing wages, they are over 10% wage inflation year-to-date... We're up over 15% on wages. That is having some helpful benefits. Certainly, the higher wages that you pay, it allows you to stay competitive.

But we're also seeing that it's just is very challenging right now in the market to find the level of talent that you need. And so, for us, it is putting some pressure on things like operating hours where we might be dialing back late night, for example, from what we would ordinarily be doing. It's also putting some pressure around speed of service where we are down a little bit on speed of service over the last kind of year-to-date and maybe even in the last quarter. That's also a function of not being able to have the restaurants fully staffed.

To help tackle this situation, McDonald's intends to sell its AI business, McD Tech Labs, to IBM, one outcome of which will be to develop voice-recognition Automated Order Taking (AOT) technology in McDonald's drive-thru lanes. Upon closing of the deal at the end of this year, the McD Tech Labs team of around 100 people will become part of the IBM Cloud & Cognitive Software division.

McD Tech Labs was created to advance employee- and customer-facing innovations following McDonald's 2019 acquisition of Apprente. That wasn't the firm's first dabble in AI - back in 2015 the firm piloted digital menu boards in some locations that were able to make recommendations for food and drink choices based on the weather conditions at the time. The company also acquired Israeli AI firm Dynamic Yield.

But with McDonald's strategic growth plan, "Accelerating the Arches," committing the company to innovation across Digital, Delivery and Drive Thru, the setting up of McD Tech Labs was a much more significant investment. Since then the company has rolled out more and more trials of AI tech on the ground with some success. The new deal with IBM is intended to build on that and take things to the next level.

Kempczinski explained:

These tests have shown substantial benefits to customers and the crew experience. To enable development and scale deployment of this program, McDonald's has now entered into a strategic relationship with IBM. In my mind, IBM is the ideal partner for McDonald's, given their expertise in building AI-powered customer care solutions and voice recognition.

As per the two firms' joint announcement:

This agreement will accelerate McDonald's efforts to provide an even more convenient and unique customer and crew experience. McDonald's development and testing of AOT technology in restaurants has shown substantial benefits to customers and the restaurant crew experience.

Moving forward, IBM's expertise in building customer care solutions with AI and natural language processing will help scale the AOT technology across markets and tackle integrations including additional languages, dialects and menu variations.

The acquisition of McD Tech Labs will complement IBM's existing work to develop and deliver AI-driven customer care solutions with IBM Watson. Businesses across industries from financial services and healthcare to telecommunications and retail are using Watson to drive business outcomes.

Expanding on the plans, Kempczinski added that there's a need to ramp efforts up and that this means thinking about what can be best done alone and what needs to be outsourced to a partner:

There are certain times where it may make sense for us to go acquire a technology so that we can accelerate the development of that and make sure that it is bespoke to McDonald's needs. But at some point, that technology reaches a level of development, where I think getting it to a partner, who can then blow it out and scale it globally, makes more sense.

What we did with Apprente is very much consistent with that philosophy, which is we've had it for a couple of years. I've been really pleased with how the team has progressed the development of that. We're seeing some very encouraging results in the restaurants that we have it. But there's still a lot of work that needs to go into introducing other languages, being able to do it across 14,000 restaurants with all the various menu permutations, etc. And that work is beyond the scale of our core competencies, if you will.

There could well be other such partnerships ahead, he added:

Going forward, it's going to be very much on a case-by-case basis as to when we go from day one with the partner versus where we might bring something in-house for a period of time. But the nice thing about being McDonald's is we're everybody's first call when it comes to a partner in the restaurant industry. And so, we have a really good visibility to the various partners out there. Certainly, I think our overall view is we are best on a long-term sustaining basis to use others externally partnering. But again, there may be time-to-time where there's benefit for us from being able to accelerate and learn to have it in for a period of time.

Elsewhere the Digital, Delivery and Drive Thru three-pronged strategy continues apace. Around 20% of sales in McDonald's top six markets are coming via digital channels, be they app, in-store kiosks or digitally-enabled delivery. Kempczinski also points to the rollout of the firm's MyMcDonald's Rewards loyalty program as a digital success story:

Our loyalty program has been an instant fan-favorite and delivers great value to our most loyal customers. It also creates another touch point to increase engagement and take our relationship with customers to more responsive, more personalized places. We're already seeing increased customer satisfaction and a higher frequency among digital customers compared to non-digital.

He added:

The more we learn about loyalty, the more optimistic that we get about loyalty. I think for us in terms of what that means for the business long-term, certainly the benefits you get with a loyalty program is the ability to increase frequency. In the markets where we operate, roughly 80% of the population visits the McDonald's once a year, so it's not that we have a reach opportunity; it's about driving frequency in this business. We've seen in the places where we have deployed loyalty that it absolutely does increase customer frequency. So, for us, that's really encouraging.

The program is also helping McDonald's to know its customers better, he said:

We had set out earlier an aspiration where we wanted to have 40% of our customers be known customers. Today, that number is probably only about 5% of the customers where we actually know who is the customer, what did they buy, what did they buy previously. You can imagine all sorts of things that you're able to learn about customers and their preferences when you're able to get more and more of your transactions where you know who the customer is. Loyalty is certainly the way that you get that customer to engage and share information with you.

As diginomica has noted on many occasions, McDonald's savvy exploitation of digital transformation has put it at the forefront of the Quick Service Restaurant sector. This latest gambit with IBM represents an interesting and potentially lucrative extension of its capabilities.

How AI is speeding up the digital transformation in enterprise – YourStory

Posted: at 6:40 am

The operating environment of enterprises is rapidly and fundamentally being altered. Powered by smarter, more demanding customers spoilt for choice, and by employees who expect a consumerised experience that delivers value for their organisation, enterprises that are slow to adapt to these newer, cutting-edge business demands are bound to be left behind and relegated to irrelevance.

While some leaders are already moving ahead, many others are still studying the strategic justification for moving beyond BI reporting systems to implement Artificial Intelligence.

While the benefits can be profound, the commitment is significant too. In a fast-evolving business environment, strategic objectives need to be paired with the ability to make more frequent, more responsive, and more accurate business decisions.

Today, technology has the capability to take over the influencing, improving, and managing of every interaction between consumers and businesses.

Consider the most pertinent decisions any consumer commerce business has to make: what to sell (build), where to sell, when to sell, and what to sell it for. From product to pricing to promotion and now, logistics, and shipping, organisations are looking to deliver value as a proposition.

With the right internal business data gathered, collected, and analysed, AI solutions can also access third-party data that originate outside the company, including understanding the competitive landscape and competitive intelligence on pricing.

Not just confined to sales trends and tactics, AI algorithms also help commerce businesses to capture effective business strategies and consumer behaviour.

Today's tech-savvy consumers are looking for intuitive systems that minimise their interaction downtimes and maximise their digital delight.

Consumer preferences and choices are changing every day, and the influx of options is not going to slow anytime soon.

In situations like these, enterprises and businesses need to ensure that the kind of messaging and strategy devised is exactly in line with their defined niche, to reduce the chances of losing prospects or customers to the competition.

Here, the integration of AI plays an important role: it helps in simulating consumer behaviour and preferences to predict upcoming trends in demand and supply chain management.

With the right AI solution, businesses have the opportunity to pivot their businesses to greater heights; the inclusion of this state-of-the-art, futuristic technology can help them understand and evaluate customers through multi-level interaction, and make sense of and predict the factors that influence their preferences and buying behaviours.

Simply put, these emerging, turnkey solutions are designed to lead enterprises of all shapes and sizes comprehensively and conclusively into the age of AI.

Much more than traditional Business Intelligence (BI) technology, these solutions can measure and monitor business results, clarify why they are occurring, and recommend actions to drive improvements.

The key differentiator is their ability to offer real-time, actionable insights that can trigger specific decisions that drive business value.

Today's AI solutions thrive on the power of data. As more and more complex data patterns get sifted through to create intelligent responses that lift business performance, AI will be a critical ingredient for value delivery.

Done right, AI implementation can transform businesses overnight, with visible, substantiable metrics on revenue and profitability. These systems have the potential to minimise excess, maximise output and optimise performance.

What makes a good AI implementation is the degree to which it is native to the business it serves. The AI system absorbs a high level of domain knowledge by ploughing into primary and secondary data across the industry, making the solution significantly contextual.

It's important to understand that there is no solution out there that can be everything to everyone. At the moment, the developers of these platforms are staking their claims in specific industries and focusing their products on the unique market challenges of these verticals. While this may change in the future, it's simply not the case right now.

Today, there are emerging platforms that focus on consumer commerce industries, such as retail, restaurant, hospitality, convenience, ecommerce, financial services, manufacturing, and consumer packaged goods.

Additionally, there are also specialised platforms for businesses operating in utilities, heavy machinery, cyber security and more.

At the end of the day, all of these solutions are designed with a common goal: to speed up digital transformation and ensure that enterprises are able to massively scale the quality and speed of their decisions in order to delight their customers and generate more business value.

It is imperative that enterprises speed up their digital transformation to fully leverage AI and extract value from applying AI to their decision-making processes. Enterprises need to drive their agility by enabling unfettered access to data and compute, and by enabling a high degree of integration and automation. One way to achieve this is by tapping into one of the hottest sectors within the software industry: enterprise AI platforms.

(Disclaimer: The views and opinions expressed in this article are those of the author and do not necessarily reflect the views of YourStory.)
