
Category Archives: Ai

Ethics of AI: Benefits and risks of artificial intelligence – ZDNet

Posted: May 4, 2021 at 8:10 pm

In 1949, at the dawn of the computer age, the French philosopher Gabriel Marcel warned of the danger of naively applying technology to solve life's problems.

Life, Marcel wrote in Being and Having, cannot be fixed the way you fix a flat tire. Any fix, any technique, is itself a product of that same problematic world, and is therefore problematic, and compromised.

Marcel's admonition is often summarized in a single memorable phrase: "Life is not a problem to be solved, but a mystery to be lived."

Despite that warning, seventy years later, artificial intelligence is the most powerful expression yet of humans' urge to solve or improve upon human life with computers.

But what are these computer systems? As Marcel would have urged, one must ask where they come from, whether they embody the very problems they would purport to solve.

Ethics in AI is essentially questioning, constantly investigating, and never taking for granted the technologies that are being rapidly imposed upon human life.

That questioning is made all the more urgent because of scale. AI systems are reaching tremendous size in terms of the compute power they require, and the data they consume. And their prevalence in society, both in the scale of their deployment and the level of responsibility they assume, dwarfs the presence of computing in the PC and Internet eras. At the same time, increasing scale means many aspects of the technology, especially in its deep learning form, escape the comprehension of even the most experienced practitioners.

Ethical concerns range from the esoteric, such as who is the author of an AI-created work of art; to the very real and very disturbing matter of surveillance in the hands of military authorities who can use the tools with impunity to capture and kill their fellow citizens.

Somewhere in the questioning is a sliver of hope that with the right guidance, AI can help solve some of the world's biggest problems. The same technology that may propel bias can reveal bias in hiring decisions. The same technology that is a power hog can potentially contribute answers to slow or even reverse global warming. The risks of AI at the present moment arguably outweigh the benefits, but the potential benefits are large and worth pursuing.

As Margaret Mitchell, formerly co-lead of Ethical AI at Google, has elegantly encapsulated, the key question is, "what could AI do to bring about a better society?"

Mitchell's question would be interesting on any given day, but it comes within a context that has added urgency to the discussion.

Mitchell's words come from a letter she wrote and posted on Google Drive following the departure of her co-lead, Timnit Gebru, in December. Gebru made clear that she was fired by Google, a claim Mitchell backs up in her letter. Jeff Dean, head of AI at Google, wrote in an internal email to staff that the company accepted the resignation of Gebru. Gebru's former colleagues offer a neologism for the matter: Gebru was "resignated" by Google.

Margaret Mitchell [right] was fired on the heels of the removal of Timnit Gebru.

I was fired by @JeffDean for my email to Brain women and Allies. My corp account has been cutoff. So I've been immediately fired 🙂

Timnit Gebru (@timnitGebru) December 3, 2020

Mitchell, who expressed outrage at how Gebru was treated by Google, was fired in February.

The departure of the top two ethics researchers at Google cast a pall over Google's corporate ethics, to say nothing of its AI scruples.

As reported by Wired's Tom Simonite last month, two academics invited to participate in a Google conference on safety in robotics in March withdrew from the conference in protest of the treatment of Gebru and Mitchell. A third academic said that his lab, which has received funding from Google, would no longer apply for money from Google, also in support of the two professors.

Google staff quit in February in protest of Gebru and Mitchell's treatment, CNN's Rachel Metz reported. And Samy Bengio, a prominent scholar on Google's AI team who helped to recruit Gebru, resigned this month in protest over Gebru and Mitchell's treatment, Reuters has reported.

A petition on Medium signed by 2,695 Google staff members and 4,302 outside parties expresses support for Gebru and calls on the company to "strengthen its commitment to research integrity and to unequivocally commit to supporting research that honors the commitments made in Google's AI Principles."

Gebru's situation is an example of how technology is not neutral, as the circumstances of its creation are not neutral, as MIT scholars Katlyn Turner, Danielle Wood, and Catherine D'Ignazio discussed in an essay in January.

"Black women have been producing leading scholarship that challenges the dominant narratives of the AI and Tech industry: namely that technology is ahistorical, 'evolved', 'neutral' and 'rational' beyond the human quibbles of issues like gender, class, and race," the authors write.

During an online discussion of AI in December, AI Debate 2, Celeste Kidd, a professor at UC Berkeley, reflecting on what had happened to Gebru, remarked, "Right now is a terrifying time in AI."

"What Timnit experienced at Google is the norm, hearing about it is what's unusual," said Kidd.

The questioning of AI and how it is practiced, and the phenomenon of corporations snapping back in response, comes as the commercial and governmental implementation of AI make the stakes even greater.

Ethical issues take on greater resonance when AI expands to uses that are far afield of the original academic development of algorithms.

The industrialization of the technology is amplifying the everyday use of those algorithms. A report this month by Ryan Mac and colleagues at BuzzFeed found that "more than 7,000 individuals from nearly 2,000 public agencies nationwide have used technology from startup Clearview AI to search through millions of Americans' faces, looking for people, including Black Lives Matter protesters, Capitol insurrectionists, petty criminals, and their own friends and family members."

Clearview neither confirmed nor denied BuzzFeed's findings.

New devices are being put into the world that rely on machine learning forms of AI in one way or another. For example, so-called autonomous trucking is coming to highways, where a "Level 4 ADAS" tractor trailer is supposed to be able to move at highway speed on certain designated routes without a human driver.

A company making that technology, TuSimple, of San Diego, California, is going public on Nasdaq. In its IPO prospectus, the company says it has 5,700 reservations so far in the four months since it announced availability of its autonomous driving software for the rigs. When a truck is rolling at high speed, carrying a huge load of something, making sure the AI software safely conducts the vehicle is clearly a priority for society.

TuSimple says it has almost 6,000 pre-orders for a driverless semi-truck.

Another area of concern is AI applied in the area of military and policing activities.

Arthur Holland Michel, author of an extensive book on military surveillance, Eyes in the Sky, has described how ImageNet has been used to enhance the U.S. military's surveillance systems. For anyone who views surveillance as a useful tool to keep people safe, that is encouraging news. For anyone worried about the issues of surveillance unchecked by any civilian oversight, it is a disturbing expansion of AI applications.

Calls are rising for mass surveillance, enabled by technology such as facial recognition, not to be used at all.

As ZDNet's Daphne Leprince-Ringuet reported last month, 51 organizations, including AlgorithmWatch and the European Digital Society, have sent a letter to the European Union urging a total ban on surveillance.

And it looks like there will be some curbs after all. After an extensive report on the risks a year ago, and a companion white paper, and solicitation of feedback from numerous "stakeholders," the European Commission this month published its proposal for "Harmonised Rules On Artificial Intelligence For AI." Among the provisos is a curtailment of law enforcement use of facial recognition in public.

"The use of 'real time' remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement is also prohibited unless certain limited exceptions apply," the report states.

The backlash against surveillance keeps finding new examples to point to. The paradigmatic example had been the monitoring of ethnic Uyghurs in China's Xinjiang region. Following the February military coup in Myanmar, Human Rights Watch reports that human rights hang in the balance given the surveillance system that had just been set up. That project, called Safe City, was deployed in the capital Naypyidaw in December.

As one researcher told Human Rights Watch, "Before the coup, Myanmar's government tried to justify mass surveillance technologies in the name of fighting crime, but what it is doing is empowering an abusive military junta."

Also: The US, China and the AI arms race: Cutting through the hype

The National Security Commission on AI's Final Report in March warned the U.S. is not ready for global conflict that employs AI.

As if all those developments weren't dramatic enough, AI has become an arms race, and nations have now made AI a matter of national policy to avoid what is presented as existential risk. The U.S.'s National Security Commission on AI, staffed by tech heavy hitters such as former Google CEO Eric Schmidt, Oracle CEO Safra Catz, and Amazon's incoming CEO Andy Jassy, last month issued its 756-page "final report" for what it calls the "strategy for winning the artificial intelligence era."

The authors "fear AI tools will be weapons of first resort in future conflicts," they write, noting that "state adversaries are already using AI-enabled disinformation attacks to sow division in democracies and jar our sense of reality."

The Commission's overall message is that "The U.S. government is not prepared to defend the United States in the coming artificial intelligence era." To get prepared, the White House needs to make AI a cabinet-level priority, and "establish the foundations for widespread integration of AI by 2025." That includes "building a common digital infrastructure, developing a digitally-literate workforce, and instituting more agile acquisition, budget, and oversight processes."

Why are these issues cropping up? There are issues of justice and authoritarianism that are timeless, but there are also new problems with the arrival of AI, and in particular its modern deep learning variant.

Consider the incident between Google and scholars Gebru and Mitchell. At the heart of the dispute was a research paper the two were preparing for a conference that crystallizes a questioning of the state of the art in AI.

The paper that touched off a controversy at Google: Bender, Gebru, McMillan-Major, and Mitchell argue that very large language models such as Google's BERT present two dangers: massive energy consumption and the perpetuation of biases.

The paper, coauthored by Emily Bender of the University of Washington, Gebru, Angelina McMillan-Major, also of the University of Washington, and Mitchell, titled "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" focuses on a topic within machine learning called natural language processing, or NLP.

The authors describe how language models such as GPT-3 have gotten bigger and bigger, culminating in very large "pre-trained" language models, including Google's Switch Transformer, also known as Switch-C, which appears to be the largest model published to date. Switch-C uses 1.6 trillion neural "weights," or parameters, and is trained on a corpus of 745 gigabytes of text data.

The authors identify two risk factors. One is the environmental impact of larger and larger models such as Switch-C. Those models consume massive amounts of compute, and generate increasing amounts of carbon dioxide. The second issue is the replication of biases in the generation of text strings produced by the models.

The environment issue is one of the most vivid examples of the matter of scale. As ZDNet has reported, the state of the art in NLP, and, indeed, much of deep learning, is to keep using more and more GPU chips, from Nvidia and AMD, to operate ever-larger software programs. Accuracy of these models seems to increase, generally speaking, with size.

But there is an environmental cost. Bender and team cite previous research that has shown that training a large language model, a version of Google's Transformer that is smaller than Switch-C, emitted 284 tons of carbon dioxide, which is 57 times as much CO2 as a human being is estimated to be responsible for releasing into the environment in a year.

It's ironic, the authors note, that the ever-rising environmental cost of such huge GPU farms falls most immediately on the communities at the forefront of risk from climate change, whose dominant languages aren't even accommodated by such language models, in particular the population of the Maldives archipelago in the Indian Ocean, whose official language is Dhivehi, a branch of the Indo-Aryan family:

Is it fair or just to ask, for example, that the residents of the Maldives (likely to be underwater by 2100) or the 800,000 people in Sudan affected by drastic floods pay the environmental price of training and deploying ever larger English LMs [language models], when similar large-scale models aren't being produced for Dhivehi or Sudanese Arabic?

The second concern has to do with the tendency of these large language models to perpetuate biases that are contained in the training set data, which are often publicly available writing that is scraped from places such as Reddit. If that text contains biases, those biases will be captured and amplified in generated output.

The fundamental problem, again, is one of scale. The training sets are so large that the biases they contain cannot be properly documented, nor can the data be properly curated to remove bias.

"Large [language models] encode and reinforce hegemonic biases, the harms that follow are most likely to fall on marginalized populations," the authors write.

The risk of the huge cost of compute for ever-larger models has been a topic of debate for some time now. Part of the problem is that measures of performance, including energy consumption, are often cloaked in secrecy.

Some benchmark tests in AI computing are getting a little bit smarter. MLPerf, the main measure of performance of training and inference in neural networks, has been making efforts to provide more representative measures of AI systems for particular workloads. This month, the organization overseeing MLPerf, the MLCommons, for the first time asked vendors to list not just performance but energy consumed for those machine learning tasks.

Regardless of the data, the fact is systems are getting bigger and bigger in general. The response to the energy concern within the field has been two-fold: to build computers that are more efficient at processing the large models, and to develop algorithms that will compute deep learning in a more intelligent fashion than just throwing more computing at the problem.

Cerebras's Wafer Scale Engine is the state of the art in AI computing, the world's biggest chip, designed for the ever-increasing scale of things such as language models.

On the first score, a raft of startups have arisen to offer computers dedicated to AI that they say are much more efficient than the hundreds or thousands of GPUs from Nvidia or AMD typically required today.

They include Cerebras Systems, which has pioneered the world's largest computer chip; Graphcore, the first company to offer a dedicated AI computing system, with its own novel chip architecture; and SambaNova Systems, which has received over a billion dollars in venture capital to sell both systems and an AI-as-a-service offering.

"These really large models take huge numbers of GPUs just to hold the data," Kunle Olukotun, Stanford University professor of computer science who is a co-founder of SambaNova, told ZDNet, referring to language models such as Google's BERT.

"Fundamentally, if you can enable someone to train these models with a much smaller system, then you can train the model with less energy, and you would democratize the ability to play with these large models," by involving more researchers, said Olukotun.

Those designing deep learning neural networks are simultaneously exploring ways the systems can be more efficient. For example, the Switch Transformer from Google, the very large language model referenced by Bender and team, can reach an optimal spot in its training with far fewer than its maximum 1.6 trillion parameters, write lead author William Fedus and colleagues at Google.

The software "is also an effective architecture at small scales as well as in regimes with thousands of cores and trillions of parameters," they write.

The key, they write, is to use a property called sparsity, which limits which of the weights are activated for each data sample.

Scientists at Rice University and Intel propose slimming down the computing budget of large neural networks by using a hashing table that selects the neural net activations for each input, a kind of pruning of the network.

Another approach to working smarter is a technique called hashing. That approach is embodied in a project called "Slide," introduced last year by Beidi Chen of Rice University and collaborators at Intel. They use something called a hash table to identify individual neurons in a neural network that can be dispensed with, thereby reducing the overall compute budget.

Chen and team call this "selective sparsification", and they demonstrate that running a neural network can be 3.5 times faster on a 44-core CPU than on an Nvidia Tesla V100 GPU.

As long as large companies such as Google and Amazon dominate deep learning in research and production, it is possible that "bigger is better" will continue to dominate neural networks. If smaller, less resource-rich users take up deep learning in smaller facilities, then more-efficient algorithms could gain new followers.

The second issue, AI bias, runs in a direct line from the Bender et al. paper back to a paper in 2018 that touched off the current era in AI ethics, the paper that was the shot heard 'round the world, as they say.

Buolamwini and Gebru brought international attention to the matter of bias in AI with their 2018 paper "Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification," which revealed that commercial facial recognition systems showed "substantial disparities in the accuracy of classifying darker females, lighter females, darker males, and lighter males in gender classification systems."

The paper was authored by Gebru, then at Microsoft, along with MIT researcher Joy Buolamwini. They demonstrated how commercially available facial recognition systems had high accuracy for images of light-skinned men but catastrophically poor accuracy for images of darker-skinned women. The authors' critical question was why such inaccuracy was tolerated in commercial systems.

Buolamwini and Gebru presented their paper at the Association for Computing Machinery's Conference on Fairness, Accountability, and Transparency. That is the same conference where in February Bender and team presented the Parrot paper. (Gebru is a co-founder of the conference.)

Both Gender Shades and the Parrot paper deal with a central ethical concern in AI, the notion of bias. AI in its machine learning form makes extensive use of principles of statistics. In statistics, bias occurs when an estimate of something turns out not to match the true quantity of that thing.

So, for example, if a political pollster surveys voters' preferences but only gets responses from people who are willing to talk to poll takers, the result is what is called response bias: the poll's estimate of a candidate's popularity is not an accurate reflection of preference in the broader population.

Also: AI and ethics: One-third of executives are not aware of potential AI bias

The Gender Shades paper in 2018 broke ground in showing how an algorithm, in this case facial recognition, can be extremely out of alignment with the truth, a form of bias that hits one particular sub-group of the population.

Flash forward, and the Parrot paper shows how that statistical bias has become exacerbated by scale effects in two particular ways. One way is that data sets have proliferated, and increased in scale, obscuring their composition. Such obscurity can obfuscate how the data may already be biased versus the truth.

Second, NLP programs such as GPT-3 are generative, meaning that they flood the world with an enormous volume of automatically created artifacts, such as machine-generated writing. Because those artifacts reproduce whatever biases the models have absorbed, generating them replicates and amplifies those biases, spreading them further.

On the first score, the scale of data sets, scholars have argued for going beyond merely tweaking a machine learning system in order to mitigate bias, and to instead investigate the data sets used to train such models, in order to explore biases that are in the data itself.

Before she was fired from Google's Ethical AI team, Mitchell led her team to develop a system called "Model Cards" to excavate biases hidden in data sets. Each model card would report metrics for a given neural network model, such as looking at an algorithm for automatically finding "smiling photos" and reporting its rate of false positives and other measures.

One example is an approach created by Mitchell and team at Google called model cards. As explained in the introductory paper, "Model cards for model reporting," data sets need to be regarded as infrastructure. Doing so will expose the "conditions of their creation," which is often obscured. The research suggests treating data sets as a matter of "goal-driven engineering," and asking critical questions such as whether data sets can be trusted and whether they build in biases.

Another example is a paper last year, featured in The State of AI Ethics, by Emily Denton and colleagues at Google, "Bringing the People Back In," in which they propose what they call a genealogy of data, with the goal "to investigate how and why these datasets have been created, what and whose values influence the choices of data to collect, the contextual and contingent conditions of their creation, and the emergence of current norms and standards of data practice."

Vinay Prabhu, chief scientist at UnifyID, in a talk at Stanford last year described being able to take images of people from ImageNet, feed them to a search engine, and find out who people are in the real world. It is the "susceptibility phase" of data sets, he argues, when people can be targeted by having had their images appropriated.

Scholars have already shed light on the murky circumstances of some of the most prominent data sets used in the dominant NLP models. For example, Vinay Uday Prabhu, who is chief scientist at startup UnifyID Inc., in a virtual talk at Stanford University last year examined the ImageNet data set, a collection of 15 million images that have been labeled with descriptions.

The introduction of ImageNet in 2009 arguably set in motion the deep learning epoch. There are problems, however, with ImageNet, particularly the fact that it appropriated personal photos from Flickr without consent, Prabhu explained.

Those non-consensual pictures, Prabhu said, fall into the hands of thousands of entities all over the world, and that leads to a very real personal risk, what he called the "susceptibility phase," a massive invasion of privacy.

Using what's called reverse image search, via a commercial online service, Prabhu was able to take ImageNet pictures of people and "very easily figure out who they were in the real world." Companies such as Clearview, said Prabhu, are merely a symptom of that broader problem of a kind-of industrialized invasion of privacy.

An ambitious project has sought to catalog that misappropriation. Called Exposing.ai, it is the work of Adam Harvey and Jules LaPlace, and it formally debuted in January. The authors have spent years tracing how personal photos were appropriated without consent for use in machine learning training sets.

The site is a search engine where one can "check if your Flickr photos were used in dozens of the most widely used and cited public face and biometric image datasets [...] to train, test, or enhance artificial intelligence surveillance technologies for use in academic, commercial, or defense related applications," as Harvey and LaPlace describe it.

Some argue the issue goes beyond simply the contents of the data to the means of its production. Amazon's Mechanical Turk service is ubiquitous as a means of employing humans to prepare vast data sets, such as by applying labels to pictures for ImageNet or to rate chat bot conversations.

An article last month by Vice's Aliide Naylor quoted Mechanical Turk workers who felt coerced in some instances to produce results in line with a predetermined objective.

Turkopticon aims to arm workers on Amazon's Mechanical Turk with honest appraisals of the working conditions offered by various Turk clients.

A project called Turkopticon has arisen to crowd-source reviews of the parties who contract with Mechanical Turk, to help Turk workers avoid abusive or shady clients. It is one attempt to ameliorate what many see as the troubling plight of an expanding underclass of piece workers, what Mary Gray and Siddharth Suri of Microsoft have termed "ghost work."

There are small signs the message of data set concern has gotten through to large organizations practicing deep learning. Facebook this month announced a new data set that was created not by appropriating personal images but rather by making original videos of over three thousand paid actors who gave consent to appear in the videos.

The paper by lead author Caner Hazirbas and colleagues explains that the "Casual Conversations" data set is distinguished by the fact that "age and gender annotations are provided by the subjects themselves." Skin type of each person was annotated by the authors using the so-called Fitzpatrick Scale, the same measure that Buolamwini and Gebru used in their Gender Shades paper. In fact, Hazirbas and team prominently cite Gender Shades as precedent.

Hazirbas and colleagues found that, among other things, when machine learning systems are tested against this new data set, some of the same failures crop up as identified by Buolamwini and Gebru. "We noticed an obvious algorithmic bias towards lighter skinned subjects," they write.

See the original post:

Ethics of AI: Benefits and risks of artificial intelligence - ZDNet


Three Ways That Organizations Are Under Utilizing AI In Their Customer Experience – Forbes

Posted: at 8:10 pm

Over the last 12 months, we have seen a surge in investment in Artificial Intelligence (AI)-enabled customer self-service technologies, as brands have put in place tools that help deflect calls away from their support teams and allow customers to self-serve.

However, despite these investments, we have also seen that the phone remains a vital channel for customer service at many organizations. According to Salesforce data, daily call volume reached an all-time high last year, up 24% compared to 2019 levels. Meanwhile, Accenture found that 58% of customers prefer to speak to a support agent when they need to solve an urgent or complex issue, particularly during times of crisis.

Now, consider one of those calls.

When a customer gets through to an agent, they are not thinking about how many calls that agent has already answered that day, what those calls were like, or how they may have affected the agent. The customer, in the moment, is only thinking about solving their particular problem.

That's all very well, you might say.

Stressed call center agent.

But, in the face of consistently high call volumes and the strain of working remotely for an extended period, reports are now starting to emerge that many contact center agents are experiencing a phenomenon familiar to nurses and doctors: compassion fatigue. This is the situation where, due to consistently high workloads, agents become emotionally exhausted, teeter on the verge of burnout, and become unable to deliver a high level of service.

That, in turn, feeds directly through to the service and experience that the patient or customer receives.

However, Dr Skyler Place, Chief Behavioural Science Officer at Cogito, believes that compassion fatigue is avoidable, and organizations should be using AI to enable and support their agents whilst on a call and, at the same time, manage their well-being and performance.

He believes that there are three areas in which organizations are underutilizing AI when trying to improve their customer experience (CX).

The first is that brands should be leveraging AI technology to provide real-time feedback whilst an agent is on a call to support and empower them in the moment.

Secondly, given that many support teams are still working remotely, AI technology can replace the tradition of walking the floor and help supervisors understand how their teams are doing and what sort of coaching and support they need from call to call.

Thirdly, when you combine that data with customer outcome data and apply AI technology, you can identify insights that, as Place puts it, "can help you improve your business processes, your business outcomes and drive macro strategies beyond the call and beyond the call center."

A system that provides in-call, real-time support for agents while also intelligently understanding call demand, an agent's experience, and in-shift call profiles, and that uses this to optimize call matching toward positive customer and employee outcomes, can only be a good thing.

Compassion fatigue is real, and organizations need to manage their agents' performance and well-being if they are to achieve excellent phone-based customer service.

Visit link:

Three Ways That Organizations Are Under Utilizing AI In Their Customer Experience - Forbes


Yet another Google AI leader has defected to Apple – Ars Technica

Posted: at 8:10 pm

Enlarge / AI researcher Samy Bengio (left) poses with his brother Yoshua Bengio (right) for a photo tied to a report from cloud-platform company Paperspace on the future of AI.

Apple has hired Samy Bengio, a prominent AI researcher who previously worked at Google. Bengio will lead "a new AI research unit" within Apple, according to a recent report in Reuters. He is just the latest in a series of prominent AI leaders and workers Apple has hired away from the search giant.

Apple uses machine learning to improve the quality of photos taken with the iPhone, surface suggestions of content and apps that users might want to use, power smart search features across its various software offerings, assist in palm rejection for users writing with the iPad's Pencil accessory, and much more.

Bengio was part of a cadre of AI professionals who left Google to protest the company's firings of its own AI ethics researchers (Margaret Mitchell and Timnit Gebru) after those researchers raised concerns about diversity and Google's approach to ethical considerations around new applications of AI and machine learning. Bengio voiced his support for Mitchell and Gebru, and he departed of his own volition after they were let go.

In his 14 years at Google, Bengio worked on AI applications like speech and image analysis, among other things. Neither Bengio nor Apple has said exactly what he will be researching in his new role in Cupertino.

See the article here:

Yet another Google AI leader has defected to Apple - Ars Technica


Forbes AI 50 Selects Nines Radiology as one of the Most Promising AI Companies – MedTech Dive

Posted: at 8:10 pm

PRESS RELEASE FROM NINES

Forbes AI 50 Selects Nines as one of the Most Promising AI Companies

Palo Alto, Calif., April 30, 2021 - Forbes recently announced that Nines, Inc. has been selected as one of the 50 most promising private AI companies in the US and Canada. The Forbes AI 50 highlights companies that are using artificial intelligence (AI) in meaningful ways and demonstrating business potential.

"Being included in this list of Most Promising AI Companies is a true honor," said David Stavens, CEO of Nines, Inc. "To achieve this selection is a validation of our unique approach to deliver quality, reliable care to patients in hospitals, imaging centers and radiology practices."

According to Forbes, the magazine received nearly 400 submissions from the US and Canada. From those, the list was whittled down to 100 finalists. The judges, leading experts in AI, then selected the 50 most compelling companies. Nines is the only teleradiology practice included in the list.

The Forbes AI 50 list features 31 companies appearing for the first time. At least 13 are valued at $100 million or less, while 13 are valued at $1 billion or more. Silicon Valley remains the hub for AI startups, with 37 of the 50 honorees coming from the San Francisco Bay Area.

###

About Nines

Nines, Inc. and affiliated professional entities do business under the Nines brand. Headquartered in Silicon Valley, Nines provides a better approach to teleradiology, improving patient care with an exceptional team of clinical experts, engineers, and data scientists. These innovations focus on improving efficiencies in clinical workflows, yielding more reliable reports, turnaround times, and system uptime. Hospitals and imaging centers rely on Nines for its unmatched innovation cadence and roster of world-class radiologists. To learn more details, visit nines.com.

Read the original post:

Forbes AI 50 Selects Nines Radiology as one of the Most Promising AI Companies - MedTech Dive


John Deere and Audi Apply Intel’s AI Technology – Automation World

Posted: at 8:10 pm

While many earlier applications of artificial intelligence (AI) in manufacturing have focused on data analytics and identifying product and component defects with machine vision, use of the technology is already expanding beyond such applications in the real world. Two good examples of this can be seen at John Deere and Audi, where Intel's AI technology is being used to improve welding processes.

Christine Boles, vice president of the Internet of Things Group and general manager of the Industrial Solutions Division at Intel.

Explaining how Intel got involved in addressing industrial welding applications, Boles said, "Intel and Deere first connected at an industry conference to discuss some of the ways technology could be used to solve manufacturing challenges. Arc welding defect detection came up as an industry-wide challenge that Intel decided to take on."

She added that, as with Deere, Intel met with Audi at a conference years ago, and "the first project we worked on was spot welding quality detection in Audi's Neckarsulm plant." Boles added that this initial project with Audi has since expanded into other areas of collaboration around edge analytics and machine learning.

Gas metal arc welding (GMAW) is used at Deere's 52 factories around the world to weld mild- to high-strength steel to create machines and products. Across these factories, hundreds of robotic arms consume millions of pounds of weld wire annually.

The specific welding issue Deere is looking to address with Intel's AI technology is porosity: cavities in the weld metal caused by trapped gas bubbles as the weld cools. These cavities weaken the weld strength.

It's critical to find porosity defects early in the manufacturing process because, if these flaws are found later, re-work or even scrapping of full assemblies is often required.

ADLink's EOS-i6000-M Series AI GigE Vision Systems for the Edge, featuring the Intel Movidius Myriad VPU.

Intel and Deere worked collaboratively to develop an integrated, end-to-end system of hardware and software that could generate insights in real time at the edge. Using a neural network-based inference engine, the system logs defects in real time and automatically stops the welding process when defects are found, so they can be corrected immediately.

Combining an industrial grade ADLink Machine Vision Platform and a MeltTools welding camera, the edge system at Deere is powered by Intel Core i7 processors and uses Intel Movidius VPUs (vision processing units) and the Intel Distribution of OpenVINO toolkit.

"Deere is leveraging AI and machine vision to solve a common challenge with robotic welding," said Boles. "By leveraging Intel technology and smart infrastructure in their factories, Deere is positioning themselves to capitalize not only on this welding solution, but potentially others that emerge as part of their broader Industry 4.0 transformation."

A key aspect of Audi's approach is its recognition that creating customized hardware and software to handle individual use cases is not preferable. Instead, the company focuses on developing scalable and flexible platforms that allow it to more broadly apply advanced digital capabilities such as data analytics, machine learning, and edge computing.

MeltTools' Sync is a GigE-based arc-view camera.

With that perspective in mind, Audi worked with Intel and Nebbiolo Technologies (a supplier of fog/edge computing technologies) on a proof-of-concept project to improve quality control for the welds on vehicles produced at its Neckarsulm, Germany, assembly plant. Approximately 1,000 vehicles are produced at the Neckarsulm factory each production day, with an average of 5,000 welds in each car. That translates to more than 5 million welds each day.

Nine hundred of the 2,500 autonomous robots on its production line at this facility carry welding guns to do spot welds that hold pieces of metal together. To ensure the quality of its welds, Audi performs manual quality control inspections. Because it's impossible to manually inspect 1,000 cars every day, Audi uses the industry's standard sampling method.

"To do this, Audi pulls one car off the line each day and 18 engineers with clipboards use ultrasound probes to test the welding spots and record the quality of every spot," says Rita Wouhaybi, principal engineer for the Internet of Things Group in the Industrial Solutions Division at Intel and lead architect for Intel's Industrial Edge Insights software.

To cost-effectively test the welds on the other 999 vehicles produced each day, Audi worked with Intel to create algorithms using Intel's Industrial Edge Insights software and the Nebbiolo edge platform for streaming analytics. The machine-learning algorithm developed by Intel's data scientists for this application was trained for accuracy by comparing the predictions it generated to actual inspection data provided by Audi.

The machine learning model uses data generated by the welding controllers, rather than the robot controllers, so that electric voltage and current curves during the welding operation can be tracked. Other weld data used includes configuration of the welds, the types of metal, and the health of the electrodes.

A dashboard lets Audi employees visualize the data, and the system alerts technicians whenever it detects a faulty weld or a potential change in the configuration that could minimize or eliminate the faults altogether.

Overview of artificial intelligence at the edge in action at Audi.

"Inline inspection of 5,000 welds per car and inferring the results of each weld within 18 ms highlights the scale and real-time analytics response Nebbiolo's edge platform brings to manufacturing," says Pankaj Bhagra, software architect at Nebbiolo. "Our software stack provides the centralized management for distributed edge computing clusters, data ingestion from heterogeneous sources, data cleansing, secure data management and onboarding of AI/ML models, which allowed the Audi and Intel data science teams to continuously iterate the machine learning models until they achieved the desired level of accuracy."

According to Intel, the result is a scalable, flexible platform that Audi can use to improve quality control for spot welding and as the foundation for other use cases involving robots and controllers such as riveting, gluing and painting.

"Intel was the project leader," said Mathias Mayer of the Data Driven Production Tech Hub at the Audi Neckarsulm site. "They have production experience as well as knowing how to set up a system that does statistical process control. This is completely new to us. Intel taught us how to understand the data, how to use the algorithms to analyze data at the edge, and how we can work with data in the future to improve our operations on the factory floor."

Henning Loser, senior manager of the Audi Production Lab, agrees: "This solution is like a blueprint for future solutions. We have a lot of technologies in the factory, and this solution is a model we can use to create quality-inspection solutions for those other technologies so that we don't have to rely on manual inspections."

"Moving from manual inspections to an automated, data-driven process has allowed Audi to increase the scope and accuracy of its quality-control processes," said Loser. Other benefits include a 30% to 50% reduction in labor costs at the Neckarsulm factory.

Read the original here:

John Deere and Audi Apply Intel's AI Technology - Automation World


Cover Story: Now its AI that is eating the world – Which-50

Posted: at 8:10 pm

Marc Andreessen famously observed that "software is eating the world," and according to Jamila Gordon, CEO and founder of Lumachain, so is artificial intelligence (AI).

AI and Machine Learning (ML) are well and truly ingrained in every industry. From healthcare, to agribusiness, to food manufacturing, AI is improving efficiencies and productivity, while also generating common challenges.

To better understand the use cases and impediments of AI across different verticals, we hosted a panel with three AI CEOs who, coincidentally, were all award winners in this year's Women in AI Awards: Gordon of Lumachain, Perugini of Presagen, and Turner of Bitwise Agronomy.

AI Use-Cases

Presagen's first product, Life Whisperer, uses cloud-based AI in the embryo selection process during IVF. The AI is trained using more than 20,000 2D embryo images to better identify the most viable embryos. According to Perugini, AI in women's health products and in the fertility sector is not only bringing about efficiencies, but is also advancing accessibility to healthcare products and services.

"AI in our space, in the fertility sector, is really advancing patient outcomes. It's improving efficiency and standardisation within the clinic environment, it's bringing global technology into clinics that wouldn't otherwise be able to access or afford it. And it's bringing affordable and accessible health care to patients around the world," Perugini says.

Both Bitwise Agronomy and Lumachain are deploying computer vision, camera-based AI that works to mimic human vision. In agribusiness, Turner says, AI is really starting to take off across all sorts of horticulture, livestock and all facets of agriculture.

Bitwise Agronomy claims that its AI solution delivers better results to farmers by using accurate data to improve yield and reduce costs.

Farmers can use GoPro cameras to capture footage during their work and upload this to the Bitwise Agronomy Platform. This then provides insights and uses historical data to make predictions around processes including crop performance, harvesting dates, climate impacts, water stress levels, sprays and irrigation systems.

Lumachain's use of computer vision-based AI is deployed to track the safety and security of the food manufacturing supply chain.

According to Gordon, Lumachain provides an end-to-end set of modules for the global food supply chain, which shows where the food has come from, where it has traveled, and what the conditions were, as well as ensuring that the products were safely, humanely and efficiently produced, while also ensuring the employees' safety.

Challenges and Impediments

When it comes to the impediments of AI, the panellists across their varied industries, were in agreement that they are facing the same challenges.

According to Perugini, "I think there are some common challenges with respect to impediments to AI, mainly around data access and quality and quantity and type of data. And I think the world is kind of shifting their thinking around this. It used to be that everyone was trying to get the largest data sets. I don't think it's like that anymore."

"I think there's a recognition that you need the right data sets. You need globally scalable data sets. Those data sets need to be representative broadly of the domain in which you're using AI to solve a particular problem."

When it comes to AI in healthcare, Perugini highlights the importance of broad data sets across multiple clinical environments with a wide range of patient demographics. Should these data sets not be wide enough, she says, then the AI will need retraining and rebuilding, leading to higher end-user costs.

"So everything that we do as a company is around solving that scalability challenge and getting the right data, which is globally diverse, so that we can deliver these products at scale and low cost," she says.

In agribusiness, Turner speaks to the same challenge using different language. She says, "It's about how we curate our data sets." This curation involves varied regions and growing types that are broad and deep enough to ensure that the AI is trained on multiple variables.

Ethics and AI

One of the key challenges facing AI globally is the rise of unethical AI. Rob Sibo, senior director of data and analytics at the technology consulting firm Slalom Australia, told Which-50 that the cognitive biases in the human thinking process are replicated in AI.

"Humans create the machine learning algorithms at the moment and a lot of times we propagate the same biases when we design the algorithms or when we collect the data that trains the algorithms," says Sibo.

"There's a lot of biases that get replicated into the machine learning models, which is what concerns me as well, because the model might be perfectly fine, the data might be fine, but the way we frame the problem and the objective is completely skewed. So you just apply a perfectly good model to a skewed problem."

To mitigate the rise of unethical AI, the Australian Government's Department of Industry, Science, Energy and Resources developed an AI Ethics Framework, which it claims helps to achieve better outcomes, reduce risk and encourage good governance.

According to Perugini, who helped to develop the framework, it is reminiscent of Australia's strict regulatory framework for healthcare.

"I think it's a very structured and strong way to manage risk around how many people are you going to impact with this AI, what is the outcomes of kind of getting it wrong and how do we therefore mitigate those risks or ensure that the right data has been utilised or that testing has been done to protect the consumers that we are serving," she says.

The Future Of AI

"AI will be in every industry in some way," says Gordon, "and AI will impact every aspect of our lives."

AI is set to become even more integrated into our lives than it already is and, according to Turner, so much so that we won't even know it is there.

Looking to the future, Gordon sees human collaboration with AI as the next big step, where AI can play a supervising role and the automation of manual tasks is set to increase.

The integration between AI and robotics, according to Turner, will be one of the greatest drivers of efficiency, where the AI can act as the brain and the vision for the robot's physical counterpart.

"I think quantum computing is going to help accelerate the growth so we can eventually get to General AI, which is a fair way off, where your AI can do multiple things at once, more these kind of futuristic AI robots that you hear of. I think we'll get there, but we're a fair way off from that."

Read more:

Cover Story: Now its AI that is eating the world - Which-50


New AI Regulations Are Coming. Is Your Organization Ready? – Harvard Business Review

Posted: at 8:10 pm

In recent weeks, government bodies, including U.S. financial regulators, the U.S. Federal Trade Commission, and the European Commission, have announced guidelines or proposals for regulating artificial intelligence. Clearly, the regulation of AI is rapidly evolving. But rather than wait for more clarity on what laws and regulations will be implemented, companies can take actions now to prepare. That's because there are three trends emerging from governments' recent moves.

Over the last few weeks, regulators and lawmakers around the world have made one thing clear: New laws will soon shape how companies use artificial intelligence (AI). In late March, the five largest federal financial regulators in the United States released a request for information on how banks use AI, signaling that new guidance is coming for the finance sector. Just a few weeks after that, the U.S. Federal Trade Commission (FTC) released an uncharacteristically bold set of guidelines on truth, fairness, and equity in AI, defining unfairness, and therefore the illegal use of AI, broadly as any act that causes more harm than good.

The European Commission followed suit, releasing its own proposal for the regulation of AI on April 21, which includes fines of up to 6% of a company's annual revenues for noncompliance, fines that are higher than the historic penalties of up to 4% of global turnover that can be levied under the General Data Protection Regulation (GDPR).

For companies adopting AI, the dilemma is clear: On the one hand, evolving regulatory frameworks on AI will significantly impact their ability to use the technology; on the other, with new laws and proposals still evolving, it can seem like it's not yet clear what companies can and should do. The good news, however, is that three central trends unite nearly all current and proposed laws on AI, which means that there are concrete actions companies can undertake right now to ensure their systems don't run afoul of any existing and future laws and regulations.

The first is the requirement to conduct assessments of AI risks and to document how such risks have been minimized (and ideally, resolved). A host of regulatory frameworks refer to these types of risk assessments as algorithmic impact assessments, sometimes called "IA for AI," which have become increasingly popular across a range of AI and data protection frameworks.

Indeed, some of these types of requirements are already in place, such as Virginia's Consumer Data Protection Act: signed into law last month, it requires assessments for certain types of high-risk algorithms. In the EU, the GDPR currently requires similar impact assessments for high-risk processing of personal data. (The UK's Information Commissioner's Office keeps its own plain-language guidance on how to conduct impact assessments on its website.)

Unsurprisingly, impact assessments also form a central part of the EU's new proposal on AI regulation, which requires an eight-part technical document for high-risk AI systems that outlines the foreseeable unintended outcomes and sources of risks of each AI system, along with a risk-management plan designed to address such risks. The EU proposal should be familiar to U.S. lawmakers: it aligns with the impact assessments required in a bill proposed in 2019 in both chambers of Congress called the Algorithmic Accountability Act. Although the bill languished on both floors, the proposal would have mandated similar reviews of the costs and benefits of AI systems related to AI risks. The bill continues to enjoy broad support in both the research and policy communities to this day, and Senator Ron Wyden (D-Oregon), one of its cosponsors, reportedly plans to reintroduce the bill in the coming months.

While the specific requirements for impact assessments differ across these frameworks, all such assessments have the two-part structure in common: mandating a clear description of the risks generated by each AI system and clear descriptions of how each individual risk has been addressed. Ensuring that AI documentation exists and captures each requirement for AI systems is a clear way to ensure compliance with new and evolving laws.

The second trend is accountability and independence, which, at a high level, requires both that each AI system be tested for risks and that the data scientists, lawyers, and others evaluating the AI have different incentives than those of the frontline data scientists. In some cases, this simply means that AI be tested and validated by different technical personnel than those who originally developed it; in other cases (especially higher-risk systems), organizations may seek to hire outside experts to be involved in these assessments to demonstrate full accountability and independence. (Full disclosure: bnh.ai, the law firm that I run, is frequently asked to perform this role.) Either way, ensuring that clear processes create independence between the developers and those evaluating the systems for risk is a central component of nearly all new regulatory frameworks on AI.

The FTC has been vocal on exactly this point for years. In its April 19 guidelines, it recommended that companies embrace accountability and independence and commended the use of transparency frameworks, independent standards, independent audits, and opening data or source code to outside inspection. (This recommendation echoed similar points on accountability the agency made publicly in April of last year.)

The last trend is the need for continuous review of AI systems, even after impact assessments and independent reviews have taken place. This makes sense. Because AI systems are brittle and subject to high rates of failure, AI risks inevitably grow and change over time, meaning that AI risks are never fully mitigated in practice at a single point in time.

For this reason, lawmakers and regulators alike are sending the message that risk management is a continual process. In the eight-part documentation template for AI systems in the new EU proposal, an entire section is devoted to describing the system in place to evaluate the AI system's performance in the post-market phase; in other words, how the AI will be continuously monitored once it's deployed.

For companies adopting AI, this means that auditing and review of AI should occur regularly, ideally in the context of a structured process that ensures the highest-risk deployments are monitored the most thoroughly. Including details about this process in the documentation (who performs the review, on what timeline, and which parties are responsible) is a central aspect of complying with these new regulations.

Will regulators converge on other approaches to managing AI risks outside of these three trends? Surely.

There are a host of ways to regulate AI systems, from explainability requirements for complex algorithms to strict limitations for how certain AI systems can be deployed (e.g., outright banning certain use cases such as the bans on facial recognition that have been proposed in various jurisdictions throughout the world).

Indeed, lawmakers and regulators have not even arrived at a broad consensus on what AI itself is, a clear prerequisite for developing a common standard to govern it. Some definitions, for example, are tailored so narrowly that they apply only to sophisticated uses of machine learning, which are relatively new to the commercial world; others (such as the one in the recent EU proposal) appear to cover nearly any software system involved in decision-making, which would sweep in systems that have been in place for decades. Diverging definitions of artificial intelligence are just one of many signs that we are still in the early stages of global efforts to regulate AI.

But even in these early days, the ways that governments are approaching the issue of AI risk have clear commonalities, which means the standards for regulating AI are already taking shape. Organizations adopting AI right now, and those seeking to keep their existing AI compliant, need not wait to start preparing.

View original post here:

New AI Regulations Are Coming. Is Your Organization Ready? - Harvard Business Review

Posted in Ai | Comments Off on New AI Regulations Are Coming. Is Your Organization Ready? – Harvard Business Review

AI bias is an ongoing problem, but there’s hope for a minimally biased future – TechRepublic

Posted: at 8:10 pm

Removing bias from AI is nearly impossible, but one expert sees a future with potentially bias-free decisions made by machines.

TechRepublic's Karen Roby spoke with Mohan Mahadevan, VP of research for Onfido, an ID and verification software company, about bias in artificial intelligence. The following is an edited transcript of their conversation.

Karen Roby: We talk a lot about AI and the misconceptions involved here. What is the biggest misconception? Do you think it's that people just think that it should be perfect, all of the time?

Mohan Mahadevan: Yeah, certainly. I think whenever we try to replace any human activity with machines, the expectation from us is that it's perfect. And we want to very much focus on finding problems, every little nitpicky problem that the machine may have.

Karen Roby: All right, Mohan. And if you could just break down for us, why does bias exist in AI?

Mohan Mahadevan: AI is driven primarily by data. AI refers to the process by which machines learn how to do certain things, driven by data. Whenever you do that, you have a particular dataset. And any dataset, by definition, is biased, because there is no such thing as a complete dataset, right? And so you're seeing a part of the world, and from that part of the world, you're trying to understand what the whole is like. And you're trying to model behavior on the whole. Whenever you try to do that, it is a difficult job. And in order to do that difficult job, you have to delve into the details of all the aspects, so that you can try to reconstruct the whole as best as you can.
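To make the "part of the world" point concrete, the sketch below compares a training set's group composition against an assumed reference population and flags under-represented groups. The group labels, counts, and the 5-point threshold are invented for illustration; they are not Onfido's data or method.

```python
from collections import Counter

# Hypothetical group labels in a training set, versus the population it should represent.
training_groups = ["A"] * 800 + ["B"] * 150 + ["C"] * 50
reference_share = {"A": 0.60, "B": 0.25, "C": 0.15}   # assumed population proportions

counts = Counter(training_groups)
total = sum(counts.values())

for group, target in reference_share.items():
    observed = counts.get(group, 0) / total
    gap = observed - target
    flag = "UNDER-REPRESENTED" if gap < -0.05 else "ok"
    print(f"group {group}: observed {observed:.2f} vs reference {target:.2f} ({flag})")
```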

Karen Roby: Mohan, you've been studying and researching AI for many years now. Talk a little bit about your role, there at Onfido, and what your job entails.

Mohan Mahadevan: Onfido is a company that takes a new approach to digital identity verification. So what we do is we connect the physical identity to a digital identity, thereby enabling you to prove who you are, to any service or product that you wish to access. It could be opening a bank account, or it could be renting a car, or opening an account and buying cryptocurrency, in these days. What I do, particularly, is that I run the computer vision and the AI algorithms that power this digital identity verification.

Karen Roby: When we talk about fixing the problem, Mohan, "how" is a very complex issue when we talk about bias. How do we fix it? What type of intervention is needed at different levels?

Mohan Mahadevan: I'll refer back to my earlier point, just for a minute. So what we covered there was that any dataset by itself is incomplete, which means it's biased in some form. And then, when we build algorithms, we exacerbate that problem by adding more bias into the situation. Those are two things first that we need to really pay close attention to and handle well. Then what happens is, the researchers that formulate these problems bring in their human bias into the problem. That could either fix the problem or make it worse, depending on the motivation of the researchers and how focused they are on solving this particular problem. Lastly, let us assume that all of these things worked out really well. OK? The researchers were unbiased, the dataset completion problem was solved.

The algorithms were modeled correctly. Then you have this perfect AI system that is currently unbiased or minimally biased. There's no such thing as unbiased; it's minimally biased. Then you take it and apply it in the real world. And real-world data is always going to drift and move and vary. So you have to monitor these systems closely when they're deployed in the real world, to see that they remain minimally biased. And you have to take corrective action as well, to correct for bias as it happens in the real world.
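As a sketch of what that post-deployment monitoring might involve (not Onfido's actual pipeline), the snippet below compares live model scores against a validation baseline and raises a crude drift signal when the mean shifts too far. The score values and the two-standard-deviation threshold are assumptions for illustration.

```python
import statistics

# Hypothetical score distributions: validation baseline vs. what the model produces in production.
baseline_scores = [0.42, 0.47, 0.51, 0.49, 0.55, 0.46, 0.50, 0.53]
live_scores     = [0.61, 0.66, 0.58, 0.70, 0.64, 0.67, 0.63, 0.69]

def mean_shift(baseline, live):
    """Crude drift signal: shift of the live mean, measured in baseline standard deviations."""
    base_mean = statistics.mean(baseline)
    base_std = statistics.stdev(baseline)
    return abs(statistics.mean(live) - base_mean) / base_std

shift = mean_shift(baseline_scores, live_scores)
if shift > 2.0:   # assumed alert threshold
    print(f"drift alert: live scores have shifted {shift:.1f} std devs; trigger a bias re-check")
else:
    print(f"no drift detected (shift = {shift:.1f} std devs)")
```

A real deployment would track many more signals (per-group error rates, input feature distributions), but the corrective loop is the same: detect the shift, then re-check for bias and retrain or recalibrate.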

Karen Roby: I think people hear a lot about bias and they think they know what that means. But what does it really mean, when bias exists in an AI?

Mohan Mahadevan: In order to understand the consequences, let's look at all the stakeholders in the equation. You have a company that builds a product based on AI. And then you have a consumer that consumes that product, which is driven by AI. So let's look at both sides, and the consequences are very different on both sides.

On the human side, if I get a loan rejected, it's terrible for me. Right? Even if, for all the Indian people ... So I'm from India. And so for all the Indian people, if an AI system was proven to be fair, but I get my loan rejected, I don't care that it's fair for all Indian people. Right? It affects me very personally and very deeply. So, as far as the individual consumer goes, individual fairness is a very critical component.

As far as the companies go, and the regulators and the governments go, they want to make sure that no company is systematically excluding any group. So they don't care so much about individual fairness; they look at group fairness. People tend to think of group fairness and individual fairness as separate things: if you just solve the group problem, you're OK. But the reality is, when you look at it from the perspective of the stakeholders, there are very different consequences.
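A tiny worked example helps separate the two notions. The decisions below are invented, and the metric shown (demographic parity difference) is just one common group-fairness measure, not one Mahadevan names; the point is that a clean group-level number says nothing to the individual whose loan was rejected.

```python
# Hypothetical loan decisions (1 = approved) for two groups.
decisions = [
    {"group": "G1", "approved": 1}, {"group": "G1", "approved": 1},
    {"group": "G1", "approved": 0}, {"group": "G1", "approved": 1},
    {"group": "G2", "approved": 1}, {"group": "G2", "approved": 0},
    {"group": "G2", "approved": 1}, {"group": "G2", "approved": 1},
]

def approval_rate(records, group):
    rows = [r["approved"] for r in records if r["group"] == group]
    return sum(rows) / len(rows)

# Group fairness: demographic parity difference between the two groups.
parity_gap = abs(approval_rate(decisions, "G1") - approval_rate(decisions, "G2"))
print(f"demographic parity gap: {parity_gap:.2f}")   # a small gap looks "fair" at the group level

# Individual fairness is not captured by that gap: these applicants were still rejected
# even though their group's overall approval rate looks acceptable.
rejected = [r for r in decisions if r["approved"] == 0]
print(f"{len(rejected)} individuals were rejected despite a small group-level gap")
```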

Karen Roby: We'll flip the script a little bit here, Mohan. In terms of the positives with AI, what excites you the most?

Mohan Mahadevan: There are just so many things that excite me. But in regards to bias itself, I'll tell you. Whenever a human being is making a decision on any kind of thing, whether it be a loan, whether it be an admission or whatever, there's always going to be a conscious and unconscious bias, within each human being. And so, if you think of an AI that looks at the behavior of a large number of human beings and explicitly excludes the bias from all of them, the possibility for a machine to be truly or very minimally biased is very high. And this is exciting, to think that we might live in a world where machines actually make decisions that are minimally biased.

Karen Roby: It definitely impacts us all in one way or another, Mohan. Wrapping up here, there's a lot of people that are scared of AI. Anytime you take people, humans, out of the equation, it's a little bit scary.

Mohan Mahadevan: Yeah. I think we should all be scared. I think this is not something that we should take lightly. And we should ask ourselves the hard questions as to what consequences there can be of proliferating technology for the sake of proliferating technology. So it's a mixed bag; I wish I had a simple answer for you, to say, "This is the answer." But, overall, if we look at machines like the washing machine, or our cars, or our little Roombas that clean our apartments and homes, there's a lot of really nice things that come out of even AI-based technologies today.

Those are examples of what we think of as old-school technologies, that actually use a lot of AI today. Your Roomba, for example, uses a lot of AI today. So it certainly makes our life a lot easier. The convenience of opening a bank account from the comfort of your home, in these pandemic times, oh, that's nice. AI is able to enable that. So I think there's a lot of reason to be excited about AI, the positive aspects of AI.

The scary parts I think come from several different aspects. One is bias-related. When an AI system is trained poorly, it can generate all kinds of systematic and random biases. That can cause detrimental effects on a per-person and on a group level. So we need to protect ourselves against those kinds of biases. But in addition to that, when it is indiscriminately used, AI can also lead to poor behaviors on the part of humans. So, at the end of the day, it's not the machine that's creating a problem, it's how we react to the machine's behavior that creates bigger problems, I think.

Both of those areas are important. It's not only the machines giving us good things; they also struggle with bias when humans don't build them right. And when humans use them indiscriminately and in the wrong way, they can create other problems as well.

Read more:

AI bias is an ongoing problem, but there's hope for a minimally biased future - TechRepublic

Posted in Ai | Comments Off on AI bias is an ongoing problem, but there’s hope for a minimally biased future – TechRepublic

Precision AI raises $20 million to reduce the chemical footprint of agriculture – PRNewswire

Posted: at 8:10 pm

The financing will support the advancement of a disruptive precision farming platform that deploys swarms of artificially intelligent drones to dramatically reduce herbicide use in row crop agriculture.

Precision AI's drone-based computer vision technology enables surgically precise application of herbicide to individual weeds in row crop farming. By spraying only weeds and avoiding the crop, yields can be maintained at a fraction of the chemical cost. Ultimately, the company's vision is to deploy hives of intelligent drones that will automate the crop protection process throughout the entire growing season, optimizing every square inch of farmland on a per-plant basis.
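Purely as an illustration of the per-plant idea (Precision AI has not published its detection pipeline, and the labels, confidences, and threshold below are invented), a spray decision driven by a classifier's output might look like this:

```python
# Illustrative sketch only, not Precision AI's actual method.
detections = [
    {"plant_id": 1, "label": "weed", "confidence": 0.94},
    {"plant_id": 2, "label": "crop", "confidence": 0.99},
    {"plant_id": 3, "label": "weed", "confidence": 0.58},
]

SPRAY_THRESHOLD = 0.80   # assumed confidence required before releasing herbicide

def spray_plan(detections, threshold=SPRAY_THRESHOLD):
    """Spray only plants confidently classified as weeds; leave everything else untouched."""
    return [d["plant_id"] for d in detections
            if d["label"] == "weed" and d["confidence"] >= threshold]

print(spray_plan(detections))   # -> [1]; plant 3 is skipped until a closer pass can confirm it
```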

"Farms of the future must be sustainable and produce healthier foods," said Daniel McCann, CEO and founder of Precision AI. "Using artificial intelligence to target individual weeds is a quantum leap in efficiency and sustainability over today's practices of indiscriminate broadcast application of herbicide."

Herbicide spraying is one of the least efficient agricultural activities, with over 80 percent wasted on bare ground and another 15 percent falling on the crop. While competitors have focused on high-value, low-acreage crops, Precision AI's disruptive approach to drone swarming allows for application on large-acreage crops at a much lower cost than traditional large farming machinery. It holds the promise to reduce pesticide use by up to 95% while maintaining crop yield and saving farmers up to $52 per acre per growing season. "The cost savings are massive," said McCann. "And the affordable unit economics of drones makes the technology accessible to even the smallest farm."
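A quick back-of-envelope check of those figures, using an assumed broadcast herbicide cost of $55 per acre (that base cost is not a number from the announcement):

```python
# Back-of-envelope check of the waste figures cited above (assumed example cost).
broadcast_cost_per_acre = 55.0        # hypothetical blanket-spray herbicide cost, $/acre
share_on_bare_ground = 0.80           # cited: over 80 percent lands on bare ground
share_on_crop = 0.15                  # cited: another 15 percent falls on the crop
share_on_weeds = 1.0 - share_on_bare_ground - share_on_crop   # only ~5% hits the target

# If only the weed-covered fraction is sprayed, chemical use drops by roughly 95%.
targeted_cost_per_acre = broadcast_cost_per_acre * share_on_weeds
savings_per_acre = broadcast_cost_per_acre - targeted_cost_per_acre
print(f"chemical reduction: {1 - share_on_weeds:.0%}, saving ${savings_per_acre:.2f}/acre")
```

Under that assumed base cost, the arithmetic lands at roughly $52 of savings per acre, consistent with the figure the company cites.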

"We were immediately struck by Precision AI's unique combination of drone technology with precise chemical application. Not only can it minimize toxic runoff to protect waterways and downstream ecology, but also reduce farmers' operating costs and increase their revenue with a zero-chemical residue label," said Laurie Menoud, Partner at At One Ventures and member of the Board of Directors.

"BDC Capital is excited to back an ambitious entrepreneur with a great syndicate of investment partners. Precision AI's technology, by applying Artificial Intelligence technologies in the field, will reduce reliance on crop inputs and enable benefits to farmers, the broader food supply chain, and the environment. We are hopeful that Precision AI can be among the next generation of Agtech solutions that change the industry." said Joe Regan, Managing Partner, Industrial Innovation Venture Fund, BDC Capital, who will be joining the Board of Directors.

The platform also increases producer competitiveness in the global market with integrated food supply chain traceability and proof of sustainable farming practices.

"Autonomous, precision spraying is the future of modern agriculture, and Precision AI's best-in-class technology stack and deep management expertise have the potential to accelerate the development of this industry in exciting ways," said Kevin Lockett, partner at U.S.-based Fulcrum Global Capital. "With an increasingly informed consuming public demanding greater transparency into the food it eats, we are excited to partner with Precision AI and the other co-investors in commercializing multiple ways to reduce the use of traditional chemicals within our food system while increasing sustainability and farmer profitability."

"Precision AI's technology is revolutionizing the agriculture industry. Its innovative application of precision spraying not only prevents the overuse of herbicides but reduces operating costs for farmers and delivers improved and sustainable crop protection practices. Precision AI is a shining example of Canadian cleantech innovation and SDTC is proud to invest in its transformative technology." said Leah Lawrence, President and CEO of Sustainable Development Technology Canada.

About Precision AI
Founded in 2018, Precision AI is at the forefront of the autonomous farming revolution. Using computer vision and robotics, the company provides fully autonomous spraying and crop protection solutions for small to large farms and farm machinery manufacturers. www.precision.ai

SOURCE Precision AI

http://www.precision.ai

Go here to read the rest:

Precision AI raises $20 million to reduce the chemical footprint of agriculture - PRNewswire

Posted in Ai | Comments Off on Precision AI raises $20 million to reduce the chemical footprint of agriculture – PRNewswire

guardDog.ai Brings Solution for Securing Networks and Devices in Edge Territory to Latin America via Distribution Agreement with Clean Technologies IP…

Posted: at 8:10 pm

Ongoing Global Expansion of Access to Protection from Threats and Vulnerabilities Not Addressed by Traditional Network and Device Management Solutions

SALT LAKE CITY (BUSINESS WIRE) -- Guard Dog Solutions, Inc., dba guardDog.ai, a rapidly expanding leader in cyber security solutions for consumers and businesses, today announced a continued step in its growth through a distribution agreement with Latin America-based Clean Technologies IP LLC.

Under the terms of the agreement, Clean Technologies will lead Latin American distribution for guardDog.ai. guardDog PCS (Protective Cloud Services) and guardDog Fido are a cloud-based software service with a companion network security device. This easy-to-install cyber security solution offers painless threat detection, automated countermeasures, and assessments of the vulnerabilities it finds on your network and attached devices.

In Wi-Fi and wired networks, guardDog.ai protects and warns against threats outside the perimeter of the network, or on attached devices, that other solutions often don't see, in an area the company calls "edge territory." Devices of every kind are inherently vulnerable to the networks they join. guardDog.ai employs patent-pending artificial intelligence to recognize, expose, and help prevent cyber security threats before they become a problem.
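The release does not describe how guardDog.ai's AI actually works, so purely as an illustration of one basic check in this space (flagging unfamiliar devices that join a network), here is a minimal sketch with invented MAC addresses and a hand-maintained trusted inventory:

```python
# Minimal illustration, not guardDog.ai's method: flag devices on the local
# network that have never been seen before, as a crude "edge territory" check.
known_devices = {"aa:bb:cc:00:11:22", "aa:bb:cc:00:11:23"}   # assumed inventory of trusted MACs

observed_now = [
    {"mac": "aa:bb:cc:00:11:22", "ip": "192.168.1.10"},
    {"mac": "de:ad:be:ef:00:01", "ip": "192.168.1.77"},       # unknown device joins the Wi-Fi
]

def unknown_devices(observed, known):
    """Return devices whose hardware address is not in the trusted inventory."""
    return [d for d in observed if d["mac"] not in known]

for device in unknown_devices(observed_now, known_devices):
    print(f"alert: unrecognized device {device['mac']} at {device['ip']}; review before trusting")
```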

The guardDog.ai solution is especially important in countries where government agencies, financial institutions, and large manufacturing corporations are heavily reliant on Wi-Fi and have limited access to wired infrastructure, as they are particularly vulnerable to cyber attacks. Likewise, businesses (and consumers) are struggling to manage the security risks that result from the explosion of workers in remote working environments, and a shortfall of the talent or tools to secure them. guardDog.ai addresses these challenges.

Said guardDog.ai CEO Peter Bookman, "Cyber threats have exploded globally, the security landscape has changed, and solutions haven't kept up. Covid-19 has accelerated trends like remote working, and with them the attack surface has vastly expanded. Clean Technologies understands our vision for changing the approach to the problem in order to get better results, and we look forward to working with them to bring our solution to Latin America."

Peter Zimeri, CEO of Clean Technologies IP LLC, stated, "We are very pleased to partner with guardDog to deliver effective cybersecurity solutions to Wi-Fi and wired networks throughout Latin America. As cybercrimes rise in government and financial institutions, we feel we are providing a tremendous value. We are protecting these networks from ransomware, phishing, identity theft, hacking, scamming, computer viruses and malware, botnets and DDoS attacks, all of which require new approaches and the protection guardDog delivers."

About guardDog.ai

Headquartered in Salt Lake City, Utah, guardDog.ai has developed a cloud-based software service and a companion device that work together to simplify network security. The solution exposes invisible threats on networks, and on the devices attached to them, with patented technology that addresses and prevents cybersecurity threats before they compromise network environments. Every business, government, healthcare institution, home consumer, and other organization is grappling to find security solutions that adapt to this changing world. guardDog.ai is pioneering new innovations designed to meet these challenges.

Safe Harbor Statement

This press release contains forward-looking statements within the meaning of Section 27A of the Securities Act of 1933, as amended, and Section 21E of the Exchange Act. Forward-looking statements are not a guarantee of future performance and results, and they will not be accurate indications of the times at, or by, which such performance will be achieved.

For more information visit guardDog.ai and explore edge territory analytics at Live Map.

#EdgeTerritory #Cybersecurity #ProtectiveCloudServices #BeyondVPN

Contacts

Sales Contact:

sales@guarddog.ai
833-248-2733

Press Contact:

Snapp Conner

Cheryl Conner

801-806-0150

info@snappconner.com

Read the original here:

guardDog.ai Brings Solution for Securing Networks and Devices in Edge Territory to Latin America via Distribution Agreement with Clean Technologies IP...

Posted in Ai | Comments Off on guardDog.ai Brings Solution for Securing Networks and Devices in Edge Territory to Latin America via Distribution Agreement with Clean Technologies IP…
