Daily Archives: June 20, 2020

The importance and opportunities of transatlantic cooperation on AI – Brookings Institution

Posted: June 20, 2020 at 10:41 am

Introduction

Artificial intelligence (AI) is a potentially transformational technology that will impact how people work and socialize and how economies grow. AI will also have wide-ranging international implications, from national security to international trade. In this submission, we address the significance of international cooperation as a vehicle for realizing the ambitious goals in the key areas of AI innovation and regulation set out in the European Commission's white paper on AI. We focus particularly on the EU relationship with the U.S., which, as both a major EU trading partner and a world leader in AI, is a logical partner for such cooperation.

The white paper speaks to the importance of international cooperation. Specifically, it observes that the EU "will continue to cooperate with like-minded countries, but also with global players, on AI, based on an approach based on EU rules and values." It also goes on to note that the Commission "is convinced that international cooperation on AI matters must be based on an approach that promotes the respect of fundamental rights, including human dignity, pluralism, inclusion, non-discrimination and protection of privacy and personal data" and that it "will strive to export its values across the world." The U.S. and the EU, as the world's leading economies with strong ties grounded in common values, provide a strong basis for AI governance that can work for both and serve as a model globally.

This submission is divided into two parts. The first outlines why transatlantic cooperation on AI is important. The second identifies three broad areas for transatlantic cooperation on AI: innovation, regulation, and standards, including with respect to data.

Download the paper

More:

The importance and opportunities of transatlantic cooperation on AI - Brookings Institution


AI bias and privacy issues require more than clever tech – ComputerWeekly.com

Posted: at 10:41 am

Algorithmic bias, lack of artificial intelligence (AI) explainability and failure to seek meaningful consent for personal data collection and sharing are among the biggest barriers facing AI, according to analysis from the Centre for Data Ethics and Innovation (CDEI).

The CDEI's AI Barometer analysis was based on workshops and scoring exercises involving 100 experts. The study assessed opportunities, risks and governance challenges associated with AI and data use across five key UK sectors.

Speaking at the launch of the report, Michael Birtwistle, AI Barometer lead at the CDEI, said: "AI and data use have some very promising opportunities, but not all are equal. Some will be harder to achieve but have high benefits, such as realising decarbonisation and understanding public health risk or automatic decision support to reduce bias."

Birtwistle said the CDEI analysis showed that what these application areas have in common is complex data flows about people that affect them directly. "We are unlikely to achieve the biggest benefits without overcoming the barriers," he added.

Roger Taylor, chair of the CDEI, said: "AI and data-driven technology has the potential to address the biggest societal challenges of our time, from climate change to caring for an ageing society. However, the responsible adoption of technology is stymied by several barriers, among them low data quality and governance challenges, which undermine public trust in the institutions that they depend on.

"As we have seen in the response to Covid-19, confidence that government, public bodies and private companies can be trusted to use data for our benefit is essential if we are to maximise the benefits of these technologies. Now is the time for these barriers to be addressed, with a coordinated national response, so that we can pave the way for responsible innovation."

The report found that the use of biased algorithmic tools, trained on biased data for example, entrenches systematic discrimination against certain groups, as with reoffending risk scoring in the criminal justice system.
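To make that risk concrete, here is a minimal sketch, not taken from the CDEI report, of how an audit for this kind of bias might look in practice. It compares false-positive rates of a hypothetical reoffending risk score across two groups; the group names, data and threshold are all invented for illustration.

```python
# Illustrative sketch only: auditing a hypothetical risk-scoring model for
# group-level bias by comparing false-positive rates across groups.
import numpy as np

def false_positive_rate(y_true, y_score, threshold):
    """Share of genuinely low-risk people (label 0) flagged as high risk."""
    flagged = y_score >= threshold
    negatives = y_true == 0
    return flagged[negatives].mean()

rng = np.random.default_rng(0)
# Invented labels (0 = did not reoffend) and model scores for two groups.
groups = {
    "group_a": (rng.integers(0, 2, 1000), rng.uniform(0.0, 1.0, 1000)),
    "group_b": (rng.integers(0, 2, 1000), rng.uniform(0.1, 1.0, 1000)),
}

for name, (y_true, y_score) in groups.items():
    fpr = false_positive_rate(y_true, y_score, threshold=0.7)
    print(f"{name}: false-positive rate = {fpr:.3f}")
# A persistent gap between the two rates is one simple signal of the
# systematic discrimination the report describes.
```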

During a virtual panel discussion at the launch of the AI Barometer, Areeq Chowdhury, founder of WebRoots Democracy, discussed how technology inadvertently amplifies systemic discrimination. For instance, while there is a huge public debate about the accuracy of facial recognition systems in identifying people from black and Asian minorities, the ongoing racial tension in the US has shown that the problem is wider than the technology itself.

According to Chowdhury, such systemic discrimination builds up from a collection of policies over a period of time.

The experts who took part in the CDEI analysis raised concerns about the lack of clarity over where oversight responsibility lies. "Despite AI and data being commonly used within and across sectors, it is often unclear who has formal ownership of regulating its effects," said the CDEI in the report.

Cathryn Ross, head of the Regulatory Horizon Council, who also took part in the panel discussion, said: "A biting constraint on the take-up of technology is public trust and legitimacy. Regulations can help to build public trust to enable tech innovation."

Mirroring her remarks, fellow panellist Annemarie Naylor, director of policy and strategy at Future Care Capital, said: "Transparency has never been so important."

The AI Barometer also reported that the experts the CDEI spoke to were concerned about low data quality, availability and infrastructure. It said: "The use of poor quality or unrepresentative data in the training of algorithms can lead to faulty or biased systems (eg diagnostic algorithms that are ineffective in identifying diseases among minority groups).

"Equally, the concentration of market power over data, the unwillingness or inability to share data (eg due to non-interoperable systems), and the difficulty of transitioning data from legacy and non-digital systems to modern applications can all stymie innovation."

The CDEI noted that there is often disagreement among the public about how and where AI and data-driven technology should be deployed. Innovations can pose trade-offs such as between security and privacy, and between safety and free speech, which take time to work through.

However, the lockdown has shown that people are prepared to make radical changes very quickly if there are societal benefits. This has implications for data privacy policies.

The challenge for regulators is that existing data regulations are often sector-specific. In Ross's experience, technological innovation with AI cuts across different industry sectors. She said a fundamentally different approach, one that coordinates regulation across sectors, was needed.

Discussing what the coronavirus has taught policy-makers and regulators about people's attitudes to data, Ross said: "Society is prepared to take more risk for a bigger benefit, such as saving lives or reducing lockdown measures."

Read the rest here:

AI bias and privacy issues require more than clever tech - ComputerWeekly.com


Is FaceApp Just a Gender Swapping Gimmick or an AI and Privacy Nightmare in the Making? – Wccftech

Posted: at 10:41 am

When FaceApp initially launched back in 2017, it took the world by storm because of its capabilities. Granted, we had seen several apps before that could make you look old or young, but the accuracy and precision were not there. FaceApp, however, used artificial intelligence to do it, and the results were mind-boggling. Even at launch, many security researchers raised concerns over the consequences of using the app: after all, you are uploading your pictures for it to run through its AI algorithm. But people continued using it anyway.

Almost three years later, the app has exploded all over Twitter, Instagram, and Facebook again. This time, people are using the gender swap feature, which uses AI to change a person's gender and present them with a picture that is very convincing and, at the same time, quite scary.


Now, there have been several apps like this in the past. Even Snapchat introduced a feature that would swap your gender. But the implementation here is different since it uses artificial intelligence, the very thing many people fear. And it is not just the artificial intelligence that we should be wary of; it is the privacy policy of the app. If you head over to the app's recently updated privacy policy, only a couple of reassuring lines are highlighted.

The clever thing here is that when you visit the page, only those lines are highlighted, which is more than enough to convince any user that the app is safe. However, if you take a minute and read beyond them, you start becoming wary of just what is being stored and used. True, some would say they are not worried about the app or what it does with their photos, but keep in mind that this app has over 100 million downloads on the Google Play Store alone. One can only imagine how many underage individuals are using it to swap their gender and potentially putting their pictures at risk.

Now, if you believe that simply deleting the app will get rid of the photos FaceApp has taken or used, that is not the case. To get your pictures removed, you actually have to put in a request: go to Settings > Support > Report a bug with the word "privacy" in the subject line, and then write a formal request. The process is convoluted enough that most people will never go through with it.

To confirm just how convincing, or borderline creepy, this app can be, I asked a few of my friends to provide their pictures. It was easy for me to tell the difference because I know them, but an unsuspecting eye might not find it so simple.

And privacy is just one concern people have raised. On the other hand, we have a shockingly powerful AI in place, which could very well be learning patterns to build a much stronger facial recognition model.


In all honesty, the results are shocking. What is even more shocking is the amount of information we are knowingly handing away to an app just for the sake of shock value. Whether this app will have severe consequences is still yet to be seen. But as a word of warning, keep in mind that the FBI did issue a warning about the safety of the app back in December 2019.

Calling FaceApp an imminent threat to privacy or an AI nightmare would be stretching it a bit too far. At the same time, we have to keep in mind that in a world where our privacy is among our most important assets, some questionable practices can easily take place if things go rogue. The more we protect our privacy, the better off we are in the long run. Currently, you can download FaceApp on both iOS and Android for your amusement.

Link:

Is FaceApp Just a Gender Swapping Gimmick or an AI and Privacy Nightmare in the Making? - Wccftech


Where AI Meets Big Data, Innovation is Assured – Analytics Insight

Posted: at 10:41 am

Image Credit: NewtonX

When you think about the fourth industrial revolution, two technologies quickly spring to mind: Big Data and Artificial Intelligence. A lot has been said and done in this regard, and the combination of the two has driven numerous industries towards success. While data, or big data, is considered the lifeblood of modern businesses, AI is the heart that pumps life into it.

More technically, big data is at the core of every business, and AI is the technology essential for harnessing its true value and extracting real meaning from it.

From Machine Learning to Computer Vision to Natural Language Processing, every AI subset plays a major role in deriving something meaningful from merely voluminous data.

According to a NewVantage survey report, the percentage of firms investing more than US$50 million is up to 64.8% in 2020 from just 39.7% in 2018, with a total of 98.8% of firms investing in Big Data and AI initiatives. However, the pace of investment is leveling off, as only 51.9% of firms are accelerating their rate of investment, in stark contrast to the 91.6% who were accelerating their pace of investment in 2019. This includes some of the biggest companies, including Google, JPMorgan Chase, GlaxoSmithKline, and Bank of America, among others.

The rising stars and the tech giants all have developed mastery at the intersection where big data meets AI.

This convergence of big data and Artificial Intelligence is what the MIT Sloan Management Review called "the single most important development that is shaping the future of how firms drive business value from their data and analytics capabilities." These organizations understand how to combine data savvy and strong AI capabilities into strongly differentiated solutions with massive market value. Here are a few ways data and AI empower each other:

Big data, the massive data collections that we're all contributing to every day, is only getting bigger. It's estimated that by 2020, every person on earth will generate 1.7 MB of data every second, according to DOMO. Within that data, if we know how to unlock it, lies the potential to build amazing new businesses and solve some of the world's greatest challenges.

Data is the fuel that powers AI, and large data sets make it possible for machine learning applications (machine learning is a branch of AI) to learn independently and rapidly. The abundance of data we collect supplies our AIs with the examples they need to identify differences, increase their pattern recognition capabilities, and see the fine details within the patterns.

AI enables us to make sense of massive data sets, as well as unstructured data that doesn't fit neatly into database rows and columns. AI is helping organizations create new insights from data that was formerly locked away in emails, presentations, videos, and images.

Databases are becoming increasingly versatile and powerful. In addition to traditional relational databases, we now have powerful graph databases that are more capable of connecting data points and uncovering relationships, as well as databases that specialize in document management.

To understand further, let's look at some of the most common yet revolutionary applications of AI and Big Data driving business innovation.

With so many options at their fingertips, today's consumers are among the most fickle the world of business has ever witnessed. Deep learning, a subset of AI, is helping businesses predict consumer behavior by recognizing voice and search patterns.

Similarly, Big Data, in conjunction with predictive analytics and learning algorithms, comes up with offers for customers even before they realize the need for them. A fantastic example of this is Starbucks personalizing the customer experience: through its app, it uses Big Data and AI to recommend different drinks based on the weather, the customer's location, and many other features.
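As a rough illustration of how such context-aware recommendations can work, here is a toy sketch; the menu items, features and weights are invented for the example, and this is not Starbucks' actual system.

```python
# Toy context-aware drink recommender: each menu item carries feature
# weights, and the current context (weather, time of day, signals derived
# from location) scores the items. All values are invented for illustration.
MENU = {
    "iced cold brew": {"hot_weather": 1.0, "morning": 0.6},
    "hot latte":      {"hot_weather": 0.1, "morning": 0.9},
    "chai tea":       {"hot_weather": 0.3, "morning": 0.5},
}

def recommend(context):
    """context: dict mapping feature name -> strength, e.g. from a weather feed."""
    def score(features):
        return sum(context.get(f, 0.0) * w for f, w in features.items())
    return max(MENU, key=lambda item: score(MENU[item]))

# A hot afternoon: the hot-weather signal dominates the scoring.
print(recommend({"hot_weather": 1.0, "morning": 0.2}))  # -> iced cold brew
```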

Digital marketing is the byword of any company worth its salt, and along the same lines are SEO, lead generation, and conversion. Big Data and AI work in combination to provide better insights to companies and narrow in on the target consumers who are critical to digital marketing.

By making use of the latest analytic tools, businesses are also able to save a lot of money and achieve better conversion rates. Netflix, the streaming giant, incorporates AI into its digital marketing, which has dramatically raised its subscription numbers. This translates directly into increased revenue.

Not many of us can imagine starting the day without the help of virtual assistants or VAs. The most notable VAs are Siri and Alexa. The same is true for many businesses all around the globe. AI has helped decipher or at least make sense of Big Data, which makes the VAs more intelligent.

The reach of Big Data and AI can also be seen in self-driving cars, Tesla's among them. But of course, this is no secret, as the company's CEO has been openly propagating the immense ways in which artificial intelligence will change the very course of history.


Go here to see the original:

Where AI Meets Big Data, Innovation is Assured - Analytics Insight


Amazon's AI-powered distance assistants will warn workers when they get too close – The Verge

Posted: at 10:41 am

Amazon, which is currently being sued for allegedly failing to protect workers from COVID-19, has unveiled a new AI tool it says will help employees follow social distancing rules.

The company's Distance Assistant combines a TV screen, depth sensors, and an AI-enabled camera to track employees' movements and give them feedback in real time. When workers come closer than six feet to one another, circles around their feet flash red on the TV, indicating that they should move a safe distance apart. The devices are self-contained, meaning they can be deployed quickly where needed and moved about.
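Amazon has not published the Distance Assistant's code, but the core check it describes, flagging people who stand too close, reduces to pairwise distances between tracked positions. Here is a minimal sketch under that assumption, with all coordinates invented:

```python
# Minimal sketch of a social-distance check: given floor positions (in feet)
# estimated from a depth camera, flag any pair standing closer than six feet.
from itertools import combinations
import math

SAFE_DISTANCE_FT = 6.0

def flag_violations(positions):
    """positions: dict of person_id -> (x, y) floor coordinates in feet."""
    violations = []
    for (id_a, pos_a), (id_b, pos_b) in combinations(positions.items(), 2):
        if math.dist(pos_a, pos_b) < SAFE_DISTANCE_FT:
            violations.append((id_a, id_b))
    return violations

# Hypothetical frame with three tracked workers on the warehouse floor.
frame = {"worker_1": (0.0, 0.0), "worker_2": (4.0, 2.0), "worker_3": (15.0, 3.0)}
print(flag_violations(frame))  # [('worker_1', 'worker_2')] -> flash red circles
```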

Amazon compares the system to radar speed checks, which give drivers instant feedback on their driving. The assistants have been tested at a handful of the company's buildings, said Brad Porter, vice president of Amazon Robotics, in a blog post, and the firm plans to roll out hundreds more to new locations in the coming weeks.

Importantly, Amazon also says it will be open-sourcing the technology, allowing other companies to quickly replicate and deploy these devices in a range of locations.

Amazon isn't the only company using machine learning in this way. A large number of firms offering AI video analytics and surveillance have created similar social-distancing tools since the coronavirus outbreak began. Some startups have also turned to physical solutions, like bracelets and pendants that use Bluetooth signals to sense proximity and then buzz or beep to remind workers when they break social distancing guidelines.
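Such Bluetooth wearables typically estimate distance from received signal strength (RSSI). A common approach is a log-distance path-loss model, sketched below with hypothetical calibration constants; real products tune these values per device.

```python
# Sketch of RSSI-based proximity sensing using a log-distance path-loss
# model. tx_power_dbm (the RSSI expected at 1 m) and the path-loss
# exponent are hypothetical calibration values.
def estimate_distance_m(rssi_dbm, tx_power_dbm=-59.0, path_loss_exp=2.0):
    """Rough distance in metres inferred from received signal strength."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

def should_buzz(rssi_dbm, limit_m=1.8):  # roughly six feet
    """Buzz the wearer when another tag appears closer than the limit."""
    return estimate_distance_m(rssi_dbm) < limit_m

print(should_buzz(-50.0))  # True: a strong signal implies a nearby tag
print(should_buzz(-75.0))  # False: weak signal, likely several metres away
```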

Although these solutions may be necessary for workers to return to busy facilities like warehouses, many privacy experts worry their introduction will normalize greater levels of surveillance. Many of these tools will produce detailed data on workers' movements throughout the day, allowing managers to hound employees in the name of productivity. Workers will also have no choice but to be tracked in this way if they want to keep their jobs.

Amazon's involvement in this sort of technology will raise suspicions, as the company is often criticized for the grueling working conditions in its facilities. In 2018, it even patented a wristband that would track workers' movements in real time, directing not just which task they should do next but also whether their hands are moving towards the wrong shelf or bin.

The company's description of the Distance Assistant as a standalone unit that only requires power suggests it's not storing any data about workers' movements, but we've contacted the company to confirm what information, if any, might be retained.

Link:

Amazon's AI-powered distance assistants will warn workers when they get too close - The Verge


AI Can Autocomplete Images in the Same Way It Does Text – Adweek

Posted: at 10:41 am

Just as the artificial intelligence in your iPhone can guess the next word you might type in a message, the same technology can predict the bottom half of an image simply from scanning the top half.

That was the finding of a new experiment from research group OpenAI, which trained a version of its hyper-sophisticated text generator, GPT-2, on millions of images to prove that it could generate coherent patterns of pixels in the same manner that it does sentences.

Researchers demonstrated the results by feeding the system the top halves of images like movie posters, sports photos and popular memes. The system was then able to generate a bottom half based on the patterns of pixels it saw. While the results didn't always match the original image, the output was usually photorealistic and fit seamlessly with the rest of the image.

ImageGPT's various autocomplete generations based on the top half of a real image on the far right.

OpenAI
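Conceptually, the completion works like text autocomplete: the image is flattened into a sequence of pixels, and the model samples one pixel at a time, each conditioned on all the pixels before it. The sketch below illustrates that sampling loop; the "model" here is a uniform stand-in rather than OpenAI's released code, and the real Image GPT operates on reduced-resolution images with a clustered color palette.

```python
# Conceptual sketch of autoregressive image completion in the Image GPT
# style: treat pixels as a sequence and sample the unknown bottom half.
import numpy as np

def complete_image(top_half, model, height=32, width=32):
    """Flatten the known top half in raster order, then sample the remaining
    pixels one at a time, each conditioned on everything generated so far."""
    sequence = list(top_half.flatten())
    total = height * width
    while len(sequence) < total:
        # model(sequence) is assumed to return a probability distribution
        # over the 256 possible values of the next pixel.
        probs = model(sequence)
        sequence.append(np.random.choice(256, p=probs))
    return np.array(sequence).reshape(height, width)

# Toy stand-in "model" that predicts a uniform distribution; the real
# system is a large transformer trained on millions of images.
uniform_model = lambda seq: np.full(256, 1 / 256)
top = np.zeros((16, 32), dtype=int)  # known top half of a 32x32 image
completed = complete_image(top, uniform_model)
print(completed.shape)  # (32, 32)
```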

The research could help scientists better understand the parallels between how machines understand and generate language versus visual data. While a machine learning model called a transformer has spurred a new research boom in the subfield of natural language processing AI in the past couple of years, the method has not proven as successful with tasks like image classification and generation, the researchers write in the introduction to their paper.

"Our work aims to understand and bridge this gap," they said.

AI image generation technology has also made strides in recent years, even spawning a flourishing experimental art scene, but the model of machine learning that has provided the foundation for that research is fundamentally different from the one OpenAI uses in this paper. Most image-producing AI relies on a model called a Generative Adversarial Network (GAN), which can produce varied imitations of a style of image once trained on a large dataset of similar visuals.

OpenAI's model, dubbed Image GPT, is based instead on a version of GPT-2, which can generate realistic-sounding copy from a text prompt. OpenAI made waves when it announced GPT-2 early last year and declined to release the full code all at once for fear that it would supercharge the mass production of fake news and spam. Instead, the model has more often been used for novelty applications, like a text-based adventure game, parody Twitter accounts and Adweek's own Super Bowl Bot.

OpenAI recently released an even larger form of that program called GPT-3, trained on a dataset more than 100 times bigger than the previous iteration, and plans to use it as the backbone of its first commercial product.

Excerpt from:

AI Can Autocomplete Images in the Same Way It Does Text - Adweek


OODA Video: The Technologies of AI Security and Ethics – OODA Loop

Posted: at 10:41 am

This video provides succinct context on technology companies fielding solutions to help address key concerns in the areas of AI security and AI ethics. We produced this presentation for OODA members based on our own continuous market research, which includes tracking VC investment and engaging with the community via AI-related events, as well as our own AI-centric due diligence engagements and cybersecurity assessments.


The latest OODA Network member benefit is a video-on-demand series that provides expert context from our analysts.

Current titles in our on-demand video library include:

We built each of these presentations with the busy decision-maker in mind. Each video provides succinct articulations of issues leading to recommended actions, all based on our research and expert assessments.

OODA Members can access this premium content at The OODA Member Video Library.

Excerpt from:

OODA Video: The Technologies of AI Security and Ethics - OODA Loop


The UK has only just begun to see the transformative potential of AI – Telegraph.co.uk

Posted: at 10:41 am

Over the last year, our attention has been focused on a series of issues that are of global importance. Last year, Extinction Rebellion pushed climate change to the top of the agenda. This year, COVID-19 has made effective public health monitoring a priority. In recent weeks, anger at systemic racial injustice has fuelled public protests.

While these are very different issues, they share something in common: they will not be addressed if we do not use data-driven technologies to understand, monitor and improve complex systems - whether that is the healthcare system, the justice system or the energy grid.

This week the CDEI published its AI Barometer, a major analysis of the key risks, opportunities, and governance challenges associated with AI and data use in the UK. The conclusion we came to, after engaging with 120 experts, is that there are significant barriers that will prevent the UK from using AI and data-driven technology to overcome the complex challenges it is facing.

The problem is often not with the technology itself, but with a failure to harness it. As identified in the CDEI's AI Barometer, game-changing opportunities related to the use of AI and data often have common characteristics which make them challenging to realise.

They require coordination across organisations, they involve the use of large-scale, complex data about people, and they affect decisions that have a direct and substantial impact on people's lives.

The good news is that, despite these challenges, we can break through the technological impasse. In order to realise the game-changing opportunities identified in the CDEI's AI Barometer, policymakers, industry and other decision-makers must attend to three areas.

Firstly, we must improve access to - and the sharing of - high quality data. AI which is trained and tested on poor quality or biased data may result in unfair and erroneous outcomes. The CDEI will soon be reporting on bias in algorithmic decision-making, including the difficulty organisations face in accessing data to measure and mitigate bias.

Equally, the concentration of data in the hands of a small number of companies, the unwillingness or inability to share data, and the difficulty of transitioning data from legacy and non-digital systems to modern applications, can all stymie innovation.

Secondly, we must improve the clarity and consistency of data governance. Guidance around AI and data use in the UK is often highly localised, with different police forces setting their own policies for facial recognition technology use, for example. The application of existing regulatory standards in an AI context can be unclear and can vary between and even within sectors. This can lead to confusion among both those deploying and overseeing technology.

Improved data governance will not only mean that the public can have confidence that effective regulatory mechanisms are in place, but that public servants, businesses and innovators have the clarity and confidence to use data in ways that are not just legal, but also responsible.

Thirdly, we need to improve transparency around the use of AI and data. Private firms and public sector organisations are not always transparent about how they are using these technologies or their governance mechanisms. This prevents scrutiny and accountability, which could otherwise spur responsible innovation.

All three areas need to be addressed if we are to foster public trust in the use of AI and data-driven technology by institutions, the importance of which has been demonstrated during the COVID-19 response.

Policymakers, industry and regulators need to work together - including through the UK Government's forthcoming National Data Strategy - to address these issues. Steps need to be taken to create a governance environment that enables responsible innovation and commands public trust. With a coordinated national response, we can begin to realise the transformative potential of AI and data-driven technology.

Roger Taylor is Chair of the Centre for Data Ethics and Innovation

Read the original:

The UK has only just begun to see the transformative potential of AI - Telegraph.co.uk


US Corporations are talking about bans for AI. Will the EU? – EURACTIV

Posted: at 10:41 am

The conversation on banning certain uses of artificial intelligence (AI) is back on the legislative agenda. The European Commission's public consultation closed recently, leaving the EU to grapple with what the public think about AI regulation, write Nani Jansen Reventlow and Sarah Chander.

Nani Jansen Reventlow is Director of the Digital Freedom Fund, working on strategic litigation to defend human rights in the digital environment. Sarah Chander is Senior Policy Adviser at European Digital Rights (EDRi), a network of 44 digital rights organisations in Europe.

Last week's announcements from tech giants IBM, Amazon and Microsoft that they would partially or temporarily ban the use of facial recognition by law enforcement were framed in the language of racial justice (in the case of IBM) and human rights (in the case of Microsoft).

The bans follow years of tireless campaigning by racial justice and human rights activists, and are a testament to the power of social movements. In the context of artificial intelligence more generally, human rights organisations are also making clear demands about upholding human rights, protecting marginalised communities, and, therefore, banning certain uses of AI. Will European policymakers follow the lead of big tech and heed these demands?

To fix or to ban?

So far, the EU policy debate has only partially addressed the harms resulting from AI. Industry has ensured that the words "innovation", "investment" and "profit" ring loudly and consistently.

What we are left with is described by some as "ethics-washing": a discourse of "trustworthy" or "responsible" AI, without clarity about how people and human rights will be protected by law. Addressing AI's impact on marginalised groups is primarily spoken of as the need to "fix" bias through system tweaks, better data and more diverse design teams.

But bias is a captivating diversion. The focus on fixing bias has swayed the regulatory conversation in favour of technical fixes to AI, rather than looking at how its deployment changes governance of public sector functions and essential services, leads to mass surveillance, and exacerbates racialised over-policing.

Focusing on bias locates AI's problems with flawed individuals who embed their prejudices into the systems they build. Whilst bias is real, this framing lends itself to the dangerous argument that deep, structural inequalities can simply be fixed. This is a red herring and skips a necessary first step: a democratic conversation about the uses of AI we find acceptable and the ones we don't. If we don't debate this, we allow a profit-motivated industry to decide the answer.

Instead, we need to proactively define limits for the use of AI. Truly protecting people and prioritising human rights requires a focus on what the Algorithmic Justice League terms "impermissible use". They argue that justice requires that we prevent AI from being used by those with power to increase their absolute level of control, particularly where it would automate long-standing patterns of injustice. European policymakers need to help set those boundaries.

Where do we draw the (red) line?

AI is being designed, deployed and promoted for myriad functions touching almost all areas of our lives. In policing, migration control, and social security, the harms are becoming evident, and sometimes the deployment of AI will literally be a matter of life and death. In other areas, automated systems will make life-altering decisions, determining the frequency of interactions with police, the allocation of healthcare services, whether we can access social benefits, whether we will be hired for a new job, or whether or not our visa application will be approved.

Thanks to the work of organizers and researchers, the discriminatory impact of AI in policing is now part of the political conversation, but AI has the potential to harm many other groups and communities in ways still largely obscured by the dominant narrative of AIs productive potential.

Many organisations are now asking, if a system has the potential to produce serious harm, why would we allow it? If a system is likely to structurally disadvantage marginalised groups, do we want it?

For example, European Digital Rights (EDRi) is calling for bans on uses of AI that are incompatible with human rights: in predictive policing, at the border, and in facial recognition and indiscriminate biometric processing in public places. The European Disability Forum echoes the call for a ban on biometric identification systems, warning that sensitive data about an individual's chronic illness or disability could be used to discriminate against them.

Meanwhile, pro-industry groups characterise increased oversight and precautionary principles as a drag on innovation and adoption of technology. Policymakers should not buy into this distraction. We cannot shy away from discussing the need for legal limits for AI and bans on impermissible uses which clearly violate human rights.

Yet this is only a first step: the beginning of a conversation. This conversation needs to be people-centered; the perspectives of individuals and communities whose rights are most likely to be violated by AI are needed to make sure the red lines around AI are drawn in the right places.

Here is the original post:

US Corporations are talking about bans for AI. Will the EU? - EURACTIV


‘Lost memories’: War crimes evidence threatened by AI moderation – Reuters

Posted: at 10:41 am

NEW YORK/AMMAN (Thomson Reuters Foundation) - From bombings and protests to the opening of a new health center, student journalist Baraa Razzouk has been documenting daily life in Idlib, Syria, for years, and posting the videos to his YouTube account.

But this month, the 21-year-old started getting automated emails from YouTube alerting him that his videos violated its policy, and that they would be deleted. As of this month, more than a dozen of his videos had been removed, he said.

"Documenting the (Syrian) protests in videos is really important. Also, documenting attacks by regime forces," he told the Thomson Reuters Foundation in a phone interview. "This is something I had documented for the world and now it's deleted."

YouTube, Facebook, and Twitter warned in March that videos and other content may be erroneously removed for policy violations, as the coronavirus pandemic forced them to empty offices and rely on automated takedown software.

But those AI-enabled tools risk confusing human rights and historical documentation like Razzouk's videos with problematic material like terrorist content - particularly in war-torn countries like Syria and Yemen, digital rights activists warned.

"AI is notoriously context-blind," said Jeff Deutch, a researcher for Syrian Archive, a nonprofit which archives video from conflict zones in the Middle East.

"It is often unable to gauge the historical, political or linguistic settings of posts ... human rights documentation and violent extremist proposals are too often indistinguishable," he said in a phone interview.

Erroneous takedowns threaten content like videos that are used as formal evidence of rights violations by international bodies such as the International Criminal Court and the United Nations, said Dia Kayyali of digital rights group Witness.

"It's a perfect storm," the tech and advocacy coordinator said.

After the Thomson Reuters Foundation flagged Razzouk's account to YouTube, a spokesman said the company had deleted the videos in error, although the removal was not appealed through their internal process. They have now restored 17 of Razzouk's videos.

"With the massive volume of videos on our site, sometimes we make the wrong call," the spokesman said in emailed comments. "When it's brought to our attention that a video has been removed mistakenly, we act quickly to reinstate it."

In recent years social media platforms have come under increased pressure from governments to quickly remove violent content and disinformation from their platforms - increasing their reliance on AI systems.

With the help of automated software, YouTube removes millions of videos a year, and Facebook deleted more than 1 billion accounts last year for violating rules like posting terrorist content.

Last year social media companies pledged to block extremist content following a terror attack livestreamed on Facebook, in which a gunman killed 51 people at two mosques in Christchurch, New Zealand.

Governments have followed suit, with French President Emmanuel Macron vowing to make France a leader in containing the spread of illicit content and false information on social media platforms.

But the country's top court this week rejected most of a draft law that would have compelled social media giants to remove any hateful content within 24 hours.

Companies like Facebook have also pledged to remove misinformation about the coronavirus outbreak that could contribute to imminent physical harm.

These pressures, combined with an increased reliance on AI during the pandemic, put human rights content in particular jeopardy, said Kayyali.

Social media firms typically do not disclose how frequently their AI tools mistakenly take down content.

So, the Syrian Archive group has been using its own data to approximate change over time in the rate of deletions of human rights documentation on crimes committed in Syria, which has been battered by nearly a decade of war.

The group flags accounts posting human rights content on social media platforms, and archives the posts on its servers. To approximate the rate of deletions they run a script pinging the original post each month to see if it has been removed.
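Based on that description, a simplified version of such a monitoring script might look like the sketch below; the URLs are placeholders, Syrian Archive's actual tooling is not public, and a real checker would also have to parse pages that return HTTP 200 but display a "video unavailable" notice.

```python
# Rough sketch of a takedown monitor: re-check archived post URLs and
# report what fraction no longer resolve. The URLs below are placeholders.
import requests

def takedown_rate(urls):
    """Return the fraction of previously archived URLs that appear removed."""
    removed = 0
    for url in urls:
        try:
            resp = requests.head(url, allow_redirects=True, timeout=10)
            if resp.status_code in (404, 410):  # page gone
                removed += 1
        except requests.RequestException:
            removed += 1  # unreachable counts as removed in this sketch
    return removed / len(urls)

archived = ["https://example.com/video1", "https://example.com/video2"]
print(f"takedown rate: {takedown_rate(archived):.0%}")
```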

"Our research suggests that since the beginning of the year, the rate of content takedowns of Syrian human rights documentation on YouTube roughly doubled (from 13% to 20%)," said Deutch, calling the increase unprecedented.

In May, Syrian Archive detected that more than 350,000 videos had disappeared from YouTube - up from 200,000 in May 2019 - including videos of aerial attacks, protests, and the destruction of civilians' homes in Syria.

Deutch said he had seen content takedowns in other war-torn countries in the region, including Yemen and Sudan. "Users in conflict zones are more vulnerable," he said.

Other groups, including Amnesty International and Witness, have warned of the trend elsewhere, including in sub-Saharan Africa.

Syrian Archive was not able to test for takedowns at Facebook, because outside researchers are restricted from the platform's application programming interface (API).

But earlier this month Syrians began using the hashtag "Facebook is fighting the Syrian revolution" to flag similar content takedowns on the platform.

Last month Yahya Daoud, a Syrian humanitarian worker with the White Helmets emergency response group, shared a post and a photo showing a woman who died in a 2012 massacre by the forces of Syrian President Bashar al-Assad in the Houla region.

By the end of the month, Daoud said, his account - which he had used since 2011 to document his life in Syria - was automatically deleted without explanation. "I was depending on Facebook to be an archive for me," he said.

"So many memories have been lost: the death of my friends, the day I became displaced, the death of my mother," he said, adding that he had unsuccessfully tried to appeal the decision through Facebook's automated complaints system.

Facebook did not respond to requests for comment.

Researchers say they are only able to detect a small slice of erroneous content takedowns.

"We don't know how many people are trying to speak and we aren't hearing them," said Alexa Koenig, director of the University of California Berkeley's Human Rights Center.

"These algorithms are grabbing the content before we even see it," said Koenig, whose center uses images and videos posted from conflict zones like Syria to document human rights abuses and build cases.

YouTube said that in the second quarter of 2019, 80% of videos flagged by its AI were deleted before anyone had seen them.

That concerns Koenig, who worries that the erasure of these videos could jeopardize ongoing investigations around the world.

In 2017 the International Criminal Court issued its first arrest warrant that rested primarily on social media evidence, after video emerged on Facebook of Libyan commander Mahmoud al-Werfalli.

The video purportedly showed him shooting dead 10 blindfolded prisoners at the site of a car bombing in Benghazi. He is still at large.

Koenig worries this kind of documentation is now under threat: "The danger is much higher than it was just a few months ago," she said.

"It's a sickening feeling, to know we aren't close to where we need to be in preserving this content."

Reporting by Avi Asher-Schapiro @AASchapiro in New York and Ban Barkawi @banbarkawi in Amman, Editing by Zoe Tabary. Please credit the Thomson Reuters Foundation, the charitable arm of Thomson Reuters, that covers the lives of people around the world who struggle to live freely or fairly. Visit http://news.trust.org

Originally posted here:

'Lost memories': War crimes evidence threatened by AI moderation - Reuters
