The Prometheus League
Breaking News and Updates
Category Archives: Ai
Is FaceApp Just a Gender Swapping Gimmick or an AI and Privacy Nightmare in the Making? – Wccftech
Posted: June 20, 2020 at 10:41 am
When FaceApp initially launched back in 2017, it took the world by storm because of its capabilities. Granted, we have seen several apps in the past that could make you look old or young, but the accuracy and precision were not there. FaceApp, however, used artificial intelligence to do that, and the results were mind-boggling. Even at launch, a number of security researchers raised concerns over the consequences of using this app. After all, you are uploading your pictures to an app for it to run through its AI algorithm. But people continued using it anyway.
After almost 3 years, the app has exploded all over Twitter, Instagram, and Facebook again. However, this time, people are using the gender swap feature that uses AI to change a person's gender and present them with a picture that is very convincing, and at the same time, quite scary.
Now, there have been several apps like this in the past. Even Snapchat introduced a feature that would swap your gender. But the implementation here is different, since it uses artificial intelligence, the very technology many people fear in the first place. And it is not just the artificial intelligence we should be wary of; it is the app's privacy policy. If you head over to the recently updated privacy policy, you will see a few lines of highlighted text.
Now, the clever thing is that when you visit the page, only those lines are highlighted, which is more than enough to convince any user that the app is safe. However, if you take a minute and read beyond those two lines, you start to become wary of just what is being stored and used. True, some would say they are not worried about what the app does with their photos, but keep in mind that this app has over 100 million downloads on the Google Play Store alone. One can only imagine how many underage users are swapping their genders and potentially putting their pictures at risk.
And if you believe that simply deleting the app will get rid of the photos FaceApp has taken or used, that is not the case. Getting your pictures removed is not as easy as it may sound: you actually have to put in a request. To do that, go to Settings > Support > Report a bug, put the word "privacy" in the subject line, and then write a formal request. The process is convoluted to the point that most people will never go through with it.
To confirm just how convincing, or borderline creepy, this app can be, I asked a few of my friends to provide their pictures. It was easy for me to tell the difference because I know them, but an unsuspecting eye might not fare so well.
And privacy is just one of the concerns people have raised. On the other hand, we have a shockingly powerful AI in place that could very well be learning patterns for a much stronger facial recognition system.
In all honesty, the results are shocking. What is even more shocking is the amount of information we are knowingly handing over to an app just for the sake of shock value. Whether this app will have severe consequences is still yet to be seen. But as a word of warning, keep in mind that the FBI issued a warning about the safety of the app back in December 2019.
Calling FaceApp an imminent threat to privacy or an AI nightmare would be stretching it a bit too far. At the same time, we have to keep in mind that in a world where our privacy is among our most important assets, some questionable practices can easily take place if things go wrong. The more we protect our privacy, the better off we are in the long run. Currently, you can download FaceApp on both iOS and Android for your amusement.
Posted in Ai
Comments Off on Is FaceApp Just a Gender Swapping Gimmick or an AI and Privacy Nightmare in the Making? – Wccftech
Where AI Meets Big Data, Innovation is Assured – Analytics Insight
Posted: at 10:41 am
Image Credit: NewtonX
When you think about the fourth industrial revolution, two technologies quickly come to mind: Big Data and Artificial Intelligence. A lot has been said and done in this regard, and the combination of the two has driven numerous industries toward success. While data, or big data, is considered the lifeblood of modern businesses, AI is the heart that pumps life into it.
Put more technically: big data sits at the core of every business, and AI is essential for harnessing its true value and extracting real meaning from the data.
From Machine Learning to Computer Vision to Natural Language Processing, all of AI's subfields play a role in extracting something meaningful from otherwise overwhelming volumes of data.
According to a NewVantage Partners survey report, the percentage of firms investing greater than US$50 million is up to 64.8% in 2020 from just 39.7% in 2018, with a total of 98.8% of firms investing in Big Data and AI initiatives. However, the pace of investment is leveling off: only 51.9% of firms are accelerating their rate of investment, in stark contrast to the 91.6% who were accelerating their pace of investment in 2019. This includes some of the biggest companies, including Google, JPMorgan Chase, GlaxoSmithKline, and Bank of America, among others.
The rising stars and the tech giants all have developed mastery at the intersection where big data meets AI.
This convergence of big data and Artificial Intelligence is what the MIT Sloan Management Review called "the single most important development that is shaping the future of how firms drive business value from their data and analytics capabilities." These organizations understand how to combine data-savvy and strong AI capabilities into strongly differentiated solutions with massive market value. Here are a few ways data and AI empower each other:
Big data, the massive data collections that we're all contributing to every day, is only getting bigger. It's estimated that by 2020, every person on earth will generate 1.7 MB of data every second, according to DOMO. Within that data, if we know how to unlock it, lies the potential to build amazing new businesses and solve some of the world's greatest challenges.
Data is the fuel that powers AI, and large data sets make it possible for machine learning applications (machine learning is a branch of AI) to learn independently and rapidly. The abundance of data we collect supplies our AIs with the examples they need to identify differences, increase their pattern recognition capabilities, and see the fine details within the patterns.
AI enables us to make sense of massive data sets, as well as unstructured data that doesn't fit neatly into database rows and columns. AI is helping organizations create new insights from data that was formerly locked away in emails, presentations, videos, and images.
Databases are becoming increasingly versatile and powerful. In addition to traditional relational databases, we now have powerful graph databases that are more capable of connecting data points and uncovering relationships, as well as databases that specialize in document management.
To understand further, let's look at some of the most common yet revolutionary applications of AI and Big Data driving business innovation.
With so many options at their fingertips, today's consumers are among the most fickle the world of business has ever witnessed. Deep learning, a subset of AI, is helping businesses predict consumer behavior by recognizing voice and search patterns.
Similarly, Big Data, in conjunction with predictive analytics and learning algorithms, comes up with offers for customers even before they realize they need them. A fantastic example is how Starbucks personalizes the customer experience: through its app, it uses Big Data and AI to recommend different drinks based on the weather, the customer's location, and many other factors.
Digital marketing is a byword for any company worth its salt, and along the same lines sit SEO, lead generation, and conversion. Big Data and AI work in combination to give companies better insights and to narrow in on the target consumers who are critical to digital marketing.
By making use of the latest analytic tools, businesses are also able to save a lot of money and achieve better conversion rates. Netflix, the streaming giant, incorporates AI into its digital marketing, which has exponentially raised its subscriptions. This translates directly into increased revenue.
Not many of us can imagine starting the day without the help of virtual assistants, or VAs, the most notable being Siri and Alexa. The same is true for many businesses around the globe. AI has helped decipher, or at least make sense of, Big Data, which makes these VAs more intelligent.
The reach of Big Data and AI can also be seen in self-driving cars, Tesla's among them. That is no secret: the company's CEO has openly proclaimed the immense ways in which artificial intelligence will change the very course of history.
Smriti is a Content Analyst at Analytics Insight. She writes Tech/Business articles for Analytics Insight. Her creative work can be confirmed @analyticsinsight.net. She adores crushing over books, crafts, creative works and people, movies and music from eternity!!
Amazons AI-powered distance assistants will warn workers when they get too close – The Verge
Posted: at 10:41 am
Amazon, which is currently being sued for allegedly failing to protect workers from COVID-19, has unveiled a new AI tool it says will help employees follow social distancing rules.
The company's Distance Assistant combines a TV screen, depth sensors, and an AI-enabled camera to track employees' movements and give them feedback in real time. When workers come closer than six feet to one another, circles around their feet flash red on the TV, indicating that they should move a safe distance apart. The devices are self-contained, meaning they can be deployed quickly where needed and moved about.
Amazon compares the system to radar speed checks, which give drivers instant feedback on their driving. The assistants have been tested at a handful of the company's buildings, said Brad Porter, vice president of Amazon Robotics, in a blog post, and the firm plans to roll out hundreds more to new locations in the coming weeks.
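Amazon has not published the internals of the Distance Assistant, but the core check it describes, flagging any pair of people estimated to be closer than six feet apart, can be sketched in a few lines. Everything here (the function name, the coordinate format) is an illustrative assumption, not Amazon's code:

```python
from itertools import combinations
import math

SAFE_DISTANCE_FT = 6.0  # the six-foot threshold described in the article

def too_close_pairs(positions):
    """Return index pairs of workers closer than the safe distance.

    `positions` holds (x, y) floor coordinates in feet, as a depth
    camera might estimate them for each person in view.
    """
    flagged = []
    # Check every unordered pair of detected people exactly once.
    for (i, a), (j, b) in combinations(enumerate(positions), 2):
        if math.dist(a, b) < SAFE_DISTANCE_FT:
            flagged.append((i, j))
    return flagged

# Workers at (0, 0) and (4, 3) are 5 ft apart; the third is far away.
print(too_close_pairs([(0.0, 0.0), (4.0, 3.0), (20.0, 20.0)]))  # [(0, 1)]
```

In a real deployment the hard part is upstream of this check: the depth sensors and camera must first turn video frames into reliable floor positions for each person.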
Importantly, Amazon also says it will be open-sourcing the technology, allowing other companies to quickly replicate and deploy these devices in a range of locations.
Amazon isn't the only company using machine learning in this way. A large number of firms offering AI video analytics and surveillance have created similar social-distancing tools since the coronavirus outbreak began. Some startups have also turned to physical solutions, like bracelets and pendants that use Bluetooth signals to sense proximity and then buzz or beep to remind workers when they break social distancing guidelines.
Although these solutions will be necessary for workers to return to busy facilities like warehouses, many privacy experts worry their introduction will normalize greater levels of surveillance. Many of these solutions will produce detailed data on workers' movements throughout the day, allowing managers to hound employees in the name of productivity. Workers will also have no choice but to be tracked in this way if they want to keep their jobs.
Amazon's involvement in this sort of technology will raise suspicions, as the company is often criticized for the grueling working conditions in its facilities. In 2018, it even patented a wristband that would track workers' movements in real time, directing not just which task they should do next but even whether their hands are moving toward the wrong shelf or bin.
The company's description of the Distance Assistant as a standalone unit that only requires power suggests it's not storing any data about workers' movements, but we've contacted the company to confirm what information, if any, might be retained.
AI Can Autocomplete Images in the Same Way It Does Text – Adweek
Posted: at 10:41 am
Just as the artificial intelligence in your iPhone can guess the next word you might type in a message, the same technology can predict the bottom half of an image simply from scanning the top half.
That was the finding of a new experiment from research group OpenAI, which trained a version of its hyper-sophisticated text generator, GPT-2, on millions of images to prove that it could generate coherent patterns of pixels in the same manner that it does sentences.
Researchers demonstrated the results by feeding the system the top halves of images like movie posters, sports photos and popular memes. The system was then able to generate a bottom half based on the patterns of pixels it saw. While the results didn't always match the original image, the output was usually photorealistic and fit seamlessly with the rest of the image.
Image: Image GPT's various autocomplete generations based on the top half of a real image (shown on the far right). Credit: OpenAI.
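To make the idea concrete: an autoregressive model treats the image as one long sequence of pixels in raster order and repeatedly predicts the next pixel from everything generated so far. The loop below is a toy illustration of that process, with a trivial stand-in predictor in place of a trained transformer; all names here are hypothetical, not OpenAI's API:

```python
def complete_bottom_half(top_rows, height, predict):
    """Autoregressively extend an image row by row.

    `top_rows` is a list of rows (lists of pixel values); `predict`
    maps the sequence generated so far to the next pixel value,
    standing in for the trained model's next-token prediction.
    """
    rows = [row[:] for row in top_rows]
    width = len(top_rows[0])
    seq = [p for row in top_rows for p in row]  # flatten in raster order
    while len(rows) < height:
        new_row = []
        for _ in range(width):
            p = predict(seq)   # next pixel depends on all previous pixels
            seq.append(p)
            new_row.append(p)
        rows.append(new_row)
    return rows

# A trivial "model": repeat the pixel one row above (width is 3 here,
# so the pixel directly above is 3 positions back in the sequence).
top = [[1, 2, 3], [4, 5, 6]]
done = complete_bottom_half(top, 4, predict=lambda s: s[-3])
print(done[2], done[3])  # [4, 5, 6] [4, 5, 6]
```

The real Image GPT replaces the lambda with a transformer that outputs a probability distribution over possible next pixel values, which is what lets it produce varied, photorealistic completions rather than simple repetition.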
The research could help scientists better understand the parallels between how machines understand and generate language versus visual data. While a model of machine learning called a transformer has spurred a new research boom in the subfield of natural language processing AI in the past couple of years, the method has not proven as successful with tasks like image classification and generation, the researchers write in the introduction to their paper.
"Our work aims to understand and bridge this gap," they said.
AI image generation technology has also made strides in recent years, even spawning a flourishing experimental art scene, but the model of machine learning that has provided the foundation for that research is fundamentally different from the one OpenAI uses in this paper. Most image-producing AI relies on a model called a Generative Adversarial Network (GAN), which can produce varied imitations of a style of image once trained on a large dataset of similar visuals.
OpenAI's model, dubbed Image GPT, is instead based on a version of GPT-2, which can generate realistic-sounding copy from a text prompt. OpenAI made waves when it announced GPT-2 early last year and declined to release the code all at once for fear that it would supercharge fake news and spam mass production. Instead, the model has more often been used for novelty applications, like a text-based adventure game, parody Twitter accounts and Adweek's own Super Bowl Bot.
OpenAI recently released an even larger form of that program called GPT-3, trained on a dataset more than 100 times bigger than the previous iteration, and plans to use it as the backbone of its first commercial product.
OODA Video: The Technologies of AI Security and Ethics – OODA Loop
Posted: at 10:41 am
This video provides succinct context on technology companies fielding solutions to help address key concerns in the area of AI security and AI ethics. We produced this presentation for OODA members based on our own continuous market research that includes tracking VC investment and engaging with the community via AI related events but also our own AI centric due diligence engagements and cybersecurity assessments.
The latest OODA Network member benefit is a video on demand series that provides expert context from our analysts.
Current titles in our on demand video library include:
We built each of these presentations with the busy decision-maker in mind. Each video provides succinct articulations of issues leading to recommended actions, all based on our research and expert assessments.
OODA Members can access this premium content at The OODA Member Video Library.
The UK has only just begun to see the transformative potential of AI – Telegraph.co.uk
Posted: at 10:41 am
Over the last year, our attention has been focused on a series of issues that are of global importance. Last year, Extinction Rebellion pushed climate change to the top of the agenda. This year, COVID-19 has made effective public health monitoring a priority. In recent weeks, anger at systemic racial injustice has fuelled public protests.
While these are very different issues, they share something in common: they will not be addressed if we do not use data-driven technologies to understand, monitor and improve complex systems - whether that is the healthcare system, the justice system or the energy grid.
This week the Centre for Data Ethics and Innovation (CDEI) published its AI Barometer, a major analysis of the key risks, opportunities, and governance challenges associated with AI and data use in the UK. The conclusion we came to, after engaging with 120 experts, is that there are significant barriers that will prevent the UK from using AI and data-driven technology to overcome the complex challenges it's facing.
The problem is often not with the technology itself, but a failure to harness it. As identified in the CDEI's AI Barometer, game-changing opportunities related to the use of AI and data often have common characteristics, which make them challenging to realise.
They require coordination across organisations, they involve the use of large-scale complex data about people, and they affect decisions that have a direct and substantial impact on people's lives.
The good news is that, despite these challenges, we can break through the technological impasse. In order to realise the game-changing opportunities identified in the CDEI's AI Barometer, policymakers, industry and other decision-makers must tend to three areas.
Firstly, we must improve access to - and the sharing of - high quality data. AI which is trained and tested on poor quality or biased data may result in unfair and erroneous outcomes. The CDEI will soon be reporting on bias in algorithmic decision-making, including the difficulty organisations face in accessing data to measure and mitigate bias.
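One concrete starting point for measuring bias is simply comparing favourable-outcome rates across demographic groups. The sketch below computes per-group selection rates and the ratio between the lowest and highest rate (sometimes called the disparate impact ratio); the function names and data format are illustrative assumptions, not taken from the CDEI's report:

```python
from collections import defaultdict

def selection_rates(decisions):
    """Per-group rate of favourable outcomes.

    `decisions` is a list of (group, outcome) pairs, where outcome is
    1 for a favourable decision and 0 otherwise.
    """
    totals = defaultdict(int)
    favourable = defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        favourable[group] += outcome
    return {g: favourable[g] / totals[g] for g in totals}

def disparate_impact(decisions):
    """Ratio of the lowest to highest group selection rate (1.0 = parity)."""
    rates = selection_rates(decisions).values()
    return min(rates) / max(rates)

# Hypothetical audit data: group "a" is favoured 3 times in 4,
# group "b" only once in 4.
data = [("a", 1), ("a", 1), ("a", 0), ("a", 1),
        ("b", 1), ("b", 0), ("b", 0), ("b", 0)]
print(selection_rates(data))               # {'a': 0.75, 'b': 0.25}
print(round(disparate_impact(data), 2))    # 0.33
```

Of course, such a check is only possible when organisations can access the group data in the first place, which is exactly the difficulty the CDEI highlights.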
Equally, the concentration of data in the hands of a small number of companies, the unwillingness or inability to share data, and the difficulty of transitioning data from legacy and non-digital systems to modern applications, can all stymie innovation.
Secondly, we must improve the clarity and consistency of data governance. Guidance around AI and data use in the UK is often highly localised, with different police forces setting their own policies for facial recognition technology use, for example. The application of existing regulatory standards in an AI context can be unclear and can vary between and even within sectors. This can lead to confusion among both those deploying and overseeing technology.
Improved data governance will not only mean that the public can have confidence that effective regulatory mechanisms are in place, but that public servants, businesses and innovators have the clarity and confidence to use data in ways that are not just legal, but also responsible.
Thirdly, we need to improve transparency around the use of AI and data. Private firms and public sector organisations are not always transparent about how they are using these technologies or their governance mechanisms. This prevents scrutiny and accountability, which could otherwise spur responsible innovation.
All three areas need to be addressed if we are to foster public trust in the use of AI and data-driven technology by institutions, the importance of which has been demonstrated during the COVID-19 response.
Policymakers, industry and regulators need to work together - including through the UK Governments forthcoming National Data Strategy - to address these issues. Steps need to be taken to create a governance environment that enables responsible innovation and commands public trust. With a coordinated national response, we can begin to realise the transformative potential of AI and data-driven technology.
Roger Taylor is Chair of the Centre for Data Ethics and Innovation
US Corporations are talking about bans for AI. Will the EU? – EURACTIV
Posted: at 10:41 am
The conversation on banning certain uses of artificial intelligence (AI) is back on the legislative agenda. The European Commission's public consultation closed recently, leaving the EU to grapple with what the public thinks about AI regulation, write Nani Jansen Reventlow and Sarah Chander.
Nani Jansen Reventlow is Director of the Digital Freedom Fund, working on strategic litigation to defend human rights in the digital environment. Sarah Chander is Senior Policy Adviser at European Digital Rights (EDRi), a network of 44 digital rights organisations in Europe.
Last week's announcements from tech giants IBM, Amazon and Microsoft that they would partially or temporarily ban the use of facial recognition by law enforcement were framed in the language of racial justice (in the case of IBM) and human rights (in the case of Microsoft).
The bans follow years of tireless campaigning by racial justice and human rights activists, and is a testament to the power of social movements. In the context of artificial intelligence more generally, human rights organisations are also making clear demands about upholding human rights, protecting marginalised communities, and, therefore, banning certain uses of AI. Will European policymakers follow the lead of big tech and heed these demands?
To fix or to ban?
So far, the EU policy debate has only partially addressed the harms resulting from AI. Industry has ensured that the words "innovation", "investment" and "profit" ring loudly and consistently.
What we are left with is described by some as "ethics-washing": a discourse of "trustworthy" or "responsible" AI, without clarity about how people and human rights will be protected by law. Addressing AI's impact on marginalised groups is primarily spoken of as the need to "fix" bias through system tweaks, better data and more diverse design teams.
But bias is a captivating diversion. The focus on fixing bias has swayed the regulatory conversation in favour of technical fixes to AI, rather than looking at how its deployment changes governance of public sector functions and essential services, leads to mass surveillance, and exacerbates racialised over-policing.
Focusing on bias locates AI's problems with flawed individuals who embed their prejudices into the systems they build. Whilst bias is real, this framing lends itself to the dangerous argument that deep, structural inequalities can simply be "fixed". This is a red herring and skips a necessary first step: a democratic conversation about the uses of AI we find acceptable and the ones we don't. If we don't debate this, we allow a profit-motivated industry to decide the answer.
Instead, we need to proactively define limits for the use of AI. Truly protecting people and prioritising human rights requires a focus on what the Algorithmic Justice League terms "impermissible use". They argue that justice requires preventing AI from being used by those with power to increase their absolute level of control, particularly where it would automate long-standing patterns of injustice. European policymakers need to help set those boundaries.
Where do we draw the (red) line?
AI is being designed, deployed and promoted for myriad functions touching almost all areas of our lives. In policing, migration control, and social security, the harms are becoming evident, and sometimes the deployment of AI will literally be a matter of life and death. In other areas, automated systems will make life-altering decisions: determining the frequency of interactions with police, the allocation of health care services, whether we can access social benefits, whether we will be hired for a new job, or whether or not our visa application will be approved.
Thanks to the work of organizers and researchers, the discriminatory impact of AI in policing is now part of the political conversation, but AI has the potential to harm many other groups and communities in ways still largely obscured by the dominant narrative of AI's productive potential.
Many organisations are now asking: if a system has the potential to produce serious harm, why would we allow it? If a system is likely to structurally disadvantage marginalised groups, do we want it?
For example, European Digital Rights (EDRi) are calling to ban uses of AI which are incompatible with human rights: in predictive policing, at the border, and in facial recognition and indiscriminate biometric processing in public places. The European Disability Forum echoes the call for a ban on biometric identification systems, warning that sensitive data about an individual's chronic illness or disability could be used to discriminate against them.
Meanwhile, pro-industry groups characterise increased oversight and precautionary principles as a drag on innovation and the adoption of technology. Policymakers should not buy into this distraction. We cannot shy away from discussing the need for legal limits on AI and bans on impermissible uses which clearly violate human rights.
Yet this is only a first step: the beginning of a conversation. This conversation needs to be people-centered; the perspectives of individuals and communities whose rights are most likely to be violated by AI are needed to make sure the red lines around AI are drawn in the right places.
‘Lost memories’: War crimes evidence threatened by AI moderation – Reuters
Posted: at 10:41 am
NEW YORK/AMMAN (Thomson Reuters Foundation) - From bombings and protests to the opening of a new health center, student journalist Baraa Razzouk has been documenting daily life in Idlib, Syria, for years, and posting the videos to his YouTube account.
But this month, the 21-year-old started getting automated emails from YouTube telling him that some of his videos had been taken down.
"Documenting the (Syrian) protests in videos is really important. Also, documenting attacks by regime forces," he told the Thomson Reuters Foundation in a phone interview. "This is something I had documented for the world and now it's deleted."
YouTube, Facebook, and Twitter warned in March that videos and other content may be erroneously removed for policy violations, as the coronavirus pandemic forced them to empty offices and rely on automated takedown software.
But those AI-enabled tools risk confusing human rights and historical documentation like Razzouk's videos with problematic material like terrorist content - particularly in war-torn countries like Syria and Yemen, digital rights activists warned.
"AI is notoriously context-blind," said Jeff Deutch, a researcher for Syrian Archive, a nonprofit which archives video from conflict zones in the Middle East.
"It is often unable to gauge the historical, political or linguistic settings of posts ... human rights documentation and violent extremist proposals are too often indistinguishable," he said in a phone interview.
Erroneous takedowns threaten content like videos that are used as formal evidence of rights violations by international bodies such as the International Criminal Court and the United Nations, said Dia Kayyali of digital rights group Witness.
"It's a perfect storm," the tech and advocacy coordinator said.
After the Thomson Reuters Foundation flagged Razzouk's account to YouTube, a spokesman said the company had deleted the videos in error, even though the removal had not been appealed through its internal process. It has now restored 17 of Razzouk's videos.
"With the massive volume of videos on our site, sometimes we make the wrong call," the spokesman said in emailed comments. "When it's brought to our attention that a video has been removed mistakenly, we act quickly to reinstate it."
In recent years social media platforms have come under increased pressure from governments to quickly remove violent content and disinformation from their platforms - increasing their reliance on AI systems.
With the help of automated software, YouTube removes millions of videos a year, and Facebook deleted more than 1 billion accounts last year for violating rules like posting terrorist content.
Last year social media companies pledged to block extremist content after a gunman livestreamed on Facebook his killing of 51 people at two mosques in Christchurch, New Zealand.
Governments have followed suit, with French President Emmanuel Macron vowing to make France a leader in containing the spread of illicit content and false information on social media platforms.
But the country's top court this week rejected most of a draft law that would have compelled social media giants to remove any hateful content within 24 hours.
Companies like Facebook have also pledged to remove misinformation about the coronavirus outbreak that could contribute to imminent physical harm.
These pressures, combined with an increased reliance on AI during the pandemic, put human rights content in particular jeopardy, said Kayyali.
Social media firms typically do not disclose how frequently their AI tools mistakenly take down content.
The Syrian Archive group has therefore been using its own data to estimate how the rate of deletion of human rights documentation of crimes committed in Syria, which has been battered by nearly a decade of war, has changed over time.
The group flags accounts posting human rights content on social media platforms and archives the posts on its own servers. To estimate the deletion rate, it runs a script that pings each original post monthly to check whether it has been removed.
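The monthly check described above can be sketched as a small script. This is a hypothetical illustration, not Syrian Archive's actual code: the function names `is_still_available` and `takedown_rate` are invented, and a production monitor would also need to distinguish "removed for policy violation" pages from other failures.

```python
# A minimal sketch of the monitoring approach: archive the URL of each
# flagged post, then periodically re-request it and record whether the
# platform still serves it.
import urllib.error
import urllib.request


def is_still_available(url: str, timeout: float = 10.0) -> bool:
    """Return True if the post URL still resolves with an HTTP 2xx status."""
    try:
        req = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except (urllib.error.HTTPError, urllib.error.URLError):
        return False  # 404/410 or unreachable; a real monitor would retry


def takedown_rate(archived_urls, check=is_still_available):
    """Fraction of archived posts that no longer resolve."""
    if not archived_urls:
        return 0.0
    removed = sum(1 for url in archived_urls if not check(url))
    return removed / len(archived_urls)
```

One caveat that makes this only a sketch: platforms often answer a deleted-video URL with an HTTP 200 page that merely says the content is unavailable, so a real check has to inspect the response body rather than rely on status codes alone.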
"Our research suggests that since the beginning of the year, the rate of content takedowns of Syrian human rights documentation on YouTube roughly doubled (from 13% to 20%)," said Deutch, calling the increase unprecedented.
In May, Syrian Archive detected that more than 350,000 videos had disappeared from YouTube - up from 200,000 in May 2019 - including videos of aerial attacks, protests, and the destruction of civilians' homes in Syria.
Deutch said he had seen content takedowns in other war-torn countries in the region, including Yemen and Sudan. Users in conflict zones are more vulnerable, he said.
Other groups, including Amnesty International and Witness, have warned of the trend elsewhere, including in sub-Saharan Africa.
Syrian Archive was not able to test for takedowns on Facebook, because outside researchers are restricted from the platform's application programming interface (API).
But earlier this month Syrians began using the hashtag "Facebook is fighting the Syrian revolution" to flag similar content takedowns on the platform.
Last month Yahya Daoud, a Syrian humanitarian worker with the White Helmets emergency response group, shared a post and a photo showing a woman who died in a 2012 massacre by the forces of Syrian President Bashar al-Assad in the Houla region.
By the end of the month, Daoud said, his account - which he had used since 2011 to document his life in Syria - was automatically deleted without explanation. "I was depending on Facebook to be an archive for me," he said.
"So many memories have been lost: the death of my friends, the day I became displaced, the death of my mother," he said, adding that he had unsuccessfully tried to appeal the decision through Facebook's automated complaints system.
Facebook did not respond to requests for comment.
Researchers say they are only able to detect a small slice of erroneous content takedowns.
"We don't know how many people are trying to speak and we aren't hearing them," said Alexa Koenig, director of the University of California Berkeley's Human Rights Center.
"These algorithms are grabbing the content before we even see it," said Koenig, whose center uses images and videos posted from conflict zones like Syria to document human rights abuses and build cases.
YouTube said that 80% of videos flagged by its AI were deleted before anyone had seen them in the second quarter of 2019.
That concerns Koenig, who worries that the erasure of these videos could jeopardize ongoing investigations around the world.
In 2017 the International Criminal Court issued its first arrest warrant resting primarily on social media evidence, after video of Libyan commander Mahmoud al-Werfalli emerged on Facebook.
The video purportedly showed him shooting dead 10 blindfolded prisoners at the site of a car bombing in Benghazi. He is still at large.
Koenig worries this kind of documentation is now under threat: "The danger is much higher than it was just a few months ago," she said.
"It's a sickening feeling to know we aren't close to where we need to be in preserving this content."
Reporting by Avi Asher-Schapiro @AASchapiro in New York and Ban Barkawi @banbarkawi in Amman; editing by Zoe Tabary. Please credit the Thomson Reuters Foundation, the charitable arm of Thomson Reuters, which covers the lives of people around the world who struggle to live freely or fairly. Visit http://news.trust.org
Originally posted here:
'Lost memories': War crimes evidence threatened by AI moderation - Reuters
Tesla’s Head of AI Says The Firm Uses a Harder Approach to Self-Driving for Scalability Reasons – Interesting Engineering
Posted: at 10:41 am
Earlier this week, Tesla's head of Artificial Intelligence (AI), Andrej Karpathy, took part in a CVPR 2020 workshop on Scalability in Autonomous Driving, during which he discussed the firm's approach to self-driving. In the talk, he acknowledged that Tesla is taking a harder approach to autonomous driving, but one that is more likely to scale properly.
RELATED:NEW VIDEO SHOWS TESLA'S FULL SELF-DRIVING TECHNOLOGY AT WORK
The executive gave a presentation in which he shared two videos: one of Tesla's self-driving car making a turn and one of Waymo's doing the same. He explained that while both turns looked identical, the decision-making behind them was very different.
"Waymo and many others in the industry use high-definition maps. You have to first drive some car that pre-maps the environment, you have to have lidar with centimeter-level accuracy, and you are on rails. You know exactly how you are going to turn in an intersection, you know exactly which traffic lights are relevant to you, you know where they are positioned and everything. We do not make these assumptions. For us, every single intersection we come up to, we see it for the first time. Everything has to be solved, just like what a human would do in the same situation," said Karpathy.
Karpathy went on to say that Tesla is working on a scalable self-driving system deployable in millions of cars, which is why the firm uses a vision-based approach: it is easier to scale.
"Speaking of scalability, this is a much harder problem to solve, but when we do essentially solve this problem, there's a possibility to beam this down to, again, millions of cars on the road. Whereas building out these lidar maps on the scale that we operate in, with the sensing that it does require, would be extremely expensive. And you can't just build it, you have to maintain it, and the change detection of this is extremely difficult," added Karpathy.
AI in Healthcare Market projected a CAGR of 52.3% during the forecast period, 2020-2026 – 3rd Watch News
Posted: at 10:41 am
According to BlueWeave Consulting, the global AI in Healthcare market is estimated to reach US$ 37.9 billion by 2026, growing at a CAGR of 52.3% during the forecast period 2020-2026. Several factors are driving growth: the increasing need to reduce healthcare costs, the rising importance of big data in healthcare, increased acceptance of precision medicine, and declining hardware costs. The increasing applicability of AI-based software in medical care and growing venture capital investment can also be attributed to the surge in demand for this technology. For example, CarePredict, Inc. is using AI technology to track changes in behavioral patterns and activity to predict health issues early.
Request the report sample pages at: https://www.blueweaveconsulting.com/ai-in-healthcare-market-bwc19396/report-sample
An increasing number of cross-industry partnerships are expected to boost the healthcare sector's adoption of AI, which is further responsible for its lucrative growth rate. GNS Healthcare entered into a cross-industry partnership with Alliance and Amgen in September 2018 to conduct oncology clinical trials. The goal of the collaboration was to use clinical trial data and Artificial Intelligence (AI) to identify factors that improve treatment responses in patients with metastatic colorectal cancer (CRC).
AI adoption in healthcare is increasing, with a growing focus on improving the quality of patient care through artificial intelligence in areas such as virtual assistants and surgery. AI-based technologies such as clinical decision support systems and voice recognition software help streamline hospital workflows and optimize medical care, improving the patient experience. Incorporating AI into healthcare has advantages for both patients and healthcare providers. AI enables, for example, personalized treatment based on a patient's health conditions and past medical history. In addition, AI-based software can be used for continuous health monitoring, which can ensure prompt care and treatment and may ultimately shorten hospital stays. On the other hand, medical practitioners' unwillingness to adopt new technology, a drastic lack of predetermined and uniform regulatory guidelines, a shortage of curated healthcare data, and data privacy issues impede the market's potential to attain higher grounds.
AI-enabled bots are programs that patients can communicate with on a website or by telephone via a chat window. Applications include scheduling appointments; reviewing insurance coverage parameters; providing quick access to information on drug interactions and side effects; collecting up-to-date information on patient medications, healthcare staff, and recent procedures; designing special diet strategies for nutritionally limited patients; and contacting discharged patients to follow up on treatment plans. Such technologies are expected to lead the growth of hospital and inpatient care systems. Furthermore, the growing need for accurate and early diagnosis of chronic diseases and disorders further supports this market's growth. Nevertheless, end users' reluctance to adopt AI technologies, lack of trust, and the potential risks associated with AI in the healthcare sector somewhat restrict the growth of this market.
Patient-management applications are expected to grow significantly in the coming years, as successful patient management is one of the most important needs of hospital facilities. Several studies have shown how important patient participation is in improving health outcomes; a lack of such participation has contributed greatly to preventable deaths. Smart wearables also play a crucial role in transforming the current healthcare sector. Consumers are becoming more aware of wearables, and many now believe that wearing a smart device that monitors their vitals will lead to increased average life expectancy.
Request the report description pages at: https://www.blueweaveconsulting.com/ai-in-healthcare-market-bwc19396/
The artificial intelligence in healthcare market is fragmented, owing to the presence of large, mid-sized, and small companies, as well as many start-ups. The companies that hold the majority share of the market include NVIDIA, Intel, IBM, Microsoft, Google, Siemens Healthineers, General Electric (GE) Company, Medtronic, Amazon Web Services (AWS), Koninklijke Philips, Johnson & Johnson Services, Butterfly Network, Welltok, Inc., Micron Technology, and other prominent players.
About Us
BlueWeave Consulting is a one-stop solution for market intelligence on various products and services, online and offline. We offer worldwide market research reports, analyzing both qualitative and quantitative data to boost the performance of your business. BWC has built its reputation from scratch by delivering quality work and nurturing long-lasting relationships with its clients. We are a promising digital market intelligence company delivering unique solutions to help your business grow.
Contact Us:
[emailprotected]
https://www.blueweaveconsulting.com
Global Contact: +1 866 658 6826,+1 425 320 4776