Censorship on social media? It’s not what you think – CBS News

Watch the new CBS Reports documentary, "Speaking Frankly | Censorship," in the video player above.

Musician Joy Villa's red carpet dresses at the past three years' Grammy Awards were embellished with pro-Trump messages that cemented her as an outspoken darling of the conservative movement. With over 500,000 followers across Instagram, Facebook, YouTube, and Twitter, Villa refers to her social media community as her "Joy Tribe," and a few years ago she enlisted them to help wage a public battle against what she claimed was YouTube's attempt to censor her.

"I had released my 'Make America Great Again" music video on YouTube, and within a few hours it got taken down by YouTube," Villa told CBS Reports. "I took it to the rest of my social media. I told my fans: 'Hey listen, YouTube is censoring me. This is unfair censorship.'"

Villa saw it as part of a pattern of social media companies trying to shut down conservative voices, an accusation that many other like-minded users, including President Trump himself, have leveled against Facebook, YouTube, and Twitter in recent years.

But those who study the tech industry's practices say that deciding what content stays up, and what comes down, has nothing to do with "censorship."

"There is this problem in the United States that when we talk about free speech, we often misunderstand it," said Henry Fernandez, co-chair of Change the Terms, a coalition of organizations that work to reduce hate online.

"The First Amendment is very specific: It protects all of us as Americans from the government limiting our speech," he explained. "And so when people talk about, 'Well, if I get kicked off of Facebook, that's an attack on my free speech or on my First Amendment right' that's just not true. The companies have the ability to decide what speech they will allow. They're not the government."

A YouTube spokesperson said Villa's video wasn't flagged over something she said, but due to a privacy complaint. Villa disputed that, but once she blurred out the face of someone who didn't want to be seen in the video, YouTube put it back online, and her video remains visible on the platform today.

"At YouTube, we've always had policies that lay out what can and can't be posted. Our policies have no notion of political affiliation or party, and we enforce them consistently regardless of who the uploader is," said YouTube spokesperson Alex Joseph.

While Villa and others on the right have been vocal about their complaints, activists on the opposite side of the political spectrum say their online speech frequently ends up being quashed for reasons that have gotten far less attention.

Carolyn Wysinger, an activist who provided Facebook feedback and guidance about minority users' experience on the platform, told CBS Reports that implicit bias is a problem that permeates content moderation decisions at most social media platforms.

"In the community standards, white men are a protected class, the same as a black trans woman is. The community standards does not take into account the homophobia, and the violence, and how all those things intersect. It takes all of them as individual things that need to be protected," said Wysinger.

The artificial intelligence tools that automate the process of moderating and enforcing community standards on the sites don't recognize the intent or background of those doing the posting.

For instance, Wysinger said, "I have been flagged for using imagery of lynching. ... I have been flagged for violent content when showing images about racism and about transphobia."

According to the platforms' recent transparency reports, from April to June 2020, nearly 95% of comments flagged as hate speech on Facebook were detected by AI; and on YouTube 99.2% of comments removed for violating Community Standards were flagged by AI.

"That means you're putting these community standards in place and you have these bots who are just looking for certain specific things. It's automated. It doesn't have the ability for nuanced decision-making in regards to this," said Wysinger.

Biases can be built into the algorithms by the programmers who designed them, even if it's unintentional.

"Unfortunately tech is made up of a homogenous group, mostly White and Asian males, and so what happens is the opinions, the experiences that go into this decision-making are reflective of a majority group. And so people from different backgrounds Black, Latino, different religions, conservative, liberal don't have the accurate representation that they would if these companies were more diverse," said Mark Luckie, a digital strategist who previously worked at Twitter, Reddit and Facebook.

Facebook CEO Mark Zuckerberg has said he believes the platform "should enable as much expression as possible," and that social media companies "shouldn't be the arbiter of truth of everything that people say online."

Nonetheless, a recent Pew Research Center survey found that nearly three-quarters of U.S. adults believe social media sites intentionally censor political viewpoints. In the last two years, two congressional hearings have focused on the question of tech censorship.

"We hear that there is an anti-conservative bias on the part of Facebook or other platforms because conservatives keep saying that," said Susan Benesch, executive director of the Dangerous Speech Project, an organization based in Washington D.C. that has advised Facebook, Twitter, Google and other internet companies on how to diminish harmful content online while protecting freedom of speech.

But she adds, "I would be surprised if that were the case in part because on most days the most popular, most visited groups on Facebook and pages on Facebook are very conservative ones."

She said she also finds it interesting that "many conservatives or ultra-conservatives complain that the platforms have a bias against them at the same time as Black Lives Matter activists feel that the platforms are disproportionately taking down their content."

A 2019 review of over 400 political pages on Facebook, conducted by the left-leaning media watchdog Media Matters, found conservative pages performed about as well as liberal ones.

But reliable data on the subject is scarce, and social media platforms are largely secretive about how they make decisions on content moderation.

Amid ongoing criticism, Facebook commissioned an independent review, headed by former Republican Senator Jon Kyl, to investigate accusations of anti-conservative bias. Kyl's 2019 report detailed recommendations to improve transparency, and Facebook agreed to create an oversight board for content removal decisions. Facebook said it "would continue to examine, and where necessary adjust, our own policies and practices in the future."

According to Fernandez, the focus should be on requiring tech companies to publicly reveal their moderation rules and tactics.

Benesch points out, "We have virtually zero oversight regarding take-down, so in truth content moderation is more complicated than just take it down or leave it up," referring to the fact that, to date, there has been little publicly available data provided by tech companies to allow an evaluation of the process.

"Protecting free expression while keeping people safe is a challenge that requires constant refinement and improvement. We work with external experts and affected communities around the world to develop our policies and have a global team dedicated to enforcing them," Facebook said in a statement.

And a statement from Twitter said, "Twitter does not use political ideology to make any decisions whether related to ranking content on our service or how we enforce our rules. In fact, from a simple business perspective and to serve the public conversation, Twitter is incentivized to keep all voices on the service."

Meanwhile users like Wysinger struggle with mixed feelings about social media sites that promise connection but sometimes leave them out in the cold.

"Whether we like it or not, we are all on Facebook and Instagram and Twitter all day long, and when they take us off the banned list, I don't know anyone who doesn't post a status on Facebook right away, after the ban is lifted: 'I'm back y'all!'," said Wysinger.

"It's like an abusive relationship, you can't even leave the abusive relationship because you become so used to and dependent on it."



A New Artificial Intelligence Research Proposes Multimodal Chain-of-Thought Reasoning in Language Models That Outperforms GPT-3.5 by 16% (75.17% to 91.68%) on ScienceQA – MarkTechPost

The Chelsea Manning Case: A Timeline | News & Commentary | American …

On May 17, Chelsea Manning will be released from military prison after her sentence was commuted by former President Obama in his last week in office. She has been incarcerated for seven years, far less than the 35 years to which she was sentenced, but longer than any whistleblower in U.S. history.

Chelsea's story raises numerous issues of interest to the ACLU, including the government's treatment of whistleblowers, the struggle of transgender people to survive and be treated with dignity, and prisoners' rights. Below is a timeline covering some of the basics of her case.

May 2010: Chelsea Manning is arrested in Iraq for disclosing information to Wikileaks that was ultimately published by The New York Times, The Guardian, and Der Spiegel. The information, largely comprised of military and diplomatic documents, included evidence of civilian deaths in Iraq and Afghanistan, U.S. attempts to cover up the CIA torture program, and other matters of public interest. Shortly after her arrest, she is transferred to a U.S. military base in Kuwait, and then to the Quantico Marine base in Virginia.

April 2011: After being held for almost a year in solitary confinement in Kuwait and Quantico, Chelsea is transferred to a medium-security military prison in Kansas. Shortly before her transfer, the ACLU sends then-Defense Secretary Robert Gates a letter objecting to her treatment as cruel and unusual. She had been regularly stripped naked, subjected to prolonged isolated confinement and sleep deprivation, deprived of any meaningful opportunity to exercise, and stripped of her reading glasses so she could not read. Almost 300 academics, most of them legal scholars, sign a letter objecting to her treatment.

March 2012: The United Nations Special Rapporteur on Torture, Juan Mendez, formally rules that the U.S. government's treatment of Chelsea was cruel, inhuman, and degrading. The Pentagon refuses to allow Mendez to meet with Chelsea.

June 2013: Chelsea's court-martial begins. She is initially charged with "aiding the enemy," despite the fact that the government never claimed she directly provided military adversaries with information. The charge is akin to treason and punishable by death.

July 2013: Chelsea is convicted of 17 of the 22 charges against her but acquitted of aiding the enemy. The following month, she is sentenced to 35 years in prison. "When a soldier who shared information with the press and public is punished far more harshly than others who tortured prisoners and killed civilians, something is seriously wrong with our justice system," the ACLU says. This is the heaviest sentence handed down to a whistleblower or leaker in U.S. history, almost 20 times the pre-Obama record.

August 2013: Chelsea publicly announces she is transgender and will be seeking hormone therapy as part of her transition during incarceration. The military, despite its own diagnosis of Chelsea's gender dysphoria, responds by stating that it does not provide hormone therapy or sex-reassignment surgery for gender identity disorder.

September 2014: The ACLU sues the Department of Defense over its refusal to provide Chelsea medical treatment for gender dysphoria.

February 2015: Prompted by over a year of litigation, the army begins to treat Chelsea with hormone therapy. She becomes the first person to receive health care related to gender transition while in military prison. The military continues to refuse to permit her to follow female grooming standards despite recommendations from its own medical providers that such treatment is a necessary part of her health care.

May 2016: Chelsea appeals her conviction. The ACLU files an amicus brief in support of the appeal. We argue that her prosecution under the Espionage Act of 1917, which does not allow defendants to argue that their disclosures were in the public interest, violated the Constitution and should be overturned.

September 2016: Chelsea ends a hunger strike she launched in protest of the army's refusal to provide her with medical treatment related to her gender dysphoria, after receiving assurances from the military that it would grant her request for gender transition surgery, a first for a transgender inmate.

January 2017: Following a sustained, high-profile advocacy campaign (including a WhiteHouse.gov petition, videos from former whistleblowers and celebrities, and letters of support from every major LGBTQ organization), President Barack Obama announces he is commuting all but four months of Chelsea's sentence. She will be released on May 17, 2017.


What Is Super Artificial Intelligence (AI)? Definition, Threats, and …

Artificial superintelligence (ASI) is defined as a form of AI capable of surpassing human intelligence by manifesting cognitive skills and developing thinking skills of its own. This article explains the fundamentals of ASI, the potential threat and advantages of super AI systems, and five key trends for super AI advancements in 2022.


Also known as super AI, artificial superintelligence is considered the most advanced, powerful, and intelligent type of AI that transcends the intelligence of some of the brightest minds, such as Albert Einstein.

Human-like Capabilities of Super AI

Machines with superintelligence are self-aware and can think of abstractions and interpretations that humans cannot. This is because the human brain's thinking ability is limited to a few billion neurons. Apart from replicating multi-faceted human behavioral intelligence, ASI can also understand and interpret human emotions and experiences. ASI develops emotional understanding, beliefs, and desires of its own, based on the comprehension capability of the AI.

ASI finds application in virtually all domains of human interest, be it math, science, arts, sports, medicine, marketing, or even emotional relations. An ASI system can perform all the tasks humans can, from defining a new mathematical theorem to exploring the laws of physics while venturing into outer space.

ASI systems can quickly understand, analyze, and process circumstances to decide on actions. As a result, the decision-making and problem-solving capabilities of super-intelligent machines are expected to be more precise than those of humans.

Currently, superintelligence is a theoretical possibility rather than a practical reality, as most development today in computer science and AI is inclined toward artificial narrow intelligence (ANI). This implies that AI programs are designed to solve only specific problems.

Machine learning and deep learning algorithms are further advancing such programs by utilizing neural networks as the algorithms learn from the results to iterate and improve upon themselves. Thus, such algorithms process data more effectively than previous AI versions. However, despite the advancements in neural nets, these models can only solve problems at hand, unlike human intelligence.

Engineers, AI researchers, and practitioners are developing technology and machines with artificial general intelligence (AGI), which is expected to pave the way for ASI development. Although there have been significant developments in the area, such as IBM's Watson supercomputer and Apple's Siri, today's computers cannot fully simulate and achieve an average human's cognitive abilities and capabilities.


Luminaries in AI are still skeptical about the progression and sustainability of ASI in the long run. According to a recent study published in the Journal of Artificial Intelligence Research in January 2021, researchers from premier institutes such as the Max Planck Institute concluded that it would be almost impossible for humans to contain super AIs.

The team of researchers explored the recent developments in machine learning, computational capabilities, and self-aware algorithms to map out the true potential of super-intelligent AI. They then performed experiments to test the system against some known theorems to evaluate whether containing it is feasible, if at all possible.

Nevertheless, if accomplished, superintelligence will usher in a new era in technology, with the potential to initiate another industrial revolution at a jaw-dropping pace. A number of characteristics of ASI will set it apart from other technologies and forms of intelligence.


While artificial superintelligence has numerous followers and supporters, many theorists and technology researchers have cautioned against the idea of machines surpassing human intelligence. They believe that such an advanced form of intelligence could lead to a global catastrophe, as depicted in Hollywood productions such as Star Trek and The Matrix. Moreover, even technology experts such as Bill Gates and Elon Musk are apprehensive about ASI and consider it a threat to humanity.

Here are some of the potential threats of superintelligence.

Potential Threats of Super AI

One potential danger of superintelligence that has received a lot of attention from experts worldwide is that ASI systems could use their power and capabilities to carry out unforeseen actions, outperform human intellect, and eventually become unstoppable. Advances in computer science, cognitive science, nanotechnology, and brain emulation could produce greater-than-human machine intelligence.

If something goes wrong with any one of these systems, we won't be in a position to contain them once they emerge. Moreover, predicting the system's response to our requests will be very difficult. Loss of control and understanding can thus lead to the destruction of the human race altogether.

Today, it seems logical enough to think that highly advanced AI systems could potentially be used for social control or weaponization. Governments around the world are already using AI to strengthen their military operations. However, adding weaponized, conscious superintelligence could transform warfare in profoundly negative ways.

Additionally, if such systems are unregulated, they could have dire consequences. Superhuman capabilities in programming, research & development, strategic planning, social influence, and cybersecurity could self-evolve and take positions that could become detrimental to humans.

Super AI can be programmed to our advantage; however, there lies a non-zero probability of super AI developing a destructive method to achieve its goals. Such a situation may arise when we fail to fully align the AI's goals with our own. For example, if you give a command to an intelligent car to drive you to the airport as fast as possible, it might get you to the destination but may use its own route to comply with the time constraint.

Similarly, if a super AI system is assigned a critical geoengineering project, it may disturb the overall ecosystem while completing the project. Moreover, any human attempt to stop the super AI system may be viewed by it as a threat to achieving its goals, which wouldn't be an ideal situation to be in.
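The misalignment scenarios above can be sketched as a toy objective function: an optimizer given only "minimize time" happily picks an option that violates an unstated constraint, while folding the constraint into the objective changes the choice. All routes, numbers, and penalties below are invented purely for illustration:

```python
# Each hypothetical route: (name, minutes, safety_violations)
routes = [
    ("highway", 25, 0),
    ("shortcut_through_park", 15, 3),  # fast, but breaks unstated rules
]

# Objective 1: only what we literally asked for -- minimize travel time.
fastest = min(routes, key=lambda r: r[1])

# Objective 2: time plus a heavy penalty for each violated constraint,
# i.e. the unstated human preference made explicit in the objective.
aligned = min(routes, key=lambda r: r[1] + 100 * r[2])

print(fastest[0])  # shortcut_through_park
print(aligned[0])  # highway
```

The point of the sketch is that the optimizer is not malicious in either case; it simply optimizes exactly what it is given, which is why goal specification matters.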

The successful and safe development of AGI and superintelligence can be ensured by teaching it the aspects of human morality. However, ASI systems can be exploited by governments, corporations, and even sociopaths for various reasons, such as oppressing certain societal groups. Thus, superintelligence in the wrong hands can be devastating.

With ASI, autonomous weapons, drones, and robots could acquire significant power. The danger of nuclear attacks is another potential threat of superintelligence. Enemy nations can attack countries with technological supremacy in AGI or superintelligence with advanced and autonomous nuclear weapons, ultimately leading to destruction.

Super-intelligent AI systems are programmed with a predetermined set of moral considerations. The problem is humanity has never agreed upon a standard moral code and has lacked an all-encompassing ethical theory. As a result, teaching human ethics and values to ASI systems can be quite complex.

Super-intelligent AI can have serious ethical complications, especially if AI exceeds the human intellect but is not programmed with the moral and ethical values that coincide with human society.


Artificial superintelligence is an emerging technology that simulates human reasoning, emotions, and experiences in AI systems. Although detractors continue to debate the existential risks of super AI, the technology seems to be very beneficial as it can revolutionize any professional sector.

Lets look at the potential advantages of super AI.

Potential Advantages of Super AI

It's human to make errors. Computers, or machines, when appropriately programmed, can considerably reduce the instances of these mistakes. Consider the field of programming and development. Programming is a time- and resource-consuming process that demands logical, critical, and innovative thinking.

Human programmers and developers often encounter syntactical, logical, arithmetic, and resource errors. Super AI can be helpful here as it can access millions of programs, build logic from the available data on its own, compile and debug programs, and at the same time, keep programming errors to a minimum.

One of the most significant advantages of super AI is that the risk limitations of humans can be overcome by deploying super-intelligent robots to accomplish dangerous tasks. These can include defusing a bomb, exploring the deepest parts of the oceans, coal and oil mining, or even dealing with the consequences of natural or human-induced disasters.

Consider the Chernobyl nuclear disaster that occurred in 1986. At the time, AI-powered robots did not exist. The nuclear power plant's radiation was so intense that it could kill any human who went close to the core in a matter of minutes. Authorities were forced to use helicopters to pour sand and boron from a distance above.

However, with significant technological advancements, superintelligent robots can be deployed in such situations where salvage operations can be carried out without any human intervention.

Although most humans work for 6 to 8 hours a day, we need some time out to recuperate and get ready for work the next day. We also need weekly days off to maintain a healthy work-life balance. However, using super AI, we can program machines to work 24/7 without any breaks.

For example, educational institutes have helpline centers that receive several queries daily. This can be effectively handled using super AI, providing query-specific solutions round the clock. Super AI can also offer subjective student counseling sessions to academic institutions.

Super AI can facilitate space exploration, as the technical challenges in developing a city on Mars, interstellar space travel, and even interplanetary travel can be addressed by the problem-solving capabilities of advanced AI systems.

With a thinking ability of its own, super AI can be effectively used to test and estimate the success probabilities of equations, theories, research programs, rocket launches, and space missions. Organizations such as NASA, SpaceX, and ISRO are already using AI-powered systems and supercomputers such as the Pleiades to expand their space research efforts.

The development of super AI can also significantly benefit the healthcare industry. It can play a pivotal role in drug discovery, vaccine development, and drug delivery. A 2020 research paper in Nature described the design and use of miniaturized intelligent nanobots for intracellular drug delivery.

AI applications in healthcare for vaccine and drug delivery are already a reality today. Moreover, with the addition of conscious superintelligence, the discovery and delivery of new strains of medicines will become much more effective.


Post-pandemic, we have witnessed accelerated adoption of AI and ML across industries. Moreover, automation, coupled with AI hardware and software developments, further pushes the super AI envelope. Although ASI is still in its infancy, recent AI trends will almost certainly lay the foundation for advanced AI systems in the future.

Lets look at the five key AI trends that will speed up super AI advancements in 2022.

2022 Trends for Super AI Advancements

Language models use NLP techniques and algorithms to predict the probability of a sequence of words in a sentence. Such models can summarize textual data and create visual charts from those very texts.

LLMs are trained on massive datasets. Popular examples include OpenAI's GPT-2 and GPT-3 and Google's BERT. Similarly, Naver, a South Korean company, has built a comprehensive AI-based Korean language model, HyperCLOVA. These models can generate simple essays, power next-generation conversational AI tools, and even design complex financial models for corporations.
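The next-word prediction these models perform can be illustrated with a toy bigram model: a drastically simplified stand-in for neural architectures like GPT-2 or BERT. The corpus and counting scheme below are invented for illustration; real LLMs learn probabilities over billions of parameters rather than counting word pairs:

```python
from collections import Counter, defaultdict

# Tiny corpus standing in for the massive datasets real LLMs train on.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count bigram frequencies: how often each word follows each other word.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most frequently observed after `word`, or None."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # "cat": it follows "the" twice, vs. once each for "mat" and "fish"
```

The task is the same one a production language model performs: given context, predict the most likely next token; only the mechanism for estimating those likelihoods differs.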

Traditionally, deep learning algorithms train the underlying models on a single data source. For example, an NLP model is trained on a text corpus, while a computer vision model is trained on an image dataset. Similarly, an acoustic model uses wake-word detection and noise cancellation parameters to handle speech. The type of ML employed here is single-modal AI, as the model outcome is mapped to one type of data: text, images, or speech.

Multimodal AI, on the other hand, combines modalities such as vision, speech, and text to create scenarios that match human perception. DALL-E from OpenAI is a recent example of multimodal AI, generating images from text prompts. Google's multimodal multitask unified model (MUM) helps enhance the user search experience, since search results are shortlisted by considering contextual information mined from 75 different languages. Another example is NVIDIA's GauGAN2 model, which uses text-to-image generation to produce photorealistic images from text inputs.
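One common way systems combine modalities is late fusion: feature vectors from separate per-modality encoders are concatenated before a shared classifier. The sketch below assumes made-up placeholder features and weights, not any real model's values:

```python
# Hypothetical pre-extracted features; in a real system these would come
# from a text encoder and an image encoder, respectively.
text_features = [0.2, 0.9, 0.1]
image_features = [0.7, 0.3]

# Single-modal AI would score using only one of these vectors.
# Multimodal late fusion: concatenate, then apply one shared linear scorer,
# so the decision can draw on evidence from both modalities at once.
fused = text_features + image_features

weights = [0.5, -0.2, 0.1, 0.4, 0.3]  # toy classifier weights (illustrative)
score = sum(w * f for w, f in zip(weights, fused))
print(len(fused), round(score, 2))  # 5 0.3
```

Production multimodal models typically fuse learned embeddings inside a neural network rather than raw lists, but the principle of mapping several input types into one joint decision is the same.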

There has been significant development in AI-driven programming in the last couple of years. Tools such as Amazon CodeGuru provide recommendations to improve overall code quality by identifying an application's most expensive code. Moreover, GitHub and OpenAI recently launched GitHub Copilot as an AI pair programmer to help programmers and developers write efficient code. Another example comes from Salesforce, where CodeT5 has been launched as an open-source project to assist programmers with AI-powered coding.

Thus, advancements in LLMs and the wider availability of open-source code will promote intelligent code generation that is compact and high quality. Additionally, such systems will also translate code from one language to another, opening up an application's code to a broader community.

Top AI vendors such as Amazon, Google, and Microsoft now commercialize their AI-based products. Amazon Connect and Google Contact Center AI are efforts for better contact center management. Both products leverage ML capabilities that offer automated assistance to contact center agents and drive conversations through bots.

Moreover, Microsoft's Azure Percept provides computer vision and conversational AI capabilities at the edge. It is based on IoT, AI, and edge computing services on Azure. The convergence of all these technologies, cutting-edge research in LLMs, and conversational and multimodal AI will make the development of super AI possible in 2022.

AI is already powering inventions in almost every domain, from creating music, art, and literature to developing scientific theories. Recently, DABUS, an artificial inventive machine, came up with ideas for two patentable inventions. While the first invention relates to a device that attracts attention and is helpful in search and rescue operations, the second is a type of beverage container.

Moreover, it has been observed that the DABUS machine also has an emotional appreciation for whatever ideas it conceives. With advanced AI systems in place, the number of such inventions that solve complex problems can considerably increase over the coming years.

Progress in multimodal AI will give rise to the next wave in creative AI, where AI-generated images, infographics, and even videos will be realized.


Although the scope of artificial superintelligence is yet to be fully realized, it has garnered immense attention from researchers worldwide. ASI brings numerous risks to the table, yet AI practitioners feel achieving it will be a significant accomplishment for humanity, as it may allow us to unravel the fascinating mysteries of the universe and beyond.

Today, the future of super AI looks extremely bright, despite the uncertainty and fear revolving around its unpredictable nature and the dire consequences that malevolent superintelligence may throw at us. The coming decades will reveal the true nature of superintelligence and whether it will prove to be a boon or bane to humanity.

Do you think super AI will be a threat or a boon to humanity? Share your thoughts with us on LinkedIn, Twitter, or Facebook. We'd love to hear from you!

