Qloo, the Leading Artificial Intelligence Platform for Culture and Taste Preferences, Raises $15M in Series B – Business Wire

NEW YORK--(BUSINESS WIRE)--Qloo, the leading artificial intelligence platform for culture and taste preferences, announced today that it has raised $15M in Series B funding from Eldridge and AXA Venture Partners. This latest round brings Qloo's total capital raised to $30M, and will enable the privacy-centric AI leader to expand its team of world-class data scientists, enrich its technology, and build on its sales channels in order to continue offering premier insights into global consumer taste to Fortune 500 companies.

Founded in 2012, Qloo pioneered the predictive algorithm-as-a-service model, using AI technology to help brands securely analyze anonymized and encrypted consumer taste data and provide recommendations based on a consumer's preferences. Demand for Qloo has been accelerating as companies look for privacy-centric solutions; in fact, API request volumes across endpoints grew more than 273% year-over-year in Q2.

"Before Qloo, consumer taste was really only examined within the silo of a certain app or service, which made it impossible to model a fuller picture of people's preferences," said Alex Elias, Founder and CEO of Qloo. "Qloo is the first AI platform that takes into account all the cross-sections of our preferences, like how our music tastes correlate to our favorite restaurants, or how our favorite clothing brands may lend themselves to a great movie recommendation."

Qloo's flagship API works across multiple layers to process and correlate over 575 million primary entities (such as a movie, book, restaurant, or song) across entertainment, culture, and consumer products, giving the most accurate and expansive predictions of consumer taste based on demographics, preferences, cultural entities, metadata, and geolocational factors. Qloo's API can be plugged directly into leading data platforms such as Snowflake and Tableau, with results populated in a matter of seconds, making it easy for companies to improve product development, media buying, and consumer experiences in real time.

Qloo currently delivers cultural AI that powers inferences for clients serving over 550 million customers globally in 2022, including industry leaders across media and publishing, entertainment, technology, e-commerce, consumer brands, travel, hospitality, automakers, fashion, financial services, and more.

About Qloo:

Qloo is the leading artificial intelligence platform on culture and taste preferences, providing completely anonymized and encrypted consumer taste data and recommendations for leading companies in the tech, entertainment, publishing, retail, travel, hospitality and CPG sectors. Qloo's proprietary API can predict consumers' preferences and connect how their tastes correlate across over a dozen major categories, including music, film, television, podcasts, dining, nightlife, fashion, consumer products, books and travel. Launched in 2012, Qloo combines the latest in machine learning, theoretical research in Neuroaesthetics and one of the largest pipelines of detailed taste data to better inform its customers, and makes all of this intelligence available through an API. By allowing companies to speak more effectively with their target consumers, Qloo helps its customers solve real-world problems such as driving sales, saving money on media buys, choosing locations and building brands. Qloo is the parent company of TasteDive, a cultural recommendation engine and social community that allows users to discover what to watch, read, listen to, and play based on their existing unique preferences.

Learn more at qloo.com and http://www.tastedive.com.

Perceptron: Face-tracking earables, analog AI chips, and accelerating particle accelerators – TechCrunch

Kyle Wiggers is a senior reporter at TechCrunch with a special interest in artificial intelligence. His writing has appeared in VentureBeat and Digital Trends, as well as a range of gadget blogs including Android Police, Android Authority, Droid-Life, and XDA-Developers. He lives in Brooklyn with his partner, a piano educator, and dabbles in piano himself occasionally -- if mostly unsuccessfully.

Research in the field of machine learning and AI, now a key technology in practically every industry and company, is far too voluminous for anyone to read it all. This column, Perceptron, aims to collect some of the most relevant recent discoveries and papers, particularly in, but not limited to, artificial intelligence, and explain why they matter.

An earable that uses sonar to read facial expressions was among the projects that caught our eye over these past few weeks. So did ProcTHOR, a framework from the Allen Institute for AI (AI2) that procedurally generates environments that can be used to train real-world robots. Among the other highlights, Meta created an AI system that can predict a protein's structure given a single amino acid sequence. And researchers at MIT developed new hardware that they claim offers faster computation for AI with less energy.

The earable, which was developed by a team at Cornell, looks something like a pair of bulky headphones. Speakers send acoustic signals to the side of a wearer's face, while a microphone picks up the barely detectable echoes created by the nose, lips, eyes, and other facial features. These echo profiles enable the earable to capture movements like eyebrows raising and eyes darting, which an AI algorithm translates into complete facial expressions.

Image Credits: Cornell

The earable has a few limitations. It only lasts three hours on battery and has to offload processing to a smartphone, and the echo-translating AI algorithm must train on 32 minutes of facial data before it can begin recognizing expressions. But the researchers make the case that it's a much sleeker experience than the recorders traditionally used in animations for movies, TV, and video games. For example, for the mystery game L.A. Noire, Rockstar Games built a rig with 32 cameras trained on each actor's face.

Perhaps someday, Cornell's earable will be used to create animations for humanoid robots. But those robots will have to learn how to navigate a room first. Fortunately, AI2's ProcTHOR takes a step (no pun intended) in this direction, creating thousands of custom scenes including classrooms, libraries, and offices in which simulated robots must complete tasks, like picking up objects and moving around furniture.

The idea behind the scenes, which have simulated lighting and contain a subset of a massive array of surface materials (e.g., wood, tile, etc.) and household objects, is to expose the simulated robots to as much variety as possible. It's a well-established theory in AI that training in simulated environments can improve the performance of real-world systems; autonomous car companies like Alphabet's Waymo simulate entire neighborhoods to fine-tune how their real-world cars behave.

Image Credits: Allen Institute for Artificial Intelligence

As for ProcTHOR, AI2 claims in a paper that scaling the number of training environments consistently improves performance. That bodes well for robots bound for homes, workplaces, and elsewhere.

Of course, training these types of systems requires a lot of compute power. But that might not be the case forever. Researchers at MIT say they've created an analog processor that can be used to create superfast networks of neurons and synapses, which in turn can be used to perform tasks like recognizing images, translating languages, and more.

The researchers' processor uses protonic programmable resistors arranged in an array to learn skills. Increasing and decreasing the electrical conductance of the resistors mimics the strengthening and weakening of synapses between neurons in the brain, a part of the learning process.

The conductance is controlled by an electrolyte that governs the movement of protons. When more protons are pushed into a channel in the resistor, the conductance increases. When protons are removed, the conductance decreases.
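That strengthen/weaken behavior maps naturally onto the weight updates used in software neural networks. Here is a toy sketch of the analogy in Python; it is purely illustrative (the `ProtonicResistor` class and its step sizes are invented for this column, not the MIT team's design), treating conductance as a synaptic weight nudged up or down by proton pulses:

```python
# Toy software analogy for the protonic resistor described above.
# Conductance plays the role of a synaptic weight: pushing protons
# into the channel raises conductance (strengthening the synapse),
# removing them lowers it (weakening the synapse).

class ProtonicResistor:
    def __init__(self, conductance=0.5, step=0.05):
        self.conductance = conductance  # analog weight value
        self.step = step                # conductance change per proton pulse

    def push_protons(self, pulses=1):
        """More protons in the channel -> higher conductance."""
        self.conductance = min(1.0, self.conductance + pulses * self.step)

    def remove_protons(self, pulses=1):
        """Fewer protons in the channel -> lower conductance."""
        self.conductance = max(0.0, self.conductance - pulses * self.step)


# One 'synapse' learning a simple correlation: when the output is too
# low for the target, strengthen it; when too high, weaken it.
synapse = ProtonicResistor()
for signal, target in [(1, 1), (1, 1), (1, 0)]:
    output = signal * synapse.conductance
    if target > output:
        synapse.push_protons()
    elif target < output:
        synapse.remove_protons()

print(round(synapse.conductance, 2))  # -> 0.55
```

The appeal of the hardware version is that this update happens physically in the device, in parallel across the whole resistor array, rather than one multiply-accumulate at a time.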

Processor on a computer circuit board

An inorganic material, phosphosilicate glass, makes the MIT team's processor extremely fast because it contains nanometer-sized pores whose surfaces provide the perfect paths for proton diffusion. As an added benefit, the glass can run at room temperature, and it isn't damaged by the protons as they move along the pores.

"Once you have an analog processor, you will no longer be training networks everyone else is working on," lead author and MIT postdoc Murat Onen was quoted as saying in a press release. "You will be training networks with unprecedented complexities that no one else can afford to, and therefore vastly outperform them all. In other words, this is not a faster car, this is a spacecraft."

Speaking of acceleration, machine learning is now being put to use managing particle accelerators, at least in experimental form. At Lawrence Berkeley National Lab, two teams have shown that ML-based simulation of the full machine and beam gives them a highly precise prediction, as much as 10 times better than ordinary statistical analysis.

Image Credits: Thor Swift/Berkeley Lab

"If you can predict the beam properties with an accuracy that surpasses their fluctuations, you can then use the prediction to increase the performance of the accelerator," said the lab's Daniele Filippetto. It's no small feat to simulate all the physics and equipment involved, but surprisingly the various teams' early efforts to do so yielded promising results.

And over at Oak Ridge National Lab, an AI-powered platform is letting them do Hyperspectral Computed Tomography using neutron scattering, finding optimal... maybe we should just let them explain.

In the medical world, there's a new application of machine learning-based image analysis in the field of neurology, where researchers at University College London have trained a model to detect early signs of epilepsy-causing brain lesions.

MRIs of brains used to train the UCL algorithm.

One frequent cause of drug-resistant epilepsy is what is known as a focal cortical dysplasia (FCD), a region of the brain that has developed abnormally but for whatever reason doesn't appear obviously abnormal in MRI. Detecting it early can be extremely helpful, so the UCL team trained an MRI inspection model called Multicentre Epilepsy Lesion Detection (MELD) on thousands of examples of healthy and FCD-affected brain regions.

The model was able to detect two thirds of the FCDs it was shown, which is actually quite good, as the signs are very subtle. In fact, it found 178 cases where doctors were unable to locate an FCD but it could. Naturally the final say goes to the specialists, but a computer hinting that something might be wrong can sometimes be all it takes to look closer and get a confident diagnosis.

"We put an emphasis on creating an AI algorithm that was interpretable and could help doctors make decisions. Showing doctors how the MELD algorithm made its predictions was an essential part of that process," said UCL's Mathilde Ripart.

Bethenny Frankel Alleges TikTok ‘Shadow Banning’ After Slamming Kardashians

Bethenny Frankel claims she has been "shadow banned" on TikTok following comments about the Kardashians and celebrity brands.

The Real Housewives of New York City alum, 51, spoke out against the reality TV clan on Just B with Bethenny Frankel.

"We need a Kardashian intermission," she said, before admitting she was not sure if she should continue commenting on the family.

"And I've honestly been afraid to say it. It's not because Kris [Jenner] is the mafia and controls a lot of the media, because I don't give a f**k, cancel me."

In her anti-Kardashian rant, Frankel wondered about the influence they had on younger generations.

"What are we saying to our kids? What is the message? Take it all? Be as rich as possible? Filter as much as possible? Be as fake as much as possible. Brag as much as possible?" Frankel questioned.

"Get plastic surgery and lie about it as much as possible? What the f**k are we doing? Then do a charity donation to like rinse it over as much as possible? What are we doing?"

Frankel then took her complaints to the next level saying she felt "waterboarded" over the excessive amount of "Kard-data" in the media.

"Kard-data" was a term she came up with to describe what she saw as relentless media images of the Kardashian-Jenners, saying "please stop shoving [the Kardashians] down my throat."

Frankel also claimed she was getting fewer views on TikTok, slamming the popular app for "shadow banning."

"Since I seem to have been 'shadowbanned' on Tik Tok, since posting about celeb brands, & get only 10 percent of normal views on this reel, hmmmm (my smart followers pointed this out) it seemed like this should be posted here.... Coincidence maybe? Or is censorship real?" Frankel wrote on Instagram alongside a video of her Kardashian diatribe.

"Shadow banning" refers to the practice by social media platforms of partially blocking a user who has fallen out of favor by reducing how much of their content is seen by other users.

It might not be immediately apparent to the user that they have been shadow banned; the aim of the practice is the hope that they will become tired or bored of the app if they are not getting the usual amount of interactions.

Newsweek has been unable to verify Frankel's claims about shadow banning. We have reached out to TikTok for comment.

Frankel is no stranger to controversy.

Last year she came under fire for her comments about a "person with a penis, who identifies as being a girl."

"We have to go into the fact that I did a Zoom for my daughter's school and [had] the pronouns conversation with each teacher, each parent, each child," she said about her daughter Bryn.

"And my daughter says in school, too, that everybody has to say their pronouns. And my daughter didn't even know what hers were."

She continued about a "person with a penis, who identifies as being a girl" sharing a room with other girls at school camp.

"So, the other girls saw a penis," the TV personality said. "They're 9, 10 years old, so the parents obviously weren't that happy ... A penis often goes into a vagina so they might not want that visual so soon."

Frankel added that if she had a transgender child, she would "want my child to go to another camp where there were kids in the same situation."

"Not every situation is set up to make someone thrive," she said. "I know parents who won't send their children to very athletic school because they're not jocks so they're gonna set them up for not feeling successful. You can't make every situation fit. The camp didn't think it through."

She then turned her attention to gender-inclusive bathrooms.

"What happens if a child isn't ready to make a decision?" Frankel asked. "Don't a lot of girls in college have a lesbian phase and then they realize that they're not?

"Maybe they're going through something, maybe they want attention, maybe they're going through a bad break-up. What is the age that someone's absolutely positive who they are? There's got to be gray area."

"I've heard of situations when [people] unmake that decision," she went on. "What does that mean for that camp? What does that mean for that bunk? Maybe a mother isn't ready for her child to see a penis in a bunk and understand that child identifies as a girl."

Frankel praised her "amazing" daughter for her knowledge of the subject, saying Bryn "understands all this" and has "different language" to discuss it.

She added: "She also hasn't seen a penis. In a camp, she would see girl parts."

Her podcast statements provoked fury online, with a number of Twitter users branding Frankel transphobic, but she quickly hit back.

She tweeted to one detractor: "Listen to the podcast. Then comment. I was absolutely not wrong. And I'm going to discuss this again this week. Thankfully I have a platform to clarify what the media loves to distort... ps. I'm not afraid of cancelation so not afraid of charged discussions."

Deplatforming online extremists reduces their followers but there’s a price – The Conversation

Conspiracy theorist and US far-right media personality Alex Jones was recently ordered to pay US$45 million (£37 million) damages to the family of a child killed in the 2012 Sandy Hook school shooting.

Jones had claimed that being banned or deplatformed from major social media sites for his extreme views negatively affected him financially, likening the situation to jail. But during the trial, forensic economist Bernard Pettingill estimated Jones's conspiracy website InfoWars made more money after being banned from Facebook and Twitter in 2018.

So does online deplatforming actually work? It's not possible to measure influence in a scientifically rigorous way, so it's difficult to say what happens to a person or group's overall influence when they are deplatformed. Overall, research suggests deplatforming can reduce the activity of nefarious actors on those sites. However, it comes with a price. As deplatformed people and groups migrate elsewhere, they may lose followers but also become more hateful and toxic.

Typically, deplatforming involves actions taken by the social media sites themselves. But it can be done by third parties like the financial institutions providing payment services on these platforms, such as PayPal.

Closing a group is also a form of deplatforming, even if the people in it are still free to use the sites. For example, The_Donald subreddit (a forum on the website Reddit) was closed for hosting hateful and threatening content, such as a post encouraging members to attend a white supremacist rally.

Research shows deplatforming does have positive effects on the platform the person or group was kicked out of. When Reddit banned certain forums victimising overweight people and African Americans, a lot of users who were active on these hateful subreddits stopped posting on Reddit altogether. Those who stayed active posted less extreme content.

But the deplatformed group or person can migrate. Alex Jones continues to work outside mainstream social networks, mainly operating through his InfoWars website and podcasts. A ban from big tech may be seen as punishment for challenging the status quo in an uncensored manner, reinforcing the bonds and sense of belonging between followers.

Gab was created as an alternative social network in 2016, welcoming users who have been banned from other platforms. Since the US Capitol insurrection, Gab has been tweeting about these bans as a badge of honour, and said it's seen a surge in users and job applications.

My team's research looked at the subreddits The_Donald and Incels (a male online community hostile towards women), which moved to standalone websites after being banned from Reddit. We found that as dangerous communities migrated onto different platforms, their footprints became smaller, but users got significantly more extreme. Similarly, users who got banned from Twitter or Reddit showed an increased level of activity and toxicity upon relocating to Gab.

Other studies into the birth of fringe social networks like Gab, Parler, or Gettr have found relatively similar patterns. These platforms market themselves as bastions of free speech, welcoming users banned or suspended from other social networks. Research shows that not only does extremism increase as a result of lax moderation but also that early site users have a disproportionate influence on the platform.

The unintended consequences of deplatforming are not limited to political communities but extend to health disinformation and conspiracy theory groups. For instance, when Facebook banned groups discussing COVID-19 vaccines, users went on Twitter and posted even more anti-vaccine content.

What else can be done to avoid the concentration of online hate that deplatforming can encourage? Social networks have been experimenting with soft moderation interventions that do not remove content or ban users. They limit the content's visibility (shadow banning), restrict the ability of other users to engage with the content (replying or sharing), or add warning labels.

These approaches are showing encouraging results. Some warning labels have prompted site users to debunk false claims. Soft moderation sometimes reduces user interactions and extremism in comments.

However, there is potential for popularity bias (acting on or ignoring content based on the buzz around it) about what subjects platforms like Twitter decide to intervene on. Meanwhile, warning labels seem to work less effectively for fake posts if they are right-leaning.

It is also still unclear whether soft moderation creates additional avenues for harassment, for example mocking users that get warning labels on their posts or aggravating users who cannot re-share content.

A crucial aspect of deplatforming is timing. The sooner platforms act to stop groups using mainstream platforms to grow extremist movements, the better. Rapid action could in theory put the brakes on the groups' efforts to muster and radicalise large user bases.

But this would also need a coordinated effort from mainstream platforms as well as other media to work. Radio talk shows and cable news play a crucial role in promoting fringe narratives in the US.

We need an open dialogue on the deplatforming tradeoff. As a society, we need to discuss if our communities should have fewer people exposed to extremist groups, even if those who do engage become ever more isolated and radicalised.

At the moment, deplatforming is almost exclusively managed by big technology companies. Tech companies can't solve the problem alone, but neither can researchers or politicians. Platforms must work with regulators, civil rights organisations and researchers to deal with extreme online content. The fabric of society may depend upon it.

FBI warns of residential proxies used in credential stuffing attacks – BleepingComputer

The Federal Bureau of Investigation (FBI) warns of a rising trend of cybercriminals using residential proxies to conduct large-scale credential stuffing attacks without being tracked, flagged, or blocked.

The warning was issued as a Private Industry Notification on the Bureau's Internet Crime Complaint Center (IC3) late last week to raise awareness among internet platform admins who need to implement defenses against credential stuffing attacks.

Credential stuffing is a type of attack where threat actors use large collections of username/password combinations exposed in previous data breaches to try and gain access to other online platforms.

Because people commonly use the same password at every site, cybercriminals have ample opportunity to take over accounts without cracking passwords or phishing any other information.

"Malicious actors utilizing valid user credentials have the potential to access numerous accounts and services across multiple industries to include media companies, retail, healthcare, restaurant groups and food delivery to fraudulently obtain goods, services, and access other online resources such as financial accounts at the expense of legitimate account holders," details the FBI's announcement.

Because credential stuffing attacks carry specific characteristics that differentiate them from regular login attempts, websites can easily detect and stop them.
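Those tell-tale characteristics, many distinct usernames and a high failure rate from a single source in a short window, lend themselves to simple server-side heuristics. A minimal sketch in Python (the thresholds here are illustrative assumptions, not the FBI's or any vendor's recommendations):

```python
from collections import defaultdict

# Flag source IPs whose login traffic looks like credential stuffing:
# many *distinct* usernames attempted, with most attempts failing.
# Thresholds below are illustrative, not official guidance.
MIN_ATTEMPTS = 20
MIN_UNIQUE_USERS = 10
MIN_FAILURE_RATE = 0.9  # stuffing runs fail on most accounts they try

def flag_stuffing(events):
    """events: iterable of (ip, username, success) tuples from a login log."""
    stats = defaultdict(lambda: {"users": set(), "attempts": 0, "failures": 0})
    for ip, user, success in events:
        s = stats[ip]
        s["users"].add(user)
        s["attempts"] += 1
        if not success:
            s["failures"] += 1

    flagged = set()
    for ip, s in stats.items():
        failure_rate = s["failures"] / s["attempts"]
        if (s["attempts"] >= MIN_ATTEMPTS
                and len(s["users"]) >= MIN_UNIQUE_USERS
                and failure_rate >= MIN_FAILURE_RATE):
            flagged.add(ip)
    return flagged

# A stuffing run: 30 different usernames, all failing, from one IP.
attack = [("203.0.113.7", f"user{i}", False) for i in range(30)]
# Normal traffic: one user mistyping a password twice, then succeeding.
normal = [("198.51.100.2", "alice", False)] * 2 + [("198.51.100.2", "alice", True)]

print(flag_stuffing(attack + normal))
```

Per-IP counting like this is exactly what residential proxies are meant to defeat, as the advisory goes on to explain: by spreading the run across thousands of home-user IPs, each source stays under the thresholds.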

To override basic protections, the FBI warns that threat actors are using residential proxies to hide their actual IP address behind ones commonly associated with home users, which are unlikely to be present in blocklists.

Proxies are online servers that accept and forward requests, making it appear like a connection is from them rather than the actual initiator (attacker).
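The key point is that the origin server only ever sees the address of the last hop. A toy in-process model of that data flow (the function names and example IPs are invented for illustration; a real proxy relays sockets, not Python dicts):

```python
# Toy model of request forwarding: the server logs the address the
# connection arrives from, so a proxied request hides the initiator.

def origin_server(request, source_ip):
    """Stand-in for a website: records who it thinks the client is."""
    return f"served {request['path']} to {source_ip}"

def proxy(request, proxy_ip="203.0.113.50"):
    # The proxy re-issues the request under its own address; the
    # initiator's IP never reaches the origin server.
    return origin_server(request, source_ip=proxy_ip)

attacker_ip = "198.51.100.99"
direct = origin_server({"path": "/login"}, source_ip=attacker_ip)
via_proxy = proxy({"path": "/login"})

print(direct)     # served /login to 198.51.100.99
print(via_proxy)  # served /login to 203.0.113.50
```

With a residential proxy, that last-hop address belongs to an ordinary home connection, which is why the next paragraph notes they evade blocklists that datacenter IPs would trip.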

Residential proxies are preferable over data center-hosted proxies because they make it harder for protection mechanisms to discern between suspicious and regular consumer traffic.

Typically, these proxies are made available to cybercriminals by hacking legitimate residential devices such as modems or other IoT devices, or through malware that converts a home user's computer into a proxy without their knowledge.

Using these tools, cybercriminals automate credential stuffing attacks, with bots attempting to log in across numerous sites using previously stolen login credentials.

Moreover, some of these proxy tools offer the option to brute-force account passwords or include "configs" that modify the attack to accommodate particular requirements, like having a unique character, minimum password length, etc.

The FBI says credential stuffing attacks are not limited to websites and have been seen targeting mobile applications due to their poor security.

"Cyber criminals may also target a company's mobile applications as well as the website," warns the FBI advisory.

"Mobile applications, which often have weaker security protocols than traditional web applications, frequently permit a higher rate of login attempts, known as checks per minute (CPMs), facilitating faster account validation."

In a joint operation involving the FBI and the Australian Federal Police, the agencies investigated two websites that contained over 300,000 unique sets of credentials obtained through credential stuffing attacks.

The FBI says these websites counted over 175,000 registered users and generated over $400,000 in sales for their services.

The FBI's advisory urges administrators to follow certain practices to help protect their users from losing their accounts to credential stuffing attacks, even when they use weak passwords.

Regular users can protect themselves by activating multi-factor authentication (MFA) on their accounts, using strong and unique passwords, and remaining vigilant against phishing attempts.

Book Banning By Conservatives Spreading Across Texas – Reform Austin

As kids return to school, districts are having to deal with a flurry of book challenges to their libraries, mostly from conservatives.

In March, The Houston Chronicle compiled a detailed list of the challenges that were being made to school libraries. Starting in 2019, when the term critical race theory became a catch-all used to oppose teaching about racism, the labor movement, gender identity, and sexual orientation, challenges and bans sharply skyrocketed. In 2017, only five books were challenged and none removed. By 2021, the challenges had reached 29 and the removals 19.

A look at the titles that have been targeted reveals a sinister culture war being waged in public schools. Recently in Katy ISD, a parent complaint that a book was harmful actually led to a police officer physically removing it from the shelves. The book in question was Flamer by Mike Curato, a critically acclaimed and award-winning coming-of-age story of a gay boy. The parent claimed that depictions of sex in the book made it pornographic. After a review of the material, KISD high school libraries will continue to make Flamer available to students.

Katy has had a number of disturbing challenges from conservative parents. A biography of Michelle Obama was challenged for promoting reverse racism.

In Keller ISD, a graphic novel version of The Diary of Anne Frank was temporarily removed from shelves, though the prose version remained available. Once again, a complaint of pornography was behind the challenge. Anne Frank describes her genitals in her tale of living in hiding during the Nazi occupation of Amsterdam. Ultimately, cooler heads prevailed and the book was unbanned.

Some schools, such as Cy Fair ISD, have attempted to reach a compromise with the angry parents demanding that books be banned from shelves. In that district, parents have the option to prohibit their child from checking out books from the library entirely, though they have to opt in specifically for that.

Since last fall, teachers and school librarians have been living under a shadow when it comes to what books can be allowed in schools. The state government essentially put out a hit list targeting books. Those titles are almost exclusively based around diversity. In Prosper, a parent wanted a title about a Black Olympian who dealt with racism in 1940s Tennessee off the shelves. Eanes ISD was attacked for stocking How to Be Antiracist by Ibram X. Kendi on the shelves. A parent group wanted it replaced with copies of The Bible.

Many of the challenges tie in with another conservative crusade, that of accusing LGBT people of grooming children for sexual activity and deviant lifestyles. Many texts about growing up queer or trans have been accused of being pornography, with almost no depictions of heterosexuality in books receiving the same amount of resistance. Coupled with the state's recent anti-trans initiatives, such as sports bans and investigating parents for child abuse if they provide their children with gender-affirming care, it appears that conservatives are trying to wipe the existence of trans people from public schools entirely.

What is the shadowban on Instagram? – Gearrice

Social networks are not static. Their features and algorithms are constantly evolving, and their rules keep changing to build an ecosystem that keeps us hooked for more and more time. Whenever new changes are applied to a social network like Instagram, there are always users who do their best to get around them. If we violate Instagram's rules, the network will end up deleting our account, which is what is known as a ban. However, there is another measure that is becoming more and more common, known as a shadowban. Is it real, or is it an urban legend?

How is a ban different from a shadowban?

In computing jargon, a ban is when an account or profile ends up being removed for breaking the rules. The term does not only apply to social networks; it already existed in forums and even in online games, and comes from the English word ban, meaning prohibition.

In a lifetime ban, your account is locked or removed from the server. There can be many reasons why a moderator or an automated moderation system decides to kick you out of a community, from breaking the rules to being mass-reported by people who don't like you.

However, some social networks penalize practices that break the rules but are not serious enough to justify expelling the user. This is when we start talking about the shadowban, a concept that many say is a myth.

A shadowban is a type of silent ban applied to those who break some rules or misuse a platform's tools to gain an advantage.

Basically, the shadowban turns the user into a kind of ghost. Your posts no longer have the same impact, and almost all engagement is lost. Your interactions with other users drop drastically. And the worst thing is that the user is never notified that they have been shadowbanned. This condition can apparently be reversed with time and good behavior.

There are several triggers that many users have discovered over time. Instagram has never commented on shadowbanning, so there is no official guide to shed any light on it, but certain practices are widely believed to lead to a shadowban.

Some social media experts affirm that the shadowban on networks like Instagram is a myth. According to them, an account with thousands of followers and fewer than 100 likes on a post is not under any special restriction; rather, the social network is simply not working well. However, here we could enter the debate about whether they are right, or whether what they really want is for us to hire them to sort the problem out for us.

In any case, on other social networks the shadowban does exist and you can see it with your own eyes. Twitter, for example, often hides replies in threads when a profile habitually posts irrelevant comments. And, surprise: you have never heard the official team talk about the shadowban technique either.

Continued here:

What is the shadowban on Instagram? - Gearrice

Digilantism, hackbacks and mutual aid are used by online activists to fight trolls – GlobalComment.com

Sandra Jeppesen, Lakehead University

On Aug. 5, 2022, digital trans activist Clara Sorrenti found herself arrested at gunpoint at her home in London, Ont. Anti-trans trolls had falsely reported she had killed her mother and was planning a shooting at city hall.

Sorrenti had been swatted.

Swatting involves calling 911 to falsely report a high-risk emergency at a victim's home, triggering the deployment of a SWAT team. In some swatting cases, victims have died at the hands of police.

Sorrenti's experience is consistent with my findings in long-term research with intersectional global media activists.

She is a new type of intersectional digital activist. These activists work on intersectional issues, drawing connections between systems of oppression including race, gender, sexuality, and so on. And a great deal of their activism takes place online.

Digital campaigns such as #MeToo and #BlackLivesMatter have been successful partially because young women, Black people and LGBTQ+ people are the power users of social media: they are online more often and are particularly adept at using social networks.

But despite successes in social justice campaigns, intersectional activists are increasingly at risk both online and off.

The online trolling and offline swatting of Sorrenti illustrate how intersectional activists face an emotional tax (emotional stress over and above everyday norms), mostly from dealing with violent attacks by online trolls.

Intersectional activists are also doxxed at higher rates, meaning personal information such as their address, phone number or workplace is dumped online. Sorrenti's swatting is a textbook example: there are ongoing emotional impacts of her doxxing, including confronting transphobic police behaviours such as the use of her deadname (the name she used before transitioning) and incorrect gender.

A deeper problem is that internet users are not all treated equally by the internet's technical codes.

Research has repeatedly demonstrated that algorithms (the computer code that runs the internet) are biased.

Algorithms and the big data that drives them are often racist, gendered or transphobic.

One type of algorithmic bias is shadowbanning, which happens when a platform limits the visibility of specific users without outright banning them. Activists have noted that social media content about intersectional issues is often shadowbanned.

For example, on May 5, 2021 (Red Dress Day in Canada), almost all posts on Instagram related to missing and murdered Indigenous women disappeared. Instagram claimed it was a technical issue, whereas users claimed it was a shadowbanning of intersectional female, Indigenous activist content. But shadowbanning is often difficult to prove.

There is also evidence that the popular video-hosting platform TikTok has shadowbanned intersectional LGBTQ+, disability, size activism and anti-racist content.

Algorithmic bias and the shadowbanning of marginalized users can make intersectional activists feel invisible, with their posts struggling to achieve the virality crucial to activist campaigns.

One tactic activists have used to address intersectionality online is to create a breakaway hashtag. The #MeToo movement is a powerful example of hashtag activism that drew global attention to sexual harassment and abuse. However, for Egyptian-American writer Mona Eltahawy, #MeToo did not feel like the right space for her as a Muslim woman. She created #MosqueMeToo to draw attention to sexual assault in the Muslim community, focusing on the intersectional context of gender, Islamophobia and racism.

Breakaway hashtags like #MosqueMeToo add intersectional dimensions to the premise of a mainstream hashtag, both relying on the original hashtag's virality and challenging its limitations.

Young feminist women who are trolled online use the tactic of digilante justice, or digilantism, which involves using digital means to fight for justice, in this case against trolls. They learn how to hack social media platforms to reveal the identities of trolls and confront them in real life. Activists have also excluded trolls from their personal social networks through hackback tactics, which are hacker tactics used against hackers.

In another example, feminist game developer Randi Harper was intensely trolled by misogynists in an incident known as GamerGate. In response, Harper developed Good Game Auto Blocker (ggautoblocker) that blocks users who follow misogynist Twitter accounts, the digital equivalent of walking out of a room when someone spews hateful speech.
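The idea behind a follower-based auto-blocker like ggautoblocker can be sketched simply: block any account that follows more than a threshold number of known-abusive "seed" accounts. The sketch below is not the actual tool's code; the seed accounts, follow graph and threshold are all invented for illustration.

```python
# Illustrative sketch of a follower-based auto-blocker in the spirit of
# ggautoblocker. All account names and data here are hypothetical.

seed_accounts = {"troll_hq", "harass_central"}  # known-abusive accounts

# Toy follow graph: user -> set of accounts that user follows.
follows = {
    "user_a": {"troll_hq", "harass_central", "cats_daily"},
    "user_b": {"cats_daily"},
    "user_c": {"troll_hq"},
}

def build_blocklist(follows, seeds, threshold=2):
    """Block accounts that follow at least `threshold` seed accounts."""
    blocklist = set()
    for user, followed in follows.items():
        # Set intersection counts how many seed accounts this user follows.
        if len(followed & seeds) >= threshold:
            blocklist.add(user)
    return blocklist

print(build_blocklist(follows, seed_accounts))  # {'user_a'}
```

The threshold matters: set it too low and casual followers get swept in, too high and the blocklist misses coordinated harassers, which is exactly the trade-off such tools have to tune.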

Digital activists understand that social media platforms are designed for the capitalist exploitation of content and data produced by everyday users. Countering this, intersectional hacktivists (hacker activists) have designed technologies for solidarity rather than exploitation.

For example, activists in Athens designed an app to share text message costs so media activists within a group would not have to foot the whole bill. The program itself was designed with sharing in mind, illustrating that technologies do not have to be exploitative.

Intersectional activists aim to empower both givers and receivers of support, acknowledging that all citizens play both roles, sometimes needing support and other times contributing it. This is sometimes called mutual aid.

Digital mutual aid can take place through mentorship and skillshare workshops that might teach new marginalized activists how to code computers, promote social media posts, produce radio shows or write media releases. Workshops are conducted by individuals sharing some aspect of their identities with participants to create a safer space through a shared experience of lived oppression.

Digital solidarity and mutual aid are important strategies of support and care that can work toward countering the negative emotional tax of being trolled, doxxed, shadowbanned or subjected to algorithmic bias.

Beyond intersectional digital activism, more work needs to be done by the tech industry, police services and broader social movements to eliminate the colonialism, racism, sexism and transphobia of online interactions and the devastating offline impacts they can have in peoples everyday lives.

This work is important to a well-functioning, inclusive and diverse democracy, as it aims to ensure that online participation is available equally and safely to all citizens.

Sandra Jeppesen, Professor of Media, Film, and Communications, Lakehead University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image: Rod Long


Opinion: Don’t let the banned Andrew Tate be a martyr – 1News

On and on the commentary around Andrew Tate goes. Granted, his rhetoric is dangerous, but bearing witness to all this backlash and blocking is his audience.

Andrew Tate (Source: Sunday)

Today I want to ask, are we considering how they will see this?

ICYMI: Facebook and Instagram - sorry, Meta - booted Tate from its platforms.

We know who his audience is, because the impact he has on the world is primarily through his impact on their views: impressionable young men.

The beliefs he spews are vile and numerous. They include - but are certainly not limited to - women belonging in the home, women being unable to drive and that a woman is a man's property.

He's quoted as saying it's easier to get off rape charges in Eastern Europe, which was "40% of the reason I moved to Romania".

So upfront, I want to be explicit: this isn't a defence of him. But I'm not here to indict him either. The danger his views pose is well-documented by others.

No, I want to discuss the role the platforms - Facebook, Instagram, Twitter, TikTok and YouTube - have had in his meteoric rise to internet stardom before they kicked him to the curb. Far, far too late if I may add.

Facebook and Instagram - sorry, Meta - say Tate was banned for violating its policies on dangerous organisations and individuals. But that had been the case for a long time.

For that reason, Tate was called out from many corners of the internet at once, and that should've been enough.

Communities should be able to moderate themselves because the moderate view should naturally be the most common, it being moderate and all. But very often, they aren't - and that's thanks to algorithms.

Social media algorithms are complicated, but anecdotally, we can all attest to a simple fact - it shows us more of what we consume.

They provide a way to sort our feeds by relevance, and they decide what's relevant based on what you're interested in.

So if you consume Tate's videos you're going to get more, and more, until eventually, you're a student of the hustle (If you don't understand the end of that sentence, just hold on for three more). Thanks to the algorithm, dissenting content struggles to reach these young men.
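The feedback loop described above can be reduced to a few lines: a feed ranked by a viewer's per-topic engagement shows more of what was already consumed, which raises that topic's score further. This is a deliberately minimal sketch, not any platform's real ranking system; the topic names and scoring are invented.

```python
# Minimal sketch of an engagement-driven ranking feedback loop.
from collections import defaultdict

engagement = defaultdict(float)  # topic -> interest score for one viewer

def watch(topic):
    """Consuming a video raises the score used to rank future videos."""
    engagement[topic] += 1.0

def rank_feed(candidate_topics):
    """Sort candidates so the most-watched topics come first."""
    return sorted(candidate_topics, key=lambda t: engagement[t], reverse=True)

candidates = ["hustle_culture", "cooking", "news"]

watch("hustle_culture")       # one video watched...
print(rank_feed(candidates))  # ...and that topic now ranks first

watch("hustle_culture")       # each further view widens the gap,
watch("hustle_culture")       # so dissenting content sinks in the ranking
print(rank_feed(candidates))
```

Nothing in this loop pushes back against the dominant topic, which is the point of the paragraph above: left alone, the ranking only ever amplifies what was already being watched.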

Now that's one side of the coin; the other is Tate himself, and how ingeniously he's played the hand he was dealt.

A huge part of his rise is that Hustler's University, the get-rich-quick course (and alleged Ponzi scheme) was built around the goal of making him popular (It's so meta I almost called it Facebook and Instagram again).

At its height, this unaccredited online 'academy' boasted at least 140,000 people on its Discord servers. And the job of this legion for the last little while has been to publish videos of Tate everywhere. They're incentivised to do that because until recently, they got a meaningful commission on anyone they brought into the 'university' through the link attached to the video.

But now, he's gone.

Meta and Twitter have used the 'break glass in case of emergency' option of banning him from their platforms, and for a very simple reason: the platforms' algorithms, designed to spread his messages, mean that banning is the only tool they have left to combat harmful rhetoric.

But when it comes to Tate, banning him is only half the job. That rhetoric survives online in countless videos on countless accounts. In the shadow of his banning, this content becomes the only way to keep his rhetoric alive, a mission many have taken up.

We need to show young men why these messages are wrong, not what happens to them if they think they're right.

But because of the way our social media ecosystem is set up, we are now in triage, harmful views spreading as quickly as a novel coronavirus. And now - like a lockdown - eliminating them is all the option we have left. There's a reason they call videos 'viral'.

This won't work forever though.

Take TikTok. It's banned an account related to him, but his hashtag has still racked up 13 billion views and counting.

It's the best move we can make in the present situation, but now we need to cure the misinformation that's already in the community.

If we don't, community transmission will continue, and in a bit of a Covid role-reversal, it is youngsters who are now most at risk.

The sites that have banned him are the very platforms which let it get to this point.

This ecosystem is in the platforms' control, and its inability to moderate itself does not excuse it from the need to be moderated.


High school football: All of the scores from Friday's games – The Whittier Daily News

Scores from the CIF Southern Section and L.A. City Section high school football games Friday, Aug. 26.

CIF SOUTHERN SECTION

NONLEAGUE

Aliso Niguel 26, Rancho Mirage 0

Alta Loma 34, Don Lugo 20

Anaheim Canyon 28, Irvine 22

Arrowhead Christian 21, Riverside Notre Dame 16

Ayala 28, Etiwanda 7

Baldwin Park 55, South El Monte 20

Bellflower 28, Beckman 6

Bishop Amat 42, La Habra 7

Bishop Montgomery 26, Bolsa Grande 12

Brentwood 33, Salesian 14

Buena Park 29, Savanna 0

Burbank Burroughs 17, Fillmore 7

Cajon 34, Jurupa Hills 0

Canyon Country Canyon 28, Hueneme 7

Canyon Springs 60, Banning 12

Cerritos Valley Christian 42, Gahr 9

Chaffey 46, San Gorgonio 18

Chaminade 24, JSerra 21 (OT)

Chino Hills 54, Diamond Ranch 0

Citrus Valley 58, Rancho Verde 7

Colony 26, Corona Santiago 14

Compton 69, Compton Centennial 0

Corona Centennial 42, San Diego Cathedral 7

Corona del Mar 28, Los Gatos 14

Costa Mesa 21, Pioneer 20

Crean Lutheran 49, Mary Star 0

Culver City 35, West Torrance 13

Damien 34, Loyola 7

Dana Hills 28, Laguna Beach 24

Dos Pueblos 41, Nordhoff 7

Downey 47, El Toro 13

Eastside 35, Viewpoint 14

Edison 31, Leuzinger 22

Eisenhower 40, Temescal Canyon 27

El Rancho 42, Bell Gardens 22

Foothill 9, Tustin 7

Fullerton 39, Whittier 7

Garden Grove 61, Artesia 6

Garden Grove Santiago 41, Godinez 6

Glenn 49, Firebaugh 8

Golden Valley 42, Antelope Valley 6

Granite Hills 41, Rialto 14

Hacienda Heights Wilson 47, La Puente 6

Hemet 37, Indio 6

Heritage Christian 42, Riverside Prep 8

Hesperia 38, Victor Valley 0

Hoover 14, Ganesha 6

Indian Springs 42, Arroyo Valley 14

Jurupa Valley 35, Desert Hot Springs 12

Kaiser 36, Ramona 35

King 55, Redlands 14

Laguna Hills 49, Placentia Valencia 29

Liberty 42, Miller 0

Long Beach Jordan 41, Peninsula 35

Long Beach Poly 17, Gardena Serra 3

Los Amigos 22, Ocean View 6

Los Osos 41, Redlands East Valley 6

Maranatha 42, Beverly Hills 6

Mater Dei 24, Bishop Gorman (Nev.) 21

Mayfair 49, St. Anthony 6

Mission Viejo 42, Servite 23

Monrovia 49, Arcadia 14

Montebello 30, Cantwell-Sacred Heart 26

Mountain View 48, Workman 27

Murrieta Valley 52, Great Oak 14

North Torrance 20, St. Genevieve 6

Northwood 37, La Palma Kennedy 6

Oak Park 23, Calabasas 22

Oaks Christian 38, Sierra Canyon 21

Ontario Christian 45, Xavier Prep 15

Orange Lutheran 24, Upland 7

Oxnard 21, Hart 18

Palmdale 42, Ridgecrest Burroughs 6

Paloma Valley 42, Moreno Valley 20

Pasadena 35, Glendora 10

Patriot 49, Rubidoux 20

Pomona 47, Garey 14

Rancho Alamitos 10, La Sierra 7

Rancho Christian 43, San Bernardino 8

Rancho Cucamonga 13, Apple Valley 12

Redondo 33, Long Beach Wilson 0

Rim of the World 48, Littlerock 6

Rio Hondo Prep 29, El Monte 6

Rio Mesa 47, Camarillo 28

Royal 41, Castaic 20

Saddleback 27, Century 0

San Marino 14, Arroyo 13

Santa Margarita 36, Norco 14

Santa Monica 33, El Segundo 14

Santa Paula 26, Channel Islands 20

Santa Rosa Academy 54, Nuview Bridge 12

Saugus 41, Moorpark 13

Segerstrom 53, Santa Ana Valley 0

Shadow Hills 56, Citrus Hill 0

Silverado 28, Yucaipa 13

Simi Valley 51, Knight 0

South Hills 16, Covina 14
