Tech platforms use recommender algorithms to control society's key resource: attention. With these algorithms they can quietly demote or hide certain content instead of just blocking or deleting it. This opaque practice is called shadowbanning.
While platforms often deny they engage in shadowbanning, there's plenty of evidence it's well and truly present. And it's a problematic form of content moderation that desperately needs oversight.
Simply put, shadowbanning is when a platform reduces the visibility of content without alerting the user. The content may still be accessible, but with conditions on how it circulates.
It may no longer appear as a recommendation, in a search result, in a news feed, or in other users' content queues. One example would be burying a comment underneath many others.
The term shadowbanning first appeared in 2001, when it referred to making posts invisible to everyone except the poster in an online forum. Today's version of it (where content is demoted through algorithms) is much more nuanced.
Shadowbans are distinct from other moderation approaches in a number of ways.
Platforms such as Instagram, Facebook and Twitter generally deny performing shadowbans, but typically do so by referring to the original 2001 sense of the term.
When shadowbanning has been reported, platforms have explained it away by citing technical glitches, users' failure to create engaging content, or chance outcomes of black-box algorithms.
That said, most platforms will admit to visibility reduction or demotion of content. And that's still shadowbanning as the term is now used.
In 2018, Facebook and Instagram became the first major platforms to admit they algorithmically reduced user engagement with "borderline" content, which in Meta CEO Mark Zuckerberg's words included "sensationalist and provocative" content.
YouTube, Twitter, LinkedIn and TikTok have since announced similar strategies to deal with sensitive content.
In one survey of 1,006 social media users, 9.2% reported they had been shadowbanned. Of these, 8.1% were on Facebook, 4.1% on Twitter, 3.8% on Instagram, 3.2% on TikTok, 1.3% on Discord, 1% on Tumblr and less than 1% on YouTube, Twitch, Reddit, NextDoor, Pinterest, Snapchat and LinkedIn.
Further evidence for shadowbanning comes from surveys, interviews, internal whistle-blowers, information leaks, investigative journalism and empirical analyses by researchers.
Experts think shadowbanning by platforms likely increased in response to criticism of big tech's inadequate handling of misinformation. Over time moderation has become an increasingly politicised issue, and shadowbanning offers an easy way out.
The goal is to mitigate content that's "lawful but awful". This content trades under different names across platforms, whether it's dubbed "borderline", "sensitive", "harmful", "undesirable" or "objectionable".
Through shadowbanning, platforms can dodge accountability and avoid outcries over censorship. At the same time, they still benefit financially from shadowbanned content that's perpetually sought out.
Recent studies have found between 3% and 6.2% of sampled Twitter accounts had been shadowbanned at least once.
The research identified specific characteristics that increased the likelihood of posts or accounts being shadowbanned.
On Twitter, having a verified account (a blue checkmark) reduced the chances of being shadowbanned.
Of particular concern is evidence that shadowbanning disproportionately targets people in marginalised groups. In 2020 TikTok had to apologise for marginalising the Black community through its Black Lives Matter filter. In 2021, TikTok users reported that using the word "Black" in their bio page would lead to their content being flagged as inappropriate. And in February 2022, keywords related to the LGBTQ+ movement were found to be shadowbanned.
Overall, Black, LGBTQ+ and Republican users report more frequent and harsher content moderation across Facebook, Twitter, Instagram and TikTok.
Detecting shadowbanning is difficult. However, there are some ways you can try to figure out if it has happened to you:
rank the performance of the content in question against your normal engagement levels: if a certain post has greatly under-performed for no obvious reason, it may have been shadowbanned
ask others to use their accounts to search for your content, but keep in mind that if they're a friend or follower they may still be able to see your shadowbanned content, whereas other users may not
benchmark your content's reach against content from others who have comparable engagement: for instance, a Black content creator can compare their TikTok views to those of a white creator with a similar following
refer to shadowban detection tools available for different platforms such as Reddit (r/CommentRemovalChecker) or Twitter (hisubway).
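The first of these checks can be sketched in code. This is a rough illustration with made-up engagement numbers, not a tool any platform provides; the function name and the z-score cutoff are my own choices. It simply flags posts whose views fall far below an account's usual range.

```python
import statistics

def flag_underperformers(baseline_views, recent_views, z_cutoff=-2.0):
    """Flag posts whose views fall far below the account's normal range.

    baseline_views: view counts for a sample of typical past posts.
    recent_views: view counts for the posts being checked.
    Returns indices of recent posts more than |z_cutoff| standard
    deviations below the baseline mean.
    """
    mean = statistics.mean(baseline_views)
    stdev = statistics.stdev(baseline_views)
    return [i for i, views in enumerate(recent_views)
            if (views - mean) / stdev < z_cutoff]

# Hypothetical numbers: ten typical posts, then three new ones.
baseline = [980, 1040, 1100, 950, 1010, 990, 1070, 1020, 960, 1000]
recent = [1015, 140, 990]  # the second post cratered with no obvious cause
print(flag_underperformers(baseline, recent))  # -> [1]
```

A flagged post is only a hint, of course: an under-performing post may simply be weak content, which is exactly the alternative explanation platforms offer.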
Shadowbans last for varying amounts of time depending on the demoted content and platform. On TikTok, they're said to last about two weeks. If your account or content is shadowbanned, there aren't many options to immediately reverse it.
But some strategies can help reduce the chance of it happening, as researchers have found. One is to self-censor. For instance, users may avoid ethnic identification labels such as AsianWomen.
Users can also experiment with external tools that estimate the likelihood of content being flagged, and then manipulate the content so it's less likely to be picked up by algorithms. If certain terms are likely to be flagged, they'll use phonetically similar alternatives, like "S-E-G-G-S" instead of "sex".
Shadowbanning impairs the free exchange of ideas and excludes minorities. It can be exploited by trolls falsely flagging content. It can cause financial harm to users trying to monetise content. It can even trigger emotional distress through isolation.
As a first step, we need to demand transparency from platforms on their shadowbanning policies and enforcement. This practice has potentially severe ramifications for individuals and society. To fix it, we'll need to scrutinise it with the thoroughness it deserves.
How does encryption work?
Encryption uses a cipher (an encryption algorithm) and an encryption key to encode data into ciphertext. Once this ciphertext is transmitted to the receiving party, a key (the same key, for symmetric encryption; a different, related value, for asymmetric encryption) is used to decode the ciphertext back into the original value. Encryption keys work much like physical keys, which means that only users with the right key can unlock or decrypt the encrypted data.
Encryption vs. tokenization
Encryption and tokenization are related data protection technologies; the distinction between them has evolved.
In common usage, tokenization typically refers to format-preserving data protection: data protection that substitutes a token (a similar-looking but different value) for individual sensitive values. Encryption typically means data protection that converts data (one or more values, or entire data sets) into gibberish that looks very different from the original.
Tokenization may be based on various technologies. Some versions use format-preserving encryption, such as NIST FF1-mode AES; some generate random values, storing the original data and the matching token in a secure token vault; others produce tokens from a pre-generated set of random data. Following the definition of encryption above, tokenization of any sort is clearly a form of encryption; the difference is tokenization's format-preserving attribute.
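The vault-based variant described above can be sketched as follows. This is a toy illustration, not any vendor's implementation: tokens are random digits that preserve the input's format, and the pairing lives in an in-memory dict standing in for a secure token vault.

```python
import secrets

class TokenVault:
    """Toy token vault: maps each sensitive value to a random,
    format-preserving token and remembers the pairing so the original
    can be recovered. A real vault would be an encrypted,
    access-controlled datastore, not an in-memory dict."""

    def __init__(self):
        self._to_token = {}
        self._to_value = {}

    def tokenize(self, value: str) -> str:
        if value in self._to_token:      # same value -> same token
            return self._to_token[value]
        while True:
            # Replace every digit with a random digit; keep other
            # characters (dashes, spaces) so the format is preserved.
            token = "".join(
                secrets.choice("0123456789") if ch.isdigit() else ch
                for ch in value)
            if token not in self._to_value and token != value:
                break
        self._to_token[value] = token
        self._to_value[token] = value
        return token

    def detokenize(self, token: str) -> str:
        return self._to_value[token]

vault = TokenVault()
card = "4111-1111-1111-1111"
tok = vault.tokenize(card)   # random token, same length and dash layout
assert vault.detokenize(tok) == card
```

Because the token is random and the mapping lives only in the vault, an attacker who steals tokenized records learns nothing about the originals; the vault itself becomes the asset to protect.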
Encryption plays a vital role in protecting sensitive data that is transmitted over the Internet or stored at rest in computer systems. Not only does it keep the data confidential, but it can authenticate its origin, ensure that data has not changed after it was sent, and prevent senders from denying they sent an encrypted message (also known as nonrepudiation).
In addition to the robust data privacy protection it provides, encryption is often necessary to uphold compliance regulations established by multiple organizations or standards bodies. For example, the Federal Information Processing Standards (FIPS) are a set of data security standards that U.S. government agencies or contractors must follow per the Federal Information Security Modernization Act of 2014 (FISMA 2014). Within these standards, FIPS 140-2 requires the secure design and implementation of a cryptographic module.
Another example is the Payment Card Industry Data Security Standard (PCI DSS). This standard requires merchants to encrypt customer card data when it is stored at rest, as well as when transmitted across public networks. Other important regulations many businesses must follow include the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act of 2018 (CCPA).
There are two main types of encryption: symmetric and asymmetric.
Symmetric encryption
Symmetric encryption algorithms use the same key for both encryption and decryption. This means that the sender or computer system encrypting the data must share the secret key with all authorized parties so they can decrypt it. Symmetric encryption is typically used for encrypting data in bulk, as it is usually faster and easier to implement than asymmetric encryption.
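The defining property (one shared key both encrypts and decrypts) can be shown with a deliberately insecure toy cipher. Real systems use AES, not the repeating-key XOR below; this sketch only illustrates that the two parties must share the same secret.

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR each byte with a repeating key.
    The same function both encrypts and decrypts, because XOR is its
    own inverse -- exactly the symmetric-key property. Illustration
    only: repeating-key XOR is trivially breakable; use AES in practice."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = secrets.token_bytes(16)              # 128-bit shared secret
ciphertext = xor_cipher(b"attack at dawn", key)
plaintext = xor_cipher(ciphertext, key)    # the same key decrypts
assert plaintext == b"attack at dawn"
```

The sketch also shows why key distribution is the hard part of symmetric encryption: anyone who obtains `key` can decrypt everything it protected.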
One of the most widely used symmetric encryption ciphers is the Advanced Encryption Standard (AES), defined as a U.S. government standard by the National Institute of Standards and Technology (NIST) in 2001. AES supports three different key lengths, which determine the number of possible keys: 128, 192, or 256 bits. Cracking any AES key length requires levels of computational power that are currently unrealistic and unlikely ever to become so. AES is widely used worldwide, including by government organizations like the National Security Agency (NSA).
Asymmetric encryption
Asymmetric encryption, also known as public key encryption, uses two distinct but mathematically linked keys: a public key and a private key. Typically, the public key is shared publicly and is available for anyone to use, while the private key is kept secure, accessible only to the key owner. Sometimes the data is encrypted twice: once with the sender's private key and once with the recipient's public key, thus ensuring both that only the intended recipient can decrypt it and that the sender is who they claim to be. Asymmetric encryption is thus more flexible for some use cases, since the public key(s) can be shared easily; however, it requires more computing resources than symmetric encryption, and these resources increase with the length of the data protected.
A hybrid approach is thus common: a symmetric encryption key is generated and used to protect a volume of data. That symmetric key is then encrypted using the recipient's public key and packaged with the symmetrically encrypted payload. The recipient decrypts the relatively short symmetric key using asymmetric encryption, and then decrypts the actual data using symmetric encryption.
One of the most widely used asymmetric encryption ciphers is RSA, named after its inventors Ron Rivest, Adi Shamir, and Leonard Adleman, who published it in 1977. The RSA cipher relies on prime factorization: two large prime numbers are multiplied to create an even larger number that forms part of the public key. Cracking RSA is extremely difficult when the right key length is used, as one must determine the two original prime numbers from the multiplied result, which is mathematically hard.
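The mechanics can be demonstrated with textbook RSA and deliberately tiny primes. Keys this small are trivially factorable and offer no security; the numbers are the classic small worked example (p = 61, q = 53), chosen so the arithmetic is easy to follow.

```python
# Textbook RSA with deliberately tiny primes, to show the mechanics.
# Real keys use primes hundreds of digits long; these are insecure.
p, q = 61, 53
n = p * q                 # public modulus: 3233
phi = (p - 1) * (q - 1)   # 3120
e = 17                    # public exponent, coprime with phi
d = pow(e, -1, phi)       # private exponent: modular inverse of e (2753)

message = 65
ciphertext = pow(message, e, n)    # encrypt with the public key (e, n)
decrypted = pow(ciphertext, d, n)  # decrypt with the private key (d, n)
assert decrypted == message
```

An attacker who could factor n = 3233 back into 61 and 53 could recompute d instantly; with real 2048-bit moduli, that factorization is the step no known classical algorithm performs efficiently.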
Like many other cybersecurity strategies, modern encryption can have vulnerabilities. Modern encryption keys are long enough that brute-force attacks (trying every possible key until the right one is found) are impractical. A 128-bit key has 2^128 possible values: 100 billion computers each testing 10 billion keys per second would take over a billion years to try all of them.
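That estimate is easy to check with a few lines of arithmetic:

```python
keys = 2 ** 128                    # possible 128-bit keys
rate = 100e9 * 10e9                # 100 billion machines x 10 billion keys/sec
seconds = keys / rate
years = seconds / (365 * 24 * 3600)
print(f"{years:.2e}")              # 1.08e+10 -- about ten billion years
```

Even this absurdly optimistic attacker (far beyond all computing power on Earth) needs roughly the current age of the universe, which is why practical attacks target key management rather than the key space.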
Modern cryptographic vulnerabilities typically manifest as a slight weakening of the encryption strength. For example, under certain conditions, a 128-bit key may only have the strength of a 118-bit key. While the research that discovers such weaknesses is important for ensuring encryption strength, the weaknesses are rarely significant in real-world use, as exploiting them often requires unrealistic assumptions such as unfettered physical access to a server. Successful attacks on modern strong encryption thus center on unauthorized access to keys.
Data encryption is a key element of a robust cybersecurity strategy, especially as more businesses move towards the cloud and are unfamiliar with cloud security best practices.
Cybersecurity, an OpenText line of business, and its Voltage Data Privacy and Protection portfolio enable organizations to accelerate to the cloud, modernize IT, and meet the demands of data privacy compliance with comprehensive data encryption software like Voltage SecureData by OpenText and Voltage SmartCipher. CyberRes Voltage portfolio solutions enable organizations to discover, analyze, and classify data of all types to automate data protection and risk reduction. Voltage SecureData provides data-centric, persistent structured data security, while Voltage SmartCipher simplifies unstructured data security and provides complete visibility and control over file usage and disposition across multiple platforms.
Email encryption
Email continues to play a fundamental role in an organization's communications and day-to-day business, and represents a critical vulnerability in its defenses. Too often, the sensitive data transmitted via email is susceptible to attack and inadvertent disclosure. Email encryption represents a vital defense in addressing these vulnerabilities.
In highly regulated environments such as healthcare and financial services, compliance is mandatory but difficult for companies to enforce. This is especially true with email, because end-users strongly resist any changes to their standard email workflow. SecureMail delivers a simple user experience across all platforms, including computers, tablets, and native mobile platform support, with full capability to originate, read, and share secure messages. Within Outlook, iOS, Android, and BlackBerry, for example, senders can access their existing contacts and simply click a "Send Secure" button to send an encrypted email. The recipient receives secure messages in their existing inbox, just as they would with clear-text email.
Encrypting big data, data warehouses and cloud analytics
Unleash the power of big data security, use continuous data protection for privacy compliance, and enable high-scale secure analytics in the cloud and on-premises. Companies are increasingly shifting their workloads and sensitive data into the cloud, transforming their IT environments to hybrid or multicloud. The cloud analytics market is set to grow from USD 23.2 billion in 2020 to USD 65.4 billion by 2025, according to a market research report published by MarketsandMarkets.
Voltage for Cloud Analytics helps customers reduce the risk of cloud adoption by securing sensitive data during cloud migration, and safely enables user access and data sharing for analytics. Its encryption and tokenization technologies help customers comply with privacy requirements by discovering and protecting regulated data at rest, in motion, and in use in cloud warehouses and applications. These solutions also minimize multi-cloud complexity by centralizing control with data-centric protection that secures sensitive data wherever it flows across multi-cloud environments.
Integration with cloud data warehouses (CDWs) such as Snowflake, Amazon Redshift, Google BigQuery, and Azure Synapse enables customers to conduct high-scale secure analytics and data science in the cloud using format-preserved, tokenized data, mitigating the risk of compromising business-sensitive information while adhering to privacy regulations.
PCI security compliance and payment security
Enterprises, merchants, and payment processors face severe, ongoing challenges securing their networks and high-value sensitive data, such as payment cardholder data, to comply with the Payment Card Industry Data Security Standard (PCI DSS) and data privacy laws. Simplify PCI security compliance and payment security in your retail point-of-sale, web, and mobile eCommerce site with our format-preserving encryption and tokenization.
Voltage Secure Stateless Tokenization (SST) is an advanced, patented data security solution that provides enterprises, merchants, and payment processors with a new approach to help assure protection for payment card data. SST is offered as part of the SecureData Enterprise data security platform, which unites market-leading Format-Preserving Encryption (FPE), SST, data masking, and Stateless Key Management to protect sensitive corporate information in a single comprehensive solution.
Protect POS payments data
Encrypt or tokenize retail point-of-sale credit card data upon card swipe, insertion, tap, or manual entry.
SST payment technology
Our Voltage Secure Stateless Tokenization (SST) enables payments data to be used and analyzed in its protected state.
Protect web browser data
Voltage SecureData Web by OpenText encrypts or tokenizes payment data as it is entered in the browser, reducing PCI audit scope.
PCI security for mobile
Voltage SecureData Mobile by OpenText offers PCI security for data captured on a mobile endpoint throughout the payment flow.
What is Encryption and how does it work? | OpenText
However, Koeltl said accessing the contents of their phones, if that occurred, invaded the visitors' privacy rights under the U.S. Constitution.
"The misconduct alleged is a violation of the plaintiffs' reasonable expectation of privacy in the contents of their electronic devices under the Fourth Amendment," the judge wrote.
Koeltl, an appointee of President Bill Clinton, threw out part of the lawsuit that sought money damages against former CIA Director Mike Pompeo. But the judge said the plaintiffs could continue to seek a ruling requiring the spy agency to destroy any records it may have gleaned from the Assange visitors' phones.
Spokespeople for the CIA and for the U.S. Attorneys Office in Manhattan, which is representing the federal government in the case, declined to comment.
The judge's ruling could prompt officials to try to invoke the state-secrets privilege, a legal doctrine that can be used to shut down civil suits that implicate classified information.
The suit was filed in August 2022 on behalf of two attorneys who visited Assange in 2017, Margaret Ratner Kunstler and Deborah Hrbek, along with two journalists: John Goetz with German broadcaster NDR and Charles Glass, a freelance reporter formerly with ABC News.
"We are thrilled that the Court rejected the CIA's efforts to silence the Plaintiffs, who merely seek to expose the CIA's attempt to carry out Pompeo's vendetta against WikiLeaks," the lawyer for the visitors, Richard Roth, said in an email to POLITICO.
The suit tracks allegations in reports by the Spanish newspaper El Pais that a security firm at the Ecuadorian embassy gave the CIA information about Assange's visitors. The data was gleaned from hidden cameras and microphones, and from opening the visitors' phones while they were meeting with the WikiLeaks founder.
The suit accuses Pompeo of spearheading the effort, citing his record of public animosity towards WikiLeaks, the controversial group which anonymously obtains secrets from governments, militaries, banks and political figures and publishes them online, often in raw form.
Critics have accused the group of being a pawn of Russia, but supporters say the organization's practice of radical transparency has been groundbreaking.
As a presidential candidate in 2016, Donald Trump praised the leaks of hacked emails from advisers to his opponent at the time, Hillary Clinton.
Pompeo also welcomed those disclosures at the time, but after being confirmed as CIA chief the following year, he declared WikiLeaks to be a "hostile intelligence service" and spurred government-wide efforts to target the organization and Assange.
Assange, an Australian citizen, entered the Ecuadorian embassy in London in 2012 and was granted asylum while he was on bail pending efforts by the Swedish government to extradite him to face a rape charge.
That investigation was dropped in 2017, but the U.S. brought criminal charges against him the next year for allegedly conspiring to hack U.S. government computers and to disclose national security secrets.
Ecuador effectively turned Assange over to U.K. officials in 2019, who have been detaining him for the past four years as he fights extradition to the U.S.
Judge: Assange visitors can proceed with spying suit against CIA