The Good Censors – Bloomberg

Niall Ferguson is the Milbank Family Senior Fellow at the Hoover Institution at Stanford University and a Bloomberg Opinion columnist. He was previously a professor of history at Harvard, New York University and Oxford. He is the founder and managing director of Greenmantle LLC, a New York-based advisory firm.

When talking among themselves, Silicon Valley big shots sometimes say weird things. In an internal presentation in March 2018, Google executives were asked to imagine their company acting as a "Good Censor," in order to limit the impact of users behaving badly.

In a 2016 internal video, Nick Foster, Google's head of design, envisioned a goal-driven ledger of all users' data, endowed with its own volition or purpose, which would nudge us to take decisions (say, about shopping or travel) that would reflect Google's values as an organization.

If that doesn't strike you as weird, like dialogue from some dystopian science-fiction novel, then you need to read more dystopian science fiction. (Start with Yevgeny Zamyatin's astonishingly prescient We.)

The lowliest employees of big tech companies, the content moderators whose job it is to spot bad stuff online, offer a rather different perspective. "Remember 'We're the free speech wing of the free speech party'?" one of them asked Alex Feerst of OneZero last year, alluding to an early Twitter slogan. "How vain and oblivious does that sound now? Well, it's the morning after the free speech party, and the place is trashed."

And how.

I don't know if, as the New York Post alleged last week, Democratic presidential nominee Joe Biden met with a Ukrainian energy executive named Vadym Pozharskyi in 2015. I don't know if Biden's son Hunter tried to broker such a meeting as part of his board directorship deal with Pozharskyi's firm, Burisma Holdings. And I am pretty doubtful that the meeting, if indeed it happened, was the reason Biden demanded that the Ukrainian government fire its prosecutor general, Viktor Shokin, who was (allegedly but probably not) investigating Burisma. I am even open to the theory that the whole story is bunk, the emails fake, and the laptop and its hard drive an infowars gift from Russia, with love.

What I do know is that if I read the story online and found it compelling, I should have been able to share it with friends. Instead, both Facebook and Twitter made a decision to try to kill the Post's scoop.

Andy Stone, the former Democratic Party staffer who is now Facebook's policy communications manager, announced that his company would be "reducing the distribution" of the Post story. Twitter barred its users from sharing it not only with followers but also through direct messages, locking the accounts of people, including White House press secretary Kayleigh McEnany, who retweeted it.

This is not an isolated incident. In May, Twitter attached a health warning to one of President Trump's tweets. There was uproar at Facebook when chief executive Mark Zuckerberg declined to follow Twitter's lead. Days later, Facebook was pressured into taking down 88 Trump campaign ads that used an inverted red triangle (a Nazi symbol) to attack antifa, the far-left movement. In August, Facebook removed a group with nearly 200,000 members for "repeatedly posting content that violated our policies." The group promoted the QAnon conspiracy theory, which is broadly pro-Trump. Earlier this month, the company deleted all QAnon accounts from its platforms.

Google has been doing the same sort of thing. In June, it excluded the website ZeroHedge from its ad platform because of violations in the comments sections of stories about Black Lives Matter.

The remarkable thing is not that Silicon Valley is playing a highly questionable role in the election of 2020. It is that the same was true in 2016 and, despite a great many fine words and some minor pieces of legislation, Americans did nothing about it.

Far from addressing the glaring problems created by the rise of the network platforms that now dominate the American (and indeed the global) public sphere, we largely decided to shut our eyes and ears to them. In the past 10 months, I've read as many op-ed articles and reports about this election as I can stand. I'm staggered by how few even mention the role of the internet and social media. (Kevin Roose's work on the conservative dominance of Facebook shared content is an honorable exception.) You would think it was still the 1990s, as if this contest will be decided by debates on television, newspaper endorsements or stump speeches, and accurately predicted by opinion polls. (Actually, make that the 1960s.)

Yet the new role of social media is staring us in the face (literally). The number of U.S. Facebook users was 240 million in 2019, more than 72% of the population. Adults spend an average of 75 minutes of each day on social media. Half that time is on Facebook. Google accounts for 88% of the U.S. search-engine market, and 95% of all mobile searches. Between them, Google and Facebook captured a combined 60% of U.S. digital-ad spending in 2018.

The top U.S. tech companies are now among the biggest businesses on earth by market capitalization. But their size is not the important thing about them. Earlier this month, the House Judiciary Committee's Antitrust Subcommittee released the findings of its 16-month-long investigation into Big Tech. The conclusion? Apple, Amazon, Google and Facebook each possess "significant market power over large swaths of our economy. In recent years, each company has expanded and exploited their power of the marketplace in anticompetitive ways."

Cue years of antitrust actions that will enrich a great many lawyers and have minimal consequences for competition, like the ultimately failed attempt 20 years ago to prevent Microsoft from dominating software.

An antitrust action against Amazon is doomed. Consumers love the company. It has measurably reduced the prices of innumerable products as well as rendering shopping in bricks-and-mortar stores an obsolescent activity. Good luck, too, with breaking up Google. Even the much less trusted Facebook (according to polls) will be hard to dismantle without a complete transformation of the way the courts apply competition law. It's free, for heaven's sake. And there are network effects on the internet that can't be wished away by judges.

Is it stupidity or venality that has convinced America's legislators that antitrust is the answer to the problem of Big Tech? A bit of both, I suspect. Either way, it's the wrong answer.

The core problem is not a lack of competition in Silicon Valley. It is that the network platforms are now the public sphere. Every other part of what we call the media (newspapers, magazines, even cable TV) is now subordinated to them. In 2019, the average American spent 6 hours and 35 minutes a day using digital media, more than television, radio and print put together.

Not only do the big tech companies dominate ad revenue, they drive the news cycle. In 2017, two-thirds of American adults said they got news from social media sites. A Pew study showed that, at the end of 2019, 18% of them relied primarily on social media for political news. Among those aged 30 to 49, the share was 40%; among those aged 18 to 29, it was 48%. The pathologies that flow from this new reality are numerous. Antitrust actions address none of them.

"I thought once everybody could speak freely and exchange information and ideas, the world is automatically going to be a better place," Evan Williams, one of the founders of Twitter, told the New York Times in 2017. "I was wrong about that." Indeed, he was.

Subject to the most minimal regulation in their country of origin (far less than the TV networks in their heyday), the network platforms tend, because of their central imperative to sell the attention of their users to advertisers, to pollute national discourse with a torrent of fake news and extreme views. The effects on the democratic process, not only in the U.S. but all over the world, have been deeply destabilizing.

Moreover, the vulnerability of the network platforms to outside manipulation has posed and continues to pose a serious threat to national security. Yet half-hearted and ill-considered attempts by the companies to regulate themselves better have led to legitimate complaints that they are restricting free speech.

How did we arrive at this state of affairs, when such important components of the public sphere could operate solely with regard to their own profitability as attention merchants? The answer lies in the history of American internet regulation, to be precise in Section 230 of the 1934 Communications Act, as amended by the 1996 Telecommunications Act, which was enacted after a New York court held the online service provider Prodigy liable for a user's defamatory posts.

Previously, a company that managed content was classified as a publisher, and subject to civil liability, creating a perverse incentive not to manage content at all. Thus, Section 230(c), "Protection for 'Good Samaritan' blocking and screening of offensive material," was written to encourage nascent firms to protect users and prevent illegal activity without incurring massive content-management costs. It states:

1. No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

2. No provider or user of an interactive computer service shall be held liable on account of any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing or otherwise objectionable.

In essence, Section 230 gave and still gives websites immunity from liability for what their users post (under-filtering), but it also protects them when they choose to remove content (over-filtering). The idea was to split the difference between publisher's liability, which would have stunted the growth of the fledgling internet, and complete lack of curation, which would have led to a torrent of filth. The surely unintended result is that some of the biggest companies in the world today are utilities when they are acting as publishers, but publishers when acting as utilities, in a way rather reminiscent of Joseph Heller's Catch-22.

Here's how Catch-22 works. If one of the platforms hosts content that is mendacious, defamatory or in some other way harmful, and you sue, the Big Tech lawyers will cite Section 230: "Hey, we're just a tech company, it's not our malicious content." But if you write something that falls afoul of their content-moderation rules and duly vanishes from the internet, they'll cite Section 230 again: "Hey, we're a private company, the First Amendment doesn't apply to us."

Remember the "good censor"? Another influential way of describing the network platforms is as the "New Governors." That creeps me out the way Zuckerberg's admiration of Augustus Caesar creeps me out.

For years, of course, the big technology companies have filtered out child pornography and (less successfully) terrorist propaganda. But there has been mission creep. In 2015, Twitter added a new line to its rules that barred "promoting violence against others on the basis of race, ethnicity, national origin, religion, sexual orientation, gender, gender identity, age, or disability." Repeatedly throughout the Trump presidency (for example, after the violence in Charlottesville, Virginia, in 2017) there have been further modifications to the platforms' terms of service and community standards, as well as to their non-public content moderation policies.

There is no need to detail all the occasions in recent years when mostly right-leaning content was censored, buried far down the search results, or demonetized. The key point is that, in the absence of a coherent reform of the way the network platforms are themselves governed, there has been a dysfunctional tug-of-war between the platforms' spasmodic and not wholly sincere efforts to fix themselves and the demands of outside actors (ranging from the German government to groups of left-wing activists) for more censorship of whatever they deem to be hate speech.

At the same time, the founding generation of Silicon Valley entrepreneurs, most of whom had libertarian inclinations, have repeatedly yielded to internal pressure from their younger employees, schooled in the modern campus culture of no-platforming any individuals whose ideas they consider unsafe. In the words of Brian Amerige, whose career at Facebook ended not long after he created a "FBers for Political Diversity" group, the company's employees are "quick to attack, often in mobs, anyone who presents a view that appears to be in opposition to left-leaning ideology."

The net result seems to be the worst of both worlds. On the one hand, conspiracy theories such as "Plandemic" flourish on Facebook and elsewhere. On the other, the network platforms arbitrarily intervene when a legitimate article triggers the hate-speech-spotting algorithms and the content-moderating grunts. (As one of them described the process: "I was like, I can just block this entire domain, and they won't be able to serve ads on it? And the answer was, Yes. I was like, But I'm in my mid-twenties.")

At a lecture at Georgetown University in October 2019, Zuckerberg pledged to "continue to stand for free expression" and against "an ever-expanding definition of what speech is harmful." But even Facebook has had to ramp up the censorship this year. The bottom line is that the good censors are not very good and the new governors can't even govern themselves.

Two years ago, I wrote a lengthy paper on all this with a well-worn title, "What Is to Be Done?" Since then, almost nothing has been done, beyond some legislative tinkering at the margins. The public has been directed down a series of blind alleys: not only antitrust, but also net neutrality and an inchoate notion of tighter regulation. In reality, as I argued then, only two reforms will fix this godawful mess.

First, we need to repeal or significantly amend Section 230, making the network platforms legally liable for the content they host, and leaving the rest to the courts. Second, we need to impose the equivalent of First Amendment obligations on the network platforms, recognizing that they are too dominant a part of the public sphere to be able to regulate access to it on the basis of their own privately determined and almost certainly skewed community standards.

To such proposals, Big Tech lawyers respond by lamenting that they would massively increase their clients' legal liabilities. Yes. That is the whole idea. The platforms will finally discover that there are risks to being a publisher and responsibilities that come with near-universal usage.

In recent years, these ideas have won growing support, and not only among Republican legislators such as Senator Josh Hawley. In the words of Judge Alex Kozinski in Fair Housing Council v. Roommate.com (2008), the Internet "has outgrown its swaddling clothes and no longer needs to be so gently coddled." He was referring to Section 230, which gives the tech giants a now-indefensible advantage over traditional publishers, while at the same time empowering them to act as censors.

While Section 230 protects internet companies from liability over removing any content that they believed to be "obscene, lewd, lascivious, filthy, excessively violent, harassing or otherwise objectionable," successive court rulings have clearly established that the last two words weren't intended to permit discrimination against particular political viewpoints.

Meanwhile, in Packingham v. North Carolina (2017), the Supreme Court overturned a state law that banned sex offenders from using social media. In the opinion, Justice Anthony Kennedy likened internet platforms to "the modern public square," arguing that it was therefore unconstitutional to prevent even sex offenders from accessing, and expressing opinions on, social-network platforms. In other words, despite being private companies, the big tech companies have a public function.

If the network platforms are the modern public square, then it cannot be their responsibility to remove hateful content (as 19 prominent civil rights groups demanded of Facebook in October 2017), because hateful content, unless it explicitly instigates violence against a specific person, is protected by the First Amendment.

Unfortunately, this sea change has come too late for root-and-branch reform to be enacted under the Trump administration. And, contemplating the close links between Silicon Valley and Senator Kamala Harris, I see little prospect of progress (other than down the antitrust cul-de-sac) if she is elected vice president next month. Quite apart from the bountiful campaign contributions Harris and the rest of the Democratic Party elite receive from Big Tech, they have no problem at all with Facebook, Twitter and company seeking to kill stories like "Huntergate."

In 1931, British Prime Minister Stanley Baldwin accused the principal newspaper barons of the day, Lords Beaverbrook and Rothermere, of aiming at "power, and power without responsibility, the prerogative of the harlot throughout the ages." (The phrase was his cousin Rudyard Kipling's.) As I contemplate the under-covered and overmighty role that Big Tech continues to play in the American political process, I don't see good censors. I see big, bad harlots.

(Updated to clarify details of Ukrainian prosecutor's investigation in sixth paragraph of article published Oct. 18)

This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.

Expert available to speak on how magazine censorship helped strengthen the LGBTQ community – Newswise

Newswise: The Stonewall Riots often are cited as the beginning of the LGBTQ movement. However, recent research from Jason Shepard, chair and professor of communications at Cal State Fullerton, highlights how First Amendment law was both a weapon and shield in the expansion of LGBTQ rights.

Shepard can provide an in-depth perspective and research-based context to LGBTQ rights discussions. His research examines the legal history of three cases from the 1950s and early 1960s in which the Supreme Court overturned the censorship of magazines by and for sexual minorities, and how that allowed LGBTQ Americans to develop identity and community and laid the foundation for the future of LGBTQ rights law. Shepard summarizes his research in this one-minute video.

ONE magazine, published from 1953 to 1967, was the first widely distributed LGBT magazine in the U.S. It was banned from the mail in 1954.

"The cases I examined are another reminder of how powerful the U.S. Supreme Court is and has been in the history of our democracy. In 1958, the Supreme Court decided that America's first gay-rights magazine couldn't be banned from the U.S. mail. The decision allowed ONE magazine to connect gays and lesbians to a broader subculture that later launched the gay liberation movement."

Read Shepard's research in "The First Amendment and the Roots of LGBTQ Rights Law: Censorship in the Early Homophile Era, 1958-1962" published in the William & Mary Journal of Race, Gender and Social Justice.

Jason Shepard, chair and professor of communications

Shepard teaches courses in communications law and journalism. He has authored several books, including "Privileging the Press: Confidential Sources, Journalism Ethics and the First Amendment," "Major Principles of Media Law," and "Ethical Issues in Communication Professions: New Agendas in Communications." He writes "Online Legalities," a regular column in California Publisher. Shepard also has published research in Yale Journal of Law and Technology, Communication Law and Policy, Journal of Media Law & Ethics, Nexus Journal of Law and Policy, and Drake Law Review. Shepard's research has been cited widely, including by a federal appellate court and in the New York Times.

Milot's Musings: Censor This – The Daily Advance

"We have three major breaking stories for you tonight." Many news anchors start out their shows like this every night, but the stories do not usually all qualify as "major."

But last week, "major" does not begin to characterize the stories that broke like giant waves crashing ashore one after the other.

The confirmation hearings of Amy Coney Barrett soaked up most of the air time for three days and were truly newsworthy, but then the New York Post broke a sensational front-page news story that said emails had been found on a Hunter Biden computer hard drive that could destroy Joe Biden's candidacy.

This was like one of those monster Banzai Pipeline waves at a surfing competition on the north shore of Oahu.

These and subsequent emails posited Hunter Biden connections with foreign parties in Ukraine, Russia, and China that resulted in enrichment of the Biden family, including the former Vice-President.

Louisiana Senator John Kennedy characterized this as a message to the world that the United States of America "can be bought like a sack of potatoes." In his usually colorful language, Kennedy said these accusations are "as serious as four heart attacks and a stroke."

As explosive as this story was, it was met with total silence in the establishment media. Worse, links to it were blocked by Facebook and Twitter. Overnight, the venality of the Bidens was no longer the big story: censorship was.

Twitter CEO Jack Dorsey quickly apologized for blocking the Post story, but he's going to have to appear before the Senate Judiciary Committee to explain his company's blackout of a story damaging to Biden's campaign.

There is no acceptable explanation. The fact is that social media monopolies are in the tank for the Democrats and will justify any perverse action to help them gain power. We are accustomed to the lying, cheating, and dirty tricks that have earned politicians the lowest trustworthiness rankings among all segments of our society. But censorship of a story in the press is more than that. It is a direct and corrosive attack on our democracy.

This is especially true in this case because we are in the midst of a presidential election. When Twitter censored the New York Post story, it effectively cut off a popular source of news for millions of voters on the day they went to the polls. It may or may not have an effect on the outcome of this election.

But the point is that Twitter's censorship of a story "as serious as four heart attacks and a stroke" was perniciously partisan and should be condemned by everyone, even any of the yet-silent media.

The New York Post is a conservative newspaper. Within recognized legal limits, it is entitled to the fundamental right of press freedom spelled out so clearly in the First Amendment to the Constitution, just as its rivals at the liberal New York Times and Washington Post are entitled to it. Twitter violated that right.

Our Founding Fathers recognized that the exchange of ideas, even contrarious ones, is essential in a free society, and that the freedom to express these ideas in the press must be protected.

They would be appalled, and saddened, at the sight of social media giants' willingness to crack a fundamental pillar of our democracy to achieve their partisan goals.

Claude Milot of Hertford worked in the publishing business for 33 years.

Oh, Frak: Avoiding the Censors the SFF Way – tor.com

Every culture has its own set of taboos surrounding bodily functions, religion, and naming things. In Anglophone cultures, our taboos generally involve waste excretion, particular body parts, sexual acts, and Christian deities. But we can still talk about these things (with varying degrees of comfort) by replacing them with non-taboo words, or we can soften them to non-taboo forms by changing something about the word itself. This column will unavoidably include cusswords, though I will try to keep them to a minimum.

Taboo words in English have non-taboo counterparts and, in many cases, elevated/clinical terms as well. (As a native US-English speaker, I'm focusing on that variety, but I'll mention some British as well.) Take, for example, the word "feces." It's a dry, clinical, neutral term for solid bodily waste. We also have "crap," less clinical, slightly vulgar but still allowed on TV; "poo" or "poop" and all its variants, a childhood word; and the delightful, vulgar Germanic word "shit." Each of these words has situations where it's appropriate and inappropriate, and they all indicate something about the person using them (and the situation they're in).

Medical records will use "feces" (or possibly "stool," "excrement," or "excreta") but none of the others; when people step in dog feces on the street, they don't refer to it as dog feces, but use one of the other words, like dog crap, dog poo, doggy doo-doo, dog turds, or dog shit. Some of these things are more okay to say in front of a child than others, and one of them is too vulgar for broadcast TV.

When used as an exclamation or interjection, we don't use "feces," "turd," or "doo-doo"; these are strongly tied to the object. Instead, we'll say "crap," "shit," or "poop," depending on our personal preferences and who's around us at the time. I try really hard to avoid cussing in front of my five-year-old niece, because she's a sponge for that sort of thing, and we don't need her to go to school sounding like a sailor.

We can also say "shoot" or "sugar" or something similar, where you can still recognize the vulgarity, but it's been changed. When I was a young 3dgy teen, my mom would give me this Look and say, "it's gosh darn it." She still doesn't like me cussing, but I'm 44 now, and here I am, writing about swear words.

Reading Shakespeare as a teen, I saw all these "zounds!" and the like, and had no idea what it meant, but, based on context, I could tell it was some sort of swear. I pronounced it rhyming with "sounds," because that's what it looked like, but I later learned it was derived from "God's wounds," and thus a blasphemous swear. "Bloody" also stems from religion: God's blood. "Jiminy cricket" is also a deformation of a blasphemous swear, as are "gee," "geez/jeez," and a whole plethora of words.

As language users, we thus have a few tricks in our bag for how to avoid taboos, and we use them all the time. In many cases, we use avoidance words without even knowing that they're avoiding something!

When script writers had to avoid bad words because of FCC broadcast rules, they could take a variety of tacks, just like we do every day. You get lots of "oh, geez" and "shoot" or "freaking" in your contemporary (and historical) fare, but in SFF-land, writers have another trick up their sleeves: alien languages, or even made-up future-English words. That's where our fraks and frells come in (via Battlestar Galactica and Farscape, respectively). Sometimes you get other inventive ways of evading the censors, like Joss Whedon did with Firefly and having people cuss in Chinese.

Of course, now, with the rise of Netflix and Prime originals, people can swear to their hearts' content. In the Expanse books, Chrisjen Avasarala uses "fuck" freely and creatively. In the SyFy seasons, she doesn't swear much, but once the show switched over to Amazon Prime, she now gets to use her favorite word almost as much as in the books. It's delightful to see this respectable grandmother and politician with a gravelly voice talking like a sailor, and I love it.

Of course, evading the censors isn't the only reason to deform taboo words. Some authors use invented swears as worldbuilding or because they aren't as potty-mouthed as I am.

In his book The Widening Gyre, Michael R. Johnston has the main character comment that Kelvak, one of the non-human languages, is his favorite to curse in, because there's nothing as satisfying as the harsh consonants in the word "skalk."

There's something to that statement. The two most common vulgarities, "shit" and "fuck," are characterized by a fricative at the word onset and a plosive as the coda. A successful deformation of these words, one that leaves the speaker satisfied, follows that pattern. Deformations that are closer to the original are also more satisfying. "Shoot" is more satisfying than "sugar"; "frak" is more satisfying (to me) than "frell." "Judas priest" is more satisfying (and blasphemous) than "jiminy cricket." The Kelvak word "skalk" starts with a fricative (albeit in a cluster) and ends with a plosive, so it feels sweary.

You could theorize that there's some sort of sound-symbolic connection with the fricative-vowel-plosive combination, where the plosive represents a closing or hitting, but that gets a bit Whorfian. We don't need psychological justification for it.

So: what are some of your favorite SFF swears and taboo deformations? I'm partial to "Bilairy's balls!" from Lynn Flewelling's Nightrunner series, in which Bilairy is the god of the dead.

CD Covington has master's degrees in German and Linguistics, likes science fiction and roller derby, and misses having a cat. She is a graduate of Viable Paradise 17 and has published short stories in anthologies, most recently the story "Debridement" in Survivor, edited by Mary Anne Mohanraj and J.J. Pionke.

When Encryption Was a Crime: The 1990s Battle for Free Speech in Software – Reason

This is the third installment in Reason's four-part documentary series titled "Cypherpunks Write Code." Watch part 1 and part 2.

In 1977, a team of cryptographers at MIT made an astonishing discovery: a mathematical system for encrypting secret messages so powerful that it had the potential to make government spying effectively impossible.

Before the MIT team could publish a description of how this system worked, the National Security Agency (NSA) made it known that doing so could be considered a federal crime. The 1976 Arms Export Control Act (AECA) made it illegal to distribute munitions, a category that included cryptography, in other countries without a license. The penalty for violating AECA was up to 10 years in prison or a fine of up to one million dollars.

It was the beginning of the "crypto wars": the legal and public relations battle between the intelligence community and privacy activists over the rights of citizens to use end-to-end encryption. Many of those who were involved in the crypto wars were associated with the "cypherpunk movement," a community of hackers, hobbyists, and computer scientists, which the mathematician Eric Hughes once described as "cryptography activists."

The crypto wars continue to this day: On October 11, 2020, U.S. Attorney General William P. Barr issued a joint statement with officials from six other countries that implored tech companies not to use strong end-to-end encryption in their products so that law enforcement agencies can access the communications of their customers.

The government's stance traces back to World War II, when Allied code-breakers helped secure victory by deciphering secret messages sent by the Axis powers. "And that is the origin of the regulations that said, 'This is munition, this is an item of war,'" civil liberties activist John Gilmore told Reason. "And the problem was that they didn't really take freedom of speech, freedom of inquiry, academic freedom, into account in that."

In 1977, the Institute of Electrical and Electronics Engineers, which was planning to hold a conference on cryptography at Cornell University, received a letter from an NSA employee posing as a concerned citizen, who wrote that the U.S. government considered these mathematical systems "modern weapons technologies" and that distributing them was a federal crime. The letter caused widespread alarm in the cryptography community.

In 1977, the computer scientist Mark S. Miller was a 20-year-old student at Yale. Like many future cypherpunks, he read about the breakthrough at MIT in Martin Gardner's "Mathematical Games" column published in Scientific American. The article laid out the astounding details of what "RSA," as it was called after its co-discoverers, Ron Rivest, Adi Shamir, and Leonard Adleman, made possible. Gardner omitted the technical details, but he offered his readers the opportunity to mail in a self-addressed stamped envelope to get a full description. The authors received 7,000 requests for the memo but didn't end up distributing the paper because of the NSA's threats.

"I decided quite literally that they are going to classify this over my dead body," Miller recalls. He traveled to MIT and got his hands on the unpublished paper describing how RSA worked. Then he went to "a variety of different copy shops, so I wasn't making lots of copies in any one place" and mailed them anonymously "to home and hobbyist computer organizations and magazines all across the country."

"I gave copies of the paper to some select friends of mine," Miller told Reason, "and I told them, 'if I disappear, make sure this gets out.'"

The following year, the RSA paper was published in Communications of the A.C.M. "And the world has been on a different course ever since it got published," says Miller.

But the crypto wars were just getting started. By the early 1990s, after the launch of the commercial internet and the web, RSA and public-key cryptography were no longer a rarified topic; they were privacy salvation. Internet users could use RSA to fully disguise their online activities from government spies. This sent the intelligence community once again scrambling to stop the dissemination of this powerful tool.
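
To make the underlying idea concrete, here is a minimal Python sketch of the arithmetic behind RSA-style public-key encryption, using tiny textbook primes. It is purely illustrative and not drawn from the original MIT paper: real RSA uses primes hundreds of digits long plus padding schemes, and numbers this small offer no security at all.

    # Toy illustration of the public-key idea behind RSA (requires Python 3.8+).
    p, q = 61, 53                # two secret primes
    n = p * q                    # public modulus (3233)
    phi = (p - 1) * (q - 1)      # 3120
    e = 17                       # public exponent, coprime with phi
    d = pow(e, -1, phi)          # private exponent (modular inverse, 2753)

    def encrypt(m: int) -> int:
        """Anyone who knows the public pair (n, e) can encrypt."""
        return pow(m, e, n)

    def decrypt(c: int) -> int:
        """Only the holder of the private exponent d can decrypt."""
        return pow(c, d, n)

    message = 42
    ciphertext = encrypt(message)
    assert decrypt(ciphertext) == message

The asymmetry is the whole point: publishing (n, e) lets anyone send you an unreadable message, while recovering d from the public values requires factoring n, which is believed to be computationally infeasible at real-world key sizes.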

In 1991, a software developer named Phil Zimmermann released the first relatively easy-to-use messaging system with end-to-end encryption, which was called Pretty Good Privacy, or PGP. So the U.S. Justice Department launched a three-year criminal investigation of Zimmermann on the grounds that by making his software accessible outside the country, he could be guilty of exporting weapons.

The NSA made the public case that Zimmermann's software would be used by child molesters and criminals. "PGP, they say, is out there to protect freedom fighters in Latvia," Stewart A. Baker, the NSA's general counsel, remarked during a panel discussion at the 1994 Conference on Computers, Freedom, and Privacy. "But the fact is, the only use that has come to the attention of law enforcement agencies is a guy who was using PGP so the police could not tell what little boys he had seduced over the 'net."

"Child pornographers, terrorists, money launderers, take your pickthese are the people who will be invoked as the bringers of death and destruction," Tim May, a former Intel physicist and co-founder of the cypherpunk movement, told Reason. "It's true" that these individuals would make use of end-to-end encryption, May conceded, "but all technologies have had bad effects. Telephones led to extortion, death threats, bomb threats, kidnapping cases. Uncontrolled publishing of books could allow satanic books to appear."

In his 1994 essay "The Cyphernomicon," May referred to terrorists, pedophiles, drug dealers, and money launderers as "The Four Horsemen of the Infocalypse." This fearmongering was the government's main playbook for how "privacy and anonymity [could] be attacked."

The cypherpunks argued that although PGP was encryption software, it was protected by the First Amendment because under the hood it was just a written series of instructions to be carried out by a machine.

The economist and entrepreneur Phil Salin was one of the first to argue this point in an influential 1991 essay titled "Freedom of Speech in Software." Salin wrote that "[r]estraint on freedom of expression of software writers is anathema in a free society and a violation of the First Amendment."

"Encryption can't be controlled whether or not it's powerful or has impacts on the government because it's free speech," says Gilmore, a co-founder of both the cypherpunk movement and the Electronic Frontier Foundation. In the 1990s, he risked going to jail in his campaign to force the government to acknowledge that regulating encryption violated the First Amendment.

"We basically had a community of a thousand people scattered around who were all trying different ideas on how to get around the government to get encryption to the masses," Gilmore recalls.

The Clinton administration noted in a 1995 background congressional briefing that "Americans have no constitutional right to choose their own method of encryption" and pushed for legislation that would require companies to build in a mechanism for law enforcement agencies to break in.

"We're in favor of strong encryption, robust encryption," then FBI DirectorLouis J. Freeh said at a May 11, 1995, Senate hearing. "We just want to make sure we have a trap door and a key under some judge's authority where we can get there if somebody is planning a crime."

The cypherpunks looked for ways to undercut the government's case by pointing out the similarities between encryption software and other forms of protected speech. While under federal investigation for making his software available for download outside the U.S., to prove a point Zimmermann convinced MIT press to mirror his action in the analog world, by printing out the PGP source code, adding a binding, and shipping it to European bookstores.

"MIT was at that time like three times as old as NSA, and it's at least as large a player in the national security community," says the cryptographer Whitfield Diffie, who co-discovered the concept of public-key cryptography on which RSA is based. 'It's one thing to try to go and step on little Phil Zimmermann; it's quite another thing to go after MIT."

"The government knew if they went to court to suppress the publication of a book from a university that they would lose and they would lose in a hurry," Gilmore recalls.

"There were people who actually got encryption code tattooed on their bodies and then started asking, 'Can I go to a foreign country?,'" Gilmore says. "We printed up T-shirts that had encryption code on them and submitted them to the government office of munitions control'Can we publish this T-shirt?' Ultimately, they never answered that query because they realized to say 'no' would be to invite a lawsuit they would lose and so the best answer was no answer at all."

In 1996, the Justice Department announced that it wouldn't pursue criminal charges against Phil Zimmermann and major legal victories came in two separate federal court decisions, which found that encryption is protected by the First Amendment.

"The crypto wars is still ongoing," says Gilmore. "What we won in the first rounds was the right to publish it and the right to put it in mass-market software, but what we didn't actually do is deploy it in mass-market software. Now there are major companies building serious encryption into their products, and we're getting a lot of pushback from the government about this."

In the early '90s, at the same time that Gilmore was fighting his legal battle for freedom of speech in software, the cypherpunks were exploring cryptography's potential in the context of collapsing political borders and the rise of liberal democracy. Part four in Reason's series, "Cypherpunks Write Code," will look at how those dreams turned to disillusionment, and the rebirth of the cypherpunk movement after the invention of bitcoin.

Written, shot, edited, narrated, and graphics by Jim Epstein; opening and closing graphics and Mark S. Miller/RSA graphics by Lex Villena; audio production by Ian Keyser; archival research by Regan Taylor; feature image by Lex Villena.

Music: "Crossing the Threshold - Ghostpocalypse" and "Darkest Child" by Kevin MacLeod are licensed under a Creative Commons Attribution license; "High Flight" by Michele Nobler licensed from Artlist; "modum" by Kai Engel used under Creative Commons.

Crypto craze may drive regulators to back their use – Mint

Central bankers globally are wary of rising interest in cryptocurrencies since Facebook decided to launch one of its own. Besides, PayPal Holdings Inc. allowing its customers to use virtual currency on Wednesday has added to investor exuberance. Mint explores the issue.

How's cryptocurrency trade carried out?

Cryptocurrencies, including Bitcoin, are digital currencies wherein the transaction records are verified and maintained on a decentralized system that uses cryptography, replacing a central authority for maintaining records. Bitcoin is one among several such products, but is one of the most widely known and used cryptocurrencies. The technology is based on a blockchain, or a distributed public ledger. There are several cryptocurrency exchanges that allow trading using actual money, essentially allowing the cryptocurrency to be converted to cash.
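
To make the "distributed public ledger" idea concrete, here is a short, illustrative Python sketch of how blocks can be chained together with cryptographic hashes so that past records cannot be quietly altered. It is a toy model under simplifying assumptions; real networks such as Bitcoin add digital signatures, consensus rules such as proof of work, and peer-to-peer replication on top of this structure.

    # Minimal hash-linked ledger: each block commits to the hash of the
    # previous block, so editing history breaks every later link.
    import hashlib
    import json

    def block_hash(block: dict) -> str:
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    def add_block(chain: list, transactions: list) -> None:
        prev = block_hash(chain[-1]) if chain else "0" * 64
        chain.append({"prev_hash": prev, "transactions": transactions})

    def is_valid(chain: list) -> bool:
        return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
                   for i in range(1, len(chain)))

    ledger = []
    add_block(ledger, [{"from": "alice", "to": "bob", "amount": 1}])
    add_block(ledger, [{"from": "bob", "to": "carol", "amount": 1}])
    print(is_valid(ledger))                        # True
    ledger[0]["transactions"][0]["amount"] = 100   # tamper with history
    print(is_valid(ledger))                        # False: tampering is visible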

Where do they derive their value from?

Normal currencies derive their value by fiat and are thus known as fiat currencies. That is, a ₹2,000 note has a nominal value of ₹2,000 because of the government's fiat. However, in the case of Bitcoin, there is no such central authority that determines its value. The value is determined by the cryptocurrency exchanges or markets where the forces of demand and supply interact, leading to price discovery. This is one primary reason why cryptocurrencies are highly volatile in terms of their value, which undermines the critical store-of-value function provided by fiat money.

What is the risk of having private cryptocurrency?

Most central bankers regulate the amount of money supply and determine interest rates to ensure price stability, but they will not be able to control the supply of a private cryptocurrency. Besides, there are concerns over the use of cryptocurrencies to finance illegal activities, which further makes governments wary of such private cryptocurrencies.

Will central banks look at digital currencies?

The key difference over the last few months has been in central banks' approach towards cryptocurrencies. There have been talks on the possibility of central bank-backed digital currencies. The Bank for International Settlements, along with seven central banks, has published a report that lays out the norms for central bank-backed digital currencies (CBDCs). The key requirement is that CBDCs must complement cash and other legal tender instead of replacing them, to ensure monetary and financial stability.

How will such digital currencies benefit?

The key advantage is that it will serve as a medium of exchange and a store of value. That will also encourage its use as a means of payment and would eventually improve the efficiency of payments. An interest-bearing CBDC could also be key to improving monetary policy transmission. The benefit would also come in the form of reduced transaction costs for digital transactions, which could be instrumental in improving financial inclusion across the globe.

15%+ growth for Encryption Software Market by 2024, global revenue to reach $20bn – News by Decresearch

An increasing number of data breaches and cybercrimes, along with supportive government policies, will enable the encryption software market to witness bullish growth over the coming years. This can be validated by the draft of an encryption law published by China's State Cryptography Administration (SCA) in November 2019. The draft was issued to bring about encryption in the private & public sectors and set guidelines on the use of cryptography for protecting national security.

Cybersecurity vendors are addressing evolving threats by adding email, mobile, and disk encryption capabilities to their security suites, and this higher implementation of encryption will spur encryption software market growth.

Get sample copy of this research report @ https://www.decresearch.com/request-sample/detail/4484

Rising instances of data breaches and cyberespionage, coupled with intensifying concerns about safeguarding critical data in various sectors including BFSI, healthcare, and defense, are likely to drive the global encryption software market outlook.

According to the Health Insurance Portability and Accountability Act Journal, the healthcare sector encounters the highest breach costs, accounting for an average mitigation cost of USD 6.45 million, globally.

Supportive government initiatives to combat the issue of cybercrime will support industry growth. For instance, in 2019, China SCA (State Cryptography Administration) published a draft of an encryption law, which will regulate encryption in the public and private sectors. The draft has also set guidelines on the usage of cryptography to safeguard national security.

Apart from the healthcare sector, the retail sector is also likely to observe heavy uptake of encryption solutions, as the sector extensively uses third-party services to support online transactions. This has resulted in an increased number of data breaches and the exploitation of sensitive customer information, such as bank account and credit card details. To keep pace with constantly evolving cyber risks, cybersecurity vendors are implementing mobile, disk, and email encryption capabilities with their security suites. The global encryption software market is forecast to cross the USD 21 billion mark by 2026.

Increasing dependence on electronic media as a means of communication also comes with the risk of data breaches. The most commonly utilized modern-day form of communication is e-mail. Organizations and companies of all sizes rely on e-mail for communicating confidential matters, such as contract papers, personal data, and business secret passwords.

To secure various aspects of email systems, including content, media attachments and email access, demand for email data protection software will skyrocket. Email data protection software encrypts data at rest as well as in transit, and also supports multi-factor authentication for added security, to ensure that sensitive information is always protected in line with regulatory compliance.
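
As a rough illustration of the "data at rest" part of that description, the Python sketch below encrypts a message body with a symmetric key using the open-source cryptography library's Fernet recipe. It is a generic example, not a depiction of any particular vendor's product; real email protection also involves key management, transport encryption, and the multi-factor authentication mentioned above.

    # Encrypting message content at rest with a symmetric key (illustrative only).
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()      # in practice, kept in a key vault or HSM
    cipher = Fernet(key)

    body = b"Quarterly contract draft attached. Do not forward."
    token = cipher.encrypt(body)     # ciphertext is what gets written to disk

    assert cipher.decrypt(token) == body   # only key holders can read it back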

With increasing uptake of security software to protect data from identity theft, phishing and malware, the email encryption software market is estimated to hold a share of around 25% by 2026.

Growing usage of digital platforms has led to a rising number of cyberattacks on critical data and secret information. According to a report published by the cybersecurity provider Thales eSecurity, around 75% of retailers across the U.S. experienced a data breach in 2018, much more than in 2017 (52%).

The Latin America encryption software market is expected to grow at 18% CAGR over 2020-2026, owing to supportive government initiatives to promote cybersecurity solutions aimed at curbing the increased number of cyberattacks on business-critical infrastructure.

The rapidly evolving threat landscape has compelled governments across the globe to promote digital security to safeguard sensitive information and to prevent theft of the general public's confidential data.

Government agencies in multiple countries, including Brazil, Argentina, and Mexico, are introducing cybersecurity strategies to respond to a plethora of cyberattacks. For instance, the Mexican government presented the National Cyber Security Strategy in collaboration with the CICTE (Inter-American Committee against Terrorism) in 2017. The strategy adopted by the Mexican government aims at establishing best practices to fight against cybercrimes.

Request for customization @ https://www.decresearch.com/roc/4484

In 2019, Mexican institutions, including the National Defense Ministry (Sedena), Mexico Central Bank, the House of Representatives and the Mexico Supreme Court, recorded more than 45 million attempted attacks to access databases and steal information.

Table of Contents (ToC) of the report:

Chapter 5. Encryption Software Market, By Component
5.1. Key trends, by component
5.2. Software
5.2.1. Market estimates and forecast, 2015 - 2026
5.2.2. Endpoint encryption
5.2.2.1. Market estimates and forecast, 2015 - 2026
5.2.3. Email encryption
5.2.3.1. Market estimates and forecast, 2015 - 2026
5.2.4. Cloud encryption
5.2.4.1. Market estimates and forecast, 2015 - 2026
5.3. Service
5.3.1. Market estimates and forecast, 2015 - 2026
5.3.2. Training & consulting
5.3.2.1. Market estimates and forecast, 2015 - 2026
5.3.3. Integration & maintenance
5.3.3.1. Market estimates and forecast, 2015 - 2026
5.3.4. Managed service
5.3.4.1. Market estimates and forecast, 2015 - 2026

Chapter 6. Encryption Software Market, By Deployment Model
6.1. Key trends, by deployment model
6.2. On-premise
6.2.1. Market estimates and forecast, 2015 - 2026
6.3. Cloud
6.3.1. Market estimates and forecast, 2015 - 2026

Chapter 7. Encryption Software Market, By Application
7.1. Key trends, by application
7.2. IT & telecom
7.2.1. Market estimates and forecast, 2015 - 2026
7.3. BFSI
7.3.1. Market estimates and forecast, 2015 - 2026
7.4. Healthcare
7.4.1. Market estimates and forecast, 2015 - 2026
7.5. Retail
7.5.1. Market estimates and forecast, 2015 - 2026
7.6. Government & public sector
7.6.1. Market estimates and forecast, 2015 - 2026
7.7. Manufacturing
7.7.1. Market estimates and forecast, 2015 - 2026
7.8. Others
7.8.1. Market estimates and forecast, 2015 - 2026

Browse complete Table of Contents (ToC) of this research report @ https://www.decresearch.com/toc/detail/encryption-software-market

How to prioritize open-source risk with susceptibility analysis – TechBeacon

It's rare today to find an application that isn't built on open source. Using open-source components reduces time-to-market. It allows you to focus on what you do well and not worry about what you don't do well. And it lets you take advantage of the skills of many, sometimes thousands, of developers who have built a common component. You'd have to be insane to write all your code yourself.

Not only is more open source code being used, but that code is also more complex, making open-source frameworks and libraries enormous. While all that has enabled developers to offer more fully featured applications, it's also made the software development cycle much more challenging, especially from a security perspective.

In a perfect world, when you encounter a vulnerability in an open-source component, you should be able to upgrade to a vulnerability-free version of it, plug it into your application, and distribute a new release of your app with the clean component.

But we don't live in a perfect world. For example, there isn't always an upgrade that gets rid of a vulnerability. Moreover, open-source components run the gamut from small widgets that perform minor tasks to large frameworks that have millions of lines of code and can take months to upgrade. That's why it's not always realistic to upgrade components that you're using to the next non-vulnerable version.

Susceptibility analysis can help you to address those problems by prioritizing the risk posed to a project by open-source components.

Susceptibility analysis, which looks at your code and determines the impact a vulnerability might have on it, gives you the truest picture of actual risk. That can be a time-saver for developers because it can identify false positives produced by static analysis security tools.

A static tool will identify the vulnerability, but if the vulnerable function isn't being used by the application, it poses little risk to it. Those kinds of false positives can be identified with susceptibility analysis, which is an evolution of open-source analysis. It provides the next level of analytics and data for making smarter decisions about addressing vulnerabilities.

Susceptibility analysis is a nascent technology, with the first commercial product appearing only about 18 months ago. Because it's so new, it isn't in widespread use yet, and it's limited to certain computer languages. Libraries of CVE signatures for susceptibility analysis programs are also in the process of being built.

For example, at Micro Focus, our susceptibility analysis product is focused on Java. We've created signatures for 25,000 Java vulnerabilities. Signatures are composed of the library for the open-source component and the function calls made by a developer to that library.

The signature looks for a function name and determines if a developer has called that function, either directly or indirectly, and if that call makes the code susceptible to attack. That's because you can call a function with a vulnerability in it, but due to the way the code is written, an attacker may not be able to access that vulnerability.
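
The reachability question can be pictured with a small sketch. The call graph, function names, and entry point below are hypothetical, the example is written in Python rather than Java, and commercial signature formats are far richer; still, the core check is the same: can the vulnerable library function be reached, directly or indirectly, from the application's own code?

    # Hypothetical call graph: application functions and the library calls
    # they make. The CVE is assumed to affect widget.parse_rrule.
    from collections import deque

    CALL_GRAPH = {
        "app.handle_request": ["widget.render_calendar", "app.log"],
        "widget.render_calendar": ["widget.format_timestamp"],
        "app.log": [],
        "widget.format_timestamp": [],
        "widget.schedule_event": ["widget.parse_rrule"],  # never called by the app
        "widget.parse_rrule": [],
    }

    def is_reachable(entry_points, vulnerable_fn):
        # Breadth-first search over the call graph from the app's entry points.
        queue, seen = deque(entry_points), set(entry_points)
        while queue:
            fn = queue.popleft()
            if fn == vulnerable_fn:
                return True
            for callee in CALL_GRAPH.get(fn, []):
                if callee not in seen:
                    seen.add(callee)
                    queue.append(callee)
        return False

    print(is_reachable(["app.handle_request"], "widget.parse_rrule"))       # False
    print(is_reachable(["app.handle_request"], "widget.format_timestamp"))  # True

A "False" here is exactly the kind of finding that susceptibility analysis lets you triage away.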

You can implement all sorts of logic that prevents a vulnerability from being exploited even though it still exists in the open-source component. For example, you might be using a component that, in order to be exploited, needs an attacker to feed it JavaScript. However, when you use the component, you add input limitations, as you would do, for instance, if you expected the input to be a credit card number and nothing else. That would keep the vulnerability from being exploited, but not from being the source of a false positive.
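
Here is a minimal sketch of that pattern, with hypothetical names: the function standing in for the vulnerable component would echo whatever markup it is given, but the application validates that the input looks like a card number before the component ever sees it, so the flaw a scanner flags is not exploitable along this path.

    import re

    CARD_RE = re.compile(r"^\d{13,19}$")   # digits only, typical card lengths

    def render_widget(text: str) -> str:
        # Stand-in for a vulnerable open-source call that echoes its input.
        return f"<span>{text}</span>"

    def show_card(user_input: str) -> str:
        if not CARD_RE.fullmatch(user_input):
            raise ValueError("expected a card number")
        return render_widget(user_input)   # only ever sees validated digits

    print(show_card("4111111111111111"))
    # show_card("<script>alert(1)</script>")  # rejected before the component runs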

Half of all known vulnerabilities in open-source components can be triaged as false positives because of the specific use of the components by custom code. It's very unlikely that you're leveraging 100% of a component. You might be using a calendar widget because you need a timestamp for your application, for example, but you don't need to access the scheduling functionality. However, if that's where a vulnerability exists, it shouldn't matter to your app.

As with every issue turned up by a code scan, you have to examine those false positives. Done manually, that can take hours. Susceptibility analysis can shave some hours off the process by using automation to triage vulnerabilities that are irrelevant to the operation of an application.

If you have 20 known vulnerabilities in the components you've used to build your software, you've got a problem. Without susceptibility analysis, you'll need a developer to look at each of the vulnerabilities and figure out whether your code is susceptible to each vulnerability. When that review is finished, you may have five vulnerabilities you need to worry about, while you can forget about the other 15.

An even worse scenario may occur. Instead of investigating the vulnerabilities, your developers may recommend upgrading to eliminate the flaws, a move that could delay the next release of your software for months. So your software schedule gets skewed because you wasted time taking care of vulnerabilities that did not affect the security of your app and upgrading your component before it was necessary to do so.

Both scenarios could have been avoided with susceptibility analysis, eliminating the need for an investigation or an upgrade of open-source components. You would have known immediately which issues had to be fixed and which ones could safely be ignored.

Right now, susceptibility analysis is tied to static application security testing (SAST), but in the future it could be used with dynamic application security testing (DAST), too. SAST is the first foray into susceptibility analysis. It does what it does well: finding functions and following code flow paths to determine whether unvalidated data reaches functions or methods.

As you look for other edge cases and other vulnerabilities that can't be found unless an application is running, you'll need dynamic testing. That's why the evolution of susceptibility analysis will eventually include DAST. You'll inevitably need dynamic analysis to validate certain classes of vulnerabilities or use it in conjunction with static analysis to reduce false positives.
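
To illustrate what "following code flow paths" means in practice, here is a deliberately simplified taint-tracking sketch: untrusted data is marked at its source, and the analysis checks whether it can reach a flagged sink without passing through a sanitiser. The function names and rules are assumptions for illustration, not how any specific SAST engine is implemented.

```python
# Simplified taint propagation over a single call path. Real SAST engines model
# data flow through assignments, sanitisers and inter-procedural calls far more precisely.

SOURCES = {"read_request_param"}        # where untrusted data enters
SANITISERS = {"validate_card_number"}   # steps that neutralise the taint
SINKS = {"vulnerable_parse"}            # functions flagged by a CVE signature

def path_is_exploitable(call_path):
    """Return True if tainted data can reach a flagged sink without being sanitised."""
    tainted = False
    for fn in call_path:
        if fn in SOURCES:
            tainted = True
        elif fn in SANITISERS:
            tainted = False
        elif fn in SINKS and tainted:
            return True
    return False

print(path_is_exploitable(["read_request_param", "vulnerable_parse"]))                          # True
print(path_is_exploitable(["read_request_param", "validate_card_number", "vulnerable_parse"]))  # False
```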

As the use of open-source components continues to grow, so does the need to identify the impact of publicly known vulnerabilities on the custom code you create. The most efficient way to meet that need is through susceptibility analysis, which can save you the time and money you'd otherwise spend scrutinizing false positives and upgrading component libraries with no security benefit.

I recently presented on susceptibility analysis at SecureGuild with my talk, "Do You Know How to Prioritize Your Open-Source Findings?" Registrants have full access to recorded sessions.

More here:

How to prioritize open-source risk with susceptibility analysis - TechBeacon

The Future of Software Supply Chain Security: A focus on open source management – Global Banking And Finance Review

By Pete Bulley, Director of Product, Aire

The last six months have brought the precarious financial situation of many millions across the world into sharper focus than ever before. But while the figures may be unprecedented, the underlying problem is not a new one, and it will take serious attention, as well as action from lenders, to solve.

Research commissioned by Aire in February found that eight out of ten adults in the UK would be unable to cover essential monthly spending should their income drop by 20%. Since then, Covid-19 has pushed the number of people out of work up by 730,000 between March and July, and seen 9.6 million furloughed as part of the job retention scheme.

The figures change daily but here are a few of the most significant: one in six mortgage holders had opted to take a payment holiday by June. Lenders had granted almost a million credit card payment deferrals, provided 686,500 payment holidays on personal loans, and offered 27 million interest-free overdrafts.

The pressure on lenders is growing, and with no clear return to normal in sight, we are unfortunately likely to see levels of financial distress rise sharply as we head into winter. Recent changes to the job retention scheme signal the start of the withdrawal of government support.

The challenge for lenders

Lenders have been embracing digital channels for years. However, digital is usually prioritised at acquisition, with customer management neglected in favour of getting new customers through the door. Once customers are through the door, even the most established lenders are likely to fall back on manual processes to manage them.

It's different for fintechs. Unburdened by legacy systems, they've been able to begin with digital to offer a new generation of consumers better, more intuitive service. Most often this is digitised, mobile and seamless, and it's spreading across sectors. While established banks and service providers are catching up, offering mobile payments and on-the-go access to accounts, this part of their service is still lagging. Nowhere is this felt harder than in customer management.

Time for a digital solution in customer management

With digital moving higher up the agenda for lenders as a result of the pandemic, many still haven't got their customer support properly in place to meet demand. Manual outreach is still relied upon, which is heavy on both resources and time.

Lenders are also grappling with regulation. While many recognise the moral responsibility they have for their customers, they are still blind to the new tools available to help them act effectively and at scale.

In 2015, the FCA released its Fair Treatment of Customers regulations, requiring that consumers be provided with clear information and kept appropriately informed before, during and after the point of sale.

But when the individual financial situation of customers is changing daily, never has this sentiment been more important (or more difficult) for lenders to adhere to. The problem is simple: the traditional credit scoring methods relied upon by lenders are no longer dynamic enough to spot sudden financial change.

The answer lies in better, and more scalable, personalised support. But to deliver this, lenders need rich, real-time insight so that they can act effectively, as the regulator demands. It needs to be done at scale, and it needs to be done with the consumer experience in mind, with convenience and trust high on the agenda.

Placing the consumer at the heart of the response

To better understand a customer, inviting them into a branch or arranging a phone call may seem the most obvious solution. However, health concerns mean few people want to see their providers face-to-face, and fewer staff are in branches, not to mention the cost and time this would require of lenders.

Call centres are not the answer either. Lack of trained capacity, cost and the perceived intrusiveness of calls are all barriers. We know from our own consumer research at Aire that customers are less likely to engage directly with their lenders on the phone when they feel payment demands will be made of them.

If lenders want reliable, actionable insight that serves both their own needs and their customers', they need to look to digital.

Asking the person who knows best: the borrower

So if the opportunity lies in gathering information directly from the consumer, the solution rests with first-party data. The reasons we pioneer this approach at Aire are clear: firstly, it provides the lender with a truly holistic view of each customer, a richer picture that covers areas traditional credit scoring often misses, including employment status and savings levels. Secondly, it offers consumers the opportunity to engage directly in the process, finally shifting the balance in credit scoring into the hands of the individual.

With the right product behind it, this can be achieved seamlessly and at scale by lenders. Pulse from Aire provides a link, delivered by SMS or email, that encourages customers to engage with Aire's Interactive Virtual Interview (IVI). The information gathered from the consumer is then validated by Aire to provide the genuinely holistic view that lenders require, delivering insights that include risk of financial difficulty, validated disposable income and a measure of engagement.

No lengthy or intrusive phone calls. No manual outreach or large call-centre requirements. And best of all, lenders can get started in just days, saving up to £60 a customer.

Too good to be true?

This still leaves questions. How can you trust data provided directly by consumers? What about AI bias: are the results fair? And can lenders and customers alike trust it?

To address first-party misbehaviour or gaming, sophisticated machine-learning algorithms are used to validate responses for accuracy. Essentially, they measure responses against existing contextual data and check their plausibility.

Aire also looks at how the IVI process is completed. By looking at how people complete the interview, not just what they say, we can spot with a high degree of accuracy if people are trying to game the system.
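
Aire has not published how these checks are built, but the general pattern of testing a self-reported figure against contextual expectations and flagging suspicious completion behaviour can be sketched roughly as follows; every field and threshold here is invented purely for illustration.

```python
# Purely illustrative: Aire's actual models, features and thresholds are not public.

def plausibility_flags(declared_income, expected_income, interview_seconds, answers_changed):
    """Return crude flags suggesting a first-party response may need a closer look."""
    flags = []
    # Declared income wildly above what contextual data would suggest.
    if expected_income and declared_income > 2.5 * expected_income:
        flags.append("income implausible against contextual data")
    # Completed far too quickly to have read the questions.
    if interview_seconds < 60:
        flags.append("suspiciously fast completion")
    # Repeatedly revising answers can indicate probing for a favourable outcome.
    if answers_changed > 10:
        flags.append("excessive answer changes")
    return flags

print(plausibility_flags(declared_income=90_000, expected_income=30_000,
                         interview_seconds=45, answers_changed=2))
```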

AI bias, the system creating unfair outcomes, is tackled through governance and culture. In working towards our vision of a world where finance is truly free from bias or prejudice, we invest heavily in constructing the best model governance systems we can at Aire to ensure our models are analysed systematically before being put into use.

This process has undergone rigorous improvement to ensure our outputs are compliant with regulatory standards and also align with our own company principles on data and ethics.

That leaves the issue of encouraging consumers to be confident when speaking to financial institutions online. Part of the solution is developing a better customer experience. If the purpose of this digital engagement is to gather more information on a particular borrower, the route the borrower takes should be personal and reactive to the information they submit. The outcome and potential gain should be clear.

The right technology at the right time?

What is clear is that in Covid-19, and the financial shockwaves it has triggered, lenders face an unprecedented challenge in customer management. In first-party data, harnessed ethically, they may just have an unprecedented solution.

Read the original here:

The Future of Software Supply Chain Security: A focus on open source management - Global Banking And Finance Review

Synopsys expert on proactive application security strategies for uncertain times – Intelligent CIO ME

As cybercriminals take advantage of the fear and uncertainty surrounding the pandemic, it's crucial that organisations ensure the software they build and operate is secure despite reduced resources. Adam Brown, Associate Managing Security Consultant, Synopsys, talks us through the steps organisations can take to improve their application security programmes to protect organisational data and that of their customers.

In 2020, organisations have been faced with the prospect of months of staffing and business continuity challenges. Concurrently, cyberattacks by opportunistic hackers and cybercrime groups looking to profit or further disrupt society are on the rise. Organisations must ensure the software they build and operate is secure against these increasing attacks, even as their available security resources may be decreasing.

And a remote workforce is only one of the challenges organisations face in terms of securing their digital properties and sensitive data. While many companies want to invest in security, they may not know where to start. After all, it's a challenging endeavour to identify where and how to secure your most valuable or vulnerable projects.

It's a daunting task. However, by tactically addressing their security testing capacity, staff skills and software supply chain risks today, organisations can respond to resource challenges now while fundamentally improving the effectiveness of their AppSec programme going forward. Here's how.

Establish a benchmark and mature your strategy

Get started by gathering a full understanding of what your organisation's security activities involve. The Building Security In Maturity Model (BSIMM) is not a how-to guide, nor is it a one-size-fits-all prescription. A BSIMM assessment reflects the software security activities currently in place within your organisation, giving you an objective benchmark from which to begin building or maturing your software security strategy.

The BSIMM, now in its 11th iteration, is a measuring stick and can be used to inform a roadmap for organisations seeking to create or improve their software security initiatives (SSIs), not by prescribing a set way to do things but by showing what others are already doing.

Previous years' reports have documented that organisations have been successfully replacing manual governance activities with automated solutions. One reason for this is the need for speed, otherwise known as feature velocity. Organisations are doing away with the high-friction security activities conducted by the software security group (SSG) out-of-band and at gates. In their place is software-defined lifecycle governance.

Another reason is a people shortage: the skills gap has been a factor in the industry for years and continues to grow. Assigning repetitive analysis and procedural tasks to bots, sensors and other automated tools makes practical sense and is increasingly the way organisations are addressing both that shortage and time management problems.

But while the shift to automation has increased velocity and fluidity across verticals, the BSIMM11 finds that it hasn't put the control of security standards and oversight out of the reach of humans.

Apply a well-rounded risk mitigation strategy

In fact, the roles of today's security professionals and software developers have become multi-dimensional. With their increasing responsibilities, they must do more in less time while keeping applications secure. As development workflows continue to evolve to keep up with organisational agility goals, they must account for a growing variety of requirements.

This is the reality around which organisations build and/or consume software. Over the years we've witnessed the use and expansion of automation in the integration of tools such as GitLab for version control, Jenkins for continuous integration (CI), Jira for defect tracking and Docker for container integration within toolchains. These tools work together to create a cohesive automated environment that is designed to allow organisations to focus on delivering higher-quality innovation to the market faster.

Through BSIMM iterations we've seen that organisations have realised there's merit in applying and sharing the value of automation by incorporating security principles at appropriate security touchpoints in the software development life cycle (SDLC), shifting the security effort left. This creates shorter feedback loops and decreases friction, which allows engineers to detect and fix security and compliance issues faster and more naturally as part of software development workflows.

More recently, a "shift everywhere" movement has been observed through the BSIMM as a graduation from "shift left", meaning firms are not just testing early in development but conducting security activity as soon as possible, with the highest fidelity that is practical. As development speeds and deployment frequencies intensify, security testing must complement these multifaceted, dynamic workflows. If organisations want to avoid compromising security or delaying time to market, directly integrating security testing is essential.

Since organisations' time to innovate continues to accelerate, firms must not abdicate their security and risk mitigation responsibilities. Managed security testing delivers the key people, process and technology considerations that help firms maintain the desired pace of innovation, securely.

In fact, the right managed security testing solutions will provide the ability to invert the relationship between automation and humans, where the humans powering the managed service act out-of-band to deliver high-quality input in an otherwise machine-driven process, rather than the legacy view in which automation augments and/or complements human process.

It also affords organisations the application security testing flexibility they require while driving fiscal responsibility. Organisations gain access to the brightest minds in the cybersecurity field when they need them, without paying for them when they don't; they simply draw on that expertise as needed to address current testing resource constraints. This results in unrivalled transparency, flexibility and quality at a predictable cost, and provides the data required to remediate risks efficiently and effectively.

Enact an open source management strategy

And we must not neglect the use of open source software (OSS), a substantial building block of most, if not all, modern software. Its use is persistently growing, and it provides would-be attackers with a relatively low-cost vector to launch attacks on a broad range of entities that comprise the global technology supply chain.

Open source code provides the foundation of nearly every software application in use today across almost every industry. As a result, the need to identify, track and manage open source components and libraries has increased exponentially. License identification, processes to patch known vulnerabilities and policies to address outdated and unsupported open source packages are all necessary for responsible open source use. The use of open source isn't the issue, especially since reuse is a software engineering best practice; it's the use of unpatched OSS that puts organisations at risk.

The 2020 Open Source Security and Risk Analysis (OSSRA) report contains some concerning statistics. Unfortunately, the time it takes organisations to mitigate known vulnerabilities is still unacceptably high. For example, six years after initial public disclosure, 2020 was the first year the Heartbleed vulnerability was not found in any of the audited commercial software that forms the basis of the OSSRA report.

Notably, 91% of the codebases examined contained components that were more than four years out of date or had no development activity in the last two years, exposing those components to a higher risk of vulnerabilities and exploits. Furthermore, the average age of vulnerabilities found in the audited codebases was a little less than four years. The percentage of vulnerabilities older than 10 years was 19%, and the oldest vulnerability was 22 years old. It is clear that we, as open source users, are doing a less than optimal job of defending ourselves against open-source-enabled cyberattacks.

To put this in a bit more context, 99% of the codebases analysed for the report contained open source software; of those, 75% contained at least one vulnerability and 49% contained high-risk vulnerabilities.

If you're going to mitigate security risk in your open source codebase, you first have to know what software you're using and what exploits could impact its vulnerabilities. One increasingly popular way to get such visibility is to obtain a comprehensive bill of materials from your suppliers (sometimes referred to as a build list, or a software bill of materials, or SBOM). The SBOM should contain not only all open source components but also the versions used, the download locations for each project, all dependencies, the libraries the code calls and the libraries to which those dependencies link.
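
As a minimal sketch of what a single SBOM entry might capture, here it is expressed as plain Python data for illustration; real SBOMs typically follow standard formats such as SPDX or CycloneDX, and the component details below are invented.

```python
# Illustrative SBOM entry; real-world SBOMs usually follow SPDX or CycloneDX schemas.
sbom_entry = {
    "component": "example-cal-lib",
    "version": "2.3.1",
    "download_location": "https://example.org/example-cal-lib/2.3.1",
    "license": "Apache-2.0",
    "direct_dependency": True,
    # Libraries this component itself pulls in (transitive dependencies).
    "dependencies": [
        {"component": "example-date-utils", "version": "1.0.4"},
    ],
    # Library functions the application's own code calls.
    "called_by_application": ["formatTimestamp"],
}
```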

Modern applications consistently contain a wealth of open source components with possible security, licensing and code quality issues. At some point, as an open source component ages and decays (with newly discovered vulnerabilities in its code base), it's almost certainly going to break or otherwise open a codebase to exploitation. Without policies in place to address the risks that legacy open source can create, organisations open themselves up to the possibility of issues in their cyber assets, which are 100% dependent on software.

Organisations need clearly communicated processes and policies to manage open source components and libraries; to evaluate and mitigate their open source quality, security and license risks; and to continuously monitor for vulnerabilities, upgrades and the overall health of the open source codebase. Clear policies covering introduction and documentation of new open source components can help to ensure control over what enters the codebase and that it complies with company policies.
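
One way such a policy might be automated, assuming you already hold an inventory like the SBOM sketch above plus a feed of known vulnerabilities; the age and activity thresholds simply mirror the OSSRA figures quoted earlier and are not a recommendation from the article.

```python
from datetime import date

# Toy policy check: flag components that are out of date, unmaintained, or carry known CVEs.
MAX_VERSION_AGE_YEARS = 4
MAX_INACTIVITY_YEARS = 2

def policy_violations(component, today=None):
    today = today or date.today()
    violations = []
    if (today - component["version_released"]).days > MAX_VERSION_AGE_YEARS * 365:
        violations.append("version more than four years out of date")
    if (today - component["last_commit"]).days > MAX_INACTIVITY_YEARS * 365:
        violations.append("no development activity in the last two years")
    if component["known_cves"]:
        violations.append("known vulnerabilities: " + ", ".join(component["known_cves"]))
    return violations

component = {
    "name": "example-cal-lib",
    "version_released": date(2016, 5, 1),
    "last_commit": date(2017, 1, 10),
    "known_cves": ["CVE-0000-0002"],
}
print(policy_violations(component))
```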

There's no finish line when it comes to securing the software and applications that power your business. But it is critically important to manage and monitor your assets, as well as to have a clear view into your software supply chain. No matter the size of your organisation, the industry in which you conduct business, the maturity of your security programme or the budget at hand, there are strategies you can enact today to progress your programme and protect your organisational data and that of your customers.


Read this article:

Synopsys expert on proactive application security strategies for uncertain times - Intelligent CIO ME