Algorithms control your online life. Here’s how to reduce their influence. – Mashable

Mashable's series Algorithms explores the mysterious lines of code that increasingly control our lives and our futures.

The world in 2020 has been given plenty of reasons to be wary of algorithms. Depending on the result of the U.S. presidential election, it may give us one more. Either way, it's high time we questioned the impact of these high-tech data-driven calculations, which increasingly determine who or what we see (and what we don't) online.

The impact of algorithms is starting to scale up to a dizzying degree, and literally billions of people are feeling the ripple effects. This is the year the Social Credit System, an ominous Black Mirror-like "behavior score" run by the Chinese government, is set to officially launch. It may not be quite as bad as you've heard, but it will boost or tighten financial credit and other incentives for the entire population. That's one more unexamined, unimpeachable algorithm hanging over a billion human lives.

In the UK, few will forget this year's A-level algorithm. A-levels are key exams for 18-year olds; they make or break college offers. COVID-19 canceled them. Teachers were asked what each pupil would have scored. But the government fed these numbers into an algorithm alongside the school's past performance. Result: 40 percent of all teacher estimates were downgraded, which nixed college for high-achieving kids in disadvantaged areas. Boris Johnson backed down, eventually, blaming a "mutant algorithm." Still, even a former colleague of the prime minister thinks the A-level fiasco may torpedo his reelection chances.

In the U.S., we don't tend to think about shadowy government algorithms running or ruining our lives. Well, not unless you're a defendant in one of the states where algorithms predict your likelihood of committing more crime (eat your heart out, Minority Report) and advise judges on sentencing. U.S. criminal justice algorithms, it probably won't surprise you to learn, are operated by for-profit companies and stand accused of perpetuating racism. Take COMPAS, used in Florida and Wisconsin, which ProPublica found was twice as likely to label Black defendants "high risk" as white defendants, and was wrong about 40 percent of the time.

The flaws in such "mutant algorithms," of course, reflect their all-too-human designers. Math itself isn't racist, or classist, or authoritarian. An algorithm is just a set of instructions. Technically, the recipe book in your kitchen is full of them. As with any recipe, the quality of an algorithm depends on its ingredients, and those of us who have to eat the result really don't think enough about what went on in the kitchen.

"All around us, algorithms provide a kind of convenient source of authority, an easy way to delegate responsibility; a short cut that we take without thinking," writes mathematician Hannah Fry in her 2018 book Hello World: Being Human in the Age of Algorithms. "Who is really going to click through to the second page of Google every time and think critically about every result?"

Try to live without algorithms entirely, however, and you'll soon notice their absence. Algorithms are often effective because they are able to calculate multiple probabilities faster and more effectively than any human mind. Anyone who's ever spent longer on the road because they thought they could outsmart Google Maps' directions knows the truth of this. This thought experiment imagining a day without algorithms ended in terrible gridlock, since even traffic-light systems use them.

Still, you would be right to be concerned about the influence algorithms have on our internet lives, particularly in the area of online content. The more scientists study the matter, the more it seems that popular search, video, and social media algorithms are governing our brains. Studies have shown they can alter our mood (Facebook itself proved that one) and, yes, even our 2016 votes (which explains why the Trump campaign is investing so much in Facebook ads this time around).

So before we find out the full effect of algorithms in 2020, let's take a look at the algorithms on each of the major content services, many of which are surprisingly easy to erase from our lives.

No algorithm on Earth, not even China's Social Credit system, has the power of Mark Zuckerberg's. Every day, nearly 2 billion people visit Facebook. Nearly all of them allow the algorithm to present posts in the order that the company has determined most likely to keep them engaged. That means you see a lot more posts from friends you've engaged with in the past, regardless of how close you actually are to them. It also means content that causes big back-and-forth fights is pushed to the top. And Zuckerberg knows it.

"Our algorithms exploit the human brain's attraction to divisiveness," warned a 2018 internal Facebook study, unearthed by the Wall Street Journal. Left unchecked, these mutant algorithms would favor "more and more divisive content in an effort to gain user attention & increase time on the platform."

Zuckerberg, reportedly afraid that conservatives would be disproportionately affected if he tweaked the algorithm to surface more harmonious posts, shelved the study. It's been a good four years for conservatives on Facebook, who have been working the referee ever since they petitioned Zuckerberg to stop using human editors to curate news in 2016. Now look at Facebook's top-performing posts in 2020: on a daily basis, the list is dominated by names such as Ben Shapiro, Franklin Graham, and Sean Hannity.

But even conservatives have cause to be disquieted by the Facebook algorithm. Seeing friends' popular posts has been shown to make us more depressed. Facebook addiction is heavily correlated with depressive disorder. So-called "super sharers" drown out less active users, according to the 2018 report; an executive who tried to reduce the super-sharer influence on the algorithm abruptly left the company.

How to fix it

Luckily, you can reduce the algorithm's influence yourself, because Facebook still allows you to remove the sorting algorithm from your timeline and simply view all posts from all your friends and follows in reverse chronological order (that is, most recently posted at the top). On Facebook.com, click the three dots next to "News Feed," then click "most recent." On the app, you'll need to click "settings," then "see more," then "most recent."

The result? Well, you might be surprised to catch up with old friends you'd almost forgotten about. And if you interact with their posts, you're training the content algorithm for when you go back to your regular timeline. In my experience, reverse chronological order isn't the most thrilling way to browse Facebook (the algorithm knows what it's doing, locking your brain in with the most exciting posts), but it's a nice corrective. If you're one of the two billion on Facebook every day, try this version at least once a week.

The YouTube "watch next" algorithm may be even more damaging to democracy than Facebook's preference for controversial posts. Some 70 percent of YouTube videos we consume were recommended by the service's algorithm, which is optimized to make you watch more YouTube videos and ads no matter what (the average viewing session is now above one hour).

That means YouTube prioritizes controversial content, because whether you love it or hate it, you'll keep watching. And once you've watched one piece of controversial content, the algorithm will assume that's what you're into, steering you to the kind of stuff viewers of that video opted to watch next. Which explains how your grandparents can start by watching one relatively innocuous Fox News video and end up going down a QAnon conspiracy theory rabbit hole.

A former Google programmer, Guillaume Chaslot, found the YouTube algorithm may have been biased enough to swing the outcome of the 2016 election, which was decided by 77,000 votes in three states. "More than 80 percent of recommended videos were favorable to Trump, whether the initial query was 'Trump' or 'Clinton'," he wrote in the immediate aftermath. "A large proportion of these recommendations were divisive and fake news." Similarly, Chaslot found that 90 percent of videos recommended from the search query "is the Earth flat?" said that yes, indeed it is.

This isn't just a problem in the U.S. One of the most important case studies of the YouTube algorithm's political impact was in Brazil, where fringe right-wing candidate Jair Bolsonaro was elected president after unexpectedly becoming a YouTube star. "YouTube's search and recommendation system appears to have systematically diverted users to far-right and conspiracy channels in Brazil," a 2019 New York Times investigation found. Even Bolsonaro's allies credited YouTube for his win.

How to fix it

Keep the algorithm at bay. Disable 'Up Next.'

Turning off autoplay, an option next to the "Up Next" list, will at least stop you from blindly watching whatever the YouTube algorithm recommends. You can't turn off recommendations altogether, but you can at least warn less tech-savvy relatives that the algorithm is doing its level best to radicalize them in service of views.

Chaslot's nonprofit algotransparency.org will show you what videos are most recommended across the site on any given day. By now, you may not be surprised to see that Fox News content tends to float to the top. Your YouTube recommendation algorithm may look normal to you if it's had years to learn your likes and dislikes. But a brand-new user will see something else entirely.

While parent company Facebook allows you to view your feed in reverse chronological order, Instagram banished that option altogether back in 2016, leading to a variety of conspiracy theories about "shadow banning." It will still show you every photo and story if you keep scrolling for long enough, but certain names float to the top so frequently that you'd be forgiven for feeling like a stalker. (Hello, Instagram crushes!)

How to fix it

As of a February update, Instagram will at least let you see who you've been inadvertently ignoring. Click on your profile icon in the bottom right corner, click on your "following" number, and you'll see two categories: "Least Interacted With" and "Most Shown In Feed." Click on the former, scroll through the list, and give your most ignored follows some love.

You can also sort your feed by the order in which you followed accounts, which is truly infuriating. Why offer that option, and not just give us a straight-up chronological feed? Instagram is also said to be testing a "Latest posts" feature that will catch you up on recent happenings, but this hasn't rolled out to all users yet.

Just like its social media rivals, Twitter is obsessed with figuring out how it can present information in anything other than most-recent order, the format that Twitter has long been known for. Founder Jack Dorsey has introduced solutions that allow you to follow topics, not just people, and that show you the tweets that drove the most engagement first.

How to fix it

Go! See latest tweets! Be free of the algorithm!

All of these non-chronological tweaks fall under the "Home" heading at the top of the page. Click the star icon next to it, and you'll have the opportunity to go back to traditional Twitter-style "Latest Tweets." Of all the social media services, Twitter is the one that makes it easiest to ignore its recommendation algorithm.

It may take a little more scrolling to find the good stuff on Latest Tweets, and of course what you're seeing depends on what time of day you're dipping into the timeline. Still, Latest Tweets is your best bet for a range of opinions and information from your follows unimpeded by any mutant algorithms.


The US Is Determined to Make Julian Assange Pay for Exposing the Cruelty of Its War on Iraq – PRESSENZA International News Agency

By Vijay Prashad

On September 7, 2020, Julian Assange will leave his cell in Belmarsh Prison in London and attend a hearing that will determine his fate. After a long period of isolation, he was finally able to meet his partner Stella Moris and see their two sons, Gabriel (age three) and Max (age one), on August 25. After the visit, Moris said that he looked to be in a lot of pain.

The hearing that Assange will face has nothing to do with the reasons for his arrest from the embassy of Ecuador in London on April 11, 2019. He was arrested that day for his failure to surrender in 2012 to the British authorities, who would have extradited him to Sweden; in Sweden, at that time, there were accusations of sexual offenses against Assange that were dropped in November 2019. Indeed, after the Swedish authorities decided not to pursue Assange, he should have been released by the UK government. But he was not.

The true reason for the arrest was never the charge in Sweden; it was the desire of the U.S. government to have him brought to the United States on a range of charges. On April 11, 2019, the UK Home Office spokesperson said, "We can confirm that Julian Assange was arrested in relation to a provisional extradition request from the United States of America. He is accused in the United States of America of computer-related offenses."

Manning

The day after Assange's arrest, the campaign group Article 19 published a statement that said that while the UK authorities had originally said they wanted to arrest Assange for fleeing bail in 2012 to avoid the Swedish extradition request, it had now become clear that the arrest was due to a U.S. Justice Department claim on him. The U.S. wanted Assange on a federal charge of conspiracy to commit computer intrusion for agreeing to break a password to a classified U.S. government computer. Assange was accused of helping whistleblower Chelsea Manning in 2010, when Manning passed WikiLeaks, led by Assange, an explosive trove of classified information from the U.S. government that contained clear evidence of war crimes. Manning spent seven years in prison before her sentence was commuted by former U.S. President Barack Obama.

While Assange was in the Ecuadorian embassy, and now as he languishes in Belmarsh Prison, the U.S. government has attempted to create an airtight case against him. The U.S. Justice Department indicted Assange on at least 18 charges, including the publication of classified documents and a charge that he helped Manning crack a password and hack into a computer at the Pentagon. One of the indictments, from 2018, makes the case against Assange clear.

The charge that Assange published the documents is not the central one, since the documents were also published by a range of media outlets such as the New York Times and the Guardian. The key charge is that Assange actively encouraged Manning to provide more information and agreed to crack a password hash stored on U.S. Department of Defense computers connected to the Secret Internet Protocol Network (SIPRNet), a United States government network used for classified documents and communications. Assange is also charged with conspiracy to commit computer intrusion for agreeing to crack that password hash. The problem here is that it appears that the U.S. government has no evidence that Assange colluded with Manning to break into the U.S. system.

Manning does not deny that she broke into the system, downloaded the materials, and sent them to WikiLeaks. Once she had done this, WikiLeaks, like the other media outlets, published the materials. Manning had a very trying seven years in prison for her role in the transmission of the materials. Because of the lack of evidence against Assange, Manning was asked to testify against him before a grand jury. She refused and now is once more in prison; the U.S. authorities are using her imprisonment as a way to compel her to testify against Assange.

What Manning Sent to Assange

On January 8, 2010, WikiLeaks announced that it had encrypted videos of U.S. bomb strikes on civilians. The video, later released as Collateral Murder, showed in cold-blooded detail how on July 12, 2007, U.S. AH-64 Apache helicopters fired 30-millimeter guns at a group of Iraqis in New Baghdad; among those killed were Reuters photographer Namir Noor-Eldeen and his driver Saeed Chmagh. Reuters immediately asked for information about the killing; they were fed the official story and told that there was no video, but Reuters futilely persisted.

In 2009, Washington Post reporter David Finkel published The Good Soldiers, based on his time embedded with the 2-16 battalion of the U.S. military. Finkel was with the U.S. soldiers in the Al-Amin neighborhood when they heard the Apache helicopters firing. For his book, Finkel had watched the tape (this is evident from pages 96 to 104); he defends the U.S. military, saying that the Apache crew had followed the rules of engagement and that everyone had acted appropriately. The soldiers, he wrote, were good soldiers, and the time had come for dinner. Finkel had made it clear that a video existed, even though the U.S. government denied its existence to Reuters.

The video is horrifying. It shows the callousness of the pilots. The people on the ground were not shooting at anyone. The pilots fire indiscriminately. "Look at those dead bastards," one of them says, while another says, "Nice," after they fire at the civilians. A van pulls up at the carnage, and a person gets out to help the injured, including Saeed Chmagh. The pilots request permission to fire at the van, get permission rapidly, and shoot at the van. Army Specialist Ethan McCord, part of the 2-16 battalion with which Finkel was embedded, surveyed the scene from the ground minutes later. In 2010, McCord told Wired's Kim Zetter what he saw: "I have never seen anybody being shot by a 30-millimeter round before. It didn't seem real, in the sense that it didn't look like human beings. They were destroyed."

In the van, McCord and other soldiers found the badly injured Sajad Mutashar (age 10) and Doaha Mutashar (age five); their father, Saleh, who had tried to rescue Saeed Chmagh, was dead on the ground. In the video, the pilot saw that there were children in the van; "Well, it's their fault for bringing their kids into a battle," he says callously.

Robert Gibbs, the press secretary for President Barack Obama, said in April 2010 that the events on the video were "extremely tragic." But the cat was out of the bag. This video showed the world the actual character of the U.S. war on Iraq, which United Nations Secretary-General Kofi Annan had called illegal. The release of the video by Assange and WikiLeaks embarrassed the United States government. All its claims of humanitarian warfare had no credibility.

The campaign to destroy Assange begins at that point. The United States government has made it clear that it wants to try Assange for everything up to treason. People who reveal the dark side of U.S. power, such as Assange and Edward Snowden, are given no quarter. There is a long list of people, such as Manning, Jeffrey Sterling, James Hitselberger, John Kiriakou, and Reality Winner, who, if they lived in countries being targeted by the United States, would be called dissidents. Manning is a hero for exposing war crimes; Assange, who merely assisted her, is being persecuted in plain daylight.

On January 28, 2007, a few months before he was killed by the U.S. military, Namir Noor-Eldeen took a photograph in Baghdad of a young boy, a soccer ball under his arm, stepping around a pool of blood. Beside the bright red blood lie a few rumpled schoolbooks. It was Noor-Eldeen's humane eye that went for that photograph, with the boy walking around the danger as if it were nothing more than garbage on the sidewalk. This is what the U.S. illegal war had done to his country.

All these years later, that war remains alive and well in a courtroom in London; there Julian Assange, who revealed the truth of the killing, will struggle against being one more casualty of the U.S. war on Iraq.

This article was produced by Globetrotter, a project of the Independent Media Institute.

Vijay Prashad is an Indian historian, editor and journalist. He is a writing fellow and chief correspondent at Globetrotter, a project of the Independent Media Institute. He is the chief editor of LeftWord Books and the director of Tricontinental: Institute for Social Research. He is a senior non-resident fellow at Chongyang Institute for Financial Studies, Renmin University of China. He has written more than 20 books, including The Darker Nations and The Poorer Nations. His latest book is Washington Bullets, with an introduction by Evo Morales Ayma.


WATCH: The War on Journalism: The Case of Julian Assange – Consortium News

A new documentary by Juan Passarelli can be seen here on Consortium News, followed by a panel discussion with Passarelli, director Ken Loach and filmmaker Suzie Gilbert. Journalists are under attack globally for doing their jobs. Julian Assange is facing a 175-year sentence for publishing if extradited to the United States. The Trump administration has gone from denigrating journalists as "enemies of the people" to now criminalizing common practices in journalism that have long served the public interest.

Imprisoned WikiLeaks founder and editor Assange's extradition is being sought by the Trump administration, in a hearing to begin Sept. 7, for publishing U.S. government documents which exposed war crimes and human rights abuses. He is being held in the maximum-security HMP Belmarsh in London. "There is a war on journalism and Julian Assange is at the centre of that war. If this precedent is set then what happens to Assange can happen to any journalist." Join director Ken Loach and filmmaker Suzie Gilbert for a discussion with Juan Passarelli about his new documentary, The War on Journalism: The Case of Julian Assange.


Russian troll farm enlisted US journalist to write about divisive issues – SiliconANGLE

Following a tip from the FBI, Facebook Inc. today said that it has removed Pages and accounts linked to Russia's infamous troll farm, the Internet Research Agency.

In a report, Facebook said it removed 13 Facebook accounts and two Pages that violated its policy against foreign interference through coordinated inauthentic behavior. The activity originated in Russia, with the focus mainly being on the U.S., the U.K., Algeria, Egypt and other English-speaking countries and countries in the Middle East and North Africa.

The content came from fake accounts using fictitious personas designed to move traffic to a phony news organization. The accounts used fake profile pictures to hoodwink people into thinking they were genuine editors, but more worrying, U.S.-based freelance journalists were duped into writing stories.

People recruited were said to be on the left of the political spectrum, with some of the content that was posted related to social and racial justice in the U.S. Facebook said other stories centered on President Donald Trump, the Biden-Harris campaign, Julian Assange, QAnon, alleged Western war crimes, the coronavirus pandemic, migrants (in the U.K.), corruption and U.S. military policies, among other divisive topics.

Around 14,000 accounts followed one or more of the two Pages, and $480 was spent on ads, paid for mostly in U.S. dollars. It's reported that the Russian agency is again trying to help elect Trump by dividing Democratic voters on such issues.

One of the pages went under the name Peace Data, which described itself as an international news organization. The same outfit has also just had accounts suspended on Twitter and LinkedIn. On Facebook alone, it posted 500 stories in English and a further 200 stories in Arabic between February and August this year.

"These actors get caught between a rock and a hard place," Nathaniel Gleicher, Facebook's cybersecurity boss, said in a press conference. "They can run a large noisy network that gets caught quickly, or they can work very hard to hide themselves, still get caught, and not get a lot of attention."


Updated: New FBI Documents Show What Witnesses In The Mueller Probe Told Federal Investigators About Trump And Russia – BuzzFeed News


The federal government has once again released hundreds of pages of previously unseen records from former special counsel Robert Mueller's two-year investigation into Russian interference in the 2016 election and President Donald Trump's attempts to obstruct the inquiry.

These documents, interview summaries known within the FBI as 302s, were turned over to BuzzFeed News and CNN in response to a Freedom of Information Act lawsuit. They reveal what hundreds of people, many of them close to Trump and his campaign, told federal investigators when they were questioned as part of the probe, which began in May 2017.

Since last November, more than 3,000 pages of the interview summaries, excerpts of which are sprinkled throughout Mueller's final report, have been released to the public. However, many details were never cited in the report. For example, Paul Manafort was still actively advising the Trump campaign three days before Election Day in 2016, despite having been fired as campaign manager nearly three months earlier. That fact, wrote Trump's next campaign manager, Steve Bannon, in an email, needed to be kept secret or "they are going to try to say the Russians worked with wiki leaks to give this victory to us."

The latest cache includes additional interview summaries from Manafort and his associate Rick Gates, as well as from Timothy Barrett, the former spokesperson for the Office of the Director of National Intelligence who now works as a spokesperson for the CIA. The identities of dozens of other witnesses were redacted on privacy grounds.

At 4:51 p.m. on May 9, 2017, an hour before Trump fired James Comey, the White House was getting impatient. It asked the FBI for Comey's email address. Given the option of classified or unclassified, the reply was, "it doesn't matter, just give us his email address." Four minutes later, according to the version of events provided by the FBI agent to Mueller's team, the bureau's command center was notified that White House aide and longtime Trump associate Keith Schiller was at the FBI headquarters building with a letter for Comey. FBI staff scrambled to find someone to receive it. At around 5:38 p.m., a person whose name was redacted met with Schiller and accepted the letter, which was delivered to Comey's office two minutes later.

Comey was not in Washington at the time; he learned about it from a news bulletin while meeting with FBI agents in Los Angeles. Still, the agent told Mueller's office that one of the FBI staff involved (it's not clear who, given the many redactions) commented that whoever conveyed the letter may have just "handled history."

Paul Manafort spoke to Mueller's office at length about his old business partner, Roger Stone. Manafort described in detail how the two were in touch when Manafort ran Trump's campaign in the spring and summer of 2016, but Stone was no longer an official part of the operation. Manafort made clear that he believed Stone had some line of communication to WikiLeaks and its founder, Julian Assange, albeit an indirect one.

On June 12, 2016, Assange announced that WikiLeaks planned to release a cache of Hillary Clinton's emails. Manafort said he told Trump that Stone had predicted it correctly, and Trump asked if Stone knew what was in them; Manafort said no.

Manafort said he told Stone to stay on top of what WikiLeaks was doing, but did not mention that the request came from Trump, because he didn't want to be an "errand boy."

Manafort said Stone claimed to have no control over the October 2016 release of Clinton campaign chair John Podesta's hacked emails, but said he may have had advance knowledge.

Manafort was confused as to the various people and hacks, according to the interview summary, and at one point asked Stone to walk him through it all.

A person whose name was redacted on privacy grounds was interviewed over the phone by the FBI on October 10, 2017. A portion of the interview was redacted. But the person told the FBI that in their opinion, "Russian President Vladimir Putin has 'bit off more than he can chew' in his government's efforts to interfere in the U.S. election."

"The Russian administration sought to throw a wrench into the U.S. political process for what it perceived was a slight by the Obama administration in which Russia was not taken seriously," the interview summary says.

In another interview, an individual involved in fundraising for Trump, whose name was redacted, told investigators in December 2017 that the campaign seemed totally unprepared to raise money or ensure that it complied with federal election laws. The campaign, which the person called "unorthodox," had no donor lists and was not actively raising money as late as May 2016, when Trump became the Republican Party's presumptive nominee for president. "The only activity was the campaign merchandise store," the FBI memorandum said.

Following a fundraiser hosted by Tom Barrack, the private equity baron and close ally of Trump, money began pouring in. But little attention seemed to be paid internally to ensuring that federal election rules were followed, including verifying whether non-U.S. citizens might be contributing. The witness "was asked but was not sure of what controls the campaign had in place for foreign, excessive, or other ineligible contributions," the FBI 302 said.

The newest entry in the interview summaries is Timothy Barrett, the former spokesperson for the Office of the Director of National Intelligence who now works for the CIA in the same capacity. Barrett was interviewed by FBI agents in the Washington field office on November 29, 2017. His connection to the Mueller probe has not been previously reported.

He discussed with agents a phone call he received from Souad Mekhennet, a German journalist he knew who has contributed to The Washington Post and The New York Times and who wrote a book about the Islamic State. Mekhennet had apparently queried Barrett about something Russia-related, and she was seeking confirmation "as well as guidance on if there were reasons she should not publish the story."

The details of what she was reporting are redacted because they relate to an ongoing law enforcement investigation. That's notable because most of the investigations that came out of Mueller's inquiry have ended by now. Barrett was unavailable for comment Tuesday evening.

Although the Mueller investigation ultimately led to 37 indictments and seven convictions, Trump has aggressively sought to discredit it since it was launched, repeatedly referring to it as a witch hunt. Those efforts have been supported by Attorney General Bill Barr, who has intervened in several cases related to the investigation, including the prosecutions of former national security adviser Michael Flynn and political consultant Roger Stone.

The final 448-page Mueller report, released in April 2019, reflected only a tiny fraction of the information that Mueller's team of federal prosecutors and FBI and IRS agents amassed over the course of the two-year probe. Much of the contents of the typewritten summaries taken for each interview has never before been reviewed publicly. A month after the report was released, BuzzFeed News sued the FBI and the Department of Justice, seeking access to those records. That litigation was subsequently joined by CNN.

The majority of the 302s have been heavily redacted, leaving vast swaths of information about what witnesses told investigators obscured from view. BuzzFeed News has challenged some of those redactions, arguing in court that one category of exemption the government has cited to justify the withholdings was legally unfounded, politically motivated, and implemented solely to protect the president.

Read the original post:

Updated: New FBI Documents Show What Witnesses In The Mueller Probe Told Federal Investigators About Trump And Russia - BuzzFeed News

How does open source thrive in a cloud world? "Incredible amounts of trust," says a Grafana VC – TechRepublic

Commentary: The shift in infrastructure software from proprietary products like Splunk to open source ones like Elasticsearch comes down to trust, says Gaurav Gupta, a prominent product executive turned investor.

Back in 2013 Mike Olson made a bold claim: "No dominant platform-level software infrastructure has emerged in the last ten years in closed-source, proprietary form." Olson is a smart guy, and he was nearly correct except for one small exception to his rule: Splunk. Splunk thrived in spite of its proprietary nature, and leading that success was Gaurav Gupta, then vice president of product at Splunk, and now a partner with Lightspeed Venture Partners. It was a "different time," he said in an interview, both for the industry and for him.

Ever since then he's been building infrastructure the open source way, whether running product at Elastic or later investing in companies like Grafana as a VC. As successful as Splunk was, however, Gupta believes that the "incredible amounts of trust" that open source fosters, coupled with low friction to experimentation, make it the smart investment for today, whether you're a VC or an enterprise trying to innovate your way through a pandemic.

It's worth dwelling for a moment on Gupta's Splunk experience. Splunk, after all, exploded in adoption at a time when much of the infrastructure world went open source. According to Gupta, Splunk may have slipped into the market just in time. After all, he noted, "Open source didn't exist back then [2004] for the most part." Yes, Linux was around and, yes, things like MySQL and Drupal were taking root, but open source had yet to command the market like it does today.

Splunk was also helped by the fact it catered to a customer (system administrators and similar roles analyzing log data) that was perhaps neither capable nor interested in digging into source code. What this audience did appreciate, by contrast, was an "incredible end-to-end [product] that really focused on great user experience, and traditionally open source hasn't done a great job on user experience [for] less technical audiences." It didn't hurt that "We were the only one in the market for years," Gupta continued.

By Gupta's reckoning, despite years of VCs trying to fund "copycat" competitors to Splunk, no one successfully did so...until Elastic managed the feat by accident. "Elastic wasn't designed to be a logging company at all, it was a search company." Having left Splunk for Elastic, Gupta and team saw that users were starting to use the search tool for logging use cases, and hired the developers behind Logstash and Kibana to help build out Elastic's log management capabilities. Unlike open source companies before it, Elastic determined to "not be super generic" and instead "create an integrated stack" to target specific use cases like search and logging.

All of which helps to explain how Splunk emerged as a hugely successful proprietary software company in an area of software (infrastructure) that increasingly skewed open source. It also explains how Gupta jumped from proprietary software to open source. But in a world where cloud delivers and, perhaps, perfects many of the benefits of open source ("ultimately people want to consume open source as a service," he said), what is it about open source that makes it fertile ground for investments, decades after open source stopped being novel?

Cloud gives enterprises a "get-out-of-the-burden-of-maintaining-open-source free" card, but savvy engineering teams still want open source so as to "not lock themselves in and to not create a bunch of technical debt." How does open source help to alleviate lock-in? Engineering teams can build "a very modular system so that they can swap in and out components as technology improves," something that is "very hard to do with the turnkey cloud service."

That's the technical side of open source, but there's more to it than that, Gupta noted. Referring to how Elastic ate away at Splunk's installed base, Gupta said, "The biggest reason...is there is a deep amount of developer love and appreciation and almost like an addiction to the [open source] product." This developer love is deeper than just liking to use a given technology: "You develop [it] by being able to feel it and understand the open source technology and be part of a community."

Is it impossible to achieve this community love with a proprietary product? No, but "It's a lot easier to build if you're open source." He went on, "When you're a black box cloud service and you have an API, that's great. People like Twilio, but do they love it?" With open source projects like Grafana and Elasticsearch, by contrast, developers really love the project, he said, because it's more than a project, more than a technology: "As a developer, you want to be part of that movement."

One key aspect of such developer movements isn't a matter of open source code, though that helps. No, it's really about trust.

A lot of it comes from the fact that things are very transparent in these open source companies, their Github repositories, their issues, their roadmaps. [The] majority of the code may be written by the company, but they do a pretty good job of explaining why every single decision is being made, how it's been made, how it's architected.

It's about trust. When developers have to make a big decision, they're making a bet. Maybe they're embedding Elasticsearch, or they're banking their entire operations team on Grafana. They think, 'This is something [we're] going to be stuck with for a while. I'm actually putting my neck on the line to do this.' And so, good open source companies build incredible amounts of trust.

Such trust is paying dividends for open source companies now, with so many companies struggling to do more with less, and so many developers who are "busy, but they also have time on their hands. They're exploring," suggested Gupta, and open source is the lowest-cost software with the least amount of friction to start experimenting...and falling in love with their software.

Disclosure: I work for AWS, but the views herein are mine and don't necessarily reflect those of my employer.


Who is hiring hundreds of new employees and can Israel lead the open-source code revolution? – CTech

Israeli fintech powerhouse Payoneer is recruiting 300 new employees globally. Payoneer has benefitted from the Covid-19 pandemic due to the increased demand for online money transfer and digital payment services.

Private micro-mobility companies might finally give cities the innovation they need. CTech spoke with the CEO of Bird Israel on how private companies can help the public sector, to the benefit of millions.

Never trust hyperlinks, says founder of anti-phishing company Segasec. Elad Schulman, co-founder and former CEO of cybersecurity company Segasec, recently acquired by Nasdaq-listed Mimecast, says visually inspecting a URL no longer cuts it, as attackers become more sophisticated by the day.

Israeli chipmaker Hailo launches a Japanese subsidiary. The new launch follows the news of a recent $60 million series B funding round.

Israeli government approves coronavirus czar's traffic light model. According to the approved plan, Israeli towns and regions will be divided into four colored categories, according to the current severity of the outbreak in their territory.

Welltech1 announces $400,000 investment in winner of global wellness startup competition. PopBase is a storybook game that helps kids make healthy life choices. "Our portfolio reflects the diversity in the field," says Welltech1 co-founder Galit Horovitz.

Israel Innovation Authority CEO Aharon Aharon resigns. Aharon, who had led the government's tech investment arm since 2017, said he felt the job had run its course.

Opinion | Can Israel lead the open-source code revolution? The Israeli tech scene is based on partnerships, innovation, and independent thinking, all of which are vital in open-source code.


Announcing the General Availability of Bottlerocket, an open source Linux distribution built to run containers – idk.dev

As our customers increasingly adopt containers to run their workloads, we saw a need for a Linux distribution designed from the ground up to run containers with a focus on security, operations, and manageability at scale. Customers needed an operating system that would give them the ability to manage thousands of hosts running containers with automation.

Meet Bottlerocket, a new open source Linux distribution that is built to run containers. Bottlerocket is designed to improve security and operations of your containerized infrastructure. Its built-in security hardening helps simplify security compliance, and its transactional update mechanism enables the use of container orchestrators to automate operating system (OS) updates and decrease operational costs.

Bottlerocket is developed as an open source project on GitHub with a public roadmap. We're looking forward to building a community around Bottlerocket on GitHub and welcome your feature requests, bug reports, and contributions.

We began designing and building Bottlerocket based on what we've learned from how customers use Amazon Linux to run containers and from running services such as AWS Fargate. At every step of the design process, we optimized Bottlerocket for security, speed, and ease of maintenance.

Bottlerocket improves security by including only the software needed to run containers, which reduces the attack surface. It uses Security-Enhanced Linux (SELinux) in enforcing mode to increase the isolation between containers and the host operating system, in addition to standard Linux kernel technologies that implement isolation between containerized workloads, such as control groups (cgroups), namespaces, and seccomp.

Also, Bottlerocket uses Device-mapper's verity target (dm-verity), a Linux kernel feature that provides integrity checking to help prevent attackers from persisting threats on the OS, such as overwriting core system software. The modern Linux kernel in Bottlerocket includes eBPF, which reduces the need for kernel modules for many low-level system operations. Large parts of Bottlerocket are written in Rust, a modern programming language that helps ensure thread safety and prevent memory-related errors, such as the buffer overflows that can lead to security vulnerabilities.

Bottlerocket also enforces an operating model that further improves security by discouraging administrative connections to production servers. It is suited for large distributed environments in which control over any individual host is limited. For debugging, you can run an admin container using Bottlerocket's API (invoked via user data or AWS Systems Manager) and then log in with SSH for advanced debugging and troubleshooting. The admin container is an Amazon Linux 2 container image that runs with elevated privileges and contains utilities for troubleshooting and debugging Bottlerocket. It allows you to install and use standard debugging tools, such as traceroute, strace, and tcpdump. Logging into an individual Bottlerocket instance is intended to be an infrequent operation for advanced debugging and troubleshooting.

Bottlerocket improves operations and manageability at scale by making it easier to manage nodes and automate updates to nodes in your cluster. Unlike general-purpose Linux distributions designed to support applications packaged in a variety of formats, Bottlerocket is purpose-built to run containers. Updates to general-purpose Linux distributions are applied on a package-by-package basis, and the complex dependencies among their packages can result in errors, making the update process challenging to automate.

Furthermore, general-purpose operating systems come with the flexibility to configure each instance uniquely for its workload, which makes management with traditional Linux tools more complex. By contrast, updates to Bottlerocket are applied and rolled back atomically, which makes them easy to automate, reducing management overhead and operational costs.

Bottlerocket integrates with container orchestrators to enable the automated patching of hosts to improve operational costs, manageability, and uptime. It is designed to work with any orchestrator, and AWS-provided builds work with Amazon EKS (in General Availability), and Amazon ECS (in preview).

We have launched Bottlerocket as an open source project to enable our customers to make customizations to the operating system (e.g., integration with custom orchestrators, kernels, or container runtimes) used to run their infrastructure, submit them for upstream inclusion, and produce custom builds. All design documents, code, build tools, tests, and documentation will be hosted on GitHub. We will use GitHub's bug and feature tracking systems for project management. You can view and contribute to Bottlerocket source code using standard GitHub workflows. The availability of build, release, and test infrastructure makes it easy to produce custom builds that include their changes. ISV partners can quickly validate their software before their customers update to the latest versions of Bottlerocket.

We want to grow a vibrant community of users and contributors who adopt and support Bottlerocket as an open source project. We believe that an open source approach enables us to drive innovation based on our experience with working with other open source projects in the container space such as containerd, Linux kernel, Kubernetes, and Firecracker.

Bottlerocket includes standard open source components, such as the Linux kernel, containerd container runtime, etc. Bottlerocket-specific additions focus on reliable updates and an API-based mechanism to make configuration changes and trigger updates/roll-backs. Bottlerocket code is licensed under either the Apache 2.0 license or the MIT license at your option. Underlying third-party code, like the Linux kernel, remains subject to its original license. If you modify Bottlerocket, you may use Bottlerocket Remix to refer to your builds in accordance with the policy guidelines.

Although you can run Bottlerocket as a standalone OS without an orchestrator for development and test use cases (using utilities in the admin container to administer and update Bottlerocket), we recommend using it with a container orchestrator to take advantage of all its benefits.

An easy way to get started is by using AWS-provided Bottlerocket AMIs with either Amazon EKS or Amazon ECS (in preview). You can find the IDs for these AMIs by querying SSM with the AWS CLI as follows.

To find the latest AMI ID for the Bottlerocket aws-k8s-1.17 variant, run:

aws ssm get-parameter --region us-west-2 --name "/aws/service/bottlerocket/aws-k8s-1.17/x86_64/latest/image_id" --query Parameter.Value --output text

To find the latest AMI ID for the Bottlerocket aws-ecs-1 variant, run:

aws ssm get-parameter --region us-west-2 --name "/aws/service/bottlerocket/aws-ecs-1/x86_64/latest/image_id" --query Parameter.Value --output text

In both of the above example commands, you can change the region if you operate in another region, or change the architecture from x86_64 to arm64 if you use Graviton-powered instances.
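
Since the parameter name follows a fixed pattern, the lookup can be parameterized in a small script. This is a hedged sketch, not official tooling; the variant, architecture, and region values are simply the examples used in this post:

```shell
#!/bin/sh
# Assemble the SSM parameter path for a Bottlerocket AMI lookup.
# Adjust these for your cluster; values below are from the examples above.
VARIANT="aws-k8s-1.17"   # or "aws-ecs-1" for the ECS variant
ARCH="x86_64"            # or "arm64" for Graviton-powered instances
REGION="us-west-2"       # any region where Bottlerocket AMIs are published

PARAM="/aws/service/bottlerocket/${VARIANT}/${ARCH}/latest/image_id"
echo "$PARAM"

# The actual lookup (requires configured AWS credentials):
#   aws ssm get-parameter --region "$REGION" --name "$PARAM" \
#       --query Parameter.Value --output text
```

Changing VARIANT and ARCH is then a one-line edit instead of retyping the full parameter path.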

Once you have this AMI ID, you can launch an EC2 instance and connect it to your existing EKS or ECS cluster. To connect to an EKS cluster with the Kubernetes variant of Bottlerocket, you'll need to provide user data such as the following when you launch the EC2 instance:

[settings.kubernetes]
api-server = "Your EKS API server endpoint here"
cluster-certificate = "Your base64-encoded cluster certificate here"
cluster-name = "Your cluster name here"

To connect to an ECS cluster with the ECS variant of Bottlerocket, you can provide user data like this:

[settings.ecs]
cluster = "Your cluster name here"

For further instructions on getting started, see the guide for EKS and the guide for ECS.

In addition to using AWS-provided Bottlerocket AMIs, you can produce custom builds of Bottlerocket with your own changes. To do so, you can fork the GitHub repository, make your changes, and follow our building guide. As a prerequisite step, you must first set up your build environment. The build system is based on the Rust language. We recommend you install the latest stable Rust using rustup. To organize build tasks, we use cargo-make and cargo-deny during the build process. To get these, run:

cargo install cargo-make
cargo install cargo-deny --version 0.6.2

Bottlerocket uses Docker to orchestrate package and image builds. We recommend Docker 19.03 or later. You'll need to have Docker installed and running, with your user account able to access the Docker API. This is commonly enabled by adding your user account to the docker group.
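
Whether that group change is still needed can be checked from the shell. A hedged sketch (the group name "docker" is the common default created by Docker packages, not something this post specifies for every distribution):

```shell
#!/bin/sh
# Returns success if the space-separated group list in $1 contains "docker".
in_docker_group() {
    echo "$1" | tr ' ' '\n' | grep -qxF docker
}

# Check the current user's groups as reported by id(1).
if in_docker_group "$(id -nG)"; then
    echo "This user can already reach the Docker API via group membership."
else
    echo "Add yourself with: sudo usermod -aG docker \$USER, then log in again."
fi
```

Group membership is read at login, so the change only takes effect in a new session (or via newgrp docker).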

After making your source code changes, build an image by running:

cargo make

All packages will be built in turn and then assembled into an .img file in the build/ directory.

Next, to register the Bottlerocket AMI for use on Amazon EC2, set up the aws-cli and run:

cargo make ami

We invite you to join us in further enhancing Bottlerocket. See the Bottlerocket issues list and the Bottlerocket roadmap. We welcome contributions. Going over existing issues is a great way to get started contributing. See our contributors guide for details.

We hope you use Bottlerocket to run your containers and we look forward to your feedback!


Vint Cerf: Why everyone has a role in internet safety – ComputerWeekly.com

When Computer Weekly spoke to Vint Cerf, father of the internet, in 2013 at the 40th anniversary of TCP/IP, the protocol he co-wrote with Robert Kahn, he spoke about the challenges facing users arising from the globalisation of the internet.

Today is the age of sharing and, as Cerf points out, sharing tools are now very common. But his concern is that social media amplifies everything, both good and bad. He says: "Now we have to tame cyberspace."

"The internet has become a global collaboration platform, and it was designed that way," says Cerf. "The whole story is all about sharing: look at Tim Berners-Lee and the world wide web."

Cerf says the origins of the internet lie in Arpanet in 1969, motivated by the US Defense Advanced Research Projects Agency's desire to stimulate collaboration between artificial intelligence and computer science researchers across universities. Sharing information broadly and collaboration motivated the development of the internet, and by the 1980s, Cerf recalls, 3,000 universities were connected. "The US Department of Energy and Nasa wanted connectivity and sponsored the research," he adds.

But although it has been rooted in collaboration, the founding principles of the internet are now under threat. There are ongoing trade disputes between countries, such as the spat between China and the US, which, if taken to an extreme, could result in one state closing off internet access. Cerf says: "People are surprised that the internet can be turned off, but if you shut down the underlying transport mechanisms, the net simply does not work."

The internet may have been born as a platform for global collaboration, but Cerf is worried that it risks being fragmented. Some states, such as Russia and China, are monitoring their internet borders with country firewalls; others, including India, have thrown a switch to turn off the internet, which happened at the end of last year in the Kashmir region, when the state intervened in a bid to curb public unrest.

In 2019, Cerf spoke about the pacification of cyberspace when he gave a talk at Oxford University. He argues that fraud, malware and misinformation are now far too commonplace on the internet. "Immeasurable harm is happening," he warns. "Many people don't feel very safe right now. People may not want to use the net at all for fear of harm, and the net will simply collapse."

Like the major pieces of infrastructure that evolved during the 19th and 20th centuries, the internet, Cerf believes, now needs a legislative framework. He says: "When roads were improved to carry cars, there were very few rules, but eventually it became apparent that people need rules."

He says this tends to happen when policy-makers start to appreciate that people's behaviour requires management, which leads to legislation. "At some point, there will have to be consequences for bad behaviour on the net," he says.

But to succeed, Cerf argues, such legislation will require cooperation across international boundaries in order to track down people exhibiting harmful behaviour, and this is not going to be easy.

"It will lead to extremes," he says. "If you look at the Chinese mechanisms for limiting bad behaviours, they are way off in a direction most US and UK citizens would not want to go. Total anarchy is not very attractive, either. There must be some place in between where behaviour is adequately regulated, so we can feel we are safe."

Today, with the internet of things (IoT), Cerf says: "You have many billions of devices interacting with other devices. We are doing billions of experiments with pieces of software that have never seen each other before."

For Cerf, the only reason these things actually work is internet standards, which are another form of collaboration. "Standards really help," he says. "They allow interoperability, even if you haven't tested a particular combination."

The architecture of the internet is open, he says, which means that if people don't like how it works, it can be changed. The protocols are also open, so people can see how they work.

Open encompasses open protocols, open data and open source, and when asked about the significance of open source, Cerf admits he has mixed views. "Open source implementations are open," he says. "I like that you can see code, and ingest the code. But I worry that people grab open source code and think there are no bugs. Your eyes should be wide open when you use open source. We find bugs that are 20 to 25 years old. People assume they have all been erased already."

Such bugs lead to security flaws such as Heartbleed, the 2014 bug in the OpenSSL library that wreaked havoc across the internet.

Looking at how to make the internet safe, Cerf says: "Transparency is our friend; it creates common sense. Safety is a shared responsibility. People have to recognise they are part of the solution to the problem."

For instance, he says, no one should ever click on an attachment that claims to have come from a friend. Instead, they should email the attachment to the friend directly, asking whether it is legitimate.

For Cerf, the HTTPS protocol is a very important mechanism for securing communications. He is also a fan of two-factor authentication for securing online banking and is happy to use an authentication device, even if it is not convenient, because it adds a layer of security against fraudsters. But he adds: "I have 300 online accounts, and so I need the equivalent of one two-factor authentication device to handle all accounts."

Cerf doesn't trust the use of mobile phones as the second factor of authentication. He says: "Mobiles are hackable. The SIM chip can be conned. I have seen server hijacking [attacks] use that technique."

Security of the internet and web is built on layers, but, as Cerf points out, achieving this is hard because it requires third-party trust. He says that third-party trust is a really tough problem to solve, because there are many certificate authorities, some of which have been compromised.

Cerf is also extremely concerned about IoT security. "They are cheap devices and the manufacturers don't spend a lot of time on security," he says. To improve IoT security, Cerf says he would like to see public/private key authentication implemented in IoT connectivity.

Today, internet connectivity involves transmitting photons in optical cables at the speed of light between one point on the planet and another. Looking towards the future of internet technology, one of the most compelling areas of research to emerge is the use of quantum mechanics in data communications.

"The classic use is in quantum key distribution," says Cerf. "The hottest topic is the quantum relay. The idea is to build a network that allows you to transmit photons that are entangled, so that two different quantum machines that are separate from each other can become entangled, so that the computation can happen concurrently."

Cerf says the benefit of a quantum relay is that it gets around the difficulties of reliably building bigger quantum machines, which use more qubits. A quantum relay effectively enables quantum computers to scale horizontally, as Cerf explains: "If you can build one quantum machine with enough qubits to do something, what would happen if you then have replicas and pass the quantum state to the other machines, so that you can run them in parallel?"

"This is the goal of a quantum relay," he says.


Closing the (back) door on supply chain attacks – SDTimes.com

Security has become ever more important in the development process, as vulnerabilities last year caused the 2nd, 3rd and 7th biggest breaches of all time, measured by the number of people affected.

This has exposed the industry's need for more effective use of security tooling within software development, as well as the need to employ effective security practices sooner.

Another factor contributing to this growing need is the prominence of new attacks such as next-generation software supply-chain attacks that involve the intentional targeting and compromising of upstream open-source projects so that attackers can then exploit vulnerabilities when they inevitably flow downstream.

The past year saw a 430% increase in next-generation cyber attacks aimed at actively infiltrating open-source software supply chains, according to the 2020 State of the Software Supply Chain report.

"Attackers are always looking for the path of least resistance. So I think they found a weakness and an amplifying effect in going after open-source projects and open-source developers," said Brian Fox, the chief technology officer at Sonatype. "If you can somehow find your way into compromising or tricking people into using a hacked version of a very popular project, you've just amplified your base right off the bat. It's not yet well understood, especially in the security domain, that this is the new challenge."

These next-gen attacks are possible for three main reasons. One is that open-source projects rely on contributions from thousands of volunteer developers, making it difficult to discriminate between community members with good or bad intentions. Secondly, the projects incorporate up to thousands of dependencies that may contain known vulnerabilities. Lastly, the ethos of open source is built on shared trust, which can create a fertile environment for preying on other users, according to the report.

However, proper tooling, such as the use of software composition analysis (SCA) solutions, can ameliorate some of these issues. SCA is the process of automating the visibility into open-source software (OSS) for the purpose of risk management, security and license compliance.

DevOps and Linux-based containers, among other factors, have resulted in a significant increase in the use of OSS by developers, according to Dale Gardner, a senior director and analyst on Gartner's Digital Workplace Security team. Over 90% of respondents to a July 2019 Gartner survey indicate that they use open-source software.

"Originally, a lot of these [security] tools were focused more on the legal side of open source and less on vulnerabilities, but now security is getting more attention," Gardner said.

The use of automated SCA

In fact, the State of the Software Supply Chain report found that high-performing development teams are 59% more likely to use automated SCA and are almost five times more likely to successfully update dependencies and fix vulnerabilities without breakage. The teams are more than 26 times faster at detecting and remediating open-source vulnerabilities, and deploy changes to code 15 times more frequently than their peers.

"The high-performer cluster shows high productivity and superior risk management outcomes can be achieved simultaneously, dispelling the notion that effective risk management practices come at the expense of developer productivity," the report continued.

The main differentiator between the top and bottom performers was that the high performers had a governance structure that relied much more heavily on automated tooling. The top teams were 96% more likely to be able to centrally scan all deployed artifacts for security and license compliance.

"Ideally, a tool should also report on whether compromised or vulnerable sections of code, once incorporated into an application, are executed or exploitable in practice," Gardner wrote in his report, titled Technology Insight for Software Composition Analysis. He added: "This would require coordination with a static application security testing (SAST) or an interactive application security testing (IAST) tool able to provide visibility into control and data flow within the application."

Gardner added that the most common approach now is to integrate a lot of these security tools into IDEs and CLIs.

"If you're asking developers, 'I need you to go look at this tool that understands software composition' or whatever the case may be, that tends not to happen," Gardner said. "Integrating into the IDE eliminates some of the friction with other security tools, and it also comes down to economics. If I can spot the problem right at the time the developer introduces something into the code, then it will be a lot cheaper and faster to fix it than if it were down the line. That's just the way a lot of developers work."

Beyond compliance

Using SCA to examine licenses and understand vulnerabilities in particular packages is already a prominent use case for SCA solutions, but that's not all they're capable of, according to Gardner.

"The areas I expect to grow will have to do with understanding the provenance of a particular package: where did it come from, who's involved with building it, and how often it's maintained. That's the part I see growing most, and even that is still relatively nascent," Gardner said.

The comprehensive view that certain SCA solutions provide is not available in many tools that only rely on scanning public repos.

Relying on public repos to find vulnerabilities, as many security tools still do, is no longer enough, according to Sonatype's Fox. Sometimes issues are never filed in the National Vulnerability Database (NVD), and even when they are reported, there's often a delay of two weeks or more before the information becomes public.

"So you end up with these cases where vulnerabilities are widely known because someone blogged about it, and yet if you go to the NVD, it's not published yet, so there's this massive lag," Fox said.

Instead, effective security requires going a step further and inspecting the built application itself to fingerprint what's actually inside it. This can be done through advanced binary fingerprinting, according to Fox.

The technology tries to deterministically work backwards from the final product to figure out whats actually inside it.
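The core of that idea can be sketched simply: identify components inside a built artifact by hashing their bytes, ignoring whatever name the manifest declares. Everything below is illustrative; the component table and file names are invented, and real fingerprinting engines match far more than exact file hashes (e.g. partial and fuzzy matches).

```python
import hashlib
import io
import zipfile

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Toy fingerprint table: content hash -> identified component.
payload = b"fake library bytes"
KNOWN_COMPONENTS = {sha256(payload): "acme-lib 1.2.3 (vulnerable)"}

def fingerprint_archive(archive_bytes: bytes):
    """Identify components inside a built artifact (zip/jar) by content,
    not by the name each file carries. Returns (path, component-or-None)."""
    results = []
    with zipfile.ZipFile(io.BytesIO(archive_bytes)) as zf:
        for info in zf.infolist():
            digest = sha256(zf.read(info))
            results.append((info.filename, KNOWN_COMPONENTS.get(digest)))
    return results

# Build a toy artifact whose file name claims to be something harmless.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("libs/renamed-harmless.bin", payload)

print(fingerprint_archive(buf.getvalue()))
# [('libs/renamed-harmless.bin', 'acme-lib 1.2.3 (vulnerable)')]
```

The point of the sketch is the inversion Fox describes: the "recipe" (the manifest) says nothing suspicious, but the contents of the "baked cake" are identified anyway.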

"It's as if I hand you a recipe, and if you look at it, you could judge a pie or a cake as being safe to eat because the recipe does not say 'insert poison,' right? That's what those tools are doing. They're saying, well, it says here sugar, it doesn't say tainted sugar, and there's no poison in it. So your cake is safe to eat," Fox said. "Versus what we're doing here is we're actually inspecting the contents of the baked cake and going, wait a minute, there's chromatography that shows that there's actually poison in here, even though the recipe didn't call for it. And that's kind of the fundamental difference."

There has also been a major shift from how application security has traditionally been positioned.

Targeting development

In many of the attacks happening now, the developers and the development infrastructure themselves are the target. And while organizations focus on making sure the final product is safe before it goes to customers and to the server, in this new world that focus misses the point, according to Fox. The developers might have been compromised the whole time, while things were being siphoned out of the development infrastructure.

"We've seen attacks that were stealing SSH keys, certificates, or AWS credentials and turning build farms into cryptominers, all of which has nothing to do with the final product," Fox said. "In the DevOps world, people talk a lot about Deming and how he helped Japan make better, more efficient cars for less money by focusing on key principles around supply chains. Well, guess what: Deming wasn't trying to protect against a sabotage attack on the factory itself. Those processes are designed to make better cars, not to make the factory more secure. And that's kind of the situation we find ourselves in with these upstream attacks."

Now, effective security tooling can capture and automate the requirements to help developers make decisions up front, and to provide them information and context as they're picking a dependency, not after, Fox added.

Also, when the tooling recognizes that a component has a newly disclosed vulnerability, it can recognize that it's not necessarily appropriate to stop the whole team and break all the builds, because not everyone is tasked with fixing every single vulnerability. Instead, it's going to notify one or two senior developers about the issue.
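That routing behavior is essentially a severity-to-action policy table. The sketch below shows one way such a policy could look; the severity levels, actions, and recipient names are all hypothetical, not taken from any particular product.

```python
# Illustrative policy-driven routing: decide per severity whether a newly
# disclosed vulnerability breaks builds, notifies designated owners, or is
# merely logged. All levels and recipients here are made up for the example.
POLICY = {
    "critical": {"action": "break_build", "notify": ["security-team"]},
    "high":     {"action": "notify_only", "notify": ["lead-dev"]},
    "low":      {"action": "log",         "notify": []},
}

def handle_disclosure(component: str, severity: str):
    """Apply the policy for a newly disclosed vulnerability in `component`."""
    rule = POLICY.get(severity, POLICY["low"])  # unknown severities just get logged
    return {"component": component, "action": rule["action"], "notify": rule["notify"]}

print(handle_disclosure("acme-lib", "high"))
# {'component': 'acme-lib', 'action': 'notify_only', 'notify': ['lead-dev']}
```

Capturing policy like this is what lets the tooling alert a couple of senior developers without halting every build in the organization.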

"It's a combination of trying to understand what it takes to help the developers do this stuff faster, but also be able to do it with the enterprise top-down view and capturing that policy, not to be Big Brother-y but to capture the policy so that when you're the developer, you get that instant information about what's going on," Fox said.

Read the original post:

Closing the (back) door on supply chain attacks - SDTimes.com