After crypto's crash and NFTs' collapse, Web3 idealists race to prove that the dream of decentralization isn't dead – Fortune

In early 2021, the French-Lebanese cryptographer Nadim Kobeissi tweeted out a loose idea he'd just had. "I'm designing a decentralized social media solution where each user hosts their own microservice," Kobeissi wrote. "These then connect to one another in a mesh, allowing following and sharing posts. It will be lightweight, user friendly and secure. Are you interested in funding its development?"

Within a day, Kobeissi had raised $100,000 with that brief, detail-light tweet. A week later, he was the CEO of a new, Delaware-incorporated company called Capsule Social that had a paper valuation of $10 million. Another $2.5 million came in via a pre-seed round that closed in April 2021. The startup is currently raising another round at a $30 million valuation.

"[T]he level of interest was so exceptional I felt I essentially had to pause and reevaluate the perfect approach. I was being solicited by venture capitalists to such a degree that I had no way to receive their money," Kobeissi says. "I had no plan at all. I just had my project idea."

What excited VCs so much? Kobeissi's pitch contained the magic word that animates the Web3 movement's less speculative, more idealistic side: "decentralized."

Decentralized systems, which don't rely on any core entity to function, are an age-old concept that has been severely undermined in the Web 2.0 era. Many technologists have been chasing a decentralization revival for years.

But when Capsule Social finally launched its Blogchain writing platform in June, Web3's sexier aspects, cryptocurrency and NFTs, had crashed, leaving idealists like Kobeissi scrambling to rescue their projects and decentralization's brand from the larger Web3 bust.

The internet itself is a decentralized network of telecoms networks, with no central authority that censors bits and bytes or stops one part of the network from communicating with others. The technologies that first took off on that infrastructure, such as email and the early web, inherently adopted the same decentralized nature.

That's how the nuts and bolts of online life were designed, but then monolithic platforms like Google and Facebook took over, placing themselves at the center of people's interactions and activities. These Web 2.0 behemoths were user-friendly and secure, but it soon became apparent that they were using their all-seeing positions to profile and target ads at their users, while censoring some search results and uploaded content.

Distrust of Silicon Valley inspired the first big decentralization wave of the 2010s, in which idealistic geeks and activists tried and failed to take on Big Tech with services like Diaspora and Mastodon, alternatives to Facebook and Twitter, respectively. These projects offered greater privacy and censorship resistance than their rivals, but also far more complicated user experiences and, crucially, few of the users who were already happily interacting on Silicon Valley's platforms.

Then Bitcoin exploded, introducing the world to the concept of the blockchain, a decentralized ledger stored across multiple computers, the contents of which are effectively tamper-proof because of that distributed architecture. Decentralization was back with a vengeance, with the term being thrown around by seemingly every advocate of Web3, a fuzzy term that encapsulates the interlinked crypto, blockchain, and NFT fields.
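The tamper resistance described here comes from each block committing to the hash of the one before it, so changing old data breaks every later link and anyone holding a copy can detect it. A toy Python sketch of that idea (purely illustrative, not any real chain's data format):

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Hash the block's fields in a deterministic order.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain: list, data: str) -> None:
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "data": data, "prev_hash": prev}
    block["hash"] = block_hash(block)
    chain.append(block)

def verify(chain: list) -> bool:
    # Recompute every hash and confirm each block still points at its predecessor.
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if block_hash(body) != block["hash"]:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain: list = []
add_block(chain, "alice pays bob 5")
add_block(chain, "bob pays carol 2")
print(verify(chain))   # True
chain[0]["data"] = "alice pays bob 500"
print(verify(chain))   # False: tampering with history is detectable
```

Real blockchains layer consensus mechanisms such as proof of work and peer-to-peer replication on top, but the chained hashes are what make a widely copied ledger tamper-evident.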

The Web3 crowd hopes to take on Wall Street with decentralized finance (DeFi), in which transactions are made via self-executing programs called smart contracts that run on blockchains like Ethereum. New Web3 projects and communities spring up in the form of decentralized autonomous organizations (DAOs) that also use blockchains as a kind of operating system.

"Decentralized networks can win the third era of the internet," declared Andreessen Horowitz partner Chris Dixon in 2018. When Coinbase CEO Brian Armstrong last year announced a new section of the crypto exchange's blog for hitting back at crypto critics, he did so in a post about "decentralizing truth."

That's the context in which Kobeissi raised $100,000 in 24 hours. "Last year, money was being thrown around very readily on projects like that, and when I first proposed this project it was largely meant as a passion or side project," said Kobeissi, who was until recently best known for creating the CryptoCat secure messaging program that journalist Glenn Greenwald used for clandestine discussions with NSA leaker Edward Snowden in early 2013. Kobeissi is somehow still only 31 years old.

But after the hype came the crash.

Since November 2021, the crypto market cap has plummeted from $3 trillion to a shade above $1 trillion, with heavy-hitters Bitcoin and Ethereum each down 66%. Sales of NFTs, tradable tokens that denote ownership of digital files (generally art), have also collapsed, with an estimated 88% drop in the average NFT sale price between April and July.

The crypto winter has partly resulted from the wider economic downturn (once viewed as a hedge against traditional equities, it turns out cryptocurrencies track the Nasdaq's trajectory in particular), but the slump accelerated in May, when Terraform Labs' dollar-pegged stablecoin UST collapsed. Perhaps more damagingly, countless instances of NFT and crypto theft and fraud have tainted the whole sector's reputation.

In Kobeissi's view, decentralization has gotten caught up in the crash. "I think NFTs have helped tarnish the decentralization brand," said Kobeissi.

According to the deal-tracker Pitchbook, global Web3 and blockchain deal activity dropped from nearly $10 billion in the first quarter of this year to $7.7 billion in the second, though Pitchbook fintech analyst Robert Le says that's still a healthy amount, and the drop mirrors what's happening in the broader VC market.

"It's definitely been a period of retrenchment over the last six months across many fronts," said Andrei Brasoveanu, who led venture capital firm Accel's investments in companies like Web3 development platform Tenderly and Axie Infinity maker Sky Mavis. "There's a lot of clean-up happening right now."

On the one hand, the crash makes for a tougher market in which to launch a service like Blogchain. Kobeissi says the platform is yet to institute detailed metrics, which makes it impossible to gauge readership figures, but hardly any of the posts on Blogchain, some of which are well-researched articles of the sort one might see on Substack, have more than a handful of comments and shares.

"Had we launched earlier, we would have had a bigger impact, simply because of the hype surrounding Web3 and so on," Kobeissi said. "Now we basically have to do a grassroots-style campaign. We have to justify the value of the product on its merits, like any traditional, sensible business would have to do."

But Kobeissi also sees the crash as vindication of his controversial decision to shun Web3's buzzier elements.

Blogchain is Web3 to the core: its decentralized nature makes it hard to completely censor posts, and it uses blockchain-based smart contracts to make content-moderation decisions completely transparent, an answer to Big Tech's opaque moderation practices.
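The article doesn't describe Blogchain's implementation, so the following is only a hypothetical sketch of what "transparent moderation" can mean in practice: each decision becomes a hashed, publicly auditable record rather than a silent back-office action. The function, field names, and values below are invented for illustration and are not Blogchain's or NEAR's actual code:

```python
import hashlib
import json
import time

# Hypothetical illustration: a sketch of what an auditable, on-chain
# moderation record might contain.
def moderation_record(post_id: str, action: str, reason: str, moderator: str) -> dict:
    record = {
        "post_id": post_id,
        "action": action,        # e.g. "hide", "restore", "no_action"
        "reason": reason,        # the stated policy basis for the decision
        "moderator": moderator,
        "timestamp": int(time.time()),
    }
    # The digest is what would be anchored to a public chain, so anyone can later
    # verify that the decision and its stated reason were not silently edited.
    record["digest"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return record

print(moderation_record("post-123", "hide", "spam: repeated affiliate links", "mod-7"))
```

Because anyone can recompute the digest of a published record, the platform cannot later rewrite what it said it removed, or why, without the change being evident.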

But Blogchain is not based on crypto or NFTs, a trait that disappointed many of the VCs who tried to throw cash at Kobeissi in early 2021, he says. VCs also disliked his decision to use the carbon-neutral NEAR blockchain rather than the high-emissions Ethereum blockchain, which they argued has better brand recognition.

"When we developed the platform we had dozens of calls with potential investors, partners and advisers, and most were pushing us to focus more on NFTs," he recalled. "A lot said that instead of having a focus on content, we should just promise people tokens and NFTs. It was advice that was given in a very superior tone, and when I rejected the advice I was treated as someone who didn't know what they were talking about."

"Monkey NFTs don't make sense, but when you use the same smart-contract technology to provide accountability in content moderation, that actually makes sense," Kobeissi said.

Jürgen Geuter, a German computer scientist turned prominent tech critic who writes under the name tante, agrees that decentralization's brand has been very much damaged by recent events, but in his view, trying to create decentralized systems was already a lost cause because users have shown again and again that they prize convenience over the ability to shun Big Tech.

Geuter cites email as an example. Email is inherently decentralized, but wide adoption of Google's feature-rich, well-secured Gmail service made it effectively centralized for many people, much as Bitcoin is now controlled by a handful of mining groups, and the vast majority of NFT trading takes place on one platform, OpenSea.

What's more, Geuter says, the limited success of projects like Diaspora and Mastodon already demonstrated that decentralized services have big problems overcoming Big Tech's network effects and ease of use. "Nobody likes annoying technology, except maybe technologists," he said.

All projects end up with a degree of centralization, says Pitchbook's Le, and that's not a problem for most users. "As a consumer, I just want to use a product that makes my life easier," Le said.

While Geuter mocks the way the Web3 scene fetishizes decentralization, he still believes the concept remains extremely important, as long as people recognize decentralization not as some vague agent of democratization, but rather as a tool for building things that really benefit from that kind of architecture, like transparent content-moderation systems.

"In a way, moving decentralization out of this pie-in-the-sky crypto space, clearing its name and making it a topic of research again, is good for decentralization," Geuter said.

Accel VC Brasoveanu also believes the concept remains a compelling idea and goal to pursue, and noted the recent emergence of projects like NFT marketplace LooksRare, which offers a decentralized alternative to OpenSea. In June, OpenSea was still the leading market with two-thirds of NFT trading volumes, but LooksRare came in second with 20%.

Similarly, Le cited a decentralized wireless network for Internet-of-Things connected devices, called Helium, as an example of an innovative token model. Helium's participants earn a cryptocurrency by running the hotspots that make up the network, and companies can then buy that cryptocurrency to use their infrastructure. Helium was until very recently touting Salesforce and Lime as examples of such customers, but after pushback from both, Helium admitted it had only run pilot programs with them.

Helium was valued at $1.2 billion in March, when the likes of Andreessen Horowitz and Tiger Global Management participated in a $200 million Series D round.

"I think now, because of how project developers think about tokenomics, the users are going to hold the tokens because they believe in the project," Le said. "That's less speculation, and more 'I understand this project.'"

"We're one of the most likely Web3 platforms to survive this downturn because we're using these technologies in a way that makes sense," said Kobeissi, who is preparing to add cryptocurrency functionality to Blogchain as a way of rewarding writers who prefer to remain anonymous. (Blogchain's revenues come from taking a 10% cut of the subscription fees charged by its premium writers.)

"It's the hype that gave us a push at the beginning, deserved or not, but now, because we've built on such solid and well-justified foundations, I think that we have a chance at the long term."


Prescribing a New Paradigm for Cyber Competition – War on the Rocks

Michael P. Fischerkeller, Emily O. Goldman, and Richard J. Harknett, Cyber Persistence Theory: Redefining National Security in Cyberspace (Oxford University Press, 2022).

Predictions about cyber war have ranged from the apocalyptic to the reassuring over the past decade, and the current war in Ukraine, beyond its horrific violence, dislocations, and criminality, provides a test case for those theories. Do cyber operations provide decisive advantages in war? Are they more escalatory or de-escalatory than other weapons? Or is it more appropriate to consider cyber capabilities primarily as instruments of interstate competition short of war?

The Russo-Ukrainian War is the first case in which opponents with advanced cyber capabilities have used them to achieve material and cognitive effects in armed conflict. Firm conclusions must await the end of the war, but for now, cyber operations do not appear to have been decisive in destroying or disrupting military forces and economic wherewithal, or in affecting societal willpower and political cohesion.

Even the most revisionist states, most of the time, want to gain intelligence, enhance revenue through favorable trade, theft or sanctions evasion, and sabotage adversaries politically and economically, while avoiding shooting wars, especially with more powerful adversaries. Such states and their opponents are better off pursuing these aims through cyber operations if they can. Violent actions intended to take or hold territory or steal or disable assets are much more likely to provoke violent, costly, and irreversible responses. Once war is underway, it is thus far unclear whether roughly equivalent cyber capabilities would advantage an attacker or a defender.

The authors of a new book argue persuasively that the habitual U.S. approach of deterrence (primarily nuclear) and coercion (primarily threats of conventional attack) will not effectively dissuade adversaries' cyber operations because they involve threats to inflict violence and damage disproportionate to the harm done unto us by those operations. Though written before the invasion, Cyber Persistence Theory does not flunk the Ukraine test thus far. Thanks to their pioneering diagnosis of the structure of the digital environment and the incentives it creates for competition, Michael Fischerkeller, Emily Goldman, and Richard Harknett posit that cyber warfare per se will be rare, and that most exertions will be below the violence and destruction of armed conflict. The authors are, respectively, a researcher at the Institute for Defense Analyses, a strategist at U.S. Cyber Command, and a professor at the University of Cincinnati.

If the great strength of the book is its structural analysis, its weakness is policy prescription. The authors propose an alternative approach of using persistent offensive and defensive competition with adversary cyber operators to establish customary legal boundaries between acceptable and unacceptable cyber espionage, economic and political competition, and warfighting. Unfortunately, the authors and the short span of cyber-age history do not provide detailed bases for thinking the United States and its friends will be able and willing to offer Russia, Iran, North Korea, and perhaps others sufficient threats and rewards to change their cyber behaviors.

The United States would prefer to extend its advantages in cyber-enabled precision warfare while minimizing adversary utilization of cyber to spy, steal, sabotage, and subvert below the level of armed conflict. But, if they can avoid war, adversaries have much to gain and little to lose from cyber competition with the United States, whereas the United States in toto (government, businesses, and the public) has more to lose from theft, sanctions evasion, and information warfare than its adversaries do. China could be an exception here, as discussed further below. Unlike the other adversaries, it is still a rising power in all relevant domains and could see benefit from negotiating rules on an equal footing. But the current political environment, with fault spread all around, precludes the authors and others from detailing sustainable experiments to this end. Absent a breakthrough on this front, the costs and anxieties of persistent exploitation of governmental, corporate, and personal computing and communications networks will continue.

The Long Shadow of Deterrence

Cyber Persistence Theory argues that the nature of information and communication technologies structures actors' competition for relative gain: the global networked computing environment is a warehouse for and gateway to troves of sensitive, strategic assets that translate into wealth and power, and the capacity to organize for the pursuit of both. This environment is resilient at the macro level; it's hard to crash the internet, and there's little gain from doing so. But billions of individual addresses in it are vulnerable, and it costs relatively little to acquire capabilities to exploit these vulnerabilities. So, every minute of every day some actor somewhere has both the capacity and will to [gain] access to one's national sources of power directly or indirectly.

It is impossible to completely defend against or deter capable adversaries from attempting intrusions. So, states must persistently compete for relative gains that, over time, could make them strategically better off than their adversaries. Each seeks to add to its power and wealth more than its competitors add to theirs, or, especially in the case of Russia, to detract more from its adversaries' power and wealth than is detracted from its own.

Persistent competition, the authors write, generally takes the form of cyber faits accomplis: a limited unilateral gain at a target's expense. Examples of these include China's theft of aircraft designs or other intellectual property, North Korea's crypto heists, Russia's theft and political manipulation of data from the Democratic National Committee, and the U.S./Israeli destruction of Iranian centrifuges. Once states discover they have been exploited, they try to reduce their vulnerabilities and perhaps increase their own capacities to penetrate their adversaries. Hence, persistent cycles of engagement. This mode of competition is less expensive and risky in every way than armed conflict. It reflects a tacitly produced mutual understanding of acceptable and unacceptable behaviors similar to what the United States and the Soviet Union developed during the Cold War, which Herman Kahn dubbed "agreed battle."

The book's basic argument is easy to follow, not least because the authors adeptly, if not eloquently, summarize its elements at each stage in their 157-page text. The reader feels in the presence of excellent teachers. After describing the nature of the networked computing environment and the proclivities it produces, the book pivots to a discussion of how the United States could compete more effectively with its adversaries and, over time, temper the costs and risks to international society.

The United States and its allies (governments, businesses, and customers) should be relieved that the damage from adversary cyber operations is below what would be done by armed conflict. But things would be even better if adversaries stole less information, intellectual property and money, stopped conducting influence operations to exacerbate political polarity and dysfunction, limited penetration of key civilian infrastructure, and so on. While the case of China is more complicated, the authors argue with evidence that sanctions and other coercive threats generally have not deterred or compelled Russian, North Korean, or Iranian behavior as American policymakers, imbued with nuclear deterrence strategy, long assumed or hoped it would.

But saying deterrence and compellence won't work is not a viable policy. Something still must be done to change adversaries' hostile behavior. Here, the authors urge an approach that is laudable and worthwhile, but still problematic. They urge the United States and allies to evolve existing international law and establish customary law that defines responsible state behavior and wrongful acts in this domain. The aim would be, over time, to motivate states to limit the targets, effects, and collateral damage of operations. Such restraint, it is argued, would benefit everyone by containing risks of major instability and escalation.

A Law-Building Project

Building such a legal regime would require the United States to overcome its frequent aversion to invoking international law when it indicts Chinese and other hackers. As part of the recommended legal-power strategy, the United States would declare what information and communication systems it deems exclusively its sovereign affair and off-limits from foreign interference under its interpretations of existing principles and rules of international law.

The power of this legal strategy would come from a third element: conducting cyber campaigns against adversaries in ways that reinforce the legal framework the United States is proposing. That is, the flip side of defining international legal obligations is the legitimacy it gives to countermeasures when someone violates an asserted obligation. Cyber operations to counter violations would, iteratively, amount to tacit bargaining with competitors over the boundaries between acceptable and unacceptable behaviors around and about functions or infrastructure that have been declared off-limits.

Unfortunately, the authors cannot say why Russia, North Korea, and Iran would change their behavior to comport with customary international law as interpreted by the United States. These regimes use cyber operations to acquire intelligence, steal intellectual property, evade sanctions, and exacerbate political divisions in adversary societies in ways that they cannot by other means. These states remain isolated, economically hamstrung, and technologically underdeveloped, but they are better off than they would be without cyber operations against the United States and others.

China arguably should be understood and treated differently by the United States and other states. It seeks the capacity to sabotage the United States' high-tech weaponry, reconnaissance, command and control, and logistics operations in warfare. Short of armed conflict, it has used cyber espionage to gain technological capability for military and civilian purposes, to enhance counter-intelligence to protect against U.S. spying, and to project favorable opinions about China's government and leaders into foreign countries. Unlike Russia, Iran, and North Korea, China is a rising technological and economic power with big equity stakes in the global trading system. It will want rules that others, including the United States, live by, to protect its wealth and intellectual property as well as its one-party political system, something especially problematic for the United States and its allies. And China wants to be central in writing those rules, not passively receiving them from U.S. policymakers. Yet, China does not have the experience and international following to take a leading role. The current all-encompassing antagonism between the two countries, epitomized by Speaker Pelosi's visit to Taiwan, vitiates initiatives to create a modus vivendi in the cyber domain.

In conversations, officials and experts from Russia, Iran, and China typically assume the United States has better offensive cyber capabilities than they do: to spy on them, to know how to sanction them and detect their evasions, to sabotage their infrastructure, to obtain and publicize damaging information on their leaders, and to precisely and speedily fight a conventional war. (Presumably, North Koreans would say the same, but I have not spoken with them.) In their view, whatever measures the United States proposes will be meant to preserve U.S. advantages over them. And as far as international law goes, adversaries like Putin, Xi, Kim, and Khamenei assume the United States will interpret it unilaterally and use it to mobilize or justify punishing its adversaries, while ignoring or violating others' interpretations of international law whenever it wants, without repercussions.

The authors of Cyber Persistence know this. They want to build up customary international law so the United States can internally and internationally justify more vigorous cyber operations against adversary networks and machines. "Were adversary behaviors described in unsealed public indictments framed as internationally wrongful acts," they write, "the extraordinary detail in the indictments should make policymakers comfortable with pursuing countermeasures, if the behavior identified in the indictment is ongoing." This is a very important sentence nine pages from the end of the book: the United States has been too self-deterred, too inhibited, in the authors' view. Senior officials and presumably influential corporate leaders and shareholders need to be pushed to see that the best defense is a good offense, and that this can be legitimized.

Unfortunately, the wisdom of their bold prescription is difficult to assess because the authors do not describe the countermeasures they have in mind. Classification and the traditional covertness of cyber operations prevent more transparency. Assuming, for many good reasons, the authors do not recommend armed attacks in response to adversary cyber operations of the kind seen so far, countermeasures would likely be in the cyber domain. The often-understandable lack of clarity regarding how the United States would react to hostile cyber operations leaves room for adversaries and commentators in swing countries, perhaps fueled by cinema and memories of Edward Snowden, to assume that the United States is doing more in their computers and networks than Russia, North Korea, Iran, and China are. And this is a problem for the authors' other recommendation: the United States is competing with Russia and China for the rest of the world's support in developing international norms and potentially customary law. If it cannot say more about the legitimating rationale and effects of operations it conducts in other countries' systems, and plausibly distinguish between the normal and arguably legitimate espionage and countermeasures that the United States and its partners conduct compared to the less defensible targets and tradecraft of adversaries, the law-building strategy will founder.

Of course, even if Russia and China confine themselves to acceptable data-collecting espionage and preparation to attack legitimate U.S. military and war-supporting industry targets in war, the United States is likely to counteract. The hope for stabilizing cyber competition rests on the possibility of reciprocally bounding the targeting and probable effects of operations, and on very careful tradecraft. This will require the sustained, high-level attention of senior leaders, especially from the United States and China, and a steady diplomatic effort to explicate to each side which targets and effects are intolerable and will cause one to take countermeasures, and to create processes for communicating about ambiguous cases. Tacit bargaining will be essential given the secrecy of action in the cyber domain and the deranged politics of relations between the United States and the countries of greatest concern. But, at some point progress will depend on the U.S. political system tolerating leaders having a sustained, public dialogue or negotiation with leaders of adversary countries. Tacit bargaining is too ambiguous to rely upon alone.

Cyber Persistence Theory is a must-read even if it is far from the last word. The authors invoke Thomas Kuhn and his famous concept of paradigm shift. They penetratingly describe the structural shift that the information revolution imposes on some aspects of interstate competition. But cyberspace, unlike the phenomena that Kuhn's natural scientists sought to understand, is human-made. Contending groups compete against each other by altering and exploiting their creations in this environment. The challenge is not merely to understand these dynamics like scientists do, but to shape them in ways that avert massive harm and, ideally, facilitate the pursuit of well-being. Meeting this latter challenge will require additional volumes that build on this one.

George Perkovich is the Kenneth Olivier and Angela Nomellini Chair and vice president for studies at the Carnegie Endowment for International Peace. He is co-editor of Understanding Cyber Conflict: 14 Analogies (Georgetown University Press, 2017), which can be downloaded free at carnegieendowment.org.

Image: U.S. Cyber Command, photo by Josef Cole


DreamWorks Animation To Release Renderer As Open-Source Software – Slashdot

With annual CG confab SIGGRAPH slated to start Monday in Vancouver, DreamWorks Animation announced its intent to release its proprietary renderer, MoonRay, as open-source software later this year. Hollywood Reporter reports: MoonRay has been used on feature films such as How to Train Your Dragon: The Hidden World, Croods: A New Age, The Bad Guys and upcoming Puss in Boots: The Last Wish. MoonRay uses DreamWorks' distributed computation framework, Arras, also to be included in the open-source code base.

"We are thrilled to share with the industry over 10 years of innovation and development on MoonRay's vectorized, threaded, parallel, and distributed code base," said Andrew Pearce, DWA's vp of global technology. "The appetite for rendering at scale grows each year, and MoonRay is set to meet that need. We expect to see the code base grow stronger with community involvement as DreamWorks continues to demonstrate our commitment to open source."


This Mac hacker’s code is so good, corporations keep stealing it – The Verge

Patrick Wardle is known for being a Mac malware specialist, but his work has traveled farther than he realized.

A former employee of the NSA and NASA, he is also the founder of the Objective-See Foundation: a nonprofit that creates open-source security tools for macOS. The latter role means that a lot of Wardle's software code is now freely available to download and decompile, and some of this code has apparently caught the eye of technology companies that are using it without his permission.

Wardle will lay out his case in a presentation on Thursday at the Black Hat cybersecurity conference with Tom McGuire, a cybersecurity researcher at Johns Hopkins University. The researchers found that code written by Wardle and released as open source has made its way into a number of commercial products over the years, all without the users crediting him or licensing and paying for the work.

The problem, Wardle says, is that it's difficult to prove that the code was stolen rather than implemented in a similar way by coincidence. Fortunately, because of Wardle's skill in reverse-engineering software, he was able to make more progress than most.

"I was only able to figure [the code theft] out because I both write tools and reverse engineer software, which is not super common," Wardle told The Verge in a call before the talk. "Because I straddle both of these disciplines I could find it happening to my tools, but other indie developers might not be able to, which is the concern."

The thefts are a reminder of the precarious status of open-source code, which undergirds enormous portions of the internet. Open-source developers typically make their work available under specific licensing conditions, but since the code is often already public, there are few protections against unscrupulous developers who decide to take advantage. In one recent example, the Donald Trump-backed Truth Social app allegedly lifted significant portions of code from the open-source Mastodon project, resulting in a formal complaint from Mastodon's founder.

One of the central examples in Wardle's case is a software tool called OverSight, which Wardle released in 2016. OverSight was developed as a way to monitor whether any macOS applications were surreptitiously accessing the microphone or webcam, with much success: it was effective not only as a way to find Mac malware that was surveilling users but also to uncover the fact that a legitimate application like Shazam was always listening in the background.

Wardle, whose cousin Josh Wardle created the popular Wordle game, says he built OverSight because there wasn't a simple way for a Mac user to confirm which applications were activating the recording hardware at a given time, especially if the applications were designed to run in secret. To solve this challenge, his software used a combination of analysis techniques that turned out to be unusual and, thus, unique.
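The article doesn't spell out those techniques, and the sketch below is not how OverSight works; it is only a crude illustration of the general idea of watching for camera and microphone activity by following macOS's unified log from Python. The predicate string is an assumption for illustration, and the relevant log messages vary across macOS versions:

```python
import subprocess

# Crude illustration only: follow the macOS unified log for lines that mention
# camera or microphone activity. The predicate is a guess for illustration;
# OverSight itself relies on deeper, more reliable hooks.
PREDICATE = 'eventMessage CONTAINS[c] "camera" OR eventMessage CONTAINS[c] "microphone"'

proc = subprocess.Popen(
    ["log", "stream", "--style", "syslog", "--predicate", PREDICATE],
    stdout=subprocess.PIPE,
    text=True,
)

try:
    for line in proc.stdout:
        print("possible audio/video activity:", line.rstrip())
except KeyboardInterrupt:
    proc.terminate()
```

A production tool has to attribute the activity to the specific application responsible, which is exactly the harder problem Wardle's code addresses.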

But years after OverSight was released, he was surprised to find a number of commercial applications incorporating similar application logic in their own products, even down to replicating the same bugs that Wardle's code had.

Three different companies were found to be incorporating techniques lifted from Wardle's work in their own commercially sold software. None of the offending companies are named in the Black Hat talk, as Wardle says that he believes the code theft was likely the work of an individual employee, rather than a top-down strategy.

The companies also reacted positively when confronted about it, Wardle says: all three vendors he approached reportedly acknowledged that his code had been used in their products without authorization, and all eventually paid him directly or donated money to the Objective-See Foundation.

Code theft is an unfortunate reality, but by bringing attention to it, Wardle hopes to help both developers and companies protect their interests. For software developers, he advises that anyone writing code (whether open or closed source) should assume it will be stolen and learn how to apply techniques that can help uncover instances where this has happened.
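The piece doesn't say which techniques Wardle has in mind, but one simple, illustrative option for an open-source author is to embed a distinctive marker (an unusual constant or error string) in their code and later scan third-party binaries for it. The marker value and script below are hypothetical:

```python
import sys

# Illustrative only: search compiled binaries for a distinctive marker string
# that the open-source author deliberately embedded in their own code.
CANARY = b"osxsec-canary-7f3a91"  # hypothetical marker baked into the original project

def contains_canary(path: str) -> bool:
    with open(path, "rb") as f:
        return CANARY in f.read()

if __name__ == "__main__":
    for binary in sys.argv[1:]:
        if contains_canary(binary):
            print(f"{binary}: marker found; code may have been reused")
```

Shared idiosyncrasies that were never planted deliberately, like the identical bugs Wardle found, serve the same evidentiary role.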

For corporations, he suggests that they better educate employees on the legal frameworks surrounding reverse engineering another product for commercial gain. And ultimately, he hopes they'll just stop stealing.


Secrets in the Code: Open-Source API Security Risks – BankInfoSecurity.com

This episode has been automatically transcribed by AI, please excuse any typos or grammatical errors. Steve King 00:13A good day everyone this is Steve King, Im the managing director at CyberTheory. We are running our podcasts today around a topic that we call secrets in the code. Todays episode will focus on day zero supply chain vulnerability. With me today is Moshe Zioni. The VP of security research at Apiiro an early stage cybersecurity company founded in 2019, whose purpose is to help security and development teams proactively fix risk across the software supply chain before releasing to the cloud, which is very cool. In my estimation, backed by Greylock and Kleiner Perkins with a $35 million a round, I think they are well on the way to a market leadership position in the space. And some of what theyve done so far is the current winner of the pretty prestigious RSA sandbox Innovation Award. They were named to Gartner 20 week 21, cool vendor and Dev SEC ops. They found that detected a de zero supply chain security vulnerability on Kubernetes space, the Argos CD platform. And theyve been a frequent contributor to the NIST 800 to 18 Secure Software Development Framework. So Moshe has been researching security for over 20 years in multiple industries and specialized specializing in penetration testing, detecting algorithms and incident response, constant contributor to the hacking community has been co founder of the Shabak on security conference for the past six years. So welcome to the show emotion. Im glad you could join me today. Thank you, Steve. ImMoshe Zioni 02:08very happy to be here. Thank you for having me.Steve King 02:11Sure. Lets jump right in. We all know that traditional OpSec is failing modern enterprises, and that weve got many hidden risks in open source API security. In fact, you guys published a report, I think, entitled secrets in the code, which eloquently describes the business industry impact of your research, along with some actionable insights for practitioners? Can you give us an overview of that? Sure. So as aMoshe Zioni 02:40backdrop secrets, ENCODE is something that many developers and security professionals have been pointing out throughout recent years. But of course, it is as old as code exists. Simply put it is the fact that developers are putting into their code, some strings, or some artifacts that are there without a real reason, or at least not a secure reason to do to do the same thing with a secure string, or maybe some alternative that we have currently, like vaults or something. So instead, theyre using hard coded secrets secret can be a password, a token that can be utilized, again, a cloud service or something, something in this in the Spirit. And by using that sometimes they neglect it in code. And once this code is, is open source to the world, some other hacker can pick it up from the source itself, and utilize it for their own good their permissions there or authorization that you get from those tokens are is of course, varies between different suppliers and providers. But in general, you can think of the most common examples are like tokens to a specific API service that can give you maybe some credentials to implement or to access, cloud services and cloud resources of the organizations. So this is the backdrop of why we actually went through the research method and eventually resulted in the report that youve just mentioned. 
And in this report, we found we took like something around 20 Different organizations with different scale with different industries. And through those organizations, we actually scanned pretty rigorously all of their commits. commits are the single piece of code that are being pushed into an open source repository. And we reach 2 million commits overall. And by those commits, we have a very good grasp of how secrets behave in code how developers are, wrongly put their secrets in their code. And also what kind of what can we learn from those kinds of behaviors? Is there a Some things you can point out as a pattern. And of course, the result is the report. So you can guess there are some patterns that are most interesting to explore. And to add to the decision making processes within security professionals and organizations, once they have their plan or strategy strategic plan put intoSteve King 05:21place. Yeah. And are there quite a few dependencies that, you know, downstream dependencies on other open source programs that are called by some of these APIs and, and other open source code that no one has any idea? What what those are? Or are people? I guess the question is, how do we vet? Is that even possible that event the percentage of code that we that we reuse from these libraries?Moshe Zioni 05:52Wow, thats a great question. And of course, a very complex answer. Ill try to do it briefly. The short answer is that you can assess at least the risk of having specific package or dependencies that you use and import into your code. There is a limit to it, of course, because everything can be seen as a risk. And what we are proposing and we are, can we actually have another project in open source project for that name, the dependency come popular, which is doing exactly that, its taking into account multiple intelligence feeds, and made a data of the packages and trying to assess what is the risk of using this kind of import of using this kind of package. There are different ways to go about this kind of route of intelligence over packages, you can maybe scan them, you can actually went through a code review practice with them. But this is, of course, a very laborious and expensive in resources, of an effort to go about every kind of open source dependency that youre using, that this number is just accumulating over time, and, from our perspective, never go down, we all see the trend of using more and more open source, there is good reason for that is this saves a lot of time, this is this becomes a standard. And by that you can implement and produce better and also faster software to production. So we dont see retraction from this kind of trend. Quite the opposite.Steve King 07:29Yeah, no, and I, you know, the Imam understand that the need for you know, if were driving so desperately to digitalization and, and the fourth result revolution, and all of that I see the need for, you know, agile development, of course, but, you know, I mean, at some point, dont you say, you know, the cost is far outweigh? I mean, to do it to do it in a safe context, isnt the cost far outweigh the benefit? Its amazing to me, I know, you guys have developed some best practices also, when it comes to, you know, ethically reporting and patching these vulnerabilities. And can you help our audience understand what a few of these might be? And do they include, you know, if we run into a secret, for example, or the dependency that youre working on? Now, do you alert the dev SEC ops team? 
Or how does that work?Moshe Zioni 08:25Again, this is a very good point on both cases, and on once you find a vulnerability or you find the secret, which can be seen as a subset of a file a vulnerability in code, some kind of weakness that you are exposing. So in general, yes, there is a responsible disclosure process. If you are internal to the organization, this should be easy for you, you should contact your immediate app SEC engineer or app SEC representative. And by that acknowledges them that should they should respond to this kind of incident. By that they need to, of course, prista, first of all remediate meaning that they need to revoke the token, after they are rotating it into a more secure way and fixing the code. To be supportive of that on dependencies are quite the same. If you find a dependency we have our ability, you acknowledge that to the to your closest representative if you are extended to the organization, thats a bit more complicated, but fortunately, we have many processes around that. Its collectively called responsible disclosure, meaning that you are disclosing a vulnerability or maybe a weakness as we mentioned the secret to an organization Hey, listen, you have this kind of of an issue. And you also would like to extend an explained sometimes why this is an issue. What kind of business impact does this help desk this issue has over business noteworthy organizations. Once you have that you are filling up a short report, maybe an email they maybe they have some kind of a bug bounty program which Just another way to support this kind of disclosures. And by that you can go about and just disclose this kind of information safely to the organization, you can look up for more mature organizations will have their contact in the front page, just as for security manners, and of course, every kind of respectable corporate will have this kind of process one way or another.Steve King 10:25Yeah. And I assume that that means that we want to only work with mature organizations with that have ways of interacting and contacting to make sure that were able to do that responsible disclosure, and have them act on it. Right? Yeah,Moshe Zioni 10:44yeah, absolutely. We, this is one measurement for you to measure, if those kinds of issues have been just mentioned dependencies, just to measure if this, this, this dependency is being mature enough in terms of security, you can see if there were any kind of vulnerabilities in the past, you can see if they have a process installed, in order to contact their security advisory or security board. And by that you can assess at least their seriousness and their maturity in terms of security processes. This is a great indicator. Yeah, I must agree. Yeah. SoSteve King 11:14are you attempting to do that in an automated context? Or do you simply return the discovered dependency to a manual process where people dont have to look it up.Moshe Zioni 11:30So we do both, it really depends on on what the customer needs. And you can, you can, you can set it up as you will, if youd like to have just as a, an alert or something that will be notifying you about this kind of discrepancy, maybe a vulnerability funding dependency, so youll be able to manually act upon. 
And also, on many vulnerabilities, there are automation processes in place, so you can just forget about it and say you want to be automatic, most of the organizations will have some kind of a mix for high impact vulnerabilities, excuse me high impact on the business, they would like to assess it manually. Either way they can break. For example, if you just need to update the dependency version, you will need to test it first by a human being maybe in the future, that will be even better. So well well be able to just reduce this kind of effort as well. But currently, every kind of high business impact application will have to have some kind of a manual analysis and manual testing before releasing it to to a stable state. You can choose for at least for the time being if you would like for example, just to have a bit as a beta for testing, or maybe for some cutting edge. And someone thats more like to to have the risk of return, they be able to automatically update for the latest version and then just use it as is.Steve King 12:55Yeah, I got it. Ransomware is continuing to be a thorn and everybodys side is growing like crazy. For all the obvious reasons. Youve got advice on how organizations can best mitigate future ransomware attacks and specifically around supply chain and open source? Security. I know a lot of people that would love to hear the answer to that question. How do you mitigate future ransomware attacks,Moshe Zioni 13:22when we are discussing ransomware. Or if we can generalize it a bit for any kind of malware activity, malware can be directed and can be implemented. Not just of course, by a ransomware, I agree with you the trend somewhere is the most prominent attack vector once you have a foothold into the organizations. And what we are foreseeing and what we are proposing, especially around the supply chain, and they were supply chain ransomware attacks is to defend your code as early as you can. And also, that means that there is a trend called shift left meaning that you would like to have as much as those kind of things and validation done as soon as possible not once, not just once you are going to production. And the second rule of thumb here is if you have something more closer to the actual production systems, what youll be able to do is to lock down the versions lock down the specific cases, specific dependencies that you have. And by that, even if someone is lets say half men in the middle attack over your dependencies, youll be able to validate, and by the signature and by the fingerprint of those kinds of dependencies that you you actually get what youre expecting. So nothing like for example, a very common mistake in those kinds of cases that can lead to those kinds of attacks, potentially, is to leave it to the dependency to be able to pull down the latest version instead of the specific version that you know that is safe to use, and buy that every time that they So a build will go up, it will request the latest version without acknowledging what kind of certificate what kind of fingerprint should should this version have. And this is called a locking, version locking. So you lock the version, you can also add to that on many package managers, the actual fingerprint of the package. And by that you ensure that at least you wont be harmed, harmed by a new kind of attack through the supply chain through dependencies, if that makes sense.Steve King 15:27Okay, how much post sales support? 
And training do you guys have to provide to get your customers that fully extract value from the solution?Moshe Zioni 15:42I would say not much. First of all, we are in very close contact with our customers. As a startup, of course, we have this kind of agility to fit their needs pretty quickly. And we are going through the rule of thumb that if it doesnt make sense, the first time you look at it, it maybe will make sense that the third or fourth time you will but thats something that we are refraining from we are trying to make the system approachable meaning that the you user experience itself should reflect native flows of organizations and not enforcing the organizations to our will, and our own processes and what we think Sheesh, they should do. The second thing we are doing its the whole system is interconnected with your current processes. So it wont make up new processes, if you dont like to, the workloads that we can build for you are automatic and are suitable for your ticketing system, maybe for your instant messaging systems like Slack like teams, etc. And by that we are leaving the ecosystem instead of instructing it.Steve King 16:45Do you think you can scale that down as you grow?Moshe Zioni 16:49Absolutely. Currently, the the way that we are doing that is, first of all, we are a cloud native ourselves. So by that the scalability that we, if we have any kind of scalability requests, is pretty easy to do. DevOps teams are pretty used to that. And we are also always preparing ourselves to do much more than we are currently withholding. And, of course, we are looking into more and more customers, we have huge customers on our portfolio. And by that we are pretty confident with that. But of course, we are always checking those kinds of assumptions, we dont want anyone to be held down by resources or anything similar to that. And the process itself is pretty easy, you can be ramped up into onto the payroll platform, in a matter of less than a day, or even less than some than several hours sometimes depends on your size. And the analysis itself will also kick in soon as possible, though, you will have your repositories analyzed and if you are asSteve King 17:53what size customer is your ideal prospect or your ideal end user in terms of, you know, a number of people or obviously they have to have DevStack ops team, how big does that have to be? Yeah,Moshe Zioni 18:09so this is the funny thing. We are, first of all, we are seeing a lot of different customers in terms of structure. So sometimes they will have their own DevStack ops team, sometimes they will, they will have dev ops team and not dev SEC ops team, sometimes they wont have either and they maybe will have a single entity named OpSec, engineer or upset professional to go about and do the work of app SEC application security, excuse me. And by that the whole purpose of the A pillar system is to save you those kinds of resources, you you you wont need it before that you lets say you need 10 people to to exercise application security throughout your supply chain Bureau is diminishing those numbers to a single digit. And on the low end of it, the purpose of it is to make the clutters of the alerts and the alarms that you have all the bells and whistles that goes off every time you will have the minimum amount that you need. And the very focused one, dealing with deduplication dealing with automations of those kinds of processes. 
So in general, our idea of of, of an organization will will have to be something that some organization that will have at least one application security personnel, that can be a devsecops that can be a DevOps, and that can be an absolute professional. In terms of number of developers, you can go up to the hundreds of 1000s. But in general, thats the whole idea that the system is scalable. We are learning as much as we can from from those kinds of development developer behavior. So if you have more developers, that will make much more value. But if even if you have quite a few, even in the numbers of 10s developer, a few 10s of developers, its still going to be much valuable information and insights about who is doing what Add how what is the timeline of each material change in the code? What kind of code impacts you more than that something else and the risks that every code commits, is contributing to your to your repositories. And of course, you decide what to do with it. And we aid you with our workflows and automations around remediation and measurement.Steve King 20:21Yeah, I see. And thats got, thats got to be one of your key value propositions as well, right? Peoples dont have to stand up a whole dev SEC ops team, they, if they dont have one, thats fine, too, because youre actually doing that work.Moshe Zioni 20:38Exactly. We have some very good indications on that from customers that they applaud us on several occasions than we recently on past months, everyone had those kinds of VIP CDs, meaning vulnerabilities are very high impact into data streams. And instead of spending hours, maybe days, maybe a week, some customers said that their peers in the industry spent two weeks in order to discover all of the weaknesses they have, it took a took them with a much less of a much, much fewer applications, security professionals. And within a few hours, they had all the information they needed to mitigate and to spot every every weakness in every vulnerability that was that were discussed, and those kinds of events. So this is a very good assurance, that the impact and to the philosophy that we are taking reallySteve King 21:31your platform. Yeah, sounds like it. Thats great. Weve talked about numbers a little bit here that you know, you in the difference between private and public repositories, you youve discovered that I dont know, it was like eight times the number of expose secrets and privates. Can you told me give our listeners the difference between private and public repositories? And why that wed have eight times the number of expose secrets in private repositories? Yeah, sure. SoMoshe Zioni 22:00they there is a technical answer to that. And there is a, I would say psychological, psychological aspect of it. So first of all, the technical answer is that private versus public, a public repository is something that you quite, not surprisingly, opening up to the world and to the public. So everyone can can see your code. The reasons for that vary, sometimes its something that you would like to share, because it would like to share something with the community or maybe some some kind of a support to other customers that you have yourself, or you have an Open Source Repositories that you are maintaining the private repositories, which are the funny thing is that they are much more common than the public ones in organizations, of course, is your code that you dont you dont want to expose to the world. So this is the technical aspect of repositories, private versus public. 
The other aspect of it is more a psychological and organizational level aspect, is that what you do with those kinds of private repositories, those private repositories holds your crown jewels. And another difference is that those private repositories have maybe a different threat actor attacking or, or influencing the risk of those kinds of repositories. And what we found in the research is that, as you said, you have eight times the number of secrets on those kinds of private repositories. This is the first of any kind of report that covered internal repositories, to the to this breath. And by that you can also think or at least correlate the fact that developers and every organization feel much more safer to keep their code within their realms. And by that some secrets can slip in much more heavily. And also you they will never expect those kinds of secrets to go out. So they will assume this is safer, and maybe they shouldnt act upon it as furiously as they will be on public repositories. But this is completely false. First of all, many accidents that weve weve encountered and aided in those kinds of incidents, try to convey the message that some of those extents begin with the private repositories. But then sometime in the future, this code snippet or maybe the whole repositories, become public. The second thing is that if those private repositories are private, that doesnt mean that that no one can see that its accepted, specific developer quite the opposite. In those kinds of organizations, many have those kinds of access. And something like a snippet can slip through someone can copy paste something to an unsecure device. And by that you see those kinds of private repositories maybe the most notorious case of the past here was the Twitch link, which the streaming service have been hacked sometime in the past and in 2021, and the end of 2021. We saw the link itself a few gigs. bytes of code. And we saw how many, this is pretty confirming to this kind of aspects, how many secrets there were in twitches code doesnt mean that Twitch is any different from any kind of another implementation, it just confirms the fact that those kinds of secrets are much more prevalent in entire repositories.Steve King 25:19Wow. You know, as it gets more complicated the human factor, it gets more important, doesnt it? Across the board, whether its, you know, server configurations, or open source code, or the kind of mistakes that humans make, just naturally, I mean, people are people, you know. So its, its always interesting to me, it is also interesting that I hit you said that over a third of the secrets that you detected, your research detected happened in the first quarter of the year. What is the correlation between that time of year and the number of secrets?Moshe Zioni 26:01Yeah, Im happy to bring that up. Because for me, its the most revealing fact from the report Maybe, and maybe most surprising to many. But when you think about it, what the actual the actual report stated that 30 point 34 point 34% of secrets that were found, were added to those repositories during the first few months during the first quarter of the year. This is spanning the research itself spanned throughout multiple years. So and we saw this kind of very clear cadence that you have in from the beginning of the of the year to the end of it, you have some kind of a sine wave throughout, and the correlation that we found, and we also discussed it with, with experts and some on an organizations themselves. 
By the way, I haven't mentioned until now that the report itself has been vetted, validated, and discussed with 15 different external experts in the field of application security. Some of them are our customers; some of them are champions of application security globally. They reviewed it and gave their insights as well. Part of what we heard is that many organizations have a rotation cadence for secrets. Quite naturally, it may be at the beginning of the year, or somewhere else inside the fiscal year, that secrets need to be rotated because licenses are being renewed. Or maybe they just had a very good year and did some very aggressive recruitment, so they have many more new employees, and new developers make more mistakes; that's another fact we put in the report, by the way. So we see this kind of seasonality, first of all because of organizational cadences outside of secrets that affect secrets indirectly. We can also think of the holidays, especially the US holidays at the end of the year. The time people take off and then return from may be overburdening for the application security team, which is always under the stress of accomplishing more, so they have less time for code reviews and can't really stop the whole flood of secrets at those times of year. Those are, of course, assumptions and correlations; we can't really prove them one to one. But we see these correlations pretty strongly, especially the seasonality and rotation factors I mentioned.

Steve King 28:41: Yeah, that makes sense. I'd love to get a copy of that report if it's now public; perhaps you can email me some version. That'd be great; it's worth promoting for sure. This is a huge problem. It's right up there, in my mind, with all of the other complicating factors: our networks being way too complicated at the moment, and our approach relying way too much on the human factor. I think we're near the end of our time here, and I wanted to have you confirm that a brief way to summarize Apiiro is that you discover, remediate, and measure every API, service, dependency, and piece of sensitive data in the CI/CD pipeline to map the application attack surface, right?

Moshe Zioni 29:42: Right, together with contextual knowledge about the risks themselves: what the material change is, what kind of technologies you are using, whether the actual code change affects authorization, authentication, storage, or anything along those lines, and much more. All this contextual knowledge gives us the power to really recommend and to score risks according to your organization's own normalization, and not just by something ad hoc and agnostic to your kind of organization. Context is everything, and it's no different with these kinds of risks.

Steve King 30:15: Yeah, sure. And this all happens pre-production, right? Before entry into the production stream and into the cloud?

Moshe Zioni: Yeah, correct.

Steve King: Okay. So who are some of your more notable customers that folks would recognize? And what competitors would folks expect to find when looking for a code risk platform? Is that a category, by the way, "code risk platform"? Is that a Gartner thing?
Or did you guys coin that?

Moshe Zioni 30:46: I don't think it's a Gartner category; the closest Gartner thing is CNAPP, the cloud-native application protection platform. As for customers, I can't mention every customer we have, but just to name a few: Platica, Chegg, TripActions, Imperva, Rivian, MindGeek, Rakuten, and many more are on our platform. If you look at that whole line-up, they are diverse customers from many industries, of any shape and size. That, of course, gives us a lot of joy: working with big customers that know how to run application security programs, and who in turn get a platform built by experts that gives them this kind of contextual power.

Steve King 31:37: Yeah, I'm sure. In terms of competitors, I know you guys are early. Have there been a bunch of competitors creeping up? Or do you have any serious competitors that you worry about?

Moshe Zioni 31:51: I think it's too early to really designate a competitor. There are a lot of cloud-related startups and solutions, but everyone is doing their own thing very differently, and we are not excluded there. So I don't see anyone as a direct competitor, but the area is still fresh; let me put it that way. Ask me again in one year.

Steve King 32:19: You know I will. I believe I'll have you back in a year, and we'll have the same conversation and see where you are, which is great. I mean, when you sold to Imperva, there must have been competitors there that you beat out, right?

Moshe Zioni 32:37: Again, we have a very unique approach and philosophy, to the market and to application security in general. To be honest, the first time I heard from the founders of the company, Idan Plotnik and Yonatan Eldar, about the solution, my jaw just dropped. As a veteran of the application security industry, this was not just news but earth-shaking, a paradigm shift in the way organizations should deal with application security from now on. And even this much later, I still feel there is no competitor at the same scale and maturity, or even taking the same approach we are. That's why I'm struggling to find the direct competitor you are looking for.

Steve King 33:26: Yeah, no, I know. I don't believe you're being evasive at all. I think you're right; I don't know of any competitors here either. That's why, when Alex originally contacted me, I was floored. I was like, can this be for real? Because you're absolutely right: this is a solution I haven't seen before, and it is revolutionary, absolutely, in terms of security by design. No question about it. So thank you, Moshe, for taking the time out of your crazy schedule, I'm sure, to join us today. This is Moshe Zioni, the VP of security research at Apiiro, and we will ask you to come back, not in a year but maybe six months, and have another one of these conversations and see what's happened in the market. You know, we're heading into a challenging moment over the next few months, but cybersecurity is not going to stop, and people still need to protect their PII and IP and all the rest of it. So I'm sure you'll have a fantastically successful quarter.

Moshe Zioni 34:41: Thank you very much, Steve.
And I'm looking forward to the next invitation. It was a very pleasant discussion, and those were good questions. Thank you very much.

Steve King 34:49: Good. Thank you. And thank you to our listeners for joining us for another one of our unplugged reviews of the stuff that matters in cybersecurity, technology, and our new digital landscape. Until next time, I'm your host, Steve King, signing out.

See original here:

Secrets in the Code: Open-Source API Security Risks - BankInfoSecurity.com

Supercloud debate: Is open-source standardization the way forward? Databricks CEO weighs in – SiliconANGLE News

Supercloud is an emerging trend in enterprise computing that is predicted to bring major changes to how companies build out their cloud architecture.

Over the past six months, SiliconANGLE Media has been following the increase in companies considering supercloud as a way to get rid of multicloud complexity and help their customers monetize data assets.

Building a supercloud isn't a one-size-fits-all project. There are as many flavors of supercloud as there are choices for cloud. Some, like Snowflake Inc., are opting for the proprietary variety. Taking the opposite side of the debate is Databricks Inc., which advocates building on open-source standardization.

"Open source can pretty much do anything," said Ali Ghodsi, co-founder and chief executive officer of Databricks Inc. "We think that open source is a force in software that's going to continue for decades, hundreds of years, and it's going to slowly replace all proprietary code in its way."

Ghodsi spoke with theCUBE industry analyst John Furrier at Supercloud 22, during an exclusive broadcast on theCUBE, SiliconANGLE Media's livestreaming studio. During "The Open Supercloud" session, they discussed the advantages and disadvantages of taking an open approach to supercloud.

Can open standards deliver the same experience as de facto, proprietary approaches in terms of control, governance, performance and security when it comes to building an abstraction layer that leverages hyperscaler power to deliver a consistent experience to users and developers? Databricks has bet its fortune on the fact that it can.

The company's data lakehouse platform provides an example of an open-source supercloud in action. Built on a structured and unstructured cloud data lake powered by the hyperscalers, which is made reliable and performant by Delta Lake, the platform provides a common approach to data management, security and governance through its Unity Catalog layer.

"We're big believers in this data lakehouse concept, which is an open standard to simplifying the data stack and help people to just get value out of their data in any environment," Ghodsi said.

Around 80% of Databricks' customer base is on more than one cloud, and they are struggling with the complexity, according to Ghodsi. Reconfiguring data management models over and over to integrate with the different proprietary technologies of the various cloud providers is a time-consuming and difficult task, brought about by the ad hoc creation of multiclouds "by default rather than by design," a description given by Dell Technologies Inc.'s Co-Chief Operating Officer Chuck Whitten.

It's the operations teams that bear the brunt of integrating new technology and making sure it works, according to Ghodsi. And doing it in multiple environments, each with a different proprietary stack, is a tough challenge.

"So, they just want standardization," he said. "They want open-source technologies. They believe in the communities around it. They know that source code is open so you can see if there are issues with it, if there are security breaches, those kinds of things."

Databricks didn't set out to build a supercloud. The company's mission is to help organizations move through the data/artificial intelligence maturity model, bringing them to the point where they can leverage prescriptive, automated AI/machine learning in the same way that enabled the tech giants to reach the level they are at today, according to Ghodsi.

"Google wouldn't be here today if it wasn't for AI," he said. "You know, we'd be using AltaVista or something."

The continuum starts when a company goes digital and starts to collect data, Ghodsi pointed out. They want to clean it, get insights out of it. Then they move on to using the crystal ball of predictive technology. The end comes when a company can finally automate the process completely and act on the predictions.

"So this credit card that got swiped, the AI thinks it is fraud, we're going to deny it," he said. "That's when you get real value."

The basis of Databricks' data lakehouse, which falls under theCUBE's definition of supercloud, started when the company developed the Delta Lake framework in 2019 as a way to help companies organize their messy data lakes. The same year the project was donated to the Linux Foundation in order to encourage innovation. Then, at the start of Databricks' Data + AI Summit this past June, Databricks removed any differences between its branded Delta Lake and the open-source version by handing the reins of the storage framework to the Linux Foundation.

"What we're seeing with the data lakehouse is that slowly the open-source community is building a replacement for the proprietary data warehouse, Delta Lake, machine learning, real-time stack in open source, and we're excited to be part of it," Ghodsi said.

Potentially, the most important protocol in the data lakehouse is Delta Sharing, according to Ghodsi. The open standard enables organizations to efficiently share large data sets without duplication. And being open source means that any organization can access the functionality to help build a supercloud in the design that works best for their organization.

"You don't need to be a Databricks customer. You don't need to even like Databricks," Ghodsi said. "You just need to use this open-source project and you can now securely share data sets between organizations across clouds."
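To make that concrete, here is a minimal sketch of the recipient side of Delta Sharing using the open-source delta-sharing Python client. The profile file name and the share, schema and table names are placeholders; in practice the data provider issues the ".share" profile file to the recipient.

    # pip install delta-sharing
    import delta_sharing

    # The ".share" profile file (issued by the data provider) contains the
    # sharing server endpoint and a bearer token for this recipient.
    profile = "open-datasets.share"  # placeholder path

    # Enumerate every table the provider has shared with this recipient.
    client = delta_sharing.SharingClient(profile)
    for table in client.list_all_tables():
        print(table)

    # Load one shared table straight into a pandas DataFrame, without copying
    # the underlying data into another warehouse first.
    table_url = profile + "#example_share.example_schema.example_table"
    df = delta_sharing.load_as_pandas(table_url)
    print(df.head())

Because the protocol itself is open, the same profile file works from any client that implements it, whichever cloud hosts the underlying tables.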

Open source has already become the software default, and in the next couple of years, it's going to be a requirement that software works across the different cloud environments, according to Ghodsi.

"Is it based on open source? Is it using this data lakehouse pattern? And if it's not, I think they're going to demand it," he said.


Continue reading here:

Supercloud debate: Is open-source standardization the way forward? Databricks CEO weighs in - SiliconANGLE News

OFRAK, an Open Source IoT Reverse Engineering Tool, Is Finally Here – WIRED

At the 2012 DefCon security conference in Las Vegas, Ang Cui, an embedded device security researcher, previewed a tool for analyzing firmware, the foundational software that underpins any computer and coordinates between hardware and software. The tool was specifically designed to elucidate internet-of-things (IoT) device firmware and the compiled binaries running on anything from a home printer to an industrial door controller. Dubbed FRAK, the Firmware Reverse Analysis Console aimed to reduce overhead so security researchers could make progress assessing the vast and ever-growing population of buggy and vulnerable embedded devices rather than getting bogged down in tedious reverse engineering prep work. Cui promised that the tool would soon be open source and available for anyone to use.

"This is really useful if you want to understand how a mysterious embedded device works, whether there are vulnerabilities inside, and how you can protect these embedded devices against exploitation," Cui explained in 2012. "FRAK will be open source very soon, so we're working hard to get that out there. I want to do one more pass, internal code review before you guys see my dirty laundry."

He was nothing if not thorough. A decade later, Cui and his company, Red Balloon Security, are launching Ofrak, or OpenFRAK, at DefCon in Las Vegas this week.

"In 2012 I thought, here's a framework that would help researchers move embedded security forward. And I went on stage and said, I think the community should have it. And I got a number of emails from a number of lawyers," Cui told WIRED ahead of the release. "Embedded security is a space that we absolutely need to have more good eyes and brains on. We needed it 10 years ago, and we finally found a way to give this capability out. So here it is."

Though it hadn't yet fulfilled its destiny as a publicly available tool, FRAK hasn't been languishing all these years either. Red Balloon Security continued refining and expanding the platform for internal use in its work with both IoT device makers and customers who need a high level of security from the embedded devices they buy and deploy. Jacob Strieb, a software engineer at Red Balloon, says the company always used FRAK in its workflow, but that Ofrak is an overhauled and streamlined version that Red Balloon itself has switched to.

Cui's 2012 demo of FRAK raised some hackles because the concept included tailored firmware unpackers for specific vendors' products. Today, Ofrak is simply a general tool that doesn't wade into potential trade secrets or intellectual property concerns. Like other reverse engineering platforms, including the NSA's open source Ghidra tool, the stalwart disassembler IDA, or the firmware analysis tool Binwalk, Ofrak is a neutral investigative framework. And Red Balloon's new offering is designed to integrate with these other platforms for easier collaboration among multiple people.

"What makes it unique is it's designed to provide a common interface for other tools, so the benefit is that you can use all different tools depending on what you have at your disposal or what works best for a certain project," Strieb says.

Go here to see the original:

OFRAK, an Open Source IoT Reverse Engineering Tool, Is Finally Here - WIRED

Application Security Tools: Which solution is best? – IDG Connect

As the threat of cybercrime continues to grow, it is more important than ever for business leaders to ensure the security of their applications. For many, this means utilising application security tools tailored to the demands of today. However, selecting a product isn't always easy and there are many to choose from.

Over 540,000 professionals have used Peerspot research to inform their purchasing decisions. Its latest paper looks at the highest-rated application security tool vendors, profiling each and examining what they can offer the enterprise.

Here's a breakdown of the key players currently active in the market:

Average Rating: 7.6

Top Comparison: SonarQube

Overview: Highly accurate and flexible static code analysis product that allows organisations to automatically scan uncompiled code and identify hundreds of security vulnerabilities in all major coding languages and software frameworks.

Average Rating: 8.8

Top Comparison: Veracode

Overview: A breakthrough technology that enables highly accurate assessment and always-on protection of an entire application portfolio, without disruptive scanning or expensive security experts.

Average Rating: 8.9

Top Comparison: Snyk

Overview: Helps organisations detect and fix vulnerabilities in source code at every step of the software development lifecycle.

Average Rating: 7.7

Top Comparison: Black Duck

Overview: Effortlessly secures what developers create and uniquely removes the burden of application security, allowing development teams to deliver quality, secure code faster.

Average Rating: 7.7

Top Comparison: SonarQube

Overview: A web application security testing tool that enables continuous monitoring. The solution is designed to help organisations with security testing, vulnerability management, and tailored expertise.

Average Rating: 8.6

Top Comparison: OWASP Zap

Overview: The world's leading toolkit for web security testing. Over 52,000 users worldwide, across all industries and organisation sizes, trust the solution to find more vulnerabilities, faster.

Average Rating: 8.4

Top Comparison: SonarQube

Overview: User-friendly security solution that enables users to safely develop and use open source code. Users can create automatic scans that allow them to keep a close eye on their code and prevent bad actors from exploiting vulnerabilities.

Average Rating: 8.0

Top Comparison: Veracode

Overview: The leading tool for continuously inspecting code quality and code security, and guiding development teams during code reviews.

Average Rating: 8.6

Top Comparison: SonarQube

Overview: An open-source security and dependency management software that uses only one tool to automatically find open-source vulnerabilities at every stage of the system development life cycle.

Average Rating: 8.1

Top Comparison: SonarQube

Overview: A unique combination of SaaS technology and on-demand expertise that enables DevSecOps through integration with enterprise pipelines and empowers developers to find and fix security defects.

Originally posted here:

Application Security Tools: Which solution is best? - IDG Connect

Rezilion Offers MI-X, An Open Source Tool to Help Cybersecurity Community Determine if a Vulnerability is Exploitable – or Not – PR Newswire

LAS VEGAS, Aug. 11, 2022 /PRNewswire/ -- Today Rezilion announced the availability of MI-X, a newly created open-source tool developed by Rezilion's vulnerability research team that made its debut this week at Black Hat Arsenal. Available as a download from the GitHub repository, it is a CLI tool that can help researchers and developers know if their containers and hosts are impacted by a specific vulnerability, thus allowing organizations to target remediation plans more effectively.

"Cybersecurity vendors, software providers and CISA are issuing daily vulnerability disclosures alerting the industry to the fact that all software is built with mistakes that must be addressed, often immediately. With this influx of information, the launch of MI-X offers users a repository of information to validate exploitability of specific vulnerabilities creating more focus and efficiency around patching efforts," said Yotam Perkal, Director, Vulnerability Research at Rezilion. "As an active participant in the vulnerability research community, this is an impactful milestone for developers and researchers to collaborate and build together."

Current Vulnerability Tools Don't Factor In Exploitability

Each day, organizations grapple with a litany of critical and zero-day vulnerabilities and scramble to understand if they are affected by that vulnerability before a threat actor figures it out first. Many times, their existing tools cannot help them make this determination. That's because in order to do so, organizations need to:

What organizations need is a tool that can answer the two questions above. Current vulnerability scanners take too long to scan, don't factor in exploitability, and, depending on the nature of a specific vulnerability, often miss it altogether - as was the case with the recently discovered Log4j vulnerability. The lack of tools gives threat actors a lot of time to exploit a flaw and do major damage.

MI-X helps you to understand if you are actually affected by a specific vulnerability

Using MI-X, organizations can identify and establish the exploitability of 20+ high-profile CVEs within their environment, including hosts and containers. The tool can easily be updated to include coverage for new critical and zero-day vulnerabilities.

The tool will be a key asset to security teams seeking to know if critical bugs are a serious threat to their individual software environment so they can take action. With MI-X, security teams can scan a specific host or container and determine if a high-risk vulnerability is present and exploitable in hosts and containers.
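The distinction between "present" and "actually exploitable" is the interesting part. The sketch below is purely illustrative and is not the MI-X implementation: it shows the general two-step pattern such a tool follows for one well-known case, first locating a vulnerable log4j-core version on disk and then checking one runtime condition (the no-lookups environment flag) that was widely used as a partial mitigation for CVE-2021-44228.

    # Illustrative only -- not MI-X. Shows the "present vs. exploitable" pattern.
    import os
    import re
    import subprocess

    LOG4J_CORE = re.compile(r"log4j-core-2\.(\d+)\.")

    def find_log4j_core(root: str):
        """Return (path, minor_version) for log4j-core jars found below `root`."""
        out = subprocess.run(
            ["find", root, "-name", "log4j-core-*.jar"],
            capture_output=True, text=True,
        )
        hits = []
        for path in out.stdout.splitlines():
            match = LOG4J_CORE.search(path)
            if match:
                hits.append((path, int(match.group(1))))
        return hits

    def likely_exploitable(minor_version: int) -> bool:
        """CVE-2021-44228 affects log4j-core 2.x up to 2.14; the no-lookups flag was a common partial mitigation."""
        mitigated = os.environ.get("LOG4J_FORMAT_MSG_NO_LOOKUPS", "").lower() == "true"
        return minor_version <= 14 and not mitigated

    for path, minor in find_log4j_core("/opt"):
        verdict = "likely exploitable" if likely_exploitable(minor) else "present, likely mitigated"
        print(f"{path}: {verdict}")

A real exploitability determination weighs far more context (how the library is reached, network exposure, configuration), which is exactly the work a tool like MI-X is meant to shortcut.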

MI-X is ideal for researchers, developers, and very small organizations to quickly detect the presence and exploitability of a known critical CVE so they can eliminate guesswork and focus on remediating what presents a true risk to the environment.

MI-X is easily upgradeable to expand coverage of new vulnerabilities, and by using it, security teams can strategically identify vulnerabilities without the need for expensive tools. Through MI-X, users can:

The introduction of MI-X is the first of a series of initiatives planned by Rezilion to foster a community around detecting, prioritizing and remediating software vulnerabilities.

For more information on getting started with MI-X, visit https://www.rezilion.com/rezilion-tools/am-i-exploitable/ or join the tool's open Slack channel at https://www.rezilion.com/lp/join-the-mi-x-community-on-slack/.

About Rezilion:

Rezilion's platform automatically secures the software you deliver to customers. Rezilion's continuous runtime analysis detects vulnerable software components on any layer of the software stack and determines their exploitability, filtering out up to 95% of identified vulnerabilities. Rezilion then automatically mitigates exploitable vulnerabilities across the SDLC, reducing vulnerability backlogs and remediation timelines from months to hours, while giving DevOps teams time back to build.

Learn more about Rezilion's software attack surface management platform at http://www.rezilion.com and get a 30-day free trial.

Media Contact: Danielle Ostrovsky, Hi-Touch PR, 410-302-9459

SOURCE Rezilion

Read the original here:

Rezilion Offers MI-X, An Open Source Tool to Help Cybersecurity Community Determine if a Vulnerability is Exploitable - or Not - PR Newswire

The 5 Top App Definition and Build Tools From CNCF – Container Journal

Kubernetes has evolved to become the foundation of the modern cloud-native stack. Yet, adopting this lovable beast of a container platform doesn't come without its hurdles. Thankfully, many toolsets now exist to help engineers package, deploy and manage applications using Kubernetes.

Below, we'll look at some graduated and incubating CNCF tools that fit under the application definition and image build category. These open source packages address the operational concerns of Kubernetes, making it easier to install dependencies, generate Kubernetes operators, containerize VMs and more. If you want to improve the developer experience around Kubernetes adoption, these tools are an excellent first place to look.

The Kubernetes package manager

Website | GitHub

Few cloud-native projects are as well-known as Helm, the Kubernetes package manager. If you're finding, sharing or installing cloud-native architecture, there's a high probability you're using Helm. The helm install command is a near-ubiquitous method for installing Helm charts for third-party applications. Helm charts are shared as files and laid out in a directory tree structure. Artifact Hub is a popular repository of Helm charts, where you can find, install and publish over 10,000 Kubernetes packages. Helm is a graduated project with the CNCF and is supported by a vast number of contributors.
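As a rough sketch of that workflow, the snippet below drives the helm CLI from Python. It assumes the helm binary is on the PATH and a kubeconfig points at a reachable cluster; the Bitnami repository and its nginx chart are used only as commonly available placeholders, not a recommendation.

    # Minimal sketch: installing a third-party chart with Helm, driven from Python.
    import subprocess

    def helm(*args: str) -> None:
        """Run a helm command and raise if it fails."""
        subprocess.run(["helm", *args], check=True)

    # Register a public chart repository and refresh its index.
    helm("repo", "add", "bitnami", "https://charts.bitnami.com/bitnami")
    helm("repo", "update")

    # Install a chart as a named release, overriding one chart value at install time.
    helm("install", "demo-web", "bitnami/nginx", "--set", "service.type=ClusterIP")

    # Inspect the release that was just deployed.
    helm("status", "demo-web")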

Transform source code into runnable container images

Website | GitHub

Buildpacks, an incubating project hosted by CNCF, is a utility that assists developers in transforming their application source code into images that can run in any cloud. The concept of Buildpacks originated at Heroku in 2011. Since then, Buildpacks has been open sourced and embraced by many other companies and projects, from Google to Cloud Foundry, GitLab and Knative. Buildpacks helps application developers convert code into runnable images, and the platform also assists in packaging Buildpacks for distribution. Buildpacks uses the latest OCI container formats and comes with plenty of features, such as advanced caching, auto-detection, minimal app image, image rebasing, bill of materials and more. It also sports a centralized registry for community Buildpacks.

A K8s API and runtime to define and manage VMs

Website | GitHub

KubeVirt is an API that enables operators to quickly containerize virtual machines for Kubernetes. KubeVirt, an incubating CNCF sandbox project, provides a method to build and manage applications for containers and virtual machines in a common way. In essence, it allows operators to retain the VMs they already have while reaping the benefits of containerization. KubeVirt can be run on MiniKube, Kind and major cloud computing providers.
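To show what defining a VM through the Kubernetes API looks like in practice, here is a hedged sketch that submits a minimal VirtualMachine object with the official Kubernetes Python client. It assumes the target cluster already has KubeVirt installed and uses the small CirrOS demo container disk that appears in the KubeVirt documentation; the name, namespace and memory size are placeholders.

    # Minimal sketch: creating a KubeVirt VirtualMachine custom resource.
    from kubernetes import client, config

    config.load_kube_config()  # assumes a local kubeconfig for the target cluster

    vm = {
        "apiVersion": "kubevirt.io/v1",
        "kind": "VirtualMachine",
        "metadata": {"name": "demo-vm"},
        "spec": {
            "running": True,  # start the VM as soon as it is created
            "template": {
                "spec": {
                    "domain": {
                        "devices": {
                            "disks": [{"name": "containerdisk", "disk": {"bus": "virtio"}}]
                        },
                        "resources": {"requests": {"memory": "128Mi"}},
                    },
                    # Demo CirrOS container disk image from the KubeVirt docs.
                    "volumes": [{
                        "name": "containerdisk",
                        "containerDisk": {"image": "quay.io/kubevirt/cirros-container-disk-demo"},
                    }],
                }
            },
        },
    }

    # VirtualMachine is a custom resource, so it is created via the CustomObjectsApi.
    client.CustomObjectsApi().create_namespaced_custom_object(
        group="kubevirt.io",
        version="v1",
        namespace="default",
        plural="virtualmachines",
        body=vm,
    )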

SDK for building Kubernetes applications

Website | GitHub

The Operator Framework decreases the barrier to constructing Kubernetes applications. This handy open source toolkit provides the means to build, test and package operators, which are common Kubernetes-native applications. The SDK uses the controller-runtime library in this process. To get an idea of what sort of operators can be created, check out the Operator Hub registry for a sample of operators created by the Kubernetes community.

An open platform for building developer portals

Website | GitHub

Backstage is a new and exciting open source incubating CNCF project originally developed at Spotify. A little different than others on our list, Backstage is not concerned with building container images; rather, it's more focused on building developer portals to help centralize the management of a complex software ecosystem. Backstage is powered by the Backstage Software Catalog, a system that keeps tabs on the ownership and metadata of all the software in your organization. It utilizes metadata YAML files that sit with the source code, on which Backstage then creates visualizations. It looks to be a robust platform that can support the registration of any software type.

Above, we've highlighted the graduated and incubating CNCF projects relating to cloud-native application definition and image build. There are also many other exciting CNCF sandbox projects to keep tabs on in this field as well:


See original here:

The 5 Top App Definition and Build Tools From CNCF - Container Journal