Coinbase stock slumps on eve of Q1 results as Bitcoin sinks – Seeking Alpha


Coinbase (NASDAQ:COIN) is scheduled to announce Q1 earnings results on Tuesday, May 10th, after market close.

The consensus EPS Estimate is $0.86 (-71.8% Y/Y) and the consensus Revenue Estimate is $1.48B (-17.8% Y/Y).

Over the last 3 months, EPS estimates have seen 0 upward revisions and 5 downward. Revenue estimates have seen 5 upward revisions and 9 downward.

Coinbase stock fell ~15% on May 9, after cryptocurrency-exposed shares slumped as Bitcoin extended its slide to its lowest level since July 2021.

Earlier in May it was reported that ~19K bitcoins worth ~$703M flowed out of the cryptocurrency exchange through a series of four transactions.

The same month, Coinbase also saw its price target lowered at Mizuho to $135 from $150. The firm said that analyzing COIN's April and May volumes showed 25-30% potential downside to Q2 consensus revenue expectations.

In March, short seller Jim Chanos had said he's short the cryptocurrency exchange.

Coinbase, the largest cryptocurrency exchange platform in the U.S., currently finds itself down 70% from its IPO just over a year ago, on Apr. 14, 2021. Year to date, the stock has fallen 65.43%.

The company's stock had declined 1.52% on Feb. 25, the day after it reported Q4 results that beat analysts' estimates. COIN had said it expects subscription and services revenue to decrease in Q1 due to crypto asset declines.

Earlier in May, Coinbase was said to have ended talks to acquire 2TM, owner of Mercado Bitcoin, Brazil's largest crypto exchange. In March it was reported that COIN was in talks to acquire 2TM.

Meanwhile, in April, Coinbase was reported to be in discussions to buy Turkish crypto exchange BtcTurk for $3.2B.

In May, the company rolled out a beta version of its non-fungible token marketplace to everyone.


Dependency Issues: Solving the World’s Open-Source Software Security Problem – War on the Rocks

The idea of a lone programmer relying on their own genius and technical acumen to create the next great piece of software was always a stretch. Today it is more of a myth than ever. Competitive market forces mean that software developers must rely on code created by an unknown number of other programmers. As a result, most software is best thought of as bricolage: diverse, usually open-source components, often called dependencies, stitched together with bits of custom code into a new application.

This software engineering paradigm, in which programmers reuse open-source software components rather than repeatedly duplicating the efforts of others, has led to massive economic gains. According to the best available analysis, open-source components now comprise 90 percent of most software applications. And the list of economically important and widely used open-source components (Google's deep learning framework TensorFlow, its Facebook-sponsored competitor PyTorch, the ubiquitous encryption library OpenSSL, the container management software Kubernetes) is long and growing longer. The military and intelligence community, too, are dependent on open-source software: programs like Palantir have become crucial for counter-terrorism operations, while the F-35 contains millions of lines of code.

The problem is that the open-source software supply chain can introduce unknown, possibly intentional, security weaknesses. One previous analysis of all publicly reported software supply chain compromises revealed that the majority of malicious attacks targeted open-source software. In other words, headline-grabbing software supply-chain attacks on proprietary software, like SolarWinds, actually constitute the minority of cases. As a result, stopping attacks is now difficult because of the immense complexity of the modern software dependency tree: components that depend on other components that depend on other components ad infinitum. Knowing what vulnerabilities are in your software is a full-time and nearly impossible job for software developers.
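A toy example makes the scale problem concrete: even a short list of direct dependencies fans out into a much larger transitive set. The dependency graph below is invented purely for illustration.

```python
# Illustrative sketch of dependency-tree fan-out: an app declares only two
# direct dependencies, but a walk of the (hypothetical) graph shows it
# actually pulls in five packages, any of which could carry a vulnerability.

def transitive_deps(package, graph, seen=None):
    """Return the set of all packages reachable from `package`."""
    if seen is None:
        seen = set()
    for dep in graph.get(package, []):
        if dep not in seen:
            seen.add(dep)
            transitive_deps(dep, graph, seen)
    return seen

# Made-up graph: "app" lists only two dependencies directly...
graph = {
    "app":           ["web-framework", "crypto-lib"],
    "web-framework": ["http-parser", "logger"],
    "crypto-lib":    ["bignum", "logger"],
    "http-parser":   ["logger"],
}

# ...but transitively depends on five packages in total.
print(sorted(transitive_deps("app", graph)))
# → ['bignum', 'crypto-lib', 'http-parser', 'logger', 'web-framework']
```

Real applications routinely have hundreds of nodes in this graph, which is why auditing it by hand is close to impossible.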

Fortunately, there is hope. We recommend three steps that software producers and government regulators can take to make open-source software more secure. First, producers and consumers should embrace software transparency, creating an auditable ecosystem where software is not simply mysterious blobs passed over a network connection. Second, software builders and consumers ought to adopt software integrity and analysis tools to enable informed supply chain risk management. Third, government reforms can help reduce the number and impact of open-source software compromises.

The Road to Dependence

Conventional accounts of the rise of reusable software components often date it to the 1960s. Software experts such as Douglas McIlroy of Bell Laboratories had noted the tremendous expense of building new software. To make the task easier, McIlroy called for the creation of a "software components sub-industry" for mass-producing components that would be widely applicable across machines, users, and applications; in other words, exactly what modern open-source software delivers.

When open source started, it initially coalesced around technical communities that provided oversight, some management, and quality control. For instance, Debian, the Linux-based operating system, is supported by a global network of open-source software developers who maintain and implement standards about which software packages will and will not become part of the Debian distribution. But this relatively close oversight has given way to a more free-wheeling, arguably more innovative system of package registries largely organized by programming language. Think of these registries as app stores for software developers, allowing a developer to download no-cost open-source components from which to construct new applications. One example is the Python Package Index, a registry of packages for the programming language Python that enables anyone, from an idealistic volunteer to a corporate employee to a malicious programmer, to publish code on it. The number of these registries is astounding, and now every programmer is virtually required to use them.

The effectiveness of this software model makes much of society dependent on open-source software. Open-source advocates are quick to defend the current system by invoking Linus's law: "Given enough eyes, all bugs are shallow." That is, because the software source code is free to inspect, software developers working and sharing code online will find problems before they affect society, and consequently, society shouldn't worry too much about its dependence on open-source software because this invisible army will protect it. That may, if you squint, have been true in 1993. But a lot has changed since then. In 2022, when there will be hundreds of millions of new lines of open-source code written, there are too few eyes and bugs will be deep. That's why in August 2018, it took two full months to discover that cryptocurrency-stealing code had been slipped into a piece of software downloaded over seven million times.

Event-Stream

The story began when developer Dominic Tarr transferred the publishing rights of an open-source JavaScript package called event-stream to another party known only by the handle right9ctrl. The transfer took place on GitHub, a popular code-hosting platform frequented by tens of millions of software developers. User right9ctrl had offered to maintain event-stream, which was, at that point, being downloaded nearly two million times per week. Tarr's decision was sensible and unremarkable. He had created this piece of open-source software for free under a permissive license (the software was provided "as is") but no longer used it himself. He also already maintained several hundred pieces of other open-source software without compensation. So when right9ctrl, whoever that was, requested control, Tarr granted the request.

Transferring control of a piece of open-source software to another party happens all the time without consequence. But this time there was a malicious twist. After Tarr transferred control, right9ctrl added a new component that tried to steal bitcoins from the victim's computer. Millions upon millions of computers downloaded this malicious software package until developer Jayden Seric noticed an abnormality in October 2018.

Event-stream was simply the canary in the code mine. In recent years, computer-security researchers have found attackers using a range of new techniques. Some mimic domain-name squatting: tricking software developers who misspell a package name into downloading malicious software ("dajngo" vs. "django"). Other attacks take advantage of software tool misconfigurations that trick developers into downloading software packages from the wrong package registry. The frequency and severity of these attacks have been increasing over the last decade. And these tallies don't even include the arguably more numerous cases of unintentional security vulnerabilities in open-source software. Most recently, an unintentional vulnerability in the widely used log4j software package led to a White House summit on open-source software security. After this vulnerability was discovered, one journalist titled an article, with only slight exaggeration, "The Internet Is on Fire."
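The typosquatting technique is simple enough that defenses against it are too. As a hedged sketch, a registry or CI pipeline can flag a requested package name that sits within a small edit distance of a popular package; the "popular" list below is a stand-in, not a real blocklist.

```python
# Sketch of a typosquat check: flag names close to, but not equal to,
# well-known package names, using plain Levenshtein edit distance.

def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

# Example list only; a real system would use registry download rankings.
POPULAR = ["django", "requests", "numpy"]

def likely_typosquat(name, popular=POPULAR, threshold=2):
    """True if `name` is near-but-not-equal to a popular package name."""
    return any(0 < edit_distance(name, p) <= threshold for p in popular)

print(likely_typosquat("dajngo"))   # → True  (2 edits from "django")
print(likely_typosquat("django"))   # → False (exact match, not flagged)
```

A transposition like "dajngo" counts as two edits under plain Levenshtein; production systems often use transposition-aware distances and keyboard-adjacency heuristics on top of this.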

The Three-Step Plan

Thankfully, there are several steps that software producers and consumers, including the U.S. government, can take that would enable society to achieve the benefits of open-source software while minimizing these risks. The first step, which has already received support from the U.S. Department of Commerce and from industry as well, involves making software transparent so it can be evaluated and understood. This has started with efforts to encourage the use of a software bill of materials, a complete list or inventory of the components of a piece of software. With this list, it becomes easier to search software for components that may be compromised.

In the long term, this bill should grow beyond a simple list of components to include information about who wrote the software and how it was built. To borrow logic from everyday life, imagine a food product whose ingredients are clearly specified but unknown and unanalyzed. That list is a good start, but without further analysis of these ingredients, most people will pass. Individual programmers, tech giants, and federal organizations should all take a similar approach to software components. One way to do so would be embracing Supply-chain Levels for Software Artifacts (SLSA), a set of guidelines for tamper-proofing organizations' software supply chains.
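To make the "search your software" step concrete, here is a hedged sketch of checking a bill of materials against an advisory list. The SBOM fragment is a simplified, CycloneDX-style JSON document, and the advisory set is invented for illustration (though event-stream 3.3.6 and log4j-core 2.14.1 did have real-world issues).

```python
import json

# Minimal SBOM check: given a components inventory and a list of known-bad
# (name, version) pairs, report which components match an advisory.

sbom = json.loads("""
{
  "components": [
    {"name": "event-stream", "version": "3.3.6"},
    {"name": "log4j-core",   "version": "2.14.1"},
    {"name": "left-pad",     "version": "1.3.0"}
  ]
}
""")

# Hypothetical advisory feed entries: (package name, affected version).
advisories = {("event-stream", "3.3.6"), ("log4j-core", "2.14.1")}

flagged = [c for c in sbom["components"]
           if (c["name"], c["version"]) in advisories]

for c in flagged:
    print(f'{c["name"]} {c["version"]} matches a known advisory')
```

Real tooling matches version ranges rather than exact versions and pulls advisories from feeds like the National Vulnerability Database, but the lookup is only possible at all once the inventory exists.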

The next step involves software-security companies and researchers building tools that, first, sign and verify software and, second, analyze the software supply chain and allow software teams to make informed choices about components. The Sigstore project, a collaboration between the Linux Foundation, Google, and a number of other organizations, is one such effort focused on using digital signatures to make the chain of custody for open-source software transparent and auditable. These technical approaches amount to the digital equivalent of a tamper-proof seal. The Department of Defense's Platform One software team has already adopted elements of Sigstore. Additionally, a software supply chain observatory that collects, curates, and analyzes the world's software supply chain with an eye to countering attacks could also help. An observatory, potentially run by a university consortium, could simultaneously help measure the prevalence and severity of open-source software compromises, provide the underlying data that enable detection, and quantitatively compare the effectiveness of different solutions. The Software Heritage Dataset provides the seeds of such an observatory. Governments should help support this and other similar security-focused initiatives. Tech companies can also embrace various nutrition label projects, which provide an at-a-glance overview of the health of a software project's supply chain.
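The "tamper-proof seal" intuition can be shown with a deliberately simplified sketch: record a digest of an artifact at publish time and verify it at install time. Real systems like Sigstore use asymmetric signatures and transparency logs rather than bare hashes, but the chain-of-custody idea is the same.

```python
import hashlib

# Simplified integrity check: any modification to the artifact changes its
# digest, so a consumer comparing digests detects the tampering.

def digest(artifact: bytes) -> str:
    """SHA-256 hex digest of an artifact's bytes."""
    return hashlib.sha256(artifact).hexdigest()

published = b"print('hello from a package')"
recorded = digest(published)          # stored alongside the package

# Later, a consumer downloads the artifact and checks it.
assert digest(published) == recorded  # untampered copy verifies

tampered = published + b"; steal_wallet()"   # hypothetical malicious edit
print(digest(tampered) == recorded)          # → False
```

What signatures add on top of this is binding the digest to an identity, so a consumer learns not just that the bytes are unchanged but who vouched for them.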

These relatively technical efforts would benefit, however, from broader government reforms. This should start with fixing the incentive structure for identifying and disclosing open-source vulnerabilities. For example, DeWitt clauses, commonly included in software licenses, require vendor approval prior to publishing certain evaluations of the software's security. This reduces society's knowledge about which security practices work and which ones do not. Lawmakers should find a way to ban this anti-competitive practice. The Department of Homeland Security should also consider launching a non-profit fund for open-source software bug bounties, which rewards researchers for finding and fixing such bugs. Finally, as proposed by the recent Cyberspace Solarium Commission, a bureau of cyber statistics could track and assess software supply chain compromise data. This would ensure that interested parties are not stuck building duplicative, idiosyncratic datasets.

Without these reforms, modern software will come to resemble Frankenstein's monster, an ungainly compilation of suspect parts that ultimately turns upon its creator. With reform, however, the U.S. economy and national security infrastructure can continue to benefit from the dynamism and efficiency created by open-source collaboration.

John Speed Meyers is a security data scientist at Chainguard. Zack Newman is a senior software engineer at Chainguard. Tom Pike is the dean of the Oettinger School of Science and Technology at the National Intelligence University. Jacqueline Kazil is an applied research engineer at Rebellion Defense. Anyone interested in national security and open-source software security can also find out more at the GitHub page of a nascent open-source software neighborhood watch. The views expressed in this publication are those of the authors and do not imply endorsement by the Office of the Director of National Intelligence or any other institution, organization, or U.S. government agency.



Encore Models, Builds the Backend Designed in Your Head – thenewstack.io

When Encore founder André Eriksson became a developer at Spotify, he found the work of building backends for cloud applications mundane and repetitive, far from the rush he felt while collaborating with World of Warcraft maker Blizzard as a teenager.

The Swedish backend maker, on its website, likens those repetitive backend tasks, for the developer at least, to being a hamster on a wheel. His idea behind Encore is to make it easier and faster to get to the fun part of software development.

"I personally was spending the vast majority of my time as an engineer just doing the same type of work over and over again: managing the infrastructure, configuring things, you know, all those sorts of repetitive and undifferentiated tasks that are just the daily life of building backends for the cloud these days," he said. "And then looking around, I noticed every single team was doing that. And then looking outside of the company, every other company was also doing the same thing."

He explored the available tools but didn't find that they provided much benefit, he said. After thinking long and hard about the problem, he decided that the core issue is that engineers spend so much time with tools and systems that have no idea what they're trying to do.

So he set out to build a system that, in effect, could read your mind. Sort of.

"In order to help developers do their job more effectively, we need tools that actually understand what developers are trying to do," he said. "We're all used to all these tools that really have absolutely no idea what you're trying to do; they don't understand that you're building a backend at all."

"Even the ones that are backend-specific, they don't understand what your backend is about; they don't understand how it fits together. And when you don't have that understanding, you're very limited in your ability to actually aid developers in getting their job done. And that's where Encore is different."

Written in Go, Encore is designed to match the design in the engineer's head, an approach it calls the Encore Application Model.

With any programming language, you have a compiler and a parser that analyzes your code, then builds a binary that you then run on a server.

"Encore is essentially another layer on top of that, where we add additional rules to how you're expressing backend concepts: this is how you define an API, this is how you query a database, this is how you define that you need a queue or a cache or whatever. So you have all of these really important concepts in building distributed systems that come up over and over again, and we're taking them and turning them into native concepts that you express in a certain way," he explained.

Essentially, Encore runs a set of opinionated rules atop your cloud account and its Backend Development Engine requires they be followed.

"We have a parser, which works just like a regular compiler for a programming language, that is then parsing the code and enforcing those rules: 'Oh, you're trying to query a database, but you're not following Encore's rules.' So in a way, it's a programming language built on top of Go that, instead of compiling into a binary, compiles into a description of a distributed system: here are all the services, here are all of the different endpoints, here are the request and response schemas, here is where you're doing an API call between this service and that service. Here's where you're defining a database or a key-value store. Here's where you're querying the database."

"So it becomes this really, really rich description of how your whole system fits together. And it very much models the mental model of the engineers that are building that system, because that's how they think about it," he said.

Using static analysis of the metadata, it creates a graph of your system, much like if you were drawing this out on a whiteboard, with boxes and arrows representing systems and services and how they communicate and connect to the infrastructure.
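As a rough sketch of that idea (illustrative only, and not Encore's actual implementation, which analyzes Go), a few lines of static analysis can recover the same "boxes and arrows" picture from source code. The toy services below are invented.

```python
import ast

# Parse toy service functions and record which function calls which,
# yielding a call graph like the whiteboard diagram of a system.

source = """
def billing_charge(): ...

def orders_create():
    billing_charge()      # orders -> billing

def api_checkout():
    orders_create()       # api -> orders
"""

tree = ast.parse(source)
graph = {}
for fn in ast.walk(tree):
    if isinstance(fn, ast.FunctionDef):
        # Collect simple-name calls made anywhere inside this function.
        callees = {n.func.id for n in ast.walk(fn)
                   if isinstance(n, ast.Call) and isinstance(n.func, ast.Name)}
        graph[fn.name] = sorted(callees)

print(graph)
# → {'billing_charge': [], 'orders_create': ['billing_charge'],
#    'api_checkout': ['orders_create']}
```

A real system graph would also carry the infrastructure nodes (databases, queues, caches) and the request/response schemas attached to each edge.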

On top of that graph, Encore provides a range of capabilities.

Encore doesn't want to host your software. While it does offer hosting to help startups and hobbyists get up and running quickly, for production it runs atop your cloud accounts on Amazon Web Services, Azure or Google Cloud Platform.

It makes much of its open source roots and your control of your cloud accounts, stressing that if, for whatever reason, you want to leave Encore, you still own the data and access to those accounts.

"It's a full-fledged programming tool, just at a slightly higher abstraction level that's dedicated to building cloud-based backends," Eriksson said.

"Most of the engineers that are using Encore are actually very experienced. They come from a world where they know how to do all of this stuff with cloud infrastructure and scalable distributed systems. They're just fed up with it. They actually want to build products, not mess around with all of that toil. And they really like that Encore enables them to do that," he said.

Eriksson launched Encore along with Marcus Kohlberg, also a Spotify alum, in 2021. It touts an engineering team with experience at Google and the UK-based online bank Monzo. The company open sourced the Encore Go Framework last year under the Mozilla Public License 2.0. It's the basis for the Backend Development Engine, announced recently along with a $3 million seed round led by Crane Venture Partners.

"Encore is dramatically changing the developer experience for building distributed systems in the cloud," said Krishna Visvanathan, cofounder of Crane Venture Partners. "It stands apart because of its ability to deeply understand source code and automate what would otherwise slow development and business to a halt, while giving developers the freedom to develop for any application or cloud environment. Encore is a clear leader and first mover in this space."

With its experience with large-scale distributed systems, the company is looking to solve those problems while also providing a compelling product for startups.

"I think this approach, which is very opinionated and really focuses on a very integrated approach, is where we can actually make investments into solving problems that large engineering organizations never have the time to get to. I think there's substantial value there on the enterprise side of really sophisticated analysis about how your systems fit together and work," Eriksson said.

He noted that if you're into game development, you use a game engine like Unity or Unreal Engine. But to build a backend, traditionally, you just open a file and start typing.

"So there's this real massive difference in experience and integration between the game industry and the backend industry. And that's kind of where we want to take this: providing a really powerful and integrated experience that improves things not just for individual developers, but how you collaborate, how you're working in teams, and how whole organizations work."

"And then going beyond developers into insights and analytics and machine learning and data," he said of the long-term vision.

On the more immediate horizon, it's much more about taking this experience and making it more accessible to larger companies that want to integrate it with already existing systems and backends, being able to seamlessly integrate it with existing infrastructure, and that sort of thing.

"And then just adding more cloud primitives, as we call them, the building blocks of distributed systems, like caches and queues and object storage, and all these sorts of things that you're building backends out of these days."

Brian Ketelsen, cloud developer advocate at Microsoft, is a fan. He gushed in email:

I have used Encore for a few projects now, and I'm completely in love. The first project was an ambitious conference management platform undertaken with a few volunteers in the Go community. In just a few weeks we were able to put together a complete conference management system that included everything a conference needs: ticketing, program scheduling, call for papers, room management and more. It was really easy to onboard new volunteers to help with the code and everyone was impressed with the speed at which we were able to develop. This project was just over a year ago, so it was built using an older version of Encore's platform.

More recently I was invited to do a keynote for DevWeek Mexico. I knew Encore was planning a 1.0 launch around the same time and they had just released Azure support. I work for Microsoft as an Azure cloud developer advocate. So I decided to build a Life API as a demo app for the keynote.

My goal was to create an API that covered all of the things I would manually do as a developer advocate. I have a new baby at home with some severe medical issues, and we ended up spending much of the time I had planned to write my talk and app in the ICU with the little one. We got home on Friday my keynote was Monday. I was able to build out the entire API and build a new website that consumes it in just a few hours over the weekend.

To say that I'm impressed with Encore would be a gross understatement. From a functional perspective, Encore is built for developers. The development experience is well crafted with almost zero friction after installing the encore command-line app and creating an account. The Encore platform allowed me to write only the business logic for my application instead of spending countless hours setting up hosting, continuous integration, automated deployments and the rest of the operational things that drag a new project down in the beginning. For a smaller project like mine, that probably saved me a total of 15-20 hours of time.

Operationally, Encore really shines. Because Encore's tools analyze the code I've written, they are able to inject all the boring boilerplate that I hate writing. Yes, I want distributed tracing; no, I don't want to annotate every function with dozens of lines of repetitive code to make it happen. Once my code was deployed, I could go to the Encore dashboard and view distributed traces and detailed logs. That single-pane approach to ops is such a wonderful simplification from the usual suite of 5-8 different tools a team might use to manage a deployed application.

Treating RPC calls as local function calls in code is another delightful time-saver. Instead of writing my API as a big monolith, I decided to break each functional area into separate microservices to explore how well Encore worked in an environment where there are many services exposed with public and private (internal) endpoints. Everything about the process was smooth and boring in the best possible way. Encore manages database connections, secrets, database migration, logs and infrastructure. That's SO MUCH code I didn't write.

Every tool like Encore that is designed to speed up development comes with tradeoffs. As a developer, it is your responsibility to understand the tradeoffs that come with the decisions made on your behalf by the tools.

Encore was clearly built by people who understand both the needs of the developer and the needs of the ops crowd. There aren't any decisions in their platform that I couldn't accept and embrace. The icing on the proverbial cake is the ability to host the application on my own Azure subscription so I'm not dependent on someone else's cloud.


GitHub's 2FA Move Was Long Overdue – The New Stack – thenewstack.io

On May 4, GitHub's CSO Mike Hanley announced that all users who upload code to the site must enable one or more forms of two-factor authentication (2FA) by the end of 2023 or leave. It's about time!

In case you've been asleep for the last few years, software supply chain attacks have become commonplace. One of the easiest ways to protect against them is to use 2FA. 2FA is simple: besides using a username/password pair to identify yourself, you also use a second factor to prove your identity.

Under the surface, 2FA gets complicated. Implementations rely on one of three standards: HMAC-based One-Time Password (HOTP), Time-based One-Time Password (TOTP), or the FIDO Alliance's FIDO2 Universal 2nd Factor (U2F) standard. But you don't need to worry about that as a developer, unless security, authentication, and identity are your thing. You just need to, as Hanley puts it, enable one or more forms of 2FA.
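To demystify the first two of those standards, here is a minimal TOTP implementation following RFC 4226 and RFC 6238; this is what authenticator apps compute. Real apps also handle base32-encoded secrets and clock-drift windows, which are omitted here.

```python
import hmac, hashlib, struct, time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                               # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret, t=None, step=30, digits=6):
    """Time-based one-time password (RFC 6238): HOTP over a time counter."""
    t = time.time() if t is None else t
    return hotp(secret, int(t // step), digits)

# RFC 6238 test vector: secret "12345678901234567890", T = 59 s, 8 digits.
print(totp(b"12345678901234567890", t=59, digits=8))  # → 94287082
```

The second factor is just an HMAC over a shared secret and the current 30-second interval, which is why the codes work offline and expire quickly.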

It's not that freaking hard. Still, today, only approximately 16.5% of active GitHub users and 6.44% of npm users use one or more forms of 2FA. Why are developers so stubbornly stupid?

As Mark Loveless, a GitLab senior security researcher, put it recently, "The main reason for low adoption of a secondary authentication factor is that turning on any multi-factor authentication (MFA) is an extra step, as it is rarely on by default for any software package." And we do so hate to take even one extra step.

Mind you, smarter developers on bigger projects do get it. Patrick Toomey, GitHub's director of product security engineering, recently observed that open source maintainers for well-established projects (more than 100 contributors) are three to four times more likely to make use of 2FA than the average user. That comes as no surprise, because larger and more popular projects appreciate their position and responsibility in the open source software supply chain. In addition, these projects often become GitHub organizations, with the ability to manage access to their repositories using teams and set security policies, including a requirement to enable 2FA.

Another factor in people refusing to get a 2FA clue is simple ignorance. For example, a discussion on the Reddit programming subreddit on the issue showed many people assume that 2FA is either hard (spoiler: it's not) or not that secure because it uses a phone. True, 2FA that uses texting is relatively easy to break. Just ask Jack Dorsey, Twitter's founder. Dorsey's own Twitter account was hijacked thanks to a SIM swap attack.

But the important point here is you don't need to use texting, aka Short Message Service (SMS), for 2FA. GitHub explicitly tells you that you can use other methods as well.

It's not that hard, people! It really isn't. And as for those who whine, "This will kill projects!": any project that's killed because its developers can't do basic 2FA security is better off dead.

For too long in open source communities, we've been too inclined to think that hackers only attack proprietary programs. As James Arlen, Aiven CISO (chief information security officer), observed, "The reality of open-source software development over the last 30+ years has been based on a web of trust among developers. This was maintained through personal relationships in the early days but has grown beyond the ability of humans to know all of the other humans. With GitHub taking the step of providing deeper authentication requirements on developers, it will dramatically reduce the likelihood of a developer suffering an account takeover and the possibility of a violation of that trust." In short, as Angel Borroy, a Hyland developer evangelist, told me, bad guys can see open source code too.

GitHub is giving you until 2023. That's much too kind of them. Your GitHub account being hijacked is a real and present danger. Adopt 2FA today, not only on GitHub but on all your code repositories and online services. It's the best way to protect yourself and your code from attackers today.



Yapily to acquire finAPI in open banking consolidation move – TechCrunch

Fintech startup Yapily is announcing that it plans to acquire finAPI; the transaction is subject to regulatory approval before it closes. Both companies offer open banking solutions in Europe.

With this move, Yapily is consolidating its position in Europe and growing its business in Germany, more specifically. The terms of the deal are undisclosed, but the company says it is a multimillion-euro transaction.

Based in the U.K., Yapily offers a single, unified open banking API to interact with bank accounts. Unlike Tink or TrueLayer, Yapily offers a low-level solution without any front-end interface. Developers have to code their own bank connection flow. The result is more control and no Yapily logo.

Due to Europe's PSD2 regulation, banks have to offer application programming interfaces (APIs) so that they can work better with third-party services. Yapily has focused specifically on official API integrations and covers thousands of banks. It doesn't rely on screen scraping or private APIs.

Companies can leverage open banking to check the balance of a bank account and fetch the most recent transactions, but also to initiate payments directly from a bank account.
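As a hedged illustration of what a payment-initiation call looks like in the style of such APIs, the sketch below assembles a request body. The field names, values and structure here are invented for illustration only; a real integration would follow the provider's documented schema and an OAuth consent flow with the end user's bank.

```python
import json

# Hypothetical payment-initiation payload builder, open-banking style.
# Every field name below is illustrative, not a real provider schema.

def build_payment_request(payer_account, payee_account, amount, currency):
    """Assemble the JSON body for a single immediate payment."""
    return {
        "paymentIdempotencyId": "order-1234",   # made-up client reference
        "payer": {"accountIdentification": payer_account},
        "payee": {"accountIdentification": payee_account},
        "amount": {"amount": amount, "currency": currency},
        "type": "DOMESTIC_PAYMENT",
    }

body = build_payment_request("GB33BUKB20201555555555",
                             "DE89370400440532013000", "42.00", "EUR")
print(json.dumps(body, indent=2))
```

The idempotency reference is the important design detail: because the payment leaves the payer's account directly, a retried request must not trigger a second transfer.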

FinAPI is also an open banking provider. Originally from Munich, Germany, the company has been around since 2008; Schufa acquired a majority stake in finAPI in 2019. It offers an API with coverage in Germany, Austria, the Czech Republic, Hungary and Slovakia. As with Yapily, finAPI clients can obtain account information and initiate payments using an API.

In addition to those pure open banking products, finAPI also offers the ability to verify the age and identity of a customer. This can be useful to comply with KYC (Know Your Customer) regulation.

Yapily currently covers 16 European markets, and the company says it is the leader in the U.K. But the startup isn't currently active in the Czech Republic, Slovakia or Hungary. With today's acquisition, the company is expanding into these three new markets and becoming the leader in Germany.

There is some product feature overlap between Yapily and finAPI, but the acquisition makes sense because the two companies didn't start in the same market.

Yapily works with companies like American Express, Intuit QuickBooks, Moneyfarm, Volt, Vivid and BUX. FinAPIs clients include ING, Datev, Swiss Life, ImmobilienScout24 and Finanzguru.

"This is a hugely exciting milestone for Yapily on our journey from disruptive startup to ambitious scale-up. Within three years from launch, we have commercialized our platform, grown our customer base, and now have the largest open banking payments volumes in Europe. Working with finAPI, we can gain more speed, agility, and depth to accelerate innovation and shape the future of open finance in Europe and beyond," Yapily founder and CEO Stefano Vaccino said in a statement.

When it comes to payments in particular, Yapily and finAPI have processed a combined total of $39.5 billion in payment volumes over the last 12 months. Essentially, Yapily will double its customer base with this acquisition.

Follow this link:
Yapily to acquire finAPI in open banking consolidation move - TechCrunch

The Web3 Movement's Quest to Build a "Can't Be Evil" Internet – WIRED

Owocki was something of a rock star at the conference. He is credited with coining the term BUIDL in 2017. Admirers approached him nonstop to talk, express their support, or ask for a copy of his book, GreenPilled: How Crypto Can Regenerate the World, which was the talk of the conference and quickly sold out of the 400 copies he had ordered. Owocki is about as far from a casino person as you'll find in the crypto world. In one of several presentations he gave, Owocki told the crowd that since research shows money stops increasing happiness after about $100,000 in annual income, Web3 founders should maximize their happiness by giving their excess money to public goods that everyone gets to enjoy. "There's cypherpunk, which is all about privacy, decentralization: hardcore libertarian shit," he told me. "I'm more of a leftist. I'm more solarpunk, which is, how do we solve our contemporary problems around sustainability and equitable economic systems? It's a different set of values."

The internet, he explained, made it possible to move information between computers. This revolutionized communication. Blockchains have made it possible to move units of value between computers. Owocki believes this can be harnessed to revolutionize how human beings interact through something he calls regenerative cryptoeconomics. Cryptoeconomics, he writes in GreenPilled, is "the use of blockchain-based incentives to design new kinds of systems, applications, or networks." Regenerative cryptoeconomics means doing this in a way that makes the world a better place for everyone. The goal is to break free from the zero-sum, rich-get-richer patterns of capitalism. Owocki believes that the right cryptoeconomic structure can help solve collective action problems like climate change, misinformation, and an underfunded digital infrastructure.

The key tool for achieving this is a decentralized autonomous organization. In theory, a DAO (yes, pronounced the same as the ancient Chinese word for the way of the universe) uses cryptocurrency to boost collective action. Typically, members join by buying some amount of a custom token issued by the DAO. That entitles them to an ownership stake in the DAO itself. Member-owners vote on what the DAO does, which is mostly to say, what it spends money on, since a blockchain-based entity can do little besides move funds from one address to another.
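The token-weighted voting just described can be sketched in a few lines. The class, members, and proposals below are hypothetical; real DAOs implement this logic as smart-contract code on-chain:

```python
from collections import defaultdict

class ToyDAO:
    """Token-weighted voting: vote weight equals token holdings."""
    def __init__(self):
        self.tokens = defaultdict(int)  # member -> token balance
        self.votes = defaultdict(int)   # proposal -> weighted votes

    def buy_tokens(self, member: str, amount: int) -> None:
        self.tokens[member] += amount

    def vote(self, member: str, proposal: str) -> None:
        # Each vote counts in proportion to the member's holdings.
        self.votes[proposal] += self.tokens[member]

    def winner(self) -> str:
        return max(self.votes, key=self.votes.get)

dao = ToyDAO()
dao.buy_tokens("alice", 100)
dao.buy_tokens("bob", 30)
dao.buy_tokens("carol", 30)
dao.vote("alice", "fund project A")
dao.vote("bob", "fund project B")
dao.vote("carol", "fund project B")
print(dao.winner())  # "fund project A": 100 tokens outweigh 60
```

Note that the largest token holder decides the outcome single-handedly; that "whale" problem is exactly what quadratic voting, discussed later in the piece, tries to soften.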

The young concept already has a checkered history. The first DAO, named simply The DAO, collapsed in 2016 after someone exploited a loophole in its code to siphon off what was then worth some $50 million in Ethereum currency. Similarly colorful failures have followed. DAOs were nonetheless all the rage at ETHDenver, where attendees waxed on about their world-changing potential. Kimbal Musk, Elon's photogenic brother, spoke about his Big Green DAO, a food-related charity. Giving away money via a DAO, he insisted, got rid of all the painful bureaucracy of philanthropic nonprofits. "It's way better," he said, though he also granted that there are many ways to fail, and this one could fail spectacularly.

What is it about a DAO that, unlike, say, a Kickstarter page, frees humanity from the collective action problems that threaten to doom the species? According to Owocki, it's the ability to write code in ways that tinker with incentive structures. (In this sense, the first DAO was arguably Bitcoin itself.) "Our weapon of choice is novel mechanism designs, based upon sound game theory, deployed to decentralized blockchain networks as transparent open source code," he writes in GreenPilled. Indeed, the book has very little to say about technology, per se, and much more to say about various game theory concepts. These range from the sort of thing you'd learn in an undergrad econ class (public goods are non-excludable and non-rivalrous) to things that wouldn't be out of place in a sci-fi novel: community inclusion currencies, fractal DAO protocols, retroactive public goods funding.

It's hard enough for me to grasp how a DAO works. So while I'm in Denver, I create one.

One of the most powerful incentive design techniques, according to Owocki, is something called quadratic voting. Standing near the edge of the Shill Zone, Owocki turned around to show me the back of his purple baseball jacket, which said "Quadratic Lands." The Quadratic Lands, Owocki explained, are a mythical place where the laws of economics have been redesigned to produce public goods. "It's just a meme," he said. "I don't want to tell you it already exists." (Everyone at ETHDenver was concerned, rightly, about my ability to separate metaphorical claims from literal ones.)

In a quadratic voting system, you get a budget to allocate among various options. Let's say it's dollars, though it could be any unit. The more dollars you allocate to a particular choice, the more your vote for it counts. But there's an important caveat: each marginal dollar you pledge to the same choice is worth less than the previous one. (Technically, the cost of your vote rises quadratically, rather than linearly.) This makes it harder for the richest people in a group to dominate the vote. GitCoin uses an adaptation, quadratic funding, to award money to Web3 projects. The number of people who contribute to a given project counts more than the amount they contribute. This rewards ideas supported by the most people rather than the wealthiest: regenerative cryptoeconomics in action.
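The arithmetic can be made concrete. The sketch below implements only the textbook formulas, vote weight as the square root of spend and a match proportional to the squared sum of square roots of contributions; Gitcoin's production mechanism adds refinements (such as pairwise bounding and caps) that are not shown here:

```python
import math

def quadratic_votes(dollars: float) -> float:
    """Cost grows quadratically, so vote weight is the square root of spend."""
    return math.sqrt(dollars)

def quadratic_funding_score(contributions: list) -> float:
    """Basic quadratic funding match: (sum of sqrt of contributions)^2.
    Many small donors beat one large donor at the same total raised."""
    return sum(math.sqrt(c) for c in contributions) ** 2

whale = quadratic_funding_score([10_000])     # one donor giving $10,000
crowd = quadratic_funding_score([100] * 100)  # 100 donors giving $100 each
print(whale)  # 10000.0
print(crowd)  # 1000000.0
```

The same $10,000 attracts a 100x larger match when it arrives from 100 donors, which is precisely the "most people rather than the wealthiest" property described above.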

Here is the original post:
The Web3 Movement's Quest to Build a "Can't Be Evil" Internet - WIRED

Visa’s top crypto executive Terry Angelos leaves for Softbank-backed brokerage start-up DriveWealth – CNBC

Terry Angelos, Visa's global head of fintech and crypto.

DriveWealth

One of Visa's top executives is leaving the payments giant for a brokerage technology start-up, CNBC has learned.

Terry Angelos, Visa's global head of fintech and crypto, will take over as chief executive officer of start-up DriveWealth next week. Angelos joined Visa seven years ago as part of its acquisition of TrialPay, which he founded and led as CEO.

DriveWealth lets consumer finance apps like Block's Cash App and Revolut offer stock trading by providing necessary behind-the-scenes infrastructure. The Jersey City-based broker-dealer was one of the first to allow fractional investing, or buying stocks in smaller dollar amounts vs. whole shares.
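The fractional-investing arithmetic is just a division of a dollar amount by the share price. The sketch below uses floats and round() for brevity; a real broker-dealer would use fixed-point decimal arithmetic and its own rounding rules:

```python
def fractional_shares(dollars: float, share_price: float, precision: int = 6) -> float:
    """Dollar-based order: how many (possibly fractional) shares a fixed spend buys."""
    return round(dollars / share_price, precision)

# $25 of a stock trading at $2,400 buys about 0.010417 shares,
# an order that is impossible in a whole-shares-only brokerage account.
print(fractional_shares(25, 2400))  # 0.010417
```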

While retail trading boomed during the pandemic, Angelos said the long-term opportunity is in taking U.S. equities international. He estimated that roughly a billion people across the world, outside of China, access financial services from a digital wallet or a fintech app and are looking for exposure to blue-chip stocks.

"If you were to think about the single, most reliable long-term asset that people around the world want to own, it's equity in U.S. companies," Angelos said. "Traditionally, people outside the U.S. don't have the ability to open up a brokerage account. That's something that we think we can help solve."

U.S. companies have been less of a safe haven this week with the Dow hitting its lowest level of the year on Monday. Still, over the past six decades, U.S. stocks have seen a roughly 10% annual return.

DriveWealth was last valued at $2.8 billion and is backed by Softbank, Fidelity's venture capital arm and Citi Ventures among others. The company operates as a licensed broker-dealer, providing clearing and settlement on behalf of its fintech customers, which handle the consumer experience and apps.

DriveWealth also provides custody for individual accounts and stocks. To connect to these apps, it uses software known as an API, or Application Programming Interface. The company said it doubled its customer base year over year, with 140% growth in international partners. While it's starting with stocks, DriveWealth also offers crypto investing infrastructure.

Individual investor activity has slowed significantly from its 2021 peak at the time of the GameStop frenzy. The retail participation rate, measured by retail volume as a percentage of total trading volume, recently fell to its lowest level since the pandemic began, according to Rich Repetto, managing director and senior research analyst at Piper Sandler.

That pullback has hurt shares of Robinhood, which recently said it was cutting 9% of its workforce after ramping up hiring to keep up with demand, and other publicly traded brokerage firms.

Still, Angelos said DriveWealth has seen increased participation and account growth during the recent downturn, and pointed to the long-term value of U.S. stocks.

"We're still in the growth cycle of making equities available to people who otherwise wouldn't have had access and will continue to see growth, even though there may be volatility or pullbacks among more active traders," he said.

As for an initial public offering, Angelos said it's "potentially on the road map." But for now, he said he's focused on increasing its footprint and returning to the chief executive role after almost a decade at Visa.

See the original post here:
Visa's top crypto executive Terry Angelos leaves for Softbank-backed brokerage start-up DriveWealth - CNBC

Kubernetes has standardised on sigstore in a landmark move – The Stack

Kubernetes has standardised on the Linux Foundations free software signing service, sigstore, to protect against supply chain attacks. sigstore, first released in March 2021, includes a number of signing, verification and provenance techniques that let developers securely sign software artifacts such as release files, container images and binaries with signatures stored in a tamper-proof public log. The service is free to use and designed to help prevent what are increasingly regular and sophisticated upstream software supply chain attacks.
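The "tamper-proof public log" idea can be illustrated with a toy append-only hash chain, where each entry commits to everything before it. (sigstore's actual log, Rekor, is built on a Merkle tree with inclusion proofs; this sketch demonstrates only the tamper-evidence property.)

```python
import hashlib

def _digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class ToyTransparencyLog:
    """Append-only hash chain: each entry commits to the previous head,
    so rewriting history invalidates every later hash."""
    def __init__(self):
        self.entries = []
        self.head = _digest(b"genesis")

    def append(self, artifact_signature: bytes) -> str:
        entry_hash = _digest(self.head.encode() + artifact_signature)
        self.entries.append((artifact_signature, entry_hash))
        self.head = entry_hash
        return entry_hash

    def verify(self) -> bool:
        # Replay the chain from the start and compare against stored hashes.
        head = _digest(b"genesis")
        for sig, stored in self.entries:
            head = _digest(head.encode() + sig)
            if head != stored:
                return False
        return head == self.head

log = ToyTransparencyLog()
log.append(b"sig(kubernetes-v1.24.tar.gz)")
log.append(b"sig(kubectl-binary)")
print(log.verify())  # True
log.entries[0] = (b"sig(tampered)", log.entries[0][1])
print(log.verify())  # False: the chain no longer matches
```

The point is not the data structure itself but the property it gives auditors: a signature, once published, cannot be silently removed or rewritten without detection.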

sigstore's founders include Red Hat, Google and Purdue University. Its adoption by Kubernetes, one of the world's most active open source communities with close to six million developers (a huge number, given that CNCF data from December 2021 suggests there are 6.8 million cloud native developers in total), is a significant vote of trust in the standard for verifying software components. (N.B. the Linux Foundation hosts both sigstore and Kubernetes, as well as Linux, Node.js and a host of other ubiquitous critical software projects.)

Kubernetes 1.24, released May 3, and all future releases will now include cryptographically signed sigstore certificates, giving its developer community the ability to verify signatures and have greater confidence in the origin of each and every deployed Kubernetes binary, source code bundle and container image.

Few open source projects currently cryptographically sign software release artifacts, something largely due, the Linux Foundation suggested on sigstore's launch back in March 2021, to the challenges software maintainers face on key management, key compromise/revocation and the distribution of public keys and artifact digests.

The move by Kubernetes maintainers comes as supply chain attacks escalated 650% in 2021. The Kubernetes team in early 2021 began exploring SLSA compliance to improve Kubernetes software supply chain security, explaining that sigstore was a key project in achieving SLSA level 2 status and getting a head start towards achieving SLSA level 3 compliance, which the Kubernetes community expects to reach this August [2022].

(SLSA is a set of standards and technical controls that provide a step-by-step guide to preventing software artifacts being tampered with, tampered artifacts from being used, and, at the higher levels, hardening the platforms that make up a supply chain. It was introduced by Google as a standard in June 2021.)

Dan Lorenc, original co-creator of sigstore while at Google (and presently CEO and co-founder of Chainguard), told The Stack that the sigstore General Availability (GA) production release is due out this summer.

"This means enterprises and open source communities will benefit from stable APIs and production grade stable services for artifact signing and verification. This is being made possible thanks to the dedicated sigstore open source community, which has fixed major bugs and added key features in both services over the past few months. Sponsors like Google, RedHat, HPE and Chainguard provided funding that allowed us to stabilize infrastructure and perform a third-party security audit," he said, adding: "Many programming language communities are working towards Sigstore adoption and the Sigstore community is working closely with them. We just announced a new Python client for PyPI and are hoping to extend this to other ecosystems like Maven Central and RubyGems."

In terms of broader enterprise adoption (likely to accelerate when it is GA), he said in an emailed Q&A that a number of enterprises have already adopted Sigstore and are using it for signing and verifying both open and closed software. Notably, the Department of Defense Platform One team has implemented Sigstore signatures into the IronBank container hardening platform, which means they can verify container images, SBOMs and attestations.

sigstore's keyless signing has raised some concerns that it could make revocation harder, but that's not the case, he added, telling The Stack: "No, in fact the opposite is true! While it is true that the signatures on software are stored forever, software verification using Sigstore does support artifact revocation. Further, Sigstore allows after-the-fact auditing to help organizations understand the extent of a compromise, and Sigstore makes discovering compromises in the first place easier by posting signatures on a transparency log. The Sigstore community recently published Don't Panic: A Playbook for Handling Account Compromise with Sigstore, with more details on this."

In terms of policy automation or vendor services support for sigstore, Lorenc, as a co-creator, had understandably got in early. His company's Chainguard Enforce, announced last week, is "the first tool with native support for modern keyless software signing using the Sigstore open source standard," he said, adding that the product will give CISOs the ability to audit and enforce policies around software signing for the code they use.

sigstore's release had met with genuine appreciation across the community in 2021, with Santiago Torres-Arias, Assistant Professor of Electrical and Computer Engineering at Purdue University, noting that the software ecosystem "is in dire need of something like it to report the state of the supply chain. I envision that, with sigstore answering all the questions about software sources and ownership, we can start asking the questions regarding software destinations, consumers, compliance (legal and otherwise), to identify criminal networks and secure critical software infrastructure. This will set a new tone in the software supply chain security conversation."

"It's great to see adoption of sigstore, especially with a project such as Kubernetes, which runs many critical workloads that need the utmost protection," said Luke Hinds, Security Engineering Lead at Red Hat, member of the Kubernetes Security Response Team and founder of the sigstore project, in a May 3 release.

"Kubernetes is a well known and widely adopted open source project and can inspire other open source projects to improve their software supply chain security by following SLSA levels and signing with sigstore," added Bob Callaway, Staff Software Engineer at Google, sigstore TSC member and project founder.

He noted: "We built sigstore to be easy, free and seamless so that it would be massively adopted and protect us all from supply chain attacks. Kubernetes' choice to use sigstore is a testament to that work."

Security firm BlueVoyant noted earlier in 2021, after a survey of 1,500 CISOs, CIOs and CPOs from the US, UK, Singapore, Switzerland and Mexico, that 77% had limited visibility around their third-party vendors (let alone the components they were using) and 80% had suffered a third-party-related breach.

Users can find out how sigstore works in more detail here.

Original post:
Kubernetes has standardised on sigstore in a landmark move - The Stack

FLOW LAUNCHES $725 MILLION ECOSYSTEM FUND TO DRIVE INNOVATION ACROSS THE FLOW ECOSYSTEM – PR Newswire

Participants include industry-leading firms that have backed several of the most successful Web3 companies, such as a16z, AppWorks, Cadenza Ventures, Coatue, Coinfund, Digital Currency Group (DCG), Dispersion Capital, Fabric Ventures, Greenfield One, HashKey, L1 Digital, Mirana Ventures, OP Crypto, SkyVision Capital, Spartan Group, Union Square Ventures, and Dapper Ventures.

"We are thrilled to see such a strong vote of confidence in the Flow ecosystem from some of the world's leading investors in Web3 through their commitment to this Fund," said Roham Gharegozlou, CEO of Dapper Labs. "With their active participation and support, the Ecosystem Fund has the opportunity to become a real game-changer for the 7500+ strong and fast-growing developer community in the Flow ecosystem."

Aiming to enable more distributed and equitable Web3 opportunities for developers around the globe, participants will focus on providing support for gaming, infrastructure, decentralized finance, content and creators. The resources are expected to be used by developers for product development, product scaling, team expansion, user acquisition and general operating expenses.

"The Ecosystem Fund is an opportunity to power the next generation of developers across the global Flow community," said Dan Rose, Chairman of Coatue Ventures. "Coatue has already backed multiple companies building in the Flow ecosystem including Dapper Labs, Crypthulu and Faze Technologies, and we are excited to play an active role in enabling more Web3 opportunities."

In addition to financial support, developers in the Flow ecosystem will be able to leverage expertise via informational events, office hours, accelerators & incubators, subsidized office space and similar initiatives. For example, investors will provide Flow teams office space in cities such as Berlin (Greenfield One) and in Asia (via the AppWorks Accelerator program), and Liberty City Ventures will provide two scholarships for college students to work on Flow-related projects. As a Venture Partner for Bybit and BitDAO, Mirana Ventures will also help catalyze strategic collaboration opportunities for Flow projects.

"As web3 accelerates and sophisticated app developers search for the best platforms, Flow is perhaps the best decentralized blockchain built for the scale, security, and ease of use most modern startups need to succeed." said David Pakman, Managing Partner at CoinFund. "The Flow Ecosystem Fund will be a huge accelerator of innovation and growth on the platform and we at CoinFund are excited to work with this talented community to help drive innovation and growth."

Originally developed by Dapper Labs to create more efficient, secure and scalable proof-of-stake blockchain experiences, Flow is an open-source, developer-friendly and energy efficient blockchain built for consumer applications. With global partners including the NBA, NFL, UFC and Dr. Seuss; unicorn developers such as Animoca and PlayCo; and emerging projects such as Genies, Fancraze and Cryptoys, Flow has seen daily transactions triple since September 2021 as it has grown into the leading blockchain for nonfungible token (NFT) sales by number of NFT transactions.

To learn more about Flow and the Flow Ecosystem Fund, please visit www.flow.com/ecosystemsupport.

About Flow

Flow is the blockchain designed to be the foundation of Web3 and the open metaverse, supporting consumer-scale decentralized applications, NFTs, DeFi, DAOs, and more. Powered by Cadence, an original programming language built specifically for digital assets, Flow empowers developers to innovate and push the limits that will bring the next billion to Web3. Created by a team that has consistently delivered industry-leading consumer-scale Web3 experiences including CryptoKitties, Dapper, and NBA Top Shot, Flow is an open, decentralized platform with a thriving ecosystem of creators from top brands, development studios, venture-backed startups, crypto leaders, and more. For more information, visit www.flow.com.

SOURCE Flow

Read more here:
FLOW LAUNCHES $725 MILLION ECOSYSTEM FUND TO DRIVE INNOVATION ACROSS THE FLOW ECOSYSTEM - PR Newswire

The Progress of Low-Code/No-Code and an Update to our Thesis – Madrona Venture Group

Low-Code/No-Code solutions are not new; they have existed in some way, shape, or form for more than thirty years. Microsoft Excel was probably the earliest LCNC solution of its kind: it enabled workflows in various ways for billions of people, most of whom still use it. As a developer, I remember my world changing when Microsoft released Visual Basic. Its graphical user interface (GUI) allowed developers to modify code by simply dragging and dropping objects and defining their behavior and appearance through an abstraction layer that hid complexities and automated routine tasks.

The idea of creating high-quality solutions at speed and scale without deep coding capabilities or an expensive development team is what most businesses want. Momentum around these Low-Code/No-Code solutions started ramping up in the early 2000s with companies like Smartsheet, which focuses on breaking down silos and creating a dynamic work environment. But that momentum has begun to increase significantly over the last decade, and even more so in the last five years. We saw the launch of AirTable for workflows, Coda for in-doc collaboration, Microsoft Power Apps and AppSheet (acquired by Google) for creating apps, Unqork with its visual development platform, and Plus Docs for real-time data capture and sharing. All of this has the singular focus of democratizing technology to empower creators and builders in the modern world.

In its simplest construct, you can think of Low-Code/No-Code as a visual and hassle-free approach to software development. Low-Code/No-Code platforms abstract, automate and optimize every step of a process and enable rapid delivery of any software solution. Low-Code/No-Code democratizes software application development: users without deep programming knowledge are able to build enterprise-level applications that are deployable to teams and across enterprises.

Developers utilize Low-Code for rapid software delivery and workflow automation. Other professionals or knowledge workers are able to develop simple apps or expand app functions with minimal visual programming, automatic code generation, and model-driven design.

On the other hand, No-Code takes a more visual approach to creating apps or solutions without the need to know any programming languages. Users can drag and drop components to create a complete solution. It's critical to know that in reality, No-Code is never truly without code; there is code involved, but it is abstracted behind an easy-to-use visual user interface.
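A minimal sketch of that abstraction: a drag-and-drop builder emits a declarative spec, and the platform's hidden runtime (the code behind the interface) interprets it. The spec format below is invented for illustration and does not correspond to any particular vendor's product:

```python
# What a visual form builder might emit after a user drags two fields in:
app_spec = {
    "fields": [
        {"name": "email", "required": True},
        {"name": "phone", "required": False},
    ]
}

def validate(record: dict, spec: dict) -> list:
    """Generic runtime: enforce whatever rules the visual spec declares."""
    errors = []
    for f in spec["fields"]:
        if f["required"] and not record.get(f["name"]):
            errors.append(f"{f['name']} is required")
    return errors

print(validate({"phone": "555-0100"}, app_spec))  # ['email is required']
print(validate({"email": "a@b.co"}, app_spec))    # []
```

The end user only ever sees the form and its error messages; the interpreter, the "code that is always there," stays out of sight.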

Many factors are contributing to the acceleration of the creation and use of Low-Code/No-Code solutions.

"We've seen communities of makers grow faster over these last two years: sharing bread baking recipes on TikTok, DIY home improvement projects on YouTube. Just as we're seeing people outside technology getting really excited about building stuff again, we're seeing it in no code as well: people are posting challenges, like One Hundred Days of No Code on Twitter, and creating maker communities supporting their building journeys. The no-code building blocks have been around for a decade, but these open maker communities are quickly on the rise," said Coda Head of Solution Services John Scrugham.

As the barrier to entry to participate in what was previously a highly technical field keeps getting lower, we're seeing a spike in the number of people who want to change how things have always been done. People are more comfortable with technology, and as John Scrugham, Coda's Head of Solution Services, said during Madrona's annual meeting, all anyone needs is a $200 device with basic capabilities, and they can pull up everything they need to start building a solution to a productivity issue they're having. That mindset is not limited to the workplace. John said he thinks that the recent growth in the creator/maker mindset spurred by the pandemic is inspiring people to try it in all aspects of life.

"It is really important for product growth and viral adoption of these types of products that people aren't just using them at work. If that were the case, you wouldn't be able to share them with other people, which is pretty important," said Plus Docs CEO Daniel Li.

Another important observation that Plus Docs CEO Daniel Li made was that people are now using Low-Code/No-Code tools not just in their work lives but also outside of work, because there is so much low-hanging fruit to improve cumbersome workflows and processes anywhere.

As an example, we at Madrona use Plus Docs to track the performance of some of our portfolio companies and have also built a set of dashboards to track the ski conditions at Whistler for the avid skiers we have on the team.

Low-Code/No-Code platforms put the power of application development into everyones hands. Users can range from knowledge workers all the way to field specialists, including those that run x-ops functions. The primary goal is to do things faster and cheaper in a more repeatable and systematic way using software and automation versus manual tooling.

"If once a week you have to provide a status report, and you're comfortable going through your emails, pulling out information, going to spreadsheets, cutting out information, and building that deck for that status meeting, and you think that is the best you can do, you're probably not the target user of this technology. But if you think there's got to be a better way to get information right from the source and present it through a live dashboard, so you never have to rebuild this deck, that is the mindset. You have to be somebody who wants to change the status quo," said Smartsheet Chief Product Officer & EVP of Engineering Praerit Garg.

Having spent years building Smartsheet, Chief Product Officer & EVP of Engineering Praerit Garg explained the typical Low-Code/No-Code user well during Madrona's annual meeting: someone who looks at problems differently and thinks there are better ways to solve them.

The consumerization of technology is back! The user is, and wants to be, more in control of their ability to build, deploy and manage at scale. With the advent of the creator economy, builders want solutions deployed in seconds, without having to go through a cumbersome development process. The underlying framework that makes this possible is truly intelligent software with not just analytical but decision-making capabilities. There will always be work that requires the skills of professional programmers, but there are only so many of them coming out of college.

We are excited to see what sort of new ideas come from the millions of people who are no longer constrained by the requirement of deep coding knowledge and now have the capacity to innovate around technology in a way they never have before. At Madrona, we want to meet the next great founders who are innovating in the Low-Code/No-Code space. My contact info is linked in the byline!

Original post:
The Progress of Low-Code/No-Code and an Update to our Thesis - Madrona Venture Group