Boys & Girls Clubs set to open eight new sites throughout Northeast Ohio this summer – cleveland.com

CLEVELAND, Ohio-- Eight new Boys & Girls Clubs are expected to open across the region this summer, including sites in Cuyahoga, Summit, Lorain and Huron counties.

The expansion is the first step in the strategic plan of the Boys & Girls Clubs of Northeast Ohio to provide more youths with greater experiences and the opportunities they desire, said Jeff Scott, the CEO of the organization.

The first set of new clubs will open next month and will be school-based sites funded by stimulus dollars and the Ohio Department of Education.

Some of the sites will be in Euclid, Cuyahoga Falls, Garfield Heights and Akron. The organization also plans to open its first club in Huron County at New London Elementary School on June 13.

Nationally, the Boys & Girls Clubs aim to provide safe, fun places for children 6-18 to go after school and in the summer.

In Northeast Ohio, Scott's group serves children in Cuyahoga, Summit, Lorain and Erie counties. The non-profit organization operates 40 clubs throughout the region.

After the pandemic, many school systems are pivoting back to the traditional after-school models or taking a hybrid approach. Because of that, Scott said, the organization had a duty to do more to help youths across Northeast Ohio.

There is no charge to join a club. Memberships are free and open for youth ages 6-18. Activities include athletics, academic help, arts and music programming, leadership opportunities, field trips and breakfast and lunch daily.

In the future, Scott said he hopes donations from individuals and corporations will allow the organization to create more stand-alone clubs across the region. Ultimately, he wants to add another 15 sites across Northeast Ohio.

"Our youth deserve our unrelenting efforts. We can never stop. Never sit still. This is the first step in continuing to do more for our youth," he said.

You can learn more about Boys & Girls Clubs and access membership registration forms by visiting http://www.bgcneo.org. To access the summer membership form, go to tinyurl.com/BGCNEOsummer.

The new clubs will be at:

Akron Buchtel Community Learning Center, for students who have completed sixth through 12th grades. The starting date has not been determined.

Akron North High School, for teenagers in ninth through 12th grade. The starting date has yet to be determined.

Cuyahoga Falls Preston Elementary School, for children who have completed kindergarten through fifth grade. The starting date has not been set.

Ely Elementary School in Elyria, for children who have completed kindergarten through fifth grade. The club starts June 27.

Euclid Middle School, for children ages 6-18. The starting date has yet to be determined.

Garfield Heights High School, for children who have completed kindergarten through seventh grade. The club begins June 20.

Longfellow Middle School in Lorain, for children who have completed kindergarten through seventh grade. The club begins June 13.

New London Elementary, for those who have completed kindergarten through seventh grade. The club begins June 13.


Jamstack pioneer Matt Biilmann on Web 3, Deno, and why e-commerce needs the composable web – DevClass

Interview: Matt Biilmann, co-founder of Netlify and one of the originators of the Jamstack (JavaScript, APIs and Markup) approach to web development, spoke to DevClass at the Headless Commerce Summit, which is underway this week in London.

What is the technical argument behind Jamstack? "We saw this shift happening in the core architecture of the web," Biilmann tells us, "from a world where every website, every web application was a monolithic application with templates, business logic, plug-in ecosystem, data access, all grouped together. We were fortunate to predict the shift towards decoupling the web UI layer from the back-end business logic layer, and having that back-end business logic layer split into all these different APIs and services, where some of them might be owned today, typically running in a Kubernetes cluster somewhere, but a lot of them [were] other people's services like Stripe, Twilio, Contentful, Algolia and so on."

He adds: "We saw the opportunity to build an end-to-end cloud platform around the web UI layer, and we coined the term Jamstack."

How important was the idea of reducing the role of the web server, in favour of static pages and API calls?

"Before, the stack was like your server, your web server, your operating system, your back-end programming language. Now the stack is really what you deliver to the browser and how it communicates with different APIs and services," he says.

"The important piece for us was this step up in where the abstraction is. Are you working with an abstraction that's your Apache server here, your PHP program, your Ruby program? Or are you working more in the abstraction of developing a web UI: we can preview it locally, we can see how it looks when we push it to git, it will automatically go live. It was a step up from having to worry about all the underlying components," Biilmann says.
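In concrete terms, the pattern Biilmann describes looks something like the sketch below: the markup is prebuilt and served from a CDN, while the browser talks directly to back-end APIs. The endpoint URL and response shape are hypothetical placeholders, not any particular vendor's API.

```typescript
// Minimal Jamstack-style page logic: the markup is prebuilt and served statically,
// and dynamic data comes from external APIs called directly from the browser.
// The endpoint URL and response shape are hypothetical placeholders.

interface Product {
  id: string;
  name: string;
  price: number;
}

async function renderProducts(): Promise<void> {
  // Fetch data from a headless commerce/content API instead of a server-side template.
  const res = await fetch("https://api.example-cms.com/products");
  const products: Product[] = await res.json();

  const list = document.querySelector("#products");
  if (!list) return;

  list.innerHTML = products
    .map((p) => `<li>${p.name}: $${p.price.toFixed(2)}</li>`)
    .join("");
}

renderProducts().catch(console.error);
```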

Netlify's platform includes the idea of having a content delivery network built in, so that static content is always served with low latency. It now includes the concept of edge functions: code that runs server-side but close to where the user is located, with a serverless architecture. Is this any different from what Cloudflare is doing with its Pages and Workers, or other providers that have adopted this model?

"We're thinking about it differently," said Biilmann. "I would say that Cloudflare is really building their own application platform, a very Cloudflare-specific ecosystem where if you build an app for that application platform, you build it for Cloudflare. When it comes to our edge functions, we spent a long time thinking about how do we avoid making that a proprietary Netlify layer. That was why we started collaborating with the Deno team, who are actually working on an open-source runtime for that kind of layer."

"Where we think we created lock-in is just in terms of delivering developer productivity that makes our customers stay," he added.
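The shape of such an edge function is simple: a handler that receives a standard Request and returns a Response, running close to the user. The sketch below uses Deno's built-in HTTP server only for illustration; each provider wires handlers up and exposes request context in its own way, so treat those details as assumptions.

```typescript
// Edge-style handler sketch: standard web Request in, Response out.
// Deno.serve is used here only to run it standalone; hosting providers
// register handlers through their own entry points.

async function handler(request: Request): Promise<Response> {
  const url = new URL(request.url);

  // Answer lightweight, personalized requests at the edge, with no origin round trip.
  if (url.pathname === "/hello") {
    return new Response(JSON.stringify({ message: "Hello from the edge" }), {
      headers: { "content-type": "application/json" },
    });
  }

  // Everything else falls through to the prebuilt static assets on the CDN/origin.
  return fetch(request);
}

Deno.serve(handler);
```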

Why Deno and not the better-known Node.js runtime? Both were made by the same person, Ryan Dahl, Biilmann says. "He had this interesting observation that there was a growing need for developers to be able to program asynchronous I/O, and that was very hard to do in any of the existing dynamic runtimes like Ruby, PHP, Python."

"He saw that if he made a new language around JavaScript and built it around asynchronous APIs from the beginning, it's an advantage. That drove the adoption and success of Node, and what he's doing now with Deno is very similar."

The difference with Deno, Biilmann says, is that Dahl now sees that everybody wants to start running their code in these new environments, "where instead of running your code on your own server in your own data center, you want to take your code and have providers just distribute them all over the world and run your code for you."

Node libraries often come with native code dependencies, Biilmann said, which breaks this kind of deployment. Features like deeper TypeScript integration and use of ECMAScript modules also make Deno attractive.

Why is the Jamstack, headless approach important for ecommerce? "This idea of bringing the web UI very close to the user, either pre-building as static assets or having it run with edge functions close to the user, means huge benefits in performance, and no one is more aware of the huge difference performance makes to conversion rates than the ecommerce operators. It's just so well proven by studies," Biilmann says.

Should developers care more about Jamstack, or Web 3? "There's an immense hype around Web 3, and I think some of the ideas are really interesting, the idea of being able to bring your data with you to applications instead of putting your data into applications and giving it away," says Biilmann.

"Most of those applications are built with the Jamstack approach, but if you look at the number of developers on Ethereum or Solid, that's a smaller number than the developers signing up for Netlify every week."

"There's a lot of ideas there that are very aligned with our idea of what the open web means and what's good for the web. But I think they are often artificially coupled to cryptocurrencies and blockchain, and it gets very hard to differentiate."


PDF to Excel conversion: Your ultimate guide to the best tools – Computerworld

In an ideal world, the data we need to analyze would be available in ready-to-use format. In the world we live in, though, a lot of valuable data is locked inside Portable Document Format (PDF) documents. How do you extract that data from PDFs into an Excel spreadsheet? You have a number of PDF to Excel converters to choose from.

There's software from major vendors like Microsoft and Adobe, task-specific cloud services including PDFTables and Cometdocs, services from general-purpose cloud providers such as Amazon, and even free open-source options.

Which is the best PDF to Excel converter? As with the best computer, the answer depends on your specific circumstances.

There are several important considerations when selecting a PDF converter.

1. Was my PDF generated by an application or is it a scanned image? There are two types of PDF files. One is generated by an application like Microsoft Word; the other comes from a scanned or other image file. You can tell which one you have by trying to highlight some text in the document. If a click and drag works to highlight text, your PDF is app-generated. If it doesn't, you've got a scan. Not all PDF conversion tools work on scanned PDFs.

2. How complex is the data structure? Almost every tool will work well on a simple one-page table. Things get more complicated if tables are spread over multiple pages, table cells are merged, or some data within a table cell wraps over multiple lines.

3. Do I have a large volume of files that need batch file conversions or automation? Our best-performing tool on app-generated PDFs may not be the best choice for you if you want to automate frequent batch conversions.

In addition, as with any software choice, you need to decide how much you value performance versus cost and ease of use.

To help you find what's best for your tasks, we tested seven PDF to Excel conversion tools using four different PDF files ranging from simple to nightmare. You'll see how all the tools perform in each scenario and find out the strengths and weaknesses of each one.

Here are the tools we tested, starting with our overall best performers (but remember that best depends in part on the specific source document). All these tools did pretty well on at least some of our tasks, so rankings range from Excellent to Good.

As the creator of the Portable Document Format standard, you'd expect Adobe to do well in parsing PDFs, and it does. A full-featured conversion subscription is somewhat pricey, but there's also an inexpensive $2/month plan (annual subscription required) that includes an unlimited number of PDF to Excel conversions. (You can output Microsoft Word files with this tool as well.)

The Excel conversions include any text on pages that have both text and tables. This can be a benefit if you'd like to keep that context, or a drawback if you just want data for additional analysis.

Rating: Excellent; our hands-down winner for non-scanned PDFs.

Cost: $24/year

Pros: Outstanding results; preserves much of the original formatting; deals well with tables spanning multiple pages; unlimited conversions of files up to 100MB; affordable for frequent users.

Cons: No built-in scripting/automation workflow; expensive if you only convert a few documents a year.

Bottom line: If you don't need to script or automate a lot of conversions and don't mind paying $24 per year, this is a great choice.

For an AWS cloud service, Textract is surprisingly easy to use. While you certainly can go through the usual multi-step AWS setup and coding process for Textract, Amazon also offers a drag-and-drop web demo that lets you download results as zipped CSVs. You just need to sign up for a (free) Amazon AWS account.

Rating: Excellent; this was our best option for a complicated scanned PDF.

Cost: 1.5 cents per page (100 pages per month free for your first three months at AWS)

Pros: Best option tested for a complicated scanned PDF; performed extremely well on all the app-generated PDFs; offers a choice of viewing results with merged or unmerged cell layout; easy to use; affordable.

Cons: Uploaded files are limited to 10 pages at a time. For those who want to automate, using this API is more complicated than some other options.

Bottom line: An excellent choice if you dont mind the AWS setup and either manual upload or coding with a complex API.
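For those who do want to code against it, here is a minimal sketch using the AWS SDK for JavaScript's Textract client to request table extraction on a single-page document. The file path and region are placeholders, credentials are assumed to be configured, and multi-page PDFs go through Textract's asynchronous APIs instead.

```typescript
// Minimal sketch: synchronous Textract table analysis of a single-page document.
// Assumes AWS credentials are configured; the file path and region are placeholders.
import { readFile } from "node:fs/promises";
import {
  TextractClient,
  AnalyzeDocumentCommand,
} from "@aws-sdk/client-textract";

async function extractTables(path: string): Promise<void> {
  const client = new TextractClient({ region: "us-east-1" });

  const result = await client.send(
    new AnalyzeDocumentCommand({
      Document: { Bytes: await readFile(path) },
      FeatureTypes: ["TABLES"], // ask specifically for table structure
    })
  );

  // Textract returns a flat list of blocks (PAGE, TABLE, CELL, WORD, ...)
  // that you stitch together via their relationships to rebuild each table.
  const cells = (result.Blocks ?? []).filter((b) => b.BlockType === "CELL");
  console.log(`Found ${cells.length} table cells`);
}

extractTables("report-page.png").catch(console.error);
```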

If you're looking for free and open source, give Tabula a try. Unlike some free options from the Python world, Tabula is easy both to install and to use. And it has both a command-line and a browser interface, making it equally useful for batch conversions and point-and-click use.

Tabula did very well on PDFs of low or moderate complexity, although it did have an issue with the complex one (as did many of the paid platforms). Tabula requires a separate Java installation on Windows and Linux.

Rating: Very good; and you can't beat the price.

Cost: Free

Pros: Free; easy to install; has both a GUI and scripting options; allows you to manually change what areas of the page should be analyzed for tables; can save results as a CSV, TSV, JSON, or script; offers two different data extraction methods.

Cons: Needed some manual data cleanup on complex formatting; works on app-generated PDFs only.

Bottom line: A good choice if cost, ease of use, and automation options are high on your list of desired features and your PDFs aren't scanned.

A key advantage to this service is automation. Its API is well documented and supports everything from Windows PowerShell and VBA (Office Visual Basic for Applications) to programming languages like Java, C++, PHP, Python, and R.

PDFTables performed well on most of the app-generated PDF tables, even understanding that a two-column header would be best as a single-column header row. It did have some difficulty with data in columns that were mostly empty but also had some data in cells spread over two lines. And while it choked on the scanned nightmare PDF, at least it didn't charge me for that.

Rating: Very good overall; excellent on automation.

Cost: 50 pages free at signup, including API use. After that it's $40 for up to 1,000 pages, and your credits are only good for a year.

Pros: Very good API; better performance on the moderately complex PDF than several of its paid rivals.

Cons: Pricey, especially if you use more than the 50 free pages but fewer than 1,000 page conversions in a year. Doesn't work on scanned PDFs.

Bottom line: Performs well and is easy to use both on the web and through scripting and programming. If you don't need an elegant API, however, you may prefer a less expensive option.
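For a sense of what scripting a conversion looks like, here is a rough sketch of calling an HTTP conversion API of this kind from TypeScript. The endpoint, query parameters, form field name, and output format string follow PDFTables' general documented pattern but should be treated as assumptions; confirm them against the current API documentation before relying on this.

```typescript
// Rough sketch of scripting a PDF-to-Excel conversion over HTTP.
// The endpoint, query parameters, form field name, and format value are
// assumptions; check the provider's API docs for the exact details.
import { readFile, writeFile } from "node:fs/promises";

async function convertPdf(inputPath: string, outputPath: string, apiKey: string) {
  const pdfBytes = await readFile(inputPath);

  const form = new FormData();
  form.append("f", new Blob([pdfBytes]), inputPath);

  const res = await fetch(
    `https://pdftables.com/api?key=${apiKey}&format=xlsx-single`,
    { method: "POST", body: form }
  );
  if (!res.ok) throw new Error(`Conversion failed: ${res.status}`);

  await writeFile(outputPath, Buffer.from(await res.arrayBuffer()));
}

convertPdf("input.pdf", "output.xlsx", process.env.PDFTABLES_KEY ?? "").catch(
  console.error
);
```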

PDFtoExcel.com is a freemium platform with paid options. It proved to be the lone free choice that was able to handle our scanned nightmare PDF.

Rating: Good.

Cost: Free in the cloud, $5/month or $49/year premium cloud for batch conversions and faster service, desktop software $35 for 30-day use or $150 lifetime.

Pros: A lot of capability for the free option; works on scanned PDFs; affordable.

Cons: No API or cloud automation (we didn't test the desktop software); paid option required for batch conversions; split single-row multi-line data into multiple rows.

Bottom line: Nice balance of cost and features. This was most compelling for complex scanned PDFs, but others did better when cell data ran across multiple lines.

This web-based service is notable for multiple file format conversions: In addition to generating Excel, it can download results as Word, PowerPoint, AutoCAD, HTML, OpenOffice, and others. Free accounts can convert up to five files per week (30MB each); paid users get an unlimited number of conversions (2GB/day data limit).

Cometdocs is a supporter of public service journalism; the service offers free premium accounts to Investigative Reporters & Editors members (disclosure: I have one).

Rating: Good.

Cost: 5 free conversions/week; otherwise $10/month, $70/year or $130 lifetime.

Pros: Works on scanned PDFs; multiple input and output formats; generally good results; did extremely well on a 2-page PDF with complex table format.

Cons: Not as robust on complex scanned PDFs as some other options; split one row's multi-line data into multiple rows; no clear script/automation option.

Bottom line: Particularly compelling if you're interested in multiple format exports and not just Excel.

Many people don't know that Excel can import PDFs directly, but only if you've got a Microsoft 365 or Office 365 subscription on Windows. It was a good choice for the simple file but got more cumbersome to use as PDF complexity rose. It's also likely to be confusing to people who aren't familiar with Excel's Power Query / Get & Transform interface.

How to import a PDF directly into Excel: In the Ribbon toolbar, go to Data > Get Data > From File > From PDF and select your file. For a single table, you'll likely have one choice to import. Select it and you should see a preview of the table and an option to either load it or transform the data before loading. Click Load and the table will pop into your Excel sheet.

For a single table on one page, this is a quick and reasonably simple choice. If you have multiple tables in a multi-page PDF, this also works well as long as each table is confined to one page. Things get a bit more complex if you've got one table spread over multiple PDF pages, though, and you'll need knowledge of Power Query commands.

It's somewhat unfair to compare Power Query data transformation with the other tools, since the results of any of these other PDF to Excel converters could also be imported into Excel for Power Query wrangling.

Rating: Good.

Cost: Included in a Microsoft 365/Office 365 Windows subscription.

Pros: You don't have to leave Excel to deal with the file; a lot of built-in data wrangling available for those who know Power Query.

Cons: Complex to use compared with most others on all but the simplest of PDFs; doesn't work on scanned PDFs; requires a Microsoft 365/Office 365 subscription on Windows.

Bottom line: If you've already got Microsoft 365/Office 365 on Windows and you've got a simple conversion task, Excel is worth a try. If you already know Power Query, definitely consider this for more PDF conversions! (If you don't, Power Query is a great skill to learn for Excel users in general.) If your PDF is more challenging and you don't already use Power Query / Get & Transform, though, you're probably better off with another option.

Here's how the seven tools fared in our four conversion tests:

Our simple task was a single-page app-generated PDF pulled from page 5 of a Boston housing report. It contained one table and some text, but column headers and two data cells did include wrapped text over two lines.

All the platforms we tested handled this one well. However, several broke up the multi-line text into multiple rows. The issue was easy to spot and fix in this example, but it could be difficult in larger files. For this easy one-pager, though, the PDF to Excel converters that weren't in first or second place still had very good results. All were worth using for this type of conversion.

First place (tie): Adobe and AWS Textract. With Adobe, no data cleanup was needed. The column headers even had the color formatting of the original. Adobe's conversion included text (with lovely formatting), which is useful if you want to keep written explanations together with the data in Excel. You'd need to delete the text manually if you want data only, but that's simple enough.

AWS Textract converted data only. No data cleanup was needed.

Close second: Excel. Data only. Excel didn't break wrapped text into two rows, but it did appear to run text together without a space in multi-line rows. The data was actually correct, though, when you looked at it in the formula bar; it just looked wrong in the overall spreadsheet. This was easily fixed by formatting cells with "wrap text." However, not everyone might know to do that when looking at their spreadsheet.

Others:

PDFTables: returned data and text. Same issue as Excel, with wrapped text appearing to sit on a single line without a space between words. This was also easily fixed by wrapping text, if you knew to do so. This result also would need cleanup of a couple of words from a logo that appeared below the data. Explanatory text outside the logo had no problems, though.

Tabula: data only. Split multi-line cells into multiple rows.

Cometdocs: data and text. Split multi-line cells into multiple rows. Surrounding text was accurate, including logo text.

PDFtoExcel.com: similar performance to Cometdocs.

Our moderate PDF challenge was a single app-generated table spanning multiple PDF pages, via the Boston-area Metropolitan Water Resources Authority data monitoring wastewater for Covid-19 traces.

First place: Adobe. One of the few to recognize that all the pages were the same table, so there were no blank rows between pages. Headers were in a single row and spaces between words in the column names were maintained. Data structure was excellent, including keeping the multi-line wrap as is. It even reproduced background and text colors. The 11-page length wasn't a problem.

Second: AWS Textract. Header row was correct. Each page came back as a separate table, although it would be easy enough to combine them. The one strange issue: there were apostrophes added at the beginning of the cells, possibly due to how I split the PDF, since I needed to create a file with only 10 pages. However, those apostrophes were easy to see and remove with a single search and replace, since the data didn't include any words with apostrophes. It was easier to get the exact data I needed than with Tabula, but more cumbersome to get the full data set.

Close third: Tabula. No blank rows between pages, data in the correct columns, wrapped cells stayed in a single row. Unfortunately, while the wrapped data appeared properly when you looked at the cell contents in the formula bar, once again the data appeared to merge together in the full spreadsheet, and this wasn't as easily fixed by formatting with text wrapping as it was with Excel and PDFTables in the simple PDF.

For example, this was the content of one cell as it appeared in the formula bar:

B.1.1.7

76%

But in the overall spreadsheet, that same cell looked like

B.1.1.776%

I was able to get that to display properly at times by increasing the row height manually, but this was an added step that most people wouldn't know to do, and it didn't seem to work all the time.

Others:

PDFtoExcel.com: multiple problems. The first few pages were fine except for multi-row headers, but data over two lines in single cells broke into two rows in the data, generating blank rows elsewhere that would need to be fixed. In addition, columns were shifted to the right in one section. This would need cleanup.

PDFTables: multiple problems. All the data came in fine for most of the pages, but toward the end, a few cells that should have been in column J got merged with column I in ways that would be more difficult to fix than PDFtoExcel.com's. For example, this single cell:

Omicron

559 23%

was supposed to be "559" in one cell and "Omicron 23%" in the next cell.

Cometdocs: failed. Conversion failed on the full PDF and even on the 10-page version I uploaded to AWS. It was able to convert a version with just the first 5 pages, but the full file should have been well below Cometdocs' account limits.

Excel: it was possible to get the data in a format I wanted, but it required data manipulation in Power Query as well as wrapping text. That's not a fair comparison with other platforms that were a single upload or command. Still, results were ultimately excellent. If you're an Excel/Power Query power user, this is a good choice.

Local election results are some of my favorite examples of analysis-hostile public data. The app-generated PDF from Framingham, Mass., shown below, was only 3 pages, but its table formatting was not designed for ease of data import. Is there a PDF conversion tool that can handle it?

Page 1 of the PDF showing recent election results for Framingham, Mass.

First place (tie): Adobe and PDFtoExcel.com. Adobe returned an Excel file in perfect format, complete with original cell colors.

While PDFtoExcel.com's spreadsheet didn't have the pretty formatting of Adobe's, all the data came in accurately, and it was usable as is.

Others:

AWS Textract: fair. Results came back in 5 tables. In one case, you'd need to copy and paste them together manually and look at the original to make sure you were doing so correctly.

PDFTables: poor. Data came back, but some in the wrong columns, whether I tried to download as multiple sheets or one sheet. This would need manual checking and cleanup.

Tabula: poor. Similar problem as PDFTables, with some data in the wrong columns, but at least I didn't have to pay for it. I tried both the Stream and Lattice extraction methods, and both had some wrong-column issues (although the issues were different).

Cometdocs: conversion failed.

Our nightmare comes courtesy of a presentation at this year's National Institute for Computer Assisted Reporting conference, as an example of data that would be useful for training students if it were in a format that could be easily analyzed. It's a multi-page scanned PDF with four months of data from the federal Refugee Processing Center on refugee arrivals by country of origin and U.S. state of destination.

This PDF's challenges range from multi-page tables to lots of merged columns. In addition, the table on page 1 proved to be somewhat different from the tables on the other pages, at least in terms of how several tools were able to handle them, although they look the same.

I only tested the first 10 pages due to the AWS 10-page limit, to be fair to all the tools.


Learn React: Start of a Frontend Dev Journey – thenewstack.io

Hello! Welcome to the first article in a series of tutorials that will focus on learning React.js. This is a weekly series, and after this brief introduction, it will center on building a to-do list application from scratch. I chose a to-do list because it includes all the foundational building blocks needed in a basic CRUD application.

Before getting into what React is, here are some recommended prerequisites, as defined by Google:

When I learned React, I was a master of exactly none of these topics. I don't want to mislead anyone, though: I was at Codesmith and learned React in a structured school environment. By that time, I had studied algorithms and basic data structures for about five months and had a fledgling knowledge of the DOM and HTTP requests. My HTML was meh at best and my CSS was a disaster. Absolutely no divs were centered before this time period.

One last word from the wise(ish): The more working knowledge you have prior to exploring React, the more ease you may find with this, but no one can define what learning will look like for you. Many articles and video tutorials say learning React is easy, but that is in comparison to heavier frontend libraries and frameworks. "Easy" was not my experience. Don't be discouraged if it isn't yours either. I'm happy you're here and I hope you stay! Now, shall we?

Facebook developer Jordan Walke created the React.js frontend JavaScript library, as a way to help developers build user interfaces with components. A library is a collection of prewritten functions and code that reduce development time and provide popular solutions for common problems.

Inspired by XHP (an HTML component library for PHP), React was first deployed on Facebook's news feed in 2011, followed by Instagram in 2012. The library was open-sourced at JSConf US in May of 2013.

React is open source, meaning it is completely free to access. Developers are encouraged to modify and enhance the library.

React adheres to the declarative programming paradigm. Developers design views for each state of an application and React updates and renders components when data changes.
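To make the declarative idea concrete, here is a minimal sketch of a component in the spirit of this series' to-do app: the code describes what the UI should look like for the current state, and React re-renders when that state changes. The names and structure are illustrative only, not code from the series.

```typescript
// A minimal declarative to-do list: describe the UI for the current state and
// let React update the DOM when state changes. Illustrative sketch only.
import { useState } from "react";

export function TodoList() {
  const [todos, setTodos] = useState<string[]>([]);
  const [draft, setDraft] = useState("");

  const addTodo = () => {
    if (!draft.trim()) return;
    setTodos([...todos, draft.trim()]); // new state triggers a re-render
    setDraft("");
  };

  return (
    <div>
      <input
        value={draft}
        onChange={(e) => setDraft(e.target.value)}
        placeholder="What needs doing?"
      />
      <button onClick={addTodo}>Add</button>
      <ul>
        {todos.map((todo, i) => (
          <li key={i}>{todo}</li>
        ))}
      </ul>
    </div>
  );
}
```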

Documentation: React has a proper maintenance team via the engineers who actively work on React. As a result, React.js is incredibly professional. They have docs on docs on docs. Do you need to find something that isn't in the React docs, or do you want to search for something super specific in Google? Well, that is no problem! Enter Stack Overflow or the numerous blog posts (hello) that are also here to help you. I've worked with technologies that have a large footprint and those with a very small one. The larger the footprint, the easier and more independent the coding experience is.

Vast Career Potential: Uber, Bloomberg, Pinterest, Airbnb, and Skype are just a few companies that use React. Its popularity is growing as more companies adopt it, and Google estimates the average earnings for a React developer at $119,990 in the US.

Longevity: Any time a library is used, there's a risk that maintenance could be discontinued. It happens all the time. So when choosing a library, it's best to select one with a large community. I hope it's clear by now that React has one. Updates are still current after 10 years, and its popularity is only growing. Projects and skills are safe here.

One of the things I valued most about learning from my instructors at Codesmith was that they taught me to use the proven engineering tools at my disposal. React works. It's optimized for performance and effectiveness yet leaves so much room for creativity. Some of the greatest engineering minds put their best effort into building this library. I don't have to build my applications from scratch and can lean on these tools and libraries when it suits the project.

Leaning on a library, framework, or template isn't cheating. It's solid engineering. Engineering isn't taking the hardest, most laborious path forward, in my opinion. It is solving a challenge the best way possible with the most optimized solution that you know of at that time. And now I would like to present to you a very lean, mean, optimized frontend machine.

In the next article, I will cover the following topics: state, components, JSX, how to render JSX to the browser, how to set up the files in an IDE.


Encore Models, Builds the Backend Designed in Your Head – thenewstack.io

When Encore founder André Eriksson became a developer at Spotify, he found the work of building backends for cloud applications mundane and repetitive, far from the rush he felt while collaborating with World of Warcraft maker Blizzard as a teenager.

The Swedish backend maker's website likens those repetitive backend tasks, for the developer at least, to being a hamster on a wheel. His idea behind Encore is to make it easier and faster to get to the fun part of software development.

"I personally was spending the vast majority of my time as an engineer just doing the same type of work over and over again, managing the infrastructure, configuring things, you know, all that sort of repetitive and undifferentiated tasks that are just the daily life of building backends for the cloud these days," he said. "And then looking around, I noticed every single team was doing that. And then looking outside of the company, every other company was also doing the same thing."

The company was exploring available tools but not finding that they provided much benefit, he said. After thinking long and hard about the problem, he decided that the core issue is that engineers spend so much time building systems with tools that have no idea what they're trying to do.

So he set out to build a system that, in effect, could read your mind. Sort of.

"In order to help developers do their job more effectively, we need tools that actually understand what developers are trying to do," he said. "We're all used to all these tools that really have absolutely no idea what you're trying to do; they don't understand that you're building a backend at all."

"Even the ones that are backend-specific, they don't understand what your backend is about; they don't understand how it fits together. And when you don't have that understanding, you're very limited in your ability to actually aid developers in getting their job done. And that's where Encore is different."

Written in Go, Encore is designed to match the design in the engineer's head, an approach it calls the Encore Application Model.

With any programming language, you have a compiler and a parser that analyzes your code, then builds a binary that you then run on a server.

"Encore is essentially another layer on top of that, where we add additional rules to how you're expressing backend concepts, like: this is how you define an API, this is how you query a database, this is how you define that you need a queue or a cache or whatever. So you have all of these really important concepts in building distributed systems that come up over and over again, and we're taking them and turning them into native concepts that you express in a certain way," he explained.

Essentially, Encore runs a set of opinionated rules atop your cloud account, and its Backend Development Engine requires that they be followed.

"We have a parser, which works just like a regular compiler for a programming language, that is then parsing the code and enforcing those rules: oh, you're trying to query a database, but you're not following Encore's rules. So in a way, it's a programming language built on top of Go that, instead of compiling it into a binary, is compiling it into a description of a distributed system, which is like: here are all the services, here are all of the different endpoints, here are the requests and the response schemas, here is where you're doing an API call between this service and that service. Here's where you're defining a database or a key-value store. Here's where you're querying the database."

"So it becomes this really, really rich description of how your whole system fits together. And it very much models the mental model of the engineers that are building that system, because that's how they think about this," he said.
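To get a feel for the idea without reproducing Encore's actual Go syntax, here is a purely conceptual TypeScript sketch of the same pattern: backend concepts declared as first-class constructs that a parser could statically discover. Every name below is hypothetical and is not Encore's API.

```typescript
// Conceptual illustration only; Encore's real framework is Go-based and uses
// its own syntax. The point: declaring endpoints and databases as first-class
// constructs lets tooling statically build a description of the whole system.
// All names here are hypothetical.

interface EndpointSpec<Req, Res> {
  path: string;
  handler: (req: Req) => Promise<Res>;
}

// Hypothetical framework helpers.
function defineDatabase(name: string) {
  return {
    name,
    query: async (_sql: string, ..._params: unknown[]): Promise<unknown[]> => {
      throw new Error("not wired to real infrastructure in this sketch");
    },
  };
}

function defineEndpoint<Req, Res>(spec: EndpointSpec<Req, Res>): EndpointSpec<Req, Res> {
  return spec; // a real framework would register this for static discovery
}

// From these declarations alone, a parser can infer: this service owns a
// "todos" database and exposes a /todos endpoint with these request and
// response schemas, with no separate infrastructure configuration.
const db = defineDatabase("todos");

export const createTodo = defineEndpoint<{ title: string }, { id: number }>({
  path: "/todos",
  handler: async ({ title }) => {
    const rows = await db.query(
      "INSERT INTO todo (title) VALUES ($1) RETURNING id",
      title
    );
    return { id: (rows[0] as { id: number }).id };
  },
});
```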

Using static analysis of the metadata, it creates a graph of your system, much like if you were drawing this out on a whiteboard, with boxes and arrows representing systems and services and how they communicate and connect to the infrastructure.

It provides the ability to:

Encore doesn't want to host your software. While it does offer hosting to help startups and hobbyists get up and running quickly, for production it runs atop your cloud accounts on Amazon Web Services, Azure or the Google Cloud Platform.

It makes much of its open source roots and your control of your cloud accounts, stressing that if, for whatever reason, you want to leave Encore, you still own the data and access to those accounts.

"It's a full-fledged programming tool, just at a slightly higher abstraction level that's dedicated for building cloud-based backends," Eriksson said.

"Most of the engineers that are using Encore are actually very experienced. They come from a world where they know how to do all of this stuff with cloud infrastructure and scalable distributed systems. They're just fed up with it. They actually want to build products, not mess around with all of that toil. And they really like that Encore enables them to do that," he said.

Eriksson launched Encore along with Marcus Kohlberg, also a Spotify alum, in 2021. It touts an engineering team with experience at Google and the UK-based online bank Monzo. The company open-sourced the Encore Go Framework last year under the Mozilla Public License 2.0. It's the basis for the Backend Development Engine, announced recently along with a $3 million seed round led by Crane Venture Partners.

"Encore is dramatically changing the developer experience for building distributed systems in the cloud," said Krishna Visvanathan, co-founder of Crane Venture Partners. "It stands apart because of its ability to deeply understand source code and automate what would otherwise slow development and business to a halt, while giving developers the freedom to develop for any application or cloud environment. Encore is a clear leader and first mover in this space."

With its experience with large-scale distributed systems, the company is looking to solve those problems while also providing a compelling product for startups.

"I think this approach, which is very opinionated and really focuses on a very integrated approach, is where we can actually make investments into solving problems that large engineering organizations never have the time to get to. I think there's substantial value there on the enterprise side of really sophisticated analysis about how your systems fit together and work," Eriksson said.

He noted that if you're into game development you use a game engine like Unity or Unreal Engine. But to build a backend, traditionally, you just open a file and start typing.

"So there's this real massive difference in experience and integration between the game industry and the backend industry. And that's kind of where we want to take this: providing a really powerful and integrated experience that improves things not just for individual developers, but how you collaborate, and how you're working in teams, and how whole organizations work."

"And then going beyond developers into insights and analytics and machine learning and data," he said of the long-term vision.

"On the more immediate horizon, it's much more about how do we take this experience and make it more accessible to larger companies that want to integrate it with already existing systems and backends, being able to seamlessly integrate it with existing infrastructure, and that sort of thing."

"And then just adding more cloud primitives, as we call it, the building blocks of distributed systems, like caches and queues and object storage, and all these sorts of things that you're building backends out of."

Brian Ketelsen, cloud developer advocate at Microsoft, is a fan. He gushed in email:

I have used Encore for a few projects now, and I'm completely in love. The first project was an ambitious conference management platform undertaken with a few volunteers in the Go community. In just a few weeks we were able to put together a complete conference management system that included everything a conference needs: ticketing, program scheduling, call for papers, room management and more. It was really easy to onboard new volunteers to help with the code, and everyone was impressed with the speed at which we were able to develop. This project was just over a year ago, so it was built using an older version of Encore's platform.

More recently I was invited to do a keynote for DevWeek Mexico. I knew Encore was planning a 1.0 launch around the same time and they had just released Azure support. I work for Microsoft as an Azure cloud developer advocate. So I decided to build a Life API as a demo app for the keynote.

My goal was to create an API that covered all of the things I would manually do as a developer advocate. I have a new baby at home with some severe medical issues, and we ended up spending much of the time I had planned to write my talk and app in the ICU with the little one. We got home on Friday; my keynote was Monday. I was able to build out the entire API and build a new website that consumes it in just a few hours over the weekend.

To say that I'm impressed with Encore would be a gross understatement. From a functional perspective, Encore is built for developers. The development experience is well crafted, with almost zero friction after installing the encore command-line app and creating an account. The Encore platform allowed me to write only the business logic for my application instead of spending countless hours setting up hosting, continuous integration, automated deployments and the rest of the operational things that drag a new project down in the beginning. For a smaller project like mine, that probably saved me a total of 15-20 hours of time.

Operationally, Encore really shines. Because Encore's tools analyze the code I've written, they are able to inject all the boring boilerplate that I hate writing. Yes, I want distributed tracing; no, I don't want to annotate every function with dozens of lines of repetitive code to make it happen. Once my code was deployed, I could go to the Encore dashboard and view distributed traces and detailed logs. That single-pane approach to ops is such a wonderful simplification from the usual suite of 5-8 different tools a team might use to manage a deployed application.

Treating RPC calls as local function calls in code is another delightful time-saver. Instead of writing my API as a big monolith, I decided to break each functional area into separate microservices to explore how well Encore worked in an environment where there are many services exposed with public and private (internal) endpoints. Everything about the process was smooth and boring in the best possible way. Encore manages database connections, secrets, database migration, logs and infrastructure. That's SO MUCH code I didn't write.

Every tool like Encore that is designed to speed up development comes with tradeoffs. As a developer, it is your responsibility to understand the tradeoffs that come with the decisions made on your behalf by the tools.

Encore was clearly built by people who understand both the needs of the developer and the needs of the ops crowd. There aren't any decisions in their platform that I couldn't accept and embrace. The icing on the proverbial cake is the ability to host the application on my own Azure subscription, so I'm not dependent on someone else's cloud.


Dependency Issues: Solving the World’s Open-Source Software Security Problem – War on the Rocks

The idea of a lone programmer relying on their own genius and technical acumen to create the next great piece of software was always a stretch. Today it is more of a myth than ever. Competitive market forces mean that software developers must rely on code created by an unknown number of other programmers. As a result, most software is best thought of as bricolage: diverse, usually open-source components, often called dependencies, stitched together with bits of custom code into a new application.

This software engineering paradigm, in which programmers reuse open-source software components rather than repeatedly duplicating the efforts of others, has led to massive economic gains. According to the best available analysis, open-source components now comprise 90 percent of most software applications. And the list of economically important and widely used open-source components (Google's deep learning framework TensorFlow or its Facebook-sponsored competitor PyTorch, the ubiquitous encryption library OpenSSL, or the container management software Kubernetes) is long and growing longer. The military and intelligence community, too, are dependent on open-source software: programs like Palantir have become crucial for counter-terrorism operations, while the F-35 contains millions of lines of code.

The problem is that the open-source software supply chain can introduce unknown, possibly intentional, security weaknesses. One previous analysis of all publicly reported software supply chain compromises revealed that the majority of malicious attacks targeted open-source software. In other words, headline-grabbing software supply-chain attacks on proprietary software, like SolarWinds, actually constitute the minority of cases. As a result, stopping attacks is now difficult because of the immense complexity of the modern software dependency tree: components that depend on other components that depend on other components ad infinitum. Knowing what vulnerabilities are in your software is a full-time and nearly impossible job for software developers.

Fortunately, there is hope. We recommend three steps that software producers and government regulators can take to make open-source software more secure. First, producers and consumers should embrace software transparency, creating an auditable ecosystem where software is not simply mysterious blobs passed over a network connection. Second, software builders and consumers ought to adopt software integrity and analysis tools to enable informed supply chain risk management. Third, government reforms can help reduce the number and impact of open-source software compromises.

The Road to Dependence

Conventional accounts of the rise of reusable software components often date it to the 1960s. Software experts such as Douglas McIlroy of Bell Laboratories had noted the tremendous expense of building new software. To make the task easier, McIlroy called for the creation of a "software components sub-industry" for mass-producing software components that would be widely applicable across machines, users, and applications; in other words, exactly what modern open-source software delivers.

When open source started, it initially coalesced around technical communities that provided oversight, some management, and quality control. For instance, Debian, the Linux-based operating system, is supported by a global network of open-source software developers who maintain and implement standards about what software packages will and will not become part of the Debian distribution. But this relatively close oversight has given way to a more free-wheeling, arguably more innovative system of package registries largely organized by programming language. Think of these registries as app stores for software developers, allowing the developer to download no-cost open-source components from which to construct new applications. One example is the Python Package Index, a registry of packages for the programming language Python that enables anyone from an idealistic volunteer to a corporate employee to a malicious programmer to publish code on it. The number of these registries is astounding, and now every programmer is virtually required to use them.

The effectiveness of this software model makes much of society dependent on open-source software. Open-source advocates are quick to defend the current system by invoking Linus's law: "Given enough eyes, all bugs are shallow." That is, because the software source code is free to inspect, software developers working and sharing code online will find problems before they affect society, and consequently, society shouldn't worry too much about its dependence on open-source software because this invisible army will protect it. That may, if you squint, have been true in 1993. But a lot has changed since then. In 2022, when there will be hundreds of millions of new lines of open-source code written, there are too few eyes and bugs will be deep. That's why in August 2018, it took two full months to discover that cryptocurrency-stealing code had been slipped into a piece of software downloaded over 7 million times.

Event-Stream

The story began when developer Dominic Tarr transferred the publishing rights of an open-source JavaScript package called event-stream to another party known only by the handle right9ctrl. The transfer took place on GitHub, a popular code-hosting platform frequented by tens of millions of software developers. User right9ctrl had offered to maintain event-stream, which was, at that point, being downloaded nearly two million times per week. Tarr's decision was sensible and unremarkable. He had created this piece of open-source software for free under a permissive license (the software was provided as-is) but no longer used it himself. He also already maintained several hundred pieces of other open-source software without compensation. So when right9ctrl, whoever that was, requested control, Tarr granted the request.

Transferring control of a piece of open-source software to another party happens all the time without consequence. But this time there was a malicious twist. After Tarr transferred control, right9ctrl added a new component that tried to steal bitcoins from the victim's computer. Millions upon millions of computers downloaded this malicious software package until developer Jayden Seric noticed an abnormality in October 2018.

Event-stream was simply the canary in the code mine. In recent years, computer-security researchers have found attackers using a range of new techniques. Some are mimicking domain-name squatting: tricking software developers who misspell a package name into downloading malicious software (dajngo vs. django). Other attacks take advantage of software tool misconfigurations, which trick developers into downloading software packages from the wrong package registry. The frequency and severity of these attacks have been increasing over the last decade. And these tallies don't even include the arguably more numerous cases of unintentional security vulnerabilities in open-source software. Most recently, the unintentional vulnerability of the widely used log4j software package led to a White House summit on open-source software security. After this vulnerability was discovered, one journalist titled an article, with only slight exaggeration, "The Internet Is on Fire."

The Three-Step Plan

Thankfully, there are several steps that software producers and consumers, including the U.S. government, can take that would enable society to achieve the benefits of open-source software while minimizing these risks. The first step, which has already received support from the U.S. Department of Commerce and from industry as well, involves making software transparent so it can be evaluated and understood. This has started with efforts to encourage the use of a software bill of materials. This bill is a complete list or inventory of the components for a piece of software. With this list, software becomes easier to search for components that may be compromised.

In the long term, this bill should grow beyond simply a list of components to include information about who wrote the software and how it was built. To borrow logic from everyday life, imagine a food product with clearly specified but unknown and unanalyzed ingredients. That list is a good start, but without further analysis of these ingredients, most people will pass. Individual programmers, tech giants, and federal organizations should all take a similar approach to software components. One way to do so would be embracing Supply-chain Levels for Software Artifacts, a set of guidelines for tamper-proofing organizations' software supply chains.
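As a small illustration of why a machine-readable inventory helps, here is a sketch that walks an npm lockfile and prints every resolved component and version, the raw material a software bill of materials formalizes. It assumes the "packages" layout used by npm lockfileVersion 2 and later.

```typescript
// Sketch: enumerate every resolved dependency in an npm project, the kind of
// inventory a software bill of materials formalizes. Assumes a lockfile with
// the "packages" map used by lockfileVersion 2 and later.
import { readFile } from "node:fs/promises";

interface LockfilePackage {
  version?: string;
}

interface Lockfile {
  packages?: Record<string, LockfilePackage>;
}

async function listComponents(lockfilePath: string): Promise<void> {
  const lock: Lockfile = JSON.parse(await readFile(lockfilePath, "utf8"));

  for (const [path, pkg] of Object.entries(lock.packages ?? {})) {
    if (path === "") continue; // the root project itself
    // Entries look like "node_modules/foo" or "node_modules/foo/node_modules/bar".
    const name = path.split("node_modules/").pop();
    console.log(`${name}@${pkg.version ?? "unknown"}`);
  }
}

listComponents("package-lock.json").catch(console.error);
```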

The next step involves software-security companies and researchers building tools that, first, sign and verify software and, second, analyze the software supply chain and allow software teams to make informed choices about components. The Sigstore project, a collaboration between the Linux Foundation, Google, and a number of other organizations, is one such effort focused on using digital signatures to make the chain of custody for open-source software transparent and auditable. These technical approaches amount to the digital equivalent of a tamper-proof seal. The Department of Defense's Platform One software team has already adopted elements of Sigstore. Additionally, a software supply chain "observatory" that collects, curates, and analyzes the world's software supply chain with an eye to countering attacks could also help. An observatory, potentially run by a university consortium, could simultaneously help measure the prevalence and severity of open-source software compromises, provide the underlying data that enable detection, and quantitatively compare the effectiveness of different solutions. The Software Heritage Dataset provides the seeds of such an observatory. Governments should help support this and other similar security-focused initiatives. Tech companies can also embrace various "nutrition label" projects, which provide an at-a-glance overview of the health of a software project's supply chain.

These relatively technical efforts would benefit, however, from broader government reforms. This should start with fixing the incentive structure for identifying and disclosing open-source vulnerabilities. For example, DeWitt clauses commonly included in software licenses require vendor approval prior to publishing certain evaluations of the software's security. This reduces society's knowledge about which security practices work and which ones do not. Lawmakers should find a way to ban this anti-competitive practice. The Department of Homeland Security should also consider launching a non-profit fund for open-source software bug bounties, which would reward researchers for finding and fixing such bugs. Finally, as proposed by the recent Cyberspace Solarium Commission, a bureau of cyber statistics could track and assess software supply chain compromise data. This would ensure that interested parties are not stuck building duplicative, idiosyncratic datasets.

Without these reforms, modern software will come to resemble Frankenstein's monster, an ungainly compilation of suspect parts that ultimately turns upon its creator. With reform, however, the U.S. economy and national security infrastructure can continue to benefit from the dynamism and efficiency created by open-source collaboration.

John Speed Meyers is a security data scientist at Chainguard. Zack Newman is a senior software engineer at Chainguard. Tom Pike is the dean of the Oettinger School of Science and Technology at the National Intelligence University. Jacqueline Kazil is an applied research engineer at Rebellion Defense. Anyone interested in national security and open-source software security can also find out more at the GitHub page of a nascent open-source software neighborhood watch. The views expressed in this publication are those of the authors and do not imply endorsement by the Office of the Director of National Intelligence or any other institution, organization, or U.S. government agency.



Yapily to acquire finAPI in open banking consolidation move – TechCrunch

Fintech startup Yapily is announcing that it plans to acquire finAPI; the transaction is subject to regulatory approval before it closes. Both companies offer open banking solutions in Europe.

With this move, Yapily is consolidating its position in Europe and growing its business in Germany, more specifically. The terms of the deal are undisclosed, but the company says it is a multimillion-euro transaction.

Based in the U.K., Yapily offers a single, unified open banking API to interact with bank accounts. Unlike Tink or TrueLayer, Yapily offers a low-level solution without any front-end interface. Developers have to code their own bank connection flow. The result is more control and no Yapily logo.

Due to the European PSD2 regulation, banks have to offer programming interfaces (APIs) so that they can work better with third-party services. Yapily has focused specifically on official API integrations and covers thousands of banks. It doesn't rely on screen scraping and private APIs.

Companies can leverage open banking to check the balance on a bank account and fetch the most recent transactions, but also to initiate payments directly from a bank account.
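In practice that boils down to a handful of authenticated REST calls. The sketch below is a rough, hypothetical illustration of the pattern; the endpoint paths, auth scheme, and response shapes are placeholders, not Yapily's or finAPI's actual APIs.

```typescript
// Hypothetical sketch of an open banking client; endpoint paths, auth scheme,
// and response shapes are placeholders, not any real provider's API.
const BASE_URL = "https://api.example-openbanking.com";

interface Transaction {
  amount: number;
  currency: string;
  description: string;
  bookingDate: string;
}

async function call<T>(path: string, token: string, init?: RequestInit): Promise<T> {
  const res = await fetch(`${BASE_URL}${path}`, {
    ...init,
    headers: { Authorization: `Bearer ${token}`, "content-type": "application/json" },
  });
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  return res.json() as Promise<T>;
}

// Read the balance and recent transactions for a linked account...
const balance = await call<{ amount: number; currency: string }>(
  "/accounts/acct-123/balance",
  "user-consent-token"
);
const transactions = await call<Transaction[]>(
  "/accounts/acct-123/transactions?limit=20",
  "user-consent-token"
);

// ...or initiate a payment directly from that account.
await call("/payments", "user-consent-token", {
  method: "POST",
  body: JSON.stringify({
    from: "acct-123",
    to: "DE89370400440532013000",
    amount: 42.5,
    currency: "EUR",
  }),
});

console.log(balance.amount, transactions.length);
```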

FinAPI is also an open banking provider. Originally from Munich, Germany, the company has been around since 2008; Schufa acquired a majority stake in finAPI in 2019. It offers an API with coverage in Germany, Austria, the Czech Republic, Hungary and Slovakia. Like Yapily, finAPI clients can obtain account information and initiate payments using an API.

In addition to those pure open banking products, finAPI also offers the ability to verify the age and identity of a customer. This can be useful to comply with KYC (Know Your Customer) regulation.

Yapily currently covers 16 European markets, and the company says it is the leader in the U.K. But the startup isn't currently active in the Czech Republic, Slovakia and Hungary. With today's acquisition, the company is expanding to these three new markets and becoming the leader in Germany.

There is some product overlap between Yapily and finAPI, but the acquisition makes sense because the two companies didn't start in the same market.

Yapily works with companies like American Express, Intuit QuickBooks, Moneyfarm, Volt, Vivid and BUX. FinAPI's clients include ING, Datev, Swiss Life, ImmobilienScout24 and Finanzguru.

"This is a hugely exciting milestone for Yapily on our journey from disruptive startup to ambitious scale-up. Within three years from launch, we have commercialized our platform, grown our customer base, and now have the largest open banking payments volumes in Europe. Working with finAPI, we can gain more speed, agility, and depth to accelerate innovation and shape the future of open finance in Europe and beyond," Yapily founder and CEO Stefano Vaccino said in a statement.

When it comes to payments in particular, Yapily and finAPI have processed a combined total of $39.5 billion in payment volumes over the last 12 months. Essentially, Yapily will double its customer base with this acquisition.

Follow this link:
Yapily to acquire finAPI in open banking consolidation move - TechCrunch

The Web3 Movement's Quest to Build a 'Can't Be Evil' Internet – WIRED

Owocki was something of a rock star at the conference. He is credited with coining the term BUIDL in 2017. Admirers approached him nonstop to talk, express their support, or ask for a copy of his book, GreenPilled: How Crypto Can Regenerate the World, which was the talk of the conference and quickly sold out of the 400 copies he had ordered. Owocki is about as far from a casino person as you'll find in the crypto world. In one of several presentations he gave, Owocki told the crowd that since research shows money stops increasing happiness after about $100,000 in annual income, Web3 founders should maximize their happiness by giving their excess money to public goods that everyone gets to enjoy. "There's cypherpunk, which is all about privacy, decentralization: hardcore libertarian shit," he told me. "I'm more of a leftist. I'm more solarpunk, which is, how do we solve our contemporary problems around sustainability and equitable economic systems? It's a different set of values."

The internet, he explained, made it possible to move information between computers. This revolutionized communication. Blockchains have made it possible to move units of value between computers. Owocki believes this can be harnessed to revolutionize how human beings interact through something he calls regenerative cryptoeconomics. Cryptoeconomics, he writes in GreenPilled, is "the use of blockchain-based incentives to design new kinds of systems, applications, or networks." Regenerative cryptoeconomics means doing this in a way that makes the world a better place for everyone. The goal is to break free from the zero-sum, rich-get-richer patterns of capitalism. Owocki believes that the right cryptoeconomic structure can help solve collective action problems like climate change, misinformation, and an underfunded digital infrastructure.

The key tool for achieving this is a decentralized autonomous organization. In theory, a DAO (yes, pronounced the same as the ancient Chinese word for the way of the universe) uses cryptocurrency to boost collective action. Typically, members join by buying some amount of a custom token issued by the DAO. That entitles them to an ownership stake in the DAO itself. Member-owners vote on what the DAO does, which is mostly to say what it spends money on, since a blockchain-based entity can do little besides move funds from one address to another.
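As a rough illustration of that pattern, the toy model below implements token-weighted voting on treasury spending in plain Python. It is a sketch of the general mechanic only; real DAOs encode this logic in on-chain smart contracts, and the class names, member names and figures here are invented for the example.

```python
# Toy, in-memory model of the DAO pattern described above: members hold
# tokens, and proposals to spend treasury funds pass by token-weighted vote.
from dataclasses import dataclass, field

@dataclass
class Proposal:
    description: str
    amount: float            # treasury funds requested
    votes_for: float = 0.0
    votes_against: float = 0.0

@dataclass
class ToyDAO:
    treasury: float
    balances: dict = field(default_factory=dict)   # member -> token balance

    def vote(self, proposal: Proposal, member: str, support: bool) -> None:
        weight = self.balances.get(member, 0.0)    # tokens held = voting weight
        if support:
            proposal.votes_for += weight
        else:
            proposal.votes_against += weight

    def execute(self, proposal: Proposal) -> bool:
        """Spend treasury funds only if the token-weighted vote passes."""
        if proposal.votes_for > proposal.votes_against and proposal.amount <= self.treasury:
            self.treasury -= proposal.amount
            return True
        return False

dao = ToyDAO(treasury=100_000, balances={"alice": 500, "bob": 200})
p = Proposal("Fund an open-source security audit", amount=25_000)
dao.vote(p, "alice", True)
dao.vote(p, "bob", False)
print(dao.execute(p), dao.treasury)   # True 75000
```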

The young concept already has a checkered history. The first DAO, named simply The DAO, collapsed in 2016 after someone exploited a loophole in its code to siphon off what was then worth some $50 million in Ethereum currency. Similarly colorful failures have followed. DAOs were nonetheless all the rage at ETHDenver, where attendees waxed on about their world-changing potential. Kimbal Musk, Elon's photogenic brother, spoke about his Big Green DAO, a food-related charity. Giving away money via a DAO, he insisted, got rid of all the painful bureaucracy of philanthropic nonprofits. "It's way better," he said, though he also granted that there are many ways to fail, and this one could fail spectacularly.

What is it about a DAO that, unlike, say, a Kickstarter page, frees humanity from the collective action problems that threaten to doom the species? According to Owocki, it's the ability to write code in ways that tinker with incentive structures. (In this sense, the first DAO was arguably Bitcoin itself.) "Our weapon of choice is novel mechanism designs, based upon sound game theory, deployed to decentralized blockchain networks as transparent open source code," he writes in GreenPilled. Indeed, the book has very little to say about technology per se, and much more to say about various game theory concepts. These range from the sort of thing you'd learn in an undergrad econ class (public goods are non-excludable and non-rivalrous) to things that wouldn't be out of place in a sci-fi novel: community inclusion currencies, fractal DAO protocols, retroactive public goods funding.

It's hard enough for me to grasp how a DAO works. So while I'm in Denver, I create one.

One of the most powerful incentive design techniques, according to Owocki, is something called quadratic voting. Standing near the edge of the Shill Zone, Owocki turned around to show me the back of his purple baseball jacket, which said Quadratic Lands. The Quadratic Lands, Owocki explained, are a mythical place where the laws of economics have been redesigned to produce public goods. "It's just a meme," he said. "I don't want to tell you it already exists." (Everyone at ETHDenver was concerned, rightly, about my ability to separate metaphorical claims from literal ones.)

In a quadratic voting system, you get a budget to allocate among various options. Let's say it's dollars, though it could be any unit. The more dollars you allocate to a particular choice, the more your vote for it counts. But there's an important caveat: each marginal dollar you pledge to the same choice is worth less than the previous one. (Technically, the cost of your vote rises quadratically, rather than linearly.) This makes it harder for the richest people in a group to dominate the vote. Gitcoin uses an adaptation, quadratic funding, to award money to Web3 projects. The number of people who contribute to a given project counts more than the amount they contribute. This rewards ideas supported by the most people rather than the wealthiest: regenerative cryptoeconomics in action.
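The arithmetic behind that claim is easy to show. The sketch below implements the textbook quadratic-funding formula, in which a project's share of a matching pool is proportional to the square of the sum of the square roots of its individual contributions. Gitcoin's production mechanism layers further anti-collusion adjustments on top, so treat this as the basic idea only; the project names and donation amounts are invented for the example.

```python
# Textbook quadratic funding: many small donors outweigh one large donor,
# because the matching weight is (sum of sqrt(contributions))^2.
from math import sqrt

def quadratic_match_weights(contributions: dict[str, list[float]]) -> dict[str, float]:
    """contributions maps project -> list of individual donation amounts."""
    return {
        project: sum(sqrt(c) for c in donations) ** 2
        for project, donations in contributions.items()
    }

def allocate_matching_pool(contributions: dict[str, list[float]], pool: float) -> dict[str, float]:
    """Split a fixed matching pool across projects in proportion to their weights."""
    weights = quadratic_match_weights(contributions)
    total = sum(weights.values())
    return {project: pool * w / total for project, w in weights.items()}

# 100 donors giving $1 each (weight 10,000) beat a single $100 donor (weight 100),
# so the grassroots project captures almost all of the matching pool.
example = {
    "grassroots_project": [1.0] * 100,
    "whale_backed_project": [100.0],
}
print(allocate_matching_pool(example, pool=5_000))
```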

Here is the original post:
The Web3 Movement's Quest to Build a 'Can't Be Evil' Internet - WIRED

GitHub's 2FA Move Was Long Overdue – The New Stack – thenewstack.io

On May 4, GitHub's CSO Mike Hanley announced that all users who upload code to the site must enable one or more forms of two-factor authentication (2FA) by the end of 2023 or leave. It's about time!

In case you've been asleep for the last few years, software supply chain attacks have become commonplace. One of the easiest ways to protect your code and the accounts behind it is to use 2FA, and 2FA is simple: besides using a username/password pair to identify yourself, you also use a second factor to prove your identity.

Under the surface, 2FA gets complicated. Implementations rely on one of three standards: HMAC-based One-Time Password (HOTP), Time-based One-Time Password (TOTP), or the FIDO Alliance's FIDO2/Universal 2nd Factor (U2F) standard. But you don't need to worry about that as a developer, unless security, authentication, and identity are your thing. You just need to, as Hanley puts it, "enable one or more forms of 2FA."
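For the curious, here is a minimal RFC 6238-style TOTP sketch showing what an authenticator app computes each time it displays a six-digit code. The base32 secret is an arbitrary placeholder, and real projects should use a maintained library such as pyotp rather than hand-rolled code.

```python
# Minimal TOTP (RFC 6238): HOTP (RFC 4226) applied to a 30-second time counter.
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period                  # time-based moving counter
    msg = struct.pack(">Q", counter)                      # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# Placeholder secret for demonstration only.
print(totp("JBSWY3DPEHPK3PXP"))   # prints the current 6-digit one-time code
```

The server stores the same shared secret and accepts any code within a small window of time steps, which is why the codes on your phone keep working without a network connection.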

It's not that freaking hard. Still, today, only approximately 16.5% of active GitHub users and 6.44% of npm users use one or more forms of 2FA. Why are developers so stubbornly stupid?

As Mark Loveless, a GitLab senior security researcher, put it recently, "The main reason for low adoption of a secondary authentication factor is that turning on any multi-factor authentication (MFA) is an extra step, as it is rarely on by default for any software package." And we do so hate to take even one extra step.

Mind you, smarter developers on bigger projects do get it. Patrick Toomey, GitHub's director of product security engineering, recently observed that "open source maintainers for well-established projects (more than 100 contributors) are three to four times more likely to make use of 2FA than the average user." That comes as no surprise, because larger and more popular projects appreciate their position and responsibility in the open source software supply chain. In addition, these projects often become GitHub organizations, with the ability to manage access to their repositories using teams and to set security policies, including a requirement to enable 2FA.

Another factor in people refusing to get a 2FA clue is simple ignorance. For example, a discussion on the Reddit programming subreddit on the issue showed many people assume that 2FA is either hard (spoiler: it's not) or not that secure because it uses a phone. True, 2FA that relies on texting is relatively easy to break. Just ask Jack Dorsey, Twitter's founder: Dorsey's own Twitter account was hijacked thanks to a SIM swap attack.

But the important point here is that you don't need to use texting, aka Short Message Service (SMS). For 2FA, GitHub also supports alternatives such as TOTP authenticator apps and hardware security keys.

It's not that hard, people! It really isn't. And as for those who whine, "This will kill projects!": any project that's killed because its developers can't do basic 2FA security is better off dead.

For too long in open source communities, we've been too inclined to think that hackers only attack proprietary programs. As James Arlen, Aiven CISO (chief information security officer), observed, "The reality of open-source software development over the last 30+ years has been based on a web of trust among developers. This was maintained through personal relationships in the early days but has grown beyond the ability of humans to know all of the other humans. With GitHub taking the step of providing deeper authentication requirements on developers, it will dramatically reduce the likelihood of a developer suffering an account takeover and the possibility of a violation of that trust." In short, as Angel Borroy, a Hyland developer evangelist, told me, "bad guys can see open source code too."

GitHub is giving you until 2023. That's much too kind of them. Your GitHub account being hijacked is a real and present danger. Adopt 2FA today, and not only on GitHub but on all your code repositories and online services. It's the best way you can protect yourself and your code from attackers.

Featured image by Ed Hardie on Unsplash.

Here is the original post:
GitHub's 2FA Move Was Long Overdue – The New Stack - thenewstack.io

Kubernetes has standardised on sigstore in a landmark move – The Stack

Kubernetes has standardised on the Linux Foundations free software signing service, sigstore, to protect against supply chain attacks. sigstore, first released in March 2021, includes a number of signing, verification and provenance techniques that let developers securely sign software artifacts such as release files, container images and binaries with signatures stored in a tamper-proof public log. The service is free to use and designed to help prevent what are increasingly regular and sophisticated upstream software supply chain attacks.

sigstore's founders include Red Hat, Google and Purdue University. Its adoption by Kubernetes, one of the world's most active open source communities with close to six million developers (a huge number, given that CNCF data from December 2021 suggests there are 6.8 million cloud native developers in total), is a significant vote of trust in the standard for verifying software components. (NB: The Linux Foundation hosts both sigstore and Kubernetes, as well as Linux, Node.js and a host of other ubiquitous critical software projects.)

Kubernetes 1.24, released May 3, and all future releases will now include cryptographically signed sigstore certificates, giving the developer community the ability to verify signatures and gain greater confidence in the origin of each and every deployed Kubernetes binary, source code bundle and container image.
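As a rough illustration of what that verification step can look like in practice, the Python sketch below shells out to the cosign CLI (sigstore's signing and verification tool) to check an image signature. The image reference is illustrative only, and the exact flags required depend on the cosign version and on how the artifact was signed (keyless versus key-based), so consult the Kubernetes release documentation for the canonical verification command.

```python
# Hedged sketch: wrap the cosign CLI to verify a container image signature.
# Keyless verification may require additional flags depending on cosign version.
import subprocess

def verify_image(image_ref: str) -> bool:
    """Return True if cosign reports a valid signature for image_ref."""
    result = subprocess.run(
        ["cosign", "verify", image_ref],
        capture_output=True,
        text=True,
    )
    if result.returncode == 0:
        print(result.stdout)   # verification output, including transparency-log details
        return True
    print(result.stderr)
    return False

# Hypothetical usage; the registry path is illustrative, not an official reference.
verify_image("registry.k8s.io/kube-apiserver:v1.24.0")
```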

Few open source projects currently cryptographically sign software release artifacts, something largely due, the Linux Foundation suggested at sigstore's launch back in March 2021, to the challenges software maintainers face around key management, key compromise and revocation, and the distribution of public keys and artifact digests.

The move by Kubernetes maintainers comes as supply chain attacks escalated 650% in 2021. The Kubernetes team in early 2021 began exploring SLSA compliance to improve Kubernetes software supply chain security, explaining that sigstore was "a key project in achieving SLSA level 2 status and getting a head start towards achieving SLSA level 3 compliance," which the Kubernetes community expects to reach this August [2022].

(SLSA is a set of standards and technical controls that provide a step-by-step guide to preventing software artifacts from being tampered with, preventing tampered artifacts from being used and, at the higher levels, hardening the platforms that make up a supply chain. It was introduced by Google as a standard in June 2021.)

Dan Lorenc, original co-creator of sigstore while at Google (and presently CEO and co-founder of Chainguard), told The Stack that the sigstore General Availability (GA) production release is due out this summer.

"This means enterprises and open source communities will benefit from stable APIs and production grade stable services for artifact signing and verification. This is being made possible thanks to the dedicated sigstore open source community, which has fixed major bugs and added key features in both services over the past few months. Sponsors like Google, RedHat, HPE and Chainguard provided funding that allowed us to stabilize infrastructure and perform a third-party security audit," he said, adding: "Many programming language communities are working towards Sigstore adoption and the Sigstore community is working closely with them. We just announced a new Python client for PyPI and are hoping to extend this to other ecosystems like Maven Central and RubyGems."

In terms of broader enterprise adoption (likely to accelerate when it is GA), he said in an emailed Q&A that "a number of enterprises have already adopted Sigstore and are using it for signing and verifying both open and closed software. Notably the Department of Defense Platform One team has implemented Sigstore signatures into the IronBank container hardening platform," which means they can verify container images, SBOMs and attestations.

sigstore's keyless signing has raised some concerns that it could make revocation harder, but that's not the case, he added, telling The Stack: "No, in fact the opposite is true! While it is true that the signatures on software are stored forever, software verification using Sigstore does support artifact revocation. Further, Sigstore allows after-the-fact auditing to help organizations understand the extent of a compromise, and Sigstore makes discovering compromises in the first place easier by posting signatures on a transparency log. The Sigstore community recently published 'Don't Panic: A Playbook for Handling Account Compromise with Sigstore' with more details on this."

In terms of policy automation and vendor services support for sigstore, Lorenc, as a co-creator, had understandably got in early. His company's Chainguard Enforce, announced last week, is "the first tool with native support for modern keyless software signing using the Sigstore open source standard," he said, adding that the product will give CISOs the ability to audit and enforce policies around software signing for the code they use.

sigstore's release had met with genuine appreciation across the community in 2021, with Santiago Torres-Arias, Assistant Professor of Electrical and Computer Engineering at Purdue University, noting that the software ecosystem is in dire need of something like it to report the state of the supply chain. "I envision that, with sigstore answering all the questions about software sources and ownership, we can start asking the questions regarding software destinations, consumers, compliance (legal and otherwise), to identify criminal networks and secure critical software infrastructure. This will set a new tone in the software supply chain security conversation."

"It's great to see adoption of sigstore, especially with a project such as Kubernetes which runs many critical workloads that need the utmost protection," said Luke Hinds, security engineering lead in Red Hat's office of the CTO, a member of the Kubernetes Security Response Team and founder of the sigstore project, in a May 3 release.

"Kubernetes is a well known and widely adopted open source project and can inspire other open source projects to improve their software supply chain security by following SLSA levels and signing with sigstore," added Bob Callaway, staff software engineer at Google, sigstore TSC member and project founder.

He noted: "We built sigstore to be easy, free and seamless so that it would be massively adopted and protect us all from supply chain attacks. Kubernetes' choice to use sigstore is a testament to that work."

Security firm BlueVoyant noted earlier in 2021 (after a survey of 1,500 CISOs, CIOs and CPOs from the US, UK, Singapore, Switzerland and Mexico) that 77% had limited visibility around their third-party vendors, let alone the components those vendors were using, and that 80% had suffered a third-party-related breach.

Users can find out how sigstore works in more detail here.

Original post:
Kubernetes has standardised on sigstore in a landmark move - The Stack