This puzzle challenge brings joy to the world of code – MIT Technology Review

By midnight on December 1, 2015, when Eric Wastl first launched his annual Santa-themed puzzle-a-day programming challenge Advent of Code, 81 people had signed up. That pretty much matched his capacity planning for 70 participants. Wastl figured this amusement might be of interest to a few friends, friends of friends, and maybe some of their friends as well.

But Wastl, a software engineer who works as a senior architect for TCGPlayer, an online marketplace for trading card games, had failed to anticipate how social media's recursive contagion might overwhelm these modest expectations. He jokes that the technical term for what happened next is "OH NO!" Within 12 hours there were about 4,000 participants. The server nearly crashed. At 48 hours, there were 15,000 people, and by the end of the event, on December 25, the grand total was 52,000. The following year, he moved the operation to Amazon Web Services, and numbers have since continued to grow.

Last year, perhaps due to the pandemic, the event saw a 50% spike in traffic, with more than 180,000 participants worldwide.

And now again this year, thousands of coders from San Francisco to Slovenia, students and software engineers and competitive programmers alike, are counting down to Christmas with Advent of Code (AoC). While traditional advent calendars deliver daily gifts of chocolate or toys (and some alternative versions deliver dog treats, Jack Daniel's, Lego figures, or even digital delights via apps), Advent of Coders unwrap playfully mathy problems and then write computer mini-programs that do the solving.

The fun of it, partly, is simply in the time-honored magic of a holiday ritual. But it's also in submitting to pleasurable puzzlement. Peter Norvig, a research director at Google, finds it fun because he trusts the creator, Wastl, "to make it worth my time," in a similar way, Norvig says, to how New York Times crossword puzzlers trust Will Shortz to do right by them. "There will be some tricks that make it interesting," says Norvig, "but there are bounds on how tricky."

At midnight US Eastern time (Wastl is based in Buffalo, New York), every night from December 1 to 25, a new puzzle lights up at adventofcode.com, embedded within a cleverly composed Christmas-caper narrative; one player described the story as "an Excuse Plot if there ever was such a thing."

This year's event got off to a fine start when Santa's elves lost the keys to the sleigh. The first problem set the scene as follows: "You're minding your own business on a ship at sea when the overboard alarm goes off! You rush to see if you can help. Apparently one of the Elves tripped and accidentally sent the sleigh keys flying into the ocean!"

Luckily, the Elves had a submarine handy for just such emergencies, and from there participants set off on a 25-day underwater quest. They try to solve two puzzles daily (the second adding a twist, or more difficulty), each worth a star and some praise: "That's the right answer! You are one gold star closer to finding the sleigh keys."

Every player earns a star for solving a problem, but if you're the first to get a star, you receive 100 points; if you're second, you receive 99 points; and so on, with the 100th place earning one point.
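That rule is simple enough to express as a one-line function. Here is an illustrative R sketch (R is the language used in the tutorial later in this collection); this is purely an illustration, not code from the event itself:

```
# Leaderboard scoring: rank 1 earns 100 points, rank 100 earns 1, everyone else 0
leaderboard_points <- function(rank) ifelse(rank <= 100, 101 - rank, 0)

leaderboard_points(c(1, 2, 100, 101))  # returns 100 99 1 0
```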

"In order to save Christmas," the puzzle master explains, "you'll need to get all fifty stars by December 25th."


The object of Advent of Code is to solve the puzzles using your programming language of choice (Python is the most popular). Participants also use by-hook-or-by-crook strategies, such as "Excel madness," as Wastl describes it, or reams of graph paper, and a surprising number solve the puzzles in Minecraft.

But the broader motivation varies from player to player. Some treat it as an annual tune-up for their programming skills; others see it as the perfect opportunity to learn to code or try a new language. José Valim, creator of the Elixir programming language, is live-streaming his AoC solutions on Twitch.

At the top of the global leaderboard, which ranks the 100 players with the highest total score, competitive programmers like Brian Chen (his handle is betaveros) and Andrew He (ecnerwala) are out for speed. A security software engineer working on end-to-end encryption at Zoom, Chen placed first last year (and the year before), while He came a close second.

"Going fast is fun," Chen says, "just like optimizing anything where you can get fairly immediate feedback. There are lots of little knobs to tweak, and lots of little moments to be proud of where you made the right choice or prepared something that came in useful."

Both MIT computer science alums who live in the Bay Area, Chen and He are friendly rivals who've competed together in programming challenges over the years: on the same team at the International Collegiate Programming Contest (ICPC) and as competitors at Codeforces and Google's Code Jam. This year again, Chen is beating He. "To be honest, it's 'cause he's a little better than me, better at various tricks and implementations that optimize speed, but I don't like admitting that," says He, a founding engineer at the startup Modal, which builds infrastructure and tooling for data teams.

The leaderboard is out of reach for the majority of participants, especially as puzzles get harder by the day. Kathryn Tang, who runs an engineering operations team at Shopify, placed 36th on day one and was still hanging on to 81st by day three, but she knew her leaderboard status wouldn't last long. "I'm doing this for fun using Google Sheets," she says.

The element of contest, however, is replicated, at Shopify and Google and many companies big and small, with private leaderboards, as well as dedicated chat channels where players share solutions and kvetch about the problems in post-mortems.

"The competitiveness helps commitment," said the engineer Alec Brickner, commenting in a Slack channel at Primer.ai, a natural-language-processing startup in San Francisco (Brickner has made the leaderboard on a couple of days so far).

"Meh," replied his colleague Michael Leikam. "The payoff for me is the joy of coding."

John Bohannon, Primer's director of science, seconded that with an emoji: "SAME."

Bohannon also loves the silly story that sets up the problems, but "the plot has little to zero utility. The speed-demon solvers completely ignore the story, focusing on the variables of the problem to solve and just getting to it," he says.

Nora Petrova, a data scientist and engineer at Primer's office in London, UK, is there for the beauty, not the sport: "I love the drama that's unfolding in every puzzle," she says. For instance, on day four, a giant squid attached itself to the submarine; it wanted to play bingo, of course. The puzzle input was a random set of 100 bingo boards, and the challenge was to predict the winning board and give it to the squid.
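The core of that challenge fits in a short function. Here is a hedged R sketch, assuming boards is a list of 5x5 numeric matrices and draws is the vector of called numbers; the names and data layout are illustrative, not the puzzle's official input format:

```
# Find the first board to complete a full row or column as numbers are drawn
first_winning_board <- function(boards, draws) {
  marked <- lapply(boards, function(b) matrix(FALSE, 5, 5))
  for (n in draws) {
    for (i in seq_along(boards)) {
      marked[[i]][boards[[i]] == n] <- TRUE
      if (any(rowSums(marked[[i]]) == 5) || any(colSums(marked[[i]]) == 5)) {
        return(list(board = i, winning_number = n))
      }
    }
  }
  NULL  # no board won
}
```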

Wastl's main motivation in creating Advent of Code was to help people become better programmers. "Beginners who are just getting into programming are the people I want to get the most out of this," he says. "The success metric for most people should be 'How many new things did I learn?' not 'Was I one of the very, very fastest people in the world to solve this puzzle?'"

Russell Helmstedter, a middle school teacher at the De Anza Academy of Technology and the Arts, in Ventura, California, is using Advent of Code to teach Python to his students in sixth, seventh, and eighth grades. They tackled the first two problems together as a class. From a teaching perspective, the problems are effective exercises because if you fail, you can simply try again, very much in the spirit of test-driven software development.

Helmstedter found that some of his students were a bit overwhelmed with the two-pronged challenge (deciphering the problem and coding a machine to solve it), but most embraced the struggle. "I like that it is hard to do," one student said on a survey. And another said, "There is honestly no downside. I really like how you start working progressively toward a goal." Although the survey's multiple-choice question ranking feels elicited one "Hate it," 41 respondents chose "Like it" (to varying degrees) and eight "Love it."


At the University of Ljubljana, in Slovenia, the computer scientist Janez Demšar uses the AoC problems both as a professor and to hone his own skills (he's on the core team of Orange, an open-source machine learning and data visualization toolbox). "I need to have some regular practice, like a violinist who plays in an orchestra and does some teaching but still needs some small pieces to practice," he says. "So these are my etudes." Demšar teaches Programming 101 to a heterogeneous group of more than 200 students. "My greatest concern," he says, "is how to keep those who already know some (or a lot) of programming interested and occupied. AoC tasks are great because they require various skills, from pure coding to algorithms."

Gregor Kikelj, a third-year mathematics undergraduate at the university, first tried Advent of Code in 2019. He did well enough to land himself an internship at Comma.ai (working on Openpilot, its software for semi-automated driving systems), since the founder of the company was also competing. And Kikelj boosted his grade in the programming course (with another professor), since every problem solved was worth extra points on the final exam, plus bonus points for placing on the leaderboard.

Kikelj (grekiki) got up every morning for the puzzle drop (6 a.m. in Slovenia) and ranked 52nd overall on the leaderboard, accumulating a total of 23 extra exam points. "After that year, they put the cap on the amount of points you can receive to 5," he recalls. But he's still rising with the sun to pounce on the puzzle. This year his best ranking, on day five, was 25th; he's aiming to stay in the top 100. "We'll see how it goes as the problems get harder," Kikelj says.

If the leaderboard is your game, competition is fierce and the daily countdown is key: players wait like a hawk for the puzzle to drop, and then click lickety-split to download. Last year, this giant burst of traffic "synchronized to a single second" (as Wastl describes it) troubled even Amazon's load balancers.

The AoC subreddit, one of many communities around the internet, is full of inside-baseball banter about how to prevail (with solutions and help threads, as well as self-satire and memes). But the best resource is perhaps Brian Chen's blog post on how to leaderboard.


Toronto startup Buf Technologies raises more than $100-million with 18 employees, no revenue – but big plans to change how software is built – The…

Some of the top names in global venture capital have invested more than $100-million in a Toronto startup named after the city of Buffalo with 18 employees, no revenue and big plans to transform how software is developed.

Buf Technologies Inc. said last week it had raised US$93.4-million in four separate financings since its founding 22 months ago, including a US$68-million funding last month co-led by New York funds Lux Capital and Tiger Global.

Other investors include U.S. venture capital firms Amplify Partners, Lightspeed Venture Partners, Addition, Haystack Ventures, Abstraction Capital, Greenoaks Capital Partners and Canada's Garage Capital.

It's the fourth investment this fall in Canada led or co-led by Tiger Global, one of the world's largest and most prolific early-stage investors, known for offering big dollars at rich valuations to young, growing companies with fast closing times and no requests for board seats or other conditions usually set by VC firms.

Buf's early success attracting money is in sharp contrast to Canada's startup scene a decade ago, when many founders here were told by U.S. funders they would have to move south as a condition of receiving funding. Many did.

By contrast, Buf was founded and built in Toronto by an American, software engineer Peter Edge, after leaving his previous job with Uber Technologies in the city, which has seen a huge influx of global tech giants and a proliferation of startups that have flourished without having to move. "The team could have built this company anywhere," said Mike McCauley, managing partner of Waterloo, Ont.-based Garage Capital. "But they've chosen to build it in Toronto because it's where they believe it's the best place to attract the best global talent."

Mr. Edge named the company as a nod both to his hometown of Buffalo (which is technically the company's headquarters) and to the area of software development it focuses on, known as protocol buffers (or Protobuf), built on top of open-source tools first developed by Google.

In plainer language, Buf is working to make it easier for machines and software programs to communicate with one another. Developers typically create and use software tools known as APIs (application programming interfaces) that enable different digital technologies to interact. While some of the largest digital companies have adopted technologies such as Protobuf to streamline the development process, much of that work is done by everyone else using less advanced open-source programming tools. It's a laborious process that Mr. Edge said in an interview previously ate up 20 per cent of his engineering working time.

"None of us are really providing any business value doing that," Mr. Edge said. "A lot of the industry has had a really difficult time bringing it to everyone else in an easy-to-consume way, and we think we have the world experts to actually accomplish it. If we can eliminate a large portion of that code base you need to write, it effectively reduces the amount of time your engineers need to spend on all these ancillary tasks. With every company becoming a software company, if you can give back software engineering time, you're giving a company back one of its most valuable assets."

The company's head of business development, Ahron Seeman, a former management consultant, explains its approach is to build schema-driven tools analogous to Lego. "You know what the interface between two bricks will be, so you can build a chimney before you build a roof because you know exactly how they should connect. Google and Facebook have done that for years. Buf's value is to build software to help [other] companies do that in a more accessible way."

Mr. Edge, who studied computer science and math at Pittsburgh-based Carnegie Mellon University, has recruited at least six of his former colleagues from the ride-sharing giant Uber. Other Buffers previously worked at prominent tech firms Stripe, GitHub, Cisco and Autodesk.

Mr. McCauley said what intrigued us most about the story was that the team had experienced the exact problem they were going after at some of the most well-respected engineering companies and had realized what they were building would be useful to everyone else, but no one had built it for everyone else yet.

Guru Chahal, a partner with Silicon Valley-based Lightspeed, which led Buf's US$3.7-million seed round in September 2020, said when he met the company "it was very clear the potential here is huge, because there is an amount of pain every software team in the world goes through to take those open-source projects and make them usable for the team. If you solved that, you'd get paid for it."

Buf released a free open-source program called Buf CLI last year that it says has been downloaded more than a million times. In recent months, it has focused on building its paid tool, called Buf Schema Registry. Several early customers are now testing it out; Mr. Seeman describes them as "major enterprises, public companies and household names" but won't disclose their identities.

"I would expect very soon we will have our first revenue," Mr. Edge said, though he added: "Meaningful revenues, to the point where it has a major impact on the business, isn't our primary concern as a 2022 target."

For now the focus is on staffing up to 100 people next year, accelerating development and building up the user base.

While the idea of giving away a product for years is foreign in many sectors, it's commonplace in the world of software tool development; other multibillion-dollar-valued companies have deployed similar strategies, including Lightspeed-backed Grafana Labs Inc., and publicly traded Confluent Inc., Elastic NV and HashiCorp Inc.

"Our experience is, if you create and provide value to engineering teams, there's enough budget that they'll pay for a version with more support and features," Mr. Chahal said.



U-M, Humotech partner to bring open-source bionic leg to research labs – University of Michigan News

The open-source, artificially intelligent prosthetic leg designed by researchers at the University of Michigan will be brought to the research market by Humotech, a Pittsburgh-based assistive technology company.

The goal of the collaboration is to speed the development of control software for robotic prosthetic legs, which have the potential to provide the power and natural gait of a human leg to prosthetic users.

"We developed the open-source leg to foster the study of control strategies for robotic prostheses, one of the most prominent barriers hindering their public impact," said Elliott Rouse, assistant professor of mechanical engineering and core faculty at U-M's Robotics Institute.

The open-source leg is now being used by over 10 other research groups to develop control strategies on a common platform, but we noticed some research groups would rather not build it themselves. To maximize the benefit to the public, a product-like solution was needed.

First released in 2019, the open-source leg's free-to-copy design is intended to accelerate scientific advances by offering a unified platform to fragmented research efforts across the field of bionics. Now, for labs that need an off-the-shelf robotic prosthesis for research and development, Humotech will provide an assembled version of the open-source leg, including warranty service and technical support.

We see many benefits to standardizing the hardware and software used by the research community, said Josh Caputo, president and CEO of Humotech. The fully contained and powerful open-source leg is a natural expansion of what we can do to support our mission to transform the way the world develops wearable robotics.

"By offering a preassembled version with professional support, we hope to improve access to this platform for studying the control of robotic prosthetic legs. We're extremely excited to partner with the University of Michigan on this strategic initiative and together help accelerate research and innovation in the field."

Alejandro Francisco Azocar, Mechanical Engineering Graduate Student Research Assistant, puts the finishing connections together before testing an open-source robotic leg designed by Elliott Rouse, Assistant Professor of Mechanical Engineering, and his research group in the G. G. Brown Building on May 28, 2019. Image credit: Robert Coelius, Michigan Engineering

Humotech, originating from Carnegie Mellon University, develops tools for the advancement of wearable robotic control systems and other wearable devices. Using its own research community, Humotech will further build and support a development community around the open-source leg and seek to incorporate the leg into Humotech's Caplex platform. Caplex is a hardware and software testbed that enables researchers to emulate the mechanics of wearable machines, including prostheses and exoskeletons.

In collaboration, Rouse's lab and Humotech will also iterate on new versions of the open-source leg to meet the needs of prosthetic wearers and researchers.

The original prosthetic leg was designed to be simple, low-cost and high-performance. Its modular design can act as a knee, ankle or both, with an onboard power supply and control electronics that allow it to be tested anywhere. Rouse collaborated with Levi Hargrove, director of the Center for Bionic Medicine at the Shirley Ryan AbilityLab in Chicago, to develop this first model.

Rouse hopes Humotech's partnership will expand the capabilities of other labs, and enable them to conduct high-impact research. An example of such research that Rouse notes is a Nature Biomedical Engineering article, "Design and clinical implementation of an open-source bionic leg," by former mechanical engineering doctoral student Alejandro Azocar.

For researchers looking to build the leg on their own, the prosthetic's parts list, assembly instructions and programming remain freely available online.

This collaboration furthers the mission of our open-source leg project, Rouse said. The translation of an open-source research prototype to a commercial product is rare for our field, but our partnership can continue to lower the barrier to research, speed technical advances, and in the end, positively impact lives.

Written in collaboration with Danielle Commisso of Humotech.


Kickstarter’s Year in Design and Tech Trends – Core77.com

Sunne

Admittedly, 2021 has turned out differently than most of us expected, with Hot Vax Summer giving way to "continue to social distance and exercise caution" fall. But as we evolve and adjust to the new new normal, Kickstarter's Design and Tech team continues to be inspired by the many creators and innovators who are rising to the challenge and working toward the public benefit through creativity, great design, and plenty of fun. From groups turning waste into opportunity and harnessing the power of the wind to the ascent of water-saving shower tech and the creation of a sunset you can hang on your windowsill, 2021 was filled with surprise, awe, and optimism for the ways design can enhance everyday life.

These were some of the top trends in a year that defied predictions.

One Clock

The passing of time took on a surreal quality in 2021. We no longer relied on just the minutes, hours, or even days to chart our year, and this shift was reflected in a range of unique projects. The classic clock got some creative upgrades as Author Clock charted each minute in iconic quotes from literature, and OneClock transformed the daily wake up into a less alarming experience thanks to AI-generated music from a Grammy-award-winning composer. Watches also got a reimagining, with Bangle providing open-source programming options for smartwatch users. While the Minimalist Wall Calendar visualized the passing of a year without unnecessary frills, Superlocal went granular, allowing backers to reflect the unique ebbs and flows of life via a clock face.

Motion Kit

This past year many of us took up new hobbies, learned a skill or two, or simply tried to finally assemble that IKEA furniture. For the ambitious, 2021 was great for tinkering and experimenting. From kaleidoscope legend Thea Marshall, Kaleidxscape offered backers the option of building their own fantastical 'scopes. The brainchild of two former physics and design students, MotionKit promised tinkerers the chance to create their own Rube Goldberg-style contraptions. And for those looking to make science and tech fun, PocketLab G-Force designed a STEM kit on wheels by way of a mini car packed with sensors that measure speed and motion to help teach engineering and physics, while CircuitMess Batmobile promised to illuminate the dynamics of autonomous driving in a superhero-friendly package.

Boxx

Life is far from back-to-normal, and many are looking for ways to recreate "outside world" experiences often enjoyed in public spaces into the home. Inspired by her own maternity leave exercise struggles, creator Anna Samuels launched Boxx, the smart punch bag and app for training from your living room. Nimble offered a robotically applied salon-quality manicure right on your bedside vanity, while Light Pong's interactive ping pong promised all the fun of games night at the local dive.

There also continues to be enthusiasm for virtual reality and the prospect of traveling to wherever you'd like while remaining at home. Lynx launched the latest in AR and VR with its open and versatile headset, beaming wearers into central Paris or simply into their favorite game, while Tundra Tracker promoted its Full Body tracking, cross-compatible with any SteamVR device.

Ebo Smart Robot

They don't shed, need to be walked, or have bathroom emergencies, and for robot pets, 2021 was absolutely their year. Drawing clear inspiration from Boston Dynamics' Spot, MangDang's Mini Pupper is an extremely cute way to learn robotics, while the XGO Mini scampered its way into our hearts. The Ebo Smart Robot's camera and AI-equipped roaming bots utilize a speaker and advanced mobility to provide connectivity and companionship to family members (both two- and four-legged), while FerroPet uses electromagnets to make alien goo (aka ferrofluid) dance.

Shine Turbine

Last year saw many of us embracing the outdoors with new zest, and that trend extends to projects that offer creative ways of capturing and storing green energy. Shine Turbine launched a powerful portable turbine small enough to toss into a backpack; EcoFlow promised energy independence with a portable home battery equipped for smart energy management; Sunne captured the sun's rays during the day to illuminate an elegant window light at night; and Solar Cow's cow-shaped solar panel, equipped with electric udders for charging portable batteries, generated electricity while helping underserved children attend school.

Nutshell Cooler

The popularity of inventive reuse and repurposed circular waste continues to produce new and innovative projects on the platform, particularly in fashion and style. The year saw the creation of caffeinated gear with Panto Coffee Boots and Baseline Midlayer's hoodies made from recycled coffee grounds, as well as jaunty headwear from Storied Hats, created with a fusion of coffee, algae, and cactus. Standouts also included grappaSac's stylish bags made from grape waste left by the international wine harvest and EcoTrek's pants knitted from recycled ocean buoys.

Bringing sustainability into home design, Welli Bins, made from sugarcane, launched a collection of sustainable plant-based storage bins, while SOAPBOTTLE crafted an entirely sustainable home goods line. In high-end, Ohmie unveiled an elegant lamp made from recycled oranges, Gomi speakers transformed recycled e-waste and ocean plastic into portable sound systems, and Nutshell Cooler upcycled coconut waste into eco-friendly to-go insulation.

REFRAMD

To capture imaginations, designers are focusing on adding positivity to the world. Called to action by her battle with cancer, shoe designer Nelli Kim launched REDEN, a line of kicks created with orthopedic surgeons to ease foot pain; REFRAMD unveiled a series of digitally tailored glasses custom made for Black nose profiles and others underserved by traditional retailers; and EONE Switch's inclusivity-designed watches sought to help vision-impaired wearers tell time by touch. Forgoing fast fashion's environmental waste, La Guapa unveiled clothing from vintage wool blankets, the line also providing Melbourne, Australia's refugee community with work and learning opportunities.

Inclusivity was also key. In 2021, Dynasty George revealed a line of elegant, maternity-friendly dresses; Ponderosa by Alpine Parrot offered female hikers size 14-24 options for hitting the trails; and Unhidden, from Victoria Jenkins, a garment technologist who became Disabled in her 20s, unveiled a series of adaptive garments that accommodate mobility challenges.

uFactory Lite 6 robotic arm

When desktop 3D printers first emerged a decade ago, they were slow, expensive, routinely failed, and we absolutely loved every second of watching a real object emerge from the digital ether. These days, 3D printing may feel less novel, but the machines themselves are much more capable. Creality's CR-30 filament printer, a follow-up to their extremely popular CR-6 SE campaign from 2020, features a conveyor belt, allowing you to print a parade of identical models, or objects that would be too long for most printers to create in a single piece. The team at Elegoo were also thinking big with their Jupiter resin printer, bringing the intricate detail of SLA printing to a large format. And the Revopoint POP, a handheld 3D scanner, lets you turn the world around you into 3D models for printing, or maybe for showcasing on the Looking Glass Portrait, a holographic display that shipped in 2021.

Beyond 3D printing, we saw quite a range of innovative tools for helping you bring your ideas to life. Carvera aims to reduce CNC mills' steep learning curve, YesWelder offers a budget-and-space-friendly entry point into the world of metal fabrication, and xTool's M1 combines laser and blade cutting technologies to let you work with a wide range of materials. For those of us who don't own a giant factory building to house industrial robots, the uFactory Lite 6 robotic arm brings the power of automated manufacturing and testing to a more manageable scale.

Keyboardio Model 100

Heading into year three of working from home, it's no surprise that we've seen a steady stream of products designed to make the experience of sitting at your computer a little more comfortable. And Kickstarter remains a hub for mechanical keyboard enthusiasts: Keychron launched three new variations on their highly customizable designs, Mojo68 embraced quirky design and bold color palettes, and the Keyboardio Model 100, sporting a hardwood enclosure, split layout, and open-source firmware, clicked with backers. If QWERTY alone doesn't cover your human-computer interface needs, TourBox offers a versatile controller with mappable dials and buttons to enhance film editing, music production, digital illustration, and more.

Portable displays that allow you to take your home setup anywhere are also a trend worth watching. Arovia's Splay lets you carry up to an 80" screen in your backpack thanks to a patented folding mechanism, and Espresso Display's portable touchscreen monitor earned a spot on Time's 'Best Inventions of 2021' list for giving adherents to the two-screen lifestyle a portable option.

Want to see what else is on the horizon? Browse some of our favorite projects.


Getting Started with R and InfluxDB – The New Stack – thenewstack.io

Gourav Singh Bais

Gourav is an applied machine learning engineer at ValueMomentum Inc. He is skilled in developing machine learning/deep learning pipelines, retraining systems and transforming data science prototypes into production-grade solutions. He has been working in the field for the last three years and has served many clients, including Fortune 500 companies, gaining experience and skills that contribute to the machine learning community.

As a data professional, you may come across datasets with a few independent variables (input variables). One variable would be time, and another can be any sort of time-dependent column, such as the number of bookings in a hotel or the number of passengers on a flight.

This type of data is referred to as time-series data, which has some type of trend and captures a point in time. There are various ways of storing this type of data, such as relational databases or files, like CSV or Excel. However, these options are not designed to efficiently store the time-series data. Enter time-series databases, which are specifically designed to efficiently and quickly store time-series data.

There are various use cases where time-series databases (TSDB) perform significantly better than other storage mechanisms: operations monitoring, IoT sensor data and real-time analytics, for example, all generate a constant stream of time-stamped records.

Furthermore, there are several advantages to using a time-series database over other storage mechanisms for that data type, such as high-speed ingestion of large volumes of entries and efficient time-range queries.

One widely used time-series database is InfluxDB, an open-source time-series database created by the company InfluxData. It's written in Go for storing and retrieving time-series data for any use case, including operations monitoring, application metrics, Internet of Things (IoT) sensor data and real-time analytics.

To learn more about the benefits of InfluxDB, you can refer to the InfluxData website.

In this article, you will learn what is needed to get started with InfluxDB and the R language: installing and setting up the database, writing and querying data, and finally building a simple time-series application in R.

Clients interacting with InfluxDB using any programming language must be able to connect to the database so that different database operations can be carried out. The influxdb-client-r library can be used to connect to InfluxDB using R. It's a package that supports operations like writing data, reading data and getting the database status. This client library works with InfluxDB version 2.0.

Let's start with setting up InfluxDB using version 2.0. InfluxDB is available on different platforms, like Windows, Linux and macOS. Examples that you will see in this article are tested against macOS Big Sur, although installing it on any platform is simple.

Alternatively, you can use InfluxDB Cloud to quickly get a free instance of InfluxDB running in minutes without having to install anything locally on your machine.

InfluxDB can be installed on macOS using Homebrew:

```
$ brew update
$ brew install influxdb influxdb-cli
```

Alternatively, InfluxDB can be manually downloaded here.

Once InfluxDB is installed, you can start it by using this code:
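(On an InfluxDB 2.0 install such as the Homebrew one above, the server ships as the influxd daemon.)

```
$ influxd
```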

The first time you start InfluxDB, it will ask you to set up an account, which can be done using the UI at localhost:8086 or the command line interface (CLI). For a UI setup, you will have to open the localhost URL and provide the information required for the initial setup. If you're using the CLI, you'll need to do it with the InfluxDB client, which can be started in the terminal using the following code:
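(The CLI ships as the influx binary; its interactive setup subcommand walks through the same details as the UI.)

```
$ influx setup
```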

For the initial setup, note the following details:

Username: You can choose any username for the initial user.

Password: You need to create and confirm a password for database access.

Organization name: You need to choose the initial organization name.

Bucket name: An initial bucket name is required, and you can create as many buckets as you want to work with.

Retention period: The time period your bucket will store the data before deleting it. You can choose never or leave it empty for an infinite retention period.

To install InfluxDB on other platforms, refer to the InfluxDB installation documentation.

Once you have installed InfluxDB and completed the setup, you can log in at localhost:8086 to reach the InfluxDB dashboard.

You can take a look through the various modules included in the dashboard, though this article will primarily focus on those through which you can connect to the InfluxDB client. Start with the data module.

Here, you can observe different sections, like Sources, Bucket, Telegraf, Scrapers, and Tokens. To interact with InfluxDB using R, you'll need to check the Buckets and Tokens sections. To connect with the database, you'll need to have a private token (key) generated that is only accessible to you, allowing you to connect to different buckets.

To generate this token, navigate to the Tokens tab. On the right side, you will see a Generate Tokens button. This button has two different sections:

Read/Write Token: This token provides read and write access to different buckets, which can be limited to the scope (to specific buckets) or provided to all the buckets available. With this token, you can only read and write the data in an organization.

All-Access Token: This token provides full access to actions, like reading, writing, updating or deleting each bucket. This would be the recommended token through which you can connect to any bucket available without any explicit configuration and can perform all the needed actions, like read, write, update and delete.

For the purposes of this article, you'll want to generate an All-Access Token. Once the token is generated, you can access it anytime by simply logging into the localhost console.

Now that you have InfluxDB all set up, you can download R and RStudio for writing and testing the code. Installing R is pretty simple. You can download the package here, then open and install it. After the R installation, you can download RStudio, which will be the IDE that you use to write the R code. You can download RStudio here.

At this stage, you have almost all the tools and technologies needed to connect to InfluxDB. As the last step, you need to install the InfluxDB client library for R, which can be downloaded using the following line of code:

```
install.packages("influxdbclient")
```

If you install it on RStudio, other dependencies will be downloaded along with the base library. However, if dependencies are not automatically downloaded, you can separately download them using the following line of code:

```
install.packages(c("httr", "bit64", "nanotime", "plyr"))
```

The next step will be to import the InfluxDB client library in R and create an instance of InfluxDBClient that can be used to interact with the database and perform all sets of operations. Parameters required to make a database connection include the URL of the InfluxDB instance, an authentication token and the organization name.

Since this connection will be made locally, the connection script should look like this:
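A minimal sketch, assuming the local URL from the setup above; the token and organization values are placeholders to replace with your own:

```
library(influxdbclient)

client <- InfluxDBClient$new(
  url   = "http://localhost:8086",  # local InfluxDB instance
  token = "my-token",               # the All-Access Token generated earlier
  org   = "my-org"                  # the organization chosen during setup
)
```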

If you are using a cloud account, make sure the url parameter matches the region your cloud account is located in, rather than using localhost. You can find the URL endpoints in the docs.

Now that you have established a connection to InfluxDB, it's time to use the data to perform different database operations. To understand these operations, let's take a look at some sample data of worldwide COVID-19 cases from January 2020 to April 2020.

This sample data contains several fields, including a Date time stamp and the daily case counts.

To read the data frame in R, you will need to write the following line of code:
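(This assumes the sample data has been saved locally as a CSV file; the file name here is hypothetical.)

```
# Read the COVID-19 sample data into an R data frame
data <- read.csv("covid_data.csv")
```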

Let's start by first inserting this data into InfluxDB. To do so, use the write() method, which accepts parameters like this:

```
client$write(data, bucket, precision, measurementCol,
             tagCols, fieldCols, timeCol, object)
```

Note: The above method is simply a function definition, not part of the code.

This method takes the following parameters: data (the data frame to write), bucket (the name of the target bucket), precision (the timestamp precision), measurementCol (the column holding the measurement name), tagCols and fieldCols (the columns to write as tags and as fields), timeCol (the time-stamp column) and object (an optional name for capturing the generated output for debugging).

To store the COVID-19 data in InfluxDB using the write() method, you will need to make sure that your time-stamp column (Date) is in POSIXct format.

The response from the write() function can be NULL, TRUE, or an error. To debug the write() function and check how the data is being written to the database, you can assign an object, lp, that captures the generated line protocol.
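Putting that together, here is a hedged example of the write call; the bucket name and the measurement, tag and field columns are assumptions about the sample data's layout rather than the tutorial's exact schema:

```
# Make sure the time-stamp column is POSIXct before writing
data$Date <- as.POSIXct(data$Date, tz = "UTC")

client$write(
  data,
  bucket         = "covid",        # assumed bucket name
  precision      = "s",
  measurementCol = "measurement",  # assumes a column naming the measurement
  tagCols        = c("Country"),   # hypothetical tag column
  fieldCols      = c("cases"),     # hypothetical field column
  timeCol        = "Date"
)
```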

Now that you have your time-stamped data stored in the database, let's try reading it back. For querying the data using the R client, the read() function is used, which expects a Flux query. For querying, you can make use of the same client that you created for writing the data, or you can create a new InfluxDB client and do the same.
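A hedged sketch of such a query; recent versions of influxdb-client-r expose this as the query() method, which is used here, and the bucket name and time range are assumptions:

```
result <- client$query('
  from(bucket: "covid")
    |> range(start: 2020-01-01T00:00:00Z, stop: 2020-05-01T00:00:00Z)
    |> drop(columns: ["_start", "_stop"])
')
```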

Let's break down the above query. Starting with the keyword from, you'll need to first specify the bucket name, followed by the range of time from which you want to select the data, and finally, a set of conditions. In the above query, the condition specifies not to include the start and stop columns from the database.

The result contains a list of data frames for each entry made in the database for the specified period. To check an instance of it, you can use the following code:
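(Each element of the result list is a data frame, so indexing picks one out.)

```
# Inspect the first data frame returned by the query
head(result[[1]])
```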

Now that you have queried the data, let's make use of this data for forecasting purposes. Here, you will be training a time-series model on the data retrieved and will try to predict the next five days' cases. Let's create a dataframe from the results that you have after querying:
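One way to do that, assuming every returned frame shares the same columns:

```
# Combine the list of data frames returned by the query into one data frame
df <- do.call(rbind, result)
```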

Once the dataframe is created, there are some changes that will be required to apply the time-series model on it. Typically, this stage is data preprocessing.

After preprocessing, it's time to create a time-series representation of our data. This can be done using the following code:
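A sketch of that step, assuming the preprocessed frame has a numeric cases column ordered by date, with one observation per day:

```
# Daily series starting in January 2020; frequency = 365 treats each
# day as one observation within a yearly cycle
cases_ts <- ts(df$cases, start = c(2020, 1), frequency = 365)
```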

Finally, let's fit the data into the forecasting model and make the predictions for the next five days:
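A hedged sketch using the forecast package; auto.arima() is one reasonable model choice here, not necessarily the one the original tutorial used:

```
library(forecast)

fit <- auto.arima(cases_ts)  # fit an ARIMA model to the daily series
forecast(fit, h = 5)         # predict the next five days of cases
```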

That is how data can be accessed and used for time-series forecasting, which is just one practical use case for the time-stamp data. The whole implementation can be found here.

For more information and best practices for optimizing the performance of InfluxDB, refer to the docs.

After reading this article, you now know how to set up InfluxDB on your system, as well as how to create a client and write and read data for your time-series use case using the R language. One major advantage of InfluxDB is that it comes with support for almost all major programming languages.

There are several options for storing time-series data, but time-series databases, like InfluxDB, can do so more quickly and at a higher scale. Several use cases, such as IoT applications, automated cars or real-time application analysis, need data insertion from as little as tens of thousands to as many as hundreds of thousands of entries at a time. Time-series databases perform this task at very high speed and in real time, allowing them to be easily adopted by any developer working on a real-time time-series application. Be sure to consider deploying InfluxDB to use these great features in your own applications.

The New Stack is a wholly owned subsidiary of Insight Partners, an investor in the following companies mentioned in this article: Real.

Featured image via Pixabay.


The internet runs on free open-source software. Who pays to fix it? – MIT Technology Review


For something so important, you might expect that the worlds biggest tech firms and governments would have contracted hundreds of highly paid experts to quickly patch the flaw.

The truth is different: Log4J, which has long been a critical piece of core internet infrastructure, was founded as a volunteer project and is still run largely for free, even though many million- and billion-dollar companies rely on it and profit from it every single day. Yazici and his team are trying to fix it for next to nothing.

This strange situation is routine in the world of open-source software, programs that allow anyone to inspect, modify, and use their code. It's a decades-old idea that has become critical to the functioning of the internet. When it goes right, open-source is a collaborative triumph. When it goes wrong, it's a far-reaching danger.

Open-source runs the internet and, by extension, the economy, says Filippo Valsorda, a developer who works on open-source projects at Google. And yet, he explains, it is extremely common even for core infrastructure projects to have a small team of maintainers, or even a single maintainer that is not paid to work on that project.

The team is working around the clock, Yazici told me by email when I first reached out to him. And my 6 a.m. to 4 a.m. (no, there is no typo in time) shift has just ended.

In the middle of his long days, Yazici took time to point a finger at critics, tweeting that Log4j maintainers "have been working sleeplessly on mitigation measures; fixes, docs, CVE, replies to inquiries, etc. Yet nothing is stopping people to bash us, for work we aren't paid for, for a feature we all dislike yet needed to keep due to backward compatibility concerns."

Before the Log4J vulnerability made this obscure but ubiquitous software into headline news, project lead Ralph Goers had a grand total of three minor sponsors backing his work. Goers, who works on Log4J on top of a full-time job, is in charge of fixing the flawed code and extinguishing the fire that's causing millions of dollars in damage. It's an enormous workload for a spare-time pursuit.

The underfunding of open-source software is a systemic risk to the United States, to critical infrastructure, to banking, to finance, says Chris Wysopal, chief technology officer at the security firm Veracode. The open-source ecosystem is up there in importance to critical infrastructure with Linux, Windows, and the fundamental internet protocols. These are the top systemic risks to the internet.


NAB to ‘innersource’ some of its business platforms – iTnews

NAB is set to innersource some of its key business platforms, after successfully applying the model to the development and maintenance of more internally-focused code libraries and tools.

Innersource is an increasingly popular set of software engineering practices that are used to create an open source-like culture inside of an organisation.

NAB has been innersourcing code for about two-and-a-half years, with the model forming part of a broader set of practices known as the NAB engineering foundation or NEF, which is designed to help development teams get code into the cloud and into production faster.

Another big four proponent of innersource is CBA, which revealed its own work to iTnews earlier this year.

NAB engineering manager Matt Cobby told the Innersource Summit 2021 last month that the bank adopted innersource initially to remove duplication of coding effort and costs as different teams worked to make their products cloud-ready.

We migrated Australia's first highly confidential banking workload into public cloud in 2016, and we enabled teams to move rapidly and take control of their own outcomes, but this led to duplication of tooling and there was a need to reduce the cost per workload to scale faster, Cobby said.

It was in this situation that I found myself looking for a tool to automate AWS Credentials setup, and I found 20 different versions of the same tool across Github. Some were supported and some weren't, and some were fully functioning and others less than perfect.

I felt that this was definitely one of these places where we were not being as efficient as we could be, and I felt that the techniques of open source development could help us improve.

Cobby said that the bank also wanted to reduce that cost of experimentation in order to help teams develop new ideas and test out new business solutions quickly and efficiently.

That led NAB to adopt innersource, which it defines as the sharing of knowledge, skills and code across teams in NAB using open-source collaboration techniques.

By creating formal ways to share the work of different teams and to collaborate on further development, the bank hoped to remove undifferentiated heavy lifting of multiple teams reinventing code libraries and tools, and in doing so, refocus the efforts of teams to reach a business outcome much faster.

Innersource setup

With hundreds of development teams across the bank that each had their own way of working, the bank focused its innersourcing efforts on the interfaces between teams.

This allowed us to create a safe environment for engineers to talk to other engineers across the bank: to reach out, to understand their codebases, and to share their work, Cobby said.

NAB said its adoption of innersource had to balance the needs of the bank in terms of architectural endorsement, security endorsement, ownership, accountability and auditability, with the needs of an open source community to be creative.

To do so, it appointed community champions that act as on-the-ground evangelists.

They make sure that their peers know about innersource, Cobby said.

They're running community showcases for new products where they do peer review on the products, they check if the product meets certain criteria such as do we know what problem it solves, that it isn't the duplication of an existing product, that it meets our minimum standards and that it has a strong ownership. It's at this point as well that security and architecture both have a voice and can endorse or query any individual product.

Where we have multiple solutions to the same problem, we'll build a small community around that problem and we'll work with all the interested parties to reduce that duplication and come up with a better solution for everyone.

On the other side of the model, Cobby said a strong culture of product ownership has been established.

This is where we make sure that each product within innersource has a distinct product owner, he said.

The owner's responsibilities are around making sure that the product meets the minimum standards, that it has a workflow, that there's somebody there to read and evaluate the pull requests, and to make sure that these pull requests meet certain SLAs.

They're there to provide technical support for the products and to take questions from people when they're asking about contributions to the products.

We also provide these product owners with a playbook in order to help them innersource their own platforms and their own products.

Products that are to be innersourced are classified as either curated or community.

The purpose of this is to show that when consuming teams are looking at what they can use [from elsewhere in the bank], they have the confidence that the code they are using is endorsed and has production-level support - but we don't set the bar so high that the community projects can't get started, Cobby said.

Typically, a curated product is proven in production. We know that it's gone through all our normal existing operational processes, that it's running in production with customer workloads, that it's been security tested, that it encapsulates many years of learning and experience across the organisation, and that there's often significant investment behind it.

This means that there are very few curated products, but they are very high quality.

On the community side we embrace our open source origins and this is more of an incubator for new ideas. We make sure there's a very low barrier to entry.

We tend to use a more open-source style support model where it's often by best endeavours, and the typical products we see in this space are around tooling or individual pipeline components which are used in the delivery of applications.

While maintaining a light touch, Cobby said there are some minimum standards that all code repositories have to meet to create a safe space for teams to reach out and work on other teams repositories in a clear and consistent way.

We make sure that every innersource repository and every innersource product has a README [file] that makes it very clear what the product is doing and what problem it solves, Cobby said.

We make sure the CODEOWNERS [file] is maintained and up-to-date so that external developers know who to talk to when they have a question.

There's a contributing guide so that when you want to make a change there's a very clear path for you to do so, and a code of conduct to make sure that you know the acceptable behaviours for the team [that created the product or tool].

Benefits so far

NAB said that code quality, collaboration and learning opportunities had all increased under innersource.

When we write code in the open, we tend to write better code, Cobby said.

We're improving discoverability and the ease of finding the source of truth for a piece of information, and we're reusing intellectual property across the different domains.

Cobby said that the openness made it easier to understand why certain architectural decisions were made.

We peer review each other's work and our discussions are in the open, so that we can always find out why a certain architectural decision was made or why this decision was made not to use a particular technology, he said.

Teams can also move faster by making changes to existing code libraries directly, where required.

When we have the ability to read another team's repository, we have the ability to remove bottlenecks, Cobby said.

If you're dependent upon a team and they can't implement your change, you have the ability to make the change yourself and get it accepted into the core product.

We're then also breaking down the silos of the organisation and helping learnings from one area be applied into different areas.

The bank saw some unanticipated benefits around mentorship, cross-skilling and learning.

We found through some of the innersource hackathons that we ran that we had senior engineers mentoring junior engineers, we found frontend developers learning how to be API developers, we've had backend business service operators learning how to be frontend React developers, Cobby said.

This is one of the real unexpected benefits from innersource and is something which is giving us probably far more return on investment than we ever expected.

So one of the main benefits for us is this cross-skilling of people across the organisation.

Expansion opportunities

Still, Cobby indicated there are opportunities for the bank to strengthen its innersource adoption as well as to broaden its use.

We've been looking at how we can innersource our business platforms, he said.

With some of the benefits that we've seen before about decoupling teams and removing the blockers from coupled backlogs, there's a real business potential here for understanding how we can ease delivery through the organisation and across multiple platforms.

He also saw further opportunities to automate some of the metrics NAB used around innersource; the bank is working with Github on this particular area of improvement.

We're looking at the number of people collaborating across teams, we're looking at things such as product reuse, he said.

We have automation that scans Github for dependency management and tells us how many reuses of an individual library we're seeing. We can then quantify that library reuse into financial terms in terms of how much it cost to develop, and how many times it's been reused.

We're also using the metrics for operational health of the innersource products, because it's very important to check that products don't end up in some wasteland. We're using some of these metrics in reporting to find products which need some help, or that need an owner, and then we step in and get them some help.

He continued: We've just scratched the surface in terms of what we can do [here].

I believe that the source code of an organisation is often an untapped source of intelligence, and there's a lot of information there we could look at to help us understand what are the flows of information across the organisation.


Why The IAB Tech Lab Still Hasn't Taken On The Administrator Role For Unified ID 2.0 – AdExchanger

Nothing in ad tech is ever easy.

Despite previously signaling interest in serving as an administrator for Unified ID 2.0 earlier this year, the IAB Tech Lab is still on the fence about taking on this role for the open-source initiative to replace third-party cookies with email-based IDs.

During a Tech Lab board meeting last Thursday, a vote on the matter was tabled for further discussion.

Tech Lab assuming the administrator role for Unified ID 2.0 is being actively explored, but no decision has been made, a spokesperson told AdExchanger. We will provide an update when we have something to share with the industry.

So, what's the holdup?

One issue has to do with the fact that the Tech Lab doesn't feel comfortable taking on the role of admin as currently defined in The Trade Desk's technical specs for Unified ID 2.0, according to someone with knowledge of the matter who asked to remain anonymous.

The administrator's main job is to be in charge of a centralized database of sorts and manage access to the UID2 partner ecosystem. That means distributing encryption keys to UID2 operators, distributing decryption keys to compliant members, sending UID2 opt-out requests to operators and DSPs and auditing participants for compliance. The administrator must also shut off bad actors that abuse the ID.

It's that last bit the IAB Tech Lab board isn't comfortable with, as in pulling the plug if a partner violates UID2's code of conduct.

The Trade Desk, however, is pushing for an industry entity to take on the responsibility of controlling the kill switch for UID2.

Hence, the impasse.

Back in February, though, the IAB Tech Lab seemed close to sealing the deal. In a blog post, Jordan Mitchell, then the IAB Tech Lab's SVP of privacy, identity and data (he left in April), noted that the Tech Lab is well suited to serve the technical role of UID2 admin and manage the open-source software powering UID2 in collaboration with other industry players.

But the devil clearly lives in the details on this one.

Beyond the current structure of the role, the Tech Lab is also concerned about policing the use of UID2 in jurisdictions with strict privacy laws, such as Europe under the General Data Protection Regulation (GDPR).

If the IAB Tech Lab does eventually take on a modified version of the admin role, Europe, and potentially other jurisdictions such as Brazil and India, will likely be carved out, at least to start.

GDPR violations carry hefty fines, and no one wants to be the one left holding a bag full of potential liability.

But all that said, the IAB Tech Lab board appears willing to move forward as an admin as long as it's not on the hook for shutting down the baddies.

And although the admin role is still TBD, the Tech Lab has already taken on a few other functions related to the initiative, including hosting the open source code repositories for UID2 on GitHub.

The next step will be to set up a follow-up board meeting dedicated to the topic of UID2, likely sometime in the new year. That meeting will include a vote.

One of the reasons a vote didn't happen at last week's board meeting is that the UID2 item appeared rather far down a long agenda, and by the time it was addressed, a lot of members had already left.

The IAB Tech Lab's board, chaired by Neal Richter, Amazon DSP's director of advertising science, is made up of nearly 40 product and business leaders across a broad range of advertising and media companies, including Google, CafeMedia, Facebook, TikTok, LiveRamp, News Corp, ViacomCBS, Criteo, The Trade Desk, PubMatic and Neustar.

Command Prompt in Windows 11 to be replaced by Windows Terminal as the default experience – Ghacks Technology News

Windows Terminal was unveiled in 2019 and, after a year in the preview phase, was released as an open source tool in 2020. Microsoft has announced that the Command Prompt in Windows 11 will be replaced by Windows Terminal.

The Redmond-based company has been making changes to its operating system, replacing legacy components with modern ones. The most notable change is, of course, Control Panel, which has slowly but surely been superseded by the Settings app. Notepad recently got an overhaul, a much-needed one in my opinion. So it's not surprising that Microsoft wants to shift away from CMD to a modern equivalent with richer options.

The move toward making Windows Terminal the default command line tool will begin with the Windows Insider Program. That makes sense: feedback from users will be crucial, and testing will probably cover the use-case scenarios where CMD is normally used.

The announcement made by the company, first spotted by The Verge, states that Microsoft will enforce the change for all Windows 11 users in 2022.

While Windows Terminal will primarily be useful for programmers, its functions are not necessarily limited to developers. All commands that are supported in Command Prompt are also supported in Windows Terminal. So, if you're familiar with the legacy tool, you'll feel at home with its replacement. In addition, the tool also supports PowerShell, Azure Cloud Shell, and Windows Subsystem for Linux (WSL), meaning it is quite versatile.
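
That versatility comes from Windows Terminal's profile system, where each shell is an entry in the tool's settings.json. The fragment below is a hand-written illustration rather than a full default configuration; the GUID is a placeholder, and the WSL distribution name is just an example.

```json
{
    "defaultProfile": "{00000000-0000-0000-0000-000000000000}",
    "profiles": {
        "list": [
            { "name": "Command Prompt", "commandline": "cmd.exe" },
            { "name": "Windows PowerShell", "commandline": "powershell.exe" },
            { "name": "Ubuntu (WSL)", "commandline": "wsl.exe -d Ubuntu" }
        ]
    }
}
```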

Interface-wise, Windows Terminal has significant advantages. It supports tabs and panes: you can work in multiple tabs or panes and switch between them as easily as in a web browser. The command line shell also lets you rename tabs, duplicate them, set a color for a tab's title bar, and so on. The application does more: you can customize its appearance and color schemes for a more personalized experience. I wish File Explorer supported these features.

Windows Terminal has a GPU-accelerated text rendering engine, and the command line shell includes Unicode and UTF-8 character support, along with HTML, RTF, and plain text formatting. The tool can be used with special characters and emojis. Keyboard shortcuts are always nice to have, too.

Because it is open source, anyone can contribute to the source code and track issues on GitHub. The utility is available in the Microsoft Store, which means it will get updates and new features faster than if it were patched via Windows Update. And it is compatible with Windows 10.

Will CMD be removed from Windows 11?

The announcement's statement that Windows Terminal will be the default experience seems to suggest that Command Prompt will continue to exist alongside PowerShell. It just won't be the recommended option anymore. Maybe Microsoft will nag you to use Windows Terminal, like it does with Edge. If you don't get the reference, you may want to read this article for context.

It's a little sad to wave goodbye to CMD; I'll miss it. Have you used Windows Terminal?

ShiftLeft Expands Attackability Detection Coverage to JavaScript and TypeScript – StreetInsider.com

The new feature release adds the most popular programming language to the scanning arsenal of NG-SAST and I-SCA users, empowering shift-left security practices by analyzing full attack data paths and prioritizing attackable vulnerabilities.

SANTA CLARA, Calif.--(BUSINESS WIRE)--ShiftLeft, Inc., an innovator in automated application security testing, today announced that its Intelligent-SCA product has added scanning and attackability analysis for JavaScript (JS) and TypeScript (TS) to the ShiftLeft CORE platform. JavaScript is the most widely used programming language and is also a frequent attack target for cybercriminals seeking to exploit vulnerabilities in open source code and the software supply chain.

Development teams using JavaScript frequently add functionality to their code by quickly writing new code, borrowing it from open source registries like npm, or reusing existing libraries and code modules from GitHub. Because JavaScript is a dynamic language and something of a Swiss Army knife, working on both the front end and the server side, developers often move quickly to write quick fixes or hacks that create longer-term vulnerabilities. Equally challenging, open source JavaScript libraries frequently contain vulnerabilities that create unknown risk for the application. When the introduced risks are serious, it can take months of remediation work to identify and address all the risk ramifications.
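
The scale of that problem is easy to check first-hand, since known advisories for npm packages are published in open databases. The sketch below queries the public OSV.dev API for a specific package version; it is a plain lookup of already-reported vulnerabilities, not ShiftLeft's attackability analysis, and the package and version are only examples.

```python
import requests

def npm_advisories(package: str, version: str) -> list[str]:
    """Return IDs of known advisories affecting a specific npm package
    version, via the public OSV.dev vulnerability database."""
    resp = requests.post(
        "https://api.osv.dev/v1/query",
        json={"package": {"name": package, "ecosystem": "npm"},
              "version": version},
    )
    resp.raise_for_status()
    return [vuln["id"] for vuln in resp.json().get("vulns", [])]

# Example: list advisories for an old lodash release.
print(npm_advisories("lodash", "4.17.15"))
```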

By adding JavaScript coverage, ShiftLeft dramatically expanded the ability of Application Security (AppSec) teams to shift security left by providing detailed and accurate guidance to development teams on which vulnerabilities in web applications and JavaScript-driven frameworks can be proven to result in damaging attacks. "With the addition of JavaScript coverage, ShiftLeft is one of the most comprehensive solutions in the marketplace and allows us to test all our web application code before we ever go into production," says Adam Fletcher, Chief Security Officer at Blackstone. "This means we see security flaws sooner and can focus our efforts on the most attackable vulnerabilities, letting us safely ship code faster."

"By adding JavaScript coverage, ShiftLeft can dramatically expand the percentage of application code covered with attackability insights," says Alok Shukla, VP Products, ShiftLeft. "As the most popular language, playing a critical role in the global web and application infrastructure, JavaScript security will become even more important as the pace and severity of attacks on applications and the open source supply chain (much of which is written in JavaScript) increase over the course of 2022."

The addition of JS/TS coverage further cements ShiftLeft as the most comprehensive and authoritative provider of Application Security testing and attackability analysis on the market today. Application security teams and developers using ShiftLeft are able to close more security gaps at a faster pace and spend more time focusing on the issues that matter thanks to the unique ability of ShiftLeft to spotlight attackable vulnerabilities and clearly identify low-risk theoretical vulnerabilities.

About ShiftLeft

ShiftLeft enables software developers and application security teams to radically reduce the attackability of their applications by providing near-instantaneous security feedback on software code during every pull request. By analyzing application context and data flows in near real-time with industry-leading accuracy, ShiftLeft empowers developers and AppSec teams to find and fix the most serious vulnerabilities faster. Using its unique graph database that combines code attributes and analyzes actual attack paths based on real application architecture, ShiftLeft's platform scans for attack context and pathways typical of modern applications, across APIs, OSS, internal microservices, and first-party business logic code, and then provides detailed guidance on risk remediation within existing development workflows and tooling. ShiftLeft CORE, a unified code security platform, combines the company's flagship NextGen Static Analysis (NG SAST), Intelligent Software Composition Analysis (SCA), and contextual security training through ShiftLeft Educate to provide developers and application security teams the fastest, most accurate, most relevant, and easiest to use automated application security and code analysis platform.
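
The attackability idea at the heart of that description can be pictured as a reachability question over a graph of the application: a flaw matters most when attacker-controlled input can actually flow to it. The breadth-first sketch below is a toy rendering of that concept, not ShiftLeft's engine; the graph and all node names are invented.

```python
from collections import deque

# Invented data-flow graph: each code element points to the elements
# its data can reach.
FLOW_GRAPH = {
    "http_request": ["parse_params"],
    "parse_params": ["build_query", "render_page"],
    "build_query": ["sql_exec"],   # reaches a vulnerable sink
    "render_page": [],
    "sql_exec": [],
}

def is_attackable(source: str, sink: str, graph: dict) -> bool:
    """Return True if data from an attacker-facing source can reach the
    vulnerable sink: reachability as a stand-in for 'attackability'."""
    seen, queue = {source}, deque([source])
    while queue:
        node = queue.popleft()
        if node == sink:
            return True
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

print(is_attackable("http_request", "sql_exec", FLOW_GRAPH))  # True
```

A vulnerability that no such path reaches would be the kind of low-risk, theoretical finding the company says its platform deprioritizes.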

Backed by Bain Capital Ventures, Mayfield, Thomvest Ventures, and SineWave Ventures, ShiftLeft is based in Santa Clara, CA. To learn how ShiftLeft keeps AppSec in sync with the rapid pace of DevOps, see https://www.shiftleft.io/.
