GitHub Vulnerability Allows Hackers to Hijack Thousands of Popular Open-Source Packages - CPO Magazine
GitHub's Octoverse report finds 97% of apps use open source software - VentureBeat
Microsoft sued for open-source piracy through GitHub Copilot - BleepingComputer
Welcome! Let's do some open source!
Contributing to open source for the first time can be scary and a little overwhelming. Perhaps you're a Code Newbie, or maybe you've been coding for a while but haven't found a project you felt comfortable contributing to.
If you have never contributed to an open source project before and youre just getting started, consider exploring these resources.
We asked folks on Twitter what they felt when they made their first contribution to an open source project.
Some had great experiences, and some had bad ones. The purpose of first-timers-only is to help everyone have an empowering and welcoming first experience as they enter the world of Open Source Software (OSS)!
If you are an OSS project owner, then consider marking a few open issues with the label first-timers-only. The first-timers-only label explicitly announces:
I'm willing to hold your hand so you can make your first PR. This issue is a bit easier than normal. And anyone who's already contributed to open source isn't allowed to touch this one!
First-timer contributions are normally very small and easy (one recent first-timers-only issue was literally three lines of simple changes! And the changes were described in great detail and tested by the project maintainer). This makes it easier for the contributor to get the hang of the contribution process rather than the contribution itself. Remember, this isn't as much about getting your project features implemented quickly as it is about helping first timers.
Why is YAL (yet another label) like first-timers-only important? Because it makes a statement that first timers are welcome, that they are valued, and that they can start contributing to your project! Often the hard part of getting into open source for the first time isn't the implementation of a feature, but figuring out how to actually contribute code such that the pull request is accepted! But, oh, the feeling of accomplishment when your first PR is merged!
Go label an issue or two with first-timers-only and advertise that those issues exist! Walk a newbie a week (or a month) through the process! Document the process, blog and tweet about it and encourage those first timers to do the same! And add this badge to your repo's README:
Markdown snippet:
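(The snippet itself did not survive extraction. A typical first-timers-only badge, as published on firsttimersonly.com, looks like the following; treat the exact URLs as an approximation rather than the original listing.)

```markdown
<!-- badge image and link as commonly published by firsttimersonly.com; verify before use -->
[![first-timers-only](https://img.shields.io/badge/first--timers--only-friendly-blue.svg)](https://www.firsttimersonly.com/)
```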
Kent C. Dodds proposed First Timers Only to get new people to make their first contribution. Scott Hanselman blogged about Bringing Kindness Back to Open Source, so it was obvious that we team up and promote these ideas and get more folks involved in open source.
James Spencer created a great Twitter account called @yourfirstpr that exists to showcase great issues that a newbie can solve in order to create Your First Pull Request! We recommend you follow @yourfirstpr and let them know if your OSS project has a first-timers-only tag and you have open issues that you'll reserve for a new contributor!
Utkarsh Upadhyay created a bot called @first_tmrs_only, which tweets when a new first-timers-only issue is posted on GitHub. Follow it to stay abreast of the latest first-timers-only issues!
Angie Gonzalez and Arlene Perez created a GitHub app called First Timers that automates most of the process of creating first-timers-only issues. Install the app on your repositories and commit simple changes to branches with names starting with first-timers-; the First Timers App will turn them into fully fledged issues with all the information a first-time Open Source contributor will need to make their first pull request.
We believe - and we hope you do too - that learning how to code, how to think, and how to contribute to open source can empower the next generation of coders and creators. We VALUE first time contributors and we want them to know that everyone started somewhere! Start here!
Go here to see the original:
First Timers Only - Get involved in Open Source and commit code to your ...
This is a list of free and open-source software packages: computer software licensed under free software licenses and open-source licenses. Software that fits the Free Software Definition may be more appropriately called free software; the GNU project in particular objects to its works being referred to as open-source.[1] For more information about the philosophical background of open-source software, see free software movement and Open Source Initiative. However, nearly all software meeting the Free Software Definition also meets the Open Source Definition, and vice versa. A small fraction of the software that meets either definition is listed here. Some of the open-source applications are also the basis of commercial products, shown in the List of commercial open-source applications and services.
Be advised that available distributions of these systems can contain, or offer to build and install, added software that is neither free software nor open-source.
Read more:
List of free and open-source software packages - Wikipedia
We're releasing Triton 1.0, an open-source Python-like programming language which enables researchers with no CUDA experience to write highly efficient GPU code, most of the time on par with what an expert would be able to produce. Triton makes it possible to reach peak hardware performance with relatively little effort; for example, it can be used to write FP16 matrix multiplication kernels that match the performance of cuBLAS (something that many GPU programmers can't do) in under 25 lines of code. Our researchers have already used it to produce kernels that are up to 2x more efficient than equivalent Torch implementations, and we're excited to work with the community to make GPU programming more accessible to everyone.
Novel research ideas in the field of Deep Learning are generally implemented using a combination of native framework operators. While convenient, this approach often requires the creation (and/or movement) of many temporary tensors, which can hurt the performance of neural networks at scale. These issues can be mitigated by writing specialized GPU kernels, but doing so can be surprisingly difficult due to the many intricacies of GPU programming. And although a variety of systems have recently emerged to make this process easier, we have found them to be either too verbose, too inflexible, or noticeably slower than our hand-tuned baselines. This has led us to extend and improve Triton, a recent language and compiler whose original creator now works at OpenAI.
The architecture of modern GPUs can be roughly divided into three major components (DRAM, SRAM, and ALUs), each of which must be considered when optimizing CUDA code:
Basic architecture of a GPU.
Reasoning about all these factors can be challenging, even for seasoned CUDA programmers with many years of experience. The purpose of Triton is to fully automate these optimizations, so that developers can better focus on the high-level logic of their parallel code. Triton aims to be broadly applicable, and therefore does not automatically schedule work across SMs, leaving some important algorithmic considerations (e.g., tiling, inter-SM synchronization) to the discretion of developers.
Compiler optimizations in CUDA vs Triton.
Out of all the Domain Specific Languages and JIT-compilers available, Triton is perhaps most similar to Numba: kernels are defined as decorated Python functions and launched concurrently with different program_ids on a grid of so-called instances. However, as shown in the code snippet below, the resemblance stops there: Triton exposes intra-instance parallelism via operations on blocks (small arrays whose dimensions are powers of two) rather than a Single Instruction, Multiple Thread (SIMT) execution model. In doing so, Triton effectively abstracts away all the issues related to concurrency within CUDA thread blocks (e.g., memory coalescing, shared memory synchronization/conflicts, tensor core scheduling).
Vector addition in Triton.
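The code listing itself did not survive extraction. As a minimal sketch in the style of the public Triton tutorials (kernel and helper names here are illustrative, not the original listing):

```python
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each kernel instance handles one BLOCK_SIZE-wide block of elements.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements  # guard the final, partial block
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

# Launch one instance per block over a 1D grid.
x = torch.rand(98432, device='cuda')
y = torch.rand(98432, device='cuda')
out = torch.empty_like(x)
grid = (triton.cdiv(x.numel(), 1024),)
add_kernel[grid](x, y, out, x.numel(), BLOCK_SIZE=1024)
```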
While this may not be particularly helpful for embarrassingly parallel (i.e., element-wise) computations, it can greatly simplify the development of more complex GPU programs.
Consider, for example, the case of a fused softmax kernel (below) in which each instance normalizes a different row of the given input tensor $X \in \mathbb{R}^{M \times N}$. Standard CUDA implementations of this parallelization strategy can be challenging to write, requiring explicit synchronization between threads as they concurrently reduce the same row of $X$. Most of this complexity goes away with Triton, where each kernel instance loads the row of interest and normalizes it sequentially using NumPy-like primitives.
Fused softmax in Triton.
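Again, the original listing is missing; a sketch modeled on the Triton tutorials (names illustrative) conveys the idea of loading a whole row and normalizing it with NumPy-like block operations:

```python
@triton.jit
def softmax_kernel(x_ptr, y_ptr, row_stride, n_cols, BLOCK_SIZE: tl.constexpr):
    # One kernel instance normalizes one row, kept entirely on-chip.
    row = tl.program_id(axis=0)
    cols = tl.arange(0, BLOCK_SIZE)  # BLOCK_SIZE: next power of two >= n_cols
    mask = cols < n_cols
    x = tl.load(x_ptr + row * row_stride + cols, mask=mask, other=-float('inf'))
    x = x - tl.max(x, axis=0)        # subtract the row max for numerical stability
    num = tl.exp(x)
    y = num / tl.sum(num, axis=0)
    tl.store(y_ptr + row * row_stride + cols, y, mask=mask)
```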
Note that the Triton JIT treats X and Y as pointers rather than tensors; we felt that retaining low-level control of memory accesses was important for addressing more complex data structures (e.g., block-sparse tensors).
Importantly, this particular implementation of softmax keeps the rows of $X$ in SRAM throughout the entire normalization process, which maximizes data reuse when applicable (for rows of up to roughly 32K columns). This differs from PyTorch's internal CUDA code, whose use of temporary memory makes it more general but significantly slower (below). The bottom line here is not that Triton is inherently better, but that it simplifies the development of specialized kernels that can be much faster than those found in general-purpose libraries.
A100 performance of fused softmax for M=4096.
The lower performance of the Torch (v1.9) JIT highlights the difficulty of automatic CUDA code generation from sequences of high-level tensor operations.
Fused softmax with the Torch JIT.
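The Torch listing did not survive either; the baseline is, in essence, a scripted chain of elementary tensor operations along these lines (a sketch, not the original code):

```python
import torch

@torch.jit.script
def naive_softmax(x):
    # Each op below materializes a full temporary tensor in DRAM,
    # which is exactly the traffic the fused Triton kernel avoids.
    x_max = x.max(dim=1)[0]
    z = x - x_max[:, None]
    numerator = torch.exp(z)
    denominator = numerator.sum(dim=1)
    return numerator / denominator[:, None]
```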
Being able to write fused kernels for element-wise operations and reductions is important, but not sufficient given the prominence of matrix multiplication tasks in neural networks. As it turns out, Triton also works very well for those, achieving peak performance with just ~25 lines of Python code. On the other hand, implementing something similar in CUDA would take a lot more effort and would likely achieve lower performance.
Matrix multiplication in Triton.
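The matrix multiplication listing is likewise missing; this condensed sketch (based on the public Triton matmul tutorial, with boundary masking and instance grouping omitted, so it assumes M, N, and K are multiples of the block sizes) shows the structure:

```python
@triton.jit
def matmul_kernel(a_ptr, b_ptr, c_ptr, M, N, K,
                  stride_am, stride_ak, stride_bk, stride_bn,
                  stride_cm, stride_cn,
                  BLOCK_M: tl.constexpr, BLOCK_N: tl.constexpr,
                  BLOCK_K: tl.constexpr):
    # Each instance computes one BLOCK_M x BLOCK_N tile of C = A @ B.
    pid_m = tl.program_id(axis=0)
    pid_n = tl.program_id(axis=1)
    rm = pid_m * BLOCK_M + tl.arange(0, BLOCK_M)
    rn = pid_n * BLOCK_N + tl.arange(0, BLOCK_N)
    rk = tl.arange(0, BLOCK_K)
    a_ptrs = a_ptr + rm[:, None] * stride_am + rk[None, :] * stride_ak
    b_ptrs = b_ptr + rk[:, None] * stride_bk + rn[None, :] * stride_bn
    acc = tl.zeros((BLOCK_M, BLOCK_N), dtype=tl.float32)
    for _ in range(0, K, BLOCK_K):
        # tl.dot maps onto tensor cores; the compiler stages the operand
        # tiles through shared memory automatically.
        acc += tl.dot(tl.load(a_ptrs), tl.load(b_ptrs))
        a_ptrs += BLOCK_K * stride_ak
        b_ptrs += BLOCK_K * stride_bk
    c_ptrs = c_ptr + rm[:, None] * stride_cm + rn[None, :] * stride_cn
    tl.store(c_ptrs, acc)
```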
One important advantage of handwritten matrix multiplication kernels is that they can be customized as desired to accommodate fused transformations of their inputs (e.g., slicing) and outputs (e.g., Leaky ReLU). Without a system like Triton, non-trivial modifications of matrix multiplication kernels would be out of reach for developers without exceptional GPU programming expertise.
V100 tensor-core performance of matrix multiplication with appropriately tuned values for BLOCK_M, BLOCK_N, BLOCK_K, and GROUP_M.
The good performance of Triton comes from a modular system architecture centered around Triton-IR, an LLVM-based intermediate representation in which multi-dimensional blocks of values are first-class citizens.
High-level architecture of Triton.
The @triton.jit decorator works by walking the Abstract Syntax Tree (AST) of the provided Python function so as to generate Triton-IR on-the-fly using a common SSA construction algorithm. The resulting IR code is then simplified, optimized and automatically parallelized by our compiler backend, before being converted into high-quality LLVM-IRand eventually PTXfor execution on recent NVIDIA GPUs. CPUs and AMD GPUs are not supported at the moment, but we welcome community contributions aimed at addressing this limitation.
We have found that the use of blocked program representations via Triton-IR allows our compiler to automatically perform a wide variety of important program optimizations. For example, data can be automatically stashed to shared memory by looking at the operands of computationally intensive block-level operations (e.g., tl.dot), and allocated/synchronized using standard liveness analysis techniques.
The Triton compiler allocates shared memory by analyzing the live range of block variables used in computationally intensive operations.
On the other hand, Triton programs can be efficiently and automatically parallelized both (1) across SMs by executing different kernel instances concurrently, and (2) within SMs by analyzing the iteration space of each block-level operation and partitioning it adequately across different SIMD units, as shown below.
[Figure: element-wise and FP16 matrix multiplication workloads, shown vectorized within an SM and tensorized across the GPU.]
Automatic parallelization in Triton. Each block-level operation defines a blocked iteration space that is automatically parallelized to make use of the resources available on a Streaming Multiprocessor (SM).
We intend for Triton to become a community-driven project. Feel free to fork our repository on GitHub!
If you're interested in joining our team and working on Triton & GPU kernels, we're hiring!
Go here to see the original:
Introducing Triton: Open-Source GPU Programming for Neural Networks
For a long time, open source software held the earlier label of "free software." The free software movement was formally established by Richard Stallman in 1983 through the GNU Project. The free software movement organized itself around the idea of user freedoms: freedom to see the source code, to modify it, to redistribute it: to make it available and to work for the user in whatever way the user needed it to work.
Free software exists as a counterpart to proprietary or "closed source" software. Closed source software is highly guarded. Only the owners of the source code have the legal right to access that code. Closed source code cannot be legally altered or copied, and the user pays only to use the software as it is intended; they cannot modify it for new uses nor share it with their communities.
The name "free software," however, has caused a lot of confusion. Free software does not necessarily mean free to own, just free to use how you might want to use it. "Free as in freedom, not as in beer" the community has tried to explain. Christine Peterson, who coined the term "open source," tried to address this problem by replacing free software with open source: "The problem with the main earlier label, free software, was not its political connotations, but thatto newcomersits seeming focus on price is distracting. A term was needed that focuses on the key issue of source code and that does not immediately confuse those new to the concept."
Peterson proposed the idea of replacing "free software" with the term "open source" to a working group that was dedicated, in part, to shepherding open source software practices into the broader marketplace. This group wanted the world to know that software was better when it was sharedwhen it was collaborative, open, and modifiable. That it could be put to new and better uses, was more flexible, cheaper, and could have better longevity without vendor lock-in.
Eric Raymond was one of the members of this working group, and in 1997 he published some of these same arguments in his wildly influential essay "The Cathedral and the Bazaar". In 1998, partly in response to that essay, Netscape Communications Corporation open sourced their Mozilla project, releasing the source code as free software. In its open source form, that code later became the foundation for Mozilla Firefox and Thunderbird.
Netscape's endorsement of open source software placed added pressure on the community to think about how to emphasize the practical business aspects of the free software movement. And so the split between open source and free software was cemented: "open source" would serve as the term championing the methodological, production, and business aspects of free software. "Free software" would remain as a label for the conversations that emphasized the philosophical aspects of these same issues as they were anchored in the concept of user freedoms.
By early 1998, the Open Source Initiative (OSI) was founded, formalizing the term open source and establishing a common, industry-wide definition. Though the open source movement was still met with wariness and corporate suspicion from the late 1990s into the early 2000s, it has steadily moved from the margins of software production to become the industry standard that it is today.
See original here:
What is open source? - Red Hat
Free/open-source software (the source-availability model used by free and open-source software, or FOSS) and closed source are two approaches to the distribution of software.
Under the closed-source model, source code is not released to the public. Closed-source software is maintained by a team that produces its product in a compiled, executable state, which is all the market is allowed access to. Microsoft, the owner and developer of Windows and Microsoft Office, along with other major software companies, has long been a proponent of this business model, although in August 2010 Microsoft interoperability general manager Jean Paoli said Microsoft "loves open source" and that its anti-open-source position was a mistake.[1]
The FOSS model allows capable users to view and modify a product's source code, though most such code is not in the public domain. Common advantages cited by proponents of this structure are expressed in terms of trust, acceptance, teamwork, and quality.[2]
A non-free license is used to limit what free software movement advocates consider to be the essential freedoms. A license, whether providing open-source code or not, that does not stipulate the "four software freedoms"[3] is not considered "free" by the free software movement. A closed source license is one that limits only the availability of the source code. By contrast, a copyleft license claims to protect the "four software freedoms" by explicitly granting them and then explicitly prohibiting anyone from redistributing the package or reusing the code in it to make derivative works without including the same licensing clauses. Some licenses grant the four software freedoms but allow redistributors to remove them if they wish. Such licenses are sometimes called permissive software licenses.[4] An example of such a license is the FreeBSD License, which allows derivative software to be distributed as non-free or closed source, as long as it gives credit to the original designers.
A misconception that is often made by both proponents and detractors of FOSS is that it cannot be capitalized.[5] FOSS can and has been commercialized by companies such as Red Hat, Canonical, Mozilla, Google, IBM, Novell, Sun/Oracle, VMware and others.[6]
The primary business model for closed-source software involves the use of constraints on what can be done with the software and the restriction of access to the original source code.[6] This can result in a form of imposed artificial scarcity on a product that is otherwise very easy to copy and redistribute. The result is that an end-user is not actually purchasing software, but purchasing the right to use the software. To this end, the source code to closed-source software is considered a trade secret by its manufacturers.
FOSS methods, on the other hand, typically do not limit the use of software in this fashion. Instead, the revenue model is based mainly on support services. Red Hat Inc. and Canonical Ltd. are two such companies: they give their software away freely but charge for support services. The source code of the software is usually given away, and pre-compiled binary software frequently accompanies it for convenience. As a result, the source code can be freely modified. However, there can be some license-based restrictions on redistributing the software. Generally, software can be modified and redistributed for free, as long as credit is given to the original manufacturer of the software. In addition, FOSS can generally be sold commercially, as long as the source code is provided. There are a wide variety of free software licenses that define how a program can be used, modified, and sold commercially (see GPL, LGPL, and BSD-type licenses). FOSS may also be funded through donations.
A software philosophy that combines aspects of FOSS and proprietary software is open core software, or commercial open source software. Despite having received criticism from some proponents of FOSS,[7] it has exhibited marginal success. Examples of open core software include MySQL and VirtualBox. The MINIX operating system used to follow this business model, but came under the full terms of the BSD license after the year 2000.
This model has proved somewhat successful, as witnessed in the Linux community. There are numerous Linux distributions available, but a great many of them are simply modified versions of some previous version. For example, Fedora Linux, Mandriva Linux, and PCLinuxOS are all derivatives of an earlier product, Red Hat Linux. In fact, Red Hat Enterprise Linux is itself a derivative of Fedora Linux. This is an example of one vendor creating a product, allowing a third-party to modify the software, and then creating a tertiary product based on the modified version. All of the products listed above are currently produced by software service companies.
Operating systems built on the Linux kernel are available for a wider range of processor architectures than Microsoft Windows, including PowerPC and SPARC. None of these can match the sheer popularity of the x86 architecture; nevertheless, they do have significant numbers of users. Windows remains unavailable for these alternative architectures, although there have been such ports of it in the past.
The most obvious complaint against FOSS revolves around the fact that making money through some traditional methods, such as the sale of the use of individual copies and patent royalty payments, is much more difficult and sometimes impractical with FOSS. Moreover, FOSS has been considered damaging to the commercial software market, evidenced in documents released as part of the Microsoft Halloween documents leak.[8][9][10]
The cost of making a copy of a software program is essentially zero, so per-use fees are perhaps unreasonable for open-source software. At one time, open-source software development was almost entirely volunteer-driven, and although this is still true for many small projects, many alternative funding streams have since been identified and employed for FOSS.
Increasingly, FOSS is developed by commercial organizations. In 2004, Andrew Morton noted that 37,000 of the 38,000 recent patches in the Linux kernel were created by developers directly paid to develop the Linux kernel. Many projects, such as the X Window System and Apache, have had commercial development as a primary source of improvements since their inception. This trend has accelerated over time.[citation needed]
There are some[who?] who counter that the commercialization of FOSS is a poorly devised business model because commercial FOSS companies answer to parties with opposing agendas. On one hand, commercial FOSS companies answer to volunteer developers, who are difficult to keep on a schedule; on the other hand, they answer to shareholders, who expect a return on their investment. Because FOSS development is often not on a fixed schedule, this can adversely affect a commercial FOSS company's ability to release software on time.[11]
Gary Hamel counters this claim by saying that quantifying who or what is innovative is impossible.[12]
The implementation of compatible FOSS replacements for proprietary software is encouraged by the Free Software Foundation to make it possible for their users to use FOSS instead of proprietary software; for example, they have listed GNU Octave, an API-compatible replacement for MATLAB, as one of their high-priority projects. In the past, this list contained free binary-compatible Java and CLI implementations, like GNU Classpath and DotGNU. Thus, even "derivative" developments are important in the opinion of many people from FOSS. However, there is no quantitative analysis of whether FOSS is less innovative than proprietary software, since there are derivative/re-implementing proprietary developments, too.
Some of the largest well-known FOSS projects are either legacy code (e.g., FreeBSD or Apache) developed a long time ago independently of the free software movement, or were started by companies like Netscape (which open-sourced its code in the hope of competing better) or MySQL (which uses FOSS to lure customers for its more expensive licensed product). However, it is notable that most of these projects have seen major or even complete rewrites (in the case of the Mozilla and Apache 2 code, for example) and do not contain much of the original code.
Innovations have come, and continue to come, from the open-source world.
An analysis of the code of the FreeBSD, Linux, Solaris, and Windows operating system kernels looked for differences between code developed using open-source properties (the first two kernels) and proprietary code (the other two kernels). The study collected metrics in the areas of file organization, code structure, code style, the use of the C preprocessor, and data organization. The aggregate results indicate that, across various areas and many different metrics, the four systems developed using open- and closed-source development processes score comparably.[16] That study is disputed by a study conducted by Coverity, Inc., which found open-source code to be of better quality.[17]
A study of seventeen open-source and closed-source software packages showed that the number of vulnerabilities in a piece of software is not affected by the source-availability model that it uses. The study used a very simple metric: comparing the number of vulnerabilities between the open-source and closed-source software.[18] Another study was conducted by a group of professors at Northern Kentucky University on fourteen open-source web applications written in PHP. The study measured the vulnerability density in the web applications and showed that some of them had increased vulnerability density, but some of them also had decreased vulnerability density.[19]
In its 2008 Annual Report, Microsoft stated that FOSS business models challenge its license-based software model and that the firms who use these business models do not bear the cost for their software development[clarification needed]. The company also stated in the report:[20][21]
Some of these [open source software] firms may build upon Microsoft ideas that we provide to them free or at low royalties in connection with our interoperability initiatives. To the extent open source software gains increasing market acceptance, our sales, revenue and operating margins may decline. Open source software vendors are devoting considerable efforts to developing software that mimics the features and functionality of our products, in some cases on the basis of technical specifications for Microsoft technologies that we make available. In response to competition, we are developing versions of our products with basic functionality that are sold at lower prices than the standard versions.
There are numerous business models for open source companies which can be found in the literature.[6]
Originally posted here:
Comparison of open-source and closed-source software
Oracle officially announced the general availability of Java 19 on Sept. 20, marking the second release of the widely used open source programming language in 2022.
Java 19 follows Java 18 by six months and continues to provide new capabilities that aim to make the programming language easier for developers to use, while providing more features.
Java 19 is an incremental release and will only be supported for six months. As part of Java's rapid release cycle, features are grouped into larger projects, each defining a target capability that is delivered through individual features detailed as JDK Enhancement Proposals (JEPs).
The JEPs included in Java 19 help advance three key projects, Georges Saab, senior vice president of development, Java Platform at Oracle and chair of the OpenJDK Governing Board, explained to ITPro Today.
"One is for Project Loon, which is about scalability; the second is Project Amber, which is about evolution of the Java language itself and syntax; and final one is Project Panama, which is about interoperability with other languages," Saab said.
Inside of the Project Amber grouping, Java 19 benefits from a pair of enhancements that are now in preview.
The first is a Record Patterns capability, defined in JEP 405, which extends pattern matching to express more sophisticated, composable data queries. The second, JEP 427, provides Pattern Matching for Switch, which enhances pattern matching for switch expressions and statements.
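The article shows no code for either preview; a hedged sketch of what the two features enable together (the Point record and describe method are invented for illustration; compile and run with --enable-preview on JDK 19) might look like:

```java
record Point(int x, int y) {}

public class Patterns {
    // JEP 405 (record patterns): deconstruct a record directly in a pattern.
    // JEP 427 (pattern matching for switch): patterns and `when` guards in switch.
    static String describe(Object obj) {
        return switch (obj) {
            case Point(int x, int y) when x == y -> "diagonal point";
            case Point(int x, int y)             -> "point (" + x + ", " + y + ")";
            case Integer i                       -> "integer " + i;
            case null                            -> "nothing";
            default                              -> "something else";
        };
    }

    public static void main(String[] args) {
        System.out.println(describe(new Point(2, 2))); // diagonal point
    }
}
```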
As part of Project Panama, Java has been expanded in recent years to better support functions that are normally outside of Java. For example, Java 15, which was released in September 2020, introduced JEP 383 as a new API for Foreign Memory Access. In Java 19, there is a further extension of foreign memory with JEP 424.
"Project Panama is an overarching project to improve the connections between Java and non-Java APIs," Saab said. "If we believe there will always be incremental improvements that can be made that will help developers using non-Java APIs, we will continue to innovate in those areas."
Specific to what's new in Java 19 with JEP 424, one key change in this release is more control over allocation and deallocation of foreign memory via the "MemorySession" API, he said. Also, there are improvements around the foreign function API.
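As a rough illustration of that session-scoped control (a sketch against the Java 19 preview API, which requires --enable-preview; later JDKs renamed these classes):

```java
import java.lang.foreign.MemorySegment;
import java.lang.foreign.MemorySession;
import java.lang.foreign.ValueLayout;

public class OffHeap {
    public static void main(String[] args) {
        // JEP 424 (preview): off-heap memory with deterministic deallocation.
        // Everything allocated in the session is freed when the session closes.
        try (MemorySession session = MemorySession.openConfined()) {
            MemorySegment ints = MemorySegment.allocateNative(
                    ValueLayout.JAVA_INT.byteSize() * 100, session);
            for (int i = 0; i < 100; i++) {
                ints.setAtIndex(ValueLayout.JAVA_INT, i, i * i);
            }
            System.out.println(ints.getAtIndex(ValueLayout.JAVA_INT, 7)); // 49
        } // native memory released here
    }
}
```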
Also new to Java 19 for Project Panama is JEP 426, which helps improve performance with an API to express vector computations.
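A brief sketch of what the incubating Vector API looks like (JEP 426; run with --add-modules jdk.incubator.vector; the scale method is invented for illustration):

```java
import jdk.incubator.vector.FloatVector;
import jdk.incubator.vector.VectorSpecies;

public class VectorScale {
    static final VectorSpecies<Float> SPECIES = FloatVector.SPECIES_PREFERRED;

    // Multiply every element of `a` by `factor`, using SIMD lanes where possible.
    static void scale(float[] a, float[] out, float factor) {
        int i = 0;
        int bound = SPECIES.loopBound(a.length);
        for (; i < bound; i += SPECIES.length()) {
            FloatVector v = FloatVector.fromArray(SPECIES, a, i);
            v.mul(factor).intoArray(out, i);
        }
        for (; i < a.length; i++) {
            out[i] = a[i] * factor; // scalar tail for the remainder
        }
    }
}
```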
Java performance will likely also benefit from the Project Loom effort for virtual threads that has made its way into Java 19.
"Virtual threads are lightweight threads that dramatically reduce the effort of writing, maintaining, and observing high-throughput concurrent applications," JEP 425 states.
Java 19 will be the final release of Java in 2022. The next incremental release, Java 20, is currently scheduled for March 2023. Java 21, set for release in September 2023, will be a Long Term Support (LTS) update that will be supported for five years. The current Java LTS release is Java 17, which became generally available in September 2021 and will be supported until at least 2026.
Read more from the original source:
Java 19 Brings New Patterns to Open Source Programming Language