Splendid Sunsets on the Marina: USC ISI’s Class of 2022 is Graduating with Big Dreams and Fond Memories – USC Viterbi | School of Engineering – USC…

ISI class of 2022

USC's Information Sciences Institute (ISI) has an impressive Class of 2022, featuring undergraduate, graduate, and doctoral students from all over the world. These students, nearly all graduating from the USC Viterbi School of Engineering, had the opportunity to work and conduct research at USC ISI, the university's storied crown-jewel research institute.

ISI's 2022 graduating class features students from Wuhan, China; Karachi, Pakistan; Novo Hamburgo, Brazil; and Binh Duong, Vietnam, as well as California, Japan, South Korea, and India, to name a few.

For one of the graduating students, Thamme Gowda, this achievement is particularly meaningful: he is the first engineer from his southern Indian village, near Bangalore. With his brand-new Ph.D. in computer science, he even wonders if he is also the first doctor from his hometown ("not the medical type, though!" he admitted jokingly). When asked about the most impactful project he worked on at ISI, Gowda responded that he made a concrete impact: he expanded machine translation support to rare languages.

"While most competitors (including Google and Microsoft Translate) currently support only about 100 languages," said Gowda, "I have created higher-quality translation models for up to 500 languages."

Making an impact

The impactful projects were abundant for this year's graduates. Jonathan Nguyen, who worked with supervisor David Barnhart and will be graduating with an M.S. in astronautical engineering, stated: "I was leading a team developing sensors that determine relative attitude and distance for satellite docking. We were able to secure a flight opportunity to the International Space Station for testing in microgravity." He also has quite the anecdote: "There was a time when I used my packed jacket on a stick to poke at our test platform to simulate the impulsive thrust of a spacecraft to guide it for docking, and it worked." After graduation, Jonathan would like to work in spacecraft propulsion, hypersonics, or astrodynamics.

Sami Abu-El-Haija, who will graduate with a Ph.D. in computer science, spent much of his time at ISI working on the initialization of deep graph neural networks. His goal was to make the training process of graph neural networks significantly faster without affecting their performance. The next step for Sami? He has already accepted, and started, a role as a research scientist at Google Research.

Others are also going to big tech companies, like Yuzi He, graduating with a Ph.D. in physics, who will join Meta as a research scientist upon graduation. Haoda Wang, with his bachelor's in computer engineering and computer science, also has big dreams: "I worked at NASA-JPL for a bit, working with the flight software onboard Mars 2020. Building software for spacecraft like that would be pretty nice." Hopefully, they will notice him through his out-of-the-box ideas: "I wrote a blog post that analyzed whether a LEGO rocket could really fly, and it somehow got featured on Ars Technica," recalled Wang.

Seungmin Lee, graduating with an M.S. in computer science, believes his most impactful project at ISI was "working on how to leverage multi-layer storage and how data communication evolves with batch size." Rehan Ahmed, who got his master's in applied data science, spent his time at ISI detecting potential sources of vulnerabilities in open source code and figuring out a way to fix them. Minh Pham, Ph.D. in computer science, focused on automating the process of understanding, processing, and cleaning tabular data.

ISI has inspired many students to explore different areas of research, and for Shen Yan, Ph.D. in computer science, her work at ISI even prompted her thesis. "I worked on an IARPA project named Tracking Individual Performance with Wearable Sensors (TILES) when I first joined ISI. TILES is a project focused on the analysis of stress, task performance, behavior, and other factors pertaining to professionals engaged in a high-stress work environment. We design machine learning models to estimate human behaviors from sensory data. I learned a lot from the project and decided to make it my dissertation research."

Sunset dreams and innovations

She also has plans to change how we communicate: "I would love to invent a tool or app that can help mimic more real, supportive human-to-human interactions. Even though we have phones, video calls, messages, and many social platforms, remote connections still cannot provide sufficient companionship. For family and friends that cannot meet in person, we need a tool to provide them with better connections and mental support."

For Yiwen Ma, M.S. in healthcare computer science, the sky is the limit when it comes to inventing: "I would love to create a time machine to allow one to travel through time and space, which bridges the distance and provides us more time to spend with family and friends."

Other fond memories had little to do with research. Many students remembered the beautiful views from ISI's Marina Del Rey office building and its breathtaking sunsets on the harbor. Matheus Schmitz, an ISI student who will be graduating with an M.S. in applied data science after working on a model to identify anti-vaccination users on Twitter, recalled his first time in the office seeing ISI's view of the marina. Likewise, Akira Matsui, who is graduating with a Ph.D. in computer science, will have a hard time letting go of the splendid sunsets on the beautiful marina he got to witness while working on machine learning and human forecasting to predict geopolitical events. He shares the best advice he received during his years at ISI: "do your homework and be positive."

Sunset in Marina Del Rey from the ISI building.

Erin Szeto, graduating with an M.S. in applied data science, also has some solid words of advice for anybody who would like to work in this field: "Your first round of code will never be perfect, and you will always be rewriting and improving your code. Talk to the rubber duck!" But the wisest words have to be those spoken to Jae Young Kim, who got his master's in applied data science: "Focus more on the big picture: not the trees, but the forest."

Congratulations to the Class of 2022, and thank you to our featured ISI students Thamme Gowda, Matheus Schmitz, Akira Matsui, Seungmin Lee, Erin Szeto, Jae Young Kim, Rehan Ahmed, Haoda Wang, Shen Yan, Yuzi He, Jonathan Nguyen, Minh Pham, Yiwen Ma, and Sami Abu-El-Haija. Fight On!

Published on May 9th, 2022

Last updated on May 9th, 2022


GROMACS 2022 Advances Open Source Drug Discovery with oneAPI – HPCwire

May 6, 2022: Intel is committed to fostering an open ecosystem, including technical contributions to many open source projects that are making direct real-world impacts. One example is GROMACS, a molecular dynamics package designed for simulations of proteins, lipids, and nucleic acids used to design new pharmaceuticals. The recently released GROMACS 2022, developed using SYCL and oneAPI, exhibits strong performance running on multiple architectures, including Intel Xe architecture-based GPUs.

"GROMACS is one of the world's most widely used open source molecular dynamics applications, and it's easy to see why. The simulations we can conduct with the application grant us better understanding of things as small as the proteins in our bodies and as large as the galaxies in the universe. Most notably, our work with GROMACS, developed and optimized with oneAPI, allows Intel to have a hand in significant advances in drug discovery and expands GROMACS' open development across multiple compute architectures. And this is all while collaborating with the open source community that we so greatly value," said Roland Schulz, parallel software engineer at Intel.

Why It Matters

GROMACS molecular dynamics simulations, which are powered by oneAPI, contribute to the identification of crucial pharmaceutical solutions for conditions like breast cancer, COVID-19, and Type 2 diabetes, along with projects such as the international distributed computing initiative Folding@home. In modern drug discovery, molecular dynamics simulations are applied widely and successfully. These simulations provide researchers with the structural information on biomacromolecules needed to understand the structure-function relationship that guides the drug discovery and design process. The application of computational tools like GROMACS to drug discovery helps researchers design and evaluate new drugs more efficiently while conserving resources.

The GROMACS research and development team at Stockholm University and KTH Royal Institute of Technology, directed by biophysics professor Erik Lindahl, leads development of the GROMACS molecular dynamics toolkit, one of the world's most widely used HPC applications. Molecular dynamics is among the most time-consuming HPC applications because it is a highly iterative, compute-focused problem. With computation happening billions of times, millions of lines of code are involved.

How It Works

oneAPI, an open and unified programming model for CPUs and accelerators, supports multiple vendors' architectures, which helped Lindahl and his team expand GROMACS' support for heterogeneous hardware. This is due to improved productivity from cross-architecture and cross-vendor open standards. Based on these standards, oneAPI programming simplifies software development and delivers performance for accelerated computing without proprietary programming languages or vendor lock-in, while allowing integration of existing code, including OpenMP.

As part of the oneAPI optimization work, Lindahl's team ported GROMACS' CUDA code, which runs only on Nvidia hardware, to SYCL using the Intel DPC++ Compatibility Tool (part of the Intel oneAPI Base Toolkit), which typically automates 90-95% of the code migration.[1,2] This allowed the team to create a new, single, portable codebase that is cross-architecture-ready, greatly streamlining development and providing flexibility for deployment in multiarchitecture environments.

"With GROMACS 2022's full support of SYCL and oneAPI, we extended GROMACS to run on new classes of hardware. We're already running production simulations on current Intel Xe architecture-based GPUs as well as the upcoming Intel Xe architecture-based GPU development platform, Ponte Vecchio, via the Intel DevCloud. Performance results at this stage are impressive, a testament to the power of Intel hardware and software working together. Overall, these optimizations enable diversity in hardware, provide high-end performance, and drive competition and innovation so that we can do science faster and lower costs downstream," Lindahl said.

GROMACS' accelerated compute was made possible through optimizations using Intel oneAPI cross-architecture tools such as the oneAPI DPC++/C++ Compiler, oneAPI libraries, and HPC analysis and cluster tools. The oneAPI tools are available in the Intel DevCloud, a free environment for developing and testing code across a variety of Intel architectures (CPU, GPU, FPGA).

Notes

[1] The team ported GROMACS' Nvidia CUDA code to Data Parallel C++ (DPC++), which is a SYCL implementation for oneAPI, in order to create new cross-architecture-ready code.

[2] Intel estimates as of September 2021, based on measurements on a set of 70 HPC benchmarks and samples, with examples like Rodinia, SHOC, and PENNANT. Results may vary.

About GROMACS

GROMACS is a versatile package for performing molecular dynamics, using Newtonian equations of motion, for systems with hundreds to millions of particles. GROMACS is primarily designed for biochemical molecules like proteins, lipids and nucleic acids that have a multitude of complicated bonded interactions. But, since GROMACS is extremely fast at calculating the non-bonded interactions typically dominating simulations, many researchers use it for research on non-biological systems, such as polymers.

About oneAPI

oneAPI is an open, unified and cross-architecture programming model for CPUs, GPUs, FPGAs and other accelerators. Based on standards, the programming model simplifies software development and delivers uncompromised performance for accelerated computing without proprietary lock-in, while enabling the integration of legacy code.

About Intel's Work with Folding@home

GROMACS is the bedrock of the Folding@home distributed computing project, which aims to help scientists develop new therapeutics for a variety of diseases by simulating protein dynamics. Conducting these challenging molecular dynamics simulations requires strong scaling to successfully simulate atoms during drug discovery research. Intel's ability to support GROMACS, and in turn Folding@home, with advanced software technology tools and code optimizations helps deliver productive, performant heterogeneous programming. This ultimately enables developers and scientists by providing the computing capabilities necessary for strong scaling. While the project has not yet adopted GROMACS 2022, plans are to transition the code so it is cross-architecture-ready in time for upcoming Intel Xe architecture GPUs.

About Intel

Intel (Nasdaq: INTC) is an industry leader, creating world-changing technology that enables global progress and enriches lives. Inspired by Moore's Law, we continuously work to advance the design and manufacturing of semiconductors to help address our customers' greatest challenges. By embedding intelligence in the cloud, network, edge, and every kind of computing device, we unleash the potential of data to transform business and society for the better. To learn more about Intel's innovations, go to newsroom.intel.com and intel.com.

Source: Intel


GOSH launches as the first ever Git blockchain – PR Newswire

Developers will be able to transparently track and verify all the code they build while ensuring malicious code will be noticeable immediately.

KYIV, Ukraine, May 10, 2022 /PRNewswire/ -- Announced at DockerCon, GOSH launched as the first blockchain in history custom-built for git on-chain. GOSH has partnered with Docker to secure the software supply chain with the GOSH Docker extension. GOSH's mission is to offer a comprehensive solution to securing the global software supply chain, which has long been a big problem for businesses, and capturing the value locked in open source projects.

"Storing git on-chain is a no-brainer," said Mitja Goroshevsky, CTO of EverX and GOSH co-founder, "Attacks happen daily, and blockchain is the only technology which is widely used and is incredibly secure. The only problem: it was impossible to store git on-chain, until now. But GOSH isn't just about security, it's about offering developers a better git overall.

"Git management systems available today, apart from not being secure, are also not tailored to open source. The management of the software always involves handing over code to a centralized party, and there has so far been no community management of code. GOSH changes this by allowing developers to turn their git repositories into a DAO and build consensus around your code."

The current software supply chain is vulnerable to security and transparency risks, and containers are particularly susceptible. Because of this, the team behind GOSH is delighted to announce their first partnership. The GOSH Docker extension is a tool to verify that Docker containers built on GOSH remain secure and unchanged. Developers can be sure that the container itself was built only using the components they indicated in their smart contracts.

Using GOSH requires no workflow adjustments from developers; it is still very much git. Only now, developers will be able to transparently track and verify all the code they build, instead of just relying on social metrics such as stars and ratings. Code can be tracked through to distribution, and all the elements of the software are traceable back to the source code, ensuring malicious code will be noticeable immediately.

GOSH is already actively working with Amaze and BitRezus on making sure their supply chains are airtight. "Here at Amaze we have become passionate about NFTs. A cornerstone of a new and exciting technology that promises to create great value to our customers, from creators to entrepreneurs, we now offer them the opportunity to mint and create minted templates for NFTs," said Aaron Day, CEO of Amaze. "The nature of the services we provide means safety is top priority. We need to make sure that when users deal with financial tools their funds aren't in any danger. GOSH technology can guarantee that our code is developed and delivered in a secure way so software is never compromised."

BitRezus CEO Konstantinos Antonakopoulos added: "Astropledge works to prevent cybercrime and securely provide software to satellites using the best technology for the task: the blockchain. Our aim is to protect assets sent to space from the dangers posed by hackers or human error. Adopting GOSH is a natural evolutionary step for us, seeing as it is the only blockchain that secures the services we provide in delivering software to a satellite, securely."

About GOSH

GOSH stands for Git Open Source Hodler. It is a decentralized community Git blockchain, purpose-built for securing the software supply chain. GOSH is the first and only formally verified Git implementation. Built as an advanced, scalable, multithreaded, and multi-sharded blockchain, it allows developers to build a layer of structural-security smart contracts, making it the first platform where the more code you write, the more secure it becomes. It was founded on May 10th, 2022.

SOURCE GOSH


The changing economics of open source – MIT Technology Review

Online systems like SourceForge and later GitHub made it easier to share and collaborate on smaller open-source components. Subsequently, the early and explosive growth of open-source software tested some of those original ideas to the breaking point.

In contrast to the focus on creating alternatives to large software packages in the past, today there's a proliferation of open-source software. On one side, we have internet giants churning out all manners of tools, frameworks, and platforms. On the other side, teams using OneDev, an open-source software development platform, have created small but critical parts that support a huge number of businesses.

The diversity of projects today has challenged many of the initial principles of open source. In many instances, the codebases for open-source packages are simply too large to allow meaningful inspection. Other packages are distributed by internet titans that don't expect anyone else to contribute to them. Still other releases are distinct, targeted releases that may do only one relatively minor task, but do it so well that they've spread across the internet. Yet rather than an active community of maintainers, they often have just one or two committed developers working on a passion project. One can appreciate the challenges this might create by looking at some recent examples of open source's changing economics.

For instance, ElasticSearch changed its licensing terms in 2021, to include requiring cloud service providers who profit off its work to pay it forward by releasing the code for any management tools they build. Those changes caused an outcry in the open-source community. They prompted Amazon Web Services, which had been offering a managed service based on ElasticSearch until the change, to fork the codebase and create a new distribution for its OpenSearch product.

At the other end of the scale, a security snafu in Log4J created what's been dubbed the biggest bug on the internet after a vulnerability was disclosed in December 2021. Log4J is an open-source logging tool widely used across a multitude of systems today. But its popularity didn't mean it was backed by a stellar maintenance team; instead, it was maintained by hobbyists. Here, throwing money at the problem is hardly a solution. We know of many open-source enthusiasts who maintain their software personally while leading busy professional lives; the last thing they want is the responsibility of a service-level agreement because someone paid them for their creation.

So, is this the end of the road for the open-source dream? Certainly, many of the open-source naysayers will view the recent upheavals as proof of a failed approach. They couldn't be more wrong.

What we're seeing today is a direct result of the success of open-source software. That success means there isn't a one-size-fits-all description to define open-source software, nor one economic model for how it can succeed.

For internet giants like Facebook or Netflix, the popularity, or otherwise, of their respective JavaScript library and software tool (React and Chaos Monkey) is beside the point. For such companies, open-source releases are almost a matter of employer branding: a way to show off their engineering chops to potential employees. The likelihood of them altering licensing models to create new revenue streams is small enough that most enterprises need not lose sleep over it. Nonetheless, if these open-source tools form a critical part of your software stack or development process, you might want some form of contingency plan; you're likely to have very little sway over future developments, so understanding your risks helps.

That advice holds true for those pieces of open-source software maintained by commercial entities. In most cases, such companies will want to keep customers happy, but they're also under pressure to deliver returns, so changes in licensing terms cannot be ruled out. Again, to reduce the risk of disruption, you should understand the extent to which you're reliant on that software, and whether alternatives are available.

For companies that have built platforms containing open-source software, the risks are more uncertain. This is in line with Thoughtworks' view that all businesses can benefit from a greater awareness of what software is running in their various systems. In such cases, we advise companies to consider the extent to which they're reliant on that piece of software: are there viable alternatives? In extreme circumstances, could you fork the code and maintain it internally?

Once you start looking at crucial parts of your software stack where you're reliant on hobbyists, your choices begin to dwindle. But if Log4J's case has taught us anything, it's this: auditing what goes into the software that runs your business puts you in a better place than being completely caught by surprise.

This content was produced by Thoughtworks. It was not written by MIT Technology Review's editorial staff.


Tevano Wholly-Owned Subsidiary illuria Security, Inc. Announces the Availability of Its Open Source Tool "manush" – Yahoo Finance

Vancouver, British Columbia--(Newsfile Corp. - April 27, 2022) - Tevano Systems Holdings Inc's (CSE: TEVO) (FSE: 7RB) (OTC Pink: TEVNF) ("Tevano", or the "Company") wholly owned subsidiary illuria Security, Inc. ("illuria") announces the availability of its open source tool "manush", an Oberon 2-based Menu Shell.

illuria has open sourced manush, a customizable menu shell that, though nimble, is intuitive and secure thanks to its implementation language, Oberon, which has a safe runtime. Due to its fast, secure, and intuitive nature, manush will be the default shell in illuria's ProfilerX product line, which allows illuria to make rapid changes to its deployment menus based on customers' needs.

Manush's lead developer Norayr Chilingarian states "We created this not only because we needed it, but because others will need it too! Manush reads the menu configuration file and presents those as a beautifully colored menu to the user. Users can also configure manush to call itself with another configuration file, allowing the user's menus to be with unlimited depth!"

illuria's CEO, Antranig Wartanian states "Most product vendors and open-source projects create their own menu shells from scratch. This seems very redundant work, hence we decided to create a customizable, open-source menu shell that would benefit all systems engineers and operators around the world. While this is the initial version, illuria plans to make major improvements in the coming months as feedback and ideas from the two-way communication with the community guides the roadmap of future releases".

illuria's open sourcing of manush provides a tool for anybody who wants to have their own menu shell. manush is available on illuria's GitHub account: https://github.com/illuria/manush

About Tevano

Tevano Systems Holdings Inc., through its operating subsidiaries, is a technology company with custom and proprietary hardware and software technologies. Its subsidiary, illuria Security, Inc. is an early-stage software development company whose technology involves active cyber deception to protect critical network systems of enterprise systems of all sizes. Using deception technology, illuria's software seeks to solve the challenge of cyber-attacks by detecting threats, systematically deceiving attackers, and actively deterring attacks. Its subsidiary Tevano Systems Inc. is the developer of Health Shield, an AI-driven, electronic tablet that video displays a user with their body temperature and other information. It provides detailed reports of all scans done throughout an enterprise.


For more information, please visit http://www.tevano.com

On behalf of the Board of:

TEVANO SYSTEMS HOLDINGS INC.

David Bajwa, Chief Executive Officer

davidb@tevano.com, 778 388 4806

CAUTIONARY STATEMENT REGARDING FORWARD-LOOKING INFORMATION: This news release contains forward-looking statements and forward-looking information within the meaning of applicable securities laws. These statements relate to future events or future performance. All statements other than statements of historical fact may be forward-looking statements or information. More particularly and without limitation, this news release contains forward-looking statements and matters. The forward-looking statements and information are based on certain key expectations and assumptions made by management of the Company. Although management of the Company believes that the expectations and assumptions on which such forward-looking statements and information are based are reasonable, undue reliance should not be placed on the forward-looking statements and information since no assurance can be given that they will prove to be correct.

Forward-looking statements and information are provided for the purpose of providing information about the current expectations and plans of management of the Company relating to the future. Readers are cautioned that reliance on such statements and information may not be appropriate for other purposes, such as making investment decisions. Since forward-looking statements and information address future events and conditions, by their very nature they involve inherent risks and uncertainties. Actual results could differ materially from those currently anticipated due to several factors and risks. These include, but are not limited to, the Company's ability to raise further capital, the success of the Company's software and product initiatives, and the Company's ability to obtain regulatory and exchange approvals. Accordingly, readers should not place undue reliance on the forward-looking statements and information contained in this news release. Readers are cautioned that the foregoing list of factors is not exhaustive. The forward-looking statements and information contained in this news release are made as of the date hereof and no undertaking is given to update publicly or revise any forward-looking statements or information, whether as a result of new information, future events or otherwise, unless so required by applicable securities laws. The forward-looking statements or information contained in this news release are expressly qualified by this cautionary statement.

Neither the CSE nor the Investment Industry Regulatory Organization of Canada accepts responsibility for the adequacy or accuracy of this release.

To view the source version of this press release, please visit https://www.newsfilecorp.com/release/121855


Heresy: Hare programming language an alternative to C – The Register

On Monday, software developer Drew DeVault announced a systems programming language called Hare, describing it as "simple, stable, and robust." We've all heard that before but there may be something in this.

More than 300 programming languages have existed at one time or another. Hare aims to serve as an alternative to C, arguably the most significant programming language of the past 50 years.

DeVault and about 30 project contributors have been working on Hare for about two and a half years. They've now let their rabbit loose so developers can run with it.

"Hare uses a static type system, manual memory management, and a minimal runtime," explained DeVault in a blog post. "It is well-suited to writing operating systems, system tools, compilers, networking software, and other low-level, high performance tasks."

In an email to The Register, DeVault wrote that Hare draws its main inspirations from C.

"I am not as dissatisfied with C as many other language designers appear to be," observed DeVault. "Hare is a conservative set of improvements over C's basic design ideas, and aims to be what C might have been if it were built with the benefit of hindsight."

DeVault revealed that Hare's standard library incorporates ideas from Google's Go programming language, specifically having enough capabilities built into the standard library ("batteries included," in coding jargon) to avoid the need to import dependencies.

"The idea is to have enough batteries to facilitate many use-cases without causing programmers to reach for dependencies, while still having a manageable scope," he argued. "I think Go does this reasonably well; in fact, some Hare modules were more-or-less straight ports from Go (especially crypto)."

Hare's batteries include: a cryptography suite; networking support; date/time operations; I/O and filesystem abstractions; Unix primitives such as poll, fnmatch, and glob; POSIX extended regular expressions; a parser and type checker; and reference documentation.

Hare does not link to libc, the C standard library, by default. It's based on the qbe compiler backend. Here's what a Hare "Hello, world!" program looks like:
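The listing did not survive in this copy of the article; a minimal Hare hello-world, as shown in the language's own introductory documentation, looks like this (the article's original listing may have differed slightly):

```hare
use fmt;

export fn main() void = {
	// The trailing ! asserts that the write cannot fail,
	// aborting the program if it does.
	fmt::println("Hello world!")!;
};
```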

Hare has been characterized as a stripped-down spin on Zig, which is also a low-level systems language with manual memory management. It's certainly less involved than Rust, another C alternative that has won a significant following over the past few years.

DeVault, however, describes Hare as a way to avoid C's pitfalls.

"I think that many of the languages which aim to compete with C are too far removed from it," he opined. "Hare is a conservative language that aims to distill the lessons learned from the past 30 years into a small, simple, and robust language which can be relied upon for the next 30 years. We're not concerned so much with bold innovations as we are with careful engineering."

Hare currently supports three CPU instruction set architectures (x86_64, Arm's aarch64, and riscv64) and two operating systems (Linux and FreeBSD). According to DeVault, while there's currently no plan to support non-free platforms like macOS or Windows, a third-party implementation or fork could try to make that work.

The language remains a work in progress, as detailed in the roadmap, which is currently focused on stability for a 1.0 release and standard library enhancements like TLS and raw IP sockets support.

"I expect that in the early days much of the development will continue to be focused on the language itself," DeVault noted, "but it is already useful for system tools like command line utilities, daemons like cron, init systems and supervisors, etc."

DeVault explained he is using Hare to write a password manager and a kernel, projects for which the language is well suited. "I think a lot of additional use-cases will open up once we have TLS support as well," he added.

Hare currently relies on the BDFL (benevolent dictator for life) governance model. "The language is designed to stabilize and remain largely unchanging, so much governance is not necessarily called for," DeVault explained, noting that there is a current fundraising effort focused on paying for a cryptography audit.

"Hare is the sum of the efforts of about 30 individuals over the course of two and a half years," said DeVault. "We've worked very hard on it, we are very proud of it, and we hope that you will like it."

Go here to see the original:
Heresy: Hare programming language an alternative to C - The Register

NYC Summer Rising application opens Monday: Here's how to apply – SILive.com

STATEN ISLAND, N.Y. The application for New York City's Summer Rising program, available to all students in elementary and middle school, will open on Monday.

The city launched the Summer Rising program last year amid the coronavirus (COVID-19) pandemic, with an aim to create a bridge back to school in the 2021-2022 school year.

This summer, the city plans to expand the program to 110,000 students who will get the opportunity to engage with peers, caring adults, and their community in a wide range of experiences.

For students in grades K-5, the program will run for seven weeks in July and August. Students in grades 6-8 will be in the program for six weeks.

Additionally, this year's program will offer high-quality program models, Friday sessions, and optional extended hours from the city Department of Youth and Community Development. District 75 students and those with 12-month Individualized Education Programs (IEPs) will receive more inclusive programming.

Students in kindergarten through eighth grade, in both public and private schools, will be able to apply for Summer Rising.

Families are encouraged to apply early to secure a spot at their preferred location. As soon as a family completes the enrollment, they will receive a confirmation email and have a spot in the program. The sooner a family applies, the more likely they are to receive a seat at their preferred location. For best availability, families should sign up by May 22.

Enrollment is quick and easy, and it can be completed from any device with an internet connection or by contacting your school's parent coordinator.

Visit http://www.schools.nyc.gov/summer to apply and to find more information.

This year, families can submit one enrollment per child, so the city Department of Education (DOE) urges families to choose their site carefully.

You can search by zip code, community-based organization, or school name and select any site that displays in the application. The building list and building map share the locations where Summer Rising programs will be held.

The application will only show sites that serve your child's grade level and still have seats available in their program. When a program is full, it will no longer display in the application, according to the DOE. This means there is no need for a waitlist or for families to rank options.

Families should email summer@schools.nyc.gov, call 311, or speak to their parent coordinator for support with Summer Rising enrollment. Families who do not have internet access or cannot access the enrollment portal should reach out to their parent coordinator, who can help them complete the application.

Who can attend Summer Rising?

Students in kindergarten through eighth grade, in both public and private schools, will be able to apply for Summer Rising.

What will students do?

Summer Rising offers academic classes, social-emotional learning and other enrichment opportunities, like arts activities, outdoor recreation and even field trips. Some local field trips include visits to parks, pools and other outdoor venues that are educational.

Academics will be provided by licensed teachers in the morning, and enrichment will be led by community-based organization staff.

Students with disabilities who may require additional support to participate in programming, such as a paraprofessional, will receive those supports as needed.

Students also get breakfast, lunch and a snack.

What is the calendar?

Grades K-5:

July 5-Aug. 19: Monday to Friday from 8 a.m. to 3 p.m., followed by extended-day enrichment until 6 p.m.

Grades 6-8:

July 5-Aug. 12: Monday to Friday from 8 a.m. to 3 p.m., followed by extended-day enrichment until 6 p.m.

Students with 12-month IEPs (District 75):

July 5-Aug. 12: Monday to Friday from 8:10 a.m. to 2:40 p.m. (or a similar 6.5-hour day)

Students with 12-month IEPs (Districts 1-32 Extended School Year):

July 5-Aug. 12: Monday to Friday from 8:10 a.m. to 2:10 p.m. (or a similar 6-hour day)

ASD (Autism Spectrum Disorder) Programs:

July 5-Aug. 1: Monday to Thursday from 8:30 a.m. to 12:30 p.m.

FOLLOW ANNALISE KNUDSON ON FACEBOOK AND TWITTER.

The rest is here:
NYC Summer Rising application opens Monday: Here's how to apply - SILive.com

Reflecting on the 25th Anniversary of ASCI Red and Continuing Themes for Our Heterogenous Future – HPCwire

In the third of a series of guest posts on heterogeneous computing, James Reinders shares experiences surrounding the creation of ASCI Red and ties that system's quadranscentennial anniversary to predictions about the heterogeneous future being ushered in by exaflops machines.

In 1997, ASCI Red appeared on the Top500 as the first teraflops machine in history. It held the top spot for seven lists, a record that remains unbroken decades later. Using thousands of Intel microprocessors, it offered additional evidence that massively parallel machines based on off-the-shelf technology would dominate the supercomputing of the future, a trend that was not universally endorsed in 1997. It was also not hard to find skeptics who claimed we would never need a petaflops of computing power, and many saw teraflops as necessary only for military needs.

Twenty-five years later, exaflops machines offer evidence of trends that will dominate supercomputing of the future. Before I share my predictions of our future, I'll reflect on how ASCI Red came to be.

ASCI Red

In December 1996, while the machine was still at Intel in Oregon and only three-fourths built, ASCI Red ran for the first time above the one-trillion-operations-per-second rate.

The full system featured 1.2 TB of memory and 9,298 processors (200 MHz Intel Pentium Pro processors, later boosted with specially packaged 333 MHz Pentium II Xeon processors) in 104 cabinets. Not including cooling, the system consumed 850 kW of power.

People speak of the ASCI Red supercomputer, operated at Sandia for nine years, with well-deserved reverence. Sandia director Bill Camp said, in 2006, that ASCI Red had the best reliability of any supercomputer ever built.

Why ASCI Red?

The Accelerated Strategic Computing Initiative (ASCI) was a ten-year program designed to move nuclear weapons design and maintenance from a test-based approach (underground explosions) to a simulation-based one (no more underground testing).

By developing reliable computational models for the processes involved over the whole life of nuclear weapons, the U.S. could comfortably live with a Comprehensive Nuclear-Test-Ban Treaty. DOE scientists estimated they needed 100 teraflops by the early 2000s.

Convex, Tombstones, and Execution of Strategies

It was initially believed that building a teraflops machine would require a non-Intel processor. Clearly, the floating-point performance of the Pentium processor was insufficient.

In 1994, I visited Convex Computer Corporation to consider if we should use HP processors. Convex pushed HP designs to their limits, including over-clocking (long before gamers made this popular). On the patio just outside of the Convex cafeteria, there are more than twenty names etched in cement, including Chopp Computer, ETA Systems, and Multiflow. These were all companies that started alongside Convex in supercomputers and failed as businesses.

They explained that these were reminders that you need more than a great strategy and smart people; you have to actually execute successfully. Convex cofounder Bob Paluck was quoted in Bloomberg as saying: "You've got to have a brilliant strategy, and you have to actually execute it. Otherwise, you become a tombstone."

It fits perfectly with the Andy Grove philosophy drilled into us at Intel: only the paranoid survive.

Convex survived and was eventually acquired by HP. While we didn't select HP parts, I never forgot that Convex graveyard.

Krazy Glew on comp.arch

Intel was the first company to have hardware (the 8087, in 1980) supporting the (then draft) IEEE FP standard. The Intel i860 used a VLIW design to power the #1 supercomputer in 1994, but x86 floating-point remained disappointing for HPC. As a frequent reader of comp.arch on Usenet, I was intrigued when Andy "Krazy" Glew from Intel's P6 team wrote "Don't count Intel out on floating-point" in response to a flame about Intel floating point being noncompetitive. Andy and I hit it off.

I became the first champion on the architecture study team for using the P6 design. The interest became much greater when the first P6 parts (now called the Intel Pentium Pro) came back, and it became apparent we could have 200 MHz parts under 40 watts, including an on-package L2 cache. The power efficiency, compute density, and cost quickly made it the obvious choice for the entire architecture team.

Comet Shoemaker-Levy 9 and C++

For the most part, C++ had no following in HPC. C++ was not an ANSI or ISO standard (that came in 1998). A notable exception was a group at Sandia, the destination for ASCI Red.

The discovery of Comet Shoemaker-Levy 9, and the realization that it was likely to collide with Jupiter, caused great excitement: a never-before-seen opportunity to observe two significant Solar System bodies collide.

Astronomers and astrophysicists, with scant data to guide them, did not believe the effects of the collision would be visible from Earth. Sandia researchers, experts on high-energy impacts, offered a different perspective. Computational simulations by Dave Crawford and Mark Boslough at Sandia, using C++ on an Intel Paragon supercomputer (#1 on the Top500 list at the time), predicted a visible plume rising above the rim of Jupiter. This public disagreement was carried by the media, notably CNN. In the end, the close correspondence between the predicted plume and the one astronomers actually observed lent even more confidence to the accuracy of the Sandia simulation codes. What an awesome validation!

In a recent book, Impactful Times: Memories of 60 Years of Shock Wave Research at Sandia National Laboratories, J. Michael McGlaun of Sandia related: "We decided to write [c.1990] PCTH[1] in C++ rather than FORTRAN. We hoped to eliminate some coding errors using C++ features. The result was a working version of PCTH that demonstrated excellent parallel speedup and showed that we could eliminate many software defects in a carefully written C++ program."

Sandia and comets helped fuel the interest that set the stage for C++ to become a serious language on ASCI Red, alongside the dominant FORTRAN and lesser-used C.

Trends for the Future

In retrospect, most trends that would expand over the next twenty-five years were quite evident when you looked at what the needs were in 1997 and what results were coming out of groundbreaking work.

Those trends became even more evident during the life of ASCI Red. The spectacular comet simulations with C++ code were strong evidence of future directions (I can't imagine writing an adaptive mesh code in Fortran, no matter how much I love Fortran). While only defense uses were willing to pay for a teraflops machine, there were plenty of hints that this would change, including dual-use[2] work at Sandia. The insatiable appetite for performance drove the importance of standardizing message passing (MPI), and then the fattening of nodes with more and more computation at the node level, which in turn fueled the need for node-level standards (e.g., OpenMP). Security has also grown in importance as the scope of usage has expanded dramatically. Arguably, the least predictable trend was the giant leap in AI usefulness thanks to deep learning algorithms.

In brief, nine notable changes that went from small to big in the past twenty-five years are:

What would our next list look like twenty-five years from now, after exaflops systems appear? We should already know that the following nine are in our future:

Unlike ASCI Red, our heterogeneous future will be multivendor and multiarchitecture, because competition is only growing in this new golden age of computer architecture.

Additionally, diversity in hardware means that performance portability will be critical to the future. When systems were CPU-only, performance portability came about because each generation of CPUs sought to be uniformly better than the CPUs that came before. Every CPU tried to be general purpose. In a heterogeneous world, where specialization is needed for lower power and higher densities, non-CPU compute devices are no longer trying to be general purpose. Any rush to standardize in order to lock in the architectures of today will only serve to undermine the credibility of such a standard.

These nine trends demand we support more variety in hardware and applications, while making it more approachable, faster, and better.

And, unlike in 1997, we need to do it in far more than just Fortran (formerly known as FORTRAN).

No code is sounding better and better all the time. Dream on.

[1] PCTH stands for Parallel CTH; CTH stands for CSQ to the Three Halves; CSQ stands for CHARTD Squared; CHARTD stands for Coupled Hydrodynamics And Radiation Transport Diffusion. Learn more about CTH at https://www.sandia.gov/cth/.

[2] Dual-use technologies refer to technologies with both military utility and commercial potential.

About the Author

James Reinders believes the full benefits of the evolution to full heterogeneous computing will be best realized with an open, multivendor, multiarchitecture approach. Reinders rejoined Intel a year ago, specifically because he believes Intel can meaningfully help realize this open future. Reinders is an author (or co-author and/or editor) of ten technical books related to parallel programming; his latest book is about SYCL (it can be freely downloaded here).

Other articles in this series

Solving Heterogeneous Programming Challenges with SYCL

Why SYCL: Elephants in the SYCL Room

Read more here:
Reflecting on the 25th Anniversary of ASCI Red and Continuing Themes for Our Heterogenous Future - HPCwire

Istio Applies to Join CNCF: Why Now? The New Stack – thenewstack.io

The Istio Steering Committee's decision to offer the service mesh project as an incubating project with the Cloud Native Computing Foundation (CNCF) raises the question: why has it taken so long?

The move follows concerns by IBM (one of the project's original creators, along with Google and car-sharing provider Lyft) and other community members over the project's governance, specifically Google's advocacy of the creation of the Open Usage Commons (OUC) for the project in 2020. However, "the context has changed today," an Istio steering committee member noted on GitHub.

The Istio steering committee implied this week that the timing is right. The move is intended to help deepen Istio's integration with Kubernetes through the Gateway API and gRPC with proxyless mesh, not to mention Envoy, which has grown up beside Istio, according to an Istio statement released on GitHub by Istio steering committee member Craig Box, who leads the Cloud Native advocacy team at Google Cloud. "We think it's time to unite the premier Cloud Native stack under a single umbrella," the statement reads.

However, Istio's application to join the CNCF followed criticism in 2020 over Google's creation of the Open Usage Commons (OUC) license for Istio and Google's ownership of the associated trademarks. IBM deemed the OUC licensing scheme disappointing because it "doesn't live up to the community's expectation for open governance," Jason McGee, then general manager and CTO of IBM Cloud Platform, wrote in a blog post in 2020.

"An open governance process is the underpinning of many successful projects. Without this vendor-neutral approach to project governance, there will be friction within the community of Kubernetes-related projects. At the project's inception, there was an agreement that the project would be contributed to the CNCF when it was mature," McGee wrote. "IBM continues to believe that the best way to manage key open source projects such as Istio is with true open governance, under the auspices of a reputable organization with a level playing field for all contributors, transparency for users, and vendor-neutral management of the license and trademarks. Google should reconsider their original commitment and bring Istio to the CNCF."

"Relinquishment of the trademarks by Google was required in order for the Istio project to achieve its long-term objectives," Todd Moore, vice president of open technology at IBM, told The New Stack in an emailed response.

"Long ago, IBM realized the power of communities that are openly governed, and projects that are secured in neutral homes are the ones to gain momentum and spawn markets. While the Istio project governance made great strides, the project was not destined to reach the broad adoption that would be secured by a long-term neutral home," Moore said. "Single-vendor control over the trademark and licensing is a deterrent to broad adoption, as end users and industry players are aware of the pitfalls."

Meanwhile, the parties at Google who were reluctant to surrender trademarks are no longer there, Moore noted. "This freed sensible heads to prevail. At the start, it was a toss-up on who would register the trademark, and IBM took Google at good faith that our agreement to take the project to the CNCF would be honored," Moore said. "This turned out to not be the case, but that has been put right."

A Google spokesperson countered in an emailed response: "We've been waiting for the right time of Istio's lifecycle to donate, and now is simply the right time in terms of its maturation. Google approached the OUC and asked them to donate the trademark to the Linux Foundation. The OUC agreed to do so, so as part of the contribution, the trademark will be transferred."

Yesterday, Istio's steering committee said the OUC license will remain in effect. However, the trademarks will move to The Linux Foundation but continue to be managed under OUC's trademark guidelines.

According to industry sources, certain Google parties were reluctant to surrender the ownership of Istio's trademarks. This is because, Torsten Volk, an analyst for Enterprise Management Associates (EMA), told The New Stack, Google has invested a lot of staff hours into Istio and regards service mesh as a critical entry point into the enterprise market.

"Controlling the strings that hold together distributed applications would be a great position for any vendor to be in, but Google was certainly aware of what happened to Docker when they overplayed their hand, paving the way for Kubernetes," Volk said. "Point being, Google needed to take this step in order for VMware, Cisco, IBM, Red Hat and friends to stay committed to Istio, instead of eventually starting to shop around."

While Istio is retaining the OUC license, the act of moving the associated trademarks to The Linux Foundation, and especially, the decision to apply to become a CNCF project, seems to have appeased IBM at least somewhat.

IBM wrote in a post yesterday: "IBM fully believes in open governance and the power of community. Therefore, we enthusiastically applaud today's submission of Istio to the Cloud Native Computing Foundation (CNCF)."

However, IBM was not more specific. The about-face, according to Volk, can be accounted for by lots of friction around this topic in the past and Google still hanging on to the OUC license model instead of simply adopting a traditional open source license without trademark protection.

"This is a tricky topic for all parties involved, as Istio integration requires each vendor to make significant investments and nobody wants to explain to their board why their company was contributing to Google's shareholder value," Volk said.

Meanwhile, Google has made over half of all contributions to Istio and two-thirds of the commits, according to CNCF DevStats, Chen Goldberg, vice president of engineering for Google, noted in a blog post. Google also became Envoy's largest contributor after adopting Envoy for Istio.

"Istio is the last major component of organizations' Kubernetes ecosystems to sit outside of the CNCF, and its APIs are well-aligned to Kubernetes. On the heels of our recent donation of Knative to the CNCF, acceptance of Istio will complete our cloud-native stack under the auspices of the foundation, and bring Istio closer to the Kubernetes project," Goldberg wrote. "Joining the CNCF also makes it easier for contributors and customers to demonstrate support and governance in line with the standards of other critical cloud-native projects, and we are excited to help support the growth and adoption of the project as a result."

Istio's joining the CNCF is nothing but good news for Solo.io, the leading provider of tools for Istio. The CNCF's support will, of course, only make Istio more robust, which should translate into performance benefits for users of Solo.io's Gloo Mesh and other Istio-based products.

"We bet on Istio five years ago. But we did believe that Istio is the best service mesh even when it wasn't in the CNCF. But before, people were a little bit confused about why Istio was not in the CNCF and were even a little bit worried," Idit Levine, founder and CEO of Solo.io, told The New Stack. "Now I think that Istio joining the CNCF will make Istio exactly like Kubernetes, as the de facto service mesh."

Service mesh is defined in the book Istio in Action, by Christian E. Posta, vice president and global field CTO for Solo.io, and Rinor Maloku, field engineer for Solo.io, as "a relatively recent term used to describe a decentralized application-networking infrastructure that allows applications to be secure, resilient, observable and controllable." Service mesh, in this way, "describes an architecture consisting of a data plane that uses application-layer proxies to manage networking traffic on behalf of an application and a control plane to manage proxies. This architecture lets us build important application-networking capabilities outside of the application without relying on a particular programming language or framework," Posta and Maloku write.

Istio is an open source implementation of a service mesh. "It was created initially by folks at Lyft, Google, and IBM, but now it has a vibrant, open, diverse community that includes individuals from Lyft, Red Hat, VMware, Solo.io, Aspen Mesh, Salesforce and many others," Posta and Maloku write. "Istio allows us to build reliable, secure, cloud-native systems and solve difficult problems like security, policy management and observability in most cases with no application code changes."
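The data-plane/control-plane split that Posta and Maloku describe can be made concrete with a small sketch. The following Istio VirtualService is illustrative only (the "reviews" service and its v2/v3 subsets are hypothetical, not from the article): the control plane (istiod) distributes this rule to the Envoy sidecar proxies, which then shift 10% of traffic to a new version of the service, all without touching application code:

```yaml
# Hypothetical traffic-shifting rule: the service and subset
# names are placeholders for illustration.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
    - route:
        - destination:
            host: reviews
            subset: v2
          weight: 90      # 90% of traffic stays on v2
        - destination:
            host: reviews
            subset: v3
          weight: 10      # 10% canaries to v3
```

Because the proxies enforce the weights, the rollout can be adjusted or rolled back by editing this one resource.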

The New Stack is a wholly owned subsidiary of Insight Partners, an investor in the following companies mentioned in this article: Docker.

Read the original:
Istio Applies to Join CNCF: Why Now? The New Stack - thenewstack.io

All you need to know about career in algo-trading and its future – Economic Times

As markets and API-driven trading have started flourishing in recent years, algo-trading or quant-trading professionals have seen an influx of requests for guidance from students about careers in this area. As a result, there is a significant need for structured resources and advice in this field, and in this article I will try to give some direction to interested candidates.

I am a practitioner in the quantitative or algorithmic trading domain. I started working in the field in 2010, when algo trading started in India. There were no resources for learning algo trading in the market when I started. Everything we learned came from experience; fortunately, much of the work in this field is open source, and the internet now has plenty of quality content available.

Where to Start Learning?

There are structured courses in algorithmic trading available on popular ed-tech platforms like the CFA Institute, Coursera, QuantInsti, and WorldQuant University, which can be extremely helpful. In addition, many universities like the BSE Institute have recently started full-fledged programs on algo trading.

But one can even start learning through one's own efforts. The necessary skillset would be:

Types of roles in Algorithmic Trading

My humble prediction is that the resources for algorithmic trading will evolve and become structured and efficient as the market grows. India has 50-60% penetration of algo trading, but the developed markets have much higher penetration, more complex products, and more accessible regulations. Indian markets and algorithmic trading will continue to grow. We will see fresh new leaders come into the market, not just in alpha-generation or portfolio-management roles but also in technology, data science, education, and content development.

More here:
All you need to know about career in algo-trading and its future - Economic Times