Nethack is now in the Museum of Modern Art – PC Gamer

NetHack is one of the great (and ongoing) gaming projects. It first appeared in 1987 as a fork of a 1982 title, Hack, created by Mike Stephenson, Izchak Miller and Janet Walz. The game operated on an open source principle, where anyone was free to create their own version, but only the members of the DevTeam could change the main source code.

Over time the DevTeam grew with skilled coders from the community and, after more than three decades, its exact membership remains unknown. They seem to like the mystery. Over its lifetime NetHack has grown and grown in complexity, somehow juggling its hundreds of emergent elements together, with the DevTeam held up as collective coding gods by the game's devoted community. Browse the NetHack forums and even now you may well come across the initialism TDTTOE: The DevTeam Thinks of Everything.

NetHack's emergent roleplaying isn't like anything else out there, nor has its development been. It's a singular project from all angles, and has now received the recognition it deserves. The Museum of Modern Art began its videogame collection in 2012, and at the time mentioned NetHack would eventually be included (though it wasn't in the initial 14 titles chosen). Its time has finally come: NetHack has been added to the Architecture and Design department's collection, and will be displayed as part of the Never Alone exhibition from September 10.

The news was shared by coder Jean-Christophe Collet in a blogpost discussing his own early involvement as a member of the DevTeam (thanks, Slashdot).

"A long time ago I got involved with the development of NetHack, a very early computer role playing game, and soon joined the DevTeam, as we've been known since the early days," Collet writes. "I was very active for the first 10 years then progressively faded out even though I am still officially (or semi-officially as there is nothing much really 'official' about NetHack, but more on that later) part of the team."

Collet writes rather amusingly about the different stages of amazement he went through, before thinking about "these 35 years and what they meant to me, the team, the gaming community and, finally the open source community."

NetHack was and remains, as Collet puts it, "one hell of an anomaly," and hugely influential within gaming. But he also notes its pioneering approach in other respects: "It is also one of the first, if not the first software project to be developed entirely over the Internet by a team distributed across the globe (hence the 'Net' in 'NetHack')."

Similarly the DevTeam quickly grew used to accommodating user feedback, "suggestions, bug reports and bug fixes from the online community (mostly over UseNet at the time) long, long before tools like GitHub (or Git for that matter), BugZilla or Discord were even a glimmer of an idea in the minds of their creators."

Collet says he was in his early 20s when he first started working on NetHack, and at this distance can now see he learned "as much, or more" from the project as he did from his then-jobs.

Looking back on everything over the years, knowing NetHack's going to be displayed in MoMA, Collet makes an observation to warm any programmer's heart: "I learned that you should always write clean code that you won't be embarrassed by, 35 years later, when it ends up in a museum."

The post ends with Collet's tribute to the fun that everyone involved in making the game has had over its creation:

"We didn't have any lofty goals, we didn't set out to change the world or disrupt anything, we just enjoyed a little game called Hack, came up with ideas that we thought would make it even more fun, worked on these ideas, met like-minded people, and decided to band together. Having a blast all along.

"I am incredibly grateful to have been part of that adventure. It had a huge impact on my life and I am absolutely thrilled to see the game and the team recognised in such spectacular fashion."

There is now a page on MoMA's website whose sole purpose is to display a screen of NetHack, which greets viewers with an unbeatable line: "Hello, Yoghurt! Welcome again to Ermenak's used armor dealership!"


OpenSSL 3.0.5 awaits release to fix potential worse-than-Heartbleed flaw – The Register

The latest version of OpenSSL v3, a widely used open-source library for secure networking using the Transport Layer Security (TLS) protocol, contains a memory corruption vulnerability that imperils x64 systems with Intel's Advanced Vector Extensions 512 (AVX512).

OpenSSL 3.0.4 was released on June 21 to address a command-injection vulnerability (CVE-2022-2068) that was not fully addressed with a previous patch (CVE-2022-1292).

But this release itself needs further fixing. OpenSSL 3.0.4 "is susceptible to remote memory corruption which can be triggered trivially by an attacker," according to security researcher Guido Vranken. We're imagining two devices establishing a secure connection between themselves using OpenSSL and this flaw being exploited to run arbitrary malicious code on one of them.

Vranken said that if this bug can be exploited remotely (and it's not certain it can be), it could be more severe than Heartbleed, at least from a purely technical point of view.

However, Vranken notes several mitigating factors, including the continued use of the 1.1.1 tree of the library rather than the v3 tree; the forks of libssl into LibreSSL and BoringSSL; the short amount of time 3.0.4 has been available; and the fact that the error only affects x64 systems with AVX512, available on certain Intel chips released between 2016 and early 2022.

Intel this year began disabling AVX512 support on Alder Lake, its 12th Gen Intel Core processors.

The bug, an AVX512-specific buffer overflow, was reported six days ago. It has been fixed, but OpenSSL 3.0.5 has not yet been released.

Meanwhile, Linux distributions like Gentoo have not yet rolled out OpenSSL 3.0.4 as a result of this bug and a test build failure bug. So they include OpenSSL 3.0.3, with its command injection flaw.
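For anyone unsure which build they are running, a quick sanity check (a minimal sketch, not from the article) is to ask the Python runtime which OpenSSL it was linked against, via the standard `ssl` module:

```python
import re
import ssl

def parse_openssl_version(version_string):
    """Extract (major, minor, patch) from a string like 'OpenSSL 3.0.4 21 Jun 2022'."""
    m = re.search(r"(\d+)\.(\d+)\.(\d+)", version_string)
    return tuple(map(int, m.groups())) if m else None

# Version reported by the OpenSSL (or compatible) library Python links against.
linked = parse_openssl_version(ssl.OPENSSL_VERSION)
if linked == (3, 0, 4):
    print("Warning: OpenSSL 3.0.4 carries the AVX512 memory-corruption bug (fixed in 3.0.5).")
```

This only reflects the library Python itself uses; other services on the same machine may link a different OpenSSL build.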

In the GitHub Issues thread discussing the bug, Tomáš Mráz, software developer at the OpenSSL Foundation, argues the bug shouldn't be classified as a security vulnerability.

"I do not think this is a security vulnerability," he said. "It is just a serious bug making [the] 3.0.4 release unusable on AVX512 capable machines."

Xi Ruoyao, a PhD student at Xidian University, also said he disagreed with the policy of calling every heap buffer overflow a security flaw. Vim, he said, started doing so this year and the result has been something like ten "high severity" vim CVEs every month without any proof-of-concept exploit code.

"I think we shouldn't mark a bug as 'security vulnerability' unless we have some evidence showing it can (or at least, may) be exploited," he wrote, adding that nonetheless 3.0.5 should be released as soon as possible because it's very severe.

Alex Gaynor, software resilience engineer with the US Digital Service, however, argues to the contrary.

"I'm not sure I understand how it's not a security vulnerability," responded Gaynor. "It's a heap buffer overflow that's triggerable by things like RSA signatures, which can easily happen in remote contexts (e.g. a TLS handshake)."

Gaynor urged releasing the fix quickly. "I think this issue qualifies as a CRITICAL within OpenSSL's vulnerability severity policy, and it makes it effectively impossible for users to upgrade to 3.0.4 to obtain its security fixes," he said.


GPT-3 Training Programmers for the Present (and the Future) – hackernoon.com

I wrote a paper in Spanish and asked "someone else" to rewrite it

When I asked it to translate the article, it decided the title was not good enough.

So it changed it to present AND future

Scary, isn't it?

Let's move on.

So this is the original paper rewritten by GPT-3:

// Translate this into English and convert it to a technical article:

The current market is looking for programmers to stack bricks (1) using their trendy languages. Such professionals are trained in specific tools that have very short life cycles and then become obsolete.

There are currently alternatives to carry out computer developments using intelligent people, capable of making their own decisions and participating in the creative process.

The programmers of the future will have to be excellent and declarative (2): professionals who understand algorithms and complexity, and who have practical know-how.

Most developers in the world are making changes to old systems or have to use complex preexisting libraries or frameworks that they have little control over and few opportunities to modify. (3)

In today's software industry, it is very unlikely to find the need to develop from scratch, with a completely blank page.

A programmer's effective lifespan ages along with the fashionable language. This period is usually less than a decade, so the industry discards professionals as obsolete less than ten years after they were trained. (4)

In recent decades, languages like Visual Basic, C++, Java, PHP, Perl, JavaScript, Ruby, Python, and Go were fashionable as supposed silver bullets (5).

Some of them are already ceasing to be used, leaving their place for new fashions. (6)

The general concepts (which Frederick Brooks calls essential (7)) are better absorbed in our first academic stage. We must focus on forming these concepts so that professionals can easily migrate from one technology (which Brooks calls accidental) to another technology (also accidental and, therefore, quickly obsolete).

By 2021 there are already numerous alternatives to artificial intelligence and machine learning, capable of carrying out low-level programming and algorithmic tasks (8) (9).

Modern virtual machines (those that interpret the high-level language in which we program and that exist, for example, in web browsers) optimize the code for us. (10)

As technology advances, low-level programmers will no longer be required in the same way that no one is looking for librarians today (11). Similar to what happens with other automatable and obsolete professions. (12)

Our profession is still based on teaching low-level imperative programming as if we were facing the problems of the 1960s or 1970s when the specialized hardware was punch cards and open tape machines.

We educate our students to carry out absurd and obsolete algorithmic optimizations, very close to the language of machines and very far from the modeling of entities of the problem that we are representing in our computer solution.

We train professionals to understand how a computer works and to lower themselves to speak in its language, forming low-level mental models and trying to reason in the algorithmic way in which a Turing machine processes.

However, today we can develop semantic models with several layers of abstraction above the machine; as we approach declarative mental models, we move away from the computable abstractions of machines. The most declarative programming languages allow us to reason and infer rules about our models. Against this trend, our students write their instructions based on ones and zeros.

Moore's law (14) tells us about the speed of change of the hardware and its exponential growth in processing power. With respect to software, we grow linearly limited by the realities of many decades ago and the inertia of trainers. In analogy with Malthusian economic theory (15) we need to catch up with the hardware curve to be able to release all our potential before the machines do it for us.

The lack of abstraction and understanding of reality is a symptom of an inertial mentality, based on the time restrictions of processors and storage of the 60s and 70s. These restrictions are limited today to very specific domains and are not representative of the problems we have to solve today as software engineers in most of our jobs. (16)

We need to train generalists, but not theorists. People who understand the basics and foundations of software engineering and can adapt to accidental implementations based on current tools, but also to future ones that we do not yet know today.

Concepts such as software design, life cycle models, teamwork and building a shared theory (17), continuous integration and deployment or architecture, are much more important than learning to minimize CPU usage, to use React, the Vue framework, or the Keta library.

"It is very difficult to make predictions, especially about the future". This phrase, attributed to the great Niels Bohr, tells us that we should be careful when trying to anticipate the future.

Paraphrasing Alan Kay, winner of the 2003 Turing Award (18): "The best way to predict the future is to invent it."

We do not know for sure what the path of machine automation will be, but we can predict that the most creative and engineering part of our profession should be one of the last bastions to fall before automation.

The next generation of software developers will have to focus their skills on design, modeling the real world, and creating abstractions that evolve along with the problem domain to avoid building obsolete and legacy software (19).

This strategic approach is not limited only to development careers. In data science there are similar problems: data scientists focused on optimizing existing algorithms instead of training generalist professionals, able to understand the academic and commercial problems of our country with a battery of solutions and good criteria to determine which to choose in each situation. They should be trained to model the real problem and evaluate different possible solutions.

Data science professionals have an incredible variety of tools to adjust their models. We are still far from finding the second "super algorithm" (20) capable of solving specific problems with a generic solution. Our future professionals have the only general-purpose computer that can reasonably solve various specific problems: Their brilliant minds.

Our responsibility is to feed and stimulate those brains with real problems and not with toy solutions that computers solve (already in 2020) much more efficiently.

Paradoxically, wasting our brains by asking them to perform low-level optimizations for machines is the worst way to optimize our most expensive resource.

Historically, we have privileged the development of very specific and theoretical tools. This is very common and desirable in science because theoretical advances usually precede concrete implementations and uses by decades.

However, in software development, discoveries and findings mostly appear in the private sector before academia. Our professionals must be trained in methodology and concepts above accidental trendy technologies and languages. This creates tension with a market that wants "implementers expert in a tool", only to discard them once the tool completes its 5-to-10-year fashion cycle.

Our professionals should not be disposable or recyclable. We must train them in techniques and they must keep themselves constantly updated, as happens in other professions such as medicine, physics, or biotechnology.

In addition to "soft" skills such as construction and teamwork (since software arises from a collective activity) (17), we must teach design and prototyping techniques to validate our high-level solutions.

As for software, it is imperative to teach solution design, focusing on the behavior of our models and, paraphrasing Donald Knuth (21), the historical author of most of the algorithms we use today, avoiding premature optimizations because we want to play a game that machines dominate much better than us.

Training talent is an accessible option for any country with a good academic level, such as Argentina.

Investing in training excellent software engineers is a strategic decision and an opportunity for take-off that has already been exploited by many other countries such as Estonia, Ireland, Israel and India. The Sadosky Foundation is currently working in this direction. (22)

In Argentina, we have excellent teachers, a good level of English, an unbeatable time zone for dialogue with the USA and Europe, and a culture compatible with the most developed countries. We need to prioritize information technologies and, within them, train intelligent and declarative engineers rather than mechanized programmers and low-level optimizers.

Our professionals must have basic knowledge of programming, algorithms, complexity, and databases.

Above all, they must learn to make designs based on continuous integration and continuous deployment models, with automated tests, using agile techniques such as Test-Driven Development. (23)

The software produced must be declarative and based on the desired behavior (and specified in the automated functional tests); we must stop thinking in the reigning paradigm of the 60s and 70s, based on data types and file and string manipulations, to focus on high-level models that accompany the simulation of any aspect of the real world that we want to represent to solve a certain problem. (24)

The techniques of design based on behavior are agnostic with respect to the accidental technology of fashion and this allows an engineer trained with these concepts 30 years ago to be able to make concrete developments even today.

Unfortunately, the situation is different for programmers who mastered some fashionable language that is now practically unused: they struggle to find good options in the labor market. The paradox is that a trade with full employment discards these professionals for not having been able to adapt. (25)

Today the pace of change is even more dizzying. Technologies last much less, and obsolescence stalks us unless we are intelligent and broad, and have the appropriate training.

This is an opinion piece. As future work to support the current thesis, we should carry out a quantitative study including figures on employee turnover (25), average time in each job according to age and studies completed, etc.

To do this we must use techniques related to the social sciences under a multidisciplinary approach.

The future has already arrived. We don't have much idea of what a programmer's job will be like in 5 or 10 years, but we have strong indications that it will not be related to writing algorithms based on data structures. We must train professionals who quickly understand a real-life problem and know how to build simulators with very small conceptual leaps, so that they can evolve alongside the problems we are solving today.

Part of the ideas in this article was born from the teaching work in the Software Engineering course in the Computer Science degree at the Faculty of Exact Sciences, University of Buenos Aires.

The translation is accurate and beautiful.

I have checked it with Grammarly and the Hemingway app and found just a few grammatical mistakes.

As an example:

Este es un artículo de opinión. Como trabajo futuro para apoyar la presente tesis deberíamos realizar un trabajo cuantitativo incluyendo cifras de rotación de empleados (25), tiempo promedio en cada trabajo según la edad y los estudios realizados, etc. Para ello debemos utilizar técnicas relacionadas a las ciencias sociales bajo un enfoque multidisciplinario.

Translates in GPT-3 as:

This is an opinion piece. As future work to support the current thesis, we should carry out a quantitative study including figures on employee turnover (25), average time in each job according to age and studies undertaken, etc. To do this we must use techniques related to the social sciences from a multidisciplinary perspective.

While Google Translate yields:

This is an opinion article. As future work to support this thesis, we should carry out quantitative work including employee turnover figures (25), average time in each job according to age and studies carried out, etc. For this we must use techniques related to the social sciences under a multidisciplinary approach.

I used the text-davinci-002 model with temperature 0 (randomness) and a maximum length of 2,000 tokens.
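For readers who want to reproduce this setup, here is a minimal sketch of the request body those settings describe, against OpenAI's legacy completions endpoint (the URL and payload shape reflect that legacy API; actually sending the request would additionally require an API key):

```python
import json

API_URL = "https://api.openai.com/v1/completions"  # legacy completions endpoint

def build_completion_request(prompt):
    """Assemble the request body matching the settings described in the article."""
    payload = {
        "model": "text-davinci-002",
        "prompt": prompt,
        "temperature": 0,    # deterministic output (no randomness)
        "max_tokens": 2000,  # maximum length used by the author
    }
    return API_URL, json.dumps(payload)

url, body = build_completion_request(
    "Translate this into English and convert it to a technical article:\n\n..."
)
```

With temperature 0, repeated runs on the same prompt should produce essentially the same translation.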

What is your opinion on this article?

I think that this article makes some valid points about the future of programming and the need for programmers to be adaptable to change. I agree that the industry is moving towards more automated solutions and that this will eventually lead to a decrease in the need for programmers. However, I think that there will always be a need for programmers who are able to understand and solve complex problems.

(Scary Again!!)

I will definitely keep using GPT-3 to write technical articles and improve them.



Appsmith Raises $41 Million to Scale Open Source Platform that Speeds Critical Internal App Development – Business Wire

SAN FRANCISCO--(BUSINESS WIRE)--Appsmith, the first open-source low code software helping developers build internal tools, today announced that it has raised $41 million in Series B funding led by global software investor Insight Partners, with participation from Accel, Canaan, OSS Capital, and angel investors including Jeff Hammerbacher, co-founder of Cloudera, and Abhinav Asthana, co-founder of Postman.

Every enterprise needs to create custom internal applications, a slow, repetitive, expensive process that requires work to build the user interface, write integrations, code the business logic, manage access controls and ultimately deploy the app. Often, these internal applications take a backseat to customer-facing applications. Appsmith enables software engineers to build internal apps often 10 times faster.

"Software development is becoming critical to more and more businesses, but there are simply not enough developers to meet our growing global needs," said Nikhil Sachdev, managing director at Insight Partners. "Low code tools like Appsmith are compelling options because they enable companies to deliver more developer productivity and faster time to market. With its open-source approach and great momentum achieved in a short amount of time, Appsmith is clearly a favorite among developers."

The funding follows dramatic growth in users since the $10.5 million Series A round in October 2021 led by Canaan and including Bessemer Venture Partners, bringing the total funding to $51.5 million. The company plans to continue building out its open source software while delivering support services as well as a paid version of the software with additional security features needed by enterprises.

"This funding round was done to support the demand that we're seeing for our open source platform to build internal apps; they are mission critical but not customer-facing, so they often get deprioritized," said Abhishek Nayak, co-founder and CEO of Appsmith. "As a result, companies of all sizes from many different industries rely on Appsmith as an easy way to build custom apps to solve this problem."

Since being introduced less than two years ago, Appsmith's open source software has had more than 10 million downloads, more than 18,000 stars (up 12x since 2021) on GitHub, and is used at over 10,000 organizations. Last year, the company introduced more than 150 enhancements, including major features like JS Editor, version control, and 30-plus new widgets and variations. In total, 184 features were released. Appsmith also now has more than 18,000 community members with 192 contributors.

Appsmith is the first open-source low code software that helps developers build internal and CRUD (create, read, update and delete) type applications quickly, usually within only hours. Software engineers build the user interface with pre-built components, code the business logic by connecting application programming interfaces (APIs) along with any database, then test and deploy a web application where users are authenticated using a dashboard. Companies dedicate anywhere from 10% to 40% of their engineering resources to these internal tools, such as a customer support dashboard.

About Appsmith

Appsmith was founded in 2019 with the mission to enable backend engineers to build internal web apps quickly with a low code approach. Taking an open source software approach provides anyone with access to the software and the opportunity to get involved in the community. The company has offices in San Francisco and Bengaluru, India. For more information visit https://www.appsmith.com


Exploiting symmetries: Speeding up the computational study of solid solutions – EurekAlert

Image caption: Atomic substitution with La atoms: Ce8Pd24Sb → (Ce5,La3)Pd24Sb. The crystal structure was obtained from the ICSD database (CollCode: 83378). The space group is Pm-3m (No. 221), and the crystal structures are depicted using VESTA.

Credit: Kousuke Nakano from JAIST.

Ishikawa, Japan -- Symmetry is a prevalent feature of nature at all scales. For example, our naked eyes can easily identify symmetries in the bodily shape of countless organisms. Symmetry is also very important in the fields of physics and chemistry, especially in the microscopic realm of atoms and molecules. Crystals, which are highly ordered materials, can even have multiple types of symmetry at the same time, such as rotational symmetry, inversion symmetry, and translational symmetry.

Lately, alongside rapid progress in computer science, researchers have developed computational methods that seek to predict the physical properties of crystals based on their electronic structure. In practice, however, pure and perfectly symmetric crystals are seldom used. This is because a crystal's properties can be tuned as desired by alloying it with other materials or by randomly substituting certain atoms with other elements, i.e., doping.

Accordingly, materials scientists are seeking computationally efficient approaches to analyze such alloys and substituted crystals, also known as solid solutions. The supercell method is one such approach and is widely used to model crystal structures with random substitutions of different atoms. The symmetry of crystals, however, is actually a problem when using this technique. In crystals, there can be many substitution patterns that are physically equivalent to other substitutions if we simply translate or rotate them. Finding these symmetric substitution patterns is not very meaningful, and thus their calculation when using the supercell method is a waste of time.

In a recent study, a team of researchers led by Assistant Professor Kousuke Nakano from Japan Advanced Institute of Science and Technology (JAIST) found a solution to this problem. They developed an open-source software package called Suite for High-throughput generation of models with atomic substitutions implemented by Python, or SHRY, which can generate symmetry-distinct substitution patterns in solid solutions and alloys [https://github.com/giprayogo/SHRY]. This work, which was published in the ACS Journal of Chemical Information and Modeling, was co-authored by doctoral student Genki I. Prayogo, Dr. Andrea Tirelli, Professor Ryo Maezono, and Associate Professor Kenta Hongo.

The team approached the problem from the angle of group theory. It turns out that searching for atomic substitution patterns in crystals is analogous to the problem of finding coloring patterns on the vertices of graphs under certain restrictions. This allows one to reformulate the original problem of finding non-symmetric atomic substitutions in crystals as exploring search trees depicting the coloring of vertices in graphs.

However, the way in which the search tree is explored is crucial. A simple, naïve approach in which all possible branches are searched and directly compared is impossible; the time and calculations required grow uncontrollably for large systems. This happens because deciding whether to explore further down a branch requires information about all other branches besides the one being explored, which is technically referred to as non-local information.

To avoid this issue, the researchers implemented in SHRY a technique called canonical augmentation. "This method can decide whether a tree branch should be explored more deeply or not based solely on local information," explains Dr. Nakano. "Most importantly, theorems from group theory guarantee that only distinct substitution patterns will be extracted, without over- or under-exploring the tree structure in terms of symmetry." The team verified that their algorithm was error-free by testing it thoroughly with data from a database of crystal structures.
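To get a feel for the problem SHRY solves, here is a toy sketch (not SHRY's canonical-augmentation algorithm, which prunes the search tree instead of enumerating everything): it lists binary substitution patterns on a 4-site ring and keeps one representative per rotation class, using the lexicographically smallest rotation as the canonical form.

```python
from itertools import product

def canonical(coloring):
    """Lexicographically smallest rotation: a canonical representative of the orbit."""
    n = len(coloring)
    return min(tuple(coloring[(i + s) % n] for i in range(n)) for s in range(n))

def distinct_patterns(n_sites, colors=(0, 1)):
    """Substitution patterns on an n-site ring, distinct up to rotational symmetry."""
    seen = set()
    for coloring in product(colors, repeat=n_sites):
        rep = canonical(coloring)
        if rep not in seen:
            seen.add(rep)
            yield rep

patterns = list(distinct_patterns(4))
# Burnside's lemma predicts (2^4 + 2 + 2^2 + 2) / 4 = 6 distinct patterns
```

This brute-force deduplication visits all 2^n colorings, which is exactly the cost explosion the article describes; canonical augmentation reaches the same set of representatives using only local information at each tree node.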

It is worth noting that SHRY was written in Python 3, one of the most popular cross-platform programming languages, and uploaded to GitHub, a leading project-sharing online platform. "SHRY can be used as a stand-alone program or imported into another Python program as a module," highlights Dr. Nakano. "Our software also uses the widely supported Crystallographic Information File (CIF) format for both the input and output of the sets of substituted crystal structures." The team plans to keep improving SHRY's code based on feedback from other users, boosting its speed and capabilities.

Overall, the software developed in this study could help scientists identify potential atomic substitutions in solids, which is the most common strategy used to tune the properties of materials for practical applications. SHRY will help speed up research and develop substituted crystals with unprecedented functionalities and superior characteristics.

###

Reference

Title of original paper:

SHRY: Application of Canonical Augmentation to the Atomic Substitution Problem

Journal:

Journal of Chemical Information and Modeling

DOI:

10.1021/acs.jcim.2c00389

About Japan Advanced Institute of Science and Technology, Japan

Founded in 1990 in Ishikawa prefecture, the Japan Advanced Institute of Science and Technology (JAIST) was the first independent national graduate school in Japan. Now, after 30 years of steady progress, JAIST has become one of Japan's top-ranking universities. JAIST has multiple satellite campuses and strives to foster capable leaders with a state-of-the-art education system where diversity is key; about 40% of its alumni are international students. The university has a unique style of graduate education based on a carefully designed coursework-oriented curriculum to ensure that its students have a solid foundation on which to carry out cutting-edge research. JAIST also works closely with both local and overseas communities by promoting industry-academia collaborative research.

About Assistant Professor Kousuke Nakano from Japan Advanced Institute of Science and Technology, Japan

Dr. Kousuke Nakano obtained B.Sc. and M.Sc. degrees in Engineering from Kyoto University, Japan, in 2012 and 2014, respectively. He then joined JAIST, where he obtained a Ph.D. in computer and information science in 2017. Since 2019, he has worked there as an Assistant Professor, researching the topics of first-principles quantum Monte Carlo simulations, density functional theory, machine learning for materials informatics, and the synthesis of novel inorganic compounds using solid-state reactions. He has over 40 publications to his name on these topics and his h-index is 14 with over 700 citations (Google Scholar, Jun. 2022).

Funding information

This work was financially supported by JST SPRING (Grant Number JPMJSP2102), MIUR Progetti di Ricerca di Rilevante Interesse Nazionale (PRIN) Bando 2017 (Grant Number 2017BZPKSZ), the HPCI System Research Project (Project IDs: hp210019, hp210131, and jh210045), MEXT-KAKENHI (JP16H06439, JP17K17762, JP19K05029, JP19H05169, JP21K03400, JP21H01998, JP22H02170, JP19H04692, and JP21K03400), the U.S. Air Force Office of Scientific Research (Award Number FA2386-20-1-4036, AFOSR-AOARD/FA2386-17-1-4049; FA2386-19-1-4015), JSPS Bilateral Joint Projects (JPJSBP120197714), JSPS Overseas Research Fellowships, a Grant-in-Aid for Early-Career Scientists (Grant Number JP21K17752), and a Grant-in-Aid for Scientific Research (C) (Grant Number JP21K03400).

Journal of Chemical Information and Modeling

SHRY: Application of Canonical Augmentation to the Atomic Substitution Problem

9-Jun-2022

See the original post:
Exploiting symmetries: Speeding up the computational study of solid solutions - EurekAlert

Building An Insights And Analytics Newsletter: From Proof Of Concept To Feature Release – Forbes

As software engineers at Forbes, we're always building innovative features and working on prototypes for potential projects. Some proof-of-concept initiatives remain in the testing stage, but some end up being shared with a larger audience. Bertie Bulletin, a monthly email containing user stats, started as an engineering initiative and turned into a monthly email sent to both active Forbes Staff and Contributors who write for the Forbes website.

Currently, writers have access to their user and story stats through Bertie, Forbes' content management system, where they can write, edit, and publish their articles. Bertie Bulletin was created from a suggestion to better equip writers with the knowledge they need to understand their stats, as well as provide them with a record of historical stats. Bertie Bulletin is similar to a bank statement that helps writers keep track of their performance from month to month. We included the top-performing stories for the previous month (based on how many page views each story received), audience stats, and a referral breakdown indicating where readers come from. While these emails include some of the data that already exists in the stats dashboard, they also extend to include insights. Insights are comparisons of data, such as this month or year versus the previous month or year.
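An insight of this kind boils down to a simple comparison. As a hypothetical sketch (the function name and message format are illustrative, not Forbes' actual code), a month-over-month page view insight might look like:

```python
def page_view_insight(current: int, previous: int) -> str:
    """Render a month-over-month page view comparison as a sentence."""
    if previous == 0:
        # No baseline to compare against, so report only the raw number.
        return f"{current:,} page views this month (no data for last month)"
    change = (current - previous) / previous * 100
    direction = "up" if change >= 0 else "down"
    return f"{current:,} page views, {direction} {abs(change):.0f}% from last month"

print(page_view_insight(125000, 100000))
# prints: 125,000 page views, up 25% from last month
```

A batch job can compute a string like this per writer and store it alongside the raw stats before the email is assembled.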


As we built Bertie Bulletin, we made sure to fully utilize pre-existing projects in the Forbes Engineering universe. The initial approach used Directed Acyclic Graphs, or DAGs, written in Python, so Bertie Bulletin was written in Python as well. To create the mailing list we consulted with our API team, who utilized cloud functions to generate the mailing lists with the given parameters. We used Mailgun, an email delivery service used by other Forbes projects, to store mailing lists and email templates as well as trigger emails. Our codebase made calls to the Forbes stats API to fetch the numbers and generate insights, which were then stored as recipient variables in Mailgun.
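As a rough sketch of that flow (the domain, template name, and variable names are assumptions, not Forbes' actual configuration), a batch send through Mailgun's messages API with per-writer stats stored as recipient variables might be assembled like this:

```python
import json

def build_bulletin_request(domain: str, template: str, stats_by_writer: dict) -> dict:
    """Assemble a Mailgun batch-send payload with per-recipient stats."""
    return {
        "url": f"https://api.mailgun.net/v3/{domain}/messages",
        "data": {
            "from": f"Bertie Bulletin <bulletin@{domain}>",
            "to": list(stats_by_writer),  # one message per recipient
            "subject": "Your monthly stats",
            "template": template,
            # Mailgun substitutes these per-recipient values at send time.
            "recipient-variables": json.dumps(stats_by_writer),
        },
    }

request = build_bulletin_request(
    "example.com", "bertie-bulletin",
    {"writer@example.com": {"page_views": 12345}},
)
# The payload would then be POSTed with an API key, e.g.:
# requests.post(request["url"], auth=("api", MAILGUN_API_KEY), data=request["data"])
```

Keeping the per-writer numbers in recipient variables means one template upload serves the whole mailing list.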

A lot of research went into figuring out which templating engine made sense to use. The initial email was set up in a single HTML file, though it wasn't pretty to look at or easy to edit. Our list of wants included an engine that would allow for template inheritance and control structures. In other words, we could break the email down into sections with conditionals that would then be compiled. This led us to Jinja, which describes itself as "a fast, expressive, extensible templating engine," making it ideal for our purposes.

Setting up the Jinja flow for Bertie Bulletin required three different steps: creating the Jinja files containing our HTML elements, using built-in methods to render a single HTML file, and uploading that file to Mailgun to use as the email template.
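The first two steps above can be sketched in a few lines. This minimal example uses Jinja's template inheritance with in-memory templates rather than files on disk; the template names and fields are illustrative, not the actual Bertie Bulletin templates:

```python
from jinja2 import Environment, DictLoader

# Step 1: the Jinja "files" containing our HTML elements.
templates = {
    "base.html": "<html><body>{% block stats %}{% endblock %}</body></html>",
    "bulletin.html": (
        "{% extends 'base.html' %}"
        "{% block stats %}"
        "<h1>{{ month }} stats</h1>"
        "{% if top_story %}<p>Top story: {{ top_story }}</p>{% endif %}"
        "{% endblock %}"
    ),
}

env = Environment(loader=DictLoader(templates))

# Step 2: render the section templates down to a single HTML string.
html = env.get_template("bulletin.html").render(
    month="January", top_story="How We Built Bertie Bulletin"
)
# Step 3 would upload `html` to Mailgun as the stored email template.
```

The `{% extends %}` and `{% if %}` tags are exactly the inheritance and control structures on the "list of wants": each email section lives in its own block, and sections with no data simply render to nothing.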

Our Minimum Viable Product (MVP) had a couple of must-haves for our writers. This included top stories, audience stats, total published stories, and information on page views. This first version of the email was released in February 2022, rendering January stats data. With each iteration of these emails, we included new features to enable writers to visualize their information in a more palatable manner, such as an audience stats donut chart and a traffic source types referral table, to name a few.

Screenshot of Bertie Bulletin donut chart and referral breakdown designs.

The process to deploy Bertie Bulletin emails can be split into a few broad steps. Initially, the email template is generated and uploaded, and the mailing list is sanitized. The next steps retrieve each writer's data from the stats API and transform it into useful pieces of information. Lastly, each user's data is updated and the email is triggered.

In order to figure out the best deployment strategy for Bertie Bulletin, we once again turned to Forbes Engineering. We consulted with our DevOps team, and they suggested using Argo Workflows, a container-native engine that enables the orchestration of parallel jobs on Kubernetes, an open-source platform for working with containers. Each step was containerized into a reliable package of its execution environment, built into a self-contained image. The advantage of using containers was that each one would be small and do a defined unit of work, which could then be scaled horizontally and completed in parallel, thus finishing the batch work quicker. Optimization was important because we expected to send a couple thousand emails every month. With Argo's fully featured UI, we were able to see each containerized step as a stage in the Argo workflow and visualize its progress. If a step errored out for any reason, Argo could retry the step and provide us with logs to better debug the issue.

As we reflected on the process of creating Bertie Bulletin, we realized that the advice from Jackie Ha's previous article still applied at any level.


Be Inquisitive: In the beginning, we had a lot of conversations with product owners to hash out details. What did the MVP look like? What were the specific requirements at each stage?

Know When To Ask For Help: With a large project like Bertie Bulletin, it was inevitable (like Thanos) that we would run into the unknown (like Elsa). To overcome these roadblocks, we consulted with various people throughout the process for guidance on the best way to execute.

Learn How To Debug: Because Bertie Bulletin had so many moving parts, if something went wrong we needed to be able to pinpoint exactly where the error came from. This meant we had to figure out how to debug effectively and efficiently.

Proofread Your PRs: Whether you are on team unified view or team split view, it's good practice to review your own PR on GitHub before sending it out to others. The visual comparison can help you catch a sneaky print() or syntax errors easily missed within your code editor.

Taking this project from prototype to production was a strenuous effort that required collaboration across many teams. After many hours of pair programming, screen sharing, and debugging sessions, being able to finally see the end result live and hearing positive feedback from the recipients made the process worthwhile.

Go here to read the rest:
Building An Insights And Analytics Newsletter: From Proof Of Concept To Feature Release - Forbes

Calendar of events, awards and opportunities – ASBMB Today

Every week, we update this list with new meetings, awards, scholarships and events to help you advance your career. If you'd like us to feature something that you're offering to the bioscience community, email us with the subject line "For calendar." ASBMB members' offerings take priority, and we do not promote products/services. Learn how to advertise in ASBMB Today.

As we do each year, we'll be hosting a Twitter chat for Pride Month. It will be at 2 p.m. Eastern on June 27 and will feature ASBMB staffers, members and representatives of allied organizations. We hope you can join us! Follow us at @ASBMB.

This webinar will feature the ins and outs of the Early Career Reviewer Program at the National Institutes of Health's Center for Scientific Review, which gives emerging investigators an inside look at the scientific peer-review process. Elyse Schauwecker, a scientific review officer at CSR, will talk about the benefits of participating, eligibility, the application process and recent changes. There will also be time to ask Schauwecker questions about the program and other CSR opportunities for early-career scientists. Anita Corbett of Emory University, a member of the ASBMB Public Affairs Advisory Committee, will moderate. Register.

The National Cancer Institute's Frederick National Laboratory for Cancer Research is the only national laboratory dedicated to biomedical research. FNLCR is conducting a survey to determine how familiar researchers are with the lab and the services, tools and resources it offers to the scientific community. Take the survey.

This five-day conference will be held Aug. 14-18 in person in Cambridge, Massachusetts, and online. It will be an international forum for discussion of the remarkable advances in cell and human protein biology revealed by ever-more-innovative and powerful mass spectrometric technologies. The conference will juxtapose sessions about methodological advances with sessions about the roles those advances play in solving problems and seizing opportunities to understand the composition, dynamics and function of cellular machinery in numerous biological contexts. In addition to celebrating these successes, we also intend to articulate urgent, unmet needs and unsolved problems that will drive the field in the future. The registration deadline is July 1. Learn more.

The Journal of Science Policy & Governance, the United Nations Educational, Scientific and Cultural Organization and the Major Group for Children and Youth announced in February a call for papers for a special issue on "open science policies as an accelerator for achieving the sustainable development goals." The deadline for submissions is July 10. To help authors prepare their submissions, the group will be hosting a series of webinars (April 8 & 29, May 20, and June 10) and a science policy paper-writing workshop (March 26-27). Read the call for submissions and learn more about the events.

This in-person meeting will be held Sept. 29 through Oct. 2 in Snowbird, Utah. Sessions will cover recent advances and new technologies in RNA polymerase II regulation, including the contributions of non-coding RNAs, enhancers and promoters, chromatin structure and post-translational modifications, molecular condensates, and other factors that regulate gene expression. Patrick Cramer of the Max Planck Institute will present the keynote address on the structure and function of transcription regulatory complexes. The deadline for oral presentation abstracts is July 14. The deadline for poster presentation abstracts is Aug. 18. Learn more.

Head to beautiful Denver, Colorado, for a summer experience as a PRIDE (Programs to Increase Diversity Among Individuals Engaged in Health-Related Research) scholar. PRIDE is an initiative of the National Heart, Lung and Blood Institute that trains junior faculty from underrepresented backgrounds and/or with disabilities to advance their scientific careers and make them more competitive for external research funding. The University of Colorado PRIDE (led by Sonia C. Flores, who also leads the ASBMB Minority Affairs Committee) is one of nine national PRIDE sites. Its focus is on the "impact of ancestry and gender on omics of lung and cardiovascular diseases" (which is why it's called PRIDE-AGOLD). The program consists of two consecutive summer institutes (two and one week, respectively) that offer comprehensive formal instruction on multi-omics, data sciences and bioinformatics, with an emphasis on interpretations based on ancestry and/or gender; career development and grant-writing tools; pairing with expert mentors; and pilot funds to develop a small research project. Learn more.

Most meetings on epigenetics and chromatin focus on transcription, while most meetings on genome integrity pay little attention to epigenetics and chromatin. This conference in Seattle will bridge this gap, linking researchers who are interested in epigenetic regulation and chromatin with those who are interested in genome integrity. The oral and poster abstract deadline and early registration deadline is Aug. 2. The regular registration deadline is Aug. 29. Learn more.

For Discover BMB, the ASBMB's annual meeting in March in Seattle, we're seeking two types of proposals:

In May, the Howard Hughes Medical Institute launched a roughly $1.5 billion program to "help build a scientific workforce that more fully reflects our increasingly diverse country." The Freeman Hrabowski Scholars Program will fund 30 scholars every other year, and each appointment can last up to 10 years. That represents up to $8.6 million in total support per scholar. HHMI is accepting applications from researchers "who are strongly committed to advancing diversity, equity, and inclusion in science." Learn more.

Save the date for the ASBMB Career Expo. This virtual event aims to highlight the diversity of career choices available to modern biomedical researchers. No matter your career stage, this expo will provide a plethora of career options for you to explore while simultaneously connecting you with knowledgeable professionals in these careers. Each 60-minute session will focus on a different career path and will feature breakout rooms with professionals in those paths. Attendees can choose to meet in a small group with a single professional for the entire session or move freely between breakout rooms to sample advice from multiple professionals. Sessions will feature the following five sectors: industry, government, science communication, science policy and other. The expo will be held from 11 a.m. to 5 p.m. Eastern on Nov. 2. Stay tuned for a link to register!

The ASBMB provides members with a virtual platform to share scientific research and accomplishments and to discuss emerging topics and technologies with the BMB community.

The ASBMB will manage the technical aspects, market the event to tens of thousands of contacts and present the digital event live to a remote audience. Additional tools such as polling, Q&A, breakout rooms and post-event Twitter chats may be used to facilitate maximum engagement.

Seminars are typically one to two hours long. A workshop or conference might be longer and even span several days.

Prospective organizers may submit proposals at any time. Decisions are usually made within four to six weeks.

Propose an event.

If you are a graduate student, postdoc or early-career investigator interested in hosting a #LipidTakeover, fill out this application. You can spend a day tweeting from the Journal of Lipid Research's account (@JLipidRes) about your favorite lipids and your work.

The International Union of Biochemistry and Molecular Biology is offering $500 to graduate students and postdocs displaced from their labs as a result of natural disaster, war or "other events beyond their control that interrupt their training." The money is for travel and settling in. Learn more and spread the word to those who could use assistance.

The Center for Open Bioimaging Analysis maintains the open-source software CellProfiler and ImageJ. COBA has partnered with Bioimaging North America and the Royal Microscopical Society to create a survey to assess the needs of the community for software and training materials. Take the survey.

Read more here:
Calendar of events, awards and opportunities - ASBMB Today

Credentials for thousands of open source projects free for the taking, again! – Ars Technica


A service that helps open source developers write and test software is leaking thousands of authentication tokens and other security-sensitive secrets. Many of these leaks allow hackers to access the private accounts of developers on GitHub, Docker, AWS, and other code repositories, security experts said in a new report.

The tokens give anyone with access to them the ability to read or modify the code stored in repositories that distribute an untold number of ongoing software applications and code libraries. The ability to gain unauthorized access to such projects opens the possibility of supply chain attacks, in which threat actors tamper with malware before it's distributed to users. The attackers can leverage their ability to tamper with the app to target huge numbers of projects that rely on the app in production servers.

Despite this being a known security concern, the leaks have continued, researchers on the Nautilus team at the security firm Aqua Security report. Two batches of data the researchers accessed using the Travis CI programming interface yielded 4.28 million and 770 million logs, respectively, spanning 2013 through May 2022. After sampling a small percentage of the data, the researchers found what they believe are 73,000 tokens, secrets, and other credentials.

"These access keys and credentials are linked to popular cloud service providers, including GitHub, AWS, and Docker Hub," Aqua Security said. "Attackers can use this sensitive data to initiate massive cyberattacks and to move laterally in the cloud. Anyone who has ever used Travis CI is potentially exposed, so we recommend rotating your keys immediately."

Travis CI is a provider of an increasingly common practice known as continuous integration. Often abbreviated as CI, it automates the process of building and testing each code change that has been committed. For every change, the code is regularly built, tested, and merged into a shared repository. Given the level of access CI needs to work properly, the environments usually store access tokens and other secrets that provide privileged access to sensitive parts inside the cloud account.

The access tokens found by Aqua Security involved private accounts on a wide range of services, including GitHub, AWS, and Docker.

Examples of the access tokens that were exposed appear as images in the original report (credit: Aqua Security).

A chart in the original report (credit: Aqua Security) shows the breakdown.

A representative for Code Climate, the service shown in the chart above, said the credentials found by Aqua Security don't provide hackers with unauthorized access. "These are test coverage tokens, used to report test coverage to Code Climate's Quality product," the representative said. "Unlike the other tokens mentioned in this post, these tokens are not considered secret, and cannot be used to access any data."

Aqua Security researchers added:

We found thousands of GitHub OAuth tokens. It's safe to assume that at least 10-20% of them are live, especially those found in recent logs. We simulated in our cloud lab a lateral movement scenario based on this initial access scenario:

1. Extraction of a GitHub OAuth token via exposed Travis CI logs.

2. Discovery of sensitive data (e.g., AWS access keys) in private code repositories using the exposed token.

3. Lateral movement attempts with the AWS access keys in AWS S3 bucket service.

4. Cloud storage object discovery via bucket enumeration.

5. Data exfiltration from the target's S3 to the attacker's S3.


Travis CI representatives didn't immediately respond to an email seeking comment for this post. Given the recurring nature of this exposure, developers should proactively rotate access tokens and other credentials periodically. They should also regularly scan their code artifacts to ensure they don't contain credentials. Aqua Security has additional advice in its post.
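Scanning artifacts for credentials can start with simple pattern matching. This is a minimal sketch, not a production scanner: the two patterns below are illustrative, covering only GitHub personal access tokens and AWS access key IDs, while real tools check hundreds of formats:

```python
import re

# Credential-shaped patterns; real scanners ship far more of these.
TOKEN_PATTERNS = {
    "github_token": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def scan_for_secrets(text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, matched_string) pairs found in `text`."""
    hits = []
    for name, pattern in TOKEN_PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((name, match))
    return hits
```

Running something like this over build logs and code before publishing them catches the most obvious leaks; anything it flags should be treated as compromised and rotated.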

Post updated to add comment from Code Climate.

Link:
Credentials for thousands of open source projects free for the taking, again! - Ars Technica

What are the Most Famous Programming Tools and Techniques? – Programming Insider


A programming tool, also known as a software development tool, is a program or application that programmers use to create, debug, maintain, and support other programs and applications. The word usually refers to a set of very simple programs that may be combined to complete a task, similar to how many hand tools can be used to repair a physical object. It's difficult to tell the difference between tools and applications. Simple databases (such as a file holding a list of significant values) are frequently used by developers as tools. A full-fledged database, on the other hand, is normally thought of as a separate application or piece of software. CASE (computer-aided software engineering) tools have been in demand for a long time.

Successful tools have been difficult to come by. In certain ways, CASE tools, such as those built around UML, prioritized design and architecture support. IDEs, on the other hand, have been the most successful of these tools. One of the characteristics of a professional software engineer is the ability to use a number of tools effectively. A program is a sequence of instructions that tells the computer to do a variety of tasks; often, the instruction it is to perform depends on what happened after it completed a previous instruction. This section outlines the two major ways in which you'll provide these instructions, or commands as they're commonly known. One method employs an interpreter, while the other uses a compiler.
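Python itself illustrates both models in one runtime: source code is first compiled to bytecode, and that bytecode is then interpreted by the Python virtual machine. A minimal sketch of the two steps:

```python
source = "result = sum(range(10))"

# "Compiler" step: translate source text into a bytecode object.
code_object = compile(source, "<example>", "exec")

# "Interpreter" step: execute the bytecode in a fresh namespace.
namespace = {}
exec(code_object, namespace)

print(namespace["result"])  # prints 45
```

The same division shows up in most toolchains: a compiler like gcc does the first step ahead of time, while an interpreter does both steps each time the program runs.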

Software tools are also useful for turning concepts into working systems. The Arduino platform, for example, makes it easy to design multiple applications; if you want to control the speed and direction of a DC motor in a robotics car, you can implement that task with an Arduino.

Best Programming tools:

The most famous and useful programming tools are:

Every day, software developers are confronted with a large amount of information to remember. New technologies, keyboard shortcuts, software requirements, and best practices are all things to be aware of. Many of us reach a limit on how much we can keep in our heads at some point. Evernote's free tier gives you an external brain: a place where you may store learnings, articles, information, and keyboard shortcuts or commands. It's always there when you need it because it's cloud-based.

Trello is a project management app that is both simple and free. It's an app that lets you make columns, or swim lanes, and arrange cards in them. These cards can represent tasks that need to be performed or work that needs to be done.

GitHub created Atom, a relatively new code editor. It's open source and free, and it looks fantastic. It's also quite simple to use. Atom is a terrific tool for hacking at scripts or working on side projects, even if you use a more feature-rich IDE for your development at work. Atom's markdown preview mode is one feature that sets it apart from other code editors. When working on Readme files and other documentation, you can enter notes in markdown and get an inline preview.

Unity is a free, end-to-end game engine that makes it easier than ever to develop professional, cross-platform games. It's common for software developers to dismiss game development as cool but too difficult, but with an infusion of high-quality tutorials and ongoing updates to Unity's tooling, the barrier to entry has never been lower. By dabbling in a totally different sort of programming, you'll gain insights and ideas that will help you become a better programmer overall, and you'll probably have a lot of fun doing it.

Code Climate is a code analysis tool that rates your software based on test coverage, complexity, duplication, security, style, and other factors. It comes with a two-week trial period. Even if you're not willing to pay, Code Climate can provide you with a wealth of information on the code quality of your most recent personal project or, if your team is on board, the product or service you're developing. As a software developer, you definitely have a sense for code smells: things that could be better. When you have a lot of things wrong with your code, it can be difficult to know where to start.

See original here:
What are the Most Famous Programming Tools and Techniques? - Programming Insider

Top 7 ‘Hot’ Programming Languages of 2022 – ITPro Today

What's the most important programming language to learn in 2022? That's an open question, but one way to answer it is to look at languages that are currently trending.

Some of them are well-established coding languages that have long been popular. Others are newer languages that are just now entering their heyday. Either way, they're languages worth familiarizing yourself with.

Related: Is PHP Dying? No, but It Has an Image Problem

Here's a roundup of what are arguably the trendiest programming languages in 2022.

1. Python: When talking about hot programming languages in 2022, the list must start with Python. Probably no language is having a better year than Python, which recently slid into first place to become the most popular language of all. You could argue that Python doesn't quite deserve that status, but the fact is that it enjoys it.

Related: COBOL Language Still in Demand as Application Modernization Efforts Take Hold

2. Go: Go (or Golang, as it's formally known) has long been a "cool" programming language partly because it traces its roots to Google (which is a hotbed of coolness, technologically speaking) and partly because it's fast to write, fast to compile, and fast to run.

3. OPA: Open Policy Agent, or OPA, isn't technically a programming language. It's a policy engine whose declarative language, Rego, lets you define policies as code. That makes it a hot language, however, in a world increasingly obsessed with doing "everything as code."

4. Swift: If you develop anything for the world of Apple, whether on macOS, iOS, or any other *OS platform, Swift is a language you absolutely need to know today. It's also a relatively easy language to code in, by many accounts.

5. C: C, which turns 50 this year, may be old, but it remains as relevant as ever and is still a hot programming language in 2022. It's messy, it's fast, and it's essential for a wide variety of programming tasks.

6. Java: It's arguably hard to get excited about Java, a language that is tedious to code in and whose code is relatively slow. But the fact is that Java was the most popular programming language for years, and tons of stuff are still written in it. Whether you actually enjoy coding in Java or not, it remains an important language as of 2022.

7. JavaScript: JavaScript is not the same as Java, but they're similar in that tons of stuff are written in JavaScript, too. If you are creating web apps in particular, JavaScript is probably the most important language for you to learn today.


Read more:
Top 7 'Hot' Programming Languages of 2022 - ITPro Today