Report finds the value in open source skills – SDTimes.com

When it comes to what skills developers should be learning, a recent report found that organizations and technology managers value open source. According to the report, which was conducted by O'Reilly Media and commissioned by IBM, the increasing adoption of hybrid cloud is driving the importance of open source skills. IBM predicts hybrid cloud adoption will grow by 47% over the next three years, with organizations using an average of six hybrid clouds.

"Two significant shifts characterize computing in the past two decades: the widespread use of free and open source software (OSS), and migration to the cloud. The relationship between these trends is complex and deserves close attention. Developers need to understand the growing value of OSS in the cloud era. Mastering open source tools and programming libraries will make them valuable, even as this software is increasingly deployed on third-party cloud offerings," the report stated.

According to the report, which surveyed more than 3,400 developers and technology managers, 94% of respondents rated open-source software as equal to or better than proprietary software. Seventy percent of respondents choose a cloud provider based on open source, and 65% of respondents prefer skills related to open-source technologies such as Linux, Kubernetes, and Istio.

"Open source becomes more valuable in a hybrid cloud world because the hybrid cloud itself is built on open-source technologies. In fact, almost every major cloud vendor's container platform is built on Kubernetes, and the containers themselves are being built with other open technologies," IBM developer advocates Willie Tejada, Todd Moore, and Chris Ferris wrote in a blog post. "The skills you develop related to these technologies are transferable across the developer community and ecosystem, and of course to any proprietary cloud that you work on."

The report also found that open source contributions impress potential employers and lead to better job and professional opportunities.

More:
Report finds the value in open source skills - SDTimes.com

The Programming Foundation is on a mission to make technology inclusive – YourStory

In 2018, during his college days, Subhajeet Mukherjee from Kolkata realised that a lot of students were being taught computer programming through drag and drop tools.

Moreover, at a time when data security is of utmost concern, Subhajeet wanted to keep users anonymous and democratise computer science education. This was in a bid to foster people at the grassroots level, and create a self-sustaining community of developers worldwide.

Founded in February 2020 in Sunnyvale, California, The Programming Foundation (TPF) focuses on providing computer science education free of cost, without compromising user data. Theodore Rolle, a Technical Account Manager with the Google Cloud Professional Services Organization, joined TPF as Secretary and Technical Advisor.

Subhajeet introducing TPF at a talk event organised by Write the Docs at LinkedIn San Francisco in early 2020

The 24-year-old aims to create a smarter general population through The Programming Foundation and the operating system that he's developing. He has authored two books on operating systems and given talks at Hacker Dojo, ACM, SF Python, and LinkedIn. He previously served as a Data Support Engineer at The Pill Club and a Community Support Specialist at BetterHelp.

In fact, the non-profit platform does not require the user to log in or create an account. The lessons can easily be accessed from the main portal without giving any details about the user's whereabouts.

"We have an integrated interface for Operating System, Programming and Logic. Under Operating System, topics such as Unix, Vim and the kernel are covered. Furthermore, C, Python, and object-oriented programming make up the Programming section. Likewise, Binaries and Gates account for the Logic section," says Subhajeet.

TPF is based on written instructions, interactive examples and processes. It provides volunteers with hands-on experience working together as a team by developing free and open-source tools to improve the platform.

The target audience for TPF at the moment is those who are in, or have recently graduated from, college or university. The main regions it focuses on are Northeast India, some northern regions of India, regions in South East Asia and Africa, and the Midwestern United States.

"There are no great prerequisites to join TPF classes. The general population needs to have a base-level understanding of how computers work," Subhajeet says.

The classes are designed in such a way that the domains are laid out on the Learn page itself. The categories are divided into Operating Systems, Programming and Logic. "Users can go into the operating system domain, learn the concepts behind it and interact with the interface. Furthermore, we're shipping interactive versions of the programming languages into production very soon," he says.

TPF's interactive step-by-step Unix learning experience

Learners can study these concepts by directly visiting the platform, which is the primary method. They don't need to download anything, as it's all there in the browser. At the end of each domain, they're asked to answer a few questions on the domain to ensure impact.

TPF is also open to taking on volunteers to work at the foundation, provided they meet the basic requirements for the position. "We've over 30 volunteers from the US, India, South America and Europe. Many of them are part of the technology industry while others are new to technology."

As a non-profit, TPF relies primarily on donations and grants. However, Subhajeet shares that they were fortunate enough to receive the Google Ad Grants, along with a lot of other in-kind support from leading industry technology companies in the initial stages and along the way. This helped them scale fast and gain a steady user base.

Incidentally, getting donations and grants remains their biggest challenge as well. "When The Programming Foundation was launched during the pandemic, we never thought we would be able to survive till the end of 2020, but we successfully entered 2021," he shares.

While the Foundation is enabling a number of people with free programming courses, there are still a number of areas in the world that don't have internet access. How does it empower them?

TPF wants to provide them with native experiences that achieve the same singular interface as The Programming Foundation's Learn section.

A screengrab of The Programming Foundation's operating system, which will run on RISC-V to democratise the education of operating systems.

TPF is prioritising accessibility so that people who are blind or differently abled can also use the platform and operating system in the future.

"Inclusion is important to us. We need more women and people from the LGBTQ community to represent technology. These are long-term goals, and we have a roadmap for that. We've started to encourage the usage of gender-neutral pronouns, such as 'they' and 'them', at TPF. I believe this is the first step," shares Subhajeet.

Read more:
The Programming Foundation is on a mission to make technology inclusive - YourStory

Some Open-Source Projects Are More Open Than Others – Built In

Piotr Zakrzewski is a sometime contributor to open source projects. He's not a regular on any one project, but more of a dabbler: a self-described "outsider contributor" who sometimes submits pull requests to projects he enjoys using.

In fact, Zakrzewski said, many contributors to open-source projects are outsiders.

"We are talking about people who usually use the project," he said. "They don't work on the project directly, they just use it for something else. And they found a bug or a missing feature, and because they were passionate about it and they like open source, they decided to give it a chance and make a contribution."

But among the projects open to outside contribution, Zakrzewski found that some were a bit more open than others.

"There are some projects that are very eager to accept your contributions, that are more likely to merge it, that do whatever is needed to work with you to get it merged," Zakrzewski said. "And there are also some projects that are more likely to ignore them, or they just don't accept them."

The definition of open source can be confusing. For instance, there's a difference between open-source code and code that's simply visible to the public, like code stored in public repositories on GitHub.

"You can inspect all open-source code, but not all code that you can inspect is immediately open source," Zakrzewski said.

The exact definition of open source is squishy, but it generally means a project that is available to anyone to freely use.

"What determines that is a license," Zakrzewski said. "There are certain types of licenses, like LGPL, GPL, MIT, FreeBSD, Apache and so forth, that, if you see them, mean that this project is open source."

These licenses state that projects are available for anyone to download, use and modify. For many open-source projects, there's also an open collaboration aspect where anyone can contribute pull requests to the main branch of the codebase, but that's not always the case.

"Just because something is open source, that does not necessarily mean that it's open contribution," Zakrzewski said. "There are projects that allow you to do anything you want with the code yourself: fork it, modify it, redistribute it, sell it. But they will not accept an outsider contribution into the main branch."

Open-source projects closed to outside contributions are also easy to spot, because they usually say so explicitly in the project's README file, Zakrzewski said. The real difficulty is figuring out just how open to contributions the remaining open-source, open-contribution projects really are.

"The problem is this gray zone in between," Zakrzewski said. "They either don't want to invest time anymore in interacting with the community, they simply cannot afford it, time-wise, mostly, or they just don't want to do it for another reason, but they don't make it explicit. In other cases, they actually kind of would like some contributions, but they're just very picky."

That's not an inherent problem: projects are different, and some may have characteristics that make pull requests difficult to get past review. But working on and submitting a pull request can take significant effort for developers, and Zakrzewski began to wish he knew ahead of time how likely his suggestions were to be accepted.

"I didn't know how to tell those apart for some time, and I found it a bit frustrating," he said. "I felt that maybe other people find it [frustrating] too. It's not easy to figure out how likely a contribution is to be ignored or not."

At the time, Zakrzewski was interested in learning to use the GraphQL query language, so he combined his interests and built Merge Chance, a tool that estimates the likelihood that an outsider's pull request on GitHub will be accepted.

Using GitHub's documentation, Zakrzewski found APIs that gave him data from GitHub repositories, including those of some open-source projects he had unsuccessfully tried to contribute to.

"Once I fetched this data, I just calculated some very simple statistics," Zakrzewski said. "How many pull requests are being merged in total, and what can I say about the people who merged them?"

He classified each project's pull requests into two groups: those initiated by insider contributors and those initiated by outsider contributors. Insider contributors were considered to be people who owned the repositories or belonged to the organizations that owned them. He then calculated, for each project, the chance that a pull request has of getting approved.
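
For a concrete sense of the arithmetic, here is a minimal sketch in Python. It is not the actual Merge Chance implementation (the real tool was built against GitHub's APIs via GraphQL); this version uses the REST endpoint for listing pull requests, and the function name, page limit, and set of insider roles are illustrative assumptions.

```python
# Minimal sketch (not the real Merge Chance code): estimate the chance that
# an outsider's pull request gets merged, using the GitHub REST API.
import requests

INSIDER_ROLES = {"OWNER", "MEMBER", "COLLABORATOR"}  # assumed insider definition

def outsider_merge_chance(owner: str, repo: str, token: str, pages: int = 5) -> float:
    """Fraction of closed outsider PRs that ended up merged."""
    merged = total = 0
    headers = {"Authorization": f"token {token}"}
    for page in range(1, pages + 1):
        resp = requests.get(
            f"https://api.github.com/repos/{owner}/{repo}/pulls",
            params={"state": "closed", "per_page": 100, "page": page},
            headers=headers,
        )
        resp.raise_for_status()
        for pr in resp.json():
            # author_association records the author's relationship to the repo
            if pr.get("author_association") in INSIDER_ROLES:
                continue  # insider PR: not what we are measuring
            total += 1
            if pr.get("merged_at"):  # non-null timestamp means the PR was merged
                merged += 1
    return merged / total if total else 0.0
```

As a hypothetical usage example, `outsider_merge_chance("vuejs", "vue", token)` would return roughly 0.6 for a project that merges 60 percent of its outsider pull requests.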

Zakrzewski found that most pull requests to open-source projects are accepted. "So open source mostly works," he said.

Zakrzewski was surprised to find that bigger open-source projects are more likely than smaller ones to accept outsider pull requests. This might be because larger projects have more people who can help review outside contributions.

"It is kind of counterintuitive that it is those big projects, usually backed by bigger companies, that are very dynamic, and they have enough people to really help you with approving your contribution," Zakrzewski said. "A lot of small projects are those that are most likely to ignore you, or they just don't have the resources to accept your contributions."

Although Merge Chance calculates a percent likelihood of approval for each project, Zakrzewski said it's important not to think of the number as a score.

"It is a metric, it's not necessarily a score that should be maximized," he said. "Whether every project should aspire to have a 90-percent-plus merge chance? No, they shouldn't. But it's still useful to know what the merge chance is, because making a contribution to the project takes a lot of effort from the contributor, and also from those who accept it."

Zakrzewski has tweaked the Merge Chance classifications to reflect feedback from developers. One adjustment affected how insider and outsider contributors are defined to better catch insiders who look like outsiders.

"There are a lot of different ways that people work with GitHub," he said. "Some projects are very disciplined about adding insiders, and they give them official rights. Those are very easy to detect. But more informal projects, or just projects that are organized differently, or from smaller companies, they don't always do that. Contributors or even maintainers of a project, from a GitHub perspective, don't differ at all from outsiders."

In those cases, Merge Chance is likely to give the projects inflated likelihood values, because insider contributors get counted as outsiders. After Zakrzewski set a limit on how many contributions outsiders can have before being classified as insiders, the results became more accurate.
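
That adjustment is easy to express as a rule. The sketch below shows the shape of it; the cutoff value and function name are hypothetical, as the article does not say what limit Merge Chance actually uses.

```python
def classify_author(author_association: str, prior_pr_count: int,
                    cutoff: int = 20) -> str:
    """Classify a PR author as insider or outsider.

    cutoff is a hypothetical limit: authors with more prior contributions
    than this are treated as de facto insiders, even when the repository
    never granted them official rights on GitHub.
    """
    if author_association in {"OWNER", "MEMBER", "COLLABORATOR"}:
        return "insider"  # formally recognized by the repository
    return "insider" if prior_pr_count > cutoff else "outsider"
```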

Currently, he is working on something that will filter out spam pull requests, which artificially bring down a project's Merge Chance value.

"For instance, Vue.js and React are very popular open-source projects, and they experience significant amounts of daily spam contributions," Zakrzewski said. "Some developers, it's hard to say why they do this, they just open frivolous contributions like 'Hello World', or they change one word in the README, and the maintainers immediately close them. So that skews the metrics a bit for some repositories."
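
The spam filter is still being built, so any concrete rule here is a guess. One plausible heuristic for the pattern described (frivolous PRs that maintainers close almost immediately) would flag pull requests closed unmerged within minutes of being opened:

```python
from datetime import datetime, timedelta

def looks_like_spam(pr: dict, cutoff: timedelta = timedelta(minutes=30)) -> bool:
    """Hypothetical heuristic: a PR closed unmerged within minutes of opening.

    Expects a GitHub API pull request object; timestamps are ISO 8601
    strings such as '2021-01-01T12:00:00Z'.
    """
    if pr.get("merged_at") or not pr.get("closed_at"):
        return False  # merged, or still open: not the pattern described
    created = datetime.fromisoformat(pr["created_at"].rstrip("Z"))
    closed = datetime.fromisoformat(pr["closed_at"].rstrip("Z"))
    return closed - created < cutoff
```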

Owners of open-source repositories who are interested in fostering more outsider contributions have also reached out to Zakrzewski about the project, in order to figure out how they can best help outsiders get involved in the community.

"They are interested in what the project they contribute to looks like," he said. "Let's say the product I work on accepts 60 percent. How do I feel about this? Should we maybe be more open? Should we be less critical? Or maybe it's OK. It's one more metric that developers might be interested in."

Continued here:
Some Open-Source Projects Are More Open Than Others - Built In

The Origin of Linux and Reasons to use Linux – Technotification

What is Linux?

If you are eager to know about Linux, the first thing you should learn is Linux's origins, to understand the concept behind this operating system. Linux is a software layer between the hardware and the applications in a computer system. It allows you to do productive things and create custom programs on your computer. In simple words, an operating system is a medium between the software and a computer system's hardware. An operating system allows you to store data on storage devices like hard drives, solid-state drives, and USB sticks. It manages data transmission from one element to another; for example, it oversees data flow from the operating system to the printers in your office or home.

If you have installed a standard Windows environment such as the Microsoft Windows operating system on your computer, the Windows operating system runs the hardware. It controls the mouse, keyboard, printers, scanners, and other accessories. You will then have to install Microsoft Office, Adobe readers, PDF converters, and other software as per your needs, paying for each program you install on your system.

Linux is the same as the Windows operating system in terms of controlling the hardware: it acts as a medium between the software's instructional code and the physical device. The most significant difference lies in the software you will use on Linux, which is of a different type than the software that runs on Windows. You cannot install and run Microsoft Office or Adobe Photoshop in a Linux environment. Linux runs servers of many kinds: web servers, virtualization servers, Apache servers, database servers, and so on.

However, Linux has several distributions that are made for personal desktop computers. These distributions are similar to macOS and Windows operating systems: they run the same types of programs, like word processors, image editors, and games. These Linux distributions are targeted more at home users searching for a free alternative.

Linux did not kick off as an operating system or a challenger to the Windows operating system. At the start, Linux was just a kernel that Linus Torvalds created while he was a student at the University of Helsinki. That kernel is still at the core of the system. Early on, the Linux kernel was used along with the GNU operating system; you could say the GNU system was incomplete without the kernel. The kernel is an integral component of Linux.

A kernel is the central part of an operating system, responsible for all the interfacing between applications and hardware. Two kernel families dominate the market today: Unix-like kernels and Windows kernels.

Between 1991 and 1994, Linus took a step further and created the Linux operating system by combining the GNU OS with the Linux kernel. At the start, he did not set out to create a commercial operating system; he simply needed something that he could customize to fit his programming needs. Linux was his pet project, a side hustle. UNIX is different from the Linux operating system: Linus built the entire Linux system from scratch. He created Linux because he wanted to build an open-source operating system for people to use. At that time, UNIX was not open source; people had to pay to use it, and Microsoft's operating system was also paid. Therefore, Linus came up with the idea of an open-source operating system. He worked up the idea with his friends from the Massachusetts Institute of Technology (MIT). Coupled with building an open-source operating system, they needed an easy-to-use and efficient system that they could customize to suit their programming needs.

When Linus was creating the Linux operating system, he stopped working on it for a while. During that period, he made the code for the operating system public. This allowed everyone to take part in the creation of the system. Scientists and computer geeks started working on the concept as well. They changed the operating system as they deemed fit. Prominent educational institutions and companies liked the concept of this new operating system because everyone who had the source code could install Linux on his or her computer.

This is how people started creating different versions of the Linux operating system. Students from the University of California, Berkeley, tried to start building a version. People from China and people with different occupations also started creating versions to suit their personal needs. The availability of the source code to the public facilitated the creation of distros, or distributions: the different versions of Linux that people have been creating over time. Linux's many distributions give it a range of capabilities, and every distribution is built to do things in a particular manner. When you have to decide which Linux distribution to use, you have to decide what you want your computer to do with Linux. For example, there is a version of Linux known as Trustix, billed as the most secure Linux operating system on the market. It is simply a brick: you set up Linux Trustix, and no one will be able to hack it unless you do something foolish. There will be no sneaking in by viruses. It is a secure and reliable server. However, you have to decide that you really need a secure server before picking up the source code and installing the system.

Now that you have learned about the origins of Linux and its distributions, it is time to move on to the concept of open-source licensing, which makes Linux different from other operating systems. Linux has open-source licensing. You might have heard of open-source software at some point in your life. Open source does not mean that the software is free to use. If you treat all open-source software as free, you risk jeopardizing your programming career and your company as well. Therefore, we must discuss open-source software to clear the air. Open-source software means that whenever programmers write the code for a piece of software, they let you see how they wrote the program in the first place. It does not mean that the program is free to use. There are different ways in which open-source vendors are paid. The first is the classic open-source model, where they give away their software free of cost but charge for support or training. For example, you can download the MySQL server for your Linux server and find it useful and powerful. Even after you have learned the MySQL program's different intricacies, you may still need support with some aspects of the software. So you approach the software vendor and ask for training or help, and at that point you pay for their development efforts.

The second way developers are paid is through a non-commercial open-source license. This is where most people get into trouble: you have to pay to use the software commercially. If you want the software for home use, there is no problem. But once you use it to connect to a business server, you owe a licensing fee for commercial use. The catch is that licensing fees may run over $8,000 in some cases. Therefore, it is wise to stay conscious of whether you are using the software for personal, non-commercial, or commercial purposes.

The third way open-source software programmers are paid is through a paid open-source license. Some of you might ask how software can be open source if it is paid for. Paid software is still called open source if the programmer allows you to see its code.

The fourth way these programmers earn money is through recurring license fees for the open-source software. This is how most open-license programs work: they let you download and test the software free of charge, and let you see its code so that you know how it works. However, if you want the legal rights to the software, you pay a yearly or monthly fee. This is much cheaper than a one-time licensing fee, which can be very expensive.

The shell of any operating system is the interface through which you interact with that system. Take the example of Microsoft Windows: the Windows shell is its graphical user interface, where you can see the mouse pointer at work. You use the pointer to navigate the screen and click on different desktop elements such as icons and folders.

Shells generally come in two types, the first being the graphical user interface (GUI) and the second being the line user interface (LUI). A LUI looks like a DOS prompt. If you have ever had the opportunity to work at the Microsoft DOS prompt, the screen you see and work on is a line user interface. It is a black-and-white screen on which you type commands to get a specific output from your computer.

Linux is a technical operating system, which is why programmers, engineers, and geeks prefer it. They like to use the line user interface because it helps them in programming. When you install Linux, you can install a graphical user interface, just like Windows, where you use a mouse to click on things, or you can access the line user interface, which is more suitable for programmers. The line user interface on Linux is driven by typed commands.

Keep in mind that the line user interface on Linux is more robust than the graphical user interface (GUI). However, when you install Linux with a line user interface as the shell, you see a prompt instead of a mouse pointer. If you do not know what a command prompt is or what to do with it, you will most likely be stuck. To help you out, here are a few functional examples of Linux commands and a small shell script.
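
The commands below are a small illustration of what you type at that prompt. They are standard Linux commands; the log path and the ~/projects directory are placeholders you would replace with your own.

```sh
ls -l                          # list files in the current directory, with details
pwd                            # print the directory you are currently in
grep "error" /var/log/syslog   # search a log file for lines containing "error"

# A tiny shell script: archive a directory into a datestamped backup file
tar -czf "backup-$(date +%Y%m%d).tar.gz" ~/projects
```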

One reason you should learn the Linux operating system is the functionality of the server. Linux is incredibly rock-solid: once you have installed it, worked through the quirks, and set up the configurations, a Linux system will run without overheating or dying in the middle of a job. It will run on and on; a correctly installed Linux operating system can run for a hundred and fifty days without shutting down.

In this, Linux is unlike Windows, which you may have to reboot weekly to avoid memory problems. If you have configured it correctly, a Linux operating system will run and do the job with the least concern for the circumstances. There will be little to no operational problems when you run Linux on your computer.

Go here to see the original:
The Origin of Linux and Reasons to use Linux - Technotification

Programming in the pandemic – Skymind: Thoughts from the AI ecosystem – ComputerWeekly.com

With only a proportion of developers classified as key workers (where their responsibilities perhaps included the operations side of keeping mission-critical and life-critical systems up and online), the majority of programmers will have been forced to work remotely, often in solitude.

So how have the fallout effects of this played out?

This post is written by Paul Dubs, drawing on his 15+ years of experience as a software engineer. Based in Germany, he is responsible for heading up the software division of Skymind, an open source enterprise deep learning software company that calls itself a dedicated AI ecosystem builder.

Dubs writes as follows

We are building AI solutions for business use. This includes our open source products Deeplearning4J and Konduit Serving, as well as commercial products and professional services.

In many ways, things haven't changed that much due to the pandemic. The company has always been remote-first, with a team distributed over many different time zones. The biggest change has been that we can't come together as easily as we used to. Previously, it was possible for key people to meet in person at least once per quarter; often, we met at conferences or for workshops.

In terms of what types of application development have actually flourished under the constraints of lockdown versus what types have suffered: clearly defined work has flourished. Where you know exactly what needs to be built and no communication or planning is needed, things can be done very well under lockdown conditions, when you can just set yourself as offline and work heads-down until you are done.

But anything that benefits from a free communication flow is somewhat hindered: brainstorming and whiteboarding; planning and figuring out what to build in the first place.

Unfortunately, most of the online meeting solutions we've tried so far end up being a very distant second compared to an in-person meeting. The latency results in people starting to talk over each other; the noise reduction algorithms remove too much and make it hard to understand some people; and sudden technical problems always result in lost time.

Dubs: tough times, but teamwork has still shone through.

Thinking about our staff numbers, headcount has grown quite a bit during the last year, despite the pandemic.

Training these people has been a challenge we have had to overcome. Whereas previously we could organise a workshop and come together, we now have to find different ways to share our knowledge. One of the ways we are doing it is through regular knowledge-sharing sessions over Zoom. We record them to allow people who were not able to attend to watch them at a later time, and to make that learning more asynchronous in general.

Thinking about methods: waterfall development as we know it has always been a strawman. Even the paper that introduced it used it as an example of what not to do. So I don't think that we as an industry are likely to fall back to it. Agile development methodologies have always been about being adaptable, about being able to deal with the uncertainties of business life. With uncertainties at an all-time high, it seems unlikely that anyone would consider moving away from agile methods. It makes sense, however, that the actual details change.

How that change looks obviously depends on the situation. For example, there is no point in a standup in the morning if there is no morning that is consistent for all of the team members, as we are working at different hours and in different time zones.

A more asynchronous approach, where you have a kind of continuous standup in a Slack channel, for example, can work better in that case.

I have been working in a remote position for a few years now. When the pandemic hit, this meant that I was well prepared and essentially nothing about my day-to-day work had to change.

In contrast to many people who were used to going to an office building to do their work and now have to work from their bedroom, living room or kitchen, I had a properly set up home office already. More importantly, I was already used to working in an environment that is abundant with distractions.

For many people, it was the first time they had to do the majority or all of their work from home, and while they knew how to deal with distractions in their office environment, they weren't yet equipped to deal with them at home. When you have to work from your kitchen, your mind might wander to what you will be cooking today, or you start to suddenly notice all the little spots that may need a bit of cleaning. When working from the living room, the TV may be a constant reminder that you could be watching your favorite show right now.

It really is no moral failing of their own, as those activities are strongly associated with those rooms. For the brain, the work might as well be the distraction. Finding your own rhythm that will work in your home environment takes some time and practice.

For myself, I'm following the same schedule as I used before the pandemic. I get up sometime between 6 and 7 in the morning, get a light workout in, revisit the plan for the day that I set up the evening before, and read a bit. Then I get some breakfast and get to work. Sometime around noon or early afternoon I have lunch and a short break, and then get back to work. When I'm done for the day, I sketch out a new plan for the next day. The plan helps me stay on track, or get back on track when something unexpected needs to be handled.

One particular thing that really helps me stay focused is to ignore any and all news until the evening. During interesting times, as we are currently experiencing, the trickling stream of news can heavily sap attention from the things that really matter.

Link:
Programming in the pandemic - Skymind: Thoughts from the AI ecosystem - ComputerWeekly.com

14 free or affordable online courses to learn Python, offered by MIT, Harvard, UPenn, Google, and more – Business Insider

It's no secret that coding-related jobs are on the rise, and that careers in data science and software development are among those with the highest average job satisfaction, according to Glassdoor.

For that reason, you may have heard of Python, one of the most popular programming languages in the world. Its uses range from data analysis to AI and machine learning, and its code is used by companies like Google, Reddit, Wikipedia, Amazon, Instagram, Spotify, and many more.

Luckily, there are many online resources to get started in learning about Python, from relatively short, free introductions to months-long intensive (yet comparatively affordable) certificate programs. Many are offered by prestigious universities like MIT, Harvard, UPenn, and the University of Michigan, or by top companies like Google or IBM, giving online students access to lessons and projects that can help them work towards a future career in Python development.

Originally posted here:
14 free or affordable online courses to learn Python, offered by MIT, Harvard, UPenn, Google, and more - Business Insider

Developer jobs: Google's Go, Redux.js, Google Cloud, and AWS skills will get you the most interviews – ZDNet

While many people will face tough prospects in 2021, software engineers remain in high demand even in areas that tech companies and employees are supposedly fleeing from, like San Francisco.

But while employees might want to leave expensive cities, employers are offering slightly more to attract talent in traditional tech hubs.

"Average salaries for top software engineering roles increased in all major tech hubs last year by 5% in the San Francisco Bay Area, 3% in New York, 7% in Toronto, and 6% in London respectively," Hired notes in a new report.

Hired notes that programmers who know Google's Go programming language, the Redux JavaScript library, Google Cloud, and AWS get more interview requests from employers.

Remote working under the pandemic, however, has had some impact on traditional tech hubs as more remote roles appear elsewhere.

For example, Denver, Colorado, accounted for 34% of remote role offers, while roles in London and Toronto accounted for 6% and 9% of remote roles, respectively.

Hired's survey, covering 10,000 participating companies and 245,000 job seekers, was conducted with hiring platform Vettery.

"Demand for software engineers and their skill set continued to grow despite the massive economic downturn amid the pandemic and one of the most difficult job markets in US history," said Josh Brenner, Vettery's chief.

"As many companies will pick up their hiring efforts again this year, they will have to compete even more for top engineering talent."

The companies found that 83% of software engineers were after "new challenges and continuous learning", meaning that companies will need to cater to an appetite among developers for remote work and career development opportunities.

Developers across the board are in demand. People with backend and full stack knowledge accounted for 58% and 57% of interview requests, while frontend software engineers accounted for 30% of all interview requests.

Software engineers who know about Redux.js, Google Cloud, AWS and React.js are in luck. Engineers proficient in Redux.js received almost three times more interview requests than the marketplace average, while candidates with Google Cloud, AWS and React.js skills received 2.7 times more interviews.

The companies found that developers with knowledge of Go and Scala got twice as many interview requests.

AWS is where the jobs are though. "AWS was requested 8 [times] more in job listings compared to Google Cloud Platform and Microsoft Azure skills," Hired notes.

Developers who want a job also need to know Kubernetes and Docker, the predominant container technologies.

Read the rest here:
Developer jobs: Google's Go, Redux.js, Google Cloud, and AWS skills will get you the most interviews - ZDNet

Google recommits to the Python ecosystem – SDTimes.com

Google has announced it is increasing its support for the Python Software Foundation (PSF). The company is now a Visionary Sponsor and will work to improve the language, ecosystem and community.

"Python is critically important to both Google Cloud and our customers. It serves as a popular runtime for many of our hosted services, from the launch of App Engine more than a decade ago, to modern serverless products like Cloud Functions. We use the Python Package Index (PyPI) to distribute hundreds of client libraries and developer tools, including the popular open-source machine-learning library TensorFlow. And we use it internally as well, where it helps power many of our core products and services," Dustin Ingram, a senior developer advocate at Google, wrote in a blog post.

As part of its new support, the company will donate more than $350,000 to support PSF projects and improve supply-chain security. The investment will go towards productionized malware detection for PyPI; improvements for Python tools and services; and a full-time CPython Developer-in-Residence to help prioritize maintenance and address the backlog of the CPython project, according to the company.

Additionally, Google has recommitted its in-kind donation of Google Cloud infrastructure to the PSF, helping support the critical infrastructure the PSF operates, including the Python Package Index.

The company will also make the Google Cloud Public Datasets program home to the PyPI download statistics and PyPI project metadata, which will be updated in near-real-time.
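
Those statistics are queryable with standard BigQuery tooling. The sketch below is a minimal example in Python, assuming the public dataset keeps its documented `bigquery-public-data.pypi.file_downloads` table and that Google Cloud credentials are already configured; treat the table and column names as assumptions that may change.

```python
# A minimal sketch: count last month's downloads of one PyPI package
# using the public download-statistics dataset on BigQuery.
from google.cloud import bigquery

client = bigquery.Client()  # assumes GOOGLE_APPLICATION_CREDENTIALS is set

query = """
    SELECT COUNT(*) AS downloads
    FROM `bigquery-public-data.pypi.file_downloads`
    WHERE file.project = 'tensorflow'  -- package name to count
      AND DATE(timestamp) >= DATE_SUB(CURRENT_DATE(), INTERVAL 30 DAY)
"""

for row in client.query(query).result():
    print(f"tensorflow downloads in the last 30 days: {row.downloads}")
```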

"Like so many Google Cloud customers, we're big believers in Python. Supporting the PSF in this way will help ensure that the Python ecosystem has a strong and viable future for many years to come," Ingram wrote.

More information is available here.

Originally posted here:
Google recommits to the Python ecosystem - SDTimes.com

Ruffle keeps classic flash games alive (and safer) with an open source emulator – Liliputing

The death of Adobe Flash was a long time coming. While the 25-year-old technology was instrumental in bringing animation, games, and interactive content to the web when it was still young, it was always sort of a security nightmare, with Adobe struggling to issue bug fixes faster than vulnerabilities were discovered and exploited.

But now that Adobe has finally pulled the plug on Flash, what happens to all the classic games and other content developed in Flash?

Some has been ported to newer web technologies like HTML5. Some lives on in the Internet Archive's Flash library. And there's BlueMaxima's Flashpoint, a project to save more than 70,000 Flash games and 8,000 animations and bundle them with a Flashpoint Secure Player.

Now there's another option called Ruffle. It's an open source Flash Player emulator that's designed to be more secure than Adobe's product, while allowing you to run Flash games and animations on a wide range of devices.

Ruffle is written in the Rust programming language, which features built-in memory protection to help guard users against many of the vulnerabilities that affected Adobe's Flash Player.

The Flash Player emulator is also cross-platform, available as a native desktop application and as a web version that runs in the browser via WebAssembly.

Ruffle is still under active development, and a note on the project's GitHub page describes it as in the "proof of concept" stage. But you can take it for a spin by visiting ruffle.rs/demo/ to play a couple of sample games and animations in your browser.

via Bleeping Computer

More:
Ruffle keeps classic flash games alive (and safer) with an open source emulator - Liliputing

New Machine Learning Theory Raises Questions About the Very Nature of Science – SciTechDaily

A novel computer algorithm, or set of rules, that accurately predicts the orbits of planets in the solar system could be adapted to better predict and control the behavior of the plasma that fuels fusion facilities designed to harvest on Earth the fusion energy that powers the sun and stars.

The algorithm, devised by a scientist at the U.S. Department of Energy's (DOE) Princeton Plasma Physics Laboratory (PPPL), applies machine learning, the form of artificial intelligence (AI) that learns from experience, to develop the predictions. "Usually in physics, you make observations, create a theory based on those observations, and then use that theory to predict new observations," said PPPL physicist Hong Qin, author of a paper detailing the concept in Scientific Reports. "What I'm doing is replacing this process with a type of black box that can produce accurate predictions without using a traditional theory or law."

Qin (pronounced "Chin") created a computer program into which he fed data from past observations of the orbits of Mercury, Venus, Earth, Mars, Jupiter, and the dwarf planet Ceres. This program, along with an additional program known as a "serving algorithm," then made accurate predictions of the orbits of other planets in the solar system without using Newton's laws of motion and gravitation. "Essentially, I bypassed all the fundamental ingredients of physics. I go directly from data to data," Qin said. "There is no law of physics in the middle."
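
To make the "data to data" idea concrete, here is a toy sketch in Python. It is emphatically not Qin's discrete field theory method, just an illustration of learning a one-step update rule directly from orbit observations and then rolling it forward, with synthetic circular-orbit data standing in for real observations.

```python
import numpy as np

# Synthetic "observations": a body on a circular orbit, sampled at fixed intervals
t = np.linspace(0, 4 * np.pi, 200)
states = np.stack([np.cos(t), np.sin(t)], axis=1)  # (x, y) positions

# Learn a one-step map s[t+1] = s[t] @ A by least squares: no laws of physics,
# just past data. For uniform circular motion the true A is a rotation matrix.
X, Y = states[:-1], states[1:]
A, *_ = np.linalg.lstsq(X, Y, rcond=None)

# "Serve" the learned map: roll it forward to predict future positions
s = states[-1]
for step in range(3):
    s = s @ A
    print(f"predicted position {step + 1} steps ahead: {s}")
```

Because the true one-step map for uniform circular motion is a rotation matrix, the least-squares fit recovers it from the samples, and the rollout extrapolates the orbit without any gravitational law being written down.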

PPPL physicist Hong Qin in front of images of planetary orbits and computer code. Credit: Elle Starkman / PPPL Office of Communications

The program does not happen upon accurate predictions by accident. "Hong taught the program the underlying principle used by nature to determine the dynamics of any physical system," said Joshua Burby, a physicist at the DOE's Los Alamos National Laboratory who earned his Ph.D. at Princeton under Qin's mentorship. "The payoff is that the network learns the laws of planetary motion after witnessing very few training examples. In other words, his code really learns the laws of physics."

Machine learning is what makes computer programs like Google Translate possible. Google Translate sifts through a vast amount of information to determine how frequently one word in one language has been translated into a word in the other language. In this way, the program can make an accurate translation without actually learning either language.

The process also appears in philosophical thought experiments like John Searle's Chinese Room. In that scenario, a person who did not know Chinese could nevertheless translate a Chinese sentence into English or any other language by using a set of instructions, or rules, that would substitute for understanding. The thought experiment raises questions about what, at root, it means to understand anything at all, and whether understanding implies that something else is happening in the mind besides following rules.

Qin was inspired in part by Oxford philosopher Nick Bostrom's philosophical thought experiment that the universe is a computer simulation. If that were true, then fundamental physical laws should reveal that the universe consists of individual chunks of space-time, like pixels in a video game. "If we live in a simulation, our world has to be discrete," Qin said. The black box technique Qin devised does not require that physicists believe the simulation conjecture literally, though it builds on this idea to create a program that makes accurate physical predictions.

The resulting pixelated view of the world, akin to what is portrayed in the movie The Matrix, is known as a discrete field theory, which views the universe as composed of individual bits and differs from the theories that people normally create. While scientists typically devise overarching concepts of how the physical world behaves, computers just assemble a collection of data points.

Qin and Eric Palmerduca, a graduate student in the Princeton University Program in Plasma Physics, are now developing ways to use discrete field theories to predict the behavior of particles of plasma in fusion experiments conducted by scientists around the world. The most widely used fusion facilities are doughnut-shaped tokamaks that confine the plasma in powerful magnetic fields.

Fusion, the power that drives the sun and stars, combines light elements in the form of plasma (the hot, charged state of matter composed of free electrons and atomic nuclei that represents 99% of the visible universe) to generate massive amounts of energy. Scientists are seeking to replicate fusion on Earth for a virtually inexhaustible supply of power to generate electricity.

"In a magnetic fusion device, the dynamics of plasmas are complex and multi-scale, and the effective governing laws or computational models for a particular physical process that we are interested in are not always clear," Qin said. "In these scenarios, we can apply the machine learning technique that I developed to create a discrete field theory and then apply this discrete field theory to understand and predict new experimental observations."

This process opens up questions about the nature of science itself. Don't scientists want to develop physics theories that explain the world, instead of simply amassing data? Aren't theories fundamental to physics and necessary to explain and understand phenomena?

"I would argue that the ultimate goal of any scientist is prediction," Qin said. "You might not necessarily need a law. For example, if I can perfectly predict a planetary orbit, I don't need to know Newton's laws of gravitation and motion. You could argue that by doing so you would understand less than if you knew Newton's laws. In a sense, that is correct. But from a practical point of view, making accurate predictions is not doing anything less."

Machine learning could also open up possibilities for more research. "It significantly broadens the scope of problems that you can tackle because all you need to get going is data," Palmerduca said.

The technique could also lead to the development of a traditional physical theory. "While in some sense this method precludes the need for such a theory, it can also be viewed as a path toward one," Palmerduca said. "When you're trying to deduce a theory, you'd like to have as much data at your disposal as possible. If you're given some data, you can use machine learning to fill in gaps in that data or otherwise expand the data set."

Reference: "Machine learning and serving of discrete field theories" by Hong Qin, 9 November 2020, Scientific Reports. DOI: 10.1038/s41598-020-76301-0

Continued here:
New Machine Learning Theory Raises Questions About the Very Nature of Science - SciTechDaily