Programming language Rust: Mozilla job cuts have hit us badly but here’s how we’ll survive – ZDNet

The open-source project behind the Mozilla-founded systems programming language, Rust, has announced a new Rust foundation to boost its independence following Mozilla's recent round of pandemic layoffs.

Firefox-maker Mozilla's decision last week to cut 250 roles, or 25% of its workforce, has taken a toll on the open-source project behind Rust. Mozilla is the key sponsor of Rust and provides much of the language's infrastructure as well as core talent.

Some Mozilla contributors to five-year-old Rust did lose their jobs in the cuts, prompting speculation that heavier losses on the team behind Mozilla's Servo browser engine, a core user of Rust, might pose an existential threat to the young language.

Rust's demise would be bad news for the growing number of developers exploring it, for systems programming rather than application development, as a modern, memory-safe alternative to C and C++.

Rust is now in developer analyst RedMonk's top 20 most-popular language rankings, and it is being used at Amazon Web Services (AWS), Microsoft and Google Cloud, among others, for building platforms. And while Mozilla is the main sponsor of Rust, AWS, Microsoft Azure and Google Cloud have come on board as sponsors too.

However, discussing Mozilla's layoffs, Steve Klabnik, a Rust Core member, has pointed out that the Rust community is much bigger than the number of Mozilla employees who contributed to the project and were affected by the layoffs.

"Rust will survive," wrote Klabnik in a post on Hacker News. "This situation is very painful, and it has the possibility of being more so, but Rust is bigger than Mozilla."

Nonetheless, as a project born in Mozilla Research and supported heavily by Mozilla, Rust remains entrenched in Mozilla's infrastructure, which, for example, hosts the Rust package manager, crates.io.

"Mozilla employs a small number of people to work on Rust full time, and many of the Servo people contributed to Rust too, even if it wasn't their job," Klabnik wrote.

"[Mozilla] also pays for the hosting bill for crates.io. They also own a trademark on Rust, Cargo, and the logos of both. Two people from the Rust team have posted about their situation, one was laid off and one was not. Unsure about the others. Many of the Servo folks (and possibly all, it's not 100% clear yet but it doesn't look good) have been laid off."

But Klabnik notes that the "vast majority" of Rust contributors are not employed by Mozilla, even though Mozilla's talent and infrastructure are important to the language's survival.

To resolve issues around ownership and control, the Rust Core team and Mozilla are accelerating plans to create a Rust foundation, which they expect to be operating by the end of the year.

"The various trademarks and domain names associated with Rust, Cargo, and crates.io will move into the foundation, which will also take financial responsibility for the costs they incur. We see this first iteration of the foundation as just the beginning," the Rust Core team said in a blog post this week.

"There's a lot of possibilities for growing the role of the foundation, and we're excited to explore those in the future," it added.

Addressing the question of Rust's demise, the team noted that it was a "common misconception that all the Mozilla employees who participated in Rust leadership did so as a part of their employment". Instead, some leaders were contributing to Rust on a voluntary basis rather than as part of their job at Mozilla.

The Rust language project has also selected a team to lead the creation of the Rust foundation, including Microsoft Rust expert Ryan Levick and Josh Triplett, a former Intel engineer and a lead of the Rust language team.

Microsoft Azure engineers are exploring Rust for a Kubernetes container tool, and Microsoft recently released a public preview of Rust/WinRT, or Rust for the Windows Runtime (WinRT), to support Rust developers who build Windows desktop apps, store apps, and components like device drivers.

While a primary sponsor like AWS, Microsoft or Google Cloud could be good news for Rust, the Rust Core team says it doesn't want to rely too heavily on just one sponsor.

"While we have only begun the process of setting up the foundation, over the past two years the Infrastructure Team has been leading the charge to reduce the reliance on any single company sponsoring the project, as well as growing the number of companies that support Rust," the Rust Core team said.

Top 10 Languages That Paid Highest Salaries Worldwide In 2020 – Analytics India Magazine

Recently, the Stack Overflow Developer Survey 2020 polled about 65,000 developers on their daily-used programming languages, go-to tools, libraries and more. The survey stated: "Globally, respondents who use Perl, Scala, and Go tend to have the highest salaries, with a median salary around $75k." Looking at US jobs only, Scala developers tend to have the highest salaries.

Here, we list down the top 10 programming languages from the survey that paid the highest salaries worldwide in 2020.

(The list is in alphabetical order).

Bash

Rank: 6th

Average Salary: $65k

About: Bash is a Unix shell: a command-line interface for interacting with the operating system. Developers commonly use shell scripts for system administration tasks such as performing disk backups and evaluating system logs.

The job profiles that require shell script programming include automation engineer, application server expert, SysOps network engineer, among others.

Go

Rank: 3rd

Average Salary: $74k

About: Go is an open-source programming language that helps in building simple, reliable and efficient software. The language is expressive, concise, clean and efficient, and it offers features such as garbage collection.

The job profiles that require Go programming include Go language developer, software development engineer, senior research engineer, among others.

Haskell

Rank: 8th

Average Salary: $60k

About: Haskell is an open-source, purely-functional programming language that allows rapid development of robust software. The features of this language include strong support for integration with other languages, built-in concurrency and parallelism, and more. It includes debuggers, profilers and rich libraries.

The job profiles that require Haskell programming include senior Haskell engineer, senior software engineer, full-stack engineer, among others.

Julia

Rank: 9th

Average Salary: $59k

About: Julia is a flexible, dynamic language, appropriate for scientific as well as numerical computing, with performance comparable to traditional statically-typed languages. The developers can use Julia for specialised domains such as machine learning, data science, etc.

The job profiles that require Julia programming include data scientist, machine learning engineer, senior software developer, among others.

Objective-C

Rank: 7th

Average Salary: $64k

About: Objective-C is an object-oriented programming language and is the primary programming language when writing software for OS X and iOS. Objective-C inherits the syntax, primitive types, and flow control statements of C and adds syntax for defining classes and methods. It also adds language-level support for object graph management and object-literals while providing dynamic typing and binding, deferring many responsibilities until runtime.

The job profiles that require Objective-C programming include iOS developer, quality assurance engineer, mobile software developer, among others.

Perl

Rank: 1st

Average Salary: $76k

About: Perl is a highly capable, feature-rich programming language which runs on over 100 platforms from portables to mainframes and is suitable for both rapid prototyping and large scale development projects.

The language is easily extensible, object-oriented and offers Unicode support, among other features. The job profiles that require Perl programming include Perl developer and lead Perl developer, among others.

Python

Rank: 10th

Average Salary: $59k

About: Python is an interpreted, object-oriented, high-level programming language with dynamic semantics. It has high-level built-in data structures, combined with dynamic typing and dynamic binding. The simple, easy-to-learn syntax of this language emphasises readability and therefore reduces the cost of program maintenance.
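As a quick, hypothetical illustration of those claims (built-in data structures plus dynamic typing and binding), the short snippet below reuses the median salaries quoted in this list; the variable names are purely illustrative.

```python
# Built-in data structures: a list of dicts, with no class definitions required.
salaries = [
    {"language": "Perl", "median_usd": 76_000},
    {"language": "Scala", "median_usd": 76_000},
    {"language": "Python", "median_usd": 59_000},
]

# Dynamic typing and binding: the same name can be rebound to a different type at runtime.
top = max(salaries, key=lambda row: row["median_usd"])       # a dict
top = f"{top['language']} (${top['median_usd'] // 1000}k)"   # now a string

print(top)  # Perl ($76k)
```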

The job profiles that require Python programming include Python developer, data scientist, research analyst, data analyst, among others.

Rust

Rank: 4th

Average Salary: $74k

About: Rust is a multi-paradigm programming language that helps in building reliable and efficient software. Its features include memory efficiency, a rich type system, and guarantees of memory safety and thread safety, among others.

The job profiles that require Rust programming include firmware engineer, principal software engineer, autonomous systems AI engineer, among others.

Ruby

Rank: 5th

Average Salary: $71k

About: Ruby is a dynamic, open-source programming language with a focus on simplicity and productivity. Its features include flexibility, exception handling, a mark-and-sweep garbage collector, OS-independent threading and high portability.

The job profiles that require Ruby programming include Ruby on Rails developer, technical architect, senior backend developer, among others.

Scala

Rank: 2nd

Average Salary: $76k

About: Scala is a high-level programming language that combines object-oriented and functional programming. The static types of this language help avoid bugs in complex applications, and its JVM and JavaScript runtimes let a developer build high-performance systems with easy access to vast ecosystems of libraries.

The job profiles that require Scala programming include big data developer, Scala developer, data engineer, machine learning engineer, among others.

Programming language Kotlin 1.4 is out: This is how it’s improved quality and performance – ZDNet

Developer tools maker JetBrains has released version 1.4 of Kotlin, the increasingly popular programming language promoted by Google for Android app development.

Kotlin has become one of the fastest-growing languages on GitHub and now ranks as one of the top five most-loved languages by developers who use Stack Overflow, while ranking 19th on RedMonk's list of most popular languages.

While it is popular among Android app developers, JetBrains points out that Kotlin is also used for server-side development, and for targeting iOS, the web, Windows, macOS and Linux.

It also says 5.8 million people have edited Kotlin code over the past year, up from 3.5 million people a year earlier.

Kotlin 1.4, released this week, addresses over 60 performance issues that were causing integrated development environment (IDE) problems.

Developers should notice autocomplete suggestions appearing significantly faster, though these require JetBrains' IntelliJ IDEA 2020.1+ IDE and Android Studio 4.1+, which is co-developed by Google and JetBrains.

It also introduces a new coroutine debugger tab in the Debug Tool Window to help developers check the state of coroutines, and a new Kotlin Project Wizard for creating and configuring different types of Kotlin projects.

JetBrains is also working on a new Kotlin compiler that aims to eventually unify all the platforms Kotlin supports; in 1.4 this work starts with a new type-inference algorithm that was already available in Kotlin 1.3 and is now used by default.

There are also new Java Virtual Machine (JVM) and JavaScript (JS) backends in 1.4, which are available in Alpha mode now but will become the default after being stabilized.

These backends, Kotlin/JVM and Kotlin/JS, are being migrated to the same internal representation (IR) for Kotlin code as the Kotlin/Native backend. The outcome is that all three backends share a common background infrastructure, which should make it easier to implement bug fixes and features once for all platforms.

Kotlin/Native also gains performance improvements to compilation and execution, as well as improved interoperability with Swift and Objective-C for iOS and macOS development.

Finally, Kotlin Multiplatform is an early-stage project that aims to help save effort when maintaining the same code for different platforms.

There's a new hierarchical project structure that allows developers to share code between a subset of similar targets, like iOS Arm64 devices and an x86 simulator target. This offers developers more flexibility than the current ability to share code on all platforms.

Linux Foundation showcases the greater good of open source – ComputerWeekly.com

The role of open source collaboration was highlighted during a presentation to tie in with the start of the Linux Foundation's KubeCon and Cloud Native Computing Foundation (CNCF) virtual conferences.

Many believe that open source is the future of software development. For instance, in a recent conversation with Computer Weekly, PayPal CTO Sri Shivananda said: "It is impossible for you to hire all the experts in the world. But there are many more people creating software because they have a passion to do it."

These passionate software developers not only help the wider community by contributing code, but they also help themselves. "You can help others as well as helping yourself," said Jim Zemlin, executive director of the Linux Foundation.

As an example, Zemlin described how work on the Open Mainframe Project, involving free Cobol programming training, is helping to fill the skills gap created by the need, during the Covid-19 pandemic, to update legacy government systems. The initiative, supported by IBM, has helped IBM to grow the pool of Cobol and mainframe skills needed to support its clients.

The project, which began in April 2020, has helped to support some US states that faced temporary challenges when they needed to process a record number of unemployment claims.

Zemlin highlighted another open source project, OpenEEW, an earthquake early warning system, originally developed by Grillo. Grillo developed EEW systems in Mexico and Chile that have been issuing alerts since March 2018.

Earlier this month, IBM announced that it would play a role in supporting Grillo by adding the OpenEEW earthquake technology into the Call for Code deployment pipeline supported by the Linux Foundation.

IBM said it has deployed a set of six of Grillo's earthquake sensors and is conducting tests in Puerto Rico, complementing Grillo's tools with a new Node-RED dashboard to visualise readings. IBM said it was extending a Docker software version of the detection component that can be deployed to Kubernetes and Red Hat OpenShift on the IBM Cloud.

At the start of the conference, Berlin-based online fashion retailer Zalando was recognised for its contribution to the Kubernetes ecosystem. The retailer has contributed to Kubernetes itself, and has created and maintained a number of open source projects to expand the ecosystem. These include Skipper, Zalando's Kubernetes Ingress proxy and HTTP router; Postgres Operator, an operator to run PostgreSQL on Kubernetes; kube-metrics-adapter, which uses custom metrics for autoscaling; and an ingress controller for AWS.

Henning Jacobs, senior principal engineer at Zalando, said: "Zalando's need to massively scale led us on a cloud-native journey that has ultimately made our developers happier and enabled us to grow with customer demand. The Kubernetes and cloud-native community has been such a valuable resource for us that we are dedicated to actively continuing giving back in any way we can."

A key aspect of open source is that it represents a community of software developers who are able to collaborate across the globe in a way that enables sophisticated software products to be built and maintained.

During the event, a number of individual developers were recognised for their contribution to the Jaeger project, which was originally submitted by Uber Engineering in 2015. The project, which has more than 1,700 contributors, is an open source distributed tracing tool for microservices.

Learn More About Dynamo for Revit: Features, Functions, and News – ArchDaily

There's no shortage of architectural software these days and it can be challenging and overwhelming to know what tools will be the best fit for your work. Often the programs you learned in school or whatever your firm uses are what you stick with. However, it's often beneficial to step out of that comfort zone and investigate your options to see what else is out there. New software can present opportunities to simplify existing workflows or even bring new digital capabilities to you and your firm.

One such program is Dynamo, a plug-in for Autodesk Revit. Dynamo is an open source visual programming language for Revit, written by designers and construction professionals. It is a programming language that allows you to type lines of code while also creating algorithms that consist of nodes. If you're not familiar with Dynamo yet, you can take a complete online course to learn how to use it - first, read on for an introduction to Dynamo.

What is visual programming?

Establishing visual, systemic, and geometric relationships between the different parts of a drawing is key to the design process. Workflows influence these relationships from the concept stage to final design. Similarly, programming allows us to establish a workflow, but through formalizing algorithms.

What does Dynamo do?

Using Dynamo, you can work with enhanced BIM capabilities in Revit. Dynamo and Revit together can be utilized to model and analyze complex geometries, automate repetitive processes, minimize human error, and export data to Excel files and other file-types not typically supported by Revit. Dynamo can make the design process more efficient, with an intuitive interface and many pre-made scripting libraries available as well. GoPillar's online course doesn't require any previous coding experience or familiarity with Dynamo, but it's helpful if you're already a Revit user.

What does Dynamo look like?

The user interface for Dynamo is organized into five main areas, the largest being the workspace where programs are processed. The user interface sections include: the main menu, the toolbar, the library, the workspace* and the execution bar.

*Note: The workspace is the place where designers develop visual programs, but it is also the place where any generated geometry is previewed. Whether you are working in the home workspace or in a custom node, you can navigate the workspace using the mouse or the buttons at the top right. You can also change the navigation preview by switching from one mode to another at the lower right.

How does Dynamo work?

Once the application is installed, you can connect different elements to define custom algorithms made up of relationships and sequences. These algorithms can be used in many applications, from data processing to geometry generation, with minimal programming code. Dynamo is considered a visual programming tool specifically for designers, as well as software that can create tools; these tools can use external libraries or any Autodesk product that has an API.
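For readers who want to see what scripted logic in Dynamo can look like, below is a rough sketch of the kind of code that could sit inside a Dynamo Python script node. It assumes the ProtoGeometry library that ships with Dynamo; the sine-wave example and the input values are purely illustrative, not taken from the article.

```python
# Sketch of a Dynamo Python script node: generate a sine-wave row of points
# that downstream nodes could consume, e.g. to drive geometry in Revit.
import math
import clr

clr.AddReference('ProtoGeometry')                 # Dynamo's bundled geometry library
from Autodesk.DesignScript.Geometry import Point

# IN is the list of the node's input ports; fall back to defaults if unconnected.
count = IN[0] if IN and IN[0] is not None else 20
spacing = IN[1] if len(IN) > 1 and IN[1] is not None else 1.0

points = [
    Point.ByCoordinates(i * spacing, math.sin(i * spacing), 0)
    for i in range(int(count))
]

OUT = points                                      # whatever is assigned to OUT leaves the node
```

Wiring number sliders into the node's inputs and a geometry preview onto its output gives the same interactive feel as a purely node-based graph, while keeping the logic in a few lines of code.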

How can using Dynamo help problem-solve?

Dynamo users are usually architects, engineers, and construction professionals, so the problems that Dynamo solves essentially relate to parametric modeling, analysis of BIM data, automation of documentation, and the exchange of information between different software packages.

What is the latest version of Dynamo?

The latest version of Dynamo for Revit is an excellent upgrade for those still using the 2017-2019 versions of Revit. With this update, significant performance improvements have been made. First, the checkboxes of the Analytics UI collection have been updated for clarity and follow users' choices more efficiently. Next, instrumentation is subject to Analytics and cannot be activated unless it is selected in advance to send analysis information. This feature also helps to clean up the Settings drop-down menu by removing one of the checkbox items and making the dashboard more streamlined.

Dynamo for Revit 2021 Generative Design Tool

The Generative Design tool in Revit is a 2021 feature that creates a set of design outputs based on user-specified inputs, constraints and goals. Revit uses Dynamo to iterate over various input values and generate outputs based on the set goals. Within Dynamo, specific nodes are used to characterize the parameters in the generative design process, and these will modify the design outputs created by the Revit tool.

Dynamo provides the framework behind Revit's internal generative design tool. Understanding how to work with a Dynamo script created for generative design will deepen professionals' knowledge of the process and help them manipulate the Revit tool and the way it acts. By adjusting the nodes and values in Dynamo that are linked to generative design parameters, you can change the options once the Generative Design tool has been run in Revit.

How are files saved and what format(s) does the software work on?

The new version of Dynamo saves files in the JavaScript Object Notation (JSON) format instead of the XML-based .dyn and .dyf formats. Custom graphs and nodes created in Dynamo are not compatible with previous versions of the program. However, upon installation of the new version, existing 1.x files will be kept and converted to the new format (with a backup copy saved in the original format).

The Dynamo Node Library

The node library has been reorganized to reduce redundancy and ease user navigation. All custom and non-default nodes are displayed under a sub-item known as an "Add-on". In addition, you can now resize and collapse the library window by manipulating the right edge of the panel. When working with nodes, remember that nodes are drawn in a precise order, so processed objects may be displayed on top of each other. This can be confusing when adding multiple nodes sequentially, as they can appear in the same location in the workspace.

If you are interested in learning more about Dynamo, our partner GoPillar Academy has recently launched an online course on this software, suitable for all skill levels. ArchDaily readers have access to an exclusive discount of $99 USD ($349 USD regular price), valid until August 31st!

NITK figures 4th in Google Summer of Code ranking – BusinessLine

The National Institute of Technology Karnataka (NITK) at Surathkal in Mangaluru taluk has been ranked fourth globally in the list of universities with the most accepted students for GSoC (Google Summer of Code) 2020.

As many as 23 students from NITK got selected for GSoC 2020. A total of 1,198 students from 550 universities globally are participating in GSoC 2020.

GSoC is a global programme organised by the Google Open Source team with the aim of introducing students to open source software development. The students are paired with mentors from open source organisations to work on a programming-intensive project. The GSoC programme runs from June to August 2020.

A press release by the institute said that there has been an organised, voluntary effort, led by Mohit P Tahiliani of the Department of Computer Science and Engineering, to help interested students from various departments of NITK plan out open source activities at the institute in a structured way.

The number of students participating in GSoC has increased in the past two years, thereby showing the growth of NITK in the field of open source contributions, it said.

The Google Open Source blog has named the 12 universities with the most accepted students for GSoC 2020, and NITK figures fourth on that list.

Why ASP.NET Core Is Regarded As One Of The Best Frameworks For Building Highly Scalable And Modern Web Applications – WhaTech

ASP.NET Core entered the market and created a buzz for web applications.

It made cross-platform apps highly flexible. ASP.NET Core is one of the best web frameworks, providing numerous benefits to developers, and web applications themselves are increasingly popular for the benefits they offer.

Let us understand a few terminologies first:

A web application is a computer program that runs on a web server and can be used from any web browser. A typical web application combines a server-side language such as PHP or ASP in the back end with HTML and JavaScript in the front end to run smoothly. Through web applications, customers can use features such as shopping carts and forms, much as they would in native applications. Well-known examples of web applications include Google apps and Microsoft 365. Microsoft has multiple web applications.

Microsoft uses ASP.NET Core to build a strong framework that supports smooth coding and manages many tasks. If you are thinking of using this framework in your next project, you can reach out to a Microsoft web app development services provider.

A framework is a platform on which you can build programs and develop applications. Because a framework is well tested and reused many times, it can be used to create highly functional applications, and it helps developers write more secure, less bug-prone code in less time. Frameworks exist for every genre of software; here we focus on a web application framework, ASP.NET Core.

ASP.NET Core is regarded as one of the best frameworks for building highly scalable and modern web applications thanks to its feature set. It is an open-source framework that, unlike ASP.NET, can be used by all, and it is supported by the Microsoft community. Are you thinking of using this framework for your next project to develop dynamic web applications? You can reach out to a Microsoft web app development services provider. To learn about it in detail, have a look at its features:

ASP.NET Core has ample features that make it the best framework for web applications. ASP.NET Core is an open-source framework that is highly popular among web application developers; it can run on multiple platforms and integrates with many useful tools. Here, we have listed some of the best features of ASP.NET Core. By now, you should have a concise idea of why ASP.NET Core is the most preferred framework for web applications. It is high time that you design web applications and expand your business on digital platforms. If you are thinking of developing a web application, then what better than using ASP.NET Core as the framework? To use this framework for your next web project, you can reach out to a dot net development company.

Use Pulumi and Azure DevOps to deploy infrastructure as code – TechTarget

Infrastructure as code makes IT operations part of the software development team, with scalable and testable infrastructure configurations. To reap the benefits, IaC tools integrate with other DevOps offerings, as in this tutorial for Pulumi with Microsoft Azure DevOps.

Pulumi provides infrastructure as code provisioning, while Azure DevOps provides version control and a build and release tool. Together, they form a pipeline to define, build, test and deploy infrastructure, and to share infrastructure configurations. Follow this tutorial to develop infrastructure code in C# with Pulumi, unit test it using the open source NUnit framework, and safely deliver it via the Azure DevOps ecosystem. First, get to know Pulumi.

The Pulumi approach puts infrastructure into common development programming languages, rather than a domain-specific language (DSL) used only for the tool. This means that an infrastructure blueprint for a project can use .NET Core, Python or another supported language that matches the application code. HashiCorp Terraform uses the HashiCorp Configuration Language, or JSON or YAML formatting, to define infrastructure as code. Similarly, Azure Resource Manager has limitations on how a user can apply logic and test it.

Note: In July 2020, HashiCorp added the ability to define infrastructure using TypeScript and Python as a feature in preview.

Because Pulumi uses a real programming language for infrastructure code, the same set of tools can build, test and deploy applications and infrastructure. It has built-in tools to assist IT engineers as they develop, test and deploy infrastructure. Pulumi is designed to deploy to cloud providers, including AWS and Azure.
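As a concrete, simplified illustration of what that looks like, here is a minimal Pulumi program. The tutorial's own example is written in C#, so this Python sketch is only illustrative; it assumes the pulumi and pulumi-azure-native packages and an Azure account already configured for the CLI, and the resource names are hypothetical.

```python
"""__main__.py of a hypothetical Pulumi project using the Python runtime."""
import pulumi
from pulumi_azure_native import resources, storage

# Ordinary Python code declares the desired infrastructure...
rg = resources.ResourceGroup("webserver-rg")

account = storage.StorageAccount(
    "webserversa",
    resource_group_name=rg.name,
    sku=storage.SkuArgs(name=storage.SkuName.STANDARD_LRS),
    kind=storage.Kind.STORAGE_V2,
)

# ...and ordinary language features (functions, loops, unit tests) apply to it.
pulumi.export("resource_group", rg.name)
pulumi.export("storage_account", account.name)
```

Running pulumi preview and pulumi up against a program like this is what the Preview and Deploy stages described later in this tutorial automate.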

To follow this Pulumi tutorial, get to know these terms:

This tutorial starts with one of Pulumi's example apps for building a website. To highlight the integration with Azure DevOps, we make some modifications to the example app repository:

See the AzDOPulumiExample repository in Azure DevOps, and its ReadMe file for the modifications made to the example app.

Microsoft provides Azure DevOps, but the service is not tied to one language, platform or cloud. It includes many DevOps orchestration services, such as Azure Boards to track a software project and Pipelines to build, test and share code.

This tutorial uses repositories and Azure Pipelines to automatically build, test and release code. Azure Pipelines is a cloud service. It supports pipeline as code, because the user can store the pipeline definition in version control. Within Azure Pipelines, this tutorial relies on a pipeline, which describes the entire CI/CD process through definitions made up of steps, grouped into jobs, which are in turn divided into stages.

The IT organization controls Azure Pipelines through both manual and programmatic means.

To get started with the Pulumi example in this tutorial, create a sample Pulumi stack along with some unit tests. There is a sample repository with source code for the project called WebServerStack, as seen in Figure 1. Start by cloning this example repository locally.

Once the repository is cloned, you can build and test the project locally by using dotnet build and dotnet test commands, respectively.

To set up Azure DevOps, start with an Azure DevOps organization with repository and pipeline enabled. For this tutorial, I created an Azure DevOps organization named dexterposh. In the figures, you see this organization called AzDO, for Azure DevOps.

Under the AzDO organization, create a repository named AzDOPulumiExample for the Pulumi code and tests project. Create an Azure Resource Manager service connection to connect to an Azure subscription.

Next, create an environment named dev and add manual approval so that the engineer controls what deploys. Without manual approvals, Azure DevOps will automatically create and deploy to the environment. Environments can only be created via the Azure DevOps portal.

Finally, install the Pulumi extension in your Azure DevOps organization.

This integration with Azure DevOps enables us to make build and release stages for the Pulumi Project. We can also extend the pipeline to provision changes to the environment. Stages are logical divisions meant to mimic different phases in an application's lifecycle.

In the stage titled Build, Test & Release, Azure DevOps will build the project, run tests and then package and publish an artifact. The Preview Stage lets the engineer or project team preview the changes to the infrastructure. Finally, in the Deploy stage, we can approve the changes and make them go live in the environment to provision infrastructure.

A high-level overview of these stages is diagrammed in Figure 2, and the final integration is shown in Figure 3.

In Azure DevOps, create a stage called Build, Test & Release. Add the file named azure-pipelines.yml at the root of our repository, which the AzDO organization picks up by default as the pipeline definition. (Editor's note: Both .yaml and .yml are YAML file extensions.)

At the top of the pipeline definition in azure-pipelines.yml, we define several things.

After defining a stage, execute it on an agent with a job. The job will execute all the steps. The details of the BuildTestReleaseJob are shown in Figure 5.

In this set of commands, $(vmImage) refers to the variable that we define later in the YAML file.

To build a .NET app, we fetch the dependencies it references. The agent where the code will be built is new and does not have this information yet. For all the .NET Core-based tasks here, we use the official .NET Core CLI task, available by default. Add the task, shown as DotNetCoreCLI@2, to restore the project dependencies.

The next step in the infrastructure code's lifecycle is to build it. The build step ensures that the Pulumi code, along with all the dependencies, can be compiled into the .NET framework's Intermediate language files with a .dll extension and a binary file. The .NET Core CLI task works here as well.

A successful build confirms that dependencies are pulled in successfully, there are no syntactical errors, and the .dll file was generated. Then, run tests to ensure that there are no breaking changes. Use the .NET CLI task for this step.

Run the task dotnet publish against the .NET app to generate an artifact. The artifact is what later stages will use. Once published, the .NET app and all the dependencies are available in the publish folder, which we can archive as a zip file for later use.

Look at the argument specified to place the output to the $(Build.ArtifactStagingDirectory) variable, which represents a folder path on the agent to place build artifacts.

With the artifact ready, archive it and publish it as a build artifact. Azure Pipelines performs this step with the task named PublishBuildArtifacts. Specify the variable $(Build.ArtifactStagingDirectory) as the path to the zip file and the published build artifact is named 'pulumi.'

In this Pipeline stage, we built, tested and released the infrastructure as code from Pulumi with multiple tasks under the BuildTestRelease job. The next stage utilizes Pulumi tooling to generate a preview and then finally deploy the project.

With infrastructure code, we can extend the pipeline to generate a preview. The Preview stage is similar to a Terraform execution plan, which describes how it will get to the desired state. The Preview stage assists the engineer in reviewing the effect of changes when they deploy to an environment.

A YAML-based definition for the Preview stage, shown below, is added to the stages list in the pipeline definition.

The stage contains a job, PreviewJob. Let's review what each step inside the job does.

1. Template reference to build/downloadArtifact.yml. It contains another two tasks: to download the build artifact from the previous stage and to extract the zip file from the artifact. Here, it downloads the pulumi named artifact and makes it available in the path $(System.ArtifactsDirectory).

2. Template reference to build/configurePulumi.yml. It contains another two tasks: one to run the configure command and another to install the Azure extension to use with Pulumi. A plugin was added as a workaround to install Pulumi and the Azure extension required.

Note: We created separate template YAML files, called downloadArtifact.yml and configurePulumi.yml, to avoid issues when these steps repeat again in the Deploy phase. The configurePulumi.yml steps template was needed as a workaround for the Pulumi task, which failed on Azure DevOps with an error message asking to install the Azure plugin on the agent. Pulumi says the error relates to a limitation when using binary mode with plugin discovery.

3. Finally, a task runs the Pulumi preview command to generate a preview of the changes to be deployed to the infrastructure.

The Deploy stage is the last part of this DevOps pipeline. It uses the Azure DevOps environment and manual approvals.

The setup defines the stage with a job and multiple steps within the job:

This stage relies on the DeployJob job. Here's what each step inside the job does:

Once approved, the previewed changes are deployed, as shown in Figure 9.

After following this tutorial, DevOps teams can assess the benefits of combining Pulumi and Azure DevOps for infrastructure as code. With a common programming language rather than a DSL, infrastructure code matches application code. These programming languages are in use globally with several years of maturity in terms of how to test, build and package code. The combination of Pulumi with the Azure DevOps services creates a CI/CD pipeline for that infrastructure code. It can also extend to change management, with preview capabilities and manual approvals as needed before code deploys to an environment.

SiFive Opens Business Unit to Build Chips With Arm and RISC-V Inside – Electronic Design

Part 1 of this two-part series considered what the end of Moore's Law means for organizations struggling to manage the data deluge. Cloud service providers noticed the demise of Moore's Law early on and acted promptly to address the slowdown in performance gains. They have looked at many technologies, such as GPUs, NPUs, and even building their own ASIC chips. However, another alternative is emerging, one that could be even more versatile and powerful than any acceleration options currently available: FPGAs.

The technology has been around for decades, but the ubiquity of this highly configurable technology, coupled with proven performance in a variety of acceleration use cases, is coming to cloud service providers' attention. Could this be the ultimate answer to bridging the gap between the compute needs of the future and the flattening performance curve of server CPUs?

FPGAs Gain Steam

"Everything old is new again" isn't often true in the technology world, but it can be said of field-programmable gate arrays (FPGAs), which have been around for more than 40 years. They have traditionally been used as an intermediary step in the design of application-specific integrated-circuit (ASIC) semiconductor chips. The advantage of FPGAs is that they require the same tools and languages as those used to design semiconductor chips, but it's possible to rewrite or reconfigure the FPGA with a new design on the fly. The disadvantage is that FPGAs are bigger and more power-hungry than ASICs.

It became harder and harder to justify making the investment in ASIC production, though, as the cost of producing ASICs began to increase. At the same time, FPGAs became more efficient and cost-competitive. It therefore made sense to remain at the FPGA stage and release the product based on an FPGA design.

Now, many industries take advantage of FPGAs, particularly in networking and cybersecurity equipment, where they perform specific hardware-accelerated tasks.

In 2010, Microsoft Azure started looking into using FPGA-based SmartNICs in standard servers to offload compute- and data-intensive tasks from the CPU to the FPGA. Today, these FPGA-based SmartNICs are used broadly throughout Microsoft Azure's data centers, supporting services like Bing and Microsoft 365.

When it became clear that FPGAs were a legitimate option for hardware acceleration, Intel bought Altera, the second-largest producer of FPGA chips and development software, for $16 billion in 2015. Since then, several cloud companies have added FPGA technology to their service offerings, including AWS, Alibaba, Tencent, and Baidu, to name a few.

The Many Benefits of FPGAs

FPGAs are attractive for several reasons. One is that they offer a nice compromise between versatility, power, efficiency, and cost. Another is that FPGAs can be used for virtually any processing task. It's possible to implement parallel processing on an FPGA, but other processing architectures can be implemented as well.

Yet another attraction of FPGAs is that details such as data-path widths and register lengths can be tailored specifically to the needs of the application. Indeed, when designing a solution on an FPGA, it's best to have a specific use case and application in mind in order to truly exploit the power of the FPGA.

Even just considering the two largest vendors, Xilinx and Intel, there's a vast array of choice for FPGAs when it comes to power. For example, compare the smallest FPGAs, which can be used on drones for image processing, to extremely large FPGAs that can be used for machine learning and artificial intelligence. FPGAs generally provide very good performance per watt. Take FPGA-based SmartNICs: they can process up to 200 Gb/s of data without exceeding the power requirements of server PCIe slots.

It's possible to create highly efficient solutions with FPGAs that do just what is required, when required, because FPGAs are reconfigurable and can be tailored specifically to the application. One of the drawbacks of generic multiprocessor solutions is that there's an overhead in cost due to their universal nature. A generic processor can do many things well at the same time, but it will always struggle to compete with a specific processor designed to accelerate a specific task.

With the wide selection of FPGAs available, you should be able to find the right model at the right price point for your application needs. Like any chip technology, the cost of a chip reduces dramatically with volume; this is also the case with FPGAs. They're widely used today as an alternative to ASIC chips, providing a volume base and competitive pricing that's only set to improve over the coming years.

Only the Beginning

The end of Moore's Law and its rapid doubling of processing power doesn't sound the death knell for computing. But it does mean that we must reconfigure our assumptions of what constitutes high-performance computing architectures, programming languages, and solution design. Hennessy and Patterson (see Part 1) even refer to this as the start of a new golden age in computer and software architecture innovation. Wherever that innovation may lead, it's safe to say that server acceleration is possible now, and FPGAs provide an agile alternative with many benefits to consider.

Daniel Proch is Vice President of Product Management at Napatech.

Hadoop Developer Interview Questions: What to Know to Land the Job – Dice Insights

Interested in Apache Hadoop as a building block of your tech career? While you're on the job hunt, Hadoop developer interview questions will explore whether you have the technical chops with this open-source framework, especially if you're going for a role such as data engineer or B.I. specialist.

Hadoop allows firms to run data applications on large, often distributed hardware clusters. While it takes technical skill to create the Hadoop environment necessary to process Big Data, other skill sets are required to make the results meaningful and actionable. The fast-changing Hadoop environment means that candidates should have flexibility and openness to new innovations.

Every few years, it seems, pundits begin predicting Hadoop's demise, killed by the cloud or some competing technology or, well, something else. But according to Burning Glass, which collects and analyzes job postings from across the country, Hadoop-related jobs are expected to grow 7.8 percent over the next 10 years. That's not exactly dead tech, to put it mildly. Moreover, the median salary is $109,000 (again, according to Burning Glass), which makes it pretty lucrative; that compensation can rise if you have the right mix of skills and experience.

As you can see from the chart below, Hadoop pops up pretty frequently as a requested skill for data engineer, data scientist, and database architect jobs. If you're applying for any of these roles, the chances of Hadoop-related questions are high:

Dice Insights spoke to Kirk Werner, vice president of content at Udacity, to find out the best ways to prepare for Hadoop developer interview questions, the qualities that make a good candidate, and how practice before the interview makes perfect.

Werner notes Hadoop is designed for big, messy data and doesn't work well with multiple database strings.

"There's a lot of times in tech when the buzzword is the next big thing, and a lot of people heard Hadoop and thought they had to run all their data through these systems, when actually it's for a lot of big, messy data," he said. "Understanding that is the most important challenge for people to overcome: is this the right tool to use? The reality is you've got to understand what data you're looking at in order to determine if it's the right tool, and that's the biggest thing with Hadoop."

When you learn Hadoop, you also learn how it intersects with a variety of other services, platforms, tools, and programming languages, including NoSQL, Apache Hive, Apache Kafka, Cassandra, and MapReduce. One of the big challenges of specializing in Hadoop is managing the complexity inherent in this particular ecosystem.
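To make the MapReduce corner of that ecosystem concrete, here is a classic word-count example written for Hadoop Streaming, which lets any executable act as mapper and reducer. The file name and the invocation mentioned afterwards are illustrative, not drawn from the article.

```python
#!/usr/bin/env python3
"""wordcount.py: run with 'mapper' or 'reducer' as the only argument (Hadoop Streaming)."""
import sys

def mapper():
    # Emit "word<TAB>1" for every word on stdin; Hadoop sorts these pairs by key.
    for line in sys.stdin:
        for word in line.split():
            print(f"{word}\t1")

def reducer():
    # Input arrives grouped by key, so totalling consecutive runs is enough.
    current, total = None, 0
    for line in sys.stdin:
        word, count = line.rsplit("\t", 1)
        if word != current:
            if current is not None:
                print(f"{current}\t{total}")
            current, total = word, 0
        total += int(count)
    if current is not None:
        print(f"{current}\t{total}")

if __name__ == "__main__":
    mapper() if sys.argv[1] == "mapper" else reducer()
```

An illustrative run would pass the same script twice to the streaming jar, along the lines of: hadoop jar hadoop-streaming.jar -files wordcount.py -mapper "python3 wordcount.py mapper" -reducer "python3 wordcount.py reducer" -input /data/in -output /data/out.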

For Werner, it comes down to a handful of fundamentals. First, there are questions that target your understanding of the terminology and what the tools can do. "How do you use Hive, HBase, Pig? Every person I talk to about the interview process, most of the questions are basic and fundamental: How do I use these tools?"

Second, it's key to understand how to answer a scenario-based question. For example: in a cluster of 20 data nodes, with X number of cores and Y RAM, what's the total capacity?
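The exact figures vary from interviewer to interviewer, but the arithmetic they want to see is straightforward. Here is a worked sketch with hypothetical values: 16 cores, 128 GB of RAM and twelve 4 TB disks per node, an HDFS replication factor of 3, and roughly 20 percent of raw disk held back for the operating system and scratch space.

```python
# Hypothetical 20-node cluster used to walk through a capacity question.
nodes, cores_per_node, ram_gb_per_node = 20, 16, 128
disks_per_node, disk_tb, replication, reserved = 12, 4, 3, 0.20

total_cores = nodes * cores_per_node                     # 320 cores
total_ram_gb = nodes * ram_gb_per_node                   # 2,560 GB of RAM
raw_tb = nodes * disks_per_node * disk_tb                # 960 TB of raw disk
usable_hdfs_tb = raw_tb * (1 - reserved) / replication   # ~256 TB of usable HDFS space

print(total_cores, total_ram_gb, raw_tb, round(usable_hdfs_tb))
```

The point of the exercise is less the final number than showing how replication and overhead turn raw capacity into usable capacity.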

"Understand what they're going to ask so you know how to work through the problem," he said. "How do you work through the problem presented based on the tools the company is using?" Before you head into the interview, do your research; read the company's website, and search Google News or another aggregator for any articles about its tech stack.

"It's important to have a sense of what toolsets the company utilizes, so you don't answer the questions completely antithetical to how they approach a particular data issue. Practice and prepare; don't go in cold," Werner said. "Learn how to answer the questions, and build the communications skills to get that information across clearly. Make sure you understand the information well enough that you can go through the answer for the people in front of you; it's a live test, so practice that."

Werner said it's important to be comfortable with ambiguity. Even if you're used to working with earlier versions of Hadoop-related tools, and even if you can't fully answer a technical question the interviewer lobs at you, you can still show that you're at ease with the fundamentals of data and Hadoop.

"It's about understanding machine learning in the context of Hadoop, knowing when to bring in Spark, knowing when the new tools come up, playing with the data, seeing if the output works more efficiently," he added. "A comfort level with exploration is important, and having a logical mind is important."

Integrating new tool sets and distribution tools is one of the most essential skills in any kind of data work. "You have to be able to just see an interesting pattern, and want to go explore it, find new avenues of the analysis, and be more open to the idea that you might want to organize it in a different way a year from now," Werner said.

You should always spend the interview showing the interviewer how you'd be a great fit for the organization. This is where your pre-interview research comes in: do your best to show the ways you can enhance the company's existing Hadoop work.

Be honest, and don't sugarcoat. "You want to be humble, but audacious. Talk about how you add value to the organization, beyond just filling this role," Werner said. "The interview is not just, here are the technical things we need you to know, and if you can explain how you can add broad value to your position, you'll be much more successful."

Werner advised that candidates should ask questions about team structure, including the people they'll be working with. On the technical side of things, you'll want to ask about the company's existing data structures, as well as the tools and distribution engines used.

"A company's not going to tell you the ins and outs of their data, but you're going to want to know if they use Hive or MongoDB; they should be open about the toolsets they're using," he said.

On top of that, he's always believed asking questions prior to your interview is the best way to prepare. "It can go beyond the technology stack or database system the company has: how does the individual manage teams, what's the success criteria for the position?" Werner said. "Show interest in what it takes to be a successful member of the team. Being prepared to be interviewed is super-important, even beyond the technical aspect of it."
