What Senate Intel report says about Trump and Roger Stone's 39 phone calls during the 2016 election – Yahoo! Voices

Donald Trump; Roger Stone

Donald Trump and Roger Stone Photo illustration by Salon/Getty Images

On August 18, the Senate Intelligence Committee released its bipartisan report on Russian interference in the 2016 presidential election. One of the things addressed in the report is Donald Trump's phone conversations with veteran GOP operative Roger Stone during the election, and according to an August 19 article by New York Times reporter Julian E. Barnes, the Senate Intelligence report sheds even more light on those interactions than the Mueller report and Stone's criminal trial did.

Barnes notes that, according to court records, Stone and Trump had 39 phone conversations from March to November 2016, one of which Barnes describes as "an intriguing phone call, on October 6, 2016, to Mr. Trump."

"According to the Senate report," Barnes explains, "Mr. Stone received a call that afternoon from a number belonging to an aide to Mr. Trump, who regularly used others' phones to make calls. The topic of the conversation was not known, Senate investigators wrote, but they noted that Mr. Stone was focused on a potential WikiLeaks release."

The Senate Intelligence Committee's report concludes, "It appears quite likely that Stone and Trump spoke about WikiLeaks."

In October 2016, WikiLeaks published hacked Democratic e-mails that had been stolen by Russians.

Barnes points out that in its report, the Senate Intelligence Committee "laid out a range of evidence that Mr. Stone was focused on WikiLeaks. He and Mr. Trump had spoken a few days earlier, on September 29, also on the aide's phone. Another campaign aide, Rick Gates, witnessed it and told investigators that the two men discussed WikiLeaks. After that call, Mr. Trump told Mr. Gates that 'more releases of damaging information would be coming.'"

The Times reporter also notes that Stone "said the Senate conclusion that he had discussed WikiLeaks with the president was based solely on testimony by Mr. Gates and Mr. Trump's former lawyer Michael D. Cohen. Mr. Stone called their testimony tainted by agreements with prosecutors to answer their questions."


Stone has insisted that he did not know that people connected to the Russian government were behind the stolen Democratic e-mails that WikiLeaks published in October 2016 and that he never discussed WikiLeaks with Trump. Barnes notes, however, "The Senate report made clear that WikiLeaks, at least, 'very likely' knew the e-mails were coming from Russian intelligence, and that Mr. Stone knew about the most critical WikiLeaks release before it happened."

Stone is among the many Trump associates who have faced criminal charges: he was convicted of charges ranging from witness tampering to lying to Congress and sentenced to 40 months in federal prison by Judge Amy Berman Jackson, a Barack Obama appointee. But Trump commuted Stone's sentence in July, sparing him the prison term he was about to begin.

Barnes notes that the Senate Intelligence Committee "rejected Mr. Trump's statement to prosecutors investigating Russia's interference that he did not recall conversations with his long-time friend, Roger J. Stone Jr., about the e-mails, which were later released by WikiLeaks."

In the Mueller report, former special counsel Robert Mueller concluded that the 2016 Trump campaign's interactions with Russians, however questionable, did not rise to the level of a full-fledged criminal conspiracy. And Barnes points out that the Senate Intelligence report does not accuse Trump of lying. But Barnes also points out that the report "laid out extensive contacts between Trump advisers and Russians" and "detailed even more of the president's conversations with Mr. Stone than were previously known, renewing questions about whether Mr. Trump was truthful with investigators for the special counsel, Robert S. Mueller III, or misled them."


It's David v Goliath: Assange's partner launches CrowdJustice appeal to help stop WikiLeaks founder's extradition to US – RT

Stella Moris, the partner of Julian Assange, has launched a CrowdJustice campaign to help reinforce the WikiLeaks founder's legal defense as it faces new traps set by US prosecutors.

The crowdfunding appeal is designed to help cover the extensive legal costs of the London magistrates' court hearings that will decide whether Assange will be extradited to the US, where he faces a possible 175-year prison sentence.

The charges leveled against the journalist include conspiracy, vague accusations that he recruited hackers to assist him, violation of the US Espionage Act, and conspiracy to commit computer intrusion for allegedly trying to help ex-US Army whistleblower Chelsea Manning.

At the time of writing, the fund had garnered £6,792 ($8,913) of its £25,000 ($32,809) target in the hour since it was launched.

Moris, who has two sons with Assange, issued the call to arms to fight against extradition and his continuing imprisonment, while warning that the case sets a dangerous precedent for press freedom around the world, especially given that Assange and WikiLeaks exposed US war crimes and human rights abuses.

The Obama administration opted not to prosecute Assange, but the Trump administration is pursuing the Australian's extradition under 100-year-old US laws, Moris says, adding that no publisher or journalist has previously been pursued under this legislation, which allows no possibility of a public interest defense.

The 49-year-old has been locked up in Belmarsh Prison in the UK for the past 16 months. He is confined to his cell for up to 23 hours a day, even during the Covid-19 pandemic, with no visitors allowed since his arrest at the Ecuadorian embassy on 11 April 2019.

Charges were initially leveled against the journalist and publisher in April 2019, totalling some 18 counts related to receiving and publishing government documents. However, the prosecution changed the indictment to broaden its reach in July 2020.

This effectively puts the squeeze on the defense team as it scrambles to prepare new legal arguments with already limited and stretched resources, something which Moris has likened to climbing the Himalayas.

Assange's next hearing is slated to begin on September 7 at the Old Bailey.



Programming language Rust: Mozilla job cuts have hit us badly but here’s how we’ll survive – ZDNet

The open-source project behind the Mozilla-founded systems programming language, Rust, has announced a new Rust foundation to boost its independence following Mozilla's recent round of pandemic layoffs.

Firefox-maker Mozilla's decision last week to cut 250 roles, or 25% of its workforce, has taken a toll on the open-source project behind Rust. Mozilla is the key sponsor of Rust and provides much of the language's infrastructure as well as core talent.

Some Mozilla contributors to five-year-old Rust did lose their jobs in Mozilla's cuts, prompting speculation that heavier cuts to the team behind Mozilla's Servo browser engine, a core user of Rust, might pose an existential threat to the young language.

Rust's demise would be bad news for the growing number of developers exploring it, for system programming as opposed to application development, as a modern and memory-safe alternative to C and C++.

Rust is now in developer analyst RedMonk's top 20 most-popular language rankings, and it is being used at Amazon Web Services (AWS), Microsoft and Google Cloud, among others, for building platforms. And while Mozilla is the main sponsor of Rust, AWS, Microsoft Azure and Google Cloud have come on board as sponsors too.

However, discussing Mozilla's layoffs, Steve Klabnik, a Rust Core member, has pointed out that the Rust community is much bigger than the number of Mozilla employees who contributed to the project and were affected by the layoffs.

"Rust will survive," wrote Klabnik in a post on Hacker News. "This situation is very painful, and it has the possibility of being more so, but Rust is bigger than Mozilla."

Nonetheless, as a project born in Mozilla Research and supported heavily by Mozilla, Rust is still currently entrenched in Mozilla's infrastructure, which, for example, hosts the Rust package manager, crates.io.

"Mozilla employs a small number of people to work on Rust full time, and many of the Servo people contributed to Rust too, even if it wasn't their job," Klabnik wrote.

"[Mozilla] also pays for the hosting bill for crates.io. They also own a trademark on Rust, Cargo, and the logos of both. Two people from the Rust team have posted about their situation, one was laid off and one was not. Unsure about the others. Many of the Servo folks (and possibly all, it's not 100% clear yet but it doesn't look good) have been laid off."

But Klabnik notes that the "vast majority" of Rust contributors are not employed by Mozilla, even though Mozilla's talent and infrastructure are important to the language's survival.

To resolve issues around ownership and control, the Rust Core team and Mozilla are accelerating plans to create a Rust foundation, which they expect to be operating by the end of the year.

"The various trademarks and domain names associated with Rust, Cargo, and crates.io will move into the foundation, which will also take financial responsibility for the costs they incur. We see this first iteration of the foundation as just the beginning," the Rust Core team said in a blog post this week.

"There's a lot of possibilities for growing the role of the foundation, and we're excited to explore those in the future," it added.

Addressing the question of Rust's demise, the team noted that it was a "common misconception that all the Mozilla employees who participated in Rust leadership did so as a part of their employment". Instead, some leaders were contributing to Rust on a voluntary basis rather than as part of the job at Mozilla.

The Rust language project has also selected a team to lead the creation of the Rust foundation, including Microsoft Rust expert Ryan Levick and Josh Triplett, a former Intel engineer and a lead of the Rust language team.

Microsoft Azure engineers are exploring Rust for a Kubernetes container tool, and Microsoft recently released a public preview of Rust/WinRT, or Rust for the Windows Runtime (WinRT), to support Rust developers who build Windows desktop apps, store apps, and components like device drivers.

While a primary sponsor like AWS, Microsoft or Google Cloud could be good news for Rust, the Rust Core team says it doesn't want to rely too heavily on just one sponsor.

"While we have only begun the process of setting up the foundation, over the past two years the Infrastructure Team has been leading the charge to reduce the reliance on any single company sponsoring the project, as well as growing the number of companies that support Rust," the Rust Core team said.


Top 10 Languages That Paid Highest Salaries Worldwide In 2020 – Analytics India Magazine

Recently, the Stack Overflow Developer Survey 2020 surveyed about 65,000 developers, who voted on their daily-used programming languages, go-to tools, libraries and more. The survey stated: "Globally, respondents who use Perl, Scala, and Go tend to have the highest salaries, with a median salary around $75k." Looking at US jobs only, Scala developers tend to have the highest salaries.

Here, we list down the top 10 programming languages from the survey that paid the highest salaries worldwide in 2020.

(The list is in alphabetical order).

Bash

Rank: 6th

Average Salary: $65k

About: Bash is a Unix shell: a command-line interface for interacting with the operating system. Shell scripts are commonly used by developers for various system administration tasks, for instance performing disk backups, evaluating system logs, and so on.

The job profiles that require shell script programming include automation engineer, application server expert, SysOps network engineer, among others.

Know more here.

Go

Rank: 3rd

Average Salary: $74k

About: Go is an open-source programming language that helps in building simple, reliable, and efficient software. The language is expressive, concise, clean and efficient, and it handles tasks such as garbage collection automatically.

The job profiles that require Go programming include Go language developer, software development engineer, senior research engineer, among others.

Know more here.

Haskell

Rank: 8th

Average Salary: $60k

About: Haskell is an open-source, purely functional programming language that allows rapid development of robust software. The features of this language include strong support for integration with other languages, built-in concurrency and parallelism, and more. It includes debuggers, profilers and rich libraries.

The job profiles that require Haskell programming include senior Haskell engineer, senior software engineer, full-stack engineer, among others.

Know more here.

Julia

Rank: 9th

Average Salary: $59k

About: Julia is a flexible, dynamic language, appropriate for scientific as well as numerical computing, with performance comparable to traditional statically-typed languages. The developers can use Julia for specialised domains such as machine learning, data science, etc.

The job profiles that require Julia programming include data scientist, machine learning engineer, senior software developer, among others.

Know more here.

Objective-C

Rank: 7th

Average Salary: $64k

About: Objective-C is an object-oriented programming language and is the primary programming language when writing software for OS X and iOS. Objective-C inherits the syntax, primitive types, and flow control statements of C and adds syntax for defining classes and methods. It also adds language-level support for object graph management and object-literals while providing dynamic typing and binding, deferring many responsibilities until runtime.

The job profiles that require Objective-C programming include iOS developer, quality assurance engineer, mobile software developer, among others.

Know more here.

Perl

Rank: 1st

Average Salary: $76k

About: Perl is a highly capable, feature-rich programming language which runs on over 100 platforms from portables to mainframes and is suitable for both rapid prototyping and large scale development projects.

The features of this language are that it is easily extensible, object-oriented, enables Unicode support, etc. The job profiles that require Perl programming include Perl developer, lead Perl developer, among others.

Know more here.

Python

Rank: 10th

Average Salary: $59k

About: Python is an interpreted, object-oriented, high-level programming language with dynamic semantics. It has high-level built-in data structures, combined with dynamic typing and dynamic binding. The simple, easy-to-learn syntax of this language emphasises readability and therefore reduces the cost of program maintenance.
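
The blurb above can be made concrete with a short, hedged sketch (the function and variable names here are illustrative, not from any particular library): Python's built-in dict plus dynamic typing let a few undeclared lines count items in any sequence.

```python
# Built-in data structures plus dynamic typing: one function,
# no type declarations, works on any iterable of hashable items.
def word_counts(words):
    counts = {}  # built-in dict
    for w in words:
        counts[w] = counts.get(w, 0) + 1
    return counts

print(word_counts(["go", "rust", "go"]))  # {'go': 2, 'rust': 1}
```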

The job profiles that require Python programming include Python developer, data scientist, research analyst, data analyst, among others.

Know more here.

Rust

Rank: 4th

Average Salary: $74k

About: Rust is a multi-paradigm programming language that helps in building reliable and efficient software. It is memory-efficient, has a rich type system, and ensures memory safety and thread safety, among other features.

The job profiles that require Rust programming include firmware engineer, principal software engineer, autonomous systems AI engineer, among others.

Know more here.

Ruby

Rank: 5th

Average Salary: $71k

About: Ruby is a dynamic, open-source programming language with a focus on simplicity and productivity. Its features include flexibility, exception handling, a mark-and-sweep garbage collector, OS-independent threading, and high portability.

The job profiles that require Ruby programming include Ruby on Rails developer, technical architect, senior backend developer, among others.

Know more here.

Scala

Rank: 2nd

Average Salary: $76k

About: Scala is a high-level programming language that combines object-oriented and functional programming. The static types of this language help avoid bugs in complex applications, and its JVM and JavaScript runtimes let a developer build high-performance systems with easy access to vast ecosystems of libraries.

The job profiles that require Scala programming include big data developer, Scala developer, data engineer, machine learning engineer, among others.

Know more here.



Programming language Kotlin 1.4 is out: This is how it’s improved quality and performance – ZDNet

Developer tools maker JetBrains has released version 1.4 of Kotlin, the increasingly popular programming language promoted by Google for Android app development.

Kotlin has become one of the fastest-growing languages on GitHub and now ranks as one of the top five most-loved languages by developers who use Stack Overflow, while ranking 19th on RedMonk's list of most popular languages.

While it is popular among Android app developers, JetBrains points out that Kotlin is also used for server-side development, and for targeting iOS, the web, Windows, macOS and Linux.


It also says 5.8 million people have edited Kotlin code over the past year, up from 3.5 million people a year earlier.

Kotlin 1.4, released this week, addresses over 60 performance issues that were causing integrated development environment (IDE) problems.

Developers should notice autocomplete suggestions appearing significantly faster, though this requires JetBrains' IntelliJ IDEA 2020.1+ or Android Studio 4.1+, the latter co-developed by Google and JetBrains.

It also introduces a new coroutine debugger tab in the Debug Tool Window to help developers check the state of coroutines, and a new Kotlin Project Wizard for creating and configuring different types of Kotlin projects.

JetBrains is also working on a new Kotlin compiler that aims to eventually unify all the platforms Kotlin supports. In 1.4, that effort starts with a new type-inference algorithm, which was available in Kotlin 1.3 and is now used by default.

There are also new Java Virtual Machine (JVM) and JavaScript (JS) backends in 1.4, which are available in Alpha mode now but will become the default after being stabilized.

These backends, Kotlin/JVM and Kotlin/JS, are being migrated to the same internal representation (IR) for Kotlin code that the Kotlin/Native backend uses. The outcome is that all three backends share a common backend infrastructure, which should make it easier to implement bug fixes and features once for all platforms.


Kotlin 1.4 also brings performance improvements to Kotlin/Native compilation and execution, as well as improved interoperability between Kotlin/Native and Swift and Objective-C for iOS and macOS development.

Finally, Kotlin Multiplatform is an early-stage project that aims to help save effort when maintaining the same code for different platforms.

There's a new hierarchical project structure that allows developers to share code between a subset of similar targets, like iOS Arm64 devices and an x86 simulator target. This offers developers more flexibility than the current ability to share code on all platforms.


Linux Foundation showcases the greater good of open source – ComputerWeekly.com

The role of open source collaboration was highlighted during a presentation to tie in with the start of the Linux Foundation's KubeCon and Cloud Native Computing Foundation (CNCF) virtual conferences.

Many believe that open source is the future of software development. For instance, in a recent conversation with Computer Weekly, PayPal CTO Sri Shivananda said: "It is impossible for you to hire all the experts in the world. But there are many more people creating software because they have a passion to do it."

These passionate software developers not only help the wider community by contributing code, but they also help themselves. "You can help others as well as helping yourself," said Jim Zemlin, executive director of the Linux Foundation.

As an example, Zemlin described how work on the Open Mainframe Project, involving free Cobol programming training, is helping to fill the skills gap resulting from the need, during the Covid-19 pandemic, to update legacy government systems. The initiative, supported by IBM, has helped IBM grow the pool of Cobol and mainframe skills needed to support its clients.

The project, which began in April 2020, has helped to support some US states, which faced temporary challenges when they needed to process a record number of unemployment claims.

Zemlin highlighted another open source project, OpenEEW, an earthquake early warning system, originally developed by Grillo. Grillo developed EEW systems in Mexico and Chile that have been issuing alerts since March 2018.

Earlier this month, IBM announced that it would play a role in supporting Grillo by adding the OpenEEW earthquake technology into the Call for Code deployment pipeline supported by the Linux Foundation.

IBM said it has deployed a set of six of Grillo's earthquake sensors and is conducting tests in Puerto Rico, complementing Grillo's tools with a new Node-RED dashboard to visualise readings. IBM said it was extending a Docker software version of the detection component that can be deployed to Kubernetes and Red Hat OpenShift on the IBM Cloud.

At the start of the conference, Berlin-based online fashion retailer Zalando was recognised for its contribution to the Kubernetes ecosystem. The retailer has contributed to Kubernetes itself and has created and maintained a number of open source projects to expand the ecosystem. These include Skipper, Zalando's Kubernetes Ingress proxy and HTTP router; Postgres Operator, an operator to run PostgreSQL on Kubernetes; and kube-metrics-adapter, which uses custom metrics for autoscaling; as well as an ingress controller for AWS.

Henning Jacobs, senior principal engineer at Zalando, said: "Zalando's need to massively scale led us on a cloud-native journey that has ultimately made our developers happier and enabled us to grow with customer demand. The Kubernetes and cloud-native community has been such a valuable resource for us that we are dedicated to actively continuing giving back in any way we can."

A key aspect about open source is that it represents a community of software developers who are able to collaborate across the globe in a way that enables sophisticated software products to be built and maintained.

During the event, a number of individual developers were recognised for their contributions to the Jaeger project, which was originally submitted by Uber Engineering in 2015. The project, which has more than 1,700 contributors, is an open source distributed tracing tool for microservices.


Learn More About Dynamo for Revit: Features, Functions, and News – ArchDaily


There's no shortage of architectural software these days and it can be challenging and overwhelming to know what tools will be the best fit for your work. Often the programs you learned in school or whatever your firm uses are what you stick with. However, it's often beneficial to step out of that comfort zone and investigate your options to see what else is out there. New software can present opportunities to simplify existing workflows or even bring new digital capabilities to you and your firm.

One such program is Dynamo, a plug-in for Autodesk Revit. Dynamo is an open source visual programming language for Revit, written by designers and construction professionals. It lets you type lines of code, but it also lets you build algorithms visually as graphs of connected nodes. If you're not familiar with Dynamo yet, you can take a complete online course to learn how to use it - first, read on for an introduction to Dynamo.

What is visual programming?

Establishing visual, systemic, and geometric relationships between the different parts of a drawing is key to the design process. Workflows influence these relationships from the concept stage to final design. Similarly, programming allows us to establish a workflow, but through formalizing algorithms.
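
As a rough, non-Dynamo illustration of formalizing a workflow as an algorithm, the sketch below chains small Python functions the way a visual program wires nodes together: each function plays the role of a node, and return values play the role of the wires (all names here are invented for the example):

```python
# Each function acts like a "node"; passing outputs into inputs
# plays the role of the wires in a visual program.
def number_range(start, end, step):
    return list(range(start, end, step))

def scale(values, factor):
    return [v * factor for v in values]

def total(values):
    return sum(values)

# "Wiring" the nodes together into one workflow:
result = total(scale(number_range(0, 5, 1), 2.0))
print(result)  # 20.0
```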

What does Dynamo do?

Using Dynamo, you can work with enhanced BIM capabilities in Revit. Together, Dynamo and Revit can be used to model and analyze complex geometries, automate repetitive processes, minimize human error, and export data to Excel files and other file types not typically supported by Revit. Dynamo can make the design process more efficient, with an intuitive interface and many pre-made scripting libraries available as well. GoPillar's online course doesn't require any previous coding experience or familiarity with Dynamo, but it's helpful if you're already a Revit user.

What does Dynamo look like?

The user interface for Dynamo is organized into five main areas, the largest being the workspace where programs are processed. The user interface sections are: the main menu, the toolbar, the library, the workspace*, and the execution bar.

*Note: The workspace is the place where designers develop visual programs, but it is also the place where any elaborate geometry is previewed. Whether you are working in a home workspace or in a custom node, you can navigate the workspace by using the mouse or buttons on the top right. However, you can change the navigation preview by switching from one mode to another in the lower right.

How does Dynamo work?

Once the application is installed, you can connect different elements to define the custom algorithms that consist of relationships and sequences. These algorithms can be used on many applications, from data processing to geometry generation, with minimal programming code. Dynamo is considered a specific visual programming tool for designers, as well as a software that can create tools. These tools can use external libraries or any Autodesk product that has an API.
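
A hedged sketch of the "algorithm to geometry" idea in plain Python (this is not Dynamo's actual API, and the function name is invented): sample points along a sine wave, the kind of parametric output a graph of connected nodes might drive.

```python
import math

# An algorithm that generates geometry data: points sampled along
# a sine wave, a classic visual-programming starter exercise.
def sine_points(count, amplitude, wavelength):
    points = []
    for i in range(count):
        x = i * wavelength / count
        y = amplitude * math.sin(2 * math.pi * x / wavelength)
        points.append((x, y))
    return points

pts = sine_points(8, 2.0, 10.0)
print(len(pts))  # 8
```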

How can using Dynamo help problem-solve?

Dynamo users are usually architects, engineers, and construction professionals, so the problems that Dynamo solves are essentially related to parametric modeling, analysis of BIM data, automation of documentation, and the exchange of information between different software.

What is the latest version of Dynamo?

The latest version of Dynamo for Revit is an excellent upgrade for those still using the 2017-2019 versions of Revit, bringing significant performance improvements. First, the checkboxes of the Analytics UI collection have been updated for clarity and follow users' choices more efficiently. Next, instrumentation is tied to Analytics and cannot be activated unless it is selected in advance to send analysis information. This feature also helps to clean up the Settings drop-down menu by removing one of the checkbox items, making the dashboard more streamlined.

Dynamo for Revit 2021 Generative Design Tool

The Generative Design tool in Revit is a 2021 feature that creates a set of design outputs based on user-specified inputs, constraints and goals. Revit uses Dynamo to iterate various input values and generate outputs based on set goals. Within Dynamo, there are specific nodes used to characterize the parameters in the generative design process, which will modify the design outputs created by the Revit tool.

Dynamo provides the framework behind Revit's internal generative design tool. Understanding how to work with a Dynamo script created for generative design helps professionals manipulate the Revit tool and the way it acts in the process. By adjusting the nodes and values in Dynamo that are linked to generative design parameters, you can change the options once the Generative Design tool has been run in Revit.
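
The iterate-inputs-and-score-against-goals loop described above can be sketched in a few lines of ordinary Python (a toy stand-in, not Revit's actual machinery; the objective function here is invented for illustration):

```python
import itertools

# A toy generative-design loop: enumerate combinations of input
# values, score each candidate against a goal, keep the best one.
def evaluate(width, depth):
    area = width * depth
    perimeter = 2 * (width + depth)
    # Goal: maximize area while penalizing perimeter (e.g. facade cost).
    return area - 0.5 * perimeter

widths = [4, 6, 8]
depths = [3, 5, 7]
best = max(itertools.product(widths, depths),
           key=lambda wd: evaluate(*wd))
print(best)  # (8, 7)
```

Revit's tool does the same in principle, but drives a Dynamo graph for each candidate instead of a plain function.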

How are files saved and what format(s) does the software work on?

The new version of Dynamo saves files in the JavaScript Object Notation (JSON) format instead of the XML-based .dyn and .dyf formats. Custom graphics and nodes created in Dynamo are not compatible with previous versions of the program. However, upon installation of the new version, all 1.x files will be kept and converted to the new format (with a backup copy saved in the original version).
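
To see why JSON suits this kind of nested graph data, here is a minimal round-trip with Python's standard json module. The graph structure below is entirely hypothetical, not the real .dyn schema:

```python
import json

# A hypothetical, minimal graph description -- NOT Dynamo's actual
# file schema -- just to show JSON handling of nested data.
graph = {
    "name": "example_graph",
    "nodes": [
        {"id": 1, "type": "NumberRange"},
        {"id": 2, "type": "Watch"},
    ],
    "connectors": [{"from": 1, "to": 2}],
}

text = json.dumps(graph, indent=2)   # serialize to JSON text
restored = json.loads(text)          # round-trip back to objects
print(restored["nodes"][0]["type"])  # NumberRange
```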

The Dynamo Node Library

The node library has been reorganized to reduce redundancy and ease user navigation. All custom and non-default nodes will be displayed as a sub-item known as an "Add-on". In addition, you can now resize and collapse the library window by manipulating the right edge of the panel. When working with nodes, remember that nodes are drawn in a precise order, which means processed objects can be displayed on top of each other. This can be confusing when adding multiple nodes sequentially, as they can appear in the same location in the workspace.

If you are interested in learning more about Dynamo, our partner GoPillar Academy has recently launched an online course using this software, suitable for all skill levels. ArchDaily readers have access to an exclusive discount of $99 USD ($349 USD regular price), valid until August 31st!


NITK figures 4th in Google Summer of Code ranking – BusinessLine

The National Institute of Technology Karnataka (NITK) at Surathkal in Mangaluru taluk has been ranked fourth globally in the list of universities with the most accepted students for GSoC (Google Summer of Code) 2020.

As many as 23 students from NITK got selected for GSoC 2020. A total of 1,198 students from 550 universities globally are participating in GSoC 2020.

GSoC is a global programme organised by Google Open Source team with an aim to introduce students to open source software development. The students are paired with mentors from open source organisations to work on a programming-intensive project. The GSoC programme is running from June to August 2020.

A press release by the institute said that there has been a voluntary and organised effort, led by Mohit P Tahiliani of the Department of Computer Science and Engineering, to structurally plan out open source activities in the institute for interested students from various departments of NITK.

The number of students participating in GSoC has increased in the past two years, thereby showing the growth of NITK in the field of open source contributions, it said.

The Google Open Source blog has listed the 12 universities with the most accepted students for GSoC 2020, and NITK figures fourth on that list.


Why ASP.NET Core Is Regarded As One Of The Best Frameworks For Building Highly Scalable And Modern Web Applications – WhaTech

ASP.NET Core entered the market and created a buzz for web applications.

It made cross-platform apps highly flexible. ASP.NET Core is one of the best web frameworks and provides numerous benefits to developers, and web applications themselves are in trend for the same reason.

Let us understand a few terminologies first:

A web application is a computer program that runs on a web server and can be used from any web browser. It typically combines a server-side language such as PHP or ASP.NET in the back end with HTML and JavaScript in the front end to run smoothly. On web applications, customers can use features such as shopping carts and forms, much as they would in native applications. Widely used examples of web applications are Google apps and Microsoft 365; Microsoft itself operates many web applications.

Microsoft builds many of these on ASP.NET Core, a strong framework that supports clean coding and managing a large number of tasks. If you are thinking of using this framework in your next project, you can reach out to a Microsoft web app development services provider.

A framework is a platform on which you build programs and develop applications. Because a framework is well tested and reused many times, it helps you create high-functionality applications with more secure, less bug-prone code in less time. Frameworks exist for every genre of software; here we focus on one web application framework, ASP.NET Core.

ASP.NET Core is regarded as one of the best frameworks for building highly scalable and modern web applications due to its feature set. It is an open-source framework that anyone can use, and it is supported by Microsoft and its community. Are you thinking of using this framework for your next project and developing dynamic web applications? To understand it in detail, have a look at its features:

ASP.NET Core has ample features that make it a strong choice for web applications: it is open source, highly popular among web application developers, runs on multiple platforms, and integrates with a range of useful tools. By now, you should have a concise idea of why ASP.NET Core is the preferred framework for web applications. It is high time to design web applications and expand your business on digital platforms, and if you are thinking of developing one, what better framework to use than ASP.NET Core? To use it for your next web project, you can reach out to a dot net development company.


Link:
Why ASP.NET Core Is Regarded As One Of The Best Frameworks For Building Highly Scalable And Modern Web Applications - WhaTech

Use Pulumi and Azure DevOps to deploy infrastructure as code – TechTarget

Infrastructure as code makes IT operations part of the software development team, with scalable and testable infrastructure configurations. To reap the benefits, IaC tools integrate with other DevOps offerings, as in this tutorial for Pulumi with Microsoft Azure DevOps.

Pulumi provides infrastructure as code provisioning, while Azure DevOps provides version control and a build and release tool. Together, they form a pipeline to define, build, test and deploy infrastructure, and to share infrastructure configurations. Follow this tutorial to develop infrastructure code in C# with Pulumi, unit test it using the open source NUnit framework, and safely deliver it via the Azure DevOps ecosystem. First, get to know Pulumi.

The Pulumi approach puts infrastructure into common development programming languages, rather than a domain-specific language (DSL) used only for the tool. This means that an infrastructure blueprint for a project can use .NET Core, Python or another supported language that matches the application code. HashiCorp Terraform, by contrast, defines infrastructure as code in the HashiCorp Configuration Language (HCL) or JSON. Similarly, Azure Resource Manager has limitations on how a user can apply logic and test it.

Note: In July 2020, HashiCorp added the ability to define infrastructure using TypeScript and Python as a feature in preview.

Because Pulumi uses a real programming language for infrastructure code, the same set of tools can build, test and deploy applications and infrastructure. It has built-in tools to assist IT engineers as they develop, test and deploy infrastructure. Pulumi is designed to deploy to cloud providers, including AWS and Azure.

To follow this Pulumi tutorial, get to know these terms:

This tutorial starts with one of Pulumi's example apps for building a website. To highlight the integration with Azure DevOps, we make some modifications to the example app repository:

See the AzDOPulumiExample repository in Azure DevOps, and its ReadMe file for the modifications made to the example app.

Microsoft provides Azure DevOps, but is not tied to one language, platform or cloud. It includes many DevOps orchestration services, such as Azure Boards to track a software project and Pipelines to build, test and share code.

This tutorial uses repositories and Azure Pipelines to automatically build, test and release code. Azure Pipelines is a cloud service. It supports pipeline as code, because the user can store the pipeline definition in version control. Within Azure Pipelines, this tutorial relies on a pipeline, which describes the entire CI/CD process with definitions composed of steps, grouped into jobs, which are in turn divided into stages.

The IT organization controls Azure Pipelines through both manual and programmatic means.

To get started with the Pulumi example in this tutorial, create a sample Pulumi stack along with some unit tests. There is a sample repository with source code for the project called WebServerStack, as seen in Figure 1. Start by cloning this example repository locally.

Once the repository is cloned, you can build and test the project locally by using dotnet build and dotnet test commands, respectively.

To set up Azure DevOps, start with an Azure DevOps organization with repository and pipeline enabled. For this tutorial, I created an Azure DevOps organization named dexterposh. In the figures, you see this organization called AzDO, for Azure DevOps.

Under the AzDO organization, create a repository named AzDOPulumiExample for the Pulumi code and tests project. Create an Azure Resource Manager service connection to connect to an Azure subscription.

Next, create an environment named dev and add manual approval so that the engineer controls what deploys. Without manual approvals, Azure DevOps will automatically create and deploy to the environment. Environments can only be created via the Azure DevOps portal.

Finally, install the Pulumi extension in your Azure DevOps organization.

This integration with Azure DevOps enables us to make build and release stages for the Pulumi Project. We can also extend the pipeline to provision changes to the environment. Stages are logical divisions meant to mimic different phases in an application's lifecycle.

In the stage titled Build, Test & Release, Azure DevOps will build the project, run tests and then package and publish an artifact. The Preview Stage lets the engineer or project team preview the changes to the infrastructure. Finally, in the Deploy stage, we can approve the changes and make them go live in the environment to provision infrastructure.

A high-level overview of these stages is diagrammed in Figure 2, and the final integration is shown in Figure 3.

In Azure DevOps, create a stage called Build, Test & Release. Add the file named azure-pipelines.yml at the root of our repository, which the AzDO organization picks up by default as the pipeline definition. (Editor's note: Both .yaml and .yml are YAML file extensions.)

At the top of the pipeline definition in azure-pipelines.yml, we define several things.
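The excerpt does not reproduce the YAML itself, so the following is a minimal sketch of what the top of such a definition could look like; the trigger branch and the stage identifier are illustrative, while the $(vmImage) variable and the BuildTestReleaseJob name come from the tutorial:

```yaml
# azure-pipelines.yml -- top-level structure (illustrative sketch)
trigger:
  branches:
    include:
      - master                    # hypothetical default branch

variables:
  vmImage: 'ubuntu-latest'        # referenced later as $(vmImage)

stages:
  - stage: Build_Test_Release     # illustrative stage identifier
    displayName: 'Build, Test & Release'
    jobs:
      - job: BuildTestReleaseJob
        pool:
          vmImage: $(vmImage)
        steps: []                 # restore, build, test and publish tasks go here
```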

After defining a stage, execute it on an agent with a job. The job will execute all the steps. The details of the BuildTestReleaseJob are shown in Figure 5.

In this set of commands, $(vmImage) refers to the variable that we define later in the YAML file.

To build a .NET app, we fetch the dependencies it references. The agent where the code will be built is new and does not have this information yet. For all the .NET Core-based tasks here, we use the official .NET Core CLI task, available by default. Add the task, shown as DotNetCoreCLI@2, to restore the project dependencies.
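A restore step using the DotNetCoreCLI@2 task might look like the following sketch; the project glob is an assumption, not taken from the tutorial:

```yaml
# Restore NuGet dependencies with the official .NET Core CLI task
- task: DotNetCoreCLI@2
  displayName: 'dotnet restore'
  inputs:
    command: 'restore'
    projects: '**/*.csproj'       # hypothetical project pattern
```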

The next step in the infrastructure code's lifecycle is to build it. The build step ensures that the Pulumi code, along with all the dependencies, can be compiled into the .NET framework's Intermediate language files with a .dll extension and a binary file. The .NET Core CLI task works here as well.
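The build step can be expressed with the same task; the configuration name and the --no-restore flag are illustrative choices:

```yaml
# Compile the Pulumi project and its dependencies
- task: DotNetCoreCLI@2
  displayName: 'dotnet build'
  inputs:
    command: 'build'
    projects: '**/*.csproj'
    arguments: '--configuration Release --no-restore'
```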

A successful build confirms that dependencies are pulled in successfully, there are no syntactical errors, and the .dll file was generated. Then, run tests to ensure that there are no breaking changes. Use the .NET CLI task for this step.
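A test step along these lines would run the NUnit-based unit tests; the test-project glob is a hypothetical naming convention:

```yaml
# Run the NUnit tests to catch breaking changes before release
- task: DotNetCoreCLI@2
  displayName: 'dotnet test'
  inputs:
    command: 'test'
    projects: '**/*Tests*.csproj'   # hypothetical test project pattern
```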

Run the task dotnet publish against the .NET app to generate an artifact. The artifact is what later stages will use. Once published, the .NET app and all the dependencies are available in the publish folder, which we can archive as a zip file for later use.

Look at the argument specified to place the output to the $(Build.ArtifactStagingDirectory) variable, which represents a folder path on the agent to place build artifacts.
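The publish step described above might be sketched as follows, with the output directed to $(Build.ArtifactStagingDirectory) as the tutorial notes; the zipAfterPublish setting is an assumption:

```yaml
# Publish the app and its dependencies, zipped into the staging directory
- task: DotNetCoreCLI@2
  displayName: 'dotnet publish'
  inputs:
    command: 'publish'
    publishWebProjects: false
    arguments: '--configuration Release --output $(Build.ArtifactStagingDirectory)'
    zipAfterPublish: true
```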

With the artifact ready, archive it and publish it as a build artifact. Azure Pipelines performs this step with the task named PublishBuildArtifacts. Specify the variable $(Build.ArtifactStagingDirectory) as the path to the zip file and the published build artifact is named 'pulumi.'
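The publish-artifact step described above could be written like this, using the path and artifact name the tutorial mentions:

```yaml
# Upload the zipped publish output as a build artifact named 'pulumi'
- task: PublishBuildArtifacts@1
  displayName: 'Publish pulumi artifact'
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)'
    ArtifactName: 'pulumi'
```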

In this Pipeline stage, we built, tested and released the infrastructure as code from Pulumi with multiple tasks under the BuildTestRelease job. The next stage utilizes Pulumi tooling to generate a preview and then finally deploy the project.

With infrastructure code, we can extend the pipeline to generate a preview. The Preview stage is similar to a Terraform execution plan, which describes how it will get to the desired state. The Preview stage assists the engineer in reviewing the effect of changes when they deploy to an environment.

A YAML-based definition for the Preview stage, shown below, is added to the stages list in the pipeline definition.
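The stage definition itself is not reproduced in this excerpt; a sketch consistent with the steps described below it might look like this, where the template paths and job name come from the tutorial but the service connection name, working directory and stack name are assumptions:

```yaml
# Preview stage -- appended to the stages list in azure-pipelines.yml
- stage: Preview
  jobs:
    - job: PreviewJob
      pool:
        vmImage: $(vmImage)
      steps:
        - template: build/downloadArtifact.yml   # download and extract the 'pulumi' artifact
        - template: build/configurePulumi.yml    # install Pulumi and the Azure plugin
        - task: Pulumi@1                         # task from the Pulumi extension
          displayName: 'pulumi preview'
          inputs:
            azureSubscription: 'my-azure-connection'    # hypothetical service connection
            command: 'preview'
            cwd: '$(System.ArtifactsDirectory)/pulumi'  # hypothetical artifact path
            stack: 'dev'                                # hypothetical stack name
```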

The stage contains a job, PreviewJob. Let's review what each step inside the job does.

1. Template reference to build/downloadArtifact.yml. It contains another two tasks: to download the build artifact from the previous stage and to extract the zip file from the artifact. Here, it downloads the pulumi named artifact and makes it available in the path $(System.ArtifactsDirectory).

2. Template reference to build/configurePulumi.yml. It contains another two tasks: one to run the configure command and another to install the Azure extension for use with Pulumi. Installing the plugin explicitly was added as a workaround so that Pulumi and the required Azure extension are present on the agent.

Note: We created separate template YAML files, called downloadArtifact.yml and configurePulumi.yml, to avoid issues when these steps repeat again in the Deploy phase. The configurePulumi.yml steps template was needed as a workaround for the Pulumi task that failed on AzureDevOps, with an error message asking to install the Azure plugin on the agent. Pulumi shares that the error relates to a limitation when using binary mode with plugin discovery.

3. Finally, a task runs the Pulumi preview command to generate a preview of the changes to be deployed to the infrastructure.
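The configurePulumi.yml steps template is not shown in this excerpt; a plausible sketch of the workaround it describes follows, where the installer URL is Pulumi's standard install script and the plugin version is purely illustrative:

```yaml
# build/configurePulumi.yml -- steps template (illustrative sketch)
steps:
  - script: |
      # Install the Pulumi CLI on the fresh build agent
      curl -fsSL https://get.pulumi.com | sh
      export PATH=$PATH:$HOME/.pulumi/bin
      # Workaround: pre-install the Azure resource plugin so plugin
      # discovery does not fail when the Pulumi task runs in binary mode
      pulumi plugin install resource azure 3.19.0   # hypothetical version
    displayName: 'Install Pulumi CLI and Azure plugin'
```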

The Deploy stage is the last part of this DevOps pipeline. It uses the Azure DevOps environment and manual approvals.

The setup defines the stage with a job and multiple steps within the job:
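The stage definition is not reproduced here; a sketch using an Azure Pipelines deployment job tied to the dev environment (so its manual approval gates the deploy) might look like this, with the service connection, path and stack names assumed:

```yaml
# Deploy stage -- a deployment job targets the 'dev' environment,
# whose manual approval must pass before these steps run
- stage: Deploy
  jobs:
    - deployment: DeployJob
      pool:
        vmImage: $(vmImage)
      environment: 'dev'
      strategy:
        runOnce:
          deploy:
            steps:
              - template: build/downloadArtifact.yml
              - template: build/configurePulumi.yml
              - task: Pulumi@1
                displayName: 'pulumi up'
                inputs:
                  azureSubscription: 'my-azure-connection'    # hypothetical service connection
                  command: 'up'
                  cwd: '$(System.ArtifactsDirectory)/pulumi'  # hypothetical artifact path
                  stack: 'dev'                                # hypothetical stack name
                  args: '--yes'   # apply the previewed changes without an interactive prompt
```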

This stage relies on the DeployJob job. Here's what each step inside the job does:

Once approved, the previewed changes are deployed, as shown in Figure 9.

After following this tutorial, DevOps teams can assess the benefits of combining Pulumi and Azure DevOps for infrastructure as code. With a common programming language rather than a DSL, infrastructure code matches application code. These programming languages are in use globally with several years of maturity in terms of how to test, build and package code. The combination of Pulumi with the Azure DevOps services creates a CI/CD pipeline for that infrastructure code. It can also extend to change management, with preview capabilities and manual approvals as needed before code deploys to an environment.

Go here to read the rest:
Use Pulumi and Azure DevOps to deploy infrastructure as code - TechTarget