Why ASP.NET Core Is Regarded As One Of The Best Frameworks For Building Highly Scalable And Modern Web Applications – WhaTech

ASP.NET Core entered the market and quickly generated buzz among web application developers.

It brought flexible, cross-platform development to the .NET ecosystem. ASP.NET Core is one of the most capable web frameworks available and provides numerous benefits to developers, while web applications themselves remain in demand because of the benefits they offer businesses and users.

Let us understand a few terminologies first:

A web application is a computer program that runs on a web server and is accessed through any web browser. It typically combines a server-side technology such as PHP or ASP.NET on the back end with HTML and JavaScript on the front end to run the application smoothly. Through a web application, customers can use features such as shopping carts and forms, much as they would in a native application. Well-known examples of web applications include Google's apps and Microsoft 365; Microsoft itself runs many web applications.

Microsoft uses ASP.NET Core as a strong framework that streamlines coding and manages a wide range of tasks. If you are thinking of using this framework in your next project, you can reach out to a Microsoft web app development services provider.

A framework is a platform on which you build programs and develop applications. Because a framework is well tested and reused across many projects, it is a reliable basis for high-functionality applications, and it helps teams write more secure, less buggy code in less time. Frameworks exist for every kind of software; here we focus on a web application framework, ASP.NET Core.

ASP.NET Core is regarded as one of the best frameworks for building highly scalable, modern web applications because of its feature set. Unlike the original ASP.NET, it is an open-source framework that anyone can use, and it is supported by Microsoft and its community. If you are thinking of using this framework for your next project to develop dynamic web applications, you can reach out to a Microsoft web app development services provider. To understand it in more detail, have a look at its key features:

ASP.NET Core has ample features that make it a strong choice for web applications: it is open source, highly popular among web application developers, able to run on multiple platforms, and integrated with a rich set of useful tools. By now you should have a concise idea of why ASP.NET Core is such a widely preferred framework for web applications, and why it is a good time to design web applications and expand your business on digital platforms. If you are planning to develop a web application, ASP.NET Core is a natural choice of framework, and you can reach out to a dot net development company to use it for your next web project.
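To give a flavour of how little ceremony the framework requires, here is a minimal sketch of an ASP.NET Core web app using the minimal-API style available in recent .NET versions; the route and message are purely illustrative.

```csharp
// Program.cs - a minimal ASP.NET Core web app (illustrative sketch).
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// A single HTTP GET endpoint, served cross-platform by the built-in Kestrel server.
app.MapGet("/", () => "Hello from ASP.NET Core");

app.Run();
```

Running `dotnet run` in the project folder starts the app locally and serves the endpoint on the configured port.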


Link:
Why ASP.NET Core Is Regarded As One Of The Best Frameworks For Building Highly Scalable And Modern Web Applications - WhaTech

Use Pulumi and Azure DevOps to deploy infrastructure as code – TechTarget

Infrastructure as code makes IT operations part of the software development team, with scalable and testable infrastructure configurations. To reap the benefits, IaC tools integrate with other DevOps offerings, as in this tutorial for Pulumi with Microsoft Azure DevOps.

Pulumi provides infrastructure as code provisioning, while Azure DevOps provides version control and a build and release tool. Together, they form a pipeline to define, build, test and deploy infrastructure, and to share infrastructure configurations. Follow this tutorial to develop infrastructure code in C# with Pulumi, unit test it using the open source NUnit framework, and safely deliver it via the Azure DevOps ecosystem. First, get to know Pulumi.

The Pulumi approach expresses infrastructure in common general-purpose programming languages, rather than a domain-specific language (DSL) used only by the tool. This means that an infrastructure blueprint for a project can use .NET Core, Python or another supported language that matches the application code. By contrast, HashiCorp Terraform defines infrastructure as code in the HashiCorp Configuration Language (HCL) or JSON, and Azure Resource Manager similarly has limitations on how a user can apply logic and test it.

Note: In July 2020, HashiCorp added the ability to define infrastructure using TypeScript and Python as a feature in preview.

Because Pulumi uses a real programming language for infrastructure code, the same set of tools can build, test and deploy applications and infrastructure. It has built-in tools to assist IT engineers as they develop, test and deploy infrastructure. Pulumi is designed to deploy to cloud providers, including AWS and Azure.

To follow this Pulumi tutorial, first get to know a few Pulumi terms, such as projects and stacks.

This tutorial starts with one of Pulumi's example apps for building a website. To highlight the integration with Azure DevOps, we make some modifications to the example app repository.

See the AzDOPulumiExample repository in Azure DevOps, and its ReadMe file for the modifications made to the example app.

Microsoft provides Azure DevOps, but the service is not tied to one language, platform or cloud. It includes many DevOps orchestration services, such as Azure Boards to track a software project and Pipelines to build, test and share code.

This tutorial uses repositories and Azure Pipelines to automatically build, test and release code. Azure Pipelines is a cloud service that supports pipeline as code, because the user can store the pipeline definition in version control. Within Azure Pipelines, this tutorial relies on a single pipeline that describes the entire CI/CD process as a definition of stages, with each stage containing jobs and each job containing steps.

The IT organization controls Azure Pipelines through both manual and programmatic means.

To get started with the Pulumi example in this tutorial, create a sample Pulumi stack along with some unit tests. There is a sample repository with source code for the project called WebServerStack, as seen in Figure 1. Start by cloning this example repository locally.

Once the repository is cloned, you can build and test the project locally by using dotnet build and dotnet test commands, respectively.
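For orientation, a Pulumi C# program is an ordinary .NET console project whose stack class declares resources. The sketch below shows the general shape using the classic Azure provider; the resource names and property values are illustrative rather than the exact contents of the WebServerStack project.

```csharp
using System.Threading.Tasks;
using Pulumi;
using Pulumi.Azure.Core;
using Pulumi.Azure.Storage;

// Illustrative stack: a resource group plus a storage account.
class WebServerStack : Stack
{
    public WebServerStack()
    {
        var resourceGroup = new ResourceGroup("web-rg");

        var storageAccount = new Account("websa", new AccountArgs
        {
            ResourceGroupName = resourceGroup.Name,
            AccountTier = "Standard",
            AccountReplicationType = "LRS",
        });

        // Expose an output that `pulumi stack output` can read later.
        StorageAccountName = storageAccount.Name;
    }

    [Output] public Output<string> StorageAccountName { get; set; }
}

class Program
{
    static Task<int> Main() => Deployment.RunAsync<WebServerStack>();
}
```

Because this is plain C#, `dotnet build` and `dotnet test` treat it like any other .NET project, which is exactly what the pipeline below relies on.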

To set up Azure DevOps, start with an Azure DevOps organization with repository and pipeline enabled. For this tutorial, I created an Azure DevOps organization named dexterposh. In the figures, you see this organization called AzDO, for Azure DevOps.

Under the AzDO organization, create a repository named AzDOPulumiExample for the Pulumi code and tests project. Create an Azure Resource Manager service connection to connect to an Azure subscription.

Next, create an environment named dev and add manual approval so that the engineer controls what deploys. Without manual approvals, Azure DevOps will automatically create and deploy to the environment. Environments can only be created via the Azure DevOps portal.

Finally, install the Pulumi extension in your Azure DevOps organization.

This integration with Azure DevOps enables us to make build and release stages for the Pulumi Project. We can also extend the pipeline to provision changes to the environment. Stages are logical divisions meant to mimic different phases in an application's lifecycle.

In the stage titled Build, Test & Release, Azure DevOps will build the project, run tests and then package and publish an artifact. The Preview Stage lets the engineer or project team preview the changes to the infrastructure. Finally, in the Deploy stage, we can approve the changes and make them go live in the environment to provision infrastructure.

A high-level overview of these stages is diagrammed in Figure 2, and the final integration is shown in Figure 3.

In Azure DevOps, create a stage called Build, Test & Release. Add a file named azure-pipelines.yml at the root of our repository, which the AzDO organization picks up by default as the pipeline definition. (Editor's note: Both .yaml and .yml are YAML file extensions.)

At the top of the pipeline definition in azure-pipelines.yml, we define several things.

After defining a stage, execute it on an agent with a job. The job will execute all the steps. The details of the BuildTestReleaseJob are shown in Figure 5.

In this set of commands, $(vmImage) refers to the variable that we define later in the YAML file.
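The exact pipeline file lives in the example repository, but a trimmed sketch of what this top section and the first stage can look like is shown below; the stage, job and variable names follow the article, and everything else is assumed.

```yaml
# azure-pipelines.yml (illustrative sketch, not the repository's exact file)
trigger:
  - master                     # build on pushes to the main branch

variables:
  vmImage: 'ubuntu-latest'     # referenced in jobs as $(vmImage)

stages:
  - stage: BuildTestRelease
    displayName: 'Build, Test & Release'
    jobs:
      - job: BuildTestReleaseJob
        pool:
          vmImage: $(vmImage)
        steps: []              # restore, build, test and publish tasks go here
```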

To build a .NET app, we fetch the dependencies it references. The agent where the code will be built is new and does not have this information yet. For all the .NET Core-based tasks here, we use the official .NET Core CLI task, available by default. Add the task, shown as DotNetCoreCLI@2, to restore the project dependencies.

The next step in the infrastructure code's lifecycle is to build it. The build step ensures that the Pulumi code, along with all of its dependencies, can be compiled into .NET intermediate language files with a .dll extension and a binary. The .NET Core CLI task works here as well.

A successful build confirms that dependencies are pulled in successfully, there are no syntactical errors, and the .dll file was generated. Then, run tests to ensure that there are no breaking changes. Use the .NET CLI task for this step.
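Taken together, the restore, build and test steps can all be expressed with the same DotNetCoreCLI@2 task, along these lines (a sketch; the project globs and extra arguments are assumptions):

```yaml
# Restore NuGet dependencies for the Pulumi project and its tests.
- task: DotNetCoreCLI@2
  displayName: 'dotnet restore'
  inputs:
    command: 'restore'
    projects: '**/*.csproj'

# Compile the infrastructure code into IL (.dll) assemblies.
- task: DotNetCoreCLI@2
  displayName: 'dotnet build'
  inputs:
    command: 'build'
    projects: '**/*.csproj'
    arguments: '--no-restore'

# Run the NUnit-based unit tests to catch breaking changes early.
- task: DotNetCoreCLI@2
  displayName: 'dotnet test'
  inputs:
    command: 'test'
    projects: '**/*Tests*.csproj'
    arguments: '--no-build'
```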

Run the task dotnet publish against the .NET app to generate an artifact. The artifact is what later stages will use. Once published, the .NET app and all the dependencies are available in the publish folder, which we can archive as a zip file for later use.

Look at the argument specified to place the output to the $(Build.ArtifactStagingDirectory) variable, which represents a folder path on the agent to place build artifacts.

With the artifact ready, archive it and publish it as a build artifact. Azure Pipelines performs this step with the task named PublishBuildArtifacts. Specify the variable $(Build.ArtifactStagingDirectory) as the path to the zip file and the published build artifact is named 'pulumi.'
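In YAML, the publish and artifact-upload steps can look roughly like this; the artifact name comes from the article, while the remaining inputs are assumptions:

```yaml
# Publish the app and place the zipped output in the artifact staging directory.
- task: DotNetCoreCLI@2
  displayName: 'dotnet publish'
  inputs:
    command: 'publish'
    publishWebProjects: false
    arguments: '--configuration Release --output $(Build.ArtifactStagingDirectory)'
    zipAfterPublish: true

# Upload the staged output as a build artifact named 'pulumi'.
- task: PublishBuildArtifacts@1
  displayName: 'Publish build artifact'
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)'
    ArtifactName: 'pulumi'
```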

In this Pipeline stage, we built, tested and released the infrastructure as code from Pulumi with multiple tasks under the BuildTestRelease job. The next stage utilizes Pulumi tooling to generate a preview and then finally deploy the project.

With infrastructure code, we can extend the pipeline to generate a preview. The Preview stage is similar to a Terraform execution plan, which describes how it will get to the desired state. The Preview stage assists the engineer in reviewing the effect of changes when they deploy to an environment.

A YAML-based definition for the Preview stage, shown below, is added to the stages list in the pipeline definition.

The stage contains a job, PreviewJob. Let's review what each step inside the job does.

1. Template reference to build/downloadArtifact.yml. It contains another two tasks: one to download the build artifact from the previous stage and another to extract the zip file from the artifact. Here, it downloads the artifact named pulumi and makes it available in the path $(System.ArtifactsDirectory).

2. Template reference to build/configurePulumi.yml. It contains another two tasks: one to run the configure command and another to install the Azure plugin used with Pulumi. These steps were added as a workaround to install Pulumi along with the required Azure plugin.

Note: We created separate template YAML files, called downloadArtifact.yml and configurePulumi.yml, to avoid repeating these steps when they recur in the Deploy stage. The configurePulumi.yml steps template was needed as a workaround for the Pulumi task failing on Azure DevOps with an error message asking to install the Azure plugin on the agent. Pulumi notes that the error relates to a limitation when using binary mode with plugin discovery.

3. Finally, a task runs the Pulumi preview command to generate a preview of the changes to be deployed to the infrastructure.
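Put together, the Preview stage can be sketched as follows. The template paths and artifact name come from the article, while the stack name, working directory, secret variable, and the use of a plain script step for the Pulumi CLI are assumptions (the Pulumi extension also provides a dedicated task that can be used instead):

```yaml
- stage: Preview
  dependsOn: BuildTestRelease
  jobs:
    - job: PreviewJob
      pool:
        vmImage: $(vmImage)
      steps:
        # Download and unzip the 'pulumi' artifact produced by the previous stage.
        - template: build/downloadArtifact.yml
        # Install the Pulumi CLI and the Azure plugin (workaround described above).
        - template: build/configurePulumi.yml
        # Generate a preview of the infrastructure changes.
        - script: pulumi preview --stack dev
          displayName: 'pulumi preview'
          workingDirectory: '$(System.ArtifactsDirectory)/pulumi'
          env:
            PULUMI_ACCESS_TOKEN: $(pulumiAccessToken)   # assumed secret variable
```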

The Deploy stage is the last part of this DevOps pipeline. It uses the Azure DevOps environment and manual approvals.

The setup defines the stage with a job and multiple steps within the job.

This stage relies on the DeployJob job, whose steps mirror the Preview stage: the same templates download and extract the build artifact and configure Pulumi, and a final task then applies the previewed changes with Pulumi.
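A condensed sketch of the stage is below; the environment name and templates match the earlier description, while the stack name, paths and secret variable are assumptions:

```yaml
- stage: Deploy
  dependsOn: Preview
  jobs:
    - deployment: DeployJob
      environment: dev            # manual approval is configured on this environment
      pool:
        vmImage: $(vmImage)
      strategy:
        runOnce:
          deploy:
            steps:
              - template: build/downloadArtifact.yml
              - template: build/configurePulumi.yml
              # Apply the previously previewed changes.
              - script: pulumi up --yes --stack dev
                displayName: 'pulumi up'
                workingDirectory: '$(System.ArtifactsDirectory)/pulumi'
                env:
                  PULUMI_ACCESS_TOKEN: $(pulumiAccessToken)   # assumed secret variable
```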

Once approved, the previewed changes are deployed, as shown in Figure 9.

After following this tutorial, DevOps teams can assess the benefits of combining Pulumi and Azure DevOps for infrastructure as code. With a common programming language rather than a DSL, infrastructure code matches application code. These programming languages are in use globally with several years of maturity in terms of how to test, build and package code. The combination of Pulumi with the Azure DevOps services creates a CI/CD pipeline for that infrastructure code. It can also extend to change management, with preview capabilities and manual approvals as needed before code deploys to an environment.

Go here to read the rest:
Use Pulumi and Azure DevOps to deploy infrastructure as code - TechTarget

SiFive Opens Business Unit to Build Chips With Arm and RISC-V Inside – Electronic Design

What you'll learn:

Part 1 of this two-part series considered what the end of Moore's Law means for organizations struggling to manage the data deluge. Cloud service providers noticed the demise of Moore's Law early on and acted promptly to address declining performance. They have looked at many technologies, such as GPUs, NPUs, and even building their own ASIC chips. However, another alternative is emerging, one which could be even more versatile and powerful than any acceleration options currently available: FPGAs.

The technology has been around for decades, but the ubiquity of this highly configurable technology, coupled with proven performance in a variety of acceleration use cases, is coming to cloud service providers' attention. Could this be the ultimate answer to bridging the gap between the compute needs of the future and the flattening performance curve of server CPUs?

FPGAs Gain Steam

"Everything old is new again" isn't often true in the technology world, but it can be said of field-programmable gate arrays (FPGAs), which have been around for more than 40 years. They have traditionally been used as an intermediary step in the design of application-specific integrated-circuit (ASIC) semiconductor chips. The advantage of FPGAs is that they require the same tools and languages as those used to design semiconductor chips, but it's possible to rewrite or reconfigure the FPGA with a new design on the fly. The disadvantage is that FPGAs are bigger and more power-hungry than ASICs.

It became harder and harder to justify making the investment in ASIC production, though, as the cost of producing ASICs began to increase. At the same time, FPGAs became more efficient and cost-competitive. It therefore made sense to remain at the FPGA stage and release the product based on an FPGA design.

Now, many industries take advantage of FPGAs, particularly in networking and cybersecurity equipment, where they perform specific hardware-accelerated tasks.

In 2010, Microsoft Azure started looking into using FPGA-based SmartNICs in standard servers to offload compute- and data-intensive tasks from the CPU to the FPGA. Today, these FPGA-based SmartNICs are used broadly throughout Microsoft Azures data centers, supporting services like Bing and Microsoft 365.

When it became clear that FPGAs were a legitimate option for hardware acceleration, Intel bought Altera, the second-largest producer of FPGA chips and development software, for $16 billion in 2015. Since then, several cloud companies have added FPGA technology to their service offerings, including AWS, Alibaba, Tencent, and Baidu, to name a few.

The Many Benefits of FPGAs

FPGAs are attractive for several reasons. One is that they offer a nice compromise between versatility, power, efficiency, and cost. Another is that FPGAs can be used for virtually any processing task. It's possible to implement parallel processing on an FPGA, but other processing architectures can be implemented as well.

Yet another attraction of FPGAs is that details such as data-path widths and register lengths can be tailored specifically to the needs of the application. Indeed, when designing a solution on an FPGA, it's best to have a specific use case and application in mind in order to truly exploit the power of the FPGA.

Even just considering the two largest vendors, Xilinx and Intel, there's a vast array of choice for FPGAs when it comes to power. For example, compare the smallest FPGAs that can be used on drones for image processing, to extremely large FPGAs that can be used for machine learning and artificial intelligence. FPGAs generally provide very good performance per watt. Take FPGA-based SmartNICs: they can process up to 200 Gb/s of data without exceeding the power requirements on server PCIe slots.

It's possible to create highly efficient solutions with FPGAs that do just what is required, when required, because FPGAs are reconfigurable and can be tailored specifically to the application. One of the drawbacks of generic multiprocessor solutions is that there's an overhead in cost due to their universal nature. A generic processor can do many things well at the same time, but it will always struggle to compete with a specific processor designed to accelerate a specific task.

With the wide selection of FPGAs available, you should be able to find the right model at the right price point for your application needs. Like any chip technology, the cost of a chip reduces dramatically with volume; this is also the case with FPGAs. They're widely used today as an alternative to ASIC chips, providing a volume base and competitive pricing that's only set to improve over the coming years.

Only the Beginning

The end of Moore's Law and its rapid doubling of processing power doesn't sound the death knell for computing. But it does mean that we must reconfigure our assumptions of what constitutes high-performance computing architectures, programming languages, and solution design. Hennessy and Patterson (see Part 1) even refer to this as the start of a new golden age in computer and software architecture innovation. Wherever that innovation may lead, it's safe to say that server acceleration is possible now, and FPGAs provide an agile alternative with many benefits to consider.

Daniel Proch is Vice President of Product Management at Napatech.

See the rest here:
SiFive Opens Business Unit to Build Chips With Arm and RISC-V Inside - Electronic Design

Hadoop Developer Interview Questions: What to Know to Land the Job – Dice Insights

Interested in Apache Hadoop as a building block of your tech career? While you're on the job hunt, Hadoop developer interview questions will explore whether you have the technical chops with this open-source framework, especially if you're going for a role such as data engineer or B.I. specialist.

Hadoop allows firms to run data applications on large, often distributed hardware clusters. While it takes technical skill to create the Hadoop environment necessary to process Big Data, other skill sets are required to make the results meaningful and actionable. The fast-changing Hadoop environment means that candidates should have flexibility and openness to new innovations.

Every few years, it seems, pundits begin predicting Hadoop's demise, killed by the cloud or some competing technology or, well, something else. But according to Burning Glass, which collects and analyzes job postings from across the country, Hadoop-related jobs are expected to grow 7.8 percent over the next 10 years. That's not exactly dead tech, to put it mildly. Moreover, the median salary is $109,000 (again, according to Burning Glass), which makes it pretty lucrative; that compensation can rise if you have the right mix of skills and experience.

As you can see from the below chart, Hadoop pops up pretty frequently as a requested skill for data engineer, data scientist, and database architect jobs. If you're applying for any of these roles, the chances of Hadoop-related questions are high:

Dice Insights spoke to Kirk Werner, vice president of content at Udacity, to find out the best ways to prepare for Hadoop developer interview questions, the qualities that make a good candidate, and how practice before the interview makes perfect.

Werner notes Hadoop is designed for big, messy data and doesn't work well with multiple database strings.

"There's a lot of times in tech when the buzzword is the next big thing, and a lot of people heard Hadoop and thought they had to run all their data through these systems, when actually it's for a lot of big, messy data," he said. "Understanding that is the most important challenge for people to overcome: is this the right tool to use? The reality is you've got to understand what data you're looking at in order to determine if it's the right tool, and that's the biggest thing with Hadoop."

When you learn Hadoop, you also learn how it intersects with a variety of other services, platforms, tools, and programming languages, including NoSQL, Apache Hive, Apache Kafka, Cassandra, and MapReduce. One of the big challenges of specializing in Hadoop is managing the complexity inherent in this particular ecosystem.

For Werner, it comes down to a handful of fundamentals. First, there are questions that target your understanding of the terminology and what the tools can do. How do you use Hive, HBase, Pig? "Every person I talk to about the interview process, most of the questions are basic and fundamental: How do I use these tools?"

Second, it's key to understand how to answer a scenario-based question. For example: in a cluster of 20 data nodes, with X number of cores and Y RAM, what's the total capacity?
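To see how such a question is worked through, take hypothetical numbers: a cluster of 20 data nodes, each with 16 cores and 64 GB of RAM, offers a raw total of 320 cores and roughly 1.28 TB of memory, from which you would subtract whatever each node reserves for the operating system and the Hadoop daemons before quoting usable capacity.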

"Understand what they're going to ask so you know how to work through the problem," he said. "How do you work through the problem presented based on the tools the company is using?" Before you head into the interview, do your research; read the company's website, and search Google News or another aggregator for any articles about its tech stack.

It's important to have a sense of what toolsets the company utilizes, so you don't answer the questions completely antithetical to how they approach a particular data issue. "Practice and prepare; don't go in cold," Werner said. "Learn how to answer the questions, and build the communications skills to get that information across clearly. Make sure you understand the information well enough that you can go through the answer for the people in front of you. It's a live test, so practice that."

Werner said it's important to be comfortable with ambiguity. Even if you're used to working with earlier versions of Hadoop-related tools, and even if you can't fully answer a technical question the interviewer lobs at you, you can still show that you're at ease with the fundamentals of data and Hadoop.

"It's about understanding machine learning in the context of Hadoop, knowing when to bring in Spark, knowing when the new tools come up, playing with the data, seeing if the output works more efficiently," he added. "A comfort level with exploration is important, and having a logical mind is important."

Integrating new tool sets and distribution tools is one of the most essential skills in any kind of data work. "You have to be able to just see an interesting pattern, and want to go explore it, find new avenues of the analysis, and be more open to the idea that you might want to organize it in a different way a year from now," Werner said.


You should always spend the interview showing the interviewer how you'd be a great fit for the organization. This is where your pre-interview research comes in: do your best to show the ways you can enhance the company's existing Hadoop work.

Be honest, and don't sugarcoat. "You want to be humble, but audacious. Talk about how you add value to the organization, beyond just filling this role," Werner said. "The interview is not just, 'here are the technical things we need you to know,' and if you can explain how you can add broad value to your position, you'll be much more successful."

Werner advised that candidates should ask questions about team structure, including the people they'll be working with. On the technical side of things, you'll want to ask about the company's existing data structures, as well as the tools and distribution engines used.

"A company's not going to tell you the ins and outs of their data, but you're going to want to know if they use Hive or MongoDB. They should be open about the toolsets they're using," he said.

On top of that, he's always believed asking questions prior to your interview is the best way to prepare. It can go beyond the technology stack or database system the company has: "How does the individual manage teams? What's the success criteria for the position?" Werner said. "Show interest in what it takes to be a successful member of the team. Being prepared to be interviewed is super-important, even beyond the technical aspect of it."

Here is the original post:
Hadoop Developer Interview Questions: What to Know to Land the Job - Dice Insights

Exploring the future of modern software development – ComputerWeekly.com

Modern software development is about building cloud-native, cloud-first and multi-cloud applications. But it's also about embracing data-driven big data insights and making use of artificial intelligence (AI) and machine learning (ML). The definition of modern software development encompasses granular code reuse, low-code tools and a whole lot more, too.

The hidden question is really: what does it take to be a software developer in 2020 and beyond?

First, let's revisit a well-known phrase that has become somewhat of a recognised principle in the creation, deployment, operation and management of software solutions: it's about people, processes, tools and technology.

Take technology for instance. An overriding theme throughout myriad market sectors is the spectrum of digital technologies enabling new levels of operational capacity and business reach. The list is long. By no means comprehensive, it can span from cloud and mobile through to internet-connected products, integration strategies of application programming interfaces (APIs) and new application models such as blockchain and microservices.

Organisations are looking to take advantage of digital technologies to innovate and deliver solutions and products faster. These technologies also enable more engaging experiences and interactions while driving greater levels of productivity and personalisation.

The boundaries of an organisation are no longer confined to physical bricks, but extend out to an edge that flexes according to its end points. Underpinning processes, such as DevOps, focus on finding a new working relationship that benefits the entire software process. Implicit in that goal is the quick, stable and repeatable release of software into the field with greater frequency and control.

Today's software developer has access to a wealth of tools and services that have evolved and adapted, with a new wave of guardrails keeping them in check. These tools incorporate greater support for automation, self-service provisioning and a broader scope of training services. There is flexibility, with features that abstract complexity and provide the necessary plumbing that makes things work.

With low and no code tooling support, businesses are not limited by their access to traditional developer skills. They can broaden the scope of participation to include more employees.

"Give the people what they want when they want it" has become a first principle when it comes to software delivery. Ultimately, the outcome matters. A defining feature of modern software development for all ages is the delivery of software solutions and products that simply don't suck but are intuitive to modern needs and concerns.

In short, modern software development means the development of applications and apps that take advantage of all that current technology has to offer. It uses the different architectures, services and capabilities available to maximise the benefits. It requires interpersonal skills and a collaborative approach that is attuned to the context of use and the customer.

It's important to pay attention to driving concerns such as security, privacy and ethical responsibility. The challenge to being modern is navigating and selecting that which won't hold you back: people, tools and technology. The good news is that open extensibility and interoperability is the modern lingua franca that will keep you current.

Bola Rotibi is a research director at CCS Insight.

Read the original:
Exploring the future of modern software development - ComputerWeekly.com

In the City: Take advantage of open recreation, cultural and park amenities – Coloradoan

John Stokes Published 7:00 a.m. MT Aug. 16, 2020


Even in the midst of this unprecedented time, I hope you are finding opportunities to enjoy our beautiful Colorado summer.

The last few months have brought the welcome reopening of several recreation, cultural and park facilities, and programs. Following county and state health department guidelines, the city has designed reopening plans to create safe and welcoming places with appropriate activities for each location.

A number of recreation facilities have reopened including Edora Pool and Ice Center (EPIC), Northside Aztlan Community Center, Fort Collins Senior Center, The Farm at Lee Martinez Park, The Pottery Studio, Foothills Activity Center, and Club Tico.

Visitors should expect modified hours, limited programs and capacities, increased cleaning and sanitization, and updated check-in policies when visiting. Participants can engage in fitness, education, youth enrichment and arts and crafts programming in person or virtually.

The last month has also brought the reopening of the Fort Collins Museum of Discovery and the Lincoln Center. In welcoming the community back, the museum has limited and timed admissions and is using an online ticketing process. The Lincoln Center is available to the community for scheduled gatherings, meetings and events.

The Gardens on Spring Creek opened in June and has added summer programming for the community including yoga and tai chi on the Great Lawn.


Be on the lookout for over 50 murals being created this summer by local artists through the Art in Public Places Program. Artists will be painting murals on transformer cabinets, pianos, walls and even concrete barriers in Old Town.

Another great way to enjoy the Colorado summer is by taking a stroll in a local natural area or park. I encourage you to enjoy these treasured open spaces and recreate safely at available splash pads, dog parks, skate parks, golf courses and more.

While we are very much in the moment, we continue to plan for the future. For the last 10 months, Fort Collins parks and recreation staff, together with a consulting team and considerable public engagement, have been updating the Parks and Recreation Master Plan.

There are many phases to this in-depth planning exercise, and the final document is intended to guide the future of recreation and park assets for several decades. Community input is vital to the success of the master plan, and we want to hear your voice. Visit ourcity.fcgov.com/ParksandRec for more information on the master plan and how you can participate in upcoming engagement opportunities.

We have seen tremendous community use of public amenities recently, and we hope you will continue to enjoy the parks, natural areas, recreation and cultural facilities. Please know that we remain committed to staying informed and to safely adapting to the ever-changing conditions we find ourselves in.

John Stokes is the deputy director of Fort Collins Community Services. He can be reached at 970-221-6263 or jstokes@fcgov.com.


See the article here:
In the City: Take advantage of open recreation, cultural and park amenities - Coloradoan

CORRECTING and REPLACING Anyscale Hosts Inaugural Ray Summit on Scalable Python and Scalable Machine Learning – Yahoo Finance

Creators of Ray Open Source Project Gather Industry Experts for Two-Day Event on Building Distributed Applications at Scale

Please replace the release with the following corrected version due to multiple revisions.

The updated release reads:

ANYSCALE HOSTS INAUGURAL RAY SUMMIT ON SCALABLE PYTHON AND SCALABLE MACHINE LEARNING

Creators of Ray Open Source Project Gather Industry Experts for Two-Day Event on Building Distributed Applications at Scale

Anyscale, the distributed programming platform company, is proud to announce Ray Summit, an industry conference dedicated to the use of the Ray open source framework for overcoming challenges in distributed computing at scale. The two-day virtual event is scheduled for Sept. 30 to Oct. 1, 2020.

With the power of Ray, developers can build applications and easily scale them from a laptop to a cluster, eliminating the need for in-house distributed computing expertise. Ray Summit brings together a leading community of architects, machine learning engineers, researchers, and developers building the next generation of scalable, distributed, high-performance Python and machine learning applications. Experts from organizations including Google, Amazon, Microsoft, Morgan Stanley, and more will showcase Ray best practices, real-world case studies, and the latest research in AI and other scalable systems built on Ray.

"Ray Summit gives individuals and organizations the opportunity to share expertise and learn from the brightest minds in the industry about leveraging Ray to simplify distributed computing," said Robert Nishihara, Ray co-creator and Anyscale co-founder and CEO. "Its also the perfect opportunity to build on Rays established popularity in the open source community and celebrate achievements in innovation with Ray."

Anyscale will announce the v1.0 release of the Ray open source framework at the Summit and unveil new additions to a growing list of popular third-party machine learning libraries and frameworks on top of Ray.

The Summit will feature keynote presentations, general sessions, and tutorials suited to attendees with various experience and skill levels using Ray. Attendees will learn the basics of using Ray to scale Python applications and machine learning applications from machine learning visionaries and experts.

"It is essential to provide our customers with an enterprise grade platform as they build out intelligent autonomous systems applications," said Mark Hammond, GM Autonomous Systems, Microsoft. "Microsoft Project Bonsai leverages Ray and Azure to provide transparent scaling for both reinforcement learning training and professional simulation workloads, so our customers can focus on the machine teaching needed to build their sophisticated, real world applications. Im happy we will be able to share more on this at the inaugural Anyscale Ray Summit."

To view the full event schedule, please visit: https://events.linuxfoundation.org/ray-summit/program/schedule/

For complimentary registration to Ray Summit, please visit: https://events.linuxfoundation.org/ray-summit/register/

About Anyscale

Anyscale is the future of distributed computing. Founded by the creators of Ray, an open source project from the UC Berkeley RISELab, Anyscale enables developers of all skill levels to easily build applications that run at any scale, from a laptop to a data center. Anyscale empowers organizations to bring AI applications to production faster, reduce development costs, and eliminate the need for in-house expertise to build, deploy and manage these applications. Backed by Andreessen Horowitz, Anyscale is based in Berkeley, CA. http://www.anyscale.com.

View source version on businesswire.com: https://www.businesswire.com/news/home/20200812005122/en/

Contacts

Media Contact: Allison Stokes, fama PR for Anyscale, anyscale@famapr.com, 617-986-5010

The rest is here:
CORRECTING and REPLACING Anyscale Hosts Inaugural Ray Summit on Scalable Python and Scalable Machine Learning - Yahoo Finance

ISRO Is Recruiting For Vacancies with Salary Upto Rs 54000: How to Apply – The Better India

ISRO recruitment 2020 is currently underway for capacity building and research in the field of Remote Sensing and Geo-Informatics.

The Indian Institute of Remote Sensing (IIRS), a Unit of Indian Space Research Organisation (ISRO), is hiring for 18 vacancies, out of which 17 are for the post of Junior Research Fellow and 1 is for a Research Associate.

The organisation is focused on developing land-ocean-atmosphere applications, and understanding processes on the Earth's surface using space-based technologies.

According to the official notification, selected candidates will be paid a salary of Rs 31,000/month for JRF positions and up to Rs 54,000/month for an RA position.

Step 1: Visit the official website, and register yourself as an applicant using a valid email id.

Step 2: Fill the online application form and upload necessary documents

Step 3: Submit the application

The last date for submitting applications is 31 August 2020. Before applying, read through the detailed official notification.

Project: Retrieval of Geophysical parameters using GNSS/IRNSS signals

Vacancies: 4

Educational qualification

Essential qualifications

Project: Himalayan Alpine Biodiversity Characterization and Information System Network (NMHS)

Vacancies: 2

Educational qualifications:

Essential qualifications:

Project: Chandrayaan-2 Science plan for utilization of Imaging Infrared Spectrometer (IIRS) data for lunar surface compositional mapping

Vacancies: 1

Educational Qualifications

Essential qualification

Project: Multi-sensor integration for digital recording, and realistic 3D Modelling of UNESCO World Heritage sites in Northern India

Vacancies: 1

Educational Qualifications:

Essential qualifications:

Project: Extending crop inventory to new crops

Vacancies: 2

Educational qualification:

Essential qualifications:

Project: Aerosol Radiative Forcing over India

Vacancies: 1

Educational qualifications:

Essential qualifications:

Project: Spatio-temporal variations of gaseous air pollutants over the Indian Subcontinent with a special emphasis on foothills of North-Western Himalaya

Vacancies: 2

Educational qualifications:

Essential qualifications

Project: Indian Bio-Resource Information Network

Vacancies: 2

Educational qualifications:

Essential qualifications:

Project: Rainfall threshold and DInSAR-based methods for initiation of landslides and decoupling of spatial variations in precipitation, erosion, tectonics in Garhwal Himalaya

Vacancies: 1

Educational qualifications:

Essential qualifications:

Project: Indian Bio-Resource Information Network

Vacancies: 1

Educational qualifications:

Essential qualifications:

Project: Indian Bio-Resource Information Network

Vacancies: 1

Educational qualification:

Essential qualification:

For more information, you can visit the IIRS website or read the recruitment notice.

(Edited by Gayatri Mishra)


See original here:
ISRO Is Recruiting For Vacancies with Salary Upto Rs 54000: How to Apply - The Better India

Does technology increase the problem of racism and discrimination? – TechTarget

Technology was designed to perpetuate racism. This is pointed out in a recent article in the MIT Technology Review, written by Charlton McIlwain, professor of media, culture and communication at New York University and author of Black Software: The Internet & Racial Justice, From the AfroNet to Black Lives Matter.

The article explains how the Black population and the Latino community in the United States are victims of the configuration of technological tools, such as facial recognition, which is programmed to analyze the physical features of people and, in many cases, generate alerts of possible risks when detecting individuals whose facial features identify them as Black or Latino.

"We've designedfacial recognition technologiesthat target criminal suspects on the basis of skin color. We'vetrained automated risk profiling systemsthat disproportionately identify Latinx people as illegal immigrants. We'vedevised credit scoring algorithmsthat disproportionately identify black people as risks and prevent them from buying homes, getting loans, or finding jobs," McIlwain wrote.

In the article, the author elaborates on the origins of the use of algorithms in politics to win elections, understand the social climate and prepare psychological campaigns to modify the social mood, which in the late 1960s was tense in the United States.These efforts, however, paved the way for large-scale surveillance in the areas where there was most unrest, at the time, the Black community.

According to McIlwain, "this kind of information had helped create what came to be known as 'criminal justice information systems.' They proliferated through the decades, laying the foundation for racial profiling, predictive policing, and racially targeted surveillance. They left behind a legacy that includes millions of black and brown women and men incarcerated."

Contact tracing and threat-mapping technologies designed to monitor and contain the COVID-19 pandemic did not help improve the racial climate.On the contrary, these applications showed a high rate of contagion among Black people, Latinos and the indigenous population.

Although this statistic could be interpreted as a lack of quality and timely medical services for members of the aforementioned communities, the truth is that the information was disclosed as if Blacks, Latinos and indigenous people were a national problem and a threat of contagion.Donald Trump himself made comments in this regard and asked to reinforce the southern border to prevent Mexicans and Latinos from entering his country and increasing the number of COVID-19 patients, which is already quite high.

McIlwain's fear -- and that of other members of the Black community in the United States -- is that the new applications created as a result of the pandemic will be used to recognize protesters to later "quell the threat." Surely, he refers to persecutions and arrests, which may well end in jail, or in disappearances.

"If we dont want our technology to be used to perpetuate racism, then we must make sure that we dont conflate social problems like crime or violence or disease with black and brown people. When we do that, we risk turning those people into the problems that we deploy our technology to solve, the threat we design it to eradicate," concludes the author.

Although artificial intelligence and machine learning feed applications to enrich them, the truth is that the original programming is done by a human (or several). The people who created the program or application are the ones who initially define the parameters for its algorithms. The lack of well-defined criteria can result in generalizations, and this can lead to discriminatory or racist actions.

The British newspaper The Guardian reported, a few years ago, that one of Google's algorithms auto-tagged images of Black people as gorillas. Other companies, such as IBM and Amazon, avoid using facial recognition technology because of its discriminatory tendencies towards Black people, especially women.

"We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies," IBM executive director Arvind Krishna wrote in a letter sent to Congress in June."[T]he fight against racism is as urgent as ever," said Krishna, while announcing that IBM has ended its "general" facial recognition products and will not endorse the use of any technology for "mass surveillance, racial discrimination and human rights violations."

If we consider that the difference in error rate between identifying a white man and a Black woman is 34% in the case of IBM software, according to a study by the MIT Media Lab, IBM's decision not only seems fair from a racial point of view, it is also a recognition of the path that lies ahead in programming increasingly precise algorithms.

The 2018 MIT Media Lab study concluded that, although the average precision of these products ranges between 87.9% and 93.7%, the differences based on skin color and gender are notable; 93% of the errors made by Microsoft's product affected people with dark skin, and 95% of the errors made by Face++, a Chinese alternative, concerned women.

Joy Buolamwini, co-author of the MIT study and founder of the Algorithmic Justice League, sees IBM's initiative as a first step in holding companies accountable and promoting fair and accountable artificial intelligence. "This is a welcome recognition that facial recognition technology, especially as deployed by police, has been used to undermine human rights, and to harm Black people specifically, as well as Indigenous people and other People of Color," she said.

Another issue related to discrimination in the IT industry has to do with the language used to define certain components of a network or systems architecture. Concepts like master/slave are being reformulated and replaced with less objectionable terminology. The same is happening with the concepts of blacklists/whitelists. Developers will now use terms like leader/follower and allowed list/blocked list.

The Linux open source operating system will include new inclusive terminology in its code and documentation. Linux kernel maintainer Linus Torvalds approved this new terminology on July 10, according to ZDNet.

GitHub, a Microsoft-owned software development company, also announced a few weeks ago that it is working to remove such terms from its code.

These actions demonstrate the commitment of the technology industry to create tools that help the growth of society, with inclusive systems and applications and technologies that help combat discrimination instead of fomenting racism.

Go here to read the rest:
Does technology increase the problem of racism and discrimination? - TechTarget

Build Your Own PaaS with Crossplane: Kubernetes, OAM, and Core Workflows – InfoQ.com

Key Takeaways

InfoQ recently sat down with Bassam Tabbara, founder and CEO of Upbound, and discussed building application platforms that span multiple cloud vendors and on-premise infrastructure.

The conversation began by exploring the premise that every organisation delivering software deploys applications onto a platform, whether they intentionally curate this platform or not. Currently, Kubernetes is being used as the foundation for many "cloud native" platforms. Although Kubernetes does not provide a full platform-as-a-service (PaaS)-like experience out of the box, the combination of a well-defined API, clear abstractions, and comprehensive extension points make this a perfect foundational component on which to build upon.

Tabbara also discussed Crossplane, an open source project that enables engineers to manage any infrastructure or cloud services directly from Kubernetes. This "cross cloud control plane" has been built upon the Kubernetes declarative configuration primitives, and it allows engineers defining infrastructure to leverage the existing K8s toolchain. The conversation also covered the Open Application Model (OAM) and explored how Crossplane has become the Kubernetes implementation of this team-centric standard for building cloud native applications.

Many organisations are aiming to assemble their own cloud platform, often consisting of a combination of on-premises infrastructure and cloud vendors. Leaders within these organisations recognise that minimizing deployment friction and decreasing the lead time for the delivery of applications, while simultaneously providing safety and security, can provide a competitive advantage. These teams also acknowledge that any successful business typically has existing "heritage" applications and infrastructure that needs to be included within any platform. Many also want to support multiple public cloud vendors, with the goals of avoiding lock-in, arbitraging costs, or implementing disaster recovery strategies.

Platform teams within organisations typically want to enable self-service usage for application developers and operators. But they also want appropriate security, compliance, and regulatory requirements baked into the platform. All large-scale public cloud vendors such as Amazon Web Service (AWS), Microsoft Azure, and Google Cloud Platform (GCP) offer their services through a control plane. This control plane consists of user interfaces (UIs), command line interfaces (CLIs), and application programming interfaces (APIs) that platform teams and operators interact with to configure and deploy the underlying "data plane" of infrastructure services. Although the implementation of a cloud control plane is typically globally distributed, it appears centralised to the end users. This control plane provides a single entry point for user interaction where policy can be enforced, guardrails applied, and auditing conducted.

Application developers typically want a platform-as-a-service (PaaS)-like experience for defining and deploying applications, as pioneered by the likes of Zimki, Heroku, and Cloud Foundry. Deploying new applications via a simple "git push heroku master" is a powerful and frictionless approach. Application operators and site reliability engineering (SRE) teams want to easily compose, run, and maintain applications and their respective configurations.

Tabbara cautioned that these requirements lead to an organisation buying a PaaS, which, unless chosen appropriately, can be costly to maintain:

"Modern commercial PaaSs often meet the requirements of 80% of an organisations use cases. However, this means that the infrastructure teams still have to create additional platform resources to meet the other 20% of requirements"

Building a PaaS is not easy. Doing so takes time and skill, and it is challenging to define and implement technical abstractions that meet the requirements of all of the personas involved. Google famously has thousands of highly-trained engineers working on internal platforms, Netflix has a large team of specialists focused on the creation and maintenance of their internal PaaS, and even smaller organisations like Shopify have a dedicated platform team. Technical abstractions range from close to the "lowest common denominator", like the approach taken by Libcloud and OpenStack, all the way through to providing a common workflow but full cloud-specific configuration, like HashiCorp's Terraform or Pulumi. Traditional PaaS abstractions are also common within the domain of cloud, but are typically vendor-specific e.g. GCP App Engine, AWS Elastic Beanstalk, or Azure Service Fabric.

Many organisations are choosing to build their platform using Kubernetes as the foundation. However, as Tabbara stated on Twitter, this can require a large upfront investment, and combined with the 80% use case challenge, this can lead to the "PaaS dilemma":

"The PaaS dilemma - your PaaS does 80% of what I want, my PaaS takes 80% of my time to maintain #kubernetes"

Tabbara stated that the open source Crossplane project aims to be a universal multi-cloud control plane for building a bespoke PaaS-like experience.

"Crossplane is the fusing of "cross"-cloud control "plane". We wanted to use a noun that refers to the entity responsible for connecting different cloud providers and acts as control plane across them. Cross implies "cross-cloud" and "plane" brings in "control plane"."

By building on the widely accepted Kubernetes-style primitives for configuration, and providing both ready-made infrastructure components and a registry for sharing additional resources, this reduces the burden on the infrastructure and application operators. Also, by providing a well-defined API that encapsulates the key infrastructure abstractions, this allows a separation of concerns between platform operators (those working "below the API line") and application developers and operators (those working "above the API line").

"Developers can define workloads without worrying about implementation details, environment constraints, or policies. Administrators can define environment specifics and policies. This enables a higher degree of reusability and reduces complexity."

Crossplane is implemented as a Kubernetes add-on and extends any cluster with the ability to provision and manage cloud infrastructure, services, and applications. Crossplane uses Kubernetes-styled declarative and API-driven configuration and management to control any piece of infrastructure, on-premises or in the cloud. Through this approach, infrastructure can be configured using custom resource definitions (CRDs) and YAML. It can also be managed via well established tools like kubectl or via the Kubernetes API itself. The use of Kubernetes also allows the definition of security controls, via RBAC, or policies, using Open Policy Agent (OPA) implemented via Gatekeeper.
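To make the declarative style concrete, the sketch below shows roughly what requesting a piece of infrastructure looks like as a Kubernetes object. The group, version, kind and spec fields here are placeholders modelled on Crossplane's documentation examples; the real schema depends on the provider package and resource definitions installed in the cluster.

```yaml
# Hypothetical infrastructure claim -- names and fields are illustrative only.
apiVersion: database.example.org/v1alpha1
kind: PostgreSQLInstance
metadata:
  name: app-db
  namespace: team-a
spec:
  parameters:
    storageGB: 20
  # Crossplane writes the generated connection details into a Kubernetes Secret.
  writeConnectionSecretToRef:
    name: app-db-conn
```

The object is applied and inspected with the usual tooling, for example `kubectl apply -f app-db.yaml` followed by `kubectl get`, and RBAC or OPA policies apply to it just as they would to any other Kubernetes resource.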

As part of the Crossplane installation a Kubernetes resource controller is configured to be responsible for the entire lifecycle of a resource: provisioning, health checking, scaling, failover, and actively responding to external changes that deviate from the desired configuration. Crossplane integrates with continuous delivery (CD) pipelines so that application infrastructure configuration is stored in a single control cluster. Teams can create, track, and approve changes using cloud native CD best practices such as GitOps. Crossplane enables application and infrastructure configuration to co-exist on the same Kubernetes cluster, reducing the complexity of toolchains and deployment pipelines.

The clear abstractions, use of personas, and the "above and below the line" approach draws heavily on the work undertaken within the Open Application Model.

Initially created by Microsoft, Alibaba, and Upbound, the Open Application Model (OAM) specification describes a model where developers are responsible for defining application components, application operators are responsible for creating instances of those components and assigning them application configurations, and infrastructure operators are responsible for declaring, installing, and maintaining the underlying services that are available on the platform. Crossplane is the Kubernetes implementation of the specification.

With OAM, platform builders can provide reusable modules in the format of Components, Traits, and Scopes. This allows platforms to do things like package them in predefined application profiles. Users choose how to run their applications by selecting profiles, for example, microservice applications with high service level objective (SLO) requirements, stateful apps with persistent volumes, or event-driven functions with horizontally autoscaling.

The OAM specification introduction document presents a story that explores a typical application delivery lifecycle.

To deliver an application, each individual component of a program is described as a Component YAML by an application developer. This file encapsulates a workload and the information needed to run it.

To run and operate an application, the application operator sets parameter values for the developers' components and applies operational characteristics, such as replica size, autoscaling policy, ingress points, and traffic routing rules in an ApplicationConfiguration YAML. In OAM, these operational characteristics are called Traits. Writing and deploying an ApplicationConfiguration is equivalent to deploying an application. The underlying platform will create live instances of defined workloads and attach operational traits to workloads according to the ApplicationConfiguration spec.
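As a concrete illustration of this split, a trimmed pair of manifests in the style of the OAM v1alpha2 spec might look like the sketch below; the workload and trait kinds are taken from the specification's own examples, while the names, image and replica count are invented for illustration.

```yaml
# Component -- authored by the application developer (illustrative).
apiVersion: core.oam.dev/v1alpha2
kind: Component
metadata:
  name: web-frontend
spec:
  workload:
    apiVersion: core.oam.dev/v1alpha2
    kind: ContainerizedWorkload
    spec:
      containers:
        - name: web
          image: example/web-frontend:1.0.0
          ports:
            - name: http
              containerPort: 8080
---
# ApplicationConfiguration -- authored by the application operator (illustrative).
apiVersion: core.oam.dev/v1alpha2
kind: ApplicationConfiguration
metadata:
  name: web-frontend-prod
spec:
  components:
    - componentName: web-frontend
      traits:
        - trait:
            apiVersion: core.oam.dev/v1alpha2
            kind: ManualScalerTrait
            spec:
              replicaCount: 3
```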

Infrastructure operators are responsible for declaring, installing, and maintaining the underlying services that are available on the platform. For example, an infrastructure operator might choose a specific load balancer when exposing a service, or a custom database configuration that ensures data is encrypted and replicated globally.

To make the discussion more concrete, lets explore a typical Crossplane workflow, from installation of the project to usage.

First, create a Kubernetes cluster and install Crossplane into it. Next, install a provider and configure your credentials. Infrastructure primitives can be provisioned from any provider, e.g. GCP, AWS, Azure, Alibaba, or (custom-created) on-premises infrastructure.

A platform operator defines, composes, and publishes their own infrastructure resources with declarative YAML, resulting in custom infrastructure CRDs being added to the Kubernetes API for applications to use.

An application developer publishes application components to communicate any fundamental, suggested, or optional properties of their services and their infrastructure requirements.

An application operator ties together the infrastructure components and application components, specifies configuration, and runs the application.

Kubernetes is being used as the foundation for many "cloud native" platforms, and therefore investing in both models of how the team interacts with this platform and also how the underlying components are assembled is vitally important and a potential competitive advantage for organisations. As stated by Dr Nicole Forsgren et al in Accelerate, minimising lead time (from idea to value) and increasing deployment frequency are correlated with high performing organisations. The platform plays a critical role here.

Crossplane is a constantly evolving project, and as the community expands more and more feedback is being sought. Engineering teams can visit the Crossplane website to get started with the open source projects, and feedback can be shared in the Crossplane Slack.

Daniel Bryant works as a Product Architect at Datawire, and is the News Manager at InfoQ, and Chair for QCon London. His current technical expertise focuses on DevOps tooling, cloud/container platforms and microservice implementations. Daniel is a leader within the London Java Community (LJC), contributes to several open source projects, writes for well-known technical websites such as InfoQ, O'Reilly, and DZone, and regularly presents at international conferences such as QCon, JavaOne, and Devoxx.

Continued here:
Build Your Own PaaS with Crossplane: Kubernetes, OAM, and Core Workflows - InfoQ.com