AI/ML in Broadband Networks: the Role of Standards – EnterpriseAI

Earlier this year, a new initiative to create standards for artificial intelligence (AI) and machine learning (ML) in the cable telecommunications industry was launched. The working group, which draws members from both inside and outside of cable, including giants like IBM, is exploring how AI and ML can be leveraged to make the network more efficient. The group's success will have significant implications for businesses across the country by accelerating the industry's move toward 10G (the broadband technology platform of the future, with residential speeds up to 10 times faster than today's networks) and by supporting the scalable deployment of new technologies across the network.

The new initiative is part of the SCTE-ISBE Standards program, the only ANSI-accredited platform for developing technical standards supporting cable broadband networks. Standards for these networks impact the more than 66 million people across the U.S. who rely on broadband access. Currently, at least 20 expert members are working together to drive telecom standards and operational practices for AI and ML. The resulting standards will improve network efficiency, move the industry towards faster adoption of 10G, and allow products to be interchangeable and interoperable, thus accelerating the deployment of products and technologies in an ever-changing broadband landscape.

Still in its early stages, the AI/ML working group is analyzing current and projected projects utilizing AI and ML among member companies to determine what standards are needed. As cable network operators increasingly embrace the advantages of using AI and ML to run their networks, the group's initial focus is on internal uses of the technology to improve the network. Primarily, AI and ML algorithms complement human efforts to optimize network operations and correct network impairments before customers even notice an issue. Three initial applied examples of AI and ML have risen to the surface in the first few months of work.

One application being explored for the creation of an industry standard is the use of ML on HFC node splits. Network operators commonly use node splits to provide greater bandwidth and capacity to a given geographic area. Because node splits require significant labor and capital investment, cable operators typically must prioritize where to invest, a process that has historically required intensive manual effort. Using machine learning to solve this challenge, an algorithm considers multiple variables, including service load and cost, to produce a prioritized, actionable report for the cable operator. By applying ML to prioritize node splits, the network will run more efficiently and customers will continue to receive their high-speed services without interruption as the network grows.
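
The article does not describe any operator's actual model, but the shape of the output is easy to illustrate. The sketch below is a minimal, hypothetical example: the feature names, weights, and data are invented, and a simple hand-weighted score stands in for a trained ML model.

```python
# Illustrative sketch only, not an operator's production model: turn a few
# per-node variables into the kind of ranked node-split report described above.
import pandas as pd

# Hypothetical inputs for three HFC nodes: peak utilization (0-1),
# subscriber growth rate, and estimated cost of a split (thousands of USD).
nodes = pd.DataFrame({
    "node_id": ["N17", "N42", "N88"],
    "peak_utilization": [0.93, 0.71, 0.88],
    "subscriber_growth": [0.12, 0.02, 0.08],
    "split_cost_kusd": [120, 95, 140],
})

# A hand-weighted benefit/cost score standing in for a trained model's output.
nodes["priority"] = (
    0.7 * nodes["peak_utilization"] + 0.3 * nodes["subscriber_growth"]
) / nodes["split_cost_kusd"]

# The prioritized, actionable report for the operator.
print(nodes.sort_values("priority", ascending=False).to_string(index=False))
```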

The working group is also looking at creating standards to control video piracy by applying artificial intelligence that detects the signatures of bad actors on the network. Initial findings indicate that video piracy can be significantly diminished with this use of AI. The development of a standard for this application would provide incredible benefits for content creators, streaming services, and film production companies, among others. Benefits like these emphasize the importance of having technology experts from outside of the cable industry collaborating with cable experts on these working groups.

Machine learning is also being applied to spectral impairment detection across the access network, enabling automated diagnostic reports and mitigation activity. Spectral impairments, from a variety of sources such as external signal interference, account for a significant portion of infrastructure issues. The rapid identification of impairments allows for fewer disruptions to the network and to the user. A standard for this application would help all cable operators and improve network service for everyone.
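
The working group's method is not public, but the concept of automated impairment detection can be sketched with an off-the-shelf unsupervised model. Everything below (the synthetic spectra, the injected notch, and the choice of scikit-learn's IsolationForest) is an assumption for illustration only.

```python
# Conceptual sketch, not the standardised approach: flag spectrum captures
# that deviate from a population of healthy spectra.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
healthy = rng.normal(0.0, 1.0, size=(500, 256))   # synthetic healthy spectra

test = rng.normal(0.0, 1.0, size=(20, 256))
test[:5, 100:120] -= 6.0   # inject a notch ("suck-out") into five captures

detector = IsolationForest(contamination=0.05, random_state=0).fit(healthy)
flags = detector.predict(test)   # -1 marks a suspected impairment
print(f"{(flags == -1).sum()} of {len(test)} captures flagged for diagnosis")
```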

These examples are only the start of how AI and ML will help in building and operating more complex and robust networks and services. With standards averaging six months to two years to develop, the group expects to start publishing AI/ML-related standards in 2021. And, over the next few years, the use of machine learning to optimize network operations is expected to grow significantly, which will lead to new demands for standards.

The AI/ML working group is one of seven working groups that make up the Explorer initiative. Each group represents industries, technologies, or practices that will place significant demands on telecommunications infrastructure, including telehealth and telemedicine, aging in place, autonomous transport, smart cities, and more. As the cable industry pushes towards 10G, opportunities for the advancement of emerging and yet-to-be imagined technologies continue to grow. It is crucial to ensure these advancements are met with industry standards to usher in a new era of connectivity and allow businesses to ensure that their products and services are optimized to reach customers across the broadband network.

About the Author

Chris Bastian is senior vice president and CTIO at SCTE-ISBE, the not-for-profit member organization for cable telecommunications. Bastian heads SCTE-ISBE's ANSI-accredited, award-winning Standards program. Prior to joining SCTE-ISBE, he spent 15 years in leadership roles at Comcast. For more information, visit scte.org/explorer.


Machine Learning Operationalization Software Market | Global Industry Analysis, Segments, Top Key Players, Drivers and Trends to 2025 – AlgosOnline

Market Study Report, LLC, has added comprehensive research on the 'Machine Learning Operationalization Software market' that offers valuable insights into market share, profitability, market size, SWOT analysis, and the regional spread of this industry. The study incorporates a breakdown of key drivers and challenges, industry participants, and application segments, devised by analyzing extensive information about this business space.

The Machine Learning Operationalization Software market report rigorously examines the implications of the major growth drivers, restraints, and opportunities on the revenue cycle of this industry vertical.

Request a sample Report of Machine Learning Operationalization Software Market at: https://www.marketstudyreport.com/request-a-sample/2829618?utm_source=algosonline.com&utm_medium=AG

As the world continues to battle the raging Covid-19 pandemic, lockdowns and restrictions have put a big question mark over the growth of businesses. Some industries will have to face adversities even after the economy recovers.

The coronavirus outbreak has prompted almost all businesses to revise their budgets in an effort to restore profitability in the coming years. Our in-depth assessment of this business space will help you craft an action plan to tackle the market uncertainties.

A complete study of the various market segmentations, with their growth prospects, is also included in the report. In addition, insights into the competitive dynamics are provided.

Main highlights of the Machine Learning Operationalization Software market report:

Ask for Discount on Machine Learning Operationalization Software Market Report at: https://www.marketstudyreport.com/check-for-discount/2829618?utm_source=algosonline.com&utm_medium=AG

Machine Learning Operationalization Software Market segmentations elucidated in the report:

Regional bifurcation: North America, Europe, Asia-Pacific, South America, Middle East and Africa

Product types:

Applications range:

Competitive outlook:

Report Focuses:

For More Details On this Report: https://www.marketstudyreport.com/reports/global-machine-learning-operationalization-software-market-2020-by-company-regions-type-and-application-forecast-to-2025

Some of the Major Highlights of the TOC include:

Executive Summary

Manufacturing Cost Structure Analysis

Development and Manufacturing Plants Analysis of Machine Learning Operationalization Software

Key Figures of Major Manufacturers

Related Reports:

Global Online Payment Gateway Market 2020 by Company, Regions, Type and Application, Forecast to 2025: The Online Payment Gateway Market report characterizes important segments and competitors of the market in terms of market size, volume, and value. The report also covers all regions and countries of the world, showing regional development status, and includes business profiles, introductions, revenue, and so on. Read More: https://www.marketstudyreport.com/reports/global-online-payment-gateway-market-2020-by-company-regions-type-and-application-forecast-to-2025

Contact Us: Corporate Sales, Market Study Report LLC | Phone: 1-302-273-0910 | Toll Free: 1-866-764-2150 | Email: [email protected]


Everything You Need To Know About Machine Learning In Unity 3D – Analytics India Magazine

Unity 3D is a popular platform for creating and operating interactive, real-time 3D content. It is a cross-platform 3D engine and a user-friendly integrated development environment (IDE) that helps in creating 3D games as well as applications for desktop, mobile, web and more. It offers a number of tools for programmers as well as artists to create real-time solutions beyond games, for industries such as film and automotive. The flexible real-time tools of Unity offer incredible possibilities for all industries and applications.

With a vision to maximise the transformative impact of Machine Learning for researchers and developers, Unity released the first version of Unity Machine Learning Agents Toolkit (ML-Agents) in 2017.

The aim of this ML environment is to allow game developers and AI researchers to use Unity as a platform to train as well as embed intelligent agents with the help of the latest advancements in ML and AI.

The Unity Machine Learning Agents Toolkit, or simply ML-Agents, is an open-source project by Unity which allows games and simulations to serve as environments for training intelligent agents. ML-Agents includes a C# software development kit (SDK) to set up a scene and define the agents within it, and a state-of-the-art ML library to train agents for 2D, 3D, and VR/AR environments.

The agents can be trained using techniques like reinforcement learning, imitation learning, neuro-evolution and other such ML methods through a simple-to-use Python API. The toolkit includes a number of training options, such as Curriculum Learning, Curiosity module for sparse-reward environments, Self-Play for multi-agent scenarios and more.
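
To give a flavour of that Python API, the sketch below connects to a Unity scene and drives its agents with random actions in place of a trained policy. It is an approximation based on the mlagents_envs package; exact class and method names (for example, action_spec.random_action) vary between ML-Agents releases.

```python
# Minimal sketch of the ML-Agents Python API (names vary across releases).
from mlagents_envs.environment import UnityEnvironment

# Connect to the Unity Editor in play mode (file_name=None), or pass the
# path of a built player instead.
env = UnityEnvironment(file_name=None)
env.reset()

# Pick the first behavior (agent group) registered in the scene.
behavior_name = list(env.behavior_specs)[0]
spec = env.behavior_specs[behavior_name]

for _ in range(100):
    decision_steps, terminal_steps = env.get_steps(behavior_name)
    # Random actions stand in for a trained policy here.
    action = spec.action_spec.random_action(len(decision_steps))
    env.set_actions(behavior_name, action)
    env.step()

env.close()
```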

The Unity environment also provides implementations of state-of-the-art algorithms, which are based on TensorFlow to enable game developers to easily train intelligent agents for 2D, 3D and VR/AR games.

These trained agents can be utilised for multiple purposes, including controlling NPC behaviour, automated testing of the game builds as well as evaluating various game design decisions prior to its release.

The ML-Agents Toolkit provides a central platform where advances in Artificial Intelligence can be evaluated on the environments of Unity and then made accessible to the game developer communities for wider research.

Unity ML-Agents includes a number of intuitive features. Some of them are:

Unity Machine Learning Agents (ML-Agents) allows developers to create more compelling gameplay and an enhanced game experience. Using the platform, a developer can teach intelligent agents to learn through a combination of deep reinforcement learning and imitation learning.

The steps involved in ML-Agents are:


The key benefits of Unity ML-Agents are:


A Technical Journalist who loves writing about Machine Learning and Artificial Intelligence. A lover of music, writing and learning something out of the box. Contact: ambika.choudhury@analyticsindiamag.com


Amazon’s Machine Learning University To Make Its Online Courses Available To The Public – Analytics India Magazine

In a recent development, Amazon announced that it will make online courses by its Machine Learning University available to the public. The classes were previously only available to Amazon employees.

The company believes that machine learning has the potential to transform businesses in all industries, but there's a major limitation: demand for individuals with ML expertise far outweighs supply. That's a challenge for Amazon, and for companies big and small across the globe.

The Machine Learning University (MLU) was founded in 2016 with the aim of meeting this demand. It helped ML practitioners sharpen their skills and keep abreast of the latest developments in the field. The classes are taught by Amazon ML experts.

The tech giant now plans to make these classes available to the ML community across the globe, and will release nine more in-depth courses before the year ends. As the blog post notes, by the beginning of 2021, all MLU classes will be available via on-demand video, along with associated coding materials. The courses will cover topics such as natural language processing, computer vision and tabular data while addressing various business problems.

"By going public with the classes, we are contributing to the scientific community on the topic of machine learning, and making machine learning more democratic," said Brent Werness, AWS research scientist and MLU's academic director.

"This initiative to bring our courseware online represents a step toward lowering barriers for software developers, students and other builders who want to get started with practical machine learning," he added.

"Instead of a three-class sequence that takes upwards of 18 or 20 weeks to complete, in the accelerated classes we can engage students with machine learning right up front," shared Ben Starsky, MLU program manager.

The company said that, similar to other open-source initiatives, MLU's courseware will evolve and improve over time based on input from the builder community. It is also looking to rebuild its curriculum to further integrate "Dive into Deep Learning" into class sessions.

The company wants to include as many important things as possible while offering flexibility in the way people can take these classes.


Srishti currently works as Associate Editor at Analytics India Magazine. When not covering the analytics news, editing and writing articles, she could be found reading or capturing thoughts into pictures.


In the City: Take advantage of open recreation, cultural and park amenities – Coloradoan

John Stokes Published 7:00 a.m. MT Aug. 16, 2020

John Stokes (Photo: City of Fort Collins)

Even in the midst of this unprecedented time, I hope you are finding opportunities to enjoy our beautiful Colorado summer.

The last few months have brought the welcome reopening of several recreation, cultural and park facilities, and programs. Following county and state health department guidelines, the city has designed reopening plans to create safe and welcoming places with appropriate activities for each location.

A number of recreation facilities have reopened including Edora Pool and Ice Center (EPIC), Northside Aztlan Community Center, Fort Collins Senior Center, The Farm at Lee Martinez Park, The Pottery Studio, Foothills Activity Center, and Club Tico.

Visitors should expect modified hours, limited programs and capacities, increased cleaning and sanitization, and updated check-in policies when visiting. Participants can engage in fitness, education, youth enrichment and arts and crafts programming in person or virtually.

The last month has also brought the reopening of the Fort Collins Museum of Discovery and the Lincoln Center. In welcoming the community back, the museum has limited and timed admissions and is using an online ticketing process. The Lincoln Center is available to the community for scheduled gatherings, meetings and events.

The Gardens on Spring Creek opened in June and has added summer programming for the community including yoga and tai chi on the Great Lawn.

Be on the lookout for over 50 murals being created this summer by local artists through the Art in Public Places Program. Artists will be painting murals on transformer cabinets, pianos, walls and even concrete barriers in Old Town.

Another great way to enjoy the Colorado summer is by taking a stroll in a local natural area or park. I encourage you to enjoy these treasured open spaces and recreate safely at available splash pads, dog parks, skate parks, golf courses and more.

While we are very much in the moment, we continue to plan for the future. For the last 10 months, Fort Collins parks and recreation staff, together with a consulting team and with considerable public engagement, have been updating the Parks and Recreation Master Plan.

There are many phases to this in-depth planning exercise, and the final document is intended to guide the future of recreation and park assets for several decades. Community input is vital to the success of the master plan, and we want to hear your voice. Visit ourcity.fcgov.com/ParksandRec for more information on the master plan and how you can participate in upcoming engagement opportunities.

We have seen tremendous community use of public amenities recently, and we hope you will continue to enjoy the parks, natural areas, recreation and cultural facilities. Please know that we remain committed to staying informed and to safely adapting to the ever-changing conditions we find ourselves in.

John Stokes is the deputy director of Fort Collins Community Services. He can be reached at 970-221-6263 or jstokes@fcgov.com.



CORRECTING and REPLACING Anyscale Hosts Inaugural Ray Summit on Scalable Python and Scalable Machine Learning – Yahoo Finance

Creators of Ray Open Source Project Gather Industry Experts for Two-Day Event on Building Distributed Applications at Scale


Anyscale, the distributed programming platform company, is proud to announce Ray Summit, an industry conference dedicated to the use of the Ray open source framework for overcoming challenges in distributed computing at scale. The two-day virtual event is scheduled for Sept. 30 to Oct. 1, 2020.

With the power of Ray, developers can build applications and easily scale them from a laptop to a cluster, eliminating the need for in-house distributed computing expertise. Ray Summit brings together a leading community of architects, machine learning engineers, researchers, and developers building the next generation of scalable, distributed, high-performance Python and machine learning applications. Experts from organizations including Google, Amazon, Microsoft, Morgan Stanley, and more will showcase Ray best practices, real-world case studies, and the latest research in AI and other scalable systems built on Ray.
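
To give a flavour of the "laptop to a cluster" claim, here is a minimal Ray sketch: the same @ray.remote function runs in parallel on a local machine and, when ray.init is pointed at a cluster, scales out unchanged. The workload itself is a placeholder.

```python
import ray

ray.init()  # local machine; ray.init(address="auto") attaches to a cluster

@ray.remote
def score(batch):
    # Placeholder for real work, e.g. model inference on one data batch.
    return sum(batch) / len(batch)

batches = [list(range(i, i + 10)) for i in range(8)]
futures = [score.remote(b) for b in batches]  # tasks scheduled in parallel
print(ray.get(futures))                       # block and gather the results

ray.shutdown()
```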

"Ray Summit gives individuals and organizations the opportunity to share expertise and learn from the brightest minds in the industry about leveraging Ray to simplify distributed computing," said Robert Nishihara, Ray co-creator and Anyscale co-founder and CEO. "Its also the perfect opportunity to build on Rays established popularity in the open source community and celebrate achievements in innovation with Ray."

Anyscale will announce the v1.0 release of the Ray open source framework at the Summit and unveil new additions to a growing list of popular third-party machine learning libraries and frameworks on top of Ray.

The Summit will feature keynote presentations, general sessions, and tutorials suited to attendees with various experience and skill levels using Ray. Attendees will learn the basics of using Ray to scale Python applications and machine learning applications from machine learning visionaries and experts including:

"It is essential to provide our customers with an enterprise grade platform as they build out intelligent autonomous systems applications," said Mark Hammond, GM Autonomous Systems, Microsoft. "Microsoft Project Bonsai leverages Ray and Azure to provide transparent scaling for both reinforcement learning training and professional simulation workloads, so our customers can focus on the machine teaching needed to build their sophisticated, real world applications. Im happy we will be able to share more on this at the inaugural Anyscale Ray Summit."

To view the full event schedule, please visit: https://events.linuxfoundation.org/ray-summit/program/schedule/

For complimentary registration to Ray Summit, please visit: https://events.linuxfoundation.org/ray-summit/register/

About Anyscale

Anyscale is the future of distributed computing. Founded by the creators of Ray, an open source project from the UC Berkeley RISELab, Anyscale enables developers of all skill levels to easily build applications that run at any scale, from a laptop to a data center. Anyscale empowers organizations to bring AI applications to production faster, reduce development costs, and eliminate the need for in-house expertise to build, deploy and manage these applications. Backed by Andreessen Horowitz, Anyscale is based in Berkeley, CA. http://www.anyscale.com.


Contacts

Media Contact: Allison Stokes, fama PR for Anyscale, anyscale@famapr.com, 617-986-5010


Does technology increase the problem of racism and discrimination? – TechTarget

Technology was designed to perpetuate racism. This is the argument of a recent article in the MIT Technology Review, written by Charlton McIlwain, professor of media, culture and communication at New York University and author of Black Software: The Internet & Racial Justice, From the AfroNet to Black Lives Matter.

The article explains how the Black population and the Latino community in the United States are victims of the configuration of technological tools, such as facial recognition, which is programmed to analyze the physical features of people and, in many cases, generate alerts of possible risks when detecting individuals whose facial features identify them as Black or Latino.

"We've designedfacial recognition technologiesthat target criminal suspects on the basis of skin color. We'vetrained automated risk profiling systemsthat disproportionately identify Latinx people as illegal immigrants. We'vedevised credit scoring algorithmsthat disproportionately identify black people as risks and prevent them from buying homes, getting loans, or finding jobs," McIlwain wrote.

In the article, the author elaborates on the origins of the use of algorithms in politics to win elections, understand the social climate and prepare psychological campaigns to modify the social mood, which in the late 1960s was tense in the United States. These efforts, however, paved the way for large-scale surveillance in the areas where there was most unrest, at the time, the Black community.

According to McIlwain, "this kind of information had helped create what came to be known as 'criminal justice information systems.' They proliferated through the decades, laying the foundation for racial profiling, predictive policing, and racially targeted surveillance. They left behind a legacy that includes millions of black and brown women and men incarcerated."

Contact tracing and threat-mapping technologies designed to monitor and contain the COVID-19 pandemic did not help improve the racial climate. On the contrary, these applications showed a high rate of contagion among Black people, Latinos and the indigenous population.

Although this statistic could be interpreted as a lack of quality and timely medical services for members of the aforementioned communities, the truth is that the information was disclosed as if Blacks, Latinos and indigenous people were a national problem and a threat of contagion. Donald Trump himself made comments in this regard and asked to reinforce the southern border to prevent Mexicans and Latinos from entering his country and increasing the number of COVID-19 patients, which is already quite high.

McIlwain's fear -- and that of other members of the Black community in the United States -- is that the new applications created as a result of the pandemic will be used to recognize protesters and later "quell the threat." He is surely referring to persecution and arrests, which may well end in jail, or in disappearances.

"If we dont want our technology to be used to perpetuate racism, then we must make sure that we dont conflate social problems like crime or violence or disease with black and brown people. When we do that, we risk turning those people into the problems that we deploy our technology to solve, the threat we design it to eradicate," concludes the author.

Although artificial intelligence and machine learning feed applications to enrich them, the truth is that the original programming is done by a human (or several). The parameters for the algorithms are initially defined by the people who created the program or application. A lack of well-defined criteria can result in generalizations, and this can lead to discriminatory or racist actions.

The British newspaper The Guardian reported, a few years ago, that one of Google's algorithms auto-tagged images of Black people as gorillas. Other companies, such as IBM and Amazon, avoid using facial recognition technology because of its discriminatory tendencies towards Black people, especially women.

"We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies," IBM executive director Arvind Krishna wrote in a letter sent to Congress in June."[T]he fight against racism is as urgent as ever," said Krishna, while announcing that IBM has ended its "general" facial recognition products and will not endorse the use of any technology for "mass surveillance, racial discrimination and human rights violations."

If we consider that the difference in error rate between identifying a white man and a Black woman is 34% in the case of IBM's software, according to a study by the MIT Media Lab, IBM's decision not only seems fair from a racial point of view, it is also a recognition of the path that lies ahead in programming increasingly precise algorithms.

The 2018 MIT Media Lab study concluded that, although the average precision of these products ranges between 93.7% and 87.9%, the differences based on skin color and gender are notable; 93% of the errors made by Microsoft's product affected people with dark skin, and 95% of the errors made by Face++, a Chinese alternative, concerned women.

Joy Buolamwini, co-author of the MIT study and founder of the Algorithmic Justice League, sees IBM's initiative as a first step in holding companies accountable and promoting fair and accountable artificial intelligence. "This is a welcome recognition that facial recognition technology, especially as deployed by police, has been used to undermine human rights, and to harm Black people specifically, as well as Indigenous people and other People of Color," she said.

Another issue related to discrimination in the IT industry has to do with the language used to define certain components of a network or systems architecture. Concepts like master/slave are being reformulated into less objectionable terminology. The same is happening with the concepts of blacklists/whitelists. Developers will instead use terms like leader/follower and allowed list/blocked list.

The Linux open source operating system will include new inclusive terminology in its code and documentation. Linux kernel maintainer Linus Torvalds approved this new terminology on July 10, according to ZDNet.

GitHub, a Microsoft-owned software development company, also announced a few weeks ago that it is working to remove such terms from its code.

These actions demonstrate the technology industry's commitment to creating tools that support the growth of society: inclusive systems, applications, and technologies that help combat discrimination instead of fomenting racism.


ISRO Is Recruiting For Vacancies with Salary Upto Rs 54000: How to Apply – The Better India

ISRO recruitment 2020 is currently underway for capacity building and research in the field of Remote Sensing and Geo-Informatics.

The Indian Institute of Remote Sensing (IIRS), a Unit of Indian Space Research Organisation (ISRO), is hiring for 18 vacancies, out of which 17 are for the post of Junior Research Fellow and 1 is for a Research Associate.

The organisation is focused on developing land-ocean-atmosphere applications, and understanding processes on the Earth's surface using space-based technologies.

According to the official notification, selected candidates will be paid a salary of Rs 31,000/month for JRF positions and up to Rs 54,000/month for an RA position.

Step 1: Visit the official website, and register yourself as an applicant using a valid email id.

Step 2: Fill the online application form and upload necessary documents

Step 3: Submit the application

The last date for submitting applications is 31 August 2020. Before applying, read through the detailed official notification.

Project: Retrieval of Geophysical parameters using GNSS/IRNSS signals

Vacancies: 4

Educational qualifications:

Essential qualifications:

Project: Himalayan Alpine Biodiversity Characterization and Information System Network (NMHS)

Vacancies: 2

Educational qualifications:

Essential qualifications:

Project: Chandrayaan-2 Science plan for utilization of Imaging Infrared Spectrometer (IIRS) data for lunar surface compositional mapping

Vacancies: 1

Educational qualifications:

Essential qualifications:

Project: Multi-sensor integration for digital recording, and realistic 3D Modelling of UNESCO World Heritage sites in Northern India

Vacancies: 1

Educational Qualifications:

Essential qualifications:

Project: Extending crop inventory to new crops

Vacancies: 2

Educational qualification:

Essential qualifications:

Project: Aerosol Radiative Forcing over India

Vacancies: 1

Educational qualifications:

Essential qualifications:

Project: Spatio-temporal variations of gaseous air pollutants over the Indian Subcontinent with a special emphasis on foothills of North-Western Himalaya

Vacancies: 2

Educational qualifications:

Essential qualifications:

Project: Indian Bio-Resource Information Network

Vacancies: 2

Educational qualifications:

Essential qualifications:

Project: Rainfall threshold and DInSAR-based methods for initiation of landslides and decoupling of spatial variations in precipitation, erosion, tectonics in Garhwal Himalaya

Vacancies: 1

Educational qualifications:

Essential qualifications:

Project: Indian Bio-Resource Information Network

Vacancies: 1

Educational qualifications:

Essential qualifications:

Project: Indian Bio-Resource Information Network

Vacancies: 1

Educational qualification:

Essential qualification:

For more information, you can visit the IIRS website or read the recruitment notice.

(Edited by Gayatri Mishra)



Build Your Own PaaS with Crossplane: Kubernetes, OAM, and Core Workflows – InfoQ.com

Key Takeaways

InfoQ recently sat down with Bassam Tabbara, founder and CEO of Upbound, and discussed building application platforms that span multiple cloud vendors and on-premise infrastructure.

The conversation began by exploring the premise that every organisation delivering software deploys applications onto a platform, whether it intentionally curates this platform or not. Currently, Kubernetes is being used as the foundation for many "cloud native" platforms. Although Kubernetes does not provide a full platform-as-a-service (PaaS)-like experience out of the box, the combination of a well-defined API, clear abstractions, and comprehensive extension points makes it a perfect foundational component on which to build.

Tabbara also discussed Crossplane, an open source project that enables engineers to manage any infrastructure or cloud services directly from Kubernetes. This "cross cloud control plane" has been built upon the Kubernetes declarative configuration primitives, and it allows engineers defining infrastructure to leverage the existing K8s toolchain. The conversation also covered the Open Application Model (OAM) and explored how Crossplane has become the Kubernetes implementation of this team-centric standard for building cloud native applications.

Many organisations are aiming to assemble their own cloud platform, often consisting of a combination of on-premises infrastructure and cloud vendors. Leaders within these organisations recognise that minimizing deployment friction and decreasing the lead time for the delivery of applications, while simultaneously providing safety and security, can provide a competitive advantage. These teams also acknowledge that any successful business typically has existing "heritage" applications and infrastructure that needs to be included within any platform. Many also want to support multiple public cloud vendors, with the goals of avoiding lock-in, arbitraging costs, or implementing disaster recovery strategies.

Platform teams within organisations typically want to enable self-service usage for application developers and operators. But they also want appropriate security, compliance, and regulatory requirements baked into the platform. All large-scale public cloud vendors such as Amazon Web Service (AWS), Microsoft Azure, and Google Cloud Platform (GCP) offer their services through a control plane. This control plane consists of user interfaces (UIs), command line interfaces (CLIs), and application programming interfaces (APIs) that platform teams and operators interact with to configure and deploy the underlying "data plane" of infrastructure services. Although the implementation of a cloud control plane is typically globally distributed, it appears centralised to the end users. This control plane provides a single entry point for user interaction where policy can be enforced, guardrails applied, and auditing conducted.

Application developers typically want a platform-as-a-service (PaaS)-like experience for defining and deploying applications, as pioneered by the likes of Zimki, Heroku, and Cloud Foundry. Deploying new applications via a simple "git push heroku master" is a powerful and frictionless approach. Application operators and site reliability engineering (SRE) teams want to easily compose, run, and maintain applications and their respective configurations.

Tabbara cautioned that these requirements often lead an organisation to buy a PaaS, which, unless chosen appropriately, can be costly to maintain:

"Modern commercial PaaSs often meet the requirements of 80% of an organisations use cases. However, this means that the infrastructure teams still have to create additional platform resources to meet the other 20% of requirements"

Building a PaaS is not easy. Doing so takes time and skill, and it is challenging to define and implement technical abstractions that meet the requirements of all of the personas involved. Google famously has thousands of highly-trained engineers working on internal platforms, Netflix has a large team of specialists focused on the creation and maintenance of their internal PaaS, and even smaller organisations like Shopify have a dedicated platform team. Technical abstractions range from close to the "lowest common denominator", like that taken by Libcloud and OpenStack, all the way through to providing a common workflow but full cloud-specific configuration, like HashiCorp's Terraform or Pulumi. Traditional PaaS abstractions are also common within the domain of cloud, but are typically vendor-specific, e.g. GCP App Engine, AWS Elastic Beanstalk, or Azure Service Fabric.

Many organisations are choosing to build their platform using Kubernetes as the foundation. However, as Tabbara stated on Twitter, this can require a large upfront investment, and combined with the 80% use case challenge, this can lead to the "PaaS dilemma":

"The PaaS dilemma - your PaaS does 80% of what I want, my PaaS takes 80% of my time to maintain #kubernetes"

Tabbara stated that the open source Crossplane project aims to be a universal multi-cloud control plane for building a bespoke PaaS-like experience.

"Crossplane is the fusing of "cross"-cloud control "plane". We wanted to use a noun that refers to the entity responsible for connecting different cloud providers and acts as control plane across them. Cross implies "cross-cloud" and "plane" brings in "control plane"."

By building on the widely accepted Kubernetes-style primitives for configuration, and providing both ready-made infrastructure components and a registry for sharing additional resources, Crossplane reduces the burden on infrastructure and application operators. Also, by providing a well-defined API that encapsulates the key infrastructure abstractions, it allows a separation of concerns between platform operators (those working "below the API line") and application developers and operators (those working "above the API line").

"Developers can define workloads without worrying about implementation details, environment constraints, or policies. Administrators can define environment specifics and policies. This enables a higher degree of reusability and reduces complexity."

Crossplane is implemented as a Kubernetes add-on and extends any cluster with the ability to provision and manage cloud infrastructure, services, and applications. Crossplane uses Kubernetes-styled declarative and API-driven configuration and management to control any piece of infrastructure, on-premises or in the cloud. Through this approach, infrastructure can be configured using custom resource definitions (CRDs) and YAML. It can also be managed via well established tools like kubectl or via the Kubernetes API itself. The use of Kubernetes also allows the definition of security controls, via RBAC, or policies, using Open Policy Agent (OPA) implemented via Gatekeeper.

As part of the Crossplane installation a Kubernetes resource controller is configured to be responsible for the entire lifecycle of a resource: provisioning, health checking, scaling, failover, and actively responding to external changes that deviate from the desired configuration. Crossplane integrates with continuous delivery (CD) pipelines so that application infrastructure configuration is stored in a single control cluster. Teams can create, track, and approve changes using cloud native CD best practices such as GitOps. Crossplane enables application and infrastructure configuration to co-exist on the same Kubernetes cluster, reducing the complexity of toolchains and deployment pipelines.
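
Because Crossplane resources are ordinary Kubernetes custom objects, any Kubernetes tooling can create them. As a sketch, the snippet below submits a hypothetical database claim with the official Kubernetes Python client; the group, kind, and parameters are invented for illustration, and in practice would match whatever CRDs a platform team has published.

```python
# Sketch: submitting a hypothetical Crossplane-style claim with the official
# Kubernetes Python client, as an alternative to `kubectl apply -f`.
from kubernetes import client, config

config.load_kube_config()  # reuse the local kubectl context
api = client.CustomObjectsApi()

# Invented group/kind/parameters; real names come from the platform team's CRDs.
claim = {
    "apiVersion": "database.example.org/v1alpha1",
    "kind": "PostgreSQLInstance",
    "metadata": {"name": "app-db", "namespace": "default"},
    "spec": {"parameters": {"storageGB": 20}},
}

api.create_namespaced_custom_object(
    group="database.example.org",
    version="v1alpha1",
    namespace="default",
    plural="postgresqlinstances",
    body=claim,
)
```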

The clear abstractions, use of personas, and the "above and below the line" approach draws heavily on the work undertaken within the Open Application Model.

Initially created by Microsoft, Alibaba, and Upbound, the Open Application Model (OAM) specification describes a model where developers are responsible for defining application components, application operators are responsible for creating instances of those components and assigning them application configurations, and infrastructure operators are responsible for declaring, installing, and maintaining the underlying services that are available on the platform. Crossplane is the Kubernetes implementation of the specification.

With OAM, platform builders can provide reusable modules in the format of Components, Traits, and Scopes. This allows platforms to do things like package them in predefined application profiles. Users choose how to run their applications by selecting profiles, for example, microservice applications with high service level objective (SLO) requirements, stateful apps with persistent volumes, or event-driven functions with horizontal autoscaling.

The OAM specification introduction document presents a story that explores a typical application delivery lifecycle.

To deliver an application, each individual component of a program is described as a Component YAML by an application developer. This file encapsulates a workload and the information needed to run it.

To run and operate an application, the application operator sets parameter values for the developers' components and applies operational characteristics, such as replica size, autoscaling policy, ingress points, and traffic routing rules in an ApplicationConfiguration YAML. In OAM, these operational characteristics are called Traits. Writing and deploying an ApplicationConfiguration is equivalent to deploying an application. The underlying platform will create live instances of defined workloads and attach operational traits to workloads according to the ApplicationConfiguration spec.
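
As a rough sketch of the two documents described above, the snippet below builds an OAM-style Component and ApplicationConfiguration as Python dicts and serializes them with PyYAML. The shapes loosely follow the OAM v1alpha2 examples (including its ManualScalerTrait); the names and values are illustrative, not canonical.

```python
import yaml

# Developer-owned: what the component is and how to run it.
component = {
    "apiVersion": "core.oam.dev/v1alpha2",
    "kind": "Component",
    "metadata": {"name": "web-ui"},
    "spec": {
        "workload": {
            "apiVersion": "core.oam.dev/v1alpha2",
            "kind": "ContainerizedWorkload",
            "spec": {
                "containers": [{"name": "web", "image": "example/web-ui:1.0"}]
            },
        }
    },
}

# Operator-owned: instantiate the component and attach operational traits.
app_config = {
    "apiVersion": "core.oam.dev/v1alpha2",
    "kind": "ApplicationConfiguration",
    "metadata": {"name": "web-ui-prod"},
    "spec": {
        "components": [{
            "componentName": "web-ui",
            "traits": [{
                "trait": {
                    "apiVersion": "core.oam.dev/v1alpha2",
                    "kind": "ManualScalerTrait",
                    "spec": {"replicaCount": 3},
                }
            }],
        }]
    },
}

print(yaml.safe_dump(component, sort_keys=False))
print("---")
print(yaml.safe_dump(app_config, sort_keys=False))
```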

Infrastructure operators are responsible for declaring, installing, and maintaining the underlying services that are available on the platform. For example, an infrastructure operator might choose a specific load balancer when exposing a service, or a custom database configuration that ensures data is encrypted and replicated globally.

To make the discussion more concrete, let's explore a typical Crossplane workflow, from installation of the project to usage.

First, install Crossplane and create a Kubernetes cluster. Next, install a provider and configure your credentials. Infrastructure primitives can be provisioned from any provider (e.g., GCP, AWS, Azure, Alibaba, or custom-created on-premises providers).

A platform operator defines, composes, and publishes their own infrastructure resources with declarative YAML, resulting in their own infrastructure CRDs being added to the Kubernetes API for applications to use.

An application developer publishes application components to communicate any fundamental, suggested, or optional properties of their services and their infrastructure requirements.

An application operator ties together the infrastructure components and application components, specifies configuration, and runs the application.

Kubernetes is being used as the foundation for many "cloud native" platforms, and therefore investing both in models of how teams interact with the platform and in how its underlying components are assembled is vitally important and a potential competitive advantage for organisations. As stated by Dr Nicole Forsgren et al in Accelerate, minimising lead time (from idea to value) and increasing deployment frequency are correlated with high-performing organisations. The platform plays a critical role here.

Crossplane is a constantly evolving project, and as the community expands, more and more feedback is being sought. Engineering teams can visit the Crossplane website to get started with the open source project, and feedback can be shared in the Crossplane Slack.

Daniel Bryant works as a Product Architect at Datawire, and is the News Manager at InfoQ, and Chair for QCon London. His current technical expertise focuses on DevOps tooling, cloud/container platforms and microservice implementations. Daniel is a leader within the London Java Community (LJC), contributes to several open source projects, writes for well-known technical websites such as InfoQ, O'Reilly, and DZone, and regularly presents at international conferences such as QCon, JavaOne, and Devoxx.


Our First Amendment shows world meaning of free speech – The Connection

Forty-five words.

Throughout our history, United States citizens have debated 45 words that have become the bedrock on which our culture stands: Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.

Since the death of George Floyd, I have spent an enormous amount of time reflecting on what has occurred and continues to occur in our country. What originated in Minneapolis has brought forth a level of dialogue around not only racism, but also our First Amendment right to free speech and peaceful assembly.

I did what any lifelong learner would do: I researched it and refreshed my knowledge of those 45 words that are imprinted on Americans.

Did you know that the First Amendment was actually supposed to be the Third Amendment? The original first and second amendments were defeated at the time. The original first amendment dealt with how members of the House of Representatives would be assigned to the states a measure that would have resulted in more than 6,000 members of the House of Representatives. The original second amendment? It addressed Congressional pay (it was later approved as the 27th Amendment 203 years later).

And then the third became the first. How fortuitous it was to have the first two amendments fail so that the third would become the first. The amendment for which the United States is known around the world and arguably has influenced other nations became first through fate.

While our courts have decided that some speech is protected and some not (fighting words, child pornography, true threats, etc.), it is important to remember that we should not necessarily differentiate who is entitled to free speech and assembly and who is not.

The 45 words of the First Amendment encapsulate the liberty we cherish. You cannot be supporters of freedom of speech and assembly of only ideas with which you agree and only people with whom you agree.

The bottom line is this: Our First Amendment rights are fundamental to the fabric of our nation. Whether or not we agree with the speech or demonstration, we have been afforded this right by our founding fathers.

Our ability to contribute to the marketplace of ideas, whether or not we like or agree with those ideas and those who share them, is what makes our country an incomparable place to live, work and play.

Randy Boyd is the president of the University of Tennessee at Knoxville.
