Daily Archives: May 14, 2020

COVID-19 Puts Structural Racism On Full Display: Will We Finally Do Something to Correct It? – Next City

Posted: May 14, 2020 at 4:54 pm

COVID-19 is a dangerous new reality, spreading indiscriminately and without regard for skin color or cultural background. Yet many black and brown Americans are dying at disproportionately high rates. Will this be the time that we stop talking about structural racism and finally do something about it?

By all accounts of science and chance, and with equal levels of exposure and risk, the rates of infection and death across all communities should be the same. But as we have learned from responsible news reporting, the rates of infection and death are not the same, particularly along racial lines. As our country surpasses 1.3 million infections and more than 80,000 deaths, black people so far represent nearly 30 percent of all infections yet only 13 percent of the national population. In some cities, the number is even higher. However, we should not be surprised.

For communities of color in the United States, COVID-19 has transformed an otherwise protracted assortment of chronic health issues associated with poverty, overcrowding, and uneven access to public space or quality housing (among them cardiovascular disease, hypertension, diabetes, cancer, and asthma) into abrupt and immediate death sentences. We have a name for the uneven distribution of exposure and risk along racial lines, and it's not COVID-19. It's structural racism.

Where this coronavirus is lacking in racial bias, the United States has made up for it with a resilient and highly adaptive white supremacist capitalist racial ideology. It is an ideology etched into our national DNA, rooted in the exploitation of human beings for economic gain (the perverse logic of slavery), and it has left a long and injurious legacy for black and brown communities. It has justified spatial and economic exclusion (segregation and redlining), racial terrorism (Jim Crow laws and community massacres), community theft (block-busting and predatory lending), targeted community removal (urban renewal and federal highway programs), criminalization of blackness and loss of voting rights and citizenship (mass incarceration and deportation), or simply blanket ethnic exclusion (anti-immigration orders against what our President has named "shithole countries").

As if all of that were not enough, the communities that experience higher levels of exposure and risk to the coronavirus have now become our essential workers, positioned at the front lines of this pandemic. They are the transit workers, doormen, janitors, health care workers, food producers, grocery store staffers, and warehouse and delivery workers: those on whom every one of us is relying to get us through this crisis. They are underpaid, underinsured, and they are very often black or brown.

Our nation's willingness to accept collateral damage in exchange for capital gain is proven, especially during national disasters like the one we are experiencing now. As was the case with Hurricane Katrina in New Orleans, Hurricane Maria in Puerto Rico, and the poisoned water crisis in Flint, when black and brown people were disproportionately affected, a national discussion about race is once again underway. But promising as these race-facing (and racism-naming) national discussions can seem while they are taking place, they always turn out to be fleeting. If action is taken at all, it relies on a "rising tide lifts all boats" framing rather than an explicit commitment to racial justice.

Though the COVID crisis has put the lethal legacy of slavery on full display, the CDC only recently started collecting and disaggregating data by race. Its slowness to act decisively stems from political embarrassment, willful ignorance, or ambivalence toward the immediate and life-saving significance of this information, and so when the US President and many in his political party push to reopen the economy prematurely, we shouldn't be surprised. Once again, economic concerns in this country are taking priority over public health concerns and human life, as they often do when black and brown people are involved. A more strategic rollout of this information could have allowed Americans to get on board with a strategy, supported by race-disaggregated data, to ensure that resources were directed to the right communities.

Structural racism is insidious. It doesn't rely on decision-makers themselves being racists. Instead, it is a generations-old system of norms and parameters that provides the framework for almost every decision we make. As history confirms, the roots of American society lie in a slave economy, and our racially structured political and economic system is reinforced by a legal system that relies on history (which is precedent) for administering justice. In most cases, instead of radical transformation, our system delivers us a watered-down version of what we already are; in other words, when the gavel drops or the bill is passed, we are simply left with white-supremacist capitalist racial ideology lite. So it should come as no surprise that racial equity transformation is slow, and that it never comes without a fight.

When you water something down, it becomes a diluted version of itself. What we need right now is something altogether different. Instead of passing up yet another opportunity to right a four-century-old wrong, it's time to finally ensure that a post-COVID-19 recovery benefits both sides of the color line and that we as a nation truly begin to address the structural roots of racial inequality. So, what is the organizing work, political work, and accountability work that needs to happen in order to ensure that the public good serves all of us equally? In many cases, tools for advancing racial equity already exist. Some can be hacked while others will need to be completely reimagined. But here's where we can start:

Economic Development

Housing

The Public Domain

For communities of color, a cure for the harm caused by COVID-19 needs to go far beyond developing a vaccine. We also need social and economic policies that take on the underlying, longstanding, and persistent problems of structural racism.

Reflecting on this country's long history of intentional racist planning and policy-making, today's planners, designers, and policy-makers have an ethical obligation to realign our priorities and adopt intentional antiracist agendas that address the legacy pockets of inequity in black and brown communities. The time to act is now! Because if we choose to wait (and it will be a choice), we will once again miss an opportunity to ensure that the very same people who are keeping our recovery afloat can finally be treated equally.

EDITOR'S NOTE: We've clarified some of the numbers around infection rates.

Stephen F. Gray is an Assistant Professor of Urban Design at Harvard Graduate School of Design and founder of Boston-based design firm Grayscale Collaborative. His work acknowledges the intersectionality of race, class, and the production of space, and he is currently co-leading an Equitable Impacts Framework pilot with the High Line Network and Urban Institute aimed at advancing racial equity agendas for industrial reuse projects across North America.


The near future of Colorado's restaurants could depend on our biggest asset: the outdoors – Loveland Reporter-Herald

Posted: at 4:54 pm

In cities around the world, restaurants are taking to the streets. They're transforming parking lots and plazas, spilling onto sidewalks and coming up with parklets for more patio space. After months of closed dine-in service, these gathering places are counting on fresh air and more room for social distancing to keep employees and customers safe and businesses alive through the summer months.

Denver could be next to take up the charge. After eight weeks of running on takeout and delivery only, restaurants and their business improvement districts, as well as volunteer planners across the city, are advocating now for further loosened restrictions on alcohol permitting and temporarily closed-off streets and parking lots to serve diners again.

By Memorial Day, Colorado Gov. Jared Polis said he expects to announce instructions for restaurants that are looking at a late May or early June reopening. A variance given to Mesa County this month allows restaurants there to reopen at 30% of their usual fire-code capacity, and in a press conference on Wednesday, Polis said that a greatly reduced capacity should be expected indoors as more restaurants start to open around the state, "but we also want to find ways that they can expand tables outdoors," he said, mentioning sidewalks and parking spaces as potential options.

"We know restaurants are eager to reopen in a way that protects the health of their patrons, and (they) see measures like expanded patio space as one way to do that," Denver Mayor Michael Hancock's office announced on Tuesday. "(We) have been taking and evaluating requests from various stakeholders on what measures, including expanded patio space, could be implemented to support restaurants once they're able to reopen."

This week, the Downtown Denver Partnership shared plans for its "rapid activation" of commercial streets, which was also proposed to Hancock earlier this month. The group gave nine examples of core blocks in various Denver neighborhoods where vehicle through-traffic and car parking could be temporarily blocked, allowing for pedestrian walkways and al fresco dining areas as seen in Europe or elsewhere in the United States during festivals and events.

"This concept is not new to Denver," the DDP's president and CEO Tami Door told The Denver Post. "We have done this many, many times, as have great cities around the world. What is particularly intriguing about it now is it's an amalgamation of wins."

Those wins, according to Door, include allowing individuals to gather safely again and letting neighborhoods and many of their retail businesses return to life, all while monitoring the viability of these types of gathering spaces long-term.

"We know for the future that this isn't going away anytime soon, so we really need to understand how our public spaces can create safe spaces," Door said. In order to do that, she and the DDP have proposed a five-month pilot period from Memorial Day to Oct. 31 that would take advantage of Colorado's sunshine while allowing restaurants to serve diners in more spaces outdoors.

"I can make this happen literally yesterday," restaurateur Beth Gruitch said of the time she would need to open up her dining rooms outside. Gruitch co-owns two restaurants in Larimer Square (Rioja and Bistro Vendome) and two more at Union Station (Ultreia and Stoic & Genuine). She and her team closed down a fifth restaurant, Euclid Hall, permanently at the start of the shutdown.

She knows that closing off Larimer Square to cars will take longer to implement and face more opposition, but at Union Station, "in my opinion, it's a why not?" she said. "Why wouldn't we? Let us open up our patios, let us space our tables out, let us fill that space with energy and fun."

At Union Station, Gruitch envisions the surrounding plaza dotted with dining tables from each of the hall's various restaurants. She pictures it working just as to-go orders have, but with the added bonus of table service and summery alcoholic beverages.

It's an appealing image that recalls tables set on Italian piazzas and in Parisian alleyways. But for other Denverites, the idea of turning not just plazas but also city streets into dining rooms is more romantic than practical.

"Look, I couldn't be more heartbroken for what has happened to the restaurant and bar trade," Steve Weil, who owns Rockmount Ranch Wear on Wazee Street, told The Denver Post. "It's a perfect storm for them. They have few options, and I don't begrudge supporting them how we can. But giving them the public right-of-way monopolizes public access, perhaps in a way that's detrimental to everyone else."

For Weil's business to reopen now, he says downtown Denver's streets need to stay open for vehicles, which his customers use more than public transportation or other forms of transport. "Downtown has become an insufferable, incoherent puzzle of how to get from A to B," Weil explained. "And this will make it worse."

He's especially concerned about paying property taxes come June. "There's no relief on that. We need to do everything within our power to safely restart this economy, and not just for one segment but for every segment. What we do for one should not hurt the other."

But Door thinks the Downtown Denver Partnership and other stakeholders can use this summer-long outdoor dining trial to answer concerns like Weil's, plus other questions that will surely arise: Where do outdoor dining zones fit in? How long can they operate? Who uses them? Are they inclusive and equitable? And what about them doesn't work?

"This isn't about closing every street in the city; it's about being strategic, being intentional," Door said. "And it's not one-size-fits-all, so it's a good opportunity to explore."

Across Denver, outside business centers on Colorado Boulevard, along neighborhood roads in North Park Hill and just off Federal Boulevard by Jefferson Park, urban designer Matthew Bossler and a small cohort of community organizers are creating templates for outdoor dining areas of all sizes and shapes.

Bossler says he's excited to see grassroots organizations planning their own versions of this effort, but he's most eager to help out the businesses that wouldn't otherwise have resources to dedicate now.

"Most communities of color don't have business improvement districts in place," he told The Denver Post. "That's why we're being intentional about taking the momentum BIDs have and supporting them, but really dedicating resources to those areas that don't have that staff on hand or the money set aside."

On Thursday morning, he's giving a talk for Downtown Colorado Inc.'s 500 statewide members. Bossler will discuss the playbook he and his team are creating "basically just to cut out all the guesswork that everybody's trying to figure out on their own in little pieces all across the city right now," he said. He'll go over transportation considerations, design logistics, licensing, permitting and liability.

"The organizational work and facilitation, that's the majority of the work that we've been doing," Bossler said. "And we foresee that that need will persist for at least the next month in order to knock down those barriers."

Meanwhile, at the city and state level, restaurateurs like Gruitch say this is a time to determine what Denver looks like over the summer and then for years to come.

"What are we willing to sacrifice in order to grow our city in the direction of a place that we want to live in?" she asked. "We could be a city where there aren't any good restaurants, where the retail's not there, and downtown is desolate, and then we won't have to worry about the parking. It won't be an issue. But I'd rather have an incredibly vibrant, successful city with some challenges than just kind of placating what we have now."



How the City of Knoxville, UT aim to end college youth homelessness in Knoxville – UT Daily Beacon

Posted: at 4:54 pm

In 2018, the Knoxville Homeless Management Information System recorded 815 youth, unaccompanied by an adult, who were experiencing homelessness in Knoxville.

Youth aged 12 to 24 may make up a small percentage of Knoxville's population of 187,500 counted by the U.S. Census Bureau that same year. However, the number represents a growing issue within the country.

The homeless population encompassing college students aged 18-24 has been growing, including in Knoxville.

In its 2017 annual report, KnoxHMIS stated that on a single night, 41,000 youth were homeless across the U.S., with 88% between the ages of 18 and 24. In Knoxville, around 747 youth were registered as homeless by KnoxHMIS.

The KnoxHMIS system, a partnership between the University of Tennessee College of Social Work and the Social Work Office of Research and Public Service, has been recording data on homelessness within Knoxville since 2007. Started in 2004 by endowed professor of mental health research and practice David Patterson, the program fosters a greater understanding of the social consequences, human impact and other deleterious effects of homelessness.

The data has helped the City of Knoxville and UT have a better understanding of who makes up the homeless populations and what can be done to help.

Michael Dunthorn, homeless program coordinator at Knoxville's Office of Homelessness, explained that the city has recently been trying harder to identify young adults who may be homeless in the city.

"We've been more intentional about trying to find youth and young adults and reach out to them. A lot of folks who are particularly young adults aren't going to even see themselves as homeless if they're couch surfing, you know, living in somebody else's place," Dunthorn said. "Certainly somebody just starting out doesn't want to identify as homeless and so they're not necessarily looking for homeless resources."

Through better and more deliberate study of the college homeless population, the proper institutions can step up to help address their needs such as housing, food and support for education.

The U.S. Department of Housing and Urban Development also started an initiative to reduce the number of youth experiencing homelessness through their Youth Homelessness Demonstration Program.

The City of Knoxville applied again for the grant this coming year, which Dunthorn explained offers cities a way of "trying new innovative ideas to solve a problem with the idea that you demonstrate something that works well and that it would be something that could be replicated in other places."

To qualify for the funding, Knoxville has to have a youth advisory council or, according to HUD's website about the YHDP, a Youth Action Board, which has to be made up of youth who are currently experiencing or have in the past experienced homelessness.

Communities must also: bring together different stakeholders in the community, like housing providers, school districts, the juvenile justice system, local and state child welfare agencies and workforce development organizations; assess the needs of special populations at higher risk of being homeless, including racial and ethnic minorities, LGBTQ+ youth, parenting youth and youth involved in the foster care and juvenile justice systems; and create a coordinated community plan to assess the needs of youth either at risk of or currently experiencing homelessness.

Knoxville has the Youth WINS (When in Need of Support) program, which has youth advisors between the ages of 18 and 22. These advisors work to help other youth find stable housing and connect them to community resources. In addition, the board advises the city on what actions are needed to help.

Annette Beebe, case manager and Youth WINS program manager at the Community Action Committee, said the board, which has over 44 members, meets twice a month and all the meetings by the youth council are closed to the public.

The board, Beebe said, is proactive in its mission of ending youth homelessness, currently working with the city to come up with solutions, but it focuses mostly on supporting peers going through similar experiences.

"I think the reason why it's so popular is because it offers community," Beebe said. "It offers a platform for their voice to be heard, and they are getting recognized in our community."

Dunthorn said the local city government has access to an affordable rental housing fund which can help "fill the gap in the funding package that's required for developers to develop apartments."

"The idea is that nobody should be paying more than a third of their income in rent. And so for people who don't have a lot of income, the cost of the apartment has to be fairly low," Dunthorn said. "Most market rate developers are not looking for that. So they'll build the upscale stuff on their own and that's fine. In order to make the money work to be able to come out with a decent apartment that is ultimately affordable to somebody can sometimes require a little bit of help."

The program could then increase the supply and long-term availability for those with modest incomes looking to rent apartments. Additionally, Dunthorn said that for young adults, the process of building affordable housing also needs to consider what resources are critical for helping students start their lives.

In addition to the city's aim to receive funding to help get the homeless population off the streets, UT has also aimed at increasing awareness of the situation in Knoxville. Through several resources offered on campus, the university aims to help meet every student's needs.

The university is increasing awareness through events like participating in National Hunger and Homelessness Awareness Week. The university hosted its second Hunger and Homelessness Summit in November 2019, participating in an event held in more than 700 different locations.

The summit brought guest speakers to campus, including Swipe Out Hunger founder and CEO Rachel Sumekh and Larry Roper, a professor and coordinator of college student services at Oregon State University. Speakers and events focused on addressing students' needs by connecting resources across campus.

As the conversation around college youth food and housing insecurity continues to grow, there is hope that more students will be able to step forward and receive the help they need to continue their college careers.


Black restaurant owners in Boston want to see relief on the menu – The Boston Globe

Posted: at 4:54 pm

But will this historic space become a Black history memory due to the coronavirus?

Most of the 300,000 restaurant workers in Massachusetts are furloughed or laid off. And according to the Massachusetts Restaurant Association, the losses are expected to be as high as $2.3 billion.

When Congress rolled out that $2 trillion relief package, there was supposed to be assistance for small businesses in the form of loans, tax breaks, and paycheck protection. But the application process was hard. Not even lawyers agree on how it works. And in order to get loan forgiveness, a business owner must spend the money within eight weeks, keep the same number of employees it had before the pandemic, and use 75 percent of it on payroll.

To make it harder, a lot of big businesses like Ruth's Chris and Kura Sushi were getting money meant for the little local joints. Shake Shack may have returned its funds, but a lot of small restaurants are on their own.

Restaurants owned by immigrants, Black people, and other people of color have historically struggled to get business loans, liquor licenses, and contracts. Now, COVID-19 is amplifying the inequities in how those businesses will stay open.

The first place we saw coronavirus shutter was Chinatown, where businesses experienced a dwindling number of customers due to xenophobia months before social distancing, shutdowns, and widespread infection.

Now, we're seeing how the virus could close down the few Black-owned restaurants we have, like District 7 Tavern. Smith, along with the owners of Darryl's Corner Bar & Kitchen, the renowned Wally's Café, Savvor Restaurant & Lounge, and Soleil Restaurant & Catering, has formed the Boston Black Hospitality Coalition in an effort to survive.

They are challenging city and state officials to create a task force specifically for Black-owned businesses and restaurants and looking for community support.

Smith says he estimates each of them has about another month before they have to consider closing their doors for good. District 7 Tavern closed in mid-March and is already $120,000 under. Collectively, the five businesses will have lost over $1 million by the end of the month. Smith applied for the federal Paycheck Protection Program but hasn't received any funds. He's rethinking his business model.

It's a problem so dire that minority-owned microbusinesses nationwide are facing closure without major government support.

Last week, US Representative Ayanna Pressley and Senator Kamala D. Harris introduced the Saving Our Street (SOS) Act to lend federal support to small businesses during the crisis. The act, if passed, would establish a Microbusiness Assistance Fund of $124.5 billion and provide up to $250,000 directly to microbusinesses: the tiny operations with staffs of fewer than 10 people, or 20 if half the staff is from a low-income community. The application process would require demographic data to ensure minority-owned businesses aren't excluded.

"In the Massachusetts Seventh [congressional district], our smallest neighborhood restaurants and businesses are the backbones of our communities. These businesses need real help now, but so far too many have been left out and left behind by federal relief efforts," Pressley said in a statement.

"We cannot allow the systemic barriers that have long prevented Black business owners from accessing capital to persist amid this crisis. Our relief efforts must be intentional and race-conscious to ensure minority-owned small businesses get the resources and support."

The Boston Black Hospitality Coalition isn't asking for much: $500,000 as a collective. The NAACP's Boston branch contributed the first $25,000.

Ultimately, Smith says, they want the fund to benefit not just the owners involved, but also to boost Black-owned businesses like ZAZ in Hyde Park, The Coast Cafe in Cambridge, and the hundreds of musicians out of work.

"If we ain't getting money, they ain't getting money," Smith says. "When it comes to minority businesses and minority dollars banks and companies want to invest in, Black people are the minority of the minority. There's a lot of zeros floating around. We are just looking to survive."

And their survival is vital to an entire people.

The Teachers Lounge comes together most often at Black-owned restaurants to uplift Black educators and educators of color. Market Sharing helps feed the homeless, thanks to their relationship with Savvor. Queens Co. often holds its women empowerment events at Black-owned restaurants.

These places aren't just places to eat and drink. This is often where Black Boston builds, networks, and thrives.

Farrah Belizaire, founder of LiteWork Events, says Black ownership is "a key part of economic mobility and sustainability in our community." Her organization is dedicated to curating events for professionals of color, so keeping the doors open at places like La Fabrica and Darryl's and Cesaria is important. Before social distancing, she was a guest bartender at District 7 Tavern.

"I'm often having to combat the stereotype that Black people don't exist in Boston," she says. "One way to change that narrative is to amplify the existence of social spaces owned and occupied by Black Bostonians. During these times it's especially important to make sure these places can stay afloat."

Smith says 2020 was supposed to be the year Black Boston showed up and showed out. He's right. The NAACP was scheduled to bring its national convention to Boston this year, putting the city and Black-owned businesses in the spotlight.

He and the rest of the hospitality coalition first came together to advocate for city contracts and a seat at the table in discussions on liquor licenses, preferred vendors, and contracts. They were hoping to make a big impact, starting with the convention. And then coronavirus postponed life as we know it.

"We have to wait to have some of those conversations," he says. "The focus is on coronavirus right now, as it should be. But what happens when the lights turn back on?"

The new normal cannot be to add more inequities to the pot.

We have to make sure people not only have places to return for work, for their food, drink, and camaraderie. We have to make sure we are crafting recipes that dont leave them in the dark.

Jeneé Osterheldt can be reached at jenee.osterheldt@globe.com and on Twitter @sincerelyjenee.


Summary of 28th Annual Conference on Fair Lending and Consumer Financial Protection – JD Supra

Posted: at 4:54 pm

Year In Review

Anand Raman, the head of Skadden's Consumer Financial Services (CFS) practice, began the conference by providing a summary of notable events and trends over the past year relating to consumer financial services compliance and enforcement, including enforcement actions by the Consumer Financial Protection Bureau (CFPB) and prudential regulators, statutory and regulatory changes, and activity by state regulators.

Key issues discussed in this session included:

CFPB Staffing Changes and Enforcement Trends. The CFPB hired a new Associate Director of Supervision, Enforcement and Fair Lending, Bryan Schneider, at the end of 2019 and a new Enforcement Director, Thomas Ward, early in 2020. The impact that these new hires will have on the CFPB's enforcement strategy is not yet clear, but the CFPB continues to actively investigate consumer compliance issues and initiate enforcement actions.

Notably, enforcement actions over the past year have disproportionately come in the form of lawsuits, rather than consent orders, compared with earlier years. For example, of the 20 actions that were filed between April 1, 2019, and April 1, 2020, 12 were lawsuits and eight were consent orders. In contrast, between April 1, 2015, and April 1, 2016, the CFPB filed 43 enforcement actions, of which 18 were lawsuits and 25 were consent orders.

On March 3, 2020, the Supreme Court heard oral argument in the matter of Seila Law v. Consumer Financial Protection Bureau to consider two questions: (i) whether the vesting of substantial executive authority in the CFPB, an independent agency led by a single director, violates the separation of powers; and (ii) whether, if the CFPB is found unconstitutional on the basis of the separation of powers, 12 U.S.C. 5491(c)(3) can be severed from the Dodd-Frank Act. The CFPB argues that the single-director structure is unconstitutional, and a decision by the Court invalidating the single-director structure could affect how the Bureau pursues enforcement in the future. The Court's decision is expected in the coming months.

Prudential Regulators. Anand noted that the prudential regulators also continue to actively examine consumer compliance issues, including in the areas of fair lending and unfair or deceptive acts or practices (UDAP), with a particular emphasis on nonpublic resolutions of consumer compliance matters. Key issues that the prudential regulators have reviewed over the past year include overdraft fee assessment practices, commercial lending disclosures and broker compensation.

Fair Lending. The CFPB and the federal prudential regulators did not enter into any public fair lending enforcement actions over the past 12 months, although the Department of Justice and the Department of Housing and Urban Development entered into settlements relating to redlining and automobile loan pricing.

Statutes and Regulations. The CFPB proposed revisions to Regulation F, which implements the Fair Debt Collection Practices Act. The proposed revisions would prohibit a debt collector from calling a consumer about a particular debt more than seven times within a seven-day period and from engaging in more than one telephone conversation with a consumer about a particular debt within a seven-day period.

The CFPB also issued a final rule delaying the implementation of the underwriting requirements of its 2017 payday lending rule, which had originally been proposed under the Bureau's former Director Richard Cordray. The underwriting requirements would have made it an unfair and abusive practice to make certain payday and vehicle title loans without determining that the borrower had an ability to repay the loan, but the CFPB subsequently determined that these ability-to-repay provisions would unduly restrict access to credit.

With respect to small business data collection under the Dodd-Frank Act and the Equal Credit Opportunity Act (ECOA), the CFPB agreed, as part of a settlement of litigation against the agency relating to its delay in issuing a rule, to publish an outline of proposals under consideration and alternatives considered by September 15, 2020. The CFPB also agreed to convene a Small Business Advocacy Review panel, which would, among other things, issue a report on the subject.

On September 10, 2019, the CFPB issued new or revised policies relating to no-action letters, Compliance Assistance Sandbox approvals and trial disclosure approvals, all of which are administered by the CFPBs Office of Innovation. The purpose of these policies is to encourage institutions to provide innovative products and services to consumers by addressing or resolving regulatory uncertainty that may be preventing the institutions from implementing a product or service.

State Update. States have aggressively enforced consumer financial protection laws, particularly in light of a perceived reduction in activity by the CFPB, with the New York Department of Financial Services, Massachusetts Attorney General, and California Department of Business Oversight as notable examples. State agencies have taken action across a variety of industries, and have been particularly active in the auto lending space.

Audience questions submitted during the presentation included:

Skadden counsel Austin Brown and associates Nicole Cleminshaw and Andrew Hanson led a discussion of hot topics in consumer compliance over the past year, including emerging issues regarding fair lending and unfair, deceptive or abusive acts or practices (UDAAP) related to the COVID-19 pandemic. Much of the consumer compliance enforcement activity over the past year has related to UDAAP, with several enforcement actions also relating to the Fair Debt Collection Practices Act and the Fair Credit Reporting Act.

Although there were no public fair lending enforcement actions by the CFPB over the past year, the CFPB entered into a consent order against mortgage lender Freedom Mortgage Corp. for submitting erroneous Home Mortgage Disclosure Act data under Regulation C in June 2019. The presenters pointed out that the Freedom Mortgage Corp. order related primarily to the reporting of race, ethnicity, and sex information, which the CFPB alleged was reported incorrectly on an intentional basis in many cases.

Key issues discussed in this session included:

Compliance With Foreclosure, Forbearance and Other Loss Mitigation Guidance and Regulations. The presenters discussed efforts by federal and state governments to address the hardships created by the COVID-19 pandemic, including requirements and guidelines prohibiting foreclosures and encouraging or requiring loan servicers to offer forbearance and other loss mitigation options to borrowers. Under the CARES Act, for example, a borrower with a federally-backed mortgage loan experiencing a hardship due directly or indirectly to the COVID-19 emergency may request forbearance from the borrower's servicer, and such forbearance shall be granted where the borrower provides a certification of hardship. Likewise, servicers may not pursue foreclosure processes for federally-backed mortgage loans over the 60-day period that began on March 18, 2020. These and other similar requirements at the state level for nonmortgage loans raise fair lending risk, inasmuch as institutions may not be treating similarly situated customers consistently.

The presenters outlined best practices to potentially mitigate fair lending risk relating to foreclosure, forbearance and other loss mitigation programs, including:

Risks Related to Implementation of the Paycheck Protection Program (PPP). Implementation of the Paycheck Protection Program has led to a number of compliance issues, including fair lending risks under the ECOA relating to the prioritization of existing customers for PPP loans. In particular, it has been widely reported that some institutions have issued guidelines making loan applicants who do not already have a loan account with the institution ineligible. Several reasons have been offered for such a policy, ranging from minimizing the burden on staff, who are already stretched thin by COVID-19 issues and concerns, to compliance with know-your-customer rules.

The presenters highlighted that, depending on the composition of the existing customer base, a customer-only policy could lead to a higher denial rate for minority-owned businesses. In addition, any other underwriting overlays could create denial rate disparities and expose institutions to further fair lending risk. The presenters recommended that institutions carefully monitor underwriting decisions relating to the PPP program to determine whether minority-owned businesses are being adversely affected by policies and procedures. They noted that fair lending enforcement relating to the PPP program could take months or even years to develop, necessitating careful documentation of decisions and attention to treating similarly situated customers equally.

Emerging Fair Lending Issues. Loan pricing continues to be an area of fair lending scrutiny by regulators, particularly with respect to relationship and competitive discounts. In particular, regulators continue to investigate whether institutions are offering competitive price matching and relationship discounts equally to minority and nonminority borrowers. The presenters discussed best practices in this area, including:

A second fair lending hot topic is the enhanced regulatory scrutiny resulting from the banking industrys shift into digital banking. As banks reduce their physical footprint, fair lending risk may arise inasmuch as branches are closed disproportionately in majority-minority communities, or an institution leaves majority-minority communities altogether. Steps to mitigate fair lending risk relating to the closure of branches potentially include documenting the specific reasons for any particular branch closure and evaluating whether branch closures will disproportionately affect majority-minority communities.

Emerging UDAAP Issues. The presenters discussed several emerging UDAP and UDAAP issues, focusing on the application of the UDAP prohibition to small business lending practices, recent rulemaking relating to the prohibition against abusive acts or practices, and the development of debt collection rules tied to UDAAP. Although the application of UDAP to small business lending practices is not new, regulators have recently focused their enforcement activities on protecting small businesses from unfair or deceptive acts or practices, including with respect to the disclosure of material terms and conditions of contracts. The presenters made the point that enforcement actions in the consumer space may foreshadow small business enforcement in the future, and encouraged participants to consider issues such as payment processing, credit reporting and overdraft practices affecting small businesses through a UDAP lens.

The presenters also described the CFPBs recent application of UDAAP principles to debt collection practices, including a recent enforcement action alleging that the number of debt collection calls to a borrower constituted an unfair practice.

Audience questions submitted during the presentation included:

Skadden counsel Darren Welch led a discussion regarding machine learning, digital marketing and other emerging technology-based issues in consumer financial services. Machine learning, an advanced computing methodology that helps identify predictive patterns from large data sets without human involvement, has the potential to result in more predictive models, increased access to credit, and better terms and conditions, thereby benefiting the industry and consumers alike. As with other advances in technology, however, machine learning also presents a number of compliance issues, including unique challenges as well as some issues present in traditional models, in the areas of fair lending, fairness and technical compliance. Darren discussed these compliance issues and a number of recommended best practices.

Key issues discussed in this session included:

Fair Lending Testing. With the prevalence and complexity of machine learning models used for underwriting and other purposes, it is important for companies to assess the adequacy of fair lending testing methodologies. Two key types of fair lending testing are (i) testing to identify potential less discriminatory alternatives and (ii) testing to assess whether variables may serve as a close proxy for a prohibited factor. While regulators have not clearly articulated expectations regarding fair lending testing methodologies for machine learning models, important questions include how to determine whether an alternative is materially better than the challenged practice, whether an alternative must be considered if it results in some drop in model performance, and how to assess different and conflicting impacts of an alternative on different borrower groups. Skadden has worked with clients to develop metrics and protocols to address these questions.
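To make the two types of testing more concrete, the hedged sketch below computes an approval-rate ratio across groups (a crude outcome-disparity check) and the correlation between a candidate model variable and a protected attribute (a crude proxy check). It is an illustrative example only, not a description of Skadden's metrics or of any regulatory standard; the column names, the data and the choice of pandas are assumptions made for illustration.

```python
# Hedged sketch: simple disparity and proxy checks on a hypothetical applications table.
# Column names, data, and any thresholds are illustrative assumptions, not a legal standard.
import pandas as pd

def approval_rate_ratio(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Ratio of the lowest group's approval rate to the highest group's approval rate."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates.min() / rates.max()

def proxy_correlation(df: pd.DataFrame, feature_col: str, protected_col: str) -> float:
    """Correlation between a model feature and a numerically coded protected attribute."""
    return df[feature_col].corr(df[protected_col])

if __name__ == "__main__":
    # Hypothetical records standing in for real loan applications.
    data = pd.DataFrame({
        "approved":          [1, 0, 1, 1, 0, 1, 0, 1],
        "group":             ["A", "A", "A", "A", "B", "B", "B", "B"],
        "group_code":        [0, 0, 0, 0, 1, 1, 1, 1],
        "zip_median_income": [72, 68, 75, 70, 41, 45, 39, 44],
    })
    print(f"approval-rate ratio (min/max): {approval_rate_ratio(data, 'group', 'approved'):.2f}")
    print(f"feature vs. protected-attribute correlation: {proxy_correlation(data, 'zip_median_income', 'group_code'):.2f}")
```

In practice, checks like these would be paired with model-level analysis, such as searching for less discriminatory alternative specifications, but they illustrate the kind of metric the presenters allude to.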

Explainability. It is important that lenders understand how complex models work, and that they are able to explain to consumers the reasons why a model resulted in an adverse outcome. In addition, to mitigate fair lending risk, lenders may wish to ensure that they can provide intuitive reasons as to why nontraditional data elements are predictive of risk.

Nontraditional Data. The use of nontraditional data elements (i.e., data not found in traditional credit bureau reports or reported by the consumer on the application) in lending models has the potential to expand access to credit in some circumstances, as regulators have indicated. However, these data elements can present elevated compliance risk, and it is important to consider appropriate fair lending risk management, including carefully reviewing data elements, documenting the rationale as to why nontraditional data elements are predictive, fair lending testing for alternatives and proxies, and considering other options to mitigate fair lending risk resulting from third-party models.

Digital Marketing. Recommendations in recent regulatory guidance regarding internet marketing include monitoring audiences reached by marketing, understanding third-party algorithms, carefully reviewing geographic filters and offering consumers the best products for which they are eligible. Regarding marketing through social platforms, including Facebook, while certain targeting options have been eliminated for credit models, other algorithms used by social media platforms that cannot be controlled by advertisers may consider prohibited basis variables, and Darren recommended that lenders consider whether and how they use such platforms for marketing.

Audience questions submitted during the presentation included:


CoinDesk 50: Besu, the Marriage of Ethereum and Hyperledger – CoinDesk

Posted: at 4:54 pm

The official marriage of Ethereum and Hyperledger matters.

There have been dalliances between Hyperledger and Ethereum going back over the years. The latest lovechild, Besu, was designed from the ground up to let large enterprises connect to the public Ethereum blockchain.

There are benefits on both sides. On the public, or permissionless, side of things, Ethereum has the largest developer community in crypto, building tools corporations may not even know they need yet.

On the other, Hyperledger's permissioned blockchain is where many of the corporations looking at this tech feel most comfortable. (Besu graduated to active status within Hyperledger in March of this year, placing the project on an equal footing with the likes of Fabric, Sawtooth and Indy.)

This post is part of the CoinDesk 50, an annual selection of the most innovative and consequential projects in the blockchain industry. See the full list here.

Ethereum's true believers have always viewed big business using the public mainnet as a Holy Grail in the quest for "world computer" status. Such a development would make Ethereum a transparent trust layer for anchoring transactions or agreements, bringing the Fortune 500 into a new world of open, decentralized finance.

Businesses are coming round to the idea of a public blockchain connection, too, either running their own nodes or by using some form of safe bridge to the mainnet, said Daniel Heyman, program director of PegaSys, the protocol engineering group at ConsenSys that built Besu.

"While some folks over at Hyperledger think that's a nice-to-have, there are definitely others who think it's a need-to-have," Heyman said. "Regardless, a mainnet project brings a lot of optionality to enterprises that otherwise wouldn't have those choices."

Hyperledger Executive Director Brian Behlendorf said Besu was "kind of hedging our bets," since the client can be used in both permissioned blockchains as well as on public networks.

"I like to keep an open mind," said Behlendorf. "Eventually, I think the larger, more-successful permissioned blockchain networks will look and feel not unlike many of the public blockchains. So it's not a dichotomy in my book."

Looking ahead, it's also possible Besu may blaze a trail in bringing more Ethereum-affiliated projects into Hyperledger. For instance, Axoni, the blockchain builder working with the Depository Trust & Clearing Corporation (DTCC), is due to open-source that particular piece of work as part of Hyperledger.

"Seeing other enterprise Ethereum projects begin to gravitate towards Hyperledger would be really exciting," said Heyman.

"Communities take a lot of work to maintain," Heyman added. "Ethereum is far and away the most engaged community in the blockchain space, which has happened rather organically. But on the enterprise side of the coin, you typically need to be a bit more intentional to get those communities to form. So Hyperledger's support is really helpful."

Behlendorf said that's where Besu may come in handy, by getting some enterprise blockchain projects to stop focusing on the bespoke in favor of something that can be adopted across multiple platforms.

"[Hyperledger] can play a useful role in helping up-level the whole industry and help everyone save some money at a time when there isn't really cash to spare," he said.

The leader in blockchain news, CoinDesk is a media outlet that strives for the highest journalistic standards and abides by a strict set of editorial policies. CoinDesk is an independent operating subsidiary of Digital Currency Group, which invests in cryptocurrencies and blockchain startups.


1000+ Experts From Around the World Call for ‘Degrowth’ After COVID-19 Pandemic – The Wire

Posted: at 4:54 pm

New Delhi: A group of over 1,000 experts and organisations have written an open letter questioning the world's strategy, and suggesting a transformative change as we move beyond the COVID-19 pandemic that has gripped us all.

For a more just and equitable society, they argue, degrowth is the way to go. This, they say, will require an overhaul of the capitalist system: a planned yet adaptive, sustainable, and equitable downscaling of the economy, leading to a future where we can live better together with less. They have put forth five principles which they believe will help create a more just future.

Blind faith in the market system and pursuits like green growth will not make matters any better, they agree.

Read the full text of the letter below.

The Coronavirus pandemic has already taken countless lives and it is uncertain how it will develop in the future. While people on the front lines of healthcare and basic social provisioning are fighting against the spread of the virus, caring for the sick and keeping essential operations running, a large part of the economy has come to a standstill. This situation is numbing and painful for many, creating fear and anxiety about those we love and the communities we are part of, but it is also a moment to collectively bring new ideas forward.

The crisis triggered by the Coronavirus has already exposed many weaknesses of our growth-obsessed capitalist economy: insecurity for many, healthcare systems crippled by years of austerity and the undervaluation of some of the most essential professions. This system, rooted in the exploitation of people and nature, is severely prone to crises, yet it was nevertheless considered normal. Although the world economy produces more than ever before, it fails to take care of humans and the planet; instead the wealth is hoarded and the planet is ravaged. Millions of children die every year from preventable causes, 820 million people are undernourished, biodiversity and ecosystems are being degraded and greenhouse gases continue to soar, leading to violent anthropogenic climate change: sea level rise, devastating storms, droughts and fires that devour entire regions.


For decades, the dominant strategies against these ills were to leave economic distribution largely to market forces and to lessen ecological degradation through decoupling and green growth. This has not worked. We now have an opportunity to build on the experiences of the Corona crisis: from new forms of cooperation and solidarity that are flourishing, to the widespread appreciation of basic societal services like health and care work, food provisioning and waste removal. The pandemic has also led to government actions unprecedented in modern peacetime, demonstrating what is possible when there is a will to act: the unquestioned reshuffling of budgets, mobilisation and redistribution of money, rapid expansion of social security systems and housing for the homeless.

At the same time, we need to be aware of the problematic authoritarian tendencies on the rise like mass surveillance and invasive technologies, border closures, restrictions on the right of assembly, and the exploitation of the crisis by disaster capitalism. We must firmly resist such dynamics, but not stop there. To start a transition towards a radically different kind of society, rather than desperately trying to get the destructive growth machine running again, we suggest to build on past lessons and the abundance of social and solidarity initiatives that have sprouted around the world these past months. Unlike after the 2008 financial crisis, we should save people and the planet rather than bail out the corporations, and emerge from this crisis with measures of sufficiency instead of austerity.

We, the signatories of this letter, therefore offer five principles for the recovery of our economy and the basis of creating a just society. To develop new roots for an economy that works for all, we need to:

1)Put life at the center of our economic systems.

Instead of economic growth and wasteful production, we must put life and wellbeing at the center of our efforts. While some sectors of the economy, like fossil fuel production, military and advertising, have to be phased out as fast as possible, we need to foster others, like healthcare, education, renewable energy and ecological agriculture.

2)Radically reevaluate how much and what work is necessary for a good life for all.

We need to put more emphasis on care work and adequately value the professions that have proven essential during the crisis. Workers from destructive industries need access to training for new types of work that is regenerative and cleaner, ensuring a just transition. Overall, we have to reduce working time and introduce schemes for work-sharing.

3)Organize society around the provision of essential goods and services.

While we need to reduce wasteful consumption and travel, basic human needs, such as the right to food, housing and education have to be secured for everyone through universal basic services or universal basic income schemes. Further, a minimum and maximum income have to be democratically defined and introduced.

4)Democratise society.

This means enabling all people to participate in the decisions that affect their lives. In particular, it means more participation for marginalised groups of society as well as including feminist principles into politics and the economic system. The power of global corporations and the financial sector has to be drastically reduced through democratic ownership and oversight. The sectors related to basic needs like energy, food, housing, health and education need to be decommodified and definancialised. Economic activity based on cooperation, for example worker cooperatives, has to be fostered.

5)Base political and economic systems on the principle of solidarity.

Redistribution and justice transnational, intersectional and intergenerational must be the basis for reconciliation between current and future generations, social groups within countries as well as between countries of the Global South and Global North. The Global North in particular must end current forms of exploitation and make reparations for past ones. Climate justice must be the principle guiding a rapid social-ecological transformation.

As long as we have an economic system that is dependent on growth, a recession will be devastating. What the world needs instead is Degrowth: a planned yet adaptive, sustainable, and equitable downscaling of the economy, leading to a future where we can live better with less. The current crisis has been brutal for many, hitting the most vulnerable hardest, but it also gives us the opportunity to reflect and rethink. It can make us realise what is truly important and has demonstrated countless potentials to build upon. Degrowth, as a movement and a concept, has been reflecting on these issues for more than a decade and offers a consistent framework for rethinking society based on other values, such as sustainability, solidarity, equity, conviviality, direct democracy and enjoyment of life.

Join us in these debates and share your ideas at Degrowth Vienna 2020 and the Global Degrowth Day to construct an intentional and emancipatory exit from our growth addictions together!

In solidarity,

The open letter working group: Nathan Barlow, Ekaterina Chertkovskaya, Manuel Grebenjak, Vincent Liegey, François Schneider, Tone Smith, Sam Bliss, Constanza Hepp, Max Hollweg, Christian Kerschner, Andro Rilović, Pierre Smith Khanna, Joëlle Saey-Volckrick

This letter is the result of a collaborative process within the degrowth international network. It has been signed by more than 1,100 experts and over 70 organizations from more than 60 countries. See all signatories here


Machine learning – Wikipedia

Posted: at 4:53 pm

Scientific study of algorithms and statistical models that computer systems use to perform tasks without explicit instructions

Machine learning (ML) is the study of computer algorithms that improve automatically through experience.[1] It is seen as a subset of artificial intelligence. Machine learning algorithms build a mathematical model based on sample data, known as "training data", in order to make predictions or decisions without being explicitly programmed to do so.[2][3]:2 Machine learning algorithms are used in a wide variety of applications, such as email filtering and computer vision, where it is difficult or infeasible to develop conventional algorithms to perform the needed tasks.

Machine learning is closely related to computational statistics, which focuses on making predictions using computers. The study of mathematical optimization delivers methods, theory and application domains to the field of machine learning. Data mining is a related field of study, focusing on exploratory data analysis through unsupervised learning.[4][5] In its application across business problems, machine learning is also referred to as predictive analytics.

Machine learning involves computers discovering how they can perform tasks without being explicitly programmed to do so. For simple tasks assigned to computers, it is possible to program algorithms telling the machine how to execute all steps required to solve the problem at hand; on the computer's part, no learning is needed. For more advanced tasks, it can be challenging for a human to manually create the needed algorithms. In practice, it can turn out to be more effective to help the machine develop its own algorithm, rather than have human programmers specify every needed step.[6][7]

The discipline of machine learning employs various approaches to help computers learn to accomplish tasks where no fully satisfactory algorithm is available. In cases where vast numbers of potential answers exist, one approach is to label some of the correct answers as valid. This can then be used as training data for the computer to improve the algorithm(s) it uses to determine correct answers. For example, to train a system for the task of digital character recognition, the MNIST dataset has often been used. [6][7]

Early classifications for machine learning approaches sometimes divided them into three broad categories, depending on the nature of the "signal" or "feedback" available to the learning system. These were:

Supervised learning: The computer is presented with example inputs and their desired outputs, given by a "teacher", and the goal is to learn a general rule that maps inputs to outputs.

Unsupervised learning: No labels are given to the learning algorithm, leaving it on its own to find structure in its input. Unsupervised learning can be a goal in itself (discovering hidden patterns in data) or a means towards an end (feature learning).

Reinforcement learning: A computer program interacts with a dynamic environment in which it must perform a certain goal (such as driving a vehicle or playing a game against an opponent). As it navigates its problem space, the program is provided feedback that is analogous to rewards, which it tries to maximise.[3]

Other approaches or processes have since developed that don't fit neatly into this three-fold categorisation, and sometimes more than one is used by the same machine learning system, for example topic modeling, dimensionality reduction or meta learning.[8] As of 2020, deep learning has become the dominant approach for much ongoing work in the field of machine learning.[6]

The term machine learning was coined in 1959 by Arthur Samuel, an American IBMer and pioneer in the field of computer gaming and artificial intelligence.[9][10] A representative book of machine learning research during the 1960s was Nilsson's book on Learning Machines, dealing mostly with machine learning for pattern classification.[11] Interest related to pattern recognition continued into the 1970s, as described by Duda and Hart in 1973.[12] In 1981 a report was given on using teaching strategies so that a neural network learns to recognize 40 characters (26 letters, 10 digits, and 4 special symbols) from a computer terminal.[13]

Tom M. Mitchell provided a widely quoted, more formal definition of the algorithms studied in the machine learning field: "A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E."[14] This definition of the tasks in which machine learning is concerned offers a fundamentally operational definition rather than defining the field in cognitive terms. This follows Alan Turing's proposal in his paper "Computing Machinery and Intelligence", in which the question "Can machines think?" is replaced with the question "Can machines do what we (as thinking entities) can do?".[15]

As a scientific endeavor, machine learning grew out of the quest for artificial intelligence. In the early days of AI as an academic discipline, some researchers were interested in having machines learn from data. They attempted to approach the problem with various symbolic methods, as well as what were then termed "neural networks"; these were mostly perceptrons and other models that were later found to be reinventions of the generalized linear models of statistics.[16] Probabilistic reasoning was also employed, especially in automated medical diagnosis.[17]:488

However, an increasing emphasis on the logical, knowledge-based approach caused a rift between AI and machine learning. Probabilistic systems were plagued by theoretical and practical problems of data acquisition and representation.[17]:488 By 1980, expert systems had come to dominate AI, and statistics was out of favor.[18] Work on symbolic/knowledge-based learning did continue within AI, leading to inductive logic programming, but the more statistical line of research was now outside the field of AI proper, in pattern recognition and information retrieval.[17]:708-710, 755 Neural networks research had been abandoned by AI and computer science around the same time. This line, too, was continued outside the AI/CS field, as "connectionism", by researchers from other disciplines including Hopfield, Rumelhart and Hinton. Their main success came in the mid-1980s with the reinvention of backpropagation.[17]:25

Machine learning, reorganized as a separate field, started to flourish in the 1990s. The field changed its goal from achieving artificial intelligence to tackling solvable problems of a practical nature. It shifted focus away from the symbolic approaches it had inherited from AI, and toward methods and models borrowed from statistics and probability theory.[18] As of 2019, many sources continue to assert that machine learning remains a subfield of AI. Yet some practitioners, for example Dr Daniel Hulme, who both teaches AI and runs a company operating in the field, argue that machine learning and AI are separate.[7][19][6]

Machine learning and data mining often employ the same methods and overlap significantly, but while machine learning focuses on prediction, based on known properties learned from the training data, data mining focuses on the discovery of (previously) unknown properties in the data (this is the analysis step of knowledge discovery in databases). Data mining uses many machine learning methods, but with different goals; on the other hand, machine learning also employs data mining methods as "unsupervised learning" or as a preprocessing step to improve learner accuracy. Much of the confusion between these two research communities (which do often have separate conferences and separate journals, ECML PKDD being a major exception) comes from the basic assumptions they work with: in machine learning, performance is usually evaluated with respect to the ability to reproduce known knowledge, while in knowledge discovery and data mining (KDD) the key task is the discovery of previously unknown knowledge. Evaluated with respect to known knowledge, an uninformed (unsupervised) method will easily be outperformed by other supervised methods, while in a typical KDD task, supervised methods cannot be used due to the unavailability of training data.

Machine learning also has intimate ties to optimization: many learning problems are formulated as minimization of some loss function on a training set of examples. Loss functions express the discrepancy between the predictions of the model being trained and the actual problem instances (for example, in classification, one wants to assign a label to instances, and models are trained to correctly predict the pre-assigned labels of a set of examples). The difference between the two fields arises from the goal of generalization: while optimization algorithms can minimize the loss on a training set, machine learning is concerned with minimizing the loss on unseen samples.[20]
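As a minimal sketch of this loss-minimization view (assuming a synthetic one-parameter linear model and squared-error loss purely as stand-ins, in Python with NumPy), gradient descent drives the training loss down step by step, while the real goal remains low loss on unseen data:

    import numpy as np

    # Synthetic data: y = 3x + noise (assumed for illustration only)
    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, size=200)
    y = 3.0 * x + rng.normal(scale=0.1, size=200)

    w = 0.0    # single model parameter
    lr = 0.1   # learning rate
    for step in range(200):
        pred = w * x
        grad = np.mean(2 * (pred - y) * x)   # derivative of mean squared error w.r.t. w
        w -= lr * grad

    train_loss = np.mean((w * x - y) ** 2)
    print(f"learned w ~ {w:.2f}, training MSE ~ {train_loss:.4f}")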

Machine learning and statistics are closely related fields in terms of methods, but distinct in their principal goal: statistics draws population inferences from a sample, while machine learning finds generalizable predictive patterns.[21] According to Michael I. Jordan, the ideas of machine learning, from methodological principles to theoretical tools, have had a long pre-history in statistics.[22] He also suggested the term data science as a placeholder to call the overall field.[22]

Leo Breiman distinguished two statistical modeling paradigms: data model and algorithmic model,[23] wherein "algorithmic model" means more or less the machine learning algorithms like Random forest.

Some statisticians have adopted methods from machine learning, leading to a combined field that they call statistical learning.[24]

A core objective of a learner is to generalize from its experience.[3][25] Generalization in this context is the ability of a learning machine to perform accurately on new, unseen examples/tasks after having experienced a learning data set. The training examples come from some generally unknown probability distribution (considered representative of the space of occurrences) and the learner has to build a general model about this space that enables it to produce sufficiently accurate predictions in new cases.

The computational analysis of machine learning algorithms and their performance is a branch of theoretical computer science known as computational learning theory. Because training sets are finite and the future is uncertain, learning theory usually does not yield guarantees of the performance of algorithms. Instead, probabilistic bounds on the performance are quite common. The bias-variance decomposition is one way to quantify generalization error.

For the best performance in the context of generalization, the complexity of the hypothesis should match the complexity of the function underlying the data. If the hypothesis is less complex than the function, then the model has underfitted the data. If the complexity of the model is increased in response, then the training error decreases. But if the hypothesis is too complex, then the model is subject to overfitting and generalization will be poorer.[26]
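A small illustration of this trade-off, assuming NumPy and a synthetic sine-wave dataset: polynomials of increasing degree drive the training error down, while the error on held-out points eventually rises again as the model overfits.

    import numpy as np

    rng = np.random.default_rng(1)
    x_train = np.linspace(0, 1, 20)
    x_test = np.linspace(0, 1, 100)
    y_train = np.sin(2 * np.pi * x_train) + rng.normal(scale=0.2, size=x_train.size)
    y_test = np.sin(2 * np.pi * x_test)

    for degree in (1, 4, 15):   # too simple, about right, too complex
        coeffs = np.polyfit(x_train, y_train, degree)
        train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
        test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
        print(f"degree {degree:2d}: train MSE {train_err:.3f}, held-out MSE {test_err:.3f}")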

In addition to performance bounds, learning theorists study the time complexity and feasibility of learning. In computational learning theory, a computation is considered feasible if it can be done in polynomial time. There are two kinds of time complexity results. Positive results show that a certain class of functions can be learned in polynomial time. Negative results show that certain classes cannot be learned in polynomial time.

The types of machine learning algorithms differ in their approach, the type of data they input and output, and the type of task or problem that they are intended to solve.

Supervised learning algorithms build a mathematical model of a set of data that contains both the inputs and the desired outputs.[27] The data is known as training data, and consists of a set of training examples. Each training example has one or more inputs and the desired output, also known as a supervisory signal. In the mathematical model, each training example is represented by an array or vector, sometimes called a feature vector, and the training data is represented by a matrix. Through iterative optimization of an objective function, supervised learning algorithms learn a function that can be used to predict the output associated with new inputs.[28] An optimal function will allow the algorithm to correctly determine the output for inputs that were not a part of the training data. An algorithm that improves the accuracy of its outputs or predictions over time is said to have learned to perform that task.[14]

Types of supervised learning algorithms include active learning, classification and regression.[29] Classification algorithms are used when the outputs are restricted to a limited set of values, and regression algorithms are used when the outputs may have any numerical value within a range. As an example, for a classification algorithm that filters emails, the input would be an incoming email, and the output would be the name of the folder in which to file the email.
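A minimal supervised-learning sketch, assuming scikit-learn is available and using its bundled iris dataset as stand-in labeled data rather than the email example above:

    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = load_iris(return_X_y=True)          # feature vectors and class labels
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    clf = LogisticRegression(max_iter=1000)    # a simple supervised classifier
    clf.fit(X_tr, y_tr)                        # learn a mapping from inputs to labels
    print("held-out accuracy:", clf.score(X_te, y_te))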

Similarity learning is an area of supervised machine learning closely related to regression and classification, but the goal is to learn from examples using a similarity function that measures how similar or related two objects are. It has applications in ranking, recommendation systems, visual identity tracking, face verification, and speaker verification.

Unsupervised learning algorithms take a set of data that contains only inputs, and find structure in the data, like grouping or clustering of data points. The algorithms, therefore, learn from test data that has not been labeled, classified or categorized. Instead of responding to feedback, unsupervised learning algorithms identify commonalities in the data and react based on the presence or absence of such commonalities in each new piece of data. A central application of unsupervised learning is in the field of density estimation in statistics, such as finding the probability density function,[30] though unsupervised learning also encompasses other domains involving summarizing and explaining data features.

Cluster analysis is the assignment of a set of observations into subsets (called clusters) so that observations within the same cluster are similar according to one or more predesignated criteria, while observations drawn from different clusters are dissimilar. Different clustering techniques make different assumptions on the structure of the data, often defined by some similarity metric and evaluated, for example, by internal compactness, or the similarity between members of the same cluster, and separation, the difference between clusters. Other methods are based on estimated density and graph connectivity.
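A brief clustering sketch, assuming scikit-learn and two synthetic "blobs" of points; k-means receives only the inputs and the desired number of clusters, no labels:

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    # Two synthetic groups of points; no labels are provided to the algorithm
    X = np.vstack([rng.normal(loc=0.0, scale=0.5, size=(50, 2)),
                   rng.normal(loc=3.0, scale=0.5, size=(50, 2))])

    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    print("cluster centres:\n", km.cluster_centers_)
    print("first ten assignments:", km.labels_[:10])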

Semi-supervised learning falls between unsupervised learning (without any labeled training data) and supervised learning (with completely labeled training data). Some of the training examples are missing training labels, yet many machine-learning researchers have found that unlabeled data, when used in conjunction with a small amount of labeled data, can produce a considerable improvement in learning accuracy.
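A small semi-supervised sketch, assuming scikit-learn's LabelSpreading and the iris data, with most labels deliberately hidden (marked -1, the library's convention for unlabeled examples):

    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.semi_supervised import LabelSpreading

    X, y = load_iris(return_X_y=True)
    rng = np.random.default_rng(0)

    # Hide roughly 90% of the labels: -1 marks an unlabeled example
    y_partial = y.copy()
    unlabeled = rng.random(y.shape[0]) < 0.9
    y_partial[unlabeled] = -1

    model = LabelSpreading().fit(X, y_partial)
    accuracy = (model.transduction_[unlabeled] == y[unlabeled]).mean()
    print(f"accuracy on the examples whose labels were hidden: {accuracy:.2f}")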

In weakly supervised learning, the training labels are noisy, limited, or imprecise; however, these labels are often cheaper to obtain, resulting in larger effective training sets.[31]

Reinforcement learning is an area of machine learning concerned with how software agents ought to take actions in an environment so as to maximize some notion of cumulative reward. Due to its generality, the field is studied in many other disciplines, such as game theory, control theory, operations research, information theory, simulation-based optimization, multi-agent systems, swarm intelligence, statistics and genetic algorithms. In machine learning, the environment is typically represented as a Markov Decision Process (MDP). Many reinforcement learning algorithms use dynamic programming techniques.[32] Reinforcement learning algorithms do not assume knowledge of an exact mathematical model of the MDP, and are used when exact models are infeasible. Reinforcement learning algorithms are used in autonomous vehicles or in learning to play a game against a human opponent.
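A toy illustration of the reinforcement-learning loop, assuming a made-up five-state chain MDP and tabular Q-learning (one common algorithm, not the only one), written with NumPy:

    import numpy as np

    # A toy chain MDP (assumed for illustration): states 0..4, actions 0=left, 1=right.
    # Reaching state 4 yields reward 1 and ends the episode.
    n_states, n_actions = 5, 2
    Q = np.zeros((n_states, n_actions))
    alpha, gamma, epsilon = 0.5, 0.9, 0.1
    rng = np.random.default_rng(0)

    def step(s, a):
        s_next = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
        reward = 1.0 if s_next == n_states - 1 else 0.0
        return s_next, reward, s_next == n_states - 1

    for episode in range(500):
        s, done = 0, False
        while not done:
            a = rng.integers(n_actions) if rng.random() < epsilon else int(np.argmax(Q[s]))
            s_next, r, done = step(s, a)
            # Q-learning update: move Q(s, a) toward the bootstrapped return
            Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
            s = s_next

    print("greedy action per state (1 = move right):", np.argmax(Q, axis=1))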

Self-learning as a machine learning paradigm was introduced in 1982 along with a neural network capable of self-learning, named Crossbar Adaptive Array (CAA).[33] It is learning with no external rewards and no external teacher advice. The CAA self-learning algorithm computes, in a crossbar fashion, both decisions about actions and emotions (feelings) about consequence situations. The system is driven by the interaction between cognition and emotion.[34] The self-learning algorithm updates a memory matrix W = ||w(a,s)|| such that in each iteration it executes the following machine learning routine: in situation s perform action a, receive the consequence situation s', compute the emotion v(s') of being in the consequence situation, and update the crossbar memory w'(a,s) = w(a,s) + v(s').

It is a system with only one input, situation s, and only one output, action (or behavior) a. There is neither a separate reinforcement input nor an advice input from the environment. The backpropagated value (secondary reinforcement) is the emotion toward the consequence situation. The CAA exists in two environments, one is behavioral environment where it behaves, and the other is genetic environment, wherefrom it initially and only once receives initial emotions about situations to be encountered in the behavioral environment. After receiving the genome (species) vector from the genetic environment, the CAA learns a goal seeking behavior, in an environment that contains both desirable and undesirable situations. [35]

Several learning algorithms aim at discovering better representations of the inputs provided during training.[36] Classic examples include principal components analysis and cluster analysis. Feature learning algorithms, also called representation learning algorithms, often attempt to preserve the information in their input but also transform it in a way that makes it useful, often as a pre-processing step before performing classification or predictions. This technique allows reconstruction of the inputs coming from the unknown data-generating distribution, while not being necessarily faithful to configurations that are implausible under that distribution. This replaces manual feature engineering, and allows a machine to both learn the features and use them to perform a specific task.

Feature learning can be either supervised or unsupervised. In supervised feature learning, features are learned using labeled input data. Examples include artificial neural networks, multilayer perceptrons, and supervised dictionary learning. In unsupervised feature learning, features are learned with unlabeled input data. Examples include dictionary learning, independent component analysis, autoencoders, matrix factorization[37] and various forms of clustering.[38][39][40]
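As an unsupervised feature-learning sketch, assuming scikit-learn, PCA compresses the 64-pixel digit images into a 10-dimensional representation while keeping most of the variance:

    from sklearn.datasets import load_digits
    from sklearn.decomposition import PCA

    X, _ = load_digits(return_X_y=True)        # 64-dimensional pixel features
    pca = PCA(n_components=10).fit(X)          # learn a compact representation
    X_reduced = pca.transform(X)

    print("original shape:", X.shape, "reduced shape:", X_reduced.shape)
    print("variance explained by 10 components:",
          round(float(pca.explained_variance_ratio_.sum()), 2))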

Manifold learning algorithms attempt to do so under the constraint that the learned representation is low-dimensional. Sparse coding algorithms attempt to do so under the constraint that the learned representation is sparse, meaning that the mathematical model has many zeros. Multilinear subspace learning algorithms aim to learn low-dimensional representations directly from tensor representations for multidimensional data, without reshaping them into higher-dimensional vectors.[41] Deep learning algorithms discover multiple levels of representation, or a hierarchy of features, with higher-level, more abstract features defined in terms of (or generating) lower-level features. It has been argued that an intelligent machine is one that learns a representation that disentangles the underlying factors of variation that explain the observed data.[42]

Feature learning is motivated by the fact that machine learning tasks such as classification often require input that is mathematically and computationally convenient to process. However, real-world data such as images, video, and sensory data has not yielded to attempts to algorithmically define specific features. An alternative is to discover such features or representations through examination, without relying on explicit algorithms.

Sparse dictionary learning is a feature learning method where a training example is represented as a linear combination of basis functions, and is assumed to be a sparse matrix. The method is strongly NP-hard and difficult to solve approximately.[43] A popular heuristic method for sparse dictionary learning is the K-SVD algorithm. Sparse dictionary learning has been applied in several contexts. In classification, the problem is to determine the class to which a previously unseen training example belongs. For a dictionary where each class has already been built, a new training example is associated with the class that is best sparsely represented by the corresponding dictionary. Sparse dictionary learning has also been applied in image de-noising. The key idea is that a clean image patch can be sparsely represented by an image dictionary, but the noise cannot.[44]
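A rough sketch of sparse coding with scikit-learn's DictionaryLearning on random toy signals (the data and parameter choices are assumptions, not from the article); each signal is represented by only a few non-zero coefficients over the learned dictionary:

    import numpy as np
    from sklearn.decomposition import DictionaryLearning

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 20))             # toy signals, 20 dimensions each

    learner = DictionaryLearning(n_components=15, transform_algorithm='omp',
                                 transform_n_nonzero_coefs=3, random_state=0)
    codes = learner.fit_transform(X)           # sparse codes for each signal

    print("dictionary shape:", learner.components_.shape)
    print("average non-zero coefficients per signal:",
          float((codes != 0).sum(axis=1).mean()))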

In data mining, anomaly detection, also known as outlier detection, is the identification of rare items, events or observations which raise suspicions by differing significantly from the majority of the data.[45] Typically, the anomalous items represent an issue such as bank fraud, a structural defect, medical problems or errors in a text. Anomalies are referred to as outliers, novelties, noise, deviations and exceptions.[46]

In particular, in the context of abuse and network intrusion detection, the interesting objects are often not rare objects, but unexpected bursts in activity. This pattern does not adhere to the common statistical definition of an outlier as a rare object, and many outlier detection methods (in particular, unsupervised algorithms) will fail on such data, unless it has been aggregated appropriately. Instead, a cluster analysis algorithm may be able to detect the micro-clusters formed by these patterns.[47]

Three broad categories of anomaly detection techniques exist.[48] Unsupervised anomaly detection techniques detect anomalies in an unlabeled test data set under the assumption that the majority of the instances in the data set are normal, by looking for instances that seem to fit least to the remainder of the data set. Supervised anomaly detection techniques require a data set that has been labeled as "normal" and "abnormal" and involves training a classifier (the key difference to many other statistical classification problems is the inherently unbalanced nature of outlier detection). Semi-supervised anomaly detection techniques construct a model representing normal behavior from a given normal training data set and then test the likelihood of a test instance to be generated by the model.
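A small unsupervised anomaly-detection sketch, assuming scikit-learn's IsolationForest (one of many possible detectors) and synthetic data with a few injected outliers:

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    normal = rng.normal(loc=0.0, scale=1.0, size=(200, 2))   # the bulk of the data
    outliers = rng.uniform(low=6.0, high=8.0, size=(5, 2))   # a few unusual points
    X = np.vstack([normal, outliers])

    detector = IsolationForest(random_state=0).fit(X)
    labels = detector.predict(X)               # +1 = inlier, -1 = flagged as anomaly
    print("points flagged as anomalies:", int((labels == -1).sum()))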

In developmental robotics, robot learning algorithms generate their own sequences of learning experiences, also known as a curriculum, to cumulatively acquire new skills through self-guided exploration and social interaction with humans. These robots use guidance mechanisms such as active learning, maturation, motor synergies and imitation.

Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of "interestingness".[49]

Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves "rules" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system. This is in contrast to other machine learning algorithms that commonly identify a singular model that can be universally applied to any instance in order to make a prediction.[50] Rule-based machine learning approaches include learning classifier systems, association rule learning, and artificial immune systems.

Based on the concept of strong rules, Rakesh Agrawal, Tomasz Imieliński and Arun Swami introduced association rules for discovering regularities between products in large-scale transaction data recorded by point-of-sale (POS) systems in supermarkets.[51] For example, the rule {onions, potatoes} ⇒ {burger} found in the sales data of a supermarket would indicate that if a customer buys onions and potatoes together, they are likely to also buy hamburger meat. Such information can be used as the basis for decisions about marketing activities such as promotional pricing or product placements. In addition to market basket analysis, association rules are employed today in application areas including Web usage mining, intrusion detection, continuous production, and bioinformatics. In contrast with sequence mining, association rule learning typically does not consider the order of items either within a transaction or across transactions.
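The support and confidence behind such a rule can be computed directly; the toy transactions below are assumed purely for illustration:

    # The rule of interest is {onions, potatoes} => {burger}.
    transactions = [
        {"onions", "potatoes", "burger"},
        {"onions", "potatoes", "burger", "beer"},
        {"onions", "potatoes"},
        {"milk", "bread"},
        {"potatoes", "burger"},
    ]

    antecedent = {"onions", "potatoes"}
    consequent = {"burger"}

    n = len(transactions)
    support_ante = sum(antecedent <= t for t in transactions) / n          # subset test
    support_rule = sum((antecedent | consequent) <= t for t in transactions) / n
    confidence = support_rule / support_ante

    print(f"support of the rule = {support_rule:.2f}")
    print(f"confidence of the rule = {confidence:.2f}")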

Learning classifier systems (LCS) are a family of rule-based machine learning algorithms that combine a discovery component, typically a genetic algorithm, with a learning component, performing either supervised learning, reinforcement learning, or unsupervised learning. They seek to identify a set of context-dependent rules that collectively store and apply knowledge in a piecewise manner in order to make predictions.[52]

Inductive logic programming (ILP) is an approach to rule-learning using logic programming as a uniform representation for input examples, background knowledge, and hypotheses. Given an encoding of the known background knowledge and a set of examples represented as a logical database of facts, an ILP system will derive a hypothesized logic program that entails all positive and no negative examples. Inductive programming is a related field that considers any kind of programming languages for representing hypotheses (and not only logic programming), such as functional programs.

Inductive logic programming is particularly useful in bioinformatics and natural language processing. Gordon Plotkin and Ehud Shapiro laid the initial theoretical foundation for inductive machine learning in a logical setting.[53][54][55] Shapiro built their first implementation (Model Inference System) in 1981: a Prolog program that inductively inferred logic programs from positive and negative examples.[56] The term inductive here refers to philosophical induction, suggesting a theory to explain observed facts, rather than mathematical induction, proving a property for all members of a well-ordered set.

Performing machine learning involves creating a model, which is trained on some training data and then can process additional data to make predictions. Various types of models have been used and researched for machine learning systems.

Artificial neural networks (ANNs), or connectionist systems, are computing systems vaguely inspired by the biological neural networks that constitute animal brains. Such systems "learn" to perform tasks by considering examples, generally without being programmed with any task-specific rules.

An ANN is a model based on a collection of connected units or nodes called "artificial neurons", which loosely model the neurons in a biological brain. Each connection, like the synapses in a biological brain, can transmit information, a "signal", from one artificial neuron to another. An artificial neuron that receives a signal can process it and then signal additional artificial neurons connected to it. In common ANN implementations, the signal at a connection between artificial neurons is a real number, and the output of each artificial neuron is computed by some non-linear function of the sum of its inputs. The connections between artificial neurons are called "edges". Artificial neurons and edges typically have a weight that adjusts as learning proceeds. The weight increases or decreases the strength of the signal at a connection. Artificial neurons may have a threshold such that the signal is only sent if the aggregate signal crosses that threshold. Typically, artificial neurons are aggregated into layers. Different layers may perform different kinds of transformations on their inputs. Signals travel from the first layer (the input layer) to the last layer (the output layer), possibly after traversing the layers multiple times.
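A minimal forward pass through such a network, assuming NumPy, random weights and a ReLU non-linearity chosen only for illustration; each neuron applies a non-linear function to the weighted sum of its inputs:

    import numpy as np

    def relu(z):
        return np.maximum(0.0, z)

    # A tiny two-layer network with assumed random weights, for illustration only
    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)   # 3 inputs -> 4 hidden neurons
    W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)   # 4 hidden -> 2 output neurons

    x = np.array([0.5, -1.2, 3.0])                  # one input "signal"
    hidden = relu(W1 @ x + b1)                      # non-linear function of weighted sums
    output = W2 @ hidden + b2
    print("hidden activations:", hidden)
    print("network output:", output)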

The original goal of the ANN approach was to solve problems in the same way that a human brain would. However, over time, attention moved to performing specific tasks, leading to deviations from biology. Artificial neural networks have been used on a variety of tasks, including computer vision, speech recognition, machine translation, social network filtering, playing board and video games and medical diagnosis.

Deep learning consists of multiple hidden layers in an artificial neural network. This approach tries to model the way the human brain processes light and sound into vision and hearing. Some successful applications of deep learning are computer vision and speech recognition.[57]

Decision tree learning uses a decision tree as a predictive model to go from observations about an item (represented in the branches) to conclusions about the item's target value (represented in the leaves). It is one of the predictive modeling approaches used in statistics, data mining and machine learning. Tree models where the target variable can take a discrete set of values are called classification trees; in these tree structures, leaves represent class labels and branches represent conjunctions of features that lead to those class labels. Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making. In data mining, a decision tree describes data, but the resulting classification tree can be an input for decision making.
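A short decision-tree sketch, assuming scikit-learn and its iris dataset; the printed rules show branches (feature tests) leading to leaves (class labels):

    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, y = load_iris(return_X_y=True)
    tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

    # Branches test feature values; leaves hold the predicted class labels
    print(export_text(tree, feature_names=["sepal len", "sepal wid",
                                           "petal len", "petal wid"]))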

Support vector machines (SVMs), also known as support vector networks, are a set of related supervised learning methods used for classification and regression. Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that predicts whether a new example falls into one category or the other.[58] An SVM training algorithm is a non-probabilistic, binary, linear classifier, although methods such as Platt scaling exist to use SVM in a probabilistic classification setting. In addition to performing linear classification, SVMs can efficiently perform a non-linear classification using what is called the kernel trick, implicitly mapping their inputs into high-dimensional feature spaces.
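A brief sketch of the kernel trick in practice, assuming scikit-learn and its synthetic "two moons" dataset, comparing a linear SVM with an RBF-kernel SVM on data that is not linearly separable:

    from sklearn.datasets import make_moons
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    # Two interleaving half-moons: not linearly separable in the input space
    X, y = make_moons(n_samples=300, noise=0.2, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    linear_svm = SVC(kernel="linear").fit(X_tr, y_tr)
    rbf_svm = SVC(kernel="rbf").fit(X_tr, y_tr)     # kernel trick: implicit mapping

    print("linear kernel accuracy:", linear_svm.score(X_te, y_te))
    print("RBF kernel accuracy:   ", rbf_svm.score(X_te, y_te))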

Regression analysis encompasses a large variety of statistical methods to estimate the relationship between input variables and their associated features. Its most common form is linear regression, where a single line is drawn to best fit the given data according to a mathematical criterion such as ordinary least squares. The latter is often extended by regularization methods to mitigate overfitting and bias, as in ridge regression. When dealing with non-linear problems, go-to models include polynomial regression (for example, used for trendline fitting in Microsoft Excel[59]), logistic regression (often used in statistical classification) or even kernel regression, which introduces non-linearity by taking advantage of the kernel trick to implicitly map input variables to a higher-dimensional space.
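A small regression sketch, assuming scikit-learn and synthetic data with a non-linear target, comparing plain least squares with polynomial features plus ridge regularization:

    import numpy as np
    from sklearn.linear_model import LinearRegression, Ridge
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures

    rng = np.random.default_rng(0)
    x = rng.uniform(-2, 2, size=(80, 1))
    y = 1.5 * x[:, 0] ** 2 - x[:, 0] + rng.normal(scale=0.3, size=80)   # non-linear target

    ols = LinearRegression().fit(x, y)                        # plain least squares
    poly_ridge = make_pipeline(PolynomialFeatures(degree=2),  # polynomial regression
                               Ridge(alpha=1.0)).fit(x, y)    # with regularization

    print("linear fit R^2:      ", ols.score(x, y))
    print("poly + ridge fit R^2:", poly_ridge.score(x, y))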

A Bayesian network, belief network or directed acyclic graphical model is a probabilistic graphical model that represents a set of random variables and their conditional independence with a directed acyclic graph (DAG). For example, a Bayesian network could represent the probabilistic relationships between diseases and symptoms. Given symptoms, the network can be used to compute the probabilities of the presence of various diseases. Efficient algorithms exist that perform inference and learning. Bayesian networks that model sequences of variables, like speech signals or protein sequences, are called dynamic Bayesian networks. Generalizations of Bayesian networks that can represent and solve decision problems under uncertainty are called influence diagrams.
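A worked example of the disease/symptom inference described above, with assumed probabilities and a single application of Bayes' rule in Python:

    # A two-node "disease -> symptom" network with assumed probabilities,
    # used only to illustrate inference from an observed symptom back to the disease.
    p_disease = 0.01                      # P(disease)
    p_symptom_given_disease = 0.90        # P(symptom | disease)
    p_symptom_given_healthy = 0.05        # P(symptom | no disease)

    p_symptom = (p_symptom_given_disease * p_disease
                 + p_symptom_given_healthy * (1 - p_disease))
    p_disease_given_symptom = p_symptom_given_disease * p_disease / p_symptom

    print(f"P(disease | symptom) = {p_disease_given_symptom:.3f}")   # about 0.154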

A genetic algorithm (GA) is a search algorithm and heuristic technique that mimics the process of natural selection, using methods such as mutation and crossover to generate new genotypes in the hope of finding good solutions to a given problem. In machine learning, genetic algorithms were used in the 1980s and 1990s.[60][61] Conversely, machine learning techniques have been used to improve the performance of genetic and evolutionary algorithms.[62]
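A minimal genetic-algorithm sketch in plain Python, assuming a toy objective (maximising the number of 1s in a bit string) with selection, crossover and mutation:

    import random

    random.seed(0)

    def fitness(genome):                 # toy objective: count the 1s
        return sum(genome)

    def mutate(genome, rate=0.05):       # flip each bit with a small probability
        return [1 - g if random.random() < rate else g for g in genome]

    def crossover(a, b):                 # single-point crossover of two parents
        point = random.randrange(1, len(a))
        return a[:point] + b[point:]

    population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
    for generation in range(50):
        population.sort(key=fitness, reverse=True)
        parents = population[:10]        # selection of the fittest genotypes
        population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                      for _ in range(len(population))]

    print("best fitness after evolution:", max(fitness(g) for g in population))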

Machine learning models typically require a lot of data in order to perform well. When training a machine learning model, one usually needs to collect a large, representative sample of data from a training set. Data from the training set can be as varied as a corpus of text, a collection of images, or data collected from individual users of a service. Overfitting is something to watch out for when training a machine learning model.

Federated learning is a new approach to training machine learning models that decentralizes the training process, allowing for users' privacy to be maintained by not needing to send their data to a centralized server. This also increases efficiency by decentralizing the training process to many devices. For example, Gboard uses federated machine learning to train search query prediction models on users' mobile phones without having to send individual searches back to Google.[63]
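A rough federated-averaging sketch in NumPy, assuming three simulated devices that each fit a local linear model and share only their weights (a simplification: real federated averaging typically weights devices by sample count and iterates over many rounds):

    import numpy as np

    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0])

    def local_update(n_samples):
        # Each "device" trains on its own data; raw data never leaves the device
        X = rng.normal(size=(n_samples, 2))
        y = X @ true_w + rng.normal(scale=0.1, size=n_samples)
        return np.linalg.lstsq(X, y, rcond=None)[0]   # closed-form local training step

    device_weights = [local_update(n) for n in (50, 80, 120)]   # three devices
    global_model = np.mean(device_weights, axis=0)              # server-side averaging
    print("federated average of the local models:", global_model)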

There are many applications for machine learning, including:

In 2006, the media-services provider Netflix held the first "Netflix Prize" competition to find a program to better predict user preferences and improve the accuracy on its existing Cinematch movie recommendation algorithm by at least 10%. A joint team made up of researchers from AT&T Labs-Research in collaboration with the teams Big Chaos and Pragmatic Theory built an ensemble model to win the Grand Prize in 2009 for $1 million.[65] Shortly after the prize was awarded, Netflix realized that viewers' ratings were not the best indicators of their viewing patterns ("everything is a recommendation") and they changed their recommendation engine accordingly.[66] In 2010 The Wall Street Journal wrote about the firm Rebellion Research and their use of machine learning to predict the financial crisis.[67] In 2012, co-founder of Sun Microsystems, Vinod Khosla, predicted that 80% of medical doctors' jobs would be lost in the next two decades to automated machine learning medical diagnostic software.[68] In 2014, it was reported that a machine learning algorithm had been applied in the field of art history to study fine art paintings, and that it may have revealed previously unrecognized influences among artists.[69] In 2019 Springer Nature published the first research book created using machine learning.[70]

Although machine learning has been transformative in some fields, machine-learning programs often fail to deliver expected results.[71][72][73] Reasons for this are numerous: lack of (suitable) data, lack of access to the data, data bias, privacy problems, badly chosen tasks and algorithms, wrong tools and people, lack of resources, and evaluation problems.[74]

In 2018, a self-driving car from Uber failed to detect a pedestrian, who was killed after a collision.[75] Attempts to use machine learning in healthcare with the IBM Watson system failed to deliver even after years of time and billions of investment.[76][77]

Machine learning approaches in particular can suffer from different data biases. A machine learning system trained on current customers only may not be able to predict the needs of new customer groups that are not represented in the training data. When trained on man-made data, machine learning is likely to pick up the same constitutional and unconscious biases already present in society.[78] Language models learned from data have been shown to contain human-like biases.[79][80] Machine learning systems used for criminal risk assessment have been found to be biased against black people.[81][82] In 2015, Google Photos would often tag black people as gorillas,[83] and in 2018 this still was not well resolved: Google reportedly was still using the workaround of removing all gorillas from the training data, and thus was not able to recognize real gorillas at all.[84] Similar issues with recognizing non-white people have been found in many other systems.[85] In 2016, Microsoft tested a chatbot that learned from Twitter, and it quickly picked up racist and sexist language.[86] Because of such challenges, the effective use of machine learning may take longer to be adopted in other domains.[87] Concern for fairness in machine learning, that is, reducing bias in machine learning and propelling its use for human good, is increasingly expressed by artificial intelligence scientists, including Fei-Fei Li, who reminds engineers that "There's nothing artificial about AI... It's inspired by people, it's created by people, and, most importantly, it impacts people. It is a powerful tool we are only just beginning to understand, and that is a profound responsibility."[88]

Classification machine learning models can be validated by accuracy estimation techniques like the holdout method, which splits the data into a training and a test set (conventionally a 2/3 training set and 1/3 test set designation) and evaluates the performance of the trained model on the test set. In comparison, the K-fold cross-validation method randomly partitions the data into K subsets; K experiments are then performed, each using one subset for evaluation and the remaining K-1 subsets for training the model. In addition to the holdout and cross-validation methods, bootstrap, which samples n instances with replacement from the dataset, can be used to assess model accuracy.[89]
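A short sketch of holdout and k-fold evaluation, assuming scikit-learn, logistic regression and the iris data purely as stand-ins:

    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split, cross_val_score

    X, y = load_iris(return_X_y=True)
    model = LogisticRegression(max_iter=1000)

    # Holdout: roughly 2/3 of the data for training, 1/3 held back for testing
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=1 / 3, random_state=0)
    print("holdout accuracy:", model.fit(X_tr, y_tr).score(X_te, y_te))

    # K-fold cross-validation: K experiments, each evaluating on a different fold
    scores = cross_val_score(model, X, y, cv=5)
    print("5-fold accuracies:", scores, "mean:", scores.mean())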

In addition to overall accuracy, investigators frequently report sensitivity and specificity, meaning the True Positive Rate (TPR) and True Negative Rate (TNR) respectively. Similarly, investigators sometimes report the False Positive Rate (FPR) as well as the False Negative Rate (FNR). However, these rates are ratios that fail to reveal their numerators and denominators. The Total Operating Characteristic (TOC) is an effective method to express a model's diagnostic ability. TOC shows the numerators and denominators of the previously mentioned rates, so TOC provides more information than the commonly used Receiver Operating Characteristic (ROC) and ROC's associated Area Under the Curve (AUC).[90]
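The rates above can be computed explicitly from a confusion matrix; the labels and predictions below are assumed for illustration, and the printout keeps the numerators and denominators visible:

    import numpy as np
    from sklearn.metrics import confusion_matrix

    # Assumed true labels and predictions for a binary problem
    y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0])
    y_pred = np.array([1, 1, 1, 0, 0, 0, 0, 0, 1, 0])

    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    fpr = fp / (fp + tn)           # false positive rate

    print(f"TPR = {tp}/{tp + fn} = {sensitivity:.2f}")
    print(f"TNR = {tn}/{tn + fp} = {specificity:.2f}")
    print(f"FPR = {fp}/{fp + tn} = {fpr:.2f}")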

Machine learning poses a host of ethical questions. Systems which are trained on datasets collected with biases may exhibit these biases upon use (algorithmic bias), thus digitizing cultural prejudices.[91] For example, using job hiring data from a firm with racist hiring policies may lead to a machine learning system duplicating the bias by scoring job applicants against similarity to previous successful applicants.[92][93] Responsible collection of data and documentation of algorithmic rules used by a system thus is a critical part of machine learning.

Because human languages contain biases, machines trained on language corpora will necessarily also learn these biases.[94][95]

Other forms of ethical challenges, not related to personal biases, are seen more often in health care. There are concerns among health care professionals that these systems might not be designed in the public's interest but as income-generating machines. This is especially true in the United States, where there is a long-standing ethical dilemma of improving health care but also increasing profits. For example, the algorithms could be designed to provide patients with unnecessary tests or medication in which the algorithm's proprietary owners hold stakes. There is huge potential for machine learning in health care to provide professionals with a great tool to diagnose, medicate, and even plan recovery paths for patients, but this will not happen until the personal biases mentioned previously, and these "greed" biases, are addressed.[96]

Software suites containing a variety of machine learning algorithms include the following:

Originally posted here:

Machine learning - Wikipedia

Comments Off on Machine learning – Wikipedia

Machine Learning Tutorial for Beginners – Guru99

Posted: at 4:53 pm

What is Machine Learning?

Machine Learning is a system that can learn from examples through self-improvement and without being explicitly coded by a programmer. The breakthrough comes with the idea that a machine can learn on its own from data (i.e., examples) to produce accurate results.

Machine learning combines data with statistical tools to predict an output. This output is then used by corporations to derive actionable insights. Machine learning is closely related to data mining and Bayesian predictive modeling. The machine receives data as input and uses an algorithm to formulate answers.

A typical machine learning task is to provide a recommendation. For those who have a Netflix account, all recommendations of movies or series are based on the user's historical data. Tech companies are using unsupervised learning to improve the user experience with personalized recommendations.

Machine learning is also used for a variety of tasks like fraud detection, predictive maintenance, portfolio optimization, task automation and so on.

In this basic tutorial, you will learn-

Traditional programming differs significantly from machine learning. In traditional programming, a programmer codes all the rules in consultation with an expert in the industry for which the software is being developed. Each rule is based on a logical foundation; the machine will execute an output following the logical statement. When the system grows complex, more rules need to be written. It can quickly become unsustainable to maintain.

Machine learning is supposed to overcome this issue. The machine learns how the input and output data are correlated and it writes a rule. The programmers do not need to write new rules each time there is new data. The algorithms adapt in response to new data and experiences to improve efficacy over time.

Machine learning is the brain where all the learning takes place. The way the machine learns is similar to the human being. Humans learn from experience. The more we know, the more easily we can predict. By analogy, when we face an unknown situation, the likelihood of success is lower than in a known situation. Machines are trained the same way. To make an accurate prediction, the machine sees an example. When we give the machine a similar example, it can figure out the outcome. However, like a human, if it is fed a previously unseen example, the machine has difficulty predicting the outcome.

The core objective of machine learning is learning and inference. First of all, the machine learns through the discovery of patterns. This discovery is made thanks to the data. One crucial part of the data scientist's job is to choose carefully which data to provide to the machine. The list of attributes used to solve a problem is called a feature vector. You can think of a feature vector as a subset of data that is used to tackle a problem.

The machine uses some fancy algorithms to simplify the reality and transform this discovery into a model. Therefore, the learning stage is used to describe the data and summarize it into a model.

For instance, the machine is trying to understand the relationship between the wage of an individual and the likelihood of going to a fancy restaurant. It turns out the machine finds a positive relationship between wage and going to a high-end restaurant: this is the model.

When the model is built, it is possible to test how powerful it is on never-seen-before data. The new data are transformed into a feature vector, passed through the model, and a prediction is produced. This is the beautiful part of machine learning: there is no need to update the rules or retrain the model. You can use the previously trained model to make inferences on new data.

The life of Machine Learning programs is straightforward and can be summarized in the following points:

Once the algorithm gets good at drawing the right conclusions, it applies that knowledge to new sets of data.

Machine learning can be grouped into two broad learning tasks: supervised and unsupervised. There are many other algorithms beyond these two groups.

An algorithm uses training data and feedback from humans to learn the relationship of given inputs to a given output. For instance, a practitioner can use marketing expenses and weather forecasts as input data to predict the sales of cans.

You can use supervised learning when the output data is known. The algorithm will predict new data.

There are two categories of supervised learning:

Imagine you want to predict the gender of a customer for a commercial. You will start gathering data on the height, weight, job, salary, purchasing basket, etc. from your customer database. You know the gender of each of your customers; it can only be male or female. The objective of the classifier is to assign a probability of being a male or a female (i.e., the label) based on the information (i.e., the features you have collected). When the model has learned how to recognize male or female, you can use new data to make a prediction. For instance, you just got new information from an unknown customer, and you want to know whether it is a male or female. If the classifier predicts male = 70%, it means the algorithm is 70% sure that this customer is male and 30% sure it is female.
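A tiny sketch of such a classifier in Python, assuming scikit-learn, made-up height/weight features and logistic regression; predict_proba returns the male/female probabilities described above:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Assumed toy data: [height_cm, weight_kg] with label 1 = male, 0 = female
    X = np.array([[180, 80], [175, 77], [182, 85], [178, 75],
                  [160, 55], [158, 52], [166, 60], [170, 62]])
    y = np.array([1, 1, 1, 1, 0, 0, 0, 0])

    clf = LogisticRegression(max_iter=1000).fit(X, y)
    new_customer = np.array([[172, 68]])
    proba = clf.predict_proba(new_customer)[0]   # probabilities for classes 0 and 1
    print(f"P(female) = {proba[0]:.2f}, P(male) = {proba[1]:.2f}")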

The label can have two or more classes. The above example has only two classes, but if a classifier needs to predict objects, it has dozens of classes (e.g., glass, table, shoes, etc.; each object represents a class).

When the output is a continuous value, the task is a regression. For instance, a financial analyst may need to forecast the value of a stock based on a range of features like equity, previous stock performance, and macroeconomic indices. The system will be trained to estimate the price of the stocks with the lowest possible error.

In unsupervised learning, an algorithm explores input data without being given an explicit output variable (e.g., explores customer demographic data to identify patterns)

You can use it when you do not know how to classify the data, and you want the algorithm to find patterns and classify the data for you

Algorithm | Description | Type
K-means clustering | Puts data into some number of groups (k), each containing data with similar characteristics (as determined by the model, not in advance by humans) | Clustering
Gaussian mixture model | A generalization of k-means clustering that provides more flexibility in the size and shape of groups (clusters) | Clustering
Hierarchical clustering | Splits clusters along a hierarchical tree to form a classification system; can be used to cluster loyalty-card customers | Clustering
Recommender system | Helps to define the relevant data for making a recommendation | Clustering
PCA/t-SNE | Mostly used to decrease the dimensionality of the data; the algorithms reduce the number of features to the 3 or 4 vectors with the highest variances | Dimension Reduction

There are plenty of machine learning algorithms. The choice of the algorithm is based on the objective.

In the example below, the task is to predict the type of flower among the three varieties. The predictions are based on the length and the width of the petal. The picture depicts the results of ten different algorithms. The picture on the top left is the dataset. The data is classified into three categories: red, light blue and dark blue. There are some groupings. For instance, from the second image, everything in the upper left belongs to the red category; in the middle part, there is a mixture of uncertainty and light blue, while the bottom corresponds to the dark blue category. The other images show different algorithms and how they try to classify the data.

The primary challenge of machine learning is the lack of data or the lack of diversity in the dataset. A machine cannot learn if there is no data available. Besides, a dataset with a lack of diversity gives the machine a hard time. A machine needs heterogeneity to learn meaningful insights. It is rare that an algorithm can extract information when there are no or few variations. It is recommended to have at least 20 observations per group to help the machine learn. A lack of data leads to poor evaluation and prediction.

Augmentation:

Automation:

Finance Industry

Government organization

Healthcare industry

Marketing

Example of application of Machine Learning in Supply Chain

Machine learning gives terrific results for visual pattern recognition, opening up many potential applications in physical inspection and maintenance across the entire supply chain network.

Unsupervised learning can quickly search for comparable patterns in a diverse dataset. In turn, the machine can perform quality inspection throughout the logistics hub and flag shipments with damage and wear.

For instance, IBM's Watson platform can determine shipping container damage. Watson combines visual and systems-based data to track, report and make recommendations in real-time.

In past years, stock managers relied extensively on the primary method to evaluate and forecast inventory. When big data and machine learning are combined, better forecasting techniques have been implemented (an improvement of 20 to 30% over traditional forecasting tools). In terms of sales, this means an increase of 2 to 3% due to the potential reduction in inventory costs.

Example of Machine Learning: Google Car

For example, everybody knows the Google car. The car is full of lasers on the roof which tell it where it is relative to the surrounding area. It has radar in the front, which informs the car of the speed and motion of all the cars around it. It uses all of that data to figure out not only how to drive the car but also to figure out and predict what potential drivers around the car are going to do. What's impressive is that the car is processing almost a gigabyte of data per second.

Machine learning is the best tool so far to analyze, understand and identify patterns in data. One of the main ideas behind machine learning is that the computer can be trained to automate tasks that would be exhaustive or impossible for a human being. The clear break from traditional analysis is that machine learning can take decisions with minimal human intervention.

Take the following example: a real estate agent can estimate the price of a house based on his own experience and his knowledge of the market.

A machine can be trained to translate the knowledge of an expert into features. The features are all the characteristics of a house, the neighborhood, the economic environment, etc. that make the price difference. For the expert, it probably took some years to master the art of estimating the price of a house. His expertise gets better and better after each sale.

For the machine, it takes millions of data points (i.e., examples) to master this art. At the very beginning of its learning, the machine makes mistakes, somewhat like a junior salesman. Once the machine has seen all the examples, it has gained enough knowledge to make its estimations, and with incredible accuracy. The machine is also able to adjust for its mistakes accordingly.

Most of the big companies have understood the value of machine learning and of holding data. McKinsey has estimated that the value of analytics ranges from $9.5 trillion to $15.4 trillion, while $5 to $7 trillion can be attributed to the most advanced AI techniques.

Continued here:

Machine Learning Tutorial for Beginners - Guru99

Comments Off on Machine Learning Tutorial for Beginners – Guru99

ValleyML is launching a Machine Learning and Deep Learning Boot Camp from July 14th to Sept 10th and AI Expo Series from Sept 21st to Nov 19th 2020….

Posted: at 4:53 pm

SANTA CLARA, Calif., May 14, 2020 /PRNewswire/ -- ValleyML, Valley Machine Learning and Artificial Intelligence, is the most active and important community of ML & AI Companies and Start-ups, Data Practitioners, Executives and Researchers. We have a global outreach to close to 200,000 professionals in AI and Machine Learning. The focus areas of our members are AI Robotics, AI in Enterprise and AI Hardware. We plan to cover the state-of-the-art advancements in AI technology. ValleyML sponsors include UL, MINDBODY Inc., Ambient Scientific Inc., SEMI, Intel, Western Digital, Texas Instruments, Google, Facebook, Cadence and Xilinx.

ValleyML Machine Learning and Deep Learning Boot Camp 2020: Build a solid foundation of Machine Learning / Deep Learning principles and apply the techniques to real-world problems. Get an IEEE PDH Certificate. Virtual live Boot Camp from July 14th to Sept 10th. A full description is available, and you can enroll and learn, at the ValleyML Live Learning Platform (coupons: valleyml40, register by June 1st for 40% off; valleyml25, register by July 1st for 25% off).

Global Call for Presentations & Sponsors for the ValleyML AI Expo 2020 conference series (global and virtual). A unified call for proposals from industry for ValleyML's AI Expo events focused on Hardware, Enterprise and Robotics is now open at ValleyML2020. Submit by June 1st to participate in a virtual and global series of 90-minute talks and discussions from Sept 21st to Nov 19th on Mondays through Thursdays. Sponsor AI Expo! Limited sponsorship opportunities are available. These highly focused events welcome a community of CTOs, CEOs, Chief Data Scientists, product management executives and delegates from some of the world's top technology companies.

Committee for ValleyML AI Expo 2020:

Program Chair for AI Enterprise and AI Robotics series:

Mr. Marc Mar-Yohana, Vice President at UL.

Program Chair for AI Hardware series:

Mr. George Williams, Director of Data Science at GSI Technology.

General Chair:

Dr. Kiran Gunnam, Distinguished Engineer, Machine Learning and Computer Vision, Western Digital.

SOURCE ValleyML


Go here to see the original:

ValleyML is launching a Machine Learning and Deep Learning Boot Camp from July 14th to Sept 10th and AI Expo Series from Sept 21st to Nov 19th 2020....

Comments Off on ValleyML is launching a Machine Learning and Deep Learning Boot Camp from July 14th to Sept 10th and AI Expo Series from Sept 21st to Nov 19th 2020….