Cronkite School to host special event discussing diversity and media in the NFL – Walter Cronkite School of Journalism and Mass Communication

Panelists include Troy Vincent, Herm Edwards, Marvin Lewis and William C. Rhoden

Media Note: Members of the media are invited to attend.

The Walter Cronkite School of Journalism and Mass Communication at Arizona State University will host "On The Clock & In the Media: Race, Hiring and the NFL," a panel discussion focusing on the complex diversity challenges facing the NFL.

The free event will begin at 6 p.m. on Thursday, April 7 at the Cronkite School's First Amendment Forum, located on the second floor of the Cronkite building, 555 N. Central Avenue. It will also be streamed online. Seating is limited and early registration is encouraged. Register here.

The panel will feature ASU football head coach Herm Edwards; ASU football special advisor to the head coach and former Cincinnati Bengals head coach Marvin Lewis; Troy Vincent, NFL executive vice president of football operations; and Cronkite School Visiting Professor and renowned sports journalist William C. Rhoden.

Cronkite School Dean Battinto L. Batts, Jr. will moderate the discussion.

The panelists will discuss what improvements NFL teams should make to their hiring practices to bring more diversity into the league's head coaching ranks and how the NFL can move forward.

The panel will also examine the role the media plays when showcasing various diversity challenges within the NFL. Sports journalism is one of the fastest growing sectors in the media industry and journalists play an integral role in highlighting these issues.

In 2021, more than 70% of NFL players were people of color, according to The Institute for Diversity and Ethics in Sport. However, just five of the 32 NFL teams currently employ head coaches of color.

The panel will feature two former NFL head coaches, as well as an NFL executive who played in the league.

Edwards, a former NFL player, has coached at the professional and collegiate levels over the past 20 years. He was hired as ASU's football coach in 2017 following nine years as an NFL analyst for ESPN.

Lewis joined the ASU football team in 2019 as a special advisor. He also served as an interim defensive backs coach at the end of the 2019 season and co-defensive coordinator in 2020. Lewis has more than 25 years of coaching experience in the NFL.

Vincent has more than 20 years of NFL executive experience after a 15-year playing career. In his current position, Vincent's responsibilities focus on the business of football, including game analytics, accountability, integrity of the game, development and growth, and policies and procedures relating to NFL games.

Rhoden is a visiting professor at the Cronkite School, writer-at-large for Andscape, formerly The Undefeated, former award-winning sports columnist for The New York Times and author of Forty Million Dollar Slaves.

About the Cronkite School

The Walter Cronkite School of Journalism and Mass Communication at Arizona State University is widely recognized as one of the nation's premier professional journalism programs and has received international acclaim for its innovative use of the teaching hospital model. Rooted in the time-honored values that characterize its namesake (accuracy, responsibility, objectivity, integrity), the school fosters journalistic excellence and ethics both in the classroom and in its 13 professional programs that fully immerse students in the practice of journalism and related fields. Arizona PBS, one of the nation's largest public television stations, is part of Cronkite, making it the largest media outlet operated by a journalism school in the world. Learn more at cronkite.asu.edu.


VICTORY: After FIRE letter, University of Northern Iowa clarifies resident assistants may speak with media – Foundation for Individual Rights in…

FIRE commends University of Northern Iowa for quickly affirming to resident assistants that the university will respect their First Amendment rights. (Photo courtesy University of Northern Iowa)

by Sabrina Conza

The University of Northern Iowa has made clear to resident assistants that they may speak with the media as private citizens after FIRE raised concerns about UNI requiring pre-approval of RAs' communications.

In February, The Northern Iowan student newspaper sent RAs an anonymous survey asking about their experiences on campus, but a UNI official quickly told RAs that university media relations officials must approve all RA-themed media responses to the press. On March 17, FIRE wrote UNI explaining that government employees, including RAs at public institutions, have the right to speak to the media in their individual capacities on matters of public concern.

On March 23, UNI responded to FIRE, affirming that the school "strongly values the First Amendment rights of [UNI] students and employees" and pledging not to restrict RAs' right to speak with the media. And on March 29, UNI told its RAs that they may speak with the media (including on-campus newspapers) "in their capacity as a private citizen" without seeking prior approval from UNI Housing & Dining.

UNI told FIRE: "We continue to value opportunities to assure Resident Assistants, as with all UNI students and employees, understand the protections afforded under the First Amendment."

As much as FIRE will readily criticize universities' unconstitutional policies and practices, we much prefer commending universities when they do the right thing.

FIRE commends UNI for quickly affirming to RAs that the university will respect their First Amendment rights.

We have seen this scenario play out many times before, with mixed results. In just the last couple of years, after FIRE's intervention, the University of North Carolina, the University of Missouri, and the University of Virginia changed policies that limited RAs' ability to speak with the media, bringing them into compliance with the First Amendment. Louisiana State University and Frostburg State University, on the other hand, both refused to fully respect RAs' First Amendment rights.

Other institutions with restrictive practices that silence students' and employees' speech or suppress the student press should take note: as much as FIRE will readily criticize universities' unconstitutional policies and practices, we much prefer commending universities when they do the right thing.

If you're a student or faculty member facing censorship or prior review from your university, or a student journalist facing restrictions on communicating with sources, reach out to FIRE. We may be able to help.

FIRE defends the rights of students and faculty members, no matter their views, at public and private universities and colleges in the United States. If you are a student or a faculty member facing investigation or punishment for your speech, submit your case to FIRE today. If you're a college journalist facing censorship or a media law question, call the Student Press Freedom Initiative 24-hour hotline at 717-734-SPFI (7734).


Letters to the editor: Book bans, teaching restrictions in public schools are un-American – Akron Beacon Journal

Speech restrictions are un-American

In Ohio and across the nation, state legislatures and school districts are banning books, limiting what can be taught in public schools and state universities, restricting the types of events that public libraries can host, and even saying that certain words can't be uttered in certain settings.

The people who are doing this are the same ones who yammer about Second Amendment rights while trampling the First Amendment.

This is what Nazis did; it is what Vladimir Putin does; it is not what we do in the United States of America.

Jim Kroeger, Fairlawn

After watching President Joe Biden's March 26 speech in Poland, I am reminded of lyrics from the U2 song "Crumbs From Your Table": "where you live should not decide whether you live or whether you die." To allow thousands of Ukrainians to die because they are not a part of NATO, thus on the wrong side of the street, is so immoral. To say that Biden's speech ranks up there with those given in Europe by John F. Kennedy or Ronald Reagan is a joke; those great men did not cower to tyrants. May God have mercy on those in charge who think sanctions alone are the answer.

Randy Ley, Tallmadge


Despite warnings, Republicans poised to stick Peach State with steep legal tab for political stunt – Disruptive Competition Project

Republican lawmakers are about to advance a bill that will waste hundreds of thousands of taxpayer dollars while making the Internet less safe for Georgians. The Common Carrier Non-Discrimination Act, which passed in the Senate earlier this month, is part of a broader effort by Republican legislators to punish digital services for enforcing their policies with respect to the social media accounts of former President Trump.

This bill would force digital services to carry all users' content neutrally, irrespective of what risks that content creates. By doing so, it would put Georgians at greater risk of everything from foreign disinformation and propaganda spread by Russian agents to extremist content from anti-American jihadists, who, according to the Senate bill, all deserve equal treatment.

This law would bind digital services' hands, preventing them from standing between American Internet users and the torrent of foreign disinformation, Communist propaganda, and extremism propagated by adversaries abroad. Digital services need the flexibility this law would take away to fight those evolving online threats.

Some Georgia lawmakers appear to believe private businesses have to give access to any speaker. But Internet services have made commitments to their users to try to protect them from certain problematic content, and that is itself a speech interest. A digital service saying "we don't want to host Nazi Party candidates" is exercising its own First Amendment rights, and Internet users can choose services whose communities and norms best align with their own preferences.

Georgia lawmakers are well aware that government attempts to dictate speech online violate the Constitution. Legal experts have warned that this bill will inevitably face the same legal challenges as similar proposals in neighboring states that were found unconstitutional, and Georgians will be stuck with the legal tab. Over the past year, other states have introduced legislation to impose new rules on private companies' online content moderation practices, which would limit their ability to remove offensive or harmful content. Next door, Floridians are already paying the price for the Stop Social Media Censorship Act, over which the state recently lost a federal court case.

The battle is not yet over, and public records requests reveal that just one Florida state agency has already wasted nearly $700,000 to defend the unconstitutional new law. Before the case is over, Florida taxpayers are almost certain to have lost seven figures to a frivolous political stunt whose only real impact has been to make work for lawyers and get news attention for its sponsors. The same situation is playing out in Texas, where a federal judge recently blocked a similar "anti-censorship" bill from taking effect. The judge concluded that Texas, just like Florida, was unconstitutionally infringing on digital services' right to exercise editorial discretion in deciding what content was suitable for their communities.

Through a series of hearings on the Common Carrier Non-Discrimination Act, Georgia lawmakers have been repeatedly warned that it will face the same fate as the proposals in Florida and Texas. However, the bill's sponsors don't seem to mind asking their constituents to foot the bill for a political stunt, and they have proven they're willing to sacrifice user safety to punish perceived political enemies.

In a time of economic uncertainty and geopolitical instability, the last thing Georgians need is officials wasting their tax dollars on ill-conceived laws that would flood their screens with foreign disinformation, propaganda, and extremism. Georgia legislators ought to pull the plug on this proposal.


On a day celebrating transgender visibility, Kansans offer the best and worst in response – Kansas Reflector

Happy National Transgender Day of Visibility to all trans Kansans. I'm delighted you're here, and your tenacious courage inspires me.

Also, I'd like to apologize for a handful of other Kansans who have decided to score political points on your lives.

This 2022 celebration comes with two giant asterisks. First, of course, the Legislature has been wrangling over Senate Bill 484. It's anti-trans discrimination gussied up as a way to protect girls' sports, a breathtaking distortion for which both Sen. Renee Erickson and Senate President Ty Masterson should feel lasting shame.

Then the Washburn University College Republicans decided to show that theirs is decidedly not a big tent party by inviting conservative author Michael J. Knowles for a speech called "Banning Transgenderism."

Note that the title isn't "We should protect female athletes." It's not "I disagree with some things transgender people say." It's not "I wonder if this whole gender situation has gotten out of hand."

Nope. It's "Banning Transgenderism." As in, an entire group of people who live and work among all of us. For that matter, people who also serve as Kansas legislators.

Thankfully, Washburn University president Jerry Farley took a stand against callous hatred, writing in an email to campus that while he supported the First Amendment, "I am disappointed when those rights are used to make others feel unwelcome and even unsafe in our community. While we support the right to speak freely, Washburn University does not condone the hate and misinformation spread by the speaker and his supporters."

No doubt Farley will catch flak from predictable, bigoted corners of the commentariat. But he did the right thing.

High-profile support for this celebratory day also came from Kansas Gov. Laura Kelly. She issued a proclamation marking the occasion. President Joe Biden also noted the date with a forceful statement.

"To everyone celebrating Transgender Day of Visibility, I want you to know that your President sees you," Biden said. "On this day and every day, we recognize the resilience, strength, and joy of transgender, nonbinary, and gender nonconforming people."

Farley, Kelly and Biden have the right idea. Transgender people didn't just pop into existence over the last three or four years. They have always existed, with historical documentation dating to 5000 B.C.

The fact that trans folk now live visibly and authentically throughout our state and country should be a source of pride for all of us. Even college Republicans.


FAQ: The SEC’s Proposed Rule on the Enhancement and Standardization of Climate Related-Disclosures – JD Supra

[co-author: Jorden Johnson]

On March 21, 2022, the U.S. Securities and Exchange Commission (SEC) released its much-anticipated proposed rule titled "The Enhancement and Standardization of Climate-Related Disclosures for Investors." The proposed enhanced disclosure requirements draw from groups dedicated to developing effective climate-related disclosures, including the Task Force on Climate-Related Financial Disclosures and the Greenhouse Gas Protocol. SEC Chair Gary Gensler believes the enhanced disclosure requirements will provide consistent, comparable, and reliable climate-risk information to investors. Environmentally-focused investors appear to agree that the rule, if finalized, will provide much needed guidance, but not everyone is convinced.

Ready or not, the SEC's proposed rule may well be finalized this year and, if so, would begin applying to certain filings as soon as FY 2023. In this alert, we answer some commonly asked questions regarding disclosure requirements the proposed rule would add, the SEC's authority to require climate disclosures, and the potential impact of the disclosure requirements.

Charged with protecting investors and maintaining investor confidence, the SEC already maintains a regulatory framework that requires public companies, broker-dealers, and certain company insiders to disclose "material" information, or information that a "reasonable shareholder" would likely consider important.1

In 2010, the SEC issued guidance on pertinent non-financial disclosure rules that required some disclosures related to climate change, including the disclosure of material effects of compliance with federal, state, and local provisions regulating the discharge of materials into the environment and environmental litigation. The SEC noted then that, depending on the facts and circumstances of a particular registrant, certain items may require disclosures regarding the impact of climate change.

The newly proposed rule clarifies that a registrant would be required to disclose the following:

Further, if the registrant has publicly set climate-related targets or goals, the registrant must disclose information about:

When responding to any of the proposed rules' provisions concerning governance, strategy, and risk management, a registrant may also disclose information concerning any identified climate-related opportunities. A registrant that qualifies as a "large accelerated filer" or "accelerated filer" will also be required to obtain a third-party attestation report on its Scope 1 and 2 emissions disclosures.

Major legislation that provides the framework for the SEC's oversight of the securities markets includes the Securities Act of 1933, the Securities Exchange Act of 1934, the Investment Company Act of 1940, the Sarbanes-Oxley Act, the Dodd-Frank Wall Street Reform and Consumer Protection Act, and the Jumpstart Our Business Startups Act. SEC Chair Gensler maintains that the proposed rule lies within the scope of the SEC's authority to regulate information material to investors, while critics of the proposed rule, including SEC Commissioner Hester Peirce, argue that the rule exceeds the authority of the SEC. Two of the most likely legal challenges to the proposed rule pertain to (a) the materiality standard and (b) the First Amendment.

Regarding materiality, in TSC Industries v. Northway,2 the Supreme Court explained that, under the Securities Exchange Act of 1934, information is material to investors, and therefore subject to mandatory disclosure, only if there is a "substantial likelihood that the disclosure of the omitted fact would have been viewed by the reasonable investor as having significantly altered the total mix of information available." Some law professors and scholars have noted that, while climate-related disclosures may be material to some investors, the disclosures may be completely irrelevant to others. This may lead companies to challenge the proposed rule as requiring immaterial disclosures.

As for the First Amendment, the Supreme Court has closely scrutinized disclosure requirements in several cases and has explained that there must be a substantial relationship between the government interest and the information required to be disclosed such that the strength of the government interest reflects the seriousness of the burden on First Amendment rights.3 And so, certain law professors, among other critics of the proposed rule, have suggested that the government interest does not reflect the burden on First Amendment rights since the proposal is not limited to materials that interest all investors. It is therefore likely the SEC will face challenges on the basis of the registrants' First Amendment rights.

If the rule is adopted as proposed, public companies will have to present much more detailed disclosures regarding climate-related matters in their SEC filings, including in their financial statements. Many larger public companies have already been disclosing these matters, albeit perhaps not at the level of detail contemplated in the SEC's proposed rule. These larger companies should further refine their reporting processes and controls so that they are in a position to effectively compile and present the climate-related information in a manner subject to attestation by third parties. Smaller companies should also begin assessing their reporting processes as they relate to climate-related matters so that they will be prepared to comply with the proposed new disclosure requirements. Form 10-K and proxy season is already a busy time for companies, and it looks like aggregating detailed, climate-related information could now be a substantial part of that busy season.

As far as timing is concerned, the compliance dates phase in by filer category:

Large Accelerated Filers will have until FY 2023 (for their Form 10-K filed in 2024) to comply with all proposed disclosures, including Scope 1 and Scope 2 GHG emissions metrics, and until FY 2024 (for their Form 10-K filed in 2025) to comply with Scope 3 metrics.

Accelerated Filers and Non-Accelerated Filers will have until FY 2024 (for their Form 10-K filed in 2025) to comply with all proposed disclosures, including Scope 1 and Scope 2 GHG emissions metrics, and until FY 2025 (for their Form 10-K filed in 2026) to comply with Scope 3 metrics.

Smaller Reporting Companies will have until FY 2025 (for their Form 10-K filed in 2026) to comply with all proposed disclosures, including Scope 1 and Scope 2 GHG emissions metrics, and are exempt from complying with Scope 3 requirements.

There is also a transition period for the attestation requirements: Large Accelerated Filers and Accelerated Filers will have to provide third-party attestation at a limited assurance level for fiscal years 2 and 3 after the Scope 1 and Scope 2 GHG emissions compliance date, and at a reasonable assurance level for fiscal year 4 and beyond.

1 TSC Indus., Inc. v. Northway, Inc., 426 U.S. 438, 449 (1976).

2 426 U.S. at 448.

3 See, e.g., Nat'l Ass'n of Mfrs. v. Taylor, 582 F.3d 1, 9 (D.C. Cir. 2009).


Comedy Clubs Bump Up Precautionary Measures in Wake of Oscars Slap – Complex

The slap seen round the world at this year's Academy Awards has moved comedy clubs across the United States to try to prevent a potential new norm.

According to TMZ, the Stand Up NY club in New York City posted a sign on its storefront window establishing that heckling and physical abuse of comics is prohibited, and patrons will be immediately removed from the showroom. The Academy of Motion Picture Arts and Sciences claimed Wednesday that Will Smith was asked to leave the Dolby Theatre after he walked on stage and smacked Chris Rock in response to a joke about the actor's wife, Jada Pinkett Smith, but refused.

Stand Up NY is making it clear that patrons will not be treated with the same type of leniency. "Comedians play a critical role in our society, especially during times of chaos and uncertainty. They make us laugh, bring perspective and remind us there are different ways of seeing our reality," the statement reads. "Comics must be protected."

Laugh Factory owner and CEO Jamie Masada told The Hollywood Reporter that there has been a noticeable mood shift since comedy clubs reopened following closures due to the pandemic. Masada has since added security in several locations, but the Oscars incident has reignited a conversation about going even further to ensure the safety of comedians.

Masada has discussed various approaches, which range from installing metal detectors to having someone positioned near or by the stage. "I'm going to talk to my staff, just for this weekend, and say, 'We definitely need you by the stage now. That is your post.' Just in case someone is just trying to re-create a moment or feels emboldened by what Will Smith did. And it's unfortunate," he said.

The Laugh Factory has publicly backed Rock in the days that followed the Oscars with a sign on its marquee declaring its support of the First Amendment, adding, "The comedy community loves & supports you Chris."


The Guardian view on bridging human and machine learning: its all in the game – The Guardian

Last week an artificial intelligence called NooK beat eight world champion players at bridge. That algorithms can outwit humans might not seem newsworthy. IBM's Deep Blue beat world chess champion Garry Kasparov in 1997. In 2016, Google's AlphaGo defeated a Go grandmaster. A year later the AI Libratus saw off four poker stars. Yet the real-world applications of such technologies have been limited. Stephen Muggleton, a computer scientist, suggests this is because they are black boxes that can learn better than people but cannot express, and communicate, that learning.

NooK, from French startup NukkAI, is different. It won by formulating rules, not just brute-force calculation. Bridge is not the same as chess or Go, which are two-player games based on an entirely known set of facts. Bridge is a game for four players split into two teams, involving collaboration and competition with incomplete information. Each player sees only their cards and needs to gather information about the other players hands. Unlike poker, which also involves hidden information and bluffing, in bridge a player must disclose to their opponents the information they are passing to their partner.

This feature of bridge meant NooK could explain how its playing decisions were made, and why it represents a leap forward for AI. When confronted with a new game, humans tend to learn the rules and then learn to improve by, for example, reading books. By contrast, black box AIs train themselves by deep learning: playing a game billions of times until the algorithm has worked out how to win. It is a mystery how this software comes to its conclusions or how it will fail.

NooK nods to the work of British AI pioneer Donald Michie, who reasoned that AI's highest state would be to develop new insights and teach these to humans, whose performance would consequently be increased to a level beyond that of a human studying by themselves. Michie considered weak machine learning to be just improving AI performance by increasing the amount of data ingested.

His insight has been vindicated as deep learning's limits have been exposed. Self-driving cars remain a distant dream. Radiologists were not replaced by AI last year, as had been predicted. Humans, unlike computers, often make short work of complicated, high-stakes tasks. Thankfully, human society is not under constant diagnostic surveillance. But this often means not enough data for AI is available, and frequently it contains hidden, socially unacceptable biases. The environmental impact is also a growing concern, with computing projected to account for 20% of global electricity demand by 2030.

Technologies build trust if they are understandable. There's always a danger that black box AI solves a problem in the wrong way. And the more powerful a deep-learning system becomes, the more opaque it can become. The House of Lords justice committee this week said such technologies have serious implications for human rights and warned against convictions and imprisonment on the basis of AI that could not be understood or challenged. NooK will be a world-changing technology if it lives up to the promise of solving complex problems and explaining how it does so.


What Is Machine Learning, and How Can It Help With Content Marketing? – Entrepreneur

Opinions expressed by Entrepreneur contributors are their own.

The term "machine learning" was first introduced by Arthur Samuel in 1959. Machine learning is a type of artificial intelligence that gives computers the ability to learn without being explicitly programmed. It provides a set of algorithms and techniques for creating computer programs that can automatically improve their performance on specific tasks.


Machine learning is playing a significant role in content marketing because it helps marketers understand what consumers want to read and what they don't. It also helps marketers create content that will be more likely to generate conversions and increase their return on investment.

The future of machine learning in content marketing is limitless, as we can expect AI to take over more and more responsibilities from marketers.


Machine learning is a type of artificial intelligence that can learn from data and make predictions. Machine learning algorithms are used in many industries, such as finance, healthcare and so on. Content marketing is one of the most popular fields where machine learning can be applied.

There are many ways that content marketers use machine learning to create better content and optimize their marketing campaigns. One way they do this is by using sentiment analysis to understand what kind of moods people might be in while reading their content. This helps them write more engaging copy for their audience.
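As a rough illustration of that idea, here is a minimal sketch of scoring a draft's tone with NLTK's VADER sentiment analyzer. The library choice, thresholds and function names are assumptions made for illustration, not tools named in the article.

```python
# Minimal sketch: classify the overall mood of a draft before publishing.
# Assumes NLTK is installed; VADER thresholds below are conventional defaults.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
analyzer = SentimentIntensityAnalyzer()

def content_mood(text: str) -> str:
    """Return a coarse tone label so copy can be adjusted to the audience."""
    score = analyzer.polarity_scores(text)["compound"]  # -1 (negative) .. +1 (positive)
    if score >= 0.05:
        return "positive"
    if score <= -0.05:
        return "negative"
    return "neutral"

print(content_mood("This quick guide makes machine learning feel approachable."))
```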

Another way for marketers to use machine learning is by using predictive analytics to predict what people will want to read based on the time of day or day of the week. This helps them make sure they have relevant content available at all times for their audience.

Predictive analytics is a process of extracting information from data sources to forecast the future. It is an approach that allows companies to use past data and trends to predict future outcomes.

Predictive analytics can be used for both customer engagement and content generation. For example, it can be used for customer service by predicting customer behavior and needs. This way, businesses are able to prepare for the needs of their customers before they even contact them. Predictive analytics can also help with content generation by predicting what content will resonate with customers and what topics people are interested in.

Predictive analytics is an important part of any company's marketing strategy. It helps companies know their customers better and provides a more personalized experience for them.


Machine learning is a subset of artificial intelligence that helps with predictive analytics. It supports your business decisions by providing insights into what will happen in the future. Machine learning has been used for years to help make predictions about the stock market. It is now being used to help make predictions about content as well.

Machine learning can be used to predict what kind of content will be popular, what topics people are interested in and even how long content should be before it gets boring. This type of AI saves both time and money by optimizing your content strategy for you!

Machine learning is the way of the future. It will help you create content that is relevant to your audience and that will resonate with them. You should start using it now to supercharge your content creation efforts.


Deus in the Machina: Machine-Learning Corrections for Improved Position Accuracy – Inside GNSS

A novel method for improving the positioning accuracy of GNSS receivers exploits a machine learning (ML) algorithm. The ML model uses the post-fit residuals, which are readily available after the position computation from the position, velocity and timing (PVT) engine, adoptable by existing receivers without requiring any modification. The performance of this method, demonstrated using data collected with mass-market receivers as well as a Google public dataset collected with Android smartphones, shows the practicality of the concept.

GIANLUCA CAPARRA, PAOLO ZOCCARATO, FLOOR MELMAN

EUROPEAN SPACE AGENCY

GNSS receivers are prone to multipath errors. Increased receiver complexity, cost and power consumption constitute the main drawbacks of mitigation approaches that rely on transmitted signal designs allowing better multipath rejection and on dedicated signal-processing techniques at the baseband level. A mass-market receiver generally includes limited multipath rejection at the antenna and baseband processing stages and applies some sort of filtering in the positioning engine. For instance, it is rather common to adopt a Kalman filter or pseudorange smoothing with phase measurements, as they produce a smoother and more accurate trajectory.

The errors related to atmospheric effects are instead usually compensated using atmospheric models or differencing with measurements from close-by reference receivers.

Recently, the use of machine learning with 3D mapping has been proposed to increase the accuracy of GNSS receivers. The performances achieved by this method are promising. The main disadvantage lies in the fact that it requires high-quality 3D maps to work. Moreover, these maps must be updated continuously to avoid introducing undesired biases.

Here, the multipath problem is attacked with an approach that needs only information directly available in the GNSS receiver and hence requires no information about the surroundings. The approach aims to create an ML model to be used after the positioning engine. This model estimates the positioning error by exploiting the post-fit residuals and applies a correction to the PVT to compensate for this error. The receiver still needs to receive the corrections from the trained model, but this model can be built from GNSS results only. The ML regression acts as a sort of adaptive filter, able to cope automatically with different environmental conditions, properly adjusting the estimated position with a 3D error compensation. The ML feedback aims to map the directional pseudorange residuals to a 3D correction of the computed PVT.

The advantage of this approach is that it can be integrated in the current generation of receivers at software level, without requiring any hardware revision. It can even be deployed as a third-party service for the receivers that provide some basic additional information on top of the PVT.

The GNSS receiver estimates its position by deriving the distance from at least four satellites with known ephemerides. The signals contain information on the satellite positions and the transmission time, which is referenced to the system time. The receiver records the reception time, according to its local reference clock, and then estimates the distance by computing the propagation time. This distance estimate is usually referred to as a pseudorange, as it includes the geometric range plus several error sources, e.g. the synchronization error between the local reference clock and the system time, atmospheric effects and multipath. The pseudorange is the measurement most commonly used in the position estimation process.

The receiver position can be estimated by solving the navigation equations using either a weighted least squares (WLS) solution or a Kalman filter. Due to the noise embedded in the input measurements, the best-fit state vector, which includes the position parameters, will show some differences with respect to the measured pseudoranges. The differences between the observed and modeled (from the estimated solution) measurements are usually referred to as residuals. These can be either the innovation residuals, if a Kalman filter is used, or the pseudorange residuals, if LS/WLS is used.
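To make the residual definition concrete, the following is a minimal single-epoch sketch of a WLS fix and its post-fit residuals. It is an illustration under assumed inputs (ECEF satellite positions, pseudoranges and measurement weights), not the authors' implementation.

```python
# Minimal sketch of a single-epoch WLS position fix and its post-fit residuals.
import numpy as np

def wls_fix(sat_pos, pseudoranges, weights, x0=None, iters=10):
    """Estimate receiver position + clock bias and return post-fit residuals.

    sat_pos      : (N, 3) satellite ECEF positions [m]
    pseudoranges : (N,)   measured pseudoranges [m]
    weights      : (N,)   measurement weights (e.g. elevation- or C/N0-based)
    """
    x = np.zeros(4) if x0 is None else np.asarray(x0, dtype=float)  # [x, y, z, clock bias]
    W = np.diag(weights)
    for _ in range(iters):
        rho0 = np.linalg.norm(sat_pos - x[:3], axis=1)          # modeled geometric range
        resid = pseudoranges - (rho0 + x[3])                     # prefit residuals
        G = np.hstack([-(sat_pos - x[:3]) / rho0[:, None],       # geometry matrix
                       np.ones((len(rho0), 1))])
        dx = np.linalg.solve(G.T @ W @ G, G.T @ W @ resid)       # WLS update
        x += dx
        if np.linalg.norm(dx[:3]) < 1e-4:
            break
    postfit = pseudoranges - (np.linalg.norm(sat_pos - x[:3], axis=1) + x[3])
    return x, postfit
```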

Our concept uses the residuals to train an ML model capable of predicting the 3D positioning errors, i.e. the differences between the estimated positions and the reference trajectory. The residuals are already present in the GNSS receivers; therefore, the method does not require additional sensors for gathering external information. Figure 1 and Figure 2 present the detailed system architecture for the training phase and the usage phase.

The residuals, together with azimuth and elevation, can be projected into the navigation reference frame. The residuals are projected for all the signals used in the PVT computation.
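A minimal sketch of that projection step follows. The line-of-sight convention (azimuth measured from North, elevation above the horizon) and the feature layout are assumptions made for illustration; the article does not specify them.

```python
# Minimal sketch: project per-satellite post-fit residuals into East-North-Up (ENU)
# along each satellite's line-of-sight unit vector.
import numpy as np

def project_residuals_enu(residuals, azimuth, elevation):
    """residuals, azimuth, elevation: (N,) arrays; angles in radians.

    Returns an (N, 3) array of residuals projected onto each satellite's
    line-of-sight direction expressed in ENU.
    """
    az, el = np.asarray(azimuth), np.asarray(elevation)
    los_enu = np.column_stack([np.sin(az) * np.cos(el),   # East component
                               np.cos(az) * np.cos(el),   # North component
                               np.sin(el)])               # Up component
    return np.asarray(residuals)[:, None] * los_enu
```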

These projected pseudorange residuals are the main input to the ML algorithm, which can be complemented by additional information, such as the carrier to noise ratio (C/N0) or other quality/reliability indicators. The ML model uses this information to estimate a positioning error. Depending on the application, the positioning error can be 3D or limited to the horizontal plane. It is possible to use any convenient reference frame. The approach described here is for a 3D positioning error, for instance in an East North Up (ENU) reference frame.

The training phase consists of finding a model that relates the residuals to the differences between the estimated positions and the reference trajectory, i.e. the position errors. The positioning error estimated by the ML model can then be subtracted from the position provided by the GNSS receiver, increasing the positioning accuracy.

When available, additional information can be included in the machine-learning model; this is left to future work. For instance, adding a label derived from the position indicating whether the current area is rural or urban might help the ML algorithm achieve better performance. If the position itself (or a quantized version of it) were used in the ML model, the model could act as a sort of ray tracing, because it would learn, statistically, how rays reflect in a certain environment as a function of azimuth and elevation. However, this would require an enormous amount of data, so this aspect was not considered for the time being. In that scenario, the algorithm would also no longer be independent of external information about the environment in which the receiver operates.

The concept has been tested with data collected from mass-market receivers. Three data analyses were performed:

Single-frequency (SF) multi-constellation PVT solution from a mass-market receiver, used as a black box;

Dual-frequency (DF) multi-constellation PVT solution from a mass-market receiver, used as a black box;

Smartphone measurements (Google Smartphone Decimeter Challenge) using a single frequency PVT engine.

In the first two experiments, data was collected with mass-market receivers during test campaigns and field trials carried out in 2020-2021 in the Netherlands, targeting two main environments: a rural/open sky scenario and a deep-urban scenario in Rotterdam (with parts of the data acquisition on highways). The datasets consist of several runs, for a total of about 117k, 175k and 95k epochs, and are represented in Figure 4. A reference trajectory for ground truth, obtained with a high-end GNSS/IMU, is also available for each trajectory.

In the experiments, each epoch was considered independently; therefore, temporal correlation among epochs has not been exploited. Such correlation could be exploited by using a recurrent neural network (RNN), such as a long short-term memory (LSTM) network, but this is left to future work.

The ML algorithm has been implemented as a neural network using TensorFlow. The neural network architecture is reported in Figure 3, showing an example with four layers, where the last one is fully connected, with three neurons (one for each direction of the coordinate system) and without activation, which provides the error estimation per component. Using a single ML model for all three components simultaneously (multi-target regression) allows capturing the correlations among the components. The algorithm can be tuned to work in different configurations. For instance, in the experiment with smartphone data, only the horizontal errors have been considered, leading to a final layer with only 2 neurons.

For the proof of concept, a neural network with six layers of respectively [2048, 2048, 512, 256, 32, 3] units was used. Each layer is followed by a rectified linear unit (ReLU) activation function. Limited optimizations of the neural network and of the training phase have been performed.
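A hedged Keras sketch of a network with that layer layout is shown below. The input dimension, loss and optimizer are not specified in the article and are assumptions made here for illustration.

```python
# Minimal sketch of the proof-of-concept network: dense layers of
# [2048, 2048, 512, 256, 32] units with ReLU, plus a 3-unit linear output
# (one neuron per ENU error component). Loss/optimizer are assumptions.
import tensorflow as tf

def build_correction_model(n_features):
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(n_features,)),
        tf.keras.layers.Dense(2048, activation="relu"),
        tf.keras.layers.Dense(2048, activation="relu"),
        tf.keras.layers.Dense(512, activation="relu"),
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(3),  # predicted E/N/U positioning error, no activation
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

# Training targets are the per-epoch position errors (estimated minus reference
# position); at run time the predicted error is subtracted from the receiver's
# PVT solution:  corrected_position = estimated_position - model.predict(features)
```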

Within each dataset, 70% was used for training, 15% for validation, and 15% for testing. Figure 5, Figure 6 and Figure 8 show the results of the tests (taken from a random 15% of the dataset never presented to the ML model during training and validation), comparing the cumulative distribution function (CDF) of the positioning error with regard to the reference trajectory before and after the ML algorithm is applied to the PVT solution. Notably, the ML algorithm effectively improves the positioning accuracy.
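For readers who want to reproduce that kind of comparison on their own data, a small sketch of an empirical CDF plot of horizontal errors before and after correction follows; it is illustrative only and not the authors' plotting code.

```python
# Minimal sketch: empirical CDF of horizontal error, before vs. after correction.
import numpy as np
import matplotlib.pyplot as plt

def plot_error_cdf(errors_before, errors_after):
    for errors, label in [(errors_before, "before ML correction"),
                          (errors_after, "after ML correction")]:
        e = np.sort(np.asarray(errors))
        cdf = np.arange(1, len(e) + 1) / len(e)   # empirical CDF
        plt.plot(e, cdf, label=label)
    plt.xlabel("Horizontal error [m]")
    plt.ylabel("CDF")
    plt.legend()
    plt.show()
```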

Figure 7 provides a visual representation of the results, taken from the Rotterdam city center area. The data are taken from the DF test. The positions from the reference trajectory are represented in blue, the positions computed by the mass-market receiver are depicted in red, and those after the ML corrections in green. The corrected positions generally lie closer to the reference trajectory than the positions computed by the mass-market receiver.

Table 1 and Table 2 report the summary of the results for horizontal and 3D errors, respectively. The positioning errors are reported for different percentiles, together with the improvement in accuracy in both absolute and relative terms. It is interesting to note that at the 95th percentile, the improvement ranges between 7.7% and 38% for the horizontal errors, and between 23.5% and 40.8% for the 3D errors.

To assess the impact of the complexity of the neural network on the performance of the corrections, a smaller variant of the neural network composed of four layers of respectively [512, 256, 32, 3] units was tested. One iteration of the smaller model (NN1) requires around 50 s on the laptop CPU (Intel i7-8650U), while the storage of the trained model parameters requires 2 MB. This allows it to be easily deployed to receivers. The bigger model (NN2) requires around 600 s per iteration, and around 60 MB of storage. The results are reported in Figure 9. Note that NN2 achieves better results than NN1.

We have introduced the concept of machine learning corrections for improving the positioning accuracy of GNSS receivers. The advantage of this method is that it does not require changes in the architecture of the GNSS receivers and can be deployed as a software service.

The concept has been demonstrated in three experiments using real-world data collected with mass-market receivers and smartphones, showing that it is possible to achieve a significant accuracy improvement, even with a neural network of modest size.

Future work will expand the dataset size, increasing also the variety of environments, to better assess the generalization of the ML model, and explore different ML architectures, e.g. investigating the benefit of LSTM for capturing temporal correlation among the epochs. Another interesting direction of research will be to explore the potential benefits for high accuracy positioning techniques, such as PPP (-AR) or RTK.

This article is based on material presented in a technical paper at ION GNSS+ 2021, available at ion.org/publications/order-publications.cfm.

(1) G. Fu, M. Khider, F. van Diggelen, "Android Raw GNSS Measurement Datasets for Precise Positioning," Proceedings of ION GNSS+ 2020, September 2020, pp. 1925-1937.

(2) Martín Abadi, et al., "TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems," 2015. Software available from tensorflow.org.

(3) F. van Diggelen, "End Game for Urban GNSS: Google's Use of 3D Building Models," Inside GNSS, March 2021.

(4) G. Caparra, "Correcting Output of Global Satellite Navigation Receiver," PCT/EP2021/052383.

Gianluca Caparra received a Ph.D. in information engineering from the Università degli Studi di Padova, Italy. He is currently a radio-navigation engineer with the European Space Agency. His research interests include positioning, navigation, and timing assurance, cybersecurity, signal processing, and machine learning, mainly in the context of global navigation satellite systems.

Paolo Zoccarato holds a Ph.D. in science, technology and measurements for space, on precise orbit determination, from the University of Padova, Italy. He worked at Curtin University as a postdoc on PPP-RTK and at Trimble TerraSat GmbH on VRS and RTx. He is a radio-navigation engineer consultant at ESA/ESTEC, contributing mainly to real-time, reliable, high-accuracy positioning for different GNSS receiver types, sensors, environments and systems.

Floor Melman received a master's degree in aerospace engineering from the Delft University of Technology (TU Delft), the Netherlands. He now works as a radio-navigation engineer at ESA/ESTEC. His main areas of work include PNT algorithms (in harsh environments) and GNSS signal processing.
