Daily Archives: July 23, 2021

One in three face no action in Scotland after refusing to pay criminal fines – The Scotsman

Posted: July 23, 2021 at 4:15 am

According to figures obtained by the Scottish Conservatives via a Freedom of Information request, around 39 per cent of people who refused to pay fines in 2018/19 faced no further action from the justice system.

This number rose to 40 per cent in 2019/20, with the majority of cases in 2020/21 ongoing due to Covid-19 and court backlogs.

The Scottish Conservatives said these figures showed the reality of the SNP's soft-touch justice system, adding it was at odds with a statement from deputy first minister John Swinney in Holyrood earlier this year.

The Covid recovery secretary said during a justice debate that safeguards are built into the operation of fiscal fines, which are not mandatory penalties.

Mr Swinney said: "Anyone who is offered a fiscal fine as an alternative to prosecution may refuse such an offer by giving notice to the court to that effect.

"In such an event, the refusal is treated as a request by the alleged offender to be prosecuted for the offence, in which case the procurator fiscal decides what action to take in the public interest.

Reacting to the figures, Scottish Conservative community justice spokesperson Russell Findlay said the number not penalised for refusing to pay fines exposed the sham of Mr Swinney's comments.

He said: "These shocking new figures show the reality of the SNP's soft-touch justice system, which routinely betrays crime victims.

"This exposes the sham of John Swinney's claim, made to the Scottish Parliament, that rejection of these fines is likely to result in prosecution.

"The message this sends is clear: alleged offenders know they can break the law with impunity, as they won't pay the price under this SNP Government."

A Scottish Government spokesperson said decisions around further action were for the Crown Office to take.

The spokesperson said: "Use of non-court disposals for less serious offending is a long-standing and recognised part of the Scottish justice system, which the Scottish Parliament has legislated to provide powers for the Crown Office and Procurator Fiscal Service (COPFS) to use.

"Decisions in individual cases as to whether to offer a non-court disposal and the action taken if such an offer is not taken up is entirely a matter for the independent Crown Office and Procurator Fiscal Service.

Figures released by the COPFS for the past three financial years show the vast majority of fiscal fines issued are paid and cases resolved. A total of 18,705 fines were issued in 2019/20, with 180 charges from that pool marked no further action.

A COPFS spokesperson said: "Procurators Fiscal deal with every case on its own individual facts and circumstances.

"Effective and appropriate prosecutorial action is not limited to court proceedings, and an offer of an alternative to prosecution is an effective response to certain types of minor crimes.

"The Procurator Fiscal will decide what is the most appropriate action to take, whether criminal proceedings or an offer of an alternative to prosecution.

"Where an alternative to prosecution is not accepted, the Procurator Fiscal will decide whether further prosecutorial action is appropriate in the individual case circumstances."


Beacon Hill Roll Call: July 12 to July 16, 2021 – The Recorder – The Recorder

Posted: at 4:15 am

Beacon Hill Roll Call records the votes of local representatives and senators from the week of July 12 to July 16.

The House, 150 to 0, and the Senate, 40 to 0, approved and Gov. Charlie Baker signed into law a bill that authorizes $200 million in one-time funding for the maintenance and repair of roads and bridges in cities and towns across the state. The $350 million package, a bond bill under which the funding would be borrowed by the state through the sale of bonds, also includes $150 million to pay for bus lanes, improvement of public transit, electric vehicles and other state transportation projects.

"Public transportation is a public good," said Senate Transportation Committee Chair Sen. Joe Boncore, D-Winthrop. "The $350 million investment is among the largest Chapter 90 bond bills to date and represents the Legislature's commitment to safe roads, reliable bridges and modernized transit infrastructure."

"The longstanding state-municipal partnership established under the Chapter 90 program is critical to helping cities and towns meet their transportation infrastructure needs," said GOP House Minority Leader Brad Jones, R-North Reading. "Today's agreement continues the House and Senate's ongoing commitment to support this important road and bridge program."

A Yes vote is for the bill.

Rep. Natalie Blais Yes

Rep. Paul Mark Yes

Rep. Susannah Whipps Yes

Sen. Joanne Comerford Yes

Sen. Anne Gobi Yes

Sen. Adam Hinds Yes

Gov. Charlie Baker signed into law a $47.6 billion state budget for fiscal year 2022, which began on July 1. The governor and the Legislature were mostly on the same page since the Legislature approved the budget unanimously by a 160 to 0 vote in the House and a 40 to 0 vote in the Senate. Baker did disagree with the Legislature on some spending and he vetoed close to $8 million from the package approved by the Legislature. He also vetoed a section that further delays implementation of a charitable giving tax deduction approved by voters in 2000. The Legislature will soon act on overriding some of the vetoes, which takes a two-thirds vote of each branch.

"The budget makes historic investments in our communities, schools, economy and workers as Massachusetts emerges from the pandemic," Gov. Baker said in a message to the Legislature. "As we continue in our economic recovery, we are focused on supporting those communities that have been hardest hit by COVID-19, and this budget will complement our $2.9 billion proposal to invest a portion of Massachusetts' federal funds in urgent priorities that support communities of color and lower-wage workers."

Baker continued, "By working with our legislative partners to carefully manage the commonwealth's finances and by reopening our economy, we now expect to make a $1.2 billion deposit in the Stabilization Fund through this budget, bringing the balance to $5.8 billion, an increase of over 400 percent since we took office. We are able to responsibly grow our reserves without raising taxes, while continuing to make historic investments in our schools, job training programs and downtown economies."

The Public Health Committee held a virtual hearing on legislation that would repeal the current law that allows parents to exempt their children on religious grounds from any required school vaccinations unless an emergency or epidemic of disease is declared by the Department of Public Health.

Current state law requires students to be immunized against diphtheria, pertussis, tetanus, measles, poliomyelitis and other communicable diseases designated from time to time by the Department of Public Health. It allows exemptions in cases where a doctor certifies the child's health would be endangered by a vaccine or in cases where the parent or guardian states in writing that vaccination or immunization conflicts with his or her sincere religious beliefs.

Sponsor Rep. Andy Vargas, D-Haverhill, said that several other states, including Connecticut, New York and Maine, have removed non-medical exemptions for childhood vaccines.

"Above all the lessons learned through the pandemic, perhaps the most powerful one is that, whether we like it or not, Americans, Massachusetts residents and human beings have a responsibility for the health and safety of one another," Vargas said. "As lawmakers, we have to reason with the facts, listen to trained experts, trust the science and make tough decisions to stop preventable death and illness. We learned this the hard way during the pandemic."

"This measure would usurp the right of parents to control the health care of their own children and empower the state to intrude into the exclusive concerns of the family," Catholic Action League Executive Director C.J. Doyle told Beacon Hill Roll Call. "It would also coerce the consciences and violate the religious freedom rights of orthodox Catholics and other pro-life citizens, who find the use of fetal tissue or cell lines from aborted children, used in the production or testing of numerous vaccines, to be morally objectionable. In any conflict between a constitutional right and a compelling state interest, the American legal tradition has always held that the government should make a reasonable accommodation for the sincerely held religious beliefs of citizens. Rep. Vargas' bill repudiates that tradition and prohibits that accommodation."

The Higher Education Committee held a virtual hearing on a bill that would prohibit public and private colleges from withholding a student's entire academic transcript if the student owes the school money for any loan payments, fines, fees, tuition or other expenses. The measure would allow schools to withhold from the transcript only any academic credits and grades for any specific course for which that student's tuition and mandatory course fees are not paid in full.

Supporters said currently schools can withhold a student's entire transcript even though it might be just one course for which the student has not paid. They said this means that these students cannot use any credits to transfer to more affordable institutions or to obtain employment.

"Higher education institutions are supposed to be vehicles of opportunity, economic mobility and promises of a better future," said sponsor Rep. David LeBoeuf, D-Worcester. "Continuing to foster adverse practices that disproportionally penalize low-income students goes against these principles and the principles of the commonwealth. It is our responsibility to make sure those who pursue higher education are not saddled with debt or denied advancement opportunities because of limited financial resources. This bill begins to address this issue by eliminating a counterintuitive practice that has no place in Massachusetts."

Another bill heard by the Higher Education Committee would allow college student-athletes to earn compensation from the use of their name, image or likeness without affecting that student's scholarship eligibility. Other provisions allow a student-athlete to obtain representation from an agent for contracts or legal matters; require agents to have specific credentials, verify their eligibility through a public registration process and keep detailed records; and require colleges to establish a Catastrophic Sports Injury Fund to compensate student-athletes who suffer severe long-term injuries.

"Just two weeks ago, after years of delay, the National Collegiate Athletic Association (NCAA) finally began allowing college athletes to earn compensation from the use of their name, image or likeness," said Senate sponsor Sen. Barry Finegold, D-Andover. "I had originally introduced my athlete compensation bill last session, but I believe this bill is all the more important in light of the NCAA's recent policy shift. My proposed bill would codify the NCAA's rule change into Massachusetts law and provide additional clarity both for athletes and higher education institutions as they figure out how to comply with the NCAA's guidelines. We need to act now."

"Even though the NCAA has updated (its) policy, this legislation would ensure that Massachusetts student-athletes' constitutional rights can never be infringed upon again in the commonwealth," said Rep. Steven Howitt, R-Seekonk, House sponsor of a similar bill (H 1340). "It is important for the commonwealth to join the 24 other states who have already signed similar bills into law."

The Public Safety and Homeland Security Committee held a virtual hearing on legislation that would require EMS personnel to provide emergency treatment to a police dog injured in the line of duty and to use an ambulance to transport the dog to a veterinary clinic or veterinary hospital if no people require emergency medical treatment or transport at that time.

Co-sponsor Rep. Steven Xiarhos, R-Barnstable, spent 40 years on the Yarmouth Police Department and was the officer who sent a team of highly trained officers on a mission to find and arrest an armed and violent career-criminal in April 2018. He and sponsor Sen. Mark Montigny, D-New Bedford, filed the bill in response to the tragic events on that day when Police Sgt. Sean Gannon was shot and killed and his K-9 partner Nero was severely injured and had to be rushed to the animal hospital in the back of a police cruiser. Nero survived.

Xiarhos said he will never forget the sight of K-9 Nero being carried out, covered in blood and gasping for air.

"Despite the paramedics present wanting to help save him, they could not legally touch K-9 Nero, as current Massachusetts law prohibits helping a police animal wounded in the line of duty," Xiarhos said. "Instead, the police officers placed K-9 Nero in the back of a police cruiser and drove him to the closest veterinary hospital."

"These incredible animals risk their lives to work alongside law enforcement in dangerous situations," Montigny said. "It is only humane to allow for them to be transported in a way that reflects their contributions to our commonwealth. Sgt. Gannon was a native son of New Bedford and therefore his K-9 partner Nero is part of our community's extended family. We hope that this never has to be used, but it demonstrates the respect for the crucial work these animals do."

The Committee on Consumer Protection and Professional Licensure held a virtual hearing on a measure that would require all applicants for a new or renewal of a license to be a hairdresser, barber, cosmetologist, electrologist, manicurist or massage therapist to complete, in person or online, one hour of domestic violence and sexual assault awareness education as part of their educational requirements to be licensed in their field.

"Domestic violence and sexual assault are life-threatening issues in our communities, which were only exacerbated by COVID-19," said the bill's sponsor Rep. Christine Barber, D-Somerville. "Nearly one in three women statewide have experienced rape, physical violence or stalking by an intimate partner. As legislators, we have a duty to provide resources and support to survivors of domestic violence and sexual assault. Salon professionals often build trusting personal connections with clients and are uniquely positioned to see the details of their clients' bodies, as they work directly with their skin, hair, heads and hands. Their unique role puts them in a position to observe potential signs of domestic violence or sexual assault and has the potential to minimize violence and save lives."


Increasing the normal minimum pension age for Pensions Tax – GOV.UK

Posted: at 4:15 am

Who is likely to be affected

Individual members of registered pension schemes who do not have a protected pension age and who take scheme benefits before age 57 after 5 April 2028, or who would have liked to take a benefit before then but will no longer be able to. However, members of the firefighters, police and armed forces public service schemes will not be affected by this increase.

Scheme administrators of registered pension schemes will need to modify their systems to accommodate these changes.

This measure increases the normal minimum pension age (NMPA), which is the minimum age at which most pension savers can access their pensions without incurring an unauthorised payments tax charge unless they are retiring due to ill-health, from age 55 to 57 in 2028.

A consultation on the implementation of the increase and on a proposed framework of protections for pension savers who already have a right to take their pension at a pre-existing pension age was launched on 11 February 2021 via a Written Ministerial Statement (WMS) by the EST. The consultation closed on 22 April 2021 and received 142 responses.

This measure supports the government's agenda around fuller working lives and has indirect benefits to the economy through increased labour market participation, while also helping to make sure pension savings provide for later life.

The NMPA was introduced in 2006 and it increased from age 50 to age 55 in 2010. In 2014, following the consultation on Freedom and Choice in Pensions, the government announced it would increase the NMPA to age 57 in 2028 to coincide with the rise of state pension age to 67.

Following the consultation on a proposed framework of protections, this measure will legislate for the increase in the NMPA.

The increase in NMPA will have effect on and after 6 April 2028.

Registered pension schemes must not normally pay any benefits to members until they reach NMPA. Tax legislation provides that from 6 April 2010 the NMPA is age 55 (before 6 April 2010 it was age 50). Sections 165(1) and 279(1) Finance Act 2004.

Registered pension schemes are also not permitted to have a normal pension age lower than age 55 and this applies equally to individuals in occupations that usually retire before 55 (for example, professional sports people).

Although the legislation provides the minimum age at which benefits can be taken, the rules of a scheme state what benefits can be taken and the age from which they can be taken; that age can be higher than the NMPA.

If a registered pension scheme does pay benefits to a member before the NMPA, unauthorised payment charge liabilities may arise unless the benefits are paid on ill-health grounds, or the member had a right on 5 April 2006 to take benefits before the NMPA. An individual may have a right to take benefits before the NMPA where this right is not dependent on anything else or somehow qualified, for example by requiring employer or trustee consent. Where certain conditions are met, these individuals may take their benefits earlier than age 55 without a tax charge. This is known as the individual's protected pension age. Paragraphs 21 to 23 Schedule 36 Finance Act 2004.

If an individual has a protected pension age, the tax rules provide that it replaces the prevailing NMPA for all purposes of the pensions tax legislation, except for the lifetime allowance reduction that may apply where the protected pension age is less than 50 and benefits are taken before NMPA. This means, subject to that exception, that when taking benefits from the relevant registered pension scheme, the tax rules apply to the member based on their protected pension age rather than the prevailing NMPA.

A consultation on the implementation of the increase to NMPA and a proposed framework of protections for pension savers who already have a right to take their pension at a pre-existing pension age concluded on 22 April 2021 and received over 145 responses.

Following the consultation, legislation will be introduced in Finance Bill 2021-22 regarding the framework of protections and the increase to the NMPA from age 55 to 57.

This measure is not expected to have an Exchequer impact within the scorecard period. Impacts from the implementation date onwards will be subject to scrutiny by the Office for Budget Responsibility and will be set out at a future fiscal event.

This measure is not expected to have any significant macroeconomic impacts.

This measure will impact individuals approaching retirement age who will be affected by the 2-year increase in the NMPA.

Customer experience is expected to remain broadly the same as this measure does not significantly alter how individuals interact with HMRC.

This measure is not expected to have an impact on family formation, stability or breakdown.

This measure will impact men and women equally as the NMPA is the same for both genders. Whether individuals are affected will depend on the circumstances of their scheme.

This measure will impact older individuals more than younger ones. This is because it raises the minimum pension age, and those closer to that age will be affected immediately, whereas those who are 10 or more years away will have ample time to adjust and plan financially.

It is not anticipated that there will be any particular impact on other groups sharing protected characteristics.

This measure is expected to have a negligible impact on businesses administering registered pension schemes.

One-off costs for businesses will include familiarisation with the changes and could also include updating systems to reflect the change to the normal minimum pension age. Additional one-off costs could also include training staff on the changes, legal and consultation advice, and managing a potential increase in communication from customers.

There are not expected to be any continuing costs.

Customer experience is expected to stay broadly the same as this measure does not significantly alter how pension schemes interact with HMRC.

This measure is not expected to impact civil society organisations.

Minimal changes will need to be made to the online guidance on GOV.UK. This will be handled as part of the routine end of year updates at nil cost.

Other impacts have been considered and none has been identified.

The measure will be kept under review through communication with affected taxpayer groups.

If you have any questions about this change, contact Steve Darling on Telephone: 03000 512336 or email: pensions.policy@hmrc.gov.uk.


How the National Science Foundation is taking on fairness in AI – Brookings Institution

Posted: at 4:14 am

Most of the public discourse around artificial intelligence (AI) policy focuses on one of two perspectives: how the government can support AI innovation, and how the government can deter its harmful or negligent use. Yet there can also be a role for government in making it easier to use AI beneficially; in this niche, the National Science Foundation (NSF) has found a way to contribute. Through a grant-making program called Fairness in Artificial Intelligence (FAI), the NSF is providing $20 million in funding to researchers working on difficult ethical problems in AI. The program, a collaboration with Amazon, has now funded 21 projects in its first two years, with an open call for applications in its third and final year. This is an important endeavor, furthering a trend of federal support for the responsible advancement of technology, and the NSF should continue this important line of funding for ethical AI.

The FAI program is an investment in what the NSF calls "use-inspired research," where scientists attempt to address fundamental questions inspired by real-world challenges and pressing scientific limitations. Use-inspired research is an alternative to traditional basic research, which attempts to make fundamental advances in scientific understanding without necessarily having a specific practical goal. NSF is better known for basic research in computer science, where the NSF provides 87% of all federal basic research funding. Consequently, the FAI program is a relatively small portion of the NSF's total investment in AI: around $3.3 million per year, considering that Amazon covers half of the cost. In total, the NSF requested $868 million in AI spending, about 10% of its entire budget for 2021, and Congress approved every penny. Notably, this is a broad definition of AI spending that includes many applications of AI to other fields, rather than fundamental advances in AI itself, which is likely closer to $100 or $150 million, by rough estimation.

The FAI program is specifically oriented towards the ethical principle of fairness (more on this choice in a moment). While this may seem unusual, the program is a continuation of prior government-funded research into the moral implications and consequences of technology. Starting in the 1970s, the federal government started actively shaping bioethics research in response to public outcry following the AP's reporting on the Tuskegee Syphilis Study. While the original efforts may have been reactionary, they precipitated decades of work towards improving the biomedical sciences. Launched alongside the Human Genome Project in 1990, an extensive line of research explored the ethical, legal, and social implications of genomics. Starting in 2018, the NSF funded 21 exploratory grants on the impact of AI on society, a precursor to the current FAI program. Today, it's possible to draw a rough trend line through these endeavors, in which the government is becoming more concerned with first pure science, then the ethics of the scientific process, and now the ethical outcomes of the science itself. This is a positive development, and one worth encouraging.

NSF made a conscious decision to focus on fairness rather than other prevalent themes like trustworthiness or human-centered design. Dr. Erwin Gianchandani, an NSF deputy assistant director, has described four categories of problems in FAI's domain, and these can each easily be tied to present and ongoing challenges facing AI. The first category is focused on the many conflicting mathematical definitions of fairness and the lack of clarity around which are appropriate in what contexts. One funded project studied the human perceptions of what fairness metrics are most appropriate for an algorithm in the context of bail decisions, the same application as the infamous COMPAS algorithm. The study found that survey respondents slightly preferred an algorithm that had a consistent rate of false positives (how many people were unnecessarily kept in jail pending trial) between two racial groups, rather than an algorithm which was equally accurate for both racial groups. Notably, this is the opposite quality of the COMPAS algorithm, which was fair in its total accuracy, but resulted in more false positives for Black defendants.
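To make the two competing definitions concrete, the short sketch below computes both on synthetic data: the false positive rate and the overall accuracy for each of two groups. The data, group labels and predictions are invented for illustration and have no connection to the funded study or to COMPAS itself.

```python
# Minimal sketch of two competing fairness definitions, computed on synthetic
# predictions. Group labels, predictions, and outcomes are illustrative only.
import numpy as np

def false_positive_rate(y_true, y_pred):
    negatives = (y_true == 0)
    return np.mean(y_pred[negatives] == 1) if negatives.any() else 0.0

def accuracy(y_true, y_pred):
    return np.mean(y_true == y_pred)

rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)    # two demographic groups, 0 and 1
y_true = rng.integers(0, 2, size=1000)   # actual outcome (1 = re-offended)
y_pred = rng.integers(0, 2, size=1000)   # algorithm's prediction (1 = high risk)

for g in (0, 1):
    mask = group == g
    print(f"group {g}: "
          f"FPR={false_positive_rate(y_true[mask], y_pred[mask]):.3f}  "
          f"accuracy={accuracy(y_true[mask], y_pred[mask]):.3f}")

# An algorithm can match one metric across groups (e.g. overall accuracy)
# while diverging on the other (false positive rate), which is exactly the
# tension the COMPAS debate made famous.
```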

The second category, Gianchandani writes, is to understand how an AI system produces a given result. The NSF sees this as directly related to fairness because giving an end-user more information about an AI's decision empowers them to challenge that decision. This is an important point: by default, AI systems disguise the nature of a decision-making process and make it harder for an individual to interrogate the process. Maybe the most novel project funded by NSF FAI attempts to test the viability of crowdsourcing audits of AI systems. In a crowdsourced audit, many individuals might sign up for a tool (e.g., a website or web browser extension) that pools data about how those individuals were treated by an online AI system. By aggregating this data, the crowd can determine if the algorithm is being discriminatory, which would be functionally impossible for any individual user.
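A minimal sketch of how such a crowdsourced audit might pool reports is shown below. The report format, field names and flagging threshold are assumptions for illustration; they are not drawn from the funded project.

```python
# Minimal sketch of a crowdsourced audit: each participant reports the group
# they belong to and the decision an online system gave them; pooling the
# reports exposes rate differences invisible to any single user.
from collections import defaultdict

reports = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": True},
    # ... thousands more, pooled from the browser extension or website
]

totals = defaultdict(lambda: [0, 0])            # group -> [approved, seen]
for r in reports:
    totals[r["group"]][0] += int(r["approved"])
    totals[r["group"]][1] += 1

rates = {g: approved / seen for g, (approved, seen) in totals.items()}
print(rates)
if max(rates.values()) - min(rates.values()) > 0.1:   # illustrative threshold
    print("Large approval-rate gap between groups: flag for a closer audit.")
```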

The third category seeks to use AI to make existing systems fairer, an especially important task as governments around the world are continuing to consider if and how to incorporate AI systems into public services. One project from researchers at New York University seeks, in part, to tackle the challenge of fairness when an algorithm is used in support of a human decision-maker. This is perhaps inspired by a recent evaluation of judges using algorithmic risk assessments in Virginia, which concluded that the algorithm failed to improve public safety and had the unintended effect of increasing incarceration of young defendants. The NYU researchers have a similar challenge in mind: developing a tool to identify and reduce systemic biases in prosecutorial decisions made by district attorneys.

The fourth category is perhaps the most intuitive, as it aims to remove bias from AI systems, or alternatively, make sure AI systems work equivalently well for everyone. One project looks to create common evaluation metrics for natural language processing AI, so that their effectiveness can be compared across many different languages, helping to overcome a myopic focus on English. Other projects look at fairness in less studied methods, like network algorithms, and still more look to improve in specific applications, such as for medical software and algorithmic hiring. These last two are especially noteworthy, since the prevailing public evidence suggests that algorithmic bias in health-care provisioning and hiring is widespread.

Critics may lament that Big Tech, which plays a prominent role in AI research, is present even in this federal program: Amazon is matching the support of the NSF, so each organization is paying around $10 million. Yet there is no reason to believe the NSF's independence has been compromised. Amazon is not playing any role in the selection of the grant applications, and none of the grantees contacted had any concerns about the grant-selection process. NSF officials also noted that any working collaboration with Amazon (such as receiving engineering support) is entirely optional. Of course, it is worth considering what Amazon has to gain from this partnership. Reading the FAI announcement, it sticks out that the program seeks to contribute to trustworthy AI systems that are readily accepted and that projects will enable broadened acceptance of AI systems. It is not a secret that the current generation of large technology companies would benefit enormously from increased public trust in AI. Still, corporate funding towards genuinely independent research is good and unobjectionable, especially relative to other options like companies directly funding academic research.

Beyond the funding contribution, there may be other societal benefits from the partnership. For one, Amazon and other technology companies may pay more attention to the results of the research. For a company like Amazon, this might mean incorporating the results into its own algorithms, or into the AI systems that it sells through Amazon Web Services (AWS). Adoption into AWS cloud services may be especially impactful, since many thousands of data scientists and companies use those services for AI. As just an example, Professor Sandra Wachter of the Oxford Internet Institute was elated to learn that a metric of fairness she and co-authors had advocated for had been incorporated into an AWS cloud service, making it far more accessible for data science practitioners. Generally speaking, having an expanded set of easy-to-use features for AI fairness makes it more likely that data scientists will explore and use these tools.

In its totality, FAI is a small but mighty research endeavor. The myriad challenges posed by AI are all improved with more knowledge and more responsible methods driven by this independent research. While there is an enormous amount of corporate funding going into AI research, it is neither independent nor primarily aimed at fairness, and may entirely exclude some FAI topics (e.g., fairness in the government use of AI). While this is the final year of the FAI program, one of NSF FAI's program directors, Dr. Todd Leen, stressed when contacted for this piece that the NSF is not walking away from these important research issues, and that FAI's mission will be absorbed into the general computer science directorate. This absorption may come with minor downsides: for instance, the lack of a clearly specified budget line and no consolidated reporting on the funded research projects. The NSF should consider tracking these investments and clearly communicating to the research community that AI fairness is an ongoing priority of the NSF.

The Biden administration could also specifically request additional NSF funding for fairness and AI. For once, this funding would not be a difficult sell to policymakers. Congress funded the totality of the NSF's $868 million budget request for AI in 2021, and President Biden has signaled clear interest in expanding science funding; his proposed budget calls for a 20% increase in NSF funding for fiscal year 2022, and the administration has launched a National AI Research Taskforce co-chaired by none other than Dr. Erwin Gianchandani. With all this interest, earmarking $5 to $10 million per year explicitly for the advancement of fairness in AI is clearly possible, and certainly worthwhile.

The National Science Foundation and Amazon are donors to The Brookings Institution. Any findings, interpretations, conclusions, or recommendations expressed in this piece are those of the author and are not influenced by any donation.


Getting Industrial About The Hybrid Computing And AI Revolution – The Next Platform

Posted: at 4:14 am

For oil and gas companies looking at drilling wells in a new field, the issue becomes one of return vs. cost. The goal is simple enough: install the fewest wells that will draw the most oil or gas from the underground reservoirs for the longest amount of time. The more wells installed, the higher the cost and the larger the impact on the environment.

However, finding the right well placements quickly becomes a highly complex math problem. Too few wells sited in the wrong places leave a lot of resources in the ground. Too many wells placed too close together not only sharply increase the cost but also cause wells to pump from the same area.

Shahram Farhadi knows how complex the challenge is. Farhadi is the chief technology officer for industrial AI at Beyond Limits, a startup spun off by Caltech and NASA's Jet Propulsion Lab to commercialize technologies built for space exploration for use in industrial settings. The company, founded in 2014, aims to leverage cognitive AI, machine learning, and deep learning techniques in industries like oil and gas, manufacturing and industrial Internet of Things (IoT), power and natural resources, and healthcare and other evolving markets, many of which have already been using HPC environments to run their most complicated programs.

Placing wells within a reservoir is one of those problems that involves a sequential decision-making process that changes and grows with each decision made. Farhadi notes that in chess, there are almost 5 million possible moves after the first five are made. For the game Go, that number becomes 10 to the 12th power. When optimizing well placement in a small reservoir, from where and when to drill to how many producer and injector wells, there can be as many as 10 to the 20th power possible combinations after five sequential, non-commutative choices of vertical drilling locations.
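The rough arithmetic behind that figure can be reproduced as follows, assuming on the order of 10,000 candidate vertical drilling locations (for example, a 100 by 100 grid); the grid size is an assumption chosen only to match the order of magnitude quoted.

```python
# Back-of-the-envelope arithmetic behind the combinatorial figures quoted above.
# The 100 x 100 grid of candidate sites is an assumed illustration.
candidate_sites = 100 * 100
sequential_choices = 5
combinations = candidate_sites ** sequential_choices
print(f"{combinations:.1e}")   # 1.0e+20, i.e. ~10 to the 20th power
# Compare with the chess figure the article cites (almost 5 million
# possibilities after the first five moves): the well-placement search space
# is larger by many orders of magnitude.
```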

The combination of advanced AI frameworks with HPC can greatly reduce the challenge.

"Anything the AI can learn, such as basic rules for how far the wells should be separated, and apply to the problem will help decrease the number of computations, to hammer them down to something that is more tangible," Farhadi tells The Next Platform.

Where to place wells has been a challenge for oil and gas companies for years, during which time they developed seismic imaging capabilities and simulation models that run on HPC systems that describe reservoirs beneath the ground. They also use optimizers to run variations of the model to determine how many wells of which kinds should be placed where. "There have been at least two generations of engineers who worked to perfect these equations and their nuances, tuning and learning from the data," Farhadi says.

The problem has been that they have worked on these computations using a combination of brute force and such optimizations as particle swarm and genetic algorithms atop computationally expensive reservoir simulators, making such a complex problem even more challenging. That's where Beyond Limits' advanced AI frameworks come in.

"The industry is really equipped with really good simulations, and the opportunity of a high-performance AI could be, how about we use the simulations to generate the data and then learn from that generated data?" he says. "In that sense, you are going some good miles. Other industries are also doing this now, like with the auto industry, this is happening more or less. But from the energy industry standpoint, these simulations are fairly rich."

Beyond Limits is applying such techniques as deep reinforcement learning (DRL), using a framework to train a reinforcement learning agent to make optimal sequential recommendations for placing wells. The framework also draws on reservoir simulations and novel deep convolutional neural networks. The agent takes in the data and learns from the various iterations of the simulator, allowing it to reduce the number of possible combinations of moves after each decision is made. By remembering what it learned from the previous iterations, the system can more quickly whittle the choices down to the one best answer.
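The sketch below shows the generic shape of such a loop: an agent proposes the next well, a simulator scores the growing plan, and the incremental value becomes the reward. Everything in it, the grid, the toy simulator, the economics and the tabular value update, is an illustrative stand-in rather than Beyond Limits' actual framework, which pairs the agent with full reservoir simulations and deep convolutional networks.

```python
# Generic shape of the training loop described above. ReservoirSimulator, the
# grid, and all economics are toy assumptions; a real system would replace the
# lookup table with a deep network and the toy scorer with a flow simulator.
import random

GRID = [(i, j) for i in range(20) for j in range(20)]   # toy candidate sites

class ReservoirSimulator:
    """Toy stand-in: real simulators solve the subsurface flow equations."""
    def npv(self, wells):
        value = 0.0
        for k, (x, y) in enumerate(wells):
            value += 12e6 / (1 + 0.2 * k)                # diminishing returns per well
            for (px, py) in wells[:k]:
                if abs(x - px) + abs(y - py) < 4:        # too close to an earlier well
                    value -= 6e6                         # they drain the same region
        return value - 5e6 * len(wells)                  # drilling cost per well

q_values = {}                                            # (state, action) -> value estimate

def choose_action(state, epsilon=0.1):
    if random.random() < epsilon:                        # explore
        return random.choice(GRID)
    return max(GRID, key=lambda a: q_values.get((state, a), 0.0))   # exploit

sim = ReservoirSimulator()
for episode in range(1000):                              # the article cites ~150,000 interactions
    wells, prev_npv = [], 0.0
    for step in range(5):                                # five sequential placements
        state = tuple(wells)
        action = choose_action(state)
        wells.append(action)
        npv = sim.npv(wells)
        reward = npv - prev_npv                          # incremental NPV as the reward
        old = q_values.get((state, action), 0.0)
        q_values[(state, action)] = old + 0.1 * (reward - old)   # simple value update
        prev_npv = npv
```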

"One area that we looked at specifically is the simulation of subsurface movement of fluids," Farhadi says. "Think of a body of a rock that is found somewhere that has oil in it. It also has water that has come to it, and as you take out this hydrocarbon, this whole dynamic changes. Things will kick in. You might have water breaking through, but it's quite a delicate process that is happening down there. A lot of time goes into building this image because you have limited information. But let's say you have built the image and you have a simulator now that if you tell this simulator, 'I want to place a well here [and] a well here,' the simulator can evolve this in time and give you the flow rates and say, 'If you do this, this is what you're going to get.' Now if I operate this asset, the question for me is just exactly that: How many wells do I put in this? What kind of wells do I want to put in, vertical [and] horizontal? Do I want to inject water from the beginning? Do I want to inject gas? This is basically the expertise of reservoir engineering. It's playing the game of how to optimally extract this natural resource from these assets, and the assets are usually billions of dollars of value. This is a very, very precious asset for any company that is producing oil and gas. The question is, how do you extract the max out of it now?"

The goal is to get to a high net present value (NPV) score: essentially the amount of oil or gas that will be captured (and sold) and the amount of money made after costs are figured in. The fewest wells needed to extract the most resources will mean more profit.
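As a minimal illustration of that objective, the following computes an NPV for a candidate plan by discounting yearly cash flows against up-front drilling costs. All prices, rates, costs and production figures are assumed placeholders, not field data.

```python
# Minimal NPV calculation for a candidate well plan. Every number here is an
# illustrative assumption.
def npv(yearly_production_bbl, oil_price=60.0, well_cost=8e6, n_wells=4,
        opex_per_year=2e6, discount_rate=0.08):
    value = -well_cost * n_wells                           # up-front drilling cost
    for year, barrels in enumerate(yearly_production_bbl, start=1):
        cash_flow = barrels * oil_price - opex_per_year
        value += cash_flow / (1 + discount_rate) ** year   # discount future cash
    return value

production = [900_000, 700_000, 550_000, 430_000, 340_000]   # declining output, barrels/year
print(f"NPV: ${npv(production):,.0f}")
```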

"The NPV initially does some iteration, but after about 150,000 times of interacting with the simulator, it can get to something like $40 million of NPV," he says. "The key thing here is the fact that this simulation on its own can be expensive to run, so you optimize it, be smart and use it efficiently."

That included creating a system that would allow Beyond Limits to most efficiently scale the model to where the oil and gas companies needed it. The company tested it using three systems, two of which were CPU-only and one of which was a hybrid running CPUs and GPUs. Beyond Limits used an on-premises 20-core CPU system running Intel Core i9-7900X chips, a cloud-based 96-core CPU system with the same processors, and the hybrid setup, with a 20-core CPU and two Nvidia Ampere A100 GPU accelerators on a p4d.24xlarge Amazon Web Services instance.

The company also took it a step further by including a 36-hour run on a p4d.24xlarge AWS instance using a setup with 90 CPU cores and eight A100 GPUs.

The metrics benchmarked were around the instantaneous rate of reinforcement learning calculation, the number of episodes and forward action-explorations during the progress of reinforcement learning and the value of the best solution found in terms of NPV.

What Beyond Limits found was that the hybrid setup outperformed both CPU-only systems. In terms of benchmarks, the hybrid setup delivered a peak in terms of processing speed of 184.3 percent over the 96-core system and 1,169.5 percent over the 20-core operation. To reach the same number of actions explored at the end of 120,000 seconds, the CPU-GPU hybrid had an improvement in time elapsed of 245.4 percent over the 20 CPU cores and 152.9 percent over the 96 CPU cores. Regarding NPV, the hybrid instance had a boost of about 109 percent compared to the 20-core CPU setup for vertical wells.

Scale and efficiency are key when trying to reach optimal NPV, because not only do choices such as the number and types of wells add to the costs, but so do computational needs.

"This problem is very, very complicated in terms of the number of possible combinations, so the more hardware you throw at it, the higher you get, and obviously there are physical limits to that," Farhadi says. "The GPU becomes a real value-add because you can now achieve NPVs that are higher. Just because you were able to have higher grades, you would be able to have more FLOPs or you could compute more. You have a higher chance of finding better configurations. The idea here was to show that there is this technology that can help with highly combinatorial simulation-based optimizations called reinforcement learning, and we have benchmarked it on simple, smaller reservoir models. But if you were to take it to the actual field models with this number of cells, it's going to be, on its own, like a massive high-performance training system."

Beyond Limits is also building advanced AI systems for other industries. One example is a system designed to help with planning of a refinery. Another AI system helps chemists more quickly and efficiently build formulas for engine oil and other lubricants, he says.

"For the practices that you have relied on a human expert to come up with a framework and [to] solve a problem, it is important for them that whatever system you build is honoring that and can digest that," Farhadi says. "It's not only data, it's also that knowledge that's human. How do we incorporate and then bring this together? For example, how do you make the knowledge that your engineer learned about from the data or how do you use the physics as a constraint for your AI? It's an interesting field. Even in the frontiers of deep learning [and] machine learning, this is now being looked at. Instead of just looking at the pixels, now let's see if we can have more robust representations of hierarchical understandings of the objects that come our way. We really started this way earlier than 2014, because one big motivation was that the industries we went to required it. That was what they had and they needed to augment it, maybe with digital assistants. It has data elements to it, but they were not quite competent."


Diverse AI teams are key to reducing bias – VentureBeat

Posted: at 4:14 am


An Amazon-built resume-rating algorithm, when trained on men's resumes, taught itself to prefer male candidates and penalize resumes that included the word "women."

A major hospital's algorithm, when asked to assign risk scores to patients, gave white patients similar scores to Black patients who were significantly sicker.

"If a movie recommendation is flawed, that's not the end of the world. But if you are on the receiving end of a decision [that] is being used by AI, that can be disastrous," Huma Abidi, senior director of AI SW products and engineering at Intel, said during a session on bias and diversity in AI at VentureBeat's Transform 2021 virtual conference. Abidi was joined by Yakaira Nuez, senior director of research and insights at Salesforce, and Fahmida Y Rashid, executive editor of VentureBeat.

In order to produce fair algorithms, the data used to train AI needs to be free of bias. For every dataset, you have to ask yourself where the data came from, if that data is inclusive, if the dataset has been updated, and so on. And you need to utilize model cards, checklists, and risk management strategies at every step of the development process.
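As a small illustration, the questions above can be turned into an automated pre-training check, for example reporting group representation and data freshness. The field names, records and threshold below are assumptions for the sake of the example.

```python
# Small sketch of turning the dataset questions above into an automated check:
# who is represented, and how recent the data is. Field names are assumptions.
from datetime import date

records = [
    {"gender": "female", "collected": date(2021, 3, 1)},
    {"gender": "male",   "collected": date(2019, 6, 15)},
    {"gender": "male",   "collected": date(2021, 1, 20)},
    # ... the real training set would go here
]

counts = {}
for r in records:
    counts[r["gender"]] = counts.get(r["gender"], 0) + 1

total = len(records)
for group, n in counts.items():
    share = n / total
    flag = "  <- under-represented?" if share < 0.3 else ""   # illustrative threshold
    print(f"{group}: {share:.0%}{flag}")

newest = max(r["collected"] for r in records)
print("Most recent record:", newest)   # has the dataset been updated lately?
```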

"The best possible framework is that we were actually able to manage that risk from the outset: we had all of the actors in place to be able to ensure that the process was inclusive, bringing the right people in the room at the right time that were representative of the level of diversity that we wanted to see and the content. So risk management strategies are my favorite. I do believe, in order for us to really mitigate bias, that it's going to be about risk mitigation and risk management," Nuez said.

Make sure that diversity is more than just a buzzword and that your leadership teams and speaker panels are reflective of the people you want to attract to your company, Nuez said.

When thinking about diversity, equity, and inclusion work, or bias and racism, the most impact tends to be in areas in which individuals are most at risk, Nuez said. Health care, finance, and legal situations (anything involving police and child welfare) are all sectors where bias causes the most harm when it shows up. So when people are working on AI initiatives in these spaces to increase productivity or efficiencies, it is even more critical that they are thinking deliberately about bias and potential for harm. Each person is accountable and responsible for managing that bias.

Nuez discussed how the responsibility of a research and insights leader is to curate data so executives can make informed decisions about product direction. Nuez is not just thinking about the people pulling the data together, but also the people who may not be in the target market, to give insight into people Salesforce would not have known anything about otherwise.

Nuez regularly asks the team to think about bias and whether it is present in the data, like asking whether the panel of individuals for a project is diverse. If the feedback is not from an environment that is representative of the target ecosystem, then that feedback is less useful.

"Those questions are the small little things that I can do at the day-to-day level to try to move the needle a bit at Salesforce," Nuez said.

Research has shown that minorities often have to whiten their resumes in order to get callbacks and interviews. Companies and organizations can weave diversity and inclusion into their stated values to address this issue.

"If it's not already part of your core mission statement, it's really important to add those things: diversity, inclusion, equity. Just doing that, by itself, will help a lot," Abidi said.

It's important to integrate these values into corporate culture because of the interdisciplinary nature of AI: "It's not just engineers; we work with ethicists, we have lawyers, we have policymakers. And all of us come together in order to fix this problem," Abidi said.

Additionally, commitments by companies to help fix gender and minority imbalances also provide an end goal for recruitment teams: Intel wants women in 40% of technical roles by 2030. Salesforce is aiming to have 50% of its U.S. workforce made up of underrepresented groups, including women, people of color, LGBTQ+ employees, people with disabilities, and veterans.


The Global Artificial Intelligence (AI) Chips Market is expected to grow by $73.49 billion during 2021-2025, progressing at a CAGR of over 51% during…

Posted: at 4:14 am

Global Artificial Intelligence (AI) Chips Market 2021-2025: The analyst has been monitoring the artificial intelligence (AI) chips market, and it is poised to grow by $73.49 billion during 2021-2025, progressing at a CAGR of over 51% during the forecast period.
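For readers wanting to connect those two headline figures, the quick arithmetic below derives the base-year market size implied by $73.49 billion of incremental growth at a 51% CAGR over five years; the derived base is an estimate, not a number stated in the report excerpt.

```python
# Quick arithmetic connecting the two quoted figures: $73.49B of growth over
# 2021-2025 at roughly a 51% CAGR. The implied base-year size is derived, not
# stated in the report, and "over 51%" is treated here as exactly 51%.
growth = 73.49            # $ billions added over the forecast period
cagr = 0.51
years = 5                 # 2021 through 2025
base = growth / ((1 + cagr) ** years - 1)
print(f"Implied base-year market size: ${base:.1f}B")
print(f"Implied end-of-2025 size: ${base * (1 + cagr) ** years:.1f}B")
```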

New York, July 22, 2021 (GLOBE NEWSWIRE) -- Reportlinker.com announces the release of the report "Global Artificial Intelligence (AI) Chips Market 2021-2025" - https://www.reportlinker.com/p05006367/?utm_source=GNW. Our report on the artificial intelligence (AI) chips market provides a holistic analysis, market size and forecast, trends, growth drivers, and challenges, as well as vendor analysis covering around 25 vendors. The report offers an up-to-date analysis regarding the current global market scenario, latest trends and drivers, and the overall market environment. The market is driven by the increasing adoption of AI chips in data centers, increased focus on developing AI chips for smartphones, and the development of AI chips in autonomous vehicles. In addition, the increasing adoption of AI chips in data centers is anticipated to boost the growth of the market as well. The artificial intelligence (AI) chips market analysis includes the product segment and geographic landscape.

The artificial intelligence (AI) chips market is segmented as below: By Product: ASICs, GPUs, CPUs, FPGAs

By Geography: North America, Europe, APAC, South America, MEA

This study identifies the convergence of AI and IoT as one of the prime reasons driving the artificial intelligence (AI) chips market growth during the next few years. Also, increasing investments in AI start-ups and advances in the quantum computing market will lead to sizable demand in the market.

The analyst presents a detailed picture of the market by way of study, synthesis, and summation of data from multiple sources by an analysis of key parameters. Our report on the artificial intelligence (AI) chips market covers the following areas: artificial intelligence (AI) chips market sizing, market forecast, and industry analysis.

This robust vendor analysis is designed to help clients improve their market position, and in line with this, this report provides a detailed analysis of several leading artificial intelligence (AI) chips market vendors that include Alphabet Inc., Broadcom Inc., Intel Corp., NVIDIA Corp., Qualcomm Inc., Advanced Micro Devices Inc., Huawei Investment and Holding Co. Ltd., International Business Machines Corp., Samsung Electronics Co. Ltd., and Taiwan Semiconductor Manufacturing Co. Ltd. Also, the artificial intelligence (AI) chips market analysis report includes information on upcoming trends and challenges that will influence market growth. This is to help companies strategize and leverage all forthcoming growth opportunities. The study was conducted using an objective combination of primary and secondary information including inputs from key participants in the industry. The report contains a comprehensive market and vendor landscape in addition to an analysis of the key vendors.

The analyst presents a detailed picture of the market by way of study, synthesis, and summation of data from multiple sources by an analysis of key parameters such as profit, pricing, competition, and promotions. It presents various market facets by identifying the key industry influencers. The data presented is comprehensive, reliable, and a result of extensive research - both primary and secondary. Technavio's market research reports provide a complete competitive landscape and an in-depth vendor selection methodology and analysis using qualitative and quantitative research to forecast the accurate market growth. Read the full report: https://www.reportlinker.com/p05006367/?utm_source=GNW

About Reportlinker: ReportLinker is an award-winning market research solution. Reportlinker finds and organizes the latest industry data so you get all the market research you need - instantly, in one place.


Predicting a Boiling Crisis: Infrared Cameras and AI Provide Insight Into Physics of Boiling – SciTechDaily

Posted: at 4:13 am

By Matthew Hutson, MIT Department of Nuclear Science and Engineering, July 22, 2021

Pictures of the boiling surfaces taken using a scanning electron microscope: Indium tin oxide (top left), copper oxide nanoleaves (top right), zinc oxide nanowires (bottom left), and porous coating of silicon dioxide nanoparticles obtained by layer-by-layer deposition (bottom right). Credit: SEM photos courtesy of the researchers.

MIT researchers train a neural network to predict a boiling crisis, with potential applications for cooling computer chips and nuclear reactors.

Boiling is not just for heating up dinner. It's also for cooling things down. Turning liquid into gas removes energy from hot surfaces, and keeps everything from nuclear power plants to powerful computer chips from overheating. But when surfaces grow too hot, they might experience what's called a boiling crisis.

In a boiling crisis, bubbles form quickly, and before they detach from the heated surface, they cling together, establishing a vapor layer that insulates the surface from the cooling fluid above. Temperatures rise even faster and can cause catastrophe. Operators would like to predict such failures, and new research offers insight into the phenomenon using high-speed infrared cameras and machine learning.

Matteo Bucci, the Norman C. Rasmussen Assistant Professor of Nuclear Science and Engineering at MIT, led the new work, published on June 23, 2021, in Applied Physics Letters. In previous research, his team spent almost five years developing a technique in which machine learning could streamline relevant image processing. In the experimental setup for both projects, a transparent heater 2 centimeters across sits below a bath of water. An infrared camera sits below the heater, pointed up and recording at 2,500 frames per second with a resolution of about 0.1 millimeter. Previously, people studying the videos would have to manually count the bubbles and measure their characteristics, but Bucci trained a neural network to do the chore, cutting a three-week process to about five seconds. "Then we said, 'Let's see if, other than just processing the data, we can actually learn something from an artificial intelligence,'" Bucci says.
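The team's image processing relies on a trained neural network; as a much simpler stand-in that illustrates the underlying task, the sketch below counts bubble footprints in a single synthetic infrared frame using classical connected-component labeling. The frame and threshold are fabricated for illustration only.

```python
# Simple stand-in for the bubble-counting step: threshold one (synthetic)
# infrared frame and count connected bright regions. This is not the MIT
# team's neural-network method, just an illustration of the task it automates.
import numpy as np
from scipy import ndimage

frame = np.zeros((200, 200))                  # stand-in for one IR frame (~0.1 mm/pixel)
yy, xx = np.mgrid[0:200, 0:200]
for cy, cx, r in [(50, 60, 8), (120, 140, 12), (160, 40, 6)]:   # three fake bubbles
    frame[(yy - cy) ** 2 + (xx - cx) ** 2 < r ** 2] = 1.0

bright = frame > 0.5                          # threshold: hot, vapor-covered pixels
labels, n_bubbles = ndimage.label(bright)     # group touching pixels into bubbles
areas = ndimage.sum(bright, labels, range(1, n_bubbles + 1))
print(f"{n_bubbles} bubbles, areas (pixels): {areas}")

# Repeating this over 2,500 frames per second of video is what made the manual
# three-week analysis impractical and motivated automating it.
```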

The goal was to estimate how close the water was to a boiling crisis. The system looked at 17 factors provided by the image-processing AI: the nucleation site density (the number of sites per unit area where bubbles regularly grow on the heated surface), as well as, for each video frame, the mean infrared radiation at those sites and 15 other statistics about the distribution of radiation around those sites, including how they're changing over time. Manually finding a formula that correctly weighs all those factors would present a daunting challenge. But "artificial intelligence is not limited by the speed or data-handling capacity of our brain," Bucci says. Further, machine learning is not biased by our preconceived hypotheses about boiling.

To collect data, they boiled water on a surface of indium tin oxide, by itself or with one of three coatings: copper oxide nanoleaves, zinc oxide nanowires, or layers of silicon dioxide nanoparticles. They trained a neural network on 85 percent of the data from the first three surfaces, then tested it on 15 percent of the data of those conditions plus the data from the fourth surface, to see how well it could generalize to new conditions. According to one metric, it was 96 percent accurate, even though it hadn't been trained on all the surfaces. "Our model was not just memorizing features," Bucci says. "That's a typical issue in machine learning. We're capable of extrapolating predictions to a different surface."
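The evaluation protocol described above can be sketched as follows: train on 85 percent of the data from three surfaces, then test on the held-out 15 percent plus everything from a fourth, unseen surface. The features, labels and classifier below are generic placeholders rather than the study's data or network.

```python
# Sketch of the cross-surface evaluation protocol. The 17 "factors" and labels
# are random placeholders; the classifier is a generic stand-in, not the
# network used in the paper.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X_three = rng.normal(size=(3000, 17))        # 17 factors, surfaces 1-3
y_three = rng.integers(0, 2, size=3000)      # 1 = approaching boiling crisis
X_fourth = rng.normal(size=(1000, 17))       # unseen coating (surface 4)
y_fourth = rng.integers(0, 2, size=1000)

X_train, X_held, y_train, y_held = train_test_split(
    X_three, y_three, test_size=0.15, random_state=0)    # hold out 15%

model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300).fit(X_train, y_train)

X_test = np.vstack([X_held, X_fourth])       # held-out 15% plus the fourth surface
y_test = np.concatenate([y_held, y_fourth])
print(f"accuracy on unseen data: {model.score(X_test, y_test):.2f}")
```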

The team also found that all 17 factors contributed significantly to prediction accuracy (though some more than others). Further, instead of treating the model as a black box that used 17 factors in unknown ways, they identified three intermediate factors that explained the phenomenon: nucleation site density, bubble size (which was calculated from eight of the 17 factors), and the product of growth time and bubble departure frequency (which was calculated from 12 of the 17 factors). Bucci says models in the literature often use only one factor, but this work shows that "we need to consider many, and their interactions. This is a big deal."

"This is great," says Rishi Raj, an associate professor at the Indian Institute of Technology at Patna, who was not involved in the work. Boiling has such complicated physics: it involves at least two phases of matter, with many factors contributing to a chaotic system. "It's been almost impossible, despite at least 50 years of extensive research on this topic, to develop a predictive model," Raj says. "It makes a lot of sense to use the new tools of machine learning."

Researchers have debated the mechanisms behind the boiling crisis. Does it result solely from phenomena at the heating surface, or also from distant fluid dynamics? This work suggests surface phenomena are enough to forecast the event.

Predicting proximity to the boiling crisis doesn't only increase safety; it also improves efficiency. By monitoring conditions in real time, a system could push chips or reactors to their limits without throttling them or building unnecessary cooling hardware. It's like a Ferrari on a track, Bucci says: "You want to unleash the power of the engine."

In the meantime, Bucci hopes to integrate his diagnostic system into a feedback loop that can control heat transfer, thus automating future experiments, allowing the system to test hypotheses and collect new data. "The idea is really to push the button and come back to the lab once the experiment is finished." Is he worried about losing his job to a machine? "We'll just spend more time thinking, not doing operations that can be automated," he says. In any case: "It's about raising the bar. It's not about losing the job."
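A minimal sketch of the kind of feedback loop described above: if the model's predicted proximity to a boiling crisis crosses a threshold, back off the heater power. The model, sensor read, and power-setting callables are hypothetical placeholders, not an interface to MIT's actual rig.

```python
def control_step(model, read_ir_features, set_heater_power, power, margin=0.8):
    """Run one control iteration and return the (possibly reduced) heater power.

    model            -- trained predictor with a .predict() method (placeholder)
    read_ir_features -- callable returning the 17 per-frame IR statistics (placeholder)
    set_heater_power -- callable that applies the new power setting (placeholder)
    """
    features = read_ir_features()           # 17 statistics from the infrared camera
    risk = model.predict([features])[0]     # 0 = safe, 1 = boiling crisis imminent
    if risk > margin:
        power *= 0.95                       # throttle back before a vapor film forms
    set_heater_power(power)
    return power
```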

Reference: "Decrypting the boiling crisis through data-driven exploration of high-resolution infrared thermometry measurements" by Madhumitha Ravichandran, Guanyu Su, Chi Wang, Jee Hyun Seong, Artyom Kossolapov, Bren Phillips, Md Mahamudur Rahman and Matteo Bucci, 23 June 2021, Applied Physics Letters. DOI: 10.1063/5.0048391

Link:

Predicting a Boiling Crisis Infrared Cameras and AI Provide Insight Into Physics of Boiling - SciTechDaily

Posted in Ai | Comments Off on Predicting a Boiling Crisis Infrared Cameras and AI Provide Insight Into Physics of Boiling – SciTechDaily

Atos and Graphcore Partner to Deliver Advanced AI HPC Solutions Worldwide – HPCwire

Posted: at 4:13 am

PARIS and BRISTOL, England, July 22, 2021 – Atos and Graphcore today announce that they have signed a partnership to accelerate performance and innovation in Artificial Intelligence (AI) by integrating Graphcore's advanced IPU compute systems into Atos' recently launched ThinkAI offering, bringing high-performance AI solutions to customers worldwide.

This partnership will benefit both parties. Atos' long-standing position as a European leader in high-performance computing (HPC) and as a trusted advisor, provider and integrator of HPC solutions at scale will give Graphcore access to a multitude of new customers, sectors and geographies. Graphcore, in turn, will work with Atos to expand its global reach by targeting large corporate enterprises in sectors including finance, healthcare, telecoms and consumer internet, as well as national labs and universities focused on scientific research, which are rapidly developing their AI capabilities.

ThinkAI brings together Atos' AI business consultancy expertise, its experts at the Atos Center of Excellence in Advanced Computing, its digital security capabilities and its software, such as the Atos HPC Software Suites, to help organizations accelerate the operationalization and industrialization of AI.

Graphcore, the UK-headquartered maker of the Intelligence Processing Unit (IPU), plays a significant role in Atos' ThinkAI offering, which is focused on the twin objectives of accelerating pure artificial intelligence applications and augmenting traditional HPC simulation with AI. Graphcore's IPU-POD systems for scale-up datacentre computing will be an integral part of ThinkAI.

Even before today's formal launch of the partnership, the two companies welcomed their first major joint customer: one of the largest cloud providers in South Korea, which will be using Graphcore systems in large-scale AI cloud datacenters, in a deal facilitated by Atos.

"ThinkAI represents a massive commitment to the future of artificial intelligence by one of the world's most trusted technology companies. For Atos to have put Graphcore as a key part of its strategy says a great deal about the maturity of our hardware and software, and the ability of our systems to deliver on customer needs," said Fabrice Moizan, GM and SVP Sales EMEAI and Asia Pacific at Graphcore.

Agnès Boudot, Senior Vice President, Head of HPC & Quantum at Atos, said: "With ThinkAI, we're making it possible for organizations from any industry to achieve breakthroughs with AI. Graphcore's IPU hardware and Poplar software are opening up new opportunities for innovators to explore the potential of AI for their organizations. Complemented by our industry-tailored AI business consultancy, digital security capabilities and software, we're excited to be orchestrating these cutting-edge technologies in our ThinkAI solution."

About Atos

Atos is a global leader in digital transformation with 105,000 employees and annual revenue of over €11 billion. European number one in cybersecurity, cloud and high-performance computing, the Group provides tailored end-to-end solutions for all industries in 71 countries. A pioneer in decarbonization services and products, Atos is committed to a secure and decarbonized digital environment for its clients. Atos operates under the brands Atos and Atos|Syntel. Atos is an SE (Societas Europaea), listed on the CAC40 Paris stock index.

The purpose of Atos is to help design the future of the information space. Its expertise and services support the development of knowledge, education and research in a multicultural approach and contribute to the development of scientific and technological excellence. Across the world, the Group enables its customers and employees, and members of societies at large to live, work and develop sustainably, in a safe and secure information space.

About Graphcore

Graphcore is the inventor of the Intelligence Processing Unit (IPU), the world's most sophisticated microprocessor, specifically designed for the needs of current and next-generation artificial intelligence workloads.

Graphcore's IPU-POD datacenter systems, for scale-up and scale-out AI compute, offer the ability to run large models across multiple IPUs, or to share the compute resource between different users and workloads.

Since its founding in 2016, Graphcore has raised more than $730 million in funding.

Investors include Sequoia Capital, Microsoft, Dell, Samsung, BMW iVentures, Robert Bosch Venture Capital, as well as leading AI innovators including Demis Hassabis (Deepmind), Pieter Abbeel (UC Berkeley), and Zoubin Ghahramani (Google Brain).

Source: Graphcore

See the rest here:

Atos and Graphcore Partner to Deliver Advanced AI HPC Solutions Worldwide - HPCwire

Posted in Ai | Comments Off on Atos and Graphcore Partner to Deliver Advanced AI HPC Solutions Worldwide – HPCwire

AI spots shipwrecks from the ocean surface and even from the air – The Conversation US

Posted: at 4:13 am

The Research Brief is a short take about interesting academic work.

In collaboration with the United States Navy's Underwater Archaeology Branch, I taught a computer how to recognize shipwrecks on the ocean floor from scans taken by aircraft and ships on the surface. The computer model we created is 92% accurate in finding known shipwrecks. The project focused on the coasts of the mainland U.S. and Puerto Rico. It is now ready to be used to find unknown or unmapped shipwrecks.

The first step in creating the shipwreck model was to teach the computer what a shipwreck looks like. It was also important to teach the computer how to tell the difference between wrecks and the topography of the seafloor. To do this, I needed lots of examples of shipwrecks. I also needed to teach the model what the natural ocean floor looks like.

Conveniently, the National Oceanic and Atmospheric Administration keeps a public database of shipwrecks. It also has a large public database of different types of imagery collected from around the world, including sonar and lidar imagery of the seafloor. The imagery I used extends to a little over 14 miles (23 kilometers) from the coast and to a depth of 279 feet (85 meters). This imagery contains huge areas with no shipwrecks, as well as the occasional shipwreck.
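A rough illustration of how such training data might be assembled: cut a small window of depth values around each known wreck location in a bathymetry raster and label it as a wreck (wreck-free windows sampled elsewhere would form the negative class). The file names, CSV columns, and 64-pixel chip size are hypothetical assumptions, not the actual NOAA products used in the study.

```python
import csv

import numpy as np
import rasterio
from rasterio.windows import Window

CHIP = 64  # chip size in pixels (illustrative)

def extract_chip(src, lon, lat, size=CHIP):
    """Read a size x size window of depth values centred on (lon, lat)."""
    row, col = src.index(lon, lat)
    window = Window(col - size // 2, row - size // 2, size, size)
    return src.read(1, window=window, boundless=True, fill_value=0)

chips, labels = [], []
with rasterio.open("noaa_bathymetry_tile.tif") as src:         # hypothetical file
    with open("noaa_wreck_locations.csv") as f:                 # hypothetical file, columns: lon, lat
        for rec in csv.DictReader(f):
            chips.append(extract_chip(src, float(rec["lon"]), float(rec["lat"])))
            labels.append(1)                                    # wreck present
        # Negative examples (natural seafloor) would be sampled from
        # random wreck-free coordinates in the same way and labelled 0.

X = np.stack(chips)[..., np.newaxis]   # shape: (n_chips, 64, 64, 1)
y = np.array(labels)
```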

Finding shipwrecks is important for understanding the human past (think trade, migration, war), but underwater archaeology is expensive and dangerous. A model that automatically maps all shipwrecks over a large area can reduce the time and cost needed to look for wrecks, either with underwater drones or human divers.

The Navy's Underwater Archaeology Branch is interested in this work because it could help the unit find unmapped or unknown naval shipwrecks. More broadly, this is a new method in the field of underwater archaeology that can be expanded to look for various types of submerged archaeological features, including buildings, statues and airplanes.

This project is the first archaeology-focused model that was built to automatically identify shipwrecks over a large area, in this case the entire coast of the mainland U.S. There are a few related projects that are focused on finding shipwrecks using deep learning and imagery collected by an underwater drone. These projects are able to find a handful of shipwrecks that are in the area immediately surrounding the drone.
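For readers unfamiliar with this kind of model, here is a minimal sketch of a convolutional classifier that scores bathymetry chips as "wreck" versus "natural seafloor". The architecture and hyperparameters are illustrative assumptions; the published model's details are in the underlying paper.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 1)),           # one depth channel per chip
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),      # P(chip contains a wreck)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# model.fit(X, y, validation_split=0.15, epochs=10)  # X, y built as in the earlier sketch
```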

We'd like to include more shipwreck and imagery data from all over the world in the model. This will help the model get really good at recognizing many different types of shipwrecks. We also hope that the Navy's Underwater Archaeology Branch will dive to some of the places where the model detected shipwrecks. This will allow us to check the model's accuracy more carefully.

I'm also working on a few other archaeological machine learning projects, and they all build on each other. The overall goal of my work is to build a customizable archaeological machine learning model. The model would be able to quickly and easily switch between predicting different types of archaeological features, on land as well as underwater, in different parts of the world. To this end, I'm working on projects focused on finding ancient Maya archaeological structures, caves at a Maya archaeological site and Romanian burial mounds.

Continue reading here:

AI spots shipwrecks from the ocean surface and even from the air - The Conversation US

Posted in Ai | Comments Off on AI spots shipwrecks from the ocean surface and even from the air – The Conversation US