ServiceNow ordered a year’s worth of hardware to avoid supply-chain hassles – The Register

The tech world's pandemic supply chain meltdown drove ServiceNow to place orders for a year's worth of datacenter kit in January 2022, believing that doing so was necessary to get the hardware it needed to cope with growing customer workloads.

"Pre-COVID, I could generally get stuff in 45 days," CTO Pat Casey told The Register at ServiceNow's Knowledge 22 conference in Sydney, Australia, today.

Well-publicized coronavirus-related supply challenges caused ServiceNow's lead time for some networking kit to stretch to 160 days, and for servers to 120 days.

So the company "literally placed our entire 2022 order in January," he explained.

"We did it to get in line with the supply chain. If we order it now, hardware starts landing in Q3. If I order in Q3 2022 to meet hardware demand for Q4 2022, I will get the product in Q3 2023."

ServiceNow can't afford to wait that long because the biz hosts clients on its own infrastructure, which Casey finds cheaper than the alternatives. Startups and small companies, he said, rightly balk at paying for datacenter engineers to run their own operations. ServiceNow has reached a scale at which it can afford an infrastructure staff to manage the 200,000 or so instances it runs.

Casey also feels that Amazon Web Services offers "a generic cloud." ServiceNow prefers hardware tuned to the needs of its application, which requires servers loaded up with memory and disk.

"We run one app. I can buy gear optimized for that, which means I can stack it denser; I can often get exactly the stuff I need. The price points are there," Casey elaborated.

The CTO said ServiceNow has found that "middling CPUs" meet its needs. "There is a spectrum of chips: the fast chips with a small number of cores, and the slower you get, the more cores they give you for the same amount of power. We are somewhere in the middle: we can't run all that efficiently on the core-happy but fairly slow stuff, but it is not worth it for us to pay a pile of money for something with only four cores on it to get 12 percent faster."

ServiceNow is an x86 shop, though Casey said the company has considered alternatives including IBM's Power architecture. It has not been convinced to change.

ServiceNow's servers use locally attached NVMe storage, housed on separate cards. Shared storage is used sparingly, and usually in the same rack as the servers it, well, serves.

Casey said ServiceNow was one of the first customers of Fusion-io, the storage upstart that was early to market with flash storage on PCIe cards. "It was life changing for us because it was so much better than the spinning disk arrays we had," Casey enthused. "At one point we were buying ten percent of Fusion-io's annual production. We were their number one customer. We are still a big buyer of NVMe storage."

Yet Casey still sees some hangovers from the days of mechanical hard disks in the world of software.

"A lot of the internals of a database are really designed to work around the behaviors of spinning disk arrays," he told The Register. "On NVMe it is almost not worth it. The double write buffering behaviors you see in a lot of databases, you don't need that on NVMe. They are actually counterproductive."

Casey said ServiceNow is a big user of, and investor in, MariaDB, with around 200,000 instances running. In August 2021, ServiceNow acquired German database vendor Swarm64 in the expectation that its Postgres-based tech will enable rapid analytics and potentially be useful for primary storage, too. MonetDB, an open source effort led by folks in the Netherlands, also has a home at ServiceNow. Casey said it is "very, very fast, but also sort of fragile."

"You set it up, you load your column storage fast as a thief, but if you change it, it degrades," he said. "So we have to run two of them in parallel, then swap them."

Casey doesn't see any technology on the horizon that he thinks will have a positive impact comparable to NVMe's, though he is keeping an eye on Compute Express Link (CXL) without being close to a decision. He's aware of SmartNICs but has no plan to adopt them.

One project that has commenced is the development of a backup tier. ServiceNow wrote its own backup solution and is thinking of using a standardized protocol, namely Amazon Web Services' S3, for that tech.

That's not an indication ServiceNow will adopt AWS storage. Instead, Casey said the company may deploy software that speaks the S3 protocol. ServiceNow would not be alone in doing so: AWS's cloud storage offering is so pervasive that many on-prem storage rigs and applications use its protocol to allow easier access to hybrid cloud storage. For ServiceNow, S3-compatible storage in-house therefore has value.
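What "speaking the S3 protocol" amounts to can be sketched with the protocol's two object-addressing styles. The endpoint, bucket, and key below are hypothetical; a real client would also attach AWS Signature Version 4 authentication headers to each request:

```python
# Sketch: the S3 protocol addresses every object as a bucket + key pair
# over plain HTTP(S). Any storage system that answers these requests
# "speaks S3", regardless of whether it is AWS. Names are hypothetical.

def s3_path_style_url(endpoint: str, bucket: str, key: str) -> str:
    """Path-style addressing, common for on-prem S3-compatible stores."""
    return f"https://{endpoint}/{bucket}/{key}"

def s3_virtual_hosted_url(endpoint: str, bucket: str, key: str) -> str:
    """Virtual-hosted-style addressing, the AWS default."""
    return f"https://{bucket}.{endpoint}/{key}"

url = s3_path_style_url("storage.internal.example", "backups",
                        "db/2022-05-24/full.tar")
print(url)  # -> https://storage.internal.example/backups/db/2022-05-24/full.tar
```

Because the addressing scheme is the whole interface, an application written against it can be pointed at an in-house store simply by swapping the endpoint, which is the portability Casey is describing.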

Casey's job isn't all infrastructure: he also guides development of ServiceNow's products. In that role, he said, an ongoing challenge is the user interface, both to ensure complexity does not become an issue and because end-user demands remain high. People expect the ease of consumer tech experiences replicated at work.


Google says it would release its photorealistic DALL-E 2 rival but this AI is too prejudiced for you to use – The Register

DALL-E 2 may have to cede its throne as the most impressive image-generating AI to Google, which has revealed its own text-to-image model called Imagen.

Like OpenAI's DALL-E 2, Google's system outputs images of stuff based on written prompts from users. Ask it for a vulture flying off with a laptop in its claws and you'll perhaps get just that, all generated on the fly.

A quick glance at Imagen's website shows off some of the pictures it's created (and Google has carefully curated), such as a blue jay perched on a pile of macarons, a robot couple enjoying wine in front of the Eiffel Tower, or Imagen's own name sprouting from a book. According to the team, "human raters exceedingly prefer Imagen over all other models in both image-text alignment and image fidelity," but they would say that, wouldn't they.

Imagen comes from Google Research's Brain Team, who claim the AI achieved an unprecedented level of photorealism thanks to a combination of transformer and image diffusion models. When tested against similar models, such as DALL-E 2 and VQ-GAN+CLIP, the team said Imagen blew the lot out of the water. DrawBench, a list of 200 prompts used to benchmark the models, was built in-house.

Imagen's work, with prompts ... Source: Google

Imagen's designers say that their key breakthrough was in the training stage of their model. Their work, the team said, shows how effective large, frozen pre-trained language models can be as text encoders. Scaling that language model, they found, had far more impact on performance than scaling Imagen's other components.

"Our observation encourages future research directions on exploring even bigger language models as text encoders," the team wrote.

Unfortunately for those hoping to take a crack at Imagen, the team that created it said it isn't releasing the code or a public demo, for several reasons.

For example, Imagen isn't good at generating human faces. In experiments with pictures including human faces, Imagen only received a 39.2 percent preference from human raters over reference images. When human faces were removed, that number jumped to 43.9 percent.

Unfortunately, Google didn't provide any Imagen-generated human pictures, so it's impossible to tell how they compare to those generated by platforms like This Person Does Not Exist, which uses a generative adversarial network to generate faces.

Aside from technical concerns, and more importantly, Imagen's creators found that it's a bit racist and sexist even though they tried to prevent such biases.

Imagen showed "an overall bias towards generating images of people with lighter skin tones and portraying different professions to align with Western gender stereotypes," the team wrote. Eliminating humans didn't help much, either: "Imagen encodes a range of social and cultural biases when generating images of activities, events and objects."

Like similar AIs, Imagen was trained on image-text pairs scraped from the internet into publicly available datasets like COCO and LAION-400M. The Imagen team said it filtered a subset of the data to remove noise and offensive content, though an audit of the LAION dataset "uncovered a wide range of inappropriate content including pornographic imagery, racist slurs, and harmful social stereotypes."

Bias in machine learning is a well-known issue: Twitter's image cropping and Google's computer vision are just two systems that have been singled out for playing into stereotypes coded into the data we produce.

"There are a multitude of data challenges that must be addressed before text-to-image models like Imagen can be safely integrated into user-facing applications We strongly caution against the use of text-to-image generation methods for any user-facing tools without close care and attention to the contents of the training dataset," Imagen's creators said.


Toyota cuts vehicle production over global chip shortage – The Register

Toyota is to slash global production of motor vehicles due to the semiconductor shortage. The news comes as Samsung pledges to invest about $360 billion over the next five years to bolster chip production, along with other strategic sectors.

In a statement, Toyota said it has had to lower the production schedule by tens of thousands of units globally from the numbers it provided to suppliers at the beginning of the year.

"The shortage of semiconductors, spread of COVID-19 and other factors are making it difficult to look ahead, but we will continue to make every effort possible to deliver as many vehicles to our customers at the earliest date," the company said.

This has resulted in the suspension of manufacturing in May and June for 16 Toyota production lines in 10 plants, out of 28 lines across 14 plants, according to the company.

The news is just the latest in the saga of shortages caused by lockdowns and other issues that have led to long delays in chip shipments affecting multiple industries.

In April, Volvo cited chip shortages for a 22.1 percent drop in sales of its vehicles in March, when compared to the same period the previous year. Jaguar Land Rover, General Motors and others say they've also felt the squeeze this year.

Car manufacturers were particularly badly hit due to lack of flexibility in the supply chain, but the effects are also being felt by makers of computers and other kit, with Dell reporting in February that it is expecting the backlog to grow. Chipmaker TSMC warned in April that supply difficulties are likely to last through this year and into 2023.

Amid all this, Samsung has announced that it plans to invest about $360 billion in total over the next five years to drive growth in semiconductors, biopharmaceuticals, and other next-generation technologies.

The investment represents an increase of more than 30 percent over the previous five-year period, and comes with the expectation that it will lead to the creation of 80,000 jobs, mostly in semiconductors and biopharmaceuticals, and most of those likely in Samsung's backyard.

According to Reuters, Samsung said 80 percent of the investment will be made in South Korea and that the announcement includes a ₩240 trillion ($206 billion) investment pledge made by the company in August 2021.

While the move may be welcomed by many, it is unlikely to ease the chip shortage pain currently being felt by many hardware makers and their customers. However, Richard Gordon, Gartner practice vice president for semiconductors and electronics, said we may now be past the worst of it.

"We've just seen a classic peak in the semiconductor market chip shortages, prices rises, inventory build-up, all of which led to a very high growth year and record revenues in 2021. But this is a cyclical market. The shortage situation is easing; I think we are past the peak in the cycle," Gordon told us last month.

"On the supply side, capacity is going to come on-stream progressively from 2022 onwards and supply chain disruption in places like China will cause sporadic glitches in electronics production."

However, both Samsung and TSMC, the two largest contract semiconductor manufacturers in the world, announced earlier this month that they are planning to increase the prices they charge customers for manufacturing chips, which will likely lead to a hike in the prices that enterprises and consumers pay for products.

Earlier this year, Samsung announced revenue of $63.8 billion for Q4 2021, up 24 percent year-on-year, with operating profits up almost 5 percent, despite the fact that it failed to meet its own guidance for DRAM and NAND shipments during the final three months of the year.


The case for placing AI at the heart of digitally robust financial regulation – Brookings Institution

Data is the new oil. Originally coined in 2006 by the British mathematician Clive Humby, this phrase is arguably more apt today than it was then, as smartphones rival automobiles for relevance and the technology giants know more about us than we would like to admit.

Just as it does for the financial services industry, the hyper-digitization of the economy presents both opportunity and potential peril for financial regulators. On the upside, reams of information are newly within their reach, filled with signals about financial system risks that regulators spend their days trying to understand. The explosion of data sheds light on global money movement, economic trends, customer onboarding decisions, quality of loan underwriting, noncompliance with regulations, financial institutions' efforts to reach the underserved, and much more. Importantly, it also contains the answers to regulators' questions about the risks of new technology itself. Digitization of finance generates novel kinds of hazards and accelerates their development. Problems can flare up between scheduled regulatory examinations and can accumulate imperceptibly beneath the surface of information reflected in traditional reports. Thanks to digitization, regulators today have a chance to gather and analyze much more data and to see much of it in something close to real time.

The potential for peril arises from the concern that regulators' current technology framework lacks the capacity to synthesize the data. The irony is that this flood of information is too much for them to handle. Without digital improvements, the data fuel that financial regulators need to supervise the system will merely make them overheat.

Enter artificial intelligence.

In 2019, then-Bank of England Gov. Mark Carney argued that financial regulators will have to adopt AI techniques in order to keep up with the rising volumes of data flowing into their systems. To dramatize the point, he said the bank receives 65 billion pieces of data annually from companies it oversees and that reviewing it all would be like each supervisor reading the complete works of Shakespeare twice a week, every week of the year.

That was three years ago. The number is almost certainly higher today. Furthermore, the numbers he cited only covered information reported by regulated firms. It omitted the massive volumes of external Big Data generated from other sources like public records, news media, and social media that regulators should also be mining for insight about risks and other trends.

AI was developed over 70 years ago. For decades, enthusiasts predicted that it would change our lives profoundly, but it took a while before AI had much impact on everyday lives.1 AI occasionally made news by performing clever feats, like IBM's Watson besting human champions at Jeopardy in 2011, or AI beating masters of complex games like chess (in 1996) and Go (in 2017). However, it was only recently that such machines showed signs of being able to solve real-world problems. Why is that?

A key answer is that, until only recently, there wasn't enough data in digitized form (formatted as computer-readable code) to justify using AI.2 Today, there is so much data that not only can we use AI, but in many fields, like financial regulation, we have to use AI simply to keep up.

As discussed further below, financial regulators around the world are in the early stages of exploring how AI and its sub-branches of Machine Learning (ML), Natural Language Processing (NLP), and neural networks can enhance their work. They are increasingly weighing the adoption of supervisory technology (or suptech) to monitor companies more efficiently than they can with analog tools. This shift is being mirrored in the financial industry by a move to improve compliance systems with similar regulatory technology (regtech) techniques. Both processes are running on a dual track, with one goal being to convert data into a digitized form and the other to analyze it algorithmically. Meeting either of these objectives without the other has little value. Together, they will transform both financial regulation and compliance. They offer the promise that regulation, like everything else that gets digitized, can become better, cheaper, and faster, all at once.

Financial regulators around the world have generally been more active in regulating industry's use of AI than adopting it for their own benefit. Opportunities abound, however, for AI-powered regulatory and law enforcement tactics to combat real-world problems in the financial system. In a later section, this paper will look at the primary emerging use cases. Before doing so, it is worth taking a look at some areas of poor regulatory performance, both past and present, and asking whether AI could have done better.

One example is the $800 billion Paycheck Protection Program that Congress established in 2020 to provide government-backed loans for small businesses reeling from the pandemic. More than 15% of PPP loans, representing $76 billion, contained evidence of fraud, according to a study released last year. Many cases involved loan applicants using fake identities. Imagine if the lenders submitting loan guarantee applications or the Small Business Administration systems that were reviewing them had had mature AI-based systems that could have flagged suspicious behavior. They could have spotted false statements and prevented fraudulent loans, thereby protecting taxpayer money and ensuring that their precious funds helped small businesses in need instead of financing thieves.

Two examples can be found from the war in Ukraine. The Russian invasion has sparked a whole new array of sanctions against Russian oligarchs who hide riches in shell companies and are scrambling to move their money undetected. Financial institutions are required to screen accounts and transactions to identify transactions by sanctioned entities. What if they and law enforcement agencies like the Financial Crimes Enforcement Network (FinCEN) had AI-powered analytics to pull and pool data from across the spectrum of global transactions and find the patterns revealing activity by sanctioned parties? Unfortunately, most financial institutions and government agencies do not have these tools in hand today.

The second example comes from the rapid flight of millions of refugees attracting human traffickers to the country's borders seeking to ensnare desperate women and children and sell them into slavery for work and sex. Banks are required by law to maintain anti-money laundering (AML) systems to detect and report money movement that may indicate human trafficking and other crimes, but these systems are mostly analog and notoriously ineffective. The United Nations Office on Drugs and Crime estimates that less than 1% of financial crime is caught. AI-powered compliance systems would have a far better chance of flagging the criminal rings targeting Ukraine. If such systems had been in effect in recent years, moreover, the human trafficking trade might not be flourishing. As it stands today, an estimated 40 million people are being held captive in modern human slavery, and one in four of them is a child.

In another thought experiment, what if bank regulators in 2007 had been able to see the full extent of interrelationships between subprime mortgage lenders and Wall Street firms like Bear Stearns, Lehman Brothers, and AIG? If regulators had been armed with real-time digital data and AI analytics, they would have been monitoring risk contagion in real time. They might have been able to avert the financial crisis and with it, the Great Recession.

Finally, what about fair lending? In 1968, the United States outlawed discrimination on the basis of race, religion and other factors in mortgage lending through the passage of the Fair Housing Act.3 With the later passage of the Equal Credit Opportunity Act and Housing and Community Development Act, both in 1974, Congress added sex discrimination to that list and expanded fair-lending enforcement to all types of credit, not just mortgages.4 That was nearly 50 years ago.

These laws have gone a long way toward combating straightforward, overt discrimination but have been much less effective in rooting out other forms of bias. Lending decisions still produce disparate impacts on different groups of borrowers, usually in ways that disproportionately harm protected classes like people of color. Some of this arises from the fact that high-volume credit decisioning must rely on efficient measures of creditworthiness, like credit scores, that in turn rely on narrow sources of data.5 What if, 40 years ago, both regulators and industry had been able to gather much more risk data and analyze it with AI? How many more people would have been deemed creditworthy instead of having their loans denied? Over four decades, could AI tools have changed the trajectory of racial opportunity in the United States, which currently includes a $10 trillion racial wealth gap and an African-American homeownership rate lagging that of whites by 30 percentage points?

In his 2018 book titled Unscaled, venture capitalist Hemant Taneja argued that exploding amounts of data and AI will continue to produce unprecedented acceleration of our digital reality. "In another ten years anything that AI doesn't power will seem lifeless and outmoded. It will be like an icebox after electric-powered refrigerators were invented," he wrote.

Taneja's estimated time horizon is now only six years away. In the financial sector, this sets up a daunting challenge for regulators: to design and construct sufficiently powerful suptech before the industry's changing technology overwhelms their supervisory capacity. Fortunately, regulators in the U.S. and around the world are taking steps to narrow the gap.

Arguably the global leader in regulatory innovation is the United Kingdom's Financial Conduct Authority (FCA). In 2015, the FCA established the Project Innovate initiative, which included the creation of a regulatory sandbox for private sector firms to test new products for their regulatory impact. A year later, the FCA launched a regtech unit that developed what the agency called techsprints: open competitions resembling tech hackathons in which regulatory, industry, and issue experts work side by side with software engineers and designers to develop and present tech prototypes for solving a particular regulatory problem. The innovation program has since been expanded into a major division within the FCA.6

The FCA has been able to translate this relatively early focus on digital innovation into real-world problem solving. In 2020, a senior agency official gave a speech about how the FCA uses machine learning and natural language processing to monitor company behaviors and spot outlier firms as part of a holistic approach to data analysis. Similar strides have been made in other countries, including Singapore and Australia.

U.S. regulators for the most part have made slower progress incorporating AI technologies in their monitoring of financial firms. All of the federal financial regulatory bodies have innovation programs in some form. Most of them, however, have focused more on industry innovation than their own. The U.S. banking agencies (the Consumer Financial Protection Bureau, Federal Deposit Insurance Corporation, Federal Reserve Board, and Office of the Comptroller of the Currency) all have innovation initiatives that are largely outward-facing, aimed at understanding new bank technologies and offering a point of contact on novel regulatory questions. They all also expanded their technology activities during the COVID-19 pandemic, spurred by the sudden digital shifts underway in the industry and their own need to expand offsite monitoring. Several agencies also have suptech projects underway. These, however, generally have limited reach and do not address the need for agencies to revisit their foundational, analog-era information architecture.

This is beginning to change. The Federal Reserve in 2021 created the new position of Chief Innovation Officer and hired Sunayna Tuteja from the private sector, charging her to undertake a sweeping modernization of the Fed's data infrastructure. The FDIC, too, has closely examined its own data structures, and the OCC has worked on consolidating its examination platforms. These are productive steps, but they still lag the advanced thinking underway in other parts of the world. U.S. regulators have yet to narrow the gap between the accelerating innovation in the private sector and their own monitoring systems.

Other U.S. regulatory agencies have embraced AI technologies more quickly. In 2017, Scott Bauguess, the former deputy chief economist at the Securities and Exchange Commission (SEC), described his agency's use of AI to monitor securities markets. Soon after the financial crisis, he said, the SEC began applying simple text-analytic methods to determine whether the agency could have predicted risks stemming from credit default swaps before the crisis. SEC staff also applies machine-learning algorithms to identify reporting outliers in regulatory filings.

Similarly, the Financial Industry Regulatory Authority (FINRA), the self-regulatory body overseeing broker-dealers in the U.S., uses robust AI to detect possible misconduct.7 The Commodity Futures Trading Commission (CFTC), meanwhile, has been a leader through its LabCFTC program, which addresses both fintech and regtech solutions. Former CFTC Chairman Christopher Giancarlo has said that the top priority of every regulatory body should be to digitize the rulebook.8 Lastly, the Treasury Department's Financial Crimes Enforcement Network (FinCEN) launched an innovation program in 2019 to explore regtech methods for improving money-laundering detection.9 The agency is now in the process of implementing sweeping technology mandates it received under the Anti-Money Laundering Act of 2020, a great opportunity to implement AI to better detect some of the financial crimes discussed above.

If government agencies supplanted their analog systems with a digitally native design, it would optimize the analysis of data that is now being under-utilized. The needles could be found in the haystack, fraudsters and money launderers would have a harder time hiding their activity, and regulators would more completely fulfill their mission of maintaining a safer and fairer financial system.

Below are specific use cases for incorporating AI in the regulatory process:

Arguably the most advanced regtech use case globally is anti-money laundering (AML). AML compliance costs the industry upwards of $50 billion per year in the U.S., as most banks rely on rules-based transaction monitoring systems.10 These methods help them determine which activity to report to FinCEN as suspicious but currently produce a false-positive rate of over 90%. This suggests banks, regulators, and law enforcement authorities are spending time and money chasing down potential leads but not really curbing illicit financial activity. The AML data that law enforcement agencies currently receive contains too much unimportant information and is not stored in formats that help identify patterns of crime.11
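The rules-based monitoring described above can be sketched in a few lines. The thresholds and rules here are illustrative inventions, not any bank's or vendor's actual detection logic, but they show why such systems both catch crude patterns and flag mountains of legitimate activity:

```python
# Toy sketch of a rules-based transaction monitor. Thresholds and rules
# are hypothetical, for illustration only.
from collections import defaultdict

CASH_REPORT_THRESHOLD = 10_000   # hypothetical flagging threshold
STRUCTURING_COUNT = 3            # hypothetical: N sub-threshold deposits

def flag_transactions(transactions):
    """Return indices of transactions a simple rule set would flag."""
    flagged = set()
    near_threshold = defaultdict(list)   # account -> indices of 9k-10k deposits
    for i, tx in enumerate(transactions):
        # Rule 1: any single transaction at or above the threshold.
        if tx["amount"] >= CASH_REPORT_THRESHOLD:
            flagged.add(i)
        # Rule 2: repeated just-under-threshold deposits ("structuring").
        elif tx["amount"] >= CASH_REPORT_THRESHOLD * 0.9:
            near_threshold[tx["account"]].append(i)
            if len(near_threshold[tx["account"]]) >= STRUCTURING_COUNT:
                flagged.update(near_threshold[tx["account"]])
    return sorted(flagged)

txs = [
    {"account": "A", "amount": 12_000},  # rule 1 fires
    {"account": "B", "amount": 9_500},   # rule 2 fires after three hits
    {"account": "B", "amount": 9_600},
    {"account": "B", "amount": 9_700},
    {"account": "C", "amount": 500},     # never flagged
]
print(flag_transactions(txs))  # -> [0, 1, 2, 3]
```

Note that rule 1 flags every large legitimate payment along with the suspicious ones, which is the mechanism behind the 90-plus percent false-positive rates the text describes; machine-learning approaches aim to rank such hits by how anomalous they actually are.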


In addition to the challenges associated with locating financial crimes among the massively complex web of global transactions, banks also must perform identity verification checks on new customers and submit beneficial owner data to FinCEN to prevent launderers from hiding behind fake shell companies. The war in Ukraine and the toughening of sanctions on Russian oligarchs have highlighted the need for better screening mechanisms to restrict the financial activity of individuals who appear on sanctions lists. While a growing industry of regtech firms is attempting to help financial institutions more efficiently comply with Know-Your-Customer (KYC) rules, FinCEN is in the midst of implementing legislative reforms requiring corporations to submit data to a new beneficial owner database.

In 2018 and 2019, the FCA held two international tech sprints aimed at addressing AML challenges. The first sprint dealt with enabling regulators and law enforcement to share threat information more safely and effectively. The second focused on Privacy-Enhancing Technologies, or PETs, of various kinds. For example, homomorphic encryption is a technique that shows promise for enabling data shared through AML processes to be encrypted throughout the analytical process, so that the underlying information is concealed from other parties and privacy is preserved. Another PET technique known as zero-knowledge proof enables one party to ask another essentially a yes-or-no question without the need to share the underlying details that spurred the inquiry. For example, one bank could ask another if a certain person is a customer, or if that person engaged in a certain transaction. Techniques like this can be used to enable machine-learning analysis of laundering patterns without compromising privacy or potentially undermining the secrecy of an ongoing investigation.
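The additive property that makes homomorphic encryption useful for this kind of shared analysis can be demonstrated with a toy Paillier scheme: multiplying two ciphertexts yields an encryption of the sum of the plaintexts. The primes below are tiny, for illustration only; a real deployment would use 2048-bit moduli and a vetted cryptography library:

```python
import math
import random

# Toy Paillier cryptosystem (additively homomorphic).
# Tiny primes for illustration only; real keys are thousands of bits.
p, q = 101, 113
n = p * q                     # public modulus
n2 = n * n
g = n + 1                     # standard choice of generator
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)   # lcm(p-1, q-1)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)   # private decryption coefficient

def encrypt(m):
    r = random.randrange(1, n)        # fresh randomness per ciphertext
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

a, b = 1234, 5678
combined = (encrypt(a) * encrypt(b)) % n2   # "addition" under encryption
print(decrypt(combined))                    # -> 6912
```

The intuition for the AML sprints above: each institution could contribute encrypted figures and let an analyst total them without ever seeing the individual amounts, though production privacy-enhancing technologies are far more involved than this sketch.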

The SBA did make efforts to evaluate AI tools to detect fraud in PPP loans, looking to certain AI-powered fintech lenders. Nevertheless, the small business loan program was still rife with fraud. (In fact, some of the attention regarding fraud concerns has centered on loans processed by fintech firms.12) Several studies show that effective use of machine learning in credit decisioning can more easily detect when, for example, loan applications are submitted by fake entities.

One of the biggest fraud threats facing financial institutions is the use of synthetic identities by bad actors. These are created by combining real customer information with fake data in a series of steps that can fool normal detection systems but can often be caught by regtech analysis using more data and machine learning.

Many regtech solutions for fighting money laundering grew out of technology for identifying fraud, which has generally been more advanced. This may be because the industry has an enormous financial interest in preventing fraud losses. It may also reflect the fact that, in fraud, firms are usually dealing with the certainty of a problem, whereas in AML, they often never know whether the Suspicious Activity Reports they file with FinCEN lead to anything useful. These factors make it all the more important to equip banks and their regulators with tools that can more easily, and less expensively, detect patterns of crime.

U.S. consumer protection law bans Unfair and Deceptive Acts and Practices (UDAP), both in the financial sector and overall, and adds the criterion of abusive activity for purposes of enforcement by the Consumer Financial Protection Bureau (UDAAP). However, enforcement of subjective standards like unfairness and deception is challenging, often hampered by the difficulty of detecting and analyzing patterns of potentially illegal behavior. As with discrimination, UDAAP enforcement relies on considerable subjective judgment in distinguishing activities that are against the law from more benign patterns. This also makes compliance difficult. AI-based regtech can bring to bear the power of more data and AI analytical tools to solve these challenges, allowing regulators to detect and prove violations more easily. It might also enable them to issue clearer and more concrete guidance, including more sophisticated standards on statistical modeling, to help industry avoid discrimination and UDAAP violations.

There is a growing recognition among advocates that full financial inclusion, especially for emerging markets, requires greatly expanded use of digital technology. Access to cell phones has, in effect, put a bank branch in the hands of two-thirds of the world's adults. This unprecedented progress has, in turn, highlighted barriers to further success, most of which could be solved or ameliorated with better data and AI.

One is the problem of AML de-risking. As noted above, banks must follow Know-Your-Customer (KYC) rules before accepting new customers, a process that includes verifying the person's identity. In many developing countries, poor people, and particularly women, lack formal identity papers like birth certificates and driver's licenses, effectively excluding them from access to the formal financial system.13 In some parts of the world, the regulatory pressure on banks to manage risk associated with taking on new customers has resulted in whole sectors, and in some countries the entire population, being cut off from banking services.14 In reality, these markets include millions of consumers who would be well-suited to opening an account and do not present much risk at all. Banks and regulators struggle with how to distinguish high-risk individuals from those who are low risk. A great deal of work is underway in various countries to solve this problem more fully with AI, through the use of digital identity mechanisms that can authenticate a person's identity via their digital footprints.

A related challenge is that expanded financial inclusion has produced increased need for better consumer protection. This is especially important for people who are brought into the financial system by inclusion strategies and who may lack prior financial background and literacy, making them vulnerable to predatory practices, cyber scams, and other risks. Regulators are using AI chatbots equipped with NLP to collect and analyze consumer complaints at scale and to crawl the web for signs of fraudulent activity.

One example is the RegTech for Regulators Accelerator (R2A) launched in 2016 with backing from the Bill & Melinda Gates Foundation, the Omidyar Network, and USAID.15 It focuses on designing regulatory infrastructure in two countries, the Philippines and Mexico. Emphasizing the need for consumers to access services through their cell phone, the project introduced AML reporting procedures and chatbots through which consumers could report complaints about digital financial products directly to regulators.

Importantly, regtech innovation in the developing world often exceeds that in the major advanced economies. One reason is that many emerging countries never built the complex regulatory infrastructure that is commonplace today in regions like the U.S., Canada, and Europe. This creates an opportunity to start with a clean slate, using today's best technology rather than layering new requirements on top of yesterday's systems.

Perhaps AI's greatest financial inclusion promise lies in the emergence of data-centered credit underwriting techniques that evaluate loan applications. Traditional credit underwriting has relied heavily on a narrow set of data, especially the individual's income and credit history as reported to the major Credit Reporting Agencies, because this information is easily available to lenders. Credit scores are accurate in predicting default risk among people with good FICO scores (and low risk of default). However, those traditional underwriting techniques skew toward excluding some people who could repay a loan but have a thin credit file (and hence a lower or no credit score) or a complicated financial situation that is harder to underwrite.

AI underwriting is beginning to be used by lenders, especially fintechs. AI is also increasingly being used by financial firms as a regtech tool to check that the main underwriting process complies with fair-lending requirements. A third process, much less developed, is the potential for the same technologies to be used by regulators to check for discrimination by lenders, including structural bias and unintentional exclusion of people who could actually repay a loan. Structural biases often lead to disparate impact outcomes. In these cases, regulators assert that a lending policy was discriminatory on the basis of race, gender, or other prohibited factors, not because of intent but because a specific class of consumers endured negative outcomes. Because disparate impact is a legal standard16 and violations of these laws create liability for lenders, these claims may also be made by plaintiffs representing people who argue they have been wronged.
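Disparate impact is often screened quantitatively before any legal claim is made. One commonly cited screen, borrowed from employment law, is the "four-fifths rule": flag when one group's approval rate falls below 80 percent of the most-favored group's. The sketch below uses made-up loan decisions; real fair-lending analysis is far more involved, with controls, significance testing, and legal judgment layered on top.

```python
# Minimal sketch of a four-fifths-rule disparate-impact screen.
# Group labels and decisions (1 = approved, 0 = denied) are fabricated.

def approval_rate(decisions):
    return sum(decisions) / len(decisions)

def adverse_impact_ratios(decisions_by_group):
    """Each group's approval rate relative to the most-favored group."""
    rates = {g: approval_rate(d) for g, d in decisions_by_group.items()}
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

decisions = {
    "group_a": [1, 1, 1, 0, 1, 1, 1, 1, 0, 1],  # 80% approved
    "group_b": [1, 0, 0, 1, 0, 1, 0, 0, 1, 0],  # 40% approved
}
ratios = adverse_impact_ratios(decisions)
flagged = {g for g, r in ratios.items() if r < 0.8}
print(flagged)  # {'group_b'}: ratio 0.5, below the 0.8 threshold
```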

Research conducted by FinRegLab and others is exploring the potential for AI-based underwriting to make credit decisions more inclusive with little or no loss of credit quality, and possibly even with gains in loan performance. At the same time, there is clearly risk that new technologies could exacerbate bias and unfair practices if not properly designed, which will be discussed below.

In March 2022, the Securities and Exchange Commission proposed rules for requiring public companies to disclose risks relating to climate change.17 The effectiveness of such a mandate will inevitably be limited by the fact that climate impacts are notoriously difficult to track and measure. The only feasible way to solve this will be by gathering more information and analyzing it with AI techniques that can combine vast sets of data about carbon emissions and metrics, interrelationships between business entities, and much more.

The potential benefits of AI are enormous, but so are the risks. If regulators mis-design their own AI tools, and/or if they allow industry to do so, these technologies will make the world worse rather than better. Some of the key challenges are:

Explainability: Regulators exist to fulfill mandates to oversee risk and compliance in the financial sector. They cannot, will not, and should not hand their role over to machines without having certainty that the technology tools are doing it right. They will need methods either for making AI's decisions understandable to humans or for having complete confidence in the design of tech-based systems. These systems will need to be fully auditable.

Bias: There are very good reasons to fear that machines will increase rather than decrease bias. Technology is amoral. AI learns without the constraints of ethical or legal considerations, unless such constraints are programmed into it with great sophistication. In 2016, Microsoft introduced an AI-driven chatbot called Tay on social media. The company withdrew the initiative in less than 24 hours because interacting with Twitter users had turned the bot into a racist jerk. People sometimes point to the analogy of a self-driving vehicle. If its AI is designed to minimize the time elapsed to travel from point A to point B, the car or truck will go to its destination as fast as possible. However, it could also run traffic lights, travel the wrong way on one-way streets, and hit vehicles or mow down pedestrians without compunction. Therefore, it must be programmed to achieve its goal within the rules of the road.

In credit, there is a high likelihood that poorly designed AIs, with their massive search and learning power, could seize upon proxies for factors such as race and gender, even when those criteria are explicitly banned from consideration. There is also great concern that AIs will teach themselves to penalize applicants for factors that policymakers do not want considered. Some examples point to AIs calculating a loan applicant's financial resilience using factors that exist because the applicant was subjected to bias in other aspects of her or his life. Such treatment can compound rather than reduce bias on the basis of race, gender, and other protected factors. Policymakers will need to decide what kinds of data or analytics are off-limits.

One solution to the bias problem may be use of adversarial AIs. With this concept, the firm or regulator would use one AI optimized for an underlying goal or function, such as combating credit risk, fraud, or money laundering, and would use another, separate AI optimized to detect bias in the decisions of the first. Humans could resolve the conflicts and might, over time, gain the knowledge and confidence to develop a tie-breaking AI.
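A minimal sketch of the audit half of this adversarial setup: if a second model can predict a protected attribute from the first model's decisions better than chance, the decisions are leaking that attribute. The data is fabricated, and a simple frequency table stands in for what would, in practice, be a trained adversary model.

```python
# Sketch of an adversarial bias audit. The "adversary" is a frequency
# table over the primary model's decisions; a real setup would train a
# classifier. Decisions and group labels are invented for illustration.

from collections import Counter

def adversary_accuracy(decisions, protected):
    """Best accuracy of predicting the protected attribute from the decision alone."""
    by_decision = {}
    for d, p in zip(decisions, protected):
        by_decision.setdefault(d, Counter())[p] += 1
    correct = sum(c.most_common(1)[0][1] for c in by_decision.values())
    return correct / len(decisions)

def baseline_accuracy(protected):
    """Accuracy of always guessing the majority group."""
    return Counter(protected).most_common(1)[0][1] / len(protected)

decisions = ["approve", "approve", "deny", "deny", "approve", "deny"]
protected = ["a", "a", "b", "b", "a", "b"]  # perfectly leaked by decisions

print(adversary_accuracy(decisions, protected))  # 1.0
print(baseline_accuracy(protected))              # 0.5 -> large gap = bias flag
```

The gap between the adversary's accuracy and the majority-class baseline is the signal a human reviewer (or, eventually, a tie-breaking AI) would investigate.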

Data quality: As noted earlier, AI and data management are inextricably intertwined, so that acceptable AI usage will not emerge unless regulators and others solve the many related challenges regarding data use. As with any kind of decision making, AI-based choices are only as good as the information on which they rely.


Accordingly, regulators face tremendous challenges regarding how to receive and clean data. AI can deal most easily with structured data, which arrives in organized formats and fields that the algorithm easily recognizes and puts to use. With NLP tools, AI can also make sense of unstructured data. Being sure, however, that the AI is using accurate data and understanding it requires a great deal of work. Uses of AI in finance will require ironclad methods for ensuring that data is collected and cleaned properly before it undergoes algorithmic analysis. The old statistics maxim "garbage in, garbage out" becomes even more urgent when the statistical analysis will be done by machines using methods that their human minders cannot fully grasp.
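The "cleaned properly before it undergoes algorithmic analysis" step is usually implemented as a validation gate: records that fail a declared schema are quarantined before they ever reach a model. A minimal sketch, with a hypothetical schema and records; real pipelines would layer on type coercion, deduplication, and data lineage tracking.

```python
# A minimal data-validation gate of the kind the "garbage in, garbage
# out" point argues for. The schema, field names, and records are all
# invented for illustration.

SCHEMA = {
    "amount": lambda v: isinstance(v, (int, float)) and v > 0,
    "currency": lambda v: v in {"USD", "EUR", "GBP"},
    "country": lambda v: isinstance(v, str) and len(v) == 2,
}

def validate(record):
    """Return the list of fields that fail the schema (empty = clean)."""
    return [f for f, ok in SCHEMA.items() if f not in record or not ok(record[f])]

records = [
    {"amount": 250.0, "currency": "USD", "country": "US"},
    {"amount": -40, "currency": "usd", "country": "USA"},  # three defects
]
clean = [r for r in records if not validate(r)]
print(len(clean))             # only 1 record passes
print(validate(records[1]))   # ['amount', 'currency', 'country']
```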

It is critical that policymakers focus on what is at stake. AI that might be good at, say, recommending a movie to watch on Netflix will not suffice for deciding whether to approve someone for a mortgage or a small-business loan or let them open a bank account.

Data protection and privacy: Widespread use of AI will also necessitate deep policy work on the ethics and practicalities of using data. What kinds of information should be used and what should be off-limits? How will it be protected from security risks and government misuse? Should people have the right to force removal of past online data, and should companies' encryption techniques be impenetrable even by the government?

Privacy-enhancing technologies may be able to mitigate these risks, but the dangers will require permanent vigilance. The challenge will spike even higher with the approach of quantum computing, which has the power to break the encryption techniques used to keep data safe.

Model Risk Management (MRM): Mathematical models are already widely used in financial services and financial regulation. They raise challenges that will only grow as AI becomes more widely employed. This is particularly true as AI is placed in the hands of people who do not understand how it makes decisions. Regulators and industry alike will need clear governance protocols to ensure that these AI tools are frequently retested, built on sufficiently robust and accurate data, and kept up to date in both their data and technical foundations.

Redesigning financial regulation to catch up to the acceleration of AI and other industry innovation is somewhat analogous to the shift in cameras from analog to digital at the turn of the millennium. An analog camera produces an image in a form that is cumbersome, requiring expert (and expensive) manipulation to edit photos. Improving the process of taking pictures with 35-millimeter film hits a ceiling at a certain point. By comparison, the digital or smartphone camera was a whole new paradigm, converting images into digital information that could be copied, printed, subjected to artificial intelligence for archiving and other methods, and incorporated into other media. The digital camera was not an evolution of the analog version that preceded it. It was entirely different technology.

Similarly, current regulatory technologies are built on top of an underlying system of information and processes that were all originally designed on paper. As a result, they are built around the constraining assumptions of the analog era, namely that information is scarce and expensive to obtain, and so is computing power.

To undertake a more dramatic shift to a digitally native design, regulators should create new taxonomies of their requirements (which some agencies are already developing) that can be mapped to AI-powered machines. They should also develop comprehensive education programs to train their personnel in technology knowledge and skills, including baseline training on core topics, of which AI is a single, integral part. Other key big data issues include the Internet of Things, cloud computing, open source code, blockchains and distributed ledger technology, cryptography, quantum computing, Application Programming Interfaces (APIs), robotic process automation (RPA), privacy-enhancing technologies (PETs), Software as a Service (SaaS), agile workflow, and human-centered design.
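One way to picture a "taxonomy entry mapped to machines": a requirement expressed as structured data a program can evaluate directly, rather than prose a compliance officer must interpret. The rule below is loosely modeled on currency-transaction reporting, but its identifier, fields, and logic are all illustrative, not an actual regulatory encoding.

```python
# Sketch of a digitally native requirement: a rule as evaluable data.
# Rule id, fields, and transactions are hypothetical.

RULE = {
    "id": "CTR-001",
    "description": "Cash transactions over $10,000 must be reported",
    "applies_if": lambda txn: txn["type"] == "cash",
    "check": lambda txn: txn["amount"] <= 10_000 or txn["reported"],
}

def evaluate(rule, transactions):
    """Return transactions that the rule applies to but that fail it."""
    return [t for t in transactions
            if rule["applies_if"](t) and not rule["check"](t)]

txns = [
    {"id": 1, "type": "cash", "amount": 12_000, "reported": True},
    {"id": 2, "type": "cash", "amount": 15_000, "reported": False},
    {"id": 3, "type": "wire", "amount": 50_000, "reported": False},
]
violations = evaluate(RULE, txns)
print([t["id"] for t in violations])  # [2]
```

Once requirements live in this form, both regulators and regulated firms can run the same machine-readable rulebook over their data, which is the clean-slate advantage the paragraph above describes.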

These are big challenges that bring substantial risks, but the cost of sticking with largely analog systems is greater. Personnel may fear that such an overhaul could result in machines taking their jobs, or that machines will make catastrophic errors, resulting in financial mishaps. On the first fear, robotics and AI can in fact empower human beings to do their jobs better, automating vast amounts of routine work and freeing up people to use their uniquely human skills on high-value objectives. On the second fear, agencies should build cultures grounded in an understanding that humans should not cede significant decision-making to machines. Rather, experts should use technology to help prioritize their own efforts and enhance their work.

Data is the new oil not only in its value but in its impact: Like oil, digitization of data can solve some problems and cause others. The key to achieving optimal outcomes is to use both data and AI in thoughtful ways, carefully designing new systems to prevent harm while seizing on AI's ability to analyze volumes of information that would overwhelm traditional methods of analysis. A digitally robust regulatory system with AI at its core can equip regulators to solve real-world problems, while showcasing how technology can be used for good in the financial system and beyond.

The author serves on the board of directors of FinRegLab, a nonprofit organization whose research includes a focus on use of AI in financial regulatory matters. She did not receive financial support from any firm or person for this article or from any firm or person with a financial or political interest in this article. Other than the aforementioned, the author is not currently an officer, director, or board member of any organization with a financial or political interest in this article.

Read the original:

The case for placing AI at the heart of digitally robust financial regulation - Brookings Institution

Advantages of custom software development versus ready products – Techstory

Every company and enterprise works with large amounts of information every day. Financial statements, accounting records, orders and contracts, manuals, and much more: all of this requires reliable storage and effective management. IT exists to support work with such information through software, which helps streamline work processes and saves the company time and money.

In most typical cases, there are many ready-made programs that you can implement and start using:

Before choosing one or another solution, you need to look at all the pros and cons of each of them.

There are many companies that specialize in custom software development. Nix United is one of the few that can be truly useful for both large companies and small firms. On their website you can order the development of software and mobile applications of any complexity.

Custom development has the following pros and cons:

Pros:

Cons:


Here is the original post:

Advantages of custom software development versus ready products - Techstory

Copilot, GitHubs AI-powered coding tool, will be free for students – TechCrunch

Last June, Microsoft-owned GitHub and OpenAI launched Copilot, a service that provides suggestions for whole lines of code inside development environments like Microsoft Visual Studio. Available as a downloadable extension, Copilot is powered by an AI model called Codex that's trained on billions of lines of public code to suggest additional lines of code and functions given the context of existing code. Copilot can also surface an approach or solution in response to a description of what a developer wants to accomplish (e.g., "Say hello world"), drawing on its knowledge base and current context.

While Copilot was previously available in technical preview, it'll become generally available starting sometime this summer, Microsoft announced at Build 2022. Copilot will also be available free for students as well as verified open source contributors. On the latter point, GitHub said it'll share more at a later date.

The Copilot experience won't change much with general availability. As before, developers will be able to cycle through suggestions for Python, JavaScript, TypeScript, Ruby, Go and dozens of other programming languages and accept, reject or manually edit them. Copilot will adapt to the edits developers make, matching particular coding styles to autofill boilerplate or repetitive code patterns and recommend unit tests that match implementation code.

Copilot extensions will be available for Neovim and JetBrains in addition to Visual Studio Code, or in the cloud on GitHub Codespaces.

One new feature coinciding with the general release of Copilot is Copilot Explain, which translates code into natural language descriptions. Described as a research project, the goal is to help novice developers or those working with an unfamiliar codebase.

"Earlier this year we launched Copilot Labs, a separate Copilot extension developed as a proving ground for experimental applications of machine learning that improve the developer experience," Ryan J. Salva, VP of product at GitHub, told TechCrunch in an email interview. "As a part of Copilot Labs, we launched 'explain this code' and 'translate this code.' This work fits into a category of experimental capabilities that we are testing out that give you a peek into the possibilities and lets us explore use cases. Perhaps with 'explain this code,' a developer is wading into an unfamiliar codebase and wants to quickly understand what's happening. This feature lets you highlight a block of code and ask Copilot to explain it in plain language. Again, Copilot Labs is intended to be experimental in nature, so things might break. Labs experiments may or may not progress into permanent features of Copilot."

Copilot's new feature, Copilot Explain, translates code into natural language explanations. Image Credits: Copilot

Owing to the complicated nature of AI models, Copilot remains an imperfect system. GitHub warns that it can produce insecure coding patterns, bugs and references to outdated APIs, or idioms reflecting the less-than-perfect code in its training data. The code Copilot suggests might not always compile, run or even make sense, because it doesn't actually test the suggestions. Moreover, in rare instances, Copilot suggestions can include personal data like names and emails verbatim from its training set and, worse still, biased, discriminatory, abusive, or offensive text.

GitHub said that it's implemented filters to block emails when shown in standard formats, as well as offensive words, and that it's in the process of building a filter to help detect and suppress code that's repeated from public repositories. "While we are working hard to make Copilot better, code suggested by Copilot should be carefully tested, reviewed, and vetted, like any other code," the disclaimer on the Copilot website reads.

While Copilot has presumably improved since its launch in technical preview last year, it's unclear by how much. The capabilities of the underpinning model, Codex, a descendant of OpenAI's GPT-3, have since been matched (or even exceeded) by systems like DeepMind's AlphaCode and the open source PolyCoder.

"We are seeing progress in Copilot generating better code. We're using our experience with [other] tools to improve the quality of Copilot suggestions, e.g., by giving extra weight to training data scanned by CodeQL, or analyzing suggestions at runtime," Salva asserted, with CodeQL referring to GitHub's code analysis engine for automating security checks. "We're committed to helping developers be more productive while also improving code quality and security. In the long term, we believe Copilot will write code that's more secure than the average programmer."

The lack of transparency doesn't appear to have dampened enthusiasm for Copilot, which Microsoft said today suggested about 35% of the code written in languages like Java and Python by developers in the technical preview. Tens of thousands have regularly used the tool throughout the preview, the company claims.

Here is the original post:
Copilot, GitHubs AI-powered coding tool, will be free for students - TechCrunch

AI and low/no code: What they can and cant do together – VentureBeat


Artificial Intelligence (AI) is in the fast lane and driving toward mainstream enterprise acceptance, but, at the same time, another technology is making its presence known: low-code and no-code programming. While these two initiatives inhabit different spheres within the data stack, they nevertheless offer some intriguing possibilities to work in tandem to vastly simplify and streamline data processes and product development.

Low-code and no-code are intended to make it simpler to create new applications and services, so much so that even nonprogrammers, i.e., the knowledge workers who actually use these apps, can create the tools they need to complete their own tasks. They work primarily by creating modular, interoperable functions that can be mixed and matched to suit a wide variety of needs. If this technology can be combined with AI to help guide development efforts, there's no telling how productive the enterprise workforce can become in a few short years.

Venture capital is already starting to flow in this direction. A startup called Sway AI recently launched a drag-and-drop platform that uses open-source AI models to enable low-code and no-code development for novice, intermediate and expert users. The company claims this will allow organizations to put new tools, including intelligent ones, into production quicker, while at the same time fostering greater collaboration among users to expand and integrate these emerging data capabilities in ways that are both efficient and highly productive. The company has already tailored its generic platform for specialized use cases in healthcare, supply chain management and other sectors.

AI's contribution to this process is basically the same as in other areas, says Gartner's Jason Wong: taking on rote, repetitive tasks, which in development processes includes things like performance testing, QA and data analysis. Wong noted that while AI's use in no-code and low-code development is still in its early stage, big hitters like Microsoft are keenly interested in applying it to areas like platform analysis, data anonymization and UI development, which should greatly alleviate the current skills shortage that is preventing many initiatives from achieving production-ready status.

Before we start dreaming about an optimized, AI-empowered development chain, however, we'll need to address a few practical concerns, according to developer Anouk Dutre. For one thing, abstracting code into composable modules creates a lot of overhead, and this introduces latency to the process. AI is gravitating increasingly toward mobile and web applications, where even delays of 100 ms can drive users away. For back-office apps, which tend to quietly churn away for hours, this shouldn't be much of an issue, but then, this isn't likely to be a ripe area for low- or no-code development either.

Additionally, most low-code platforms are not very flexible, given that they work with largely pre-defined modules. AI use cases, however, are usually highly specific and dependent on the data that is available and how it is stored, conditioned and processed. So, in all likelihood, you'll need customized code to make an AI model function properly with other elements in the low/no-code template, and this could end up costing more than the platform itself. This same dichotomy impacts functions like training and maintenance as well, where AI's flexibility runs into low/no-code's relative rigidity.

Adding a dose of machine learning to low-code and no-code platforms could help loosen them up, however, and add a much-needed dose of ethical behavior as well. Persistent Systems' Dattaraj Rao recently highlighted how ML can allow users to run pre-canned patterns for processes like feature engineering, data cleansing, model development and statistical comparison, all of which should help create models that are transparent, explainable and predictable.

It's probably an overstatement to say that AI and no/low-code are like chocolate and peanut butter, but there are solid reasons to expect that they can enhance each other's strengths and diminish each other's weaknesses in a number of key applications. As the enterprise becomes increasingly dependent on the development of new products and services, both technologies can remove the many roadblocks that currently stifle this process, and this will likely remain the case regardless of whether they are working together or independently.


Read this article:
AI and low/no code: What they can and cant do together - VentureBeat

Apple expands Today at Apple Creative Studios – Apple

May 24, 2022

PRESS RELEASE

Apple expands Today at Apple Creative Studios, providing new opportunities to young creatives

Select Apple Store locations across the globe will host all-new Creative Studios sessions open to the local community

CUPERTINO, CALIFORNIA - Apple has unveiled plans to bring its Today at Apple Creative Studios initiative to even more young creatives from underrepresented communities around the world. The expanded program offers career-building mentorship, training, and resources across a wide range of artistic disciplines, which now include all-new curricula in app design, podcasting, spatial audio production, and filmmaking. This year, Creative Studios will launch in seven new cities: Nashville, Miami, Berlin, Milan, Taipei, Tokyo, and Sydney. It will also return for its second year in Chicago; Washington, D.C.; New York City; London; Paris; Bangkok; and Beijing.

"Our stores have long provided a platform to showcase the great talent of local artists, and our retail teams are proud to play a role in supporting creativity within their communities and creating a place where everyone is welcome," said Deirdre O'Brien, Apple's senior vice president of Retail + People. "We're enormously grateful to our Apple Creative Pros, our retail team members, and local partners, who together make it possible for us to expand access to free arts education and mentorship to even more communities."

Designed to support young people who face barriers to receiving a quality creative education, Creative Studios connects participants with mentors from Apple and more than 30 nonprofit community partners who specialize in areas such as books and storytelling, app design, radio and podcasts, and photography, film, and TV. Participants will receive hands-on education, training, and feedback on their projects. In addition to nurturing participants' creative skills, mentors will encourage them to think about how their talents can encourage social change in their communities.

Apple Store locations in select cities will also host public Today at Apple Creative Studios sessions. Led by the established artists mentoring young participants in Creative Studios and Apple Creative Pros, these free events will be open to the public, with registration available at apple.com/today.

"It was an honor to share my passion for Apple technology and storytelling with these young people," said Rudy P., an Apple Creative Pro at Apple Carnegie Library in Washington. "Technology entered my life at a young age and completely changed my trajectory. My hope for these published authors is that they continue to tell the stories of their lives. The world needs their point of view."

Last year, over 400 young people participated in Creative Studios programming. Communities celebrated the books, films, and music the participants developed, and showcased participants art via Apple TV, Apple Books, and Apple Music.

This years program includes:

App Design (New York)

New for this year, Creative Studios New York will provide mentorship, insight, and resources to women and nonbinary creatives as they conceptualize apps to drive social impact.

Books and Storytelling (Miami, Washington)

Young creatives in Washington will hone their skills in creative writing and visual storytelling as they make their own board books, audiobooks, and storyboards. In Miami, Apple joins community partner O, Miami in a program for BIPOC+ emerging artists to explore storytelling through the creation of micro audiobooks.

Music, Radio, and Podcasts (Berlin, Nashville, Chicago, Paris)

Aspiring musicians in Berlin will learn about radio production while exploring themes of belonging with guidance from inspiring mentors, working closely with Refuge Worldwide and Open Music Lab. In Nashville, in collaboration with the National Museum of African American Music, the program will specialize in spatial audio recording by granting participants access to Apple Music studios. In Paris, participants will build skills in creative storytelling, audio engineering, and recording through new programming focused on podcasting. And in Chicago, young creatives from the Southwest Side will amplify their own narratives and stories around the theme of belonging and identity through an experience focused on radio production and audio/video experimentation.

Art and Design (Taipei, Milan)

Creative Studios in Taipei and Milan will connect mentors with aspiring young designers as they explore identity while designing, creating, and promoting content that represents themselves and their communities. In Milan, participants will have the opportunity to create an inclusive media landscape through a program that celebrates the diversity of fashion, art, and design led by Afro Fashion, while Taipei's program will create a safe space for young people to explore gender and identity through creativity. These programs will guide participants through production, provide mentorship and inspiration, and create access to resources and insights from the design industry.

Photography, Film, and TV (London, Sydney, Beijing, Tokyo, Bangkok)

Participants in London and Sydney will explore identity, culture, and representation as they create short documentary films and build skills in cinematography, direction, and editing, with feedback and insight from established artists. In Beijing and Tokyo, participants will receive professional guidance to dive into photography and videography as a means to tell their own stories. In Bangkok, Apple joins community partner Saturday School Foundation for a second year, providing young creatives the opportunity to explore a wide range of Apple-led sessions taught by Apple's Creative Pro team. This six-week in-store program will focus on photography and music creation.

About Apple

Apple revolutionized personal technology with the introduction of the Macintosh in 1984. Today, Apple leads the world in innovation with iPhone, iPad, Mac, Apple Watch, and Apple TV. Apple's five software platforms (iOS, iPadOS, macOS, watchOS, and tvOS) provide seamless experiences across all Apple devices and empower people with breakthrough services including the App Store, Apple Music, Apple Pay, and iCloud. Apple's more than 100,000 employees are dedicated to making the best products on earth, and to leaving the world better than we found it.

Press Contacts

Josh Lipton

Apple

j_lipton@apple.com

Monica Fernandez

Apple

monicaf@apple.com

Apple Media Helpline

media.help@apple.com

(408) 974-2042

Excerpt from:
Apple expands Today at Apple Creative Studios - Apple

Camp For Seriously Ill Children To Build Second Location On Maryland's Eastern Shore – CBS Baltimore

BALTIMORE (WJZ) – A well-known camp for sick children and their families will build its second location on Maryland's Eastern Shore.

Paul Newman's Hole in the Wall Gang Camp announced plans Tuesday to open its second location in Queenstown, Md., at the Aspen Institute's former 166-acre Wye River Conference Center.

The Aspen Institute, a nonprofit for humanistic studies, is donating a majority of the property to Hole in the Wall. The property was gifted to the institute in 1988, so the organization is paying it forward.

"Since 1979, the Wye River campus has played an important role in the Aspen Institute's history. This beautiful and protected site has hosted countless seminars and convenings, including some of international significance," said Dan Porterfield, President and CEO of the Aspen Institute. "We are now proud to make available a significant part of this land to The Hole in the Wall Gang Camp to become their second location. Their mission to provide joy to children with serious illnesses and their families is inspiring, and secures a wonderful future for the Wye River campus."

The camp was founded in 1988 by legendary actor and entrepreneur Paul Newman to provide a different kind of healing to seriously ill children and their families, completely free of charge. The camp, based in Ashford, Connecticut, mostly serves families within a three-hour radius, as will the Maryland location.

The camp said the Maryland site is ideal because of its proximity to some of the United States' most prominent pediatric hospitals, like the Johns Hopkins Children's Center and the University of Maryland Children's Hospital.

The Wye Conference Center has several residential buildings and other conference facilities that will be renovated to give the camp a starting point to begin programming.

"The facilities and all programming will be designed to be family inclusive so that those most devastated and isolated by serious illnesses, including the rare disease community, will be able to find a caring community of support that understands their unique challenges," the camp said.

With its year-round programs, the camp serves 20,000 people annually, and it hopes to bring that impact to the Mid-Atlantic.

Lisa Nickerson and her son, Evan Bucklin, visited the camp when her daughter was ill.

"When my daughter died, I was equally worried about Evan because emotionally it took such a toll," Nickerson said.

"It truly is a one-of-a-kind experience," Bucklin said. "It gives you a lot of the things kids in that age, in that environment, really need, which is both friendship and an environment where they can have some fun. It gave me some sense of power over my own life."

Construction and renovation at the camp is expected to be complete by Summer 2023, which is when programming will begin.


Coastal Studies Institute to hold open house – The Coastland Times

East Carolina University's Integrated Coastal Programs (ECU ICP) and the Coastal Studies Institute (CSI) are hosting an open house from 1-4 p.m. on June 4, 2022 at the ECU Outer Banks Campus location in Wanchese. The public is welcome to attend this free event.

Attendees will be able to tour the campus, grounds and facilities, learn about current research and education programs, take part in family-friendly activities and interact with faculty and staff from ECU, CSI and partner organizations. The LEED Gold-certified ECU Outer Banks Campus is located on Roanoke Island at 850 NC Highway 345, approximately one mile from the Highway 64 and NC 345 intersection.

Located on the ECU Outer Banks Campus, ECU's Integrated Coastal Programs is a leader in coastal and marine research, education and engagement. The program uses an interdisciplinary approach and scientific advances to provide effective solutions to complex problems while helping coastal communities, ecosystems and economies thrive. ECU ICP includes a transdisciplinary Department of Coastal Studies, a PhD program in Integrated Coastal Sciences and the Coastal Studies Institute.

The Coastal Studies Institute is a multi-institutional research partnership led by East Carolina University, in association with NC State, UNC Chapel Hill, UNC Wilmington and Elizabeth City State University. CSI focuses on integrated coastal research and education programming centered on responding to the needs, issues and topics of concern of the residents of eastern North Carolina.

ECU ICP and CSI research and education initiatives span a variety of coastal topics from nearshore coastal estuaries to the offshore waters along the continental shelf. Visitors to the 2022 open house will learn about research initiatives first-hand from faculty and staff stationed throughout the facility.

Coastal geoscientists are researching the processes that drive coastal change, their effect on communities and ways to become more resilient in the face of increasing hazards that threaten the coast.

Ecologists are studying estuarine systems, their inputs and how people can ensure healthy coastal ecosystems for the future.

Oceanographers and coastal engineers are exploring ways to harness the power of the Gulf Stream, waves and other renewable ocean energy sources using new technologies to broaden North Carolina's energy portfolio.

Social scientists are working with coastal residents, visitors and relevant social statistics to better understand the impacts coastal change has on communities, while working to develop new and prosperous economies for the future.

Maritime archaeologists are researching and discovering new shipwrecks using advanced technologies while celebrating the maritime heritage of eastern North Carolina.

Faculty and staff are engaging the local community and the next generation of scientists and decision makers in education programming that fosters student interest in the fields of technology, engineering, art, math and science.

"ECU ICP and CSI welcome the public to take part in a fun-filled, engaging and educational event," stated the event announcement. "Join us for the 2022 Open House on the ECU Outer Banks Campus at the Coastal Studies Institute from 1:00-4:00 p.m. on Saturday, June 4, 2022."
