Category Archives: Technology

Sources and acknowledgments | The Economist

Posted: February 5, 2022 at 5:05 am

Jan 27th 2022

In addition to the people named in the text, the author would like to thank Justin Bronk, Kevin Copsey, Keith Dear, Michael C. Horowitz, Michael Kofman, Thomas Mahnken, Todd Master, Phil Muir, Rob Magowan, Nick Moran, Ruslan Pukhov, Henning Robach, Jack Shanahan, Ed Stringer, Phil Weir, Jerry Welsh and others who would prefer to remain anonymous.

Further reading on defence technology:

Christian Brose, The Kill Chain: Defending America in the Future of High-Tech Warfare, Hachette Books
Justin Bronk and Jack Watling, Necessary Heresies: Challenging the Narratives Distorting Contemporary UK Defence, RUSI
Paul Scharre, Army of None: Autonomous Weapons and the Future of War, W.W. Norton

Owen Cote vs Sebastian Brixley-Williams on anti-submarine warfare
Remy Hemez on decoys and Jennifer McArdle on deception
Jack Watling and CSIS on the lessons of the Nagorno-Karabakh war
T.X. Hammes on defence dominance

This article appeared in the Technology Quarterly section of the print edition under the headline "Sources and acknowledgments"

Privacy and Technology – wrps.on.ca

Posted: at 5:05 am

The exploration and use of technology are essential for WRPS to meet its obligations to the community regarding public safety, including the prevention and investigation of crimes, as well as to improve overall administration. Technologies are assessed to protect privacy and security while ensuring the public has access to police information as outlined in the Municipal Freedom of Information and Protection of Privacy Act (MFIPPA).

WRPS is committed to assessing the impacts of new and existing technology, procedures and programs with access and privacy at the forefront, as well as to ensuring compliance with the Criminal Code of Canada, the Charter of Rights and Freedoms, the Police Services Act, the Youth Criminal Justice Act and any other relevant legislation. As such, information is collected through lawful authority, judicial authorization or upon consent.

We continue our commitment to providing citizens with responsive policing services that foster a relationship of trust and transparency within our community.

Remotely Piloted Vehicle (RPV)

The Waterloo Regional Police Service (WRPS) uses a remotely piloted vehicle (RPV) to assist with a variety of law enforcement functions.

An RPV is also used to create internal training videos and external communication videos. Any use of personal information for these purposes requires a signed Photograph/Video Release and Consent form.

Body-Worn Video (BWV)/In-Car Video (ICV)

WRPS use of Body-Worn Video is informed by the Information and Privacy Commissioner of Ontario's Guidance for the use of Body-Worn Cameras by Law Enforcement Authorities to provide oversight and accountability for police interactions with the public.

Device Extraction Technologies

Device Extraction Technologies are utilized to unlock electronic devices and extract data relevant to law enforcement investigations or prosecutions. Use of this technology is based on consent, judicial authorization or immediate risk to public safety, and it is used by authorized police officers only.

GrayKey and Cellebrite

Statistics

Image Analytics

Image Analytics Technologies are utilized by authorized police officers to view, process and analyze lawfully obtained photographs, video footage, etc., for specific images that are relevant to law enforcement investigations or prosecutions. While this technology does utilize facial recognition, it is not used to scan the internet, social media, etc. It is used solely for video obtained on consent, through a warrant or via targeted electronic surveillance. The purpose of using this technology is to expedite the process of locating objects or individuals within the lawfully obtained video.

BriefCam

BriefCam is a new program utilized by WRPS in 2022. BriefCam software can quickly search volumes of video that would otherwise be impossible to examine manually, providing investigative clues that create intelligence and operational information for officers. BriefCam does not expand the collection of personal information by investigators.

Select the WRPS Feedback form to submit any questions or comments regarding the use of technology.

If you have questions about making an Access to Information request under the MFIPPA, please contact the Access to Information Unit.

House competition bill aims to boost minorities in science and technology – CNBC

Posted: at 5:05 am

Congress is aiming to reshape America's workforce through new legislation that would direct more than $1 billion toward increasing diversity of the scientists, researchers and technologists who drive the innovation economy.

The measure includes $900 million for grants and partnerships with historically Black colleges and universities, $164 million to study barriers for people of color in the field and $17.5 million to combat sexual harassment. They're part of an expansive package of bills known as the America Competes Act, which lawmakers hope will ensure the United States continues to lead the global economy.

"We cannot compete internationally without having the available talent," House Science Committee Chairwoman Eddie Bernice Johnson, D-Texas, told CNBC. "We've got to make sure we build in the mechanism to get that talent."

Mirimus, Inc. lab scientists work to validate rapid IgM/IgG antibody tests of COVID-19 samples from recovered patients on April 10, 2020 in the Brooklyn borough of New York City.

Misha Friedman | Getty Images

The House passed the package Friday. It includes signature items such as funding for the domestic semiconductor industry and efforts to tackle supply chain shortages. Speaker Nancy Pelosi had enough support to pass the legislation despite opposition from House Republicans who want to take a tougher stance against China.

A version of the bill passed the Senate last summer with strong bipartisan support. The two chambers will have to negotiate a compromise version of the legislation. The White House has made getting the bill to the president's desk one of the administration's top priorities as its social spending plan and other legislative initiatives languish.

"Our red line is doing nothing or taking too long," Commerce Secretary Gina Raimondo told reporters Friday. She added: "My message to everyone is to find common ground, quickly. This should take weeks not months."

A report from the National Academies of Sciences, Engineering and Medicine estimated the United States will need 1 million more people employed in those sectors over the next decade than it is currently on track to produce. The group said the country will not reach that goal without substantially increasing diversity in the labor force.

"A clear takeaway from the projected demographic profile of the nation is that the educational outcomes and STEM readiness of students of color will have direct implications on America's economic growth, national security, and global prosperity," the report said.

The bill would also authorize new investments for colleges and universities that primarily serve students of color through research funding and enhanced engagement. About 18% of Black college graduates in science and technology come from historically Black colleges and universities, according to the National Science Foundation.

"We've got to build the opportunity," Johnson said. "We've got to invest in building talent from this level, which means if they're at HBCUs, then we've got to invest in HBCUs."

At Spelman College in Atlanta, more Black women have gone on to earn doctoral degrees than at any other school in the country. The historically Black college is planning to build a new innovation lab over the next two years thanks to a $10 million gift from the foundation named after Arthur Blank, co-founder of Home Depot.

The school's president, Mary Schmidt Campbell, told CNBC that Washington also plays an important role by setting the national agenda. She said the new legislation could "democratize" innovation and ultimately benefit businesses' bottom line.

"There's of course the altruistic mission of making sure that everybody is included," Campbell said. "But there is a self-interested reason why companies should be interested in diversity: It's because it makes them better companies."

Correction: This story was updated to correct the spelling of the Spelman College president's name.

Severance Creator on the Technology, Location and Timeline – TheWrap

Posted: at 5:05 am

Very mild spoilers for the first episode of Severance below.

For every answer that the new Apple TV+ dramatic thriller series Severance offers, more compelling questions arise. That's certainly true of the technology at the center of the show, which allows people to literally split their work life and their home life through a controversial procedure called, appropriately enough, severance. Once severed, a person will have no memories of their work life while at home, and no memories of their home life at work. And the details of exactly how that technology works, according to creator and showrunner Dan Erickson, are purposefully a little vague.

Adam Scott plays Mark, a man who has undergone the severance procedure. On the outside, he's lonely and mourning his dead wife. On the inside, he's a bright and loyal worker at a mysterious corporation called Lumon.

The switch between "Outie" and "Innie" is visually represented by the character going down an elevator to some lower level where their Innie work life takes place. So how, exactly, does the elevator trigger the severance process? "I have file upon file on my laptop of walking through in my head how this scientifically makes sense. But suffice it to say, there's some sort of a barrier that if you're basically halfway down the elevator, you pass it," Erickson told TheWrap. "We've talked about it as a wire. We've talked about it as just some sort of a threshold; you pass that, and it sends a frequency to the chip in your head that causes you to switch to your Innie mode. And then it just comes back up when you're going home."

The show also finds characters exiting a stairwell where the same switch occurs. "For the stairwell, it's similar, whatever that thing is. And again, we sort of intentionally never fully decided, even for ourselves, like, what is the exact technology," he said. "But that threshold is in the doorway. So when Helly (Britt Lower) is running through, that's the moment that she's switching, is the moment that she passes through the doorway."

As the season progresses, the characters grow more and more curious about what, exactly, is going on inside Lumon, and viewers are unraveling the mystery at the same time. That mystery extends to the location of the show, which Erickson confesses was also left intentionally ambiguous.

"We sort of intentionally kept a lot of ambiguity to the time and place," Erickson told TheWrap. "We obviously shot mostly in New York and New Jersey, so there's sort of a vague New England, East Coast-y feel to the city, but we didn't really want to know exactly where it was or tie it to a specific locale."

As for when Severance takes place, it's not a far-off future. "It is around now, it's like vaguely now-ish," he said. "We're not going for something where this is 10 years in the future where severance has been invented and already exists. It's sort of an alternate, vaguely now-ish timeline."

The first two episodes of Severance premiere on Apple TV+ on Feb. 18 with new episodes airing weekly on Fridays.

Game-Changing Carbon Capture Technology To Remove 99% of CO2 From Air – SciTechDaily

Posted: at 5:05 am

University of Delaware researchers have broken new ground that could bring more environmentally friendly fuel cells closer to commercialization. Credit: Graphic illustration by Jeffrey C. Chase

University of Delaware researchers' carbon capture advance could bring environmentally friendly fuel cells closer to market.

University of Delaware engineers have demonstrated a way to effectively capture 99% of carbon dioxide from air using a novel electrochemical system powered by hydrogen.

It is a significant advance for carbon dioxide capture and could bring more environmentally friendly fuel cells closer to market.

The research team, led by UD Professor Yushan Yan, reported their method in Nature Energy on Thursday, February 3.

Fuel cells work by converting fuel chemical energy directly into electricity. They can be used in transportation for things like hybrid or zero-emission vehicles.

Yan, Henry Belin du Pont Chair of Chemical and Biomolecular Engineering, has been working for some time to improve hydroxide exchange membrane (HEM) fuel cells, an economical and environmentally friendly alternative to traditional acid-based fuel cells used today.

But HEM fuel cells have a shortcoming that has kept them off the road: they are extremely sensitive to carbon dioxide in the air. Essentially, the carbon dioxide makes it hard for a HEM fuel cell to breathe.

This defect quickly reduces the fuel cell's performance and efficiency by up to 20%, rendering the fuel cell no better than a gasoline engine. Yan's research group has been searching for a workaround for this carbon dioxide conundrum for over 15 years.

The UD research team's spiral-wound module takes in hydrogen and air through two separate inlets (shown on the left) and emits carbon dioxide and carbon dioxide-free air (shown on the right) after passing through two large-area, catalyst-coated shorted membranes. The inset image on the right shows, in part, how the molecules move within the short-circuited membrane. Credit: University of Delaware

A few years back, the researchers realized this disadvantage might actually be a solution for carbon dioxide removal.

"Once we dug into the mechanism, we realized the fuel cells were capturing just about every bit of carbon dioxide that came into them, and they were really good at separating it to the other side," said Brian Setzler, assistant professor for research in chemical and biomolecular engineering and paper co-author.

While this isn't good for the fuel cell, the team knew that if they could leverage this built-in self-purging process in a separate device upstream from the fuel cell stack, they could turn it into a carbon dioxide separator.

"It turns out our approach is very effective. We can capture 99% of the carbon dioxide out of the air in one pass if we have the right design and right configuration," said Yan.

So, how did they do it?

They found a way to embed the power source for the electrochemical technology inside the separation membrane. The approach involved internally short-circuiting the device.

"It's risky, but we managed to control this short-circuited fuel cell by hydrogen. And by using this internal electrically shorted membrane, we were able to get rid of the bulky components, such as bipolar plates, current collectors or any electrical wires typically found in a fuel cell stack," said Lin Shi, a doctoral candidate in the Yan group and the paper's lead author.

Now, the research team had an electrochemical device that looked like a normal filtration membrane made for separating out gases, but with the capability to continuously pick up minute amounts of carbon dioxide from the air like a more complicated electrochemical system.

This picture shows the electrochemical system developed by the Yan group. Inside the highlighted cylindrical metal housing is the research team's novel spiral-wound module. As hydrogen is fed to the device, it powers the carbon dioxide removal process. Computer software on the laptop plots the carbon dioxide concentration in the air after passing through the module. Credit: University of Delaware

In effect, embedding the device's wires inside the membrane created a shortcut that made it easier for the carbon dioxide particles to travel from one side to the other. It also enabled the team to construct a compact, spiral module with a large surface area in a small volume. In other words, they now have a smaller package capable of filtering greater quantities of air at a time, making it both effective and cost-effective for fuel cell applications. Meanwhile, fewer components mean less cost and, more importantly, provide a way to easily scale up for the market.

The research team's results showed that an electrochemical cell measuring 2 inches by 2 inches could continuously remove about 99% of the carbon dioxide found in air flowing at a rate of approximately two liters per minute. An early prototype spiral device about the size of a 12-ounce soda can is capable of filtering 10 liters of air per minute and scrubbing out 98% of the carbon dioxide, the researchers said.
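
Those figures allow a rough sanity check of the capture rates involved. The sketch below converts the reported flow rates and efficiencies into milligrams of CO2 removed per minute; the ambient CO2 concentration (~420 ppm) and ideal-gas molar volume are standard assumptions, not numbers from the paper.

```python
# Back-of-the-envelope check of the capture rates reported above.
# Assumed (not from the article): ambient CO2 ~420 ppm by volume,
# molar volume of air ~24.45 L/mol at 25 C and 1 atm, CO2 molar mass 44.01 g/mol.

MOLAR_VOLUME_L = 24.45     # liters of air per mole at ~25 C, 1 atm
CO2_FRACTION = 420e-6      # ambient CO2 volume fraction (assumption)
CO2_MOLAR_MASS = 44.01     # grams per mole

def co2_capture_mg_per_min(air_flow_l_per_min: float, efficiency: float) -> float:
    """Milligrams of CO2 scrubbed per minute from an ambient-air feed."""
    mol_air = air_flow_l_per_min / MOLAR_VOLUME_L
    mol_co2 = mol_air * CO2_FRACTION * efficiency
    return mol_co2 * CO2_MOLAR_MASS * 1000  # grams -> milligrams

print(co2_capture_mg_per_min(2, 0.99))    # 2-inch cell: ~1.5 mg CO2/min
print(co2_capture_mg_per_min(10, 0.98))   # soda-can prototype: ~7.4 mg CO2/min
```

The takeaway is that the feat is the separation efficiency at these flow rates, not the absolute tonnage: an air feed this small carries only milligrams of CO2 per minute to begin with.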

Scaled for an automotive application, the device would be roughly the size of a gallon of milk, Setzler said, but the device could be used to remove carbon dioxide elsewhere, too. For example, the UD-patented technology could enable lighter, more efficient carbon dioxide removal devices in spacecraft or submarines, where ongoing filtration is critical.

"We have some ideas for a long-term roadmap that can really help us get there," said Setzler.

According to Shi, since the electrochemical system is powered by hydrogen, as the hydrogen economy develops, this electrochemical device could also be used in airplanes and buildings where air recirculation is desired as an energy-saving measure. Later this month, following his dissertation defense, Shi will join Versogen, a UD spinoff company founded by Yan, to continue advancing research toward sustainable green hydrogen.

Reference: "A shorted membrane electrochemical cell powered by hydrogen to remove CO2 from the air feed of hydroxide exchange membrane fuel cells" by Lin Shi, Yun Zhao, Stephanie Matz, Shimshon Gottesfeld, Brian P. Setzler and Yushan Yan, 3 February 2022, Nature Energy. DOI: 10.1038/s41560-021-00969-5

Co-authors on the paper from the Yan lab include Yun Zhao, co-first author and research associate, who performed experimental work essential for testing the device; Stephanie Matz, a doctoral student who contributed to the design and fabrication of the spiral module; and Shimshon Gottesfeld, an adjunct professor of chemical and biomolecular engineering at UD. Gottesfeld was principal investigator on the 2019 project, funded by the Advanced Research Projects Agency-Energy (ARPA-E), that led to the findings.

Advancing AI trustworthiness: Updates on responsible AI research – Microsoft

Posted: at 5:05 am

Editor's note: This year in review of responsible AI research was compiled by Aether, a Microsoft cross-company initiative on AI Ethics and Effects in Engineering and Research, as outreach from their commitment to advancing the practice of human-centered responsible AI. Although many of the papers' authors are participants in Aether, the research presented here expands beyond, encompassing work from across Microsoft, as well as with collaborators in academia and industry.

Inflated expectations around the capabilities of AI technologies may lead people to believe that computers can't be wrong. The truth is AI failures are not a matter of if but when. AI is a human endeavor that combines information about people and the physical world into mathematical constructs. Such technologies typically rely on statistical methods, with the possibility for errors throughout an AI system's lifespan. As AI systems become more widely used across domains, especially in high-stakes scenarios where people's safety and wellbeing can be affected, a critical question must be addressed: how trustworthy are AI systems, and how much and when should people trust AI?

As part of their ongoing commitment to building AI responsibly, research scientists and engineers at Microsoft are pursuing methods and technologies aimed at helping builders of AI systems cultivate appropriate trust: that is, building trustworthy models with reliable behaviors and clear communication that set proper expectations. When AI builders plan for failures, work to understand the nature of the failures, and implement ways to effectively mitigate potential harms, they help engender trust that can lead to a greater realization of AI's benefits.

Pursuing trustworthiness across AI systems captures the intent of multiple projects on the responsible development and fielding of AI technologies. Numerous efforts at Microsoft have been nurtured by its Aether Committee, a coordinative cross-company council composed of working groups focused on technical leadership at the frontiers of innovation in responsible AI. The effort is led by researchers and engineers at Microsoft Research and from across the company and is chaired by Chief Scientific Officer Eric Horvitz. Beyond research, Aether has advised Microsoft leadership on responsible AI challenges and opportunities since the committee's inception in 2016.

The following is a sampling of research from the past year representing efforts across the Microsoft responsible AI ecosystem that highlight ways for creating appropriate trust in AI. Facilitating trustworthy measurement, improving human-AI collaboration, designing for natural language processing (NLP), advancing transparency and interpretability, and exploring the open questions around AI safety, security, and privacy are key considerations for developing AI responsibly. The goal of trustworthy AI requires a shift in perspective at every stage of the AI development and deployment life cycle. We're actively developing a growing number of best practices and tools to help with the shift to make responsible AI more available to a broader base of users. Many open questions remain, but as innovators, we are committed to tackling these challenges with curiosity, enthusiasm, and humility.

AI technologies influence the world through the connection of machine learning models (which provide classifications, diagnoses, predictions, and recommendations) with larger systems that drive displays, guide controls, and activate effectors. But when we use AI to help us understand patterns in human behavior and complex societal phenomena, we need to be vigilant. By creating models for assessing or measuring human behavior, we're participating in the very act of shaping society. Guidelines for ethically navigating technology's impacts on society, guidance born out of considering technologies for COVID-19, prompt us to start by weighing a project's risk of harm against its benefits. Sometimes an important step in the practice of responsible AI may be the decision to not build a particular model or application.

Human behavior and algorithms influence each other in feedback loops. In a recent Nature publication, Microsoft researchers and collaborators emphasize that existing methods for measuring social phenomena may not be up to the task of investigating societies where human behavior and algorithms affect each other. They offer five best practices for advancing computational social science. These include developing measurement models that are informed by social theory and that are fair, transparent, interpretable, and privacy preserving. For trustworthy measurement, it's crucial to document and justify the model's underlying assumptions, plus consider who is deciding what to measure and how those results will be used.

In line with these best practices, Microsoft researchers and collaborators have proposed measurement modeling as a framework for anticipating and mitigating fairness-related harms caused by AI systems. This framework can help identify mismatches between theoretical understandings of abstract concepts (for example, socioeconomic status) and how these concepts get translated into mathematics and code. Identifying mismatches helps AI practitioners to anticipate and mitigate fairness-related harms that reinforce societal biases and inequities. A study applying a measurement modeling lens to several benchmark datasets for surfacing stereotypes in NLP systems reveals considerable ambiguity and hidden assumptions, demonstrating (among other things) that datasets widely trusted for measuring the presence of stereotyping can, in fact, cause stereotyping harms.

Flaws in datasets can lead to AI systems with unfair outcomes, such as poor quality of service or denial of opportunities and resources for different groups of people. AI practitioners need to understand how their systems are performing for factors like age, race, gender, and socioeconomic status so they can mitigate potential harms. In identifying the decisions that AI practitioners must make when evaluating an AI system's performance for different groups of people, researchers highlight the importance of rigor in the construction of evaluation datasets.
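
To make this kind of disaggregated evaluation concrete, here is a minimal sketch of computing a model's accuracy separately for each subgroup rather than as one aggregate number. The column names and toy data are illustrative assumptions, not material from the cited research.

```python
import pandas as pd

def accuracy_by_group(df: pd.DataFrame, group_col: str) -> pd.Series:
    """Per-group accuracy, so performance gaps are visible rather than
    averaged away in a single overall score."""
    correct = df["prediction"] == df["label"]
    return correct.groupby(df[group_col]).mean()

# Toy evaluation results (hypothetical).
results = pd.DataFrame({
    "label":      [1, 0, 1, 1, 0, 1, 0, 0],
    "prediction": [1, 0, 0, 1, 0, 0, 1, 0],
    "age_band":   ["18-34", "18-34", "65+", "65+", "18-34", "65+", "65+", "18-34"],
})
print(accuracy_by_group(results, "age_band"))
# 18-34    1.00
# 65+      0.25  <- a gap like this signals a potential quality-of-service harm
```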

Making sure that datasets are representative and inclusive means facilitating data collection from different groups of people, including people with disabilities. Mainstream AI systems are often non-inclusive. For example, speech recognition systems do not work for atypical speech, while input devices are not accessible for people with limited mobility. In pursuit of inclusive AI, a study proposes guidelines for designing an accessible online infrastructure for collecting data from people with disabilities, one that is built to respect, protect, and motivate those contributing data.

When people and AI collaborate on solving problems, the benefits can be impressive. But current practice can be far from establishing a successful partnership between people and AI systems. A promising advance and direction of research is developing methods that learn about ideal ways to complement people with problem solving. In the approach, machine learning models are optimized to detect where people need the most help versus where people can solve problems well on their own. We can additionally train the AI systems to make decisions as to when a system should ask an individual for input and to combine the human and machine abilities to make a recommendation. In related work, studies have shown that people will too often accept an AI system's outputs without question, relying on them even when they are wrong. Exploring how to facilitate appropriate trust in human-AI teamwork, experiments with real-world datasets for AI systems show that retraining a model with a human-centered approach can better optimize human-AI team performance. This means taking into account human accuracy, human effort, the cost of mistakes, and people's mental models of the AI.
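
One simple instance of the "when should the system ask a person" decision described above is a confidence-threshold deferral policy. The sketch below illustrates the general idea only; the learned, cost-aware methods in the research are more sophisticated, and the threshold and toy values here are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    source: str  # "model" or "human"
    answer: int

def decide(model_confidence: float, model_answer: int, ask_human,
           threshold: float = 0.9) -> Decision:
    """Answer automatically only when the model is confident; otherwise
    route the case to a person."""
    if model_confidence >= threshold:
        return Decision("model", model_answer)
    return Decision("human", ask_human())

# A confident case is handled by the model; a shaky one is escalated.
print(decide(0.97, 1, ask_human=lambda: 0))  # Decision(source='model', answer=1)
print(decide(0.72, 1, ask_human=lambda: 0))  # Decision(source='human', answer=0)
```

In the research described above, the deferral decision is itself learned, and the threshold would be tuned against human accuracy, effort, and the cost of mistakes rather than fixed by hand.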

In systems for healthcare and other high-stakes scenarios, a break with the user's mental model can have severe impacts. An AI system can compromise trust when, after an update for better overall accuracy, it begins to underperform in some areas. For instance, an updated system for predicting cancerous skin moles may have an increase in accuracy overall but a significant decrease for facial moles. A physician using the system may either lose confidence in the benefits of the technology or, with more dire consequences, may not notice this drop in performance. Techniques for forcing an updated system to be compatible with a previous version produce tradeoffs in accuracy. But experiments demonstrate that personalizing objective functions can improve the performance-compatibility tradeoff for specific users by as much as 300 percent.
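
Compatibility here can be quantified. One common metric in the update-compatibility literature, sketched below as an illustration (not necessarily the exact objective used in the work above), asks: of the cases the previous model handled correctly, what fraction does the updated model still get right?

```python
def backward_compatibility(y_true, old_pred, new_pred) -> float:
    """Of the cases the old model got right, the fraction the update
    still gets right (1.0 = no newly introduced errors on those cases)."""
    old_correct = [o == t for o, t in zip(old_pred, y_true)]
    kept = sum(1 for ok, n, t in zip(old_correct, new_pred, y_true)
               if ok and n == t)
    total = sum(old_correct)
    return kept / total if total else 1.0

y_true   = [1, 0, 1, 1, 0]
old_pred = [1, 0, 1, 0, 0]  # old model: right on 4 of 5 cases
new_pred = [1, 0, 0, 1, 0]  # update fixes one case but breaks another
print(backward_compatibility(y_true, old_pred, new_pred))  # 0.75
```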

System updates can have grave consequences when it comes to algorithms used for prescribing recourse, such as how to fix a bad credit score to qualify for a loan. Updates can lead to people who have dutifully followed a prescribed recourse being denied their promised rights or services, damaging their trust in decision makers. Examining the impact of updates caused by changes in the data distribution, researchers expose previously unknown flaws in the current recourse-generation paradigm. This work points toward rethinking how to design these algorithms for robustness and reliability.

Complementarity in human-AI performance, where the human-AI team performs better together by compensating for each other's weaknesses, is a goal for AI-assisted tasks. You might think that if a system provided an explanation of its output, this could help an individual identify and correct an AI failure, producing the best of human-AI teamwork. Surprisingly, and in contrast to prior work, a large-scale study shows that explanations may not significantly increase human-AI team performance. People often over-rely on recommendations even when the AI is incorrect. This is a call to action: we need to develop methods for communicating explanations that increase users' understanding rather than just persuade.

The allure of natural language processing's potential, including rash claims of human parity, raises questions of how we can employ NLP technologies in ways that are truly useful, as well as fair and inclusive. To further these and other goals, Microsoft researchers and collaborators hosted the first workshop on bridging human-computer interaction and natural language processing, considering novel questions and research directions for designing NLP systems to align with people's demonstrated needs.

Language shapes minds and societies. Technology that wields this power requires scrutiny as to what harms may ensue. For example, does an NLP system exacerbate stereotyping? Does it exhibit the same quality of service for people who speak the same language in different ways? A survey of 146 papers analyzing bias in NLP observes rampant pitfalls of unstated assumptions and conceptualizations of bias. To avoid these pitfalls, the authors outline recommendations based on the recognition of relationships between language and social hierarchies as fundamentals for fairness in the context of NLP. We must be precise in how we articulate ideas about fairness if we are to identify, measure, and mitigate NLP systems' potential for fairness-related harms.

The open-ended nature of language (its inherent ambiguity, context-dependent meaning, and constant evolution) drives home the need to plan for failures when developing NLP systems. Planning for NLP failures with the AI Playbook introduces a new tool for AI practitioners to anticipate errors and plan human-AI interaction so that the user experience is not severely disrupted when errors inevitably occur.

To build AI systems that are reliable and fair, and to assess how much to trust them, practitioners and those using these systems need insight into their behavior. If we are to meet the goal of AI transparency, the AI/ML and human-computer interaction communities need to integrate efforts to create human-centered interpretability methods that yield explanations that can be clearly understood and are actionable by people using AI systems in real-world scenarios.

As a case in point, experiments investigating whether simple models that are thought to be interpretable achieve their intended effects rendered counterintuitive findings. When participants used an ML model considered to be interpretable to help them predict the selling prices of New York City apartments, they had difficulty detecting when the model was demonstrably wrong. Providing too many details of the model's internals seemed to distract and cause information overload. Another recent study found that even when an explanation helps data scientists gain a more nuanced understanding of a model, they may be unwilling to make the effort to understand it if it slows down their workflow too much. As both studies show, testing with users is essential to see if people clearly understand and can use a model's explanations to their benefit. User research is the only way to validate what is or is not interpretable by people using these systems.

Explanations that are meaningful to people using AI systems are key to the transparency and interpretability of black-box models. Introducing a weight-of-evidence approach to creating machine-generated explanations that are meaningful to people, Microsoft researchers and colleagues highlight the importance of designing explanations with people's needs in mind and evaluating how people use interpretability tools and what their understanding is of the underlying concepts. The paper also underscores the need to provide well-designed tutorials.

Traceability and communication are also fundamental for demonstrating trustworthiness. Both AI practitioners and people using AI systems benefit from knowing the motivation and composition of datasets. Tools such as datasheets for datasets prompt AI dataset creators to carefully reflect on the process of creation, including any underlying assumptions they are making and potential risks or harms that might arise from the dataset's use. And for dataset consumers, seeing the dataset creators' documentation of goals and assumptions equips them to decide whether a dataset is suitable for the task they have in mind.

Interpretability is vital to debugging and mitigating the potentially harmful impacts of AI processes that so often take place in seemingly impenetrable black boxes: it is difficult (and in many settings, inappropriate) to trust an AI model if you can't understand the model and correct it when it is wrong. Advanced glass-box learning algorithms can enable AI practitioners and stakeholders to see what's under the hood and better understand the behavior of AI systems. And advanced user interfaces can make it easier for people using AI systems to understand these models and then edit the models when they find mistakes or bias in them. Interpretability is also important to improve human-AI collaboration: it is difficult for users to interact and collaborate with an AI model or system if they can't understand it. At Microsoft, we have developed glass-box learning methods that are now as accurate as previous black-box methods but yield AI models that are fully interpretable and editable.

Recent advances at Microsoft include a new neural GAM (generalized additive model) for interpretable deep learning, a method for using dropout rates to reduce spurious interaction, an efficient algorithm for recovering identifiable additive models, the development of glass-box models that are differentially private, and the creation of tools that make editing glass-box models easy for those using them so they can correct errors in the models and mitigate bias.
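
For readers who want to try a glass-box GAM, Microsoft's open-source InterpretML package implements Explainable Boosting Machines (EBMs), one member of this model family. The article above does not name the package; the snippet below is a representative example rather than the specific models from these papers.

```python
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An EBM is an additive model: it learns one readable shape function per
# feature, so practitioners can inspect (and edit) the model term by term.
ebm = ExplainableBoostingClassifier()
ebm.fit(X_train, y_train)
print("held-out accuracy:", ebm.score(X_test, y_test))

# Term-level contributions, viewable without any post-hoc approximation.
global_explanation = ebm.explain_global()
```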

When considering how to shape appropriate trust in AI systems, there are many open questions about safety, security, and privacy. How do we stay a step ahead of attackers intent on subverting an AI system or harvesting its proprietary information? How can we avoid a system's potential for inferring spurious correlations?

With autonomous systems, it is important to acknowledge that no system operating in the real world will ever be complete. It's impossible to train a system for the many unknowns of the real world. Unintended outcomes can range from annoying to dangerous. For example, a self-driving car may splash pedestrians on a rainy day or erratically swerve to localize itself for lane-keeping. An overview of emerging research in avoiding negative side effects due to AI systems' incomplete knowledge points to the importance of giving users the means to avoid or mitigate the undesired effects of an AI system's outputs as essential to how the technology will be viewed or used.

When dealing with data about people and our physical world, privacy considerations take a vast leap in complexity. For example, it's possible for a malicious actor to isolate and re-identify individuals from information in large, anonymized datasets or from their interactions with online apps when using personal devices. Developments in privacy-preserving techniques face challenges in usability and adoption because of the deeply theoretical nature of concepts like homomorphic encryption, secure multiparty computation, and differential privacy. Exploring the design and governance challenges of privacy-preserving computation, interviews with builders of AI systems, policymakers, and industry leaders reveal confidence that the technology is useful, but the challenge is to bridge the gap from theory to practice in real-world applications. Engaging the human-computer interaction community will be a critical component.
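
Of the techniques named above, differential privacy is perhaps the easiest to illustrate in a few lines. Below is a minimal sketch of the classic Laplace mechanism; the query, sensitivity, and epsilon values are illustrative assumptions.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release true_value with epsilon-differential privacy: add Laplace
    noise scaled to how much one person's data can change the answer."""
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_value + noise

# Counting query: adding or removing one person changes the count by at most 1,
# so sensitivity = 1. Smaller epsilon means stronger privacy, noisier answers.
exact_count = 1234
print(laplace_mechanism(exact_count, sensitivity=1.0, epsilon=0.5))
```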

Reliability and safety

Privacy and security

AI is not an end-all, be-all solution; it's a powerful, albeit fallible, set of technologies. The challenge is to maximize the benefits of AI while anticipating and minimizing potential harms.

Admittedly, the goal of appropriate trust is challenging. Developing measurement tools for assessing a world in which algorithms are shaping our behaviors, exposing how systems arrive at decisions, planning for AI failures, and engaging the people on the receiving end of AI systems are important pieces. But what we do know is change can happen today with each one of us as we pause and reflect on our work, asking: what could go wrong, and what can I do to prevent it?

GOP lawmakers move to stop IRS facial recognition technology – Accounting Today

Posted: at 5:05 am

Republicans in Congress are raising concerns about the Internal Revenue Service's move to use facial recognition technology to authenticate taxpayers before they can access their online accounts, introducing a bill that would ban the practice.

The IRS has contracted with ID.me, a third-party provider of authentication technology, to help deter identity theft by requiring taxpayers to send a selfie along with an image of a government document like a passport or driver's license before they can access their online taxpayer account or use tools like Get Transcript (see story). Although the new technology is intended to protect taxpayers from cybercriminals, it also has raised privacy concerns. The IRS has emphasized that taxpayers won't need such measures to file their taxes or pay taxes online. The agency began rolling out the technology this year for new taxpayer accounts and it's expected to be in place by this summer for existing accounts as well.

The Treasury is already looking into alternatives to ID.me over privacy concerns amid reports that the company has amassed images of tens of millions of faces from its contracts with other federal and state government agencies and businesses (see story). But if Congress bans the use of such technology, or discourages the IRS from requiring it, that could prompt the agency to find other ways to authenticate users besides selfies.

In a letter Thursday, a group of Senate Republicans, led by Senate Finance Committee ranking member Mike Crapo, R-Idaho, questioned the IRS's plans to expand its collaboration with ID.me by requiring taxpayers to have an ID.me account to access some of the main IRS online resources. To register with ID.me, taxpayers will need to submit personal information to the company, including sensitive biometric data, starting this summer.

"The IRS has unilaterally decided to allow an outside contractor to stand as the gatekeeper between citizens and necessary government services," the senators wrote in a letter to IRS Commissioner Chuck Rettig. "The decision millions of Americans are forced to make is to pay the toll of giving up their most personal information, biometric data, to an outside contractor or return to the era of a paper-driven bureaucracy where information moves slow, is inaccurate, and some would say is processed in ways incompatible with contemporary life."

They pointed to a number of issues, noting that a selfie, unlike a password, can't be changed if it's compromised. They also asked about cybersecurity standards, and how the sensitive data would be stored and protected. The lawmakers also pointed out that ID.me is not subject to the same oversight rules as a government agency, and asked what assurances and rights would be afforded taxpayers under the collaboration, as taxpayers apparently would be subject to multiple terms of agreement filled with dense legal print.

ID.me defended its technology. "We are committed to working together with the IRS to implement the best identity verification solutions to prevent fraud, protect Americans' privacy, and ensure equitable, bias-free access to government services," said the company in a statement. "To date, IRS and ID.me have worked together to substantially increase the identity verification pass rates from previous levels. These services are essential to helping prevent government benefits fraud that is occurring on a massive scale."

In the House, Rep. Jackie Walorski, R-Indiana, a senior member of the tax-writing House Ways and Means Committee, introduced the Save Taxpayers' Privacy Act, which would prevent the IRS from requiring facial recognition technology to pay taxes or access account information. The bill would prohibit the Treasury from requiring the technology for access to any IRS online account or service.

"It is outrageous that the IRS is planning to force American taxpayers to submit to dangerous facial recognition software in order to fulfill their basic civic responsibility," Walorski said in a statement Friday. "Given the agency's previous failures to safeguard Americans' private data and prevent political weaponization within its ranks, emboldening the IRS with any additional sensitive data or personal information would be a disservice to taxpayers and an affront to their rights. In the 21st century, the IRS can use secure alternatives to confirm taxpayers' identities without resorting to facial recognition surveillance. To protect taxpayers' privacy and security, I introduced legislation to stop IRS spying and defend citizens' right to privacy."

Technology-first broker Prevu is more than its message: Tech Review – Inman

Posted: at 5:05 am

Prevu is a real estate buyer solution with a tech-forward, high-touch approach to working with customers. It uses salaried agents and customer concierge staff to assist buyers with its branded property search, market questions and offer submission.

Platforms: Browser
Ideal for: Agents considering new brokerages, homebuyers

While tech-centered, Prevu is a brokerage. Its model is unique but might face scaling issues without a mechanism for taking on listings.

Prevu is a brokerage, but its prominent use of technology warrants a review of how it delivers service.

This is a challenging company to write about because of its multiple value propositions to the industry. As of now, it leads with buyer rebates and low commissions. In that respect, it's the same sort of "look at what you can save" argument as IdealAgent and Redfin.

Although saving money will always appeal to consumers, it doesn't resonate with those who automatically see "savings" as another term for "limited service." That unfortunate stigma is even more pronounced in real estate, thanks largely to FSBO firms and other alternatives that sell a la carte listing marketing services.

But, while Prevu may be a brokerage at heart, it's the technology that makes things pump.

The company leans heavily on buyer service, aiming to provide a more modernized, less paper-driven user experience. Although the team at Prevu describes its approach as similar to Expedia and Carvana, to me, it's closer to the consumer insurance industry's move to shed its suit-and-tie, bureaucratic reputation with online quotes, service and mobile experiences.

Prevu spreads its technology evenly between buyers and the teams serving them. Collaboration is primarily chat- and email-driven. Buyers will work with both a concierge and an agent, the former elevating discussions to the latter as the relationship deepens.

Once onboard, buyers are offered access to Prevu's collaborative search portal, from which they can save homes, ignore bad matches, request tours and submit offers via the call-to-action block, a section of features designed to engage users.

Every listing in Prevu's search experience displays the potential rebate amount. The offer screen asks about funding source and down payment amount, and it encourages buyers to upload a qualification letter or some other proof of financial wherewithal. It's not a formal offer; rather, like many other online-offer tools today, it's designed to express legitimate intent. It also asks for a good time to schedule a conversation with an agent.

The benefit to agents who want to work with Prevu is that you are 100 percent focused on service. The company's marketing will provide you with the clients, and you provide the wisdom.

Agent tools include a CRM module for viewing buyer profiles, property interest, budget, offers made, tours taken and all messages that have transpired since initial engagement, whether via text or email. (In many cases, this is all you need a CRM to do.)

Tours that get scheduled are routed to the listing agent on record, according to the local MLS data that Prevu uses as its property source. Calendar alerts are sent out, and for now, Prevu can integrate with ShowingTime.

Other features of note are a buyer document library, text-based onboarding and no formal representation agreement, as part of a deliberate approach to keep things low-pressure.

For now, Prevu is operating in New York, Connecticut, California, Pennsylvania, Massachusetts and Washington.

Many may read this review and think that most of what's happening here has already been done. Agents use text. Redfin pays agents. Keller Williams says it's a technology company.

Most of that is true.

But Prevu, along with a small but growing cadre of up-and-coming colleagues, is instigating fissures in the foundation of the traditional brokerage model. Note that I refuse to use the D word because, to me, that term signals naive haughtiness, an attempt to usurp for the sake of it.

Before you blow off that idea, know that 47 percent of brokers surveyed in NAR's 2021 Real Estate in a Digital Age report cited competition from nontraditional market participants as one of their biggest challenges in the next two years. Know too that Prevu has sold close to $1 billion to date and tripled its revenue year over year in 2021.

What I see in Prevu is an application of what its founders merely consider to be a better way to serve a consumer need. It's simply what they know as a good business, a better way to provide value to homebuyers.

It should be noted as well that there is some Zillow and StreetEasy experience behind the company, so it's not as if these guys came in from outside the industry.

The ultimate intent here is to automate as much of the buyer journey as possible without letting buyers end up in the ditch. It's more consultative than heavy-handed and lead-driven.

Is it the future of how real estate services are offered? That's up to the consumers. Might want to think about that.

Have a technology product you would like to discuss? Email Craig Rowe

Craig C. Rowe started in commercial real estate at the dawn of the dot-com boom, helping an array of commercial real estate companies fortify their online presence and analyze internal software decisions. He now helps agents with technology decisions and marketing through reviewing software and tech for Inman.

Cybersecurity Risks of Biometric Related Technology Use – The National Law Review

Posted: at 5:05 am

Thursday, February 3, 2022

Facial recognition, voiceprint, and other biometric-related technology are booming, and they continue to infiltrate different facets of everyday life. The technology brings countless potential benefits, as well as significant data privacy and cybersecurity risks.

Whether it is facial recognition technology being used with COVID-19 screening tools and in law enforcement, continued use of fingerprint-based time management systems, or the use of various biometric identifiers such as voiceprint for physical security and access management, applications in the public and private sectors involving biometric identifiers and information continue to grow; so do concerns about the privacy and security of that information and about civil liberties. Over the past few years, significant compliance and litigation risks have emerged that factor heavily into the deployment of biometric technologies, particularly facial recognition.

Research suggests that the biometrics market is expected to grow to approximately $44 billion in 2026 (from about $20 billion in 2020). This is easy to imagine, considering how ubiquitous biometric applications have become in everyday life. Biometrics are used for identity verification in a myriad of circumstances, such as unlocking smartphones, accessing theme parks, operating cash registers, clocking in and out for work, and travelling by plane. Concerns about security and identity theft, coupled with weak practices around passwords, have led some to ask whether biometrics will eventually replace passwords for identity verification. While that remains to be seen, there is little doubt the use of biometrics will continue to expand.

A significant piece of that market, facial recognition technology, has become increasingly popular in employment and consumer areas (e.g., employee access, passport check-in systems, and payments on smartphones), as well as with law enforcement. For approximately 20 years, law enforcement has used facial recognition technology to aid criminal investigation, but with mixed results, according to a New York Times report. Additionally, the COVID-19 pandemic has helped to drive broader use of this technology. The need to screen persons entering a facility for symptoms of the virus, including their temperature, led to increased use of thermal cameras, kiosks, and similar devices embedded with facial recognition capabilities. When federal and state unemployment benefit programs experienced massive fraud as they tried to distribute hundreds of billions in COVID-19 relief, many turned to facial recognition and similar technologies for help. By late summer 2021, more than half the states in the United States had contracted with ID.me to provide ID-verification services, according to a CNN report.

Many have objected to the use of this technology in its current form, however. They raise concerns over a lurch toward a more Orwellian society, as well as concerns related to due process, noting some of the technology's shortcomings in accuracy and consistency. Others have observed that the ability to compromise the technology can become a new path to committing fraud against individuals.

Additionally, the use of voice recognition technology has seen massive growth in the past year. A new report from Global Market Insights, Inc. estimates the global market valuation for voice recognition technology will reach approximately $7 billion by 2026, in large part due to the surge of AI and machine learning across a wide array of devices, including smartphones, healthcare apps, banking apps, and connected cars, among many others. While the ease and efficacy of voice recognition technology is clear, the privacy and security obligations associated with this technology, as with facial recognition, cannot be overlooked.

With the increasingly broad and expanding use of facial recognition and other biometrics has come more regulation and the related compliance and litigation risks.

Perhaps one of the most well-known laws regulating biometric information is the Illinois Biometric Information Privacy Act (BIPA). Enacted in 2008, the BIPA was one of the first state laws to address a business's collection of biometric data. The BIPA protects biometric identifiers (a retina or iris scan, fingerprint, voiceprint, or scan of hand or face geometry) and biometric information (any information, regardless of how it is captured, converted, stored, or shared, based on an individual's biometric identifier used to identify an individual). The law established a comprehensive set of rules for companies collecting biometric identifiers and information from state residents, including the following key features:

Informed consent in connection with collection

Disclosure limitation

Reasonable safeguard and retention guidelines

Prohibition on profiting from biometric data

A private right of action for individuals harmed by violations of the BIPA

The BIPA largely went unnoticed until 2015, when a series of five similar class action lawsuits were brought against businesses. The lawsuits alleged unlawful collection and use of the biometric data of Illinois residents. Since the BIPA was enacted, more than 750 putative class action lawsuits have been filed. The onslaught is primarily due to the BIPA's private right of action provision. That provision provides statutory damages up to $1,000 for each negligent violation, and up to $5,000 for each intentional or reckless violation. Adding fuel to the fire, the Illinois Supreme Court ruled that an individual is "aggrieved" under the BIPA and has standing to sue for technical violations, such as a failure to provide the law's required notice. Rosenbach v. Six Flags Entertainment Corp., No. 123186 (Ill. Jan. 25, 2019). While most of these cases involved collection of fingerprints for time management systems, several involved facial recognition, including one that reportedly settled for $650 million. In 2021, a new wave of BIPA litigation arose with the increased use of voice recognition technology by businesses. While general voice data is not covered by the BIPA, voiceprints have a personal identifying quality, thus potentially making them subject to the BIPA. For example, a large fast-food chain is facing BIPA litigation over alleged use of AI voice recognition technology at its drive-throughs. Claims in both state and federal courts allege failures to implement BIPA-compliant data retention policies, informed consent requirements, and prohibitions on profiting and disclosure.
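
The arithmetic behind those settlement figures is straightforward: statutory damages multiply across class members and violations. A rough illustration follows, with an assumed class size and a one-violation-per-member simplification.

```python
# Per the BIPA: up to $1,000 per negligent violation, $5,000 per intentional
# or reckless violation. Class size and one-violation-per-member are
# simplifying assumptions for illustration.
NEGLIGENT = 1_000
INTENTIONAL = 5_000

def statutory_exposure(class_size: int, per_violation: int,
                       violations_per_member: int = 1) -> int:
    return class_size * violations_per_member * per_violation

print(f"${statutory_exposure(10_000, NEGLIGENT):,}")    # $10,000,000
print(f"${statutory_exposure(10_000, INTENTIONAL):,}")  # $50,000,000
```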

Many have argued that the BIPA went too far, opening the floodgates to litigation for plaintiffs who, in many cases, suffered little to no harm. Indeed, efforts have been made to moderate the BIPA's impact. However, massive data breaches and surges in identity theft and fraud have supported calls for stronger measures to protect sensitive personal information, including with regard to the use of facial recognition. At the same time, mismatches and allegations of bias in the application of facial recognition have led to calls for changes.

In the last year, there has been an uptick in hackers trying to trick facial recognition technology in many settings, such as fraudulently claiming unemployment benefits from state workforce agencies. The majority of states are using facial recognition technology to verify persons eligible for government benefits in order to prevent fraud. The firm ID.me Inc., which provides the facial recognition technology that helps verify individual eligibility for unemployment benefits, saw over 80,000 attempts to fool government identification facial recognition systems between June 2020 and January 2021. Hackers of facial recognition systems use a myriad of techniques including deepfakes (AI-generated images), special masks, or even holding up images or videos of the individual the hacker is looking to impersonate.

Fraud is not the only concern with facial recognition technology. Despite its appeal for employers and organizations, the technology carries significant legal implications, and there are growing concerns regarding its accuracy and bias. A report by the National Institute of Standards and Technology, based on a study of 189 facial recognition algorithms (considered the majority of the industry), found that most of the algorithms exhibit bias, falsely identifying Asian and Black faces 10 to more than 100 times more often than White faces. Moreover, false positives are significantly more common for women than for men, and higher for the elderly and children than for middle-aged adults.
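The disparities NIST reports are differences in false match rates across demographic groups. The sketch below shows how such a rate is computed from labeled comparison results and how a "X times more often" figure is derived; the group labels and counts are invented for illustration and do not reproduce NIST's data.

```python
# Hypothetical false-match-rate comparison across demographic groups.
# A "false match" is an impostor pair the system wrongly accepts.
def false_match_rate(false_matches: int, impostor_comparisons: int) -> float:
    return false_matches / impostor_comparisons

groups = {
    # group: (false matches, impostor comparisons) -- all counts invented
    "group_a": (3, 100_000),
    "group_b": (120, 100_000),
}
fmr = {g: false_match_rate(*counts) for g, counts in groups.items()}

# The ratio of FMRs across groups is the disparity figure
print(fmr["group_b"] / fmr["group_a"])  # 40.0 in this made-up example
```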

A result has been increasing regulation of the use of biometrics, including facial recognition. Examples include:

Facial Recognition Bans. Several U.S. localities have banned the use of facial recognition by law enforcement, by other government agencies, or for private and commercial use.

Portland. In September 2020, the City of Portland, Oregon, became the first city in the United States to ban the use of facial recognition technologies in the private sector. Proponents of the measure cited a lack of standards for the technology and wide ranges in accuracy and error rates that differ by race and gender, among other criticisms.

The term facial recognition technologies is broadly defined to include automated or semi-automated processes using face recognition that assist in identifying, verifying, detecting, or characterizing facial features of an individual or capturing information about an individual based on an individual's face. The ordinance carves out limited exceptions, including the use of facial recognition technologies to comply with law, to verify users of personal and employer-provided devices, and for social media applications. Failure to comply can be painful. Like the BIPA, the ordinance provides persons injured by a material violation a cause of action for damages or $1,000 per day for each day of violation, whichever is greater.

Baltimore. The City of Baltimore, for example, has banned the use of facial recognition technologies by city residents, businesses, and most of the city's government (excluding the police department) until December 2022. Council Bill 21-0001 prohibits persons from obtaining, retaining, accessing, or using certain face surveillance technology or any information obtained from certain face surveillance technology. Any person who violates the ordinance is guilty of a misdemeanor and, on conviction, is subject to a fine of not more than $1,000, imprisonment for not more than 12 months, or both.

Biometrics, Generally. Beyond the BIPA, state and local governments have enacted laws to regulate the collection, use, and disclosure of biometric information. Here are a few examples:

Texas, Washington, and New York. Both Texas and Washington have enacted comprehensive biometric laws similar to the BIPA, but without the same kind of private-right-of-action provision. New York, on the other hand, is considering a BIPA-like privacy bill that mirrors the BIPA enforcement scheme.

The California Consumer Privacy Act (CCPA). Modeled to some degree after the EU's General Data Protection Regulation (GDPR), the CCPA seeks to provide individuals who are residents of California (consumers) greater control over their personal information. Cal. Civ. Code § 1798.100 et seq. Personal information is defined broadly and is broken into several categories, one being biometric information. In addition to new rights relating to their personal information (such as the right to opt out of the sale of their personal information), consumers have a private right of action relating to data breaches. If a CCPA-covered business experiences a data breach involving personal information, such as biometric information, the CCPA authorizes a private cause of action against the business if a failure to implement reasonable security safeguards caused the breach. For this purpose, the CCPA points to personal information as defined in subparagraph (A) of paragraph (1) of subdivision (d) of Section 1798.81.5. That section defines biometric information as "unique biometric data generated from measurements or technical analysis of human body characteristics, such as a fingerprint, retina, or iris image, used to authenticate a specific individual," and provides that unique biometric data does not include a physical or digital photograph unless used or stored for facial recognition purposes. Cal. Civ. Code § 1798.150. If successful, a plaintiff can recover statutory damages of not less than $100 and not more than $750 per consumer per incident or actual damages, whichever is greater, plus injunctive or declaratory relief and any other relief the court deems proper. This means that, as under the BIPA, plaintiffs generally do not have to show actual harm to recover.

New York City. The Big Apple amended Title 22 of its Administrative Code to create BIPA-like requirements for retail, restaurant, and entertainment businesses concerning collection of biometric information from customers. Under the law, customers have a private right of action to remedy violations, subject to a 30-day notice and cure period, with damages ranging from $500 to $5,000 per violation, along with attorneys' fees.

In addition, New York City passed the Tenant Privacy Act, which, among other things, requires owners of smart access buildings (i.e., those that use key fobs, mobile apps, biometric identifiers, or other digital technologies to grant access to their buildings) to provide privacy policies to their tenants prior to collecting certain types of data from them. It also strictly limits (a) the categories and scope of data that the building owner collects from tenants, (b) how it uses that data (including a prohibition on data sales), and (c) how long it retains the data. The law creates a private right of action for tenants whose data is unlawfully sold. Those tenants may seek compensatory damages or statutory damages ranging from $200 to $1,000 per tenant, plus attorneys' fees.

Other states. Additionally, states are increasingly amending their breach notification laws to add biometric information to the categories of personal information that trigger notification obligations, including 2020 amendments in California, Vermont, and Washington, D.C. Moreover, depending on the state, there are myriad data destruction, reasonable safeguard, and vendor requirements to consider when collecting biometric data.

Organizations that collect, use, and store biometric data increasingly face compliance obligations as the law attempts to keep pace with technology, cybersecurity crimes, and public awareness of data privacy and security. It is critical that they maintain a robust privacy and data protection program to ensure compliance and minimize business and litigation risks.

© Jackson Lewis P.C. 2022. National Law Review, Volume XII, Number 34

Read the original post:

Cybersecurity Risks of Biometric Related Technology Use - The National Law Review

Posted in Technology | Comments Off on Cybersecurity Risks of Biometric Related Technology Use – The National Law Review

Polunsky Beitel Green Recognized as a Legal Technology Trailblazer by The National Law Journal – Business Wire

Posted: at 5:05 am

SAN ANTONIO--(BUSINESS WIRE)--Polunsky Beitel Green, the country's leading law firm representing mortgage lenders, has been named one of the nation's most innovative law firms by The National Law Journal. The publication's 2022 Legal Technology Trailblazers list, which recognizes law firms and companies that have used technology to change the way their businesses operate, honored PBG for its development of proprietary technology that automates and streamlines the mortgage loan document preparation and review process. The National Law Journal's full profile of the firm is available here.

Texas-based PBG occupies a specialized niche in the residential real estate finance industry, representing mortgage lenders in Texas and other states that require closing documents to be reviewed by the lender's third-party legal counsel.

PBG blends a team of renowned mortgage lending lawyers with a secure technology and workflow solution, enabling the firm to prepare and review closing documents for an astounding 30,000-plus transactions each month. The system automatically retrieves data from its clients' systems and routes the appropriate data to PBG's team of more than 300 mortgage document specialists, who quickly confirm the accuracy of critical data so that they can turn their attention to identifying legal or regulatory compliance concerns that require more in-depth involvement from the firm's lawyers.
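As a rough illustration of the kind of data-driven triage the article describes, the hypothetical sketch below routes a loan file to attorney review when compliance flags are present, and otherwise to a specialist for data verification. All names and fields here are invented; PBG's actual system is proprietary and is not described at this level of detail.

```python
from dataclasses import dataclass, field

# Hypothetical model of a document-prep triage step; not PBG's actual system.
@dataclass
class LoanFile:
    file_id: str
    critical_fields: dict            # e.g., borrower name, loan amount, rate
    compliance_flags: list = field(default_factory=list)

def route(loan: LoanFile) -> str:
    """Route a file: attorney review if compliance concerns were detected,
    otherwise specialist verification (or follow-up if data is missing)."""
    if loan.compliance_flags:
        return "attorney_review"
    if any(v in (None, "") for v in loan.critical_fields.values()):
        return "specialist_data_followup"
    return "specialist_verification"

print(route(LoanFile("TX-0001", {"borrower": "J. Doe", "amount": 350_000})))
# -> specialist_verification
```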

"Our technology, process, and a team of the most talented lawyers and professionals I could ever hope to work with, facilitates the efficient resolution of legal or compliance impediments, allowing files to proceed to closing without undue delay," said Eric Gilbert, Polunsky Beitel Green's Chief Technology Officer and the architect of its technology platform. "The technology is the engine that enables us to provide a detailed, meaningful legal review of loan documents, while also addressing the need for perfect documents, delivered quickly and seamlessly every time."

Legal industry experts have defined New Law as having four critical characteristics, and PBG is perhaps the first to have demonstrated a mastery of each: technology, alternative staffing, process improvement, and use of data.

"We are delighted to be recognized as a New Law leader by one of the legal profession's bellwether media outlets," said Allan Polunsky, managing partner and founder of Polunsky Beitel Green. "This recognition is a testament to our staff's extreme dedication to service, which drives us each day to find better, more efficient ways of helping our clients."

About Polunsky Beitel Green

Polunsky Beitel Green is Texas' oldest law firm exclusively dedicated to providing residential mortgage originators with document preparation and review services, as well as legal, regulatory and compliance support. The firm's principals, Allan Polunsky, Jay Beitel and Marty Green, have more than 100 years of combined experience in the specialized field of residential mortgage lending. Polunsky Beitel Green has offices in San Antonio, Dallas and Houston, with firm employees also embedded in clients' offices throughout Texas and in more than 25 other states. Collectively, the firm serves residential mortgage lenders in all 50 U.S. states.

See the original post here:

Polunsky Beitel Green Recognized as a Legal Technology Trailblazer by The National Law Journal - Business Wire

Posted in Technology | Comments Off on Polunsky Beitel Green Recognized as a Legal Technology Trailblazer by The National Law Journal – Business Wire
