PG&E Testing Artificial Intelligence Which Could Expand Wildfire Detection Capabilities to Growing Network of High-Definition Cameras – Yahoo Finance

138 new HD cameras installed in 2021 and 487 cameras are now in operation: 46 with AI test software

Eyes in the sky across most High Fire-Threat Districts in Northern and Central California improve situational awareness and intelligence

SAN FRANCISCO, November 18, 2021--(BUSINESS WIRE)--During extremely dry, hot, and windy weather, being able to differentiate wildfire smoke from fog and other false indicators is invaluable to analysts in Pacific Gas and Electric Company's (PG&E) Wildfire Safety Operations Center and fire agencies. That's why PG&E is testing artificial intelligence (AI) and machine-learning capabilities in the growing network of high-definition cameras across Northern and Central California to see how it can enhance fire-watch and response capabilities.

This press release features multimedia. View the full release here: https://www.businesswire.com/news/home/20211118006094/en/

HD smoke-spotting cameras on top of Mount Tamalpais are included in PG&E's artificial intelligence pilot program. (Photo: Business Wire)

This year, PG&E, in collaboration with ALERTWildfire, has installed 138 new HD cameras across High Fire-Threat Districts, in accordance with its 2021 Wildfire Mitigation Plan. Of those 138 cameras, 46 are included in the new AI testing program in partnership with Alchera and ALERTWildfire. A similar pilot was conducted with Pano through participation in EPRI's 2021 Incubatenergy Labs Challenge. PG&E began installing HD cameras in 2018 as part of its Community Wildfire Safety Program. As of October 31, 487 cameras are in operation.

"Even with the two significant rainstorms in October and November, we are still in a historic drought and California, along with other western states, continue to experience an increase in wildfire risk and a longer wildfire season. We are using every new tool and technology at our disposal to improve situational awareness and intelligence to help mitigate and prevent wildfires, including this new AI capability," said Sumeet Singh, PG&E Chief Risk Officer. "Every bit of data and intelligence that comes to us could potentially save a life."

The pilot program is already demonstrating the AI's potential to reduce fire growth. On August 4, 2021, PG&E's Howell Mountain 1 camera, located in Placer County and equipped with Alchera's AI software, spotted smoke one minute before the actual fire dispatch and several minutes sooner than the manual movement of the camera. That smoke ended up becoming the River Fire. This is one of many examples noted during both pilots confirming the value of early fire-detection technology.

The expert staff in the company's Wildfire Safety Operations Center (WSOC), outside agencies and first responders use the fire-watch cameras to monitor, detect, assess threats from, and respond to wildfires. As part of the AI test programs, PG&E is working out how to get the new data to the right people quickly and effectively. The quicker the data is received, the more rapidly first responders and PG&E can confirm fires and move the right resources to the right place.

"The software analyzes the video feed and if it thinks it sees smoke, we receive an alert via email and text, telling us it just detected smoke. Our analysts then pinpoint where the smoke is coming from and determine if its a car fire, dumpster fire, or even a vegetation fire. Based on the location, we can assess for threat to the public or PG&E facilities," said Eric Sutphin, Supervisor at PG&Es WSOC whos in charge of the camera installations. "The AI filters out a significant number of false positives, for example, ruling out dust, fog or haze."

Sutphin explained that the recent installation of the AI test software, with its machine-learning capabilities, means the WSOC team is getting smarter over time as more experience and data are gathered.

"We know the cameras are doing well at spotting wisps of smoke from long distances. We plan to assess our initial implementation, continue to gather the data, and develop a plan for using this leading-edge technology on a more expanded basis," he said.

The cameras provide 360-degree views with pan, tilt and zoom capabilities and can be viewed by anyone through the ALERTWildfire Network at http://www.alertwildfire.org. By the end of 2022, the company plans to have approximately 600 cameras installed, providing real-time views of more than 90% of the high fire-risk areas it serves.

About PG&E

Pacific Gas and Electric Company, a subsidiary of PG&E Corporation (NYSE:PCG), is a combined natural gas and electric utility serving more than 16 million people across 70,000 square miles in Northern and Central California. For more information, visit http://www.pge.com/ and http://www.pge.com/about/newsroom/.

Contacts

MEDIA RELATIONS: 415-973-5930

Artificial Intelligence and Machine Learning, Cloud Computing, and 5G Will Be the Most Important Technologies in 2022, Says New IEEE Study – Dark…

Piscataway, N.J. - 18 November 2021 - IEEE, the world's largest technical professional organization dedicated to advancing technology for humanity, today released the results of "The Impact of Technology in 2022 and Beyond: an IEEE Global Study," a new survey of global technology leaders from the U.S., U.K., China, India, and Brazil. The study, which included 350 chief technology officers, chief information officers, and IT directors, covers the most important technologies in 2022, industries most impacted by technology in the year ahead, and technology trends through the next decade. Learn more about the study and the impact of technology in 2022 and beyond.

The most important technologies, innovation, sustainability, and the future

Which technologies will be the most important in 2022? Among total respondents, more than one in five (21%) say AI and machine learning, cloud computing (20%), and 5G (17%) will be the most important technologies next year. Because of the global pandemic, technology leaders surveyed said in 2021 they accelerated adoption of cloud computing (60%), AI and machine learning (51%), and 5G (46%), among others.

It's not surprising, therefore, that 95% agree, including 66% who strongly agree, that AI will drive the majority of innovation across nearly every industry sector in the next one to five years.

When asked which of the following areas 5G will most benefit in the next year, technology leaders surveyed said:

As for industry sectors most impacted by technology in 2022, technology leaders surveyed cited manufacturing (25%), financial services (19%), healthcare (16%), and energy (13%). As compared to the beginning of 2021, 92% of respondents agree, including 60% who strongly agree, that implementing smart building technologies that benefit sustainability, decarbonization, and energy savings has become a top priority for their organization.

Workplace technologies, human resources collaboration, and COVID-19

As the impact of COVID-19 varies globally and hybrid work continues, technology leaders nearly universally agree (97% agree, including 69% who strongly agree) that their team is working more closely than ever before with human resources leaders to implement workplace technologies and apps for office check-in, space usage data and analytics, COVID and health protocols, employee productivity, engagement, and mental health.

Among challenges technology leaders see in 2022, maintaining strong cybersecurity for a hybrid workforce of remote and in-office workers is viewed by those surveyed as challenging by 83% of respondents (40% very, 43% somewhat) while managing return-to-office health and safety protocols, software, apps, and data is seen as challenging by 73% of those surveyed (29% very, 44% somewhat). Determining what technologies are needed for their company in the post-pandemic future is anticipated to be challenging for 68% of technology leaders (29% very, 39% somewhat). Recruiting technologists and filling open tech positions in the year ahead is also seen as challenging by 73% of respondents.

Robots rise over the next decade

Looking ahead, 81% agree that in the next five years, one quarter of what they do will be enhanced by robots, and 77% agree that in the same time frame, robots will be deployed across their organization to enhance nearly every business function from sales and human resources to marketing and IT. A majority of respondents agree (78%) that in the next ten years, half or more of what they do will be enhanced by robots. As for the deployments of robots that will most benefit humanity, according to the survey, those are manufacturing and assembly (33%), hospital and patient care (26%), and earth and space exploration (13%).

Connected devices continue to proliferate

As a result of the shift to hybrid work and the pandemic, more than half (51%) of technology leaders surveyed believe the number of devices connected to their businesses that they need to track and manage (such as smartphones, tablets, sensors, robots, vehicles and drones) increased as much as 1.5 times, while for 42% of those surveyed the number of devices increased in excess of 1.5 times.

However, the perspectives of technology leaders globally diverge when asked about managing even more connected devices in 2022. When asked if the number of devices connected to their company's business will grow so significantly and rapidly in 2022 that it will be unmanageable, over half of technology leaders disagree (51%), but 49% agree. Those differences can also be seen across regions: 78% in India, 64% in Brazil, and 63% in the U.S. agree device growth will be unmanageable, while a strong majority in China (87%) and just over half (52%) in the U.K. disagree.

Cyber and physical security, preparedness, and deployment of technologies

The cybersecurity concerns most likely to rank in technology leaders' top two are issues related to the mobile and hybrid workforce, including employees using their own devices (39%), and cloud vulnerability (35%). Additional concerns include data center vulnerability (27%), a coordinated attack on their network (26%), and a ransomware attack (25%). Notably, 59% of all technology leaders surveyed currently use or in the next five years plan to use drones for security, surveillance, or threat prevention as part of their business model. There are regional disparities, though. Current drone use for security, or plans to adopt it in the next five years, is strongest in Brazil (78%), China (71%), India (60%), and the U.S. (52%), compared to only 32% in the U.K., where 48% of respondents say they have no plans to use drones in their business.

Blockchain, an open-source distributed database that uses cryptography in a distributed ledger, enables trust among individuals and third parties. The four uses respondents were most likely to cite among their top three most important applications of blockchain technology in the next year are:

The vast majority of those surveyed (92%) believe that compared to a year ago, their company is better prepared to respond to a potentially catastrophic interruption such as a data breach or natural disaster. Of that majority, 65% strongly agree that COVID-19 accelerated their preparedness.

About the Survey

"The Impact of Technology in 2022 and Beyond: an IEEE Global Study" surveyed 350 CIOs, CTOs, IT directors, and other technology leaders in the U.S., China, U.K., India, and Brazil at organizations with more than 1,000 employees across multiple industry sectors, including banking and financial services, consumer goods, education, electronics, engineering, energy, government, healthcare, insurance, retail, technology, and telecommunications. The surveys were conducted 8-20 October 2021.

About IEEE

IEEE is the world's largest technical professional organization dedicated to advancing technology for the benefit of humanity. Through its highly cited publications, conferences, technology standards, and professional and educational activities, IEEE is the trusted voice in a wide variety of areas ranging from aerospace systems, computers, and telecommunications to biomedical engineering, electric power, and consumer electronics.

Is artificial intelligence more formidable than nuclear weapons? | TheHill – The Hill

Of all the potentially new and revolutionary technologies, artificial intelligence (AI) may be the most disruptive. In layman's terms, AI refers to systems able to perform tasks that normally require human intelligence, such as visual and speech recognition, decision-making and, perhaps one day, thinking.

Thinking? AI has already defeated the world's best chess and Go players. Suppose AI surpasses the intelligence of human beings. What then?

Could AI's super-intelligence cure cancer, enhance wellbeing, redress climate change and deal with many of the planet's worst evils? Or might a super-smart AI turn on mankind, as portrayed in the Terminator movies? Finally, is the potential of AI being exaggerated?

Albert Einstein described the universe as finite but unbounded. That definition could fit AI's future applications. But how will we know?

Perhaps the only comparable disruptive technology was nuclear and thermonuclear weapons. These weapons irreversibly disrupted and changed the nature, conduct and character of the politics of war. The reason: no winners, only victims and losers, would emerge after a thermonuclear holocaust eviscerated the belligerents.

What then are the common links?

Nuclear weapons provoked often-fiery debate over the moral and legal implications and over when or how these weapons could or should be employed, from a counterforce first strike against military targets to countervalue retaliatory roles against population and industrial centers, and tactically to limit escalation or rectify conventional arms imbalances. AI has reignited debate over equally critical questions and issues about its place in society.

Nuclear weapons ultimately led to a doctrine and rules of the game to deter and prevent their spread and use, partly through arms control. Will AI lead to a regulatory regime, or is the technology too universal for any governing code?

Nuclear weapons are existential. Are there conditions under which AI could become as dangerous? Proliferation of these weapons led to international agreements to prevent their spread. Will that apply to AI?

It was argued that if one side gained superiority over another, conflict or more aggressive behavior would follow. Does AI raise similar concerns?

Important differences exist. Nuclear weapons affected national security. AI most certainly will affect the broader sweep of society, as have the industrial and information revolutions, with positive and negative consequences.

Second, the destructive power of these weapons made them so significant. AI, at this stage, needs an intermediary link to exercise its full disruptive power. However, ironically, as societies became more advanced, those two revolutions had the unintended consequence of also creating greater vulnerabilities, weaknesses and dependencies subject to major and even catastrophic disruption.

COVID-19, massive storms, fires, droughts and cyberattacks are unmistakable symptoms of the power of the new MAD: Massive Attacks of Disruption. AI is a potential multiplier through its ability to interact with these and other disruptors, exploiting inherent societal weaknesses and vulnerabilities and creating new ones, as well as preventing their harmful effects.

Last, unlike nuclear weapons, if used properly AI will have enormous and even revolutionary benefits for the human species.

The critical question is what mechanisms can identify what former Defense Secretary Donald Rumsfeld called the known knowns, known unknowns, and unknown unknowns regarding AI.

A national AI commission just completed its effort. Commissions often can bury a tough topic. The 9/11 Commission did stellar work. But only a portion of its most important recommendations were implemented. Forming the Department of Homeland Security and the Office of the Director of National Intelligence did not bring needed reform because those agencies ultimately expanded the layering of an already bloated government bureaucracy.

That criticism aside, instead of a new AI commission, a permanent AI oversight council with a substantial amount of research funding to examine AI's societal implications must be created. Membership should be drawn from the public and the legislative and executive branches of government.

Funding should go to the best research institutions, another parallel with nuclear weapons. During the Cold War, the Pentagon underwrote countless studies covering all aspects of the nuclear balance. The same must apply to AI, but with wider scope.

This council must also coordinate, liaise and consult with the international community, including China, Russia, allies, friends and others to widen the intellectual aperture and as confidence building measures.

By employing lessons learned from studying the nuclear balance, not only can AI's potentially destructive consequences be mitigated; more importantly, if properly utilized, as Einstein observed about the universe, AI has nearly unbounded opportunity to advance the public good.

Harlan Ullman, Ph.D., is United Press International's Arnaud de Borchgrave Distinguished Columnist. His latest book, due out this year, is "The Fifth Horseman and the New MAD: The Tragic History of How Massive Attacks of Disruption Endangered, Infected, Engulfed and Disunited a 51% Nation and the Rest of the World."

Sanofi invests $180 million equity in Owkin’s artificial intelligence and federated learning to advance oncology pipeline – GlobeNewswire

Sanofi invests $180 million equity in Owkin's artificial intelligence and federated learning to advance oncology pipeline

PARIS, November 18, 2021 - Sanofi announced today an equity investment of $180 million and a new strategic collaboration with Owkin comprising discovery and development programmes in four exclusive types of cancer, with a total payment of $90 million over three years plus additional research milestone-based payments. Owkin, an artificial intelligence (AI) and precision medicine company, builds best-in-class predictive biomedical AI models and robust data sets. With the ambition to optimize clinical trial design and detect predictive biomarkers for diseases and treatment outcomes, this collaboration will support Sanofi's growing oncology portfolio in core areas such as lung cancer, breast cancer and multiple myeloma.

To accelerate medical research with AI in a privacy-preserving way, Owkin has assembled a global research network powered by federated learning, which allows data scientists to securely connect to decentralized, multi-party data sets and train AI models without having to pool data. This approach will complement Sanofi's emerging strength in oncology, as the company's scientists apply cutting-edge technology platforms to design potentially life-transforming medicines for cancer patients worldwide.
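
Federated learning, as described here, trains a shared model without pooling the underlying data: each participating site computes an update on its own records, and only model parameters travel back to a coordinating server, which averages them. Below is a minimal sketch of that federated-averaging loop on synthetic data; it is illustrative only, does not represent Owkin's platform, and the site names and simple linear model are assumptions.

import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One site fits a linear model on its own data; only weights leave the site."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of the squared error
        w -= lr * grad
    return w

# Synthetic site-local datasets standing in for separate hospital cohorts.
true_w = np.array([1.5, -2.0, 0.5])
sites = {}
for name in ("hospital_a", "hospital_b", "hospital_c"):
    X = rng.normal(size=(200, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=200)
    sites[name] = (X, y)

global_w = np.zeros(3)
for _ in range(20):
    # Each site trains locally; the server only ever sees the returned parameters.
    local_weights = [local_update(global_w, X, y) for X, y in sites.values()]
    global_w = np.mean(local_weights, axis=0)  # federated averaging step

print("recovered weights:", np.round(global_w, 2))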

"Owkins unique methodology, which applies AI on patient data from partnerships with multiple academic medical centers, supports our ambition to leverage data in innovative ways in R&D, said Arnaud Robert, Executive Vice President, Chief Digital Officer, Sanofi. We are striving to advance precision medicine to the next level and to discover innovative treatment methods with the greatest benefits for patients.

Sanofi will leverage the comprehensive Owkin Platform to find new biomarkers and therapeutic targets, build prognostic models, and predict response to treatment from multimodal patient data. Sanofi's investment will support Owkin's development and its goal of growing the world's leading histology and genomic cancer database from top oncology centers.

"Owkin's mission is to improve patients' lives by using our platform to discover and develop the right treatment for every patient," said Thomas Clozel, M.D., Co-Founder and CEO at Owkin. "We believe that the future of precision medicine lies in technologies that can unlock insights from the vast amount of patient data in hospitals and research centers in a privacy-preserving and secure way. This landmark partnership with Sanofi will see federated learning used to create research collaborations at a truly unprecedented scale. The future of AI to transform how we develop treatments is incredibly bright, and we are proud to partner with Sanofi on this mission."

This collaboration agreement will allow Sanofi to work closely with Owkin in identifying new oncology treatments across four cancers.

"We look forward to working with our colleagues at Owkin to analyze data from hundreds of thousands of patients," said John Reed, M.D., Ph.D., Global Head of Research and Development, Sanofi. "Sanofi's investment in the company includes a three-year agreement that will help discover and develop new treatments for non-small cell lung cancer, triple-negative breast cancer, mesothelioma and multiple myeloma. This partnership will help accelerate our ambitious oncology program as we advance a rich pipeline of medicines to address unmet patient needs."

About Owkin

Owkin is a French-American startup that specializes in AI and federated learning for medical research. It was co-founded in 2016 by Dr Thomas Clozel, M.D., a clinical research doctor and former assistant professor in clinical hematology, and Dr Gilles Wainrib, Ph.D., a pioneer in the field of artificial intelligence in biology. Owkin has recently published groundbreaking research at the frontier of AI and medicine in Nature Medicine, Nature Communications and Hepatology. The Owkin Platform connects life science companies with world-class academic researchers and hospitals to share deep medical insights for drug discovery and development. Using federated learning and breakthrough collaborative AI technology, Owkin enables its partners to unlock siloed datasets while protecting patient privacy and securing proprietary data. Through sharing high-value insights, the company powers unprecedented collaboration to improve patient outcomes. Owkin works with the most prominent cancer centers and pharmaceutical companies in Europe and the US. Key achievements to date include HealthChain and MELLODDY, two Owkin-led federated learning consortia fuelling unprecedented collaboration in academic research and drug discovery, respectively. For more information, please visit Owkin.com and follow @OWKINscience on Twitter.

About Sanofi
Sanofi is dedicated to supporting people through their health challenges. We are a global biopharmaceutical company focused on human health. We prevent illness with vaccines and provide innovative treatments to fight pain and ease suffering. We stand by the few who suffer from rare diseases and the millions with long-term chronic conditions. With more than 100,000 people in 100 countries, Sanofi is transforming scientific innovation into healthcare solutions around the globe.

Media Relations Contacts
Sally Bain
Tel: +1 (781) 264-1091
Sally.Bain@sanofi.com

Nicolas Obrist
Tel: +33 6 77 21 27 55
Nicolas.Obrist@sanofi.com

Investor Relations Contacts Paris
Eva Schaefer-Jansen
Arnaud Delepine
Nathalie Pham

Investor Relations Contacts North America
Felix Lauscher

Tel.: +33 (0)1 53 77 45 45
investor.relations@sanofi.com
https://www.sanofi.com/en/investors/contact

Forward-Looking Statements
This press release contains forward-looking statements as defined in the Private Securities Litigation Reform Act of 1995, as amended. Forward-looking statements are statements that are not historical facts. These statements include projections and estimates and their underlying assumptions, statements regarding plans, objectives, intentions and expectations with respect to future financial results, events, operations, services, product development and potential, and statements regarding future performance. Forward-looking statements are generally identified by the words "expects," "anticipates," "believes," "intends," "estimates," "plans" and similar expressions. Although Sanofi's management believes that the expectations reflected in such forward-looking statements are reasonable, investors are cautioned that forward-looking information and statements are subject to various risks and uncertainties, many of which are difficult to predict and generally beyond the control of Sanofi, that could cause actual results and developments to differ materially from those expressed in, or implied or projected by, the forward-looking information and statements. These risks and uncertainties include, among other things, the uncertainties inherent in research and development, future clinical data and analysis, including post marketing, decisions by regulatory authorities, such as the FDA or the EMA, regarding whether and when to approve any drug, device or biological application that may be filed for any such product candidates as well as their decisions regarding labelling and other matters that could affect the availability or commercial potential of such product candidates, the fact that product candidates if approved may not be commercially successful, the future approval and commercial success of therapeutic alternatives, Sanofi's ability to benefit from external growth opportunities, to complete related transactions and/or obtain regulatory clearances, risks associated with intellectual property and any related pending or future litigation and the ultimate outcome of such litigation, trends in exchange rates and prevailing interest rates, volatile economic and market conditions, cost containment initiatives and subsequent changes thereto, and the impact that COVID-19 will have on us, our customers, suppliers, vendors, and other business partners, and the financial condition of any one of them, as well as on our employees and on the global economy as a whole. Any material effect of COVID-19 on any of the foregoing could also adversely impact us. This situation is changing rapidly and additional impacts may arise of which we are not currently aware and may exacerbate other previously identified risks. The risks and uncertainties also include the uncertainties discussed or identified in the public filings with the SEC and the AMF made by Sanofi, including those listed under "Risk Factors" and "Cautionary Statement Regarding Forward-Looking Statements" in Sanofi's annual report on Form 20-F for the year ended December 31, 2020. Other than as required by applicable law, Sanofi does not undertake any obligation to update or revise any forward-looking information or statements.

What is artificial intelligence good for? Panel discussion addresses the promises, opportunities and challenges – EurekAlert

From commerce, finance and agriculture to self-driving cars, personalised healthcare and social media, advancements in artificial intelligence (AI) unlock countless opportunities. New applications promise to improve the quality of people's lives throughout the world, but at the same time raise a number of societal questions. A joint panel discussion of the German National Academy of Sciences Leopoldina and the Korean Academy of Science and Technology (KAST) explores AI technologies, their benefits and their challenges for society.

Virtual panel discussion of the German National Academy of Sciences Leopoldina and the Korean Academy of Science and Technology
"Realizing the Promises of Artificial Intelligence"
Thursday, 25 November 2021, 8am to 9am (CET)
Online

Following opening remarks from the President of the Leopoldina, Prof (ETHZ) Dr Gerald Haug, and Prof Min-Koo Han, PhD, President of the KAST, legal scholar Prof Ryan Song, PhD, Kyung Hee University, Seoul/South Korea, will provide an introduction to the topic. Subsequently, computer scientist Prof Alice Oh, PhD, KAIST School of Computing, Daejeon/South Korea, and Member of the Leopoldina Prof Dr Alexander Waibel, Karlsruhe Institute of Technology/Germany and Carnegie Mellon University, Pittsburgh/USA, will provide input statements for further discussion. The speakers will present current developments and applications of AI technologies and discuss their societal and scientific impact.

The event is open to the interested public and free of charge. It will be held in English. The panel discussion will be live-streamed via the KAST YouTube Channel. Please submit your questions prior to and during the event here: https://docs.google.com/forms/d/1A9L7JqvjljYbZH3JYX_CBGyCK91ayiJAmcPZET6O91c/edit.

Further information about the event is available here: https://www.leopoldina.org/en/events/event/event/2938/

Follow the Leopoldina on Twitter: http://www.twitter.com/leopoldina

About the German National Academy of Sciences Leopoldina
As the German National Academy of Sciences, the Leopoldina provides independent science-based policy advice on matters relevant to society. To this end, the Academy develops interdisciplinary statements based on scientific findings. In these publications, options for action are outlined; making decisions, however, is the responsibility of democratically legitimised politicians. The experts who prepare the statements work in a voluntary and unbiased manner. The Leopoldina represents the German scientific community in the international academy dialogue. This includes advising the annual summits of Heads of State and Government of the G7 and G20 countries. With 1,600 members from more than 30 countries, the Leopoldina combines expertise from almost all research areas. Founded in 1652, it was appointed the National Academy of Sciences of Germany in 2008. The Leopoldina is committed to the common good.

6 ways artificial intelligence is revolutionizing home search – Inman

As all agents, brokers, and home buyers know, searching for a home is a deeply personal process, and one of the most difficult challenges for buyers is narrowing down what they want. When a prospective buyer walks through a home or searches for one online, they are making hundreds of value judgments, often without ever consciously realizing them or expressing them to the real estate professional they are working with.

Thankfully, artificial intelligence (AI) can now help bridge that gap and deliver a customized and personalized experience for consumers, without additional work by the agent or broker.

Here are a few exciting ways AI technology is making this possible:

For years, it has been easy to search for homes based on basic criteria like square footage, but what if a client wants something a little more specific, such as hardwood floors in all of the bedrooms, or homes with granite counters and white kitchen cabinets?

That's where AI comes in. Those kinds of variables, or combinations of them, are not often captured by a listing data feed, but they can be critical to personalizing the customer experience. AI makes it easy to get the right search results quickly for even the most particular clients.

If you watch Netflix or use Amazon, you're already familiar with AI technology that reacts to each individual consumer's preferences. On those platforms, what you stop to review, or even the amount of time you spend reviewing, is used to define preferences without ever asking you a specific question. In real estate, AI-powered search platforms are starting to offer buyers similar interactions.

Agents can now encourage consumers to find and upload images of what they're looking for (the type of home, the finishes, the features, the layout) and have tech tools handle the hard work of searching for similar properties on the market.

Firms like Wayfair, Home Depot, and others are leveraging tools that allow consumers to visualize what a room or a home would look like with different paint colors, with their own furniture or even after a renovation. This allows buyers and sellers to maximize the interest in a transaction by seeing what their home will look like in the future.

Instead of typing something like "New York, three-bedroom apartment," prospects are now able to simply speak into their phone or computer microphone and say something like, "I need a three-bedroom apartment with a Central Park view in New York, facing east." And before long, platforms will be able to reply to them verbally. With computer vision technology, that becomes a reality by utilizing plain-English descriptions of what is tagged in images and searching for them.
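
Behind a voice search like that, the transcribed request typically has to be converted into structured filters before the listing database is queried. The sketch below is a deliberately simple, rule-based illustration of that step on the example query above; production systems generally rely on trained language models, and the filter names here are assumptions.

import re

WORD_TO_NUM = {"one": 1, "two": 2, "three": 3, "four": 4, "five": 5}

def parse_query(text: str) -> dict:
    """Turn a plain-English home-search request into structured search filters."""
    text_l = text.lower()
    filters = {}

    # Bedrooms: matches "three-bedroom", "3 bedroom", "3br", etc.
    m = re.search(r"(\d+|one|two|three|four|five)[\s-]*(?:bedroom|br)\b", text_l)
    if m:
        token = m.group(1)
        filters["bedrooms"] = int(token) if token.isdigit() else WORD_TO_NUM[token]

    # Property type: first matching keyword wins.
    for ptype in ("apartment", "condo", "townhouse", "house"):
        if ptype in text_l:
            filters["property_type"] = ptype
            break

    # Named view ("with a Central Park view") and facing direction, if present.
    m = re.search(r"with an? ([\w\s]+?) view", text_l)
    if m:
        filters["view"] = m.group(1).strip()
    m = re.search(r"facing\s+(north|south|east|west)", text_l)
    if m:
        filters["facing"] = m.group(1)

    return filters

query = "I need a three-bedroom apartment with a Central Park view in New York, facing east"
print(parse_query(query))
# {'bedrooms': 3, 'property_type': 'apartment', 'view': 'central park', 'facing': 'east'}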

For sellers, search placement can be improved by using technology that automatically tags home features in listing photos. That means agents can avoid writing all those tags and detailed image descriptions, but still have their sellers benefit from optimal search engine placement. At a time when the vast majority of home searches start online, that's a big deal.

Put simply, developments like these are increasingly transforming the home search process and making it easy for real estate professionals to deliver an even more highly personalized service for their customers without adding more to their plates.

Red Bell Real Estate, LLC, a homegenius company, is at the forefront of these and other exciting technology developments that will make agents' and brokers' jobs easier and more lucrative. If you're interested in learning more about how this tech could work for you or your agents, visit homegenius.com.

2021 Radian Group Inc. All Rights Reserved. Red Bell Real Estate, LLC, 7730 South Union Park Avenue, Suite 400, Midvale, UT 84047. Tel: 866-626-2381. Licensed in every State and the District of Columbia. This communication is provided for use by real estate professionals only and is not intended for distribution to consumers or other third parties. This does not constitute an advertisement as defined by Section 1026.2(a)(2) of Regulation Z.

5 applications of Artificial Intelligence in banking – IBS Intelligence

5 applications of Artificial Intelligence in banking
By Joy Dumasia

Artificial Intelligence (AI) has been around for a long time. AI was first conceptualized in 1955 as a branch of Computer Science focused on the science of making intelligent machines: machines that could mimic the cognitive abilities of the human mind, such as learning and problem-solving. AI is expected to have a disruptive effect on most industry sectors, many times greater than what the internet did over the last couple of decades. Organizations and governments around the world are diverting billions of dollars to fund research and pilot programs of applications of AI in solving real-world problems that current technology is not capable of addressing.

Artificial Intelligence enables banks to manage record-level, high-speed data and derive valuable insights from it. Moreover, features such as digital payments, AI bots, and biometric fraud detection systems further lead to high-quality services for a broader customer base. Artificial Intelligence comprises a broad set of technologies, including, but not limited to, Machine Learning, Natural Language Processing, Expert Systems, Vision, Speech, Planning, Robotics, etc.

The adoption of AI in different enterprises has increased due to the COVID-19 pandemic. Since the pandemic hit the world, the potential value of AI has grown significantly. The focus of AI adoption has so far been restricted to improving the efficiency or effectiveness of operations. However, AI is becoming increasingly important as organizations automate their day-to-day operations and analyse COVID-19-affected datasets. It can be leveraged to improve the stakeholder experience as well.

The following are 5 applications of Artificial Intelligence in banking:

Chatbots deliver a very high ROI in cost savings, making them one of the most commonly used applications of AI across industries. Chatbots can effectively tackle most commonly accessed tasks, such as balance inquiry, accessing mini statements, fund transfers, etc. This helps reduce the load from other channels such as contact centres, internet banking, etc.

Automated advice is one of the most controversial topics in the financial services space. A robo-advisor attempts to understand a customer's financial health by analyzing data shared by them, as well as their financial history. Based on this analysis and the goals set by the client, the robo-advisor will be able to give appropriate investment recommendations in a particular product class, even down to a specific product or equity.

One of AI's most common use cases includes general-purpose semantic and natural language applications and broadly applied predictive analytics. AI can detect specific patterns and correlations in the data which legacy technology could not previously detect. These patterns could indicate untapped sales opportunities, cross-sell opportunities, or even metrics around operational data, leading to a direct revenue impact.

AI can significantly improve the effectiveness of cybersecurity systems by leveraging data from previous threats and learning the patterns and indicators that might seem unrelated to predict and prevent attacks. In addition to preventing external threats, AI can also monitor internal threats or breaches and suggest corrective actions, resulting in the prevention of data theft or abuse.

AI is instrumental in helping alternate lenders determine the creditworthiness of clients by analyzing data from a wide range of traditional and non-traditional data sources. This helps lenders develop innovative lending systems backed by a robust credit scoring model, even for those individuals or entities with limited credit history. Notable companies include Affirm and GiniMachine.
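
In practice, alternative credit scoring of this kind usually means fitting a classifier to repayment outcomes using both traditional features (income, debt load) and non-traditional ones (for example, on-time utility payments or cash-flow stability). The sketch below is a minimal illustration on synthetic data using scikit-learn; it is not the model of any company mentioned above, and the feature names and coefficients are assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n = 5000

# Synthetic applicants: traditional and non-traditional signals.
income = rng.normal(50_000, 15_000, n)
debt_ratio = rng.uniform(0, 1, n)
on_time_utility = rng.binomial(12, 0.8, n)       # utility bills paid on time last year
cashflow_volatility = rng.exponential(0.3, n)

# Synthetic repayment outcome (1 = repaid), driven by the features above.
logit = (0.00004 * income - 2.5 * debt_ratio
         + 0.15 * on_time_utility - 1.0 * cashflow_volatility)
repaid = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

X = np.column_stack([income, debt_ratio, on_time_utility, cashflow_volatility])
X_train, X_test, y_train, y_test = train_test_split(X, repaid, random_state=0)

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_train, y_train)
scores = model.predict_proba(X_test)[:, 1]       # predicted probability of repayment
print("holdout AUC:", round(roc_auc_score(y_test, scores), 3))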

Visa’s Artificial Intelligence Prevents Nearly $88 Million In Fraud From Impacting | Scoop News – Scoop

Friday, 19 November 2021, 12:32 pm
Press Release: Visa

Visa, the world's leader in digital payments, has today announced that its artificial intelligence (AI) solution Visa Advanced Authorisation has helped financial institutions to prevent nearly $88 million in fraud from impacting New Zealand businesses in the past year.

Visa pioneered the use of neural networks, modeled on the human brain, to power its AI technology that analyses the risk of transactions in real time to identify and stop fraud. The AI algorithm assesses more than 500 risk attributes in roughly a millisecond to produce a score of every transaction's predicted fraud probability.
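
Conceptually, real-time risk scoring of this kind is a supervised model that maps per-transaction attributes to a fraud probability. The toy example below trains a small scikit-learn neural network on synthetic data with a handful of made-up features rather than the 500-plus attributes Visa cites; it illustrates the general approach only and is not Visa Advanced Authorisation.

import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
n = 20_000

# Synthetic per-transaction risk attributes (a production system uses hundreds).
amount = rng.lognormal(3.5, 1.0, n)            # transaction amount
hour = rng.integers(0, 24, n)                  # local hour of day
new_merchant = rng.binomial(1, 0.2, n)         # first purchase at this merchant
geo_mismatch = rng.binomial(1, 0.05, n)        # card country differs from merchant country
velocity_10min = rng.poisson(0.3, n)           # other recent transactions on the card

# Synthetic fraud labels, skewed so that fraud is rare.
logit = -6 + 0.004 * amount + 1.5 * geo_mismatch + 0.8 * new_merchant + 0.9 * velocity_10min
is_fraud = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

X = np.column_stack([amount, hour, new_merchant, geo_mismatch, velocity_10min])
X_train, X_test, y_train, y_test = train_test_split(X, is_fraud, random_state=0)

model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=500, random_state=0),
)
model.fit(X_train, y_train)
risk_scores = model.predict_proba(X_test)[:, 1]  # fraud probability per transaction
print("holdout AUC:", round(roc_auc_score(y_test, risk_scores), 3))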

While fraud rates have remained stable over the past year and globally near historic lows, Visa's AI-powered security is increasingly critical as payments continue a rapid shift online, where fraudsters tend to commit most of their crime. NZ Post reported that in 2020, over two million New Zealanders shopped online, up 9.2% on the prior year, and spent $5.8 billion on online shopping, $1.2 billion more than in 2019.

"As consumer spending continues to move online, so has the focus of fraudsters. We are investing more heavily than ever in technology that ensures a safe and secure marketplace, combatting fraud while enabling seamless, genuine transactions. This investment, which includes a global Visa team of over 850 cyber specialists, covers systems resilience and cybersecurity tools like tokenisation, AI and blockchain-based solutions," said Anthony Watson, Visa's Country Manager for New Zealand and South Pacific.

One of the top threats to emerge for businesses in New Zealand and globally in the past year is enumeration, the criminal practice that involves using automation to test and guess payment credentials such as account numbers, CVV2, and/or expiry dates during online checkout.

To counter this, Visa is leveraging another AI-powered solution, Visa Account Attack Intelligence, which spots patterns in data that are otherwise undetectable by humans. The technology uses cutting-edge machine learning to identify account testing, analyse the details of the attack, and enable Visa to take action in near real-time.
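
Enumeration attacks of the kind described above tend to surface as bursts of small, rapidly declined authorisation attempts against a single merchant or card range. The sketch below shows a simple rule-based version of spotting such a burst; it is a generic illustration, not Visa Account Attack Intelligence, and the window size and threshold are assumptions.

from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)
DECLINE_THRESHOLD = 50  # assumed: declines per merchant per window that trigger review

def flag_enumeration(auth_stream):
    """Flag merchants that see an abnormal burst of declined authorisations."""
    declines = defaultdict(list)
    for merchant, timestamp, approved in auth_stream:
        if not approved:
            declines[merchant].append(timestamp)
    flagged = []
    for merchant, times in declines.items():
        times.sort()
        start = 0
        for end in range(len(times)):          # sliding window over decline times
            while times[end] - times[start] > WINDOW:
                start += 1
            if end - start + 1 >= DECLINE_THRESHOLD:
                flagged.append(merchant)
                break
    return flagged

# (merchant_id, timestamp, approved) tuples standing in for an authorisation stream:
# 120 declines in four minutes at one merchant, plus one normal approval elsewhere.
stream = [
    ("merchant_42", datetime(2021, 11, 18, 3, 0, 0) + timedelta(seconds=2 * i), False)
    for i in range(120)
] + [("merchant_7", datetime(2021, 11, 18, 3, 1, 0), True)]

print(flag_enumeration(stream))  # ['merchant_42']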

Watson concluded: "The most fundamental attribute in commerce is trust; if a business loses a customer's trust, they lose sales. The global nature of Visa's network means we're able to apply learnings from transactions processed by Visa at merchants in every country and territory we operate in around the world to protect New Zealand businesses."

Visa Inc. (NYSE:V) is the world's leader in digital payments. Our mission is to connect the world through the most innovative, reliable and secure payment network, enabling individuals, businesses and economies to thrive. Our advanced global processing network, VisaNet, provides secure and reliable payments around the world, and is capable of handling more than 65,000 transaction messages a second. The company's relentless focus on innovation is a catalyst for the rapid growth of digital commerce on any device for everyone, everywhere. As the world moves from analog to digital, Visa is applying our brand, products, people, network and scale to reshape the future of commerce. For more information, visit About Visa, visa.com/blog and @VisaNews.

Eyes of the City: Visions of Architecture After Artificial Intelligence – ArchDaily

Eyes of the City: Visions of Architecture After Artificial Intelligence

This book tells the story of Eyes of the City, an international exhibition on technology and urbanism held in Shenzhen during the winter of 2019 and 2020, with a curation process that unfolded between summer 2018 and spring 2020. Conceived as a cultural event exploring future scenarios in architecture and design, Eyes of the City found itself in an extraordinary, if unstable, position, enmeshed within a series of powerfully contingent events (the political turmoil in Hong Kong, the first outbreak of COVID-19 in China) that impacted not only the scope of the project, but also the global debate around society and urban space.

Eyes of the City was one of the two main sections of the eighth edition of the Shenzhen Bi-City Biennale of Urbanism\Architecture (UABB), titled "Urban Interactions." Jointly curated by CRA-Carlo Ratti Associati, Politecnico di Torino and South China University of Technology, it focused on the various relationships between the built environment and increasingly pervasive digital technologies (from artificial intelligence to facial recognition, drones to self-driving vehicles) in a city that is one of the world's leading centers of the Fourth Industrial Revolution. [1]

The topic of the exhibition was decided well before the two events mentioned above made it an especially sensitive one for a Chinese, as well as an international, audience. The Biennale opened its doors in December 2019, just after the months-long protests in Hong Kong had reached their climax and the discussion on the role of surveillance systems embedded in physical space was at its most controversial. [2] In addition, the location the UABB organizers had chosen for the Biennale also caused controversy. The exhibition venue was at the heart of Shenzhen's Central Business District, in the hall of Futian Station, one of the largest infrastructure spaces in Asia as well as a multi-modal hub connecting the metropolis's metro system with high-speed trains capable of reaching Hong Kong in about ten minutes.

The agitations occurring on the south side of the border never spilled over into the first outpost of Mainland China. Nevertheless, as the curation process progressed and the opening day approached, the climate grew more tense. In those weeks, it was enough for an exhibitor to merely include as part of his/her proposal a drawing of people on the street standing under umbrellas to prompt heated reactions, with the image reminding visitors of the 2014 pro-democracy movement's symbol. Immediately prior to the opening, the station's police fenced off the Biennale venue, instituting check-points for visitors (fortunately, this provision lasted only two weeks before people were permitted again to roam freely inside the station). Despite these contingencies, Eyes of the City managed to offer what a Reuters journalist defined as "a rare public space for reflection on increasingly pervasive surveillance by tech companies and the government." [3]

Then, in the second half of January 2020, what began as a local sickness in the city of Wuhan [4], 1,000 kilometers north of Shenzhen, spread across the country and beyond, rapidly becoming a global pandemic. Trains between Futian and Hong Kong were discontinued [5], the Biennale venue was shut, and in a matter of weeks the role of emerging technologies in regulating and facilitating people's work and social lives became one of the most-discussed topics worldwide, after the grim tally of infections and deaths. In the design field, COVID-19 was seen as exposing and amplifying, on a transcontinental scale, trajectories of change that were already underway.

In an unforeseeable fashion, the occurrences of history in southern China between late 2019 and early 2020 made the question of the city with eyes even more timely and pressing. In the midst of these events, the exhibition had to reinvent itself, experimenting with its form and content in order to continue carrying out its program and contribute to the growing debate. A product of this context, this book is the result of similar processes of continuous adjustment, reflection-in-action, and exchange.

The book challenges the traditional notion of exhibition catalog, crossing the three temporal and conceptual dimensions that were also tackled by the exhibition as a whole. The book is composed of three parts, which loosely represent the different laboratories of the exhibition: the curatorial work that preceded it, the open debate that accompanied it, and the content that made it relevant. Overall, the book adopts Eyes of the City as a trans-scalar and multidisciplinary interpretative key for rethinking the city as a complex entanglement of relationships.

The first part expands on curatorial practices and reflects on the exhibition as an incubator of ideas. The opening essay is written by the exhibition's chief curator Carlo Ratti and academic curators Michele Bonino (Politecnico di Torino) and Yimin Sun (South China University of Technology): it positions Eyes of the City as an urgent urban category and proposes a legacy for the show which reframes the role of architecture biennales. The second essay is written by the exhibition's executive curators: it visually reconstructs the exhibition's design process and its materialization of our open-curatorship approach.

The second part of the book expands on a discussion that accompanied the entire curatorial process from spring 2019 to summer 2020, through a rubric on ArchDaily. Tens of designers, writers, and philosophers, as foundational contributors, were asked to respond to the curatorial statement of Eyes of the City. The book contains a selection of these responses, covering topics as diverse as the identity of the eyes of the city and the aesthetic regimes behind them (Antoine Picon and Jian Liu), the evolution of the concept of urban anonymity (Yung-Ho Chang and Deyan Sudjic), the role of the natural world in the technologically enhanced city (Jeanne Gang), and advances in design practices that lie between robotics and archivization (Albena Yaneva and Philip Yuan).

The third part unpacks the content of the exhibition through eight essays, corresponding to the sections of the exhibition, written by researchers who were part of the curatorial team. These essays position the installations within a wider landscape of intra- and inter-disciplinary debate through an outward movement from the laboratories of the exhibition to possible future scenarios.

Eyes of the City has striven to broaden discussion and reflection on possible future urban spaces as well as on the notion of the architectural biennale itself. The curatorial line adopted throughout the eighteen-month-long process (an entanglement of online and on-site interactions, extensively leaning on academic research) configured the exhibition as an open system; that is, a platform of exchange independent of any aprioristic theoretical direction. The outbreak of COVID-19 inevitably impacted the material scale of the project. At the same time, it underlined the relevance of its immaterial legacy. Eyes of the City progressively re-invented itself in a virtual dimension, experimenting with diverse tactics to make its cultural program accessible. In doing so, it spawned a set of digital and physical documents, strategies and traces that address some of the many open issues the city with eyes will face in the future. This book aims at a first systematization of this heterogeneous legacy.

Bibliography

AUTHORS' BIOS:

VALERIA FEDERIGHI is an architect and assistant professor at Politecnico di Torino, Italy. She received an MArch and a Ph.D. from the same university, and a Master of Science in Design Research from the University of Michigan. She is on the editorial board of the journal Ardeth (Architectural Design Theory) and is part of the China Room research group. Her main publication to date is the book The Informal Stance: Representations of Architectural Design and Informal Settlements (Applied Research Design, ORO Editions, 2018). She was Head Curator of Events and Editorial for the Eyes of the City exhibition.

MONICA NASO is an architect and a Ph.D. candidate in Architecture. History and Project at Politecnico di Torino. She received a MArch with honors from the same university and had several professional experiences in Paris and Turin. As a member of the China Room research group and of the South China-Torino Collaboration Lab, she takes part in international and interdisciplinary research and design projects, and she was among the curators of the Italian Design Pavilion at the Shenzhen Design Week 2018. She was Head Curator of Exhibition and On-site Coordination for the Eyes of the City exhibition.

DANIELE BELLERI is a Partner at the design and innovation practice CRA-Carlo Ratti Associati, where he manages all curatorial, editorial, and communication projects of the office. He has a background in contemporary history, urban studies, and political science, and spent a period as a researcher at Moscow's Strelka Institute for Media, Architecture, and Design. Before joining CRA, he ran a London-based strategic design agency advising cultural organizations in Europe and Asia, and worked as an independent journalist writing on design and urban issues in international publications. He was one of the Executive Curators of the Eyes of the City exhibition. Currently, he is leading the development of CRA's Urban Study for Manifesta 14 Prishtina.

European Commission's Proposed Regulation on Artificial Intelligence: Conducting a Conformity Assessment in the Context of AI. Say What? – JD Supra

Introduction

The European Commission (EC) on April 21, 2021, proposed a regulation establishing a framework and rules (Proposed Regulation) for trustworthy Artificial Intelligence (AI) systems. As discussed in our previous OnPoints here and here, the Proposed Regulation aims to take a proportionate, risk-based regulatory approach by distinguishing between harmful AI practices, which are prohibited, and other AI uses that carry risk, but are permitted. These uses are the focus of the Proposed Regulation: high-risk AI systems can only be placed on the EU market or put into service if, among other requirements, a conformity assessment is conducted prior to doing so.

This OnPoint: (i) summarizes the requirements for conducting a conformity assessment, including unique considerations that apply to data driven algorithms and outputs that typically have not applied to physical systems and projects under EU product safety legislation; (ii) discusses the potential impacts of this new requirement on the market and how it will fit within the existing sectoral safety legislation framework in the EU; and (iii) identifies some strategic considerations, including in the context of product liability litigation, for providers and other impacted parties.

The Proposed Regulation's conformity assessment requirement has its origins in EU product safety legislation. Under EU law, a conformity assessment is a process carried out to demonstrate whether specific consumer protection and product integrity requirements are fulfilled, and if not, what, if any, remedial measures can be implemented to satisfy such requirements. Unsafe products, or those that otherwise do not comply with applicable standards, may not make their way to the EU market. The scope of conformity assessment required differs under various directives according to the type of product and the perceived level of risk it presents, varying from self-assessment to risk assessment by a suitably qualified independent third party referred to as a Notified Body (whose accreditation may vary between Member States). An analogous concept in the U.S. is the authority of the Food and Drug Administration to require that manufacturers of medical devices follow certain regulatory procedures to market a new product in the U.S. market. The procedures required depend, among other factors, on the potential for devices to harm U.S. consumers.

As suggested, in the EU, conformity assessments are customarily intended for physical products, such as machinery, toys, medical devices and personal protective equipment. Examples of conformity assessments for physical products include sampling, testing, inspecting and evaluating a product. It remains to be seen how the conformity assessments under the Proposed Regulation will work in practice when applied to more amorphous components of AI such as software code and data assets. We anticipate, however, that the focus will be on testing such systems for bias and discriminatory/disparate impacts. Factors should include ensuring that representative data are included in the models and that outcomes avoid amplifying or perpetuating existing bias or otherwise unintentionally producing discriminatory impacts, particularly where traditionally underserved populations are targeted by AI models to correct inequities (e.g., an AI model might assign credit scores to certain demographic groups that result in targeted ads for higher interest rates than advertised to other market segments).
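
One concrete form such testing could take is a disparate-impact check: compare a model's favourable-outcome rates across demographic groups and flag the system when the ratio between groups falls below an agreed threshold (the four-fifths rule used in US employment contexts is one common benchmark). The sketch below is a generic illustration on made-up data, not a procedure prescribed by the Proposed Regulation.

import numpy as np

def disparate_impact(y_pred, group, favourable=1):
    """Favourable-outcome rate per group and its ratio to the best-off group."""
    rates = {g: float(np.mean(y_pred[group == g] == favourable)) for g in np.unique(group)}
    best = max(rates.values())
    ratios = {g: rate / best for g, rate in rates.items()}
    return rates, ratios

# Made-up model decisions (1 = loan approved) for two demographic groups.
rng = np.random.default_rng(1)
group = rng.choice(["A", "B"], size=1_000, p=[0.6, 0.4])
y_pred = np.where(group == "A",
                  rng.binomial(1, 0.55, 1_000),
                  rng.binomial(1, 0.38, 1_000))

rates, ratios = disparate_impact(y_pred, group)
print("approval rates:", {g: round(r, 2) for g, r in rates.items()})
print("impact ratios: ", {g: round(r, 2) for g, r in ratios.items()})
# A ratio well below 0.8 for one group would flag the system for closer review.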

The Proposed Regulation provides for two different types of conformity assessments depending on the type of high-risk AI system at issue:

While the Proposed Regulation allows for a presumption of conformity for certain data quality requirements (where high-risk AI systems have been trained and tested on data concerning the specific settings within which they are intended to be used) and cybersecurity requirements (where the system has been certified or a statement of conformity issued under a cybersecurity scheme) [2], providers are not absolved of their obligation to carry out a conformity assessment for the remainder of the requirements.

The specific conformity assessment to be conducted for high-risk AI systems depends on the category and type of AI at issue:

High-risk AI systems must undergo new assessments whenever they are substantially modified, regardless of whether the modified system will continue to be used by the current user or is intended to be more widely distributed. In any event, a new assessment is required every 5 years for AI systems required to conduct Notified Body Assessments.

Many questions remain about how conformity assessments will function in practice, including how the requirement will interact with UK and EU anti-discrimination legislation (e.g., the UK Equality Act 2010) and with existing sectoral safety legislation, including:

Supply Chain Impact and Division of Liability: The burdens of performing a conformity assessment will be shared among stakeholders. Prior to placing a high-risk AI system on the market, importers and distributors of such systems will be required to ensure that the correct conformity assessment was conducted by the provider of the system. Parties in the AI ecosystem may try to contract around liability issues and place the burden on parties elsewhere in the supply chain to meet conformity assessment requirements.

Costs of Compliance (and Non-Compliance): While the Proposed Regulation declares that the conformity assessment approach is intended "to minimize the burden for economic operators" (i.e., stakeholders), some commentators have expressed concern that an unintended consequence will be to force providers to conduct duplicative assessments where they are already subject to existing EU product legislation and other legal frameworks.5 Conducting a conformity assessment may also increase business and operational costs, such as legal fees. Companies will want to educate the EU Parliament and Council about these impacts during the legislative process, both through formal lobbying and informally, for example at conferences typically attended by industry and regulators and through thought leadership.

In addition to the cost of conducting a conformity assessment, penalties for noncompliance will be hefty. The Proposed Regulation tasks EU Member States with enforcement and imposes a three-tier fine regime similar to the GDPR, in each case the higher of a percentage of annual worldwide turnover or a fixed amount: up to 2% or €10 million for supplying incorrect, incomplete or misleading information to notified bodies or other public authorities; up to 4% or €20 million for non-compliant AI systems; and up to 6% or €30 million for violations of the prohibitions on unacceptable AI systems and of governance obligations.
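The "higher of" structure of these caps can be shown with a short worked example. The tier percentages and fixed amounts below reflect the figures summarized above; the turnover figure is a hypothetical chosen for illustration.

```python
# Minimal sketch of the "higher of" fine caps described above. Tier percentages
# and fixed amounts reflect the Proposed Regulation as summarized in this
# OnPoint; the turnover figure is a hypothetical example.

FINE_TIERS = {
    "misleading_information": (0.02, 10_000_000),  # up to 2% or €10 million
    "non_compliant_system":   (0.04, 20_000_000),  # up to 4% or €20 million
    "prohibited_practice":    (0.06, 30_000_000),  # up to 6% or €30 million
}

def maximum_fine(annual_worldwide_turnover_eur: float, violation: str) -> float:
    """Return the cap on the fine: the higher of the turnover percentage
    and the fixed amount for the relevant tier."""
    pct, fixed = FINE_TIERS[violation]
    return max(pct * annual_worldwide_turnover_eur, fixed)

# Example: a provider with €2 billion in annual worldwide turnover.
print(f"€{maximum_fine(2_000_000_000, 'non_compliant_system'):,.0f}")  # €80,000,000
```

For a provider of that size, the percentage-based cap (€80 million) far exceeds the €20 million fixed amount, which is why turnover-based exposure tends to dominate compliance planning for larger companies.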

Extraterritorial Reach: Like the GDPR, the Proposed Regulation is intended to have global reach and applies to: (i) providers that offer AI in the EEA, regardless of whether the provider is located inside or outside the EEA; (ii) users of AI in the EEA; and (iii) providers and users of AI located outside the EEA where the AI outputs are used in the EEA. Prong (iii) could create compliance headaches for providers of high-risk AI systems located outside the EEA, who may not always be aware of, or able to determine, where the outputs of their AI systems are used. It may also lead providers located outside the EEA to conduct a cost-benefit analysis before introducing their products to the EEA market, though such providers will likely already be familiar with conformity assessments under existing EU law.

Data Use: In conducting the conformity assessment, providers will need to address data privacy considerations involving the personal data used to create, train, validate and test AI models, including the GDPR's restrictions on automated decision-making and the corresponding data subject rights. As noted, these considerations do not appear to be contemplated by existing product legislation, which has focused on the integrity of physical products introduced into the EU market.

For AI conformity assessments, data sets must meet certain quality criteria: for example, they must be relevant, representative and inclusive, free of errors and complete. The characteristics or elements specific to the geographical, behavioral or functional setting in which the AI system is intended to operate should also be considered. As noted, providers of high-risk AI systems should identify the risk of inherent bias in their data sets and outputs. The use of race, ethnicity, trade union membership and similar demographic characteristics (or proxies for them), including reliance on data drawn from only one such group, could result in legal, ethical and brand harm. AI fairness in credit scoring, targeted advertising, recruitment, benefits qualifications and criminal sentencing is currently being examined by regulators in the U.S. and other countries, as well as by industry trade groups, individual companies, nonprofit think tanks and academic researchers. Market trends and practices remain nascent and evolving.
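As a rough illustration of what documenting these data quality criteria might involve, the sketch below runs two very basic checks, completeness of required fields and representativeness relative to the intended deployment setting, over a handful of hypothetical records. The field names, target shares and tolerance are assumptions made for the example, not criteria specified in the Proposed Regulation.

```python
# Illustrative only: simple checks a provider might include when documenting
# that a data set is "complete, free of errors and representative" of the
# setting in which the AI system will operate. Field names, target shares and
# the tolerance are assumptions for the sake of the example.

REQUIRED_FIELDS = ["age", "region", "outcome"]
TARGET_REGION_SHARES = {"north": 0.5, "south": 0.5}  # assumed deployment setting

def completeness_issues(records):
    """Records with missing or empty required fields (a basic error check)."""
    return [r for r in records if any(r.get(f) in (None, "") for f in REQUIRED_FIELDS)]

def representativeness_gaps(records, tolerance=0.10):
    """Regions whose share in the data deviates from the target by more than `tolerance`."""
    total = len(records)
    gaps = {}
    for region, target in TARGET_REGION_SHARES.items():
        share = sum(1 for r in records if r.get("region") == region) / total
        if abs(share - target) > tolerance:
            gaps[region] = (share, target)
    return gaps

records = [
    {"age": 34, "region": "north", "outcome": 1},
    {"age": 51, "region": "north", "outcome": 0},
    {"age": 29, "region": "north", "outcome": 1},
    {"age": None, "region": "south", "outcome": 0},
]

print("Incomplete records:", len(completeness_issues(records)))
print("Representativeness gaps:", representativeness_gaps(records))
```

Actual assessments would of course go much further (error audits, labeling review, ongoing monitoring and the like), but reproducible checks of this kind are the sort of evidence that could be recorded in the technical documentation accompanying a conformity assessment.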

Bolstering of Producer Defenses Under the EU Product Liability Regime: Many see the EU as the next frontier for mass consumer claims. The EU has finally taken steps, via EU Directive 2020/1828 on Representative Actions (the "Directive"), to enhance and standardise collective redress procedures throughout the Member States; the Directive's provisions must be implemented no later than mid-2023. Class action activity in the EU was already increasing substantially, and the Directive will only accelerate that development. The EU product liability framework, reflecting Directive 85/374/EEC, is often described as a strict liability regime; importantly, however, producers can escape liability under certain limited exceptions, including by asserting a "state of the art" defence (i.e., that the state of scientific or technical knowledge at the time the product was put into circulation was not such as to enable the defect to be detected). At least as far as an AI component is concerned, the new conformity assessment requirements detailed above, particularly assessments undertaken by a Notified Body, may provide producers with a stronger evidential basis for asserting that defence.

While the Proposed Regulation is still being negotiated in the EU's tripartite legislative process, we anticipate that its core requirements will be implemented. In order to future-proof the development and use of this valuable technology, companies will want to consider the following measures to prepare.

Footnotes

1) The Proposed Regulation provides for the establishment of notified bodies within EU Member States. Notified bodies will be required to perform the third-party conformity assessment activities, including testing, certification and inspection of AI systems. In order to become a notified body, an organization must submit an application for notification to the notifying authority of the EU Member State in which it is established.

2) Pursuant to Regulation (EU) 2019/881.

3) "Harmonised standard" is defined in the Proposed Regulation as a European standard as defined in Article 2(1)(c) of Regulation (EU) No 1025/2012. "Common specifications" is defined as a document, other than a standard, containing technical solutions providing a means to comply with certain requirements and obligations established under the Proposed Regulation.

4) The other high-risk AI systems identified in the Proposed Regulation relate to law enforcement, migration, asylum and border control management, and administration of justice and democratic processes.

5) For example, MedTech Europe submitted a response to the Proposed Regulation arguing that it would require manufacturers to "undertake duplicative certification / conformity assessment, via two Notified Bodies, and maintain two sets of technical documentation, should misalignments between [the Proposed Regulation] and MDR/IVDR not be resolved." Available at: https://www.medtecheurope.org/wp-content/uploads/2021/08/medtech-europe-response-to-the-open-public-consultation-on-the-proposal-for-an-artificial-intelligence-act-6-august-2021-1.pdf.

Link:
European Commission's Proposed Regulation on Artificial Intelligence: Conducting a Conformity Assessment in the Context of AI. Say What? - JD Supra