Category Archives: Artificial Intelligence

Artificial intelligence: Everyone wants it, but not everyone is ready – ZDNet

Posted: November 17, 2021 at 12:53 pm

Artificial intelligence technologies have reached impressive levels of adoption, and are seen as a competitive differentiator. But there comes a point when technology becomes so ubiquitous that it is no longer a competitive differentiator -- think of the cloud. Going forward, those organizations succeeding with AI, then, will be those that apply human innovation and business sense to their AI foundations.

Such is the challenge identified in a study released by RELX, which finds that the use of AI technologies, at least in the United States, has reached 81% of enterprises, up 33 percentage points from the 48% recorded in a previous RELX survey in 2018. Enterprises are also bullish on AI delivering the goods: 93% report that AI makes their business more competitive. This ubiquity may be the reason 95% also report that finding the skills to build out their AI systems is a challenge. Plus, these systems may be flawed: 75% worry that AI systems could introduce the risk of bias in the workplace, and 65% admit their systems are biased.

So there's still much work to be done. It comes down to the people who can make AI happen, and who can make it as fair and accurate as possible.

"While many AI and machine learning deployments fail, in most cases, it's less of a problem with the actual technology and more about the environment around it," says Harish Doddi, CEO of Datatron. Moving to AI "requires the right skills, resources,andsystems."

It takes a well-developed understanding of AI and ML to deliver visible benefits to the business. While AI and ML have been around for many years, "we are still barely scratching the surface of uncovering their true capabilities," says Usman Shuja, general manager of connected buildings for Honeywell. "That said, there are many valuable lessons to be gleaned from others' missteps. While it's arguably true that AI can add significant value to practically any department across any business, one of the biggest mistakes a business can make is to implement AI for the sake of implementing AI, without a clear understanding of the business value they hope to achieve."

In addition, AI requires adroit change management, Shuja continues. "You can install the most cutting-edge AI solutions available, but if your employees can't or won't change their behaviors to adapt to a new way of doing things, you will see no value."

Another challenge is bias, as expressed by many executives in the RELX survey. "Algorithms can easily become biased based on the people who write them and the data they are providing, and bias can happen more with ML as it can be built in the base code," says Shuja. "While large amounts of data can ensure accuracy, it's virtually impossible to have enough data to mimic real-world use cases."

For example, he illustrates, "if I was looking into recruiting collegiate athletes for my professional lacrosse team, and I discovered that most of the players I am hearing about are Texas Longhorns, that might lead me to conclude that the best lacrosse players attend the University of Texas. However, this could just be because the algorithm has received too much data from one university, thus creating a bias."
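That sampling effect is easy to demonstrate. Below is a minimal Python sketch (not from the article; all data is invented) showing how a naive ranking drawn from a skewed sample reproduces the skew rather than any real difference in quality:

```python
# Illustrative sketch of the sampling bias Shuja describes. All data invented:
# player skill is identically distributed across schools, but scouting reports
# over-sample one university 9:1.
from collections import Counter
import random

random.seed(0)

reports = [("Texas", random.gauss(70, 10)) for _ in range(900)] + \
          [("Other", random.gauss(70, 10)) for _ in range(100)]

# A naive "best school" query over the top 50 rated players:
top50 = sorted(reports, key=lambda r: r[1], reverse=True)[:50]
print(Counter(school for school, _ in top50))
# Texas dominates the top 50 simply because it dominates the sample,
# not because its players are better.
```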

The way the data is set up and who sets it up "can inadvertently sneak bias into the algorithms," Shuja says. "Companies that are not yet thinking through these implications need to put this to the forefront of their AI and ML technology efforts to build integrity into their solutions."

Another issue is that AI and ML models simply become outdated too soon, as many companies found out, and continue to find out, as a result of Covid and supply chain issues. "Having good documentation that shows the model lifecycle helps, but it's still insufficient when models become unreliable," says Doddi. "AI model governance helps bring accountability and traceability to machine learning models by having practitioners ask questions such as 'What were the previous versions like?' and 'What input variables are coming into the model?'" Governance is key. During development, Doddi explains, "ML models are bound by certain assumptions, rules, and expectations. Once deployed into production, the results can differ significantly from results in development environments. This is where governance is critical once a model is operationalized. There needs to be a way to keep track of various models and versions."
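Doddi's governance questions imply keeping a structured record of each model version, its inputs, and its lineage. The sketch below is a hypothetical illustration of such a record; every field and function name is invented for this example and is not Datatron's or any vendor's actual schema:

```python
# Hypothetical model-governance record answering "what were the previous
# versions like?" and "what input variables feed the model?". Names invented.
from dataclasses import dataclass
from datetime import date
from typing import Dict, List, Optional

@dataclass
class ModelVersion:
    name: str
    version: str
    trained_on: date
    input_variables: List[str]            # what feeds the model in production
    assumptions: List[str]                # rules/expectations bound at development time
    parent_version: Optional[str] = None  # traceability to earlier versions

registry: Dict[str, ModelVersion] = {}

def register(mv: ModelVersion) -> None:
    """Record a version so lineage questions stay answerable after deployment."""
    registry[f"{mv.name}:{mv.version}"] = mv

register(ModelVersion(
    name="demand_forecast", version="2.1", trained_on=date(2021, 9, 1),
    input_variables=["orders_7d", "inventory_level", "shipping_lag_days"],
    assumptions=["pre-Covid seasonality still holds"],
    parent_version="2.0",
))
print(registry["demand_forecast:2.1"].parent_version)  # -> 2.0
```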

In some cases with AI, "less is more," says Shuja. "AI tends to be most successful when it is paired with mature, well-formatted data. This is mostly within the realm of IT/enterprise data, such as CRM, ERP, and marketing. However, when we move into areas where the data is less cohesive, such as with operational technology data, this is where achieving AI success becomes a bit more challenging. There is a tremendous need for scalable AI within an industrial environment, for example using AI to reduce energy consumption in a building or industrial plant -- an area of great potential for AI. One day soon, entire businesses -- from the factory floor to the board room -- will be connected, constantly learning and improving from the data they process. This will be the next major milestone for AI in the enterprise."

See the rest here:

Artificial intelligence: Everyone wants it, but not everyone is ready - ZDNet

Posted in Artificial Intelligence | Comments Off on Artificial intelligence: Everyone wants it, but not everyone is ready – ZDNet

5 applications of Artificial Intelligence in banking – IBS Intelligence

Posted: at 12:53 pm

By Joy Dumasia

Artificial Intelligence (AI) has been around for a long time. AI was first conceptualized in 1955 as a branch of Computer Science, focused on the science of making intelligent machines: machines that could mimic the cognitive abilities of the human mind, such as learning and problem-solving. AI is expected to have a disruptive effect on most industry sectors, many-fold compared to what the internet did over the last couple of decades. Organizations and governments around the world are diverting billions of dollars to fund research and pilot programs applying AI to real-world problems that current technology cannot address.

Artificial Intelligence enables banks to manage record-level, high-speed data and extract valuable insights. Moreover, features such as digital payments, AI bots, and biometric fraud detection systems further lead to high-quality services for a broader customer base. Artificial Intelligence comprises a broad set of technologies, including, but not limited to, Machine Learning, Natural Language Processing, Expert Systems, Vision, Speech, Planning, and Robotics.

The adoption of AI across enterprises has increased due to the COVID-19 pandemic. Since the pandemic hit the world, the potential value of AI has grown significantly. The focus of AI adoption was initially restricted to improving the efficiency or effectiveness of operations. However, AI is becoming increasingly important as organizations automate their day-to-day operations and work to understand COVID-19-affected datasets. It can be leveraged to improve the stakeholder experience as well.

The following are 5 applications of Artificial Intelligence in banking:

Chatbots deliver a very high ROI in cost savings, making them one of the most commonly used applications of AI across industries. Chatbots can effectively tackle the most commonly accessed tasks, such as balance inquiries, accessing mini statements, fund transfers, etc. This helps reduce the load on other channels such as contact centres, internet banking, etc.

Automated advice is one of the most controversial topics in the financial services space. A robo-advisor attempts to understand a customer's financial health by analyzing the data they share as well as their financial history. Based on this analysis and the goals set by the client, the robo-advisor can give appropriate investment recommendations in a particular product class, even as granular as a specific product or equity.

One of AI's most common use cases includes general-purpose semantic and natural language applications and broadly applied predictive analytics. AI can detect specific patterns and correlations in the data which legacy technology could not previously detect. These patterns could indicate untapped sales opportunities, cross-sell opportunities, or even metrics around operational data, leading to a direct revenue impact.

AI can significantly improve the effectiveness of cybersecurity systems by leveraging data from previous threats and learning the patterns and indicators that might seem unrelated to predict and prevent attacks. In addition to preventing external threats, AI can also monitor internal threats or breaches and suggest corrective actions, resulting in the prevention of data theft or abuse.

AI is instrumental in helping alternate lenders determine the creditworthiness of clients by analyzing data from a wide range of traditional and non-traditional data sources. This helps lenders develop innovative lending systems backed by a robust credit scoring model, even for those individuals or entities with limited credit history. Notable companies include Affirm and GiniMachine.

Read more:

5 applications of Artificial Intelligence in banking - IBS Intelligence

Posted in Artificial Intelligence | Comments Off on 5 applications of Artificial Intelligence in banking – IBS Intelligence

Eyes of the City: Visions of Architecture After Artificial Intelligence – ArchDaily

Posted: at 12:53 pm

Eyes of the City: Visions of Architecture After Artificial Intelligence

This book tells the story of Eyes of the City, an international exhibition on technology and urbanism held in Shenzhen during the winter of 2019 and 2020, with a curation process that unfolded between summer 2018 and spring 2020. Conceived as a cultural event exploring future scenarios in architecture and design, Eyes of the City found itself in an extraordinary, if unstable, position, enmeshed within a series of powerfully contingent events (the political turmoil in Hong Kong, the first outbreak of COVID-19 in China) that impacted not only the scope of the project, but also the global debate around society and urban space.

Eyes of the City was one of the two main sections of the eighth edition of the Shenzhen Bi-City Biennale of Urbanism\Architecture (UABB), titled "Urban Interactions." Jointly curated by CRA-Carlo Ratti Associati, Politecnico di Torino, and South China University of Technology, it focused on the various relationships between the built environment and increasingly pervasive digital technologies (from artificial intelligence to facial recognition, from drones to self-driving vehicles) in a city that is one of the world's leading centers of the Fourth Industrial Revolution. [1]

The topic of the exhibition was decided well before the two events mentioned above made it an especially sensitive one for a Chinese, as well as an international, audience. The Biennale opened its doors in December 2019, just after the months-long protests in Hong Kong had reached their climax and the discussion on the role of surveillance systems embedded in physical space was at its most controversial. [2] In addition, the location the UABB organizers had chosen for the Biennale also caused controversy. The exhibition venue was at the heart of Shenzhen's Central Business District, in the hall of Futian Station, one of the largest infrastructure spaces in Asia as well as a multi-modal hub connecting the metropolis's metro system with high-speed trains capable of reaching Hong Kong in about ten minutes.

The agitations occurring on the south side of the border never spilled over into this first outpost of Mainland China. Nevertheless, as the curation process progressed and the opening day approached, the climate grew more tense. In those weeks, it was enough for an exhibitor merely to include in his or her proposal a drawing of people on the street standing under umbrellas to prompt heated reactions, the image reminding visitors of the 2014 pro-democracy movement's symbol. Immediately prior to the opening, the station's police fenced off the Biennale venue, instituting check-points for visitors (fortunately, this provision lasted only two weeks before people were permitted to roam freely inside the station again). Despite these contingencies, Eyes of the City managed to offer what a Reuters journalist described as "a rare public space for reflection on increasingly pervasive surveillance by tech companies and the government." [3]

Then, in the second half of January 2020, what began as a local sickness in the city of Wuhan [4], 1,000 kilometers north of Shenzhen, spread across the country and beyond, rapidly becoming a global pandemic. Trains between Futian and Hong Kong were discontinued [5] and the Biennale venue was shut, while in a matter of weeks the role of emerging technologies in regulating and facilitating people's work and social lives became one of the most-discussed topics worldwide, after the grim tally of infections and deaths. In the design field, COVID-19 was seen as exposing and amplifying, on a transcontinental scale, trajectories of change that were already underway.

In an unforeseeable fashion, the occurrences of history in southern China between late 2019 and early 2020 made the question of the "city with eyes" even more timely and pressing. In the midst of these events, the exhibition had to reinvent itself, experimenting with its form and content in order to continue carrying out its program and contribute to the growing debate. A product of this context, this book is the result of similar processes of continuous adjustment, reflection-in-action, and exchange.

The book challenges the traditional notion of exhibition catalog, crossing the three temporal and conceptual dimensions that were also tackled by the exhibition as a whole. The book is composed of three parts, which loosely represent the different laboratories of the exhibition: the curatorial work that preceded it, the open debate that accompanied it, and the content that made it relevant. Overall, the book adopts Eyes of the City as a trans-scalar and multidisciplinary interpretative key for rethinking the city as a complex entanglement of relationships.

The first part expands on curatorial practices and reflects on the exhibition as an incubator of ideas. The opening essay is written by the exhibition's chief curator Carlo Ratti and academic curators Michele Bonino (Politecnico di Torino) and Yimin Sun (South China University of Technology): it positions "eyes of the city" as an urgent urban category and proposes a legacy for the show which reframes the role of architecture biennales. The second essay is written by the exhibition's executive curators: it visually reconstructs the exhibition's design process and its materialization of our open-curatorship approach.

The second part of the book expands on a discussion that accompanied the entire curatorial process from spring 2019 to summer 2020, through a rubric on ArchDaily. Dozens of designers, writers, and philosophers, as foundational contributors, were asked to respond to the curatorial statement of Eyes of the City. The book contains a selection of these responses, covering topics as diverse as the identity of the eyes of the city and the aesthetic regimes behind them (Antoine Picon and Jian Liu), the evolution of the concept of urban anonymity (Yung-Ho Chang and Deyan Sudjic), the role of the natural world in the technologically-enhanced city (Jeanne Gang), and advances in design practices that lie between robotics and archivization (Albena Yaneva and Philip Yuan).

The third part unpacks the content of the exhibition through eight essays (corresponding to the sections of the exhibition) written by researchers who were part of the curatorial team. These essays position the installations within a wider landscape of intra- and inter-disciplinary debate through an outward movement from the laboratories of the exhibition to possible future scenarios.

Eyes of the City has striven to broaden discussion and reflection on possible future urban spaces as well as on the notion of the architectural biennale itself. The curatorial line adopted throughout the eighteen-month-long process (an entanglement of online and on-site interactions, extensively leaning on academic research) configured the exhibition as an open system; that is, a platform of exchange independent of any aprioristic theoretical direction. The outbreak of COVID-19 inevitably impacted the material scale of the project. At the same time, it underlined the relevance of its immaterial legacy. Eyes of the City progressively re-invented itself in a virtual dimension, experimenting with diverse tactics to make its cultural program accessible. In doing so, it spawned a set of digital and physical documents, strategies, and traces that address some of the many open issues the "city with eyes" will face in the future. This book aims at a first systematization of this heterogeneous legacy.

Eyes of the City: Visions of Architecture After Artificial Intelligence

Bibliography

AUTHORS' BIOS:

VALERIA FEDERIGHI is an architect and assistant professor at Politecnico di Torino, Italy. She received a MArch and a Ph.D. from the same university, and a Master of Science in Design Research from the University of Michigan. She is on the editorial board of the journal Ardeth (Architectural Design Theory) and is part of the China Room research group. Her main publication to date is the book The Informal Stance: Representations of Architectural Design and Informal Settlements (Applied Research Design, ORO Editions, 2018). She was Head Curator of Events and Editorial for the Eyes of the City exhibition.

MONICA NASO is an architect and a Ph.D. candidate in Architecture, History and Project at Politecnico di Torino. She received a MArch with honors from the same university and has had several professional experiences in Paris and Turin. As a member of the China Room research group and of the South China-Torino Collaboration Lab, she takes part in international and interdisciplinary research and design projects, and she was among the curators of the Italian Design Pavilion at the Shenzhen Design Week 2018. She was Head Curator of Exhibition and On-site Coordination for the Eyes of the City exhibition.

DANIELE BELLERI is a Partner at the design and innovation practice CRA-Carlo Ratti Associati, where he manages all curatorial, editorial, and communication projects of the office. He has a background in contemporary history, urban studies, and political science, and spent a period as a researcher at Moscow's Strelka Institute for Media, Architecture, and Design. Before joining CRA, he ran a London-based strategic design agency advising cultural organizations in Europe and Asia, and worked as an independent journalist writing on design and urban issues for international publications. He was one of the Executive Curators of the Eyes of the City exhibition. Currently, he is leading the development of CRA's Urban Study for Manifesta 14 Prishtina.

Read the original here:

Eyes of the City: Visions of Architecture After Artificial Intelligence - ArchDaily

Posted in Artificial Intelligence | Comments Off on Eyes of the City: Visions of Architecture After Artificial Intelligence – ArchDaily

EEOC Increases Focus on Artificial Intelligence and Algorithmic Fairness – JD Supra

Posted: at 12:53 pm

On Thursday, October 28, 2021, the U.S. Equal Employment Opportunity Commission announced the launch of an initiative aimed at ensuring that the use of artificial intelligence (AI) and other technology-driven tools in hiring and other employment decisions complies with anti-discrimination laws. While acknowledging that AI and algorithmic tools have the potential to improve our lives and employment, EEOC Chair Charlotte A. Burrows noted that "the EEOC is keenly aware that these tools may mask and perpetuate bias or create new discriminatory barriers to jobs. We must work to ensure that these new technologies do not become a high-tech pathway to discrimination."

At this point, according to the EEOC's press release, the initiative is focused on "how technology is fundamentally changing the way employment decisions are made." In this regard, the initiative seeks to guide stakeholders, including employers and vendors, in making sure emerging decision-making tools are employed fairly and consistently with anti-discrimination laws. As a component of its initiative, the EEOC intends to take a series of concrete steps.

In 2016, the EEOC held a public hearing on the equal employment opportunity implications of big data in the workplace, and the EEOC intends to build on that work. Focus areas of that hearing included potential discrimination, privacy concerns, and the possibility that disabled applicants or employees may be disadvantaged. At that hearing, Littler Shareholder Marko Mrkonich testified that a critical next step was for regulators, employers, and other stakeholders to learn about the opportunities AI offers and to consider how it can properly be deployed.

Of importance to global employers, earlier this year the European Union published a proposed regulation aimed at creating a regulatory framework for the use of AI. In the hierarchy discussed in the EU's proposed regulation, AI systems implemented in the recruitment and management of talent would be classified as "high-risk." Given that classification, among other requirements, employers or vendors using such systems would be required to develop a risk management system, maintain technical documentation, adopt appropriate data governance measures, meet transparency requirements, maintain human oversight, and meet registration requirements.

It remains to be seen how U.S. regulators may utilize the tenets set forth in the EU's proposed regulation. Nevertheless, the EEOC's initiative further underscores the regulatory interest in such systems, and the care that must be taken when deploying them to avoid running afoul of anti-discrimination laws.

Continued here:

EEOC Increases Focus on Artificial Intelligence and Algorithmic Fairness - JD Supra

Posted in Artificial Intelligence | Comments Off on EEOC Increases Focus on Artificial Intelligence and Algorithmic Fairness – JD Supra

IAEA Teams up with ITU and UN Family to Promote AI for Good – International Atomic Energy Agency

Posted: at 12:53 pm

The IAEA has joined the International Telecommunication Union (ITU) and 37 other United Nations organizations to work together in identifying artificial intelligence (AI) applications that accelerate progress toward the UN Sustainable Development Goals.

At the AI for Good Global Summit 2021, the IAEA will host two webinars open to the public, on 18 and 24 November, which build and expand on the ways in which AI applications, methodologies and tools can advance nuclear science, technology and applications across various fields.

The two webinars, AI for Atoms and AI for Nuclear Energy, will cover the ways in which AI can help foster innovation in nuclear science and applications and in nuclear energy, and will provide opportunities to further support the AI proposals made at a recent, first-ever IAEA meeting on the topic. These included the establishment of a knowledge-sharing platform, a "Network on AI for Atoms," for coordination among cross-domain researchers, aimed at developing guidance in regulation, education and training; sharing experiences, knowledge and good practices; and formulating guidance on the ethical issues that the convergence of AI and nuclear science, technology and applications could give rise to. These include the need for AI applications to be inclusive, just and equitable, and to benefit the entire society.

AI refers to a collection of technologies that combine numerical data, process algorithms and continuously increasing computing power to develop systems capable of approaching complex problems in ways similar to human logic and reasoning. AI technologies can analyse large amounts of data to learn and assess how to complete a particular task, a technique called machine learning.
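As a toy illustration of that machine-learning loop (fitting a model to data so it can perform a task on unseen cases), consider this short scikit-learn sketch; the dataset is synthetic and purely for demonstration:

```python
# Toy illustration of "machine learning" as described above: a system learns a
# task by finding patterns in data, then is evaluated on data it has not seen.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)  # learn from data
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")   # assess the learned task
```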

"AI can be a game-changing technology, but it also comes with challenges, including concerns about transparency, trust, security and ethics," said Najat Mokhtar, IAEA Deputy Director General and Head of the Department of Nuclear Sciences and Applications. "Thus, AI technologies require strong international partnerships and cross-cutting cooperation, which is why we have partnered with ITU and the other UN organizations. We look forward to expanding and strengthening our cooperation."

AI for Atoms

From nuclear medicine to water resources management and industry, AI has an enormous potential to accelerate technological development in many nuclear fields.

Experts already apply AI-based approaches to quickly analyse, for example, huge amounts of water-related isotopic data stored in global networks, such as the Global Network of Isotopes in Precipitation (GNIP), maintained by the IAEA and the World Meteorological Organization. The effective analysis of this data helps scientists better understand climate change and the impact it has on water availability worldwide.

AI could also contribute to combatting cancer. AI-based approaches are applied to boost the diagnosis and treatment of cancer through improved image interpretation, more accurate treatment plans, and precise tumour contouring, as well as through adaptive radiotherapy, a radiation therapy process that adapts to internal anatomical variations of the individual patient. Machine learning plays an increasingly important role in medical imaging for the prediction of individual disease course and treatment response. AI will also play an important role in the IAEA's Zoonotic Disease Integrated Action (ZODIAC) initiative to help experts predict, identify, assess and contain future zoonotic disease outbreaks.

In fusion and nuclear science research, AI enables the prediction and control solutions necessary for sustained, safe and efficient facility operation.

The AI for Atoms event will take place on 18 November from 14:00 to 15:30 Central European Time, with experts discussing how AI-based approaches can advance cancer treatment, and how machine learning can become part of the clinical toolkit. It will provide an overview of how to optimize remediation of radioactive contamination in agriculture and accelerate progress in nuclear fusion and science research. It will also discuss a new domain of normative applied ethics at the intersection of AI and nuclear technologies and applications and provide an overview of potential ways to mitigate ethical concerns. Register for the event here.

AI for Nuclear Energy

Nuclear power generates about 10 percent of the world's electricity, which amounts to more than a quarter of all low-carbon electricity.

Nuclear power is a green and reliable source of energy which, in partnership with other clean energy sources, can help countries achieve net zero emissions. In order to be competitive and integrated into the mix of modern energy systems, nuclear power plants, in addition to being safe, reliable and sustainable, also need to be economical and efficient. AI-based approaches can contribute in these areas.

Thanks to rapid developments in computer technology and data analysis tools, the nuclear power industry is already benefiting from AI applications, such as machine learning and deep learning techniques, to streamline and optimize nuclear power plant operations and maintenance and to support the development of new advanced nuclear power technologies.

The AI for Nuclear Energy event on 24 November will discuss how AI methodologies and tools can be applied to physics-based predictive analysis, which can be used for design, manufacturing and construction optimization, operational effectiveness, improved new reactor design iterations, model-based fault detection, and advanced control systems. Some of the key developers and nuclear power industry leaders in this area are invited to discuss the vision and path forward to "Nuclear 4.0" with the help of AI. Register for the event here.

More here:

IAEA Teams up with ITU and UN Family to Promote AI for Good - International Atomic Energy Agency

Posted in Artificial Intelligence | Comments Off on IAEA Teams up with ITU and UN Family to Promote AI for Good – International Atomic Energy Agency

European Commission's Proposed Regulation on Artificial Intelligence: Conducting a Conformity Assessment in the Context of AI. Say What? – JD Supra

Posted: at 12:53 pm

Introduction

The European Commission (EC) on April 21, 2021, proposed a regulation establishing a framework and rules (the "Proposed Regulation") for trustworthy Artificial Intelligence (AI) systems. As discussed in our previous OnPoints here and here, the Proposed Regulation aims to take a proportionate, risk-based regulatory approach by distinguishing between harmful AI practices, which are prohibited, and other AI uses that carry risk but are permitted. The latter are the focus of the Proposed Regulation: high-risk AI systems can only be placed on the EU market or put into service if, among other requirements, a conformity assessment is conducted prior to doing so.

This OnPoint: (i) summarizes the requirements for conducting a conformity assessment, including unique considerations that apply to data driven algorithms and outputs that typically have not applied to physical systems and projects under EU product safety legislation; (ii) discusses the potential impacts of this new requirement on the market and how it will fit within the existing sectoral safety legislation framework in the EU; and (iii) identifies some strategic considerations, including in the context of product liability litigation, for providers and other impacted parties.

The Proposed Regulation's conformity assessment requirement has its origins in EU product safety legislation. Under EU law, a conformity assessment is a process carried out to demonstrate whether specific consumer protection and product integrity requirements are fulfilled, and if not, what remedial measures, if any, can be implemented to satisfy such requirements. Unsafe products, or those that otherwise do not comply with applicable standards, may not make their way to the EU market. The scope of conformity assessment required differs under various directives according to the type of product and the perceived level of risk it presents, varying from self-assessment to risk assessment by a suitably qualified independent third party referred to as a Notified Body (whose accreditation may vary between Member States). An analogous concept in the U.S. is the authority of the Food and Drug Administration to require that manufacturers of medical devices follow certain regulatory procedures to market a new product in the U.S. The procedures required depend, among other factors, on the potential for devices to harm U.S. consumers.

As suggested, in the EU, conformity assessments are customarily intended for physical products, such as machinery, toys, medical devices and personal protective equipment. Examples of conformity assessments for physical products include sampling, testing, inspecting and evaluating a product. It remains to be seen how the conformity assessments under the Proposed Regulation will work in practice when applied to more amorphous components of AI such as software code and data assets. We anticipate, however, that the focus will be on testing such systems for bias and discriminatory/disparate impacts. Factors should include ensuring that representative data are included in the models and that outcomes avoid amplifying or perpetuating existing bias or otherwise unintentionally producing discriminatory impacts, particularly where traditionally underserved populations are targeted by AI models to correct inequities (e.g., an AI model might assign credit scores to certain demographic groups that result in targeted ads for higher interest rates than advertised to other market segments).
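One concrete form such bias testing could take is a selection-rate comparison across groups, such as the disparate-impact ratio behind the US "four-fifths rule." The Proposed Regulation does not prescribe this metric; the sketch below, with invented numbers, is offered only to make "testing for disparate impacts" tangible:

```python
# Minimal sketch of one widely used fairness check: the disparate-impact ratio
# (the US "four-fifths rule"). Not the Regulation's prescribed method; shown
# only to illustrate what quantitative bias testing can look like.
from typing import Dict, Tuple

def disparate_impact(outcomes: Dict[str, Tuple[int, int]]) -> float:
    """outcomes maps group -> (favorable decisions, total applicants)."""
    rates = {g: fav / total for g, (fav, total) in outcomes.items()}
    return min(rates.values()) / max(rates.values())

# Invented example: selection rates of 60% vs. 30% yield a ratio of 0.50,
# well below the 0.8 threshold that conventionally triggers scrutiny.
ratio = disparate_impact({"group_a": (60, 100), "group_b": (30, 100)})
print(f"disparate impact ratio: {ratio:.2f}")
```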

The Proposed Regulation provides for two different types of conformity assessments depending on the type of high-risk AI system at issue:

While the Proposed Regulation allows for a presumption of conformity for certain data quality requirements (where high-risk AI systems have been trained and tested on data concerning the specific settings within which they are intended to be used) and cybersecurity requirements (where the system has been certified or a statement of conformity issued under a cybersecurity scheme),2 providers are not absolved of their obligation to carry out a conformity assessment for the remainder of the requirements.

The specific conformity assessment to be conducted for high-risk AI systems depends on the category and type of AI at issue:

High-risk AI systems must undergo new assessments whenever they are substantially modified, regardless of whether the modified system will continue to be used by the current user or is intended to be more widely distributed. In any event, a new assessment is required every 5 years for AI systems required to conduct Notified Body Assessments.

Many questions remain about how the conduct of conformity assessments will function in practice, including how the requirement will work in conjunction with UK and EU anti-discrimination legislation (i.e., the UK Equality Act 2010) and existing sectoral safety legislation, including:

Supply Chain Impact and Division of Liability: The burdens of performing a conformity assessment will be shared among stakeholders. Prior to placing a high-risk AI system on the market, importers and distributors of such systems will be required to ensure that the correct conformity assessment was conducted by the provider of the system. Parties in the AI ecosystem may try to contract around liability issues and place the burden on parties elsewhere in the supply chain to meet conformity assessment requirements.

Costs of Compliance (and non-compliance): While the Proposed Regulation declares that the intent of the conformity assessment approach is "to minimize the burden for economic operators" (i.e., stakeholders), some commentators have expressed concern that an unintended consequence will be to force providers to conduct duplicative assessments where they are already subject to existing EU product legislation and other legal frameworks.5 Conducting a conformity assessment may also result in increased business and operational costs, such as legal fees. Companies will want to educate the EU Parliament and Council about these impacts during the legislative process, through lobbying and informally, for example during conferences typically attended by industry and regulators, and in thought leadership.

In addition to the cost of conducting a conformity assessment, penalties for noncompliance will be hefty. The Proposed Regulation tasks EU Member States with enforcement and imposes a three-tier fine regime similar to the GDPR: the higher of up to 2% of annual worldwide turnover or €10 million for incorrect, incomplete or misleading information provided to notified supervisory or other public authorities; up to 4% of annual global turnover or €20 million for non-compliant AI systems; and up to 6% of annual global turnover or €30 million for violations of the prohibitions on unacceptable AI systems and governance obligations.
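As a worked example of that "higher of" structure, the short sketch below computes the maximum exposure per tier using the figures quoted above (the turnover figure is invented):

```python
# Worked example of the "higher of" fine structure described above.
# Tier percentages and fixed amounts follow the text; turnover is invented.
TIERS = {  # tier -> (share of annual worldwide turnover, fixed floor in EUR)
    "misleading_information": (0.02, 10_000_000),
    "non_compliant_system":   (0.04, 20_000_000),
    "prohibited_practice":    (0.06, 30_000_000),
}

def max_fine(tier: str, annual_turnover_eur: float) -> float:
    pct, fixed = TIERS[tier]
    return max(pct * annual_turnover_eur, fixed)

# A company with EUR 2 billion turnover facing a prohibited-practice violation:
# 6% of turnover (EUR 120M) exceeds the EUR 30M floor, so the percentage governs.
print(f"EUR {max_fine('prohibited_practice', 2_000_000_000):,.0f}")
```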

Extraterritorial Reach: Like the GDPR, the Proposed Regulation is intended to have global reach and applies to: (i) providers that offer AI in the EEA, regardless of whether the provider is located in or outside the EEA; (ii) users of AI in the EEA; and (iii) providers and users of AI located outside the EEA whose AI outputs are used in the EEA. Prong (iii) could create compliance headaches for providers of high-risk AI systems located outside the EEA, who may not always be aware of, or able to determine, where the outputs of their AI systems are used. It may also cause providers located outside the EEA to conduct a cost-benefit analysis before introducing their product to the EEA market, though such providers will likely already be familiar with conformity assessments under existing EU law.

Data Use: In conducting the conformity assessment, providers will need to address data privacy considerations involving the personal data used to create, train, validate and test AI models, including the GDPR's restrictions on automated decision-making and the corresponding data subject rights. As noted, this focus does not appear to be contemplated by existing product legislation, which concerned the integrity of physical products introduced into the EU market.

For AI conformity assessments, data sets must meet certain quality criteria. For example, the data sets must be relevant, representative and inclusive, free of errors, and complete. The characteristics or elements specific to the geographical, behavioral, or functional setting in which the AI system is intended to operate should be considered. As noted, providers of high-risk AI systems should identify the risk of inherent bias in the data sets and outputs. The use of race, ethnicity, trade union membership, and similar demographic characteristics (or their proxies), including the use of data from only one of these groups, could result in legal, ethical and brand harm. AI fairness in credit scoring, targeted advertising, recruitment, benefits qualifications and criminal sentencing is currently being examined by regulators in the U.S. and other countries, as well as by industry trade groups, individual companies, nonprofit think tanks and academic researchers. Market trends and practices are nascent and evolving.

Bolstering of Producer Defenses Under the EU Product Liability Regime: Many see the EU as the next frontier for mass consumer claims. The EU has finally taken steps, via EU Directive 2020/1828 on Representative Actions (the "Directive"), to enhance and standardise collective redress procedures throughout the Member States. The provisions of that Directive must be implemented no later than mid-2023. Class action activity in the EU was already showing a substantial increase, and the Directive will only enhance that development. The EU product liability framework, reflecting Directive 85/374/EEC, is often said to impose strict liability; however, under certain limited exceptions, producers can escape liability, including by asserting a "state of the art" defence (i.e., the state of scientific or technical knowledge at the time the product was put into circulation could not detect the defect). At least as far as this applies to an AI component, the new requirements on conformity assessments detailed above, particularly those undertaken by a notified body, may provide producers with a stronger evidential basis for asserting that defence.

While the Proposed Regulation is currently being addressed in the tripartite process, we anticipate that its core requirements will be implemented. In order to future-proof the development and use of this valuable technology, companies will want to begin considering measures to prepare.

Footnotes

1) The Proposed Regulation provides for the establishment of notified bodies within an EU member state. Notified bodies will be required to perform the third-party conformity assessment activities, including testing, certification and inspection of AI systems. In order to become a notified body, an organization must submit an application for notification to the notifying authority of the EU member state in which they are established.

2) Pursuant to Regulation (EU) 2019/881.

3) "Harmonised standard" is defined in the Proposed Regulation as a European standard as defined in Article 2(1)(c) of Regulation (EU) No 1025/2012. "Common specifications" is defined as a document, other than a standard, containing technical solutions providing a means to comply with certain requirements and obligations established under the Proposed Regulation.

4) The other high-risk AI systems identified in the Proposed Regulation relate to law enforcement, migration, asylum and border control management, and administration of justice and democratic processes.

5) For example, MedTech Europe submitted a response to the Proposed Regulation arguing that it would require manufacturers to undertake duplicative certification / conformity assessment, via two Notified Bodies, and maintain two sets of technical documentation, should misalignments between [the Proposed Regulation] and MDR/IVDR not be resolved. Available at: https://www.medtecheurope.org/wp-content/uploads/2021/08/medtech-europe-response-to-the-open-public-consultation-on-the-proposal-for-an-artificial-intelligence-act-6-august-2021-1.pdf.

Here is the original post:

European Commission's Proposed Regulation on Artificial Intelligence: Conducting a Conformity Assessment in the Context of AI. Say What? - JD Supra

Posted in Artificial Intelligence | Comments Off on European Commission's Proposed Regulation on Artificial Intelligence: Conducting a Conformity Assessment in the Context of AI. Say What? – JD Supra

Predicting eye movements with Artificial Intelligence – Innovation Origins

Posted: at 12:53 pm

Scientists develop software that can be used in combination with MRI data for research and diagnosis

Viewing behavior provides a window into many central aspects of human cognition and health, and it is an important variable in many functional magnetic resonance imaging (fMRI) studies. Researchers from the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig and the Kavli Institute for Systems Neuroscience in Trondheim have now developed software that uses artificial intelligence to predict eye position and eye movements directly from MRI images. The method opens up rapid and cost-effective research and diagnostic possibilities, for example in neurological diseases that often manifest as changes in eye-movement patterns, writes the Max Planck Institute in a press release.

To record eye movements, research institutions typically use a so-called eye tracker, a sensor technology in which infrared light is projected onto the retina, reflected, and eventually measured. "Because an MRI has a very strong magnetic field, you need special MRI-compatible equipment, which is often not feasible for clinics and small laboratories," says study author Matthias Nau, who developed the new alternative together with Markus Frey and Christian Doeller. The high cost of these cameras and the experimental effort involved in their use have so far prevented the widespread use of eye tracking in MRI examinations. That could now change. The scientists from Leipzig and Trondheim developed the easy-to-use software DeepMReye and provide it for free.

With it, it is now possible to track participants' viewing behavior during an MRI scan even without a camera. "The neural network we use detects specific patterns in the MRI signal from the eyes. This allows us to predict where the person is looking. Artificial intelligence helps a lot here, because as scientists we often don't know exactly which patterns to look for," Markus Frey explains. He and his colleagues have trained the neural network with their own and publicly available data from study participants in such a way that it can now perform eye tracking on data the software has not been trained on. This opens up many possibilities. For example, it is now possible to study the gaze behaviour of participants and patients in existing MRI data that were originally acquired without eye tracking. In this way, scientists could use older studies and data sets to answer entirely new questions.
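The underlying decoding idea can be sketched compactly: treat the eye-region voxels of each MRI volume as a feature vector and learn a mapping to gaze coordinates from calibration runs where gaze is known. The toy example below uses synthetic arrays and a linear regressor standing in for DeepMReye's actual convolutional network; it illustrates the principle, not the published implementation:

```python
# Illustrative sketch of camera-less gaze decoding: learn a mapping from
# eye-region MRI voxel intensities to 2D gaze position. Synthetic data and a
# linear model stand in for DeepMReye's real convolutional network.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_volumes, n_eye_voxels = 600, 500
gaze = rng.uniform(-10, 10, size=(n_volumes, 2))   # known gaze (x, y) in degrees
W = rng.normal(size=(2, n_eye_voxels))             # how gaze shapes the eye voxels
voxels = gaze @ W + rng.normal(scale=5.0, size=(n_volumes, n_eye_voxels))

X_tr, X_te, y_tr, y_te = train_test_split(voxels, gaze, random_state=0)
decoder = Ridge(alpha=1.0).fit(X_tr, y_tr)         # train on calibration runs

err = np.abs(decoder.predict(X_te) - y_te).mean()
print(f"mean absolute gaze error: {err:.2f} degrees")  # gaze decoded without a camera
```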

The software can also predict when eyes are open or closed, and it can track eye movements even when the eyes remain closed. This may make it possible to perform eye tracking even while study participants are asleep. "I can imagine that the software will also be used in the clinical field, for example in the sleep lab to study eye movements in different sleep stages," says Matthias Nau. In addition, traditional eye-tracking cameras have rarely been used with blind patients because accurate calibration was very cumbersome. Here too, studies can be carried out more easily with DeepMReye, as the artificial intelligence can be calibrated with the help of healthy subjects and then applied in examinations of blind patients. The software could thus enable a variety of applications in research and clinical settings, perhaps even leading to eye tracking finally becoming a standard in MRI studies and everyday clinical practice.

See the original post:

Predicting eye movements with Artificial Intelligence - Innovation Origins

Posted in Artificial Intelligence | Comments Off on Predicting eye movements with Artificial Intelligence – Innovation Origins

Google CEO will focus on search and artificial intelligence – Gizchina.com

Posted: at 12:53 pm

While global tech giants now see the metaverse as the next thing for growth, Google CEO Sundar Pichai sees the future of the company in its oldest product, Internet search.

"I am fortunate that our mission is timeless. The need to organize data is now greater than ever before," said the head of Google and Alphabet in an interview with Bloomberg Television. Not long ago, Alphabet briefly surpassed a $2 trillion market value thanks to strong sales and profit growth during the pandemic. When asked where the next trillion would come from, Mr. Pichai pointed to the company's core services. He predicts that consumers will ask computers more questions through voice and multimodal interaction. "The ability to adapt to all of this and develop search will remain the greatest potential," said Mr. Pichai.

Since joining Google in 2015, Pichai has pushed the company deeper into the cloud computing and artificial intelligence industries, which has drawn more regulatory scrutiny. The company's key growth areas, in his opinion, are cloud services, YouTube and the app store, all of which are built on investments in artificial intelligence.

Mr. Pichai noted that new Google products will increasingly be developed and tested in Asia, but not in China. After the company failed to establish search operations in the country in 2018, it had to turn off most of its branded services in the PRC, and this is unlikely to change. At the same time, he considers China to be one of the world's technological leaders in the most advanced fields, including artificial intelligence and quantum computing.

Major Google partners such as Microsoft and Meta Platforms see the future of technology in the virtual worlds of the metaverse. Google has previously released several virtual and augmented reality products, although their success has been limited: Google Glass failed. However, a new specialized division has now opened, reporting directly to the head of the company, though there is no information about this project yet. "I have always been in awe of the future of immersive computing. It is not owned by any one company. This is the evolution of the Internet," concluded Mr. Pichai.

According to earlier reporting from the New York Times, Google is aggressively working towards a contract with the US Department of Defense. According to the publication, the company's cloud division has assigned engineers to prepare a proposal for the Pentagon. As part of the project, Google intends to make a significant contribution to the implementation of the Joint Warfighting Cloud Capability military program.

Read this article:

Google CEO will focus on search and artificial intelligence - Gizchina.com

Posted in Artificial Intelligence | Comments Off on Google CEO will focus on search and artificial intelligence – Gizchina.com

Can Artificial Intelligence Hijack Art History of The World? – Analytics Insight

Posted: at 12:53 pm

Art history is important because it reflects and helps to create a culture's vision of itself. Studying the art of the past teaches everyone how people have seen themselves and their world, and how they wanted to show this to others. Artificial Intelligence in art was not initially applied as a creator but as an impersonator. The technique is called style transfer, and it uses deep neural networks to replicate, recreate and blend styles of artwork by teaching Artificial Intelligence (AI) to understand existing pieces of art.
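At the heart of style transfer is a "style loss" that compares feature correlations (Gram matrices) between images. The PyTorch sketch below shows only that core ingredient, using random stand-ins for CNN activations; a full style-transfer pipeline would add a content loss and an optimization loop over the generated image:

```python
# Minimal sketch of the core of neural style transfer: a Gram-matrix "style
# loss" measuring how closely one image's feature correlations match another's.
# Illustrative only; activations here are random stand-ins for real CNN features.
import torch

def gram_matrix(features: torch.Tensor) -> torch.Tensor:
    """features: (channels, height, width) activations from one CNN layer."""
    c, h, w = features.shape
    flat = features.reshape(c, h * w)
    return (flat @ flat.T) / (c * h * w)  # channel-by-channel correlations

def style_loss(generated: torch.Tensor, style: torch.Tensor) -> torch.Tensor:
    return torch.mean((gram_matrix(generated) - gram_matrix(style)) ** 2)

gen, sty = torch.rand(64, 32, 32), torch.rand(64, 32, 32)
print(style_loss(gen, sty).item())  # smaller means the "styles" are more alike
```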

Art history provides a means by which people can gain in-depth knowledge about the human past and its relationship to the present, because the act of making art is one of humanity's most ubiquitous activities.

When Artificial Intelligence in art gets attention for recovering lost works of art, it makes the technology sound less scary than when it garners headlines for creating deepfakes that falsify politicians' speech or for using facial recognition for authoritarian surveillance.

According to reports, many scientists are conducting ongoing studies of art history with the help of artificial intelligence. But rather than lionizing these studies as heroic achievements, those responsible for conveying their results to the public should see them as opportunities to question what the computational sciences are doing when they appropriate the study of art. And they should ask whether any of this is for the good of anyone or anything but AI, its most zealous proponents, and those who profit from it.

AI and art have great potential together, and many new artistic perspectives can be explored with the help of Artificial Intelligence. Earlier this autumn, for example, several media houses reported that a Swiss company using artificial intelligence (AI) to assess the authenticity of artworks had calculated a 91.78% probability that Samson and Delilah was not painted by Rubens. The same company also wrote a report on another painting in the National Gallery, A View of Het Steen in the Early Morning (c. 1636), which stated a 98.76% probability that Rubens painted the work.

So there are always two sides to the coin, and the same goes for AI technology. It can be used to hijack art history, or it can be used to assess the authenticity of artworks.

Link:

Can Artificial Intelligence Hijack Art History of The World? - Analytics Insight

Posted in Artificial Intelligence | Comments Off on Can Artificial Intelligence Hijack Art History of The World? – Analytics Insight

DGTL Holdings is helping companies accelerate the Artificial Intelligence and Machine Learning (AI-ML) technological revolution – Proactive Investors…

Posted: at 12:53 pm

DGTL, which stands for Digital Growth Technologies and Licensing, specializes in accelerating fully commercialized enterprise-level SaaS (software-as-a-service) companies via a blend of unique capitalization structures

Artificial Intelligence and Machine Learning (AI-ML) technology is being used to disrupt many industry sectors driven by fast-moving innovation accelerated by the coronavirus pandemic.

The digital media and advertising industry has been one of the more obvious targets for AI-MI technology, and one company embracing the revolution is DGTL Holdings Inc. (TSX-V:DGTL, OTCQB:DGTHF)

Via its wholly-owned subsidiary, Hashoff, the company specializes in accelerating commercialized enterprise-level SaaS companies in the sectors of content, analytics and distribution, with enterprise-level self-service CaaS (content-as-a-service) built on proprietary AI-ML technology.

Hashoff's AI-ML platform functions as a full-service content management system, designed to empower global brands by identifying, optimizing, engaging, managing, and tracking top-ranked digital content publishers for localized brand marketing campaigns.

Hashoff is fully commercialized and currently serves numerous global brands by providing direct access to the global gig-economy of over 150 million freelance content creators. Hashoff's customer portfolio includes global brands in a range of key growth categories, including DraftKings, Beam Suntory, Anheuser Busch-InBev, Currency.com, Syneos Health, American Nurses Federation, Nestle, Post Holdings, Danone (OTCQX:DANOY), Keurig-Dr Pepper, The Container Store, Ulta Beauty (NASDAQ:ULTA), Pizza Hut, Live Nation, The CW, Scribd, and Novartis.

Proactive caught up with John David Belfontaine, DGTL's founder and EVP of Corporate Development, to find out more about the company.

Proactive: DGTL is looking to build a digital media, marketing and advertising technology portfolio. How is it achieving this?

John David Belfontaine: DGTL's investment model is to use M&A to build a portfolio of fully commercialized software companies that focus on the digital media, marketing and advertising technology sector.

We created DGTL about three or four years ago using the TSX-V CPC program in a very unique and creative way. What we looked to do was build a digital media house through mergers and acquisitions (M&A), acquiring fully commercialized business-to-business enterprise SaaS companies.

We are sector-specific in digital media, martech and adtech. And throughout the last year and a half, we have been interviewing software companies for acquisition. We completed our first deal in June 2020 with the acquisition of Hashoff LLC.

We have since invested in Hashoff and created a version 2.0, which allows the company's platform to be used on video-based applications and to be fully self-serve for scalability, and we've added six new customers to the portfolio.

In summary, we are sort of an asset management company with a hands-on management team looking to accelerate software companies by providing the tools - the technology, the capital, the customers, the resources - that these founders need to execute on a revenue growth plan.

Can you explain the recent developments of the Hashoff 2.0 software?

Yes, absolutely. This is an exciting project for DGTL, the first technical development of our flagship subsidiary which was done at the request of one of our largest customers, DraftKings.

As we all know, DraftKings is very forward-looking in mobile and social media and is a very innovative company. They needed to see that Hashoff had the capability to provide Content-as-a-Service for video-based social media applications such as TikTok, Youtube, Snapchat, etc.

So we were very honored to have signed DraftKings and we're excited to manage their campaign for the NCAA March Madness, as well as the PGA Masters, and most recently to help market their NFT sports collectible memorabilia product line.

DraftKings challenged us to evolve the software to these video-based applications, and we answered the call quickly, creating an RFP and getting the software developed to a 2.0 version. We've since completed UAT testing, migrated our largest customers over, and will begin launching this 2.0 software into the marketplace to be a real leader in providing social media influencer content for static image, text, and now also video-based social media applications.

Content-as-a-Service is being adopted by many global brands. Can you explain why this trend exists?

Yeah, I think there's a pervasive trend that began almost 10 years ago, when we saw large global brands starting to migrate from traditional marketing to digital media. The real inflection point happened in 2019, when mobile surpassed PC (personal computer) in monthly time spent.

To give an idea of how big a change this was to our society and the way business is done globally, the last time a shift like this happened was when television surpassed radio. So you can imagine how different our lives have become, interacting in society and in business in a video-based or screen-based format.

So Content-as-a-Service has now become a critical component for global brands to connect with their consumers in market, in real time. Through the pervasive adoption of mobile, social communications, and information sharing, it has become the preferred way for brands worldwide to market and communicate with consumers, regardless of the sector in which they operate.

The company has been signing a slew of contracts across many sectors and regions. What is the main focus?

So when we acquired Hashoff back in the late spring/summer of 2020, their key accounts were really retail-based. They had Anheuser Busch, ADI, and Dunkin brands representing 60% to 70% of their revenue, and we knew that we wanted to diversify their customer base quickly. Certainly during the pandemic, it was a critical time for us to add key accounts.

We have since done that, adding six or seven large accounts, most notably with DoorDash, Varitone, DraftKings - now our largest account - Syneos Health in the healthcare division, and Currency.com in the cryptocurrency division, as well as two major Asia Pacific-based contracts, which are driving our international expansion and growth from overseas brands.

So we're really excited about that as well. We see continued diversification and growth of all these new accounts with increased spending and budgets, moving us away from the retail-focused brands and into more innovative sectors and diversifying our customer base.

What does DGTL's management team bring to the company?

That is a great question. We're led by an operational team in the United States, operating from New York. Our CEO, Mike Rasik, has 25-plus years in the digital media advertising space and is part of the New York ad scene. He's a former senior vice president for agency partnerships and category strategy at Rocket Fuel, which was a large ad tech company in its day. He has a wealth of experience, is a frequent speaker in the trade, and is highly respected in digital media ad tech circles. And we're really lucky to have him at the helm.

Also on the operational team in the United States, Charlie Thomas, managing director and chief strategy officer at Hashoff, has 30-plus years in the digital media advertising business. He has a really interesting story, being a pioneer in the ad tech space with Time Warner AOL (NYSE:AOL); he was one of the first individuals ever to sell an ad online and created the digital media department for the company. He is also a former VP of ad sales at Broadcast.com under Mark Cuban, helping to launch one of the most successful IPOs in adtech at the time, and was a regional vice president at Yahoo, handled sales strategy for Facebook... the list goes on.

So our operational team in the United States is very strong. We expect to add to that team through our prospective acquisition of Engagement Labs (TSX-V:EL), bringing some very key talent for both front office and back office, which will continue to help us grow over time.

But in Canada, we also have an impressive capital markets team headed by David Beck, former managing director of TMT investment banking at RBC, who has been behind three or four notable public companies in the technology space. During the 2000 tech boom, he was really one of the most respected technology research analysts in Canada, issuing one of the first research reports on RIM (BlackBerry), so that's how far he goes back in that sector. So overall we have a nice balance: US operating executives, plus executive directors in Canada to help with corporate governance and stewarding the capital markets.

So what should investors expect from DGTL in the short to medium term?

We have just filed our audited annual financials and our Q1 interim financials, which reflect the shift in revenues away from retail. And we're really excited to see the continued diversification of our customer base with companies like DraftKings. So I think investors will be happy to see the continued growth of those accounts over the next one to two quarters.

We look forward to Hashoff continuing to grow, but also to becoming cashflow neutral or cashflow positive, which is one of our critical goals for the next three to six months: getting Hashoff to be self-sustaining so we can continue to execute on our M&A strategy to build the portfolio in multiple categories.

We feel we have completed the Social Media Tech category for DGTL with our acquisition of Hashoff for content, our prospective acquisition of Engagement Labs (TSX-V:EL)/Total Social as an analytics play, and a partnership with a company called Shuttle Rock, which allows us to provide content into commerce, turning social media posts into web ads for any screen-based format - really innovative, and at zero capital expense. We think that's an excellent distribution play for us.

We're excited about these acquisitions, but we're also looking to build a larger portfolio. We're not just focused on Social Media Tech; we want to be active in mobile, social, gaming, and streaming, while staying hyper-focused on the high-growth areas of digital media. We want to set a flagpole in each of those high-growth categories to build a full-service digital media house and capture more sales revenue from each of the global brand accounts.

This is really just the beginning for DGTL. We've been public for one year now; we've acquired and grown our first asset and have a transaction agreement for a secondary asset; and we are approaching $10 million in top-line revenue with the first two companies, trading at less than one times sales for the combined entities, which is a great value proposition for investors. In the sector, we typically see five times price-to-sales in a bear market, and DGTL is certainly trading at a major discount to its sector peers. So we know we have that tailwind behind us from a shareholder perspective.

But we are also looking to see DGTL continue to announce new customers post the acquisition of Engagement Labs/Total Social, with the key customers from Hashoff and Total Social being so complementary and so geographically diverse. In this acquisition, we are adding $4 million-plus in revenue to the top line, a new CRO to the front office, and a full-time CFO to the back office, and further diversifying the customer base with key accounts like Netflix, Hulu, Progressive, MetLife, Audible, and the NFL. The cross-sell opportunity is obvious, especially when considering Hashoff key accounts like AB InBev and DraftKings, which happen to be the two largest sponsors of the NFL. The magic of DGTL really occurs once we have multiple software companies under management to cross-sell.

We're going to continue to grow the company, grow our key assets, take them from revenue growth to cashflow neutral and positive, continue to execute on our M&A plan, and flesh out the greater portfolio model that we set out as our investment plan and vision over the last two or three years.

We're extremely excited. With the first year of audited annual financials and the majority of the Hashoff development costs behind us, we can grow Hashoff revenues and reach cashflow positive in the near term, while continuing to add to the portfolio via M&A.

The DGTL story has just begun and continues to evolve as we add new accounts, acquire more software companies, and grow towards a leadership position in the space.

Contact the author at jon.hopkins@proactiveinvestors.com
