Category Archives: Artificial Intelligence

The artificial intelligence in healthcare market is projected to grow from USD 6.9 billion in 2021 to USD 67.4 billion by 2027; it is expected to grow…

Posted: November 5, 2021 at 10:01 pm

Many companies are developing software solutions for various healthcare applications; this is the key factor complementing the growth of the software segment. Strong demand among software developers (especially in medical centers and universities) and widening applications of AI in the healthcare sector are among the prime factors complementing the growth of the AI platform within the software segment.

New York, Nov. 05, 2021 (GLOBE NEWSWIRE) -- Reportlinker.com announces the release of the report "Artificial Intelligence in Healthcare Market by Offering, Technology, Application, End User and Geography - Global Forecast to 2027" - https://www.reportlinker.com/p04897122/?utm_source=GNW

Google AI Platform, TensorFlow, Microsoft Azure, Premonition, Watson Studio, Lumiata, and Infrrd are some of the top AI platforms.

The machine learning segment is expected to grow at the highest CAGR during the forecast period. The increasing adoption of machine learning technology (especially deep learning) in various healthcare applications such as inpatient monitoring & hospital management, drug discovery, medical imaging & diagnostics, and cybersecurity is driving the growth of this segment in the AI in healthcare market.
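
As a quick sanity check on the headline projection (USD 6.9 billion in 2021 to USD 67.4 billion by 2027), the implied overall CAGR can be computed directly; the snippet below is illustrative only.

```python
# Compound annual growth rate implied by the projection:
# USD 6.9B in 2021 growing to USD 67.4B by 2027 (6 years).
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Constant annual growth rate that turns start_value into end_value."""
    return (end_value / start_value) ** (1 / years) - 1

implied = cagr(6.9, 67.4, 2027 - 2021)
print(f"{implied:.1%}")  # prints 46.2%
```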

The medical imaging & diagnostics segment is expected to grow at the highest CAGR of the artificial intelligence in healthcare market during the forecast period. The high growth of this segment can be attributed to factors such as the presence of a large volume of imaging data, the advantages AI systems offer radiologists in diagnosis and treatment management, and the influx of a large number of startups in this segment.

The North America region is expected to hold the largest share of the artificial intelligence in healthcare market during the forecast period. Increasing adoption of AI technology across the continuum of care, especially in the US, and high healthcare spending, combined with the onset of the COVID-19 pandemic accelerating the adoption of AI in hospitals and clinics across the region, are the major factors driving the growth of the North American market.

Break-up of the profiles of primary participants: By company type: Tier 1 (40%), Tier 2 (25%), and Tier 3 (35%). By designation: C-level (40%), Director-level (35%), and other (25%). By region: North America (30%), Europe (20%), APAC (45%), and RoW (5%).

The key players operating in the artificial intelligence in healthcare market include Intel (US), Koninklijke Philips (Netherlands), Microsoft (US), IBM (US), and Siemens Healthineers (Germany).

The artificial intelligence in healthcare market has been segmented into offering, technology, application, end user, and region.

Based on offering, the market has been segmented into hardware, software, and services. Based on technology, the market has been segmented into machine learning, natural language processing, context-aware computing, and computer vision.

Based on application, the market has been segmented into patient data & risk analysis, inpatient care & hospital management, medical imaging & diagnostics, lifestyle management & monitoring, virtual assistants, drug discovery, research, healthcare assistance robots, precision medicine, emergency room & surgery, wearables, mental health, and cybersecurity. Based on end user, the market has been segmented into hospitals & healthcare providers, patients, pharmaceutical & biotechnology companies, healthcare payers, and others.

The artificial intelligence in healthcare market has been studied for North America, Europe, Asia Pacific (APAC), and the Rest of the World (RoW).

Reasons to buy the report: Illustrative segmentation, analysis, and forecast of the market based on offering, technology, application, end user, and region have been conducted to give an overall view of the artificial intelligence in healthcare market. A value chain analysis has been performed to provide in-depth insights into the artificial intelligence in healthcare market. The key drivers, restraints, opportunities, and challenges pertaining to the artificial intelligence in healthcare market have been detailed in this report. Detailed information regarding the COVID-19 impact on the artificial intelligence in healthcare market has been provided in the report. The report includes a detailed competitive landscape of the market, along with key players, as well as in-depth analysis of their revenues. Read the full report: https://www.reportlinker.com/p04897122/?utm_source=GNW

About ReportLinker: ReportLinker is an award-winning market research solution. ReportLinker finds and organizes the latest industry data so you get all the market research you need, instantly, in one place.

Posted in Artificial Intelligence | Comments Off on The artificial intelligence in healthcare market is projected to grow from USD 6.9 billion in 2021 to USD 67.4 billion by 2027; it is expected to grow…

Bad Robot? Employing Artificial Intelligence in the Rush to Replace LIBOR – JD Supra

Posted: at 10:01 pm

Federal regulators have recommended that banks cease entering into new contracts using the London Interbank Offered Rate (LIBOR) as a reference rate by December 31, 2021. Additionally, the administrator of LIBOR will cease publishing one-week and two-month LIBOR on December 31, 2021 and the remaining tenors (overnight, one-month, three-month, six-month and 12-month) on June 30, 2023. To ensure a smooth transition from LIBOR to an alternate benchmark rate (the Secured Overnight Financing Rate (SOFR) being the leading contender), commercial banks and investment banks are in the process of identifying their outstanding LIBOR-based financial obligations and, if necessary, preparing amendments to the underlying contracts. To further this endeavor, most banks have produced standardized forms of benchmark replacement language for use in amending existing contracts. Even with this form language, however, the process of identifying LIBOR-based financial obligations, reviewing the underlying contracts, preparing amendments and negotiating the terms with the counterparties can be complicated and time-consuming for banks and their attorneys. Given the sheer volume of LIBOR-based financial obligations that are outstanding, as well as the approaching deadlines for the phasing out of LIBOR, some banks are looking for ways to streamline the legal work associated with this document review. Enter the robots!

As part of the solution to scaling the mountain of legal work involved in the LIBOR transition, some banks are employing forms of artificial intelligence (AI), computer algorithms and LIBOR-analyzing software to identify the affected financial obligations and the underlying contracts. In one example, an algorithm sifts through the contracts for LIBOR provisions, outlines the process (if any) by which the financial obligation will transition to a replacement rate and determines whether amendments are necessary. Human lawyers are still needed to check the work of these robots (ensuring that nothing was missed), advise bank clients on legal issues and negotiate specific terms with the counterparties. Nevertheless, for monumental undertakings like the LIBOR transition, AI has the potential to expedite at least part of the process, serving as a time-saving tool to complement, but not wholly replace, the work of lawyers.
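
As a rough illustration of the contract triage described above, the sketch below scans contract text for LIBOR references and common fallback phrases. The function, patterns, and classification logic are invented for illustration; they are not any bank's actual software, and real reviews turn on far subtler contract language.

```python
import re

# Phrases that, in this hypothetical sketch, signal that the contract
# already contains fallback language describing a transition away from LIBOR.
FALLBACK_PATTERNS = [
    r"benchmark replacement",
    r"successor rate",
    r"SOFR",
]

def triage_contract(text: str) -> dict:
    """Classify a contract: does it reference LIBOR, and if so,
    does it already contain fallback language or need an amendment?"""
    references_libor = re.search(r"\bLIBOR\b", text, re.IGNORECASE) is not None
    has_fallback = any(
        re.search(p, text, re.IGNORECASE) for p in FALLBACK_PATTERNS
    )
    return {
        "references_libor": references_libor,
        "has_fallback": references_libor and has_fallback,
        "needs_amendment": references_libor and not has_fallback,
    }

result = triage_contract("Interest accrues at one-month LIBOR plus 2.00% per annum.")
```

A contract flagged with `needs_amendment` would then go to a human lawyer, matching the review-and-negotiate workflow the article describes.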

Beyond the LIBOR transition, AI software systems capable of updating and thinking by themselves are also being used to facilitate legal services more broadly. For example, DoNotPay, a subscription-based online platform, describes itself as the "world's first robot lawyer." It uses AI-driven software to assist users not only with preparing legal documents but also with step-by-step guidance for pursuing a vast array of legal processes, including appealing parking tickets, instituting breach of contract claims, cancelling services or subscriptions, creating powers of attorney, submitting demand letters, obtaining refunds on flight tickets and hotel bookings, and filing claims in small claims court. DoNotPay remains a self-help platform, however, with express disclaimers to the effect that it is not a lawyer and the offered services do not constitute legal advice. In any event, the DoNotPay anecdote should serve as a reminder for attorneys using AI technology, with respect to the LIBOR transition or otherwise, to review the Rules of Professional Conduct in their jurisdictions as to the impact of AI on the obligations of competent representation, diligence, and the like.

The LIBOR transition presents an interesting test case for using AI to expedite the more rote aspects of large-scale document review and similar administrative tasks associated with legal representation generally. Nevertheless, AI can only go so far, with lawyers needed to provide the legal analysis and advice necessary to complete the process.

Night vision and artificial intelligence reveal secrets of spider webs – BBC Science Focus Magazine

Posted: at 10:01 pm

Even people who aren't fans of spiders can appreciate the intricate beauty of their webs. It's even more fascinating when you consider that the arachnids have tiny brains, yet somehow can build these geometrically precise creations.

Now, scientists at Johns Hopkins University have used artificial intelligence and night vision to establish how exactly spiders build their webs.

"I first got interested in this topic while I was out birding with my son," said senior author Dr Andrew Gordus, a Johns Hopkins behavioural biologist.

"After seeing a spectacular web I thought, if you went to a zoo and saw a chimpanzee building this you'd think that's one amazing and impressive chimpanzee. Well, this is even more amazing because a spider's brain is so tiny, and I was frustrated that we didn't know more about how this remarkable behaviour occurs. Now we've defined the entire choreography for web-building, which has never been done for any animal architecture at this fine of a resolution."

First, the scientists had to systematically document and analyse the behaviours and motor skills involved.

They took six hackled orb weaver spiders, which are small, nocturnal spiders native to the western United States. They selected this spider species as they do not need humid conditions, and can happily co-exist with each other.

In the lab, each spider was placed in a plexiglass box, under an infrared light. Each night, the spiders were recorded using a camera that operated at a fast frame rate, to capture all of their tiny movements as they built their webs.

The researchers then tracked the millions of individual leg actions with an algorithm designed specifically to detect limb movement.

"Even if you video record it, that's a lot of legs to track, over a long time, across many individuals," said lead author Abel Corver, a graduate student studying web-making and neurophysiology. "It's just too much to go through every frame and annotate the leg points by hand, so we trained machine vision software to detect the posture of the spider, frame by frame, so we could document everything the legs do to build an entire web."

Researchers found that web-making behaviours are quite similar across individual spiders, so much so that the researchers were able to predict the part of a web a spider was working on just from seeing the position of a leg. They think that the algorithm would work for other species of spiders, and would like to explore this in the future.
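
To give a flavour of the prediction described above, here is a deliberately simplified sketch: a nearest-centroid classifier that maps per-frame leg features to a web-building stage. The feature choice, stage names, and centroid values are all hypothetical; the study's actual model and tracking pipeline are far more sophisticated.

```python
import math

# Hypothetical per-frame features: mean joint angles (radians) for the
# front and rear leg pairs. The stage centroids below are invented for
# illustration; in a real system they would be learned from tracked video.
STAGE_CENTROIDS = {
    "radii":   (0.8, 1.9),
    "spiral":  (1.6, 0.7),
    "capture": (2.4, 1.4),
}

def predict_stage(front_angle: float, rear_angle: float) -> str:
    """Assign a frame to the stage whose centroid is nearest (Euclidean)."""
    def dist(stage: str) -> float:
        fa, ra = STAGE_CENTROIDS[stage]
        return math.hypot(front_angle - fa, rear_angle - ra)
    return min(STAGE_CENTROIDS, key=dist)
```

Predicting the stage from leg posture alone, as the researchers did, amounts to showing that each stage occupies a distinct region of this feature space.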

The researchers think that the findings could offer hints on how to understand larger brain systems in other animals, including humans. Other future experiments will involve using mind-altering drugs to establish which circuits in a spiders brain are responsible for web-building.

"Spider webs are one of the most amazing of nature's constructions, unless you're a fly of course," said Prof Adam Hart, an entomologist who was not involved in the research. "By being able to follow every tiny movement, this research is finally unlocking the complex dance spiders do to make their webs. We can learn so much from nature, and research like this can give us all sorts of insights into how we can make new materials and structures."

Artificial intelligence is getting better at writing, and universities should worry about plagiarism – The Conversation CA

Posted: at 10:00 pm

The dramatic rise of online learning during the COVID-19 pandemic has spotlit concerns about the role of technology in exam surveillance and also in student cheating.

Some universities have reported more cheating during the pandemic, and such concerns are unfolding in a climate where technologies that allow for the automation of writing continue to improve.

Over the past two years, the ability of artificial intelligence to generate writing has leapt forward significantly, particularly with the development of what's known as the language generator GPT-3. With this, companies such as Google, Microsoft and NVIDIA can now produce human-like text.

AI-generated writing has raised the stakes of how universities and schools will gauge what constitutes academic misconduct, such as plagiarism. As scholars with an interest in academic integrity and the intersections of work, society and educators' labour, we believe that educators and parents should be, at the very least, paying close attention to these significant developments.

The use of technology in academic writing is already widespread. For example, many universities already use text-based plagiarism detectors like Turnitin, while students might use Grammarly, a cloud-based writing assistant. Examples of writing support include automatic text generation, extraction, prediction, mining, form-filling, paraphrasing, translation and transcription.
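
At their simplest, text-based plagiarism detectors of the kind mentioned above compare overlapping word sequences between documents. The toy sketch below illustrates the idea with word 5-gram "shingles"; commercial tools such as Turnitin use far more sophisticated, proprietary methods.

```python
# Toy illustration of text-matching plagiarism detection: compare the
# sets of overlapping word 5-grams ("shingles") of two documents.
def shingles(text: str, n: int = 5) -> set:
    """All contiguous n-word sequences in the text, lowercased."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(a: str, b: str, n: int = 5) -> float:
    """Jaccard similarity of the two documents' shingle sets (0.0 to 1.0)."""
    sa, sb = shingles(a, n), shingles(b, n)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)
```

A score near 1.0 flags near-verbatim copying; the catch, as the article goes on to argue, is that AI-generated text is original in this narrow sense and sails past such checks.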

Advancements in AI technology have led to new tools, products and services being offered to writers to improve content and efficiency. As these improve, soon entire articles or essays might be generated and written entirely by artificial intelligence. In schools, the implications of such developments will undoubtedly shape the future of learning, writing and teaching.

Research has revealed that concerns over academic misconduct are already widespread across higher education institutions in Canada and internationally.

In Canada, there is little data regarding the rates of misconduct. Research published in 2006, based on data from mostly undergraduate students at 11 higher education institutions, found that 53 per cent reported having engaged in one or more instances of serious cheating on written work, which was defined as copying material without footnoting, copying material almost word for word, submitting work done by someone else, fabricating or falsifying a bibliography, or submitting a paper they either bought or got from someone else for free.

Academic misconduct is in all likelihood under-reported across Canadian higher education institutions.

There are different types of violations of academic integrity, including plagiarism, contract cheating (where students hire other people to write their papers) and exam cheating, among others.

Unfortunately, with technology, students can use their ingenuity and entrepreneurialism to cheat. These concerns also apply to faculty members, academics and writers in other fields, raising new questions surrounding academic integrity and AI.

We are asking these questions in our own research, and we know that in the face of all this, educators will be required to consider how writing can be effectively assessed or evaluated as these technologies improve.

At the moment, little guidance, policy or oversight is available regarding technology, AI and academic integrity for teachers and educational leaders.

Over the past year, COVID-19 has pushed more students towards online learning, a sphere where teachers may become less familiar with their own students and thus, potentially, with their writing.

While it remains impossible to predict the future of these technologies and their implications in education, we can attempt to discern some of the larger trends and trajectories that will impact teaching, learning and research.

A key concern moving forward is the apparent movement towards the increased automation of education, where educational technology companies offer commodities such as writing tools as proposed solutions for the various problems within education.

An example of this is automated assessment of student work, such as automated grading of student writing. Numerous commercial products already exist for automated grading, though the ethics of these technologies are yet to be fully explored by scholars and educators.

Overall, the traditional landscape surrounding academic integrity and authorship is being rapidly reshaped by technological developments. Such developments also spark concerns about a shift of professional control away from educators and ever-increasing expectations of digital literacy in precarious working environments.

These complexities, concerns and questions will require further thought and discussion. Educational stakeholders at all levels will be required to respond and rethink definitions as well as values surrounding plagiarism, originality, academic ethics and academic labour in the very near future.

The authors would like to sincerely thank Ryan Morrison, from George Brown College, who provided significant expertise, advice and assistance with the development of this article.

The Cultural Benefits of Artificial Intelligence in the Enterprise – MIT Sloan

Posted: at 10:00 pm

Organization-Level Cultural Benefits

The Culture-Use-Effectiveness dynamic is different at the organizational level than it is at the team level. Figure 5 shows the C-U-E dynamic at the organizational level: Organizational culture can improve AI adoption, which in turn improves organizational effectiveness, which in turn improves organizational culture.

Improving each component can lead to a virtuous cycle of cultural improvement throughout the enterprise.

At PepsiCo, executives view AI as a strategic capability. They also acknowledge that making full use of that capability goes hand in hand with strengthening the company's culture, says Colin Lenaghan, global senior vice president, net revenue management, for the food and beverage multinational. "PepsiCo is very much an organization and a culture that learns by doing," he explains. "We view AI as a very strategic capability that helps us solve strategic problems. We are making quite an investment in bringing literacy of advanced analytics across the broader community. We are starting to elevate that literacy among senior management. This is clearly something that has to be driven from the top. It needs cultural change. Over time, we intend to strengthen our AI capability and hopefully the culture at the same time." Pervasive AI literacy enables communication through a shared language.

A shared language improves communication about (and the identification of) new opportunities. At Levi Strauss & Co., Paul Pallath, the clothing company's global technology head of data, analytics, and AI, agrees that broad-based adoption of AI demands culture change across the organization. "We need to change the overall culture of the organization, and that depends on getting our people to think in terms of AI," he says. "If you don't start thinking in that direction, you're not going to ask the right questions that can eventually be solved with AI." Thinking in terms of AI (such as asking what solutions might be possible with AI, or whether AI could be applied in a particular situation) unveils new opportunities. Collective thinking in terms of AI depends on a shared language.

Changing the culture to make full use of AI across the enterprise is both necessary and difficult, says Chris Couch, senior vice president and CTO at Cooper Standard, which provides components and systems for diverse transportation and industrial markets. "Good companies are going to develop people in all functions, whether it's finance, purchasing, manufacturing, you name it, that have some sense about where AI tools can be applied. Bad ones won't," he explains. "While AI will continue to be something special that only certain experts use, there's going to be a democratization in the next 10 years. It's one of those things that is not easy to prepare for, but we have to prepare for it. Otherwise, we're going to get displaced." When the organization depends on AI literacy, those who lack literacy add discord.

Using AI doesn't merely help with effectiveness at the team level (such as by improving efficiency and decision quality); managers can also use AI to improve an organization's competitiveness. For instance, innovating new processes with AI appears to enhance a company's ability to compete with both existing and new rivals. We compared respondents who said they are using AI primarily to innovate existing processes with those who agreed that their company is using AI primarily to explore new ways of creating value. (See Figure 6.) Respondents who agreed that they are using AI primarily to explore new ways of creating value were 2.5 times more likely to agree that AI is helping their company defend against competitors and 2.7 times more likely to agree that AI is helping their company capture opportunities in adjacent industries. Exploration with AI is correlated to a greater extent with improved competitiveness than exploitation with AI.

Organizations that report greater competitiveness from AI are focused on creating new value with AI.

Organizations can also use AI to accelerate innovation in existing processes. Moderna rapidly developed a widely used COVID-19 vaccine with the help of AI. Johnson says Moderna focuses on having "a smaller company that's very agile and can move fast. And we see AI as a key enabler for that. The hope is that it helps us to compete in ways that other companies can't. That is certainly the intention here."

Moderna began automating work that had previously been done by humans, including testing the design sequence of messenger RNA (mRNA) used in vaccines that protect against infectious diseases. "One of the big bottlenecks was having this mRNA for the scientist to run testing," Johnson says. "So we put in place a ton of robotic automation, and a lot of digital systems and process automation and AI algorithms as well. And we went from maybe about 30 mRNAs manually produced in a given month to a capacity of about a thousand in a monthlong period, without using significantly more resources and with much better consistency in quality." As a result, employees at Moderna can evaluate many more options for innovation than before; the company's rapid development of the COVID-19 vaccine was due, in part, to using AI to rapidly test mRNA design sequences. Using AI accelerated innovation, increasing the company's ability to compete with much larger companies.

But speed is far from the only potential benefit of AI. Amit Shah, president of floral and gift retailer 1-800-Flowers, observes, "If you think about what differentiates modern organizations, it is not just the ability to adopt technologies (that's become a table stake) but the ability to out-solve competitors in facing deep problems."

"When I think about AI," Shah continues, "I think about our competitiveness on that frontier. Five years down the road, I think every new employee that starts out in any company of consequence will have an AI toolkit, like we used to get the Excel toolkit, to both solve problems better and communicate that better to clients, to colleagues, or to any stakeholder." Being a company of consequence in the future may require all employees to work with AI to out-solve competitors with new ways of creating value.

Improving organizational effectiveness is not itself an end goal. After all, organizations can become more effective at the wrong activities: They can achieve misguided objectives, reinforce outdated values, or compete against irrelevant organizations. When CBS's Subramanyam asked her AI team to assess whether executives had the right assumptions about what factors lead to a successful TV show, she was using AI to reassess what being effective means in her organization. Using AI can help a company not only achieve effective outcomes, but also change assumptions about what counts as an effective outcome.

Many executives revealed that their AI implementations were helping them develop or refine strategic assumptions and improve how they measure performance. These changes often lead to shifts in their KPIs. Indeed, our survey found that 64% of the organizations that use AI extensively or in some parts of their processes and offerings adjust their KPIs after using AI. As Pernod Ricard's Calloch says, "We are planning to monitor new KPIs because AI is helping us measure performance more precisely. For example, one algorithm helps us measure the performance of each marketing campaign in isolation, whereas before, campaigns were running on various media at the same time, and it was impossible to isolate the contribution of each media component. Our ability to isolate and better measure a campaign's performance allows our marketers to be more performance-focused and to make better decisions."

KLM, for example, used AI to develop a new measure to help make complex financial and operational trade-offs involving crew scheduling and passenger delays. "Rather than optimizing for on-time performance," Stomph says, "we quantified what it takes not to deliver as promised across different departments. That required us to quantify things that you cannot find in your P&L." The measure looks at the cost of various situations, such as a two-hour delay to a crew member's schedule if that person is switched from a flight landing at 2 p.m. to one landing at 4 p.m. "What's the price of this?" he asks. "If you want to run an optimization across different departments, you need to create a single currency to unify all of these players. And the single currency we created was nonperformance cost." The single currency enabled everyone to make decisions based on the same criteria instead of relying on individual judgments with uncoordinated decision-making criteria.
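
The "single currency" idea can be sketched in a few lines: every scheduling option carries nonperformance costs from several departments, and the chooser simply minimizes the total. All names and figures below are invented for illustration; they are not KLM's actual model.

```python
# Toy version of a single-currency trade-off: each option's departmental
# impacts are already expressed in one unit (nonperformance cost), so
# cross-department decisions reduce to minimizing a single sum.
def total_nonperformance_cost(option: dict) -> float:
    return sum(option["costs"].values())

def choose_option(options: list) -> dict:
    """Pick the option with the lowest total nonperformance cost."""
    return min(options, key=total_nonperformance_cost)

options = [
    {"name": "swap crew",   "costs": {"crew_delay": 4000, "pax_delay": 1000}},
    {"name": "hold flight", "costs": {"crew_delay": 500,  "pax_delay": 6000}},
]
best = choose_option(options)
```

The point of the shared unit is exactly what Stomph describes: without it, crew scheduling and passenger delay costs cannot be compared on one axis.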

KLM's nonperformance measurement led to changes in a cascade of decisions, including when to swap out crew members. "What I find most intriguing about the solutions we have," Stomph says, "is even if you will never use the tool, that process of bringing these teams together has been very valuable from a financial and a morale point of view."

Another way that AI implementations can help organizations revise assumptions about effective outcomes is to enable workers to outperform existing KPIs so consistently and so thoroughly that new KPIs are called for. "People see that they are outpacing the KPIs that they agreed upon because of AI/ML," Levi Strauss's Pallath says. "Based on how AI/ML is delivering value to the enterprise, the goalpost keeps shifting."

New success measures become necessary when AI-based solutions make possible new performance benchmarks, obsolesce legacy KPIs, and/or reveal new drivers of performance. Changes in KPIs often accompany shifts in organizational behavior. Indeed, organizations that revise their KPIs because of how they use AI are more likely to see improvements in collaboration than organizations that dont make AI-driven adjustments to their KPIs. Sixty-six percent of respondents who agreed that their KPIs have changed because of AI also saw improvements in team-level collaboration.

Achieving these cultural benefits, particularly at the organizational level, can require considerable change. As Pernod Ricard's Calloch describes it, "Some processes get changed in a significant way because the data and the processing of the data through AI give us more certainty about some of the elements. You can make quicker decisions live, during a meeting. You can iterate more frequently. And you don't have to wait six months for the return on investment of a campaign to adapt the new wave or to scale it. In fact, you can have more elements. So yes, it's significantly changing processes of decision-making." Using AI can accelerate the quality and pace of organizational life extensively, requiring considerable change.

But our research suggests that even when organizations make substantial changes associated with AI, culture does not suffer; quite the opposite, in fact. For example, implementing AI is associated with better morale in general. But when combined with business process change, the effects are even more pronounced: The greater (in both number and extent) the change, the greater the improvements in morale. To wit, 57% of organizations that made few changes in business processes reported an increase in morale, while 66% of organizations that made many changes reported an increase in morale. (See Figure 7.) The more that an organization uses AI, the more opportunities there are for cultural benefit.

Morale improves the more processes change.

A strong culture helps encourage AI adoption, and adopting AI can strengthen organizational culture. This cyclical relationship can build through numerous individual process improvements to enhance the overall organizational culture. Zeighami says that when he introduced AI at H&M, he wanted to avoid the common practice of "making one part of your organization become very good at that, and then the rest are still lagging behind."

"It's almost like putting a tire on a car," he explains. "You don't screw one bolt really hard and then do the next one. You just do every bolt a little bit and then tighten everything up. And I think that has been a really good approach for us." Zeighami deployed AI for many company processes, including fashion forecasting, demand forecasting, and price management, along with more personalized customer-facing initiatives. "It's been a very vast approach," he observes. "Not going too deep, but a little bit in every area to enhance and elevate and change the mindset for everybody so we can become data-led, AI-led, going forward. And we have seen a lot of interesting results. In some areas we even see that working with the AI product has changed people's way of working with other stuff, because there's a proximity impact on the business." Once an organization introduces AI widely, it can come back and improve not only individual processes but the interfaces between those processes, strengthening the organization as a whole.

Through repeated application and managerial attention, the virtuous cycle between organizational culture and AI use can result in a more cohesive organization, consistently reflecting its desired strategic values. As a result, responsible AI adoption transcends legitimate issues around minimizing bias (in product design, promotion, and customer service) and manipulation (of customers, pricing, and other business practices). Instead, AI becomes a managerial tool to align microbehavior with broader goals, including societal purpose, equity, and inclusivity.

For example, JoAnn Stonier, chief data officer at Mastercard, reports that the financial services corporation launched a data responsibility initiative in 2018 that involved privacy and security issues and included "working hard on our ethical AI process." Many of her workplace conversations about AI, she adds, center on minimization of bias as well as "how we build an inclusive future." But the conversations don't stop there, she says. "The events of this past year have taught us that we need to pay attention to how we are designing products for society and that our data sets are really important. What are we feeding into the machines, and how do we design our algorithmic processes, and what is it going to learn from us?"

"We understand that data sets are going to have all sorts of bias in them," she continues. "I think we can begin to design a better future, but it means being very mindful of what's inherent in the data set. What's there and what's missing?" These discussions help articulate values around which the organization can align, she says. "The whole firm is really getting behind this idea of developing a broad-based playbook so that everybody in the organization understands how to think about inclusive concepts."

Pervasive change is complex. As founding director of the Notre Dame-IBM Technology Ethics Lab, Elizabeth Renieris is acutely aware of the complexities of these conversations and how they continue to evolve. "The ethics conversation in the past couple of years started out with the lens very much on the technology," she says. "It's been turned around and focused on who's building it and who's at the table; those are the really important questions."

The value of ethics, she adds, is, rather than looking at the narrow particulars and tweaking around the edges of the specific technology or implementation, to step back and have that conversation about values: to ask, "What are our values, and how do those values align with what it is that we're working on from a technology standpoint?" Stepping back may cause discomfort. But through these conversations, AI can have a profound effect on organizational culture.

Read the original here:

The Cultural Benefits of Artificial Intelligence in the Enterprise - MIT Sloan


Artificial intelligence hiring levels in the tech industry dropped in September 2021 – Verdict

Posted: at 10:00 pm

The proportion of technology and communications companies hiring for artificial intelligence-related positions dropped in September 2021, with 55.5% of the companies included in our analysis recruiting for at least one such position.

This latest figure was lower than the 56.9% of companies that were hiring for artificial intelligence-related jobs in August 2021, but an increase compared with the 49.4% recorded in the equivalent month last year.

When it came to the proportion of all job openings that were linked to artificial intelligence, related job postings rose in September 2021, with 6.2% of newly posted job advertisements being linked to the topic.

This latest figure was an increase compared to the 5.9% of newly advertised jobs that were linked to artificial intelligence in the equivalent month a year ago.

Artificial intelligence is one of the topics that GlobalData, from which our data for this article is taken, has identified as a key disruptive force facing companies in the coming years. Companies that excel and invest in these areas now are thought to be better prepared for the future business landscape and better equipped to survive unforeseen challenges.

Our analysis of the data shows that technology and communications companies are currently hiring for artificial intelligence jobs at a rate higher than the average for all companies within GlobalData's job analytics database. The average among all companies stood at 1.8% in September 2021.

GlobalData's job analytics database tracks the daily hiring patterns of thousands of companies across the world, drawing in jobs as they're posted and tagging them with additional layers of data on everything from the seniority of each position to whether a job is linked to wider industry trends.

You can keep track of the latest data from this database as it emerges by visiting our live dashboard here.

See the rest here:

Artificial intelligence hiring levels in the tech industry dropped in September 2021 - Verdict


A Tale Of Two Jurisdictions: Sufficiency Of Disclosure For Artificial Intelligence (AI) Patents In The US And The EPO – Intellectual Property – United…

Posted: at 10:00 pm

PatentNext Summary: In order to prepare applications for filing in multiple jurisdictions, practitioners should be cognizant of claiming styles in the various jurisdictions in which they expect to file AI-related patent applications, and draft claims accordingly. For example, different jurisdictions, such as the U.S. and the EPO, have different legal tests that can result in different styles for claiming artificial intelligence (AI)-related inventions.

In this article, we will compare two applications, one in the U.S. and the other in the EPO, that have the same or similar claims. Both applications claim priority to the same PCT Application (PCT/AT2006/000457) (the "'457 PCT Application"), which is published as PCT Pub. No. WO/2007/053868.

As we shall see, despite the applications having the same or similar claims, prosecution in the two jurisdictions nonetheless resulted in different outcomes, with the U.S. application prosecuted to allowance and the EPO application ending in rejection.

****

Pertinent to our discussion is an overview of AI. A brief description of AI follows before analysis of the AI-related claims at issue.

Artificial Intelligence (AI) is fundamentally a data-driven technology that takes unique datasets as input to train AI computer models. Once trained, an AI computer model may take new data as input to predict, classify, or otherwise output results for use in a variety of applications.

Machine learning, arguably the most widely used AI technique, may be described as a process that uses data and algorithms to train (or teach) computer models, which usually involves training the weights of the model. Training typically involves calculating and updating mathematical weights (i.e., numerical values) of a model based on input that can comprise hundreds, thousands, or millions of data sets. The trained model allows the computer to make decisions without the need for explicit or rule-based programming.
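As an illustrative sketch (not tied to any patent discussed here), the weight-update process described above can be reduced to a few lines of gradient descent on a one-weight model; the data and learning rate below are invented for the example:

```python
# Minimal sketch of "training weights": fit w in y = w * x by gradient
# descent on mean squared error. Data and hyperparameters are illustrative.
def train_weight(pairs, lr=0.05, epochs=200):
    w = 0.0  # initial weight value
    for _ in range(epochs):
        # gradient of MSE with respect to w: mean of 2 * (w*x - y) * x
        grad = sum(2 * (w * x - y) * x for x, y in pairs) / len(pairs)
        w -= lr * grad  # update step: move the weight against the gradient
    return w

# Training pairs generated from y = 3x; the learned weight should approach 3.
data = [(x, 3 * x) for x in [0.1, 0.5, 1.0, 1.5]]
w = train_weight(data)
```

The same loop, scaled up to millions of weights and data sets, is what "training" means for the neural-network models discussed in the claims below.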

In particular, machine learning algorithms build a model on training data to identify and extract patterns from the data and thereby acquire (or learn) unique knowledge that can be applied to new data sets.

For more information, see Artificial Intelligence & the Intellectual Property Landscape.

AI inventions are fundamentally software-related inventions. In the U.S., as a practical rule, software-related patents should disclose an algorithm by which the software-related invention is achieved. An algorithm provides support for a software-related patent pursuant to 35 U.S.C. § 112(a), including (1) by providing sufficiency of disclosure for the patent's "written description" and (2) by "enabling" one of ordinary skill in the art (e.g., a computer engineer or computer programmer) to make or use the related software-related invention without "undue experimentation." Without such support, a patent claim can be held invalid. For more information regarding general aspects of the sufficiency of disclosure in the U.S. for software-related inventions, see Why Including an "Algorithm" Is Important for Software Patents (Part 2).

U.S. Patent 8,920,327 (the "'327 Patent") issued from the '457 PCT Application. The '327 Patent is an example of an AI patent that did not experience sufficiency issues in the U.S. The below provides an overview of the '327 Patent.

The '327 Patent is titled "Method for Determining Cardiac Output" and includes a single independent claim directed to a method for determining cardiac output from an arterial blood pressure curve. The method is implemented via a cardiac device, as illustrated in Figure 1 (copied below):

Fig. 1 illustrates device 1 for implementing the invention of the '327 patent, where measuring device 2 measures the peripheral blood pressure curve, and where the related measurement data is fed into device 1 via line 3, and stored and evaluated there. The device further comprises optical display device 4, input panel 5, and keys 6 for inputting and displaying information.

The claimed method includes an AI aspect, namely the use of "an artificial neural network having weighting values that are determined by learning."

Claim 1 is copied below (with the AI aspect bolded):

1. A method for determining cardiac output from an arterial blood pressure curve measured at a peripheral region, comprising the steps of:

measuring the arterial blood pressure curve at the peripheral region;

arithmetically transforming the measured arterial blood pressure curve to an equivalent aortic pressure; and

calculating the cardiac output from the equivalent aortic pressure,

wherein the arithmetic transformation of the arterial blood pressure curve measured at the peripheral region into the equivalent aortic pressure is performed by the aid of an artificial neural network having weighting values that are determined by learning.

Figure 3 of the '327 patent (copied below) is a schematic illustration of the artificial neural network recited in claim 1.

The specification of the '327 patent describes that "FIG. 3 illustrates the structure of the neural network..., and it is apparent that the neural network ... is comprised of three layers 14, 15, 16." The specification discloses that a supervised learning algorithm is used to train the weights of the model, e.g., "[t]he weights and the bias for the latter two layers 15 and 16 are determined by supervised learning."

The input data fed to the supervised learning algorithm to train the AI model includes "associated blood pressure curve pairs actually determined by measurements in the periphery or in the aorta, respectively." The measurements used for the input data may come "from patients of different ages, sexes, constitutional types, health conditions and the like."
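To make the supervised setup concrete, the following sketch is one way such curve-to-curve training can be organized; it is not taken from the patent, and the layer sizes, sampling resolution, and synthetic "measured pairs" are all invented for illustration:

```python
import numpy as np

# Illustrative only: a small three-layer network (input, hidden, output)
# trained on paired curves, in the spirit of the supervised
# "peripheral curve -> equivalent aortic curve" setup described above.
rng = np.random.default_rng(0)
n_samples, n_points, n_hidden = 200, 8, 16  # invented sizes

# Synthetic stand-ins for measured pairs: each "peripheral" curve X maps
# to a scaled-and-offset "aortic" target curve Y.
X = rng.normal(size=(n_samples, n_points))
Y = 0.8 * X + 0.1

W1 = rng.normal(scale=0.1, size=(n_points, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_hidden, n_points))
b2 = np.zeros(n_points)

lr = 0.05
for _ in range(1000):
    H = np.tanh(X @ W1 + b1)   # hidden-layer activations
    P = H @ W2 + b2            # predicted output curves
    E = P - Y                  # prediction error
    # Backpropagate mean-squared-error gradients through both layers.
    gW2 = H.T @ E / n_samples
    gb2 = E.mean(axis=0)
    dH = (E @ W2.T) * (1 - H**2)
    gW1 = X.T @ dH / n_samples
    gb1 = dH.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

mse = float(((np.tanh(X @ W1 + b1) @ W2 + b2 - Y) ** 2).mean())
```

The point relevant to the sufficiency discussion below is visible in the sketch: the trained weights are meaningless without the paired input data that produced them, which is exactly the disclosure the EPO found missing.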

No issues with respect to sufficiency were raised during prosecution of the U.S. application that issued as the '327 patent.

More generally, issues of sufficiency in the U.S. typically arise in litigation and result in expert testimony, i.e., "a battle of the experts," where expert witnesses (typically university professors or industry consultants) from opposing sides opine on the knowledge of a person of ordinary skill in the art and the sufficiency of the disclosure in view of that person.

The EPO has developed its own, yet similar, stance on AI-related inventions when compared with the U.S. Nonetheless, outcomes of prosecution can differ. The below provides a cursory overview of developments in the EPO with respect to AI-related inventions and analyzes the treatment of an EPO application filed based on the '457 PCT Application (the same PCT Application as for the '327 patent discussed above).

Generally, artificial intelligence inventions may be patented at the European Patent Office (EPO). For example, in its Guidelines for Examination, the EPO defines AI and machine learning as "based on computational models and algorithms for classification, clustering, regression and dimensionality reduction, such as neural networks, genetic algorithms, support vector machines, k-means, kernel regression and discriminant analysis." Section 3.3.1 (Artificial intelligence and machine learning).

As such, the EPO dubs AI and machine learning as "per se of an abstract mathematical nature," irrespective of whether such models may be trained with training data. Id. Thus, simply claiming a machine learning model (e.g., a "neural network") does not, alone, necessarily imply the use of a "technical means" in accordance with EPO law.

Nonetheless, the Guidelines for Examination at the EPO recognize that the use of an AI model, when claimed as a whole with additional subject matter, may demonstrate a sufficient technical character. Id. As an example, the Guidelines state that "the use of a neural network in a heart-monitoring apparatus for the purpose of identifying irregular heartbeats makes a technical contribution." Id. As a further example, the Guidelines state that "[t]he classification of digital images, videos, audio or speech signals based on low-level features (e.g. edges or pixel attributes for images) are further typical technical applications of classification algorithms." Id.

In a decision in 2020, the EPO Board of Appeal rejected a machine learning-based patent application that claimed an "artificial neural network" because the patent specification failed to sufficiently disclose how the artificial neural network was trained. See T 0161/18 (Equivalent aortic pressure / ARC SEIBERSDORF). The application in question claimed priority to the '457 PCT Application, which is the same parent application as that of the '327 patent, as discussed above.

The claims were the same as or similar to those in the U.S.: the claims at issue were directed to determining cardiac output from an arterial blood pressure curve measured at a periphery and recited, in part (with respect to AI), that the "blood pressure curve measured on the periphery is converted into the equivalent aortic pressure with the help of an artificial neural network, the weighting values of which are determined by learning."

Claim 1 is reproduced below (in English, based on a machine translation of the original opinion, which is in German):

1. A method for determining the cardiac output from an arterial blood pressure curve measured at the periphery, in which the blood pressure curve measured at the periphery is mathematically transformed to the equivalent aortic pressure and the cardiac output is calculated from the equivalent aortic pressure, characterized in that the transformation of the blood pressure curve measured on the periphery is converted into the equivalent aortic pressure with the help of an artificial neural network, the weighting values of which are determined by learning.

The Board analyzed the claim in view of the specification pursuant to Article 83 EPC (sufficiency of disclosure). As described by the Board, Article 83 EPC requires that the invention be disclosed in the European patent application so clearly and completely that a person skilled in the art can carry it out. For this, the disclosure of the invention in the application must enable the person skilled in the art to reproduce the technical teaching inherent in the claimed invention on the basis of his general specialist knowledge.

The Board then turned to the specification to determine whether it disclosed enough support to meet these requirements in view of the claimed "artificial neural network." However, the specification was found lacking because it failed to "disclose which input data are suitable for training the artificial neural network according to the invention, or at least one data set suitable for solving the technical problem at hand."

Instead, the Board found that the specification "merely reveals that the input data should cover a broad spectrum of patients of different ages, genders, constitution types, health status and the like."

Therefore, the Board found that the training of the artificial neural network could not be reworked by the person skilled in the art, who consequently cannot carry out the invention.

Because of these deficiencies, the Board found that the specification failed to provide sufficient disclosure pursuant to Article 83 EPC.

For similar reasons, the Board further found that the claimed subject matter lacked an "inventive step" pursuant to Article 56 EPC. Specifically, the Board found that the claimed "artificial neural network" was not adapted for the specific, claimed application because the specification failed to disclose how the artificial neural network was trained, and specifically failed to disclose the weight values that resulted from such training. For this reason, the claimed "artificial neural network" could not be distinguished from the cited prior art, which resulted in a failure to demonstrate the requisite inventive step.

As the Board described:

In the present case, the claimed neural network is therefore not adapted for the specific, claimed application. In the opinion of the Chamber, there is therefore only an unspecified adaptation of the weight values, which is in the nature of every artificial neural network. The board is therefore not convinced that the claimed effect will be achieved in the claimed method over the entire range claimed. This effect cannot, therefore, be taken into account in the assessment of inventive step in the sense of an improvement over the prior art.

Accordingly, at least with respect to patent applications filed in the EPO, where an AI or machine learning model is to be distinguished from the prior art, a patent applicant may want to include an example training data set, example trained weights, or at least a sufficient description of the input used to train the model for the specific, claimed application or end use. For example, at least one example of the data can be provided (or claimed) to show the inputs used to train specific weights, which may allow the claim to have sufficient disclosure while, at the same time, allowing the claim to cover a spectrum of AI models trained with a particular set of data.

For the time being, such disclosure for an EPO case could be considered additional when compared with the sufficiency of disclosure required in the U.S. However, it is to be understood that the U.S. Patent Office has also indicated the importance of including training data, or the specific species of data used to train a model, in its example guidance. See How to Patent an Artificial Intelligence (AI) Invention: Guidance from the U.S. Patent Office (USPTO). In any event, while there have been few court cases on AI-related inventions in the U.S. (see How the Courts Treat Artificial Intelligence (AI) Patent Inventions: Through the Years Since Alice), future cases may indicate whether the U.S. will trend toward the EPO's decision in T 0161/18 with respect to the sufficiency of disclosure.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

See original here:

A Tale Of Two Jurisdictions: Sufficiency Of Disclosure For Artificial Intelligence (AI) Patents In The US And The EPO - Intellectual Property - United...


Artificial Intelligence (AI) in Manufacturing Market Worth $13.96 billion by 2028 Exclusive Report by Meticulous Research – Yahoo Finance

Posted: at 10:00 pm

Artificial Intelligence in Manufacturing Market By Component, Technology (ML, NLP, Computer Vision), Application (Predictive Maintenance, Quality Management, Supply Chain, Production Planning), Industry Vertical, & Geography - Global Forecast to 2028

Redding, California, Nov. 02, 2021 (GLOBE NEWSWIRE) -- According to a new market research report titled "AI in Manufacturing Market By Component, Technology (ML, NLP, Computer Vision), Application (Predictive Maintenance, Quality Management, Supply Chain, Production Planning), Industry Vertical, and Geography - Global Forecast to 2028," published by Meticulous Research, the artificial intelligence (AI) in manufacturing market is expected to grow at a CAGR of 38.6% during the forecast period to reach $13.96 billion by 2028.
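As a quick sanity check on figures like these, the compound-growth relation end = start × (1 + r)^years can be used to back out the implied starting market size. The seven-year span (2021 to 2028) below is an assumption, since the release does not state the start year of its forecast period:

```python
# Back out the implied starting market size from an end value, a CAGR,
# and a period length in years. The 7-year period is an assumption.
def implied_start(end_value, cagr, years):
    return end_value / (1 + cagr) ** years

# $13.96B at 38.6% CAGR over an assumed 7 years implies roughly a
# $1.4B base market.
base_2021 = implied_start(13.96, 0.386, 7)  # billions of USD
```

The same one-liner, run in reverse, reproduces any of the report's projected end values from a base figure and growth rate.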

Download Free Sample Report Now @ https://www.meticulousresearch.com/download-sample-report/cp_id=4983

The rising popularity of artificial intelligence in the manufacturing industry for optimizing logistics & supply chains, enhancing production outcomes, advancing process effectiveness, and reducing costs and downtime in production lines while delivering finished products to consumers is expected to drive the growth of the AI in manufacturing market. Additionally, the advent of Industry 4.0, the increasing volume of large, complex data, and the rising adoption of industrial IoT further contribute to market growth.

However, the lack of infrastructure and high procurement and operating costs are expected to restrain the growth of this market to a certain extent.

Impact of COVID-19 on AI in Manufacturing Market

The COVID-19 pandemic created serious challenges for the world's economy and for industry verticals. SARS-CoV-2, the virus responsible for the global COVID-19 pandemic, had a distressing impact on most profitable businesses across the globe, forcing a shift to a remote workforce while companies worked to ensure people's health & safety and business application integrity. The impact of the COVID-19 outbreak has varied with each industry sector's level of resilience. Additionally, the lockdowns imposed to contain the pandemic resulted in severe losses to businesses. Manufacturers across the globe faced grave challenges, such as diminished demand, production, and revenues, as the COVID-19 pandemic intensified in 2020. The automobile, semiconductors & electronics, and heavy metal & machinery manufacturing industries witnessed raw material shortages, with manufacturers temporarily closing down factories or minimizing production.


Speak to our Analysts to Understand the Impact of COVID-19 on Your Business: https://www.meticulousresearch.com/speak-to-analyst/cp_id=4983

According to the United Nations Conference on Trade and Development (UNCTAD), the COVID-19 pandemic is expected to reduce global FDI by around 5% to 15% due to the temporary shutdown of the manufacturing sector. A survey conducted by the National Association of Manufacturers (NAM) stated that around 78% of manufacturers anticipated a financial impact, and 35.5% faced supply chain disruptions due to COVID-19. These factors led manufacturing companies to deprioritize their digital transformation strategies, including equipping their production units with AI.

Consequently, the AI in manufacturing market witnessed a sharp decline in 2020. Thus, manufacturing industries require considerable productive time and assistance from local governments to get back on track and overcome the COVID-19 crisis. Several governments plan to launch favorable initiatives, such as incentive programs promoting investments in the private sector, tax exemptions, and lowering corporate interest rates. For instance, in 2021, Cisco Systems, Inc. (U.S.) launched a collaborative framework under Cisco's Country Digital Acceleration (CDA) program to accelerate digitization and support inclusive pandemic recovery across South Korea. Such developments and initiatives are exhibiting positive impacts on the growth of the market. Based on geography, the EU countries were affected the most by the COVID-19 pandemic, followed by the U.S. On the other hand, China is gradually recovering from the pandemic, with positive developments in the supply chain industry.

Several organizations, post-COVID-19 pandemic, might strategize to downsize by cutting business lines considered non-critical. Many leading AI in manufacturing players are eyeing this crisis as a new opportunity for restructuring and revisiting their existing strategies with advanced product portfolios. AI technology providers for manufacturing industries are focused on new applications and delivery models to create smart automation technologies, digitization, and advanced AI applications. For instance, in 2021, Nvidia Corporation (U.S.) partnered with Google Cloud (U.S.) to create the industry's first AI-on-5G Lab. This partnership helped accelerate the creation of smart cities, smart factories, and other advanced 5G and AI applications. Also, in 2021, General Electric Company (U.S.) partnered with the Global Manufacturing and Industrialization Summit (GMIS) (UAE) to explore the role of digitization, lean manufacturing, and workplace safety. Such developments and initiatives are expected to help manufacturing companies recover faster and reduce dependencies on physical process handling.

Hence, despite the pandemic affecting the AI in manufacturing market, it still holds considerable potential to bounce back with the gradual recovery of the manufacturing sector.

The AI in manufacturing market is segmented based on component (hardware [processors, memory solutions, and networking solutions], software [AI platforms and AI solutions], service [deployment & integration, support & maintenance]), technology (machine learning, natural language processing, computer vision, speech & voice recognition, context-aware computing), application (predictive maintenance & machinery inspection, quality management, supply chain optimization, industrial robot, production planning, material handling, field services, safety planning, cybersecurity, energy management), industry verticals (automotive, semiconductors & electronics, heavy metals & machine manufacturing, energy & power, aerospace & defense, medical devices, pharmaceuticals, and FMCG), and region. The study also evaluates industry competitors and analyses the market at the regional and country levels.

Based on component, the hardware segment is estimated to account for the largest share of the AI in manufacturing market in 2021. The large market share of this segment is primarily driven by the increasing demand for robust and cost-effective devices, including servers, storage, and networking devices. However, the software segment is slated to grow at the fastest CAGR during the forecast period due to the high adoption of cloud-based technologies and the increasing demand for AI platforms to streamline processes and operations.

Based on technology, the machine learning segment is estimated to account for the largest share of the AI in manufacturing market in 2021. The large market share of this segment is primarily driven by the rising need for identifying, monitoring, and analyzing the critical system variables during the manufacturing process, growing demand for predictive maintenance & machinery inspection, and the increase in unstructured data generated by the manufacturing industry. However, the natural language processing segment is slated to grow at the fastest CAGR during the forecast period due to the need to strengthen interactions with search engines by allowing queries to be assessed faster in an efficient manner and the growing demand for cloud-based NLP solutions to reduce overall costs, facilitate smart environments, and enhance scalability.

Based on application, the predictive maintenance & machinery inspection segment is estimated to account for the largest share and witness the fastest CAGR of the AI in manufacturing market in 2021. This segment's large market share and high growth rate are primarily driven by the increasing demand to reduce costs related to operating heavy equipment, growing demand for equipment uptime & availability, reducing maintenance planning time, improving production capacity, and real-time reporting of manufacturing issues in industries.

Quick Buy AI in Manufacturing Market Research Report: https://www.meticulousresearch.com/Checkout/49201841

Based on industry vertical, the automotive industry is estimated to account for the largest share of the overall AI in manufacturing market in 2021. The large market share of this segment is primarily driven by the rising adoption of advanced AI automotive solutions for fault detection & isolation, quality management, smart manufacturing, production monitoring, and the need for predictive maintenance & machinery inspection solutions.

However, the medical devices manufacturing sector is slated to grow at the fastest CAGR during the forecast period due to the outbreak of the COVID-19 pandemic and the rising focus on preventive medical equipment maintenance to reduce unplanned downtime, enhance production quality control, and improve operational productivity.

Based on geography, Asia-Pacific is estimated to account for the largest share and witness the fastest CAGR of the AI in manufacturing market in 2021. This region's large market share and high growth rate are primarily attributed to the presence of major AI in manufacturing players, along with several emerging startups, in the region; increasing investments by technology leaders; and increasing digitization, along with the strong presence of automobile, electronics, and semiconductor companies and their focus on developing advanced solutions to optimize manufacturing operations and processes in the region.

The report also includes an extensive assessment of the key strategic developments adopted by the leading market participants in the industry over the past four years. The AI in manufacturing market has witnessed various strategies in recent years, such as partnerships & agreements. These strategies enabled companies to broaden their product portfolios, advance capabilities of existing products, and gain cost leadership in the AI in manufacturing market. For instance, in 2021, SAP SE (Germany) partnered with Google Cloud (U.S.) to augment existing business systems with Google Cloud capabilities in Artificial Intelligence (AI) and Machine Learning (ML). Also, SAP SE partnered with Plataine Ltd. (U.S.) to integrate IIoT and AI-based software for digital manufacturing. This partnership enabled customers to benefit from a holistic smart factory solution that extends across production operations. In 2021, Robert Bosch (Germany) collaborated with Capgemini SE (France) for intelligent manufacturing, digitization, and sustainability of their production plants.

The AI in manufacturing market is fragmented in nature. The major players operating in this market include Alphabet, Inc. (U.S.), IBM Corporation (U.S.), Intel Corporation (U.S.), Microsoft Corporation (U.S.), Nvidia Corporation (U.S.), Oracle Corporation (U.S.), Amazon Web Services, Inc. (U.S.), Siemens AG (Germany), General Electric Company (U.S.), SAP SE (Germany), Robert Bosch GmbH (Germany), Cisco Systems, Inc. (U.S.), Rockwell Automation, Inc. (U.S.), Advanced Micro Devices, Inc. (U.S.), and Sight Machine Inc. (U.S.) among others.

To gain more insights into the market with a detailed table of content and figures, click here: https://www.meticulousresearch.com/product/artificial-intelligence-in-manufacturing-market-4983

Scope of the Report:

AI in Manufacturing Market, by Component

Processors

Memory Solutions

Networking Solutions

Deployment & Integration

Support & Maintenance

AI in Manufacturing Market, by Technology

AI in Manufacturing Market, by Application

Predictive Maintenance & Machinery Inspection

Quality Management

Supply Chain Optimization

Industrial Robot/Robotics & Factory Automation

Production Planning

Material Handling

Field Services

Safety Planning

Cybersecurity

Energy management

AI in Manufacturing Market, by Industry Vertical

AI in Manufacturing Market, by Geography

North America

Europe

Germany

U.K.

France

Italy

Spain

Netherlands

Russia

Ireland

Turkey

Rest of Europe

Asia-Pacific

Japan

China

India

South Korea

Australia & New Zealand

Thailand

Indonesia

Taiwan

Vietnam

Rest of Asia-Pacific

Latin America

Mexico

Brazil

Rest of Latin America

Middle East and Africa

Download Free Sample Report Now @ https://www.meticulousresearch.com/download-sample-report/cp_id=4983

Amidst this crisis, Meticulous Research is continuously assessing the impact of the COVID-19 pandemic on various sub-markets and enabling global organizations to strategize for the post-COVID-19 world and sustain their growth. Let us know if you would like to assess the impact of COVID-19 on any industry here: https://www.meticulousresearch.com/custom-research

Related Reports:

Artificial Intelligence in Retail Market by Product, Application (Predictive Merchandizing, Programmatic Advertising), Technology (Machine Learning, Natural Language Processing), Deployment (Cloud, On-Premises), and Geography - Global Forecast to 2027

https://www.meticulousresearch.com/product/artificial-intelligence-in-retail-market-4979

Healthcare Artificial Intelligence Market by Product and Services (Software, Services), Technology (Machine Learning, NLP), Application (Medical Imaging, Precision Medicine, Patient Management), End User (Hospitals, Patients) - Global Forecast to 2027

https://www.meticulousresearch.com/product/healthcare-artificial-intelligence-market-4937

Automotive Artificial Intelligence (AI) Market by Component (Hardware, Software), Technology (Machine Learning, Computer Vision), Process (Signal Recognition, Image Recognition) and Application (Semi-Autonomous Driving) - Global Forecast to 2027

https://www.meticulousresearch.com/product/automotive-artificial-intelligence-market-4996

Artificial Intelligence in Supply Chain Market by Component (Platforms, Solutions) Technology (Machine Learning, Computer Vision, Natural Language Processing), Application (Warehouse, Fleet, Inventory Management), and by End User - Global Forecast to 2027

https://www.meticulousresearch.com/product/artificial-intelligence-ai-in-supply-chain-market-5064

Artificial Intelligence (AI) in Cybersecurity Market by Technology (ML, NLP), Security (Endpoint, Cloud, Network), Application (DLP, UTM, Encryption, IAM, Antivirus, IDP), Industry (Retail, Government, Automotive, BFSI, IT, Healthcare, Education), Geography - Global Forecast to 2027

https://www.meticulousresearch.com/product/artificial-intelligence-in-cybersecurity-market-5101

About Meticulous Research

Meticulous Research was founded in 2010 and incorporated as Meticulous Market Research Pvt. Ltd. in 2013 as a private limited company under the Companies Act, 1956. Since its incorporation, the company has become the leading provider of premium market intelligence in North America, Europe, Asia-Pacific, Latin America, and the Middle East & Africa.

The name of our company defines our services, strengths, and values. Since our inception, we have strived to research, analyze, and present critical market data with great attention to detail. Through meticulous primary and secondary research techniques, we have built strong capabilities in data collection, interpretation, and analysis, spanning both qualitative and quantitative research, with a fine team of analysts. We design our meticulously analyzed, intelligent, and value-driven syndicated market research reports, custom studies, quick-turnaround research, and consulting solutions to address the business challenges of sustainable growth.

Contact:
Mr. Khushal Bombe
Meticulous Market Research Inc.
1267 Willis St, Ste 200, Redding, California, 96001, U.S.
USA: +1-646-781-8004
Europe: +44-203-868-8738
APAC: +91 744-7780008
Email: sales@meticulousresearch.com
Visit Our Website: https://www.meticulousresearch.com/
Connect with us on LinkedIn: https://www.linkedin.com/company/meticulous-research
Content Source: https://www.meticulousresearch.com/pressrelease/294/artificial-intelligence-in-manufacturing-market-2028

More here:

Artificial Intelligence (AI) in Manufacturing Market Worth $13.96 billion by 2028 Exclusive Report by Meticulous Research - Yahoo Finance


Why testing must address the trust-based issues surrounding artificial intelligence – Aerospace Testing International

Posted: at 10:00 pm

Words byJonathan Dyble

Aviation celebrates its 118th birthday this year. There have been many milestone advances over that time, and engineers today continue to use the latest technology to enhance performance and transform capabilities in both the defence and commercial sectors.

Artificial Intelligence (AI) is arguably one of the most exciting areas of innovation and like many sectors, AI is garnering a great amount of attention in aviation.

Powered by significant advances in computer processing power, AI is today prompting aviation experts to explore opportunities that once seemed impossible. It is worth noting, however, that AI-related transformation in aviation remains in its infant stages.

Given the huge risks and costs involved, full confidence and trust is required for autonomous systems to be deployed at scale. As a result, AI remains somewhat of a novelty in the aviation industry at present but attention is growing, progress continues to be made and the tide is beginning to turn.

One individual championing AI developments in aviation is Luuk Van Dijk, CEO and founder of Daedalean, a Zurich-based startup specializing in the autonomous operation of aircraft.

While Daedalean is focused on developing software for pilotless and affordable aircraft, Van Dijk is a staunch advocate of erring on the side of caution when it comes to deploying AI in an aviation environment.

"We have to be careful about what we mean by artificial intelligence," says Van Dijk. "Any sufficiently advanced technology is indistinguishable from magic, and AI has always been referred to as the kind of thing we can almost but not quite do with computers. By that definition, AI has unlimited possible uses, but unfortunately none are ready today."

"When we look at things that have only fairly recently become possible, understanding an image for example, that is obviously massively useful to people. But these are applications of modern machine learning, and it is these that currently dominate the meaning of the term AI."

While such technologies remain somewhat in their infancy, the potential is clear to see.

Van Dijk says, "When we consider a pilot, especially in VFR, they use their eyes to see where they are, where they can fly, and where they can land. Systems that assist with these functions, such as GPS and radio navigation, TCAS and ADS-B, PAPI [precision approach path indicator], and ILS, are limited. Strictly speaking they are all optional, and none can replace the use of your eyes."

"With AI, imagine that you can now use computer vision and machine learning to build systems that help the pilot to see. That creates significant opportunities and possibilities: it can reduce the workload in regular flight and in contingencies, and therefore has the potential to make flying much safer and easier."

A significant reason why such technologies have not yet made their way into the cockpit is a lack of trust, something that must be earned through rigorous, extensive testing. Yet the way mechanical systems and software are tested differs significantly, because of an added layer of complexity in the latter.

"For any structural or mechanical part of an aircraft there are detailed protocols on how to conduct tests that are statistically sound and give you enough confidence to certify the system," says Van Dijk. "Software is different. It is very hard to test because the failures typically depend on rare events in a discrete input space."

This was a problem that Daedalean encountered in its first project with the European Union Aviation Safety Agency (EASA), working to explore the use of neural networks in developing systems that measurably outperform humans on visual tasks such as navigation, landing guidance, and traffic detection.

While the software design assurance approach that stems from Software Considerations in Airborne Systems and Equipment Certification (DO-178C) works for more traditional software, its guidance was deemed only partially applicable to machine-learned systems.

"Instead of having human programmers translate high-level functional and safety requirements into low-level design requirements and computer code, in machine learning a computer explores the design space of possible solutions given a very precisely defined target function that encodes the requirements," says Van Dijk.

"If you can formulate your problem in this form, then it can be a very powerful technique, but you have to somehow come up with the evidence that the resulting system is fit for purpose and safe for use in the real world."

"To achieve this, you have to show that the emergent behavior of a system meets the requirements. That's not trivial and actually requires more care than building the system in the first place."
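Van Dijk's description of machine learning as a computer searching a design space, guided by a target function that encodes the requirements, can be illustrated with a deliberately tiny sketch. This is an illustration of the general idea only, not Daedalean's actual tooling: here the "requirement" is that the model reproduce y = 2x, the target (loss) function encodes it as mean squared error, and gradient descent, rather than a programmer, finds the design.

```python
import random

# Requirement encoded as data: the system should reproduce y = 2x.
data = [(x, 2.0 * x) for x in range(1, 6)]

def loss(w):
    # Target function: mean squared error against the requirement.
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

# The computer explores the one-dimensional "design space" of w values
# via gradient descent, starting from a random point.
w = random.uniform(-1.0, 1.0)
for _ in range(200):
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= 0.01 * grad

print(round(w, 3))  # prints 2.0
```

The emergent behavior (w near 2) is nowhere written in the code, which is exactly why, as Van Dijk notes, showing that it meets the requirements takes separate evidence.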

From these discoveries, Daedalean recently developed and released a joint report with EASA, with the aim of maturing the concept of learning assurance and pinpointing trustworthy building blocks upon which AI applications could be tested thoroughly enough to be safely and confidently incorporated into an aircraft. "The underlying statistical nature of machine learning systems actually makes them very conducive to evidence and arguments based on sufficient testing," Van Dijk confirms, summarizing the findings showcased in the report.

"The requirements on the system then become traceable to requirements on the test data: you have to show that your test data is sufficiently representative of the data you will encounter during an actual flight. For that you must show that you have sampled any data with independence, a term familiar to those versed in the art of design assurance, but something that has a much stricter mathematical meaning in this context."
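The statistical argument Van Dijk sketches can be made concrete with a standard, purely illustrative calculation; the trial count below is an assumption for illustration, not a figure from the EASA/Daedalean report. If a system passes N independent, representative trials with zero failures, the one-sided 95% upper confidence bound on its per-trial failure probability p follows from (1 - p)^N = 0.05, which is approximately 3/N (the "rule of three"). The independence and representativeness of the samples are exactly what make this bound valid.

```python
def failure_rate_upper_bound(n_trials, confidence=0.95):
    # Zero failures in n_trials independent trials: solve (1 - p)^n = 1 - confidence
    # for p, giving the one-sided upper confidence bound on the failure rate.
    return 1.0 - (1.0 - confidence) ** (1.0 / n_trials)

n = 30_000  # assumed trial count, for illustration only
bound = failure_rate_upper_bound(n)
print(f"{bound:.2e}")  # close to 3/n, i.e. about 1e-4
```

Doubling the demonstrated confidence in the failure rate therefore requires roughly doubling the amount of independent, representative test data, which is why the requirements trace to the test set itself.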

Another person helping to make the strides needed to bring AI into the cockpit is Dan Javorsek, Commander of Detachment 6, Air Force Operational Test and Evaluation Center (AFOTEC) at Nellis Air Force Base in Nevada. Javorsek is also director of the F-35 US Operational Test Team and previously worked as a program manager for the Defense Advanced Research Projects Agency (DARPA) within its Strategic Technology Office.

Much like Van Dijk, Javorsek points to trust as the key element in ensuring that potentially transformational AI and automated systems become accepted and incorporated into future aircraft. Furthermore, he believes that this will be hard to achieve using current test methods.

"Traditional trust-based research relies heavily on surveys taken after test events. These proved to be largely inadequate for a variety of reasons, but most notably their lack of diagnostics during different phases of a dynamic engagement," says Javorsek.

As part of his research, Javorsek attempted to address this challenge directly by building a trust measurement mechanism based on a pilot's physiology. Pilots' attention was divided between two concurrent primary tasks, forcing them to decide which task to accomplish and which to offload to an accompanying autonomous system.

"Through these tests we were able to measure a host of physiological indicators shown by the pilots, from their heart rate and galvanic skin response to their gaze and pupil dwell times on different aspects of the cockpit environment," Javorsek says.

"As a result, we end up with a metric for which contextual situations and which autonomous system behaviors give rise to manoeuvres that the pilots appropriately trust."

However, a key challenge Javorsek encountered during this research related to the difficulty machines would have in assessing hard-to-anticipate events in what he describes as "very messy" military situations.

Real-world scenarios will often throw up unusual tactics and situations, such as stale tracks and the presence of significant denial and deception on both sides of an engagement. In addition, electronic jammers and repeaters are often used in an attempt to mimic and confuse an adversary.

"This can lead to an environment prone to accidental fratricide that can be challenging for even the most seasoned and experienced pilots," Javorsek says. "As a result, aircrews need to be very aware of the limitations of any autonomous system they are working with and employing on the battlefield."

It is perhaps for these reasons that Nick Gkikas, systems engineer for human factors engineering and flight deck at Airbus Defence and Space, argues that the most effective use of AI and machine learning at present is outside the cockpit. "In aviation, AI and machine learning are most effective when used offline and on the ground, in managing and exploiting big data from aircraft health and human-in/on-the-loop mission performance during training and operations," he says.

"In the cockpit, most people imagine the implementation of machine learning as an R2-D2 type of robot assistant. While such a capability may be possible today, it is currently still limited by the amount of processing power available on board and the development of effective human-machine interfaces with machine agents in the system."

Gkikas agrees with Javorsek and Van Dijk that AI has not yet been developed sufficiently to be part of the cockpit in an effective and safe manner. Until such technologies are more advanced, effectively tested, and backed by even greater computing power, it seems AI may be better placed in other aviation applications such as weapons systems.

Javorsek also believes it will be several years before AI and machine learning software will be successful in dynamically controlling the manoeuvres of fleet aircraft traditionally assigned to contemporary manned fighters. However, there is consensus amongst experts that there is undoubted potential for such technologies to be developed further and eventually incorporated within the cockpit of future aircraft.

"For AI in the cockpit, and in aircraft in general, I am confident we will see unmanned drones, eVTOL aircraft, and similarly transformative technologies being rolled out beyond test environments in the not-so-distant future," concludes Van Dijk.

Original post:

Why testing must address the trust-based issues surrounding artificial intelligence - Aerospace Testing International


US/EU Initiative Spotlights Cooperation, Differing Approaches To Regulation Of Artificial Intelligence Systems – Privacy – Worldwide – Mondaq News…

Posted: at 10:00 pm


In late September 2021, representatives from the U.S. and the European Union met to coordinate objectives related to the U.S.-EU Trade and Technology Council, and high on the Council's agenda were the societal implications of the use of artificial intelligence systems and technologies ("AI Systems"). The Council's public statements on AI Systems affirmed its "willingness and intention to develop and implement trustworthy AI" and a "commitment to a human-centric approach that reinforces shared democratic values," while acknowledging concerns that authoritarian regimes may develop and use AI Systems to curtail human rights, suppress free speech, and enforce surveillance systems. Given the increasing focus on the development and use of AI Systems from both users and investors, it is becoming imperative for companies to track policy and regulatory developments regarding AI on both sides of the Atlantic.

At the heart of the debate over the appropriate regulatory strategy is a growing concern over algorithmic bias: the notion that the algorithm powering the AI Systems in question has bias "baked in" that will manifest in its results. Examples of this issue abound: job applicant systems favoring certain candidates over others, facial recognition systems treating African Americans differently than Caucasians, and so on. These concerns have been amplified over the last 18 months as social justice movements have highlighted the real-world implications of algorithmic bias.

In response, some prominent tech industry players have posted position statements on their public-facing websites regarding their use of AI Systems and other machine learning practices. These statements typically address issues such as bias, fairness, and disparate impact stemming from the use of AI Systems, but often are not binding or enforceable in any way. As a result, these public statements have not quelled the debate around regulating AI Systems; rather, they highlight the disparate regulatory regimes and business needs that these companies must navigate.

When the EU's General Data Protection Regulation ("GDPR") came into force in 2018, it provided prescriptive guidance regarding the treatment of automated decision-making practices or profiling. Specifically, Article 22 is generally understood to implicate technology involving AI Systems. Under that provision, EU data subjects have the right not to be subject to decisions based solely on automated processing (and without human intervention) which may produce legal effects for the individual. In addition to Article 22, data processing principles in the GDPR, such as data minimization and purpose limitation practices, are applicable to the expansive data collection practices inherent in many AI Systems.

Consistent with the approach enacted in the GDPR, recently proposed EU legislation regarding AI Systems favors tasking businesses, rather than users, with compliance responsibilities. The EU's Artificial Intelligence Act (the "Draft AI Regulation"), released by the EU Commission in April 2021, would require companies (and users) who use AI Systems as part of their business practices in the EU to limit the harmful impact of AI. If enacted, the Draft AI Regulation would be one of the first legal frameworks for AI designed to "guarantee the safety and fundamental rights of people and businesses, while strengthening AI uptake, investment and innovation across the EU." The Draft AI Regulation adopts a risk-based approach, categorizing AI Systems as unacceptable risk, high risk, and minimal risk. Much of the focus and discussion with respect to the Draft AI Regulation has concerned (i) what types of AI Systems are considered high-risk, and (ii) the resulting obligations on such systems. Under the current version of the proposal, activities that would be considered "high-risk" include employee recruiting and credit scoring, and the obligations for high-risk AI Systems would include maintaining technical documentation and logs, establishing a risk management system and appropriate human oversight measures, and requiring incident reporting with respect to AI System malfunctioning.

While AI Systems have previously been subject to guidelines from governmental entities and industry groups, the Draft AI Regulation would be the most comprehensive AI Systems law in Europe, if not the world. In addition to the substantive requirements previewed above, it proposes establishing an EU AI board to facilitate implementation of the law, allowing Member State regulators to enforce the law, and authorizing fines of up to 6% of a company's annual worldwide turnover. The draft law will likely be subject to a period of discussion and revision, with the potential for a transition period, meaning that companies that do business in Europe or target EU data subjects will have a few years to prepare.

Unlike the EU, the U.S. lacks comprehensive federal privacy legislation and has no law or regulation specifically tailored to AI activities. Enforcement of violations of privacy practices, including data collection and processing practices through AI Systems, primarily originates from Section 5 of the Federal Trade Commission ("FTC") Act, which prohibits unfair or deceptive acts or practices. In April 2020, the FTC issued guidance regarding the use of AI Systems designed to promote fairness and equity. Specifically, the guidance directed that the use of AI tools should be "transparent, explainable, fair, and empirically sound, while fostering accountability." The change in administration has not changed the FTC's focus on AI systems. First, public statements from then-FTC Acting Chair Rebecca Slaughter in February 2021 cited algorithms that result in bias or discrimination, or AI-generated consumer harms, as a key focus of the agency. Then, the FTC addressed potential bias in AI Systems on its website in April 2021 and signaled that unless businesses adopt a transparency approach, test for discriminatory outcomes, and are truthful about data use, FTC enforcement actions may result.

At the state level, recently enacted privacy laws in California, Colorado, and Virginia will enable consumers in those states to opt out of the use of their personal information in the context of "profiling," defined as a form of automated processing performed on personal information to evaluate, analyze, or predict aspects related to individuals. While AI Systems are not specifically addressed, the three new state laws require data controllers (or equivalent) to conduct data protection impact assessments to determine whether processing risks associated with profiling may result in unfair or disparate impact to consumers. In all three cases, yet-to-be-promulgated implementing regulations may provide businesses (and consumers) with additional guidance regarding operationalizing automated decision-making requests up until the laws' effective dates (January 2023 for Virginia and California, July 2023 for Colorado).

Proliferating use of AI Systems has dramatically increased the scale, scope, and frequency of processing of personal information, which has led to an accompanying increase in regulatory scrutiny to ensure that harms to individuals are minimized. Businesses that utilize AI Systems should adopt a comprehensive governance approach to comply with both the complementary and divergent aspects of the U.S. and EU approaches to the protection of individual rights. Although laws governing the use of AI Systems remain in flux on both sides of the Atlantic, businesses that utilize AI in their business practices should consider asking themselves the following questions:


See the article here:

US/EU Initiative Spotlights Cooperation, Differing Approaches To Regulation Of Artificial Intelligence Systems - Privacy - Worldwide - Mondaq News...

