
Category Archives: Artificial Intelligence

List of Artificial Intelligence and Machine Learning (AI/ML)-enabled Devices Available on FDA's Website – JD Supra

Posted: October 7, 2021 at 3:47 pm

The U.S. Food and Drug Administration (FDA) now provides a list of Artificial Intelligence and Machine Learning (AI/ML)-Enabled Medical Devices that are legally marketed in the United States. These include devices (1) cleared via 510(k) premarket notifications, (2) authorized pursuant to De Novo requests, and (3) approved via premarket approval applications, or PMAs. FDA explains that the list, developed by FDA's Digital Health Center of Excellence, while not exhaustive or comprehensive, is intended to increase transparency and access to information on these devices that span across medical disciplines. With interest increasing dramatically in this area and with over 300 devices dating back to 1997, this list is a helpful tool to identify the many AI/ML-enabled devices currently legally available. FDA intends to update this list periodically.

The FDA has also announced a virtual public workshop on transparency of Artificial Intelligence/Machine Learning (AI/ML)-enabled medical devices to patients, caregivers, and providers, scheduled for Thursday, October 14.

Read more here:

List of Artificial Intelligence and Machine Learning (AI/ML)-enabled Devices Available on FDA's Website - JD Supra


Artificial Intelligence (AI) and Big Data Analytics (BDA) in Telecomm Industry, 2021 Update – Market Overview, Technology Ecosystem, Telco Use Cases…

Posted: at 3:47 pm

Summary: "Artificial Intelligence (AI) and Big Data Analytics (BDA) in Telecomm Industry, 2021 Update" provides an executive-level overview of the global artificial intelligence (AI) and big data analytics (BDA) market.

New York, Oct. 07, 2021 (GLOBE NEWSWIRE) -- Reportlinker.com announces the release of the report "Artificial Intelligence (AI) and Big Data Analytics (BDA) in Telecomm Industry, 2021 Update - Market Overview, Technology Ecosystem, Telco Use Cases and Monetisation Strategies" - https://www.reportlinker.com/p06169567/?utm_source=GNW It defines AI and BDA and delivers qualitative insights into the AI and BDA industries, value chain, and ecosystem dynamics. It also analyzes key trends in the industry and provides insights into telco AI and BDA monetization strategies.

The telecommunications industry is being transformed by the use of artificial intelligence (AI) and big data analytics (BDA). Telcos are utilizing AI and BDA in their own operations to automate networks and improve customer experience.

In addition, AI and BDA represent new revenue stream opportunities for telcos and technology companies in the way of services such as AI and BDA platform creation, integration, and management.

The report provides an in-depth analysis of the following:
- AI and BDA taxonomy & market context: definitions of AI and BDA and technology enablement. This section also provides an overview of the regulatory environment and ethical considerations.
- The AI and BDA ecosystem: an overview of the AI and BDA value chain and ecosystem players. This section also includes an analysis of telcos' role within the AI and BDA value chains.
- Case studies: this section analyzes the AI and BDA value proposition, business models, and strategies of four telecom operators.
- Key findings and recommendations: the Global Outlook Report concludes with a number of key findings and a set of recommendations for AI and BDA stakeholders, including telecom service providers.

Scope:
- AI and BDA are complementary technologies driving use cases such as Industry 4.0.
- AI and BDA are opening up new revenue stream opportunities for telcos and technology companies.
- Telcos are joining the AI and BDA race to drive digital transformation and personalized services, internal efficiency, and innovation.
- AI and BDA are set to drive telco transformation towards digital service and telco-as-a-service models.
- Telcos must remain compliant with regional AI and BDA regulations as well as establish their own set of principles for the ethical use of AI and BDA.

Reasons to Buy:
- This Global Outlook Report provides definitions of AI and BDA as well as a comprehensive examination of the AI and BDA value chains. It helps executives fully understand the ecosystem, market dynamics, and value chain. It helps telecom decision-makers determine key AI and BDA positioning strategies, formulate effective product development plans, and optimize return on investments.
- Four case studies illustrate the findings of the report, providing insights into different telco AI and BDA value propositions across the world, including services, monetization approaches and partnerships. This will help telecom executives craft adapted AI and BDA strategies to unlock new revenue streams and operational efficiencies.
- The report discusses concrete opportunities in the AI and BDA market, providing a number of actionable recommendations for AI and BDA ecosystem participants, including telecom service providers.

Read the full report: https://www.reportlinker.com/p06169567/?utm_source=GNW

About Reportlinker
ReportLinker is an award-winning market research solution. Reportlinker finds and organizes the latest industry data so you get all the market research you need - instantly, in one place.

See the article here:

Artificial Intelligence (AI) and Big Data Analytics (BDA) in Telecomm Industry, 2021 Update - Market Overview, Technology Ecosystem, Telco Use Cases...


AI Futures: how artificial intelligence is infiltrating the DJ booth – DJ Mag

Posted: at 3:47 pm

Where Endel and apps like it generate hyper-personalised soundscapes with a common goal, and with AI already generating fairly competent music with services like Amper, Boomy and the aforementioned Mubert, it wouldn't be a stretch to foresee a future where streaming platforms adopt this technology to further personalise a listener's experience of the app.

Spotify already features mood playlists, and there's big business around relaxing, therapeutic playlists on the platform. Would there be a time when the same song is heard differently by two different listeners?

"Even more than Spotify, I can see someone like Apple doing this," says Cherie Hu. "They have a whole ecosystem of Apple Music, Fitness, Health, Apple Watch; they could quite easily create something real-time and adaptive."

Music is unlikely to pivot entirely to the purely generative, but a completely new style of listening, generated by AI, functional and highly personalised, is likely to co-exist with our favourite tracks and albums.

"Technologies like this, where music stops being music in a strict sense and becomes more like running water or electricity, as David Bowie once stated, I see it as a future of music, or at least a very pleasant part of it," says Stavitsky.

While Endel isn't competing with traditional releases, the idea of music as electricity, or as content, runs counter to the belief of some artists, who are concerned for their futures if and when the AI is no longer discernible. "People don't know the difference when they listen to an MP3, a WAV, vinyl, whatever," explains Jaymie Silk, a producer and label owner from Canada, now based in Paris.

"People are so trapped in the algorithm bubble, it's like, 'Is what we do [as producers] still useful or not?' If all you want to do is release the content, you want to be noticed, you want to be booked, just want the attention, you already have the tools to do it. It's scary to think, 'How will the audience perceive music in the future? Will they listen to it, is it just noise, is it just an excuse to go to a party? Are we useless as music producers?' I don't know."

For some artists, AI will assist them in getting from A to B; for others, it'll create the whole journey. But Silk hopes quality will always prevail. "A microwave is not really good for your food," he laughs. "But if you need to eat quickly, why not? I think it's the same thing."

Go here to see the original:

AI Futures: how artificial intelligence is infiltrating the DJ booth - DJ Mag


How artificial intelligence is boosting the fight against Covid-19 – The National

Posted: at 3:47 pm

Scientists battling Covid-19 have technological tools that were not available during previous pandemics and among the most significant is artificial intelligence.

This has been used, for example, to design drugs and, in research projects, to predict whether a patient admitted to hospital will require oxygen.

It heralds the likely wider rollout of AI in medicine as the technology becomes increasingly valuable in diagnosing disease, developing prognoses for patients and producing better treatments.

An AI-enabled service allows Dubai residents to book a Covid-19 vaccine appointment quickly and efficiently via WhatsApp, say officials.

Machine learning or AI was used in the early stages of the pandemic by scientists developing vaccines against Covid-19.

It allowed researchers to process vast amounts of data and to identify patterns in them in a way that would not have been possible otherwise.

When designing vaccines, it helped scientists work out which parts of the virus were likely to stimulate an immune response protective against infection.

Similarly, in drug development, it identified which substances could be effective at preventing the coronavirus from replicating.

Among other things, AI has also been useful in determining the prognosis of patients with Covid-19 and in working out how patients might respond to existing drugs.

The potential for AI to highlight drugs that could be used against Covid-19 was illustrated by a study published this year by researchers at the University of Cambridge and other institutions.

They used AI to pinpoint proteins and the biochemical processes that were involved in SARS-CoV-2 infection, which highlighted targets that drugs could act on.

Using AI, the researchers screened nearly 2,000 drugs being used for other conditions and found that about 10 per cent had the potential to be used against the coronavirus.

Of these, 40 were already being tested for their effectiveness against Covid-19 in clinical trials, which, researchers said, indicated that their approach was effective at pinpointing useful drugs.

Numerous studies have shown that AI may become a useful tool for doctors as it can predict, for example, which patients are most likely to deteriorate.

Dr Farah Shamout, an assistant professor of computer engineering at New York University Abu Dhabi, and a team of co-authors released a study in May that looked at this.

Their system, which used the likes of chest X-rays to forecast a patient's condition, was found to offer an accurate prediction of how a patient would fare over the subsequent 96 hours.

This could be helpful to clinicians as it could indicate which patients need priority in hospitals where there may be limited beds and equipment available.


While many studies have used AI during the pandemic, Dr Shamout says the clinical effects may, so far, have been more limited.

"From a research perspective, so many papers came out using AI in relation to Covid-19," she says. "In terms of whether that goes into practice, I'm a bit doubtful. Many of these studies were from small datasets."

With many pieces of research internally validated, meaning checked against data from one institution only, their findings may not be applicable to patients at other institutions.

"Even though the number of papers published was high, I'm not sure if that's proportional to the impact it's had in reality," Dr Shamout says.

If the data used to train algorithms comes from many hospitals, not just a few, the results are likely to have a wider use. However, sharing data from multiple institutions can be difficult because of patient confidentiality rules.

To get around this, researchers in a recent project used federated learning, in which a shared model is trained across many sites and only the resulting model updates are combined; the underlying patient data is never exchanged.
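The idea can be illustrated with a minimal federated-averaging sketch. This is purely illustrative Python (PyTorch is assumed); the hospital data loaders, the model and the single training round are hypothetical stand-ins, not the actual code used in the study described below.

```python
# Minimal federated-averaging sketch: each site trains on its own data and
# shares only model weights; the raw patient records never leave the hospital.
# Purely illustrative; `hospital_loaders` and the model are hypothetical.
import copy
import torch
import torch.nn as nn

def local_update(global_model, data_loader, epochs=1, lr=1e-3):
    """One hospital fine-tunes a private copy of the shared model."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        for x, y in data_loader:          # local imaging features and labels
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model.state_dict()             # only the learned weights are shared

def federated_round(global_model, hospital_loaders):
    """A coordinator averages every site's weights into the shared model."""
    updates = [local_update(global_model, dl) for dl in hospital_loaders]
    averaged = {k: torch.stack([u[k] for u in updates]).mean(dim=0)
                for k in updates[0]}
    global_model.load_state_dict(averaged)
    return global_model
```

Only the weight tensors cross institutional boundaries in each round, which is how confidentiality rules can be respected while still learning from many sites.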

Published in September, this study from Cambridge University, the healthcare technology company NVIDIA, Massachusetts General Hospital and Harvard Medical School used chest X-rays and other patient data from more than 20 hospitals to train an AI system to forecast the oxygen needs of people with Covid-19.

Called EXAM, the study then tested the AI system in hospitals across the globe and found that it was around 90 per cent accurate.

The first author of the recent study, Dr Ittai Dayan, a medical doctor and co-founder and chief executive of Rhino Health, a US healthcare AI company, is developing the system further. Other companies are doing similar work to advance AI-based technology in medicine.

Dr Dayan says by the time the system is perfected and ready to be used by physicians, the pandemic might have ended. But the technology could have wide application in the future.

"We will be able to use it in the next pandemic or any other serious public health risk, and do that much, much faster," he said.

Tools like federated learning are the picks and shovels needed by developers to research better and more adapted AI.

AI is likely to be increasingly used in medical care in general, says Dr Dayan, allowing medicine to become more precise.

"[Some] people ... feel physicians will lose their jobs. I don't think that's realistic," he says. "But physicians will have to become people who can leverage data and leverage AI solutions to be really good at their jobs."

Updated: October 7th 2021, 3:00 AM

See original here:

How artificial intelligence is boosting the fight against Covid-19 - The National


AI for the Bad Guy: How Artificial Intelligence Is Making Video Verification More Effective & Marketable – Security Sales & Integration

Posted: at 3:47 pm

Advances in technology are better tying video in with intrusion alarms to slash false alarms and save valuable law enforcement resources.

The camera, as the saying goes, does not lie. The ability to see video footage of an alarm event in progress has dramatically reduced false alarms in recent years. And, when that video is supported by the capabilities of artificial intelligence (AI), it even further empowers operators and response teams to respond swiftly, and appropriately.

So while it's true that video verification has been enabling better-informed responses for several years now, the application of AI for this purpose has proven a real game-changer. Leveraging video to verify intrusion alarms has been gaining some serious momentum recently, as the technology has improved and become less expensive.

Increasingly, end users are cognizant of its capabilities and also realize that first responders have grown very weary of the wasted resources and dollars lost to false alarm dispatches. Video-verified alarms significantly reduce central station operator processing time.

They literally cut to the chase and enable an immediate dispatch complete with concrete evidence and facts of what is truly happening at a location. Informed decisions are better decisions and make for the best possible security outcome. AI is certainly helping the cause.

Innovations within the Internet of Things (IoT) and AI are enabling video surveillance systems to be much more powerful and proactively predict potential security incidents. Technologies including machine learning, deep learning and artificial neural networks are allowing intelligence based on learned patterns and predictive analytics. The information it can impart to video is impressive.

"The capabilities of AI in video verification are becoming increasingly powerful," says Steve Walker, vice president of monitoring operations U.S. for STANLEY Security. "AI is allowing video to look beyond the presence of humans to identify actual circumstances where human activity is prohibited. For instance, AI can be used to partition a particular space within a secured area to watch for the presence of two individuals rather than just one and raise an alert only in that circumstance. Using AI to recognize certain situations known to increase the probability of a crime brings tremendous value to the monitoring company, and for the customer improves security while optimizing labor costs associated with monitoring."

"AI's ability to accurately identify humans and vehicles reduces false alarms by 90% or more," says Brian Baker, chief revenue officer, Calipsa. This reduction, he says, allows central station operators to focus on real events and enables law enforcement to respond less to non-threatening activities. AI-based software performs an initial review to eliminate virtually all alarm events not containing humans or vehicles. Station operators analyze the reviewed video to ensure the alarm isn't the result of an event such as planned maintenance at the site. Law enforcement officials have confidence in response calls when they know a human has reviewed the video and verified assistance is required. With fewer incoming nuisance calls, officers can respond more quickly to events genuinely requiring their presence.

Along with the addition of AI, video verification technology itself has improved considerably over the past couple of years. Joey Rao-Russell, president and CEO of Kimberlite (the largest independent Sonitrol franchise), points out that "analytics and quality continue to improve while price points reduce. With higher pixel resolutions and better learning software, we can pinpoint specific events such as loitering. We can also use facial recognition and other analytics to determine people, colors, shapes such as cars, with greater accuracy. All of this allows for decreased false alarms and greater ability to monitor efficiently."


The use of video verification has indeed gained momentum. Morgan Hertel, vice president of technology and innovation for Rapid Response Monitoring, states, "There is no doubt that it's starting to accelerate," with other factors in addition to improved tech and cost pushing it along. Video has become more mainstream and the idea of using it to enhance security is more acceptable than five years ago. You can also factor in the costs of false alarms or the increase of areas requiring verified response.

Other improvements he cites include increased Internet speeds and availability; better video compression codecs; better analytics at the edge as well as in the cloud; better integrations with consumer applications and platforms; and more affordable cameras with smaller footprints.

"Video verification technologies are improving in terms of image quality but, more importantly, in the realm of applied analytics. This helps operators identify potential threats within video images/streams," adds Mark McCall, director, global monitoring, STANLEY Security. Analytics, he says, are becoming increasingly effective in identifying human beings even in low light conditions.

Analytics also allows video monitoring to scale more cost-effectively in the monitoring center, something that is critical for maintaining low cost for customers and reasonable profit margins for the monitoring company. Without the reduction in nuisance alarms, staffing for video monitoring becomes much more costly and impacts the ability to remain competitive in the market.


As Hertel explains, "There is a big difference between AI and video analytics. Today, most of the video applications are really an analytic that is purpose-built to detect something. Examples are people vs. weather, tripwires, people counting, face mask adherence, etc. Some of these can be very specific and complicated in nature but are still just designed to track one thing."

AI, on the other hand, "is when you take an analytic like human detection, then start to add in other data elements for the purpose of reaching a decision," Hertel points out. A simple example is a camera and AI seeing something every day, such as a person putting trash in a dumpster. The AI looks at the times, the person's physical makeup, and the type of trash cans, and then that becomes a normal event. However, when either someone else takes out the trash or the trash is being dumped outside the normal timeframes, the AI determines on its own that this is abnormal behavior and alerts someone.
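As a rough illustration of that "learned normal" idea (not any vendor's actual algorithm), a system could model the routine statistically and flag events that fall far outside it; all values below are made up:

```python
# Toy sketch of learning a "normal" routine and flagging deviations.
# All numbers are hypothetical; real systems learn far richer patterns.
from statistics import mean, stdev

# Hours of day when the dumpster camera has historically seen trash taken out.
observed_hours = [21.0, 21.5, 20.75, 21.25, 21.0, 20.5, 21.75]
mu, sigma = mean(observed_hours), stdev(observed_hours)

def is_abnormal(event_hour: float, z_threshold: float = 3.0) -> bool:
    """Flag an event whose timing falls far outside the learned routine."""
    return abs(event_hour - mu) / sigma > z_threshold

print(is_abnormal(21.2))  # within the routine -> False, no alert
print(is_abnormal(3.0))   # a 3 a.m. dump -> True, alert an operator
```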

"AI augments the operator for much faster processing times," adds Rao-Russell, treasurer and immediate past president of the Partnership for Priority Verified Alarm Response (PPVAR). For instance, operators can set the analytics to notice a car in a loading zone for a specified time during off hours.

"Previously, we would have only received a motion alert that required an operator to watch what was happening. We are also using deep learning to filter known false alarms, to allow us to only see what is happening. This also allows the end user to be involved in the process to ensure the fastest response."

From Walker's perspective, demand for video verification services remains highest in areas where traditional security solutions struggle, namely outdoor applications. Video verification enables broad coverage in complex, target-rich outdoor environments such as equipment storage, vehicle storage and outdoor retail environments, where traditional security devices are difficult or impossible to deploy effectively.

Outdoor environments are notorious for false alarms caused by unexpected yet normal activity from humans, animals, moving shadows, flashes of light and weather events.

"Video verification allows for a quick assessment of alarms and is effective at eliminating false requests for police response," he states. "In addition to the natural events, a poorly designed system will add to the high volumes of alarms, all of which unnecessarily consume monitoring center operator labor at rates that drive pricing to unacceptable levels."

As Hertel points out, "Typically, you need to add cameras to adequately cover the facility. These cameras need to be able to integrate with either the intrusion system or a platform that allows you to capture pre-alarm activity through the alarm condition. Most systems are upgradeable to video."

Analytics and AI have the ability to reduce costs for customers even in environments that otherwise generate high levels of false alarms, Walker believes. In addition to traditional video verification solutions, he reports that AI and machine learning are finding use cases for non-video verification as well.

These would be applications that help customers better understand the efficiency of employees (movement over time) as well as patterns for customers, including where they go during business hours, where they linger or dwell, whether they go where they are not supposed to, and the corresponding time of day for each.

In terms of who the prime adopters of AI-based video verification software are, Baker points to central monitoring stations typically working with security dealers or integrators and their roster of small to medium-sized business (SMB) as well as enterprise customers. He also sees interest in the technology growing from government, healthcare, education, car dealerships and property management verticals. "Recently, we've seen utilities and safe cities programs incorporating false alarm reduction software."

It's important to educate end users on the improved peace of mind they'd have with a video-verified alarm system. As Rao-Russell explains, "It is always a partnership, especially with verification. Sonitrol is dedicated to education and continued support including maintaining cameras, training end users, and ensuring compliance with current standards. Without the continued relationship, many verification applications will fail due to poor quality and user-caused alarms."

Rapid Response Monitoring also offers ongoing education processes for its dealers. "We meet with them regularly to educate them on industry trends, new technologies and the advantages of adding things like video to their offerings," Hertel notes.

"STANLEY Security makes sure that consumers and dealers understand how video verified systems function and how the outputs may impact police response downstream," Walker adds. "Proper customer education includes an overview of how better utilization of their video system can reduce/eliminate their needs for guards as well as provide insights that help their day-to-day operations to become more profitable, all of which contribute to solid ROI calculations for the customer."

Upgrading customers to video verification can present increased RMR opportunities to dealers and integrators. As McCall points out, "Video always commands a higher price point in our industry, most importantly because it delivers greater value to the consumer through improved decision making, fewer false alarm fines, and informed police response that potentially leads to a higher probability of arrests. In most cases, existing video systems can be used or expanded. This allows dealers/integrators to easily add video verification RMR without the cost of an expensive video hardware replacement."

Marketing video verification technologies and solutions to dealers and integrators accustomed to selling traditional alarm security systems, however, can be challenging at times, Rao-Russell contends. "Integrators have to buy in first. There is opportunity but they have to be willing to embrace the model, which tends to have higher RMR and more service needs."

New, sophisticated user dashboards, such as Calipsa's False Alarm Filtering Platform, can provide a visual summary of total alarms, including a performance overview of video analytics over selected time periods.

Hertel notes that "adding video does a number of things. Besides the verification piece that will lower false alarms and fines, you also get a much stickier subscriber that will equate to lower attrition and happier customers." And, as Baker details, "Integrators may offer AI-based false alarm reduction to customers at a small cost per camera, or they may mark up the prices some central stations charge. Either way, the software provides RMR. Also, an ability to filter out nuisance alarms gives smaller integrators another tool to compete against large national providers."

A bright future for video verification coupled with AI technologies looks to be clearly on the horizon, benefitting end users, monitoring station operators and law enforcement alike. Rao-Russell delivers a clear message when she says, "As tech improves, our options are limitless. But we have to get behind driving adoption. If you aren't driving forward, you are going backwards."

The Partnership for Priority Verified Alarm Response (PPVAR) was established to promote the value of verification and validation of alarm events during the emergency response process using video, audio, and other emerging technologies along with proven best practices.

The organization is comprised of members from the electronic security, monitoring, public safety and insurance industries to represent all interests in the battle against property crime to provide the most reliable and cost-effective alarm response to the end user.

As stated on its website, "Although PPVAR remains committed to traditional alarm response methods as a deterrent to crime, it considers video and audio verification to be a significant enhancement and one that deserves a higher priority response by all first responders. PPVAR's goal is to collaborate with all members involved in the alarm response process and share best practices, ideas and the information necessary to maximize the effectiveness of all resources required to protect our valued customers' life and property."

Steve Walker, vice president of monitoring operations U.S. for STANLEY Security, says: "PPVAR has an exceptionally well-rounded group of gifted leaders representing public safety. PPVAR has played an active role in the development of the new ANSI standard AVS-01 Alarm Scoring Standard that is focused on delivering actionable information to 911 centers based on defined scoring criteria to enable prioritization of alarms, which is central to the mission of PPVAR."

For more information, visit ppvar.org.

Erin Harrington has 20+ years of editorial, marketing and PR experience within the security industry. Contact her at erinharrington1115@gmail.com.

Here is the original post:

AI for the Bad Guy: How Artificial Intelligence Is Making Video Verification More Effective & Marketable - Security Sales & Integration


What Will It Take for Government AI to Really Take Off? – Government Technology

Posted: at 3:47 pm

While public agencies continue to deploy chatbots and other artificial intelligence tools, confusion about the technology abounds, according to new survey findings from Gartner, and the pandemic has provided little fuel for its growth.

The research agency found that 36 percent of survey respondents plan to increase AI and machine learning (ML) investments this year.

Even so, proponents of AI and ML have significant work to do, said Dean Lacheca, Gartner's public-sector research director, in an email interview with Government Technology.

That's not all that faces backers of the technology.

Gartner found that while 53 percent of government workers who have used AI tools say the tech provides insights to do their job better, only 34 percent of workers unfamiliar with AI said the same.

"The more that government technology leaders start to identify specific and narrow use cases and then link them with the specific, readily adoptable technologies like ML, computer vision and natural language processing, rather than the generic AI terminology, the more likely the government leadership will be to understand the potential of the technology," Lacheca said.

The Gartner findings stem in part from a global survey that attracted 166 responses from all levels of government, with 27 percent coming from state and provincial governments, and another 27 percent from local governments, as well as some respondents from counties. The findings also come from a separate Gartner survey of 258 government employees working for public agencies around the world.

Even though AI and related tools can still seem futuristic, and still have a way to go before they are mainstream in government, some of the technology is accessible to public agencies of various sizes, and not just relatively well-funded federal government units, according to Lacheca.

"The reason why chatbots have taken off in government is that they are pretty much package solutions, with only minimal concerns about data privacy, which can offer a perceived speed to value for government," he said. "So anywhere that an off-the-shelf or easily configurable/trainable AI solution can be stood up rapidly with few data privacy concerns is a likely area of rapid investment by government."

What Gartner called "more specialized" AI tools could also soon find more uses within government, including local agencies.

Such tools include geospatial AI, which, according to Gartner, "uses (AI) methods to produce knowledge through the analysis of spatial data and imagery." Those tools will by their nature have lower rates of uptake than chatbots, but are likely to be deployed in such areas as defense, intelligence, transportation and local government.

The research firm found that while 42 percent of government employees who have yet to work with AI believe the tech helps get work done, only 27 percent of those respondents believe AI has the potential to replace many tasks.

Meanwhile, 31 percent of employees who have used AI view the technology as a job threat.

Even so, 44 percent of respondents who have used artificial intelligence think the technology improves decision-making, with 31 percent saying AI reduces the risks of mistakes.

That said, 11 percent of respondents think AI makes more errors than do people.

"Senior executives in the public sector must address the early apprehension among the government workforce by showing how the technology helps them to get their work done, then continue to build confidence in the technology through exposure, use cases and case studies," Lacheca said.

See the original post here:

What Will It Take for Government AI to Really Take Off? - Government Technology


Artificial intelligence might eventually write this article – The Verge

Posted: September 29, 2021 at 7:02 am

I hope my headline is an overstatement, purely for job purposes, but in this week's Vergecast artificial intelligence episode, we explore the world of large language models and how they might be used to produce AI-generated text in the future. Maybe it'll give writers ideas for the next major franchise series, or write full blog posts, or, at the very least, fill up websites with copy that's too arduous for humans to do.

Among the people we speak to is Nick Walton, the cofounder and CEO of Latitude, which makes AI Dungeon, a game that creates a plot around whatever you put into it. (That's how Walton ended up in a band of traveling goblins; you'll just have to listen to understand how that makes sense!) We also chat with Samanyou Garg, founder of Writesonic, a company that offers various writing tools powered by AI. The company can even have AI write a blog post. I'm shaking! But really.

Anyway, toward the end of the episode, I chat with James Vincent, The Verge's AI and machine learning senior reporter, who calms me down and helps me understand what the future of text-generation AI might be. He's great. Check out the episode above, and make sure you subscribe to the Vergecast feed for one more episode of this AI miniseries, as well as the regular show. See you there!

Excerpt from:

Artificial intelligence might eventually write this article - The Verge


Artificial intelligence could help to predict the next virus to jump from animals to humans – BBC Science Focus Magazine

Posted: at 7:02 am

Reader Q&A: How do viruses jump from animals to humans?

Every animal species hosts unique viruses that have specifically adapted to infect it. Over time, some of these have jumped to humans; these are known as zoonotic viruses.

As our populations grow, we move into wilder areas, which brings us into more frequent contact with animals we don't normally have contact with. Viruses can jump from animals to humans in the same way that they can pass between humans, through close contact with body fluids like mucus, blood, faeces or urine.

Because every virus has evolved to target a particular species, it's rare for a virus to be able to jump to another species. When this does happen, it's by chance, and it usually requires a large amount of contact with the virus.

Initially, the virus is usually not well-suited to the new host and doesn't spread easily. Over time, however, it can evolve in the new host to produce variants that are better adapted.

When viruses jump to a new host, a process called zoonosis, they often cause more severe disease. This is because viruses and their initial hosts have evolved together, and so the species has had time to build up resistance. A new host species, on the other hand, might not have evolved the ability to tackle the virus. For example, when we come into contact with bats and their viruses, we may develop rabies or Ebola virus disease, while the bats themselves are less affected.

It's likely that bats were the original source of three recently emerged coronaviruses: SARS-CoV (2003), MERS-CoV (2012) and SARS-CoV-2, the cause of the 2019-20 coronavirus outbreak. All of these jumped from bats to humans via an intermediate animal; in the case of SARS-CoV-2, this may have been pangolins, but more research is needed.


Read more from the original source:

Artificial intelligence could help to predict the next virus to jump from animals to humans - BBC Science Focus Magazine


Artificial Intelligence in Automotive Claims on Fast Track During Pandemic – Autobody News

Posted: at 7:02 am

The use of artificial intelligence (AI) in our daily lives was predicted in Hollywood movies decades ago and began to come true with Siri, Alexa and smartphones.

According to a white paper released recently by Mitchell International, parent company of NAGS, artificial intelligence use in automotive claims is growing fast as a result of the COVID-19 pandemic, which made a transition to digital essential to decrease the spread of the virus from human to human.

As insurers embrace AI and its ability to improve the claims process, they are devoting a larger portion of their technology budgets to AI-enabled solutions. "In fact, according to one report, 87% of carriers are now spending in excess of $5 million annually on these technologies, which is more than in the banking and retail sectors," Mitchell reported.

Although new to the auto insurance industry, the science behind AI has existed for more than 50 years. AI was conceived in 1956, the same year President Dwight D. Eisenhower authorized construction of the nation's interstate highway system; it is uncertain whether AI pioneer John McCarthy imagined a future in which AI would drive vehicles down Eisenhower's highways.

Interest in AI grew until the 1980s, when scientists moved from hard-coded algorithms to machine learning, which would make it possible for AI to generate predictions based on data and learned experiences.

By 2012, according to Mitchell, deep-learning algorithms powered Google Street View, Apple's Siri and other applications.

With the new wave of deep learning techniques, such as convolutional neural networks, AI has the potential to live up to its promise of mimicking the perception, reasoning, learning and problem solving of the human mind. In this evolution, insurance will shift from...

More:

Artificial Intelligence in Automotive Claims on Fast Track During Pandemic - Autobody News


Deepfakes — a dark side of artificial intelligence — will make fiction nearly indistinguishable from truth – MarketWatch

Posted: at 7:02 am

Artificial intelligence will bring enormous benefits to society and the economy by amplifying our own intelligence and creativity as well as giving us a critical tool for overcoming the challenges that lie ahead.

One of the most important dangers is the emergence of deepfakes: high-quality fabrications that can be put to a variety of malicious uses, with the potential to threaten our security or even undermine the integrity of elections and the democratic process.

In July 2019, the cybersecurity firm Symantec revealed that three unnamed corporations had been bilked out of millions of dollars by criminals using audio deepfakes. In all three cases, the criminals used an AI-generated audio clip of the company CEO's voice to fabricate a phone call ordering financial staff to move money to an illicit bank account.

Because the technology is not yet at the point where it can produce truly high-quality audio, the criminals intentionally inserted background noise (such as traffic) to mask imperfections.

However, the quality of deepfakes is certain to get dramatically better in the coming years, and eventually will likely reach a point where truth is virtually indistinguishable from fiction.

Deepfakes are often powered by an innovation in deep learning known as a generative adversarial network, or GAN. GANs deploy two competing neural networks in a kind of game that relentlessly drives the system to produce ever higher quality simulated media.

For example, a GAN designed to produce fake photographs would include two integrated deep neural networks. The first network, called the generator, produces fabricated images. The second network, called the discriminator, is trained on a dataset consisting of real photographs and learns to judge whether an image it is shown is genuine or fabricated; feedback between the two networks drives the generator to produce ever more convincing fakes.
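In code, the adversarial game reduces to two models trained against each other. The sketch below is a deliberately tiny, hypothetical example in Python with PyTorch; `real_loader`, the layer sizes and the flattened images are illustrative assumptions, not a production deepfake system.

```python
# Minimal GAN training loop: the generator fabricates images, the
# discriminator scores real vs. fake, and each network's loss pushes the
# other to improve. Illustrative only; `real_loader` is assumed to yield
# batches of real images flattened to 784 values in [-1, 1].
import torch
import torch.nn as nn

latent_dim = 64
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, 784), nn.Tanh())           # generator
D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))                         # discriminator

loss_fn = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

for real in real_loader:                                     # real photographs
    n = real.size(0)
    ones, zeros = torch.ones(n, 1), torch.zeros(n, 1)

    # 1) Teach the discriminator to tell real photos from current fakes.
    fake = G(torch.randn(n, latent_dim)).detach()
    d_loss = loss_fn(D(real), ones) + loss_fn(D(fake), zeros)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Teach the generator to fool the updated discriminator.
    fake = G(torch.randn(n, latent_dim))
    g_loss = loss_fn(D(fake), ones)     # generator "wins" when fakes look real
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```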

This technique produces astonishingly impressive fabricated images. Search the internet for "GAN fake faces" and you'll find numerous examples of high-resolution images that portray nonexistent individuals.

Generative adversarial networks also can be deployed in many positive ways. For example, images created with a GAN might be used to train the deep neural networks used in self-driving cars or to use synthetic non-white faces to train facial recognition systems as a way of overcoming racial bias.

GANs also can provide people who have lost the ability to speak with a computer-generated replacement that sounds like their own voice. The late Stephen Hawking, who lost his voice to the neurodegenerative disease ALS, or Lou Gehrig's disease, famously spoke in a distinctive computer-synthesized voice. More recently, ALS patients like the NFL player Tim Shaw have had their natural voices restored by training deep learning systems on recordings made before the illness struck.

However, the potential for malicious use of the technology is inescapable and, as evidence already suggests for many tech-savvy individuals, irresistible.

An especially common deepfake technique enables the digital transfer of one person's face to a real video of another person. According to the startup company Sensity (formerly Deeptrace), which offers tools for detecting deepfakes, there were at least 15,000 deepfake fabrications posted online in 2019, an 84% increase over the prior year. Of these, 96% involved pornographic images or videos in which the face of a celebrity, nearly always a woman, is transplanted onto the body of a pornographic actor.

While celebrities like Taylor Swift and Scarlett Johansson have been the primary targets, this kind of digital abuse could eventually be used against anyone, especially as the technology advances and the tools for making deepfakes become more available and easier to use.

A sufficiently credible deepfake could quite literally shift the arc of history, and the means to create such fabrications might soon be in the hands of political operatives, foreign governments or just mischievous teenagers.

Beyond videos or sound clips intended to attack or disrupt, there will be endless illicit opportunities for those who simply want to profit. Criminals will be eager to employ the technology for everything from financial and insurance fraud to stock-market manipulation. A video of a corporate CEO making a false statement, or perhaps engaging in erratic behavior, would likely cause the company's stock to plunge.

Deepfakes will also throw a wrench into the legal system. Fabricated media could be entered as evidence, and judges and juries may eventually live in a world where it is difficult, or perhaps impossible, to know whether what they see before their eyes is really true.

To be sure, there are smart people working on solutions. Sensity, for example, markets software that it claims can detect most deepfakes. However, as the technology advances, there will inevitably be an arms race, not unlike the one between those who create new computer viruses and the companies that sell software to protect against them, in which malicious actors will likely always have at least a small advantage.

Ian Goodfellow, who invented GANs and has devoted much of his career to studying security issues within machine learning, says he doesn't think we will be able to know if an image is real or fake simply by looking at the pixels. Instead, we'll eventually have to rely on authentication mechanisms like cybernetic signatures for photos and videos.

Perhaps someday every camera and mobile phone will inject a digital signature into every piece of media it records. One startup company, Truepic, already offers an app that does this. The company's customers include major insurance companies that rely on photographs from their customers to document the value of everything from buildings to jewelry.
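A capture-time signature scheme of that kind can be sketched in a few lines. The example below is a simplified illustration using the Python cryptography library and an Ed25519 key; real provenance systems (including whatever Truepic actually ships) involve secure key storage, metadata and certificate chains that are omitted here.

```python
# Sketch of signing a photo at capture time and verifying it later.
# Simplified illustration only; real provenance schemes are far more involved.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

camera_key = Ed25519PrivateKey.generate()   # in practice, burned into the device
public_key = camera_key.public_key()        # published so anyone can verify

def sign_capture(image_bytes: bytes) -> bytes:
    """The camera signs the raw image the moment it is recorded."""
    return camera_key.sign(image_bytes)

def is_authentic(image_bytes: bytes, signature: bytes) -> bool:
    """Later, anyone can check the bytes are unmodified since capture."""
    try:
        public_key.verify(signature, image_bytes)
        return True
    except InvalidSignature:
        return False

photo = b"...raw sensor data..."
sig = sign_capture(photo)
print(is_authentic(photo, sig))               # True
print(is_authentic(photo + b"edited", sig))   # False: tampering detected
```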

Still, Goodfellow thinks that ultimately there's probably not going to be a foolproof technological solution to the deepfake problem. Instead, we will have to navigate within a new and unprecedented reality where what we see and what we hear can always potentially be an illusion.

The upshot is that increased availability and reliance on artificial intelligence will come coupled with systemic security risk. This includes threats to critical infrastructure and systems as well as to the social order, our economy and our democratic institutions.

Security risks are, I would argue, the single most important near-term danger associated with the rise of artificial intelligence. It is critical that we form an effective coalition between government and the commercial sector to develop appropriate regulations and safeguards before critical vulnerabilities are introduced.

Martin Ford is the author of "Rule of the Robots: How Artificial Intelligence Will Transform Everything," from which this is adapted.

Read the original post:

Deepfakes -- a dark side of artificial intelligence -- will make fiction nearly indistinguishable from truth - MarketWatch

