
Category Archives: Ai

What is self-learning AI and how does it tackle ransomware? – The Register

Posted: October 21, 2021 at 10:29 pm

Sponsored: There used to be two certainties in life - death and taxes - but thanks to online crooks around the world, there's a third: ransomware. This attack mechanism continues to gain traction because of its phenomenal success. Despite admonishments from governments, victims continue to pay up using low-friction cryptocurrency channels, emboldening criminal groups even further.

Darktrace, the AI-powered security company that went public this spring, aims to stop the spread of ransomware by preventing its customers from becoming victims at all. To do that, they need a defence mechanism that operates at machine speed, explains its director of threat hunting Max Heinemeyer.

According to Darktrace's 2021 Ransomware Threat Report [PDF], ransomware attacks are on the rise. It warns that businesses will experience these attacks every 11 seconds in 2021, up from every 40 seconds in 2016.

"Since 2017, ransomware has exploded," he explains, adding that the rise of cryptocurrency has been a big contributing factor. "Cryptocurrencies have become mainstream and much more accessible, making it much easier for ransomware actors to cash out on ransoms."

Criminal groups have piled in to take advantage. While high-profile attacks on large companies like Colonial and JBS Meats might capture the public's imagination, they're just the tip of the iceberg. Many don't see the thousands of lower-profile attacks that target smaller organisations. Ransomware recovery company Coveware reports that the median number of employees among ransomware victims stood at 200 in Q2 2021, and has actually dropped since the end of 2020.

The threat actors are also more diverse than people think, warns Heinemeyer.

"Attacks sometimes come from sophisticated groups like REvil or BlackMatter that we see in the news, but they're often from unknown groups that don't declare themselves," he says. These are just opportunistic ransomware actors."

The diversity of groups makes it difficult to spot clear attack trends anymore, he adds. Techniques vary between groups that often switch tools over time. Their targets are also diverse.

As an example, the ransomware group FIN12 spent the pandemic attacking healthcare organisations, bucking a trend that saw other ransomware groups swear off these vulnerable targets. It also switched from using TrickBot as a post-breach exploitation tool to other software including Cobalt Strike Beacon.

Monetisation tactics have also evolved, Heinemeyer warns. "They've professionalised tremendously across the board. If encryption of data is not enough to extort money, they use a double threat, exfiltrating data beforehand to apply a second point of pressure," he says.

If that is not enough, some are starting to apply distributed denial of service (DDoS) as a third pressure point to extort money. And some of the ransomware actors have spoken about trying to innovate with newer ways of extorting money by doxing their targets.

Some groups spend lots of time in their targets' networks exfiltrating data to squeeze the maximum revenue from victims. Others, like FIN12, opt for high-velocity attacks, just encrypting data but hitting multiple targets quickly.

This range of tactics, techniques, and procedures (TTPs) makes ransomware unpredictable. Darktrace believes the problem is so bad that it is no longer possible to manage at human scale. The novelty and speed of modern ransomware requires an AI approach, it says.

"Unfortunately most companies are still not very good at defending themselves in 2021," Heinemeyer says. "Even if you're a major company with all the budget in the world, it might not be enough to defend against ransomware actors."

A more complex ransomware landscape isn't the only problem for defenders. The other issue is complexity in IT, thanks to the dissolution of the network perimeter. With assets now located in the cloud and in remote offices and homes, the traditional ring of iron that used to define the network's edge is becoming less relevant. Instead, companies must protect everything, everywhere.

The other problem is a lack of resources. Attackers often hit out of hours or just before a major holiday, as was the case with the ransomware attack on remote monitoring service provider Kaseya. The attack, by the REvil group, surfaced on July 2, just before the July 4 long weekend when many people would have been away.

It's difficult enough to respond to ransomware quickly, and even more so when you're running your security operations centre (SOC) on a skeleton crew. Wait - you do have a SOC, right? Colin in IT isn't handling this all on his own?

These weaknesses in human defences are a primary reason for the introduction of AI into cybersecurity defences. Darktrace fights ransomware using what it calls 'Self-Learning AI'.

The company likens its Antigena AI product to a digital immune system, which works like the human body. Like the antibodies in your bloodstream, it recognises what's normal, and works constantly to maintain that state. To do that, it detects behaviour on your network that deviates from a normal baseline and addresses it.

Heinemeyer explains why this is useful in a ransomware scenario. With an attack landscape that is so chaotic, fast-moving, and volatile, it's difficult to rely just on known software signatures and network traffic patterns to spot likely attacks. Similarly, responding to these static patterns with predefined rules is ineffective because it doesn't address new, evolving TTPs. He says that the company's AI enables companies to spot novel, never-before-seen strains of ransomware.
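Darktrace does not publish its detection algorithms, but the general idea of learning a per-device baseline and flagging deviations can be illustrated with a generic anomaly detector. The sketch below is a minimal illustration using scikit-learn's IsolationForest on hypothetical per-device traffic features; it is not Darktrace's method, and the feature names and thresholds are assumptions.

```python
# Minimal sketch of baseline anomaly detection on network telemetry.
# This is NOT Darktrace's algorithm; feature names and values are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-device features sampled every minute:
# [bytes_out, distinct_internal_peers, smb_connections, failed_logins]
rng = np.random.default_rng(0)
normal_traffic = rng.normal(loc=[5e5, 3, 1, 0], scale=[1e5, 1, 1, 0.5], size=(10_000, 4))

# Learn what "normal" looks like for this environment.
model = IsolationForest(contamination=0.001, random_state=0).fit(normal_traffic)

# A device that suddenly fans out over SMB to hundreds of internal peers.
suspect = np.array([[4e5, 250, 300, 2]])
if model.predict(suspect)[0] == -1:
    print("anomaly: device deviates from its learned baseline, escalate response")
```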

So, what does it look like in practice? How does AI interrupt a ransomware attack in the real world?

"The only thing you can do to stop ransomware actors from being successful is detect them early when they're trying to get a foothold," he says. Ideally, this happens before the ransomware infection happens. Darktrace scans emails - one of the most popular delivery channels for ransomware - to detect abnormal patterns.

If companies opted not to use Darktrace for pre-infection detection, then the next-best approach is to detect existing compromise as quickly as possible. The product will pick up unusual communications that normally occur when a compromised endpoint beacons for other computers to infect.

This is what happened when ransomware attackers targeted a Darktrace client in the electronics manufacturing industry. The customer was not using the product to detect the initial stages of the attack, but Antigena still spotted the infected device beaconing abnormally over SMB. That meant an encryption attack was in progress.

In this case, the company had chosen to activate Darktrace's autonomous response capability. Heinemeyer distinguishes this from automated responses, which rely on predefined actions and are always based on human input.

"The autonomous response will take action by integrating with existing controls like firewalls or network access controls, or using methods native to Darktrace, or take an EDR-related action," he says. "But the logic behind the action, the decision making about what action to take, all comes from Darktrace."

Those AI-powered decisions focus on restoring normality to the system. They will escalate over time based on behaviour that the software sees, all the way up to quarantining a device. This allows the software to take appropriately aggressive action at machine speed without affecting the user experience any more than necessary, he adds.

In the electronics manufacturer's case, Antigena immediately blocked anomalous connections from the infected device, stopping it from encrypting most of the files on the network. It then quarantined the rogue device for 24 hours, containing the attack and giving the security team the chance to take further action.

Unfortunately, cybersecurity is a game of constant catch-up. If defenders are using automation, then you can be sure that attackers will follow. Those attacks might begin with automated rules-based attacks but are likely to expand into AI-based attacks as sophisticated attack groups gain those capabilities. That could include everything from using AI to write more effective phishing emails for ransomware delivery, through to supervised learning algorithms to identify defence mechanisms on a network and route around them.

"Is it the most pressing priority for cyber defenders right now? Probably not. But it is something that they should think about because it will be a paradigm shift in the future," warns Heinemeyer. "Once attackers start to embrace even more automation than they do already, there's almost no way around using defensive AI."

Ransomware will eventually give way to some new form of cybercrime that attackers haven't thought of yet, but there's plenty of life in this criminal model yet. Companies are not prepared for it, and attackers are constantly innovating. Heinemeyer hopes that more companies will explore Darktrace's AI capabilities, ideally before attackers come calling rather than afterwards.

This article is sponsored by Darktrace.

See the original post:

What is self-learning AI and how does it tackle ransomware? - The Register

Posted in Ai | Comments Off on What is self-learning AI and how does it tackle ransomware? – The Register

Smokey the AI – IEEE Spectrum

Posted: at 10:29 pm

The 2020 fire season in the United States was the worst in at least 70 years, with some 4 million hectares burned on the west coast alone. These West Coast fires killed at least 37 people, destroyed hundreds of structures, caused nearly US $20 billion in damage, and filled the air with smoke that threatened the health of millions of people. And this was on top of a 2018 fire season that burned more than 700,000 hectares of land in California, and a 2019-to-2020 wildfire season in Australia that torched nearly 18 million hectares.

While some of these fires started from human carelessness or arson, far too many were sparked and spread by the electrical power infrastructure and power lines. The California Department of Forestry and Fire Protection (Cal Fire) calculates that nearly 100,000 burned hectares of those 2018 California fires were the fault of the electric power infrastructure, including the devastating Camp Fire, which wiped out most of the town of Paradise. And in July of this year, Pacific Gas & Electric indicated that blown fuses on one of its utility poles may have sparked the Dixie Fire, which burned nearly 400,000 hectares.

Until these recent disasters, most people, even those living in vulnerable areas, didn't give much thought to the fire risk from the electrical infrastructure. Power companies trim trees and inspect lines on a regular, if not particularly frequent, basis.

However, the frequency of these inspections has changed little over the years, even though climate change is causing drier and hotter weather conditions that lead to more intense wildfires. In addition, many key electrical components are beyond their shelf lives, including insulators, transformers, arrestors, and splices that are more than 40 years old. Many transmission towers, most built for a 40-year lifespan, are entering their final decade.

The way the inspections are done has changed little as well.

Historically, checking the condition of electrical infrastructure has been the responsibility of men walking the line. When they're lucky and there's an access road, line workers use bucket trucks. But when electrical structures are in a backyard easement, on the side of a mountain, or otherwise out of reach for a mechanical lift, line workers still must belt-up their tools and start climbing. In remote areas, helicopters carry inspectors with cameras with optical zooms that let them inspect power lines from a distance. These long-range inspections can cover more ground but can't really replace a closer look.

Recently, power utilities have started using drones to capture more information more frequently about their power lines and infrastructure. In addition to zoom lenses, some are adding thermal sensors and lidar onto the drones.

Thermal sensors pick up excess heat from electrical components like insulators, conductors, and transformers. If ignored, these electrical components can spark or, even worse, explode. Lidar can help with vegetation management, scanning the area around a line and gathering data that software later uses to create a 3-D model of the area. The model allows power system managers to determine the exact distance of vegetation from power lines. That's important because when tree branches come too close to power lines they can cause shorting or catch a spark from other malfunctioning electrical components.
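To make the clearance calculation concrete, here is a toy sketch of the geometry involved: given lidar points classified as vegetation and a conductor span approximated as a straight segment between two towers, it reports points closer than a hypothetical clearance threshold. Real systems work from full 3-D models and account for conductor sag; every coordinate and threshold below is made up for illustration.

```python
# Toy clearance check between vegetation lidar points and a power line span.
# The span is approximated as a straight segment between two tower attachment
# points; real systems model conductor sag. All values are hypothetical.
import numpy as np

def point_to_segment_distance(points, a, b):
    """Distance from each 3-D point to the segment a-b."""
    ab = b - a
    t = np.clip((points - a) @ ab / (ab @ ab), 0.0, 1.0)
    closest = a + t[:, None] * ab
    return np.linalg.norm(points - closest, axis=1)

tower_a = np.array([0.0, 0.0, 20.0])          # metres
tower_b = np.array([250.0, 0.0, 22.0])
vegetation = np.array([[120.0, 1.0, 19.0],    # lidar returns classified as vegetation
                       [60.0, 8.0, 10.0]])

clearances = point_to_segment_distance(vegetation, tower_a, tower_b)
MIN_CLEARANCE_M = 4.0                          # hypothetical utility threshold
for p, d in zip(vegetation, clearances):
    if d < MIN_CLEARANCE_M:
        print(f"encroachment at {p}: {d:.1f} m from conductor")
```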

AI-based algorithms can spot areas in which vegetation encroaches on power lines, processing tens of thousands of aerial images in days. Buzz Solutions

Bringing any technology into the mix that allows more frequent and better inspections is good news. And it means that, using state-of-the-art as well as traditional monitoring tools, major utilities are now capturing more than a million images of their grid infrastructure and the environment around it every year.


Now for the bad news. When all this visual data comes back to the utility data centers, field technicians, engineers, and linemen spend months analyzing it, as much as six to eight months per inspection cycle. That takes them away from their jobs of doing maintenance in the field. And it's just too long: By the time it's analyzed, the data is outdated.

It's time for AI to step in. And it has begun to do so. AI and machine learning have begun to be deployed to detect faults and breakages in power lines.

Multiple power utilities, including Xcel Energy and Florida Power and Light, are testing AI to detect problems with electrical components on both high- and low-voltage power lines. These power utilities are ramping up their drone inspection programs to increase the amount of data they collect (optical, thermal, and lidar), with the expectation that AI can make this data more immediately useful.

My organization, Buzz Solutions, is one of the companies providing these kinds of AI tools for the power industry today. But we want to do more than detect problems that have already occurred; we want to predict them before they happen. Imagine what a power company could do if it knew the location of equipment heading towards failure, allowing crews to get in and take preemptive maintenance measures, before a spark creates the next massive wildfire.

It's time to ask if an AI can be the modern version of the old Smokey Bear mascot of the United States Forest Service: preventing wildfires before they happen.

Damage to power line equipment due to overheating, corrosion, or other issues can spark a fire. Buzz Solutions

We started to build our systems using data gathered by government agencies, nonprofits like the Electrical Power Research Institute (EPRI), power utilities, and aerial inspection service providers that offer helicopter and drone surveillance for hire. Put together, this data set comprises thousands of images of electrical components on power lines, including insulators, conductors, connectors, hardware, poles, and towers. It also includes collections of images of damaged components, like broken insulators, corroded connectors, damaged conductors, rusted hardware structures, and cracked poles.

We worked with EPRI and power utilities to create guidelines and a taxonomy for labeling the image data. For instance, what exactly does a broken insulator or corroded connector look like? What does a good insulator look like?

We then had to unify the disparate data, the images taken from the air and from the ground using different kinds of camera sensors operating at different angles and resolutions and taken under a variety of lighting conditions. We increased the contrast and brightness of some images to try to bring them into a cohesive range, we standardized image resolutions, and we created sets of images of the same object taken from different angles. We also had to tune our algorithms to focus on the object of interest in each image, like an insulator, rather than consider the entire image. We used machine learning algorithms running on an artificial neural network for most of these adjustments.
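Buzz Solutions has not published its preprocessing code, but the normalization steps described above can be sketched roughly as follows. The contrast-equalization method (CLAHE), the target resolution, and all parameters are illustrative assumptions, not the production values.

```python
# Minimal preprocessing sketch: bring heterogeneous inspection photos into a
# common brightness/contrast range and a standard resolution before training.
# Parameters are hypothetical, not Buzz Solutions' production values.
import cv2
import numpy as np

TARGET_SIZE = (1024, 1024)

def normalize_inspection_image(path: str) -> np.ndarray:
    img = cv2.imread(path)  # BGR image from a drone, helicopter, or ground camera
    # Spread the luminance histogram so dim and washed-out shots become comparable.
    lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    l = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(l)
    img = cv2.cvtColor(cv2.merge((l, a, b)), cv2.COLOR_LAB2BGR)
    # Standardize resolution so every sample has the same pixel dimensions.
    return cv2.resize(img, TARGET_SIZE, interpolation=cv2.INTER_AREA)
```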

Today, our AI algorithms can recognize damage or faults involving insulators, connectors, dampers, poles, crossarms, and other structures, and highlight the problem areas for in-person maintenance. For instance, it can detect what we call flashed-over insulators: damage due to overheating caused by excessive electrical discharge. It can also spot the fraying of conductors (something also caused by overheated lines), corroded connectors, damage to wooden poles and crossarms, and many more issues.

Developing algorithms for analyzing power system equipment required determining what exactly damaged components look like from a variety of angles under disparate lighting conditions. Here, the software flags problems with equipment used to reduce vibration caused by winds. Buzz Solutions

But one of the most important issues, especially in California, is for our AI to recognize where and when vegetation is growing too close to high-voltage power lines, particularly in combination with faulty components, a dangerous combination in fire country.

Today, our system can go through tens of thousands of images and spot issues in a matter of hours and days, compared with months for manual analysis. This is a huge help for utilities trying to maintain the power infrastructure.

But AI isn't just good for analyzing images. It can predict the future by looking at patterns in data over time. AI already does that to predict weather conditions, the growth of companies, and the likelihood of onset of diseases, to name just a few examples.

We believe that AI will be able to provide similar predictive tools for power utilities, anticipating faults, and flagging areas where these faults could potentially cause wildfires. We are developing a system to do so in cooperation with industry and utility partners.

We are using historical data from power line inspections combined with historical weather conditions for the relevant region and feeding it to our machine learning systems. We are asking our machine learning systems to find patterns relating to broken or damaged components, healthy components, and overgrown vegetation around lines, along with the weather conditions related to all of these, and to use the patterns to predict the future health of the power line or electrical components and vegetation growth around them.
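As a rough sketch of the kind of tabular model this describes, the snippet below trains a classifier on hypothetical per-component inspection and weather features to score future failure risk. The file name, column names, and the choice of gradient boosting are assumptions for illustration, not the actual Buzz Solutions pipeline.

```python
# Schematic sketch of fault prediction from inspection history plus weather.
# Feature names and the gradient-boosting choice are illustrative assumptions.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# One row per component per inspection cycle (hypothetical columns).
df = pd.read_csv("inspection_history.csv")
features = ["component_age_years", "prior_defect_count", "vegetation_clearance_m",
            "avg_summer_temp_c", "days_since_rain", "peak_wind_kph"]
X, y = df[features], df["failed_within_6_months"]   # label taken from later inspections

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# Rank components by predicted failure risk for the next maintenance window.
df["risk"] = model.predict_proba(X)[:, 1]
print(df.sort_values("risk", ascending=False).head(10)[["risk"] + features])
```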

Buzz Solutions' PowerAI software analyzes images of the power infrastructure to spot current problems and predict future ones.

Right now, our algorithms can predict six months into the future that, for example, there is a likelihood of five insulators getting damaged in a specific area, along with a high likelihood of vegetation overgrowth near the line at that time, that combined create a fire risk.

We are now using this predictive fault detection system in pilot programs with several major utilitiesone in New York, one in the New England region, and one in Canada. Since we began our pilots in December of 2019, we have analyzed about 3,500 electrical towers. We detected, among some 19,000 healthy electrical components, 5,500 faulty ones that could have led to power outages or sparking. (We do not have data on repairs or replacements made.)

Where do we go from here? To move beyond these pilots and deploy predictive AI more widely, we will need a huge amount of data, collected over time and across various geographies. This requires working with multiple power companies, collaborating with their inspection, maintenance, and vegetation management teams. Major power utilities in the United States have the budgets and the resources to collect data at such a massive scale with drone and aviation-based inspection programs. But smaller utilities are also becoming able to collect more data as the cost of drones drops. Making tools like ours broadly useful will require collaboration between the big and the small utilities, as well as the drone and sensor technology providers.

Fast forward to October 2025. It's not hard to imagine the western U.S. facing another hot, dry, and extremely dangerous fire season, during which a small spark could lead to a giant disaster. People who live in fire country are taking care to avoid any activity that could start a fire. But these days, they are far less worried about the risks from their electric grid, because, months ago, utility workers came through, repairing and replacing faulty insulators, transformers, and other electrical components and trimming back trees, even those that had yet to reach power lines. Some asked the workers why all the activity. "Oh," they were told, "our AI systems suggest that this transformer, right next to this tree, might spark in the fall, and we don't want that to happen."

Indeed, we certainly don't.

Go here to see the original:

Smokey the AI - IEEE Spectrum

Posted in Ai | Comments Off on Smokey the AI – IEEE Spectrum

AI is making waves in the rent, property owners define the accurate rent price fluctuations with variations in facilities and specifications – WCTV

Posted: at 10:29 pm

The AI market price assessment feature has been updated on RENOSY OWNER's MYPAGE

Published: Oct. 21, 2021 at 12:00 PM EDT|Updated: 10 hours ago

TOKYO, Oct. 21, 2021 /PRNewswire/ -- Several features of RENOSY OWNER's MYPAGE have been updated, including the AI market price assessment feature. Rent market prices that fluctuate with variations in facilities, as well as market price assessment trends, are now available as graphs on RENOSY OWNER's MYPAGE. The RENOSY service is a comprehensive one-stop real estate transaction platform provided by GA technologies Co., Ltd. (Headquarters: Minato, Tokyo / CEO: Ryo Higuchi / Securities Code: 3491; hereinafter referred to as "the Company").

[Key Highlights]

Background: The AI price assessment feature provided by RENOSY is targeted at property owners and available exclusively on OWNER's MYPAGE.([1]) Users can check assessed sale and rent prices, as well as past monthly price fluctuation trends, by entering the name of the building (property), floor plan, etc.

In addition, a feature for "setting the influencing factors on rent price" based on variations in facilities and specifications has been added. Users can also check past monthly assessment trends for selling and rent prices, shown as graphs on OWNER's MYPAGE.

Background on the update: "setting the influencing factors on rent price". The assessment suggested by AI is based on information provided by property owners plus data gathered and analyzed by the Company's own AI technology. However, the actual price may vary depending on the condition, facilities, and specifications of the property, such as "initial cost", "with/without parking lot", "pets allowed / not allowed", etc.

These factors that could influence the rent price are important points of reference for owners to consider when putting a property on the market for lease, which is why this function was added to the system with the update.

1. "Setting the influencing factors about rent price"Users can use this feature to check the estimated rent price with different options of facilities and specifications selected.

Features offered to our users.
[Features]

The assessment and the trend of price change based on different selections are also available in the following graphs.

[Rent assessment] The formula for the rent price assessment is as follows: Assessment = basic rent (standard rent calculated by AI) + individual option(s) selected (features that could influence the rent price). The rent price will vary based on the factors selected by the owner. (Example) Assessed price 290,400 JPY = basic rent 282,000 JPY + individual options (factors) selected 8,400 JPY ("corner room" is selected).
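To make the arithmetic explicit, here is the formula as a small script. Only the basic rent and the 8,400 JPY corner-room premium come from the example above; the other option premiums are hypothetical placeholders.

```python
# The assessment formula described above, made explicit.
# Only the "corner_room" premium (8,400 JPY) comes from the example in the
# release; the other premiums are hypothetical illustrations.
BASIC_RENT_JPY = 282_000          # standard rent calculated by the AI

OPTION_PREMIUMS_JPY = {
    "corner_room": 8_400,
    "parking_lot": 15_000,        # hypothetical
    "pets_allowed": 5_000,        # hypothetical
}

def assessed_rent(basic_rent: int, selected_options: list[str]) -> int:
    return basic_rent + sum(OPTION_PREMIUMS_JPY[o] for o in selected_options)

print(assessed_rent(BASIC_RENT_JPY, ["corner_room"]))   # 290,400 JPY, matching the example
```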

[Graph of the changing assessments] The prices shown in the graph (the past 6 months) include the option-factor prices.

2. "Trend of past assessments"Originally, only the assessed selling price, rent price of the first month are showed when the owner registered the property the very first time onto the App. With the updates of the system, owners could check about the latest monthly assessments automatically for up to 6 months as the maximum.Regarding this update, all users can check the trend of past assessments about their properties up to 6 months with a graph regardless of the date of registration of the property.

RENOSY is a comprehensive one-stop real estate platform provided by GA technologies under the concept of "making house hunting and asset management easier". With the business vision of "inspiring the world with the power of technology and innovation", the purpose of RENOSY is to make everything about real estate easier for customers: whether you want to "rent", "buy", "sell", "lease", "renovate" or "invest" in a property, you can get what you want and need all in one place. Currently, there are about 220,000 registered members, and more than 150,000 existing properties in central Tokyo available on the website. GA technologies is working to help accelerate the digital transformation of the industry and to provide a better customer experience both online and offline.

[1] Number of RENOSY members: as of July 2021 / Number of properties available on the website: as of October 2020

Company: GA technologies Co., Ltd.
Representative: Ryo Higuchi
URL: https://www.ga-tech.co.jp/en/
Head office: 40F Sumitomo Fudosan Roppongi Grand Tower, Roppongi 3-2-1, Minato District, Tokyo
Year of founding: March 2013
Capital fund: 7,219,146,516 JPY (as of September 2021)
What we do:

Sub companies: ITANDI Co.,Ltd, Modern Standard Co., Ltd, Shenjumiausuan Co.,Ltd and 8 other companies

For this release, please contact: Nami (+81-90-1503-9158), Judy, GA technologies Co., Ltd. MAIL: pr@ga-tech.co.jp

View original content to download multimedia:

SOURCE GA technologies Co., Ltd.

The above press release was provided courtesy of PRNewswire. The views, opinions and statements in the press release are not endorsed by Gray Media Group nor do they necessarily state or reflect those of Gray Media Group, Inc.

Read more:

AI is making waves in the rent, property owners define the accurate rent price fluctuations with variations in facilities and specifications - WCTV

Posted in Ai | Comments Off on AI is making waves in the rent, property owners define the accurate rent price fluctuations with variations in facilities and specifications – WCTV

Improve patients’ outcomes with AI-powered remote monitoring devices and increase revenue for your practice – MedCity News

Posted: at 10:29 pm

Do you need help to continuously monitor your Medicare patients' health? Are you interested in improving patient care, generating additional revenue, and keeping your office running efficiently?

Meet the only remote patient monitoring solution to utilize artificial intelligence (AI) to attain dramatically higher patient adherence and clinically meaningful outcomes. 100Plus remote patient monitoring (RPM) provides measurable improvements in patient health outcomes across treated conditions, including hypertension, obesity, and diabetes. Mintu Turakhia, M.D., M.A.S., a cardiac electrophysiologist, outcomes researcher, clinical trialist, and consulting head of medical at 100Plus will present powerful RPM outcomes.

A webinar from MedCity News, sponsored by 100Plus and scheduled for November 9 at 12pm ET, will explore these topics as well as your questions.

In addition, you'll learn how to:

(Not able to attend? We recommend you still register; you will receive an email with instructions on how to access the recording of the event.)

Webinar Panel:

Mike Wurm, director of product and strategy, 100Plus

Daniel Gasparini, director of sales, 100Plus

Mintu Turakhia, M.D. M.A.S., consulting head of medical, 100Plus

Nadia Ziyadeh-Hammad, LVN, RPM coordinator, Greenville Healthcare Associates

Stephanie Baum, director of special projects, MedCity News, webinar host

To register for this webinar, fill in the form below:

About 100Plus

100Plus is the fastest-growing RPM platform empowering doctors to manage their chronic patients remotely, proactively engage them to avoid expensive, episodic care, and drive a higher quality of life. The 100Plus medical devices are included as part of the remote patient monitoring service, which incurs no out-of-pocket cost for the majority of patients. When a patient receives a 100Plus medical device, it's fully configured and ready to use out of the box: no smartphone, app, Bluetooth, WiFi, or cellular plan required. The program can reduce episodic care and improve patient care by continuously monitoring high-risk senior patients.

Photo: Maria Symchych-Navrotska, Getty Images

Original post:

Improve patients' outcomes with AI-powered remote monitoring devices and increase revenue for your practice - MedCity News

Posted in Ai | Comments Off on Improve patients’ outcomes with AI-powered remote monitoring devices and increase revenue for your practice – MedCity News

Implants, AI and the No-cash Generation Will Payments in 2030 Be a Dystopia or a Utopia? – Yahoo Finance

Posted: at 10:29 pm

New industry study explores the future of payments, from embedding ethics into transactions to whether machine learning will make financial education redundant

LONDON, October 21, 2021--(BUSINESS WIRE)--Today, Marqeta, the global modern card issuing platform, announced the launch of its new industry study, conducted in partnership with Consult Hyperion The European Payments Landscape in 2030: Implants, embedded ethics and a post-payments world examining the potential issues and innovations that may emerge in the next decade of payments.

Marqeta partnered with Consult Hyperion, independent experts in payments and identity, to conduct a series of workshops with 12 industry luminaries, including: Theodora Lau, Founder of Unconventional Ventures, and Brett King, Founder of Moven, and Jonathan Williams, Technical Payments Specialist and Payment Systems Regulator. Led by acclaimed author, advisor and commentator on digital financial services, Dave Birch, the workshops delved into a range of issues to gain expert opinion on where the market is heading from the possibility of new and emerging payment methods through to ethics and regulation. It suggests that innovations in AI and biometrics could change the way we live, but the social and ethical implications may need to be carefully managed to ensure equality of opportunity for all.

The insights from the workshops were further enriched with the findings of a 2,037 consumer survey done by Propeller Insights on behalf of Marqeta, gauging public sentiment around advancing payment technologies. Some of the key findings include:

Tribes and implants: By 2030, tribes and influencers may play a key role in driving payment adoption; we are already seeing the start of this, with companies using social influencers to get their brands out. Many consumers already seem willing to embrace futuristic payment methods, with 51% of consumers surveyed saying they would consider using a microchip implant to pay. Could adoption of such extreme technologies be further encouraged by the sway of influencers and tribes?

AI and embedded ethics: Consumers' love of convenience may see spending controls handed over to AI-powered smart wallets making automated decisions on everything from selecting the cheapest way to pay to the most ethical. By embedding ethical behaviour into transactions, the payments space can take on a unique role in incentivising both consumers and companies to be more conscious. There is already some appetite for this, with 31% of 18-24-year-old respondents saying they would be comfortable with AI making automated decisions on their behalf to choose the most ethical way to pay.

Ambient commerce reigns: Payments may become increasingly invisible as we move to more "ambient" models of commerce such as the recent emergence of till-less grocery stores. By 2030, this will evolve even further so there will be no cards or payment devices. Instead, a person's biometric data would be captured by cameras, along with the item they are purchasing, and sent directly to their bank. While 32% of surveyed consumers find the idea of ambient commerce creepy, experts believe that, given how quickly Uber and the like have been accepted as a norm, the convenience would soon win people over.

The no-cash generation: As cash disappears, there could be a generation that has never held physical money. This surfaces serious questions over how societies educate children about the value of money. This is already a concern for many consumers today, with 35% of respondents admitting they worry that the young people they know who don't use cash will struggle with learning to budget or to save without physical cash.

Digital responsibility: As we continue to move away from physical payment methods, and as the number of digital products and services on offer continues to multiply, digital responsibility may be a key focus for regulators. Digital responsibility means ensuring that new digital products and services are developed with consumer wellbeing in mind and don't misuse data or encourage harmful habits, such as overspending.

"If the past two years have taught us anything, its that no one can predict the future," said Ian Johnson, SVP, Managing Director, Europe, Marqeta. "In less than a decades time, the payments landscape could look very different. It could be the norm to pay by waving a hand. Whats clear is that in 2030 and beyond, digital payments will have an increasingly foundational role in our lives tying into our ethics, our future education, and the smooth functioning of our economies."


Taking the thinking out of payments with AI: is there any need for financial education?

The emergence of smart, AI-driven wallets as a mainstream form of payment raised an engaging debate among experts around the future role of financial education. Could technology that removes the need to think, making decisions on your behalf, be preferable to having an active role in understanding and managing finances and payments?

"We need to create behavioural mechanics, behavioural economics that change people's behaviour to become financially healthy," said Brett King, Founder of Moven. "And you're better off doing that through creating tools that help them do that, rather than educating them."

However, UK consumers appear wary of handing over such control:

A third (34%) of respondents would be comfortable with AI making payment decisions on the most affordable method to pay for goods or services

Only 22% of respondents would be comfortable with AI selecting the most ethical way to pay

Just 21% of respondents would be comfortable with AI choosing which currency to pay in

Disappearing cash and the exclusion problem

Disappearing cash raises concerns for many around exclusion, with "cash tribes" of people who find it difficult to move to digital payments, or who simply do not want to, such as the elderly, the unbanked, and digital sceptics. 75% of people surveyed aged 65+ said they have felt pressure to ditch cash due to places only taking card/contactless payments, while 83% of consumers surveyed said the decline of cash will exclude those most at-risk in society.

Product design was noted as an area of potential discrimination for such groups, in particular the elderly. "There are all sorts of technologies that have biases against people who are older," said Jonathan Williams, Technical Payments Specialist, Payment Systems Regulator. "One example is that biometric fingerprint scanners don't tend to work as well with people who are over the age of 70 because, in some cases, their skin tends to be drier and therefore has less ridge definition. So, they don't get quite as good a read if they're trying to use fingerprints."

But is it possible that, by 2030, digital currencies could be built to be inclusive for all? Some workshop attendees believed that Central Bank Digital Currencies (CBDCs) could be an antidote to the exclusive implications that come with the decline of cash. However, education is needed if this is to become a reality. 30% of respondents said they don't like the idea of the state tracking their digital currency and 27% don't understand how CBDCs would affect them.

Regulation must stop playing catch up

Currently, the pace of innovation in payments is far outstripping that at which regulation can move. With so much scope for error, from prolific fraud to loss of money, regulators must not just catch up, but get ahead. It was noted that progress is being made. For example, the EU recently proposed AI regulation to pre-emptively tackle issues around bias and ethics, a topic that was front of mind for consumers surveyed, 91% of whom expressed concerns about AI bias creeping into financial services.

However, by 2030 the challenges and areas of concern that regulators face will likely have multiplied. For instance, the possibility of an outage caused by a sophisticated, state-backed attack, is becoming increasingly likely. Regulators must prepare for this eventuality and help build systemic resilience into the payments sector.

"When it comes to regulation, we need to not look from the point of where we are now, but from where were going," said Helene Panzarino, Vacuumlabs, SME Expert, Community & Fintech Partnerships. "We need a new structure. At the moment, were trying to put old clothes on the new body. We have to suspend and forget what came before to a large extent and look at where we are now in that space to create these new regulations."

Johnson argues that flexibility in payment infrastructure will be vital to delivering a vibrant future of payments: "While the future is a mystery, one thing is for certain: the most ambitious and innovative payment methods will be those that are underpinned by agility and the ability to make automatic, real-time decisions," says Marqeta's Ian Johnson. "A flexible payment ecosystem will also be key for making all this happen, creating room for more offerings to enter the space and, in turn, supporting greater inclusion and sustainable alternatives. All that remains to be said is, watch this space."

European Payments in 2030 Report

To download Marqeta's European Payments Landscape in 2030 report, visit: https://www.marqeta.com/resources/resource/the-european-payments-landscape-in-2030

About Marqeta

Marqeta (NASDAQ: MQ) is the global modern card issuing platform empowering builders to bring the most innovative products to the world. Marqeta provides developers advanced infrastructure and tools for building highly configurable payment cards. With its open APIs, the Marqeta platform is used by leading European fintechs like Capital on Tap, Lydia and Twisto, who want to easily build tailored payment solutions to create best-in-class experiences and power new modes of money movement. Marqeta is headquartered in Oakland, California, is enabled in 36 countries globally and has offices in London, United Kingdom and Melbourne, Australia. For more information, visit http://www.marqeta.com, Twitter and LinkedIn.

View source version on businesswire.com: https://www.businesswire.com/news/home/20211021005122/en/

Contacts

Robert Fretwell, Spark Communications, marqeta@sparkcomms.co.uk

Original post:

Implants, AI and the No-cash Generation Will Payments in 2030 Be a Dystopia or a Utopia? - Yahoo Finance

Posted in Ai | Comments Off on Implants, AI and the No-cash Generation Will Payments in 2030 Be a Dystopia or a Utopia? – Yahoo Finance

ZERO Announces Apollo, its Latest AI-Driven Productivity Automation Tool for the Legal Industry – PRNewswire

Posted: at 10:29 pm

Apollo is a Desktop-based Time Capture Automation Solution that Records Time Spent on Billable Activity

CAMPBELL, Calif., Oct. 21, 2021 /PRNewswire/ -- ZERO, a pioneer in productivity automation solutions for professional services firms, today announced the launch of Apollo, a new software solution that automatically captures lawyers' time spent on billable work on any desktop device and seamlessly integrates it into their existing billing platform. Apollo is a low-touch, high-impact solution that can be deployed seamlessly on a law firm's existing IT infrastructure.

Apollo offers the legal industry a new intelligent product that mimics human cognition by learning from users' activities to produce accurate recordings of projects and billable time, driving higher client value and improving employee morale. According to ZERO's latest legal industry survey, published in August 2021, lawyers waste 30% of their time on non-billable admin tasks like tracking and reporting time; time that could be spent improving the quality of their work lives by focusing on practicing law.

"We heard loud and clear from lawyers that they want a better work-life balance, and one way to provide that is freeing up their time spent on administrative tasks," said Alex Babin, ZERO CEO. "With Apollo, the time they spend working on their desktops is automatically captured and entered into their billing software, meaning they don't have to spend hours at the end of every day, week or month manually entering that time, when they could be watching their child play soccer or focusing on winning a case."

Apollo provides three key benefits to lawyers and their support staff: 1) ensuring the quality and relevancy of time entries; 2) reducing the hours spent on manual timekeeping; and 3) increasing revenue for law firms. Apollo joins ZERO's suite of automation products that enable lawyers to be more productive and generate more revenue by automating and streamlining onerous administrative tasks such as email management and mobile time capture. ZERO is well-known in the legal industry via partnerships with AMLAW500 companies automating their internal administrative processes, like billing, email, and document management.

ZERO's next-generation AI solutions sit upon existing systems and provide an additional layer of automation that runs in the background, using autonomous virtual modules to mimic human decision-making in high-value, repetitive processes like information prioritization and data classification. This automated layer sits on top of billing systems, email systems and document management systems enabling organizations to increase productivity and accuracy at the user and firm-wide level, ultimately bringing more billable hours and profit to the firm.

ABOUT ZERO
ZERO is a leader in Productivity Automation with products engineered to help professional services firms achieve operational excellence. ZERO's applications enable lawyers to be more productive and generate more revenue by automating and streamlining onerous administrative tasks such as email and document management and time capture. Law firms around the world rely on ZERO to minimize revenue leakage, increase email compliance, and improve the lives of their lawyers. Learn more at http://www.zerosystems.com

MEDIA CONTACT: Jessi Adler, Plat4orm, [email protected]

SOURCE ZERO

https://zerosystems.com

Read this article:

ZERO Announces Apollo, its Latest AI-Driven Productivity Automation Tool for the Legal Industry - PRNewswire

Posted in Ai | Comments Off on ZERO Announces Apollo, its Latest AI-Driven Productivity Automation Tool for the Legal Industry – PRNewswire

DentalMonitoring, the Leading AI-Based Dental Software Company, Announces a $150 Million Growth Financing, Reaching a Valuation Over $1 Billion -…

Posted: at 10:29 pm

PARIS & AUSTIN, Texas--(BUSINESS WIRE)--DentalMonitoring has become the first dental software company to attain a valuation over $1 billion announcing a $150 million growth financing. The round is led by a new investment of $90 million from Mrieux Equity Partners and $60 million from Vitruvian Partners, an existing financial investor, demonstrating confidence in the companys ambitious plans.

Since CEO and co-founder Philippe Salah launched the company in 2014, DentalMonitoring has become the first player to harness AI for remote monitoring in the dental and orthodontic fields. Driven by the treating doctor, DentalMonitoring's AI automates messages and instructions sent to patients and practice staff to synchronize the delivery of care with the need for care. DentalMonitoring is also the first and only company offering virtual practice solutions for all dental professionals to help streamline and automate their workflow, from initial virtual consultation, patient triage and conversion, to remote monitoring of all appliances and brands. To date, there are over one million patients in more than 50 countries who have taken more than a billion intraoral images on the DentalMonitoring platform.

"We are proud to be supported by leading international funds," says DentalMonitoring CEO Philippe Salah. "This achievement marks a new milestone for the company, and is a testament to the new standard of care our team has brought to the profession. We will continue to deliver even more innovative solutions for dental professionals to help them provide better care and scale their practices."

The company plans to use the proceeds to finance its rapid global growth, targeting an increased presence in the U.S. and expanding into new markets such as China and Japan. DentalMonitoring also plans to nearly double the number of employees in the next two years and target relevant acquisitions.

"AI is one of the major technologies to transform the delivery of healthcare and improve patient outcomes. DentalMonitoring's team and their disruptive technology have convinced us of their ability to address the expanding demand for remote care capabilities for dental professionals. We are proud at Mérieux Equity Partners to back healthtech talent and look forward to supporting DentalMonitoring in their stellar growth, leveraging our international network," added Caroline Folleas of Mérieux Equity Partners.

"By enabling remote dental care, DentalMonitoring is the biggest disruptor to the field since intraoral scanners in the 1990s and digital imaging in the 1980s. The company is an excellent fit with our investment strategy of backing best-in-class companies benefiting from strong market tailwinds with internationalization opportunities. Vitruvian renews its confidence to support the increasing needs of the company's ambitious growth plans," says Torsten Winkler of Vitruvian Partners.

Jefferies LLC, a global investment bank with a deep knowledge of the dental and software markets, served as Sole Placement Agent to DentalMonitoring on this investment round.

About DentalMonitoring - http://www.dental-monitoring.com
DentalMonitoring was started with a simple idea: oral care should be connected and continuous even outside the practice. The company has created the world's first virtual practice platform in dentistry, protected by over 200 patents, to address rapidly evolving patient expectations. Thanks to the largest database of dental images in the industry, DentalMonitoring has developed the most advanced and comprehensive doctor-driven AI solutions to help dental professionals provide superior care and a better patient experience. From patient lead engagement and conversion, providing treatment options through AI-generated reporting and advanced smile simulations, to remote monitoring of all types of treatments, DentalMonitoring's unique platforms give dental professionals complete control over streamlined assessments and communication. DentalMonitoring has over 400 employees across 18 countries and 9 offices including Paris, Austin, London, Sydney and Hong Kong.

About Mérieux Equity Partners - http://www.merieux-partners.com
Mérieux Equity Partners ("MxEP") is an AMF-accredited management company dedicated to equity investments in the health and nutrition sector. With more than 45 companies backed, MxEP actively supports entrepreneurs and companies whose products and services bring differentiated and innovative solutions by providing privileged access to its expertise and the industrial, scientific and commercial network of Institut Mérieux (bioMérieux, Transgene, ABL, Mérieux NutriSciences); MxEP currently manages over €1 billion in assets.

About Vitruvian Partners - http://www.vitruvianpartners.com
Vitruvian is a leading international growth investor headquartered in London with offices in London, Stockholm, Munich, Luxembourg, San Francisco and Shanghai. Vitruvian focuses on dynamic situations characterized by rapid growth and change across industries. Vitruvian has backed over 100 companies and has assets under management of approximately €10 billion. Notable investments include global market leaders and disruptors in their field such as CRF Health, Fotona, Ada, doctari, Vestiaire Collective, Farfetch, Just Eat, Marqeta and TransferWise.

More here:

DentalMonitoring, the Leading AI-Based Dental Software Company, Announces a $150 Million Growth Financing, Reaching a Valuation Over $1 Billion -...

Posted in Ai | Comments Off on DentalMonitoring, the Leading AI-Based Dental Software Company, Announces a $150 Million Growth Financing, Reaching a Valuation Over $1 Billion -…

AI Weekly: AI model training costs on the rise, highlighting need for new solutions – VentureBeat

Posted: October 15, 2021 at 9:05 pm

This week, Microsoft and Nvidia announced that they trained what they claim is one of the largest and most capable AI language models to date: Megatron-Turing Natural Language Generation (MT-NLP). MT-NLP contains 530 billion parameters, the parts of the model learned from historical data, and achieves leading accuracy in a broad set of tasks, including reading comprehension and natural language inferences.

But building it didn't come cheap. Training took place across 560 Nvidia DGX A100 servers, each containing 8 Nvidia A100 80GB GPUs. Experts peg the cost in the millions of dollars.

Like other large AI systems, MT-NLP raises questions about the accessibility of cutting-edge research approaches in machine learning. AI training costs dropped 100-fold between 2017 and 2019, but the totals still exceed the compute budgets of most startups, governments, nonprofits, and colleges. The inequity favors corporations and world superpowers with extraordinary access to resources at the expense of smaller players, cementing incumbent advantages.

For example, in early October, researchers at Alibaba detailed M6-10T, a language model containing 10 trillion parameters (roughly 57 times the size of OpenAI's GPT-3) trained across 512 Nvidia V100 GPUs for 10 days. The cheapest V100 plan available through Google Cloud Platform costs $2.28 per hour, which would equate to roughly $280,000 ($2.28 per hour multiplied by 24 hours over 10 days across all 512 GPUs), further than most research teams can stretch.
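The arithmetic behind estimates like this is straightforward; the sketch below reproduces it under the stated assumptions (flat on-demand pricing, no storage or networking costs, no failed runs).

```python
# Back-of-the-envelope training-cost arithmetic for the M6-10T example above.
# Assumes flat on-demand pricing and ignores storage, networking, and failed runs.
gpus = 512
price_per_gpu_hour = 2.28      # cheapest V100 rate cited for Google Cloud Platform
hours = 24 * 10                # 10 days of continuous training

total = gpus * price_per_gpu_hour * hours
print(f"${total:,.0f}")        # roughly $280,000 under these assumptions
```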

Google subsidiary DeepMind is estimated to have spent $35 million training a system to learn the Chinese board game Go. And when the company's researchers designed a model to play StarCraft II, they purposefully didn't try multiple ways of architecting a key component because the training cost would have been too high. Similarly, OpenAI didn't fix a mistake when it implemented GPT-3 because the cost of training made retraining the model infeasible.

It's important to keep in mind that training costs can be inflated by factors other than an algorithm's technical aspects. As Yoav Shoham, Stanford University professor emeritus and cofounder of AI startup AI21 Labs, recently told Synced, personal and organizational considerations often contribute to a model's final price tag.

"[A] researcher might be impatient to wait three weeks to do a thorough analysis and their organization may not be able or wish to pay for it," he said. "So for the same task, one could spend $100,000 or $1 million."

Still, the increasing cost of training and storing algorithms like Huawei's PanGu-Alpha, Naver's HyperCLOVA, and the Beijing Academy of Artificial Intelligence's Wu Dao 2.0 is giving rise to a cottage industry of startups aiming to optimize models without degrading accuracy. This week, former Intel exec Naveen Rao launched a new company, Mosaic ML, to offer tools, services, and training methods that improve AI system accuracy while lowering costs and saving time. Mosaic ML, which has raised $37 million in venture capital, competes with Codeplay Software, OctoML, Neural Magic, Deci, CoCoPie, and NeuReality in a market that's expected to grow exponentially in the coming years.

In a sliver of good news, the cost of basic machine learning operations has been falling over the past few years. A 2020 OpenAI survey found that since 2012, the amount of compute needed to train a model to the same performance on classifying images in a popular benchmark, ImageNet, has been decreasing by a factor of two every 16 months.

Approaches like network pruning prior to training could lead to further gains. Research has shown that parameters pruned after training, a process that decreases the model size, could have been pruned before training without any effect on the network's ability to learn. Called the lottery ticket hypothesis, the idea is that the initial values parameters in a model receive are crucial for determining whether they're important. Parameters kept after pruning receive lucky initial values; the network can train successfully with only those parameters present.
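As a concrete illustration of the mechanics (not the exact procedure of any particular paper), the sketch below uses PyTorch's built-in magnitude-pruning utility and then rewinds the surviving weights to their initial values, which is the core move of the lottery ticket recipe. The model architecture and the 90% sparsity figure are arbitrary choices for the example.

```python
# Minimal magnitude-pruning sketch with PyTorch. The lottery-ticket recipe:
# save the initial weights, train, prune, then rewind the surviving weights
# to those saved initial values and retrain the sparse network.
import copy
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(784, 300), nn.ReLU(), nn.Linear(300, 10))
initial_state = copy.deepcopy(model.state_dict())   # candidate "winning ticket" init

# ... train `model` on the task here ...

# Remove the 90% of weights with the smallest magnitudes in each linear layer.
for module in model:
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.9)

# Rewind the unpruned weights to their original initialization, keeping the masks.
for name, module in model.named_modules():
    if isinstance(module, nn.Linear):
        module.weight_orig.data.copy_(initial_state[f"{name}.weight"])
```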

Network pruning is far from a solved science, however. New ways of pruning that work before or in early training will have to be developed, as most current methods apply only retroactively. And when parameters are pruned, the resulting structures aren't always a fit for the training hardware (e.g., GPUs), meaning that pruning 90% of parameters won't necessarily reduce the cost of training a model by 90%.

Whether through pruning, novel AI accelerator hardware, or techniques like meta-learning and neural architecture search, the need for alternatives to unattainably large models is quickly becoming clear. A University of Massachusetts Amherst study showed that using 2019-era approaches, training an image recognition model with a 5% error rate would cost $100 billion and produce as much carbon emissions as New York City does in a month. As IEEE Spectrum's editorial team wrote in a recent piece, we must either adapt how we do deep learning or face a future of much slower progress.

For AI coverage, send news tips to Kyle Wiggers and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.

Thanks for reading,

Kyle Wiggers

AI Staff Writer

Read the original here:

AI Weekly: AI model training costs on the rise, highlighting need for new solutions - VentureBeat

Posted in Ai | Comments Off on AI Weekly: AI model training costs on the rise, highlighting need for new solutions – VentureBeat

Artificial intelligence | NIST

Posted: at 9:05 pm

Artificial Intelligence (AI) is rapidly transforming our world. Remarkable surges in AI capabilities have led to a wide range of innovations including autonomous vehicles and connected Internet of Things devices in our homes. AI is even contributing to the development of a brain-controlled robotic arm that can help a paralyzed person feel again through complex direct human-brain interfaces. These new AI-enabled systems are revolutionizing and benefitting nearly all aspects of our society and economy: everything from commerce and healthcare to transportation and cybersecurity. But the development and use of the new technologies it brings are not without technical challenges and risks.

NIST contributes to the research, standards and data required to realize the full promise of artificial intelligence (AI) as a tool that will enable American innovation, enhance economic security and improve our quality of life. Much of our work focuses on cultivating trust in the design, development, use and governance of artificial intelligence (AI) technologies and systems. We are doing this by:

NIST's AI efforts fall into several categories:

NIST's AI portfolio includes fundamental research into and development of AI technologies, including software, hardware, architectures and human interaction and teaming, vital for AI computational trust.

AI approaches are increasingly an essential component in new research. NIST scientists and engineers use various machine learning and AI tools to gain a deeper understanding of and insight into their research. At the same time, NIST laboratory experiences with AI are leading to a better understanding of AI's capabilities and limitations.

With a long history of devising and revising metrics, measurement tools, standards and test beds, NIST is increasingly focusing on evaluating the technical characteristics of trustworthy AI.

NIST leads and participates in the development of technical standards, including international standards, that promote innovation and public trust in systems that use AI. A broad spectrum of standards for AI data, performance and governance is, and increasingly will be, a priority for the use and creation of trustworthy and responsible AI.

AI and machine learning (ML) are changing the way society addresses economic and national security challenges and opportunities. They are being used in genomics, image and video processing, materials, natural language processing, robotics, wireless spectrum monitoring and more. These technologies must be developed and used in a trustworthy and responsible manner.

While answers to the question of what makes an AI technology trustworthy may differ depending on whom you ask, there are certain key characteristics that support trustworthiness, including accuracy, explainability and interpretability, privacy, reliability, robustness, safety, security (resilience), and mitigation of harmful bias. Principles such as transparency, fairness and accountability should also be considered, especially during deployment and use. Trustworthy data, standards, and evaluation, validation and verification are critical for the successful deployment of AI technologies.

Delivering the needed measurements, standards and other tools is a primary focus for NIST's portfolio of AI efforts. It is an area in which NIST has special responsibilities and expertise. NIST relies heavily on stakeholder input, including via workshops, and issues most publications in draft for comment.

Read the original here:

Artificial intelligence | NIST

Posted in Ai | Comments Off on Artificial intelligence | NIST

Facebook is researching AI systems that see, hear, and remember everything you do – The Verge

Posted: at 9:05 pm

Facebook is pouring a lot of time and money into augmented reality, including building its own AR glasses with Ray-Ban. Right now, these gadgets can only record and share imagery, but what does the company think such devices will be used for in the future?

A new research project led by Facebook's AI team suggests the scope of the company's ambitions. It imagines AI systems that constantly analyze people's lives using first-person video, recording what they see, do, and hear in order to help them with everyday tasks. Facebook's researchers have outlined a series of skills the company wants these systems to develop, including episodic memory (answering questions like "Where did I leave my keys?") and audio-visual diarization (remembering who said what, and when).

Right now, the tasks outlined above cannot be achieved reliably by any AI system, and Facebook stresses that this is a research project rather than a commercial development. However, it's clear that the company sees functionality like this as the future of AR computing. "Definitely, thinking about augmented reality and what we'd like to be able to do with it, there's possibilities down the road that we'd be leveraging this kind of research," Facebook AI research scientist Kristen Grauman told The Verge.

Such ambitions have huge privacy implications. Privacy experts are already worried about how Facebook's AR glasses allow wearers to covertly record members of the public. Such concerns will only be exacerbated if future versions of the hardware not only record footage, but analyze and transcribe it, turning wearers into walking surveillance machines.

The name of Facebook's research project is Ego4D, which refers to the analysis of first-person, or egocentric, video. It consists of two major components: an open dataset of egocentric video and a series of benchmarks that Facebook thinks AI systems should be able to tackle in the future.

The dataset is the biggest of its kind ever created, and Facebook partnered with 13 universities around the world to collect the data. In total, some 3,205 hours of footage were recorded by 855 participants living in nine different countries. The universities, rather than Facebook, were responsible for collecting the data. Participants, some of whom were paid, wore GoPro cameras and AR glasses to record video of unscripted activity. This ranges from construction work to baking to playing with pets and socializing with friends. All footage was de-identified by the universities, a process that included blurring the faces of bystanders and removing any personally identifiable information.
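As an illustration of what that kind of de-identification involves, here is a generic face-blurring pass using OpenCV. It sketches the general technique only, not the pipeline the universities actually used, and the file names are placeholders.

```python
# Blur detected faces in a single frame to de-identify bystanders (generic sketch).
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

frame = cv2.imread("frame.jpg")                      # placeholder input frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
for (x, y, w, h) in detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
    face = frame[y:y + h, x:x + w]
    frame[y:y + h, x:x + w] = cv2.GaussianBlur(face, (51, 51), 0)  # heavy blur over each face

cv2.imwrite("frame_deidentified.jpg", frame)
```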

Grauman says the dataset is the first of its kind in both scale and diversity. The nearest comparable project, she says, contains 100 hours of first-person footage shot entirely in kitchens. "We've opened up the eyes of these AI systems to more than just kitchens in the UK and Sicily, but [to footage from] Saudi Arabia, Tokyo, Los Angeles, and Colombia."

The second component of Ego4D is a series of benchmarks, or tasks, that Facebook wants researchers around the world to try and solve using AI systems trained on its dataset. The company describes these as:

Episodic memory: What happened when? (e.g., "Where did I leave my keys?")

Forecasting: What am I likely to do next? (e.g., "Wait, you've already added salt to this recipe.")

Hand and object manipulation: What am I doing? (e.g., "Teach me how to play the drums.")

Audio-visual diarization: Who said what when? (e.g., "What was the main topic during class?")

Social interaction: Who is interacting with whom? (e.g., "Help me better hear the person talking to me at this noisy restaurant.")

Right now, AI systems would find tackling any of these problems incredibly difficult, but creating datasets and benchmarks is a tried-and-tested way to spur development in the field of AI.
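To make the episodic memory task concrete, here is a hypothetical sketch of the kind of query such a system might answer over an index of first-person video detections. The data structures, field names, and example values are invented for illustration and are not part of the Ego4D benchmark.

```python
# Hypothetical episodic-memory lookup: "Where did I leave my keys?"
from dataclasses import dataclass

@dataclass
class Detection:
    timestamp_s: float   # seconds into the wearer's recorded day
    label: str           # detected object, e.g. "keys"
    location: str        # coarse place label, e.g. "kitchen counter"

detections = [
    Detection(100.0, "keys", "hallway table"),
    Detection(820.5, "mug", "kitchen counter"),
    Detection(945.2, "keys", "kitchen counter"),
]

def where_did_i_leave(label: str, index: list[Detection]) -> str | None:
    """Return the location of the most recent sighting of `label`, if any."""
    sightings = [d for d in index if d.label == label]
    return max(sightings, key=lambda d: d.timestamp_s).location if sightings else None

print(where_did_i_leave("keys", detections))   # -> kitchen counter
```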

Indeed, the creation of one particular dataset and an associated annual competition, known as ImageNet, is often credited with kickstarting the recent AI boom. The ImageNet dataset consists of pictures of a huge variety of objects, which researchers trained AI systems to identify. In 2012, the winning entry in the competition used a particular method of deep learning to blast past rivals, inaugurating the current era of research.

Facebook is hoping its Ego4D project will have similar effects for the world of augmented reality. The company says systems trained on Ego4D might one day be used not only in wearable cameras but also in home assistant robots, which likewise rely on first-person cameras to navigate the world around them.

"The project has the chance to really catalyze work in this field in a way that hasn't really been possible yet," says Grauman. "To move our field from the ability to analyze piles of photos and videos that were human-taken with a very special purpose, to this fluid, ongoing first-person visual stream that AR systems, robots, need to understand in the context of ongoing activity."

Although the tasks that Facebook outlines certainly seem practical, the company's interest in this area will worry many. Facebook's record on privacy is abysmal, spanning data leaks and a $5 billion fine from the FTC. It's also been shown repeatedly that the company values growth and engagement above users' well-being in many domains. With this in mind, it's worrying that the benchmarks in this Ego4D project do not include prominent privacy safeguards. For example, the audio-visual diarization task (transcribing what different people say) never mentions removing data about people who don't want to be recorded.

When asked about these issues, a spokesperson for Facebook told The Verge that it expected privacy safeguards would be introduced further down the line. "We expect that to the extent companies use this dataset and benchmark to develop commercial applications, they will develop safeguards for such applications," said the spokesperson. "For example, before AR glasses can enhance someone's voice, there could be a protocol in place that they follow to ask someone else's glasses for permission, or they could limit the range of the device so it can only pick up sounds from the people with whom I am already having a conversation or who are in my immediate vicinity."

For now, such safeguards are only hypothetical.

See the article here:

Facebook is researching AI systems that see, hear, and remember everything you do - The Verge

Posted in Ai | Comments Off on Facebook is researching AI systems that see, hear, and remember everything you do – The Verge
