
Category Archives: Ai

Caltech’s aquatic robot uses AI to navigate the oceans – Popular Science

Posted: December 15, 2021 at 9:58 am

The ocean is big, and our attempts to understand it are still largely surface-deep. According to the National Oceanic and Atmospheric Administration, around 80 percent of the big blue is unmapped, unobserved, and unexplored.

Ships are the primary way to collect information about the seas, but they're costly to send out frequently. More recently, robotic buoys called Argo floats have been drifting with the currents, diving up and down to take a variety of measurements at depths up to 6,500 feet. But new aquatic robots from a lab at Caltech could rove deeper and take on more tailored underwater missions.

"We're imagining an approach for global ocean exploration where you take swarms of smaller robots of various types and populate the ocean with them for tracking, for climate change, for understanding the physics of the ocean," says John O. Dabiri, a professor of aeronautics and mechanical engineering at the California Institute of Technology.

In comes CARL-Bot (Caltech Autonomous Reinforcement Learning Robot), a palm-sized aquatic robot that looks like a cross between a pill capsule and a dumbo octopus. It has motors for swimming around, is weighted to stay upright, and has sensors that can detect pressure, depth, acceleration, and orientation. Everything that CARL does is powered by a microcontroller inside, which has a 1-megabyte processor that's smaller than a postage stamp.

CARL is the latest ocean-traversing innovation out of Dabiri's lab, created and 3D-printed at home by Caltech graduate student Peter Gunnarson. The first tests Gunnarson ran with it were in his bathtub, since Caltech's labs were closed at the start of 2021 because of COVID.


Right now, CARL can still be remotely controlled. But to really get to the deepest parts of the ocean, there can't be any hand-holding involved. That means no researchers giving CARL directions; it needs to learn to navigate the mighty ocean on its own. Gunnarson and Dabiri sought out computer scientist Petros Koumoutsakos, who helped develop AI algorithms for CARL that could teach it to orient itself based on changes in its immediate environment and past experiences. Their research was published this week in Nature Communications.

CARL can decide to adjust its route on the fly to maneuver around rough currents and get to its destination. Or it can stay put in a designated location using minimal energy from a lithium-ion battery.

The set of algorithms developed by Koumoutsakos can perform the wayfinding calculations on board the small robot. The algorithms also take advantage of the robot's memory of prior encounters, like how to get past a whirlpool. "We can use that information to decide how to navigate those situations in the future," explains Dabiri.

CARL's programming enables it to remember similar paths it has taken in previous missions and, over repeated experiences, "get better and better at sampling the ocean with less time and less energy," Gunnarson adds.
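The learning loop described above can be illustrated with a toy tabular Q-learning sketch for holding a target depth in disturbed water. Every detail here (the discretized depths, the three thruster actions, the random "current" model, and the reward) is a simplified assumption invented for illustration, not Caltech's actual algorithm:

```python
import random

# Hypothetical toy model: a robot tries to hold a target depth in a 1-D
# water column. States are discretized depths 0..10; actions change depth
# by -1 (ascend), 0 (hold), or +1 (dive).
TARGET = 5
ACTIONS = [-1, 0, +1]

# Q-table: learned long-run value of each (depth, action) pair.
Q = {(d, a): 0.0 for d in range(11) for a in ACTIONS}

def step(depth, action):
    """Apply the thruster action plus a random 'current' disturbance."""
    drift = random.choice([-1, 0, 0, 1])  # unpredictable water movement
    new_depth = min(10, max(0, depth + action + drift))
    reward = -abs(new_depth - TARGET)     # closer to target = better
    return new_depth, reward

def train(episodes=2000, alpha=0.2, gamma=0.9, epsilon=0.1):
    for _ in range(episodes):
        depth = random.randint(0, 10)
        for _ in range(20):
            # Epsilon-greedy: mostly exploit remembered experience,
            # occasionally explore a new action.
            if random.random() < epsilon:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: Q[(depth, a)])
            new_depth, reward = step(depth, action)
            best_next = max(Q[(new_depth, a)] for a in ACTIONS)
            Q[(depth, action)] += alpha * (reward + gamma * best_next - Q[(depth, action)])
            depth = new_depth

random.seed(0)
train()
# After training, the policy should dive when too shallow, ascend when too deep.
policy_at_surface = max(ACTIONS, key=lambda a: Q[(0, a)])
policy_at_bottom = max(ACTIONS, key=lambda a: Q[(10, a)])
print(policy_at_surface, policy_at_bottom)
```

With enough trials the table encodes a depth-holding policy, which is the sense in which remembered experience replaces hand-holding.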

A lot of machine learning is done in simulation, where all the data points are clean. But transferring that to the real world can be messy. Sensors sometimes get overwhelmed and might not pick up all the necessary metrics. "We're just starting the trials in the physical tank," says Gunnarson. The first step is to test if CARL can complete simple tasks, like repeated diving. A short video on Caltech's blog shows the robot clumsily bobbing along and plunging into a still water tank.

As testing moves along, the team plans to put CARL in a pool-like tank with small jets that can generate horizontal currents for it to navigate through. When the robot graduates from that, it will move to a two-story-tall facility that can mimic upwelling and downwelling currents. There, it will have to figure out how to maintain a certain depth in a region where the surrounding water is flowing in all directions.


"Ultimately, though, we want CARL in the real world. He'll leave the nest and go into the ocean, and with repeated trials there, the goal would be for him to learn how to navigate on his own," says Dabiri.

During the testing, the team will also adjust the sensors in and on CARL. "One of the questions we had is: what is the minimal set of sensors that you can put onboard to accomplish the task?" Dabiri says. When a robot is decked out with tools like LiDAR or cameras, that "limits the ability of the system to go for very long in the ocean before you have to change the battery."

By lightening the sensor load, researchers could lengthen CARL's life and open up space to add scientific instruments to measure pH, salinity, temperature, and more.

Early last year, Dabiri's group published a paper on how they used electric zaps to control a jellyfish's movements. It's possible that adding a chip harboring machine learning algorithms similar to CARL's would enable researchers to better steer the jellies through the ocean.

"Figuring out how this navigation algorithm works on a real live jellyfish could take a lot of time and effort," says Dabiri. In this regard, CARL provides a testing vessel for the algorithms that could eventually go into the mechanically modified creatures. Unlike robots and rovers, these jellies wouldn't have depth limitations, as biologists know that they can exist in the Mariana Trench, some 30,000 feet below the surface.


CARL, in and of itself, can still be a useful asset in ocean monitoring. It can work alongside existing instruments like Argo floats, and go on solo missions to perform more fine-tuned explorations, given that it can get close to sea beds and other fragile structures. It can also track and tag along with biological organisms like a school of fish.

"You might one day in the future imagine 10,000 or a million CARLs (we'll give them different names, I guess) all going out into the ocean to measure regions that we simply can't access today simultaneously, so that we get a time-resolved picture of how the ocean is changing," Dabiri says. "That's going to be really essential to model predictions of climate, but also to understand how the ocean works."


SQREEM’s New AI-powered Study Examines Motivations Surrounding COVID-19 Vaccine Resistance in the US – Yahoo Finance

Posted: at 9:58 am

Misinformation fuels skepticism and confusion; varying impact on the attitudes of vaccine-resistant people despite shared struggles with isolation and concerns of economic ramifications

NEW YORK, Dec. 15, 2021 /PRNewswire/ -- COVID-19 vaccines have once again become a hot topic in the United States as President Joe Biden pushes on with vaccination mandates in a bid to manage concerns around the Omicron variant ahead of the winter flu season. Despite vaccination rates reaching 60%, vaccination coverage remains uneven across the fifty states, with many Americans identifying with and embracing labels such as 'anti-vaxxer' as a form of social identity.

The Anti-Vaxxer and The Vaccine Hesitant

Leveraging proprietary Artificial Intelligence (AI) built to understand online human behavior in a completely anonymous way, SQREEM Technologies' recent U.S. COVID-19 Vaccine Study provides a striking insight into the attitudes and motivations of anti-vaxxers and the vaccine-hesitant. The study utilizes anonymized digital engagement scores as the main metric to understand audience relevance to topics/aspects surrounding COVID-19 vaccination. In the study, 'anti-vaxxers' are audiences that do not agree with the COVID-19 vaccine and its use, while 'vaccine-hesitant' are audiences that are reluctant to use the COVID-19 vaccine despite its availability. For both groups, digital engagement scores with values greater than 5 signify awareness, while values greater than 10 signify a strong engagement with the topic.

Overall, the study found that anti-vaxxers are confused about the topic of COVID-19 vaccination, considering vaccines to be an inconvenience and ineffective. On the other hand, vaccine-hesitant persons are significantly more confused and tend to have misconceptions about vaccines; however, they showed stronger engagement towards topics and content directly related to various vaccine brands.

"At SQREEM, we believe in the power of tech for good, to uplift lives and bring social and economic progress for all. We recognize that many challenges still exist in the fight against this global pandemic, including people's hesitancy towards vaccination. The goal of this study was to develop a better understanding of the attitudes and rationales behind vaccine resistance to help healthcare professionals, regulators and policymakers find innovative ways to tackle the challenges," said Ian Chapman-Banks, CEO and co-founder of SQREEM.


Safety, level of protection worries anti-vaxxers

Taking a closer look, the study found that anti-vaxxers are highly concerned about vaccine ingredients (10.45) and often search for information regarding COVID-19 vaccines and live viruses. Unwilling to get vaccinated because they fear that vaccines may contain live viruses (6.05), they however show an interest in knowing about the efficacy rate of various vaccines (5.8). Safety of vaccines administered (8.06) and concerns about the duration of protection offered (7.96) are top reasons that shape the attitudes of anti-vaxxers who predominantly worry about blood clots as a possible adverse side effect of COVID-19 vaccinations (5.77). Anti-vaxxers also showed a high interest in anti-vaccine protests (8.68) and were found to be heavily influenced by opinions of vaccine skeptics (11.86), including public and political personalities.

Anxiety outweighs health protection interests for the vaccine-hesitant

In comparison, vaccine-hesitant people turned to pro-vaccine key opinion leaders (10.14) such as medical professionals for information significantly more than they referenced vaccine skeptics (5.6). Interestingly, they also demonstrated a strong interest in vaccine myths and conspiracies (9.74). Curious about the ingredients (23.99) and mechanism of action (12.66) for various vaccine brands, this group also showed a strong interest in searching about vaccine appointments online (12.25). But despite being highly interested in the long-term personal health protection offered by COVID-19 vaccines (26.3), their vaccine-resistant behavior is driven by skepticism around the efficacy of vaccines and concerns over possible side effects, with blood clots (13.53) and death (9.23) being their primary fears.

Resistance heightens as the pandemic wears on

The study also noted important changes in the online behavior of both groups as the COVID-19 pandemic progressed. While both groups showed a greater affinity towards religious leaders, the vaccine-hesitant increasingly placed value on the opinion of celebrities. Notably, mainstream media such as TV and radio became the main sources of information for both groups.

While concerned about the higher severity of infections (9.55), anti-vaxxers' rejection of vaccines was driven by longer-term concerns, including the inefficiency of vaccines against new variants and severe infections (5.32), and increased interest in side-effect myths, particularly the belief that vaccines alter a person's DNA (6.25). This group also felt strongly that people with underlying conditions should not get vaccinated (17) and showed high engagement on topics surrounding vaccines and fertility.

Despite their fears about the greater risk of transmission (9.9), the vaccine-hesitant remain skeptical about the technology used to develop vaccines (5.38) and believe it is unsafe for children (6.43) and breastfeeding women (5.09) to be vaccinated. Their pre-existing belief in myths compounded their fears of vaccine side effects, including concerns about increased susceptibility to COVID-19 (9.3), links to Bell's Palsy (7.13) and cancer caused by altered DNA (8.11), harmful effects from shedding of vaccine components (7.39) and human magnetism (6.81).

Ian added, "Insights from this study indicate that an important aspect of overcoming vaccine resistance lies in understanding the behavior of people and applying this knowledge to address their concerns. Tailoring messages to be meaningful and to resonate with different audiences can be effective in countering misinformation and conspiracy theories. Promote open, honest conversations by leveraging people's existing trust in their own doctors and health care providers to direct vaccine-resistant people to professionals for reliable information."

Methodology

The U.S. COVID-19 Vaccine Study employs SQREEM Technologies' proprietary AI technology to analyze anonymized monthly online behaviors including searches, interactions on websites, apps, and publicly available social interactions, on topics surrounding COVID-19 vaccination in the United States.

'Anti-vaxxers' are audiences that do not agree with the COVID-19 vaccine and its use, while 'vaccine-hesitant' are audiences that are reluctant to use the COVID-19 vaccine despite its availability.

Monthly behaviors are represented by online engagement scores. Values between 0 and 5 signify unawareness, 5 to 10 signify awareness, while values greater than 10 signify a strong engagement with the topic. Engagement scores are deemed relevant and can be used to understand the level of engagement by the audience with the topic.
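As a quick sanity check, the scoring bands above can be written as a small helper function. The treatment of the exact boundary values 5 and 10 is an assumption, since the methodology does not say which band they fall into:

```python
def engagement_level(score: float) -> str:
    """Map a digital engagement score to the study's bands.

    Assumption: an exact boundary value is placed in the higher of its
    two adjacent bands, except that exactly 10 counts as awareness
    ("greater than 10" signals strong engagement).
    """
    if score > 10:
        return "strong engagement"
    if score >= 5:
        return "awareness"
    return "unawareness"

# Scores quoted in the study text:
print(engagement_level(10.45))  # anti-vaxxer concern about vaccine ingredients
print(engagement_level(5.8))    # interest in vaccine efficacy rates
print(engagement_level(3.0))    # below the awareness threshold
```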

About SQREEM

SQREEM is the world's largest "behavioural pattern data aggregator" that collects, analyses, and creates a database of open data on the Internet using its own AI technology. Based on this technology, we provide data analysis and programmatic targeting services to more than 100 companies and government agencies across various industries. SQREEM has also been identified as one of Asia's fastest-growing companies by the Financial Times for two years running.

In 2021, the company introduced ONE Market, the world's first AI-enabled media exchange. ONE Market merges all of SQREEM's tech stack layers, delivering an optimised end-to-end solution that seamlessly connects the right audience with the right digital destination, when and where behaviours take place. ONE Market is purposefully designed as a plug and play platform to work directly into an agency's current workflow.

See the following link for more details: https://SQREEMtech.com


SOURCE SQREEM Technologies


RWE and ORE Catapult to accelerate wind AI tech – reNEWS

Posted: at 9:58 am

Artificial intelligence firm Cognitive has joined forces with RWE and ORE Catapult to accelerate the commercialisation of its Wind AI technology.

A better understanding of true individual turbine performance within a wind farm can lead to spotting maintenance early warning signs and saving millions of pounds in lost revenue, Cognitive said.

Wind AI is a real-time solution which, according to the company, has proven it can identify performance degradation with less than 1% error.

This means that Wind AI can provide an opportunity to optimise operations and maintenance practices, Cognitive added.

As part of the Innovate UK-funded project, ORE Catapult conducted a traditional power performance assessment using lidar data provided by RWE.

Wind AI was also deployed and tested by RWE on an offshore wind farm, where the technology created 2.4 million machine learning models to monitor several hundred megawatts of offshore assets, using the data supplied by the instrumentation already installed on the turbines.

Cognitive co-founder and director of applied AI Christopher Fraser said: "Wind AI does not use wind speed measurement as an input.

"Instead, it learns the performance of each turbine against every other turbine under every conceivable set of conditions, and then uses that information to create algorithms to accurately determine the predicted and actual performance of each turbine at any given moment, thus highlighting any that are underperforming."

A Catapult cost analysis verified that Wind AI could reduce the levelised cost of energy and lead to an increase in annual energy production.

Catapult also concluded that Wind AI has the potential to predict component failures before they occur, enabling predictive maintenance to take place and potentially reducing overall corrective maintenance frequencies by 5%.
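The peer-comparison idea Fraser describes can be sketched in a deliberately simplified form: predict each turbine's output from its peers under the same conditions, then flag turbines that fall meaningfully short of that prediction. The median model, the 5% tolerance, and the sample numbers below are all illustrative assumptions; Cognitive's actual system builds millions of machine learning models (2.4 million in the RWE trial):

```python
import statistics

def expected_output(readings, turbine):
    """Predict a turbine's output as the median of its peers' simultaneous
    outputs (a crude stand-in for Wind AI's per-turbine models)."""
    peers = [v for t, v in readings.items() if t != turbine]
    return statistics.median(peers)

def flag_underperformers(readings, tolerance=0.05):
    """Flag turbines producing more than `tolerance` below the peer prediction."""
    flagged = []
    for turbine, actual in readings.items():
        predicted = expected_output(readings, turbine)
        if predicted > 0 and (predicted - actual) / predicted > tolerance:
            flagged.append(turbine)
    return flagged

# Hypothetical snapshot of simultaneous power output (MW) for five turbines;
# T3 lags its peers under the same wind conditions.
snapshot = {"T1": 4.1, "T2": 4.0, "T3": 3.2, "T4": 4.2, "T5": 3.9}
print(flag_underperformers(snapshot))  # → ['T3']
```

Because every turbine is judged against its neighbors rather than an anemometer reading, the comparison stays valid even when wind speed measurements are noisy or unavailable.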

Fraser added: "Our estimates suggest that if every turbine in every wind farm in the UK was producing optimally, this would deliver the equivalent energy of two new farms, each the size of Greater Gabbard, just by using existing assets to their fullest."

RWE production manager James Vause said: "At Rampion offshore wind farm, our goal is to continuously improve our O&M efficiency.

"Wind AI supports us by identifying any underperformance in our assets very early on, which we can then evaluate and engineer solutions for, putting a stop to any potentially significant or cumulative impact on our production."

ORE Catapult senior innovation manager Andrew MacDonald added: "Cognitive's technology is providing a solution for one of the industry's largest unsolved challenges: the cost-effective monitoring and tracking of wind turbine performance losses.

"We're looking forward to continuing to work with Cognitive as they expand their AI solutions to further improve cost reductions in the offshore wind sector."


Facial Recognition Firm ClearView AI Asked to Stop Collecting Biometric Data Without Consent – Tech Times

Posted: at 9:58 am

(Photo : by NICOLAS ASFOURI/AFP via Getty Images) AI (artificial intelligence) security cameras using facial recognition technology are displayed at the 14th China International Exhibition on Public Safety and Security at the China International Exhibition Center in Beijing on October 24, 2018.

Facial recognition firm ClearView AI is now being asked by provincial privacy watchdogs in Canada to stop collecting biometric data from its citizens without consent.


ClearView AI has been facing numerous legal troubles around the world for allegedly violating the privacy laws under their jurisdictions.

In fact, last Dec. 3, the AI firm faced a $19 million fine from the United Kingdom for violating its data protection laws, according to the report by CPO Magazine.

Now, the United States AI firm is facing scrutiny in the Canadian territory.

As per the news story by GlobalNews.ca, three provincial privacy watchdogs in Canada, in Quebec, Alberta, and British Columbia, ordered the facial recognition firm to delete the images it has collected without permission.

The binding order from the three provinces of Canada comes after the authorities, along with the office of the federal privacy commissioner, Daniel Therrien, have concluded their investigation against the firm.

Back in February, the said watchdogs discovered the facial recognition tech of ClearView AI violated both the provincial and federal laws of Canada, which concerns the personal information of its citizens.

To be precise, they found out that the facial recognition tech went on to carry out mass surveillance of Canadian folks.

On top of that, the investigation also claimed that the facial recognition firm has been collecting images of people from the internet, a collection that has already passed the one-billion mark.

The scraped images are reportedly being given away to financial organizations, authorities, and other clients.

The watchdogs further noted that such activities by the firm are against the privacy rights of the citizens of Canada.

In addition to that, the orders from the provincial watchdogs told Clearview AI to cease offering its services in their jurisdictions as well.

However, it is worth noting that the US-based firm already stopped its operations in Canada in 2020. But still, the facial recognition company is hinting at a return to the region.


Meanwhile, the lawyer of ClearView AI, Doug Mitchell, said that the firm was only collecting the biometrics of people around the world from public data. He also argued that ClearView is a search engine like Google.

Mitchell further added that the recent orders from the provincial authorities are beyond their jurisdiction.

The ClearView lawyer said that the latest requirement from the watchdogs is "contrary to the Canadian constitutional guarantee of freedom of expression."

It comes as he claims that the move "restricts the free flow of publicly available information."


This article is owned by Tech Times

Written by Teejay Boris



Microsofts bid for AI firm Nuance faces CMA investigation – The Guardian

Posted: at 9:58 am

The UK competition regulator has opened an investigation into Microsoft's proposed $16bn takeover of Nuance, the artificial intelligence and speech recognition firm best known for its work on Apple's virtual assistant Siri, in its latest move to scrutinise the impact of deals struck by big tech.

The move by the Competition and Markets Authority (CMA) comes weeks after it ordered Facebook parent company Meta to sell the gif creation website Giphy, the first time the regulator has moved to block a deal struck by a major Silicon Valley company.

Microsoft moved to buy Massachusetts-based Nuance, which has a stock market value of almost $18bn, in April to build up its cloud computing operation for healthcare and business customers.

Nuance says that it works with more than three-quarters of US hospitals and has also been used by the NHS in the UK. The deal is the second-largest in Microsoft's history, after the $26.2bn acquisition of LinkedIn five years ago.

The CMA is considering "whether it is or may be the case that this transaction, if carried into effect, will result in the creation of a relevant merger situation," said the regulator, "and, if so, whether the creation of that situation may be expected to result in a substantial lessening of competition within any market or markets in the United Kingdom for goods and services."

The CMA said it is inviting comments from interested parties and rivals as it assesses whether the deal warrants an in-depth investigation.

The deal has already been unconditionally cleared by regulators in the US and Australia. The European Commission has also been taking soundings from rivals, with a decision on whether to investigate expected as soon as this week. The commission is expected to clear the deal.

Nuance offers healthcare businesses and hospitals services including medical transcription, clinical speech recognition and medical imaging. Its technology has paid off during the pandemic as it cuts down on note taking, helping doctors and nurses reduce their exposure to coronavirus.

Microsoft has been in preliminary talks with the CMA, which now has 40 days to decide whether to launch an in-depth investigation, ahead of the formal request for approval of the deal.

Facebook has said it disagrees with the CMAs decision ordering it to sell Giphy, which it acquired for $400m last year, and is considering an appeal.

The CMA is set to be given beefed-up powers to investigate deals struck by Silicon Valley companies through a new dedicated digital markets unit (DMU), which also includes a new enforceable code of conduct for dealings with third parties such as publishers, based on fair trading, trust, and transparency. The government has yet to officially grant the DMU its powers, which requires parliamentary time for new legislation.


AI Weekly: AI researchers release toolkit to promote AI that helps to achieve sustainability goals – VentureBeat

Posted: December 10, 2021 at 6:50 pm


While discussions about AI often center around the technology's commercial potential, increasingly, researchers are investigating ways that AI can be harnessed to drive societal change. Among others, Facebook chief AI scientist Yann LeCun and Google Brain cofounder Andrew Ng have argued that mitigating climate change and promoting energy efficiency are preeminent challenges for AI researchers.

Along this vein, researchers at the Montreal AI Ethics Institute have proposed a framework designed to quantify the social impact of AI through techniques like compute-efficient machine learning. An IBM project delivers farm cultivation recommendations from digital farm twins that simulate the future soil conditions of real-world crops. Other researchers are using AI-generated images to help visualize climate change, and nonprofits like WattTime are working to reduce households' carbon footprints by automating when electric vehicles, thermostats, and appliances are active based on where renewable energy is available.

Seeking to spur further explorations in the field, a group at the Stanford Sustainability and Artificial Intelligence Lab this week released (to coincide with NeurIPS 2021) a benchmark dataset called SustainBench for monitoring sustainable development goals (SDGs) including agriculture, health, and education using machine learning. As the coauthors told VentureBeat in an interview, the goal is threefold: (1) lower the barriers to entry for researchers to contribute to achieving SDGs; (2) provide metrics for evaluating SDG-tracking algorithms, and (3) encourage the development of methods where improved AI model performance facilitates progress towards SDGs.

"SustainBench was a natural outcome of the many research projects that [we've] worked on over the past half-decade. The driving force behind these research projects was always the lack of large, high-quality labeled datasets for measuring progress toward the United Nations Sustainable Development Goals (UN SDGs), which forced us to come up with creative machine learning techniques to overcome the label sparsity," the coauthors said. "[H]aving accumulated enough experience working with datasets from diverse sustainability domains, we realized earlier this year that we were well-positioned to share our expertise on the data side of the machine learning equation ... Indeed, we are not aware of any prior sustainability-focused datasets with similar size and scale of SustainBench."

Progress toward SDGs has historically been measured through civil registrations, population-based surveys, and government-orchestrated censuses. However, data collection is expensive, leading many countries to go decades between taking measurements on SDG indicators. It's estimated that only half of SDG indicators have regular data from more than half of the world's countries, limiting the ability of the international community to track progress toward the SDGs.

"For example, early on during the COVID-19 pandemic, many developing countries implemented their own cash transfer programs, similar to the direct cash payments from the IRS in the United States. However, data records on household wealth and income in developing countries are often unreliable or unavailable," the coauthors said.

Innovations in AI have shown promise in helping to plug the data gaps, however. Data from satellite imagery, social media posts, and smartphones can be used to train models to predict things like poverty, annual land cover, deforestation, agricultural cropping patterns, crop yields, and even the location and impact of natural disasters. For example, the governments of Bangladesh, Mozambique, Nigeria, Togo, and Uganda used machine learning-based poverty and cropland maps to direct economic aid to their most vulnerable populations during the pandemic.

But progress has been hindered by challenges, including a lack of expertise and dearth of data for low-income countries. With SustainBench, the Stanford researchers along with contributors at Caltech, UC Berkeley, and Carnegie Mellon hope to provide a starting ground for training machine learning models that can help measure SDG indicators and have a wide range of applications for real-world tasks.

SustainBench contains a suite of 15 benchmark tasks across seven SDGs taken from the United Nations, including good health and well-being, quality education, and clean water and sanitation. Beyond this, SustainBench offers tasks for machine learning challenges that cover 119 countries, each designed to promote the development of SDG measurement methods on real-world data.

The coauthors caution that AI-based approaches should supplement, rather than replace, ground-based data collection. They point out that ground truth data are necessary for training models in the first place, and that even the best sensor data can only capture some but not all of the outcomes of interest. But AI, they still believe, can be helpful for measuring sustainability indicators in regions where ground truth measurements are scarce or unavailable.

"[SDG] indicators have tremendous implications for policymakers, yet key data are scarce, and often scarcest in places where they are most needed, as several of our team members wrote in a recent Science review article. By using abundant, cheap, and frequently updated sensor data as inputs, AI can help plug these data gaps. Such input data sources include publicly available satellite images, crowdsourced street-level images, Wikipedia entries, and mobile phone records, among others," the coauthors said.

In the short term, the coauthors say that they're focused on raising awareness of SustainBench within the machine learning community. Future versions of SustainBench are in the planning stages, potentially with additional datasets and AI benchmarks.

"Two technical challenges stand out to us. The first challenge is to develop machine learning models that can reason about multi-modal data. Most AI models today tend to work with single data modalities (e.g., only satellite images, or only text), but sensor data often comes in many forms ... The second challenge is to design models that can take advantage of the large amount of unlabeled sensor data, compared to sparse ground truth labels," the coauthors said. "On the non-technical side, we also see a challenge in getting the broader machine learning community to focus more efforts on sustainability applications ... As we alluded to earlier, we hope SustainBench makes it easier for machine learning researchers to recognize the role and challenges of machine learning for sustainability applications."

For AI coverage, send news tips to Kyle Wiggers and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.

Thanks for reading,

Kyle Wiggers

AI Staff Writer

Read more here:

AI Weekly: AI researchers release toolkit to promote AI that helps to achieve sustainability goals - VentureBeat

Posted in Ai | Comments Off on AI Weekly: AI researchers release toolkit to promote AI that helps to achieve sustainability goals – VentureBeat

AI is thriving on and driving the edge – VentureBeat

Posted: at 6:50 pm


Connected devices and instant mobile access to data are common facets of modern life, but the fact is that we've only just begun this transition to a digital universe. In the near future, autonomous cars will be buzzing through our streets; everything from our shoes to our eyeglasses and even our own body parts will be connected; and digital agents will be assisting us at every turn, cataloging everything we do.

It sounds scary, and it will most certainly produce a number of thorny issues surrounding privacy, self-determination, and even what it means to be human. But underpinning all of it will be the edge, the layer of infrastructure currently under development that will provide much of the processing and storage needed by devices to carry out their real-time functions.

By its nature, the edge will be widely distributed. Small nodes of compute and storage will exist in towns and neighborhoods, along highways and power lines, and virtually anywhere else they are needed. They will also be unmanned, for the most part, and will have to be enabled with a great deal of automation and autonomy to accommodate the massive and diverse requirements of a connected world.

This sounds like a job for artificial intelligence.

The edge, after all, is an ideal environment for AI, largely because it's still in the greenfield stage of development. Unlike in the datacenter, there are no legacy systems to contend with, no processes to be reworked, and no code to be altered. AI becomes the foundational element of an all-new data ecosystem. Dell Technologies, for one, is already churning out edge-specific AI solutions, many of them fully validated and integrated across compute, storage, network, software, and services for optimized AI workloads.

If anything, the pandemic has accelerated the drive to infuse AI into edge infrastructure, says IoT World Today's Callum Cyrus. As remote work and ecommerce took off, organizations turned to machine learning and other tools to overcome the significant operational challenges they faced. But this only increased the data load at the edge, which now requires greater use of AI in order to maintain the speed and flexibility that emerging applications require. A key development is a new generation of intelligent chips, which will soon inhabit all levels of the edge processing spectrum, from general, entry-level machine learning cores to specialty A/V and graphics machines and advanced neural network microcontrollers.

A look at some use cases for AI on the edge shows just how powerful this new intelligent infrastructure can be, notes XenonStack's Jagreet Kaur. Once you empower systems and devices with high-level decision-making capabilities, you can push a wide range of advanced applications to users. Among them are digital map projections, dual-facing AI dashcams, advanced security for shops and offices, and broader use of satellite imagery. Virtually every function that enters the digital ecosphere will be empowered by AI before long.

Organizations that are looking to strategize around these developments should keep three factors in mind, says Intel VP Brian McCarson. First, open source becomes a key enabler because AI thrives on ready access to as much data and as many resources as possible. Second, video will become a major asset as organizations evolve in the new economy. This means AI's capability to leverage video at the edge will be a primary driver for success, and this will accelerate the need for greater investment in both AI platforms and infrastructure. And finally, change will take place rapidly on the edge as new systems and applications eat the old ones. Whatever you deploy on the edge now, be prepared to revamp it sooner rather than later.

Note that AI development on the edge shouldn't take place independently of development elsewhere on the enterprise data footprint. Interoperability among the datacenter, cloud, edge, and any other infrastructure that comes along will be crucial, again because AI is only as good as the data and resources it can leverage.

While it may be tempting to view the edge as simply an extension of legacy infrastructure, the reverse is true: the edge is the new foundation for the services that affect people's lives. In this light, AI at the edge should be the driver for AI in the cloud and the datacenter, at least if your business model is centered on fulfilling user needs, not your own.

See the article here:

AI is thriving on and driving the edge - VentureBeat

Posted in Ai | Comments Off on AI is thriving on and driving the edge – VentureBeat

MIT Researchers Develop AI That Better Understands Object Relationships – Datanami

Posted: at 6:50 pm

Increasingly, AI is competent when it comes to identifying objects in a scene: built-in AI for an app like Google Photos, for instance, might recognize a bench, or a bird, or a tree. But that same AI might be left clueless if you ask it to identify the bird flying between two trees, or the bench beneath the bird, or the tree to the left of a bench. Now, MIT researchers are working to change that with a new machine learning model aimed at understanding the relationships between objects.

"When I look at a table, I can't say that there is an object at XYZ location," explained Yilun Du, a PhD student in MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) and co-lead author of the paper, in an interview with MIT's Adam Zewe. "Our minds don't work like that. In our minds, when we understand a scene, we really understand it based on the relationships between the objects. We think that by building a system that can understand the relationships between objects, we could use that system to more effectively manipulate and change our environments."

The model incorporates object relationships by first identifying each object in a scene, then identifying relationships one at a time (e.g. the tree is to the left of the bird), then combining all identified relationships. It can then reverse that understanding, generating more accurate images from text descriptions even when the relationships between objects have changed. This reverse process works much the same as the forward process: generate each object relationship one at a time, then combine.
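The one-relation-at-a-time composition described above can be sketched in miniature. The code below is an illustrative toy, not the authors' actual model (which uses learned energy functions over images): each relation gets its own hypothetical scorer, and a scene is evaluated by combining the independent per-relation scores.

```python
# Illustrative toy (not the authors' model): per-relation scorers composed by
# summing, so novel combinations of relations can be evaluated without
# retraining one monolithic model.

def relation_score(scene, relation):
    """Hypothetical scorer for one relation: 0.0 if satisfied, 1.0 if not.
    The real model uses learned energy functions over images; here we just
    compare object x-coordinates in a symbolic scene."""
    subject, predicate, obj = relation
    sx, _ = scene[subject]
    ox, _ = scene[obj]
    if predicate == "left_of":
        return 0.0 if sx < ox else 1.0
    if predicate == "right_of":
        return 0.0 if sx > ox else 1.0
    raise ValueError(f"unknown predicate: {predicate}")

def composed_score(scene, relations):
    # Relations are handled one at a time and then combined, mirroring the
    # article's description: the total score is the sum of independent scores.
    return sum(relation_score(scene, r) for r in relations)

scene = {"tree": (0, 0), "bird": (5, 3), "bench": (9, 0)}
relations = [("tree", "left_of", "bird"), ("bench", "right_of", "bird")]
print(composed_score(scene, relations))  # 0.0 -> every relation satisfied
```

Because each scorer is independent, adding a fourth or fifth relation to the description is just another term in the sum, which is why this style of composition adapts to novel combinations.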

"Other systems would take all the relations holistically and generate the image one-shot from the description," Du said. "However, such approaches fail when we have out-of-distribution descriptions, such as descriptions with more relations, since these [models] can't really adapt one shot to generate images containing more relationships. However, as we are composing these separate, smaller models together, we can model a larger number of relationships and adapt to novel combinations."

Testing the results on humans, the researchers found that 91 percent of participants concluded that the new model outperformed prior ones. They underscored that this work is important because it could, for instance, help AI-powered robots better navigate complex situations. "One interesting thing we found is that for our model, we can increase our sentence from having one relation description to having two, or three, or even four descriptions, and our approach continues to be able to generate images that are correctly described by those descriptions, while other methods fail," Du said.

Next, the researchers are working to assess how the model performs on more complex, real-world images before moving to real-world testing with object manipulation.

To learn more about this research, read the article from MIT's Adam Zewe here. You can read the paper describing the research here.

Visit link:

MIT Researchers Develop AI That Better Understands Object Relationships - Datanami

Posted in Ai | Comments Off on MIT Researchers Develop AI That Better Understands Object Relationships – Datanami

Top 8 AI and ML Trends to Watch in 2022 – IT Business Edge

Posted: at 6:50 pm

2022 will be a crucial year as artificial intelligence (AI) and machine learning (ML) continue to stride along the path to becoming the most disruptive yet transformative technologies ever developed. Google CEO Sundar Pichai has said that the impact of AI will be even more significant than that of fire or electricity on the development of humans as a species. It may be an ambitious claim, but AI's potential is clear from the way it has been used to explore space, tackle climate change, and develop cancer treatments.

Now, it may be difficult to imagine the impact of machines making faster and more accurate decisions than humans, but one thing is certain: In 2022, new trends and breakthroughs will continue to emerge and push the boundaries of AI and ML.

Here are the top eight AI and ML trends to watch out for in 2022.

Since the advent of AI and ML, there have been fears that these disruptive technologies will replace human workers and even make some jobs obsolete. However, as businesses began to incorporate these technologies and build AI/ML literacy within their teams, they noticed that working alongside machines with smarter cognitive functionality in fact boosted employees' abilities and skills.

For instance, in marketing, businesses are already using AI/ML tools to help them zero in on potential leads and the business value they can expect from potential customers. In engineering, AI and ML tools enable predictive maintenance, the ability to predict and inform the service and repair requirements of enterprise equipment. And in knowledge fields such as law, AI/ML technology is widely used to peruse ever-increasing amounts of data and find the right information for a specific task.

NLP is currently one of the most widely used AI technologies. It significantly reduces the need to type or interact with a screen: machines have begun to comprehend human languages, and we can now simply talk with them. In addition, AI-powered devices can now turn natural human language into computer code that can run applications and programs.

The release of GPT-3, the most advanced and largest NLP model ever created, by OpenAI is a big step forward in language processing. It consists of around 175 billion parameters, the data points and variables that machines use for language processing. Now, OpenAI is developing GPT-4, a more powerful successor to GPT-3. Speculation suggests that GPT-4 may contain roughly 100 trillion parameters, making it more than 500 times larger than GPT-3. That would be a big step closer to machines that can develop language and engage in conversations indistinguishable from those of a human.
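A quick sanity check on the scale comparison (using the speculative GPT-4 figure cited above, which is rumor, not an official number):

```python
# Arithmetic behind the scale claim in this section. The GPT-4 figure is
# speculative, as noted in the text; only GPT-3's count is published.
gpt3_params = 175e9      # 175 billion (published)
gpt4_rumored = 100e12    # 100 trillion (speculation)

ratio = gpt4_rumored / gpt3_params
print(f"{ratio:.0f}x")   # prints "571x"
```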

Some of the NLP technologies expected to grow in popularity are sentiment analysis, process description, machine translation, automatic video caption creation, and chatbots.

Also read: Natural Language Processing Will Make Business Intelligence Apps More Accessible

Recently, the World Economic Forum stated that cybercrime poses a more significant threat to society than terrorism. As ever more intelligent and complex machines, connected over vast networks, take control of every aspect of our lives, cybercrime becomes rampant and cybersecurity solutions grow more complex.

AI and ML tools can play a significant role in tackling this issue. For example, AI/ML algorithms can analyze high volumes of network traffic and recognize patterns of nefarious virtual activity. In 2022, some of the most significant AI/ML technology developments are likely to be in this area.
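As a toy illustration of the kind of traffic-pattern analysis described above (production systems use learned models and far richer features than this), here is a minimal, stdlib-only sketch that flags anomalous request volumes with a simple z-score rule:

```python
# Minimal sketch of pattern-based traffic anomaly detection: flag any minute
# whose request count deviates too far from the mean in standard-deviation
# units. Real ML-based systems learn far subtler patterns than this.
from statistics import mean, stdev

def flag_anomalies(requests_per_minute, threshold=2.0):
    """Return indices of minutes whose request count deviates more than
    `threshold` standard deviations from the mean of the window."""
    mu = mean(requests_per_minute)
    sigma = stdev(requests_per_minute)
    return [i for i, x in enumerate(requests_per_minute)
            if sigma > 0 and abs(x - mu) / sigma > threshold]

traffic = [102, 98, 110, 95, 105, 2000, 101, 99]  # one suspicious spike
print(flag_anomalies(traffic))  # prints [5]
```

A fixed threshold like this breaks down on traffic with daily rhythms, which is exactly where learned models earn their keep.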

The metaverse is a virtual world, like the internet, where users can work and play together with immersive experiences. The concept became a hot topic after Mark Zuckerberg, the CEO of Facebook, spoke about merging virtual reality (VR) technology with the Facebook platform.

Without a doubt, AI and ML will be a lynchpin of the metaverse. These technologies will allow an enterprise to create a virtual world where its users will feel at home with virtual AI bots. These virtual AI beings will assist users in picking the right products and services or helping users relax and unwind themselves by playing games with them.

Also read: What is the Metaverse and How Do Enterprises Stand to Benefit?

The scarcity of skilled AI developers and engineers stands as a major barrier to adopting AI technology in many companies. No-code and low-code technologies come to the rescue: these solutions aim to offer simple interfaces for developing highly complex AI systems.

Today, web design and no-code user interface (UI) tools let users create web pages simply by dragging and dropping graphical elements together. Similarly, no-code AI technology allows developers to create intelligent AI systems by simply merging different ready-made modules and feeding them industrial domain-specific data. Furthermore, NLP, low-code, and no-code technologies will soon enable us to instruct complex machines with our voice or written instructions. These advancements will result in the democratization of AI, ML, and data technologies.
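The "merge ready-made modules" idea can be shown with a toy pipeline. The blocks and the `run_pipeline` helper below are hypothetical stand-ins for the prebuilt components a no-code tool would let users drag together; under the hood, the composition is just a sequence of steps applied in order:

```python
# Hypothetical prebuilt "modules" a no-code tool might expose as drag-and-drop
# blocks. Each takes the previous block's output as its input.
def lowercase(text):
    return text.lower()

def tokenize(text):
    return text.split()

def count_words(tokens):
    return len(tokens)

def run_pipeline(steps, value):
    # The no-code runtime in miniature: feed each block's output to the next.
    for step in steps:
        value = step(value)
    return value

pipeline = [lowercase, tokenize, count_words]
print(run_pipeline(pipeline, "No-code AI Composes Prebuilt Modules"))  # prints 5
```

Swapping, reordering, or adding blocks changes behavior without touching any block's internals, which is the appeal of the modular, no-code approach.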

In 2022, with the aid of AI and ML technologies, more businesses will automate multiple repetitive processes that involve large volumes of information and data. In the coming years, an increased rate of automation can be expected in various industries using robotic process automation (RPA) and intelligent business process management software (iBPMS). This trend allows businesses to reduce their dependence on the human workforce and to improve the accuracy, speed, and reliability of each process.

Also read: The Growing Relevance of Hyperautomation in ITOps

Modern-day businesses will soon begin utilizing AI powered by quantum computing to solve complex business problems faster than traditional AI. Quantum AI offers faster and more accurate data analysis and pattern prediction, helping businesses identify unforeseen challenges and arrive at viable solutions. As a result, quantum AI could revolutionize many industrial sectors, such as healthcare, chemistry, and finance.

Creativity is widely considered a skill possessed only by humans. But today, we are witnessing the emergence of creativity in machines. That means artificial intelligence is inching closer to real intelligence.

We already know that AI can be used to create art, music, plays, and even video games. In 2022, the arrival of GPT-4 and Google Brain will redefine the boundaries of what AI and ML technologies can do in the domain of creativity. We can expect more natural creativity from our artificially intelligent machine friends.

Today, most of the creative pursuits of AI technology are mainly demonstrations of its potential. But that will change significantly in 2022, as AI works its way into day-to-day creative tasks such as writing and graphic design.

All these trends in AI and ML will soon influence businesses all over the globe. These disruptive technologies are powerful enough to transform every industry by assisting organizations in achieving their business objectives, making important choices, and developing innovative goods and services.

The AI/ML industry is expected to grow at a CAGR of 33% through 2027. Estimates suggest that businesses will have at least 35 AI initiatives in their operations by 2022.

Data specialists, data analysts, CIOs, and CTOs should consider using these opportunities to scale their existing business capabilities and use these technologies to their businesses' advantage.

Read next: Best Machine Learning Software in 2021

Read the original:

Top 8 AI and ML Trends to Watch in 2022 - IT Business Edge

Posted in Ai | Comments Off on Top 8 AI and ML Trends to Watch in 2022 – IT Business Edge

DeepMind tests the limits of large AI language systems with 280-billion-parameter model – The Verge

Posted: at 6:50 pm

Language generation is the hottest thing in AI right now, with a class of systems known as large language models (or LLMs) being used for everything from improving Google's search engine to creating text-based fantasy games. But these programs also have serious problems, including regurgitating sexist and racist language and failing tests of logical reasoning. One big question is: can these weaknesses be improved by simply adding more data and computing power, or are we reaching the limits of this technological paradigm?

This is one of the topics that Alphabet's AI lab DeepMind is tackling in a trio of research papers published today. The company's conclusion is that scaling up these systems further should deliver plenty of improvements. "One key finding of the paper is that the progress and capabilities of large language models is still increasing. This is not an area that has plateaued," DeepMind research scientist Jack Rae told reporters in a briefing call.

DeepMind, which regularly feeds its work into Google products, has probed the capabilities of these LLMs by building a language model with 280 billion parameters named Gopher. Parameters are a quick measure of a language model's size and complexity, meaning that Gopher is larger than OpenAI's GPT-3 (175 billion parameters) but not as big as some more experimental systems, like Microsoft and Nvidia's Megatron model (530 billion parameters).
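A back-of-envelope sketch shows where parameter counts like these come from. A common approximation for decoder-only transformers is roughly 12 x layers x width^2 for the attention and feed-forward blocks, plus the embedding matrix; plugging in GPT-3's published configuration (96 layers, model width 12,288, ~50K-token vocabulary) recovers the 175-billion figure. This is an estimate, not an exact count, and other architectures (including Gopher's) deviate from it.

```python
# Rough parameter-count estimate for a decoder-only transformer. The 12*L*d^2
# rule of thumb covers attention (4*d^2 per layer) plus the MLP (8*d^2 per
# layer); biases and layer norms are small enough to ignore here.
def approx_transformer_params(n_layers, d_model, vocab_size):
    blocks = 12 * n_layers * d_model ** 2   # attention + feed-forward weights
    embeddings = vocab_size * d_model       # token embedding matrix
    return blocks + embeddings

# GPT-3's published configuration: 96 layers, width 12,288, vocab 50,257.
gpt3 = approx_transformer_params(n_layers=96, d_model=12288, vocab_size=50257)
print(f"{gpt3 / 1e9:.0f}B")  # prints "175B"
```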

It's generally true in the AI world that bigger is better, with larger models usually offering higher performance. DeepMind's research confirms this trend and suggests that scaling up LLMs does offer improved performance on the most common benchmarks testing things like sentiment analysis and summarization. However, the researchers also cautioned that some issues inherent to language models will need more than just data and compute to fix.

"I think right now it really looks like the model can fail in a variety of ways," said Rae. "Some subset of those ways are because the model just doesn't have sufficiently good comprehension of what it's reading, and I feel like, for those class of problems, we are just going to see improved performance with more data and scale."

But, he added, there are other categories of problems, like the model perpetuating stereotypical biases or being coaxed into giving mistruths, "that [...] no one at DeepMind thinks scale will be the solution [to]." In these cases, language models will need additional training routines, like feedback from human users, he noted.

To come to these conclusions, DeepMinds researchers evaluated a range of different-sized language models on 152 language tasks or benchmarks. They found that larger models generally delivered improved results, with Gopher itself offering state-of-the-art performance on roughly 80 percent of the tests selected by the scientists.

In another paper, the company also surveyed the wide range of potential harms involved with deploying LLMs. These include the systems' use of toxic language, their capacity to share misinformation, and their potential to be used for malicious purposes, like sharing spam or propaganda. All these issues will become increasingly important as AI language models become more widely deployed, as chatbots and sales agents, for example.

However, its worth remembering that performance on benchmarks is not the be-all and end-all in evaluating machine learning systems. In a recent paper, a number of AI researchers (including two from Google) explored the limitations of benchmarks, noting that these datasets will always be limited in scope and unable to match the complexity of the real world. As is often the case with new technology, the only reliable way to test these systems is to see how they perform in reality. With large language models, we will be seeing more of these applications very soon.

View post:

DeepMind tests the limits of large AI language systems with 280-billion-parameter model - The Verge

Posted in Ai | Comments Off on DeepMind tests the limits of large AI language systems with 280-billion-parameter model – The Verge
