Category Archives: Artificial Intelligence

Artificial intelligence developed by NCAR will identify forests with high fire risks – Denver 7 Colorado News

Posted: June 11, 2022 at 1:40 am

BOULDER, Colo. The threat of wildfires has been especially front and center in Colorado for months, and projections show the worst could still be ahead as drought conditions worsen. However, firefighters will have a new tool at their fingertips developed right here in Colorado that will help them know ahead of time where fires could quickly spread.

Artificial intelligence developed at the National Center for Atmospheric Research (NCAR) in Boulder is allowing scientists to identify the locations and amounts of dead or dying trees and vegetation in Colorado forests, which burn quickly and, therefore, cause wildfires to spread faster.

NCAR scientists behind its development illustrated the technology's power by retroactively mapping the East Troublesome Fire of 2020, which grew much more explosively than anticipated.

"We can run a simulation, and basically use it to understand how the fire propagates, which areas are burning, and how much smoke is produced by the fire," said project scientist Timothy Juliano of the East Troublesome Fire projection. "Before this, the fuel map inside of our model didn't even account for beetle kill. It assumed that the forest was healthy and had live trees that were still standing. In reality, we know that this area was devastated by beetle kill, and a lot of the trees were downed and dead ... If we had this product available [then], we would have been able to be much closer to observations in terms of how quickly the fire was going to spread."

Currently, researchers keep tabs on forest health through a combination of human cataloging and various sets of ecological data. This leads to a laborious process and, at times, unreliable results.

"You can imagine that it takes a lot of time and effort to assemble that data, so it is only published about every five years," said Amy DeCastro, lead researcher for the study. "Our AI-based method is able to update the vegetation maps during those times between publication and accurately capture any disturbance to the vegetation that may have happened in the meantime, like drought and beetle infestation."
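As a rough illustration of the kind of approach DeCastro describes (not NCAR's actual code; the satellite bands, labels, and thresholds below are assumptions invented for the sketch), a per-pixel classifier trained on multispectral imagery could flag dead or beetle-killed vegetation between official fuel-map releases:

```python
# Hypothetical sketch: classify satellite pixels as live vs. dead/beetle-kill fuel.
# Band values, labels, and thresholds are illustrative assumptions, not NCAR's method.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in training data: per-pixel reflectance in red, NIR, and SWIR bands,
# labeled 0 = healthy canopy, 1 = dead/dying fuel (in practice from field surveys).
X = rng.uniform(0.0, 0.6, size=(5000, 3))
ndvi = (X[:, 1] - X[:, 0]) / (X[:, 1] + X[:, 0] + 1e-6)  # vegetation greenness proxy
y = (ndvi < 0.25).astype(int)                            # synthetic labels for the demo

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")

# An updated "fuel map" would then be the classifier applied to every pixel of a new scene.
```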

As Colorado firefighters look ahead to the prospect of longer and more severe fire seasons, technology that gives a more robust understanding of forest health will better equip them to show up in the right places at the right times. It will provide a head start that could literally save homes and lives.

"We intentionally designed this project around freely available satellite imagery and an online modeling platform," DeCastro said. "So any firefighter or land manager that wants to update their vegetation data before running a wildland fire forecast has access to this method."

"They can put out their evacuation warnings faster, and just have a better sense of where the fire will be and at what time," Juliano said.

The team at NCAR has been working on this artificial intelligence for more than a year, and knew early on of its potential power. It was in December, however, that its impact truly sank in for Juliano, as he was forced to evacuate his home in Louisville during the Marshall Fire.

"I think for me, it's like extra motivation, honestly, just to be really passionate about the research and try to resolve some of these issues," he said. "It hits home when you see it firsthand."

Visit link:

Artificial intelligence developed by NCAR will identify forests with high fire risks - Denver 7 Colorado News

Class of 2022: Trucking company VP sees a role for artificial intelligence in future of his industry – University of Calgary

Posted: at 1:40 am

Kenedy Assman is the fourth generation to lead Landtran Systems Inc. down the long highways of Canada as it helps to keep the country's goods moving. The company, one of the largest trucking and logistics companies in Canada, was started back in the 1930s by his great-grandfather. However, the recent UCalgary grad didn't come to Landtran by a short route.

Assman added vice-president of corporate development for Landtran to his resume just one month after completing an MBA specializing in finance through the Haskayne School of Business at the University of Calgary. (Assman also has a Bachelor of Applied Science from Queen's University, where he studied mechanical engineering in the mid-2010s.)

During his time at Haskayne, Assman also participated in the innovative course where students support ventures in the Creative Destruction Lab - Rockies (CDL-Rockies). CDL-Rockies is one of 11 CDL sites around the world that bring together experienced entrepreneurs, investors and subject-matter experts to accelerate the growth of early-stage science and tech-focused companies. Assman worked closely with Curvenote to help the science-writing platform create a go-to-market strategy.

Now, he's set to focus on continuing to build his family's company.

I actually never thought that I would work here! But it seems like I've been drawn to the transportation industry my whole life. One summer, when I was in [Queen's], I started a bus company with one of my fellow students. That company was to bring students to and from Toronto and Kingston to see their families for the weekend. The company that was already there was charging a crazy amount of money, so we were able to cut that cost in half. After [Queen's], I found myself in marine shipping, where I was a management trainee, then I took over managing oil tankers. But then I started missing the prairies, and that's when I decided to do the MBA at the Haskayne School of Business.

One problem that I would really like to solve, and that draws me to transportation, is how old the systems are, how antiquated the processes are. A lot of things are still done on paper; even routing and dispatching is done manually or in person. Right now, there isn't a lot of AI that goes into finding and creating routes and finding the most effective ways to move freight from A to B. Even warehousing is still done with pen and paper, whereas you see companies like Amazon really disrupting, bringing all this tech into the industry. Hopefully, I will be here to guide the company into the next generation of technological developments and bring us up to speed on what is currently out there.

First, I got to learn about the trucking industry and the supply chain and the problems unique to this industry. [At CDL-Rockies,] I was exposed to all the new, cutting-edge stuff that is coming out. There were around 60 companies, and a lot of them were energy-focused or agriculture-focused, but there are things there that can be pulled into the transportation industry. There is also the [CDL - Supply Chain program]; it would be interesting to see what is coming out of there.

Calgary is optimally positioned to be not only a transportation hub for Canada but for a lot of locations in the U.S.A. By transforming Calgary and transportation infrastructure [etc.], we really have the opportunity to be the epicentre of transportation in Canada.

In the future, I would like to be running and owning this company someday, making it into Canada's No. 1 transportation company. The best service, the greatest place to work and the safest place to work. Hopefully, I will be here to guide the company into the next generation of technological developments and get us up to speed with what is currently out there and what the Amazons of the world are doing.

Read more:

Class of 2022: Trucking company VP sees a role for artificial intelligence in future of his industry - University of Calgary

Applications and use cases of AI in Sports – Appinventiv

Posted: at 1:40 am

As we move toward a technology-rich future, we see the world of sports evolving in leaps and bounds. While statistical data has always played a central role in the sports industry, one technology has significantly increased the level of audience engagement and strategic gaming. We are talking about Artificial Intelligence in Sports.

Over the past two decades, Artificial Intelligence has completely transformed the way we consume and analyze sports. AI is making the world smarter for athletes, broadcasters, advertisers and, ultimately, viewers with real-time statistics. Not to mention that the role of AI in sports forecasting and improved decision-making, amongst other benefits, is one of the top applications of the technology.

The applications of AI in sports have become a common sight even though not many experts talk about them. However, we don't limit the potential of AI when it is integrated into businesses and enterprises. Considering the positive impact and precision this technology brings to the ground, there's not an iota of doubt that AI in sports will flourish immensely in the future.

Speaking of which, let's discuss the under-discussed. This article talks about the transformation AI is bringing to the sports industry, the uses and applications of computational intelligence in sports, and the future of AI technology in the sports business. So sit tight; it's going to be an exciting ride.

Let's begin:

Another study suggests that mobile applications such as HomeCourt, ESPN, AI SmartCoach, etc., are used to assess players' skills, giving them a good medium to improve.

The above data shows how AI is making the sports industry data- and information-rich. Not just popular sports: certain sports enterprises rely entirely on AI and machine learning to drive their business. If you are one of them, you might want to know the basics of AI before we jump to its use cases. Let's take a quick glance at AI for sports.

Artificial Intelligence is an umbrella term covering a variety of what we refer to as smart technologies. If you are new to the whole concept of AI, check out the Artificial Intelligence in business guide.

AI collects information and responds to it without any manual support. The technology can take massive amounts of data and analyze it for better experiences and learning. At the most complex level, we are talking about drones and self-driving automobiles; however, in our daily sports life, it boils down to screen monitors, AI-based chatbots in mobile apps, and a lot more.

The adoption of AI and statistical modeling in sports has become more prominent with recent developments in professional sports analytics. This is probably because the applicability of machine learning algorithms combined with computer processing power has made the sports audience hungry for new strategies and applications.

The primary objective of AI in sports is to make competitions fiercer on and off the field. There are certain areas where AI and machine learning have left a solid footprint in the world of sports. Let's look at the top AI uses:

The field of AI, particularly machine learning, has proven to be beneficial for all the sports challenges mentioned above. To talk about them in detail, here is a range of applications and use cases of Artificial Intelligence in the sports industry.

The sports business is at a point where it is ready to adopt every AI strategy and improve its decision-making by pursuing data-driven objectives. As a matter of fact, from 2015 to 2018, the NBA reviewed over 25,000 games and found over 2,000 missed or incorrect calls. This amounts to 1.49% wrong decisions in the final minutes of each close game.

Decision-making of this kind is now aided by AI-dependent technologies, with which officials can review every close game using probability and visual data. Not only that, AI-based technologies serve the sports industry in a thousand other scenarios.

Below are a few of the significant AI applications in sports with real use examples:

Thanks to predictive analytics, AI in sports is used to boost performance and health. With the help of wearable technology, the athletes can gather information on strain and tear levels and can further avoid serious injuries. This also helps the team shape strong tactics and strategies and maximize their strength.

The analysis of player performance is even more sophisticated, thanks to AI. Even the coaches can gain insights using visuals and data to work on the strengths and weaknesses of the players and make alterations in their game strategies.

From football to tennis, this is true of all sports. A powerful AI technology, Computer Vision is used for human motion sensing and tracking using video sequences. This brings out three results:

One popular real-world example of AI in sports is assessing a swimmer's performance underwater using human pose estimation. This method replaces the old quantitative evaluation approach of manually annotating the swimmer's body.

AI is on track to win in sports, as shown by how it has taken personal training to the next level. An AI diet plan uses machine learning to customize meal plans for different players based on their needs and schedules. And that's just the beginning.

Not to forget the AI-based fitness apps that have flooded the market. These tools and techniques can now run trained algorithms that detect human poses in real time.

One popular example is women's fitness app development, where keypoint skeleton models are used to identify human joints for online yoga and Pilates.
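As a hedged sketch of how such keypoint detection is commonly done (the specific model and the frame path are assumptions for illustration, not a reference to any particular fitness app), an off-the-shelf Keypoint R-CNN from torchvision returns 17 body joints per detected person:

```python
# Minimal pose-estimation sketch using torchvision's pretrained Keypoint R-CNN.
# The frame path is a placeholder; real apps would stream video frames instead.
import torch
from torchvision.io import read_image
from torchvision.models.detection import keypointrcnn_resnet50_fpn

model = keypointrcnn_resnet50_fpn(weights="DEFAULT").eval()

frame = read_image("frame.jpg").float() / 255.0  # [3, H, W] tensor in [0, 1]
with torch.no_grad():
    output = model([frame])[0]

# 'keypoints' has shape [num_people, 17, 3]: (x, y, visibility) per COCO joint.
for person_kpts, score in zip(output["keypoints"], output["scores"]):
    if score > 0.8:
        left_wrist = person_kpts[9]   # COCO keypoint index 9 = left wrist
        print("left wrist at", left_wrist[:2].tolist())
```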

Sports teams are making competitions tighter and fiercer by adding Artificial Intelligence to the scouting and recruitment box of tricks. Everything that takes place on the field, from the players' movements to the orientation of their bodies, is tracked to inform the right decisions.

Further, machine learning algorithms are put to work aggregating data and evaluating players' skills and overall potential in various game categories.

Not only are recruitment choices improved this way, but countries also get a strong and healthy team capable of achieving the impossible.

At big sporting events, the audience often struggles to get inside stadiums in time for the match. Nothing could resolve the crowd issue until AI stepped in.

Recently, Columbus Crew adopted AI-based face recognition technology to allow fans to enter the stadium without having their tickets checked manually. This prevented bottlenecks and made the stadium entryway more efficient.

Apart from this, predictive and cognitive analytics are used to forecast likely stadium attendance and its timing. This helps the officials keep up with demand without much effort, and merchandise and food arrangements are ready on time.

It's no secret that people have been trying to process heaps of data in a bid to predict outcomes and win money for years. However, examining the first half of a match or the number of aces and scores yields only a pseudo-prediction if you are relying on probability and experience alone.

AI in sports cannot predict exact outcomes either, but its algorithms can get much closer than human prediction.

Over 40% of the sports categories can now predict match outcomes using AI based on the below factors:

AI analytics tools can produce the closest possible match predictions using the above data.
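Since the article does not spell out the exact factors, here is a deliberately generic sketch of how such a match-outcome model can be fit on historical results; every feature name and all of the data below are invented for illustration only:

```python
# Hypothetical match-outcome predictor; features and data are made up for the sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Each row: [home advantage (0/1), recent-form differential, head-to-head win rate]
X = np.column_stack([
    rng.integers(0, 2, 500),
    rng.normal(0, 1, 500),
    rng.uniform(0, 1, 500),
])
# Synthetic outcomes: 1 = home/first team wins, 0 = otherwise.
logits = 0.6 * X[:, 0] + 1.2 * X[:, 1] + 1.5 * (X[:, 2] - 0.5)
y = (logits + rng.normal(0, 1, 500) > 0).astype(int)

model = LogisticRegression().fit(X, y)
upcoming = np.array([[1, 0.4, 0.65]])            # one hypothetical fixture
print("win probability:", model.predict_proba(upcoming)[0, 1].round(2))
```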

Sports journalism is a big business where every highlight needs to be covered. These details and updates are closely followed, especially when it comes to data and statistics in tournaments and minor leagues. AI has simplified sports journalism and made it a little easier.

For example, AI-driven platforms can turn hard score data into narratives using natural language generation. The platforms are based on automated insights that sync intelligently with computer vision to produce written game coverage.

This is a fascinating application of AI in sports, where the technology can cover even local matches without officials standing on the field.

This is a small but underrated benefit of AI in sports. Artificial Intelligence can be used to identify opportunities and present more relevant ads based on demographics. Brands this way get better advertising placements built around the top highlights of the game identified by AI.

The cherry on the cake is the automated learning algorithms of AI and machine learning in sports that monitor players' actions and audiences' emotions during matches.

Isn't it fascinating how Artificial Intelligence in sports has redefined the concept of watching and playing games in the most efficient way possible? If you ask about the future of AI in the sports business, we'd say it's bright and shining. AI is everywhere, and there's no going back from drones and big sports monitors now, so we might as well invest in the AI sports industry.

AI has already increased competitiveness by a huge margin. With effective sensors and algorithms, AI has something in hand for game strategists, sports companies, advertisers, franchise owners, and spectators. With such a broad scope of implementations, businesses are likely to invest in health and sports fitness application development, AI development services, and similar technical sports opportunities.

Therefore, make sure you are not too late to the AI-in-sports party.

Appinventiv is an AI and ML development company that designs intelligent solutions to help your business solve problems, automate tasks and serve customers better. Unlock business opportunities with intelligent AI-driven solutions with a wide range of services such as data capture and processing, multi-platform integration, machine learning solutions, and analytics. Talk to our AI/ML experts to grab top industrial AI solutions.

Sudeep Srivastava

Excerpt from:

Applications and use cases of AI in Sports - Appinventiv

Opinion: The Long, Uncertain Road to Artificial General Intelligence – Undark Magazine

Posted: June 3, 2022 at 12:50 pm

Last month, DeepMind, a subsidiary of technology giant Alphabet, set Silicon Valley abuzz when it announced Gato, perhaps the most versatile artificial intelligence model in existence. Billed as a "generalist agent," Gato can perform over 600 different tasks. It can drive a robot, caption images, identify objects in pictures, and more. It is probably the most advanced AI system on the planet that isn't dedicated to a singular function. And, to some computing experts, it is evidence that the industry is on the verge of reaching a long-awaited, much-hyped milestone: Artificial General Intelligence.

Unlike ordinary AI, Artificial General Intelligence wouldnt require giant troves of data to learn a task. Whereas ordinary artificial intelligence has to be pre-trained or programmed to solve a specific set of problems, a general intelligence can learn through intuition and experience.

An AGI would in theory be capable of learning anything that a human can, if given the same access to information. Basically, if you put an AGI on a chip and then put that chip into a robot, the robot could learn to play tennis the same way you or I do: by swinging a racket around and getting a feel for the game. That doesn't necessarily mean the robot would be sentient or capable of cognition. It wouldn't have thoughts or emotions, it'd just be really good at learning to do new tasks without human aid.

This would be huge for humanity. Think about everything you could accomplish if you had a machine with the intellectual capacity of a human and the loyalty of a trusted canine companion, a machine that could be physically adapted to suit any purpose. That's the promise of AGI. It's C-3PO without the emotions, Lt. Commander Data without the curiosity, and Rosey the Robot without the personality. In the hands of the right developers, it could epitomize the idea of human-centered AI.

But how close, really, is the dream of AGI? And does Gato actually move us closer to it?

For a certain group of scientists and developers (I'll call this group the "Scaling-Uber-Alles" crowd, adopting a term coined by world-renowned AI expert Gary Marcus), Gato and similar systems based on transformer models of deep learning have already given us the blueprint for building AGI. Essentially, these transformers use humongous databases and billions or trillions of adjustable parameters to predict what will happen next in a sequence.
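To make "predict what will happen next in a sequence" concrete, here is a minimal next-token example using a small off-the-shelf language model from the Hugging Face transformers library; GPT-2 merely stands in for far larger systems like Gato, whose internals the article does not describe:

```python
# Next-token prediction with a small pretrained language model (GPT-2).
# Gato and similar systems apply the same idea at far larger scale and across modalities.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

prompt = "The robot picked up the racket and"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits          # [1, seq_len, vocab_size]

next_id = int(logits[0, -1].argmax())        # most likely next token
print(prompt + tokenizer.decode(next_id))
```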

The Scaling-Uber-Alles crowd, which includes notable names such as OpenAI's Ilya Sutskever and the University of Texas at Austin's Alex Dimakis, believes that transformers will inevitably lead to AGI; all that remains is to make them bigger and faster. As Nando de Freitas, a member of the team that created Gato, recently tweeted: "It's all about scale now! The Game is Over! It's about making these models bigger, safer, compute efficient, faster at sampling, smarter memory ..." De Freitas and company understand that they'll have to create new algorithms and architectures to support this growth, but they also seem to believe that an AGI will emerge on its own if we keep making models like Gato bigger.

Call me old-fashioned, but when a developer tells me their plan is to wait for an AGI to magically emerge from the miasma of big data like a mudfish from primordial soup, I tend to think they're skipping a few steps. Apparently, I'm not alone. A host of pundits and scientists, including Marcus, have argued that something fundamental is missing from the grandiose plans to build Gato-like AI into full-fledged generally intelligent machines.

I recently explained my thinking in a trilogy of essays for The Next Web's Neural vertical, where I'm an editor. In short, a key premise of AGI is that it should be able to obtain its own data. But deep learning models, such as transformer AIs, are little more than machines designed to make inferences relative to the databases that have already been supplied to them. They're librarians and, as such, they are only as good as their training libraries.

A general intelligence could theoretically figure things out even if it had a tiny database. It would intuit the methodology to accomplish its task based on nothing more than its ability to choose which external data was and wasn't important, like a human deciding where to place their attention.

Gato is cool and there's nothing quite like it. But, essentially, it is a clever package that arguably presents the illusion of a general AI through the expert use of big data. Its giant database, for example, probably contains datasets built on the entire contents of websites such as Reddit and Wikipedia. It's amazing that humans have managed to do so much with simple algorithms just by forcing them to parse more data.

In fact, Gato is such an impressive way to fake general intelligence, it makes me wonder if we might be barking up the wrong tree. Many of the tasks Gato is capable of today were once believed to be something only an AGI could do. It feels like the more we accomplish with regular AI, the harder the challenge of building a general agent appears to be.

For those reasons, I'm skeptical that deep learning alone is the path to AGI. I believe we'll need more than bigger databases and additional parameters to tweak. We'll need an entirely new conceptual approach to machine learning.

I do think that humanity will eventually succeed in the quest to build AGI. My best guess is that we will knock on AGI's door sometime around the early-to-mid 2100s, and that, when we do, we'll find that it looks quite different from what the scientists at DeepMind are envisioning.

But the beautiful thing about science is that you have to show your work, and, right now, DeepMind is doing just that. It's got every opportunity to prove me and the other naysayers wrong.

I truly, deeply hope it succeeds.

Tristan Greene is a futurist who believes in the power of human-centered technology. He's currently the editor of The Next Web's futurism vertical, Neural.

Original post:

Opinion: The Long, Uncertain Road to Artificial General Intelligence - Undark Magazine

Oregon is dropping an artificial intelligence tool used in child welfare system – NPR

Posted: at 12:50 pm

Sen. Ron Wyden, D-Ore., speaks during a Senate Finance Committee hearing on Oct. 19, 2021. Wyden says he has long been concerned about the algorithms used by his state's child welfare system. (Mandel Ngan/AP)

Child welfare officials in Oregon will stop using an algorithm to help decide which families are investigated by social workers, opting instead for a new process that officials say will make better, more racially equitable decisions.

The move comes weeks after an Associated Press review of a separate algorithmic tool in Pennsylvania that had originally inspired Oregon officials to develop their model, and was found to have flagged a disproportionate number of Black children for "mandatory" neglect investigations when it first was in place.

Oregon's Department of Human Services announced to staff via email last month that after "extensive analysis" the agency's hotline workers would stop using the algorithm at the end of June to reduce disparities concerning which families are investigated for child abuse and neglect by child protective services.

"We are committed to continuous quality improvement and equity," Lacey Andresen, the agency's deputy director, said in the May 19 email.

Jake Sunderland, a department spokesman, said the existing algorithm would "no longer be necessary," since it can't be used with the state's new screening process. He declined to provide further details about why Oregon decided to replace the algorithm and would not elaborate on any related disparities that influenced the policy change.

Hotline workers' decisions about reports of child abuse and neglect mark a critical moment in the investigations process, when social workers first decide if families should face state intervention. The stakes are high: not attending to an allegation could end with a child's death, but scrutinizing a family's life could set them up for separation.

From California to Colorado and Pennsylvania, as child welfare agencies use or consider implementing algorithms, an AP review identified concerns about transparency, reliability and racial disparities in the use of the technology, including their potential to harden bias in the child welfare system.

U.S. Sen. Ron Wyden, an Oregon Democrat, said he had long been concerned about the algorithms used by his state's child welfare system and reached out to the department again following the AP story to ask questions about racial bias, a prevailing concern with the growing use of artificial intelligence tools in child protective services.

"Making decisions about what should happen to children and families is far too important a task to give untested algorithms," Wyden said in a statement. "I'm glad the Oregon Department of Human Services is taking the concerns I raised about racial bias seriously and is pausing the use of its screening tool."

Sunderland said Oregon child welfare officials had long been considering changing their investigations process before making the announcement last month.

He added that the state decided recently that the algorithm would be completely replaced by its new program, called the Structured Decision Making model, which aligns with many other child welfare jurisdictions across the country.

Oregon's Safety at Screening Tool was inspired by the influential Allegheny Family Screening Tool, which is named for the county surrounding Pittsburgh, and is aimed at predicting the risk that children face of winding up in foster care or being investigated in the future. It was first implemented in 2018. Social workers view the numerical risk scores the algorithm generates (the higher the number, the greater the risk) as they decide if a different social worker should go out to investigate the family.

But Oregon officials tweaked their original algorithm to only draw from internal child welfare data in calculating a family's risk, and tried to deliberately address racial bias in its design with a "fairness correction."

In response to Carnegie Mellon University researchers' findings that Allegheny County's algorithm initially flagged a disproportionate number of Black families for "mandatory" child neglect investigations, county officials called the research "hypothetical," and noted that social workers can always override the tool, which was never intended to be used on its own.

Wyden is a chief sponsor of a bill that seeks to establish transparency and national oversight of software, algorithms and other automated systems.

"With the livelihoods and safety of children and families at stake, technology used by the state must be equitable and I will continue to watchdog," Wyden said.

The second tool that Oregon developed, an algorithm to help decide when foster care children can be reunified with their families, remains on hiatus as researchers rework the model. Sunderland said the pilot was paused months ago due to inadequate data but that there is "no expectation that it will be unpaused soon."

In recent years, while under scrutiny by a crisis oversight board ordered by the governor, the state agency (currently preparing to hire its eighth new child welfare director in six years) considered three additional algorithms, including predictive models that sought to assess a child's risk for death and severe injury, whether children should be placed in foster care, and if so, where. Sunderland said the child welfare department never built those tools, however.

Continued here:

Oregon is dropping an artificial intelligence tool used in child welfare system - NPR

Evaluating brain MRI scans with the help of artificial intelligence – MIT Technology Review

Posted: at 12:50 pm

Greece is just one example of a population where the share of older people is expanding, and with it the incidence of neurodegenerative diseases. Among these, Alzheimer's disease is the most prevalent, accounting for 70% of neurodegenerative disease cases in Greece. According to estimates published by the Alzheimer Society of Greece, 197,000 people are suffering from the disease at present. This number is expected to rise to 354,000 by 2050.

Dr. Andreas Papadopoulos, a physician and scientific coordinator at Iatropolis Medical Group, a leading diagnostic provider near Athens, Greece, explains the key role of early diagnosis: "The likelihood of developing Alzheimer's may be only 1% to 2% at age 65. But then it doubles every five years. Existing drugs cannot reverse the course of the degeneration; they can only slow it down. This is why it's crucial to make the right diagnosis in the preliminary stages, when the first mild cognitive disorder appears, and to filter out Alzheimer's patients."

Diseases like Alzheimer's or other neurodegenerative pathologies characteristically have a very slow progression, which makes it difficult to recognize and quantify pathological changes on brain MRI images at an early stage. In evaluating scans, some radiologists describe the process as one of "guesstimation," as visual changes in the highly complex anatomy of the brain are not always possible to observe well with the human eye. This is where technical innovations such as artificial intelligence can offer support in interpreting clinical images.

One such tool is the AI-Rad Companion Brain MR. Part of a family of AI-based, decision-support solutions for imaging, AI-Rad Companion Brain MR is a brain volumetry software that provides automatic volumetric quantification of different brain segments. "It is able to segment them from each other: it isolates the hippocampi and the lobes of the brain and quantifies white matter and gray matter volumes for each segment individually," says Dr. Papadopoulos. "In total, it has the capacity to segment, measure volumes, and highlight more than 40 regions of the brain."

"Calculating volumetric properties manually can be an extremely laborious and time-consuming task. More importantly, it also involves a degree of precise observation that humans are simply not able to achieve," says Dr. Papadopoulos. Papadopoulos has always been an early adopter and welcomed technological innovations in imaging throughout his career. This AI-powered tool means that he can now also compare the quantifications with normative data from a healthy population. And it's not all about the automation: the software displays the data in a structured report and generates a highlighted deviation map based on user settings. This allows the user to also monitor volumetric changes manually, with all the key data prepared automatically in advance.

Opportunities for more accurate observation and evaluation of volumetric changes in the brain encourage Papadopoulos when he considers how important the early detection of neurodegenerative diseases is. He explains: "In the early stages, the volumetric changes are small. In the hippocampus, for example, there is a volume reduction of 10% to 15%, which is very difficult for the eye to detect. But the objective calculations provided by the system could prove a big help."
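To make the idea of volumetric quantification and deviation mapping concrete, here is a small sketch that is independent of Siemens' actual software: the label IDs, voxel spacing, and normative values are all assumptions invented for the example. It turns a labeled segmentation into per-region volumes and flags deviations from an assumed normative range.

```python
# Hypothetical brain-volumetry sketch: count labeled voxels, convert to millilitres,
# and compare against assumed normative means and standard deviations (z-scores).
import numpy as np

# Stand-in segmentation: integer label per voxel (0 = background), 1 mm isotropic voxels.
segmentation = np.zeros((180, 220, 180), dtype=np.int16)
segmentation[80:100, 100:130, 60:80] = 1          # fake "left hippocampus" blob
voxel_volume_ml = (1.0 * 1.0 * 1.0) / 1000.0      # mm^3 -> ml

normative = {1: ("left hippocampus", 3.2, 0.4)}   # label: (name, mean ml, std ml) -- assumed

for region_label, (name, mean_ml, std_ml) in normative.items():
    volume_ml = np.count_nonzero(segmentation == region_label) * voxel_volume_ml
    z = (volume_ml - mean_ml) / std_ml
    flag = "DEVIATION" if abs(z) > 2 else "within normal range"
    print(f"{name}: {volume_ml:.1f} ml (z = {z:+.1f}) -> {flag}")
```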

The aim of AI is to relieve physicians of a considerable burden and, ultimately, to save time when optimally embedded in the workflow. An extremely valuable role for this particular AI-powered postprocessing tool is that it can visualize deviations in the different structures that might be hard to identify with the naked eye. Papadopoulos already recognizes that the greatest advantage in his work is the objective framework that AI-Rad Companion Brain MR provides, on which he can base his subjective assessment during an examination.

AI-Rad Companion from Siemens Healthineers supports clinicians in their daily routine of diagnostic decision-making. To maintain a continuous value stream, our AI-powered tools include regular software updates and upgrades that are deployed to the customers via the cloud. Customers can decide whether they want to integrate a fully cloud-based approach into their working environment, leveraging all the benefits of the cloud, or a hybrid approach that allows them to process imaging data within their own hospital IT setup.

The upcoming software version of AI-Rad Companion Brain MR will contain new algorithms that are capable of segmenting, quantifying, and visualizing white matter hyperintensities (WMH). Along with the McDonald criteria, reporting WMH aids in multiple sclerosis (MS) evaluation.

Follow this link:

Evaluating brain MRI scans with the help of artificial intelligence - MIT Technology Review

Artificial Intelligence Model Can Successfully Predict the Reoccurrence of Crohn's Disease – SciTechDaily

Posted: at 12:50 pm

A new study finds that an artificial intelligence model can predict whether Crohn's disease will recur after surgery.

A deep learning model trained to analyze histological images of surgical specimens accurately classified patients with and without Crohn's disease recurrence, investigators report in The American Journal of Pathology.

According to researchers, more than 500,000 individuals in the United States have Crohn's disease, a chronic inflammatory bowel disease that damages the lining of the digestive system. The resulting inflammation may cause abdominal pain, severe diarrhea, exhaustion, weight loss, and malnutrition.

Many people end up needing surgery to treat their Crohn's disease. Even after a successful operation, recurrence is common. Now, researchers are reporting that their AI tool is highly accurate at predicting the postoperative recurrence of Crohn's disease. It also linked recurrence with the histology of subserosal adipose cells and mast cell infiltration.

Using an artificial intelligence (AI) tool that simulates how humans visualize and is trained to identify and categorize pictures, researchers created a model that predicts the postoperative recurrence of Crohn's disease with high accuracy by evaluating histological images. The AI tool also identified previously unknown differences in adipose cells and substantial disparities in the degree of mast cell infiltration in the subserosa, or outer lining of the gut, when comparing individuals with and without disease recurrence. Elsevier's The American Journal of Pathology published the findings.

The 10-year rate of postoperative symptomatic recurrence of Crohn's disease, a chronic inflammatory gastrointestinal illness, is believed to be 40%. Although there are scoring methods to measure Crohn's disease activity and the existence of postoperative recurrence, no scoring system has been devised to predict whether Crohn's disease will return.

Sixty-eight patients with Crohn's disease were classified according to the presence or absence of postoperative recurrence within two years. The investigators performed histological analysis of surgical specimens using deep learning EfficientNet-b5, a commercially available AI model designed to perform image classification. They achieved a highly accurate prediction of postoperative recurrence (AUC = 0.995) and discovered morphological differences in adipose cells between the two groups. Credit: The American Journal of Pathology

"Most of the analysis of histopathological images using AI in the past have targeted malignant tumors," explained lead investigators Takahiro Matsui, MD, Ph.D., and Eiichi Morii, MD, Ph.D., Department of Pathology, Osaka University Graduate School of Medicine, Osaka, Japan. "We aimed to obtain clinically useful information for a wider variety of diseases by analyzing histopathology images using AI. We focused on Crohn's disease, in which postoperative recurrence is a clinical problem."

The research involved 68 Crohn's disease patients who underwent bowel resection between January 2007 and July 2018. They were divided into two groups based on whether or not they had postoperative disease recurrence within two years after surgery. Each group was divided into two subgroups, one for training and the other for validation of an AI model. Whole-slide pictures of surgical specimens were cropped into tile images for training, labeled for the presence or absence of postsurgical recurrence, and then processed using EfficientNet-b5, a commercially available AI model built to perform image classification. When the model was tested with unlabeled images, the findings indicated that the deep learning model accurately classified them according to the presence or absence of disease recurrence.
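The study names EfficientNet-b5 as the classifier; a minimal fine-tuning sketch along those lines might look as follows, using torchvision's EfficientNet rather than the authors' exact pipeline, and with the tile size, hyperparameters, and stand-in data all assumed for illustration:

```python
# Hedged sketch: fine-tune EfficientNet-b5 to classify histology tiles as
# recurrence (1) vs. non-recurrence (0). Data loading and hyperparameters are assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision.models import efficientnet_b5

model = efficientnet_b5(weights="DEFAULT")
model.classifier[1] = nn.Linear(model.classifier[1].in_features, 2)  # 2-class head

# Stand-in dataset: random tensors in place of 456x456 RGB tile images.
tiles = torch.rand(16, 3, 456, 456)
labels = torch.randint(0, 2, (16,))
loader = DataLoader(TensorDataset(tiles, labels), batch_size=4, shuffle=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, targets in loader:                    # one illustrative epoch
    optimizer.zero_grad()
    loss = criterion(model(images), targets)
    loss.backward()
    optimizer.step()
    print(f"batch loss: {loss.item():.3f}")
```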

Following that, prediction heat maps were created to identify areas and histological features from which the machine learning algorithm could accurately predict recurrence. All layers of the intestinal wall were shown in the images. The heat maps revealed that the algorithm made its accurate predictions from the subserosal adipose tissue layer, whereas the model was less precise in other regions, such as the mucosal and proper muscular layers. The images with the most accurate predictions, drawn from both the non-recurrence and recurrence test datasets, all contained adipose tissue.

Because the machine learning model achieved accurate predictions from images of subserosal tissue, the investigators hypothesized that subserosal adipose cell morphologies differed between the recurrence and the non-recurrence groups. Adipose cells in the recurrence group had a significantly smaller cell size, higher flattening, and smaller center-to-center cell distance values than those in the nonrecurrence group.
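A hedged sketch of how such shape measurements can be computed from a segmented image follows; the toy segmentation mask and all measurements below are assumptions for illustration, and the paper's own image-analysis pipeline may differ:

```python
# Hypothetical morphometry sketch: measure adipocyte size, flattening, and
# nearest-neighbour centre-to-centre distance from a binary segmentation mask.
import numpy as np
from scipy.spatial import cKDTree
from skimage.measure import label, regionprops

# Stand-in binary mask of adipose cells (True = cell pixel).
mask = np.zeros((200, 200), dtype=bool)
mask[20:50, 20:60] = True
mask[80:105, 120:150] = True
mask[140:180, 40:70] = True

regions = regionprops(label(mask))
areas = [r.area for r in regions]
flattening = [1.0 - r.minor_axis_length / r.major_axis_length for r in regions]
centroids = np.array([r.centroid for r in regions])

# Nearest-neighbour centre-to-centre distances (k=2: the first hit is the point itself).
dists, _ = cKDTree(centroids).query(centroids, k=2)
print("mean area (px):", np.mean(areas))
print("mean flattening:", np.round(np.mean(flattening), 3))
print("mean centre-to-centre distance (px):", np.round(dists[:, 1].mean(), 1))
```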

"These features, defined as adipocyte shrinkage, are important histological characteristics associated with Crohn's disease recurrence," said Dr. Matsui and Dr. Morii.

The investigators also hypothesized that the differences in adipocyte morphology between the two groups were associated with some degree or type of inflammatory condition in the tissue. They found that the recurrence group had a significantly higher number of mast cells infiltrating the subserosal adipose tissue, indicating that the cells are associated with the recurrence of Crohn's disease and the adipocyte shrinkage phenomenon.

To the investigators' knowledge, these findings are the first to link postoperative recurrence of Crohn's disease with the histology of subserosal adipose cells and mast cell infiltration. Dr. Matsui and Dr. Morii observed, "Our findings enable stratification by the prognosis of postoperative Crohn's disease patients. Many drugs, including biologicals, are used to prevent Crohn's disease recurrence, and proper stratification can enable more intensive and successful treatment of high-risk patients."

Reference: "Deep Learning Analysis of Histologic Images from Intestinal Specimen Reveals Adipocyte Shrinkage and Mast Cell Infiltration to Predict Postoperative Crohn Disease" by Hiroki Kiyokawa, Masatoshi Abe, Takahiro Matsui, Masako Kurashige, Kenji Ohshima, Shinichiro Tahara, Satoshi Nojima, Takayuki Ogino, Yuki Sekido, Tsunekazu Mizushima and Eiichi Morii, 28 March 2022, The American Journal of Pathology. DOI: 10.1016/j.ajpath.2022.03.006

More:

Artificial Intelligence Model Can Successfully Predict the Reoccurrence of Crohn's Disease - SciTechDaily

Early Detection of Arthritis Now Possible Thanks to Artificial Intelligence – SciTechDaily

Posted: at 12:50 pm

A new study finds that utilizing artificial intelligence could allow scientists to detect arthritis earlier.

Researchers have been able to teach artificial intelligence neural networks to distinguish between two different kinds of arthritis and healthy joints. The neural network was able to detect 82% of the healthy joints and 75% of cases of rheumatoid arthritis. When combined with the expertise of a doctor, it could lead to much more accurate diagnoses. Researchers are planning to investigate this approach further in another project.

This breakthrough by a team of doctors and computer scientists has been published in the journal Frontiers in Medicine.

There are many different varieties of arthritis, and determining which type of inflammatory illness is affecting a patient's joints may be difficult. Computer scientists and physicians from Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU) and Universitätsklinikum Erlangen have now taught artificial neural networks to distinguish between rheumatoid arthritis, psoriatic arthritis, and healthy joints in an interdisciplinary research effort.

Within the scope of the BMBF-funded project "Molecular characterization of arthritis remission" (MASCARA), a team led by Prof. Andreas Maier and Lukas Folle from the Chair of Computer Science 5 (Pattern Recognition) and PD Dr. Arnd Kleyer and Prof. Dr. Georg Schett from the Department of Medicine 3 at Universitätsklinikum Erlangen was tasked with investigating the following questions: Can artificial intelligence (AI) recognize different forms of arthritis based on joint shape patterns? Is this strategy useful for making more precise diagnoses of undifferentiated arthritis? Is there any part of the joint that should be inspected more carefully during a diagnosis?

Currently, a lack of biomarkers makes correct categorization of the relevant form of arthritis challenging. X-ray pictures used to help diagnosis are also not completely trustworthy since their two-dimensionality is insufficiently precise and leaves room for interpretation. This is in addition to the challenge of placing the joint under examination for X-ray imaging.

To find the answers to its questions, the research team focused its investigations on the metacarpophalangeal joints of the fingers, regions in the body that are very often affected early on in patients with autoimmune diseases such as rheumatoid arthritis or psoriatic arthritis. A network of artificial neurons was trained using finger scans from high-resolution peripheral quantitative computed tomography (HR-pQCT) with the aim of differentiating between healthy joints and those of patients with rheumatoid or psoriatic arthritis.

HR-pQCT was selected as it is currently the best quantitative method of producing three-dimensional images of human bones in the highest resolution. In the case of arthritis, changes in the structure of bones can be very accurately detected, which makes precise classification possible.

A total of 932 new HR-pQCT scans from 611 patients were then used to check if the artificial network can actually implement what it had learned: Can it provide a correct assessment of the previously classified finger joints?

The results showed that AI detected 82% of the healthy joints, 75% of the cases of rheumatoid arthritis, and 68% of the cases of psoriatic arthritis, which is a very high hit probability without any further information. When combined with the expertise of a rheumatologist, it could lead to much more accurate diagnoses. In addition, when presented with cases of undifferentiated arthritis, the network was able to classify them correctly.
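Those per-class hit rates correspond to what is usually reported as per-class recall. A small sketch of how such figures might be computed from a model's predictions is shown below; the prediction vectors are invented stand-ins, not the study's data:

```python
# Illustrative computation of per-class recall ("hit rate") for the three classes;
# y_true and y_pred here are synthetic, not the study's data.
import numpy as np
from sklearn.metrics import confusion_matrix, recall_score

classes = ["healthy", "rheumatoid arthritis", "psoriatic arthritis"]
rng = np.random.default_rng(42)
y_true = rng.integers(0, 3, 300)
# Simulate a classifier that is right most of the time.
y_pred = np.where(rng.random(300) < 0.75, y_true, rng.integers(0, 3, 300))

print(confusion_matrix(y_true, y_pred))
for name, r in zip(classes, recall_score(y_true, y_pred, average=None)):
    print(f"{name}: {r:.0%} of cases correctly identified")
```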

"We are very satisfied with the results of the study as they show that artificial intelligence can help us to classify arthritis more easily, which could lead to quicker and more targeted treatment for patients. However, we are aware of the fact that there are other categories that need to be fed into the network. We are also planning to transfer the AI method to other imaging methods such as ultrasound or MRI, which are more readily available," explains Lukas Folle.

Whereas the research team was able to use high-resolution computed tomography, this type of imaging is only rarely available to physicians under normal circumstances because of constraints in terms of space and costs. However, these new findings are still useful, as the neural network detected certain areas of the joints, known as intra-articular hotspots, that provide the most information about a specific type of arthritis. "In the future, this could mean that physicians could use these areas as another piece in the diagnostic puzzle to confirm suspected cases," explains Dr. Kleyer. "This would save time and effort during the diagnosis and is in fact already possible using ultrasound, for example." Kleyer and Maier are planning to investigate this approach further in another project with their research groups.

Reference: "Deep Learning-Based Classification of Inflammatory Arthritis by Identification of Joint Shape Patterns – How Neural Networks Can Tell Us Where to Deep Dive Clinically" by Lukas Folle, David Simon, Koray Tascilar, Gerhard Krönke, Anna-Maria Liphardt, Andreas Maier, Georg Schett and Arnd Kleyer, 10 March 2022, Frontiers in Medicine. DOI: 10.3389/fmed.2022.850552

View original post here:

Early Detection of Arthritis Now Possible Thanks to Artificial Intelligence - SciTechDaily

Global Graphene Electronics Market Report 2021-2028: Developments in Artificial Intelligence and Machine Learning Abilities to Expand Graphene…

Posted: at 12:50 pm

DUBLIN--(BUSINESS WIRE)--The "Graphene Electronics Market Report - Global Industry Data, Analysis and Growth Forecasts by Type, Application and Region, 2021-2028" report has been added to ResearchAndMarkets.com's offering.

The Graphene Electronics market shows an attractive growth rate during the forecast period on the back of advancements in technology. The latest developments in artificial intelligence and machine learning abilities are expected to expand Graphene Electronics applications and drive demand during the forecast period to 2028.

The COVID-19 pandemic has had a significant impact on manufacturers of Graphene Electronics due to disruptions in the supply chain and frequent lockdowns. Further, the economic slowdown and geopolitical matters limited Graphene Electronics market growth in 2020. As the market recovers from the pandemic, we forecast the growth trajectory to vary across regions, with some countries offering huge growth potential while others report limited profit margins.

New-generation Graphene Electronics with improved performance, offering higher accuracy and flexibility and easy integration into systems, spur growth in the Graphene Electronics industry. However, a paradigm shift toward a connected world and the growing requirement for miniaturization are necessitating further advancement in the Graphene Electronics market to develop smarter products.

Research and development in the Graphene Electronics industry to drive down costs and improve functionality is expected to advance in the medium term. Autonomous vehicles poised to hit the mainstream, alongside rapid growth in AI computing capabilities and improving commercial viability, offer enormous opportunities in the Graphene Electronics market. Over the forecast period to 2028, we forecast the Graphene Electronics market to regain growth momentum, mainly with support from developing markets.

Graphene Electronics market competitive landscape

On the Graphene Electronics market structure front, the consolidation observed in 2020 is expected to continue in 2021. Mergers and acquisitions are primarily aimed at acquiring new technologies, strengthening portfolios, and leveraging capabilities.

Companies operating in the Graphene Electronics market were hard hit by the adverse effects of COVID, with the major difficulty being supply chain management. Managing production amid shortages of supplies and workforce limited the profitability of companies in 2020 and created the need to adapt to more agile ways of working.

However, the growing trends of online work and education, along with the exponential development of the e-commerce industry, are helping companies regain their market share. Detailed profiles of top companies in the Graphene Electronics industry, along with their key strategies to 2028, are provided in the report.

Impact of COVID 19 on Graphene Electronics Industry

The global Graphene Electronics market study carefully examines the deviation in the global outlook due to COVID-19, considering its impact on supply chains, economies, and consumer preferences by country and region.

The report identifies competitive strategies being implemented and planned by key companies in the Graphene Electronics market to counter adverse effects and take advantage of the new opportunities created by the pandemic situation. Different scenarios based on expected containment of the virus in the medium to long term are considered to provide Graphene Electronics market forecasts.

Graphene Electronics market segmentation

The research estimates global Graphene Electronics market revenues in 2021 with a detailed market share and penetration of different types, technologies, applications, and geographies in the Graphene Electronics market to 2028.

The study identifies current trends along with potential drivers and challenges leading to growth or decline in their market share, for each segment during the outlook period.

Key Topics Covered:

1. Executive Summary

1.1 Graphene Electronics Market Overview, 2021

1.2 Graphene Electronics Fastest-Growing Types, 2021-2028

1.3 Graphene Electronics Leading Application Segments, 2021-2028

1.4 Graphene Electronics High Potential Markets, 2021-2028

2. Market Insights and Strategic Analysis

2.1 Key Market trends

2.2 Market Drivers

2.3 Market Challenges

2.4 Industry Attractiveness - Porter's Five Forces Analysis

2.5 Impact of COVID-19 on the Market

3. Global Graphene Electronics Market Outlook

3.1 Global Graphene Electronics Market Outlook by Type, 2021-2028

3.2 Global Graphene Electronics Market Outlook by Application, 2021-2028

3.3 Global Graphene Electronics Market Outlook by Country, 2021-2028

4. Asia Pacific Graphene Electronics Market Outlook

4.1 Key Snapshot, 2021

4.2 Asia Pacific Graphene Electronics Market Outlook by Type, 2021-2028

4.3 Asia Pacific Graphene Electronics Market Outlook by Application, 2021-2028

4.4 Asia Pacific Graphene Electronics Market Outlook by Country, 2021-2028

5. Europe Graphene Electronics Market Outlook and Growth Opportunities

5.1 Key Snapshot, 2021

5.2 Europe Graphene Electronics Market Outlook by Type, 2021-2028

5.3 Europe Graphene Electronics Market Outlook by Application, 2021-2028

5.4 Europe Graphene Electronics Market Outlook by Country, 2021-2028

6. North America Graphene Electronics Market Outlook and Growth Opportunities

6.1 Key Snapshot, 2021

6.2 North America Graphene Electronics Market Outlook by Type, 2021-2028

6.3 North America Graphene Electronics Market Outlook by Application, 2021-2028

6.4 North America Graphene Electronics Market Outlook by Country, 2021-2028

7. South and Central America Graphene Electronics Market Outlook and Growth Opportunities

7.1 Key Snapshot, 2021

7.2 South and Central America Graphene Electronics Market Outlook by Type, 2021-2028

7.3 South and Central America Graphene Electronics Market Outlook by Application, 2021-2028

7.4 South and Central America Graphene Electronics Market Outlook, 2021-2028

8. Middle East Africa Graphene Electronics Market Outlook and Growth Opportunities

8.1 Key Snapshot, 2021

8.2 Middle East Africa Graphene Electronics Market Outlook by Type, 2021-2028

8.3 Middle East Africa Graphene Electronics Market Outlook by Application, 2021-2028

8.4 Middle East Africa Graphene Electronics Market Outlook by Country, 2021-2028

9. Competitive Analysis

9.1 Leading Companies in Graphene Electronics Market

9.2 Business Profiles of Leading Graphene Electronics Companies

Introduction

SWOT Analysis

Financial Analysis

10. Latest News and Developments in Global Graphene Electronics Market

For more information about this report visit https://www.researchandmarkets.com/r/4yrb4z

Excerpt from:

Global Graphene Electronics Market Report 2021-2028: Developments in Artificial Intelligence and Machine Learning Abilities to Expand Graphene...

Val Kilmer's Return: A.I. Created 40 Models to Revive His Voice Ahead of Top Gun: Maverick – Variety

Posted: at 12:50 pm

SPOILER ALERT: Do not read unless you have watched Top Gun: Maverick, in theaters now.

Top Gun fans knew ahead of time that Val Kilmer would be reprising his role of Tom "Iceman" Kazansky in the sequel, but the specifics of the actor's return were a question mark considering Kilmer lost the ability to speak after undergoing throat cancer treatment in 2014. The script for Top Gun: Maverick pulls from Kilmer's real life, with Iceman also having cancer and communicating through typing. Kilmer gets to say one brief line of dialogue. In real life, Kilmer's speaking voice has been revived courtesy of artificial intelligence.

Kilmer announced in August 2021 that he had partnered with Sonantic to create an A.I.-powered speaking voice for himself. The actor supplied the company with hours of archival footage featuring his speaking voice, which was then fed through the company's algorithms and turned into a model. According to Fortune, this process was used again for the actor's Top Gun: Maverick appearance. However, a studio source tells Variety no A.I. was used in the making of the movie.

"In the end, we generated more than 40 different voice models and selected the best, highest-quality, most expressive one," John Flynn, CTO and cofounder of Sonantic, said in a statement to Forbes about reviving Kilmer's voice. "Those new algorithms are now embedded into our voice engine, so future clients can automatically take advantage of them as well."

"I'm grateful to the entire team at Sonantic who masterfully restored my voice in a way I've never imagined possible," Kilmer originally said in a statement about the A.I. "As human beings, the ability to communicate is the core of our existence and the side effects from throat cancer have made it difficult for others to understand me. The chance to narrate my story, in a voice that feels authentic and familiar, is an incredibly special gift."

As Fortune reports: "After cleaning up old audio recordings of Kilmer, [Sonantic] used a voice engine to teach the voice model how to speak like Kilmer. The engine had around 10 times less data than it would have been given in a typical project, Sonantic said, and it wasn't enough. The company then decided to come up with new algorithms that could produce a higher-quality voice model using the available data."

Top Gun: Maverick is now playing in theaters nationwide.

View original post here:

Val Kilmer's Return: A.I. Created 40 Models to Revive His Voice Ahead of Top Gun: Maverick - Variety
