
Category Archives: Ai

The significance of AI-integrated procurement in an era of uncertainty | Ctech – CTech

Posted: February 28, 2022 at 8:20 pm

It is no secret that many enterprises and organizations, and procurement chain managers within them, had to change their worldviews during the last two years and produce new and agile responses and action plans to meet their business objectives.

The current reality will continue to require adjustments in work processes, tools, and how professionals prepare to answer needs and challenges, alongside continuing uncertainty and fluctuations in internal and external environments.

The dynamic and fragile state of the different markets, including suppliers' failure to meet delivery deadlines, critical shortages of raw materials, components, and products, and significant price fluctuations, impacts enterprises' ability to meet demand and remain profitable. It demands a course recalculation, with the understanding that the old and familiar trajectory of procurement processes and supplier relations must change.


Boaz Gilad, CEO of BizWatch

(Alan Chapelski)

In the resulting complex, even chaotic state, enterprises have become more and more dependent on their suppliers' performance quality. This situation has caused many enterprises to tighten their connections with existing suppliers, as doing so decreases risk and prevents compromising continuity of activities. Additionally, enterprises increased the safety stock in their warehouses to avoid a shortage of raw materials and/or components. Seemingly, this is the natural action taken by those who recognize the existing supplier's quality, the complexity of the market, and the difficulty of sourcing and setting up a new supplier in organizational procurement systems.

But this is precisely the essence of the problem. By doing so, enterprises deepen their dependence on existing suppliers, diminishing their flexibility and business latitude, and consequently find themselves constrained against their interests by a reduced number of suppliers.

For the record, this is not to doubt the suppliers' quality, reliability, and integrity. But in a competitive business environment, we must always be one step ahead, understand the challenges facing us, and prepare accordingly.

In addition to analyzing and understanding current and future challenges, preparedness with suitable tools, methods, people, and processes is required to enable an appropriate response.

Due to the market situation of the last two years, procurement professionals found themselves increasingly busy trying to source and analyze potential quality suppliers. Still, this is an almost impossible task without advanced, end-to-end supporting technology.

Since before the Coronavirus pandemic, and more intensely after its onset, there has been a significant change in trade, especially in B2B transactions. Social and travel restrictions, along with social distancing, unprecedentedly catalyzed the development and integration of advanced technological systems and tools in procurement processes. Market analysis indicates that even now, enterprises pay over $15 billion yearly for systems supporting procurement processes, especially for data analysis, risk analysis, CRM systems, and more. It is estimated that this spending will keep increasing because of the inherent advantages, and the challenges facing those dealing with procurement, export, and import of products and raw materials. However, it must be noted that the real change is not necessarily in the capacity for data processing (BI) of procurement processes, as was common until now.

The real revolution, and the disruption of the existing state, lies in combining the ability to collect and process up-to-date, dynamic, and objective data from multiple sources across the web with AI-based capabilities in procurement processes. Correct integration of advanced technologies provides numerous advantages, such as collecting extensive up-to-date data about market trends on the one hand and analyzing needs and demand on the other. It enables the automatic, efficient, and fast discovery and analysis of potential, previously unknown suppliers; fast, automated requests for quotes or for raw material data, prices, stocks, and availability; and automatic analysis of the answers, giving procurement teams the ability to operate in digital trade arenas against a broad and dynamic range of quality suppliers.

Only organizations that adopt the required agility will withstand the storm and see it as an opportunity and leverage for fulfillment, development, and growth. In the last two years, CEOs, supply chain VPs and procurement managers have increasingly realized that the entire value chain must go through digital upgrade and transformation, since an upgrade of core production processes alone isn't sufficient, and they must bring supporting processes, i.e., procurement, to the technological fore.

This would allow procurement professionals to be proactive, managing and supporting production processes with maximal flexibility, a systematic data-based view, optimization, and the implementation of a new automation-based procurement process drawing on a dynamic mix of quality suppliers.

Boaz Gilad is the founder and CEO of BizWatch LTD, a portfolio member of i4Valley


Posted in Ai | Comments Off on The significance of AI-integrated procurement in an era of uncertainty | Ctech – CTech

Dr.Evidence Expands Capabilities of AI-powered Medical Search Engine – Business Wire

Posted: at 8:20 pm

SANTA MONICA, Calif.--(BUSINESS WIRE)--Dr.Evidence, the leading medical intelligence platform for life sciences companies, today announced groundbreaking enhancements to its market-leading medical search solution, DocSearch. DocSearch has expanded its coverage by adding global patents and grants to its database, and a new Drugs-Targets module adds the ability to identify new connections and generate hypotheses based on published literature.

DocSearch is a specialized, real-time medical search engine powered by AI that generates actionable insights based on the universe of published medical information, real-world evidence and proprietary data. With the addition of the new Drugs-Targets module, it becomes possible to get more from published literature by establishing links between chemicals, targets, and disease, helping identify previously unrealized connections. This literature-based discovery approach creates the opportunity for researchers to explore a wide range of new hypotheses.

"This expanded functionality drives potential breakthroughs by connecting existing knowledge in literature in new ways," said Dr.Evidence's Head of Search, Arturo Devesa. "Using the principle of Swanson Linking, we can uncover previously hidden connections between supposedly unrelated concepts in the biomedical literature and see possibilities emerging, leading to new hypotheses, in-silico drug-target profiling, target identification search, and disease-drug-target links and associations discovery search."
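The Swanson Linking idea referenced here can be sketched in a few lines: if some papers connect concept A to B, and other papers connect B to C, but no paper connects A to C directly, then A-C is a candidate hidden connection. Below is a toy illustration of that "ABC" co-occurrence pattern, seeded with Swanson's classic fish-oil/Raynaud's example; the function, corpus representation, and data are my own sketch, not DocSearch's actual method or index.

```python
def hidden_links(corpus, source):
    """Return (bridge, target) pairs reachable from `source` in two hops
    through the corpus but never co-mentioned with it directly."""
    # Concepts directly co-mentioned with the source term.
    direct = set()
    for doc in corpus:
        if source in doc:
            direct |= doc - {source}
    # Two-hop candidates: source -> bridge -> target, with no direct link.
    candidates = set()
    for doc in corpus:
        for bridge in direct & doc:
            for target in doc - {source, bridge}:
                if target not in direct:
                    candidates.add((bridge, target))
    return candidates

# Each document is reduced to the set of concepts it mentions.
corpus = [
    {"fish oil", "blood viscosity"},           # A - B literature
    {"blood viscosity", "Raynaud's disease"},  # B - C literature
    {"fish oil", "omega-3"},
]
print(hidden_links(corpus, "fish oil"))
```

Running this surfaces the ("blood viscosity", "Raynaud's disease") pair: fish oil is never mentioned with Raynaud's disease directly, but the shared "blood viscosity" bridge suggests the hypothesis.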

In addition to the Drugs-Targets module, DocSearch has added global patents (25 million) and grants (600,000) to its database. The addition of these new data sources will support more robust, comprehensive searches and will be instrumental in an upcoming DocSearch feature, Literature Lifecycle Visualization. The visualization will enable users to track a drug or treatment's end-to-end literature life cycle from grant to clinical trial to PubMed publication to patent to FDA label to congresses to news and social media.

Dr.Evidence's Chief Executive Officer, Bob Battista, commented, "The significant expansion of DocSearch capabilities and data sources is in direct response to the needs of our life sciences clients. We are committed to rapidly advancing the Dr.Evidence platform to positively impact patients' lives by uncovering insights that drive innovation."

About Dr.Evidence

Dr.Evidence is the leading medical intelligence platform for life sciences companies that enables teams to identify breakthrough insights grounded in the vast universe of published medical information, real-world evidence and proprietary data. It pushes the boundaries of healthcare technology and allows for new possibilities in science, enabling more informed decision making and faster time-to-market for accelerated impact.



Online tools to create mind-blowing AI art – Analytics India Magazine

Posted: at 8:20 pm

"Art needs an audience, and their tastes vary. Computer programmers have not made art. They have made objects, media, performance, software work. What makes art is an audience: as it is an artist who does work, it is the audiences who turn that work into art through appreciation," says Mike Rugnetta of PBS Idea Channel.

AI art can be made using Generative Adversarial Networks (GANs), in which two artificial neural networks are trained simultaneously against each other to generate images. Now, developers have built various tools that help you create AI art easily. Most of these tools follow the basic principles of GANs, with a few variations of their own.
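The adversarial setup can be illustrated with a toy one-dimensional GAN: a generator maps noise to samples, a discriminator scores samples as real (1) or fake (0), and each update pushes against the other. This is an untuned sketch for intuition only; real art generators use deep convolutional networks, and the target distribution and hyperparameters below are invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
REAL_MEAN = 4.0  # mean of the "real data" the generator must imitate

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator g(z) = w*z + b; discriminator d(x) = sigmoid(u*x + c).
g = {"w": 1.0, "b": 0.0}
d = {"u": 0.5, "c": 0.0}
lr = 0.03

for _ in range(3000):
    z = rng.normal(size=32)
    real = rng.normal(loc=REAL_MEAN, scale=0.5, size=32)
    fake = g["w"] * z + g["b"]

    # Discriminator step: push d(real) toward 1 and d(fake) toward 0.
    # For logistic loss, d(loss)/d(logit) = prediction - label.
    err_real = sigmoid(d["u"] * real + d["c"]) - 1.0
    err_fake = sigmoid(d["u"] * fake + d["c"]) - 0.0
    d["u"] -= lr * np.mean(err_real * real + err_fake * fake)
    d["c"] -= lr * np.mean(err_real + err_fake)

    # Generator step: push d(fake) toward 1, i.e. fool the discriminator.
    err_gen = sigmoid(d["u"] * fake + d["c"]) - 1.0
    grad_fake = err_gen * d["u"]  # chain rule back into the generator
    g["w"] -= lr * np.mean(grad_fake * z)
    g["b"] -= lr * np.mean(grad_fake)

samples = g["w"] * rng.normal(size=1000) + g["b"]
print(f"generated mean: {samples.mean():.2f} (target {REAL_MEAN})")
```

After training, the generated samples cluster near the real data's mean, the same dynamic that, at scale and in pixel space, lets GAN tools synthesize convincing images.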

So, what is AI Art?

Artificial intelligence art, or AI art, is any artwork created with the assistance of AI. It can be created autonomously by AI systems or created in collaboration with humans and an AI system.

In 2018, British auction house Christie's sold its first piece of computer-generated art, titled Portrait of Edmond Belamy and made by a French art collective named Obvious, for a whopping $432,500, about 45 times its estimated worth.

Portrait of Edmond Belamy

AI art has been creating a lot of buzz in the artist community, and whether art enthusiasts fancy it or not, it is here to stay and will keep evolving. Here is a look at some of the online tools available for creating mind-blowing AI artwork:

Magenta

Magenta is an open-source research project that trains ML models to generate AI art and music by manipulating source data like music and images. Magenta was started by the Google Brain team, which developed new deep learning and reinforcement learning algorithms for generating songs, images, drawings, and other materials. It is also exploring smart tools and interfaces that allow artists and musicians to extend their processes using these models. Magenta uses TensorFlow and releases its models and tools as open source on GitHub.

Google Deep Dream

Google engineer Alexander Mordvintsev created the AI art generator Deep Dream. This AI generator creates dream-like hallucinogenic pictures and uses a convolutional neural network to find and enhance patterns in images. Check out the experimental video of Deep Dream below.

Runway ML

Runway ML is an easy, code-free tool that makes it simple to experiment with ML models creatively. Primarily a video-photo editing and ML software for creatives, it can also create AI art and morph photos and videos.

Runway ML tool

Artbreeder

This AI art generator can create AI pictures by mixing two images. Formerly known as Ganbreeder, it is a collaborative, machine learning-based art website. Using the models StyleGAN and BigGAN, the app allows users to generate and modify images of faces, landscapes, and paintings, among other categories. Check out the video about Introduction to Artbreeder below:

Chimera Painter

Chimera Painter is an AI art tool that transforms a simple drawing into an impressive picture by adding features and textures, giving it a realistic look. Though aimed at game developers, this Google AI-powered tool can be used by anyone who wants to create a realistic-looking AI picture.

Chimera Painter

Fotor GoArt

Fotor GoArt is an AI art generator that can change any picture into a stunning piece of art in seconds without manual photo editing. Just upload your picture and choose a painting filter you like, and GoArt will automatically analyse and convert your original picture into a spectacular painting within seconds. GoArt offers a new, innovative way of creating art.

Image: GoArt (Pinterest)

NVIDIA AI Playground

Named after the post-Impressionist painter Paul Gauguin, NVIDIA's GauGAN demo in the AI Playground can create realistic landscapes, bringing rivers, rocks, and clouds to life with basic tools that render high-quality AI art. The tools are easy to use and don't require expertise or background. Check out their GauGAN: Changing Sketches into Photorealistic Masterpieces video below:

AI Art Machine by Google Colab

This is an art machine where you can enter text and get AI art. This notebook, by Hillel Wayne, is based on a notebook by Katherine Crowson. The platform is simplified to make it more accessible for non-programmers.

There are learning videos on YouTube about how to generate mind-blowing AI art in five minutes for free, uploaded by Anant Vijay Soni of AVSTech and the A.I. Whisperer. Check out the video links below:



Intel expands AI developer toolkit to bring more intelligence to the edge – ZDNet

Posted: at 8:20 pm

Intel on Wednesday announced that it's updating its OpenVINO AI developer toolkit, enabling developers to use it to bring a wider range of intelligent applications to the edge. Launched in 2018 with a focus on computer vision, OpenVINO now supports a broader range of deep learning models, which means adding support for audio and natural language processing use cases.

"With inference taking over as a critical workload at the edge, there's a much greater diversity of applications" under development, Adam Burns, Intel VP and GM of Internet of Things Group, said to ZDNet.

Since its launch, hundreds of thousands of developers have used OpenVINO to deploy AI workloads at the edge, according to Intel. A typical use case would be defect detection in a factory. Now, with broader model support, a manufacturer could use it to build a defect spotting system, plus a system to listen to a machine's motor for signs of failure.

Besides the expanded model support, the new version of OpenVINO offers more device portability choices and an updated, simplified API.

OpenVINO 2022.1 also includes a new automatic optimization process. The new capability auto-discovers the compute devices and accelerators on a given system, then dynamically load-balances and increases AI parallelization based on memory and compute capacity.

"Developers create applications on different systems," Burns said. "We want developers to be able to develop right on their laptop and deploy to any system."

Intel customers already using OpenVINO include automakers like BMW and Audi; John Deere, which uses it for welding inspection; and companies making medical imaging equipment like Samsung, Siemens, Philips and GE. The software is easily deployed into Intel-based solutions -- which is a compelling selling point, given that most inference workloads already run on Intel hardware.

"We expect a lot more data to be stored and processed at the edge," Sachin Katti, CTO of Intel's Network and Edge Group, said to ZDNet. "One of the killer apps at the edge is going to be inference-driven intelligence and automation."

Ahead of this year's Mobile World Congress, Intel on Thursday also announced a new system-on-chip (SoC) designed for the software-defined network and edge. The new Xeon D processors (the D-2700 and D-1700) are built for demanding use cases, such as security appliances, enterprise routers and switches, cloud storage, wireless networks, AI inferencing and edge servers -- use cases where compute processing needs to happen close to where the data is generated. The chips deliver integrated AI and crypto acceleration, built-in Ethernet, support for time-coordinated computing and time-sensitive networking.

More than 70 companies are working with Intel on designs that utilize the Xeon D processors, including Cisco, Juniper Networks and Rakuten Symphony.

Intel also said Thursday that its next-gen Xeon Scalable platform, Sapphire Rapids, includes unique 5G-specific signal processing instruction enhancements to support RAN-specific signal processing. This will make it easier for Intel customers to deploy vRAN (virtual Radio Access Networks) in demanding environments.



AI-generated faces are now more trustworthy than real ones – Fast Company

Posted: February 19, 2022 at 9:40 pm

You might be confident in your ability to tell a real face from one created using artificial intelligence. But a new study has found that your chance of choosing accurately would be slightly better if you just flipped a coin, and you are more likely to trust the fake face over the real one.

Published in the Proceedings of the National Academy of Sciences, the study was conducted by Hany Farid, a professor at the University of California, Berkeley, and Sophie J. Nightingale, a lecturer at England's University of Lancaster.

Farid has been exploring synthetic images, and how well people can tell them apart from real ones, for years. He initially focused on the rise of computer-generated imagery. But the medium's path has accelerated in recent years as deep-learning-based neural networks known as GANs (generative adversarial networks) have become more sophisticated at generating truly realistic synthetic images. "If you look at the rate of improvement of deep fakes and [GANs], it's an order of magnitude faster than CGI," he says. "We would argue that we are through the uncanny valley for still faces."

The problems with such realistic fakes are manifold. "Fraudulent online profiles are a good example. Fraudulent passport photos. Still photos have some nefarious usage," Farid says. "But where things are going to get really gnarly is with videos and audio."

Given the speed of these improvements, Farid and Nightingale wanted to explore whether faces created via artificial intelligence were able to convince viewers of their authenticity. Their study included three experiments aimed at understanding whether people can discern a real face from a synthetic one created by Nvidia's StyleGAN2. After identifying 800 images of real and fake faces, Farid and Nightingale asked participants to look at a selection of them and sort them into real and fake. Participants were correct less than half the time, with an average accuracy of 48.2%.
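As a side note on the statistics, whether an observed accuracy like 48.2% is distinguishable from the 50% expected of a coin flip can be checked with an exact binomial test. The sketch below uses a made-up trial count (n = 128), since the article does not report the study's actual sample sizes.

```python
from math import comb

def binom_two_sided_p(k, n, p=0.5):
    """Exact two-sided p-value: total probability of all outcomes no more
    likely than the observed count k under Binomial(n, p)."""
    pmf = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    return sum(x for x in pmf if x <= pmf[k] * (1 + 1e-9))

n = 128                 # hypothetical number of trials, for illustration
k = round(0.482 * n)    # 62 correct answers out of 128
print(round(binom_two_sided_p(k, n), 3))  # large p-value: consistent with chance
```

At this sample size, 48.2% accuracy is statistically indistinguishable from guessing, which is exactly the "flip a coin" framing above; a strongly above-chance score (say 110 of 128) would yield a vanishingly small p-value.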

A second experiment showed that even giving participants some tips on spotting AI-generated faces, and providing feedback as they made their decisions, didn't drastically improve their deciphering ability. Participants identified which face was real and which was fake with 59% accuracy, but saw no improvement over time. "Even with feedback, even with training trying to make them better, they did slightly better than chance, but they're still struggling," Farid says. "It's not like they got better and better; basically it helps a little bit, and then it plateaus."

The difficulty people had spotting faces created by artificial intelligence didn't particularly surprise Farid and Nightingale. They didn't anticipate, however, that when participants were asked to rate a set of real and fake faces based on their perceived trustworthiness, people would find synthetically generated faces 7.7% more trustworthy than real ones, a small but statistically significant difference.

"We were really surprised by this result because our motivation was to find an indirect route to improve performance, and we thought trust would be that, with real faces eliciting that more trustworthy feeling," Nightingale says.

Farid noted that in order to create more controlled experiments, he and Nightingale had worked to make provenance the only substantial difference between the real and fake faces. For every synthetic image, they used a mathematical model to find a similar one, in terms of expression and ethnicity, from databases of real faces. For every synthetic photo of a young Black woman, for example, there was a real counterpart.

Though the types of images GANs can convincingly create at the moment are still limited to passport-style photos, Nightingale says the deceptions pose a threat for everything from dating scams to social media.

"In terms of online romance scams, these images would be perfect," she says. "[For] things like Twitter disinformation attacks, rather than having a default egg image, you just take one of these images. People trust it, and if you trust something, you're probably more likely to share it. So you see how these types of images can already cause chaos."

So how do we protect against people using synthetic images for nefarious means? Farid is a champion of an approach called controlled capture, which is being built out by companies like TruePic and the Coalition for Content Provenance and Authentication. The technology captures metadata related to time and location for any photo taken within an app that has built-in camera function.

"I think the only solution is to authenticate at the point of recording, using a controlled capture-type of technology," he says. "And then, anything that has that, good; anything that doesn't, buyer beware. I think this [solution] is really going to start to get some traction, and my hope is in the coming years, we start taking trust and security more seriously online."

Beyond synthetic still images, the study comes as the world of synthetic media is growing. Synthesia, an Australian company, closed a $50 million series B round in December for AI avatars used in corporate communications; and London-based Metaphysic, the company behind viral deep fakes of Tom Cruise, raised $7.5 million earlier this year. As these technologies continue to improve and change what's possible to do with AI, Nightingale says researchers and companies will have to think seriously about the ethics involved.

"If the risks are greater than the benefits of some new technology, should we really be doing it at all?" she asks. "First of all, should we be developing it? And second, should we be uploading it to something like Github, where anyone can just get their hands on it? . . . As we see, once it's out there, we can't just take it back again because people have downloaded it, and it's too late."




DeepMind Teaches AI to Assist With Nuclear Fusion Experiments – PCMag

Posted: at 9:40 pm

DeepMind wants to use artificial intelligence to help scientists experiment with nuclear fusion, which it believes is a contender for "a source of clean, limitless energy" here on Earth.

The company says it collaborated with the Swiss Plasma Center at the EPFL technical university in Switzerland "to develop the first deep reinforcement learning (RL) system" devoted to the tools researchers are using to assess nuclear fusion's viability as an energy source.

That reinforcement learning system was designed to "autonomously discover how to control" a tokamak, which DeepMind says is "a doughnut-shaped vacuum surrounded by magnetic coils" that is "used to contain a plasma of hydrogen that is hotter than the core of the Sun."

It turns out that experimenting with something hotter than the Sun can be difficult. EPFL says that if a tokamak's settings aren't carefully managed the plasma within "could collide with the vessel walls and deteriorate." So researchers have to run their experiments in simulators first.

But those simulators can be hard to use, too, not least because of time constraints. DeepMind says that "plasma simulators are slow and require many hours of computer time to simulate one second of real time." That's hardly ideal for scientists racing to investigate nuclear fusion.

It's also where AI comes in. DeepMind and the Swiss Plasma Center published a study in Nature describing a system that's said to have allowed them to create "controllers that can both keep the plasma steady and be used to accurately sculpt it into different shapes" for further research.

"Similar to progress we've seen when applying AI to other scientific domains," DeepMind says, "our successful demonstration of tokamak control shows the power of AI to accelerate and assist fusion science, and we expect increasing sophistication in the use of AI going forward."




An interest in AI led this student to a job at Google – University of Georgia

Posted: at 9:40 pm

Foundation Fellow Nathan Safir is happiest when working on a complicated problem

As Nathan Safir was finishing up his master's in artificial intelligence at the University of Georgia, he had a big decision to make: Did he want to accept the Marshall Scholarship to pursue a Ph.D. in computer science in England or accept an offer to start work at Google this summer?

Safir, a Foundation Fellow, applied his problem-solving brain to the decision.

"Mostly I tried to simplify it into a decision of 'Am I more excited to work now or am I more excited to go to grad school now?'" said Safir, who has a B.S. in computer science with a minor in geography.

Safir in the Institute for Artificial Intelligence in the Boyd Research and Education Center. (Photo by Chamberlain Smith/UGA)

Google was the winner. For now. Safir hopes to get his Ph.D. in artificial intelligence sometime in the future.

Whatever way he gets there, the end goal is the same: he wants to work in artificial intelligence. "Whether it's working on a new cool application, existing AI methods, or actually doing the research, I'd really love to work somewhere in that space," said Safir.

This flexibility will serve Safir well at Google, because he doesn't yet know what he'll be doing at the company when he arrives in California this August. "They'll start team matching closer to my start date," said Safir. "All I know is that I'll be in the Bay Area and possibly working on the Google Ads team."

During a Google internship in summer 2021, he worked with the Cloud AI team.

UGA has helped him find his passion for AI in several ways. One was a difficult math class his first year in college. "I was a super eager freshman, so I took Math 3500. It's notorious for being very tough, and it was. I was glad I took it, but I realized I was less interested in the proof-based rigorous mathematics."

Nathan Safir outside of the Institute for Artificial Intelligence in the Boyd Research and Education Center. (Photo by Chamberlain Smith/UGA)

He decided to pick up machine learning instead. "It presented an interesting way to use mathematical thinking and be able to think creatively."

Also helpful in guiding his path was UGA's Foundation Fellowship, a merit-based scholarship based in the Jere W. Morehead Honors College that is awarded to 20 to 25 high school seniors out of 1,200 applicants. It's the reason Safir, who grew up in Kansas City, Missouri, ultimately chose UGA out of his ranked list of 13 potential schools.

He said he most appreciates the travel opportunities, although they've been a bit truncated during the COVID years, and the people he's met along the way. "The network of people you can talk to about careers or cool places you could work, that type of thing, was super helpful for me trying to imagine what I wanted to do in the future." Safir said he interacts with staff, peers and friends from the Fellowship on a daily basis.

Safir's gateway into programming was a documentary he saw in middle school on Mark Zuckerberg. "It said he learned C++ from a book when he was about my age, so I checked out the same C++ book from the library. I don't think I picked it up as quickly as he did, but that was the beginning of me trying to learn programming."

"I've been into math and quantitative problem solving since I was really young. Programming was just an extension of that logical problem-solving," he said.

Safir is a down-to-earth, well-rounded person who can move between playing in local Monday pick-up hockey games, coaching Athens Youth Hockey and working on his tennis game, and high-level problem-solving. But he's happiest when he has a tough question to work on.

"My thesis is working on a theoretical problem of how to extend an architecture called variational autoencoders to work with both labeled and unlabeled data," he said. "My lab, led by Dr. Quinn of the CS department, is at the intersection of biomedical imaging and artificial intelligence. It's a theoretical problem inspired by a real-world problem. It's an especially open field; I feel like there are a million different things you can work on. I'm just happy I can be a part of it."
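For readers unfamiliar with variational autoencoders, the objective such work builds on pairs a reconstruction term with a KL divergence between the encoder's Gaussian posterior and a standard-normal prior; semi-supervised extensions of the kind the thesis describes (for example, the well-known M2 model) add a label variable on top. A minimal sketch of the closed-form KL term, which is standard and not specific to Safir's thesis:

```python
import numpy as np

def vae_kl(mu, log_var):
    """KL( N(mu, exp(log_var)) || N(0, I) ) for a diagonal Gaussian
    encoder, summed over latent dimensions (the regularizer in the ELBO:
    ELBO = reconstruction log-likelihood - KL; training maximizes it)."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=-1)

# Two encoder outputs over a 2-D latent space.
mu = np.array([[0.0, 0.0], [2.0, 0.0]])
log_var = np.zeros((2, 2))
print(vae_kl(mu, log_var))  # first row matches the prior, so its KL is 0
```

The first posterior equals the prior and pays no penalty; the second, shifted away from the origin, is penalized, which is what keeps the latent space organized enough for labeled and unlabeled data to share it.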



Medicine Meets Big Data: Clinicians Look to AI for Disease Prediction, Prevention – University of Virginia

Posted: at 9:40 pm

From music streaming platforms to social media feeds and search engines, algorithms are used behind the scenes to tailor services to the unique preferences of individuals. Though the use of algorithms has been explored in health care since the origins of artificial intelligence, new strides in deep learning methods over the last decade are allowing clinicians to go after mass amounts of data that were previously inaccessible, transforming how doctors and clinical researchers detect, diagnose and treat disease.

In addition to higher data-computing capacities and advanced algorithms, clinicians can now input data through written and spoken words rather than only quantitative lab and imaging results. As they talk with patients about subjective feelings and pain levels, detailed interpretations can be coded to augment poking and prodding data collected through sensors, giving machine-learning algorithms a fuller picture. With enough input, algorithms will be able to output a series of patterns which physicians can then use in their clinical practice for better diagnoses and understandings of disease.

"What has happened in the last eight or so years is that new results coming out of these deep learning methods are allowing us to go after hard data problems that weren't accessible before," said Don Brown, the University of Virginia's senior associate dean for research, Quantitative Foundation Distinguished Professor in Data Science and a professor in the Department of Engineering Systems and Environment.

Five years ago, Brown was approached by pediatric gastroenterologist Dr. Sana Syed, who was then working as a gastroenterology fellow in her first year on the UVA faculty. Syed was part of a team researching environmental enteropathy, a disease caused by long-term exposure to poor sanitation and hygiene that results in intestinal inflammation and is a major contributor to growth stunting in children living in developing countries around the world.

During her time as a fellow, Syed's role was essentially that of an algorithm. By sifting through thousands of images taken by a video capsule endoscopy (a tiny wireless camera that works its way through a patient's gut in six to eight hours), Syed was tasked with tagging any abnormal images, such as polyps, bleeding and ulcers, to understand tissue structure and function in the context of inflammation. The identification of disease patterns would ideally minimize the need for doctors to conduct endoscopies (the collection of tissue from a patient's intestine, which is not a sustainable practice in many low- and middle-income countries) to confirm diagnoses. The tedious task pushed Syed and her colleagues to explore the use of AI as a way of picking up patterns of disease in tissue samples, saving countless hours of analysis.
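The triage idea described here, scoring each frame and surfacing only the suspicious ones for review, can be sketched in miniature. Everything below is illustrative: the feature names, scores and threshold are invented, and a real system would run a trained deep learning model over pixel data rather than this toy heuristic.

```python
# Toy sketch of automated frame triage for capsule endoscopy.
# `abnormality_score` is a hypothetical stand-in for a trained classifier.

def abnormality_score(frame):
    """Score a frame's likelihood of showing an abnormality.
    Here a frame is a dict of hand-picked features; a real model
    would consume raw image data."""
    score = 0.0
    if frame.get("redness", 0.0) > 0.7:              # possible bleeding
        score += 0.5
    if frame.get("texture_irregularity", 0.0) > 0.6:  # possible polyp or ulcer
        score += 0.5
    return score

def triage(frames, threshold=0.5):
    """Return only the frames a clinician needs to review."""
    return [f for f in frames if abnormality_score(f) >= threshold]

frames = [
    {"id": 1, "redness": 0.9, "texture_irregularity": 0.2},
    {"id": 2, "redness": 0.1, "texture_irregularity": 0.1},
    {"id": 3, "redness": 0.3, "texture_irregularity": 0.8},
]
flagged = triage(frames)
print([f["id"] for f in flagged])  # [1, 3]
```

Even in this toy form, the workload reduction is visible: a reviewer sees two flagged frames instead of all three, and at capsule-endoscopy scale the same filtering applies to tens of thousands of frames.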

That is when Syed approached Don Brown in a pursuit to bring big data and medicine together. With such a tool, clinicians could begin to predict important inflammatory bowel disease outcomes more quickly and more accurately, assisting physicians and giving patients more peace of mind.

"What is going to happen is that a patient shows up, and their data is plugged into these larger algorithms that are trained to learn off patterns that may be representative of you. Then you will be able to predict a specific patient's outcome," Syed said. "The idea is that you will be able to get more tailored data and have the ability to give specific risk percentages."

In addition to identifying patterns in environmental enteropathy, Syed is researching the effectiveness of AI in other inflammatory bowel diseases, such as Crohn's disease and celiac disease. According to Syed, algorithms that predict scar tissue development in Crohn's patients, for example, or individualized risk percentages for thyroid disease and diabetes in celiac patients, would be game-changers in determining more accurate diagnosis and likelihood of disease, allowing clinicians to develop targeted medications and treatments.

"That is the thinking we are moving toward: Let's not just try to stop the disease; we want to prevent it and cure it. That is the goal."

To ensure that AI in medicine is effective, Brown said that data scientists and clinical researchers must be aware of inherent biases in smaller data sets that are often missed when that data is generalized.

"Algorithms work in a way that gets it the highest reward, and if it can get that reward through a biased set of data, it will use it," Brown said. "You have to make sure that you are giving [the algorithm] a fair cross section of data so that it can give you results that are truly answering the question that you are after."
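Brown's point can be made concrete with a quick class-balance check, a routine first step before training. The labels below are invented for illustration:

```python
# Check class balance in a labeled data set before training.
from collections import Counter

def class_fractions(labels):
    """Return the fraction of the data set carried by each label."""
    counts = Counter(labels)
    total = len(labels)
    return {label: count / total for label, count in counts.items()}

# A hypothetical, heavily imbalanced data set: 90 healthy, 10 diseased.
labels = ["healthy"] * 90 + ["diseased"] * 10
fractions = class_fractions(labels)
print(fractions)  # {'healthy': 0.9, 'diseased': 0.1}

# On this data, a model that always predicts "healthy" scores 90% accuracy
# while never detecting disease -- exactly the "reward through biased data"
# failure mode Brown warns about.
```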

For Syed, one of the biggest issues lies in the lack of publicly available data sets. When determining patterns in celiac patients, Syed and her team parsed through 2,000 to 3,000 electronic health records to collect data from the desired population. Instead of reinventing the wheel by labeling hundreds of thousands of biopsies, Syed hopes to gather medical information through large industry partners such as Takeda, an R&D-driven global biopharmaceutical company.

"A lot of this cross-thinking happens when there is open access to data, but that has to be cleaned and sorted and thoughtfully put out there," Syed said.

Once data is collected and confirmed to be bias-free, the final step is ensuring interpretability. Since many clinicians do not have a background in data science, the only way to cross the disciplinary boundary is to communicate the data effectively, a responsibility that falls on the data science community.

"At the heart of this is making it interpretable to the clinicians so that they get what it is that the system is telling them and how it can be used, and what worth it is," Brown said. "And then, they can look at individual patients and decide what makes sense."

Machine-learning algorithms are being used across medical disciplines throughout the world of health. On UVA Grounds, algorithms are being used to predict the effects of cardiac disease treatments, understand health conditions through smart watches in UVA's Link Lab, and analyze real-time diabetes-monitoring data, to name a few. As Syed, Brown and other researchers continue to evolve their AI capabilities, clinicians will begin to transform the ways in which they can accurately predict, determine and treat disease.

"The more I have learned about [big data], the more I have understood its potential impact in medicine across all specialties," Syed said.


Here's why AI-equipped NFTs could be the real gateway to the Metaverse – Cointelegraph

Posted: at 9:40 pm

Nonfungible tokens (NFTs) have been largely acquired as profile pictures (PFPs) that represent a brand, embody culture or ultimately serve as a static status symbol. Blue-chip NFTs like the Bored Ape Yacht Club or Cool Cats were not originally backed by any tangible utility other than speculative value and hype, along with the promise of an illustrative roadmap, but in 2022, investors are looking for a little bit more.

However, nonfungible tokens are finding their use beyond branding and status symbols by attempting to build out an existence in the Metaverse and some are ambitious enough to start within it.

The Altered State Machine (ASM) Artificial Intelligence Football Association (AIFA) has introduced a novel concept to NFTs called nonfungible intelligence or NFI. By tokenizing artificial intelligence, the ASM AIFA has captured the attention of investors who are thinking long-term about the future of the Metaverse and decentralized play-to-earn (P2E) economies.

In fusing AI features to the three growing markets of gaming, decentralized finance (DeFi) and NFTs, the ASM AIFA has the potential to be a lucrative long-term bet.

As an investor, these are the strategies I've considered when thinking about investing in the ASM AIFA, while also factoring in the impending tokenomics that will be integrated into the nascent blockchain P2E game.

The ASM AIFA genesis box collection is essentially a starter booster pack for its ecosystem. A box includes four ASM AI agents, or "all-stars," as well as an ASM brain, which is the intelligence that powers each ASM all-star.

Currently valued at 5.369 Ether (ETH) ($16,768.84), the box is a valuable bet for those who hold long-term convictions in the ASTO economy and its decentralized autonomous organization (DAO) but more so, the Metaverse as a whole.

Since the ASM AIFA intends to reward its early adopters and players through its play-and-earn model, the genesis box is essentially equipped as an ASTO-generating setup.

According to the ASM AIFA whitepaper, each brain will be able to mine ASTO and each all-star will be able to generate ASTO through training. Not only is ASTO the utility token of the Altered State Machine metaverse, but it's also the governance token in the ASM ecosystem.

Furthermore, these brains are not limited to the ASM AIFA collection. They will also be supported in other notable NFT projects like FLUFF World NFT, making them interoperable as well.

ASTO tokens are needed to train the AI all-stars and to create more AI agents. An AI agent does not have to be limited to playing soccer in-game; its ASM brain can also be trained to act as a trading bot, since its behavior depends on its learned memories.

The project launched on Oct. 18, 2021, and since hitting the secondary market, ASM AIFA genesis boxes have increased by nearly 1,200%, suggesting there is growing interest and owners have recognized the value of AI.

In the last seven days, the average sale price of the genesis boxes has been on a downward trend, dropping from 6.3 ETH to 5.3 ETH. Even with this slight correction, the genesis boxes have not dipped below 4.75 ETH in the last month.

Based on the price points of the items in the genesis box, the floor price for an ASM all-star is currently 0.21 Ether ($654.37), putting a team of four at approximately $2,617.48. The cheapest ASM brain is currently priced at 3.92 Ether ($12,214.96), bringing the total sum of the contents in the box to approximately $14,832.44, or 4.77 Ether.

Essentially, at these current prices, buying a genesis box costs roughly the same as purchasing the items separately. However, both the ASM brains and all-stars have experienced price fluctuations that have previously made purchasing the box a cheaper alternative to buying the items separately.
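The bundle-versus-separate comparison can be checked directly with the USD figures quoted above (prices as of the time of writing; they move with the market):

```python
# Reproduce the article's bundle-vs-separate arithmetic (USD figures as quoted).
all_star_floor_usd = 654.37    # one ASM all-star at the floor price
brain_floor_usd = 12_214.96    # cheapest ASM brain
box_usd = 16_768.84            # genesis box (5.369 ETH)

# Cost of assembling the box contents piece by piece: four all-stars + one brain.
separate_usd = 4 * all_star_floor_usd + brain_floor_usd
print(round(separate_usd, 2))  # 14832.44
```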

Depending on an investor's motives and strategy, they could pursue other methods to own a piece of the ASM metaverse.

ASM AIFA genesis brains are unique artificial intelligence-equipped NFTs. The architecture of the brain is currently patent-pending, and owners will have full rights to the machine-learning (ML) model of their NFI.

This provides an added layer of utility to the ASM economy. A unique feature is that an ASM brain does not always need a form (avatar) and can exist and function within the parameters of its trained memories.

The ASM brain is the most expensive piece of the collection and will also be able to mine ASTO tokens. In this way, an investor can potentially make back their original capital investment via the brain's token emission. Currently, the cheapest ASM brain is worth 3.92 Ether ($12,214.96), a 300% increase in floor price over the last 60 days.

The ASM brains retain their value largely because of their genome matrix, whose attributes enable the brain to be integrated into other worlds outside of the ASM ecosystem. In other words, the brain can be used in other ecosystems.

According to the ASM roadmap, each ASM genesis brain is slated for an ASTO token drop. Investors who are looking for exposure to AI could consider purchasing ASTO for a more financially feasible bet.

There's no denying that the ASM AIFA project is not the cheapest entry to the ecosystem, but those who are strongly interested in the developing features of NFIs could consider investing in the token or the AI all-star agents.

ASTO is the native currency that will govern activity in the ASM ecosystem. Since it's needed to train the ASM brain and any AI agent, there will be an economy of gym owners who will provide GPU cloud computing for every ML model. In return for their time and energy, gym owners will be rewarded in ASTO.

When the ASTO token launches, the ASM team will host an auction to determine the price of ASTO in a unique method it has dubbed a "discovery auction." ASTO will also be dropped to owners of the ASM all-stars, and ASTO can be staked to mine the next generation of brains in the ecosystem. In preparation for AIFA's launch, ASTO could be a desirable token to accumulate.

As NFTs find ways to justify their value outside of speculation, and as the ways they can be integrated into the Metaverse expand, projects are beginning to build from the inside out. Time will tell when more projects begin to integrate AI features, but ASM AIFA seems to be a top contender as one of the first movers.

The views and opinions expressed here are solely those of the author and do not necessarily reflect the views of Cointelegraph.com. Every investment and trading move involves risk; you should conduct your own research when making a decision.


This household greeting service uses AI to register resident’s arrival and welcome them – Yanko Design

Posted: at 9:40 pm

Dearbell is a smart household welcome service and lighting fixture that employs Artificial Intelligence to register the arrival of each resident and deliver corresponding messages.

It's been said that with each new generation we become more disconnected from one another, tightening our grip on our smartphones. It's no secret that the more technologically advanced we become as a society, the more comfortable we become withdrawing from IRL socialization to prioritize our online lives.

Designers: Juan Lee, Hyebin Lee, Banhu Jeong, and Daeun Yoo

Responding to this generational gap, designers look once more to technology for a solution. Dearbell, a smart household welcome service, is one of the more recent solution-based concepts from a team of designers based in Korea.

Following a research period of interviews and surveys, the design team found that most family members living under one roof feel disconnected from one another. Due to long work hours and technology overload, communication at home is sometimes perceived as nagging. In an effort to connect family members together through communication, Dearbell is conceptualized as a customizable, home greeting service for family members to get to know one another better.

Dearbell takes on the look of a light pendant that can either hang from your ceiling or be mounted to your wall. Projecting from the light fixture, holographic messages are created and delivered by family members to other family members. Relying on artificial intelligence for operation, visual haptic sensors register each household resident's gestures, shoes, and smartphone to ensure accurate message delivery.

The team of designers goes on to explain, "The Dearbell projector is motivated by the welcoming bell. Technically, it supports automatic focusing in multiple layers and AI projection mapping that shows the image on a certain object and recognizes gesture interaction."

Walking through the home's front door, residents take off their shoes and Dearbell alerts each individual family member of the total number of steps taken during the day, as well as any messages left to be read. As the unread message is beamed from the light pendant, the message can be opened by its designated receiver once they put their hand over the message icon.

Once the message is read, the receiver can choose to leave a response in the form of emojis by shaking their fists, a Dearbell-detected hand gesture. In addition to curated messages, residents can ask Dearbell for daily information as they prepare to leave the home, including weather, public transportation, and a pre-made to-do list.

Wooden elements ground the stainless steel body with a touch of warmth and rustic charm.

Once home, Dearbell alerts residents of the steps taken throughout the day.

Through the wood detail finishing at the bottom, the design visually conveys warm communication with the family rather than a technical image.

Dearbell consists of soft shapes and textures rather than solid forms, with a luxurious arch-shaped metal frame.

Dearbell can either be mounted to a vertical surface or suspended from the ceiling.

Following a period of interviews and surveys, the team of designers recognized the generational gaps present in most households.

