Category Archives: Artificial Intelligence

Artificial Intelligence to Minimize Harvest Loss – AG INFORMATION NETWORK OF THE WEST – AGInfo Ag Information Network Of The West

Posted: January 15, 2021 at 1:42 pm

It's time for your Farm of the Future Report. I'm Tim Hammerich.

Harvest loss is a big deal for grower profitability and for sustainability of our resources.

Ganssle: "In 2019 in the United States in corn alone, it was a $1.4 billion problem. We left $1.4 billion worth of corn grain in the field last year. So it's a big deal."

That's Craig Ganssle, CEO and founder of Farmwave, an artificial intelligence-based autonomous measurement tool. One application is mounting the tool on a combine to minimize yield loss.

Ganssle: "Right now, if Farmwave shows you X amount of header loss, you know, you're losing three to four bushels per acre, on iPad in the cab. It tells you your real time, here's what's happening and here's where it's coming from. And so you can make those changes. You can, whatever the changes would need to be on machinery: slow down, change reel speed, lift the head, whatever. But the real value, and what growers want to see, is this integrated in with their machinery. So we are in discussions with multiple OEMs about how to possibly do that and work towards automation. The future is getting that integrated into the machinery so it happens autonomously."

Other applications for the Farmwave AI tool include sprayer nozzle performance, application coverage, and disease and pest count and growth stage. And by 2022, they hope to be working with planters as well.

Continued here:

Artificial Intelligence to Minimize Harvest Loss - AG INFORMATION NETWORK OF THE WEST - AGInfo Ag Information Network Of The West

Data and Artificial Intelligence: The Only Way is Ethics – The Scotsman

Posted: at 1:42 pm

Professor Shannon Vallor, an expert in the challenging relationship between ethics and technology, reminds us that artificial intelligence is "human all the way down" - and therefore reflects the positives and negatives of human nature.

Prof Vallor, Baillie Gifford Chair in the Ethics of Data and AI at the Edinburgh Futures Institute, insists self-aware machines are not about to take over the world.

She says: "We have gone through a period where people like Stephen Hawking and Elon Musk have perhaps unwittingly misled the public about machines becoming self-aware or hyper-intelligent and enslaving humanity - and from a scientific perspective, thats just a complete fantasy at this point.

"There is nothing mysterious or magical about AI - it's something that is transforming our world but completely reflective of our own human strengths and weaknesses."

Professor Vallor is joined on the podcast by Nick Thomas and Kyle McEnery of Baillie Gifford. Nick Thomas highlights how access to data is going to be a key competitive advantage for business in the future, while Kyle McEnery describes his work on harnessing data and AI to make better decisions about where Baillie Gifford invests its clients' money - and the potential for greater targeting of ethical investment.

Mr McEnery backs up Prof Vallor's comments about data and AI being fully human and says: "There are a lot of biases in data that we need to be careful of and we try very, very, very hard to avoid those but it's a constant challenge."

Go here to see the original:

Data and Artificial Intelligence: The Only Way is Ethics - The Scotsman

Artificial Intelligence in shipping and how it works – ShipInsight

Posted: at 1:42 pm

Forget whatever you've seen in science-fiction movies. Artificial intelligence, usually known as AI, is an umbrella term for computer programs that give machines human-like intelligence. As far as we're concerned, it falls into two broad categories:

Narrow AI is what we have today. A narrow AI works well for specific tasks, for example identifying cat breeds in photographs, but it's useless in all other areas. Just as you can't use the camera app on your phone to order something from Amazon, an AI designed to diagnose skin cancer from photographs of moles is completely useless for steering a self-driving car or recommending which movie to watch next.

In the future, we expect to have general AI. General AI will work across a range of areas, rather than being confined to one specific task. We're not there yet, but in a 2019 survey, 45% of technologists believed we would have it by 2060.

At the moment, the main technology under the AI umbrella is machine learning (ML). In machine learning, we provide structured and labelled training data, for example 1000 photographs of tugs and 1000 photographs of container ships. The computer analyses the data and learns to tell the difference between a photograph of a tug and a photograph of a container ship.
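To make that training step concrete, here is a minimal, hypothetical sketch of this kind of supervised learning: a classifier is fitted to labelled feature vectors and then scored on photographs it has not seen before. The array shapes, the random stand-in data, and the choice of a scikit-learn classifier are illustrative assumptions, not part of any particular product.

```python
# Minimal sketch of supervised learning on labelled image features (illustrative only).
# Real photographs would be resized and flattened into feature vectors first;
# here random vectors stand in for 200 tug photos and 200 container-ship photos.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
tugs = rng.normal(0.0, 1.0, size=(200, 512))             # stand-ins for labelled tug photos
container_ships = rng.normal(0.5, 1.0, size=(200, 512))  # stand-ins for container-ship photos

X = np.vstack([tugs, container_ships])
y = np.array([0] * 200 + [1] * 200)                       # 0 = tug, 1 = container ship

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)  # the rules come from the data
print("accuracy on held-out photos:", clf.score(X_test, y_test))
```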

The main problem with machine learning is that, in most cases, we need carefully labelled training data. Unlabelled data is useless for standard machine learning. Converting thousands of entries in a database to the correct format then manually labelling them is expensive and time-consuming. In addition, machine learning systems usually need several smaller programs, known as models, to solve a problem. For example, you could build a system to look at photographs of oncoming ships and decide what action to take to avoid collision. In this case, one model could locate ships in a photograph and feed that information into the next model. The next model might identify the heading of the other vessel, while a third model would take that data and determine what action to take. You couldn't use machine learning to build a single model to look at the photograph and recommend a course of action.
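As a rough illustration of that multi-model structure, the sketch below chains three stub functions, one per model. The function names, data types and dummy return values are hypothetical placeholders standing in for real trained models, not an actual collision-avoidance system.

```python
# Schematic sketch of chaining narrow models; each stub stands in for a trained model.
from dataclasses import dataclass

@dataclass
class Detection:
    box: tuple  # (x, y, width, height) of a ship found in the photograph

def locate_ships(photo) -> list:
    """Model 1: find ships in the photograph (dummy output)."""
    return [Detection(box=(120, 80, 60, 40))]

def estimate_heading(photo, detection) -> float:
    """Model 2: estimate the other vessel's heading in degrees (dummy output)."""
    return 275.0

def recommend_action(heading) -> str:
    """Model 3: turn the heading into an avoidance manoeuvre (dummy output)."""
    return "alter course to starboard" if 180 <= heading < 360 else "hold course"

def collision_advice(photo) -> list:
    # The output of each narrow model is fed into the next one.
    return [recommend_action(estimate_heading(photo, d)) for d in locate_ships(photo)]

print(collision_advice(photo="bow_camera_frame.jpg"))  # filename is a placeholder
```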

Deep learning is a type of machine learning that uses artificial neural networks. The neural network is arranged in layers. Each layer processes the unstructured data, then inputs it into the next layer. Through this process, the system finds patterns in the data and eventually develops a model.

Neural networks accept unstructured and unlabelled data, and they resolve problems end-to-end rather than one part at a time. The downside is that they need a lot more training data and computing power, and they take longer to train than standard machine learning models.
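For readers who want to see what "arranged in layers" looks like in practice, here is a minimal, hypothetical sketch of a small feed-forward network in PyTorch. The layer sizes, the flattened 64x64 input and the two-class output are arbitrary choices for illustration only.

```python
# Minimal sketch of a layered neural network (layer sizes are arbitrary).
import torch
import torch.nn as nn

model = nn.Sequential(              # each layer processes the data and feeds the next layer
    nn.Linear(4096, 256), nn.ReLU(),
    nn.Linear(256, 64), nn.ReLU(),
    nn.Linear(64, 2),               # e.g. two outputs: tug vs container ship
)

batch = torch.randn(8, 4096)        # a batch of 8 flattened images (random stand-ins)
logits = model(batch)               # one forward pass through all the layers
print(logits.shape)                 # torch.Size([8, 2])
```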

Barriers to AI adoption range from fear of the unknown and laws not designed for AI, to a lack of appropriate training data and a shortage of data scientists.

More digitalised companies adopt AI at higher rates than less digitalised companies. This suggests that the digitalisation trend in the maritime industry could lead to wider adoption of AI systems.

Even without general AI, AI is creeping into all aspects of the maritime industry. Any repetitive, structured task has the potential to be carried out by a narrow AI model. Marine insurance, fire detection from CCTV systems, AI-operated tugs, predictive maintenance, and fuel efficiency improvements are all moving towards AI-driven systems.

A study by the National Cargo Bureau found 6.5% of containers carrying dangerous goods had mis-declared cargo. To address this, Maersk is among the companies using AI screening tools to detect undeclared and mis-declared dangerous goods. HazCheck Detect, a new AI cargo screening tool, scans all booking details and highlights suspicious bookings. In the future, the same tool could screen cargoes to identify, for example, wildlife smuggling.

After demonstrating the world's first fully-autonomous ferry in Finland in 2018, Rolls Royce is now using an AI system to provide deeper insight into the performance of installed ship equipment. This will lead to increased efficiency and reduced emissions.

Every year, 20% of vessels are diverted due to crew illness, and human error (including fatigue) accounts for around 75% to 90% of marine accidents. Communications provider KVH foresees the use of AI for seafarer health monitoring, to reduce accidents and diversions for crew illness or injury.

But illness and injury aren't the only causes of human error: fatigue, intoxication, excitement and stress also lead to mistakes. Senseye uses high-resolution images of the iris to identify fatigue and intoxication, while Sensing Feeling uses real-time video to identify early signs of stress and fatigue.

As with any new technology, adoption of AI will be slow until it reaches a tipping point. As adoption of AI becomes widespread, many of the cultural barriers to AI are likely to disappear. For the last decade, the rate of AI adoption across all industries has been accelerating. Just as we've become accustomed to email and the internet, we'll soon take AI systems for granted too.

The bigger question is what impact AI will have on the industry. Maritime legislation, vessel manning, and much more are predicated on having a human in the loop. As autonomous ships become commonplace, we need to ensure that AI works for us.

Go here to see the original:

Artificial Intelligence in shipping and how it works - ShipInsight

£3m to fund new wave of Artificial Intelligence for the military – GOV.UK

Posted: at 1:42 pm

The second phase of funded proposals has been announced for the Defence and Security Accelerator (DASA) Intelligent Ship competition to revolutionise military decision-making, mission planning and automation.

Phase 2 of Intelligent Ship, run by DASA on behalf of the Defence Science and Technology Laboratory (Dstl), sought novel technologies for use by the military in 2030 and beyond.

Nine innovative projects have been funded, sharing £3m.

With a focus on Artificial Intelligence (AI), the projects will support the evaluation and demonstration of a range of human-machine teams and their integration with an evaluation environment. Phase 2 will develop AI for wider application across defence platforms.

Julia Tagg, Dstl Project Technical Authority, said:

The Intelligent Ship project aims to demonstrate ways of bringing together multiple AI applications to make collective decisions, with and without human operator judgement.

We hope that the use of AI in the future will lead to timely, more informed and trusted decision-making and planning, within complex operating and data environments. With applications for the Royal Navy and more broadly across defence, we are very excited to see what these Phase 2 projects might bring.

Rachel Solomons, DASA Delivery Manager, said:

DASA is focussed on finding innovation to benefit the defence and security of the UK.

Artificial Intelligence and human-machine teaming are such innovations, and by taking this competition to Phase 2 we hope to help find solutions that could make a real difference to future decision making in defence.

The companies awarded funding for Phase 2 are:

Examples of proposals funded include an intelligent system for vessel power and propulsion machinery control to support the decision-making of the engineering crew, and an innovative mission AI prototype, Agent for Decision-Making, to support decision-making during pre-mission preparation, mission execution and post-mission analysis.

Phase one contracts were announced last year.

Link:

£3m to fund new wave of Artificial Intelligence for the military - GOV.UK

Global Healthcare Artificial Intelligence Report 2020-2027: Market is Expected to Reach $35,323.5 Million – Escalation of AI as a Medical Device -…

Posted: January 9, 2021 at 2:50 pm

Dublin, Jan. 08, 2021 (GLOBE NEWSWIRE) -- The "Artificial intelligence in Healthcare Global Market - Forecast To 2027" report has been added to ResearchAndMarkets.com's offering.

The artificial intelligence in healthcare global market is expected to reach $35,323.5 million by 2027, growing at an exponential CAGR from 2020 to 2027, due to the gradual transition from volume-based to value-based healthcare.

Growth is also driven by the surging need to accelerate and increase the efficiency of drug discovery and clinical trial processes, the advancement of precision medicines, the escalation of AI as a medical device, the increasing prevalence of chronic and communicable diseases, an escalating geriatric population, and the increasing trend of acquisitions, collaborations and investments in the AI in healthcare market.

Artificial intelligence (AI) is the collection of computer programs, algorithms and software that make machines smarter and enable them to simulate human intelligence and perform various higher-order, value-based tasks like visual perception, translation between languages, decision-making and speech recognition.

The rapidly evolving vast and complex healthcare industry is slowly deploying AI solutions into its mainstream workflows to increase the productivity of various healthcare services efficiently without burdening the healthcare personnel, to streamline and optimize the various healthcare-associated administrative workflows, to mitigate the physician deficit and burnout issues effectively, to democratize the value-based healthcare services across the globe and to efficiently accelerate the drug discovery and development process.

The artificial intelligence in healthcare global market is classified based on application, end-user and geography.

Based on the application, the market is segmented into Medical diagnosis, drug discovery, precision medicines, clinical trials, Healthcare Documentation management and others consisting of AI guided robotic surgical procedures and AI-enhanced medical device and pharmaceutical manufacturing processes.

The AI-powered healthcare documentation management solutions segment accounted for the largest revenue in 2020 and is expected to grow at an exponential CAGR from 2020 to 2027. The AI-enhanced drug discovery solutions segment is the fastest-emerging segment, growing at an exponential CAGR from 2020 to 2027.

The artificial intelligence in healthcare global end-users market is grouped into Hospitals and Diagnostic Laboratories, Pharmaceutical companies, Research institutes and other end-users consisting of health insurance companies, medical device and pharmaceutical manufacturers and patients or individuals in the home-care settings.

Among these end users, the Hospitals and Diagnostic Laboratories segment accounted for the largest revenue in 2020 and is expected to grow at an exponential CAGR during the forecast period. The Pharmaceutical Companies segment is the fastest-growing segment, growing at an exponential CAGR from 2020 to 2027.

The artificial intelligence in healthcare global market by geography is segmented into North America, Europe, Asia-Pacific and the Rest of the world (RoW). North American region dominated the global artificial intelligence in healthcare market in 2020 and is expected to grow at an exponential CAGR from 2020 to 2027. The Asia-Pacific region is the fastest-growing region, growing at an exponential CAGR from 2020 to 2027.

The artificial intelligence in healthcare market is consolidated, with the top five players occupying the majority of the market share and the remaining minority share occupied by other players.

Key Topics Covered:

1 Executive Summary

2 Introduction

3 Market Analysis
3.1 Introduction
3.2 Market Segmentation
3.3 Factors Influencing Market
3.3.1 Drivers and Opportunities
3.3.1.1 AI Abetting the Transition from Volume Based to Value Based Healthcare
3.3.1.2 Acceleration and Increasing Efficiency of Drug Discovery and Clinical Trials
3.3.1.3 Escalation of Artificial Intelligence as a Medical Device
3.3.1.4 Advancement of Precision Medicines
3.3.1.5 Acquisitions, Investments and Collaborations to Open An Array of Opportunities for the Market to Flourish
3.3.1.6 Increasing Prevalence of Chronic, Communicable Diseases and Escalating Geriatric Population
3.3.2 Restraints and Threats
3.3.2.1 Data Privacy Issues
3.3.2.2 Reliability Issues and Black Box Reasoning Challenges
3.3.2.3 Ethical Issues and Increasing Concerns Over Human Workforce Replacement
3.3.2.4 Requirement of Huge Investment for the Deployment of AI Solutions
3.3.2.5 Lack of Interoperability Between AI Vendors
3.4 Regulatory Affairs
3.4.1 International Organization for Standardization
3.4.2 ASTM International Standards
3.4.3 U.S.
3.4.4 Canada
3.4.5 Europe
3.4.6 Japan
3.4.7 China
3.4.8 India
3.5 Porter's Five Force Analysis
3.6 Clinical Trials
3.7 Funding Scenario
3.8 Regional Analysis of AI Start-Ups
3.9 Artificial Intelligence in Healthcare FDA Approval Analysis
3.10 AI Leveraging Key Deal Analysis
3.11 AI Enhanced Healthcare Products Pipeline
3.12 Patent Trends
3.13 Market Share Analysis by Major Players
3.13.1 Artificial Intelligence in Healthcare Global Market Share Analysis
3.14 Artificial Intelligence in Healthcare Company Comparison Table by Application, Sub-Category, Product/Technology and End-User

4 Artificial Intelligence in Healthcare Global Market, by Application
4.1 Introduction
4.2 Medical Diagnosis
4.3 Drug Discovery
4.4 Clinical Trials
4.5 Precision Medicine
4.6 Healthcare Documentation Management
4.7 Other Application

5 Artificial Intelligence in Healthcare Global Market, by End-User
5.1 Introduction
5.2 Hospitals and Diagnostic Laboratories
5.3 Pharmaceutical Companies
5.4 Research Institutes
5.5 Other End-Users

6 Regional Analysis

7 Competitive Landscape
7.1 Introduction
7.2 Partnerships
7.3 Product Launch
7.4 Collaboration
7.5 Up-Gradation
7.6 Adoption
7.7 Product Approval
7.8 Acquisition
7.9 Others

8 Major Companies
8.1 Alphabet Inc. (Google Deepmind, Verily Lifesciences)
8.2 General Electric Company
8.3 Intel Corporation
8.4 International Business Machines Corporation (IBM Watson)
8.5 Koninklijke Philips N.V.
8.6 Medtronic Public Limited Company
8.7 Microsoft Corporation
8.8 Nuance Communications Inc.
8.9 Nvidia Corporation
8.10 Welltok Inc.

For more information about this report visit https://www.researchandmarkets.com/r/dxs2ch

Research and Markets also offers Custom Research services providing focused, comprehensive and tailored research.

Read the original:

Global Healthcare Artificial Intelligence Report 2020-2027: Market is Expected to Reach $35,323.5 Million - Escalation of AI as a Medical Device -...

Does Artificial Intelligence Have Psychedelic Dreams and Hallucinations? – Analytics Insight

Posted: at 2:50 pm

It is safe to say that the closest thing to human intelligence and abilities is artificial intelligence. Powered by its tools in machine learning, deep learning and neural networks, there are so many things that existing artificial intelligence models are capable of. However, do they dream or have psychedelic hallucinations like humans? Can the generative feature of deep neural networks experience dream-like surrealism?

Neural networks are a type of machine learning, focused on building trainable systems for pattern recognition and predictive modeling. Here the network is made up of layers: the higher the layer, the more precise the interpretation. Input data goes through all the layers, as the output of one layer is fed into the next layer. Just as the neuron is the basic unit of the human brain, the perceptron forms the essential building block of a neural network. A perceptron accomplishes simple signal processing, and these units are then connected into a large mesh network.
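As a minimal sketch of that building block, the perceptron below simply forms a weighted sum of its inputs and fires if the sum crosses a threshold. The weights, bias and inputs are arbitrary example values chosen for illustration.

```python
# Minimal sketch of a single perceptron: a weighted sum of inputs followed by a threshold.
import numpy as np

def perceptron(inputs, weights, bias):
    # Simple signal processing: fire (1) if the combined signal is positive, else stay silent (0).
    return 1 if np.dot(inputs, weights) + bias > 0 else 0

# Arbitrary example values; a real network wires many perceptrons together into layers.
print(perceptron(np.array([0.5, -1.0, 2.0]), np.array([0.4, 0.3, 0.9]), bias=-1.0))
```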

A Generative Adversarial Network (GAN) is a type of neural network that was first introduced in 2014 by Ian Goodfellow. Its objective is to produce fake images that are as realistic as possible. GANs have disrupted the development of fake images: deepfakes. The deep in deepfake is drawn from deep learning. To create deepfakes, neural networks are trained on multiple datasets. These datasets can be textual or audio-visual, depending on the type of content we want to generate. With enough training, the neural networks will be able to create numerical representations of the new content, like a deepfake image. Next, all we have to do is rewire the neural networks to map the image onto the target. Deepfakes can also be created using autoencoders, a type of unsupervised neural network. In fact, in most deepfakes, autoencoders are the primary type of neural network used in their creation.
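To make the generator-versus-discriminator idea concrete, here is a hypothetical, stripped-down sketch of the two networks in a GAN. The layer sizes and the 28x28 image shape are arbitrary assumptions, and the full adversarial training loop is only summarised in the comments.

```python
# Minimal sketch of the two networks in a GAN (sizes arbitrary, no full training loop).
import torch
import torch.nn as nn

generator = nn.Sequential(       # turns random noise into a fake 28x28 "image"
    nn.Linear(64, 256), nn.ReLU(),
    nn.Linear(256, 28 * 28), nn.Tanh(),
)
discriminator = nn.Sequential(   # scores how "real" an image looks
    nn.Linear(28 * 28, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

noise = torch.randn(16, 64)                  # a batch of 16 random noise vectors
fake_images = generator(noise)               # the generator proposes fakes
realism_scores = discriminator(fake_images)  # the discriminator judges them
# Training alternates: the discriminator learns to spot fakes while the generator
# learns to fool it, which is what makes the fakes increasingly realistic.
print(realism_scores.shape)                  # torch.Size([16, 1])
```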

In 2015, a mysterious photo appeared on Reddit showing a monstrous mutant. This photo was later revealed to be the result of a Google artificial neural network. Many pointed out that this inhuman and scary-looking photo had a striking resemblance to what one sees on psychedelic substances such as mushrooms or LSD. Basically, Google engineers decided that instead of asking the software to generate a specific image, they would simply feed it an arbitrary image and then ask it what it saw.

As per an abstract on Popular Science, Google used the artificial neural network to amplify patterns it saw in pictures. Each artificial neural layer works on a different level of abstraction, meaning some picked up edges based on tiny levels of contrast, while others found shapes and colors. They ran this process to accentuate color and form, and then told the network to go buck wild and keep accentuating anything it recognizes. In the lower levels of the network, the results were similar to Van Gogh paintings: images with curving brush strokes, or images with Photoshop filters. After running these images through the higher levels, which recognize full images like dogs, over and over, leaves transformed into birds and insects, and mountain ranges transformed into pagodas and other disturbing, hallucinatory images.
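The "accentuate anything it recognizes" step is essentially gradient ascent on a layer's activations. The rough sketch below applies that idea to a pretrained torchvision network; it is a simplification of the published DeepDream approach, and the layer index, step size and iteration count are arbitrary assumptions (the pretrained weights are downloaded on first use).

```python
# Rough sketch of the "amplify whatever a layer recognises" idea (DeepDream-style).
import torch
import torchvision.models as models

model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
layer_index = 20   # deeper layers amplify whole objects, shallower ones textures and edges

image = torch.rand(1, 3, 224, 224, requires_grad=True)   # start from noise (or load a photo)

for _ in range(30):
    activations = image
    for i, layer in enumerate(model):       # forward pass up to the chosen layer
        activations = layer(activations)
        if i == layer_index:
            break
    loss = activations.norm()               # "accentuate anything the layer recognises"
    loss.backward()
    with torch.no_grad():
        image += 0.05 * image.grad / (image.grad.abs().mean() + 1e-8)  # gradient ascent step
        image.grad.zero_()

# `image` now contains the input with the chosen layer's favourite patterns amplified.
```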

A few years ago, Google's AI company DeepMind was working on a new technology which allows robots to dream in order to improve their rate of learning.

In a new article published in the scientific journal Neuroscience of Consciousness, researchers demonstrate how classic psychedelic drugs such as DMT, LSD, and psilocybin selectively change the function of serotonin receptors in the nervous system. And for this, they gave virtual versions of the substances to neural network algorithms to see what happens.

Scientists from Imperial College London and the University of Geneva managed to recreate DMT hallucinations by tinkering around with powerful image-generating neural nets so that their usually photorealistic outputs became distorted blurs. Surprisingly, the results were a close match to how people have described their DMT trips. As per Michael Schartner, a member of the International Brain Laboratory at Champalimaud Centre for the Unknown in Lisbon, "The process of generating natural images with deep neural networks can be perturbed in visually similar ways and may offer mechanistic insights into its biological counterpart in addition to offering a tool to illustrate verbal reports of psychedelic experiences."

The objective behind this was to better uncover the mechanisms behind the trippy visions.

One basic difference between the human brain and a neural network is that our neurons communicate in a multi-directional manner, unlike the feed-forward mechanism of Google's neural network. Hence, what we see is a combination of visual data and our brain's best interpretation of that data. This is also why our brain tends to fail in the case of optical illusions. Further, under the influence of drugs, our ability to perceive visual data is impaired, hence we tend to see psychedelic and morphed images.

While we have found an answer to Do Androids Dream of Electric Sheep? by Philip K. Dick, an American sci-fi novelist - which is no, as artificial intelligence has bizarre dreams - we are yet to uncover answers about our own dreams. Once we achieve that, we can program neural models to produce visual output or deepfakes as we expect. Besides, we may also solve the mystery behind black-box decisions.

Read more:

Does Artificial Intelligence Have Psychedelic Dreams and Hallucinations? - Analytics Insight

How artificial intelligence and augmented reality are changing medical proctoring during COVID-19 – FierceHealthcare

Posted: at 2:50 pm

Medical device specialists often monitor the work of surgeons to provide case support during procedures in the operating room. However, during the COVID-19 pandemic, medical proctoring has gone remote.

"COVID has dramatically accelerated the need to have a remote tool, when you either don't have access to the hospital, or when there are travel restrictions that make it really difficult," said Jennifer Fried, CEO and co-founder of ExplORer Surgical, a company that provides a software platform for case support during surgery. ExplORer offers the training platform in three formats: in-person, remote or hybrid.

Launched out of the University of Chicago Department of Surgery in 2013, ExplORer recently added augmented reality technology that acts as a virtual laser pointer for medical device specialists to provide on-screen guidance for surgeons similarly to how sports announcers use a Telestrator to mark up plays.

Introduced in November, ExplORer's two-way audio/video system is compliant with the Health Insurance Portability and Accountability Act. The AR technology lets medical device reps zoom in or out and use a laser pointer to highlight items on the screen for the doctor. Scrub nurses who prepare the surgical instruments use the customized content in the platform to guide them through the workflow.

Before COVID-19, physicians would fly around the world to proctor cases. In fact, six to eight surgeons would be in the room to do a procedure. With the pandemic, proctors are logging into the OR remotely, according to Fried.

Medical proctors can draw on a fluoroscopy image, share their screen and warn doctors what to look for during a procedure such as implanting a device in the body.

"The ability to do that in a live, real-time way that's also compliant is really a game changer in this environment," Fried told Fierce Healthcare.

Fried noted that an ExplORer customer recently began a large U.S. clinical trial with surgeons in Europe while their clinical specialists were in Australia. The remote proctoring software has helped facilitate this collaboration.

Remote proctoring will likely continue after the pandemic because of its cost savings and efficiency for medical professionals, according to Fried.

"What we're seeing with COVID is, I think, people are getting more and more comfortable with remote support, as it has become a necessity," Fried said. "I think that trend is going to be here to stay."

The AR features will be particularly valuable for medical proctors to help physicians through complex invasive procedures, noted Art Collins, former chairman and CEO of Medtronic and a senior medical adviser to ExplORer Surgical.

This expert training is critical when surgeons need to switch an implant operation from a large-incision, open approach to a minimally invasive or catheter-based approach, Fried noted.

Remote proctoring solutions like those that ExplORer offers will incorporate artificial intelligence features such as computer vision to collect data on procedures and feed it automatically into a surgeon's workflow.

"AI and machine learning will become more impactful as the data set of video feeds increase, as a larger data set empowers better insights," Collins told Fierce Healthcare. "As the use of video becomes standard practice in the OR and procedure suites, AI technologies will increasingly be able to interpret video feeds."

In addition, medical proctors will use image-object recognition and machine learning to analyze what is happening in the OR, Fried said. The AI technology could suggest best practices based on knowing that a surgeon took an instrument off the table, as one example.

"You could say this step is taking a lot longer and know that because this instrument hasn't been put back onto the table," she explained.

ExplORer has built early iterations of this machine learning technology and plans to launch a commercial case in 2021, Fried said.

See more here:

How artificial intelligence and augmented reality are changing medical proctoring during COVID-19 - FierceHealthcare

Artificial Intelligence: The Winner of the RSNA 2020 – Diagnostic Imaging

Posted: at 2:50 pm

Like the other major healthcare events of 2020, the Annual Meeting of the Radiological Society of North America (RSNA) also had to be held as an online-only event, forced by the COVID-19 pandemic. The virtual platform of RSNA was very well designed, and it provided ample opportunities for customers to connect to the right representatives from each company.

The virtual meeting room was a unique tool on the RSNA platform where customers can visit the room just like they would walk into a physical booth without any prior appointments. The company representatives engage with the visitors and transfer the meeting to a private room seamlessly. This feature was quite helpful for those visitors who had not secured prior appointments with the vendors to have discussions and product demonstrations.

The vendors, though, had mixed responses when asked if the virtual meeting were as effective as in-person meetings. While most agreed that there cannot be a substitute for physical meetings, some exhibitors claimed that their live virtual programs had good traffic from the customers and partners. COVID-19 has disrupted the traditional way of conducting trade shows, and it might be precipitating a larger digital transformation of the marketing departments within the vendor companies.

Five large themes can be defined from the RSNA virtual floor this year. Artificial intelligence and enterprise imaging were the predominant themes and formed the framework for the other two themes of workflow efficiency and precision medicine. Moreover, cybersecurity was another emerging theme and it has a potential for huge growth.

Artificial Intelligence in the Post-Pandemic Era

It is quite clear that artificial intelligence (AI) was the No. 1 theme for RSNA this year. Even with just one-third the usual number of exhibitors in the virtual event, almost all players seemed to be pointing out their artificial intelligence capabilities.

Quite understandably, the event was dominated by the larger players, and the usual AI showcase had far fewer companies and startups. The identification of the key AI theme this year as "platform plays for AI" could therefore be biased by that perspective. Platform-based approaches are not really new to radiology AI: hints were being dropped at RSNA 2019, but this year the AI platform initiatives were at the heart of the AI messaging.

But, what was also curious was the diverse set of strategies evolving, even within the platform play approaches. For example, GE Healthcare had a direct focus on imaging AI with its Edison platform (with Open AI Orchestrator becoming part of the broader offering of health services), whereas Philips Healthcare focused more on precision diagnosis with AI solutions forming a part of that broader strategy (IntelliSpace AI Workflow Suite, IntelliSpace Precision Medicine). Siemens Healthineers has an even broader approach, where their Digital Marketplace goes beyond imaging to support the broader healthcare community (and a somewhat diluted focus on the radiology AI marketplace type approach). We also saw more players developing and/or refining their approach including Sectra [Amplifier], Fujifilm [REiLI], Canon, among others.

But, so as not to be biased by the virtual nature of this event, we also looked at what happened through 2020 for imaging AI. Several startups leveraged their capabilities to offer solutions for COVID-19 screening and diagnosis, some even securing emergency authorizations (e.g. CuraCloud). Most also offered their solutions for free to hospitals, doing their bit to support our COVID-19 warriors. Interestingly enough, some of the mature, established start-ups did quite well during this phase of uncertainty: Aidoc, for example, announced that it had tripled its revenues in the first three quarters of 2020, and private conversations with some other AI vendors gave us the same impression. Indeed, similar to broader digital health trends, the adoption of AI in imaging has improved for several use cases in 2020, thanks to the negative effects of this pandemic.

An emerging trend was the interest in imaging AI companies from non-imaging vendors. Qure.ai announced a partnership with AstraZeneca (AZ) during RSNA week. AstraZeneca's interest lies in the early identification of lung cancer cases by undertaking lung imaging in emerging markets, such as Latin America, Africa, the Middle East, and Asia. In addition, Qure.ai's solutions are built to address the specific needs of the imaging departments in those regions. This is likely to improve the uptake of AI in emerging markets, since players such as AZ are driving a use case that resonates well with local market conditions. To really enable uptake in emerging markets, Qure.ai has also addressed other concerns. Its qTrack smartphone app allows film-based X-rays to be converted to digital images using only the user's smartphone camera, allows cloud-based algorithms to assess signs of tuberculosis, and also serves to record patient data. Innovations such as these, and the qBox solution that we covered last year, are key to ensuring AI reaches the masses even in emerging markets.

Enterprise Imaging - Enabler of Efficiency and Productivity

Implementation of enterprise imaging solutions is a time-consuming process. With most hospitals working with skeletal staff due to COVID-19 and the subsequent financial distress that it brought upon them, implementing enterprise imaging during the pandemic was an impossible task. However, this presented an opportunity for both the vendors and the hospitals to consider innovative deployment models. Providers developed an affinity for cloud-based solutions, as they do not require hospital staff to invest as much time and effort as a traditional on-premise implementation. This year witnessed a flurry of activity in cloud-based imaging, with various vendors launching solutions that truly leverage cloud-computing capabilities to realize tangible benefits in the clinical environment.

Earlier in the year, Change Healthcare launched its cloud-native Enterprise Imaging Network that enables aggregation and sharing of imaging data in a secure environment. Hyland launched its Software-as-a-Service (SaaS) solution for enterprise imaging during the event that is intended to relieve the hospitals of their responsibility for application and hardware maintenance. SaaS also enables hospitals to pay as per the usage with components being added only when the hospital demands. The trend towards SaaS in enterprise imaging augurs well for the hospitals who are currently reeling under severe financial stress, as it does not require huge upfront capital spending. Fujifilm highlighted its Synapse Cloud Services for hosting its Enterprise Imaging portfolio in a cost-effective and scalable environment targeting the teleradiology providers, critical access hospitals, and imaging centers.

For those concerned about the bandwidth and privacy issues that come with a cloud solution, GE Healthcare's Edison HealthLink might be the right answer. Edison HealthLink is a new edge computing technology that permits clinicians to process clinical data and act on it even before it reaches the cloud. TrueFidelity image reconstruction, CT Smart subscription, and eight other applications are already available on Edison HealthLink. Hyland introduced edge rendering of its zero-footprint NilRead viewer, which runs a local instance in low internet bandwidth conditions.

With enormous growth in the number of AI applications in imaging, the responsibility of integrating them into the workflow so that they work seamlessly has been taken up by the enterprise imaging vendors. The latest offering from Agfa Healthcare, RUBEE for AI, is aimed at helping hospitals choose the right AI solution for their needs. RUBEE aims to save providers time by offering them a curated set of intelligent applications that can be seamlessly and quickly integrated into their workflow.

Enterprise imaging involves networking multiple elements of imaging from different departments and centers and, as such, poses tremendous challenges in integrating the solutions from various vendors. The expertise of vendors like Altamont Software becomes extremely important with enterprise imaging strategy in perspective. Altamont Passport is a product that incorporates routing, pre-fetch, modality worklist, DICOM SR integration solutions to ensure a smooth workflow at an enterprise level. The Altamont Connectivity Platform provides the users with the necessary tools to integrate any image into their EMR or any enterprise system.

While challenges in enterprise imaging continue to emerge, new solutions that address them are also being introduced, indicating the broad-based participation of the imaging industry.

COVID-19 has brought back the focus on efficiency and financial sustainability. With this crisis in the background, enterprise imaging will continue to evolve over the next few years into a unified medical record for all types of images in the enterprise. AI will be the toolkit for many efficiency and productivity improvement initiatives. The developments in these two key domains will be a major driver for the growth of the imaging industry in the next decade.

Here is the original post:

Artificial Intelligence: The Winner of the RSNA 2020 - Diagnostic Imaging

What does Integration of Artificial Intelligence and Advanced Analytics mean in Business? – Analytics Insight

Posted: at 2:50 pm

Disruptive technologies like artificial intelligence (AI) and advanced analytics have had a transformational impact on the finance industry. They are also changing the way enterprises interact with their clients and run their organizations. The emergence and rapid growth of these technologies helped companies enhance their processes and operations.

While data analytics refers to drawing insights from raw data, advanced analytics helps collate previously untapped data sources, especially unstructured data and data from the intelligent edge, to garner analytical insights. Meanwhile, artificial intelligence replicates behaviors that are generally associated with human intelligence. These include learning, reasoning, problem-solving, planning, perception, and manipulation. Some of the latest iterations of AI, like generative AI, can also create artwork, music, and more. Though these technologies sound diverse, their synergy would bring tremendous innovation across several industries. When powered by AI, advanced analytics algorithms can offer additional performance over other analytics techniques.

World Economic Forum states that the COVID-19 crisis provided a chance for advanced analytics and AI-based techniques to augment decision-making among business leaders too.

In a study conducted by Forrester Consulting on behalf of Intel, 98% of respondents believe that analytics is crucial to driving business priorities. Yet, fewer than 40% of workloads are leveraging advanced analytics or artificial intelligence. For instance, according to Deloitte Insights, only 70% of all financial services firms use machine learning to predict cash flow events, fine-tune credit scores, and detect fraud.

Advanced analytics and artificial intelligence are emerging favorites in the finance sector as they help firms authenticate customers, improve customer experience, and reduce the cost of maintaining acceptable levels of fraud risk, particularly in digital channels. As finance firms race toward disruption, the velocity of fraud attacks and threats also increases. The amalgamation of these technologies helps mitigate such threats before there is any severe damage, thus increasing compliance. This is achieved by assessing risks, identifying potentially suspicious activities, preventing fraudulent transactions, and more. Since AI-powered analytical algorithms are adept at pattern recognition and at processing large quantities of data, they are key to improving fraud detection rates. For customers, they can help authenticate any financial services they may be using and alert the customer if something is wrong.
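As a simple illustration of the pattern-recognition idea behind such fraud screening, the sketch below trains a generic anomaly detector on synthetic transactions and flags the outliers. The features, the synthetic data and the use of scikit-learn's IsolationForest are illustrative assumptions, not any firm's actual system.

```python
# Minimal sketch of anomaly-based fraud screening on synthetic transactions (illustrative only).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Features per transaction: [amount, hour of day] - mostly routine, plus a few odd ones.
normal = np.column_stack([rng.normal(50, 15, 1000), rng.normal(14, 3, 1000)])
suspicious = np.array([[5000, 3], [4200, 4]])          # large transfers at odd hours
transactions = np.vstack([normal, suspicious])

detector = IsolationForest(contamination=0.01, random_state=0).fit(transactions)
flags = detector.predict(transactions)                 # -1 marks likely anomalies
print("flagged for review:", transactions[flags == -1][:5])
```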

This fraud detection capability is also helpful for brand marketers to distinguish successful campaigns and avoid wasteful spending. Boston Consulting Group has observed that consumer packaged goods (CPG) companies can boost their revenue growth by more than 10% through enhanced predictive demand forecasting, relevant local assortments, personalized consumer services and experiences, optimized marketing and promotion ROI, and faster innovation cycles, all via the said technologies.

While factors like data silos, fear of missing out on the race to digital transformation, and agility have influenced companies to rely on data-driven insights, they must leverage advanced analytics and artificial intelligence to stay relevant in the market. In its September 2017 article, titled How Big Consumer Companies Can Fight Back, Boston Consulting Group also mentions that top industry players can use these technologies to transform their data into valuable insights. In other words, the combination can augment an enterprise's ability to execute data-intensive workloads and, at the same time, keep the HPC environment adaptable, responsive, and cost-effective.

However, there are many difficulties faced by companies when adopting them too. As per a research survey by Ericsson IndustryLab, 91% of organizations surveyed reported facing problems in each of the three categories of challenges studied: technology; organizational; and company culture and people. It is true that artificial intelligence and advanced analytics tools have allowed the navigation and re-imagining of all aspects of business operations, and the COVID-19 pandemic expedited their adoption. However, even though these are arguably the most powerful general-purpose technologies, companies must recognize their potential and use cases and strategize the right action plans to accelerate their artificial intelligence and advanced analytics undertakings.

Follow this link:

What does Integration of Artificial Intelligence and Advanced Analytics mean in Business? - Analytics Insight

Artificial Intelligence ABCs: What Is It and What Does it Do? – JD Supra

Posted: at 2:50 pm

Artificial intelligence is one of the hottest buzzwords in legal technology today, but many people still don't fully understand what it is and how it can impact their day-to-day legal work.

According to the Brookings Institution, artificial intelligence generally refers to machines that respond to stimulation consistent with traditional responses from humans, given the human capacity for contemplation, judgment, and intention. In other words, artificial intelligence is technology capable of making decisions that generally require a human level of expertise. It helps people anticipate problems or deal with issues as they come up. (For example, here's how artificial intelligence greatly improves contract review.)

Recently, we sat down with Onit's Vice President of Product Management, technology expert and patent holder Eric Robertson, to cover the ins and outs of artificial intelligence in more detail. In this first installment of our new blog series, we'll discuss what it is and its three main hallmarks.

At the core of artificial intelligence and machine learning are algorithms, or sequences of instructions that solve specific problems. In machine learning, the learning algorithms create the rules for the software, instead of computer programmers inputting them, as is the case with more traditional forms of technology. Artificial intelligence can learn from new data without additional step-by-step instructions.

This independence is crucial to our ability to use computers for new, more complex tasks that exceed the limits of manual programming - things like photo recognition apps for the visually impaired or translating pictures into speech. Even things we now take for granted, like Alexa and Siri, are prime examples of artificial intelligence technology that once seemed impossible. We already encounter it in our day-to-day lives in numerous ways, and that influence will continue to grow.

The excitement about this quickly evolving technology is understandable, mainly due to its impacts on data availability, computing power and innovation. The billions of devices connected to the internet generate large amounts of data and lower the cost of mass data storage. Machine learning can use all this data to train learning algorithms and accelerate the development of new rules for performing increasingly complex tasks. Furthermore, we can now process enormous amounts of data around machine learning. All of this is driving innovation, which has recently become a rallying cry among savvy legal departments worldwide.

Once you understand the basics of artificial intelligence, it's also helpful to be familiar with the different types of learning that make it up.

The first is supervised learning, where a learning algorithm is given labeled data in order to generate a desired output. For example, if the software is given a picture of dogs labeled "dogs," the algorithm will identify rules to classify pictures of dogs in the future.

The second is unsupervised learning, where the data input is unlabeled and the algorithm is asked to identify patterns on its own. A typical instance of unsupervised learning is when the algorithm behind an eCommerce site identifies similar items often bought by a consumer.

Finally, there's reinforcement learning, where the algorithm interacts with a dynamic environment that provides both positive feedback (rewards) and negative feedback. An example of this would be a self-driving car: if the car stays within the lane, the software receives points that reinforce that learning, along with reminders to stay in the lane.
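A small, hypothetical sketch of the first two learning types is shown below: the same synthetic data is first classified with labels provided (supervised), then grouped without any labels (unsupervised). The data and the scikit-learn model choices are purely illustrative.

```python
# Small sketch of supervised vs unsupervised learning on synthetic data (illustrative only).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])  # two natural groups

# Supervised: labels are provided ("dog" vs "not dog"), and rules are learned from them.
y = np.array([0] * 50 + [1] * 50)
classifier = DecisionTreeClassifier(random_state=0).fit(X, y)
print(classifier.predict([[4.8, 5.1]]))      # predicts the label of a new, unseen example

# Unsupervised: no labels - the algorithm finds the groupings on its own,
# much like an eCommerce site grouping items that are often bought together.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(clusters[:5], clusters[-5:])
```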

Even after understanding the basic elements and learning models of artificial intelligence, the question often arises as to what the real essence of artificial intelligence is. The Brookings Institution boils the answer down to three main qualities:

In the next installment of our blog series, well discuss the benefits AI is already bringing to legal departments. We hope youll join us.

Read the original:

Artificial Intelligence ABCs: What Is It and What Does it Do? - JD Supra
