Kitware Offers Latest Innovations in Healthcare Simulation with Updates to Interactive Medical Simulation Toolkit and Pulse Physiology Engine

Clifton Park, NY, Jan. 17, 2020 (GLOBE NEWSWIRE) -- Kitware, a leader in open source software research and development, has released the latest versions of two of its popular medical training and simulation toolkits: the Interactive Medical Simulation Toolkit (iMSTK) 2.0 and the Pulse Physiology Engine (Pulse) 2.3. Updates to these toolkits include improved models and functionality based on feedback from user and developer communities. Kitware will showcase these latest features and improvements at the International Meeting on Simulation in Healthcare (IMSH) in San Diego, January 18-22, at booth 912.

Both iMSTK and Pulse provide the technology to build virtual simulators that can help practicing surgeons, medical students, residents, and nurses rehearse or plan medical procedures. For example, iMSTK has been used to help medical professionals prepare for biopsies, resection, radiosurgery, and laparoscopy without compromising patient safety in the operating room. It can also help accredit potential surgeons in basic skills for laparoscopy, endoscopy or robotic surgery. Pulse provides necessary physiologic feedback for clinicians training to provide life-saving medical treatment, such as for hemorrhage, tension pneumothorax, airway trauma, ventilator use and settings, and anaphylaxis.

"Kitware's medical computing team is dedicated to advancing research solutions in the medical community," said Andinet Enquobahrie, the director of medical computing at Kitware. "Whether we are collaborating with a university on research, working with our communities to improve our software platforms, or partnering with another company to integrate our software into their products and projects, our goal is to provide application developers the tools they need to develop powerful applications for medical skill training."

iMSTK 2.0 Improves Features, Efficiency of Physics, Collision and Rendering Modules

iMSTK is a free, open source toolkit that offers product developers and researchers all the software components they need to build and test virtual simulators for medical training and planning. Release 2.0 offers improved functionality with many new features as well as refactored modules that address the ease of use and extensibility of the API. Specifically, it greatly improves the features and efficiency of the physics, collision, and rendering modules.

Here are some release highlights:

Pulse 2.3 Improves Models and Functionality to Advance the Engine for Customer Needs

Pulse is a free, open source physiology engine that is used to rapidly prototype virtual simulation applications. These applications simulate whole-body human physiology through adult computational physiology models. Release 2.3 includes updates that were the result of Kitware's work with users to improve models and functionality of the engine.

Here are some release highlights:

For more information about iMSTK, visit the iMSTK website. For more information about Pulse, visit the newly redesigned Pulse website or sign up for the Pulse newsletter. To receive the latest updates on all of Kitware's software platforms, subscribe to our blog.

About Kitware

Since 1998, Kitware has been providing software research and development services to customers ranging from startups to Fortune 500 companies, including government and academic laboratories worldwide. Kitware's core areas of expertise are computer vision, data and analytics, high-performance computing and visualization, medical computing, and software process. The company has grown to more than 150 employees, with offices in Clifton Park, NY; Arlington, VA; Carrboro, NC; Santa Fe, NM; and Lyon, France. For more information visit kitware.com.


MongoDB: Riding The Data Wave – Seeking Alpha

MongoDB (MDB) is a database software company which is benefiting from the growth in unstructured data and leading the growth in non-relational databases. Despite MongoDB's recent rise in share price, its current valuation is modest given its strong position in a large and attractive market.

There has been an explosion in the growth of data in recent years, with this growth being dominated by unstructured data. Unstructured data is currently growing at a rate of 26.8% annually, compared to structured data, which is growing at a rate of 19.6% annually.

Figure 1: Growth in Data

(source: m-files)

Unstructured data refers to any data which, despite possibly having internal structure, is not organized via pre-defined data models or schemas. Unstructured data includes formats like audio, video and social media postings and is often stored in non-relational (NoSQL) databases. Structured data is suitable for storage in a traditional tabular database (rows and columns) and is normally stored in relational databases.

Mature analytics tools exist for structured data, but analytics tools for mining unstructured data are nascent. Improved data analytics tools for unstructured data will help to increase the value of this data and encourage companies to ensure they are collecting and storing as much of it as possible. Unstructured data analytics tools are designed to analyze information that doesn't have a pre-defined model and include tools like natural language processing.

Table 1: Structured Data Versus Unstructured Data

(source: Adapted by author from igneous)

Unstructured data is typically stored in NoSQL databases which can take a variety of forms, including:

Unstructured data can also be stored in multimodel databases, which incorporate multiple database structures in one package.
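To make the document model concrete, here is a minimal sketch in plain Python (standard library only; the record and field names are invented for illustration) of how the same entity looks as a flat relational row versus a nested, schema-flexible document:

```python
import json

# A relational row is flat: every field must fit a pre-defined column.
relational_row = ("u42", "Ada Lovelace", "ada@example.com")

# A document database stores the same entity as one nested document;
# new fields can be added without altering a table definition, and
# one-to-many data can be embedded instead of living in a join table.
document = {
    "_id": "u42",
    "name": "Ada Lovelace",
    "email": "ada@example.com",
    "posts": [
        {"title": "On the Analytical Engine", "likes": 7},
    ],
}

print(json.dumps(document, indent=2))
```

The trade-off is visible even in this toy: the document reads back in one lookup with no joins, but nothing enforces that every document carries the same fields.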

Figure 2: Multimodel Database

(source: Created by author)

Some of the potential advantages of NoSQL databases include:

Common use-cases for NoSQL databases include web-scale, IoT, mobile applications, DevOps, social networking, shopping carts and recommendation engines.

Relational databases have historically dominated the database market, but they were not built to handle the volume, variety and velocity of data being generated today, nor were they built to take advantage of the commodity storage and processing power available today. Common applications of relational databases include ERP, CRM and ecommerce. Relational databases are tabular, highly dependent on pre-defined data definitions and usually scale vertically (a single server has to host the entire database to ensure acceptable performance). As a result, relational databases can be expensive, difficult to scale and concentrated around a small number of failure points. The solution to support rapidly growing applications is to scale horizontally, by adding servers instead of concentrating more capacity in a single server. Organizations are now turning to scale-out architectures using open software technologies, commodity servers and cloud computing instead of large monolithic servers and storage infrastructure.

Figure 3: Data Structure and Database Type

(source: Created by author)

According to IDC, the worldwide database software market, which it refers to as structured data management software, was $44.6 billion in 2016 and is expected to grow to $61.3 billion in 2020, representing an 8% compound annual growth rate. Despite the rapid growth in unstructured data and the increasing importance of non-relational databases, IDC forecasts that relational databases will still account for 80% of the total operational database market in 2022.

Database management systems (DBMS) cloud services were 23.3% of the DBMS market in 2018, excluding DBMS licenses hosted in the cloud. In 2017 cloud DBMS accounted for 68% of the DBMS market growth with Amazon Web Services (AMZN) and Microsoft (MSFT) accounting for 75% of the growth.

MongoDB provides document databases using open source software and is one of the leading providers of NoSQL databases to address the requirements of unstructured data. MongoDB's software was downloaded 30 million times between 2009 and 2017 with 10 million downloads in 2017 and is frequently used for mobile apps, content management, real-time analytics and applications involving the Internet of Things, but can be a good choice for any application where there is no clear schema definition.

Figure 4: MongoDB downloads

(source: MongoDB)

MongoDB has a number of offerings, including:

Figure 5: MongoDB Platform

(source: MongoDB)

Functionality of the software includes:

MongoDB's platform offers high performance, horizontal scalability, flexible data schema and reliability through advanced security features and fault-tolerance. These features are helping to attract users of relational databases with approximately 30% of MongoDB's new business in 2017 resulting from the migration of applications from relational databases.

MongoDB generates revenue through term licenses and hosted as-a-service solutions. Most contracts are one year in length, invoiced upfront, with revenue recognized ratably over the term of the contract, although a growing number of customers are entering multiyear subscriptions. Revenue from hosted as-a-service solutions is primarily generated on a usage basis and is billed either in arrears or paid up front. Services revenue is comprised of consulting and training services, which generally result in losses and are primarily used to drive customer retention and expansion.

MongoDB's open source business model has allowed the company to scale rapidly, and it now has over 16,800 customers, including half of the Global Fortune 100 in 2017. The open source business model uses the community version as a pipeline for potential future subscribers and relies on customers converting to a paid model once they require premium support and tools.

Figure 6: Prominent MongoDB Customers

(source: Created by author using data from MongoDB)

MongoDB's growth is driven largely by its ability to expand revenue from existing customers. This is shown by the expansion of Annual Recurring Revenue (ARR) over time, where ARR is defined as the subscription revenue contractually expected from customers over the following 12 months assuming no increases or reductions in their subscriptions. ARR excludes MongoDB Atlas, professional services and other self-service products. The fiscal year 2013 cohort increased their initial ARR from $5.3 million to $22.1 million in fiscal year 2017, representing a multiple of 4.1x.

Figure 7: MongoDB Cohort ARR

(source: MongoDB)

Although MongoDB continues to incur significant operating losses, the contribution margin of new customers quickly becomes positive, indicating that as MongoDB's growth rate slows, the company will become profitable. Contribution margin is defined as the ARR of subscription commitments from the customer cohort at the end of a period, less the associated cost of subscription revenue and estimated allocated sales and marketing expense.
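That definition can be expressed directly as arithmetic. The figures below are invented for illustration, not MongoDB's actual cohort numbers:

```python
def contribution_margin(arr, cost_of_subscription, allocated_sales_marketing):
    """Contribution margin as defined above: cohort ARR less the associated
    cost of subscription revenue and allocated sales & marketing expense."""
    return arr - cost_of_subscription - allocated_sales_marketing

# Illustrative figures for a hypothetical cohort, in $ millions:
margin = contribution_margin(arr=10.0,
                             cost_of_subscription=2.5,
                             allocated_sales_marketing=4.0)
print(margin)  # 3.5: positive once acquisition spend is behind the cohort
```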

Figure 8: MongoDB 2015 Cohort Contribution Margin

(source: MongoDB)

MongoDB continues to achieve rapid revenue growth, driven by an increasing number of customers and increased revenue per customer. Revenue growth has shown little sign of decline, which is not surprising given the size of MongoDB's market opportunity. Revenue per customer is modest and MongoDB still has significant potential to expand the number of Global Fortune 100 customers.

Figure 9: MongoDB Revenue

(source: Created by author using data from MongoDB)

Figure 10: MongoDB Customer Numbers

(source: Created by author using data from MongoDB)

MongoDB's revenue growth has been higher than that of other listed database vendors since 2017 as a result of its expanding customer base and growing revenue per customer. The rise of cloud computing and non-relational databases has had a large impact on relational database vendors, with DBMS growth now dominated by cloud computing vendors and non-relational database vendors.

Figure 11: Database Vendor Revenue

(source: Created by author using data from company reports)

MongoDB's revenue growth is relatively high for its size when compared to other database vendors, but is likely to begin to decline in coming years.

Figure 12: Database Vendor Revenue Growth

(source: Created by author using data from company reports)

MongoDB's revenue is dominated by subscription revenue and this percentage has been increasing over time. This relatively stable source of income stands MongoDB in good stead for the future, particularly if customers can be converted to longer-term contracts.

Figure 13: MongoDB Subscription Revenue

(source: Created by author using data from MongoDB)

MongoDB generates reasonable gross profit margins for an enterprise software company from its subscription services, although these have begun to decline in recent periods, likely as a result of the introduction of the entry-level Atlas offering in 2016 and possibly also of increasing competition.

Figure 14: MongoDB Gross Profit Margin

(source: Created by author using data from MongoDB)

MongoDB has exhibited a large amount of operating leverage in the past and is now approaching positive operating profitability. This is largely the result of sales and marketing and research and development costs declining relative to revenue. This trend is likely to continue as MongoDB expands, particularly as growth begins to slow and the burden of attracting new customers eases.

Figure 15: MongoDB Operating Profit Margin

(source: Created by author using data from MongoDB)

Figure 16: MongoDB Operating Expenses

(source: Created by author using data from MongoDB)

Although MongoDB's operating profitability is still negative, it is in line with that of other database vendors and should become positive within the next few years. This is supported by the positive contribution margin of MongoDB's customers after their first year.

Figure 17: Database Vendor Operating Profit Margins

(source: Created by author using data from company reports)

MongoDB is yet to achieve consistently positive free cash flow, although it appears to be on track as the business scales. This should be expected given the high-margin nature of the business and its low capital requirements. Current negative free cash flow is largely a result of expenditures in support of future growth, in the form of sales and marketing and research and development.

Figure 18: MongoDB Free Cash Flow

(source: Created by author using data from MongoDB)

Competitors in the database vendor market can be broken into incumbents, cloud platforms and challengers. Incumbents are the current dominant players in the market, like Oracle (ORCL), who offer relational databases. Cloud platforms are cloud computing vendors like Amazon and Microsoft that also offer database software and services. Challengers are pure play database vendors who offer a range of non-relational database software and services.

Table 2: Database Vendors

(source: Created by author)

Incumbents

Incumbents offer proven technology with a large set of features, which may be important for mission-critical transactional applications. This gives incumbents a strong position, particularly as relational databases are expected to retain the lion's share of the database market in coming years. Incumbent players that lack a strong infrastructure-as-a-service platform, though, are poorly positioned to capture new applications and likely to be losers in the long run. This trend is evidenced by Teradata's (TDC) struggles since the advent of cloud computing and non-relational databases.

Cloud Platforms

Cloud service providers are able to offer a suite of SaaS solutions in addition to cloud computing, creating a compelling value proposition for customers. In exchange for reducing the number of vendors required and gaining access to applications designed to run together, database customers run the risk of being locked into a cloud vendor and paying significantly more for services which could potentially be inferior.

Challengers

Dedicated database vendors can offer best in breed technology, low costs and multi-cloud portability which helps to prevent cloud vendor lock-in.

The DBMS market is typically broken into operational and analytical segments. The operational DBMS market refers to databases that are tied to a live application, whereas the analytical market refers to the processing and analysis of data imported from various sources.

Figure 19: Database Market Competitive Landscape

(source: Created by author)

Gartner assesses MongoDB as a challenger in the operational database systems market, due primarily to a lack of completeness of vision. The leaders are generally large companies which offer a broader range of database types in addition to cloud computing services. MongoDB's ability to succeed against these companies will depend on its ability to offer best-in-class and/or lower-cost services.


Qualys offers GPS guidance for developers at the application security crossroads – ComputerWeekly.com

All developers care deeply about application [development] security.

Okay, that's perhaps not always strictly true, so let's try again.

All developers care deeply about application functionality and speed, which they then carry through to a secondary level of concern related to Ops-level application manageability, flexibility and security.

How then should we engage with programmers on aspects of security, especially as it now straddles something of a crossroads brought about by the move to increasingly cloud-native cloud-first application development?

Security specialist Qualys [pronounced: KWAL-IS] has attempted to address the application development security subject head-on by hosting what probably ranks as the first tech event of 2020.

Qualys Security Conference London 2020 ran this week in London with the tagline "application security at a crossroads". And isn't it just?

The company billed the event as an opportunity to explore the profound impact of digital transformation on the security industry and what it means for practitioners, partners and vendors.

Qualys is clearly focused on gaining attention from CIOs, CSOs and CTOs; but at ground level, the company says it works with network managers, cloud developers and security developers or, as they are known these days, DevSecOps practitioners.

So, for developers, then: as we have noted before on the Computer Weekly Developer Network, the Qualys Web Application Scanning (WAS) 6.0 product now supports Swagger version 2.0 to allow programmers to streamline [security] assessments of REST APIs and get visibility of the security posture of mobile application backends and Internet of Things (IoT) services.

NOTE: Swagger is an open source software framework backed by a considerable ecosystem of tools that helps developers design, build, document and consume RESTful web services.
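For readers unfamiliar with the format, a minimal Swagger 2.0 description can be sketched as follows. The `/devices` endpoint and every value here are hypothetical, chosen only to show the required top-level fields of the specification; they are not part of any Qualys product:

```python
import json

# A minimal, hypothetical Swagger 2.0 (OpenAPI 2.0) document describing
# one GET endpoint of an imagined IoT backend.
swagger_spec = {
    "swagger": "2.0",
    "info": {"title": "Example IoT backend", "version": "1.0.0"},
    "basePath": "/api",
    "paths": {
        "/devices": {
            "get": {
                "summary": "List registered devices",
                "produces": ["application/json"],
                "responses": {"200": {"description": "A list of devices"}},
            }
        }
    },
}

print(json.dumps(swagger_spec, indent=2))
```

A scanner that understands this format can enumerate `paths` and exercise each operation, which is what makes machine-readable API descriptions useful for automated security assessment.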

Qualys president and chief product officer Sumedh Thakar used his London keynote slot to deliver a piece he called The Evolution of the Qualys Platform: Unveiling the Latest Updates and Next-Gen Initiatives.

Speaking at the London show this January, Thakar suggested that the process of digital transformation has moved from being a prototyping, exploratory part of the business to, now in 2020, being something that IT development teams are truly rolling out.

"Banks are now looking at technologies that would allow users to open an account simply by taking a selfie," said Thakar, "and so this will mean that these processes (which essentially run on applications) need to run on a secure backbone. The infrastructure that organisations will run on has become super-hybrid in order to be able to join all these new digital services together."

Cloud, containerisation and refactoring applications to be mobile friendly are just some of the major changes that need to happen in digitally disruptive environments.

Thakar is perhaps suggesting that if we can show developers that there are automated intelligence layers in place that will work across hybrid infrastructures and reduce the Mean Time To Remediation (MTTR), then developers might in fact take more interest in the security aspect of the systems they are working to engineer in the first place.

Thakar used a number of real world examples (from bank accounts that can be opened with nothing more than a selfie to intelligent motion-sensing doorbells) in an attempt to justify and validate the need for Qualys security technologies. With all examples tabled, Thakar led the audience forward to think about how system responses should be actioned.

He explained that the evolution of the Qualys platform has come about because SIEM, SOAR and log file analytics solutions (such as Splunk) were either never built to support a [security] data model that could be driven by Machine Learning (ML) or were not actually designed for security in the first place. Log file analytics, moreover, acts on historical data, so it is very much after the event.

NOTE: Security Information & Event Management (SIEM) tools were always designed as log correlation specialists. Security Orchestration, Automation & Response (SOAR), again, was too much of a point solution (but one which Qualys is adding as a function directly as a playbook anyway).

As programmers design and evolve an image in the cloud, these developers will only need to make one single API call to bring Qualys security layers to bear upon their cloud-native applications, due to the company's proximity to both Microsoft Azure and Google Cloud Platform.

New (in terms of products) in 2020 is Qualys Respond, which includes an agent to deploy patches automatically to users' devices; again, this allows applications to feature remediation controls more intuitively.

Other developer tools from the company include the ability to use Qualys Browser Recorder, a free Google Chrome browser extension, to review scripts for navigating through complex authentication and business workflows in web applications.

So then will developers ever truly embrace security issues and allow DevSecOps to put the Ops in operationalised?

Qualys would like to think so. Engagement at the coal face, along with an effort to explain how complex authentication, optimised security agents and streamlined security assessments/audits can be made easy (dare we suggest almost joyful), will (very arguably) ultimately make a difference for developers.


Don’t want a robot stealing your job? Take a course on AI and machine learning. – Mashable

Just to let you know, if you buy something featured here, Mashable might earn an affiliate commission. There are some 288 lessons included in this online training course.

Image: pexels

By StackCommerce, Mashable Shopping, 2020-01-16 19:44:17 UTC

TL;DR: Jump into the world of AI with the Essential AI and Machine Learning Certification Training Bundle for $39.99, a 93% savings.

From facial recognition to self-driving vehicles, machine learning is taking over modern life as we know it. It may not be the flying cars and world-dominating robots we envisioned 2020 would hold, but it's still pretty futuristic and frightening. The good news is if you're one of the pros making these smart systems and machines, you're in good shape. And you can get your foot in the door by learning the basics with this Essential AI and Machine Learning Certification Training Bundle.

This training bundle provides four comprehensive courses introducing you to the world of artificial intelligence and machine learning. And right now, you can get the entire thing for just $39.99.

These courses cover natural language processing, computer vision, data visualization, and artificial intelligence basics, and will ultimately teach you to build machines that learn as they're fed human input. Through hands-on case studies, practice modules, and real-time projects, you'll delve into the world of intelligent systems and machines and get ahead of the robot revolution.

Here's what you can expect from each course:

Access 72 lectures and six hours of content exploring topics like convolutional neural networks (CNNs), recurrent neural networks (RNNs), and other deep architectures using TensorFlow. Ultimately, you'll build a foundation in both artificial intelligence, which is the concept in which machines develop the ability to simulate natural intelligence to carry out tasks, and machine learning, which is an application of AI aiming to learn from data and build on it to maximize performance.

Through seven hours of content, you'll learn how to arrange critical data in a visual format (think graphs, charts, and pictograms). You'll also learn to deploy data visualization through Python using Matplotlib, a library that helps in viewing the data. Finally, you'll tackle actual geographical plotting using the Matplotlib extension called Basemap.

In just 5.5 hours, this course gives you a more in-depth look at the role of CNNs, the knowledge of transfer learning, object localization, object detection, and using TensorFlow. You'll also learn the challenges of working with real-world data and how to tackle them head-on.

Natural language processing (NLP) is a field of AI which allows machines to interpret and comprehend human language. Through 5.5 hours of content, you'll understand the processes involved in this field and learn how to build artificial intelligence for automation. The course itself provides an innovative methodology and sample exercises to help you dive deep into NLP.

Originally $656, you can slash 93% off and get a year's worth of access to the Essential AI and Machine Learning Bundle for just $39.99 right now.

Prices subject to change.


Seton Hall Announces New Courses in Text Mining and Machine Learning – Seton Hall University News & Events

Professor Manfred Minimair, Data Science, Seton Hall University

As part of its online M.S. in Data Science program, Seton Hall University in South Orange, New Jersey, has announced new courses in Text Mining and Machine Learning.

Seton Hall's master's program in Data Science is the first 100% online program of its kind in New Jersey and one of very few in the nation.

Quickly emerging as a critical field in a variety of industries, data science encompasses activities ranging from collecting raw data and processing and extracting knowledge from that data, to effectively communicating those findings to assist in decision making and implementing solutions. Data scientists have extensive knowledge in the overlapping realms of business needs, domain knowledge, analytics, and software and systems engineering.

"We're in the midst of a pivotal moment in history," said Professor Manfred Minimair, director of Seton Hall's Data Science program. "We've moved from being an agrarian society through to the industrial revolution and now squarely into the age of information," he noted. "The last decade has been witness to a veritable explosion in data informatics. Where once business could only look at dribs and drabs of customer and logistics dataas through a glass darklynow organizations can be easily blinded by the sheer volume of data available at any given moment. Data science gives students the tools necessary to collect and turn those oceans of data into clear and readily actionable information."

These tools will be provided by Seton Hall in new ways this spring, when Text Mining and Machine Learning make their debut.

Text Mining

Taught by Professor Nathan Kahl, text mining is the process of extracting high-quality information from text, which is typically done by developing patterns and trends through means such as statistical pattern learning. Professor Nathan Kahl is an Associate Professor in the Department of Mathematics and Computer Science. He has extensive experience in teaching data analytics at Seton Hall University. Some of his recent research lies in the area of network analysis, another important topic which is also taught in the M.S. program.
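To give a flavour of the field, the simplest statistical building block of text mining, a term-frequency count, fits in a few lines of standard-library Python (the sample sentence is invented; real course material will go far beyond this):

```python
from collections import Counter
import re

def term_frequencies(text):
    """Lowercase the text, tokenize on word characters, and count term
    occurrences: the starting point for statistical pattern learning."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return Counter(tokens)

freqs = term_frequencies("Data science turns raw data into insight; data is key.")
print(freqs.most_common(1))  # [('data', 3)]
```

Everything else in text mining (weighting schemes, pattern discovery, classification) builds on counts like these.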

Professor Kahl notes, "The need for people with these skills in business, industry and government service has never been greater, and our curriculum is specifically designed to prepare our students for these careers." According to EAB (formerly known as the Education Advisory Board), the national growth in demand for data science practitioners over the last two years alone was 252%. According to Glassdoor, the median base salary for these jobs is $108,000.

Machine Learning

In many ways, machine learning represents the next wave in data science. It is the scientific study of algorithms and statistical models that computer systems use to perform a specific task without using explicit instructions, relying on patterns and inference instead. It is seen as a subset of artificial intelligence. The course will be taught by Sophine Clachar, a data engineer with more than 10 years of experience. Her past research has focused on aviation safety and large-scale and complex aviation data repositories at the University of North Dakota. She was also a recipient of the Airport Cooperative Research Program Graduate Research Award, which fostered the development of machine learning algorithms that identify anomalies in aircraft data.
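The "patterns and inference instead of explicit instructions" idea can be illustrated with a toy nearest-neighbour classifier in plain Python. The training points and labels below are invented for illustration; this is a sketch of the concept, not course material:

```python
def nearest_neighbor(train, query):
    """Classify `query` by the label of the closest training point:
    the rule is learned from data, not written out explicitly."""
    def dist(a, b):
        # squared Euclidean distance between two feature vectors
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda item: dist(item[0], query))[1]

# Toy data: (feature vector, label). Imagine features extracted from
# flight records, in the spirit of anomaly detection in aircraft data.
train = [((1.0, 1.0), "normal"), ((9.0, 9.0), "anomaly")]
print(nearest_neighbor(train, (1.5, 0.8)))  # normal
```

Nothing in the code says what makes a flight "normal"; the label comes entirely from which examples the query resembles, which is the essence of learning from data.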

"Machine learning is profoundly changing our society," Professor Clachar remarks. "Software enhanced with artificial intelligence capabilities will benefit humans in many ways, for example, by helping design more efficient treatments for complex diseases and improve flight training to make air travel more secure."

Active Relationships with Google, Facebook, Celgene, Comcast, Chase, B&N and Amazon

Students in the Data Science program, with its strong focus on computer science, statistics and applied mathematics, learn skills in cloud computing technology and Tableau, which allows them to pursue certification in Amazon Web Services and Tableau. The material is continuously updated to deliver the latest skills in artificial intelligence/machine learning for automating data science tasks. Their education is bolstered by real world projects and internships, made possible through the program's active relationships with such leading companies as Google, Facebook, Celgene, Comcast, Chase, Barnes and Noble and Amazon. The program also fosters relationships with businesses and organizations through its advisory board, which includes members from WarnerMedia, Highstep Technologies, Snowflake Computing, Compass and Celgene. As a result, students are immersed in the knowledge and competencies required to become successful data science and analytics professionals.

"Among the members of our Advisory Board are Seton Hall graduates and leaders in the field," said Minimair. "Their expertise at the cutting edge of industry is reflected within our curriculum and coupled with the data science and academic expertise of our professors. That combination will allow our students to flourish in the world of data science and informatics."

Learn more about the M.S. in Data Science at Seton Hall


Leveraging AI and Machine Learning to Advance Interoperability in Healthcare – HIT Consultant

(Left: Wilson To, Head of Worldwide Healthcare BD, Amazon Web Services (AWS); right: Patrick Combes, Worldwide Technical Leader, Healthcare and Life Sciences, AWS)

Navigating the healthcare system is often a complex journey involving multiple physicians from hospitals, clinics, and general practices. At each junction, healthcare providers collect data that serve as pieces in a patient's medical puzzle. When all of that data can be shared at each point, the puzzle is complete and practitioners can better diagnose, care for, and treat that patient. However, a lack of interoperability inhibits the sharing of data across providers, meaning pieces of the puzzle can go unseen and potentially impact patient health.

The Challenge of Achieving Interoperability

True interoperability has two parts: syntactic and semantic. Syntactic interoperability requires a common structure so that data can be exchanged and interpreted between health information technology (IT) systems, while semantic interoperability requires a common language so that the meaning of data is transferred along with the data itself. This combination supports data fluidity. But for this to work, organizations must apply technologies like artificial intelligence (AI) and machine learning (ML) across that data to shift the industry from a fee-for-service model, in which government agencies reimburse healthcare providers based on the number of services provided or procedures ordered, to a value-based model that puts the focus back on the patient.

The industry has started to make significant strides toward reducing barriers to interoperability. For example, industry guidelines and resources like the Fast Healthcare Interoperability Resources (FHIR) have helped to set a standard, but there is still more work to be done. Among the biggest barriers in healthcare right now is the fact there are significant variations in the way data is shared, read, and understood across healthcare systems, which can result in information being siloed and overlooked or misinterpreted.
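On the syntactic side, FHIR works by giving every system the same resource structure to parse. A minimal sketch of what a FHIR-style Condition resource might look like on the wire (the SNOMED code shown is illustrative; a real system would validate codes against a terminology server):

```python
import json

# A pared-down FHIR-style Condition resource: a shared structure means any
# receiving system can at least locate the diagnosis code and the patient.
condition = {
    "resourceType": "Condition",
    "subject": {"reference": "Patient/123"},
    "code": {
        "coding": [{
            "system": "http://snomed.info/sct",   # shared code system URI
            "code": "42343007",                    # illustrative SNOMED code
            "display": "Congestive heart failure",
        }],
        "text": "congestive heart failure",
    },
}

payload = json.dumps(condition)             # what actually travels between systems
print(json.loads(payload)["resourceType"])  # Condition
```

The structure alone is the syntactic half; agreeing on what "42343007" means is the semantic half.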

For example, a doctor may know that a diagnosis of dropsy or edema can be indicative of congestive heart failure; a computer alone, however, may not be able to draw that parallel. Without syntactic and semantic interoperability, that diagnosis runs the risk of getting lost in translation when shared digitally with multiple health providers.
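At bottom, the dropsy/edema problem is a vocabulary-mapping one: synonymous terms must resolve to a shared code so meaning travels with the data. A minimal sketch, where the term list and target codes are illustrative rather than a real terminology service:

```python
from typing import Optional

# Illustrative mapping of free-text diagnoses to shared codes. A production
# system would query a terminology service, not a hand-built dictionary.
TERM_TO_CODE = {
    "congestive heart failure": "I50.9",
    "dropsy": "I50.9",   # archaic synonym for the same condition
    "edema": "R60.9",    # edema alone maps to a different, less specific code
}

def normalize(diagnosis_text: str) -> Optional[str]:
    """Map a free-text diagnosis to a shared code, or None if unknown."""
    return TERM_TO_CODE.get(diagnosis_text.strip().lower())

print(normalize("Dropsy"))  # I50.9 -- the same code a CHF diagnosis maps to
```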

Employing AI, ML and Interoperability in Healthcare

Change Healthcare is one organization making strides to enable interoperability and help health organizations achieve these goals. Recently, Change Healthcare announced that it is providing free interoperability services that break down information silos to enhance patients' access to their medical records and support clinical decisions that influence patients' health and wellbeing.

While companies like Change Healthcare are creating services that better allow for interoperability, others like Fred Hutchinson Cancer Research Center and Beth Israel Deaconess Medical Center (BIDMC) are using AI and ML to further break down obstacles to quality care.

For example, Fred Hutch is using ML to help identify patients for clinical trials who may benefit from specific cancer therapies. By using ML to evaluate millions of clinical notes and to extract and index medical conditions, medications, and cancer therapeutic options, Fred Hutch reduced the time to process each document from hours to seconds, meaning it could connect more patients to potentially life-saving clinical trials.
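The core operation described, turning free-text notes into indexed conditions and medications, can be sketched in a few lines. This toy version uses keyword lists; a real pipeline like Fred Hutch's would rely on a trained clinical NLP model rather than hand-curated vocabularies:

```python
import re

# Illustrative vocabularies; in practice these would come from a clinical
# NLP model or a medical terminology, not a hard-coded set.
CONDITIONS = {"congestive heart failure", "melanoma", "hypertension"}
MEDICATIONS = {"metformin", "lisinopril", "pembrolizumab"}

def index_note(note: str) -> dict:
    """Extract known conditions and medications from one clinical note."""
    text = note.lower()
    def found(vocab):
        return sorted(t for t in vocab
                      if re.search(r"\b" + re.escape(t) + r"\b", text))
    return {"conditions": found(CONDITIONS), "medications": found(MEDICATIONS)}

note = "Pt with hypertension, started lisinopril; considering pembrolizumab trial."
print(index_note(note))
```

Once every note is reduced to a structured record like this, matching patients to trial eligibility criteria becomes a fast index lookup instead of a manual read.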

In addition, BIDMC is using AI and ML to ensure medical forms are completed when scheduling surgeries. By identifying incomplete forms or missing information, BIDMC can prevent delays in surgeries, ultimately enhancing the patient experience, improving hospital operations, and reducing costs.

An Opportunity to Transform The Industry

As technology creates more data across healthcare organizations, AI and ML will be essential to help take that data and create the shared structure and meaning necessary to achieve interoperability.

As an example, Cerner, a U.S. supplier of health information technology solutions, is deploying interoperability solutions that pull together anonymized patient data into longitudinal records that can be developed along with physician correlations. Coupled with other unstructured data, Cerner uses the data to power machine learning models and algorithms that help with earlier detection of congestive heart failure.

As healthcare organizations take the necessary steps toward syntactic and semantic interoperability, the industry will be able to use data to place a renewed focus on patient care. In practice, Philips' HealthSuite digital platform stores and analyzes 15 petabytes of patient data from 390 million imaging studies, medical records and patient inputs, adding as much as one petabyte of new data each month.

With machine learning applied to this data, the company can identify at-risk patients, deliver definitive diagnoses and develop evidence-based treatment plans to drive meaningful patient results. That orchestration and execution of data is the definition of valuable patient-focused care, and the future of what we see for interoperability driven by AI and ML in the United States. With access to the right information at the right time to inform the right care, health practitioners will have all the pieces of a patient's medical puzzle, and that will bring meaningful improvement not only in care decisions, but in patients' lives.

About Wilson To, Global Healthcare Business Development lead at AWS & Patrick Combes, Global Healthcare IT Lead at AWS

Wilson To is the Head of Worldwide Healthcare Business Development at Amazon Web Services (AWS), where he leads business development efforts across the AWS worldwide healthcare practice. To has led teams across startup and corporate environments, receiving international recognition for his work in global health efforts. He joined Amazon Web Services in October 2016 to lead product management and strategic initiatives.

Patrick Combes is the Worldwide Technical Leader for Healthcare & Life Sciences at Amazon Web Services (AWS), where he is responsible for AWS's worldwide technical strategy in Healthcare and Life Sciences (HCLS). Patrick helps develop and implement the strategic plan to engage customers and partners in the industry and leads the community of technically focused HCLS specialists within AWS.

View original post here:
Leveraging AI and Machine Learning to Advance Interoperability in Healthcare - - HIT Consultant

High Investment in AI and Machine Learning will Enhance Automotive Digital Assistants by 2025 – Yahoo Finance

Emotional intelligence and in-car voice biometrics will create opportunities for OEMs and start-ups seeking new business models, finds Frost & Sullivan

High Investment in AI and Machine Learning will Enhance Automotive Digital Assistants by 2025

SANTA CLARA, Calif., Jan. 16, 2020 /CNW/ -- Digital assistants are rapidly emerging as the primary input medium in the human-machine interface (HMI), creating new opportunities for in-vehicle engagement services. Digital assistants offer a smart and intuitive way to operate features in the vehicle while assuring minimal driver distraction. Presently, their capabilities are targeted at systems that deliver navigation and entertainment services in cars to enhance users' multimedia experience; however, future use cases will focus on the safety and security of the vehicle and the driver.

"With the rising popularity of connected services such as traffic information and local search, digital assistants have become a key differentiator for original equipment manufacturers (OEMs). OEM-branded digital assistants will help automakers strengthen their brand and convert one-time sales into continual service-centric relationships," said Anubhav Grover, Research Analyst, Mobility. "OEMs are aiming to create their own branded digital assistants that will co-exist and integrate with third-party and tech-branded digital assistants. BMW has already launched its own Intelligent Personal Assistant (IPA), which uses Alexa to access Amazon's e-commerce and Cortana for Microsoft Office."

Frost & Sullivan's recent analysis, Strategic Analysis of Automotive Digital Assistants, Forecast to 2025, studies the competitive landscape, business models, and future focus areas of OEMs, digital assistant suppliers, and technology companies. It examines the trends in artificial intelligence integration and voice biometrics. Furthermore, it analyzes the different strategies adopted by OEMs, tier-I suppliers, and technology startups in North America, Europe, and China.

For further information on this analysis, please visit: http://frost.ly/3yk.

"North America is expected to continue leading the adoption of digital assistant solutions. Meanwhile, with higher penetration of long-term evolution (LTE) and greater production capacity in China, Asia-Pacific is expected to be a growth hub for OEMs," noted Grover. "Digital assistant developers are increasingly building strategic partnerships with telecom providers and communication module makers to enhance on-road safety and in-vehicle data-rich services. Flexible business models such as 'choice of network' for consumers will further improve customer retention and revenue generation."

For greater growth opportunities, digital assistant companies are likely to:

Strategic Analysis of Automotive Digital Assistants, Forecast to 2025, is part of Frost & Sullivan's global Automotive & Transportation Growth Partnership Service program.

About Frost & Sullivan

For over five decades, Frost & Sullivan has become world-renowned for its role in helping investors, corporate leaders and governments navigate economic changes and identify disruptive technologies, Mega Trends, new business models and companies to action, resulting in a continuous flow of growth opportunities to drive future success. Contact us: Start the discussion.

Strategic Analysis of Automotive Digital Assistants, Forecast to 2025 (K329-18)


See the original post:
High Investment in AI and Machine Learning will Enhance Automotive Digital Assistants by 2025 - Yahoo Finance

Google Is Now Able To Do More Accurate Rain "Nowcasting" With Machine Learning – Digital Information World

Google and machine learning can be regarded as two forces constantly at work to make the world a better place to live. On Monday, the company showed its research, which is based on a machine learning method and draws on radar images. By doing so, Google hopes to accurately forecast rainstorms and other weather events that can arise suddenly.

Prior to this, predicting short-term weather events was a tough challenge. Numerical methods that model atmospheric dynamics, ocean effects, thermal radiation, and other processes are limited by the computational resources available. Even giants like the National Oceanic and Atmospheric Administration (NOAA) collect around 100 terabytes of data per day.

The numerical method is also slow, taking hours to compute one round of forecasts. In common practice, experts compute a forecast every 6 hours, which limits them to only 3-4 runs a day and leaves forecasts based on data that is 6+ hours old.

By tackling these issues, the search giant hopes to help people make important immediate decisions such as traffic routing, logistics, or evacuation planning.

With its method, Google plans to use radar data and treat weather prediction as a computer vision problem. The team has set up a neural network that learns atmospheric physics from real examples, with no prior knowledge of how the atmosphere operates built in.
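Framing nowcasting as computer vision means the model maps a stack of recent radar frames to a predicted precipitation map a short time ahead. Google's published work uses a U-Net; the sketch below stands in a trivial persistence-plus-smoothing baseline just to make the input/output shapes concrete (the grid size and 3x3 box filter are illustrative, not from the paper):

```python
import numpy as np

def nowcast(radar_frames: np.ndarray) -> np.ndarray:
    """radar_frames: (T, H, W) stack of past frames -> (H, W) predicted next frame."""
    last = radar_frames[-1]                 # persistence: start from the latest frame
    h, w = last.shape
    padded = np.pad(last, 1, mode="edge")
    # 3x3 box blur as a crude stand-in for the learned spatial filtering a CNN does
    return sum(padded[i:i + h, j:j + w]
               for i in range(3) for j in range(3)) / 9.0

frames = np.random.rand(4, 64, 64)  # 4 past radar reflectivity frames, 64x64 grid
pred = nowcast(frames)
print(pred.shape)  # (64, 64)
```

A trained image-to-image network replaces the blur with filters learned from historical radar sequences, which is where the "learns atmospheric physics from examples" claim comes in.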

According to Google, its machine-learning-powered rain forecasting method outperforms the three popular forecasting models used to date. The predictions are near-instantaneous, so short-term forecasts can be based on fresh data.

Google also plans to combine its system with the High-Resolution Rapid Refresh (HRRR) model, with the aim of improving long-term forecasts as well, since HRRR contributes a 3D physical model.

Photo: 400tmax via Getty Images

Read next: Google Maps Introduces Hyperspace Animation When Switching Between Planets

Follow this link:
Google Is Now Able To Do More Accurate Rain "Nowcasting" With Machine Learning - Digital Information World

BlackBerry combines AI and machine learning to create connected fleet security solution – Fleet Owner

Geotab CEO Neil Cawse stresses the importance of the company's open platform that gives fleet managers the ability to customize the software options they need to successfully run their businesses.

So it is no surprise that during the Geotab Connect 2020 conference in San Diego, the company announced numerous new integrations and offerings for the Geotab Marketplace.

On Jan. 15, Geotab and Eleos jointly rolled out Unify, an integrated solution that offers a fleet management system, driver workflow platform and electronic logging device (ELD).

"Unify is the latest example of Geotab's mission to provide fleet owners in the heavy-duty truck market with open, customizable software and industry-leading hardware that will enable them to maximize productivity, safety and profitability," said Scott Sutarik, vice president of commercial vehicle solutions at Geotab.

Eleos offers mobile workforce management solutions to help drivers maximize productivity throughout the day. The company is based in Greenville, S.C.

"Unify allows fleet managers to leverage pre-built components to craft a custom mobile app for fleet drivers, giving them the control they want and the flexibility they need," said Kevin Survance, CEO at Eleos.

Also during the conference, Geotab said that the Drivewyze PreClear weigh station bypass service is now available on the Geotab Marketplace. Drivewyze helps fleets and drivers save time by providing bypass at more than 800 sites across 47 states and Canadian provinces.

"We are confident that this partnership will help make it easier for safety-focused commercial vehicle operators to access the largest electronic pre-screening and clearance services network in Canada and the United States without the need to install additional equipment in their trucks," said Brian Heath, president of Drivewyze.

Trimble announced its video intelligence solution is now part of the Geotab Marketplace. This offering includes a two-channel DVR and forward-facing camera, with the option to add a secondary camera.

Separately, Trimble said last week it signed a definitive agreement to acquire Kuebix, a leading transportation management system provider.

Bendix Commercial Vehicle Systems announced the addition of Geotab to the list of telematics platforms that can support SafetyDirect.

SafetyDirect is a web portal that combines a video-based driver safety solution with an active safety system. All fleets will have access to this offering by the middle of this year.

Geotab also announced the availability of the Geotab Integrated Solution for General Motors. The solution allows fleet managers to access their compatible vehicle data within the MyGeotab platform via a factory-fit, GM-engineered embedded OnStar module.

No installation or additional hardware is required to leverage the new offering. "With the Geotab Integrated Solution for GM, compatible fleets will be equipped with advanced telematics tools that provide a deep dive into vehicle information such as fuel usage, vehicle health and driver behavior," said Sherry Calkins, Geotab's vice president of strategic partners.

In October, Geotab announced a similar agreement with Ford Motor Co.

As the event opened on Jan. 14, Geotab unveiled an integration with Lytx, which creates a seamless experience within a single interface, allowing fleet managers to browse video and data from DriveCam event recorders through the Geotab platform.

See the rest here:
BlackBerry combines AI and machine learning to create connected fleet security solution - Fleet Owner

The Problem with Hiring Algorithms – Machine Learning Times – machine learning & data science news – The Predictive Analytics Times

Originally published in EthicalSystems.org, December 1, 2019

In 2004, when a webcam was relatively unheard-of tech, Mark Newman knew that it would be the future of hiring. One of the first things the 20-year-old did, after getting his degree in international business, was to co-found HireVue, a company offering a digital interviewing platform. Business trickled in. While Newman lived at his parents' house in Salt Lake City, the company, in its first five years, made just $100,000 in revenue. HireVue later received some outside capital, expanded and, in 2012, boasted some 200 clients, including Nike, Starbucks, and Walmart, which would pay HireVue, depending on project volume, between $5,000 and $1 million. Recently, HireVue, which was bought earlier this year by the Carlyle Group, has become the source of some alarm, or at least trepidation, for its foray into the application of artificial intelligence in the hiring process. No longer does the company merely offer clients an asynchronous interviewing service, a way for hiring managers to screen thousands of applicants quickly by reviewing their video interviews. HireVue can now give companies the option of letting machine-learning algorithms choose the best candidates for them, based on, among other things, applicants' tone, facial expressions, and sentence construction.

If that gives you the creeps, you're not alone. A 2017 Pew Research Center report found few Americans to be enthused, and many worried, by the prospect of companies using hiring algorithms. More recently, around a dozen interviewees assessed by HireVue's AI told the Washington Post that it felt alienating and dehumanizing to have to wow a computer before being deemed worthy of a company's time. They also wondered how their recordings might be used without their knowledge. Several applicants mentioned passing on the opportunity because thinking about the AI interview, as one of them told the paper, "made my skin crawl." Had these applicants sat for a standard 30-minute interview, comprised of a half-dozen questions, the AI could have analyzed up to 500,000 data points. Nathan Mondragon, HireVue's chief industrial-organizational psychologist, told the Washington Post that each one of those points becomes an ingredient in the person's calculated score, between 1 and 100, on which hiring decisions can depend. New scores are ranked against a store of traits, mostly having to do with language use and verbal skills, from previous candidates for a similar position who went on to thrive on the job.
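The ranking scheme described, comparing a new candidate's traits against those of previous successful hires and emitting a 1-100 score, can be illustrated with a toy model. Everything here is invented for illustration; HireVue's actual features, weights, and model are proprietary:

```python
import numpy as np

# Invented data: 200 prior successful hires, each described by 3 numeric traits.
rng = np.random.default_rng(42)
past_hires = rng.normal(loc=0.6, scale=0.1, size=(200, 3))

def score(candidate: np.ndarray) -> int:
    """Score 1-100 by how close the candidate sits to the cloud of past hires."""
    dist = np.linalg.norm(past_hires - candidate, axis=1).mean()
    closeness = 1.0 / (1.0 + dist)          # map mean distance into (0, 1]
    return max(1, min(100, int(round(closeness * 100))))

typical = score(np.array([0.6, 0.6, 0.6]))  # near the past-hire average
outlier = score(np.array([5.0, 5.0, 5.0]))  # far from every past hire
print(typical > outlier)  # True
```

The toy makes the critics' concern concrete: whatever the past hires had in common, including any historical bias in who was hired, is exactly what the score rewards.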

HireVue wants you to believe that this is a good thing. After all, their pitch goes, humans are biased. If something like hunger can affect a hiring manager's decision, let alone classism, sexism, lookism, and other isms, then why not rely on the less capricious, more objective decisions of machine-learning algorithms? No doubt some job seekers agree with the sentiment Loren Larsen, HireVue's Chief Technology Officer, shared recently with the Telegraph: "I would much prefer having my first screening with an algorithm that treats me fairly rather than one that depends on how tired the recruiter is that day." Of course, the appeal of AI hiring isn't just about doing right by the applicants. As a 2019 white paper from the Society for Industrial and Organizational Psychology notes, AI applied to assessing and selecting talent "offers some exciting promises for making hiring decisions less costly and more accurate for organizations while also being less burdensome and (potentially) fairer for job seekers."

Do HireVue's algorithms treat potential employees fairly? Some researchers in machine learning and human-computer interaction doubt it. Luke Stark, a postdoc at Microsoft Research Montreal who studies how AI, ethics, and emotion interact, told the Washington Post that HireVue's claims, that its automated software can glean a worker's personality and predict their performance from such things as tone, should make us skeptical:

Systems like HireVue, he said, have become quite skilled at spitting out data points that seem convincing, even when they're not backed by science. And he finds this "charisma of numbers" really troubling because of the overconfidence employers might lend them while seeking to decide the path of applicants' careers.

The best AI systems today, he said, are notoriously prone to misunderstanding meaning and intent. But he worried that even their perceived success at divining a person's true worth could help perpetuate a homogenous corporate monoculture of automatons, each new hire modeled after the last.

Eric Siegel, an expert in machine learning and author of Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie, or Die, echoed Stark's remarks. In an email, Siegel told me, "Companies that buy into HireVue are inevitably, to a great degree, falling for that feeling of wonderment and speculation that a kid has when playing with a Magic Eight Ball." That, in itself, doesn't mean HireVue's algorithms are completely unhelpful. "Driving decisions with data has the potential to overcome human bias in some situations, but also, if not managed correctly, could easily instill, perpetuate, magnify, and automate human biases," he said.

To continue reading this article click here.

Follow this link:
The Problem with Hiring Algorithms - Machine Learning Times - machine learning & data science news - The Predictive Analytics Times