
Category Archives: Artificial Intelligence

The OFC Conference and Exhibition in San Diego Concludes Showcasing Breakthrough Innovations in 5G, Artificial Intelligence, Co-Packaging, Data Center…

Posted: March 11, 2022 at 11:48 am

"Global optical communications leaders united once again to exchange information and demonstrate ground-breaking interoperability including the newly launched OFCnet, catapulting optical technologies and network advancements into the reality of tomorrow," said OFC General Chairs Shinji Matsuo, David Plant and Jun Shan Wey. "Over the past two years, many of the best and brightest companies have been hard at work and were eager to show the world just how far optical advancements have come."

"Seeing some of the products live on the show floor and networking face-to-face with industry peers after a two-year hiatus from OFC helped make some of the latest optical innovations more concrete," said Woo Jin Ho, Senior Networking and Semiconductor Analyst, Bloomberg Intelligence. "For example, OIF's demonstration of Near-Packaged Optics and Co-Packaged Optics helped me better conceptualize some of the technologies that hyperscale clouds could be adopting into their data centers over the next five to ten years."

Plenary Session: OFC's impressive line-up of plenary visionaries showcased technologies shaping our world and driving innovation for a better future. They demonstrated the impact that the latest developments in optics and photonics will have on the industry moving forward. The plenary line-up included John Bowers, Director, Institute of Energy Efficiency, University of California, Santa Barbara, USA; James Green, Scientist and Senior Advisor, NASA, USA; and Elise Neel, Senior Vice President, New Business Incubation, Verizon, USA.

Professor Bowers, a pioneer in silicon photonics, presented exciting recent developments, from small wafers on indium phosphide to the making of photonic devices on silicon. Elise Neel demonstrated the wonders of the digital transformation enabled by 5G and Verizon's vision of the fourth industrial revolution, or Industry 4.0. James Green took attendees on a journey to deep space, the Moon and Mars, and explained how optical communications will be critical to supporting upcoming missions.

Exhibits: Hundreds of companies unveiled new products and innovations at OFC. They reconnected with customers and conducted the business of innovation. Participants included AC Photonics; Advanced Microoptic Systems GmbH; Aerotech Inc.; Aragon Photonics Labs; Broadcom, Inc.; CIENA Corporation; Cisco Systems, Inc.; Corning Incorporated; East Photonics, Inc.; Go!Foton; Infinera; Intel; LIGENTEC; Keysight Technologies; Lumentum; Murata; Nokia; NTT Advanced Technology Corporation; NTT Electronics Corporation; Santec USA; Semtech Corporation; Tektronix, Inc.; Viavi Solutions and VPIphotonics.

"We had a great week here at OFC and are glad to have the industry back together in person," said Eve Griliches, Senior Product Marketing, Cisco. "It was clear the right customers were all here to see the innovation and new technologies that have come to market over the past few years."

"We are excited to be back in person participating in OFC, the premier optical networking event in the industry," said Rob Shore, senior vice president of marketing at Infinera. "It has been an invaluable opportunity to showcase our latest coherent optical innovations, including ICE-XR, ICE6 Turbo and our Open Optical Toolkit designed to help network operators overcome the challenges associated with the relentless growth in bandwidth in the edge and core of the network."

OFCnet: In collaboration with CENIC, the operator of California's research and education network, OFC launched OFCnet, a live high-speed fiber path connecting CENIC's facility in San Diego to the San Diego Convention Center, designed to accelerate the development and deployment of cutting-edge networking technology. CENIC's high-speed network is built on Lumen optical fiber and colocation infrastructure. These ultra-fast connections are designed to increase scientific collaboration among diverse organizations across the U.S. OFCnet is the result of collaboration among CENIC, Ciena, EXFO, Lumen, the San Diego Convention Center, Smart City Networks, the University of California San Diego's Qualcomm Institute and Supercomputer Center, and Viavi.

"Ciena is proud to be the first to light OFCnet using our Wavelogic 5 technology to provide multi-Tbps, ultra-low latency capacity between the OFC exhibit and the CENIC network," said Steve Alexander, Ciena CTO and Chair of OFC's Long Range Planning Committee. "This ongoing collaboration will provide future OFC conference and exhibitions with a unique platform to showcase the latest advancements in optical technologies, systems, and networks, all operating in a real-world environment."

Online Access to Content: On-demand access to conference sessions will be available to paid attendees, and additional content is available to all registrants, including Exhibits Pass Plus holders.

Health and Safety: In preparation for OFC 2022, show management followed all global, U.S. federal, California state, and local San Diego health guidelines. All conference attendees, exhibitors, vendors and staff had to be fully vaccinated, show proof of vaccination with photo ID, and wear masks in the San Diego Convention Center at all times except when actively eating or drinking.

OFC 2023: Mark your calendar for OFC 2023, 05-09 March at the San Diego Convention Center.

About OFC: The 2022 Optical Fiber Communication Conference and Exhibition (OFC) is the premier conference and exhibition for optical communications and networking professionals. For more than 45 years, OFC has drawn attendees from all corners of the globe to meet and greet, teach and learn, make connections, and move businesses forward.

OFC includes dynamic business programming, an exhibition of global companies and high-impact peer-reviewed research that, combined, showcase the trends that are shaping the entire optical networking and communications industry. OFC is co-sponsored by the IEEE Communications Society (IEEE/ComSoc) and the IEEE Photonics Society and co-sponsored and managed by Optica (formerly OSA). OFC 2022 was presented in a hybrid format with in-person and virtual components and took place 06-10 March 2022 at the San Diego Convention Center in San Diego, California, USA. Follow OFC on Twitter @OFCConference, learn more at the OFC Community LinkedIn, and watch highlights on OFC YouTube.

Media Contact: [emailprotected]

SOURCE The Optical Fiber Communication Conference and Exhibition (OFC)

Go here to see the original:

The OFC Conference and Exhibition in San Diego Concludes Showcasing Breakthrough Innovations in 5G, Artificial Intelligence, Co-Packaging, Data Center...


Growth Opportunities in API, Analytics, Cloud, O-RAN, and Artificial Intelligence 2021 – AI-Powered Video Security Platform to Offer Human-Like…

Posted: at 11:48 am

DUBLIN--(BUSINESS WIRE)--The "Growth Opportunities in API, Analytics, Cloud, O-RAN, and Artificial Intelligence" report has been added to ResearchAndMarkets.com's offering.

This report provides a snapshot of emerging ICT-led innovations in API, analytics, cloud, open RAN (O-RAN), and artificial intelligence. This issue focuses on the application of information and communication technologies in alleviating the challenges faced across industry sectors in areas such as telecom, retail, supply chain, and sports.

Companies Mentioned

ITCC TOE's mission is to investigate emerging wireless communication and computing technology areas, including 3G, 4G, Wi-Fi, Bluetooth, big data, cloud computing, augmented reality, virtual reality, artificial intelligence, virtualization and the Internet of Things, and their new applications; unearth new products and service offerings; highlight trends in the wireless networking, data management and computing spaces; provide updates on technology funding; evaluate intellectual property; follow technology transfer and solution deployment/integration; track the development of standards and software; and report on legislative and policy issues.

The Information & Communication Technology cluster provides global industry analysis, technology competitive analysis, and insights into game-changing technologies in the wireless communication and computing space. Innovations in ICT have deeply permeated various applications and markets.

These innovations have a profound impact on a range of business functions: computing, communications, business intelligence, data processing, information security, workflow automation, quality of service (QoS) measurement, simulation, customer relationship management, knowledge management and more. Our global teams of industry experts continuously monitor technology areas such as big data, cloud computing, communication services, the mobile and wireless communication space, IT applications and services, network security, and unified communications markets. In addition, we closely examine vertical markets and connected industries to provide a holistic view of the ICT industry.

Key Topics Covered:

Innovations in API, Analytics, Cloud, O-RAN and Artificial Intelligence

Key Contacts

For more information about this report visit https://www.researchandmarkets.com/r/a88ji7

Original post:

Growth Opportunities in API, Analytics, Cloud, O-RAN, and Artificial Intelligence 2021 - AI-Powered Video Security Platform to Offer Human-Like...


Juniper Networks Announces University Research Funding Initiative to Advance Artificial Intelligence and Network Innovation – Business Wire

Posted: at 11:48 am

SUNNYVALE, Calif.--(BUSINESS WIRE)--Juniper Networks (NYSE: JNPR), a leader in secure, AI-driven networks, today announced a university funding initiative to fuel strategic research to advance network technologies for the next decade. Juniper's goal is to enable universities, including Dartmouth, Purdue, Stanford and the University of Arizona, to explore next-generation network solutions in the fields of artificial intelligence (AI) and machine learning (ML), intelligent multipath routing and quantum communications.

As organizations encounter new levels of complexity across enterprise, cloud and 5G networks, investing now in these technologies is critical to replacing tedious, manual operations as networks become mission-critical for nearly every business. This can be done through automated, closed-loop workflows that use AI- and ML-driven operations to scale and cope with the exponential growth of new cloud-based services and applications.

The universities Juniper selected in support of this initiative are now beginning the research that, once completed, will be shared with the networking community. In addition, Juniper joined the Center for Quantum Networks Industrial Partners Program to fund industry research being spearheaded by the University of Arizona.

Supporting Quotes:

Cloud services will continue to proliferate in the coming years, increasing network traffic and requiring the industry to push forward on innovation to manage the required scale-out architectures. Juniper's commitment to deliver better, simpler networks requires us to engage and get ahead of these shifts and work with experts in all areas in order to trailblaze. I look forward to collaborating with these leading universities to reach new milestones for the network of the future.

- Raj Yavatkar, CTO, Juniper Networks

With internet traffic continuing to grow and evolve, we must find new ways to ensure the scalability and reliability of networks. We look forward to exploring next-generation traffic engineering approaches with Juniper to meet these challenges.

- Sonia Fahmy, Professor of Computer Science, Purdue University

It is an exciting opportunity to work with a world-class partner like Juniper on cutting-edge approaches to next-generation, intelligent multipath routing. Dartmouth's close collaboration with Juniper will combine world-class skills and technologies to advance multipath routing performance.

- George Cybenko, Professor of Engineering, Dartmouth College

As network technology continues to evolve, so do operational complexities. The ability to utilize AI and machine learning will be critical in keeping up with future demands. We look forward to partnering with Juniper on this research initiative and finding new ways to drive AI forward to make the network experience better for end users and network operators.

- Jure Leskovec, Associate Professor of Computer Science, Stanford University

The internet of today will be transformed through quantum technology, which will enable new industries to sprout and create new innovative ecosystems of quantum devices, service providers and applications. Juniper's strong reputation and its commitment to open networking make it a terrific addition to building this future as part of the Center for Quantum Networks family.

- Saikat Guha, Director, NSF Center for Quantum Networks, University of Arizona

About Juniper Networks

Juniper Networks is dedicated to dramatically simplifying network operations and driving superior experiences for end users. Our solutions deliver industry-leading insight, automation, security and AI to drive real business results. We believe that powering connections will bring us closer together while empowering us all to solve the world's greatest challenges of well-being, sustainability and equality. Additional information can be found at Juniper Networks (www.juniper.net) or connect with Juniper on Twitter, LinkedIn and Facebook.

Juniper Networks, the Juniper Networks logo, Juniper, Junos, and other trademarks listed here are registered trademarks of Juniper Networks, Inc. and/or its affiliates in the United States and other countries. Other names may be trademarks of their respective owners.


Read the original here:

Juniper Networks Announces University Research Funding Initiative to Advance Artificial Intelligence and Network Innovation - Business Wire


Behavidence Closes $4.3 Million Seed Round to Amplify its Artificial Intelligence Technology used to Detect, Screen and Monitor Mental Illness -…

Posted: at 11:48 am

NEW YORK, March 10, 2022 /PRNewswire-PRWeb/ --Behavidence, a leading startup company that monitors psychiatric and neurological disorders using artificial intelligence, announced a $4.3 million seed round led by Welltech Ventures, with participation from Arc Impact and Longevity Ventures.

Founded in 2020 by Roy Cohen (CEO), Dr. Girish Srinivasan, and Dr. Janine Ellenberger, Behavidence tracks and monitors the mental and emotional state of its users based on their digital usage patterns. Through its unique technology and algorithms, Behavidence can detect changes in its users' state of mind without needing identifiable information or monitoring private content.

"Today's funding announcement demonstrates our investors' support, commitment, and optimism for the future of Behavidence," said Cohen. "We see Behavidence's technology being deployed as a screening and remote monitoring tool, and are thrilled to see the adoption of our technology on so many devices across the globe."

Behavidence has secured more than $1 million in contracts with customers across the United States, Europe, Africa and Asia. Its clients include the Department of Veterans Affairs, Discovery Health Insurance and Essen Health Care.

Behavidence's product is a remote monitoring tool that helps health organizations, medical practitioners, and insurance companies identify potential diagnoses. According to recent clinical studies conducted by Behavidence, the company's models score up to 80% accuracy in detecting clinical depression, anxiety, and attention deficit hyperactivity disorder (ADHD) in patients. The company is continuing to develop its machine learning-based tools to detect and assist with remote monitoring and management of mental health conditions through passive digital biomarkers, adding no respondent burden.

"We continue to demonstrate our value to large healthcare organizations and private users by helping them assess the mental and emotional state of their clients and customers," Cohen said. "There is an ease of use to our technology that we are very proud of. The lack of respondent burden for the user makes it easy to implement for both the insurance company and the user. We are seeing metrics from insurance partners that indicate our technology can reduce the cost of a clinical case by up to 50%"

"We are not ruling out the potential of exploring a clinical track either as a medical aid or as the official diagnostic index itself in the future," explains Cohen. "But for now, we are celebrating our next round of seed funding and focused on delivering an intuitive product that will help serve the mental health needs of individuals and organizations throughout the world."

###

About Behavidence:

Behavidence develops machine learning-based tools to detect and assist with remote monitoring and management of mental health conditions through passive digital biomarkers. The company was founded by a neuroscientist, a neuropsychologist, a physician and a bioengineer with a passion for improving the lives of millions suffering from psychiatric and neurological disorders. Founded in 2020, Behavidence now offers multiple digital phenotyping models that can predict disorders such as depression, anxiety, ADHD and more. These digital phenotyping solutions have been adopted by health organizations and tech, commercial and government entities. Behavidence products can be used as a measurement-based outcome tool to monitor employee burnout and stress, predict relapse of conditions, and support screening and remote monitoring for clinical interventions and comorbid conditions.

This disruptive solution offers absolute user privacy, zero respondent burden and real-time feedback.

Meet the Behavidence Founders Team:

Roy Cohen, MNeuroSci - CEO, Co-founder (Ireland/UK based)

Dr. Girish Srinivasan - Chief Technology Officer (Chicago, IL US based)

Dr. Janine Ellenberger - Chief Medical Officer (Bryn Mawr, PA US based)

Media Contact

Anna Marie Imbordino, Zen Media, +1 630-550-7510, annamarie@zenmedia.com

SOURCE Behavidence

Read more from the original source:

Behavidence Closes $4.3 Million Seed Round to Amplify its Artificial Intelligence Technology used to Detect, Screen and Monitor Mental Illness -...


Responsible Artificial Intelligence Is Still Out Of Reach For Many Organizations – Forbes

Posted: February 28, 2022 at 7:57 pm

Time for AI proponents to step up.

There's strong support for analytics and data science and the capabilities they offer organizations. However, the people charged with developing analytics and artificial intelligence feel resistance from business executives when it comes to getting fully on board with data-driven practices. In addition, efforts to ensure fairness in AI are lagging.

That's the word from a recent SAS study of 277 data managers and scientists, which finds that, overall, more than two-thirds were satisfied with the outcomes of their analytics projects. At the same time, 42% say data science results are not used by business decision-makers, making this one of the main barriers they face.

A lack of support from above is cited as the leading challenge to getting data analytics initiatives off the ground, the survey shows. Data quality issues ranked second, followed by lack of adoption of the results by decision-makers. It's interesting that explaining data science to others is also seen as a challenge, suggesting that a big part of managers' jobs needs to be evangelizing and educating their business counterparts on the benefits data analytics can deliver to their organizations, and how to do it right.


"Managers are generally more satisfied with the company's use of analytics compared to individuals; however, individuals seem more satisfied with the outcome of analytics projects," the study's authors state. That difference echoes the possible gap between satisfaction with one's own project outcomes and with how they're deployed, and suggests that data science as a whole is more than siloed, individual efforts.

The study also tackled the question of delivering ethical and unbiased AI. A substantial segment of companies in the survey, 43%, does not conduct specific reviews of their analytical processes with respect to bias and discrimination. And only 26% of respondents indicated that unfair bias is used as a measure of model success in their organization. The top two roadblocks were a lack of communication between those who collect the data and those who analyze it, and difficulty in collecting data about groups that may be unfairly targeted.

The study's authors recommend working to make data science a team sport in organizations and giving data managers and scientists a more active role. They also urge managers and executives to get more proactive about responsible AI. "Get started with your own project and find ways to add in the means to detect and measure bias," they advise. "Document your work and present it to management. Sometimes a working example of success is what's needed to get started; there's no reason it can't be your work that sparks the beginning."

Here is the original post:

Responsible Artificial Intelligence Is Still Out Of Reach For Many Organizations - Forbes


Using artificial intelligence to find anomalies hiding in massive datasets – MIT News

Posted: at 7:57 pm

Identifying a malfunction in the nation's power grid can be like trying to find a needle in an enormous haystack. Hundreds of thousands of interrelated sensors spread across the U.S. capture data on electric current, voltage, and other critical information in real time, often taking multiple recordings per second.

Researchers at the MIT-IBM Watson AI Lab have devised a computationally efficient method that can automatically pinpoint anomalies in those data streams in real time. They demonstrated that their artificial intelligence method, which learns to model the interconnectedness of the power grid, is much better at detecting these glitches than some other popular techniques.

Because the machine-learning model they developed does not require annotated data on power grid anomalies for training, it would be easier to apply in real-world situations where high-quality, labeled datasets are often hard to come by. The model is also flexible and can be applied to other situations where a vast number of interconnected sensors collect and report data, like traffic monitoring systems. It could, for example, identify traffic bottlenecks or reveal how traffic jams cascade.

"In the case of a power grid, people have tried to capture the data using statistics and then define detection rules with domain knowledge to say that, for example, if the voltage surges by a certain percentage, then the grid operator should be alerted. Such rule-based systems, even empowered by statistical data analysis, require a lot of labor and expertise. We show that we can automate this process and also learn patterns from the data using advanced machine-learning techniques," says senior author Jie Chen, a research staff member and manager of the MIT-IBM Watson AI Lab.

The co-author is Enyan Dai, an MIT-IBM Watson AI Lab intern and graduate student at the Pennsylvania State University. This research will be presented at the International Conference on Learning Representations.

Probing probabilities

The researchers began by defining an anomaly as an event that has a low probability of occurring, like a sudden spike in voltage. They treat the power grid data as a probability distribution, so if they can estimate the probability densities, they can identify the low-density values in the dataset. Those data points which are least likely to occur correspond to anomalies.

Estimating those probabilities is no easy task, especially since each sample captures multiple time series, and each time series is a set of multidimensional data points recorded over time. Plus, the sensors that capture all that data are conditional on one another, meaning they are connected in a certain configuration and one sensor can sometimes impact others.

To learn the complex conditional probability distribution of the data, the researchers used a special type of deep-learning model called a normalizing flow, which is particularly effective at estimating the probability density of a sample.
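
To make the density-estimation step concrete: a normalizing flow learns an invertible map f from a data sample x to a simple base variable z (a standard Gaussian, say), and the change-of-variables formula then gives an exact, computable log-density. This is the standard flow formulation, included here as background rather than a detail taken from the paper:

```latex
\log p_X(x) = \log p_Z\bigl(f(x)\bigr) + \log \left| \det \frac{\partial f(x)}{\partial x} \right|
```

A reading is then flagged as anomalous when this log-density falls below a chosen threshold.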

They augmented that normalizing flow model using a type of graph, known as a Bayesian network, which can learn the complex, causal relationship structure between different sensors. This graph structure enables the researchers to see patterns in the data and estimate anomalies more accurately, Chen explains.

"The sensors are interacting with each other, and they have causal relationships and depend on each other. So, we have to be able to inject this dependency information into the way that we compute the probabilities," he says.

This Bayesian network factorizes, or breaks down, the joint probability of the multiple time series data into less complex, conditional probabilities that are much easier to parameterize, learn, and evaluate. This allows the researchers to estimate the likelihood of observing certain sensor readings, and to identify those readings that have a low probability of occurring, meaning they are anomalies.
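
Schematically, for sensor readings x_1, ..., x_n, the Bayesian network factorizes the joint density into per-sensor conditionals given each sensor's parents pa(x_i) in the learned graph. This is the standard factorization, shown as an illustration rather than the paper's exact notation:

```latex
p(x_1, \ldots, x_n) = \prod_{i=1}^{n} p\bigl(x_i \mid \mathrm{pa}(x_i)\bigr),
\qquad \text{anomaly if } -\log p(x_1, \ldots, x_n) > \tau
```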

Their method is especially powerful because this complex graph structure does not need to be defined in advance; the model can learn the graph on its own, in an unsupervised manner.

A powerful technique

They tested this framework by seeing how well it could identify anomalies in power grid data, traffic data, and water system data. The datasets they used for testing contained anomalies that had been identified by humans, so the researchers were able to compare the anomalies their model identified with real glitches in each system.

Their model outperformed all the baselines by detecting a higher percentage of true anomalies in each dataset.

"For the baselines, a lot of them don't incorporate graph structure. That perfectly corroborates our hypothesis. Figuring out the dependency relationships between the different nodes in the graph is definitely helping us," Chen says.

Their methodology is also flexible. Armed with a large, unlabeled dataset, they can tune the model to make effective anomaly predictions in other situations, like traffic patterns.

Once the model is deployed, it would continue to learn from a steady stream of new sensor data, adapting to possible drift of the data distribution and maintaining accuracy over time, says Chen.

Though this particular project is close to its end, he looks forward to applying the lessons he learned to other areas of deep-learning research, particularly on graphs.

Chen and his colleagues could use this approach to develop models that map other complex, conditional relationships. They also want to explore how they can efficiently learn these models when the graphs become enormous, perhaps with millions or billions of interconnected nodes. And rather than finding anomalies, they could also use this approach to improve the accuracy of forecasts based on datasets or streamline other classification techniques.

This work was funded by the MIT-IBM Watson AI Lab and the U.S. Department of Energy.

View post:

Using artificial intelligence to find anomalies hiding in massive datasets - MIT News


Companies Improve their Supply Chains with Artificial Intelligence – Logistics Viewpoints

Posted: at 7:57 pm

Machine Learning, a Form of Artificial Intelligence, Has Feedback Loops that Improve Forecasting

Many large enterprises use one form or another of a supply chain application to help manage their supply chains. Supply chain vendors have been touting their investments in artificial intelligence (AI) for the last several years. In the course of updating our annual research on the supply chain planning market, I talked to executives across the industry. Alex Pradhan, Product Strategy Leader at John Galt Solutions, told me that all planning vendors have bold marketing around AI, but the trick is to find suppliers with field-proven AI/ML algorithms that have been delivered at scale.

Further, while artificial intelligence helps solve certain types of problems, Jay Muelhoefer, the chief marketing officer at Kinaxis, pointed out that optimization and heuristics work better for other types of planning problems. This article, which focuses on the different types of artificial intelligence used and the types of problems they are solving, is aimed at helping practitioners cut through the hype.

Let's start with a definition: any device that perceives its environment and takes actions that maximize its chance of success at some goal is engaged in some form of artificial intelligence (AI). AI can refer to several different types of math. But in the supply chain realm, machine learning (ML) is where most of the activity surrounding artificial intelligence has been focused.

It is also worth pointing out that, based on this definition, not all forms of machine learning are particularly complicated. Planning applications don't work well if the master data they rely on is not accurate; this is known as the garbage in, garbage out problem. Artificial intelligence is beginning to be used to update this data. Lead times, for example, are a critical form of master data for planning purposes. Having an agent detect how long it takes to ship from a supplier site to a manufacturing facility, and then doing a running calculation on how the average lead time is changing, is trivial math; the agent technology is much more complicated than the math. Relying on humans to update this data has not worked at all well; people just don't want to do it.
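
As a sketch of how simple the math itself can be (the class and field names here are illustrative, not from any vendor's product), an agent that observes shipment durations can maintain a running average lead time with one line of arithmetic per observation:

```python
class LeadTimeAgent:
    """Tracks a running average lead time for one supplier-to-plant lane."""

    def __init__(self):
        self.count = 0
        self.avg_days = 0.0

    def observe(self, lead_time_days: float) -> float:
        # Incremental mean update: no need to store the full shipment history.
        self.count += 1
        self.avg_days += (lead_time_days - self.avg_days) / self.count
        return self.avg_days

agent = LeadTimeAgent()
for shipment in [12.0, 14.5, 11.0, 15.0]:   # observed days per shipment
    current_lead_time = agent.observe(shipment)
print(f"current average lead time: {current_lead_time:.1f} days")
```

The hard part, as the paragraph above notes, is the agent infrastructure that detects and timestamps the shipments, not the update formula.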

But sometimes fixing the bad data problem is complicated. In process industries, the supply chain models used for optimization are much more complex than those used in other industries. The processing units in an oil refinery, for example, operate at high temperature and high pressure. These constraints need to be understood. So models for heavy process industries often include first-principles parameters. First principles reflect physical laws such as mass balance, energy balance, heat transfer relations, and reaction kinetics. The first principles are important to understand yields, as well as the energy requirements for running the equipment.

AspenTech has developed a process simulator that is tuned with real plant operating data. During development, the models automatically perform thousands of permutations and perturbations of the first-principles model to create a large data set to which AI algorithms are applied. The AspenTech models combine the classic first-principles approach with the modern pure data-driven approach. Starting with a first-principles model, according to AspenTech, improves accuracy significantly. They tell me that with either a first-principles model or a pure data-plus-AI model, accuracy would be in the 90-97% range, but hybrid models that combine first principles, data-driven models, and AI achieve 99+% accuracy.

A supply chain planning model learns when the planning application takes an output, like a forecast, observes the accuracy of the output, and then updates its own model so that better outputs will occur in the future.

When you look at machine learning this way, artificial intelligence for supply chain planning is nothing new. Machine learning has been used to improve demand forecasting since the early 2000s. But machine learning for demand forecasting is much better than it used to be. There are far more forecasts being made in far more planning horizons and at a greater degree of specificity today than 20 years ago. For example, forecasting how much of a particular product will be sold in a particular store is far more intensive than forecasting how many products in a product family will be sold in a region. This explosion in the number of forecasts would not be possible without the latest generation of machine learning. There were only a few SCP suppliers with mature capabilities in this area a few years ago. Since then, virtually every supplier I talked to in the process of updating this year's Supply Chain Planning Market Analysis Study has said they are investing in this area.

One example of the value of machine learning in demand planning comes from Mahindra & Mahindra. Aniruddh Srivastava, Head of Demand and Supply Planning at Mahindra & Mahindra, said at Blue Yonder's Icon user conference that artificial intelligence and machine learning algorithms are the cornerstone of their strategy. Through their partnership with Blue Yonder, Mahindra & Mahindra was able to increase forecast accuracy by 10%. A better forecast leads to carrying less inventory while maintaining or even improving service levels. The improvement in forecasting contributed to an increase in service levels by 10% while reducing inventory investment by 20%.

But that was pre-COVID. After the pandemic hit, their safety stock was increased by 30%. "Post-COVID it was not about savings," Mr. Srivastava explained. "The game changed to a global competition for the same set of raw materials." This division makes automotive spare parts, so the competition was to secure semiconductor chips.

During the pandemic, forecasting accuracy was terrible. Forecasting is based on the presumption that history repeats itself. As an E2open forecasting benchmarking report pointed out, for companies trying to predict demand in March of 2020 as the world was descending into lockdown and everything was being turned upside down, what happened in March of 2019 had little to no relevance.

But if there was any silver lining, it was that companies that made use of planning systems combining demand sensing (the use of multiple real-time signals, like sales in a particular store or shipments from a retailer's warehouses to its stores) with machine learning had significantly less error. And the companies that used these solutions saw their forecasts improve much more quickly than those using traditional solutions.

In making demand forecasts, one can look at product history. An alternative is to look at customer behavior, examining how clusters of customers buy these products. QAD Dynasys is one of several suppliers investing in this approach.

One thing that is difficult to forecast is new product introductions. The way this forecasting is done is through the use of attributes. If you are looking at a purse, attributes would include the material it is made of, size, color, and other things as well. To the extent that one product is like another, it may be easier to forecast. But which attributes matter? Infor is using machine learning, looking at attributes and past launches, to make this determination. Solvoyo and Lily AI are using another form of AI, image recognition, to tackle this problem. Getting merchandisers to enter the attributes has not worked well; merchandisers see this as an unimportant, dull task and just don't take the time to do it properly.
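
As a rough illustration of attribute-based new-product forecasting (a generic nearest-neighbor sketch with made-up products and encodings, not any vendor's actual algorithm): encode each past launch as an attribute vector, find the most similar past products, and seed the new item's forecast from their early sales.

```python
# Hypothetical sketch: seed a new product's forecast from similar past launches.
from math import sqrt

past_launches = {
    # product: (attribute vector [leather, large, structured], avg weekly sales)
    "tote_leather_large": ([1.0, 1.0, 0.0], 420.0),
    "tote_canvas_large":  ([0.0, 1.0, 0.0], 310.0),
    "clutch_leather_sm":  ([1.0, 0.0, 1.0], 150.0),
}

def similarity(a, b):
    # Inverse-distance similarity over the encoded attributes.
    d = sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return 1.0 / (1.0 + d)

def seed_forecast(new_attrs, k=2):
    # Weight the k most similar past launches by their similarity scores.
    scored = sorted(
        ((similarity(new_attrs, attrs), sales)
         for attrs, sales in past_launches.values()),
        reverse=True,
    )[:k]
    total = sum(s for s, _ in scored)
    return sum(s * sales for s, sales in scored) / total

print(f"seed forecast: {seed_forecast([1.0, 1.0, 1.0]):.0f} units/week")
```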

One real trend ARC has seen this year is the increasing investment supply chain planning suppliers are making to improve the ability of SCP to help companies reach their sustainability goals. Cyrus Hadavi, the CEO of Adexa, provides a good explanation of how SCP solutions can calculate the carbon footprint associated with a plan. The way this works is that every element in the supply chain is given a carbon index, absolute or relative. That is, every machine, factory, DC, mode of transportation, supplier, product, material, etc. These indices then become attributes of these objects. Every time we plan and use any of these elements, the system can project the total carbon footprint of the projected plan. In addition to carbon emissions, these attributes can be used for other environmental and governance goals as well.
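
A minimal sketch of that idea (illustrative element names and index values, not Adexa's implementation): if each step of a plan references a supply chain element carrying a carbon index per unit of use, the projected footprint is just a weighted sum over the plan.

```python
# Hypothetical sketch: project a plan's carbon footprint from element indices.
carbon_index = {        # kg CO2e per unit of use (illustrative values)
    "machine_A": 2.5,   # per machine-hour
    "factory_1": 40.0,  # per production shift
    "truck":     0.9,   # per mile driven
}

plan = [                # (element, units of use in the projected plan)
    ("machine_A", 120.0),
    ("factory_1", 6.0),
    ("truck",     850.0),
]

projected_kg = sum(carbon_index[element] * usage for element, usage in plan)
print(f"projected plan footprint: {projected_kg:.0f} kg CO2e")
```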

So a plan can be produced that predicts the emissions. After a plan is executed, the actual emissions that occurred can be measured, and it is possible to see how close the plan came to what occurred. Just as a demand planning solution compares the forecast to what actually sold and uses machine learning to improve the machine's forecasting capabilities, a similar feedback loop can exist with sustainability.

Artificial intelligence can also be used in supply and factory planning. But on the supply planning side, it is not about using machine learning to select the right algorithms to improve the plans. When supply plans don't pan out, it is less about the model than about a data quality issue or an unexpected occurrence. An example of an input issue would be: "We thought it took 20 minutes to set up this machine to make product C, but it really takes 60 minutes when product A was made right before product C." An example of an unexpected occurrence would be a critical piece of machinery breaking down.

Machine learning is being used to predict machine breakdowns. But very few vendors are taking those alerts and automatically feeding them into their manufacturing planning solutions. AspenTech has probably done the most in this area. AspenTech, for example, is using predictive analytic inputs on when key machinery in a refinery will break down to allow alternative production schedules to be generated in a more autonomous manner. AspenTech's advantage is that they have both asset management (a solution that can use machine learning for the predictive maintenance alert) and the supply chain planning models those alerts can feed.

A less commonly used form of AI in supply chain applications is natural language processing (NLP). Amazon's Alexa, for example, uses NLP to understand a person's command and then play the music they want. There is a desire to use NLP to allow planners to tell a planning system what to do so they can focus more of their time on higher-priority problems.

But Coupa and Oracle are also leveraging natural language processing for supplier risk assessment. Humans don't speak with a clarity that machines can understand. A company can go bankrupt, and a machine could be programmed to understand that. But on social media someone might say that a company is about to go belly up. Machines don't understand this type of unstructured data; NLP helps make sense of it. Oracle's DataFox accesses databases with important company information, but it also has web crawlers examining huge numbers of online news sites as well as social media to discover negative news about a company. That news could be an impending bankruptcy, unhappy customers, key executives leaving the company, or many other things. These events are turned into supplier scores, and if significant, the score is flagged in the Oracle procurement system. Now Oracle is working to connect these scores to the planning systems. At Coupa, supplier risk is also flagged for single-source or capacity constraints. This can then be leveraged by their supply chain design solution to improve risk mitigation.
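
As a toy illustration of turning unstructured news into a supplier risk score (a keyword-based stand-in with invented phrases and thresholds; real systems like DataFox use far richer NLP models):

```python
# Hypothetical sketch: score supplier news snippets for risk signals.
RISK_PHRASES = {
    "belly up": 5, "bankrupt": 5, "executives leaving": 3,
    "unhappy customers": 2, "lawsuit": 2,
}

def risk_score(snippets):
    # Sum phrase weights across snippets; higher scores mean more risk.
    score = 0
    for text in snippets:
        lowered = text.lower()
        score += sum(w for phrase, w in RISK_PHRASES.items() if phrase in lowered)
    return score

news = [
    "Rumors say Acme Components is about to go belly up.",
    "Two key executives leaving Acme this quarter.",
]
if risk_score(news) >= 5:   # illustrative threshold for a procurement flag
    print("flag supplier: Acme Components")
```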

Companies need to take machine learning-driven demand-side predictions, which are particularly good at granular short-term forecasts, and adjust production accordingly. The closer in time a plan's creation comes to the actual execution of an order, the more a planning system becomes an execution system. The idea is for a supply planning application to digest short-horizon demand signals into meaningful plans by using machine learning to suggest courses of action for planners. These suggestions are based upon the way planners had previously solved the same kind of demand/supply disruption. However, this kind of AI does not work out of the box. The system observes planners' actions over time and then learns to make the pertinent suggestions. QAD and Noodle.ai are among several suppliers working in this area.

In the last couple of years, RELEX Solutions has developed new capabilities for autonomous capacity balancing. In short, the AI algorithms can pull orders forward (for products with longer shelf life) to level out the flow of goods into distribution centers and stores, as well as to adhere to time-dependent capacity limits. Johanna Småros, Co-founder and Chief Marketing Officer at RELEX, points out that the current difficulties in finding staff have really raised awareness of the value of being able to plan ahead to ensure the availability and efficient use of human resources, as well as to plan around this availability when it becomes a bottleneck in the supply chain.

Blue Yonder, in turn, has developed a machine learning-powered Dynamic Segmentation solution that automatically groups customers with similar fulfillment or procurement needs based on data changes, and then develops distinct supply chain operations to meet those specific requirements. This enables planners to provide differentiated service levels based on customer value and business parameters.

While this article stemmed from research ARC is doing on the supply chain planning market, and most AI investments have been focused on planning applications, it is worth pointing out that AI investments are increasing in the supply chain execution realm as well. Companies like Oracle, Manhattan Associates, Koerber, and Blue Yonder are all increasing their R&D in AI in their supply chain execution systems. A transportation system that applies machine learning to predict how long it will take a truck to make a delivery is one example of this. A warehouse management system that can digest a prediction of what ecommerce customers are apt to buy, and then drop the right work orders at the right time to the warehouse floor, is another.

To sum it up, Madhav Durbha, the vice president of supply chain strategy at Coupa Software, said that artificial intelligence is becoming much more widely adopted due to progress occurring on several fronts at the same time. These include the development of new machine learning algorithms, computing power, big data analytics, and acceptance by industry leaders.

But remember, AI only fixes supply chains to a degree; this is not like waving a magic wand and seeing your supply chain problems suddenly vanish. Nevertheless, AI really is improving planning, and it is increasingly being used to improve order execution as well.

Go here to see the original:

Companies Improve their Supply Chains with Artificial Intelligence - Logistics Viewpoints


Artificial Intelligence in Healthcare: Tomorrow and Today – ReadWrite

Posted: at 7:57 pm

It is a typically cold day in February, at the peak of flu season, to say nothing of the never-ending pandemic that seems to have been haunting this world forever. And it got me thinking: can technology help battle all these nasty diseases and improve patient outcomes? And most importantly, will artificial intelligence have a hand in it? It seems so.

In 2021, we reached another milestone in artificial intelligence adoption: a $6.9 billion market and counting. By 2027, the AI market in healthcare is expected to grow to $67.4 billion. Hence, the future of AI in healthcare certainly looks bright, yet not serene.

Today, I'll walk you through the state of artificial intelligence in healthcare, its main application areas, and its current limitations. All these will help you build a holistic image of this technology in medical services.

Artificial intelligence is now considered one of the most important IT research areas, promoting industrial growth. Just as the transformation of power technology led to the Industrial Revolution, AI is heralded today as a source of breakthroughs.

Within the healthcare continuum, COVID-19 has accelerated investments in AI. Over half of healthcare leaders expect artificial intelligence (AI) to drive innovation in their organizations in the coming years. At the same time, around 90% of hospitals have AI strategies in place.

Now let's have a look at the top impacts of intelligent algorithms in medicine.

Today, only specific settings in clinical practice have welcomed the application of artificial intelligence.

Patients have been waiting for the deployment of augmented medicine since it allows for greater autonomy and more individualized care. However, clinicians are less encouraged because augmented medicine requires fundamental shifts in clinical practice.

Nevertheless, we already have enough AI use cases to assess its potential.

In most critical cases, the treatment prognosis depends on how early the disease is detected. AI-driven technology is currently used to improve the accuracy of diagnosing diseases like cancer at their earliest stages.

Machine learning algorithms can also process patient data from ECG, EEG, or X-ray images to prevent the aggravation of symptoms.

According to the American Cancer Society, 1 in every 2 women is misdiagnosed with cancer due to a high rate of erroneous mammography results. Hence, there is certainly an acute need for more accurate and effective disease identification. With AI, mammograms are examined and interpreted 30 times faster with up to 99 percent accuracy, reducing the need for biopsies.

This year, Alphabet has launched a company that uses AI for drug discovery. It will rely on the work of DeepMind, another Alphabet unit that has pioneered the use of artificial intelligence to predict the structure of proteins.

And it's not the only instance of AI-enabled clinical research.

According to a Deloitte survey, 40% of drug discovery start-ups already used AI in 2019 to monitor chemical repositories for potential drug candidates. Over 20% leverage intelligent computing to identify new drug targets. Finally, 17% use it for computer-assisted molecular design.

The healthcare data explosion is something that has gained momentum in recent years. This sudden spike of data can be attributed to the massive digitalization of the healthcare industry and the proliferation of wearables.

With a single patient accounting for around 80 megabytes of imaging and EMR data per year, the compound annual growth rate of healthcare data is estimated to hit 36% by 2025.

Therefore, physicians need a fast and effective tool to make sense of this data flow to produce industry-changing insights. Predictive analytics is exactly one of those tools. In particular, AI-enabled data analytics helps uncover hidden trends in the spread of illness. This allows for proactive and preventive treatment, which further improves patient outcomes.

For example, the Centers for Disease Control and Prevention (CDC) implements analytics to predict the next flu outbreak. Using historical data, they assess the severity of future flu seasons which allows them to make strategic decisions beforehand.

The global pandemic wasn't an exception either. The National Minority Quality Forum has launched its COVID-19 Index, a predictive tool that will help leaders prepare for future waves of the coronavirus.

In the past year, labs performed over 2,800 clinical trials to test life-saving medications and vaccines for the coronavirus. However, much of this large clinical trial effort wasn't fruitful and generated misleading expectations. But that's old news.

The $52B clinical trials market has long suffered from ineffective preclinical investigation and planning. One of the most difficult components of running clinical research is finding patients. And many of these clinical trials, particularly oncology trials, have become more sophisticated, making it even more challenging to find patients in a short window of time.

Artificial intelligence holds great potential for making the selection process faster by amplifying patient selection.

As artificial intelligence enters the precision medicine landscape, it can help organizations benefit from precision medicine in multiple ways. First of all, personalized medicine may come in the form of digital solutions that allow one-to-one interaction with specialists without leaving the house.

According to statistics, there are currently over 53K healthcare apps on Google Play. Why are they so popular? Patients like the convenience that healthcare apps give. Patients can save money, get immediate access to tailored care, and have greater control over their health thanks to advancements in mobile healthcare technology.


Another face of personalization in healthcare is precision medicine. It is an innovative model of medical services that offers individualized healthcare customization through medical solutions, treatments, practices, or products tailored to a subset of patients. The tools underpinning precision medicine can include molecular diagnostics, imaging, and analytics.

However, precision medicine is impossible within the traditional medical approach. Instead, it requires access to massive amounts of data coupled with cutting-edge functionality. This data includes a wide span of patient data, including health records, personal devices, and family history. AI then computes this data and generates insights, enables the system to learn, and empowers clinician decision-making.

The clinical impact of machine intelligence holds great potential for disrupting healthcare, making it more accessible and affordable. However, the adoption of AI is currently at its early stages due to a number of industry limitations.

Artificial intelligence in healthcare is a long-awaited disruption that has been ripening for quite a while. Its possibilities are virtually limitless and stretch from faster drug discovery to at-home diagnostics. In 2021, AI saw significant growth due to the pandemic-induced crisis and an acute need for automation. Although it is still in its early stages, we'll see more of AI revolutionizing our healthcare sector.



Excerpt from:

Artificial Intelligence in Healthcare: Tomorrow and Today - ReadWrite


Artificial intelligence is only as ethical as the people who use it | TheHill – The Hill

Posted: at 7:57 pm

Artificial intelligence is revolutionary, but it's not without its controversies. Many hail it as a chance for a fundamental upgrade to human civilization. Some believe it can take us down a dangerous path, potentially arming governments with dangerous Orwellian surveillance and mass control capabilities.

We have to remember that any technology is only as good or bad as the people who use it. Consider the EU's hailed blueprint for AI regulation and China's proposed crackdown on AI development; these instances seek to regulate AI as if it were already an autonomous, conscious technology. It isn't. The U.S. must think wisely before following in their footsteps and consider addressing the actions of the user behind the AI.

In theory, the EU's proposed regulation offers reasonable guidelines for the safe and equitable development of AI. In practice, these regulations may well starve the world of groundbreaking developments, such as in industry productivity, healthcare and climate change mitigation, areas that desperately need to be addressed.

You can hardly go through a day without engaging with AI. If you've searched for information online, been given directions on your smartphone or even ordered food, then you've experienced the invisible hand of AI.

Yet this technology does not just exist to make our lives more convenient; it has been pivotal in our fight against the COVID pandemic. It proved instrumental in identifying the spike protein behind many of the vaccines being used today.

Similarly, AI enabled BlueDot to be one of the first to raise the alarm about the outbreak of the virus. AI has also been instrumental in supporting the telehealth communication services used to communicate information about the virus to populations, the start-up Clevy.io being one such example.

With so many beneficial use cases for AI, where does the fear stem from? One major criticism leveled at AI is that it is giving governments the ultimate surveillance tool. One report predicts there will be 1 billion surveillance cameras installed worldwide by the end of the year. There is simply not enough manpower to watch these cameras 24/7; the pattern-recognition power of AI means that every second or every frame can be analyzed. Whilst this has life-saving applications in social distancing and crowd control, it also can be used to conduct mass surveillance and suppression at an unprecedented scale.

Similarly, some have criticized AI for cementing race and gender inequalities, with fears sparked by AI-based hiring programs displaying potential bias due to a reliance on historical data patterns.

So yes, this clearly shows that there is a need to bake the principles of trust, fairness, transparency and privacy into the development of these tools. However, the question is: Who is best suited to do this? Is it those closest to the development of these tools, government officials, or a collaboration of the two?

One thing is for certain: Understanding the technology and its nuances will be critical to advance AI in a fair and just way.

There is undoubtedly a global AI arms race going on. Over-regulation is giving us an unnecessary disadvantage.

We have a lot to lose. AI will be an incredibly helpful weapon when tackling the challenges we face, from water shortages to population growth and climate change. Yet these fruits will not be borne if we keep leveling suspicion at the technologies, rather than the humans behind them.

If a car crashes, we sanction the driver; we don't crush the car.

Similarly, when AI is used for human rights and privacy violations, we must look to the people behind the technology, not the technology itself.

Beyond these concerns, a growing crowd of pessimistic futurists predict that AI could, one day, surpass human general intelligence and take over the world. Herein lies another category mistake; no matter how intelligent a machine becomes, there's nothing to say that it would or could develop the uniquely human desire for power.

That said, AI is in fact helping drive the rise of a new machine economy, where smart, connected, autonomous, and economically independent machines or devices carry out the necessary activities of production, distribution, and operations with little or no human intervention. According to PwC, 70 percent of GDP growth in the global economy between now and 2030 will be driven by machines. This is a nearly $7 trillion contribution to U.S. GDP based on the combined production from AI, robotics, and embedded devices.

With this in mind, the ethical concerns around AI are real and must be taken seriously. However, we must not allow these considerations to morph into restrictive, innovation-stopping interventionist policy.

We must always remember that it is the people behind the AI applications that are responsible for breaches of human rights and privacy, not the technology itself. We must use our democratic values to dictate what type of technologies we create. Patchy, ill-informed regulation in such a broad space will likely prevent us from realizing some of the most revolutionary applications of this technology.

Nations who over-regulate this space are tying their own shoelaces together before the starting pistol has even sounded.

Kevin Dallas, a former executive at Microsoft, is president & CEO of Wind River, a provider of secure intelligence software.

Read more from the original source:

Artificial intelligence is only as ethical as the people who use it | TheHill - The Hill


Global Artificial Intelligence Education Technology Market Potential Growth, Share, Demand and Analysis of Key Players- Research Forecasts to 2027 to …

Posted: at 7:57 pm

The Global Artificial Intelligence Education Technology Market report covers the latest market strategies and new business developments. It assesses expected results for the artificial intelligence education technology area and the components that drive the improvement of the business, including previous patterns of progress, current progress, and new ongoing developments.

The assessment includes each company's history, its potential for growth in the following years, and surveys of helpful experts in this market.

In addition, the report examines the market in terms of its geographic development. It is also intended to help sourcing specialists develop sourcing strategies, weigh the challenges facing vendors and industries, review assumptions, and adopt recognized sourcing practices.

DOWNLOAD FREE SAMPLE REPORT: https://www.mrinsights.biz/report-detail/268860/request-sample

The market is subdivided according to

The dealers represented in the Artificial Intelligence Education Technology market are:

The assessment highlights the competitive situation among well-known vendors and their trading profiles. In places, it also covers the limits of the assessment and the production networks described in company surveys.

The various applications analyzed and examined in this report are:

Geographically, this study is segmented into several key real-world regions.

ACCESS FULL REPORT: https://www.mrinsights.biz/report/global-artificial-intelligence-education-technology-market-growth-status-268860.html

Additionally, this review evaluates each geographic region's market potential in terms of growth rate, macroeconomic parameters, consumer buying patterns, and supply and demand conditions.

Key Points that Make Buying this Report Worthwhile:

Customization of the Report:

This report can be customized to meet the client's requirements. Please connect with our sales team (sales@mrinsights.biz), who will ensure that you get a report that suits your needs. You can also get in touch with our executives on +1-201-465-4211 to share your research requirements.

Contact Us
Mark Stone
Head of Business Development
Phone: +1-201-465-4211
Email: sales@mrinsights.biz

Read the rest here:

Global Artificial Intelligence Education Technology Market Potential Growth, Share, Demand and Analysis of Key Players- Research Forecasts to 2027 to ...

