Global Big Data & Machine Learning in Telecom Market 2021 Growth Analysis, Opportunities, Business Insights, Key Trends and Forecast by 2027 -…

The new research study on the Global Big Data & Machine Learning in Telecom Market from 2021 to 2027, published by MarketandResearch.biz, is positioned as a reliable source of information and data on the global market. The report takes a structured approach to industry assessment, covering the most important factors driving industry growth, and examines the dynamic factors, growth determinants, and segment classification recorded for the market.

The report organizes the market study around key business pillars such as drivers, restraints, and global opportunities. It contains a trend assessment, an in-depth valuation of the global Big Data & Machine Learning in Telecom market, and an analysis of revenue-generating trends, along with comprehensive information, historical data, key segments and their sub-segments, and demand and supply data.

DOWNLOAD FREE SAMPLE REPORT: https://www.marketandresearch.biz/sample-request/208323

The Report Provides In-Depth Knowledge Of:

The research also segments the market to show how revenue is generated across the global industry and where long-term strength lies. The report offers sharp insights into present and forthcoming trends and developments, and provides a clear understanding of the current and future state of the global Big Data & Machine Learning in Telecom market based on revenue, volume, production, trends, technology, innovation, and other critical factors.

The following major key players are covered:

By product types, the market is segmented into:

By application, the market is segmented into:

ACCESS FULL REPORT: https://www.marketandresearch.biz/report/208323/global-big-data-machine-learning-in-telecom-market-growth-status-and-outlook-2021-2026

Based on geography, the global Big Data & Machine Learning in Telecom market can be categorized as follows:

It thoroughly reviews several aspects of the market, including vital segments, regional market conditions, market dynamics, investment suitability, and the key players operating in the market. The comparative results provided within the report enable readers to grasp the differences between players and the way they are competing against one another. The report also analyzes the global Big Data & Machine Learning in Telecom market in terms of market reach and consumer bases in the market's key geographical regions.

Customization of the Report:

This report can be customized to meet the client's requirements. Please connect with our sales team (sales@marketandresearch.biz), who will ensure that you get a report that suits your needs. You can also get in touch with our executives on +1-201-465-4211 to share your research requirements.

Contact Us
Mark Stone, Head of Business Development
Phone: +1-201-465-4211
Email: sales@marketandresearch.biz
Web: http://www.marketandresearch.biz

Here is the original post:
Global Big Data & Machine Learning in Telecom Market 2021 Growth Analysis, Opportunities, Business Insights, Key Trends and Forecast by 2027 -...

Machine Learning in Europe is Predicted to Reach US$3.96 Billion by 2023 – Analytics Insight

The machine learning market in Europe is highly competitive owing to the presence of many key players alongside smaller players

The machine learning market in Europe is expected to reach US$3.96 billion by 2023, expanding at a compound annual growth rate (CAGR) of 33.5% during 2018-2023. The growing use of big data in the German healthcare industry and the escalating adoption of autonomous vehicles are fuelling the development and growth of the machine learning market in that country. The cloud is also gaining popularity as the preferred mode of deployment owing to its advantages, including ease of access, cost-effectiveness, real-time monitoring and control, automated software updates, disaster recovery, and data-loss prevention. The thriving automobile industry, coupled with the growing use of big data in the healthcare sector, is thus driving demand for machine learning and, in turn, market growth in the region.
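
To make the arithmetic concrete, here is a quick back-of-the-envelope check: assuming five full compounding years from 2018 to 2023 (an assumption, since the article does not state the base-year value), a 33.5% CAGR ending at US$3.96 billion implies a 2018 market size of roughly US$0.9 billion.

```python
# Back-of-the-envelope CAGR check. The five-year compounding period is an
# assumption; the 2023 value and growth rate come from the paragraph above.
end_value = 3.96              # US$ billion in 2023
cagr = 0.335                  # 33.5% compound annual growth rate
years = 5                     # 2018 -> 2023, assumed full compounding years
start_value = end_value / (1 + cagr) ** years
print(f"Implied 2018 market size: US${start_value:.2f} billion")
```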

The European machine learning market is highly competitive owing to the presence of many key players along with small players. Some of the giants operating in the market are Dell Inc., Fair Isaac Corporation (FICO), Baidu Inc., Fractal Analytics, and Amazon Web Services Inc.

The rise in the adoption of advanced analytics and data-driven decision-making has driven the growth of the machine learning market in the United Kingdom. According to a report on the economic value of data published by the UK government in August 2018, organizations that adopt data-driven decision-making are likely to see a 5-6% rise in their productivity and output. Substantial investments have been made by the country's government, in both the private and public sectors, to promote the adoption of digital and data-driven technologies.

Based on region, the market is segmented into the European Union Five (EU5) and the rest of Europe. Based on components, the market can be segmented into software tools, cloud and web-based application programming interfaces (APIs), and others. Based on service, the sub-segments comprise professional services and managed services, and based on organization size, the sub-segments include small and medium enterprises (SMEs) and large enterprises.



Read this article:
Machine Learning in Europe is Predicted to Reach US$3.96 Billion by 2023 - Analytics Insight

Machine Learning: Smart Manufacturing Tool, its Application and Challenges – ELE Times

Smart manufacturing aims to integrate big data, advanced analytics, high-performance computing, and Industrial Internet of Things (IIoT) into traditional manufacturing systems and processes to create highly customizable products with higher quality at lower costs. As opposed to traditional factories, a smart factory utilizes interoperable information and communications technologies (ICT), intelligent automation systems, and sensor networks to monitor machinery conditions, diagnose the root cause of failures, and predict the remaining useful life (RUL) of mechanical systems or components.

Smart manufacturing itself is a broad umbrella term for the newest technologies, such as artificial intelligence, the Internet of Things, and machine learning. In this article, we will focus on machine learning and the algorithms currently in use for applications such as healthcare, fraud detection, virtual assistants, Google Maps, social media platforms, and many other important sectors.

Machine Learning Techniques for Smart Manufacturing

Machine learning is used in various manufacturing fields and at various stages, such as future prediction in the manufacturing system, pattern recognition, fault detection, quality control, and monitoring. Machine learning (ML) is used for classification and regression, both of which are learned from past data, and ML algorithms, and combinations of algorithms, are widely used in various machining processes.
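
As a minimal, illustrative sketch of the classification case, the snippet below trains a classifier to separate normal from faulty machine states using past sensor readings. The feature names, the synthetic data, and the choice of a random-forest model are assumptions made for the example, not details taken from the article.

```python
# Minimal sketch: classifying machine faults from historical sensor data.
# Features, data and the random-forest choice are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Stand-in for past data: vibration, temperature and spindle-load readings.
X = rng.normal(size=(1000, 3))
# Stand-in labels: 1 = fault, 0 = normal (synthesised purely for the sketch).
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=1000) > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```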

In recent years, ML has become more prevalent in the building and assembly sectors, where advanced technology reduces the cost and time involved in production. In smart assembly manufacturing, robots can put items together with surgical precision, as the technology corrects errors in real time to reduce wastage.

The following are Machine Learning Techniques for Smart Manufacturing:

Quality control and OEE: Machine learning plays a critical role in enhancing Overall Equipment Effectiveness (OEE). The metric measures the availability, performance, and quality of assembly equipment, all of which are enhanced by the integration of deep-learning neural networks. The system quickly learns the weaknesses of such machines and helps to minimize them.
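
OEE is conventionally computed as the product of availability, performance, and quality, so a small worked example helps show how the metric moves; the figures below are made up for illustration.

```python
# OEE = availability x performance x quality, each a ratio between 0 and 1.
# The example numbers are invented for illustration.
def oee(availability: float, performance: float, quality: float) -> float:
    return availability * performance * quality

# Example: 90% uptime, 95% of ideal cycle speed, 98% good parts.
print(f"OEE = {oee(0.90, 0.95, 0.98):.1%}")  # OEE = 83.8%
```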

Optimized semiconductor manufacturing: According to McKinsey & Company, there is great value in using ML to improve semiconductor manufacturing yields by up to 30%. This is believed to be achievable by reducing scrap rates and using ML to optimize operations.

The technology uses root-cause analysis to reduce testing costs by streamlining manufacturing workflows. Manufacturing equipment that runs on ML technology is also expected to be 10% cheaper in annual maintenance expenses, with 20% less downtime and 25% lower inspection costs.

Perfecting the supply chain: ML plays a vital role in improving an organization's value by optimizing logistical solutions such as asset management, inventory management systems, and supply chain management. The combination of IoT and artificial intelligence (AI) is crucial for a modern company to run its supply chain optimally.

PwC predicts that more manufacturers will use machine learning and analytics to enhance predictive maintenance, which is slated to grow by 38% in the next five years. Process automation and visualization are expected to grow by 34% over the same period, and the integration of APIs, analytics, and big data is expected to grow connected factories by 31%.

Challenges faced while using Machine Learning for Smart Manufacturing

Today, technology creates significant security issues whenever we work on a format or platform that is digitally accessible.

Smart manufacturing enabled by machine learning is still a young, rapidly growing field. Despite the enormous benefits it has brought to the manufacturing sector, it still faces various challenges, and those concerns deserve attention.

Sheeba Chauhan | Sub Editor | ELE Times

Read the original post:
Machine Learning: Smart Manufacturing Tool, its Application and Challenges - ELE Times

Soli Organic to Advance Indoor, Soil-Based Agriculture Through Selective Breeding, AI, and Machine Learning – The Spoon

Soli Organic (previously known as Shenandoah Growers) is an agriculture company that operates indoor growing operations to produce organic culinary herbs. Today, the company announced two new partnerships with Rutgers University and AI/IoT company Koidra that will help enhance its cost advantage and increase the accessibility and affordability of its products.

In the multi-year partnership with Rutgers University, Soli Organic will work with plant breeding experts from the School of Environmental and Biological Sciences. The focus of the collaboration is to optimize the nutrition, flavor, aroma, and yields of selected crops. Additionally, the partners will research crops that are not feasible for outdoor production but are potentially viable for commercial production in an indoor growing operation.

While leafy greens and herbs are often the most popular types of crops grown via indoor cultivation, there is vast potential for additional crops in this space. Dr. James (Jim) Simon, the Director of the Rutgers New Use Agriculture and Natural Plant Products Program, said, "Of the over 400,000 plant species on the planet, we consume less than 100. We have not even scratched the surface of the different flavors and textures of plants. What will be key to a sustainable future is identifying plants that offer consumers the highest nutrient density combined with flavor, texture and shelf appeal, and the lowest possible environmental impact."

With Koidra's artificial intelligence and machine learning technology, Soli Organic intends to automate the operation of its growing facilities. This technology will not necessarily replace human growers, but will streamline operations and allow growers to make data-informed decisions. In a greenhouse setting, Koidra's use of artificial intelligence, data collection, and sensing technology can increase yields, profitability, and consistency.

"Soli Organic is relentless in our pursuit of technologies and partnerships that support our vision to offer our retailer partners and consumers nationwide a variety of nutrient-dense, differentiated fresh products in a manner that maximizes profitability while minimizing environmental impact," said Soli Organic's Chief Science Officer Tessa Pocock about the new partnerships.

Soli Organic has seven growing facilities and supplies 20,000 retailers across the country. According to the company, it is the only indoor grower with soil-based, controlled-environment growing operations. Most of the big players in this space, such as Gotham Greens, Bright Farms, and Bowery Farming, use hydroponic growing methods instead.

If you have ever seen indoor-grown greens or herbs in your grocery store, you may have noticed that most of these products are a bit pricier than the standard options. Soli Organic already offers affordable herbs and, following the new partnerships, hopes to bring even more indoor-grown produce to consumers.


Read more:
Soli Organic to Advance Indoor, Soil-Based Agriculture Through Selective Breeding, AI, and Machine Learning - The Spoon

5 Applications of Machine Learning in Healthcare – BBN Times

The use of machine learning (ML) in healthcare can help medical professionals save millions of lives.

Being healthy and capable of doing basic tasks is one of the prime priorities for people across the globe. Human beings tend to go far beyond their limits when the health of a loved one is at risk. Even though current healthcare systems are helpful, they have repeatedly proved that they, too, are prone to errors. With healthcare errors being the third leading cause of deaths in the US in 2018, the current healthcare system needs a makeover, and technology stands up to the requirement. Machine learning is a technology that can help human beings by learning from them: smart machines operate on the algorithms provided to them. In today's age, where there is a plethora of information and big data in healthcare strives to improve current healthcare systems, machine learning can surely make a mark in the effort to improve human health. Here are five exciting applications of machine learning in healthcare.


With machine learning advancing at an astounding speed, it has become an active application in the diagnosis of human diseases. Because machine learning operates on algorithms, healthcare specialists aim to leverage the technology by developing algorithms and providing machines with data that help them image and analyse human bodies for abnormalities. Using smart machines, the body can be scanned quickly and images captured to detect diseases early on.
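
As a minimal sketch of the kind of image-based screening described above (not a clinical system), the snippet below defines a tiny convolutional network that labels a grayscale scan as normal or abnormal; the architecture, image size, and random stand-in data are all illustrative assumptions.

```python
# Tiny, illustrative scan classifier. Architecture, input size and the random
# stand-in batch are assumptions; no real medical data or validated model here.
import torch
import torch.nn as nn

class TinyScanClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(16 * 16 * 16, 2)  # two classes: normal / abnormal

    def forward(self, x):
        x = self.features(x)              # (N, 16, 16, 16) for 64x64 grayscale input
        return self.classifier(x.flatten(1))

model = TinyScanClassifier()
dummy_batch = torch.randn(4, 1, 64, 64)   # stand-in for four grayscale scans
print(model(dummy_batch).shape)           # torch.Size([4, 2])
```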


Personalization is what humans appreciate wherever they go. As big data has several applications and gathers information from every possible source, leveraging it to improve human life can help doctors provide people with enhanced services. When ML can accommodate sufficient information about a user, doctors can personalise treatment options. This personalization of services is possible because machines can provide insights about the risk of a particular patient being susceptible to a specific disease. With accurate information and actionable insights, machines can also suggest remedies and precautionary measures to users and doctors depending on a patient's response to medications.


Machine learning has proved its worth in detecting cancer in the past and is one of the most viable options for leading healthcare pioneers to identify abnormalities. With such performance, ML is proving to be another strong option for radiology and radiotherapy. Doctors can use the technology to assess how a patient is likely to respond to a specific input of radiation. ML can also help doctors and surgeons decide what kind and intensity of radiation is required, depending on how well the patient responds to specific amounts of emission.


Scientists strive to discover new ways to cure certain deadly diseases. In their efforts to improve healthcare, they search for different drugs that can act as advanced medicines and run experiments focused on how these medications can help. Machine learning algorithms help scientists by providing information on how to improve drug performance and on how a drug behaves in a test subject. The behavioural details observed in a test subject given a candidate drug and a dummy drug can be recorded, and ML algorithms can then be used to determine how those medications are likely to perform on a human being.


Current technological innovations continuously strive to improve the healthcare situation for patients and doctors. When machines are focused on improving surgical performance, they can assist doctors in the form of surgical robots. These robots prove to be of great help, providing doctors with high-definition imagery and extended flexibility to reach areas that are difficult for a doctor to access. Machine learning has several other applications in numerous fields that aim to improve human life. As healthcare pioneers work consistently to improve their industry, they can now look for ways their organizations can leverage this technology and benefit from it.

Read more:
5 Applications of Machine Learning in Healthcare - BBN Times

Alibaba ponders its crystal ball to spy coming advances in AI and silicon photonics – The Register

Alibaba has published a report detailing a number of technology trends the China-based megacorp believes will make an impact across the economy and society at large over the next several years. These include the use of AI in scientific research, the adoption of silicon photonics, and the integration of terrestrial and satellite data networks, among others.

The Top Ten Technology Trends report was produced by Alibaba's DAMO Academy, set up by the firm in 2017 as a blue-sky scientific and technological research outfit. DAMO hit the headlines recently with hints of a novel chip architecture that merges processing and memory.

Among the trends listed in the DAMO report, AI features more than once. In science, DAMO believes that AI-based approaches will make new scientific paradigms possible, thanks to the ability of machine learning to process massive amounts of multi-dimensional and multi-modal data, and solve complex scientific problems. The report states that AI will not only accelerate the speed of scientific research, but also help discover new laws of science, and is set to be used as a production tool in some basic sciences.

As evidence, the report cites the fact that Google's DeepMind has already used AI to prove and propose new mathematical theorems and has assisted mathematicians in areas involving complex mathematics.

One unusual area where DAMO sees AI having an impact is in the integration of energy from renewable sources into existing power networks. Energy generated from renewable sources will vary depending on weather conditions, the report states, which are unpredictable and may change rapidly, thereby posing challenges for integration of renewable energies such as maintaining a stable output.

DAMO states that AI will be essential to solving these challenges, in particular being able to provide more accurate predictions of renewable energy capacity based on weather forecasts. Intelligent scheduling using deep learning techniques should be able to optimise scheduling policies across energy sources such as wind, solar, and hydroelectric.
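
A minimal sketch of that kind of weather-driven forecasting is shown below; the weather features, the synthetic data, and the gradient-boosting model are illustrative assumptions rather than details from the DAMO report.

```python
# Illustrative sketch: predicting renewable output from weather features.
# Features, data and model choice are assumptions, not details from the report.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(42)

# Stand-in weather features: wind speed (m/s), cloud cover (0-1), temperature (C).
X = np.column_stack([
    rng.uniform(0, 20, 500),
    rng.uniform(0, 1, 500),
    rng.uniform(-5, 35, 500),
])
# Stand-in generation (MW): wind helps, cloud cover hurts solar, plus noise.
y = 3.0 * X[:, 0] + 40.0 * (1 - X[:, 1]) + rng.normal(scale=5.0, size=500)

model = GradientBoostingRegressor().fit(X[:400], y[:400])
print(np.round(model.predict(X[400:405]), 1))  # predicted MW for five held-out hours
```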

The use of big data and deep learning technologies will be able to monitor grid equipment and predict failures, according to the report, so perhaps in the near future you will blame the AI when the power cuts out just as you are trying to binge-watch Line of Duty.

DAMO also believes that we will see a shift in the evolution of AI models, away from large-scale pre-trained models such as BERT and GPT-3 that require huge amounts of processing power to operate and therefore consume a lot of energy, to smaller-scale models that will handle learning and inferencing in downstream applications.

According to this view, the cognitive inferencing in foundational models will be delivered to small-scale models, which are then applied to downstream applications. This will result in separately evolved branches from the main model that have developed their own perception, decision-making and execution results from operating in their separate scenarios, which are then fed back into the foundational models.

In this way, the foundational models continually evolve through feedback and learning to build an organic intelligent cooperative system, the report claims.

There are challenges to this vision, of course, and the DAMO report states that any such system needs to address the collaboration between large and small-scale models, and the interpretability and causal inference issues of foundational models, as the small-scale models will be reliant on these.

Silicon photonics has been just around the corner for many years now, promising not just the ability for computer chips to communicate using optical connections, but perhaps even using photons instead of electrons inside chips. DAMO now expects we will see the widespread use of silicon photonic chips for high-speed data transmission across data centres within the next three years, and silicon photonic chips gradually replacing electronic chips in some computing fields over the next five to ten years.

The continuing rise of cloud computing and AI will be the driving factors for technological breakthroughs that will deliver the rapid advancement and commercialisation of silicon photonic chips, the report states.

Silicon photonic chips could be widely used in optical communications within and between data centres and optical computing. However, the current challenges of silicon photonic chips are in the supply chain and manufacturing processes, according to DAMO. The design, mass production, and packaging of silicon photonic chips have not yet been standardised and scaled, leading to low production capacity, low yield, and high costs.

Privacy is another area where DAMO believes we will see advances in the next few years. It states that techniques already exist that allow computation and analysis while preserving privacy, but widespread application of the technology has been limited due to performance bottlenecks and standardisation issues.

The report predicts that advanced algorithms for homomorphic encryption, which enables calculations on data without decrypting it, will hit a critical point so that less computing power will be required to support encryption. It also foresees the emergence of data trust entities that will provide technologies and operational models as trusted third parties to accelerate data sharing among organisations.
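
As a toy illustration only of what "calculations on data without decrypting it" means, the sketch below uses unpadded textbook RSA, which happens to be multiplicatively homomorphic; this is not one of the advanced schemes the report anticipates, and the tiny key is purely for demonstration.

```python
# Toy homomorphic-encryption demo with unpadded "textbook" RSA, which is
# multiplicatively homomorphic: E(a) * E(b) mod n decrypts to a * b.
# Tiny primes, no padding -- for illustration only, never for real use.
p, q = 61, 53
n = p * q                   # 3233
phi = (p - 1) * (q - 1)     # 3120
e = 17
d = pow(e, -1, phi)         # modular inverse of e (Python 3.8+)

def encrypt(m: int) -> int:
    return pow(m, e, n)

def decrypt(c: int) -> int:
    return pow(c, d, n)

a, b = 7, 11
c = (encrypt(a) * encrypt(b)) % n  # computation done on encrypted values only
print(decrypt(c))                  # 77 == a * b, recovered without decrypting a or b
```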

Another prediction from DAMO is that satellite-based communications and terrestrial networks will become more integrated over the next five years, providing ubiquitous connectivity. The report labels this as satellite-terrestrial integrated computing (STC), and states that it will connect high-Earth orbit (HEO) and low-Earth orbit (LEO) satellites and terrestrial mobile communications networks to deliver "seamless and multidimensional coverage."

There are major challenges to implementing all this, of course, including that traditional satellite communications are expensive and use static processing mechanisms that cannot deliver the requirements for STC, while hardware for satellite applications is not commonplace and hardware for terrestrial applications cannot be used in space.

Finally, the DAMO report predicts the rise of what it calls cloud-network-device convergence. This appears to be based on the premise that cloud platforms offer a huge amount of compute power, while modern data networks can provide access to that compute power from almost anywhere, so that endpoint devices only need provide a user interface.

Yes, it's the thin client concept emerging again, this time using the cloud as the host. Clouds allow applications to break free of the limited processing power of devices and deliver more demanding tasks, according to the report, while new network technologies such as 5G and satellite internet need to be continuously improved to ensure wide coverage and sufficient bandwidth.

Just by sheer coincidence, Alibaba Cloud already has such devices, with the handheld "Wuying" launched in 2020 and a more substantial desktop device shown off last year.

Naturally, the DAMO report expects to see a "surge of application scenarios on top of the converged cloud-network-device system" over the next two years that will drive the emergence of new types of devices and promise more high quality and immersive experiences for users.

Continue reading here:
Alibaba ponders its crystal ball to spy coming advances in AI and silicon photonics - The Register

Are we witnessing the dawn of post-theory science? – The Guardian

Isaac Newton apocryphally discovered his second law, the one about gravity, after an apple fell on his head. Much experimentation and data analysis later, he realised there was a fundamental relationship between force, mass and acceleration. He formulated a theory to describe that relationship, one that could be expressed as an equation, F=ma, and used it to predict the behaviour of objects other than apples. His predictions turned out to be right (if not always precise enough for those who came later).

Contrast how science is increasingly done today. Facebook's machine learning tools predict your preferences better than any psychologist. AlphaFold, a program built by DeepMind, has produced the most accurate predictions yet of protein structures based on the amino acids they contain. Both are completely silent on why they work: why you prefer this or that information; why this sequence generates that structure.

You can't lift a curtain and peer into the mechanism. They offer up no explanation, no set of rules for converting this into that; no theory, in a word. They just work, and do so well. We witness the social effects of Facebook's predictions daily. AlphaFold has yet to make its impact felt, but many are convinced it will change medicine.

Somewhere between Newton and Mark Zuckerberg, theory took a back seat. In 2008, Chris Anderson, the then editor-in-chief of Wired magazine, predicted its demise. So much data had accumulated, he argued, and computers were already so much better than us at finding relationships within it, that our theories were being exposed for what they were: oversimplifications of reality. Soon, the old scientific method of hypothesise, predict, test would be relegated to the dustbin of history. We'd stop looking for the causes of things and be satisfied with correlations.

With the benefit of hindsight, we can say that what Anderson saw is true (he wasn't alone). The complexity that this wealth of data has revealed to us cannot be captured by theory as traditionally understood. "We have leapfrogged over our ability to even write the theories that are going to be useful for description," says computational neuroscientist Peter Dayan, director of the Max Planck Institute for Biological Cybernetics in Tübingen, Germany. "We don't even know what they would look like."

But Anderson's prediction of the end of theory looks to have been premature, or maybe his thesis was itself an oversimplification. There are several reasons why theory refuses to die, despite the successes of such theory-free prediction engines as Facebook and AlphaFold. All are illuminating, because they force us to ask: what's the best way to acquire knowledge and where does science go from here?

The first reason is that we've realised that artificial intelligences (AIs), particularly a form of machine learning called neural networks, which learn from data without having to be fed explicit instructions, are themselves fallible. Think of the prejudice that has been documented in Google's search engines and Amazon's hiring tools.

The second is that humans turn out to be deeply uncomfortable with theory-free science. We don't like dealing with a black box; we want to know why.

And third, there may still be plenty of theory of the traditional kind, that is, graspable by humans, that usefully explains much but has yet to be uncovered.

So theory isn't dead, yet, but it is changing, perhaps beyond recognition. "The theories that make sense when you have huge amounts of data look quite different from those that make sense when you have small amounts," says Tom Griffiths, a psychologist at Princeton University.

Griffiths has been using neural nets to help him improve on existing theories in his domain, which is human decision-making. A popular theory of how people make decisions when economic risk is involved is prospect theory, which was formulated by behavioural economists Daniel Kahneman and Amos Tversky in the 1970s (it later won Kahneman a Nobel prize). The idea at its core is that people are sometimes, but not always, rational.
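
For readers who want the mechanics, here is a simplified (non-cumulative) sketch of prospect theory's standard value and probability-weighting functions, using the commonly cited Tversky-Kahneman 1992 parameter estimates; the gamble at the end is an invented example, not one from the study described below.

```python
# Simplified sketch of prospect theory's value and weighting functions,
# with the Tversky-Kahneman (1992) parameter estimates. The example gamble
# is invented for illustration.
ALPHA = BETA = 0.88   # diminishing sensitivity to gains and losses
LAM = 2.25            # loss aversion
GAMMA = 0.61          # probability weighting (gains)

def value(x: float) -> float:
    return x ** ALPHA if x >= 0 else -LAM * ((-x) ** BETA)

def weight(p: float) -> float:
    return p ** GAMMA / ((p ** GAMMA + (1 - p) ** GAMMA) ** (1 / GAMMA))

# Prospect value of "win $100 with probability 0.1, otherwise nothing":
gamble = [(0.1, 100.0), (0.9, 0.0)]
print(round(sum(weight(p) * value(x) for p, x in gamble), 2))
```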

In Science last June, Griffiths's group described how they trained a neural net on a vast dataset of decisions people took in 10,000 risky choice scenarios, then compared how accurately it predicted further decisions with respect to prospect theory. They found that prospect theory did pretty well, but the neural net showed its worth in highlighting where the theory broke down, that is, where its predictions failed.

These counter-examples were highly informative, Griffiths says, because they revealed more of the complexity that exists in real life. For example, humans are constantly weighing up probabilities based on incoming information, as prospect theory describes. But when there are too many competing probabilities for the brain to compute, they might switch to a different strategy, being guided by a rule of thumb, say, and a stockbroker's rule of thumb might not be the same as that of a teenage bitcoin trader, since it is drawn from different experiences.

"We're basically using the machine learning system to identify those cases where we're seeing something that's inconsistent with our theory," Griffiths says. The bigger the dataset, the more inconsistencies the AI learns. The end result is not a theory in the traditional sense of a precise claim about how people make decisions, but a set of claims that is subject to certain constraints. A way to picture it might be as a branching tree of if-then-type rules, which is difficult to describe mathematically, let alone in words.
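
A minimal sketch of that disagreement-mining idea: fit a model to observed choices, compare its predictions with a hand-written theory on held-out cases, and keep the cases where the two diverge most. The data, the stand-in "theory", the network size, and the threshold are all illustrative assumptions.

```python
# Sketch: use a learned model to flag cases inconsistent with a simple theory.
# Data, the "theory", the MLP and the threshold are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(2000, 2))             # stand-in choice features
truth = np.tanh(2 * X[:, 0]) + 0.3 * X[:, 1] ** 2  # the unknown real behaviour
y = truth + rng.normal(scale=0.05, size=2000)      # observed decisions

def theory(X):
    """A deliberately simplified 'theory': linear in the first feature only."""
    return 1.5 * X[:, 0]

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                     random_state=0).fit(X[:1500], y[:1500])

X_test = X[1500:]
disagreement = np.abs(model.predict(X_test) - theory(X_test))
flagged = X_test[disagreement > np.quantile(disagreement, 0.95)]
print(f"{len(flagged)} held-out cases where theory and model diverge most")
```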

What the Princeton psychologists are discovering is still just about explainable, by extension from existing theories. But as they reveal more and more complexity, it will become less so, the logical culmination of that process being the theory-free predictive engines embodied by Facebook or AlphaFold.

Some scientists are comfortable with that, even eager for it. When voice recognition software pioneer Frederick Jelinek said, "Every time I fire a linguist, the performance of the speech recogniser goes up," he meant that theory was holding back progress, and that was in the 1980s.

Or take protein structures. A protein's function is largely determined by its structure, so if you want to design a drug that blocks or enhances a given protein's action, you need to know its structure. AlphaFold was trained on structures that were derived experimentally, using techniques such as X-ray crystallography, and at the moment its predictions are considered more reliable for proteins where there is some experimental data available than for those where there is none. But its reliability is improving all the time, says Janet Thornton, former director of the EMBL European Bioinformatics Institute (EMBL-EBI) near Cambridge, and it isn't the lack of a theory that will stop drug designers using it. "What AlphaFold does is also discovery," she says, "and it will only improve our understanding of life and therapeutics."

Others are distinctly less comfortable with where science is heading. Critics point out, for example, that neural nets can throw up spurious correlations, especially if the datasets they are trained on are small. And all datasets are biased, because scientists don't collect data evenly or neutrally, but always with certain hypotheses or assumptions in mind, assumptions that worked their way damagingly into Google's and Amazon's AIs. As philosopher of science Sabina Leonelli of the University of Exeter explains: "The data landscape we're using is incredibly skewed."

But while these problems certainly exist, Dayan doesn't think they're insurmountable. He points out that humans are biased too and, unlike AIs, in ways that are very hard to interrogate or correct. Ultimately, if a theory produces less reliable predictions than an AI, it will be hard to argue that the machine is the more biased of the two.

A tougher obstacle to the new science may be our human need to explain the world, to talk in terms of cause and effect. In 2019, neuroscientists Bingni Brunton and Michael Beyeler of the University of Washington, Seattle, wrote that this need for interpretability may have prevented scientists from making novel insights about the brain, of the kind that only emerges from large datasets. But they also sympathised. If those insights are to be translated into useful things such as drugs and devices, they wrote, it is imperative that computational models yield insights that are explainable to, and trusted by, clinicians, end-users and industry.

Explainable AI, which addresses how to bridge the interpretability gap, has become a hot topic. But that gap is only set to widen and we might instead be faced with a trade-off: how much predictability are we willing to give up for interpretability?

Sumit Chopra, an AI scientist who thinks about the application of machine learning to healthcare at New York University, gives the example of an MRI image. It takes a lot of raw data, and hence scanning time, to produce such an image, which isn't necessarily the best use of that data if your goal is to accurately detect, say, cancer. You could train an AI to identify what smaller portion of the raw data is sufficient to produce an accurate diagnosis, as validated by other methods, and indeed Chopra's group has done so. But radiologists and patients remain wedded to the image. "We humans are more comfortable with a 2D image that our eyes can interpret," he says.

The final objection to post-theory science is that there is likely to be useful old-style theory, that is, generalisations extracted from discrete examples, that remains to be discovered, and only humans can do that because it requires intuition. In other words, it requires a kind of instinctive homing in on those properties of the examples that are relevant to the general rule. One reason we consider Newton brilliant is that in order to come up with his second law he had to ignore some data. He had to imagine, for example, that things were falling in a vacuum, free of the interfering effects of air resistance.

In Nature last month, mathematician Christian Stump, of Ruhr University Bochum in Germany, called this intuitive step the core of the creative process. But the reason he was writing about it was to say that, for the first time, an AI had pulled it off. DeepMind had built a machine-learning program that had prompted mathematicians towards new insights, new generalisations, in the mathematics of knots.

In 2022, therefore, there is almost no stage of the scientific process where AI hasn't left its footprint. And the more we draw it into our quest for knowledge, the more it changes that quest. We'll have to learn to live with that, but we can reassure ourselves about one thing: we're still asking the questions. As Pablo Picasso put it in the 1960s, "computers are useless. They can only give you answers."

Read more here:
Are we witnessing the dawn of post-theory science? - The Guardian

NVIDIA released Deep-Learning Dynamic Super Resolution in its latest Game Ready drivers – WindowsReport.com


A feature in GeForce drivers called Dynamic Super Resolution, or DSR, uses your graphics card to render games at a higher resolution than your monitor can display, then scales the result down to your native resolution.

Unlike DLSS, which upscales the image after rendering it at a lower resolution, DSR renders the frame at a higher internal resolution and then downscales it. This results in a sharper picture.

DSR

Downscale rendering is a technique that renders at an increased resolution, then reduces the result to match the output resolution of the display device.

NVIDIA has developed an AI-powered update to their Dynamic Super Resolution (DSR) technology, and the upgrade is coming in the next Game Ready drivers.

The latest GeForce drivers will include Deep Learning Dynamic Super Resolution, a new downscaling technique, or rather a refreshed version of the existing one, which will make its way to GeForce drivers on January 14, along with some other additions.

As the name suggests, this is DSR with a deep-learning twist. NVIDIA is using the Tensor cores inside GeForce RTX 20-series graphics cards to power the in-game super resolution, applying artificial intelligence and machine learning to boost image quality.

The goal of this project is to provide the gaming community with a near-native experience at 1080p resolution but with a graphical fidelity that exceeds even 4K.

NVIDIA's new DLDSR 2.25x mode will offer image quality comparable to the current DSR 4x mode, but with a significant performance boost, since DLDSR is supposed to produce similar-quality images at a much lower rendering cost.
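
As a quick worked example (assuming a 1920x1080 native display, and that the DSR/DLDSR factors refer to total pixel count, as NVIDIA labels them), the internal render resolutions work out as follows.

```python
# Internal render resolution for a pixel-count scaling factor.
# A 1920x1080 native display is assumed for illustration.
import math

def render_resolution(native_w: int, native_h: int, factor: float) -> tuple[int, int]:
    scale = math.sqrt(factor)   # per-axis scale from a total-pixel-count factor
    return round(native_w * scale), round(native_h * scale)

native = (1920, 1080)
print("DSR 4x      ->", render_resolution(*native, 4.0))    # (3840, 2160)
print("DLDSR 2.25x ->", render_resolution(*native, 2.25))   # (2880, 1620)
```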

DLDSR (Deep Learning Dynamic Super Resolution) is a supersampling-based image-quality feature for games, and it's ready for testing by RTX graphics card owners.

NVIDIA is adding a new feature to its GeForce drivers, and it will be available on January 14 in the Game Ready driver release. In addition to that feature, NVIDIA is introducing a feature you probably didn't know existed: custom ReShade filters.

For those who aren't familiar with it, ReShade is a popular graphics tool that lets you apply various filters on top of a game and adjust its lighting and textures.

NVIDIA has partnered with ReShade modder Pascal Gilcher to create new versions of classic ReShade filters that will appear in the GeForce driver itself. GeForce Experience users will be able to apply these filters via Freestyle's overlay.

NVIDIA says that using DLDSR and SSRTGI will let you have a more realistic gaming experience in games like Prey.

On January 14, NVIDIA will release its new Game Ready driver, which will debut optimized performance for Assassin's Creed: Unity.

It's interesting to see NVIDIA come out of nowhere with an announcement like this, just days after AMD announced its new image upscaling tech, Radeon Super Resolution.

It seems that NVIDIA isn't too happy to see anyone else having a moment in the spotlight in the image scaling space; the same thing happened just a few months ago when AMD's FSR was having a moment to shine.

What are your thoughts on the new feature that lets you upscale the game resolution? Share your thoughts with us in the comment section below.



Go here to read the rest:
NVIDIA released Deep-Learning Dynamic Super Resolution in its latest Game Ready drivers - WindowsReport.com

How AI and machine learning can help improve therapy for humans – Boing Boing

When I first saw a headline in the Technology Review about therapists using AI to improve treatment, my initial instinct was to cringe. With the rise of remote therapy apps, I can absolutely envision a world where some intrepid entrepreneur decides to "disrupt" the cognitive behavioral therapy industry by automating the process with help from algorithms that will inevitably be exposed as racist and sexist and who knows what else.

But the story that Charlotte Jee and Will Douglas Heaven actually tell in the Review is much more nuanced and interesting. It focuses on words, and how machine learning might help us to identify those elusive words we're always looking for. A large part of psychological healing involves finding the right words to identify and describe your scenario and experience; some of the greatest epiphanies and breakthroughs come when you finally find the right words for something you've been struggling with. And that's what these therapists are proposing: using AI to help people find those words.

What's crucial is delivering the right words at the right time. Blackwell and his colleagues at Ieso are pioneering a new approach to mental-health care in which the language used in therapy sessions is analyzed by an AI. The idea is to usenatural-language processing(NLP) to identify which parts of a conversation between therapist and clientwhich types of utterance and exchangeseem to be most effective at treating different disorders.

The aim is to give therapists better insight into what they do, helping experienced therapists maintain a high standard of care and helping trainees improve. Amid a global shortfall in care, an automated form of quality control could be essential in helping clinics meet demand.

Ultimately, the approach may reveal exactly how psychotherapy works in the first place, something that clinicians and researchers are still largely in the dark about. A new understanding of therapy's active ingredients could open the door to personalized mental-health care, allowing doctors to tailor psychiatric treatments to particular clients much as they do when prescribing drugs.
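
As a minimal sketch of what utterance-level NLP can look like (this is not Ieso's system: the labels, example utterances, and the TF-IDF plus logistic-regression pipeline are invented for illustration), one could train a classifier to tag each therapist utterance with a type and then study which types precede good outcomes.

```python
# Invented, minimal sketch of utterance-type classification. Labels, examples
# and the pipeline are illustrative assumptions, not Ieso's actual approach.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

utterances = [
    "Let's set a small goal you can try before next week.",
    "How did that situation make you feel?",
    "Tell me more about what happened at work.",
    "We agreed you would practise the breathing exercise daily.",
]
labels = ["change_method", "eliciting_feeling", "eliciting_feeling", "change_method"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(utterances, labels)
print(clf.predict(["Could you describe how you felt afterwards?"]))
```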

There's a lot more, of course. But that's an approach to AI treatment I can get behind.

The therapists using AI to make therapy better [Charlotte Jee and Will Douglas Heaven / MIT Technology Review]


Read the original:
How AI and machine learning can help improve therapy for humans - Boing Boing

How leveraging AI and machine learning can give companies a competitive edge – Business Today

A recent study by Gartner indicates that by 2025, the 10% of enterprises that establish Machine Learning (ML) or Artificial Intelligence (AI) engineering best practices will generate at least three times more value from their AI and ML efforts than the 90% of enterprises that don't. With such high value estimated to come from the adoption of ML/AI practices alone, it is difficult to disagree that the future of enterprises rests heavily on AI and ML alongside other digital technologies. The pandemic has unveiled a world that embraced technology at a pace that would otherwise have taken ages to evolve.

Traditional practices, monolithic systems, lack of flexibility, and manual processes were all blocking innovation.

Also Read:Artificial Intelligence: A Pathway to success for enterprises

However, mass new-age technology acceptance induced by the pandemic has helped enterprises overcome these challenges. Modern technologies like AI and ML are opening a new world of possibilities for organisations.

Seizing the early-mover advantage will particularly benefit organisations in taking important business decisions in a more informed, intuitive way.

The applicability of new-age technologies is growing every day. For example, marketers are starting to use ML-based tools to personalise offers to their customers and further measure their satisfaction levels through the successful implementation of ML algorithms into their operations.

This is one of many examples of how AI/ML algorithms are enabling organisations to run their businesses smartly and profitably. Additionally, enterprises are recognising the benefits of cloud infrastructure and applications with ML and AI algorithms built in.

They allow companies to spend less time on manual work and management and instead focus on high-value jobs that drive business results. ML can result in efficiencies in workloads of enterprise IT and ultimately reduce IT infrastructure costs.

This stands especially true in India, where consulting firm Accenture estimates in one of its reports that the use of AI could add $957 billion to the Indian economy in 2035, provided the "right investments" are made in new-age technology. India, with its entrepreneurial spirit, abundance of talent and the right sources of education, has mega potential to unleash AI's true capabilities, but it needs the right partner.

The biggest limitation in using AI is that companies often run into implementation issues which could be anything from scarcity of data science expertise to making the platform perform in real-time.

As a result, there is slight reluctance in accepting AI among organisations, and this, in turn, is leading to inconsistencies and lack of results.

Also Read:Three ways AI can help transform businesses

However, with the right partner, India's true potential can be harnessed. As we move into an AI/ML led world, we need to lead the change by building the requisite skills.

While many companies don't have enough resources to marshal an army of data science PhDs, a more practical alternative is to build smaller and more focused "MLOps" teams - much like DevOps teams in application development.

Such teams could consist of not just data scientists, but also developers and other IT engineers whose mission would be to deploy, maintain, and constantly improve AI/ML models in a production environment. While there is a huge responsibility lying on IT professionals to develop an AI/ML led ecosystem in India, companies must also align resources to help them be successful. In due course, AI/ML will be the competitive advantage that companies will need to adopt in order to stay relevant and sustain businesses.
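
As a minimal sketch of the deploy-maintain-improve loop such a team might automate (the dataset, accuracy gate, and file path are illustrative assumptions, not a prescription from the article), a scheduled job could retrain a candidate model and promote it only when it clears a quality bar.

```python
# Illustrative retrain-evaluate-promote job. Dataset, accuracy gate and model
# path are assumptions made for the sketch.
import joblib
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

ACCURACY_GATE = 0.93        # only promote models that clear this bar
MODEL_PATH = "model.joblib"

def retrain_and_maybe_promote() -> None:
    X, y = load_breast_cancer(return_X_y=True)   # stand-in for fresh production data
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    candidate = LogisticRegression(max_iter=5000).fit(X_tr, y_tr)
    score = accuracy_score(y_te, candidate.predict(X_te))
    if score >= ACCURACY_GATE:
        joblib.dump(candidate, MODEL_PATH)        # promote to "production"
        print(f"Promoted new model (accuracy {score:.3f})")
    else:
        print(f"Kept previous model; candidate scored only {score:.3f}")

retrain_and_maybe_promote()
```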

Forrester predicts that one in five organisations will double down on "AI inside" - which is AI and ML embedded in their systems and operational practices.

AI and ML are powerful technology tools that hold the key to achieving an organization's digital transformation goals.

(The author is Head-Technology Cloud, Oracle India.)

Read the original here:
How leveraging AI and machine learning can give companies a competitive edge - Business Today