How to Measure the Performance of Your AI/Machine Learning Platform? – Analytics Insight

With each passing day, new technologies are emerging across the world. They are not just bringing innovation to industries but also radically transforming entire societies. Be it artificial intelligence, machine learning, the Internet of Things, or the cloud, all of these have found a plethora of applications in the world, implemented through their specialized platforms. Organizations choose a suitable platform that has the power to unlock the complete benefits of the respective technology and obtain the desired results.

But choosing a platform isn't as easy as it seems. It has to be of high caliber, fast, independent, etc. In other words, it should be worth your investment. Let's say that you want to know the performance of a CPU in comparison to others. It's easy, because you know you have PassMark for the job. Similarly, when you want to check the performance of a graphics processing unit, you have Unigine's Superposition. But when it comes to machine learning, how do you figure out how fast a platform is? Alternatively, as an organization, if you have to invest in a single machine learning platform, how do you decide which one is the best?

For a long time, there has been no benchmark to decide the worthiness of machine learning platforms. Put differently, the artificial intelligence and machine learning industry has lacked reliable, transparent, standard, and vendor-neutral benchmarks that flag performance differences between the various components used to handle a workload. These components include hardware, software, algorithms, and cloud configurations, among others.

Even though it has never been a roadblock when designing applications, the choice of platform determines the efficiency of the final product in one way or another. Technologies like artificial intelligence and machine learning are growing extremely resource-sensitive as research progresses. For this reason, practitioners of AI and ML are seeking the fastest, most scalable, power-efficient, and low-cost hardware and software platforms to run their workloads.

This need has emerged because machine learning is moving towards a workload-optimized structure. As a result, there is a greater need than ever for standard benchmarking tools that help machine learning developers assess and analyze the target environments best suited for a given job. Not just developers but enterprise information technology professionals also need a benchmarking tool for a specific training or inference job. Andrew Ng, CEO of Landing AI, points out that there is no doubt AI is transforming multiple industries, but for it to reach its full potential, we still need faster hardware and software. Therefore, unless we have something to measure the efficiency of hardware and software specifically for the needs of ML, there is no way we can design more advanced ones for our requirements.

David Patterson, author of Computer Architecture: A Quantitative Approach, highlights the fact that good benchmarks enable researchers to compare different ideas quickly, which makes it easier to innovate. Having said this, the need for a standard benchmarking tool for ML is greater than ever.

To solve the underlying problem of an unbiased benchmarking tool, machine learning expert David Kanter, along with scientists and engineers from reputed organizations such as Google, Intel, and Microsoft, has come up with a new solution. Welcome MLPerf: a machine learning benchmark suite that measures how fast a system can perform ML inference using a trained model.

Measuring the speed of a machine learning system is already a complex task, and it grows more tangled the longer the system is observed, simply because of the varying nature of problem sets and architectures in machine learning services. That said, MLPerf measures not only the performance of a platform but also its accuracy. It is intended for the widest range of systems, from mobile devices to servers.

Training is the process in machine learning where a network is fed large datasets and let loose to find any underlying patterns in them. The more data, the more efficient the system becomes. It is called training because the network learns from the datasets and trains itself to recognize particular patterns. For example, Gmail's Smart Reply is trained on 238,000,000 sample emails, and Google Translate is trained on a trillion-word dataset. This makes the computational cost of training quite expensive. Systems designed for training have large and powerful hardware, since their job is to chew up the data as fast as possible. Once the system is trained, the output received from it is called inference.

Therefore, performance certainly matters when running inference workloads. On the one hand, the training phase demands as many operations per second as possible, with no concern for latency. On the other hand, latency is a big issue during inference, since a human is waiting on the other end to receive the results of the inference query.

Due to the complex nature of the architectures and metrics involved, MLPerf does not produce a single overall score. Because it must remain valid across a wide range of workloads and architectures, one cannot reduce a system to one number, as in the case of CPU or GPU benchmarks. In MLPerf, scores are broken down into training workloads and inference workloads before being divided into tasks, models, datasets, and scenarios. The result obtained from MLPerf is not one score but a wide spreadsheet. Each inference task is measured under four scenarios: single-stream, multi-stream, server, and offline.
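To make the throughput-versus-latency distinction concrete, here is a minimal sketch in Python of how a single-stream latency score and an offline throughput score might be measured. The run_model function is a stand-in invented for illustration; this is not MLPerf's actual harness:

    import time
    import statistics

    def run_model(sample):
        # Stand-in for one inference call on a trained model.
        time.sleep(0.001)
        return sample

    def single_stream_p90(samples):
        # Single-stream scenario: one query at a time. Report a high
        # percentile of latency, since the tail is what a waiting user feels.
        latencies = []
        for s in samples:
            start = time.perf_counter()
            run_model(s)
            latencies.append(time.perf_counter() - start)
        return statistics.quantiles(latencies, n=100)[89]  # ~90th percentile

    def offline_throughput(samples):
        # Offline scenario: all queries available at once. Report samples
        # per second and ignore the latency of any individual query.
        start = time.perf_counter()
        for s in samples:
            run_model(s)
        return len(samples) / (time.perf_counter() - start)

    samples = list(range(200))
    print(f"p90 latency: {single_stream_p90(samples):.4f} s")
    print(f"throughput:  {offline_throughput(samples):.1f} samples/s")

The latency number answers how long one user waits; the throughput number answers how much data the system can chew through. That is why the scenarios are reported separately rather than folded into a single score.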

Finally, MLPerf separates the benchmark into Open and Closed divisions, with stricter requirements for the Closed division. Similarly, the hardware for an ML workload is separated into categories such as Available, Preview, Research, Development, and Others. All these factors give ML experts and practitioners an idea of how close a given system is to real production.



How can AI-powered humanitarian engineering tackle the biggest threats facing our planet? – AI News

Humanitarian engineering programs bring together engineers, policy makers, non-profit organisations, and local communities to leverage technology for the greater good of humanity.

The intersection of technology, community, and sustainability offers a plethora of opportunities to innovate. We still live in an era where millions of people live in extreme poverty, lacking access to clean water, basic sanitation, electricity, internet, quality education, and healthcare.

Clearly, we need global solutions to tackle the grandest challenges facing our planet. So how can artificial intelligence (AI) assist in addressing key humanitarian and sustainable development challenges?

To begin with, the United Nations Sustainable Development Goals (SDGs) represent a collection of 17 global goals that aim to address pressing global challenges, achieve inclusive development, and foster peace and prosperity in a sustainable manner by 2030. AI enables the building of smart systems that imitate human intelligence to solve real-world problems.

Recent advancements in AI have radically changed the way we think, live, and collaborate. Our daily lives are centred around AI-powered solutions, with smart speakers playing wake-up alarms, smart watches tracking steps on our morning walk, smart refrigerators recommending breakfast recipes, smart TVs providing personalised content recommendations, and navigation apps recommending the best route based on real-time traffic. Clearly, the age of AI is here. How can we leverage this transformative technology to amplify its impact for social good?

AI core capabilities like machine learning (ML), computer vision, natural language understanding, and speech recognition offer new approaches to address humanitarian challenges and amplify the positive impact on underserved communities. ML enables machines to process massive amounts of data, interconnect underlying patterns, and derive meaningful insights for decision making. ML techniques like deep learning offer the powerful capability to create sophisticated AI models based on artificial neural networks.

Such models can be used for numerous real-world situations, like pandemic forecasting. AI tools can model and predict the spread of outbreaks like Covid-19 in low-resource settings using recent outbreak trends, treatment data, and travel history. This will help governmental and healthcare agencies to identify high-risk areas, manage demand and supply of essential medical supplies, and formulate localised remedial measures to control an outbreak.

Computer vision techniques process visual information in digital images and videos to generate valuable inferences. Trained AI models assist medical practitioners in examining clinical images and identifying hidden patterns of malignant tumors, supporting expedited decision-making and treatment planning for patients. Most recently, smart speakers have extended their conversational AI capabilities to healthcare use cases like chronic illness management, prescription ordering, and urgent-care appointments.

This advancement opens up the possibility of driving healthcare innovations that will break down access barriers and deliver quality healthcare to marginalised populations. Similarly, global educational programs aimed at connecting the digitally unconnected can leverage satellite images and ML algorithms to map school locations. AI-powered learning products are increasingly being launched to provide personalised experiences that train young children in math and science.

The convergence of AI with the Internet of Things (IoT) facilitates rapid development of meaningful solutions for agriculture to monitor soil health, assess crop damage, and optimise use of pesticides. This empowers local farmers to model different scenarios and choose the right crop that is likely to maximise the quality and yield, and it contributes toward zero hunger and economic empowerment SDGs.

To deliver high social impact, AI-driven humanitarian programs should follow a bottom-up approach. One should always work backwards from the needs of the end user, driving clarity on the targeted community or user, their major pain points, the opportunity to innovate, and the expected user experience.

Most importantly, always check whether AI is relevant to the problem at hand, or investigate whether a meaningful alternative approach exists. Understand how an AI-powered solution will deliver value to the various stakeholders involved and positively contribute toward achieving the SDGs for local communities. Define a suite of metrics to measure the various dimensions of program success. Data acquisition is central to building robust AI models, which require access to meaningful, high-quality data.

Delivering effective AI solutions to the humanitarian landscape requires a clear understanding of the data required and relevant sources to acquire them. For instance, satellite images, electronic health records, census data, educational records, and public datasets are used to solve problems in education, healthcare, and climate change. Partnership with key field players is important for addressing data gaps for domains with sparsely available data.

Responsible use of AI in humanitarian programs can be achieved by enforcing standards and best practices that implement fairness, inclusiveness, security, and privacy controls. Always check models and datasets for bias and negative experiences. Techniques like data visualisation and clustering can evaluate a dataset's distribution for fair representation across stakeholder dimensions. Routine updates to training and testing datasets are essential to fairly account for diversity in users' growing needs and usage patterns. Safeguard sensitive user information by implementing privacy controls: encrypt user data at rest and in transit, limit access to user data and critical production systems based on least-privilege access control, and enforce data retention and deletion policies on user datasets. Implement a robust threat model to handle possible system attacks, and run routine checks on infrastructure security vulnerabilities.
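As one illustration of the dataset-distribution check mentioned above, here is a minimal sketch in Python. The records and the "region" field are invented for illustration, and the 20% threshold is an arbitrary screening choice, not a standard:

    from collections import Counter

    def representation_report(records, group_key, min_share=0.20):
        # Tally how often each group appears in the dataset and flag
        # groups below a minimum share: a crude first screen for bias.
        counts = Counter(r[group_key] for r in records)
        total = sum(counts.values())
        for group, n in sorted(counts.items()):
            share = n / total
            flag = "  <-- under-represented" if share < min_share else ""
            print(f"{group:>8}: {n:3d} ({share:5.1%}){flag}")

    # Hypothetical health-survey records.
    records = [
        {"region": "urban"}, {"region": "urban"}, {"region": "urban"},
        {"region": "rural"}, {"region": "urban"}, {"region": "rural"},
        {"region": "remote"},
    ]
    representation_report(records, "region")  # flags "remote" at ~14%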

To conclude, AI-powered humanitarian programs offer a transformative opportunity to advance social innovations and build a better tomorrow for the benefit of humanity.

Photo by Elena Mozhvilo on Unsplash



U of I to lead two of seven new national artificial intelligence institutes – University of Illinois News

CHAMPAIGN, Ill. – The National Science Foundation and the U.S. Department of Agriculture's National Institute of Food and Agriculture are announcing an investment of more than $140 million to establish seven artificial intelligence institutes in the U.S. Two of the seven will be led by teams at the University of Illinois, Urbana-Champaign. They will support the work of researchers at the U. of I. and their partners at other academic and research institutions. Each of the new institutes will receive about $20 million over five years.

The USDA-NIFA will fund the AI Institute for Future Agricultural Resilience, Management and Sustainability at the U. of I. Illinois computer science professor Vikram Adve will lead the AIFARMS Institute.

The NSF will fund the AI Institute for Molecular Discovery, Synthetic Strategy and Manufacturing, also known as the Molecule Maker Lab Institute. Huimin Zhao, a U. of I. professor of chemical and biomolecular engineering and of chemistry, will lead this institute.

"AIFARMS will advance AI research in computer vision, machine learning, soft-object manipulation and intuitive human-robot interaction to solve major agricultural challenges," the NSF reports. Such challenges include sustainable intensification with limited labor, efficiency and welfare in animal agriculture, the environmental resilience of crops and the preservation of soil health. The institute will feature "a novel autonomous farm of the future, new education and outreach pathways for diversifying the workforce in agriculture and technology, and a global clearinghouse to foster collaboration in AI-driven agricultural research," Adve said.

Computer science professor Vikram Adve will lead the AI Institute for Future Agricultural Resilience, Management and Sustainability at the U. of I.

Photo by L. Brian Stauffer


The Molecule Maker Lab Institute will focus on "the development of new AI-enabled tools to accelerate automated chemical synthesis to advance the discovery and manufacture of novel materials and bioactive compounds," the NSF reports. The institute also will train a new generation of scientists with combined expertise in AI, chemistry and bioengineering. "The goal of the institute is to establish an open ecosystem of disruptive thinking, education and community engagement powered by state-of-the-art molecular design, synthesis and spectroscopic characterization technologies, all interfaced with AI and a modern cyberinfrastructure," Zhao said.

Huimin Zhao, a professor of chemical and biomolecular engineering and of chemistry, will lead the new Molecule Maker Lab Institute at Illinois.

Photo by L. Brian Stauffer


"The National Science Foundation and USDA-NIFA recognize the breadth and depth of Illinois' expertise in artificial intelligence, agricultural systems and molecular innovation," U. of I. Chancellor Robert Jones said. "It is no surprise to me that two of seven new national AI institutes will be led by our campus. I look forward to seeing the results of these new investments in improving agricultural outcomes and innovations in basic and applied research."

Adve is a co-director of the U. of I. Center for Digital Agriculture with crop sciences bioinformatics professor Matthew Hudson. AIFARMS will be under the CDA umbrella. Zhao and Hudson are affiliates of the Carl R. Woese Institute for Genomic Biology, where Zhao leads the Biosystems Design theme. The Molecule Maker Lab Institute will be associated with two campus institutes: IGB and the Beckman Institute for Advanced Science and Technology.

For more information, see related posts, below, from associated campus units:

Editor's notes:

To reach Vikram Adve, email vadve@illinois.edu.

To reach Huimin Zhao, email zhao5@illinois.edu.


Funding boost for artificial intelligence in NHS to speed up diagnosis of deadly diseases – GOV.UK

Patients will benefit from major improvements in technology to speed up the diagnosis of deadly diseases like cancer thanks to further investment in the use of artificial intelligence across the NHS.

A £50 million funding boost will scale up the work of the existing Digital Pathology and Imaging Artificial Intelligence Centres of Excellence, which were launched in 2018 to develop cutting-edge digital tools to improve the diagnosis of disease.

The 3 centres set to receive a share of the funding, based in Coventry, Leeds and London, will deliver digital upgrades to pathology and imaging services across an additional 38 NHS trusts, benefiting 26.5 million patients across England.

Pathology and imaging services, including radiology, play a crucial role in the diagnosis of diseases, and the funding will lead to faster and more accurate diagnoses and more personalised treatments for patients, freeing up clinicians' time and ultimately saving lives.

Health and Social Care Secretary Matt Hancock said:

"Technology is a force for good in our fight against the deadliest diseases: it can transform and save lives through faster diagnosis, free up clinicians to spend time with their patients and make every pound in the NHS go further.

"I am determined we do all we can to save lives by spotting cancer sooner. Bringing the benefits of artificial intelligence to the frontline of our health service with this funding is another step in that mission. We can support doctors to improve the care we provide and make Britain a world-leader in this field.

"The NHS is open and I urge anyone who suspects they have symptoms to book an appointment with their GP as soon as possible to benefit from our excellent diagnostics and treatments."

Today the government has also provided an update on the number of cancer diagnostic machines replaced in England since September 2019, when £200 million was announced to help replace MRI machines, CT scanners and breast screening equipment, as part of the government's commitment to ensure 55,000 more people survive cancer each year.

69 scanners have now been installed and are in use, 10 more are being installed and 75 have been ordered or are ready to be installed.

The new funding is part of the government's commitment to saving thousands more lives each year and detecting three-quarters of all cancers at an early stage by 2028.

Cancer diagnosis and treatment have been an absolute priority throughout the pandemic and continue to be so. Nightingale hospitals have been turned into mass screening centres, and hospitals have successfully and quickly cared for patients urgently referred by their GP, with over 92% of urgent cancer referrals investigated within 2 weeks and 85,000 people starting treatment for cancer since the beginning of the coronavirus pandemic.

In June, 45,000 more people came forward for a cancer check, and the public are urged to contact their GP for a check-up if they are concerned about possible symptoms.

National Pathology Imaging Co-operative Director and Consultant Pathologist at Leeds Teaching Hospitals NHS Trust Darren Treanor said:

"This investment will allow us to use digital pathology to diagnose cancer at 21 NHS trusts in the north, serving a population of 6 million people. We will also build a national network spanning another 25 hospitals in England, allowing doctors to get expert second opinions in rare cancers, such as childhood tumours, more rapidly. This funding puts the NHS in a strong position to be a global leader in the use of artificial intelligence in the diagnosis of disease."

Professor Kiran Patel, Chief Medical Officer and Interim Chief Executive Officer for University Hospitals Coventry and Warwickshire (UHCW) NHS Trust, said:

"We are delighted to receive and lead this funding. This represents a major capital investment into the NHS which will massively expand the digitisation of cellular pathology services, driving diagnostic evaluation to new heights and increasing access to a vast amount of image information for research.

"As a trust we're excited to be playing such a major part in helping the UK to take a leading role in the development and delivery of these new technologies to improve patient outcomes and enhance our understanding and utilisation of clinical information."

Professor Reza Razavi, London Medical Imaging and AI Centre for Value-Based Healthcare Director, said:

"The additional funding will enable the London Medical Imaging and AI Centre for Value-Based Healthcare to continue its mission to spearhead innovations that will have a significant impact on our patients and the wider NHS.

"Artificial intelligence technology provides significant opportunities to improve diagnostics and therapies as well as reduce administrative costs. With machine learning, we can use existing data to help clinicians better predict when disease will occur, diagnosing and treating it earlier and personalising treatments, which will be less resource-intensive and provide better health outcomes for our patients."

The centres benefiting from the funding are the National Pathology Imaging Co-operative, led from Leeds; University Hospitals Coventry and Warwickshire NHS Trust; and the London Medical Imaging and AI Centre for Value-Based Healthcare.

Alongside the clinical improvements, this investment supports the UK's long-term response to COVID-19, contributing to the government's aim of building a British diagnostics industry at scale. The funding will support the UK's artificial intelligence and technology industries by allowing the centres to partner with new and innovative British small and medium-sized enterprises (SMEs), boosting our economic recovery from coronavirus.

As part of the delivery of the government's Data to Early Diagnosis and Precision Medicine Challenge, in 2018 the Department for Business, Energy and Industrial Strategy (BEIS) invested £50 million through UK Research and Innovation (UKRI) to establish 5 digital pathology and imaging AI Centres of Excellence.

The centres, located in Leeds, Oxford, Coventry, Glasgow and London, were originally selected by an Innovate UK competition run on behalf of UKRI which, to date, has leveraged over £41.5 million in industry investment. Working with their partners, the centres modernise NHS pathology and imaging services and develop new, innovative ways of using AI to speed up the diagnosis of disease.


Six Limitations of Artificial Intelligence As We Know It – Walter Bradley Center for Natural and Artificial Intelligence

The list is a selection from Bingecast: Robert J. Marks on the Limitations of Artificial Intelligence, a discussion between Larry L. Linenschmidt of the Hill Country Institute and Walter Bradley Center director Robert J. Marks. The focus is on why we mistakenly attribute understanding and creativity to computers. The interview was originally published by the Hill Country Institute and is reproduced with thanks.

Here is a partial transcript, listing six limits of AI as we know it: (The Show Notes, Additional Resources, and a link to the full transcript are below.)

Larry L. Linenschmidt: When I read the term "classical computer," how does a computer function? Let's build on that to talk about supercomputers and kind of build a foundation of how these things work, so we can then talk about the theory of AI and what it is and what it isn't.

Robert J. Marks: One of the things that we can identify that humans can do that computers can't do are things which are non-algorithmic. If it's non-algorithmic, it means it's non-computable. Actually, Alan Turing showed back in his initial work that there were things which were not algorithmic. It's very difficult, for example, to write a computer program to analyze another computer program. Turing showed that specific instantiations of that were non-algorithmic. This is something which is taught to freshman computer science students, so they know what algorithmic and non-algorithmic/non-computable mean. Again, non-computable is a synonym for non-algorithmic.

We have a number of aspects that are non-algorithmic. I would say creativity, sentience, and consciousness are probably things that you cannot write a computer program to simulate.

Note: The film The Imitation Game (2014) dramatizes the way Turing led a team that broke the Nazis' unbreakable code, Enigma, during World War II, using pioneer computing techniques.

Robert J. Marks: Basically, Turing showed that computers were limited by something called algorithms, and we hear about algorithms a lot: such and such is doing an algorithm, Facebook has initiated an algorithm to do something. The question is, what is an algorithm?

The algorithm is simply a step-by-step procedure to accomplish something. If you go to your shampoo bottle and you look at the back and it says, "Wet hair, apply shampoo, rinse, and then repeat," that's an algorithm, because it tells you the step-by-step procedure that you need to wash your hair.

Larry L. Linenschmidt: Well, that's a pretty short algorithm for me since I don't have much hair, but go right ahead.

Robert J. Marks: Isn't that right? Well, the interesting thing about that algorithm is if you gave that to a computer, that computer would wash its hair forever, because it doesn't say repeat once, it just says repeat…

An algorithm I like to think of as a recipe. If you look at the recipe for baking a vanilla coconut cake, for example, it will tell you the ingredients that you need and then it will give you a step-by-step procedure for doing it. That is what an algorithm is and, in fact, it is what computers are limited to do. Computers are only able to perform algorithms.
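Note: A minimal sketch in Python (with stub wash-step functions invented for illustration) makes the point literal: the instructions as printed never terminate, and a usable algorithm has to state its stopping condition.

    def wet_hair(): print("wet hair")
    def apply_shampoo(): print("apply shampoo")
    def rinse(): print("rinse")

    def shampoo_as_written():
        # "Wet hair, apply shampoo, rinse, repeat": no stopping
        # condition, so a literal-minded computer loops forever.
        while True:
            wet_hair(); apply_shampoo(); rinse()

    def shampoo_fixed(times=2):
        # A complete algorithm says exactly when to stop.
        for _ in range(times):
            wet_hair(); apply_shampoo(); rinse()

    shampoo_fixed(times=1)  # washes once and halts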

Note: Have a look at Things exist that are unknowable: A tutorial on Chaitin's number by Robert J. Marks, for some sense of the limits of knowledge that computers will not transcend.

Larry L. Linenschmidt: I have a cellphone that I understand has more power than a room full of computers 50 years ago that Army intelligence used. A massive increase in computing capability, isnt there?

Robert J. Marks: Yes there is, but by increasing the speed and using parallel computers, we have just increased the speed of the computers. There is a principle taught to computer scientists called the Church-Turing Thesis, which basically says that Alan Turing's original machine could also do what the computers today do. The only thing that computers today do differently is do things a lot faster. That is really good, that is very useful, but in terms of the ability of the computer, they are still restricted to algorithms. I'm not sure if you've ever heard of the quantum computer…

Larry L. Linenschmidt: Yes.

Robert J. Marks (pictured): Which is kind of the new rage, where you use this strange, weird world of quantum physics in order to get computational results. Even quantum computing is algorithmic and is constrained by the Church-Turing Thesis. With quantum computers, we're going to be doing these things like lightning, but still, all of the stuff we could do, we could do with Turing's original machine. Now, with Turing's original machine, it might take us a trillion years in order to do it compared to today, but nevertheless, the capability is there in Turing's original machine. We're just getting faster and faster, and we can do more interesting things because of that speed.

Note: You may also wish to read Google vs. IBM?: Quantum supremacy isn't the big fix anyway. If human thought is a halting oracle, then even quantum computing will not allow us to replicate human intelligence (Eric Holloway).

Larry L. Linenschmidt: One of the things we talked about earlier were algorithms and what computers can do and some things that maybe they cant do. What are the things that maybe computers will never be able to do?

Robert J. Marks: Well, I think maybe the biggest testable thing that computers will never be able to do is creativity. Computers can only take the data which they've been presented and interpolate. They can't, if you will, think outside of the box. If you look at the history of creativity, great scientists like Galileo and Einstein actually had to take the data that they were given, discard it, and come up with something which was brand new. It wasn't just a reshuffling of the status quo, which is basically what a computer can do; it was actually a creative act outside of the available data.

Note: Typical claims for computer-generated art, music, or copywriting involve combining masses of similar material and producing many composites, the most comprehensible of which are chosen by the programmers for publication. The conventional test of computer intelligence, the Turing test, measures only whether a computer can fool a human under certain circumstances. The Lovelace test, which searches for actual creativity, is not much used and has not been passed.

Robert J. Marks: Qualia is kind of the subjective experience that one has of oneself. Imagine, for example, having a big, delicious red apple and you anticipate taking a bite out of it. You take the bite, you feel the crispness, you feel the tart sweetness, you feel the crunch as you chew it and swallow it. That is an experience, and the question is, do you think you could ever write an algorithm to explain that qualia experience to a computer? I don't think so. I think that that is something which is unique to the human being…

John Searle was a philosopher and he said that there is no way that a computer understands anything. He illustrated this with the Chinese room: the basic idea was, you slipped a little slip of paper with something written in Chinese through a little slot. Inside the room, somebody picked it up and they looked at it and they wanted to translate it to something, say, like Portuguese.

There's a big bunch of file cabinets in the room. The person in the room took this little sheet, looked through all of the file cabinets, and finally found something that matched the little sheet. He took the little translation in Portuguese, wrote it down, refiled the original things, went to the door and slipped out the translation into Portuguese.

Now, externally, the person would say, "My gosh, this guy knows Chinese, he knows Portuguese. This computer is really, really smart." Internally, the guy that was actually going through the file cabinets, doing the pattern matching in order to find out what the translation was, had no idea what Chinese was, had no idea what Portuguese was. He was just following a bunch of instructions.
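Note: The person in the room is easy to simulate in a few lines of Python; the three-entry table below is invented for illustration. The lookup "translates" correctly while understanding nothing about either language.

    # The "file cabinets": a pure pattern-matching table. Nothing in it
    # understands Chinese or Portuguese; it only matches and copies.
    file_cabinets = {
        "你好": "Olá",       # hello
        "谢谢": "Obrigado",  # thank you
        "再见": "Adeus",     # goodbye
    }

    def person_in_the_room(slip):
        # Find the matching file and hand back whatever is filed beside it.
        return file_cabinets.get(slip, "(no matching file)")

    print(person_in_the_room("谢谢"))  # -> Obrigado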

Larry L. Linenschmidt: The computer processes; it turns out work product based on how it's directed. But in terms of understanding, as we think of understanding, the way you would expect one of your students to understand what you're teaching, they don't understand. They compute. They process data. Is that a fair way of putting it?

Robert J. Marks: Consider the world champions at Jeopardy. If you think about it, that's just a big Chinese room. You have all of Wikipedia and all of the internet available to you and you're given some sort of question on Jeopardy and you have to get the answer. Watson beating the world champions in Jeopardy is exactly an example of a Chinese room, except the room is a lot bigger, because computers are a lot faster and can do a lot better.

Note: A mistake Watson made playing Jeopardy illustrates the limitations: Why did Watson think Toronto was in the U.S.A.? How that happened tells us a lot about what AI can and can't do, to this day. Hint: Assessment of the relevance of possible clues may not be the greatest strength of a Watson-type program.

Larry L. Linenschmidt (pictured): Well, there's one other game example that comes up quite a bit in the literature, and that's the game Go. Apparently Go is the most complicated game, and a computer did very well. Is that just an extension of the same idea, that it was able to match possible outcomes and evaluate the best of those? Or what? How do you look at that?

Robert J. Marks: Go was a remarkable computer achievement. I don't want to derogate this at all. They used a concept called reinforcement learning, and this reinforcement learning was used in chess and Go. It was actually used to win the old arcade games where, just by looking at the pixels in an arcade game such as Pac-Man, for example, the computer could learn how to win. Now, in all of these cases, of course, there was the concept of the rules. You've got to know the rules. The fact that Go was mastered by the program is an incredible accomplishment of computer science. However, notice that the computer is doing exactly what it was programmed to do. It was programmed to play Go, and Go is a very narrow application of artificial intelligence.

I would be impressed if the computer program would pass something called the Lovelace test, which is the test that computer programs are given to test their creativity. The Lovelace test basically says that you have seen creativity if the computer program does something that can't be explained by the programmers. Now, you might get some surprising results. There were some surprising results when AlphaGo played the master, but surprising doesn't count. It's still in the game of Go. If AlphaGo had gone on to do something like (let me make the point by exaggeration) giving you investment advice or forecasting the weather without additional programming, that would be an example of AI creativity…

Algorithms in computers are the result of human creativity. That is not a controversial viewpoint. The current CEO of Microsoft, Satya Nadella, says the same thing. He says that, "Look, computers are never going to be creative. Creativity will always be a domain of the programmer."

Note: Creativity does not follow computational rules provides a look at the concept. Philosopher Sean Dorrance Kelly muses on why machines are not creative.

Larry L. Linenschmidt: Well, let me ask the question about AI a little bit differently. Self-learning: a computer teaching itself to do something different, in a way that the programmer did not foresee. There's a program called Deep Patient, a way of managing information on the medical side, and a couple of other programs that I read about, and they solved the problem, but they aren't doing it in a way that the developer of the network can explain. Now, does that imply that there's a learnability going on in there? Some way that they're doing it? Or is everything that they're doing, even if it's not fully understood by the developer, still subject to the way that the developer set up the network?

Robert J. Marks: Well, one of the things we have to differentiate here is the difference between surprise and creativity. I have certainly written computer programs that have the element of surprise in them. I look at them and I say, "Wow, look at what it's doing," but then I look at the program and say, "Yeah, this was one of the solutions that I considered." One of the ideas, especially in computer search, is to lay out thousands, maybe millions or billions of potential different solutions, and you don't know what the effect of those solutions is going to be. It would be almost like putting out a bunch of different recipes for cake. You have different amounts of batter, different amounts of milk, a number of different eggs, the amount of oil that you put in, et cetera, and what you want to do is figure out which one is best.
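Note: That recipe search is easy to sketch in Python. The scoring function below is a toy invented for illustration; the point is that whatever "best" recipe the search returns, it is always one of the candidates the programmer enumerated, which is exactly the distinction between surprise and creativity.

    import random

    def bake_score(eggs, milk_ml, oil_ml):
        # Toy stand-in for "how well did the cake turn out?"
        return -(abs(eggs - 3) + abs(milk_ml - 250) / 50 + abs(oil_ml - 60) / 20)

    random.seed(0)
    best, best_score = None, float("-inf")
    for _ in range(10_000):
        # Sample one candidate from the space the programmer laid out.
        candidate = (random.randint(1, 6),            # eggs
                     random.randrange(100, 501, 25),  # milk (ml)
                     random.randrange(20, 121, 5))    # oil (ml)
        score = bake_score(*candidate)
        if score > best_score:
            best, best_score = candidate, score

    # The winner may be surprising, but it can only ever be one of the
    # combinations enumerated above.
    print("best recipe found:", best)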

If you have no domain expertise, if you want to walk around in the search space and try to find the best combination, you might get something which is totally unexpected. We did something in swarm intelligence, which is modeling social insects. We actually applied evolutionary computing, which is an area in electrical engineering, and we evolved dweebs. It was a predator-prey sort of problem: our prey was the dweebs and our predator was the bullies, and the bullies would chase around the dweebs. We would evolve the swarm and try to figure out the best way for the dweeb colony to survive the longest. The result that we got was astonishing and very surprising.

What happened was that the dweebs learned self-sacrifice. One dweeb would run around the playground and be chased by the bullies and sacrifice himself (I guess dweebs are males, because I said himself). The bullies would kill that dweeb, and then other dweebs would come out, one by one, and sacrifice themselves in the same way. By using up all of the time in order to survive, the colony of dweebs survived for a very, very long time, which was exactly what we told it to do.

Now, once we looked at that, we were surprised by the result, but we looked back at the code and we said, "Yeah, of these thousands, millions of different solutions that we proposed, we see how this one gave us the surprise." Surprise can't be confused with creativity. If the surprise is something which is consequent to what the programmer decided to program, then it really isn't creativity. The program has just found one of those millions of solutions that works really well in, possibly, a surprising manner.

Larry L. Linenschmidt: As you're explaining it, I'm thinking that a computer is as good as its programmer: it's good at matching, it's good at putting things together. But true creativity is what the entrepreneur Peter Thiel refers to when he says a lot of people can take us from one to infinity, but it's that zero to one, that creativity in the tech world, in the business world, that sets us apart. A computer can't take us from zero to one. It needs instructions, doesn't it?

Robert J. Marks: It does, and in his book, Zero to One, Thiel talks about the requirement of creativity. His philosophy is parallel to that of some other people, Jay Richards, for example, and George Gilder, who look at business in a very different way from those who see it as a Darwinian competition. They say, "No, what drives entrepreneurs is creativity. You come up with a new idea like a PayPal or a Facebook or an Uber."

That creativity in business is never going to come from a computer. A computer would have never come up with the idea of Uber unless the programmer programmed it to look in a set of different things. That was something which was creative which was above and beyond the algorithmic

Larry L. Linenschmidt: Yes. Jay Richards' book The Human Advantage: The Future of American Work in an Age of Smart Machines has countless examples of entrepreneurs seeing a need and then filling that need. It's totally against the idea that capitalism is just about greed. He made the case that capitalism, or free market enterprise, is really altruistic, that the best entrepreneurs actually fill a need. That's reality, isn't it?

Robert J. Marks: Yes it is, yes it is.

You may also enjoy earlier conversations between Robert J. Marks and Larry L. Linenschmidt:

Why we don't think like computers: If we thought like computers, we would repeat package directions over and over again unless someone told us to stop.

and

What did the computer learn in the Chinese room? Nothing. Computers don't understand things and they can't handle ambiguity, says Robert J. Marks.



Artificial Intelligence Emerges as the Superhero of Tech Era – Analytics Insight

Artificial Intelligence (AI) has changed the world for the better. AI and robotics have existed in fictional stories and movies for a very long time, portrayed in both good and bad lights. However, in the tech era, AI is unfolding its features to be a lifesaver.

Everyone seems to be suddenly interested in AI. It changes the face of the products and services it involves. AI has worked as a push-up element in various sectors including business, marketing, agriculture, banking, etc. There is no industry that doesn't have the touch of AI. Somehow, everyone is a beneficiary of the emerging technology.

The recent trend in AI is quite different from how it has worked so far. Scientists and researchers are finding ways to improve AI to a place where it can save human lives. Set aside all those soap operas and movies where you saw robots and AI enslaving humans; that is far beyond reality for now. Instead, AI applications are being designed to help humans live safely.

Autonomous vehicles to impede accidents

Vehicles are a part of every human's journey. No one can imagine a world without vehicles. They were one of the beginnings of the evolution of technology and mechanism. Today, we have far more advanced vehicles on the road. Still, technological growth couldn't save human lives.

According to a report, around 1.35 million people are killed on roadways around the world every year. Each day, 3,700 people are killed globally in road traffic crashes involving cars, buses, motorcycles, bicycles, trucks or pedestrians. The most vulnerable are pedestrians, bicyclists and motorcyclists.

It is too late to sensitize people to minimize the use of vehicles or follow traffic instructions properly. The cat is out of the bag already. Therefore, AI can be used to tackle the situation. The introduction of autonomous vehicles is a revolution for the mechanical industry. These vehicles' main concern is how people can share the road safely without hurting each other. Autonomous vehicles have computer vision attached, with which they can detect and prevent accidents. But these vehicles are yet to hit the road.

AI applications are acting as accident preventers. An application named !important is designed to minimize the risk of accidents with all certified connected vehicles, including cars, trucks, buses, autonomous vehicles and construction equipment. It can even feature drones. The app creates a virtual protection zone around pedestrians, wheelchair users, cyclists and motorcyclists using their devices. If a connected vehicle comes near a user protected by !important, its brakes are triggered automatically.
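The mechanism described, a virtual protection zone that triggers braking, reduces to a proximity test between GPS fixes. A hypothetical sketch in Python (the 30-metre radius and all names are invented for illustration, not the !important app's actual interface):

    import math

    EARTH_RADIUS_M = 6_371_000
    PROTECTION_ZONE_M = 30  # hypothetical zone radius around a pedestrian

    def haversine_m(lat1, lon1, lat2, lon2):
        # Great-circle distance in metres between two GPS fixes.
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

    def should_brake(vehicle_fix, pedestrian_fix):
        # Trigger the brakes when the vehicle enters the protection zone.
        return haversine_m(*vehicle_fix, *pedestrian_fix) < PROTECTION_ZONE_M

    print(should_brake((51.5007, -0.1246), (51.5008, -0.1247)))  # ~13 m -> True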

Health applications to detect medical conditions

At a time when the pandemic is infecting and killing millions of people, it is unsafe to have human-to-human contact. Even doctors and frontline workers are at high risk of contracting the disease despite PPE gear. Hospitals and medical institutions are searching for ways to minimize the human hand in the helping system and eventually provide good healthcare facilities through AI.

Catalyst.ai and healthcare.ai, designed by Health Catalyst, are some of the life-saving applications developed using AI. With their machine learning technology, the applications can identify patients at great risk of readmission and provide clinicians with guidelines to address their problems. The applications have also helped prevent hospital-acquired infections and chronic diseases, and reduce mortality rates. A hospital in Israel is testing smart hospital rooms that could save patients' lives as well as those of doctors and nurses. The technology keeps patients away from health workers with the use of AI-powered robots, virtual reality glasses and early warnings.

Doctors are wise enough to find the ailment when they examine a patient. But AI applications can detect diseases and emergencies as early as the ambulance dispatch. Corti, an application built to understand the medical condition of patients, is an AI-enabled system that can identify cardiac arrest.

The voice-based digital assistant attends emergency calls and listens to the patient's complaints to determine the medical condition. The application has decreased undetected cardiac arrest cases by 43% through its features. The company is currently working on making the application detect other ailments.

Collated data for drug production

Following the detection of a health issue comes treatment. No disease can be cured without proper medical attention and drugs. If finding the disease is an improved AI task, so is coming up with the right drugs for the ailment. It is very important to be choosy and particular while prescribing a drug to a patient, as drugs involve side effects that can lead to other health risks.

Okwin is designing pharmaceutical solutions through AI-powered medical research and development. The company uses machine-learning algorithms to create models designed to predict disease evolution, improve treatment and enhance the way drugs are developed for the diseases. Okwin obtains data from hospital partners to find ways to improve drugs more quickly, with fewer side effects for patients.

Image recognition software to track traffickers

Human life threats are not always health-related. According to a UN report from 2016, around 63,000 people fall victim to human trafficking in a year. Human trafficking is a global issue. Countries and governments are trying to keep human trafficking under control. But it is not an easy job, as human traffickers operate in the shadows without anyone's notice. Women and children are the most vulnerable to trafficking.

AI stands as a rescue operation for the surging human trafficking problem. Delta 8.7 is an organization that applies AI and computational science to track and stop human traffickers. The organization uses AI-based image recognition to track both the criminals and the victims.

Despite facing a pandemic and losing millions of lives, humans still believe that life is invaluable. Applications featuring artificial intelligence have found solutions to help prevent road accidents, provide healthcare and minimize human trafficking. AI is giving humans a chance to live a safe and happy life.


Artificial Intelligence Identifies 80,000 Spiral Galaxies – Promises More Astronomical Discoveries in the Future – SciTechDaily

Conceptual illustration of how artificial intelligence classifies various types of galaxies according to their morphologies. Credit: NAOJ/HSC-SSP

Astronomers have applied artificial intelligence (AI) to ultra-wide field-of-view images of the distant Universe captured by the Subaru Telescope, and have achieved a very high accuracy for finding and classifying spiral galaxies in those images. This technique, in combination with citizen science, is expected to yield further discoveries in the future.

A research group, consisting of astronomers mainly from the National Astronomical Observatory of Japan (NAOJ), applied a deep-learning technique, a type of AI, to classify galaxies in a large dataset of images obtained with the Subaru Telescope. Thanks to its high sensitivity, as many as 560,000 galaxies have been detected in the images. It would be extremely difficult to visually process this large number of galaxies one by one with human eyes for morphological classification. The AI enabled the team to perform the processing without human intervention.

Automated processing techniques for extraction and judgment of features with deep-learning algorithms have been rapidly developed since 2012. Now they usually surpass humans in terms of accuracy and are used for autonomous vehicles, security cameras, and many other applications. Dr. Ken-ichi Tadaki, a Project Assistant Professor at NAOJ, came up with the idea that if AI can classify images of cats and dogs, it should be able to distinguish galaxies with spiral patterns from galaxies without spiral patterns. Indeed, using training data prepared by humans, the AI successfully classified the galaxy morphologies with an accuracy of 97.5%. Then applying the trained AI to the full data set, it identified spirals in about 80,000 galaxies.
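For a sense of what such a classifier looks like, here is a minimal sketch of a generic small convolutional network in PyTorch, assuming 64x64 single-channel galaxy cutouts; it is not the NAOJ team's actual architecture, and the reported 97.5% accuracy is the study's own result:

    import torch
    import torch.nn as nn

    class SpiralClassifier(nn.Module):
        # Two conv/pool blocks, then a linear head that outputs the
        # probability that a galaxy cutout shows a spiral pattern.
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.head = nn.Sequential(
                nn.Flatten(), nn.Linear(32 * 16 * 16, 1), nn.Sigmoid()
            )

        def forward(self, x):
            return self.head(self.features(x))

    model = SpiralClassifier()
    cutouts = torch.randn(8, 1, 64, 64)  # a batch of 8 image cutouts
    p_spiral = model(cutouts)            # probabilities in [0, 1]
    print(p_spiral.shape)                # torch.Size([8, 1])

Trained with a binary cross-entropy loss on human-labelled examples, a network of roughly this shape learns the spiral/non-spiral decision the article describes.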

Now that this technique has been proven effective, it can be extended to classify galaxies into more detailed classes, by training the AI on the basis of a substantial number of galaxies classified by humans. NAOJ is now running a citizen-science project, GALAXY CRUISE, where citizens examine galaxy images taken with the Subaru Telescope to search for features suggesting that a galaxy is colliding or merging with another galaxy. The advisor of GALAXY CRUISE, Associate Professor Masayuki Tanaka, has high hopes for the study of galaxies using artificial intelligence and says, "The Subaru Strategic Program is serious Big Data containing an almost countless number of galaxies. Scientifically, it is very interesting to tackle such big data with a collaboration of citizen astronomers and machines. By employing deep learning on top of the classifications made by citizen scientists in GALAXY CRUISE, chances are, we can find a great number of colliding and merging galaxies."

Reference: "Spin Parity of Spiral Galaxies II: A catalogue of 80k spiral galaxies using big data from the Subaru Hyper Suprime-Cam Survey and deep learning" by Ken-ichi Tadaki, Masanori Iye, Hideya Fukumoto, Masao Hayashi, Cristian E. Rusu, Rhythm Shimakawa and Tomoka Tosaki, 2 July 2020, Monthly Notices of the Royal Astronomical Society. DOI: 10.1093/mnras/staa1880


Artificial Intelligence Is Here To Calm Your Road Rage – TIME

I am behind the wheel of a Nissan Leaf, circling a parking lot, trying not to let the day's nagging worries and checklists distract me to the point of imperiling pedestrians. Like all drivers, I am unwittingly communicating my stress to this vehicle in countless subtle ways: the strength of my grip on the steering wheel, the slight expansion of my back against the seat as I breathe, the things I mutter to myself as I pilot around cars and distracted pedestrians checking their phones in the parking lot.

"Hello, Corinne," a calm voice says from the audio system. "What's stressing you out right now?"

The conversation that ensues offers a window into the ways in which artificial intelligence could transform our experience behind the wheel: not by driving the car for us, but by taking better care of us as we drive.

Before coronavirus drastically altered our routines, three-quarters of U.S. workers (some 118 million people) commuted to the office alone in a car. From 2009 to 2019, Americans added an average of two minutes to their commute each way, according to U.S. Census data. That negligible daily average is driven by a sharp increase in the number of people making "super commutes" of 90 minutes or more each way, a population that increased 32% from 2005 to 2017. The long-term impact of COVID-19 on commuting isn't clear, but former transit riders who opt to drive instead of crowding into buses or subway cars may well make up for car commuters who skip at least some of their daily drives and work from home instead.

Longer commutes are associated with increased physical health risks like high blood pressure, obesity, stroke and sleep disorders. A 2017 research project at the University of the West of England found that every extra minute of the survey respondents commutes correlated with lower job and leisure time satisfaction. Adding 20 minutes to a commute, researchers found, has the same depressing effect on job satisfaction as a 19% pay cut.

Switching modes of transit can offer some relief: people who walk, bike or take trains to work tend to be happier commuters than those who drive (and, as a University of Amsterdam study recently found, they tend to miss their commute more during lockdown). But reliable public transit is not universally available, nor are decent jobs always close to affordable housing.

Technology has long promised that an imminent solution is right around the corner: self-driving cars. In the near future, tech companies claim, humans won't drive so much as be ferried about by fully autonomous cars that will navigate safely and efficiently to their destinations, leaving the people inside free to sleep, work or relax as easily as if they were on their own couch. A commute might be a lot less stressful if you could nap the whole way there, or get lost in a book or Netflix series without having to worry about exits or collisions.

In 2012, Google executives went on the record claiming self-driving cars would be widely available within five years; they said the same thing again in 2015. Elon Musk throws out ship dates for fully autonomous Teslas as often as doomsday cult leaders reschedule the end of the world. Yet these forecasted utopias have still not arrived.

The majority of carmakers have walked back their most ambitious estimates. It will likely be decades before such cars are a reality for even a majority of drivers. In the meantime, the car commute remains a big, unpleasant, unhacked chunk of time in millions of Americans' daily lives.

A smaller and less heralded group of researchers is working on how cars can make us happier while we drive them. It may be decades before artificial intelligence can completely take over piloting our vehicles. In the short run, however, it may be able to make us happier (and healthier) pilots.

Lane changes, left turns, four-way stops and the like are governed by rules, but also rely on drivers making on-the-spot judgments with potentially deadly consequences. These are also the moments where driver stress spikes.

Many smart car features currently on the market give drivers data that assist with these decisions, like sensors that alert them when cars are in their blind spots or their vehicle is drifting out of its lane.

Another thing that causes drivers stress is uncertainty. One 2015 study found commuters who drove themselves to work were more stressed by the journey than were transit riders or other commuters, largely because of the inconsistency that accidents, roadwork and other traffic snarls caused in their schedules. But even if we can't control the variables that affect a commute, we're calmer if we can at least anticipate them; hence the popularity of real-time arrival screens at subway and bus stops.

The Beaverton, Ore.-based company Traffic Technology Services (TTS) makes a product called the Personal Signal Assistant, a platform that enables cars to communicate with traffic signals in areas where that data is publicly available. TTS's first client, Audi, used the system to build a tool that counts down the remaining seconds of a red light (visually, on the dashboard) when a car is stopped at one, and suggests speed modifications as the car approaches a green light. The tool was designed to keep traffic flowing: no more honking at distracted drivers who don't notice the light has turned green. But users also reported a marked decrease in stress. At the moment, the technology works in 26 North American metropolitan areas and two cities in Europe.
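The speed suggestion itself is simple arithmetic once the car knows its distance to the signal and the seconds left in the current phase. A sketch under stated assumptions (the function and its thresholds are invented; a real system must also respect safety margins):

    def advisory_speed_kmh(distance_m, seconds_to_green, limit_kmh=50.0):
        # Speed that reaches the stop line just as the light turns green,
        # capped at the limit; None means it is better to coast to a stop.
        if seconds_to_green <= 0:
            return limit_kmh
        v_kmh = (distance_m / seconds_to_green) * 3.6
        if v_kmh < 10.0:
            return None
        return min(v_kmh, limit_kmh)

    # 300 m from a light that turns green in 30 s:
    print(advisory_speed_kmh(300, 30))  # 36.0 km/h keeps the car rolling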

TTS has 60 full- and part-time employees in the U.S. and Germany, and recently partnered with Lamborghini, Bentley and a handful of corporate clients. Yet CEO Thomas Bauer says it can be hard to interest investors in technologies that focus on improving human drivers' experience instead of just rendering them obsolete. "We certainly don't draw the same excitement with investors as [companies focused on] autonomous driving," Bauer says. "What we do is not quite as exciting because it doesn't take the driver out of the picture just yet."

Pablo Paredes, a clinical assistant professor of psychiatry and behavioral sciences at the Stanford School of Medicine, is the director of the school's Pervasive Wellbeing Technology Lab. Situated in a corner of a cavernous Palo Alto, Calif., office building that used to be the headquarters of the defunct health-technology company Theranos, the lab looks for ways to rejigger the habits and objects people use in their everyday lives to improve mental and physical health. Team members don't have to look far for reminders of what happens when grandiose promises aren't backed up by data: Theranos' circular logo is still inlaid in brass in the building's marble-floored atrium.

It can be hard to tell the lab's experiments from its standard-issue office furniture. To overcome the inertia that often leads users of adjustable-height desks to sit more often than stand, one of the workstations in the team's cluster of cubicles has been outfitted with a sensor and mechanical nodule that make it rise and lower at preset intervals, smoothly enough that a cup of coffee won't spill. In early trials, users particularly absorbed in their work just kept typing as the desk rose, slowly standing up along with it.

But the millions of hours consumed in the U.S. each day by the daily drive to work hold special fascination for Paredes. He's drawn to the challenge of transforming a part of the day generally thought of as detrimental to health into something therapeutic. "The commute for me is the big elephant in the room," he says. "There are very simple things that we're overlooking in normal life that can be greatly improved and really repurposed to help a lot of people."

In a 2018 study, Paredes and his colleagues found that it's possible to infer a driver's muscle tension (a proxy for stress) from the movement of their hands on a car's steering wheel. They're now experimenting with cameras that detect neck tension by noting the subtle changes in the angle of a driver's head as it bobs with the car's movements.
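The underlying intuition can be sketched in a few lines: tense drivers tend to make more high-frequency micro-corrections at the wheel, so the variance of recent steering-angle changes can serve as a crude tension score. The following is a rolling-variance toy under that assumption, not the Stanford team's published method.

```python
# Toy steering-tension score: variance of recent steering-angle deltas.
# A stand-in illustration, not the lab's actual model.
import statistics

def steering_tension_score(wheel_angles: list[float], window: int = 30) -> float:
    """Variance of steering-angle changes over the last `window` samples."""
    deltas = [b - a for a, b in zip(wheel_angles, wheel_angles[1:])]
    recent = deltas[-window:]
    return statistics.pvariance(recent) if len(recent) > 1 else 0.0

calm  = [0.0, 0.1, 0.1, 0.2, 0.2, 0.3]    # smooth, gradual inputs
tense = [0.0, 1.2, -0.8, 1.5, -1.1, 0.9]  # jittery micro-corrections
print(steering_tension_score(calm) < steering_tension_score(tense))  # True
```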

The flagship of the team's mindful-commuting project is the silver-colored Nissan Leaf in their parking lot. The factory-standard electric vehicle has been tricked out with a suite of technologies designed to work together to decrease a driver's stress.

On a test drive earlier this year, a chatbot speaking through the car's audio system offered me the option of engaging in a guided breathing exercise. When I verbally agreed, the driver's seatback began vibrating at intervals, while the voice instructed me to breathe along with its rhythm.
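A paced-breathing cue of this kind is simple to outline in code. The sketch below pulses during an inhale phase and rests during an exhale phase at a slow target breathing rate; the hardware calls are stubbed with print(), and the real system's timings and actuation are not public.

```python
# Minimal paced-breathing cue, loosely modeled on the vibrating seatback
# described above. Timings and the 40/60 split are assumptions.
import time

def guided_breathing(minutes: float = 2, breaths_per_min: float = 6) -> None:
    cycle = 60.0 / breaths_per_min             # seconds per full breath
    inhale, exhale = 0.4 * cycle, 0.6 * cycle  # common slow-breathing split
    for _ in range(int(minutes * breaths_per_min)):
        print("seat vibrating: inhale")        # stub: seat motor on
        time.sleep(inhale)
        print("seat still: exhale")            # stub: seat motor off
        time.sleep(exhale)
```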

The lab published the results of a small study earlier this year showing that the seat-guided exercise reduced driver stress and breathing rates without impairing performance. They are now experimenting with a second vibrating system to see if lower-frequency vibrations could be used to slow breathing rates (and therefore stress) without any conscious effort on the driver's part.

The goal, eventually, is a mass-market car that can detect an elevation in a driver's stress level, via seat and steering-wheel sensors or the neck-tension cameras. It would then automatically engage the calming-breath exercise, or talk through a problem or tell a joke to ease tension, using scripts developed with the input of cognitive behavioral therapists.
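The trigger logic the lab is aiming for might look something like the following, with sensor readings fused into one stress score and an intervention chosen when it crosses a threshold. The scores, thresholds, and intervention names here are all invented for illustration.

```python
# Hedged sketch of threshold-triggered interventions; values are invented.

def choose_intervention(stress_score: float) -> str | None:
    """Map a 0-1 stress score to an intervention, or None if the driver seems calm."""
    if stress_score < 0.7:
        return None                  # below threshold: do nothing
    if stress_score < 0.85:
        return "breathing_exercise"  # mild elevation: seat-guided breathing
    return "cbt_chat"                # higher elevation: scripted talk-through

print(choose_intervention(0.9))  # -> cbt_chat
```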

These technologies have value even as cars' autonomous capabilities advance, Paredes says. Even if a car is fully self-driving, the human inside will still often be a captive audience of one, encased in a private space with private worries and fears.

Smarter technologies alone aren't the solution to commuters' problems. The auto industry has a long history of raising drivers' tolerance for long commutes by making cars more comfortable and attractive places to be, all the while promising a better driving experience that's just around the corner, says Peter Norton, an associate professor of science, technology, and society at the University of Virginia and author of Fighting Traffic: The Dawn of the Motor Age in the American City. From his perspective, stress-busting seats would join radios and air conditioners as distractions from bigger discussions about planning, transit and growing inequality, all of which could offer much more value to commuters than a nicer car.

In addition, how long it will be before these latest features become widely available options is an open question. Paredes' lab had to suspend work during the pandemic, as it's hard to maintain social distancing while working inside a compact sedan. TTS is in talks to expand its offerings to other automakers, and Paredes has filed patents on some of his lab's inventions. But just because a technology is relatively easy to integrate into a car doesn't mean it will be standard soon. The first commercially available backup cameras came on the market in 1991. Despite their effectiveness in reducing collisions, only 24% of cars on the road had them by 2016, according to the Insurance Institute for Highway Safety, and most were newer luxury vehicles. (The cameras are now required by law in all new vehicles.)

These technologies also raise new questions of inequality and exploitation. It's one thing for a commuter to opt for a seat that calms them down after a tough day. But if you drive for a living, should the company that owns your vehicle have the right to insist that you use a seat cover that elevates your breath rate and keeps you alert at the wheel? Who owns the health data your car collects, and who gets to access it? All of the unanswered questions that self-driving technologies raise apply to self-soothing technologies as well.

Back in Palo Alto, the pandemic still weeks away, I am piloting the Leaf around the parking lot with a member of the lab gamely along for the ride in the back. The chatbot asks again what's stressing me out. "I have a deadline," I say, "for a magazine article about cars and artificial intelligence."

The bot asks if this problem is significantly affecting my life (not really), if I've encountered something similar before (yep), if previous strategies could be adapted to this scenario (they can) and when I'll be able to enter a plan to tackle this problem in my calendar (later, when I'm not driving). I do feel a little better. I talk to myself alone in the car all the time. It's kind of nice to have the car talk back.

"Great. I'm glad you can do something about it. By breaking down a problem into tiny steps, we can often string together a solution," the car says. "Sound good?"



See the original post:
Artificial Intelligence Is Here To Calm Your Road Rage - TIME

Worldwide Spending on Artificial Intelligence Is Expected to Double in Four Years, Reaching $110 Billion in 2024, According to New IDC Spending Guide…

FRAMINGHAM, Mass.--(BUSINESS WIRE)--Global spending on artificial intelligence (AI) is forecast to double over the next four years, growing from $50.1 billion in 2020 to more than $110 billion in 2024. According to the International Data Corporation (IDC) Worldwide Artificial Intelligence Spending Guide, spending on AI systems will accelerate over the next several years as organizations deploy artificial intelligence as part of their digital transformation efforts and to remain competitive in the digital economy. The compound annual growth rate (CAGR) for the 2019-2024 period will be 20.1%.
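The release's figures are easy to sanity-check with the standard compound-annual-growth formula; note that the quoted 20.1% applies to the five-year 2019-2024 window, not to the four-year doubling from 2020.

```python
# Quick check of the growth arithmetic: CAGR = (end / start) ** (1 / years) - 1.
start_2020, end_2024 = 50.1, 110.0  # $B, figures from the release
print(f"{(end_2024 / start_2020) ** (1 / 4) - 1:.1%}")  # -> 21.7% for 2020-2024
# The 20.1% quoted above covers 2019-2024; it implies a 2019 base of roughly
# 110 / 1.201**5, i.e. about $44B, a figure the release does not state.
```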

"Companies will adopt AI not just because they can, but because they must," said Ritu Jyoti, program vice president, Artificial Intelligence at IDC. "AI is the technology that will help businesses to be agile, innovate, and scale. The companies that become 'AI powered' will have the ability to synthesize information (using AI to convert data into information and then into knowledge), the capacity to learn (using AI to understand relationships between knowledge and apply the learning to business problems), and the capability to deliver insights at scale (using AI to support decisions and automation)."

Two of the leading drivers for AI adoption are delivering a better customer experience and helping employees to get better at their jobs. This is reflected in the leading use cases for AI, which include automated customer service agents, sales process recommendation and automation, automated threat intelligence and prevention, and IT automation. Combined, these four use cases will represent nearly a third of all AI spending this year. Some of the fastest growing use cases are automated human resources, IT automation, and pharmaceutical research and discovery.

The two industries that will spend the most on AI solutions throughout the forecast are Retail and Banking. The Retail industry will largely focus its AI investments on improving the customer experience via chatbots and recommendation engines while Banking will include spending on fraud analysis and investigation and program advisors and recommendation systems. Discrete Manufacturing, Process Manufacturing, and Healthcare will round out the top 5 industries for AI spending in 2020. The industries that will see the fastest growth in AI spending over the 2020-2024 forecast are Media, Federal/Central Government, and Professional Services.

"COVID-19 caused a slowdown in AI investments across the Transportation industry as well as the Personal and Consumer Services industry, which includes leisure and hospitality businesses. These industries will be cautious with their AI investments in 2020 as their focus will be on cost containment and revenue generation rather than innovation or digital experiences," said Andrea Minonne, senior research analyst, Customer Insights & Analysis. "On the other hand, AI has played a role in helping societies deal with large-scale disruptions caused by quarantines and lockdowns. Some European governments have partnered with AI start-ups to deploy AI solutions to monitor the outcomes of their social distancing rules and assess if the public was complying with rules. Also, hospitals across Europe are using AI to speed up COVID-19 diagnosis and testing, to provide automated remote consultations, and to optimize capacity at hospitals."

"This release of the Artificial Intelligence Spending Guide was adjusted for the impact of COVID-19," said Stacey Soohoo, research manager, Customer Insights & Analysis. "In the short term, the pandemic caused supply chain disruptions and store closures with continued impact expected to linger into 2021 and the outyears. For the most impacted industries, this has caused some delays in AI deployments. Elsewhere, enterprises have seen a silver lining in the current situation: an opportunity to become more resilient and agile in the long run. Artificial intelligence continues to be a key technology in the road to recovery for many enterprises and adopting artificial intelligence will help many to rebuild or enhance future revenue streams and operations."

Software and services will each account for a little more than one third of all AI spending this year with hardware delivering the remainder. The largest share of software spending will go to AI applications ($14.1 billion) while the largest category of services spending will be IT services ($14.5 billion). Servers ($11.2 billion) will dominate hardware spending. Software will see the fastest growth in spending over the forecast period with a five-year CAGR of 22.5%.

On a geographic basis, the United States will deliver more than half of all AI spending throughout the forecast, led by the Retail and Banking industries. Western Europe will be the second largest geographic region, led by Banking, Retail, and Discrete Manufacturing. China will be the third largest region for AI spending with State/Local Government, Banking, and Professional Services as the leading industries. The strongest spending growth over the five-year forecast will be in Japan (32.1% CAGR) and Latin America (25.1% CAGR).

The Worldwide Artificial Intelligence Spending Guide sizes spending for technologies that analyze, organize, access, and provide advisory services based on a range of unstructured information. The Spending Guide quantifies the AI opportunity by providing data for 27 use cases across 19 industries in nine regions and 32 countries. Data is also available for the related hardware, software, and services categories. This version (V2 2020) of the Spending Guide incorporates updated estimates for the impact of COVID-19 across all technology and industry markets as of the end of May 2020.

About IDC Spending Guides

IDC's Spending Guides provide a granular view of key technology markets from a regional, vertical industry, use case, buyer, and technology perspective. The spending guides are delivered via pivot table format or custom query tool, allowing the user to easily extract meaningful information about each market by viewing data trends and relationships.

Click here to learn about IDC's full suite of data products and how you can leverage them to grow your business.

About IDC

International Data Corporation (IDC) is the premier global provider of market intelligence, advisory services, and events for the information technology, telecommunications, and consumer technology markets. With more than 1,100 analysts worldwide, IDC offers global, regional, and local expertise on technology and industry opportunities and trends in over 110 countries. IDC's analysis and insight helps IT professionals, business executives, and the investment community to make fact-based technology decisions and to achieve their key business objectives. Founded in 1964, IDC is a wholly-owned subsidiary of International Data Group (IDG), the world's leading tech media, data and marketing services company. To learn more about IDC, please visit http://www.idc.com. Follow IDC on Twitter at @IDC and LinkedIn. Subscribe to the IDC Blog for industry news and insights: http://bit.ly/IDCBlog_Subscribe.

View original post here:
Worldwide Spending on Artificial Intelligence Is Expected to Double in Four Years, Reaching $110 Billion in 2024, According to New IDC Spending Guide...

Blockchain, The Bahamas, And Future Directions In Cryptocurrency Reporting – Forbes

Cryptocurrencies have now officially made a debut on the balance sheet of a central bank; could this lead to an entirely new cryptoasset reporting framework?


Recently it was discovered that the Central Bank of the Bahamas had included its newly created cryptocurrency, known as the Sand Dollar, on its balance sheet in April 2020. Although the amount listed was equivalent to only $48,000, the implications of this inclusion are profound.

This revelation comes on top of news of just how extensive blockchain work is at the Federal Reserve Bank of Boston, where over 30 blockchains are in various phases of testing and evaluation for possible implementation.

Cryptocurrencies have moved quickly from the fringe to the mainstream conversation, taking the form of decentralized finance, stablecoins, and most recently, central bank digital currencies (CBDCs). Even as the ecosystem has rapidly accelerated, however, there is still ambiguity as to how exactly different types of cryptocurrencies should be treated from a reporting and disclosure perspective.

This ambiguity exists even as industry associations and regulators, including the Association of International Certified Professional Accountants (AICPA), the Financial Stability Board (FSB), the Bank for International Settlements (BIS), and the Public Company Accounting Oversight Board (PCAOB), have begun to issue thought leadership on the subject. Despite these recent publications, in addition to the numerous opinions issued by other securities and market regulators, there is no definitive guide or framework for how cryptocurrencies should be valued and reported.

The Sand Dollar amount may have been equivalent to only $48,000, but with central banks across the globe moving quickly to develop and beta-test versions of central bank digital currencies, it is only a matter of time until the financial impact scales materially. Given that, it is worth taking a look at the prevailing opinion on reporting, as well as an alternative that might make sense for certain organizations.

Mark-to-market. Marking an asset's value to market seems, on the surface, both logical and reflective of market realities. Marking assets to current fair market value already occurs for certain financial instruments, and in some cases entire asset classes, so this is not an abstract concept. The prevailing opinion among practitioners and firms, which classifies cryptocurrencies as intangible assets, also incorporates this concept of adjusting value based on market changes.
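As a rough illustration of what full fair-value marking would mean for a crypto position, consider the toy sketch below. Note that actual US GAAP intangible-asset treatment is asymmetric (impairments are recognized, but later recoveries generally are not), so this symmetric sketch is a simplification, not guidance.

```python
# Toy illustration of full mark-to-market for a crypto position; the figures
# are hypothetical and this is not accounting guidance.

def mark_to_market(quantity: float, cost_basis: float, market_price: float):
    """Return the position's carrying value and its unrealized gain or loss."""
    carrying_value = quantity * market_price
    return carrying_value, carrying_value - quantity * cost_basis

# 10 coins acquired at $40 each, now quoted at $48.
print(mark_to_market(10, 40.0, 48.0))  # -> (480.0, 80.0)
```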

That said, the price volatility (which is not inherently good or bad) associated with cryptocurrencies, and the alt-coin market specifically, tempers enthusiasm for this approach. Price swings might make for great headlines and commentary, but they can cause headaches for merchants and individuals seeking to use cryptocurrencies as a medium of exchange. Coupled with the lack of price transparency for some thinly traded cryptoassets, marking to market might not be as simple as it appears.

Addressing the volatility issues is actually one of the primary selling points of stablecoins and CBDCs, and arguably has been a driving force behind the rapid growth and investment in these assets.

A new asset class. An alternative to what is essentially forcing a square peg into a round hole (classifying cryptocurrencies as intangibles and marking them to market) would be to create an entirely new asset class for the cryptocurrency space. That might seem like an extreme reaction to the growth of cryptocurrencies, but upon closer examination it might make more sense than it initially appears.

Taking a look at cryptocurrencies, it is relatively clear that these financial instruments do not fit neatly into any existing accounting classification. Depending on the cryptocurrency that is examined, it may or may not have characteristics of equity securities, interest paying instruments (similar to preferred stock), interest bearing deposits, intangible assets, or something akin to airline miles or reward points. Put another way, the very innovative spirit that has catapulted the cryptocurrency space to its current prominence has also led to a relatively messy reporting conversation.

By creating a new asset categorization for cryptoassets, organizations and policymakers will both have an opportunity to start fresh and to actually create reporting and disclosure standards that make sense for blockchain and cryptocurrencies. For example, cryptocurrencies could be classified by use case, or be reported depending on trading volume and market capitalization. There might even be different reporting obligations depending on what cryptocurrency is being analyzed.

Think of the following for a moment. Generally speaking, the cryptocurrency space can be broken down into three areas, especially for non-expert users. There are cryptocurrencies such as bitcoin that are completely decentralized and untethered to underlying assets. In addition, there are stablecoins that are more centralized (issued by an organization) and are connected to some external asset. Finally, there is the growing field of CBDCs issued and governed by a nation state or central bank and (most likely) connected to the fiat currency of that nation.
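A toy data model makes that three-way split concrete. The field names and categories below are purely illustrative; no standard-setter has adopted such a classification.

```python
# Illustrative taxonomy for the three buckets just described; invented names.
from dataclasses import dataclass

@dataclass
class CryptoAsset:
    name: str
    issuer: str | None                 # None for fully decentralized assets
    issuer_is_central_bank: bool = False

def classify(asset: CryptoAsset) -> str:
    if asset.issuer is None:
        return "decentralized"         # e.g. bitcoin: no issuer, no peg
    if asset.issuer_is_central_bank:
        return "CBDC"                  # state-issued, tied to fiat
    return "stablecoin"                # organization-issued, asset-pegged

print(classify(CryptoAsset("Sand Dollar", "Central Bank of The Bahamas",
                           issuer_is_central_bank=True)))  # -> CBDC
```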

Does it make sense for such radically different instruments to be categorized and treated the same?

That question should be top of mind for organizations, users, and regulators as these different iterations of cryptocurrency continue to develop, compete, and gain traction in the marketplace.

Here is the original post:
Blockchain, The Bahamas, And Future Directions In Cryptocurrency Reporting - Forbes