Artificial Intelligence is critical to organisations, but many unprepared – Workplace Insight

The State of Intelligent Enterprises report examines the current landscape, highlighting the challenges and the driving factors for businesses to become truly Intelligent Enterprises. Wipro surveyed 300 respondents in the UK and US across key industry sectors such as financial services, healthcare, technology, manufacturing, retail and consumer goods. The report highlights that while collecting data is critical, it is the ability to combine data with a host of technologies to leverage insights that creates an Intelligent Enterprise. Organisations that fast-track adoption of intelligent processes and technologies stand to gain an immediate competitive advantage over their counterparts.

Some of the key findings from the report are:

- While 80 percent of organisations recognise the importance of being intelligent, only 17 percent would classify themselves as an Intelligent Enterprise.
- 98 percent of those surveyed believe that being an Intelligent Enterprise yields benefits, the most important being improved customer experience, faster business decisions and increased organisational agility.
- 91 percent of organisations see data barriers to becoming an Intelligent Enterprise, with security, quality and seamless integration of utmost concern.
- 95 percent of business leaders surveyed see artificial intelligence as critical to being an Intelligent Enterprise, yet only 17 percent can currently leverage AI across the entire organisation.
- 74 percent of organisations consider investment in technology the most likely enabler of an Intelligent Enterprise; however, 42 percent think this must be complemented with efforts to re-skill the workforce.

Jayant Prabhu, Vice President & Head of Data, Analytics & AI, Wipro Limited, said: "Organisations now need new capabilities to navigate the current challenges. The report amplifies the opportunity to gain a first-mover advantage in becoming intelligent. The ability to take productive decisions depends on an organisation's ability to generate accurate, fast and actionable intelligence. Successful organisations are those that quickly adapt to the new technology landscape to transform into an Intelligent Enterprise."

Image by Gerd Altmann

Excerpt from:
Artificial Intelligence is critical to organisations, but many unprepared - Workplace Insight

How the VA is using artificial intelligence to improve veterans’ mental health | TheHill – The Hill

Navy veteran Lee Becker knows how hard it can be to ask for help in the military.

"I remember when I was in the military I had to talk to leaders [who] would chastise service members for getting medical support for mental health," said Becker, who served at the Navy's Bureau of Medicine and Surgery, providing care to Marines and Sailors serving in Iraq and Afghanistan.

So when he began his career at the Department of Veterans Affairs (VA) about a decade ago, he knew things needed to change. In 2017, the suicide rate for veterans was 1.5 times the rate for nonveteran adults, according to the 2019 National Veteran Suicide Prevention Annual Report, increasing the average number of veteran suicides per day to 16.8.

"The VA historically has always been in reactive mode, always caught by surprise," he said, citing the example of the lack of health care for female veterans, who are 2.2 times more likely to die by suicide than non-veteran women.

After an explosive report by the Washington Post in 2014 detailing tens of thousands of veterans waiting for care as VA employees were allegedly directed to manipulate records, some things have changed. This April, veteran trust in the VA reached 80 percent, up 19 percent since January 2017, according to the agency. And Becker, the former chief of staff for the Veterans Experience Office, is now working for Medallia, a customer experience management company, as solutions principal for public sector and health care. He helped launch the Veterans Signals program in partnership with the VA.

The program utilizes artificial intelligence systems typically used in the customer experience industry to monitor responses based on tone and language and respond immediately to at-risk veterans. About 2,800 crisis alerts have been routed to VA offices, according to Medallia, providing early intervention for more than 1,400 veterans in need within minutes of being alerted.
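
The article describes the system only at a high level. As a rough, hypothetical sketch of how tone-and-language routing might work, the snippet below scores free-text feedback for crisis language and routes alerts accordingly; the marker list, scoring rule, and routing functions are invented for illustration and are not Medallia's actual pipeline:

```python
# Minimal, hypothetical sketch of tone-based crisis routing, loosely
# modeled on the Veterans Signals idea described above. The marker list
# and scoring rule are invented illustrations, not Medallia's system.

CRISIS_MARKERS = {"hopeless", "can't go on", "end it", "no way out"}

def crisis_score(response_text: str) -> float:
    """Score free-text survey feedback for crisis language (0.0 to 1.0)."""
    text = response_text.lower()
    hits = sum(1 for marker in CRISIS_MARKERS if marker in text)
    return min(1.0, hits / 2)  # two or more markers -> maximum score

def route_response(response_text: str, veteran_id: str) -> str:
    """Route at-risk responses to the crisis line, others to normal triage."""
    if crisis_score(response_text) >= 0.5:
        return f"ALERT: route veteran {veteran_id} to crisis line"
    return f"queue veteran {veteran_id} for standard follow-up"

print(route_response("I feel hopeless, like there's no way out", "v-1001"))
```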

"If they have the ability to harness this capability so they can sell more, why can't public service agencies have the ability to serve more?" Becker asked. "It opened the aperture, making sure we really targeted the care. We were getting insights that helped anticipate future problems. We were able to identify veterans that are in crisis and route that case directly to the veterans crisis line."

Through surveys, Medallia collects customer feedback for the VA that seeks to understand veterans as customers with other identities outside of their military service. One call came from an Asian American female veteran living in Idaho who was scared to leave her house due to racist stigma blaming Asian Americans for the coronavirus pandemic.

"I think the greatest tragedy is that I see a tsunami coming around mental health, and if we don't mitigate that by truly listening and anticipating the needs of the people, we're going to have an issue," Becker said.

The coronavirus pandemic has exacerbated existing inequities for the most vulnerable communities. The VA medical system has recorded more than 53,000 cases of COVID-19 among veterans in all 50 states, the District of Columbia and Puerto Rico, AARP reported, with more than 3,000 deaths, not including veterans who were not diagnosed at VA hospitals and medical centers.

Access to care is still an issue. A report released last week by the Department of Veterans Affairs Office of Inspector General revealed deficiencies in care, care coordination and facility response in the case of a patient who died by suicide after being discharged by the Memphis, Tenn., VA Medical Center. But Becker remains optimistic that he can make change from within the system.

"It has to start on the military side. We have to make sure that it's very clear it's ok not to be ok, if someone needs mental health support it's not weakness," he said.

And that support needs to carry through veterans' transitions to civilian life, Becker added.

"[The military is] a cocoon, you get fed, you have a job, you get issued clothes, he said. When you leave, how do we make sure that all of those needs are getting met?"

While he's optimistic, Becker is also a realist, and he knows there are still very real problems with the VA. But he says it's more an issue of capability than bad intentions.

"There's a few bad apples. I've supervised those bad apples and I've had to get rid of those bad apples," Becker said. But he's also seen new leaders step up.

"It's a tale of two cities," he said. "We're seeing a set of leadership behaviors that are not conducive to the needs of what we're looking for, but we're seeing great leaders within the federal government who are career employees and even some politicians."

The rest is here:
How the VA is using artificial intelligence to improve veterans' mental health | TheHill - The Hill

How Artificial Intelligence is changing the way we search and find online – ITProPortal

Can a machine know a customer better than they know themselves? The answer is, for the purposes of shopping, yes it can.

First of all, artificial intelligence (AI) takes a dispassionate view of customers and their behavior online. In market research, by contrast, consumers often give contradictory answers that change over time, depending largely on how they feel at that particular moment. As an indicator of what those consumers are actually likely to buy, such research has proven unreliable.

AI, on the other hand, supported by machine learning to deliver better and better outcomes over time, operates without emotion and simply reacts to and learns from what it is told.

In online retail, AI is set to revolutionize the world of search. If "revolutionize" sounds too big a word, bear in mind that search technology has barely changed in 10 or more years. While brands have invested heavily in making their websites look amazing and have optimized them to steer the customer easily to the checkout, they have generally used out-of-the-box search technology ever since the first commercial engine was launched by AltaVista back in 1995.

Given that typical conversion rates on retail websites are 2-3 percent, there is everything to play for in making search easier and more rewarding for shoppers. Retailers invest heavily in SEO and PPC to get customers from Google to their site, but too often think the job is done once they get there.

Products are then displayed to their best advantage on the site; email or newsletter sign-up is offered; online chat is offered; promotions pop up; a list of nearby stores is offered; and so on. But at no point is the customer offered or given any real help, apart from the online chat window which follows them around.

At this point, the customer may well start to follow the journey laid out for them by the retailer, get distracted, and end up somewhere entirely different from where they intended. Some customers like to wander, but those who already knew what they were looking for do not.

Meanwhile, what has the retailer learned from all the precious time the customer has spent on their site? Only that the customer has not bought anything, and it is only at this point that an offer pops up or the online chat box appears. But none of these actions are based on any knowledge of the customer other than which pages they have looked at.

The search engine is not very good at learning; it may be able to refer the customer back to a page they looked at before, thanks to the consumer's digital footprint or the cookie the site left behind, but if that webpage was not useful, then the search process has actually gone backwards. So the customer continues to end up where they never wanted to go in the first place: ever-decreasing circles displaying a choice of unwanted products.

These on-site search functions can be compared to stubborn schoolchildren who simply refuse to learn, whatever they are taught. The customer searching online tries to make their query as accurate and intelligent as possible, while the search engine simply responds by sharing everything it knows without actually answering the question. AI, by contrast, can spot what the customer intends and give answers based on that intent, depending on where an individual shopper is in their own personal buying journey.

It then returns increasingly accurate results because it is learning from what the customer is telling it. Search thus becomes understanding, because it looks at behavior, not just keywords, which are the current limit of conventional search engines. The AI can also create the best shopping experience beyond basic search, including navigation, to seamlessly and speedily advance a customer to the checkout.
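
As a rough illustration of the difference between keyword matching and learning from behavior, here is a minimal sketch of a ranker that boosts results a shopper actually engages with. The product catalog, scoring rule, and click-feedback mechanism are invented for the example; this is not any vendor's actual engine:

```python
# Minimal sketch of a search ranker that learns from click feedback.
# Products, scores, and the learning rule are invented for illustration.

from collections import defaultdict

PRODUCTS = {
    "p1": "red running shoes",
    "p2": "red dress shoes",
    "p3": "blue running shorts",
}

boost = defaultdict(float)  # learned per-product boost from behavior

def keyword_score(query: str, title: str) -> float:
    """Plain keyword overlap: roughly what a conventional engine does."""
    q, t = set(query.lower().split()), set(title.lower().split())
    return len(q & t) / len(q)

def search(query: str) -> list[str]:
    """Rank by keyword overlap plus the learned behavioral boost."""
    return sorted(PRODUCTS,
                  key=lambda pid: keyword_score(query, PRODUCTS[pid]) + boost[pid],
                  reverse=True)

def record_click(product_id: str) -> None:
    """Clicks nudge a product up for future, similar queries."""
    boost[product_id] += 0.1

print(search("red shoes"))   # tie between p1 and p2 at first
record_click("p2")           # shopper engages with the dress shoes
print(search("red shoes"))   # p2 now ranks first
```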

This is really what delivering personalized journeys is all about: the site understands the customer, knows what they want and how they want it. For instance, when a shopper is very clear about what they want, the AI can plot the quickest route through the site to the payment page, while customers looking for inspiration can be given a slower and more immersive experience, with lots of hand-holding as required, such as links to online chat to help them with their decision or curated content to inspire browsing.

AI in ecommerce assumes a character all of its own, essentially a digital assistant that is trusted by the customer to help them find what they want. Retailers can personalize AI in any way they choose, while the processing and intelligence that sits behind it continues to work unseen.

AI in action of course creates a huge amount of interactional and behavioral data that the retailer can use to make improvements over time to core search, navigation, merchandising, display, promotions and the checkout experience. It delivers good results for individual customers as well as for all customers as their online behavior continues to evolve.

Our view is that customers want help when they are on a website. They want to be able to ask questions using natural rather than search language and they want the search function to learn based on those answers. By ensuring that their search strategy is underpinned by AI, retailers can then introduce more dynamic search enablers, such as visual and voice. But rather than simply adding commands, the customer is able to hold conversations with the digital assistant using natural language. Search then turns into discovery and it is this that leads to higher customer conversions, repeat visits and long-term loyalty.

To date, a lot of the conversation around AI has focused on the technology rather than what it enables in the real world. And there has been some reluctance to adopt it for fear that it will replace human jobs; however, in the case of online search, one automated process is simply complementing another and, all in all, doing a much better job. Check out your own search function now. How is that working for you?

Jamie Challis is UK Director, Findologic

View post:
How Artificial Intelligence is changing the way we search and find online - ITProPortal

Rage Against the Algorithm: the Risks of Overestimating Military Artificial Intelligence – Chatham House

AI holds the potential to replace humans for tactical tasks in military operations beyond current applications such as navigation assistance. For example, in the US, the Defense Advanced Research Projects Agency (DARPA) recently held the final round of its AlphaDogfight Trials, where an algorithm controlling a simulated F-16 fighter was pitted against an Air Force pilot in virtual aerial combat. The algorithm won 5-0. So what does this mean for the future of military operations?

The agency's deputy director remarked that these tools are now ready for weapons systems designers to be in the toolbox. At first glance, the dogfight suggests that AI-enabled air combat would provide a tremendous military advantage, including the lack of survival instincts inherent to humans, the ability to consistently operate under high acceleration stress beyond the limitations of the human body, and high targeting precision.

The outcome of these trials, however, does not mean that this technology is ready for deployment on the battlefield. In fact, an array of considerations must be taken into account prior to deployment and use, namely the ability to adapt to real-life combat situations, physical limitations, and legal compliance.

First, as with all technologies, the performance of an algorithm in its testing environment is bound to differ from real-life applications, as has been seen with cluster munitions. For instance, Google Health developed an algorithm to help with diabetic retinopathy screening. While the algorithm's accuracy rate in the lab was over 90 per cent, it did not perform well out of the lab: because the algorithm had been trained on high-quality scans, it rejected more than a fifth of the real-life scans, which were deemed below the required quality threshold. As a result, the process ended up being as time-consuming and costly, if not more so, than traditional screening.

Similarly, virtual environments akin to the AlphaDogfight Trials do not reflect the extent of the risks, hazards and unpredictability of real-life combat. In the dogfight exercise, for example, the algorithm had full situational awareness and was repeatedly trained on the rules, parameters and limitations of its operating environment. But in a real-life, dynamic battlefield, the list of variables is long and will inevitably fluctuate: visibility may be poor, extreme weather could affect operations and the performance of aircraft, and the behaviour and actions of adversaries will be unpredictable.

Every single eventuality would need to be programmed in line with the commander's intent in an ever-changing situation, or the performance of the algorithms, including in target identification and firing precision, would be drastically affected.

Another consideration relates to the limitations of the hardware that AI systems depend on. Algorithms rely on hardware such as sensors and computer systems, each of which is constrained by physical limitations. These can be targeted by an adversary, for example through electronic interference, to disrupt the functioning of the computer systems from which the algorithms operate.

Hardware may also be affected involuntarily. For instance, a pilotless aircraft controlled by an algorithm can indeed undergo higher accelerations, and thus higher g-force, than the human body can endure. However, the aircraft itself is also subject to physical limitations, such as acceleration limits beyond which parts of the aircraft, like its sensors, may be severely damaged, which in turn affects the algorithm's performance and, ultimately, mission success. It is critical that these physical limitations are factored into the equation when deploying these machines, especially when they rely so heavily on sensors.

Another major, and perhaps the greatest, consideration relates to the ability to rely on machines for legal compliance. The DARPA dogfight focused exclusively on the algorithm's ability to successfully control the aircraft and counter the adversary; nothing indicates its ability to ensure that strikes remain within the boundaries of the law.

In an armed conflict, the deployment and use of such systems on the battlefield are not exempt from international humanitarian law (IHL), most notably its customary principles of distinction, proportionality and precautions in attack. A system would need to be able to differentiate between civilians, combatants and military objectives, calculate whether its attacks are proportionate to the set military objective in light of live collateral damage estimates, and take the necessary precautions to ensure its attacks remain within the boundaries of the law, including the ability to abort if necessary. This would also require the machine to be able to stay within the rules of engagement for that particular operation.

It is therefore critical to incorporate IHL considerations from conception and throughout the development and testing phases of algorithms, to ensure the machines are sufficiently reliable for legal compliance purposes.

It is also important that developers address the 'black box' issue, whereby an algorithm's calculations are so complex that it is impossible for humans to understand how it came to its results. Addressing this opacity is necessary not only to improve the algorithm's performance over time, but also for accountability and investigation purposes in cases of incidents and suspected violations of applicable laws.

Algorithms are becoming increasingly powerful and there is no doubt that they will confer tremendous advantages on the military. Over-hype, however, must be avoided, both with regard to the machines' technical reliability and their capacity for legal compliance.

The testing and experimentation phases, during which developers can fine-tune the algorithms, are key. Developers must therefore be held accountable for ensuring the reliability of machines by incorporating considerations pertaining to performance and accuracy, hardware limitations, and legal compliance. This could help prevent real-life incidents that result from overestimating the capabilities of AI in military operations.

Visit link:
Rage Against the Algorithm: the Risks of Overestimating Military Artificial Intelligence - Chatham House

How to Measure the Performance of Your AI/Machine Learning Platform? – Analytics Insight

With each passing day, new technologies are emerging across the world. They are not just bringing innovation to industries but also radically transforming entire societies. Be it artificial intelligence, machine learning, the Internet of Things, or the cloud, all of these have found a plethora of applications that are implemented through specialized platforms. Organizations choose a platform that has the power to unlock the full benefits of the respective technology and deliver the desired results.

But choosing a platform isn't as easy as it seems. It has to be of high caliber, fast, independent, and so on. In other words, it should be worth your investment. Let's say that you want to know the performance of a CPU in comparison to others. It's easy, because you know you have PassMark for the job. Similarly, when you want to check the performance of a graphics processing unit, you have Unigine's Superposition. But when it comes to machine learning, how do you figure out how fast a platform is? Alternatively, as an organization, if you have to invest in a single machine learning platform, how do you decide which one is the best?

For a long period there has been no benchmark to judge the worthiness of machine learning platforms. Put differently, the artificial intelligence and machine learning industry has lacked reliable, transparent, standard, and vendor-neutral benchmarks that flag performance differences between the different parameters used for handling a workload. These parameters include hardware, software, algorithms, and cloud configurations, among others.

Even though it has never been a roadblock when designing applications, the choice of platform determines the efficiency of the final product in one way or another. Technologies like artificial intelligence and machine learning are becoming extremely resource-sensitive as research progresses. For this reason, practitioners of AI and ML are seeking the fastest, most scalable, power-efficient, and lowest-cost hardware and software platforms to run their workloads.

This need has emerged because machine learning is moving towards a workload-optimized structure. As a result, there is a greater need than ever for standard benchmarking tools that help machine learning developers assess and analyze the target environments best suited for a given job. Not just developers but enterprise information technology professionals also need a benchmarking tool for a specific training or inference job. Andrew Ng, CEO of Landing AI, points out that there is no doubt that AI is transforming multiple industries, but for it to reach its full potential, we still need faster hardware and software. Therefore, unless we have something to measure the efficiency of hardware and software specifically for the needs of ML, there is no way we can design more advanced ones for our requirements.

David Patterson, author of Computer Architecture: A Quantitative Approach, highlights the fact that good benchmarks enable researchers to compare different ideas quickly, which makes it easier to innovate. With that said, the need for a standard ML benchmarking tool is greater than ever.

To solve the underlying problem of finding an unbiased benchmarking tool, machine learning expert David Kanter, along with scientists and engineers from organizations such as Google, Intel, and Microsoft, has come up with a new solution: MLPerf, a machine learning benchmark suite that measures how fast a system can perform ML inference using a trained model.

Measuring the speed of a machine learning system is already a complex task, and it becomes even more tangled the longer it is observed, simply because of the varying nature of problem sets and architectures in machine learning services. That said, MLPerf measures the accuracy of a platform in addition to its performance. It is intended for the widest range of systems, from mobile devices to servers.

Training is the process in machine learning where a network is fed large datasets and let loose to find any underlying patterns in them. The larger the dataset, the more effective the system. It is called training because the network learns from the datasets and trains itself to recognize particular patterns. For example, Gmail's Smart Reply is trained on 238,000,000 sample emails, and Google Translate is trained on a trillion data samples. This makes the computational cost of training quite expensive. Systems designed for training have large and powerful hardware, since their job is to chew up the data as fast as possible. Once the system is trained, the output received from it is called inference.

Therefore, performance matters differently across the two phases. The training phase demands as many operations per second as possible, with little concern for latency. During inference, by contrast, latency is a big issue, since a human is often waiting on the other end to receive the results of the inference query.
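
To make the throughput-versus-latency distinction concrete, here is a minimal sketch of measuring each. The predict() function is a stand-in for a real model, and real benchmarks such as MLPerf impose much stricter rules (warm-up runs, fixed query streams, scenario-specific load); this only illustrates the two metrics:

```python
# Minimal sketch: throughput vs. latency measurement.
# predict() is a stand-in for a trained model's forward pass; MLPerf's
# actual rules (warm-up, query streams, percentile targets) are stricter.

import statistics
import time

def predict(x: float) -> float:
    """Stand-in for a model's forward pass (pure busywork)."""
    return sum(x * i for i in range(1000))

def measure_throughput(n: int = 10_000) -> float:
    """Training-style metric: operations per second, latency ignored."""
    start = time.perf_counter()
    for i in range(n):
        predict(float(i))
    return n / (time.perf_counter() - start)

def measure_latency_p99(n: int = 1_000) -> float:
    """Inference-style metric: 99th-percentile single-query latency (ms)."""
    samples = []
    for i in range(n):
        start = time.perf_counter()
        predict(float(i))
        samples.append((time.perf_counter() - start) * 1000)
    return statistics.quantiles(samples, n=100)[98]  # 99th percentile

print(f"throughput: {measure_throughput():.0f} inferences/sec")
print(f"p99 latency: {measure_latency_p99():.3f} ms")
```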

Due to the complex nature of architectures and metrics, MLPerf does not produce a single overall score. Because it spans a range of workloads and architectures, one cannot assume a single perfect score the way one can for CPUs or GPUs. In MLPerf, scores are broken down into training workloads and inference workloads before being divided into tasks, models, datasets, and scenarios. The result obtained from MLPerf is not one number but a wide spreadsheet. Each inference task is measured under four scenarios: single-stream, multi-stream, server, and offline.

Finally, MLPerf separates the benchmark into Open and Closed divisions, with stricter requirements for the Closed division. Similarly, the hardware for an ML workload is separated into categories such as Available, Preview, Research, Development, and Others. All these factors give ML experts and practitioners an idea of how close a given system is to real production.

Read the rest here:
How to Measure the Performance of Your AI/Machine Learning Platform? - Analytics Insight

Banks aren't as stupid as enterprise AI and fintech entrepreneurs think – TechCrunch

Announcements like Selina Finance's $53 million raise and another $64.7 million raise the next day for a different banking startup spark enterprise artificial intelligence and fintech evangelists to rejoin the debate over how banks are stupid and need help or competition.

The complaint is that banks are seemingly too slow to adopt fintech's bright ideas. They don't seem to grasp where the industry is headed. Some technologists, tired of marketing their wares to banks, have instead decided to go ahead and launch their own challenger banks.

But old-school financiers aren't dumb. Most know the buy-versus-build choice in fintech is a false choice. The right question is almost never whether to buy software or build it internally. Instead, banks have often worked to walk the difficult but smarter path right down the middle, and that's accelerating.

That's not to say banks haven't made horrendous mistakes. Critics complain about banks spending billions trying to be software companies, creating huge IT businesses with huge redundancies in cost and longevity challenges, and investing in ineffectual innovation and intrapreneurial endeavors. But overall, banks know their business far better than the entrepreneurial markets that seek to influence them.

First, banks have something most technologists don't have enough of: domain expertise. Technologists tend to discount the exchange value of domain knowledge, and that's a mistake. Too much technology, built without critical discussion, deep product-management alignment, and crisp, clear business-usefulness, ends up abstracted from the material value it seeks to create.

Second, banks are not reluctant to buy because they don't value enterprise artificial intelligence and other fintech. They're reluctant because they value it too much. They know enterprise AI gives a competitive edge, so why should they get it from the same platform everyone else is attached to, drawing from the same data lake?

Competitiveness, differentiation, alpha, risk transparency and operational productivity will be defined by how highly productive, high-performance cognitive tools are deployed at scale in the very near future. The combination of NLP, ML, AI and cloud will accelerate competitive ideation by an order of magnitude. The question is: how do you own the key elements of competitiveness? It's a tough question for many enterprises to answer.

If they get it right, banks can realize the true value of their domain expertise and develop a differentiated edge where they don't just float along with every other bank on someone's platform. They can define the future of their industry and keep the value. AI is a force multiplier for business knowledge and creativity. If you don't know your business well, you're wasting your money. The same goes for the entrepreneur: if you can't make your portfolio absolutely business-relevant, you end up being a consulting business pretending to be a product innovator.

So are banks at best cautious, and at worst afraid? They don't want to invest in the next big thing only to have it flop. They can't distinguish what's real from hype in the fintech space. And that's understandable. After all, they have spent a fortune on AI. Or have they?

It seems they have spent a fortune on stuff called AI: internal projects with not a snowball's chance in hell of scaling to the volume and concurrency demands of the firm. Or they have become enmeshed in huge consulting projects staggering toward some lofty objective that everyone knows, deep down, is not possible.

This perceived trepidation may or may not be good for banking, but it certainly has helped foster the new industry of the challenger bank.

Challenger banks are widely accepted to have come about because traditional banks are too stuck in the past to adopt new ideas. Investors too easily agree. In recent weeks, American challenger bank Chime unveiled a credit card, U.S.-based Point launched, and German challenger bank Vivid launched with the help of Solarisbank, a fintech company.

Traditional banks are spending resources on hiring data scientists too, sometimes in numbers that dwarf the challenger banks'. Legacy bankers want to listen to their data scientists on questions and challenges rather than pay more for an external fintech vendor to answer or solve them.

This arguably is the smart play. Traditional bankers are asking themselves why they should pay for fintech services they can't 100% own, or how they can buy the right bits and retain the parts that amount to a competitive edge. They don't want that competitive edge floating around in a data lake somewhere.

From banks' perspective, it's better to build fintech internally, or else there's no competitive advantage; the business case is always compelling. The problem is that a bank is not designed to stimulate creativity in design. JPMC's COIN project is a rare and fantastically successful exception: an example of super alignment between creative fintech and a bank able to articulate a clear, crisp business problem (a Product Requirements Document, for want of a better term). Most internal development is playing games with open source, with the shine of the alchemy wearing off as budgets are scrutinized for return on investment.

A lot of people are going to talk about setting new standards in the coming years as banks onboard these services and buy new companies. Ultimately, fintech firms and banks are going to join together and make the new standard as new options in banking proliferate.

So there's a danger in spending too much time learning how to do it yourself and missing the boat as everyone else moves ahead.

Engineers will tell you that untutored management can fail to steer a consistent course. The result is an accumulation of technical debt as development-level requirements keep zigzagging. Laying too much pressure on your data scientists and engineers can also lead to technical debt piling up faster. A bug or an inefficiency is left in place. New features are built as workarounds.

This is one reason why in-house-built software has a reputation for not scaling. The same problem shows up in consultant-developed software. Old problems in the system hide underneath new ones and the cracks begin to show in the new applications built on top of low-quality code.

So how to fix this? What's the right model?

It's a bit of a dull answer, but success comes from humility. It needs an understanding that big problems are solved by creative teams, each member understanding what they bring, each being respected as an equal, and all managed with a completely clear articulation of what needs to be solved and what success looks like.

Throw in some Stalinist project management and your probability of success goes up an order of magnitude. So the successes of the future will see banks having fewer but far more trusted fintech partners that jointly value the intellectual property they are creating. They'll have to respect that neither can succeed without the other. It's a tough code to crack. But without it, banks are in trouble, and so are the entrepreneurs that seek to work with them.

Read the original here:
Banks aren't as stupid as enterprise AI and fintech entrepreneurs think - TechCrunch

How can AI-powered humanitarian engineering tackle the biggest threats facing our planet? – AI News

Humanitarian engineering programs bring together engineers, policy makers, non-profit organisations, and local communities to leverage technology for the greater good of humanity.

The intersection of technology, community, and sustainability offers a plethora of opportunities to innovate. We still live in an era where millions of people live in extreme poverty, lacking access to clean water, basic sanitation, electricity, the internet, quality education, and healthcare.

Clearly, we need global solutions to tackle the grandest challenges facing our planet. So how can artificial intelligence (AI) assist in addressing key humanitarian and sustainable development challenges?

To begin with, the United Nations Sustainable Development Goals (SDGs) represent a collection of 17 global goals that aim to address pressing global challenges, achieve inclusive development, and foster peace and prosperity in a sustainable manner by 2030. AI enables the building of smart systems that imitate human intelligence to solve real-world problems.

Recent advancements in AI have radically changed the way we think, live, and collaborate. Our daily lives are centred around AI-powered solutions, with smart speakers playing wake-up alarms, smart watches tracking steps on our morning walk, smart refrigerators recommending breakfast recipes, smart TVs providing personalised content recommendations, and navigation apps recommending the best route based on real-time traffic. Clearly, the age of AI is here. How can we leverage this transformative technology to amplify its impact for social good?

AI core capabilities like machine learning (ML), computer vision, natural language understanding, and speech recognition offer new approaches to address humanitarian challenges and amplify the positive impact on underserved communities. ML enables machines to process massive amounts of data, interconnect underlying patterns, and derive meaningful insights for decision making. ML techniques like deep learning offer the powerful capability to create sophisticated AI models based on artificial neural networks.

Such models can be used for numerous real-world situations, like pandemic forecasting. AI tools can model and predict the spread of outbreaks like Covid-19 in low-resource settings using recent outbreak trends, treatment data, and travel history. This will help governmental and healthcare agencies to identify high-risk areas, manage demand and supply of essential medical supplies, and formulate localised remedial measures to control an outbreak.

Computer vision techniques process visual information in digital images and videos to generate valuable inferences. Trained AI models assist medical practitioners in examining clinical images and identifying hidden patterns of malignant tumors, supporting expedited decision-making and treatment planning for patients. Most recently, smart speakers have extended their conversational AI capabilities to healthcare use cases like chronic illness management, prescription ordering, and urgent-care appointments.

This advancement opens up the possibility of driving healthcare innovations that will break down access barriers and deliver quality healthcare to marginalised populations. Similarly, global educational programs aiming to connect the digitally unconnected can leverage satellite images and ML algorithms to map school locations. AI-powered learning products are increasingly being launched to provide personalised experiences that train young children in math and science.

The convergence of AI with the Internet of Things (IoT) facilitates the rapid development of meaningful solutions for agriculture to monitor soil health, assess crop damage, and optimise the use of pesticides. This empowers local farmers to model different scenarios and choose the crop most likely to maximise quality and yield, contributing toward the zero-hunger and economic-empowerment SDGs.

To deliver high social impact, AI-driven humanitarian programs should follow a bottom-up approach. One should always work backwards from the needs of the end user, driving clarity on the targeted community or user, their major pain points, the opportunity to innovate, and the expected user experience.

Most importantly, always check whether AI is relevant to the problem at hand, or investigate whether a meaningful alternative approach exists. Understand how an AI-powered solution will deliver value to the various stakeholders involved and positively contribute toward achieving the SDGs for local communities. Define a suite of metrics to measure the various dimensions of program success. Data acquisition is central to building robust AI models, which require access to meaningful, high-quality data.

Delivering effective AI solutions to the humanitarian landscape requires a clear understanding of the data required and the relevant sources from which to acquire it. For instance, satellite images, electronic health records, census data, educational records, and public datasets are used to solve problems in education, healthcare, and climate change. Partnership with key field players is important for addressing data gaps in domains with sparsely available data.

Responsible use of AI in humanitarian programs can be achieved by enforcing standards and best practices that implement fairness, inclusiveness, security, and privacy controls. Always check models and datasets for bias and negative experiences. Techniques like data visualisation and clustering can evaluate a dataset's distribution for fair representation across stakeholder dimensions. Routine updates to training and testing datasets are essential to fairly account for the diversity in users' growing needs and usage patterns.

Safeguard sensitive user information by implementing privacy controls: encrypt user data at rest and in transit, limit access to user data and critical production systems through least-privilege access control, and enforce data retention and deletion policies on user datasets. Implement a robust threat model to handle possible system attacks, and run routine checks for infrastructure security vulnerabilities.
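
As a small illustration of the distribution check described above, the sketch below flags groups that are under-represented in a training set. The demographic labels and the 10 percent threshold are invented for the example; real fairness audits would look at many more dimensions:

```python
# Minimal sketch of a dataset representation check, as described above.
# The demographic labels and the 10% threshold are invented for illustration.

from collections import Counter

def representation_report(groups: list[str], min_share: float = 0.10) -> dict[str, float]:
    """Return each group's share of the dataset, flagging under-represented ones."""
    counts = Counter(groups)
    total = len(groups)
    report = {}
    for group, count in counts.items():
        share = count / total
        report[group] = share
        if share < min_share:
            print(f"WARNING: group '{group}' is only {share:.1%} of the data")
    return report

# Hypothetical demographic column from a training dataset
sample = ["urban"] * 800 + ["rural"] * 150 + ["remote"] * 50
print(representation_report(sample))
```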

To conclude, AI-powered humanitarian programs offer a transformative opportunity to advance social innovations and build a better tomorrow for the benefit of humanity.

Photo by Elena Mozhvilo on Unsplash

Original post:
How can AI-powered humanitarian engineering tackle the biggest threats facing our planet? - AI News

U of I to lead two of seven new national artificial intelligence institutes – University of Illinois News

CHAMPAIGN, Ill. The National Science Foundation and the U.S. Department of Agricultures National Institute of Food and Agriculture are announcing an investment of more than $140 million to establish seven artificial intelligence institutes in the U.S. Two of the seven will be led by teams at the University of Illinois, Urbana-Champaign. They will support the work of researchers at the U. of I. and their partners at other academic and research institutions. Each of the new institutes will receive about $20 million over five years.

The USDA-NIFA will fund the AI Institute for Future Agricultural Resilience, Management and Sustainability at the U. of I. Illinois computer science professor Vikram Adve will lead the AIFARMS Institute.

The NSF will fund the AI Institute for Molecular Discovery, Synthetic Strategy and Manufacturing, also known as the Molecule Maker Lab Institute. Huimin Zhao, a U. of I. professor of chemical and biomolecular engineering and of chemistry, will lead this institute.

AIFARMS will advance AI research in computer vision, machine learning, soft-object manipulation and intuitive human-robot interaction to solve major agricultural challenges, the NSF reports. Such challenges include sustainable intensification with limited labor, efficiency and welfare in animal agriculture, the environmental resilience of crops and the preservation of soil health. The institute will feature "a novel autonomous farm of the future, new education and outreach pathways for diversifying the workforce in agriculture and technology, and a global clearinghouse to foster collaboration in AI-driven agricultural research," Adve said.

Computer science professor Vikram Adve will lead the AI Institute for Future Agricultural Resilience, Management and Sustainability at the U. of I.

Photo by L. Brian Stauffer

The Molecule Maker Lab Institute will focus on the development of new AI-enabled tools to accelerate automated chemical synthesis to advance the discovery and manufacture of novel materials and bioactive compounds, the NSF reports. The institute also will train a new generation of scientists with combined expertise in AI, chemistry and bioengineering. "The goal of the institute is to establish an open ecosystem of disruptive thinking, education and community engagement powered by state-of-the-art molecular design, synthesis and spectroscopic characterization technologies, all interfaced with AI and a modern cyberinfrastructure," Zhao said.

Huimin Zhao, a professor of chemical and biomolecular engineering and of chemistry, will lead the new Molecule Maker Lab Institute at Illinois.

Photo by L. Brian Stauffer

"The National Science Foundation and USDA-NIFA recognize the breadth and depth of Illinois' expertise in artificial intelligence, agricultural systems and molecular innovation," U. of I. Chancellor Robert Jones said. "It is no surprise to me that two of seven new national AI institutes will be led by our campus. I look forward to seeing the results of these new investments in improving agricultural outcomes and innovations in basic and applied research."

Adve is a co-director of the U. of I. Center for Digital Agriculture with crop sciences bioinformatics professor Matthew Hudson. AIFARMS will be under the CDA umbrella. Zhao and Hudson are affiliates of the Carl R. Woese Institute for Genomic Biology, where Zhao leads the Biosystems Design theme. The Molecule Maker Lab Institute will be associated with two campus institutes: IGB and the Beckman Institute for Advanced Science and Technology.

For more information, see related posts, below, from associated campus units:

Editor's notes:

To reach Vikram Adve, email vadve@illinois.edu.

To reach Huimin Zhao, email zhao5@illinois.edu.

Original post:
U of I to lead two of seven new national artificial intelligence institutes - University of Illinois News

Funding boost for artificial intelligence in NHS to speed up diagnosis of deadly diseases – GOV.UK

Patients will benefit from major improvements in technology to speed up the diagnosis of deadly diseases like cancer thanks to further investment in the use of artificial intelligence across the NHS.

A £50 million funding boost will scale up the work of existing Digital Pathology and Imaging Artificial Intelligence Centres of Excellence, which were launched in 2018 to develop cutting-edge digital tools to improve the diagnosis of disease.

The 3 centres set to receive a share of the funding, based in Coventry, Leeds and London, will deliver digital upgrades to pathology and imaging services across an additional 38 NHS trusts, benefiting 26.5 million patients across England.

Pathology and imaging services, including radiology, play a crucial role in the diagnosis of diseases and the funding will lead to faster and more accurate diagnosis and more personalised treatments for patients, freeing up clinicians time and ultimately saving lives.

Health and Social Care Secretary Matt Hancock said:

Technology is a force for good in our fight against the deadliest diseases. It can transform and save lives through faster diagnosis, free up clinicians to spend time with their patients and make every pound in the NHS go further.

I am determined we do all we can to save lives by spotting cancer sooner. Bringing the benefits of artificial intelligence to the frontline of our health service with this funding is another step in that mission. We can support doctors to improve the care we provide and make Britain a world-leader in this field.

The NHS is open and I urge anyone who suspects they have symptoms to book an appointment with their GP as soon as possible to benefit from our excellent diagnostics and treatments.

Today the government has also provided an update on the number of cancer diagnostic machines replaced in England since September 2019, when £200 million was announced to help replace MRI machines, CT scanners and breast screening equipment, as part of the government's commitment to ensure 55,000 more people survive cancer each year.

69 scanners have now been installed and are in use, 10 more are being installed and 75 have been ordered or are ready to be installed.

The new funding is part of the government's commitment to saving thousands more lives each year and detecting three-quarters of all cancers at an early stage by 2028.

Cancer diagnosis and treatment has been an absolute priority throughout the pandemic and continues to be so. Nightingale hospitals have been turned into mass screening centres and hospitals have successfully and quickly cared for patients urgently referred by their GP, with over 92% of urgent cancer referrals being investigated within 2 weeks, and 85,000 people starting treatment for cancer since the beginning of the coronavirus pandemic.

In June, 45,000 more people came forward for a cancer check, and the public are urged to contact their GP and get a check-up if they are concerned about possible symptoms.

National Pathology Imaging Co-operative Director and Consultant Pathologist at Leeds Teaching Hospitals NHS Trust Darren Treanor said:

This investment will allow us to use digital pathology to diagnose cancer at 21 NHS trusts in the north, serving a population of 6 million people. We will also build a national network spanning another 25 hospitals in England, allowing doctors to get expert second opinions in rare cancers, such as childhood tumours, more rapidly. This funding puts the NHS in a strong position to be a global leader in the use of artificial intelligence in the diagnosis of disease.

Professor Kiran Patel, Chief Medical Officer and Interim Chief Executive Officer for University Hospitals Coventry and Warwickshire (UHCW) NHS Trust, said:

We are delighted to receive and lead this funding. This represents a major capital investment into the NHS which will massively expand the digitisation of cellular pathology services, driving diagnostic evaluation to new heights and increasing access to a vast amount of image information for research.

As a trust we're excited to be playing such a major part in helping the UK to take a leading role in the development and delivery of these new technologies to improve patient outcomes and enhance our understanding and utilisation of clinical information.

Professor Reza Razavi, London Medical Imaging and AI Centre for Value-Based Healthcare Director, said:

The additional funding will enable the London Medical Imaging and AI Centre for Value-Based Healthcare to continue its mission to spearhead innovations that will have significant impact on our patients and the wider NHS.

Artificial intelligence technology provides significant opportunities to improve diagnostics and therapies, as well as reduce administrative costs. With machine learning, we can use existing data to help clinicians better predict when disease will occur, diagnose and treat it earlier, and personalise treatments, which will be less resource-intensive and provide better health outcomes for our patients.

The centres benefiting from the funding are:

Alongside the clinical improvements, this investment supports the UKs long-term response to COVID-19, contributing to the governments aim of building a British diagnostics industry at scale. The funding will support the UKs artificial intelligence and technology industries, by allowing the centres to partner with new and innovative British small and medium-sized enterprises (SMEs), boosting our economic recovery from coronavirus.

As part of the delivery of the government's Data to Early Diagnosis and Precision Medicine Challenge, in 2018 the Department for Business, Energy and Industrial Strategy (BEIS) invested £50 million through UK Research and Innovation (UKRI) to establish 5 digital pathology and imaging AI Centres of Excellence.

The centres, located in Leeds, Oxford, Coventry, Glasgow and London, were originally selected through an Innovate UK competition run on behalf of UKRI which, to date, has leveraged over £41.5 million in industry investment. Working with their partners, the centres modernise NHS pathology and imaging services and develop new, innovative ways of using AI to speed up the diagnosis of diseases.

The rest is here:
Funding boost for artificial intelligence in NHS to speed up diagnosis of deadly diseases - GOV.UK

Six Limitations of Artificial Intelligence As We Know It – Walter Bradley Center for Natural and Artificial Intelligence

The list is a selection from Bingecast: Robert J. Marks on the Limitations of Artificial Intelligence, a discussion between Larry L. Linenschmidt of the Hill Country Institute and Walter Bradley Center director Robert J. Marks. The focus is on why we mistakenly attribute understanding and creativity to computers. The interview was originally published by the Hill Country Institute and is reproduced with thanks.

Here is a partial transcript, listing six limits of AI as we know it: (The Show Notes, Additional Resources, and a link to the full transcript are below.)

Larry L. Linenschmidt: When I read the term "classical computer," how does a computer function? Let's build on that to talk about supercomputers and kind of build into just a foundation of how these things work so we can then talk about the theory of AI and what it is and what it isn't.

Robert J. Marks: One of the things that we can identify that humans can do but computers can't are things which are non-algorithmic. If it's non-algorithmic, it means it's non-computable. Actually, Alan Turing showed back in his initial work that there were things which were not algorithmic. It's very difficult, for example, to write a computer program to analyze another computer program. Turing showed that specific instantiations of that were non-algorithmic. This is something which is taught to freshman computer science students, so they know what algorithmic and non-algorithmic/non-computable mean. Again, non-computable is a synonym for non-algorithmic.
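
To give a concrete sense of what "non-algorithmic" means here, the classic case is the halting problem Marks alludes to. Below is a minimal sketch of Turing's argument that no general halts() checker can exist; the halts function is hypothetical by construction, which is the whole point:

```python
# Sketch of Turing's halting-problem argument. halts() is hypothetical:
# the contradiction below is why no such general checker can be written.

def halts(program) -> bool:
    """Suppose, for contradiction, this decides whether program() halts."""
    raise NotImplementedError("Turing: no general algorithm exists")

def contrary():
    """Does the opposite of whatever halts() predicts about it."""
    if halts(contrary):
        while True:   # predicted to halt -> loop forever instead
            pass
    # predicted to loop forever -> halt immediately instead

# If halts() were implementable, halts(contrary) could be neither True
# nor False without contradicting itself; hence it is non-computable.
try:
    halts(contrary)
except NotImplementedError as e:
    print(e)
```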

We have a number of aspects that are non-algorithmic. I would say creativity, sentience, and consciousness are probably things that you cannot write a computer program to simulate.

Note: The film The Imitation Game (2014) dramatizes the way Turing led a team that broke the Nazis' "unbreakable" code, Enigma, during World War II, using pioneering computing techniques.

Robert J. Marks: Basically, Turing showed that computers were limited by something called algorithms, and we hear about algorithms a lot: such-and-such is doing an algorithm, Facebook has initiated an algorithm to do something. The question is, what is an algorithm?

An algorithm is simply a step-by-step procedure to accomplish something. If you go to your shampoo bottle and look at the back, it says: "Wet hair, apply shampoo, rinse, and then repeat." That's an algorithm, because it tells you the step-by-step procedure that you need to wash your hair.

Larry L. Linenschmidt: Well, thats a pretty short algorithm for me since I dont have much hair, but go right ahead.

Robert J. Marks: Isn't that right? Well, the interesting thing about that algorithm is that if you gave it to a computer, the computer would wash its hair forever, because it doesn't say "repeat once," it just says "repeat."

An algorithm I like to think of as a recipe. If you look at the recipe for baking a vanilla coconut cake, for example, it will tell you the ingredients that you need and then give you a step-by-step procedure for making it. That is what an algorithm is and, in fact, it is what computers are limited to. Computers are only able to perform algorithms.
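
As a playful illustration of Marks's point about an unbounded "repeat," here is the shampoo algorithm written as code. A computer follows the steps exactly as given, so the version without a stopping condition never terminates; the function names are, of course, invented for the joke:

```python
# The shampoo "algorithm" as literal code. A computer executes exactly
# the steps it is given, so "repeat" with no stopping condition loops forever.

def wet_hair(): print("wet hair")
def apply_shampoo(): print("apply shampoo")
def rinse(): print("rinse")

def shampoo_literal():
    while True:             # "repeat" with no end condition: never halts
        wet_hair()
        apply_shampoo()
        rinse()

def shampoo_sensible(times: int = 2):
    for _ in range(times):  # a human reads "repeat" as "repeat once more"
        wet_hair()
        apply_shampoo()
        rinse()

shampoo_sensible()   # runs twice and stops; shampoo_literal() would not
```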

Note: Have a look at Things exist that are unknowable: A tutorial on Chaitins number by Robert J. Marks, for some sense of the limits of knowledge that computers will not transcend.

Larry L. Linenschmidt: I have a cellphone that I understand has more power than a room full of computers 50 years ago that Army intelligence used. A massive increase in computing capability, isn't there?

Robert J. Marks: Yes there is, but by increasing clock speeds and using parallel computers, we have mostly just made computers faster. There is a principle taught to computer scientists called the Church-Turing Thesis, which basically says that Alan Turing's original machine could do what the computers of today do. The only difference is that today's computers do things a lot faster. That is really good and very useful, but in terms of capability, they are still restricted to algorithms. I'm not sure if you've ever heard of the quantum computer...

Larry L. Linenschmidt: Yes.

Robert J. Marks (pictured): Which is kind of the new rage, where you use the strange, weird world of quantum physics to get computational results. Even quantum computing is algorithmic and is constrained by the Church-Turing Thesis. With quantum computers, we're going to be doing things like lightning, but still, all of the stuff we can do, we could do with Turing's original machine. Now, with Turing's original machine, it might take us a trillion years compared to today, but nevertheless, the capability is there in Turing's original machine. We're just getting faster and faster, and we can do more interesting things because of that speed.

Note: You may also wish to read Google vs. IBM?: Quantum supremacy isn't the big fix anyway. If human thought is a halting oracle, then even quantum computing will not allow us to replicate human intelligence (Eric Holloway).

Larry L. Linenschmidt: One of the things we talked about earlier was algorithms and what computers can do, and some things that maybe they can't do. What are the things that maybe computers will never be able to do?

Robert J. Marks: Well, I think maybe the biggest testable thing that computers will never be able to do is creativity. Computers can only take the data they've been presented with and interpolate. They can't, if you will, think outside of the box. If you look at the history of creativity, great scientists like Galileo and Einstein actually had to take the data that they were given, discard it, and come up with something which was brand new. It wasn't just a reshuffling of the status quo, which is basically what a computer can do; it was a creative act outside of the available data.
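The interpolate-versus-extrapolate distinction can be shown numerically. A hedged sketch (my example, not Marks's): a model fitted to data is accurate inside the range it has seen and useless outside it.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 3, 30)
y = np.sin(x) + rng.normal(0, 0.05, x.size)  # the data the machine is given

coeffs = np.polyfit(x, y, deg=9)             # flexible fit to that data

print(np.polyval(coeffs, 1.5), np.sin(1.5))  # inside the data: close match
print(np.polyval(coeffs, 6.0), np.sin(6.0))  # outside the data: wildly wrong
```

Nothing in the fit "discards the data and starts over," which is what Marks says Galileo and Einstein did.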

Note: Typical claims for computer-generated art, music, or copywriting involve combining masses of similar material and producing many composites, the most comprehensible of which are chosen by the programmers for publication. The conventional test of computer intelligence, the Turing test, measures only whether a computer can fool a human under certain circumstances. The Lovelace test, which searches for actual creativity, is not much used and has not been passed.

Robert J. Marks: Qualia are kind of the subjective experience that one has. Imagine, for example, having a big, delicious red apple and anticipating taking a bite out of it. You take the bite, you feel the crispness, you feel the tart sweetness, you feel the crunch as you chew it and swallow it. That is an experience, and the question is, do you think you could ever write an algorithm to explain that qualia experience to a computer? I don't think so. I think that is something which is unique to the human being…

John Searle was a philosopher who said, "There is no way that a computer understands anything." He illustrated this with the Chinese room. The basic idea was, you slipped a little slip of paper with something written in Chinese through a little slot. Inside the room, somebody picked it up, looked at it, and wanted to translate it into something, say, Portuguese.

There's a big bunch of file cabinets in the room. The person in the room took this little sheet, looked through all of the file cabinets, and finally found something that matched it. He took the corresponding translation in Portuguese, wrote it down, refiled the originals, went to the door, and slipped out the Portuguese translation.

Now, externally, a person would say, "My gosh, this guy knows Chinese, he knows Portuguese. This computer is really, really smart." Internally, the guy who was actually going through the file cabinets, doing the pattern matching to find the translation, had no idea what Chinese was, had no idea what Portuguese was. He was just following a bunch of instructions.
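The Chinese room is easy to caricature in code. In this toy version (mine, not from the interview; the phrase pairs are hypothetical stand-ins), the file cabinets are a lookup table and the clerk is a function; nothing anywhere understands either language.

```python
# The "file cabinets": canned Chinese-to-Portuguese matches.
FILE_CABINETS = {
    "你好": "Olá",        # hello
    "谢谢": "Obrigado",   # thank you
    "再见": "Adeus",      # goodbye
}

def clerk(slip):
    # Search the cabinets for an exact match and copy out the answer.
    return FILE_CABINETS.get(slip, "<no matching file found>")

print(clerk("谢谢"))   # "Obrigado", produced with zero understanding
```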

Larry L. Linenschmidt: The computer processes; it turns out work product based on how it's directed. But in terms of understanding, as we think of understanding, the way you would expect one of your students to understand what you're teaching, they don't understand. They compute. They process data. Is that a fair way of putting it?

Robert J. Marks: Consider Watson beating the world champions at Jeopardy. If you think about it, that's just a big Chinese room. You have all of Wikipedia and all of the internet available to you, you're given some sort of question on Jeopardy, and you have to get the answer. Watson beating the world champions at Jeopardy is exactly an example of a Chinese room, except the room is a lot bigger, because computers are a lot faster and can do a lot more.

Note: A mistake Watson made playing Jeopardy illustrates the limitations: Why did Watson think Toronto was in the U.S.A.? How that happened tells us a lot about what AI can and can't do, to this day. Hint: Assessment of the relevance of possible clues may not be the greatest strength of a Watson-type program.

Larry L. Linenschmidt: Well, there's one other game example that comes up quite a bit in the literature, and that's the game Go. Apparently Go is the most complicated board game, and a computer did very well at it. Is that just an extension of the same idea, that it was able to match possible outcomes and evaluate the best of those? Or what? How do you look at that?

Robert J. Marks: Go was a remarkable computer achievement. I don't want to derogate this at all. They used a concept called reinforcement learning, and this reinforcement learning was used in chess and Go. It was actually used to win old arcade games where, just by looking at the pixels in an arcade game such as Pac-Man, for example, the computer could learn how to win. Now, in all of these cases, of course, there was the concept of the rules. You've got to know the rules. The fact that Go was mastered by the program is an incredible accomplishment of computer science. However, notice that the computer is doing exactly what it was programmed to do. It was programmed to play Go, and Go is a very narrow application of artificial intelligence.
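Reinforcement learning itself is not mysterious. Here is a minimal tabular Q-learning sketch (my toy example, not AlphaGo's architecture): an agent in a five-cell corridor learns to walk to a goal because the programmer supplies the states, the actions, and the reward; a narrow task, like Go, writ very small.

```python
import random

N_STATES, GOAL = 5, 4              # cells 0..4, reward for reaching cell 4
ACTIONS = (-1, +1)                 # step left or step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

def greedy(s):
    # Best known action in state s, with random tie-breaking.
    return max(ACTIONS, key=lambda b: (Q[(s, b)], random.random()))

for episode in range(500):
    s = 0
    while s != GOAL:
        a = random.choice(ACTIONS) if random.random() < epsilon else greedy(s)
        s2 = min(max(s + a, 0), N_STATES - 1)        # move, stay on the board
        r = 1.0 if s2 == GOAL else 0.0
        # Standard Q-learning update: nudge toward reward + discounted future
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

print([greedy(s) for s in range(N_STATES - 1)])  # learned policy: step right
```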

I would be impressed if the computer program could pass something called the Lovelace test, which is the test that computer programs are given to test their creativity. The Lovelace test basically says that you have seen creativity if the computer program does something that can't be explained by the programmers. Now, you might get some surprising results. There were some surprising moves that AlphaGo made when it played the master, but surprising doesn't count. It's still in the game of Go. If AlphaGo had gone on to do something like (let me make the point by exaggeration) giving you investment advice or forecasting the weather without additional programming, that would be an example of AI creativity…

Algorithms in computers are the result of human creativity. That is not a controversial viewpoint. The current CEO of Microsoft, Satya Nadella, says the same thing: "Look, computers are never going to be creative. Creativity will always be a domain of the programmer."

Note: Creativity does not follow computational rules provides a look at the concept. Philosopher Sean Dorrance Kelly muses on why machines are not creative.

Larry L. Linenschmidt: Well, let me ask the question about AI a little bit differently. Self-learning: a computer teaching itself to do something different, in a way that the programmer did not foresee. There's a program called Deep Patient, a way of managing information on the medical side, and a couple of other programs that I read about, and they solve problems, but they aren't doing it in a way that the developer of the network can explain. Now, does that imply that there's some kind of learning going on in there, some way of their own that they're doing it? Or is everything that they're doing, even if it's not fully understood by the developer, still subject to the way that the developer set up the network?

Robert J. Marks: Well, one of the things we have to differentiate here is the difference between surprise and creativity. I have certainly written computer programs that have the element of surprise in them. I look at them and I say, "Wow, look at what it's doing," but then I look at the program and say, "Yeah, this was one of the solutions that I considered." One of the ideas, especially in computer search, is to lay out thousands, maybe millions or billions, of potential solutions, and you don't know in advance what the effect of those solutions is going to be. It would be almost like laying out a bunch of different recipes for a cake: different amounts of batter, different amounts of milk, different numbers of eggs, different amounts of oil, et cetera, and what you want to do is figure out which one is best.
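The recipe search Marks describes looks something like this in code (my sketch, not his experiment). The score() function here is a hypothetical stand-in for "bake it and taste it"; in real use it would be a simulation or an experiment, not this made-up formula.

```python
import random

def score(recipe):
    # Hypothetical tastiness: peaks near 2 cups flour, 3 eggs, 1 cup milk.
    return -((recipe["flour"] - 2) ** 2
             + (recipe["eggs"] - 3) ** 2
             + (recipe["milk"] - 1) ** 2)

# Lay out thousands of candidate recipes, as Marks describes.
candidates = [
    {"flour": random.uniform(0, 4),
     "eggs":  random.randint(0, 6),
     "milk":  random.uniform(0, 2)}
    for _ in range(100_000)
]

best = max(candidates, key=score)
print(best, score(best))
# The winner may look "surprising," but it was always one of the
# solutions the programmer laid out in advance.
```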

If you have no domain expertise and you want to walk around the search space and try to find the best combination, you might get something which is totally unexpected. We did something in swarm intelligence, which is the modeling of social insects. We applied evolutionary computing, which is an area of electrical engineering, and we evolved "dweebs." It was a predator-prey sort of problem: our prey were the dweebs and our predators were the bullies, and the bullies would chase the dweebs around. We would evolve the dweebs to figure out the best way for the dweeb colony to survive the longest. The result that we got was astonishing and very surprising.

What happened was that the dweebs learned self-sacrifice. One dweeb would run around the playground, be chased by the bullies, and sacrifice himself (I guess dweebs are males, because I said "himself"), so the bullies would kill that dweeb, and then other dweebs would come out and, one by one, sacrifice themselves in turn. By using up all of the bullies' time, the colony of dweebs survived for a very, very long time, which was exactly what we told it to do.

Now, once we looked at that, we were surprised by the result, but we looked back at the code and we said, "Yeah, of these thousands, millions of different solutions that we proposed, we see how this one gave us the surprise." Surprise must not be confused with creativity. If the surprise is something which is consequent to what the programmer decided to program, then it really isn't creativity. The program has just found one of those millions of solutions that works really well in a possibly surprising manner.
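For readers who want to see the shape of such an experiment, here is a toy evolutionary loop (my sketch; the real dweeb fitness, colony survival time under pursuit, is replaced by a trivial stand-in). Everything "surprising" it could ever produce lives inside the genome, mutation rate, and fitness function the programmers chose.

```python
import random

def fitness(genome):
    # Stand-in for "how long does the colony survive?" In the dweeb
    # experiment this would run the full predator-prey simulation.
    return sum(genome)

# A population of random 20-bit genomes.
population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]

for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                      # selection
    population = [                                   # mutated offspring
        [g ^ (random.random() < 0.05) for g in random.choice(survivors)]
        for _ in range(30)
    ]

print(max(map(fitness, population)))  # converges toward the all-ones genome
```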

Larry L. Linenschmidt: As you're explaining it, I'm thinking that a computer is only as good as its programmer: it's good at matching, it's good at putting things together. But true creativity is what the entrepreneur Peter Thiel points to: a lot of people can take us from one to infinity, but it's the zero to one that is creativity, in the tech world and in the business world, that sets us apart. A computer can't take us from zero to one. It needs instructions, doesn't it?

Robert J. Marks: It does, and in his book Zero to One, Thiel talks about the requirement of creativity. His philosophy is parallel to that of some other people, Jay Richards, for example, and George Gilder, who look at business in a very different way from those who see it as a Darwinian competition. They say, "No, what drives entrepreneurs is creativity. You come up with a new idea like a PayPal or a Facebook or an Uber."

That creativity in business is never going to come from a computer. A computer would never have come up with the idea of Uber unless the programmer had programmed it to look at a set of different things. That was something creative, above and beyond the algorithmic…

Larry L. Linenschmidt: Yes. Jay Richards' book The Human Advantage: The Future of American Work in an Age of Smart Machines has countless examples of entrepreneurs seeing a need and then filling that need. It's totally against the idea that capitalism is just about greed. He made the case that capitalism, or free-market enterprise, is really altruistic: the best entrepreneurs actually fill a need. That's reality, isn't it?

Robert J. Marks: Yes it is, yes it is.

You may also enjoy earlier conversations between Robert J. Marks and Larry L. Linenschmidt:

Why we don't think like computers: If we thought like computers, we would repeat package directions over and over again unless someone told us to stop.

and

What did the computer learn in the Chinese room? Nothing. Computers don't understand things and they can't handle ambiguity, says Robert J. Marks.


Go here to read the rest:
Six Limitations of Artificial Intelligence As We Know It - Walter Bradley Center for Natural and Artificial Intelligence