Category Archives: Ai

Part 1: Quality Control in Delivery Only Ghost Kitchens Utilizing AI and Computer – MarketScale

Posted: September 27, 2021 at 5:36 pm

On this first part of a two-part episode of To the Edge and Beyond by Intel, Voice of B2B Daniel Litwin is joined by Maria Meow, APAC Hospitality Vertical Marketing Manager for Intel's Internet of Things Group, and Ankur Jain, founder and CEO of UdyogYantra Technologies, a company focused on bringing Industry 4.0 and its associated ecosystem of technologies to a range of industries, including restaurants.

When it comes to IoT, Meow has spent over ten years at Intel working at its intersection with hospitality. One thing she's noticed is that as the technology has evolved, so has knowledge of it. In her role she regularly relies on IoT to help improve food operations and to help ensure the trust and quality of food, which addresses many consumer concerns.

With the global population expected to reach nearly 10 billion by 2050, demand for creative food solutions is growing, according to Jain. "There is a need to have a safe, predictable, and stable food supply chain," Jain said. "Internet of Things, artificial intelligence, and data analytics can play a big role in creating these safe, predictable, and sustainable food supply chains."

Intel technology is at the forefront of these IoT innovations, powering AI and analytics solutions like UdyogYantra's to address these concerns for both consumers and restaurant brands.

The wheels of ghost kitchens were already in motion before COVID-19, but their viability increased during the pandemic as restaurants pivoted to meet consumer demand for BOPIS (buy online, pick up in store) and third-party delivery. Now that things have settled a bit, restaurants are sticking with some or all of the models developed during the pandemic.

"Digital transformations have been accelerated ever since the pandemic began and have pivoted to delivery, pick-up and drive-through," Meow said. "Digitally transformed brands will reap the benefits."

View original post here:

Part 1: Quality Control in Delivery Only Ghost Kitchens Utilizing AI and Computer - MarketScale

AI adoption in the ETF industry begins to grow – Financial Times

Posted: at 5:36 pm

The growing appreciation that human stockpickers struggle to outperform their benchmark indices has helped fuel a massive surge in assets held by passively managed exchange traded funds. Now some companies are hoping to show that artificial intelligence can finally give them an edge.

The technology is fast-evolving but at least two fund managers, EquBot and Qraft Technologies, running dedicated AI-powered ETFs are claiming early success, even though some of their AI models' decisions might have required strong nerves to implement.

For example, the team at Qraft, which offers four AI-powered ETFs, listed on NYSE Arca, witnessed its technology build a weighting of 14.7 per cent in Tesla in its Qraft AI-Enhanced US Large Cap Momentum ETF (AMOM) in August last year, but when it rebalanced a month later on September 1 it sold it all.

The ETF began buying Tesla again in November, amassing a stake of 7.6 per cent by January this year, but in the February rebalancing it sold the entire holding once again. In each case, the sale anticipated a sharp decline in the price of Tesla, and the fund was able to profit from the subsequent rise when it bought back in.

"Alpha [excess returns above the market] is getting harder and harder to find," said Francis Oh, managing director and head of AI ETFs at Qraft, pointing out that humans can become emotionally attached to certain stocks, impeding their portfolio returns. "Our model has no human bias."

Academic research has certainly shown that humans tend to be reluctant to crystallise losses, while conversely they feel driven to realise gains, sometimes too early.

However, it is arguable whether the AI systems used by Qraft and EquBot can really be said to eliminate human bias, because both are supported by large teams of data scientists who are constantly enhancing their models: EquBot has teamed up with IBM Watson, and Qraft has its own dedicated team in South Korea.

"The machine only has historical data. It sees opportunities according to the rules it has been programmed for," Greg Davies, head of behavioural science at consultancy Oxford Risk, pointed out.

Chris Natividad, chief investment officer at EquBot, agreed: "The system only knows what it knows, and it's historical," he said, adding that in addition to humans deciding what new information the self-learning system should be given, data scientists also needed to check the outcomes "so we can explain it to investors."

Both of EquBot's ETFs, the AI Powered Equity ETF (AIEQ) and the AI Powered International Equity ETF (AIIQ), have outperformed their respective benchmarks since inception.

Similar successes have been notched up by Qraft's suite of AI-powered vehicles. But the outperformance narrative is less clear when compared with the plain vanilla SPDR S&P 500 ETF (SPY) so far this year.

AMOM and Qraft's other funds, the Qraft AI-Enhanced Large Cap ETF (QRFT), the Qraft AI-Enhanced US High Dividend ETF (HDIV) and the Qraft AI-Enhanced US Next Value ETF, delivered returns of between 15.3 per cent and 20.8 per cent during the eight months to the end of August. While respectable, none of them quite matched the 21.6 per cent return of SPY over the same period.

AIEQ also just undershot SPY, notching up a 21.3 per cent gain in the eight months to the end of August, while AIIQ delivered only 12.2 per cent.

Despite the unremarkable returns this year, Oh and Natividad remain convinced that their models have much to offer.

"The velocity, variety and volume of data is exploding," said Natividad, adding that bringing in new data sources was a bit like adding more pixels to an online image: "You get a clearer picture." He said asset managers and index providers were embroiled in an arms race for data.

Oh said value and momentum factors were becoming so short-lived and fragmented that AI systems helped to find opportunities.

EquBot scours news, social media, industry and analyst reports and financial statements to build predictive models. It also looks at things such as job posts. Qraft also uses a variety of so-called structured and unstructured data sources to drive its models.

However, despite the promise of the technology, assets under management for the ETFs at both companies remain modest. AIEQ and AIIQ have less than $200m in assets between them, although Natividad said a partnership with HSBC, which was using the two EquBot indices, meant that there was $1.4bn tracking the strategies.

Qraft's ETFs have attracted less than $70m across all four vehicles, although Oh said the business model was once again focused on B2B advisory asset allocation modelling.

Rony Abboud, ETF analyst at TrackInsight, said investors probably wanted to know more. He emphasised the importance of due diligence, adding: "The more data points used, the higher the probability of having an error. So where do they get their data from and how accurate is it?"

Despite misgivings, adoption of AI techniques in the investment world is increasing. "Natural language processing is certainly a growing area," said Emerald Yau, head of equity index product management for the Apac region at FTSE Russell, which has made its first foray into NLP-powered offerings with its launch of a suite of innovation-themed indices.

However, Oxford Risk's Davies warned that while algorithms were good at finding arbitrage opportunities, they could not deal with ambiguity.

"The problem with the investing world is the rules are not static," he said, adding that humans still retained an edge: "If humans learn one thing in one context they can transfer it to another."

See the original post here:

AI adoption in the ETF industry begins to grow - Financial Times

Parrot wants pro pilots to test its new ANAFI Ai drone – DroneDJ

Posted: at 5:36 pm

If you're a professional drone pilot who routinely uses flying robots in complex environments, you may want to check out this Early Access Program being offered by Parrot. The drone maker is lending out the new ANAFI Ai drone for a period of two months to test users who can help the company evaluate the machine's 4G capabilities.

The European drone manufacturer is looking for professionals from inspection, construction, infrastructure, energy utilities, public safety, surveying, agriculture, and defense sectors to:

- Inspect buildings with strong 4G connectivity during built-up urban missions.
- Map long-distance electric power lines in 48 MP at 1 fps.
- Quickly generate 3D models of a building just by clicking on its land register in the new FreeFlight 7 app.
- Create precise flight plans in complex environments thanks to a unique obstacle avoidance system.
- Benefit from the embedded Secure Element to protect sensitive data.
- Develop flight missions with the AirSDK and contribute to Parrot's open-source piloting app.

In return, you'd be expected to stay in contact with the Parrot team throughout the loan period, sharing datasets, photogrammetry 3D models, flight logs, and usage feedback.

Readers may recall that the ANAFI Ai is the first commercial drone to use 4G LTE as the primary data link between the drone and the operator. The bird features a uniquely designed omnidirectional obstacle avoidance system, a 48 MP imaging sensor, 4K 60fps video, and up to 32 minutes of flight time in an airframe that weighs less than 2 pounds.

In the United States, Parrot is teaming exclusively with mobile network provider Verizon and its drone software subsidiary Skyward to help operators utilize 4G out of the box. However, despite the splash made by this first-of-its-kind connected drone, some industry experts have questioned whether a 4G connectivity premium would actually be worth its price. The results from this program would, no doubt, help to answer those questions as well.

Interested? You can apply for the ANAFI Ai 4G drone Early Access Program by filling in this form here.

The rest is here:

Parrot wants pro pilots to test its new ANAFI Ai drone - DroneDJ

Could these AI robots replace farmers and make agriculture more sustainable? – Euronews

Posted: at 5:36 pm

Robots powered by artificial intelligence could farm more sustainably than traditional agriculture, claims one Silicon Valley company.

Agricultural technology start-up Iron Ox says that its mission is to make the global agriculture sector carbon negative. It has just secured €47 million ($53 million) from investors including Bill Gates.

CEO Brandon Alexander can't be accused of lacking experience when it comes to food production. He spent every summer of his childhood on his grandparents' farm, picking cotton, potatoes, or peanuts under the Texas sun.

Yet, with a degree in robotics "precisely to escape farm work", Alexander says he couldn't shake the feeling that he could have a bigger impact working in agriculture.

A feeling that only grew when he learnt that 40 per cent of food grown worldwide is thrown away before reaching our shopping baskets.

Alexander left his previous job in 2015 to take a six-month road trip through California. He wanted to discover first-hand what problems American farmers were facing and figure out how automation could help.

Along the way, he learnt about extreme water scarcity, difficulties of finding labour, and a whole host of other issues. Armed with this knowledge, Alexander launched a startup focusing on autonomous farming in 2018.

"To really eliminate waste, to really get to that next level of sustainability and impact, we have to rethink the entire grow process," Alexander explains.

Now, in the company's greenhouse in Gilroy, California, two robots named Grover and Ada grow basil, strawberries and other crops using a hydroponic system.

Iron Ox's system uses artificial intelligence to ensure each plant gets the optimal levels of sunshine, water, and nutrients. Grover is in charge of carrying the plants to the dosing station, where it hands off the produce to Ada, a robotic arm.

The stop is akin to a doctor's visit. Sensors check the water for nutrient levels and any other ingredients needed for healthy growth. After the examination, customised doses of nutrients are automatically delivered to the plants through the hydroponics system.

This way, the farm produces less waste and uses only the amount of water it really needs. Data is continuously collected from the crops, helping the AI learn what the plants need, improving their yield and reducing their environmental impact.
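The sense-then-dose loop described above can be sketched in a few lines. This is purely our illustration, not Iron Ox code; the nutrient names and target values are invented:

```python
# Hypothetical closed-loop dosing: compare measured nutrient levels against
# per-crop targets and dose only the shortfall, so nothing is over-applied.

TARGETS_PPM = {"nitrogen": 150.0, "phosphorus": 50.0, "potassium": 200.0}

def compute_doses(readings_ppm: dict) -> dict:
    """Return the amount of each nutrient (ppm) to add; never dose below zero."""
    return {
        nutrient: max(0.0, target - readings_ppm.get(nutrient, 0.0))
        for nutrient, target in TARGETS_PPM.items()
    }

# Example sensor reading from one tray at the dosing station.
print(compute_doses({"nitrogen": 120.0, "phosphorus": 55.0, "potassium": 180.0}))
# -> {'nitrogen': 30.0, 'phosphorus': 0.0, 'potassium': 20.0}
```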

Read more:

Could these AI robots replace farmers and make agriculture more sustainable? - Euronews

Sell real estate better with AI and fully automated marketing – Inman

Posted: at 5:36 pm

Utilizing digital marketing when selling real estate is key to ensuring that you optimize your sales operations. However, the digital landscape is changing rapidly, and as AI solutions become more and more powerful, the industry needs to stay alert to the commercial opportunities that come with them. Implementing AI and automation in your digital marketing strategy will provide competitive advantages and create more streamlined processes.

Recent statistics from Statista document that 97% of all home buyers in the United States in 2020 used the internet for home searching. This percentage alone is a reason to investigate if your business is reaching the full potential of its digital marketing strategy.

Because of low mortgage rates and remarkably high demand compared to supply, due in large part to the pandemic and work-from-home opportunities, the housing market in the US is booming right now. However, this is not the case for all areas or all housing types. And like all cycles, this sentiment will change again, either slowly over time or due to an unforeseen disruption in the market.

Your real estate business will be much better equipped to perform well throughout any type of cycle or market condition with a proactive digital marketing strategy in place. By investing in an AI-driven and automated digital marketing solution, you will increase your chances of delivering superior results to your clients while streamlining the internal operations for your agents.

Marketer's own research, based on 54,000 property transactions carried out over a period of almost three years, documents that implementing AI and automation in your marketing operations can return an average of 13.5 times the marketing investment for one of the most common housing types. If the ambition is to do well for your clients, such results cannot be ignored by real estate professionals.

For any successful property transaction, the goal should always be to identify, get in contact with, and present the given property to those who are most interested in it. So, the million-dollar question is how to accomplish that, and how to do it in the most effective way. A defined strategy for targeting the most likely interested audience is key to converting perceived interest into real leads and, ultimately, to achieving the best possible result for both the home seller and the realtor. A firm focus on quality rather than quantity in your approach to lead accumulation will increase the chance of utilising the potential of each individual property. To succeed with this, there is no alternative to advanced, automated solutions and efficient use of relevant data.

A proven digital marketing solution with integrated automation and AI capabilities will normally have all the tools needed to run a streamlined marketing operation for both small and large real estate firms. The results will be immense for those that transition from more traditional, manual marketing operations, and substantial for those moving from less advanced digital solutions. Agents will also free up a lot of time, enabling them to focus more on what they are normally good at: connecting directly with potential buyers.

Marketing properties through an AI-powered, automated platform involves several key parts. The most important are (1) finding the right target audience, (2) tailoring the communication accordingly, and (3) advertising in the right places at the right time. Your digital marketing solution will also handle all necessary campaign management following the launch of a campaign, including optimisation and making sure that the overall budget is spent in the best possible way.

By combining historical data, AI-driven technology, and real-time analysis, Marketer's intuitive products ensure you market your properties with the right content, to the right people, at the right time, and in the right channels. The benefits for both you and your clients can be immense. Curious to learn more? Click here to read more about the positive effects of using Marketer's AI-powered platform for selling real estate.

Here is the original post:

Sell real estate better with AI and fully automated marketing - Inman

Improved algorithms may be more important for AI performance than faster hardware – VentureBeat

Posted: September 20, 2021 at 9:30 am

When it comes to AI, algorithmic innovations are substantially more important than hardware, at least where the problems involve billions to trillions of data points. That's the conclusion of a team of scientists at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), who conducted what they claim is the first study on how fast algorithms are improving across a broad range of examples.

Algorithms tell software how to make sense of text, visual, and audio data so that it can, in turn, draw inferences from them. For example, OpenAI's GPT-3 was trained on webpages, ebooks, and other documents to learn how to write papers in a humanlike way. The more efficient the algorithm, the less work the software has to do. And as algorithms are enhanced, less computing power should be needed, in theory. But this isn't settled science. AI research and infrastructure startups like OpenAI and Cerebras are betting that algorithms will have to increase in size substantially to reach higher levels of sophistication.

The CSAIL team, led by MIT research scientist Neil Thompson, who previously coauthored a paper showing that algorithms were approaching the limits of modern computing hardware, analyzed data from 57 computer science textbooks and more than 1,110 research papers to trace the history of where algorithms improved. In total, they looked at 113 algorithm families, or sets of algorithms that solved the same problem, that had been highlighted as most important by the textbooks.

The team reconstructed the history of the 113 families, tracking each time a new algorithm was proposed for a problem and making special note of those that were more efficient. From the 1940s to now, the team found an average of eight algorithms per family, of which a couple improved the family's efficiency.

For large computing problems, 43% of algorithm families had year-on-year improvements that were equal to or larger than the gains from Moore's law, the principle that the speed of computers roughly doubles every two years. In 14% of problems, the performance improvements vastly outpaced those that came from improved hardware, with the gains from better algorithms being particularly meaningful for big data problems.
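For intuition, the comparison works like this (a minimal sketch with an invented speedup figure, not data from the study): doubling every two years implies an annual hardware gain of roughly 41%, so an algorithm family beats hardware whenever its annualized improvement exceeds that factor.

```python
# Compare an algorithm family's improvement rate against Moore's-law gains.
# Doubling every two years means an annual factor of 2 ** (1/2), about 1.41x.

MOORE_ANNUAL_FACTOR = 2 ** (1 / 2)

def annualized_factor(total_speedup: float, years: float) -> float:
    """Convert a total speedup over some span of years into a per-year factor."""
    return total_speedup ** (1 / years)

# Invented example: an algorithmic advance yields a 50x speedup over 5 years.
algo_annual = annualized_factor(50.0, 5.0)  # ~2.19x per year

print(f"Algorithm:   {algo_annual:.2f}x per year")
print(f"Moore's law: {MOORE_ANNUAL_FACTOR:.2f}x per year")
print("Algorithm outpaces hardware" if algo_annual > MOORE_ANNUAL_FACTOR
      else "Hardware gains dominate")
```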

The new MIT study adds to a growing body of evidence that the size of algorithms matters less than their architectural complexity. For example, earlier this month, a team of Google researchers published a study claiming that a model much smaller than GPT-3, the fine-tuned language net (FLAN), bests GPT-3 by a large margin on a number of challenging benchmarks. And in a 2020 survey, OpenAI found that since 2012, the amount of compute needed to train an AI model to the same performance on classifying images in a popular benchmark, ImageNet, has been decreasing by a factor of two every 16 months.

There are findings to the contrary. In 2018, OpenAI researchers released a separate analysis showing that from 2012 to 2018, the amount of compute used in the largest AI training runs grew more than 300,000 times, with a 3.5-month doubling time, exceeding the pace of Moore's law. But assuming algorithmic improvements receive greater attention in the years to come, they could solve some of the other problems associated with large language models, like environmental impact and cost.

In June 2020, researchers at the University of Massachusetts at Amherst released a report estimating that the amount of power required for training and searching a certain model involves the emission of roughly 626,000 pounds of carbon dioxide, equivalent to nearly five times the lifetime emissions of the average U.S. car. GPT-3 alone used 1,287 megawatt-hours of electricity during training and produced 552 metric tons of carbon dioxide emissions, a Google study found, the same amount emitted by 100 average homes' electricity usage over a year.

On the expenses side, a Synced report estimated that the University of Washington's Grover fake news detection model cost $25,000 to train; OpenAI reportedly racked up $12 million training GPT-3; and Google spent around $6,912 to train BERT. While AI training costs dropped 100-fold between 2017 and 2019, according to one source, these amounts far exceed the computing budgets of most startups and institutions, let alone independent researchers.

"Through our analysis, we were able to say how many more tasks could be done using the same amount of computing power after an algorithm improved," Thompson said in a press release. "In an era where the environmental footprint of computing is increasingly worrisome, this is a way to improve businesses and other organizations without the downside."

Read more from the original source:

Improved algorithms may be more important for AI performance than faster hardware - VentureBeat

This AI could predict 10 years of scientific priorities – if we let it – MIT Technology Review

Posted: at 9:30 am

The survey committee, which receives input from a host of smaller panels, takes into account a gargantuan amount of information to create research strategies. Although the Academies won't release the committee's final recommendation to NASA for a few more weeks, scientists are itching to know which of their questions will make it in, and which will be left out.

"The Decadal Survey really helps NASA decide how they're going to lead the future of human discovery in space, so it's really important that they're well informed," says Brant Robertson, a professor of astronomy and astrophysics at UC Santa Cruz.

One team of researchers wants to use artificial intelligence to make this process easier. Their proposal isn't for a specific mission or line of questioning; rather, they say, their AI can help scientists make tough decisions about which other proposals to prioritize.

The idea is that by training an AI to spot research areas that are either growing or declining rapidly, the tool could make it easier for survey committees and panels to decide what should make the list.

"What we wanted was to have a system that would do a lot of the work that the Decadal Survey does, and let the scientists working on the Decadal Survey do what they will do best," says Harley Thronson, a retired senior scientist at NASA's Goddard Space Flight Center and lead author of the proposal.

Although members of each committee are chosen for their expertise in their respective fields, it's impossible for every member to grasp the nuance of every scientific theme. The number of astrophysics publications increases by 5% every year, according to the authors. That's a lot for anyone to process.

That's where Thronson's AI comes in.

It took just over a year to build, but eventually, Thronson's team was able to train it on more than 400,000 pieces of research published in the decade leading up to the Astro2010 survey. They were also able to teach the AI to sift through thousands of abstracts to identify both low- and high-impact areas from two- and three-word topic phrases like "planetary system" or "extrasolar planet".

According to the researchers' whitepaper, the AI successfully backcasted six popular research themes of the last 10 years, including a meteoric rise in exoplanet research and observation of galaxies.
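To see what such backcasting involves, here is a minimal sketch (our illustration with an invented mini-corpus, not the team's actual model): count how often a topic phrase appears in each year's abstracts, then fit a trend to flag rising or declining themes.

```python
# Toy trend detection over topic phrases in a corpus of abstracts.
import numpy as np

def phrase_counts_by_year(abstracts_by_year: dict, phrase: str) -> dict:
    """Count phrase occurrences per year; abstracts_by_year maps year -> list of strings."""
    return {year: sum(text.lower().count(phrase) for text in texts)
            for year, texts in abstracts_by_year.items()}

def trend_slope(counts_by_year: dict) -> float:
    """Least-squares slope of mentions per year; positive means a rising theme."""
    years = np.array(sorted(counts_by_year), dtype=float)
    counts = np.array([counts_by_year[y] for y in sorted(counts_by_year)], dtype=float)
    return np.polyfit(years, counts, 1)[0]

# Invented mini-corpus: "extrasolar planet" mentions climb over the decade.
corpus = {
    2001: ["a survey of stellar populations in the galactic halo"],
    2005: ["an extrasolar planet candidate", "extrasolar planet atmospheres"],
    2009: ["extrasolar planet transit timing"] * 5,
}
counts = phrase_counts_by_year(corpus, "extrasolar planet")
print(counts, "slope:", round(trend_slope(counts), 2))  # positive slope -> rising theme
```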

"One of the challenging aspects of artificial intelligence is that they sometimes will predict, or come up with, or analyze things that are completely surprising to the humans," says Thronson. "And we saw this a lot."

Thronson and his collaborators think the steering committee should use their AI to help review and summarize the vast amounts of text the panel must sift through, leaving human experts to make the final call.

Their research isn't the first to try to use AI to analyze and shape scientific literature. Other AIs have already been used to help scientists peer-review their colleagues' work.

But could it be trusted with a task as important and influential as the Decadal Survey?

Read the rest here:

This AI could predict 10 years of scientific priorities – if we let it - MIT Technology Review

Abductive inference: The blind spot of artificial intelligence – TechTalks

Posted: at 9:30 am

Welcome to AI book reviews, a series of posts that explore the latest literature on artificial intelligence.

Recent advances in deep learning have rekindled interest in the imminence of machines that can think and act like humans, or artificial general intelligence. By following the path of building bigger and better neural networks, the thinking goes, we will be able to get closer and closer to creating a digital version of the human brain.

But this is a myth, argues computer scientist Erik Larson, and all evidence suggests that human and machine intelligence are radically different. Larson's new book, The Myth of Artificial Intelligence: Why Computers Can't Think the Way We Do, discusses how widely publicized misconceptions about intelligence and inference have led AI research down narrow paths that are limiting innovation and scientific discoveries.

And unless scientists, researchers, and the organizations that support their work change course, Larson warns, they will be doomed to a "resignation to the creep of a machine-land, where genuine invention is sidelined in favor of futuristic talk advocating current approaches, often from entrenched interests."

From a scientific standpoint, the myth of AI assumes that we will achieve artificial general intelligence (AGI) by making progress on narrow applications, such as classifying images, understanding voice commands, or playing games. But the technologies underlying these narrow AI systems do not address the broader challenges that must be solved for general intelligence capabilities, such as holding basic conversations, accomplishing simple chores in a house, or other tasks that require common sense.

"As we successfully apply simpler, narrow versions of intelligence that benefit from faster computers and lots of data, we are not making incremental progress, but rather picking the low-hanging fruit," Larson writes.

The cultural consequence of the myth of AI is ignoring the scientific mystery of intelligence and endlessly talking about ongoing progress on deep learning and other contemporary technologies. This myth discourages scientists from thinking about new ways to tackle the challenge of intelligence.

"We are unlikely to get innovation if we choose to ignore a core mystery rather than face it up," Larson writes. "A healthy culture for innovation emphasizes exploring unknowns, not hyping extensions of existing methods... Mythology about inevitable success in AI tends to extinguish the very culture of invention necessary for real progress."

You step out of your home and notice that the street is wet. Your first thought is that it must have been raining. But it's sunny and the sidewalk is dry, so you immediately cross out the possibility of rain. As you look to the side, you see a road wash tanker parked down the street. You conclude that the road is wet because the tanker washed it.

This is an example of inference, the act of going from observations to conclusions, and it is the basic function of intelligent beings. We're constantly inferring things based on what we know and what we perceive. Most of it happens subconsciously, in the background of our mind, without focus and direct attention.

"Any system that infers must have some basic intelligence, because the very act of using what is known and what is observed to update beliefs is inescapably tied up with what we mean by intelligence," Larson writes.

AI researchers base their systems on two types of inference machines: deductive and inductive. Deductive inference uses prior knowledge to reason about the world. This is the basis of symbolic artificial intelligence, the main focus of researchers in the early decades of AI. Engineers create symbolic systems by endowing them with a predefined set of rules and facts, and the AI uses this knowledge to reason about the data it receives.

Inductive inference, which has gained more traction among AI researchers and tech companies in the past decade, is the acquisition of knowledge through experience. Machine learning algorithms are inductive inference engines. An ML model trained on relevant examples will find patterns that map inputs to outputs. In recent years, AI researchers have used machine learning, big data, and advanced processors to train models on tasks that were beyond the capacity of symbolic systems.

A third type of reasoning, abductive inference, was first introduced by American scientist Charles Sanders Peirce in the 19th century. Abductive inference is the cognitive ability to come up with intuitions and hypotheses, to make guesses that are better than random stabs at the truth.

For example, there can be numerous reasons for the street to be wet (including some that we haven't directly experienced before), but abductive inference enables us to select the most promising hypotheses, quickly eliminate the wrong ones, look for new ones, and reach a reliable conclusion. As Larson puts it in The Myth of Artificial Intelligence, "We guess, out of a background of effectively infinite possibilities, which hypotheses seem likely or plausible."
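As a toy illustration of the wet-street example (ours, not the book's; the hypotheses and scores are invented), abduction can be caricatured as scoring candidate explanations against the evidence and keeping the best one. Larson would add that this remains only a shadow of real abduction, since the machine cannot generate hypotheses outside the fixed list it is given:

```python
# Caricature of abduction: pick the hypothesis that best explains all observations.
# The candidate hypotheses and their explanatory scores (0..1) are invented.

observations = ["street_wet", "sidewalk_dry", "tanker_nearby"]

hypotheses = {
    "it rained":              {"street_wet": 0.9, "sidewalk_dry": 0.05, "tanker_nearby": 0.1},
    "tanker washed the road": {"street_wet": 0.9, "sidewalk_dry": 0.8,  "tanker_nearby": 0.9},
    "water main burst":       {"street_wet": 0.7, "sidewalk_dry": 0.3,  "tanker_nearby": 0.1},
}

def explanatory_score(likelihoods: dict) -> float:
    """How well a hypothesis accounts for every observed fact (product of scores)."""
    score = 1.0
    for fact in observations:
        score *= likelihoods[fact]
    return score

best = max(hypotheses, key=lambda h: explanatory_score(hypotheses[h]))
print(best)  # -> "tanker washed the road"
```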

Abductive inference is what many refer to as common sense. It is the conceptual framework within which we view facts or data and the glue that brings the other types of inference together. It enables us to focus at any moment on what's relevant among the ton of information that exists in our mind and the ton of data we're receiving through our senses.

The problem is that the AI community hasn't paid enough attention to abductive inference.

"Abduction entered the AI discussion with attempts at Abductive Logic Programming in the 1980s and 1990s, but those efforts were flawed and later abandoned. They were reformulations of logic programming, which is a variant of deduction," Larson told TechTalks.

Abduction got another chance in the 2010s as Bayesian networks, inference engines that try to compute causality. But like the earlier approaches, the newer approaches shared the flaw of not capturing true abduction, Larson said, adding that Bayesian and other graphical models are variants of induction. In The Myth of Artificial Intelligence, he refers to them as "abduction in name only."

For the most part, the history of AI has been dominated by deduction and induction.

"When the early AI pioneers like [Alan] Newell, [Herbert] Simon, [John] McCarthy, and [Marvin] Minsky took up the question of artificial inference (the core of AI), they assumed that writing deductive-style rules would suffice to generate intelligent thought and action," Larson said. "That was never the case, really, as should have been earlier acknowledged in discussions about how we do science."

For decades, researchers tried to expand the powers of symbolic AI systems by providing them with manually written rules and facts. The premise was that if you endow an AI system with all the knowledge that humans know, it will be able to act as smartly as humans do. But pure symbolic AI has failed for various reasons. Symbolic systems can't acquire and add new knowledge, which makes them rigid. Creating symbolic AI becomes an endless chase of adding new facts and rules, only for the system to make new mistakes that it can't fix. And much of our knowledge is implicit and cannot be expressed in rules and facts and fed to symbolic systems.

"It's curious here that no one really explicitly stopped and said 'Wait. This is not going to work!'" Larson said. "That would have shifted research directly towards abduction or hypothesis generation or, say, context-sensitive inference."

In the past two decades, with the growing availability of data and compute resources, machine learning algorithms, especially deep neural networks, have become the focus of attention in the AI community. Deep learning technology has unlocked many applications that were previously beyond the limits of computers. And it has attracted interest and money from some of the wealthiest companies in the world.

"I think with the advent of the World Wide Web, the empirical or inductive (data-centric) approaches took over, and abduction, as with deduction, was largely forgotten," Larson said.

But machine learning systems also suffer from severe limits, including the lack of causality, poor handling of edge cases, and the need for too much data. And these limits are becoming more evident and problematic as researchers try to apply ML to sensitive fields such as healthcare and finance.

Some scientists, including reinforcement learning pioneer Richard Sutton, believe that we should stick to methods that can scale with the availability of data and computation, namely learning and search. For example, as neural networks grow bigger and are trained on more data, they will eventually overcome their limits and lead to new breakthroughs.

Larson dismisses the scaling up of data-driven AI as fundamentally flawed as a model for intelligence. While both search and learning can provide useful applications, they are based on non-abductive inference, he reiterates.

"Search won't scale into commonsense or abductive inference without a revolution in thinking about inference, which hasn't happened yet. Similarly with machine learning, the data-driven nature of learning approaches means essentially that the inferences have to be in the data, so to speak, and that's demonstrably not true of many intelligent inferences that people routinely perform," Larson said. "We don't just look to the past, captured, say, in a large dataset, to figure out what to conclude or think or infer about the future."

Other scientists believe that hybrid AI, which brings together symbolic systems and neural networks, holds bigger promise for dealing with the shortcomings of deep learning. One example is IBM Watson, which became famous when it beat world champions at Jeopardy! More recent proof-of-concept hybrid models have shown promising results in applications where symbolic AI and deep learning alone perform poorly.

Larson believes that hybrid systems can fill in the gaps in machine-learning-only or rules-based-only approaches. As a researcher in the field of natural language processing, he is currently working on combining large pre-trained language models like GPT-3 with older work on the semantic web, in the form of knowledge graphs, to create better applications in search, question answering, and other tasks.

"But deduction-induction combos don't get us to abduction, because the three types of inference are formally distinct, so they don't reduce to each other and can't be combined to get a third," he said.

In The Myth of Artificial Intelligence, Larson describes attempts to circumvent abduction as "the inference trap."

"Purely inductively inspired techniques like machine learning remain inadequate, no matter how fast computers get, and hybrid systems like Watson fall short of general understanding as well," he writes. "In open-ended scenarios requiring knowledge about the world like language understanding, abduction is central and irreplaceable. Because of this, attempts at combining deductive and inductive strategies are always doomed to fail... The field needs a fundamental theory of abduction. In the meantime, we are stuck in traps."

The AI community's narrow focus on data-driven approaches has centralized research and innovation in a few organizations that have vast stores of data and deep pockets. With deep learning becoming a useful way to turn data into profitable products, big tech companies are now locked in a tight race to hire AI talent, driving researchers away from academia by offering them lucrative salaries.

This shift has made it very difficult for non-profit labs and small companies to become involved in AI research.

"When you tie research and development in AI to the ownership and control of very large datasets, you get a barrier to entry for start-ups, who don't own the data," Larson said, adding that data-driven AI intrinsically creates winner-take-all scenarios in the commercial sector.

The monopolization of AI is in turn hampering scientific research. With big tech companies focusing on creating applications in which they can leverage their vast data resources to maintain an edge over their competitors, there's little incentive to explore alternative approaches to AI. Work in the field starts to skew toward narrow and profitable applications at the expense of efforts that can lead to new inventions.

"No one at present knows how AI would look in the absence of such gargantuan centralized datasets, so there's nothing really on offer for entrepreneurs looking to compete by designing different and more powerful AI," Larson said.

In his book, Larson warns about the current culture of AI, which is "squeezing profits out of low-hanging fruit, while continuing to spin AI mythology." The illusion of progress on artificial general intelligence can lead to another AI winter, he writes.

But while an AI winter might dampen interest in deep learning and data-driven AI, it can open the way for a new generation of thinkers to explore new pathways. Larson hopes scientists start looking beyond existing methods.

In The Myth of Artificial Intelligence, Larson provides an inference framework that sheds light on the challenges that the field faces today and helps readers to see through the overblown claims about progress toward AGI or singularity.

"My hope is that non-specialists have some tools to combat this kind of inevitability thinking, which isn't scientific, and that my colleagues and other AI scientists can view it as a wake-up call to get to work on the very real problems the field faces," Larson said.

View original post here:

Abductive inference: The blind spot of artificial intelligence - TechTalks

AI Disruption: What VCs Are Betting On – Forbes

Posted: at 9:30 am

According to data from PitchBook, funding for AI deals has continued at a furious pace. In the latest quarter, the amount invested came to a record $31.6 billion. Note that there were 11 deals that closed at more than $500 million.

Granted, plenty of these startups will fade away or even go bust. But of course, some will ultimately disrupt industries and change the landscape of the global economy.

"To be disrupted, you have to believe the AI is going to make 10x better recommendations than what's available today," said Eric Vishria, who is a General Partner at Benchmark. "I think that is likely to happen in really complex, high-dimensional spaces, where there are so many intermingled factors at play that finding correlations via standard analytical techniques is really difficult."

So then what are some of the industries that are vulnerable to AI disruption? Well, let's see where some of the top VCs are investing today:

Software Development: There have been advances in DevOps and IDEs. Yet software development remains labor intensive. And it does not help that it's extremely difficult to recruit qualified developers.

But AI can make a big difference. "Advancements in state-of-the-art natural language processing algorithms could revolutionize software development, initially by significantly reducing the boilerplate code that software developers write today and in the long run by writing entire applications with little assistance from humans," said Nnamdi Iregbulem, who is a Partner at Lightspeed Venture Partners.

Consider the use of GPT-3, a neural network trained to create content. "Products like GitHub Copilot, which are also based on GPT-3, will also disrupt software development," said Jai Das, who is the President and Partner at Sapphire Ventures.

Cybersecurity: This is one of the biggest software markets. But the technologies really need retooling. After all, there continue to be more and more breaches and hacks.

"Cybersecurity is likely to turn into an AI-vs-AI game very soon," said Deepak Jeevankumar, who is a Managing Director at Dell Technologies Capital. "Sophisticated attackers are already using AI and bots to get over defenses."

Construction: This is a massive industry and will continue to grow as the global population increases. Yet construction has seen relatively small amounts of IT investment. But AI could be a game changer.

"An incremental 1% increase in efficiency can mean millions of dollars in cost savings," said Shawn Carolan, who is a Managing Partner at Menlo Ventures. "There are many companies, like Openspace.ai, doing transformative work using AI in the construction space. Openspace leverages AI and machine vision to essentially become a photographic memory for job sites. It automatically uploads and stitches together images of a job site so that customers can do a virtual walk-through and monitor the project at any time."

Talent Management: HR has generally lagged in innovation. The fact is that many of the processes are manual and inefficient.

But AI can certainly be a solution. In fact, AI startups like Eightfold.ai have been able to post substantial growth in the HR category. In June, the company announced funding of $220 million, which was led by the SoftBank Vision Fund 2.

"Every single company is talking about talent as a key priority, and the companies that embrace AI to find better candidates faster, cheaper, at scale, they have a true competitive advantage," said Kirthiga Reddy, who is a Partner at SoftBank. "Understanding how to use AI to amplify the interactions in the talent lifecycle is a differentiator and advantage for these businesses."

Drug Discovery: The development of the Covid-19 vaccines from companies like Pfizer, Moderna, and BioNTech has highlighted the power of innovation in the healthcare industry. But despite this, there is still much to be done. The fact is that drug development is costly and time-consuming.

"It's becoming impossible to process these large datasets without using the latest AI/ML technologies," said Dusan Perovic, who is a partner at Two Sigma Ventures. "Companies that are early adopters of these data science tools, and thereby are able to analyze larger datasets, are going to make faster progress than companies that rely on older data analytics tools."

Tom (@ttaulli) is an advisor/board member to startups and the author of Artificial Intelligence Basics: A Non-Technical Introduction, The Robotic Process Automation Handbook: A Guide to Implementing RPA Systems and Implementing AI Systems: Transform Your Business in 6 Steps. He has also developed various online courses, such as for the COBOL programming language.

See the rest here:

AI Disruption: What VCs Are Betting On - Forbes

AI in Pro Sport: Laying the Groundwork – SportTechie

Posted: at 9:30 am

Even advocates of artificial intelligence (AI) will acknowledge that the concept has endured some false starts over the years. However, the past decade has brought a transformation in how AI is perceived in sport, with the clubs, leagues, organizations and businesses that underpin the industry discovering the innovations that can emerge through the simulation of human intelligence in machines.

"Members of the public rely on this technology every day, and we take it for granted," says Dr. Patrick Lucey, chief scientist at sports data and analytics provider Stats Perform.

"The availability of data these days is one big difference that has driven AI adoption. There is also a greater appreciation of AI, and the return on investment is there to see via objective measures. It also helps that AI can be applied across all business segments."

Barriers to Adoption

AI is fueled by crunching swathes of data via iterative processing and algorithms that allow software platforms to identify patterns and predict future outcomes. So, it follows that the increasing volumes of data being collected and analyzed in the sports industry in recent years have refined such processes, generating more accurate results and bottom-line benefits.

However, given its definition, it is hardly surprising that AI has also been a difficult notion to grasp for many, especially given how it is often used interchangeably with machine learning, a strand of AI that focuses on how computers can imitate the way that humans learn.

This barrier to adoption, though, has slowly evaporated as clubs and franchises have gradually learned to gauge the real-life results from an idea that many initially considered to be abstract.

"When terms like AI and data science were first being bandied around, I was one of those who didn't understand the value of it," says Ben Mackriell, VP data, AI and pro products at Stats Perform.

"But now there is a greater level of understanding in the market in general that AI is simply a mechanism that enables better experiences, with the core ingredient being data. The challenge is to make AI consumable and break down some of the myths. The process is complex, but the output doesn't have to be complex."

Journey of Understanding

Sports clubs have been on this journey of understanding how deploying AI can ultimately improve results, and there is certainly no turning back now. From a performance perspective, more than 350 clubs across various sports rely on Stats Perform's data and technology services, of which AI is a central component.

Stats Perform was the first company to offer player tracking technology in basketball more than a decade ago. It is now unthinkable for a team in the NBA, as well as in any other leading league, not to have analysts on the payroll. "It is an area that has grown exponentially over the past 10 years," Mackriell adds. "Most Premier League clubs had one or two analysts a decade ago. Now, it is common for them to have more than 10 people working across multiple aspects of data analytics."

"Clubs are hiring data engineers now and you would not have seen that even just three years ago."

Vivid Illustration

During this summer's delayed UEFA Euro 2020 soccer tournament, Stats Perform presented a vivid illustration of how consumable its AI capabilities can be for fans across Europe and beyond with its Euros Prediction model. Through Stats Perform's public-facing digital platform, The Analyst, the model estimated the probability of each match outcome by using a series of inputs that ranged from historical team and player performances to betting market odds and team rankings.
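Stats Perform's actual model is proprietary, but as a rough sketch of how this kind of estimate can work (our own toy baseline with invented expected-goal inputs, not the Euros Prediction model itself), team-strength estimates can be turned into outcome probabilities by simulating goal counts:

```python
# Toy match-outcome model: simulate goals as Poisson draws from expected-goal
# rates and estimate win/draw/loss probabilities over many simulated matches.
import numpy as np

rng = np.random.default_rng(seed=42)

def outcome_probabilities(home_xg: float, away_xg: float, sims: int = 100_000) -> dict:
    """Estimate P(home win), P(draw), P(away win) from expected-goal rates."""
    home_goals = rng.poisson(home_xg, sims)
    away_goals = rng.poisson(away_xg, sims)
    return {
        "home win": float(np.mean(home_goals > away_goals)),
        "draw":     float(np.mean(home_goals == away_goals)),
        "away win": float(np.mean(home_goals < away_goals)),
    }

# Invented inputs; a production model would derive these from form, rankings
# and betting-market odds, and re-run the simulation as the match state changes.
print(outcome_probabilities(home_xg=1.6, away_xg=1.1))
```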

"Hundreds of thousands of scenarios were being crunched every time a goal went in," Mackriell says. For clubs, though, AI-driven predictive modelling can provide insights that delve even deeper. Stats Perform's Playing Styles framework, for example, takes into consideration numerous events and factors to determine a team's tendencies. Eight playing styles are put under the microscope, from build-up play to counter-attacks.

Such data-based insights can then be used to identify the roles of individual players within each style and also to analyze, crucially, in an age of sky-high salaries and transfer fees, how a possible new signing would slot into the existing team system. "Every action and phase on the field is broken down," Mackriell adds. "Every action on the pitch can be quantified in terms of how likely it is to lead to a goal, and you can see how individuals contribute towards a goal-scoring opportunity."

"This supports decision-making and assists in terms of scouting and investing in the team. One of the most common questions we are asked by a club is: how will this player's skills translate into our team and league? That is where teams are seeing a return on investment with AI."

Moneyball

For sports clubs and franchises, AI is Moneyball 2.0, using data to introduce layers of predictive insights that can help them make sound business decisions. Most importantly, it is about focusing on solving a problem at the outset. "We spend time with clubs across multiple sports to identify the problems they are trying to solve," Mackriell says. "This problem-solving approach is how we deploy AI as a company, rather than just trying to bring together AI tech and data."

Given increasing levels of data coverage, the results for clubs and franchises worldwide will become increasingly sophisticated, according to Lucey. "Sport has been a slow adopter as clubs are understandably private about how they operate," he says. "Like anything in sport, though, once there is success, there is a snowball effect."

See the original post:

AI in Pro Sport: Laying the Groundwork - SportTechie
