Artificial Intelligence Can Only Help Architecture if We Ask the Right Questions – ArchDaily



AI in the architecture industry has provoked much debate in recent years - yet it seems like very few of us know exactly what it is or why it has created this storm of emotions. There are professionals researching AI who know more about the field than I do, but I have hands-on experience using AI and algorithms in my design work over the past 10 years through various projects. This is one of the challenges that our field faces: how can we make practical use of these new tools?

Many people have reached out to me claiming that AI could not do their job and that being an architect is so much more than just composing a plan drawing or calculating the volume of a building envelope. They are right. But having said that, there is no reason not to be open to the possibility that AI can help us design even better buildings. There are a lot of tasks that are much better solved with computation than manually, and vice versa. In general, if we are able to reduce a problem to numbers or clearly define what we are trying to solve, AI will probably be able to solve it. If we are looking for subjective opinions or emotions, it might be trickier for an AI to help. Or, to be more precise, it might be trickier for us to provide the AI with the right tools to subjectively analyze our designs.

When we talk about AI within the field of architecture it often boils down to optimization. Where can we find more sellable square meters or how can we get more daylight into dark apartments? A bigger building and more windows might be the answer, but what other parameters might be affected by this?

Where there are a lot of parameters at stake that need to be weighed against each other, AI can help us a lot. Millions of scenarios can be evaluated, and the best selected, in the same amount of time that it takes for us to ride the subway to work. Our AI will present us with the ultimate suggestion based on the parameters we provided.

What if we forgot something? As soon as we start to optimize, we have to consider that the result will be no better than the parameters, training sets, and preferences we provided the AI with for solving the task. If we were to ask a thousand different people "Who's the better architect, Zaha Hadid or Le Corbusier?" we would probably get an even split of answers motivated by a thousand different reasons, since the question is highly subjective. In this case, there is no right or wrong, but if we asked who had designed the highest number of buildings, we could get a correct answer. Even if the answer from your AI is the correct one and mathematically optimal, you must consider whether the question itself was right.


Another important part of optimization is the question of how to weigh different features against each other. Is Gross Floor Area (GFA) more important than daylight, and if it is, how much more? This is a decision that the architect, the designer of the algorithm, or the client needs to make. Humans have opinions, a specific taste, a preferred style, and so on. AI does not.
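
To make that concrete, here is a minimal sketch (in Python, purely illustrative and not from the article) of such a weighted trade-off; the metrics, weights, and design options are all assumptions:

```python
# Illustrative weighted scoring of design options, assuming both metrics are
# normalized to a 0..1 range. The weights encode a human judgment that the
# AI cannot supply on its own.
def score(option, w_gfa=0.7, w_daylight=0.3):
    return w_gfa * option["gfa"] + w_daylight * option["daylight"]

options = [
    {"name": "A", "gfa": 0.9, "daylight": 0.4},  # bigger building, darker
    {"name": "B", "gfa": 0.7, "daylight": 0.8},  # smaller, brighter
]
print(max(options, key=score)["name"])  # "A" with these weights; raise w_daylight and "B" wins
```

Change the weights and the "optimal" design changes with them, which is exactly why this choice belongs to humans.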

Optimizing for maximum gross floor area in parallel with a daylight analysis will give you a certain result, but it might not be the same thing as designing a great building. Yet on the flip side, not being able to meet the client's expectations for GFA, or not being able to make an apartment inhabitable due to lack of light, might result in no building at all.

AI presents many new opportunities for our profession, and I believe that the architect is harder to replace with AI than many other professions due to our job's subjective nature. The decisions we make to create great buildings often depend on opinions, and as a result there is no right or wrong. But I also believe that there are a lot of things we can improve on. We do not have to go as far as using AI: in many cases, we would benefit a lot from simple automation. There are many manual tasks performed by architects at the moment that have to be done to realize a project but do not add any value to the final product. If AI or automation can help us with these tasks, we can spend more time doing what we do best - designing great architecture, adding value for the people who inhabit our projects and for our cities more widely.

Originally posted here:
Artificial Intelligence Can Only Help Architecture if We Ask the Right Questions - ArchDaily

Women wanted: Why now could be a good time for women to pursue a career in AI – CNBC

The coronavirus pandemic has upended countless jobs and even entire industries, leaving many wondering which will emerge out of the other side.

One industry likely to endure or even thrive under the virus, however, is artificial intelligence (AI), which could offer a glimpse into one of the rising careers of the future.

"This outbreak is creating overwhelming uncertainty and also greater demand for AI," IBM's vice president of data and AI, Ritika Gunnar told CNBC Make It.

Already, AI has been deployed sweepingly to help tackle the pandemic. Hospitals use the technology to diagnose patients, governments employ it in contact-tracing apps, and companies rely on it to support the biggest work-from-home experiment in history.

And that demand is only set to rise. Market research company International Data Corporation says it expects the number of AI jobs globally to grow 16% this year.

That could create new opportunities in an otherwise challenging jobs market. But the industry will need more women, in particular, if it is to overcome some of its historic bias challenges.

"In order to remove bias from AI, you need diverse perspectives among the people working on it. That means more women, and more diversity overall, in AI," said Gunnar.

The industry has been making progress lately. In a new report released Wednesday, IBM found the majority (85%) of AI professionals think the industry has become more diverse over recent years, which has had a positive impact on the technology.

Of the more than 3,200 people surveyed across North America, Europe and India, 86% said they are now confident in AI systems' ability to make decisions without bias.


However, Lisa Bouari, executive director at OutThought AI Assistants and a recipient of IBM's Women Leaders in AI awards, said more needs to be done to encourage women into the industry and keep them there.

"Attracting and retaining women are two halves of the same issue supporting a greater balance of women in AI," said Bouari. "The issues highlighted in the report around career progression, and hurdles, hold the keys to helping women stay in AI careers, and ultimately attracting more women as the status quo evolves."

For Gunnar, that means getting more women and girls excited about AI from a young age.

"We should expose girls to AI, math and science at a much earlier age so they have a support system in place," said Gunnar.

Indeed, IBM's report noted that although more women have been drawn to the industry over recent years, they did not consider AI a viable career path until later in life due to a lack of support during early education.

A plurality of men (46%) said they became interested in a tech career in high school or earlier, while a majority of women (53%) only considered it a possible path during their undergraduate degree or grad school.

But Bouari said she's hopeful that the surge in demand for AI currently can help drive the industry forward.

"The AI opportunities from this crisis are numerous and the career opportunities are there if we can successfully move hurdles and adopt it efficiently," she said.


Read this article:
Women wanted: Why now could be a good time for women to pursue a career in AI - CNBC

IBM Leverages Artificial Intelligence to Automate IT Infrastructure – HITInfrastructure.com

May 07, 2020 - IBM recently launched new capabilities and services powered by artificial intelligence to help chief information officers (CIOs) help their businesses recover and restart in the wake of the COVID-19 pandemic.

The announcements, made at IBM's Think Digital conference, are intended to help CIOs automate their IT infrastructures to become more resilient to disruptions and to reduce costs.

"What we've learned from companies all over the world is that there are three major factors that will determine the success of AI in business: language, automation and trust," said Rob Thomas, senior vice president, cloud and data platform at IBM.

"The COVID-19 crisis and increased demand for remote work capabilities are driving the need for AI automation at an unprecedented rate and pace. With automation, we are empowering next-generation CIOs and their teams to prioritize the crucial work of today's digital enterprises: managing and mining data to apply predictive insights that help lead to more impactful business results and lower cost."

IDC, a market intelligence firm, predicted that, by 2024, enterprises powered by AI will be able to respond to customers, competitors, regulators, and partners 50 percent faster than those not using AI.

To that end, IBM launched IBM Watson AIOps as part of the Think Digital conference, which uses AI to automate how enterprises self-detect, diagnose, and respond to IT anomalies in real-time.

"Unforeseen IT incidents are costly for businesses. Watson AIOps allows organizations to introduce automation at the infrastructure level to help CIOs predict and shape future outcomes, focus on higher-value work, and build more responsive and intelligent networks that can stay up and running longer," IBM stated.

Built on Red Hat OpenShift, Watson AIOps runs across any cloud environment and works in collaboration with an ecosystem of partners, including Slack and Box.

"The greatest challenge for organizations is one of alignment. Slack is most valuable when it integrates tightly with the other tools customers use every day, bringing critical business information into channels where it can be collaborated on by teams," said Stewart Butterfield, Slack CEO and co-founder. "By using Slack with Watson AIOps, IT operators can effectively collaborate on incident solutions, allowing them to spend critical time solving problems rather than identifying them."

Aaron Levie, CEO of Box, also noted that secure data sharing and file access are vital, especially during this time of remote work.

"It is more important than ever before, and we're thrilled to expand our partnership with IBM to deliver content and collaboration across Watson AIOps, enabling IT organizations and businesses to get work done faster, simpler, and more securely," Levie said in the IBM announcement.

As part of the Think Digital conference rollout, IBM also launched the Accelerator for Application Modernization with AI, which was designed to help clients decrease the effort and costs associated with application modernization.

The accelerator includes various tools designed to optimize modernization and boost the analysis and recommendations for architectural and microservices options, IBM said. The accelerator uses learning AI models that adapt to preferred software engineering practices and stay up to date with the evolution of technology and platforms.

At the Think Digital conference, IBM also announced various new and updated capabilities designed to give CIOs guidance for operating successfully during a pandemic. The new capabilities are designed to help automate business planning, business operations, and call centers.

IBM Cloud Pak for Data, IBM's fully integrated data and AI platform, was also updated with new capabilities to help business leaders automate access to critical business-ready data. And a new update to IBM Cloud Pak for Automation, software for designing, building, and running automation apps, allows clients to easily create AI digital worker automation solutions.

Additionally, updates to IBM Watson Assistant, IBM's AI-based conversation platform, will automate the more complex interactions and potentially boost customer satisfaction while reducing operating costs. The assistant now has a pre-built user interface that requires no development effort to deploy and is designed with user experience-based best practices, IBM stated.

Link:
IBM Leverages Artificial Intelligence to Automate IT Infrastructure - HITInfrastructure.com

Global Artificial Intelligence in Agriculture Industry (2020 to 2026) – Developing Countries to Offer Significant Growth Opportunities – GlobeNewswire

Dublin, May 06, 2020 (GLOBE NEWSWIRE) -- The "Artificial Intelligence in Agriculture Market by Technology (Machine Learning, Computer Vision, and Predictive Analytics), Offering (Software, Hardware, AI-as-a-Service, and Services), Application, and Geography - Global Forecast to 2026" report has been added to ResearchAndMarkets.com's offering.

The AI in agriculture market is projected to grow at a CAGR of 25.5% from 2020 to 2026.

The AI in agriculture market growth is propelled by the increasing implementation of data generation through sensors and aerial images for crops, increasing crop productivity through deep-learning technology, and government support for the adoption of modern agricultural techniques. However, the high cost of gathering precise field data restrains the market growth. Developing countries, such as China, Brazil, and India, are likely to provide an opportunity for the players in the AI in agriculture market due to the increasing use of unmanned aerial vehicles/drones by these countries in their agricultural farms.

By technology, the machine learning segment is estimated to account for the largest share of the AI in agriculture market during the forecast period.

Machine learning-enabled solutions are being significantly adopted by agricultural organizations and farmers worldwide to enhance farm productivity and to gain a competitive edge in business operations. In the coming years, the application of machine learning in various agricultural practices is expected to rise exponentially.

By offering, the AI-as-a-Service segment is projected to register the highest CAGR from 2020 to 2026.

Increasing demand for machine learning tool kits and applications that are available in AI-based services, along with benefits, such as advanced infrastructure at minimal cost, transparency in business operations, and better scalability, is leading to the growth of the AI-as-a-Service segment.

By application, the precision farming segment held the largest market size in 2019.

Precision farming involves the usage of innovative artificial intelligence (AI) technologies, such as machine learning, computer vision, and predictive analytics tools, for increasing agricultural productivity. It comprises a technology-driven analysis of data acquired from the fields for increasing crop productivity. Precision farming helps in managing variations in the field accurately, thus enabling the growth of more crops using fewer resources and at reduced production costs. Precision devices integrated with AI technologies help in collecting farm-related data, thereby helping farmers make better decisions and increase the productivity of their lands.

Key Topics Covered:

1 Introduction

2 Research Methodology

3 Executive Summary

4 Premium Insights
4.1 Attractive Opportunities for the AI in Agriculture Market
4.2 AI in Agriculture Market, by Offering
4.3 AI in Agriculture Market, by Technology
4.4 AI in Agriculture Market for APAC, by Application & Country
4.5 AI in Agriculture Market, by Geography

5 Market Overview
5.1 Introduction
5.2 Market Dynamics
5.2.1 Drivers
5.2.1.1 Increasing Strain on Global Food Supply Owing to Rising Population
5.2.1.2 Increasing Implementation of Data Generation Through Sensors and Aerial Images for Crops
5.2.1.3 Increasing Crop Productivity Through Deep Learning Technology
5.2.1.4 Government Support to Adopt Modern Agricultural Techniques
5.2.2 Restraints
5.2.2.1 High Cost of Gathering Precise Field Data
5.2.3 Opportunities
5.2.3.1 Developing Countries to Offer Significant Growth Opportunities
5.2.3.2 Use of AI Solutions to Manage Small Farms (Less than 5 Hectares)
5.2.4 Challenges
5.2.4.1 Lack of Standardization
5.2.4.2 Lack of Awareness About AI Among Farmers
5.2.4.3 Limited Availability of Historical Data
5.3 Value Chain Analysis
5.4 Impact of COVID-19 on AI in Agriculture Market

6 Artificial Intelligence in Agriculture Market, by Technology
6.1 Introduction
6.2 Machine Learning
6.2.1 Machine Learning Technology to Hold the Largest Share of AI in Agriculture Market
6.3 Computer Vision
6.3.1 Computer Vision Technology is Expected to Grow at the Highest CAGR During the Forecast Period
6.4 Predictive Analytics
6.4.1 Increasing Predictive Analytics Applications is Expected to Drive the Growth of AI in Agriculture Market

7 Artificial Intelligence in Agriculture Market, by Offering
7.1 Introduction
7.2 Hardware
7.2.1 Technological Advancements in the Hardware Segment is Leading to the Widespread Adoption of AI in Agriculture
7.2.2 Processor
7.2.3 Storage Device
7.2.4 Network
7.3 Software
7.3.1 AI in Agriculture Market for Software Segment is Projected to Hold the Largest Market Share During the Forecast Period
7.3.2 AI Platform
7.3.3 AI Solution
7.4 AI-as-a-Service
7.4.1 AI-as-a-Service Segment is Expected to Grow at the Highest CAGR During the Forecast Period
7.5 Services
7.5.1 Increasing Requirement of Online and Offline Support Services is Leading to the Growth of This Segment
7.5.2 Deployment & Integration
7.5.3 Support & Maintenance

8 Artificial Intelligence in Agriculture Market, by Application
8.1 Introduction
8.2 Precision Farming
8.2.1 Precision Farming is Expected to Hold the Largest Market Share During the Forecast Period
8.2.2 Yield Monitoring
8.2.3 Field Mapping
8.2.4 Crop Scouting
8.2.5 Weather Tracking & Forecasting
8.2.6 Irrigation Management
8.3 Livestock Monitoring
8.3.1 Increasing Livestock Monitoring Applications is Driving the Growth of This Segment
8.4 Drone Analytics
8.4.1 Drone Analytics Application Expected to Grow at the Highest CAGR During the Forecast Period
8.5 Agriculture Robots
8.5.1 Increased Deep Learning Capabilities of Agriculture Robots is Driving the Growth of This Segment
8.6 Labor Management
8.6.1 Major Benefits Such As Reduced Production Costs Due to Labor Management Application is Leading to the Growth of This Segment
8.7 Others
8.7.1 Smart Greenhouse Management
8.7.2 Soil Management
8.7.2.1 Moisture Monitoring
8.7.2.2 Nutrient Monitoring
8.7.3 Fish Farming Management

9 Geographic Analysis
9.1 Introduction
9.2 Americas
9.2.1 North America
9.2.1.1 US
9.2.1.1.1 US Projected to Account for the Largest Size of the AI in Agriculture Market in North America
9.2.1.2 Canada
9.2.1.2.1 Increasing AI Technology Adoption is Leading to the Growth of Canadian AI in Agriculture Market
9.2.1.3 Mexico
9.2.1.3.1 AI in Agriculture Market in Mexico is Projected to Grow at the Highest CAGR During the Forecast Period
9.2.2 South America
9.2.2.1 Brazil
9.2.2.1.1 Brazil Expected to Hold the Largest Share in the South American AI in Agriculture Market
9.2.2.2 Argentina
9.2.2.2.1 Expanding Industrial Production in Argentina is Driving the Market
9.2.2.3 Rest of South America
9.3 Europe
9.3.1 UK
9.3.1.1 Increasing Adoption of AI-Based Solutions for Agriculture is Driving the UK Market
9.3.2 Germany
9.3.2.1 Germany Held the Largest Share of European AI in Agriculture Market in 2019
9.3.3 France
9.3.3.1 Increasing Number of Start-Ups Developing AI Solutions for Agriculture is Driving the Market in Europe
9.3.4 Italy
9.3.4.1 AI in Agriculture Market in Italy is Growing Steadily to Overcome Drastic Climate Conditions
9.3.5 Spain
9.3.5.1 Favorable Government Policies are Driving the AI in Agriculture Market in Spain
9.3.6 Rest of Europe
9.4 Asia Pacific
9.4.1 Australia
9.4.1.1 Australia Expected to Hold the Largest Share of the AI in Agriculture Market in APAC
9.4.2 China
9.4.2.1 Increasing Precision Farming Applications in China is Expected to Drive the AI in Agriculture Market for APAC
9.4.3 Japan
9.4.3.1 In 2019, Japan Held the Second-Largest Share of AI in Agriculture Market in APAC
9.4.4 South Korea
9.4.4.1 Government Funding and Initiatives are Driving the Growth of AI in Agriculture Market in South Korea
9.4.5 India
9.4.5.1 India is Expected to be the Fastest-Growing AI in Agriculture Market in APAC
9.4.6 Rest of APAC
9.5 Rest of the World
9.5.1 Increasing Awareness Among Farmers Regarding the Benefits of AI-Assisted Agricultural Operations is Driving the Market in RoW

10 Competitive Landscape
10.1 Overview
10.2 Ranking Analysis
10.3 Competitive Scenario
10.3.1 Product Launches and Developments
10.3.2 Partnerships, Agreements, and Collaborations
10.3.3 Mergers and Acquisitions
10.4 Competitive Leadership Mapping
10.4.1 Visionary Leaders
10.4.2 Dynamic Differentiators
10.4.3 Innovators
10.4.4 Emerging Companies

11 Company Profiles
11.1 Key Players
11.1.1 IBM
11.1.2 Deere & Company
11.1.3 Microsoft
11.1.4 The Climate Corporation
11.1.5 Farmers Edge
11.1.6 Granular
11.1.7 AgEagle
11.1.8 Descartes Labs
11.1.9 Prospera
11.1.10 Taranis
11.1.11 aWhere
11.2 Right-To-Win
11.3 Other Key Companies
11.3.1 Gamaya
11.3.2 ec2ce
11.3.3 PrecisionHawk
11.3.4 VineView
11.3.5 Cainthus
11.3.6 Tule Technologies
11.3.7 Resson
11.3.8 Connecterra
11.3.9 Vision Robotics
11.3.10 Farmbot
11.3.11 Harvest CROO
11.3.12 PEAT
11.3.13 Autonomous Tractor Corporation
11.3.14 Trace Genomics
11.3.15 CropX Technologies

For more information about this report visit https://www.researchandmarkets.com/r/sqhb4s

Research and Markets also offers Custom Research services providing focused, comprehensive and tailored research.

Continue reading here:
Global Artificial Intelligence in Agriculture Industry (2020 to 2026) - Developing Countries to Offer Significant Growth Opportunities - GlobeNewswire

The Impending Artificial Intelligence Revolution in Healthcare – Op-Ed – HIT Consultant

Harjinder Sandhu, CEO of Saykara

For at least a decade, healthcare luminaries have been predicting the coming AI revolution. In other fields, AI has evolved beyond the hype and has begun to showcase real and transformative applications: autonomous vehicles, fraud detection, personalized shopping, virtual assistants, and so on. The list is long and impressive. But in healthcare, despite the expectations and the tremendous potential in improving the delivery of care, the AI revolution is just getting started. There have been definite advancements in areas such as diagnostic imaging, logistics within healthcare, and speech recognition for documentation. Still, the realm of AI technologies that impact the cost and quality of patient care continues to be rather narrow today.

Why has AI been slow to deliver change in care processes? With a wealth of new AI algorithms and computing power ready to take on new challenges, the limiting factor in AI's successful application has been the availability of meaningful data sets to train on. This is surprising to many, given that EHRs were supposed to have solved the data barrier.

The promise of EHRs was that they would create a wealth of actionable data that could be leveraged for better patient care. Unfortunately, this promise never fully materialized. Most of the interesting information that could be captured in the course of patient care either is not captured at all, or is captured minimally or inconsistently. Often, just enough information is recorded in the EHR to support billing, and it is in plain-text (not actionable) form. Worse, documentation requirements have had a serious impact on physicians, to whom it ultimately fell to input much of that data. Burnout and job dissatisfaction among physicians have become endemic.

EHRs didn't create the documentation challenge, but using an EHR in the exam room can significantly detract from patient care. Speech recognition has come a long way, yet it hasn't changed that fundamental dynamic of the screen interaction that takes away from the patient. Indeed, when using speech recognition, physicians stare at the screen even more intently, as they must be mindful of mistakes that the speech recognition system may generate.

Having been involved in the advancement of speech recognition in the healthcare domain, and having witnessed its successes and failures, I continue to believe that the next stage in the evolution of this technology is to free physicians from the tyranny of the screen: to evolve from speech recognition systems to AI-based virtual scribes that listen to doctor-patient conversations, create notes, and enter orders.

Using a human scribe solves a significant part of the problem for physicians: scribes relieve the physician of having to enter data manually. For many physicians, a scribe has allowed them to reclaim their work lives (they can focus on patients rather than computers) as well as their personal lives (fewer evening hours completing patient notes). However, the inherent cost of both training and then employing a scribe has led to many efforts to build digital counterparts: AI-based scribes that can replicate the work of a human scribe.

Building an AI scribe is hard. It requires a substantially more sophisticated system than the current generation of speech recognition systems. Interpreting natural language conversation is one of the next major frontiers for AI in any domain. The current generation of virtual assistants, like Alexa and Siri, simplifies the challenge by putting boundaries on speech, forcing a user, for example, to express a single idea at a time, within a few seconds, and within the boundaries of a list of skills that these systems know how to interpret.

In contrast, an AI system that is listening to doctor-patient conversations must deal with the complexity of human speech and narrative. A patient visit could last five minutes or an hour, the speech involves at least two parties (the doctor and the patient), and a patient's visit can meander to irrelevant details and branches that don't necessarily contribute to a physician making their diagnosis.

As a result of the complexity of conversational speech, it is still quite early for fully autonomous AI scribes. In the meantime, augmented AI scribes, AI systems augmented by human power, are filling in the gaps of AI competency and allowing these systems to succeed while incrementally chipping away at the goal of making these systems fully autonomous. These systems are beginning to do more than simply relieve doctors of the burden of documentation, though that is obviously important. The real transformative impact will be from capturing a comprehensive set of data about a patient journey in a structured and consistent fashion and putting that into the medical records, thereby building a base for all other AI applications to come.

About Harjinder Sandhu

Harjinder Sandhu is the CEO of Saykara, a company leveraging the power and simplicity of the human voice to make delivering great care easier while streamlining physician workflow.

More:
The Impending Artificial Intelligence Revolution in Healthcare - Op-Ed - HIT Consultant

There's Nothing Fake About These 10 Artificial Intelligence Stocks to Buy – InvestorPlace

Artificial intelligence is one of those catchy phrases that continues to grab investors' attention. Like 5G, it tugs on the sleeves of those looking to get in on cutting-edge technology. While it is a very important sector of technology, investors need to be wary of hype and focus on reality before buying AI stocks.

Take, for example, International Business Machines (NYSE:IBM). IBM has been on the front line of AI with its Watson-branded products and services. Sure, it did a bang-up job on Jeopardy and it partners with dozens of companies. But for IBM shareholders, Watson is not a portfolio favorite.

Over the past five years, IBM has lost 28.7% in price, compared to the S&P 500's gain of 37.5% and the S&P Information Technology Index's gain of 130%. And over the past 10 years, IBM's AI leadership has generated a shareholder loss of 3.4%.

Chart: IBM (White), S&P 500 (Red) & S&P 500 Information Technology (Gold) Indexes Total Return. Source: Bloomberg.

But AI is more than just a party trick like Watson. AI brings algorithms into computers. These algorithms then take internal and external data and, in turn, process the decisions behind all sorts of products and services. Think, for example, of something as simple as targeted ads: data is gathered and processed while you simply shop online.

But AI can go much further. Think, of course, of autonomous vehicles. AI takes all sorts of input data and the central processor makes calls to how the vehicle moves and at what speed and direction.

Or in medicine, AI brings quicker analysis of symptoms, diagnostic data and tests.

And the list goes on.

So then, what do I bring to the table as a human? I have found ten AI stocks that aren't just companies using AI. These are companies to own and follow for years, complete with dividends along the way.

Let's start with the index of the best technology companies found inside that S&P Information Technology index cited earlier. The Vanguard Information Technology ETF (NYSEARCA:VGT) synthetically invests in the leaders of that index. It should be the starting point for all technology investing: it offers a solid foundation for portfolios.

Chart: Vanguard Information Technology ETF (VGT) Total Return. Source: Bloomberg.

The exchange-traded fund continues to perform well. Its return for just the past five years runs at 141.1% for an average annual equivalent return of 19.2%. This includes the major fall in March 2020.

Before I move to the next of my AI stocks, it's important to note that data doesn't just get collected. It also has to be communicated quickly and efficiently to make processes work.

Take the AI example mentioned earlier for autonomous vehicles. AI driving needs to know not just what is in front of the vehicle, but what is coming around the next corner. This means having dependable data transmission. And the two leaders that make this happen now and will continue to do so with 5G are AT&T (NYSE:T) and Verizon (NYSE:VZ).

Chart: AT&T (White) & Verizon (VZ) Total Return. Source: Bloomberg.

Much like other successful AI stocks, AT&T and Verizon have lots of communications services and content. This provides some additional opportunities and diversification but can limit investor interest in the near term. This is the case with AT&T and its Time Warner content businesses. But this also means that right now, both of these stocks are good bargains.

And they have a history of delivering to shareholders. AT&T has returned 100% over the past 10 years, while Verizon has returned 242%.

AI takes lots of equipment. Chips, processors and communications gear all go into making AI computers and devices. And you should buy these two companies for their role in equipment: Samsung Electronics (OTCMKTS:SSNLF) and Ericsson (NASDAQ:ERIC).

Samsung is one of the global companies that is essential for nearly anything that involves technology and hardware. Hardly any device out there isn't either a Samsung product or built with components invented and produced by Samsung.

And Ericsson is one of the leaders in communications gear and systems. Its products make AI communications and data transmission work, including on current 4G and 5G.

Chart: Samsung Electronics (White) & Ericsson (Red) Total Return. Source: Bloomberg.

Over the past 10 years Samsung has delivered a return of 235.4% in U.S. dollars while Ericsson has lagged, returning a less-than-stellar 6.5%.

Both have some challenges in their stock prices. Samsung's shares are more challenging to buy in the U.S. And Ericsson faces economic challenges, as it's deep in the European market. But in both cases, you get great products from companies that are still value buys.

Samsung is valued at a mere 1.2 times book and 1.3 times trailing sales, which is significantly cheaper than its global peers. And Ericsson is also a bargain, trading at a mere 1.3 times trailing sales.

To make AI work, you need lots of software. This brings in Microsoft (NASDAQ:MSFT). The company is one of the cornerstones of software: its products have all sorts of tech uses.

And AI, especially on the move, needs quick access to huge amounts of data in the cloud. Microsoft and its Azure-branded cloud fit the bill.

Chart: Microsoft (MSFT) Total Return. Source: Bloomberg.

Microsoft, to me, is the poster child of successful technology companies. It went from one-off unit sales of packaged products to recurring income streams from software subscriptions. Now it's pivoting to cloud services. And shareholders continue to see rewards. The company's stock has returned 702.7% over the past 10 years alone.

AI and the cloud are integral in their processing and storage of data. But beyond software and hardware, you need somewhere to house all that gear, complete with power and climate controls, transmission lines and wireless capabilities.

This means data centers. And there are two companies set up as real estate investment trusts (REITs) that lead the way with their real estate and data centers. These are Digital Realty Trust (NYSE:DLR) and Corporate Office Properties (NYSE:OFC).

Digital Realty has the right name, while Corporate Office Properties doesn't tell the full story. The latter company has Amazon (NASDAQ:AMZN) and its Amazon Web Services (AWS) as exclusive clients in core centers, including the vital hub in Northern Virginia.

And the stock-price returns show the power of the name. Digital Realty has returned 310.9% against a loss of 0.9% for Corporate Office Properties.

Chart: Corporate Office Properties (White) & Digital Realty (Red) Total Return. Source: Bloomberg.

But this means that while both are good buys right now, Corporate Office Properties is a particular bargain. The stock price is at a mere 1.7 times the companys book value.

Now I'll get to the newer companies in the AI space. These are the companies that are in various stages of development. Some are private now, or are pending public listings. Others are waiting for larger companies to snap them up.

Most individual investors, unless they have a net worth nearing $1 billion, don't get access. But I have a company that brings this access, and it's my stock for the InvestorPlace Best Stocks for 2020 contest.

Hercules Capital (NYSE:HTGC) is set up as a business development company (BDC) that provides financing to all levels of technology companies. Along the way, it takes equity participation in these companies.

It supports hundreds of current technology companies using or developing AI for products and services along with a grand list of past accomplishments. The current portfolio can be found here.

I have followed this company since its early days. I like that it is very investor focused, complete with big dividend payments over the many years. And it has returned 184.3% over the past 10 years.

Chart: Hercules Capital (HTGC) Total Return. Source: Bloomberg.

Who doesn't buy goods and services from Amazon? I am a Prime member with video, audio and book services. And I also have many Alexa devices that I use throughout the day. While I don't contract directly with its AWS, I use its cloud storage as part of other services. Few major companies that are part of daily life make use of AI more than Amazon.

The current lockdown mess has made Amazon a further necessity. Toilet paper, paper towels, cleaning supplies, toothpaste, soap and so many other items are sold and delivered by Amazon.

And I also use the platform for additional digital information from the Washington Post. Plus, I get food and other household goods from Whole Foods, and products for my miniature dachshund, Blue, come from Amazon.

This is a company that I have always liked as a consumer, but didn't completely get as an investor. Growth for growth's sake was what it appeared to be from my perspective. But I have been coming to a different understanding of what Amazon means as an investment.

It really is more of an index of what has been working in the U.S. for cloud computing and goods and services. And the current mess makes it not just more relevant but a necessity. Its proof comes from the sales that keep rolling up for the company on real GAAP terms.

Chart: Amazon Sales Revenue (GAAP). Source: Bloomberg.

I know that subscribers to my Profitable Investing don't pay to have me tell them about Amazon. But I am recommending buying shares, as the company is really a leading index of the evolving U.S. It is fully engaged in benefiting from AI, like my other AI stocks.

Neil George was once an all-star bond trader, but now he works morning and night to steer readers away from traps and into safe, top-performing income investments. Neil's new income program is a cash-generating machine, one that can help you collect $208 every day the markets open. Neil does not have any holdings in the securities mentioned above.

Read the original post:
Theres Nothing Fake About These 10 Artificial Intelligence Stocks to Buy - InvestorPlace

The impact of artificial intelligence on intelligence analysis – Reuters

In the last decade, artificial intelligence (AI) has progressed from near-science fiction to common reality across a range of business applications. In intelligence analysis, AI is already being deployed to label imagery and sort through vast troves of data, helping humans see the signal in the noise. But what the intelligence community is now doing with AI is only a glimpse of what is to come. The future will see smartly deployed AI supercharging analysts' ability to extract value from information.

Exploring new possibilities

We expect several new tasks for AI, which will likely fall into one of these three categories:

Delivering new models. The rapid pace of modern decision-making is among the biggest challenges leaders face. AI can add value by helping provide new ways to more quickly and effectively deliver information to decision-makers. Our model suggests that by adopting AI at scale, analysts can spend up to 39 percent more time advising decision-makers.

Developing people. Analysts need to keep abreast of new technologies, new services, and new happenings across the globe - not just in annual trainings, but continuously. AI could help bring continuous learning to the widest scale possible by recommending courseware based on analysts' work.

Maintaining the tech itself. Beyond just following up on AI-generated leads, organizations will likely also need to maintain AI tools and to validate their outputs so that analysts can have confidence when using them. Much of this validation can be performed as AI tools are designed or training data is selected.

Avoiding pitfalls

Intelligence organizations must be clear about their priorities and how AI fits within their overall strategy. Having clarity about the goals of an AI tool can also help leaders communicate their vision for AI to the workforce and alleviate feelings of mistrust or uncertainty about how the tools will be used.

Intelligence organizations should also avoid investing in empty technology - using AI without having access to the data it needs to be successful.

Survey results suggest that analysts are most skeptical of AI, compared to technical staff, management, or executives. To overcome this skepticism, management will need to focus on educating the workforce and reconfiguring business processes to seamlessly integrate the tools into workflows. Also, having an interface that allowed the analyst to easily scan the data underpinning a simulated outcome or view a representation of how the model came to its conclusion would go a long way toward that analyst incorporating the technology as part and parcel of his or her workflow.

While having a workforce that lacks confidence in AIs outputs can be a problem, however, the opposite may also turn out to be a critical challenge. With so much data at their disposal, analysts could start implicitly trusting AI, which can be quite dangerous.

But there are promising ways in which AI could help analysts combat human cognitive limitations. AI tools would be very good at continuously conducting key assumptions checks, analyses of competing hypotheses, and quality-of-information checks.

How to get started today

Across a government agency or organization, successful adoption at scale would require leaders to harmonize strategy, organizational culture, and business processes. If any of those efforts are misaligned, AI tools could be rejected or could fail to create the desired value. Leaders need to be upfront about their goals for AI projects, ensure those goals support overall strategy, and pass that guidance on to technology designers and managers to ensure it is worked into the tools and business processes. Establishing a clear AI strategy can also help organizations frame decisions about what infrastructure and partners are necessary to access the right AI tools for an organization.

Tackling some of the significant nonanalytical challenges analyst teams face could be a palatable way to introduce AI to analysts and build their confidence in it. Today, analysts are inundated with a variety of tasks, each of which demands different skills, background knowledge, and the ability to communicate with decision-makers. For any manager, assigning these tasks across a team of analysts without overloading any one individual or delaying key products can be daunting. AI could help pair the right analyst to the right task so that analysts can work to their strengths more often, allowing work to get done better and more quickly than before.

AI is not coming to intelligence work; it is already there. But the long-term success of AI in the intelligence community depends as much on how the workforce is prepared to receive and use it as any of the 1s and 0s that make it work.


Excerpt from:
The impact of artificial intelligence on intelligence analysis - Reuters

Artificial Intelligence Used to Identify Light Sources With Far Fewer Measurements – Unite.AI

Julien Rebetez is the Lead Software & Machine Learning Engineer at Picterra. Picterra provides a geospatial cloud-based platform specially designed for training deep-learning-based detectors, quickly and securely.

Without a single line of code, and with only a few human-made annotations, Picterra's users build and deploy unique, actionable, ready-to-use deep learning models.

It automates the analysis of satellite and aerial imagery, enabling users to identify objects and patterns.

What is it that attracted you to machine learning and AI?

I started programming because I wanted to make video games, and got interested in computer graphics at first. This led me to computer vision, which is kind of the reverse process: instead of having the computer create a fake environment, you have it perceive the real environment. During my studies, I took some Machine Learning courses and I got interested in the computer vision angle of it. I think what's interesting about ML is that it's at the intersection between software engineering, algorithms and math, and it still feels kind of magical when it works.

You've been working on using machine learning to analyze satellite imagery for many years now. What was your first project?

My first exposure to satellite imagery was the Terra-i project (to detect deforestation) and I worked on it during my studies. I was amazed at the amount of freely available satellite data that is produced by the various space agencies (NASA, ESA, etc). You can get regular images of the planet for free every day or so and this is a great resource for many scientific applications.

Could you share more details regarding the Terra-i project?

The Terra-i project (http://terra-i.org/terra-i.html) was started by Professor Andrez Perez-Uribe, from HEIG-VD (Switzerland), and is now led by Louis Reymondin, from CIAT (Colombia). The idea of the project is to detect deforestation using freely available satellite images. At the time, we worked with MODIS imagery (250m pixel resolution) because it provided a uniform and predictable coverage (both spatially and temporally). We would get a measurement for each pixel every few days, and from this time series of measurements you can try to detect anomalies, or novelties as we sometimes call them in ML.

This project was very interesting because the amount of data was a challenge at the time, and there was also some software engineering involved to make it work on multiple computers and so on. From the ML side, it used a Bayesian Neural Network (not very deep at the time) to predict what the time series of a pixel should look like. If the measurement didn't match the prediction, then we would have an anomaly.

As part of this project, I also worked on cloud removal. We took a traditional signal processing approach there, where you have a time series of measurements and some of them will be completely off because of a cloud. We used a Fourier-based approach (HANTS) to clean the time series before detecting novelties in it. One of the difficulties is that if we cleaned it too strongly, we'd also remove novelties, so there were quite some experiments to do to find the right parameters.
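
For readers curious what such a cleaning-and-detection step can look like, here is a minimal sketch (not the actual Terra-i/HANTS implementation; the number of harmonics and the threshold are illustrative assumptions):

```python
import numpy as np

def fourier_smooth(series, n_harmonics=3):
    """Keep only the lowest-frequency components of a time series,
    a rough stand-in for HANTS-style harmonic smoothing."""
    coeffs = np.fft.rfft(series)
    coeffs[n_harmonics:] = 0  # drop high-frequency terms (clouds, noise)
    return np.fft.irfft(coeffs, n=len(series))

def flag_novelties(series, threshold=0.15):
    """Flag measurements that deviate strongly from the smoothed signal."""
    return np.abs(series - fourier_smooth(series)) > threshold

# Example: a seasonal vegetation signal with one sudden drop
t = np.arange(92)
ndvi = 0.5 + 0.2 * np.sin(2 * np.pi * t / 46)
ndvi[60] = 0.1  # anomalous measurement, e.g. a clearing
print(np.where(flag_novelties(ndvi))[0])  # -> [60]
```

The trade-off Rebetez describes shows up in the choice of n_harmonics and threshold: clean too aggressively and genuine changes start to look like part of the expected signal.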

You also designed and implemented a deep learning system for automatic crop type classification from aerial (drone) imagery of farm fields. What were the main challenges at the time?

This was my first real exposure to Deep Learning. At the time, I think the main challenges were more about getting the framework to run and properly use a GPU than about the ML itself. We used Theano, which was one of the ancestors of TensorFlow.

The goal of the project was to classify the type of crop in a field, from drone imagery. We tried an approach where the Deep Learning Model was using color histograms as inputs as opposed to just the raw image. To make this work reasonably quickly, I remember having to implement a custom Theano layer, all the way to some CUDA code. That was a great learning experience at the time and a good way to dig a bit into the technical details of Deep Learning.

Youre officially the Lead Software and Machine Learning Engineer at Picterra. How would you best describe your day to day activities?

It really varies, but a lot of it is about keeping an eye on the overall architecture of the system and the product in general, and communicating with the various stakeholders. Although ML is at the core of our business, you quickly realize that most of the time is not spent on ML itself, but on all the things around it: data management, infrastructure, UI/UX, prototyping, understanding users, etc. This is quite a change from academia or previous experience in bigger companies, where you are much more focused on a specific problem.

What's interesting about Picterra is that we not only run Deep Learning Models for users, but we actually allow them to train their own. That is different from a lot of typical ML workflows, where you have the ML team train a model and then publish it to production. What this means is that we cannot manually play with the training parameters as you often do. We have to find some training method that will work for all of our users. This led us to create what we call our experiment framework, which is a big repository of datasets that simulates the training data our users would build on the platform. We can then easily test changes to our training methodology against these datasets and evaluate whether they help or not. So instead of evaluating a single model, we are evaluating an architecture plus a training methodology.
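
As a rough illustration of that idea (with hypothetical function names, not Picterra's actual API), an experiment framework can be as simple as scoring one training configuration across a whole repository of datasets:

```python
# Sketch of an "experiment framework": judge a training methodology by its
# average score over many user-like datasets, instead of hand-tuning one model.
# train_detector and evaluate_detector are hypothetical placeholders.
from statistics import mean

def train_detector(dataset, config):
    raise NotImplementedError  # fit a model on the dataset's training areas

def evaluate_detector(model, dataset):
    raise NotImplementedError  # e.g. F1 score on the dataset's testing areas

def run_experiment(datasets, config):
    return mean(evaluate_detector(train_detector(ds, config), ds) for ds in datasets)

# Compare a baseline methodology against a candidate change, e.g.:
#   run_experiment(all_datasets, {"lr": 1e-3})
#   run_experiment(all_datasets, {"lr": 1e-3, "augment": True})
```

The point is that a change to the training method is accepted only if it helps on average across all the simulated users, not just on one cherry-picked dataset.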

The other challenge is that our users are not ML practitioners, so they don't necessarily know what a training set is, what a label is, and so on. Building a UI to allow non-ML practitioners to build datasets and train ML models is a constant challenge, and there is a lot of back-and-forth between the UX and ML teams to make sure we guide users in the right direction.

Some of your responsibilities include prototyping new ideas and technologies. What are some of the more interesting projects that you have worked on?

I think the most interesting one at Picterra was the Custom Detector prototype. A year and a half ago, we had built-in detectors on the platform: those were detectors that we trained ourselves and made accessible to users. For example, we had a building detector, a car detector, etc.

This is actually the typical ML workflow: you have some ML engineer develop a model for a specific case and then you serve it to your clients.

But we wanted to do something different and push the boundaries a bit. So we said: what if we allow users to train their own models directly on the platform? There were a few challenges to make this work: first, we didn't want this to take multiple hours. If you want to keep a feeling of interactivity, training should take a few minutes at most. Second, we didn't want to require thousands of annotations, which is typically what you need for large Deep Learning models.

So we started with a super simple model, did a bunch of tests in Jupyter and then tried to integrate it in our platform and test the whole workflow, with a basic UI and so on. At first, it wasn't working very well in most cases, but there were a few cases where it would work. This gave us hope, and we started iterating on the training methodology and the model. After some months, we were able to reach a point where it worked well, and we now have users using this all the time.

What was interesting about this is the double challenge of keeping the training fast (currently a few minutes), and therefore the model not too complex, but at the same time making it complex enough that it works and solves users' problems. On top of that, it works with few (<100) labels in a lot of cases.

We also applied many of Google's Rules of Machine Learning, in particular the ones about implementing the whole pipeline and metrics before starting to optimize the model. It puts you into system-thinking mode, where you figure out that not all of your problems should be handled by the core ML: some of them can be pushed to the UI, some pre- or post-processed, etc.

What are some of the machine learning technologies that are used at Picterra?

In production, we are currently using PyTorch to train and run our models. We are also using TensorFlow from time to time, for some specific models developed for clients. Other than that, it's a pretty standard scientific Python stack (numpy, scipy) with some geospatial libraries (gdal) thrown in.

Can you discuss how Picterra works in the backend once someone uploads images and wishes to train the neural network to properly annotate objects?

Sure. First, when you upload an image, we process it and store it in the Cloud-Optimized GeoTIFF (COG) format on our blobstore (Google Cloud Storage), which allows us to quickly access blocks of the image without having to download the whole image later on. This is a key point, because geospatial imagery can be huge: we have users routinely working with images on the order of 50,000 x 50,000 pixels.
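
That block-level access is the key property of COGs. As an illustration (rasterio is one common way to do this; the URL is hypothetical, and this is not necessarily the library Picterra uses), reading a single tile over HTTP looks like this:

```python
# Read one 512x512 block of a Cloud-Optimized GeoTIFF without downloading the
# whole file; under the hood, GDAL issues HTTP range requests for just the
# tiles that intersect the requested window.
import rasterio
from rasterio.windows import Window

url = "https://storage.googleapis.com/example-bucket/image.tif"  # hypothetical
with rasterio.open(url) as src:
    block = src.read(1, window=Window(col_off=0, row_off=0, width=512, height=512))
    print(block.shape)  # (512, 512)
```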

So then, to train your model, you will have to create your training dataset through our web UI. You will do that by defining 3 types of areas:

Once you have created this dataset, you can simply click "Train" and we'll train a detector for you. What happens next is that we enqueue a training job and have one of our GPU workers pick it up (new GPU workers are started automatically if there are many concurrent jobs), train your model, save its weights to the blobstore, and finally predict in the testing areas to display results in the UI. From there, you can iterate on your model. Typically, you'll spot some mistakes in the testing areas and add training areas to help the model improve.

Once you are happy with the score of your model, you can run it at scale. From the user's point of view, this is really simple: just click "Detect" next to the image you want to run it on. But it's a bit more involved under the hood if the image is large. To speed things up, handle failures and avoid having detections take multiple hours, we break down large detections into grid cells and run an independent detection job for each cell. This allows us to run very large-scale detections. For example, we had a customer run detection over the whole country of Denmark on 25cm imagery, which is in the range of terabytes of data for a single project. We've covered a similar project in this Medium post.
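
A minimal sketch of that tiling step (hypothetical names; a real system would also handle geo-referencing, retries, and merging of per-cell results):

```python
# Break a large detection into independent per-cell jobs so they can run in
# parallel and fail or retry individually. enqueue_detection_job stands in
# for whatever task queue is used.
def grid_cells(width, height, cell=4096):
    """Yield (col_off, row_off, w, h) tiles covering a width x height raster."""
    for row in range(0, height, cell):
        for col in range(0, width, cell):
            yield col, row, min(cell, width - col), min(cell, height - row)

def enqueue_detection_job(image_id, tile):
    print(f"queued detection on {image_id} for tile {tile}")  # placeholder

for tile in grid_cells(50_000, 50_000):
    enqueue_detection_job("image-123", tile)
```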

Is there anything else that you would like to share about Picterra?

I think what's great about Picterra is that it is a unique product, at the intersection between ML and geospatial. What differentiates us from other companies that process geospatial data is that we equip our users with a self-serve platform. They can easily find locations, analyze patterns, and detect and count objects on Earth observation imagery. It would be impossible without machine learning, but our users don't even need basic coding skills: the platform does the work based on a few human-made annotations. For those who want to go deeper and learn the core concepts of machine learning in the geospatial domain, we have launched a comprehensive online course.

What is also worth mentioning is that the possible applications of Picterra are endless: detectors built on the platform have been used in city management, precision agriculture, forestry management, humanitarian and disaster risk management, farming, etc., just to name the most common applications. We are basically surprised every day by what our users are trying to do with our platform. You can give it a try and let us know how it worked on social media.

Thank you for the great interview and for sharing with us how powerful Picterra is. Readers who wish to learn more should visit the Picterra website.

Originally posted here:
Artificial Intelligence Used to Identify Light Sources With Far Fewer Measurements - Unite.AI

Wharton School Receives $5 Million to Launch Artificial Intelligence for Business, Extending Its Commitment to Analytics, Learning, and Engagement -…

Led by AI expert Kartik Hosanagar, AI for Business will explore impact to industries and society

PHILADELPHIA, May 7, 2020 /PRNewswire-PRWeb/ -- The Wharton School of the University of Pennsylvania announced today the establishment of Wharton AI for Business (Artificial Intelligence for Business), which will inspire cutting-edge teaching and research in artificial intelligence while joining with global business leaders to set a course for better understanding of this nascent discipline. The launch of AI for Business is made possible by a new $5 million gift from Tao Zhang, WG'02, and his wife Selina Chin, WG'02, which greatly expands Wharton's analytics capabilities, a major focus of Wharton's More Than Ever campaign.

"The advances made possible by artificial intelligence hold the potential to vastly improve lives and business processes," said Wharton Dean Geoff Garrett. "Our students, faculty, and industry partners are eager to join in our AI knowledge creation efforts to more deeply explore how machine learning will impact the future for everyone. We are deeply grateful to Tao and Selina for so generously enabling us to explore this opportunity and get AI for Business underway."

Operating within Analytics at Wharton and led by faculty member Kartik Hosanagar, John C. Hower Professor of Operations, Information and Decisions, AI for Business will explore AI's applications and impact across industries. Planned activities include:

Professor Hosanagar is renowned for his AI research and instruction. He is the author of the book A Human's Guide to Machine Intelligence: How Algorithms Are Shaping Our Lives and How We Can Stay in Control, and he created the first Wharton online courses on AI: Artificial Intelligence for Business. Professor Hosanagar has also founded or advised numerous startups in online marketing and retail, including Yodle and Milo.

"Our students and professors are energized by the idea that AI is influencing nearly every aspect of humanity and our efforts to understand it can make a difference for years to come," said Professor Hosanagar. "I'm very excited to help lead AI for Business since the future of machine learning is happening now there are unlimited entry points for experiential learning to explore the topic."

"Selina and I share experience and interest in management, tech, startups, and opportunities for leadership in global business which comes together in AI," said Zhang. "Wharton is the ideal setting for us to enable these experiences for such talented students and renowned faculty. We are proud to be engaged with the School and to be a part of jump starting AI for Business."

Tao Zhang is a Wharton MBA alumnus from the class of 2002. He previously served as co-chairman and co-CEO of Meituan-Dianping, a leading internet company and platform in China. He was the founder and CEO of Dianping.com prior to its merger with Meituan and held positions in American Management Systems, an IT consulting firm. In addition to his generosity toward AI for Business, he has spoken at and supported Wharton Global Forums in Beijing and Shanghai.

Selina Chin is a Wharton MBA alumna from the class of 2002. She served as the China Chief Financial Officer and Vice President of Finance for Goodyear Tires & Rubber Co. She currently runs the Blue Hill Foundation based out of Singapore.

About the Wharton School

Founded in 1881 as the world's first collegiate business school, the Wharton School of the University of Pennsylvania is shaping the future of business by incubating ideas, driving insights, and creating leaders who change the world. With a faculty of more than 235 renowned professors, Wharton has 5,000 undergraduate, MBA, executive MBA, and doctoral students. Each year 18,000 professionals from around the world advance their careers through Wharton Executive Education's individual, company-customized, and online programs. More than 99,000 Wharton alumni form a powerful global network of leaders who transform business every day. For more information, visit http://www.wharton.upenn.edu.

###

SOURCE The Wharton School

Excerpt from:
Wharton School Receives $5 Million to Launch Artificial Intelligence for Business, Extending Its Commitment to Analytics, Learning, and Engagement -...

Intuality Inc.’s Artificial Intelligence Making Accurate Predictions of Coronavirus Cases and Deaths – PRNewswire

WINSTON SALEM, N.C., May 5, 2020 /PRNewswire/ -- Grant Renier, Chairman of Intuality Inc., and Dr. Howard Rankin have been presenting, during weekly YouTube podcasts since March, the system's predicted cases and deaths for each of 120 days into the future, for the USA, Canada, UK, and 5 major EU countries. IntualityAI is tracking and predicting in real time 500+ countries and governmental districts worldwide, as a free public service during this worldwide crisis.

"The numbers have been pretty accurate so far," says Grant Renier.So, what does IntualityAI predict about the future?

"We see a slight flattening of the curve by early July, but a second spike appearing in August. The system predicts the cumulative number of deaths in the US up to 103,000 by August 24," Grant continued.

Similar patterns are charted for the UK and Canada. By August 24, the system predicts a cumulative total of 6,800 deaths in Canada, and slightly over 38,000 deaths in the UK.

IntualityAI, the behavioral economics-based technology, has had success in forecasting in money markets, elections, sports, health and technology applications. It is the product of more than 30 years of research and development.

Dr. Howard Rankin, an expert in cognitive bias and author of "I Think Therefore I Am Wrong: A Guide to Bias, Political Correctness, Fake News and the Future of Mankind," along with Mr. Renier, has been running IntualityAI podcasts related to COVID-19 at least once a week. Access them on YouTube under "IntualityAI" and on the company's website at http://www.intualityai.com.

Contact: Grant Renier, Phone: 207.370.1330, Email: [emailprotected]

Alt Contact: Dr. Howard Rankin, Phone: 843.247.2980, Email: [emailprotected]

Related Images

intualityai-covid-19-prediction.png: IntualityAI COVID-19 Prediction Accuracy. The AI prediction engine continues to predict daily COVID-19 cases and deaths within 2% of actual, since April 10, 2020.

Related Links

Company website

SOURCE Intuality Inc


Read more here:
Intuality Inc.'s Artificial Intelligence Making Accurate Predictions of Coronavirus Cases and Deaths - PRNewswire