How smart city tech is being used to control the coronavirus outbreak – TechRepublic

CEO of Smart City Works says location-tracking efforts like those used in South Korea, Singapore, and China may offer a solution, with privacy caveats.

South Korea and Singapore are taking a smart city approach to halting the spread of the coronavirus. Both countries have been using contact tracing to identify people who have been exposed to the virus, as well as all the people who have interacted with an infected individual. This process is manual and time-intensive.

To speed up this process in South Korea, the Ministry of Land, Infrastructure, and Transport used the country's Smart City Data Hub. The ministry has been building the tool in collaboration with the Ministry of Science and ICT since 2018.

Previously, the National Police Agency had to request contact information from several agencies in order to get in touch with an individual with a suspected case of coronavirus. By running these requests through the Smart City Data Hub, the request for information and the response are processed in one place. A person's movements can also be tracked on a map that is part of the hub. The government said the hub will be used for this tracking only during the crisis response phase.

SEE: Coronavirus: Critical IT policies and tools every business needs (TechRepublic Premium)

David Heyman, founder and CEO of Smart City Works, said contact tracing is the bread and butter of public health when it comes to controlling infectious disease.

"Finding those with whom you have been in close contact who are contagious is the key to stemming the spread of disease," he said.

Contact tracing is particularly difficult with COVID-19 because of the exponential growth in new cases, the fact that people without symptoms can be infected, and the two-week delay between acquiring the virus and getting sick. Using location-based technologies could help governments track down people who have been infected.
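
To see why manual tracing gets overwhelmed, consider a toy model of unchecked exponential spread. The doubling time and contacts-per-case figures below are illustrative assumptions, not numbers from the article.

```python
# Illustrative sketch: how the contact-tracing workload grows when cases
# double on a fixed schedule. All parameters here are hypothetical.
def contacts_to_trace(initial_cases, doubling_time_days, contacts_per_case, day):
    """Contacts needing follow-up on a given day under unchecked spread."""
    cases = initial_cases * 2 ** (day / doubling_time_days)
    return int(cases * contacts_per_case)

# With 100 initial cases, a 5-day doubling time, and 10 contacts each,
# the workload grows 64-fold in a month:
for day in (0, 10, 20, 30):
    print(day, contacts_to_trace(100, 5, 10, day))  # 1000, 4000, 16000, 64000
```

With a month of unchecked spread, a tracing team's caseload grows 64-fold, which is the gap location-based technology is meant to close.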

"Using smart city location-based technologies (used every day in hailing ride services, tracing our running routes, and keeping track of our kids or friends) may provide a solution," he said.

In Singapore, the Government Technology Agency of Singapore launched TraceTogether on March 20 in collaboration with the Ministry of Health.

The TraceTogether app uses short-distance Bluetooth signals to detect when one user's phone is close to another's. It stores detailed records on a user's phone for 21 days but does not include location data. Residents are not required to use the app, yet more than 500,000 people downloaded it after the launch. Authorities have said they will decrypt the data if there is a public health risk related to an individual's movements. The data is not automatically shared with the government and is deleted after 21 days.
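
The retention pattern the article describes (log nearby devices anonymously, keep nothing older than 21 days, store no location) can be sketched as follows. This is a hypothetical illustration, not TraceTogether's actual implementation; its BlueTrace protocol is more involved, using server-issued rotating temporary IDs and encrypted payloads.

```python
# Hypothetical sketch of an on-device encounter log with a 21-day
# retention window. Class and method names are assumptions for
# illustration only.
from datetime import datetime, timedelta

RETENTION = timedelta(days=21)

class EncounterLog:
    def __init__(self):
        self._records = []  # list of (timestamp, anonymous_peer_id)

    def record(self, peer_id, when=None):
        """Log a nearby device; no location is stored, only time and ID."""
        self._records.append((when or datetime.utcnow(), peer_id))

    def prune(self, now=None):
        """Delete encounters older than the retention window."""
        cutoff = (now or datetime.utcnow()) - RETENTION
        self._records = [(t, p) for t, p in self._records if t >= cutoff]

    def peers(self):
        """Anonymous IDs still within the retention window."""
        return {p for _, p in self._records}
```

Health authorities would only ever see these anonymous IDs, and only if a user chose to upload the log after a positive diagnosis.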

China used a similar method to track a person's health status and to control movement in cities with high numbers of coronavirus cases. Individuals had to use the app and share their status to be able to access public transportation.

Researchers at the University of Oxford proposed a similar app that could turn the manual process of contact tracing into a digital one. An entrepreneur in El Salvador has built a similar app for Android phones and is looking for government support to expand development.

A Senate bill to address the coronavirus in the US included money for the Centers for Disease Control and Prevention to track the spread of the virus. The bill, passed Wednesday night, allotted $500 million for tracking and analytics.

The unique challenge for an app-based approach to contact tracing in the US is protecting people's privacy. Heyman said resistance to governments and big data companies tracking personal data is growing, from the Edward Snowden revelations to Europe's "right to be forgotten" rule.

"The concern with tracking everyone's locations and who they have been near and whether they are sick, is not the question of whether this might be beneficial, but rather what else might the government do with this data?" he said. "Could it be used to deny employment, services, insurance, or other vital items? Will it be made public to embarrass, stigmatize, or ostracize individuals or groups?"

The challenge is to balance the public good of protecting human health in the short term against diminished personal privacy and a stronger surveillance state. Heyman said that the keys to addressing privacy concerns about high-tech surveillance by the state are anonymizing the data and giving individuals as much control over their own data as possible.

"Personal details that may reveal your identity such as a user's name should not be collected or should be encrypted with access to be granted for only specific health purposes, and data should be deleted after its specific use is no longer needed," he said.
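
Heyman's prescription (encrypt or withhold identifying details, and delete data once its purpose ends) can be illustrated with keyed pseudonymization. This is a minimal sketch under stated assumptions, with a secret key held only by the health authority; keyed hashing alone does not prevent re-identification from movement or contact patterns, so real deployments need far more.

```python
# Illustrative sketch: replace direct identifiers with keyed pseudonyms so
# records can be linked for health purposes without exposing names.
# The key, record layout, and function names are assumptions for this example.
import hashlib
import hmac
import secrets

key = secrets.token_bytes(32)  # held only by the health authority

def pseudonymize(name: str, key: bytes) -> str:
    """Deterministic keyed pseudonym: same name and key give the same token."""
    return hmac.new(key, name.encode(), hashlib.sha256).hexdigest()[:16]

# A health record carries the pseudonym, never the name:
record = {"patient": pseudonymize("Jane Doe", key), "test_result": "positive"}

# The same input maps to the same token, so records remain linkable...
assert pseudonymize("Jane Doe", key) == record["patient"]
# ...while destroying the key, once the data is no longer needed,
# severs the link back to the identity.
```

Deleting the key when the crisis response ends is one concrete way to satisfy the "deleted after its specific use is no longer needed" requirement without destroying aggregate statistics.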

Smart City Works is a next-generation business accelerator that can move early-stage ventures to commercialization quickly, help speed products to market, and reduce investor risk. The organization is currently working on smart city deployments to ease the impact of social distancing due to the coronavirus through technologies that support seniors, students, and the public health community.

Any tracking system that monitors personal health information in the US would have to follow HIPAA requirements, which dictate how this information can be collected and used.

Kevin Lancaster, general manager of security solutions at Kaseya, said that CDC leaders would have to make sure HIPAA protections are in place before turning over protected health information to app developers or other tech companies.

"Many external tech companies are very good at securely storing large amounts of data at low cost and providing big data tools to healthcare providers to leverage these scalable technologies," he said. "However, healthcare executives should be careful to use due care and vet all solutions before turning over any protected health information to these third-party providers and subsequently exposing patient data to risk."


The Ministry of Health in Singapore has built an app that uses Bluetooth so that everyone using the app can track who they have come in contact with, including people who have contracted the coronavirus.

Image: Ministry of Health Singapore

Read the original here:
How smart city tech is being used to control the coronavirus outbreak - TechRepublic

How Artificial Intelligence Is Helping Fight The COVID-19 Pandemic – Entrepreneur

Spurred by China's gains in this area, other nations can unite to share expertise in order to expand AI's current capability and ensure that AI can replicate its role in helping China deal with the novel coronavirus pandemic.

March 30, 2020 | 8 min read

Opinions expressed by Entrepreneur contributors are their own.

You're reading Entrepreneur Middle East, an international franchise of Entrepreneur Media.

From its epicenter in China, the novel coronavirus has spread to infect 414,179 people and cause no fewer than 18,440 deaths in at least 160 countries across a three-month span from January 2020 to date. These figures are according to the World Health Organization (WHO) situation report as of March 25. Accompanying the tragic loss of life that the virus has caused is the impact on the global economy, which has reeled from the effects of the pandemic.

Due to the lockdown measures imposed by several governments, economic activity has slowed around the world, and the Organization for Economic Cooperation and Development (OECD) has stated that the global economy could be hit by its worst growth rate since 2009. The OECD has warned that the growth rate could be as slow as 2.4%, potentially dragging many countries into recession. COVID-19 has, in a short period of time, emerged as one of the biggest challenges facing the 21st-century world. Further complicating the response to this challenge are the grey areas surrounding the virus itself, in terms of its spread and how to treat it.

Related: We're In This Together: Business Resources, Offers, And More For MENA Entrepreneurs To Get Through The Coronavirus Pandemic

As research details emerge, the data pool grows exponentially, beyond the capacity of human intelligence alone to handle. Artificial intelligence (AI) is adept at identifying patterns in big data, and this piece will elucidate how it has become one of humanity's ace cards in handling this crisis. China's success with AI as a crisis management tool, used here as a case study, demonstrates its utility and justifies the financial investment the technology has required to evolve over the last few years.

Advancements in AI application such as natural language processing, speech recognition, data analytics, machine learning, deep learning, and others such as chatbots and facial recognition have not only been utilized for diagnosis but also for contact tracing and vaccine development. AI has no doubt aided the control of the COVID-19 pandemic and helped to curb its worst effects.

Related: Here's What Your Business Should Focus On As It Navigates The Coronavirus Pandemic

Spurred by China's gains in this area, other nations can unite to share expertise in order to expand AI's current capability and ensure that AI can replicate its role in helping China deal with the novel coronavirus pandemic. AI has been deployed in several ways so far, and the following are just seven of the ways in which it has been applied against the pandemic:

1. DISEASE SURVEILLANCE AI: With an infectious disease like COVID-19, surveillance is crucial. Human activity, especially migration, has been responsible for the spread of the virus around the world. Canada-based BlueDot has leveraged machine learning and natural language processing to track, recognize, and report the spread of the virus quicker than the World Health Organization and the US Centers for Disease Control and Prevention (CDC). In the near and distant future, technology like this may be used to predict the risk of zoonotic infection in humans, considering variables such as climate change and human activity. The combined analysis of personal, clinical, travel, and social data, including family history and lifestyle habits obtained from sources like social media, would enable more accurate and precise predictions of individual risk profiles and healthcare results. While concerns exist about potential infringements on individuals' civil liberties, the kind of policy regulation that other AI applications have faced could help ensure that this technology is used responsibly.

2. VIRTUAL HEALTHCARE ASSISTANTS (CHATBOTS): The number of COVID-19 cases has shown that healthcare systems and response measures can be overwhelmed. Canada-based Stallion.AI has leveraged its natural language processing capabilities to build a multilingual virtual healthcare agent that can answer questions related to COVID-19, provide reliable information and clear guidelines, recommend protection measures, check and monitor symptoms, and advise individuals whether they need hospital screening or self-isolation at home.

Related: The Coronavirus Pandemic Versus The Digital Economy: The Pitfalls And The Opportunities

3. DIAGNOSTIC AI: Immediate diagnosis means that response measures such as quarantine can be employed quickly to curb further spread of the infection. An impediment to rapid diagnosis is the relative shortage of clinical expertise needed to interpret diagnostic results, given the volume of cases. AI has improved diagnostic time in the COVID-19 crisis through technology such as that developed by LinkingMed, a Beijing-based oncology data platform and medical data analysis company. Pneumonia, a common complication of COVID-19 infection, can now be diagnosed from analysis of a CT scan in less than sixty seconds, with accuracy as high as 92% and a recall rate of 97% on test data sets. This was made possible by an open-source AI model that analyzed CT images and not only identified lesions but also quantified them by number, volume, and proportion. The platform, novel in China, is powered by PaddlePaddle, Baidu's open-source deep learning platform.
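
The 92% accuracy and 97% recall figures quoted above follow the standard definitions, which are easy to make concrete. The confusion-matrix counts below are invented for illustration and are not LinkingMed's actual test data.

```python
# Standard classification metrics, illustrated with hypothetical counts.
# tp/tn/fp/fn = true positives, true negatives, false positives, false negatives.
def accuracy(tp, tn, fp, fn):
    """Share of all scans the model labeled correctly."""
    return (tp + tn) / (tp + tn + fp + fn)

def recall(tp, fn):
    """Of all true pneumonia cases, the share the model caught."""
    return tp / (tp + fn)

# Example: 97 of 100 pneumonia scans flagged, 87 of 100 clear scans passed.
tp, fn = 97, 3
tn, fp = 87, 13
print(accuracy(tp, tn, fp, fn))  # 0.92
print(recall(tp, fn))            # 0.97
```

High recall matters most in this setting: a missed pneumonia case (false negative) is far costlier than a clear scan flagged for human review.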

4. FACIAL RECOGNITION AND FEVER DETECTOR AI: Thermal cameras have been used for some time now to detect people with fever. The drawback of the technology is the need for a human operator. Now, however, cameras with AI-based multisensory technology have been deployed in airports, hospitals, nursing homes, and elsewhere. The technology automatically detects individuals with fever, tracks their movements, recognizes their faces, and detects whether a person is wearing a face mask.

5. INTELLIGENT DRONES & ROBOTS: The public deployment of drones and robots has been accelerated due to the strict social distancing measures required to contain the spread of the virus. To ensure compliance, some drones are used to track individuals not wearing face masks in public, while others are used to broadcast information to larger audiences and to disinfect public spaces. MicroMultiCopter, a Shenzhen-based technology company, has helped lessen the transmission risk involved with city-wide transport of medical samples and quarantine materials through the deployment of its drones. Patient care, without risk to healthcare workers, has also benefited as robots are used for food and medication delivery. The role of room cleaning and sterilization of isolation wards has also been filled by robots. Catering-industry-centred Pudu Technology has extended its reach to the healthcare sector by deploying its robots in over 40 hospitals for these purposes.

Related: How Managers Can Weather The Impact Of The Coronavirus Pandemic On Their Businesses

6. CURATIVE RESEARCH AI: Part of what has troubled the scientific community is the absence of a definitive cure for the virus. AI can potentially be a game changer, as companies such as the British startup Exscientia have shown. Earlier this year, it became the first company to present an AI-designed drug molecule that has gone to human trials. The algorithm took just a year to develop the molecular structure, compared with the five-year average for traditional research methods.

In the same vein, AI can lead the charge in developing antibodies and vaccines for the novel coronavirus, either designed entirely from scratch or through drug repurposing. For instance, using its AlphaFold system, Google's AI company DeepMind is creating structural models of proteins that have been linked to the virus, in a bid to aid the scientific community's comprehension of it. Although the results have not been experimentally verified, they represent a step in the right direction.

7. INFORMATION VERIFICATION AI: The uncertainty of the pandemic has unavoidably resulted in the propagation of myths on social media platforms. While no quantitative assessment has been done to evaluate how much misinformation is already out there, it is certainly a significant amount. Technology giants like Google and Facebook are battling waves of conspiracy theories, phishing, misinformation, and malware. A search for "coronavirus" or "COVID-19" yields an alert coupled with links to verified sources of information. YouTube, for its part, directly links users to the WHO and similar credible organizations for information. Videos that misinform are scoured for and taken down as soon as they are uploaded.

While the world continues to grapple with the effects of COVID-19, positives can be drawn from the expertise and bravery of healthcare workers, as well as the ways, listed above, in which AI technology complements their endeavors. As the AI world partners with other sectors on solutions, the light at the end of this tunnel shines brighter, creating much-needed hope in these uncertain times.

Related: Work In The Time Of Coronavirus: Here's How You Can Do Your Job From Home (Like A Pro)

Read more:
How Artificial Intelligence Is Helping Fight The COVID-19 Pandemic - Entrepreneur

How Artificial Intelligence is Going to Make Your Analytics Better Than Ever – Security Magazine


Read the rest here:
How Artificial Intelligence is Going to Make Your Analytics Better Than Ever - Security Magazine

Return On Artificial Intelligence: The Challenge And The Opportunity – Forbes

Moving up the charts with AI

There is increasing awareness that the greatest problems with artificial intelligence are not primarily technical, but rather concern how to achieve value from the technology. This was a growing problem even in the booming economy of the last several years, and it is a much more pressing one in the current pandemic-driven recessionary economic climate.

Older AI technologies like natural language processing, and newer ones like deep learning, work well for the most part and are capable of providing considerable value to organizations that implement them. The challenges are with large-scale implementation and deployment of AI, which are necessary to achieve value. There is substantial evidence of this in surveys.

In an MIT Sloan Management Review/BCG survey, seven out of 10 companies surveyed report minimal or no impact from AI so far. Among the 90% of companies that have made some investment in AI, fewer than two out of five report business gains from AI in the past three years. This number improves to three out of five when we include companies that have made significant investments in AI. Even so, this means 40% of organizations making significant investments in AI do not report business gains from it.

In the NewVantage Partners 2019 Big Data and AI Executive survey, firms report ongoing interest in and an active embrace of AI technologies and solutions, with 91.5% of firms reporting ongoing investment in AI. But only 14.6% of firms report that they have deployed AI capabilities into widespread production. Perhaps as a result, the percentage of respondents agreeing that their pace of investment in AI and big data was accelerating fell from 92% in 2018 to 52% in 2019.

In the Deloitte 2018 State of Enterprise AI survey, the top three challenges with AI were implementation issues, integrating AI into the company's roles and functions, and data issues, all factors involved in large-scale deployment.

In a 2018 McKinsey Global Survey of AI, most respondents whose companies have deployed AI in a specific function report achieving moderate or significant value from that use, but only 21 percent of respondents report embedding AI into multiple business units or functions.

In short, AI has not yet achieved much return on investment. It has yet to substantially improve the lives of workers, the productivity and performance of organizations, or the effective functions of societies. It is capable of doing all these things, but is being held back from its potential impact by a series of factors I will describe below.

What's Holding AI Back

I'll describe the factors that are preventing AI from having a substantial return in terms of the letters of our new organization: the ROAI Institute. Although it primarily stands for "return on artificial intelligence," it also works to describe the missing or critical ingredients for a successful return:

Reengineering: The business process reengineering movement of the 1980s and early '90s, on which I wrote the first article and book (admittedly by only a few weeks in both cases), described an opportunity for substantial change in broad business processes based on the capabilities of information technology. Then the technology catalyst was enterprise systems and the Internet; now it's artificial intelligence and business analytics.

There is a great opportunity, thus far only rarely pursued, to redesign business processes and tasks around AI. Since AI thus far is a relatively narrow technology, task redesign is more feasible now, and essential if organizations are to derive value from AI. Process and task design has become a question of which tasks machines will do vs. which are best suited to humans.

We are not condemned to narrow task redesign forever, however. Combinations of multiple AI technologies can lead to change in entire end-to-end processes: new product and service development, customer service, order management, procure-to-pay, and the like.

Organizations need to embrace this new form of reengineering while avoiding the problems that derailed the movement in the past; I once called reengineering "The Fad that Forgot People." Forgetting people, and their interactions with AI, would likewise derail AI technology as a vehicle for positive change.

Organization and Culture: AI is the child of big data and analytics, and is likely to be subject to the same organization and culture issues as its parents. Unfortunately, there are plenty of survey results suggesting that firms are struggling to achieve data-driven cultures.

The 2019 NewVantage Partners survey of large US firms cited above found that only 31.0% of companies say they are data-driven, a number that has declined from 37.1% in 2017 and 32.4% in 2018. Only 28% said in 2019 that they have a data culture, and 77% reported that business adoption of big data and AI initiatives remains a major challenge. Executives cited multiple factors (organizational alignment, agility, resistance), with 95% stemming from cultural challenges (people and process) and only 5% relating to technology.

A 2019 Deloitte survey of US executives' perspectives on analytical insights found that most executives (63%) do not believe their companies are analytics-driven; only 37% say their companies are either analytical competitors (10%) or analytical companies (27%). And 67% of executives say they are not comfortable accessing or using data from their tools and resources; even at companies with strong data-driven cultures, 37% express discomfort.

The absence of a data-driven culture affects AI as much as any technology. It means that the company and its leaders are unlikely to be motivated or knowledgeable about AI, and hence unlikely to build the necessary AI capabilities to succeed. Even if AI applications are successfully developed, they may not be broadly implemented or adopted by users. In addition to culture, AI systems may be a poor fit with an organization for reasons of organizational structure, strategy, or badly executed change management. In short, the organizational and cultural dimension is critical for any firm seeking to achieve a return on AI.

Algorithms and Data: Algorithms are, of course, the key technical feature of most AI systems, at least those based on machine learning. And it's impossible to separate data from algorithms, since machine learning algorithms learn from data. In fact, the greatest impediment to effective algorithms is insufficient, poor-quality, or unlabeled data. Other algorithm-related challenges for AI implementation include:

Investment: One key driver of the lack of return from AI is the simple failure to invest enough. Survey data suggest most companies don't invest much yet, and one survey mentioned above suggests that investment levels have peaked in many large firms. And the issue is not just the level of investment, but also how the investments are being managed. Few companies demand ROI analysis both before and after implementation; they apparently view AI as experimental, even though the most common version of it (supervised machine learning) has been available for over fifty years. The same companies may not plan for the increased investment required at the deployment stage, typically one or two orders of magnitude more than a pilot, focusing only on pre-deployment AI applications.

Of course, with any technology it can be difficult to attribute revenue or profit gains to the application. Smart companies seek intermediate measures of effectiveness, including user behavior changes, task performance, process changes, and so forth, that would precede improvements in financial outcomes. But it's rare for companies to measure even these.

A Program of Research and Structured Action

Along with several other veterans of big data and AI, I am forming the Return on AI Institute, which will carry out programs of research and structured action, including surveys, case studies, workshops, methodologies, and guidelines for projects and programs. The ROAI Institute is a benefit corporation that will be supported by companies and organizations that want to get more value out of their AI investments.

Our focus will be less on AI technology (though technological breakthroughs and trends will be considered for their potential to improve returns) and more on the factors defined in this article that improve deployment, organizational change, and financial and social returns. We will also focus on the important social dimension of AI in our work: is it improving work or the quality of life, solving social or healthcare problems, or making government bodies more responsive? Those types of benefits will be described in our work in addition to the financial ones.

Our research and recommendations will address topics such as:

Please contact me at tdavenport@babson.edu if you care about these issues with regard to your own organization and are interested in approaches to them. AI is a powerful and potentially beneficial technology, but its benefits won't be realized without considerable attention to ROAI.

Continue reading here:
Return On Artificial Intelligence: The Challenge And The Opportunity - Forbes

AI (Artificial Intelligence) Companies That Are Combating The COVID-19 Pandemic – Forbes

MADRID, SPAIN - MARCH 28: Health personnel are seen outside the emergency entrance of the Severo Ochoa Hospital on March 28, 2020 in Madrid, Spain. Spain plans to continue its quarantine measures at least through April 11. The Coronavirus (COVID-19) pandemic has spread to many countries across the world, claiming over 20,000 lives and infecting hundreds of thousands more. (Photo by Carlos Alvarez/Getty Images)

AI (Artificial Intelligence) has a long history, going back to the 1950s when the computer industry started. It's interesting to note that much of the innovation came from government programs, not private industry. This was all about how to leverage technologies to fight the Cold War and put a man on the moon.

The impact of these programs would certainly be far-reaching. They would lead to the creation of the Internet and the PC revolution.

So fast forward to today: could the COVID-19 pandemic have a similar impact? Might it be our generation's Space Race?

I think so. And of course, it's not just the US this time. This is a worldwide effort.

Wide-scale availability of data will be key. The White House Office of Science and Technology Policy has formed the COVID-19 Open Research Dataset, which has over 24,000 papers and is constantly being updated. This includes the support of the National Library of Medicine (NLM), the National Institutes of Health (NIH), Microsoft, and the Allen Institute for Artificial Intelligence.

"This database helps scientists and doctors create personalized, curated lists of articles that might help them, and allows data scientists to apply text mining to sift through this prohibitive volume of information efficiently with state-of-the-art AI methods," said Noah Giansiracusa, an assistant professor at Bentley University.

Yet there needs to be an organized effort to galvanize AI experts to action. The good news is that groups are already emerging. For example, there is the C3.ai Digital Transformation Institute, a new consortium of research universities, C3.ai (a top AI company), and Microsoft. The organization will be focused on using AI to fight pandemics.

There are even competitions being set up to spur innovation. One is Kaggle's COVID-19 Open Research Dataset Challenge, a collaboration with the NIH and the White House that leverages Kaggle's community of more than 4 million data scientists. The first contest was to help provide better forecasts of the spread of COVID-19 across the world.

Next, the Decentralized Artificial Intelligence Alliance is putting together Covidathon, an AI hackathon to fight the pandemic, coordinated by SingularityNET and Ocean Protocol. The organization has more than 50 companies, labs, and nonprofits.

And then there is MIT Solve, a marketplace for social impact innovation. It has established the Global Health Security & Pandemics Challenge. In fact, a member of this organization, Ada Health, has developed an AI-powered COVID-19 personalized screening test.

AI tools and infrastructure services can be costly. This is especially the case for models that target complex areas like medical research.

But AI companies have stepped up, that is, by eliminating their fees:

DarwinAI's COVID-19 neural network

Patient care is an area where AI could be essential. An example of this is Biofourmis. In a two-week period, this startup created a remote monitoring system that pairs a biosensor worn on a patient's arm with an AI application to help with diagnosis. In other words, this can help reduce infection rates for doctors and medical support personnel. Keep in mind that, in China, about 29% of COVID-19 deaths were healthcare workers.

Another promising innovation to help patients is from Vital. The founders are Aaron Patzer, the creator of Mint.com, and Justin Schrager, an ER doctor. Their company uses AI and NLP (natural language processing) to manage overloaded hospitals.

Vital is now devoting all its resources to C19check.com. The app, built in partnership with the Emory Department of Emergency Medicine's Health DesignED Center and the Emory Office of Critical Event Preparedness and Response, provides guidance to the public for self-triage before going to the hospital. So far, it's been used by 400,000 people.

And here are some other interesting patient care innovations:

While drug discovery has made many advances over the years, the process can still be slow and onerous. But AI can help.

For example, a startup that is using AI to accelerate drug development is Gero Pte. It has used the technology to better isolate compounds for COVID-19 by testing treatments that are already used in humans.

"Mapping the virus genome has seemed to happen very quickly since the outbreak," said Vadim Tabakman, director of technical evangelism at Nintex. "Leveraging that information with machine learning to explore different scenarios and learn from those results could be a game changer in finding a set of drugs to fight this type of outbreak. Since the world is more connected than ever, having different researchers, hospitals, and countries provide data into the datasets that get processed could also speed up the results tremendously."

Tom (@ttaulli) is the author of Artificial Intelligence Basics: A Non-Technical Introduction and The Robotic Process Automation Handbook: A Guide to Implementing RPA Systems.

Read more:
AI (Artificial Intelligence) Companies That Are Combating The COVID-19 Pandemic - Forbes

6 Visions of How Artificial Intelligence will Change Architecture – ArchDaily

In his book "Life 3.0", MIT professor Max Tegmark says "we are all the guardians of the future of life now as we shape the age of AI." Artificial Intelligence remains a Pandora's Box of possibilities, with the potential to enhance the safety, efficiency, and sustainability of cities, or destroy the potential for humans to work, interact, and live a private life. The question of how Artificial Intelligence will impact the cities of the future has also captured the imagination of architects and designers, and formed a central question to the 2019 Shenzhen Biennale, the world's most visited architecture event.

As part of the "Eyes of the City" section of the Biennial, curated by Carlo Ratti, designers were asked to put forth their visions and concerns of how artificial intelligence will impact the future of architecture. Below, we have selected six visions, where designers reflect in their own words on aspects from ecology and the environment to social isolation. For further reading on AI and the Shenzhen Biennial, see our interview with Carlo Ratti and Winy Maas on the subject, and visit our dedicated landing page of content here.

The advance of AI technologies can make it feel as if we know everything about our cities, as if all city dwellers are counted and accounted for, our urban existence fully monitored, mapped, and predicted.

But what happens when we train our attention and technologies on the non-human beings with whom we share our urban environments? How can our notion of urban life, and the possibilities to design for it, expand when we use technology to visualize more than just the relationship between humans and human-made structures?

There is much we have yet to discover about our evolving urban environments. As new technologies are developed, deployed, and appropriated, it is critical to ask how they can help us see both the city and our discipline differently. Can architecture and urban design become a multi-species, collaborative practice? The first step is opening our eyes to all of our fellow city dwellers.

Read the full article here

For all of their history, the machines around us have stood silent, but when the city acquires the ability to see, to listen, and to talk back to us, what might constitute a meaningful reciprocal interaction? Is it possible to have a productive dialogue with an autonomous shipping crane loading containers into the hull of a ship at a Chinese mega port? How do we ask a question of a warehouse filled with a million objects, or talk to a city managing itself based on aggregated data sets from an infinite network of media feeds? Consumer-facing AIs like Amazon's Alexa, Microsoft's Cortana, Google Assistant, or Apple's Siri repeat biases and forms of interaction that are a legacy of human-to-human relationships. If you ask Microsoft's personal digital assistant Cortana if she is a woman, she replies, "Well, technically I'm a cloud of infinitesimal data computation." It is unclear if Cortana is a "she" or an "it" or a "they." Deborah Harrison, the lead writer for Cortana, uses the pronoun "she" when referring to Cortana but is also explicit in stating that this does not mean she is female, or that she is human, or that a gender construct could even apply in this context. "We are very clear that Cortana is not only not a person, but there is no overlay of personhood that we ascribe, with the exception of the gender pronoun," Harrison explains. "We felt that it was going to convey something impersonal, and while we didn't want Cortana to be thought of as human, we don't want her to be impersonal or feel unfamiliar either."

Read the full article here

AI (artificial intelligence) can transform the environment we live in. Cities are facing the rise of UI (urban intelligence). Micro sensors and smart handheld electronics can gather large amounts of information. Mobile sensors, referred to as urban tech, allow cars, buses, bicycles, and even citizens to collect information about air quality, noise pollution, and the urban infrastructure at large. For example, noise data can be captured, archived, and made accessible. In an effort to contribute toward urban noise mitigation, citizens will be able to measure urban soundscapes, and urban planners and city councils can react to the data. How will our lives change intellectually, physically, and emotionally as the Internet of Things migrates into urban environments? How does technology intersect with society?

Read the full article here

Thanks to the development of the digital world, cities can be part of natural history. This is our great challenge for the next few decades. The digital revolution should allow us to promote an advanced, ecological, and human world. Being digital was never the goal; it was a means to reinvent the world. But what kind of world?

In many cases, digital technology allows us to continue doing everything we invented with the industrial revolution, only more efficiently. That's why many of the problems that arose with industrial life have been exacerbated by the introduction of new digital technologies. Our cities are still machines that import goods and generate waste. We import hydrocarbons extracted from the subsoil of the earth to make plastics or fuels, which allow us to consume or move effectively while polluting the environment. Cities are also the recipients of the millions of containers filled with products that move around the world, and the places where we produce waste that creates mountains of garbage.

Read the full article here

We may imagine that one day, when a city is full of sensors that give it the ability to watch and hear, data could be collected and analyzed as extensively as possible to make the city run more efficiently. Public space would be better managed to prevent offenses and crime, traffic flows better monitored to avoid jams and accidents, public services more evenly distributed to achieve social equity in space, land use more reasonably zoned or rezoned to achieve the highest possible land value, and so on. The city would function as a giant machine of high efficiency and rationality, one that would treat everyone and everything in the city as an element of that machine, under the supervision and in line with the values of the hidden eyes and ears. But the city is not a machine; it is an organism composed, first of all, of numerous people who are often very different from one another, and then of the physical environment they create and shape collectively. Before cities full of sensors appear, we need to first work out a complete set of regulations on the use of sensors and the data they collect, to deal with the issues of privacy and diversity.

Read the full article here

In his book The Second Digital Turn, Mario Carpo provides an incisive definition of the difference between artificial intelligence and "human" intelligence. Through the slogan "search, don't sort," he describes how our way of using email changed after the spread of Gmail:

We used to think that sorting saves time. It did; but it doesn't any more, because Google searches (in this instance, Gmail searches) now work faster and better. So taxonomies, at least in their more practical, utilitarian mode, as an information retrieval tool, are now useless. And of course computers do not have queries on the meaning of life, so they do not need taxonomies to make sense of the world, either, as we do, or did. [Mario Carpo, The Second Digital Turn: Design Beyond Intelligence, MIT Press, Cambridge, MA, 2017, p. 25.]

Machine intelligence is an infinite search based on a finite request: Carpo's machine, which announces the second digital turn (or revolution?), is able to find a needle in a haystack, so long as someone asks it to look for a needle, for reasons that are still human. There is no longer any need for shelves, drawers, or taxonomies to narrow the search terms into increasingly coherent sets (as was the case with "sorting"). The machine will find the needle wherever it is, in the chaos of the pseudo-infinite space of the World Wide Web or, more generally, of "Big Data." It will do so in an instant. And herein lies its intelligence: it can look for a needle in a pseudo-infinite haystack (Big Data) at a very high speed (Big Calcula).
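The "search, don't sort" idea can be made concrete with a toy sketch. Everything here is illustrative (not Gmail's actual implementation): a plain substring search retrieves items from an unsorted pile, with no folder taxonomy in sight.

```python
# A toy sketch of Carpo's "search, don't sort": instead of filing messages
# into folders (a taxonomy), leave the pile unsorted and retrieve by search.
# The message texts and function names here are invented for illustration.

def search(messages, query):
    """Return every message containing the query, case-insensitively."""
    q = query.lower()
    return [m for m in messages if q in m.lower()]

inbox = [
    "Invoice for March consulting",
    "Lunch on Friday?",
    "Your March invoice is overdue",
]

# No folders needed: a single query pulls both invoice-related messages
# out of the unsorted pile.
print(search(inbox, "invoice"))
```

The point of the sketch is that the retrieval step never consults any prior organization of the data; the "sorting" work is simply skipped.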

Read the full article here

Originally posted here:
6 Visions of How Artificial Intelligence will Change Architecture - ArchDaily

How long have we got before humans are replaced by artificial intelligence? – Scroll.in

My view, and that of the majority of my colleagues in AI, is that it'll be at least half a century before we see computers matching humans. Given that various breakthroughs are needed, and it's very hard to predict when breakthroughs will happen, it might even be a century or more. If that's the case, you don't need to lose too much sleep tonight.

One reason for believing that machines will get to human-level or even superhuman-level intelligence quickly is the dangerously seductive idea of the technological singularity. This idea can be traced back to a number of people over fifty years ago: John von Neumann, one of the fathers of computing, and the mathematician and Bletchley Park cryptographer I.J. Good. More recently, it's an idea that has been popularised by the science-fiction author Vernor Vinge and the futurist Ray Kurzweil.

The singularity is the anticipated point in humankind's history when we have developed a machine so intelligent that it can recursively redesign itself to be even more intelligent. The idea is that this would be a tipping point, and machine intelligence would suddenly start to improve exponentially, quickly exceeding human intelligence by orders of magnitude.

Once we reach the technological singularity, we will no longer be the most intelligent species on the planet. It will certainly be an interesting moment in our history. One fear is that it will happen so quickly that we won't have time to monitor and control the development of this super-intelligence, and that this super-intelligence might lead, intentionally or unintentionally, to the end of the human race.

Proponents of the technological singularity, who, tellingly, are usually not AI researchers but futurists or philosophers, behave as if the singularity is inevitable. To them, it is a logical certainty; the only question mark is when. However, like many other AI researchers, I have considerable doubt about its inevitability.

We have learned, over half a century of work, how difficult it is to build computer systems with even modest intelligence. And we have never built a single computer system that can recursively self-improve. Indeed, even the most intelligent system we know of on the planet, the human brain, has made only modest improvements in its cognitive abilities. It is, for example, still as painfully slow today for most of us to learn a second language as it always was. Little of our understanding of the human brain has made the task easier.

Since 1930, there has been a significant and gradual increase in intelligence test scores in many parts of the world. This is called the Flynn effect, after the New Zealand researcher James Flynn, who has done much to identify the phenomenon. However, explanations for this have tended to focus on improvements in nutrition, healthcare and access to school, rather than on how we educate our young people.

There are multiple technical reasons why the technological singularity might never happen. I discussed many of these in my last book. Nevertheless, the meme that the singularity is inevitable doesn't seem to be getting any less popular. Given the importance of the topic, which may decide the fate of the human race, I will return again to these arguments, in greater detail, and in light of recent developments in the debates. I will also introduce some new arguments against the inevitability of the technological singularity.

My first objection to the supposed inevitability of the singularity is an idea that has been called the faster-thinking dog argument. It considers the consequences of being able to think faster. While computer speeds may have plateaued, computers nonetheless still process data faster and faster. They achieve this by exploiting more and more parallelism, doing multiple tasks at the same time, a little like the brain.

There's an expectation that, by being able to think longer and harder about problems, machines will eventually become smarter than us. And we certainly have benefited from ever-increasing computer power; the smartphone in your pocket is evidence of that. But processing speed alone probably won't get us to the singularity.

Suppose that you could increase the speed of the brain of your dog. Such a faster-thinking dog would still not be able to talk to you, play chess or compose a sonnet. For one thing, it doesn't possess complex language. A faster-thinking dog will likely still be a dog. It will still dream of chasing squirrels and sticks. It may think these thoughts more quickly, but they will likely not be much deeper. Similarly, faster computers alone will not yield higher intelligence.

Intelligence is a product of many things. It takes us years of experience to train our intuitions. And during those years of learning we also refine our ability to abstract: to take ideas from old situations and apply them to new, novel situations. We add to our common sense knowledge, which helps us adapt to new circumstances. Our intelligence is thus much more than thinking faster about a problem.

My second argument against the inevitability of the technological singularity is anthropocentricity. Proponents of the singularity place a special importance on human intelligence. Surpassing human intelligence, they argue, is a tipping point. Computers will then recursively be able to redesign and improve themselves. But why is human intelligence such a special point to pass?

Human intelligence cannot be measured on some single, linear scale. And even if it could be, human intelligence would not be a single point, but a spectrum of different intelligences. In a room full of people, some people are smarter than others. So what metric of human intelligence are computers supposed to pass? That of the smartest person in the room? The smartest person on the planet today? The smartest person who ever lived? The smartest person who might ever live in the future? The idea of passing human intelligence is already starting to sound a bit shaky.

But let's put these objections aside for a second. Why is human intelligence, whatever it is, the tipping point to pass, after which machine intelligence will inevitably snowball? The assumption appears to be that if we are smart enough to build a machine smarter than us, then this smarter machine must also be smart enough to build an even smarter machine. And so on. But there is no logical reason that this would be the case. We might be able to build a smarter machine than ourselves. But that smarter machine might not necessarily be able to improve on itself.

There could be some level of intelligence that is a tipping point. But it could be any level of intelligence. It seems unlikely that the tipping point is less than human intelligence. If it were less than human intelligence, we humans could likely simulate such a machine today, use this simulation to build a smarter machine, and thereby already start the process of recursive self-improvement.

So it seems that any tipping point is at, or above, the level of human intelligence. Indeed, it could be well above human intelligence. But if we need to build machines with much greater intelligence than our own, this throws up the possibility that we might not be smart enough to build such machines.

My third argument against the inevitability of the technological singularity concerns meta-intelligence. Intelligence, as I said before, encompasses many different abilities. It includes the ability both to perceive the world and to reason about that perceived world. But it also includes many other abilities, such as creativity.

The argument for the inevitability of the singularity confuses two different abilities. It conflates the ability to do a task and the ability to improve your ability to do a task. We can build intelligent machines that improve their ability to do particular tasks, and do these tasks better than humans. Baidu, for instance, has built Deep Speech 2, a machine-learning algorithm that learned to transcribe Mandarin better than humans.

But Deep Speech 2 has not improved our ability to learn tasks. It takes Deep Speech 2 just as long now to learn to transcribe Mandarin as it always has. Its superhuman ability to transcribe Mandarin hasn't fed back into improvements of the basic deep-learning algorithm itself. Unlike humans, who get to be better learners as they learn new tasks, Deep Speech 2 doesn't learn faster as it learns more.

Improvements to deep-learning algorithms have come about the old-fashioned way: by humans thinking long and hard about the problem. We have not yet built any self-improving machines. It's not certain that we ever will.

Excerpted with permission from 2062: The World That AI Made, Toby Walsh, Speaking Tiger Books.

Excerpt from:
How long have we got before humans are replaced by artificial intelligence? - Scroll.in

STAT’s guide to how hospitals are using AI to fight Covid-19 – STAT

The coronavirus outbreak has rapidly accelerated the nation's slow-moving effort to incorporate artificial intelligence into medical care, as hospitals grasp onto experimental technologies to relieve an unprecedented strain on their resources.

AI has become one of the first lines of defense in the pandemic. Hospitals are using it to help screen and triage patients and identify those most likely to develop severe symptoms. They're scanning faces to check temperatures and harnessing fitness tracker data to zero in on individual cases and potential clusters. They are also using AI to keep tabs on the virus in their own communities. They need to know who has the disease, who is likely to get it, and what supplies are going to run out tomorrow, two weeks from now, and further down the road.

Just weeks ago, some of those efforts might have stirred a privacy backlash. Other AI tools were months from deployment because clinicians were still studying their impacts on patients. But as Covid-19 has snowballed into a global crisis, health care's normally methodical approach to new technology has been hijacked by demands that are plainly more pressing.

There's a crucial caveat: It's not clear if these AI tools are going to work. Many are based on drips of data, often from patients in China with severe disease. Those data might not be applicable to people in other places or with milder disease. Hospitals are testing models for Covid-19 care that were never intended to be used in such a scenario. Some AI systems could also be susceptible to overfitting, meaning that they've modeled their training data so well that they have trouble analyzing new data, which is coming in constantly as cases rise.
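The overfitting caveat can be illustrated with a minimal, hypothetical sketch: a polynomial with enough parameters to pass through every training point fits that data almost perfectly, yet its error jumps on data it has never seen. The data and model here are invented for illustration and have nothing to do with any clinical system.

```python
# Illustrative overfitting sketch: a flexible model memorizes a handful of
# noisy training points, then struggles on fresh data from the same process.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 6)
y_train = x_train + rng.normal(0, 0.05, 6)   # noisy samples of a linear trend

# Degree 5 gives the model enough parameters to pass through all 6 points.
overfit = np.polynomial.Polynomial.fit(x_train, y_train, deg=5)
simple = np.polynomial.Polynomial.fit(x_train, y_train, deg=1)

x_new = np.linspace(0, 1, 50)
y_new = x_new                                # the true underlying relationship

def mse(model, x, y):
    """Mean squared error of a fitted model on data (x, y)."""
    return float(np.mean((model(x) - y) ** 2))

print("train error, overfit model:", mse(overfit, x_train, y_train))  # near zero
print("new-data error, overfit model:", mse(overfit, x_new, y_new))
print("new-data error, simple model:", mse(simple, x_new, y_new))
```

The flexible model's near-zero training error says nothing about how it behaves on new cases, which is exactly the worry with models trained on small, early Covid-19 cohorts.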

The uptake of new technologies is moving so fast that it's hard to keep track of which AI tools are being deployed and how they are affecting care and hospital operations. STAT has developed a comprehensive guide to that work, broken down by how the tools are being used.

This list focuses only on AI systems being used and developed to directly aid hospitals, clinicians, and patients. It doesn't cover the flurry of efforts to use AI to identify drug and vaccine candidates, or to track and forecast the spread of the virus.

This is one of the earliest and most common uses of AI. Hospitals have deployed an array of automated tools to allow patients to check their symptoms and get advice on what precautions to take and whether to seek care.

Some health systems, including Cleveland Clinic and OSF HealthCare of Illinois, have customized their own chatbots, while others are relying on symptom checkers built in partnership with Microsoft or startups such as Boston-based Buoy Health. Apple has also released its own Covid-19 screening system, created after consultation with the White House Coronavirus Task Force and public health authorities.

Developers code knowledge into those tools to deliver recommendations to patients. While nearly all of them are built using the CDC's guidelines, they vary widely in the questions they ask and the advice they deliver.
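As a rough sketch of what "coding knowledge into a tool" looks like in practice, here is a hypothetical rule-based checker. The symptom sets, thresholds, and advice strings are invented for illustration and are not CDC guidance; real tools encode (and must keep updating) official guidelines.

```python
# A hypothetical rule-based symptom checker. Every rule and advice string
# below is invented for illustration, not real medical guidance.

def triage(symptoms):
    """Map a set of reported symptoms to a coarse recommendation."""
    if "difficulty breathing" in symptoms:
        return "Seek emergency care"
    if {"fever", "cough"} <= symptoms:      # both symptoms reported
        return "Call your clinician for guidance"
    if symptoms:
        return "Self-isolate and monitor symptoms"
    return "No action needed"

print(triage({"fever", "cough"}))           # -> "Call your clinician for guidance"
print(triage({"difficulty breathing"}))     # -> "Seek emergency care"
```

Because the knowledge lives in hand-written rules like these, two tools built from the same guidelines can still diverge wherever their authors chose different questions, orderings, or thresholds.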

STAT reporters recently drilled eight different chatbots about the same set of symptoms. They produced a confusing patchwork of responses. Some experts on AI have cautioned that these tools, while well-intentioned, are a poor substitute for a more detailed conversation with a clinician. And given the shifting knowledge base surrounding Covid-19, these chatbots also require regular updates.

"If you don't really know how good the tool is, it's hard to understand if you're actually helping or hurting from a public health perspective," said Andrew Beam, an artificial intelligence researcher in the epidemiology department at Harvard T.H. Chan School of Public Health.

Clover, a San Francisco-based health insurance startup, is using an algorithm to identify its patients most at risk of contracting Covid-19 so that it can reach out to them proactively about potential symptoms and concerns. The algorithm uses three main sources of data: an existing algorithm the company uses to flag people at risk of hospital readmission, patients' scores on a frailty index, and information on whether a patient has an existing condition that puts them at a higher risk of dying from Covid-19.
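A back-of-the-envelope sketch shows how three such signals might be blended into a single outreach-priority score. This is not Clover's actual model; the weights, scales, and function names are all invented for illustration.

```python
# Hypothetical blend of three risk signals into one outreach-priority score.
# Weights are invented; a real model would be fit and validated on data.

def outreach_priority(readmission_risk, frailty_index, high_risk_condition):
    """Combine signals (each scaled 0..1, plus a boolean flag) into 0..1."""
    score = 0.4 * readmission_risk + 0.4 * frailty_index
    if high_risk_condition:
        score += 0.2
    return round(score, 2)

# A frail patient with a flagged condition ranks well above a low-risk one.
print(outreach_priority(0.8, 0.9, True))    # -> 0.88
print(outreach_priority(0.1, 0.2, False))   # -> 0.12
```

The resulting score is only a ranking device for prioritizing proactive outreach, not a diagnosis.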

AI could also be used to catch early symptoms of the illness in health care workers, who are at particularly high risk of contracting the virus. In San Francisco, researchers at the University of California are using wearable rings made by health tech company Oura to track health care workers' vital signs for early indications of Covid-19. If those signs, including elevated heart rate and increased temperature, show up reliably on the rings, they could be fed into an algorithm that would give hospitals a heads-up about workers who need to be isolated or receive medical care.

Covid-19 testing is currently done by taking a sample from a throat or nasal swab and then looking for tiny snippets of the genetic code of the virus. But given severe shortages of those tests in many parts of the country, some AI researchers believe that algorithms could be used as an alternative.

They're using chest images, captured via X-rays or computed tomography (CT) scans, to build AI models. Some systems aim simply to recognize Covid-19; others aim to distinguish, say, a case of Covid-19-induced pneumonia from a case caused by other viruses or bacteria. However, those models rely on patients being scanned with imaging equipment, which creates a contamination risk.

Other efforts to detect Covid-19 are sourcing training data in creative ways, including by collecting the sound of coughs. An effort called Cough for the Cure, led by a group of San Francisco-based researchers and engineers, is asking people who have tested either negative or positive for Covid-19 to upload audio samples of their cough. They're trying to train a model to tell the difference, though it's not clear yet that a Covid-19 cough has unique features.

Among the most urgent questions facing hospitals right now: Which of their Covid-19 patients are going to get worse, and how quickly will that happen? Researchers are racing to develop and validate predictive models that can answer those questions as rapidly as possible.

The latest algorithm comes from researchers at NYU Grossman School of Medicine, Columbia University, and two hospitals in Wenzhou, China. In an article published in a computer science journal on Monday, the researchers reported that they had developed a model to predict whether patients would go on to develop acute respiratory distress syndrome, or ARDS, a potentially deadly accumulation of fluid in the lungs. The researchers trained their model using data from 53 Covid-19 patients who were admitted to the Wenzhou hospitals. They found that the model was between 70% and 80% accurate in predicting whether the patients developed ARDS.
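One hedge worth keeping in mind: with only 53 patients, an accuracy estimate of roughly 75% carries wide statistical uncertainty. A quick normal-approximation sketch of the binomial standard error (illustrative arithmetic, not a calculation from the paper) makes that concrete.

```python
# Rough 95% interval for an accuracy of 0.75 estimated on n = 53 patients,
# using the normal approximation to the binomial (illustrative only).
import math

n, p = 53, 0.75
se = math.sqrt(p * (1 - p) / n)              # standard error of the estimate
lo, hi = p - 1.96 * se, p + 1.96 * se
print(f"95% interval: {lo:.2f} to {hi:.2f}")  # roughly 0.63 to 0.87
```

An interval that wide is one reason researchers stress validating such models on larger, independent cohorts before relying on them.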

At Stanford, researchers are trying to validate an off-the-shelf AI tool to see if it can help identify which hospitalized patients may soon need to be transferred to the ICU. The model, built by the electronic health records vendor Epic, analyzes patients' data and assigns them a score based on how sick they are and how likely they are to need escalated care. Stanford researchers are trying to validate the model, which was trained on data from patients hospitalized for other conditions, in dozens of Covid-19 patients. If it works, Stanford plans to use it as a decision-support tool in its network of hospitals and clinics.

Similar efforts are underway around the globe. In a paper posted to a preprint server that has not yet been peer-reviewed, researchers in Wuhan, China, reported that they had built models to try to predict which patients with mild Covid-19 would ultimately deteriorate. They trained their algorithms using data from 133 patients who were admitted to a hospital in Wuhan at the height of its outbreak earlier this year. And in Israel, the country's largest hospital has deployed an AI model developed by the Israeli company EarlySense, which aims to predict which Covid-19 patients may experience respiratory failure or sepsis within the next six to eight hours.

AI is also helping to answer pressing questions about when hospitals might run out of beds, ventilators, and other resources. Definitive Healthcare and Esri, which makes mapping and spatial analytics software, have built a tool that measures hospital bed capacity across the U.S. It tracks the location and number of licensed beds and intensive care (ICU) beds, and shows the average utilization rate.

Using a flu surge model created by the CDC, Qventus is working with health systems around the country to predict when they will reach their breaking point. It has published a data visualization tracking how several metrics will change from week to week, including the number of patients on ventilators and in ICUs.

Its current projection: At peak, there will be a shortage of 9,100 ICU beds and 115,000 beds used for routine care.

To focus in-person resources on the sickest patients, many hospitals are deploying AI-driven technologies designed to monitor patients with Covid-19 and chronic conditions that require careful management. Some of these tools simply track symptoms and vital signs, and make limited use of AI. But others are designed to pull out trends in data to predict when patients are heading toward a potential crisis.

Mayo Clinic and the University of Pittsburgh Medical Center are working with Eko, the maker of a digital stethoscope and mobile EKG technology whose products can flag dangerous heart rhythm abnormalities and symptoms of Covid-19. Mayo is also teaming up with another mobile EKG company, AliveCor, to identify patients at risk of a potentially deadly heart problem associated with the use of hydroxychloroquine, a drug being evaluated for use in Covid-19.

Many developers of remote monitoring tools are scrambling to deploy them after the Food and Drug Administration published a new policy indicating it will not object to minor modifications in the use or functionality of approved products during the outbreak. That covers products such as electronic thermometers, pulse oximeters, and products designed to monitor blood pressure and respiration.

Among them is Biofourmis, a Boston-based company that developed a wearable that uses AI to flag physiological changes associated with the infection. Its product is being used to monitor Covid-19 patients in Hong Kong and three hospitals in the U.S. Current Health, which makes a similar technology, said orders from hospitals jumped 50% in a five-day span after the coronavirus began to spread widely in the U.S.

Several companies are exploring the use of AI-powered temperature monitors to remotely detect people with fevers and block them from entering public spaces. Tampa General Hospital in Florida recently implemented a screening system that includes thermal-scanning face cameras made by Orlando, Fla.-based company Care.ai. The cameras look for fevers, sweating, and discoloration. In Singapore, the nations health tech agency recently partnered with a startup called KroniKare to pilot the use of a similar device at its headquarters and at St. Andrews Community Hospital.

As experimental therapies are increasingly tested in Covid-19 patients, monitoring how they're faring on those drugs may be the next frontier for AI systems.

A model could be trained to analyze the lung scans of patients enrolled in drug studies and determine whether those images show potential signs of improvement. That could be helpful for researchers and clinicians desperate for a signal on whether a treatment is working. It's not clear yet, however, whether imaging is the most appropriate way to measure response to drugs that are being tried for the first time on patients.

This is part of a yearlong series of articles exploring the use of artificial intelligence in health care that is partly funded by a grant from the Commonwealth Fund.

Originally posted here:
STAT's guide to how hospitals are using AI to fight Covid-19 - STAT

Artificial intelligence and climate change converge – Yale Climate Connections

What do the climate crisis and artificial intelligence have in common? In the world of Anthropocene Rag, the latest novel by writer and game designer Alex Irvine, they both completely alter our experience of life on Earth as we now know it. Set in a future United States, Anthropocene Rag is told from a variety of perspectives, including adventurous, meaning-seeking humans and nanoconstructs designed by an all-powerful AI called the Boom to look like archetypes plucked from a classic American Western.

Two such characters are Henry Dale, a God-worshiping human, and Prospector Ed, an AI construct that wants to better understand the intelligence that created him. They're joined by a motley crew of other humans and constructs, and together, they set out to find Monument City, a mythical place where humans and AI have learned to live in harmony.

To get there, they traverse a planet that looks quite different from our own. Climate change has ravaged the land, and the Boom has developed capabilities to transform landscapes instantaneously and with a grand sense of absurdity. Early on we witness a children's playground come to life, the animal-shaped rides and swing sets having been granted the ability to speak. The novel is awash in the tropes of westerns and science fiction, while playing with the familiar arcs of American myth. And yet, very little is familiar in this stunningly innovative book.

I spoke with Irvine about his inspiration for Anthropocene Rag, the ways in which his myths and tropes explore and mirror humanity's real-life response to climate change, and what he really thinks of artificial intelligence.

Amy Brady: Let's discuss your title, Anthropocene Rag. "Anthropocene" conjures images of how humans have completely altered the planet and how we'll continue to change it, while "rag" conjures the past, the Americana of the early 20th century. Where did the title come from? Did it or the story come first?

Alex Irvine: I'd been tinkering with this story for years under various unsatisfying titles before I landed on Anthropocene Rag. There were a few contributing factors. One, the idea of the Anthropocene has been on my mind, this concept that we as a single species have exerted such a decisive influence on the landscape, climate, and ecology of this period that it makes sense to name it after us. I find that awful even though it's probably apt. Two, after a long time fiddling around with the middle of the story, I decided I really wanted a steamboat in it, and before I knew it there was a piano player on the steamboat. Of course he was a nanoconstruct of Scott Joplin and he had to be playing some kind of rag. That's when the title popped into my head. At that point I realized that another bit of cultural flotsam leading me in that direction was the title of an old George R. R. Martin novel, my favorite of his: The Armageddon Rag. Sometimes it takes me a long time to figure out what one part of my mind is saying to the other.

Amy Brady: The novel is set in the future on an Earth ravaged not only by climate change but by the very technology we thought might save us: artificial intelligence. Outside the realm of fiction, writers such as Bill McKibben (with books like Falter) are suggesting that climate change and AI not only pose similar threats to humanity, they stem from a similar problem: hubris. Do you think that's an accurate view?

Alex Irvine: It's accurate as far as it goes, but it's also reductive, because we're not going to point fingers and cry hubris when someone cures cancer, and that ambition stems from the same human qualities that have given us AI, the atom bomb, and climate change. I mean, it's true that if you look at any persistent self-destructive behavior, you find either addiction or a sense of exceptionalism, which is a form of hubris. And most huge transformative technological leaps have at their root an ambition that contains some hubris. On the other hand, I think the concept of hubris is judgmental in a way that maybe isn't fair to the fundamental human impulses to make, change, and create. As human beings, it's almost literally impossible to have an idea and then decide not to think about how it might be made a reality. Has there ever been a potentially dangerous technological advance that we have considered and then decided not to pursue? Is that hubris, or just the same relentless curiosity that first got us down out of the trees and onto the savannah? I don't know. Would the world be better or worse without that drive? We wouldn't have climate change, but we wouldn't have penicillin, either.

Amy Brady: Why explore AI in your novel? What draws you to the subject?

Alex Irvine: Artificial intelligence is a fascinating thing to explore because so many takes on it fall into a limited number of camps. Either [the AI] will hate and exterminate us, or babysit us, or become docile assistants. The focus is generally on the power of AI, but I'm more interested in what it's going to feel like for them when they realize they've been brought into the world by a bunch of mentally inferior meatsacks who had no idea what they were doing and only figured it out after decades of obsessive trial and error. Seems to me like that would be pretty confusing and lonely. That idea is where the character of Life-7 came from. Before I knew who any of the human characters in the book were, I knew who Life-7 and Prospector Ed were.

Amy Brady: Your novel cleverly plays with American myth. It's fascinating to think that myths like that of Manifest Destiny will persist in a future world that looks so different from ours today. Of course, America was originally built on those myths, and they persist despite our world looking so different than it did 200 years ago. What is it about myths that makes them so persistent, so dominant, in our country's storytelling?

Alex Irvine: Myths are a beautiful, figurative shorthand for explaining things that we find difficult to explain. They become ahistorical and present themselves as carriers of eternal truths. Who doesn't want to believe that there are eternal truths? I know I do. I also think we have a deep-seated need to believe things that aren't true. The world can be a cruel and disappointing place, and it's difficult to live in it if we can't imagine a better version, or at least a version that removes our uncertainties and guilt, a la Manifest Destiny; the sanitized versions of Pocahontas and Sacajawea we all learned in elementary school; the happy slaves of minstrel shows who persist into movies of the '30s and '40s and pop up in the '70s as Oompa-Loompas. (I know, Dahl was British, but he spent a lot of time in the U.S., and Charlie and the Chocolate Factory strikes me as a book written about the United States.) But there are also myths that show us at our best instead of covering up our worst: Johnny Appleseed and Calamity Jane, for instance.

Maybe we as Americans are especially prone to this because of the Puritans' idea of New Jerusalem, John Winthrop's city set on a hill (misquoted by Reagan 350 years later). The creation of America has been mythologized even as it happens. Consider these famous words: "We hold these truths to be self-evident…" Put another way, we choose to believe this. America was founded in an act of self-mythologizing, and we've been doing it ever since. And nobody needs comforting myths more than confused and lonely people, except maybe confused and lonely AIs.

Another great thing about myths is that you can remix them for a new age and people will recognize their basic outlines. To take an example from Anthropocene Rag: Bugs Bunny is Br'er Rabbit is John the Conqueror, but he's also Coyote and Anansi. America is a great big remixed myth, fertile and confusing, totalizing but also generous, because we're so promiscuous with our mythological crossbreeding.

Amy Brady: I was struck in your novel by how quickly humans seemed to have adapted to the Boom, how the strangeness it wrought everywhere quickly became commonplace. I can see parallels between that and humanity's regard in the real world for changes wrought by the climate crisis. How quickly we seem to have forgotten just how cold the winters used to be, how loud the nights used to be with insects. Why do you think our memories are so short? Is it a survival instinct?

Alex Irvine: Our adaptability is a blessing and a curse. Mostly it's a blessing for us and a curse for every other living thing. I read a study once that concluded that predators, including us, are wired to think about where their next meal is coming from, as opposed to where their meals will be coming from next season or next year. Predators don't bury nuts, for example, so they, and we, are terrible at assessing long-term risks and benefits. The difference between us and other predators like cheetahs is that we can conceive of long-term risks and benefits. We can do the math; we can understand intellectually what long-term risks and benefits look like. What we don't seem to be able to do is internalize that knowledge at a gut level and turn it into belief. The math tells us that in 100 years, one billion people will be refugees and hundreds of millions will starve. But that's in 100 years. Our predator brains can think that, but they have trouble feeling it. So we keep driving cars and pumping oil and mining Bitcoin, because that takes care of us right now.

One of the fundamental ruptures in human nature is between our individual ability to conceive of practically anything and our collective inability to act on anything other than immediate need. Individuals are different, but on the level of nations or civilizations we tend to stabilize around behaviors that reward us in the short term. Sometimes these also offer the illusion of long-term security: take wealth hoarding. If you're a billionaire, you might think you can buy your way out of any trouble that might come along, but that's only because you can't make yourself believe that in a truly catastrophic ecological collapse, monetary wealth will be meaningless. Capitalist conditioning, another potent American myth, makes it harder to think outside that problem, too. One of the characters in the book calls capitalism a disease, and I think that's true. The idea that there can be endless growth is completely at odds with reality. The only thing that always increases is entropy.

Having said all that, I think that we will adapt to whatever happens. That's why humanity has survived for so long: because we're adaptable. We adapt with amazing speed to even the worst circumstances. That's another thing that makes it hard for us to inconvenience ourselves to deal with distant looming catastrophes. We always just figure that we can handle it, we'll adapt. And yeah, if something like the Boom happened, people would adapt. Before a month had gone by, someone would be selling tickets to see the sentient playground in Stuyvesant Town. Enterprising guides would set up hiking tours to search for Monument City. People would go about looting the ruins of Miami in a completely matter-of-fact way, because what else are you going to do?

Amy Brady: Those are startling examples from your book. How has the climate crisis manifested in your own life?

Alex Irvine: It's odd, the things you notice but don't notice you're noticing until something appears in the news to frame it for you. An example: I'd been remembering trips to rural parts of Michigan when I was a kid, with the bugs so thick over the road that once in a while my dad would stop at a gas station just to clean the windshield. It seemed to me that wasn't happening anymore, and then I read an article suggesting that as many as 90% of the insects in North America have died off in recent decades. That's terrifying, because those insects feed everything above them on the food chain. It's also a sign of the fundamental sickness of our environment, which tends to be invisible to us until someone remembers the way bug splatter looked in the glare of oncoming headlights on childhood trips down dirt roads in northern Michigan and thinks, jeez, didn't there use to be a lot more bugs?

Since reading that article, I've been wondering what else I'm not noticing, or what else I am noticing but not putting in context. Sometimes I feel like I'm in the middle of experiencing an ending, and I wonder what the world has in store for my children. I do notice spring coming earlier to southern Maine, where I've lived for 18 years. For the last ten of those years, I've been in the same house near Portland Harbor. Every spring, there's a period of a week where all the migrating ospreys arrive and float around in the sky over my house peeping and cheeping at each other. Then they pair off and go build nests. Just since I've lived in my current house, ten years, an eyeblink! I'm pretty sure that osprey rendezvous has moved a week or two earlier in the spring. When I think about that, I wonder again: What else am I not seeing?

Amy Brady: Finally, what's next for you?

Alex Irvine: I've always got lots of things cooking. I've been doing a lot of writing for games, and also working on short stories and a couple of different new books (one of which is partially set in Chicago). Maybe the best way for your readers to keep up is to cruise by alex-irvine.com and see what they can see.

FICTION: Anthropocene Rag, by Alex Irvine (Tor.com, published March 31, 2020)

Reprinted with permission of Amy Brady and Chicago Review of Books, a Yale Climate Connections content-sharing partner.

The future through Artificial Intelligence – The Star Online

ARTIFICIAL Intelligence (AI) is the wave of the future. This area of computer science, which emphasises the creation of intelligent machines that work and react like humans, is increasingly shaping the way we go about daily life.

It is revolutionising industries and improving the way business is done, and is already widely used in applications including automation, data analytics and natural language processing.

On a broader scale, from self-driving cars to voice-activated mobile phones and computer-controlled robots, the presence of AI is seen and felt almost everywhere.

As more industries embrace the science of incorporating human intelligence into machines so that they can function, think and work like humans, the demand for human capital with the relevant skills and expertise correspondingly increases.

As such, the question is, how do engineering students ride this wave and make the most of it?

AI has a steep learning curve, but the rewards of a career in AI far outweigh the investment of time and energy.

Unlike most conventional careers, AI is still in its infancy, although several modern nations have fully embraced the Fourth Industrial Revolution.

Taking this into account, UCSI University has taken the initiative to develop the Bachelor of Computer Engineering (Artificial Intelligence) programme.

The nation's best private university for two years in a row, according to the two most recent QS World University Rankings exercises, proactively defines its own AI curriculum, offering educational content that helps increase the supply of job-ready AI engineers with real-world experience.

The AI programme at UCSI consists of a number of specialisations and several overlapping disciplines, including mathematical and statistical methods, computer science and other core AI subjects, providing a conceptual framework for solving real-world engineering problems.

The first two years cover core theoretical knowledge such as mathematics and statistics, algorithm design and computer programming, as well as electrical and electronics.

Students will progress to the AI subfields by selecting specialisation elective tracks covering emerging areas such as machine learning, decision-making and robotics, perception and language, and human-AI interaction, among others.

UCSI Faculty of Engineering, Technology and Built Environment dean Asst Prof Ts Dr Ang Chun Kit pointed out that AI is, unavoidably, the way forward.

"We aim to nurture the new generation workforce with the right skill set and knowledge of smart technologies to accelerate Malaysia's transformation into a smart and modern manufacturing system."

"This programme was developed with a vision to provide the foundation for future growth in producing more complex and high-value products for industry sectors in Malaysia," he said.

Leading a faculty in which 46 members hold PhDs, Ang emphasised that the university focuses on research attachments abroad and has established partnerships with key industry players.

The faculty also stands out in securing grants to advance high-impact projects.

Students from the faculty are also selected annually for research stints at world-renowned universities such as Imperial College London and Tsinghua University.

The faculty also strives to give students field experience through internships at various top companies.

One example is Harry Hoon Jian Wen, an Electrical and Electronic Engineering student who was selected for a research attachment at the University of Queensland and also successfully completed an internship at Schneider Electric.

For further details, visit http://online.ucsiuniversity.edu.my/ or email info.sec@ucsiuniversity.edu.my
