What Are the Biggest Challenges Technology Must Overcome in the Next 10 Years?

Technology's fine (I definitely like texting, and some of the shows on Netflix are tolerable), but the field's got some serious kinks to work out. Some of these are hardware-related: when, for instance, will quantum computing become practical? Others are of more immediate concern. Is there some way to stop latently homicidal weirdos from getting radicalized online? Can social networks be tweaked in such a way as to not nearly guarantee the outbreak of the second Civil War? As AI advances and proliferates, how can we stop it from perpetuating, or worsening, injustice and discrimination?

For this week's Giz Asks, we've assembled a wide-ranging panel of futurists, engineers, anthropologists, and experts in privacy and AI to address these and many other hurdles.

Professor of Electrical Engineering and Computer Science and Director of the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT

Here are some broad societal impact challenges for AI. There are so many important and exciting challenges in front of us; I'll include a few I have been thinking about:

1) virtual 1:1 student-teacher ratios for all children: this will enable personalized education and growth for all children

2) individualized healthcare: this will deliver medical attention to patients that is customized to their own bodies

3) reversing climate change: this will take us beyond mapping climate change into identifying ways to repair the damage; one example is to reverse-engineer photosynthesis and incorporate such processes into smart cities to ameliorate pollution

4) interspecies communication: this will enable us to understand and communicate with members of other species, for example to understand what whales are communicating through their song

5) intelligent clothing that will monitor our bodies, both to ensure we live well and to detect the emergence of a disease before it takes hold

And here are some technical challenges:

1) interpretability and explainability of machine learning systems

2) robustness of machine learning systems

3) learning from small data

4) symbolic decision making with provable guarantees

5) generalizability

6) machine learning with provable guarantees

7) unsupervised machine learning

8) new models of machine learning that are closer to nature

Anthropologist and Research Director at the Centre National de la Recherche Scientifique, Institut Jean Nicod, Paris; Co-Founder of the Centre for the Resolution of Intractable Conflict, University of Oxford, and author of Talking to the Enemy: Faith, Brotherhood and the (Un)Making of Terrorists

How do we tell the difference between real and fake, and between good and harmful, so that we can prevent harmful fake (malign) activity and promote what is real and good?

Malign social media ecologies (hate speech, disinformation, polarizing and radicalizing campaigns, etc.) have both bottom-up and top-down aspects, each of which is difficult to deal with on its own and which together stump most counter-efforts. These problems are severely compounded by the exploitation of cognitive biases (e.g., people's tendency to believe messages that conform to their prior beliefs and to disbelieve messages that don't), and also by the exploitation of cultural belief systems (e.g., gaining trust, as in the West, based on accuracy, objectivity, validation, and competence vs. gaining trust, as in most of the rest of the world, based on respect, recognition, honor, and dignity) and preferences (e.g., values associated with family, communitarian, nationalist, and traditional mores vs. universal, multicultural, consensual, progressive values).

Malign campaigns exploit psychological biases and political vulnerabilities in the socio-cultural landscape of nations, and among transnational and substate actors, which has already led to new ways of resisting, reinforcing, and remaking political authority and alliances. Such campaigns can also be powerful force multipliers for kinetic warfare and can affect economies. Although pioneered by state actors, disinformation tools are now readily available at low cost to anyone or any group with internet access. This democratization of influence operations, coupled with democracies' vulnerabilities owing to political tolerance and free speech, requires our societies to create new forms of resilience as well as deterrence. A significant portion of malign campaigns involve self-organizing, bottom-up phenomena that self-repair. Policing and banning on any single platform (Twitter, Facebook, Instagram, VKontakte, etc.) can be downright counterproductive, with users finding back doors even after being banned, jumping between countries, continents, and languages, and eventually producing global dark pools in which illicit and malign online behaviors will flourish.

Because large clusters that carry hate speech or disinformation arise from small, organic clusters, large clusters can be reduced by first banning small clusters. In addition, random banning of a small fraction of the entire user population (say, 10 percent) would serve the dual role of lowering the risk both of banning many users from the same cluster and of inciting a large crowd. But if, indeed, states and criminal organizations with deep offline presence can create small clusters almost at will, then the problem becomes not one of simply banning small clusters or a small fraction of randomly chosen individuals. Rather, the key involves identifying the small clusters that initiate a viral cascade propagating hate or malign influence. Information cascades follow a heavy-tailed distribution: large-scale cascades are relatively rare (only 2 percent exceed 100 re-shares), and 50 percent of the shares in a cascade occur within an hour. The problem, then, is to find an appropriate strategy to identify an incipient malign viral cascade and apply countermeasures well within that first hour.
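To make the shape of that problem concrete, here is a minimal sketch of early cascade detection, assuming a stream of (cascade ID, timestamp) share events; the window and threshold values are illustrative assumptions, not figures from the research described above.

```python
from collections import defaultdict

# Illustrative assumptions: since ~50% of a cascade's shares arrive within
# its first hour and only ~2% of cascades ever exceed 100 re-shares, a
# cascade accumulating shares unusually fast in its first hour is a
# candidate for early review and countermeasures.
WINDOW_SECONDS = 3600      # observe each cascade's first hour
SHARE_THRESHOLD = 50       # hypothetical "incipient viral" cutoff

first_seen = {}                    # cascade_id -> time of first share
share_counts = defaultdict(int)    # cascade_id -> shares in first hour
flagged = set()

def observe_share(cascade_id, timestamp):
    """Process one share event; return True if the cascade should be
    escalated for human review within its first hour."""
    if cascade_id not in first_seen:
        first_seen[cascade_id] = timestamp
    # Only count shares that fall inside the cascade's first hour.
    if timestamp - first_seen[cascade_id] <= WINDOW_SECONDS:
        share_counts[cascade_id] += 1
        if share_counts[cascade_id] >= SHARE_THRESHOLD and cascade_id not in flagged:
            flagged.add(cascade_id)
            return True
    return False
```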

There is also a layering strategy evident in state-sponsored and criminally organized illicit online networks. Layering is a technique in which links to disinformation sources are embedded in popular blogs, forums, and websites of activists (e.g., environment, guns, healthcare, immigration) and enthusiasts (e.g., automobiles, music, sports, food and drink). These layering-networks, masquerading as alternative news and media sources, regularly seek bitcoin donations. Their blockchains show contributions made by anonymous donors on the order of tens of thousands of dollars at a time, and hundreds of thousands of dollars over time. We find that these layering-networks often form clusters linking to the same Google Ad accounts, earning advertising dollars for their owners and operators. Social media and advertising companies often have difficulty identifying account owners linked with illicit and malign activity, in part because the accounts often appear to be organic and regularly pass messages containing a kernel of truth. How, then, to detect layering-networks (Breitbart, One America News Network, etc.), symbols (logos, flags), faces (politicians, leaders), suspicious objects (weapons), and hate speech and anti-democracy framing, and flag them as suspicious?

Finally, knowledge of psychology and cultural belief systems is needed to train the systems that technology uses to mine, monitor, and manipulate information. Overcoming malign social media campaigns ultimately relies on human appraisal of strategic aspects, such as the importance of core values, the stakes at play (political, social, economic), and the relative strengths of the players in those stakes. This critical role of social science goes beyond the expertise of the engineers, analysts, and data scientists that platforms like Twitter, Instagram, and Facebook use to moderate propaganda, disinformation, and hateful content.

Yet an acute problem concerns overwhelming evidence from cognitive and social psychology and anthropology that truth and evidence, no matter how logically consistent or factually correct, do not sway public opinion or popular allegiance as much as appeals to basic cognitive biases that confirm deep beliefs and core cultural values. Indeed, many so-called biases used in argument do not reflect sub-optimal or deficient reasoning but rather its efficient (even optimal) use for persuasion: an evolutionarily privileged form of reasoning that socially recruits others to one's circle of beliefs for cooperation and mutual defense. Thus, to combat false or faulty reasoning, as in noxious messaging, it's not enough to set an argument's empirical and logical deficiencies against a counterargument's logical and empirical coherence. Moreover, recent evidence suggests that warning about misinformation has little effect (e.g., despite advance warning, "yes" voters are more likely than "no" voters to remember a fabricated scandal about a vote-no campaign, and "no" voters are more likely to remember a fabricated scandal about a vote-yes campaign). Evidence is also mounting that value-driven, morally focused information in general, and on social media in particular, drives not only readiness to believe but also concerted action on those beliefs.

One counter-strategy involves compromising one's own truthfulness and honesty, and ultimately one's moral legitimacy, in a disinformation arms race. Another is to remain true to the democratic values upon which our society is based (in principle if not in practice), never denying or contradicting them, and never threatening to impose them on others.

But how to consistently expose misleading, false, and malicious information while advancing truthful, evidence-based information that never contradicts our core values or threatens the core values of others (to the extent tolerable)? And how to encourage people to exit echo chambers of the like-minded and engage in free and open public deliberation on ideas that challenge preconceived or fed attitudes, so that a broader awareness of what is on offer, and a susceptibility to alternatives, may be gained, however strong one's initial preconceptions or fed history?

Professor, Mechanical Engineering, MIT, whose research focuses on quantum information and control theory

The two greatest technological challenges of our current time are

(a) good cellphone service, and

(b) a battery with the energy density of extra virgin olive oil

I need say no more about (a). For (b) I could have used diesel fuel instead of olive oil (they have similar energy densities), but I like the thought of giving my computer a squirt of extra virgin olive oil every time it runs out of juice.
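For a sense of the gap, here is the back-of-the-envelope comparison; the figures below are approximate textbook values, not numbers from this interview.

```python
# Approximate specific energies in MJ/kg (ballpark textbook values).
olive_oil = 37.0   # dietary fat is ~9 kcal/g, roughly 37 MJ/kg
diesel    = 45.6   # the same order of magnitude, as noted above
li_ion    = 0.9    # ~250 Wh/kg for a good lithium-ion cell

print(f"olive oil / Li-ion: {olive_oil / li_ion:.0f}x")   # ~40x gap
print(f"diesel / Li-ion:    {diesel / li_ion:.0f}x")      # ~50x gap
```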

Since you are also interested in quantum computing, I'll comment on that too.

Quantum computing is at a particularly exciting, and maybe scary, moment. If we can build large-scale quantum computers, they would be highly useful for a variety of problems, from code-breaking (Shor's algorithm), to drug discovery (quantum simulation), to machine learning (quantum computers could find patterns in data that can't be found by classical computers).

Over the past two decades, quantum computers have progressed from relatively feeble devices capable of performing a few hundred quantum logic operations on a few quantum bits, to devices with hundreds or thousands of qubits capable of performing thousands to tens of thousands of quantum ops.

That is, we are just at the stage where quantum computers may actually be able to do something useful. Will they do it? Or will the whole project fail?

The primary technological challenge over the next few years is to get complex superconducting quantum circuits, or extended quantum systems such as ion traps or quantum optical devices, to the point where they can be controlled precisely enough to perform computations that classical computers can't. Although there are technological challenges of fabrication and control involved, there are well-defined paths and strategies for overcoming them. In the longer run, building scalable quantum computers will require devices with hundreds of thousands of physical qubits, capable of implementing quantum error-correcting codes.

Here the technological challenges are daunting, and in my opinion, we do not yet possess a clear path to overcoming them.
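As a rough sketch of where a "hundreds of thousands of physical qubits" figure can come from, consider the surface code, where the logical error rate falls exponentially with code distance d while the physical-qubit cost grows roughly as 2d² per logical qubit; every constant below is a commonly cited ballpark value, assumed here purely for illustration.

```python
# Sketch: surface-code overhead, with assumed ballpark constants.
p_phys      = 1e-3    # assumed physical error rate per operation
p_threshold = 1e-2    # approximate surface-code threshold
target      = 1e-15   # logical error rate needed for long computations

# Logical error rate scales roughly as 0.1 * (p/p_th)^((d+1)/2);
# grow the (odd) code distance d until the target is met.
d = 3
while 0.1 * (p_phys / p_threshold) ** ((d + 1) / 2) > target:
    d += 2

physical_per_logical = 2 * d * d    # data plus measurement qubits, roughly
logical_qubits = 100                # a modest fault-tolerant machine
print(f"d = {d}, ~{physical_per_logical} physical per logical, "
      f"~{physical_per_logical * logical_qubits:,} total physical qubits")
# -> on the order of 10^5 physical qubits, i.e. "hundreds of thousands"
```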

Quantitative futurist, Founder of the Future Today Institute, Professor of Strategic Foresight at New York University Stern School of Business, and the author, most recently, of The Big Nine: How the Tech Titans and Their Thinking Machines Could Warp Humanity

The short answer is this: We continue to create new technologies without actively planning for their downstream implications. Again and again, we prioritize short-term solutions that simply never address long-term risk. We are nowists. We're not engaged in strategic thinking about the future.

The best example of our collective nowist culture can be seen in the development of artificial intelligence. We've prioritized speed over safety, and short-term commercial gains over longer-term strategy. But we're not asking important questions, like: what happens to society when we transfer power to a system, built by a small group of people, that is designed to make decisions for everyone? The answer isn't as simple as it may seem, because we now rely on just a few companies to investigate, develop, produce, sell, and maintain the technology we use each and every day. There is tremendous pressure on these companies to build practical and commercial applications for AI as quickly as possible. Paradoxically, systems intended to augment our work and optimize our personal lives are learning to make decisions that we, ourselves, wouldn't. In other cases, like warehouses and logistics, AI systems are doing much of the cognitive work on their own and relegating the physical labor to human workers.

There are new regulatory frameworks for AI being developed by the governments of the US, Canada, the EU, Japan, China, and elsewhere. Agencies like the U.S.-based National Institute of Standards and Technology are working on technical standards for AI, but that isn't being done in concert with similar agencies in other countries. Meanwhile, China is forging ahead with various AI initiatives and partnerships that are linking emerging markets around the world into a formidable global network. Universities aren't making fast, meaningful changes to their curricula to address ethics, values, and bias throughout all of the courses in their AI programs. Everyday people aren't developing the digital street smarts needed to confront this new era of technology. So they are tempted to download fun-looking but ultimately suspicious apps. They're unwittingly training machine learning systems. Too often, they are outright tricked into allowing others to access untold amounts of their social, location, financial, and biometric data.

This is a systemic problem, one that involves our governments, financiers, universities, tech companies, and even you, dear Gizmodo readers. We must actively work to create better futures. That will only happen through meaningful collaboration and global coordination to shape AI in a way that benefits companies and shareholders but also prioritizes transparency, accountability, and our personal data and privacy. The best way to engineer systemic change is to treat AI as a public good.

University Distinguished Professor, Chicago-Kent College of Law, Illinois Institute of Technology, whose work focuses on the impact of technologies on individuals, relationships, communities, and social institutions

Technologies from medicine to transportation to workplace tools are overwhelmingly designed by men and tested on men. Rather than being neutral, technologies developed to male-oriented specs can cause physical harm and financial risk to women. Pacemakers are unsuited to many women, since women's hearts beat faster than men's and that was not figured into the design. Because only male crash-test dummies were used in safety ratings until 2011, seat-belted women are 47% more likely to be seriously harmed in car accidents. When men and women visit help-wanted websites, the algorithms direct men to higher-paying jobs. Machine learning algorithms designed to screen resumes, so that companies can hire people like their current top workers, wind up discriminating against women when those current workers are men.
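A toy illustration of that last failure mode, with entirely hypothetical data: when the labeled "top workers" are mostly men, even a naive scorer learns to penalize keywords that merely proxy for gender.

```python
from collections import defaultdict

# Hypothetical training data: (resume keywords, "like our top workers"?).
# The positive examples skew male, so a gender-proxy keyword ends up
# correlated with rejection despite saying nothing about ability.
training = [
    ({"engineering", "golf"}, 1),
    ({"sales", "golf"}, 1),
    ({"engineering", "football"}, 1),
    ({"engineering", "womens_chess_club"}, 0),
    ({"sales", "womens_chess_club"}, 0),
]

totals, positives = defaultdict(int), defaultdict(int)
for keywords, label in training:
    for kw in keywords:
        totals[kw] += 1
        positives[kw] += label

# Naive per-keyword weight: P(top worker | keyword) minus the base rate.
base_rate = sum(label for _, label in training) / len(training)
weights = {kw: positives[kw] / totals[kw] - base_rate for kw in totals}
print(weights)   # "womens_chess_club" scores negative purely from history
```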

Women's hormones are different than men's, causing some drugs to have enhanced effects in women and some to have diminished effects. Even though 80% of medications are prescribed to women, drug research is still predominantly conducted on men. Between 1997 and 2000, the FDA pulled ten prescription drugs from the market, eight of which were recalled because of the health risks they posed to women.

On the other hand, some treatments may be beneficial to women but never brought to market if the testing is done primarily on men. Let's say that a drug study enrolls 1,000 people, 100 of whom are women. What if it offers no benefit to the 900 men, but all 100 women are cured? The researchers will abandon the drug, judging it to be only 10% effective. A follow-up study focused on women could lead to a new drug, to the benefit of women and the economy.
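To make the arithmetic of that hypothetical explicit, here is the aggregate-versus-subgroup calculation using the numbers from the example above.

```python
# Hypothetical trial from the example above: 1,000 participants, of whom
# 100 are women (all cured) and 900 are men (none helped).
women_enrolled, women_cured = 100, 100
men_enrolled,   men_cured   = 900, 0

overall = (women_cured + men_cured) / (women_enrolled + men_enrolled)
print(f"aggregate efficacy: {overall:.0%}")                       # 10% -> abandoned
print(f"efficacy in women:  {women_cured / women_enrolled:.0%}")  # 100%
print(f"efficacy in men:    {men_cured / men_enrolled:.0%}")      # 0%
# The aggregate figure hides a subgroup for whom the drug works perfectly,
# which is exactly what a sex-disaggregated analysis would catch.
```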

Workplace technologies also follow a male model. Female surgeons, even in elite hospitals, have to stack stools on top of one another to stand high enough to perform laparoscopic surgeries. Their lesser hand strength forces them to use both hands to operate tools that male surgeons operate with one, leaving female surgeons with more back, neck, and hand problems than men. Nonetheless, the patients of female surgeons do better than those of male surgeons. Imagine the health gain to patients (and their female surgeons) if technologies were designed to accommodate women as well as men.

Female fighter pilots wear g-suits designed in the 1960s to fit men. These too-large suits do not provide adequate protection against g-forces, which can cause a sudden loss of color vision or a full blackout as blood rushes away from the brain. The zippers generally don't unzip far enough to comfortably fit the female bladder device, which leads some female pilots not to drink before missions, potentially causing blackouts from dehydration. Other military equipment poses safety and efficacy risks to women as well. Designing with women in mind, such as the current work on exoskeletons, can benefit both female and male soldiers by providing protection and increasing strength and endurance.

I'd like to see the equivalent of a Moon Shot, a focused technology research program, that tackles the issue of women and technology. Innovation for and by women can grow the economy and create better products for everyone.

Do you have a question for Giz Asks? Email us at tipbox@gizmodo.com.
