The 5 most dystopian technologies of 2020 and beyond

Tech is always both good and bad. But we live in a time when everything gets weaponized: ideas, images, ancient texts, biases, and even people. And technology provides the tools to do it more easily, faster, and with fewer resources.

Older threats like atomic warheads are still a serious danger, but they're hard to deliver and take time and money to build. Delivering toxic images or malware to millions or billions of people, or even badly edited genes to future generations, is easy by comparison. Other technologies, like artificial intelligence, could have gradual, long-term effects that we do not, or cannot, understand at present.

We're living in a period of technological wonderment, but many of the shiniest new technologies come with their own built-in potential for harm. These are five of the most dystopian technologies of 2020 and beyond.

This summer, the Cybersecurity and Infrastructure Security Agency (CISA) called ransomware "the most visible cybersecurity risk playing out across our nation's networks." CISA says that many attacks, in which a cybercriminal seizes and encrypts a person's or organization's data and then extorts the victim for cash, are never reported because the victim organization pays off the cybercriminals and doesn't want to publicize its insecure systems.

Cybercriminals often target older people, who can have trouble telling honest from dishonest content online, delivering malware through an email attachment or a pop-up at an infected website. But the scale of attacks on large corporations, hospitals, and state governments and agencies has been growing. Governments in particular have become prime targets because of the sensitive data they hold and their ability to pay high ransoms, with 70 state and local governments hit with ransomware attacks in 2019.

Some data, like health information, is far more valuable to the owner and can yield a bigger payoff if held for ransom. Thieves can capture or quarantine large blocks of clinical information that's critical for patient care, like test results or medication data. When lives are at stake, a hospital is in a poor position to negotiate. One hospital actually shut down permanently in November after a ransomware attack in August.

It will probably get worse. The Department of Homeland Security said in 2017 that ransomware attacks could be aimed at critical infrastructure like water utilities. And the tools needed to carry out ransomware attacks are becoming more available to smaller operators, with criminal operations like Cerber and Petya selling ransomware toolkits as a service and taking a cut of the ransom from successful attacks.

Today, scientists use tools like CRISPR to edit genes, and some of this work has been controversial. Chinese scientist He Jiankui was widely criticized for editing the genes in human embryos to make them resistant to the AIDS virus, because the changes he made could be passed down through generations with unpredictable consequences.

It's these long-term generational impacts that make the young science of gene editing so dangerous. One of the scarier examples is something called a gene drive. In the natural world, a gene has a 50% chance of being passed on to the next generation. But a gene drive is passed on to the next generation 100% of the time, spreading the trait it carries with every generation until the whole population of an organism carries the gene and the trait. Scientists have suggested that gene drives could carry a trait into an invasive species of weed that would eradicate the plants' resistance to pesticides.
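
To see why that 100% inheritance rate matters, here is a minimal simulation, offered as an illustration of the math rather than anything from the researchers cited here. It assumes random mating, non-overlapping generations, a fixed population size, and treats "carrier" as a single binary trait:

```python
import random

def spread(pop_size=10_000, carriers=100, generations=10, drive=True, seed=1):
    """Return the carrier fraction of the population after each generation.

    Simplified model: random mating, non-overlapping generations,
    fixed population size, and "carrier" as a binary trait.
    """
    rng = random.Random(seed)
    freq = carriers / pop_size
    history = []
    for _ in range(generations):
        new_carriers = 0
        for _ in range(pop_size):
            # Each parent is independently a carrier with probability `freq`.
            carrier_parents = (rng.random() < freq) + (rng.random() < freq)
            if carrier_parents == 2:
                new_carriers += 1  # both parents carry the gene
            elif carrier_parents == 1:
                # Normal gene: inherited 50% of the time.
                # Gene drive: copies itself into the offspring every time.
                new_carriers += 1 if drive else (rng.random() < 0.5)
        freq = new_carriers / pop_size
        history.append(freq)
    return history

print("gene drive: ", [round(f, 2) for f in spread(drive=True)])
print("normal gene:", [round(f, 2) for f in spread(drive=False)])
```

Run it and the asymmetry is stark: a normal gene that starts in 1% of the population hovers near 1% indefinitely, while the gene drive roughly doubles its reach each generation until it saturates the whole population.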

Introducing an immunity to the AIDS virus in humans might sound like a good idea. But things can go wrong, and the implications could range from harmful to horrific, according to Stanford synthetic biologist Christina Smolke's comments during a panel on genetic engineering in 2016. A gene drive could mutate as it makes its way down through the generations and begin to allow genetic disorders, like hemophilia or sickle cell anemia, to ride along and affect future generations.

Even if the gene drive works as planned in one population of an organism, the same inherited trait could be harmful if it's somehow introduced into another population of the same species, according to a paper published in Nature Reviews by University of California, Riverside researchers Jackson Champer, Anna Buchman, and Omar Akbari. According to Akbari, the danger is scientists creating gene drives behind closed doors and without peer review. If someone intentionally or unintentionally introduced a harmful gene drive into humans, perhaps one that destroyed our resistance to the flu, it could mean the end of the species.

In the political realm, misinformation is nothing new. Earlier in our history it was called "dirty tricks," and later, "ratfucking," and referred to publishing a libelous story about an opponent or hammering up a "closed" sign outside a polling place in enemy territory.

Technology has turned this type of thing into a far darker art. Algorithms that can identify and analyze images have developed to a point where it's possible to create convincing video or audio footage depicting a person doing or saying something they really didn't. Such deepfake content, skillfully created and deployed with the right subject matter at the right time, could cause serious harm to individuals, or even calamitous damage to whole nations. Imagine a deepfaked President Trump taking to Facebook to declare war on North Korea. Or a deepfake of Trump's 2020 opponent saying something disparaging about black voters.

The anxiety over high-tech interference in the 2020 presidential election is already high. It could come in many forms, from hacks on voting systems to social media ads specifically designed to keep target groups from voting. Due to the threats that deepfakes pose, Facebook and other tech companies are trying to develop detection tools that quickly find these videos on social networks before they spread.

Part of what makes deepfakes so dangerous is that social networks naturally propagate the most dramatic political messages. This model creates more page views, engagement, and ad revenue, while amplifying and legitimizing the opinions of people and groups that earlier in history would have been considered fringe. Combine this with political advertisers' ability to narrowly target political messages at audiences that are already inclined to believe them. The advertisements aren't meant to persuade so much as to inflame voters into taking some action, like organizing a rally, voting, or just clicking share.

These factors have helped make social media platforms powerful political polarization machines where confirmation bias is the primary operator. They're far from the public square for free speech, meaningful political discourse, and debate that Facebook CEO Mark Zuckerberg likes to talk about. Facebook is a place to trade news and memes you agree with, and to become more entrenched in the political worldview you already hold.

If politics in a democracy is the process of guiding a society through discourse and compromise, tech companies like Facebook are hurting more than helping. Worse still, Facebook's refusal to vet its political ads for truthfulness signals that conspiracy theories and alternative facts are legitimate and normal. When the basic facts of the world are constantly in dispute, there's no baseline for discussion.

When you talk about artificial intelligence, there's almost always someone there to offer calming words about how AI will work with humans and not against them. That may be perfectly true now, but the scale and complexity of neural networks are growing quickly. Elon Musk has said that AI is the biggest danger facing humankind.

Why? The creation and training of deep neural networks is a bit of a dark art, with secrets hidden within a black box that's too complex for most people to understand. Neural networks are designed in a long and convoluted process to create a desired result. The choices made during that process owe more to the experience and instinct of the designer than to established standards and principles, consolidating the power of creating AI in the hands of a relatively small number of people.

Human biases have already been trained into neural networks, but that might seem trivial compared to what could happen. A computer scientist with bad intentions could introduce dangerous possibilities. According to data scientist and Snips.ai founder Rand Hindi, it might be possible for a bad actor to insert images into the training data used for autonomous driving systems, which could lead, for instance, to the AI deciding a crowded sidewalk is a good place to drive.
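
To make Hindi's scenario concrete, here is a purely hypothetical sketch of the simplest form of training-data poisoning, label flipping. Every name, label, and dataset in it is invented for illustration; this is not the API of any real training pipeline or a description of a real attack:

```python
import random

def poison_labels(dataset, target_label, poison_label, rate=0.05, seed=0):
    """Relabel a small fraction of `target_label` examples as `poison_label`.

    A model trained on the result learns the wrong association for the
    poisoned slice of the data, e.g. treating sidewalks as drivable road.
    """
    rng = random.Random(seed)
    poisoned = []
    for features, label in dataset:
        if label == target_label and rng.random() < rate:
            label = poison_label  # e.g. "sidewalk" silently relabeled "road"
        poisoned.append((features, label))
    return poisoned

# Toy usage: 5% of "sidewalk" scenes get mislabeled as drivable "road".
clean = [([0.1, 0.9], "sidewalk")] * 1000 + [([0.8, 0.2], "road")] * 1000
dirty = poison_labels(clean, target_label="sidewalk", poison_label="road")
print(sum(1 for _, label in dirty if label == "road") - 1000, "examples poisoned")
```

The unsettling part is how little leverage an attacker needs: flipping even a few percent of the labels in one class can be enough to shift a model's behavior, and the change is invisible to anyone who only inspects the model's code.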

The bigger fear is that neural networks, given enough compute power, can learn from data far faster than humans can. Not only can they make inferences faster than the human brain, but they're far more scalable. Hundreds of machines can work together on the same complex problem. By comparison, the way humans share information with each other is woefully slow and bandwidth-constrained. Big tech companies are already working on generative neural networks that process mountains of data to create entirely novel outputs, like chatbots that can carry on conversations with humans, or original musical compositions.

Where this is all leading, and whether humans can keep up, is a subject for debate. Musk believes that as AIs begin to learn and reason at larger and larger scale, an intelligence may develop somewhere deep within the layers of the network. "The thing that is the most dangerous, and it is the hardest to . . . get your arms around because it is not a physical thing, is a deep intelligence in the network," Musk said during a July speech to the National Governors Association.

The kind of sentience that Musk describes does not presently exist, and we're probably decades away from it. But most experts believe it's coming in this century. According to the aggregate response of 352 AI researchers in a 2016 survey, AI is projected to have a 50% chance of exceeding human capability in all tasks within 45 years.

These examples are just the most sensational of the tech threats facing us today and in the future. There are many other near-term threats to worry about. In many ways, our technology, and our technology companies, are still a threat to the environment. Some of our biggest tech companies, like Seagate, Intel, and the Chinese company Hikvision, the world's largest surveillance camera vendor, are enabling a growing tide of surveillance around the world. The ad-tech industry has normalized the destruction of personal privacy online. The U.S. government is sitting on its hands when it comes to securing the voting technology that will be used in the 2020 election.

It's going to take a much-improved partnership between the tech community and government regulators to ensure we stay on the good side of our most promising technology.
