Keeping Up With Encryption in 2020 – Security Boulevard

Encryption has become key to many cyber defense strategies, with organizations looking to more securely protect their data and privacy, as well as meet stricter compliance regulations including Europe's GDPR and the California Consumer Privacy Act. Its use is unsurprisingly on the rise, with Gartner estimating that over 80% of enterprise web traffic was encrypted in 2019 and Google currently offering the HTTPS protocol as standard to 94% of its customers, putting the company well on its way to its goal of 100% encryption this year.

From WhatsApp's end-to-end encrypted messages to secure online banking, encryption is everywhere. The cryptographic protocols Secure Sockets Layer (SSL) and its successor, Transport Layer Security (TLS), ensure organizations protect the important data on their networks while remaining compliant. Though some authorities believe they should have backdoor access to this content, tech giants and whistleblowers alike have condemned the idea, with Facebook stating it would "undermine the privacy and security of people everywhere" and Edward Snowden claiming it would be "the largest [...] violation of privacy in history."

However, for all its privacy and data protection benefits, encryption has unintentionally created a new threat: encrypted malware. Cybercriminals are turning the very qualities that make encryption so appealing to their own ends, increasingly leveraging cryptographic protocols to provide cover for their attacks. As more companies adopt encryption, hackers will have even more places to hide.

Many organizations have had firsthand experience of encrypted malware attacks. Some of 2019's higher-profile attacks, including Emotet, TrickBot and Ryuk, hid among encrypted traffic flows between compromised network servers and command-and-control centers as a way to avoid being detected by IDS and other anti-malware solutions.

Emotet, TrickBot and Ryuk have also been dubbed a triple-threat, with Emotet and TrickBot trojans being used to deliver Ryuk ransomware, causing even more damage to the affected organizations.

The biggest issue with encrypted malware attacks, and the primary reason the above examples were so successful, is that they are nearly impossible to detect, with many commonly deployed solutions offering woefully inadequate protection.

The challenge for organizations looking to spot and stop encrypted malware attacks is being able to see inside their encrypted data flows. To achieve this, many organizations decrypt the traffic entering and leaving their networks, before scanning it for threats and then re-encrypting it. While in principle this technique should work, the decryption approach comes with a whole host of issues.

First, it raises concerns around compliance. Since all encrypted traffic has to be decrypted to be inspected, there is a very real risk that some sensitive information will, for a brief time at least, be visible in plaintext. Secondly, there are huge financial costs and latency issues to consider, with costs growing and network performance being severely impacted by the amount of data that has to be processed, a problem that will only grow as the volume of encrypted data increases.

A more recent, and potentially bigger, problem is that decryption will no longer be possible thanks to the introduction of TLS 1.3. This cryptographic protocol, ratified by the IETF in 2018, includes stronger encryption and streamlined authentication processes, but also flags any decryption attempt as a man-in-the-middle attack, immediately terminating the session and preventing malicious traffic from being detected. Even the NSA has warned of the problems associated with TLS inspection, issuing a cyber advisory on the subject.

This inability to see inside encrypted traffic traversing an organization's network is worrying, to say the least, with 87% of CIOs believing their security defenses are less effective because they cannot inspect encrypted network traffic for attacks, according to Venafi. As a new decade begins, organizations need to be wary of relying on traditional methods of detecting this new attack vector and not depend on decryption alone to solve the problem. If 2019 is any indication, then hidden malware isn't going anywhere.

Gartner predicts that over 70% of malware campaigns in 2020 will use some type of encryption. Whether this includes new strains of Emotet or Ryuk, or completely new threats, organizations need to be prepared.

In particular, they must look at alternative methods of protecting their networks and consider more modern solutions. Rather than rely on anti-malware scanners that are unable to see inside encrypted traffic, or count on decryption to sort the bad data from the good, organizations should look at AI and machine learning techniques that analyze encrypted traffic at a metadata level. These methods don't require decryption, so they sidestep compliance issues by never exposing traffic content, and they avoid both the latency penalty and the complications of TLS 1.3.
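
To make this concrete, here is a minimal sketch of metadata-level traffic classification. The flow features and the synthetic records are illustrative assumptions, not the feature set of any particular product; the point is simply that the model only ever sees flow metadata, never decrypted payloads.

```python
# Hedged sketch: classify encrypted flows from metadata alone.
# Feature names and data below are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def synthetic_flows(n, malicious):
    """Fabricate flow metadata: mean packet size (bytes), packet count,
    mean inter-arrival time (ms), TLS handshake duration (ms)."""
    base = np.array([400.0, 30, 15, 200]) if malicious else np.array([900.0, 120, 40, 80])
    return base + rng.normal(0, base * 0.2, size=(n, 4))

X = np.vstack([synthetic_flows(500, False), synthetic_flows(500, True)])
y = np.array([0] * 500 + [1] * 500)          # 0 = benign, 1 = suspicious

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

new_flow = [[420, 28, 14, 210]]              # metadata only; payload stays encrypted
print(model.predict_proba(new_flow))         # [P(benign), P(suspicious)]
```

In a real deployment the features would be richer (packet-length sequences, TLS handshake fields, destination reputation) and the labels would come from threat intelligence, but the workflow is the same: extract metadata, train, and score new flows without ever touching the plaintext.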

This proactive and neater approach to malware detection will be an essential tool as encrypted malware becomes an even greater threat.

More:
Keeping Up With Encryption in 2020 - Security Boulevard

The Coronavirus Crisis: ‘Global Surveillance in Response to COVID-19 Surpassing 9/11’ – Byline Times

Campaigners warn that it would be short-sighted for governments to allow efforts to save lives in the COVID-19 outbreak to destroy fundamental rights in societies.

Around the world, journalists are being gagged and imprisoned, the location of citizens is being tracked and some are being named and shamed on Government websites. This dystopian crackdown on human rights is all taking place under the pretext of keeping people safe from an invisible killer.

COVID-19 has forced governments to introduce emergency legislation that would be unthinkable in any other situation. In many cases, the emergency powers are helping to keep people safe but, in others, they are beginning to look more like power grabs by quasi-dictators who have seen an opportunity.

A stark example of this can be seen in the centre of Europe, where Hungarian Prime Minister Viktor Orbán has passed legislation allowing him to continue to rule by decree for as long as there is a state of emergency, a state which has been declared but has no clear time limit. The legislation paves the way for citizens to be jailed for up to five years for spreading what the state considers to be misinformation.

Pavol Szalai, head of the European Union and Balkans desk with Reporters Without Borders, branded it "an Orwellian law that introduces a full-blown information police state in the heart of Europe."

In Bulgaria, Prime Minister Boyko Borissov has proposed a law that allows jail terms for those spreading fake news about infectious diseases, and police have been given the authority to request and obtain metadata from citizens' private communications. Meanwhile in Poland, Coronavirus patients are being told to download a new app that will require them to take selfies to prove that they are quarantining properly.

The UK Government has also sparked controversy with its Coronavirus Bill, labelled "the most draconian powers in peacetime" by UK campaign group Big Brother Watch because it allows police to detain anyone they believe could be infectious, restrict public events and gatherings, and impose travel restrictions. The Government is also reportedly in negotiations with mobile network operators such as O2 and EE, asking them to hand over customer data that could allow people to be tracked through their phones, in the UK and abroad.

Edin Omanovic, advocacy director of Privacy International, warned in a statement that the growing use of invasive surveillance is even surpassing how Governments across the world responded to 9/11.

"The laws, powers, and technologies being deployed around the world pose a grave and long-term threat to human freedom," he said. "Some measures are based on public health measures with significant protections, while others amount to little more than opportunistic power grabs. This extraordinary crisis requires extraordinary measures, but it also demands extraordinary protections. It would be incredibly short-sighted to allow efforts to save lives to instead destroy our societies. Even now, Governments can choose to deploy measures in ways that are lawful, build public trust and respect people's wellbeing. Now, more than ever, Governments must choose to protect their citizens rather than their own tools of control."

Privacy International is one of more than 100 civil society groups to sign an open letter urging Governments not to respond to the Coronavirus with an increase in digital surveillance if it comes at a cost to human rights. "An increase in state digital surveillance powers, such as obtaining access to mobile phone location data, threatens privacy, freedom of expression and freedom of association, in ways that could violate rights and degrade trust in public authorities, undermining the effectiveness of any public health response," it states.

These are extraordinary times, but human rights law still applies. Indeed, the human rights framework is designed to ensure that different rights can be carefully balanced to protect individuals and wider societies. States cannot simply disregard rights such as privacy and freedom of expression in the name of tackling a public health crisis.

Another signatory of the statement is Amnesty International. Rasha Abdul Rahim, deputy director of Amnesty Tech, acknowledged that technology does play an important role in combatting COVID-19 but said that it should not give governments carte blanche to expand digital surveillance.

"The recent past has shown governments are reluctant to relinquish temporary surveillance powers," she said. "We must not sleepwalk into a permanent expanded surveillance state. Increased digital surveillance to tackle this public health emergency can only be used if certain strict conditions are met. Authorities cannot simply disregard the right to privacy and must ensure any new measures have robust human rights safeguards."

In the years following the 9/11 terror attacks, the UK and US implemented major new surveillance programmes under the pretext of tackling terrorism. This included almost all US mobile phone companies providing the US National Security Agency (NSA) with all of their customers' phone records, and the UK's Government Communications Headquarters (GCHQ) intercepting fibre-optic cables around the world to capture data flowing through the internet.

These programmes and many more were revealed by NSA whistleblower Edward Snowden. In a video conference interview for the Copenhagen Documentary Film Festival, Snowden spoke of the dangers that the virus now presents to civil liberties.

On governments taking health data from devices such as fitness trackers to monitor heart rhythms, he said: "Five years later, the Coronavirus is gone, this data's still available to them; they start looking for new things. They already know what you're looking at on the internet, they already know where your phone is moving, now they know what your heart rate is. What happens when they start to intermix these and apply artificial intelligence to them?"

Here is the original post:
The Coronavirus Crisis: 'Global Surveillance in Response to COVID-19 Surpassing 9/11' - Byline Times

4 ONLINE THEATRE Wild, Hampstead Theatre London – Morning Star Online

MIKE BARTLETT'S opening play in Hampstead Theatre's short season of free weekly online productions owes much to Pinter's comedies of menace, with their characteristic mixture of humour, mystery and lurking fear.

Like The Dumb Waiter, originally planned for Hampstead's main theatre programme (now postponed), Wild is set initially in a recognisable social context, with the plot progressively leaving the target character bewildered and unhinged.

Michael, played by Jack Farthing, is a somewhat naive Edward Snowden-type whistleblower who, having leaked a massive stash of incriminating Pentagon documents, is on the run.

He's trapped in a Moscow hotel room with Caoilfhionn Dunne's zany minder pressing him to join her unidentified resistance movement. In the background there is apparently an unnamed leader holed up in a nearby foreign embassy (Julian Assange?).

She progressively strips the nervous Michael of his wavering self-confidence: "If you want to know anything about yourself, just ask."

When he fights back, insisting he had acted in the hope of creating a freer world and demanding to know what his tormentor believes in, she answers: "Progress." Her evidence? Wi-fi.

She is replaced by an equally enigmatic protector with a more threatening approach, leading to a final surrealist climax which both mirrors the increasingly tragi-farcical nature of our contemporary world and, in James Macdonald's production, cleverly plays with and merges the very artifice of theatre and video.

Available online until April 5, hampsteadtheatre.com

View original post here:
4 ONLINE THEATRE Wild, Hampstead Theatre London - Morning Star Online

Artificial Intelligence News: Latest Advancements in AI …

How does Artificial Intelligence work?

Artificial Intelligence is a complex field with many components and methodologies used to achieve the final result: an intelligent machine. AI was developed by studying the way the human brain thinks, learns and decides, then applying those biological mechanisms to computers.

As opposed to classical computing, where coders provide the exact inputs, outputs, and logic, artificial intelligence is based on giving a machine the inputs and a desired outcome and letting the machine develop its own path to achieve that goal. This frequently allows computers to optimize a situation better than humans can, in areas such as supply chain logistics and financial processes.
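
As a rough illustration of that difference, the toy sketch below (entirely made-up numbers) gives the machine only example inputs and desired outcomes, here delivery distances and costs, and lets gradient descent discover the relationship instead of having a programmer encode the rule directly.

```python
# Toy example: the program is told the desired outcomes, not the rule,
# and finds its own path (learned parameters) toward the goal.
import numpy as np

rng = np.random.default_rng(1)
distance = rng.uniform(0, 10, 200)                      # inputs
cost = 3.0 * distance + 5.0 + rng.normal(0, 0.5, 200)   # desired outcomes (noisy)

w, b, lr = 0.0, 0.0, 0.01                               # parameters to be learned
for _ in range(5000):
    err = (w * distance + b) - cost
    w -= lr * 2 * np.mean(err * distance)               # gradient step on squared error
    b -= lr * 2 * np.mean(err)

print(f"learned rule: cost ~= {w:.2f} * distance + {b:.2f}")
```

A classical program would simply hard-code cost = 3 * distance + 5; the learned version recovers an approximation of that rule from the examples alone.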

There are four types of AI, which differ in the complexity of their abilities.

Artificial intelligence is used in virtually all businesses; in fact, you likely interact with it in some capacity on a daily basis. Chatbots, smart cars, IoT devices, healthcare, banking, and logistics all use artificial intelligence to provide a superior experience.

One AI that is quickly finding its way into most consumers' homes is the voice assistant, such as Apple's Siri, Amazon's Alexa, Google's Assistant, and Microsoft's Cortana. Once simply considered part of a smart speaker, AI-equipped voice assistants are now powerful tools deeply integrated across entire ecosystems of channels and devices to provide an almost human-like virtual assistant experience.

Don't worry, we are still far from a Skynet-like scenario. AI is as safe as the technology it is built upon. But keep in mind that any device that uses AI is likely connected to the internet, and given that internet-connected device security isn't perfect and we continue to see large company data breaches, there could be AI vulnerabilities if the devices are not properly secured.

Startups and legacy players alike are investing in AI technology. Some of the leaders include household names like:

As well as newcomers such as:

APEX Technologies was also ranked as the top artificial intelligence company in China last year.

You can read our full list of most innovative AI startups to learn more.

Artificial intelligence can help reduce human error, create more precise analytics, and turn data-collecting devices into powerful diagnostic tools. One example of this is wearable devices such as smartwatches and fitness trackers, which put data in the hands of consumers and empower them to play a more active role in managing their health.

Learn more about how tech startups are using AI to transform industries like digital health and transportation.

Then-Dartmouth College professor John McCarthy coined the term "artificial intelligence" and is widely known as the father of AI. In the summer of 1956, McCarthy, along with nine other scientists and mathematicians from Harvard, Bell Labs, and IBM, developed the concept of programming machines to use language and solve problems while improving over time.

McCarthy went on to teach at Stanford for nearly 40 years and received the Turing Award in 1971 for his work in AI. He passed away in 2011.

Open application programming interfaces (APIs) are publicly available specifications that govern how an application can communicate and interact with other software. Open APIs give developers access to proprietary software or web services so they can integrate them into their own programs. For example, you can create your own chatbot on top of such an API.
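
As a very rough sketch of what building on an open API looks like, the snippet below posts a user's message to a chat endpoint and prints the reply. The URL, key, and JSON fields are hypothetical placeholders rather than a real service; any actual open API documents its own equivalents.

```python
# Hypothetical example: a tiny chatbot client built on a public web API.
# api.example.com and the request/response fields are placeholders only.
import requests

API_URL = "https://api.example.com/v1/chat"   # placeholder endpoint
API_KEY = "your-api-key"                      # most open APIs still require a key

def ask_bot(message: str) -> str:
    """Send the user's message to the API and return the bot's reply."""
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"message": message},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("reply", "")

if __name__ == "__main__":
    print(ask_bot("What hours are you open today?"))
```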

As you can imagine, artificial intelligence technology is evolving daily, and Business Insider Intelligence is keeping its finger on the pulse of how artificial intelligence will shape the future of a variety of industries, such as the Internet of Things (IoT), transportation and logistics, digital health, and multiple branches of fintech including insurtech and life insurance.

See the original post here:
Artificial Intelligence News: Latest Advancements in AI ...

Benefits and Risks of Artificial Intelligence

We might still be decades away from superhuman artificial intelligence (AI) like the sentient HAL 9000 from 2001: A Space Odyssey, but our fear of robots having a mind of their own, acting of their own (free) will and using it against humankind is nonetheless present. Even some of the greatest minds of our time, such as Elon Musk and Stephen Hawking, have been talking about this possibility.

On a more down-to-earth and practical level, artificial intelligence has already sneaked into our lives. We've grown so accustomed to some of the best AI apps, such as Cortana, Alexa or Siri, that we already think of them as our trusted companions that help us run our everyday tasks easily and smoothly.

However, while a catastrophic sci-fi movie scenario is not a thing we should be worried about (at least not at the moment), there are some risks related to AI implementation which are far more tangible and immediate.

Read on to find out more about some real-life benefits and risks of AI implementation.

By now, virtually every industry has opened its doors to the various advancements AI brings. Here are some of the most prominent uses we're witnessing, and will be seeing more of in the years to come, in the digital marketing, healthcare, and finance industries.

If you've recently used a chat to reach customer service, chances are high you've been talking to a chatbot, maybe even without realizing this fact. It may come as a surprise that 40% of customers are fine with both options, as long as they get their issues solved.

Chatbots embody many benefits AI brings to businesses, and are a great example of how it may improve a sensitive and time-consuming matter such as customer service.

Some of the crucial points where you can see the advantages of AI-powered chatbots are:

Besides chatbots, AI can benefit digital marketing in many different ways, as it can be used to automate many different tasks, such as email and paid ads campaigns. It can also help marketers create more precise buyer personas, predict customers' behavior and give sales forecasts, help with content creation, etc. These benefits to the e-commerce industry can hardly be measured, as businesses can now always be there for their online customers, assisting them in making their purchasing decisions and helping them navigate their customer journey.

Another noticeable way AI benefits our lives is through its usage in healthcare.

We've recently witnessed a win of trained AI over human experts, as AI outperformed six radiologists in reading mammograms and recognizing breast cancer. Images can now be analyzed in a few seconds by the computer algorithm, so the use of AI can significantly improve the speed of diagnosis.

Beyond radiology, AI is widely used in digital consultations, on platforms such as the Buoy and Isabel symptom checkers, offering remote medical assistance and suggesting where to see a professional based on the user's location.

The advantages of AI were recognized early by the finance and banking sectors, and the technology is now implemented in ways beneficial to both banks and their customers.

One of the best examples of how beneficial AI in this industry can be is Erica, Bank of America's virtual assistant. Erica has by now served over 7 million customers and managed over 50 million of their requests, helping them with their transactions and budgeting, tracking their spending habits and giving useful advice.

As for the real risks of AI today, the one that raises the most concern is job loss, which in some industries seems inevitable.

AI-powered employees have quite a few advantages over their human colleagues. Because they have no personal and emotional responses, they're never exhausted, bored or distracted, not to mention that they are more productive and efficient. Furthermore, they make significantly fewer errors.

Such qualities make AI most likely to cause layoffs where a lot of tasks can be automated, such as the trucking, food service and retail industries, leading to millions of unemployed workers and even higher income inequality.

Another rising concern has been the invasion of privacy. This has already taken place in China, where AI-powered technologies are used for mass surveillance, feeding the so-called social credit system.

The system tracks users' behavior everywhere it can: it has access to their social media profiles, their financial reports, health records and more. Data collected this way, including jaywalking and failing to correctly sort personal waste, can negatively influence a person's credit score, while donating blood or volunteering can increase it. Negative credit can, for example, ban you from buying plane tickets or enrolling your kids in certain schools.

Finally, the possibility of using AI capabilities for military purposes shouldn't be neglected, as the idea of having this kind of power concentrated in the hands of any world leader seems like a genuine threat to the world as we know it.

And while we think about all the benefits and risks artificial intelligence brings, let's not forget one crucial point: AI doesn't set its own goals. The power it has is the power we delegate to it to achieve the things we are trying to accomplish, meaning that we're responsible for both its benefits and its risks.

Visit link:
Benefits and Risks of Artificial Intelligence

What Skills Do I Need to Get a Job in Artificial Intelligence?

Automation, robotics and the use of sophisticated computer software and programs characterize a career in artificial intelligence (AI). Candidates interested in pursuing jobs in this field require specific education based on foundations of math, technology, logic, and engineering perspectives. Written and verbal communication skills are also important to convey how AI tools and services are effectively employed within industry settings. To acquire these skills, those with an interest in an AI career should investigate the various career choices available within the field.

The most successful AI professionals often share common characteristics that enable them to succeed and advance in their careers. Working with artificial intelligence requires an analytical thought process and the ability to solve problems with cost-effective, efficient solutions. It also requires foresight about technological innovations that translate to state-of-the-art programs that allow businesses to remain competitive. Additionally, AI specialists need technical skills to design, maintain and repair technology and software programs. Finally, AI professionals must learn how to translate highly technical information in ways that others can understand in order to carry out their jobs. This requires good communication and the ability to work with colleagues on a team.

Basic computer technology and math backgrounds form the backbone of most artificial intelligence programs. Entry-level positions require at least a bachelor's degree, while positions entailing supervision, leadership or administrative roles frequently require master's or doctoral degrees. Typical coursework involves study of:

Candidates can find degree programs that offer specific majors in AI or pursue an AI specialization from within majors such as computer science, health informatics, graphic design, information technology or engineering.

A career in artificial intelligence can be realized within a variety of settings including private companies, public organizations, education, the arts, healthcare facilities, government agencies and the military. Some positions may require security clearance prior to hiring depending on the sensitivity of information employees may be expected to handle. Examples of specific jobs held by AI professionals include:

From its inception in the 1950s through the present day, artificial intelligence continues to advance and improve the quality of life across multiple industry settings. As a result, those with the skills to translate digital bits of information into meaningful human experiences will find a career in artificial intelligence to be sustaining and rewarding.

See original here:
What Skills Do I Need to Get a Job in Artificial Intelligence?

What's the Difference Between Artificial Intelligence …

This is the first of a multi-part series explaining the fundamentals of deep learning by long-time tech journalist Michael Copeland.

Artificial intelligence is the future. Artificial intelligence is science fiction. Artificial intelligence is already part of our everyday lives. All those statements are true; it just depends on what flavor of AI you are referring to.

For example, when Google DeepMind's AlphaGo program defeated South Korean Master Lee Se-dol in the board game Go earlier this year, the terms AI, machine learning, and deep learning were used in the media to describe how DeepMind won. And all three are part of the reason why AlphaGo trounced Lee Se-dol. But they are not the same things.

The easiest way to think of their relationship is to visualize them as concentric circles, with AI, the idea that came first, as the largest; then machine learning, which blossomed later; and finally deep learning, which is driving today's AI explosion, fitting inside both.

AI has been part of our imaginations and simmering in research labs since a handful of computer scientists rallied around the term at the Dartmouth Conferences in 1956 and birthed the field of AI. In the decades since, AI has alternately been heralded as the key to our civilization's brightest future, and tossed on technology's trash heap as a harebrained notion of over-reaching propellerheads. Frankly, until 2012, it was a bit of both.

Over the past few years AI has exploded, and especially since 2015. Much of that has to do with the wide availability of GPUs that make parallel processing ever faster, cheaper, and more powerful. It also has to do with the simultaneous one-two punch of practically infinite storage and a flood of data of every stripe (that whole Big Data movement): images, text, transactions, mapping data, you name it.

Let's walk through how computer scientists have moved from something of a bust until 2012 to a boom that has unleashed applications used by hundreds of millions of people every day.

Back in that summer of '56 conference, the dream of those AI pioneers was to construct complex machines, enabled by emerging computers, that possessed the same characteristics of human intelligence. This is the concept we think of as General AI: fabulous machines that have all our senses (maybe even more), all our reason, and think just like we do. You've seen these machines endlessly in movies, as friend (C-3PO) and foe (The Terminator). General AI machines have remained in the movies and science fiction novels for good reason; we can't pull it off, at least not yet.

What we can do falls into the concept of Narrow AI: technologies that are able to perform specific tasks as well as, or better than, we humans can. Examples of narrow AI are things such as image classification on a service like Pinterest and face recognition on Facebook.

Those are examples of Narrow AI in practice. These technologies exhibit some facets of human intelligence. But how? Where does that intelligence come from? That gets us to the next circle, machine learning.

Machine learning at its most basic is the practice of using algorithms to parse data, learn from it, and then make a determination or prediction about something in the world. So rather than hand-coding software routines with a specific set of instructions to accomplish a particular task, the machine is trained using large amounts of data and algorithms that give it the ability to learn how to perform the task.
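
A tiny sketch of that idea, with a dataset invented purely for illustration: instead of hand-coding a rule for spotting spam, we give the algorithm a few labeled examples and let it infer its own decision rules.

```python
# Learning from data instead of hand-coding rules (toy, invented dataset).
from sklearn.tree import DecisionTreeClassifier

# Each example: [number of exclamation marks, contains the word "free" (1/0)]
X = [[0, 0], [1, 0], [0, 1], [5, 1], [7, 1], [6, 0], [0, 0], [8, 1]]
y = [0, 0, 0, 1, 1, 1, 0, 1]                 # 0 = normal email, 1 = spam

model = DecisionTreeClassifier().fit(X, y)   # the algorithm builds its own rules
print(model.predict([[4, 1]]))               # prediction for a new, unseen email
```

The programmer never wrote "if it says 'free' and has many exclamation marks, it's spam"; the tree inferred something to that effect from the examples alone.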

Machine learning came directly from the minds of the early AI crowd, and the algorithmic approaches over the years included decision tree learning, inductive logic programming, clustering, reinforcement learning, and Bayesian networks, among others. As we know, none achieved the ultimate goal of General AI, and even Narrow AI was mostly out of reach with early machine learning approaches.

To learn more about deep learning, listen to the 100th episode of our AI Podcast with NVIDIA's Ian Buck.

As it turned out, one of the very best application areas for machine learning for many years was computer vision, though it still required a great deal of hand-coding to get the job done. People would go in and write hand-coded classifiers like edge detection filters so the program could identify where an object started and stopped; shape detection to determine if it had eight sides; a classifier to recognize the letters S-T-O-P. From all those hand-coded classifiers they would develop algorithms to make sense of the image and learn to determine whether it was a stop sign.
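
For a sense of what hand-coded means here, below is a rough sketch of one such building block, a Sobel edge-detection filter: every number in it was chosen by a person rather than learned from data. The tiny synthetic image is only there to make the snippet runnable.

```python
# A hand-coded edge detector (Sobel filter): human-chosen numbers, no learning.
import numpy as np

def sobel_edges(image):
    """Return an edge-strength map for a 2-D grayscale image."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)  # horizontal gradient
    ky = kx.T                                                          # vertical gradient
    h, w = image.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = image[i:i + 3, j:j + 3]
            out[i, j] = np.hypot(np.sum(patch * kx), np.sum(patch * ky))
    return out

# Synthetic test image: a dark square on a bright background.
img = np.full((10, 10), 255.0)
img[3:7, 3:7] = 0.0
print(sobel_edges(img).round())
```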

Good, but not mind-bendingly great, especially on a foggy day when the sign isn't perfectly visible, or when a tree obscures part of it. There's a reason computer vision and image detection didn't come close to rivaling humans until very recently: the approach was too brittle and too prone to error.

Time, and the right learning algorithms, made all the difference.

Another algorithmic approach from the early machine-learning crowd, artificial neural networks, came and mostly went over the decades. Neural networks are inspired by our understanding of the biology of our brains: all those interconnections between the neurons. But, unlike a biological brain, where any neuron can connect to any other neuron within a certain physical distance, these artificial neural networks have discrete layers, connections, and directions of data propagation.

You might, for example, take an image and chop it up into a bunch of tiles that are fed into the first layer of the neural network. Individual neurons in the first layer then pass the data to a second layer. The second layer of neurons does its task, and so on, until the final layer produces the final output.

Each neuron assigns a weighting to its input, reflecting how correct or incorrect it is relative to the task being performed. The final output is then determined by the total of those weightings. So think of our stop sign example. Attributes of a stop sign image are chopped up and examined by the neurons: its octagonal shape, its fire-engine red color, its distinctive letters, its traffic-sign size, and its motion or lack thereof. The neural network's task is to conclude whether this is a stop sign or not. It comes up with a probability vector, really a highly educated guess, based on the weighting. In our example the system might be 86% confident the image is a stop sign, 7% confident it's a speed limit sign, 5% confident it's a kite stuck in a tree, and so on. The network architecture then tells the neural network whether it is right or not.
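
The arithmetic behind that probability vector can be sketched in a few lines. The weights below are made up; in a real network they would be learned, and there would be millions of them, but the mechanics of "weighted evidence in, probability vector out" are the same.

```python
# Toy forward pass: weighted attribute scores become class probabilities.
import numpy as np

def softmax(scores):
    exps = np.exp(scores - np.max(scores))
    return exps / exps.sum()

# Evidence for four attributes of one image, each between 0 and 1:
# octagonal shape, red color, "STOP" lettering, traffic-sign size.
attributes = np.array([0.9, 0.8, 0.7, 0.9])

# One row of (invented) weights per class: stop sign, speed-limit sign, kite.
weights = np.array([
    [2.0, 1.5, 2.5, 1.0],    # stop sign rewards all four attributes
    [0.2, 0.1, 0.3, 1.0],    # speed-limit sign mostly shares the size attribute
    [0.1, 0.4, 0.0, 0.1],    # kite barely matches anything
])
biases = np.array([0.0, 0.5, 0.2])

probabilities = softmax(weights @ attributes + biases)   # the "educated guess"
for name, p in zip(["stop sign", "speed limit sign", "kite"], probabilities):
    print(f"{name}: {p:.0%}")
```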

Even this example is getting ahead of itself, because until recently neural networks were all but shunned by the AI research community. They had been around since the earliest days of AI, and had produced very little in the way of intelligence. The problem was that even the most basic neural networks were very computationally intensive; it just wasn't a practical approach. Still, a small heretical research group led by Geoffrey Hinton at the University of Toronto kept at it, finally parallelizing the algorithms for supercomputers to run and proving the concept, but it wasn't until GPUs were deployed in the effort that the promise was realized.

If we go back again to our stop sign example, chances are very good that as the network is getting tuned or trained it's coming up with wrong answers a lot. What it needs is training. It needs to see hundreds of thousands, even millions of images, until the weightings of the neuron inputs are tuned so precisely that it gets the answer right practically every time, fog or no fog, sun or rain. It's at that point that the neural network has taught itself what a stop sign looks like; or your mother's face in the case of Facebook; or a cat, which is what Andrew Ng did in 2012 at Google.
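
A minimal sketch of that training process is below: a single artificial neuron (logistic regression) sees many noisy, labeled examples and nudges its weights after each one until it is rarely wrong. The feature vectors are synthetic and purely illustrative.

```python
# Toy training loop: repeated small weight updates on noisy labeled examples.
import numpy as np

rng = np.random.default_rng(42)

def make_example():
    """Features [octagonal, red, 'STOP' letters] plus label (1 = stop sign)."""
    is_stop = rng.random() < 0.5
    clean = np.array([1.0, 1.0, 1.0]) if is_stop else np.array([0.1, 0.3, 0.0])
    return clean + rng.normal(0, 0.3, 3), float(is_stop)   # noise = fog, glare...

w, b, lr = np.zeros(3), 0.0, 0.1
for _ in range(50000):                                      # many, many examples
    x, label = make_example()
    pred = 1 / (1 + np.exp(-(w @ x + b)))                   # current guess
    err = pred - label
    w -= lr * err * x                                       # tune the weights a little
    b -= lr * err

clean_stop = np.array([1.0, 1.0, 1.0])
print("P(stop sign) =", 1 / (1 + np.exp(-(w @ clean_stop + b))))
```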

Ng's breakthrough was to take these neural networks and essentially make them huge, increase the layers and the neurons, and then run massive amounts of data through the system to train it. In Ng's case it was images from 10 million YouTube videos. Ng put the deep in deep learning, which describes all the layers in these neural networks.

Today, image recognition by machines trained via deep learning is in some scenarios better than humans, and that ranges from cats to identifying indicators for cancer in blood and tumors in MRI scans. Google's AlphaGo learned the game, and trained for its Go match, by tuning its neural network as it played against itself over and over and over.

Deep learning has enabled many practical applications of machine learning, and by extension the overall field of AI. Deep learning breaks down tasks in ways that make all kinds of machine assists seem possible, even likely. Driverless cars, better preventive healthcare, even better movie recommendations are all here today or on the horizon. AI is the present and the future. With deep learning's help, AI may even get to that science fiction state we've so long imagined. You have a C-3PO, I'll take it. You can keep your Terminator.

Original post:
What's the Difference Between Artificial Intelligence ...

Artificial Intelligence and COVID-19: How Technology Can Understand, Track, and Improve Health Outcomes – Stanford University News

On April 1, nearly 30 artificial intelligence (AI) researchers and experts met virtually to discuss ways AI can help understand COVID-19 and potentially mitigate the disease and developing public health crisis.

"COVID-19 and AI: A Virtual Conference," hosted by the Stanford Institute for Human-Centered Artificial Intelligence, brought together Stanford faculty across medicine, computer science, and the humanities, along with politicians, startup founders, and researchers from universities across the United States.

"In these trying times, I am especially inspired by the eagerness and diligence of scientists, clinicians, mathematicians, engineers, and social scientists around the world that are coming together to combat this pandemic," Fei-Fei Li, Denning Family Co-Director of Stanford HAI, told the live audience.

COVID-19: What is Working?

As the virus envelops the world, South Korea, China, Hong Kong, and Singapore have been able to drastically flatten their curves, says Michele Barry, Stanford University professor of medicine. To begin, these countries were quick to enact strong containment, social-distancing or quarantine rules, rigorous and free testing and tracking, and far-reaching communication strategies. Why else were they so successful? All were highly prepared to meet this health crisis as a result of prior experience confronting the 2002 SARS epidemic, she notes.

Jason Wang, director of Stanford's Center for Policy, Outcomes, and Prevention, pointed to Taiwan as another leader in this space. Taiwan focused on tracking health supplies, coordinating government agencies, regulating transportation, and amending laws for violating quarantine. Both Taiwan and South Korea implemented aggressive technologies, including thermal imaging. If your temperature reading was too high, for example, you were denied entry to an office building or restaurant.

In the United States, early focus has shifted from containment to quarantine and testing. "We're paying attention to Korea, China, and Singapore and other places that are a month ahead of us," says U.S. Rep. Ami Bera. Serological testing, used to diagnose the presence of antibodies in the blood, will help us understand who has immunity and when we can reopen parts of the community, he adds.

The Fight Against Misinformation

Managing the scope of this global pandemic has been made more difficult and complicated by the spread of disinformation, misinformation, and conspiracy theories.

In times of crisis, University of Washington associate professor Kate Starbird explains, people come together to seek information and take psychological comfort. But sensemaking can also lead to false rumors. Disinformation, false information that's spread intentionally, causes confusion and even panic and can divert resources to the wrong areas, says Stanford Health Communication Initiative director Seema Yasmin. Both disinformation and misinformation (any information that's inaccurate) can breed xenophobia. Eram Alam, Harvard University assistant professor, notes a recent uptick in hate crimes and racist incidents as references to the "Chinese virus" or "Wuhan virus" peppered articles and government news conferences.

To maintain trust, says Starbird, political leaders must be mindful that their statements do not contribute to the spread of misinformation or cast doubt on science; crisis communicators must be transparent about the rationale for their actions (while acknowledging that facts may change as we learn more).

Researchers Roles in Fighting COVID-19

Across disciplines, researchers are finding ways to fight COVID-19 by sharing data and building new tools. Infectious diseases data scientist Lucy Li of the Chan Zuckerberg Biohub says her organization is developing a tool to estimate unreported infections. At Stanford, associate professor of medicine Nigam Shah and colleagues are homing in on ways data science can respond both operationally (How many patients will our region have? How many ICU beds do we need?) and clinically (Whom do we test?), while pointing to critical areas for further research (What drugs can help us?). Harvard Medical School pediatrician John Brownstein and his team are tracking all coronavirus infections worldwide and partnering with organizations designing tools around the information; together with the CDC, for example, they are working to analyze the efficacy of various social-distancing policies.

At Carnegie Mellon, statistics and machine learning associate professor Ryan Tibshirani's epidemiological forecasting team has shifted from studying flu to COVID-19, producing short-term forecasts that will inform public health officials in making policy decisions. Meanwhile, Tina White, a Stanford mechanical engineering PhD candidate, designed an open-source app to track the spread of COVID-19 using anonymized Bluetooth data. HAI co-director Fei-Fei Li's research offers an AI approach to helping senior citizens stay in their homes: sensors and cameras could send valuable information about sleep or dietary patterns, for instance, to clinicians in a secure and ethical way.

Meanwhile, startups are playing a role. Curai co-founder Xavier Amatriain says his company's machine learning tools create personalized diagnostic assessments, while Anthony Goldbloom's company, Kaggle, offers the machine-learning community ways to share data and review each other's work.

Finding a Cure

Tools are essential weapons for tracking and better understanding the disease, but vaccines and drugs are the pathway to an eventual cure. Binbin Chen, Stanford genetics MD and PhD student, says vaccines are among the most powerful ways to curb a pandemic and prevent its recurrence. His team uses artificial intelligence to examine fragments of SARS-CoV-2 to determine how they might apply to COVID-19 vaccines. These tools, says Chen, can give us a better educated guess and increase our chances of finding an effective vaccine. Meanwhile, Stanford bioengineering research engineer Stefano Rensi is examining existing drugs that can be repurposed to combat the disease. He and his team use natural language processing, protein structure prediction, and biophysics to identify potential drugs. According to preliminary results, the team has classified several candidates, including one undergoing clinical testing in Japan.

Read the rest here:
Artificial Intelligence and COVID-19: How Technology Can Understand, Track, and Improve Health Outcomes - Stanford University News

IBM Research releases a new set of cloud- and artificial intelligence-based COVID-19 resources – TechRepublic

Access to the online databases is free to qualified researchers and medical experts to help them identify a potential treatment for the novel coronavirus.

IBM Research is making multiple free resources available to help healthcare researchers, doctors, and scientists around the world accelerate COVID-19 drug discovery. The resources can help with everything from gathering insights and applying the latest virus genomic information to identifying potential targets for treatments and creating new drug molecule candidates, the company said in a statement.

Though some of the resources are still in exploratory stages, IBM is giving access to qualified researchers at no charge to aid the international scientific investigation of COVID-19. The announcement follows IBM's launch of the US COVID-19 High Performance Computing Consortium, which is harnessing massive computing power in the effort to help confront the coronavirus, the company said.

Healthcare agencies and governments around the world have quickly amassed medical and other relevant data about the pandemic, and there are already vast troves of medical research that could prove relevant to COVID-19, IBM said. "Yet, as with any large volume of disparate data sources, it is difficult to efficiently aggregate and analyze that data in ways that can yield scientific insights," the company said.

To help researchers access structured and unstructured data quickly, IBM has offered a cloud-based AI research resource that the company said has been trained on a corpus of thousands of scientific papers contained in the COVID-19 Open Research Dataset (CORD-19), prepared by the White House and a coalition of research groups, as well as licensed databases from DrugBank, ClinicalTrials.gov and GenBank.

"This tool uses our advanced AI and allows researchers to pose specific queries to the collections of papers and to extract critical COVID-19 knowledge quickly," the company said. However, access to this resource will be granted only to qualified researchers, IBM said.

The traditional drug discovery pipeline relies on a library of compounds that are screened, improved, and tested to determine safety and efficacy, IBM noted.

"In dealing with new pathogens such as SARS-CoV-2, there is the potential to enhance the compound libraries with additional novel compounds," the company said. "To help address this need, IBM Research has recently created a new, AI-generative framework which can rapidly identify novel peptides, proteins, drug candidates and materials."

This AI technology has been applied against three COVID-19 targets to identify 3,000 new small molecules as potential COVID-19 therapeutic candidates, the company said. IBM is releasing these molecules under an open license, and researchers can study them via a new interactive molecular explorer tool to understand their characteristics and relationship to COVID-19 and identify candidates that might have desirable properties to be further pursued in drug development.

To streamline efforts to identify new treatments for COVID-19, IBM said it is also making the IBM Functional Genomics Platform available for free for the duration of the pandemic. "Built to discover the molecular features in viral and bacterial genomes, this cloud-based repository and research tool includes genes, proteins and other molecular targets from sequenced viral and bacterial organisms in one place with connections pre-computed to help accelerate discovery of molecular targets required for drug design, test development and treatment," IBM said.

Select IBM collaborators from government agencies, academic institutions and other organizations already use this platform for bacterial genomic study, according to IBM. Now, those working on COVID-19 can request the IBM Functional Genomics Platform interface to explore the genomic features of the virus.

Clinicians and healthcare professionals on the frontlines of care will also have free access to hundreds of pieces of evidence-based, curated COVID-19 and infectious disease content from IBM Micromedex and EBSCO DynaMed, the company said.

These two decision support solutions will give users access to drug and disease information in a single and comprehensive search, according to IBM. Clinicians can also provide patients with consumer-friendly education handouts with relevant, actionable medical information, the company said.

IBM's Micromedex online reference databases provide medication information that is used by more than 4,500 hospitals and health systems worldwide, according to IBM.

"The scientific community is working hard to make important new discoveries relevant to the treatment of COVID-19, and we're hopeful that releasing these novel tools will help accelerate this global effort," the company said. "This work also outlines our long-term vision for the future of accelerated discovery, where multi-disciplinary scientists and clinicians work together to rapidly and effectively create next generation therapeutics, aided by novel AI-powered technologies."


Read more from the original source:
IBM Research releases a new set of cloud- and artificial intelligence-based COVID-19 resources - TechRepublic

Leveraging Artificial Intelligence to Enhance the Radiologist and Patient Experience – Imaging Technology News

A recent study earlier this year in the journal Nature, which included researchers from Google Health London, demonstrated that artificial intelligence (AI) technology outperformed radiologists in diagnosing breast cancer on mammograms. This study is the latest to fuel ongoing speculation in the radiology industry that AI could potentially replace radiologists. However, this notion is simply sensationalism.

Consider the invention of autopilot. Despite its existence, passengers still rely on pilots, in conjunction with autopilot technology, to travel. Similarly, radiologists can combine their years of medical knowledge and personal patient relationships with AI technology to improve the patient and clinician experience. To examine this in greater detail, consider the scenarios in which AI is making, or can make, a positive impact.

Measuring a woman's breast density is critical in assessing her risk for developing breast cancer, as women with very dense breasts are four to five times more likely to develop breast cancer than women with less dense breasts.1,2 However, as radiologists know, very dense breast tissue can create a masking effect on a traditional 2-D image, since the glandular tissue color matches that of cancer. As a result, a woman's breast density classification can influence the type of breast screening exam she should get. For example, digital breast tomosynthesis (DBT) technology has proven superior for all women, including those with dense breasts.

Categorizing density, though, has traditionally been a subjective process: radiologists must manually view the breast images and make a determination, and in some cases two radiologists may disagree on a classification. This is where AI technology can make a positive impact. Through a collection of images in a database and consistent algorithms, AI technology can help unify breast density classification, especially for images teetering between a B and C BI-RADS score.
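
As a deliberately simplified, hypothetical sketch of what consistent algorithmic scoring can look like, the snippet below estimates the fraction of dense (bright) tissue in an image and maps it onto the BI-RADS density categories. The thresholds and the synthetic image are invented for illustration and are not a validated clinical algorithm.

```python
# Hypothetical illustration of consistent, rule-based density scoring.
# Thresholds are invented; real products use validated, learned models.
import numpy as np

def bi_rads_density(image, dense_threshold=0.6):
    """image: 2-D array of pixel intensities normalized to [0, 1]."""
    breast = image[image > 0.05]                  # crude mask to ignore background
    percent_dense = np.mean(breast > dense_threshold)
    if percent_dense < 0.25:
        return "A (almost entirely fatty)"
    if percent_dense < 0.50:
        return "B (scattered fibroglandular density)"
    if percent_dense < 0.75:
        return "C (heterogeneously dense)"
    return "D (extremely dense)"

# Synthetic "mammogram": mostly fatty tissue with one patch of dense tissue.
img = np.clip(np.random.default_rng(0).normal(0.3, 0.1, (128, 128)), 0, 1)
img[40:80, 40:80] = 0.8
print(bi_rads_density(img))
```

Because the same thresholds are applied to every image, repeated readings of the same study land in the same category, which is the consistency benefit described above; the radiologist still decides what that category means for the individual patient.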

While AI technology may offer the potential to provide more consistent BI-RADS scores, the role of the radiologist is still very necessary: it's the radiologist who knows the patient's full profile, which could impact clinical care. For example, this can include everything from other risk factors the patient may have, such as a family history of breast cancer, to personal beliefs about various screening options, all of which are external factors that could influence how to manage a particular patient's journey of care.

In addition to helping assist with breast density classification, AI technology can also help improve workflow for radiologists, which can, in turn, impact patient care. Although it is clinically proven to detect more invasive breast cancers, DBT technology produces a much larger amount of data and larger data files compared to 2-D mammography, creating workflow challenges for radiologists. However, AI technology now exists that can help reduce reading time for radiologists by identifying the critical parts of 3-D data worth preserving. The technology can then cut down on the number of images to read while maintaining image quality. The AI technology does not take over the radiologist's entire role of reading the images and providing a diagnosis to patients; it simply calls to their attention the higher-risk images and cases that require urgent attention, allowing radiologists to prioritize cases in need of more serious and immediate scrutiny.

There are many more challenges that radiologists face today in which AI technology can potentially make an impact in the future. For example, the length of time between a woman's screening and the delivery of her results could use improvement, especially since that waiting period can elicit very high emotions. The important thing to realize for now, though, is that AI technology plays an important and positive role in radiology today, and the best outcomes will occur when radiologists and AI technology are not mutually exclusive but rather work in practice together.

Samir Parikh is the global vice president of research and development for Hologic. In this role, he is responsible for leading and driving innovative advanced solutions across the continuum of care to drive sustainable growth of the breast and skeletal health division.

References:

1. Boyd NF, Guo H, Martin LJ, et al. Mammographic density and the risk and detection of breast cancer. N Engl J Med. 356(3):227-36, 2007.

2. Yaghjyan L, Colditz GA, Collins LC, et al. Mammographic breast density and subsequent risk of breast cancer in postmenopausal women according to tumor characteristics. J Natl Cancer Inst. 103(15):1179-89, 2011.

Read the rest here:
Leveraging Artificial Intelligence to Enhance the Radiologist and Patient Experience - Imaging Technology News