#SocialSec Hot takes on this week’s biggest cybersecurity news (Dec 20) – The Daily Swig

A major breach of patient data in Canada; Snowden's book hits the courtroom; and the best hacker films to watch over the holidays

A major data breach took over social media feeds this week after news broke that a Canadian medical testing lab had fallen victim to a cyber-attack.

LifeLabs, the impacted healthcare company, said on Tuesday that approximately 15 million of its customers were potentially affected by the incident, which occurred when an unknown actor gained unauthorized access to one of its systems.

Customer information including names, addresses, emails, logins, passwords, dates of birth, health card numbers, and lab test results is potentially implicated, the company said, adding that it had done its due diligence by informing both the authorities and Canada's privacy commissioners.

"Any customer who is concerned about this incident can receive one free year of protection that includes dark web monitoring and identity theft insurance," Charles Brown, president and CEO of LifeLabs, said in the statement.

While the investigation still appears to be underway, rumours have circulated that LifeLabs may have been subject to a ransomware attack, with its CEO additionally stating that protection measures included retrieving data by making a payment.

Infosec marketing through fear, uncertainty, and doubt (FUD) is at least as old as the web itself, if not older.

One example of FUD is to suggest that software vulnerabilities might enable cyber-terrorism, at best a poorly defined term and one that is arguably distasteful because it compares the victims of bombings and knifing rampages with those suffering from hacked PCs or smartphones.

Journalist Joseph Cox politely called out Check Point for equating a recently discovered WhatsApp vulnerability to cyber-terrorism.

Rather than beat a tactical retreat, a PR rep for the firm doubled down on the analogy.

Cox reports that multiple researchers and other non-marketing staff from Check Point quickly got in touch with him to distance themselves from the PR hyperbole.

And in other news, a judge has ruled that Edward Snowden, the NSA contractor turned whistleblower, will have to give the proceeds of his recently published memoir, Permanent Record, to the US government.

The ruling on Tuesday stated that Snowden, who was charged under the Espionage Act in 2013 for leaking confidential files to the world's press, had violated his agreement with US intelligence agencies, which requires a review of any material prior to publication.

"The terms of the CIA Secrecy Agreements further provide that Snowden forfeits any proceeds from disclosures that breach the Agreements," AP reports US District Judge Liam O'Grady as saying.

"These terms continue to apply to Snowden."

Defense lawyers argued that the book would not have received a fair review, and are currently looking at ways to appeal.

Permanent Record, an autobiography depicting Snowden's time within the military industrial complex, was published in September. The ruling this week has not impacted the book's distribution.

As the winter nights close in, what better diversion for infosec folk than to enjoy a hacking-themed movie?

For your edification and entertainment, The Daily Swig has put together a feature offering a rundown of The Best Hacking Films of All Time.

Our list includes some left-field suggestions, including a TV show and a documentary, as well as old school and more recent favorites.

Rather than attempting to rank these films ourselves, we ran a Twitter poll. Naturally, the resulting selections and ranking did not please everyone.

Additional reporting by John Leyden.


10 tech trends that shaped the 2010s – Pew Research Center

The tech landscape has changed dramatically over the past decade, both in the United States and around the world. There have been notable increases in the use of social media and online platforms (including YouTube and Facebook) and technologies (like the internet, cellphones and smartphones), in some cases leading to near-saturation levels of use among major segments of the population. But digital tech also faced significant backlash in the 2010s.

Here are 10 of the top tech-related changes that Pew Research Center has studied over the past decade:

1. Social media sites have emerged as a go-to platform for connecting with others, finding news and engaging politically. When the Center first asked U.S. adults if they ever use a social media site in 2005, just 5% said they did. Today, the share is 72%, according to a survey in early 2019.

Social media has also taken hold around the world. The Center's spring 2017 global survey conducted in 17 advanced and 19 emerging economies found that a median of 53% of adults across emerging and developing countries use social media.

In the U.S. and around the world, younger adults are the most likely age group to use social media. For example, nine-in-ten Americans ages 18 to 29 report ever using a social media site, compared with 40% of those ages 65 and older.

In terms of specific platforms, YouTube and Facebook are the most widely used online platforms among U.S. adults, with roughly seven-in-ten Americans saying they use each site. The shares of adults who use Instagram and Snapchat are much smaller, but these platforms are especially popular with younger Americans.

2. Around the world and in the U.S., social media has become a key tool for activists, as well as those aligned against them. The decade began with the Arab Spring and ended with protesters in Hong Kong and elsewhere using social media to promote and organize their causes. In some cases, governments fought back by shutting down the internet, while opponents of some activists mounted social media campaigns of their own.

In the U.S., social media played a role in major social movements such as #MeToo and #BlackLivesMatter. For example, a Pew Research Center analysis of publicly available English language tweets found that the #MeToo hashtag had been used more than 19 million times on Twitter from Oct. 15, 2017 (when actress Alyssa Milano tweeted urging victims of sexual harassment to reply "me too") through Sept. 30, 2018.

Still, Americans have expressed mixed views about the impact social media has on the broader political environment. Roughly two-thirds of Americans (64%) say the statement "social media helps give a voice to underrepresented groups" describes these sites very or somewhat well, a 2018 survey found. At the same time, 77% believe these platforms distract people from issues that are truly important, and 71% agree with the statement "social media makes people think they're making a difference when they really aren't."

3. Smartphones have altered the way many Americans go online. One of the biggest digital trends of the decade has been the steady rise of mobile connectivity. Smartphone adoption has more than doubled since the Center began surveying on this topic in 2011. Then, 35% of U.S. adults reported owning a smartphone of some kind, a share that has risen to 81% today.

Teens have also become much more likely to use smartphones: More than nine-in-ten (95%) teens ages 13 to 17 report owning or having access to a smartphone, according to a 2018 survey.

Adults are increasingly likely to name their smartphone as the primary way of going online. Today, 37% of U.S. adults say they mostly use a smartphone to access the internet, up from 19% in 2013.

The 2010s, meanwhile, were also the decade that saw the advent of tablet computers, which are now used by around half (52%) of U.S. adults.

4. Growth in mobile and social media use has sparked debates about the impact of screen time on America's youth and others. More than half of teens (54%) believe they spend too much time on their cellphone, while 41% say they spend too much time on social media and about one-quarter say the same about video games, a 2018 survey found. At the same time, about half or more of teens say they have cut back on the amount of time they spend on their cellphones (52%), and similar shares say they have tried to limit their use of social media (57%) and video games (58%).

Still, teens are not the only group who struggle with balancing their use of digital technology with other aspects of their lives. Some 36% of parents of teens say they themselves spend too much time on their cellphone, while a similar share (39%) say they at least sometimes lose focus at work because they're checking their cellphone.

5. Data privacy and surveillance have become major concerns in the post-Snowden era. In June 2013, then-National Security Agency contractor Edward Snowden leaked information showing that the NSA had conducted widespread surveillance of Americans' online and phone communications. In the aftermath of the revelations, about half of Americans (49%) said the release of the classified information served the public interest, while 44% said it harmed the public interest, according to a 2013 survey.

In the years following the leaks, there have been high-profile commercial and government data breaches, as well as revelations about how firms and governments exploit social media profiles and other data sources to target users. Surveys have consistently shown that these issues have prompted significant public concern about people's personal data, as well as the public's lack of confidence that companies can and will keep their data safe. For instance, the majority of Americans now say that they feel they have very little or no control over the data collected about them by the government (84%), while roughly two-thirds (64%) report that they feel at least somewhat concerned about how the government is using the data it collects about them.

6. Tech platforms have given rise to a gig economy. Mobile technology has helped create new businesses and jobs, while at the same time sparking debate about regulating companies that provide services that can be ordered by apps. Ride-hailing is one of the most well-documented examples of growth in the gig economy, and more Americans are using this kind of service: As of fall 2018, 36% of U.S. adults said they had ever used a ride-hailing service such as Uber or Lyft, up from 15% in 2015. In addition to car services, the gig economy has spawned businesses ranging from home sharing to online marketplaces for homemade goods.

7. Online harassment has become a fairly common feature of online life, both for teens and adults. Roughly six-in-ten U.S. teens (59%) say they have been bullied or harassed online, with offensive name-calling being the most common type of harassment they have encountered, according to a 2018 survey of those ages 13 to 17. A similar share of teens (63%) view online harassment as a major problem for people their age.

Many adults also report being the target of some form of abusive behavior online. Some 41% of adults have experienced some form of online harassment, as measured in a 2017 survey.

8. Made-up news and misinformation has sparked growing concern. The lead-up to the 2016 U.S. presidential election brought to the surface concerns around misinformation and its ability to affect the democratic process. Half of Americans believe made-up news and misinformation is a very big problem for the country today, a larger share than say the same of terrorism, illegal immigration, sexism and racism, according to a 2019 survey. Some 68% of U.S. adults say made-up news greatly impacts Americans' confidence in government institutions.

The challenge of navigating the new information environment was reflected in a 2018 survey that measured the public's ability to identify five factual statements and five opinion statements. A small share of Americans were able to correctly classify all 10 statements. About a third (35%) were able to correctly identify all five opinion statements, while around a quarter (26%) were able to correctly identify all five factual statements. Americans with high political awareness, those who are very digitally savvy and those who have high levels of trust in the news media were able to more accurately identify news-related statements as factual or opinion.

9. A majority of Americans see gender discrimination as a problem in the tech industry. Tech companies have faced criticism for their hiring practices and work cultures, including reports of discrimination on the basis of race, ethnicity and gender. A majority of Americans (73%) say discrimination against women is a problem in the tech industry, with 37% citing it as a major problem, according to a summer 2017 survey. When it comes to discrimination against black and Hispanic Americans in tech (two groups that are underrepresented in the industry), roughly two-thirds of Americans (68%) say this is a problem, and 31% say it's a major problem, according to the same survey.

10. Americans' views about tech companies have turned far less positive in recent years. Controversies related to digital privacy, made-up news, harassment and other issues may have taken their toll on public attitudes about tech companies. The share of Americans who say these companies are having a positive effect on the way things are going in the country has declined sharply since 2015, according to a July 2019 survey. Four years ago, the majority of U.S. adults (71%) said these companies had a positive impact on the country, compared with 50% today.

In a survey in summer 2018, roughly seven-in-ten Americans (72%) said it is likely that social media platforms actively censor political views that those companies find objectionable. Around half (51%) of the public said major tech companies should be regulated more than they are now.


What is Machine Learning? A definition – Expert System

Machine learning is an application of artificial intelligence (AI) that provides systems with the ability to automatically learn and improve from experience without being explicitly programmed. Machine learning focuses on the development of computer programs that can access data and use it to learn for themselves.

The process of learning begins with observations or data, such as examples, direct experience, or instruction, in order to look for patterns in data and make better decisions in the future based on the examples that we provide. The primary aim is to allow computers to learn automatically without human intervention or assistance and adjust actions accordingly.

Machine learning algorithms are often categorized as supervised or unsupervised.
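To make the distinction concrete, here is a minimal illustrative sketch in Python using scikit-learn. The synthetic dataset, model choices, and parameters are arbitrary assumptions for the example, not anything prescribed by the article.

```python
# Minimal sketch of the supervised/unsupervised split using scikit-learn.
# The data here is synthetic and purely illustrative.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression

X, y = make_blobs(n_samples=200, centers=2, random_state=0)  # toy features and labels

# Supervised: the algorithm learns from labelled examples (X paired with y).
clf = LogisticRegression().fit(X, y)
print("predicted label:", clf.predict(X[:1]))

# Unsupervised: the algorithm looks for structure in X alone, with no labels.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("cluster assignments:", km.labels_[:5])
```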

Machine learning enables analysis of massive quantities of data. While it generally delivers faster, more accurate results in order to identify profitable opportunities or dangerous risks, it may also require additional time and resources to train it properly. Combining machine learning with AI and cognitive technologies can make it even more effective in processing large volumes of information.


Data science and machine learning: what to learn in 2020 – Packt Hub

It's hard to keep up with the pace of change in the data science and machine learning fields. And when you're under pressure to deliver projects, learning new skills and technologies might be the last thing on your mind. But if you don't have at least one eye on what you need to learn next, you run the risk of falling behind. In turn, this means you miss out on new solutions and new opportunities to drive change: you might miss the chance to do things differently.

That's why we want to make it easy for you with this quick list of what you need to watch out for and learn in 2020.

TensorFlow remains the most popular deep learning framework in the world. With TensorFlow 2.0, the Google-based development team behind it have attempted to rectify a number of issues and improve overall performance. Most notably, some of the problems around usability have been addressed, which should help the project's continued growth and perhaps even lower the barrier to entry.
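For readers who have not yet tried the 2.0 release, the Keras-first, eager-by-default workflow it promotes can be sketched in a few lines. The tiny synthetic dataset and layer sizes below are arbitrary choices for illustration only.

```python
# Minimal TensorFlow 2.x sketch: a toy binary classifier built with the Keras API.
import numpy as np
import tensorflow as tf

x = np.random.rand(256, 4).astype("float32")   # synthetic features
y = (x.sum(axis=1) > 2.0).astype("float32")    # synthetic binary labels

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=3, batch_size=32, verbose=0)
print(model.predict(x[:1], verbose=0))
```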

Relatedly, TensorFlow.js is proving that the wider TensorFlow ecosystem is incredibly healthy. It will be interesting to see what projects emerge in 2020; it might even bring JavaScript web developers into the machine learning fold.

Explore Packt's huge range of TensorFlow eBooks and videos on the store.

PyTorch hasn't quite managed to topple TensorFlow from its perch, but it's nevertheless growing quickly. Easier to use and more accessible than TensorFlow, PyTorch is probably your best bet if you want to start building deep learning systems quickly.
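By way of comparison, here is roughly the same toy classifier written with PyTorch's imperative API. Again, the synthetic data, network size, and training settings are placeholder assumptions rather than recommendations.

```python
# Minimal PyTorch sketch: the same kind of toy binary classifier, written imperatively.
import torch
import torch.nn as nn

x = torch.rand(256, 4)                          # synthetic features
y = (x.sum(dim=1) > 2.0).float().unsqueeze(1)   # synthetic binary labels

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1), nn.Sigmoid())
loss_fn = nn.BCELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)

for _ in range(100):                            # a short full-batch training loop
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

print(model(x[:1]))
```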

Search PyTorch eBooks and videos on the Packt store.

When it comes to data analysis, one of the most pressing issues is to speed up pipelines. This is, of course, notoriously difficult: even in organizations that do their best to be agile and fast, it's not uncommon to find that their data is fragmented and diffuse, with little alignment across teams.

One of the opportunities for changing this is cloud. When used effectively, cloud platforms can dramatically speed up analytics pipelines and make it much easier for data scientists and analysts to deliver insights quickly. This might mean that we need increased collaboration between data professionals, engineers, and architects, but if we're to really deliver on the data at our disposal, then this shift could be massive.

Learn how to perform analytics on the cloud with Cloud Analytics with Microsoft Azure.

While cloud might help to smooth some of the friction that exists in our organizations when it comes to data analytics, there's no substitute for strong and clear leadership. The split between the engineering side of data and the more scientific or interpretive aspect has been noted, which means that there is going to be a real demand for people that have a strong understanding of what data can do, what it shows, and what it means in terms of action.

Indeed, the article just linked to also mentions that there is likely to be an increasing need for executive level understanding. That means data scientists have the opportunity to take a more senior role inside their organizations, by either working closely with execs or even moving up to that level.

Learn how to build and manage a data science team and initiative that delivers with Managing Data Science.

In the excitement about the opportunities of machine learning and artificial intelligence, it's possible that we've lost sight of some of the fundamentals: the algorithms. Indeed, given the conversation around algorithmic bias and unintended consequences, it certainly makes sense to place renewed attention on the algorithms that lie right at the center of our work.

Even if you're not an experienced data analyst or data scientist and are just a beginner, it's just as important to dive deep into algorithms. This will give you a robust foundation for everything else you do. And while statistics and mathematics will feel a long way from the supposed sexiness of data science, carefully considering what role they play will ensure that the models you build are accurate and perform as they should.

Get stuck into algorithms with Data Science Algorithms in a Week.

Computer vision and Natural Language Processing are two of the most exciting aspects of modern machine learning and artificial intelligence. Both can be used for analytics projects, but they also have applications in real world digital products. Indeed, with augmented reality and conversational UI becoming more and more common, businesses need to be thinking very carefully about whether this could give them an edge in how they interact with customers.

These sorts of innovations can be driven from many different departments but technologists and data professionals should be seizing the opportunity to lead the way on how innovation can transform customer relationships.

For more technology eBooks and videos to help you prepare for 2020, head to the Packt store.


Kubernetes and containers are the perfect fit for machine learning – JAXenter

Machine learning is permeating every corner of the industry, from fraud detection to supply chain optimization to personalizing the customer experience. McKinsey has found that nearly half of enterprises have infused AI into at least one of their standard business processes, and Gartner says seven out of 10 enterprises will be using some form of AI by 2021. That's a short two years away.

But for businesses to take advantage of AI, they need an infrastructure that allows data scientists to experiment and iterate with different data sets, algorithms, and computing environments without slowing them down or placing a heavy burden on the IT department. That means they need a simple, automated way to quickly deploy code in a repeatable manner across local and cloud environments and to connect to the data sources they need.

A cloud-native environment built on containers is the most effective and efficient way to support this type of rapid development, evidenced by announcements from big vendors like Google and HPE, which have each released new software and services to enable machine learning and deep learning in containers. Much as containers can speed the deployment of enterprise applications by packaging the code in a wrapper along with its runtime requirements, these same qualities make containers highly practical for machine learning.

Broadly speaking, there are three phases of an AI project where containers are beneficial: exploration, training, and deployment. Here's a look at what each involves and how containers can assist with each by reducing costs and simplifying deployment, allowing innovation to flourish.

To build an AI model, data scientists experiment with different data sets and machine learning algorithms to find the right data and algorithms to predict outcomes with maximum accuracy and efficiency. There are various libraries and frameworks for creating machine learning models for different problem types and industries. Speed of iteration and the ability to run tests in parallel is essential for data teams as they try to uncover new revenue streams and meet business goals in a reasonable timeframe.

Containers provide a way to package up these libraries for specific domains, point to the right data source and deploy algorithms in a consistent fashion. That way, data scientists have an isolated environment they can customize for their exploration, without needing IT to manage multiple sets of libraries and frameworks in a shared environment.

SEE ALSO: Unleash chaos engineering: Kubethanos kills half your Kubernetes pods

Once an AI model has been built, it needs to be trained against large volumes of data across different platforms to maximize accuracy and minimize resource utilization. Training is highly compute-intensive, and containers make it easy to scale workloads up and down across multiple compute nodes quickly. A scheduler identifies the optimal node based on available resources and other factors.

A distributed cloud environment also allows compute and storage to be managed separately, which cuts storage utilization and therefore costs. Traditionally, compute and storage were tightly coupled, but containers, along with a modern data management plane, allow compute to be scaled independently and moved close to the data, wherever it resides.

With compute and storage separate, data scientists can run their models on different types of hardware, such as GPUs and specialized processors, to determine which model will provide the greatest accuracy and efficiency. They can also work to incrementally improve accuracy by adjusting weightings, biases and other parameters.

In production, a machine learning application will often combine several models that serve different purposes. One model might summarize the text in a social post, for example, while another assesses sentiment. Containers allow each model to be deployed as a microservice: an independent, lightweight program that developers can reuse in other applications.

Microservices also make it easier to deploy models in parallel in different production environments for purposes such as A/B testing, and the smaller programs allow models to be updated independently from the larger application, speeding release times and reducing the room for error.
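As a hypothetical sketch of what one such model-serving microservice might look like, a single model can be wrapped in a small HTTP prediction service and packaged into its own container image. Flask is an arbitrary choice here, and the model file name and request format are made up for the example.

```python
# Hypothetical prediction microservice for one model (illustrative only).
import pickle

from flask import Flask, jsonify, request

app = Flask(__name__)
with open("sentiment_model.pkl", "rb") as f:    # placeholder model artifact
    model = pickle.load(f)

@app.route("/predict", methods=["POST"])
def predict():
    features = request.get_json()["features"]   # e.g. a list of numeric features
    return jsonify({"prediction": model.predict([features]).tolist()})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```

Built into a container image, a service like this can be versioned, scaled, and A/B-tested independently of the other models in the application.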

SEE ALSO: Artificial intelligence & machine learning: The brain of a smart city

At each stage of the process, containers allow data teams to explore, test and improve their machine learning programs more quickly and with minimal support from IT. Containers provide a portable and consistent environment that can be deployed rapidly in different environments to maximize the accuracy, performance, and efficiency of machine learning applications.

The cloud-native model has revolutionized how enterprise applications are deployed and managed by speeding innovation and reducing costs. It's time to bring these same advantages to machine learning and other forms of AI so that businesses can better serve their customers and compete more effectively.


The Machines Are Learning, and So Are the Students – The New York Times

Riiid claims students can increase their scores by 20 percent or more with just 20 hours of study. It has already incorporated machine-learning algorithms into its program to prepare students for English-language proficiency tests and has introduced test prep programs for the SAT. It expects to enter the United States in 2020.

Still more transformational applications are being developed that could revolutionize education altogether. Acuitus, a Silicon Valley start-up, has drawn on lessons learned over the past 50 years in education (cognitive psychology, social psychology, computer science, linguistics and artificial intelligence) to create a digital tutor that it claims can train experts in months rather than years.

Acuitus's system was originally funded by the Defense Department's Defense Advanced Research Projects Agency for training Navy information technology specialists. John Newkirk, the company's co-founder and chief executive, said Acuitus focused on teaching concepts and understanding.

The company has taught nearly 1,000 students with its course on information technology and is in the prototype stage for a system that will teach algebra. Dr. Newkirk said the underlying A.I. technology was content-agnostic and could be used to teach the full range of STEM subjects.

Dr. Newkirk likens A.I.-powered education today to the Wright brothers' early exhibition flights: proof that it can be done, but far from what it will be a decade or two from now.

The world will still need schools, classrooms and teachers to motivate students and to teach social skills, teamwork and soft subjects like art, music and sports. The challenge for A.I.-aided learning, some people say, is not the technology, but bureaucratic barriers that protect the status quo.

"There are gatekeepers at every step," said Dr. Sejnowski, who together with Barbara Oakley, a computer-science engineer at Michigan's Oakland University, created a massive open online course, or MOOC, called Learning How to Learn.

He said that by using machine-learning systems and the internet, new education technology would bypass the gatekeepers and go directly to students in their homes. "Parents are figuring out that they can get much better educational lessons for their kids through the internet than they're getting at school," he said.

Craig S. Smith is a former correspondent for The Times and hosts the podcast Eye on A.I.


Dotscience Forms Partnerships to Strengthen Machine Learning – Database Trends and Applications

Dotscience, a provider of DevOps for Machine Learning (MLOps) solutions, is forming partnerships with GitLab and Grafana Labs, along with strengthening integrations with several platforms and cloud providers.

The company is deepening integrations to include Scikit-learn, H2O.ai and TensorFlow; expanding multi-cloud support with Amazon Web Services (AWS) and Microsoft Azure; and entering a joint collaboration with global enterprises to develop an industry benchmark for helping enterprises get maximum ROI out of their AI initiatives.

"MLOps is poised to dominate the enterprise AI conversation in 2020, as it will directly address the challenges enterprises face when looking to create business value with AI," said Luke Marsden, CEO and founder at Dotscience. "Through new partnerships, expanded multi-cloud support, and collaborations with MLOps pioneers at global organizations in the Fortune 500, we are setting the bar for MLOps best practices for building production ML pipelines today."

Grafana Labs, the open observability platform, and Dotscience are partnering to deliver observability for ML in production.

With Dotscience, ML teams can statistically monitor the behavior of ML models in production on unlabelled production data by analyzing the statistical distribution of predictions.

The partnership simplifies the deployment of ML models to Kubernetes and adds the ability to set up monitoring dashboards for deployed ML models using cloud-native tools including Grafana and Prometheus, which reduces the time spent on these tasks from weeks to seconds.
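Dotscience does not spell out its monitoring internals here, but the underlying idea of watching the statistical distribution of predictions can be illustrated with a short, generic Python sketch. The two-sample test, threshold, and stand-in data below are assumptions for the example, not the product's actual method.

```python
# Illustrative sketch of distribution-based model monitoring (not Dotscience's API):
# compare live prediction scores against a reference sample and flag possible drift.
import numpy as np
from scipy.stats import ks_2samp

reference = np.random.beta(2, 5, size=5000)  # stand-in: scores seen at validation time
live = np.random.beta(2, 3, size=1000)       # stand-in: scores from production traffic

statistic, p_value = ks_2samp(reference, live)
if p_value < 0.01:                           # threshold chosen arbitrarily for the example
    print(f"Possible drift detected (KS statistic={statistic:.3f}, p={p_value:.4f})")
else:
    print("Prediction distribution looks consistent with the reference sample")
```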

As a GitLab Technology Partner, Dotscience is extending the use of its platform for collaborative, end-to-end ML data and model management to the more than 100,000 organizations and developers actively using GitLab as their DevOps platform.

Dotscience is now available on the AWS Marketplace, enabling AWS customers to easily and quickly deploy Dotscience directly through AWS Marketplace's 1-Click Deployment, and through Microsoft Azure.

Dotscience has expanded the frameworks in which data scientists can deploy tested and trained ML models into production and statistically monitor the productionized models, to include Scikit-learn, H2O.ai and TensorFlow.

These new integrations make Dotscience's recently added deploy-and-monitor platform advancements (the easiest way to deploy and monitor ML models on Kubernetes clusters) available to data scientists using a greater range of ML frameworks.

For more information about these partnerships and updates, visit https://dotscience.com/.


QStride to be acquired by India-based blockchain, analytics, machine learning consultancy – Staffing Industry Analysts

QStride Inc., a Detroit-based provider of IT staffing, struck a deal to be acquired by Drisla Inc., a provider of consulting in blockchain, analytics, machine learning and artificial intelligence.

Drisla is based in Hyderabad, India, with a US office in Princeton, New Jersey.

In the deal, Drisla will acquire QStride's assets and customer contacts; it intends to fund the transaction with cash and debt financing.

QStride founder and CEO Shane Gianino said the deal represents an important building block for the company. "It will also allow us to better serve our customers in the tristate area of New York, New Jersey, Connecticut, and Pennsylvania, where we already have a client base," Gianino said.

Gianino will remain with the company as operating CEO and equity shareholder and the company will continue operating under the QStride brand.

QStride was founded in April 2012 in Troy, Michigan, and reported it reached revenue of $1.5 million the following year.

"Shane and his team have done a great job over the years positioning QStride for long-term success," said Pavan Kuchana, Drisla founder and CEO. "Their niche in business intelligence, analytics, data warehousing and software engineering align very well with our expertise in innovative technology offerings such as AI, ML, and blockchain."


Machine Learning Answers: If BlackBerry Stock Drops 10% A Week, What's The Chance It'll Recoup Its Losses In A Month? – Forbes

BlackBerry Limited Chairman & CEO John Chen, right, watches as company employees take pictures with their phones after Chen rang the opening bell to mark his company's stock transfer from Nasdaq to the New York Stock Exchange, Monday, Oct. 16, 2017. (AP Photo/Richard Drew)

The markets have largely remained divided on BlackBerry stock. While the company's revenues have declined sharply over the last few years, driven by its exit from the smartphone business and the decline of its lucrative BlackBerry services business, it has been making multiple bets on high-growth areas ranging from cybersecurity to automotive software, although they have yet to pay off. This uncertainty relating to BlackBerry's future has caused the stock to remain very volatile.

Considering the significant price movements, we began with a simple question that investors could be asking about BlackBerry's stock: given a certain drop or rise, say a 10% drop in a week, what should we expect for the next week? Is it very likely that the stock will recover the next week? What about the next month or a quarter? You can test a variety of scenarios on the Trefis Machine Learning Engine to calculate, if BlackBerry stock dropped, what the chance is that it'll rise.

For example, if BlackBerry stock drops 10% or more in a week (5 trading days), there is a 27% chance it'll recover 10% or more over the next month (about 20 trading days). On the other hand, after a 5% drop over a week (5 trading days), the Trefis machine learning engine says the chances of an additional 5% drop over the next month are about 36%. This is quite significant, and helpful to know for someone trying to recover from a loss. Knowing what to expect for almost any scenario is powerful. It can help you avoid rash moves.
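Trefis does not publish how its engine computes these figures, but the kind of conditional frequency being quoted can be approximated from historical prices with a naive script like the one below. The CSV file name, column name, window lengths, and thresholds are hypothetical choices for illustration, not the Trefis methodology.

```python
# Rough sketch: given a 10% drop over 5 trading days, how often did the stock then
# gain 10% or more at some point within the next 20 trading days? (Naive frequency count.)
import pandas as pd

prices = pd.read_csv("bb_daily_close.csv")["close"]   # hypothetical file of daily closes

week, month = 5, 20
drops, recoveries = 0, 0
for i in range(week, len(prices) - month):
    weekly_return = prices.iloc[i] / prices.iloc[i - week] - 1
    if weekly_return <= -0.10:                               # the stock fell 10%+ in a week
        drops += 1
        future_max = prices.iloc[i + 1:i + 1 + month].max()  # best close over the next ~month
        if future_max / prices.iloc[i] - 1 >= 0.10:          # did it rebound 10%+?
            recoveries += 1

if drops:
    print(f"Estimated chance of a 10%+ rebound within a month: {recoveries / drops:.0%}")
```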

Below, we also discuss a few scenarios and answer common investor questions:

Question 1: Does a rise in BlackBerry stock become more likely after a drop?

Answer:

The chances of a 5% rise in BlackBerry stock over the next month:

= 37% after BlackBerry stock drops by 5% in a week

versus,

= 41% after BlackBerry stock rises by 5% in a week

Question 2: What about the other way around, does a drop in BlackBerry stock become more likely after a rise?

Answer:

Consider two cases:

Case 1: BlackBerry stock drops by 5% in a week

Case 2: BlackBerry stock rises by 5% in a week

It turns out the chances of a 5% drop after Case 1 or Case 2 has occurred are actually quite similar: both are pretty close to 35%.

Question 3: Does patience pay?

Answer:

According to data and the Trefis machine learning engine's calculations, only to an extent.

Given a drop of 5% in BlackBerry stock over a week (5 trading days), while there is only about a 24% chance that BlackBerry stock will gain 5% over the subsequent week, there is a 45% chance this will happen in 6 months, and a 41% chance it'll gain 5% over a year (about 250 trading days).

[Table from Trefis showing this trend across longer waiting periods]

Question 4: What about the possibility of a drop after a rise if you wait for a while?

Answer:

After seeing a rise of 5% over 5 days, the chances of a 5% drop in BlackBerry stock are about 44% over the subsequent quarter of waiting (60 trading days). This chance increases to about 53% when the waiting period is a year (250 trading days).



What is Deep Learning? Everything you need to know – TechRadar

Although technology has come a long way in recent years, not least in terms of the immense power and resources available through cloud computing services, let alone the vast amount of data that can be allocated to cloud storage, computers and machines still can't match the power of the human brain.

What makes humans so unique is that we can learn as we go, drawing on our memories and experiences. That means taking in data from the world around us and forming ideas about how to optimally perform tasks or understand new information.

Deep learning, which is a branch of artificial intelligence, aims to replicate our ability to learn and evolve in machines. At the end of the day, deep learning allows computers to take in new information, decipher it, and produce an output, all without humans needing to be involved in the process. This field has enormous implications for the technologies of the future, including self-driving vehicles, facial recognition software, personalized medicine, and much more.

The end goal of deep learning is to teach a computer how, given a set of unstructured data, to recognize patterns. A simple example of unstructured data is an image of a real-world scene, in which things like the sky, trees, and people aren't marked for the computer by a human supervisor. An algorithm trained by deep learning should be able to identify those individual components. That is, it should be able to tell you which pixels in the image make up a person, which make up a tree, and which are part of the sky.

On a broader scale, this capacity for pattern recognition can be applied to almost anything. For example, in a self-driving car, the computer should be able to recognize a stop sign and then trigger the car to stop appropriately. In medicine, a deep learning algorithm should be able to look at a microscope image of cells and decide whether those cells are cancerous or not.

Deep learning has essentially the same goal as machine learning, which plays an increasingly large role in modern technology. However, machine learning is limited in how much data it can take in. It may be good at recognizing features in a set of images, for example, but machine learning doesn't have the capacity to adapt to a 3-dimensional scene like a self-driving car must be able to do.

Deep learning, on the other hand, offers a virtually unlimited capacity for learning that could theoretically exceed the capacity of the human brain someday. That's because of the family of algorithms that underlie deep learning, known as neural networks.

Neural networks are so-named because they essentially aim to mimic the functioning of neurons in the human brain. These networks are made up of three layers of digital neurons: the input layer, the hidden layer, and the output layer.

The input layer is a series of digital neurons that see the information the computer is being given. One neuron might fire when the color green is present in an image, for example, while another might fire when a particular shape is present. There can be thousands of input layer neurons, each firing when it sees a specific characteristic in the data.

The output layer tells the computer what to do in response to the input data. In a self-driving car, these would be the digital neurons that ultimately tell the computer to accelerate, brake, or turn.

The real magic of a neural network happens in the hidden layer. This layer takes the neuron firings from the input layer and redirects them to fire the appropriate output layer neurons. The hidden layer consists of thousands or millions of individual rows of neurons, each of which is connected to all of its neighbors within the network.

Training a deep learning model involves feeding the model an image, pattern, or situation for which the desired model output is already known. During training, each connection from one neuron to another is strengthened or weakened based on how close the network's actual output is to the intended output. If it was very close (our self-driving car stopped at the stop sign), the connections might not change much at all. But if the model result is far from the intended result, the connections between neurons are tweaked slightly.

Doing this millions of times allows the network to strengthen connections that do a good job of producing the desired model output and weakening connections that throw off the model results. The final model, then, has learned how to take in new data, recognize patterns, and produce the desired outcome based on those patterns without human supervision.
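To make that training loop concrete, here is a deliberately tiny network (one hidden layer, trained on the classic XOR toy problem) written with NumPy. The layer size, learning rate, and iteration count are arbitrary choices for the sketch; real deep learning systems use far larger networks and dedicated frameworks.

```python
# Toy feedforward network (input -> hidden -> output) trained with simple gradient
# descent on XOR. Purely illustrative of how connections are strengthened or weakened.
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)               # desired outputs (XOR)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input -> hidden connections
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output connections

for _ in range(5000):
    hidden = sigmoid(X @ W1 + b1)               # input layer -> hidden layer
    output = sigmoid(hidden @ W2 + b2)          # hidden layer -> output layer
    error = output - y                          # how far the output is from the target
    # Adjust each connection in proportion to its share of the error.
    grad_out = error * output * (1 - output)
    grad_hidden = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ grad_out
    b2 -= 0.5 * grad_out.sum(axis=0)
    W1 -= 0.5 * X.T @ grad_hidden
    b1 -= 0.5 * grad_hidden.sum(axis=0)

print(output.round(2))                          # should move toward [[0], [1], [1], [0]]
```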

Deep learning holds a lot of promise for new automated technologies. Self-driving cars are perhaps the most prominent potential use of deep learning algorithms, but there are far more applications in the business world and beyond.

For example, deep learning could have major implications for the finance industry. Banks could use deep learning to help protect your online accounts by teaching a model to determine whether your latest sign-in attempt is similar to your usual sign-ins. Or, banks can apply deep learning algorithms to better pick up on fraudulent activities like money laundering. Yet another possibility is that banks and investment firms use deep learning to predict when stock prices are about to go up or down.

Another application of deep learning technology is facial recognition. For facial recognition to work on a wide scale, the computer needs to be able to recognize you whether you get a haircut or a tan, or put on makeup. A deep learning algorithm trained on images of your face would allow facial recognition software to recognize you no matter what you look like on a given day, while keeping others out of your accounts.

Interestingly, deep learning can also help scientists predict earthquakes and other natural disasters. In earthquake-prone areas, the ground is almost always trembling a little bit. Deep learning models can be trained on what kind of shaking patterns preceded earthquakes in the past, and then sound the alarm when these same patterns are detected in the future.

As deep learning technology continues to improve, the list of potential applications is only likely to get longer and more impressive. We may be able to teach computers to recognize patterns, but human creativity will be essential in figuring out how best to put deep learning to work for society.
