Machine Learning as a Service Market: Indoor Applications Projected to be the Most Attractive Segment during 2020-2027 – Bandera County Courier

This Machine Learning as a Service report provides in-depth knowledge of the market's definition, classifications, applications, and engagements, and explains the market's drivers and restraints, derived from SWOT analysis. An analytical assessment of the competitors gives a clear picture of the most important challenges they face in the present market and in coming years. The identity of respondents is kept undisclosed, and no promotional approach is made to them while analyzing the data. The global Machine Learning as a Service market research document covers major manufacturers, suppliers, distributors, traders, customers, investors, major types, and major applications.

Request a Sample Copy of this report @

https://www.marketresearchinc.com/request-sample.php?id=16701

Geographically, the global Machine Learning as a Service market has been segmented across several regions: North America, Latin America, Asia-Pacific, Africa, and Europe. The study lists key market players in order to present a clear picture of the strategies undertaken by top companies. With in-depth analysis of market dynamics such as drivers, restraints, and global opportunities, the study offers a cogent account of the fluctuating highs and lows of the businesses. Several market parameters considered while curating the report include investors, market share, and company budgets.

Top Key Players in the Global Machine Learning as a Service Market Research Report:

Microsoft (Washington, US), Amazon Web Services (Washington, US), Hewlett Packard Enterprise (California, US), Google, Inc.

Avail 40% Discount on this report at

https://www.marketresearchinc.com/ask-for-discount.php?id=16701

In order to map the competitive business environment, the report employs market analysis methodologies such as Porter's Five Forces analysis and SWOT analysis. It scrutinizes the market dynamics responsible for driving or hampering the progress of the Machine Learning as a Service market. Additionally, the study highlights recent technological advancements and the tools adopted by various industries, and draws attention to effective sales methodologies that help grow the customer base rapidly. Insightful case studies from industry experts form an integral part of the report, as does an assessment of the bargaining power of vendors and buyers.

Inquire Before Buying:

https://www.marketresearchinc.com/enquiry-before-buying.php?id=16701

About Us

Market Research Inc is farsighted in its view and covers massive ground in global research. Local or global, we keep a close check on both markets. Trends and concurrent assessments sometimes overlap and influence the other. When we say market intelligence, we mean a deep and well-informed insight into your products, market, marketing, competitors, and customers. Market research companies are leading the way in nurturing global thought leadership. We help your product/service become the best they can with our informed approach.

Contact Us

Market Research Inc

Kevin

51 Yerba Buena Lane, Ground Suite,

Inner Sunset San Francisco, CA 94103, USA

Call Us: +1 (628) 225-1818

Write Us: sales@marketresearchinc.com

https://www.marketresearchinc.com


Quantiphi Wins Google Cloud Social Impact Partner of the Year Award – AiThority

Awarded to recognize Google Cloud partners who have made a positive impact on the world

Quantiphi, an award-winning applied artificial intelligence and data science software and services company, announced today that it has been named 2019 Social Impact Partner of the Year by Google Cloud. Quantiphi was recognized for its work with nonprofits, research institutions, and healthcare providers to leverage AI for social good.

"We are believers in the power of human acumen and technology to solve the world's toughest challenges. This award is a recognition of our mission-driven culture and our passion to apply AI for social good," said Asif Hasan, co-founder, Quantiphi. "Partnering with Google Cloud has given us the opportunity to work with the world's leading nonprofit, healthcare and research institutions, and we are truly humbled by this recognition."


"We're delighted to recognize Quantiphi's commitment to social impact," said Carolee Gearhart, Vice President, Worldwide Channel Sales at Google Cloud. "By applying its capabilities in AI and ML to important causes, Quantiphi has demonstrated how Google Cloud partners are contributing to positive change in the world."

A few initiatives that helped Quantiphi earn this recognition:


Quantiphi previously earned the Google Cloud Machine Learning Partner of the Year award twice in a row, for 2017 and 2018. It is a premier partner for Google Cloud and holds specializations in machine learning, data analytics and marketing analytics.



When Machines Design: Artificial Intelligence and the Future of Aesthetics – ArchDaily


Are machines capable of design? Though a persistent question, it is one that increasingly accompanies discussions on architecture and the future of artificial intelligence. But what exactly is AI today? As we discover more about machine learning and generative design, we begin to see that these forms of "intelligence" extend beyond repetitive tasks and simulated operations. They've come to encompass cultural production, and in turn, design itself.


When artificial intelligence was envisioned during the 1950s-60s, the goal was to teach a computer to perform a range of cognitive tasks and operations, similar to a human mind. Fast forward half a century, and AI is shaping our aesthetic choices, with automated algorithms suggesting what we should see, read, and listen to. It helps us make aesthetic decisions when we create media, from movie trailers and music albums to product and web designs. We have already felt some of the cultural effects of AI adoption, even if we aren't aware of it.

As educator and theorist Lev Manovich has explained, computers perform endless intelligent operations. "Your smartphone's keyboard gradually adapts to your typing style. Your phone may also monitor your usage of apps and adjust their work in the background to save battery. Your map app automatically calculates the fastest route, taking into account traffic conditions. There are thousands of intelligent, but not very glamorous, operations at work in phones, computers, web servers, and other parts of the IT universe." More broadly, it's useful to turn the discussion towards aesthetics and how these advancements relate to art, beauty and taste.
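Manovich's "fastest route" example is a classic shortest-path computation. As a minimal sketch (the road graph, place names, and travel times here are entirely made up, standing in for traffic-adjusted edge weights), it could look like:

```python
import heapq

def fastest_route(graph, start, goal):
    """Dijkstra's algorithm over traffic-weighted travel times (minutes)."""
    queue = [(0, start, [start])]  # (total_minutes, node, path_so_far)
    seen = set()
    while queue:
        minutes, node, path = heapq.heappop(queue)
        if node == goal:
            return minutes, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, cost in graph.get(node, {}).items():
            if neighbor not in seen:
                heapq.heappush(queue, (minutes + cost, neighbor, path + [neighbor]))
    return float("inf"), []

# Hypothetical road segments with current travel times in minutes.
roads = {
    "home": {"highway": 10, "downtown": 4},
    "highway": {"office": 8},
    "downtown": {"office": 3},
}
print(fastest_route(roads, "home", "office"))  # -> (7, ['home', 'downtown', 'office'])
```

A real map app layers live traffic data and heuristics on top, but the core "not very glamorous" operation is this kind of graph search.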

Usually defined as a set of "principles concerned with the nature and appreciation of beauty," aesthetics depend on who you are talking to. In 2018, Marcus Endicott described how, from the perspective of engineering, the traditional definition of aesthetics in computing could be termed "structural, such as an elegant proof, or beautiful diagram." A broader definition may include more abstract qualities of form and symmetry that "enhance pleasure and creative expression." In turn, as machine learning is gradually becoming more widely adopted, it is leading to what Endicott termed a neural aesthetic. This can be seen in recent artistic hacks such as DeepDream, NeuralTalk, and Stylenet.

Beyond these adaptive processes, there are other ways AI shapes cultural creation. Artificial intelligence has recently made rapid advances in the computation of art, music, poetry, and lifestyle. Manovich explains that AI has given us the option to automate our aesthetic choices (via recommendation engines), assist in certain areas of aesthetic production such as consumer photography, and automate experiences like the ads we see online. "Its use in helping to design fashion items, logos, music, TV commercials, and works in other areas of culture is already growing." But, as he concludes, human experts usually make the final decisions based on ideas and media generated by AI. And yes, the human vs. robot debate rages on.

According to The Economist, 47% of the work done by humans will have been replaced by robots by 2037, even in professions traditionally associated with a university education. The World Economic Forum estimated that between 2015 and 2020, 7.1 million jobs would be lost around the world, as "artificial intelligence, robotics, nanotechnology and other socio-economic factors replace the need for human employees." Artificial intelligence is already changing the way architecture is practiced, whether or not we believe it may replace us. As AI augments design, architects are working to explore the future of aesthetics and how we can improve the design process.

In a tech report on artificial intelligence, Building Design + Construction explored how Arup had applied a neural network to a light rail design and reduced the number of utility clashes by over 90%, saving nearly 800 hours of engineering. In the same vein, the areas of site and social research that utilize artificial intelligence have been extensively covered, and examples are generated almost daily. We know that machine-driven procedures can dramatically improve the efficiency of construction and operations, such as increasing energy performance and decreasing fabrication time and costs. The neural network application from Arup extends to this design decision-making. But the central question comes back to aesthetics and style.

Designer and Fulbright fellow Stanislas Chaillou recently created a project at Harvard utilizing machine learning to explore the future of generative design, bias and architectural style. While studying AI and its potential integration into architectural practice, Chaillou built an entire generation methodology using Generative Adversarial Neural Networks (GANs). Chaillou's project investigates the future of AI through architectural style learning, and his work illustrates the profound impact of style on the composition of floor plans.
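The article does not detail Chaillou's actual GAN architecture, so as a loose, hypothetical illustration of the adversarial structure a GAN relies on (a generator and a discriminator trained against each other), here is a deliberately tiny numpy sketch on one-dimensional data rather than floor plans; every number and parameter here is invented for the demo:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# Toy "real data": samples from N(4, 1). The generator maps noise z ~ N(0, 1)
# through a linear map g(z) = a*z + b; the discriminator is logistic: d(x) = sigmoid(w*x + c).
a, b = 1.0, 0.0   # generator parameters
w, c = 0.1, 0.0   # discriminator parameters
lr = 0.01

for step in range(2000):
    real = rng.normal(4.0, 1.0, size=32)
    fake = a * rng.normal(0.0, 1.0, size=32) + b
    z = (fake - b) / a  # recover the noise used for the generator gradient

    # Discriminator ascent on log d(real) + log(1 - d(fake)).
    dr, df = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * np.mean((1 - dr) * real - df * fake)
    c += lr * np.mean((1 - dr) - df)

    # Generator ascent on log d(fake) (the "non-saturating" objective).
    df = sigmoid(w * fake + c)
    grad_fake = (1 - df) * w
    a += lr * np.mean(grad_fake * z)
    b += lr * np.mean(grad_fake)

print(round(b, 2))  # the generator's mean drifts toward the real data's mean
```

The same adversarial loop, scaled up to convolutional networks over plan images, is the family of methods Chaillou's project builds on.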

As Chaillou summarizes, architectural styles carry implicit mechanics of space, and there are spatial consequences to choosing a given style over another. In his words, style is not an ancillary, superficial or decorative addendum; it is at the core of the composition.

Artificial intelligence and machine learning are becoming increasingly important as they shape our future. If machines can begin to understand and affect our perceptions of beauty, we should find better ways to implement these tools and processes in the design process.

Architect and researcher Valentin Soana once stated that the digital in architectural design enables new systems where architectural processes can emerge through "close collaboration between humans and machines; where technologies are used to extend capabilities and augment design and construction processes." As machines learn to design, we should work with AI to enrich our practices through aesthetic and creative ideation. More than productivity gains, we can rethink the way we live, and in turn, how to shape the built environment.


Self-supervised learning is the future of AI – The Next Web

Despite the huge contributions of deep learning to the field of artificial intelligence, there's something very wrong with it: it requires huge amounts of data. This is one thing that both the pioneers and critics of deep learning agree on. In fact, deep learning didn't emerge as the leading AI technique until a few years ago because of the limited availability of useful data and the shortage of computing power to process that data.

Reducing the data-dependency of deep learning is currently among the top priorities of AI researchers.

In his keynote speech at the AAAI conference, computer scientist Yann LeCun discussed the limits of current deep learning techniques and presented the blueprint for self-supervised learning, his roadmap to solve deep learning's data problem. LeCun is one of the godfathers of deep learning and the inventor of convolutional neural networks (CNNs), one of the key elements that have spurred a revolution in artificial intelligence in the past decade.

Self-supervised learning is one of several plans to create data-efficient artificial intelligence systems. At this point, it's really hard to predict which technique will succeed in creating the next AI revolution (or if we'll end up adopting a totally different strategy). But here's what we know about LeCun's master plan.

First, LeCun clarified that what is often referred to as the limitations of deep learning is, in fact, a limit of supervised learning. Supervised learning is the category of machine learning algorithms that require annotated training data. For instance, if you want to create an image classification model, you must train it on a vast number of images that have been labeled with their proper class.
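The supervised recipe described above, learning a mapping from inputs to human-provided labels, can be sketched in a few lines. This toy stands in for the image case: the synthetic feature vectors play the role of images and the 0/1 labels play the role of class annotations:

```python
import numpy as np

rng = np.random.default_rng(42)

# Labeled training set: each example is a feature vector plus a class label,
# a stand-in for (image, class) annotated pairs.
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

# Logistic regression trained by gradient descent on the labeled pairs.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))        # predicted probability of class 1
    w -= 0.1 * (X.T @ (p - y)) / len(y)        # gradient step on the weights
    b -= 0.1 * np.mean(p - y)                  # gradient step on the bias

accuracy = np.mean(((X @ w + b) > 0) == y)
print(accuracy)  # near 1.0 on this well-separated toy data
```

The point LeCun makes is about the `y` array: every one of those labels had to come from somewhere, and at ImageNet scale that means millions of human annotations.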

"[Deep learning] is not supervised learning. It's not just neural networks. It's basically the idea of building a system by assembling parameterized modules into a computation graph," LeCun said in his AAAI speech. "You don't directly program the system. You define the architecture and you adjust those parameters. There can be billions."

Deep learning can be applied to different learning paradigms, LeCun added, including supervised learning, reinforcement learning, as well as unsupervised or self-supervised learning.

But the confusion surrounding deep learning and supervised learning is not without reason. For the moment, the majority of deep learning algorithms that have found their way into practical applications are based on supervised learning models, which says a lot about the current shortcomings of AI systems. Image classifiers, facial recognition systems, speech recognition systems, and many of the other AI applications we use every day have been trained on millions of labeled examples.

Reinforcement learning and unsupervised learning, the other categories of learning algorithms, have so far found very limited applications.

Supervised deep learning has given us plenty of very useful applications, especially in fields such as computer vision and some areas of natural language processing. Deep learning is playing an increasingly important role in sensitive applications, such as cancer detection. It is also proving to be extremely useful in areas where the scale of the problem is beyond being addressed with human efforts, such as, with some caveats, reviewing the huge amount of content being posted on social media every day.

"If you take deep learning from Facebook, Instagram, YouTube, etc., those companies crumble," LeCun says. "They are completely built around it."

But as mentioned, supervised learning is only applicable where there's enough quality data and the data can capture the entirety of possible scenarios. As soon as trained deep learning models face novel examples that differ from their training examples, they start to behave in unpredictable ways. In some cases, showing an object from a slightly different angle might be enough to confound a neural network into mistaking it for something else.

ImageNet vs reality: In ImageNet (left column) objects are neatly positioned, in ideal background and lighting conditions. In the real world, things are messier (source: objectnet.dev)

Deep reinforcement learning has shown remarkable results in games and simulation. In the past few years, reinforcement learning has conquered many games that were previously thought to be off-limits for artificial intelligence. AI programs have already decimated human world champions at StarCraft 2, Dota, and the ancient Chinese board game Go.

But the way these AI programs learn to solve problems is drastically different from that of humans. Basically, a reinforcement learning agent starts with a blank slate and is only provided with a basic set of actions it can perform in its environment. The AI is then left on its own to learn through trial-and-error how to generate the most rewards (e.g., win more games).
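The blank-slate, trial-and-error loop described above is easiest to see in a toy setting. The sketch below is a hypothetical three-action bandit (the action names and win probabilities are invented): the agent knows nothing about which action is best and only learns from scalar rewards, exactly the narrow signal LeCun criticizes later in the piece:

```python
import random

random.seed(0)

# The agent's three possible actions; the true win probabilities are hidden from it.
win_prob = {"aggressive": 0.3, "defensive": 0.5, "balanced": 0.7}
value = {action: 0.0 for action in win_prob}   # estimated reward per action
counts = {action: 0 for action in win_prob}

for episode in range(5000):
    # Explore randomly 10% of the time, otherwise exploit the current best estimate.
    if random.random() < 0.1:
        action = random.choice(list(win_prob))
    else:
        action = max(value, key=value.get)
    reward = 1.0 if random.random() < win_prob[action] else 0.0
    counts[action] += 1
    value[action] += (reward - value[action]) / counts[action]  # running mean update

print(max(value, key=value.get))  # the agent should converge on "balanced"
```

Note what the agent needed: thousands of episodes to rank three actions. Scale the action space up to StarCraft and the "insane amount of sessions" the article mentions follows directly.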

This model works when the problem space is simple and you have enough compute power to run as many trial-and-error sessions as possible. In most cases, reinforcement learning agents take an insane number of sessions to master games. The huge costs have limited reinforcement learning research to labs owned or funded by wealthy tech companies.

Reinforcement learning agents must be trained on hundreds of years' worth of sessions to master games, far more than a human can play in a lifetime (source: Yann LeCun).

Reinforcement learning systems are very bad at transfer learning. A bot that plays StarCraft 2 at grandmaster level needs to be trained from scratch if it wants to play Warcraft 3. In fact, even small changes to the StarCraft game environment can immensely degrade the performance of the AI. In contrast, humans are very good at extracting abstract concepts from one game and transferring them to another.

Reinforcement learning really shows its limits when it has to solve real-world problems that can't be simulated accurately. "What if you want to train a car to drive itself? It's very hard to simulate this accurately," LeCun said, adding that if we wanted to do it in real life, we would have to destroy many cars. And unlike simulated environments, real life doesn't allow you to run experiments in fast forward, and parallel experiments, when possible, would result in even greater costs.

LeCun breaks down the challenges of deep learning into three areas.

First, we need to develop AI systems that learn with fewer samples or fewer trials. "My suggestion is to use unsupervised learning, or I prefer to call it self-supervised learning because the algorithms we use are really akin to supervised learning, which is basically learning to fill in the blanks," LeCun says. "Basically, it's the idea of learning to represent the world before learning a task. This is what babies and animals do. We run about the world, we learn how it works before we learn any task. Once we have good representations of the world, learning a task requires few trials and few samples."

Babies develop concepts of gravity, dimensions, and object persistence in the first few months after birth. While there's debate over how much of this capability is hardwired into the brain and how much is learned, what is certain is that we develop many of our abilities simply by observing the world around us.

The second challenge is creating deep learning systems that can reason. Current deep learning systems are notoriously bad at reasoning and abstraction, which is why they need huge amounts of data to learn simple tasks.

"The question is, how do we go beyond feed-forward computation and System 1? How do we make reasoning compatible with gradient-based learning? How do we make reasoning differentiable? That's the bottom line," LeCun said.

System 1 covers the kind of tasks that don't require active thinking, such as navigating a known area or making small calculations. System 2 is the more active kind of thinking, which requires reasoning. Symbolic artificial intelligence, the classic approach to AI, has proven to be much better at reasoning and abstraction.

But LeCun doesn't suggest returning to symbolic AI or to hybrid artificial intelligence systems, as other scientists have suggested. His vision for the future of AI is much more in line with that of Yoshua Bengio, another deep learning pioneer, who introduced the concept of system 2 deep learning at NeurIPS 2019 and further discussed it at AAAI 2020. LeCun, however, did admit that nobody yet has a complete answer to the question of which approach will enable deep learning systems to reason.

The third challenge is to create deep learning systems that can learn and plan complex action sequences, and decompose tasks into subtasks. Deep learning systems are good at providing end-to-end solutions to problems but very bad at breaking them down into specific, interpretable and modifiable steps. There have been advances in creating learning-based AI systems that can decompose images, speech, and text. Capsule networks, invented by Geoffrey Hinton, address some of these challenges.

But learning to reason about complex tasks is beyond today's AI. "We have no idea how to do this," LeCun admits.

The idea behind self-supervised learning is to develop a deep learning system that can learn to fill in the blanks.

"You show a system a piece of input, a text, a video, even an image, you suppress a piece of it, mask it, and you train a neural net or your favorite class of model to predict the piece that's missing. It could be the future of a video or the words missing in a text," LeCun says.
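The fill-in-the-blanks idea can be shown without any neural network at all. The sketch below is a deliberately crude stand-in: instead of a trained net, it uses context-word counts from a tiny made-up corpus, but the training signal is the same as in LeCun's description, the raw text supervises itself with no human labels:

```python
from collections import Counter, defaultdict

# Unlabeled corpus: the "labels" are just the words themselves.
corpus = "the cat sat on the mat the dog sat on the rug the cat ate the fish".split()

# "Train": count which word appears between each (left, right) context pair.
model = defaultdict(Counter)
for left, word, right in zip(corpus, corpus[1:], corpus[2:]):
    model[(left, right)][word] += 1

def fill_blank(left, right):
    """Predict the masked word from its surrounding context."""
    return model[(left, right)].most_common(1)[0][0]

print(fill_blank("sat", "the"))  # -> "on", learned from raw text, no annotation
```

A masked language model does conceptually the same thing, except the counting table is replaced by a neural network that generalizes to contexts it has never seen verbatim.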

The closest we have to self-supervised learning systems are Transformers, an architecture that has proven very successful in natural language processing. Transformers don't require labeled data. They are trained on large corpora of unstructured text such as Wikipedia articles. And they've proven to be much better than their predecessors at generating text, engaging in conversation, and answering questions. (But they are still very far from really understanding human language.)

Transformers have become very popular and are the underlying technology for nearly all state-of-the-art language models, including Google's BERT, Facebook's RoBERTa, OpenAI's GPT-2, and Google's Meena chatbot.

More recently, AI researchers have proven that transformers can perform integration and solve differential equations, problems that require symbol manipulation. This might be a hint that the evolution of transformers will enable neural networks to move beyond pattern recognition and statistical approximation tasks.

So far, transformers have proven their worth in dealing with discrete data such as words and mathematical symbols. "It's easy to train a system like this because there is some uncertainty about which word could be missing, but we can represent this uncertainty with a giant vector of probabilities over the entire dictionary, and so it's not a problem," LeCun says.
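The "giant vector of probabilities over the entire dictionary" LeCun mentions is, concretely, a softmax over the model's per-word scores. A miniature version (with a five-word toy dictionary and made-up scores in place of a real model's output) looks like:

```python
import numpy as np

# A model's raw scores (logits) for the masked word, one per dictionary entry.
dictionary = ["cat", "dog", "car", "mat", "sky"]
logits = np.array([2.0, 1.0, -1.0, 3.5, -2.0])

# Softmax turns scores into a probability distribution over the whole dictionary,
# which is how uncertainty about a missing word is represented.
probs = np.exp(logits - logits.max())  # subtract max for numerical stability
probs /= probs.sum()

print(dictionary[int(np.argmax(probs))])  # -> "mat", the most probable filler
```

In a real language model the dictionary has tens of thousands of entries, but the representation of uncertainty is exactly this: one probability per discrete option, summing to 1.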

But the success of Transformers has not transferred to the domain of visual data. "It turns out to be much more difficult to represent uncertainty and prediction in images and video than it is in text because it's not discrete. We can produce distributions over all the words in the dictionary. We don't know how to represent distributions over all possible video frames," LeCun says.

For each video segment, there are countless possible futures. This makes it very hard for an AI system to predict a single outcome, say the next few frames in a video. The neural network ends up calculating the average of possible outcomes, which results in blurry output.
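The blurriness problem has a simple numerical core: a network trained with mean-squared error is optimal when it outputs the average of the possible futures. A toy demonstration with two invented 2x2 "frames" makes the effect visible:

```python
import numpy as np

# Two equally likely "next frames": a bright column on the left or on the right.
left = np.array([[1.0, 0.0],
                 [1.0, 0.0]])
right = np.array([[0.0, 1.0],
                  [0.0, 1.0]])

# Under mean-squared error, the loss-minimizing single prediction is the
# average of the possible outcomes, which matches neither actual future.
mse_optimal = 0.5 * left + 0.5 * right
print(mse_optimal)  # every pixel is 0.5: a gray, washed-out blur
```

Neither possible future has any 0.5 pixels, yet the "best" single prediction is made of nothing else; that is the blur LeCun describes, scaled down to four pixels.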

This is the main technical problem we have to solve if we want to apply self-supervised learning to a wide variety of modalities like video, LeCun says.

LeCun's favored method for approaching self-supervised learning is what he calls latent variable energy-based models. The key idea is to introduce a latent variable Z which computes the compatibility between a variable X (the current frame in a video) and a prediction Y (the future of the video) and selects the outcome with the best compatibility score. In his speech, LeCun further elaborated on energy-based models and other approaches to self-supervised learning.

Energy-based models use a latent variable Z to compute the compatibility between a variable X and a prediction Y and select the outcome with the best compatibility score (image credit: Yann LeCun).
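As a cartoon of the selection mechanism just described (not LeCun's actual formulation; the energy function, candidates, and latent values below are all invented for illustration), an energy-based model scores each candidate prediction Y against the observation X, minimizing over the latent variable Z, and keeps the lowest-energy outcome:

```python
def energy(x, y, z):
    """Toy compatibility score: low energy means the future y fits the
    observation x, given a latent 'mode of variation' z. Purely illustrative."""
    return (y - (x + z)) ** 2

x = 1.0                          # current observation
candidates_y = [0.0, 1.5, 3.0]   # possible predictions of the future
latents_z = [-0.5, 0.0, 0.5]     # latent variable covering unobserved factors

# For each candidate, take the best (lowest) energy over z, then pick the
# candidate with the best overall compatibility score.
best_y = min(candidates_y,
             key=lambda y: min(energy(x, y, z) for z in latents_z))
print(best_y)  # -> 1.5, reachable exactly as x + z with z = 0.5
```

The crucial contrast with the averaging failure above: instead of blending all plausible futures into one blurry output, the latent variable lets the model commit to one concrete, self-consistent outcome.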

"I think self-supervised learning is the future. This is what's going to allow our AI systems, our deep learning systems, to go to the next level, perhaps learn enough background knowledge about the world by observation, so that some sort of common sense may emerge," LeCun said in his speech at the AAAI conference.

One of the key benefits of self-supervised learning is the immense gain in the amount of information outputted by the AI. In reinforcement learning, the AI system is trained at the scalar level; the model receives a single numerical value as reward or punishment for its actions. In supervised learning, the AI system predicts a category or a numerical value for each input.

In self-supervised learning, the output improves to a whole image or set of images. "It's a lot more information. To learn the same amount of knowledge about the world, you will require fewer samples," LeCun says.

We must still figure out how the uncertainty problem works, but when the solution emerges, we will have unlocked a key component of the future of AI.

"If artificial intelligence is a cake, self-supervised learning is the bulk of the cake," LeCun says. "The next revolution in AI will not be supervised, nor purely reinforced."

This story is republished from TechTalks, the blog that explores how technology is solving problems and creating new ones.

Published April 5, 2020 05:00 UTC


University of Cambridge researchers develop machine learning app to collect the sounds of Covid-19 – Cambridge Independent


Researchers at the University of Cambridge have developed an app that will collect the sounds of Covid-19.

The Covid-19 Sounds App will be used to gain data to develop machine learning algorithms that could automatically detect whether a person is suffering from the disease.

It would be based on the sound of their voice, their breathing and coughing.

"There's still so much we don't know about this virus and the illness it causes, and in a pandemic situation like the one we're currently in, the more reliable information you can get, the better," said Professor Cecilia Mascolo from Cambridge's department of computer science and technology, who led the development of the app.

Because Covid-19 is a respiratory condition, the sounds made by people with it, including voice, breathing and cough sounds, are very specific.

A large, crowdsourced data set will be useful in developing machine learning algorithms that could be used for automatic detection of the condition.
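The article does not say which algorithm the researchers will use, so as a heavily simplified, hypothetical stand-in for "automatic detection from crowdsourced sound data", here is a nearest-centroid classifier on invented two-dimensional feature vectors (imagine features like cough interval and breathing rate extracted from each recording, with labels from self-reported test results):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical crowdsourced samples: each row is a feature vector extracted
# from one recording; the two groups are labeled by reported test results.
healthy = rng.normal([0.2, 0.8], 0.1, (50, 2))
positive = rng.normal([0.7, 0.3], 0.1, (50, 2))

# "Train" by computing the mean feature vector (centroid) of each group.
centroids = {"healthy": healthy.mean(axis=0), "positive": positive.mean(axis=0)}

def classify(features):
    """Assign a new recording's features to the nearest group centroid."""
    return min(centroids, key=lambda k: np.linalg.norm(features - centroids[k]))

print(classify(np.array([0.65, 0.35])))  # -> "positive"
```

The researchers' point about dataset size applies directly: the centroids (and any more capable model) are only as trustworthy as the number and diversity of samples behind them.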

The app collects basic demographic and medical information from users, as well as spoken voice samples, breathing and coughing samples through the phone's microphone.

It will ask users if they have tested positive for the coronavirus, and collect one coarse-grained location sample.

But it will not track users and will only collect location data once when users are actively using it.

Data will be stored on university servers and used solely for research purposes.

Once the initial analysis of the collected data has been completed, it will be released to other researchers and could help shed light on disease progression, or on the relationship between respiratory complications and medical history, for example.

"Having spoken to doctors, one of the most common things they have noticed about patients with the virus is the way they catch their breath when they're speaking, as well as a dry cough, and the intervals of their breathing patterns," said Prof Mascolo.

"There are very few large datasets of respiratory sounds, so to make better algorithms that could be used for early detection, we need as many samples from as many participants as we can get."

"Even if we don't get many positive cases of coronavirus, we could find links with other health conditions."

The study has been approved by the ethics committee of the department of computer science and technology, and is partly funded by the European Research Council through Project EAR.

Professor Pietro Cicuta, from Cambridge's Cavendish Laboratory and a member of the team behind the app's development, said: "I am amazed at the speed with which we managed to connect across the university to conceive this project, and how Cecilia's team of developers came together to respond to the urgency of the situation."

The app is available as a web app, and versions for Android and iOS will be available soon.


Threat detection and the evolution of AI-powered security solutions – Help Net Security

Ashvin Kamaraju is a true industry leader. As CTO and VP of engineering, he drives the technology strategy for Thales Cloud Protection & Licensing, leading a team of researchers and technologists who develop the strategic vision for data protection products and services. In this interview, he discusses automation, artificial intelligence, machine learning and the challenges of detecting evolving threats.

Discovering an unknown cyber-threat is like trying to find a needle in a haystack. With an enlarged attack surface and a growing number of active hackers, automation, and specifically machine learning, can help by providing CISOs with the insights they need.
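One concrete way machine learning surfaces the "needle" is by learning a statistical baseline of normal activity and flagging deviations. The sketch below is a deliberately minimal, hypothetical example (the log values and threshold are invented), a toy stand-in for the far richer models a real security product would use:

```python
import statistics

# Hypothetical baseline: login counts per hour drawn from normal activity logs.
baseline = [12, 15, 11, 14, 13, 12, 16, 14, 13, 15]
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(observation, threshold=3.0):
    """Flag observations far outside the learned baseline (simple z-score rule),
    a toy stand-in for ML-driven alert triage."""
    return abs(observation - mean) / stdev > threshold

print(is_anomalous(14), is_anomalous(90))  # -> False True
```

The z-score rule illustrates the workflow, the baseline must be re-learned as behavior changes, which is exactly the "routinely exercised and fed new data" requirement discussed below.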

Consequently, it gives CISOs an opportunity to deploy their human analysts more effectively against potential cyber-attacks and data breaches. However, just because an organization has an automation/AI system in place doesn't mean it's secure. Countering cyber-threats is a constant game of cat and mouse, and hackers always want to get the maximum reward from the minimum effort, tweaking known attack methods as soon as these are detected by the AI. CTOs therefore need to make sure that the AI system is routinely exercised and fed new data, and that the algorithms are trained to understand the new data.

The first thing to note is that AI should not be confused with machine learning. What most people associate with AI is actually machine learning algorithms with no human-level intelligence. AI is based on heuristics, whereas machine learning requires a lot of data and algorithms that must be trained to learn from the data and provide insights that help to make decisions.

While the insights provided by AI/machine learning algorithms are very valuable, they are dependent on the data used. If the data has anomalies or is not representative of the entire scope of the problem domain, there will be bias in the insights. These must then be reviewed by an expert team that adds technical and contextual awareness to the data. AI is here to stay as data sets become more and more complex, but it will only be effective when combined with human intelligence.

AI is beneficial to organizations if it can be used effectively, in addition to human intelligence, not in lieu of it. Due to the rapid rise in the amount of data out there, and the growing number of threats businesses now face, AI and machine learning will play an increasingly important role for those that embrace them.

However, it requires constant investment, not necessarily from a cost perspective, but in terms of time, as it needs to be kept up to date with fresh data to adapt to the changing threat landscape. Organizations need to decide if they have the capabilities to use AI in the right way, or it can soon become an expensive mistake.

Cyber-attacks are getting harder to detect as technology evolves to align more closely with how business operates, creating new issues. The adoption of mobile phones, tablets, and IoT devices as part of digital transformation strategies is expanding the threat landscape by opening companies up to connections with more people outside their organization.

As the attack surface area expands, and thousands more hackers get in on the action, IT experts are being forced to protect near-infinite amounts of data and multiple entry points where hackers can get in. Where hacking once took dedication and expertise, with zero-day attacks targeting mostly unknown vulnerabilities, anyone can now launch a DDoS attack with hacking toolkits and thousands of tutorials freely available online.

So AI can play a key part in helping organizations defend themselves going into the future. With a new, evolved role in cybersecurity, experts and researchers can leverage AI to identify and counteract sophisticated cyber-attacks with minimal human intervention in the first instance. However, AI will always need human intelligence to provide the context of the data that it is evaluating and has flagged as potentially malicious.

Any new CISO walking into a large enterprise could be forgiven for feeling daunted at the responsibility of protecting that company's assets. Several questions spring to mind, from where to start to what to protect. Here are six simple steps to get them started:

1. Know the where and the what of your data. Prior to implementing any long-term security strategy, CISOs must first conduct a data sweep. Auditing all data within the perimeter helps identify not only what the organization has collected, but where its most sensitive data is held. It's impossible to protect data if you don't know where it is.

2. Securing sensitive data is the key. Technology such as encryption provides a key layer of defense, rendering data useless even if hackers access it. Whether data is stored on a company's own servers, in a public cloud, or in a hybrid environment, security-minded tools like encryption must be implemented.

3. Protect the data encryption keys. Encrypting data creates an encryption key: a unique tool used to unlock the data, making it accessible only to those who hold the key. Safe storage of these keys is crucial, and it needs to be done offsite to ensure they aren't located in the same place as the data, which would put both at risk.
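The key-separation idea can be sketched as follows. This is a toy demonstration only, not real cryptography: production systems should use a vetted library and a proper key-management service for offsite key storage; the cipher construction and the record below are invented for illustration.

```python
# Toy sketch of the key-separation idea only -- NOT real cryptography.
import hashlib, hmac, secrets

def keystream(key, length):
    # Derive a pseudo-random byte stream from the key (HMAC counter mode).
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(key, counter.to_bytes(8, "big"), hashlib.sha256).digest()
        counter += 1
    return out[:length]

def xor_cipher(key, data):
    # XOR with the keystream; the same operation encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

key = secrets.token_bytes(32)        # store this OFFSITE, away from the data
plaintext = b"customer record 42"
ciphertext = xor_cipher(key, plaintext)

print(ciphertext != plaintext)       # True: useless without the key
print(xor_cipher(key, ciphertext))   # b'customer record 42'
```

The point of the sketch is the last two lines: the ciphertext is worthless on its own, so an attacker needs both the data and the separately stored key.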

4. Forget single-factor authentication. The next step is to employ strong multi-factor authentication, ensuring authorized parties can access only the data they need. Two-factor authentication requires an extra piece of information beyond the user's password, such as a specific code they receive on their smartphone. Since passwords can be hacked easily, two-factor authentication is necessary for a successful security strategy. Multi-factor authentication takes this a step further by requiring additional context such as a device ID, location, or IP address.
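The smartphone-code second factor mentioned above is typically a time-based one-time password (TOTP). A minimal stdlib sketch following the RFC 6238 recipe, with an invented shared secret:

```python
# Minimal TOTP sketch (the "code on your smartphone" second factor).
# The shared secret is invented; real deployments provision it securely.
import base64, hashlib, hmac, struct, time

def totp(secret_b32, for_time=None, step=30, digits=6):
    key = base64.b32decode(secret_b32)
    counter = int((for_time if for_time is not None else time.time()) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

secret = base64.b32encode(b"server-shared-secret").decode()
code = totp(secret, for_time=59)   # fixed timestamp => reproducible code
print(code)                        # the same 6-digit code on both sides
```

Because server and phone share only the secret and the clock, they independently compute the same short-lived code, which is what makes it a usable second factor.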

5. Keep software up to date. Vendors are constantly patching their software and hardware to prevent cyber criminals from exploiting bugs and other vulnerabilities as they emerge. Many companies have relied on software that isn't regularly patched, or simply haven't applied new patches soon enough. Companies must install the most recent patches or risk becoming victims of hackers.

6. Evaluate and go again. After implementing the above, the process must be repeated for all new data that comes into the system. GDPR-led compliance is a continual process and applies to future data as much as to what is just entering the system and what is already there. Making a database unattractive to hackers is central to a good cybersecurity strategy. Done correctly, these processes will make data accessible only to those allowed to access it.

Follow this link:
Threat detection and the evolution of AI-powered security solutions - Help Net Security

Two startups find ways to bring AI to the edge – Stacey on IoT

Steve Teig, the CEO of the newly created Perceive Corp. Image courtesy of Perceive.

The market for specialty silicon that enables companies to run artificial intelligence models on battery-sipping and relatively constrained devices is flush with funds and ideas. Two new startups have entered the arena, each proposing a different way to break down the computing-intensive tasks of recognizing wake words, identifying people, and other jobs that are built on neural networks.

Perceive, which launched this week, and Kneron (pronounced "neuron"), which launched in March, are relying on neural networks at the edge to reduce bandwidth, speed up results, and protect privacy. They join a dozen or more startups all trying to bring specialty chips to the edge to make the IoT more efficient and private.

Perceive was spun out of Xperi, a semiconductor company that has built hundreds of AI models to help identify people, objects, wake words, and other popular use cases for edge AI. Two-year-old Perceive has built a 7mm x 7mm chip designed to run neural networks at the edge, but it does so by changing the way the training is done so it can build smaller models that are still accurate.

In general, when a company wants to run neural networks on an edge chip, it must make the model smaller, which can reduce accuracy. Designers also build special sections of the chip to handle the specific type of math required for the convolutions in a neural network. But Perceive threw all of that out the window, instead turning to information theory to build efficient models.
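For context, the conventional shrinking step the article alludes to is quantization: storing 32-bit float weights as 8-bit integers. The sketch below, with invented weights, shows that baseline approach (not Perceive's method, which replaces it):

```python
# Hypothetical sketch of 8-bit weight quantization, the usual way models
# are made smaller for edge chips. Weights below are invented.

def quantize(weights):
    # Map floats onto 256 integer levels spanning the observed range.
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255 if hi != lo else 1.0
    q = [round((w - lo) / scale) for w in weights]
    return q, scale, lo

def dequantize(q, scale, lo):
    return [x * scale + lo for x in q]

weights = [-0.52, 0.13, 0.98, -0.07, 0.44]
q, scale, lo = quantize(weights)
restored = dequantize(q, scale, lo)

print(q)                 # small integers: a quarter of the storage
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(max_err < scale)   # True: rounding error bounded by one step
```

The trade-off is visible in `max_err`: storage drops fourfold, but every weight is perturbed slightly, which is exactly the accuracy loss the paragraph mentions.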

Information theory is all about finding the signal in a bunch of noise. When applied to machine learning, it is used to ascertain which features are relevant to figuring out whether an image is a dog or a cat, or whether an individual person is me or my husband. Traditional neural networks are trained by giving a computer tens or hundreds of thousands of images and letting it ascertain which elements are most important in determining what an object or person is.
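The feature-relevance idea can be made concrete with mutual information, a core information-theoretic quantity. In this invented example, a feature that tracks the label scores higher than one both classes share:

```python
# Illustration of the information-theoretic idea: mutual information
# between a feature and a label measures how much the feature tells us
# about the label. The data below is invented.
from collections import Counter
from math import log2

def mutual_information(pairs):
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    mi = 0.0
    for (x, y), c in joint.items():
        p_xy = c / n
        mi += p_xy * log2(p_xy / ((px[x] / n) * (py[y] / n)))
    return mi

# (has_pointy_ears, label): a mostly informative feature...
informative = ([(1, "cat")] * 45 + [(0, "dog")] * 45
               + [(0, "cat")] * 5 + [(1, "dog")] * 5)
# ...versus (has_four_legs, label), which both classes share.
useless = [(1, "cat")] * 50 + [(1, "dog")] * 50

print(mutual_information(informative) > mutual_information(useless))  # True
print(mutual_information(useless))                                    # 0.0
```

A feature shared by every class carries zero bits about the label, so a training method can safely discard it; that selection pressure is what yields smaller models.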

Perceive's methodology requires less training data, and CEO Steve Teig says that its end models are smaller, which is what allows them to run efficiently on a lower-power chip. The result of the Perceive training is expressed in PyTorch, a common machine learning framework. The company currently offers a chip as well as a service that will help generate custom models. Perceive has also developed hundreds of its own models based on the work done by Xperi.

According to Teig, Perceive has already signed two substantial customers, neither of which can be named, and is in talks with connected device makers ranging from video doorbell vendors to toy companies.

The other chip startup tackling machine learning is Kneron, formed in 2015. It has built a chip that can reconfigure an element on it specifically for the type of machine learning model it needs to run. When an edge chip has to run a machine learning model, it needs to do a lot of math, which has led chipmakers to put special coprocessors on the chip that can handle a type of math known as matrix multiplication. (The Perceive method of training models doesn't require matrix multiplication.)
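To see why matrix multiplication earns its own coprocessor, note that a dense neural-network layer is essentially one matrix-vector product per input. A minimal sketch with arbitrary, invented weights:

```python
# Why matmul dominates inference: each dense layer computes y = relu(Wx + b),
# and deep models stack many such layers. Shapes and values are arbitrary.

def matvec(W, x):
    # One matrix-vector product: the core workload a coprocessor speeds up.
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def relu(v):
    return [max(0.0, a) for a in v]

def dense(W, b, x):
    return relu([s + bi for s, bi in zip(matvec(W, x), b)])

W = [[0.5, -0.2], [0.1, 0.9]]   # 2x2 weight matrix
b = [0.0, -0.1]
x = [1.0, 2.0]

print([round(v, 6) for v in dense(W, b, x)])   # [0.1, 1.8]
```

Almost all of the arithmetic sits inside `matvec`, which is why accelerating that one operation in silicon speeds up the whole model.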

This flexibility, and the promise it has to enable devices to run local AI, has led Kneron to raise $73 million. Eventually, Kneron hopes to be able to tackle learning at the edge, with CEO Albert Liu promising that the company might be able to offer simplified learning later this year. (Today, all edge AI chips can only match inputs against an existing AI model, as opposed to taking input from the environment and creating a new model.)

Both Perceive and Kneron are riding high on the promise of delivering more intelligence to products that don't need to stay connected to the internet. As privacy, power management, and local control continue to rise in importance, the two companies are joining a host of startups trying to make their hardware the next big thing in silicon.


Original post:
Two startups find ways to bring AI to the edge - Stacey on IoT

Machine Learning in Pharmaceutical Market Business Opportunities and Global Industry Analysis by 2026- Key Players are McKinsey, Boston, IBM Watson -…

The research report on the Machine Learning in Pharmaceutical Market is a deep analysis of the market. This is the latest report, covering the current COVID-19 impact on the market. The pandemic of Coronavirus (COVID-19) has affected every aspect of life globally. This has brought along several changes in market conditions. The rapidly changing market scenario and initial and future assessments of the impact are covered in the report. Experts have studied the historical data and compared it with the changing market situation. The report covers all the necessary information required by new entrants as well as existing players to gain deeper insight.

Request a Sample Copy of these Reports@ https://www.qyreports.com/request-sample/?report-id=223440

Furthermore, the statistical survey in the report focuses on product specifications, costs, production capacities, marketing channels, and market players. Upstream raw materials, downstream demand analysis, and a list of end-user industries have been studied systematically, along with the suppliers in this market. The product flow and distribution channel have also been presented in this research report.

Key Players:

McKinsey, Boston, IBM Watson, ALTEN Calsoft Labs, Axtria Ingenious Insights, GRAIL, Inc., Aktana, Owkin, Amgen, BASF, Bayer, Lilly, Novartis, Pfizer, Sunovion, and WuXi.

By Regions:

North America (the US, Canada, and Mexico)
Europe (the UK, Germany, France, and Rest of Europe)
Asia Pacific (China, India, and Rest of Asia Pacific)
Latin America (Brazil and Rest of Latin America)
Middle East & Africa (Saudi Arabia, the UAE, South Africa, and Rest of Middle East & Africa)

Ask for Discount on this Premium Report@ https://www.qyreports.com/ask-for-discount/?report-id=223440

The Machine Learning in Pharmaceutical Market Report Consists of the Following Points:

Enquiry Before Buying@ https://www.qyreports.com/enquiry-before-buying/?report-id=223440

In conclusion, the Machine Learning in Pharmaceutical Market report is a reliable source for accessing research data that is projected to exponentially accelerate your business. The report provides information such as economic scenarios, benefits, limits, trends, market growth rates, and figures. A SWOT analysis is also incorporated in the report, along with investment feasibility and return-on-investment analyses.

About QYReports:

We at QYReports, a leading market research report publisher, cater to more than 4,000 prestigious clients worldwide, meeting their customized research requirements in terms of market data size and its application. Our list of customers includes renowned Chinese companies, multinational companies, SMEs, and private equity firms. Our business studies cover more than 30 industries, offering you accurate, in-depth, and reliable market insight, industry analysis, and structure. QYReports specializes in the forecasts needed for investing in and executing new projects, globally and in Chinese markets.

Contact Us:

Name: Jones John

Contact number: +1-510-560-6005

204, Professional Center,

7950 NW 53rd Street, Miami, Florida 33166

sales@qyreports.com

http://www.qyreports.com

Follow this link:
Machine Learning in Pharmaceutical Market Business Opportunities and Global Industry Analysis by 2026- Key Players are McKinsey, Boston, IBM Watson -...

Washington state governor green-lights facial-recog law championed by… guess who: Yep, hometown hero Microsoft – The Register

Roundup: Here's your quick-fire summary of recent artificial intelligence news.

DeepMind has built a reinforcement-learning bot capable of playing 57 classic Atari 2600 games about as well as the average human.

Why 57, you may ask? The Atari 2600 console was launched in 1977 and has a library of hundreds of games. In 2012, a group of computer scientists came up with The Arcade Learning Environment (ALE), a toolkit consisting of 57 old Atari games to test reinforcement-learning agents.

AI researchers have been using this collection to benchmark the progress of their game-playing bots ever since. The average score reached on all 57 games has steadily increased with the development of more complex machine-learning systems, but most models have struggled to play the most difficult ones, such as Montezuma's Revenge, Pitfall, Solaris, and Skiing.

Reinforcement learning attempts to teach AI bots how to complete a specific task, such as playing a game, without explicitly telling them the rules. The agents thus have to learn through trial and error, guided by rewards. Reaching high scores means more delicious rewards, and over time the computer learns to make good moves to play the game well.
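That trial-and-error loop can be sketched with tabular Q-learning on a toy one-dimensional corridor. Everything here is invented for illustration; Agent57 is vastly more sophisticated, but the reward-driven update is the same idea.

```python
# Toy reinforcement-learning sketch: a 1-D corridor where the agent is
# rewarded only at the rightmost cell. Q-learning discovers, by trial
# and error, that moving right is the good move.
import random

random.seed(0)
N = 5                      # states 0..4; reward of +1 only at state 4
Q = {(s, a): 0.0 for s in range(N) for a in (-1, +1)}
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for episode in range(500):
    s = 0
    while s != N - 1:
        # Epsilon-greedy: mostly exploit the best known move, sometimes explore.
        if random.random() < epsilon:
            a = random.choice((-1, +1))
        else:
            a = max((-1, +1), key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N - 1)
        reward = 1.0 if s2 == N - 1 else 0.0
        best_next = max(Q[(s2, act)] for act in (-1, +1))
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s2

# After training, the greedy policy moves right (+1) from every state.
policy = [max((-1, +1), key=lambda act: Q[(s, act)]) for s in range(N - 1)]
print(policy)   # [1, 1, 1, 1]
```

Note how the reward only arrives at the far end: early episodes wander at random, and the value of moving right propagates backwards one state at a time, which is exactly why sparse-reward games are hard.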

The researchers have improved their system by employing different types of algorithms and tricks. The bot, dubbed Agent57, is better equipped to deal with the most difficult games because it has been programmed to explore its environment more efficiently, even when the rewards are sparse.

A number of steps have to be executed in these games before a reward is given, so it's not immediately obvious how to play Montezuma's Revenge, Pitfall, Solaris, and Skiing, compared with games like Pong that have a more immediate reward feedback system.

The boffins reckon that mastering the games in the ALE dataset is a good sign that a system is more generally intelligent and robust, and that such systems might then be applied in the real world.

"The ultimate goal is not to develop systems that excel at games, but rather to use games as a stepping stone for developing systems that learn to excel at a broad set of challenges," Deepmind wrote.

You can read more about the numerous nifty techniques that were used to improve Agent57 in more detail here [PDF].

The governor of the US state of Washington, Jay Inslee, has signed into law a piece of legislation that regulates the use of facial-recognition systems.

While the likes of San Francisco and Oakland in California, and Somerville in Massachusetts, have banned law enforcement from using facial-recognition technology, Washington has gone for a softer approach. That's not too much of a surprise, considering the bill [PDF] was sponsored by Microsoft, and the US state is the home of the Windows giant. Microsoft is keen for organizations to use its machine-learning services for things like facial and object recognition.

"This legislation represents a significant breakthrough the first time a state or nation has passed a new law devoted exclusively to putting guardrails in place for the use of facial recognition technology," Redmond's president, Brad Smith, said.

Law enforcement agencies in Washington will be allowed to deploy facial-recognition systems, but will have to be more transparent about using them. First, they have to file a "notice of intent", a report that details the service the cops want to use from a particular vendor and what it will be used for. The document also has to show what kind of data is collected and generated, what decisions the software makes, and where it will be deployed. The notice has to be given to a "legislative authority" and will be made public.

On the vendor side of things, companies will have to provide an application programming interface (API) to enable an independent party to audit the algorithm's performance. They must also report "any complaints or reports of bias regarding the service".

Smith gushed: "Through some of the new law's most important provisions, Washington state has become the first jurisdiction to enact specific facial recognition rules to protect civil liberties and fundamental human rights. While the public will rightly assess ways to improve upon this approach over time, it's worth recognizing at the outset the thorough approach the Washington state legislature has adopted."

Meanwhile, the American Civil Liberties Union has been fighting for a moratorium on facial recognition, demanding a temporary ban on the technology until Congress passes stricter laws that protect an individual's rights.

The Washington law is due to go into effect next year.

Remember Amazon's little AI music-generating keyboard DeepComposer that was touted at its annual re:Invent developer conference last year?

Well, now you can finally play with it. Don't worry if you don't have an actual physical keyboard: Amazon has released a digital version alongside the software needed to create music via machine learning.

DeepComposer trains generative adversarial networks (GANs) to create new jingles based on a particular style of music. The software is designed to help enthusiasts who don't necessarily have a deep knowledge of machine learning or music to learn about GANs in more detail.

It gives step-by-step instructions on how to build, train, and test GANs without having to write any code. Users create a little melody on the digital keyboard and pick a genre, and the GAN fills in the blanks, transforming the simple tune into computer-generated music. The physical keyboard is available too, but only in the US.

You can find out more about that here.


Read the rest here:
Washington state governor green-lights facial-recog law championed by... guess who: Yep, hometown hero Microsoft - The Register

The quantum computing market valued $507.1 million in 2019, from where it is projected to grow at a CAGR of 56.0% during 2020-2030 (forecast period),…

NEW YORK, April 6, 2020 /PRNewswire/ -- Quantum Computing Market Research Report: By Offering (Hardware, Software, Service), Deployment Type (On-Premises, Cloud-Based), Application (Optimization, Simulation and Data Problems, Sampling, Machine Learning), Technology (Quantum Dots, Trapped Ions, Quantum Annealing), Industry (BFSI, Aerospace & Defense, Manufacturing, Healthcare, IT & Telecom, Energy & Utilities) Industry Share, Growth, Drivers, Trends and Demand Forecast to 2030

Read the full report: https://www.reportlinker.com/p05879070/?utm_source=PRN

The quantum computing market was valued at $507.1 million in 2019 and is projected to grow at a CAGR of 56.0% during 2020-2030 (the forecast period), ultimately reaching $64,988.3 million by 2030. Machine learning (ML) is expected to progress at the highest CAGR among all application categories during the forecast period, owing to the fact that quantum computing is being integrated into ML to improve the latter's use cases.

Government support for the development and deployment of the technology is a prominent trend in the quantum computing market, with companies as well as public bodies realizing the importance of a coordinated funding strategy. For instance, the National Quantum Initiative Act, which became a law in December 2018, included a funding of $1.2 billion from the U.S. House of Representatives for the National Quantum Initiative Program. The aim behind the funding was to facilitate the development of technology applications and quantum information science, over a 10-year period, by setting its priorities and goals.

Moreover, efforts are being made to come up with standards for quantum computing technology. Among the numerous standards being developed by the IEEE Standards Association Quantum Computing Working Group are benchmarks and performance metrics, which would help in analyzing the performance of quantum computers against that of conventional computers. Other noteworthy standards relate to nomenclature and definitions, in order to create a common language for quantum computing.

In 2019, the quantum computing market was dominated by the quantum annealing category, on the basis of technology, because the physical challenges in its development have been overcome and it is now being deployed in larger systems. That year, the banking, financial services, and insurance (BFSI) division held the largest share in the market, on account of the rapid expansion of this industry. Additionally, banks and other financial institutions are quickly deploying the technology to streamline their business processes as well as secure their data.
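The flavor of the optimization problems annealers target can be conveyed with classical simulated annealing, a software analogue of the quantum process, on an invented three-variable QUBO (quadratic unconstrained binary optimization) instance:

```python
# Classical simulated-annealing sketch of the kind of problem quantum
# annealers solve: minimize x^T Q x over binary x. Q is invented.
import math, random

random.seed(1)

Q = [[-3,  2,  0],
     [ 2, -2,  1],
     [ 0,  1, -4]]          # tiny QUBO; its optimum is x = [1, 0, 1]

def energy(x):
    return sum(Q[i][j] * x[i] * x[j] for i in range(3) for j in range(3))

x = [random.randint(0, 1) for _ in range(3)]
best_x, best_e = x[:], energy(x)
T = 2.0
for step in range(2000):
    i = random.randrange(3)
    candidate = x[:]
    candidate[i] ^= 1                   # flip one bit
    delta = energy(candidate) - energy(x)
    # Accept downhill moves always, uphill moves with shrinking probability.
    if delta <= 0 or random.random() < math.exp(-delta / T):
        x = candidate
        if energy(x) < best_e:
            best_x, best_e = x[:], energy(x)
    T *= 0.999                          # cooling schedule

print(best_x, best_e)   # best solution found: [1, 0, 1] -7
```

Portfolio selection, routing, and fraud-pattern problems in BFSI can be cast in this QUBO form, which is why that industry is an early adopter; a quantum annealer explores the same energy landscape in hardware rather than in a software loop.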

By 2030, Europe and North America are expected to account for more than 78.0% in the quantum computing market, as Canada, the U.S., the U.K., Germany, and Russia are witnessing heavy investments in the field. For instance, the National Security Agency (NSA), National Aeronautics and Space Administration (NASA), and Los Alamos National Laboratory are engaged in quantum computing technology development. Additionally, an increasing number of collaborations and partnerships are being witnessed in these regions, along with the entry of several startups.

The major players operating in the highly competitive quantum computing market are Telstra Corporation Limited, International Business Machines (IBM) Corporation, Silicon Quantum Computing, IonQ Inc., Alphabet Inc., Huawei Investment & Holding Co. Ltd., Microsoft Corporation, Rigetti & Co. Inc., Zapata Computing Inc., D-Wave Systems Inc., and Intel Corporation. Google LLC, the main operating subsidiary of Alphabet Inc., is establishing the Quantum AI Laboratory, in collaboration with NASA, wherein the quantum computers developed by D-Wave Systems Inc. are being used.

Read the full report: https://www.reportlinker.com/p05879070/?utm_source=PRN

About ReportLinker: ReportLinker is an award-winning market research solution. ReportLinker finds and organizes the latest industry data so you get all the market research you need - instantly, in one place.

Contact Clare: clare@reportlinker.com
US: (339)-368-6001
Intl: +1 339-368-6001

View original content:http://www.prnewswire.com/news-releases/the-quantum-computing-market-valued-507-1-million-in-2019--from-where-it-is-projected-to-grow-at-a-cagr-of-56-0-during-20202030-forecast-period-to-ultimately-reach-64-988-3-million-by-2030--301036177.html

SOURCE Reportlinker

View original post here:
The quantum computing market valued $507.1 million in 2019, from where it is projected to grow at a CAGR of 56.0% during 2020-2030 (forecast period),...