Scientists use machine learning to tackle a big challenge in gene therapy – STAT

As the world charges ahead with vaccinating the population against the coronavirus, gene therapy developers are locked in a counterintuitive race. Instead of training the immune system to recognize and combat a virus, they're trying to do the opposite: designing viruses the body has never seen, and can't fight back against.

It's OK, really: These are adeno-associated viruses, which are common and rarely cause symptoms. That makes them the perfect vehicle for gene therapies, which aim to treat hereditary conditions caused by a single faulty gene. But they introduce a unique challenge: because these viruses already circulate widely, patients' immune systems may recognize the engineered vectors and clobber them into submission before they can do their job.


Read the original post:
Scientists use machine learning to tackle a big challenge in gene therapy - STAT

Using AI and Machine Learning will increase in horti industry – hortidaily.com

The expectation is that in 2021, artificial intelligence and machine learning technologies will continue to become more mainstream. Businesses that haven't traditionally viewed themselves as candidates for AI applications will embrace these technologies.

A great story of machine learning being used in an industry not known for its technology investments is that of Makoto Koike. Using Google's TensorFlow, Koike developed a cucumber-sorting system trained on pictures he took of the cucumbers himself. With that small step, a machine learning cucumber sorter was born.
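To make the scale of that "small step" concrete, here is a minimal sketch of an image-classification sorter in the same spirit, written with TensorFlow's Keras API. The folder layout, image size, and number of grades are illustrative assumptions, not details of Koike's actual system.

import tensorflow as tf

NUM_GRADES = 9  # assumed number of cucumber grades for this sketch

# Hypothetical folder of labeled photos: one subfolder per grade.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "cucumber_photos/", image_size=(64, 64), batch_size=32)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=(64, 64, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_GRADES),  # one logit per grade
])
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)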

Getting started with AI and machine learning is becoming increasingly accessible for organizations of all sizes. Technology-as-a-service companies including Microsoft, AWS and Google all have offerings that will get most organizations started on their AI and machine learning journeys. These technologies can be used to automate and streamline manual business processes that have historically been resource-intensive.

An article on forbes.com argues that, as business leaders continue to refine their processes to support the new normal of the Covid-19 pandemic, they should consider where these technologies might help reduce manual, resource-intensive or paper-based processes. Any manual process should be fair game for an automation review.


The rest is here:
Using AI and Machine Learning will increase in horti industry - hortidaily.com

The head of JPMorgan’s machine learning platform explained what it’s like to work there – eFinancialCareers

For the past few years, JPMorgan has been busy building out its machine learning capability under Daryush Laqab, its San Francisco-based head of AI platform product management, who was hired from Google in 2019. Last time we looked, the bank seemed to be paying salaries of $160-$170k to new joiners on Laqab's team.

If that sounds appealing, you might want to watch the video below so that you know what you're getting into. Recorded at the AWS re:Invent conference in December, it has only just made it to YouTube. The video is flagged as a day in the life of JPMorgan's machine learning data scientists, but Laqab arguably does a better job of highlighting some of the constraints data professionals at all banks have to work under.

"There are some barriers to smooth data science at JPMorgan," he explains - a bank is not the same as a large technology firm.

For example, data scientists at JPMorgan have to check that data is authorized for use, says Laqab: "They need to go through a process to log that use and make sure that they have the adequate approvals for that intent in terms of use."

They also have to deal with the legacy infrastructure issue: "We are a large organization, we have a lot of legacy infrastructure," says Laqab. "Like any other legacy infrastructure, it is built over time, it is patched over time. These are tightly integrated, so moving part or all of that infrastructure to public cloud, replacing rule-based engines with AI/ML-based engines. All of that takes time and brings inertia to the innovation."

JPMorgan's size and complexity are another source of inertia, as multiple business lines in multiple regulated entities in different regulated environments need to be considered. "Making sure that those regulatory obligations are taken care of, again, slows down data science at times," says Laqab.

And then there are more specific regulations, such as those concerning model governance. At JPMorgan, a machine learning model can't go straight into a production environment. "It needs to go through a model review and a model governance process," says Laqab, "to make sure we have another set of eyes that looks at how that model was created, how that model was developed." And then there are software governance issues too.

Despite all these hindrances, JPMorgan has already productionized AI models and built an 'Omni AI ecosystem,' which Laqab heads, to help employees identify and ingest minimum viable data so that they can build models faster. Laqab says the bank saved $150m in expenses in 2019 as a result. JPMorgan's AI researchers are now working on everything from FAQ bots and chatbots, to NLP search models for the bank's own content, pattern recognition in equities markets, and email processing. The breadth of work on offer is considerable. "We play in every market that is out there," says Laqab.

The bank has also learned that the best way to structure its AI team is to split people into data scientists, who train and create models, and machine learning engineers, who operationalize them, says Laqab. Before you apply, you might want to consider which you'd rather be.



See the original post:
The head of JPMorgan's machine learning platform explained what it's like to work there - eFinancialCareers

If you know nothing about deep learning with Python, start here – TechTalks

This article is part of AI education, a series of posts that review and explore educational content on data science and machine learning. (In partnership with Paperspace)

Teaching yourself deep learning is a long and arduous process. You need a strong background in linear algebra and calculus, good Python programming skills, and a solid grasp of data science, machine learning, and data engineering. Even then, it can take more than a year of study and practice before you reach the point where you can start applying deep learning to real-world problems and possibly land a job as a deep learning engineer.

Knowing where to start, however, can help a lot in softening the learning curve. If I had to learn deep learning with Python all over again, I would start with Grokking Deep Learning, written by Andrew Trask. Most books on deep learning require a basic knowledge of machine learning concepts and algorithms. Trask's book teaches you the fundamentals of deep learning without any prerequisites aside from basic math and programming skills.

The book won't make you a deep learning wizard (and it doesn't make such claims), but it will set you on a path that makes it much easier to learn from more advanced books and courses.

Most deep learning books are based on one of several popular Python libraries such as TensorFlow, PyTorch, or Keras. In contrast, Grokking Deep Learning teaches you deep learning by building everything from scratch, line by line.

You start with developing a single artificial neuron, the most basic element of deep learning. Trask takes you through the basics of linear transformations, the main computation done by an artificial neuron. You then implement the artificial neuron in plain Python code, without using any special libraries.

This is not the most efficient way to do deep learning, because Python has many libraries that take advantage of your computer's graphics card and the parallel processing power of your CPU to speed up computations. But writing everything in vanilla Python is excellent for learning the ins and outs of deep learning.

In Grokking Deep Learning, your first artificial neuron will take a single input, multiply it by a random weight, and make a prediction. You'll then measure the prediction error and apply gradient descent to tune the neuron's weight in the right direction. With a single neuron, single input, and single output, understanding and implementing the concept becomes very easy. You'll gradually add more complexity to your models, using multiple input dimensions, predicting multiple outputs, applying batch learning, adjusting learning rates, and more.
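In plain Python, that first exercise is only a few lines. The following is a minimal sketch of the idea as described above, with illustrative numbers rather than the book's own:

import random

weight = random.random()   # start from a random weight
inp, goal = 0.5, 0.8       # one input and the target prediction
alpha = 0.1                # learning rate

for iteration in range(50):
    pred = inp * weight              # predict: input times weight
    error = (pred - goal) ** 2       # squared prediction error
    delta = pred - goal
    weight -= alpha * (delta * inp)  # gradient descent step on the weight

print(f"final weight {weight:.4f}, prediction {inp * weight:.4f}")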

And you'll implement every new concept by gradually adding to and changing bits of Python code you've written in previous chapters, building up a roster of functions for making predictions, calculating errors, applying corrections, and more. As you move from scalar to vector computations, you'll shift from vanilla Python operations to NumPy, a library that is especially good at parallel computing and is very popular among the machine learning and deep learning community.

With the basic building blocks of artificial neurons under your belt, you'll start creating deep neural networks, which are basically what you get when you stack several layers of artificial neurons on top of each other.

As you create deep neural networks, you'll learn about activation functions and apply them to break the linearity of the stacked layers and create classification outputs. Again, you'll implement everything yourself with the help of NumPy functions. You'll also learn to compute gradients and propagate errors through layers to spread corrections across different neurons.
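As a rough illustration of where that leads, here is a from-scratch two-layer network in the same spirit: a ReLU hidden layer, manual error propagation, and NumPy for the matrix math. The toy XOR data and layer sizes are assumptions made for this sketch, not code from the book:

import numpy as np

np.random.seed(1)
relu = lambda x: np.maximum(0, x)
relu_deriv = lambda x: (x > 0).astype(float)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # toy inputs
y = np.array([[0], [1], [1], [0]])              # toy targets (XOR)

alpha, hidden_size = 0.2, 4
w0 = 2 * np.random.random((2, hidden_size)) - 1  # input -> hidden weights
w1 = 2 * np.random.random((hidden_size, 1)) - 1  # hidden -> output weights

for epoch in range(2000):
    hidden = relu(X.dot(w0))                  # forward pass
    out = hidden.dot(w1)
    delta_out = out - y                       # output-layer error
    delta_hidden = delta_out.dot(w1.T) * relu_deriv(X.dot(w0))  # backpropagate
    w1 -= alpha * hidden.T.dot(delta_out)     # weight updates
    w0 -= alpha * X.T.dot(delta_hidden)

print(out.round(2).ravel())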

As you get more comfortable with the basics of deep learning, you'll get to learn and implement more advanced concepts. The book features some popular regularization techniques, such as early stopping and dropout. You'll also get to craft your own versions of convolutional neural networks (CNN) and recurrent neural networks (RNN).
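Dropout, for instance, fits in a few lines of the same from-scratch style. This is a sketch of the standard "inverted dropout" formulation, not a quote from the book; the keep probability is an illustrative choice:

import numpy as np

def dropout(activations, keep_prob=0.5):
    # Randomly zero units, then rescale survivors so the layer's expected
    # output is unchanged when dropout is switched off at test time.
    mask = (np.random.random(activations.shape) < keep_prob).astype(float)
    return activations * mask / keep_prob

hidden = np.random.random((32, 16))  # a hypothetical batch of hidden activations
hidden_train = dropout(hidden)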

By the end of the book, you'll pack everything into a complete Python deep learning library, creating your own class hierarchy of layers, activation functions, and neural network architectures (you'll need object-oriented programming skills for this part). If you've already worked with other Python libraries such as Keras and PyTorch, you'll find the final architecture quite familiar. If you haven't, you'll have a much easier time getting comfortable with those libraries in the future.

And throughout the book, Trask reminds you that practice makes perfect; he encourages you to code your own neural networks by heart without copy-pasting anything.

Not everything about Grokking Deep Learning is perfect. In a previous post, I said that one of the main things that defines a good book is the code repository. And in this area, Trask could have done a much better job.

The GitHub repository of Grokking Deep Learning is rich with Jupyter Notebook files for every chapter. Jupyter Notebook is an excellent tool for learning Python machine learning and deep learning. However, the strength of Jupyter is in breaking down code into several small cells that you can execute and test independently. Some of Grokking Deep Learning's notebooks are composed of very large cells with big chunks of uncommented code.

This becomes especially problematic in the later chapters, where the code becomes longer and more complex, and finding your way in the notebooks becomes very tedious. As a matter of principle, the code for educational material should be broken down into small cells and contain comments in key areas.

Also, Trask has written the code in Python 2.7. While he has made sure that the code also works smoothly in Python 3, it contains old coding techniques that have fallen out of favor among Python developers, such as using the "for i in range(len(array))" pattern to iterate over an array.
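For readers unfamiliar with the complaint, the contrast looks like this (the list contents are illustrative):

array = [3, 1, 4, 1, 5]

# The Python 2-era pattern used in the book:
for i in range(len(array)):
    print(array[i])

# The idiomatic equivalents: iterate directly, or use enumerate()
# when the index is genuinely needed.
for value in array:
    print(value)
for i, value in enumerate(array):
    print(i, value)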

Trask has done a great job of putting together a book that can serve both newbies and experienced Python deep learning developers who want to fill the gaps in their knowledge.

But as Tywin Lannister says (and every engineer will agree), "There's a tool for every task, and a task for every tool." Deep learning isn't a magic wand that can solve every AI problem. In fact, for many problems, simpler machine learning algorithms such as linear regression and decision trees will perform as well as deep learning, while for others, rule-based techniques such as regular expressions and a couple of if-else clauses will outperform both.

The point is, you'll need a full arsenal of tools and techniques to solve AI problems. Hopefully, Grokking Deep Learning will help get you started on the path to acquiring them.

Where do you go from here? I would certainly suggest picking up an in-depth book on Python deep learning such as Deep Learning With PyTorch or Deep Learning With Python. You should also deepen your knowledge of other machine learning algorithms and techniques. Two of my favorite books are Hands-on Machine Learning and Python Machine Learning.

You can also pick up a lot of knowledge browsing machine learning and deep learning forums such as the r/MachineLearning and r/deeplearning subreddits, the AI and deep learning Facebook group, or by following AI researchers on Twitter.

The AI universe is vast and quickly expanding, and there is a lot to learn. If this is your first book on deep learning, then this is the beginning of an amazing journey.


See the article here:
If you know nothing about deep learning with Python, start here - TechTalks

Mental health diagnoses and the role of machine learning – Health Europa

It is common for patients with psychosis or depression to experience symptoms of both conditions, so traditionally a mental health diagnosis has named a primary illness with secondary symptoms of the other.

Making an accurate diagnosis often poses difficulties for mental health clinicians, and diagnoses often do not accurately reflect the complexity of individual experience or neurobiology. For example, a patient diagnosed with psychosis will often have depression regarded as a secondary condition, with more focus on the psychosis symptoms, such as hallucinations or delusions; this has implications for patients' treatment decisions.

A team at the University of Birmingham's Institute for Mental Health and Centre for Human Brain Health, along with researchers at the European Union-funded PRONIA consortium, explored the possibility of using machine learning to create highly accurate models of 'pure' forms of both illnesses, and of using these models to investigate the diagnostic accuracy of a cohort of patients with mixed symptoms. The results of the study have been published in Schizophrenia Bulletin.

Paris Alexandros Lalousis, lead author, explains: "The majority of patients have co-morbidities, so people with psychosis also have depressive symptoms and vice versa. That presents a big challenge for clinicians in terms of diagnosing and then delivering treatments that are designed for patients without co-morbidity. It's not that patients are misdiagnosed, but the current diagnostic categories we have do not accurately reflect the clinical and neurobiological reality."

The researchers analysed questionnaire responses and detailed clinical interviews, as well as data from structural magnetic resonance imaging, from a cohort of 300 patients taking part in the study. From this group, they identified small subgroups of patients who could be classified as suffering either from psychosis without any symptoms of depression, or from depression without any psychotic symptoms.

Using the collected data, the research team identified machine learning models of 'pure' depression and 'pure' psychosis, with the goal of developing a precise disease profile for each patient and testing it against their diagnosis to see how accurate it was. They were then able to apply these models to patients with symptoms of both illnesses.
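In outline, the approach resembles the following sketch: fit a classifier on the 'pure' cases only, then score the co-morbid patients against it. This is not the PRONIA team's actual pipeline; the features and labels below are hypothetical stand-ins for the imaging and clinical data described above.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_pure = rng.normal(size=(120, 20))    # hypothetical imaging/clinical features
y_pure = rng.integers(0, 2, size=120)  # 0 = pure depression, 1 = pure psychosis

model = LogisticRegression(max_iter=1000).fit(X_pure, y_pure)

X_comorbid = rng.normal(size=(30, 20))          # patients with mixed symptoms
scores = model.predict_proba(X_comorbid)[:, 1]  # how "psychosis-like" each patient looks
print(scores.round(2))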

The team discovered that patients with depression as a primary illness were more likely to have accurate mental health diagnoses, whereas patients with psychosis with depression had symptoms which most frequently leaned towards the depression dimension. This may suggest that depression plays a greater part in the illness than had previously been thought.

Lalousis added: "There is a pressing need for better treatments for psychosis and depression, conditions which constitute a major mental health challenge worldwide. Our study highlights the need for clinicians to better understand the complex neurobiology of these conditions and the role of co-morbid symptoms; in particular, to consider carefully the role that depression is playing in the illness."

"In this study we have shown how sophisticated machine learning algorithms that take into account clinical, neurocognitive, and neurobiological factors can aid our understanding of the complexity of mental illness. In the future, we think machine learning could become a critical tool for accurate diagnosis. We have a real opportunity to develop data-driven diagnostic methods; this is an area in which mental health is keeping pace with physical health, and it's really important that we keep up that momentum."

The rest is here:
Mental health diagnoses and the role of machine learning - Health Europa

5 Ways the IoT and Machine Learning Improve Operations – BOSS Magazine


By Emily Newton

The Internet of Things (IoT) and machine learning are two of the most disruptive technologies in business today. Separately, both of these innovations can bring remarkable benefits to any company. Together, they can transform your business entirely.

The intersection of IoT devices and machine learning is a natural progression. Machine learning needs large pools of relevant data to work at its best, and the IoT can supply it. As adoption of both soars, companies should start using them in conjunction.

Here are five ways the IoT and machine learning can improve operations in any business.

Around 25% of businesses today use IoT devices, and this figure will keep climbing. As companies implement more of these sensors, they add places where they can gather data. Machine learning algorithms can then analyze this data to find inefficiencies in the workplace.

Looking at various workplace data, a machine learning program could spot where a company spends an unusually large amount of time. It could then suggest a new workflow that reduces the effort employees expend in that area. Business leaders might never have realized this was a problem area without machine learning.

Machine learning programs are skilled at making connections between data points that humans may miss. They can also make predictions 20 times earlier than traditional tools, and with greater accuracy. With IoT devices feeding them more data, they'll only become faster and more accurate.
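A small sketch makes the pattern concrete: pool the sensor readings and let an unsupervised model flag the unusual ones for human review. The simulated readings below stand in for real IoT data, and Isolation Forest is one reasonable algorithm choice among many:

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Hypothetical per-machine-hour features: energy use, cycle time, idle time.
normal = rng.normal([50.0, 12.0, 3.0], [5.0, 1.0, 0.5], size=(500, 3))
anomalies = rng.normal([80.0, 20.0, 9.0], [5.0, 1.0, 0.5], size=(10, 3))
readings = np.vstack([normal, anomalies])

detector = IsolationForest(contamination=0.02, random_state=0).fit(readings)
flags = detector.predict(readings)  # -1 marks suspected inefficiencies
print(f"{(flags == -1).sum()} machine-hours flagged for review")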

Machine learning and the IoT can also automate routine tasks. Business process automation (BPA) leverages AI to handle a range of administrative tasks so workers don't have to. As IoT devices feed more data into these programs, they become even more effective.

Over time, technology like this has contributed to a 40% productivity increase in some industries. Automating and streamlining tasks like scheduling and record-keeping frees employees to focus on other, value-adding work. BPA's potential doesn't stop there, either.

BPA can automate more than straightforward data manipulation tasks. It can talk to customers, plan and schedule events, run marketing campaigns and more. With more comprehensive IoT implementation, it would have access to more areas, becoming even more versatile.

One of the most promising areas for IoT implementation is in the supply chain. IoT sensors in vehicles or shipping containers can provide companies with critical information like real-time location data or product quality. This data alone improves supply chain visibility, but paired with machine learning, it could transform your business.

Machine learning programs can take this real-time data from IoT sensors and put it into action. It could predict possible disruptions and warn workers so they can respond accordingly. These predictive analytics could save companies the all-too-familiar headache of supply chain delays.

UPS's Orion tool is the gold standard for what machine learning can do for supply chains. The system has saved the shipping giant 10 million gallons of fuel a year by adjusting routes on the fly based on traffic and weather data.

If a company can't understand the vulnerabilities it faces, business leaders can't make fully informed decisions. IoT devices can provide the data businesses need to get a better understanding of these risks. Machine learning can take it a step further and find points of concern in this data that humans could miss.

IoT devices can gather data about the workplace or customers that machine learning programs then process. For example, Progressive has made more than 1.7 trillion observations about its customers' driving habits through Snapshot, an IoT tracking device. These analytics help the company adjust clients' insurance rates based on the dangers their driving presents.

Business risks aren't the only hazards the Internet of Things and machine learning can predict. IoT air quality sensors could alert businesses when to change HVAC filters to protect employee health. Similarly, machine learning cybersecurity programs could sense when hackers are trying to infiltrate a company's network.

Another way the IoT and machine learning could transform your business is by eliminating waste. Data from IoT sensors can reveal where the company could be using more resources than it needs. Machine learning algorithms can then analyze this data to suggest ways to improve.

One of the most common culprits of waste in businesses is energy. Thanks to various inefficiencies, 68% of power in America ends up wasted. IoT sensors can measure where this waste is happening, and with machine learning, adjust to stop it.

Machine learning algorithms in conjunction with IoT devices could restrict energy use, so processes only use what they need. Alternatively, they could suggest new workflows or procedures that would be less wasteful. While many of these steps may seem small, they add up to substantial savings.

Without the IoT and machine learning, businesses can't reach their full potential. These technologies enable savings companies couldn't achieve otherwise. As they advance, they'll only become more effective.

The Internet of Things and machine learning are reshaping the business world. Those that don't take advantage of them now could soon fall behind.

Emily Newton is the Editor-in-Chief of Revolutionized, a magazine exploring how innovations change our world. She has over three years' experience writing articles in the industrial and tech sectors.

Link:
5 Ways the IoT and Machine Learning Improve Operations - BOSS Magazine

There Is No Silver Bullet Machine Learning Solution – Analytics India Magazine


A recommendation engine is a class of machine learning algorithm that suggests products, services, or information to users based on analysis of data. Robust recommendation systems are a key differentiator in the operations of big companies such as Netflix, Amazon, and ByteDance (TikTok's parent).

Alok Menthe, Data Scientist at Ericsson, gave an informative talk on building custom recommendation engines for real-world problems at the Machine Learning Developers Summit (MLDS) 2021. "Whenever a niche business problem comes in, it has complicated, intertwined ways of working. Standard ML techniques may be inadequate and might not serve the customer's purpose. That is where the need for a custom-made engine comes in. We were also faced with such a problem with our service network unit at Ericsson," he said.

Menthe said the unit wanted to implement a recommendation system to provide suggestions for assignment workflow: a model to delegate incoming projects to the most appropriate team or resource pool.

[Figure omitted; credit: Alok Menthe]

There were three kinds of data available:

Pool definition data: It relates to the composition of a particular resource pool, including the number of people, their competence, and other metadata.

Historical demand data: This kind of data helps in establishing a relationship between demand features and a particular resource pool.

Transactional data: It is used for operational purposes.

Menthe said building a custom recommendation system in this context involves the following steps:

[Figure omitted; credit: Alok Menthe]

"After building our model, the most difficult part was feature engineering, which is imperative for building an efficient system. Among the two major modules, classification and clustering, we faced challenges with respect to the latter. We had only categorical information, making it difficult to find distances within the objects. We went out of the box to see if we could do any special encoding for the data. We adopted data encoding techniques and frequency-based encoding in this regard," said Menthe.
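Frequency-based encoding itself is compact. Here is a minimal sketch with pandas, using a made-up column rather than Ericsson's data: each category is replaced by its relative frequency, giving purely categorical records a numeric form that distance-based methods can work with.

import pandas as pd

df = pd.DataFrame({"competence": ["radio", "core", "radio", "cloud", "radio", "core"]})
freq = df["competence"].value_counts(normalize=True)  # category -> relative frequency
df["competence_freq"] = df["competence"].map(freq)
print(df)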

Clustering module: For this module, the team initially implemented K-modes and agglomerative clustering. However, the results were far from perfect, prompting the team to fall back on the good old K-means algorithm. Evaluation was done manually with the help of subject matter experts.

The final model had 700 resource pools condensed to 15 pool clusters.

Classification module: For this module, three kinds of algorithms were iterated on: Random Forest, Artificial Neural Network, and XGBoost. Classification accuracy was used as the evaluation metric. Finally, trained on 5,000,000 records, this module demonstrated an accuracy of 71 percent.
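The shape of such a module is straightforward to sketch with scikit-learn, here with a Random Forest (one of the three algorithms tried) and accuracy as the metric. The synthetic data below merely stands in for the millions of demand records and 15 pool-cluster targets described above:

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in: 15 classes to mirror the 15 pool clusters.
X, y = make_classification(n_samples=5000, n_features=20, n_classes=15,
                           n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {accuracy_score(y_te, clf.predict(X_te)):.2f}")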

Menthe said this recommendation model is monitored on a fortnightly basis by validating the suggested pools against the pools actually allocated for project demands.

The model has proved to be successful on three fronts:

Menthe summarised the three major takeaways from this project in his concluding remarks: the need to preserve business nuances in ML solutions; thinking beyond standard ML approaches; and understanding that there is no silver bullet ML solution.


Read the original post:
There Is No Silver Bullet Machine Learning Solution - Analytics India Magazine

Postdoctoral Research Associate in Digital Humanities and Machine Learning job with DURHAM UNIVERSITY | 246392 – Times Higher Education (THE)

Department of Computer Science

Grade 7: £33,797 - £40,322 per annum. Fixed Term, Full Time. Contract Duration: 7 months. Contracted Hours per Week: 35. Closing Date: 13-Mar-2021, 7:59:00 AM

Durham University

Durham University is one of the world's top universities with strengths across the Arts and Humanities, Sciences and Social Sciences. We are home to some of the most talented scholars and researchers from around the world who are tackling global issues and making a difference to people's lives.

The University sits in a beautiful historic city where it shares ownership of a UNESCO World Heritage Site with Durham Cathedral, the greatest Romanesque building in Western Europe. A collegiate University, Durham recruits outstanding students from across the world and offers an unmatched wider student experience.

Less than 3 hours north of London, and an hour and a half south of Edinburgh, County Durham is a region steeped in history and natural beauty. The Durham Dales, including the North Pennines Area of Outstanding Natural Beauty, are home to breathtaking scenery and attractions. Durham offers an excellent choice of city, suburban and rural residential locations. The University provides a range of benefits including pension and childcare benefits, and the University's Relocation Manager can assist with potential schooling requirements.

Durham University seeks to promote and maintain an inclusive and supportive environment for work and study that assists all members of our University community to reach their full potential. Diversity brings strength and we welcome applications from across the international, national and regional communities that we work with and serve.

The Department

The Department of Computer Science is rapidly expanding. A new building for the department (joint with Mathematical Sciences) has recently opened to house the expanded Department. The current Department has research strengths in (1) algorithms and complexity, (2) computer vision, imaging, and visualisation and (3) high-performance computing, cloud computing, and simulation. We work closely with industry and government departments. Research-led teaching is a key strength of the Department, which came 5th in the Complete University Guide. The department offers BSc and MEng undergraduate degrees and is currently redeveloping its interdisciplinary taught postgraduate degrees. The size of its student cohort has more than trebled in the past five years. The Department has an exceptionally strong External Advisory Board that provides strategic support for developing research and education, consisting of high-profile industrialists and academics. Computer Science is one of the very best UK Computer Science Departments with an outstanding reputation for excellence in teaching, research and employability of our students.

The Role

Postdoctoral Research Associate to work on the AHRC-funded project "Visitor Interaction and Machine Curation in the Virtual Liverpool Biennial".

The project looks at virtual art exhibitions that are curated by machines, or even co-curated by humans and machines; and how audiences interact with these exhibitions in the era of online art shows. The project is in close collaboration with the 2020 (now 2021) Liverpool Biennial (http://biennial.com/). The role of the post holder is, along with the PI Leonardo Impett, to implement different strategies of user-machine interaction for virtual art exhibits; and to investigate the interaction behaviour of different types of users with such systems.

Responsibilities:

This post is fixed term until 31 August 2021, as the research project is time limited and will end on 31 August 2021.

The post-holder is employed to work on research/a research project which will be led by another colleague. Whilst this means that the post-holder will not be carrying out independent research in his/her own right, the expectation is that they will contribute to the advancement of the project, through the development of their own research ideas/adaptation and development of research protocols.

Successful applicants will, ideally, be in post by February 2021.

How to Apply

For informal enquiries please contact Dr Leonardo Impett (leonardo.l.impett@durham.ac.uk). All enquiries will be treated in the strictest confidence.

We prefer to receive applications online via the Durham University Vacancies Site: https://www.dur.ac.uk/jobs/. As part of the application process, you should provide details of 3 (preferably academic/research) referees and the details of your current line manager so that we may seek an employment reference.

Applications are particularly welcome from women and black and minority ethnic candidates, who are under-represented in academic posts in the University. We are committed to equality: if for any reason you have taken a career break or periods of leave that may have impacted your career path, such as maternity, adoption or parental leave, you may wish to disclose this in your application. The selection committee will recognise that this may have reduced the quantity of your research accordingly.

What to Submit

All applicants are asked to submit:

The Requirements

Essential:

Qualifications

Experience

Skills

Desirable:

Experience

Skills

DBS Requirement: Not Applicable.

Original post:
Postdoctoral Research Associate in Digital Humanities and Machine Learning job with DURHAM UNIVERSITY | 246392 - Times Higher Education (THE)

The Collision of AI’s Machine Learning and Manipulation: Deepfake Litigation Risks to Companies from a Product Liability, Privacy, and Cyber…

AI and machine-learning advances have made it possible to produce fake videos and photos that seem real, commonly known as "deepfakes." Deepfake content is exploding in popularity.[i] In Star Wars: The Rise of Skywalker, for instance, a visage of Carrie Fisher graced the screen, generated through artificial intelligence models trained on historic footage. Using thousands of hours of interviews with Salvador Dalí, the Dalí Museum in Florida created an interactive exhibit featuring the artist.[ii] For Game of Thrones fans miffed over plot holes in the season finale, Jon Snow can be seen profusely apologizing in a deepfake video that looks all too real.[iii]

Deepfake technology: how does it work? From a technical perspective, deepfakes (also referred to as synthetic media) are made with artificial intelligence and machine-learning models trained on data sets of real photos or videos. These trained algorithms then produce altered media that looks and sounds just like the real deal. Behind the scenes, generative adversarial networks (GANs) power deepfake creation.[iv] With GANs, two AI algorithms are pitted against one another: one creates the forgery while the other tries to detect it, teaching itself along the way. The more data is fed into GANs, the more believable the deepfake will be. Researchers at academic institutions such as MIT, Carnegie Mellon, and Stanford University, as well as large Fortune 500 corporations, are experimenting with deepfake technology.[v] Yet deepfakes are not solely the province of technical universities or AI product development groups. Anybody with an internet connection can download publicly available deepfake software and crank out content.[vi]
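The adversarial loop is easier to see in code. Below is a heavily compressed PyTorch sketch of a GAN's training step, with placeholder data standing in for real photos; every size and hyperparameter here is an illustrative assumption, not a recipe from any production system:

import torch
import torch.nn as nn

DIM, NOISE, BATCH = 784, 64, 32  # flattened 28x28 "images", noise size, batch size
G = nn.Sequential(nn.Linear(NOISE, 256), nn.ReLU(), nn.Linear(256, DIM), nn.Tanh())
D = nn.Sequential(nn.Linear(DIM, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.rand(BATCH, DIM) * 2 - 1  # placeholder for a batch of real photos
    fake = G(torch.randn(BATCH, NOISE))

    # Discriminator: learn to label real as 1 and generated as 0.
    d_loss = (loss_fn(D(real), torch.ones(BATCH, 1)) +
              loss_fn(D(fake.detach()), torch.zeros(BATCH, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: learn to make the discriminator call fakes real.
    g_loss = loss_fn(D(fake), torch.ones(BATCH, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()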

Deepfake risks and abuse. Deepfakes are not always fun and games. Deepfake videos can be used to phish employees into revealing credentials or confidential information, e-commerce platforms may face deepfake circumvention of authentication technologies for purposes of fraud, and intellectual property owners may find their properties featured in videos without authorization. For consumer-facing online platforms, certain actors may attempt to leverage deepfakes to spread misinformation. Another well-documented and unfortunate abuse of deepfake technology is revenge pornography.[vii]

In response, online platforms and consumer-facing companies have begun enforcing limitations on the use of deepfake media. Twitter, for example, announced a policy within the last year prohibiting users from sharing "synthetic or manipulated media that are likely to cause harm." Per its policy, Twitter reserves the right to apply a label or warning to Tweets containing such media.[viii] Reddit also updated its policies to ban content that impersonates individuals or entities in a misleading or deceptive manner (while still permitting satire and parody).[ix] Others have followed. Yet social media and online platforms are not the only industries concerned with deepfakes. Companies across industry sectors, including financial services and healthcare, face growing rates of identity theft and imposter scams in government services, online shopping, and credit bureaus as deepfake media proliferates.[x]

Deepfake legal claims and litigation risks. We are seeing legal claims and litigation relating to deepfakes across multiple vectors:

1. Claims brought by those who object to their appearance in deepfakes. Victims of deepfake media sometimes pursue tort law claims for false light, invasion of privacy, defamation, and intentional infliction of emotional distress. At a high level, these overlapping tort claims typically require the person harmed by the deepfake to prove that the deepfake creator published something that gives a false or misleading impression of the subject in a manner that (a) damages the subject's reputation, (b) would be highly offensive to a reasonable person, or (c) causes mental anguish or suffering. As more companies begin to implement countermeasures, the lack of sufficient safeguards against misleading deepfakes may give rise to a negligence claim. Companies could face negligence claims for failure to detect deepfakes, either alongside the deepfake creator or alone if the creator is unknown or unreachable.

2. Product liability issues related to deepfakes on platforms. Section 230 of the Communications Decency Act shields online companies from claims arising from user content published on the company's platform or website. The law typically bars defamation and similar tort claims. But e-commerce companies can also use Section 230 to dismiss product liability and breach of warranty claims where the underlying allegations focus on a third-party seller's representation (such as a product description or express warranty). Businesses sued for product liability or other tort claims should look to assert Section 230 immunity as a defense where the alleged harm stems from a deepfake video posted by a user. Note, however, that the immunity may be lost where the host platform performs editorial functions with respect to the published content at issue. As a result, it is important for businesses to implement clear policies addressing harmful deepfake videos that apply broadly to all users, and to avoid wading into influencing a specific user's content.

3. Claims from consumers who suffer account compromise due to deepfakes. Multiple claims may arise where cyber criminals leverage deepfakes to compromise consumer credentials for financial, online service, or other accounts. The California Consumer Privacy Act (CCPA), for instance, provides consumers with a private right of action against businesses that violate the duty to implement and maintain reasonable security procedures and practices.[xi] Plaintiffs may also bring claims for negligence, invasion of privacy under common law or certain state constitutions, and violations of state unfair competition or false advertising statutes (e.g., California's Unfair Competition Law and Consumers Legal Remedies Act).

4. Claims available to platforms enforcing Terms of Use prohibitions on certain kinds of deepfakes. Online content platforms may be able to enforce prohibitions on abusive or malicious deepfakes through claims involving breach of contract and potential violations of the Computer Fraud and Abuse Act (CFAA), among others. These claims may turn on nuanced issues around what conduct constitutes "exceeding authorized access" under the CFAA, or around Terms of Use assent and the enforceability of particular provisions.

5. Claims related to state statutes limiting deepfakes. As malicious deepfakes proliferate, several states such as California, Texas, and Virginia have enacted statutes prohibiting their use to interfere with elections or criminalizing pornographic deepfake revenge video distribution.[xii] More such statutes are pending.

Practical tips for companies managing deepfake risks. While every company and situation is unique, companies dealing with deepfakes on their platforms, or as a potential threat vector for information security attacks, can consider several practical avenues to manage risks:

While the future of deepfakes is uncertain, it is apparent that the underlying AI and machine-learning technology is very real and here to stay, presenting both risks and opportunities for organizations across industries.

Here is the original post:
The Collision of AI's Machine Learning and Manipulation: Deepfake Litigation Risks to Companies from a Product Liability, Privacy, and Cyber...

Automated Data Science and Machine Learning Platforms Market Technological Growth and Precise Outlook 2021- Microsoft, MathWorks, SAS, Databricks,…

Global Automated Data Science and Machine Learning Platforms Market Size, Status and Forecast 2021

The Global Automated Data Science and Machine Learning Platforms Market Research Report 2021-2026 is a valuable source of insightful data for business strategists. It provides an industry overview with growth analysis and historical and forecast cost, revenue, demand, and supply data (as applicable). The research analysts provide an elaborate description of the value chain and its distributor analysis. This market study provides comprehensive data that enhances the understanding, scope, and application of the report.

Click the link to get a Sample Copy of the Report:

https://www.marketinsightsreports.com/reports/01122519203/global-automated-data-science-and-machine-learning-platforms-market-growth-status-and-outlook-2020-2025/inquiry?Mode=P68

Market Segmentation:

Key Players: Palantir, Microsoft, MathWorks, SAS, Databricks, Alteryx, H2O.ai, TIBCO Software, IBM, Dataiku, Domino, Altair, Google, RapidMiner, DataRobot, Anaconda, KNIME and others.

Segment by Type: Cloud-based; On-premises

Segment by Application: Small and Medium Enterprises (SMEs); Large Enterprises

Regions Covered by the Automated Data Science and Machine Learning Platforms Market Report 2021 to 2026

For a comprehensive understanding of market dynamics, the global Automated Data Science and Machine Learning Platforms market is analyzed across key geographies, namely: North America (United States, Canada, and Mexico), Europe (Germany, France, UK, Russia, and Italy), Asia-Pacific (China, Japan, Korea, India, and Southeast Asia), South America (Brazil, Argentina, and Colombia), and the Middle East and Africa (Saudi Arabia, UAE, Egypt, Nigeria, and South Africa). Each of these regions is analyzed on the basis of market findings across major countries for a macro-level understanding of the market.

Key Highlights of the Report

Quantitative market information and forecasts for the global Automated Data Science and Machine Learning Platforms industry, segmented by type, end-use, and geographic region.

Expert analysis of the key technological, demographic, economic, and regulatory factors driving growth in the Automated Data Science and Machine Learning Platforms market to 2026.

Market opportunities and recommendations for new investments.

Growth prospects among the emerging nations through 2026.

Browse Full Report at:

https://www.marketinsightsreports.com/reports/01122519203/global-automated-data-science-and-machine-learning-platforms-market-growth-status-and-outlook-2020-2025?Mode=P68

The global Automated Data Science and Machine Learning Platforms market report is organized into 13 chapters:

Chapter 1: Market Overview, Drivers, Restraints and Opportunities, Segmentation overview
Chapter 2: Market competition by Manufacturers
Chapter 3: Production by Regions
Chapter 4: Consumption by Regions
Chapter 5: Production by Types; Revenue and Market share by Types
Chapter 6: Consumption by Applications; Market share (%) and Growth Rate by Applications
Chapter 7: Complete profiling and analysis of Manufacturers
Chapter 8: Manufacturing cost analysis, Raw materials analysis, Region-wise manufacturing expenses
Chapter 9: Industrial Chain, Sourcing Strategy and Downstream Buyers
Chapter 10: Marketing Strategy Analysis, Distributors/Traders
Chapter 11: Market Effect Factors Analysis
Chapter 12: Market Forecast
Chapter 13: Automated Data Science and Machine Learning Platforms Market Research Findings and Conclusion, Appendix, Methodology and Data Source

Finally, the researchers shed light on a pinpoint analysis of global Automated Data Science and Machine Learning Platforms market dynamics. The report also measures the sustainable trends and platforms that are the basic roots of market growth, along with the degree of competition in the market. With the help of SWOT and Porter's Five Forces analyses, the market has been deeply analyzed, which also helps address the risks and challenges in front of businesses. Furthermore, it offers extensive research on sales approaches.

Note: All the reports that we list have been tracking the impact of COVID-19. Both the upstream and downstream of the entire supply chain have been accounted for in doing this. Also, where possible, we will provide an additional COVID-19 update supplement/report in Q3; please check with the sales team.

ABOUT US:

MarketInsightsReports provides syndicated market research on industry verticals including Healthcare, Information and Communication Technology (ICT), Technology and Media, Chemicals, Materials, Energy, Heavy Industry, etc. MarketInsightsReports provides global and regional market intelligence coverage, a 360-degree market view which includes statistical forecasts, competitive landscape, detailed segmentation, key trends, and strategic recommendations.

CONTACT US:

Irfan Tamboli (Head of Sales) Market Insights Reports

Phone: +1 704 266 3234 | +91-750-707-8687

sales@marketinsightsreports.com | irfan@marketinsightsreports.com

Originally posted here:
Automated Data Science and Machine Learning Platforms Market Technological Growth and Precise Outlook 2021- Microsoft, MathWorks, SAS, Databricks,...