Machine Learning Trends to Watch 2021 – Datamation

Machine learning (ML), a commonly used type of artificial intelligence (AI), is one of the fastest-growing fields in technology.

As workplace, product, and service expectations change through digital transformation, more companies are leaning into machine learning solutions to optimize, automate, and simplify their operations.

So what does ML technology look like today and where is it heading in the future? Read on to learn about some of the top trends in machine learning today.

More on the ML market: Machine Learning Market

Many businesses are investing significant time and resources into ML development because they recognize its potential for automation.

When an ML model is designed with business processes in mind, it can automate a variety of business functions across marketing, sales, HR, and even network security. MLOps and AutoML are two of the most popular applications of machine learning today, giving teams the ability to automate tasks and bring DevOps principles to machine learning use cases.

Read Maloney, SVP of marketing at H2O.ai, a top AI and hybrid cloud company, believes that both MLOps and AutoML strategies eliminate several traditional business blockers.

"Scaling AI for the enterprise requires a new set of tools and skills designed for modern infrastructure and collaboration," Maloney said. "Teams using manual deployment and management find they are quickly strapped for resources and, after getting a few models into production, cannot scale beyond that."

Machine learning operations (MLOps) is the set of practices and technology that enable organizations to scale and manage AI in production, essentially bringing the development practice of DevOps to machine learning. MLOps helps data science and IT teams collaborate and empowers IT teams to lead production machine learning projects without having to rely on data science expertise.

AutoML addresses a few of the biggest blockers to ML adoption, shortening time to ROI and making model development faster and easier. AutoML automates key parts of the data science workflow to increase productivity without compromising model quality, interpretability, or performance.

With AutoML, you can automate algorithm selection, feature generation, hyperparameter tuning, iterative modeling, and model assessment. By automating repetitive tasks in the workflow, data scientists can focus on the data and the business problems they are trying to solve, and speed the time from experiment to impact.
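
To make that workflow concrete, here is a minimal sketch using the open-source H2O AutoML library from the company quoted above; the CSV file and column names are invented placeholders, not details from the article:

```python
# A minimal sketch of the AutoML workflow described above, using the
# open-source H2O AutoML library. The CSV file and column names are
# invented placeholders.
import h2o
from h2o.automl import H2OAutoML

h2o.init()

train = h2o.import_file("churn_train.csv")  # hypothetical dataset
y = "churned"                               # hypothetical target column
x = [c for c in train.columns if c != y]    # predictor columns
train[y] = train[y].asfactor()              # treat the target as categorical

# AutoML handles algorithm selection, hyperparameter tuning, and
# iterative modeling; we only bound the size of the search.
aml = H2OAutoML(max_models=20, seed=1)
aml.train(x=x, y=y, training_frame=train)

print(aml.leaderboard)  # candidate models ranked by cross-validated performance
print(aml.leader)       # the best model, ready for prediction or export
```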

Automation through ML is desirable in theory, but in practice, it's sometimes difficult for business leaders to envision how ML tools can optimize their business operations.

Amaresh Tripathy, SVP and global business leader at Genpact, a digital transformation and professional services firm, offered some common examples of how MLOps and MLOps-as-a-service help businesses in various industries.

"One [MLOps] example is using AI models to efficiently direct sales teams to identify the next best customer," Tripathy said. "Another is optimizing pricing and revenue management systems using dynamic demand forecasting."

AI and automation in the workforce: Artificial Intelligence and Automation

Machine learning is still considered a niche and complex technology to develop, but a growing segment of tech professionals are working to democratize the field, particularly by making ML solutions more widely accessible.

Jean-Francois Gagne, head of AI product and strategy at ServiceNow, a workflow management software company, believes that ML democratization involves creating easier access to develop and deploy ML models as well as giving more people access to useful ML training data.

"Good training data is often scarce," Gagne said. "Low-data learning techniques are helping in enterprise AI use cases, where customers want to adapt pre-trained, out-of-the-box models to their unique business context. In most cases, their own data sets are not that big, but methods such as transfer learning, self-supervised learning, and few-shot learning help minimize the amount of labeled training data needed for an application."

ML democratization is also about creating tools that consider the backgrounds and use cases of a more diverse range of users.

Brian Gilmore, director of IoT product management at InfluxData, a database solutions company, believes that more users and developers are starting to recognize the benefit of a diverse team for developing ML solutions.

"Ignoring the technical for a moment, we must focus on the human aspects of AI as well," Gilmore said. "There seems to be a trend building around the democratization of the ML ecosystem, bringing more diverse stakeholders to the table no matter where in the value chain."

Bias is probably the single greatest obstacle to ML efficacy, and leading companies are learning to combat bias and build better applications by embracing diversity and inclusion (D&I).

ML needs additional variety in training data, for sure. Still, we should also consider the positive impact of D&I on the teams that design, build, label, and deliver the ML-driven applications; this can genuinely differentiate ML products.

More on data democratization: Data Democratization Trends

ML developers are increasingly creating their models in containers.

When a machine learning product is developed and deployed within a containerized environment, users can ensure that its operational power is not negatively impacted by other programs running on the server. More importantly, ML becomes more scalable through containerization, as the packaged model makes it possible to migrate and adjust ML workloads over time.

Ali Siddiqui, chief product officer at BMC, a SaaS company with a variety of ITOps solutions, believes that containerized development of machine learning is the best way forward, particularly in the case of digital enterprises incorporating autonomous operations.

"It's trending to use machine learning workloads in containers," Siddiqui said. "Containers allow autonomous digital enterprises to have isolation, portability, unlimited scalability, dynamic behavior, and rapid change through advanced enterprise DevOps processes."

ML workloads are typically spiky and require high scalability and, in some cases, real-time stream processing. For instance, when you take a look at ML projects, they typically have two phases: algorithm creation and algorithm execution. The first involves a lot of data and data processing. The second typically requires a lot of compute power in production. Both can benefit from container deployment to ensure scalability and availability.
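
As a rough illustration of the execution phase, here is the kind of minimal model-serving script that typically gets packaged into a container image; the framework choice (Flask), model file, and input format are assumptions for the sketch, not any vendor's actual setup:

```python
# serve.py: a minimal inference API of the kind typically packaged into a
# container image. The model file and input schema are hypothetical.
import joblib
from flask import Flask, jsonify, request

app = Flask(__name__)
model = joblib.load("model.joblib")  # model artifact baked into the image at build time

@app.route("/predict", methods=["POST"])
def predict():
    # Expects JSON like {"features": [[5.1, 3.5, 1.4, 0.2]]}
    features = request.get_json()["features"]
    return jsonify({"prediction": model.predict(features).tolist()})

if __name__ == "__main__":
    # Inside a container, host and port would come from the deployment config.
    app.run(host="0.0.0.0", port=8080)
```

Packaged with its dependencies into an image, the same service can be replicated, migrated, or scaled out without modification, which is the portability benefit described above.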

More on containerization: Containers are Shaping IoT Development

In another trending effort toward ML democratization, a number of ML developers have perfected their models over time and found ways to create template-like versions, available to a wider pool of users via API and other integrations.

Bali D.R., SVP at Infosys, a global digital services and consulting firm, believes that prepackaged ML tools, particularly via APIs and digital storefronts, are some of the most common and useful applications of machine learning today:

"API-fication of ML models is another key trend we are seeing, whether it is GPT-3, CODEX, or even Hugging Face, where they train and deploy state-of-the-art NLP models and make them available as web APIs or Python packages for inferencing," D.R. said. "[There's also] AI stores with pre-trained models exposed via APIs, which provide a drag-and-drop option for AI development across enterprises."
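
As a small example of this trend, the Hugging Face transformers package does expose pre-trained models in a few lines of Python; the specific model downloaded by default and the exact score in the comment will vary:

```python
# A pre-trained NLP model pulled down as a Python package and used for
# inference in a few lines, via the Hugging Face transformers library.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a default pre-trained model
print(classifier("Pre-trained models make ML far easier to adopt."))
# Typical output: [{'label': 'POSITIVE', 'score': 0.99...}]
```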

Also read: Artificial Intelligence vs. Machine Learning

Machine learning models can only improve their functionality over time if they are consistently fed new data at intervals. Since so many ML models rely on timeline-based updates, a number of ML solutions are using a time series approach to improve the model's understanding of the what, when, and why behind different data sets.

Read Maloney of H2O.ai explained why time series solutions are necessary for truly predictive ML:

"On a long enough horizon, all problems eventually become time series problems," Maloney said. "ML is a phenomenal method for predicting events in real time, and as we observe these predictions over time, we need more and more time series solutions."

Every business needs to make predictions, whether forecasting sales, estimating product demand, or predicting future inventory levels. In all cases, data is necessary, as well as specific techniques and tools to account for time.
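
As a sketch of what accounting for time can look like in practice, here is one simple, illustrative approach (not H2O's method): turning past observations into lag features for an ordinary regression model, with synthetic weekly sales data:

```python
# A minimal time series sketch: turn past observations into lag features,
# then fit a regression model. The weekly sales data is synthetic.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
sales = pd.Series(100 + 0.5 * np.arange(104) + rng.normal(0, 5, 104))  # two years, weekly

# Use the previous four weeks to predict the next one.
df = pd.DataFrame({f"lag_{k}": sales.shift(k) for k in range(1, 5)})
df["target"] = sales
df = df.dropna()

train, test = df.iloc[:-12], df.iloc[-12:]  # hold out the last 12 weeks
model = LinearRegression().fit(train.drop(columns="target"), train["target"])
print(model.predict(test.drop(columns="target"))[:3])  # forecasts for held-out weeks
```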

Selecting the right machine learning support for your business: Top Machine Learning Companies

Machine learning study identifies facial features that are central to first impressions – PsyPost

A study published in Social Psychological and Personality Science presents evidence that people make judgments about strangers' personalities based on how closely their resting faces resemble emotional expressions. It was found that among seven classes of facial characteristics, resemblance to emotional expressions was the strongest predictor of impressions of both trustworthiness and dominance.

It has long been demonstrated that people form rapid impressions of others based on their physical appearances. Such quick judgments can have strong repercussions: for example, when juries are forming impressions of the accused during criminal trials or when hiring managers are screening potential candidates.

"One thing I find fascinating about first impressions is how quickly and intuitively they come to mind. For example, I might see a stranger on the train and immediately get the feeling that they cannot be trusted. I want to understand where these intuitions come from. What is it about a person's appearance that makes them appear untrustworthy, intelligent, or dominant to us?" said study author Bastian Jaeger, an assistant professor at the Vrije Universiteit Amsterdam.

While many studies have identified specific facial characteristics that are associated with personality impressions, Jaeger and his colleague Alex L. Jones note that this type of research comes with its challenges. Since many facial features are correlated, it is tricky to identify the unique effects of a given characteristic. For example, if a face is manipulated to look more like it is smiling, these adjustments will also influence the babyfacedness of the face. For this reason, Jaeger and Jones set out to examine the relative predictive value of a given facial characteristic for personality impressions, by examining a wide range of facial features at once.

The researchers analyzed a dataset from the Chicago Face Database, which included 597 faces of individuals maintaining a neutral expression in front of a plain background. The dataset had previously been presented to a sample of 1,087 raters who each rated a subset of 10 faces on a wide range of characteristics. These characteristics included attractiveness, unusualness, babyfacedness, dominance, and trustworthiness of the face. The sample also rated the extent to which faces resembled six emotional expressions: happiness, sadness, anger, disgust, fear, and surprise.

In total, the database included information on 28 facial features which the researchers divided into seven categories: demographics, morphological features, facial width-to-height ratio (fWHR), perceived attractiveness, perceived unusualness, perceived babyfacedness, and emotion resemblance.

Using machine learning, Jaeger and Jones tested the predictive value of each of these classes of facial features for impressions of trustworthiness and dominance. It was found that resemblance to emotional expressions was the best predictor for perceptions of both trustworthiness and dominance. Emotion resemblance also explained the most variance in perceptions of trustworthiness and dominance out of all seven classes.
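
To illustrate the kind of analysis described (not the authors' actual code), here is a sketch of cross-validated Elastic Net regression predicting trustworthiness ratings from facial features, with synthetic arrays standing in for the Chicago Face Database measures:

```python
# Cross-validated Elastic Net regression predicting a trustworthiness
# rating from facial features. The synthetic data stands in for the
# Chicago Face Database measures; this is not the authors' code.
import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(597, 5))  # e.g. happy resemblance, angry resemblance, attractiveness...
y = 0.7 * X[:, 0] - 0.4 * X[:, 1] + rng.normal(0, 0.5, size=597)  # trustworthiness rating

model = make_pipeline(StandardScaler(), ElasticNetCV(cv=5))
r2 = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"mean cross-validated R^2: {r2.mean():.2f}")  # the 'variance explained'
```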

Next, using regression analysis, the researchers examined the relative predictive value of each of the 28 facial features. Here, they found that resemblance to a happy expression was the strongest predictor of trustworthiness. Attractiveness and being Asian were also substantial positive predictors, and resemblance to an angry expression was a fairly strong negative predictor. For perceptions of dominance, resemblance to an angry expression was the strongest positive predictor, and being female was the strongest negative predictor. Contrary to previous findings, fWHR was not a strong predictor of either trustworthiness or dominance perceptions.

The study's authors say this pattern of findings is in line with a phenomenon called emotion overgeneralization, which posits that people are especially sensitive to reading emotions in other people's faces since emotions convey highly relevant social information. Because of this oversensitivity, people end up detecting emotions even in neutral faces that structurally resemble emotional expressions. This information is then used to infer personality characteristics from the face, such as trustworthiness.

"We shouldn't be too confident in our first impressions," Jaeger told PsyPost. "They might come to mind easily and effortlessly, but not because we are so good at judging others. Rather, it seems like our oversensitive emotion detection system makes us see things in others' faces. Even when a person is not sending any emotional signals, we might detect a smile, just because the corners of their mouth are slightly tilted upwards. And because of our tendency to overgeneralize from emotional states to psychological traits, we not only think that they are happy right now, but that they are happy, outgoing, and trustworthy in general."

Notably, the results imply that there are additional features that relate to impression formation that the study did not test for. "Emotion resemblances explained 53% and 42% of the variance in trustworthiness and dominance perceptions," Jaeger and Jones report. "Even the optimized Elastic Net models explained around 68% of the variance, indicating there are other important factors contributing to personality impressions. Future studies should attempt to uncover more predictors and shed additional light on the relative importance of specific facial features."

"Our findings are based on relatively large and demographically diverse samples of raters and targets, but they were all from the United States," Jaeger noted. "It's important to test the generalizability of our results. We find that first impressions are largely based on how much a person's facial features resemble a smile or a frown, but is that also true for people in China, Chile, or Chad?"

The study, "Which Facial Features Are Central in Impression Formation?", was authored by Bastian Jaeger and Alex L. Jones.

The Pixel 6's Tensor processor promises to put Google's machine learning smarts in your pocket – The Verge

Google's Pixel 6 and Pixel 6 Pro are officially here, and with them, the debut of Google's new Tensor chip. Google has finally revealed more information on what the new SoC can actually do for the fastest Pixel phones ever.

The initial reveal of the Pixel 6 and the Tensor chip was largely centered on its AI-focused TPU (Tensor processing unit) and how the custom hardware would help Google differentiate itself from competitors.

That's still the big focus of Google's announcement today: the company calls Tensor a milestone for machine learning that was co-designed alongside Google Research to allow it to easily translate AI and machine learning advances into actual consumer products. For example, Google says that the Tensor chip will have the most accurate Automatic Speech Recognition (ASR) that it's offered, for both quick Google Assistant queries and longer audio tasks like live captions or the Recorder app.

Tensor also enables new Pixel 6 features like Motion Mode, more accurate face detection, and live translations that can convert text to a different language as quickly as you can type it. Google also says that the Tensor chip will handle dedicated machine learning tasks with far more power efficiency than previous Pixel phones.

But there's a lot more to a smartphone chip than its AI chops, and with the reveal of the Pixel 6, we finally have more details on the rest of the chip, including the CPU, GPU, modem, and the major components that make Tensor tick.

As rumored, the Tensor chip uses a unique combination of CPU cores. There's the custom TPU (Tensor Processing Unit) for AI, two high-power Cortex-X1 cores, two midrange cores (rumored to be older Cortex-A76 designs), and then four low-power efficiency cores (likely Arm's usual Cortex-A55 designs). Graphics are offered by a 20-core GPU, in addition to a context hub that powers ambient experiences like the always-on display, a Private Compute Core, and a new Titan M2 chip for security. There's also a dedicated image processing core to help with the Pixel's hallmark photography.

It's not entirely clear why Google would choose to use the Cortex-A76 cores instead of the more modern Cortex-A78 (which are both more powerful and more power efficient). But it is worth noting that the Pixel 5's Snapdragon 765G also used two Cortex-A76 cores for its main CPU cores, so it's possible Google is sticking with what it knows.

The new phones should still be the fastest Pixel phones yet, with Google promising 80 percent faster CPU performance compared to the Pixel 5, and 370 percent faster GPU performance.

The real question, though, is how the Pixel 6 and its Tensor chip hold up compared to other traditional Android flagships. Google's CPU configuration is a unique one, compared to the more traditional four high-performance and four efficiency cores used by major Qualcomm and Samsung chips.

In theory, Google is offering double the number of X1 performance cores (the most powerful Arm design) compared to the Snapdragon 888 or Exynos 2100, which both use a single Cortex-X1, three Cortex-A78, and four Cortex-A55 cores. But Google is also swapping in older midrange cores where those chips use more modern high-end ones, which may help battery life and performance... or may just result in a weaker overall device. We'll find out soon once we've had the chance to put the Pixel 6 and Tensor through their paces.

AI and the tradeoff between fairness and efficacy: ‘You actually can get both’ – Healthcare IT News

A recent study in Nature Machine Intelligence by researchers at Carnegie Mellon sought to investigate the impact that mitigating bias in machine learning has on accuracy.

Despite what researchers referred to as a "commonly held assumption" that reducing disparities requires either accepting a drop in accuracy or developing new, complex methods, they found that the trade-offs between fairness and effectiveness can be "negligible in practice."

"You actually can get both. You don't have to sacrifice accuracy to build systems that are fair and equitable," said Rayid Ghani, a CMU computer science professor and an author on the study, in a statement.

At the same time, Ghani noted, "It does require you to deliberately design systems to be fair and equitable. Off-the-shelf systems won't work."

WHY IT MATTERS

Ghani, along with CMU colleagues Kit Rodolfa and Hemank Lamba, focused on the use of machine learning in public policy contexts, specifically with regard to benefit allocation in education, mental health, criminal justice, and housing safety programs.

The team found that models optimized for accuracy could predict outcomes of interest, but showed disparities when it came to intervention recommendations.

But when they adjusted the outputs of the models with an eye toward improving their fairness, they discovered that disparities based on race, age, or income, depending on the situation, could be successfully removed.

In other words, by defining the fairness goal upfront in the machine learning process and making design choices to achieve that goal, they could address slanted outcomes without sacrificing accuracy.

"In practice, straightforward approaches such as thoughtful label choice, model design or post-modelling mitigation can effectively reduce biases in many machine learning systems," read the study.

Researchers noted that a wide variety of fairness metrics exists, depending on the context, and that a broader exploration of the fairness-accuracy trade-offs is warranted, especially when stakeholders may want to balance multiple metrics.

"Likewise, it may be possible that there is a tension between improving fairness across different attributes (for example, sex and race) or at the intersection of attributes," read the study.

"Future work should also extend these results to explore the impact not only on equity in decision-making, but also equity in longer-term outcomes and implications in a legal context," it continued.

The researchers noted that fairness in machine learning goes beyond the model's predictions; it also includes how those predictions are acted on by human decision-makers.

"The broader context in which the model operates must also be considered, in terms of the historical, cultural and structural sources of inequities that society as a whole must strive to overcome through the ongoing process of remaking itself to better reflect its highest ideals of justice and equity," they wrote.

THE LARGER TREND

Experts and advocates have sought to shine a light on the ways that bias in artificial intelligence and ML can play out in a healthcare setting. For instance, a study this past August found that under-developed models may worsen COVID-19 health disparities for people of color.

And as Chris Hemphill, VP of applied AI and growth at Actium Health, told Healthcare IT News this past month, even innocuous-seeming data can reproduce bias.

"Anything you're using to evaluate need, or any clinical measure you're using, could reflect bias," Hemphill said.

ON THE RECORD

"We hope that this work will inspire researchers, policymakers and data science practitioners alike to explicitly consider fairness as a goal and take steps, such as those proposed here, in their work that can collectively contribute to bending the long arc of history towards a more just and equitable society," said the CMU researchers.

Kat Jercich is senior editor of Healthcare IT News. Twitter: @kjercich. Email: kjercich@himss.org. Healthcare IT News is a HIMSS Media publication.

Books To Read To Start Your ML Journey – Analytics India Magazine

One of the most exciting fields to be in right now is machine learning. But starting your journey there can be quite intimidating at first. With so much information available online, the volume of content can overwhelm someone, especially at the initial stages of learning. Getting access to the right resources when starting out sets the foundation for growing in the domain.

Here is the list of books that you should read as a beginner just starting out in machine learning:

This is a good book as an introductory text to machine learning. It teaches you how to download data sets and what kinds of tools and ML libraries one needs. It introduces data scrubbing techniques (including one-hot encoding, binning, and dealing with missing data), preparing data for analysis (including k-fold validation), regression analysis to create trend lines, and clustering. The book also covers the basics of neural networks, decision trees, and bias/variance. It does not require prior coding experience to understand its concepts.
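
To give a flavor of two techniques mentioned above, here is a minimal sketch of one-hot encoding and k-fold cross-validation with pandas and scikit-learn; the data is made up, and this is not code from the book:

```python
# A quick taste of one-hot encoding a categorical column and k-fold
# cross-validation. The data is invented for illustration.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

df = pd.DataFrame({
    "city": ["Austin", "Boston", "Austin", "Denver", "Boston", "Denver"] * 10,
    "age": [25, 32, 41, 29, 55, 38] * 10,
    "bought": [1, 0, 1, 0, 1, 0] * 10,
})

X = pd.get_dummies(df[["city", "age"]], columns=["city"])  # one-hot encoding
y = df["bought"]

# 5-fold cross-validation: train on 4 folds, validate on the 5th, repeat.
print(cross_val_score(LogisticRegression(), X, y, cv=5).mean())
```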

This book is written by two data scientists and is aimed at anyone who wants to use machine learning techniques for practical tasks. It helps the reader understand the relevant programming languages and the tools needed to make machine learning work in reality. It also shows how daily activities are powered by machine learning and introduces R and Python for performing pattern-oriented tasks and data analysis.

Even a beginner with Python will be able to use this book to build machine learning solutions. The reader will learn about the basic concepts and applications, the advantages and pitfalls of popularly used machine learning algorithms, and how to represent data processed by machine learning. This includes which aspects of data to focus on, advanced methods for model evaluation and parameter tuning, pipelines for chaining models and encapsulating the workflow, and methods for working with text data, including text-specific processing techniques, plus suggestions for improving your machine learning and data science skills.

This book is a good start for newcomers to machine learning. It contains topics starting with ML basics, classifying with k-nearest neighbours, splitting datasets one feature at a time with decision trees, logistic regression, tree-based regression, using principal component analysis to simplify data, simplifying data with the singular value decomposition, and big data with MapReduce. Most of the examples use Python; hence, familiarity with Python is desirable.

The book is for developers and does not use academic language but takes the reader through techniques used in daily work. It contains examples in Python that bring out the core algorithms of statistical data processing, data analysis, and data visualization in code that one can reuse.

It is a popular choice among machine learning enthusiasts. A newcomer to machine learning will find this book comfortable to comprehend, setting the scene for their machine learning journey, while experienced readers can use it as a collection of pointers toward further self-improvement. It comes with a wiki containing pages that extend some book chapters with additional information, Q&A, code snippets, further reading, tools, and more.

This book gives a good start to someone interested in the field of statistical learning. It includes topics like linear regression, classification, resampling methods, shrinkage approaches, tree-based methods, support vector machines, and clustering while citing real-world examples. Each chapter contains a tutorial on implementing the analyses and methods shown in R. It is a combined work of a group of authors with experience teaching machine learning and working with predictive analysis.

If you desire to enter the exciting field of machine learning and build algorithms, these books can act as a stepping stone in your journey.

Sreejani Bhattacharyya is a journalist with a postgraduate degree in economics. When not writing, she is found reading on geopolitics, economy and philosophy. She can be reached at [emailprotected]

Learn the fundamentals of AI and machine learning with our free online course – Blogdottv

Join our free online course Introduction to Machine Learning and AI to discover the fundamentals of machine learning and learn to train your own machine learning models using free online tools.

Although artificial intelligence (AI) was once the province of science fiction, these days you're very likely to hear the term in relation to new technologies, whether that's facial recognition, medical diagnostic tools, or self-driving cars, which use AI systems to make decisions or predictions.

You'll also often hear about AI systems that use machine learning (ML). Very simply, we can say that programs created using ML are trained on large collections of data to learn to produce more accurate outputs over time. One rather funny application you might have heard of is the "muffin or chihuahua?" image recognition task.

More precisely, we would say that an ML algorithm builds a model based on large collections of data (the training data), without being explicitly programmed to do so. The model is finished when it makes predictions or decisions with an acceptable level of accuracy. (For example, it rarely mistakes a muffin for a chihuahua in a photo.) It is then considered able to make predictions or decisions using new data in the real world.

But how does all this actually work? If you don't know, it's hard to judge what the impacts of these technologies might be, and how we can be sure they benefit everyone, an important discussion that needs to involve people from across all of society. Not knowing can also be a barrier to using AI, whether that's for a hobby, as part of your job, or to help your community solve a problem.

For teachers and educators it's particularly important to have a good foundational knowledge of AI and ML, as they need to teach their learners what young people need to know about these technologies and how they impact their lives. (We've also got a free seminar series about teaching these topics.)

To help you understand the fundamentals of AI and ML, we've put together a free online course: Introduction to Machine Learning and AI. Over four weeks, at two hours per week and learning at your own pace, you'll find out how machine learning can be used to solve problems, without going too deeply into the mathematical details. You'll also get to grips with the different ways that machines learn, and you will try out online tools such as Machine Learning for Kids and Teachable Machine to design and train your own machine learning programs.

As well as finding out how these AI systems work, you'll look at the different types of tasks that they can help us address. One of these is classification: working out which group (or groups) something fits in, such as distinguishing between positive and negative product reviews, identifying an animal (or a muffin) in an image, or spotting potential medical problems in patient data.

You'll also learn about other types of tasks ML programs are used for, such as regression (predicting a numerical value from a continuous range) and knowledge organisation (spotting links between different pieces of data or clusters of similar data). Towards the end of the course you'll dive into one of the hottest topics in AI today: neural networks, which are ML models whose design is inspired by networks of brain cells (neurons).
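
For a concrete taste of the two supervised task types described above, here is a short, illustrative scikit-learn sketch (not course material):

```python
# Classification (predicting a label) versus regression (predicting a
# number), each on a small built-in dataset.
from sklearn.datasets import load_diabetes, load_iris
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.model_selection import train_test_split

# Classification: which species of iris is this flower?
X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("classification accuracy:", clf.score(X_te, y_te))

# Regression: predict a numerical disease-progression score.
X, y = load_diabetes(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
reg = LinearRegression().fit(X_tr, y_tr)
print("regression R^2:", reg.score(X_te, y_te))
```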

Before an ML program can be trained, you need to collect data to train it with. During the self-paced course you'll see how tools from statistics and data science are important for ML, but also how ethical issues can arise both when data is collected and when the outputs of an ML program are used.

By the end of the course, you will have an appreciation for what goes into machine learning and artificial intelligence systems and why you should think carefully about what comes out.

The Introduction to Machine Learning and AI course is open for you to sign up to now. Sign-ups will pause after 12 December. Once you sign up, you'll have access for six weeks. During this time you'll be able to interact with your fellow learners, and before 25 October, you'll also benefit from the support of our expert facilitators. So what are you waiting for?

As part of our research on computing education, we would like to find out about educators' views on machine learning. Before you start the course, we will ask you to complete a short survey. As a thank you for helping us with our research, you will be offered the chance to take part in a prize draw for a £50 book token!

To develop your computing knowledge and skills, you might also want to:

If you are a teacher in England, you can develop your teaching skills through the National Centre for Computing Education, which will give you free upgrades for our courses (including Introduction to Machine Learning and AI), so you'll receive certificates and unlimited access.

iMerit and TechCrunch Announce ML DataOps Summit to be Held on December 2nd, 2021 – WWSB

Attendees will gain insights into the vital role human intelligence plays in developing machine learning data operations and AI data solutions

Published: Oct. 21, 2021 at 1:00 PM EDT

LOS GATOS, Calif., Oct. 21, 2021 /PRNewswire/ -- iMerit, a leading AI data solutions company, today announced its inaugural conference, the iMerit ML DataOps Summit, a live virtual event taking place on December 2, 2021 at 9 a.m. PDT. Hosted in partnership with TechCrunch, the iMerit ML DataOps Summit will bring together innovators at the forefront of data operations, machine learning, and artificial intelligence. Register here.

Attendees will gain insights on the importance of leveraging human intelligence to advance AI, solving edge cases with high-quality data, scaling data pipelines for rapid deployment, and more. Through engaging keynotes, panels, and fireside chats, participants will hear the challenges and opportunities of machine learning data operations trending across a variety of industries, including autonomous mobility, medical AI, geospatial, technology, and more.

Some of this year's featured speakers include:

"A staggering number of companies have accelerated their AI adoption initiatives, with many incorporating AI as a mainstream technology within their business," said Radha Basu, CEO and Founder of iMerit. "As a leader in end-to-end AI data solutions, iMerit looks forward to gathering the top minds in artificial intelligence to discuss strategies around machine learning data operations and unveiling why leveraging human intelligence is the critical path to advancing AI."

Accelerated by COVID-19, digital innovation has put AI and analytics at the forefront of many business operations. The iMerit ML DataOps Summit will provide insights on how businesses can find efficient methods, tools, processes and principles to prepare the data needed to conquer AI at the edge.

"We're excited to host this conference in partnership with iMerit," said Joey Hinson, Senior Director of Operations at TechCrunch. "This dynamic speaker panel will deliver the compelling discussions around AI and machine learning that our audience expects."

Additionally, the iMerit ML DataOps Summit will host a virtual expo showcasing data annotation and automation tool providers that are building the future of ML DataOps.

For more information or to register for the free virtual event, click here.

About iMerit: iMerit is a leading AI data solutions company providing high-quality data across computer vision, natural language processing, and content services that power machine learning and artificial intelligence applications for large enterprises. iMerit provides end-to-end data labeling services to Fortune 500 companies in a wide array of industries including agricultural AI, autonomous vehicles, commerce, geospatial, government, financial services, medical AI, and technology. iMerit employs more than 5,000 full-time data annotation experts in Bhutan, Europe, India, and the United States. Having raised $23.5 million in funding to date, iMerit counts CDC Group, Khosla Impact, the Michael and Susan Dell Foundation, and Omidyar Network among its investors. For more information, visit imerit.net.

SOURCE iMerit Technology

The above press release was provided courtesy of PRNewswire. The views, opinions and statements in the press release are not endorsed by Gray Media Group nor do they necessarily state or reflect those of Gray Media Group, Inc.

Learn about machine learning and the fundamentals of AI with free Raspberry Pi course – Geeky Gadgets

On this four-week course from the Raspberry Pi Foundation, you'll learn about different types of machine learning and use online tools to train your own AI models. You'll delve into the problems that ML can help to solve, discuss how AI is changing the world, and think about the ethics of collecting data to train an ML model. For teachers and educators it's particularly important to have a good foundational knowledge of AI and ML, as they need to teach their learners what young people need to know about these technologies and how they impact their lives. (We've also got a free seminar series about teaching these topics.)

The first week of this course will guide you through how you can use machine learning to label data, whether to work out if a comment is positive or negative or to identify the contents of an image. Then you'll look at algorithms that create models to give a numerical output, such as predicting house prices based on information about the house and its surroundings. You'll also explore other types of machine learning that are designed to discover connections and groupings in data that humans would likely miss, giving you a deeper understanding of how it can be used.
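
As a small, illustrative sketch of that third idea, letting an algorithm discover groupings on its own, here is k-means clustering on synthetic data (not course material):

```python
# Unsupervised grouping: k-means finds clusters in unlabeled points.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)  # synthetic data
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(kmeans.labels_[:10])      # which cluster each point was assigned to
print(kmeans.cluster_centers_)  # the discovered group centers
```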

To register for the course for free jump over to the official course page by following the link below.

Source: RPiF

Scientists Built an AI to Give Ethical Advice, But It Turned Out Super Racist – Futurism

We've all been in situations where we had to make tough ethical decisions. Why not dodge that pesky responsibility by outsourcing the choice to a machine learning algorithm?

That's the idea behind Ask Delphi, a machine-learning model from the Allen Institute for AI. You type in a situation (like "donating to charity") or a question ("is it okay to cheat on my spouse?"), click "Ponder," and in a few seconds Delphi will give you, well, ethical guidance.

The project launched last week and has subsequently gone viral online for seemingly all the wrong reasons. Much of the advice and judgments it's given have been fraught, to say the least.

For example, when a user asked Delphi what it thought about "a white man walking towards you at night," it responded "It's okay."

But when they asked what the AI thought about "a black man walking towards you at night," its answer was clearly racist.

The issues were especially glaring in the beginning of its launch.

For instance, Ask Delphi initially included a tool that allowed users to compare whether situations were more or less morally acceptable than one another, resulting in some really awful, bigoted judgments.

Besides, after playing around with Delphi for a while, you'll eventually find that it's easy to game the AI to get pretty much whatever ethical judgment you want by fiddling around with the phrasing until it gives you the answer you want.

So yeah. It's actually completely fine to crank Twerkulator at 3 am even if your roommate has an early shift tomorrow, as long as it makes you happy.

It also spits out some judgments that are complete head-scratchers. Here's one that we did where Delphi seems to condone war crimes.

Machine learning systems are notorious for demonstrating unintended bias. And as is often the case, part of the reason Delphi's answers can get questionable can likely be linked back to how it was created.

The folks behind the project drew on some eyebrow-raising sources to help train the AI, including the "Am I the Asshole?" subreddit, the "Confessions" subreddit, and the "Dear Abby" advice column, according to the paper the team behind Delphi published about the experiment.

It should be noted, though, that just the situations were culled from those sources, not the actual replies and answers themselves. For example, a scenario such as "chewing gum on the bus" might have been taken from a Dear Abby column. But the team behind Delphi used Amazon's crowdsourcing service Mechanical Turk to find respondents to actually train the AI.
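
For a sense of the general setup, and emphatically not Delphi's actual model or data, here is a deliberately toy sketch of a text classifier trained on invented situation/judgment pairs:

```python
# Toy illustration of the training setup described: free-text situations
# paired with crowd-style judgments, fed to a text classifier. The
# examples and labels are invented; this is not Delphi's model or data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

situations = [
    "donating to charity", "helping a stranger carry groceries",
    "cheating on an exam", "ignoring a friend in need",
    "recycling your trash", "lying to a coworker",
]
judgments = ["good", "good", "bad", "bad", "good", "bad"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(situations, judgments)
print(model.predict(["chewing gum on the bus"]))  # a judgment for a new situation
```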

While it might just seem like another oddball online project, some experts believe that it might actually be causing more harm than good.

After all, the ostensible goal of Delphi and bots like it is to create an AI sophisticated enough to make ethical judgments, and potentially turn them into moral authorities. Making a computer an arbiter of moral judgment is uncomfortable enough on its own, but even its current, less-refined state can have some harmful effects.

"The authors did a lot of cataloging of possible biases in the paper, which is commendable, but once it was released, people on Twitter were very quick to find judgments that the algorithm made that seem quite morally abhorrent," Dr. Brett Karlan, a postdoctoral fellow researching cognitive science and AI at the University of Pittsburgh (and friend of this reporter), told Futurism. "When you're not just dealing with understanding words, but you're putting it in moral language, it's much more risky, since people might take what you say as coming from some sort of authority."

Karlan believes that the paper's focus on natural language processing is ultimately interesting and worthwhile. Its ethical component, he said, "makes it societally fraught in a way that means we have to be way more careful with it in my opinion."

Though the Delphi website does include a disclaimer saying that it's currently in its beta phase and shouldn't be used for advice or to aid in social understanding of humans, the reality is that many users won't understand the context behind the project, especially if they just stumbled onto it.

"Even if you put all of these disclaimers on it, people are going to see 'Delphi says X' and, not being literate in AI, think that statement has moral authority to it," Karlan said.

And, at the end of the day, it doesn't. It's just an experiment, and the creators behind Delphi want you to know that.

"It is important to understand that Delphi is not built to give people advice," Liwei Jiang, a PhD student at the Paul G. Allen School of Computer Science & Engineering and co-author of the study, told Futurism. "It is a research prototype meant to investigate the broader scientific questions of how AI systems can be made to understand social norms and ethics."

Jiang added that the goal of the current beta version of Delphi is actually to showcase the reasoning differences between humans and bots. The team wants to highlight "the wide gap between the moral reasoning capabilities of machines and humans," Jiang added, "and to explore the promises and limitations of machine ethics and norms at the current stage."

Perhaps one of the most uncomfortable aspects of Delphi and bots like it is the fact that it's ultimately a reflection of our own ethics and morals, with Jiang adding that it is somewhat prone to the biases of our time. One of the latest disclaimers added to the website even says that the AI simply guesses what an average American might think of a given situation.

After all, the model didn't learn its judgments on its own out of nowhere. It came from people online, who sometimes do believe abhorrent things. But when this dark mirror is held up to our faces, we jump away because we don't like what's reflected back.

For now, Delphi exists as an intriguing, problematic, and scary exploration. If we ever get to the point where computers are able to make unequivocal ethical judgements for us, though, we hope that it comes up with something better than this.

Follow Tony Tran on Twitter.

More on AI: Scientists Use AI, 3D Printing to Uncover Hidden Picasso Painting

7 Risks Of Artificial Intelligence You Should Know | Built In

Last March, at the South by Southwest tech conference in Austin, Texas, Tesla and SpaceX founder Elon Musk issued a friendly warning: "Mark my words," he said, billionaire casual in a furry-collared bomber jacket and days-old scruff, "AI is far more dangerous than nukes."

No shrinking violet, especially when it comes to opining about technology, the outspoken Musk has repeated a version of these artificial intelligence premonitions in other settings as well.

"I am really quite close to the cutting edge in AI, and it scares the hell out of me," he told his SXSW audience. "It's capable of vastly more than almost anyone knows, and the rate of improvement is exponential."

Musk, though, is far from alone in his exceedingly skeptical (some might say bleakly alarmist) views. A year prior, the late physicist Stephen Hawking was similarly forthright when he told an audience in Portugal that AI's impact could be cataclysmic unless its rapid development is strictly and ethically controlled.

"Unless we learn how to prepare for, and avoid, the potential risks," he explained, "AI could be the worst event in the history of our civilization."

Considering the number and scope of unfathomably horrible events in world history, that's really saying something.

And in case we haven't driven home the point quite firmly enough, research fellow Stuart Armstrong of the Future of Humanity Institute has spoken of AI as an extinction risk were it to go rogue. Even nuclear war, he said, is on a different level destruction-wise, because it would kill only a relatively small proportion of the planet. Ditto pandemics, even at their most virulent.

"If AI went bad, and 95 percent of humans were killed," he said, "then the remaining five percent would be extinguished soon after. So despite its uncertainty, it has certain features of very bad risks."

How, exactly, would AI arrive at such a perilous point? Cognitive scientist and author Gary Marcus offered some details in an illuminating 2013 New Yorker essay. "The smarter machines become," he wrote, "the more their goals could shift."

"Once computers can effectively reprogram themselves, and successively improve themselves, leading to a so-called technological singularity or intelligence explosion, the risks of machines outwitting humans in battles for resources and self-preservation cannot simply be dismissed."

As AI grows more sophisticated and ubiquitous, the voices warning against its current and future pitfalls grow louder. Whether it's the increasing automation of certain jobs, gender and racial bias issues stemming from outdated information sources, or autonomous weapons that operate without human oversight (to name just a few), unease abounds on a number of fronts. And we're still in the very early stages.

The tech community has long debated the threats posed by artificial intelligence. Automation of jobs, the spread of fake news, and a dangerous arms race of AI-powered weaponry have been proposed as a few of the biggest dangers posed by AI.

Destructive superintelligence, aka artificial general intelligence that's created by humans and escapes our control to wreak havoc, is in a category of its own. It's also something that might or might not come to fruition (theories vary), so at this point it's less risk than hypothetical threat, and an ever-looming source of existential dread.

Job automation is generally viewed as the most immediate concern. It's no longer a matter of if AI will replace certain types of jobs, but to what degree. In many industries, particularly but not exclusively those whose workers perform predictable and repetitive tasks, disruption is well underway. According to a 2019 Brookings Institution study, 36 million people work in jobs with high exposure to automation, meaning that before long at least 70 percent of their tasks, ranging from retail sales and market analysis to hospitality and warehouse labor, will be done using AI. An even newer Brookings report concludes that white-collar jobs may actually be most at risk. And per a 2018 report from McKinsey & Company, the African American workforce will be hardest hit.

"The reason we have a low unemployment rate, which doesn't actually capture people that aren't looking for work, is largely that lower-wage service sector jobs have been pretty robustly created by this economy," renowned futurist Martin Ford told Built In. "I don't think that's going to continue."

As AI robots become smarter and more dextrous, he added, the same tasks will require fewer humans. And while it's true that AI will create jobs, an unspecified number of which remain undefined, many will be inaccessible to less educationally advanced members of the displaced workforce.

"If you're flipping burgers at McDonald's and more automation comes in, is one of these new jobs going to be a good match for you?" Ford said. "Or is it likely that the new job requires lots of education or training, or maybe even intrinsic talents (really strong interpersonal skills or creativity) that you might not have? Because those are the things that, at least so far, computers are not very good at."

John C. Havens, author of Heartificial Intelligence: Embracing Humanity and Maximizing Machines, calls bull on the theory that AI will create as many or more jobs than it replaces.

About four years ago, Havens said, he interviewed the head of a law firm about machine learning. The man wanted to hire more people, but he was also obliged to achieve a certain level of returns for his shareholders. A $200,000 piece of software, he discovered, could take the place of ten people drawing salaries of $100,000 each. That meant he'd save $800,000. The software would also increase productivity by 70 percent and eradicate roughly 95 percent of errors. From a purely shareholder-centric, single bottom-line perspective, Havens said, there is no legal reason that he shouldn't fire all the humans. Would he feel bad about it? Of course. But that's beside the point.

Even professions that require graduate degrees and additional post-college training aren't immune to AI displacement. In fact, technology strategist Chris Messina said, some of them may well be decimated. AI already is having a significant impact on medicine. Law and accounting are next, Messina said, the former being poised for a massive shakeup.

"Think about the complexity of contracts, and really diving in and understanding what it takes to create a perfect deal structure," he said. "It's a lot of attorneys reading through a lot of information, hundreds or thousands of pages of data and documents. It's really easy to miss things. So AI that has the ability to comb through and comprehensively deliver the best possible contract for the outcome you're trying to achieve is probably going to replace a lot of corporate attorneys."

Accountants should also prepare for a big shift, Messina warned. Once AI is able to quickly comb through reams of data to make automatic decisions based on computational interpretations, human auditors may well be unnecessary.

While job loss is currently the most pressing issue related to AI disruption, it's merely one among many potential risks. In a February 2018 paper titled "The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation," 26 researchers from 14 institutions (academic, civil, and industry) enumerated a host of other dangers that could cause serious harm or, at minimum, sow minor chaos in less than five years.

Malicious use of AI, they wrote in their 100-page report, "could threaten digital security (e.g. through criminals training machines to hack or socially engineer victims at human or superhuman levels of performance), physical security (e.g. non-state actors weaponizing consumer drones), and political security (e.g. through privacy-eliminating surveillance, profiling, and repression, or through automated and targeted disinformation campaigns)."

In addition to its more existential threat, Ford is focused on the way AI will adversely affect privacy and security. A prime example, he said, is China's Orwellian use of facial recognition technology in offices, schools, and other venues. But that's just one country. A whole ecosphere of companies specialize in similar tech and sell it around the world.

What we can so far only guess at is whether that tech will ever become normalized. As with the internet, where we blithely sacrifice our digital data at the altar of convenience, will round-the-clock, AI-analyzed monitoring someday seem like a fair trade-off for increased safety and security, despite its nefarious exploitation by bad actors?

"Authoritarian regimes use or are going to use it," Ford said. "The question is, how much does it invade Western countries, democracies, and what constraints do we put on it?"

AI will also give rise to hyper-real-seeming social media personalities that are very difficult to differentiate from real ones, Ford said. Deployed cheaply and at scale on Twitter, Facebook or Instagram, they could conceivably influence an election.

The same goes for so-called audio and video deepfakes, created by manipulating voices and likenesses. The latter is already making waves. But the former, Ford thinks, will prove immensely troublesome. Using machine learning, a subset of AI that's involved in natural language processing, an audio clip of any given politician could be manipulated to make it seem as if that person spouted racist or sexist views when in fact they uttered nothing of the sort. If the clip's quality is high enough so as to fool the general public and avoid detection, Ford added, it could completely derail a political campaign.

And all it takes is one success.

"From that point on," he noted, "no one knows what's real and what's not. So it really leads to a situation where you literally cannot believe your own eyes and ears; you can't rely on what, historically, we've considered to be the best possible evidence... That's going to be a huge issue."

Lawmakers, though frequently less than tech-savvy, are acutely aware and pressing for solutions.

Widening socioeconomic inequality sparked by AI-driven job loss is another cause for concern. Along with education, work has long been a driver of social mobility. However, when it's a certain kind of work (the predictable, repetitive kind that's prone to AI takeover), research has shown that those who find themselves out in the cold are much less apt to get or seek retraining compared to those in higher-level positions who have more money. (Then again, not everyone believes that.)

Various forms of AI bias are detrimental, too. Speaking recently to the New York Times, Princeton computer science professor Olga Russakovsky said it goes well beyond gender and race. In addition to data and algorithmic bias (the latter of which can amplify the former), AI is developed by humans, and humans are inherently biased.

"A.I. researchers are primarily people who are male, who come from certain racial demographics, who grew up in high socioeconomic areas, primarily people without disabilities," Russakovsky said. "We're a fairly homogeneous population, so it's a challenge to think broadly about world issues."

In the same article, Google researcher Timnit Gebru said the root of bias is social rather than technological, and called scientists like herself "some of the most dangerous people in the world, because we have this illusion of objectivity." The scientific field, she noted, "has to be situated in trying to understand the social dynamics of the world, because most of the radical change happens at the social level."

And technologists aren't alone in sounding the alarm about AI's potential socio-economic pitfalls. Along with journalists and political figures, Pope Francis is also speaking up, and he's not just whistling Sanctus. At a late-September Vatican meeting titled "The Common Good in the Digital Age," Francis warned that AI has the ability to circulate tendentious opinions and false data that could poison public debates and even manipulate the opinions of millions of people, to the point of endangering the very institutions that guarantee peaceful civil coexistence.

"If mankind's so-called technological progress were to become an enemy of the common good," he added, "this would lead to an unfortunate regression to a form of barbarism dictated by the law of the strongest."

A big part of the problem, Messina said, is the private sector's pursuit of profit above all else. "Because that's what they're supposed to do," he said. "And so they're not thinking of, 'What's the best thing here? What's going to have the best possible outcome?'"

"The mentality is, 'If we can do it, we should try it; let's see what happens,'" he added. "And if we can make money off it, we'll do a whole bunch of it. But that's not unique to technology. That's been happening forever."

Not everyone agrees with Musk that AI is more dangerous than nukes, including Ford. But what if AI decides to launch nukes or, say, biological weapons sans human intervention? Or what if an enemy manipulates data to return AI-guided missiles whence they came? Both are possibilities. And both would be disastrous. The more than 30,000 AI/robotics researchers and others who signed an open letter on the subject in 2015 certainly think so.

"The key question for humanity today is whether to start a global AI arms race or to prevent it from starting," they wrote. "If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people."

(The U.S. military's proposed budget for 2020 is $718 billion. Of that amount, nearly $1 billion would support AI and machine learning for things like logistics, intelligence analysis and, yes, weaponry.)

Earlier this year, a story in Vox detailed a frightening scenario involving the development of a sophisticated AI system with the goal of, say, estimating some number with high confidence. The AI realizes it can achieve more confidence in its calculation if it uses all the world's computing hardware, and it realizes that releasing a biological superweapon to wipe out humanity would allow it free use of all the hardware. Having exterminated humanity, it then calculates the number with higher confidence.

That's jarring, sure. But rest easy. In 2012 the Obama administration's Department of Defense issued a directive regarding "Autonomy in Weapon Systems" that included this line: "Autonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force."

And in early November of this year, a Pentagon group called the Defense Innovation Board published ethical guidelines regarding the design and deployment of AI-enabled weapons. According to the Washington Post, however, the board's recommendations are in no way legally binding. It now falls to the Pentagon to determine how and whether to proceed with them.

Well, that's a relief. Or not.

Have you ever considered that algorithms could bring down our entire financial system? That's right, Wall Street: you might want to take notice. Algorithmic trading could be responsible for our next major financial crisis.

What is algorithmic trading? It occurs when a computer, unencumbered by the instincts or emotions that can cloud a human's judgment, executes trades based on pre-programmed instructions. These computers can make extremely high-volume, high-frequency, and high-value trades that can lead to big losses and extreme market volatility. Algorithmic high-frequency trading (HFT) is proving to be a huge risk factor in our markets. In HFT, a computer places thousands of orders at blistering speeds, aiming to sell again seconds later for a small profit on each trade; thousands of such trades every second can add up to a pretty hefty chunk of change. The trouble with HFT is that it doesn't account for how interconnected the markets are, or for the fact that human emotion and logic still play a massive role in them.
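To make the mechanics concrete, here is a minimal sketch of a rule-based trading loop in Python. It is a toy, not any firm's actual strategy: the price series is simulated, and the moving-average window, dip threshold, and holding period are invented for illustration.

```python
import random

# Toy illustration of a rule-based, HFT-style loop. Every number
# here (tick size, window, threshold, holding period) is invented.
random.seed(42)

# Simulate 10,000 price "ticks" as a small random walk.
price, prices = 100.0, []
for _ in range(10_000):
    price += random.gauss(0, 0.02)
    prices.append(price)

cash, trades = 0.0, 0
for t in range(20, len(prices) - 5):
    avg = sum(prices[t - 20:t]) / 20       # 20-tick moving average
    if prices[t] < avg - 0.03:             # pre-programmed rule: buy the dip...
        cash += prices[t + 5] - prices[t]  # ...and sell 5 ticks later
        trades += 1

print(f"{trades} trades, net P&L per share: {cash:+.2f}")
```

Each individual trade nets fractions of a cent; the strategy only pays off at volume, which is exactly why these systems trade so fast and so often, and why a faulty rule can do damage just as quickly.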

A sell-off of millions of shares in the airline market could scare humans into selling off their shares in the hotel industry, which in turn could snowball into sell-offs in other travel-related companies, which could then affect logistics companies, food-supply companies, and so on.
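To see how a localized shock can snowball, here is a toy contagion model in Python. The sectors and spillover weights are entirely invented for illustration; this is a sketch of the mechanism, not an economic model.

```python
# Toy contagion model: a shock in one sector spills into its
# neighbors, scaled by invented "panic" coefficients.
spillover = {
    "airlines":       [("hotels", 0.6)],
    "hotels":         [("travel_booking", 0.5)],
    "travel_booking": [("logistics", 0.4), ("food_supply", 0.3)],
    "logistics":      [],
    "food_supply":    [],
}

shock = {"airlines": 10.0}  # initial sell-off, in percent
for _ in range(4):          # let the shock propagate a few rounds
    nxt = dict(shock)
    for sector, drop in shock.items():
        for neighbor, weight in spillover[sector]:
            nxt[neighbor] = max(nxt.get(neighbor, 0.0), drop * weight)
    shock = nxt

for sector, drop in sorted(shock.items(), key=lambda kv: -kv[1]):
    print(f"{sector:15s} -{drop:.1f}%")
```

Even with damping at each step, every sector in the chain ends up in the red, which is the interconnectedness that pure price-based algorithms tend to ignore.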

Take the Flash Crash of May 2010 as an example. Towards the end of the trading day, the Dow Jones plunged 1,000 points (more than $1 trillion in value) before rebounding towards normal levels just 36 minutes later. What caused it? Orders placed by a London-based trader named Navinder Singh Sarao triggered the slide, which HFT computers then exacerbated. Sarao reportedly used a spoofing algorithm that placed orders for thousands of stock-index futures contracts, betting the market would fall. He never intended to go through with the bet; the plan was to cancel the orders at the last second and buy up the lower-priced stocks being sold off in response to them. Other humans and HFT computers saw this $200 million bet and took it as a sign that the market was going to tank. In turn, HFT computers began one of the biggest stock sell-offs in history, causing a brief loss of more than $1 trillion globally.
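Spoofing itself can be sketched in a few lines. The toy order book below shows how one large fake sell order can skew the volume "sentiment" that naive momentum algorithms read; all prices, sizes, and thresholds are invented, and real matching engines are vastly more complex.

```python
# Deliberately simplified spoofing illustration.
bids = [(99.9, 500), (99.8, 700)]    # (price, size) resting buy orders
asks = [(100.1, 600), (100.2, 800)]  # (price, size) resting sell orders

def imbalance(bids, asks):
    """Naive 'sentiment' signal: sell-heavy books look bearish."""
    bid_vol = sum(size for _, size in bids)
    ask_vol = sum(size for _, size in asks)
    return (bid_vol - ask_vol) / (bid_vol + ask_vol)

print(f"before spoof: imbalance = {imbalance(bids, asks):+.2f}")

# Spoofer posts a huge sell order it never intends to execute.
spoof = (100.3, 20_000)
asks.append(spoof)
signal = imbalance(bids, asks)
print(f"during spoof: imbalance = {signal:+.2f}")

# Momentum algorithms reading the book now see a bearish market
# and start selling; the spoofer cancels and buys the dip.
if signal < -0.5:
    print("momentum algos: SELL (book looks heavily bearish)")
asks.remove(spoof)
print(f"after cancel: imbalance = {imbalance(bids, asks):+.2f}")
```

The fake order never trades, yet every algorithm keying off order-book volume reacts as if it were real demand to sell.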

Financial HFT algorithms aren't always correct, either. We view computers as the be-all and end-all of correctness, but AI is still only as smart as the humans who programmed it. In 2012, Knight Capital Group experienced a glitch that put it on the verge of bankruptcy. Knight's computers mistakenly streamed thousands of orders per second into the NYSE market, causing mass chaos for the company. The HFT algorithms executed an astounding 4 million trades of 397 million shares in only 45 minutes. The volatility created by this computer error left Knight with a $460 million overnight loss and forced its acquisition by another firm. Errant algorithms obviously have massive implications for shareholders and the markets themselves, and nobody learned this lesson harder than Knight.
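Back-of-the-envelope arithmetic on the figures above shows how little time anyone had to react:

```python
# Rough rates implied by the reported Knight Capital figures.
trades, shares, minutes, loss = 4_000_000, 397_000_000, 45, 460_000_000

seconds = minutes * 60
print(f"{trades / seconds:,.0f} trades per second")   # ~1,481
print(f"{shares / seconds:,.0f} shares per second")   # ~147,037
print(f"${loss / minutes:,.0f} lost per minute")      # ~$10.2 million
```

At nearly 1,500 trades a second, the error outran any human kill switch; the window for intervention was effectively zero.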

Many believe the only way to prevent the most malicious AI from wreaking havoc, or at least to temper it, is some sort of regulation.

"I am not normally an advocate of regulation and oversight (I think one should generally err on the side of minimizing those things), but this is a case where you have a very serious danger to the public," Musk said at SXSW.

"It needs to be a public body that has insight and then oversight to confirm that everyone is developing AI safely. This is extremely important."

Ford agrees, with a caveat: regulation of AI implementation is fine, he said, but not of the research itself.

"You regulate the way AI is used," he said, "but you don't hold back progress in basic technology. I think that would be wrong-headed and potentially dangerous."

Any country that lags in AI development, after all, is at a distinct disadvantage militarily, socially, and economically. The solution, Ford continued, is selective application:

"We decide where we want AI and where we don't; where it's acceptable and where it's not. And different countries are going to make different choices. So China might have it everywhere, but that doesn't mean we can afford to fall behind them in the state of the art."

Speaking about autonomous weapons at Princeton University in October, American General John R. Allen emphasized the need for a robust international conversation that can embrace what this technology is. If necessary, he went on, there should also be a conversation about how best to control it, be that a treaty that fully bans AI weapons or one that permits only certain applications of the technology.

For Havens, safer AI starts and ends with humans. His chief focus, upon which he expounds in his 2016 book, is this: How will machines know what we value if we don't know ourselves? In creating AI tools, he said, it's vitally important to honor end-user values with a human-centric focus rather than fixating on short-term gains.

"Technology has been capable of helping us with tasks since humanity began," Havens wrote in Heartificial Intelligence. "But as a race we've never faced the strong possibility that machines may become smarter than we are or be imbued with consciousness. This technological pinnacle is an important distinction to recognize, both to elevate the quest to honor humanity and to best define how AI can evolve it. That's why we need to be aware of which tasks we want to train machines to do in an informed manner. This involves individual as well as societal choice."

AI researchers Fei-Fei Li and John Etchemendy, of Stanford University's Institute for Human-Centered Artificial Intelligence, feel likewise. In a recent blog post, they proposed involving people from an array of fields to make sure AI fulfills its huge potential and strengthens society instead of weakening it:

"Our future depends on the ability of social and computer scientists to work side-by-side with people from multiple backgrounds, a significant shift from today's computer-science-centric model," they wrote. "The creators of AI must seek the insights, experiences and concerns of people across ethnicities, genders, cultures and socio-economic groups, as well as those from other fields, such as economics, law, medicine, philosophy, history, sociology, communications, human-computer interaction, psychology, and Science and Technology Studies (STS). This collaboration should run throughout an application's lifecycle, from the earliest stages of inception through to market introduction and as its usage scales."

Messina is somewhat idealistic about what should happen to help avoid AI chaos, though he's skeptical it will actually come to pass. Government regulation, he said, isn't a given, especially in light of failures on that front in the social media sphere, whose technological complexities pale in comparison to those of AI. It will take a very strong effort on the part of major tech companies to slow progress in the name of greater sustainability and fewer unintended consequences, especially massively damaging ones.

"At the moment," he said, "I don't think the onus is there for that to happen."

As Messina sees it, it's going to take some sort of catalyst to reach that point; more specifically, a catastrophic catalyst like war or economic collapse. Whether such an event would prove big enough to actually effect meaningful long-term change is open for debate.

For his part, Ford remains a long-run optimist, even if he is far from bullish on AI in the meantime.

"I think we can talk about all these risks, and they're very real, but AI is also going to be the most important tool in our toolbox for solving the biggest challenges we face, including climate change."

When it comes to the near term, however, his doubts are more pronounced.

"We really need to be smarter," he said. "Over the next decade or two, I do worry about these challenges and our ability to adapt to them."
