Facebook to allow parents to monitor their kids’ chat messages – Sussex Express

Facebook has announced plans to add new parental tools to its Messenger app for users under the age of 13.

This will allow parents to monitor their children's chats online, months after concerns were raised about the app's safety.

"Messenger Kids" was launched back in 2017 and allowed children who are too young to have a full Facebook account to still benefit from Facebook chat features.

'One stop grooming shop for predators'

In August, Facebook fixed a flaw in the app that allowed thousands of children to join group chats in which not every participant had been approved by their parents.

End-to-end encryption hides the content of messages from any third party, so that only the sender and the recipient can read them.

Facebook has also been moving to encrypt its messaging services, which include Facebook Messenger and Instagram.

Facebook has said that the new features on Messenger Kids will include access for parents to see their children's chat history.

WhatsApp, also owned by Facebook, is already end-to-end encrypted, and child protection agencies worry that widespread encryption may make detecting online predators more difficult.

The child protection charity the NSPCC said in August that Facebook risked becoming a "one stop grooming shop" for children if it went ahead with its plans for end-to-end encryption.

Encryption can make it more difficult to track down predators online.

Predators can hide behind encryption

Data obtained by the NSPCC via freedom of information requests to the police between April 2018 and 2019 showed that, out of 9,259 instances of children being groomed on a known platform, 4,000 were identified as being on Facebook-owned platforms, including Instagram and WhatsApp.

However, only 299 instances were identified as being from WhatsApp, which the NSPCC says highlights how difficult it becomes to detect crimes on an end-to-end encrypted platform.

The charity believes that, if Facebook goes ahead with the changes, criminals will be able to carry out more serious child abuse on its apps undetected, without needing to lure children off to other encrypted platforms.

Facebook has not confirmed whether Messenger Kids will be encrypted or not. The company said it will inform Messenger Kids users about the types of information others can see about them.

TLS 1.0/1.1 end-of-life countdown heads into the danger zone – The Daily Swig

Web admins have about one month to upgrade

Websites that support encryption protocols no higher than TLS 1.0 or 1.1 have only a few weeks to upgrade before major browsers start returning "Secure Connection Failed" error pages.

Google, Apple, Microsoft, and Mozilla jointly agreed in October 2018 to deprecate the aging protocols by early 2020, a move likely to throttle the traffic flowing to laggard sites yet to upgrade to TLS 1.2 and above.

Mozilla will likely be first to jettison support for TLS 1.0 and 1.1 (21 and 14 years old, respectively) with the release of Firefox 74 on March 10.

Google Chrome 81, slated for launch on March 17, will disable support too, while Apple's next Safari update is expected to land, with support for older encryption suites removed, by the end of the month.

Microsoft is expected to remove support for the moribund protocols from Edge 82 in April and Internet Explorer at around the same time.

Webmasters have been notified about the upcoming switch, for instance through migration advice shown in the developer tools of Firefox 68 and Chrome 72, both launched last year.

In December, Firefox 71 arrived with support disabled in the Nightly build to uncover more sites that aren't able to speak TLS 1.2.

SSL Pulse's latest analysis of Alexa's most popular websites, conducted in February, reveals that of nearly 140,000 websites, just 3.2% fail to support protocols higher than TLS 1.0, and less than 0.1% have a ceiling of TLS 1.1.

Some 71.7% support a maximum of TLS 1.2, while the remaining 25% support the latest version, TLS 1.3.

According to these figures, then, 3.3% of sites could soon be returning "Secure Connection Failed" error pages to visiting surfers.

The Internet Engineering Task Force (IETF), the global guardian for internet standards, is formally deprecating both TLS 1.0 and 1.1.

The National Institute of Standards and Technology (NIST) says it is no longer practical to patch the protocols' existing vulnerabilities, such as the POODLE and BEAST man-in-the-middle attacks.

The protocols neither support the latest cryptographic algorithms nor comply with today's PCI Data Security Standard (PCI DSS) for protecting payment data.

While TLS 1.3, launched in 2018, is now the gold standard, TLS 1.2 is PCI DSS-compliant and remains in good standing despite being more than a decade old.

Both TLS 1.2 and 1.3 are supported by all major browsers. Both support the latest cryptographic cipher suites and algorithms, remove mandatory, insecure SHA-1 and MD5 hash functions as part of peer authentication, and are resilient against downgrade-related attacks like LogJam and FREAK.

Michal Špaček, a developer at Report URI and Password Storage Rating, urges webmasters to take action before it's too late.

If they're unsure about their site's SSL configuration, he recommends using tools like SSL Labs' Server Test and Mozilla Observatory.

If checks reveal that a website fails to support at least TLS 1.2, how should webmasters proceed?

"The short answer is to check with their vendors," Špaček told The Daily Swig. "The slightly longer (and maybe better) answer is to run recent encryption libraries (like OpenSSL) and servers (like Apache or Nginx), all of which support TLS 1.2 and TLS 1.3 - and the latter might even be a one-line change in the supported protocols config option."

He added: "You can also check what protocol is used to access the site in the browser devtools' Security tab."
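For those who prefer a script to the browser devtools, the negotiated protocol can also be checked from the command line. The following is a minimal sketch in Python using only the standard library; example.com is a placeholder hostname, not a site discussed in the article:

```python
import socket
import ssl


def negotiated_tls_version(hostname: str, port: int = 443) -> str:
    """Return the TLS version actually negotiated with a server, e.g. 'TLSv1.3'."""
    context = ssl.create_default_context()
    # Note: recent Python releases (3.10+) already set a minimum of TLS 1.2 on
    # default contexts, mirroring the browser behaviour described above.
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            return tls.version()


if __name__ == "__main__":
    print(negotiated_tls_version("example.com"))
```

A site that only speaks TLS 1.0 or 1.1 will fail the handshake here for the same reason it will soon fail in the browsers.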

In a recent blog post, security researcher Scott Helme points out that "you don't necessarily have to remove support for these Legacy TLS versions, you simply have to make sure that you support at least TLSv1.2 for clients like Chrome/Firefox/Safari to be able to connect."

In a message addressed to developers in September 2019, Mozilla engineer Martin Thomson said: "This is a potentially disruptive change, but we believe that this is good for the security and stability of the web," noting that the number of sites that will be affected is reducing steadily.

Overview of causal inference in machine learning – Ericsson

In a major operator's network control center, complaints are flooding in. The network is down across a large US city; calls are getting dropped and critical infrastructure is slow to respond. Pulling up the system's event history, the manager sees that new 5G towers were installed in the affected area today.

Did installing those towers cause the outage, or was it merely a coincidence? In circumstances such as these, being able to answer this question accurately is crucial for Ericsson.

Most machine learning-based data science focuses on predicting outcomes, not understanding causality. However, some of the biggest names in the field agree it's important to start incorporating causality into our AI and machine learning systems.

Yoshua Bengio, one of the world's most highly recognized AI experts, explained in a recent Wired interview: "It's a big thing to integrate [causality] into AI. Current approaches to machine learning assume that the trained AI system will be applied on the same kind of data as the training data. In real life it is often not the case."

Yann LeCun, a recent Turing Award winner, shares the same view, tweeting: "Lots of people in ML/DL [deep learning] know that causal inference is an important way to improve generalization."

Causal inference and machine learning can address one of the biggest problems facing machine learning today: that a lot of real-world data is not generated in the same way as the data that we use to train AI models. This means that machine learning models often aren't robust enough to handle changes in the input data type, and can't always generalize well. By contrast, causal inference explicitly overcomes this problem by considering what might have happened when faced with a lack of information. Ultimately, this means we can utilize causal inference to make our ML models more robust and generalizable.

When humans rationalize the world, we often think in terms of cause and effect: if we understand why something happened, we can change our behavior to improve future outcomes. Causal inference is a statistical tool that enables our AI and machine learning algorithms to reason in similar ways.

Let's say we're looking at data from a network of servers. We're interested in understanding how changes in our network settings affect latency, so we use causal inference to proactively choose our settings based on this knowledge.

The gold standard for inferring causal effects is the randomized controlled trial (RCT), or A/B test. In an RCT, we split a population of individuals into two groups, treatment and control, administer the treatment to one group and nothing (or a placebo) to the other, and measure the outcome of both groups. Assuming that the treatment and control groups aren't too dissimilar, we can infer whether the treatment was effective based on the difference in outcome between the two groups.

However, we can't always run such experiments. Flooding half of our servers with lots of requests might be a great way to find out how response time is affected, but if they're mission-critical servers, we can't go around performing DDoS attacks on them. Instead, we rely on observational data, studying the differences between servers that naturally get a lot of requests and those with very few requests.

There are many ways of answering this question. One of the most popular approaches is Judea Pearl's technique for using statistics to make causal inferences. In this approach, we'd take a model or graph that includes measurable variables that can affect one another, as shown below.

To use this graph, we must assume the Causal Markov Condition. Formally, it says that, conditional on the set of all its direct causes, a node is independent of all the variables which are not direct causes or direct effects of that node. Simply put, it is the assumption that this graph captures all the real relationships between the variables.
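As a rough illustration of what such a graph can look like for the server example used in this article (the specific variables and edges below are an illustrative assumption, not the article's own figure), the structure can be written as a simple adjacency mapping:

```python
# A hypothetical causal graph for the server example: each key lists the
# variables it directly causes. Memory usage is a common cause (confounder)
# of both the request load and the response time.
causal_graph = {
    "memory_usage": ["server_requests", "response_time"],
    "server_requests": ["response_time"],
    "response_time": [],
}

# Under the Causal Markov Condition, "response_time" is independent of every
# variable that is neither a direct cause nor a direct effect of it, once we
# condition on its direct causes ("memory_usage" and "server_requests").
```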

Another popular method for inferring causes from observational data is Donald Rubin's potential outcomes framework. This method does not explicitly rely on a causal graph, but still assumes a lot about the data, for example, that there are no additional causes besides the ones we are considering.

For simplicity, our data contains three variables: a treatment x, an outcome y, and a covariate z. We want to know if having a high number of server requests affects the response time of a server.

In our example, the number of server requests is determined by the memory value: a higher memory usage means the server is less likely to get fed requests. More precisely, the probability of having a high number of requests is equal to 1 minus the memory value (i.e., P(x = 1) = 1 - z, where P(x = 1) is the probability that x is equal to 1). The response time of our system is determined by the following equation (or hypothetical model):

y = 1x + 5z + ε     (1)

where ε is the error, that is, the deviation of y from its expected value given x and z, arising from other factors not included in the model. Our goal is to understand the effect of x on y via observations of the memory value, number of requests, and response times of a number of servers, with no access to this equation.
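To make the setup concrete, here is a minimal simulation sketch of that data-generating process. The uniform distribution for memory usage and the Gaussian noise term are assumptions made for illustration, not taken from the article:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n_servers = 10_000

# Covariate z: memory usage, assumed here to be uniform on [0, 1].
z = rng.uniform(0.0, 1.0, n_servers)

# Treatment x: high request load, with P(x = 1) = 1 - z as described above.
x = rng.binomial(1, 1.0 - z)

# Outcome y: response time, following y = 1x + 5z + noise (equation (1)).
y = 1.0 * x + 5.0 * z + rng.normal(0.0, 1.0, n_servers)
```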

There are two possible assignments (treatment and control) and an outcome. Given a random group of subjects and a treatment, each subject i has a pair of potential outcomes: Y_i(0) and Y_i(1), the outcomes under control and treatment respectively. However, only one outcome is observed for each subject, the outcome under the actual treatment received: Y_i = x Y_i(1) + (1 - x) Y_i(0). The opposite potential outcome is unobserved for each subject and is therefore referred to as a counterfactual.

For each subject, the effect of treatment is defined to be Y_i(1) - Y_i(0). The average treatment effect (ATE) is defined as the average difference in outcomes between the treatment and control groups:

E[Y_i(1) - Y_i(0)]

Here, E denotes an expectation over values of Y_i(1) - Y_i(0) for each subject i, which is the average value across all subjects. In our network example, a correct estimate of the average treatment effect would lead us to the coefficient in front of x in equation (1).

If we try to estimate this by directly subtracting the average response time of servers with x = 0 from the average response time of our hypothetical servers with x = 1, we get an estimate of the ATE of 0.177. This happens because our treatment and control groups are not inherently directly comparable. In an RCT, we know that the two groups are similar because we chose them ourselves. When we have only observational data, the other variables (such as the memory value in our case) may affect whether or not one unit is placed in the treatment or control group. We need to account for this difference in the memory value between the treatment and control groups before estimating the ATE.
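Continuing the simulation sketch above, the naive comparison is just a difference in group means. The value 0.177 comes from the article's own data; a run of this sketch gives a similarly biased, but not identical, number:

```python
# Naive ATE estimate: difference in mean response time between high-request
# (x = 1) and low-request (x = 0) servers, ignoring memory usage entirely.
naive_ate = y[x == 1].mean() - y[x == 0].mean()
print(naive_ate)  # biased well below the true effect of 1, since z confounds x and y
```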

One way to correct this bias is to compare individual units in the treatment and control groups with similar covariates. In other words, we want to match subjects that are equally likely to receive treatment.

The propensity score e_i for subject i is defined as:

e_i = P(x = 1 | z = z_i), z_i ∈ [0, 1]

or the probability that x is equal to 1 (the unit receives treatment) given that we know its covariate is equal to the value z_i. Creating matches based on the probability that a subject will receive treatment is called propensity score matching. To find the propensity score of a subject, we need to predict how likely the subject is to receive treatment based on their covariates.

The most common way to calculate propensity scores is through logistic regression:
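The article does not show its implementation; as one possible sketch, continuing the simulation above, scikit-learn's logistic regression can estimate P(x = 1 | z) and hence the propensity score for each server:

```python
from sklearn.linear_model import LogisticRegression

# Model P(x = 1 | z) and use the predicted probability as the propensity score.
ps_model = LogisticRegression()
ps_model.fit(z.reshape(-1, 1), x)
propensity = ps_model.predict_proba(z.reshape(-1, 1))[:, 1]
```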

Now that we have calculated propensity scores for each subject, we can do basic matching on the propensity score and calculate the ATE exactly as before. Running propensity score matching on the example network data gets us an estimate of 1.008!
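One way such matching can be sketched, again continuing the simulated example (one-to-one nearest-neighbour matching on the propensity score is an illustrative choice, not necessarily the article's exact procedure):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Match each treated server to the control server with the closest propensity score.
treated = np.flatnonzero(x == 1)
control = np.flatnonzero(x == 0)

nn = NearestNeighbors(n_neighbors=1).fit(propensity[control].reshape(-1, 1))
_, match_idx = nn.kneighbors(propensity[treated].reshape(-1, 1))
matched_control = control[match_idx.ravel()]

# Difference in outcomes between each treated server and its matched control,
# averaged over the treated group, recovers an estimate close to the true 1.
matched_ate = (y[treated] - y[matched_control]).mean()
print(matched_ate)
```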

We were interested in understanding the causal effect of the binary treatment variable x on the outcome y. If we find that the ATE is positive, this means an increase in x results in an increase in y. Similarly, a negative ATE says that an increase in x will result in a decrease in y.

This could help us understand the root cause of an issue or build more robust machine learning models. Causal inference gives us tools to understand what it means for some variables to affect others. In the future, we could use causal inference models to address a wider scope of problems both in and out of telecommunications so that our models of the world become more intelligent.

Special thanks to the other team members of GAIA working on causality analysis: Wenting Sun, Nikita Butakov, Paul Mclachlan, Fuyu Zou, Chenhua Shi, Lule Yu and Sheyda Kiani Mehr.

If you're interested in advancing this field with us, join our worldwide team of data scientists and AI specialists at GAIA.

In this Wired article, Turing Award winner Yoshua Bengio shares why deep learning must begin to understand the why before it can replicate true human intelligence.

In this technical overview of causal inference in statistics, find out what's needed to evolve AI from traditional statistical analysis to causal analysis of multivariate data.

This journal essay from 1999 offers an introduction to the Causal Markov Condition.

AI, machine learning, robots, and marketing tech coming to a store near you – TechRepublic

Retailers are harnessing the power of new technology to dig deeper into customer decisions and bring people back into stores.

The National Retail Federation's 2020 Big Show in New York was jam-packed with robots, frictionless store mock-ups, and audacious displays of the latest technology now available to retailers.

Dozens of robots, digital signage tools, and more were available for retail representatives to test out, with hundreds of the biggest tech companies in attendance offering a bounty of eye-popping gadgets designed to increase efficiency and bring the wow factor back to brick-and-mortar stores.

Here are some of the biggest takeaways from the annual retail event.

With the explosion in popularity of Amazon, Alibaba, and other e-commerce sites ready to deliver goods right to your door within days, many analysts and retailers figured the brick-and-mortar stores of the past were on their last legs.

But it turns out billions of customers still want the personal, tailored touch of in-store experiences and are not ready to completely abandon physical retail outlets.

"It's not a retail apocalypse. It's a retail renaissance," said Lori Mitchell-Keller, executive vice president and global general manager of consumer industries at SAP.

As leader of SAP's retail, wholesale distribution, consumer products, and life sciences industries division, Mitchell-Keller said she was surprised to see that retailers had shifted their stance and were looking to find ways to beef up their online experience while infusing stores with useful but flashy technology.

"Brick-and-mortar stores have this unique capability to have a specific advantage against online retailers. So despite the trend where everything was going online, it did not mean online at the expense of brick-and-mortar. There is a balance between the two. Those companies that have a great online experience and capability combined with a brick-and-mortar store are in the best place in terms of their ability to be profitable," Mitchell-Keller said during an interview at NRF 2020.

"There is an experience that you cannot get online. This whole idea of customer experience and experience management is definitely the best battleground for the guys that can't compete in delivery. Even for the ones that can compete on delivery, like the Walmarts and Targets, they are using their brick-and-mortar stores to offer an experience that you can't get online. We thought five years ago that brick-and-mortar was dead and it's absolutely not dead. It's actually an asset."

In her experience working with the world's biggest retailers, companies that have a physical presence actually have a huge advantage because customers are now yearning for a personalized experience they can't get online. While e-commerce sites are fast, nothing can beat the ability to have real people answer questions and help customers work through their options, regardless of what they're shopping for.

Retailers are also transforming parts of their stores into fulfillment centers for their online sales, which has the added effect of bringing customers into the store, where they may spend even more on things they see.

"The brick-and-mortar stores that are using their stores as fulfillment centers have a much lower cost of delivery because they're typically within a few miles of customers. If they have a great online capability and good store fulfillment, they're able to get to customers faster than the aggregators," Mitchell-Keller said. "It's better to have both."

But one of the main trends, and problems, highlighted at NRF 2020 was the sometimes difficult transition many retailers have had to make to a digitized world.

NRF 2020 was full of decadent tech retail tools like digital price tags, shelf-stocking robots and next-gen advertising signage, but none of this could be incorporated into a retail environment without a baseline of tech talent and systems to back it all up.

"It can be very overwhelmingly complicated, not to mention costly, just to have a team to manage technology and an environment that is highly digitally integrated. The solution we try to bring to bear is to add all these capabilities or applications into a turn key environment because fundamentally, none of it works without the network," said Michael Colaneri, AT&T's vice president of retail, restaurants and hospitality.

While it would be easy for a retailer to leave NRF 2020 with a fancy robot or cool gadget, companies typically have to think bigger about the changes they want to see, and generally these kinds of digital transformations have to be embedded deep throughout the supply chain before they can be incorporated into stores themselves.

Colaneri said much of AT&T's work involved figuring out how retailers could connect the store system, the enterprise, the supply chain and then the consumer, to both online and offline systems. The e-commerce part of a retailer's business now had to work hand in hand with the functionality of the brick-and-mortar experience because each part rides on top of the network.

"There are five things that retailers ask me to solve: Customer experience, inventory visibility, supply chain efficiency, analytics, and the integration of media experiences like a robot, electronic shelves or digital price tags. How do I pull all this together into a unified experience that is streamlined for customers?" Colaneri said.

"Sometimes they talk to me about technical components, but our number one priority is inventory visibility. I want to track products from raw material to where it is in the legacy retail environment. Retailers also want more data and analytics so they can get some business intelligence out of the disparate data lakes they now have."

The transition to digitized environments is different for every retailer, Colaneri added. Some want slow transitions and gradual introductions of technology while others are desperate for a leg up on the competition and are interested in quick makeovers.

While some retailers have balked at the thought, and price, of wholesale changes, the opposite approach can end up being just as costly.

"Anybody that sells you a digital sign, robot, Magic Mirror or any one of those assets is usually partnering with network providers because it requires the network. And more importantly, what typically happens is if someone buys an asset, they are underestimating the requirements it's going to need from their current network," Colaneri said.

"Then when their team says 'we're already out of bandwidth,' you'll realize it wasn't engineered and that the application wasn't accommodated. It's not going to work. It can turn into a big food fight."

Retailers are increasingly realizing the value of artificial intelligence and machine learning as a way to churn through troves of data collected from customers through e-commerce sites. While these tools require the kind of digital base that both Mitchell-Keller and Colaneri mentioned, artificial intelligence (AI) and machine learning can be used to address a lot of the pain points retailers are now struggling with.

Mitchell-Keller spoke of SAP's work with Costco as an example of the kind of real-world value AI and machine learning can add to a business. Costco needed help reducing waste in their bakeries and wanted better visibility into when customers were going to buy particular products on specific days or at specific times.

"Using machine learning, what SAP did was take four years of data out of five different stores for Costco as a pilot and used AI and machine learning to look through the data for patterns to be able to better improve their forecasting. They're driving all of their bakery needs based on the forecast and that forcecast helped Costco so much they were able to reduce their waste by about 30%," Mitchell-Keller said, adding that their program improved productivity by 10%.

SAP and dozens of other tech companies at NRF 2020 offered AI-based systems for a variety of supply chain management tools, employee payment systems and even resume matches. But AI and machine learning systems are nothing without more data.

Jeff Warren, vice president of Oracle Retail, said there has been a massive shift toward better understanding customers through increased data collection. Historically, retailers simply focused on getting products through the supply chain and into the hands of consumers. But now, retailers are pivoting toward focusing on how to better cater services and goods to the customer.

Warren said Oracle Retail works with about 6,000 retailers in 96 different countries and that much of their work now prioritizes collecting information from every customer interaction.

"What is new is that when you think of the journey of the consumer, it's not just about selling anymore. It's not just about ringing up a transaction or line busting. All of the interactions between you and me have value and hold something meaningful from a data perspective," he said, adding that retailers are seeking to break down silos and pool their data into a single platform for greater ease of use.

"Context would help retailers deliver a better experience to you. Its petabytes of information about what the US consumer market is spending and where they're spending. We can take the information that we get from those interactions that are happening at the point of sale about our best customers and learn more."

With the Oracle platform, retailers can learn about their customers and others who may have similar interests or live in similar places. Companies can do a better job of targeting new customers when they know more about their current customers and what else they may want.

IBM is working on similar projects with hundreds of different retailers, all looking to learn more about their customers and tailor their e-commerce as well as in-store experience to suit their biggest fans.

IBM global managing director for consumer industries Luq Niazi told TechRepublic during a booth tour that learning about consumer interests was just one aspect of how retailers could appeal to customers in the digital age.

"Retailers are struggling to work through what tech they need. When there is so much tech choice, how do you decide what's important? Many companies are implementing tech that is good but implemented badly, so how do you help them do good tech implemented well?" Niazi said.

"You have all this old tech in stores and you have all of this new tech. You have to think about how you bring the capability together in the right way to deploy flexibly whatever apps and experiences you need from your store associate, for your point of sale, for your order management system that is connected physically and digitally. You've got to bring those together in different ways. We have to help people think about how they design the store of the future."

Machine Learning Patentability in 2019: 5 Cases Analyzed and Lessons Learned Part 1 – JD Supra

Introduction

This article is the first of a five-part series of articles dealing with what patentability of machine learning looks like in 2019. This article begins the series by describing the USPTO's 2019 Revised Patent Subject Matter Eligibility Guidance (2019 PEG) in the context of the U.S. patent system. Then, this article and the four following articles will each describe one of five cases in which Examiners' rejections under Section 101 were reversed by the PTAB under this new 2019 PEG. Each of the five cases discussed deals with machine-learning patents, and may provide some insight into how the 2019 PEG affects the patentability of machine learning, as well as software more broadly.

Patent Eligibility Under the U.S. Patent System

The US patent laws are set out in Title 35 of the United States Code (35 U.S.C.). Section 101 of Title 35 focuses on several things, including whether the invention is classified as patent-eligible subject matter. As a general rule, an invention is considered to be patent-eligible subject matter if it falls within one of the four enumerated categories of patentable subject matter recited in 35 U.S.C. 101 (i.e., process, machine, manufacture, or composition of matter).[1] This, on its own, is an easy hurdle to overcome. However, there are exceptions (judicial exceptions). These include (1) laws of nature; (2) natural phenomena; and (3) abstract ideas. If the subject matter of the claimed invention fits into any of these judicial exceptions, it is not patent-eligible, and a patent cannot be obtained. The machine-learning and software aspects of a claim face 101 issues based on the abstract idea exception, and not the other two.

Section 101 is applied by Examiners at the USPTO in determining whether patents should be issued; by district courts in determining the validity of existing patents; in the Patent Trial and Appeal Board (PTAB) in appeals from Examiner rejections, in post-grant-review (PGR) proceedings, and in covered-business-method-review (CBM) proceedings; and in the Federal Circuit on appeals. The PTAB is part of the USPTO, and may hear an appeal of an Examiner's rejection of claims of a patent application when the claims have been rejected at least twice.

In determining whether a claim fits into the abstract idea category at the USPTO, the Examiners and the PTAB must apply the 2019 PEG, which is described in the following section of this paper. In determining whether a claim is patent-ineligible as an abstract idea in the district courts and the Federal Circuit, however, the courts apply the Alice/Mayo test, not the 2019 PEG. The definition of abstract idea was formulated by the Alice and Mayo Supreme Court cases. These two cases have been interpreted by a number of Federal Circuit opinions, which has led to a complicated legal framework that the USPTO and the district courts must follow.[2]

The 2019 PEG

The USPTO, which governs the issuance of patents, decided that it needed a more practical, predictable, and consistent method for its over 8,500 patent examiners to apply when determining whether a claim is patent-ineligible as an abstract idea.[3] Previously, the USPTO synthesized and organized, for its examiners to compare to an applicant's claims, the facts and holdings of each Federal Circuit case that deals with section 101. However, the large and still-growing number of cases, and the confusion arising from similar subject matter [being] described both as abstract and not abstract in different cases,[4] led to issues. Accordingly, the USPTO issued its 2019 Revised Patent Subject Matter Eligibility Guidance on January 7, 2019 (2019 PEG), which shifted from the case-comparison structure to a new examination structure.[5] The new examination structure, described below, is more patent-applicant friendly than the prior structure,[6] thereby having the potential to result in a higher rate of patent issuances. The 2019 PEG does not alter the federal statutory law or case law that make up the U.S. patent system.

The 2019 PEG has a structure consisting of four parts: Step 1, Step 2A Prong 1, Step 2A Prong 2, and Step 2B. Step 1 refers to the statutory categories of patent-eligible subject matter, while Step 2 refers to the judicial exceptions. In Step 1, the Examiners must determine whether the subject matter of the claim is a process, machine, manufacture, or composition of matter. If it is, the Examiner moves on to Step 2.

In Step 2A, Prong 1, the Examiners are to determine whether the claim recites a judicial exception, including laws of nature, natural phenomena, and abstract ideas. For abstract ideas, the Examiners must determine whether the claim falls into at least one of three enumerated categories: (1) mathematical concepts (mathematical relationships, mathematical formulas or equations, mathematical calculations); (2) certain methods of organizing human activity (fundamental economic principles or practices, commercial or legal interactions, managing personal behavior or relationships or interactions between people); and (3) mental processes (concepts performed in the human mind: encompassing acts people can perform using their mind, or using pen and paper). These three enumerated categories are not mere examples, but are fully encompassing. The Examiners are directed that [i]n the rare circumstance in which they believe[] a claim limitation that does not fall within the enumerated groupings of abstract ideas should nonetheless be treated as reciting an abstract idea, they are to follow a particular procedure involving providing justifications and getting approval from the Technology Center Director.

Next, if the claim limitation recites one of the enumerated categories of abstract ideas under Prong 1 of Step 2A, the Examiner is instructed to proceed to Prong 2 of Step 2A. In Step 2A, Prong 2, the Examiners are to determine if the claim is directed to the recited abstract idea. In this step, the claim does not fall within the exception, despite reciting the exception, if the exception is integrated into a practical application. The 2019 PEG provides a non-exhaustive list of examples for this, including, among others: (1) an improvement in the functioning of a computer; (2) a particular treatment for a disease or medical condition; and (3) an application of the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception.

Finally, even if the claim recites a judicial exception under Step 2A Prong 1, and the claim is directed to the judicial exception under Step 2A Prong 2, it might still be patent-eligible if it satisfies the requirement of Step 2B. In Step 2B, the Examiner must determine if there is an inventive concept: that the additional elements recited in the claims provide[] significantly more than the recited judicial exception. This step attempts to distinguish between whether the elements combined with the judicial exception (1) add[] a specific limitation or combination of limitations that are not well-understood, routine, conventional activity in the field; or alternatively (2) simply append[] well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality. Furthermore, the 2019 PEG indicates that where an additional element was insignificant extra-solution activity, [the Examiner] should reevaluate that conclusion in Step 2B. If such reevaluation indicates that the element is unconventional . . . this finding may indicate that an inventive concept is present and that the claim is thus eligible.

In summary, the 2019 PEG provides an approach for the Examiners to apply, involving steps and prongs, to determine if a claim is patent-ineligible based on being an abstract idea. Conceptually, the 2019-PEG method begins with categorizing the type of claim involved (process, machine, etc.); proceeds to determining if an exception applies (e.g., abstract idea); then, if an exception applies, proceeds to determining if an exclusion applies (i.e., practical application or inventive concept). Interestingly, the PTAB not only applies the 2019 PEG in appeals from Examiner rejections, but also applies the 2019 PEG in its other Section-101 decisions, including CBM review and PGRs.[7] However, the 2019 PEG only applies to the Examiners and PTAB (the Examiners and the PTAB are both part of the USPTO), and does not apply to district courts or to the Federal Circuit.

Case 1: Appeal 2018-007443[8] (Decided October 10, 2019)

This case involves the PTAB reversing the Examiner's Section 101 rejections of claims of the 14/815,940 patent application. This patent application relates to applying AI classification technologies and combinational logic to predict whether machines need to be serviced, and whether there is likely to be equipment failure in a system. The Examiner contended that the claims fit into the judicial exception of abstract idea because monitoring the operation of machines is a fundamental economic practice. The Examiner explained that the limitations in the claims that set forth the abstract idea are: a method for reading data; assessing data; presenting data; classifying data; collecting data; and tallying data. The PTAB disagreed with the Examiner. The PTAB stated:

Specifically, we do not find monitoring the operation of machines, as recited in the instant application, is a fundamental economic principle (such as hedging, insurance, or mitigating risk). Rather, the claims recite monitoring operation of machines using neural networks, logic decision trees, confidence assessments, fuzzy logic, smart agent profiling, and case-based reasoning.

As explained in the previous section of this paper, the 2019 PEG set forth three possible categories of abstract ideas: mathematical concepts, certain methods of organizing human activity, and mental processes. Here, the PTAB addressed the second of these categories. The PTAB found that the claims do not recite a fundamental economic principle (one method of organizing human activity) because the claims recite AI components like neural networks in the context of monitoring machines. Clearly, economic principles and AI components are not always mutually exclusive concepts.[9] For example, there may be situations where these algorithms are applied directly to mitigating business risks. Accordingly, the PTAB was likely focusing on the distinction between monitoring machines and mitigating risk; and not solely on the recitation of the AI components. However, the recitation of the AI components did not seem to hurt.

Then, moving on to another category of abstract ideas, the PTAB stated:

Claims 1 and 8 as recited are not practically performed in the human mind. As discussed above, the claims recite monitoring operation of machines using neural networks, logic decision trees, confidence assessments, fuzzy logic, smart agent profiling, and case-based reasoning. . . . [Also,] claim 8 recites an output device that transforms the composite prediction output into human-readable form.

. . . .

In other words, the classifying steps of claims 1 and modules of claim 8 when read in light of the Specification, recite a method and system difficult and challenging for non-experts due to their computational complexity. As such, we find that one of ordinary skill in the art would not find it practical to perform the aforementioned classifying steps recited in claim 1 and function of the modules recited in claim 8 mentally.

In the language above, the PTAB addressed the third category of abstract ideas: mental processes. The PTAB provided that the claim does not recite a mental process because the AI algorithms, based on the context in which they are applied, are computationally complex.

The PTAB also addressed the first of the three categories of abstract ideas (mathematical concepts), and found that it does not apply because the specific mathematical algorithm or formula is not explicitly recited in the claims. Requiring that a mathematical concept be explicitly recited seems to be a narrow interpretation of the 2019 PEG. The 2019 PEG does not require that the recitation be explicit, and leaves the math category open to relationships, equations, or calculations. From this, the PTAB might have meant that the claims list a mathematical concept (the AI algorithm) by its name, as a component of the process, rather than trying to claim the steps of the algorithm itself. Clearly, the names of the algorithms are explicitly recited; the steps of the AI algorithms, however, are not recited in the claims.

Notably, reciting only the name of an algorithm, rather than reciting the steps of the algorithm, seems to indicate that the claims are not directed to the algorithms (i.e., the claims have a practical application for the algorithms). It indicates that the claims include an algorithm, but that there is more going on in the claim than just the algorithm. However, instead of determining that there is a practical application of the algorithms, or an inventive concept, the PTAB determined that the claim does not even recite the mathematical concepts.

Additionally, the PTAB found that even if the claims had been classified as reciting an abstract idea, as the Examiner had contended, the claims are not directed to that abstract idea but are integrated into a practical application. The PTAB stated:

Appellant's claims address a problem specifically using several artificial intelligence classification technologies to monitor the operation of machines and to predict preventative maintenance needs and equipment failure.

The PTAB seems to say that because the claims solve a problem using the abstract idea, they are integrated into a practical application. The PTAB did not specify why the additional elements are sufficient to integrate the invention. The opinion actually does not even specifically mention that there are additional elements. Instead, the PTAB's conclusion might have been that, based on a totality of the circumstances, it believed that the claims are not directed to the algorithms, but actually just apply the algorithms in a meaningful way. The PTAB could have fit this reasoning into the 2019 PEG structure through one of the Step 2A, Prong 2 examples (e.g., that the claim applies additional elements in some other meaningful way), but did not expressly do so.

Conclusion

This case illustrates:

(1) the monitoring of machines was held to not be an abstract idea, in this context;
(2) the recitation of AI components such as neural networks in the claims did not seem to hurt for arguing any of the three categories of abstract ideas;
(3) complexity of algorithms implemented can help with the mental processes category of abstract ideas; and
(4) the PTAB might not always explicitly state how the rule for practical application applies, but seems to apply it consistently with the examples from the 2019 PEG.

The next four articles will build on this background, and will provide different examples of how the PTAB approaches reversing Examiner 101-rejections of machine-learning patents under the 2019 PEG. Stay tuned for the analysis and lessons of the next case, which includes methods for overcoming rejections based on the mental processes category of abstract ideas, on an application for a probabilistic programming compiler that performs the seemingly 101-vulnerable function of generat[ing] data-parallel inference code.

FOOTNOTES

[1] MPEP 2106.04.
[2] Accordingly, the USPTO must follow both the Federal Circuit's case law that interprets Title 35 of the United States Code, and must follow the 2019 PEG. The 2019 PEG is not the same as the Federal Circuit's standard: the 2019 PEG does not involve distinguishing case law (the USPTO, in its 2019 PEG, has declared the Federal Circuit's case law to be too clouded to be practically applied by the Examiners. 84 Fed. Reg. 52.). The USPTO practically could not, and actually did not, synthesize the holdings of each of the Federal Circuit opinions regarding Section 101 into the standard of the 2019 PEG. Therefore, logically, the only way to ensure that the 2019 PEG does not impinge on the statutory rights (provided by 35 U.S.C.) of patent applicants, as interpreted by the Federal Circuit, is for the 2019 PEG to define the scope of the 101 judicial exceptions more narrowly than the statutory requirement. However, assuming there are instances where the 2019 PEG defines the 101 judicial exceptions more broadly than the statutory standard (if the USPTO rejects claims that the Federal Circuit would not have), that patent applicant may have additional arguments for eligibility.
[3] 84 Fed. Reg. 50, 52.
[4] Id.
[5] The USPTO also, on October 17, 2019, issued an update to the 2019 PEG. The October update is consistent with the 2019 PEG, and merely provides clarification to some of the terms used in the 2019 PEG, and clarification as to the scope of the 2019 PEG. October 2019 Update: Subject Matter Eligibility (October 17, 2019), https://www.uspto.gov/sites/default/files/documents/peg_oct_2019_update.pdf.
[6] See Frequently Asked Questions (FAQs) on the 2019 Revised Patent Subject Matter Eligibility Guidance (2019 PEG), C-6 (https://www.uspto.gov/sites/default/files/documents/faqs_on_2019peg_20190107.pdf) ("Any claim considered patent eligible under the current version of the MPEP and subsequent guidance should be considered patent eligible under the 2019 PEG. Because the claim at issue was considered eligible under the current version of the MPEP, the Examiner should not make a rejection under 101 in view of the 2019 PEG.").
[7] See American Express v. Signature Systems, CBM2018-00035 (Oct. 30, 2019); Supercell Oy v. Gree, Inc., PGR2018-00061 (Oct. 15, 2019).
[8] https://e-foia.uspto.gov/Foia/RetrievePdf?system=BPAI&flNm=fd2018007443-10-10-2019-0.
[9] Notably, the mental process category, and not the certain methods of organizing human activity category, is the one that focuses on the complexity of the process. Furthermore, as shown in the following paragraph, the mental process category was separately discussed by the PTAB, again mentioning the algorithms. Accordingly, the PTAB is likely not mentioning the algorithms for the purpose of describing the complexity of the method.

The 17 Best AI and Machine Learning TED Talks for Practitioners – Solutions Review

The editors at Solutions Review curated this list of the best AI and machine learning TED talks for practitioners in the field.

TED Talks are influential videos from expert speakers in a variety of verticals. TED began in 1984 as a conference where Technology, Entertainment and Design converged, and today covers almost all topics from business to technology to global issues in more than 110 languages. TED is building a clearinghouse of free knowledge from the world's top thinkers, and their library of videos is expansive and rapidly growing.

Solutions Review has curated this list of AI and machine learning TED talks to watch if you are a practitioner in the field. Talks were selected based on relevance, ability to add business value, and individual speaker expertise. We've also curated TED talk lists for topics like data visualization and big data.

Erik Brynjolfsson is the director of the MIT Center for Digital Business and a research associate at the National Bureau of Economic Research. He asks how IT affects organizations, markets and the economy. His books include Wired for Innovation and Race Against the Machine. Brynjolfsson was among the first researchers to measure the productivity contributions of information and communication technology (ICT) and the complementary role of organizational capital and other intangibles.

In this talk, Brynjolfsson argues that machine learning and intelligence are not the end of growth; it's simply the growing pains of a radically reorganized economy. A riveting case for why big innovations are ahead of us if we think of computers as our teammates. Be sure to watch the opposing viewpoint from Robert Gordon.

Jeremy Howard is the CEO of Enlitic, an advanced machine learning company in San Francisco. Previously, he was the president and chief scientist at Kaggle, a community and competition platform of over 200,000 data scientists. Howard is a faculty member at Singularity University, where he teaches data science. He is also a Young Global Leader with the World Economic Forum, and spoke at the World Economic Forum Annual Meeting 2014 on Jobs for the Machines.

Technologist Jeremy Howard shares some surprising new developments in the fast-moving field of deep learning, a technique that can give computers the ability to learn Chinese, or to recognize objects in photos, or to help think through a medical diagnosis.

Nick Bostrom is a professor at Oxford University, where he heads the Future of Humanity Institute, a research group of mathematicians, philosophers and scientists tasked with investigating the big picture for the human condition and its future. Bostrom was honored as one of Foreign Policy's 2015 Global Thinkers. His book Superintelligence advances the ominous idea that the first ultraintelligent machine is the last invention that man need ever make.

In this talk, Nick Bostrom calls machine intelligence the last invention that humanity will ever need to make. Bostrom asks us to think hard about the world we're building right now, driven by thinking machines. Will our smart machines help to preserve humanity and our values, or will they have values of their own?

Li's work with neural networks and computer vision (with Stanford's Vision Lab) marks a significant step forward for AI research, and could lead to applications ranging from more intuitive image searches to robots able to make autonomous decisions in unfamiliar situations. Fei-Fei was honored as one of Foreign Policy's 2015 Global Thinkers.

This talk digs into how computers are getting smart enough to identify simple elements. Computer vision expert Fei-Fei Li describes the state of the art, including the database of 15 million photos her team built to teach a computer to understand pictures, and the key insights yet to come.

Anthony Goldbloom is the co-founder and CEO of Kaggle. Kaggle hosts machine learning competitions, where data scientists download data and upload solutions to difficult problems. Kaggle has a community of over 600,000 data scientists. In 2011 and 2012, Forbes named Anthony one of the 30 under 30 in technology; in 2013 the MIT Tech Review named him one of the top 35 innovators under the age of 35, and the University of Melbourne awarded him an Alumni of Distinction Award.

This talk by Anthony Goldbloom describes some of the current use cases for machine learning, far beyond simple tasks like assessing credit risk and sorting mail.

Tufekci is a contributing opinion writer at the New York Times, an associate professor at the School of Information and Library Science at University of North Carolina, Chapel Hill, and a faculty associate at Harvard's Berkman Klein Center for Internet and Society. Her book, Twitter and Tear Gas, was published in 2017 by Yale University Press.

Machine intelligence is here, and we're already using it to make subjective decisions. But the complex way AI grows and improves makes it hard to understand and even harder to control. In this cautionary talk, techno-sociologist Zeynep Tufekci explains how intelligent machines can fail in ways that don't fit human error patterns and in ways we won't expect or be prepared for.

In his book The Business Romantic, Tim Leberecht invites us to rediscover romance, beauty and serendipity by designing products, experiences, and organizations that make us fall back in love with our work and our life. The book inspired the creation of the Business Romantic Society, a global collective of artists, developers, designers and researchers who share the mission of bringing beauty to business.

In this talk, Tim Leberecht makes the case for a new radical humanism in a time of artificial intelligence and machine learning. For the self-described business romantic, this means designing organizations and workplaces that celebrate authenticity instead of efficiency and questions instead of answers. Leberecht proposes four principles for building beautiful organizations.

Grady Booch is Chief Scientist for Software Engineering as well as Chief Scientist for Watson/M at IBM Research, where he leads IBM's research and development for embodied cognition. Having originated the term and the practice of object-oriented design, he is best known for his work in advancing the fields of software engineering and software architecture.

Grady Booch allays our worst (sci-fi induced) fears about superintelligent computers by explaining how we'll teach, not program, them to share our human values. Rather than worry about an unlikely existential threat, he urges us to consider how artificial intelligence will enhance human life.

Tom Gruber is a product designer, entrepreneur, and AI thought leader who uses technology to augment human intelligence. He was co-founder, CTO, and head of design for the team that created the Siri virtual assistant. At Apple for over 8 years, Tom led the Advanced Development Group that designed and prototyped new capabilities for products that bring intelligence to the interface.

This talk introduces the idea of Humanistic AI. Gruber shares his vision for a future where AI helps us achieve superhuman performance in perception, creativity and cognitive function, from turbocharging our design skills to helping us remember everything we've ever read. The idea of an AI-powered personal memory also extends to relationships, with the machine helping us reflect on our interactions with people over time.

Stuart Russell is a professor (and formerly chair) of Electrical Engineering and Computer Sciences at University of California at Berkeley. His book Artificial Intelligence: A Modern Approach (with Peter Norvig) is the standard text in AI; it has been translated into 13 languages and is used in more than 1,300 universities in 118 countries. He also works for the United Nations, developing a new global seismic monitoring system for the nuclear-test-ban treaty.

His talk centers around the question of whether we can harness the power of superintelligent AI while also preventing the catastrophe of robotic takeover. As we move closer toward creating all-knowing machines, AI pioneer Stuart Russell is working on something a bit different: robots with uncertainty. Hear his vision for human-compatible AI that can solve problems using common sense, altruism and other human values.

Dr. Pratik Shah's research creates novel intersections between engineering, medical imaging, machine learning, and medicine to improve health and diagnose and cure diseases. Research topics include: medical imaging technologies using unorthodox artificial intelligence for early disease diagnoses; novel ethical, secure and explainable artificial intelligence based digital medicines and treatments; and point-of-care medical technologies for real world data and evidence generation to improve public health.

TED Fellow Pratik Shah is working on a clever system to do just that. Using an unorthodox AI approach, Shah has developed a technology that requires as few as 50 images to develop a working algorithm and can even use photos taken on doctors' cell phones to provide a diagnosis. Learn more about how this new way to analyze medical information could lead to earlier detection of life-threatening illnesses and bring AI-assisted diagnosis to more health care settings worldwide.

Margaret Mitchell's research involves vision-language and grounded language generation, focusing on how to evolve artificial intelligence towards positive goals. Her work combines computer vision, natural language processing, social media as well as many statistical methods and insights from cognitive science. Before Google, Mitchell was a founding member of Microsoft Research's Cognition group, focused on advancing artificial intelligence, and a researcher in Microsoft Research's Natural Language Processing group.

Margaret Mitchell helps develop computers that can communicate about what they see and understand. She tells a cautionary tale about the gaps, blind spots and biases we subconsciously encode into AI and asks us to consider what the technology we create today will mean for tomorrow.

Kriti Sharma is the Founder of AI for Good, an organization focused on building scalable technology solutions for social good. Sharma was recently named in the Forbes 30 Under 30 list for advancements in AI. She was appointed a United Nations Young Leader in 2018 and is an advisor to both the United Nations Technology Innovation Labs and to the UK Government's Centre for Data Ethics and Innovation.

AI algorithms make important decisions about you all the time, like how much you should pay for car insurance or whether or not you get that job interview. But what happens when these machines are built with human bias coded into their systems? Technologist Kriti Sharma explores how the lack of diversity in tech is creeping into our AI, offering three ways we can start making more ethical algorithms.

Matt Beane does field research on work involving robots to help us understand the implications of intelligent machines for the broader world of work. Beane is an Assistant Professor in the Technology Management Program at the University of California, Santa Barbara and a Research Affiliate with MIT's Institute for the Digital Economy. He received his PhD from the MIT Sloan School of Management.

The path to skill around the globe has been the same for thousands of years: train under an expert and take on small, easy tasks before progressing to riskier, harder ones. But right now, we're handling AI in a way that blocks that path and sacrifices learning in our quest for productivity, says organizational ethnographer Matt Beane. Beane shares a vision that flips the current story into one of distributed, machine-enhanced mentorship that takes full advantage of AI's amazing capabilities while enhancing our skills at the same time.

Leila Pirhaji is the founder of ReviveMed, an AI platform that can quickly and inexpensively characterize large numbers of metabolites from the blood, urine and tissues of patients. This allows for the detection of molecular mechanisms that lead to disease and the discovery of drugs that target these disease mechanisms.

Biotech entrepreneur and TED Fellow Leila Pirhaji shares her plan to build an AI-based network to characterize metabolite patterns, better understand how disease develops and discover more effective treatments.

Janelle Shane is the owner of AIweirdness.com. Her book, You Look Like a Thing and I Love You, uses cartoons and humorous pop-culture experiments to look inside the minds of the algorithms that run our world, making artificial intelligence and machine learning both accessible and entertaining.

"The danger of artificial intelligence isn't that it's going to rebel against us, but that it's going to do exactly what we ask it to do," says AI researcher Janelle Shane. Sharing the weird, sometimes alarming antics of AI algorithms as they try to solve human problems, like creating new ice cream flavors or recognizing cars on the road, Shane shows why AI doesn't yet measure up to real brains.

Sylvain Duranton is the global leader of BCG GAMMA, a unit dedicated to applying data science and advanced analytics to business. He manages a team of more than 800 data scientists and has implemented more than 50 custom AI and analytics solutions for companies across the globe.

In this talk, business technologist Sylvain Duranton advocates for a "Human plus AI" approach, using AI systems alongside humans, not instead of them, and shares the specific formula companies can adopt to successfully employ AI while keeping humans in the loop.

For more AI and machine learning TED talks, browse TED's complete topic collection.

Timothy is Solutions Review's Senior Editor. He is a recognized thought leader and influencer in enterprise BI and data analytics. Timothy has been named a top global business journalist by Richtopia. Scoop? First initial, last name at solutionsreview dot com.

See more here:
The 17 Best AI and Machine Learning TED Talks for Practitioners - Solutions Review

Here’s what happens when you apply machine learning to enhance the Lumières’ 1896 movie "Arrival of a Train at La Ciotat" – Boing Boing

First, take a look at this 1895 short movie "L'Arrivée d'un Train à La Ciotat" ("Arrival of a Train at La Ciotat") from the Lumière Brothers. This film was upscaled to 4K and 60 frames per second using a variety of neural networks and other enhancement techniques. The result can be seen in the video below:

The Spot has an article about how it was done:

[YouTuber Denis Shiryaev] used a mix of neural networks from Gigapixel AI and a technique called depth-aware video frame interpolation to not only upscale the resolution of the video, but also increase its frame rate to something that looks a lot smoother to the human eye.
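
Shiryaev's exact pipeline isn't published here, but the basic shape of the idea, upscale every frame and then synthesize in-between frames, can be sketched with plain OpenCV. The sketch below is a deliberately naive stand-in: bicubic resizing instead of neural upscaling, and simple frame blending instead of depth-aware interpolation, with hypothetical file paths.

```python
# Naive stand-in for the upscale-and-interpolate idea: bicubic upscaling plus
# frame blending. The real pipeline used neural upscaling (Gigapixel AI) and
# depth-aware frame interpolation, which produce far better results.
import cv2

def upscale_and_double_fps(src_path: str, dst_path: str, scale: int = 4) -> None:
    cap = cv2.VideoCapture(src_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 24.0   # some old digitizations report 0
    ok, prev = cap.read()
    if not ok:
        raise ValueError(f"could not read {src_path}")
    h, w = prev.shape[:2]
    size = (w * scale, h * scale)
    out = cv2.VideoWriter(dst_path, cv2.VideoWriter_fourcc(*"mp4v"), fps * 2, size)
    prev_up = cv2.resize(prev, size, interpolation=cv2.INTER_CUBIC)
    out.write(prev_up)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        cur_up = cv2.resize(frame, size, interpolation=cv2.INTER_CUBIC)
        # Blend neighboring frames, a crude substitute for depth-aware interpolation.
        out.write(cv2.addWeighted(prev_up, 0.5, cur_up, 0.5, 0))
        out.write(cur_up)
        prev_up = cur_up
    cap.release()
    out.release()
```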

See the original post:
Here's what happens when you apply machine learning to enhance the Lumières' 1896 movie "Arrival of a Train at La Ciotat" - Boing Boing

Artnome Wants to Predict the Price of a Masterpiece. The Problem? There’s Only One. – Built In

Buying a Picasso is like buying a mansion.

There's not that many of them, so it can be hard to know what a fair price should be. In real estate, if the house last sold in 2008, right before the lending crisis devastated the real estate market, basing today's price on the last sale doesn't make sense.

Paintings are also affected by market conditions and a lack of data. Kyle Waters, a data scientist at Artnome, explained to us how his Boston-area firm is addressing this dilemma and, in doing so, aims to do for the art world what Zillow did for real estate.

"If only 3 percent of houses are on the market at a time, we only see the prices for those 3 percent. But what about the rest of the market?" Waters said. "It's similar for art too. We want to price the entire market and give transparency."

Artnome is building the world's largest database of paintings by blue-chip artists like Georgia O'Keeffe, including her super-famous works, lesser-known items, those privately held and those publicly displayed. Waters is tinkering with the data to create a machine learning model that predicts how much people will pay for these works at auctions. Because this model includes an artist's entire collection, and not just those works that have been publicly sold before, Artnome claims its machine learning model will be more accurate than the auction industry's previous practice of simply basing current prices on previous sales.

The company's goal is to bring transparency to the auction house industry. But Artnome's new model faces the old problem: its machine learning system performs poorly on the works that typically sell for the most, the ones people are the most interested in, since it's hard to predict the price of a one-of-a-kind masterpiece.

"With a limited data set, it's just harder to generalize," Waters said.

We talked to Waters about how he compiled, cleaned and created Artnome's machine learning model for predicting auction prices, which launched in late January.

Most of the information about artists included in Artnome's model comes from the dusty basement libraries of auction houses, where they store their catalogues raisonnés, which are books that serve as complete records of an artist's work. Artnome is compiling and digitizing these records, representing the first time these books have been brought online, Waters said.

Artnome's model currently includes information from about 5,000 artists whose works have been sold over the last 15 years. Prices in the data set range from $100 at the low end to Leonardo da Vinci's record-breaking Salvator Mundi, a painting that sold for $450.3 million in 2017, making it the most expensive work of art ever sold.

How hard was it to predict what da Vinci's 500-year-old Salvator Mundi would sell for? Before the sale, Christie's auction house estimated his portrait of Jesus Christ was worth around $100 million, less than a quarter of the final price.

"It was unbelievable," Alex Rotter, chairman of Christie's postwar and contemporary art department, told The Art Newspaper after the sale. Rotter reported the winning phone bid.

"I tried to look casual up there, but it was very nerve-wracking. All I can say is, the buyer really wanted the painting and it was very adrenaline-driven."

A piece like Salvator Mundi could come to market in 2017 and then not go up for auction again for 50 years. And because a machine learning model is only as good as the quality and quantity of the data it is trained on, shifts in the market, a work's condition and changes in availability make it hard to predict a future price for a painting.

These variables are categorized into two types of data: structured and unstructured. And cleaning all of it represents a major challenge.

Structured data includes information like what artist painted which painting on what medium, and in which year.

Waters intentionally limited the types of structured information he included in the model to keep the system from becoming too unruly to work with. But defining paintings as solely two-dimensional works on only certain mediums proved difficult, since there are so many different types of paintings (Salvador Dalí famously painted on a cigar box, after all). Artnome's problem represents an issue of high cardinality, Waters said, since there are so many different categorical variables he could include in the machine learning system.
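
As a rough illustration of the high-cardinality problem, a common workaround is to keep only the most frequent medium categories and fold the long tail into an "other" bucket before one-hot encoding. The column name and threshold below are hypothetical; the article doesn't describe Artnome's actual encoding.

```python
# Sketch of taming a high-cardinality "medium" column: keep the most common
# categories and fold rare ones into "other" before one-hot encoding.
# Column names here are placeholders, not Artnome's actual schema.
import pandas as pd

def encode_medium(df: pd.DataFrame, top_n: int = 20) -> pd.DataFrame:
    top = df["medium"].value_counts().nlargest(top_n).index
    df = df.copy()
    df["medium"] = df["medium"].where(df["medium"].isin(top), "other")
    return pd.get_dummies(df, columns=["medium"], prefix="medium")
```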

"You want the model to be narrow enough so that you can figure out the nuances between really specific mediums, but you also don't want it to be so narrow that you're going to overfit," Waters said, adding that large models also become more unruly to work with.

Other structured data focuses on the artist herself, denoting details like when the creator was born or if they were alive during the time of auction. Waters also built a natural language processing system that analyzes the type and frequency of the words an artist used in her paintings' titles, noting trends like Georgia O'Keeffe using the word "white" in many of her famous works.
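
A minimal sketch of that kind of title analysis, assuming nothing more than scikit-learn's CountVectorizer and a few illustrative titles (not Artnome's data):

```python
# Count word frequencies across an artist's painting titles to surface
# recurring vocabulary, e.g. "white". Titles below are illustrative only.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

titles = ["White Canadian Barn", "Black Iris", "White Flower No. 1"]
vec = CountVectorizer(stop_words="english")
counts = vec.fit_transform(titles)
totals = np.asarray(counts.sum(axis=0)).ravel()
freq = dict(zip(vec.get_feature_names_out(), totals))
print(sorted(freq.items(), key=lambda kv: -kv[1]))  # "white" appears twice
```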

Including information on market conditions, like current stock prices or real estate data, was important from a structured perspective too.

"How popular is an artist, are they exhibiting right now? How many people are interested in this artist? What's the state of the market?" Waters said. "Really getting those trends and quantifying those could be just as important as more data."

Another type of data included in the model is unstructured data, which, as the name might suggest, is a little less concrete than the structured items. This type of data is mined from the actual painting, and includes information like the artwork's dominant color, number of corner points and if faces are pictured.
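
The article doesn't say how these features are computed, but each has a standard, off-the-shelf approximation. Below is a hedged sketch using OpenCV, with a placeholder image path; it is not Artnome's pipeline.

```python
# Approximations of three unstructured features the article mentions:
# dominant color (k-means over pixels), corner count and face count.
import cv2
import numpy as np

def unstructured_features(path: str) -> dict:
    img = cv2.imread(path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Dominant color: cluster pixels into 3 colors, take the biggest cluster.
    pixels = img.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
    _, labels, centers = cv2.kmeans(pixels, 3, None, criteria, 3,
                                    cv2.KMEANS_RANDOM_CENTERS)
    dominant_bgr = centers[np.bincount(labels.flatten()).argmax()]

    # Corner points: Shi-Tomasi detector on the grayscale image.
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=500,
                                      qualityLevel=0.01, minDistance=5)

    # Faces: classic Haar cascade face detector.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray)

    return {
        "dominant_bgr": dominant_bgr.tolist(),
        "corner_count": 0 if corners is None else len(corners),
        "face_count": len(faces),
    }
```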

Waters used a pre-trained convolutional neural network to look for these variables, modeling the project after ResNet-50, a member of the ResNet family of architectures that won the ImageNet Large Scale Visual Recognition Challenge in 2015; the full ImageNet data set contains roughly 14 million labeled images.
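
A minimal sketch of using a pretrained ResNet-50 as an image feature extractor, which is roughly the role described here; the actual network Waters built is not public, and the torchvision weights API below is an assumption about tooling, not Artnome's code.

```python
# Use a pretrained ResNet-50 with its classifier head removed to turn a
# painting photo into a 2048-dimensional feature vector.
import torch
from torchvision import models
from PIL import Image

weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights)
model.fc = torch.nn.Identity()        # drop the classifier, keep the features
model.eval()

preprocess = weights.transforms()     # resize, crop and normalize as ResNet expects

def image_features(path: str) -> torch.Tensor:
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return model(img).squeeze(0)  # 2048-dimensional embedding
```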

Including unstructured data helps quantify the complexity of an image, Waters said, giving it what he called an "edge score."

An edge score helps the machine learning system quantify the subjective points of a painting that seem intuitive to humans, Waters said. An example might be Vincent van Gogh's series of paintings of red-haired men posing in front of a blue background. When you're looking at the painting, it's not hard to see you're looking at self-portraits of Van Gogh, by Van Gogh.
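
The article never defines the edge score mathematically, so the snippet below is only one plausible reading: the share of pixels that a Canny detector marks as edges, used as a crude proxy for visual complexity.

```python
# One plausible "edge score": fraction of pixels flagged as edges by Canny.
import cv2

def edge_score(path: str) -> float:
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(gray, 100, 200)
    return float((edges > 0).mean())
```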

Including unstructured data in Artnome's system helps the machine spot visual cues that suggest images are part of a series, which has an impact on their value, Waters said.

"Knowing that that's a self-portrait would be important for that artist," Waters said. "When you start interacting with different variables, then you can start getting into more granular details that, for some paintings by different artists, might be more important than others."

Artnome's convolutional neural network is good at analyzing paintings for data that tells a deeper story about the work. But sometimes, there are holes in the story being told.

In its current iteration, Artnome's model includes paintings both with and without frames; it doesn't specify which work falls into which category. Not identifying the frame could affect the dominant color the system discovers, Waters said, adding an error to its results.

"That could maybe skew your results and say, like, the dominant color was yellow when really the painting was a landscape and it was green," Waters said.

The model also lacks information on the condition of the painting, which, again, could impact the artwork's price. If the model can't detect a crease in the painting, it might overestimate its value. Also missing is data on an artwork's provenance, or its ownership history. Some evidence suggests that paintings that have been displayed by prominent institutions sell for more. There's also the issue of popularity. Waters hasn't found a concrete way to tell the system that people like the work of O'Keeffe more than the paintings by artist and actor James Franco.

"I'm trying to think of a way to come up with a popularity score for these very popular artists," Waters said.

An auctioneer hits the hammer to indicate a sale has been made. But the last price the bidder shouts isn't what they actually pay.

Buyers also must pay the auction house a commission, which varies between auction houses and has changed over time. Waters has had to dig up the commission rates for these outlets over the years and add them to the sales price listed. He's also had to make sure all sales prices are listed in dollars, converting those listed in other currencies. Standardizing each sale ensures the predictions the model makes are accurate, Waters said.
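
A small sketch of that standardization step, with made-up premium rates and exchange rates standing in for the schedules Waters actually compiled:

```python
# Standardize a hammer price: add the buyer's premium, then convert to USD.
# The premium rate and FX table here are illustrative placeholders.
def total_price_usd(hammer_price: float, currency: str,
                    premium_rate: float, fx_to_usd: dict) -> float:
    price_with_premium = hammer_price * (1 + premium_rate)
    return price_with_premium * fx_to_usd[currency]

# e.g. a 2,000,000 GBP hammer price with a 25% buyer's premium
print(total_price_usd(2_000_000, "GBP", 0.25, {"GBP": 1.30, "USD": 1.0}))
```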

"You'd introduce a lot of bias into the model if some things didn't have the commission, but some things did," Waters said. "It would be clearly wrong to start comparing the two."

Once Artnome's data has been gleaned and cleaned, information is input into the machine learning system, which Waters structured as a random forest model, an algorithm that builds and merges multiple decision trees to arrive at an accurate prediction. Waters said using a random forest model keeps the system from overfitting paintings into one category, and also offers a level of explainability through its permutation score, a metric that basically identifies the most important aspects of a painting.

Waters doesn't weigh the data he puts into the model. Instead, he lets the machine learning system tell him what's important, with the model weighing factors like today's S&P prices more heavily than the dominant color of a work.

"That's kind of one way to get the feature importance, for kind of a black box estimator," Waters said.
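
A minimal sketch of that modelling recipe, a random forest regressor explained with permutation importance, using synthetic placeholder features rather than Artnome's data:

```python
# Random forest price model with permutation importance as the explanation
# layer. Feature names and data are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feature_names = ["year", "sp500", "edge_score", "corner_count"]
X = rng.normal(size=(500, 4))
y = 1000 * np.exp(X[:, 1] + 0.3 * X[:, 2]) + rng.normal(scale=50, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_train, y_train)

# Permutation importance: how much test error grows when a feature is shuffled.
imp = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in zip(feature_names, imp.importances_mean):
    print(f"{name}: {score:.3f}")
```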

Although Artnome has been approached by private collectors, gallery owners and startups in the art tech world interested in its machine learning system, Waters said it's important this data set and model remain open to the public.

His aim is for Artnome's machine learning model to eventually function like Zillow's Zestimate, which estimates real estate prices for homes on and off the market, and act as a general starting point for those interested in finding out the price of an artwork.

"We might not catch a specific genre, or era, or point in the art history movement," Waters said. "I don't think it'll ever be perfect. But when it gets to the point where people see it as a respectable starting point, then that's when I'll be really satisfied."

More here:
Artnome Wants to Predict the Price of a Masterpiece. The Problem? There's Only One. - Built In

VUniverse Named One of Five Finalists for SXSW Innovation Awards: AI & Machine Learning Category – Yahoo Finance

Company to Demonstrate Live at Finalists Showcase in Austin, TX on Saturday, March 14

NEW YORK, Feb. 5, 2020 /PRNewswire/ -- VUniverse, a personalized movie and show recommendation platform that enables users to browse their streaming services in one app, a channel guide for the streaming universe, announced today it's been named one of five finalists in the AI & Machine Learning category for the 23rd annual SXSW Innovation Awards.

The SXSW Innovation Awards recognizes the most exciting tech developments in the connected world. During the showcase on Saturday, March 14, 2020, VUniverse will offer first-look demos of its platform as attendees explore this year's most transformative and forward-thinking digital projects. They'll be invited to experience how VUniverse utilizes AI to cross-reference all streaming services a user subscribes to and then delivers personalized suggestions of what to watch.

"We're honored to be recognized as a finalist for the prestigious SXSW Innovation Awards and look forward to showcasing our technology that helps users navigate the increasingly ever-changing streaming service landscape," said VUniverse co-founder Evelyn Watters-Brady. "With VUniverse, viewers will spend less time searching and more time watching their favorite movies and shows, whether it be a box office hit or an obscure indie gem."

About VUniverse: VUniverse is a personalized movie and show recommendation platform that enables users to browse their streaming services in one app, a channel guide for the streaming universe. Using artificial intelligence, VUniverse creates a unique taste profile for every user and serves smart lists of curated titles using mood, genre, and user-generated tags, all based on content from the user's existing subscription services. Users can also create custom watchlists and share them with friends and family.

Media Contact: Jessica Cheng, jessica@relativity.ventures

View original content: http://www.prnewswire.com/news-releases/vuniverse-named-one-of-five-finalists-for-sxsw-innovation-awards-ai--machine-learning-category-300999113.html

SOURCE VUniverse

Follow this link:
VUniverse Named One of Five Finalists for SXSW Innovation Awards: AI & Machine Learning Category - Yahoo Finance

How To Drive More CPQ Sales With AI In 2020 – Forbes

Bottom Line: AI-based deal intelligence, pricing and predictive analytics are defining the future of CPQ selling today by providing real-time insights applicable to every sales cycle, from initial quote through contracts and renewals.

AI and machine learning are making immediate contributions to driving more revenue by improving deal price guidance, deal intelligence, dynamic pricing, and rebate & incentive management. Every CPQ vendor knows that pricing is the catalyst they need in their applications to attract and keep new customers. They've redefined their product road maps to enable customers to create pricing segmentation models, provide price optimization guidance to sales teams, and optimize pricing for each product and customer. Salesforce is providing Einstein Pricing and Einstein Analytics Templates to optimize margins, improve close rates and quoting efficiency, and improve subscription metrics.

What CROs Are Looking For

Chief Revenue Officers (CROs), Sales and Sales Operations VPs are looking for CPQ solutions to step up and deliver improved pricing guidance across sales cycles, ideally through intuitive, easily understood visualizations. A CRO of a medical device manufacturer who is generating over 40% of all revenue from CPQ sales strategies told me recently that for CPQ to make a difference in their company, it needs to do the following:

AI's Four Cornerstones Of CPQ Growth

Pricing agility, intelligence and speed win more deals by enabling organizations to complete quotes faster and more completely than competitors. AI's four cornerstones of CPQ growth are intelligent pricing, embedding sales insights into more sales cycles, improving deal workflows with greater intelligence and improving enterprise-wide CPQ integration. User experience is now table stakes to deliver any CPQ solution, and it's the glue that keeps all four cornerstones aligned. Several vendors, including Vendavo, are defining their product strategies based on these trends and are well-positioned for 2020. Expanding on each of these four cornerstones provides insights into how AI can drive more CPQ revenue in 2020:

Conclusion

The effectiveness of CPQ selling strategies is improving thanks to AI and machine learning. Based on visits with medical device and discrete manufacturers who rely on CPQ for a large percentage of sales, four cornerstones of AI-driven CPQ improvement emerge. The first is how important intelligent pricing is becoming. In addition, embedding sales insights into more sales cycles, improving deal workflows with greater intelligence and improving enterprise-wide CPQ integration are cornerstones of CPQ in 2020. AI and machine learning are today improving revenue lifecycles by providing sales teams guidance on which deals to quote, at which price, for which specific products.

Read the original:
How To Drive More CPQ Sales With AI In 2020 - Forbes