Blatant censorship: Retrospective of American painter Philip Guston delayed four years – WSWS

The decision by four major art museums in the UK and US to postpone for four years Philip Guston Now, a long-planned retrospective of one of postwar America's most significant artists, is a cowardly act of censorship.

The National Gallery of Art in Washington, D.C., Tate Modern in London, the Museum of Fine Arts, Boston and the Museum of Fine Arts, Houston claimed in a September 21 statement that Guston's obviously hostile and darkly satirical images of Ku Klux Klansmen and others could not be exhibited until "a time at which we think that the powerful message of social and racial justice that is at the center of Philip Guston's work can be more clearly interpreted."

The museums' directors said they needed more time to properly prepare the public to understand Guston's message through outreach and programming. This is evasive and duplicitous. No honest opponent of racism and anti-Semitism would object to Guston's attack on the KKK and other reactionary features of American society. Those who object to the artist's supposed appropriation of African American suffering are cultural-nationalist elements who insist that race is the category that defines human beings.

The directors may share this foul view or simply feel the need to accommodate themselves to the current atmosphere. In either case, they have helped deliver a blow to artistic freedom.

In the face of a deluge of criticism, the directors of the National Gallery and the Tate have tried to defend themselves. National Gallery Director Kaywin Feldman told Hyperallergic this week that in today's America, because Guston appropriated images of Black trauma, the show needs to be about more than Guston. She went on, "Also, related, an exhibition with such strong commentary on race cannot be done by all-white curators. Everybody involved in this project is white. ... We definitely need some curators of color working on the project with us. I think all four museums agree with that statement."

This is simply disgusting, a craven capitulation to racialist thinking of the most sinister type, which historically has been associated with the far right. Along those lines, those who object or might object to the Guston exhibition are now generally vociferous in their calls for censorship. These are the same political forces who in 2017 protested against the exhibition, at the Whitney Museum in New York, of Dana Schutz's Open Casket, a painting based on a photograph of 14-year-old Emmett Till, a black youth murdered and mutilated in 1955. Some of the protesters, in fact, went so far as to demand the painting be burned!

To paraphrase what we said in 2017, the subject matter, the activities of the Klan, does not belong to African American artists or anyone else. It is the common property and responsibility of those who oppose, in Lenin's phrase, "all cases of tyranny, oppression, violence, and abuse." These petty-bourgeois nationalist elements are not genuinely concerned with the history of African American suffering or anyone else's. If they were, they would want it to be exposed and denounced as widely as possible. They are objecting to anyone else, as they see it, gaining some advantage from the franchise.

These are selfish, careerist elements who want to monopolize a field for their own prestige and profit. At the same time, the extreme racialism serves the political purpose, pursued by the New York Times and the Democratic Party milieu, of attempting to confuse the population and divide it along racial and ethnic lines, diverting from the struggle against social inequality, war and the threat of dictatorship.

In the past three years, the situation has only become more noxious and the racialists' activities more provocative.

The museum directors' announcement of the postponement was met with dismay by art critics who objected to the overt act of censorship, especially against an artist deeply committed to the struggle against racism, although most seemed resigned to the delay. The artist's daughter, Musa Mayer, commented, "It's sad. This should be a time of reckoning, of dialogue. These paintings meet the moment we are in today. The danger is not in looking at Philip Guston's work, but in looking away."

A forceful demand that the show be reinstated was issued in an open letter signed by 100 artists, curators, art dealers and writers, published last Wednesday in the Brooklyn Rail, which has since garnered hundreds more signatures. Signed by Matthew Barney, Nicole Eisenman, Joan Jonas, Martin Puryear, Lorna Simpson and Henry Taylor, among others, the list reads like a who's who of today's most prominent artists, black and white.

The open letter begins by noting that the undersigned artists were "shocked and disappointed" by the four-year postponement. The letter cites the comment by Musa Mayer that Guston had "dared to unveil [the] racist terror that he had witnessed since boyhood, when the Klan marched openly by the thousands in the streets of Los Angeles. As poor Jewish immigrants, his family fled extermination in the Ukraine. He understood what hatred was. It was the subject of his earliest works."

The open letter and the principled opposition of many artists to the museums' censorship are welcome and objectively significant, although the signatories weaken their own position by giving in too much to the notion of white culpability and other nostrums of identity politics.

The open letter is strongest in denouncing the notion that hiding Guston's art will somehow improve matters. "The people who run our great institutions do not want trouble," it argues. "They fear controversy. They lack faith in the intelligence of their audience." If museum officials feel that the current social eruptions will blow over in four years, the letter asserts, they are mistaken: "The tremors shaking us all will never end until justice and equity are installed. Hiding away images of the KKK will not serve that end. Quite the opposite." And Guston's paintings insist that justice has never yet been achieved.

The artists' letter demands that the exhibition be restored to the museums' schedules, and that their staffs prepare themselves to engage with a public that might well be curious about why a painter, ever self-critical and a standard-bearer for freedom, was compelled to use such imagery.

Guston (1913-1980) was born in Montreal to Ukrainian-Jewish parents but grew up in California and attended high school in Los Angeles with fellow future painter Jackson Pollock. Moving to New York, according to ArtNet, Guston "was enrolled in the Works Progress Administration during the 1930s [like Pollock], where he produced works inspired by the Mexican Muralists and Italian Renaissance paintings."

Guston became associated with Abstract Expressionism, the loose gestural painting style also known as the New York School that was the dominant artistic school of the Cold War period of the 1950s. Other Abstract Expressionists were Arshile Gorky, Willem de Kooning and, of course, Pollock.

After playing a leading role in the development of abstract art, however, Guston came to reject its approach as too rarefied and confining as a means of responding artistically and politically to the upheavals of the civil rights and antiwar movements of the 1960s. "What kind of man am I," he once asked, "sitting at home, reading magazines, going into a frustrated fury about everything, and then going into my studio to adjust a red to a blue?"

Guston became widely known for his blunt, almost cartoonish images suggesting the thuggish brutality and political corruption of official American society. He developed a distinctive figurative style populated with oversized heads, hands, bricks, shoes and other bizarre objects. The artist's highly personal iconography also included hooded Klansmen, who began appearing in his work as early as the 1930s. These buffoonish figures often appear crammed into cars like the Three Stooges, if anything more menacing because they seem so omnipresent and ordinary.

Attracted as a teenager to left-wing politics, Guston (then Goldstein) had joined one of the John Reed clubs sponsored by the Communist Party. While the role of the Stalinists was already a negative one, these clubs still attracted artists seeking to fight poverty and inequality. He and his friend Reuben Kadish painted a mural and joined a rally in Los Angeles to raise money for the defense of the Scottsboro Boys, the nine African American teenagers falsely accused of raping two white women in Alabama.

After the National Association for the Advancement of Colored People (NAACP) backed off the case over fears of repercussions, the youths' defense was taken up by the Communist Party. This won the CP broad support among radicalized white and black workers, as well as artists and young people like Guston. The painter, like many artists of his generation, eventually left the Stalinist orbit of the CP in favor of left-liberal politics. However, his commitment to fighting racism and anti-Semitism retained a genuine, democratic character at odds with the current racialist trends.

Often cloaked in left-sounding rhetoric by activist and artistic collectives that call for increasing the number of BIPOC (Black, Indigenous, People of Color) staff, board members and artists whose work is acquired and promoted, the identity politics campaigns against the "systemic racism" of cultural institutions have nothing progressive about them.

In response, the various institutions have endlessly adapted themselves to and retreated before their racialist critics. In mid-September, the Brooklyn Museum, no doubt in straitened circumstances because of the pandemic-induced closure, announced it would auction 12 works from its collection to raise funds for the care of its collection.

While culling work by 16th-19th century European painters Cranach the Elder, Gustave Courbet and Jean-Baptiste Camille Corot, the Brooklyn Museum has said that it would not sell any of its work by living, presumably more ethnically diverse artists. The Baltimore Museum of Art and the San Francisco Museum of Modern Art for their part recently made a point of selling work to acquire more art by women and artists of color.

In another manifestation of the logic of segregation to which this sort of outlook leads, the blue-chip Chelsea gallery and art dealer David Zwirner recently announced it was hiring Ebony L. Haynes as a new gallery director to realize her vision for a kunsthalle with an all-Black staff, which would offer exhibits of and internships to exclusively Black youth. "There aren't enough places of access, especially in commercial galleries, for Black staff and for people of color to gain experience," she said.

But what would access on this backward, racially exclusive basis amount to? What sort of art will come out of such a process?

The rotten character of this resurgence of racial-ethnic thinking finds expression in the censorship of the Guston exhibition itself. A show dedicated to the work of an artist who fiercely pursued equality and an end to oppression of all types has run afoul of a privileged, upper middle class crowd whose outlook and activity operate in a very different direction: toward racial-ethnic exclusivism, selfishness and the striving for privilege.


Facebook, Twitter Censor Trump Post Comparing COVID with the Flu – CBN News

The social media giants Facebook and Twitter on Tuesday censored President Trump's post and tweet comparing COVID-19 to the flu.

Facebook removed Trump's post in which he claimed COVID-19 is less deadly "in most populations" than the flu.

The President wrote on Twitter: "Flu season is coming up! Many people every year, sometimes over 100,000, and despite the Vaccine, die from the Flu. Are we going to close down our Country? No, we have learned to live with it, just like we are learning to live with Covid, in most populations far less lethal!!!"

Twitter left the President's tweet in place, but added the following disclaimer:

"This Tweet violated the Twitter Rules about spreading misleading and potentially harmful information related to COVID-19. However, Twitter has determined that it may be in the public's interest for the Tweet to remain accessible," the disclaimer read.

Axios reports Facebook has been criticized for not removing posts that violate community guidelines in a timely manner, yet the company took swift action when Trump posted information about the virus that "could contribute to imminent physical harm." Twitter took action about 30 minutes later.

A Facebook spokesperson told Axios, "We remove incorrect information about the severity of COVID-19, and have now removed this post."

A Twitter spokesman also told the website: "We placed a public interest notice on this Tweet for violating our COVID-19 Misleading Information Policy by making misleading health claims about COVID-19. As is standard with this public interest notice, engagements with the Tweet will be significantly limited."

The President's social media posts came after he tested positive for COVID-19 and spent three days at the Walter Reed Medical Center. While reportedly still contagious, he will continue his recovery at the White House, where he will be cared for 24/7 by a team of doctors and nurses.

Out of 7.4 million cases in the US, COVID-19 has killed almost 210,000 Americans this year, according to the CDC. For comparison, the CDC's website estimates 24,000 to 62,000 have died during the most recent flu season, out of 39 million to 56 million people who were sick from it.



Censorship vote: the Civil Guard requests more information on the alleged irregularities – Sportsfinding

As reported by journalist Jordi Martí on SER Catalunya, the Civil Guard has asked Barça for more information, considering what the club supplied at the time to be insufficient regarding alleged irregularities in the signatures for the vote of no confidence. Also according to SER, Barça denies having made a formal complaint but, upon receiving notification from the Censorship Vote Table that members would not be called individually to check whether they had signed each ballot (a guarantee measure requested by the club), conveyed to the Civil Guard its suspicions of alleged irregularities that could be connected to a fraud ring involving the resale of membership cards.

The matter affected 2,800 cards and has also been investigated by the Civil Guard. Barça claims to be certain that the signatures on certain ballots are false. With the count completed, it seems unlikely that the doubts raised by Bartomeu's board of directors will prosper.


The secrets of small data: How machine learning finally reached the enterprise – VentureBeat

Over the past decade, big data has become Silicon Valley's biggest buzzword. When they're trained on mind-numbingly large data sets, machine learning (ML) models can develop a deep understanding of a given domain, leading to breakthroughs for top tech companies. Google, for instance, fine-tunes its ranking algorithms by tracking and analyzing more than one trillion search queries each year. It turns out that the Solomonic power to answer all questions from all comers can be brute-forced with sufficient data.

But there's a catch: Most companies are limited to small data; in many cases, they possess only a few dozen examples of the processes they want to automate using ML. If you're trying to build a robust ML system for enterprise customers, you have to develop new techniques to overcome that dearth of data.

Two techniques in particular, transfer learning and collective learning, have proven critical in transforming small data into big data, allowing average-sized companies to benefit from ML use cases that were once reserved only for Big Tech. And because just 15% of companies have deployed AI or ML already, there is a massive opportunity for these techniques to transform the business world.

Above: Using the data from just one company, even modern machine learning models are only about 30% accurate. But thanks to collective learning and transfer learning, Moveworks can determine the intent of employees' IT support requests with over 90% precision.

Image Credit: Moveworks

Of course, data isn't the only prerequisite for a world-class machine learning model; there's also the small matter of building that model in the first place. Given the short supply of machine learning engineers, hiring a team of experts to architect an ML system from scratch is simply not an option for most organizations. This disparity helps explain why a well-resourced tech company like Google benefits disproportionately from ML.

But over the past several years, a number of open source ML models, including the famous BERT model for understanding language, which Google released in 2018, have started to change the game. The complexity of creating a model the caliber of BERT, whose aptly named large version has about 340 million parameters, means that few organizations can even consider quarterbacking such an initiative. However, because it is open source, companies can now tweak that publicly available playbook to tackle their specific use cases.

To understand what these use cases might look like, consider a company like Medallia, a Moveworks customer. On its own, Medallia doesn't possess enough data to build and train an effective ML system for an internal use case, like IT support. Yet its small data does contain a treasure trove of insights waiting for ML to unlock. And by leveraging new techniques to glean these insights, Medallia has become more efficient, from recognizing which internal workflows need attention to understanding the company-specific language its employees use when asking for tech support.

So here's the trillion-dollar question: How do you take an open source ML model designed to solve a particular problem and apply that model to a disparate problem in the enterprise? The answer starts with transfer learning, which, unsurprisingly, entails transferring knowledge gained from one domain to a different domain that has less data.

For example, by taking an open source ML model like BERT, designed to understand generic language, and refining it at the margins, it is now possible for ML to understand the unique language employees use to describe IT issues. And language is just the beginning, since we've only begun to realize the enormous potential of small data.

Above: Transfer learning leverages knowledge from a related domain, typically one with a greater supply of training data, to augment the small data of a given ML use case.

Image Credit: Moveworks
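To make the transfer learning idea concrete, here is a minimal sketch of what fine-tuning an open source BERT model on a handful of labeled IT-support requests might look like, using the Hugging Face Transformers library. The tiny dataset and intent labels are invented for illustration; this is not Moveworks' actual pipeline.

```python
# Minimal transfer-learning sketch: start from pretrained BERT and fine-tune a
# small classification head on a handful of (hypothetical) IT-support requests.
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)
import torch

texts = ["my laptop won't connect to the vpn", "please reset my email password"]
labels = [0, 1]  # 0 = network issue, 1 = account issue (hypothetical intents)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # pretrained weights, new classifier head

class TicketDataset(torch.utils.data.Dataset):
    def __init__(self, texts, labels):
        self.enc = tokenizer(texts, truncation=True, padding=True)
        self.labels = labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=3),
    train_dataset=TicketDataset(texts, labels),
)
trainer.train()
```

The pretrained weights already encode generic language understanding, so only the new classification head, and lightly the upper layers, have to adapt to the enterprise-specific vocabulary, which is why a few dozen labeled examples can be enough to get started.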

More generally, this practice of feeding an ML model a very small and very specific selection of training data is called few-shot learning, a term that's quickly become one of the new big buzzwords in the ML community. Some of the most powerful ML models ever created, such as the landmark GPT-3 model with its 175 billion parameters (orders of magnitude more than BERT), have demonstrated an unprecedented knack for learning novel tasks with just a handful of examples as training.

Taking essentially the entire internet as its tangential domain, GPT-3 quickly becomes proficient at these novel tasks by building on a powerful foundation of knowledge, in the same way Albert Einstein wouldn't need much practice to become a master at checkers. And although GPT-3 is not open source, applying similar few-shot learning techniques will enable new ML use cases in the enterprise, ones for which training data is almost nonexistent.
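In practice, few-shot learning with a large language model is often as simple as placing the handful of labeled examples directly in the prompt and letting the model complete the pattern. The sketch below only builds such a prompt; the example requests and categories are hypothetical, and the call to an actual model (GPT-3 itself is not open source) is left out.

```python
# Few-shot prompting sketch: a handful of labeled examples go straight into the
# prompt, and a large language model is asked to continue the pattern.
FEW_SHOT_EXAMPLES = [
    ("I can't log in to the payroll portal", "account_access"),
    ("The projector in room 4B is broken", "hardware"),
    ("Please add me to the sales mailing list", "group_membership"),
]

def build_prompt(new_request: str) -> str:
    lines = ["Classify each IT request into a category.", ""]
    for text, label in FEW_SHOT_EXAMPLES:
        lines.append(f"Request: {text}")
        lines.append(f"Category: {label}")
        lines.append("")
    lines.append(f"Request: {new_request}")
    lines.append("Category:")  # the model is expected to fill this in
    return "\n".join(lines)

print(build_prompt("My VPN token expired while travelling"))
```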

With transfer learning and few-shot learning on top of powerful open source models, ordinary businesses can finally buy tickets to the arena of machine learning. But while training ML with transfer learning takes several orders of magnitude less data, achieving robust performance requires going a step further.

That step is collective learning, which comes into play when many individual companies want to automate the same use case. Whereas each company is limited to small data, third-party AI solutions can use collective learning to consolidate those small data sets, creating a large enough corpus for sophisticated ML. In the case of language understanding, this means abstracting sentences that are specific to one company to uncover underlying structures:

Above: Collective learning involves abstracting data, in this case sentences, with ML to uncover universal patterns and structures.

Image Credit: Moveworks
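A rough sketch of the abstraction step that collective learning depends on: company-specific names in each customer's sentences are replaced with generic placeholders so that the pooled corpus exposes shared structure. The vocabularies and sentences below are invented, and a real system would learn these mappings rather than hard-code them.

```python
import re

# Hypothetical per-company vocabularies; in practice these would be much larger
# and derived automatically rather than hand-written.
COMPANY_TERMS = {
    "medallia": {"apps": ["medallia experience cloud"], "teams": ["voc-support"]},
    "acme":     {"apps": ["acmeportal"],                "teams": ["it-helpdesk"]},
}

def abstract_sentence(sentence: str, company: str) -> str:
    """Replace company-specific names with generic placeholders."""
    terms = COMPANY_TERMS[company]
    out = sentence.lower()
    for app in terms["apps"]:
        out = re.sub(re.escape(app), "<APP>", out)
    for team in terms["teams"]:
        out = re.sub(re.escape(team), "<TEAM>", out)
    return out

# Pool the abstracted sentences from every customer into one training corpus.
raw = [
    ("I can't open Medallia Experience Cloud, please ask voc-support", "medallia"),
    ("AcmePortal is down again, escalate to it-helpdesk", "acme"),
]
corpus = [abstract_sentence(s, c) for s, c in raw]
print(corpus)
# Both sentences now share the pattern "<APP> ... <TEAM>", which one model can
# learn from, even though each company alone had too little data.
```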

The combination of transfer learning and collective learning, among other techniques, is quickly redrawing the limits of enterprise ML. For example, pooling together multiple customers' data can significantly improve the accuracy of models designed to understand the way their employees communicate. Well beyond understanding language, of course, we're witnessing the emergence of a new kind of workplace, one powered by machine learning on small data.


Commentary: Can AI and machine learning improve the economy? – FreightWaves

The views expressed here are solely those of the author and do not necessarily represent the views of FreightWaves or its affiliates.

In this installment of the AI in Supply Chain series (#AIinSupplyChain), I tried to discern the outlines of an answer to the question posed in the headline above by reading three academic papers. This article distills what I consider the most important takeaways from the papers.

Although the context of the investigations that resulted in these papers looks at the economy as a whole, there are implications that are applicable at the level of an individual firm. So, if you are responsible for innovation, corporate development and strategy at your company, it's probably worth your time to read each of them and then interpret the findings for your own firm.

In the first of these papers, Erik Brynjolfsson, Daniel Rock and Chad Syverson explore the paradox that while systems using artificial intelligence are advancing rapidly, measured economywide productivity has declined.

Optimism about AI and machine learning is driven by recent and dramatic improvements in machine perception and cognition. These skills are essential to the ways in which people get work done, which has fueled hopes that machines will rapidly approach, and possibly surpass, people in their ability to do many different tasks that today are the preserve of humans.

However, productivity statistics do not yet reflect growth that is driven by the advances in AI and machine learning. If anything, the authors cite statistics to suggest that labor productivity growth fell in advanced economies starting in the mid-2000s and has not recovered to its previous levels.

Therein lies the paradox: AI and machine learning boosters predict it will transform entire swathes of the economy, yet the economic data do not point to such a transformation taking place. What gives?

The authors offer four possible explanations.

First, it is possible that the optimism about AI and machine learning technologies is misplaced. Perhaps they will be useful in certain narrow sectors of the economy, but ultimately their economywide impact will be modest and insignificant.

Second, it is possible that the impact of AI and machine learning technologies is real but is not being measured accurately. In contrast to the first explanation, the error here is one of undue pessimism: the technologies are contributing to economic productivity, but the statistics fail to capture that contribution.

Third, perhaps these new technologies are producing positive returns to the economy, but these benefits are being captured by a very small number of firms, and as such the rewards are enjoyed by only a minuscule fraction of the population.

Fourth, the benefits of AI and machine learning will not be reflected in the wider economy until investments have been made to build up complementary technologies, processes, infrastructure, human capital and other types of assets that make it possible for society to realize and measure the transformative benefits of AI and machine learning.

The authors argue that AI, machine learning and their complementary new technologies embody the characteristics of general purpose technologies (GPTs). A GPT has three primary features: It is pervasive or can become pervasive; it can be improved upon as time elapses; and it leads directly to complementary innovations.

Electricity. The internal combustion engine. Computers. The authors cite these as examples of GPTs with which readers are familiar.

Crucially, the authors state that a GPT can at one moment both be present and yet not affect current productivity growth "if there is a need to build a sufficiently large stock of the new capital, or if complementary types of capital, both tangible and intangible, need to be identified, produced, and put in place to fully harness the GPT's productivity benefits."

It takes a long time for economic production at the macro- or micro-scale to be reorganized to accommodate and harness a new GPT. The authors point out that computers took 25 years before they became ubiquitous enough to have an impact on productivity. It took 30 years for electricity to become widespread. As the authors state, "the changes required to harness a new GPT take substantial time and resources, contributing to organizational inertia. Firms are complex systems that require an extensive web of complementary assets to allow the GPT to fully transform the system. Firms that are attempting transformation often must reevaluate and reconfigure not only their internal processes but often their supply and distribution chains as well."

The authors end the article by stating: "Realizing the benefits of AI is far from automatic. It will require effort and entrepreneurship to develop the needed complements, and adaptability at the individual, organizational, and societal levels to undertake the associated restructuring. Theory predicts that the winners will be those with the lowest adjustment costs and that put as many of the right complements in place as possible. This is partly a matter of good fortune, but with the right roadmap, it is also something for which they, and all of us, can prepare."

In the second paper, Brynjolfsson, Xiang Hui and Meng Liu explore the effect that the introduction of eBay Machine Translation (eMT) had on eBay's international trade. The authors describe eMT as "an in-house machine learning system that statistically learns how to translate among different languages." They also state: "As a platform, eBay mediated more than 14 billion dollars of global trade among more than 200 countries in 2014." Basically, eBay represents a good approximation of a complex economy within which to examine the economywide benefits of this type of machine translation.

The authors state: "We show that a moderate quality upgrade increases exports on eBay by 17.5%. The increase in exports is larger for differentiated products, cheaper products, listings with more words in their title. Machine translation also causes a greater increase in exports to less experienced buyers. These heterogeneous treatment effects are consistent with a reduction in translation-related search costs, which comes from two sources: (1) an increased matching relevance due to improved accuracy of the search query translation and (2) better translation quality of the listing title in buyers' language."

They report an accompanying 13.1% increase in revenue, even though they only observed a 7% increase in the human acceptance rate.

They also state: "To put our result in context, Hui (2018) has estimated that a removal of export administrative and logistic costs increased export revenue on eBay by 12.3% in 2013, which is similar to the effect of eMT. Additionally, Lendle et al. (2016) have estimated that a 10% reduction in distance would increase trade revenue by 3.51% on eBay. This means that the introduction of eMT is equivalent of [sic] the export increase from reducing distances between countries by 37.3%. These comparisons suggest that the trade-hindering effect of language barriers is of first-order importance. Machine translation has made the world significantly smaller and more connected." (As a rough back-of-the-envelope check, the 37.3% figure follows from scaling the Lendle et al. elasticity: 13.1% ÷ 3.51% × 10% ≈ 37.3%.)

In the third paper, Brynjolfsson, Rock and Syverson develop a model that shows how GPTs like AI enable, and require, significant complementary investments, including co-invention of new processes, products, business models and human capital. These complementary investments are often intangible and poorly measured in the national accounts, even when they create valuable assets for the firm. The model shows how this leads to an underestimation of productivity growth in the early years of a new GPT and how, later, when the benefits of the intangible investments are harvested, productivity growth will be overestimated. Their model generates a Productivity J-Curve that can explain the productivity slowdowns that often accompany the advent of GPTs, as well as the increase in productivity later.

The authors find that, first, "As firms adopt a new GPT, total factor productivity growth will initially be underestimated because capital and labor are used to accumulate unmeasured intangible capital stocks." Then, second, "Later, measured productivity growth overestimates true productivity growth because the capital service flows from those hidden intangible stocks generates measurable output." Finally, "The error in measured total factor productivity growth therefore follows a J-curve shape, initially dipping while the investment rate in unmeasured capital is larger than the investment rate in other types of capital, then rising as growing intangible stocks begin to contribute to measured production."
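The intuition can be put in a compact form. As a stylized illustration only, not the authors' exact specification, suppose that both intangible investment and the services of intangible capital are invisible to the statistical agencies:

\[
\Delta \ln \mathrm{TFP}^{\text{measured}}_t \;\approx\; \Delta \ln \mathrm{TFP}^{\text{true}}_t \;-\; s^{I}_t \;+\; s^{K}_t
\]

where \(s^{I}_t\) stands for the share of measured inputs diverted into building unmeasured intangible capital and \(s^{K}_t\) for the contribution of that hidden capital's services to measured output. Early in adoption the \(s^{I}_t\) term dominates and measured growth understates true growth; later the \(s^{K}_t\) term dominates and measured growth overstates it, tracing out the J shape.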

This explains the observed phenomenon that when a new technology like AI and machine learning, or something like blockchain and distributed ledger technology, is introduced into an area such as supply chain, it generates furious debate about whether it creates any value for incumbent suppliers or customers.

If we consider the reported time it took before other GPTs like electricity and computers began to contribute measurably to firm-level and economywide productivity, we must admit that it is perhaps too early to write off blockchains and other distributed ledger technologies, or AI and machine learning, and their applications in sectors of the economy that are not usually associated with internet and other digital technologies.

Give it some time. However, I think we are near the inflection point of the AI and Machine Learning Productivity J-curve. As I have worked on this #AIinSupplyChain series, I have become more convinced that the companies that are experimenting with AI and machine learning in their supply chain operations now will have the advantage over their competitors over the next decade.

I think we are a bit farther away from the inflection point of a Blockchain and Distributed Ledger Technologies Productivity J-Curve. I cannot yet make a cogent argument for why this is true, although in March 2014 I published #ChainReaction: Who Will Own The Age of Cryptocurrencies?, part of an ongoing attempt to understand when blockchains and other distributed ledger technologies might become more ubiquitous than they are now.

Examining this topic has added to my understanding of why disruption happens. The authors of the Productivity J-Curve paper state that the more transformative the new technology, the more likely its productivity effects will initially be underestimated.

The long duration during which incumbent firms underestimate the productivity effects of a relatively new GPT is what contributes to the phenomenon studied by Rebecca Henderson and Kim Clark in Architectural Innovation: The Reconfiguration of Existing Product Technologies and the Failure of Established Firms. It is also described as supply-side disruption by Joshua Gans in his book, The Disruption Dilemma, and summarized in his March 2016 HBR article, The Other Disruption.

If we focus on AI and machine learning specifically, in an exchange on Twitter on Sept. 27, Brynjolfsson said, "The machine translation example is in many ways the exception. More often it takes a lot of organizational reinvention and time before AI breakthroughs translate into productivity gains."

By the time entrenched and industry-leading incumbents awaken to the threats posed by newly developed GPTs, a crop of challengers who had no option but to adopt the new GPT at the outset has become powerful enough to threaten the financial stability of an industry.

One example? E-commerce and its impact on retail in general.

If you are an executive, what experiments are you performing to figure out if and how your company's supply chain operations can be made more productive by implementing technologies that have so far been underestimated by you and other incumbents in your industry?

If you are not doing anything yet, are you fulfilling your obligations to your company's shareholders, employees, customers and other stakeholders?

If you are a team working on innovations that you believe have the potential to significantly refashion global supply chains, we'd love to tell your story in FreightWaves. I am easy to reach on LinkedIn and Twitter. Alternatively, you can reach out to any member of the editorial team at FreightWaves at media@freightwaves.com.

Dig deeper into the #AIinSupplyChain Series with FreightWaves.

Commentary: Optimal Dynamics the decision layer of logistics?

Commentary: Combine optimization, machine learning and simulation to move freight

Commentary: SmartHop brings AI to owner-operators and brokers

Commentary: Optimizing a truck fleet using artificial intelligence

Commentary: FleetOps tries to solve data fragmentation issues in trucking

Commentary: Bulgaria's Transmetrics uses augmented intelligence to help customers

Commentary: Applying AI to decision-making in shipping and commodities markets

Commentary: The enabling technologies for the factories of the future

Commentary: The enabling technologies for the networks of the future

Commentary: Understanding the data issues that slow adoption of industrial AI

Commentary: How AI and machine learning improve supply chain visibility, shipping insurance

Commentary: How AI, machine learning are streamlining workflows in freight forwarding, customs brokerage

Author's disclosure: I am not an investor in any early-stage startups mentioned in this article, either personally or through REFASHIOND Ventures. I have no other financial relationship with any entities mentioned in this article.


Why organisations are poised to embrace machine learning – IT Brief Australia

Article by Snowflake senior sales engineer Rishu Saxena.

Once a technical novelty seen only in software development labs or enormous organisations, machine learning (ML) is poised to become an important tool for large numbers of Australian and New Zealand businesses.

Lured by promises of improved productivity and faster workflows, companies are investing in the technology in rising numbers. According to research firm Fortune Business Insights, the ML market will be worth US$117.19 billion by 2027.

Historically, ML was perceived to be an expensive undertaking that required massive upfront investment in people, as well as in both storage and compute systems. Recently, however, many of the roadblocks that had been hindering adoption have been removed.

One such roadblock was not having the right mindset or strategy when undertaking ML-related projects. Unlike more traditional software development, ML requires a flexible and open-ended approach. Sometimes it won't be possible to assess the result accurately, and this could well change during deployment and preliminary use.

A second roadblock was the lack of ML automation tools available on the market. Thanks to large investments and hard work by computer scientists, the latest generation of auto ML tools are feature-rich, intuitive and affordable.

Those wanting to put them to work no longer have to undertake extensive data science training or have a software development background. Dubbed citizen data scientists, these people can readily experiment with the tools and put their ideas into action.

The way data is stored and accessed by ML tools has also changed. Advances in areas such as cloud-based data warehouses and data lakes means an organisation can now have all its data in a single location. This means the ML tools can scan vast amounts of data relatively easily, potentially leading to insights that previously would have gone unnoticed.

The lowering of storage costs has further assisted this trend. Where an organisation may have opted to delete or archive data onto tape, that data can now continue to be stored in a production environment, making it accessible to the ML tools.

For those organisations looking to embrace ML and experience the business benefits it can deliver, there are a series of steps that should be followed:

When starting with ML, don't try to run before you walk. Begin with small, stand-alone projects that give citizen data scientists a chance to become familiar with the machine learning process, the tools, how they operate, and what can be achieved. Once this has been bedded down, it's then easier to gradually increase the size and scope of activities.

To start your ML journey, lean on the vast number of auto ML tools available on the market instead of using open source, notebook-based IDEs that require high levels of skill and familiarity with ML.

There is an increasing number of ML tools on the market, so take time to evaluate options and select the ones best suited to your business goals. This will also give citizen data scientists required experience before any in-house development is undertaken.

ML is not something that has to be the exclusive domain of the IT department. Encourage the growth of a pool of citizen data scientists within the organisation who can undertake projects and share their growing knowledge.

To enable ML tools to do as much as possible, centralise the storage of all data in your organisation. One option is to make use of a cloud-based data platform that can be readily scaled as data volumes increase.

Once projects have been underway for some time, closely monitor the results being achieved. This will help to guide further investments and shape the types of projects that will be completed in the future.

Once knowledge and experience levels within the organisation have increased, consider tackling more complex projects. These will have the potential to add further value to the organisation and ensure that stored data is generating maximum value.

The potential for ML to support organisations, help them to achieve fresh insights, and streamline their operations is vast. By starting small and growing over time, it's possible to keep costs under control while achieving benefits in a relatively short space of time.


Using Machine Learning To Predict Disease In Cattle Might Help Solve A Billion-Dollar Problem – Forbes

One of the challenges in scaling up meat production is disease among the animals. Take bovine respiratory disease (BRD), for example. This contagious infection is responsible for nearly half of all feedlot deaths for cattle every year in North America. The industry's costs for managing the disease come close to $1 billion annually.

Preventative measures could significantly decrease these costs, and a small team comprising a data scientist, a college student and two entrepreneurs spent the past weekend at the Forbes Under 30 Agtech+ Hackathon figuring out a concept for better managing the disease.

Their solution? Tag-Ag, a conceptual set of predictive models that could take data already routinely gathered by cattle ranchers and tracked using ear tags, both to identify cows at risk for BRD, so that prevention efforts can be focused, and to trace outbreaks of BRD, to support more focused treatment and management decisions.

"By providing these insights, we can instill confidence in both big consumers such as McDonald's or Wal-Mart, and small consumers like you and me, that their meat is sourced from a healthy and sustainable operation," said team member Natalie McCaffrey, an 18-year-old undergraduate at Washington & Lee University, at the Hackathon's final presentations on Sunday evening.

McCaffrey was joined by Jacob Shields, 30, a senior research scientist at Elanco Animal Health; Marya Dzmiturk, 28, cofounder of TK startup Avanii and an alumnus of the 2020 Forbes Under 30 list in Manufacturing & Industry; and Shaina Steward, 29, founder of The Model Knowledge Group & Ekal Living.

They joined a larger group of hackathoners who brainstormed a variety of concepts related to animal health on Friday night before settling on three different ideas, at which point the group split into the smaller teams. The initial pitch for the Tag-Ag team was the use of AI & Big Data to help producers keep animals healthy.

As the Tag-Ag team began its research and development process on Saturday, one clear challenge was the scope of potential animal health issues, as well as a potentially intense labor process in collecting useful information. They settled on cattle because, McCaffrey says, big ranchers are already electronically collecting data on cattle, and because BRD by itself makes a huge impact on the industry.

Another advantage of using data already being collected, adds Shields, is that tools exist to build a model for the concept's predictive analytics based on what's out there. "For supervised machine learning algorithms, the more inputs the better," he says. "I don't believe we'll need additional studies to support this case, unless we knew of a handful of data points that weren't being collected that really would help with the predictability."
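As a purely illustrative sketch of the kind of supervised model the team describes, the snippet below trains a classifier on invented ear-tag-style features to score each animal's BRD risk. The feature names, records and labels are hypothetical; Tag-Ag's actual data and model are not public.

```python
# Illustrative only: a supervised classifier that scores BRD risk from
# ear-tag-style data. All features, records and labels are invented.
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
import numpy as np

# Hypothetical features per animal:
# [daily weight gain (kg), temperature (°C), movement (steps/day), days on feed]
X = np.array([
    [1.2, 38.6, 4200, 30],
    [0.4, 40.1, 1500, 12],
    [1.0, 38.9, 3900, 45],
    [0.3, 40.4, 1100, 10],
    [1.1, 39.0, 4100, 60],
    [0.5, 39.9, 1700, 15],
])
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = later developed BRD (hypothetical labels)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Risk scores can be used to focus prevention on the highest-risk animals.
risk = model.predict_proba(X_test)[:, 1]
print(risk)
```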

For a business model, the Tag-Ag team suggests a subscription-based model, with a one-time implementation fee for any hardware needs. They believe that there's definitely room to raise capital, pointing to the size of the market loss they're addressing plus the $500 million in venture capital invested in AgTech companies in 2019 alone.

"Investors and institutions are recognizing opportunities in the AgTech space," McCaffrey says, and beyond that, she adds, "our space of AI and data has space for additional players."

Team members: Natalie McCaffrey, undergraduate, Washington & Lee University; Jacob Shields, senior research scientist, Elanco Animal Health; Marya Dzmiturk, cofounder, Avanii; Shaina Steward, 29, founder, The Model Knowledge Group and Ekal Living.


Is Quantum Machine Learning the next thing? | by Alessandro Crimi | ILLUMINATION-Curated | Oct, 2020 – Medium

In classical computers, bits are stored as either a 0 or a 1 in binary notation. Quantum computers use quantum bits, or qubits, which can be both 0 and 1 at once; this is called superposition. Last year Google and NASA claimed to have achieved quantum supremacy, though the claim raised some controversy. Quantum supremacy means that a quantum computer can perform a single calculation that no conventional computer, even the biggest supercomputer, can perform in a reasonable amount of time. Indeed, according to Google, the Sycamore is a computer with a 54-qubit processor, which can perform fast computations.
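Superposition can be illustrated with a tiny state-vector calculation, simulated here on an ordinary computer purely for intuition: applying a Hadamard gate to a qubit that starts in |0> leaves it with equal amplitude on 0 and 1, so a measurement yields either outcome with probability 1/2.

```python
# Classical toy simulation of one qubit, for intuition only (not quantum hardware).
import numpy as np

ket0 = np.array([1.0, 0.0])                   # the |0> state
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

state = H @ ket0                              # |+> = (|0> + |1>) / sqrt(2)
probabilities = np.abs(state) ** 2
print(state)          # [0.707..., 0.707...]
print(probabilities)  # [0.5, 0.5] -> "both 0 and 1" until measured
```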

Machines like Sycamore can speed up the simulation of quantum mechanical systems, drug design, the creation of new materials through molecular and atomic maps, solutions to the Deutsch Oracle problem, and machine learning.

When data points are projected into high dimensions during machine learning tasks, it is hard for classical computers to deal with such large computations (no matter the TensorFlow optimizations and so on). Even if a classical computer can handle it, an extensive amount of computational time is necessary.

In other words, the current computers we use can sometimes be slow at certain machine learning applications compared to quantum systems.

Indeed, superposition and entanglement can come in handy for training support vector machines or neural networks to behave similarly to a quantum system.

How this is done in practice can be summarized as follows.

In practice, quantum computers can be used and trained like neural networks; or, better, such neural networks incorporate some aspects of quantum physics. More specifically, in photonic hardware, a trained quantum circuit can be used to classify the content of images by encoding the image into the physical state of the device and taking measurements. If it sounds weird, it is because this topic is weird and difficult to digest. Moreover, the story is bigger than just using quantum computers to solve machine learning problems. Quantum circuits are differentiable, and a quantum computer itself can compute the change (rewrite) in control parameters needed to become better at a given task, pushing the concept of learning further.


Machine Learning Is Cheaper But Worse Than Humans at Fund Analysis – Institutional Investor

Morningstar had a problem.

Or rather, its millions of users did: The star-rating system, which drives huge volumes of assets, is inherently backward-looking. These make-or-break badges label how well (or badly) a fund has performed, not how it will perform.

Morningstar's solution was analysts: humans who dig deep into the big and popular fund products, then assign them forward-looking ratings. For analyzing the lesser or niche products, Morningstar unleashed the algorithms.

But the humans still have an edge, academic researchers found, except in productivity.

"We find that the analyst report, which is usually 4 or 5 pages, provides very detailed information, and is better than a star rating, as it claims to be," said Si Cheng, an assistant finance professor at the Chinese University of Hong Kong, in an interview.

[II Deep Dive: AQR's Problem With Machine Learning: Cats Morph Into Dogs]

The most potent value in all of these Morningstar modes came from the tone of human-generated reports, assessed using machine-driven textual analysis, Cheng and her co-authors found in a just-published working paper.

Tone is likely to come from soft information, such as what the analyst picks up from speaking to fund management and investors. That deeply human sense of enthusiasm or pessimism matters when it comes through in conflict with the actual rating, which the analysts and algorithms base on quantitative factors.

Most of Morningstar's users are retail investors, but only professionals are tapping into this human-quant arbitrage, discovered Cheng and her Peking University co-authors Ruichang Lu and Xiajun Zhang.

"We do find that only institutional investors are taking advantage of analysts' reports," she told Institutional Investor Tuesday. "They do withdraw from a fund if the fund gets a gold rating but a pessimistic tone."
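A toy illustration of that rating-versus-tone signal: score a report's tone with a small word list and flag funds whose forward-looking rating conflicts with a pessimistic write-up. The lexicon and sample sentence below are invented, and the paper's actual textual analysis is far more sophisticated than this.

```python
# Toy rating-vs-tone check: lexicon-based tone score plus a conflict flag.
POSITIVE = {"strong", "disciplined", "impressive", "durable"}
NEGATIVE = {"concern", "turnover", "uncertain", "deteriorating"}

def tone_score(report: str) -> float:
    words = [w.strip(".,") for w in report.lower().split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return (pos - neg) / max(pos + neg, 1)   # +1 = optimistic, -1 = pessimistic

def conflicting_signal(rating: str, report: str) -> bool:
    """Flag e.g. a Gold-rated fund whose report reads pessimistically."""
    return rating in {"Gold", "Silver"} and tone_score(report) < 0

report = "Manager turnover is a concern and the outlook remains uncertain."
print(tone_score(report), conflicting_signal("Gold", report))  # negative, True
```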

Cheng, her coauthors, and other academic researchers working in the same vein highlight cost as one major advantage of algorithmic analysis over the old-fashioned kind. "After initial set up, they automatically generate all of the analysis at a frequency that a human cannot replicate," Cheng said.

As Anne Tucker, director of the legal analytics and innovation initiative at Georgia State University, cogently put it, machine learning is "leveraging components of human judgement at scale. It's not a replacement; it's a tool for increasing the scale and the speed. On the legal side, almost all of our data is locked in text: memos, regulatory filings, orders, court decisions, and the like."

Tucker has teamed up with GSU analytics professor Yusen Xia and associate law professor Susan Navarro Smelcer to gather the text of fund filings and turn machine-learning programs onto them, searching for patterns and indicators of future risk and performance. The project is underway, and detailed in a recent working paper.

"We have compiled all of the investment strategy and risk sections from 2010 onwards, and are using text mining, machine learning, and a suite of other computational tools to understand the content, study compliance, and then to aggregate texts in order to model emerging risks," Tucker told II. "If we listen to the most sophisticated investors collectively, what can we learn? If we would have had these tools before 2008, would we have been able to pick up tremors?"

Maybe, but they wouldn't have picked up the Covid-19 crisis, early findings suggest.

"There were essentially no pandemic-related risk disclosures before this happened," Tucker said.


Bespoken Spirits raises $2.6M in seed funding to combine machine learning and accelerated whiskey aging – TechCrunch

Bespoken Spirits, a Silicon Valley spirits company that has developed a new data-driven process to accelerate the aging of whiskey and create specific flavors, today announced that it has raised a $2.6 million seed funding round. Investors include Clos de la Tech owner T.J. Rodgers and baseball's Derek Jeter.

The company was co-founded by former Bloom Energy, BlueJeans and Mixpanel exec Stu Aaron and another Bloom Energy alum, Martin Janousek, whose name can be found on a fair number of Bloom Energy patents.

Bespoken isn't the first startup to venture into accelerated aging, a process that tries to minimize the time it takes to age these spirits, which is typically done in wooden barrels. The company argues that it's the first to combine that with a machine learning-based approach, though, through what it calls its ACTivation technology.

"Rather than putting the spirit in a barrel and passively waiting for nature to take its course, and just rolling the dice and seeing what happens, we instead use our proprietary ACTivation technology, with the A, C and T standing for aroma, color and taste, to instill the barrel into the spirit, and actively control the process and the chemical reactions in order to deliver premium quality tailored spirits, and to be able to do that in just days rather than decades," explained Aaron.

Image Credits: Bespoken Spirits

And while there is surely a lot of skepticism around this technology, especially in a business that typically prides itself on its artisanal approach, the company has won prizes at a number of competitions. The team argues that traditional barrel aging is a wasteful process, where you lose 20% of the product through evaporation, and one that is hard to replicate. And because of how long it takes, it also creates financial challenges for upstarts in this business and it makes it hard to innovate.

As the co-founders told me, there are three pillars to its business: selling its own brand of spirits, maturation-as-a-service for rectifiers and distillers and producing custom private label spirits for retailers, bars and restaurants. At first, the team mostly focused on the latter two and especially its maturation-as-a-service business. Right now, Aaron noted, a lot of craft distilleries are facing financial strains and need to unlock their inventory and get their product to market sooner and maybe at a better quality and hence higher price point than they previously could.

There's also the existing market of rectifiers, who, at least in the U.S., take existing products and blend them. These, too, are looking for ways to improve their processes and make it more replicable.

Interestingly, a lot of breweries, too, are now sitting on excess or expired beer because of the pandemic. "They're realizing that rather than paying somebody to dispose of that beer and taking it back, they can actually recycle, or upcycle maybe is a better word, the beer, by distilling it into whiskey," Aaron said. "But unfortunately, when a brewery distills beer into whiskey, it's typically not very good whiskey. And that's where we come in. We can take that beer bin, as a lot of people call initial distillation, and we can convert it into a premium-quality whiskey."

Image Credits: Bespoken Spirits

Bespoken is also working with a few grocery chains, for example, to create bespoke whiskeys for their house brands that match the look and flavor of existing brands or that offer completely new experiences.

The way the team does this is by collecting a lot of data throughout its process and then having a tasting panel describe the product for them. Using that data and feeding it into its systems, the company can then replicate the results or tweak them as necessary without having to wait for years for a barrel to mature.

"We're collecting all this data, and some of the data that we're collecting today, we don't even know yet what we're going to use it for," Janousek said. Using its proprietary techniques, Bespoken will often create dozens of samples for a new customer and then help them whittle those down.
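As a hedged sketch of the data loop described above, the snippet below maps process parameters to tasting-panel scores with a regression model and then searches candidate recipes for the one predicted to land closest to a target flavor profile. Every feature, score and number is invented; this is not Bespoken's ACTivation system.

```python
# Illustrative flavor-matching loop: regress panel scores on process parameters,
# then pick the candidate recipe predicted to match a target profile.
from sklearn.ensemble import RandomForestRegressor
import numpy as np

# Hypothetical process parameters: [oak dose (g/L), toast level, days, temp (°C)]
X = np.array([
    [2.0, 1, 3, 45],
    [4.0, 2, 5, 50],
    [6.0, 3, 4, 55],
    [3.0, 2, 6, 48],
])
# Hypothetical tasting-panel scores: [vanilla, smoke, sweetness], each 0-10
y = np.array([
    [3.0, 1.0, 4.0],
    [5.5, 2.5, 5.0],
    [6.0, 5.0, 4.5],
    [5.0, 2.0, 6.0],
])

model = RandomForestRegressor(random_state=0).fit(X, y)

target = np.array([5.5, 2.0, 5.5])             # desired flavor profile
candidates = np.array([[3.5, 2, 5, 49], [6.5, 3, 3, 56]])
predicted = model.predict(candidates)
best = candidates[np.argmin(np.linalg.norm(predicted - target, axis=1))]
print(best)  # the candidate recipe predicted to land closest to the target
```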

"I often like to describe our company as a cross between 23andMe, Nespresso and Impossible Foods," Aaron said. "We're like 23andMe, because again, we're trying to map the customer to preference to the recipe to results. There is this big data, genome mapping kind of a thing. And we're like Nespresso because our machine takes spirit and supply pods and produces results, although obviously we're industrial scale and they're not. And it's like Impossible Foods, because it's totally redefining an age-old antiquated model to be completely different."

The company plans to use the new funding to accelerate its market momentum and build out its technology. Its house brand is currently available for sale in California, Wisconsin and New York.

"The company's ability to deliver both quality and variety is what really caught my attention and made me want to invest," said T.J. Rodgers. "In a short period of time, they've already produced an incredible range of top-notch spirits, from whiskeys to rum, brandy and tequila, all independently validated time and again in blind tastings and prestigious competitions."

Full disclaimer: The company sent me a few samples. I'm not enough of a whiskey aficionado to review those, but I did enjoy them (responsibly).
