Warner Bros. signs AI startup that claims to predict film success – The Verge

Storied film company Warner Bros. has signed a deal with Cinelytic, an LA startup that uses machine learning to predict film success. A story from The Hollywood Reporter claims that Warner Bros. will use Cinelytic's algorithms to guide decision-making at the greenlight stage, but a source at the studio told The Verge that the software would only be used to help with marketing and distribution decisions made by Warner Bros. Pictures International.

In an interview with THR, Cinelytic's CEO Tobias Queisser stressed that AI was only an assistive tool. "Artificial intelligence sounds scary. But right now, an AI cannot make any creative decisions," Queisser told the publication. "What it is good at is crunching numbers and breaking down huge data sets and showing patterns that would not be visible to humans. But for creative decision-making, you still need experience and gut instinct."

Regardless of what Cinelytic's technology is being used for, the deal is a step forward in Hollywood's slow embrace of machine learning. As The Verge reported last year, Cinelytic is just one of a new crop of startups leveraging AI to forecast film performance, but the film world has historically been skeptical of their abilities.

Andrea Scarso, a film investor and Cinelytic customer, told The Verge that the startup's software hadn't ever changed his mind, but "opens up a conversation about different approaches." Said Scarso: "You can see how, sometimes, just one or two different elements around the same project could have a massive impact on the commercial performance."

Cinelytic's software lets customers play fantasy football with films. Users can model a pitch by inputting genre, budget, actors, and so on, and then see what happens when they tweak individual elements. Does replacing Tom Cruise with Keanu Reeves improve engagement with under-25s? Does it increase box office revenue in Europe? And so on.
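Cinelytic's actual model and data are proprietary, so nothing below comes from the company; purely as a hedged sketch of the kind of what-if comparison described above, here is what such a tool could look like in Python. The feature names, toy data, and choice of model are all assumptions.

```python
# Hypothetical sketch of the "fantasy football" analysis described above.
# Cinelytic's real model, features, and data are proprietary; everything
# here (feature names, toy numbers, model choice) is illustrative only.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

# Toy historical data: genre, budget ($M), lead actor, box office ($M).
history = pd.DataFrame({
    "genre":        ["action", "action", "drama", "comedy", "action"],
    "budget_m":     [150, 90, 30, 45, 200],
    "lead_actor":   ["Tom Cruise", "Keanu Reeves", "Tom Cruise",
                     "Keanu Reeves", "Tom Cruise"],
    "box_office_m": [600, 350, 120, 160, 790],
})
X = pd.get_dummies(history[["genre", "budget_m", "lead_actor"]])
model = GradientBoostingRegressor().fit(X, history["box_office_m"])

# The what-if step: the same pitch with two different leads.
pitch = pd.DataFrame({
    "genre":      ["action", "action"],
    "budget_m":   [120, 120],
    "lead_actor": ["Tom Cruise", "Keanu Reeves"],
})
pitch_X = pd.get_dummies(pitch).reindex(columns=X.columns, fill_value=0)
print(model.predict(pitch_X))  # predicted box office per casting choice
```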

Many AI experts are skeptical about the ability of algorithms to make predictions in a field as messy as filmmaking. Because machine learning applications are trained on historical data, they tend to be conservative, focusing on patterns that led to past successes rather than predicting what will excite future audiences. Scientific studies also suggest algorithms only produce limited predictive gains, often repeating obvious insights (like "Scarlett Johansson is a bankable film star") that can be discovered without AI.

But for those backing machine learning in filmmaking, the benefit is simply that such tools produce uncomplicated analysis faster than humans can. This can be especially useful at film festivals, notes THR, when studios can be forced into bidding wars for distribution rights, and have only a few hours to decide how much a film might be worth.

"We make tough decisions every day that affect what and how we produce and deliver films to theaters around the world, and the more precise our data is, the better we will be able to engage our audiences," Warner Bros. senior vice president of distribution Tonis Kiis told THR.

Update January 8, 11:00AM ET: Story has been updated with additional information from a source at Warner Bros.

SiFive and CEVA Partner to Bring Machine Learning Processors to Mainstream Markets – Design and Reuse

Joint silicon development through SiFive's DesignShare Program combines IP and design strengths of both companies to develop Edge AI SoCs for a range of high-volume end markets including smart home, automotive, robotics, security, augmented reality, industrial and IoT

SAN MATEO and MOUNTAIN VIEW, Calif., Jan. 7, 2020 -- SiFive, Inc., the leading provider of commercial RISC-V processor IP and silicon solutions, and CEVA, Inc. (NASDAQ: CEVA), the leading licensor of wireless connectivity and smart sensing technologies, today announced a new partnership to enable the design and creation of ultra-low-power domain-specific Edge AI processors for a range of high-volume end markets. The partnership, part of SiFive's DesignShare program, is centered around RISC-V CPUs and CEVA's DSP cores, AI processors and software, which will be designed into SoCs targeting an array of end markets where on-device neural network inferencing supporting imaging, computer vision, speech recognition and sensor fusion applications is required. Initial end markets include smart home, automotive, robotics, security and surveillance, augmented reality, industrial and IoT.

Machine Learning Processing at the Edge

Domain-specific SoCs that can handle machine learning processing on-device are set to become mainstream, as the processing workloads of devices increasingly include a mix of traditional software and efficient deep neural networks to maximize performance and battery life and to add new intelligent features. Cloud-based AI inference is not suitable for many of these devices due to security, privacy and latency concerns. SiFive and CEVA are directly addressing these challenges through the development of a range of domain-specific, scalable edge AI processor designs with the optimal balance of processing, power efficiency and cost.

The Edge AI SoCs are supported by CEVA's award-winning CDNN Deep Neural Network machine learning software compiler that creates fully-optimized runtime software for the CEVA-XM vision processors, CEVA-BX audio DSPs and NeuPro AI processors. Targeted for mass-market embedded devices, CDNN incorporates a broad range of network optimizations, advanced quantization algorithms, data flow management and fully-optimized compute CNN and RNN libraries into a holistic solution that enables cloud-trained AI models to be deployed on edge devices for inference processing. CEVA will also supply a full development platform for partners and developers based on the CEVA-XM and NeuPro architectures to enable the development of deep learning applications using the CDNN, targeting any advanced network, as well as DSP tools and libraries for audio and voice pre- and post-processing workloads.
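CDNN itself is a proprietary toolchain, so its APIs are not shown here. Purely as an illustration of the general cloud-to-edge flow the release describes (train a model in the cloud, quantize it, emit a compact artifact for on-device inference), here is a sketch using the open-source TensorFlow Lite converter; the model path is hypothetical and this is not CDNN.

```python
# Illustration only: this is TensorFlow Lite, NOT CEVA's CDNN compiler.
# It sketches the same cloud-to-edge flow the release describes: take a
# cloud-trained model, quantize it, and emit a compact artifact
# suitable for on-device inference.
import tensorflow as tf

# "cloud_trained_model" is a hypothetical SavedModel directory.
converter = tf.lite.TFLiteConverter.from_saved_model("cloud_trained_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # post-training quantization
edge_model = converter.convert()

with open("edge_model.tflite", "wb") as f:
    f.write(edge_model)  # deploy this artifact to the edge device
```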

SiFive DesignShare Program

The SiFive DesignShare IP program offers a streamlined process for companies seeking to partner with leading vendors to provide pre-integrated premium Silicon IP for bringing new SoCs to market. As part of SiFive's business model to license IP when ready for mass production, the flexibility and choice of the DesignShare IP program reduces the complexities of contract negotiation and licensing agreements to enable faster time to market through simpler prototyping, no legal red tape, and no upfront payment.

"CEVA's partnership with SiFive enables the creation of Edge AI SoCs that can be quickly and expertly tailored to the workloads, while also retaining the flexibility to support new innovations in machine learning," said Issachar Ohana, Executive Vice President, Worldwide Sales at CEVA. "Our market leading DSPs and AI processors, coupled with the CDNN machine learning software compiler, allow these AI SoCs to simplify the deployment of cloud-trained AI models in intelligent devices and provides a compelling offering for anyone looking to leverage the power of AI at the edge."

"Enabling future-proof, technology-leading processor designs is a key step in SiFive's mission to unlock technology roadmaps," said Dr. Naveed Sherwani, president and CEO, SiFive. "The rapid evolution of AI models combined with the requirements for low power, low latency, and high-performance demand a flexible and scalable approach to IP and SoC design that our joint CEVA / SiFive portfolio is superbly positioned to provide. The result is shorter time-to-market, while lowering the entry barriers for device manufacturers to create powerful, differentiated products."

Availability

SiFive's DesignShare program, including CEVA-BX Audio DSPs, CEVA-XM Vision DSPs and NeuPro AI processors, is available now. Visit http://www.sifive.com/designshare for more information.

About SiFive

SiFive is on a mission to free semiconductor roadmaps and declare silicon independence from the constraints of legacy ISAs and fragmented solutions. As the leading provider of market-ready processor core IP and silicon solutions based on the free and open RISC-V instruction set architecture, SiFive helps SoC designers reduce time-to-market and realize cost savings with customized, open-architecture processor cores, and democratizes access to optimized silicon by enabling system designers in all markets to build customized RISC-V based semiconductors. Founded by the inventors of RISC-V, SiFive has 16 design centers worldwide, and has backing from Sutter Hill Ventures, Qualcomm Ventures, Spark Capital, Osage University Partners, Chengwei, Huami, SK Hynix, Intel Capital, and Western Digital. For more information, please visit http://www.sifive.com.

About CEVA, Inc.

CEVA is the leading licensor of wireless connectivity and smart sensing technologies. We offer Digital Signal Processors, AI processors, wireless platforms and complementary software for sensor fusion, image enhancement, computer vision, voice input and artificial intelligence, all of which are key enabling technologies for a smarter, connected world. We partner with semiconductor companies and OEMs worldwide to create power-efficient, intelligent and connected devices for a range of end markets, including mobile, consumer, automotive, robotics, industrial and IoT. Our ultra-low-power IPs include comprehensive DSP-based platforms for 5G baseband processing in mobile and infrastructure, advanced imaging and computer vision for any camera-enabled device and audio/voice/speech and ultra-low power always-on/sensing applications for multiple IoT markets. For sensor fusion, our Hillcrest Labs sensor processing technologies provide a broad range of sensor fusion software and IMU solutions for AR/VR, robotics, remote controls, and IoT. For artificial intelligence, we offer a family of AI processors capable of handling the complete gamut of neural network workloads, on-device. For wireless IoT, we offer the industry's most widely adopted IPs for Bluetooth (low energy and dual mode), Wi-Fi 4/5/6 (802.11n/ac/ax) and NB-IoT. Visit us at http://www.ceva-dsp.com

Achieving Paperless Operations and Document Automation with AI and ML – ReadWrite

Paper is an essential commodity for office operations. Most conventional offices rely on paper for completing the simplest tasks. Even after digitization, the dream of a completely paperless office is far from reality. Humans are used to a standard form of note-taking and documentation. Here is how to achieve paperless operations and document automation with AI and ML.

Progressive technologies like artificial intelligence and machine learning help enterprises achieve their goal of paperless offices. Using these technologies, the issues associated with managing large volumes of data documented on paper can be efficiently solved.

Paperless enterprises run on digital devices with minimal paper consumption. In a digitally connected world, this gives businesses an unprecedented edge. All data is stored digitally, in the cloud or on-premises, and can be used in real time to derive valuable insights about operational efficiency, marketing campaigns, employee engagement and a lot more.

Machine learning (ML) is making it possible to achieve next-gen digital transformation by automating business operations that require filling out loads of paper documents. Already, businesses are making an effort to integrate machine learning and artificial intelligence to go digital and achieve higher efficiency.

(Image source: https://www.fingent.com/blog/machine-learning-to-accelerate-paperless-offices)

Automation can offer several benefits to modern enterprises. Not only can the tedious task of filing and storing large numbers of documents be minimized, but organizations can also improve their data discovery and utilization capabilities. Here are some of the benefits of adopting paperless processes:

Digitization through artificial intelligence and machine learning allows companies to organize all information in easily accessible formats. This saves time, as employees don't have to waste hours searching for a document. It also promotes a remote working culture and brings next-level authentication, as the origin of digital information can be identified.

One of the biggest drawbacks of paper-based data storage is the security and safety of the data. Conventionally, offices were not serious about data protection and stored critical information in filing cabinets or by similar methods.

All these methods are prone to data theft or damage under unavoidable circumstances. A paperless office enhances security, as companies can back up data, protect it with passwords and take steps to enforce further security measures.

Storing data using paper-based techniques is a cumbersome and costly affair. Companies can save millions of dollars annually by eliminating the need for paper, copier equipment, and maintenance. Also, companies don't have to waste valuable real estate on storing files and other documents.

Paperless digitization promotes easy accessibility from anywhere, which means less money is spent on the physical transmission of data using conventional methods.

Digitally stored data serves as a massive data pool to derive real-time insights from available data. This means that the information available to an enterprise can be put to better use for boosting efficiency. Marketing managers can utilize real-time data gathered from various campaigns; production teams can understand customer preferences.

Machine learning and artificial intelligence can enhance data analysis capabilities and bring organizational processes closer to customers' needs and preferences.

1. Legal industry

AI/ML-based paperless workflows will significantly improve the productivity of law firms. Traditionally, the legal profession is seen as labor-intensive: browsing through thousands of legal case files, reviewing past case studies, examining legal contracts and more.

AI can reduce the manual effort of data analysis and processing, leaving advocates, lawyers and legal firms more time to advise their clients and appeal in courts. Artificial intelligence (AI) can be leveraged to keep records of legal contracts, provide real-time alerts on renewals, proofread legal documents and locate valuable information in seconds. For the legal system, artificial intelligence is the key to paper-free litigation and trials in the future.

2. Automobile industry

The automobile industry is one of the biggest beneficiaries of AI/ML innovation. Machine learning has allowed automobile factories to create autonomous systems for managing the large volumes of data generated during the manufacturing process.

Moreover, AI is reducing the effort required for filing claims in case of shop-floor accidents as data is digitized and form filing can be automated. Also, ML algorithms allow customers to get real-time diagnostic support without needing to file paper-based forms as a vehicle can be directly connected to the manufacturer via cloud infrastructure. This means that repairs, service and general performance issues can be reported in real-time without the need for paper.

3. Insurance sector

The insurance sector can use machine learning to automate claims processing, which will improve customer service. Machine learning and artificial intelligence can be leveraged to create sophisticated rating systems that evaluate risks and predict an efficient pricing structure for each policy. All of this can be automated, reducing the need for manual intervention by human agents when classifying risks.

Also, artificial intelligence can streamline workflows by digitally managing large volumes of claims data, policy benefits and medical/personal records. The data stored in the cloud can be used by an AI algorithm to derive real-time insights about policyholders and bring efficiency to fraud detection.
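As a loose illustration of the rating systems described above (an assumed sketch: the features, labels, and model are invented, not drawn from any real insurer):

```python
# Toy sketch of an automated risk-rating model; all data is hypothetical.
from sklearn.ensemble import RandomForestClassifier

# Each row: [driver_age, prior_claims, annual_mileage_thousands].
X = [[25, 2, 20], [45, 0, 8], [60, 1, 5], [19, 3, 25], [38, 0, 12]]
y = ["high", "low", "low", "high", "low"]   # assigned risk tiers

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([[30, 1, 15]]))  # predicted tier for a new applicant
```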

Wrapping Up

Artificial intelligence has the potential to revolutionize workspaces like never before. With the help of an AI development company, small, medium and large-scale enterprises can make a substantial move towards a paperless future. Not only will it reduce the cost of operations, but it will also boost the overall efficiency of existing business processes. The industry use cases suggested above are just the tip of a massive iceberg.

The possibilities are limitless. An AI-driven product development company can understand your existing business processes and suggest custom solutions that are a suitable fit for your business operations.

I'm Namee, a digital marketer working for Azilen Technologies. I'm also passionate about exploring and writing about innovation and technology, including AI, IoT, Big Data and HR Tech.

AI and machine learning trends to look toward in 2020 – Healthcare IT News

Artificial intelligence and machine learning will play an even bigger role in healthcare in 2020 than they did in 2019, helping medical professionals with everything from oncology screenings to note-taking.

On top of actual deployments, increased investment activity is also expected this year, and with deeper deployments of AI and ML technology, a broader base of test cases will be available to collect valuable best practices information.

"As AI is implemented more widely in real-world clinical practice, there will be more academic reports on the clinical benefits that have arisen from the real-world use," said Pete Durlach, senior vice president for healthcare strategy and new business development at Nuance.

"With healthy clinical evidence, we'll see AI become more mainstream in various clinical settings, creating a positive feedback loop of more evidence-based research and use in the field," he explained. "Soon, it will be hard to imagine a doctor's visit, or a hospital stay that doesn't incorporate AI in numerous ways."

In addition, AI and ambient sensing technology will help re-humanize medicine by allowing doctors to focus less on paperwork and administrative functions, and more on patient care.

"As AI becomes more commonplace in the exam room, everything will be voice enabled, people will get used to talking to everything, and doctors will be able to spend 100% of their time focused on the patient, rather than entering data into machines," Durlach predicted. "We will see the exam room of the future where clinical documentation writes itself."

The adoption of AI for robotic process automation (RPA) of common, high-value administrative functions such as the revenue cycle, supply chain, and patient scheduling also has the potential to increase rapidly, as AI helps automate or partially automate components of these functions, driving significantly better financial outcomes for provider organizations.

Durlach also noted that the fear of AI replacing doctors and clinicians has dissipated; the goal now is to figure out how to incorporate AI as another tool to help physicians make the best care decisions possible, effectively augmenting the intelligence of the clinician.

"However, we will still need to protect against phenomenon like alert fatigue, which occurs when users who are faced with many low-level alerts, ignore alerts of all levels, thereby missing crucial ones that can affect the health and safety of patients," he cautioned.

In the next few years, he predicts, the market will see technology that strikes a balance between being unobtrusive and supporting doctors in making the best decisions for their patients as they learn to trust AI-powered suggestions and recommendations.

"So many technologies claim they have an AI component, but often there's a blurred line in which the term AI is used in a broad sense, when the technology that's being described is actually basic analytics or machine learning," Kuldeep Singh Rajput, CEO and founder of Boston-based Biofourmis, told Healthcare IT News. "Health system leaders looking to make investments in AI should ask for real-world examples of how the technology is creating ROI for other organizations."

For example, he pointed to a study of Brigham & Women's Home Hospital program, recently published in Annals of Internal Medicine, which employed AI-driven continuous monitoring combined with advanced physiology analytics and related clinical care as a substitute for usual hospital care.

The study found that the program, which included an investment in AI-driven predictive analytics as a key component, reduced costs, decreased healthcare use, and lowered readmissions while increasing physical activity, compared with usual hospital care.

"Those types of outcomes could be replicated by other healthcare organizations, which makes a strong clinical and financial case to invest in that type of AI," Rajput said.

Nathan Eddy is a healthcare and technology freelancer based in Berlin. Email the writer: nathaneddy@gmail.com. Twitter: @dropdeaded209

What Is Machine Learning? | How It Works, Techniques …

Supervised Learning

Supervised machine learning builds a model that makes predictions based on evidence in the presence of uncertainty. A supervised learning algorithm takes a known set of input data and known responses to the data (output) and trains a model to generate reasonable predictions for the response to new data. Use supervised learning if you have known data for the output you are trying to predict.

Supervised learning uses classification and regression techniques to develop predictive models.

Classification techniques predict discrete responses: for example, whether an email is genuine or spam, or whether a tumor is cancerous or benign. Classification models classify input data into categories. Typical applications include medical imaging, speech recognition, and credit scoring.

Use classification if your data can be tagged, categorized, or separated into specific groups or classes. For example, applications for handwriting recognition use classification to recognize letters and numbers. In image processing and computer vision, supervised pattern recognition techniques are used for object detection and image segmentation.

Common algorithms for performing classification include support vector machine (SVM), boosted and bagged decision trees, k-nearest neighbor, naïve Bayes, discriminant analysis, logistic regression, and neural networks.
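As a minimal, runnable illustration (an assumed example, not code from this article), here is a classification workflow in Python with scikit-learn: an SVM trained on a built-in tumor dataset.

```python
# Minimal classification sketch: label tumors as malignant or benign.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)   # discrete responses: 0 or 1
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = SVC(kernel="rbf", gamma="scale").fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")
```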

Regression techniques predict continuous responses: for example, changes in temperature or fluctuations in power demand. Typical applications include electricity load forecasting and algorithmic trading.

Use regression techniques if you are working with a data range or if the nature of your response is a real number, such as temperature or the time until failure for a piece of equipment.

Common regression algorithms include linear models, nonlinear models, regularization, stepwise regression, boosted and bagged decision trees, neural networks, and adaptive neuro-fuzzy learning.
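And a matching regression sketch (again an assumed example): fitting a linear model to noisy synthetic temperature readings to predict a continuous response.

```python
# Minimal regression sketch: predict a continuous response (temperature).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
hours = np.arange(24).reshape(-1, 1)                      # time of day
temps = 15 + 0.5 * hours.ravel() + rng.normal(0, 1, 24)   # noisy readings

model = LinearRegression().fit(hours, temps)
print(model.predict([[25]]))   # extrapolated temperature at hour 25
```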

Chemists are training machine learning algorithms used by Facebook and Google to find new molecules – News@Northeastern

For more than a decade, Facebook and Google algorithms have been learning as much as they can about you. It's how they refine their systems to deliver the news you read, those puppy videos you love, and the political ads you engage with.

These same kinds of algorithms can be used to find billions of molecules and catalyze important chemical reactions that are currently induced with expensive and toxic metals, says Steven A. Lopez, an assistant professor of chemistry and chemical biology at Northeastern.

Lopez is working with a team of researchers to train machine learning algorithms to spot the molecular patterns that could help find new molecules in bulk, and fast. It's a much smarter approach than scanning through billions and billions of molecules without a streamlined process.

"We're teaching the machines to learn the chemistry knowledge that we have," Lopez says. "Why should I just have the chemical intuition for myself?"

The alternative to using expensive metals is organic molecules, and particularly plastics, which are everywhere, Lopez says. Depending on their molecular structure and ability to absorb light, these plastics can be converted with chemistry to produce better materials for today's most important problems.

Lopez says the goal is to find molecules with the right properties and similar structures as metal catalysts. But to attain that goal, Lopez will need to explore an enormous number of molecules.

Thus far, scientists have been able to synthesize only about a million molecules. But conservative estimates of the number of possible molecules that could be analyzed is a quintillion, which is 10 raised to the power of 18, or the number one followed by 18 zeros.

Lopez thinks of this enormous number of possibilities as a vast ocean made up of billions of unexplored molecules. Such an immense molecular space is practically impossible to navigate, even if scientists were to combine experiments with supercomputer analysis.

Lopez says all of the calculations that have ever been done by computers add up to about a billion, or 10 to the ninth power. That's about a billion times less than the number of possible molecules.

"Forget it, there's no chance," he says. "We just have to use a smarter search technique."

That's why Lopez is leading a team, supported by a grant from the National Science Foundation, that includes researchers from Tufts University, Washington University in St. Louis, Drexel University, and the Colorado School of Mines. The team is using an open-access database of organic molecules called VERDE materials DB, which Lopez and colleagues recently published, to improve their algorithms and find more useful molecules.

The database will also register newly found molecules, and can serve as a data hub of information for researchers across several different domains, Lopez says. That's because it can launch researchers toward finding different molecules with many new properties and applications.

In tandem with the database, the algorithms will allow scientists to use computational resources more efficiently. After molecules of interest are found, researchers will recalibrate the algorithm to find more similar groups of molecules.

The active-search algorithm, developed by Roman Garnett at Washington University in St. Louis, uses a process similar to the classic board game Battleship, in which two players guess hidden locations off a grid to target and destroy vessels within a naval fleet.

In that grid, players place vessels as far apart as possible to make opponents miss targets. Once a ship is hit, players can readjust their strategy and redirect their attacks to the coordinates surrounding that hit.

That's exactly how Lopez thinks of the concept of exploring a vast ocean of molecules.

"We are looking for regions within this ocean," he says. "We are starting to set up the coordinates of all the possible molecules."

Hitting the right candidate molecules might also expand the understanding that chemists have of this unexplored chemical space.

"Maybe we'll find out through this analysis that we have something really at the edge of what we call the ocean, and that we can expand this ocean out a bit more in that region," Lopez says. "Those are things that we wouldn't [be able to find by searching] with a brute-force, trial-and-error kind of approach."
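Garnett's actual active-search algorithm is a principled Bayesian method; the toy sketch below, with an invented grid, budget, and greedy rule, only illustrates the Battleship intuition from the passage above: explore broadly until a hit, then concentrate queries around it.

```python
# Toy illustration of the Battleship-style search intuition; this is
# NOT Garnett's active-search algorithm, just a greedy caricature.
import numpy as np

rng = np.random.default_rng(1)
grid = rng.random((20, 20)) > 0.97     # hidden "promising molecule" cells
candidates = [(i, j) for i in range(20) for j in range(20)]
hits, budget = [], 60

for _ in range(budget):
    if hits:   # exploit: query the unexplored cell nearest the last hit
        hi, hj = hits[-1]
        query = min(candidates, key=lambda c: abs(c[0] - hi) + abs(c[1] - hj))
    else:      # explore: sample the space broadly
        query = candidates[rng.integers(len(candidates))]
    candidates.remove(query)
    if grid[query]:
        hits.append(query)

print(f"found {len(hits)} promising cells in {budget} queries")
```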

For media inquiries, please contact Jessica Hair at j.hair@northeastern.edu or 617-373-5718.

Tiny Machine Learning On The ATtiny85 – Hackaday

We tend to think that the lowest point of entry for machine learning (ML) is on a Raspberry Pi, which it definitely is not. [EloquentArduino] has been pushing the limits to the low end of the scale, and managed to get a basic classification model running on the ATtiny85.

Using his experience of running ML models on an old Arduino Nano, he had created a generator that can export C code from a scikit-learn model. He tried using this generator to compile a support-vector colour classifier for the ATtiny85, but ran into a problem with the Arduino ATtiny85 compiler not supporting a variadic function used by the generator. Fortunately, he had already experimented with an alternative approach that uses a non-variadic function, so he was able to dust that off and get it working. The classifier accepts inputs from an RGB sensor to identify a set of objects by colour. The model ended up easily fitting into the capabilities of the diminutive ATtiny85, using only 41% of the available flash and 4% of the available RAM.
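As a sketch of the training side (assumed, not [EloquentArduino]'s exact script), fitting a small scikit-learn SVC on RGB readings might look like the following; the C export step is handled by his generator and isn't reproduced here.

```python
# Assumed sketch of training a colour classifier like the one described.
from sklearn.svm import SVC

# Toy RGB readings (0-255) for three objects to identify by colour.
X = [[200, 30, 30], [190, 45, 40],    # red object
     [30, 180, 40], [40, 200, 55],    # green object
     [35, 40, 210], [25, 55, 190]]    # blue object
y = [0, 0, 1, 1, 2, 2]

clf = SVC(kernel="linear").fit(X, y)   # a small model suits a tiny MCU
print(clf.predict([[210, 50, 35]]))    # -> [0], i.e. the red object
# A generator can then export clf's support vectors as plain C source
# for compilation onto the ATtiny85.
```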

It's important to note what [EloquentArduino] isn't doing here: running an artificial neural network. They're just too inefficient in terms of memory and computation time to fit on an ATtiny. But neural nets aren't the only game in town, and if your task is classifying something based on a few inputs, like reading a gesture from accelerometer data or naming a color from a color sensor, the approach here will serve you well. We wonder if this wouldn't be a good solution to the pesky problem of identifying bats by their calls.

We really like how approachable machine learning has become, and if you're keen to give ML a go, have a look at the rest of the EloquentArduino blog; it's a small goldmine.

We're getting more and more machine learning related hacks, like basic ML on an Arduino Uno, and Lego sorting using ML on a Raspberry Pi.

Finally, a good use for AI: Machine-learning tool guesstimates how well your code will run on a CPU core – The Register

MIT boffins have devised a software-based tool for predicting how processors will perform when executing code for specific applications.

In three papers released over the past seven months, ten computer scientists describe Ithemal (Instruction THroughput Estimator using MAchine Learning), a tool for predicting the number of processor clock cycles necessary to execute an instruction sequence when looped in steady state, along with a supporting benchmark and algorithm.

Throughput stats matter to compiler designers and performance engineers, but it isn't practical to make such measurements on-demand, according to MIT computer scientists Saman Amarasinghe, Eric Atkinson, Ajay Brahmakshatriya, Michael Carbin, Yishen Chen, Charith Mendis, Yewen Pu, Alex Renda, Ondrej Sykora, and Cambridge Yang.

So most systems rely on analytical models for their predictions. LLVM offers a command-line tool called llvm-mca that presents a model for throughput estimation, and Intel offers a closed-source machine code analyzer called IACA (Intel Architecture Code Analyzer), which takes advantage of the company's internal knowledge about its processors.

Michael Carbin, a co-author of the research and an assistant professor and AI researcher at MIT, told the MIT News Service on Monday that performance model design is something of a black art, made more difficult by Intel's omission of certain proprietary details from its processor documentation.

The Ithemal paper [PDF], presented in June at the International Conference on Machine Learning, explains that these hand-crafted models tend to be an order of magnitude faster than measuring the throughput of basic blocks (sequences of instructions without branches or jumps). But building these models is a tedious, manual process that's prone to errors, particularly when processor details aren't entirely disclosed.

Using a neural network, Ithemal can learn to predict throughput using a set of labelled data. It relies on what the researchers describe as "a hierarchical multiscale recurrent neural network" to create its prediction model.

"We show that Ithemals learned model is significantly more accurate than the analytical models, dropping the mean absolute percent error by more than 50 per cent across all benchmarks, while still delivering fast estimation speeds," the paper explains.

A second paper, presented in November at the IEEE International Symposium on Workload Characterization, "BHive: A Benchmark Suite and Measurement Framework for Validating x86-64 Basic Block Performance Models," describes the BHive benchmark for evaluating Ithemal and competing models: IACA, llvm-mca, and OSACA (Open Source Architecture Code Analyzer). It found Ithemal outperformed the other models except on vectorized basic blocks.

And in December at the NeurIPS conference, the boffins presented a third paper, titled "Compiler Auto-Vectorization with Imitation Learning," that describes a way to automatically generate compiler optimizations that outperform LLVM's SLP vectorizer.

The academics argue that their work shows the value of machine learning in the context of performance analysis.

"Ithemal demonstrates that future compilation and performance engineering tools can be augmented with datadriven approaches to improve their performance and portability, while minimizing developer effort," the paper concludes.

AI, machine learning, and other frothy tech subjects remained overhyped in 2019 – Boing Boing

Rodney Brooks (previously) is a distinguished computer scientist and roboticist (he's served as head of MIT's Computer Science and Artificial Intelligence Laboratory and CTO of iRobot); two years ago, he published a list of "dated predictions" intended to cool down some of the hype about self-driving cars, machine learning, and robotics, hype that he viewed as dangerously gaseous.

Every year, Brooks revisits those predictions to see how he's doing (to "self certify the seriousness of my predictions"). This year's scorecard is characteristically curmudgeonly, and shows that Brooks's skepticism was well-warranted, revealing much of the enthusiasm about AI to have been mere froth: "I had not predicted any big milestones for AI and machine learning for the current period, and indeed there were none achieved... [W]e have seen warnings that all the over-hype of machine and deep learning may lead to a new AI winter when those tens of thousands of jolly conference attendees will no longer have grants and contracts to pay for travel to and attendance at their fiestas"

Some of the predictions are awfully fun, too, like "The press, and researchers, generally mature beyond the so-called 'Turing Test' and Asimov's three laws as valid measures of progress in AI and ML" (predicted for 2022; last year's update was, "I wish, I really wish.").

Brooks is pretty bullish on the web for piercing hype-bubbles, noting that it provides "outlets... for non-journalists, perhaps practitioners in a scientific field, to write position papers that get widely referenced in social media... During 2019 we saw many, many well informed such position papers/blogposts. We have seen explanations on how machine learning has limitations on when it makes sense to be used and that it may not be a universal silver bullet."

Bruce Sterling's actually pretty comfortable with tech hype: "I've come to see tech-hype as a sign of social health. It's kinda like being young and smitten by a lot of random pretty people, only, you're not gonna really have relationships with most of them, and also, the one you oughta marry and have children with, that is probably not the one who seems most fantastically hot and sexy. Also, if nothing at all seems fantastically hot and sexy, then you probably have a vitamin deficiency. It's all part of the marvelous pageant of life, ladies and gentlemen."

I made my predictions because at the time I saw an immense amount of hype about these three topics, and the general press and public drawing conclusions about all sorts of things they feared (e.g., truck driving jobs about to disappear, all manual labor of humans about to disappear) or desired (e.g., safe roads about to come into existence, a safe haven for humans on Mars about to start developing) being imminent. My predictions, with dates attached to them, were meant to slow down those expectations, and inject some reality into what I saw as irrational exuberance.

Predictions Scorecard, 2020 January 01 [Rodney Brooks]

(via Beyond the Beyond)

(Image: Gartner; Cryteria, CC-BY, modified)

FLIR Systems and ANSYS to Speed Thermal Camera Machine Learning for Safer Cars – Business Wire

ARLINGTON, Va.--(BUSINESS WIRE)--FLIR Systems, Inc. (NASDAQ: FLIR) and ANSYS (NASDAQ: ANSS) are collaborating to deliver superior hazard detection capabilities for assisted driving and autonomous vehicles (AVs), empowering automakers to deliver unprecedented vehicle safety. Through this collaboration, FLIR will integrate a fully physics-based thermal sensor into ANSYS' leading-edge driving simulator to model, test, and validate thermal camera designs within an ultra-realistic virtual world. The new solution will reduce original equipment manufacturers' (OEMs') development time by optimizing thermal camera placement for use with tools such as automatic emergency braking (AEB), pedestrian detection, and within future AVs.

Having the ability to test in virtual environments complements the existing systems available to FLIR customers and partners, including the FLIR automotive development kit (ADK) featuring a FLIR Boson thermal camera, the FLIR starter thermal dataset, and the regional, city-specific thermal datasets. The FLIR thermal dataset programs were created for machine learning in advanced driver assistance system (ADAS), AEB, and AV development.

Current AV and ADAS sensors face challenges in darkness or shadows, sun glare, and inclement weather such as most fog. Thermal cameras, however, can effectively detect and classify objects in these conditions. Integrating FLIR Systems' thermal sensor into ANSYS VRXPERIENCE enables simulation of thousands of driving scenarios across millions of miles in mere days. Furthermore, engineers can simulate difficult-to-produce scenarios where thermal provides critical data, including detecting pedestrians in crowded, low-contrast environments.

"By adding ANSYS' industry-leading simulation solutions to the existing suite of tools for physical testing, engineers, automakers, and automotive suppliers can improve the safety of vehicles in all types of driving conditions," said Frank Pennisi, President of the Industrial Business Unit at FLIR Systems. "The industry can also recreate corner cases that drivers see every day but are difficult to replicate in physical environments, paving the way for improved neural networks and the performance of safety features such as AEB."

"FLIR Systems recognizes the limitations of relying solely on gathering machine learning datasets in the physical world to make automotive thermal cameras as safe and reliable as possible for automotive uses," said Eric Bantegnie, Vice President and General Manager at ANSYS. "Now with ANSYS solutions, FLIR can further empower automakers to speed the creation and certification of assisted-driving systems with thermal cameras."

In addition to the city-specific data sets, FLIR has more than a decade of experience in the automotive industry. FLIR has provided more than 700,000 thermal sensors as part of its night vision warning systems for a variety of carmakers, including GM, Audi and Mercedes-Benz. Also, FLIR recently announced that its thermal sensor has been selected by Veoneer, a tier-one automotive supplier, for its level-four AV production contract with a top global automaker, planned for 2021.

FLIR Systems' thermal-enhanced demonstration car, along with other innovative FLIR products, will be on display at FLIR booth #8528 during the 2020 Consumer Electronics Show in Las Vegas, Nevada, from January 6 to 10.

For more information on FLIR Systems' automotive solutions, please visit https://www.flir.com/safercars.

About FLIR Systems, Inc.

Founded in 1978, FLIR Systems is a world-leading industrial technology company focused on intelligent sensing solutions for defense, industrial, and commercial applications. FLIR Systems' vision is to be "The World's Sixth Sense," creating technologies to help professionals make more informed decisions that save lives and livelihoods. For more information, please visit http://www.flir.com and follow @flir.
