Chemists are training machine learning algorithms used by Facebook and Google to find new molecules – News@Northeastern

For more than a decade, Facebook and Google algorithms have been learning as much as they can about you. It's how they refine their systems to deliver the news you read, those puppy videos you love, and the political ads you engage with.

These same kinds of algorithms can be used to find billions of molecules and catalyze important chemical reactions that are currently induced with expensive and toxic metals, says Steven A. Lopez, an assistant professor of chemistry and chemical biology at Northeastern.

Lopez is working with a team of researchers to train machine learning algorithms to spot the molecular patterns that could help find new molecules in bulk, and fast. It's a much smarter approach than scanning through billions and billions of molecules without a streamlined process.

"We're teaching the machines to learn the chemistry knowledge that we have," Lopez says. "Why should I just have the chemical intuition for myself?"

The alternative to using expensive metals is organic molecules, and particularly plastics, which are everywhere, Lopez says. Depending on their molecular structure and ability to absorb light, these plastics can be converted with chemistry to produce better materials for today's most important problems.

Lopez says the goal is to find molecules with the right properties and structures similar to those of metal catalysts. But to attain that goal, Lopez will need to explore an enormous number of molecules.

Thus far, scientists have been able to synthesize only about a million molecules. But a conservative estimate of the number of possible molecules that could be analyzed is a quintillion: 10 raised to the power of 18, or the number one followed by 18 zeros.

Lopez thinks of this enormous number of possibilities as a vast ocean made up of billions of unexplored molecules. Such an immense molecular space is practically impossible to navigate, even if scientists were to combine experiments with supercomputer analysis.

Lopez says all of the calculations that have ever been done by computers add up to about a billion, or 10 to the ninth power. That's about a billion times fewer than the number of possible molecules.

"Forget it, there's no chance," he says. "We just have to use a smarter search technique."

That's why Lopez is leading a team, supported by a grant from the National Science Foundation, that includes researchers from Tufts University, Washington University in St. Louis, Drexel University, and Colorado School of Mines. The team is using an open-access database of organic molecules called VERDE materials DB, which Lopez and colleagues recently published, to improve their algorithms and find more useful molecules.

The database will also register newly found molecules, and can serve as a data hub of information for researchers across several different domains, Lopez says. Thats because it can launch researchers toward finding different molecules with many new properties and applications.

In tandem with the database, the algorithms will allow scientists to use computational resources more efficiently. After molecules of interest are found, researchers will recalibrate the algorithm to find more similar groups of molecules.

The active-search algorithm, developed by Roman Garnett at Washington University in St. Louis, uses a process similar to the classic board game Battleship, in which two players guess hidden locations on a grid to target and destroy vessels within a naval fleet.

In that grid, players place vessels as far apart as possible to make opponents miss targets. Once a ship is hit, players can readjust their strategy and redirect their attacks to the coordinates surrounding that hit.

That's exactly how Lopez thinks of the concept of exploring a vast ocean of molecules.

"We are looking for regions within this ocean," he says. "We are starting to set up the coordinates of all the possible molecules."
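That Battleship strategy corresponds to what researchers call active search: a cheap surrogate model scores the untested candidates, the most promising one is evaluated with the expensive calculation, and the result is fed back so the next round concentrates around the "hits." The Python sketch below is only a hedged illustration of that loop on invented data; the descriptors, the evaluate() stand-in, and the surrogate model are assumptions, not the team's actual code or the VERDE database.

```python
# Hedged sketch of an active-search loop over a pool of candidate "molecules",
# each described by a feature vector. evaluate() stands in for an expensive
# property calculation (a "shot" in Battleship); all names and data are invented.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
pool = rng.random((5000, 8))                    # 5,000 candidates, 8 descriptors each

def evaluate(x):
    # Expensive ground-truth measurement; good candidates cluster near 0.7 here.
    return -float(np.sum((x - 0.7) ** 2))

# Seed with a few random "shots", then let a cheap surrogate steer the search.
seed = rng.choice(len(pool), size=5, replace=False)
results = {int(i): evaluate(pool[i]) for i in seed}

for _ in range(20):
    surrogate = KNeighborsRegressor(n_neighbors=3).fit(pool[list(results)],
                                                       list(results.values()))
    scores = surrogate.predict(pool)            # predicted promise of every candidate
    scores[list(results)] = -np.inf             # don't re-fire at evaluated candidates
    best = int(np.argmax(scores))               # aim near previous "hits"
    results[best] = evaluate(pool[best])

print(f"best property found after {len(results)} evaluations: {max(results.values()):.3f}")
```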

Hitting the right candidate molecules might also expand the understanding that chemists have of this unexplored chemical space.

"Maybe we'll find out through this analysis that we have something really at the edge of what we call the ocean, and that we can expand this ocean out a bit more in that region," Lopez says. "Those are things that we wouldn't [be able to find by searching] with a brute-force, trial-and-error kind of approach."

For media inquiries, please contact Jessica Hair at j.hair@northeastern.edu or 617-373-5718.

Originally posted here:
Chemists are training machine learning algorithms used by Facebook and Google to find new molecules - News@Northeastern

AI, machine learning, and other frothy tech subjects remained overhyped in 2019 – Boing Boing

Rodney Brooks (previously) is a distinguished computer scientist and roboticist (he's served as head of MIT's Computer Science and Artificial Intelligence Laboratory and as CTO of iRobot); two years ago, he published a list of "dated predictions" intended to cool down some of the hype about self-driving cars, machine learning, and robotics, hype that he viewed as dangerously gaseous.

Every year, Brooks revisits those predictions to see how he's doing (to "self certify the seriousness of my predictions"). This year's scorecard is characteristically curmudgeonly, and shows that Brooks's skepticism was well-warranted, revealing much of the enthusiasm for AI to have been mere froth: "I had not predicted any big milestones for AI and machine learning for the current period, and indeed there were none achieved... [W]e have seen warnings that all the over-hype of machine and deep learning may lead to a new AI winter when those tens of thousands of jolly conference attendees will no longer have grants and contracts to pay for travel to and attendance at their fiestas"

Some of the predictions are awfully fun, too, like "The press, and researchers, generally mature beyond the so-called 'Turing Test' and Asimov's three laws as valid measures of progress in AI and ML" (predicted for 2022; last year's update was, "I wish, I really wish.").

Brooks is pretty bullish on the web for piercing hype-bubbles, noting that it provides "outlets... for non-journalists, perhaps practitioners in a scientific field, to write position papers that get widely referenced in social media... During 2019 we saw many, many well informed such position papers/blogposts. We have seen explanations on how machine learning has limitations on when it makes sense to be used and that it may not be a universal silver bullet."

Bruce Sterling's actually pretty comfortable with tech hype: "I've come to see tech-hype as a sign of social health. It's kinda like being young and smitten by a lot of random pretty people, only, you're not gonna really have relationships with most of them, and also, the one you oughta marry and have children with, that is probably not the one who seems most fantastically hot and sexy. Also, if nothing at all seems fantastically hot and sexy, then you probably have a vitamin deficiency. It's all part of the marvelous pageant of life, ladies and gentlemen."

I made my predictions because at the time I saw an immense amount of hype about these three topics, and the general press and public drawing conclusions about all sorts of things they feared (e.g., truck driving jobs about to disappear, all manual labor of humans about to disappear) or desired (e.g., safe roads about to come into existence, a safe haven for humans on Mars about to start developing) being imminent. My predictions, with dates attached to them, were meant to slow down those expectations, and inject some reality into what I saw as irrational exuberance.

Predictions Scorecard, 2020 January 01 [Rodney Brooks]

(via Beyond the Beyond)

(Image: Gartner; Cryteria, CC-BY, modified)


View post:
AI, machine learning, and other frothy tech subjects remained overhyped in 2019 - Boing Boing

Finally, a good use for AI: Machine-learning tool guesstimates how well your code will run on a CPU core – The Register

MIT boffins have devised a software-based tool for predicting how processors will perform when executing code for specific applications.

In three papers released over the past seven months, ten computer scientists describe Ithemal (Instruction THroughput Estimator using MAchine Learning), a tool for predicting the number of processor clock cycles necessary to execute an instruction sequence when looped in steady state, and include a supporting benchmark and algorithm.

Throughput stats matter to compiler designers and performance engineers, but it isn't practical to make such measurements on-demand, according to MIT computer scientists Saman Amarasinghe, Eric Atkinson, Ajay Brahmakshatriya, Michael Carbin, Yishen Chen, Charith Mendis, Yewen Pu, Alex Renda, Ondrej Sykora, and Cambridge Yang.

So most systems rely on analytical models for their predictions. LLVM offers a command-line tool called llvm-mca that presents a model for throughput estimation, and Intel offers a closed-source machine code analyzer called IACA (Intel Architecture Code Analyzer), which takes advantage of the company's internal knowledge about its processors.

Michael Carbin, a co-author of the research and an assistant professor and AI researcher at MIT, told the MIT News Service on Monday that performance model design is something of a black art, made more difficult by Intel's omission of certain proprietary details from its processor documentation.

The Ithemal paper [PDF], presented in June at the International Conference on Machine Learning, explains that these hand-crafted models tend to be an order of magnitude faster than measuring the throughput of basic blocks (sequences of instructions without branches or jumps). But building these models is a tedious, manual process that's prone to errors, particularly when processor details aren't entirely disclosed.

Using a neural network, Ithemal can learn to predict throughput using a set of labelled data. It relies on what the researchers describe as "a hierarchical multiscale recurrent neural network" to create its prediction model.
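As a rough picture of what a hierarchical recurrent model for this task can look like, the sketch below embeds the tokens of each instruction, runs one LSTM over the tokens within an instruction and a second over the sequence of instructions, then regresses a cycle count. It is a minimal PyTorch illustration under assumed layer sizes and tokenisation, not the published Ithemal architecture.

```python
# Minimal sketch of a hierarchical recurrent throughput estimator: one LSTM
# summarises the tokens of each instruction, a second LSTM runs over the
# instructions of the basic block, and a linear layer regresses the cycle
# count. Names, sizes, and tokenisation are illustrative assumptions.
import torch
import torch.nn as nn

class ThroughputEstimator(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.token_lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)   # within an instruction
        self.instr_lstm = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)  # across the block
        self.regressor = nn.Linear(hidden_dim, 1)

    def forward(self, block):
        # block: list of LongTensors, one per instruction, each shaped (1, n_tokens)
        instr_states = []
        for tokens in block:
            _, (h, _) = self.token_lstm(self.embed(tokens))
            instr_states.append(h[-1])                      # (1, hidden_dim) summary of the instruction
        seq = torch.stack(instr_states, dim=1)              # (1, n_instructions, hidden_dim)
        _, (h, _) = self.instr_lstm(seq)
        return self.regressor(h[-1]).squeeze(-1)            # predicted cycles for the block

model = ThroughputEstimator(vocab_size=1000)
fake_block = [torch.randint(0, 1000, (1, 5)) for _ in range(3)]  # 3 instructions, 5 tokens each
print(model(fake_block))
```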

"We show that Ithemals learned model is significantly more accurate than the analytical models, dropping the mean absolute percent error by more than 50 per cent across all benchmarks, while still delivering fast estimation speeds," the paper explains.

A second paper presented in November at the IEEE International Symposium on Workload Characterization, "BHive: A Benchmark Suite and Measurement Framework for Validating x86-64 Basic Block Performance Models," describes the BHive benchmark for evaluating Ithemal and competing models, IACA, llvm-mca, and OSACA (Open Source Architecture Code Analyzer). It found Ithemal outperformed the other models except on vectorized basic blocks.

And in December at the NeurIPS conference, the boffins presented a third paper, titled "Compiler Auto-Vectorization with Imitation Learning," that describes a way to automatically generate compiler optimizations that outperform LLVM's SLP vectorizer.

The academics argue that their work shows the value of machine learning in the context of performance analysis.

"Ithemal demonstrates that future compilation and performance engineering tools can be augmented with datadriven approaches to improve their performance and portability, while minimizing developer effort," the paper concludes.

Read the original post:
Finally, a good use for AI: Machine-learning tool guesstimates how well your code will run on a CPU core - The Register

Tiny Machine Learning On The Attiny85 – Hackaday

We tend to think that the lowest point of entry for machine learning (ML) is on a Raspberry Pi, which it definitely is not. [EloquentArduino] has been pushing the limits to the low end of the scale, and managed to get a basic classification model running on the ATtiny85.

Using his experience of running ML models on an old Arduino Nano, he had created a generator that can export C code from a scikit-learn model. He tried using this generator to compile a support-vector colour classifier for the ATtiny85, but ran into a problem with the Arduino ATtiny85 compiler not supporting a variadic function used by the generator. Fortunately he had already experimented with an alternative approach that uses a non-variadic function, so he was able to dust that off and get it working. The classifier accepts inputs from an RGB sensor to identify a set of objects by colour. The model ended up easily fitting into the capabilities of the diminutive ATtiny85, using only 41% of the available flash and 4% of the available RAM.
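For a sense of the workflow, the hedged sketch below trains a toy support-vector colour classifier on made-up RGB readings with scikit-learn; the commented-out micromlgen call illustrates the kind of C-export step the generator performs, though the data, labels, and exact export API here are assumptions rather than [EloquentArduino]'s code.

```python
# Toy support-vector colour classifier on invented RGB readings; the commented
# micromlgen call sketches the C-export step for a microcontroller.
import numpy as np
from sklearn.svm import SVC

# RGB samples (0-255) for three objects: red mug, green apple, blue pen.
X = np.array([[200, 30, 40], [210, 25, 50],
              [30, 180, 60], [40, 190, 55],
              [25, 40, 200], [35, 50, 210]])
y = np.array([0, 0, 1, 1, 2, 2])

clf = SVC(kernel="linear").fit(X, y)
print(clf.predict([[205, 35, 45]]))   # -> [0], the red mug

# from micromlgen import port
# print(port(clf))   # emits a plain-C classifier to compile for the ATtiny85
```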

It's important to note what [EloquentArduino] isn't doing here: running an artificial neural network. They're just too inefficient in terms of memory and computation time to fit on an ATtiny. But neural nets aren't the only game in town, and if your task is classifying something based on a few inputs, like reading a gesture from accelerometer data, or naming a color from a color sensor, the approach here will serve you well. We wonder if this wouldn't be a good solution to the pesky problem of identifying bats by their calls.

We really like how approachable machine learning has become, and if you're keen to give ML a go, have a look at the rest of the EloquentArduino blog; it's a small goldmine.

We're getting more and more machine learning-related hacks, like basic ML on an Arduino Uno, and Lego sorting using ML on a Raspberry Pi.

Follow this link:
Tiny Machine Learning On The Attiny85 - Hackaday

SiFive and CEVA Partner to Bring Machine Learning Processors to Mainstream Markets – PRNewswire

SAN MATEO and MOUNTAIN VIEW, Calif., Jan. 7, 2020 /PRNewswire/ -- SiFive, Inc., the leading provider of commercial RISC-V processor IP and silicon solutions, and CEVA, Inc. (NASDAQ: CEVA), the leading licensor of wireless connectivity and smart sensing technologies, today announced a new partnership to enable the design and creation of ultra-low-power domain-specific Edge AI processors for a range of high-volume end markets. The partnership, as part of SiFive's DesignShare program, is centered around RISC-V CPUs, CEVA's DSP cores, AI processors and software, which will be designed into SoCs targeting an array of end markets where on-device neural network inferencing supporting imaging, computer vision, speech recognition and sensor fusion applications is required. Initial end markets include smart home, automotive, robotics, security and surveillance, augmented reality, industrial and IoT.

Machine Learning Processing at the Edge
Domain-specific SoCs which can handle machine learning processing on-device are set to become mainstream, as the processing workloads of devices increasingly include a mix of traditional software and efficient deep neural networks to maximize performance and battery life and to add new intelligent features. Cloud-based AI inference is not suitable for many of these devices due to security, privacy and latency concerns. SiFive and CEVA are directly addressing these challenges through the development of a range of domain-specific scalable edge AI processor designs, with the optimal balance of processing, power efficiency and cost.

The Edge AI SoCs are supported by CEVA's award-winning CDNN Deep Neural Network machine learning software compiler that creates fully-optimized runtime software for the CEVA-XM vision processors, CEVA-BX audio DSPs and NeuPro AI processors. Targeted for mass-market embedded devices, CDNN incorporates a broad range of network optimizations, advanced quantization algorithms, data flow management and fully-optimized compute CNN and RNN libraries into a holistic solution that enables cloud-trained AI models to be deployed on edge devices for inference processing. CEVA will also supply a full development platform for partners and developers based on the CEVA-XM and NeuPro architectures to enable the development of deep learning applications using the CDNN, targeting any advanced network, as well as DSP tools and libraries for audio and voice pre- and post-processing workloads.

SiFive DesignShare Program
The SiFive DesignShare IP program offers a streamlined process for companies seeking to partner with leading vendors to provide pre-integrated premium silicon IP for bringing new SoCs to market. As part of SiFive's business model to license IP when ready for mass production, the flexibility and choice of the DesignShare IP program reduces the complexities of contract negotiation and licensing agreements to enable faster time to market through simpler prototyping, no legal red tape, and no upfront payment.

"CEVA's partnership with SiFive enables the creation of Edge AI SoCs that can be quickly and expertly tailored to the workloads, while also retaining the flexibility to support new innovations in machine learning," said Issachar Ohana, Executive Vice President, Worldwide Sales at CEVA. "Our market leading DSPs and AI processors, coupled with the CDNN machine learning software compiler, allow these AI SoCs to simplify the deployment of cloud-trained AI models in intelligent devices and provides a compelling offering for anyone looking to leverage the power of AI at the edge."

"Enabling future-proof, technology-leading processor designs is a key step in SiFive's mission to unlock technology roadmaps," said Dr. Naveed Sherwani, president and CEO, SiFive. "The rapid evolution of AI models combined with the requirements for low power, low latency, and high-performance demand a flexible and scalable approach to IP and SoC design that our joint CEVA / SiFive portfolio is superbly positioned to provide. The result is shorter time-to-market, while lowering the entry barriers for device manufacturers to create powerful, differentiated products."

Availability
SiFive's DesignShare program, including CEVA-BX Audio DSPs, CEVA-XM Vision DSPs and NeuPro AI processors, is available now. Visit http://www.sifive.com/designshare for more information.

About SiFive
SiFive is on a mission to free semiconductor roadmaps and declare silicon independence from the constraints of legacy ISAs and fragmented solutions. As the leading provider of market-ready processor core IP and silicon solutions based on the free and open RISC-V instruction set architecture, SiFive helps SoC designers reduce time-to-market and realize cost savings with customized, open-architecture processor cores, and democratizes access to optimized silicon by enabling system designers in all markets to build customized RISC-V based semiconductors. Founded by the inventors of RISC-V, SiFive has 16 design centers worldwide, and has backing from Sutter Hill Ventures, Qualcomm Ventures, Spark Capital, Osage University Partners, Chengwei, Huami, SK Hynix, Intel Capital, and Western Digital. For more information, please visit http://www.sifive.com.

Stay current with the latest SiFive updates via LinkedIn, Twitter, Facebook, and YouTube.

About CEVA, Inc.
CEVA is the leading licensor of wireless connectivity and smart sensing technologies. We offer Digital Signal Processors, AI processors, wireless platforms and complementary software for sensor fusion, image enhancement, computer vision, voice input and artificial intelligence, all of which are key enabling technologies for a smarter, connected world. We partner with semiconductor companies and OEMs worldwide to create power-efficient, intelligent and connected devices for a range of end markets, including mobile, consumer, automotive, robotics, industrial and IoT. Our ultra-low-power IPs include comprehensive DSP-based platforms for 5G baseband processing in mobile and infrastructure, advanced imaging and computer vision for any camera-enabled device and audio/voice/speech and ultra-low power always-on/sensing applications for multiple IoT markets. For sensor fusion, our Hillcrest Labs sensor processing technologies provide a broad range of sensor fusion software and IMU solutions for AR/VR, robotics, remote controls, and IoT. For artificial intelligence, we offer a family of AI processors capable of handling the complete gamut of neural network workloads, on-device. For wireless IoT, we offer the industry's most widely adopted IPs for Bluetooth (low energy and dual mode), Wi-Fi 4/5/6 (802.11n/ac/ax) and NB-IoT. Visit us at http://www.ceva-dsp.com and follow us on Twitter, YouTube, Facebook, LinkedIn and Instagram.

Logo: https://mma.prnewswire.com/media/74483/ceva__inc__logo.jpg

SOURCE CEVA, Inc.

http://www.ceva-dsp.com

Excerpt from:
SiFive and CEVA Partner to Bring Machine Learning Processors to Mainstream Markets - PRNewswire

FLIR Systems and ANSYS to Speed Thermal Camera Machine Learning for Safer Cars – Business Wire

ARLINGTON, Va.--(BUSINESS WIRE)--FLIR Systems, Inc. (NASDAQ: FLIR) and ANSYS (NASDAQ: ANSS) are collaborating to deliver superior hazard detection capabilities for assisted driving and autonomous vehicles (AVs), empowering automakers to deliver unprecedented vehicle safety. Through this collaboration, FLIR will integrate a fully physics-based thermal sensor into ANSYS' leading-edge driving simulator to model, test, and validate thermal camera designs within an ultra-realistic virtual world. The new solution will reduce original equipment manufacturers' (OEM) development time by optimizing thermal camera placement for use with tools such as automatic emergency braking (AEB), pedestrian detection, and within future AVs.

Having the ability to test in virtual environments complements the existing systems available to FLIR customers and partners, including the FLIR automotive development kit (ADK) featuring a FLIR Boson thermal camera, the FLIR starter thermal dataset and the regional, city-specific thermal datasets. The FLIR thermal dataset programs were created for machine learning in advanced driver assistance system (ADAS), AEB, and AV development.

The current AV and ADAS sensors face challenges in darkness or shadows, sun glare and inclement weather such as most fog. Thermal cameras, however, can effectively detect and classify objects in these conditions. Integrating FLIR Systems' thermal sensor into ANSYS' VRXPERIENCE enables simulation of thousands of driving scenarios across millions of miles in mere days. Furthermore, engineers can simulate difficult-to-produce scenarios where thermal provides critical data, including detecting pedestrians in crowded, low-contrast environments.

"By adding ANSYS' industry-leading simulation solutions to the existing suite of tools for physical testing, engineers, automakers, and automotive suppliers can improve the safety of vehicles in all types of driving conditions," said Frank Pennisi, President of the Industrial Business Unit at FLIR Systems. "The industry can also recreate corner cases that drivers can see every day but are difficult to replicate in physical environments, paving the way for improved neural networks and the performance of safety features such as AEB."

"FLIR Systems recognizes the limitations of relying solely on gathering machine learning datasets in the physical world to make automotive thermal cameras as safe and reliable as possible for automotive uses," said Eric Bantegnie, Vice President and General Manager at ANSYS. "Now with ANSYS solutions, FLIR can further empower automakers to speed the creation and certification of assisted-driving systems with thermal cameras."

In addition to the city-specific data sets, FLIR has more than a decade of experience in the automotive industry. FLIR has provided more than 700,000 thermal sensors as part of its night vision warning systems for a variety of carmakers, including GM, Audi and Mercedes-Benz. Also, FLIR recently announced that its thermal sensor has been selected by Veoneer, a tier-one automotive supplier, for its level-four AV production contract with a top global automaker, planned for 2021.

FLIR Systems' thermal-enhanced demonstration car, along with other innovative FLIR products, will be on display at the FLIR booth #8528 during the 2020 Consumer Electronics Show in Las Vegas, Nevada, from January 6-10.

For more information on FLIR Systems' automotive solutions, please visit https://www.flir.com/safercars.

About FLIR Systems, Inc.

Founded in 1978, FLIR Systems is a world-leading industrial technology company focused on intelligent sensing solutions for defense, industrial, and commercial applications. FLIR Systems' vision is to be "The World's Sixth Sense," creating technologies to help professionals make more informed decisions that save lives and livelihoods. For more information, please visit http://www.flir.com and follow @flir.

See the article here:
FLIR Systems and ANSYS to Speed Thermal Camera Machine Learning for Safer Cars - Business Wire

Pear Therapeutics Expands Pipeline with Machine Learning, Digital Therapeutic and Digital Biomarker Technologies – Business Wire

BOSTON & SAN FRANCISCO--(BUSINESS WIRE)--Pear Therapeutics, Inc., the leader in Prescription Digital Therapeutics (PDTs), announced today that it has entered into agreements with multiple technology innovators, including Firsthand Technology, Inc., leading researchers from the Karolinska Institute in Sweden, Cincinnati Children's Hospital Medical Center, Winterlight Labs, Inc., and NeuroLex Laboratories, Inc. These new agreements continue to bolster Pear's PDT platform by adding to its library of digital biomarkers, machine learning algorithms, and digital therapeutics.

Pear's investment in these cutting-edge technologies further supports its strategy to create the broadest and deepest toolset for the development of PDTs that redefine the standard of care in a range of therapeutic areas. With access to these new technologies, Pear is positioned to develop PDTs in new disease areas, while leveraging machine learning to personalize and improve its existing PDTs.

"We are excited to announce these agreements, which expand the leading PDT platform," said Corey McCann, M.D., Ph.D., President and CEO of Pear. "Accessing external technologies allows us to continue to broaden the scope and efficacy of PDTs."

"The field of digital health is evolving rapidly, and PDTs are going to increasingly play a big part because they are designed to allow doctors to treat disease in combination with drug products more effectively than with drugs alone," said Alex Pentland, Ph.D., a leading expert in voice analytics and MIT professor. "For PDTs to make their mark in healthcare, they will need to continually evolve. Machine learning and voice biomarker algorithms are key to guide that evolution and personalization."

About Pear Therapeutics

Pear Therapeutics, Inc. is the leader in prescription digital therapeutics. We aim to redefine medicine by discovering, developing, and delivering clinically validated software-based therapeutics to provide better outcomes for patients, smarter engagement and tracking tools for clinicians, and cost-effective solutions for payers. Pear has a pipeline of products and product candidates across therapeutic areas, including severe psychiatric and neurological conditions. Our lead product, reSET, for the treatment of Substance Use Disorder, was the first prescription digital therapeutic to receive marketing authorization from the FDA to treat disease. Pear's second product, reSET-O, for the treatment of Opioid Use Disorder, received marketing authorization from the FDA in December 2018. For more information, visit us at http://www.peartherapeutics.com.


Read more:
Pear Therapeutics Expands Pipeline with Machine Learning, Digital Therapeutic and Digital Biomarker Technologies - Business Wire

Machine learning is innately conservative and wants you to either act like everyone else, or never change – Boing Boing

Next month, I'm giving a keynote talk at The Future of the Future: The Ethics and Implications of AI, an event at UC Irvine that features Bruce Sterling, Rose Eveleth, David Kaye, and many others!

Preparatory to that event, I wrote an op-ed for the LA Review of Books on AI and its intrinsic conservatism, building on Molly Sauter's excellent 2017 piece for Real Life.

Sauter's insight in that essay: machine learning is fundamentally conservative, and it hates change. If you start a text message to your partner with "Hey darling," the next time you start typing a message to them, "Hey" will beget an autosuggestion of "darling" as the next word, even if this time you are announcing a break-up. If you type a word or phrase you've never typed before, autosuggest will prompt you with the statistically most common next phrase from all users (I made a small internet storm in July 2018 when I documented autocomplete's suggestion in my message to the family babysitter, which paired "Can you sit" with "on my face and").

This conservatism permeates every system of algorithmic inference: search for a refrigerator or a pair of shoes and they will follow you around the web as machine learning systems re-target you while you move from place to place, even after you've bought the fridge or the shoes. Spend some time researching white nationalism or flat earth conspiracies and all your YouTube recommendations will try to reinforce your interest. Follow a person on Twitter and you will be inundated with similar people to follow. Machine learning can produce very good accounts of correlation (this person has that person's address in their address book and most of the time that means these people are friends) but not causation (which is why Facebook constantly suggests that survivors of stalking follow their tormentors who, naturally, have their targets' addresses in their address books).

Our Conservative AI Overlords Want Everything to Stay the Same [Cory Doctorow/LA Review of Books]

(Image: Groundhog Day/Columbia Pictures)


View post:
Machine learning is innately conservative and wants you to either act like everyone else, or never change - Boing Boing

Can We Do Deep Learning Without Multiplications? – Analytics India Magazine

A neural network is built around simple linear equations like Y = WX + B, which contain something called weights, W. These weights get multiplied with the input X and thus play a crucial role in how the model predicts.

Most of the computations in deep neural networks are multiplications between float-valued weights and float-valued activations during the forward inference.

The prediction scores can even go downhill if a wrong weight gets updated, and as the network gets deeper, i.e., with the addition of more layers and columns of connected nodes, the error gets magnified and the results miss the target.

To make models lighter while keeping their efficiency intact, many solutions have been developed, and one such solution is neural compression.

When we say neural compression, what it actually means is a combination of several techniques.

However, these methods enable faster training of models but do not eliminate the underlying operations.

Convolutions are the gold standard of machine vision models, the default operation for extracting features from visual data. Yet there has hardly been any attempt to replace convolution with another, more efficient similarity measure, such as one that only involves additions.

Instead of developing software and hardware solutions to cater for faster multiplications between layers, can we train models without multiplication?

To answer this question, researchers from Huawei labs and Peking University in collaboration with the University of Sydney have come up with AdderNet or adder networks that trade massive multiplications in deep neural networks, especially convolutional neural networks (CNNs), for much cheaper additions to reduce computation costs.

The notion here is that adding two numbers is easy compared to multiplying two numbers.

A norm, in the context of linear algebra, measures the size of a vector; the L1 norm is the sum of the absolute values of its entries.

For the vector, say, X = [3, 4],

the L1 norm is calculated as:

||X||1 = |3| + |4| = 7

The underlying working of AdderNets, according to Hanting Chen et al., is given as follows:

Input: An initialised adder network N with its training set X and the corresponding labels Y, along with the global learning rate and the hyper-parameter.

Output: A well-trained adder network N with almost no multiplications.

To validate the effectiveness of AdderNets, the following setup is used:

Benchmark datasets: MNIST, CIFAR and ImageNet.

Hardware: NVIDIA Tesla V100 GPU

Framework: PyTorch.

The results from the MNIST experiment show that the convolutional neural network achieves a 99.4% accuracy with 435K multiplications and 435K additions. By replacing the multiplications in convolution with additions, the proposed AdderNet achieves a 99.4% accuracy, which is the same as that of CNNs, with 870K additions and almost no multiplication.

The biggest difference between CNNs and AdderNets is that the convolutional neural network calculates the cross-correlation between filters and inputs. If filters and inputs are approximately normalised, the convolution operation then becomes equivalent to cosine distance between two vectors.

AdderNets, on the other hand, utilise the L1-norm to distinguish different classes.

As a result, features of CNNs in different classes are divided by their angles, while features of AdderNets tend to be clustered towards different class centres.
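The contrast is easiest to see for a single filter position: a convolution accumulates products between the patch and the filter, while an adder layer accumulates (and negates) absolute differences, so no multiplications between inputs and weights are needed. The NumPy snippet below is a minimal sketch of that comparison on random data, not the authors' implementation.

```python
# Convolution-style response vs AdderNet-style response for one filter position.
import numpy as np

rng = np.random.default_rng(0)
patch = rng.standard_normal((3, 3, 8))   # 3x3 input window with 8 channels
filt  = rng.standard_normal((3, 3, 8))   # one filter of the same shape

conv_response  = np.sum(patch * filt)             # cross-correlation (multiply-accumulate)
adder_response = -np.sum(np.abs(patch - filt))    # AdderNet: subtract, abs, accumulate

print(conv_response, adder_response)   # a less negative adder response means a closer match
```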

However, AdderNets still have a long way to go.

For example, let's say X is the input feature, F is the filter, and Y is the output. The difference between CNNs and AdderNets can be seen in the way their output variances are approximated.

Usually, Var[F], the variance of the filter, is a small value (~0.003). So multiplying by Var[F] in the case of CNNs results in smaller output variances, which in turn leads to a smooth flow of information through the network.

In AdderNets, by contrast, the addition-based operation gives a larger output variance, which means the gradient with respect to X is smaller, and this slows down the network's updates.

AdderNets were proposed to make machine learning a lightweight task, and yet here we are, already trading away training speed. To avoid these large-variance effects, the authors recommend the use of an adaptive learning rate for the different layers in an AdderNet.
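One hedged way to picture such a per-layer adaptive rate is to scale each layer's step by the inverse of its own gradient magnitude, so a layer with the unusually large gradients an adder layer can produce still takes a moderately sized step. The exact rule in the paper may differ; the snippet below is only an illustration.

```python
# Illustrative per-layer adaptive step: normalise each layer's gradient by its
# own RMS magnitude before applying the global learning rate.
import numpy as np

def layer_update(weights, grad, global_lr=0.1, eps=1e-8):
    local_scale = 1.0 / (np.sqrt(np.mean(grad ** 2)) + eps)   # per-layer scaling factor
    return weights - global_lr * local_scale * grad

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 32))
g = 50.0 * rng.standard_normal((64, 32))    # very large gradient, as in an adder layer
W_new = layer_update(W, g)
print(np.abs(W_new - W).mean())             # average step stays on the order of global_lr
```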

Machine learning is computationally intensive and there is always a tradeoff between accuracy and inference time (speed).

The high power consumption of these high-end GPU cards has hindered state-of-the-art machine learning models from being deployed on smartphones and wearable devices.

Though companies like Apple, with their A13 Bionic chips, are revolutionising deep learning for mobile devices, techniques that have been overlooked still call for effective investigation. Something as radical as imagining convolutions without multiplications can result in models like AdderNets.


See the rest here:
Can We Do Deep Learning Without Multiplications? - Analytics India Magazine

Cerner Expands Collaboration with Amazon Web as its Preferred Machine Learning Provider – Story of Future

Cerner is looking to capitalize on the latest technologies. As part of its ongoing collaboration efforts, the company has chosen to expand its relationship with Amazon Web Services (AWS), naming it its preferred provider for machine learning and artificial intelligence. Cerner will continue using AWS technologies to improve the overall quality of patient care. Through this collaboration, the company also expects to tackle healthcare costs and boost population health efforts.

Cerner will work to move core applications to AWS as a major aspect of the collaborative agreement, officials said. Moreover, the company is standardizing its machine learning and AI workloads on AWS to develop new predictive technology.

One focal point of this new initiative is the Cerner Machine Learning Ecosystem, a platform built with Amazon SageMaker, Amazon Simple Storage Service, AWS Lambda, Amazon Simple Queue Service, AWS Step Functions, and Amazon CloudWatch.
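For a rough picture of how such pieces can fit together, the hedged sketch below shows an SQS-triggered AWS Lambda handler forwarding a record to a SageMaker model endpoint, with output landing in CloudWatch Logs. The endpoint name, payload format, and wiring are invented for illustration and are not Cerner's actual platform.

```python
# Hedged sketch: an SQS-triggered Lambda forwards each message body to a
# SageMaker endpoint and prints the score (captured by CloudWatch Logs).
import json
import boto3

runtime = boto3.client("sagemaker-runtime")

def handler(event, context):
    results = []
    for record in event.get("Records", []):         # one record per SQS message
        payload = record["body"]                     # e.g. a CSV row of patient features
        response = runtime.invoke_endpoint(
            EndpointName="readmission-risk-model",   # hypothetical endpoint name
            ContentType="text/csv",
            Body=payload,
        )
        score = json.loads(response["Body"].read())
        print(f"readmission risk: {score}")          # lands in CloudWatch Logs
        results.append(score)
    return results
```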

The companies state the platform will help healthcare data scientists build, deploy, monitor, and manage machine learning models at scale, and help Cerner discover more predictive and automated diagnostic insights for earlier health interventions.

Among the first new AI initiatives AWS and Cerner will tackle are readmission prediction and clinician burnout.

At Amazon Web Services re:Invent, Cerner CEO Brent Shafer cited a client the company was able to serve by applying machine learning to historical data moved to the AWS Cloud. It developed a model that helped the healthcare system reach its lowest readmission rate in more than 10 years.

What's more, the new Amazon Transcribe Medical, and AI tools like it, will also be honed with help from Cerner to reduce the documentation burden faced by clinicians every day.

"The digitization of healthcare has inadvertently caused an increase in documentation for physicians," said Shafer. "Working with AWS will enable us to capture doctor-patient interaction and integrate it directly into the physician's electronic workflow. This new advancement will help doctors and providers spend less time filling out forms and more quality time with their patients."


See the original post here:
Cerner Expands Collaboration with Amazon Web as its Preferred Machine Learning Provider - Story of Future