Artificial Intelligence and technology in manufacturing is evolution, not revolution – The Financial Express

By Kishore Jayaraman

The economic resurrection roadmap for India laid out by the Finance Minister recently reinforced the Prime Minister's vision for a self-reliant India. As we envision the country's overall recovery, there is a need to focus on technology-driven systems that will be a key pillar of building a future-ready India.

For a self-reliant India, a robust local manufacturing sector can act as a strong lever for economic growth. To build a sustainable local manufacturing base, significant investment in disruptive technologies, including artificial intelligence (AI) enabled machine learning, will be key to bringing down labour costs, reducing product defects, shortening unplanned downtime, improving transition times and increasing production speed. AI can be used to effectively gather data and insights across manufacturing operations from design to delivery, including predictive problem solving by identifying issues that may not be easy to spot.

As the pace of technological advancement continues to quicken, unleashing the true potential of data becomes ever more important. Finding ways to combine operational knowledge and expertise with this data to create actionable insights is what artificial intelligence enables.

Automation, robotics, and complex analytics have all been used by the manufacturing industry for years. If the technology that makes manufacturing more flexible is widely deployed, it can enable more cost-effective customization, and that could create a real shift in competitiveness. The integration of AI into the manufacturing industry should be seen as an evolution rather than a revolution.

According to industry reports, the Internet of Things could contribute $10 trillion to the global economy by 2030. It is thus imperative that AI, material tracking mechanisms, 3D printing, automated product design, robotics and wearables are integrated seamlessly across production processes and allowed to play a larger role in delivering zero-error results to help manufacturers reduce costs and increase productivity.

Rolls-Royce serves as a living example of an industrial giant transitioning to the new age of data-enabled efficiency, showing how any company, regardless of its industry, can and should adapt to the data age. We are using advanced analytics, industrial artificial intelligence, and machine-learning techniques to develop data applications that will unlock design, manufacturing, and operational efficiencies within Rolls-Royce, and create new service propositions for customers.

Further, data generated by IoT sensors, aggregated and analyzed in the cloud, is providing Rolls-Royce with unprecedented insight into the live performance of its products, from jet engines and helicopter blades to power generation systems and marine turbines. And that data capability is rapidly evolving beyond just predicting equipment issues and maintenance requirements to providing customers with valuable aftermarket services that range from showing airlines how to optimize their routes to keeping a survey ship in position in heavy seas.
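
To make the idea concrete, here is a minimal sketch, in Python, of the kind of sensor-stream anomaly flagging described above. This is not Rolls-Royce's actual pipeline; the window size, threshold and simulated turbine-temperature data are illustrative assumptions.

```python
# Hypothetical predictive-maintenance sketch: flag sensor readings that
# drift far from their recent baseline before they become failures.
import statistics

def flag_anomalies(readings, window=20, z_threshold=3.0):
    """Return indices where a reading deviates sharply from the
    rolling statistics of the previous `window` samples."""
    anomalies = []
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mean = statistics.fmean(recent)
        stdev = statistics.stdev(recent) or 1e-9  # guard against zero spread
        if abs(readings[i] - mean) / stdev > z_threshold:
            anomalies.append(i)
    return anomalies

# Simulated turbine-temperature stream with one abnormal spike at index 30.
temps = [550.0 + 0.5 * (i % 5) for i in range(40)]
temps[30] = 620.0
print(flag_anomalies(temps))  # -> [30]
```

In a real deployment the interesting work is in what happens next: correlating such flags across fleets of machines and turning them into maintenance schedules and operating advice, as the article describes.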

The industry must come together to take technology to the next level and drive innovation through collaboration, with end-users and start-ups coming together to find ways to develop and embed new technology into businesses.

With a strong digital ecosystem and an abundance of talent, India is in a unique position to recast the framework of manufacturing, but this will require a quantum shift in our mindset and approach. Technology adoption will become a norm for success as we come out of this pandemic. We will need to up-skill our talent even as we learn to intelligently integrate technology into our businesses to move up the manufacturing value chain.

It is time we all envision a new India with greater strategic autonomy and the technological far-sightedness to anticipate and respond to the challenges of the future. The change must begin with the people and their approach. We must create a workforce of the future that feels empowered by technology, rather than threatened, so that they can create experiences that spark innovation and help build a more sustainable future.

(The author is President, Rolls-Royce India & South Asia. Views expressed are personal.)



Artificial Intelligence In Behavioral And Mental Health Care Market to Witness Astonishing Growth by 2026, Focusing on Leading Players AdvancedMD, …

The Artificial Intelligence in Behavioral and Mental Health Care Market research report is a new statistical data source added by Healthcare Intelligence Markets. It uses several approaches for analyzing data on the target market, such as primary and secondary research methodologies, and includes investigations based on historical records, current statistics, and futuristic developments. The Artificial Intelligence in Behavioral and Mental Health Care Market is predicted to grow at a significant CAGR in the forecast period.

The report covers growth rates and market value based on market dynamics and growth factors, drawing on the latest innovations, opportunities and trends in the industry. In addition to a SWOT analysis of key suppliers, it contains a comprehensive market analysis and a landscape of the major players.

Ask for Sample Copy of This Report: https://healthcareintelligencemarkets.com/request_sample.php?id=135984

Top Key Players Profiled in This Report:

AdvancedMD, Cerner, Core Solutions, Credible Behavioral Health, ICANotes, InSync Healthcare Solutions, iSalus Healthcare, Kareo, Meditab Software, Mentegram, Mindlinc, Netsmart, Nextgen Healthcare, NextStep Solutions, Nuesoft Technologies, Qualifacts, Raintree Systems, Sigmund Software, The Echo Group, TheraNest, Valant, Welligent, WRS Health, and many more.

What this research report offers:

The report highlights several global regions, including North America, Latin America, Asia-Pacific, Africa, and Europe, for a comparative study of the Artificial Intelligence in Behavioral and Mental Health Care Market. In terms of productivity, North America is the leading region for this market sector. Additionally, the report outlines the demand structure for these services in developing and developed countries.

Get Discount on This Report: https://healthcareintelligencemarkets.com/ask_for_discount.php?id=135984

Demand within the Artificial Intelligence in Behavioral and Mental Health Care Market has been rising due to factors such as technological advancement and heavy competition. The report covers different aspects of the business, represented using several graphical presentation techniques such as graphs, charts, pictures, and diagrams.

Reasons for buying this report:

If You Have Any Query, Ask Our Experts: https://healthcareintelligencemarkets.com/enquiry_before_buying.php?id=135984

Table of Contents:

Healthcare Intelligence Markets:

HealthCare Intelligence Markets Reports provides market intelligence and consulting services to global customers in 145 countries. As a B2B company, we help businesses respond boldly to evolving market challenges and create customized syndicated market research reports that help market players build game-changing strategies. In addition, our reports on the pharmaceutical development, clinical and healthcare IT industries cover future trends and market prospects.

Contact Us:

Marvella Lit

Phone number: +44-753-712-1342

Address: 90 State Office Center

90 State Street Suite 700, Albany, NY 12207

[emailprotected]

http://www.healthcareintelligenceMarkets.com


Metal Book Co-created By Human And Artificial Intelligence – Scoop.co.nz

Friday, 12 June 2020, 3:38 pm
Press Release: Phantom House

Wellington photographer Grant Sheehan has used artificial neural network technology to create photographic images that visualise an artificial intelligence (AI) dreamscape, and then published them in a book made of metal.

This massive new Kiwi project is a fusion of art and science. Does Ava Dream? has many different elements, but at the heart of it are the questions: what might an AI dream of, and what might those dreams look like?

Sheehan attempts to answer these questions using photography, film, music, and cutting-edge publishing technology. The artworks of Does Ava Dream? are created using high-res pattern images, combined with artificial neural network photography plug-ins, to illustrate how an AI's dream fragments might look on output.

Continuing the theme of AI and robotics, Sheehan has printed these images onto metal to create a singular metal book: singular both in the sense that it is remarkable and in the sense that there is only one of it.

Those interested can view the metal book at Pātaka gallery in Porirua. It is accompanied by large display versions of the dream images, plus a short film showing these images in motion. The music for this short film, like the images, has been co-created by Sheehan and artificial neural network technology.

For those who can't make it to the exhibition, Sheehan has also created a more traditional paper book about the project as a whole, called The Making Of Does Ava Dream? This is a gorgeous hardback coffee-table book that is published in two editions, one of which comes with a signed metallic paper print of one of the dream images.

Some images are available for republication upon enquiry.

More information about Does Ava Dream? at https://doesavadream.click/

Watch the trailer for the metal book Does Ava Dream? here: https://www.youtube.com/watch?v=-sdNzxbrxXU

Find out more about the exhibition at Pātaka here: https://pataka.org.nz/whats/exhibitions/grant-sheehan-does-ava-dream/

Title: The Making of Does Ava Dream?

Prices: $160.00 and $495.00

ISBN: 9780994128560

Full colour landscape, 310 x 280mm, 70 pages, June 2020

Binding: Hardcover with a dustjacket

Text: Satin low gloss

Images: Gloss coated

Each unit signed and numbered

Publisher & Distributor: Phantom House Books



The technical realities of functional quantum computers – is Google's ten-year plan for Quantum Computing viable? – Diginomica

In March, I explored the enterprise readiness of quantum computing in "Quantum computing is right around the corner, but cooling is a problem. What are the options?" I also detailed potential industry use cases, from supply chain to banking and finance. But what are the industry giants pursuing?

Recently, I listened to two somewhat different perspectives on quantum computing. One is Google's (public) ten-year plan.

Google plans to search for commercially viable applications in the short term, but it doesn't think there will be many for another ten years - a time frame I've heard referred to as "bound but loose," meaning no more than ten years, maybe sooner. In the industry, the term for the current state of the art is NISQ: Noisy Intermediate-Scale Quantum computing.

The largest quantum computers are in the 50-70 qubit range, and Google feels NISQ has a ceiling of maybe two hundred. The "noisy" part of NISQ is because the qubits need to interact and be nearby. That generates noise. The more qubits, the more noise, and the more challenging it is to control the noise.

But Google suggests the real unsolved problems in fields like optimization, materials science, chemistry, drug discovery, finance, and electronics will take machines with thousands of qubits, and it even envisions one million qubits on a planar array etched in aluminum. Major problems need solving, such as noise elimination, coherence, and lifetime (a qubit holds its state for only a tiny time slice).

In the meantime, Google is seeking customers to work with its researchers to find applications. Quantum computing needs algorithms as much as it needs qubits. It requires customers with a strong in-house science team and a commitment of three years. Whatever is discovered will be published as open source.

In summary, Google does not see commercial value in NISQ. They are using NISQ to discover what quantum computing can do that has any commercial capability.

First of all, if you have a picture in your mind of a quantum computer, chances are you are not including an essential element: a conventional computer. According to Quantum Computing: Progress and Prospects:

Although reports in the popular press tend to focus on the development of qubits and the number of qubits in the current prototypical quantum computing chip, any quantum computer requires an integrated hardware approach using significant conventional hardware to enable qubits to be controlled, programmed, and read out.

The author is undoubtedly correct. Most material about quantum computers never mentions this, and it raises quite a few issues that can potentially dilute the gee-whiz aspect. I'd heard this first from Itamar Sivan, Ph.D., CEO of Quantum Machines. He followed with the quip that technically, quantum computers aren't computers. It's that simple. They are not Turing Machines. File this under the category of "You're Not Too Old to Learn Something New."

From (Hindi) Theory of Computation - Turing Machine:

A Turing machine is a mathematical model of computation that defines an abstract machine, which manipulates symbols on a strip of tape according to a table of rules. Despite the model's simplicity, given any computer algorithm, a Turing machine capable of simulating that algorithm's logic can be constructed.
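
To make that definition concrete, here is a minimal Turing machine simulator in Python. The bit-flipping rule table is an illustrative example of mine, not something from the article.

```python
# A tape of symbols, a head, and a table of rules, per the definition above.
def run_turing_machine(tape, rules, state="start"):
    tape = list(tape)
    head = 0
    while state != "halt":
        symbol = tape[head] if head < len(tape) else "_"  # "_" is blank
        new_symbol, move, state = rules[(state, symbol)]
        if head < len(tape):
            tape[head] = new_symbol
        else:
            tape.append(new_symbol)
        head += 1 if move == "R" else -1
    return "".join(tape)

# Example rule table: flip every bit, moving right; halt on the first blank.
rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine("10110", rules))  # -> "01001_"
```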

Dr. Sivan clarified this as follows:

Any computer to ever be used, from the early-days computers to massive HPCs, are all Turing machines, and are therefore equivalent to one another. All computers developed and manufactured in the last decades are all merely bigger and more compact variations of one another. A quantum computer however is not MERELY a more advanced Turing machine, it is a different type of machine, and classical Turing machines are not equivalent to quantum computers as they are equivalent to one another.

Therefore, the complexity of running particular algorithms on quantum computers is different from the complexity of running them on classical machines. Just to make it clear, a quantum computer can be degenerated to behave like a classical computer, but NOT vice-versa.

There is a lot more to this concept, but most computers you've ever seen or heard of are Turing Machines, except Quantum computers. This should come as no surprise because anything about quantum mechanics is weird and counter-intuitive, so why would a quantum computer be any different?

According to Sivan, a quantum computing setup needs three elements to perform: the quantum processor itself, plus an orchestration platform of (conventional) hardware and software. There is no software in a quantum computer. The platform manages the progress of the algorithm, mostly through laser-beam pulses. The logic needed to operate the quantum computer resides with, and is controlled by, the orchestration platform.
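
As a rough illustration of that division of labour, consider the hypothetical sketch below: all program logic sits in the classical orchestration layer, which drives the quantum processor with timed pulses and digitizes the readout. The class and method names are invented for illustration and are not Quantum Machines' API.

```python
# Hypothetical sketch of the classical/quantum split described above.
from dataclasses import dataclass

@dataclass
class Pulse:
    qubit: int        # target qubit index
    shape: str        # e.g. a calibrated "pi/2" rotation
    duration_ns: int  # pulse length in nanoseconds

class OrchestrationPlatform:
    """Classical controller: holds all program logic, drives the QPU."""

    def run(self, program):
        for pulse in program:   # classical control flow...
            self.emit(pulse)    # ...rendered as analog pulses on the QPU
        return self.read_out()

    def emit(self, pulse):
        print(f"sending {pulse.shape} ({pulse.duration_ns} ns) to qubit {pulse.qubit}")

    def read_out(self):
        # In reality: digitize and threshold analog readout signals.
        return [0, 1]

program = [Pulse(0, "pi/2", 20), Pulse(1, "pi", 40)]
print(OrchestrationPlatform().run(program))
```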

The crucial difference between Google's and Quantum Machines' strategies is that Google views the current NISQ state of affairs as a testbed for finding algorithms and applications for future development, while Sivan and his company have produced an orchestration platform to put the current technology in play. Their platform is quantum-computer agnostic: it can operate with any of them. Sivan feels that the number of qubits is just one part of the equation. According to Dr. Sivan:

While today's most advanced quantum computers only have a relatively small number of available qubits (53 for IBM's latest generation and 54 for Google's Sycamore processor), we cannot maximize the potential of even this relatively small count. We are leaving a lot on the table with regards to what we can already accomplish with the computing power we already have. While we should continue to scale up the number of qubits, we also need to focus on maximizing what we already have.

I've asked a few quantum computer scientists if quantum computers can solve the Halting Problem. From Wikipedia:

The halting problem is determining, from a description of an arbitrary computer program and an input, whether the program will finish running or continue to run forever. Alan Turing proved in 1936 that a general algorithm to solve the halting problem for all possible program-input pairs could not exist.

That puts it in a class of problems that are undecidable. Oddly, opinion was split on the question, despite Turing's proof. Like Simplicio said to Galileo in Dialogues Concerning Two New Sciences, "If Aristotle had not said otherwise I would have believed it."
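
For readers who have not seen it, Turing's argument is a short diagonalization. The sketch below (mine, not from the article) shows why no universal halts() can exist; the functions are defined but deliberately never run.

```python
# Suppose halts(f, x) could always decide whether f(x) eventually halts.
def halts(f, x):
    # Hypothetical universal halting decider; Turing showed it cannot exist.
    raise NotImplementedError

def paradox(f):
    # Do the opposite of whatever the decider predicts for f run on itself.
    if halts(f, f):
        while True:   # predicted to halt? then loop forever
            pass
    return "halted"   # predicted to loop? then halt immediately

# paradox(paradox) halts if and only if halts() says it does not:
# a contradiction either way, so no correct halts() can be written.
```

And since a quantum computer can be simulated, however slowly, by a classical one, the undecidability carries over: more qubits buy speed on some problems, not an escape from Turing's proof.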

There are so many undecidable problems in math that I wondered if some of these might fall out. For example, straight from current AI problems, planning in a partially observable Markov decision process is considered undecidable. A million qubits? Maybe not. After all, Dr. Sivan pointed out that to replicate in a classical processor the information in just a 300-qubit quantum processor would require more transistors than there are atoms in the universe.
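
That claim is easy to sanity-check: an n-qubit state is described by 2^n complex amplitudes, and 2^300 already dwarfs the commonly cited ~10^80 atoms in the observable universe. A quick check in Python:

```python
n_qubits = 300
amplitudes = 2 ** n_qubits              # state amplitudes to track classically
atoms_in_universe = 10 ** 80            # common order-of-magnitude estimate

print(f"2^300 ~ {amplitudes:.3e}")      # about 2.0e+90
print(amplitudes > atoms_in_universe)   # True, by ten orders of magnitude
```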

I've always believed that actions speak louder than words. While Google is taking the long view, Quantum Machines provides the platform to see how far we can go with current technology. Google's tactics are familiar. Every time you use TensorFlow, it gets better. Every time you play with their autonomous car, it gets better. Their collaboration with a dozen or so technically advanced companies makes their quantum technology better.


European quantum computing startup takes its funding to €32M with fresh raise – TechCrunch

IQM Finland Oy (IQM), a European startup which makes hardware for quantum computers, has raised a €15M equity investment round from the EIC Accelerator program for the development of quantum computers. This is in addition to a raise of €3.3M from the Business Finland government agency, and takes the company's funding to over €32M. The company previously raised an €11.4M seed round.

IQM has hired a lot of engineers in its short life, and now says it plans to hire one quantum engineer per week on the pathway to commercializing its technology through the collaborative design of quantum-computing hardware and applications.

Dr. Jan Goetz, CEO and co-founder of IQM, said in a statement: "Quantum computers will be funded by European governments, supporting IQM's expansion strategy to build quantum computers in Germany."

The news comes as the Finnish government announced only last week that it would acquire a quantum computer, with €20.7M going to the Finnish state research centre VTT.

"It has been a mind-blowing forty-million past week for quantum computers in Finland. IQM staff are excited to work together with VTT, Aalto University, and CSC in this ecosystem," rejoices Prof. Mikko Möttönen, Chief Scientist and co-founder of IQM.

Previously, the German government said it would put €2bn into commissioning at least two quantum computers.

IQM now plans to expand its operations in Germany via its team in Munich.

"IQM will build co-design quantum computers for commercial applications and install testing facilities for quantum processors," said Prof. Enrique Solano, CEO of IQM Germany.

The company is focusing on superconducting quantum processors, which are streamlined for commercial applications in a co-design approach. This works by providing the full hardware stack for a quantum computer, integrating different technologies, and then inviting collaborations with quantum software companies.

IQM was one of 72 companies to succeed in the EIC's selection process. Altogether, 3,969 companies applied for this funding.


The future of quantum computing is Azure bright and you can try it – The American Genius

As time goes on, the value of efficiency and convenience becomes more and more important. We've seen this in many examples, from talk-to-text to ordering food directly to your door without ever even speaking to another human.

Now coming into the convenience game is a keyboard that allows you to scan instead of type. Anyline is the new keyboard that instantly collects data with the snap of a camera.

Scan ID information, serial numbers, vouchers, IBANs, and barcodes in an instant with your smartphone, as it is compatible with Android and iOS. The app also allows you to scan things such as gift card barcodes, phone numbers you see on street advertisements, and more so, in a sense, it brings CTRL + C to real life.

With your smartphone, you can instantly collect data with the scan function on your keyboard. The platform is compatible with messenger, email, and browser apps. You scan the data and instantly paste it where you want it, saving the time of manual data entry.

This would be useful for scanning things to your notes section that you may refer to often, like your health insurance ID number, your WiFi router information, credit card info and whatnot. As with anything else like this, the concern of privacy is always there, so make sure you're doing what you can to protect your information (using a passcode and/or Face ID, not using shared/public networks, etc.). While you should know it by heart, I would recommend never scanning your social security number.

However, something like this does save a lot of time, as it doesn't involve mistyping; it picks up a barcode accurately. Also, you won't need someone reading something back to you so you can accurately type it into your phone.

This could be a simple way to save time and become a more efficient person in general, and it makes it easier to share information with others. This is also super helpful for people who have trouble reading the teeny tiny type that barcodes are often displayed in.

Comment your thoughts below, and share any tips you use to help further your efficiency!


What’s New in HPC Research: Hermione, Thermal Neutrons, Certifications & More – HPCwire

In this bimonthly feature, HPCwire highlights newly published research in the high-performance computing community and related domains. From parallel programming to exascale to quantum computing, the details are here.

Developing a performance model-based predictor for parallel applications on the cloud

As cloud computing becomes an increasingly viable alternative to on-premises HPC, researchers are turning their eyes to addressing latency and unreliability issues in cloud HPC environments. These researchers (a duo from the Egypt-Japan University of Science and Technology and Benha University) propose a predictor for the execution time of MPI-based cloud HPC applications, finding 88% accuracy on ten benchmarks.

Authors: Abdallah Saad and Ahmed El-Mahdy.

Investigating portability, performance and maintenance tradeoffs in exascale systems

As the exascale era swiftly approaches, researchers are increasingly grappling with the difficult tradeoffs between major system priorities that will be demanded by such massive systems. These researchers (a team from the University of Macedonia) explore these tradeoffs through a case study measuring the effect of runtime optimizations on code maintainability.

Authors: Elvira-Maria Arvanitou, Apostolos Ampatzoglou, Nikolaos Nikolaidis, Aggeliki-Agathi Tzintzira, Areti Ampatzoglou and Alexander Chatzigeorgiou.

Moving toward a globally acknowledged HPC certification

Skillsets are incredibly important in the HPC world, but certification is far from uniform. This paper, written by a team from four universities in the UK and Germany, describes the HPC Certification Forum: an effort to categorize, define and examine competencies expected from proficient HPC practitioners. The authors describe the first two years of the community-led forum and outline plans for the first officially supported certificate in the second half of 2020.

Authors: Julian Kunkel, Weronika Filinger, Christian Meesters and Anja Gerbes.

Uncovering the hidden cityscape of ancient Hermione with HPC

In this paper, a team of researchers from the Digital Archaeology Laboratory at Lund University describe how they used a combination of HPC and integrated digital methods to uncover the ancient cityscape of Hermione, Greece. Using drones, laser scanning and modeling techniques, they fed their inputs into an HPC system, where they rendered a fully 3D representation of the city's landscape.

Authors: Giacomo Landeschi, Stefan Lindgren, Henrik Gerding, Alcestis Papadimitriou and Jenny Wallensten.

Examining thermal neutrons' threat to supercomputers

Off-the-shelf devices are performant, efficient and cheap, making them popular choices for HPC and other compute-intensive fields. However, the cheap boron used in these devices makes them susceptible to thermal neutrons, which these authors (a team from Brazil, the UK and Los Alamos National Laboratory) contend pose a serious threat to those devices' reliability. The authors examine RAM, GPUs, accelerators, an FPGA and more, tinkering with variables that affect the thermal neutron flux and measuring the threat posed by the neutrons under various conditions.

Authors: Daniel Oliveira, Sean Blanchard, Nathan DeBardeleben, Fernando Fernandes dos Santos, Gabriel Piscoya Dávila, Philippe Navaux, Andrea Favalli, Opale Schappert, Stephen Wender, Carlo Cazzaniga, Christopher Frost and Paolo Rech.

Deploying scientific AI networks at petaflop scale on HPC systems with containers

The computational demands of AI and ML systems are rapidly increasing in the scientific research sphere. These authors (a duo from LRZ and CERN) discuss the complications surrounding the deployment of ML frameworks on large-scale, secure HPC systems. They highlight a case study deployment of a convolutional neural network with petaflop performance on an HPC system.

Authors: David Brayford and Sofia Vallecorsa.

Running a high-performance simulation of a spiking neural network on GPUs

Spiking neural networks (SNNs) are the most commonly used computational model for neuroscience and neuromorphic computing, but simulations of SNNs on GPUs have imperfectly represented the networks, leading to performance and behavior shortfalls. These authors from Tsinghua University propose a series of technical approaches to more accurately representing SNNs on GPUs, including a code generation framework for high-performance simulations.

Authors: Peng Qu, Youhui Zhang, Xiang Fei and Weimin Zheng.

Do you know about research that should be included in next month's list? If so, send us an email at [emailprotected]. We look forward to hearing from you.


The cost of training machines is becoming a problem – The Economist

Jun 11th 2020

THE FUNDAMENTAL assumption of the computing industry is that number-crunching gets cheaper all the time. Moore's law, the industry's master metronome, predicts that the number of components that can be squeezed onto a microchip of a given size (and thus, loosely, the amount of computational power available at a given cost) doubles every two years.

For many comparatively simple AI applications, that means that the cost of training a computer is falling, says Christopher Manning, the director of Stanford University's AI Lab. But that is not true everywhere. A combination of ballooning complexity and competition means costs at the cutting edge are rising sharply.

Dr Manning gives the example of BERT, an AI language model built by Google in 2018 and used in the firm's search engine. It had more than 350m internal parameters and a prodigious appetite for data. It was trained using 3.3bn words of text culled mostly from Wikipedia, an online encyclopedia. These days, says Dr Manning, Wikipedia is not such a large data-set. "If you can train a system on 30bn words it's going to perform better than one trained on 3bn." And more data means more computing power to crunch it all.

OpenAI, a research firm based in California, says demand for processing power took off in 2012, as excitement around machine learning was starting to build, and has accelerated sharply since. By 2018, the computer power used to train big models had risen 300,000-fold, and was doubling every three and a half months (see chart). It should know: to train its own OpenAI Five system, designed to beat humans at Defense of the Ancients 2, a popular video game, it scaled machine learning to unprecedented levels, running thousands of chips non-stop for more than ten months.
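
Those two figures are mutually consistent, give or take rounding. Treating 2012-2018 as roughly six years (an approximation on my part), a 300,000-fold rise implies about 18 doublings, one every four months or so, close to the quoted pace:

```python
import math

growth = 300_000                      # compute increase reported by OpenAI
years = 6                             # 2012 to 2018, approximately
doublings = math.log2(growth)         # ~18.2 doublings
months_per_doubling = years * 12 / doublings

print(f"{doublings:.1f} doublings, one every {months_per_doubling:.1f} months")
```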

Exact figures on how much this all costs are scarce. But a paper published in 2019 by researchers at the University of Massachusetts Amherst estimated that training one version of Transformer, another big language model, could cost as much as $3m. Jerome Pesenti, Facebook's head of AI, says that one round of training for the biggest models can cost millions of dollars in electricity consumption.

Facebook, which turned a profit of $18.5bn in 2019, can afford those bills. Those less flush with cash are feeling the pinch. Andreessen Horowitz, an influential American venture-capital firm, has pointed out that many AI startups rent their processing power from cloud-computing firms like Amazon and Microsoft. The resulting bills (sometimes 25% of revenue or more) are one reason, it says, that AI startups may make for less attractive investments than old-style software companies. In March, Dr Manning's colleagues at Stanford, including Fei-Fei Li, an AI luminary, launched the National Research Cloud, a cloud-computing initiative to help American AI researchers keep up with spiralling bills.

The growing demand for computing power has fuelled a boom in chip design and specialised devices that can perform the calculations used in AI efficiently. The first wave of specialist chips were graphics processing units (GPUs), designed in the 1990s to boost video-game graphics. As luck would have it, GPUs are also fairly well-suited to the sort of mathematics found in AI.

Further specialisation is possible, and companies are piling in to provide it. In December, Intel, a giant chipmaker, bought Habana Labs, an Israeli firm, for $2bn. Graphcore, a British firm founded in 2016, was valued at $2bn in 2019. Incumbents such as Nvidia, the biggest GPU-maker, have reworked their designs to accommodate AI. Google has designed its own tensor-processing unit (TPU) chips in-house. Baidu, a Chinese tech giant, has done the same with its own Kunlun chips. Alfonso Marone at KPMG reckons the market for specialised AI chips is already worth around $10bn, and could reach $80bn by 2025.

"Computer architectures need to follow the structure of the data they're processing," says Nigel Toon, one of Graphcore's co-founders. The most basic feature of AI workloads is that they are embarrassingly parallel, which means they can be cut into thousands of chunks which can all be worked on at the same time. Graphcore's chips, for instance, have more than 1,200 individual number-crunching cores, and can be linked together to provide still more power. Cerebras, a Californian startup, has taken an extreme approach. Chips are usually made in batches, with dozens or hundreds etched onto standard silicon wafers 300mm in diameter. Each of Cerebras's chips takes up an entire wafer by itself. That lets the firm cram 400,000 cores onto each.

Other optimisations are important, too. Andrew Feldman, one of Cerebras's founders, points out that AI models spend a lot of their time multiplying numbers by zero. Since those calculations always yield zero, each one is unnecessary, and Cerebras's chips are designed to avoid performing them. Unlike many tasks, says Mr Toon at Graphcore, ultra-precise calculations are not needed in AI. That means chip designers can save energy by reducing the fidelity of the numbers their creations are juggling. (Exactly how fuzzy the calculations can get remains an open question.)
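
A toy NumPy sketch of those two tricks, zero-skipping and reduced precision, may help fix the ideas; it is illustrative only and bears no relation to how Cerebras or Graphcore silicon actually implements them:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)
x = rng.standard_normal(4).astype(np.float32)
w[w < 0.5] = 0.0                       # sparse weights: most entries zero

dense = w @ x                          # does every multiply, even the x*0 ones

# Zero-skipping: touch only the nonzero weights.
sparse = np.zeros(4, dtype=np.float32)
for r, c in zip(*np.nonzero(w)):
    sparse[r] += w[r, c] * x[c]

# Reduced precision: float16 halves the bits per number; answers drift
# slightly, which AI workloads often tolerate.
low_precision = w.astype(np.float16) @ x.astype(np.float16)

print(np.allclose(dense, sparse))             # True: same result, fewer multiplies
print(np.max(np.abs(dense - low_precision)))  # small precision error
```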

All that can add up to big gains. Mr Toon reckons that Graphcore's current chips are anywhere between ten and 50 times more efficient than GPUs. They have already found their way into specialised computers sold by Dell, as well as into Azure, Microsoft's cloud-computing service. Cerebras has delivered equipment to two big American government laboratories.

Moore's law isn't possible any more

Such innovations will be increasingly important, for the AI-fuelled explosion in demand for computer power comes just as Moore's law is running out of steam. Shrinking chips is getting harder, and the benefits of doing so are not what they were. Last year Jensen Huang, Nvidia's founder, opined bluntly that "Moore's law isn't possible any more".

Other researchers are therefore looking at more exotic ideas. One is quantum computing, which uses the counter-intuitive properties of quantum mechanics to provide big speed-ups for some sorts of computation. One way to think about machine learning is as an optimisation problem, in which a computer is trying to make trade-offs between millions of variables to arrive at a solution that minimises as many as possible. A quantum-computing technique called Grover's algorithm offers big potential speed-ups, says Krysta Svore, who leads the Quantum Architectures and Computation Group at Microsoft Research.
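
The standard statement of that speed-up (a textbook result, not a claim from the article) is that unstructured search over N candidates needs on the order of N classical queries but only about (π/4)·√N Grover iterations; the gap widens dramatically with scale:

```python
import math

for n in (10**6, 10**12):
    classical = n / 2                        # average-case exhaustive search
    grover = (math.pi / 4) * math.sqrt(n)    # quantum oracle queries
    print(f"N={n:.0e}: classical ~{classical:.1e}, Grover ~{grover:.1e}")
```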

Another idea is to take inspiration from biology, which proves that current brute-force approaches are not the only way. Cerebras's chips consume around 15kW when running flat-out, enough to power dozens of houses (an equivalent number of GPUs consumes many times more). A human brain, by contrast, uses about 20W of energy (about a thousandth as much) and is in many ways cleverer than its silicon counterpart. Firms such as Intel and IBM are therefore investigating neuromorphic chips, which contain components designed to mimic more closely the electrical behaviour of the neurons that make up biological brains.

For now, though, all that is far off. Quantum computers are relatively well-understood in theory, but despite billions of dollars in funding from tech giants such as Google, Microsoft and IBM, actually building them remains an engineering challenge. Neuromorphic chips have been built with existing technologies, but their designers are hamstrung by the fact that neuroscientists still do not understand what exactly brains do, or how they do it.

That means that, for the foreseeable future, AI researchers will have to squeeze every drop of performance from existing computing technologies. Mr Toon is bullish, arguing that there are plenty of gains to be had from more specialised hardware and from tweaking existing software to run faster. To quantify the nascent field's progress, he offers an analogy with video games: "We're past Pong," he says. "We're maybe at Pac-Man by now." All those without millions to spend will be hoping he is right.

This article appeared in the Technology Quarterly section of the print edition under the headline "Machine, learning"


Archer looks to commercialisation future with graphene-based biosensor tech – ZDNet

Prototype of a portable, battery-powered biosensing device, a few centimetres in size.

Archer Materials has announced it is progressing work on its graphene-based biosensor technology.

The Australian company told shareholders on Thursday it has developed a new set of graphene materials that could be applied for enhanced biosensing and to aid in the development of biocompatible inks in water-based solvents.

Archer said doing so could eliminate the use of hazardous and non-biocompatible chemicals, increasing the scope of biomolecules that can be detected.

"There is no doubt that diseases have a devastating effect on economies and there is value in advancing disease diagnosis using simpler, more accurate biosensors," Archer CEO Dr Mohammad Choucair said. "However, there are only a limited number of materials that can perform [biosensing], and they require innovative development."

Archer said laboratory synthesis was complemented with computational chemistry to calculate and visualise the materials candidates at the atom-level for their suitability in biomolecular sensing.

"We have rapidly advanced from raw material feedstock to prototypes of a portable battery-powered sensing device that can incorporate biological material," Choucair said. "This early stage work has the potential to allow much simpler and more effective sensing where early diagnosis of life-threatening diseases can lead to much improved outcomes."

While Australia has traditionally not been strong at commercialising research and development, Archer touted its graphene-based biotechnology as being at an early stage of commercialisation.

It said it has been working with commercial advisors within the Australian biotech industry to produce a roadmap.

Archer's commercial strategy involves applying the "triple-helix business model" for biotechnology innovation to develop printable graphene-based biosensor componentry and sublicense the associated intellectual property rights.

It's hoping to do this by developing commercial-grade prototypes; pursuing patent applications in Australia, the United States, and Europe; and establishing commercial partnerships.

Last month, Archer announced its plan to raise up to AU$3 million, offering shares at AU$0.60 per share.

The funds raised will be used to increase the pace of Archer's current work programs and to start hiring additional staff to do this work, it said.

Also in May, Archer announced a new agreement with IBM which it hopes will advance quantum computing and progress work towards solutions for the greater adoption of the technology.

Joining the IBM Q Network, Archer will gain access to IBM's quantum computing expertise and resources, seeing the Sydney-based company use IBM's open-source software framework, Qiskit.


Duke’s Labs Are Back in Business, But In a New Way – Duke Today

The herculean efforts to re-start Duke's campus and medical school research laboratories are nearly complete.

Thousands of lab workers, kept from their benches and equipment for months by the COVID pandemic, are shaking off the cobwebs and getting back to work generating data. But with some significant differences.

"I think it'll take me a few weeks to actually get back into the rhythm," said Tatiana Segura, a professor of biomedical engineering who has a large team in two laboratory spaces in Fitzpatrick-CIEMAS.

Segura had asked all of her trainees to review their lab protocols and have a detailed plan for what they should be doing in their newly limited lab hours. But, she notes, "It takes time when you start out, like if you are trying to cook or do something you haven't done for a long time, it still takes you a while to remember how to do it."

Each of the reopened labs has been left to decide the finer details about spacing and timing for itself, in a move campus leadership has called "states' rights." For many labs, that means coordinating through instant messages on Slack and a shared calendar in the cloud. Smaller rooms and shared equipment pose a particular scheduling challenge because of personal spacing requirements.

In all cases, the new safety rules mean fewer people in the lab and a highly structured end to the old free-wheeling, all-hours culture of laboratory work. Now lab workers start each day by recording their temperature and filling out a symptom survey. Their badges give them entry to buildings and elevators that used to be wide open. And they wear masks at all times. When it's time to leave, they have to leave.

"This whole thing has been pretty challenging for us because we're used to working in teams," said research scientist Stephen Crain, who typically works on Jungsang Kim's quantum computing hardware with two grad students in a fourth-floor lab of the Chesterfield Building downtown. "It's normally a pretty collaborative effort where, if we get stuck, we kind of work together. But now we're one person at a time. It's just hard to be as productive."

As an assistant professor of chemistry and a mother, Amanda Hargrove said she talks about time management with her trainees all the time. Now they're living it. "If you're working between daycare hours, that eight hours is crazy-efficient," she said. "So I'm a little interested to see how much more efficient people become in the four hours that they can be there."

Hargrove has split her lab into three four-hour shifts from 8 a.m. to 8 p.m., with the mandated hour for cleaning and leaving the lab between shifts. It's all coordinated through an online calendar and Slack messaging so trainees can arrange when and how they want to work, including taking two four-hour shifts if need be.

Third-year graduate student Martina Zafferani of Hargrove's lab in the French Family Science Center prefers to work 10 or 12 hours at a time, much of it standing. Now she's taking two four-hour shifts, with an hour between to go outside, have lunch and get some Vitamin D.

Zafferani works in a lab that typically might hold up to 15 workers, from undergraduates to post-docs, flowing in and out during the day. On one day in early June, it was just her and fifth-year grad student Sarah Wicks, wearing masks with their heads down, trying to restart their experiments on making molecules to control RNA and getting as much out of their limited time as they can.

"It hasn't been completely lonesome," Wicks said. "But I do miss the chatter of other group members being here. We usually have about nine lab members working, we have music playing, equipment humming, and to have such silence now does make it lonelier than it was."

Still, it's great to be back, said first-year master's student Ameya Chaudhari as he returned to work on bio-compatible polymers in Segura's lab. "I had been feeling lethargic at home, but now I'm energized being back in the lab," Chaudhari said. He looked long overdue for a haircut but was wearing one of the lab's sharp, Duke-blue lab coats.

Duke labs that were working on questions related to COVID-19 stayed on duty throughout the shutdown, of course. And others, including Hargrove and dermatology associate professor Amanda MacLeod, are shifting some of their attention to COVID-adjacent questions as they come back.

"It's weird being careful with everyone around," says MSRB-III postdoctoral researcher Paula Mariottoni of MacLeod's group. She did a sort of pausing dance to move from her bench to a tissue culture room as a colleague walked past. "Even if it's a slow pace, moving forward is good," Mariottoni said.

Red tape Xs mark the floors about eight feet apart in each bay of MacLeod's MSRB-III laboratory space, indicating where people should stand to communicate or pass. The elevator is designated for one rider at a time, up only; exiting is by a designated stairway down.

Third-year medical student Vivian Lei was working in the next bay over in the MacLeod group. It was designed for four but occupied by just her. "My time during lockdown was reexamining data," she said from behind a white cloth mask. "Weirdly enough, this was one of the most productive times in our lab."

The timing of the shutdown in late March hit MacLeod's group perfectly, in fact. They had just completed a move from Duke South to MSRB-III, and anticipating the move would be disruptive, the group did a lot of experiments in January and February to create a hoard of data that kept them busy.

"Our people had run a lot of the wet-lab experiments up front, and now all the analysis had to be done," said MacLeod, an associate professor of dermatology who is studying how the skin helps combat pathogens, including viruses, as the body's first line of defense. During the shutdown, the lab submitted two manuscripts, three grant applications and a few fellowship applications.

Environmental toxicologist Richard DiGiulio's lab in the Levine Science Research Center includes colorful tanks of living fish, so somebody was coming in to feed them the entire time, even though the science stopped.

"We didn't see anybody else, except for the occasional custodial worker, for 2 1/2 months," said DiGiulio lab manager Melissa Chernick. "We just never intersected with anybody, which I thought was good because it made me feel more comfortable that nobody was in the building - it was less possible contacts."

Now DiGiulio, the Sally Kleberg Distinguished Professor in the Nicholas School of the Environment, is waiting to hear about safety rules for field work. The lab's supply of killifish, collected from a Superfund site on the Elizabeth River in Virginia, is beginning to wane. "To get more from the river, we all pack in a van for four hours up there and four back," Chernick said.

The lab buildings feel different without students studying or hanging out in public areas and hallways. The coffee shops are shuttered. There are no voices, no footfalls, just whooshing air.

"It was very creepy to be alone in French," said Zafferani, who was one of the first people back in the building. Still, it was better than working at home, she said.

"I think the beginning is going to be a little bit slower maybe than everyone hopes," MacLeod said. "It's challenging, but it's doable."

"And even though you have this mask on," she said, "you can still talk and be friendly and kind to everyone."
