Quantzig Launches New Article Series on COVID-19’s Impact – ‘Understanding Why Online Food Delivery Companies Are Betting Big on AI and Machine…

LONDON--(BUSINESS WIRE)--As part of its new article series that analyzes COVID-19's impact across industries, Quantzig, a premier analytics services provider, today announced the completion of its recent article, "Why Online Food Delivery Companies Are Betting Big on AI and Machine Learning."

The article also offers comprehensive insights on:

Human activity has slowed down due to the pandemic, but its impact on business operations has not. We offer transformative analytics solutions that can help you explore new opportunities and ensure business stability to thrive in the post-crisis world. Request a FREE proposal to gauge COVID-19's impact on your business.

"With machine learning, you don't need to babysit your project every step of the way. Since it means giving machines the ability to learn, it lets them make predictions and also improve the algorithms on their own," says a machine learning expert at Quantzig.

After several years of being confined to technology labs and the pages of sci-fi books, artificial intelligence (AI) and big data have today become the dominant focal point for businesses across industries. Barely a day passes without new magazine and newspaper articles, blog entries, and tweets about advancements in the field of AI and machine learning. Having said that, it's not very surprising that AI and machine learning in the food and beverage industry have played a crucial role in the rapid developments that have taken place over the past few years.

Talk to us to learn how our advanced analytics capabilities combined with proprietary algorithms can support your business initiatives and help you thrive in today's competitive environment.

Benefits of AI and Machine Learning

Want comprehensive solution insights from an expert who decodes data? You're just a click away! Request a FREE demo to discover how our seasoned analytics experts can help you.

As cognitive technologies transform the way people use online services to order food, it becomes imperative for online food delivery companies to comprehend customer needs, identify the dents, and bridge gaps by offering what has been missing in the online food delivery business. The combination of big data, AI, and machine learning is driving real innovation in the food and beverage industry. Such technologies have been proven to deliver fact-based results to online food delivery companies that possess the data and the required analytics expertise.

At Quantzig, we analyze the current business scenario using real-time dashboards to help global enterprises operate more efficiently. Our ability to help performance-driven organizations realize their strategic and operational goals within a short span using data-driven insights has helped us gain a leading edge in the analytics industry. To help businesses ensure business continuity amid the crisis, we've curated a portfolio of advanced COVID-19 impact analytics solutions that not just focus on improving profitability but also help enhance stakeholder value, boost customer satisfaction, and achieve financial objectives.

Request more information to know more about our analytics capabilities and solution offerings.

About Quantzig

Quantzig is a global analytics and advisory firm with offices in the US, UK, Canada, China, and India. For more than 15 years, we have assisted our clients across the globe with end-to-end data modeling capabilities to leverage analytics for prudent decision making. Today, our firm consists of 120+ clients, including 45 Fortune 500 companies. For more information on our engagement policies and pricing plans, visit: https://www.quantzig.com/request-for-proposal


Eta Compute Partners with Edge Impulse to Accelerate the Development and Deployment of Machine Learning at the Edge – Yahoo Finance

The partnership will transform the development process from concept to production for embedded machine learning in micropower devices.

Eta Compute and Edge Impulse announce that they are partnering to accelerate the development and deployment of machine learning using Eta Compute's revolutionary ECM3532, the world's lowest power Neural Sensor Processor, and Edge Impulse, the leading online TinyML platform. The partnership will speed the time-to-market for machine learning in billions of IoT consumer and industrial products where battery capacity has been a roadblock.

"Collaborating with Edge Impulse ensures our growing ECM3532 developer community is fully equipped to bring innovative designs in digital health, smart city, consumer, and industrial applications to market quickly and efficiently," said Ted Tewksbury, CEO of Eta Compute. "We believe that our partnership will help companies debut their ground-breaking solutions later in 2020."

Eta Compute's ECM3532, an ultra-low power Neural Sensor Processor SoC that enables machine learning at the extreme edge, and its ECM3532 EVB evaluation board are now supported by Edge Impulse's end-to-end ML development and MLOps platform. Developers can register for free to gain access to advanced Eta Compute machine learning algorithms and development workflows through the Edge Impulse portal.

"Machine learning at the very edge has the potential to enable the use of the 99% of sensor data that is lost today because of cost, bandwidth, or power constraints," said Zach Shelby, CEO and co-founder of Edge Impulse. "Our online SaaS platform and Eta Compute's innovative processor are the ideal combination for development teams seeking to accurately collect data, create meaningful data sets, spin models, and generate efficient ML at a rapidly accelerated pace."

"Trillions of devices are expected to come online by 2035 and many will require some level of machine learning at the edge," said Dennis Laudick, vice president of marketing, Machine Learning Group, Arm. "The combination of Eta Compute's TinyML hardware based on Arm Cortex and CMSIS-NN technology, and the SaaS TinyML solutions from Edge Impulse, provides developers a complete solution for bringing power efficient, edge, or endpoint ML products to market at the fast pace required for this next era of compute."

For more information or to begin developing, visit EtaCompute.com or EdgeImpulse.com.

About Eta Compute

Eta Compute was founded in 2015 with the vision that the proliferation of intelligent devices at the network edge will make daily life safer, healthier, more comfortable, and more convenient without sacrificing privacy and security. The company delivers the world's lowest power embedded platform using patented Continuous Voltage Frequency Scaling to deliver unparalleled machine intelligence to energy-constrained products and remove battery capacity as a barrier in consumer and industrial applications. In 2018, the company received the Design Innovation of the Year and Best Use of Advanced Technologies awards at Arm TechCon. For more information visit EtaCompute.com or contact the company via email at info@etacompute.com.

About Edge Impulse

Edge Impulse is on a mission to enable developers to create the next generation of intelligent devices using embedded machine learning in industrial, enterprise, and human-centric applications. Machine learning at the very edge will enable valuable use of the 99% of sensor data that is discarded today due to cost, bandwidth, or power constraints. The founders believe that machine learning can enable positive change in society and are dedicated to supporting applications for good. Sign up for free at edgeimpulse.com.

View source version on businesswire.com: https://www.businesswire.com/news/home/20200512005318/en/

Contacts

Media Contacts: Eta Compute: Phyllis Grabot, 805.341.7269 / phyllis@corridorcomms.com; Bonnie Quintanilla, 818.681.5777 / bonnie@corridorcomms.com

Edge Impulse: Zach Shelby, 408.203.9434 / hello@edgeimpulse.com


Another deep learning processor appears in the ring: Grayskull from Tenstorrent – Electronics Weekly

Tenstorrent describes the technology behind the processor as "the first conditional execution architecture for artificial intelligence facilitating scalable deep learning." The company has taken an approach that dynamically eliminates unnecessary computation, thus breaking the direct link between model size growth and compute/memory bandwidth requirements.

Conditional computation?

"Conditional computation enables adaptation of both inference and training of a model to the exact input that was presented, like adjusting NLP model computations to the exact length of the text presented, and dynamically pruning portions of the model based on input characteristics," is how the company describes it.
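The idea is easier to see in a toy sketch (our illustration, not Tenstorrent's architecture): a fixed-shape model pays the worst-case cost on every input, while a conditional one computes only over what is actually present.

```python
# Illustrative sketch of conditional execution, not Tenstorrent's implementation.

def process_tokens(tokens, max_len=128):
    """Fixed-shape model: pads every input to max_len and computes over all of it."""
    padded = tokens + [0] * (max_len - len(tokens))
    return sum(t * t for t in padded), max_len  # work proportional to max_len

def process_tokens_conditional(tokens):
    """Conditional execution: compute only over the tokens actually present,
    and skip (prune) zero-valued entries entirely (dynamic sparsity)."""
    live = [t for t in tokens if t != 0]
    return sum(t * t for t in live), len(live)  # work proportional to the input

result_fixed, work_fixed = process_tokens([3, 0, 4])
result_cond, work_cond = process_tokens_conditional([3, 0, 4])
assert result_fixed == result_cond == 25
print(work_fixed, work_cond)  # 128 vs 2: same answer, far less computation
```

The payoff is exactly the decoupling the company claims: the cost of a forward pass tracks the input's effective size rather than the model's worst-case shape.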

Grayskull integrates 120 Tensix proprietary cores with 120Mbyte of local SRAM. It has eight channels of LPDDR4 for supporting up to 16Gbyte of external DRAM and 16 lanes of PCI-E Gen 4.

The Tensix cores have a packet processor, a programmable SIMD and maths computation block, five single-issue RISC cores, and 1Mbyte of RAM.

Associated software model

The array of Tensix cores is stitched together with a double 2D torus network-on-chip, which facilitates multi-cast flexibility, along with minimal software burden for scheduling coarse-grain data transfers, according to the company. At the chip thermal design power required for a 75W bus-powered PCIe card, Grayskull achieves 368TOPS and up to 23,345 sentences/second using BERT-Base on the SQuAD 1.1 dataset.

According to Tenstorrent:

For artificial intelligence to reach the next level, machines need to go beyond pattern recognition and into cause-and-effect learning. Such machine learning models require computing infrastructure that allows them to continue growing by orders of magnitude for years to come. Machine learning computers can achieve this goal in two ways: by weakening the dependence between model size and raw compute power, through features like conditional execution and dynamic sparsity handling, and by facilitating compute scalability at hitherto unrivalled levels. Rapid changes in machine learning models further require flexibility and programmability.

Claimed Grayskull benchmarks

Grayskull is aimed at inferencing in data centres, public cloud servers, private cloud servers, on-premises servers, edge servers and automotive.

Samples are said to be with partners, with the processor ready for production this autumn.

The Tenstorrent website is here


Twitter adds former Google VP and A.I. guru Fei-Fei Li to board as it seeks to play catch up with Google and Facebook – CNBC

Twitter has appointed Stanford professor and former Google vice president Fei-Fei Li to its board as an independent director.

The social media platform said that Li's expertise in artificial intelligence (AI) will bring relevant perspectives to the board. Li's appointment may also help Twitter to attract top AI talent from other companies in Silicon Valley.

Li left her role as chief scientist of AI/ML (artificial intelligence/machine learning) at Google Cloud in October 2018 after being criticized for comments she made in relation to the controversial Project Maven initiative with the Pentagon, which saw Google AI used to identify drone targets from blurry drone video footage.

When details of the project emerged, Google employees objected, saying that they didn't want their AI technology used in military drones. Some quit in protest and around 4,000 staff signed a petition that called for "a clear policy stating that neither Google nor its contractors will ever build warfare technology."

While Li wasn't directly involved in the project, a leaked email suggested she was more concerned about what the public would make of Google's involvement in the project as opposed to the ethics of the project itself.

"This is red meat to the media to find all ways to damage Google," she wrote, according to a copy of the email obtained by the Intercept. "You probably heard Elon Musk and his comment about AI causing WW3."

"I don't know what would happen if the media starts picking up a theme that Google is secretly building AI weapons or AI technologies to enable weapons for the Defense industry. Google Cloud has been building our theme on Democratizing AI in 2017, and Diane [Greene, head of Google Cloud] and I have been talking about Humanistic AI for enterprise. I'd be super careful to protect these very positive images."

Up until that point, Li was seen very much as a rising star at Google. In the one year and 10 months she was there, she oversaw basic science AI research, all of Google Cloud's AI/ML products and engineering efforts, and a new Google AI lab in China.

While at Google she maintained strong links to Stanford and in March 2019 she launched the Stanford University Human-Centered AI Institute (HAI), which aims to advance AI research, education, policy and practice to benefit humanity.

"With unparalleled expertise in engineering, computer science and AI, Fei-Fei brings relevant perspectives to the board as Twitter continues to utilize technology to improve our service and achieve our long-term objectives," said Omid Kordestani, executive chairman of Twitter.

Twitter has been relatively slow off the mark in the AI race. It acquired British start-up Magic Pony Technologies in 2016 for up to $150 million as part of an effort to beef up its AI credentials, but its AI efforts remain fairly small compared to other firms. It doesn't have the same reputation as companies like Google and Facebook when it comes to AI and machine-learning breakthroughs.

Today the company uses an AI technique called deep learning to recommend tweets to its users and it also uses AI to identify racist content and hate speech, or content from extremist groups.

Competition for AI talent is fierce in Silicon Valley and Twitter will no doubt be hoping that Li can bring in some big names in the AI world given she is one of the most respected AI leaders in the industry.

"Twitter is an incredible example of how technology can connect the world in powerful ways and I am honored to join the board at such an important time in the company's history," said Li.

"AI and machine learning can have an enormous impact on technology and the people who use it. I look forward to leveraging my experience for Twitter as it harnesses this technology to benefit everyone who uses the service."


Five Strategies for Putting AI at the Center of Digital Transformation – Knowledge@Wharton

Across industries, companies are applying artificial intelligence to their businesses, with mixed results. What separates the AI projects that succeed from the ones that don't often has to do with the business strategies organizations follow when applying AI, writes Wharton professor of operations, information and decisions Kartik Hosanagar in this opinion piece. Hosanagar is faculty director of Wharton AI for Business, a new Analytics at Wharton initiative that will support students through research, curriculum, and experiential learning to investigate AI applications. He also designed and instructs Wharton Online's Artificial Intelligence for Business course.

While many people perceive artificial intelligence to be the technology of the future, AI is already here. Many companies across a range of industries have been applying AI to improve their businesses, from Spotify using machine learning for music recommendations to smart home devices like Google Home and Amazon Alexa. That said, there have also been some early failures, such as Microsoft's social-learning chatbot, Tay, which turned anti-social after interacting with hostile Twitter followers, and IBM Watson's inability to deliver results in personalized health care. What separates the AI projects that succeed from the ones that don't often has to do with the business strategies organizations follow when applying AI. The following strategies can help business leaders not only effectively apply AI in their organizations, but succeed in adapting it to innovate, compete and excel.

1. View AI as a tool, not a goal.

One pitfall companies might encounter in the process of starting new AI initiatives is that the concentrated focus and excitement around AI might lead to AI being viewed as a goal in and of itself. But executives should be cautious about developing a strategy specifically for AI, and instead focus on the role AI can play in supporting the broader strategy of the company. A recent report from MIT Sloan Management Review and Boston Consulting Group calls this working "backward from strategy, not forward from AI."

As such, instead of exhaustively looking for all the areas AI could fit in, a better approach would be for companies to analyze existing goals and challenges with a close eye for the problems that AI is uniquely equipped to solve. For example, machine learning algorithms bring distinct strengths in terms of their predictive power given high-quality training data. Companies can start by looking for existing challenges that could benefit from these strengths, as those areas are likely to be ones where applying AI is not only possible, but could actually disproportionately benefit the business.

The application of machine learning algorithms for credit card fraud detection is one example of where AI's particular strengths make it a very valuable tool in assisting with a longstanding problem. In the past, fraudulent transactions were generally only identified after the fact. However, AI allows banks to detect and block fraud in real time. Because banks already had large volumes of data on past fraudulent transactions and their characteristics, the raw material from which to train machine learning algorithms is readily available. Moreover, predicting whether particular transactions are fraudulent and blocking them in real time is precisely the type of repetitive task that an algorithm can do at a speed and scale that humans cannot match.
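As a minimal sketch of that setup, here is a from-scratch logistic-regression classifier trained on labeled past transactions; the features, data, and threshold are invented for illustration and bear no relation to any real bank's system.

```python
# Hedged sketch: fraud detection as supervised learning on labeled history.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(X, y, lr=0.5, epochs=2000):
    """Fit weights by stochastic gradient descent on the logistic loss."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

# Toy historical transactions: [normalized amount, foreign_merchant flag]
X = [[0.1, 0], [0.2, 0], [0.15, 0], [0.9, 1], [0.95, 1], [0.85, 1]]
y = [0, 0, 0, 1, 1, 1]  # 1 = known fraudulent

w, b = train(X, y)

def is_fraud(tx, threshold=0.5):
    """Score a new transaction in real time against the trained model."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, tx)) + b) >= threshold

assert not is_fraud([0.12, 0])  # small domestic charge: allowed
assert is_fraud([0.92, 1])      # large foreign charge: blocked
```

The point of the sketch is the shape of the pipeline, train once on labeled history, then score each incoming transaction cheaply, which is what makes real-time blocking feasible.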

2. Take a portfolio approach.

Over the long term, viewing AI as a tool and finding AI applications that are particularly well matched with business strategy will be most valuable. However, I wouldn't recommend that companies pool all their AI resources into a single, large, moonshot project when they are first getting started. Rather, I advocate taking a portfolio approach to AI projects that includes both quick wins and long-term projects. This approach will allow companies to gain experience with AI and build consensus internally, which can then support the success of larger, more strategic and transformative projects later down the line.

Specifically, quick wins are smaller projects that involve optimizing internal employee touch points. For example, companies might think about specific pain points that employees experience in their day-to-day work, and then brainstorm ways AI technologies could make some of these tasks faster or easier. Voice-based tools for scheduling or managing internal meetings or voice interfaces for search are some examples of applications for internal use. While these projects are unlikely to transform the business, they do serve the important purpose of exposing employees, some of whom may initially be skeptics, to the benefits of AI. These projects also provide companies with a low-risk opportunity to build skills in working with large volumes of data, which will be needed when tackling larger AI projects.

The second part of the portfolio approach, long-term projects, is what will be most impactful and where it is important to find areas that support the existing business strategy. Rather than looking for simple ways to optimize the employee experience, long-term projects should involve rethinking entire end-to-end processes and potentially even coming up with new visions for what otherwise standard customer experiences could look like. For example, a long-term project for a car insurance company could involve creating a fully automated claims process in which customers can photograph the damage of their car and use an app to settle their claims. Building systems like this that improve efficiency and create seamless new customer experiences requires technical skills and consensus on AI, which earlier quick wins will help to build.


3. Reskill and invest in your talent.

In addition to developing skills through quick wins, companies should take a structured approach to growing their talent base, with a focus on both reskilling internal employees in addition to hiring external experts. Focusing on growing the talent base is particularly important given that most engineers in a company would have been trained in computer science before the recent interest in machine learning. As such, the skills needed for embarking on AI projects are unlikely to exist in sufficient numbers in most companies, making reskilling particularly important.

In its early days of working with AI, Google launched an internal training program where employees were invited to spend six months working in a machine learning team with a mentor. At the end of this time, Google distributed these experts into product teams across the company in order to ensure that the entire organization could benefit from AI-related reskilling. There are also many new online courses that can economically reskill employees in AI.

The MIT Sloan Management Review-BCG report mentioned above also found that, in addition to developing talent in producing AI technologies, an equally important area is that of consuming AI technologies. Managers, in particular, need to have skills to consult AI tools and act on recommendations or insights from these tools. This is because AI systems are unlikely to automate entire processes from the get-go. Rather, AI is likely to be used in situations where humans remain in the loop. Managers will need basic statistical knowledge in order to understand the limitations and capabilities of modern machine learning and to decide when to lean on machine learning models.

4. Focus on the long term.

Given that AI is a new field, it is largely inevitable that companies will experience early failures. Early failures should not discourage companies from continuing to invest in AI. Rather, companies should be aware of, and resist, the tendency to retreat after an early failure.

Historically, many companies have stumbled in their early initiatives with new technologies, such as when working with the internet and with cloud and mobile computing. The companies that retreated, that stopped or scaled back their efforts after initial failures, tended to be in a worse position long term than those that persisted. I anticipate that a similar trend will occur with AI technologies. That is, many companies will fail in their early AI efforts, but AI itself is here to stay. The companies that persist and learn to use AI well will get ahead, while those that avoid AI after their early failures will end up lagging behind.


5. Address AI-specific risks and biases aggressively.

Companies should be aware of new risks that AI can pose and proactively manage these risks from the outset. Initiating AI projects without an awareness of these unique risks can lead to unintended negative impacts on society, as well as leave the organizations themselves susceptible to additional reputational, legal, and regulatory risks (as discussed in my book, A Human's Guide to Machine Intelligence: How Algorithms Are Shaping Our Lives and How We Can Stay in Control).

There have been many recent cases where AI technologies have discriminated against historically disadvantaged groups. For example, mortgage algorithms have been shown to have a racial bias, and an algorithm created by Amazon to assist with hiring was shown to have a gender bias, though this was actually caught by Amazon itself prior to the algorithm being used. This type of bias in algorithms is thought to occur because, like humans, algorithms are products of both nature and nurture. While nature is the logic of the algorithm itself, nurture is the data that algorithms are trained on. These datasets are usually compilations of human behaviors, oftentimes specific choices or judgments that human decision-makers have previously made on the topic in question, such as which employees to hire or which loan applications to approve. The datasets are therefore made up of biased decisions from humans themselves, which the algorithms learn from and incorporate. As such, it is important to note that algorithms are generally not creating wholly new biases, but rather learning from the historical biases of humans and exacerbating them by applying them on a much larger, and therefore even more damaging, scale.

AI shouldn't be abandoned given that the alternative, human decision-makers, are biased too. Rather, companies should be aware of the kinds of social harms that can result from AI technologies and rigorously audit their algorithms to catch biases before they negatively impact society. Proceeding with AI initiatives without an awareness of these social risks can lead to reputational, legal, and regulatory risks for firms, and most importantly can have extremely damaging impacts on society.
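One simple audit check can be sketched in a few lines. This example uses the "four-fifths rule" heuristic from US employment-selection guidelines as a stand-in for the rigorous audits described above; the groups and decisions are entirely synthetic.

```python
# Hedged sketch of a basic bias audit: compare selection rates across groups.

def selection_rate(decisions):
    """Fraction of positive (e.g. hired/approved) outcomes in a group."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(decisions_a, decisions_b):
    """Ratio of the lower group's selection rate to the higher group's.
    By the four-fifths rule, values below 0.8 flag possible adverse impact."""
    ra, rb = selection_rate(decisions_a), selection_rate(decisions_b)
    return min(ra, rb) / max(ra, rb)

# 1 = approved, 0 = rejected, per applicant in each (synthetic) group
group_a = [1, 1, 1, 0, 1, 1, 0, 1]  # 75% selected
group_b = [1, 0, 0, 1, 0, 0, 0, 1]  # 37.5% selected

ratio = disparate_impact_ratio(group_a, group_b)
print(round(ratio, 2))  # 0.5, well below 0.8, so this model warrants review
```

A real audit would go much further (intersectional groups, error-rate parity, statistical significance), but even this check catches the kind of disparity described in the hiring and mortgage examples above.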

See the article here:
Five Strategies for Putting AI at the Center of Digital Transformation - Knowledge@Wharton

Turns out converting files into images is a highly effective way to detect malware – PC Gamer UK

A branch of artificial intelligence called machine learning is all around us. It's employed by Facebook to help curate content (and target us with ads), Google uses it to filter millions of spam messages each day, and it's part of what enabled the OpenAI bot to beat the reigning Dota 2 champions last year in two out of three matches. There are seemingly endless uses. Adding one more to the pile, Microsoft and Intel have come up with a clever machine learning framework that is surprisingly accurate at detecting malware through a grayscale image conversion process.

Microsoft detailed the technology in a blog post (via ZDNet), which it calls static malware-as-image network analysis, or STAMINA. It consists of a three-step process. In simple terms, the machine learning project starts out by taking binary files and converting them into two-dimensional images.
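That first step can be sketched roughly as follows; the fixed image width used here is our own simplification, since the article does not give Microsoft's exact width and resizing rules.

```python
# Rough sketch of STAMINA step 1: treat each byte of a binary as a grayscale
# pixel (0-255) and reshape the byte stream into a two-dimensional image.

def bytes_to_image(data: bytes, width: int = 16):
    """Return a list of pixel rows; the final row is zero-padded."""
    pixels = list(data)  # each byte is already a 0-255 intensity value
    rows = []
    for i in range(0, len(pixels), width):
        row = pixels[i:i + width]
        row += [0] * (width - len(row))  # pad the last row to full width
        rows.append(row)
    return rows

# 48 bytes of a toy "executable" become a 3x16 grayscale image
image = bytes_to_image(b"MZ\x90\x00" * 12, width=16)
print(len(image), len(image[0]))  # 3 16
assert image[0][0] == ord("M") == 77
```

The appeal of the representation is that structurally similar binaries produce visually similar textures, which is what lets an image classifier take over in the next step.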

The images are then fed into the framework. This second step is a process called transfer learning, which essentially helps the algorithm build upon its existing knowledge, while comparing images against its existing training.

Finally, the results are analyzed to see how effective the process was at detecting malware samples, how many it missed, and how many it incorrectly classified as malware (known as a false positive).
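This evaluation step amounts to standard confusion-matrix arithmetic; the sketch below uses invented labels, not the study's data.

```python
# Detection rate (recall) = share of malware caught; false positive rate =
# share of benign files wrongly flagged as malware.

def evaluate(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    recall = tp / (tp + fn)  # malware detected
    fpr = fp / (fp + tn)     # benign files misflagged
    return recall, fpr

y_true = [1, 1, 1, 1, 0, 0, 0, 0]  # 1 = malware, 0 = benign
y_pred = [1, 1, 1, 0, 0, 1, 0, 0]  # one miss, one false positive
recall, fpr = evaluate(y_true, y_pred)
print(recall, fpr)  # 0.75 0.25
```

The figures Microsoft reports for STAMINA (99.07 percent detection, 2.58 percent false positives) are exactly these two quantities computed over the held-out test files.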

As part of the study, Microsoft and Intel sampled a dataset of 2.2 million files. Out of those, 60 percent were known malware files that were used to train the algorithm, and 20 percent were used to validate it. The remaining 20 percent were used to test the actual effectiveness of the scheme.

Applying STAMINA to the files, Microsoft says the method accurately detected and classified 99.07 percent of the malware files, with a 2.58 percent false positive rate. Those are stellar results.

"The results certainly encourage the use of deep transfer learning for the purpose of malware classification. It helps accelerate training by bypassing the search for optimal hyperparameters and architecture searches, saving time and compute resources in the process," Microsoft says.

STAMINA is not without its limitations. Part of the process entails resizing images to make the number of pixels manageable for an application like this. However, for deeper analysis and bigger size applications, Microsoft says the method "becomes less effective due to limitations in converting billions of pixels into JPEG images and then resizing them."

In other words, STAMINA works great for testing files in a lab, but requires some fine tuning before it could feasibly be employed in greater capacity. This probably means Windows Defender won't benefit from STAMINA right away, but perhaps sometime down the line it will.


IonQ CEO Peter Chapman on how quantum computing will change the future of AI – VentureBeat

Businesses eager to embrace cutting-edge technology are exploring quantum computing, which depends on qubits to perform computations that would be much more difficult, or simply not feasible, on classical computers. The ultimate goals are quantum advantage, the inflection point when quantum computers begin to solve useful problems. While that is a long way off (if it can even be achieved), the potential is massive. Applications include everything from cryptography and optimization to machine learning and materials science.

As quantum computing startup IonQ has described it, quantum computing is "a marathon, not a sprint." We had the pleasure of interviewing IonQ CEO Peter Chapman last month to discuss a variety of topics. Among other questions, we asked Chapman about quantum computing's future impact on AI and ML.

The conversation quickly turned to Strong AI, or Artificial General Intelligence (AGI), which does not yet exist. Strong AI is the idea that a machine could one day understand or learn any intellectual task that a human can.

"AI in the Strong AI sense, that I have more of an opinion [about], just because I have more experience in that personally," Chapman told VentureBeat. "And there was a really interesting paper that just recently came out talking about how to use a quantum computer to infer the meaning of words in NLP. And I do think that those kinds of things for Strong AI look quite promising. It's actually one of the reasons I joined IonQ. It's because I think that does have some sort of application."

In a follow-up email, Chapman expanded on his thoughts. "For decades, it was believed that the brain's computational capacity lay in the neuron as a minimal unit," he wrote. "Early efforts by many tried to find a solution using artificial neurons linked together in artificial neural networks with very limited success. This approach was fueled by the thought that the brain is an electrical computer, similar to a classical computer."

"However, since then, I believe we now know the brain is not an electrical computer, but an electrochemical one," he added. "Sadly, today's computers do not have the processing power to be able to simulate the chemical interactions across discrete parts of the neuron, such as the dendrites, the axon, and the synapse. And even with Moore's law, they won't next year or even after a million years."

Chapman then quoted Richard Feynman, who famously said: "Nature isn't classical, dammit, and if you want to make a simulation of nature, you'd better make it quantum mechanical. And by golly, it's a wonderful problem, because it doesn't look so easy."

"Similarly, it's likely Strong AI isn't classical; it's quantum mechanical as well," Chapman said.

One of IonQs competitors, D-Wave, argues that quantum computing and machine learning are extremely well matched. Chapman is still on the fence.

"I haven't spent enough time to really understand it," he admitted. "There clearly [are] a lot of people who think that ML and quantum have an overlap. Certainly, if you think of 85% of all ML produces a decision tree, and the depth of that decision tree could easily be optimized with a quantum computer. Clearly, there [are] lots of people that think that generation of the decision tree could be optimized with a quantum computer. Honestly, I don't know if that's the case or not. I think it's still a little early for machine learning, but there clearly [are] so many people that are working on it. It's hard to imagine it doesn't have [an] application."

Chapman continued in a later email: "ML has intimate ties to optimization: many learning problems are formulated as minimization of some loss function on a training set of examples. Generally, Universal Quantum Computers excel at these kinds of problems."
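Chapman's framing of learning as "minimization of some loss function on a training set" can be illustrated with a toy example. The sketch below (entirely illustrative, not IonQ code) fits a line to data by gradient descent on a mean-squared-error loss:

```python
# Illustrative sketch of "learning as loss minimization": fit y = w*x + b
# by gradient descent on the mean squared error over a training set.

def fit_line(xs, ys, lr=0.01, steps=2000):
    """Minimize L = (1/n) * sum((w*x + b - y)^2) over w and b."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # Partial derivatives of the loss with respect to w and b
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]          # training set drawn from y = 2x + 1
w, b = fit_line(xs, ys)
print(round(w, 2), round(b, 2))  # → 2.0 1.0
```

The quantum-computing claim in the quote is that machines of this kind could accelerate exactly this minimization step for much larger models.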

He listed three improvements in ML that quantum computing will likely allow:

Whether Strong AI or ML, IonQ isn't particularly interested in either. The company leaves that to its customers and future partners.

"There's so much to be done in quantum," Chapman said. "From education at one end, all the way to the quantum computer itself. I think some of our competitors have taken on lots of the entire problem set. We at IonQ are just focused on producing the world's best quantum computer for them. We think that's a large enough task for a little company like us to handle."

"So, for the moment, we're kind of happy to let everyone else work on different problems," he added. "We just don't have extra bandwidth or resources to put into working on machine learning algorithms. And luckily, there [are] lots of other companies that think that there [are] applications there. We'll partner with them in the sense that we'll provide the hardware that their algorithms will run on. But we're not in the ML business, per se."

Link:
IonQ CEO Peter Chapman on how quantum computing will change the future of AI - VentureBeat

VTT to acquire Finland’s first quantum computer seeking to bolster Finland’s and Europe’s competitiveness – Quantaneo, the Quantum Computing Source

Quantum technology will revolutionise many industrial sectors, and will already begin spawning new, nationally significant business and research opportunities over the next few years. Advancements in quantum technology and, in particular, the technological leap afforded by quantum computers (the "quantum leap") will enable unprecedented computing power and the ability to solve problems that are impossible for today's supercomputers.

Building this quantum computer will provide Finland with an exceptional level of capabilities in both research and technology, and will safeguard Finland's position at the forefront of new technology. The goal is to create a unique ecosystem for the development and application of quantum technology in Finland, in collaboration with companies and universities. VTT hopes to partner with progressive Finnish companies from a variety of sectors during the various phases of implementation and application.

The development and construction of Finland's quantum computer will be carried out as an innovation partnership that VTT will be opening up for international tender. The project will run for several years, and its total cost is estimated at about EUR 20–25 million.

The project will progress in stages. The first phase will last for about a year and aims to get a minimum five-qubit quantum computer in working order. However, the ultimate goal is a considerably more powerful machine with a larger number of qubits.

"In the future, we'll encounter challenges that cannot be met using current methods. Quantum computing will play an important role in solving these kinds of problems. For example, the quantum computers of the future will be able to accurately model viruses and pharmaceuticals, or design new materials in a way that is impossible with traditional methods," says Antti Vasara, CEO of VTT.

Through this project, VTT is seeking to be a world leader in quantum technology and its application.

"The pandemic has shocked not only Finland's economy but also the entire world economy, and it will take us some time to recover from the consequences. To safeguard economic recovery and future competitiveness, it's now even more important than ever to make investments in innovation and future technologies that will create demand for Finnish companies' products and services," says Vasara.

VTT has lengthy experience and top expertise in both quantum technology research and related fields of science and technology, such as superconductive circuits and cryogenics, microelectronics, and photonics. In Otaniemi, VTT and Aalto University jointly run Micronova, a world-class research infrastructure that enables experimental research and development in quantum technologies. This infrastructure will be further developed to meet the requirements of quantum technologies. Micronova's cleanrooms are already equipped to manufacture components and products based on quantum technologies.

More:
VTT to acquire Finland's first quantum computer seeking to bolster Finland's and Europe's competitiveness - Quantaneo, the Quantum Computing Source

2020 Innovations in Cell Rejuvenation, COVID-19 Diagnostic Kits & Vaccines, Biobased Plastics, Renewable Energy, and Quantum Computing -…

DUBLIN--(BUSINESS WIRE)--The "2020 Innovations in Cell Rejuvenation, COVID-19 Diagnostic Kits and Vaccines, Biobased Plastics, Renewable Energy, and Quantum Computing" report has been added to ResearchAndMarkets.com's offering.

This edition of the Inside R&D Technology Opportunity Engine (TOE) features trends and innovations based on the development of lead vaccines and diagnostic test kits to combat the COVID-19 pandemic outbreak. The TOE also provides intelligence on the use of novel innovations that help in developing numerous antibodies for the COVID-19 virus, and features innovations in developing anti-aging cells and the use of artificial intelligence and digital platforms for pandemic contact tracing.

The TOE additionally provides insights on using modular thermal energy storage devices and combined heat and power solutions to enhance the positive impact on the environment. Furthermore, the TOE covers enhancing capabilities of robots, efficient hydrogen production, and the use of composites in marine, oil & gas industries. It also focuses on innovations related to the use of carbon-silicon composites, biodegradable insulation material, and cellulose-based polyamides.

Key Topics Covered

For more information about this report visit https://www.researchandmarkets.com/r/v1gekk

View post:
2020 Innovations in Cell Rejuvenation, COVID-19 Diagnostic Kits & Vaccines, Biobased Plastics, Renewable Energy, and Quantum Computing -...

QUANTUM COMPUTING INC. Management’s Discussion and Analysis of Financial Condition and Results of Operations, (form 10-Q) – marketscreener.com

This quarterly report on Form 10-Q and other reports filed by Quantum Computing, Inc. (the "Company", "we", "our", and "us") from time to time with the U.S. Securities and Exchange Commission (the "SEC") contain or may contain forward-looking statements and information that are based upon beliefs of, and information currently available to, the Company's management, as well as estimates and assumptions made by the Company's management. Readers are cautioned not to place undue reliance on these forward-looking statements, which are only predictions and speak only as of the date hereof. When used in the filings, the words "anticipate," "believe," "estimate," "expect," "future," "intend," "plan," or the negative of these terms and similar expressions as they relate to the Company or the Company's management identify forward-looking statements. Such statements reflect the current view of the Company with respect to future events and are subject to risks, uncertainties, assumptions, and other factors, including the risks contained in the "Risk Factors" section of the Company's Annual Report on Form 10-K for the fiscal year ended December 31, 2019, relating to the Company's industry, the Company's operations and results of operations, and any businesses that the Company may acquire. Should one or more of these risks or uncertainties materialize, or should the underlying assumptions prove incorrect, actual results may differ significantly from those anticipated, believed, estimated, expected, intended, or planned. Although the Company believes that the expectations reflected in the forward-looking statements are reasonable, the Company cannot guarantee future results, levels of activity, performance, or achievements.

Except as required by applicable law, including the securities laws of the United States, the Company does not intend to update any of the forward-looking statements to conform these statements to actual results.

Our financial statements are prepared in accordance with accounting principles generally accepted in the United States ("GAAP"). These accounting principles require us to make certain estimates, judgments and assumptions. We believe that the estimates, judgments and assumptions upon which we rely are reasonable based upon information available to us at the time that these estimates, judgments and assumptions are made. These estimates, judgments and assumptions can affect the reported amounts of assets and liabilities as of the date of the financial statements as well as the reported amounts of revenues and expenses during the periods presented. Our financial statements would be affected to the extent there are material differences between these estimates and actual results. In many cases, the accounting treatment of a particular transaction is specifically dictated by GAAP and does not require management's judgment in its application. There are also areas in which management's judgment in selecting any available alternative would not produce a materially different result. The following discussion should be read in conjunction with our financial statements and notes thereto appearing elsewhere in this report.

Overview

At the present time, we are a development stage company with limited operations. The Company is currently developing "quantum ready" software applications and solutions for companies that want to leverage the promise of quantum computing. We believe the quantum computer holds the potential to disrupt several global industries. Independent of when quantum computing delivers compelling performance advantage over classical computing, the software tools and applications to accelerate real-world problems must be developed to deliver quantum computing's full promise.

We specialize in quantum computer-ready software applications, analytics, and tools, with a mission to deliver differentiated performance using non-quantum processors in the near term. We are leveraging our collective expertise in finance, computing, mathematics and physics to develop a suite of quantum software applications that may enable global industries to utilize quantum computers, quantum annealers and digital simulators to improve their processes, profitability, and security. We primarily focus on the quadratic unconstrained binary optimization (QUBO) formulation, which is equivalent to the Ising model implemented by hardware annealers, both non-quantum from Fujitsu and others and quantum from D-Wave Systems, and is also mappable to gate-model quantum processors. We are building a software stack that maps and optimizes problems in the QUBO form and then solves them powerfully on cloud-based processors. Our software is designed to be capable of running on both classical computers and on annealers such as D-Wave's quantum processor. We are also building applications and analytics that deliver the power of our software stack to high-value discrete optimization problems posed by financial, bio/pharma, and cybersecurity analysts. The advantages our software delivers can be faster time-to-solution to the same results, more-optimal solutions, or multiple solutions.
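The QUBO form the filing describes is simply the minimization of a quadratic function over binary variables. The toy sketch below (matrix values are made up for illustration; real annealers solve far larger instances heuristically) brute-forces a tiny instance:

```python
# Minimal sketch of the QUBO formulation: find the binary vector x that
# minimizes sum_ij Q[i][j] * x[i] * x[j]. Brute force is only feasible for
# toy sizes; annealers and quantum hardware target much larger problems.
from itertools import product

def solve_qubo(Q):
    """Exhaustively search all 2^n binary assignments for the minimum energy."""
    n = len(Q)
    best_x, best_val = None, float("inf")
    for bits in product([0, 1], repeat=n):
        val = sum(Q[i][j] * bits[i] * bits[j]
                  for i in range(n) for j in range(n))
        if val < best_val:
            best_x, best_val = bits, val
    return best_x, best_val

# Hypothetical instance: diagonal entries are per-variable costs,
# off-diagonal entries are pairwise couplings (penalties when both are 1).
Q = [
    [-1,  2,  0],
    [ 0, -1,  2],
    [ 0,  0, -1],
]
x, energy = solve_qubo(Q)
print(x, energy)  # → (1, 0, 1) -2
```

Mapping a business problem into a matrix like Q, and handing the search to annealing or gate-model hardware, is the "software stack" role the company describes.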

Products and Products in Development

The Company is currently working on software products to address community detection (analysis for pharmaceutical applications and epidemiology), optimization of job shop scheduling, logistics, and dynamic route optimization for transportation systems. The Company is continuing to seek out difficult problems for which our technology may provide improvement over existing solutions.

We are continuing to develop software to address two classes of financial optimization problems: asset allocation and yield curve trades. For asset allocation, our target clients are the asset allocation departments of large funds, who we envision using our application to improve their allocation of capital into various asset classes.

Three Months Ended March 31, 2020 vs. March 31, 2019

Liquidity and Capital Resources

The following table summarizes total current assets, liabilities and working capital at March 31, 2020, compared to December 31, 2019:

Off Balance Sheet Arrangements

Critical Accounting Policies and Estimates

We have identified the accounting policies below as critical to our business operations and the understanding of our results of operations.

Lease expense for operating leases consists of the lease payments plus any initial direct costs, primarily brokerage commissions, and is recognized on a straight-line basis over the lease term.
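The straight-line policy above amounts to simple arithmetic: total payments plus initial direct costs, spread evenly over the term. A sketch with hypothetical numbers (none of these figures come from the filing):

```python
# Illustrative only: straight-line recognition spreads total lease cost
# evenly over the lease term, regardless of when payments step up.

def straight_line_lease_expense(payments, initial_direct_costs, term_months):
    """Monthly expense = (sum of all payments + initial direct costs) / term."""
    return (sum(payments) + initial_direct_costs) / term_months

# Hypothetical 36-month lease: rent steps up each year, plus a $3,600
# brokerage commission treated as an initial direct cost.
payments = [1000] * 12 + [1100] * 12 + [1200] * 12
monthly = straight_line_lease_expense(payments, 3600, 36)
print(monthly)  # → 1200.0
```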

The Company's policy is to present bank balances under cash and cash equivalents, which, at times, may exceed federally insured limits. The Company has not experienced any losses in such accounts.

Net loss per share is based on the weighted average number of common shares and common share equivalents outstanding during the period.
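The weighted-average convention weights each share count by the fraction of the period it was outstanding. A sketch with hypothetical figures (none taken from the filing):

```python
# Illustrative only: weighted-average shares outstanding for a quarter in
# which additional shares were issued partway through.
from datetime import date

def weighted_average_shares(events, period_start, period_end):
    """events: list of (effective_date, shares_outstanding_from_that_date),
    sorted by date, with the first entry at period_start."""
    total_days = (period_end - period_start).days
    avg = 0.0
    for i, (start, shares) in enumerate(events):
        end = events[i + 1][0] if i + 1 < len(events) else period_end
        avg += shares * (end - start).days / total_days
    return avg

# Hypothetical: 1,000,000 shares all quarter; 200,000 more issued mid-quarter.
events = [(date(2020, 1, 1), 1_000_000), (date(2020, 2, 15), 1_200_000)]
wavg = weighted_average_shares(events, date(2020, 1, 1), date(2020, 3, 31))
net_loss = -90_000
print(wavg, round(net_loss / wavg, 4))  # loss per share
```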

Edgar Online, source Glimpses

The rest is here:
QUANTUM COMPUTING INC. Management's Discussion and Analysis of Financial Condition and Results of Operations, (form 10-Q) - marketscreener.com