Quantum technology – Wikipedia

emerging technologies built on quantum mechanics

Quantum technology is an emerging field of physics and engineering, which relies on the principles of quantum physics.[1] Quantum computing, quantum sensors, quantum cryptography, quantum simulation, quantum metrology and quantum imaging are all examples of quantum technologies, where properties of quantum mechanics, especially quantum entanglement, quantum superposition and quantum tunnelling, are important.

Quantum secure communication refers to methods that are expected to be 'quantum safe' in the advent of quantum computing systems that could break current cryptography. One significant component of quantum secure communication systems is expected to be quantum key distribution, or 'QKD': a method of transmitting information using entangled light in a way that makes any interception of the transmission obvious to the user. Another technology in this field is the quantum random number generator used to protect data, which produces truly random numbers without relying on computing algorithms that merely imitate randomness.[2]
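
As a rough illustration of the ideas above, here is a minimal pure-Python sketch of BB84-style key sifting, the protocol family behind QKD, with an ordinary strong random-bit source standing in for a quantum random number generator. It is a statistical toy model, not an implementation of any real QKD hardware or library, and all names and parameters are invented for illustration.

```python
import secrets

def random_bits(n):
    # Cryptographically strong bits; a real QRNG would derive these from a
    # physical quantum process rather than the operating system's entropy pool.
    return [secrets.randbits(1) for _ in range(n)]

def bb84_sift(n=64, eavesdrop=False):
    """Toy BB84 sifting: keep only positions where Alice's and Bob's bases match.

    Statistically, an eavesdropper who measures in random bases corrupts roughly
    25% of the sifted key, which is what makes interception detectable.
    """
    alice_bits  = random_bits(n)
    alice_bases = random_bits(n)   # 0 = rectilinear, 1 = diagonal
    bob_bases   = random_bits(n)

    key_alice, key_bob = [], []
    for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases):
        value = bit
        if eavesdrop and secrets.randbits(1) != a_basis:
            value = secrets.randbits(1)          # Eve's wrong-basis measurement randomizes the bit
        measured = value if b_basis == a_basis else secrets.randbits(1)
        if a_basis == b_basis:                   # bases are compared publicly afterwards
            key_alice.append(bit)
            key_bob.append(measured)
    errors = sum(a != b for a, b in zip(key_alice, key_bob))
    return len(key_alice), errors

print(bb84_sift(eavesdrop=False))   # (~n/2 sifted bits, 0 errors)
print(bb84_sift(eavesdrop=True))    # (~n/2 sifted bits, ~25% errors)
```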

Quantum computers are expected to have a number of important uses in computing fields such as optimization and machine learning. They are perhaps best known for their expected ability to carry out Shor's algorithm, which can efficiently factorise the large numbers on which much of today's data-transmission security relies.
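
For readers curious how factoring connects to Shor's algorithm: the quantum speed-up comes from period finding, and once the period (order) r of a number a modulo N is known, the factors of N follow from ordinary arithmetic. The sketch below shows only that classical reduction, with a brute-force loop standing in for the quantum step; the example numbers are illustrative.

```python
from math import gcd

def order(a, n):
    # Classically find the multiplicative order r of a mod n (a**r % n == 1).
    # This is the step a quantum computer would perform with period finding.
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical_part(n, a):
    """Given n and a coprime to n, turn the order r into non-trivial factors of n."""
    assert gcd(a, n) == 1        # if gcd(a, n) > 1, that gcd is already a factor
    r = order(a, n)
    if r % 2 != 0:
        return None              # odd order: try another a
    y = pow(a, r // 2, n)
    if y == n - 1:
        return None              # trivial square root: try another a
    return gcd(y - 1, n), gcd(y + 1, n)

print(shor_classical_part(15, 7))   # (3, 5)
```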

There are many devices available today which are fundamentally reliant on the effects of quantum mechanics. These include laser systems, transistors and semiconductor devices, and other devices such as MRI imagers. The UK Defence Science and Technology Laboratory (DSTL) grouped these devices as 'quantum 1.0',[3] that is, devices which rely on the effects of quantum mechanics. Quantum technologies proper are often described as 'quantum 2.0': a class of device that actively creates, manipulates and reads out quantum states of matter, often using the quantum effects of superposition and entanglement.

The field of quantum technology was first outlined in a 1997 book by Gerard J. Milburn,[4] which was then followed by a 2003 article by Jonathan P. Dowling and Gerard J. Milburn,[5][6] as well as a 2003 article by David Deutsch.[7] The field of quantum technology has benefited immensely from the influx of new ideas from the field of quantum information processing, particularly quantum computing. Disparate areas of quantum physics, such as quantum optics, atom optics, quantum electronics, and quantum nanomechanical devices, have been unified in the search for a quantum computer and given a common "language", that of quantum information theory.

From 2010 onwards, multiple governments have established programmes to explore quantum technologies,[8] such as the UK National Quantum Technologies Programme,[9] which created four quantum 'hubs', the Centre for Quantum Technologies in Singapore, and QuTech, a Dutch centre to develop a topological quantum computer.[10] In 2016, the European Union introduced the Quantum Technology Flagship,[11][12] a €1 billion, 10-year megaproject, similar in size to earlier European Future and Emerging Technologies Flagship projects.[13][14] In December 2018, the United States passed the National Quantum Initiative Act, which provides a US$1 billion annual budget for quantum research.[15] China is building the world's largest quantum research facility with a planned investment of 76 billion yuan (approx. US$10 billion).[16][17]

In the private sector, large companies have made multiple investments in quantum technologies. Examples include Google's partnership with the John Martinis group at UCSB,[18] multiple partnerships with the Canadian quantum computing company D-Wave Systems, and investment by many UK companies within the UK quantum technologies programme.

Read the original here:
Quantum technology - Wikipedia

Quantum computers are on the path to solving bigger problems for BMW, LG and others – CNET

Marissa Giustina, a researcher with Google's quantum computer lab, draws a diagram showing "quantum supremacy" as only an early step on a path of quantum computer progress.

After years of development, quantum computers reached a level of sophistication in 2021 that emboldened commercial customers to begin dabbling with the radical new machines. Next year, the business world may be ready to embrace them more enthusiastically.

BMW is among the manufacturing giants that see the promise of the machines, which capitalize on the physics of the ultrasmall to soar over some limits of conventional computers. Earlier this month, the German auto giant chose four winners in a contest it hosted with Amazon to spotlight ways the new technology could help the automaker.

The carmaker found quantum computers have potential to optimize the placement of sensors on cars, predict metal deformation patterns and employ AI in quality checks.

"We at the BMW Group are convinced that future technologies such as quantum computing have the potential to make our products more desirable and sustainable," Peter Lehnert, who leads BMW's research group, said in a statement.

BMW isn't alone in its determination to evaluate the practical application of quantum computers. Aerospace giant Airbus, financial services company PayPal and consumer electronics maker LG Electronics are among the commercial businesses looking to use the machines to refine materials science, streamline logistics and monitor payments.

For years, researchers worked on quantum computers as more or less conceptual projects that take advantage of qubits, data processing elements that can hold more than the two states that are handled by transistors found in conventional computers. Even as they improved, quantum computers were best suited for research projects, some as basic as figuring out how to program the exotic machines. But at the current rate of progress, they'll soon become powerful enough to tackle computing jobs out of reach of conventional computers.
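
A hedged way to see what "more than two states" means in practice: a qubit is described by two complex amplitudes rather than a single 0 or 1, and n qubits need 2^n amplitudes. The NumPy sketch below simulates this directly; it is a textbook illustration, not how any production quantum stack represents states.

```python
import numpy as np

# A classical bit is 0 or 1; a qubit is a pair of complex amplitudes.
ket0 = np.array([1, 0], dtype=complex)

# Hadamard gate: puts |0> into an equal superposition of |0> and |1>.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
psi = H @ ket0
print(np.abs(psi) ** 2)            # measurement probabilities: [0.5, 0.5]

# Two qubits need 4 amplitudes, n qubits need 2**n -- the exponentially large
# state space that quantum algorithms try to exploit.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
bell = CNOT @ np.kron(psi, ket0)   # entangled Bell state (|00> + |11>)/sqrt(2)
print(np.abs(bell) ** 2)           # [0.5, 0, 0, 0.5]
```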

Like cloud computing before it, quantum computing will be a service that most corporations rent from other companies. The rigs require constant attention and are notoriously fiddly. Though more work is required to tap their full potential, quantum computers are becoming more and more stable, a development that's helping corporations overcome initial hesitance.

Georges-Olivier Reymond, chief executive of startup Pasqal, says the progress is turning around skeptics who previously viewed quantum computing as a fantasy. A few years ago, employees at large corporations would roll their eyes when he brought up the subject, but that's changed, Reymond says.

"Now each time I talk to them I have a positive answer," Reymond said. "They are ready to engage."

One new customer is European defense contractor Thales, which is interested in quantum computing applications in sensors and communications. "Pasqal's quantum processors can efficiently address large size problems that are completely out of reach of classical computing systems," Thales Chief Technology Officer Bernhard Quendt said in a statement.

Of course, quantum computing is still a tiny fraction of the traditional computing market, but it's growing fast. About $490 million was spent on quantum computers, software and services in 2021, Hyperion Research analyst Bob Sorensen said at the Q2B conference held by quantum computing software company QC Ware in December. He expects spending to grow by 22% to $597 million in 2022 and at an average of 26% a year through 2024. By comparison, spending on conventional computing is expected to rise 4% in 2021 to $3.8 trillion, Gartner analysts predict.
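
As a quick sanity check of those projections (illustrative arithmetic only, applying the quoted growth rates rather than Hyperion's actual model):

```python
# Values in millions of US dollars, per the figures quoted above.
spend_2021 = 490
spend_2022 = spend_2021 * 1.22        # ~598, matching the ~$597M estimate
spend_2024 = spend_2022 * 1.26 ** 2   # compounding ~26% a year through 2024
print(round(spend_2022), round(spend_2024))   # 598, ~949
```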

The growing commercial activity is notable given that using a quantum computer costs $3,000 to $5,000 per hour, according to Jean-Francois Bobier, an analyst at Boston Consulting Group. A conventional, high-performance computer hosted on a cloud service costs a half penny for the same amount of time.

Analysts say the real spending on quantum computing will start when the industry tackles error correction, a solution to the vexing problem of easily perturbed qubits that derail calculations. The fidelity of a single computing step on the most advanced machines is around 99.9%, leaving a degree of flakiness that makes a raw quantum computing calculation unreliable. As a result, quantum computers have to run the same calculation many times to provide confidence that the answer is correct.
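
A back-of-the-envelope sketch of why roughly 99.9% per-step fidelity is a problem, and why repetition helps, assuming independent errors (a simplification; real error-mitigation and error-correction schemes are far more involved):

```python
# The chance a circuit runs with zero errors decays exponentially with depth.
fidelity = 0.999
for depth in (100, 1_000, 10_000):
    print(depth, f"{fidelity ** depth:.3g}")   # ~0.905, ~0.368, ~4.5e-05

# Repeating the computation and taking the most frequent answer is the crude
# mitigation described above; full error correction instead encodes each
# logical qubit across many physical qubits.
from collections import Counter
import random

def noisy_run(correct=42, p_fail=0.3):
    """One shot of a flaky computation: wrong answer with probability p_fail."""
    return correct if random.random() > p_fail else random.randint(0, 100)

shots = Counter(noisy_run() for _ in range(200))
print(shots.most_common(1)[0][0])   # almost always 42
```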

Once error correction is mature, the revenue generated through quantum computing will explode, according to Boston Consulting Group. With today's machines, that value will likely total between $5 billion and $10 billion by 2025, according to the consultancy's estimates. Once error corrected machines arrive, the total could leap forward to hit $450 billion to $850 billion by 2040.

Software and services that hide the complexity of quantum computers also will boost usage. IonQ CEO Peter Chapman predicts that in 2022, developers will be able to easily train their AI models with quantum computers. "You don't need to know anything about quantum," Chapman said. "You just give it the data set and it spits back a model."

Among the signs of commercial interest:

Quantum computers today are more of a luxury than a necessity. But with their potential to transform materials science, shipping, financial services and product design, it's not a surprise companies like BMW are investing. The automaker stands to benefit from knowing better how materials will deform in a crash or training its vehicles' vision AI faster. Though quantum computers might not produce a payoff this year or next, there's a cost to missing out on the technology once it matures.

Link:
Quantum computers are on the path to solving bigger problems for BMW, LG and others - CNET

Neural’s best quantum computing and physics stories from 2021 – The Next Web

2021 will be remembered for a lot of things, but when it's all said and done we think it'll eventually get called the year quantum computing finally came into focus.

That's not to say useful quantum computers have actually arrived yet. They're still somewhere between a couple years and a couple centuries away. Sorry for being so vague, but when you're dealing with quantum physics there aren't yet many guarantees.

This is because physics is an incredibly complex and challenging field of study. And the difficulty gets cranked up exponentially when you start adding "theoretical" and "quantum" to the research.

We're talking about physics at the very edge of reason. Like, for example, imagining a quantum-powered artificial intelligence capable of taking on the Four Horsemen of the Apocalypse.

That might sound pretty wacky, but this story explains why it's not quite as out there as you might think.

But let's go even further. Let's go past the edge of reason and into the realm of the speculative science. Earlier this year we wondered what would happen if physicists could actually prove that reality as we know it isn't real.

Per that article:

Theoretically, if we could zoom in past the muons and leptons and keep going deeper and deeper, we could reach a point where all objects in the universe are indistinguishable from each other because, at the quantum level, everything that exists is just a sea of nearly-identical subparticulate entities.

This version of reality would render the concepts of space and time pointless. Time would only exist as a construct by which we give meaning to our own observations. And those observations would merely be the classical side-effects of existing in a quantum universe.

So, in the grand scheme of things, it's possible that our reality is little more than a fleeting, purposeless arrangement of molecules. Everything that encompasses our entire universe may be nothing more than a brief hallucination caused by a quantum vibration.

Nothing makes you feel special like trying to conceive of yourself as a few seasoning particles in an infinite soup of gooey submolecules.

If having an existential quantum identity crisis isn't your thing, we also covered a lot of cool stuff that doesn't require you to stop seeing yourself as an individual stack of materials.

Does anyone remember the time China said it had built a quantum computer a million times more powerful than Google's? We don't believe it. But that's the claim the researchers made. You can read more about that here.

Oh, and that Google quantum system the Chinese researchers referenced? Yeah, it turns out it wasn't exactly the massive upgrade over classical supercomputers it was chalked up to be either.

But, of course, we forgive Google for its marketing faux pas. And that's because, hands down, the biggest story of the year for quantum computers was the time crystal breakthrough.

As we wrote at the time:

If Google's actually created time crystals, it could accelerate the timeline for quantum computing breakthroughs from "maybe never" to "maybe within a few decades."

At the far-fetched, super-optimistic end of things we could see the creation of a working warp drive in our lifetimes. Imagine taking a trip to Mars or the edge of our solar system, and being back home on Earth in time to catch the evening news.

And, even on the conservative end with more realistic expectations, it's not hard to imagine quantum computing-based chemical and drug discovery leading to universally effective cancer treatments.

Talk about a eureka moment!

But there were even bigger things in the world of quantum physics than just advancing computer technology.

Scientists from the University of Sussex determined that black holes emanate a specific kind of quantum pressure that could lend some credence to multiple universe theories.

Basically, we can't explain where the pressure comes from. Could this be blowback from white holes swallowing up energy and matter in a dark, doppelganger universe that exists parallel to our own? Nobody knows! You can read more here though.

Still there were even bigger philosophical questions in play over the course of 2021 when it came to interpreting physics research.

Are we incapable of finding evidence for God because we're actually gods in our own right? That might sound like philosophy, but there are some pretty radical physics interpretations behind that assertion.

And, if we are gods, can we stop time? Turns out, whether we're just squishy mortal meatbags or actual deities, we actually can!

Alright. If none of those stories impress you, we've saved this one for last. If being a god, inventing time crystals, or even stopping time doesn't float your boat, how about immortality? And not just regular boring immortality, but quantum immortality.

It's probably not probable, and adding the word "quantum" to something doesn't necessarily make it cooler, but anything's possible in an infinite universe. Plus, the underlying theories involving massive-scale entanglement are incredible; read more here.

Seldom a day goes by where something incredible isn't happening in the world of physics research. But that's nothing compared to the magic we've yet to uncover out there in this fabulous universe we live in.

Luckily for you, Neural will be back in 2022 to help make sense of it all. Stick with us for the most compelling, wild, and deep reporting on the quantum world this side of the non-fiction realm.

Excerpt from:
Neural's best quantum computing and physics stories from 2021 - The Next Web

Research Opens the Door to Fully Light-Based Quantum Computing – Tom’s Hardware

A team of researchers with Japan's NTT Corporation, the University of Tokyo, and the RIKEN research center have announced the development of a full photonics-based approach to quantum computing. Taking advantage of the quantum properties of squeezed light sources, the researchers expect their work to pave the road towards faster and easier deployments of quantum computing systems, avoiding many practical and scaling pitfalls of other approaches. Furthermore, the team is confident their research can lead towards the development of rack-sized, large-scale quantum computing systems that are mostly maintenance-free.

The light-based approach in itself brings many advantages compared to traditional quantum computing architectures, which can be based on a number of approaches (trapped ions, silicon quantum dots, and topological superconductors, just to name a few). However, all of these approaches are somewhat limited from a physics perspective: they all need to employ electronic circuits, which leads to Ohmic heating (the waste heat that results from electrical signals' trips through resistive semiconductor wiring). At the same time, photonics enable tremendous improvements in latency due to data traveling at the speed of light.

Photonics-based quantum computing takes advantage of emerging quantum properties in light. The technical term here is "squeezing": the more squeezed a light source is, the more quantum behavior it demonstrates. While a minimum squeezing level of over 65% was previously thought necessary to unlock the required quantum properties, the researchers achieved a higher 75% factor in their experiments. In practical terms, their quantum system unlocks a frequency band of more than 6 THz, taking advantage of the benefits of photonics for quantum computing without reducing the available bandwidth to unusable levels.

The researchers thus expect their photonics-based quantum design to enable easier deployments: there's no need for the exotic temperature controls (essentially sub-zero freezers) that are usually required to maintain quantum coherence in other systems. Scaling is also simplified: there's no need to increase the number of qubits by interlinking several smaller, coherent quantum computing units. Instead, the number of qubits (and thus the performance of the system) can be increased by continuously dividing light into "time segments" and encoding different information in each of these segments. According to the team, this method allows them to "easily increase the number of qubits on the time axis without increasing the size of the equipment."

All of these elements combined allow for a reduction in required raw materials while doing away with the complexity of maintaining communication and quantum coherence between multiple, small quantum computing units. The researchers will now focus on actually building the photonics-based quantum computer. Considering how they estimate their design can scale up towards "millions of qubits," their contributions could enable a revolutionary jump in quantum computation that skips the expected "long road ahead" for useful qubit counts to be achieved.

Original post:
Research Opens the Door to Fully Light-Based Quantum Computing - Tom's Hardware

Scientists created a biological quantum circuit in grisly experiment with tardigrades – The Next Web

An international team of researchers is claiming to have performed the first ever experiment to successfully quantum entangle a multi-celled organism.

The team, whose research was recently published in a pre-print paper, says it's managed to place a tardigrade (a tiny critter affectionately known as a "water bear") in a state of quantum entanglement between a pair of superconducting qubits.

In other words: the researchers managed to put a tardigrade in a state where it was directly connected to the qubits in such a way that anything that happens to the water bear or the qubits would simultaneously affect all three.

This is a fundamental property of quantum computing. But this kind of quantum function usually only occurs with particle-sized objects. Researchers have put single-celled organisms in a state of quantum entanglement before, but this would mark the first time scientists have done so with a complex biological organism.

There is, however, some debate as to the significance of the team's efforts. Per the researchers' paper:

We observe coupling between the animal in cryptobiosis and a superconducting quantum bit and prepare a highly entangled state between this combined system and another qubit. The tardigrade itself is shown to be entangled with the remaining subsystems. The animal is then observed to return to its active form after 420 hours at sub-10 mK temperatures and a pressure of 6 × 10⁻⁶ mbar, setting a new record for the conditions that a complex form of life can survive.

There's a lot to unpack there, but first and foremost: other physicists are being critical of this work early due to what appears to be a loose definition of entanglement.

As spotted by Live Science's Brandon Specktor, the buzz on social media appears to be entirely skeptical.

But, as Specktor also points out, this is all likely to get sorted in peer review. For now, let's talk about the experiment itself.

Tardigrades are among the most resilient creatures we know of. They can enter a state of suspended animation where they have no observable biological functions in order to survive in extremely hostile environments.

It's for this reason the scientists chose to attempt integrating them with quantum bits in a circuit. The idea's pretty basic: you freeze the tardigrades to the point that they're next to absolute zero, and then you can put them in a state of entanglement just like any other super-cold particle.

However, because the tardigrades are living beings, the story's a bit more visceral than your standard "we entangled several photons" variety of experiment.

According to the team's paper, these particular tardigrades were collected in February 2018 from a roof gutter in Niva, Denmark.

So, to sum up, a group of humans in white coats kidnapped a bunch of cute little water bears, who were already living in a literal gutter, and then exposed them to the coldest temperatures a tardigrade's ever experienced before forcing them into a three-way entanglement with superconducting qubits.

The team was able to revive one of the tardigrades that were successfully involved in what they're calling entanglement. But, as for the others, the researchers wrote "we wish to point out that it is very important for the revival of the animal to change the external temperature and pressure gently."

Rest in power, little science bears, we'll never forget you.

Further reading: Physicists might have created quantum entanglement in bacteria

Read this article:
Scientists created a biological quantum circuit in grisly experiment with tardigrades - The Next Web

Breaking Up Tech Is a Gift to China – The Wall Street Journal

Few issues unite both sides of the political divide more than anger at U.S. tech companies, whether for censorship of conservative viewpoints or for failing to counter misinformation online. In response to these concerns, legislation introduced in Congress would weaken the U.S. tech industry, ostensibly in the name of breaking up monopolies. Unfortunately, the various bills would hurt the U.S. and strengthen the hand of our greatest geopolitical rival, the People's Republic of China.

As of 2018, nine of the top 20 global technology firms by valuation were based in China. President Xi Jinping has stated his intention to spend $1.4 trillion by 2025 to surpass the U.S. in key technology areas, and the Chinese government aggressively subsidizes national champion firms. Beginning with the Made in China 2025 initiative, Beijing has made clear that it wont stop until it dominates technologies such as quantum computing, artificial intelligence, autonomous systems and more. Last month the National Counterintelligence and Security Center warned that these are technologies where the stakes are potentially greatest for U.S. economic and national security.

See the original post here:
Breaking Up Tech Is a Gift to China - The Wall Street Journal

7 Tech Trends Where Israel Could Make An Impact In 2022 – NoCamels – Israeli Innovation News

As we head into 2022, forecasts for Israel's bubbling tech sector are big, optimistic, and showing no signs of slowing down. Industry experts and tech investors are looking ahead with eyes wide open and faith in the country's entrepreneurs that the year to come will be strong with stable growth.

"We continue to be really excited about Israel as a focus area," Nicole Priel, partner at Ibex Investors, tells NoCamels. "We've been really active in Israel and we don't see that slowing down… We see so much promise in this ecosystem across enterprise software and other sectors."

The outgoing year has been one of record-breaking funding, turning crises into opportunity, globally recognized groundbreaking inventions, a surge in valuations of Israeli tech firms, big acquisitions, and maturation into a scale-up nation.

"We really are transitioning from startup nation to scale-up nation and this is just attracting so much capital," says Jonathan Medved, founder and CEO of OurCrowd.

Israeli innovation is everywhere, touching numerous tech sectors simultaneously. In 2021, local tech companies continued to take the lead in cybersecurity, agriculture technologies, financial technologies, mobility, data, and digital privacy, among other fields.

The big question: Where will Israel make its mark in 2022?

With so many booming sectors within the high-tech arena, it's a tough call to make. So, NoCamels asked the experts to share their predictions for the next 12 months.

If the pundits are right, these are the 7 tech trends where Israel will make an impact in 2022:

E-commerce has exploded throughout 2021, in large part due to the COVID-19 pandemic.

According to market reports, 66 percent of customers choose what to buy based on convenience. So, it is no surprise that e-commerce is a booming industry.

"There's a couple of spaces that we think Israel is really going to excel in, and a couple of them are around e-commerce. We are thinking a lot about how companies are going to chip away at Amazon's monopoly, including around logistics and warehousing," Priel told NoCamels.

Israeli companies are looking for solutions to rapid shipping and the online returns space, among other areas. Priel says Ibex Investors are taking a look at the online returns space and thinking about how startups can help mitigate online returns to create a stronger online shopping experience overall.

In addition to changing the way users shop, sellers need strong e-commerce tools for their online stores.

"More focus and emphasis is going to be placed on customer success as a driver within SaaS organizations, so we are excited to see what technologies will pop up to support CS organizations and help drive revenue," says Priel.

"It is more expensive to acquire a new customer than it is to retain a previous customer," Priel explains. It is because of this principle of marketing that customer satisfaction will become a more dominant indicator and marketing metric for SaaS-based companies, which could allow sales teams to more accurately serve their clientele.

And, it's not just in the traditional e-commerce space that we'll see new solutions.

Medved believes the next 10 years will see huge growth in immersive e-commerce.

"We are looking at all kinds of AR, VR; more immersive interactions [in general] will become more normal over the coming years," he says, noting investments in ByondXR, an Israeli software company that creates immersive virtual stores where people can pick out goods, and ZipIt, which can turn any store into a touchless, personless Amazon-like store.

More advanced logistics, last-mile delivery, and shipment innovations are going to be a popular trend in tech in 2022, says Priel, citing dark kitchens (food producers with no physical location) and dark warehouses (spaces used to deliver orders to shorten the distance to the consumer) as examples.

"We are also very excited about the idea of dark kitchens and dark warehouses for delivering items to consumers, whether it's merchandise or food," says Priel.

While these unique distribution methods are important for last-mile delivery, the COVID-19 pandemic put the spotlight on supply chain logistics in general.

"Supply chain is critical [and] Israel is very strong in terms of optimization and planning. There are a lot of unmet needs that we are busy working on," says Medved.

Blue-and-white solutions include Freightos, which streamlines the shipping industry through an international freight marketplace; BionicHive, which deploys easily portable, autonomous machines around warehouses; and Trellis, which predicts the yield, cost, and quality of produce while using AI to accurately move goods.

Semiconductors are found in every piece of hardware we use from personal computers, cars, databases, toasters to rocket ships, and more.

Israel has a global name for its hardware innovation. With an ever-increasing need for processing power thanks to big data and AI, it's no surprise that in 2021 this country continued its rule as a global powerhouse in semiconductor and computer chip R&D.

Intel announced in May that it will be investing $10 billion in a new processing center in Kiryat Gat in addition to investing $600 million in its centers in Haifa and Jerusalem.

In March, Google announced that it will be doubling down on Israeli computer chip design and production. They hired former senior Intel executive Uri Frank as VP of Engineering of Server Chip Design to build a world-class team in Israel.

Market reports show 2022 demand for computer chips is expected to rise. And this will only benefit Israel.

"The increasing importance of semiconductors will only be good for Israel. We have situations like Facebook, Microsoft, and Amazon all talking about setting up semiconductor activities here," says Medved.

Technology can only move as fast as the computer chips it's built on. So how is Israel making them faster?

The answer is quantum computing.

Quantum computing is a type of computing that harnesses the properties of quantum states to perform calculations. Classical computers are ultimately limited by how quickly physics allows their components to switch and signals to travel. Quantum machines do not move information faster than physics allows; instead, by exploiting properties such as superposition and entanglement, they can solve certain problems in far fewer steps than a classical computer would need.

The Israeli government is making a strong effort to push Israel forward in the field.

In 2019, the Knesset committed roughly $400 million to a five-year National Quantum Initiative which included $60 million towards the effort of producing a quantum computer. Physics Today reported in October that over the last two years, there has been a leap from five to 30 quantum-based companies in Israel.

Earlier this month, Hebrew University physicist Dr. Shlomi Kotler won Physics World's 2021 Breakthrough of the Year award, presented by the UK-based Institute of Physics to two research teams who advanced the understanding of quantum systems.

His team successfully quantum-mechanically entangled two tiny drumheads that can be used as quantum sensors or nodes in a quantum network.

Physics World editors chose this year's winners from nearly 600 published research articles and wrote that the winners demonstrated "important work for scientific progress and/or the development of real-world applications."

CEO and co-founder of Israel-based Quantum Machines, Itamar Sivan, told Physics Today that he has no doubt that quantum computing will become influential and that it's ultimately a question of "When?". He credits his company's success to the easy accessibility of funding for quantum-based firms. He said, "There are great engineers and amazing talent in Israel. We can find people here who are both experts in quantum but also have some engineering background."

SEE ALSO: On Yom Haatzmaut, A Look At Israel's Innovation Contributions To The World

Talking about the upcoming year, Medved says, "2022 will see Quantum Computing attract continued strong interest from investors. I expect that global Quantum VC investment will more than double from 2021's $1 billion and that revenues of Quantum companies will near $500 million in 2022. While this is impressive growth, we haven't seen anything yet. In a decade from now, Quantum will be ubiquitous, and will be an order of magnitude larger in investment and revenues." While the mainstream adoption of quantum computing is still a decade away, the technological advances that are coming out of Israel will definitely make waves in the coming year and beyond.

The blockchain industry has come a long way. It started 12 years ago as a payment method and store of value. The technology slowly evolved to be a solution for supply chain management, digital security, voting applications, financial applications, and digital ownership in the form of tokens called NFTs and much more.

In 2021, blockchain technology became much more mainstream not only with the explosion of the NFT ecosystem but it gained adoption or is being explored by companies like Nike, Adidas, Facebook (Meta), PayPal, Visa, Ubisoft, and Shopify.

"I think it's going to flourish like crazy," Medved says of blockchain. "We're starting to make investments in those types of companies. We have not been big players or players at all in ICOs or cryptocurrencies but we believe in DeFi and that there's going to be a lot of business applications utilizing the blockchain and now is the time."

The blockchain industry is set to be worth $67.8 billion by 2026, according to market reports.

Blockchain is expected to continue being a strong and emerging sector into 2022, especially in Israel.

In November, American cryptocurrency exchange Coinbase acquired Unbound Security for a believed $150 million, according to a report. Coinbase not only gains access to some of the world's most sophisticated cryptographic security experts but also a presence in Israel. "We've long recognized Israel as a hotbed of strong technology and cryptography talent," reads a press release.

According to data compiled by Start-Up Nation Finder, cryptocurrency-tagged companies raised, for the first time ever, over $1 billion in funding for 2021. While a big milestone for the Israeli Web3 ecosystem, the global acceleration of the cryptocurrency markets crossing $2 trillion leaves a lot of room for Israels growth within this sector.

The pandemic accelerated the need for digital health solutions such as telemedicine, at-home medical devices, and personalized treatments.

"There's no slowing [digital health] down because people will get healthier, it will become much more efficient and it will reduce medical costs," says Medved.

Israel has long been a powerhouse in the health-tech space and COVID-19 has only upped its innovation. Israel has over 1,400 digital health startups, according to Start-Up Nation Finder.

On a global level, telehealth has increased 38 times from pre-COVID-19 levels, according to market reports. Global healthcare spending is set to hit over $10 trillion in 2022, and Fortune Business Insight predicts telehealth to be a $397 billion industry by 2027.

Israeli companies are all over the digital health space, with artificial intelligence for drug discovery, molecular diagnostics for personalized treatments, and VR-based FDA compliant telehealth meetings.

Among the companies to hit the news in 2021 are air filter makers like Aura Air, which this past week won the approval of the health and education ministries to be installed in 700 Jerusalem classrooms, and Tadiran, which says it removes 99.9% of COVID-19 particles from the air. Additionally, SaNOtize invented a nasal spray to kill the virus with a spritz, and MigVax claims to have an effective oral booster against the virus.

Also earlier this month, eight Israeli startups were named to the prestigious Digital Health 150, an annual global ranking by New York-based research firm CB Insights of the 150 most promising companies using digital technology to transform the healthcare industry.

On health care technology, Medved told NoCamels, "The most important word today in venture capital seems to be velocity. There seems to be a speed at which funding is getting done; companies are growing much faster than before, and that's happening in healthcare too, which is one of the slower-moving areas because of the need for approval. You even see the FDA, because of the changes made in the pandemic, just moving a lot faster."

Food tech conquered the headlines in 2021, with a wide range of jaw-dropping innovations.

And Israel is taking part in this revolution of what we eat, how we eat it, what it's wrapped in, and how it gets from farm to our plate.

In September, Margalit Startup City Galil, the International Foodtech Center developed in conjunction with the Jewish National Fund (JNF), opened its doors. The center is dedicated to the application of food science and food technologies.

Lab-grown meat was a buzzword in 2021 and is likely going to continue to demand solutions that tackle the harmful effects of livestock systems and reduce the population's reliance on livestock in 2022. Earlier this year, NoCamels reported on the Israeli FoodTech incubator The Kitchen Hub and how it's using its resources to cultivate sustainable innovations in the food industry.

Indeed, the Food and Agriculture Organization of the UN found that the livestock sector emerges as one of the top two or three most significant contributors to the most serious environmental problems.

In November, the world's first lab-grown meat factory opened in Israel.

Future Meat Technologies, a cell-grown meat developer, raised the most in the sector's history with a Series B investment of $347 million. This investment broke records as the biggest single investment in a cultured meat company to date.

Beyond the lab-grown meat trend, a slew of companies are innovating: Imagindairy develops animal-free dairy, Ukko designs proteins that don't trigger allergic responses, and ZeroEgg produces plant-based eggs that aim to behave and taste like the real thing.

"We're (globally) investing broadly in food, a ton of money, in next generation milk, eggs, fish, and reduced sugar. We're investing in agriculture tech in terms of data collection and sensors, but not for one year," says Medved.

Read the original post:
7 Tech Trends Where Israel Could Make An Impact In 2022 - NoCamels - Israeli Innovation News

Machine Learning: Definition, Explanation, and Examples

Machine learning has become an important part of our everyday lives and is used all around us. Data is key to our digital age, and machine learning helps us make sense of data and use it in ways that are valuable. Similarly, automation makes business more convenient and efficient. Machine learning makes automation happen in ways that are consumable for business leaders and IT specialists.

Machine learning is vital as data and information get more important to our way of life. Processing is expensive, and machine learning helps cut down on costs for data processing. It becomes faster and easier to analyze large, intricate data sets and get better results. Machine learning can additionally help avoid errors that can be made by humans. Machine learning allows technology to do the analyzing and learning, making our life more convenient and simple as humans. As technology continues to evolve, machine learning is used daily, making everything go more smoothly and efficiently. If you're interested in IT, machine learning and AI are important topics that are likely to be part of your future. The more you understand machine learning, the more likely you are to be able to implement it as part of your future career.

If you're interested in a future in machine learning, the best place to start is with an online degree from WGU. An online degree allows you to continue working or fulfilling your responsibilities while you attend school, and for those hoping to go into IT this is extremely valuable. You can earn while you learn, moving up the IT ladder at your own organization or enhancing your resume while you attend school to get a degree. WGU also offers opportunities for students to earn valuable certifications along the way, boosting your resume even more, before you even graduate. Machine learning is an in-demand field and it's valuable to enhance your credentials and understanding so you can be prepared to be involved in it.

Go here to read the rest:
Machine Learning: Definition, Explanation, and Examples

Machine Learning Tutorial | Machine Learning with Python …

Machine Learning tutorial provides basic and advanced concepts of machine learning. Our machine learning tutorial is designed for students and working professionals.

Machine learning is a growing technology which enables computers to learn automatically from past data. Machine learning uses various algorithms for building mathematical models and making predictions using historical data or information. Currently, it is being used for various tasks such as image recognition, speech recognition, email filtering, Facebook auto-tagging, recommender system, and many more.

This machine learning tutorial gives you an introduction to machine learning along with the wide range of machine learning techniques such as Supervised, Unsupervised, and Reinforcement learning. You will learn about regression and classification models, clustering methods, hidden Markov models, and various sequential models.

In the real world, we are surrounded by humans who can learn everything from their experiences with their learning capability, and we have computers or machines which work on our instructions. But can a machine also learn from experience or past data the way a human does? This is where machine learning comes in.

Machine learning is a subset of artificial intelligence that is mainly concerned with the development of algorithms which allow a computer to learn from data and past experience on its own. The term machine learning was first introduced by Arthur Samuel in 1959. We can define it in a summarized way as:

With the help of sample historical data, which is known as training data, machine learning algorithms build a mathematical model that helps in making predictions or decisions without being explicitly programmed. Machine learning brings computer science and statistics together for creating predictive models. Machine learning constructs or uses algorithms that learn from historical data. The more information we provide, the better the performance.

A machine has the ability to learn if it can improve its performance by gaining more data.

A machine learning system learns from historical data, builds prediction models, and, whenever it receives new data, predicts the output for it. The accuracy of the predicted output depends upon the amount of data, as a larger amount of data helps to build a better model which predicts the output more accurately.

Suppose we have a complex problem where we need to make some predictions. Instead of writing code for it, we just need to feed the data to generic algorithms, and with the help of these algorithms, the machine builds the logic as per the data and predicts the output. Machine learning has changed our way of thinking about such problems: historical data goes in, a model is built, and the model predicts outputs for new inputs.
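
A minimal sketch of that workflow in code, using scikit-learn and one of its built-in toy datasets; the dataset and model choice here are illustrative assumptions, not part of the tutorial itself:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)                        # historical data + labels
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier().fit(X_train, y_train)   # build the model from training data
predictions = model.predict(X_test)                       # predict outputs for new data
print(accuracy_score(y_test, predictions))
```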

The need for machine learning is increasing day by day. The reason is that machine learning is capable of doing tasks that are too complex for a person to implement directly. As humans, we have limitations: we cannot access and process huge amounts of data manually, so we need computer systems, and this is where machine learning makes things easy for us.

We can train machine learning algorithms by providing them with huge amounts of data, letting them explore the data, construct models, and predict the required output automatically. The performance of a machine learning algorithm depends on the amount of data, and it can be measured by a cost function. With the help of machine learning, we can save both time and money.

The importance of machine learning can be easily understood by its use cases. Currently, machine learning is used in self-driving cars, cyber fraud detection, face recognition, friend suggestions on Facebook, and more. Various top companies such as Netflix and Amazon have built machine learning models that use vast amounts of data to analyze user interest and recommend products accordingly.

Following are some key points which show the importance of Machine Learning:

At a broad level, machine learning can be classified into three types:

Supervised learning is a type of machine learning method in which we provide sample labeled data to the machine learning system in order to train it, and on that basis, it predicts the output.

The system creates a model using labeled data to understand the datasets and learn about each data point. Once the training and processing are done, we test the model by providing sample data to check whether it predicts the correct output.

The goal of supervised learning is to map input data to output data. Supervised learning is based on supervision, much as a student learns under the supervision of a teacher. An example of supervised learning is spam filtering.

Supervised learning can be grouped further into two categories of algorithms: classification and regression.
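
A brief sketch of those two categories on made-up labeled data, assuming scikit-learn is available; the numbers are purely illustrative:

```python
from sklearn.linear_model import LinearRegression, LogisticRegression
import numpy as np

hours_studied = np.array([[1], [2], [3], [4], [5]])   # toy labeled training data
exam_score    = np.array([52, 58, 65, 71, 78])        # continuous target -> regression
passed        = np.array([0, 0, 1, 1, 1])             # discrete label    -> classification

reg = LinearRegression().fit(hours_studied, exam_score)
clf = LogisticRegression().fit(hours_studied, passed)

print(reg.predict([[6]]))    # regression: predicted score for 6 hours of study
print(clf.predict([[6]]))    # classification: predicted pass (1) or fail (0)
```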

Unsupervised learning is a learning method in which a machine learns without any supervision.

The training is provided to the machine with the set of data that has not been labeled, classified, or categorized, and the algorithm needs to act on that data without any supervision. The goal of unsupervised learning is to restructure the input data into new features or a group of objects with similar patterns.

In unsupervised learning, we don't have a predetermined result. The machine tries to find useful insights from the huge amount of data. It can be further classified into two categories of algorithms: clustering and association.
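
A minimal clustering sketch of the unsupervised idea, again assuming scikit-learn; the points are made up and no labels are supplied:

```python
from sklearn.cluster import KMeans
import numpy as np

points = np.array([[1.0, 1.1], [0.9, 1.0], [1.2, 0.8],    # one natural group
                   [8.0, 8.2], [7.9, 8.1], [8.3, 7.8]])   # another group

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print(kmeans.labels_)            # e.g. [0 0 0 1 1 1] -- grouping found without labels
print(kmeans.cluster_centers_)   # the two discovered cluster centres
```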

Reinforcement learning is a feedback-based learning method, in which a learning agent gets a reward for each right action and gets a penalty for each wrong action. The agent learns automatically with these feedbacks and improves its performance. In reinforcement learning, the agent interacts with the environment and explores it. The goal of an agent is to get the most reward points, and hence, it improves its performance.

The robotic dog, which automatically learns the movement of its limbs, is an example of reinforcement learning.
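
A toy tabular Q-learning sketch of that reward-and-penalty loop: an agent on a five-cell corridor earns +1 for reaching the right-hand end and gradually learns to walk towards it. The environment and hyperparameters are invented for illustration and are not tied to any particular library:

```python
import random

n_states, actions = 5, [-1, +1]            # move left or move right
Q = [[0.0, 0.0] for _ in range(n_states)]  # Q[state][action index]
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for _ in range(500):                       # training episodes
    s = 0
    while s != n_states - 1:
        if random.random() < epsilon or Q[s][0] == Q[s][1]:
            a = random.randrange(2)        # explore / break ties randomly
        else:
            a = Q[s].index(max(Q[s]))      # exploit what has been learned
        s_next = min(max(s + actions[a], 0), n_states - 1)
        reward = 1.0 if s_next == n_states - 1 else 0.0   # reward for reaching the goal
        Q[s][a] += alpha * (reward + gamma * max(Q[s_next]) - Q[s][a])
        s = s_next

print([q.index(max(q)) for q in Q])   # learned policy: "move right" in every non-terminal cell
```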

A few decades ago (about 40-50 years), machine learning was science fiction, but today it is part of our daily life. Machine learning is making our day-to-day lives easier, from self-driving cars to Amazon's virtual assistant "Alexa". The idea behind machine learning, however, is quite old and has a long history marked by a number of milestones.

Machine learning has now advanced greatly in its research, and it is present everywhere around us, such as in self-driving cars, Amazon Alexa, chatbots, recommender systems, and many more. It includes supervised, unsupervised, and reinforcement learning with clustering, classification, decision tree, and SVM algorithms, among others.

Modern machine learning models can be used for making various predictions, including weather prediction, disease prediction, stock market analysis, etc.

Before learning machine learning, you should have basic knowledge of a few prerequisite topics so that you can easily understand the concepts of machine learning.

Our machine learning tutorial is designed to help beginners and professionals.

We assure you that you will not find any difficulty while learning our Machine learning tutorial. But if there is any mistake in this tutorial, kindly post the problem or error in the contact form so that we can improve it.

Read the original post:
Machine Learning Tutorial | Machine Learning with Python ...

Nonsense can make sense to machine-learning models – MIT News

For all that neural networks can accomplish, we still don't really understand how they operate. Sure, we can program them to learn, but making sense of a machine's decision-making process remains much like a fancy puzzle with a dizzying, complex pattern where plenty of integral pieces have yet to be fitted.

If a model was trying to classify an image of said puzzle, for example, it could encounter well-known but annoying adversarial attacks, or even more run-of-the-mill data or processing issues. But a new, more subtle type of failure recently identified by MIT scientists is another cause for concern: "overinterpretation," where algorithms make confident predictions based on details that don't make sense to humans, like random patterns or image borders.

This could be particularly worrisome for high-stakes environments, like split-second decisions for self-driving cars, and medical diagnostics for diseases that need more immediate attention. Autonomous vehicles in particular rely heavily on systems that can accurately understand surroundings and then make quick, safe decisions. The network used specific backgrounds, edges, or particular patterns of the sky to classify traffic lights and street signs irrespective of what else was in the image.

The team found that neural networks trained on popular datasets like CIFAR-10 and ImageNet suffered from overinterpretation. Models trained on CIFAR-10, for example, made confident predictions even when 95 percent of input images were missing, and the remainder was senseless to humans.

"Overinterpretation is a dataset problem that's caused by these nonsensical signals in datasets. Not only are these high-confidence images unrecognizable, but they contain less than 10 percent of the original image in unimportant areas, such as borders. We found that these images were meaningless to humans, yet models can still classify them with high confidence," says Brandon Carter, MIT Computer Science and Artificial Intelligence Laboratory PhD student and lead author on a paper about the research.

Deep-image classifiers are widely used. In addition to medical diagnosis and boosting autonomous vehicle technology, there are use cases in security, gaming, and even an app that tells you if something is or isn't a hot dog, because sometimes we need reassurance. The tech in discussion works by processing individual pixels from tons of pre-labeled images for the network to learn.

Image classification is hard, because machine-learning models have the ability to latch onto these nonsensical subtle signals. Then, when image classifiers are trained on datasets such as ImageNet, they can make seemingly reliable predictions based on those signals.

Although these nonsensical signals can lead to model fragility in the real world, the signals are actually valid in the datasets, meaning overinterpretation can't be diagnosed using typical evaluation methods based on that accuracy.

To find the rationale for the model's prediction on a particular input, the methods in the present study start with the full image and repeatedly ask, "What can I remove from this image?" Essentially, it keeps covering up the image until you're left with the smallest piece that still makes a confident decision.

To that end, it could also be possible to use these methods as a type of validation criteria. For example, if you have an autonomously driving car that uses a trained machine-learning method for recognizing stop signs, you could test that method by identifying the smallest input subset that constitutes a stop sign. If that consists of a tree branch, a particular time of day, or something that's not a stop sign, you could be concerned that the car might come to a stop at a place it's not supposed to.
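
A hedged sketch of that "keep covering things up" idea on synthetic tabular data: greedily zero out features while the model stays confident, and report the minimal subset that remains. The model, data, and threshold are stand-ins, not the method or datasets used in the MIT paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
y = (X[:, 3] - X[:, 7] > 0).astype(int)        # only features 3 and 7 actually matter
model = LogisticRegression().fit(X, y)

def minimal_confident_subset(x, threshold=0.9):
    """Greedily zero out features as long as model confidence stays above threshold."""
    x = x.copy()
    kept = set(range(len(x)))
    # Try removing the least-important features (smallest coefficients) first.
    for i in sorted(kept, key=lambda i: abs(model.coef_[0][i])):
        trial = x.copy()
        trial[i] = 0.0
        if model.predict_proba(trial.reshape(1, -1)).max() >= threshold:
            x = trial
            kept.discard(i)
    return sorted(kept)

idx = int(model.predict_proba(X).max(axis=1).argmax())   # most confidently classified sample
print(minimal_confident_subset(X[idx]))   # often shrinks to the few informative features, e.g. [3, 7]
```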

While it may seem that the model is the likely culprit here, the datasets are more likely to blame. "There's the question of how we can modify the datasets in a way that would enable models to be trained to more closely mimic how a human would think about classifying images and therefore, hopefully, generalize better in these real-world scenarios, like autonomous driving and medical diagnosis, so that the models don't have this nonsensical behavior," says Carter.

This may mean creating datasets in more controlled environments. Currently, it's just pictures that are extracted from public domains that are then classified. But if you want to do object identification, for example, it might be necessary to train models with objects with an uninformative background.

This work was supported by Schmidt Futures and the National Institutes of Health. Carter wrote the paper alongside Siddhartha Jain and Jonas Mueller, scientists at Amazon, and MIT Professor David Gifford. They are presenting the work at the 2021 Conference on Neural Information Processing Systems.

See the original post here:
Nonsense can make sense to machine-learning models - MIT News

Machine Learning Democratized: Of The People, For The People, By The Machine – Forbes

Technology is a democratic right. That's not a legal statement, a core truism or even any kind of de facto public awareness proclamation. It's just something that we all tend to agree upon. The birth of cloud computing and the rise of open source have fuelled this line of thought, i.e. cloud puts access and power in anyone's hands and open source champions meritocracy over hierarchy, an action which in itself insists upon access, opportunity and engagement.

Key among the sectors of the IT landscape now being driven towards a more democratic level of access are Artificial Intelligence (AI) and the Machine Learning (ML) methods that go towards building the smartness inside AI models and their algorithmic strength.

Amazon Web Services (AWS) is clearly a major player in cloud and therefore has the breadth to bring its datacenters' ML muscle forwards in different ways, in different formats and at different levels of complexity, abstraction and usability.

While some IT democratization focuses on putting complex developer and data science tools in the hands of laypeople, other democratization drives put ML tools in the hands of developers, not all of whom will be natural ML specialists and AI engineers in the first instance.

The recently announced SageMaker Studio Lab is a free service for software application developers to learn machine learning methods. It teaches them core techniques and offers them the chance to perform hands-on experimentation with an Integrated Development Environment (in this case, a JupyterLab IDE) to start creating model training functions that will work on real world processors (both CPU chips and higher end Graphic Processing Units, or GPUs) as well as the gigabytes of storage these processes also require.

AWS has twinned its product development with the creation of its own AWS AI & ML Scholarship Program. This is a US$10 million-per-year learning and mentorship initiative created in collaboration with Intel and Udacity.

"Machine Learning will be one of the most transformational technologies of this generation. If we are going to unlock the full potential of this technology to tackle some of the world's most challenging problems, we need the best minds entering the field from all backgrounds and walks of life. We want to inspire and excite a diverse future workforce through this new scholarship program and break down the cost barriers that prevent many from getting started," said Swami Sivasubramanian, VP of Amazon Machine Learning at AWS.

Founder and CEO of Girls in Tech Adriana Gascoigne agrees with Sivasubramanian's diversity message wholeheartedly. Her organization is a global nonprofit dedicated to eliminating the gender gap in tech and she welcomes what she calls "intentional" programs like these that are designed to break down barriers.

"Progress in bringing more women and underrepresented communities into the field of Machine Learning will only be achieved if everyone works together to close the diversity gap. Girls in Tech is glad to see multi-faceted programs like the AWS AI & ML Scholarship to help close the gap in Machine Learning education and open career potential among these groups," said Gascoigne.

The program uses AWS DeepRacer (an integrated learning system for users of all levels to learn and explore reinforcement learning and to experiment and build autonomous driving applications) and the new AWS DeepRacer Student League to teach students foundational machine learning concepts by giving them hands-on experience training machine learning models for autonomous race cars, while providing educational content centered on machine learning fundamentals.

The World Economic Forum estimates that technological advances and automation will create 97 million new technology jobs by 2025, including in the field of AI & ML. Yet while job opportunities in technology are growing, diversity in science and technology careers is lagging behind.

The University of Pennsylvania's engineering school is regarded by many in technology as the birthplace of the modern computer, because ENIAC, the world's first electronic, large-scale, general-purpose digital computer, was developed there in 1946. Dan Roth, Professor of Computer and Information Science (CIS) at the university, is enthusiastic on the subject of AI & ML democratization.

"One of the hardest parts about programming with Machine Learning is configuring the environment to build in. Students usually have to choose the compute instances and security policies, and provide a credit card," said Roth. "My students needed Amazon SageMaker Studio Lab to abstract away all of the complexity of setup and provide a free, powerful sandbox for experimentation. This lets them write code immediately, without needing to spend time configuring the ML environment."

In terms of how these systems and initiatives actually work, Amazon SageMaker Studio Lab offers a free version of Amazon SageMaker, which is used by researchers and data scientists worldwide to build, train, and deploy machine learning models quickly.

Amazon SageMaker Studio Lab removes the need to have an AWS account or provide billing details to get up and running with machine learning on AWS. Users simply sign up with an email address through a web browser and Amazon SageMaker Studio Lab provides access to a machine learning development environment.

This thread of industry effort must also logically embrace Low-Code/No-Code (LC/NC) technologies. AWS has built this element into its platform with what it calls Amazon SageMaker Canvas. This is a No-Code service intended to expand access to Machine Learning to business analysts (a term AWS uses broadly for line-of-business employees supporting finance, marketing, operations and human resources teams) through a visual interface that allows them to create accurate Machine Learning predictions on their own, without having to write a single line of code.

Amazon SageMaker Canvas provides a visual, point-and-click user interface for users to generate predictions. Customers point the service at their data stores (e.g. Amazon Redshift, Amazon S3, Snowflake, on-premises data stores, local files, etc.) and it provides visual tools to help users intuitively prepare and analyze data.

Amazon SageMaker Canvas uses automated Machine Learning to build and train machine learning models without any coding. Businesspeople can review and evaluate models in the Amazon SageMaker Canvas console for accuracy and efficacy in their use case. Amazon SageMaker Canvas also lets users export their models to Amazon SageMaker Studio, so they can share them with data scientists who can validate and further refine them.

According to Marc Neumann, product owner, AI Platform at The BMW Group, the use of AI as a key technology is an integral element of the company's digital transformation. BMW already employs AI throughout its value chain and has been working to expand its use.

"We believe Amazon SageMaker Canvas can add a boost to our AI/ML scaling across the BMW Group. With SageMaker Canvas, our business users can easily explore and build ML models to make accurate predictions without writing any code. SageMaker also allows our central data science team to collaborate and evaluate the models created by business users before publishing them to production," said Neumann.

As we know, with great power comes great responsibility, and nowhere is this more true than in the realm of AI & ML, given the machine brainpower we are about to wield over our lives.

Enterprises can, of course, corral, contain and control how much ML any individual, team or department has access to (and which internal and external systems it can then connect with and affect) via policy controls and role-based access systems. These ensure that data sources are not manipulated and then distributed in ways that could ultimately prove harmful to the business, or indeed to people.
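As a purely illustrative sketch of the kind of policy control described above, the following Python snippet uses boto3 to create an IAM policy that limits a team to launching and inspecting SageMaker training jobs in a single region. The policy name, the chosen actions and the region are assumptions made for this example, not anything prescribed by AWS or by the programs discussed here.

import json
import boto3

# Hypothetical, illustrative policy: allow a team to start and inspect
# SageMaker training jobs, but nothing else, and only in one region.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "sagemaker:CreateTrainingJob",
                "sagemaker:DescribeTrainingJob",
                "sagemaker:ListTrainingJobs",
            ],
            "Resource": "*",
            "Condition": {"StringEquals": {"aws:RequestedRegion": "us-east-1"}},
        }
    ],
}

iam = boto3.client("iam")                          # requires valid AWS credentials
response = iam.create_policy(
    PolicyName="ml-team-training-only",            # illustrative name, not a real policy
    PolicyDocument=json.dumps(policy_document),
)
print(response["Policy"]["Arn"])                   # ARN to attach to a role or group

The same idea applies to any role-based access system: grant the narrowest set of ML actions a team actually needs, then widen it deliberately rather than by default.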

There is no denying the general weight of effort being applied here, as AI intelligence and ML cognizance are democratized for a greater cross-section of society; and after all, who wouldn't vote for that?

Continue reading here:
Machine Learning Democratized: Of The People, For The People, By The Machine - Forbes

New platform uses machine-learning and mass spectrometer to rapidly process COVID-19 tests – UC Davis Health

(SACRAMENTO)

UC Davis Health, in partnership with SpectraPass, is evaluating a new type of rapid COVID-19 test. The research will involve about 2,000 people in Sacramento and Las Vegas.

The idea behind the new platform is a scalable system that can quickly and accurately perform on-site tests for hundreds or potentially thousands of people.

Nam Tran is a professor of clinical pathology in the UC Davis School of Medicine and a co-developer of the novel testing platform with SpectraPass, a Las Vegas-based startup.

Tran explained that the system doesn't look for the SARS-CoV-2 virus itself, as a PCR test does. Instead, it detects an infection by analyzing the body's response to it: when ill, the body produces differing protein profiles in response to infection, and these profiles may indicate different types of infection, which machine learning can distinguish.

"The goal of this study is to have enough COVID-19 positive and negative individuals to train our machine learning algorithm to identify patients infected by SARS-CoV-2," said Tran.

A study published by Tran and his colleagues earlier this year in Nature Scientific Reports found the novel method to be 98.3% accurate for positive COVID-19 tests and 96% for negative tests.
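The article does not describe MILO's internals, so as a loose illustration of the general idea only, classifying infection status from protein-profile features, here is a hedged Python sketch that uses synthetic data in place of real spectra; the feature count, labels and model choice are assumptions for demonstration.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Placeholder data: rows are samples, columns stand in for intensities at
# selected mass-to-charge (m/z) positions from the spectrometer output.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))          # 200 swabs, 50 spectral features (synthetic)
y = rng.integers(0, 2, size=200)        # 1 = COVID-19 positive, 0 = negative (synthetic)

clf = GradientBoostingClassifier(random_state=0)
scores = cross_val_score(clf, X, y, cv=5)   # estimate accuracy with cross-validation
print("mean cross-validated accuracy:", scores.mean())

With real spectra and labels from reference PCR results, the same cross-validation pattern is how accuracy figures of the kind quoted above would typically be estimated.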

In addition to identifying positive cases of COVID-19, the platform also uses next-generation sequencing to confirm multiple respiratory pathogens like the flu and the common cold.

The sequencing panel at UC Davis Health can detect over 280 respiratory pathogens, including SARS-CoV-2 and related variants, allowing the study to train the machine-learning algorithms to differentiate COVID-19 from other respiratory diseases.

So far, the study has not seen any participants with the new omicron variant.

"Our team has tested the system with samples from patients infected with delta and other variants of the SARS-CoV-2 virus. We are fairly certain that omicron will be detected as well, but we won't know for sure until we encounter a study participant with the variant," Tran said.

The Emergency Department (ED) at the UC Davis Medical Center is conducting the testing in Sacramento. Collection for testing in Las Vegas is conducted at multiple businesses and locations.

The team expects the study will continue until the end of winter. The results from the new study will be used to seek emergency use authorization (EUA) from the Food and Drug Administration.

The novel testing system uses an analytical instrument known as a mass spectrometer. It's paired with machine learning algorithms produced by software called the Machine Intelligence Learning Optimizer, or MILO. MILO was developed by Tran, Hooman Rashidi, a professor in the Department of Pathology and Laboratory Medicine, and Samer Albahra, assistant professor and medical director of pathology artificial intelligence in the Department of Pathology and Laboratory Medicine.

As with many other COVID-19 tests, a nasal swab is used to collect a sample. Proteins from the nasal sample are ionized with the mass spectrometer's laser, then measured and analyzed by the MILO machine learning algorithms to generate a positive or negative result.

In addition to conducting the mass spectrometry testing, UC Davis serves as a reference site for the study, performing droplet digital PCR (ddPCR) tests, the gold standard for COVID-19 testing, to assess the accuracy of the mass spectrometry tests.

The project originated with Maurice J. Gallagher, Jr., chairman and CEO of Allegiant Travel Company and founder of SpectraPass. Gallagher is also a UC Davis alumnus and a longtime supporter of innovation and entrepreneurship at UC Davis.

In 2020, when the COVID-19 pandemic brought the travel and hospitality industries almost to a standstill, Gallagher began conceptualizing approaches to allow people to gather again safely. He teamed with researchers at UC Davis Health to develop the new platform and launched SpectraPass.

In addition to the novel testing solution, SpectraPass is also developing digital systems to accompany the testing technology. Those include tools to authenticate and track verified test results from the system so an individual can access and use them. The goal is to facilitate accurate, large-scale rapid testing that will help keep businesses and the economy open through the current and any future pandemics.

"The official start of our multi-center study across multiple locations marks an important milestone in our journey at SpectraPass. We are excited to test and generate data on a broader scale. Our goal is to move the platform from a promising new technology to a proven solution that can ultimately benefit the broader population," said Greg Ourednik, president of SpectraPass.

New rapid COVID-19 test the result of university-industry partnership

Meet MILO, a powerful machine learning AI tool from UC Davis Health

Read more from the original source:
New platform uses machine-learning and mass spectrometer to rapidly process COVID-19 tests - UC Davis Health

Grants totaling $4.6 million support the use of machine learning to improve outcomes of people with HIV – Brown University

PROVIDENCE, R.I. [Brown University] Over the past four decades of treating HIV/AIDS, two important facts have been established: HIV-positive patients need to be put on treatment as soon as they're diagnosed and then kept on an effective treatment plan. "This response can help turn HIV into a chronic but manageable disease and can essentially help people live normal, healthy lives," said Joseph Hogan, a professor of public health and of biostatistics at Brown University, who has been researching HIV/AIDS for 25 years.

Hogan is one of the primary investigators on two recently awarded grants from the National Institutes of Health, totaling nearly $4.6 million over five years, to support the creation and use of data-driven tools that will allow care programs in Kenya to meet these key treatment goals.

"If the system works as designed, then we have confidence that we'll improve the health outcomes of people with HIV," Hogan said.

The first part of the project involves using data science to understand what's called the HIV care cascade, said Hogan, who is the co-director of the biostatistics program for the Academic Model Providing Access to Healthcare (AMPATH), a consortium of 14 North American universities that collaborate with Moi University in Eldoret, Kenya, on HIV research, care and training.

Hogan will collaborate with longtime scientific partner Ann Mwangi, associate professor of biostatistics at Moi University, who received a Ph.D. in biostatistics from Brown in 2011. Using the AMPATH-developed electronic health record database, a team co-led by Hogan and Mwangi will develop algorithm-based statistical machine learning tools to predict when and why patients might drop out of care and when their viral load levels indicate they are at risk of treatment failure.

These algorithms, Hogan said, will then be integrated into the electronic health record system to deliver the information at the point of care, through handheld tablets that physicians can use when sitting in the exam room with the patient. In consultation with experts in user interface design, the team will assess and test the most effective ways to communicate the results of the algorithms to care providers, so that providers can use them to make decisions about patient care.

The predictive modeling system the team is developing, Hogan said, will alert a physician to red flags in the patient's treatment plan at the point of care. This way, interventions can be arranged to help a patient get to their treatment appointments, for example, before the patient needs to miss or cancel them. Or, if a patient is predicted to have a high viral load, a clinician can refer them for additional monitoring to identify and treat the increase before it becomes a problem.
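The grant announcement does not specify the algorithms or record fields the AMPATH team will use, so the following is only a hypothetical Python sketch of a risk model of this general type: a logistic regression that scores the probability of a patient being lost to follow-up from a few invented features, trained here on synthetic data.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical features per patient record: days since last visit,
# most recent viral load (log10 copies/mL), and age. Labels mark whether
# the patient was later lost to follow-up. All values are synthetic.
rng = np.random.default_rng(1)
X = np.column_stack([
    rng.integers(0, 365, size=500),        # days since last visit
    rng.normal(2.5, 1.0, size=500),        # log10 viral load
    rng.integers(18, 70, size=500),        # age in years
])
y = rng.integers(0, 2, size=500)           # 1 = later lost to follow-up (synthetic)

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)

# Probability of dropout for a new, hypothetical patient record.
print(model.predict_proba([[120, 3.1, 34]])[0, 1])

In a deployed system, a score like this is what would surface as the "red flag" on a clinician's tablet, with the real features and thresholds chosen and validated by the research team.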

Original post:
Grants totaling $4.6 million support the use of machine learning to improve outcomes of people with HIV - Brown University

Revisit Top AI, Machine Learning And Data Trends Of 2021 – ITPro Today

This past year has been a strange one in many respects: an ongoing pandemic, inflation, supply chain woes, uncertain plans for returning to the office, and worrying unemployment levels followed by the Great Resignation. After the shock of 2020, anyone hoping for a calm 2021 had to have been disappointed.

Data management and digital transformation remained in flux amid the ups and downs. Due to the ongoing challenges of the COVID-19 pandemic, as well as trends that were already underway prior to 2021, this retrospective article has a variety of enterprise AI, machine learning and data developments to cover.

Automation was a buzzword in 2021, thanks in part to the advantages that tools like automation software and robotics provided companies. As workplaces adapted to COVID-19 safety protocols, AI-powered automation proved beneficial. Since March 2020, two-thirds of companies have accelerated their adoption of AI and automation, consultancy McKinsey & Company found, making it one of the top AI and data trends of 2021.

In particular, robotic process automation (RPA) gained traction in several sectors, where it was put to use for tasks like processing transactions and sending notifications. RPA-focused firms like UiPath and tech giants like Microsoft invested heavily in RPA this year. RPA software revenue will be up nearly 20% in 2021, according to research firm Gartner.

But while the pandemic may have sped up enterprise automation adoption, RPA tools appear to have staying power. For example, Research and Markets predicted the RPA market will have a compound annual growth rate of 31.5% from 2021 to 2026. If 2020 was a year of RPA investment, 2021 and beyond will see those investments going to scale.

"Micro-automation is one of the next steps in this area," said Mark Palmer, senior vice president of data, analytics and data science products at TIBCO Software, an enterprise data company. "Adaptive, incremental, dynamic learning techniques are growing fields of AI/ML that, when applied to the RPA's exhaust, can make observations on the fly," Palmer said. "These dynamic learning technologies help business users see and act on aha moments and make smarter decisions."
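As a generic illustration of the incremental learning techniques Palmer describes (not any specific TIBCO product), the sketch below updates a scikit-learn classifier one mini-batch at a time with partial_fit, which is how a model can keep adapting as new process events stream in; all data here is synthetic.

import numpy as np
from sklearn.linear_model import SGDClassifier

# Incremental (online) learning: the model is updated batch by batch,
# so it can keep adapting as new process events arrive.
model = SGDClassifier()
classes = np.array([0, 1])                      # must be declared for the first partial_fit

rng = np.random.default_rng(0)
for _ in range(100):                            # pretend each loop is a fresh batch of events
    X_batch = rng.normal(size=(32, 10))         # 32 events, 10 synthetic features each
    y_batch = rng.integers(0, 2, size=32)       # synthetic outcome labels
    model.partial_fit(X_batch, y_batch, classes=classes)

print(model.predict(rng.normal(size=(1, 10))))  # score a new, unseen event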

Automation also played an increasingly critical role in hybrid workplace models. While the tech sector has long accepted remote and hybrid work arrangements, other industries now embrace these models, as well. Automation tools can help offsite employees work efficiently and securely -- for example, by providing technical or HR support, security threat monitoring, and integrations with cloud-based services and software.

However, remote and hybrid workers do represent a potential pain point in one area: cybersecurity. With more employees working outside the corporate network, even if for only part of the work week, IT professionals must monitor more equipment for potential vulnerabilities.

The hybrid workforce influenced data trends in 2021. The wider distribution of IT infrastructure, along with increasing adoption of cloud-based services and software, added new layers of concerns about data storage and security. In addition, the surge in cyberattacks during the pandemic represented a substantial threat to enterprise data security. As organizations generate, store and use ever-greater amounts of data, an IT focus on cybersecurity is only going to become increasingly vital.

Altogether, these developments point to an overarching enterprise AI, ML and data trend for 2021: digital transformation. Spending on digital transformation is expected to hit $1.8 trillion in 2022, according to Statista, which illustrates that organizations are willing to invest in this area.

As companies realize the value of data and the potential of machine learning in their operations, they also recognize the limitations posed by their legacy systems and outdated processes. The pandemic spurred many organizations to either launch or elevate digital transformation strategies, and those strategies will likely continue throughout 2022.

How did the AI, ML and data trends of 2021 change the way you work? Tell us in the comments below.

Here is the original post:
Revisit Top AI, Machine Learning And Data Trends Of 2021 - ITPro Today

These are the top priorities for tech executives in 2022, survey reveals – CNBC

Big software IPOs, cyberattacks and the push into the metaverse were just some of the themes coming out of the technology sector in 2021.

As technology executives look towards the year ahead, they say things like artificial intelligence, cloud computing and machine learning will be critically important to their companies in 2022, according to a recent CNBC Technology Executive Council survey of 44 executives.

Here's a breakdown from the CNBC TEC survey of the technologies expected to receive the most time and money.

A vast majority (81%) of executives said that artificial intelligence would either be critically important or very important to their companies in 2022.

Twenty percent of respondents also said that AI is the technology that they expect to invest the most resources in over the next 12 months.

The emphasis on cloud computing shows no signs of lessening in the year ahead, as 82% of respondents said that the technology would be critically important to their company in 2022. It is also the technology where the most executives (34%) said their companies would be investing the most money.

Ninety-one percent of executives said that machine learning would be critically or very important to their companies in 2022, while 20% said this would be the area they will invest the most money in.

It is also the technology that the most executives (18%) said they would be the most excited to see grow and develop in the year ahead.

No-code and low-code software was the technology with the second-highest number of executives (11%) saying they were most excited to see it grow and develop in 2022.

Other technologies that were highlighted by multiple executives include explainable AI, robotics and software-defined security.

Read the original post:
These are the top priorities for tech executives in 2022, survey reveals - CNBC

The automated machine learning market is predicted to reach $14,830.8 million by 2030, demonstrating a CAGR of 45.6% from 2020 to 2030 – Yahoo Finance

AutoML Market: from $346.2 million in 2020, the automated machine learning market is predicted to reach $14,830.8 million by 2030, demonstrating a CAGR of 45.6% from 2020 to 2030.

New York, Dec. 16, 2021 (GLOBE NEWSWIRE) -- Reportlinker.com announces the release of the report "AutoML Market" - https://www.reportlinker.com/p06191010/?utm_source=GNW The major factors driving the market are the burgeoning requirement for efficient fraud detection solutions, soaring demand for personalized product recommendations, and increasing need for predictive lead scoring.

The COVID-19 pandemic has contributed significantly to the evolution of digital business models, with many healthcare companies adopting machine-learning-enabled chatbots to enable the contactless screening of COVID-19 symptoms. Moreover, Clevy.io, a France-based start-up, and Amazon Web Services (AWS) have launched a chatbot that makes it easy to find official government communications about the COVID-19 infection. Thus, the pandemic has positively impacted the market.

The service category, under the offering segment, is predicted to demonstrate the faster growth in the coming years. This is credited to the burgeoning requirement for implementation and integration, consulting, and maintenance services, which help enhance business productivity and augment coding activities. Additionally, these services aid in automating workflows, which, in turn, enables the mechanization of complex operations.

The cloud category dominated the AutoML market, within the deployment type segment, in the past. Moreover, this category is predicted to grow rapidly in the forthcoming years on account of the flexibility and scalability provided by cloud-based automated machine learning (AutoML) solutions.

Geographically, North America held the largest share in the past, and this trend is expected to continue in the coming years. This is credited to the soaring venture capital funding by artificial intelligence (AI) companies for research and development (R&D), in order to advance AutoML.

Asia-Pacific (APAC) is predicted to be the fastest-growing region in the market in the forthcoming years. This is ascribed to the growing information technology (IT) investments and increasing fintech adoption in the region. In addition, the growing government focus on incorporating AI in multiple verticals is supporting the advance of the market in the region.

For instance, in October 2021, Hivecell, which is an edge as a service company, entered into a partnership with DataRobot Inc. for solving bigger challenges and hurdles at the edge, by processing various ML models on site and outside the data closet. By incorporating the two solutions, businesses can make data-driven decisions more efficiently.

The major players in the AutoML market are DataRobot Inc., dotData Inc., H2O.ai Inc., Amazon Web Services Inc., Big Squid Inc., Microsoft Corporation, Determined.ai Inc., SAS Institute Inc., Squark, and EdgeVerve Systems Limited. Read the full report: https://www.reportlinker.com/p06191010/?utm_source=GNW

About Reportlinker: ReportLinker is an award-winning market research solution. Reportlinker finds and organizes the latest industry data so you get all the market research you need - instantly, in one place.

Read the original:
The automated machine learning market is predicted to reach $14,830.8 million by 2030, demonstrating a CAGR of 45.6% from 2020 to 2030 - Yahoo Finance

Human-centered AI can improve the patient experience – Healthcare IT News

Given the growing ubiquity of machine learning and artificial intelligence in healthcare settings, it's become increasingly important to meet patient needs and engage users.

And as panelists noted during a HIMSS Machine Learning and AI for Healthcare Forum session this week, designing technology with the user in mind is a vital way to ensure tools become an integral part of workflow.

"Big Tech has stumbled somewhat" in this regard, said Bill Fox, healthcare and life sciences lead at SambaNova Systems. "The patients, the providers they don't really care that much about the technology, how cool it is, what it can do from a technological standpoint.

"It really has to work for them," Fox added.

Jai Nahar, a pediatric cardiologist at Children's National Hospital, agreed, stressing the importance of human-centered AI design in healthcare delivery.

"Whenever we're trying to roll out a productive solution that incorporates AI," he said, "right from the designing [stage] of the product or service itself, the patients should be involved."

That inclusion should extend to provider users as well, he said: "Before rolling out any product or service, we should involve physicians or clinicians who are going to use the technology."

The panel, moderated by Rebekah Angove, vice president of evaluation and patient experience at the Patient Advocate Foundation, noted that AI is already affecting patients both directly and indirectly.

In ideal scenarios, for example, it's empowering doctors to spend more time with individuals. "There's going to be a human in the loop for a very long time," said Fox.

"We can power the clinician with better information from a much larger data set," he continued. AI is also enabling screening tools and patient access, said the experts.

"There are many things that work in the background that impact [patient] lives and experience already," said Piyush Mathur, staff anesthesiologist and critical care physician at the Cleveland Clinic.

At the same time, the panel pointed to the role clinicians can play in building patient trust around artificial intelligence and machine learning technology.

Nahar said that as a provider, he considers several questions when using an AI-powered tool for his patient. "Is the technology really needed for this patient to solve this problem?" he said he asks himself. "How will it improve the care that I deliver to the patient? Is it something reliable?"

"Those are the points, as a physician, I would like to know," he said.

Mathur also raised the issue of educating clinicians about AI. "We have to understand it a little bit better to be able to translate that science to the patients in their own language," he said. "We have to be the guardians of making sure that we're providing the right data for the patient."

The panelists discussed the problem of bias, about which patients may have concerns, and rightly so.

"There are multiple entry points at which bias can be introduced," said Nahar.

During the design process, he said, multiple stakeholders need to be involved to closely consider where bias could be coming from and how it can be mitigated.

As panelists have pointed out at other sessions, he also emphasized the importance of evaluating tools in an ongoing process.

Developers and users should be asking themselves, "How can we improve and make it better?" he said.

Overall, said Nahar, best practices and guidance need to be established to better implement and operationalize AI from both the patient perspective and the provider perspective.

The onus is "upon us to make sure we use this technology in the correct way to improve care for our patients," added Mathur.

Kat Jercich is senior editor of Healthcare IT News. Twitter: @kjercich. Email: kjercich@himss.org. Healthcare IT News is a HIMSS Media publication.

View original post here:
Human-centered AI can improve the patient experience - Healthcare IT News

Machine Learning as a Service (MLaaS) Market 2021: Big Things are Happening in Development and Future Assessment by 2031 – Digital Journal

Pune, Maharashtra, India, December 17, 2021 (Wiredrelease) Prudour Pvt. Ltd: High Use Of Machine Learning as a Service (MLaaS) Market | Better Business Growth, A One-Stop Guide For Growing Business In 2021

The Machine Learning as a Service (MLaaS) market has improved over the last few years. There have been more entrants and greater technological advancement, as well as a growing rate of expansion due to the measures taken against short-term economic downturns. This report is based on several different types of research, with findings obtained from both primary and secondary data-gathering tools. The study is a blend of qualitative and quantitative information, highlighting key market developments as well as industry challenges, gap analysis and new opportunities that could be trending. A variety of graphical presentation techniques are used to demonstrate the facts.

The report provides a comprehensive description of the Machine Learning as a Service (MLaaS) market and presents an overview of the global market. The information in this document includes a forecast (2021-2031), current and future trends and drivers, and opinions from industry professionals on these topics. With technological advancements and new entrants appearing, many companies are looking for economic countermeasures to increase their growth rates, and the competitive nature of the industry is forcing key players to focus on new merger and acquisition strategies in order to maintain their share of the market.

Looking for customized insights to grow your business? Ask for a sample report here: https://market.us/report/machine-learning-as-a-service-mlaas-market/request-sample/

The influential players covered in this report are:

Google, IBM Corporation, Microsoft Corporation, Amazon Web Services, BigML, FICO, Yottamine Analytics, Ersatz Labs, Predictron Labs, H2O.ai, AT&T, Sift Science

Segmentation of the Machine Learning as a Service (MLaaS) market by top product type, best application, and key region:

Segmentation by Type:

Software Tools; Cloud and Web-based Application Programming Interfaces (APIs); Other

Segmentation by Application:

Manufacturing; Retail; Healthcare and Life Sciences; Telecom; BFSI; Other (Energy and Utilities, Education, Government)

Machine Learning as a Service (MLaaS) Market: Regional Segment Analysis

North America (USA, Canada, and Mexico)

Europe (Russia, France, Germany, UK, and Italy)

Asia-Pacific (China, Korea, India, Japan, and Southeast Asia)

South America (Brazil, Colombia, Argentina, etc.)

The Middle East and Africa (Nigeria, UAE, Saudi Arabia, Egypt, and South Africa)

Place An Inquiry Before Investment (Use Corporate Details Only): https://market.us/report/machine-learning-as-a-service-mlaas-market/#inquiry

The main features of the 2021 Global Machine Learning as a Service (MLaaS) Market report:

The latest technical enhancements and new Machine Learning as a Service (MLaaS) releases, to help consumers plan, make informed business decisions, and reach their expected future achievements.

The Machine Learning as a Service (MLaaS) coverage focuses on future methodology changes, current business progressions and open opportunities in the global market.

The investment return analysis, SWOT analysis, and feasibility study are also used for Machine Learning as a Service (MLaaS) market data analysis.

Key Highlights of the Machine Learning as a Service (MLaaS) Market Research Report:

1. The report summarizes the machine learning as a service (mlaas) market by stating the basic product definition, the number of product applications, product scope, product cost and price, supply and demand ratio, and a market overview.

2. Competitive landscape of all leading key players along with their business strategies, approaches, and latest machine learning as a service (mlaas) market movements.

3. It details market feasibility, investment opportunities, growth factors, restraints, market risks, and the driving forces of the machine learning as a service (mlaas) business.

4. It performs a comprehensive study of emerging players of machine learning as a service (mlaas) business along with the existing ones.

5. It draws on primary and secondary research and resources to estimate top products, market size, and industrial partnerships in the machine learning as a service (mlaas) business.

6. Global Machine Learning as a Service (MLaaS) market report ends by articulating research findings, data sources, results, list of dealers, sales channels, businesses and distributors along with an appendix.

Need More Information about the Machine Learning as a Service (MLaaS) market: https://market.us/report/machine-learning-as-a-service-mlaas-market/

Key questions include:

1. What can we estimate about the anticipated growth rates and the global machine learning as a service (mlaas) industry size by 2031?

2. Which investors will use the specifics of our research, along with key parameters and forecast periods, to guide their investment decisions?

3. What will happen in the coming existing and emerging markets?

4. Which vendors make a profit, and which do not?

5. What is the forecast for upcoming machine learning as a service (mlaas) market behavior, including its trends, drivers and challenges for development?

6. What industry opportunities and dangers are faced by vendors in the market?

7. Which machine learning as a service (mlaas) industry opportunities and challenges do most vendors in the market face?

8. What are the variables affecting the machine learning as a service (mlaas) market share?

9. What are the outcomes of the market's SWOT and five forces analyses?

Get in Touch with Us:

Mr. Lawrence John

Market.us (Powered By Prudour Pvt. Ltd.)

Send Email: inquiry@market.us

Address: 420 Lexington Avenue, Suite 300, New York City, NY 10170, United States

Tel: +1 718 618 4351

Website: https://market.us

Blog: https://techmarketreports.com/

This content has been published by Prudour Pvt. Ltd company. The WiredRelease News Department was not involved in the creation of this content. For press release service enquiry, please reach us at contact@wiredrelease.com.

Link:
Machine Learning as a Service (MLaaS) Market 2021: Big Things are Happening in Development and Future Assessment by 2031 - Digital Journal

USC and Meta Collaborate to establish the USC-Meta Center for Research and Education In AI and Learning – USC Viterbi | School of Engineering – USC…

Associate Director Meisam Razaviyayn (L) and Director Murali Annavaram (R).

As with other new technologies, AI and Machine Learning have come to play an increasingly important role in our lives; however, there are many technological challenges to making them sustainable, energy-efficient, and scalable to planetary-scale demands. In an effort to address these challenges, advance AI research, and increase accessibility in AI education, the Ming Hsieh Department of Electrical and Computer Engineering and the Daniel J. Epstein Department of Industrial and Systems Engineering at the USC Viterbi School of Engineering, together with Meta, have established the USC ECE-ISE Meta Center for Research and Education in AI and Learning.

Supporting a variety of activities, including open-source AI research and graduate scholarships, the center will be run by Murali Annavaram, Professor of Electrical and Computer Engineering, serving as Director, and by Meisam Razaviyayn, Assistant Professor of Industrial and Systems Engineering, serving as Associate Director.

"This center will tackle the scaling and sustainability aspects of AI/ML systems as these technologies are deployed for solving planetary-scale challenges," said Annavaram. "To this end we aim to advance our understanding of how AI algorithms interact with hardware, and to use this understanding in the design of energy efficient and open-source AI/ML systems of the future." Alongside open-source technology initiatives, the center will take steps to advance AI education equitably into the future. "A major step in creating dependable AI systems is the development of reliable training mechanisms and responsible algorithms for modern world challenges," said Razaviyayn. "To this end, we believe that by equally supporting research and education, we will help bring about groundbreaking, fair, and trustworthy AI technology."

The center will support a variety of initiatives through Research, Fellowships, Curriculum, and Outreach activities. Initially the research themes will be centered on benchmarking and assessment technologies for AI algorithm-hardware platform interactions, and developing computational optimization algorithms for AI. "These two areas of research are of vital importance to both the Epstein and the Ming Hsieh Departments, while also helping advance our work in AI in several ways," said Maged Dessouky, Chair of the Daniel J. Epstein Department of Industrial and Systems Engineering.

Producing consequential research will be coupled with rigorous educational training. The center will train a new generation of students who understand both the technical and the societal impacts of this important and pervasive new technology. "I am excited to see USC and Meta come together to create the research center," said Bill Jia, Vice President of Engineering at Meta. "The center will draw more students to understand AI and how it benefits and connects us all. With a focus on research in AI hardware, compilers, frameworks and algorithms, we can improve the performance, scalability, efficiency and productivity of AI."

"I look forward to seeing a new generation of students take interest in helping to shape the future of AI and Machine Learning," said Vijay Rao, Director of Infrastructure at Meta. "As we tackle the challenges we face today in AI, it is essential that we invest in education and research in this growing field."

The center will support enhanced curricula and opportunities for hands-on laboratory training on AI and Machine Learning computing clusters for students in the MS program in Electrical and Computer Engineering-Machine Learning and Data Science, in the MS in Analytics, and in other related programs. The former program provides students with focused, rigorous training in the theory, methods, and applications of data science, machine learning and signal and information processing; the latter combines optimization, statistics, and machine learning to solve real problems in today's data-driven world.

"These machines and the graduate courses they will help support are hugely useful to our department and we expect them to play a vital role in enhancing our ability to train the next generation of AI scientists," said Richard Leahy, Chair of the Ming Hsieh Department of Electrical and Computer Engineering.

Finally, the new center will pursue a variety of initiatives aimed at improving outreach to a diverse group of students. Some of the planned initiatives include summer internship programs and workshops to provide students with more hands-on ML system design experiences, as well as an annual symposium and poster session to give students better access to mentors and industry leaders. "Diversity and inclusion are important values to USC Viterbi. Pursuing them is not only the right thing to do, but it also makes for better engineers and a better society," said Kelly Goulis, Sr. Associate Dean for Viterbi Admissions and Student Affairs of the Viterbi School of Engineering. "Established programs in our office such as SURE (Summer Undergraduate Research Experience) and CURVE (Center for Undergraduate Research in Viterbi Engineering) address undergraduate research and outreach to diverse communities, thus helping also advance the outreach goals of the USC-Meta Center."

Continue reading here:
USC and Meta Collaborate to establish the USC-Meta Center for Research and Education In AI and Learning - USC Viterbi | School of Engineering - USC...

2021 in Ed Tech: AI, Data Analytics Were Top Priorities – Government Technology

The last 12 months were a time of experimentation for both K-12 and higher education institutions. Flush with new federal funding but straining against disruptions such as COVID-19 and rampant cyber threats, schools adapted with help from ed-tech companies, nonprofits and other industry partners to meet a growing demand for flexible online learning options, as well as to improve student performance and tackle learning loss that resulted from last year's school closures.

For school districts, colleges and universities, often this work included efforts to close the digital divide and distribute tablets and laptops; start coding boot camps and other training programs to prepare the future workforce for new technologies; and make cybersecurity investments and study programs to create a bulwark of infrastructure and skills against cyber criminals.

For ed-tech companies and industry leaders helping schools through this, much of the focus was on student data and AI-driven programs designed to assist with lesson planning, student feedback and educational content.

Along those lines, Google announced the creation of an AI tutor last month to provide students with personalized feedback on assignments, academic coaching and course advisement. It was an expansion of Google's Student Success Services, a software suite released in 2020, which includes virtual assistant functions, analytics, enrollment algorithms and other higher ed applications.

"They all are thinking about how we can make learning more personalized, aligning it to when you need it for access 24/7, and using data more effectively to engage students," said Google's Butschi. "As you think about that, it starts to tee up to why we're seeing data analytics, artificial intelligence and machine learning to personalize and gauge learning starting to pop up more and more."

According to a recent report from Market Research Engine, the global market for artificial intelligence in education technology will reach $5.80 billion by 2025, with a compound annual growth rate of 45 percent.

Neil Heffernan, a computer science professor at Worcester Polytechnic Institute and lead developer of the AI-based student feedback program ASSISTments, said this projected growth is partly due to AI's potential to identify and address areas in need of improvement and help close achievement gaps.

He said ASSISTments' AI feature Quick-Comments won an $8 million grant last week from the U.S. Department of Education's Education Innovation and Research program to improve its machine-learning tutoring functions.

"What we want to do is find out which human tutors are doing a good job, look at what they're doing, and put that back into the computer so that when no humans are around, we can have the program doing that," he said. "When we have the computer doing that, we can measure how they do on the next problem."

While AI is helping schools with tutoring and curricula, new data management systems are streamlining the collection and storage of student performance data to identify and address areas where improvement is needed. The aim is to make the data more readable and enable data systems to integrate with learning management systems (LMS) like Google Classroom and Canvas that have only become more commonplace in K-12 during COVID-19.

Over the past year, K-12 districts and state education officials have worked with organizations such as the analytics nonprofit Ed-Fi Alliance and adopted tools like the Apigee API platform from Google Cloud to standardize data systems and make them interoperable.

Trenton Goble, VP of K-12 Strategy at Instructure, said schools need student performance data that can "flow into a data warehouse environment with clear and easy-to-use reporting" and gauge the impact of remote learning.

"As schools went back to a face-to-face environment this fall, we saw a lot of interest in assessments," he said. "Assessments only have value insofar as teachers are using the data, so being able to present data in a meaningful way is a big trend."

Goble said adoption of Instructure's Canvas LMS has seen significant growth this year, as schools slowly made the transition to using an LMS for lower elementary grade levels following last year's first closures.

He said one of the main advantages of Canvas has been its ability to integrate new digital learning tools into the LMS, noting the emergence of new AI-driven ed-tech products marketed to educators overwhelmed with choices in an ever-growing market.

"We've always been open and extensible as a platform. Our ability to allow third-party resources to integrate into the LMS is vital. The ability to integrate is, I think, key. It's an expectation at this point," he said, as schools are becoming more sophisticated in working with new technologies. "[Choosing the right tools] is the toughest element for school districts. For districts that want to be open in allowing teachers to find their own tools in the K-12 space, you want those tools to integrate into the LMS."

According to a recent report from the policy think tank Information Technology and Innovation Foundation, AR/VR technology could prove a promising addition to digital learning toolkits at schools and universities, eventually.

Ellysse Dick, a policy analyst from ITIF and author of the report, said AR/VR programs enable experiential lessons that might make up for learning loss that occurred over the past two years.

"A virtual field trip isn't a full replacement for a real-life field trip, but for those students who wouldn't otherwise be able to visit places that might be a bus ride away for others, VR can give them opportunities to experience some of those things," she told Government Technology in September.

But while AR and VR tools and the "gamification" of learning have garnered interest in schools, Google's Butschi reiterated that data analytics and AI are "top priorities" when it comes to tracking and improving grades.

Heffernan also said this year's focus on machine learning in ed tech eclipsed AR/VR, which he said "continues to be totally sexy and totally oversold." He expects this trend to continue into 2022 as ed-tech developers and researchers make improvements to AI's capabilities.

"When some people think about AI, they think too much about Hollywood and computers taking over, and I'm not worried about that at all because I know [today's] systems are really dumb," he said, noting that AI has already helped teachers do their jobs more effectively despite current limitations.

See the article here:
2021 in Ed Tech: AI, Data Analytics Were Top Priorities - Government Technology