
Category Archives: Ai

The Global Artificial Intelligence (AI) Chips Market is expected to grow by $73.49 billion during 2021-2025, progressing at a CAGR of over 51% during…

Posted: July 23, 2021 at 4:14 am

Global Artificial Intelligence (AI) Chips Market 2021-2025: The analyst has been monitoring the artificial intelligence (AI) chips market, and it is poised to grow by $73.49 billion during 2021-2025, progressing at a CAGR of over 51% during the forecast period.
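As a quick sanity check on those headline numbers, here is a back-of-the-envelope calculation. This is illustrative only: the release does not state the base-year market size, and the four-year compounding window is an assumption.

```python
# Back-of-the-envelope check of the headline figures. Illustrative only:
# the release does not give the base-year market size, and the 4-year
# window (2021 -> 2025) is an assumption.
cagr = 0.51        # "a CAGR of over 51%"
years = 4          # assumed forecast window
growth = 73.49     # incremental growth in $ billions, from the release

# CAGR relates endpoints: end = start * (1 + cagr) ** years, so a market
# growing BY `growth` satisfies start * ((1 + cagr) ** years - 1) = growth.
implied_start = growth / ((1 + cagr) ** years - 1)
implied_end = implied_start + growth
print(f"implied 2021 base: ${implied_start:.1f}B; 2025 size: ${implied_end:.1f}B")
# -> roughly $17.5B growing to about $91B
```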

New York, July 22, 2021 (GLOBE NEWSWIRE) -- Reportlinker.com announces the release of the report "Global Artificial Intelligence (AI) Chips Market 2021-2025" - https://www.reportlinker.com/p05006367/?utm_source=GNW

Our report on the artificial intelligence (AI) chips market provides a holistic analysis, market size and forecast, trends, growth drivers, and challenges, as well as vendor analysis covering around 25 vendors. The report offers an up-to-date analysis of the current global market scenario, the latest trends and drivers, and the overall market environment. The market is driven by the increasing adoption of AI chips in data centers, an increased focus on developing AI chips for smartphones, and the development of AI chips for autonomous vehicles. The artificial intelligence (AI) chips market analysis includes the product segment and geographic landscape.

The artificial intelligence (AI) chips market is segmented as below:

By Product: ASICs, GPUs, CPUs, FPGAs

By Geography: North America, Europe, APAC, South America, MEA

This study identifies the convergence of AI and IoT as one of the prime reasons driving the artificial intelligence (AI) chips market growth during the next few years. Also, increasing investments in AI start-ups and advances in the quantum computing market will lead to sizable demand in the market.

The analyst presents a detailed picture of the market by way of the study, synthesis, and summation of data from multiple sources, analyzed against key parameters. Our report on the artificial intelligence (AI) chips market covers the following areas: market sizing, market forecast, and industry analysis.

This robust vendor analysis is designed to help clients improve their market position; in line with this, the report provides a detailed analysis of several leading artificial intelligence (AI) chips market vendors, including Alphabet Inc., Broadcom Inc., Intel Corp., NVIDIA Corp., Qualcomm Inc., Advanced Micro Devices Inc., Huawei Investment and Holding Co. Ltd., International Business Machines Corp., Samsung Electronics Co. Ltd., and Taiwan Semiconductor Manufacturing Co. Ltd. The report also includes information on upcoming trends and challenges that will influence market growth, to help companies strategize and leverage forthcoming growth opportunities. The study was conducted using an objective combination of primary and secondary information, including inputs from key participants in the industry, and the report contains a comprehensive market and vendor landscape in addition to an analysis of the key vendors.

The analyst presents a detailed picture of the market by way of the study, synthesis, and summation of data from multiple sources, analyzed against key parameters such as profit, pricing, competition, and promotions. It presents various market facets by identifying the key industry influencers. The data presented is comprehensive, reliable, and a result of extensive research - both primary and secondary. Technavio's market research reports provide a complete competitive landscape and an in-depth vendor selection methodology and analysis using qualitative and quantitative research to forecast accurate market growth. Read the full report: https://www.reportlinker.com/p05006367/?utm_source=GNW

About Reportlinker

ReportLinker is an award-winning market research solution. Reportlinker finds and organizes the latest industry data so you get all the market research you need - instantly, in one place.


Predicting a Boiling Crisis: Infrared Cameras and AI Provide Insight Into Physics of Boiling – SciTechDaily

Posted: at 4:13 am

By Matthew Hutson, MIT Department of Nuclear Science and Engineering, July 22, 2021

Pictures of the boiling surfaces taken using a scanning electron microscope: Indium tin oxide (top left), copper oxide nanoleaves (top right), zinc oxide nanowires (bottom left), and porous coating of silicon dioxide nanoparticles obtained by layer-by-layer deposition (bottom right). Credit: SEM photos courtesy of the researchers.

MIT researchers train a neural network to predict a boiling crisis, with potential applications for cooling computer chips and nuclear reactors.

Boiling is not just for heating up dinner. It's also for cooling things down. Turning liquid into gas removes energy from hot surfaces, and keeps everything from nuclear power plants to powerful computer chips from overheating. But when surfaces grow too hot, they might experience what's called a boiling crisis.

In a boiling crisis, bubbles form quickly, and before they detach from the heated surface, they cling together, establishing a vapor layer that insulates the surface from the cooling fluid above. Temperatures rise even faster and can cause catastrophe. Operators would like to predict such failures, and new research offers insight into the phenomenon using high-speed infrared cameras and machine learning.

Matteo Bucci, the Norman C. Rasmussen Assistant Professor of Nuclear Science and Engineering at MIT, led the new work, published on June 23, 2021, in Applied Physics Letters. In previous research, his team spent almost five years developing a technique in which machine learning could streamline relevant image processing. In the experimental setup for both projects, a transparent heater 2 centimeters across sits below a bath of water. An infrared camera sits below the heater, pointed up and recording at 2,500 frames per second with a resolution of about 0.1 millimeter. Previously, people studying the videos would have to manually count the bubbles and measure their characteristics, but Bucci trained a neural network to do the chore, cutting a three-week process to about five seconds. "Then we said, 'Let's see if other than just processing the data we can actually learn something from an artificial intelligence,'" Bucci says.
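To make the automated chore concrete: for every frame, the pipeline must locate bubble footprints and extract their statistics. The sketch below reduces the task to a plain threshold-and-label heuristic for illustration; the MIT tool is a trained neural network, not this heuristic, and the frame and threshold here are placeholders.

```python
# The chore the neural network automates, reduced to its essence: find
# bright bubble footprints in each infrared frame and count them. A
# threshold-and-label heuristic stands in for the trained network.
import numpy as np
from scipy import ndimage

def count_bubbles(ir_frame: np.ndarray, threshold: float) -> int:
    """Count connected bright regions (candidate bubbles) in one frame."""
    mask = ir_frame > threshold          # pixels hotter than the cutoff
    _, n_regions = ndimage.label(mask)   # connected-component labeling
    return n_regions

frame = np.random.rand(200, 200)  # placeholder for one 2,500-fps IR frame
print(count_bubbles(frame, threshold=0.99))
```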

The goal was to estimate how close the water was to a boiling crisis. The system looked at 17 factors provided by the image-processing AI: the nucleation site density (the number of sites per unit area where bubbles regularly grow on the heated surface), as well as, for each video frame, the mean infrared radiation at those sites and 15 other statistics about the distribution of radiation around those sites, including how they're changing over time. Manually finding a formula that correctly weighs all those factors would present a daunting challenge. But "artificial intelligence is not limited by the speed or data-handling capacity of our brain," Bucci says. Further, "machine learning is not biased by our preconceived hypotheses about boiling."

To collect data, they boiled water on a surface of indium tin oxide, by itself or with one of three coatings: copper oxide nanoleaves, zinc oxide nanowires, or layers of silicon dioxide nanoparticles. They trained a neural network on 85 percent of the data from the first three surfaces, then tested it on 15 percent of the data from those conditions plus the data from the fourth surface, to see how well it could generalize to new conditions. According to one metric, it was 96 percent accurate, even though it hadn't been trained on all the surfaces. "Our model was not just memorizing features," Bucci says. "That's a typical issue in machine learning. We're capable of extrapolating predictions to a different surface."
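That evaluation scheme can be sketched as follows, under stated assumptions: the inputs are the 17 per-frame statistics, the label is a binary near-crisis flag, and the model class is a stand-in (the article does not name the architecture). The arrays are random placeholders with the shape of the real problem.

```python
# A minimal sketch of the train/test protocol described above, with a
# generic scikit-learn classifier standing in for the paper's model and
# random placeholder data standing in for the boiling experiments.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X_three = rng.normal(size=(10_000, 17))   # frames from the first 3 surfaces
y_three = rng.integers(0, 2, size=10_000)
X_fourth = rng.normal(size=(2_000, 17))   # frames from the unseen 4th surface
y_fourth = rng.integers(0, 2, size=2_000)

# Train on 85 percent of the first three surfaces' data.
X_tr, X_te, y_tr, y_te = train_test_split(
    X_three, y_three, test_size=0.15, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)

# Test on the held-out 15 percent plus ALL fourth-surface data, which
# probes generalization to a boiling surface the model never saw.
X_eval = np.vstack([X_te, X_fourth])
y_eval = np.concatenate([y_te, y_fourth])
print("accuracy:", accuracy_score(y_eval, model.predict(X_eval)))
```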

The team also found that all 17 factors contributed significantly to prediction accuracy (though some more than others). Further, instead of treating the model as a black box that used 17 factors in unknown ways, they identified three intermediate factors that explained the phenomenon: nucleation site density, bubble size (which was calculated from eight of the 17 factors), and the product of growth time and bubble departure frequency (which was calculated from 12 of the 17 factors). Bucci says models in the literature often use only one factor, but this work shows that we need to consider many, and their interactions. This is a big deal.

"This is great," says Rishi Raj, an associate professor at the Indian Institute of Technology at Patna, who was not involved in the work. Boiling has such complicated physics. It involves at least two phases of matter, with many factors contributing to a chaotic system. "It's been almost impossible, despite at least 50 years of extensive research on this topic, to develop a predictive model," Raj says. "It makes a lot of sense to use the new tools of machine learning."

Researchers have debated the mechanisms behind the boiling crisis. Does it result solely from phenomena at the heating surface, or also from distant fluid dynamics? This work suggests surface phenomena are enough to forecast the event.

Predicting proximity to the boiling crisis doesn't only increase safety. It also improves efficiency. By monitoring conditions in real time, a system could push chips or reactors to their limits without throttling them or building unnecessary cooling hardware. "It's like a Ferrari on a track," Bucci says: "You want to unleash the power of the engine."

In the meantime, Bucci hopes to integrate his diagnostic system into a feedback loop that can control heat transfer, thus automating future experiments, allowing the system to test hypotheses and collect new data. "The idea is really to push the button and come back to the lab once the experiment is finished." Is he worried about losing his job to a machine? "We'll just spend more time thinking, not doing operations that can be automated," he says. In any case: "It's about raising the bar. It's not about losing the job."

Reference: "Decrypting the boiling crisis through data-driven exploration of high-resolution infrared thermometry measurements" by Madhumitha Ravichandran, Guanyu Su, Chi Wang, Jee Hyun Seong, Artyom Kossolapov, Bren Phillips, Md Mahamudur Rahman and Matteo Bucci, 23 June 2021, Applied Physics Letters. DOI: 10.1063/5.0048391


Atos and Graphcore Partner to Deliver Advanced AI HPC Solutions Worldwide – HPCwire

Posted: at 4:13 am

PARIS and BRISTOL, England, July 22, 2021 – Atos and Graphcore today announce that they have signed a partnership to accelerate performance and innovation in Artificial Intelligence (AI) by integrating Graphcore's advanced IPU compute systems into Atos' recently launched ThinkAI offering, to bring high-performance AI solutions to customers worldwide.

This partnership will benefit both parties. Atos' long-standing position as a European leader in high-performance computing (HPC) and trusted advisor, provider and integrator of HPC solutions at scale will give Graphcore access to a multitude of new customers, sectors and geographies. Graphcore in turn will work with Atos to expand its global reach by targeting large corporate enterprises in sectors including finance, healthcare, telecoms and consumer internet, as well as national labs and universities focused on scientific research, which are rapidly developing their AI capabilities.

ThinkAI brings together Atos' AI business consultancy expertise, its experts at the Atos Center of Excellence in Advanced Computing, its digital security capabilities, and its software, such as the Atos HPC Software Suites, to enable organizations to accelerate time to AI operationalization and industrialization.

Graphcore, the UK-headquartered maker of the Intelligence Processing Unit (IPU), plays a significant role in Atos' ThinkAI offering, which is focused on the twin objectives of accelerating pure artificial intelligence applications and augmenting traditional HPC simulation with AI. Graphcore's IPU-POD systems for scale-up datacentre computing will be an integral part of ThinkAI.

Even before today's formal launch of the partnership, the two companies welcomed their first major joint customer, one of the largest cloud providers in South Korea, which will be using Graphcore systems in large-scale AI cloud datacenters, in a deal facilitated by Atos.

"ThinkAI represents a massive commitment to the future of artificial intelligence by one of the world's most trusted technology companies. For Atos to have put Graphcore as a key part of its strategy says a great deal about the maturity of our hardware and software, and the ability of our systems to deliver on customer needs," said Fabrice Moizan, GM and SVP Sales EMEAI and Asia Pacific at Graphcore.

Agnès Boudot, Senior Vice President, Head of HPC & Quantum at Atos, said: "With ThinkAI, we're making it possible for organizations from any industry to achieve breakthroughs with AI. Graphcore's IPU hardware and Poplar software are opening up new opportunities for innovators to explore the potential of AI for their organizations, complemented by our industry-tailored AI business consultancy, digital security capabilities and software. We're excited to be orchestrating these cutting-edge technologies in our ThinkAI solution."

About Atos

Atos is a global leader in digital transformation with 105,000 employees and annual revenue of over €11 billion. The European number one in cybersecurity, cloud and high-performance computing, the Group provides tailored end-to-end solutions for all industries in 71 countries. A pioneer in decarbonization services and products, Atos is committed to a secure and decarbonized digital environment for its clients. Atos operates under the brands Atos and Atos|Syntel. Atos is an SE (Societas Europaea), listed on the CAC40 Paris stock index.

The purpose of Atos is to help design the future of the information space. Its expertise and services support the development of knowledge, education and research in a multicultural approach and contribute to the development of scientific and technological excellence. Across the world, the Group enables its customers and employees, and members of societies at large to live, work and develop sustainably, in a safe and secure information space.

About Graphcore

Graphcore is the inventor of the Intelligence Processing Unit (IPU), the world's most sophisticated microprocessor, specifically designed for the needs of current and next-generation artificial intelligence workloads.

Graphcore's IPU-POD datacenter systems, for scale-up and scale-out AI compute, offer the ability to run large models across multiple IPUs, or to share the compute resource between different users and workloads.

Since its founding in 2016, Graphcore has raised more than $730 million in funding.

Investors include Sequoia Capital, Microsoft, Dell, Samsung, BMW iVentures, Robert Bosch Venture Capital, as well as leading AI innovators including Demis Hassabis (Deepmind), Pieter Abbeel (UC Berkeley), and Zoubin Ghahramani (Google Brain).

Source: Graphcore


AI spots shipwrecks from the ocean surface and even from the air – The Conversation US

Posted: at 4:13 am

The Research Brief is a short take about interesting academic work.

In collaboration with the United States Navy's Underwater Archaeology Branch, I taught a computer how to recognize shipwrecks on the ocean floor from scans taken by aircraft and ships on the surface. The computer model we created is 92% accurate in finding known shipwrecks. The project focused on the coasts of the mainland U.S. and Puerto Rico. It is now ready to be used to find unknown or unmapped shipwrecks.

The first step in creating the shipwreck model was to teach the computer what a shipwreck looks like. It was also important to teach the computer how to tell the difference between wrecks and the topography of the seafloor. To do this, I needed lots of examples of shipwrecks. I also needed to teach the model what the natural ocean floor looks like.

Conveniently, the National Oceanic and Atmospheric Administration keeps a public database of shipwrecks. It also has a large public database of different types of imagery collected from around the world, including sonar and lidar imagery of the seafloor. The imagery I used extends to a little over 14 miles (23 kilometers) from the coast and to a depth of 279 feet (85 meters). This imagery contains huge areas with no shipwrecks, as well as the occasional shipwreck.
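One plausible way to structure such a model, sketched under assumptions (this article does not give the study's actual architecture, tile size, or input channels), is a small convolutional network over fixed-size bathymetry tiles labeled shipwreck versus natural seafloor:

```python
# A minimal sketch of a binary "shipwreck vs. natural seafloor" classifier
# over sonar/lidar bathymetry tiles. The architecture, 64x64 tile size,
# and single depth channel are illustrative assumptions, not the study's.
import torch
import torch.nn as nn

class WreckClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # After two 2x poolings a 64x64 tile becomes 32 channels of 16x16.
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32 * 16 * 16, 1))

    def forward(self, x):                    # x: (batch, 1, 64, 64) tiles
        return self.head(self.features(x))   # one wreck logit per tile

model = WreckClassifier()
tiles = torch.randn(8, 1, 64, 64)            # placeholder bathymetry tiles
labels = torch.ones(8)                       # placeholder "wreck" labels
loss = nn.BCEWithLogitsLoss()(model(tiles).squeeze(1), labels)
loss.backward()                              # one training step's gradients
```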

Finding shipwrecks is important for understanding the human past (think trade, migration, war), but underwater archaeology is expensive and dangerous. A model that automatically maps all shipwrecks over a large area can reduce the time and cost needed to look for wrecks, either with underwater drones or human divers.

The Navy's Underwater Archaeology Branch is interested in this work because it could help the unit find unmapped or unknown naval shipwrecks. More broadly, this is a new method in the field of underwater archaeology that can be expanded to look for various types of submerged archaeological features, including buildings, statues and airplanes.

This project is the first archaeology-focused model that was built to automatically identify shipwrecks over a large area, in this case the entire coast of the mainland U.S. There are a few related projects that are focused on finding shipwrecks using deep learning and imagery collected by an underwater drone. These projects are able to find a handful of shipwrecks that are in the area immediately surrounding the drone.

We'd like to include more shipwreck and imagery data from all over the world in the model. This will help the model get really good at recognizing many different types of shipwrecks. We also hope that the Navy's Underwater Archaeology Branch will dive to some of the places where the model detected shipwrecks. This will allow us to check the model's accuracy more carefully.

I'm also working on a few other archaeological machine learning projects, and they all build on each other. The overall goal of my work is to build a customizable archaeological machine learning model. The model would be able to quickly and easily switch between predicting different types of archaeological features, on land as well as underwater, in different parts of the world. To this end, I'm also working on projects focused on finding ancient Maya archaeological structures, caves at a Maya archaeological site and Romanian burial mounds.


Real-time Interpretation: The next frontier in radiology AI – MedCity News

Posted: at 4:13 am

In the nine years since AlexNet spawned the age of deep learning, artificial intelligence (AI) has made significant technological progress in medical imaging, with more than 80 deep-learning algorithms approved by the U.S. FDA since 2012 for clinical applications in image detection and measurement. A 2020 survey found that more than 82% of imaging providers believe AI will improve diagnostic imaging over the next 10 years, and the market for AI in medical imaging is expected to grow 10-fold in the same period.

Despite this optimistic outlook, AI still falls short of widespread clinical adoption in radiology. A 2020 survey by the American College of Radiology (ACR) revealed that only about a third of radiologists use AI, mostly to enhance image detection and interpretation; of the two-thirds who did not use AI, the majority said they saw no benefit to it. In fact, most radiologists would say that AI has not transformed image reading or improved their practices.

Why is there such a huge gap between AI's theoretical utility and its actual use in radiology? Why hasn't AI delivered on its promise in radiology? Why aren't we there yet?

The reason isn't that companies haven't tried to innovate. It's that they were trying to automate away the radiologist's job and failed, burning plenty of investors and leaving them reluctant to fund other projects aimed at translating AI's theoretical utility into real-world use cases.

AI companies seem to have misunderstood Charles Friedman's fundamental theorem of biomedical informatics: it isn't that a computer can accomplish more than a human; it's that a human using a computer can accomplish more than a human alone. Creating this human-machine symbiosis in radiology will require AI companies to understand:

Together, these features, delivered as a unified cloud-based solution, would simplify and optimize the radiology workflow while augmenting the radiologist's intelligence.

History Lessons

Modern deep learning dawned in 2012, when AlexNet won the ImageNet challenge, leading to the resurgence of AI as we think of it today. With the problem of image classification sufficiently solved, AI companies decided to apply their algorithms to images that have the greatest impact on human health: radiographs. These post-AlexNet companies can be viewed as falling into three generations.

The first generation approached the field with the assumption that AI know-how was sufficient for commercial success, and so focused on building early teams with knowledge around algorithms. However, this group drastically underestimated the difficulty of acquiring and labeling large-enough medical imaging data sets to train these models. Without sufficient data, these first-generation companies either failed or had to pivot away from radiology.

The second generation corrected for failures of their predecessors by launching with data partnerships in hand either with academic medical centers or large private healthcare groups. However, these startup companies encountered the twin problems of integrating their tools into the radiology workflow and building a business model around them. Hence they ended up building functional features without any commercial traction.

The third generation of AI companies in radiology realized that success required an understanding of the radiology workflow, in addition to the algorithms and data. These companies have largely converged on the same use case: triage. Their tools rank-order images based on their urgency for the patient, thereby sorting how work flows to the radiologist without interfering in the execution of that work.

The third generation's solutions for the radiology workflow are a positive advancement that demonstrates there is a path towards adoption, but there is still much more AI could do beyond triage and worklist reordering. So where should the next wave of AI go in radiology?

Going For The Flow

To date, AI has demonstrated value in its ability to handle asynchronous tasks such as image triage and detection. What's even more interesting is the potential to enhance real-time image interpretation by giving the computer context that lets it work with the radiologist.

There are many aspects of the radiologist's workflow where radiologists want improvements and that AI-based context could optimize and streamline. These include, but are certainly not limited to: setting the radiologist's preferred image hanging protocols; auto-selection of the proper reporting template for the case; ensuring the radiologist's dictation is entered into the correct section of the report; and removing the need to repeat image measurements for the report.
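As a toy illustration of one item on that list, auto-selecting a reporting template might reduce to a lookup keyed on study metadata. The template names and rules below are invented for illustration; a real system would learn or configure these mappings rather than hard-code them.

```python
# A toy sketch of one "micro-optimization": picking a reporting template
# from study metadata. All names and rules here are hypothetical.
def select_template(modality: str, body_part: str) -> str:
    """Map (modality, body part) to a report template identifier."""
    rules = {
        ("CT", "chest"): "ct_chest_standard",
        ("MR", "brain"): "mr_brain_standard",
        ("XR", "chest"): "xr_chest_two_view",
    }
    return rules.get((modality, body_part), "generic_template")

assert select_template("CT", "chest") == "ct_chest_standard"
assert select_template("US", "abdomen") == "generic_template"
```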

Individually, a shortcut that optimizes any one of these workflow steps (a "micro-optimization") would have a small impact on the overall workflow. But the collective impact of an entire compendium of these micro-optimizations on the radiologist's workflow would be quite large.

In addition to its impact on the radiology workflow, the concept of a micro-optimization compendium makes a feasible and sustainable business possible; whereas it would be difficult, if not impossible, to build a business around a tool that optimized just one of those steps.

Radiology Tools for Thought

In other areas of software development, we are witnessing a resurgence in "tools for thought," technology that extends the human mind, and in these areas, creating a product that improves decision making and user experience is table stakes. Uptake of this idea is slower in healthcare, where computers and technology have failed to improve usability and workflow and continue to lack integration.

The number and complexity of medical images continue to increase as novel applications of imaging for screening and diagnosis emerge, but the total number of radiologists is not increasing at the same rate. The ongoing expansion of medical imaging therefore requires better tools for thought. Without them, we will eventually reach a breaking point when we cannot read all of the images generated, and patient care will suffer.

The next wave of AI must solve the workflow of real-time interpretation in radiology and we must embrace that technology when it comes. No single feature will address this problem. Only a compendium of micro-optimizations, delivered continually and at high velocity via the cloud, will solve it.


Disability rights advocates are worried about discrimination in AI hiring tools – MIT Technology Review

Posted: at 4:13 am

Making hiring technology accessible means ensuring both that a candidate can use the technology and that the skills it measures don't unfairly exclude candidates with disabilities, says Alexandra Givens, the CEO of the Center for Democracy and Technology, an organization focused on civil rights in the digital age.

AI-powered hiring tools often fail to include people with disabilities when generating their training data, she says. Such people have long been excluded from the workforce, so algorithms modeled after a company's previous hires won't reflect their potential.

Even if the models could account for outliers, the way a disability presents itself varies widely from person to person. Two people with autism, for example, could have very different strengths and challenges.

"As we automate these systems, and employers push to what's fastest and most efficient, they're losing the chance for people to actually show their qualifications and their ability to do the job," Givens says. "And that is a huge loss."

Government regulators are finding it difficult to monitor AI hiring tools. In December 2020, 11 senators wrote a letter to the US Equal Employment Opportunity Commission expressing concerns about the use of hiring technologies after the covid-19 pandemic. The letter inquired about the agency's authority to investigate whether these tools discriminate, particularly against those with disabilities.

The EEOC responded with a letter in January that was leaked to MIT Technology Review. In the letter, the commission indicated that it cannot investigate AI hiring tools without a specific claim of discrimination. The letter also outlined concerns about the industrys hesitance to share data and said that variation between different companies software would prevent the EEOC from instituting any broad policies.

"I was surprised and disappointed when I saw the response," says Roland Behm, a lawyer and advocate for people with behavioral health issues. "The whole tenor of that letter seemed to make the EEOC seem like more of a passive bystander rather than an enforcement agency."

The agency typically starts an investigation once an individual files a claim of discrimination. With AI hiring technology, though, most candidates don't know why they were rejected for the job. "I believe a reason that we haven't seen more enforcement action or private litigation in this area is due to the fact that candidates don't know that they're being graded or assessed by a computer," says Keith Sonderling, an EEOC commissioner.

Sonderling says he believes that artificial intelligence will improve the hiring process, and he hopes the agency will issue guidance for employers on how best to implement it. He says he welcomes oversight from Congress.


Ai Weiwei unveils giant iron tree to warn people what they risk losing – Reuters

Posted: at 4:13 am

PORTO, Portugal, July 22 (Reuters) - Chinese artist and dissident Ai Weiwei unveiled a 32-meter-tall (105 ft) tropical tree made of iron in the Portuguese city of Porto on Thursday, an artwork he hopes will raise awareness of the devastating consequences of deforestation.

Four years ago, Ai was in Brazil to investigate the threats faced by its forests when he stumbled upon an endangered ancient tree of the Caryocar genus in the northeastern Atlantic forest.

Using scaffolding, a team moulded the tree and shipped the mould to China, where it was cast before being sent to Portugal, Ai's new home, to be assembled and exhibited for the first time.

The exhibition, which also includes installations composed of iron tree roots, is taking place at Porto's Serralves museum and park, and will be open to visitors until next year.

"People should look at these works and think of what we could lose in the future," Ai, 63, told Reuters by telephone. "It's... a warning about what we are going to lose if we don't act."

Ai's tree stands leafless, has a hollow trunk and the iron looks rusty, reminding visitors of the environmental threats facing the planet.

In Brazil's Amazon, deforestation has surged since right-wing President Jair Bolsonaro took office in 2019.

Bolsonaro has called for mining and agriculture in protected areas of the Amazon and weakened environmental agencies.

"Brazil has a clear policy which sacrifices their best resource: their rainforest, their nature," Ai said. "And that's not just Brazil's best resource...it's planet earth's best resource."

Scientists say protection of the Amazon is vital to curbing climate change because of the vast amount of greenhouse gas its rainforest absorbs.

"The problem is that we never learn from our mistakes... we never really learn a lesson," Ai said, urging the world to prepare for "even bigger" environmental disasters.

Reporting by Catarina Demony and Violeta Santos Moura; Editing by Andrei Khalip, Alexandra Hudson


Untether AI nabs $125M for AI acceleration chips – VentureBeat

Posted: at 4:13 am


Untether AI, a startup developing custom-built chips for AI inferencing workloads, today announced it has raised $125 million from Tracker Capital Management and Intel Capital. The round, which was oversubscribed and included participation from Canada Pension Plan Investment Board and Radical Ventures, will be used to support customer expansion.

Increased use of AI, along with the technology's hardware requirements, poses a challenge for traditional datacenter compute architectures. Untether is among the companies proposing at-memory or near-memory computation as a solution. Essentially, this type of hardware builds memory and logic into an integrated circuit package. In a 2.5D near-memory compute architecture, processor dies are stacked atop an interposer that links the components and the board, incorporating high-speed memory to bolster chip bandwidth.

Founded in 2018 by CTO Martin Snelgrove, Darrick Wiebe, and Raymond Chik, Untether says it continues to make progress toward mass-producing its RunA1200 chip, which it says combines efficiency with computational robustness. Snelgrove and Wiebe claim that data in their architecture moves up to 1,000 times faster than is typical, which would be a boon for machine learning, where datasets are frequently dozens or hundreds of gigabytes in size.

Each RunA1200 chip contains a RISC-V processor and 511 memory banks, with each bank comprising 385KB of SRAM and a 2D array of 512 processing elements (PEs). There are 261,632 PEs per chip, with 200MB of memory, and the RunA1200 delivers 502 trillion operations per second (TOPS) of processing power.

One of Untether's first commercial products is the TsunAImi, a PCIe card containing four RunA1200s. App-specific processors spread throughout the memory arrays in the RunA1200s enable the TsunAImi to deliver over 80,000 frames per second on the popular ResNet-50 benchmark, 3 times the throughput of its nearest competitor. According to analyst Linley Gwennap, the TsunAImi outperforms a single Nvidia A100 GPU at about the same power rating, or about 400W of power.
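The quoted figures are internally consistent, which a few lines of arithmetic confirm (using only the numbers from the two paragraphs above; the per-watt figure assumes the ~400W rating cited by Gwennap):

```python
# Sanity-checking the RunA1200 / TsunAImi figures quoted above.
banks = 511
pes_per_bank = 512            # 2D array of processing elements per bank
sram_per_bank_kb = 385

print(banks * pes_per_bank)             # 261,632 PEs, matching the text
print(banks * sram_per_bank_kb / 1e3)   # ~196.7 MB SRAM, i.e. the quoted ~200 MB

# TsunAImi card: four RunA1200s at 502 TOPS each, ~80,000 ResNet-50
# frames/s at the ~400 W power rating cited by Gwennap.
print(4 * 502)        # ~2,008 TOPS per card
print(80_000 / 400)   # ~200 frames per second per watt
```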

Untether is shipping TsunAImi samples and aims for general availability this summer. The company says the cards can be used in a range of industries and applications, including banking and financial services, natural language processing, autonomous vehicles, smart city and retail, and other scenarios that require high-throughput and low-latency AI acceleration.

"Untether AI has a scalable architecture that provides a revolutionary approach to AI inference acceleration. Its industry-leading power efficiency can deliver the compute density and flexibility required for current and future AI workloads in the cloud, for edge computing, and embedded devices," Tracker Capital senior advisor Shaygan Kheradpir said in a press release.

There's no shortage of adjacent startup rivals in a chip segment anticipated to reach $91.18 billion by 2025. California-based Mythic has raised $85.2 million to develop custom in-memory compute architecture. Graphcore, a Bristol, U.K.-based startup creating chips and systems to accelerate AI workloads, has a war chest in the hundreds of millions of dollars. SambaNova has raised over $1 billion to commercialize its AI acceleration hardware. And Baidu's growing AI chip unit was recently valued at $2 billion after funding.

Toronto-based Untether's total raised now stands at $152 million.


Israel-founded Theator to work with Mayo Clinic to bring AI to surgical rooms – The Times of Israel

Posted: at 4:13 am

Theator, an Israeli-founded, Palo Alto, California-based startup that aims to bring artificial intelligence and computer vision technologies to surgical rooms, has set up a collaboration with Mayo Clinic in the US.

The partnership will allow Theator to work with the clinic's urology and gynecology departments and test out its annotation and video analytics technology to improve surgeons' pre-operative preparation and post-operative analysis and debriefing, giving surgeons a chance to improve their performance.

Using computer vision, software developed by the firm scans video footage of real-time procedures and their key moments. This allows surgeons to watch video recordings of procedures broken down into their various steps and fast forward or rewind to steps they wish to learn. Users can also get digital summaries of their own surgical performance, with analytics, graphics and rankings, allowing them to pinpoint additional training and skills that might be needed.
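A minimal sketch of the indexing idea: given per-frame phase predictions from a vision model, collapse runs of identical labels into navigable segments. Theator's actual models and phase taxonomy are not public, so the classifier itself is elided here and the phase labels are invented; only the segment-merging logic is illustrated.

```python
# Turn per-frame phase predictions into a navigable index of surgical
# steps. The phase labels below are hypothetical stand-ins.
from itertools import groupby

def index_steps(frame_phases, fps=30):
    """Collapse a per-frame phase sequence into (phase, start_s, end_s)."""
    segments, t = [], 0
    for phase, run in groupby(frame_phases):
        n = len(list(run))                       # frames in this run
        segments.append((phase, t / fps, (t + n) / fps))
        t += n
    return segments

# Placeholder predictions for a 6-frame clip; 1 fps for readability.
phases = ["access", "access", "dissection", "dissection", "closure", "closure"]
for phase, start, end in index_steps(phases, fps=1):
    print(f"{phase}: {start:.0f}s - {end:.0f}s")   # jump targets for playback
```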

The strategic collaboration follows Mayo Clinic's investment in Theator's $15.5 million Series A funding round earlier this year.

Theator was co-founded in 2018 by Dotan Asselman and Tamir Wolf, and has an R&D center in Israel.


"The collaboration with Mayo Clinic will enable the firm to access a rich array of insights from world-class urology and gynecology departments, with the goal of broadening our experience in order to tackle the pressing problems of disparity and variability in surgical care today," said Dr. Tamir Wolf, CEO and co-founder of Theator.

"The strategic collaboration will deepen our visual and contextual understanding of surgical best practices and enable us to refine and develop our preoperative preparation and postoperative debrief Surgical Intelligence platform, helping surgeons around the world upskill and raise the standards of care for even more patients," he added.



[Webinar] The Hidden Gems of eDiscovery AI: Advanced techniques to supercharge your process – July 29th, 1:00 pm – 2:00 pm EST | ZyLAB – JDSupra – JD…

Posted: at 4:13 am

Three-part webinar series

Description:

Many legal professionals are familiar with eDiscovery and the benefits the technology provides. However, what may not be common knowledge is the power that artificial intelligence brings to the table and the techniques that can be applied to achieve legal success.

eDiscovery AI excels at sifting through massive quantities of data to identify specific terms or concepts, even when those concepts are expressed differently. Because an AI system can scan data faster than any human and doesn't get tired or distracted, it can evaluate data sets more quickly and easily than a human while maintaining accuracy.
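One common way to find a concept "even when expressed differently" is semantic embedding search, sketched below with the open-source sentence-transformers library. This illustrates the general technique, not ZyLAB's implementation; the model name, documents, and query are placeholders.

```python
# Illustrative semantic search over documents: matches by meaning rather
# than by exact terms. A generic technique sketch, not ZyLAB's stack.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # a small public model

docs = [
    "Please wire the funds to the offshore account before the audit.",
    "The quarterly marketing plan is attached for review.",
    "Move the money overseas quietly ahead of the review.",
]
query = "transferring assets abroad to avoid scrutiny"

doc_emb = model.encode(docs, convert_to_tensor=True)
query_emb = model.encode(query, convert_to_tensor=True)

scores = util.cos_sim(query_emb, doc_emb)[0]     # cosine similarity per doc
for doc, score in sorted(zip(docs, scores.tolist()), key=lambda p: -p[1]):
    print(f"{score:.2f}  {doc}")  # both phrasings of the concept rank high
```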

This three-part webinar series aims to help eDiscovery practitioners discover advanced techniques that offer higher levels of automation, completeness, and speed.

Reasons to attend:

Explore uncharted eDiscovery territory. Join us as we uncover the secret techniques used by eDiscovery experts. Learn how to:

Part 1: How to speed up protection and redaction of sensitive data using Auto-Classification and Auto-Redaction

Date: July 29, 2021 | 1pm to 2pm EST

Lawyers handle a tremendous amount of sensitive information every day: clients' personal data, including both personally identifiable information (PII) and protected health information (PHI), intellectual property, trade secrets, financial information, and much more.

During the review process, identifying and redacting sensitive information can be a time-consuming, aggravating, and error-prone task. There are two major challenges around redaction: efficiently identifying the pieces of sensitive information that may be hiding within reams of disclosable data and thoroughly redacting that information prior to production.
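To illustrate the identification half of that challenge, here is a deliberately simplified pattern-based PII redactor. Production tools (ZyLAB's included) layer trained entity recognizers on top of patterns like these; the regexes and labels below are illustrative only.

```python
# A simplified sketch of pattern-based PII detection and redaction.
# Real systems combine NER models with patterns; this shows only the
# pattern-matching layer, with hypothetical regexes.
import re

PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each detected span with a labeled redaction marker."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED:{label}]", text)
    return text

print(redact("Contact John at john.doe@example.com or 555-867-5309; SSN 123-45-6789."))
# -> Contact John at [REDACTED:EMAIL] or [REDACTED:PHONE]; SSN [REDACTED:SSN].
```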

In this session, Jeffrey Wolff, Director of Legal Technology Solutions at ZyLAB, will demonstrate some advanced techniques review teams can use in order to achieve a more efficient process.

Here's what we'll cover:

Next events:

Part Two: Fact-finding in eDiscovery – August 26th, 2021

Part Three: Handling non-standard ESI – September 30th, 2021

Register now

About the speaker:

Jeffrey Wolff has been serving as eDiscovery Director at ZyLAB since 2015 and currently oversees the product direction and marketing strategy for ZyLAB's North American corporate market. Mr. Wolff brings over 25 years of professional experience in information systems and enterprise software, and prior to joining ZyLAB he was involved in solution architecture, design, and implementation for a variety of major projects within the Department of Defense and Fortune 1000 corporations.

