
Category Archives: Ai

New microcredential focuses on the importance of AI ethics University Affairs – University Affairs

Posted: February 28, 2022 at 8:20 pm

"People doing computer science degrees, of any sort, need ethical training," says one of the designers.

When Katrina Ingram talks to non-experts about artificial intelligence, she often has to clear up one big misconception: call it the Terminator fallacy.

"Hollywood's done a fantastic job of creating this idea that AI is almost human-like and sentient," she says. "And, not to mention, often with an evil agenda."

The so-called narrow AI at work in the world today is, frankly, more boring than Hollywood's killer robots. It takes the form of algorithms that decide which of your email messages are spam or sift through job applicants to choose promising candidates. These systems typically use huge datasets to learn how to make choices, and over time improve their performance by incorporating more data. But as these systems become more sophisticated, and as more businesses, governments and other organizations rely on AI for more and more tasks, the ethical quandaries surrounding the technology are piling up. "A lot of AI systems are being used to make predictions about people, and that can cause harm in all kinds of ways," says Ms. Ingram.

As founder and CEO of Edmonton-based company Ethically Aligned AI, Ms. Ingram spends much of her time consulting with organizations on how to foreground ethical considerations when designing and deploying artificial intelligence. That's why she's helped design Canada's first university microcredential focused on AI ethics: a four-course certificate program offered by Athabasca University.

According to Ms. Ingram, this kind of training should be foundational for any professional working in digital systems. Yet the Athabasca program is the first of its kind in Canada, and one of relatively few worldwide.

"People doing computer science degrees, of any sort, need ethical training," she said. "That's one audience that needs to be served. The other big audience is people who are already working professionally: all the people working in companies, designing these systems. They need training too, and the microcredential program is flexible, not too time-consuming and works for them."

The courses were co-designed with Trystan Goetze, a philosopher and ethicist currently completing his postdoctoral fellowship in computer ethics at Harvard University.

"The technology has gotten ahead of the policy thinking, and it's gotten ahead of the humanistic thinking on the subject," says Dr. Goetze. "Computers are not like other kinds of technology, where ethical issues that come up are very narrow in scope. We don't talk about automobile ethics, for example. But we do talk about computer or AI ethics, because this technology can be applied in almost any aspect of society, business, you name it."

What kinds of ethical issues are at stake? "Bias is a major one," said Dr. Goetze. AI can reaffirm and even exacerbate existing prejudices. Consider an AI system tasked with choosing promising job applicants, which bases its decisions on a dataset of previous hires. If historic hiring practices were flawed or biased against particular candidates, those biases will be integrated into the AI as well.

Another consideration is how data used by AI systems is collected: i.e., is it scraped from social media profiles or other online sources in a consensual and privacy-compliant way? And then there's robo-ethics, including concerns around misuse of facial recognition technology, or safety issues caused by the testing of AI-powered autonomous vehicles on public streets.

Besides academic thinkers, the courses will include interviews and contributions from activists, professionals and business leaders, who can bring different facets of the subject to light.

"[These issues are] nothing new for technologists," says Dr. Goetze. "But for a business leader this could be completely new information."

Beyond the content of the courses, Ms. Ingram and Dr. Goetze are both pleased with another aspect of them: they look good.

"Athabasca has put a great deal of creative effort into these courses," Dr. Goetze said. "They're visually designed in a beautiful way, with video and animations. It's not something that looks like it was slapped together in haste during the last lockdown, which I think is a testament to the fact that when we have the time and resources, we can produce online education experiences that are truly special."

See the rest here:

New microcredential focuses on the importance of AI ethics University Affairs - University Affairs

Posted in Ai | Comments Off on New microcredential focuses on the importance of AI ethics University Affairs – University Affairs

2022 AI Trends: How Will AI Affect You? – ReadWrite

Posted: at 8:20 pm

What does the crystal ball portend for AI as we are halfway through the first business quarter of the year? First, of course, we already know that artificial intelligence (AI) impacts every industry on the planet.

Here are some areas in which AI will play a more significant role in our lives in 2022 and beyond.

AI feasts on data, and the avenues for gathering that information have heightened the value of data as a competitive advantage and a critical asset for businesses and governments alike.

As a result, privacy regulations have been enacted, along with initiatives to educate the public about how their data can be used. Individuals will have more agency in exercising their data rights due to these efforts.

Data marketplaces, online venues where individuals and businesses can buy and sell data, are already emerging due to the convergence of these forces.

Data marketplaces combine several of these forces: democratized access, privacy restrictions, and monetization methods coincide to allow data owners to profit from the use of their data.

The metaverse combines virtual reality, augmented reality, online worlds, tailored experiences, and games. It allows people to communicate, transact business, and construct personalities entirely online, and it has recently received a lot of attention.

Many firms are vying for control of aspects of the metaverse, with examples currently present in popular apps like Roblox. What does this control have to do with artificial intelligence? In the metaverse, AI can play a variety of roles in the cloud, including developing synthetic people, writing stories, and improving VR experiences.

Gone are the days when AI was exclusively understood by data scientists.

With AI in every industry, AI is becoming a learning requirement for many jobs. Governments are producing AI policies, and new laws are arising to manage AI and associated elements such as privacy.

There will be an increase in the number of AI-Enabled Practitioners. This is because people are beginning to understand the importance of AI in their work, whether in medicine, law, human resources, sales, or any other field.

Some of AI's most commercially successful applications have been recommendation systems and dynamic pricing.

It may be unnerving, but AIs never sleep, and they are constantly learning more about us. We can expect this trend to continue indefinitely.

Everything we receive online is tailored to us as people (whether it's a sale, a coupon, recommendations, prices, and so on). Therefore, we should expect more information about us to be collected online. Intelligence is gathered via chatbots, digital assistants, and other means.

For some, all of this collecting is an exhilarating thought, but it is a source of concern for others. Questions are asked: Where is all of the information being kept? Who will access things about me in the future? Why do salespeople think it's okay to push, push, push and sell, sell, sell to me when I don't want to be bothered?

Education standards take time to keep up with technological change. India, for example, has developed Automated Intelligence test standards for K-12 pupils, and AI is an essential subject in schools. In-house training on course curricula is in the near future.

Alexa and Siri, among other digital assistants, have been around for a while, but what about house robots?

Although AI-powered gadgets are not new to the house (for example, Roomba), this year has introduced more general-purpose AI-powered robots. Amazon, for example, recently unveiled Astro, a robot that can follow you around the house, link to Alexa, and monitor security, among other things.

AI has shown incredible creative talents, ranging from the ability to compose music to the ability to write poetry and paint. What does this mean for the average creative who makes a living through their craft? What will this do for the shoppers?

You can expect to see artificial intelligence-assisted creativity in your favorite apps, from those that generate presentations for you at work to those that cook dinner for you. Will a standard procedure be demanded? What will the quality be?

AI's progress as a technology and a force for industry growth has been exponential in recent years. As a result, we may expect more AI trends to touch our daily lives as artificial intelligence-powered goods find their way out of the prototype labs and into the hands of consumers.

2022 looks to be the Year of AI Supremacy, not in an evil-robot way but in a way that will make the business of doing business easier and more profitable.

Some employees and their managers are under the mistaken impression that Automated Algorithms stifle creativity. These Luddites are always happy to learn about glitches in the cloud and on platforms.

But the intelligent worker and manager realize that each step technology takes forward, whether automated or not, is an opportunity to improve the bottom line and increase productivity.

It seems that most people in business have concerns about AI, but because AI in business means moving forward despite the things we don't yet know, we accept the unknowns.

Image Credit: Tara Winstead; Pexels; Thank you!

Deanna is the Managing Editor at ReadWrite. Previously she worked as the Editor in Chief for Startup Grind and has more than 20 years of experience in content management and content development.

Excerpt from:

2022 AI Trends: How Will AI Affect You? - ReadWrite

Posted in Ai | Comments Off on 2022 AI Trends: How Will AI Affect You? – ReadWrite

Visionary.AI receives additional $2.5 million funding for its AI in cameras | Ctech – CTech

Posted: at 8:20 pm

Visionary.ai, an Israeli startup that uses AI to make cameras operate better, received fresh capital to increase its Seed round just as the company celebrates its first birthday. The company received an additional $2.5 million on top of its initial $4.5 million from February 2021, bringing its total to $7 million. The round was led by Ibex Investors with the participation of Spring Ventures and Capital Point and will be used for R&D and business development.

"Over the past year, we've built a team and technology that can help any camera achieve stellar results in real-time," said co-founder and CEO Oren Debbi. "Our software can put an end to the days of blurry video calls with our family, friends, and colleagues. The response we've received from both investors and customers has been phenomenal. Cameras are an integral part of our lives, and our vision is to see Visionary.ai inside every camera, bringing greater image quality to the world."

Visionary.ai has developed an AI-powered technology that operates in the dark and removes blur for photos. Its software can work at the core of any camera in the Image Signal Processor (ISP). There, it optimizes light, sharpness, and clarity with no additional hardware needed. "We're leveraging image data in a new way to deliver optimal results for camera manufacturers worldwide. As a veteran of computer vision technology, I am in awe of how our team has developed and brought this to market at lightning speed," added co-founder and CTO Yoav Taieb.

Visionary.ai was founded in 2021 by Debbi and Taieb and is based in Jerusalem. Its software can enhance the image quality in laptops, tablets, phones, and webcams in real-time to help capture clearer images. Its team includes members from Microsoft, Intel, and Samsung who have collectively worked on over 40 computer vision patents. "Ibex Investors is proud to have supported Visionary.ai from the beginning, and we're thrilled to further our commitment to reflect their outstanding growth and potential," added Nicole Priel, Partner & Managing Director at Ibex Investors LLC.

See the article here:

Visionary.AI receives additional $2.5 million funding for its AI in cameras | Ctech - CTech

Posted in Ai | Comments Off on Visionary.AI receives additional $2.5 million funding for its AI in cameras | Ctech – CTech

RISC-V AI Chips Will Be Everywhere – IEEE Spectrum

Posted: at 8:20 pm

While machine learning has been around a long time, deep learning has taken on a life of its own lately. The reason for that has mostly to do with the increasing amounts of computing power that have become widely availablealong with the burgeoning quantities of data that can be easily harvested and used to train neural networks.

The amount of computing power at people's fingertips started growing in leaps and bounds at the turn of the millennium, when graphical processing units (GPUs) began to be harnessed for nongraphical calculations, a trend that has become increasingly pervasive over the past decade. But the computing demands of deep learning have been rising even faster. This dynamic has spurred engineers to develop electronic hardware accelerators specifically targeted to deep learning, Google's Tensor Processing Unit (TPU) being a prime example.

Here, I will describe a very different approach to this problem: using optical processors to carry out neural-network calculations with photons instead of electrons. To understand how optics can serve here, you need to know a little bit about how computers currently carry out neural-network calculations. So bear with me as I outline what goes on under the hood.

Almost invariably, artificial neurons are constructed using special software running on digital electronic computers of some sort. That software provides a given neuron with multiple inputs and one output. The state of each neuron depends on the weighted sum of its inputs, to which a nonlinear function, called an activation function, is applied. The result, the output of this neuron, then becomes an input for various other neurons.
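
To make that arithmetic concrete, here is a minimal Python sketch of the single neuron just described; the particular weights, inputs, and the choice of a ReLU activation are illustrative assumptions rather than anything taken from the article.

# Minimal sketch of one artificial neuron: a weighted sum of its inputs,
# followed by a nonlinear activation function. The weights, inputs, and
# the choice of ReLU are illustrative assumptions.

def relu(z):
    # A common nonlinear activation function.
    return max(0.0, z)

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs (a series of multiply-and-accumulate steps).
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    # Apply the nonlinearity; the result would feed other neurons.
    return relu(z)

print(neuron([0.5, -1.0, 2.0], [0.8, 0.2, -0.1], bias=0.1))  # prints 0.1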

Reducing the energy needs of neural networks might require computing with light

For computational efficiency, these neurons are grouped into layers, with neurons connected only to neurons in adjacent layers. The benefit of arranging things that way, as opposed to allowing connections between any two neurons, is that it allows certain mathematical tricks of linear algebra to be used to speed the calculations.

While they are not the whole story, these linear-algebra calculations are the most computationally demanding part of deep learning, particularly as the size of the network grows. This is true for both training (the process of determining what weights to apply to the inputs for each neuron) and for inference (when the neural network is providing the desired results).

What are these mysterious linear-algebra calculations? They aren't so complicated really. They involve operations on matrices, which are just rectangular arrays of numbers: spreadsheets, if you will, minus the descriptive column headers you might find in a typical Excel file.

This is great news because modern computer hardware has been very well optimized for matrix operations, which were the bread and butter of high-performance computing long before deep learning became popular. The relevant matrix calculations for deep learning boil down to a large number of multiply-and-accumulate operations, whereby pairs of numbers are multiplied together and their products are added up.
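
To see why matrix multiplication reduces to multiply-and-accumulate operations, here is a short Python sketch that multiplies two small matrices the long way; the numbers are arbitrary examples.

# Matrix multiplication written out as explicit multiply-and-accumulate
# (MAC) operations: each output entry is a running sum of products.
A = [[1, 2],
     [3, 4]]
B = [[5, 6],
     [7, 8]]

rows, inner, cols = len(A), len(B), len(B[0])
C = [[0] * cols for _ in range(rows)]
for i in range(rows):
    for j in range(cols):
        acc = 0
        for k in range(inner):
            acc += A[i][k] * B[k][j]  # one multiply-and-accumulate
        C[i][j] = acc

print(C)  # [[19, 22], [43, 50]]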

Over the years, deep learning has required an ever-growing number of these multiply-and-accumulate operations. Consider LeNet, a pioneering deep neural network, designed to do image classification. In 1998 it was shown to outperform other machine techniques for recognizing handwritten letters and numerals. But by 2012 AlexNet, a neural network that crunched through about 1,600 times as many multiply-and-accumulate operations as LeNet, was able to recognize thousands of different types of objects in images.

Advancing from LeNet's initial success to AlexNet required almost 11 doublings of computing performance. During the 14 years that took, Moore's law provided much of that increase. The challenge has been to keep this trend going now that Moore's law is running out of steam. The usual solution is simply to throw more computing resourcesalong with time, money, and energyat the problem.

As a result, training today's large neural networks often has a significant environmental footprint. One 2019 study found, for example, that training a certain deep neural network for natural-language processing produced five times the CO2 emissions typically associated with driving an automobile over its lifetime.

Improvements in digital electronic computers allowed deep learning to blossom, to be sure. But that doesn't mean that the only way to carry out neural-network calculations is with such machines. Decades ago, when digital computers were still relatively primitive, some engineers tackled difficult calculations using analog computers instead. As digital electronics improved, those analog computers fell by the wayside. But it may be time to pursue that strategy once again, in particular when the analog computations can be done optically.

It has long been known that optical fibers can support much higher data rates than electrical wires. That's why all long-haul communication lines went optical, starting in the late 1970s. Since then, optical data links have replaced copper wires for shorter and shorter spans, all the way down to rack-to-rack communication in data centers. Optical data communication is faster and uses less power. Optical computing promises the same advantages.

But there is a big difference between communicating data and computing with it. And this is where analog optical approaches hit a roadblock. Conventional computers are based on transistors, which are highly nonlinear circuit elements, meaning that their outputs aren't just proportional to their inputs, at least when used for computing. Nonlinearity is what lets transistors switch on and off, allowing them to be fashioned into logic gates. This switching is easy to accomplish with electronics, for which nonlinearities are a dime a dozen. But photons follow Maxwell's equations, which are annoyingly linear, meaning that the output of an optical device is typically proportional to its inputs.

The trick is to use the linearity of optical devices to do the one thing that deep learning relies on most: linear algebra.

To illustrate how that can be done, I'll describe here a photonic device that, when coupled to some simple analog electronics, can multiply two matrices together. Such multiplication combines the rows of one matrix with the columns of the other. More precisely, it multiplies pairs of numbers from these rows and columns and adds their products together: the multiply-and-accumulate operations I described earlier. My MIT colleagues and I published a paper about how this could be done in 2019. We're working now to build such an optical matrix multiplier.

Optical data communication is faster and uses less power. Optical computing promises the same advantages.

The basic computing unit in this device is an optical element called a beam splitter. Although its makeup is in fact more complicated, you can think of it as a half-silvered mirror set at a 45-degree angle. If you send a beam of light into it from the side, the beam splitter will allow half that light to pass straight through it, while the other half is reflected from the angled mirror, causing it to bounce off at 90 degrees from the incoming beam.

Now shine a second beam of light, perpendicular to the first, into this beam splitter so that it impinges on the other side of the angled mirror. Half of this second beam will similarly be transmitted and half reflected at 90 degrees. The two output beams will combine with the two outputs from the first beam. So this beam splitter has two inputs and two outputs.

To use this device for matrix multiplication, you generate two light beams with electric-field intensities that are proportional to the two numbers you want to multiply. Let's call these field intensities x and y. Shine those two beams into the beam splitter, which will combine these two beams. This particular beam splitter does that in a way that will produce two outputs whose electric fields have values of (x + y)/√2 and (x − y)/√2.

In addition to the beam splitter, this analog multiplier requires two simple electronic components, photodetectors, to measure the two output beams. They don't measure the electric field intensity of those beams, though. They measure the power of a beam, which is proportional to the square of its electric-field intensity.

Why is that relation important? To understand that requires some algebra, but nothing beyond what you learned in high school. Recall that when you square (x + y)/√2 you get (x² + 2xy + y²)/2. And when you square (x − y)/√2, you get (x² − 2xy + y²)/2. Subtracting the latter from the former gives 2xy.

Pause now to contemplate the significance of this simple bit of math. It means that if you encode a number as a beam of light of a certain intensity and another number as a beam of another intensity, send them through such a beam splitter, measure the two outputs with photodetectors, and negate one of the resulting electrical signals before summing them together, you will have a signal proportional to the product of your two numbers.
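
As a quick numerical sanity check on that claim, the following Python sketch models the idealized scheme end to end. Real hardware would add noise, loss, and scale factors, so treat this purely as an illustration of the algebra, not as a model of an actual device.

import math

# Idealized beam-splitter multiplier: encode x and y as field amplitudes,
# form the two outputs (x+y)/sqrt(2) and (x-y)/sqrt(2), square them (what a
# photodetector measures, i.e. power), negate one, and sum.
def optical_multiply(x, y):
    out_plus = (x + y) / math.sqrt(2)
    out_minus = (x - y) / math.sqrt(2)
    power_plus = out_plus ** 2    # photodetector 1
    power_minus = out_minus ** 2  # photodetector 2
    return (power_plus - power_minus) / 2  # equals x * y in the ideal case

print(optical_multiply(3.0, 4.0))  # 12.0, up to floating-point error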

Simulations of the integrated Mach-Zehnder interferometer found in Lightmatter's neural-network accelerator show three different conditions whereby light traveling in the two branches of the interferometer undergoes different relative phase shifts (0 degrees in a, 45 degrees in b, and 90 degrees in c). Lightmatter

My description has made it sound as though each of these light beams must be held steady. In fact, you can briefly pulse the light in the two input beams and measure the output pulse. Better yet, you can feed the output signal into a capacitor, which will then accumulate charge for as long as the pulse lasts. Then you can pulse the inputs again for the same duration, this time encoding two new numbers to be multiplied together. Their product adds some more charge to the capacitor. You can repeat this process as many times as you like, each time carrying out another multiply-and-accumulate operation.

Using pulsed light in this way allows you to perform many such operations in rapid-fire sequence. The most energy-intensive part of all this is reading the voltage on that capacitor, which requires an analog-to-digital converter. But you don't have to do that after each pulseyou can wait until the end of a sequence of, say, N pulses. That means that the device can perform N multiply-and-accumulate operations using the same amount of energy to read the answer whether N is small or large. Here, N corresponds to the number of neurons per layer in your neural network, which can easily number in the thousands. So this strategy uses very little energy.
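
The pulsed accumulation can be pictured with the rough sketch below, in which the capacitor is modeled as nothing more than a running sum; the specific numbers are made up, and a real device would read the accumulated value out through an analog-to-digital converter only once at the end of the sequence.

# Sketch of the pulsed multiply-and-accumulate scheme: each pulse encodes a
# pair of numbers, the optical multiplier produces their product, and the
# "capacitor" accumulates charge across N pulses. Only after the last pulse
# is the accumulated value read out (the costly conversion step).
weights = [2.0, 0.5, -1.0]   # example values, chosen only for illustration
inputs = [3.0, 4.0, 2.0]

capacitor_charge = 0.0
for w, x in zip(weights, inputs):
    product = w * x              # stands in for one optical multiplication
    capacitor_charge += product  # charge accumulates while the pulse lasts

readout = capacitor_charge       # single read after N pulses
print(readout)  # 6.0 for these example numbers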

Sometimes you can save energy on the input side of things, too. That's because the same value is often used as an input to multiple neurons. Rather than that number being converted into light multiple timesconsuming energy each timeit can be transformed just once, and the light beam that is created can be split into many channels. In this way, the energy cost of input conversion is amortized over many operations.

Splitting one beam into many channels requires nothing more complicated than a lens, but lenses can be tricky to put onto a chip. So the device we are developing to perform neural-network calculations optically may well end up being a hybrid that combines highly integrated photonic chips with separate optical elements.

I've outlined here the strategy my colleagues and I have been pursuing, but there are other ways to skin an optical cat. Another promising scheme is based on something called a Mach-Zehnder interferometer, which combines two beam splitters and two fully reflecting mirrors. It, too, can be used to carry out matrix multiplication optically. Two MIT-based startups, Lightmatter and Lightelligence, are developing optical neural-network accelerators based on this approach. Lightmatter has already built a prototype that uses an optical chip it has fabricated. And the company expects to begin selling an optical accelerator board that uses that chip later this year.

Another startup using optics for computing is Optalysis, which hopes to revive a rather old concept. One of the first uses of optical computing back in the 1960s was for the processing of synthetic-aperture radar data. A key part of the challenge was to apply to the measured data a mathematical operation called the Fourier transform. Digital computers of the time struggled with such things. Even now, applying the Fourier transform to large amounts of data can be computationally intensive. But a Fourier transform can be carried out optically with nothing more complicated than a lens, which for some years was how engineers processed synthetic-aperture data. Optalysis hopes to bring this approach up to date and apply it more widely.

Theoretically, photonics has the potential to accelerate deep learning by several orders of magnitude.

There is also a company called Luminous, spun out of Princeton University, which is working to create spiking neural networks based on something it calls a laser neuron. Spiking neural networks more closely mimic how biological neural networks work and, like our own brains, are able to compute using very little energy. Luminous's hardware is still in the early phase of development, but the promise of combining two energy-saving approachesspiking and opticsis quite exciting.

There are, of course, still many technical challenges to be overcome. One is to improve the accuracy and dynamic range of the analog optical calculations, which are nowhere near as good as what can be achieved with digital electronics. That's because these optical processors suffer from various sources of noise and because the digital-to-analog and analog-to-digital converters used to get the data in and out are of limited accuracy. Indeed, it's difficult to imagine an optical neural network operating with more than 8 to 10 bits of precision. While 8-bit electronic deep-learning hardware exists (the Google TPU is a good example), this industry demands higher precision, especially for neural-network training.

There is also the difficulty of integrating optical components onto a chip. Because those components are tens of micrometers in size, they can't be packed nearly as tightly as transistors, so the required chip area adds up quickly. A 2017 demonstration of this approach by MIT researchers involved a chip that was 1.5 millimeters on a side. Even the biggest chips are no larger than several square centimeters, which places limits on the sizes of matrices that can be processed in parallel this way.

There are many additional questions on the computer-architecture side that photonics researchers tend to sweep under the rug. What's clear though is that, at least theoretically, photonics has the potential to accelerate deep learning by several orders of magnitude.

Based on the technology that's currently available for the various components (optical modulators, detectors, amplifiers, analog-to-digital converters), it's reasonable to think that the energy efficiency of neural-network calculations could be made 1,000 times better than today's electronic processors. Making more aggressive assumptions about emerging optical technology, that factor might be as large as a million. And because electronic processors are power-limited, these improvements in energy efficiency will likely translate into corresponding improvements in speed.

Many of the concepts in analog optical computing are decades old. Some even predate silicon computers. Schemes for optical matrix multiplication, and even for optical neural networks, were first demonstrated in the 1970s. But this approach didn't catch on. Will this time be different? Possibly, for three reasons.

First, deep learning is genuinely useful now, not just an academic curiosity. Second, we can't rely on Moore's Law alone to continue improving electronics. And finally, we have a new technology that was not available to earlier generations: integrated photonics. These factors suggest that optical neural networks will arrive for real this timeand the future of such computations may indeed be photonic.

See the article here:

RISC-V AI Chips Will Be Everywhere - IEEE Spectrum

Posted in Ai | Comments Off on RISC-V AI Chips Will Be Everywhere – IEEE Spectrum

Butterfly Introduces System-wide Ultrasound with AI-guided Option – Imaging Technology News

Posted: at 8:20 pm

February 28, 2022 - Butterfly Network, Inc., a digital health company transforming care with handheld, whole-body ultrasound, introduced Butterfly Blueprint, a system-wide platform designed to support the scaled deployment of ultrasound across hospitals and health systems to empower more-informed clinical decisions from the bedside and encounter-based workflow.

Leveraging Butterfly's unique combination of a whole-body handheld ultrasound device, software, and services, Blueprint brings hospitals and health systems a complete ultrasound solution. This system-wide offering promotes improved patient care via accessible imaging across multiple disciplines and care settings. By integrating into health systems' clinical and administrative systems and workflows, Blueprint delivers a clinical assessment tool at scale.

"Ultrasound provides valuable information. The ability to enable the practical application of ultrasound into the clinical workflow to inform clinical decision making is powerful," said Dr. Todd Fruchterman, Butterfly Network's President and Chief Executive Officer. "Butterfly Blueprint empowers an evolved point-of-care toolkit for hospitals and health systems, one that transcends beyond touch, listening, and surface visuals, and beyond habit-based imaging and lab orders. With Butterfly, clinicians across all disciplines now have a strategic tool that we believe allows them to see and know sooner, helping them drive better care decisions, efficiency, and outcomes."

Butterfly Blueprint is complemented with a rich set of optional software and services including Caption Health's AI-guided software. Caption AI empowers healthcare professionals without sonography expertise to capture and interpret cardiac ultrasound images for earlier disease detection and better patient management. The Centers for Medicare and Medicaid Services (CMS) has approved new technology add-on payments (NTAP) for Caption Guidance, a designation awarded to new medical technologies and services that are expected to substantially improve the diagnosis or treatment of Medicare beneficiaries.

"Deployment on the Blueprint platform is helping us fulfill our mission to put enhanced diagnostic capabilities in more hands and increase access for patients," said Steve Cashman, President and CEO of Caption Health. "By integrating Caption AI with Butterfly iQ+, we're expanding diagnostic toolkits for hospitals, health systems, and wherever a patient needs care. The insights provided by a high-quality ultrasound exam are critical for better care and earlier disease detection. Butterfly and Caption are making that vision not just a possibility, but a reality."

The University of Rochester Medical Center (URMC), upstate New York's largest and most comprehensive healthcare system, announced earlier this year that it will deploy Butterfly Blueprint, bringing it first to medical students, primary care providers, and home care nurses. As quoted within the announcement, Dr. David L. Waldman, Chief Medical IT Development Officer and former Chair of Imaging Sciences at URMC, said, "The deployment of innovative ultrasound technology has the potential to redefine the point-of-care clinical standard and serve as an enhancement to the use of the stethoscope. Enterprise deployment of point of care ultrasound will ultimately enable every clinician, across all departments, to quickly image patients where they are located."

With Butterfly Blueprint, hospitals and health systems like URMC can rapidly and easily access ultrasound-enabled insights at the point of care through capabilities such as intuitive, mobile-first workflows; 20+ ready-to-use presets for procedural guidance; and device-agnostic software that integrates with non-Butterfly devices, as well as with other clinical and administrative systems including the PACS and EMR. Butterfly probes connect with compatible Apple and Android smartphones and tablets for display and support streamlined information sharing. Blueprint is also supported with Butterfly Academy, an extensive set of ultrasound-specific courses and curricula.

For more information: https://www.butterflynetwork.com/blueprint

More:

Butterfly Introduces System-wide Ultrasound with AI-guided Option - Imaging Technology News

Posted in Ai | Comments Off on Butterfly Introduces System-wide Ultrasound with AI-guided Option – Imaging Technology News

Two top jobs in the booming AI industry – TechRepublic

Posted: at 8:20 pm

Artificial intelligence (AI) is an ever-growing technology that's changing how businesses operate, and the support of AI-savvy IT pros has become critical. Here's a look at two of the hottest jobs in the field.

The use of artificial intelligence (AI) is exploding across all industries, and AI has seen rapid growth over the past year. According to a 2021 study conducted by PwC, 52% of survey respondents said they had accelerated their AI adoption plans in the wake of the COVID-19 crisis. And a whopping 86% of respondents said that AI would be a mainstream technology in the near future.

Business as we once knew it is gone. AI has left its mark, showing its ability to move businesses forward even during an uncertain time. According to the PwC study, AI has enabled businesses to:

Will AI continue to be a leading technology in 2022? Absolutely. The experts at Forrester predict that the year will bring with it some more big waves in AI, with traditional businesses putting AI at the center of everything they do. Forrester also predicts that creative AI systems will win dozens of patents, further expanding AI.

Of course, the growth of AI also brings about some unique challenges for organizations ready to adopt it. For example, companies must expand their already tight IT budgets to achieve the computing power necessary to take advantage of AI. And machine learning, a critical subset of AI, is fraught with issues such as social bias, discrimination and subpar security.

Beyond these concerns lies perhaps the greatest challenge yet: securing high-quality talent to fill AI-based IT roles. These roles are in high demand as organizations begin to place a priority on the deployment of AI technology.

Due to AI's growth and the demand for AI-based talent, aspiring IT professionals should consider this career path. Of course, AI requires a solid understanding of programming languages (e.g., Python and Java), statistics, machine learning algorithms, big data and AI frameworks such as Apache Spark. Luckily, all of this knowledge can be gained through a college or university artificial intelligence program and hands-on training.

There has never been a better time to start a career in AI or take advantage of AI within your business. Whether you're seeking a new career or you're a business looking to fill empty AI roles, these two TechRepublic hiring kits can give you a head start.

To truly take advantage of AI, businesses must rely on the skill of experienced AI architects. Using leading AI technology frameworks, AI architects develop and manage the critical architecture AI is built upon. To do so, AI architects must be able to see the big picture of a world supported by AI.

AI architects must have several years of work experience, including hands-on experience in computer science, data and other AI-related disciplines. Architects should be able to implement machine learning tactics and develop AI architecture in a variety of languages.

In this hiring kit, you'll get to know the basic responsibilities an AI architect should have and the necessary skills required for success.

Machine learning is a critical component of AI. Machine learning refers to software's ability to learn, and therefore predict, certain outcomes, much like a human brain. And for businesses to make AI work, they require machine learning engineers responsible for building and maintaining AI algorithms.

Machine learning engineers spend their time researching machine learning algorithms, performing analysis, running machine learning tests, using test results to improve machine learning models and more.

Machine learning engineers must have strong math and science skills. They must also be experts in machine learning, be able to think critically and possess the ability to problem-solve. Discover more of the skills required by checking out this hiring kit by TechRepublic Premium.

Go here to see the original:

Two top jobs in the booming AI industry - TechRepublic

Posted in Ai | Comments Off on Two top jobs in the booming AI industry – TechRepublic

Hidden Turbulence in The Atmosphere of The Sun Revealed by New AI Model – ScienceAlert

Posted: at 8:20 pm

Hidden turbulent motion that takes place inside the atmosphere of the Sun can be accurately predicted by a newly developed neural network.

Fed only temperature and vertical motion data collected from the surface of the solar photosphere, the AI model could correctly identify turbulent horizontal motion below the surface. This could help us to better understand solar convection, and processes that generate explosions and jets erupting from the Sun.

"We developed a novel convolutional neural network to estimate the spatial distribution of horizontal velocity by using the spatial distributions of temperature and vertical velocity," wrote a team of researchers led by astronomer Ryohtaroh Ishikawa of the National Astronomical Observatory of Japan.

"This led to efficient detection of spatially spread features and concentrated features. [..] Our network exhibited a higher performance on almost all the spatial scales when compared to those reported in previous studies."

The solar photosphere is the region of the Sun's atmosphere that is commonly referred to as its surface. It's the lowest layer of the solar atmosphere, and the region in which solar activity such as sunspots, solar flares and coronal mass ejections originate.

If you look closely, the surface of the photosphere is not uniform. It's covered with sections crowded together, lighter in the middle and dark towards the edges. These are called granules, and they're the tops of convection cells in the solar plasma. Hot plasma rises in the middle, and then falls back down around the edges as it moves outwards and cools.

When we observe these cells, we can measure their temperature, as well as their motion via the Doppler effect, but horizontal motion can't be detected directly. However, smaller scale flows in these cells can interact with solar magnetic fields to trigger other solar phenomena. In addition, turbulence is also thought to play a role in heating the solar corona, so scientists are keen to understand exactly how plasma behaves in the photosphere.

Ishikawa and team developed numerical simulations of plasma turbulence, and used three different sets of simulation data to train their neural network. They found that, based solely on the temperature and vertical flow data, the AI could accurately describe horizontal flows in the simulations that would be undetectable on the real Sun.

This means that we could feed it solar data and expect that the results it returns are consistent with what is actually occurring on our fascinating, forbidding star.

However, the neural network does need some fine-tuning. While it was able to detect large-scale flows, the AI did have trouble picking out smaller features. Since the accuracy of small-scale turbulence is crucial for some calculations, resolving this should be the next step in developing their software, the researchers said.

"By comparing the results of the three convection models, we observed that the rapid decrease in coherence spectrum occurred on the scales that were lower than the energy injection scales, which were characterized by the peaks of the power spectra of the vertical velocities. This implies that the network was not appropriately trained to reproduce the velocity fields in small scales generated by turbulent cascades," they wrote in their paper.

"These challenges can be explored in future studies."

A bit closer to home, the researchers are developing their software to also help better understand turbulence in fusion plasmas, another important application for future use.

The research has been published in Astronomy & Astrophysics.

Read more:

Hidden Turbulence in The Atmosphere of The Sun Revealed by New AI Model - ScienceAlert

Posted in Ai | Comments Off on Hidden Turbulence in The Atmosphere of The Sun Revealed by New AI Model – ScienceAlert

Gefen is releasing the next generation of its proprietary AI engine (GQL) – PRNewswire

Posted: at 8:20 pm

TEL-AVIV, Israel, Feb. 28, 2022 /PRNewswire/ -- Gefen's AI engine (GFN: ASX), the GQL (Genetically Qualitative Learner) merges data, expert intuition and machine capabilities. It takes the experience and expertise of the most capable and experienced agents, managers and innovators in its arena and makes it available over big data analysis, digital channels and at large scale.

An example: GQL will harness Open Insurance (which policies a customer has from all vendors), customer data (age, income, family status) and expert knowhow (what should a clever agent do for the benefit of all parties involved) and will recommend which policies should be upgraded, replaced or even canceled. The GQL takes into account thousands of products, customer actions, interactions and data points and optimizes the recommendations for the benefit of customers, agents, advisors and vendors.

Gefen is now releasing the next generation of GQL with more capabilities, more data points and new targeting to interactions between agents, advisors and customers. For example - GQLs can now be specifically aimed towards improving work day priorities (which customer to contact, how and when), in-context recommendations (what to offer a customer during a call) and which customers to target with which digital content.

The GQL's main purpose is to improve the level of service, the personalized fit for the customers and increase the share of wallet generated from each customer.

The new GQL generation is being gradually deployed and will be made available to the entire platform during the first quarter of 2022.

For further information, please contact:

Investor & Media EnquiriesGefen International AI LTDOrni Daniels, Co-CEO[emailprotected]

SOURCE Gefen International

Continue reading here:

Gefen is releasing the next generation of its proprietary AI engine (GQL) - PRNewswire

Posted in Ai | Comments Off on Gefen is releasing the next generation of its proprietary AI engine (GQL) – PRNewswire

Russia’s AI Army: Drones, AI-Guided Missiles and Autonomous Tanks – IoT World Today

Posted: at 8:20 pm

In the age of AI, the weapons of war have become more technologically advanced than at any time in history. Russia's military, as its incursion into Ukraine continues, is no exception.

Russia is one of the biggest spenders on defense in the world, with World Bank figures pegging it at $62 billion in 2020, eclipsed only by the U.S. and China. All three nations have been testing a plethora of AI units and weapons.

As far back as 2017, President Vladimir Putin had said that "whoever becomes the leader [in AI] will become the ruler of the world."

Here is a snapshot of Russia's AI arsenal, some of which are being used against Ukraine.

Like most militarized modern nations, Russia has fleets of drones.

Its KUB-BLA drone was developed by the Kalashnikov Group, the same company that produces Russia's iconic assault rifles. Designed to destroy remote ground targets, it delivers payloads onto a target's coordinates that are set manually or in the image from the drone's guidance system.

Russia has deployed its drones in combat prior to its invasion of Ukraine. Its military has been intervening in the Syrian civil war at the request of the Syrian government for military aid against rebel groups.

Russia's Khmeimim base houses its Syrian drone operations, as well as radar and surveillance equipment. It has been targeting militants in places like Idlib using suicide drones: autonomously targeting UAVs (unmanned aerial vehicles).

Its KYB-UAV drones, developed by ZALA Aero, a Kalashnikov subsidiary, self-destruct when striking their targets. The Russian Defence Ministry initially tested the drones in Syria in late December and planned to widen their use in 2022.

AI Weapons: Autonomous Combat Vehicles and AI-Guided Missiles

In terms of autonomous weapons, Russia has deployed unmanned ground vehicles for tasks ranging from bomb disposal to anti-aircraft, and of course, killing.

Its Uran-9 UCGV (unmanned combat ground vehicle) was developed by JSC 766 UPTK, also part of the Kalashnikov group. It saw deployment on the ground in Syria and was used in the large-scale Vostok drills in 2018. It is fitted with a 30mm 2A72 autocannon, as well as four 9M120 Ataka anti-tank missiles and up to 12 Shmel-M thermobaric rocket launchers.

The autonomous units were part of further large-scale tests late last year, with Russian armed forces chief General Oleg Salyukov confirming the Uran-9 would be accepted into service by the Russian Ground Forces during 2022 for both combat and reconnaissance purposes, according to military publication Janes.

On the oceans, it has plans to incorporate AI into crewless naval and undersea vehicles. Last November, the Russian Ministry of Defence reportedly was arming naval vessels with kamikaze drones to strike ground targets and enemy ships, and to aid special forces soldiers performing secret missions.

And in the air, Russia reportedly has been working on developing AI-guided missiles that could decide to switch targets mid-flight since at least early 2017, emulating advanced technology in the U.S. Raytheon Block IV Tomahawk cruise missile.

This article first appeared in IoT World Today's sister publication AI Business.

Visit link:

Russia's AI Army: Drones, AI-Guided Missiles and Autonomous Tanks - IoT World Today

Posted in Ai | Comments Off on Russia’s AI Army: Drones, AI-Guided Missiles and Autonomous Tanks – IoT World Today

AI mastered PlayStation’s ‘Gran Turismo’ video game, but it could have more uses – NPR

Posted: at 8:20 pm

The Gran Turismo Sophy A.I. does a lap of the course. (Sony A.I.)

An artificial intelligence program has beaten the world's best players in the popular PlayStation racing game Gran Turismo Sport, and in doing so may have contributed towards designing better autonomous vehicles in the real world, according to one expert.

The latest development comes after an interesting couple of decades for A.I. playing games.

It began with chess, when world champion Garry Kasparov lost to IBM's Deep Blue in a match in 1997. Then with Go, when A.I. beat Korean champion Lee Sedol in 2016. And by 2019, an A.I. program ranked higher than 99.8% of world players in the wildly popular real-time strategy game StarCraft 2.

Now, an A.I. program has dethroned the best human players in the professional esports world of Gran Turismo Sport.

In a paper published recently in the science journal Nature, researchers on a team led by Sony A.I. detailed how they created a program called Gran Turismo Sophy, which was able to win a race in Tokyo last October.

The world's best human Gran Turismo players compete against GT Sophy. (Sony A.I.)

Peter Wurman is the head of the team on the GT Sophy project and said they didn't manually program the A.I. to be good at racing. Instead, they trained it on race after race, running multiple simulations of the game using a computer system connected to roughly 1,000 PlayStation 4 consoles.

"It doesn't know what any of its controls do," Wurman said. "And through trial and error, it learns that the accelerator makes it go forward and the steering wheel turns left and right ... and if it's doing the right thing by going forward, then it gets a little bit of a reward."

"It takes about an hour for the agent to learn to drive around a track. It takes about four hours to become about as good as the average human driver. And it takes 24 to 48 hours to be as good as the top 1% of the drivers who play the game."

And after another 10 days, it can finally run toe-to-toe with the very best humanity has to offer.
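
Sony has not released GT Sophy's training code, but the trial-and-error, reward-driven loop Wurman describes is the basic shape of reinforcement learning. A heavily simplified, generic Python sketch might look like the following; the toy environment, reward, and update rule are placeholders and have nothing to do with Gran Turismo itself.

import random

# Generic reinforcement-learning loop, heavily simplified: the agent tries
# actions, observes a reward, and nudges its estimates toward actions that
# paid off. The toy "environment" below is a placeholder, not the game.
ACTIONS = ["accelerate", "brake", "steer_left", "steer_right"]

def toy_environment(action):
    # Placeholder reward: pretend accelerating is usually the right move.
    return 1.0 if action == "accelerate" else random.uniform(-0.5, 0.5)

values = {a: 0.0 for a in ACTIONS}   # estimated value of each action
learning_rate, epsilon = 0.1, 0.2

for step in range(1000):
    if random.random() < epsilon:            # occasionally explore
        action = random.choice(ACTIONS)
    else:                                    # otherwise exploit best estimate
        action = max(values, key=values.get)
    reward = toy_environment(action)
    values[action] += learning_rate * (reward - values[action])

print(max(values, key=values.get))  # expected to settle on "accelerate"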

After finishing behind two bots controlled by Gran Turismo Sophy at the race in Tokyo, champion player Takuma Miyazono said it was actually a rewarding experience.

Driver Takuma Miyazono races against Gran Turismo Sophy. (Sony A.I.)

"I learned a lot from the A.I. agent," Miyazono said. "In order to drive faster, the A.I. drives in a way that we would have never come up with, which actually made sense when I saw its maneuvers."

Chris Gerdes is a professor of mechanical engineering at Stanford and reviewed the team's findings through its publication process at Nature. Gerdes also specializes in physics and drives race cars himself.

He said he spent a lot of time watching GT Sophy in action, trying to figure out if the A.I. was actually doing something intelligent or just learning a faster path around the same track through repetition.

"And it turns out that Sophy actually is doing things that race car drivers would consider to be very intelligent, making maneuvers that it would take a human race car driver a career to be able to pull some off ... out of their repertoire at just the right moment," he said.

What's more, Gerdes said this work could have even greater implications.

"I think you can take the lessons that you learned from Sophy and think about how those work into the development, for instance, of autonomous vehicles," he said.

Gerdes should know: He researches and designs autonomous vehicles.

"It's not as if you can simply take the results of this paper and say, 'Great, I'm going to try it on an autonomous vehicle tomorrow,'" Gerdes said. "But I really do think it's an eye opener for people who develop autonomous vehicles to just sit back and say, well, maybe we need to keep an open mind about the extent of possibilities here with A.I. and neural networks."

Wurman and Gerdes both said that taking this work to cars in the real world could still be a long way off.

But in the short term, Wurman's team is working with the developers of Gran Turismo to create a more engaging A.I. for normal players to race against in the next game in the series.

So in the near future, we could try our hands at racing it, too.

More here:

AI mastered PlayStation's 'Gran Turismo' video game, but it could have more uses - NPR

Posted in Ai | Comments Off on AI mastered PlayStation’s ‘Gran Turismo’ video game, but it could have more uses – NPR
