The Prometheus League
Breaking News and Updates
- Abolition Of Work
- Ai
- Alt-right
- Alternative Medicine
- Antifa
- Artificial General Intelligence
- Artificial Intelligence
- Artificial Super Intelligence
- Ascension
- Astronomy
- Atheism
- Atheist
- Atlas Shrugged
- Automation
- Ayn Rand
- Bahamas
- Bankruptcy
- Basic Income Guarantee
- Big Tech
- Bitcoin
- Black Lives Matter
- Blackjack
- Boca Chica Texas
- Brexit
- Caribbean
- Casino
- Casino Affiliate
- Cbd Oil
- Censorship
- Cf
- Chess Engines
- Childfree
- Cloning
- Cloud Computing
- Conscious Evolution
- Corona Virus
- Cosmic Heaven
- Covid-19
- Cryonics
- Cryptocurrency
- Cyberpunk
- Darwinism
- Democrat
- Designer Babies
- DNA
- Donald Trump
- Eczema
- Elon Musk
- Entheogens
- Ethical Egoism
- Eugenic Concepts
- Eugenics
- Euthanasia
- Evolution
- Extropian
- Extropianism
- Extropy
- Fake News
- Federalism
- Federalist
- Fifth Amendment
- Financial Independence
- First Amendment
- Fiscal Freedom
- Food Supplements
- Fourth Amendment
- Free Speech
- Freedom
- Freedom of Speech
- Futurism
- Futurist
- Gambling
- Gene Medicine
- Genetic Engineering
- Genome
- Germ Warfare
- Golden Rule
- Government Oppression
- Hedonism
- High Seas
- History
- Hubble Telescope
- Human Genetic Engineering
- Human Genetics
- Human Immortality
- Human Longevity
- Illuminati
- Immortality
- Immortality Medicine
- Intentional Communities
- Jacinda Ardern
- Jitsi
- Jordan Peterson
- Las Vegas
- Liberal
- Libertarian
- Libertarianism
- Liberty
- Life Extension
- Macau
- Marie Byrd Land
- Mars
- Mars Colonization
- Mars Colony
- Memetics
- Micronations
- Mind Uploading
- Minerva Reefs
- Modern Satanism
- Moon Colonization
- Nanotech
- National Vanguard
- NATO
- Neo-eugenics
- Neurohacking
- Neurotechnology
- New Utopia
- New Zealand
- Nihilism
- Nootropics
- NSA
- Oceania
- Offshore
- Olympics
- Online Casino
- Online Gambling
- Pantheism
- Personal Empowerment
- Poker
- Political Correctness
- Politically Incorrect
- Polygamy
- Populism
- Post Human
- Post Humanism
- Posthuman
- Posthumanism
- Private Islands
- Progress
- Proud Boys
- Psoriasis
- Psychedelics
- Putin
- Quantum Computing
- Quantum Physics
- Rationalism
- Republican
- Resource Based Economy
- Robotics
- Rockall
- Ron Paul
- Roulette
- Russia
- Sealand
- Seasteading
- Second Amendment
- Seychelles
- Singularitarianism
- Singularity
- Socio-economic Collapse
- Space Exploration
- Space Station
- Space Travel
- Spacex
- Sports Betting
- Sportsbook
- Superintelligence
- Survivalism
- Talmud
- Technology
- Teilhard De Chardin
- Terraforming Mars
- The Singularity
- Tms
- Tor Browser
- Trance
- Transhuman
- Transhuman News
- Transhumanism
- Transhumanist
- Transtopian
- Transtopianism
- Ukraine
- Uncategorized
- Vaping
- Victimless Crimes
- Virtual Reality
- Wage Slavery
- War On Drugs
- Waveland
- Ww3
- Yahoo
- Zeitgeist Movement
- Prometheism
- Forbidden Fruit
- The Evolutionary Perspective
Monthly Archives: February 2022
Panasonic buy out ‘upped AI & ML offerings’ – Blue Yonder – Supply Chain Digital – The Procurement & Supply Chain Platform
Posted: February 28, 2022 at 8:20 pm
AI is already transforming supply chain operations. Retailers, for instance, are becoming better informed about what stock is available and can forecast with far more precision, meaning less waste and higher customer satisfaction. This means that organisations can better handle disruption across their supply chain at any given time.
Retailers that utilised Blue Yonder during the Suez Canal blockade, for example, had enhanced visibility of their stock and could foresee exactly how it would impact them and then adapt to continue business-as-usual and minimise disruption.
With AI, organisations can learn from previous experiences, using their historical data to manage situations better in the future. This means that companies will have a better grasp of potential uncertainties, leading to higher efficiency and increased resilience. It also empowers retailers to predict future demand and make decisions at scale based on what the data tells them.
Advancements in AI technology are also helping organisations become more sustainable. AI is helping to reduce waste and carbon footprints by introducing greater efficiencies. Organisations can know exactly what should be distributed and where, reducing transportation and overall costs on a global scale.
Retailers see the benefits every day. Through AI, Morrisons has increased both employee and customer satisfaction. AI optimisation has increased sustainability (due to less waste), improved on-shelf availability by 30%, and led to better financial KPIs. When stores have the right amount of availability on shelves, there is less repetitive work, freeing up employees to focus on customer service. Here, technology has given humans more time to make better decisions and do their jobs effectively, which is where companies will see real value from AI.
AI is helping to optimise processes and inform decisions, and organisations that don't adopt this technology will not survive for long. This is an especial concern given constantly shifting consumer demands: consumers expect what they want, when they want it, at their convenience. AI can help businesses achieve this.
Some find it difficult to see how AI and algorithms can benefit a business; for years, many have relied on gut feeling, and some fear that AI will replace humans. While these obstacles are not easy to overcome, organisations must recognise the business benefits.
One way engineers are looking to develop AI for the future is by making it more individualised. The development of fairer algorithms will help to avoid human biases and discrimination. Part of this is that AI needs to develop a causal structure and answer questions such as "what happens if?" This will help humans understand why AI recommends particular decisions and broaden the scope of AI for use across a business.
The future for the industry is really exciting. The COVID-19 pandemic saw retailers and supply chain operators wake up to the importance of technology and the business benefits it can provide. At Blue Yonder, it's our job to build on this momentum and educate retailers on how they can continue to improve business operations.
Our focus will be increasing the end-to-end view and optimising the whole supply chain for our customers, which will be crucial moving forward. To get there, the industry needs to implement a more horizontal integration of operations, from sourcing and warehousing to transportation and distribution.
Our partnership with Panasonic will also introduce more IoT technologies and more intelligent hardware devices (such as cameras or intelligent shelves for cashier-less shopping). With such hardware, retailers will have a more precise view of what is in stock and when, and with this more reliable input data, the AI systems can make better decisions and generate even more value.
The future holds more integrated, automated, and autonomous supply chains to make intelligent decisions. Blue Yonder is ready and prepared to support customers looking to make this transition.
BioAge Labs to Participate in Truist Symposium on AI-driven Drug Discovery, Multiple Upcoming Biotech Investment Conferences – Business Wire
Posted: at 8:20 pm
RICHMOND, Calif.--(BUSINESS WIRE)--BioAge Labs, Inc. (link), a clinical-stage biotechnology company and global leader in using artificial intelligence (AI) and machine learning (ML) to develop drugs that target the molecular causes of aging, today announced that the Company will participate in a panel discussion at the Truist Securities Life Sciences AI Symposium, the first of three investment banking corporate access events to which BioAge will contribute in the coming two months.
BioAge CEO and co-founder Kristen Fortney, PhD, will speak on the panel "Integrated platforms for AI-based drug discovery" on March 1, 2022, from 11:20 AM to 12:20 PM EST. Additional panelists include Carl Hansen, CEO of AbCellera Biologics; Don Bergstrom, President of R&D at Relay Therapeutics; Abraham Heifets, CEO, and Jonathan Barr, CFO, of Atomwise; and Krishnan Nandabalan, CEO of InveniAI. Robyn Karnauskas, Senior Research Analyst, and Kripa Devarakonda, Research Analyst, both of Truist Securities, will moderate. The event will be held virtually.
A live audio webcast may be accessed through the conference website: https://truist-securities-2022-ai-symposium-biotech-tools.videoshowcase.net/login. (In the login form, enter "BioAge Labs" in the "Sales Contact" field.)
BioAge uses its discovery platform, which combines AI and ML analysis of proprietary longitudinal human samples with detailed health records tracking individuals over the lifespan, to map out the key molecular pathways that impact healthy human aging, thus revealing the causes of age-related disease. By targeting the mechanisms of aging with a mechanistically diverse portfolio of drugs, BioAge is unlocking opportunities to treat or even prevent these diseases in entirely new ways. BioAge currently has three clinical-stage programs in its growing portfolio.
From March 29 to March 31, 2022, BioAge will participate in SVB Leerink Biopharma Private Company Connect, an event intended to bring together private companies with institutional investors and to facilitate discussions on the trends and opportunities shaping the future of healthcare.
From April 11 to April 14, 2022, BioAge will participate in the 21st Annual Needham Virtual Healthcare Conference, which features presentations from leading public and private companies in the biotechnology, specialty pharmaceuticals, medical technology, and diagnostics sectors.
At all three events, both CEO Kristen Fortney, PhD, and CFO Dov Goldstein, MD, MBA will be available to meet with qualified institutional, private equity, and venture capital investors. Investors seeking to participate in these events should contact media@bioagelabs.com.
New microcredential focuses on the importance of AI ethics University Affairs – University Affairs
Posted: at 8:20 pm
"People doing computer science degrees, of any sort, need ethical training," says one of the designers.
When Katrina Ingram talks to non-experts about artificial intelligence, she often has to clear up one big misconception: call it the Terminator fallacy.
"Hollywood's done a fantastic job of creating this idea that AI is almost human-like and sentient," she says. "And, not to mention, often with an evil agenda."
The so-called "narrow AI" at work in the world today is, frankly, more boring than Hollywood's killer robots. It takes the form of algorithms that decide which of your email messages are spam or sift through job applicants to choose promising candidates. These systems typically use huge datasets to learn how to make choices, and over time improve their performance by incorporating more data. But as these systems become more sophisticated, and as more businesses, governments and other organizations rely on AI for more and more tasks, the ethical quandaries surrounding the technology are piling up. "A lot of AI systems are being used to make predictions about people, and that can cause harm in all kinds of ways," says Ms. Ingram.
As founder and CEO of Edmonton-based company Ethically Aligned AI, Ms. Ingram spends much of her time consulting with organizations on how to foreground ethical considerations when designing and deploying artificial intelligence. That's why she's helped design Canada's first university microcredential focused on AI ethics: a four-course certificate program offered by Athabasca University.
According to Ms. Ingram, this kind of training should be foundational for any professional working in digital systems. Yet the Athabasca program is the first of its kind in Canada, and one of relatively few worldwide.
"People doing computer science degrees, of any sort, need ethical training," she said. "That's one audience that needs to be served. The other big audience is people who are already working professionally: all the people working in companies, designing these systems. They need training too, and the microcredential program is flexible, not too time-consuming, and works for them."
The courses were co-designed with Trystan Goetze, a philosopher and ethicist currently completing his postdoctoral fellowship in computer ethics at Harvard University.
"The technology has gotten ahead of the policy thinking, and it's gotten ahead of the humanistic thinking on the subject," says Dr. Goetze. "Computers are not like other kinds of technology, where the ethical issues that come up are very narrow in scope. We don't talk about 'automobile ethics,' for example. But we do talk about computer or AI ethics, because this technology can be applied in almost any aspect of society, business, you name it."
What kinds of ethical issues are at stake? "Bias is a major one," said Dr. Goetze. AI can reaffirm and even exacerbate existing prejudices. Consider an AI system tasked with choosing promising job applicants, which bases its decisions on a dataset of previous hires. If historic hiring practices were flawed or biased against particular candidates, those biases will be integrated into the AI as well.
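To make that hiring example concrete, here is a minimal sketch (not from the article; the synthetic data, the bias term, and the use of scikit-learn are all illustrative assumptions) showing how a model fitted to biased historical decisions reproduces that bias for two equally qualified candidates:

```python
# Toy illustration: a model trained on biased historical hiring
# decisions learns to penalize group membership, not just skill.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
skill = rng.normal(size=n)             # candidate qualification
group = rng.integers(0, 2, size=n)     # 0 = group A, 1 = group B
# Historical decisions: group B was held to a higher bar --
# this is the bias hidden in the training data.
hired = (skill - 0.8 * group + rng.normal(scale=0.3, size=n)) > 0

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# Two equally skilled candidates, differing only in group membership:
probs = model.predict_proba([[0.5, 0], [0.5, 1]])[:, 1]
print(f"P(hire | group A) = {probs[0]:.2f}, P(hire | group B) = {probs[1]:.2f}")
```

Nothing in the pipeline flags the problem: the model is "accurate" with respect to its training labels, which is exactly the point.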
Another consideration is how the data used by AI systems is collected: is it scraped from social media profiles or other online sources in a consensual and privacy-compliant way? And then there's robo-ethics, including concerns around misuse of facial recognition technology, or safety issues raised by the testing of AI-powered autonomous vehicles on public streets.
Besides academic thinkers, the courses will include interviews and contributions from activists, professionals and business leaders, who can bring different facets of the subject to light.
"[These issues are] nothing new for technologists," says Dr. Goetze. "But for a business leader this could be completely new information."
Beyond the content of the courses, Ms. Ingram and Dr. Goetze are both pleased with another aspect of them: they look good.
"Athabasca has put a great deal of creative effort into these courses," Dr. Goetze said. "They're visually designed in a beautiful way, with video and animations. It's not something that looks like it was slapped together in haste during the last lockdown, which I think is a testament to the fact that when we have the time and resources, we can produce online education experiences that are truly special."
Inside the Lab: Building for the Metaverse With AI – Investor Relations
Posted: at 8:20 pm
Building for the metaverse will require major breakthroughs in artificial intelligence. Our AI labs are already making advancements in research and development as part of a long-term effort to enable the next era of computing.
Today, we hosted a Meta AI: Inside the Lab event on AI's role in building for the metaverse and showcased some of our work. You can watch the full event on the Meta AI Facebook page and catch up on some of the research advances we announced today.
As we build for the metaverse, we'll continue to break ground in areas like self-supervised learning and building the world's most powerful AI supercomputer to drive the future of AI research breakthroughs. We're at the beginning of this journey, and today's advances provide a snapshot of what's possible through the power of AI and open science.
Watch the full Meta AI: Inside the Lab event.
2022 AI Trends: How Will AI Affect You? – ReadWrite
Posted: at 8:20 pm
What does the crystal ball portend for AI as we are halfway through the first business quarter of the year? First, of course, we already know that artificial intelligence (AI) impacts every industry on the planet.
Here are some areas in which AI will play a more significant role in our lives in 2022 and beyond.
AI feasts on data, and the proliferation of avenues for gathering that information has heightened the value of data as a competitive advantage and a critical asset for businesses and governments alike.
As a result, privacy regulations have been enacted, along with initiatives to educate the public about how their data can be used. Individuals will have more agency in exercising their data rights due to these efforts.
Data marketplaces, online platforms where individuals and businesses can buy and sell data, are already emerging due to the convergence of these forces.
Data marketplaces combine democratized access, privacy restrictions, and monetization methods, allowing data owners to profit from the use of their data.
The metaverse, which combines virtual reality, augmented reality, online worlds, tailored experiences, and games, allows people to communicate, transact business, and construct personas entirely online; it has recently received a lot of attention.
Many firms are vying for control of aspects of the metaverse, with examples already present in popular apps like Roblox. What does this have to do with artificial intelligence? In the metaverse, AI can perform a variety of functions in the cloud, including developing synthetic people, writing stories, and improving VR experiences.
Gone are the days when AI was exclusively understood by data scientists.
AI is becoming a learning need for many jobs, with AI in every industry. Governments are producing AI policies, and new laws are arising to manage AI and associated concerns such as privacy.
There will be an increase in the number of AI-Enabled Practitioners. This is because people are beginning to understand the importance of AI in their work, whether in medicine, law, human resources, sales, or any other field.
Some of AI's most commercially successful applications have been recommendation systems and dynamic pricing.
It may be unnerving, but AIs never sleep, and they are constantly learning more about us. We can expect this trend to continue.
Everything we receive online is tailored to us as people (whether it's a sale, a coupon, recommendations, prices, and so on). Therefore, we should expect more information about us to be collected online. Intelligence is gathered via chatbots, digital assistants, and other means.
For some, all of this collecting is an exhilarating thought, but for others it is a source of concern. Questions are being asked: Where is all of the information being kept? Who will have access to information about me in the future? Why do salespeople think it's okay to push and sell to me when I don't want to be bothered?
Education standards take time to keep up with technological change. India, for example, has developed AI test standards for K-12 pupils. AI is becoming an essential subject in schools, and in-house training for course curricula is in the near future.
Alexa and Siri, among other digital assistants, have been around for a while, but what about house robots?
Although AI-powered gadgets are not new to the house (for example, Roomba), this year has introduced more general-purpose AI-powered robots. Amazon, for example, recently unveiled Astro, a robot that can follow you around the house, link to Alexa, and monitor security, among other things.
AI has shown incredible creative talents, ranging from the ability to compose music to the ability to write poetry and paint. What does this mean for the average creative who makes a living through their craft? And what will it do for shoppers?
You can expect to see AI-assisted creativity in your favorite apps, from those that generate presentations for you at work to those that cook dinner for you. Will such assistance become standard procedure? And what will the quality be?
AI's progress as a technology and a force for industry growth has been exponential in recent years. As a result, we may expect more AI trends to touch our daily lives as AI-powered goods find their way out of the prototype labs and into the hands of consumers.
2022 looks to be the Year of AI Supremacy, not in an evil-robot way, but in a way that will make the business of doing business easier and more profitable.
Some employees and their managers are under the mistaken impression that Automated Algorithms stifle creativity. These Luddites are always happy to learn about glitches in the cloud and on platforms.
But the intelligent worker and manager realize that each step technology takes forward, whether automated or not, is an opportunity to improve the bottom line and increase productivity.
It seems that most people in business have concerns about AI, but because AI in business means moving forward despite the things we don't yet know, we accept the unknowns.
Image Credit: Tara Winstead; Pexels; Thank you!
Deanna is the Managing Editor at ReadWrite. Previously she worked as the Editor in Chief for Startup Grind and has more than 20 years of experience in content management and content development.
Visionary.AI receives additional $2.5 million funding for its AI in cameras | Ctech – CTech
Posted: at 8:20 pm
Visionary.ai, an Israeli startup that uses AI to make cameras operate better, received fresh capital to increase its Seed round just as the company celebrates its first birthday. The company received an additional $2.5 million on top of its initial $4.5 million from February 2021, bringing its total to $7 million. The round was led by Ibex Investors with the participation of Spring Ventures and Capital Point and will be used for R&D and business development.
"Over the past year, we've built a team and technology that can help any camera achieve stellar results in real-time," said co-founder and CEO Oren Debbi. "Our software can put an end to the days of blurry video calls with our family, friends, and colleagues. The response we've received from both investors and customers has been phenomenal. Cameras are an integral part of our lives, and our vision is to see Visionary.ai inside every camera, bringing greater image quality to the world."
Visionary.ai has developed an AI-powered technology that operates in the dark and removes blur from photos. Its software can work at the core of any camera in the Image Signal Processor (ISP). There, it optimizes light, sharpness, and clarity with no additional hardware needed. "We're leveraging image data in a new way to deliver optimal results for camera manufacturers worldwide. As a veteran of computer vision technology, I am in awe of how our team has developed and brought this to market at lightning speed," added co-founder and CTO Yoav Taieb.
Visionary.ai was founded in 2021 by Debbi and Taieb and is based in Jerusalem. Its software can enhance the image quality of laptops, tablets, phones, and webcams in real-time to help capture clearer images. Its team includes members from Microsoft, Intel, and Samsung who have collectively worked on over 40 computer vision patents. "Ibex Investors is proud to have supported Visionary.ai from the beginning, and we're thrilled to further our commitment to reflect their outstanding growth and potential," added Nicole Priel, Partner & Managing Director at Ibex Investors LLC.
RISC-V AI Chips Will Be Everywhere – IEEE Spectrum
Posted: at 8:20 pm
While machine learning has been around a long time, deep learning has taken on a life of its own lately. The reason for that has mostly to do with the increasing amounts of computing power that have become widely available, along with the burgeoning quantities of data that can be easily harvested and used to train neural networks.
The amount of computing power at people's fingertips started growing in leaps and bounds at the turn of the millennium, when graphical processing units (GPUs) began to be harnessed for nongraphical calculations, a trend that has become increasingly pervasive over the past decade. But the computing demands of deep learning have been rising even faster. This dynamic has spurred engineers to develop electronic hardware accelerators specifically targeted to deep learning, Google's Tensor Processing Unit (TPU) being a prime example.
Here, I will describe a very different approach to this problem: using optical processors to carry out neural-network calculations with photons instead of electrons. To understand how optics can serve here, you need to know a little bit about how computers currently carry out neural-network calculations. So bear with me as I outline what goes on under the hood.
Almost invariably, artificial neurons are constructed using special software running on digital electronic computers of some sort. That software provides a given neuron with multiple inputs and one output. The state of each neuron depends on the weighted sum of its inputs, to which a nonlinear function, called an activation function, is applied. The result, the output of this neuron, then becomes an input for various other neurons.
Reducing the energy needs of neural networks might require computing with light
For computational efficiency, these neurons are grouped into layers, with neurons connected only to neurons in adjacent layers. The benefit of arranging things that way, as opposed to allowing connections between any two neurons, is that it allows certain mathematical tricks of linear algebra to be used to speed the calculations.
While they are not the whole story, these linear-algebra calculations are the most computationally demanding part of deep learning, particularly as the size of the network grows. This is true for both training (the process of determining what weights to apply to the inputs for each neuron) and for inference (when the neural network is providing the desired results).
What are these mysterious linear-algebra calculations? They aren't so complicated really. They involve operations on matrices, which are just rectangular arrays of numbers: spreadsheets, if you will, minus the descriptive column headers you might find in a typical Excel file.
This is great news because modern computer hardware has been very well optimized for matrix operations, which were the bread and butter of high-performance computing long before deep learning became popular. The relevant matrix calculations for deep learning boil down to a large number of multiply-and-accumulate operations, whereby pairs of numbers are multiplied together and their products are added up.
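As a minimal illustration (the layer sizes and random values here are arbitrary, not from the article), this numpy sketch shows that a dense layer's matrix-vector product is exactly a batch of such multiply-and-accumulate operations, followed by the nonlinear activation described earlier:

```python
# A dense neural-network layer reduced to multiply-and-accumulate ops:
# each output is a weighted sum of inputs passed through a nonlinearity.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=64)          # inputs from the previous layer
W = rng.normal(size=(32, 64))    # one row of weights per neuron
b = np.zeros(32)                 # biases

# Matrix form: one matrix-vector product performs all the MACs at once.
z = W @ x + b

# Equivalent explicit loop: 32 x 64 multiply-and-accumulate operations.
z_loop = np.array([sum(W[i, j] * x[j] for j in range(64)) + b[i]
                   for i in range(32)])
assert np.allclose(z, z_loop)

out = np.maximum(z, 0.0)         # ReLU activation: the nonlinear step
```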
Over the years, deep learning has required an ever-growing number of these multiply-and-accumulate operations. Consider LeNet, a pioneering deep neural network, designed to do image classification. In 1998 it was shown to outperform other machine techniques for recognizing handwritten letters and numerals. But by 2012 AlexNet, a neural network that crunched through about 1,600 times as many multiply-and-accumulate operations as LeNet, was able to recognize thousands of different types of objects in images.
Advancing from LeNet's initial success to AlexNet required almost 11 doublings of computing performance (1,600 ≈ 2^10.6). During the 14 years that took, Moore's law provided much of that increase. The challenge has been to keep this trend going now that Moore's law is running out of steam. The usual solution is simply to throw more computing resources, along with time, money, and energy, at the problem.
As a result, training today's large neural networks often has a significant environmental footprint. One 2019 study found, for example, that training a certain deep neural network for natural-language processing produced five times the CO2 emissions typically associated with driving an automobile over its lifetime.
Improvements in digital electronic computers allowed deep learning to blossom, to be sure. But that doesn't mean that the only way to carry out neural-network calculations is with such machines. Decades ago, when digital computers were still relatively primitive, some engineers tackled difficult calculations using analog computers instead. As digital electronics improved, those analog computers fell by the wayside. But it may be time to pursue that strategy once again, in particular when the analog computations can be done optically.
It has long been known that optical fibers can support much higher data rates than electrical wires. That's why all long-haul communication lines went optical, starting in the late 1970s. Since then, optical data links have replaced copper wires for shorter and shorter spans, all the way down to rack-to-rack communication in data centers. Optical data communication is faster and uses less power. Optical computing promises the same advantages.
But there is a big difference between communicating data and computing with it. And this is where analog optical approaches hit a roadblock. Conventional computers are based on transistors, which are highly nonlinear circuit elements, meaning that their outputs aren't just proportional to their inputs, at least when used for computing. Nonlinearity is what lets transistors switch on and off, allowing them to be fashioned into logic gates. This switching is easy to accomplish with electronics, for which nonlinearities are a dime a dozen. But photons follow Maxwell's equations, which are annoyingly linear, meaning that the output of an optical device is typically proportional to its inputs.
The trick is to use the linearity of optical devices to do the one thing that deep learning relies on most: linear algebra.
To illustrate how that can be done, I'll describe here a photonic device that, when coupled to some simple analog electronics, can multiply two matrices together. Such multiplication combines the rows of one matrix with the columns of the other. More precisely, it multiplies pairs of numbers from these rows and columns and adds their products together: the multiply-and-accumulate operations I described earlier. My MIT colleagues and I published a paper about how this could be done in 2019. We're working now to build such an optical matrix multiplier.
The basic computing unit in this device is an optical element called a beam splitter. Although its makeup is in fact more complicated, you can think of it as a half-silvered mirror set at a 45-degree angle. If you send a beam of light into it from the side, the beam splitter will allow half that light to pass straight through it, while the other half is reflected from the angled mirror, causing it to bounce off at 90 degrees from the incoming beam.
Now shine a second beam of light, perpendicular to the first, into this beam splitter so that it impinges on the other side of the angled mirror. Half of this second beam will similarly be transmitted and half reflected at 90 degrees. The two output beams will combine with the two outputs from the first beam. So this beam splitter has two inputs and two outputs.
To use this device for matrix multiplication, you generate two light beams with electric-field intensities that are proportional to the two numbers you want to multiply. Let's call these field intensities x and y. Shine those two beams into the beam splitter, which will combine these two beams. This particular beam splitter does that in a way that will produce two outputs whose electric fields have values of (x + y)/√2 and (x − y)/√2.
In addition to the beam splitter, this analog multiplier requires two simple electronic components, photodetectors, to measure the two output beams. They don't measure the electric field intensity of those beams, though. They measure the power of a beam, which is proportional to the square of its electric-field intensity.
Why is that relation important? To understand that requires some algebra, but nothing beyond what you learned in high school. Recall that when you square (x + y)/√2 you get (x² + 2xy + y²)/2. And when you square (x − y)/√2, you get (x² − 2xy + y²)/2. Subtracting the latter from the former gives 2xy.
Pause now to contemplate the significance of this simple bit of math. It means that if you encode a number as a beam of light of a certain intensity and another number as a beam of another intensity, send them through such a beam splitter, measure the two outputs with photodetectors, and negate one of the resulting electrical signals before summing them together, you will have a signal proportional to the product of your two numbers.
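A quick numerical sketch of that scheme (plain Python, using the √2 normalization above) confirms that the difference of the two detector powers is proportional to the product:

```python
import math

# Simulate the beam-splitter multiplier: encode x and y as field
# amplitudes, detect the two output powers, and subtract them.
def optical_multiply(x, y):
    out1 = (x + y) / math.sqrt(2)   # field at one beam-splitter output
    out2 = (x - y) / math.sqrt(2)   # field at the other output
    p1 = out1 ** 2                  # photodetector 1: power ~ field squared
    p2 = out2 ** 2                  # photodetector 2
    return p1 - p2                  # (x+y)^2/2 - (x-y)^2/2 = 2xy

print(optical_multiply(3.0, 4.0))   # 24.0, i.e. 2 * (3 * 4)
```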
Simulations of the integrated Mach-Zehnder interferometer found in Lightmatter's neural-network accelerator show three different conditions whereby light traveling in the two branches of the interferometer undergoes different relative phase shifts (0 degrees in a, 45 degrees in b, and 90 degrees in c). Image: Lightmatter
My description has made it sound as though each of these light beams must be held steady. In fact, you can briefly pulse the light in the two input beams and measure the output pulse. Better yet, you can feed the output signal into a capacitor, which will then accumulate charge for as long as the pulse lasts. Then you can pulse the inputs again for the same duration, this time encoding two new numbers to be multiplied together. Their product adds some more charge to the capacitor. You can repeat this process as many times as you like, each time carrying out another multiply-and-accumulate operation.
Using pulsed light in this way allows you to perform many such operations in rapid-fire sequence. The most energy-intensive part of all this is reading the voltage on that capacitor, which requires an analog-to-digital converter. But you don't have to do that after each pulse; you can wait until the end of a sequence of, say, N pulses. That means that the device can perform N multiply-and-accumulate operations using the same amount of energy to read the answer whether N is small or large. Here, N corresponds to the number of neurons per layer in your neural network, which can easily number in the thousands. So this strategy uses very little energy.
Sometimes you can save energy on the input side of things, too. That's because the same value is often used as an input to multiple neurons. Rather than that number being converted into light multiple times, consuming energy each time, it can be transformed just once, and the light beam that is created can be split into many channels. In this way, the energy cost of input conversion is amortized over many operations.
Splitting one beam into many channels requires nothing more complicated than a lens, but lenses can be tricky to put onto a chip. So the device we are developing to perform neural-network calculations optically may well end up being a hybrid that combines highly integrated photonic chips with separate optical elements.
I've outlined here the strategy my colleagues and I have been pursuing, but there are other ways to skin an optical cat. Another promising scheme is based on something called a Mach-Zehnder interferometer, which combines two beam splitters and two fully reflecting mirrors. It, too, can be used to carry out matrix multiplication optically. Two MIT-based startups, Lightmatter and Lightelligence, are developing optical neural-network accelerators based on this approach. Lightmatter has already built a prototype that uses an optical chip it has fabricated. And the company expects to begin selling an optical accelerator board that uses that chip later this year.
Another startup using optics for computing is Optalysis, which hopes to revive a rather old concept. One of the first uses of optical computing back in the 1960s was for the processing of synthetic-aperture radar data. A key part of the challenge was to apply to the measured data a mathematical operation called the Fourier transform. Digital computers of the time struggled with such things. Even now, applying the Fourier transform to large amounts of data can be computationally intensive. But a Fourier transform can be carried out optically with nothing more complicated than a lens, which for some years was how engineers processed synthetic-aperture data. Optalysis hopes to bring this approach up to date and apply it more widely.
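For a sense of the operation a lens performs essentially for free, here is a minimal numpy sketch of the same 2-D Fourier transform computed digitally (the random image is just a stand-in for radar data):

```python
import numpy as np

# A lens optically Fourier-transforms the field in its focal plane;
# digitally, the equivalent operation is a 2-D FFT over the whole image.
image = np.random.default_rng(0).normal(size=(512, 512))  # stand-in data
spectrum = np.fft.fftshift(np.fft.fft2(image))  # centered 2-D spectrum
print(spectrum.shape, spectrum.dtype)           # (512, 512) complex128
```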
There is also a company called Luminous, spun out of Princeton University, which is working to create spiking neural networks based on something it calls a laser neuron. Spiking neural networks more closely mimic how biological neural networks work and, like our own brains, are able to compute using very little energy. Luminous's hardware is still in the early phase of development, but the promise of combining two energy-saving approachesspiking and opticsis quite exciting.
There are, of course, still many technical challenges to be overcome. One is to improve the accuracy and dynamic range of the analog optical calculations, which are nowhere near as good as what can be achieved with digital electronics. That's because these optical processors suffer from various sources of noise and because the digital-to-analog and analog-to-digital converters used to get the data in and out are of limited accuracy. Indeed, it's difficult to imagine an optical neural network operating with more than 8 to 10 bits of precision. While 8-bit electronic deep-learning hardware exists (the Google TPU is a good example), this industry demands higher precision, especially for neural-network training.
There is also the difficulty of integrating optical components onto a chip. Because those components are tens of micrometers in size, they can't be packed nearly as tightly as transistors, so the required chip area adds up quickly. A 2017 demonstration of this approach by MIT researchers involved a chip that was 1.5 millimeters on a side. Even the biggest chips are no larger than several square centimeters, which places limits on the sizes of matrices that can be processed in parallel this way.
There are many additional questions on the computer-architecture side that photonics researchers tend to sweep under the rug. What's clear though is that, at least theoretically, photonics has the potential to accelerate deep learning by several orders of magnitude.
Based on the technology that's currently available for the various components (optical modulators, detectors, amplifiers, analog-to-digital converters), it's reasonable to think that the energy efficiency of neural-network calculations could be made 1,000 times better than today's electronic processors. Making more aggressive assumptions about emerging optical technology, that factor might be as large as a million. And because electronic processors are power-limited, these improvements in energy efficiency will likely translate into corresponding improvements in speed.
Many of the concepts in analog optical computing are decades old. Some even predate silicon computers. Schemes for optical matrix multiplication, and even for optical neural networks, were first demonstrated in the 1970s. But this approach didn't catch on. Will this time be different? Possibly, for three reasons.
First, deep learning is genuinely useful now, not just an academic curiosity. Second, we can't rely on Moore's Law alone to continue improving electronics. And finally, we have a new technology that was not available to earlier generations: integrated photonics. These factors suggest that optical neural networks will arrive for real this timeand the future of such computations may indeed be photonic.
Butterfly Introduces System-wide Ultrasound with AI-guided Option – Imaging Technology News
Posted: at 8:20 pm
February 28, 2022: Butterfly Network, Inc., a digital health company transforming care with handheld, whole-body ultrasound, introduced Butterfly Blueprint, a system-wide platform designed to support the scaled deployment of ultrasound across hospitals and health systems to empower more-informed clinical decisions from the bedside and encounter-based workflow.
Leveraging Butterfly's unique combination of a whole-body handheld ultrasound device, software, and services, Blueprint brings hospitals and health systems a complete ultrasound solution. This system-wide offering promotes improved patient care via accessible imaging across multiple disciplines and care settings. By integrating into health systems' clinical and administrative systems and workflows, Blueprint delivers a clinical assessment tool at scale.
"Ultrasound provides valuable information. The ability to enable the practical application of ultrasound into the clinical workflow to inform clinical decision making is powerful," said Dr. Todd Fruchterman, Butterfly Network's President and Chief Executive Officer. "Butterfly Blueprint empowers an evolved point-of-care toolkit for hospitals and health systems, one that transcends beyond touch, listening, and surface visuals, and beyond habit-based imaging and lab orders. With Butterfly, clinicians across all disciplines now have a strategic tool that we believe allows them to see and know sooner, helping them drive better care decisions, efficiency, and outcomes."
Butterfly Blueprint is complemented with a rich set of optional software and services, including Caption Health's AI-guided software. Caption AI empowers healthcare professionals without sonography expertise to capture and interpret cardiac ultrasound images for earlier disease detection and better patient management. The Centers for Medicare and Medicaid Services (CMS) has approved new technology add-on payments (NTAP) for Caption Guidance, a designation awarded to new medical technologies and services that are expected to substantially improve the diagnosis or treatment of Medicare beneficiaries.
"Deployment on the Blueprint platform is helping us fulfill our mission to put enhanced diagnostic capabilities in more hands and increase access for patients," said Steve Cashman, President and CEO of Caption Health. "By integrating Caption AI with Butterfly iQ+, we're expanding diagnostic toolkits for hospitals, health systems, and wherever a patient needs care. The insights provided by a high-quality ultrasound exam are critical for better care and earlier disease detection. Butterfly and Caption are making that vision not just a possibility, but a reality."
The University of Rochester Medical Center (URMC), upstate New York's largest and most comprehensive healthcare system, announced earlier this year that it will deploy Butterfly Blueprint, bringing it first to medical students, primary care providers, and home care nurses. As quoted within the announcement, Dr. David L. Waldman, Chief Medical IT Development Officer and former Chair of Imaging Sciences at URMC, said, "The deployment of innovative ultrasound technology has the potential to redefine the point-of-care clinical standard and serve as an enhancement to the use of the stethoscope. Enterprise deployment of point of care ultrasound will ultimately enable every clinician, across all departments, to quickly image patients where they are located."
With Butterfly Blueprint, hospitals and health systems like URMC can rapidly and easily access ultrasound-enabled insights at the point of care through capabilities such as intuitive, mobile-first workflows; 20+ ready-to-use presets for procedural guidance; and device-agnostic software that integrates with non-Butterfly devices, as well as with other clinical and administrative systems, including the PACS and EMR. Butterfly probes connect with compatible Apple and Android smartphones and tablets for display and support streamlined information sharing. Blueprint is also supported with Butterfly Academy, an extensive set of ultrasound-specific courses and curricula.
For more information:https://www.butterflynetwork.com/blueprint
Two top jobs in the booming AI industry – TechRepublic
Posted: at 8:20 pm
Artificial intelligence (AI) is an ever-growing technology that's changing how businesses operate, and the support of AI-savvy IT pros has become critical. Here's a look at two of the hottest jobs in the field.
The use of artificial intelligence (AI) is exploding across all industries, and AI has seen rapid growth over the past year. According to a 2021 study conducted by PwC, 52% of survey respondents said they had accelerated their AI adoption plans in the wake of the COVID-19 crisis. And a whopping 86% of respondents said that AI would be a mainstream technology in the near future.
Business as we once knew it is gone. AI has left its mark, showing its ability to move businesses forward even during an uncertain time. According to the PwC study, AI has enabled businesses to:
Will AI continue to be a leading technology in 2022? Absolutely. The experts at Forrester predict that the year will bring with it some more big waves in AI, with traditional businesses putting AI at the center of everything they do. Forrester also predicts that creative AI systems will win dozens of patents, further expanding AI.
Of course, the growth of AI also brings about some unique challenges for organizations ready to adopt it. For example, companies must expand their already tight IT budgets to achieve the computing power necessary to take advantage of AI. And machine learning, a critical subset of AI, is fraught with issues such as social bias, discrimination and subpar security.
Beyond these concerns lies perhaps the greatest challenge yet: securing high-quality talent to fill AI-based IT roles. These roles are in high demand as organizations place priority on the deployment of AI technology.
Due to AI's growth and the demand for AI-based talent, aspiring IT professionals should consider this career path. Of course, AI requires a solid understanding of programming languages (e.g., Python and Java), statistics, machine learning algorithms, big data, and AI frameworks such as Apache Spark. Luckily, all of this knowledge can be gained through a college or university artificial intelligence program and hands-on training.
There has never been a better time to start a career in AI or take advantage of AI within your business. Whether you're seeking a new career or you're a business looking to fill empty AI roles, these two TechRepublic hiring kits can give you a head start.
To truly take advantage of AI, businesses must rely on the skill of experienced AI architects. Using leading AI technology frameworks, AI architects develop and manage the critical architecture AI is built upon. To do so, AI architects must be able to see the big picture of a world supported by AI.
AI architects must have several years of work experience, including hands-on experience in computer science, data and other AI-related disciplines. Architects should be able to implement machine learning tactics and develop AI architecture in a variety of languages.
In this hiring kit, you'll get to know the basic responsibilities an AI architect should have and the necessary skills required for success.
Machine learning is a critical component of AI. Machine learning refers to software's ability to learn, and thereby predict certain outcomes, much like a human brain. And for businesses to make AI work, they require machine learning engineers responsible for building and maintaining AI algorithms.
Machine learning engineers spend their time researching machine learning algorithms, performing analysis, running machine learning tests, using test results to improve machine learning models and more.
Machine learning engineers must have strong math and science skills. They must also be experts in machine learning, be able to think critically and possess the ability to problem-solve. Discover more of the skills required by checking out this hiring kit by TechRepublic Premium.
Hidden Turbulence in The Atmosphere of The Sun Revealed by New AI Model – ScienceAlert
Posted: at 8:20 pm
Hidden turbulent motion that takes place inside the atmosphere of the Sun can be accurately predicted by a newly developed neural network.
Fed only temperature and vertical motion data collected from the surface of the solar photosphere, the AI model could correctly identify turbulent horizontal motion below the surface. This could help us to better understand solar convection, and processes that generate explosions and jets erupting from the Sun.
"We developed a novel convolutional neural network to estimate the spatial distribution of horizontal velocity by using the spatial distributions of temperature and vertical velocity," wrote a team of researchers led by astronomer Ryohtaroh Ishikawa of the National Astronomical Observatory of Japan.
"This led to efficient detection of spatially spread features and concentrated features. [...] Our network exhibited a higher performance on almost all the spatial scales when compared to those reported in previous studies."
The solar photosphere is the region of the Sun's atmosphere that is commonly referred to as its surface. It's the lowest layer of the solar atmosphere, and the region in which solar activity such as sunspots, solar flares and coronal mass ejections originate.
If you look closely, the surface of the photosphere is not uniform. It's covered with sections crowded together, lighter in the middle and dark towards the edges. These are called granules, and they're the tops of convection cells in the solar plasma. Hot plasma rises in the middle, and then falls back down around the edges as it moves outwards and cools.
When we observe these cells, we can measure their temperature, as well as their motion via the Doppler effect, but horizontal motion can't be detected directly. However, smaller scale flows in these cells can interact with solar magnetic fields to trigger other solar phenomena. In addition, turbulence is also thought to play a role in heating the solar corona, so scientists are keen to understand exactly how plasma behaves in the photosphere.
Ishikawa and team developed numerical simulations of plasma turbulence, and used three different sets of simulation data to train their neural network. They found that, based solely on the temperature and vertical flow data, the AI could accurately describe horizontal flows in the simulations that would be undetectable on the real Sun.
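The network's job is to map two observed surface quantities to the hidden horizontal flow. As a rough sketch of that input-output shape (the layer count and channel widths here are assumptions for illustration, not the authors' actual architecture), a fully convolutional model in PyTorch might look like:

```python
# Minimal sketch: a fully convolutional net mapping two input maps
# (temperature, vertical velocity) to two output maps (the horizontal
# velocity components). Architecture details are assumed, not the paper's.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(2, 32, kernel_size=3, padding=1),   # in: temperature + v_z
    nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(32, 2, kernel_size=3, padding=1),   # out: v_x and v_y maps
)

surface = torch.randn(1, 2, 128, 128)   # batch of simulated surface data
horizontal = model(surface)             # predicted horizontal flow field
print(horizontal.shape)                 # torch.Size([1, 2, 128, 128])
```

Training such a model against simulation outputs, as the team did, lets it be applied later to real observations where the horizontal flow cannot be measured directly.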
This means that we could feed it solar data and expect that the results it returns are consistent with what is actually occurring on our fascinating, forbidding star.
However, the neural network does need some fine-tuning. While it was able to detect large-scale flows, the AI did have trouble picking out smaller features. Since the accuracy of small-scale turbulence is crucial for some calculations, resolving this should be the next step in developing their software, the researchers said.
"By comparing the results of the three convection models, we observed that the rapid decrease in coherence spectrum occurred on the scales that were lower than the energy injection scales, which were characterized by the peaks of the power spectra of the vertical velocities. This implies that the network was not appropriately trained to reproduce the velocity fields in small scales generated by turbulent cascades," they wrote in their paper.
"These challenges can be explored in future studies."
A bit closer to home, the researchers are also developing their software to help better understand turbulence in fusion plasmas, another important application for future use.
The research has been published in Astronomy & Astrophysics.