The Prometheus League
Breaking News and Updates
- Abolition Of Work
- Ai
- Alt-right
- Alternative Medicine
- Antifa
- Artificial General Intelligence
- Artificial Intelligence
- Artificial Super Intelligence
- Ascension
- Astronomy
- Atheism
- Atheist
- Atlas Shrugged
- Automation
- Ayn Rand
- Bahamas
- Bankruptcy
- Basic Income Guarantee
- Big Tech
- Bitcoin
- Black Lives Matter
- Blackjack
- Boca Chica Texas
- Brexit
- Caribbean
- Casino
- Casino Affiliate
- Cbd Oil
- Censorship
- Cf
- Chess Engines
- Childfree
- Cloning
- Cloud Computing
- Conscious Evolution
- Corona Virus
- Cosmic Heaven
- Covid-19
- Cryonics
- Cryptocurrency
- Cyberpunk
- Darwinism
- Democrat
- Designer Babies
- DNA
- Donald Trump
- Eczema
- Elon Musk
- Entheogens
- Ethical Egoism
- Eugenic Concepts
- Eugenics
- Euthanasia
- Evolution
- Extropian
- Extropianism
- Extropy
- Fake News
- Federalism
- Federalist
- Fifth Amendment
- Financial Independence
- First Amendment
- Fiscal Freedom
- Food Supplements
- Fourth Amendment
- Free Speech
- Freedom
- Freedom of Speech
- Futurism
- Futurist
- Gambling
- Gene Medicine
- Genetic Engineering
- Genome
- Germ Warfare
- Golden Rule
- Government Oppression
- Hedonism
- High Seas
- History
- Hubble Telescope
- Human Genetic Engineering
- Human Genetics
- Human Immortality
- Human Longevity
- Illuminati
- Immortality
- Immortality Medicine
- Intentional Communities
- Jacinda Ardern
- Jitsi
- Jordan Peterson
- Las Vegas
- Liberal
- Libertarian
- Libertarianism
- Liberty
- Life Extension
- Macau
- Marie Byrd Land
- Mars
- Mars Colonization
- Mars Colony
- Memetics
- Micronations
- Mind Uploading
- Minerva Reefs
- Modern Satanism
- Moon Colonization
- Nanotech
- National Vanguard
- NATO
- Neo-eugenics
- Neurohacking
- Neurotechnology
- New Utopia
- New Zealand
- Nihilism
- Nootropics
- NSA
- Oceania
- Offshore
- Olympics
- Online Casino
- Online Gambling
- Pantheism
- Personal Empowerment
- Poker
- Political Correctness
- Politically Incorrect
- Polygamy
- Populism
- Post Human
- Post Humanism
- Posthuman
- Posthumanism
- Private Islands
- Progress
- Proud Boys
- Psoriasis
- Psychedelics
- Putin
- Quantum Computing
- Quantum Physics
- Rationalism
- Republican
- Resource Based Economy
- Robotics
- Rockall
- Ron Paul
- Roulette
- Russia
- Sealand
- Seasteading
- Second Amendment
- Seychelles
- Singularitarianism
- Singularity
- Socio-economic Collapse
- Space Exploration
- Space Station
- Space Travel
- Spacex
- Sports Betting
- Sportsbook
- Superintelligence
- Survivalism
- Talmud
- Technology
- Teilhard De Charden
- Terraforming Mars
- The Singularity
- Tms
- Tor Browser
- Trance
- Transhuman
- Transhuman News
- Transhumanism
- Transhumanist
- Transtopian
- Transtopianism
- Ukraine
- Uncategorized
- Vaping
- Victimless Crimes
- Virtual Reality
- Wage Slavery
- War On Drugs
- Waveland
- Ww3
- Yahoo
- Zeitgeist Movement
- Prometheism
- Forbidden Fruit
- The Evolutionary Perspective
Category Archives: Ai
The key to making AI green is quantum computing – The Next Web
Posted: March 18, 2021 at 12:39 am
We've painted ourselves into another corner with artificial intelligence. We're finally starting to break through the usefulness barrier, but we're butting up against the limits of our ability to responsibly meet our machines' massive energy requirements.
At the current rate of growth, it appears we'll have to turn Earth into Coruscant if we want to keep spending unfathomable amounts of energy training systems such as GPT-3.
The problem: Simply put, AI takes too much time and energy to train. A layperson might imagine a bunch of code on a laptop screen when they think about AI development, but the truth is that many of the systems we use today were trained on massive GPU networks, supercomputers, or both. We're talking incredible amounts of power. And, worse, it takes a long time to train AI.
The reason AI is so good at the things it's good at, such as image recognition or natural language processing, is because it basically just does the same thing over and over again, making tiny changes each time, until it gets things right. But we're not talking about running a few simulations. It can take hundreds or even thousands of hours to train up a robust AI system.
One expert estimated that GPT-3, a natural language processing system created by OpenAI, would cost about $4.6 million to train. But that assumes one-shot training. And very, very few powerful AI systems are trained in one fell swoop. Realistically, the total expenses involved in getting GPT-3 to spit out impressively coherent gibberish are probably in the hundreds-of-millions.
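For a sense of where such estimates come from, here is a rough back-of-envelope sketch in Python. The FLOP count, sustained GPU throughput, and hourly price below are assumptions drawn from commonly cited public figures, not numbers from this article:

```python
# Back-of-envelope estimate of a single GPT-3-scale training run (illustrative only).
# All three constants are assumptions, not figures reported in this article.

TOTAL_TRAIN_FLOPS = 3.14e23      # roughly 3,640 petaflop/s-days, a commonly cited estimate
EFFECTIVE_TFLOPS_PER_GPU = 30.0  # assumed sustained mixed-precision throughput per GPU
PRICE_PER_GPU_HOUR = 1.50        # assumed cloud price in USD per GPU-hour

gpu_seconds = TOTAL_TRAIN_FLOPS / (EFFECTIVE_TFLOPS_PER_GPU * 1e12)
gpu_hours = gpu_seconds / 3600
cost_usd = gpu_hours * PRICE_PER_GPU_HOUR

print(f"GPU-hours: {gpu_hours:,.0f}")
print(f"Estimated one-shot training cost: ${cost_usd:,.0f}")
# With these assumptions the single-run cost lands in the low millions of dollars;
# real totals run higher once failed runs, tuning, and evaluation are included,
# which is exactly the article's point.
```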
GPT-3 is among the high-end abusers, but there are countless AI systems out there sucking up hugely disproportionate amounts of energy when compared to standard computation models.
The problem? If AI is the future, under the current power-sucking paradigm, the future won't be green. And that may mean we simply won't have a future.
The solution: Quantum computing.
An international team of researchers, including scientists from the University of Vienna, MIT, and other institutions in Austria and New York, recently published research demonstrating quantum speed-up in a hybrid artificial intelligence system.
In other words: they managed to exploit quantum mechanics in order to allow AI to find more than one solution at the same time. This, of course, speeds up the training process.
Per the team's paper:
The crucial question for practical applications is how fast agents learn. Although various studies have made use of quantum mechanics to speed up the agents decision-making process, a reduction in learning time has not yet been demonstrated.
Here we present a reinforcement learning experiment in which the learning process of an agent is sped up by using a quantum communication channel with the environment. We further show that combining this scenario with classical communication enables the evaluation of this improvement and allows optimal control of the learning progress.
How?
This is the cool part. They ran 10,000 models through 165 experiments to determine how they functioned using classical AI and how they functioned when augmented with special quantum chips.
And by special, that is to say, you know how classical CPUs process via manipulation of electricity? The quantum chips the team used were nanophotonic, meaning they use light instead of electricity.
The gist of the operation is that in circumstances where classical AI bogs down solving very difficult problems (think: supercomputer problems), they found the hybrid quantum system outperformed standard models.
Interestingly, when presented with less difficult challenges, the researchers didn't observe any performance boost. Seems like you need to get it into fifth gear before you kick in the quantum turbocharger.
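The quantity being compared in experiments like this is learning time: how many interactions with the environment an agent needs before it reliably picks rewarded actions. The purely classical toy below (an assumed two-armed bandit with an epsilon-greedy agent, not the quantum setup) illustrates how that quantity is typically measured:

```python
import random

def episodes_until_learned(reward_probs, threshold=0.9, window=50, epsilon=0.1, seed=0):
    """Count episodes until an epsilon-greedy agent picks the best arm
    at least `threshold` of the time over a rolling window."""
    rng = random.Random(seed)
    best = max(range(len(reward_probs)), key=lambda a: reward_probs[a])
    counts = [0] * len(reward_probs)
    values = [0.0] * len(reward_probs)
    recent = []
    for episode in range(1, 100_000):
        if rng.random() < epsilon:
            action = rng.randrange(len(reward_probs))            # explore
        else:
            action = max(range(len(reward_probs)), key=lambda a: values[a])  # exploit
        reward = 1.0 if rng.random() < reward_probs[action] else 0.0
        counts[action] += 1
        values[action] += (reward - values[action]) / counts[action]
        recent.append(action == best)
        if len(recent) > window:
            recent.pop(0)
        if len(recent) == window and sum(recent) / window >= threshold:
            return episode
    return None

# Arms that are hard to tell apart typically need many more interactions
# than well-separated ones; "speed-up" means shrinking that number.
print("easy task:", episodes_until_learned([0.1, 0.9]))
print("hard task:", episodes_until_learned([0.45, 0.55]))
```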
There's still a lot to be done before we can roll out the old "mission accomplished" banner. The team's work wasn't the solution we're eventually aiming for, but more of a small-scale model of how it could work once we figure out how to apply their techniques to larger, real problems.
You can read the whole paper here on Nature.
H/t: Shelly Fan, Singularity Hub
Published March 17, 2021 19:41 UTC
Read more:
The key to making AI green is quantum computing - The Next Web
Posted in Ai
Comments Off on The key to making AI green is quantum computing – The Next Web
Not seeing results from AI? Engineering may be the missing piece – Healthcare IT News
Posted: at 12:39 am
There's no doubt that one of the hottest topics in healthcare right now is artificial intelligence. The promise of AI is exciting: It has helped identify cancerous images in radiology, found diabetes via retinal scans and predicted patient mortality risk, just to name a few examples of the medical advances it can deliver.
But the paths healthcare systems go down to make AI a reality are often flawed, resulting in a dabbling of AI with no measurable results. When the wrong path is taken, they end up with AI "solutions" to perceived problems without being able to verify if those problems are, in fact, real or measurable.
Vendors often turn on AI solutions then walk away, leaving health systems unsure of how to use these new insights within the bounds of their old workflows. And these tools are often deployed without the engineering rigor to make sure this new technology is testable or resilient.
The result? These potential AI insights are often ignored, marginally helpful, quickly outdated, or at worst harmful. But who's to know?
One common AI solution that is often a source of excitement among health systems and vendors alike is early sepsis detection.
In fact, finding septic patients happened to be my first assignment at Penn Medicine. The idea was that if we could find patients at risk of sepsis earlier, there were treatments that could be applied, resulting (we thought) in lives saved.
Coming from a background in missile defense, I naively thought this would be an easy task to create. There was a "find the missile, shoot the missile" similarity that seemed intuitive.
My team developed one of the top-performing sepsis models ever created. [1] It was validated and deployed, and it resulted in more lab tests and faster ICU transfers, yet it produced zero patient outcome changes.
It turns out that Penn Medicine was already good at finding septic patients, and that this state-of-the-art algorithm wasn't, in fact, needed at all. Had we gone through the full engineering process that's now in place at Penn Medicine, we would've found no evidence that the original problem statement, "find septic patients" was a problem at all.
This engineering design effort would have saved many months of work and the deployment of a system that was ultimately distracting.
Over the last few years, hundreds of claims of successful AI implementations have been made by vendors and health systems alike. So why is it that only a handful of the resulting studies have been able to show actual value? [2]
The issue is that many health systems try to solve healthcare problems by simply configuring vendor products. What's missed in this approach is the engineering rigor needed to design a complete solution, one that includes technology, human workflow, measurable value and long-term operational capability.
This vendor-first approach is often siloed, with independent teams assigned isolated tasks, and the completion of those tasks becomes how project success is measured.
Success, then, is firmly task-based, not value-based. Linking these tasks (or projects) to the measures that actually matter - lives saved, dollars saved - is difficult, and requires a comprehensive engineering approach.
Understanding whether these projects are working, how well they are working (or if they were ever needed to begin with), is not typically measured. The incomplete way of looking at it is: If AI technology is deployed, success is claimed, the project is complete. The engineering required to both define and measure value is not there.
Getting value from healthcare AI is a problem that requires a nuanced, thoughtful and long-term solution. Even the most useful AI technology can abruptly stop performing when hospital workflows change.
For example, a readmission risk model at Penn Medicine suddenly showed a subtle reduction in risk scores. The culprit? An accidental EHR configuration change. Because a complete solution had been engineered, the data feed was being monitored and the teams were able to quickly communicate and correct the EHR change.
We estimate that these types of situations have arisen approximately twice a year, for each predictive model deployed. So ongoing monitoring of the system, the workflow, and the data is needed, even during operations.
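A minimal sketch of that kind of monitoring, assuming a hypothetical daily feed of model scores: if the mean score suddenly departs from its historical baseline, the team gets an alert and can go look for an upstream cause such as an EHR configuration change.

```python
import statistics

def check_score_drift(baseline_daily_means, todays_scores, z_threshold=3.0):
    """Flag a day whose mean risk score departs sharply from the baseline.

    `baseline_daily_means` are historical daily mean scores collected while the
    model was known to be healthy; `todays_scores` are today's raw scores.
    The threshold is an illustrative assumption, not a clinical standard.
    """
    baseline_mean = statistics.mean(baseline_daily_means)
    baseline_std = statistics.stdev(baseline_daily_means)
    todays_mean = statistics.mean(todays_scores)
    z = (todays_mean - baseline_mean) / baseline_std
    if abs(z) > z_threshold:
        return f"ALERT: mean score {todays_mean:.3f} is {z:+.1f} SD from baseline"
    return "OK"

# Example: the daily mean readmission risk has hovered near 0.18,
# then quietly drops after an upstream data change.
history = [0.181, 0.179, 0.183, 0.180, 0.178, 0.182, 0.180]
print(check_score_drift(history, [0.12, 0.14, 0.11, 0.13, 0.12]))
```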
For AI in healthcare to reach its potential, health systems must expand their energies beyond clinical practice, and focus on total ownership of all AI solutions. Rigorous engineering, with clearly defined outcomes tied directly to measurable value, will be the foundation on which to build all successful AI programs.
Value must be defined in terms of lives saved, dollars saved, or patient/clinical satisfaction. The health systems that will realize success from AI will be the ones who carefully define their problems, measure evidence of those problems, and form experiments to connect the hypothesized interventions to better outcomes.
Successful health systems will understand that rigorous design processes are needed to properly scale their solutions in operations, and be willing to consider both the technologies and human workflows as part of the engineering process.
Like Blockbuster, which now famously failed to rethink the way it delivered movies, health systems that refuse to see themselves as engineering houses are at risk of drastically falling behind in their ability to properly leverage AI technology.
It's one thing to make sure websites and email servers are working, it's quite another to make sure the health system is optimizing care for heart failure.
One is an IT service, the other is a complete product solution that requires a comprehensive team of clinicians, data scientists, software developers, and engineers, as well as clearly defined metrics of success: lives and/or dollars saved.
[1] Giannini, H. M., Chivers, C., Draugelis, M., Hanish, A., Fuchs, B., Donnelly, P., & Mikkelsen, M. E. (2017). Development and Implementation of a Machine-Learning Algorithm for Early Identification of Sepsis in A Multi-Hospital Academic Healthcare System. American Journal of Respiratory and Critical Care Medicine. 195.
[2] The Digital Reconstruction of Health Care, John Halamka, MD, MS & Paul Cerrato, MA, NEJM Catalyst Innovations in Care Delivery 2020; 06
DOI: https://doi.org/10.1056/CAT.20.0082
Mike Draugelis is chief data scientist at Penn Medicine, where he leads its Predictive Healthcare team.
Originally posted here:
Not seeing results from AI? Engineering may be the missing piece - Healthcare IT News
Posted in Ai
Comments Off on Not seeing results from AI? Engineering may be the missing piece – Healthcare IT News
HPE Steps Up AI March With Standalone Version Of Ezmeral – CRN
Posted: at 12:39 am
Hewlett Packard Enterprise Wednesday dramatically expanded its artificial intelligence-machine learning (AI/ML) market reach with a standalone release of its Ezmeral Data Fabric.
The new standalone Ezmeral edge-to-cloud data fabric brings the fast growing cloud native AI/ML platform to a new multibillion-dollar market where the data fabric offering can be used in multiple enterprise big data buildouts on its own.
HPE made the decision to establish a separate standalone version of the data fabric in direct response to customers, said HPE Chief Technology Officer and Head of Software Kumar Sreekanti (pictured above). "It's a huge market opportunity," he said. "Customers have asked for this because it is a very proven platform with phenomenal scale. Many customers want to first deploy the data platform and later on bring in the Ezmeral container platform."
The new Ezmeral standalone offering came as part of an HPE Ezmeral day webcast blitz that included the launch of a new Ezmeral Technology Ecosystem program for ISVs (Independent Software Vendors) and an Ezmeral Marketplace that includes ISV and open source projects for enterprise customers anxious to modernize applications and move to cloud native workloads.
The standalone Ezmeral fabric offering appeals to enterprise customers looking to build out an edge to core to cloud scale-out file system for storing unstructured data independent from the analytics capabilities that come with the full Ezmeral platform, said HPE Ezmeral General Manager Anant Chintamaneni.
"That is a new addressable market for customers that want a more effective price point to store large unstructured files," said Chintamaneni. "We have had customers who want this standalone offering in the oil and gas industry and the healthcare industry. They are supporting digital transformation and want a massive scale-out edge to core to cloud file system. They are primarily interested in storing data and will bring analytics in later on."
There are also traditional data analytics buyers who have not yet deployed container platforms and are looking for a proven POSIX (Portable Operating System Interface) file system for their AI/ML workloads, said Chintamaneni. "They want to get their data strategy right first," he said. "They want to modernize their data, collect the data from different places and put it in one place, and then bring the AI/ML and analytics tools later."
Ezmeral's ability to provide a unified data repository that can be accessed via multiple protocols like POSIX, NFS (Network File System), or HDFS (Hadoop Distributed File System) is "absolutely unique," said Chintamaneni.
The partner ecosystem program is another sign of Ezmeral momentum, said Sreekanti, citing the launch of a new HPE Ezmeral app store that includes big data ISVs like Dataiku, MinIO, H20.AI, Rapt.AI, Sysdig and Unravel. "This provides customers access to pre-tested, pre-packaged applications," he said.
Key to Ezmeral's market momentum is the ability for partners to quickly and easily roll out containers in a hybrid and multicloud world, said Sreekanti. "The benefit for our channel partners and resellers is that it is easy, you don't have to pull all these pieces together," he said. "Because of the unique comprehensive, combined nature of this it is easy for our partners to deploy."
Chintamaneni, for his part, said the flexibility Ezmeral provides for deployments at the edge, the core network and the public cloud, with the ability to seamlessly move data in a unified software-defined environment, provides big benefits to HPE customers and partners. "It's a very unique value proposition we are bringing to the market in a very simple fashion," he said.
Ezmeral new customer acquisition is accelerating significantly, said Chintamaneni. "New logo acquisition has really seen an uptick," he said.
As part of Ezmeral day, HPE announced a number of new customers including ORock, a hybrid cloud service that is debuting a new suite of offerings powered by HPE Ezmeral and HPEs GreenLake pay per use platform. In another big win, HPE said DRAM manufacturer Nanya Technology has chosen Ezmeral to improve production by accelerating the rollout of AI projects in its manufacturing facilities.
Erik Krucker, CTO at Comport Consulting, an HPE Platinum partner with a growing AI practice within its ComportSecure managed services business, said he sees the standalone data fabric as another sign of HPE's transformation into an edge-to-cloud platform-as-a-service software powerhouse.
"Customers want to have the ability that HPE is providing with Ezmeral to move workloads around," said Krucker. "It's a great strategy for containerized applications because you only have to engineer them once and then you can move them wherever you need them. HPE is definitely going in the right direction. They are talking about software solutions, platforms and data fabric rather than hardware. It's all about solutions that customers are trying to implement. This makes HPE much more attractive to enterprise customers."
Krucker credited HPE CEO Antonio Neri with transforming the company into an edge-to-cloud software platform innovator that is a fierce competitor in the intensely competitive big data storage market. "You've got to hand it to Antonio, he's thinking differently and bringing in the right people like Kumar Sreekanti and (GreenLake Cloud Services General Manager) Keith White," said Krucker.
Comport, for its part, is working with a growing number of customers on AI/ML solutions. "AI/ML used to be a luxury or a nice-to-have for an organization, but now it is a must-have and those organizations are pivoting fast," said Krucker. "We see AI/ML going downstream and becoming pervasive even in the SMB market at some point. Customers have to gain a competitive edge in the marketplace and they are going to use AI/ML to deliver services faster and more cost effectively to customers."
Comport is building AI/ML solutions for a number of customers and then running them under the ComportSecure managed services banner. "We are designing, implementing and managing these solutions from end to end for customers," said Krucker. "That business is probably going to double this year for us."
One Comport customer is looking at leveraging AI to dramatically reduce the cost of bringing new drugs to market. "If they can use AI to run their algorithms faster, they can save billions of dollars," said Krucker. "They want to take their existing models and crunch numbers faster and faster and faster. AI can speed up their R&D and cut down the amount of time it takes to bring new drugs to market by 50 percent."
The rest is here:
HPE Steps Up AI March With Standalone Version Of Ezmeral - CRN
Posted in Ai
Comments Off on HPE Steps Up AI March With Standalone Version Of Ezmeral – CRN
A Top Computer Science Professor's Insights And Predictions For Conversational AI – Forbes
Posted: at 12:39 am
Breaking Bots by Clinc's founder and CEO Jason Mars is released with ForbesBooks.
This release is posted on behalf of ForbesBooks (operated by Advantage Media Group under license.)
NEW YORK (March 16, 2021) Breaking Bots: Inventing a New Voice in the AI Revolution by Clinc's founder and CEO Dr. Jason Mars is available now. The book is published with ForbesBooks, the exclusive business book publishing imprint of Forbes.
In setting the stage for his new book, Jason Mars considers how technology has shaped the arc of human history, time and again. From the Bronze Age to the Industrial Revolution to our current Technological Age, a once gradual pace of progress has given way to an era of rapid, exponential growth. The next revolution humanity must prepare for, in Mars' view, is Artificial Intelligence. In Breaking Bots: Inventing a New Voice in the AI Revolution, Jason Mars describes the surprising progress AI has made in recent years and what our shared future holds. At the same time, Mars chronicles the unique journey and key insights of creating a company dedicated to advancing AI's potential.
"The frontier for conversational AI is endless and thrilling," Mars explained. "Being able to speak freely, as you would to a human in the room, is the holy grail."
While virtual home assistants like Alexa or Siri are now commonplace, these technologies are limited by their market and narrow internal intuitions. That said, Breaking Bots still positions conversational AI as humanity's next fire, light bulb, or internet. It is in bridging those intuitional gaps that the work of computer scientist Jason Mars and Clinc, the company he founded, seeks to make an impact. Breaking Bots offers insights into the paradigm-shifting technical and cultural DNA that makes Jason's work, and Clinc's technology, a bold future for AI.
Ryan Tweedie, the CIO Portfolio Director and Global Managing Director of Accenture, believes that Mars is "a true vanguard in the lineage and development of AI, especially for what counts: the human element."
Breaking Bots: Inventing a New Voice in the AI Revolution is available on Amazon today.
About Jason Mars, Ph.D.
Jason Mars has built some of the world's most sophisticated scalable systems for AI, computer vision, and natural language processing. He is a professor of computer science at the University of Michigan, where he directs Clarity Lab, one of the world's top AI and computing training labs.
In his tenure as CEO of Clinc, he was named Bank Innovation's #2 Most Innovative CEO in Banking 2017 and #4 in Top 11 Technologists in Voice AI 2019. His work has been recognized by Crain's Detroit Business's 2019 40 under 40 for career accomplishments, impact in their field and contributions to their community. Prior to the University of Michigan, Jason was a professor at UCSD and worked at Google and Intel. Jason holds a Ph.D. in computer science from UVA.
About ForbesBooks
Launched in 2016 in partnership with Advantage Media Group, ForbesBooks is the exclusive business book publishing imprint of Forbes. ForbesBooks offers business and thought leaders an innovative, speed-to-market, fee-based publishing model and a suite of services designed to strategically and tactically support authors and promote their expertise. For more information, visit forbesbooks.com.
Media Contacts
Michael Szudarek, Marx Layne, mszudarek@marxlayne.com
Carson Kendrick, ForbesBooks, ckendrick@forbesbooks.com
See the rest here:
A Top Computer Science Professor's Insights And Predictions For Conversational AI - Forbes
Posted in Ai
Comments Off on A Top Computer Science Professor's Insights And Predictions For Conversational AI – Forbes
Why the Future of Healthcare is Federated AI – insideBIGDATA
Posted: at 12:39 am
In this special guest feature, Akshay Sharma, Executive Vice President of Artificial Intelligence (AI) at Sharecare, highlights advancements and impact of federated AI and edge computing for the healthcare sector as it ensures data privacy and expands the breadth of individual, organizational, and clinical knowledge. Sharma joined Sharecare in 2021 as part of its acquisition of doc.ai, the Silicon Valley-based company that accelerated digital transformation in healthcare. With doc.ai, Sharma previously held various leadership positions including CTO and vice president of engineering, a role in which he developed several key technologies that power mobile-based privacy products in healthcare. In addition to his role at Sharecare, Sharma serves as CTO of TEDxSanFrancisco and also is involved in initiatives to decentralize clinical trials. Sharma holds bachelor's degrees in engineering and engineering in information science from Visvesvaraya Technological University.
Healthcare data is an incredibly valuable asset, and ensuring that it's kept private and secure should be a top priority for everyone. But as the pandemic led to more patient exams and visits being conducted within telehealth environments, it's become even easier to lose control of that data.
This doesn't have to be the case. There are better options for ensuring a user's health data remains private. The future of health information lies on the edge (mobile devices).
Right now, federated learning (or federated AI) guarantees that the user's data stays on the device, and the applications running a specific program are still learning how to process the data and building a better, more efficient model. HIPAA laws protect patient medical data, but federated learning takes that a step further by not sharing the data with outside parties.
Leveraging federated learning is where healthcare can evolve with technology.
Traditional machine learning requires centralizing data to train and build a model. Federated learning, combined with other privacy-preserving techniques, can instead build models in a distributed data setup without leaking sensitive information from the data. This will allow health professionals to be more inclusive and find more diversity in the data by going to where the data is: with the users.
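As a rough illustration of the core idea, here is a minimal federated averaging sketch on synthetic, made-up device data. Real deployments add secure aggregation, differential privacy, and far more engineering, but the essential point is visible: raw data never leaves a device, only model updates do.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Train a toy linear model on one device's private data; only the
    updated weights (not X or y) are ever returned to the server."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Synthetic "devices", each holding data that never leaves the device.
true_w = np.array([2.0, -1.0])
devices = []
for _ in range(5):
    X = rng.normal(size=(40, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=40)
    devices.append((X, y))

global_w = np.zeros(2)
for _ in range(20):
    # Each device computes an update locally; the server only averages weights.
    updates = [local_update(global_w, X, y) for X, y in devices]
    global_w = np.mean(updates, axis=0)

print("learned weights:", np.round(global_w, 2))  # approaches [2.0, -1.0]
```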
How the Right Data Makes a World of Difference
Right now, nearly everyone is carrying a smartphone that can collect health-based signals, and with federated learning we'll be able to meet those users where they are. Those health-based signals could include photos with medical information, accelerometer data that captures motion, GPS location information that can reveal signals of health, integration with health devices that capture biometrics, integration with medical records like Apple Health, and more.
AI-based predictive models can combine the data collected on the smartphone for both prospective and retrospective medical research and provide better health indicators in real-time.
Technology in our phones has been providing us information about air quality for some time, but with federated learning I expect apps to start engaging with users and patients during specific events on a more personal basis. For instance, if a user with asthma is too close to a region experiencing a forest fire or if someone with seasonal allergies is around an area where pollen-count is high, I fully expect the app to engage with that user and provide tips to mitigate the situation.
The Importance of Being Privacy First
These insights can't be provided without a service gleaning that pivotal information from the user. With privacy-preserving techniques (such as differential privacy), this data is only stored locally and on the edge, without being sent to the cloud or leaked to a third party.
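As one concrete (and simplified) example of such a technique, the sketch below applies the Laplace mechanism so a device reports only a noisy version of a personal count; the epsilon value and the quantity being reported are assumptions for illustration, not anything prescribed here.

```python
import numpy as np

rng = np.random.default_rng(42)

def laplace_private_count(true_count, epsilon=1.0, sensitivity=1.0):
    """Return a differentially private version of a single user's count.

    Noise scale = sensitivity / epsilon; a smaller epsilon means more privacy
    and more noise. Only the noisy value would ever leave the device.
    """
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Each device perturbs its own value; the aggregate average is still useful.
true_counts = [3, 5, 2, 4, 6, 3, 5, 4, 2, 5] * 100   # 1,000 simulated users
noisy_reports = [laplace_private_count(c) for c in true_counts]
print("true mean:", np.mean(true_counts))
print("estimated mean from noisy reports:", round(float(np.mean(noisy_reports)), 2))
```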
We keep stressing the importance of privacy, but its significance can't be overstated. Users should own their data and have transparency around where data is sent and shared. Every type of data needs to be authorized for collection, and there must be transparency on how the data will be used.
There's more to privacy than a mission statement: when health services are built privacy-first, you can bring more participants into the data training loop, which allows teams to find a more diverse pool of users who feel confident in sharing access to their private data. Health systems that learn in real time from a large group of users, instead of just a few, will lead to better health outcomes.
The unfortunate truth is that healthcare has become incredibly siloed and data exchange is often difficult and expensive. For example, EMR data is not available with claims and prescription data, and then finding out whether the prescription was even collected only exists in other systems. If you then layer in data, such as genetics, what you eat, social determinants of health, and activity data, you have a multi-node problem for a single user. There is no single source of the full truth, and centralizing all this is incredibly hard.
Federated learning provides the perfect opportunity to avoid these barriers. By putting the user/patient in charge of coordinating their health data, you can provide the right opt-ins to learn from their data across these disparate systems. It's now possible to imagine federated learning being applied across the organizations holding sensitive data, letting them come together to collectively build efficient and more effective models in healthcare.
Sign up for the free insideBIGDATA newsletter.
Join us on Twitter:@InsideBigData1 https://twitter.com/InsideBigData1
Read the original:
Why the Future of Healthcare is Federated AI - insideBIGDATA
Posted in Ai
Comments Off on Why the Future of Healthcare is Federated AI – insideBIGDATA
Responsible AI in health care starts at the top but it's everyone's responsibility (VB Live) – VentureBeat
Posted: at 12:39 am
Presented by Optum
Health care's Quadruple Aim is to improve health outcomes, enhance the experiences of patients and providers, and reduce costs, and AI can help. In this VB Live event, learn more about how stakeholders can use AI responsibly, ethically, and equitably to ensure all populations benefit.
Register here for free.
Breakthroughs in the application of machine learning and other forms of artificial intelligence (AI) in health care are rapidly advancing, creating advantages in the field's clinical and administrative realms. It's on the administrative side (think workflows or back-office processes) where the technology has been more fully adopted. Using AI to simplify those processes creates efficiencies that reduce the amount of work it takes to deliver health care and improves the experiences of both patients and providers.
But it's increasingly clear that applying AI responsibly needs to be a central focus for organizations that use data and information to improve outcomes and the overall experience.
"Advanced analytics and AI have a significant impact in how important decisions are made across the health care ecosystem," says Sanji Fernando, SVP of artificial intelligence and analytics platforms at Optum. And, so, the company has guidelines for the responsible use of advanced analytics and AI for all of UnitedHealth Group.
"It's important for us to have a framework, not only for the data scientists and machine learning engineers, but for everyone in our organization (operations, clinicians, product managers, marketing) to better understand expectations and how we want to drive breakthroughs to better support our customers, patients, and the wider health care system," he says. "We view the promise of AI and its responsible use as part of our shared responsibility to use these breakthroughs appropriately for patients, providers, and our customers."
The guideline focuses on making sure everyone is considering how to appropriately use advanced analytics and AI, how these models are trained, and how they are monitored and evaluated over time, he adds.
Machine learning models, by definition, learn from the available data that's being created throughout the health care system. Inequities in the system may be reflected in the data and predictions that machine learning models return. "It's important for everyone to be aware that health inequity may exist and that models may reflect that," he explains.
"By consistently evaluating how models may classify or infer, and looking at how that affects folks of different races, ethnicities, and ages, we can be more aware of where some models may require consistent examination to best ensure they are working the way we'd like them to," he says. "The reality is that there's no magic bullet to fix an ML model automatically, but it's important for us to understand and consistently learn where these models may impact different groups."
Transparency is a key factor in delivering responsible AI. That includes being very clear about how you're training your models, the appropriate use of data used to train an algorithm, as well as data privacy. When possible, it also means understanding how specific features are being identified or leveraged within the model. Basics like an age or date are straightforward features, but the challenge arises with paragraphs of natural language and unstructured text. Each word, phrase or paragraph can be considered a feature, creating an enormous number of combinations to consider.
"But understanding feature importance (the features that are more important to the model) is important to provide better insight into how the model may actually be working," he explains. "It's not true mathematical interpretability, but it gives us a better awareness."
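One common way to estimate feature importance is permutation importance. The sketch below uses scikit-learn on synthetic data; the dataset and model are stand-ins for illustration, not Optum's actual pipeline.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a clinical dataset: a few informative features
# among several noisy ones.
X, y = make_classification(n_samples=2000, n_features=10, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much held-out accuracy drops;
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"feature {idx}: importance {result.importances_mean[idx]:.3f}")
```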
Another important factor is being able to reproduce the performance and results of a model. Results will necessarily change when you train or retrain an algorithm, so you want to be able to trace that history, by being able to reproduce results over time. This ensures the consistency and appropriateness of the model remains constant (and allows for potential adjustments should they be needed).
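A small sketch of the bookkeeping that supports that kind of reproducibility: seed the random number generators from the configuration and record fingerprints of the config and training data with every run. The manifest fields shown are illustrative assumptions, not a specific product's schema.

```python
import hashlib
import json
import random

import numpy as np

def train_run_manifest(config: dict, data_rows: list) -> dict:
    """Seed everything from the config and return a manifest that lets the
    run be reproduced and audited later."""
    random.seed(config["seed"])
    np.random.seed(config["seed"])

    data_fingerprint = hashlib.sha256(
        json.dumps(data_rows, sort_keys=True).encode()
    ).hexdigest()
    config_fingerprint = hashlib.sha256(
        json.dumps(config, sort_keys=True).encode()
    ).hexdigest()

    # ... train the model here using the seeded RNGs ...

    return {
        "config_sha256": config_fingerprint,
        "data_sha256": data_fingerprint,
        "seed": config["seed"],
    }

manifest = train_run_manifest({"seed": 7, "lr": 0.01, "model": "xgboost"},
                              [{"age": 54, "label": 1}, {"age": 61, "label": 0}])
print(manifest)
```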
There's no shortage of tools and capabilities available across the field of responsible AI because there are so many people who are passionate about making sure we all use AI responsibly. For example, Optum uses an open-source bias audit tool from the University of Chicago. "But there are any number of approaches and great thinking from a tooling perspective," Fernando says, "so it's really becoming an industry best practice to implement a policy of responsible AI."
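Bias audits generally start from per-group metrics. The pandas sketch below computes selection rate and false negative rate by demographic group on a made-up audit table; it is a generic illustration, not the API of the University of Chicago tool.

```python
import pandas as pd

# Hypothetical audit frame: one row per patient with the model's decision,
# the observed outcome, and a demographic group attribute.
audit = pd.DataFrame({
    "group":     ["A", "A", "A", "B", "B", "B", "B", "A"],
    "predicted": [1,   0,   1,   0,   0,   1,   0,   1],
    "actual":    [1,   1,   1,   1,   0,   1,   0,   0],
})

def group_metrics(df):
    selection_rate = df["predicted"].mean()
    positives = df[df["actual"] == 1]
    false_negative_rate = (positives["predicted"] == 0).mean() if len(positives) else None
    return pd.Series({"selection_rate": selection_rate,
                      "false_negative_rate": false_negative_rate})

# Large gaps between groups are a signal to examine the model more closely.
print(audit.groupby("group").apply(group_metrics))
```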
The other piece of the puzzle requires work and a commitment from everyone in the ecosystem: making responsible use everyone's responsibility, not just the machine learning engineer's or data scientist's.
"Our aspiration is that every employee understands these responsibilities and takes ownership of them," he says. "Whether UHG employees are using ML-driven recommendations in their day-to-day work, designing new products and services, or they're the data scientists and ML engineers who can evaluate models and understand output class distributions, we all have a shared responsibility to ensure these tools are achieving the best and most equitable results for the people we serve."
To learn more about the ways that AI is impacting the delivery and administration of health care across the ecosystem, the benefits of machine learning for cost savings and efficiency, and the importance of responsible AI for every worker, don't miss this VB Live event.
Don't miss out!
Register here for free.
See the rest here:
Posted in Ai
Comments Off on Responsible AI in health care starts at the top but it's everyone's responsibility (VB Live) – VentureBeat
Torch.AI Looks to Replace ‘Store and Reduce’ with Synaptic Mesh – Datanami
Posted: at 12:39 am
(Michael Traitov/Shutterstock)
Torch.AI, the profitable startup applying machine learning to analyze data in-flight via its proprietary synaptic mesh technology, announced its first funding round along with expansion plans.
The Series A round garnered $30 million, and was led by San Francisco-based WestCap Group. As its customer base expands, Torch.AI said Wednesday (March 17) it would use the funds to scale its Nexus AI platform for a customer base that includes financial services, manufacturing and U.S. government customers.
The three-year-old AI startups software seeks to unify different data types via its synaptic mesh framework that reduces data storage while analyzing data on the fly.
"There's just too much information, too many classes of information," said Torch.AI CEO Brian Weaver. Hence, enterprises coping with regulatory and other data governance issues are finding they can't trust all the data they store.
Working early on with companies like GE (NYSE:GE) and Microsoft (NASDAQ:MSFT) on advanced data analytics, Weaver asserted in an interview that current technology frameworks compound that complexity. The shift to AI came while working with a financial services company struggling to process huge volumes of real-time transactions.
"We figured out that we could use artificial intelligence just to understand the data payload, or the data object, differently," Weaver said.
The result was its Nexus platform, which creates an AI mesh across a user's data and systems, unifying data by increasing the surface area for analytics. That approach differs fundamentally from the "store and reduce" approach, in which information is dumped into a large repository and machine learning is then applied to make sense of it and cull usable data.
"I've got to store it somewhere first, then I've got to reduce [data] to make use of it," the CEO continued. "That approach actually compounds [data] complexity, impedes a successful outcome in a lot of ways and introduces at the same time a lot of risk."
Torch.AI's proprietary synaptic mesh approach is touted as eliminating the need to store all those data, enabling customers to analyze the growing number of data types in flight.
"We decompose a data object into the atomic components of the data," Weaver explained. "We create a very, very rich description of the data object itself that has logic built into it." The synaptic mesh is then applied to process and analyze data.
Hence, for example, a video file could be used to analyze data in-memory, picking out shapes, words and other data components as it streams.
The AI application builds in human cognition to make sense of a scene. "My brain doesn't need to store it, the scene, to determine what's in it," Weaver noted.
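To make the in-flight idea concrete, here is a toy sketch (emphatically not Torch.AI's proprietary synaptic mesh) that extracts lightweight descriptors from a stream chunk by chunk and retains only those descriptors, never the raw payload:

```python
from collections import Counter
from typing import Iterable

def describe_stream(chunks: Iterable[bytes]) -> dict:
    """Build a running description of streamed data without persisting it.

    Only small derived features (sizes, token counts) are retained; each raw
    chunk is dropped as soon as it has been inspected.
    """
    token_counts = Counter()
    total_bytes = 0
    for chunk in chunks:                      # data flows through, in memory only
        total_bytes += len(chunk)
        for token in chunk.decode(errors="ignore").split():
            token_counts[token.lower()] += 1
        # the chunk itself is never written anywhere
    return {"total_bytes": total_bytes,
            "top_tokens": token_counts.most_common(3)}

simulated_stream = (f"invoice {i} paid shipping pending".encode() for i in range(1000))
print(describe_stream(simulated_stream))
```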
"That's sort of our North Star": making sense of messy data by applying AI to unify the growing number of data types while reducing the resulting complexity.
"If you think about these workloads, people are actually working for the technology, having to stitch all this stuff together and hope it works. Shouldn't the technology truly be serving the [customer] who has the problem?"
Recent items:
The Past and Present of In-Memory Computing
The Data and AI Habits of Future-Ready Companies
Editors note: A longer version of this story was originally posted to sister website EnterpriseAI.com.
The rest is here:
Torch.AI Looks to Replace 'Store and Reduce' with Synaptic Mesh - Datanami
Posted in Ai
Comments Off on Torch.AI Looks to Replace ‘Store and Reduce’ with Synaptic Mesh – Datanami
Torch.AI Raises $30M to Scale Its AI-Driven High Speed Data Processing Platform – PRNewswire
Posted: at 12:39 am
WASHINGTON, March 17, 2021 /PRNewswire/ --Torch.AI, a leading global artificial intelligence (AI) firm that uses machine learning to enable massively scaled, ultra high performance data processing, today announced it raised $30 million in Series A funding to accelerate its overall growth strategy. The funding, led by Laurence Tosi's WestCap Group, a prominent San Francisco-based investment firm, will enable the company to rapidly scale its Nexus AI platform to meet increasing demand from clients including Fortune 100 companies and U.S. federal agencies charged with protecting national security.
"Torch.AI's philosophy embraces more open and adaptable architectures, allowing us to provide lower cost, future-proof solutions offering a dramatic departure from the monolithic black boxes and complex middleware that are the norm in the machine learning and data management landscape," said Brian Weaver, Torch.AI CEO. "This new funding and our partnership with WestCap Group provides welcome resources to further our marketplace disruption and accelerate the growth of our team to keep up with demand."
Founded in 2017, Torch.AI created a next-generation AI platform that instantaneously understands and richly describes any data in atomic detail, both in memory and in motion. The Nexus software creates an intelligent Synaptic Mesh across an organization's data and systems, increasing the surface area of data for discovery and action. Data is unified to address even the most vexing challenges for how data fuels operations and critical decisions in high-risk environments.
The new Series A funding, the firm's first institutional investment, allows Torch.AI to enhance its proprietary technology, product design, and user experience, while continuing to aggressively expand in the U.S.
Companies including Microsoft, H&R Block, General Electric, the U.S. Air Force, Centers for Medicare and Medicaid Services, U.S. Department of Agriculture, and the U.S. Department of Defense, have already benefitted from the Nexus platform's ability to put data to work to improve decision making. In 2018, the firm was tapped to help transform how data can be leveraged to improve security clearance decisions and diagnostics across 95% of the federal government's employee and contractor workforce. By mid-2020, the platform spanned data and business systems across more than a dozen federal agencies, providing the capacity for billions of real time data processing computations.
WestCap's Tosi will join the Torch.AI board of directors, whose members include Weaver; William Beyer, founding member of Deloitte Consulting's federal practice; and WestCap Principal Christian Schnedler.
"Over the past 20 years, we at WestCap have founded, operated and invested in more than 15 multi-billion-dollar companies including Airbnb, Ipreo, Skillz and iCapital, as well as cyber-security unicorns such as CarbonBlack and Cylance," said Westcap Partner Kevin Marcus. "In Brian Weaver and the team at Torch.AI, we recognize the leadership, competitive advantage and innovative spirit they share with those great companies. WestCap is thrilled to be part of the growth and development of Torch.AI as it redefines the data infrastructure marketplace."
Said Beyer: "We are proud to welcome WestCap to the Torch.AI family, and Laurence Tosi to our board of directors. Torch.AI is already an outlier a fully U.S.-owned company, it's profitable, and one of the only AI firms with federal certifications at the highest levels. Now, with the backing of one of the smartest investment firms in the country, Torch.AI will accelerate its growth and more rapidly scale to meet increasing customer demand."
Prior to the investment, Torch.AI launched a strategic employee recruitment effort, adding executives and software developers in both Washington, D.C., and its engineering center in suburban Kansas City.
To learn more about Torch.AI, visit Torch.AI.
About Torch.AI
Torch.AI's Nexus platform changes the paradigm of data and digital workflows, forever solving core impediments caused by the ever-increasing volume and complexity of information. Customers enjoy a single integrated solution which begins by instantly deconstructing and identifying any data, in real-time, at the earliest possible moment. Purpose built for massively scaled, ultra high-speed data processing, the platform comes equipped with security features, flexible data workloads, compliance capabilities, and drag and drop functionality that is unrivaled in today's technology landscape. It's an enlightened approach. Learn more at Torch.AI.
About WestCap
The WestCap Group is a growth equity firm founded by Laurence A. Tosi, who, together with the WestCap team, has founded, capitalized, and operated tech-enabled, asset-light marketplaces for over 20 years. With over $2 billion of assets under management, WestCap has made notable investments in technology businesses such as Airbnb, StubHub, iPreo, Skillz, Sonder, Addepar, Hopper, iCapital and Bolt. To learn more about WestCap, please visit WestCap.com.
SOURCE Torch.AI
More here:
Torch.AI Raises $30M to Scale Its AI-Driven High Speed Data Processing Platform - PRNewswire
Posted in Ai
Comments Off on Torch.AI Raises $30M to Scale Its AI-Driven High Speed Data Processing Platform – PRNewswire
The Secret Auction that Set Off the Race for AI Supremacy – WIRED
Posted: at 12:39 am
Hinton remained one of the few who believed it would one day fulfill its promise, delivering machines that could not only recognize objects but identify spoken words, understand natural language, carry on a conversation, and maybe even solve problems humans couldn't solve on their own, providing new and more incisive ways of exploring the mysteries of biology, medicine, geology, and other sciences. It was an eccentric stance even inside his own university, which spent years denying his standing request to hire another professor who could work alongside him in this long and winding struggle to build machines that learned on their own. One crazy person working on this was enough, he imagined their thinking went. But with a nine-page paper that Hinton and his students unveiled in the fall of 2012, detailing their breakthrough, they announced to the world that neural networks were indeed as powerful as Hinton had long claimed they would be.
Days after the paper was published, Hinton received an email from a fellow AI researcher named Kai Yu, who worked for Baidu, the Chinese tech giant. On the surface, Hinton and Yu had little in common. Born in postwar Britain to an upper-crust family of scientists whose influence was matched only by their eccentricity, Hinton had studied at Cambridge, earned a PhD in artificial intelligence from the University of Edinburgh, and spent most of the next four decades as a professor of computer science. Yu was 30 years younger than Hinton and grew up in Communist China, the son of an automobile engineer, and studied in Nanjing and then Munich before moving to Silicon Valley for a job in a corporate research lab. The two were separated by class, age, culture, language, and geography, but they shared a faith in neural networks. They had originally met in Canada at an academic workshop, part of a grassroots effort to revive this nearly dormant area of research across the scientific community and rebrand the idea as deep learning. Yu, a small, bespectacled, round-faced man, was among those who helped spread the gospel. When that nine-page paper emerged from the University of Toronto, Yu told the Baidu brain trust they should recruit Hinton as quickly as possible. With his email, Yu introduced Hinton to a Baidu vice president, who promptly offered $12 million to hire Hinton and his students for just a few years of work.
For a moment, it seemed like Hinton and his suitors in Beijing were on the verge of sealing an agreement. But Hinton paused. In recent months, he'd cultivated relationships inside several other companies, both small and large, including two of Baidu's big American rivals, and they, too, were calling his office in Toronto, asking what it would take to hire him and his students.
Sign up to get our best longform features, investigations, and thought-provoking essays, in your inbox every Sunday.
Seeing a much wider opportunity, he asked Baidu if he could solicit other offers before accepting the $12 million, and when Baidu agreed, he flipped the situation upside down. Spurred on by his students and realizing that Baidu and its rivals were much more likely to pay enormous sums of money to acquire a company than they were to shell out the same dollars for a few new hires from the world of academia, he created his tiny startup. He called it DNNresearch in a nod to the deep neural networks they specialized in, and he asked a Toronto lawyer how he could maximize the price of a startup with three employees, no products, and virtually no history.
As the lawyer saw it, he had two options: He could hire a professional negotiator and risk angering the companies he hoped would acquire his tiny venture, or he could set up an auction. Hinton chose an auction. In the end, four names joined the bidding: Baidu, Google, Microsoft, and a two-year-old London startup called DeepMind, cofounded by a young neuroscientist named Demis Hassabis, that most of the world had never heard of.
See the rest here:
The Secret Auction that Set Off the Race for AI Supremacy - WIRED
Posted in Ai
Comments Off on The Secret Auction that Set Off the Race for AI Supremacy – WIRED
Artificial Intelligence In 2021: Five Trends You May (or May Not) Expect – Forbes
Posted: at 12:39 am
5 Trends in AI 2021
Artificial Intelligence innovation continues apace - with explosive growth in virtually all industries. So what did the last year bring, and what can we expect from AI in 2021?
In this article, I list five trends that I saw developing in 2020 that I expect will be even more dominant in 2021.
MLOps
MLOps (Machine Learning Operations, the practice of production Machine Learning) has been around for some time. During 2020, however, COVID-19 brought a new appreciation for the need to monitor and manage production Machine Learning instances. The massive change to operational workflows, inventory management, traffic patterns, etc. caused many AIs to behave unexpectedly. This is known in the MLOps world as Drift - when incoming data does not match what the AI was trained to expect. While drift and other challenges of production ML were known to companies that have deployed ML in production before, the changes caused by COVID caused a much broader appreciation for the need for MLOps. Similarly, as privacy regulations such as the CCPA take hold, companies that operate on customer data have an increased need for governance and risk management. Finally, the first MLOps community gathering - the Operational ML Conference - which started in 2019, also saw a significant growth of ideas, experiences, and breadth of participation in 2020.
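In practice, drift monitoring usually means comparing the distribution of incoming features or predictions against the distribution seen at training time. A minimal sketch with a two-sample Kolmogorov-Smirnov test follows; the data and thresholds are assumptions for illustration.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)

# Feature values the model saw at training time vs. values arriving in production.
training_values = rng.normal(loc=100, scale=15, size=5000)     # e.g., pre-pandemic order sizes
production_values = rng.normal(loc=140, scale=25, size=1000)   # behavior has shifted

stat, p_value = ks_2samp(training_values, production_values)
if p_value < 0.01:
    print(f"Drift detected (KS statistic={stat:.2f}, p={p_value:.1e}): review or retrain the model")
else:
    print("Incoming data still matches the training distribution")
```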
Low Code/No Code
AutoML (automated machine learning) has been around for some time. AutoML has traditionally focused on algorithmic selection and finding the best Machine Learning or Deep Learning solution for a particular dataset. Last year saw growth in the Low-Code/No-Code movement across the board, from applications to targeted vertical AI solutions for businesses. While AutoML enabled building high-quality AI models without in-depth Data Science knowledge, modern Low-Code/No-Code platforms enable building entire production-grade AI-powered applications without deep programming knowledge.
Advanced Pre-trained Language Models
The last few years have brought substantial advances to the Natural Language Processing space, the greatest of which may be Transformers and Attention, a common application of which is BERT (Bidirectional Encoder Representations from Transformers). These models are extremely powerful and have revolutionized language translation, comprehension, summarization, and more. However, these models are extremely expensive and time-consuming to train. The good news is that pre-trained models (and sometimes APIs that allow direct access to them) can spawn a new generation of effective and extremely easy-to-build AI services. One of the largest examples of an advanced model accessible via API is GPT-3, which has been demonstrated for use cases ranging from writing code to writing poetry.
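Using a pre-trained model is often just a few lines against a library or hosted API. As a hedged example, the sketch below uses the Hugging Face transformers pipeline API; the default checkpoints it downloads are an illustrative choice, not models named in this article.

```python
# pip install transformers torch
from transformers import pipeline

# Sentiment analysis with a pre-trained Transformer; no training required.
classifier = pipeline("sentiment-analysis")
print(classifier("Pre-trained language models make building NLP features far easier."))

# Summarization works the same way, just with a different task name.
summarizer = pipeline("summarization")
article = ("Transformers and attention have revolutionized language translation, "
           "comprehension, and summarization, but training them from scratch is "
           "expensive, so most teams start from a pre-trained checkpoint.")
print(summarizer(article, max_length=30, min_length=10))
```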
Synthetic Content Generation (and its cousin, the Deep Fake)
NLP is not the only AI area to see substantial algorithmic innovation. Generative Adversarial Networks (GANs) have also seen innovation, demonstrating remarkable feats in creating art and fake images. Similar to transformers, GANs have also been complex to train and tune as they require large training sets. However, innovations have dramatically reduced the data sizes of creating a GAN. For example, Nvidia has demonstrated a new augmented method for GAN training that requires much less data than its predecessors. This innovation can spawn the use of GANs in everything from medical applications such as synthetic cancer histology images, to even more deep fakes.
AI for Kids
As low-code tools become prevalent, the age at which young people can build AIs is decreasing. It is now possible for an elementary or middle school student to build their own AI to do anything from classifying text to classifying images. High schools in the United States are starting to teach AI, with middle schools looking to follow. As an example, in Silicon Valley's Synopsys Science Fair 2020, 31% of the winning software projects used AI in their innovation. Even more impressively, 27% of these AIs were built by students in grades 6-8. An example winner, who went on to the national Broadcom MASTERS, was an eighth-grader who created a Convolutional Neural Network to detect Diabetic Retinopathy from eye scans.
What does all this mean?
These are not the only trends in AI. However, they are noteworthy because they point in three significant and critical directions
Read this article:
Artificial Intelligence In 2021: Five Trends You May (or May Not) Expect - Forbes
Posted in Ai
Comments Off on Artificial Intelligence In 2021: Five Trends You May (or May Not) Expect – Forbes