The Prometheus League
Breaking News and Updates
- Abolition Of Work
- Ai
- Alt-right
- Alternative Medicine
- Antifa
- Artificial General Intelligence
- Artificial Intelligence
- Artificial Super Intelligence
- Ascension
- Astronomy
- Atheism
- Atheist
- Atlas Shrugged
- Automation
- Ayn Rand
- Bahamas
- Bankruptcy
- Basic Income Guarantee
- Big Tech
- Bitcoin
- Black Lives Matter
- Blackjack
- Boca Chica Texas
- Brexit
- Caribbean
- Casino
- Casino Affiliate
- Cbd Oil
- Censorship
- Cf
- Chess Engines
- Childfree
- Cloning
- Cloud Computing
- Conscious Evolution
- Corona Virus
- Cosmic Heaven
- Covid-19
- Cryonics
- Cryptocurrency
- Cyberpunk
- Darwinism
- Democrat
- Designer Babies
- DNA
- Donald Trump
- Eczema
- Elon Musk
- Entheogens
- Ethical Egoism
- Eugenic Concepts
- Eugenics
- Euthanasia
- Evolution
- Extropian
- Extropianism
- Extropy
- Fake News
- Federalism
- Federalist
- Fifth Amendment
- Financial Independence
- First Amendment
- Fiscal Freedom
- Food Supplements
- Fourth Amendment
- Free Speech
- Freedom
- Freedom of Speech
- Futurism
- Futurist
- Gambling
- Gene Medicine
- Genetic Engineering
- Genome
- Germ Warfare
- Golden Rule
- Government Oppression
- Hedonism
- High Seas
- History
- Hubble Telescope
- Human Genetic Engineering
- Human Genetics
- Human Immortality
- Human Longevity
- Illuminati
- Immortality
- Immortality Medicine
- Intentional Communities
- Jacinda Ardern
- Jitsi
- Jordan Peterson
- Las Vegas
- Liberal
- Libertarian
- Libertarianism
- Liberty
- Life Extension
- Macau
- Marie Byrd Land
- Mars
- Mars Colonization
- Mars Colony
- Memetics
- Micronations
- Mind Uploading
- Minerva Reefs
- Modern Satanism
- Moon Colonization
- Nanotech
- National Vanguard
- NATO
- Neo-eugenics
- Neurohacking
- Neurotechnology
- New Utopia
- New Zealand
- Nihilism
- Nootropics
- NSA
- Oceania
- Offshore
- Olympics
- Online Casino
- Online Gambling
- Pantheism
- Personal Empowerment
- Poker
- Political Correctness
- Politically Incorrect
- Polygamy
- Populism
- Post Human
- Post Humanism
- Posthuman
- Posthumanism
- Private Islands
- Progress
- Proud Boys
- Psoriasis
- Psychedelics
- Putin
- Quantum Computing
- Quantum Physics
- Rationalism
- Republican
- Resource Based Economy
- Robotics
- Rockall
- Ron Paul
- Roulette
- Russia
- Sealand
- Seasteading
- Second Amendment
- Seychelles
- Singularitarianism
- Singularity
- Socio-economic Collapse
- Space Exploration
- Space Station
- Space Travel
- Spacex
- Sports Betting
- Sportsbook
- Superintelligence
- Survivalism
- Talmud
- Technology
- Teilhard De Charden
- Terraforming Mars
- The Singularity
- Tms
- Tor Browser
- Trance
- Transhuman
- Transhuman News
- Transhumanism
- Transhumanist
- Transtopian
- Transtopianism
- Ukraine
- Uncategorized
- Vaping
- Victimless Crimes
- Virtual Reality
- Wage Slavery
- War On Drugs
- Waveland
- Ww3
- Yahoo
- Zeitgeist Movement
- Prometheism
- Forbidden Fruit
- The Evolutionary Perspective
Category Archives: Ai
Skymind Global Ventures launches $800M fund and London office to back AI startups – TechCrunch
Posted: January 31, 2020 at 9:46 am
Skymind Global Ventures (SGV) appeared last year in Asia/US as a vehicle for the previous founders of a YC-backed open-source AI platform to invest in companies that used the platform.
Today it announces the launch of an $800 million fund to back promising new AI companies and academic research, and it is opening a London office as an extension of its original Hong Kong base.
SGV Founder and CEO Shawn Tan said in a statement: "Having our operations in the UK capital is a strategic move for us. London has all the key factors to help us grow our business, such as access to diverse talent and investment, favorable regulation, and a strong and well-established technology hub. The city is also the AI growth capital of Europe with the added competitive advantage of boasting a global friendly time zone that overlaps with business hours in Asia, Europe and the rest of the world."
SGV will use its London base to back research and development and generate business opportunities across Europe and Asia.
SGV helps companies and organizations launch their AI applications by providing supported access to Eclipse Deeplearning4j, an open-source AI tool.
The background is that the Deeplearning4j tool was originally published by Adam Gibson in late 2013 and later became the basis of a YC-backed startup, Pathmind, which was cofounded to commercialize Deeplearning4j and later changed its name to Skymind.
SGV is a wholly separate investment company. Gibson has now joined it as Vice President to run its software division, Konduit, which commercializes the Deeplearning4j open-source tools, delivering and supporting Eclipse Deeplearning4j for clients as well as offering training and development.
SGV says it plans to train up to 200 AI professionals for its operations in London and Europe.
In December last year, Skymind AI Berhad, the Southeast Asia arm of Skymind, and Huawei Technologies signed a Memorandum of Understanding to develop a Cloud and Artificial Intelligence Innovation Hub, commencing with Malaysia and Indonesia in 2020.
View original post here:
Skymind Global Ventures launches $800M fund and London office to back AI startups - TechCrunch
Posted in Ai
Comments Off on Skymind Global Ventures launches $800M fund and London office to back AI startups – TechCrunch
Want To Be AI-First? You Need To Be Data-First. – Forbes
Posted: at 9:46 am
Data First
Those that implement AI and machine learning projects learn quickly that machine learning projects are not application development projects. Much of the value of a machine learning project rests in the models, training data, and configuration information that guides how the model is applied to the specific machine learning problem. The application code is mostly a means to implement the machine learning algorithms and "operationalize" the machine learning model in a production environment. That's not to say that application code is unnecessary; after all, the computer needs some way to operationalize the machine learning model. But focusing a machine learning project on the application code is missing the big picture. If you want to be AI-first, you need to have a data-first perspective.
Use data-centric methodologies and data-centric technologies
Therefore it follows that if you're going to have a data-first perspective, you need to use a data-first methodology. There's certainly nothing wrong with Agile methodologies as a way of iterating towards success, but Agile on its own leaves much to be desired as it's focused on functionality and delivery of application logic. There are already data-centric methodologies out there that have been proven in many real-world scenarios. One of the most popular is the Cross Industry Standard Process for Data Mining (CRISP-DM), which focuses on the steps needed for successful data projects. In the modern age, it makes sense to merge the notably non-agile CRISP-DM with Agile Methodologies to make it more relevant. While this is still a new area for most enterprises implementing AI projects, we see this sort of merged methodology approach to be more successful than trying to shoehorn all the aspects of an AI project into existing application-focused Agile methodologies.
It stands to reason that if you have a data-centric perspective on AI, then you need to pair your data-centric methodologies with data-centric technologies. This means that your choice of tooling to implement all those artifacts detailed above needs to be, first and foremost, data-focused. Don't use code-centric IDEs when you should be using data notebooks. Don't use enterprise integration middleware platforms when you should be using tools that focus on model development and maintenance. Don't use so-called machine learning platforms that are really just a pile of cloud-based technologies or overgrown big data management platforms. The tools you use should support the machine learning goals you need, which are in turn supported by the activities you need to do and the artifacts you need to create. Just because a GPU provider has a toolset doesn't mean that it's the right one to use. Just because a big enterprise vendor or a cloud vendor has a "stack" doesn't mean it's the right one. Start from the deliverables and the machine learning objectives and work your way backwards.
Another big consideration is where and how machine learning models will be deployed, or in AI-speak, "operationalized". AI models can be implemented in a remarkably wide range of places: from "edge" devices sitting disconnected from the internet to mobile and desktop applications; from enterprise servers to cloud-based instances; and all manner of autonomous vehicles and craft. Each of these locations is a place where AI models and implementations can and do exist. This degree of operationalization heterogeneity highlights even more so how ludicrous the idea of a single machine learning platform is. How can one platform simultaneously provide AI capabilities in a drone, a mobile app, an enterprise implementation, and a cloud instance? Even if you source all this technology from a single vendor, it will be a collection of different tools that sit under a single marketing umbrella rather than a single, cohesive, interoperable platform that makes any sense.
Build data-centric talent
All this methodology and technology can't assemble itself. If you're going to be successful at AI projects, you're going to need to be successful at building an AI team. And if the data-centric perspective is the correct one for AI, then it makes sense that your team also needs to be data-centric. The talent needed to build apps or manage enterprise systems or data is not the same as the talent needed to build AI models, tune algorithms, work with training data sets, and operationalize ML models. The primary core of your AI team needs to be data scientists, data engineers, and the folks responsible for putting machine learning models into operation. While there's always a need for coding, development, and project management, finding and growing your data-centric talent is key to the long-term success of your AI initiatives.
The primary challenge with building data talent is that it's hard to find and grow. The main reason is that data isn't code. You need folks who know how to wrangle lots of data sources, compile them into clean data sets, and then extract information needles from data haystacks. In addition, the language of AI is math, not programming logic. So a strong data team is also strong in the right kinds of math to understand how to select and implement AI algorithms, properly tweak hyperparameters, and properly interpret testing and validation results. Simply guessing about and changing training data sets and hyperparameters at random is not a good way to create AI projects that deliver value. As such, data-centric talent grounded in a fundamental understanding of machine learning math and algorithms, combined with an understanding of how to deal with big data sets, is crucial to AI project success.
Prepare to continue to invest for the long haul
It should be pretty obvious at this point that the set of activities for AI is indeed very much data-centric, and the activities, artifacts, tools, and team need to follow from that data-centric perspective. The biggest challenge is that so much of that ecosystem is still being developed and is not fully available to most enterprises. AI-specific methodologies are still being tested in large-scale projects. AI-specific tools and technologies are still being developed and enhanced, with evolutionary changes released at a rapid pace. AI talent continues to be tight, and investment in growing this skill set is only beginning.
As a result, organizations that need to be successful with AI, even with this data-centric perspective, need to be prepared to invest for the long haul. Find your peer groups to see what methodologies are working for them and continue to iterate until you find something that works for you. Find ways to continuously update your team's skills and methods. Realize that you're on the bleeding edge with AI technology and prepare to reinvest in new technology on a regular basis, or invent your own if need be. Even though the history of AI spans at least seven decades, we're still in the early stages of making AI work for large scale projects. This is like the early days of the Internet or mobile or big data. Those early pioneers had to learn the hard way, making many mistakes before realizing the "right" way to do things. But once those ways were discovered, organizations reaped big rewards. This is where we're at with AI. As long as you have a data-centric perspective and are prepared to continue to invest for the long haul, you will be successful with your AI, machine learning, and cognitive technology efforts.
Not All AI Is Created Equal: How To Select The Right Type For Your Business Needs – Forbes
Posted: at 9:46 am
We're on the cusp of a revolution led by artificial intelligence and automation that's set to radically change how we work. New AI tools are constantly emerging, with the promise of slicing through the complexities of modern problems plaguing today's workforce. AI has the potential to optimize the entire digital value chain in any organization, especially in areas where employees are struggling with manual processes. It's no surprise, then, that spending forecasts for AI are skyrocketing, with worldwide spending estimated at $35.8 billion. This uptick could yield $2.9 trillion of business value and 6.2 billion hours of worker productivity by 2021.
However, AI comes in many different forms, and identifying which type is most suitable for a particular use case isn't always easy. We hear terms like "machine learning," "deep learning" and "deterministic AI" used, but it's important for the industry to avoid generalization. If these technologies are just categorized under the umbrella of AI, there's a risk that organizations will buy into the wrong thing and miss the promised benefits.
Before investing in AI, organizations need to be very clear about what challenges they're looking to solve. Businesses must first look at where their workforce is under strain and how AI can offer value. Repeatable, manual processes, or those that require a huge volume and variety of data to be processed at a velocity beyond human capabilities, can be improved with AI. More importantly, organizations must consider what data they have available, which can influence the type of AI they should adopt.
Machine Learning
Machine learning-based AI takes a statistical approach: systems ingest data to understand their environment and make decisions. It requires a data scientist to train an algorithm so that it can learn to make decisions, which can take months or years as the AI learns the rules of its environment. As such, machine learning AI is at its best in environments where the rules don't change often, because every significant change requires relearning.
Machine learning is well equipped to help automate business operations. In the world of banking, for example, machine learning could be used to determine whether someone should be given a loan or credit card. When a customer applies, a machine learning AI tool could evaluate that application against a database containing the outcomes of thousands of previous applicants, as well as against the criteria for approving a loan. However, machine learning approaches are very limited in dynamic environments where the rules change constantly and there isn't time to "learn." They can also be prone to bias, as seen when Apple Card was reported to be favoring applications by men within its approval process, with men receiving much greater credit limits than women.
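The database-of-past-outcomes idea above can be sketched in a few lines. The nearest-neighbors approach, the features, and all the figures below are illustrative assumptions for the sake of the example, not a real credit model:

```python
# Minimal sketch of the statistical approach described above: score a new
# credit application against a database of past applicants and their outcomes.
# Features, records, and the choice of k are invented for illustration.

def knn_approve(applicant, history, k=3):
    """Approve if the majority of the k most similar past applicants repaid."""
    def distance(a, b):
        # Euclidean distance over (income, debt-to-income ratio) pairs
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    nearest = sorted(history, key=lambda rec: distance(applicant, rec[0]))[:k]
    repaid = sum(1 for _, outcome in nearest if outcome == "repaid")
    return repaid > k // 2

# Past applicants: ((income in $k, debt-to-income ratio), outcome)
history = [
    ((90, 0.10), "repaid"),
    ((85, 0.15), "repaid"),
    ((80, 0.20), "repaid"),
    ((30, 0.60), "defaulted"),
    ((25, 0.70), "defaulted"),
    ((20, 0.80), "defaulted"),
]

print(knn_approve((88, 0.12), history))  # resembles past repayers -> True
print(knn_approve((22, 0.75), history))  # resembles past defaulters -> False
```

Note how the bias problem mentioned above falls out of this design: the model can only echo whatever patterns, fair or unfair, are already in the historical outcomes.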
Deep Learning
Deep learning is a subcategory of machine learning that uses a neural network approach. This approach is similar to memory foam: once an object, or in this case a rule, has been introduced, it leaves an imprint that the AI can recall. This makes it effective for rules-based decision making, and it can also work from different types of unstructured data. In the workplace, deep learning has applications for use cases such as predictive maintenance, where, for example, it can take audio or visual data to predict when a piece of equipment might fail. In healthcare, deep learning can look at scans to identify anomalies or shadows.
However, like machine learning, it takes time to train. While you could argue the training happens by itself, the problem is that the definition of what is good or bad behavior takes too much time and effort to instill. For example, the use of deep learning in facial recognition systems at airports to identify suspicious individuals could introduce bias that could do more harm than good. Faces don't always follow "rules," and it's difficult for AI to identify features accurately, as evidenced when a U.K. passport application system rejected someone's photograph after mistaking his lips for an open mouth.
Deterministic AI
Deterministic AI is another subcategory of machine learning, but takes a very different approach. It performs a step-by-step fault-tree analysis based on directed dependency graphs (e.g., from real-time topology discovery), similar to a safety engineering approach. As a result, it can provide precise answers and map the evolution of a problem back to the underlying cause. It can do this in near real time, without requiring humans to analyze and interpret data. While it's likely too advanced for more repetitive tasks such as robotic process automation on automotive assembly lines, deterministic AI would be well suited to environments where the rules change constantly.
Deterministic AI has great applications for helping organizations overcome the complexity that's exploded in the shift to the enterprise cloud. (Full disclosure: My company uses deterministic AI to power the cloud software I helped develop.) Today's IT environments are highly dynamic and web-scale, containing hundreds of technologies, millions of lines of code and billions of dependencies. As a consequence, it's beyond human capabilities to manage digital service performance effectively.
For instance, if there's an anomaly in a large microservice application that triggers a storm of alerts, it can be impossible for IT teams to find the root cause. Deterministic AI can help IT teams by accurately providing the root cause of the anomaly and the solution in real time, while precisely suppressing millions of unrelated events. Such intelligence could be used to trigger auto-remediation procedures before users are impacted. Because of this, I believe deterministic AI will usher in a new era of AI-driven IT operations, where organizations run on an autonomous IT ecosystem that responds to changing rules in real time, and problems are resolved before users notice there's an issue.
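A toy illustration of the fault-tree idea over a dependency graph: given a directed dependency graph and a set of alerting services, a root cause is an alerting service whose own dependencies are healthy, and everything downstream of it can be suppressed. The topology and service names here are invented; real systems discover this graph automatically:

```python
# Sketch of fault-tree root-cause analysis over a directed dependency graph.
# A root cause is an alerting service none of whose dependencies is alerting;
# every other alert in the storm is a downstream symptom and can be suppressed.

def root_causes(depends_on, alerting):
    return {
        svc for svc in alerting
        if not any(dep in alerting for dep in depends_on.get(svc, []))
    }

# frontend -> api -> {database, cache}; an invented microservice topology
depends_on = {
    "frontend": ["api"],
    "api": ["database", "cache"],
    "database": [],
    "cache": [],
}

alerting = {"frontend", "api", "database"}   # the alert storm
roots = root_causes(depends_on, alerting)
suppressed = alerting - roots

print(roots)       # {'database'} -- the underlying cause
print(suppressed)  # downstream symptoms: frontend and api
```

Because the answer follows deterministically from the graph, no training phase is needed, which is why this style of analysis copes with environments whose rules change constantly.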
Ultimately, it's important to understand the benefits and drawbacks of the different types of AI. Not all AI is created equal, and the IT industry has a responsibility to ensure it doesn't become another buzzword. Otherwise, AI may never become the true game-changer it promises to be.
China Will Lose the Artificial Intelligence (AI) Race (And Why America Will Win) – The National Interest Online
Posted: at 9:46 am
Artificial intelligence (AI) is increasingly embedded into every aspect of life, and China is pouring billions into its bid to become an AI superpower. China's three-step plan is to pull equal with the United States in 2020, start making major breakthroughs of its own by mid-decade, and become the world's AI leader in 2030.
There's no doubt that Chinese companies are making big gains. Chinese government spending on AI may not match some of the most-hyped estimates, but China is providing big state subsidies to a select group of AI national champions, like Baidu in autonomous vehicles (AVs), Tencent in medical imaging, Alibaba in smart cities, and Huawei in chips and software.
State support isn't all about money. It's also about clearing the road to success -- sometimes literally. Baidu ("China's Google") is based in Beijing, where the local government has kindly closed more than 300 miles of city roads to make way for AV tests. Nearby Shandong province closed a 16-mile mountain road so that Huawei could test its AI chips for AVs in a country setting.
In other Chinese AV test cities, the roads remain open but are thoroughly sanitized. Southern China's tech capital, Shenzhen, is the home of AI leader Tencent, which is testing its own AVs on Shenzhen's public roads. Notably absent from Shenzhen's major roads are motorcycles, scooters, bicycles, or even pedestrians. Two-wheeled vehicles are prohibited; pedestrians are comprehensively corralled by sidewalk barriers and deterred from jaywalking by stiff penalties backed up by facial recognition technology.
And what better way to jump-start AI for facial recognition than by having a national biometric ID card database where every single person's face is rendered in machine-friendly standardized photos?
Making AI easy has certainly helped China get its AI strategy off the ground. But like a student who is spoon-fed the answers on a test, a machine that learns from a simplified environment won't necessarily be able to cope in the real world.
Machine learning (ML) uses vast quantities of experiential data to train algorithms to make decisions that mimic human intelligence. Type something like "ML 4 AI" into Google, and it will know exactly what you mean. That's because Google learns English in the real world, not from memorizing a dictionary.
It's the same for AVs. Google's Alphabet cousin Waymo tests its cars on the anything-goes roads of everyday America. As a result, its algorithms have learned how to deal with challenges like a cyclist carrying a stop sign. Everything that can happen on America's roads, will happen on America's roads. Chinese AI is learning how to drive like a machine, but American AI is learning how to drive like a human -- only better.
American, British, and (especially) Israeli facial recognition AI efforts face similar real-world challenges. They have to work with incomplete, imperfect data, and still get the job done. What's more, they can't throw up too many false positives -- innocent people identified as threats. China's totalitarian regime can punish innocent people with impunity, but in democratic countries, even one false positive could halt a facial recognition roll-out.
It's tempting to think that the best way forward for AI is to make it easy. In fact, the exact opposite is true. Like a muscle pushed to exercise, AI thrives on challenges. Chinese AI may take some giant strides operating in a stripped-down reality, but American AI will win the race in the real world. Reality is complicated, and if it's one thing Americans are good at, it's dealing with complexity.
Salvatore Babones is an adjunct scholar at the Centre for Independent Studies and an associate professor at the University of Sydney.
Forget The ROI: With Artificial Intelligence, Decision-Making Will Never Be The Same – Forbes
Posted: at 9:46 am
People are the ultimate power behind AI.
There are a lot of compelling things about artificial intelligence, but people still need to get comfortable with it. As shown in a recent survey of 1,500 decision makers released by Cognilytica, about 40 percent indicate that they are currently implementing at least one AI project or plan to do so. Issues getting in the way include limited availability of AI skills and talent, as well as justifying ROI.
Having the right mindset is half the battle with successfully building AI into the organization. This means looking beyond traditional, cold ROI measures and looking at the ways AI will enrich and amplify decision-making. Ravi Bapna, professor at the University of Minnesota's Carlson School of Management, says attitude wins the day for moving forward with AI. In a recent Knowledge@Wharton article, he offers four ways AI means better decisions:
AI helps leverage the power and the limitations of tacit knowledge: Many organizations have data that may sit unused because it's beyond the comprehension of the human mind. But with AI and predictive modeling applied, new vistas open up. "What many executives do not realize is that they are almost certainly sitting on tons of administrative data from the past that can be harnessed in a predictive sense to help make better decisions," Bapna says.
AI spots outliers: AI quickly catches outlying factors. These algorithms fall in the descriptive analytics pillar, a branch of machine learning that generates business value by exploring and identifying interesting patterns in your hyper-dimensional data, something at which we humans are not great.
AI promotes counter-factual thinking: Data by itself can be manipulated to justify pre-existing notions, or miss variables affecting results. Counter-factual thinking is a leadership muscle that is not exercised often enough, Bapna relates, and this leads to sub-optimal decision-making and poor resource allocation. Causal analytics encourages counter-factual thinking; not answering questions in a causal manner, or using the highest-paid person's opinion to make such inferences, is a sure-shot way of destroying value for your company.
AI enables combinatorial thinking: Even the most ambitious decisions are tempered by constraints, to the point where new projects may not be able to deliver. Most decision-making operates in the context of optimizing some goal (maximizing revenue or minimizing costs) in the presence of a variety of constraints (budgets, or service quality levels that have to be maintained), says Bapna. Needless to say, this inhibits growth. Combinatorial thinking, based on prescriptive analytics, can provide answers, he says. Combinatorial optimization algorithms are capable of predicting favorable outcomes that deliver more value for investments.
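The combinatorial point above is, at its core, the classic knapsack problem: choose the set of projects that maximizes predicted value without exceeding a budget. The brute-force sketch below, with invented project names and figures, only makes the idea concrete; real prescriptive-analytics tools use far more scalable solvers:

```python
# Toy combinatorial optimization: pick projects to maximize predicted revenue
# under a budget constraint (0/1 knapsack). Brute force is fine at this scale.
from itertools import combinations

def best_portfolio(projects, budget):
    """Exhaustively search subsets of projects; returns (names, revenue)."""
    best, best_revenue = (), 0
    for r in range(len(projects) + 1):
        for subset in combinations(projects, r):
            cost = sum(p[1] for p in subset)
            revenue = sum(p[2] for p in subset)
            if cost <= budget and revenue > best_revenue:
                best, best_revenue = subset, revenue
    return {p[0] for p in best}, best_revenue

# (name, cost in $k, predicted revenue in $k) -- invented figures
projects = [
    ("chatbot", 40, 70),
    ("forecasting", 60, 110),
    ("doc-search", 30, 45),
]

chosen, revenue = best_portfolio(projects, budget=100)
print(chosen, revenue)  # {'chatbot', 'forecasting'} 180
```

With many projects and constraints the search space explodes combinatorially, which is exactly why this class of decision benefits from algorithms rather than intuition or the highest-paid person's opinion.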
Why asking an AI to explain itself can make things worse – MIT Technology Review
Posted: at 9:46 am
Upol Ehsan once took a test ride in an Uber self-driving car. Instead of fretting about the empty driver's seat, anxious passengers were encouraged to watch a pacifier screen that showed a car's-eye view of the road: hazards picked out in orange and red, safe zones in cool blue.
For Ehsan, who studies the way humans interact with AI at the Georgia Institute of Technology in Atlanta, the intended message was clear: "Don't get freaked out; this is why the car is doing what it's doing." But something about the alien-looking street scene highlighted the strangeness of the experience rather than reassuring passengers. It got Ehsan thinking: what if the self-driving car could really explain itself?
The success of deep learning is due to tinkering: the best neural networks are tweaked and adapted to make better ones, and practical results have outpaced theoretical understanding. As a result, the details of how a trained model works are typically unknown. We have come to think of them as black boxes.
A lot of the time we're okay with that when it comes to things like playing Go or translating text or picking the next Netflix show to binge on. But if AI is to be used to help make decisions in law enforcement, medical diagnosis, and driverless cars, then we need to understand how it reaches those decisions, and know when they are wrong.
"People need the power to disagree with or reject an automated decision," says Iris Howley, a computer scientist at Williams College in Williamstown, Massachusetts. Without this, people will push back against the technology. "You can see this playing out right now with the public response to facial recognition systems," she says.
Ehsan is part of a small but growing group of researchers trying to make AIs better at explaining themselves, to help us look inside the black box. The aim of so-called interpretable or explainable AI (XAI) is to help people understand what features in the data a neural network is actually learning, and thus whether the resulting model is accurate and unbiased.
One solution is to build machine-learning systems that show their workings: so-called glassbox (as opposed to black-box) AI. Glassbox models are typically much-simplified versions of a neural network in which it is easier to track how different pieces of data affect the model.
"There are people in the community who advocate for the use of glassbox models in any high-stakes setting," says Jennifer Wortman Vaughan, a computer scientist at Microsoft Research. "I largely agree." Simple glassbox models can perform as well as more complicated neural networks on certain types of structured data, such as tables of statistics. For some applications that's all you need.
But it depends on the domain. If we want to learn from messy data like images or text, we're stuck with deep, and thus opaque, neural networks. The ability of these networks to draw meaningful connections between very large numbers of disparate features is bound up with their complexity.
Even here, glassbox machine learning could help. One solution is to take two passes at the data, training an imperfect glassbox model as a debugging step to uncover potential errors that you might want to correct. Once the data has been cleaned up, a more accurate black-box model can be trained.
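A toy version of that two-pass debugging step, using a one-feature decision stump as the "glassbox": fit the deliberately simple model, then inspect the rows it cannot explain as candidate label errors before training anything more complex. The data and labels below are invented for illustration:

```python
# Sketch of a glassbox debugging pass: a decision stump (single threshold on
# one feature) is transparent enough that the rows it misclassifies can be
# reviewed by hand as suspected label errors before training a black-box model.

def fit_stump(rows):
    """Pick the threshold on the single feature with the fewest errors."""
    best_t, best_errors = None, len(rows) + 1
    for t, _ in rows:
        errors = sum(1 for x, y in rows if (x >= t) != y)
        if errors < best_errors:
            best_t, best_errors = t, errors
    return best_t

# (feature value, label) -- the (9, False) row looks suspiciously mislabeled
rows = [(1, False), (2, False), (3, False), (9, False), (7, True), (8, True)]

t = fit_stump(rows)
suspects = [(x, y) for x, y in rows if (x >= t) != y]
print(t)         # 7 -- the simple rule the glassbox model learned
print(suspects)  # [(9, False)] -- the row the simple rule cannot explain
```

Once a human has reviewed and cleaned the flagged rows, the corrected data set can be handed to a more accurate but opaque model, as the paragraph above describes.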
It's a tricky balance, however. Too much transparency can lead to information overload. In a 2018 study looking at how non-expert users interact with machine-learning tools, Vaughan found that transparent models can actually make it harder to detect and correct the model's mistakes.
Another approach is to include visualizations that show a few key properties of the model and its underlying data. The idea is that you can see serious problems at a glance. For example, the model could be relying too much on certain features, which could signal bias.
These visualization tools have proved incredibly popular in the short time they've been around. But do they really help? In the first study of its kind, Vaughan and her team have tried to find out, and exposed some serious issues.
The team took two popular interpretability tools that give an overview of a model via charts and data plots, highlighting things that the machine-learning model picked up on most in training. Eleven AI professionals were recruited from within Microsoft, all different in education, job roles, and experience. They took part in a mock interaction with a machine-learning model trained on a national income data set taken from the 1994 US census. The experiment was designed specifically to mimic the way data scientists use interpretability tools in the kinds of tasks they face routinely.
What the team found was striking. Sure, the tools sometimes helped people spot missing values in the data. But this usefulness was overshadowed by a tendency to over-trust and misread the visualizations. In some cases, users couldn't even describe what the visualizations were showing. This led to incorrect assumptions about the data set, the models, and the interpretability tools themselves. And it instilled a false confidence about the tools that made participants more gung-ho about deploying the models, even when they felt something wasn't quite right. Worryingly, this was true even when the output had been manipulated to show explanations that made no sense.
To back up the findings from their small user study, the researchers then conducted an online survey of around 200 machine-learning professionals recruited via mailing lists and social media. They found similar confusion and misplaced confidence.
Worse, many participants were happy to use the visualizations to make decisions about deploying the model despite admitting that they did not understand the math behind them. "It was particularly surprising to see people justify oddities in the data by creating narratives that explained them," says Harmanpreet Kaur at the University of Michigan, a coauthor on the study. "The automation bias was a very important factor that we had not considered."
Ah, the automation bias. In other words, people are primed to trust computers. It's not a new phenomenon. When it comes to automated systems from aircraft autopilots to spell checkers, studies have shown that humans often accept the choices they make even when they are obviously wrong. But when this happens with tools designed to help us avoid this very phenomenon, we have an even bigger problem.
What can we do about it? For some, part of the trouble with the first wave of XAI is that it is dominated by machine-learning researchers, most of whom are expert users of AI systems. Says Tim Miller of the University of Melbourne, who studies how humans use AI systems: "The inmates are running the asylum."
This is what Ehsan realized sitting in the back of the driverless Uber. It is easier to understand what an automated system is doing (and to see when it is making a mistake) if it gives reasons for its actions the way a human would. Ehsan and his colleague Mark Riedl are developing a machine-learning system that automatically generates such rationales in natural language. In an early prototype, the pair took a neural network that had learned how to play the classic 1980s video game Frogger and trained it to provide a reason every time it made a move.
To do this, they showed the system many examples of humans playing the game while talking out loud about what they were doing. They then took a neural network for translating between two natural languages and adapted it to translate instead between actions in the game and natural-language rationales for those actions. Now, when the neural network sees an action in the game, it translates it into an explanation. The result is a Frogger-playing AI that says things like "I'm moving left to stay behind the blue truck" every time it moves.
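The Frogger rationale generator itself is a neural translation model, but the underlying data flow, recorded (state, action) pairs mapped to spoken rationales, can be illustrated with a much cruder stand-in: a nearest-neighbour lookup over a think-aloud corpus. Everything below is invented for illustration; it is a sketch of the idea, not Ehsan and Riedl's implementation.

```python
# Toy stand-in for action-to-rationale "translation": pick the rationale whose
# recorded (state, action) pair best matches the current situation. A real
# system would use a trained sequence model; all data here is invented.
def closest_rationale(corpus, state, action):
    """Return the rationale from the best-matching (state, action) example."""
    def score(example):
        ex_state, ex_action, _ = example
        overlap = len(set(ex_state) & set(state))   # shared state features
        return overlap + (2 if ex_action == action else 0)
    return max(corpus, key=score)[2]

corpus = [
    (("truck_ahead", "river_far"), "left", "I'm moving left to stay behind the blue truck"),
    (("gap_above", "truck_left"), "up", "I'm hopping forward through the gap in traffic"),
    (("log_right",), "right", "I'm jumping onto the log before it drifts away"),
]
print(closest_rationale(corpus, ("truck_ahead",), "left"))
# -> "I'm moving left to stay behind the blue truck"
```

The neural version generalizes to unseen situations instead of copying stored sentences, which is precisely why a translation architecture is used rather than a lookup like this one.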
Ehsan and Riedl's work is just a start. For one thing, it is not clear whether a machine-learning system will always be able to provide a natural-language rationale for its actions. Take DeepMind's board-game-playing AI AlphaZero. One of the most striking features of the software is its ability to make winning moves that most human players would not think to try at that point in a game. If AlphaZero were able to explain its moves, would they always make sense?
"Reasons help whether we understand them or not," says Ehsan. "The goal of human-centered XAI is not just to make the user agree to what the AI is saying; it is also to provoke reflection." Riedl recalls watching the livestream of the tournament match between DeepMind's AI and Korean Go champion Lee Sedol. The commentators were talking about what AlphaGo was seeing and thinking. "That wasn't how AlphaGo worked," says Riedl. "But I felt that the commentary was essential to understanding what was happening."
What this new wave of XAI researchers agree on is that if AI systems are to be used by more people, those people must be part of the design from the start, and different people need different kinds of explanations. (This is backed up by a new study from Howley and her colleagues, in which they show that people's ability to understand an interactive or static visualization depends on their education levels.) "Think of a cancer-diagnosing AI," says Ehsan. "You'd want the explanation it gives to an oncologist to be very different from the explanation it gives to the patient."
Ultimately, we want AIs to explain themselves not only to data scientists and doctors but to police officers using face recognition technology, teachers using analytics software in their classrooms, students trying to make sense of their social-media feeds, and anyone sitting in the backseat of a self-driving car. "We've always known that people over-trust technology, and that's especially true with AI systems," says Riedl. "The more you say it's smart, the more people are convinced that it's smarter than they are."
Explanations that anyone can understand should help pop that bubble.
See the rest here:
Why asking an AI to explain itself can make things worse - MIT Technology Review
AI in Health and Care Award launches in the UK – Healthcare IT News
Posted: at 9:46 am
Innovators in England are now able to submit their applications for a new AI in Health and Care Award launched by health secretary Matt Hancock to speed up testing, evaluation and adoption of the most promising AI technologies for healthcare.
The initiative will see £140 million made available during the next three years, with a call for applications running twice a year.
Initially, the award will focus on four areas: screening, diagnosis, clinical decision support and system efficiency.
Run by the Accelerated Access Collaborative, NHSX and the National Institute for Health Research, it forms part of the £250m announcement made in August to support the creation of an AI Lab for the health service.
WHY IT MATTERS
"Too many good ideas in the NHS never make it past the pilot stage," Hancock said in a speech at the Parliament & Healthtech conference in London on Tuesday.
"NHS Improvement estimate that it takes 17 years on average for a new product or device to go from successful clinical trial to mainstream adoption. Seventeen years. That is far too long," he continued. "We need a culture that rewards and incentivises adoption as well as invention."
With the new award, innovators will be supported through the following phases: technical feasibility, development and evaluation, real world testing, initial health system adoption, and national scale-up.
THE LARGER PICTURE
In addition to the announcement made regarding the NHS AI Lab, the health secretary used his speech to address the concerns of the so-called "tech-sceptics": those saying that other areas, such as workforce or infrastructure, should be prioritised, as well as those pushing to fix the basics first.
"If you work in the NHS, in any part of the service, far too often old, out-of-date 20th century technology gets in the way of your ability to do your job. So I completely get why some people think now is not the time to be talking genomics, automation and AI," Hancock said.
But he added: "I respectfully disagree. Because that's a bit like saying that we shouldn't explore space when we've got climate change to deal with on earth. Which sounds attractive until you consider that much of our knowledge about climate change is beamed down from satellites."
"The point is that sometimes the cutting edge can help us solve those bread-and-butter problems and move us to a new generation of solutions."
Earlier this month, the government announced that £40 million would be provided for a new project to reduce computer login times, with clinicians at some sites reporting that they have to log into as many as 15 different systems when caring for a patient.
"Better technology is vital to have and embracing it is the only way to make the NHS sustainable over the long term," Hancock said.
More information about the AI in Health and Care Award can be found here.
Visit link:
AI in Health and Care Award launches in the UK - Healthcare IT News
Microsoft launches 5-year, $40M AI initiative in global health – FierceBiotech
Posted: at 9:46 am
Microsoft announced a new five-year, $40 million artificial-intelligence initiative geared toward global health challenges and research.
Dubbed AI for Health, the program will work to equip academia and non-profit research organizations for their medical research efforts, including the development of diagnostics, treatments and preventive measures. It also aims to address global health crises and healthcare disparities.
"We know that putting this powerful technology into the hands of experts tackling this problem can accelerate new solutions and improve access for underserved populations," Microsoft President Brad Smith said in a statement.
The tech giant hopes to tackle the uneven distribution of data science expertise as well: Microsoft estimates that less than 5% of the world's AI professionals work in healthcare and non-profit organizations.
The new program will build upon the company's previous collaborations, like individual efforts focused on sudden infant death syndrome, leprosy and diabetic retinopathy, as well as its work on developing a secure system for sharing biomedical data.
AI for Health's first set of grantees includes the Novartis Foundation, Fred Hutchinson Cancer Research Center, Seattle Children's Research Institute, PATH, Intelligent Retinal Imaging Systems and the international development organization BRAC.
"Countries like Bangladesh, where BRAC was founded, have made enormous strides in health equity in the last three decades. Unfortunately, at least half the world's population still lacks access to essential health services," said BRAC's executive director, Asif Saleh. "Across our outreach areas in Asia and Africa, we see massive potential in using advanced data analytics and AI to bridge the gap between 'health for some' and 'health for all,' and we welcome Microsoft's commitment in making this happen."
Last year, Microsoft signed on to a wide-ranging partnership with Novartis to help put AI tools on the desk of each of the drugmaker's research associates. That five-year project will also establish a joint innovation lab tasked with upgrading the Big Pharma's R&D, clinical trials and manufacturing efforts.
It will also take on research projects in macular degeneration and irreversible blindness, cell and gene therapy manufacturing efficiency and expediting the design of new drug compounds. Meanwhile, Microsoft's new work with the company's foundation will focus on limiting the spread of disease.
"Leprosy is one of the oldest diseases known to humans, but today an estimated 2 to 3 million people are still living with the disease," said Ann Aerts, head of the Novartis Foundation. "Around the world, we are working to accelerate efforts to eliminate leprosy by focusing on interventions that aim to interrupt transmission. The use of AI is transformative and a game-changer in how we can accelerate progress and scale our work to reach the people in need."
View post:
Microsoft launches 5-year, $40M AI initiative in global health - FierceBiotech
How Artificial Intelligence Is Improving The Pharma Supply Chain – Forbes
Posted: at 9:46 am
Artificial intelligence (AI) will transform the pharmaceutical cold chain not in the distant, hypothetical future, but in the next few years. As the president of a company that has been actively involved in the creation of an application that will utilize machine learning to generate predictive data on environmental hazards in the biopharmaceutical cold chain cycle, I've seen firsthand the promise of this technology.
When coupled with machine learning and predictive analytics, the AI transformation goes much deeper than smarter search functions. It holds the potential to address some of the biggest challenges in pharmaceutical cold chain management. Here are some examples:
Analytical decision-making: Most companies capture only a fraction of their data's potential value. By aggregating and analyzing data from multiple sources (a drug order and weather data along a delivery route, for example), AI-based systems can provide complete visibility with predictive data throughout the cold chain. Before your cold chain starts, you can predict hurdles and properly allocate resources.
Analytical decision-making relies on companies having actionable data and real-time visibility throughout the cold chain. Just-in-time delivery of uncompromised drug product relies on predictive data analytics. With the help of analytical decision-making, cold chain logistics costs, overall drug costs, patient risk, and gaps in the pharmaceutical pipeline can all be significantly reduced.
For example, BenevolentAI in the United Kingdom is using a platform of computational and experimental technologies and processes to draw on vast quantities of mined and inferred biomedical data to improve and accelerate every step of the drug discovery process.
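The order-plus-weather aggregation described above can be sketched in a few lines. This is a toy under stated assumptions, not any vendor's product: the field names, the typical 2-8°C cold-chain band, and the safety margin are all invented for illustration.

```python
# Hedged sketch: join a drug order with weather forecasts along its route and
# flag legs where the forecast nears the product's allowed excursion
# temperature. Thresholds and data are illustrative.
def risky_legs(order, route_forecast, margin_c=2.0):
    """Return route stops whose forecast high exceeds max temp minus a margin."""
    limit = order["max_temp_c"] - margin_c
    return [stop for stop, high_c in route_forecast if high_c > limit]

order = {"id": "RX-1042", "max_temp_c": 8.0}   # typical 2-8 C cold-chain band
route_forecast = [("Memphis", 4.5), ("Dallas", 7.1), ("Phoenix", 11.3)]
print(risky_legs(order, route_forecast))  # ['Dallas', 'Phoenix']
```

A production system would feed such flags into resource allocation before the shipment leaves, which is the "predict hurdles" step the text describes.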
Supply chain management (SCM): A 2013 study by McKinsey & Company detailed a severe lack of agility in pharmaceutical supply chains. It noted that replenishment times from manufacturer to distribution centers averaged 75 days for pharmaceuticals but 30 days for other industries, and reported the need for better transparency around costs, logistics, warehousing and inventory. Assuring drug efficacy, patient identity and chain of custody integrated with supply chain agility is where the true value of AI lies for the drug industry.
DataRobot is one example: an AI platform powered by open-source algorithms that can automate model-building using historical drug-delivery data, helping make the pharmaceutical supply chain more agile. Supply chain managers can build a model that accurately predicts whether a given drug order could be consolidated with another upcoming order to the same location or department.
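A transparent, rule-based stand-in for such a consolidation model might look like the following. A trained model would learn these thresholds from historical delivery data rather than hard-coding them; the field names here are invented for illustration.

```python
# Sketch of an order-consolidation check: same destination and close delivery
# windows. A real model would learn this from historical data; fields invented.
def can_consolidate(order_a, order_b, max_gap_days=2):
    same_place = (order_a["site"], order_a["dept"]) == (order_b["site"], order_b["dept"])
    close_in_time = abs(order_a["due_day"] - order_b["due_day"]) <= max_gap_days
    return same_place and close_in_time

a = {"site": "St. Mary's", "dept": "oncology", "due_day": 14}
b = {"site": "St. Mary's", "dept": "oncology", "due_day": 15}
c = {"site": "St. Mary's", "dept": "pharmacy", "due_day": 15}
print(can_consolidate(a, b), can_consolidate(a, c))  # True False
```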
Inventory management: Biomarkers are making personalized medicine mainstream. Consequently, pharmaceutical companies must stock many more therapeutics, but in much lower quantities. AI-based inventory management can determine which product is most likely to be needed (and how often), track exactly when it's delivered to a patient, and flag delays or incidents that might trigger a replacement shipment within hours.
OptumRx increasingly uses AI/ML to manage data it collects in a healthcare setting. Since becoming operational, the AI/ML system has continuously improved itself by analyzing data and outcomes, all without additional intervention. Early results indicate that AI/ML is already adding agility to the cold chain by reducing shortages and excess inventory of drug products.
Warehouse automation: Integrating AI into warehouse automation tools speeds communications and reduces errors in pick and pack settings. At its simplest, AI predicts which items will be stored the longest and positions them accordingly. With this approach, Lineage Logistics, a cold-chain food supplier, increased productivity by 20%. In another example, AI positions high-volume items so they are easily accessible while still reducing congestion.
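The slotting heuristic just described, longest-dwelling items to the back, fast movers near the dock, can be sketched as a simple sort-and-pair. In practice the dwell-time estimates would come from a predictive model; here they are hard-coded for illustration.

```python
# Sketch of predictive slotting: items expected to sit the longest go to the
# deepest slots. Dwell estimates would come from a model; data is invented.
def assign_slots(items, slots):
    """Pair slowest-moving items with the deepest slots."""
    by_dwell = sorted(items, key=lambda it: it["est_dwell_days"], reverse=True)
    by_depth = sorted(slots, key=lambda s: s["depth"], reverse=True)
    return {it["sku"]: s["name"] for it, s in zip(by_dwell, by_depth)}

items = [{"sku": "A", "est_dwell_days": 40}, {"sku": "B", "est_dwell_days": 3}]
slots = [{"name": "dock-side", "depth": 1}, {"name": "back-rack", "depth": 9}]
print(assign_slots(items, slots))  # {'A': 'back-rack', 'B': 'dock-side'}
```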
FDA Embraces AI and Big Data
Historically, pharmaceutical companies have been slow to adapt to disruptive technologies because of the important oversight role played by the FDA. However, the FDA realizes AI's potential to learn and improve performance. It has already approved AI to detect diabetic retinopathy and potential strokes in patients, and updated regulations are expected soon to help streamline the implementation of this important tool.
Gain A Competitive Edge
For pharmaceutical companies looking to implement AI into their cold chain, here are some steps to take to become an early adopter:
1. Prepare your data, and ensure you own it. You need a strong pipeline of clean data and a mature logistics ecosystem with historical data on temperature, environmental conditions and packaging, as well as any other data you collect during your cold chain. If you don't have clean data stored, start collecting it now. If you think you have the data, verify that you own it. Some vendors claim ownership of the thermal data their systems generate and don't allow it to be manipulated by third-party software. In that case, it can't be combined with other data sources for AI analysis. Either negotiate ownership or change vendors.
2. Define your area of need: Where do you need a competitive edge? Start small with one factor that makes a measurable impact on your cold chain. That may be inventory control, packaging optimization, logistics, regulatory strategy or patient compliance. Track metrics, and tie them to business value.
3. Assemble the right people, and verify your internal capabilities. Implementing or supporting an AI/machine learning strategy requires skills that IT personnel typically lack. Consider upskilling your IT team or adding an AI skills requirement for your next new hires.
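Step 1's "clean data" requirement can be made concrete with a small validation gate of the kind that might sit in front of an AI pipeline. The required fields below are illustrative assumptions, not a standard schema.

```python
# Tiny validation pass: only records carrying every required field (with a
# non-None value) are allowed into the analysis pipeline. Fields are invented.
REQUIRED = ("shipment_id", "temp_c", "timestamp", "packaging")

def clean_records(records):
    """Keep records that carry every required field with a non-None value."""
    return [r for r in records if all(r.get(k) is not None for k in REQUIRED)]

records = [
    {"shipment_id": "S1", "temp_c": 5.2, "timestamp": "2020-02-10T08:00", "packaging": "gel"},
    {"shipment_id": "S2", "temp_c": None, "timestamp": "2020-02-10T09:00", "packaging": "gel"},
]
print([r["shipment_id"] for r in clean_records(records)])  # ['S1']
```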
AI is at a turning point. In the next decade, it is expected to contribute a massive amount of money to the global economy. In the life sciences market alone, AI is valued at $902.1 million and is expected to grow at a rate of 21.1% through 2024. As part of this growth, I believe AI will also make significant contributions to the pharmaceutical supply chain.
Link:
How Artificial Intelligence Is Improving The Pharma Supply Chain - Forbes
Data Transparency and Curation Vital to Success of Healthcare AI – HealthLeaders Media
Posted: at 9:45 am
Amid advances in precision medicine, healthcare is facing the twin challenges of having to curate and tailor the use of patient data to drive genomics-powered breakthroughs.
That was the takeaway from the AI & data sciences track of last week's Precision Medicine World Conference in Santa Clara, California.
"There aren't a lot of physicians saying, 'Bring me more AI,' " said John Mattison, MD, emeritus CMIO and assistant medical director of Kaiser Permanente. "Every physician is saying bring me a safer and more efficient way to deliver care."
Mattison recalled his prolonged conversations with the original developers of IBM's Watson AI technology. "Initially they had no human curation whatsoever," he said. "As Stanford has published over and over again, most of medical published literature is subsequently refuted or ignored, because it's wrong. The original Watson approach was pure machine curation of reported literature without any human curation."
But human curation is not without its own biases. Watson's value to Kaiser was further eroded by Watson's focus on oncology patient data from Memorial Sloan Kettering Cancer Center and MD Anderson Cancer Center, Mattison said.
"I don't really want curation from those two institutions, because they're fee for service, and you get all these biases. The amount of money the drug companies spend on lobbying doctors to use their more expensive novel drugs is remarkably influential. If you're involved in clinical care, you want to take the best output of machine learning and you want to make sure that you have good human curation," which in Kaiser's case, emphasizes value-based care over fee-for-service, he added.
A key in human curation of machine learning and AI is how transparent the curation is, and how accessible the authoring environment for such curation is, so clinicians can make appropriate substitutions for their own requirements, Mattison said.
A current challenge for health systems is being approached by machine-learning and AI companies that remain in stealth mode and are not up-front about how and where their technology will share patient data, making it difficult for chief data officers to introduce the technology to the health system.
"Using [the patient data] for some commercial, unexpected purpose is very different than using it for the purpose that you have agreed with the health system that you're going to be using it with," said Cora Han, JD, chief health data officer with UC Health, the umbrella organization for UCSF, UCLA, UC Irvine, UC Davis, UC San Diego, and UC Riverside health systems.
A recurring theme during the conference was the need for a third party to provide trusted certification that machine learning and AI algorithms are free from bias, such as confirmation bias or ascertainment bias, meaning basing algorithms on a cohort of patients who do not represent the entire population served by the health system.
"We have no certification groups right now that certify these things as being fair," said Atul Butte, MD, director of UCSF's Bakar Computational Health Sciences Institute. "Imagine a world in five to 10 years where we're only going to buy or license methods or algorithms that have been certified as being fair in our population, in the University of California."
UCLA Health has met or exceeded the goal of representing its own demographics within Atlas, the system's community health initiative that "aims to recruit 150,000 patients across the health system with the goal of creating California's largest genomic resource that can be used for translational medicine," according to the UCLA Health website.
"We are a far cry from [meeting] L.A. county" demographics, said Clara Lajonchere, PhD, deputy director of the UCLA Institute for Precision Health. Currently, 15% of Atlas patients are Latino, and 6% to 7% are African-American. "While those rates exceed that of some of the other large-scale studies, it still really underlines how critical diversity is going to be."
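A representation check of the kind Lajonchere describes is straightforward to sketch: compare cohort percentages against population targets and report the gaps. The population figures below are hypothetical placeholders for illustration, not L.A. County's actual demographics.

```python
# Sketch: how far a study cohort's demographic mix is from the population it
# should represent, in percentage points. Population targets are hypothetical.
def representation_gaps(cohort_pct, population_pct):
    """Return {group: cohort - population} in percentage points."""
    return {g: round(cohort_pct.get(g, 0.0) - p, 1) for g, p in population_pct.items()}

cohort = {"Latino": 15.0, "African-American": 6.5}
population = {"Latino": 48.6, "African-American": 9.0}   # illustrative targets
print(representation_gaps(cohort, population))
# -> {'Latino': -33.6, 'African-American': -2.5}
```

Large negative gaps are the ascertainment bias discussed earlier: an algorithm trained on such a cohort may not generalize to the full population the health system serves.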
Recent alliances such as the Google/Ascension agreement, or the Mayo Clinic/nference startup for drug development are further enabling the kind of volume, velocity, and variety that will drive machine learning and AI innovations in healthcare, Han said.
HIPAA, which has enabled business associates such as nference to safely enter patient-sharing relationships with providers such as Mayo, can work against the principle of transparency. "If a tech company signs a BAA with a hospital system, [outsiders] don't get to see that contract," Butte said. "We could take it on faith that all the right terms were put in that contract, but sometimes just naming two entities in a sentence seems sinister and ominous in some ways."
Health systems with more than 100 years of trust associated with their brand find themselves partnering with startups with little or no such trust, and this creates additional tension in the healthcare system.
In addition, concerns linger that deidentified data will somehow be able to be reidentified through the course of its use and sharing by innovative startups.
"Whole genomes, it's hard to deidentify those," Han said. "These are issues that we will be working through."
"We just need to develop a set of standards about how privacy is controlled," said Brook Byers, founder and partner with Kleiner Perkins, a Silicon Valley venture capital firm.
Scott Mace is a contributing writer for HealthLeaders.
Continued here:
Data Transparency and Curation Vital to Success of Healthcare AI - HealthLeaders Media