The Prometheus League
Breaking News and Updates
- Abolition Of Work
- Ai
- Alt-right
- Alternative Medicine
- Antifa
- Artificial General Intelligence
- Artificial Intelligence
- Artificial Super Intelligence
- Ascension
- Astronomy
- Atheism
- Atheist
- Atlas Shrugged
- Automation
- Ayn Rand
- Bahamas
- Bankruptcy
- Basic Income Guarantee
- Big Tech
- Bitcoin
- Black Lives Matter
- Blackjack
- Boca Chica Texas
- Brexit
- Caribbean
- Casino
- Casino Affiliate
- Cbd Oil
- Censorship
- Cf
- Chess Engines
- Childfree
- Cloning
- Cloud Computing
- Conscious Evolution
- Corona Virus
- Cosmic Heaven
- Covid-19
- Cryonics
- Cryptocurrency
- Cyberpunk
- Darwinism
- Democrat
- Designer Babies
- DNA
- Donald Trump
- Eczema
- Elon Musk
- Entheogens
- Ethical Egoism
- Eugenic Concepts
- Eugenics
- Euthanasia
- Evolution
- Extropian
- Extropianism
- Extropy
- Fake News
- Federalism
- Federalist
- Fifth Amendment
- Financial Independence
- First Amendment
- Fiscal Freedom
- Food Supplements
- Fourth Amendment
- Free Speech
- Freedom
- Freedom of Speech
- Futurism
- Futurist
- Gambling
- Gene Medicine
- Genetic Engineering
- Genome
- Germ Warfare
- Golden Rule
- Government Oppression
- Hedonism
- High Seas
- History
- Hubble Telescope
- Human Genetic Engineering
- Human Genetics
- Human Immortality
- Human Longevity
- Illuminati
- Immortality
- Immortality Medicine
- Intentional Communities
- Jacinda Ardern
- Jitsi
- Jordan Peterson
- Las Vegas
- Liberal
- Libertarian
- Libertarianism
- Liberty
- Life Extension
- Macau
- Marie Byrd Land
- Mars
- Mars Colonization
- Mars Colony
- Memetics
- Micronations
- Mind Uploading
- Minerva Reefs
- Modern Satanism
- Moon Colonization
- Nanotech
- National Vanguard
- NATO
- Neo-eugenics
- Neurohacking
- Neurotechnology
- New Utopia
- New Zealand
- Nihilism
- Nootropics
- NSA
- Oceania
- Offshore
- Olympics
- Online Casino
- Online Gambling
- Pantheism
- Personal Empowerment
- Poker
- Political Correctness
- Politically Incorrect
- Polygamy
- Populism
- Post Human
- Post Humanism
- Posthuman
- Posthumanism
- Private Islands
- Progress
- Proud Boys
- Psoriasis
- Psychedelics
- Putin
- Quantum Computing
- Quantum Physics
- Rationalism
- Republican
- Resource Based Economy
- Robotics
- Rockall
- Ron Paul
- Roulette
- Russia
- Sealand
- Seasteading
- Second Amendment
- Seychelles
- Singularitarianism
- Singularity
- Socio-economic Collapse
- Space Exploration
- Space Station
- Space Travel
- Spacex
- Sports Betting
- Sportsbook
- Superintelligence
- Survivalism
- Talmud
- Technology
- Teilhard De Charden
- Terraforming Mars
- The Singularity
- Tms
- Tor Browser
- Trance
- Transhuman
- Transhuman News
- Transhumanism
- Transhumanist
- Transtopian
- Transtopianism
- Ukraine
- Uncategorized
- Vaping
- Victimless Crimes
- Virtual Reality
- Wage Slavery
- War On Drugs
- Waveland
- Ww3
- Yahoo
- Zeitgeist Movement
- Prometheism
- Forbidden Fruit
- The Evolutionary Perspective
Daily Archives: August 20, 2022
Are You Making These Deadly Mistakes With Your AI Projects? – Forbes
Posted: August 20, 2022 at 2:19 pm
Since data is at the heart of AI, it should come as no surprise that AI and ML systems need enough good-quality data to learn. In general, a large volume of good-quality data is needed, especially for supervised learning approaches, in order to properly train the AI or ML system. The exact amount of data needed may vary depending on which pattern of AI you're implementing, the algorithm you're using, and other factors such as in-house versus third-party data. For example, neural nets need a lot of data to be trained, while decision trees or Bayesian classifiers don't need as much data to still produce high-quality results.
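The point about different data appetites can be illustrated with a small sketch. This is not from the article: it trains a shallow decision tree and a small neural network (scikit-learn assumed available) on the same synthetic task at increasing training-set sizes, so you can see how each model family's accuracy responds to more data.

```python
# Illustrative sketch: how much data do two model families need?
# The dataset is synthetic; numbers will vary by task and model.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for n in (50, 200, 1500):  # increasing training-set sizes
    tree = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_train[:n], y_train[:n])
    net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500,
                        random_state=0).fit(X_train[:n], y_train[:n])
    print(f"n={n}: tree={tree.score(X_test, y_test):.2f} "
          f"net={net.score(X_test, y_test):.2f}")
```

On tasks like this, the tree's accuracy tends to plateau with far fewer examples than the network needs, which is the trade-off the paragraph describes.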
So you might think more is better, right? Well, think again. Organizations with lots of data, even exabytes, are realizing that having more data is not the solution to their problems. Indeed: more data, more problems. The more data you have, the more you need to clean and prepare, the more you need to label and manage, and the more you need to secure, protect, and screen for bias. Small projects can rapidly turn into very large projects when you start multiplying the amount of data. In fact, many times, lots of data kills projects.
Clearly the missing step between identifying a business problem and getting the data squared away to solve it is determining which data you need and how much of it you really need. You need enough, but not too much. People often call this "Goldilocks" data: not too much, not too little, but just right. Unfortunately, far too often, organizations jump into AI projects without first understanding their data. Questions organizations need to answer include where the data is, how much of it they already have, what condition it is in, which features of that data are most important, whether to use internal or external data, data access challenges, and requirements to augment existing data. Without these questions answered, AI projects can quickly die, drowning in a sea of data.
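Several of those questions (how much data you have, what condition it is in) can be answered with a quick audit before any modeling begins. Here is a minimal sketch of that idea; the DataFrame and its columns are made up for illustration.

```python
# Minimal data-audit sketch: count rows, duplicates, and missing values
# before committing to a model. The example data is entirely made up.
import pandas as pd

df = pd.DataFrame({
    "age":    [34, None, 45, 45, 29],
    "income": [52000, 61000, 58000, 58000, None],
    "label":  [1, 0, 1, 1, 0],
})

audit = {
    "rows": len(df),
    "duplicate_rows": int(df.duplicated().sum()),
    "missing_by_column": df.isna().sum().to_dict(),
}
print(audit)
```

Even a report this small answers "how much do we have?" and "what condition is it in?" and flags the cleaning and labeling work the previous paragraph warns about.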
Getting a better understanding of data
In order to understand just how much data you need, you first need to understand how and where data fits into the structure of AI projects. One visual way of understanding the increasing levels of value we get from data is the DIKUW pyramid (sometimes also referred to as the DIKW pyramid) which shows how a foundation of data helps build greater value with Information, Knowledge, Understanding and Wisdom.
DIKW pyramid
With a solid foundation of data, you can gain additional insights at the information layer, which helps you answer basic questions about that data. Once you have made basic connections between data points to gain informational insight, you can find patterns in that information and reach the knowledge layer, where you see how various pieces of information are connected. Building on the knowledge layer, organizations get even more value at the understanding layer, which explains why those patterns are happening. Finally, the wisdom layer is where you gain the most value, with insight into the cause and effect behind decisions.
This latest wave of AI focuses mostly on the knowledge layer, since machine learning provides the insight on top of the information layer to identify patterns. Unfortunately, machine learning reaches its limits at the understanding layer, since finding patterns isn't sufficient for reasoning. We have machine learning, but not the machine reasoning required to understand why the patterns are happening. You can see this limitation in effect any time you interact with a chatbot. While machine learning-enabled NLP is really good at understanding your speech and deriving intent, it runs into limitations when trying to understand and reason. For example, if you ask a voice assistant whether you should wear a raincoat tomorrow, it doesn't understand that you're asking about the weather. A human has to provide that insight to the machine, because the voice assistant doesn't know what rain actually is.
Avoiding Failure by Staying Data Aware
Big data has taught us how to deal with large quantities of data: not just how it's stored, but how to process, manipulate, and analyze all that data. Machine learning has added more value by being able to handle the wide range of unstructured, semi-structured, and structured data that organizations collect. Indeed, this latest wave of AI is really the big data-powered analytics wave.
But it's exactly for this reason that some organizations are failing so hard at AI. Rather than run AI projects with a data-centric perspective, they are focusing on the functional aspects. To get a handle on their AI projects and avoid deadly mistakes, organizations need a better understanding not only of AI and machine learning but also of the "Vs" of big data: it's not just about how much data you have, but also the nature of that data. Those Vs include volume, velocity, variety, and veracity, among others.
Organizations that are successful with AI are primarily those already successful with big data, drawing on decades of experience managing big data projects. The ones seeing their AI projects die are the ones coming at their AI problems with application-development mindsets.
Too Much of the Wrong Data, and Not Enough of the Right Data is Killing AI Projects
While AI projects often start off on the right foot, a lack of the necessary data, and a failure to first understand and then solve real problems, is killing them. Organizations power forward without a real understanding of the data they need or the quality of that data, and this poses real challenges.
One reason organizations make this data mistake is that they run their AI projects without any real approach beyond Agile or app-dev methods. Successful organizations, however, have realized that data-centric approaches make data understanding one of the first phases of the project. The CRISP-DM methodology, which has been around for over two decades, specifies data understanding as the very next step once you determine your business needs. Building on CRISP-DM and adding Agile methods, the Cognitive Project Management for AI (CPMAI) methodology requires data understanding in its Phase II. Other successful approaches likewise require data understanding early in the project, because, after all, AI projects are data projects. And how can you build a successful project on a foundation of data without running it with an understanding of that data? That's surely a deadly mistake you want to avoid.
Read more from the original source:
Are You Making These Deadly Mistakes With Your AI Projects? - Forbes
Posted in Ai
Comments Off on Are You Making These Deadly Mistakes With Your AI Projects? – Forbes
A critical review of the EU’s ‘Ethics Guidelines for Trustworthy AI’ – TNW
Posted: at 2:19 pm
Europe has some of the most progressive, human-centric artificial intelligence governance policies in the world. Compared to the heavy-handed government oversight in China or the Wild West-style anything-goes approach in the US, the EU's strategy is designed to stoke academic and corporate innovation while also protecting private citizens from harm and overreach. But that doesn't mean it's perfect.
In 2018, the European Commission began its European AI Alliance initiative. The alliance exists so that various stakeholders can weigh in and be heard as the EU considers its ongoing policies governing the development and deployment of AI technologies.
Since 2018, more than 6,000 stakeholders have participated in the dialogue through various venues, including online forums and in-person events.
Subscribe to our newsletter now for a weekly recap of our favorite AI stories in your inbox.
The commentary, concerns, and advice provided by those stakeholders have been considered by the EU's High-Level Expert Group on Artificial Intelligence, which ultimately created four key documents that serve as the basis for the EU's policy discussions on AI:
1. Ethics Guidelines for Trustworthy AI
2. Policy and Investment Recommendations for Trustworthy AI
3. Assessment List for Trustworthy AI
4. Sectoral Considerations on the Policy and Investment Recommendations
This article focuses on item number one: the EU's Ethics Guidelines for Trustworthy AI.
Published in 2019, this document lays out the bare-bones ethical concerns and best practices for the EU. While I wouldn't exactly call it a living document, it is supported by a continuously updated reporting system via the European AI Alliance initiative.
The Ethics Guidelines for Trustworthy AI provides a set of 7 key requirements that AI systems should meet in order to be deemed trustworthy.
Per the document:
AI systems should empower human beings, allowing them to make informed decisions and fostering their fundamental rights. At the same time, proper oversight mechanisms need to be ensured, which can be achieved through human-in-the-loop, human-on-the-loop, and human-in-command approaches.
Neural's rating: poor.
Human-in-the-loop, human-on-the-loop, and human-in-command are all wildly subjective approaches to AI governance that almost always rely on marketing strategies, corporate jargon, and disingenuous approaches to discussing how AI models work in order to appear efficacious.
Essentially, the "human in the loop" myth involves the idea that an AI system is safe as long as a human is ultimately responsible for pushing the button or authorizing the execution of a machine learning function that could potentially have an adverse effect on humans.
The problem: Human-in-the-loop relies on competent humans at every level of the decision-making process to ensure fairness. Unfortunately, studies show that humans are easily manipulated by machines.
Were also prone to ignore warnings whenever they become routine.
Think about it: when's the last time you read all the fine print on a website before agreeing to the terms presented? How often do you ignore the check-engine light on your car, or the time-for-an-update alert on software that's still functioning properly?
Automating programs or services that affect human outcomes under the pretense that having a human in the loop is enough to prevent misalignment or misuse is, in this author's opinion, a feckless approach to regulation, one that gives businesses carte blanche to develop harmful models as long as they tack on a human-in-the-loop requirement for usage.
As an example of what could go wrong, ProPublica's award-winning Machine Bias article laid bare the propensity of the human-in-the-loop paradigm to cause additional bias by demonstrating how AI used to recommend criminal sentences can perpetuate and amplify racism.
A solution: the EU should do away with the idea of creating proper oversight mechanisms and instead focus on policies that regulate the use and deployment of black-box AI systems, preventing their deployment in situations where human outcomes might be affected unless there's a human authority who can be held ultimately responsible.
Per the document:
AI systems need to be resilient and secure. They need to be safe, ensuring a fall back plan in case something goes wrong, as well as being accurate, reliable and reproducible. That is the only way to ensure that also unintentional harm can be minimized and prevented.
Neural's rating: needs work.
Without a definition of "safe," the whole statement is fluff. Furthermore, "accuracy" is a malleable term in the AI world that almost always refers to arbitrary benchmarks that do not translate beyond laboratories.
A solution: the EU should set a bare minimum requirement that AI models deployed in Europe with the potential to affect human outcomes must demonstrate equality. An AI model that achieves lower reliability or accuracy on tasks involving minorities should be considered neither safe nor reliable.
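The equality check proposed here is straightforward to operationalize: compare a model's accuracy separately for each demographic group. The sketch below is illustrative only; the group labels and predictions are synthetic, with an error deliberately injected for one group to show what the check would catch.

```python
# Illustrative per-group accuracy check with synthetic data.
# A model whose accuracy drops for one group would fail the
# "neither safe nor reliable" bar the article proposes.
import numpy as np

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)
group = rng.integers(0, 2, size=1000)   # 0 = majority, 1 = minority (made up)
y_pred = y_true.copy()

# Simulate a model that errs ~20% of the time on group 1 only
flip = (group == 1) & (rng.random(1000) < 0.2)
y_pred[flip] = 1 - y_pred[flip]

for g in (0, 1):
    mask = group == g
    acc = float((y_true[mask] == y_pred[mask]).mean())
    print(f"group {g}: accuracy {acc:.2f}")
```

A regulator-style rule would then be a threshold on the gap between group accuracies, not just on the overall average.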
Per the document:
Besides ensuring full respect for privacy and data protection, adequate data governance mechanisms must also be ensured, taking into account the quality and integrity of the data, and ensuring legitimised access to data.
Neural's rating: good, but could be better.
Luckily, the General Data Protection Regulation (GDPR) does most of the heavy lifting here. However, the terms "quality" and "integrity" are highly subjective, as is the term "legitimised access."
A solution: the EU should define a standard whereby data must be obtained with consent and verified by humans, ensuring that the databases used to train models contain only data that is properly labeled and used with the permission of the person or group who generated it.
Per the document:
The data, system and AI business models should be transparent. Traceability mechanisms can help achieving this. Moreover, AI systems and their decisions should be explained in a manner adapted to the stakeholder concerned. Humans need to be aware that they are interacting with an AI system, and must be informed of the system's capabilities and limitations.
Neural's rating: this is hot garbage.
Only a small percentage of AI models lend themselves to transparency. The majority of AI models in production today are black box systems that, by the very nature of their architecture, produce outputs using far too many steps of abstraction, deduction, or conflation for a human to parse.
In other words, a given AI system might use billions of different parameters to produce an output. To understand why it produced that particular outcome instead of a different one, we'd have to review each of those parameters step by step in order to reach the exact same conclusion as the machine.
A solution: the EU should adopt a strict policy preventing the deployment of opaque or black box artificial intelligence systems that produce outputs that could affect human outcomes unless a designated human authority can be held fully accountable for unintended negative outcomes.
Per the document:
Unfair bias must be avoided, as it could have multiple negative implications, from the marginalization of vulnerable groups, to the exacerbation of prejudice and discrimination. Fostering diversity, AI systems should be accessible to all, regardless of any disability, and involve relevant stakeholders throughout their entire life cycle.
Neural's rating: poor.
In order for AI models to involve relevant stakeholders throughout their entire life cycle, they'd need to be trained on data sourced from diverse sources and developed by diverse teams. The reality is that STEM is dominated by white, straight, cis males, and there are myriad peer-reviewed studies demonstrating how that simple, demonstrable fact makes it almost impossible to produce many types of AI models without bias.
A solution: unless the EU has a method by which to solve the lack of minorities in STEM, it should instead focus on creating policies that prevent businesses and individuals from deploying AI models that produce different outcomes for minorities.
Per the document:
AI systems should benefit all human beings, including future generations. It must hence be ensured that they are sustainable and environmentally friendly. Moreover, they should take into account the environment, including other living beings, and their social and societal impact should be carefully considered.
Neural's rating: great. No notes!
Per the document:
Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes. Auditability, which enables the assessment of algorithms, data and design processes, plays a key role therein, especially in critical applications. Moreover, adequate and accessible redress should be ensured.
Neural's rating: good, but could be better.
There's currently no political consensus as to who's responsible when AI goes wrong. If the EU's airport facial-recognition systems, for example, mistakenly identify a passenger, and the resulting inquiry causes them financial harm (they miss their flight and any opportunities stemming from their travel) or unnecessary mental anguish, there's nobody who can be held responsible for the mistake.
The employees following procedure based on the AI's flagging of a potential threat are just doing their jobs. And the developers who trained the systems are typically beyond reproach once their models go into production.
A solution: the EU should create a policy that specifically dictates that humans must always be held accountable when an AI system causes an unintended or erroneous outcome for another human. The EU's current policy and strategy encourage a "blame the algorithm" approach that benefits corporate interests more than citizens' rights.
While the above commentary may be harsh, I believe the EU's AI strategy is a light leading the way. However, it's obvious that the EU's desire to compete with the Silicon Valley innovation market in the AI sector has pushed the bar for human-centric technology a little further toward corporate interests than the union's other technology policy initiatives have.
The EU wouldn't sign off on an aircraft that was mathematically proven to crash more often when Black people, women, or queer people were aboard than when white men were. It shouldn't allow AI developers to get away with deploying models that function that way either.
Read the original:
A critical review of the EU's 'Ethics Guidelines for Trustworthy AI' - TNW
Posted in Ai
Comments Off on A critical review of the EU’s ‘Ethics Guidelines for Trustworthy AI’ – TNW
Thinking of a career in AI? Make sure you have these 8 skills – TNW
Posted: at 2:19 pm
This article was originally published on .cult by Saudamani Singh. .cult is a Berlin-based community platform for developers. We write about all things career-related, make original documentaries, and share heaps of other untold developer stories from around the world.
If you've ever used Alexa or Siri, sat in a self-driving car, talked to a chatbot, or even watched something recommended to you by Netflix, you've come across artificial intelligence, or AI, as it's commonly known.
AI is a major driving force behind the world's advancement in almost every field of study, including healthcare, finance, entertainment, and transport. Simply put, artificial intelligence is the capability of machines to learn like humans, make problem-solving decisions, and complete tasks that would otherwise require multiple individuals to invest long working hours.
Thinking of a career in this exciting field? We're here to answer all your AI-related questions, so you can get humanity one step closer to planting Elon Musk's Neuralink chips into our brains and curing blindness! Or just make a chatbot.
AI engineers conduct a variety of tasks that would fly right over the layman's head. In fairness, creating and implementing machine learning algorithms sounds like something right out of a sci-fi movie. To be able to do that, though, here are some skills every AI engineer must have:
1. Analytics
To be able to create deep-learning models that analyse patterns, a strong understanding of analytics is a prerequisite. Being well grounded in analytics will help in testing and configuring AI systems.
2. Applied Mathematics
We're guessing that if you have an interest in artificial intelligence engineering, you probably don't hate math, since it is at the core of all things AI. A firm understanding of gradient descent, quadratic programming, and topics like convex optimization is necessary.
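To make gradient descent concrete, here is a minimal, self-contained example: minimizing f(w) = (w - 3)² by repeatedly stepping against its gradient f'(w) = 2(w - 3). The function and learning rate are toy choices for illustration.

```python
# Minimal gradient descent on a one-dimensional toy objective:
# f(w) = (w - 3)^2, whose gradient is f'(w) = 2 * (w - 3).
def gradient_descent(lr=0.1, steps=100):
    w = 0.0                  # arbitrary starting point
    for _ in range(steps):
        grad = 2 * (w - 3)   # gradient of f at w
        w -= lr * grad       # step against the gradient
    return w

print(gradient_descent())    # converges toward the minimum at w = 3
```

Training a neural network is this same loop scaled up: many parameters, a loss computed from data, and gradients obtained by backpropagation.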
3. Statistics and algorithms
An adequate understanding of statistics is required when working with algorithms. AI engineers need to be well versed in topics like standard deviation and probability, and in models like Hidden Markov models and Naive Bayes.
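As a small taste of those statistical models in practice, here is a Gaussian Naive Bayes classifier on a standard toy dataset (scikit-learn assumed available; the dataset choice is ours, not the article's).

```python
# Gaussian Naive Bayes on the classic iris dataset: the model applies
# Bayes' rule with a per-feature Gaussian likelihood for each class.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = GaussianNB().fit(X_tr, y_tr)
print(f"test accuracy: {model.score(X_te, y_te):.2f}")
```

Despite its "naive" independence assumption, this kind of probabilistic model is a common baseline before reaching for anything heavier.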
4. Language fluency
Yep, no surprises here. You'll need to be fluent in a couple of languages to be a successful AI engineer. The most popular language among artificial intelligence engineers is Python, but it often turns out to be too little on its own. It's important to have proficiency in multiple languages, such as C, C++, and Java.
5. Problem-solving and communication skills
AI engineers need to think outside the box. You'll find there's no set of rules or go-to guidelines you can adhere to if you're ever in a pickle. AI often requires innovative use of machine learning models and creative thinking. You'll also need to be able to communicate these ideas to co-workers who may not have much knowledge of the subject.
6. Neural Network knowledge
Another important skill you're going to need is facility with neural networks. A neural network is software that works similarly to a human brain, helping with pattern recognition, solving complex problems, and performing image classification, which is a massive part of how we use AI. AI engineers often spend a lot of time working with neural networks; thus, a good understanding of the subject is required.
7. Team management
You'll likely work independently much of the time. However, some aspects of the job have you communicating with humans, too, instead of just machines. As an AI engineer, you will need to share your ideas with numerous other engineers and developers, so communication and management skills come in handy. So while you're solving math equations to prepare for your career, make sure you do it with people around you.
8. Cloud knowledge
Out of the many tricks AI engineers need to have under their belt, a fair idea of cloud architecture is right up there. Cloud architecture involves much more than just managing storage space, and knowing which secure storage system is best suited to your project will be extremely helpful.
The salary an AI engineer makes depends on experience, certification, and location, but generally, they get paid pretty well. According to Glassdoor, the average salary for AI engineers in the US is $114,121 per year as of 2020. Other sources claim the salary goes as high as $248,625 for experienced AI engineers. It sounds like you'll be able to afford your dream house in Silicon Valley in no time.
As an AI engineer, your job is not monotonous by any means. New challenges and opportunities for innovative implementations of AI technology await every day. The demands and skills needed may seem intimidating, but the reward and compensation make it all worth it.
See the rest here:
Thinking of a career in AI? Make sure you have these 8 skills - TNW
Posted in Ai
Comments Off on Thinking of a career in AI? Make sure you have these 8 skills – TNW
Why Home Prices Must Fall; All In on Retail AI – Bloomberg
Posted: at 2:19 pm
In this week's Bloomberg podcast, Bloomberg Intelligence analysts discuss the findings and impact of their research:
- Unaffordable Homes, Higher Mortgages May Create Price Pressure: Erica Adelberg lays out why home prices will have to retreat in the face of higher mortgage rates.
- Clean-Energy Flows May Not Lift ESG and Sustainability ETFs: Shaheen Contractor says new US climate spending plans won't fully revive flows into clean-energy ETFs.
- All In on AI: Artificial Intelligence Key to E-Commerce Future: Poonam Goyal lays out the impacts of retailers' embrace of artificial intelligence.
- Bonds vs. Recession: Industrials, Retail at Risk, Tech Resilient: Himanshu Bakshi explains which groups of bonds are most at risk, and most resilient, if the US falls into recession.
- FTSE Growth-Value Reversal a Trend? We Think Not; Look to Gilts: Tim Craighead says the big shift toward growth from value looks transitory.
The Bloomberg Intelligence radio show with Paul Sweeney and Alix Steel podcasts through Apple's iTunes, Spotify and Luminary. It broadcasts on Saturdays and Sundays at noon on Bloomberg's flagship station WBBR (1130 AM) in New York, 106.1 FM/1330 AM in Boston, 99.1 FM in Washington, 960 AM in the San Francisco area, channel 119 on SiriusXM, http://www.bloombergradio.com, and iPhone and Android mobile apps.
Bloomberg Intelligence, the research arm of Bloomberg L.P., has more than 400 professionals who provide in-depth analysis on more than 2,000 companies and 135 industries while considering strategic, equity and credit perspectives. BI also provides interactive data from over 500 independent contributors. It is available exclusively for Bloomberg Terminal subscribers.
Aug 19, 2022
See the original post here:
Posted in Ai
Comments Off on Why Home Prices Must Fall; All In on Retail AI – Bloomberg
AI has yet to revolutionize health care – POLITICO
Posted: at 2:19 pm
By BEN LEONARD and RUTH READER
08/17/2022 10:00 AM EDT
Updated 08/17/2022 03:43 PM EDT
Investors have homed in on artificial intelligence as the next big thing in health care, with billions flowing into AI-enabled digital health startups in recent years.
But the technology has yet to transform medicine in the way many predicted, Ben and Ruth report.
"Companies come in promising the world and often don't deliver," Bob Wachter, head of the department of medicine at the University of California, San Francisco, told Future Pulse. "When I look for examples of true AI and machine learning that's really making a difference, they're pretty few and far between. It's pretty underwhelming."
Administrators say that algorithms from third-party firms often don't work seamlessly because every health system has its own tech setup, so hospitals are developing their own in-house AI. But that's moving slowly: research on job postings shows health care lagging every industry but construction.
The FDA is working on a model to regulate AI, but it's still nascent.
"There's an inherent mismatch between the pace of software development and government regulation of medical devices," said Kristin Zielinski Duggan, a partner at Hogan Lovells.
Questions remain about how regulators can rein in AI's shortcomings, including bias that threatens to exacerbate health inequities. For example, a 2019 study found that a common hospital algorithm directed white patients to programs providing more personalized care more frequently than it did Black patients.
And when providers build their own AI systems, they typically aren't vetted the way commercial software is, potentially allowing flaws to go unfixed longer than they otherwise would. Furthermore, with data often siloed between health systems, a lack of quality data to power algorithms is another barrier.
But AI has shown promise in a number of medical specialties, particularly radiology. NYU Langone Health has worked with Facebook's AI Research group (FAIR) to develop AI that allows an MRI to take 15 minutes instead of an hour.
"We've taken 80 percent of the human effort out of it," said John D. Halamka, president of Mayo Clinic Platform, which has an algorithm in a clinical trial that seeks to shorten the lengthy process of mapping out a surgical plan for removing complex tumors.
And in another success story, Louisiana's Ochsner Health developed AI that detects early signs of sepsis, a life-threatening infection.
Micky Tripathi, HHS national coordinator of health information technology, says AI could come to resemble sports broadcast systems that spit out a team's chance of winning at any given point in the game. In health care, an electronic health records system could give doctors a patient's risk profile and the steps they might need to take.
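The "risk profile" idea resembles the win-probability models sports broadcasts use: features go in, a probability comes out. Here is a purely hypothetical sketch using a logistic function; the feature names, coefficients, and inputs are all made up for illustration and are not any real clinical model.

```python
# Hypothetical risk-score sketch: a logistic function maps a weighted
# sum of (made-up) patient features to a probability between 0 and 1.
import math

def risk_score(weights, features, bias=0.0):
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1 / (1 + math.exp(-z))  # logistic (sigmoid) squashing

# Invented coefficients and inputs: age 70, one prior condition
p = risk_score([0.04, 0.8], [70, 1], bias=-4.0)
print(f"risk: {p:.2f}")
```

A real system would learn the weights from clinical data and require careful validation, but the shape of the output, a continuously updated probability, is the same.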
"This will be deemed one of the most important, if not the most important, transformative phases of medicine," said Eric Topol, founder of the Scripps Research Translational Institute. "A lot of heavy lifting is left to be done."
Welcome back to Future Pulse, where we explore the convergence of health care and technology. Tiny blood draws from infants, used to screen for disease, are now also being used in criminal investigations to indict their parents! What a world.
Share your news, tips and feedback with Ben at [emailprotected] or Ruth at [emailprotected] and follow us on Twitter for the latest @_BenLeonard_ and @RuthReader. Send tips securely through SecureDrop, Signal, Telegram or WhatsApp here.
ABORTION RULING FALLOUT More Democrats are backing companion legislation by Rep. Sara Jacobs (D-Calif.) and Sen. Mazie Hirono (D-Hawaii) that would make it harder for online firms to share personal data.
Jacobs is touting her bill, the My Body My Data Act, in response to Nebraska police's seizure of Facebook messages between a woman and her daughter that allegedly revealed a plan to induce an illegal abortion outside the state's 20-week limit.
The similar bills from Jacobs and Hirono would limit the data that firms can collect, protect personal health information not currently covered by the health privacy law HIPAA and give the FTC power to enforce the act alongside a private right of action.
Ninety-three representatives have signed on, along with 13 senators. Without GOP support, the bill cannot pass the Senate, but Jacobs encourages state lawmakers to follow her lead in their capitals.
BIDEN SIGNS HEALTH CARE, CLIMATE BILL President Joe Biden signed legislation Tuesday that will allow Medicare to negotiate drug prices in an attempt to cut costs.
Beginning in 2026, the legislation enables Medicare to negotiate with manufacturers on 10 pricey drugs, expanding as the decade goes on. Cancer, HIV and diabetes drug costs could be considered during negotiations, according to SVB Securities.
Biden's signature caps the largest victory for Democrats since taking control of both chambers of Congress and the White House in January 2021, POLITICO's Sarah Ferris and Jordain Carney report.
The drug negotiation portions came despite fierce opposition from the pharmaceutical industry, which argued the legislation would curb innovation.
Something to watch: Steve Ubl, the president of PhRMA, the drug industry lobbying group, said that members supporting the bill "won't get a free pass" and that one of PhRMA's member companies would nix 15 drugs if the bill became law.
BRUTAL CYBERSECURITY NUMBERS Close to 6 in 10 hospital and health system leaders said their organizations had at least one cyberattack in the past two years, according to new data from the cybersecurity company Cynerio and the cybersecurity research center Ponemon Institute.
The report also notes that the attacks, which cost an average of nearly $10 million, often recur: Among the victims, 82 percent had been hit by four or more attacks during the timeframe.
And the damage isn't just financial: about 1 in 4 cyberattacks resulted in increased mortality by leading to delayed care, according to the report.
UNITEDHEALTHCARE TELEHEALTH SURGE The nation's largest health insurer has seen patients gravitate toward telemedicine in a plan called Surest, which gives enrollees prices upfront.
Presented with the costs, enrollees choose telemedicine visits 10 times more often than people in typical plans and go to the emergency room or undergo surgery less frequently.
"When a consumer goes out there and looks for [care] ..., we're able to say, 'Hey, did you know that virtual visit offering is a zero co-pay?'" Alison Richards, CEO of Surest, told Future Pulse. "That's where we're seeing that increase in virtual care."
Many other major insurers offer virtual-first plans that push patients toward telemedicine before in-person care.
RACIAL DISPARITIES IN HOSPITAL PROFITS Revenue and profit per patient are lower at hospitals serving the highest percentage of Black Medicare patients.
U.S. hospital financing "effectively assigns a lower dollar value to the care of Black patients," a study published in the Journal of General Internal Medicine found.
Researchers from UCLA, Johns Hopkins and Harvard Medical School examined profits at 574 hospitals serving high rates of Black patients. Profits were on average $111 lower per patient day at the hospitals, and revenues were $283 lower.
"Equalizing reimbursement levels would have required $14 billion in additional payments to Black-serving hospitals in 2018, a mean of approximately $26 million per Black-serving hospital," the researchers found. "Health financing reforms should eliminate the underpayment of hospitals serving a large share of Black patients."
TELEREHABILITATION CAN IMPROVE BACK PAIN Good news for the quarter of Americans who suffer from acute lower back pain: It can be treated remotely.
A recent study found that a 12-week telehealth program from a New York company called Sword Health significantly reduced pain, depression and anxiety and improved productivity among those who completed it.
The program, overseen by a physical therapist, involves a combination of exercise, education and psychotherapy.
Of the roughly 338 people who completed the study, more than half reported a significant reduction in their disability and 61 percent experienced decreased pain.
The rub is that there is no rub: The program doesn't work for everyone. Some patients with acute lower back pain need more hands-on help, like massage or spinal manipulation, and digital rehab can't do that.
OTC HEARING AIDS GET FDA NOD The FDA has created regulatory guidance for hearing aids that manufacturers can sell over the counter, reports POLITICO's David Lim. Until now, patients have needed a prescription.
The change could bring new competition to the hearing aid market. On average, one prescription hearing aid costs approximately $2,300, though some run as much as $6,000, according to Consumer Affairs.
The rule will go into effect in 60 days. Manufacturers that want to sell existing products over the counter will have 240 days to comply with technical requirements and rules that aim to ensure the OTC hearing aids are easy to use without a doctor's help.
An eye implant engineered from proteins in pigskin restored sight in 14 blind people NBC News
Parents and clinicians say private equitys profit fixation is short-changing kids with autism STAT
Google Maps regularly misleads people searching for abortion clinics Bloomberg
CORRECTION: An earlier version of Future Pulse misstated where Sword Health is based. The company is based in New York.
AI has yet to revolutionize health care – POLITICO
Lu brings the power of AI to the hospital – The Source – Washington University in St. Louis – Washington University in St. Louis
Posted: at 2:19 pm
Chenyang Lu, the Fullgraf Professor of computer science and engineering in the Washington University in St. Louis McKelvey School of Engineering, is combining artificial intelligence with data to improve patient care and outcomes.
But he isn't only concerned with patients; he is also developing technology to help monitor doctors' health and well-being.
The Lu lab presented two papers at this year's ACM SIGKDD Conference on Knowledge Discovery and Data Mining, both of which outline novel methods his team has developed with collaborators from Washington University School of Medicine to improve health outcomes by bringing deep learning into clinical care.
For caregivers, Lu looked at burnout and how to predict it before it even arises. Activity logs of how clinicians interact with electronic health records provided researchers with massive amounts of data. They fed this data into a machine learning framework developed by Lu and his team, Hierarchical burnout Prediction based on Activity Logs (HiPAL), which was able to extrapolate meaningful patterns of workload and predict burnout in an unobtrusive and automated manner.
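The general idea, predicting burnout risk from workload signals buried in activity logs, can be sketched in a few lines. This is an illustrative toy, not the HiPAL model itself: the feature names and weights below are invented for the example, and the real framework is a trained deep hierarchical network rather than a hand-set logistic score.

```python
# Toy sketch: derive workload features from EHR activity-log events,
# then score burnout risk with a logistic function. All weights are
# illustrative placeholders, not values from the HiPAL paper.
import math
from datetime import datetime

def extract_features(events):
    """events: list of (iso_timestamp, action) tuples from an activity log."""
    hours = [datetime.fromisoformat(ts).hour for ts, _ in events]
    total = len(events)
    after_hours = sum(1 for h in hours if h < 7 or h >= 19)
    return {
        "total_actions": total,
        "after_hours_frac": after_hours / total if total else 0.0,
    }

def burnout_risk(features, weights=(0.002, 3.0), bias=-2.0):
    """Higher overall workload and more off-hours work -> higher risk score."""
    z = (bias
         + weights[0] * features["total_actions"]
         + weights[1] * features["after_hours_frac"])
    return 1.0 / (1.0 + math.exp(-z))  # squash to a 0..1 probability

log = [("2022-08-16T21:15:00", "chart_review"),
       ("2022-08-16T22:40:00", "order_entry"),
       ("2022-08-17T09:05:00", "inbox")]
feats = extract_features(log)
print(round(burnout_risk(feats), 3))
```

The appeal of the approach described in the article is that these logs already exist as a byproduct of clinical work, so no extra surveys or instrumentation are needed.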
Learn more about the team's work on the engineering website.
When it comes to patient care, physicians in the operating room collect substantial amounts of data about their patients, both during preoperative care and during surgery, data that Lu and collaborators thought they could put to good use with Lu's deep-learning approach: the Clinical Variational Autoencoder (cVAE).
Using novel algorithms designed by the Lu lab, they were able to predict who would be in surgery longer and who was more likely to develop delirium after surgery. The model transformed hundreds of clinical variables into just 10, which it then used to make accurate and interpretable predictions about outcomes, superior to current methods.
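The compression step at the heart of a variational autoencoder can be sketched as follows. This is a generic, untrained illustration of the VAE mechanism (an encoder producing a mean and log-variance, then sampling via the reparameterization trick), not the cVAE architecture from the paper; the dimensions and random weights are stand-ins.

```python
# Minimal sketch of VAE-style compression: map a long clinical feature
# vector down to a small latent vector. Weights are random placeholders;
# a real model (like cVAE) learns them from clinical data.
import math
import random

random.seed(0)
N_IN, N_LATENT = 200, 10  # hundreds of variables -> 10 latent factors

# Placeholder linear encoder weights for the latent mean and log-variance.
W_mu = [[random.gauss(0, 0.05) for _ in range(N_IN)] for _ in range(N_LATENT)]
W_lv = [[random.gauss(0, 0.05) for _ in range(N_IN)] for _ in range(N_LATENT)]

def encode(x):
    mu = [sum(w * v for w, v in zip(row, x)) for row in W_mu]
    logvar = [sum(w * v for w, v in zip(row, x)) for row in W_lv]
    return mu, logvar

def reparameterize(mu, logvar):
    # z = mu + sigma * eps; keeps sampling differentiable during training
    return [m + math.exp(0.5 * lv) * random.gauss(0, 1)
            for m, lv in zip(mu, logvar)]

x = [random.gauss(0, 1) for _ in range(N_IN)]  # one patient's feature vector
mu, logvar = encode(x)
z = reparameterize(mu, logvar)
print(len(x), "->", len(z))  # 200 -> 10
```

The interpretability claim in the article follows from this shape: clinicians can reason about 10 latent factors far more easily than about hundreds of raw variables.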
Learn more about the team's findings on the engineering website.
Lu and his interdisciplinary collaborators will continue to validate both models, hopeful that both will bring the power of AI into hospital settings.
The McKelvey School of Engineering at Washington University in St. Louis promotes independent inquiry and education with an emphasis on scientific excellence, innovation and collaboration without boundaries. McKelvey Engineering has top-ranked research and graduate programs across departments, particularly in biomedical engineering, environmental engineering and computing, and has one of the most selective undergraduate programs in the country. With 140 full-time faculty, 1,387 undergraduate students, 1,448 graduate students and 21,000 living alumni, we are working to solve some of society's greatest challenges; to prepare students to become leaders and innovate throughout their careers; and to be a catalyst of economic development for the St. Louis region and beyond.
Video: Sneak Preview of the AI Hardware Summit – HPCwire
Next month the AI Hardware Summit returns to the Bay Area, bringing AI technologists and end users together to share ideas and get up to speed on all the latest AI hardware developments. The event, which takes place September 13-15, 2022, at the Santa Clara Marriott in Santa Clara, Calif., will be co-located with the Edge AI Summit. Both events are organized by Kisaco Research, which launched the inaugural AI Hardware Summit in 2018.
One of the participants who has been there from the beginning is Karl Freund, founder and principal analyst at Cambrian AI Research. In an interview with HPCwire and EnterpriseAI, Freund provides a preview of what attendees can expect and offers advance highlights of his scheduled talk, "The Landscape of AI Acceleration: Annual Survey of the Last Year of Innovations from the World of Semiconductors, Systems and Design Tools."
"To me, if you're developing AI, a machine learning platform, you want to make sure you're using the best hardware you can get your hands on. This is the best place to go to find out what's coming and to find out what other people are doing. And learn from them, learn from what's working and perhaps what's not," Freund shared.
A number of other returning speakers will also be giving talks this year: Lip-Bu Tan, executive chairman at Cadence; Kunle Olukotun, co-founder and chief technologist at SambaNova Systems; Andrew Feldman, founder and CEO of Cerebras Systems; and many more.
On Wednesday, the luminary keynote will be presented by Meta engineers Alexis Black Bjorlin, vice president, infrastructure hardware, and Lin Qiao, senior director, engineering. Their session is titled "Co-Designing an Accelerated Compute Platform at Scale for AI Research and Infrastructure."
On Thursday, Rashid Attar, head of engineering, cloud/edge AI inference accelerators at Qualcomm, opens the day with his keynote, which will cover chip design and AI at the edge.
The event's closing keynote will be delivered by Sassine Ghazi, president and COO of Synopsys. In his presentation, "Enter the Era of Autonomous Design: Personalizing Chips for 1,000X More Powerful AI Compute," Ghazi will discuss strategies for using machine learning techniques to reduce design time and design risk.
In addition to participation from Meta, Qualcomm, Cadence and Synopsys, there will be talks from Alibaba, AMD, Atos, Graphcore, HuggingFace, MemVerge, Microsoft, SambaNova, Siemens, and many others.
A Meet & Greet takes place Tuesday from 4-7 p.m., during which DeepMind Chief Business Officer Colin Murdoch will be interviewed by Cade Metz of The New York Times. AI Hardware Summit and Edge AI Summit attendees are invited to reconnect with peers, make new acquaintances and discuss the state of machine learning in the datacenter and at the edge. A guest speaker announcement is forthcoming.
Freund maintains that AI will be pervasive. "It will be in every electronic product we buy and sell and use, from toys to automobiles," he said. "And the ability to make AI useful depends on really good engineers, really good AI modelers, data scientists, as well as the hardware. But all that will be for naught if you're not running it on efficient hardware, because you can't afford it. So this is all about making AI pervasive and affordable, and people overuse the term, but democratizing AI is still a goal. It's not where we are, but it's gonna take a lot of work to get there."
EnergX Announces World-First Use of AI in the Field of Employee Retention – AccessWire
SYDNEY, NEW SOUTH WALES / ACCESSWIRE / August 19, 2022 / With the cost of replacing employees at 50% of a junior salary and up to 250% of a senior salary according to the Society of HR Management, EnergX has announced a world-first AI coach with a focus on employee retention.
The scalable technology is designed to help businesses retain staff and to get a healthy ROI within the existing flow of work.
Named "Franky" to reflect its straightforward approach, the coach chooses from over 5.7 million personalized curriculum options to connect employees to the intrinsic drivers of engagement that maintain their motivation and improve the quality of their work.
Franky "lives" in existing platforms like Teams, Slack, Webex and Workplace, enabling organizations to drive behavior change without the need for extensive IT installation.
The technology has been endorsed by University of Sydney Professor of Psychology David Alais, who noted that results from the new AI have significantly helped with factors contributing to employee anxiety levels, stress management and overall employee happiness.
"As a psychologist and neuroscientist, I admire how EnergX has built on evidence from both fields to design a remarkably effective approach that overcomes burnout in short time frames."
With burned-out employees 2.6x more likely to leave, this approach suggests a strong link between improving employee health and a corresponding reduction in retention risk factors.
The AI curriculum was complemented with team learning experiences designed to create a sense of belonging and leadership development and coaching. Upon completion, participants were reassessed for retention risk and also against the World Health Organization Wellbeing Index.
At this point EnergX found that employees in very good or excellent health were 4.1x more likely to have zero retention risk factors when compared with employees in poor health.
This impact was made more significant by tripling the number of leaders and employees overall in very good to excellent health in just 100 days.
"At the end of the day achieving competitive advantage requires you to have the fittest team on the field," says EnergX CEO Sean Hall.
He added, "The first step to achieving this, and your best retention strategy right now, is to help your people overcome burnout. This doesn't happen overnight, and definitely not with generic masterclasses, but it can happen much quicker than you think."
About EnergX
The team at EnergX focuses on certain behaviors and processes that cause burnout, crush creativity and stifle inclusion to significantly improve belonging, engagement, and retention.
CONTACT: Sean Hall, Energx
Email: [emailprotected]
Website: energx.com.au
SOURCE: Energx
Otter.ai slashes free monthly transcription minutes to 300, but opens recorder bot to all – TechCrunch
There's good news and bad news for users of the Otter.ai transcription service. The good news is that Otter Assistant, a bot that can be configured to record meetings automatically, will now be available to everyone, regardless of whether they're a free or paid user.
The bad news, however, is that Otter.ai is scaling back on some features, like the number of monthly transcription minutes available for basic and pro accounts.
Otter.ai first launched its bot to automatically record Zoom meetings last May, though it later added support for Google Meet, Microsoft Teams and Cisco Webex. The assistant integrates with the user's calendar and automatically joins any scheduled meeting, records it and shares the transcription with everyone in the meeting. So even if someone can't attend a meeting, they can at least listen back to it and peruse the notes later.
The feature was originally only available to subscribers on the business plan, but starting September 27 it will be available to Basic (free) and Pro accounts too. However, those who pay for a Pro account will be able to ask the Otter Assistant to join two concurrent meetings.
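The concurrency rule above can be sketched as a small scheduling check. This is an illustrative example, not Otter's actual API or plan logic: the two-meeting Pro limit comes from the article, while the one-meeting Basic limit is an assumption for the sake of the demo.

```python
# Illustrative sketch (not Otter's API): an assistant scans calendar events
# and joins overlapping meetings up to a per-plan concurrency limit.
from datetime import datetime

PLAN_LIMITS = {"basic": 1, "pro": 2}  # "pro": 2 per the article; "basic": 1 assumed

def meetings_joined(events, plan):
    """events: list of (start_iso, end_iso); returns indices the bot joins."""
    limit = PLAN_LIMITS[plan]
    joined, active = [], []  # active holds end times of in-progress meetings
    for i, (start, end) in enumerate(sorted(events)):
        s, e = datetime.fromisoformat(start), datetime.fromisoformat(end)
        active = [t for t in active if t > s]  # drop meetings already over
        if len(active) < limit:
            active.append(e)
            joined.append(i)
    return joined

overlapping = [("2022-09-27T10:00", "2022-09-27T11:00"),
               ("2022-09-27T10:30", "2022-09-27T11:30"),
               ("2022-09-27T10:45", "2022-09-27T11:15")]
print(meetings_joined(overlapping, "basic"))  # -> [0]
print(meetings_joined(overlapping, "pro"))    # -> [0, 1]
```

With three overlapping meetings, a plan limited to one concurrent bot covers only the first, while the Pro limit of two covers the first two.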
What's more, the company's AI-generated meeting summary feature, which was introduced in March, will be available to both Basic and Pro account users too.
While users are gaining these features, the company is restricting things like transcription minutes per month for both Basic and Pro accounts. Here's a rundown of what's changing:
Otter Basic (free tier)
Otter Pro
But that's not all. Otter Pro's monthly subscribers will have to pay $16.99 per month instead of $12.99 starting September 27, though they will get to use their accounts with the current limits until November 30. The annual plan will still cost $99.99 ($8.33 per month), so if users subscribe to that plan before September 27, current feature limits will apply until next year.
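A quick check of the pricing math above, using only figures stated in the article:

```python
# Effective monthly rate of the annual plan, and yearly savings versus
# paying the new $16.99 monthly price for twelve months.
monthly_new = 16.99
annual = 99.99

print(round(annual / 12, 2))                 # effective monthly rate on annual plan
print(round(monthly_new * 12 - annual, 2))   # yearly savings vs. monthly billing
```

The annual plan works out to about $8.33 per month, roughly $104 cheaper per year than the new monthly rate, which explains the push toward yearly commitments.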
Clearly, the company, which raised $50 million in a Series B round last year, is coercing users to commit to the yearly plan.
New features offered and limitations of Otter Basic and Pro accounts. Image Credits: Otter
While more business-centric alternatives such as Dialpad have enjoyed massive success, with this latest move, it seems that Otter.ai is trying to appease the more casual user while also trying to boost its revenues by encouraging users to upgrade their plans to get the same features that theyre accustomed to.
Other alternatives such as TLDV, meanwhile, offer unlimited recording and transcription for free users, a fact that could help lure current Otter.ai stalwarts over to its platform.
How Microsoft’s AI convinced me to switch to Edge, and where the browser still falls short – GeekWire
Microsoft's Edge browser comes with a built-in Read aloud feature. (GeekWire Illustration)
I finally broke down and switched to Microsoft's Edge browser this week on my Windows PC, after many years of using Google Chrome.
No, it wasn't the incessant and annoying prompts in Windows 11 urging me to make Edge my default, although the nagging did keep the Microsoft browser top of mind.
For me, the tipping point was Edge's built-in Read aloud feature, and what sounds to my ears like major advances in some of Microsoft's synthesized voices, to the point that they're almost indistinguishable from human narrators.
I've long been a fan of text-to-speech for listening to articles and long emails.
I've used various apps and browser plugins over the years, some of them more seamless than others.
Microsoft Edge's Read aloud feature is controllable directly from a web page, after activating it from a menu accessible under the three dots in the upper right of the browser frame, or by right-clicking on the text.
As it reads, you can click on the actual text on the page to go to a particular section.
As with most automated text-to-speech technologies, sometimes you do have to put up with some minor annoyances, such as the voice reading fine print, menu items or disclaimers on a site. The ability to select the text to be read, or jump around by clicking on the text, helps to overcome that when listening via the browser.
Significant improvement in voice quality: But the grabber for me is the increasing authenticity of some of the Microsoft voices: the inflections, the pauses, the lack of the tell-tale robotic vocal fry. For example, here is Microsoft Michelle Online (Natural) reading this paragraph.
It's not perfect. The AI can still sound briefly robotic. Unusual names can also cause problems. Reading this story today about Geocaching by my colleague Kurt Schlosser, for example, Michelle pronounces it "Geo-coshing."
Still, the quality is much better than the drone voices that had my friends and colleagues making fun of my attempts to use text-to-speech tools in the past.
Microsoft Edge's features for importing data and passwords, standard in browsers these days, made the switch relatively easy. Edge's use of Chromium, the underlying open-source engine that powers Chrome, also helped ease the transition. Edge debuted in 2015, and the company officially retired Internet Explorer this year.
Mobile syncing benefits and bugs: The feature is also available in the Edge browser for smartphones, and it works well there. You can access read aloud by clicking on the three dots at the bottom of the Edge mobile app.
But this also shows where Microsoft is falling short. Edge's Collections feature for saving web pages is supposed to sync across PCs and mobile devices when logged in via Microsoft account. I've set up a "Read Later" collection where, theoretically, I can save articles in my PC browser for the AI to read aloud later in the Edge app on my Android phone.
The articles do save in my PC browser, but my Edge Collections won't sync to my phone. I've checked all the settings and gone through all the troubleshooting steps, without any luck. All of my other data is syncing. This appears to be a problem for many others, as well.
I'll keep trying to find a fix, and I'll update this story if I do. Even if it is a case of user error, it shouldn't be this hard.
Amazon Alexa and audiobooks: This is probably a subject for another post, but I'm also a fan of Amazon Alexa's feature for reading Kindle books on Echo devices. The implementation, in my experience, is less than ideal, frequently forgetting where you were when you stopped having Alexa read to you.
It's going to be fascinating to see the impact that the growing authenticity of synthetic voices has on Amazon's Audible audiobook subsidiary in the years ahead.
In the meantime, if anyone out there has any feedback, ideas, or different approaches for making the most of text-to-speech technology in your daily work, please let me know via Twitter, LinkedIn or my email address below.