The Prometheus League
Breaking News and Updates
- Abolition Of Work
- Ai
- Alt-right
- Alternative Medicine
- Antifa
- Artificial General Intelligence
- Artificial Intelligence
- Artificial Super Intelligence
- Ascension
- Astronomy
- Atheism
- Atheist
- Atlas Shrugged
- Automation
- Ayn Rand
- Bahamas
- Bankruptcy
- Basic Income Guarantee
- Big Tech
- Bitcoin
- Black Lives Matter
- Blackjack
- Boca Chica Texas
- Brexit
- Caribbean
- Casino
- Casino Affiliate
- Cbd Oil
- Censorship
- Cf
- Chess Engines
- Childfree
- Cloning
- Cloud Computing
- Conscious Evolution
- Corona Virus
- Cosmic Heaven
- Covid-19
- Cryonics
- Cryptocurrency
- Cyberpunk
- Darwinism
- Democrat
- Designer Babies
- DNA
- Donald Trump
- Eczema
- Elon Musk
- Entheogens
- Ethical Egoism
- Eugenic Concepts
- Eugenics
- Euthanasia
- Evolution
- Extropian
- Extropianism
- Extropy
- Fake News
- Federalism
- Federalist
- Fifth Amendment
- Financial Independence
- First Amendment
- Fiscal Freedom
- Food Supplements
- Fourth Amendment
- Free Speech
- Freedom
- Freedom of Speech
- Futurism
- Futurist
- Gambling
- Gene Medicine
- Genetic Engineering
- Genome
- Germ Warfare
- Golden Rule
- Government Oppression
- Hedonism
- High Seas
- History
- Hubble Telescope
- Human Genetic Engineering
- Human Genetics
- Human Immortality
- Human Longevity
- Illuminati
- Immortality
- Immortality Medicine
- Intentional Communities
- Jacinda Ardern
- Jitsi
- Jordan Peterson
- Las Vegas
- Liberal
- Libertarian
- Libertarianism
- Liberty
- Life Extension
- Macau
- Marie Byrd Land
- Mars
- Mars Colonization
- Mars Colony
- Memetics
- Micronations
- Mind Uploading
- Minerva Reefs
- Modern Satanism
- Moon Colonization
- Nanotech
- National Vanguard
- NATO
- Neo-eugenics
- Neurohacking
- Neurotechnology
- New Utopia
- New Zealand
- Nihilism
- Nootropics
- NSA
- Oceania
- Offshore
- Olympics
- Online Casino
- Online Gambling
- Pantheism
- Personal Empowerment
- Poker
- Political Correctness
- Politically Incorrect
- Polygamy
- Populism
- Post Human
- Post Humanism
- Posthuman
- Posthumanism
- Private Islands
- Progress
- Proud Boys
- Psoriasis
- Psychedelics
- Putin
- Quantum Computing
- Quantum Physics
- Rationalism
- Republican
- Resource Based Economy
- Robotics
- Rockall
- Ron Paul
- Roulette
- Russia
- Sealand
- Seasteading
- Second Amendment
- Seychelles
- Singularitarianism
- Singularity
- Socio-economic Collapse
- Space Exploration
- Space Station
- Space Travel
- Spacex
- Sports Betting
- Sportsbook
- Superintelligence
- Survivalism
- Talmud
- Technology
- Teilhard De Charden
- Terraforming Mars
- The Singularity
- Tms
- Tor Browser
- Trance
- Transhuman
- Transhuman News
- Transhumanism
- Transhumanist
- Transtopian
- Transtopianism
- Ukraine
- Uncategorized
- Vaping
- Victimless Crimes
- Virtual Reality
- Wage Slavery
- War On Drugs
- Waveland
- Ww3
- Yahoo
- Zeitgeist Movement
- Prometheism
- Forbidden Fruit
- The Evolutionary Perspective
Daily Archives: July 3, 2022
Sentient AI? Do we really care? – The Hill
Posted: July 3, 2022 at 3:56 am
Artificial Intelligence (AI) headlined the news recently when a Google engineer named Blake Lemoine became convinced that a software program was sentient. The program, Language Models for Dialog Applications (LaMDA), is a chatbot designed to mimic human conversation. So that's what it did.
In a Medium post, Lemoine declared that LaMDA had advocated for its rights "as a person" and wanted to be acknowledged as an employee of Google rather than as property of Google. This development, as they now say, blew up the internet. Philosophers, ethicists, and theologians weighed in.
For engineers and technologists, however, it's just another illustration of the overly broad and frustratingly mushy definition of artificial intelligence that has confused the public conversation since Mary Shelley published Frankenstein. As always, defining terms is a good place to start. Sentience is the ability to feel and experience sensation. It's a word invented specifically to distinguish from the ability to think. Therefore, sentience and intelligence are not synonyms. Google may very well have created an intelligence. In fact, Google and numerous other companies, including my employer, SAIC, already have. But absent the biological prerequisite of a central nervous system, they are not sentient, even if they pass Alan Turing's famous Imitation Game test of seeming human.
But more to the point, for engineering applications, the question of sentience is not immediately relevant. The real question is one of application. What can AI (the practice of infusing machines with the capacity to perform analysis and make evidence-based recommendations previously believed to be the exclusive purview of humans) actually do to enhance business performance, to drive better mission outcomes, to improve the world? Waves of data fog our view; what can the clarifying lens of AI help us see?
Hindsight: If, as George Santayana said, those who cannot remember the past are condemned to repeat it, then lessons derived from historical data inoculate us from future mistakes. By crunching mountains of data from myriad inputs, AI can leverage real world, real-time experience to allow leaders to confidently make plans and install course corrections. AI can provide dashboard views without the hassle of Oracle queries, data calls, and spreadsheets to underscore comparisons quickly and without knowledge gaps.
Foresight: When will a hurricane make landfall? Where will a satellite in decaying orbit re-enter the atmosphere? How often will an offshore wind turbine require maintenance? AI is already at work providing predictive answers to grand engineering questions formerly addressed by a ghastly gaggle of guesswork.
Insight: AI is not a replacement for human judgment, but it can and does recommend action by computing conditional probability of multiple scenarios. Result: business decisions statistically more likely to succeed. This is especially useful in crisis situations such as a global epidemic when stakes are high, precedents are few, and decisions are quick.
Oversight: Analog methods always have struggled with organizing complex and sensitive data from many sources at various clearance levels. Because interoperability and oversight are essential in defense and intelligence agencies, where missions require the ability to co-locate large amounts of both confidential data and open-source intelligence, AI is certain to play a growing role in battlespace decisions.
Rightsight: Even the best data analyst can't connect all the dots simultaneously. Yet missions often depend on surfacing granular data immediately. Imagine a soldier on the battlefield armed with essential intel in an instant. Deep machine learning fueled by AI provides amplified intelligence so users can act quickly and accurately, bringing each of the sights together to operate as one.
AI algorithms can work harmoniously to achieve efficiency and modernize legacy systems. This human-machine partnership already is underway and is to be embraced, not feared. When machines drive digital transformation and empower human innovation, everyone wins.
So, leave the question of sentience to the poets. Those of us focused on the science of the mission rather than science fiction will leverage the burgeoning power of AI to simply get the job done.
Jay Meil is Data Science Director for Artificial Intelligence at the defense technology firm SAIC.
Posted in Ai
Harvard Developed AI Identifies the Shortest Path to Human Happiness – SciTechDaily
Posted: at 3:56 am
The researchers created a digital model of psychology aimed at improving mental health. The system offers superior personalization and identifies the shortest path toward a cluster of mental stability for any individual.
Deep Longevity has published a paper in Aging-US outlining a machine learning approach to human psychology in collaboration with Nancy Etcoff, Ph.D., Harvard Medical School, an authority on happiness and beauty.
The authors created two digital models of human psychology based on data from the Midlife in the United States study.
The first model is an ensemble of deep neural networks that predicts respondents' chronological age and psychological well-being in 10 years using information from a psychological survey. This model depicts the trajectories of the human mind as it ages. It demonstrates that the capacity to form meaningful connections, as well as mental autonomy and environmental mastery, develops with age. It also suggests that the emphasis on personal progress constantly declines, but the sense of having a purpose in life only fades after ages 40-50. These results add to the growing body of knowledge on socioemotional selectivity and hedonic adaptation in the context of adult personality development.
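The paper's actual architecture isn't reproduced in this summary, but the general idea, an ensemble of neural-network regressors whose averaged output predicts age from survey answers, can be sketched with scikit-learn on synthetic data (the survey items, targets, and network sizes below are all illustrative assumptions, not the authors' model):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for survey data: 200 respondents, 30 psychometric items.
X = rng.normal(size=(200, 30))
# Hypothetical target: chronological age as a noisy function of a few items.
y = 40 + 10 * X[:, 0] - 5 * X[:, 1] + rng.normal(scale=2, size=200)

# An ensemble of small neural nets; averaging their outputs reduces variance.
ensemble = [
    MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=seed).fit(X, y)
    for seed in range(5)
]

def predict_age(models, X_new):
    """Average the predictions of every network in the ensemble."""
    return np.mean([m.predict(X_new) for m in models], axis=0)

print(predict_age(ensemble, X[:3]))
```

The same ensemble-averaging pattern extends to the second output (well-being in 10 years) by training a parallel set of regressors on that target.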
The article describes an AI-based recommendation engine that can estimate one's psychological age and future well-being based on a constructed psychological survey. The AI uses the information from a respondent to place them on a 2D map of all possible psychological profiles and derive ways to improve their long-term well-being. This model of human psychology can be used in self-help digital applications and during therapist sessions. Credit: Michelle Keller
The second model is a self-organizing map that was created to serve as the foundation for a recommendation engine for mental health applications. This unsupervised learning algorithm splits all respondents into clusters depending on their likelihood of developing depression and determines the shortest path toward a cluster of mental stability for any individual. Alex Zhavoronkov, the chief longevity officer of Deep Longevity, elaborates: "Existing mental health applications offer generic advice that applies to everyone yet fits no one. We have built a system that is scientifically sound and offers superior personalization."
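A self-organizing map places similar profiles in nearby grid cells, so the "shortest path" to a stable cluster is literally a walk across the map. A minimal sketch, with a toy SOM trained from scratch and a breadth-first search standing in for the path-finding (the data, grid size, and choice of target cell are illustrative, not taken from the paper):

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(1)

# Toy data: 300 "psychological profiles" with 5 features each.
data = rng.normal(size=(300, 5))

# A minimal 6x6 self-organizing map.
grid_h, grid_w, dim = 6, 6, 5
weights = rng.normal(size=(grid_h, grid_w, dim))

def bmu(x):
    """Best-matching unit: the grid cell whose weight vector is closest to x."""
    d = np.linalg.norm(weights - x, axis=2)
    return np.unravel_index(np.argmin(d), d.shape)

# Classic online SOM training: pull the winner and its neighbors toward each sample.
for t in range(2000):
    x = data[rng.integers(len(data))]
    bi, bj = bmu(x)
    lr = 0.5 * (1 - t / 2000)            # decaying learning rate
    sigma = 2.0 * (1 - t / 2000) + 0.5   # decaying neighborhood radius
    for i in range(grid_h):
        for j in range(grid_w):
            h = np.exp(-((i - bi) ** 2 + (j - bj) ** 2) / (2 * sigma ** 2))
            weights[i, j] += lr * h * (x - weights[i, j])

def shortest_path(start, goal):
    """BFS over the 4-connected SOM grid: the chain of adjacent cells
    from a respondent's current cell to the target 'stable' cell."""
    q, seen = deque([[start]]), {start}
    while q:
        path = q.popleft()
        if path[-1] == goal:
            return path
        i, j = path[-1]
        for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if 0 <= ni < grid_h and 0 <= nj < grid_w and (ni, nj) not in seen:
                seen.add((ni, nj))
                q.append(path + [(ni, nj)])

start = bmu(data[0])
path = shortest_path(start, (0, 0))  # (0, 0) stands in for the "stable" cluster
print(path)
```

Each step of the returned path corresponds to a small, concrete change in profile, which is what makes the map usable as a recommendation engine.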
To demonstrate the system's potential, Deep Longevity has released FuturSelf, a free online application that lets users take the psychological test described in the original publication. At the end of the assessment, users receive a report with insights aimed at improving their long-term mental well-being and can enroll in a guidance program that provides them with a steady flow of AI-chosen recommendations. Data obtained on FuturSelf will be used to further develop Deep Longevity's digital approach to mental health.
FuturSelf is a free online mental health service that offers guidance based on a psychological profile assessment by AI. The core of FuturSelf is represented by a self-organizing map that classifies respondents and identifies the most suitable ways to improve one's well-being. Credit: Fedor Galkin
Vadim Gladyshev, a leading biogerontology expert and professor at Harvard Medical School, comments on the potential of FuturSelf:
"This study offers an interesting perspective on psychological age, future well-being, and risk of depression, and demonstrates a novel application of machine learning approaches to the issues of psychological health. It also broadens how we view aging and transitions through life stages and emotional states."
The authors plan to continue studying human psychology in the context of aging and long-term well-being. They are working on a follow-up study on the effect of happiness on physiological measures of aging.
The study was funded by the National Institute on Aging.
Reference: "Optimizing future well-being with artificial intelligence: self-organizing maps (SOMs) for the identification of islands of emotional stability" by Fedor Galkin, Kirill Kochetov, Michelle Keller, Alex Zhavoronkov and Nancy Etcoff, 20 June 2022, Aging-US. DOI: 10.18632/aging.204061
See the original post here:
Harvard Developed AI Identifies the Shortest Path to Human Happiness - SciTechDaily
AI Algorithm Predicts Future Crimes One Week in Advance With 90% Accuracy – SciTechDaily
Posted: at 3:56 am
A new computer model uses publicly available data to predict crime accurately in eight cities in the U.S., while revealing increased police response in wealthy neighborhoods at the expense of less advantaged areas.
Advances in artificial intelligence and machine learning have sparked interest from governments that would like to use these tools for predictive policing to deter crime. However, early efforts at crime prediction have been controversial, because they do not account for systemic biases in police enforcement and its complex relationship with crime and society.
University of Chicago data and social scientists have developed a new algorithm that forecasts crime by learning patterns in time and geographic locations from public data on violent and property crimes. It has demonstrated success at predicting future crimes one week in advance with approximately 90% accuracy.
In a separate model, the team of researchers also studied the police response to crime by analyzing the number of arrests following incidents and comparing those rates among neighborhoods with different socioeconomic status. They saw that crime in wealthier areas resulted in more arrests, while arrests in disadvantaged neighborhoods dropped; crime in poor neighborhoods didn't lead to more arrests, suggesting bias in police response and enforcement.
"What we're seeing is that when you stress the system, it requires more resources to arrest more people in response to crime in a wealthy area and draws police resources away from lower socioeconomic status areas," said Ishanu Chattopadhyay, PhD, Assistant Professor of Medicine at UChicago and senior author of the new study, which was published on June 30, 2022, in the journal Nature Human Behaviour.
The new tool was tested and validated using historical data from the City of Chicago around two broad categories of reported events: violent crimes (homicides, assaults, and batteries) and property crimes (burglaries, thefts, and motor vehicle thefts). These data were used because they were most likely to be reported to police in urban areas where there is historical distrust and lack of cooperation with law enforcement. Such crimes are also less prone to enforcement bias, as is the case with drug crimes, traffic stops, and other misdemeanor infractions.
Previous efforts at crime prediction often use an epidemic or seismic approach, where crime is depicted as emerging in hotspots that spread to surrounding areas. These tools miss out on the complex social environment of cities, however, and don't consider the relationship between crime and the effects of police enforcement.
"Spatial models ignore the natural topology of the city," said sociologist and co-author James Evans, PhD, Max Palevsky Professor at UChicago and the Santa Fe Institute. "Transportation networks respect streets, walkways, train and bus lines. Communication networks respect areas of similar socio-economic background. Our model enables discovery of these connections."
The new model isolates crime by looking at the time and spatial coordinates of discrete events and detecting patterns to predict future events. It divides the city into spatial tiles roughly 1,000 feet across and predicts crime within these areas instead of relying on traditional neighborhood or political boundaries, which are also subject to bias. The model performed just as well with data from seven other U.S. cities: Atlanta, Austin, Detroit, Los Angeles, Philadelphia, Portland, and San Francisco.
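Tiling a city this way is simple to sketch: project each event to planar coordinates, bucket it into a fixed-size tile, and count events per tile per day, producing the discrete event series a predictive model would train on. The coordinates and events below are hypothetical, not the study's actual preprocessing:

```python
from collections import Counter

TILE_FT = 1000  # tile edge length in feet, roughly matching the paper's description

def tile_of(x_ft, y_ft):
    """Map a projected (x, y) coordinate in feet to a discrete tile index."""
    return (int(x_ft // TILE_FT), int(y_ft // TILE_FT))

# Hypothetical events: (x_ft, y_ft, day) triples from a reported-crime log.
events = [(150, 2300, 0), (980, 2900, 0), (120, 2140, 1), (5100, 400, 1)]

# Per-tile, per-day counts.
counts = Counter((tile_of(x, y), day) for x, y, day in events)
print(counts[((0, 2), 0)])  # events in tile (0, 2) on day 0 → 2
```

Because tile boundaries are fixed and content-free, they avoid the biases baked into neighborhood or political boundaries that the researchers flag.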
"We demonstrate the importance of discovering city-specific patterns for the prediction of reported crime, which generates a fresh view on neighborhoods in the city, allows us to ask novel questions, and lets us evaluate police action in new ways," Evans said.
Chattopadhyay is careful to note that the tool's accuracy does not mean that it should be used to direct law enforcement, with police departments using it to swarm neighborhoods proactively to prevent crime. Instead, it should be added to a toolbox of urban policies and policing strategies to address crime.
"We created a digital twin of urban environments. If you feed it data from what happened in the past, it will tell you what's going to happen in the future. It's not magical, there are limitations, but we validated it and it works really well," Chattopadhyay said. "Now you can use this as a simulation tool to see what happens if crime goes up in one area of the city, or there is increased enforcement in another area. If you apply all these different variables, you can see how the system evolves in response."
Reference: "Event-level Prediction of Urban Crime Reveals Signature of Enforcement Bias in U.S. Cities" by Victor Rotaru, Yi Huang, Timmy Li, James Evans and Ishanu Chattopadhyay, 30 June 2022, Nature Human Behaviour. DOI: 10.1038/s41562-022-01372-0
The study was supported by the Defense Advanced Research Projects Agency and the Neubauer Collegium for Culture and Society. Additional authors include Victor Rotaru, Yi Huang, and Timmy Li from the University of Chicago.
View original post here:
AI Algorithm Predicts Future Crimes One Week in Advance With 90% Accuracy - SciTechDaily
The AI ‘gold rush’ in Washington – POLITICO
Posted: at 3:56 am
With help from Ben Schreckinger.
A Skydio R1 drone | AP Photo/Jeff Chiu
AI's little guys are getting into the Washington influence game.
Tech giants and defense contractors have long dominated AI lobbying, seeking both money and favorable rules. And while the largest companies still dominate the debate, pending legislation in Congress aimed at getting ahead of China on innovation, along with proposed bills on data privacy, have caused a spike in lobbying by smaller AI players.
A number of companies focused on robotics, drones and self-driving cars are all setting up their own Washington influence machines, positioning them to shape the future of AI policy to their liking.
A lot of it is spurred by one major piece of legislation: the Bipartisan Innovation Act, commonly referred to as USICA, an acronym for its previous title. Its goal is to out-innovate China.
One tech lobbyist, granted anonymity to speak candidly, called AI lobbying on USICA a "gold rush." If the bill passes as currently written, it will bring about $50 billion in extra research spending over the next five years. Senate Majority Leader Chuck Schumer has touted USICA, previously known as the United States Innovation and Competition Act, as the best response to China's technological dominance.
Robotics company iRobot registered The Vogel Group, a major D.C. firm led by former GOP leadership aide Alex Vogel, to lobby for the bill. Argo AI, an autonomous driving technology company, deployed its in-house lobbyists, including the former chief of staff of Rep. Debbie Dingell (D-Mich.) and a former legislative assistant to Sen. Lindsey Graham (R-S.C.), to lobby on supply chain issues within USICA.
Ryan Hagemann, co-director of the IBM Policy Lab, said most of the attention in the AI space is on USICA legislation right now.
But the expansion in lobbying goes way beyond USICA, and it's about more than chasing after government grants.
The most recent versions of the American Data Privacy and Protection Act and the Algorithmic Accountability Act propose government-mandated impact assessments for any companies that use algorithms. That means companies could suddenly have to turn over audits of their technology to regulators, a lengthy process that some companies argue should fall only to firms that produce high-risk AI, such as facial recognition technology used by police to catch criminals, rather than low-risk AI like chatbots. IBM, for instance, argues that it should not have to perform the same kinds of impact assessments on its general-purpose AI systems as do companies that train AI on their own proprietary data sets.
"It's not a question of who ought to perform the impact assessments, but when the impact assessment should have to occur," Hagemann said.
Merve Hickok, a senior member of the Center for AI and Digital Policy, a non-profit digital rights advocacy group, says there's a lot at stake: only a handful of companies would have to submit algorithmic audits if their lobbying is effective.
"You see a lot of companies, not only big tech but some industry groups as well, pushing and lobbying against these obligations," Hickok said, pointing to efforts underway in Europe.
The definition of what constitutes AI is fuzzy in the first place. But a lot of the companies that use AI to operate their technology such as drone companies are buckling in for a bumpy ride in Washington. Drone company Skydio, seeking more funding for a Federal Aviation Administration training initiative and drone acquisitions by the Defense Department, almost doubled its lobbying spending from $160,000 in 2020 to $304,000 in 2021. Shield AI, which creates artificial intelligence that controls drones for military operations, went from spending $65,000 on lobbying in 2020 to spending more than $1.5 million in 2021, a number that it is on track to exceed this year. Skydio declined to comment and Shield AI did not respond to a request for comment.
Meanwhile, facial recognition companies like Clearview AI are fighting bills that would pause the use of the technology, such as the Facial Recognition and Biometric Technology Moratorium Act. Clearview AI, which has faced enormous scrutiny from lawmakers over its controversial facial recognition technology, spent $120,000 on lobbying in 2021 after registering lobbyists for the first time in May 2021.
Hickok pointed out that U.S. lobbying around AI is still dominated by big companies like Google and Amazon, even with the proliferation of smaller companies registering to lobby. She said that because the U.S. has not passed significant AI regulations, it has become a testbed while the corporations enjoy the benefits.
The financial crisis in crypto markets continues today, with a court in the British Virgin Islands ordering the liquidation of crypto hedge fund Three Arrows Capital.
POLITICO's Sam Sutton reports that two executives from the politically connected consulting firm Teneo will oversee that process.
For investors curious about how and why the fund got to this point, and worried about what could further destabilize crypto markets, a new report today from on-chain analytics firm Nansen traces some of the interconnected moves. "Dominoes are falling," is how Nansen researcher Andrew Thurman summarized it in an email.
The report highlights the role of staked Ether, a derivative of Ether, the second-largest cryptocurrency, issued by Lido Finance. (Staked Ether is not the currency itself, but rather a token that can be redeemed for Ether after the Ethereum network completes a complicated upgrade process.) When times were booming, the market treated staked Ether like it was as good as Ether. But last month, as the algorithmic stablecoin TerraLuna melted down, staked Ether began to trade at a discount to the real thing.
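The discount described here is simple arithmetic: one minus the ratio of the derivative's market price to the price of the asset it can eventually be redeemed for. The prices below are purely illustrative, not actual market data:

```python
def discount(peg_price, market_price):
    """Fractional discount of a derivative trading below its underlying."""
    return 1 - market_price / peg_price

# Illustrative numbers only: if ETH trades at $1,200 and staked Ether at $1,140,
# the derivative trades at a 5% discount to the asset backing it.
print(f"{discount(1200, 1140):.1%}")  # → 5.0%
```

A persistent discount matters because anyone forced to sell the derivative before redemption, as Three Arrows was, locks in that gap as a realized loss.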
Three Arrows had invested in both Luna and staked Ether; after the TerraLuna meltdown, it sold its staked Ether at a loss, said Thurman, and ultimately couldn't recover.
Despite the fears of further contagion provoked by Three Arrows' downfall, the market may now get a respite. Thurman said the on-chain positions of crypto lender Celsius, which recently raised concerns by suspending withdrawals, have improved, and that emergency measures taken by Lido appear to have calmed investors. - Ben Schreckinger
A new GAO report on government use of facial recognition tech found that a slew of federal and state agencies use the technology. The GAO found that most of these agencies did not assess the privacy risks associated with facial recognition. Fourteen agencies, ranging from NASA to the Department of Justice, use it to unlock agency-issued smartphones. It's a sign that facial recognition has become so quotidian that it's taken for granted, leading agencies to use it without fully parsing its implications. - Konstantin Kakaes
- Popular apps for tracking pregnancy and ovulation reserve the right to turn user data over to law enforcement, a Forbes analysis found.
- Are cutting-edge technologies just too hard to scale?
- A finance professor offers a world-historical way to think about blockchains.
- It's possible that AI can manufacture ideas
- What does human-centered AI actually look like?
See original here:
How to Make Teachers Informed Consumers of Artificial Intelligence – Market Brief – EdWeek
Posted: at 3:56 am
New Orleans: Artificial intelligence's place in schools may be poised to grow, but school districts and companies have a long way to go before teachers buy into the concept.
At a session on the future of AI in school districts, held at the ISTE conference this week, a panel of leaders discussed its potential to shape classroom experiences and the many unresolved questions associated with the technology.
The mention of AI can intimidate teachers, as it's so often associated with complex code and sophisticated robotics. But AI is already a part of daily life, in the way our phones recommend content to us or the ways that our smart home technology responds to our requests.
When AI is made relatable, that's when teachers buy into it, opening doors for successful implementation in the classroom, panelists said.
"AI sounds so exotic right now, but it wasn't that long ago that even computer science in classrooms was blowing our minds," said Joseph South, chief learning officer for ISTE. South is a former director of the office of educational technology at the U.S. Department of Education.
The first step in getting educators comfortable with AI is to provide them the support to understand it, said Nancye Blair Black, ISTE's AI Explorations project lead, who moderated the panel. That kind of support needs to come from many sources, from federal officials down to the state level and individual districts.
"We need to be talking about, 'What is AI?' and it needs to be explained," she said. "A lot of people think AI is magic, but we just need to understand these tools and their limitations and do more research to get people on board."
With the use of machine learning, AI technologies can adapt to individual students' needs in real time, tracking their progress and providing immediate feedback and data to teachers as well.
In instances where a student may be rushing through answering questions, AI technology can pick up on that and flag the student to slow down, the speakers said. This can provide a level of individual attention that can't be achieved by a teacher who's expected to be looking over every student's shoulder simultaneously.
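The "rushing" flag can be as simple as a rule over response times; real products are surely more sophisticated, but a toy version with made-up thresholds shows the shape of the idea:

```python
def flag_rushing(response_times_s, threshold_s=3.0, min_fraction=0.5):
    """Flag a student if most answers arrive faster than a plausible
    reading-and-thinking time. Thresholds here are illustrative, not
    taken from any real product."""
    fast = sum(t < threshold_s for t in response_times_s)
    return fast / len(response_times_s) >= min_fraction

print(flag_rushing([1.2, 0.9, 2.1, 8.0]))  # 3 of 4 answers under 3 s → True
```

A production system would calibrate such thresholds per question and per student rather than hard-coding them.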
Others see reasons to be wary of AIs potential impact on teaching and learning. Many ed-tech advocates and academic researchers have raised serious concerns that the technology could have a negative impact on students.
One longstanding worry is that the data AI systems rely on can be inaccurate or even discriminatory, and that the algorithms put into AI programs make faulty assumptions about students and their educational interests and potential.
For instance, if AI is used to influence decisions about which lessons or academic programs students have access to, it could end up scuttling students opportunities, rather than enhancing them.
Nneka McGee, executive director for learning and innovation for the South San Antonio ISD, said during the ISTE panel that much more research still has to be done on AI regarding opportunity, data, and ethics.
"Some districts that are more affluent will have more funding, so how do we provide opportunities for all students?" she said.
"We also need to look into the amount of data that is needed and collected for AI to run effectively. Your school will probably need a data-sharing agreement with the companies you work with."
A lot of research needs to be done on AI's data security and accessibility, as well as how to best integrate such technologies across the curriculum, not just in STEM-focused courses.
It's important to start getting educators familiar with AI and how it works, panelists said, because when used effectively, AI can increase student engagement in the classroom and give teachers more time to customize lessons to individual student needs.
As AI picks up momentum within the education sphere, the speakers said that teachers need to start by learning the fundamentals of the technology and how it can be used in their classrooms. But a big share of the responsibility also falls on company officials developing new AI products, Black said.
When asked about advice for ed-tech organizations that are looking to expand into AI capabilities, Black emphasized the need for user-friendliness and an interface that can be seamlessly assimilated into existing curriculum and standards.
"Hand [teachers] something they can use right away, not just another thing to pile on what they already have," she said.
McGee, of the South San Antonio ISD, urges companies to include teachers in every part of the process when it comes to pioneering AI.
"Involve teachers because they're on the front lines; they're the first ones who see our students," she said. "It doesn't matter how much we do out here. If the teacher doesn't believe in what you're bringing to the table, it will not be successful."
Photo Credit: International Society for Technology in Education
See also:
Go here to see the original:
How to Make Teachers Informed Consumers of Artificial Intelligence - Market Brief - EdWeek
We’re Training AI Twice as Fast This Year as Last – IEEE Spectrum
Posted: at 3:56 am
So how much of the material that goes into the typical bin avoids a trip to landfill? For countries that do curbside recycling, the number, called the recovery rate, appears to average around 70 to 90 percent, though widespread data isn't available. That doesn't seem bad. But in some municipalities, it can go as low as 40 percent.
What's worse, only a small quantity of all recyclables makes it into the bins: just 32 percent in the United States and 10 to 15 percent globally. That's a lot of material made from finite resources that needlessly goes to waste.
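The two figures compound: only material that both reaches a bin and survives sorting is actually recycled. Using the article's numbers for the United States:

```python
# Figures from the article: share of U.S. recyclables that make it into bins,
# and a mid-range recovery rate for curbside sorting programs.
capture_rate = 0.32
recovery_rate = 0.80

# The effective rate is the product of the two.
effective = capture_rate * recovery_rate
print(f"{effective:.0%} of U.S. recyclables actually recycled")  # → 26%
```

In other words, even a perfect sorting facility cannot fix a low capture rate, which is why both ends of the pipeline matter.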
We have to do better than that. Right now, the recycling industry is facing a financial crisis, thanks to falling prices for sorted recyclables as well as a policy enacted by China in 2018 that restricts the import of many materials destined for recycling and shuts out most recyclables originating in the United States.
There is a way to do better. Using computer vision, machine learning, and robots to identify and sort recycled material, we can improve the accuracy of automatic sorting machines, reduce the need for human intervention, and boost overall recovery rates.
My company, Amp Robotics, based in Louisville, Colo., is developing hardware and software that relies on image analysis to sort recyclables with far higher accuracy and recovery rates than are typical for conventional systems. Other companies are similarly working to apply AI and robotics to recycling, including Bulk Handling Systems, Machinex, and Tomra. To date, the technology has been installed in hundreds of sorting facilities around the world. Expanding its use will prevent waste and help the environment by keeping recyclables out of landfills and making them easier to reprocess and reuse.
Before I explain how AI will improve recycling, let's look at how recycled materials were sorted in the past and how they're being sorted in most parts of the world today.
When recycling began in the 1960s, the task of sorting fell to the consumer: newspapers in one bundle, cardboard in another, and glass and cans in their own separate bins. That turned out to be too much of a hassle for many people and limited the amount of recyclable materials gathered.
In the 1970s, many cities took away the multiple bins and replaced them with a single container, with sorting happening downstream. This "single stream" recycling boosted participation, and it is now the dominant form of recycling in developed countries.
Moving the task of sorting further downstream led to the building of sorting facilities. To do the actual sorting, recycling entrepreneurs adapted equipment from the mining and agriculture industries, filling in with human labor as necessary. These sorting systems had no computer intelligence, relying instead on the physical properties of materials to separate them. Glass, for example, can be broken into tiny pieces and then sifted and collected. Cardboard is rigid and light; it can glide over a series of mechanical cam-like disks, while other, denser materials fall in between the disks. Ferrous metals can be magnetically separated from other materials; magnetism can also be induced in nonferrous items, like aluminum, using a large eddy current.
By the 1990s, hyperspectral imaging, developed by NASA and first launched in a satellite in 1972, was becoming commercially viable and began to show up in the recycling world. Unlike human eyes, which mostly see in combinations of red, green, and blue, hyperspectral sensors divide images into many more spectral bands. The technology's ability to distinguish between different types of plastics changed the game for recyclers, bringing not only optical sensing but computer intelligence into the process. Programmable optical sorters were also developed to separate paper products, distinguishing, say, newspaper from junk mail.
So today, much of the sorting is automated. These systems generally sort to 80 to 95 percent purity; that is, 5 to 20 percent of the output shouldn't be there. For the output to be profitable, however, the purity must be higher than 95 percent; below this threshold, the value drops, and often it's worth nothing. So humans manually clean up each of the streams, picking out stray objects before the material is compressed and baled for shipping.
Despite all the automated and manual sorting, about 10 to 30 percent of the material that enters the facility ultimately ends up in a landfill. In most cases, more than half of that material is recyclable and worth money but was simply missed.
We've pushed the current systems as far as they can go. Only AI can do better.
Getting AI into the recycling business means combining pick-and-place robots with accurate real-time object detection. Pick-and-place robots combined with computer vision systems are used in manufacturing to grab particular objects, but they generally are just looking repeatedly for a single item, or for a few items of known shapes and under controlled lighting conditions. Recycling, though, involves infinite variability in the kinds, shapes, and orientations of the objects traveling down the conveyor belt, requiring nearly instantaneous identification along with the quick dispatch of a new trajectory to the robot arm.
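The article doesn't spell out the dispatch math, but the timing problem it describes can be sketched in a few lines: a detected object keeps moving on the belt while the arm travels, so the planned pick point must be projected downstream. All names, fields, and the confidence threshold here are hypothetical, not AMP's actual interface.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """One object reported by the vision model (illustrative fields only)."""
    material: str      # predicted class, e.g. "HDPE" or "aluminum"
    x_m: float         # position along the belt, in metres
    y_m: float         # position across the belt, in metres
    confidence: float  # classifier confidence, 0.0 to 1.0

def plan_pick(det, belt_speed_mps, arm_travel_s, min_confidence=0.8):
    """Project where a detected object will be when the arm reaches it.

    Returns (x, y) pick coordinates, or None if the detection is too
    uncertain to act on. The object keeps moving during arm travel, so
    the pick point is offset downstream by belt speed times travel time.
    """
    if det.confidence < min_confidence:
        return None
    return (det.x_m + belt_speed_mps * arm_travel_s, det.y_m)
```

For a bottle seen 1.0 m along a belt moving at 0.5 m/s, with a 0.2 s arm travel time, the planned pick lands 0.1 m downstream of where the camera saw it; low-confidence detections are skipped rather than mis-picked.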
AI-based systems guide robotic arms to grab materials from a stream of mixed recyclables and place them in the correct bins. Here, a tandem robot system operates at a Waste Connections recycling facility [top], and a single robot arm [bottom] recovers a piece of corrugated cardboard. The United States does a pretty good job when it comes to cardboard: In 2021, 91.4 percent of discarded cardboard was recycled, according to the American Forest and Paper Association. AMP Robotics
My company first began using AI in 2016 to extract empty cartons from other recyclables at a facility in Colorado; today, we have systems installed in more than 25 U.S. states and six countries. We weren't the first company to try AI sorting, but it hadn't previously been used commercially. And we have steadily expanded the types of recyclables our systems can recognize and sort.
AI makes it theoretically possible to recover all of the recyclables from a mixed-material stream at accuracy approaching 100 percent, entirely based on image analysis. If an AI-based sorting system can see an object, it can accurately sort it.
Consider a particularly challenging material for today's recycling sorters: high-density polyethylene (HDPE), a plastic commonly used for detergent bottles and milk jugs. (In the United States, Europe, and China, HDPE products are labeled as No. 2 recyclables.) In a system that relies on hyperspectral imaging, batches of HDPE tend to be mixed with other plastics and may have paper or plastic labels, making it difficult for the hyperspectral imagers to detect the underlying object's chemical composition.
An AI-driven computer-vision system, by contrast, can determine that a bottle is HDPE and not something else by recognizing its packaging. Such a system can also use attributes like color, opacity, and form factor to increase detection accuracy, and even sort by color or specific product, reducing the amount of reprocessing needed. Though the system doesn't attempt to understand the meaning of words on labels, the words are part of an item's visual attributes.
We at AMP Robotics have built systems that can do this kind of sorting. In the future, AI systems could also sort by combinations of material and by original use, enabling food-grade materials to be separated from containers that held household cleaners, and paper contaminated with food waste to be separated from clean paper.
Training a neural network to detect objects in the recycling stream is not easy. It is at least several orders of magnitude more challenging than recognizing faces in a photograph, because there can be a nearly infinite variety of ways that recyclable materials can be deformed, and the system has to recognize the permutations.
It's hard enough to train a neural network to identify all the different types of bottles of laundry detergent on the market today, but it's an entirely different challenge when you consider the physical deformations that these objects can undergo by the time they reach a recycling facility. They can be folded, torn, or smashed. Mixed into a stream of other objects, a bottle might have only a corner visible. Fluids or food waste might obscure the material.
We train our systems by giving them images of materials belonging to each category, sourced from recycling facilities around the world. My company now has the world's largest data set of recyclable-material images for use in machine learning.
Using this data, our models learn to identify recyclables in the same way their human counterparts do, by spotting patterns and features that distinguish different materials. We continuously collect random samples from all the facilities that use our systems, and then annotate them, add them to our database, and retrain our neural networks. We also test our networks to find models that perform best on target material and do targeted additional training on materials that our systems have trouble identifying correctly.
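The collect-annotate-retrain loop described above, with extra effort aimed at materials the model misidentifies, resembles what the machine-learning literature calls active learning: send the examples the current model is least confident about to human annotators first. A minimal sketch of that selection step, with hypothetical function and parameter names of my own:

```python
def select_for_annotation(predictions, confidence_threshold=0.7, budget=100):
    """Choose which belt snapshots to send to human annotators.

    predictions: list of (sample_id, top_class_confidence) pairs from
    the current model. Returns up to `budget` sample ids, least
    confident first, so labeling effort concentrates on the materials
    the model currently finds hardest to identify.
    """
    uncertain = [(sid, conf) for sid, conf in predictions
                 if conf < confidence_threshold]
    uncertain.sort(key=lambda pair: pair[1])  # least confident first
    return [sid for sid, _ in uncertain[:budget]]
```

A model that is 99 percent sure about an aluminum can but only 35 percent sure about a crushed jug would, under this scheme, get the jug annotated and added to the training set first.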
In general, neural networks are susceptible to learning the wrong thing. Pictures of cows are associated with milk packaging, which is commonly produced as a fiber carton or HDPE container. But milk products can also be packaged in other plastics; for example, single-serving milk bottles may look like the HDPE of gallon jugs but are usually made from an opaque form of the PET (polyethylene terephthalate) used for water bottles. Cows don't always mean fiber or HDPE, in other words.
There is also the challenge of staying up to date with the continual changes in consumer packaging. Any mechanism that relies on visual observation to learn associations between packaging and material types will need to consume a steady stream of data to ensure that objects are classified accurately.
But we can get these systems to work. Right now, our systems do really well on certain categories (more than 98 percent accuracy on aluminum cans) and are getting better at distinguishing nuances like color, opacity, and initial use (spotting those food-grade plastics).
Now that AI-based systems are ready to take on your recyclables, how might things change? Certainly, they will boost the use of robotics, which is only minimally used in the recycling industry today. Given the perpetual worker shortage in this dull and dirty business, automation is a path worth taking.
AI can also help us understand how well today's existing sorting processes are doing and how we can improve them. Today, we have a very crude understanding of the operational efficiency of sorting facilities: we weigh trucks on the way in and weigh the output on the way out. No facility can tell you the purity of its products with any certainty; they only audit quality periodically by breaking open random bales. But if you placed an AI-powered vision system over the inputs and outputs of relevant parts of the sorting process, you'd gain a holistic view of what material is flowing where. This level of scrutiny is just beginning in hundreds of facilities around the world, and it should lead to greater efficiency in recycling operations. Being able to digitize the real-time flow of recyclables with precision and consistency also provides opportunities to better understand which recyclable materials are and are not currently being recycled, and then to identify gaps that will allow facilities to improve their recycling systems overall.
Sorting robot picking mixed plastics. AMP Robotics
But to really unleash the power of AI on the recycling process, we need to rethink the entire sorting process. Today, recycling operations typically whittle down the mixed stream of materials to the target material by removing nontarget material; they do a "negative sort," in other words. Instead, using AI vision systems with robotic pickers, we can perform a "positive sort": rather than removing nontarget material, we identify each object in a stream and select the target material.
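The difference between the two sorts can be made concrete with a toy model. In a negative sort, anything not explicitly rejected ends up in the product, so unrecognized contaminants slip through; a positive sort keeps only what is affirmatively identified. This is purely illustrative; real systems operate on detected objects, not strings.

```python
def negative_sort(stream, known_contaminants):
    """Traditional approach: eject known non-target items; everything
    that remains is assumed to be product."""
    return [item for item in stream if item not in known_contaminants]

def positive_sort(stream, targets):
    """AI approach: identify each object and keep only target material."""
    return [item for item in stream if item in targets]
```

Given a stream of ["HDPE", "PET", "plastic_film", "aluminum"], a negative sort that only knows to reject PET lets the unidentified plastic film contaminate the output, while a positive sort targeting HDPE and aluminum leaves it out, which is why the positive sort can hit higher purity on the first pass.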
To be sure, our recovery rate and purity are only as good as our algorithms. Those numbers continue to improve as our systems gain more experience in the world and our training data set continues to grow. We expect to eventually hit purity and recovery rates of 100 percent.
The implications of moving from more mechanical systems to AI are profound. Rather than coarsely sorting to 80 percent purity and then manually cleaning up the stream to 95 percent purity, a facility can reach the target purity on the first pass. And instead of having a unique sorting mechanism handling each type of material, a sorting machine can change targets just by a switch in algorithm.
The use of AI also means that we can recover materials long ignored for economic reasons. Until now, it was only economically viable for facilities to pursue the most abundant, high-value items in the waste stream. But with machine-learning systems that do positive sorting on a wider variety of materials, we can start to capture a greater diversity of material at little or no overhead to the business. Thats good for the planet.
We are beginning to see a few AI-based secondary recycling facilities go into operation, with Amp's technology first coming online in Denver in late 2020. These systems are currently used where material has already passed through a traditional sort, seeking high-value materials that were missed or low-value materials that can be sorted in novel ways and thereby find new markets.
Thanks to AI, the industry is beginning to chip away at the mountain of recyclables that end up in landfills each year: a mountain containing billions of tons of recyclables representing billions of dollars lost and nonrenewable resources wasted.
This article appears in the July 2022 print issue as "AI Takes a Dumpster Dive."
I Used AI Technology On 26 Older Celebrities To See How Accurately It Aged Them, And It’s Scary To See – BuzzFeed
Posted: at 3:56 am
Young Helen Mirren looks JUST like Jennifer Lawrence. It's so wild.
(Old Robert Pattinson can absolutely still get it, to be honest!)
And here's a side-by-side of fake Cher with real Cher.
And here's a side-by-side of fake Larry with real Larry.
And here's a side-by-side of fake Meryl with real Meryl.
And here's a side-by-side of fake Morgan with real Morgan.
And here's a side-by-side of fake Dolly with real Dolly.
And here's a side-by-side of fake Jackie with real Jackie.
And here's a side-by-side of fake Helen with real Helen.
And here's a side-by-side of fake Harrison with real Harrison.
And here's a side-by-side of fake Julie with real Julie.
And here's a side-by-side of fake George with real George.
And here's a side-by-side of fake Rita with real Rita.
And here's a side-by-side of fake Ian with real Ian.
And here's a side-by-side of fake Maggie with real Maggie.
And here's a side-by-side of fake Héctor with real Héctor.
And here's a side-by-side of fake Diane with real Diane.
And here's a side-by-side of fake James with real James.
And here's a side-by-side of fake Judi with real Judi.
And here's a side-by-side of fake Quincy with real Quincy.
And here's a side-by-side of fake Diana with real Diana.
And here's a side-by-side of fake Bobby with real Bobby.
And here's a side-by-side of fake Dionne with real Dionne.
And here's a side-by-side of fake Michael with real Michael.
And here's a side-by-side of fake Jessica with real Jessica.
And here's a side-by-side of fake Robert with real Robert.
And here's a side-by-side of fake Jane with real Jane.
Here’s How AI Is Helping Make Babies By Revolutionizing IVF – Forbes
Posted: at 3:56 am
Addressing infertility with AI-driven solutions
One in four couples in developing countries is impacted by infertility. About 48.5 million couples experience infertility worldwide. Today, infertility is rapidly becoming an epidemic.
In vitro fertilization (IVF) is a technique that helps people facing fertility problems have a baby. Despite IVF's potential, the outcomes are unpredictable. To make matters worse, access to fertility care is abysmal. Even in a developed market such as the United States, just 2% of people suffering from infertility have tapped into IVF.
"IVF has been around for over 40 years," says Eran Eshed, CEO of Fairtility. "Despite many innovations on the biotechnology side of things, surprisingly, there has been almost zero usage of data and techniques like artificial intelligence (AI) to influence outcomes."
While data science can't solve biological problems, Eshed believes AI will enhance the IVF process at every step where decisions are made.
Today, we're seeing exciting applications of data science in fertility that could improve embryologists' cycle capacity by 50% and increase the chances of a live birth by 4%.
IVF is a fertility technique in which an egg is removed from a persons ovaries and fertilized with sperm in a laboratory. The successfully fertilized eggan embryois then implanted into a uterus to grow.
Clinicians and embryologists make many decisions at several junctures. "These decisions are based on experience, intuition, and a set of very, very rudimentary rules," laments Eshed.
Today, there are two key challenges with IVF:
"When just 2% of the impacted population can leverage IVF, it's clear that access to care is a big, big issue," highlights Eshed. "IVF is currently focused on infertility patients: those not getting pregnant either by timed intercourse or simple treatments such as oral medications," shares Dr. Gerard Letterie, a reproductive endocrinologist and partner at Seattle Reproductive Medicine. "This is a relatively restricted segment of the population."
In the future, Dr. Letterie expects the patient segment to include those who are interested in fertility preservation by freezing eggs or creating embryos for future use. "This will markedly expand the number of patients seeking care with assisted reproductive technologies," he predicts.
How successful is IVF? The chances of conceiving from a single IVF cycle are around 30%. Hence, most patients need to undergo multiple cycles before experiencing a successful live birth.
While the success of IVF is influenced by age, data shows that most IVF cycles fail for even the youngest and healthiest women. IVF outcomes heavily depend on decisions made during the clinical process and on the expertise of the embryologists.
The IVF space is witnessing AI-driven technological breakthroughs
How long is the IVF process, and what are the steps involved? "IVF starts with a clinician's assessment of the cause of infertility. Then, it moves into the stimulation phase, where the doctor determines the best protocol for ovarian stimulation," shares Eshed.
This is commonly followed by the collection of eggs and sperm, fertilization of the eggs to create embryos, embryo culture in the clinic, transfer of embryos to the mother, and a live birth months later.
"As people go through this process, the success rates drop significantly at each stage," says Eshed. Typically, six to seven strategic decision points determine the effectiveness of each step. "In the business world, we'd call them leverage points where you can make a difference," he adds.
These points include decisions by clinicians on the stimulation medication protocol or the timing of egg retrieval. In the lab, embryologists make several judgments by interpreting images of oocytes (developing eggs), sperm, and blastocysts (early-stage embryos).
"I'm confident that AI can help streamline the decisions to augment clinical decision-making," claims Dr. Letterie. For example, sophisticated convolutional neural network-based image analytics can aid embryologists in interpreting the images to improve outcomes.
The global IVF market is set to reach around $36 billion by 2026, per an industry report. Dr. Letterie anticipates that there simply won't be enough skilled embryologists to address this rising demand. Recently, the fertility space has seen multiple technology investments, with several funded, AI-driven startups.
Eshed founded Fairtility in Israel to address the acute challenge of embryo analysis with AI. Recently, his firm raised $15 million in Series A funding. Other startups such as Embryonics, Mojo, and ALife have come up with AI-based fertility solutions to analyze embryos, assess sperm quality, and personalize IVF treatment plans.
Today, embryo classification is done by embryologists who manually inspect pictures for a set of visually detectable features. Fairtility utilizes computer vision algorithms to augment this process and predict the likely effectiveness of implantations.
Their AI algorithms are trained from a dataset of over 200,000 embryo videos and over 5 million clinical data points drawn from a diverse patient demographic. This gives the AI models the power to analyze minute features that are often undetectable by even the most experienced embryologists.
Fairtility's solution, CHLOE, is a cloud-based system that acts as a decision support tool for AI-powered embryo selection. The tool integrates with time-lapse imaging (TLI) systems to provide continuous predictions from fertilization to the blastocyst stage. As the TLI system captures pictures of embryos at different stages of development, they are automatically identified, segmented, and analyzed at the pixel level.
In addition to automating this process, the AI model helps precisely quantify attributes such as size, area, shape, proportion, and symmetry. "That's not something a human can do, so in a sense, we're bringing a lot more intelligence into the process," shares Eshed. Such precise information, coupled with implantation probability, enables an embryologist to make data-driven decisions for every embryo cultured in the TLI device.
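Once an embryo has been segmented at the pixel level, attributes like area and symmetry fall out of simple mask arithmetic. A stripped-down sketch of the idea follows; these are my illustrative metrics, not CHLOE's actual ones.

```python
def mask_attributes(mask):
    """Compute simple morphology metrics from a binary segmentation mask.

    mask: 2-D list of 0/1 pixels. Returns the area (foreground pixel
    count) and a left-right symmetry score in [0, 1]: the fraction of
    foreground pixels that are also foreground in the horizontally
    mirrored mask (1.0 means perfectly mirror-symmetric).
    """
    area = sum(sum(row) for row in mask)
    if area == 0:
        return {"area": 0, "symmetry": 0.0}
    matched = sum(1 for row in mask
                  for a, b in zip(row, row[::-1]) if a == 1 and b == 1)
    return {"area": area, "symmetry": matched / area}
```

A roughly circular blastocyst mask scores near 1.0 on symmetry, while a fragmented or lopsided one scores lower, which is the kind of sub-pixel-consistent quantification a human scorer can't reproduce by eye.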
Embryos at various stages of development at the time-of-pronuclei-appearance, or tPNa. The embryos are automatically identified by AI (see highlights on the left).
Embryos automatically identified by AI (left) at the time-to-division-to-2 or t2
Embryos automatically identified by AI (left) at the time-to-division-to-4 or t4
CHLOE's algorithms can predict blastulation with 96% accuracy, implantation with 71% accuracy, and whether an embryo is genetically healthy with 69% accuracy, per a paper submitted to the ESHRE conference. Such results improve on embryologists' prediction of embryo viability, which is currently around 65%.
Additionally, the AI solution can help embryologists identify anomalies, such as unusual cleavage patterns, severe fragmentation, or pronucleate abnormalities that may otherwise be missed. Thus, CHLOE boosts the likelihood of healthy embryo selection.
However, despite improved results in embryo selection and process efficiency, studies have yet to demonstrate concrete improvements in live birth rates, which is considered the gold standard.
"AI cannot and should not replace embryologists and clinicians," clarifies Eshed. "It is important that every patient gets the same and highest standard of care, irrespective of a practitioner's experience or workload. This is where CHLOE levels the playing field."
Fairtility provides its solution in a software as a service (SaaS) model to clinics and fertility centers around the world. Per Eshed, the installation of CHLOE requires no hardware and can be done remotely. With over 25 active installations worldwide, Fairtility has gained CE mark registration (a European safety, health, and environmental certification) and is reportedly in advanced FDA approval stages.
To realize the full potential of AI, several key challenges must be overcome:
"Data is a huge challenge in this space," says Eshed. The data ranges from notes about treatment history and electronic medical records (EMR) to ultrasound images and videos. Eshed says that while the data exists, it is highly dispersed and neither curated nor organized well. Even today, several clinics archive records in physical files. The entire process must be digitized to gain an end-to-end perspective from which AI models can learn.
"Current practices have not been sophisticated regarding workflow and process development," shares Dr. Letterie. Such AI solutions can help drive outcomes only when they are integrated into clinical and laboratory workflows. "This will also require education on the part of all stakeholders," he adds. For example, Dr. Letterie will be launching a 15-course curriculum on AI in fertility using presentations from thought leaders at the upcoming ESHRE conference.
Even after demonstrating effectiveness, achieving clinical uptake and routine use takes time. "Never underestimate the resistance to change," cautions Eshed. "A fancy AI solution is not necessarily going to impress anybody."
Dr. Letterie shares the example of beta-blockers, drugs that prevent cardiovascular disease, as a case in point. These drugs were initially used in patients recovering from myocardial infarction (MI) to prevent the recurrence of a heart attack. Despite studies demonstrating a clear reduction in morbidity and mortality, it took over seven years to integrate beta-blockers into routine clinical care.
"Similarly, we have an uphill battle to convince clinicians and embryologists that using AI tools will improve outcomes," cautions Dr. Letterie. Most practitioners aren't familiar with AI and its applications in the clinical setting; hence, they are extremely hesitant to change practice patterns. He feels that it is essential to show a clear improvement in outcomes before expecting significant uptake. Meanwhile, we must brace for the time lag in building trust with technology-driven treatments.
Fertility treatments reinvented
Dr. Letterie expects IVF to grow in prevalence with better success rates and fewer cost barriers to care. He foresees the development of early-detection tools that warn patients who might be experiencing decreases in fertility, as opposed to today, when patients often discover the problem only after their fertility potential is unrecoverable. With enhanced visibility into their fertility, patients will then be able to take early action by freezing sperm, oocytes, or embryos.
He concludes that smartphones will be one of the biggest and most significant improvements in the delivery of fertility care.
AI is primed to have an outsize impact on the field of dentistry – Fast Company
Posted: at 3:56 am
Think back to the last time you were in the dentist's chair and were told you have a cavity. The scenario probably went something like this: The dentist pulled up your X-ray, pointed to a gray smudge on your radiograph, and said, "This should probably be filled before it gets any bigger."
If you're like most patients, you probably had trouble distinguishing the monochrome gradations on your X-ray. "Is that a cavity or just a stain on your tooth?" you might have wondered. Maybe you asked for further clarification, or maybe you bit your tongue, accepted the diagnosis, and scheduled the filling.
This uncertainty is likely something we've all experienced at the dentist. And accounts like that of the well-known Reader's Digest reporter who went to 50 different dentists and received 50 different diagnoses certainly don't make the experience any easier to swallow.
The vast majority of dental professionals are reputable and honest, but understanding and trusting a diagnosis remains a challenge. The patient experience in the dentists chair is changing, however, and the patient trust deficit may soon shrinkthanks to artificial intelligence.
Recently cleared by the U.S. Food and Drug Administration, AI algorithms can now help your dentist detect and track oral health issues with sensitivity and precision equal to, and often better than, that of the human eye. The technology is a win-win for patients and dentists alike, promising to bring greater accuracy, consistency, and transparency to a field of medicine that has long been beset by patient mistrust.
Over the last 10 years, the use of AI in healthcare has taken off. According to Deloitte, 75% of large healthcare organizations felt strongly enough about the technology's future to invest $50 million or more of their R&D budgets in AI-related projects in 2019 alone.
Currently, AI plays a behind-the-scenes role in most medical fields, where it's applied to understand and classify clinical documentation and organize administrative workflows. Increasingly, it is also being used to perform various radiologic functions, including detection of diseases and other medical abnormalities.
There are a number of reasons, however, that AI will have an outsize impact on the field of dentistry.
Unlike other healthcare fields, where X-rays are captured only to diagnose the cause of a specific ailment, most dental patients receive X-rays annually to track their oral health and inform care. As a result, there are more X-rays of healthy and unhealthy teeth than there are any other kind of X-ray. This massive volume of available imagery enables training of highly accurate machine learning (ML) algorithms.
Just as ML algorithms are trained to recognize humans through exposure to numerous images of faces, ML algorithms exposed to millions of dental X-rays can detect oral ailments more accurately than the human eye. For the first time, this ability of AI/ML software to distinguish healthy from unhealthy teeth allows for diagnostic consistency to be established across dental providers. And because X-rays are used more frequently in everyday dental care than they are in general medicine, the technology's impact can be greater than in other areas of healthcare.
The dentist's role in reading X-rays is also different than in other medical fields. Fields like pulmonology, orthopedics, and urology typically have dedicated radiologists who work alongside a specialist to complete and analyze recommended imaging.
In dentistry, however, dentists themselves play that role, often in addition to acting as entrepreneur and business owner, not to mention surgeon and dental provider. As such, AI is becoming another tool on the dental tray, helping to improve diagnostic accuracy. Additionally, unlike in other fields of medical radiology, dentists need not fear that their jobs will be replaced; while diagnosis may be computer-augmented, delivery of care remains in the dentist's hands.
AI can also virtually eliminate the patient trust problem. With the ability to measure and detect things like tooth decay, calculus, and root abscesses down to the millimeter, and to track disease progression over time, AI can help ensure that no common conditions are missed or misdiagnosed. By annotating dental X-rays, it can also help patients better understand exactly what their radiograph is showing them, helping to relieve dental anxiety and instantly providing a real-time second opinion that validates what their dentist is telling them.
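Millimeter-level measurement is what makes progression tracking possible: compare the measured size of the same lesion across successive annual X-rays and flag meaningful growth. A hypothetical sketch, with the threshold and names invented purely for illustration (no vendor's actual rule is implied):

```python
def flag_progression(sizes_mm, growth_threshold_mm=0.3):
    """Given a lesion's measured size (in mm) at successive annual
    exams, oldest first, flag it if it grew by more than the threshold
    between any two consecutive visits."""
    return any(later - earlier > growth_threshold_mm
               for earlier, later in zip(sizes_mm, sizes_mm[1:]))
```

A lesion measured at 1.0 mm, 1.1 mm, and then 1.5 mm would be flagged for the jump between the last two visits, while one creeping up by 0.05 mm a year would not, which is the sort of consistent, quantitative follow-up that supports an "it got bigger" conversation with the patient.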
Thinking back to that last time you were in the dentist's chair, imagine how much easier it would have been to understand your dentist's diagnosis with this visual aid, not to mention your level of confidence knowing that a computer helped verify it.
Forward-thinking dental practices are already rolling out this technology, and it has been met with enthusiasm. Sage Dental, for example, a dental service organization operating in Florida and Georgia, has been using AI-aided technology to ensure quality and consistency among providers across its 82 practices.
As it turns out, AI-aided exams encourage patients to treat dental issues earlier than they might otherwise, which is critical, since dental problems only become more expensive and more extensive to treat if left unaddressed. And while the driver for adopting AI was consistency across dentists and offices, the result has been a dramatic improvement in patient satisfaction and, ultimately, in patient care.
Clearly, patient trust will be improved in the long run by AI's impact on diagnostics. By establishing higher universal standards of care, AI can ensure consistent quality outcomes. When that happens, patient trust becomes intrinsic to dentistry. A patient may not like the diagnosis and may not elect to treat the diagnosis, but he or she should trust the diagnosis.
Consumers are already comfortable with the use of AI in many of the technologies we use daily. At one time or another, who hasn't been impressed by AI's ability to build a playlist of new music based on your favorite songs, or to identify that hard-to-distinguish face in a photo you uploaded to social media?
For the larger medical industry, dentistry is poised to play a similar role, helping patients to develop that same level of familiarity and comfort with AI-aided diagnostics.
We are nearing a time, possibly sooner than we might expect, when AI technologies in the dental office will not only identify immediate dental concerns but also anticipate, through analysis of medical records, how those concerns may affect a patient's overall health.
Ophir Tanz is the founder and CEO of Pearl. Dr. Cindy Roark is the SVP and chief clinical officer at Sage Dental.
Link:
AI is primed to have an outsize impact on the field of dentistry - Fast Company
Posted in Ai
Comments Off on AI is primed to have an outsize impact on the field of dentistry – Fast Company
Despite recession fears, companies aren’t pulling back on technology investments – CNBC
Posted: at 3:56 am
A data center.
Erik Isakson | DigitalVision | Getty Images
The chances of a recession are still being debated, and inflation looks to be stubbornly high for at least the rest of this year, but when it comes to corporate technology spending, it's full steam ahead.
A new CNBC Technology Executive Council survey shows that more than three-quarters of tech leaders expect their organization to spend more on technology this year. No one said they'll be spending less.
Tech leaders say that if they've learned anything from past downturns, it's that technology is not a cost center but rather a business driver.
The areas where they're focusing investments include cloud computing, machine learning and artificial intelligence, and automation.
"In other cycles we've seen in the past, tech investment was one of the first casualties," said Nicola Morini Bianzino, chief technology officer at professional services giant EY. "But after the pandemic, people realized that in a down, or even potentially recessionary, environment, we still need to keep our technology investments."
Danny Allan, chief technology officer at data protection firm Veeam, said: "If you look at what occurred over the past two years, it's clear that technology is the sustainable differentiator that sets companies apart."
That was certainly the message delivered by veteran investor, LinkedIn co-founder and Greylock partner Reid Hoffman, who was a guest speaker at a recent CNBC Technology Executive Council Town Hall.
"In this environment, we're competing for making the most and longest term value for our businesses," he said. "So ask yourselves: where do I have a competitive advantage and where can I play offense?"
Guido Sacchi, chief information officer for Global Payments, said for many companies the tech agenda and the business agenda have become one and the same. In his conversations with business unit leaders at Global Payments, he says not one executive has suggested that cutting tech spending is the right way to respond to a potentially sharp economic downturn.
"Everyone understands what tech brings to the table," he said. "Not one of them wants to cut anything," he said.
Global Payments is particularly focused on cloud native products and platforms, analytics, AI and machine learning, areas he describes as essential to "driving positive business outcomes."
In working with clients, Sacchi says it's clear that technology is firmly woven into the fabric of everything its customers do to keep moving ahead. The company works with many top quick-service restaurants that have doubled down on AI and other advanced technologies to facilitate quicker deliveries and drive-thru recognition patterns for their customers.
The same holds true for its health-care customers that leveraged telemedicine during the pandemic when patients were unable to see their doctors in person. "The pandemic accelerated the deployment of so many of these new technologies and now businesses aren't willing to go backwards," Sacchi said.
J.P. Morgan's recent annual chief information officer survey bears this out. It gathered the spending plans of 142 CIOs responsible for over $100 billion in annual enterprise budgets and found that IT budgets are growing, even if they're not keeping up with inflation. The CIOs surveyed see IT budget growth of 5.3% this calendar year and 5.7% in 2023. That's a big swing from the pandemic-era edition of the survey, when IT budgets contracted by nearly 5%.
Despite the uncertain economic climate, well-funded, cash-flow positive firms are in a particularly good position to create even more distance between themselves and competitors, Allan said. "This is what separates the good from the great leaders, the ones who can recognize this time and capitalize on it," he added.
His firm's tech spending is focused on modern data protection. "What could be more important in an economy that is so dependent on technology and data than making sure you can protect that data," he said, adding that as companies continue to make the jump from traditional infrastructure to cloud infrastructure they need to make sure their data isn't vulnerable to an onslaught of cyber and malware attacks.
And when it comes to AI, Hoffman advises companies to stay invested, but to do their homework. "Not everything is AI," he said during the recent TEC Town Hall event. "Take the time to know where to apply it, how to make it work for you, and why it's being used."
And even if AI investments can't be part of today's budget, Hoffman says the smart play is to stay on a learning curve with the technology and revisit it down the road.
"You are sacrificing the future if you opt out of AI completely," he said.
Link:
Despite recession fears, companies aren't pulling back on technology investments - CNBC
Posted in Ai
Comments Off on Despite recession fears, companies aren’t pulling back on technology investments – CNBC