The Prometheus League
Breaking News and Updates
Daily Archives: June 20, 2022
IRS rolls out artificial intelligence to help callers make payments …
Posted: June 20, 2022 at 2:51 pm
The Internal Revenue Service unveiled a new artificial intelligence system it says will cut wait times to resolve simple tasks and improve customer service.
The technology enables the new phone system to authenticate callers by asking them basic questions, IRS officials said during a call with reporters Friday. The new system can understand complete and natural ways of speaking, they said.
"For the first time in 160 years, this agency is able to successfully interact with a taxpayer using artificial intelligence to access their account and resolve it, in certain situations, without any wait on hold," IRS Deputy Commissioner Darren Guillot said during the call.
When taxpayers receive a mailed letter stating they owe money, they can use an ID number from the letter to call in and access the improved system, agency officials explained.
Frederick Schindler, the agency's director of collection, said his team staggered the generation and mailing of over 3 million letters so they will arrive in mailboxes in the coming days, enabling callers to make use of the new system.
The IRS's efforts to improve its phone system come roughly three months after the agency said it would hire 10,000 additional employees to cut through a pandemic-related backlog.
Expanding the phone bot with artificial intelligence demonstrates an improvement over the previous phone system, the IRS officials said. The previous unauthenticated phone bot could only answer basic questions and allowed callers to set up one-time payments, they said.
That more basic technology, which cannot pull up a person's IRS account, is also the technology behind an online chatbot the agency uses.
Because the new bot can authenticate callers, it can access a caller's IRS account. From there, callers can discuss and set up a payment plan with the bot without spending time on hold, a process that would typically take 17 to 20 minutes with a human operator, IRS officials said.
Letting the phone bot handle simpler issues frees up human operators for more complex matters, the IRS officials said.
Treasury Department Deputy Secretary Wally Adeyemo recently told ABC News that the IRS received over 200 million calls and only had 15,000 people to answer those calls last year.
Even with the intelligent phone bot, callers will still have the option to speak with a human for additional support, IRS officials said.
Many callers owe less than $25,000 and can "name their price," or the monthly amount they will commit to paying. The artificial intelligence system then determines whether that amount falls within the agency's deadline for repayment.
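The article doesn't disclose how the IRS system actually evaluates a named amount, but the check it describes reduces to simple arithmetic: does the named monthly payment clear the balance within the allowed repayment window? The sketch below is a hypothetical illustration; the 72-month window and the interest-free calculation are assumptions for the example, not IRS policy.

```python
import math

def months_to_repay(balance: float, monthly_payment: float) -> int:
    """Months needed to clear a balance at a fixed monthly payment
    (interest and penalties ignored for this illustration)."""
    if monthly_payment <= 0:
        raise ValueError("monthly payment must be positive")
    return math.ceil(balance / monthly_payment)

def within_deadline(balance: float, monthly_payment: float, max_months: int = 72) -> bool:
    """Does the caller's named amount retire the balance within the window?
    The 72-month default is an assumed placeholder, not an IRS figure."""
    return months_to_repay(balance, monthly_payment) <= max_months

print(within_deadline(24000, 400))   # 60 months, inside the window -> True
print(within_deadline(24000, 250))   # 96 months, outside the window -> False
```

A production system would also fold in accruing interest and the caller's means, which is what the planned follow-up questioning described later in the article appears aimed at.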
The new bot will not guide callers to pay more than the price they name, the officials explained.
While officials on the call acknowledged the new phone bot will offer a return on investment through expanded compliance, they said increasing government revenue was not the primary focus of developing the system.
"Service is part of our name," Guillot said. "This is all about the taxpayer experience, helping customers," he added later.
But not all callers will enjoy the no-wait time the authenticated phone bot offers. It launched only on the automated collection system and accounts management phone lines Tuesday, the IRS officials said.
For now, it is operating at 25% of its intended capacity, which saw the bot answer over 13,000 calls Thursday. The IRS plans to bring more of the system online through the end of next week, IRS officials said.
"We have phone lines to deal with specific issues like liens or settlement proposals," Schindler said. "In the future, there's use cases for taking this technology, particularly as we learn more about it, to any one of our collection processes."
The bot currently operates in English and Spanish, with IRS officials hoping to expand its language offerings in the future, they said.
More immediate expansion plans include programming the authenticated bot to ask questions of callers who name their monthly payments to ensure it is within their financial means, the officials said.
Dangers & Risks of Artificial Intelligence – ITChronicles
Posted: at 2:51 pm
Due to hype and popular fiction, the dangers of artificial intelligence (AI) are typically associated in the public eye with sci-fi horror scenarios. These often involve killer robots and hyper-intelligent computer systems that consider humanity a nuisance to be eliminated for the good of the planet. While nightmares like this often play out as overblown and silly in comic books and on screen, the risks of artificial intelligence cannot be dismissed so lightly, and AI dangers do exist.
In this article, we'll look at some of the real risks of artificial intelligence, and why AI is dangerous when looked at in certain contexts or wrongly applied.
Artificial intelligence encompasses a range of technologies and systems, ranging from Google's search algorithms, through smart home gadgets, to military-grade autonomous weapons. So issuing a blanket confirmation or denial to the question "Is artificial intelligence dangerous?" isn't that simple; the issue is much more nuanced.
Most artificial intelligence systems today qualify as "weak" or "narrow" AI: technologies designed to perform specific tasks such as searching the internet, responding to environmental changes like temperature, or recognising faces. Generally speaking, narrow AI performs better than humans at those specific tasks.
For some AI developers, however, the Holy Grail is strong AI or artificial general intelligence (AGI), a level of technology at which machines would have a much greater degree of autonomy and versatility, enabling them to outperform humans in almost all cognitive tasks.
While the superintelligence of strong AI has the potential to help us eradicate war, disease, and poverty, there are significant dangers of artificial intelligence at this level. However, some question whether strong AI will ever be achieved, and others maintain that if and when it does arrive, it can only be beneficial.
Optimism aside, the increasing sophistication of technologies and algorithms may mean that AI becomes dangerous if its goals and implementation run contrary to our own expectations or objectives. The risks of AI in this context may hold even at the level of narrow or weak AI. If, for example, a home or in-vehicle thermostat system is poorly configured or hacked, its operation could pose a serious hazard to human health through overheating or freezing. The same would apply to smart city management systems or autonomous vehicle steering mechanisms.
Most researchers agree that a strong or AGI system would be unlikely to exhibit human emotions such as love or hate, and would therefore not pose AI dangers through benevolent or malevolent intentions. However, even the strongest AI must be programmed by humans initially, and it's in this context that the danger lies. Specifically, artificial intelligence analysts highlight two scenarios where the underlying programming or human intent of a system design could cause problems:
This threat covers all existing and future autonomous weapons systems (military drones, robots, missile defenses, etc.), or technologies capable of intentionally or unintentionally causing massive harm or physical destruction due to misuse, hacking, or sabotage.
Besides the prospect of an AI arms race and the possibility of AI-enabled warfare in the case of autonomous weaponry, there are AI risks posed by the design and deployment of the technology itself. With high-stakes activity an inherent part of military design, such systems would probably have fail-safes that make them extremely difficult to deactivate once started, and their human owners could conceivably lose control of them in escalating situations.
The classic illustration of this AI danger is the self-driving car. If you ask such a vehicle to take you to the airport as quickly as possible, it could quite literally do so, breaking every traffic law in the book, causing accidents, and freaking you out completely in the process.
At the superintelligence level of AGI, imagine a geo-engineering or climate control system that's given free rein to implement its programming in the most efficient manner possible. The damage it could cause to infrastructure and ecosystems could be catastrophic.
How dangerous is AI? At its current rate of development, artificial intelligence has already exceeded the expectations of many observers, achieving milestones that just a few years ago were considered decades away.
While some experts still estimate that the development of human-level AI is centuries away, most researchers are coming round to the opinion that it could happen before 2060. And the prevailing view amongst observers is that, as long as we're not 100% sure that artificial general intelligence won't happen this century, it's a good idea to start safety research now to prepare for its arrival.
Many of the safety problems associated with superintelligent AI are so complex that they may require decades to solve. A superintelligent AI will, by definition, be very good at achieving its goals, whatever they may be. As humans, we'll need to ensure that its goals are completely aligned with ours. The same holds for weaker artificial intelligence systems as the technology continues to evolve.
Intelligence enables control, and as technology becomes smarter, the greatest danger of artificial intelligence lies in its capacity to exceed human intelligence. Once that milestone is achieved, we run the danger of losing our control over the technology. And this danger becomes even more severe if the goals of that technology dont align with our own objectives.
A scenario whereby an AGI whose goals run counter to our own uses the internet to enforce the implementation of its internal directives illustrates why AI is dangerous in this respect. Such a system could potentially impact the financial markets, manipulate social and political discourse, or introduce technological innovations that we can barely imagine, much less keep up with.
The keys to determining why artificial intelligence is dangerous or not lie in its underlying programming, the method of its deployment, and whether or not its goals are in alignment with our own.
As technology continues its march toward artificial general intelligence, AI has the potential to become more intelligent than any human, and we currently have no way of predicting how it will behave. What we can do is everything in our power to ensure that the goals of that intelligence remain compatible with ours, and to pursue the research and design needed to implement systems that keep them that way.
Summary:
Artificial intelligence encompasses a range of technologies and systems, ranging from Google's search algorithms, through smart home gadgets, to military-grade autonomous weapons. So issuing a blanket confirmation or denial to the question "Is artificial intelligence dangerous?" isn't that simple. For some AI developers, the Holy Grail is strong AI or artificial general intelligence (AGI), a level of technology at which machines would have a much greater degree of autonomy and versatility, enabling them to outperform humans in almost all cognitive tasks. While the superintelligence of strong AI has the potential to help us eradicate war, disease, and poverty, there are significant dangers of artificial intelligence at this level. The keys to determining whether artificial intelligence is dangerous lie in its underlying programming, the method of its deployment, and whether or not its goals are in alignment with our own.
Master in Artificial Intelligence Online | IU International
Posted: at 2:51 pm
With the IU and LSBU (London South Bank University) dual degree track, you get a unique opportunity: you can choose whether to graduate with both a German and a British graduation certificate, without any extra academic requirements. The study programmes at IU and at LSBU are coordinated and therefore equivalent to each other.
Start your studies at IU, and if you want to apply for your British certificate*, all you have to do is send in your application and pay the required fee. You'll then be awarded a degree from LSBU following your graduation, provided all of your study requirements have been fulfilled successfully.
Graduate with a German Bachelor's, MBA or Master's degree along with a UK Bachelor's with Honours (Hons), MBA or Master's.
London South Bank University is known for its internationality, with over 18,000 students from more than 130 countries. Like IU, LSBU has received multiple awards and been praised for its focus on improving graduates' career opportunities.
Our cooperation was born out of one goal: to help you get the best jobs in the world with a dual degree.
Get in touch with our Student Advisory Team, send in your application form and receive your British graduation certificate after you've successfully graduated from IU.
*only available for selected study programmes: B.Sc. Data Science, B.Sc. Computer Science, B.A.A. Business Administration, B.A. International Management, M.Sc. Artificial Intelligence, M.Sc. Computer Science, M.Sc. Data Science, M.A. Master Management with electives (Engineering, Finance & Accounting, Int. Marketing, IT, Leadership, Big Data), MBA with electives (Big Data, Engineering, Finance & Accounting, IT, Marketing).
MS in Artificial Intelligence | University of Michigan-Dearborn
Posted: at 2:51 pm
The Artificial Intelligence master's degree program is a 30-credit-hour curriculum that gives students a comprehensive framework for artificial intelligence with one of four concentration areas: (1) Computer Vision, (2) Intelligent Interaction, (3) Machine Learning, and (4) Knowledge Management and Reasoning.
Students will engage in an extensive core curriculum to develop depth in the concepts that build a foundation for artificial intelligence theory and practice. They will also have the opportunity to build on that core knowledge by taking a variety of elective courses, selected from colleges throughout campus, to explore key contextual areas or more complex technical AI applications.
The program will be accessible to both full-time and part-time students, aiming to train students who aspire to have AI research and development (R&D) or leadership careers in industry. To accommodate the needs of working professionals who might be interested in this degree program, the course offerings for the MS in AI will be in the late afternoon and evening hours to allow students to earn the degree through part-time study. The program may be completed entirely on campus, entirely online, or through a combination of on-campus and online courses.
If you have additional questions, please contact the program director: Dr. Jin Lu (jinluz@umich.edu).
Artificial Intelligence On The Hunt For Illegal Nuclear Material – Texas A&M University Today
Posted: at 2:51 pm
Nuclear engineering doctoral student Sean Martinson works on plutonium solution purification inside a protective glove box in Sunil Chirayath's nuclear forensics laboratory.
Justin Elizalde/Texas A&M Engineering
Millions of shipments of nuclear and other radiological materials are moved in the U.S. every year for good reasons, including health care, power generation, research and manufacturing. But there remains the threat that bad actors in possession of stolen or illegally produced nuclear materials or weapons will try to smuggle them across borders for nefarious purposes.
Texas A&M University researchers are making it harder for them to succeed.
If border agents intercept illicit nuclear materials, investigators need to know who produced them and where they came from. Fortunately, nuclear materials carry certain forensic markers that can reveal valuable information, much like fingerprints can identify criminals.
For instance, when scientists examine the concentration of certain key contaminant isotopes in separated plutonium samples, they can determine three attributes of a sample's history: the type of nuclear reactor that produced it, how long the plutonium or uranium was contained in the reactor, and how long ago it was produced.
With current statistical methodologies, they can determine these three attributes using a generated database that stores the required information as mathematical variations of those attributes across various nuclear reactor types, and emerge with a good idea of who made the material.
"But what if investigators are presented with a mixed plutonium sample?" said Sunil Chirayath, author of a new study on nuclear forensics recently published in the journal Nuclear Science and Engineering. "Suppose the adversary is mixing materials from two nuclear reactors at two different times, and that material is cooled for different times. A bad actor might do this intentionally to disguise it."
Mixed samples of nuclear material are significantly more challenging to identify with traditional methodologies. In a real-world situation, the extra time required could have a catastrophic impact on the global community.
To improve the process, Chirayath, associate professor in the Department of Nuclear Engineering and director of the Texas A&M Engineering Experiment StationsCenter for Nuclear Security Science and Policy Initiatives, along with his research team, has developed a methodology using machine learning, a type of artificial intelligence.
He can produce identifying markers through simulations, then store that data in a 3D database. Each attribute is one level of the database, and a standard computer can quickly process the data and lead investigators to the reactor type that produced the plutonium sample and, potentially, to the suspects, by joining other pieces of the puzzle gathered through traditional forensics.
Three experiments irradiating uranium in three different reactor types, with post-irradiation examinations, have been conducted at Texas A&M to date. Without knowing the samples' origins, doctoral student researcher Patrick O'Neal successfully identified where each of the plutonium samples was produced by using machine learning.
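The article doesn't detail the team's actual model, but the attribution task it describes, matching a sample's contaminant-isotope fingerprint against a simulated database of reactor types, can be sketched with a minimal nearest-neighbour lookup. Every number and reactor label below is an invented placeholder for illustration; the real work uses a far richer database spanning reactor type, irradiation time and cooling time.

```python
import math

# Hypothetical isotope-ratio "fingerprints" (contaminant isotope
# concentrations) simulated per reactor type; values are illustrative only.
REFERENCE = {
    "PWR":  [0.062, 0.241, 0.118],
    "BWR":  [0.055, 0.280, 0.101],
    "PHWR": [0.031, 0.190, 0.145],
}

def classify(sample):
    """Attribute a plutonium sample to the reference reactor type whose
    simulated fingerprint is nearest in Euclidean distance."""
    return min(REFERENCE, key=lambda rt: math.dist(sample, REFERENCE[rt]))

print(classify([0.060, 0.245, 0.120]))  # nearest fingerprint -> PWR
```

A trained machine-learning classifier generalises this idea: instead of one reference vector per reactor, it learns the mathematical variation of the fingerprints across irradiation and cooling times, which is what allows it to handle the mixed samples that defeat simpler lookups.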
The work is being done through a consortium of national labs and universities funded by the U.S. Department of Energy's National Nuclear Security Administration. The consortium focuses on developing new methods of detecting and deterring nuclear proliferation and on educating the next generation of nuclear security professionals. Chirayath's team will soon run one more irradiation and the corresponding post-irradiation examination with funding already in place.
The next step is to take this machine-learning methodology to high-level government labs, where researchers can work with much larger samples of nuclear materials. University labs are constrained by more restrictive irradiation safety limits.
Chirayath is confident efforts to prevent nuclear proliferation are working. The international Treaty on the Non-Proliferation of Nuclear Weapons arose from concern about atomic weaponry, and all but four countries (India, Israel, Pakistan and South Sudan) signed it. North Korea signed it but later withdrew.
Chirayath also notes that with the rise in nuclear energy production comes an increased risk that the technology will be used to make weapons capable of mass destruction.
"We have to make sure materials are not diverted from peaceful use," he said. "We need to double up our tools and methodologies, but it's not just technical tools. We also have to double up on policies and agreements to prevent proliferation from happening."
5G and AI use cases how 5G lifts artificial intelligence – Information Age
Posted: at 2:51 pm
Remote working: 5G will accelerate robots at work in the field
5G will unleash the potential of AI, says Michael Baxter. But how will AI and 5G most affect our everyday business lives? What are the 5G and AI use cases?
Convergence makes 5G and AI use cases exciting: 5G could unleash the artificial intelligence revolution, moving it into a different league and creating new AI use cases.
When Apple launched the iPhone, few people understood its significance. There was a reason for this: at the time of the product announcement in 2007, wireless internet speeds were quite slow. 3G was launched by NTT DoCoMo in 2001, but network rollout had been gradual at best. It was the convergence of touchscreen phones with 3G, and then 4G, that created a demand hardly anybody had anticipated.
The growing importance of AI will go hand in hand with the emergence of 5G
There was also a catch: as popular applications emerged, demand for 3G rocketed and the 3G network became stretched.
According to Allied Market Research, the global 5G market will grow from a valuation of $5.13bn in 2020 to $797.8bn by 2030.
IDTechEx has drawn similar conclusions. In a recent report, it concluded: "The 5G market is just about to take off." It forecast that by the end of 2032, consumer mobile services applying 5G technologies will generate around $800bn in revenues.
Dr Yu-Han Chang, Technology Analyst for 5G at IDTechEx, says: "5G enables greater data flows and quicker data collecting, allowing AI to generate more accurate models and predictions. 5G and AI together will speed the evolution of a fully connected and intelligent world."
5G is slated to offer speeds more than 100 times faster than 4G, which seems like an incredible increase, but applications will emerge to fill the opportunity created.
As ever, with these things, when you drill down, complications emerge. There are, in fact, two distinct 5G networks.
At one level, mmWave, also called 5G II, operates at 100 MHz, provides between 24 and 100 GHz (gigahertz), and offers extremely impressive latency, but is limited to a range of 300 metres.
By contrast, Sub-6 GHz, or 5G I, operates at 50 MHz, has latency inferior to 5G II but superior to 4G, provides between 3.5 and 7 GHz, but has a range of 1.5 kilometres.
In other words, 5G II can support more powerful applications, but because of the low range, it requires more investment in infrastructure. Consequently, to date, most 5G rollout has been for 5G I.
The implications for AI will not be immediate, but they will be highly significant.
Although AI is probably more common than is generally supposed, its impact has been limited to date. So, while most of us use AI without necessarily realising it, for example when we use our smartphone as a navigation tool, AI's real impact lies ahead.
The growing importance of AI will go hand in hand with the emergence of 5G. The convergence of the two technologies will have an enormous impact on us all, will have huge economic significance and will transform business.
The Internet of Things (IoT) will underpin the convergence between 5G and AI.
Adam Bujak, CEO and Co-founder at KYP.ai, the process intelligence company, said that 5G will power the growth of the IoT. It will allow organisations to use more connected devices and intelligent sensors.
"We will be able to conduct our processes more digitally in the physical world and online by using connected devices and services. Therefore, we'll see the growth of 'phygital' [physical + digital] products and services, including virtual reality modes of operations and customer interactions."
IDTechEx says that 5G's [especially mmWave's] high throughput and ultra-low latency enable it to tap into various high-value sectors, such as 3D robotic control, virtual reality monitoring, and remote medical control, that earlier technologies couldn't.
We will see connectivity between products like never before. At one level, we might see coffee cups communicate with coffee vending machines, saying "I'm empty." But at another level, we will see the connectivity of autonomous vehicles, which will be of massive significance to the future of transport.
The data collected by the IoT will also provide the kind of ammunition that machine learning or AI needs to develop and create greater insights.
The convergence of 5G and AI will underpin the emergence of the metaverse.
The 2021 hype concerning the metaverse has partially turned to cynicism. Part of the issue relates to the definition of the metaverse. At one level, it conjures up thoughts of a Matrix-type world, but in reality its meaning is more prosaic. I have heard people say a Zoom call involves the metaverse, and they define it as combining digital and physical worlds.
Virtual and augmented reality or immersive reality will underpin the metaverse, and 5G and AI will transform it.
The convergence of 5G and AI will create new use cases in games and streaming services, for example offering 3D and virtual reality viewing, and will support how we communicate. It will also change social media.
The convergence of AI and 5G will also create tools we will use in our daily lives, for example real-time language translation tools.
Business-to-business applications will be many, but one of the most important aspects will be the support 5G gives remote working. Take as an example how Grammarly supports communications by text. But as virtual and augmented reality technologies advance, it is not difficult to imagine how 5G and AI can transform not only remote work but mobile work.
Still, with B2B, there is also the issue of automation technologies.
Adam Bujak says: "5G will extend the reach of digital transformation and bring us more opportunities for innovation and automation. We will have more data and insights from all these phygital processes and connected devices, allowing us to train AI and to tap it for business and process analytics. In turn, there will be more possibilities for outcome-driven intelligent automation of services."
Office automation is one opportunity, but 5G and AI in combination will also support industry and manufacturing; at one level, it will be able to support the maintenance of equipment, monitoring machinery and identifying potential issues in advance, but it also presents the enticing prospect of remote operation of machinery.
The connectivity of transport, including autonomous vehicles, drones, and transport infrastructure such as ensuring traffic lights support optimal traffic flow, will be transformed by AI and 5G working in parallel.
The opportunities presented by AI and 5G in healthcare are multiple, but one of the most enticing will relate to remote monitoring of patients when they are out.
But many more AI and 5G use cases will emerge; the above is just the beginning. The convergence of these technologies will prove incredibly important and will unleash AI, finally justifying much of the hype seen over the last decade.
See the original post:
5G and AI use cases how 5G lifts artificial intelligence - Information Age
Posted in Artificial Intelligence
Comments Off on 5G and AI use cases how 5G lifts artificial intelligence – Information Age
Harnessing artificial intelligence to predict atrial fibrillation, heart disease up to a year in advance – UCHealth Today
Posted: at 2:51 pm
Some mashups of old and new don't quite work: horse carriages and reusable rocket boosters, say. But combine a 19th-century medical advance with 21st-century computing technologies, and one now has the ability to identify patients at high risk of atrial fibrillation up to a year in advance. That breakthrough, clinicians hope, will help prevent thousands of strokes that hard-to-diagnose Afib causes each year. It also could spot structural heart diseases earlier, improving outcomes through more timely treatment.
The old technology in question is the electrocardiogram (ECG or EKG), a heart-voltage detector invented in 1895 that's been a low-cost, mainstay cardiac diagnostic pretty much ever since. The modern computing technology at play involves machine-learning algorithms developed by Chicago-based Tempus and Pennsylvania-based Geisinger Health. Those algorithms feed into a convolutional neural network that interprets the waveform (the shape of an ECG's many spikes, dips and subtle undulations) in ways no human ever could.
Tempus's artificial intelligence (AI) works on the same basic architecture as what YouTube uses to scan image data to identify cat videos, or self-driving cars use to identify objects in the road, explains Noah Zimmerman, Tempus's vice president for translational science.
"An EKG is measuring voltage, right? We look at those voltages and treat them almost like image data," he said.
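Zimmerman's point, that ECG voltages can be treated almost like image data, rests on the convolution operation at the heart of such networks. Here is a toy sketch of a 1-D convolution over a voltage trace; the trace and kernel values are invented, and this is not Tempus's actual model.

```python
def conv1d(signal, kernel):
    """Valid-mode 1-D convolution (cross-correlation) over a voltage trace."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def relu(xs):
    """Standard rectified-linear activation."""
    return [max(0.0, x) for x in xs]

# A kernel like [-1, 2, -1] responds strongly to a sharp peak, loosely
# analogous to how a learned filter might respond to a QRS spike.
trace = [0.0, 0.1, 0.0, 1.2, 0.0, 0.1, 0.0]   # toy ECG-like trace
feature_map = relu(conv1d(trace, [-1.0, 2.0, -1.0]))
```

A real network stacks many such learned filters, pooling, and dense layers, but the filter-sliding-over-a-signal idea is the same one used on 2-D images.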
While studies in such journals as Circulation and Nature Medicine have shown this old-plus-new approach to work surprisingly well (to the point that the U.S. Food and Drug Administration is fast-tracking the technology), the AI tool needs more testing. Further, once that training has Tempus's ECG Analysis Platform ready for prime time, the new diagnostic must be incorporated into health care processes so doctors can make the most of it for their patients. In pursuit of those ends, Tempus is partnering with the UCHealth CARE Innovation Center.
That partnership is proceeding in three phases, says Emily Hearst, UCHealth's Tempus project manager. The first is looking retrospectively at the ECGs of 5,000 UCHealth patients and seeing if Tempus's ECG Analysis Platform can repeat the sort of results it has delivered before. That Circulation study involved feeding the ECG Analysis Platform 12-lead digital ECG traces from 430,000 patients collected from 1984 to 2019. Using historical data allowed Tempus and Geisinger Health to see how the AI system's Afib predictions tracked with future Afib-related strokes. The system spotted nearly two-thirds of patients with no documented history of Afib (Afib being episodic and often without symptoms, it can go undetected for years) but who later had an Afib-related stroke.
Why would examining a fraction as many patient ECGs as Tempus already did help prove out, much less improve, the platform? Dr. David Kao, the University of Colorado School of Medicine and UCHealth cardiologist who is working closely with Tempus, says it's about diversity. People are people, but the population makeups of Pennsylvania and Colorado differ. An AI-based system's intelligence must reflect that.
"Overfitting is a huge problem in machine learning, which means it can perform very well in your initial, however-large dataset, but then it doesn't work anywhere else," Kao said.
What's called an external validation set, one using a different patient population than the initial training set, can both refine the model and lend its creators as well as regulators more confidence in the prospects of its real-world performance, Kao says. In this case, the results, if they're favorable, will strengthen Tempus's FDA submission for full approval, Hearst adds.
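Kao's overfitting concern, and the role of an external validation set, can be illustrated with a deliberately trivial "model": a risk-score cutoff fitted on one invented population and then re-scored on a second. All numbers here are made up for illustration.

```python
def fit_threshold(scores, labels):
    """Pick the cutoff that best separates positives from negatives
    in the training data (a stand-in for model fitting)."""
    best_t, best_acc = 0.0, -1.0
    for t in sorted(set(scores)):
        acc = sum((s >= t) == bool(y) for s, y in zip(scores, labels)) / len(labels)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def accuracy(scores, labels, t):
    return sum((s >= t) == bool(y) for s, y in zip(scores, labels)) / len(labels)

# Internal (training) population vs. external validation population.
train_scores, train_labels = [0.2, 0.4, 0.6, 0.9], [0, 0, 1, 1]
ext_scores, ext_labels = [0.3, 0.5, 0.7, 0.8], [0, 1, 1, 1]

t = fit_threshold(train_scores, train_labels)
internal = accuracy(train_scores, train_labels, t)  # flattering: fit to this data
external = accuracy(ext_scores, ext_labels, t)      # the honest number
```

The gap between `internal` and `external` is exactly what an external validation set exposes, and why regulators ask for one.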
The second phase of the Tempus-UCHealth partnership is boosting the number of previously recorded patient ECGs fed into the Tempus system to 45,000. The goal will be to validate results from a recent study that showed the model to be capable of using those same ECG traces to predict structural heart diseases (a group of conditions that adversely affect the valves, walls, chambers, or muscles of the heart) such as aortic stenosis and hypertrophic cardiomyopathy.
The third phase of the partnership will also look at structural heart disease, Hearst says, but prospectively; that is, feeding the Tempus platform ECG readings of current patients and seeing how well it predicts structural heart disease going forward. Should the results of these studies pan out, UCHealth and Tempus will work on how to integrate the AI-based results into UCHealth's electronic health record and, by extension, those of many other health systems, Hearst adds.
Success could save countless lives, and could have a particular impact on underserved communities in the United States and entire countries abroad. Kao, who has done medical trips to Zimbabwe, says that country has, in the past, had one or two 3D-ultrasound echocardiogram machines of the sort that cardiologists use to diagnose serious heart problems. It takes specialized training to interpret echocardiograms, and the machines themselves cost tens of thousands of dollars. Combining the outputs of an electrocardiogram machine that costs $500 to $2,000 with AI that can spot patients at high risk for Afib or structural heart diseases could identify those who would gain from preventative treatment. In places where cardiac specialists and higher-end diagnostics are available, ECG-based AI results can help filter patients so that those most likely to have problems see specialists first, Kao says.
Kao adds that he considers partnerships such as UCHealth's and Tempus's an exemplary innovation model, one which combines the rigor and clinical experience of academic medicine with the expertise and commercial motivation of industry.
"I don't know that one or the other can do it on their own," he said. "You need the strengths of both. It's hard to find partners that line up, but when you do, it's like lightning in a bottle. You've got to hold onto it."
The old and the new may not always harmonize, but Tempus, with a big assist from UCHealth, appears to be playing a tune that could help patients until AI is old hat, too.
See the rest here:
Posted in Artificial Intelligence
Comments Off on Harnessing artificial intelligence to predict atrial fibrillation, heart disease up to a year in advance – UCHealth Today
Artificial intelligence to save the day? How clever computers are helping us understand Huntington’s disease. – HDBuzz
Posted: at 2:51 pm
Scientists have developed a new model that maps out the different stages of Huntington's disease (HD) in detail. Using artificial intelligence approaches, the researchers were able to sift out information from large datasets gathered during observational trials, contributed by Huntington's disease patients. A team of researchers from IBM and the CHDI Foundation have published a new model of HD progression in the journal Movement Disorders that they hope will improve how HD clinical trials are designed in the future.
HD is caused by an expansion in the huntingtin gene which leads to the production of an expanded form of the huntingtin protein. Studies of lab models of HD, as well as of people carrying the HD gene, show that having the expanded gene and making the expanded form of the protein causes a cascade of problems. Starting with small molecular changes, people with HD will eventually end up experiencing a range of different symptoms related to thinking, movement and mood that get worse over time.
Symptoms of HD typically start to show between the ages of 30 and 50, but a number of factors influence when this happens. We have known for a long time that people with bigger expansions in their huntingtin gene tend to get symptoms earlier; healthy lifestyle choices like a balanced diet and regular exercise can delay symptom onset; and other so-called genetic modifiers can also influence how early the disease might affect a gene carrier.
However, there's still a lot we don't understand about how Huntington's disease progresses over time and how the symptoms get worse. To try and tackle this problem, scientists from around the world have run numerous observational trials and natural history studies where patients' symptoms, biomarkers, and other measurements are monitored over time. These include PREDICT-HD, REGISTRY, TRACK-HD, and Enroll-HD. Together these studies have generated very large datasets which comprise more than 2,000 different measurements recorded from 25,000 participants. This is tons of really helpful data, all made possible by the dedication of HD families to participating in these trials.
Scrutinising all these datasets at once can help scientists spot new patterns and draw novel conclusions, but doing this type of analysis manually is extremely laborious and challenging. This is where the clever computer scientists come in! Scientists are able to use cool new methods to get computers to look at all the data at the same time using special types of programs, often referred to as artificial intelligence or AI.
One commonly used AI approach is called machine learning. This type of AI software becomes better at predicting certain outcomes by building models from training datasets, which it uses to learn without being explicitly programmed to do so. Machine learning is a field in its own right in biomedical research, but it also has lots of other applications, for things like email filtering and speech recognition.
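As a minimal illustration of learning from labelled examples rather than explicit rules, here is a nearest-centroid classifier on invented two-feature data; this is a generic textbook sketch, not the method the IBM/CHDI team used.

```python
def fit_centroids(samples, labels):
    """'Train' by computing the mean feature vector (centroid) per class."""
    sums, counts = {}, {}
    for x, y in zip(samples, labels):
        acc = sums.setdefault(y, [0.0] * len(x))
        for i, v in enumerate(x):
            acc[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in acc] for y, acc in sums.items()}

def predict(centroids, x):
    """Assign x to the class with the nearest centroid (squared Euclidean)."""
    return min(centroids,
               key=lambda y: sum((a - b) ** 2 for a, b in zip(centroids[y], x)))

# Invented training examples: two measurements per participant.
train_x = [[1.0, 1.0], [1.2, 0.8], [4.0, 4.2], [3.8, 4.0]]
train_y = ["low", "low", "high", "high"]
model = fit_centroids(train_x, train_y)
```

Nobody wrote a rule saying "scores near 1 mean low"; the model derived that from the training examples, which is the essence of the approach described above.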
IBM and CHDI researchers used machine learning approaches to build and test a new model to understand how HD progresses and to categorise different disease stages. The model was then tested against a number of different measurements commonly collected and compiled in HD research that track disease progression, including the Unified Huntington's Disease Rating Scale (UHDRS), total functional capacity (TFC), and the CAG-age product, also called the CAP score.
The new model defines 9 states of HD, all specified by different measurements that assess movement, thinking, and day-to-day function. These states span from the early stages of the disease before motor symptoms begin, all the way through to the late-disease stages that have the most severe symptoms. The model was able to predict how likely participants in the studies were to transition between states as well as how long participants spend in the different phases of HD. While other studies have determined that the entire disease course occurs over a period of about 40 years, this is the first time researchers have predicted the expected amount of time HD patients will spend in each of the 9 states that were described in the new model.
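A model that predicts transitions between disease states and the time spent in each is naturally sketched as a Markov chain. The states and probabilities below are invented (and only three states, not nine); this is an illustration of the kind of calculation involved, not the published IBM/CHDI model.

```python
# P(next state | current state), per assessment visit — illustrative only.
transitions = {
    "pre-motor": {"pre-motor": 0.8, "early": 0.2},
    "early":     {"early": 0.75, "late": 0.25},
    "late":      {"late": 1.0},          # absorbing state in this toy chain
}

def expected_dwell(state):
    """Expected number of visits spent in `state` before leaving.

    If the chance of staying each visit is p, dwell time is geometric
    with mean 1 / (1 - p)."""
    p_stay = transitions[state].get(state, 0.0)
    return float("inf") if p_stay >= 1.0 else 1.0 / (1.0 - p_stay)
```

With these toy numbers, a participant would be expected to spend about five visits in the pre-motor state and four in the early state, which is the same kind of per-state duration estimate the 9-state model provides.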
Having this handy new 9-state model of HD progression can help scientists and clinicians learn more about the different stages of HD and how long it takes people with HD to move from one state to the next. With this information in hand, the researchers at IBM and CHDI believe this could help select the best-suited participants for particular HD clinical trials, identify robust biomarkers for monitoring how the disease progresses, and design better clinical trials.
This is an exciting step forward for HD research and we look forward to learning more about other AI applications in HD research as novel approaches are designed and this exciting field of science matures further.
Continue reading here:
Posted in Artificial Intelligence
Comments Off on Artificial intelligence to save the day? How clever computers are helping us understand Huntington’s disease. – HDBuzz
Taste of the future first artificial intelligence-created craft beer to be released at NOLA Brewing – WJTV
Posted: at 2:51 pm
NEW ORLEANS (WGNO) Locals will have a chance to try the first craft beer created by an artificial intelligence platform in June.
The AI Blonde Ale will be released at a launch party at NOLA Brewing on June 20 to coincide with CVPR, the world's premier computer vision event.
Derek Lintern, a brewer at NOLA Brewing, said he is excited to have a helping hand when it comes to crafting beer.
"It's state-of-the-art technology with the traditional brewing methods. It's pretty unique, and it's a recipe I would have never done normally, but I really like how it tastes. It's very refreshing and very easy drinking. I'm really happy with it," said Lintern.
The beer was an experiment between The Australian Institute for Machine Learning (AIML) and Barossa Valley Brewing (BVB), founded by D'Silva.
D'Silva said the idea all started with a beer.
"Yeah, that's how it started. It started with a beer. I'm sure a lot of ideas for companies have started over a beer. This started over a beer and ended up creating a beer and a company, which is great," said D'Silva.
The technology makes it easier for brewers to produce their products.
"About 10 million people review beers every day. There are all these sites, and they put it into the world basically to show people what they think of the beer. You do exactly the same thing: there are five questions. You scan a QR code, answer the five questions and rate the beer, and instead of it going into a website that maybe somebody reads, maybe not, artificial intelligence picks that up and it goes directly to the producer. The AI then takes all that data and manipulates a recipe, and then gives it to the producer: here, this is what the market's thinking," said D'Silva.
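The feedback loop D'Silva describes (crowd ratings aggregated and fed back into the recipe) can be sketched very simply. The five questions, the 1-5 scale, and the adjustment rule below are all invented for illustration; the real Deep Liquid system is not public.

```python
def aggregate(ratings):
    """Average each of the five 1-5 answers across all drinkers."""
    n = len(ratings)
    return [sum(r[i] for r in ratings) / n for i in range(5)]

def adjust_bitterness(current_ibu, avg_bitterness_score, target=3.0, step=2.0):
    """Nudge the recipe's bitterness (IBU) toward the crowd's preferred level:
    scores above target mean 'too bitter', so IBU comes down, and vice versa."""
    return current_ibu + step * (target - avg_bitterness_score)

# Three drinkers scan the QR code and answer the five questions.
ratings = [[4, 3, 5, 2, 4], [5, 2, 4, 3, 4], [3, 4, 3, 2, 5]]
avg = aggregate(ratings)                      # per-question averages
new_ibu = adjust_bitterness(20.0, avg[1])     # suppose question 2 is bitterness
```

The point of the sketch is the direction of data flow: ratings go to the producer as a concrete recipe change rather than sitting on a review site.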
Derek Lintern said the new technology is not meant to replace brewers, but to help with the process.
The technology helps create the recipe, but the beer is still brewed manually.
The AI beer will only be available in New Orleans for a limited time.
D'Silva said he is excited to bring something new to an amazing city. "I am so excited. I can't think of a better place to launch a beer," said D'Silva.
He added, "I am really keen for people to get down here and taste the future."
Anyone interested in attending the launch of the new beer can visit NOLA Brewing from 4 p.m. to 10 p.m. on Monday, June 20.
Deep Liquid is also offering 100 customers a free AI beer with their booking with Nola Pedal Barge and Nola Bike Bar.
They are offering $100 discount tickets to any of its private tours.
That includes any of the boat tours in Bayou Bienvenue as well as our pedal bike tour in the Bywater neighborhood.
For more information call (504) 264-1056 for Nola Pedal Barge and (504) 308-1041 for Nola Bike Bar.
Go here to read the rest:
Posted in Artificial Intelligence
Comments Off on Taste of the future first artificial intelligence-created craft beer to be released at NOLA Brewing – WJTV
The artificial intelligence hype is getting out of hand – The Telegraph
Posted: at 2:51 pm
I hope everyone is enjoying the latest breakthrough in artificial intelligence (AI) as much as I am.
In one of the latest AI developments, a new computer programme, DALL-E 2, generates images from a text prompt. Give it the phrase "Club Penguin Bin Laden", and it will go off and draw Osama as a cartoon penguin. For some, this was more than a bit of fun: it was further evidence that we shall soon be ruled by machines.
Sam Altman, chief executive of the now for-profit OpenAI, which provides the model that underpins DALL-E, suggested that artificial general intelligence (AGI) was close at hand. So too did Elon Musk, who co-founded Altman's venture. Musk even gave a year for when this would happen: 2029.
Yet when we look more closely, we see that DALL-E really isn't very clever at all. It's a crude collage maker, which only works if the instructions are simple and clear, such as "Easter Island Statue giving a TED Talk". It struggles with more subtle prompts and fails to render everyday objects: fingers are drawn as grotesque tubers, for example, and it can't draw a hexagon.
DALL-E is actually a lovely example of what psychologists call priming: because we're expecting to see a penguin Bin Laden, that's what we shall see, even if it looks like neither Osama nor a penguin.
"Impressive at first glance. Less impressive at second. Often, an utterly pointless exercise at the third," is how Filip Piekniewski, a scientist at Accel Robotics, describes such claims, and DALL-E very much conforms to this general rule.
Today's AI hyperbole has gotten completely out of hand, and it would be careless not to contrast the absurdity of the claims with reality, for the two are now seriously diverging. Three years ago Google chief executive Sundar Pichai told us that AI would be "more profound than fire or electricity". However, driverless cars are further away than ever, and AI has yet to replace a single radiologist.
There have been some small improvements to software processes, such as the wonderful way that old movie footage can be brought back to life by being upscaled to 4K resolution and 60 frames per second. Your smartphone camera now takes slightly better photos than it did five years ago. But as the years go by, the confident predictions that vast swathes of white collar jobs in finance, media and law would disappear look like a fantasy.
Any economist who confidently extrapolates from AI ventures such as DALL-E to profound structural economic changes of the sort that affect GDP should keep those shower thoughts to themselves. This wild extrapolation was given a name by the philosopher Hubert Dreyfus, who brilliantly debunked the first great AI hype of the 1960s. He called it the "first step fallacy".
His brother, Stuart, a true AI pioneer, explained it like this: "It was like claiming that the first monkey that climbed a tree was making progress towards landing on the moon."
Today's misleadingly named "deep learning" is simply a brute-force statistical approximation, made possible by computers being able to crunch far more data than they once could, to find statistical regularities or patterns.
AI has become good at the act of mimicry and pastiche, but it has no idea of what it is drawing or saying. It's brittle and breaks easily. And over the past decade it has got bigger but not much smarter, meaning the fundamental problems remain unsolved.
Earlier this year the neuroscientist, entrepreneur and serial critic of AI, Gary Marcus, had had enough. Taking Musk up on his 2029 prediction, Marcus challenged the founder of Tesla to a bet. By 2029, he posited, AI models like GPT, which uses deep learning to produce human-like text, should be able to pass five tests. For example, they should be able to read a book and reliably answer questions on its plot, characters and their motivations.
A foundation agreed to host the wager, and the stake rose to $500,000 (£409,000). Musk didn't take up the bet. For his pains, Marcus has found himself labelled as what the Scientologists call a "suppressive". This is not a sector that responds to criticism well: when GPT was launched, Marcus and similarly sceptical researchers were promised access to the system. He never got it.
"We need much tighter regulation around AI and even claims about AI," Marcus told me last week. But that's only half the picture.
I think the reason we're so easily fooled by the output of AI models is that, like Agent Mulder in The X-Files, we want to believe. The Google engineer who became convinced his chatbot had developed a soul was one such example, but it is also journalists who seem to want to believe in magic more than anyone.
The Economist devoted an extensive 4,000-word feature last week to the claim that huge "foundation models" are turbo-charging AI progress, but ensured the magic spell wasn't broken by only quoting the faithful, and not critics like Marcus.
In addition, a lot of people are doing rather well as things are, waffling about a hypothetical future that may never arrive. Quangos abound: for example, the UK's research funding body recently threw £3.5m of taxpayers' money towards a programme called "Enabling a Responsible AI Ecosystem".
It doesn't pay to say the emperor has no clothes: the courtiers might be out of a job.
Excerpt from:
The artificial intelligence hype is getting out of hand - The Telegraph
Posted in Artificial Intelligence
Comments Off on The artificial intelligence hype is getting out of hand – The Telegraph