Category Archives: Artificial Intelligence
MS in Artificial Intelligence | University of Michigan-Dearborn
Posted: June 20, 2022 at 2:51 pm
The Artificial Intelligence master's degree program is designed as a 30-credit-hour curriculum that gives students a comprehensive framework for artificial intelligence with one of four concentration areas: (1) Computer Vision, (2) Intelligent Interaction, (3) Machine Learning, and (4) Knowledge Management and Reasoning.
Students will engage in an extensive core curriculum intended to develop depth in all the core concepts that build a foundation for artificial intelligence theory and practice. They will also have the opportunity to build on this core knowledge of AI by taking a variety of elective courses from colleges throughout campus to explore key contextual areas or more complex technical AI applications.
The program will be accessible to both full-time and part-time students, aiming to train students who aspire to have AI research and development (R&D) or leadership careers in industry. To accommodate the needs of working professionals who might be interested in this degree program, the course offerings for the MS in AI will be in the late afternoon and evening hours to allow students to earn the degree through part-time study. The program may be completed entirely on campus, entirely online, or through a combination of on-campus and online courses.
If you have additional questions, please contact the program director: Dr. Jin Lu (jinluz@umich.edu).
Artificial Intelligence On The Hunt For Illegal Nuclear Material – Texas A&M University Today
Posted: at 2:51 pm
Nuclear engineering doctoral student Sean Martinson works on plutonium solution purification inside a protective glove box in Sunil Chirayath's nuclear forensics laboratory.
Justin Elizalde/Texas A&M Engineering
Millions of shipments of nuclear and other radiological materials are moved in the U.S. every year for good reasons, including health care, power generation, research and manufacturing. But there remains the threat that bad actors in possession of stolen or illegally produced nuclear materials or weapons will try to smuggle them across borders for nefarious purposes.
Texas A&M University researchers are making it harder for them to succeed.
If border agents intercept illicit nuclear materials, investigators need to know who produced them and where they came from. Fortunately, nuclear materials carry certain forensic markers that can reveal valuable information, much like fingerprints can identify criminals.
For instance, when scientists examine the concentration of certain key contaminant isotopes in separated plutonium samples, they can determine three attributes of a sample's history: the type of nuclear reactor that produced it, how long the plutonium or uranium was contained in the reactor, and how long ago it was produced.
With current statistical methodologies, they can determine these three attributes using a generated database that stores how the attributes vary mathematically across nuclear reactor types, and emerge with a good idea of who made the material.
"But what if investigators are presented with a mixed plutonium sample?" said Sunil Chirayath, author of a new study on nuclear forensics recently published in the journal Nuclear Science and Engineering. "Suppose the adversary is mixing materials from two nuclear reactors at two different times, and that material is cooled for different times. A bad actor might do this intentionally to disguise it."
Mixed samples of nuclear material are significantly more challenging to identify with traditional methodologies. In a real-world situation, the extra time required could have a catastrophic impact on the global community.
To improve the process, Chirayath, associate professor in the Department of Nuclear Engineering and director of the Texas A&M Engineering Experiment Station's Center for Nuclear Security Science and Policy Initiatives, along with his research team, has developed a methodology using machine learning, a type of artificial intelligence.
He can produce identifying markers through simulations, then store that data in a 3D database. Each attribute is one level of the database, and a standard computer can quickly process the data and lead investigators to the reactor type that produced the plutonium sample and, potentially, to the suspects, once other pieces of the puzzle gathered through traditional forensics are joined in.
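The study itself doesn't publish code, but the classification step it describes, mapping measured isotope markers to a reactor type, can be sketched with an off-the-shelf classifier. Everything below (the feature count, the three reactor classes, the random placeholder data) is an assumption for illustration, not the authors' pipeline:

```python
# Minimal sketch of reactor-type attribution, NOT the authors' code.
# Each row stands for a simulated plutonium sample; columns are
# contaminant-isotope markers; labels are hypothetical reactor types.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_markers = 5000, 8
X = rng.normal(size=(n_samples, n_markers))     # placeholder simulation data
y = rng.integers(0, 3, size=n_samples)          # 3 hypothetical reactor types

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# Given markers measured from an interdicted sample, predict its origin.
print(clf.predict(X_test[:1]))
```

In the real workflow the training rows would come from reactor-physics simulations rather than random numbers, which is what makes the attribution meaningful.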
Three experiments irradiating uranium in three different reactor types, with post-irradiation examinations, have been conducted at Texas A&M to date. Without knowing the samples' origins, doctoral student researcher Patrick O'Neal successfully identified where each of the plutonium samples was produced by using machine learning.
The work is being done through a consortium of national labs and universities funded by the U.S. Department of Energy's National Nuclear Security Administration. The consortium focuses on developing new methods of detecting and deterring nuclear proliferation and on educating the next generation of nuclear security professionals. Chirayath's team will soon run one more irradiation and the corresponding post-irradiation examination with funding already in place.
The next step is to take this machine-learning methodology to high-level government labs, where researchers can work with much larger samples of nuclear materials. University labs are constrained by more restrictive irradiation safety limits.
Chirayath is confident efforts to prevent nuclear proliferation are working. The international Treaty on the Non-Proliferation of Nuclear Weapons arose from concern about atomic weaponry, and all but four countries (India, Israel, Pakistan and South Sudan) signed it. North Korea signed it but later withdrew.
Chirayath also notes that with the rise in nuclear energy production comes an increased risk that the technology will be used to make weapons capable of mass destruction.
"We have to make sure materials are not diverted from peaceful use," he said. "We need to double up our tools and methodologies, but it's not just technical tools. We also have to double up on policies and agreements to prevent proliferation from happening."
5G and AI use cases – how 5G lifts artificial intelligence – Information Age
Posted: at 2:51 pm
Remote working: 5G will accelerate robots at work in the field
5G will unleash the potential of AI, says Michael Baxter. But how will AI and 5G most affect our everyday business lives? What are 5G and AI use cases?
Convergence makes 5G and AI use cases exciting: 5G could unleash the artificial intelligence revolution, moving it into a different league and creating new AI use cases.
When Apple launched the iPhone, few people understood its significance. There was a reason for this: at the time of the product announcement in 2007, wireless internet speeds were quite slow. 3G was launched by NTT DoCoMo in 2001, but network rollout had been gradual at best. It was the convergence of touchscreen phones with 3G, and then 4G, which created a demand hardly anybody had anticipated.
There was also a catch: as popular applications emerged, demand for 3G rocketed and the network became stretched.
According to Allied Market Research, the global 5G market will grow from a valuation of $5.13bn in 2020 to $797.8bn by 2030.
IDTechEx has drawn similar conclusions. In a recent report, it concluded: "The 5G market is just about to take off." It forecast that by the end of 2032, consumer mobile services applying 5G technologies will generate around $800bn in revenues.
Dr Yu-Han Chang, Technology Analyst for 5G at IDTechEx, says: "5G enables greater data flows and quicker data collection, allowing AI to generate more accurate models and predictions."
5G and AI together will speed the evolution of a fully connected and intelligent world.
5G is slated to offer speeds in excess of 100 times faster than 4G, which seems like an incredible increase, but applications will emerge to fill the opportunity created.
As ever, with these things, when you drill down, complications emerge. There are, in fact, two distinct 5G networks.
At one level, mmWave, also called 5G II, operates at 100 MHz, provides between 24 and 100 GHz (gigahertz), and offers extremely impressive latency, but is limited to a range of 300 metres.
By contrast, Sub-6 GHz, or 5G I, operates at 50 MHz, has inferior latency to 5G II but superior to 4G, provides between 3.5 and 7 GHz, and has a range of 1.5 kilometres.
In other words, 5G II can support more powerful applications, but because of its short range it requires more investment in infrastructure. Consequently, to date, most 5G rollout has been of 5G I.
The implications for AI will not be immediate, but they will be highly significant.
Although AI is probably more common than is generally supposed, its impact has been limited to date. So, while most of us use AI without necessarily realising it, for example, when we use our smartphone as a navigation tool, AI's real impact lies ahead.
The growing importance of AI will go hand in hand with the emergence of 5G. The convergence of the two technologies will have an enormous impact on us all, will have huge economic significance and will transform business.
The Internet of Things (IoT) will underpin the convergence between 5G and AI.
Adam Bujak, CEO and Co-founder at KYP.ai, the process intelligence company, said that 5G will power the growth of the IoT. It will allow organisations to use more connected devices and intelligent sensors.
"We will be able to conduct our processes more digitally in the physical world and online by using connected devices and services. Therefore, we'll see the growth of 'phygital' [physical + digital] products and services, including virtual reality modes of operations and customer interactions."
IDTechEx says that 5G's [especially mmWave's] high throughput and ultra-low latency enable it to tap into various high-value sectors, such as 3D robotic control, virtual reality monitoring, and remote medical control, that earlier technologies couldn't.
We will see connectivity between products like never before. At one level, we might see coffee cups communicate with coffee vending machines, saying, "I'm empty." But at another level, we will see the connectivity of autonomous vehicles, which will be of massive significance to the future of transport.
The data collected by the IoT will also provide the kind of ammunition that machine learning or AI needs to develop and create greater insights.
The convergence of 5G and AI will underpin the emergence of the metaverse.
The 2021 hype concerning the metaverse has partially turned to cynicism. Part of the issue here relates to the definition of the metaverse. At one level, it conjures up thoughts of a Matrix-type world, but in reality, its meaning is more prosaic. I have heard people say a Zoom call involves the metaverse, and they define it as combining digital and physical worlds.
Virtual and augmented reality or immersive reality will underpin the metaverse, and 5G and AI will transform it.
The convergence of 5G and AI will create new use cases in games and streaming services, for example, offering 3D and virtual reality viewing, and it will support how we communicate. It will also change social media.
The convergence of AI and 5G will also create tools we will use in our daily lives, such as real-time language translation tools.
Business-to-business applications will be many, but one of the most important aspects will be the support 5G gives remote working. Take as an example how Grammarly supports communications by text. But as virtual and augmented reality technologies advance, it is not difficult to imagine how 5G and AI can transform not only remote work but mobile work.
Still, with B2B, there is also the issue of automation technologies.
Adam Bujak says: "5G will extend the reach of digital transformation and bring us more opportunities for innovation and automation. We will have more data and insights from all these phygital processes and connected devices, allowing us to train AI and to tap it for business and process analytics. In turn, there will be more possibilities for outcome-driven intelligent automation of services."
Office automation is one opportunity, but 5G and AI in combination will also support industry and manufacturing. At one level, they will support the maintenance of equipment, monitoring machinery and identifying potential issues in advance; they also present the enticing prospect of remote operation of machinery.
The connectivity of transport, including autonomous vehicles, drones, and transport infrastructure (ensuring traffic lights support optimal traffic flow, for example), will be transformed by AI and 5G working in parallel.
The opportunities presented by AI and 5G in healthcare are multiple, but one of the most enticing will relate to remote monitoring of patients when they are out.
But many more AI and 5G use cases will emerge; the above is just the beginning. The convergence of these technologies will prove incredibly important and will unleash AI, finally justifying much of the hype seen over the last decade.
";jQuery("#BH_IA_MPU_RIGHT_MPU_1").insertAfter(jQuery(".single .post-story p:nth-of-type(5)"));//googletag.cmd.push(function() { googletag.display('BH_IA_MPU_INPAGE_MPU_1'); });}else {}});
Harnessing artificial intelligence to predict atrial fibrillation, heart disease up to a year in advance – UCHealth Today
Posted: at 2:51 pm
Some mashups of old and new don't quite work: horse carriages and reusable rocket boosters, say. But combine a 19th-century medical advance with 21st-century computing technologies, and one now has the ability to identify patients at high risk of atrial fibrillation up to a year in advance. That breakthrough, clinicians hope, will help prevent thousands of strokes that hard-to-diagnose Afib causes each year. It also could spot structural heart diseases earlier, improving outcomes through more timely treatment.
The old technology in question is the electrocardiogram (ECG or EKG), a heart-voltage detector invented in 1895 that's been a low-cost, mainstay cardiac diagnostic pretty much ever since. The modern computing technology at play involves machine-learning algorithms developed by Chicago-based Tempus and Pennsylvania-based Geisinger Health. Those algorithms feed into a convolutional neural network that interprets the waveform, the shape of an ECG's many spikes, dips and subtle undulations, in ways no human ever could.
Tempus's artificial intelligence (AI) works on the same basic architecture as what YouTube uses to scan image data to identify cat videos, or self-driving cars use to identify objects in the road, explains Noah Zimmerman, Tempus's vice president for translational science.
"An EKG is measuring voltage, right? We look at those voltages and treat them almost like image data," he said.
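Tempus hasn't published its architecture, but Zimmerman's "treat them almost like image data" points at a one-dimensional convolutional network sliding filters along the voltage traces. Here is a deliberately tiny sketch of that idea in PyTorch; the layer sizes, sampling rate and two-class output are all assumptions for illustration, not the company's model:

```python
# Illustrative only: a tiny 1D CNN over 12-lead ECG voltage traces.
import torch
import torch.nn as nn

class ECGNet(nn.Module):
    def __init__(self, n_leads=12, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_leads, 32, kernel_size=7, padding=3),  # filters slide over time
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),       # collapse time axis to one summary per filter
        )
        self.head = nn.Linear(64, n_classes)  # e.g. "Afib within a year" yes/no

    def forward(self, x):                  # x: (batch, 12 leads, time samples)
        return self.head(self.features(x).squeeze(-1))

model = ECGNet()
trace = torch.randn(1, 12, 5000)           # ~10 s of a 12-lead ECG at 500 Hz (assumed)
print(model(trace).shape)                  # torch.Size([1, 2])
```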
While studies in such journals as Circulation and Nature Medicine have shown this old-plus-new approach to work surprisingly well, to the point that the U.S. Food and Drug Administration is fast-tracking the technology, the AI tool needs more testing. Further, once that training has Tempus's ECG Analysis Platform ready for prime time, the new diagnostic must be incorporated into health care processes so doctors can make the most of it for their patients. In pursuit of those ends, Tempus is partnering with the UCHealth CARE Innovation Center.
That partnership is proceeding in three phases, says Emily Hearst, UCHealth's Tempus project manager. The first is looking retrospectively at the ECGs of 5,000 UCHealth patients and seeing if Tempus's ECG Analysis Platform can repeat the sort of results it has delivered before. That Circulation study involved feeding the ECG Analysis Platform 12-lead digital ECG traces from 430,000 patients collected from 1984 to 2019. Using historical data allowed Tempus and Geisinger Health to see how the AI system's Afib predictions tracked with future Afib-related strokes. The system spotted nearly two-thirds of patients with no documented history of Afib (Afib being episodic and often without symptoms, it can go undetected for years) who later had an Afib-related stroke.
Why would examining a fraction as many patient ECGs as Tempus already did help prove out, much less improve, the platform? Dr. David Kao, the University of Colorado School of Medicine and UCHealth cardiologist who is working closely with Tempus, says it's about diversity. People are people, but the population makeups of Pennsylvania and Colorado differ. An AI-based system's intelligence must reflect that.
"Overfitting is a huge problem in machine learning, which means it can perform very well in your initial, however-large dataset, but then it doesn't work anywhere else," Kao said.
What's called an external validation set, one using a different patient population than the initial training set, can both refine the model and lend its creators, as well as regulators, more confidence in the prospects of its real-world performance, Kao says. In this case, the results, if they're favorable, will strengthen Tempus's FDA submission for full approval, Hearst adds.
The second phase of the Tempus-UCHealth partnership is boosting the number of previously recorded patient ECGs fed into the Tempus system to 45,000. The goal will be to validate results from a recent study that showed the model to be capable of using those same ECG traces to predict structural heart diseases, a group of conditions that adversely affect the valves, walls, chambers or muscles of the heart, such as aortic stenosis and hypertrophic cardiomyopathy.
The third phase of the partnership will also look at structural heart disease, Hearst says, but prospectively; that is, feeding the Tempus platform ECG readings of current patients and seeing how well it predicts structural heart disease going forward. Should the results of these studies pan out, UCHealth and Tempus will work on how to integrate the AI-based results into UCHealth's electronic health record and, by extension, those of many other health systems, Hearst adds.
Success could save countless lives, and could have a particular impact on underserved communities in the United States and entire countries abroad. Kao, who has done medical trips to Zimbabwe, says that country has, in the past, had one or two 3D-ultrasound echocardiogram machines of the sort that cardiologists use to diagnose serious heart problems. It takes specialized training to interpret echocardiograms, and the machines themselves cost tens of thousands of dollars. Combining the outputs of an electrocardiogram machine that costs $500 to $2,000 with AI that can spot patients at high risk for Afib or structural heart diseases could identify those who would gain from preventative treatment. In places where cardiac specialists and higher-end diagnostics are available, ECG-based AI results could filter patients so that those most likely to have problems see specialists first, Kao says.
Kao adds that he considers partnerships such as UCHealth's and Tempus's an exemplary innovation model, one which combines the rigor and clinical experience of academic medicine with the expertise and commercial motivation of industry.
"I don't know that one or the other can do it on their own," he said. "You need the strengths of both. It's hard to find partners that line up, but when you do, it's like lightning in a bottle. You've got to hold onto it."
The old and the new may not always harmonize, but Tempus, with a big assist from UCHealth, appears to be playing a tune that could help patients until AI is old hat, too.
Artificial intelligence to save the day? How clever computers are helping us understand Huntington’s disease. – HDBuzz
Posted: at 2:51 pm
Scientists have developed a new model that maps out the different stages of Huntington's disease (HD) in detail. Using artificial intelligence approaches, the researchers were able to sift information out of large datasets contributed by Huntington's disease patients during observational trials. A team of researchers from IBM and the CHDI Foundation have published a new model of HD progression in the journal Movement Disorders that they hope will improve how HD clinical trials are designed in the future.
HD is caused by an expansion in the huntingtin gene, which leads to the production of an expanded form of the huntingtin protein. Studies of lab models of HD, as well as of people carrying the HD gene, show that having the expanded gene and making the expanded form of the protein causes a cascade of problems. Starting with small molecular changes, people with HD will eventually experience a range of different symptoms related to thinking, movement and mood that get worse over time.
Symptoms of HD typically start to show between the ages of 30 and 50, but a number of factors influence when this happens. We have known for a long time that people with bigger expansions in their huntingtin gene tend to get symptoms earlier, that healthy lifestyle choices like a balanced diet and regular exercise can delay symptom onset, and that other so-called genetic modifiers can also influence how early the disease might affect a gene carrier.
However, there's still a lot we don't understand about how Huntington's disease progresses over time and how the symptoms get worse. To try to tackle this problem, scientists from around the world have run numerous observational trials and natural history studies where patients' symptoms, biomarkers, and other measurements are monitored over time. These include PREDICT-HD, REGISTRY, TRACK-HD, and Enroll-HD. Together these studies have generated very large datasets comprising more than 2,000 different measurements recorded from 25,000 participants. This is tons of really helpful data, all made possible by the dedication of HD families to participating in these trials.
Scrutinising all these datasets at once can help scientists spot new patterns and draw novel conclusions, but doing this type of analysis manually is extremely laborious and challenging. This is where the clever computer scientists come in! Scientists are able to use cool new methods to get computers to look at all the data at the same time, using special types of programs often referred to as artificial intelligence, or AI.
One commonly used AI approach is called machine learning. This type of AI software becomes better at predicting certain outcomes by building models from training datasets, which it uses to learn without being explicitly programmed to do so. Machine learning is a field in its own right in biomedical research but also has lots of other applications, for things like email filtering and speech recognition.
IBM and CHDI researchers used machine learning approaches to build and test a new model to understand how HD progresses and to categorise different disease stages. The model was then tested against a number of different measurements commonly collected and compiled in HD research that track disease progression, including the Unified Huntington's Disease Rating Scale (UHDRS), total functional capacity (TFC), and the CAG-age product, also called the CAP score.
The new model defines 9 states of HD, all specified by different measurements that assess movement, thinking, and day-to-day function. These states span from the early stages of the disease before motor symptoms begin, all the way through to the late-disease stages that have the most severe symptoms. The model was able to predict how likely participants in the studies were to transition between states as well as how long participants spend in the different phases of HD. While other studies have determined that the entire disease course occurs over a period of about 40 years, this is the first time researchers have predicted the expected amount of time HD patients will spend in each of the 9 states that were described in the new model.
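The paper's parameters aren't reproduced here, but the mechanics of "how likely to transition between states" and "expected time in each state" can be illustrated with a toy Markov chain. The numbers below are invented for the sketch; the real model has 9 states fitted to patient data:

```python
# Toy Markov-chain view of disease-stage progression; the transition
# probabilities below are made up, not from the IBM/CHDI model.
import numpy as np

# P[i, j] = probability of moving from state i to state j per visit.
P = np.array([
    [0.90, 0.10, 0.00],   # an early stage mostly stays early
    [0.00, 0.80, 0.20],
    [0.00, 0.00, 1.00],   # the last stage is absorbing in this toy
])

# With geometric dwell times, the expected number of visits spent in
# state i before leaving is 1 / (1 - P[i, i]).
for i in range(2):
    print(f"state {i}: expected dwell = {1 / (1 - P[i, i]):.1f} visits")
```

A fitted model of this general shape is what lets researchers turn observed visit-to-visit changes into expected durations for each stage.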
Having this handy new 9-state model of HD progression can help scientists and clinicians learn more about the different stages of HD and the timeframes it takes people with HD to move from one state to the next. With this information in hand, the researchers at IBM and CHDI believe this could help select the best-suited participants for particular HD clinical trials, identify robust biomarkers for monitoring how the disease progresses, and also help design better clinical trials.
This is an exciting step forward for HD research and we look forward to learning more about other AI applications in HD research as novel approaches are designed and this exciting field of science matures further.
Taste of the future: first artificial intelligence-created craft beer to be released at NOLA Brewing – WJTV
Posted: at 2:51 pm
NEW ORLEANS (WGNO) – Locals will have a chance to try the first craft beer created by an artificial intelligence platform in June.
The AI Blonde Ale will be released at a launch party at NOLA Brewing on June 20 to coincide with CVPR, the world's premier computer vision event.
Derek Lintern, a brewer at NOLA Brewing, said he is excited to have a helping hand when it comes to crafting beer.
"It's state-of-the-art technology with traditional brewing methods. It's pretty unique, and it's a recipe I would never have done normally, but I really like how it tastes. It's very refreshing and very easy drinking. I'm really happy with it," said Lintern.
The beer was an experiment between the Australian Institute for Machine Learning (AIML) and Barossa Valley Brewing (BVB), founded by D'Silva.
D'Silva said the idea all started with a beer.
"Yeah, that's how it started. It started with a beer. I'm sure a lot of ideas for companies have started over a beer. This started over a beer and ended up creating a beer and a company, which is great," said D'Silva.
The technology makes it easier for brewers to produce their products.
"About 10 million people review beers every day. There are all these sites, and they put it into the world basically to show people what they think of the beer. You do exactly the same thing: there are five questions, you scan a QR code, answer five questions, and you rate the beer. And instead of it going into a website that maybe somebody reads, maybe not, artificial intelligence picks that up and it goes directly to the producer. The AI then takes all that data and manipulates a recipe and then gives it to the producer: here, this is what the market's thinking," said D'Silva.
Derek Lintern said the new technology is not meant to replace brewers, but to help with the process.
The technology helps create the recipe, but the beer is still brewed manually.
The AI beer will only be available in New Orleans for a limited time.
D'Silva said he is excited to bring something new to an amazing city. "I am so excited. I can't think of a better place to launch a beer," he said.
He added: "I am really keen for people to get down here and taste the future."
Anyone interested in attending the launch of the new beer can visit NOLA Brewing from 4 p.m. to 10 p.m. on Monday, June 20.
Deep Liquid is also offering 100 customers a free AI beer with their booking with Nola Pedal Barge and Nola Bike Bar.
They are offering $100 discount tickets to any of its private tours.
That includes any of the boat tours in Bayou Bienvenue as well as our pedal bike tour in the Bywater neighborhood.
For more information, call (504) 264-1056 for Nola Pedal Barge and (504) 308-1041 for Nola Bike Bar.
The artificial intelligence hype is getting out of hand – The Telegraph
Posted: at 2:51 pm
I hope everyone is enjoying the latest breakthrough in artificial intelligence (AI) as much as I am.
In one of the latest AI developments, a new computer programme, DALL-E 2, generates images from a text prompt. Give it the phrase "Club Penguin Bin Laden", and it will go off and draw Osama as a cartoon penguin. For some, this was more than a bit of fun: it was further evidence that we shall soon be ruled by machines.
Sam Altman, chief executive of the now for-profit OpenAI company, which provides the model that underpins DALL-E, suggested that an artificial general intelligence (AGI) was close at hand. So too did Elon Musk, who co-founded Altman's venture. Musk even gave a year for when this would happen: 2029.
Yet when we look more closely, we see that DALL-E really isn't very clever at all. It's a crude collage maker, which only works if the instructions are simple and clear, such as "Easter Island statue giving a TED Talk". It struggles with more subtle prompts and fails to render everyday objects: fingers are drawn as grotesque tubers, for example, and it can't draw a hexagon.
DALL-E is actually a lovely example of what psychologists call priming: because we're expecting to see a penguin Bin Laden, that's what we shall see, even if it looks like neither Osama nor a penguin.
"Impressive at first glance. Less impressive at second. Often, an utterly pointless exercise at the third," is how Filip Piekniewski, a scientist at Accel Robotics, describes such claims, and DALL-E very much conforms to this general rule.
Today's AI hyperbole has gotten completely out of hand, and it would be careless not to contrast the absurdity of the claims with reality, for the two are now seriously diverging. Three years ago Google chief executive Sundar Pichai told us that AI would be "more profound than fire or electricity". However, driverless cars are further away than ever, and AI has yet to replace a single radiologist.
There have been some small improvements to software processes, such as the wonderful way that old movie footage can be brought back to life by being upscaled to 4K resolution and 60 frames per second. Your smartphone camera now takes slightly better photos than it did five years ago. But as the years go by, the confident predictions that vast swathes of white-collar jobs in finance, media and law would disappear look like a fantasy.
Any economist who confidently extrapolates profound structural economic changes, of the sort of magnitude that affects GDP, from AI ventures such as DALL-E should keep those shower thoughts to themselves. This wild extrapolation was given a name by the philosopher Hubert Dreyfus, who brilliantly debunked the first great AI hype of the 1960s. He called it the "first step fallacy".
His brother, Stuart, a true AI pioneer, explained it like this: "It was like claiming that the first monkey that climbed a tree was making progress towards landing on the moon."
Today's misleadingly named "deep learning" is simply a brute-force statistical approximation, made possible by computers being able to crunch a lot more data than they once could, to find statistical regularities or patterns.
AI has become good at the act of mimicry and pastiche, but it has no idea of what it is drawing or saying. It's brittle and breaks easily. And over the past decade it has got bigger but not much smarter, meaning the fundamental problems remain unsolved.
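A toy illustration of the "statistical approximation" point, nothing to do with any particular model: a curve fit can match its training range closely yet break as soon as it is asked to step outside it.

```python
# Fit a polynomial to samples of sin(x) on [0, 3], then evaluate it
# inside and outside the training range. The fit captures the pattern,
# not the function, so extrapolation fails badly.
import numpy as np

x_train = np.linspace(0, 3, 30)
coeffs = np.polyfit(x_train, np.sin(x_train), deg=5)  # brute-force fit

for x in (1.5, 6.0):   # in-range vs out-of-range point
    err = np.polyval(coeffs, x) - np.sin(x)
    print(f"x = {x}: error = {err:+.4f}")   # tiny in range, large outside
```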
Earlier this year, Gary Marcus, the neuroscientist, entrepreneur and serial critic of AI, had had enough. Taking Musk up on his 2029 prediction, Marcus challenged the founder of Tesla to a bet. By 2029, he posited, AI models like GPT, which uses deep learning to produce human-like text, should be able to pass five tests. For example, they should be able to read a book and reliably answer questions on its plot, characters and their motivations.
A foundation agreed to host the wager, and the stake rose to $500,000 (£409,000). Musk didn't take up the bet. For his pains, Marcus has found himself labelled as what the Scientologists call a "suppressive". This is not a sector that responds to criticism well: when GPT was launched, Marcus and similarly sceptical researchers were promised access to the system. He never got it.
"We need much tighter regulation around AI and even claims about AI," Marcus told me last week. But that's only half the picture.
I think the reason we're so easily fooled by the output of AI models is that, like Agent Mulder in The X-Files, we want to believe. The Google engineer who became convinced his chatbot had developed a soul was one such example, but it is also journalists who seem to want to believe in magic more than anyone.
The Economist devoted an extensive 4,000-word feature last week to the claim that huge "foundation models" are turbo-charging AI progress, but ensured the magic spell wasn't broken by quoting only the faithful, and not critics like Marcus.
In addition, a lot of people are doing rather well as things are, waffling about a hypothetical future that may never arrive. Quangos abound: the UK's research funding body, for example, recently threw £3.5m of taxpayers' money at a programme called "Enabling a Responsible AI Ecosystem".
It doesn't pay to say the emperor has no clothes: the courtiers might be out of a job.
Artificial intelligence has reached a threshold. And physics can help it break new ground – Interesting Engineering
Posted: at 2:51 pm
For years, physicists have been making major advances and breakthroughs in the field using their minds as their primary tools. But what if artificial intelligence could help with these discoveries?
Last month, researchers at Duke University demonstrated that incorporating known physics into machine learning algorithms could yield new levels of discovery about material properties, according to a press release from the institution. They undertook a first-of-its-kind project in which they constructed a machine-learning algorithm to deduce the properties of a class of engineered materials known as metamaterials and to determine how they interact with electromagnetic fields.
The results proved extraordinary. The new algorithm accurately predicted the metamaterials' properties more efficiently than previous methods while also providing new insights.
"By incorporating known physics directly into the machine learning, the algorithm can find solutions with less training data and in less time," said Willie Padilla, professor of electrical and computer engineering at Duke. "While this study was mainly a demonstration showing that the approach could recreate known solutions, it also revealed some insights into the inner workings of non-metallic metamaterials that nobody knew before."
In their new work, the researchers focused on making discoveries that were accurate and made sense.
"Neural networks try to find patterns in the data, but sometimes the patterns they find don't obey the laws of physics, making the model it creates unreliable," said Jordan Malof, assistant research professor of electrical and computer engineering at Duke. "By forcing the neural network to obey the laws of physics, we prevented it from finding relationships that may fit the data but aren't actually true."
They did that by imposing on the neural network a physics model called a Lorentz model: a set of equations that describe how the intrinsic properties of a material resonate with an electromagnetic field. This, however, was no easy feat to achieve.
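The article doesn't reproduce the equations, but the textbook single-resonance Lorentz oscillator gives their flavour (the Duke model may well use a multi-oscillator variant of this form). It expresses the relative permittivity as a function of angular frequency:

```latex
% Single-resonance Lorentz oscillator (textbook form)
\[
  \epsilon(\omega) = \epsilon_\infty
    + \frac{\omega_p^{2}}{\omega_0^{2} - \omega^{2} - i\gamma\omega}
\]
% \epsilon_\infty : high-frequency permittivity
% \omega_p        : plasma (oscillator-strength) frequency
% \omega_0        : resonance frequency
% \gamma          : damping rate
```

Constraining a network's outputs to parameters of a form like this is what keeps its predictions physically plausible, at the cost of making training harder to tune.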
"When you make a neural network more interpretable, which is in some sense what we've done here, it can be more challenging to fine-tune," said Omar Khatib, a postdoctoral researcher working in Padilla's laboratory. "We definitely had a difficult time optimizing the training to learn the patterns."
The researchers were pleasantly surprised to find that this model worked more efficiently than previous neural networks the group had created for the same tasks, dramatically reducing the number of parameters needed for the model to determine the metamaterial properties. The new model could even make discoveries all on its own.
Now, the researchers are getting ready to use their approach on uncharted territory.
"Now that we've demonstrated that this can be done, we want to apply this approach to systems where the physics is unknown," Padilla said.
"Lots of people are using neural networks to predict material properties, but getting enough training data from simulations is a giant pain," Malof added. "This work also shows a path toward creating models that don't need as much data, which is useful across the board."
The study is published in the journal Advanced Optical Materials.
UAE: MBZUAI ranked 30th globally in universities specialising in Artificial Intelligence – Khaleej Times
Posted: at 2:51 pm
The institution was founded just two years ago
Published: Mon 20 Jun 2022, 6:23 PM
The Mohamed bin Zayed University of Artificial Intelligence (MBZUAI) has been ranked 127th globally among institutions that conduct research in computer science, including AI, systems, theory, and "interdisciplinary" areas such as robotics, computer graphics, visualization, and more.
This ranking comes in just a short, two-year time span since the university was founded.
The ranking places MBZUAI alongside institutions such as Notre Dame, the University of Liverpool, the Weizmann Institute of Science, Osaka University, École Normale Supérieure and other prestigious schools.
In the areas that MBZUAI currently focuses on (artificial intelligence, computer vision, machine learning, and natural language processing), MBZUAI ranks 30th globally, ahead of several renowned research universities worldwide, such as the University of Michigan, Georgia Tech, and the University of Toronto in North America; Imperial College London, EPFL, and the Max Planck Society in Europe; and the University of Tokyo, Seoul National University, and the University of Sydney in Asia-Pacific. MBZUAI can now claim to be the top-ranked CS institution in the Arab world, and in the Middle East and Africa (CSRankings counts Israel as part of Europe).
Dr. Sultan bin Ahmed Al Jaber, Minister of Industry and Advanced Technology and Chairman of MBZUAI, said: "In 2019, Abu Dhabi established the world's first university dedicated to AI research with the aim of enhancing and benefiting from advanced technology capabilities, in line with our leadership's vision for the future and the roadmap set out in the UAE's Strategy for Artificial Intelligence 2031. Today, MBZUAI has achieved a significant milestone, as it has been ranked 30th globally by CSRankings in AI, computer vision, machine learning, and natural language processing, placing it alongside elite, global research universities. This progress would not have been achieved without the vision, guidance, and support of the leadership, and the sincere efforts of the MBZUAI team. Achieving such recognition is a demonstration of the UAE's commitment to developing a knowledge-based economy through fostering AI-driven research and innovation, as well as empowering youth to become future leaders in this strategic sector."
CSRankings or Computer Science Rankings is "designed to identify institutions and faculty actively engaged in research across a number of areas of computer science, based on the number of publications by faculty that have appeared at the most selective conferences in each area of computer science," according to the organisation's website. CSRankings is considered a trusted source for rankings of institutions which conduct research in these areas of scientific inquiry.
"Attracting this calibre of faculty speaks to our research ambitions and the freedom we offer to innovate. We are a new university, with a strong culture of scientific inquiry that is reflected in the CSRankings we have achieved in just two years. We will continue to create an unparalleled environment here in Abu Dhabi, to attract more talent and to inspire impactful research as we grow at pace," said Professor Eric Xing, President of MBZUAI.
Mohamed bin Zayed University of Artificial Intelligence (MBZUAI) attracts world-class researchers in AI, CS, data sciences, and related disciplines such as healthcare, fintech, and engineering and social sciences.
Is fake data the real deal when training algorithms? – The Guardian
Posted: at 2:51 pm
You're at the wheel of your car but you're exhausted. Your shoulders start to sag, your neck begins to droop, your eyelids slide down. As your head pitches forward, you swerve off the road and speed through a field, crashing into a tree.
But what if your cars monitoring system recognised the tell-tale signs of drowsiness and prompted you to pull off the road and park instead? The European Commission has legislated that from this year, new vehicles be fitted with systems to catch distracted and sleepy drivers to help avert accidents. Now a number of startups are training artificial intelligence systems to recognise the giveaways in our facial expressions and body language.
These companies are taking a novel approach for the field of AI. Instead of filming thousands of real-life drivers falling asleep and feeding that information into a deep-learning model to learn the signs of drowsiness, they're creating millions of fake human avatars to re-enact the sleepy signals.
Big data defines the field of AI for a reason. To train deep learning algorithms accurately, the models need to have a multitude of data points. That creates problems for a task such as recognising a person falling asleep at the wheel, which would be difficult and time-consuming to film happening in thousands of cars. Instead, companies have begun building virtual datasets.
Synthesis AI and Datagen are two companies using full-body 3D scans, including detailed face scans, and motion data captured by sensors placed all over the body, to gather raw data from real people. This data is fed through algorithms that tweak various dimensions many times over to create millions of 3D representations of humans, resembling characters in a video game, engaging in different behaviours across a variety of simulations.
In the case of someone falling asleep at the wheel, they might film a human performer falling asleep and combine it with motion capture, 3D animations and other techniques used to create video games and animated movies, to build the desired simulation. "You can map [the target behaviour] across thousands of different body types, different angles, different lighting, and add variability into the movement as well," says Yashar Behzadi, CEO of Synthesis AI.
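Synthesis AI's pipeline is proprietary, but the combinatorial "vary everything" step Behzadi describes can be sketched as a parameter sweep over a renderer. All the names below (render_avatar and the parameter lists) are invented stand-ins, not a real API:

```python
# Sketch of the "vary everything" idea behind synthetic data generation.
import itertools, random

body_types = ["slim", "average", "heavy"]
camera_angles = [-30, 0, 30]            # degrees off-axis
lighting = ["day", "dusk", "cabin-led"]

def render_avatar(body, angle, light, drowsiness):
    # Placeholder: a real pipeline would render labelled frames here.
    return {"body": body, "angle": angle, "light": light,
            "label": drowsiness}

# One labelled sample per combination of simulation parameters.
dataset = [
    render_avatar(b, a, l, drowsiness=random.random())
    for b, a, l in itertools.product(body_types, camera_angles, lighting)
]
print(len(dataset), dataset[0])
```

Scaling the parameter lists up is what turns a handful of performances into millions of training examples.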
Using synthetic data cuts out a lot of the messiness of the more traditional way to train deep learning algorithms. Typically, companies would have to amass a vast collection of real-life footage and low-paid workers would painstakingly label each of the clips. These would be fed into the model, which would learn how to recognise the behaviours.
The big sell for the synthetic data approach is that it's quicker and cheaper by a wide margin. But these companies also claim it can help tackle the bias that creates a huge headache for AI developers. It's well documented that some AI facial recognition software is poor at recognising and correctly identifying particular demographic groups. This tends to be because these groups are underrepresented in the training data, meaning the software is more likely to misidentify these people.
Niharika Jain, a software engineer and expert in gender and racial bias in generative machine learning, highlights the notorious example of Nikon Coolpix's blink-detection feature, which, because the training data included a majority of white faces, disproportionately judged Asian faces to be blinking. "A good driver-monitoring system must avoid misidentifying members of a certain demographic as asleep more often than others," she says.
The typical response to this problem is to gather more data from the underrepresented groups in real-life settings. But companies such as Datagen say this is no longer necessary. The company can simply create more faces from the underrepresented groups, meaning they'll make up a bigger proportion of the final dataset. Real 3D face scan data from thousands of people is whipped up into millions of AI composites. "There's no bias baked into the data; you have full control of the age, gender and ethnicity of the people that you're generating," says Gil Elbaz, co-founder of Datagen. The creepy faces that emerge don't look like real people, but the company claims that they're similar enough to teach AI systems how to respond to real people in similar scenarios.
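Mechanically, "full control" means the demographic mix becomes a sampling choice rather than an accident of collection. A sketch of the rebalancing idea, with generate_face() as an invented placeholder for a generative pipeline:

```python
# Sketch: top up underrepresented groups with generated samples until
# the dataset is balanced.
from collections import Counter

def generate_face(group):
    return {"group": group}            # placeholder for a synthetic face

dataset = [{"group": g} for g in ["A"] * 900 + ["B"] * 100]
counts = Counter(d["group"] for d in dataset)
target = max(counts.values())

for group, n in counts.items():
    dataset += [generate_face(group) for _ in range(target - n)]

print(Counter(d["group"] for d in dataset))   # Counter({'A': 900, 'B': 900})
```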
There is, however, some debate over whether synthetic data can really eliminate bias. Bernease Herman, a data scientist at the University of Washington eScience Institute, says that although synthetic data can improve the robustness of facial recognition models on underrepresented groups, she does not believe that synthetic data alone can close the gap between the performance on those groups and others. Although the companies sometimes publish academic papers showcasing how their algorithms work, the algorithms themselves are proprietary, so researchers cannot independently evaluate them.
In areas such as virtual reality, as well as robotics, where 3D mapping is important, synthetic data companies argue it could actually be preferable to train AI on simulations, especially as 3D modelling, visual effects and gaming technologies improve. "It's only a matter of time until you can create these virtual worlds and train your systems completely in a simulation," says Behzadi.
This kind of thinking is gaining ground in the autonomous vehicle industry, where synthetic data is becoming instrumental in teaching self-driving vehicles' AI how to navigate the road. The traditional approach, filming hours of driving footage and feeding this into a deep-learning model, was enough to get cars relatively good at navigating roads. But the issue vexing the industry is how to get cars to reliably handle what are known as edge cases: events that are rare enough that they don't appear much in millions of hours of training data. For example, a child or dog running into the road, complicated roadworks, or even some traffic cones placed in an unexpected position, which was enough to stump a driverless Waymo vehicle in Arizona in 2021.
With synthetic data, companies can create endless variations of scenarios in virtual worlds that rarely happen in the real world. "Instead of waiting millions more miles to accumulate more examples, they can artificially generate as many examples as they need of the edge case for training and testing," says Phil Koopman, associate professor in electrical and computer engineering at Carnegie Mellon University.
AV companies such as Waymo, Cruise and Wayve are increasingly relying on real-life data combined with simulated driving in virtual worlds. Waymo has created a simulated world using AI and sensor data collected from its self-driving vehicles, complete with artificial raindrops and solar glare. It uses this to train vehicles on normal driving situations, as well as the trickier edge cases. In 2021, Waymo told the Verge that it had simulated 15bn miles of driving, versus a mere 20m miles of real driving.
An added benefit to testing autonomous vehicles out in virtual worlds first is minimising the chance of very real accidents. "A large reason self-driving is at the forefront of a lot of the synthetic data stuff is fault tolerance," says Herman. "A self-driving car making a mistake 1% of the time, or even 0.01% of the time, is probably too much."
In 2017, Volvo's self-driving technology, which had been taught how to respond to large North American animals such as deer, was baffled when encountering kangaroos for the first time in Australia. "If a simulator doesn't know about kangaroos, no amount of simulation will create one until it is seen in testing and designers figure out how to add it," says Koopman. For Aaron Roth, professor of computer and cognitive science at the University of Pennsylvania, the challenge will be to create synthetic data that is indistinguishable from real data. He thinks it is plausible that we're at that point for face data, as computers can now generate photorealistic images of faces. But for a lot of other things, which may or may not include kangaroos, "I don't think that we're there yet."