Category Archives: AI
Are we witnessing the dawn of post-theory science? – The Guardian
Posted: January 9, 2022 at 5:13 pm
Isaac Newton, as the apocryphal story goes, was inspired after an apple fell on his head. Much experimentation and data analysis later, he realised there was a fundamental relationship between force, mass and acceleration. He formulated a theory to describe that relationship, his second law of motion, one that could be expressed as an equation, F=ma, and used it to predict the behaviour of objects other than apples. His predictions turned out to be right (if not always precise enough for those who came later).
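For readers who want the prediction step spelled out, the law turns a cause (a force) into a forecast (an acceleration); the numbers below are illustrative only:

$$
F = ma \;\;\Longrightarrow\;\; a = \frac{F}{m}, \qquad a = \frac{0.98\ \text{N}}{0.10\ \text{kg}} = 9.8\ \text{m/s}^2 .
$$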
Contrast how science is increasingly done today. Facebook's machine learning tools predict your preferences better than any psychologist. AlphaFold, a program built by DeepMind, has produced the most accurate predictions yet of protein structures based on the amino acids they contain. Both are completely silent on why they work: why you prefer this or that information; why this sequence generates that structure.
You can't lift a curtain and peer into the mechanism. They offer up no explanation, no set of rules for converting this into that: no theory, in a word. They just work, and do so well. We witness the social effects of Facebook's predictions daily. AlphaFold has yet to make its impact felt, but many are convinced it will change medicine.
Somewhere between Newton and Mark Zuckerberg, theory took a back seat. In 2008, Chris Anderson, the then editor-in-chief of Wired magazine, predicted its demise. So much data had accumulated, he argued, and computers were already so much better than us at finding relationships within it, that our theories were being exposed for what they were: oversimplifications of reality. Soon, the old scientific method (hypothesise, predict, test) would be relegated to the dustbin of history. We'd stop looking for the causes of things and be satisfied with correlations.
With the benefit of hindsight, we can say that what Anderson saw is true (and he wasn't alone in seeing it). The complexity that this wealth of data has revealed to us cannot be captured by theory as traditionally understood. "We have leapfrogged over our ability to even write the theories that are going to be useful for description," says computational neuroscientist Peter Dayan, director of the Max Planck Institute for Biological Cybernetics in Tübingen, Germany. "We don't even know what they would look like."
But Anderson's prediction of the end of theory looks to have been premature, or maybe his thesis was itself an oversimplification. There are several reasons why theory refuses to die, despite the successes of such theory-free prediction engines as Facebook and AlphaFold. All are illuminating, because they force us to ask: what's the best way to acquire knowledge, and where does science go from here?
The first reason is that we've realised that artificial intelligences (AIs), particularly a form of machine learning called neural networks, which learn from data without having to be fed explicit instructions, are themselves fallible. Think of the prejudice that has been documented in Google's search engines and Amazon's hiring tools.
The second is that humans turn out to be deeply uncomfortable with theory-free science. We don't like dealing with a black box: we want to know why.
And third, there may still be plenty of theory of the traditional kind, that is, graspable by humans, that usefully explains much but has yet to be uncovered.
So theory isn't dead, yet, but it is changing, perhaps beyond recognition. "The theories that make sense when you have huge amounts of data look quite different from those that make sense when you have small amounts," says Tom Griffiths, a psychologist at Princeton University.
Griffiths has been using neural nets to help him improve on existing theories in his domain, which is human decision-making. A popular theory of how people make decisions when economic risk is involved is prospect theory, which was formulated by behavioural economists Daniel Kahneman and Amos Tversky in the 1970s (it later won Kahneman a Nobel prize). The idea at its core is that people are sometimes, but not always, rational.
In Science last June, Griffiths's group described how they trained a neural net on a vast dataset of decisions people took in 10,000 risky choice scenarios, then compared how accurately it predicted further decisions against the predictions of prospect theory. They found that prospect theory did pretty well, but the neural net showed its worth in highlighting where the theory broke down, that is, where its predictions failed.
These counter-examples were highly informative, Griffiths says, because they revealed more of the complexity that exists in real life. For example, humans are constantly weighing up probabilities based on incoming information, as prospect theory describes. But when there are too many competing probabilities for the brain to compute, they might switch to a different strategy, being guided by a rule of thumb, say, and a stockbroker's rule of thumb might not be the same as that of a teenage bitcoin trader, since it is drawn from different experiences.
"We're basically using the machine learning system to identify those cases where we're seeing something that's inconsistent with our theory," Griffiths says. The bigger the dataset, the more inconsistencies the AI learns. The end result is not a theory in the traditional sense of a precise claim about how people make decisions, but a set of claims that is subject to certain constraints. A way to picture it might be as a branching tree of "if... then"-type rules, which is difficult to describe mathematically, let alone in words.
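To make the comparison concrete, here is a minimal sketch of that workflow in Python. The value and weighting functions are the standard Kahneman-Tversky forms, but the dataset, network size and the 10% choice noise are invented stand-ins, not the Princeton group's actual setup.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Standard prospect-theory forms (Tversky & Kahneman 1992 parameters).
def value(x, alpha=0.88, lam=2.25):
    return x**alpha if x >= 0 else -lam * (-x)**alpha

def weight(p, gamma=0.61):
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

def pt_prefers_gamble(p, win, lose, sure):
    """Does prospect theory predict taking the gamble over the sure amount?"""
    u_gamble = weight(p) * value(win) + weight(1 - p) * value(lose)
    return u_gamble > value(sure)

# Placeholder dataset: gambles (p, win, lose, sure) with observed choices.
X = np.column_stack([rng.uniform(0, 1, 5000),      # probability of winning
                     rng.uniform(0, 100, 5000),    # win amount
                     rng.uniform(-100, 0, 5000),   # loss amount
                     rng.uniform(-50, 50, 5000)])  # sure alternative
# Stand-in for real behaviour: noisy prospect-theory choices.
y = np.array([pt_prefers_gamble(*row) for row in X]) ^ (rng.random(5000) < 0.1)

net = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500).fit(X, y)

# The informative cases: scenarios where the net and the theory disagree.
theory = np.array([pt_prefers_gamble(*row) for row in X])
counter_examples = X[net.predict(X) != theory]
print(f"{len(counter_examples)} scenarios where the net departs from prospect theory")
```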
What the Princeton psychologists are discovering is still just about explainable, by extension from existing theories. But as they reveal more and more complexity, it will become less so, the logical culmination of that process being the theory-free predictive engines embodied by Facebook or AlphaFold.
Some scientists are comfortable with that, even eager for it. When voice recognition software pioneer Frederick Jelinek said, "Every time I fire a linguist, the performance of the speech recogniser goes up," he meant that theory was holding back progress, and that was in the 1980s.
Or take protein structures. A protein's function is largely determined by its structure, so if you want to design a drug that blocks or enhances a given protein's action, you need to know its structure. AlphaFold was trained on structures that were derived experimentally, using techniques such as X-ray crystallography, and at the moment its predictions are considered more reliable for proteins where there is some experimental data available than for those where there is none. But its reliability is improving all the time, says Janet Thornton, former director of the EMBL European Bioinformatics Institute (EMBL-EBI) near Cambridge, and it isn't the lack of a theory that will stop drug designers using it. "What AlphaFold does is also discovery," she says, "and it will only improve our understanding of life and therapeutics."
Others are distinctly less comfortable with where science is heading. Critics point out, for example, that neural nets can throw up spurious correlations, especially if the datasets they are trained on are small. And all datasets are biased, because scientists don't collect data evenly or neutrally, but always with certain hypotheses or assumptions in mind, assumptions that worked their way damagingly into Google's and Amazon's AIs. As philosopher of science Sabina Leonelli of the University of Exeter explains: "The data landscape we're using is incredibly skewed."
But while these problems certainly exist, Dayan doesn't think they're insurmountable. He points out that humans are biased too and, unlike AIs, in ways that are very hard to interrogate or correct. Ultimately, if a theory produces less reliable predictions than an AI, it will be hard to argue that the machine is the more biased of the two.
A tougher obstacle to the new science may be our human need to explain the world, to talk in terms of cause and effect. In 2019, neuroscientists Bingni Brunton and Michael Beyeler of the University of Washington, Seattle, wrote that this need for interpretability may have prevented scientists from making novel insights about the brain, of the kind that only emerges from large datasets. But they also sympathised. If those insights are to be translated into useful things such as drugs and devices, they wrote, "it is imperative that computational models yield insights that are explainable to, and trusted by, clinicians, end-users and industry".
Explainable AI, which addresses how to bridge the interpretability gap, has become a hot topic. But that gap is only set to widen, and we might instead be faced with a trade-off: how much predictability are we willing to give up for interpretability?
Sumit Chopra, an AI scientist who thinks about the application of machine learning to healthcare at New York University, gives the example of an MRI image. It takes a lot of raw data, and hence scanning time, to produce such an image, which isn't necessarily the best use of that data if your goal is to accurately detect, say, cancer. You could train an AI to identify what smaller portion of the raw data is sufficient to produce an accurate diagnosis, as validated by other methods, and indeed Chopra's group has done so. But radiologists and patients remain wedded to the image. "We humans are more comfortable with a 2D image that our eyes can interpret," he says.
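As a toy illustration of the trade-off Chopra describes, the NumPy sketch below reconstructs an image from only a fraction of its raw frequency-space samples. The phantom image, sampling density and contrast check are all invented for illustration and bear no relation to his group's actual method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "anatomy": a 2D image with a small bright lesion.
img = np.zeros((128, 128))
img[40:90, 40:90] = 0.5                # tissue
img[60:66, 60:66] = 1.0                # lesion we want to detect

kspace = np.fft.fft2(img)              # raw scanner data lives in k-space

# Keep only a fraction of the raw samples, biased toward low
# frequencies, which carry most of the image energy.
yy, xx = np.meshgrid(*[np.fft.fftfreq(128)] * 2, indexing="ij")
keep_prob = np.clip(0.9 - 3.0 * np.hypot(yy, xx), 0.05, 0.9)
mask = rng.random((128, 128)) < keep_prob
print(f"sampling {mask.mean():.0%} of k-space")

recon = np.abs(np.fft.ifft2(np.where(mask, kspace, 0)))

# Crude "diagnosis": does the lesion still stand out from the tissue?
lesion_mean = recon[60:66, 60:66].mean()
background = recon[40:90, 40:90].mean()
print(f"lesion contrast {lesion_mean / background:.2f}x background")
```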
The final objection to post-theory science is that there is likely to be useful old-style theory, that is, generalisations extracted from discrete examples, that remains to be discovered, and only humans can do that because it requires intuition. In other words, it requires a kind of instinctive homing in on those properties of the examples that are relevant to the general rule. One reason we consider Newton brilliant is that in order to come up with his second law he had to ignore some data. He had to imagine, for example, that things were falling in a vacuum, free of the interfering effects of air resistance.
In Nature last month, mathematician Christian Stump, of Ruhr University Bochum in Germany, called this intuitive step "the core of the creative process". But the reason he was writing about it was to say that, for the first time, an AI had pulled it off: DeepMind had built a machine-learning program that had prompted mathematicians towards new insights, new generalisations, in the mathematics of knots.
In 2022, therefore, there is almost no stage of the scientific process where AI hasn't left its footprint. And the more we draw it into our quest for knowledge, the more it changes that quest. We'll have to learn to live with that, but we can reassure ourselves about one thing: we're still asking the questions. As Pablo Picasso put it in the 1960s, "computers are useless. They can only give you answers."
How AI Can Enable and Support Both Caregivers and Patients – Entrepreneur
Posted: at 5:13 pm
Caregivers in healthcare deal with a considerable amount of complexity: In addition to a high volume of patients, they need to participate in and/or coordinate communication between team members and make sure all information is up to date. Their work is fast-paced, and situations can change in seconds.
One way to make these efforts more manageable is by using artificial intelligence. In healthcare, as in all fields, the job of AI is not to replace humans, but rather to perform repetitive, tedious and time-consuming tasks so that people don't have to, freeing time for tasks that require a personal touch. Human judgment should remain the ultimate decision-maker.
Algorithms and software can help caregivers make predictions, analyze data and simplify processes. In my experience, if one is looking at a list of 50 repetitive tasks, AI can eliminate 45 of them, handing people extra hours for the five most pivotal. Personal care is scarce, valuable and essential: the more technology can free up this time, the more focus can go to those precious tasks that technology alone can't handle.
These prioritization benefits extend to patients as well. Efficient use of AI can reduce the costs of healthcare and the time required for treatment, not least because when routines are made more efficient, procedures can be completed faster, which ideally leads to lower expenditures.
AI also supports caregivers in making higher-quality decisions. For these professionals, it can be hard to find a starting point in interpreting data. In MRI, for example, looking through thousands of images is inherently time-consuming and can lead to information being overlooked or misinterpreted. Artificial intelligence can help save time by bringing up the most relevant images, making care more efficient and accurate.
Algorithms can also be used for prediction: Software can take the current state of a situation, learn from patterns and make projections, which can be deeply useful. At GE, we use machine learning to forecast patient census 14 days ahead at the hospitals we serve, looking at every bed, unit and service in the process. This allows us to make accurate estimates of conditions for each unit, over each hour, for two weeks out. Such forecasts can predict which parts of a facility will become hotspots, and teams can then determine which caregivers to transfer to each. They also help hospitals accept transfer patients more efficiently: If they receive a call asking whether they can accept an admission in two days, caregivers can give a confident answer, with forecasts in front of them.
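GE's models are not public, but the shape of the task can be sketched: an hourly, 14-day-ahead census forecast built from a seasonal hour-of-week baseline. Everything below (the synthetic history, the unit, the series name) is a placeholder.

```python
import numpy as np
import pandas as pd

# Invented history: one year of hourly occupied-bed counts for one unit.
idx = pd.date_range("2021-01-01", periods=24 * 365, freq="H")
hour_of_week = idx.dayofweek * 24 + idx.hour
census = (40 + 8 * np.sin(2 * np.pi * hour_of_week / (24 * 7))
          + np.random.default_rng(2).normal(0, 2, len(idx))).round()
history = pd.Series(census, index=idx, name="occupied_beds")

# Seasonal baseline: average census for each of the 168 hours of the week.
profile = history.groupby(history.index.dayofweek * 24 + history.index.hour).mean()

# Forecast the next 14 days, hour by hour.
future = pd.date_range(idx[-1] + pd.Timedelta(hours=1), periods=24 * 14, freq="H")
forecast = pd.Series(
    profile.loc[future.dayofweek * 24 + future.hour].to_numpy(),
    index=future, name="forecast_beds")
print(forecast.head())
```

A production system would of course layer trend, admissions data and special events on top of this seasonal baseline; the point is only that the output is an hour-by-hour, two-weeks-out estimate per unit.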
We're still in the beginning stages of AI software applied in healthcare, and it needs to be fine-tuned, but users also need to make sure they're employing the technology correctly.
It's up to them to put software in context and use it in a way that's helpful. AI isn't a crutch to be relied upon, but a tool to be wielded. A nurse's job isn't to sit there all day looking at forecasts, and a staffing coordinator doesn't wait all day for staffing forecasts. Whether algorithms are applied to worker allocation or radiology, they must be used in context in order to be helpful.
Think of these systems as akin to software in a phone, which likely includes a compass. When you're looking at the compass directly, it's of marginal use, but when integrated into a map navigation app, it's incredibly helpful. There's the algorithm, and then there's the larger app it's contained in. The same goes for AI in healthcare: It has to be used in the right context to reach its full potential.
Observability: How AI will enhance the world of monitoring and management – VentureBeat
Posted: at 5:13 pm
The more the enterprise transitions from a mere digital organization to a fully intelligent one, the more data executives will come to realize that traditional monitoring and management of complex systems and processes is not enough.
What's needed is a new, more expansive form of oversight, one that lately has come to be known as data observability.
The distinction between observability and monitoring is subtle but significant. As VentureBeat writer John Paul Titlow explained in a recent piece, monitoring allows technicians to view past and current data environments according to predefined metrics or logs. Observability, on the other hand, provides insight into why systems are changing over time, and may detect conditions that have not previously been considered. In short, monitoring tells you what is happening, while observability tells you why it's happening.
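The distinction fits in a few lines of code. In the hypothetical sketch below, the monitor asserts a predefined latency limit, while the observability-style check flags any behaviour the metric has not exhibited before, with no preset threshold; the numbers are arbitrary.

```python
import numpy as np

def monitor(latency_ms, threshold=500):
    """Monitoring: alert when a metric crosses a predefined limit."""
    return latency_ms[-1] > threshold

def observe(latency_ms, window=60, sigmas=4.0):
    """Observability-style check: flag behaviour that is anomalous
    relative to the system's own recent history, with no preset limit."""
    recent = np.asarray(latency_ms[-window - 1:-1])   # history before now
    mu, sd = recent.mean(), recent.std() + 1e-9
    return abs(latency_ms[-1] - mu) / sd > sigmas

# A latency series that never breaches 500 ms but suddenly changes regime.
series = list(np.random.default_rng(3).normal(120, 5, 300)) + [320.0]
print(monitor(series))   # False: the fixed threshold sees nothing wrong
print(observe(series))   # True: the shift itself is the signal
```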
To fully embrace observability, the enterprise must engage it in three different ways. First, AI must fully permeate IT operations, since this is the only way to rapidly and reliably detect patterns and identify root causes of impaired performance. Secondly, data must be standardized across the ecosystem to avoid mismatch, duplication and other factors that can skew results. And finally, observability must shift into the cloud, as that is where much of the enterprise data environment is transitioning as well.
Observability is based on control theory, according to Richard Whitehead, chief evangelist at observability platform developer Moogsoft. The idea is that with enough quality data at their disposal, AI-empowered technicians can observe how one system reacts to another or, at the very least, infer the state of a system based on its inputs and outputs.
The problem is that observability is viewed in different contexts between, say, DevOps and IT. While IT has worked fairly well by linking application performance monitoring (APM) with infrastructure performance monitoring (IPM), emerging DevOps models, with their rapid change rates, are chafing under the slow pace of data ingestion. By unleashing AI on granular data feeds, however, both IT and DevOps will be able to quickly discern the hidden patterns that characterize quickly evolving data environments.
This means observability is one of the central functions in emerging AIOps and MLOps platforms that promise to push data systems and applications management into hyperdrive. New Relic recently updated its New Relic One observability application to incorporate MLOps tools to enable self-retraining as soon as alerts are received. This should be particularly handy for ML and AI training, since these models tend to deteriorate over time. Data observability helps account for changing real-world conditions that affect critical metrics like skew and staleness of data, as well as overall model precision and performance, regardless of whether these changes are taking place in seconds or over days, weeks or years.
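A drift check of the kind described can be sketched with a two-sample Kolmogorov-Smirnov test: compare what production is sending against what the model was trained on, and trigger retraining when the distributions diverge. The threshold and distributions below are illustrative, and this is not New Relic's implementation.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(4)

train_feature = rng.normal(0.0, 1.0, 10_000)   # distribution the model saw
live_feature = rng.normal(0.4, 1.2, 2_000)     # what production now sends

stat, p_value = ks_2samp(train_feature, live_feature)
drifted = p_value < 0.01                        # illustrative cutoff

print(f"KS statistic={stat:.3f}, p={p_value:.2e}")
if drifted:
    print("Feature drift detected -> trigger model retraining pipeline")
```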
Over the next few years, it is reasonable to expect AI and observability to usher in a new era of hyperautomation, according to Douglas Toombs, Gartner's vice president of research. In an interview with RT Insights, he noted that a fully realized AIOps environment is key to Gartner's long-predicted Just-in-Time Infrastructure, in which datacenter, colocation, edge, and other resources can be compiled in response to business needs within a cohesive but broadly distributed data ecosystem.
In a way, observability is AI transforming the parameters of monitoring and management in the same way it changes other aspects of the digital enterprise: by making it more inclusive, more intuitive and more self-operational. Whether the task is charting consumer trends, predicting the weather or overseeing the flow of data, AI's job is to provide granular insight into complex systems and chart courses of action based on those analyses, some of which it can implement on its own and some that must be approved by an administrator.
Observability, then, is yet another way in which AI will take on the mundane tasks that humans do today, creating not just a faster and more responsive data environment, but one that is far more attuned to the real environments it is attempting to interpret digitally.
Living on the Edge (AI) – The Times of India Blog
Posted: at 5:13 pm
We are living in a hyperconnected world, where every device is connected and generating data at an unprecedented rate. Whether we look at smartwatches, smartphones, smart cars, smart factories, smart homes or smart cities, the enormous amount of data generated is collected at the source and processed, and smart decisions must be executed instantly. This becomes possible when two powerful technologies come together: Edge Computing and Artificial Intelligence (AI). Therefore, to borrow from the acclaimed rock band Aerosmith, in these interesting times we are living on the edge.
Edge AI is the amalgamation of two incredible technologies: Edge, which is all about bringing computation and data closer to end users to improve efficiency, and AI, which comprises data-driven intelligence. In the digital world, machines learn from a large amount of data collected over a period of time, much like human minds amassing knowledge and learning through real-world experience. The more data we use to train our models, the more advanced and intelligent these machines become. Once they are trained in central storage (e.g. the cloud), the models can be deployed at the edge to make quicker decisions. Furthermore, Edge AI is capable of running Deep Learning models and complex algorithms on its own.
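The division of labour just described, train centrally and decide locally, reduces to something like the following schematic. Scikit-learn stands in for the training stack, and the sensor data is invented; a real edge deployment would add quantization and a device-specific runtime.

```python
import pickle
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# --- Cloud side: train on the pooled historical data ---
rng = np.random.default_rng(5)
X = rng.normal(size=(10_000, 4))                # e.g. sensor readings
y = (X[:, 0] * X[:, 1] > 0).astype(int)         # some learnable pattern
model = RandomForestClassifier(n_estimators=50).fit(X, y)

with open("edge_model.pkl", "wb") as f:         # ship this artifact to devices
    pickle.dump(model, f)

# --- Edge side: load once, then decide locally, no network round-trip ---
with open("edge_model.pkl", "rb") as f:
    edge_model = pickle.load(f)

new_reading = np.array([[0.7, -1.2, 0.1, 0.3]])
print("act immediately" if edge_model.predict(new_reading)[0] else "no action")
```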
Why Edge AI is important in a connected ecosystem:
With advancements in technology, the democratization of the internet, and advanced mobile networks like 4G LTE and 5G, the entire world is becoming hyperconnected, and the cloud is playing a key role in providing planet-scale infrastructure (compute, storage, and network). However, the fallacies of distributed computing cannot be disregarded. Hence, Edge AI is no longer optional and, now more than ever, must be brought to the forefront:
Edge AI in Practice (most relevant archetypes):
Side Effects of Edge AI:
Any emerging technology takes time to scale the maturity curve and has its side effects as well; Edge AI is not an exception. There are a few challenges and concerns that should be addressed, such as:
To conclude, harnessing Edge AI's true potential will unleash a great opportunity for numerous industries to thrive, and digital transformation is just the tip of the iceberg for a larger prism of digital innovation. Moreover, the ongoing pandemic is forcing every industry to evolve and develop smart solutions: enhanced remote capabilities (working, asset maintenance, learning, assistance, etc.), platform automation and much more. The cloud continues to be the platform to leverage, and just as the cloud has created a significant impact today, so will Edge AI in the future. It will make the cloud a fascinating place to be.
How the AI Revolution Impacted Chess (1/2) – Chessbase News
Posted: at 5:13 pm
The wave of neural network engines that AlphaZero inspired has impacted chess preparation, opening theory, and middlegame concepts. We can see this impact most clearly at the elite level, because top grandmasters prepare openings and get ideas by working with modern engines. For instance, Carlsen cited AlphaZero as a source of inspiration for his remarkable play in 2019.
Neural network engines like AlphaZero learn from experience by developing patterns through numerous games against themselves (known as self-play reinforcement learning) and understanding which ideas work well in different types of positions. This pattern recognition ability suggests that they are especially strong in openings and strategic middlegames, where long-term factors must be assessed accurately. In these areas of chess, their experience allows them to steer the game towards positions that provide relatively high probabilities of winning.
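Self-play reinforcement learning can be demonstrated at toy scale. The sketch below is emphatically not DeepMind's method: it uses a lookup table rather than a neural network, and tic-tac-toe rather than chess, but the loop is the same; the agent plays both sides and nudges the value of every visited position toward the final result.

```python
import random
from collections import defaultdict

values = defaultdict(float)           # position -> estimated value for X
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return 1.0 if board[a] == "X" else -1.0
    return 0.0 if "." not in board else None   # draw, or still in play

def choose(board, player, eps=0.1):
    moves = [i for i, cell in enumerate(board) if cell == "."]
    if random.random() < eps:                  # occasionally explore
        return random.choice(moves)
    # Greedy w.r.t. learned values; O plays to minimise X's value.
    best = max if player == "X" else min
    return best(moves, key=lambda m: values[board[:m] + player + board[m+1:]])

random.seed(6)
for _ in range(20_000):                        # self-play games
    board, player, visited = "." * 9, "X", []
    while (result := winner(board)) is None:
        m = choose(board, player)
        board = board[:m] + player + board[m+1:]
        visited.append(board)
        player = "O" if player == "X" else "X"
    for pos in visited:                        # learn from the outcome
        values[pos] += 0.05 * (result - values[pos])

print("learned value of X taking the centre first:",
      round(values["....X...."], 2))
```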
A table of four selected engines is provided below.
Chess Engines

| Engine | Type | Description |
| --- | --- | --- |
| Stockfish 8 | Classical | Relies on hard-wired rules and brute-force calculation of variations. |
| AlphaZero | Neural network | DeepMind's revolutionary AI engine; used self-play reinforcement learning to train a neural network. |
| Leela Chess Zero (Lc0) | Neural network | Launched in 2018 as an open-source project to follow in the footsteps of AlphaZero. |
| Stockfish 12 (and newer versions) | Hybrid | Utilizes classical search algorithms as well as a neural network. |
The hybrid Stockfish engine aims to get the best of both types of AI: the calculation speed of classical engines and the strategic understanding of neural networks. Practice has shown that this approach is a very effective one because it consistently evaluates all types of positions accurately, from strategic middlegames to messy complications.
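In caricature, the hybrid recipe is a classical alpha-beta search that consults a learned evaluation at the leaves. The sketch below uses a random evaluation stub and an abstract three-move game purely so it runs; a real engine like Stockfish pairs an efficiently updatable network (NNUE) with a far more sophisticated search.

```python
import random

random.seed(7)

def neural_eval(pos):
    """Stand-in for the neural network: scores a position from the side
    to move's point of view. A real hybrid engine calls its net here."""
    return random.uniform(-1, 1)       # placeholder evaluation

def legal_moves(pos):
    # Abstract game: each position has three children, four plies deep.
    return [] if len(pos) >= 4 else [pos + (i,) for i in range(3)]

def negamax(pos, depth, alpha=-1e9, beta=1e9):
    """Classical alpha-beta search; the network only judges the leaves."""
    moves = legal_moves(pos)
    if depth == 0 or not moves:
        return neural_eval(pos)
    best = -1e9
    for child in moves:
        score = -negamax(child, depth - 1, -beta, -alpha)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:              # prune: this line cannot matter
            break
    return best

root = ()                              # abstract starting position
print(f"search value: {negamax(root, depth=3):+.3f}")
```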
These two articles introduce a few concepts that the newer (i.e., neural network and hybrid) engines have influenced. Please note that the game annotations are based on work I did for my book, The AI Revolution in Chess, where I analyzed the impact of AI engines.
Clash of Styles
One of the biggest differences in understanding between older and newer engines can be found in strategic middlegames that involve long-term improvements by one side. As shown in many of the AlphaZero vs. Stockfish games, the older engines sometimes fail to see dangers due to their limited foresight. Relying solely on move-by-move calculation is not always enough to solve problems against the strongest opponents. This is because neural network engines excel at slowly building up pressure, making small improvements to optimize their winning chances, before gradually preparing the decisive breakthrough.
In the following game, the older engines believe that the opening outcome is quite satisfactory for Black, while the newer ones strongly disagree. Grischuk sides with the opinion of the neural network engines and understands that White's long-term initiative is both practically and objectively extremely difficult for Black to handle.
Opening Developments
Perhaps the most popularized idea of the neural network engines is the h-pawn advance, where White pushes h4-h5-h6 (or Black pushes h5-h4-h3) to cramp the opponent's kingside by taking away some key squares. The idea itself is not at all new, but the newer engines have a much greater appreciation for it than the older ones. This has led to many new ideas in openings such as the Grünfeld, where the fianchettoed bishop on g7 can be targeted by an h-pawn attack. Tying back to the theme of long-term improvements, neural network engines understand the problems that it creates for the opponent in the long run.
Our next game surveys a cutting-edge approach against the Grünfeld. Its sharp rise in popularity from 2019 onwards coincides with the widespread use of neural network engines at the top level.
The clash of chess styles between classical and neural network AI is fascinating to analyze. Many examples on this topic can be found in the famous AlphaZero vs. Stockfish games and in openings where the engines disagree on the evaluation, such as the Grischuk vs. Nakamura game. Their disagreement has led to major advancements in all popular openings, as old lines are revised and new lines supported by modern engines are introduced into high-level practice.
Part 2 will examine another AI-inspired opening and the modern battle between two players armed with ideas from neural network engines.
Flagging Down A Roaming AI Self-Driving Car Robo-Taxi Might Not Be In The Cards – Forbes
Posted: at 5:13 pm
Hailing a ride in an era of AI-based self-driving cars.
Not so long ago, it seemed that the hailing of a cab required long arms and a capacity to wave frantically to catch the eye of the taxi driver.
You would be standing at a curb, trying to keep your cool while taxi after taxi seemed to entirely ignore your frantic motions. It was hard to discern why the cabs weren't pulling over to pick you up. They were showing as empty and therefore ought to have been avidly seeking a potential fare. Sometimes you would consider that perhaps they didn't like the particular manner in which you waved your arms.
Maybe they thought you were overly excited and took this as a worrisome sign. Or they didn't like the look of your seemingly crude summoning tactic. You see, there were lots of other potential cab seekers that had a more subtle approach. Some all-knowing people would nonchalantly give a single wave and that was all it took to get a cab to pull over. Others would merely nod their head or make a quick tip of their hat, as though these were secret signals in a private baseball game between a catcher and a pitcher.
Things could get really competitive at certain times of the day.
If it was rush hour, then all bets were off. There were tons of people fervently attempting to get cabs, all of them at the same time and all across the whole city. You pretty much had to hope for the randomness of the world to come to your aid. When a taxi perchance dropped off a rider at the very spot that you were standing, this gave you the top rights to commandeer the taxi and proclaim that it was yours for the taking.
Many movies and TV shows used to provide a gag whereby a taxi comes up to pick someone up, and then someone else darts into the cab instead. This was more than just a spate of humor. It happened. Quite often. Unless you took to mind the ever-present notion that possession is nine-tenths of the law, a nanosecond of a delay getting into a taxi could mean that an interloper would grab it and you would be left standing high and dry.
I remember getting into a cab at the airport, and when I gave the hotel address for my stay, the cabbie shot me the most utterly disgusted of glances. He then explained that the hotel was less than a two-minute drive from the airport. His fare would be peanuts. Meanwhile, he had waited in an enormously long cab line at the airport, and after dropping me at the hotel he would once again have to sit idly in that same darned line. In short, he emphatically told me that I had just cost him nearly two hours of his available cab time, for pretty much nothing at all as a fare.
He pleaded with me to get out. The rules for the cabbies at the airport did not allow them to kick out a rider. It had to be the rider who decided to back out of a ride. He told me that he had a family and needed to support them. Get another cab, he exhorted. Just don't force him to give me the dinky ride; he also suggested I could simply walk from the airport and enjoy the fresh air, averting the need for a taxi altogether.
Anyway, the point is that even if you believe you had managed to snag a taxi, there was still a chance that it might get loose from your grip. Either the cab driver would not want you, or someone else might try to intervene and take your cab, sometimes offering a whale of a story.
I recall one time that I had just gotten into a hailed cab and a bystander tapped on the window. The person explained that they had been waiting for twenty minutes to get a cab. They had noticed me standing there too, though I apparently had only been waiting about ten minutes. The explanation turned into a morality play that I ought to voluntarily give up the cab since this other person had waited longer than me. It made no difference that the cabbie stopped in front of me. It made no difference that we were both humans. The key was that I had gone outside of my fair turn and had cheated this other waiting rider.
How about that?
On another occasion, I luckily hailed a cab and a person came running up to the vehicle. They offered me ten bucks if I would hand the taxi over to them. They were in a hurry and didn't want to wait for a taxi. The logic was that time is money, as we all know, and so this potential rider was willing to pay me for giving up my cab and presumably giving up my waiting time. An interesting proposition. The taxi driver entered into the dialogue and pointed out that the ten dollars ought to go to the driver, or at least a cut of it ought to.
Hailing a cab while in the rain or snow was the worst.
There you are, standing out in the raw elements. The wind nearly blowing you over. Rain pouring sheets of water onto your head, or perhaps onto your umbrella or raincoat. If it was snow, you stood in the icy cold and kept moving your feet to keep the circulation going. An additional problem was that there seemed to be fewer cabs cruising around and thus the wait time was totally elongated.
In today's world, there is a lot less hand-waving hailing going on.
In lieu of wantonly hailing a ride, you usually pull up an app on your smartphone and use a ridesharing network or even a cab-hailing network to get yourself a ride. No need to stand around and try to spy on an available roaming vehicle. The computer systems do all that work for you. This is known as e-hailing.
On a digital map displayed on your smartphone screen, you'll see various dots or tiny emoji cars moving around in your area. One of them will usually be chosen for you by the computer system, based on factors such as how close the roaming vehicle is, where you want to go, the type of vehicle preference you have, and so on. All you then need to do is wait for the arrival of the assigned vehicle.
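That "chosen for you by the computer system" step can be pictured as a simple scoring function over available cars. The haversine distance and the rating weight below are illustrative choices, not any ridesharing network's actual dispatch logic.

```python
from math import asin, cos, radians, sin, sqrt

def km(lat1, lon1, lat2, lon2):
    """Haversine distance between two (lat, lon) points, in km."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def assign(rider, cars):
    """Pick the car minimising a blend of pickup distance and rating."""
    def score(car):
        dist = km(rider["lat"], rider["lon"], car["lat"], car["lon"])
        return dist - 0.2 * car["rating"]      # illustrative weighting
    return min((c for c in cars if c["free"]), key=score)

cars = [
    {"id": "A", "lat": 40.7580, "lon": -73.9855, "rating": 4.9, "free": True},
    {"id": "B", "lat": 40.7527, "lon": -73.9772, "rating": 4.2, "free": True},
    {"id": "C", "lat": 40.7484, "lon": -73.9857, "rating": 5.0, "free": False},
]
rider = {"lat": 40.7549, "lon": -73.9840}
print("dispatching car", assign(rider, cars)["id"])
```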
No need to wave at anyone or anything.
That being said, upon the arrival of your assigned vehicle, sometimes you do need to wave or make a motion to ensure that the driver sees you. The map is oftentimes not precisely able to indicate where the passenger is standing. Plus, there might be a multitude of people waiting for lifts, perhaps having all gotten out of a theatre at the same time and now seeking rides home.
There is no doubt that merely requesting a ride online is a lot smoother than having to play the roulette-wheel game of hailing a prospective ride on the street.
Besides the ease of no longer needing to make those waving motions, you also now have a somewhat ironclad guarantee that you will get a ride. In the case of standing around and hailing, you never really knew how long it might take or whether you would ever land a ride. That was the terrible uncertainty of it all. This could be especially on your mind if you were perchance caught in a bad part of town or in rotten weather. Your mind would be frantically praying for an available ride to come along.
Another nifty aspect of using an app to hail a ride is that you know beforehand the nature of the vehicle and the driver. You are usually presented with some info about the car that is coming to pick you up. There is also the name of the driver and their rating. This helps you to know whether the driver is presumably any good at providing rides.
When you hailed a cab at random, it was a wildcard as to what type of driver you might get. Some drivers were cautious and went relatively slowly, taking turns with great aplomb. Other drivers were like racecar drivers, zipping along. They wanted to get you to your destination as fast as possible, meaning that they could then seek their next paying fare that much sooner. More fares in a day was the mantra for making any money at this game.
Those who have never hailed a ride via the standing-outside-and-waving method are at times aghast when they discover that this approach still exists. Many believe it was only something that happened in the times of the dinosaurs, and they assume that since dinosaurs are extinct, the traditional method of hailing a ride must certainly be extinct too.
Well, sit down and prepare yourself for a bit of a shock: conventional hailing still happens.
There are, though, additional twists and turns.
In many locales, there are byzantine rules about which cabs or taxis can provide those impromptu derived rides. Depending upon various conditions, it could be that only e-hailing is legally allowed, per time of day or where you are in a city or town. Anyone trying to do the street hailing has to be brazen to think that it will work since there are fewer and fewer chances of this being feasible.
Some sneaky riders will try to maximize their chances of getting a ride quickly by doing both the e-hailing and the stand-around techniques in unison.
They pull up the app for an e-hail and see what the wait time is like. They simultaneously stand out in the street and start waving at any seeming potential rides. If the wait time seems long on the e-hail, they will temporarily book it and then wait until the last allowed moment to drop it (before incurring any fees for doing so). During that interval, they will be stridently attempting to catch a ride via the waving method. Whichever approach strikes gold first is the winner in that momentary contest.
As they say, all's fair in love and war.
Since we have been discussing cars and taxis, it makes indubitable sense to consider that the future of such vehicles will consist of self-driving cars. Be aware that there isn't a human driver involved in a true self-driving car. True self-driving cars are driven via an AI driving system. There isn't a need for a human driver at the wheel, nor is there a provision for a human to drive the vehicle.
For my extensive and ongoing coverage of Autonomous Vehicles (AVs) and especially self-driving cars, see the link here.
Here's an intriguing question that is worth pondering: Once self-driving cars are acting as robo-taxis and cruising around our streets to do so, will you be able to hail one by hand, or only via e-hailing?
Before jumping into the details, I'd like to further clarify what is meant when referring to true self-driving cars.
Understanding The Levels Of Self-Driving Cars
As a clarification, true self-driving cars are ones where the AI drives the car entirely on its own and there isn't any human assistance during the driving task.
These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).
There is not yet a true self-driving car at Level 5; we don't yet know whether this will be possible to achieve, nor how long it will take to get there.
Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend, see my coverage at this link here).
Since semi-autonomous cars require a human driver, the adoption of those types of cars won't be markedly different from driving conventional vehicles, so there's not much new per se to cover about them on this topic (though, as you'll see in a moment, the points made next are generally applicable).
For semi-autonomous cars, the public needs to be forewarned about a disturbing aspect that's been arising lately: despite those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that a driver can take their attention away from the driving task while driving a semi-autonomous car.
You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.
Self-Driving Cars And Hailing A Robo-Taxi
For Level 4 and Level 5 true self-driving vehicles, there won't be a human driver involved in the driving task.
All occupants will be passengers.
The AI is doing the driving.
One aspect to immediately discuss entails the fact that the AI involved in today's AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can.
Why this added emphasis about the AI not being sentient?
Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today's AI, despite the undeniable and inarguable fact that no such AI exists as yet.
With that clarification, you can envision that the AI driving system won't natively somehow know about the facets of driving. Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car.
Let's dive into the myriad aspects that come into play on this topic.
Assume for the sake of discussion that self-driving cars will inevitably be sufficiently able to drive around and at least achieve Level 4 capabilities (this means that there is a defined ODD, or Operational Design Domain, within which the autonomous vehicle is capable of driving).
There is a hefty debate about whether individuals will be able to own and operate self-driving cars or whether only large companies will be able to do so. Part of the logic is that a self-driving car will need to be kept in tiptop shape, else the AI driving system will not be able to safely provide rides. The assumption is that a company responsible for operating a fleet of self-driving cars is more likely to maintain and upkeep the autonomous vehicles than individual owners might.
I generally disagree with that contention and argue that we will indeed have individual ownership of self-driving cars, for various reasons that I have articulated at this link here.
Putting that whole brouhaha to the side, I think we can generally all agree that there will be self-driving cars offered on a ridesharing or ride-hailing basis, regardless of who the owner is. When you go to use an app to request a ridesharing lift, the odds are that the app will present you with one of two options: you can select a human-driven car, or you can select a self-driving car. Some people will relish using a self-driving car, while others will eschew it and prefer instead to use a human-driven ridesharing car.
Each to their own preference.
Most pundits agree that self-driving cars will be on the go for much of their available traversal time. The nice thing about a self-driving car is that the AI driving system doesn't need any rest, lunch breaks, or even bathroom breaks. The expectation is that self-driving cars will be able to drive around 24x7, except for times when they need to refuel or need some maintenance or fixing up.
For an entity that owns a self-driving car, there is the opportunity to potentially make big bucks from this always-on-the-move capability (and without the labor costs of a human driver). For example, I assert that a person could own a self-driving car, have it take them to the office for a normal workday, and, while they are at work, make the self-driving car available on a ridesharing basis. The person then has the self-driving car take them home after work, and for the rest of the night the self-driving car continues making money by providing more lifts. In short, their self-driving car makes money for them when they otherwise don't need it.
Without getting mired into any messy arguments, the emphasis is that a self-driving car can be a ridesharing or ride-hailing vehicle and provide rides to those making such a request. That seems abundantly clear-cut and inarguable.
The question we are considering herein is the matter of how to request a self-driving car for those that are seeking a lift.
We can already assume that the most likely approach consists of app-based e-hailing.
Either the company operating the self-driving car will provide a dedicated app for this purpose, or might list the self-driving car on some existing ridesharing network. If they list via a network, the odds are that a cut of the fare is bound to be required (i.e., a split between the operator of the self-driving car and the network operator). Ergo, the chances are that the operator of the self-driving car would prefer that people use the specialized app and not have to split any fees.
It is a tradeoff of course, as to whether the dedicated app will ensure enough use of the self-driving car versus being listed on a ridesharing network.
Will a self-driving car be occupied at all times while on a ridesharing basis with a passenger inside the vehicle?
Nope.
There will be times during which the self-driving car will be absent of a passenger. It could be that the self-driving car is delivering a package, and thus there isn't a person inside the autonomous vehicle. Other times the self-driving car might be making its way to a requested lift and is empty until it reaches the person seeking a ride.
One other possibility is that there aren't any ride requests at the moment, and so the conundrum of what to do with the self-driving car logistically arises. Do you opt to park the self-driving car at some locale and have it wait for a requested ride? That might not be as advantageous as having the self-driving car roaming around, so that it might be in a better place when a request occurs.
The operator of a self-driving car will need to make this balancing act decision. In some instances, it might be better to park the self-driving car, while in other instances it is more prudent to keep it underway. A variety of factors come to play.
All told, we can seemingly agree that there will be times at which self-driving cars will be roaming empty of any passengers and awaiting a request for a ride. I've suggested that this might become quite prevalent; see my analysis at this link here.
We are now at the moment of truth.
Should a self-driving car that is acting in this robo-taxi manner be able to pick up passengers that might undertake a traditional hailing gesture, or will self-driving cars only be summoned via e-hailing?
My claim is that we potentially could have self-driving cars programmed to handle the streetwise hailing approach.
This probably will not occur at first, though. The mainstay will be the e-hailing avenue. Once that has become firmly established, I believe we will see some self-driving cars that are adjusted to be responsive to street-level hailing. This will primarily be due to competitive forces that require self-driving car operators to increasingly find ways to outshine their competition.
Now, it could be that the human drivers take the same stance too.
In other words, if self-driving cars start to become commonplace on ridesharing networks, the question naturally comes up about how human drivers will remain competitive. Assuming that a self-driving car is less expensive to use and that it won't have the human foibles of driving, the logical progression is that riders will aim to select a self-driving car over a human-driven car (all else being equal, as it were). A means for a human driver to remain competitive would be to offer something that the self-driving cars aren't offering, namely the traditional street-hailing approach.
Let's dig briefly into the complications of having a self-driving car attempt to perform the conventional hailing method.
As mentioned earlier, a person seeking a ride is customarily expected to make a motion that serves to fairly definitively indicate that they are seeking a ride. This usually consists of waving an arm, along with perhaps looking directly at the targeted cab, and possibly pointing at the cab too. All of this is intended to catch the attention of the human driver.
Self-driving cars will be outfitted with a variety of sensors, including video cameras, radar, LIDAR, ultrasonic units, thermal imaging, and so on. Via the use of techniques such as Machine Learning (ML) and Deep Learning (DL), the data from those sensors is computationally analyzed and various patterns are scanned for.
In theory, the image processing of the video camera's live stream could be used to try to detect a person that seems to be hailing the self-driving car. It would be easiest if the person had some special token or signal that was known for this purpose, such as a special flag or even just a specific gesture. But this might be a bit much for people to keep with them or have to know, so we'll assume that the traditional waving motion is the preferred method per se.
Admittedly, a person could be simply waving at a friend across the street, or perhaps swatting at a buzzing bee. It will be hard to discern with absolute certainty that the person is hailing the self-driving car. You could make the same case for human taxi drivers too, namely that they do not know for sure that a person is doing a hailing action. The context of the moment and the movements of the potential rider have to be carefully combined to reach such a conclusion.
Okay, so trying to spot a person seeking a ride that is doing a streetwise hailing will be somewhat computationally tough to do, but not insurmountable. There will be instances of an AI driving system skipping past the person due to the lack of detecting that a hailing activity was underway. There will also be instances of mistakenly coming to the person to provide a ride when they were not genuinely in the act of hailing a ride.
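One plausible shape for such a detector, assuming an upstream pose-estimation model already supplies per-frame body keypoints: require the raised-arm pose to persist across most of a window of frames before treating it as a hail. The joint names, thresholds and window length below are invented for illustration.

```python
from collections import deque

def arm_raised(keypoints):
    """True if either wrist is held above its shoulder in this frame.
    `keypoints` maps joint name -> (x, y), with y increasing downward."""
    return (keypoints["right_wrist"][1] < keypoints["right_shoulder"][1]
            or keypoints["left_wrist"][1] < keypoints["left_shoulder"][1])

class HailDetector:
    """Debounced hailing check: a wave must persist across most of the
    last `window` frames before we treat it as a genuine hail."""
    def __init__(self, window=30, fraction=0.8):
        self.frames = deque(maxlen=window)
        self.fraction = fraction

    def update(self, keypoints):
        self.frames.append(arm_raised(keypoints))
        full = len(self.frames) == self.frames.maxlen
        return full and sum(self.frames) / len(self.frames) >= self.fraction

detector = HailDetector()
frame = {"right_wrist": (0.52, 0.30), "right_shoulder": (0.50, 0.45),
         "left_wrist": (0.46, 0.60), "left_shoulder": (0.44, 0.44)}
for _ in range(30):                      # simulate one second of video
    hailing = detector.update(frame)
print("pull over" if hailing else "keep driving")
```

The debouncing is what separates a deliberate hail from a swat at a buzzing bee; a fielded system would also need the contextual cues the article mentions (gaze, pointing, curbside position).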
We can also assume some dolts will, just for kicks, decide to falsely attempt a hailing to see what the self-driving car will do.
In the case of a human taxi driver, the driver would likely be irked at the jokester and provide a rather stern talking-to (or worse). One supposes that the AI driving system could send the video to a remote agent for review, and if the trickster is seen to have been playing false games, perhaps there would be some means of legitimately issuing a ticket or something along those lines (unfortunately, that could be a slippery slope too).
Conclusion
Flagging down a self-driving car that is being operated as a robo-taxi is not likely in the cards for the near-term, but certainly can be envisioned for the future.
This is going to be tricky to program.
Nonetheless, it is possible.
We will likely initially have disgruntled reports of situations in which the AI driving system went right past someone and ignored them. Similar to how there have been concerns about human taxi drivers that cherry-pick whom they will pick up, we would need to test and validate that the AI driving systems do not have any built-in patterns of biases (see my column for coverage of this and other AI Ethics issues).
There won't be much cause, out of the gate, to have self-driving cars operate in this manner. The easiest approach entails doing e-hailing. Given that the AI developers already have their hands full as they aim to just get self-driving cars to safely go from point A to point B, the notion of including a conventional ride-hailing capability is ostensibly considered an edge or corner case. Those edge or corner cases are ranked as low priority and construed as outside the core of what needs to be developed.
Besides hand waving, perhaps we can program the AI driving system to detect a ride-hailing gesture such as a quick wink of the eye. Imagine though how confusing that might be when the self-driving car is going down a crowded street of pedestrians.
I know, maybe we can use mind-reading instead. If a person merely thinks about needing a lift, the AI driving system can make use of that type of hailing. As you likely know, the desire for mind-reading computers is right up there with the aspiration for autonomous vehicles (see my coverage).
Just don't read whatever else is in our minds, and stick with the earnest and singular desire of hailing a ride.
The United Nations: Empowering the UN agencies with ‘AI for Good’ Series – Analytics Insight
Posted: at 5:13 pm
The United Nations is utilizing artificial intelligence to improve the performance of UN agencies
Recent progress in artificial intelligence has been immense and exponential. The technology is making its way out of research labs and into everyday lives, promising to help us tackle humanity's greatest challenges. As the UN specialized agency for information and communication technologies, ITU believes in the power of AI for good and has organized the AI for Good series since 2017. The 2018 AI for Good Global Summit brought together AI innovators and public- and private-sector decision-makers, including more than 30 UN agencies, to generate AI strategies and support projects to accelerate progress towards the UN Sustainable Development Goals (SDGs).
The International Telecommunication Union (ITU) is the United Nations specialized agency for information and communication technologies and has become one of the key UN platforms for exploring the impact of AI. ITU has stated that it will provide a neutral platform for government, industry, and academia to build a common understanding of the capabilities of emerging AI technologies and the consequent needs for technical standardization and policy guidance. The AI for Good series is the leading UN platform for dialogue on artificial intelligence: in partnership with sister UN agencies, ITU organizes the annual AI for Good Global Summit, aimed at building that common understanding through international dialogue.
The UN family has a critical role to play in balancing technological progress with social progress. ITU remains committed to continuing to work closely with sister UN agencies and all other stakeholders to build a common understanding of the capabilities of emerging AI technologies.
Alongside this effort, the United Nations announced the opening of a Centre on Artificial Intelligence and Robotics in the Netherlands to monitor developments in AI and robotics, with the support of the Government of the Netherlands and the City of The Hague. The office will concentrate the UN's AI expertise in a single agency, organized under the UN Interregional Crime and Justice Research Institute (UNICRI), which launched its programme on AI and Robotics in 2015.
In April 2021, the United Nations and the Basque Centre for Climate Change (BC3) launched an innovative artificial intelligence (AI) tool that will make it easier for countries to measure the contributions of nature to their economic prosperity and wellbeing. Developed by the Statistics Division of the United Nations Department of Economic and Social Affairs (UN DESA), the UN Environment Programme (UNEP) and BC3, the new tool can vastly accelerate the implementation of the ground-breaking new standard for valuing the contributions of nature that was adopted by the UN Statistical Commission the previous month. The tool is built on the Artificial Intelligence for Environment and Sustainability (ARIES) platform and supports countries as they apply the new international standard for natural capital accounting, the System of Environmental-Economic Accounting (SEEA) Ecosystem Accounting.
In November 2021, the United Nations adopted a historic text defining the common values and principles needed to ensure the healthy development of artificial intelligence. The agreement was adopted at the 41st session of the UNESCO General Conference, showing renewed cooperation on the ethics of artificial intelligence. The text approaches AI ethics as a systematic normative reflection, based on a holistic and evolving framework of interdependent values, principles, and actions that can guide societies in dealing responsibly with the known and unknown impacts of AI technologies on human beings, societies and the environment, and it offers them a basis to accept or reject AI technologies.
Read more from the original source:
The United Nations: Empowering the UN agencies with 'AI for Good' Series - Analytics Insight
Posted in Ai
Comments Off on The United Nations: Empowering the UN agencies with ‘AI for Good’ Series – Analytics Insight
The Largest Suite of Cosmic Simulations for AI Training Is Now Free to Download; Already Spurring Discoveries – UConn Today – UConn Today
Posted: at 5:13 pm
Totaling 4,233 universe simulations, millions of galaxies and 350 terabytes of data, a new release from the CAMELS project is a treasure trove for cosmologists. CAMELS, which stands for Cosmology and Astrophysics with MachinE Learning Simulations, aims to use those simulations to train artificial intelligence models to decipher the universe's properties.
Scientists are already using the data, which is free to download, to power new research, says project co-leader Francisco Villaescusa-Navarro, a research scientist with the Simons Foundation's CMB (Cosmic Microwave Background) Analysis and Simulation group.
Villaescusa-Navarro leads the project with Shy Genel and Daniel Anglés-Alcázar, both associate research scientists at the Flatiron Institute's Center for Computational Astrophysics (CCA); Anglés-Alcázar is also a UConn Associate Professor of Physics.
"Machine learning is revolutionizing many areas of science, but it requires a huge amount of data to exploit," says Anglés-Alcázar. "The CAMELS public data release, with thousands of simulated universes covering a broad range of plausible physics, will provide the galaxy formation and cosmology communities with a unique opportunity to explore the potential of new machine-learning algorithms to solve a variety of problems."
The CAMELS team generated the simulations using code taken from the IllustrisTNG and Simba projects. The CAMELS team includes members of both projects, with Genel part of the core IllustrisTNG team and Anglés-Alcázar on the team that developed Simba.
About half of the simulations combine the physics of the cosmos with the smaller-scale physics essential for galaxy formation. Each simulation is run with slightly different assumptions about the universe: for instance, how much of the universe is invisible dark matter versus the dark energy pulling the cosmos apart, or how much energy supermassive black holes inject into the space between galaxies.
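The article does not spell out how those varied assumptions are chosen; purely as a hedged sketch (the parameter names and ranges below are invented, not CAMELS' actual configuration or sampling scheme), generating one parameter set per simulation might look like this:

```python
# Hedged sketch: draw one set of cosmological and astrophysical
# assumptions per simulation. Parameter names and ranges are invented,
# not CAMELS' actual configuration or sampling scheme.
import random

PARAM_RANGES = {
    "omega_matter": (0.1, 0.5),   # fraction of the universe that is matter
    "sigma_8": (0.6, 1.0),        # how clumpy the mass distribution is
    "agn_feedback": (0.25, 4.0),  # energy injected by supermassive black holes
}

def draw_parameter_sets(n_sims: int, seed: int = 0) -> list[dict]:
    rng = random.Random(seed)
    return [
        {name: rng.uniform(lo, hi) for name, (lo, hi) in PARAM_RANGES.items()}
        for _ in range(n_sims)
    ]

suite = draw_parameter_sets(n_sims=4233)  # one parameter set per simulation
```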
The researchers designed the simulations to feed machine-learning models, which will then be able to extract information from observations of the real, observable universe. With 4,233 universe simulations, CAMELS is the largest ever suite of detailed cosmological simulations designed to train machine-learning algorithms.
"The data will enable new discoveries and connect cosmology with astrophysics through machine learning," says Villaescusa-Navarro. "There has never been anything similar to this, with this many universe simulations."
The CAMELS dataset is already powering research projects, with a wide range of papers utilizing the data in the works.
Pablo Villanueva-Domingo of the University of Valencia in Spain led one such paper. He and his colleagues leveraged the CAMELS simulations to train an artificial intelligence model to measure the masses of our Milky Way galaxy plus its surrounding dark matter halo, and of the nearby Andromeda galaxy and its halo. The measurements, the first ever made using AI, put our galaxy's heft at 1 trillion to 2.6 trillion times the sun's mass. Those estimates are roughly in line with those made by other methods, demonstrating the AI approach's accuracy.
Meanwhile, Villaescusa-Navarro headed an effort to use the CAMELS data to estimate the values of two parameters that govern the fundamental properties of the universe: what fraction of the universe is matter, and how evenly mass is distributed throughout the cosmos. First, he and his colleagues used CAMELS to generate maps of quantities such as the distribution of dark matter and gas and various properties of stars. Then, using the maps, they trained a machine-learning tool called a neural network to predict the values of the two parameters.
"This is the same kind of algorithm used to tell the difference between a cat and a dog from the pixels of an image," says Genel, who co-authored the paper. "The human eye can't determine how much dark matter there is in a simulation, but a neural network can do that."
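To make that concrete, here is a minimal, hedged sketch of a map-to-parameters regressor in the spirit described; the PyTorch architecture, the sizes, and the stand-in random data are illustrative assumptions, not the team's actual network:

```python
# Hedged sketch: a small convolutional network that takes a 2D simulated
# map and outputs two numbers (say, the matter fraction and the
# clumpiness of mass). Architecture and sizes are illustrative only.
import torch
import torch.nn as nn

class MapRegressor(nn.Module):
    def __init__(self, map_size: int = 64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * (map_size // 4) ** 2, 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

# One training step: maps in, predicted parameters out, squared error down.
model = MapRegressor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
maps = torch.randn(8, 1, 64, 64)  # stand-in batch of simulated maps
targets = torch.rand(8, 2)        # stand-in (matter fraction, clumpiness) labels
loss = nn.functional.mse_loss(model(maps), targets)
loss.backward()
optimizer.step()
```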
The results showed the promise of leveraging CAMELS to precisely estimate such parameters in the future based on new observations of the universe, says Villaescusa-Navarro.
"It's exciting to see what other new discoveries this will enable," he says.
Read more here:
The Largest Suite of Cosmic Simulations for AI Training Is Now Free to Download; Already Spurring Discoveries - UConn Today - UConn Today
Posted in Ai
Comments Off on The Largest Suite of Cosmic Simulations for AI Training Is Now Free to Download; Already Spurring Discoveries – UConn Today – UConn Today
The Secret Weapon Behind Quality AI: Effective Data Labeling – insideBIGDATA
Posted: at 5:13 pm
In this special guest feature, Carlos Melendez, COO of Wovenware, discusses best practices for the "third mile" in AI development: the huge market subsector of data-labeling companies, which continue to come up with new ways to monetize this often tedious aspect of AI development. The article addresses this trend and explains why data labeling is not really a commodity market but can comprise different strategies for successful outcomes. Wovenware is a Puerto Rico-based, design-driven company that delivers customized AI and other digital transformation solutions that create measurable value for government and private business customers across the U.S.
The growth of AI has spawned a huge market subsector, and increasing interest among investors, in data labeling. In the past year, companies specializing in data labeling have secured millions of dollars in funding, and they continue to come up with new ways to monetize this often tedious aspect of AI development. Yet what can be viewed as the third mile of AI development, data labeling, is perhaps the most crucial mile for effective AI solutions.
In very general terms, AI development can be broken down into four key phases:
Data Labeling is Not Created Equal
The third mile of AI development is where the action begins. Massive amounts of data are needed to train and refine the AI model (our experience has shown us that a minimum of 10,000 labeled data points is required), and the data must be in a structured format so that the model can be tested, validated, and trained to identify and understand recurring patterns. The labels can take the form of boxes drawn around objects, items tagged visually, or text labels, either within the images themselves or in a text-based database that accompanies the original data.
Once trained with annotated data, the algorithm can begin to recognize the same patterns in new, unstructured data. To get the raw data into the shape it needs to be in, it is cleaned (errors fixed and duplicate information deleted) and labeled with its proper identification.
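As a rough illustration of what a labeled data point and the de-duplication part of cleaning might look like (the record layout here is invented, loosely echoing common bounding-box annotation formats):

```python
# Illustrative only: one plausible shape for labeled data points, plus
# the de-duplication step of cleaning described above.
labeled_points = [
    {"image": "frame_0001.jpg", "label": "white car", "box": (34, 58, 120, 96)},
    {"image": "frame_0001.jpg", "label": "white car", "box": (34, 58, 120, 96)},
    {"image": "frame_0002.jpg", "label": "pedestrian", "box": (210, 40, 242, 118)},
]

def deduplicate(points: list[dict]) -> list[dict]:
    """Drop exact duplicate annotations, keeping the first occurrence."""
    seen, cleaned = set(), []
    for p in points:
        key = (p["image"], p["label"], p["box"])
        if key not in seen:
            seen.add(key)
            cleaned.append(p)
    return cleaned

assert len(deduplicate(labeled_points)) == 2  # the duplicate row is removed
```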
Much of data labeling is a manual and laborious process. It involves groups of people who must label images as cars, or more specifically white cars, or whatever the specifics might be, so that the algorithm can go out and find them. As with many things that take time, data labeling firms are looking for a quick fix to this process. They're turning to automated systems to tag and identify datasets. While automation can expedite part of the process, it needs to be kept in check to ensure that AI solutions making critical decisions are not faulty. Consider the ramifications of an algorithm trained to identify children at the crosswalk of a busy intersection failing to recognize those of a certain height because the training data set contained no data about such children.
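One simple way to keep automated labeling in check, sketched below with invented names and toy data, is to audit a random sample of machine-generated labels against human re-labels and hold back the batch when agreement drops too low:

```python
# Hedged sketch: spot-check machine labels against human re-labels
# before trusting a batch. The 80% threshold is an arbitrary choice.
import random

def agreement_rate(auto: dict, human: dict, sample: int, seed: int = 0) -> float:
    """Fraction of a random audit sample where machine and human agree."""
    rng = random.Random(seed)
    ids = rng.sample(sorted(human), k=min(sample, len(human)))
    return sum(auto[i] == human[i] for i in ids) / len(ids)

auto_labels = {1: "car", 2: "white car", 3: "pedestrian", 4: "car"}
human_labels = {1: "car", 2: "white car", 3: "child", 4: "car"}

rate = agreement_rate(auto_labels, human_labels, sample=4)
if rate < 0.8:  # 3 of 4 agree here, so this batch would be sent back
    print(f"agreement {rate:.0%}: route batch to human labelers")
```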
Since data is the lifeblood of effective AI, it's no wonder that investors see huge growth opportunities in the market. Effective data labeling firms are in hot demand as companies look for a faster path to AI transformation. Aggregating and labeling data not only takes months of time; effective algorithms also get better over time, so it is a constant process. But when selecting a data labeling firm that automates the process, buyers must beware. Data labeling is not yet a commodity market, there are many ways to approach it, and how you accomplish your critical data labeling process deserves careful consideration.
As data continues to become the oil that fuels effective AI, it's critical that getting it into shape for algorithm training is not treated as a commodity but given the attention it deserves. Data labeling can never be a one-size-fits-all task; it requires the expertise, customization, collaboration and strategic approach that result in smarter solutions.
Read more:
The Secret Weapon Behind Quality AI: Effective Data Labeling - insideBIGDATA
Posted in Ai
Comments Off on The Secret Weapon Behind Quality AI: Effective Data Labeling – insideBIGDATA
New York City To Regulate Use Of AI In Hiring And Promotion – JD Supra
Posted: at 5:13 pm
New York City will be the latest jurisdiction to regulate the use of artificial intelligence in the workplace. The City has just passed a law requiring employers to perform bias audits not more than one year before using automated employment decision tools in connection with hiring and promotion. The law goes into effect on January 1, 2023.
The law defines an "automated employment decision tool" as any computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence, that is used to assist an employer in making a decision about an individual based on a score or recommendation calculated by the AI. The law also defines an acceptable audit as an impartial evaluation by an independent auditor that includes testing the tool to assess its disparate impact on persons in any federal EEO-1 component category.
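The law itself prescribes no formula, but disparate-impact screens commonly start from the EEOC's "four-fifths rule," under which a group whose selection rate falls below 80% of the highest group's rate is flagged. A minimal sketch with toy numbers:

```python
# Hedged sketch of a four-fifths-rule screen; the law does not mandate
# this particular formula, and the numbers below are invented.
def impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (selected, total applicants)."""
    rates = {g: sel / total for g, (sel, total) in outcomes.items()}
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

audit = {"group_a": (50, 100), "group_b": (30, 100)}  # toy numbers
ratios = impact_ratios(audit)                 # group_b: 0.3 / 0.5 = 0.6
flagged = [g for g, r in ratios.items() if r < 0.8]
assert flagged == ["group_b"]                 # below the four-fifths line
```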
The law applies only to decisions related to a prospective candidate's hire or promotion. It is unclear whether passive recruitment tools, such as ZipRecruiter's or LinkedIn's suggested jobs, are covered under the law.
The law prohibits an employer from using an automated decision tool to screen for hiring or promotion unless (1) the tool was subject to an independent bias audit no more than one year before its use, and (2) a summary of the audit results, as well as the distribution date of the tool to which the audit applied, has been made publicly available on the employer's website.
An employer who uses an automated employment decision tool for hiring or promotions must notify each candidate who resides in New York City of the following at least 10 business days before the decision tool is used:
An employer may be subject to a $500 fine for a first violation, and up to $1,500 per offense for repeat violations.
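As a quick worked example of that penalty schedule, assuming the maximum is assessed each time:

```python
# Worked example of the stated penalties: $500 for a first violation,
# then up to $1,500 for each repeat violation.
def max_penalty(violations: int) -> int:
    if violations <= 0:
        return 0
    return 500 + (violations - 1) * 1500

assert max_penalty(1) == 500
assert max_penalty(3) == 3500  # $500 + 2 x $1,500
```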
Go here to read the rest:
New York City To Regulate Use Of AI In Hiring And Promotion - JD Supra
Posted in Ai
Comments Off on New York City To Regulate Use Of AI In Hiring And Promotion – JD Supra