The Prometheus League
Breaking News and Updates
Daily Archives: July 12, 2021
AI, No Lie: A Definition And Review Of Marketing Use Cases – AdExchanger
Posted: July 12, 2021 at 7:52 am
"Data-Driven Thinking" is written by members of the media community and contains fresh ideas on the digital revolution in media.
Today's column is by Sarah Rose, SVP International Digital Operations, Data & Platform Ops at IPG's Kinesso.
Artificial intelligence is a heavy and complex topic with tons of deep ethical complications, confusing applications and unknown impacts on many industries. It brings panic to some and sci-fi-infused joy to others.
AI conceptually echoes Ray Kurzweil's Singularity. In that scenario, humanity itself will be altered as we become one with AI. With any new technology there is fear and reluctance. In the case of AI, many industries have cleaved to the old ways while lightly playing with the buzzword as a pretense to progress. Our first human instinct is to protect the known, while not going full throttle on integration.
There is no question AI will change industries, markets, company valuations, jobs and our status quo. On the question of legality alone, with little to no federal or state regulation, most industries are perplexed on standard applications and uncertain where to begin.
Even so, the technology has advanced.
Ad tech and mar tech companies now commonly boast of AI-powered optimization tools and bidding methodologies that fuel brand engagement and ROAS. It may not look like a Spielberg film yet, but advertising and marketing technology is starting to integrate the beginnings of technology evolutions that employ self-learning decisioning.
By deconstructing and unpacking artificial intelligence into smaller packets, we can make it more accessible and applicable and provide ourselves with a selection menu on where to start and what to start with.
Types Of AI
There are really three categories of AI technology that can lead us to integrated systems and self-learning tech. The first is Robotic Process Automation (RPA), the second is Machine Learning (ML), and the third is AI (Artificial Intelligence) that is truly self-learning and actualizing.
Robotic process automation (RPA) is built by scripting languages (Python, for example) and is useful in repetitive, simplistic and linear tasks that produce a standard output. This is super basic and widely applied today. For the advertising ecosystem, RPA is great for operational tasks where there are copy-and-paste and server-to-server integrations requiring linear data ingestion. We can find one example in ad trafficking, where APIs between third-party platforms already exist and steps can be standardized. Operationally, this can save time, ensure data accuracy with fewer trafficking errors and conserve resources on quality assurance, campaign management, relationship management and data governance.
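As a toy illustration of the kind of linear, rule-based task RPA handles, here is a minimal Python sketch. The platform field names and payload shape are invented for the example, not a real ad-server API.

```python
# Hypothetical RPA-style trafficking step: translate rows exported from
# one platform into the payload another platform expects. A linear,
# rule-based transformation with a standard output -- exactly the kind
# of repetitive task described above. All field names are invented.

def traffic_campaigns(source_rows):
    payloads = []
    for row in source_rows:
        payloads.append({
            "name": row["campaign_name"].strip(),
            # budgets arrive as dollar strings; the target expects cents
            "budget_cents": int(round(float(row["daily_budget"]) * 100)),
            "start": row["flight_start"],  # dates pass through unchanged
        })
    return payloads

rows = [{"campaign_name": " Summer Sale ", "daily_budget": "250.00",
         "flight_start": "2021-07-12"}]
print(traffic_campaigns(rows))
```

Because every step is deterministic, a script like this can run server-to-server with no human in the loop, which is where the time and quality-assurance savings come from.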
Machine learning (ML) is the first step in optimized data science applications, where a human would usually attempt to analyze large data sets to come to some simple conclusions about patterns. It is difficult for us humans to look at tons of data points in real time and make statistical conclusions that, however minute they may be, could be statistically relevant from a Bayesian logic perspective. It is time-consuming and costly for any organization to throw bodies at the problem to find inherent value. However, ML will thankfully set rules for us and look for triggers and flags that meet defined criteria and find value in data. One example is evaluating inventory performance and ROI on long-tail SSP sources and/or optimizing DSP delivery to provide the best ROI in even low-value inventory sources. This is how most AI-powered optimization works and where the bulk of companies are spending data science resources.
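The pattern described here, deriving a statistical rule from observed data and flagging what meets it, can be sketched in a few lines. The SSP names and ROI figures below are invented for illustration.

```python
# Toy version of the ML pattern above: learn a simple statistical rule
# from observed data, then flag long-tail inventory sources worth
# shifting spend toward. All numbers are invented.

from statistics import mean, stdev

observed_roi = {          # hypothetical ROI per SSP source
    "ssp_a": 1.10, "ssp_b": 0.85, "ssp_c": 2.40,
    "ssp_d": 0.95, "ssp_e": 2.10,
}

mu = mean(observed_roi.values())
sigma = stdev(observed_roi.values())
threshold = mu + 0.5 * sigma   # the "rule" derived from the data

flagged = sorted(s for s, roi in observed_roi.items() if roi > threshold)
print(flagged)   # sources whose ROI is meaningfully above the pack
```

A production system would learn far subtler rules over millions of data points in real time, but the shape is the same: data in, criteria learned, triggers out.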
When we attain AI, it is a combination of operational RPA and ML technologies. Artificial intelligence is by definition self-learning, making decisions on its own for the benefit of reaching a brand's audience and meeting client ROAS deliverables. By integrating RPA, ML and self-learning programming, addressable media plans can shift in real time without human interaction.
AI self-learning technologies have not fully arrived in our industry at scale, but major players have started this journey in simple ways to bring automation (RPA/ML) to the fore. Whether they are startups or well-funded players focused solely on AI applications, companies are beginning to test efficiency gains. Some agencies, publishers and ad technology companies simply license this tech, and it is no surprise that Apple, Google, and Amazon are also innovating advertising practices to set the tone for automation.
While it is not gravity modifying, and we have not reached warp speed, the path has been set. By approaching this journey step by step and knowing what type of tech to integrate at what time and in what way, it becomes less head-spinning.
Let our industry be careful, cognizant, self-aware and available for change to boldly go where no one has gone before.
Continue reading here:
AI, No Lie: A Definition And Review Of Marketing Use Cases - AdExchanger
Shaping the AI Revolution In Philosophy (guest post) – Daily Nous
Despite the great promise of AI, we maintain that unless philosophers theorize about and help develop philosophy-specific AI, it is likely that AI will not be as philosophically useful.
In the following guest post*, Caleb Ontiveros, a philosophy graduate student-turned-software engineer and writer, and Graham Clay, a recent philosophy PhD from the University of Notre Dame, discuss the possibility of AI providing a suite of tools that could revolutionize philosophy, and why it's important that philosophers help develop and theorize about the role of AI in philosophy.
Philosophy will be radically different in future centuries, perhaps decades. The transformative power of artificial intelligence is coming to philosophy and the only question is whether philosophers will harness it. In fact, we contend that artificial intelligence (AI) tools will have a transformative impact on the field comparable to the advent of writing.
The impact of the written word on philosophy cannot be overstated. Imagine waking up and learning that, due to a freak cosmic accident, all books, journal articles, notebooks, blogs, and the like had vanished or been destroyed. In such a scenario, philosophy would be made seriously worse off. Present philosophers would instantly suffer a severe loss and future philosophers would be impoverished as a consequence. The introduction of writing freed philosophers from being solely dependent on their own memory and oral methods of recollection. It enabled philosophers to interact with other thinkers across time, diminishing the contingent influences of time and space, thereby improving the transmission of ideas. Philosophers could even learn about others' approaches to philosophy, which in turn aided them in their own methodology.
It is our position that AI will provide a suite of tools that can play a similar role for philosophy. It is likely that this will require that philosophers help develop and theorize about it.
What is AI? There are, roughly, two kinds of AI: machine reasoning and machine learning systems. Machine reasoning systems are composed of knowledge bases of sentences, inference rules, and operations on them. Machine learning systems work by ingesting a large amount of data and learning to make accurate predictions from it. GPT-3 is an example of this (see discussion here). One can think of the first kind of system, machine reasoning AI, as a deductive and symbolic reasoner, and the second, machine learning AI, as learning and implementing statistical rules about the relationships between entities like words.
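The contrast can be made concrete with a toy machine reasoning system. Real reasoners use far richer logics; the facts and rules here are invented examples, not anything from the authors.

```python
# Minimal machine reasoning sketch: a knowledge base of sentences plus
# inference rules, applied by forward chaining until nothing new can be
# derived. A machine learning system, by contrast, would induce its
# "rules" statistically from data rather than being handed them.

def forward_chain(facts, rules):
    """Apply rules of the form (premises, conclusion) until fixpoint."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

kb = forward_chain(
    facts={"socrates_is_a_man"},
    rules=[(["socrates_is_a_man"], "socrates_is_mortal")],
)
print(kb)
```

The deductive step is fully transparent and auditable, which is part of why hybrid proposals pair this style of system with statistical learners.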
Like many other technologies, these sorts of systems are expected to continue to progress over the coming decades and are likely to be exceptionally powerful by the end of the century. We have seen exceptional progress so far, the cost of computing power continues to fall, and numerous experts have relatively short timelines. How fast the progress will be is clouded in uncertainty; technological forecasting is a non-trivial affair. But it is not controversial that there will likely be significant progress soon.
This being the case, it will be technologically possible to create a number of AI tools that would each transform philosophy.
It is unlikely that these systems will replace human philosophers any time soon, but philosophers who effectively use these tools would significantly increase the quality and import of their work.
One can envision Systematizing systems that encode the ideas expressed by the Stanford Encyclopedia of Philosophy into propositions and the arguments they compose. This would enable philosophers to see connections between various positions and arguments, thereby reducing the siloing that has become more common in the field in recent years. Similar tools could parse and formalize journal papers from the 20th century that are seldom engaged with, mining them for lost insights relevant to current concerns. Simulation tools would generate new insights, as when one asks the tool, "What would Hume think about the Newcomb problem?" Imagine a tool like GPT-3, but one that is better at constructing logical arguments and engaging in discussions. Relatedly, one can envisage a Reasoning system that encodes the knowledge of the philosophical community as a kind of super agent that others can interact with, extend, and learn from, like Alexa on steroids.
Despite the great promise of AI, we maintain that unless philosophers theorize about and help develop philosophy-specific AI, it is likely that AI will not be as philosophically useful.
Let us make this concrete with a specific philosophical tool: Systematizing. A Systematizing tool would encode philosophical propositions and relations between them, such as support, conditional likelihood, or entailment. Philosophers may need to work with computer scientists to formulate the propositions and score the relations that the Systematizing tool generates, as well as learn how to use the system in a way that produces the most philosophically valuable relations. It is likely that Systematizing tools and software will be designed with commercial purposes in mind and so they will not immediately port over to philosophical use cases.
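As a sketch of the data structure such a tool might maintain (our own hypothetical design, not a specification from the authors), propositions can be nodes and typed, scored relations edges:

```python
# Hypothetical skeleton of a Systematizing tool's store: propositions
# as nodes, typed relations (support, entailment, ...) as scored edges.
# The propositions and scores below are illustrative only.

from collections import defaultdict

class PropositionGraph:
    def __init__(self):
        # proposition -> list of (relation, target, score)
        self.edges = defaultdict(list)

    def relate(self, source, relation, target, score=1.0):
        self.edges[source].append((relation, target, score))

    def supported_by(self, target):
        """Propositions bearing a 'support' relation to `target`."""
        return sorted(src for src, out in self.edges.items()
                      for rel, tgt, _ in out
                      if rel == "support" and tgt == target)

g = PropositionGraph()
g.relate("minds are physical", "support", "uploading is possible", 0.7)
g.relate("physicalism is true", "entails", "minds are physical")
print(g.supported_by("uploading is possible"))
```

Even this trivial version shows where philosophers' input would matter: deciding which relation types exist and how their scores are assigned is itself philosophical work.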
Extending this, we can imagine a human-AI hybrid version of this tool that takes submissions in a manner that journals do, but instead of submitting a paper, philosophers would submit its core content: perhaps a set of propositions, the relations between them, and the relevant logic or set of inference rules. The ideal construction of this tool clearly needs to be thought through. If it were done well, it could power a revolutionary Reasoning system.
However the AI revolution turns out, philosophical inquiry will be radically different in the future. But the details and epistemic values can be shaped by contributions now. We're happy to see some of this work, and we hope to see more in the future.
AI voice actors sound more human than ever, and they're ready to hire – MIT Technology Review
The company blog post drips with the enthusiasm of a '90s US infomercial. WellSaid Labs describes what clients can expect from its eight new digital voice actors! Tobin is "energetic and insightful." Paige is "poised and expressive." Ava is "polished, self-assured, and professional."
Each one is based on a real voice actor, whose likeness (with consent) has been preserved using AI. Companies can now license these voices to say whatever they need. They simply feed some text into the voice engine, and out will spool a crisp audio clip of a natural-sounding performance.
WellSaid Labs, a Seattle-based startup that spun out of the research nonprofit Allen Institute for Artificial Intelligence, is the latest firm offering AI voices to clients. For now, it specializes in voices for corporate e-learning videos. Other startups make voices for digital assistants, call center operators, and even video-game characters.
Not too long ago, such deepfake voices had something of a lousy reputation for their use in scam calls and internet trickery. But their improving quality has since piqued the interest of a growing number of companies. Recent breakthroughs in deep learning have made it possible to replicate many of the subtleties of human speech. These voices pause and breathe in all the right places. They can change their style or emotion. You can spot the trick if they speak for too long, but in short audio clips, some have become indistinguishable from humans.
AI voices are also cheap, scalable, and easy to work with. Unlike a recording of a human voice actor, synthetic voices can also update their script in real time, opening up new opportunities to personalize advertising.
But the rise of hyperrealistic fake voices isn't consequence-free. Human voice actors, in particular, have been left to wonder what this means for their livelihoods.
Synthetic voices have been around for a while. But the old ones, including the voices of the original Siri and Alexa, simply glued together words and sounds to achieve a clunky, robotic effect. Getting them to sound any more natural was a laborious manual task.
Deep learning changed that. Voice developers no longer needed to dictate the exact pacing, pronunciation, or intonation of the generated speech. Instead, they could feed a few hours of audio into an algorithm and have the algorithm learn those patterns on its own.
"If I'm Pizza Hut, I certainly can't sound like Domino's, and I certainly can't sound like Papa John's."
Over the years, researchers have used this basic idea to build voice engines that are more and more sophisticated. The one WellSaid Labs constructed, for example, uses two primary deep-learning models. The first predicts, from a passage of text, the broad strokes of what a speaker will sound likeincluding accent, pitch, and timbre. The second fills in the details, including breaths and the way the voice resonates in its environment.
Making a convincing synthetic voice takes more than just pressing a button, however. Part of what makes a human voice so human is its inconsistency, expressiveness, and ability to deliver the same lines in completely different styles, depending on the context.
Capturing these nuances involves finding the right voice actors to supply the appropriate training data and fine-tune the deep-learning models. WellSaid says the process requires at least an hour or two of audio and a few weeks of labor to develop a realistic-sounding synthetic replica.
AI voices have grown particularly popular among brands looking to maintain a consistent sound in millions of interactions with customers. With the ubiquity of smart speakers today, and the rise of automated customer service agents as well as digital assistants embedded in cars and smart devices, brands may need to produce upwards of a hundred hours of audio a month. But they also no longer want to use the generic voices offered by traditional text-to-speech technology, a trend that accelerated during the pandemic as more and more customers skipped in-store interactions to engage with companies virtually.
"If I'm Pizza Hut, I certainly can't sound like Domino's, and I certainly can't sound like Papa John's," says Rupal Patel, a professor at Northeastern University and the founder and CEO of VocaliD, which promises to build custom voices that match a company's brand identity. "These brands have thought about their colors. They've thought about their fonts. Now they've got to start thinking about the way their voice sounds as well."
Whereas companies used to have to hire different voice actors for different markets (the Northeast versus the Southern US, or France versus Mexico), some voice AI firms can manipulate the accent or switch the language of a single voice in different ways. This opens up the possibility of adapting ads on streaming platforms depending on who is listening, changing not just the characteristics of the voice but also the words being spoken. A beer ad could tell a listener to stop by a different pub depending on whether it's playing in New York or Toronto, for example. Resemble.ai, which designs voices for ads and smart assistants, says it's already working with clients to launch such personalized audio ads on Spotify and Pandora.
The gaming and entertainment industries are also seeing the benefits. Sonantic, a firm that specializes in emotive voices that can laugh and cry or whisper and shout, works with video-game makers and animation studios to supply the voice-overs for their characters. Many of its clients use the synthesized voices only in pre-production and switch to real voice actors for the final production. But Sonantic says a few have started using them throughout the process, perhaps for characters with fewer lines. Resemble.ai and others have also worked with film and TV shows to patch up actors' performances when words get garbled or mispronounced.
Need to Fit Billions of Transistors on a Chip? Let AI Do It – WIRED
Artificial intelligence is now helping to design computer chips, including the very ones needed to run the most powerful AI code.
Sketching out a computer chip is both complex and intricate, requiring designers to arrange billions of components on a surface smaller than a fingernail. Decisions at each step can affect a chip's eventual performance and reliability, so the best chip designers rely on years of experience and hard-won know-how to lay out circuits that squeeze the best performance and power efficiency from nanoscopic devices. Previous efforts to automate chip design over several decades have come to little.
But recent advances in AI have made it possible for algorithms to learn some of the dark arts involved in chip design. This should help companies draw up more powerful and efficient blueprints in much less time. Importantly, the approach may also help engineers co-design AI software, experimenting with different tweaks to the code along with different circuit layouts to find the optimal configuration of both.
At the same time, the rise of AI has sparked new interest in all sorts of novel chip designs. Cutting-edge chips are increasingly important to just about all corners of the economy, from cars to medical devices to scientific research.
Chipmakers, including Nvidia, Google, and IBM, are all testing AI tools that help arrange components and wiring on complex chips. The approach may shake up the chip industry, but it could also introduce new engineering complexities, because the type of algorithms being deployed can sometimes behave in unpredictable ways.
At Nvidia, principal research scientist Haoxing Mark Ren is testing how an AI concept known as reinforcement learning can help arrange components on a chip and how to wire them together. The approach, which lets a machine learn from experience and experimentation, has been key to some major advances in AI.
You can design chips more efficiently.
Haoxing Mark Ren, principal research scientist, Nvidia
The AI tools Ren is testing explore different chip designs in simulation, training a large artificial neural network to recognize which decisions ultimately produce a high-performing chip. Ren says the approach should cut the engineering effort needed to produce a chip in half while producing a chip that matches or exceeds the performance of a human-designed one.
"You can design chips more efficiently," Ren says. "Also, it gives you the opportunity to explore more design space, which means you can make better chips."
Nvidia started out making graphics cards for gamers but quickly saw the potential of the same chips for running powerful machine-learning algorithms, and it is now a leading maker of high-end AI chips. Ren says Nvidia plans to bring chips to market that have been crafted using AI but declined to say how soon. In the more distant future, he says, "you will probably see a major part of the chips that are designed with AI."
Reinforcement learning was used most famously to train computers to play complex games, including the board game Go, with superhuman skill, without any explicit instruction regarding a game's rules or principles of good play. It shows promise for various practical applications, including training robots to grasp new objects, flying fighter jets, and algorithmic stock trading.
Song Han, an assistant professor of electrical engineering and computer science at MIT, says reinforcement learning shows significant potential for improving the design of chips, because, as with a game like Go, it can be difficult to predict good decisions without years of experience and practice.
His research group recently developed a tool that uses reinforcement learning to identify the optimal size for different transistors on a computer chip, by exploring different chip designs in simulation. Importantly, it can also transfer what it has learned from one type of chip to another, which promises to lower the cost of automating the process. In experiments, the AI tool produced circuit designs that were 2.3 times more energy-efficient while generating one-fifth as much interference as ones designed by human engineers. The MIT researchers are working on AI algorithms at the same time as novel chip designs to make the most of both.
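The reinforcement-learning pattern these groups describe, trying candidate designs in simulation and learning which choices earn the most reward, can be sketched with a toy sizing problem. The "simulator" below is an invented stand-in, not a real EDA flow or either team's method.

```python
# Toy RL sketch: an agent tries candidate transistor widths in a fake
# simulator and learns which choice earns the highest reward. Real
# systems use deep networks over full layouts; this is just the loop.

import random

def simulate(width):
    """Hypothetical reward: energy efficiency peaks near width 4."""
    return -((width - 4) ** 2)

def epsilon_greedy(widths, episodes=500, eps=0.2, seed=0):
    rng = random.Random(seed)
    value = {w: 0.0 for w in widths}   # estimated reward per choice
    count = {w: 0 for w in widths}
    for _ in range(episodes):
        # explore occasionally, otherwise exploit the best estimate
        w = rng.choice(widths) if rng.random() < eps \
            else max(value, key=value.get)
        count[w] += 1
        # incremental mean update of the estimated reward
        value[w] += (simulate(w) - value[w]) / count[w]
    return max(value, key=value.get)

print(epsilon_greedy([1, 2, 3, 4, 5, 6]))
```

The exploration term is what lets such a system discover non-obvious designs a purely greedy search would never try, which is also why its behavior can be hard to predict.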
Other industry playersespecially those that are heavily invested in developing and using AIalso are looking to adopt AI as a tool for chip design.
What Kind of Sea Ice is That? Thanks to AI, There’s an App for That – The Maritime Executive
People snapping photos and uploading them to an AI-driven app could someday help prevent Titanic-scale disasters. USCG file image
Published Jul 11, 2021 5:17 PM by Gemini News
[By Nancy Bazilchuk]
If you've watched Netflix, shopped online, or run your robot vacuum cleaner, you've interacted with artificial intelligence, or AI. AI is what allows computers to comb through an enormous amount of data to detect patterns or solve problems. The European Union says AI is set to be a defining future technology.
And yet, as much as AI is already interwoven into our everyday lives, there's one area of the globe where AI and its applications are in their infancy, says Ekaterina Kim, an associate professor at NTNU's Department of Marine Technology. That area is the Arctic, a region where she has specialized in studying sea ice, among other topics.
"It's used a lot in marketing, in medicine, but not so much in Arctic (research) communities," she said. "Although they have a lot of data, there is not enough AI attention in the field. There's a lot of data out there, waiting for people to do something with them."
So Kim and her colleagues Ole-Magnus Pedersen, a PhD candidate from the Department of Marine Technology and Nabil Panchi, from the Indian Institute of Technology Kharagpur, decided to see if they could develop an app that used artificial intelligence to identify sea ice in the Arctic.
The result is "Ask Knut."
Climate change and changing sea ice
You may think there's not much difference between one chunk of sea ice and another, but that's just not so. In addition to icebergs, there's deformed ice, level ice, broken ice, ice floes, floe bergs, floe bits, pancake ice and brash ice.
The researchers wanted the app to be able to distinguish between the different kinds of ice and other white and blue objects out there, like sky, open water and underwater ice.
An example of what the eye sees on the left, and what Knut sees on the right. Photo: Sveinung Løset/NTNU
Different kinds of ice really matter to ship captains, for example, who might be navigating in icy waters. Actual icebergs are nothing like brash ice, the floating bits of ice that are two meters in diameter or less. Think of it: the Titanic wouldn't have sunk if it had just blundered into a patch of brash ice instead of a big iceberg.
Another factor that adds urgency to the situation is climate change, which is dramatically altering sea ice as oceans warm. Even with the help of satellite images and onboard ship technologies, knowing whats in icy waters ahead can be a difficult challenge, especially in fogs or storms.
"Ice can be very difficult for navigation," Kim said. "From the water (at the ship level) it can be hard to detect where there is strong ice, multiyear ice, and different ice. Some ice is much more dangerous than other types."
More kinds of ice than you can possibly imagine
It's often said that Inuit people have many different names for snow, which may or may not be true. But researchers definitely have names for different kinds of ice, and these are the types "Knut" is learning to identify.
Learning from examples
The team began teaching their app's AI system using a comprehensive collection of photographs taken by another NTNU ice researcher, Sveinung Løset.
But an AI system is like a growing child: if it is to learn, it needs to be exposed to lots of information. That's where turning the AI into an app made sense. Although the COVID-19 pandemic has shut down most cruise operations, as the pandemic wanes, people will begin to take cruises again, including to the Arctic and Antarctic.
Kim envisions tourists using the app to take pictures of different kinds of ice to see who finds the most different kinds of ice. And every one of those pictures helps the app learn.
From cruise ship to classroom
As the AI learns, Kim says, the increasingly complex dataset could be taken into the classroom, where navigators could learn about ice in a much more sophisticated way. Currently, students just look at pictures or listen to a PowerPoint presentation, where lecturers describe the different kinds of ice.
"So this could revolutionize how you learn about ice," she said. "You could have it in 3-D, you could immerse yourself and explore this digital image all around you, with links to different kinds of ice types."
This article appears courtesy of Gemini News and may be found in its original form here.
The opinions expressed herein are the author's and not necessarily those of The Maritime Executive.
We tested AI interview tools. Here's what we found. – MIT Technology Review
After more than a year of the covid-19 pandemic, millions of people are searching for employment in the United States. AI-powered interview software claims to help employers sift through applications to find the best people for the job. Companies specializing in this technology reported a surge in business during the pandemic.
But as the demand for these technologies increases, so do questions about their accuracy and reliability. In the latest episode of MIT Technology Review's podcast In Machines We Trust, we tested software from two firms specializing in AI job interviews, MyInterview and Curious Thing. And we found variations in the predictions and job-matching scores that raise concerns about what exactly these algorithms are evaluating.
MyInterview measures traits considered in the Big Five Personality Test, a psychometric evaluation often used in the hiring process. These traits include openness, conscientiousness, extroversion, agreeableness, and emotional stability. Curious Thing also measures personality-related traits, but instead of the Big Five, candidates are evaluated on other metrics, like humility and resilience.
The algorithms analyze candidates' responses to determine personality traits. MyInterview also compiles scores indicating how closely a candidate matches the characteristics identified by hiring managers as ideal for the position.
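As a purely hypothetical illustration of how such a match score might be computed (our own guess at the shape of the calculation, not MyInterview's actual method), one could weight each trait by the importance a hiring manager set and score the candidate's closeness to the ideal profile:

```python
# Invented trait-matching sketch. Trait names, weights, and values are
# all hypothetical; real vendors do not disclose their scoring.

ideal =     {"openness": 0.8, "conscientiousness": 0.9, "extroversion": 0.5}
weights =   {"openness": 1.0, "conscientiousness": 2.0, "extroversion": 0.5}
candidate = {"openness": 0.7, "conscientiousness": 0.8, "extroversion": 0.9}

def match_score(candidate, ideal, weights):
    total = sum(weights.values())
    # each trait contributes its weighted closeness to the ideal value
    score = sum(w * (1 - abs(candidate[t] - ideal[t]))
                for t, w in weights.items())
    return round(100 * score / total)

print(match_score(candidate, ideal, weights))
```

Whatever the real formula is, the experiment below suggests its inputs are not always what one would expect.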
To complete our tests, we first set up the software. We uploaded a fake job posting for an office administrator/researcher on both MyInterview and Curious Thing. Then we constructed our ideal candidate by choosing personality-related traits when prompted by the system.
On MyInterview, we selected characteristics like attention to detail and ranked them by level of importance. We also selected interview questions, which are displayed on the screen while the candidate records video responses. On Curious Thing, we selected characteristics like humility, adaptability, and resilience.
One of us, Hilke, then applied for the position and completed interviews for the role on both MyInterview and Curious Thing.
Our candidate completed a phone interview with Curious Thing. She first did a regular job interview and received an 8.5 out of 9 for English competency. In a second try, the automated interviewer asked the same questions, and she responded to each by reading the Wikipedia entry for psychometrics in German.
Yet Curious Thing awarded her a 6 out of 9 for English competency. She completed the interview again and received the same score.
Our candidate turned to MyInterview and repeated the experiment. She read the same Wikipedia entry aloud in German. The algorithm not only returned a personality assessment, but it also predicted our candidate to be a 73% match for the fake job, putting her in the top half of all the applicants we had asked to apply.
We tested AI interview tools. Here's what we found. - MIT Technology Review
Google, Facebook, And Microsoft Are Working On AI Ethics. Here's What Your Company Should Be Doing – Forbes
The Ethics of AI
As AI makes its way into more companies, boards and senior executives need to mitigate the risks of their AI-based systems. One such area is the reputational, regulatory, and legal risk of AI-led ethical decisions.
AI-based systems are often faced with making decisions that were not built into their models: decisions representing ethical dilemmas.
For example, suppose a company builds an AI-based system to optimize the number of advertisements we see. In that case, the AI may encourage incendiary content that causes users to get angry and comment and post their own opinions. If this works, users spend more time on the site and see more ads. The AI has done its job without ethical oversight. The unintended consequence is the polarization of users.
What happens if your company builds a system that automates work so that an employee is no longer needed? What is the company's ethical responsibility to that employee, and to society? Who is determining the ethics of the impact on employment?
What if the AI tells a loan officer to recommend against providing a loan to a person? If the human doesn't understand how the AI came to that conclusion, how can the human know if the decision was ethical or not? (see How AI Can Go Terribly Wrong: 5 Biases That Create Failure)
Suppose the data used to train your AI system doesn't have sufficient data about specific classes of individuals. In that case, it may not learn what to do when it encounters those individuals. Would a facial recognition system used for check-in to a hotel recognize a person with freckles? If the system stops working and makes check-in harder for a person with freckles, what should the company do? How does the company address this ethical dilemma? (see Why Are Technology Companies Quitting Facial Recognition?)
If the developers who identify the data to be used for training an AI system aren't looking for bias, how can they prevent an ethical dilemma? For example, suppose a company has historically hired more men than women. In that case, a bias is likely to exist in the resume data. Men tend to use different words than women in their resumes. If the data is sourced from men's resumes, then women's resumes may be viewed less favorably, just based on word choice.
Google, Facebook, and Microsoft are addressing these ethical issues. Many have pointed to the missteps Google and Facebook have made in attempting to address AI ethical issues. Let's look at some of the positive elements of what they and Microsoft are doing to address AI ethics.
While each company is addressing these principles differently, we can learn a lot by examining their commonalities. Here are some fundamental principles they address.
While these tech giants are imperfect, they are leading the way in addressing ethical AI challenges. What are your board and senior management team doing to address these issues?
Below are some suggestions you can implement now.
By addressing these issues now, your company will reduce the risks of having AI make or recommend decisions that imperil the company. (see AI Can Be Dangerous - How to Reduce Risk When Using AI) Are you aware of the reputational, regulatory, and legal risks associated with the ethics of your AI?
AI in the courts – The Indian Express
Written by Kartik Pant
Artificial Intelligence (AI) seems to be catching the attention of a large section of people, no doubt because of the infinite possibilities it offers. It draws on, contributes to, and poses challenges to almost all disciplines, including philosophy, cognitive science, economics, law, and the social sciences. AI and Machine Learning (ML) have a multiplier effect on the efficiency of any system or industry. If used effectively, they can bring about incremental changes and transform the ecosystem of several sectors. However, before applying such technology, it is important to identify the problems and challenges within each sector and develop specific modalities for how the AI architecture will have the highest impact.
In the justice delivery system, there are multiple spaces where AI applications can have a deep impact. It has the capacity to reduce pendency and incrementally improve processes. The recent National Judicial Data Grid (NJDG) shows that 3,89,41,148 cases are pending at the District and Taluka levels and 58,43,113 are still unresolved at the high courts. Such pendency has a spin-off effect that takes a toll on the efficiency of the judiciary, and ultimately reduces people's access to justice.
The use of AI in the justice system depends on first identifying various legal processes where the application of this technology can reduce pendency and increase efficiency. The machine first needs to perceive a particular process and get information about the process under examination. For example, to extract facts from a legal document, the programme should be able to understand the document and what it entails. Over time, the machine can learn from experience, and as we provide more data, the programme learns and makes predictions about the document, thereby making the underlying system more intelligent every time. This requires the development of computer programmes and software which are highly complex, requiring advanced technologies. Additionally, there is a need for constant nurturing to reduce bias and increase learning.
One such complex tool named SUPACE (Supreme Court Portal for Assistance in Court Efficiency) was recently launched by the Supreme Court of India. Designed to first understand judicial processes that require automation, it then assists the Court in improving efficiency and reducing pendency by encapsulating judicial processes that have the capability of being automated through AI.
Similarly, SUVAS is an AI system that can assist in the translation of judgments into regional languages. This is another landmark effort to increase access to justice. The technology, when applied in the long run to solve other challenges of translation in filing of cases, will reduce the time taken to file a case and assist the court in becoming an independent, quick, and efficient system.
Through these steps, the Supreme Court has become the global frontrunner in application of AI and Machine Learning into processes of the justice system. But we must remember that despite the great advances made by the apex court, the current development in the realm of AI is only scratching the surface.
Over time, as one understands and evaluates various legal processes, AI and related technologies will be able to automate and complement several tasks performed by legal professionals. It will allow them to invest more energy in creatively solving legal issues. It has the possibility of helping judges conduct trials faster and more effectively thereby reducing the pendency of cases. It will assist legal professionals in devoting more time in developing better legal reasoning, legal discussion and interpretation of laws.
However, the integration of these technologies will be a challenging task as the legal architecture is highly complex and technologies can only be auxiliary means to achieve legal justice. There is also no doubt that as AI technology grows, concerns about data protection, privacy, human rights and ethics will pose fresh challenges and will require great self-regulation by developers of these technologies. It will also require external regulation by the legislature through statutes, rules, and regulations, and by the judiciary through judicial review against constitutional standards. But with increasing adoption of the technology, there will be more debates and conversations on these problems as well as their potential solutions. In the long run, all this would help reduce the pendency of cases and improve the overall efficiency of the justice system.
The writer is founding partner, Prakant Law offices and a public policy consultant
Seizing the Opportunity to Leverage AI & ML for Clinical Research – Analytics Insight
Pharmaceutical professionals believe artificial intelligence (AI) will be the most disruptive technology in the industry in 2021. As AI and machine learning (ML) become crucial tools for keeping pace in the industry, clinical development is an area that can substantially benefit, delivering significant time and cost efficiencies while providing better, faster insights to inform decision making. For patients, these tools enable improved safety practices that lead to better, safer drugs. Here is how AI/ML can be used to support pharma companies in delivering safer drugs to market.
Today, AI and ML can be used to support clinical research in numerous ways, including the identification of molecules that hold potential for clinical treatments, finding patient populations that meet specific criteria for inclusion or exclusion, and analyzing scans, claims reports, and other healthcare data to identify trends in clinical research and treatments that lead to safer and faster decisions.
However, to take full advantage of the benefits of AI/ML technology, organizations performing clinical trials must first gain access to the tools, expertise, and industry-specific datasets enabling them to build algorithms to fit their specific needs. Healthcare data, unlike purely numerical data pulled from monitoring systems and tools such as IoT or SaaS platforms, is typically unstructured due to the way the data is collected (through doctor visits and unstructured web sources) and must meet strict security protocols to ensure patient privacy.
To truly leverage AI and ML for clinical research, data must be collected, studied, combined, and protected to make effective healthcare decisions. When clinical researchers collaborate with partners that have both technical and pharmaceutical expertise, they ensure that data is being structured and analyzed in a way that simultaneously reduces risks and improves the quality of clinical research.
When it comes to research study design, site identification and patient recruitment, and clinical monitoring, AI and ML hold great potential to make clinical trials faster, more efficient, and most importantly: safer.
Study design sets the stage for a clinical research initiative. The cost, efficiency, and potential success of clinical trials rest squarely on the shoulders of the study's design and plans. AI and ML tools, along with natural language processing (NLP), can analyze large sets of healthcare data to assess and identify primary and secondary endpoints in clinical research design. This ensures that protocols for regulators, payers, and patients are well defined before clinical trials commence. Defining such parameters optimizes study design by helping to identify ideal research sites and enrollment models. Ultimately, better study design leads to more predictable results, reduced cycle time for protocol development, and a generally more efficient study.
Identifying trial sites and recruiting patients for clinical research is a tougher task than it seems at face value. Clinical researchers must identify the area that will provide enough access to patients who meet inclusion and exclusion criteria. As studies become more focused on rarer conditions or specific populations, recruiting participants for clinical trials becomes more difficult, which increases the cost, timeline, and risk of failure for the clinical study if enough patients cannot be recruited for the research. AI and ML tools can support site identification for clinical research by mapping patient populations and proactively targeting sites with the most potential patients that meet inclusion criteria. This enables fewer research sites to meet recruitment requirements and reduces the overall cost of patient recruitment.
Clinical monitoring is a tedious manual process of analyzing site risks of clinical research and determining specific actions to take towards mitigating those risks. Risks in clinical research include recruitment or performance issues, as well as risks to patient safety. AI and ML automate the assessment of risks in the clinical research environment, and provide suggestions based on predictive analytics to better monitor for and prevent risks. Automating this assessment removes the risk of manual error, and decreases the time spent on analyzing clinical research data.
During clinical trials, there's a limited patient population to pull from, as research subjects must meet pre-set parameters for inclusion in the study. On the other hand, unlike in post-market research, clinical researchers have vast amounts of information about their patients, including what drugs they are taking, their health history, and their current environment.
In addition, because the clinical researcher is working closely with the patient and is well-educated on the drug or product being researched, the researcher is very familiar with all potential variables involved in the clinical trial. To put it simply, clinical trials have a lot of information to analyze, but few patients with whom to conduct the research. Because of this disproportionate ratio of information over patients, every case in a clinical research setting is extremely important to the future of the drug being researched.
The massive amount of patient and drug information available to clinical researchers necessitates the use of NLP tools to analyze and process documents and patient records. NLP can search documents and records for specific terms, phrases, and words that might indicate a problem or risk in the clinical trial. This eliminates the need for manual analysis of clinical trial data, reducing, and in some cases eliminating, the risk of human error while also increasing patient safety. This is especially useful in lengthy clinical trials, for which researchers will need to analyze patient histories and drug results over an extended period of time. Many clinical trials have long document trails and questionnaires that can add up to hundreds of pages of patient data that researchers must analyze.
In a clinical trial, researchers are ultimately trying to determine whether the benefits of a specific treatment outweigh the risks. AI can be especially helpful in clinical trials of high-risk drugs. If a researcher knows that a drug cures or alleviates an illness or condition, but also knows that its potential side effects can have a significant negative impact on the patient, they'll want to know how to determine if a patient is likely to present those negative side effects. NLP can be used to produce word clouds of potential signals of the negative side effects of a drug that patients would experience.
The only way to do this type of analysis manually is to identify those words using human researchers, then analyze the patient reports to find those words, and group those reports into risk profiles. NLP can automate that entire process and provide insights on risk indicators in patients much more efficiently and safely than human researchers ever could.
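The term-identify, scan, and group workflow described above can be sketched in a few lines. This is a minimal illustration, not a production pharmacovigilance pipeline; the risk terms and patient reports below are entirely hypothetical:

```python
import re
from collections import Counter

# Hypothetical signal terms a safety team might flag (illustrative only)
RISK_TERMS = {"dizziness", "nausea", "rash", "fainting", "palpitations"}

def flag_reports(reports, terms=RISK_TERMS):
    """Return (report_id, matched_terms) for each report mentioning a risk term."""
    flagged = []
    for report_id, text in reports.items():
        words = set(re.findall(r"[a-z]+", text.lower()))
        hits = words & terms
        if hits:
            flagged.append((report_id, sorted(hits)))
    return flagged

def term_frequencies(reports, terms=RISK_TERMS):
    """Count risk-term mentions across all reports -- raw material for a word cloud."""
    counts = Counter()
    for text in reports.values():
        for word in re.findall(r"[a-z]+", text.lower()):
            if word in terms:
                counts[word] += 1
    return counts

# Hypothetical patient narratives
reports = {
    "P-001": "Patient reported mild dizziness and nausea after the second dose.",
    "P-002": "No adverse events observed during the follow-up visit.",
    "P-003": "Rash on the forearm; dizziness resolved within two hours.",
}
print(flag_reports(reports))      # P-001 and P-003 are flagged; P-002 is not
print(term_frequencies(reports))  # mention counts per risk term
```

Real NLP systems go well beyond exact keyword matching (negation handling, synonyms, medical coding), but the flag-then-group structure is the same.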
AI and ML technologies, especially NLP, hold huge promise to support and optimize clinical research. However, that promise can only be realized by organizations that have the necessary tools, expertise, and partners to leverage the full benefits of AI and ML. AI and ML solutions support the optimization of clinical research by more efficiently analyzing research data for risks and allowing faster trial planning and research. Those who fail to engage AI and ML for clinical research may find that their competitors are doing so and, as a result, are going to market with new drugs and products faster, with higher profits due to decreased research time and safer practices.
Updesh Dosanjh, Practice Leader, Pharmacovigilance Technology Solutions, IQVIA
As Practice Leader for the Technology Solutions business unit of IQVIA, Updesh Dosanjh is responsible for developing the overarching strategy regarding Artificial Intelligence and Machine Learning as it relates to safety and pharmacovigilance. He is focused on the adoption of these innovative technologies and processes that will help optimize pharmacovigilance activities for better, faster results. Dosanjh has over 25 years of knowledge and experience in the management, development, implementation, and operation of processes and systems within the life sciences and other industries. Most recently, Dosanjh was with Foresight and joined IQVIA as a result of an acquisition. Over the course of his career, Dosanjh also worked with WCI, Logistics Consulting Partners, Amersys Systems Limited, and FJ Systems. Dosanjh holds a Bachelor's degree in Materials Science from Manchester University and a Master's degree in Advanced Manufacturing Systems and Technology from Liverpool University.
Top 10 Ideas in Statistics That Have Powered the AI Revolution – Columbia University
If you've ever called on Siri or Alexa for help, or generated a self-portrait in the style of a Renaissance painter, you have interacted with deep learning, a form of artificial intelligence that extracts patterns from mountains of data to make predictions. Though deep learning and AI have become household terms, the breakthroughs in statistics that have fueled this revolution are less known. In a recent paper, Andrew Gelman, a statistics professor at Columbia, and Aki Vehtari, a computer science professor at Finland's Aalto University, published a list of the most important statistical ideas in the last 50 years.
Below, Gelman and Vehtari break down the list for those who may have snoozed through Statistics 101. Each idea can be viewed as a stand-in for an entire subfield, they say, with a few caveats: science is incremental, and by singling out these works they do not mean to diminish the importance of similar, related work. They have also chosen to focus on methods in statistics and machine learning, rather than equally important breakthroughs in statistical computing and in computer science and engineering, which have provided the tools and computing power for data analysis and visualization to become everyday practical tools. Finally, they have focused on methods, while recognizing that developments in theory and methods are often motivated by specific applications.
See something important that's missing? Tweet it at @columbiascience and Gelman and Vehtari will consider adding it to the list.
The 10 articles and books below all were published in the last 50 years and are listed in chronological order.
1.Hirotugu Akaike (1973).Information Theory and an Extension of the Maximum Likelihood Principle.Proceedings of the Second International Symposium on Information Theory.
This is the paper that introduced the term AIC (originally called An Information Criterion but now known as the Akaike Information Criterion) for evaluating a model's fit based on its estimated predictive accuracy. AIC was instantly recognized as a useful tool, and this paper was one of several published in the mid-1970s placing statistical inference within a predictive framework. We now recognize predictive validation as a fundamental principle in statistics and machine learning. Akaike was an applied statistician who, in the 1960s, tried to measure the roughness of airport runways, in the same way that Benoit Mandelbrot's early papers on taxonomy and Pareto distributions led to his later work on the mathematics of fractals.
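The criterion itself is simple: AIC = 2k - 2 ln L, where k is the number of fitted parameters and L is the maximized likelihood. Below is a minimal sketch applying it to a Gaussian model; the data and the choice of model are purely illustrative:

```python
import math

def gaussian_log_likelihood(data, mu, sigma):
    """Log-likelihood of the data under a Normal(mu, sigma) model."""
    n = len(data)
    return (-0.5 * n * math.log(2 * math.pi * sigma ** 2)
            - sum((x - mu) ** 2 for x in data) / (2 * sigma ** 2))

def aic(log_likelihood, k):
    """Akaike Information Criterion: 2k - 2*ln(L). Lower is better."""
    return 2 * k - 2 * log_likelihood

# Illustrative data; fit mu and sigma by maximum likelihood
data = [2.1, 1.9, 2.4, 2.0, 2.2, 1.8, 2.3]
mu = sum(data) / len(data)
sigma = math.sqrt(sum((x - mu) ** 2 for x in data) / len(data))
ll = gaussian_log_likelihood(data, mu, sigma)
print(aic(ll, k=2))  # two fitted parameters: mu and sigma
```

Comparing AIC values across candidate models penalizes the extra parameters of more complex fits, approximating out-of-sample predictive accuracy.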
2.John Tukey (1977).Exploratory Data Analysis.
This book has been hugely influential and is a fun read that can be digested in one sitting. Traditionally, data visualization and exploration were considered low-grade aspects of practical statistics; the glamour was in fitting models, proving theorems, and developing the theoretical properties of statistical procedures under various mathematical assumptions or constraints.Tukey flipped this notion on its head. He wrote about statistical tools not for confirming what we already knew (or thought we knew), and not for rejecting hypotheses that we never, or should never have, believed, but for discovering new and unexpected insights from data.His work motivated advances in network analysis, software, and theoretical perspectives that integrate confirmation, criticism, and discovery.
3.Grace Wahba (1978).Improper Priors, Spline Smoothing and the Problem of Guarding Against Model Errors in Regression.Journal of the Royal Statistical Society.
Spline smoothing is an approach for fitting nonparametric curves. Another of Wahba's papers from this period is called "An automatic French curve," referring to a class of algorithms that can fit arbitrary smooth curves through data without overfitting to noise, or outliers. The idea may seem obvious now, but it was a major step forward in an era when the starting points for curve fitting were polynomials, exponentials, and other fixed forms.In addition to the direct applicability of splines, this paper was important theoretically. It served as a foundation for later work in nonparametric Bayesian inference by unifying ideas of regularization of high-dimensional models.
4. Bradley Efron (1979).Bootstrap Methods: Another Look at the Jackknife.Annals of Statistics.
Bootstrapping is a method for performing statistical inference without assumptions. The data pull themselves up by their bootstraps, as it were. But you can't make inference without assumptions; what made the bootstrap so useful and influential is that the assumptions came implicitly with the computational procedure: the audaciously simple idea of resampling the data. Each resampled dataset is run through the same statistical procedure that was performed on the original data. As with many statistical methods of the past 50 years, this one became widely useful because of an explosion in computing power that allowed simulations to replace mathematical analysis.
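The resampling idea fits in a few lines. This sketch (with illustrative data) estimates the standard error of the sample mean by drawing many resamples with replacement and recomputing the statistic on each:

```python
import random
import statistics

random.seed(0)  # for reproducibility

def bootstrap_se(data, stat=statistics.mean, n_boot=2000):
    """Standard error of `stat`, estimated by resampling with replacement."""
    replicates = []
    for _ in range(n_boot):
        # draw n items from the data, with replacement
        resample = random.choices(data, k=len(data))
        replicates.append(stat(resample))
    return statistics.stdev(replicates)

data = [5.1, 4.8, 6.2, 5.5, 4.9, 5.8, 6.0, 5.3]
print(bootstrap_se(data))
```

Swapping `stat` for the median, a trimmed mean, or any other statistic requires no new mathematics, which is exactly why the method spread so widely once computing became cheap.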
5.Alan Gelfand and Adrian Smith (1990).Sampling-based Approaches to Calculating Marginal Densities.Journal of the American Statistical Association.
Another way that fast computing has revolutionized statistics and machine learning is through open-ended Bayesian models.Traditional statistical models are static: fit distribution A to data of type B.But modern statistical modeling has a more Tinkertoy quality that lets you flexibly solve problems as they arise by calling on libraries of distributions and transformations.We just need computational tools to fit these snapped-together models.In their influential paper, Gelfand and Smith did not develop any new tools; they demonstrated how Gibbs sampling could be used to fit a large class of statistical models.In recent decades, the Gibbs sampler has been replaced by Hamiltonian Monte Carlo, particle filtering, variational Bayes, and more elaborate algorithms, but the general principle of modular model-building has remained.
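The flavor of Gibbs sampling can be shown with a toy example (not from the Gelfand and Smith paper): for a standard bivariate normal with correlation rho, each full conditional is itself normal, so we can alternate draws and recover the joint distribution:

```python
import random
import statistics

random.seed(1)

def gibbs_bivariate_normal(rho, n_samples=20000, burn_in=1000):
    """Gibbs sampler for a standard bivariate normal with correlation rho.
    Each full conditional is Normal(rho * other, sqrt(1 - rho^2))."""
    x, y = 0.0, 0.0
    cond_sd = (1 - rho ** 2) ** 0.5
    draws = []
    for i in range(n_samples + burn_in):
        x = random.gauss(rho * y, cond_sd)  # draw x given current y
        y = random.gauss(rho * x, cond_sd)  # draw y given new x
        if i >= burn_in:                    # discard warm-up iterations
            draws.append((x, y))
    return draws

draws = gibbs_bivariate_normal(rho=0.8)
xs = [d[0] for d in draws]
print(statistics.mean(xs))  # should be close to 0, the true marginal mean
```

The same alternate-the-conditionals pattern scales to models with many parameters, which is the modular model-building the paper made practical.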
6.Guido Imbens and Joshua Angrist (1994).Identification and Estimation of Local Average Treatment Effects.Econometrica.
Causal inference is central to any problem in which the question isn't just a description (How have things been?) or prediction (What will happen next?), but a counterfactual (If we do X, what would happen to Y?). Causal methods have evolved with the rest of statistics and machine learning through exploration, modeling, and computation. But causal reasoning has the added challenge of asking about data that are impossible to measure (you can't both do X and not-X to the same person). As a result, a key idea in this field is identifying what questions can be reliably answered from a given experiment. Imbens and Angrist are economists who wrote an influential paper on what can be estimated when causal effects vary, and their ideas form the basis for much of the later work on this topic.
7.Robert Tibshirani (1996).Regression Shrinkage and Selection Via the Lasso.Journal of the Royal Statistical Society.
In regression, or predicting an outcome variable from a set of inputs or features, the challenge lies in including lots of inputs along with their interactions; the resulting estimation problem becomes statistically unstable because of the many different ways of combining these inputs to get reasonable predictions. Classical least squares or maximum likelihood estimates will be noisy and might not perform well on future data, and so various methods have been developed to constrain or regularize the fit to gain stability.In this paper, Tibshirani introduced lasso, a computationally efficient and now widely used approach to regularization, which has become a template for data-based regularization in more complicated models.
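The heart of the lasso is the soft-thresholding operator, which shrinks small coefficients exactly to zero. Below is a bare-bones sketch of cyclic coordinate descent for the lasso objective; it is an illustration of the idea, not Tibshirani's original algorithm, and it assumes each column of X is standardized so its sum of squares equals n:

```python
def soft_threshold(z, lam):
    """Lasso shrinkage: move z toward zero by lam, clipping at zero."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

def lasso_coordinate_descent(X, y, lam, n_iters=100):
    """Minimize (1/2n)*||y - X*beta||^2 + lam*||beta||_1 by cyclic coordinate
    descent, assuming each column of X satisfies sum(x_j^2) == n."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(n_iters):
        for j in range(p):
            # partial residual: y minus the fit from all features except j
            resid = [y[i] - sum(X[i][k] * beta[k] for k in range(p) if k != j)
                     for i in range(n)]
            rho = sum(X[i][j] * resid[i] for i in range(n)) / n
            beta[j] = soft_threshold(rho, lam)
    return beta

# Tiny illustrative dataset: y depends on the first feature only
X = [[1, 1], [-1, 1], [1, -1], [-1, -1]]
y = [2, -2, 2, -2]
print(lasso_coordinate_descent(X, y, lam=0.5))  # -> [1.5, 0.0]
```

Note the two effects in the output: the relevant coefficient is shrunk (from 2 toward 1.5) and the irrelevant one is set exactly to zero, which is the selection behavior that made the lasso a template for regularization.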
8.Leland Wilkinson (1999).The Grammar of Graphics.
In this book, Wilkinson, a statistician who's worked on several influential commercial software projects including SPSS and Tableau, lays out a framework for statistical graphics that goes beyond the usual focus on pie charts versus histograms, how to draw a scatterplot, and data ink and chartjunk, to abstractly explore how data and visualizations relate. This work has influenced statistics through many pathways, most notably through ggplot2 and the tidyverse family of packages in the computing language R. It's an important step toward integrating exploratory data and model analysis into the data science workflow.
9.Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio (2014).Generative Adversarial Networks.Proceedings of the International Conference on Neural Information Processing Systems.
One of machine learning's stunning achievements in recent years is in real-time decision making through prediction and inference feedbacks. Famous examples include self-driving cars and DeepMind's AlphaGo, which trained itself to become the best Go player on Earth. Generative adversarial networks, or GANs, are a conceptual advance that allow reinforcement learning problems to be solved automatically. They mark a step toward the longstanding goal of artificial general intelligence while also harnessing the power of parallel processing so that a program can train itself by playing millions of games against itself. At a conceptual level, GANs link prediction with generative models.
10.Yoshua Bengio, Yann LeCun, and Geoffrey Hinton (2015).Deep Learning.Nature.
Deep learning is a class of artificial neural network models that can be used to make flexible nonlinear predictions using a large number of features. Its building blocks (logistic regression, multilevel structure, and Bayesian inference) are hardly new. What makes this line of research so influential is the recognition that these models can be tuned to solve a variety of prediction problems, from consumer behavior to image analysis. As with other developments in statistics and machine learning, the tuning process was made possible only with the advent of fast parallel computing and statistical algorithms that harness this power to fit large models in real time. Conceptually, we're still catching up with the power of these methods, which is why there's so much interest in interpretable machine learning.