The Prometheus League
Breaking News and Updates
- Abolition Of Work
- Ai
- Alt-right
- Alternative Medicine
- Antifa
- Artificial General Intelligence
- Artificial Intelligence
- Artificial Super Intelligence
- Ascension
- Astronomy
- Atheism
- Atheist
- Atlas Shrugged
- Automation
- Ayn Rand
- Bahamas
- Bankruptcy
- Basic Income Guarantee
- Big Tech
- Bitcoin
- Black Lives Matter
- Blackjack
- Boca Chica Texas
- Brexit
- Caribbean
- Casino
- Casino Affiliate
- Cbd Oil
- Censorship
- Cf
- Chess Engines
- Childfree
- Cloning
- Cloud Computing
- Conscious Evolution
- Corona Virus
- Cosmic Heaven
- Covid-19
- Cryonics
- Cryptocurrency
- Cyberpunk
- Darwinism
- Democrat
- Designer Babies
- DNA
- Donald Trump
- Eczema
- Elon Musk
- Entheogens
- Ethical Egoism
- Eugenic Concepts
- Eugenics
- Euthanasia
- Evolution
- Extropian
- Extropianism
- Extropy
- Fake News
- Federalism
- Federalist
- Fifth Amendment
- Financial Independence
- First Amendment
- Fiscal Freedom
- Food Supplements
- Fourth Amendment
- Free Speech
- Freedom
- Freedom of Speech
- Futurism
- Futurist
- Gambling
- Gene Medicine
- Genetic Engineering
- Genome
- Germ Warfare
- Golden Rule
- Government Oppression
- Hedonism
- High Seas
- History
- Hubble Telescope
- Human Genetic Engineering
- Human Genetics
- Human Immortality
- Human Longevity
- Illuminati
- Immortality
- Immortality Medicine
- Intentional Communities
- Jacinda Ardern
- Jitsi
- Jordan Peterson
- Las Vegas
- Liberal
- Libertarian
- Libertarianism
- Liberty
- Life Extension
- Macau
- Marie Byrd Land
- Mars
- Mars Colonization
- Mars Colony
- Memetics
- Micronations
- Mind Uploading
- Minerva Reefs
- Modern Satanism
- Moon Colonization
- Nanotech
- National Vanguard
- NATO
- Neo-eugenics
- Neurohacking
- Neurotechnology
- New Utopia
- New Zealand
- Nihilism
- Nootropics
- NSA
- Oceania
- Offshore
- Olympics
- Online Casino
- Online Gambling
- Pantheism
- Personal Empowerment
- Poker
- Political Correctness
- Politically Incorrect
- Polygamy
- Populism
- Post Human
- Post Humanism
- Posthuman
- Posthumanism
- Private Islands
- Progress
- Proud Boys
- Psoriasis
- Psychedelics
- Putin
- Quantum Computing
- Quantum Physics
- Rationalism
- Republican
- Resource Based Economy
- Robotics
- Rockall
- Ron Paul
- Roulette
- Russia
- Sealand
- Seasteading
- Second Amendment
- Seychelles
- Singularitarianism
- Singularity
- Socio-economic Collapse
- Space Exploration
- Space Station
- Space Travel
- Spacex
- Sports Betting
- Sportsbook
- Superintelligence
- Survivalism
- Talmud
- Technology
- Teilhard De Charden
- Terraforming Mars
- The Singularity
- Tms
- Tor Browser
- Trance
- Transhuman
- Transhuman News
- Transhumanism
- Transhumanist
- Transtopian
- Transtopianism
- Ukraine
- Uncategorized
- Vaping
- Victimless Crimes
- Virtual Reality
- Wage Slavery
- War On Drugs
- Waveland
- Ww3
- Yahoo
- Zeitgeist Movement
- Prometheism
- Forbidden Fruit
- The Evolutionary Perspective
Category Archives: Ai
How artificial intelligence is redefining the role of manager – World Economic Forum
Posted: November 17, 2019 at 2:33 pm
Artificial intelligence (AI) will impact every job, in every industry and every country. There are significant fears that AI will eliminate jobs altogether. Many reports have exposed the harsh realities of workforce automation, especially for certain types of jobs and demographics. For instance, the Brookings Institution found that automation threatens 25% of all US jobs, with an emphasis on low-wage earners in positions where tasks are routine-based. A separate study by the Institute for Women's Policy Research found that women hold 58% of the jobs at highest risk of automation.
Yet despite these realities, we are beginning to accept our new AI world and adopt these technologies as we see the potential new opportunities. Other studies emphasize how AI will create more jobs or just remove tasks within jobs. A new global study by Oracle and Future Workplace of 8,370 employees, managers and HR leaders across 10 countries, found that almost two-thirds of workers are optimistic, excited and grateful about AI and robot co-workers. Nearly one-quarter went as far as saying they have a loving and gratifying relationship with AI at work, showing an appreciation for how it simplifies and streamlines their lives.
Proportion of respondents who believe robots will one day replace their managers
Image: Oracle & Future Workplace AI@Work Study 2019
Surprisingly, last year, we discovered that the majority of workers would trust orders from a robot. This year, almost two-thirds of workers said they would trust orders from a robot over their manager, and half have already turned to a robot instead of their manager for advice. At American Express, decisions like figuring out what product offer is most relevant to different customer segments are now handled by AI, eliminating the need for managers and employees to discuss these tasks.
Now that AI is removing many of the administrative tasks typically handled by managers, their roles are evolving to focus more on soft over hard skills. The survey found that workers believe robots are better than their managers at providing unbiased information, maintaining work schedules, problem-solving and budget management, while managers are better at empathy, coaching and creating a work culture.
Anthony Mavromatis, vice-president of customer data science and platforms at American Express, points out another way that AI is changing the manager's role: "AI is increasingly freeing up their time and allowing them to focus on the essence of their job. Going forward, what really matters is the very human skill of being able to be creative and innovate, something that AI isn't good at yet." By cutting them loose from tasks traditionally expected of them, AI allows managers to focus on forging stronger relationships with their teammates and having a greater impact in their roles.
Companies such as Hilton that were early in using AI to simplify their recruiting process are now expanding its use to other applications, like digital assistants, for certain processes including feedback and performance reviews. They envision that digital assistants will allow employees to say something like, "I want to take next Friday off, please schedule," and the necessary HR steps are taken. The digital assistant will be usable from a mobile device or a desktop, whichever is most convenient. "When you think about the number of hotel employees who work throughout our hotels serving guests with limited or no time on a computer, and the time constraints we all face, this mobile capability will be a game-changer," says Kellie Romack, Hilton's vice-president of digital HR and strategic planning. The company is primed to use AI to help it focus on the needs of both employees and guests.
AI won't be replacing a manager's job; it will be supplementing it. The future of work is one where robots and humans will be working side by side, helping each other get work done faster and more efficiently than ever before. As Mavromatis puts it: "AI plus human equals the future. It's not one or the other."
Written by
Dan Schawbel, Partner and research director, Future Workplace
The views expressed in this article are those of the author alone and not the World Economic Forum.
See the original post here:
How artificial intelligence is redefining the role of manager - World Economic Forum
Posted in Ai
AI Can Tell If You’re Going to Die Soon. We Just Don’t Know How It Knows. – Popular Mechanics
Posted: at 2:33 pm
Albert Einstein's famous expression "spooky action at a distance" refers to quantum entanglement, a phenomenon seen on the most micro of scales. But machine learning seems to grow more mysterious and powerful every day, and scientists don't always understand how it works. The spookiest action yet is a new study of heart patients, reported by New Scientist, in which a machine-learning algorithm decided who was most likely to die within a year based on electrocardiogram (ECG) results.
The algorithm performed better than the traditional measures used by cardiologists. The study was done by researchers at Pennsylvania's Geisinger regional healthcare group, a low-cost and not-for-profit provider.
Much of machine learning involves feeding complex data into computers that are better able to examine it really closely. To analogize to calculus, if human reasoning is a Riemann sum, machine learning may be the integral that results as the number of terms approaches infinity. Human doctors do the best they can with what they have, but whatever the ECG algorithm is finding in the data, those studying the algorithm can't reverse-engineer what it is.
The most surprising finding may be the number of people cardiologists believed were healthy based on normal ECG results: the AI "accurately predicted risk of death even in people deemed by cardiologists to have a normal ECG," New Scientist reports.
To imitate the decision-making of individual cardiologists, the Geisinger team made a parallel algorithm out of the factors that cardiologists use to calculate risk in the accepted way. It's not practical to record the individual impressions of 400,000 real human doctors instead of the results of the algorithm, but that level of granularity could show that cardiologists are more able to predict poor outcomes than the algorithm indicates.
It could also show they perform worse than the algorithm; we just don't know. Head to head, having a better algorithm could add to doctors' human skill set and lead to even better outcomes for at-risk patients.
Machine learning experts use a metric called area under the curve (AUC) to measure how well their algorithm can sort people into different groups. In this case, researchers programmed the algorithm to decide which people would survive and which would die within the year, and its success was measured in how many people it placed in the correct groups. This is why future action is so complicated: People can be misplaced in both directions, leading to false positives and false negatives that could impact treatment. The algorithm did show an improvement, scoring 85 percent versus the 65 to 80 percent success rate of the traditional calculus.
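To make the AUC idea concrete, here is a minimal, hypothetical sketch using scikit-learn; the outcome labels and model scores are invented for illustration and have nothing to do with the Geisinger study's actual data or model.

```python
# Toy illustration of the AUC metric described above (invented data,
# not the Geisinger study's model or patients).
from sklearn.metrics import roc_auc_score

# Hypothetical one-year outcomes: 1 = died within the year, 0 = survived.
y_true = [0, 0, 0, 1, 0, 1, 1, 0, 1, 0]
# Hypothetical model scores: higher means the model considers death more likely.
y_score = [0.05, 0.20, 0.35, 0.25, 0.10, 0.80, 0.65, 0.30, 0.90, 0.15]

# AUC is the probability that a randomly chosen positive case is scored
# higher than a randomly chosen negative case: 1.0 is perfect ranking,
# 0.5 is no better than chance.
print(f"AUC: {roc_auc_score(y_true, y_score):.2f}")
```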
As in other studies, one flaw in this research is that the scientists used past data where the one-year window had already finished. The data set is closed, and scientists can directly compare their results to a known outcome. There's a difference, and in medicine it's an ethical one, between studying closed data and using a mysterious, unstudied mechanism to change how we treat patients today.
Medical research faces the same ethical hurdles across the board. What if intervening based on machine learning changes outcomes and saves lives? Is it ever right to treat one group of patients better than a control group that receives less effective care? These obstacles make a big difference in how future studies will pursue the results of this study. If the phenomenon of better prediction holds up, it may be decades before patients are treated differently.
Link:
AI Can Tell If You're Going to Die Soon. We Just Don't Know How It Knows. - Popular Mechanics
Posted in Ai
Is AI in a golden age or on the verge of a new winter? – VentureBeat
Posted: at 2:33 pm
The global rush forward of AI development continues at a breakneck pace and shows no signs of stopping. Stanford University recently called on the U.S. government to make a $120 billion investment in the nation's AI ecosystem over the course of the next 10 years, and reports from France show 38% more AI startups in 2019 with government and investor backing. The U.S. Department of Energy (DOE) is planning a major initiative to use AI to speed up scientific discoveries and will soon ask for an additional $10 billion in funding. Dozens of countries have acknowledged that AI is going to be increasingly important for their citizens and the growth of their economies, resulting in widespread country-level investment and strategies around AI.
This trend supports arguments that AI is entering a golden age. And why not? Some have claimed the transformative impact of AI is similar to electricity. The golden age theory is further supported by the 2019 AI hype cycle from Gartner that shows many AI technologies climbing the innovation slope, providing more fuel for the AI fire.
Indeed, the public interest grows apace as the upward trend in news stories about AI technologies continues to track up and to the right, as shown in this graphic from CB Insights.
While interest is at an all-time high, it's not all positive. There is growing negative feedback about AI, whether worries about current misuse of the technology or potential long-term existential threats. For example, several Outback Steakhouse franchises recently had to back away from plans to implement AI-powered facial recognition in their restaurants due to consumer backlash. Several cities have issued an outright ban on the technology over worries about the potential for dystopian surveillance systems.
Other threats are perceived due to AI-created deepfake videos and the possible misuse of new natural language generation capabilities. Specifically, misuse of these could supercharge fake news and further undermine democratic norms and institutions. This has led the U.S. Senate to pass legislation requiring the Department of Homeland Security to publish an annual report on the use of deepfake technology and how it is being used to harm national security. In addition, discussions are ongoing about inherent bias in the datasets used to train AI algorithms, amid concerns about whether it is even possible to eliminate these biases.
Are these issues fundamental or merely noise in the machine of progress? A Brookings Institution article on regulating AI suggests the latter. The paper cites worries about previous technological breakthroughs that proved to be unfounded. For example, people worried that steam locomotives would stop cows from grazing, hens from laying, and precipitate economic havoc as horses became extinct and hay and oats farmers went bankrupt. And there was concern the telegraph's transmission of messages by "sparks" might be the work of the devil.
AI winters, as experienced in the mid-1970s, the late 1980s, and the 1990s, occur when promises and expectations greatly outpace reality and people become disappointed in AI and the results achieved through it. For instance, we've all seen and heard the many visions of self-driving cars, but the reality is that for most people this is 20 years away, possibly longer. As recently as 2016 there were predictions that 10 million self-driving cars would be on the road by 2020. Not going to happen. This spring, Ford CEO Jim Hackett admitted in a colossal understatement, "We overestimated the arrival of autonomous vehicles." This despite the intense hype and $35 billion invested globally in their development.
The reason for the slow development is unanticipated complexity. Similarly, the promise of treating heretofore incurable brain afflictions such as autism and schizophrenia through embedded brain-machine interfaces is enticing but also likely still far into the future. It's unrealized or dashed promises that lead to AI winters. As projects flounder, people lose interest and the hype fades, as does research and investment.
This is the current conundrum. On the one hand, there are huge advances being made nearly every day, from training AI to help the paralyzed write with their minds, to rapidly spotting new wildfires and improving Postal Service efficiency. These look like promising applications. Yet Stanford professor David Cheriton recently said that AI has been a promising technology since he first encountered it 35 years ago, and it's still promising but suffers from being overpromising.
This overpromising is reinforced by a new Gartner study that shows AI adoption lagging expectations, at least in the enterprise. The top challenges are the lack of skilled staff, the quality of available data, and understanding the real benefits and uses of AI. An even more significant limitation Gartner cites is the lack of vision and imagination for how to apply AI.
This is the nearly $16 trillion question: the amount that PwC estimates AI will deliver annually to the global economy by 2030. Will something close to this be achieved, led by the golden age of AI, or will the technology hit a wall over the next several years and lead to a new winter?
An argument for winter is that all the advances so far have come from narrow AI, the ability of an algorithm to do one thing only, albeit with superhuman abilities. For example, computer vision algorithms are excellent at making sense of visual information but cannot translate and apply that ability to other tasks. Strong AI, also known as Artificial General Intelligence (AGI), does not yet exist. An AGI machine could perform any task that a human can. Surveys suggest it will be 2060 before AGI exists, meaning that until then narrow AI algorithms will have to suffice.
Eventually, the use cases for narrow AI will be exhausted. Another AI winter will likely arrive, but it remains an open debate about when. If Microsoft president Brad Smith is right, winter won't be coming soon. He recently predicted AI will transform society over the next three decades, through to 2050. For now, as evidenced by the increased funding, the number of AI-related technologies climbing the hype cycle, and an almost stampede mentality, we are basking in the golden light of an AI summer.
Gary Grossman is the Senior VP of Technology Practice at Edelman and Global Lead of the Edelman AI Center of Excellence.
See original here:
Is AI in a golden age or on the verge of a new winter? - VentureBeat
Posted in Ai
China throws its weight behind A.I. and blockchain as it aims to be the world’s tech leader – CNBC
Posted: at 2:33 pm
A Chinese mobile phone user in Shanghai, China.
Qi Yang | Moment | Getty Images
China, once seen as an imitator when it came to technology, is now looking to take the lead in areas from blockchain to artificial intelligence (AI), much-hyped technologies that are seen as critical to the future.
Despite the U.S.-China trade war, experts say the world's second largest economy will continue pushing its domestic technology sector.
The future of AI, blockchain, financial technology and smartphones will be among the topics discussed at East Tech West, CNBC's technology conference held in the Nansha district of Guangzhou, China.
Here's a look at what China is doing in some of the key technology areas.
Artificial intelligence is a broad term for technology that makes machines mimic human intelligence, for example, in recognizing images or speech.
U.S. and Chinese tech firms are pumping a lot of money into developing AI, and both countries have launched their own national strategies around it.
In 2017, Beijing laid out plans to become the world leader in artificial intelligence by 2030, with the aim of making the industry worth 1 trillion yuan ($147.7 billion).
China has rolled out AI uses, such as facial recognition technology, on a large scale. The U.S. is seen as having the lead, however, when it comes to research and development.
"On AI, China is implementing the technology very fast in facial recognition, speed recognition, self-driving vehicles, smart cities and medical diagnoses," Rebecca Fannin, author of "Tech Titans of China," told CNBC.
"The US still has the lead on R&D in AI, but China is catching up with the tech titans in AI as well as numerous well-funded startups such as Face++, Sensetime and iFlytek."
"China is closing the technological gap with the United States, and though it may not match U.S. capabilities across the board, it will soon be one of the leading powers in technologies such as artificial intelligence (AI)" and other technologies, the Council on Foreign Relations (CFR) said in a recent report.
In China, financial technology is booming. Global investment in fintech ventures more than doubled in 2018, to $55.3 billion, with China accounting for around 46% of that figure, according to Accenture.
The country is home to some of the world's biggest fintech firms, such as Alibaba affiliate Ant Financial, which runs the popular Alipay mobile payments app.
Mobile payments, or paying with a scanned code on your phone, is one area in which China has led the way. Alipay and Tencent-owned WeChat Pay can be used all over the country, from big department stores to street stalls.
But these services are also known as "superapps" because, within the platform, users are able to get other products, from micro-loans to wealth management products. They have provided a way for Chinese consumers to bypass the banks, to some extent.
"China's mobile payment market is most advanced in the world. Alipay and WeChat Pay are two tech titans in this space," Fannin said. "Cash is dead in China. So are credit cards."
Henri Arslanian, chairman of the Fintech Association of Hong Kong, said that technology firms have been driving innovation in financial services, something he calls "techfin."
"What's been really interesting for us, is that the real innovation from a 'techfin' perspective happened not only in Silicon Valley, but really happened here in Asia, in China in particular," Arslanian told CNBC in an East Tech West preview show.
"For many banks around the world, when they want to see what is the disruption coming ahead, what can really disrupt their business, they don't look at Silicon Valley now, but they rather look at China and Asia."
But Chinese fintech has often moved faster than the regulators. Now regulators are putting frameworks in place to have a better grasp on what's going on to help China maintain a leading position in fintech.
Earlier this year, China's central bank issued a three-year "fintech development plan" with the aim of making China the world leader in fintech.
Chinese firms are now some of the biggest in the world. Three of the top five smartphone vendors are Chinese, with Huawei sitting at number two.
Chinese smartphone-makers have for years been accused of copying the likes of Apple when it comes to smartphone designs. But in the past couple of years, they've begun to push the boundaries on handset innovation.
"Many Chinese vendors get plagued with reputations for being copycats. And while some of that is valid, what gets overlooked is how much these same vendors are oftentimes ahead of the rest of the industry in areas like cameras, screens, and charging," Bryan Ma, vice president of devices research at IDC, told CNBC.
Ma pointed to devices like Xiaomi's Mi CC9 which has a 108-megapixel camera and other devices from the company which have edge-to-edge screens.
Blockchain's first major application was for the cryptocurrency bitcoin. It's a so-called decentralized ledger of transactions that cannot be tampered with and it underpins bitcoin.
But the meaning and uses of the technology have evolved. It's now being trialed and used by various industries, from finance to the food industry.
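As a rough illustration of that tamper-evident ledger idea, the sketch below chains records together with hashes using only the Python standard library; it is a conceptual toy, not a description of bitcoin's or any production blockchain's actual design.

```python
# Toy illustration of the tamper-evident ledger idea described above.
# Each block stores the hash of the previous block, so altering any earlier
# record breaks every link that follows and the tampering is easy to detect.
import hashlib
import json

def block_hash(block: dict) -> str:
    # Hash a block's contents deterministically.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain: list, data: str) -> None:
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "data": data, "prev_hash": prev})

def is_valid(chain: list) -> bool:
    # Valid only if every block still points at the current hash of its predecessor.
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

ledger = []
add_block(ledger, "Alice pays Bob 5")
add_block(ledger, "Bob pays Carol 2")
print(is_valid(ledger))                    # True
ledger[0]["data"] = "Alice pays Bob 500"   # tamper with history
print(is_valid(ledger))                    # False: the altered block no longer matches
```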
China has thrown its backing behind the technology. According to state media, President Xi Jinping said that China has a strong foundation and should look to take a leading position in the technology. He reportedly said China should "seize the opportunity" offered by blockchain, adding that the technology could benefit a range of industries including finance, education and health care.
Edith Yeung, managing partner at blockchain-focused venture capital firm Proof of Capital, said Xi's remarks show China's determination in the sector.
"It's clear China wants to lead the world's standard for blockchain technology," Yeung told CNBC. "China is a really government-driven country."
5G refers to next-generation mobile networks that promise faster data speeds and lower latency; the latter means data arrives more quickly after it is requested. This is key to underpinning technologies like driverless cars, which cannot afford any lag in data.
A number of countries such as South Korea and the U.S. are rolling out 5G networks, while China recently turned on its 5G networks ahead of a previously-announced 2020 timeline.
5G is set to play a key role in technology development moving forward but it is also a highly-politicized issue.
The U.S. has tried to convince countries not to use equipment from Chinese firm Huawei in their 5G networks. Huawei is the world's largest telecommunications gear maker. The U.S. says that Huawei is a national security risk because its hardware could be used by the Chinese government to spy on citizens. Huawei has repeatedly denied the claims.
Still, China is on track to becoming the biggest 5G market in the world.
The country will account for the largest number of 5G connections by 2025, more than North America and Europe combined, according to mobile industry body GSMA.
Visit link:
China throws its weight behind A.I. and blockchain as it aims to be the world's tech leader - CNBC
Posted in Ai
Artificial intelligence has become a driving force in everyday life, says LivePerson CEO – CNBC
Posted: at 2:33 pm
2020 is going to be a big year (for artificial intelligence, that is).
At least, that was the message LivePerson CEO Robert LoCascio delivered to CNBC's Jim Cramer on Friday.
"When we think about 2020, I really think it's the start of everyone having [AI]," LoCascio said on "Mad Money." "AI is now becoming something that's not just out there. It's something that we use to drive everyday life."
LivePerson, based in New York City, provides the mobile and online messaging technology that companies use to interact with customers.
Shares of LivePerson closed up just more than 5% on Friday, at $38.32. While it sits below its 52-week high of $42.85, it is up more than 100% for the year.
It reported earnings last week, with total revenue at $75.2 million for the third quarter, which is up 17% compared with the same quarter in 2018.
More than 18,000 companies use LivePerson, including four of the five largest airlines, LoCascio said. Around 60 million digital conversations happen through LivePerson each month, he said.
"You can buy shoes with AI on our platform. You can do airlines. You can do T-Mobile, change your subscription with T-Mobile," he said. "That's the stuff in everyday life."
The world has entered a point where technology has transformed all aspects of communication, LoCascio said.
"Message your brand like you message your friends and family," he said, predicting a day where few people want to pick up the phone and call a company to ask questions. "We're powering all that ... for some of the biggest brands in the world."
LoCascio said LivePerson, which he founded in 1995, now uses AI to power about 50% of the conversations on its platform.
"We're one of the few companies where it's not a piece of the puzzle. It's the entire puzzle," he said.
Excerpt from:
Artificial intelligence has become a driving force in everyday life, says LivePerson CEO - CNBC
Posted in Ai
Has the AI-Generated Art Bubble Already Burst? Two Works at Sothebys Are Greeted by Lackluster Demand – artnet News
Posted: at 2:33 pm
A little more than a year ago, it seemed as if the art world had crossed a Rubicon when an AI-generated portrait sold for a staggering $432,500, 43 times its $10,000 high estimate. The shocking result seemed to open the door to a future in which a living, breathing person need not have any hand in actually creating a work of art. But today's results at Sotheby's contemporary art day sale suggest that human artists need not fear; that dystopian future is still a long way off.
Two works by Obvious, the same collective behind last year's record-setting AI portrait, had a decidedly lackluster performance at Sotheby's on Friday. After less-than-competitive bidding, both examples found buyers within or just above their estimates, around 95 percent less than their predecessor in 2018.
The works sold today were made through a similar process to the one that sold last year, igniting the imagination of tech and art speculators around the world. Portrait of Edmond de Belamy (2018) was the brainchild of the somewhat controversial Paris-based collective Obvious, which marries art and artificial intelligence to yield algorithms that propagate the artwork itself.
The algorithms come from a Generative Adversarial Network (GAN) that aggregates information fed to it by the Obvious group members. In the case of the Famille de Belamy series, the data set consisted of 15,000 portraits dating from the 14th to the 20th centuries, from which the generator system creates a new portrait based on the formal properties of the existing works.
A more recent series, Electric Dreams, was created from Ukiyo-e prints produced in Japan between 1780 and 1880.
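For readers curious what a GAN looks like in code, here is a heavily simplified PyTorch sketch of the generator/discriminator training loop; the network shapes, image size and data are arbitrary assumptions for illustration and are not Obvious's actual model.

```python
# Minimal sketch of a GAN's two competing networks, as described above
# (illustrative only; shapes and data are invented assumptions).
import torch
import torch.nn as nn

IMG_DIM, NOISE_DIM = 64 * 64, 100   # flattened grayscale "portraits"

# Generator: maps random noise to a fake image.
G = nn.Sequential(
    nn.Linear(NOISE_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)

# Discriminator: scores how "real" an image looks.
D = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

def train_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    fake_images = G(torch.randn(batch, NOISE_DIM))

    # Discriminator: label real images 1 and generated images 0.
    d_loss = loss_fn(D(real_images), torch.ones(batch, 1)) + \
             loss_fn(D(fake_images.detach()), torch.zeros(batch, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator: make images the discriminator labels as real.
    g_loss = loss_fn(D(fake_images), torch.ones(batch, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# In the series described above, real_images would come from a dataset of
# historical portraits or Ukiyo-e prints; here we just use random tensors.
train_step(torch.rand(8, IMG_DIM) * 2 - 1)
```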
La Baronne de Belamy, created by Obvious Art. Courtesy of Sotheby's.
Neither work at Sotheby's came from Obvious themselves; they were consigned by private collectors who bought the works from the Paris-based collective in the past year. La Baronne de Belamy (2018), another from the record-setting series, was estimated at $20,000 to $30,000; the second work, Katsuwaka of the Dawn Lagoon (2018), from the more recent Electric Dreams series, was estimated at $8,000 to $12,000.
After less than 25 seconds of bidding, the Belamy portrait sold for $20,000, just meeting its low estimate, or $25,000 with fees. Within the minute, the next lot, Katsuwaka, edged past the presale high estimate to hammer for $13,000, or $16,250 with fees.
The art world was generally skeptical of the surge of electronically created artwork, even after the Belamy boom, and the market took note. An AI-generated work by the artist Mario Klingemann sold at Sotheby's in London in March for a modest $51,000.
See the original post here:
Posted in Ai
The Buck Starts Here: How AI Shapes The Future Of Money – Forbes
Posted: at 2:33 pm
For a long time, financial institutions had a buttoned-down reputation when it came to innovative thinking. Nowadays, even the most conventional and risk-averse parts of the economy are looking at Artificial Intelligence, not long ago considered an experimental, bleeding edge technology.
Wall Street is the financial district of New York City. It is the home of the New York Stock Exchange, the world's largest stock exchange by market capitalization of its listed companies.
Nowhere is the change more dramatic than in Financial Services. Companies use AI across the transaction chain, from spotting complicated credit card scams to easing their regulatory burden. There are reasons on both the client and provider side why this is happening now. The financial institutions have more and better data, and tech companies have lower-cost, better performing algorithms.
Many of the most popular uses of this once-risky technology, it seems, are now in risk management, applied in surprising new ways. Here are a few of them:
Fight fraud
Cyber crime cost the global economy as much as $600 billion in 2017, according to the cyber security firm McAfee. A big part of that is online fraud.
AI-based fraud mitigation technologies go through petabytes of data in the blink of an eye, so merchants and financial institutions can spot erratic or suspicious card usage in real-time payments. Visa's new suite of AI-based tools, offered to clients without an additional fee or sign-up, evaluates a flowing stream of transactions and relies on self-teaching algorithms, rather than static sample datasets and fixed rules, to evaluate transactions as they happen. Firms like Mastercard, TSYS, and First Data have introduced similar AI-driven fraud tools.
AI finds patterns in massive datasets, making it good for spotting complex money-laundering schemes, like when groups of people or businesses act in a coordinated way to set up accounts and push through transactions, some of which may involve dirty money. Natural-language processing, an AI technique that can detect and determine connections between names or groups of people, is useful for finding detection-avoiding strategies, like false names, altered spellings, and aliases.
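As a toy example of the kind of spelling-variant matching described above, the snippet below compares account names against a watch list using simple string similarity from the Python standard library; real anti-money-laundering systems use far richer NLP and entity-resolution techniques, and the names and threshold here are invented.

```python
# Toy sketch of the alias-detection idea described above: flag account names
# that are suspiciously close to a watch-listed name despite altered spelling.
from difflib import SequenceMatcher

WATCHLIST = ["Ivan Petrov", "Acme Trading LLC"]  # hypothetical names

def similarity(a: str, b: str) -> float:
    # Ratio of matching characters between the two normalized strings (0..1).
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def possible_aliases(name: str, threshold: float = 0.8) -> list:
    return [w for w in WATCHLIST if similarity(name, w) >= threshold]

print(possible_aliases("Ivan Petroff"))       # close spelling variant, gets flagged
print(possible_aliases("Acme Tradng L.L.C"))  # dropped letter and punctuation, gets flagged
print(possible_aliases("Jane Doe"))           # unrelated name, returns []
```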
AI may also help reduce phone scams when integrated with voice biometrics that identify a user's voice signature. Organizations can more quickly flag and reroute fraudulent phone calls to their appropriate cyber crime teams, before the bad guys have access to an individual's financial account.
Bad actors endlessly seek new ways to commit crimes. Since much of AI continually trains on new data, it's an effective way to keep fraud detection models sharp.
Reach the unbanked through better lending
Traditionally, the best way to judge risk in lending is through credit scoring. Systems like FICO take into account data ranging from income and payment history to savings account balance or past credit utilization. It is only accurate, however, for those whose credit and banking history is well recorded. Hundreds of millions of underbanked people, with poor data histories, get missed. According to one World Bank estimate, about 68% of adults have no credit data aligned with a private bureau, and therefore no credit score.
AI can help. One fintech startup, Lenddo, uses machine learning algorithms to comb through thousands of nontraditional data pointssuch as social media account use, email subject lines, internet browsing, geolocation data, and other behavioral traitsto find patterns that can determine a customers creditworthiness. Similarly, Tala provides microloans to individuals in the emerging markets via a smartphone app. Tala can see the size of the applicants network and support system, a helpful guide to judging risk. Data also reveals whether the applicant pays their bills on time. This type of data has proven more meaningful than traditional credit scoring, enabling Tala to send money to an approved borrowers smartphone in just a few minutes.
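A minimal sketch of the underlying idea, training a simple classifier on nontraditional behavioral features rather than a bureau file, might look like the following; the feature names and data are invented for illustration, and this is not Lenddo's or Tala's actual model.

```python
# Hedged sketch of credit scoring from nontraditional behavioral features
# (invented features and data, illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [contacts_in_network, bills_paid_on_time_ratio, months_of_phone_history]
X = np.array([
    [120, 0.95, 36],
    [ 15, 0.40,  4],
    [ 80, 0.85, 24],
    [ 10, 0.30,  2],
    [200, 0.90, 48],
    [ 25, 0.55,  6],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = repaid a past microloan, 0 = defaulted

model = LogisticRegression().fit(X, y)

applicant = np.array([[60, 0.80, 12]])
prob_repay = model.predict_proba(applicant)[0, 1]
print(f"Estimated repayment probability: {prob_repay:.2f}")
```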
Equifax launched a credit scoring system, called NeuroDecision, that fuses the ability of neural networks with traditional methods to evaluate risk predictions, including predictions for consumers with flawed or insufficient credit, while providing reason codes that allow businesses to meet regulatory requirements.
Everybody should have the means to open a bank account. It enables them to become active participants in the economy, save for education, and improve their lives. The expansiveness of AI-based credit monitoring offers the ability to put banking into the hands of tens of millions of people who were falling through the cracks of the traditional banking system.
Personalize interactions
Every financial institution wants to know its customers better. Most have plenty of data about their customers; now they just need to tailor it to their customers' needs. This is where AI-driven bots can play a role, and these days that bot is a banker.
One example of this is Erica, the virtual assistant embedded in Bank of America's mobile app. In just over a year, it has been used by more than 6 million people and has processed 35 million client requests.
Erica combines predictive analytics and natural language to help Bank of America mobile app users view their balances, get credit scores, transfer money between accounts, send money with Zelle, and schedule meetings at financial centers. Customers can interact with Erica in several ways, including voice, texting, or a tap on the phone screen.
The more users who interact with Erica, the more it learns, and the better it becomes at providing help. AI is a strong tool for building personalized relationships.
Enforce financial regulations
Money is a heavily regulated commodity, often subject to complex and extensive regulations designed to define acceptable behavior. In the U.S., federal financial agencies receive guidelines from Congress, and these guidelines are often supplemented by state, local, and industry rules. Compliance is a tough problem, particularly when there is no clean boundary between acceptable and unacceptable behavior based on readily observable facts: for example, the requirement that a bank operate in a safe and sound manner.
If the regulations are made more machine-readable, AI should be able to assist in compliance, whether it's an equity transaction for a trader or something more complex. Predictive machine learning can be a valuable input for financial supervisors identifying issues for further analysis.
Read the rest here:
The Buck Starts Here: How AI Shapes The Future Of Money - Forbes
Posted in Ai
How does AI improve grid performance? No one fully understands and that’s limiting its use – Utility Dive
Posted: at 2:33 pm
Just as power system operators are mastering data analytics to optimize hardware efficiencies, they are discovering that artificial intelligence tools, for all their complexity, can do far more, and learning how to choose which to use.
With deployment of advanced metering infrastructure (AMI) and smart sensor-equipped hardware, system operators are capturing unprecedented levels of data. Cloud computing and massive computational capabilities are allowing data analytics to make these investments pay off for customers. But it may take machine learning (ML) and artificial intelligence (AI) to address new power grid complexities.
AI is a form of computer science that would make power system management fully autonomous in real time, researchers and private sector providers of power system services told Utility Dive. ML is a part of AI that passes human-supervised data analytics through preset or learned rules about the system to inform AI of normal and abnormal operational conditions.
"[D]ata management falls into 'crawl, walk, and run' categories, and most utilities are crawling in their use of data right now. AI for data management would be 'running.'"
Kevin Walsh
Transmission and Distribution Principal,OSIsoft
"Knowing when to use data analytics and when to use machine learning and AI are the fundamental questions utilities are asking," GE Digital VP for Data and Analytics Matt Schnugg told Utility Dive. Continuing to use an approach "that has been good enough for years" has merit, but new tools and capabilities may justify "turning to data scientists and cloud computing" and there are "parameters" for knowing how to choose between them.
The sheer volume of data is beginning to exceed human capabilities, but system operators often don't have the technology to deploy demonstrated AI and ML solutions for power flow management, researchers told Utility Dive. The mathematics of the solutions are not yet fully understood, they acknowledged. The next big question may be whether system operators will risk ML and AI for results humans cannot yet provide or understand.
The value of putting power system data to work is increasingly evident. It has saved system operators time and customers money. And it is providing predictive infrastructure maintenance, which can reduce the growing frequency and duration of service interruptions and help avoid major unintended cascading blackouts.
"Utilities have billions of dollars invested in hard assets and they use data to manage those assets," Kevin Walsh, transmission and distribution principal for data management specialist OSIsoft, told Utility Dive. "But data management falls into 'crawl, walk and run' categories, and most utilities are crawling in their use of data right now. AI for data management would be 'running.'"
Data management providers like OSIsoft help utilities assimilate data "and make it available across the enterprise to enable intelligent decisions based on what is actually happening in the field," Walsh said. "For 75% of what utilities are doing, outside of maybe forecasting or managing capacity, AI is at the infancy stage and there is no real use case."
"AI and machine learning shops are too often looking for problems to solve rather than addressing the very specific problems that utilities face and showing how machine learning might be the right solution."
Joshua Wong
CEO, Opus One Solutions
A data stream anomaly spotted by OSIsoft's PI System data analytics allowed the Alectra, Texas, municipal utility to defer a $3 million transformer replacement with a $100,000 transformer repair, he noted. Duke Energy and Sempra Energy use PI analytics for predictive maintenance. Data analytics "does most of what utilities need" without the costs and complexities of AI and ML, he said.
"AI and machine learning are the buzz with investors and the general public, but utilities' key concern is what any analytics will bring to their operations," Opus One Solutions CEO Joshua Wong agreed.
Opus One uses advanced physics and mathematical formulas that underlie the distribution system to build a "digital twin" of a utility's system, Wong told Utility Dive. Largely through analytics software, the twin is used to model and inform utility system operations, planning and market and business model design.
The most common use of ML is for five-minutes-ahead to day-ahead algorithms that do load and generation forecasting, he said. Utilities' legacy software technologies cannot run "the very large and interdependent data sets needed for the entire grid's power flow," but forecasting requires only learning forward "from a single point in history without a lot of dependent phenomena."
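A bare-bones version of that day-ahead forecasting idea, using hour-of-day and the load 24 hours earlier as features, could look like the sketch below; the synthetic data and model choice are assumptions for illustration, not any utility's production forecaster.

```python
# Minimal sketch of the day-ahead load-forecasting use of ML described above
# (synthetic data; real forecasters add weather, calendar effects and years of history).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
hours = np.arange(24 * 60)   # 60 days of hourly history
load = 1000 + 300 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 30, hours.size)

# Features: hour of day and the load 24 hours earlier; target: current load.
X = np.column_stack([hours[24:] % 24, load[:-24]])
y = load[24:]

model = GradientBoostingRegressor().fit(X, y)

# Forecast tomorrow's 24 hourly loads from today's observed loads.
tomorrow = np.column_stack([np.arange(24), load[-24:]])
print(model.predict(tomorrow).round(1))
```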
Better algorithms can be built with a combination of AI tools and data analytics that correlate real data and learning, Wong said. "Machine learning's greatest impact will be in making correlations that teach the algorithm to understand how the voltage here affects the voltage there." Those correlations will enable "optimizing grid operations like dispatching battery storage or managing electric vehicle charging," said Wong.
Utility pilots and research simulations are beginning to show automation can already optimize some of those operations.
In the first pilot for what could eventually be an autonomous grid, Helia Technologies and Colorado electric cooperative Holy Cross Energy (HCE) are testing ML's battery dispatch capabilities.
Four houses on the HCE system are equipped with multiple distributed resources, including batteries. A Helia controller is predicting the batteries' actual charge-discharge potential instead of the vendors' rated capabilities, Helia CEO Francisco Marocz told Utility Dive. That allows the houses to support optimal system power flow for HCE.
A similar pilot testing ML's ability to optimize battery dispatch proved successful for Colorado's United Power, National Rural Electric Cooperative Association Analytics Research Program Manager David Pinney told Utility Dive. "The machine learning algorithm was able to forecast the co-op's optimal dispatch of the various energy storage applications three days ahead."
Duke Energy is "beginning to use machine learning capabilities, especially in the areas of data analysis and predictive analytics," Duke spokesperson Jeff Brooks told Utility Dive in an email. There is still human supervision in Duke's remote switches and reclosers that enable reconfiguring and rerouting in response to outages, but some of it is done with "scripted algorithms and processes."
"Breakthroughs in computational and data processing capabilities make it possible for algorithms to learn through interactions with the grid environment."
Qiuhua Huang
Research Engineer,Pacific Northwest National Laboratory
Over the next three to five years, Duke data scientists plan to "develop and deploy" AI and ML that will more fully automate analytics, outage management and power flow, Brooks said.
Transmission system operators have been slower to move toward automation, but DOE-funded national lab research is now focused on ML algorithms that train neural networks to process system data, researchers told Utility Dive.
Neural networks are sets of algorithms designed to recognize and order patterns in analyzed data and ML can train them to assist transmission system operators facing sudden large voltage fluctuations, Pacific Northwest National Laboratory (PNNL) research engineer Qiuhua Huang told Utility Dive. In early-stage simulations, algorithms responded in milliseconds to prevent voltage instability and cascading outages.
"Breakthroughs in computational and data processing capabilities make it possible for algorithms to learn through interactions with the grid environment," he said. "ML observes historic and real time data and learns to produce good outcomes in a way that seems beyond human intuition."
ML is being used to train a different type of neural network to respond to a transmission line failure caused by a demand spike, weather event or cyberattack, Argonne National Laboratory research scientist Kibaek Kim told Utility Dive. In simulations, it has responded 12 times faster than human operators do today and adjusted the voltages "automatically from the trained model."
The goal is to take system operators "out of the loop" and leave "the decisions from end to end" to AI, National Renewable Energy Laboratory (NREL) research scientist Yingchen Zhang told Utility Dive. It will take time and trials because, unlike an easily discarded ML-selected Netflix movie choice, "the wrong power system decision could cause a blackout across the system and that is not acceptable."
The need for trials to validate reliability is a major reason ML and AI have seen little deployment, Kim said. Another is that, as Wong noted, utilities and system operators do not yet have the hardware and software to use them, although that is being resolved with new cost-effective access to cloud computing, he said.
More significantly, utilities are reluctant because researchers "do not fully understand the underlying mathematics" of the neural networks and "why they work so well," Kim said. "The understanding will come, but I don't know when."
Until then, an algorithm could encounter something unknown and respond incorrectly, he said.
It is clear ML works the way it was designed "because input provided to the algorithm is learned and performs as intended," PNNL's Huang said. "But arguably we don't know exactly how a neural network selects from the inputs and comes to the final decision because there is so much complicated processing to reach that decision."
Research is now directed at the question, he added. The likely explanation is that neural networks are using "a totally different way of interpreting these equations with some higher-level logic."
While the question is being answered, power system operators must decide how to proceed.
Deciding whether ML and AI are needed to address operations long addressed with data analytics depends on two factors, GE Digital's Schnugg said. One is who the system operator is, and the other is what problem the operator wants to address.
ML algorithms "tend to have the most impact when modeling events that have occurred many times, rather than Black Swan events without a data pattern."
Matt Schnugg
VP for Data and Analytics, GE Digital
Guidelines for making the decision "are not canonical," but "there are parameters for when it is best to use AI and machine learning," he said. "First, you have to have a tremendous amount of data cleaned and ready for the algorithm to be trained and built. That is just table stakes. Access to the cloud is usually the most cost-effective way to have that."
Second, ML algorithms "tend to have the most impact when modeling events that have occurred many times, rather than Black Swan events without a data pattern," he said. GE's new Storm Readiness application is built on the history of repeated outages from storms. "Storm readiness is the output of the model. The more outages there are to study, the more accurate the model can be."
Third, modeling must pass the 'yeah, no, duh test' by solving a real problem, Schnugg said. "ML is not needed to predict the sun will rise tomorrow, but if a decision about something very data-rich that occurs repeatedly could lead to appreciably better performance, it is worthy of using AI and ML to build a predictive model."
There are two definitions for "better performance," he added. "It can be a more accurate prediction, or it can be saving time while achieving the same accuracy. An ML-based predictive model that automates a process or a series of decisions that would take a human much longer adds tremendous value."
In GE's new Storm Readiness product, ML algorithms build and train a neural network to learn the system's weather and performance data history. It can then predict 72 hours in advance where the storm will hit the system and what resources will be needed to address its impacts.
In contrast, its new Network Connectivity product relies entirely on traditional data analytics to manage transmission and distribution system assets. The objective is to optimize the utility's business activities, from hardware maintenance to truck rolls.
The GE Effective Inertia application is a hybrid tool that combines real time transmission system data analytics and an ML-based load and generation forecasting algorithm. It anticipates fluctuations in system inertia 24 hours in advance from momentary supply-demand imbalances caused by rising levels of variable renewables, and informs cost-effective reserve procurements to stabilize the fluctuations.
"The cloud has democratized access to data, and now it is the quality of the data and the quality of the question being asked that are most important," Schnugg said. "ML and AI are only part of the value. The biggest value is helping the utility solve its problem."
See more here:
Posted in Ai
The post-exponential era of AI and Moores Law – TechCrunch
Posted: at 2:33 pm
My MacBook Pro is three years old, and for the first time in my life, a three-year-old primary computer doesn't feel like a crisis which must be resolved immediately. True, this is partly because I'm waiting for Apple to fix their keyboard debacle, and partly because I still cannot stomach the Touch Bar. But it is also because three years of performance growth ain't what it used to be.
It is no exaggeration to say that Moore's Law, the mindbogglingly relentless exponential growth in our world's computing power, has been the most significant force in the world for the last fifty years. So its slow deceleration and/or demise are a big deal, and not just because the repercussions are now making their way into every home and every pocket.
We've all lived in hope that some other field would go exponential, giving us another, similar era, of course. AI/machine learning was the great hope, especially the distant dream of a machine-learning feedback loop, AI improving AI at an exponential pace for decades. That now seems awfully unlikely.
In truth it always did. A couple of years ago I was talking to the CEO of an AI company who argued that AI progress was basically an S-curve, and that we had already reached its top for sound processing, were nearing it for image and video, but were only halfway up the curve for text. No prize for guessing which one his company specialized in, but he seems to have been entirely correct.
Earlier this week OpenAI released an update to their analysis from last year regarding how the computing power used by AI [1] is increasing. The outcome? It has been increasing exponentially with a 3.4-month doubling time (by comparison, Moore's Law had a 2-year doubling period). Since 2012, this metric has grown by more than 300,000x (a 2-year doubling period would yield only a 7x increase).
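The arithmetic behind those figures is easy to check; the short calculation below assumes an elapsed window of roughly 5.2 years, which is approximately what the quoted 300,000x and 3.4-month doubling time imply, rather than an exact span from OpenAI's analysis.

```python
# Back-of-the-envelope check of the doubling arithmetic quoted above.
# The elapsed span is an assumption, chosen to be consistent with the
# quoted figures; the point is how sharply the two growth rates diverge.
months_elapsed = 62  # roughly 5.2 years

for label, doubling_months in [("3.4-month doubling (AI training compute)", 3.4),
                               ("24-month doubling (Moore's Law pace)", 24.0)]:
    growth = 2 ** (months_elapsed / doubling_months)
    print(f"{label}: ~{growth:,.0f}x")
# The first line comes out around 300,000x, the second in the single digits,
# consistent with the 300,000x-versus-7x comparison cited above.
```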
That's a lot of computing power to improve the state of the AI art, and it's clear that this growth in compute cannot continue. Not will not; can not. Sadly, the exponential growth in the need for computing power to train AI has happened almost exactly contemporaneously with the diminishment of the exponential growth of Moore's Law. Throwing more money at the problem won't help; again, we're talking about exponential rates of growth here, and linear expense adjustments won't move the needle.
The takeaway is that, even if we assume great efficiency and performance improvements to reduce the rate of doubling, AI progress seems to be increasingly compute-limited at a time when our collective growth in computing power is beginning to falter. Perhaps there'll be some sort of breakthrough, but in the absence of one, it sounds a whole lot like we're looking at AI/machine-learning progress leveling off, not long from now, and for the foreseeable future.
[1] It measures the largest AI training runs, technically, but this seems trend-instructive.
Read more here:
Posted in Ai
Why Federal Agencies Can’t Ignore the AI Buzz Much Longer – Nextgov
Posted: at 2:33 pm
Anyone standing in line at the Department of Motor Vehicles has probably wondered why the waits are so long. Why it's seemingly so difficult to complete basic tasks. Why, in this digital age, we can't apply a little modern technology to take the pain out of such agonizing experiences.
We've also heard the buzz about artificial intelligence potentially coming along to save the day by automating overly bureaucratic processes to make our lives simpler. And many government officials probably think that's all it is: buzz, right?
Well, not for long. While AI doesn't appear ready to debut at the driver registration counter anytime soon, it is beginning to find its way into other government operations. Indeed, worldwide spending on AI systems is expected to hit nearly $98 billion by 2023, driven in large part by public sector adoption, according to IDC.
That's because nobody is more aware than government organizations themselves of the annoyances that go along with antiquated procedures. While the private sector races ahead with modern productivity and life-enhancing technologies, non-defense agencies often struggle to do the same. Many people cling to older computers, operating systems and communications gear because they seem to work well enough and don't bust their often-meager budgets. As a result, though, these departments take longer to complete projects and are less able to deliver the service levels that citizens desire and deserve.
Automated into Action
As a result, AI is becoming more interesting for many government agencies. They are open to exploring the potential benefits of AI in the workplace, from automating time-consuming manual tasks to advanced data mining. In practical use, AI is more about augmenting human capabilities to make us more productive, a fact that is probably fueling a White House plan to spend nearly $1 billion on non-defense AI research and development in 2020.
The productivity advantages of AI for the public sector cannot be underestimated. According to Deloitte research, up to 1.2 billion of the estimated 4.3 billion hours that government employees work each year could be freed up today using AI. Whats more, Deloitte estimates agencies could save as much as $41.1 billion annually by using this technology to automate various processes.
No doubt, deploying AI will displace some percentage of employees performing relatively low-skilled and repeatable tasks. However painful for those individuals, industry observers note people always worry about job loss when technological progress comes along to improve operational efficiency and productivity. In addition, analysts such as Gartner have predicted that AI will create more jobs than it eliminates by leading to as yet unknown innovations that will require new employees with special skills.
Improving Citizen Services
Beyond the productivity benefits, AI could also enable government agencies to deliver a level of service few citizens have thought about or expect today. For instance, imagine a city's water main breaking. Today, a nearby resident might contact the local water utility to alert them to the issue. Between the time of the citizen call, the repair truck hitting the road and the problem being resolved, all sorts of local problems could occur. Snarled traffic. Inadequate water flow for firefighters in an emergency. Homeowners left wondering if they forgot to pay their water bill when the dishwasher or shower stopped working.
But with AI, sensors would immediately alert the utility to the main break before the citizen even notices. The system might calculate the best route for repair crews to take to reach the problem spot. It could also reroute water to local fire hydrants for the duration of the event. It could even utilize various APIs to let residents know theyll be without water for a short time and provide digital updates on the repairs.
This small smart city example might seem a bit far-fetched. But AI is already being used by some governments around the world to do everything from helping public servants make welfare payments and immigration decisions to planning new infrastructure projects, answering citizen inquiries through chatbots, setting bail hearings and establishing drone flight paths.
Overcoming Security Hurdles
AI is also becoming a topic of conversation among legislators who see it as a possible solution for the massive backlog of government security clearances. In February, government officials rolled out plans for their Trusted Workforce 2.0 framework, which strives to use technology to make background investigations a daily event, rather than one conducted every five to 10 years.
In addition, AI is gaining attention for its ability to both defend against and create cybersecurity issues. Security professionals and legislators alike are growing worried about the potential to weaponize AI to create new generations of malware that automatically target larger groups of vulnerable individuals or agencies in ways that are even more difficult to detect or defend against. Of even more concern, because AI is based on machine learning, malware could theoretically adapt in real time to any cyber-defenses, giving hackers a constant leg up in their attacks.
There are few good examples of this occurring yet. But that's not stopping some vendors from trying to fight AI fire with AI fire. For instance, one recently announced anti-malware solution utilizes deep learning AI technology to look for behavioral patterns that could indicate a pending cyberattack. It's one of the first instances of trying to predict and shut down a looming threat, as opposed to merely noting abnormal behavior already under way and then isolating it.
Artificial intelligence will almost certainly offer the public sector far more benefits than challenges as it continues to evolve and ready itself for primetime. To keep up with the speed at which AI applications are evolving, government agencies should start to adapt their programs now and evaluate how AI can best assist in the future.
Todd Gustafson is president of HP Federal.
More here:
Why Federal Agencies Can't Ignore the AI Buzz Much Longer - Nextgov
Posted in Ai