The Prometheus League
Category Archives: Ai
AI Weekly: How power and transformative tech reshape our world – VentureBeat
Posted: November 17, 2019 at 2:33 pm
This week, VentureBeat launched a quarterly magazine. Like the AI Weekly, the special issue gives our editorial team a chance to reflect on important transformative technology influencing business, technology, and society.
The first issue focuses on the relationship between power and AI. Power can shape AI, from how we define ethical use of artificial intelligence and protect personal data, to how AI may change how we define inventions, to how AI may be used as both a tool and a weapon.
By design, the special issue drew on topics that shape our lives and our collective future. The articles tackle issues that linger in the news cycle.
While the special issue began to roll out Monday, the world heard from Jamie Heinemeier Hansson. When she and her husband, Ruby on Rails creator David Heinemeier Hansson, applied for an Apple Card, he was given a credit limit 20 times higher than hers, in what many believed was a demonstration of algorithmic bias. It took no more than two days for a series of David's tweets complaining about it to trigger Wall Street regulators to open an investigation. Apple cofounder Steve Wozniak also complained about the credit limit that Apple Card extended to his wife.
The fact that two powerful white men complaining in tweets led to swift government action did not go unnoticed by AI ethicists or people of color who routinely document, witness, or experience algorithmic bias, nor by Jamie Heinemeier Hansson herself.
"This is not merely a story about sexism and credit algorithm black boxes, but about how rich people nearly always get their way. Justice for another rich white woman is not justice at all," she said in a blog post.
The world also got its latest dose this week of Elon Musk saying outlandish things about AI that doesn't exist. In an interview on an AI podcast, the Tesla and SpaceX CEO said he believes AI equipped with his Neuralink brain hardware will be able to solve autism and schizophrenia, though schizophrenia is a mental disorder and autism is a developmental disability. People with the luxury to focus on things AI cannot do are missing or ignoring the growing number of ways AI is impacting human lives today. It's an expression of the relation between AI ethics and power.
Both of these stories reflect themes seen throughout the special issue's cover story about how power lies just under the surface of all AI ethics conversations.
The power in AI theme could also be seen this week in news reports that asserted automated bots attempted to sway elections held in the U.S. last week, and in Chris O'Brien's work that lays out the case that deepfakes are not only a threat to the future of democracy but could also fuel a virtual arms race.
Power in AI also came up this week when Portland became the latest major city to propose a ban on facial recognition use, and when Fight for the Future activists made the ethically questionable choice to use Amazon's Rekognition on thousands of Washington D.C. residents to prove the point that Congress needs to take action on facial recognition regulation now.
Other topics in the special issue will continue to percolate in ongoing conversations, like the need to have a human in the loop to avoid an AI-driven catastrophe and the ethics game developers should consider when creating humanlike AI in virtual worlds.
We want each special issue to strike at the heart of conversations happening among business executives, tech workers, the AI ecosystem, and society at large about issues transforming the world. We're here to convene important conversations for you, so if you have an idea for the focus of a future special issue, fill out this form and let us know.
Watch out for the second special issue in early 2020.
For AI coverage, send news tips to Khari Johnson, Kyle Wiggers, or AI editor Seth Colaner and be sure to bookmark our AI Channel.
Thanks for reading,
Khari Johnson
Senior AI Staff Writer
Infographic: The Emergence of AI in Retail – Robotics Business Review
November 15, 2019, by Demetrius Harrison
Artificial intelligence, through the use of software and robotics, is entering the customer service space across many retail locations. As of 2018, 28% of retailers are utilizing AI in some form, a 600% increase compared to 2016. Retailers in food/grocery, multi-category department stores, apparel, footwear, and home improvement stores have all embraced the adoption of AI.
About 26% of AI technology in retail directly interacts with customers; the remaining 74% is used for back-of-house operations, boosting operational efficiency alongside existing workers or taking over dull tasks.
The future of customer service appears to be an increase in chatbots and virtual assistants. For example, H&M is using AI to style outfits for customers who send a link to their favorite item. Several retailers are using robots to roam store aisles and take inventory in order to make sure products are available for customers. Some locations are trialing the use of customer-service robots, while others are using AI to make personalized recommendations for consumers. In a recent survey, 73% of consumers said they prefer brands that use their personal data to improve the shopping experience.
By 2023, an estimated 95% of supply chain vendors will rely on AI learning. This is predicted to improve day-to-day, bottom line operations by forecasting error rates, structuring planning for productivity, and reducing cost by tracking shipment data.
The infographic below can paint a better picture of how AI will be used in retail.
Why the UK needs a national strategy around AI and work – NS Tech
News this week of the Ministry of Justice's AI tool for prisoners delivering discriminatory outcomes will come as no surprise to many, but precious few people seem to be suggesting solutions to what is an emerging but potentially widespread problem. As the failures and inbuilt bias of AI tools hit the headlines once again, it is important to look not only at the problems with new tech, but also at how we utilise it to the benefit of all.
The backlash against facial recognition software, for example, is only the latest symptom of how a failure to include people in discussions and decisions about new technologies is threatening their legitimacy and the positive opportunities they could offer.
Nowhere is the risk of disconnection and disempowerment greater than in the workplace. Employers are increasingly looking to tech as the solution to their problems and the future of their business, but with little thought about how to engage or involve their workforce.
Challenges to this trajectory are now erupting within the tech sector itself, with an increase in organising activity among tech developers themselves, from Google to the gaming industry, and among workers subject to surveillance or pushed into precarious employment by tech-enabled businesses like Uber or Amazon.
Research we commissioned from YouGov earlier in the year found that 58 per cent of UK workers felt they would be locked out of any discussion about how technology would affect their jobs. No wonder many see AI as a threat, rather than something with possibilities for improving their working lives. This is why Prospect is a partner to this year's Women Leading in AI conference on accountability and trust in Artificial Intelligence. Our members are optimistic about the future of work, but concerned about the rules that will govern it. We need to get serious about how we fix the culture of tech before it extends even further into our way of life. Ignoring workers in this debate is a sure-fire way of entrenching distrust and provoking opposition.
The real issue is not the technology but the power relationships behind it. These issues are familiar to unions like Prospect, but the speed of change means we urgently need to keep updating our ways of addressing them. In the last century, collective bargaining and campaigning focused on regulating human relationships and physical working conditions. We now need to understand a future in which critical relationships will be between humans, computer programs and data. The danger facing us is that AI and related technologies build in existing inequalities and insecurity, and in some cases make them worse.
A new agenda is already emerging. DIY unions and tech activism are spreading in America. Precarious workers are fighting back against exploitation by platform companies. Our friends in the GMB are working with unions worldwide to organise Amazon warehouse workers. At an international level we have been working through Uni Global Union, our international federation, on privacy and worker-focussed AI rules, as well as using new tech to empower employees. This month, as an alternative to employer-controlled surveillance and monitoring, we are piloting a new app, a bit like a FitBit for workers, that allows employees to collect their own data on working patterns and pressures. In the UK we are working with the Institute on the Future of Work to look at how the Equality Act can be used to tackle discrimination in algorithms and machine learning, and with the Fabian Commission on Workers and Technology on how we ensure automation is used to benefit everyone.
The UK has an opportunity to benefit from early adoption of technologies like AI. But if we don't talk about power and the imbalances it creates in work and society, then we won't get ethics right or start to deal with distrust. There are four principles that should define our approach.
First, worker voice and co-operation so that those developing, using and impacted by new technologies have a real say on their purpose, design, and implementation. AI ethics need to extend beyond the boardroom and actively engage and use the experiences of workers. Unions are leading the way with New Technology Agreements and increasing attention paid to issues such as transparency and data ownership in their bargaining agendas.
Second, a new focus on the social benefits of technology because a narrow focus on the technical intricacies (or commercial applications) will feel aloof and alienate people from the solutions technology can bring.
Third, we need to hear much, much more about job transformation so that workers are at the centre of the debate about the transition to a new economy. The government's Industrial Strategy singularly fails to include workers in its plans. But nearly two thirds of CEOs recently surveyed by PwC recognised that we need a national strategy around AI and work, which the state needs to play a key role in developing.
Finally, we need a national framework that all social partners can buy into, setting out what national policy we need to develop innovative, transformative technology that is ethically responsible and socially beneficial. That should include employee and trade union representation on the board of the AI Council, and ensuring worker voice is part of the work of bodies like the ICO and the Centre for Data Ethics. New EU Commissioner Margrethe Vestager is already talking of plans for tougher regulation of big tech and ethical rules for AI. The EU social partnership model will mean that workers will be involved in these policy discussions. If the UK is to leave the EU, we must look to at least match this commitment, not think we can win by cutting workers out of the conversation.
There is a saying in the equality movement that is apt here: nothing about us, without us. Imposing change rarely gets the best outcomes. Taking people with you always gets you further.
Andrew Pakes is Research Director at Prospect Union. He tweets @andrew4mk
Company trains AI to find wildfires faster than humans can – Roanoke Times
Multiple factors often align to make California wildfires unusually hard to contain: hurricane-force winds that sweep toward the coastline, steep and often rough terrain, drought conditions exacerbated by climate change and finite resources spread thin by a vast landscape covered in wilderness.
As if that weren't bad enough, many wildfires are fueled by another accelerant, something that almost ensures destruction on a mass scale: time.
California has 33 million acres of forest, far too much land for state agencies to monitor simultaneously. By the time reports of fires reach authorities and resources are mobilized, many hours, and sometimes days, can pass.
A San Francisco-based technology company called Chooch AI is trying to narrow that gap with the help of artificial intelligence, reducing the time between a fire's eruption and the moment it's spotted by people. The company, which is working with state agencies, researchers and technologists, is developing an AI tool that would scour hyper-detailed imagery from satellites for evidence of wildfires largely invisible to the naked eye. If successfully refined, experts believe, the tool could lead to earlier wildfire detection that would almost certainly save more people and property from destruction.
Using the same imagery, the AI could be used to identify vegetation that is dangerously close to power lines and locate spots where fires are likely to erupt.
The more specific goal, according to CEO Emrah Gultekin, is an ambitious one: monitoring every acre of forest in California every 10 minutes and alerting authorities almost immediately when a problem arises. He believes the AI may be operational as early as next fire season, which traditionally occurs in the fall, after the state's brush land has been dried out by months of summer heat.
"If there's a small brush fire, you want to get there as fast as possible," Gultekin said. "Oftentimes, it's out of control by the time people realize there's a fire and firefighters can be dispatched to the area, which is often hard to reach."
"Ultimately, we want to stop these fires from happening entirely," he added, noting that early damage estimates from fires tearing through California in recent weeks have already exceeded $25 billion. "It's a huge loss to the economy and a huge loss to human life and wildlife and it's seriously costing families. There's a lot of suffering going on."
Even today, with satellites taking detailed pictures of the Earth multiple times a day, most fires are reported by people on the ground or in planes overhead. To locate fires, Chooch AI is training its AI to rely on infrared technology to identify heat and smoke, the latter of which can be pinpointed on an acre-by-acre basis.
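As a toy illustration of the idea (not Chooch AI's actual pipeline, whose internals are not public), acre-by-acre hotspot detection from infrared imagery can be as simple as flagging grid cells whose readings exceed a heat threshold; the threshold and temperatures below are assumed values:

```python
# Hypothetical sketch: flag cells in an infrared frame that exceed
# a heat threshold. Values are assumed for illustration only.
HEAT_THRESHOLD = 330.0  # Kelvin

def hotspots(frame):
    """Return (row, col) indices of cells hotter than the threshold.

    frame: 2D list of per-cell infrared temperatures in Kelvin.
    """
    return [(r, c)
            for r, row in enumerate(frame)
            for c, temp in enumerate(row)
            if temp >= HEAT_THRESHOLD]

frame = [
    [295.0, 296.1, 298.0],
    [297.2, 341.5, 299.0],  # one anomalously hot cell
    [296.0, 298.3, 297.7],
]
print(hotspots(frame))  # [(1, 1)]
```

A production system would combine many signals over time, such as smoke plumes, frame-to-frame change and weather data, rather than a single static threshold.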
For a hypothetical comparison, Gultekin estimated you'd need thousands of people to pore over millions of acres of rural land with a similar degree of precision. By tracking traffic patterns and campfires, the AI can also identify where people have massed on the ground, creating a higher risk for wildfires. The overwhelming majority of wildfires, about 85% according to some estimates, are caused by humans, experts say.
Gultekin said Chooch AI's system is similar to models that are being used to pilot autonomous vehicles and operate facial-recognition technology at airports. Unlike similar systems, he said, Chooch AI has no plan to profit off the technology.
"The good thing about this AI is that it's agnostic to the weather conditions," Gultekin noted. "The AI can see through clouds and it can see through smoke, so it sees everything."
So far, the company's main obstacles have not revolved around training its AI, but around getting access to useful satellite imagery. Because high-resolution satellites only provide data every 12 hours, Chooch AI's team is training its AI to interpret lower-resolution imagery, known as geostationary satellite imagery, which is updated more frequently.
Nirav Patel, a geospatial data scientist and an AI expert with California's Defense Innovation Unit, or DIU, who has been in touch with Chooch AI, said he believes their tool, when finished, has the potential to benefit firefighters on the ground. Patel is also a noncommissioned officer with California's Air National Guard and was part of recent efforts to extinguish the Kincade Fire in Sonoma County. He said he has seen how unpredictable raging fires can become when early warnings fail.
"That first six to 18 hours is critical to understand how the fire is moving so people can determine what physical or man-made obstacles can be placed in its path," he said, noting that a satellite-based AI platform could be used alongside planes outfitted with sophisticated sensors and human assets on the ground, such as fire lookouts.
There are other efforts to harness the power of AI to combat wildfires. Last year, James MacKinnon, a NASA computer engineer, told Fast Company he'd developed an algorithm that could quickly process imagery onboard satellites, a much faster process than sending images to Earth to be processed by a supercomputer. MacKinnon said his algorithm recognized fires with 98% accuracy.
"The fires stick out like a sore thumb," he told the publication.
As wildfires ravage California, Patel said DIU is interested in partnering with companies that can offer solutions but want to maintain their intellectual property. To encourage commercial companies such as Chooch AI to develop innovative methods for combating natural disasters such as wildfires and doing damage assessments, DIU has created a competition for members of the AI community called the xView2 Challenge.
When governments are faced with crises, whether it's war or natural disasters, those challenges often produce technological innovation. Gultekin, whose company is participating in the xView2 Challenge, said California's wildfires are no different, particularly when it comes to AI.
"Five or 10 years ago, no way we could've used AI to interpret satellite data in a meaningful way," he said. "Right now, maybe we can. We're working on it and lots of other companies are as well. Expect a boom in this type of technology over the next five years."
The Apple Card algo issue: What you need to know about A.I. in everyday life – CNBC
Apple CEO Tim Cook introduces Apple Card during a launch event at Apple headquarters on Monday, March 25, 2019, in Cupertino, California.
Noah Berger | AFP | Getty Images
When tech entrepreneur David Heinemeier Hansson recently took to Twitter saying the Apple Card gave him a credit limit that was 20 times higher than his wife's, despite the fact that she had a higher credit score, it may have been the first major headline about algorithmic bias you read in your everyday life. It was not the first; there have been major stories about potential algorithmic bias in child care and insurance, and it won't be the last.
The chief technology officer of project management software firm Basecamp, Heinemeier Hansson was not the only tech figure speaking out about algorithmic bias and the Apple Card. In fact, Apple's own co-founder Steve Wozniak had a similar experience. Presidential candidate Elizabeth Warren even got in on the action, bashing Apple and Goldman, and regulators said they are launching a probe.
Goldman Sachs, which administers the card for Apple, has denied the allegations of algorithmic gender bias, and has also said it will examine credit evaluations on a case-by-case basis when applicants feel the card's determination is unfair.
Goldman spokesman Patrick Lenihan said algorithmic bias is an important issue, but the Apple Card is not an example of it. "Goldman Sachs has not and will never make decisions based on factors like gender, race, age, sexual orientation or any other legally prohibited factors when determining credit worthiness. There is no 'black box,'" he said, referring to a term often used to describe algorithms. "For credit decisions we make, we can identify which factors from an individual's credit bureau-issued credit report or stated income contribute to the outcome. We welcome a discussion of this topic with policymakers and regulators."
As AI and the algorithms that underlie technology become an increasingly large part of everyday life, it's important to know more about the technology. One of the major claims made by technology firms using algorithms in decisions like credit scoring is that algorithms are less biased than human beings. That's being used in areas like job hiring: The state of California recently passed a rule to encourage the development of more job-based algorithms to remove human bias from the hiring process. But it is far from 100% scientifically proven that an AI that relies on code written by humans, as well as data fed into it as a learning mechanism, will not reflect the existing biases of our world.
Here are key points about AI algorithms that will factor into future headlines.
As Hansson and his wife found out, AI systems are becoming more commonplace in areas that everyday people rely on.
This technology is not only being introduced in credit and job hiring but also in insurance, mortgages and child welfare.
In 2016, Allegheny County, Pennsylvania, introduced a tool called the Allegheny Family Screening Tool. It is a predictive-risk modeling tool that is used to help with child welfare call-screening decisions when concerns of child maltreatment are raised to the county's department of human services.
The system collects data on each person in the referral and uses it to create an "overall family score." That score estimates the likelihood of a future maltreatment event.
Allegheny did face some backlash, but one conclusion was that the tool created "less bad bias" than the process it replaced. Other places, including Los Angeles, have used similar technology in an attempt to improve child welfare. It is an example of how AI systems will be used in ways that can significantly affect people, and as a result, it is important to know how those systems can be flawed.
Most AI is created through a process called machine learning: teaching a computer by feeding it thousands of pieces of data so that it learns the patterns of the data set by itself.
An example would be giving an AI system thousands of pictures of dogs, with the purpose of teaching the system what a dog is. From there the system would be able to look at a photo and decide whether it is a dog or not based on that past data.
So what if the data you are feeding a system is 75% golden retrievers and 25% Dalmatians?
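The skewed-dogs scenario can be made concrete with a few lines of code: a learner that simply echoes the majority of its training data looks respectable on overall accuracy while failing the minority class completely. The 75/25 split below mirrors the example above; the "model" is a deliberate toy, not a real classifier:

```python
# Hypothetical sketch of how class imbalance hides failure.
from collections import Counter

# Training set: 75 golden retrievers, 25 Dalmatians.
train_labels = ["golden"] * 75 + ["dalmatian"] * 25

# A naive "learner" that just memorizes the majority class.
majority = Counter(train_labels).most_common(1)[0][0]
predictions = [majority] * len(train_labels)

# Overall accuracy looks decent...
accuracy = sum(p == t for p, t in zip(predictions, train_labels)) / len(train_labels)
# ...but every single Dalmatian is misclassified.
dalmatian_recall = sum(
    p == t for p, t in zip(predictions, train_labels) if t == "dalmatian"
) / 25

print(accuracy)          # 0.75
print(dalmatian_recall)  # 0.0
```

The same arithmetic is why a face-recognition model trained on overwhelmingly lighter-skinned photos can report high accuracy while failing darker-skinned faces.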
Dr. Sarah Myers West, a postdoctoral researcher at the AI Now Institute, says these systems are built to reflect the data they are fed, and that data can be built on bias.
"These systems are being trained on data that's reflective of our wider society," West said. "Thus, AI is going to reflect and really amplify back past forms of inequality and discrimination."
One real-world example: while the human manager-based hiring process can undoubtedly be biased, debate remains over whether algorithmic job-application technology actually removes human bias. The AI learning process could incorporate the biases of the data it is fed, for example, the resumes of top-performing candidates at top firms.
The AI Now Institute has also found biases in the people who are creating AI systems. In an April 2019 study, they found that only 15% of the AI staff at Facebook are women, and only 4% of their total workforce are black. Google's workforce is even less diverse, with only 10% of their AI staff being women and 2.5% of their workers black.
Joy Buolamwini, a computer scientist at MIT, found during her research on a project that would project digital masks onto a mirror, that the generic facial recognition software she was using would not identify her face unless she used a white colored mask.
She found that her system could not identify the face of a black woman, because the data set it was trained on was overwhelmingly lighter-skinned.
"Quite clearly, it's not a solved problem," West said. "It's actually a very real problem that keeps resurfacing in AI systems on a weekly, almost daily basis."
AI algorithms are completely proprietary to the companies that created them.
"Researchers face really significant challenges understanding where there's algorithmic bias because so many of them are opaque," West said.
Even if we could see them, it doesn't mean we would understand them, says Dipayan Ghosh, co-director of the Digital Platforms and Democracy Project and Shorenstein Fellow at Harvard University.
"It's difficult to draw any conclusions based on source code," Ghosh said. "Apple's proprietary creditworthiness algorithm is something that not even Apple can easily pin down, and say, 'Okay, here is the code for this,' because it probably involves a lot of different sources of data and a lot of different implementations of code to analyze that data in different siloed areas of the company."
To take things a step further, companies like Apple write their code to be legible to Apple employees, and it may not make sense to those outside of the company.
Right now there is little government oversight of AI systems.
"When AI systems are being used in areas that are of incredible social, political and economic importance, we have a stake in understanding how they are affecting our lives," West said. "We currently don't really have the avenues for the kind of transparency we would need for accountability."
One presidential candidate is trying to change that. New Jersey Senator Cory Booker sponsored a bill earlier this year called "The Algorithmic Accountability Act."
The bill requires companies to look at flawed algorithms that could create unfair or discriminatory situations for Americans. Under the bill, the Federal Trade Commission would be able to create regulations to 'conduct impact assessments of highly sensitive automated decision systems.' That requirement would impact systems under the FTC's jurisdiction, new or existing.
The description of the bill on Booker's website directly cites algorithmic malpractice by Facebook and Amazon in past years.
Booker isn't the first politician to call for better regulation of AI. In 2016, the Obama administration called for development within the industry of algorithmic auditing and external testing of big data systems.
While government oversight is rare, an increasing practice is third-party auditing of algorithms.
The process involves an outside entity coming in and analyzing how the algorithm is made without revealing trade secrets, which is a large reason why algorithms are private.
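One common black-box technique auditors use is paired testing: query the model with inputs that are identical except for a protected attribute and compare the outputs. The scoring function below is a deliberately biased toy stand-in, not any real lender's algorithm; its weights are invented for illustration:

```python
# Hypothetical sketch of a paired-input fairness probe.
def credit_model(income, credit_score, gender):
    """Toy stand-in for an opaque scorer; (incorrectly) penalizes one group."""
    limit = income * 0.2 + credit_score * 10
    return limit * (0.5 if gender == "F" else 1.0)

def paired_audit(applicant):
    """Return the relative gap between otherwise-identical applicants."""
    a = credit_model(gender="M", **applicant)
    b = credit_model(gender="F", **applicant)
    return abs(a - b) / max(a, b)

gap = paired_audit({"income": 100_000, "credit_score": 800})
print(gap)  # 0.5: the protected attribute alone halves the limit
```

A real audit would run thousands of such pairs drawn from realistic applicant distributions, without ever seeing the model's source code, and test whether observed gaps are statistically significant.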
Ghosh says this is happening more frequently, but not all of the time.
"It happens when companies feel compelled by public opinion or public sway to do something because they don't want to be called out for having had no audits whatsoever," Ghosh said.
Ghosh also said that regulatory action can happen, as seen in the FTC's numerous investigations into Google and Facebook. "If a company is shown to harmfully discriminate, then you could have a regulatory agency come in and say 'Hey, we're either going to sue you in court, or you're going to do X,Y and Z. Which one do you want to do?'"
This story has been updated to include a comment from Goldman Sachs that it has not and will never make decisions based on factors like gender, race, age, sexual orientation or any other legally prohibited factors when determining credit worthiness.
What AI startups need to achieve before VCs will invest – TechCrunch
David Blumberg, Contributor
Funding of artificial intelligence-focused companies reached approximately $9.3 billion in the U.S. in 2018, an amount that will continue to rise as the transformative impact of AI is realized. That said, not every AI startup has what it takes to secure an investment and scale to success.
So, what do venture capitalists look for when considering an investment in an AI company?
What we look for in all startups
Some fundamentals are important in any of our investments, AI or otherwise. First, entrepreneurs need to articulate that they are solving a large and important problem. It may sound strange, but finding the right problem can be more difficult than finding the right solution. Entrepreneurs need to demonstrate that customers will be willing to switch from what they're currently using and pay for the new solution.
The team must demonstrate their competence in the domain, their functional skills and above all, their persistence and commitment. The best ideas likely wont succeed if the team isnt able to execute. Setting and achieving realistic milestones is a good way to keep operators and investors aligned. Successful entrepreneurs need to show why their solution offers superior value to competitors in the market or, in the minority of cases where there is an unresolved need why theyre in the best position to solve it.
In addition, the team must clearly explain how their technology works, how it differs and is advantageous relative to existing competitors and must explain to investors how that competitive advantage can be sustained.
For AI entrepreneurs, there are additional factors that must be addressed. Why? It is fairly clear that we're in the early stages of this burgeoning industry, which stands to revolutionize sectors from healthcare to fintech, logistics to transportation and beyond. Standards have not been settled, there is a shortage of personnel, large companies are still struggling with deployment, and much of the talent is concentrated in a few large companies and academic institutions. In addition, regulatory challenges are complex and growing because the technology keeps evolving.
Here are five things we like to see AI entrepreneurs demonstrate before making an investment:
Demonstrate mastery over their data and its value: AI needs big data to succeed. There are two models: companies can either help customers add value to their data or build a data business using AI. In either case, startups must demonstrate that the data is reliable, secure and compliant with all regulatory rules. They must also demonstrate that AI is adding value to their own data: it must explain something, derive an explanation, identify important trends, optimize or otherwise deliver value.
With the sheer abundance of data available for companies to collect today, it's imperative that startups have an agile infrastructure in place that allows them to store, access and analyze this data efficiently. A data-driven startup must become ever more responsive, proactive and consistent over time.
AI entrepreneurs should know that while machine learning can be applied to many problems, it may not always yield accurate predictions in every situation. Models may fail for a variety of reasons, one of which is inadequate, inconsistent or variable data. Successful mastery of the data demonstrates to customers that the data stream is robust, consistent and that the model can adapt if the data sources change.
Entrepreneurs can better address their customer needs if they can demonstrate a fast, efficient way to normalize and label the data using meta tagging and other techniques.
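The normalize-and-label step described above can be sketched in a few lines. This is a minimal illustration, not anything from Blumberg's portfolio: the function name `normalize_record` and the alias/tag scheme are hypothetical, standing in for whatever canonical schema a startup actually adopts.

```python
def normalize_record(raw, field_aliases, tags):
    """Map vendor-specific field names onto a canonical schema and
    attach meta tags, so downstream models see consistent inputs."""
    canonical = {}
    for key, value in raw.items():
        # Rename known aliases; pass unknown fields through unchanged.
        canonical[field_aliases.get(key, key)] = value
    # Store tags in sorted order so records compare deterministically.
    canonical["_tags"] = sorted(tags)
    return canonical
```

For example, a record arriving as `{"cust_nm": "Ada", "amt": 12.5}` with aliases `{"cust_nm": "customer_name", "amt": "amount"}` comes out under the canonical names, tagged for later filtering.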
Remember that transparency is a virtue: There is an increased need in certain industries, such as financial services, to explain to regulators how the sausage is made, so to speak. As a result, entrepreneurs must be able to demonstrate explainability, showing how the model arrived at its result (for example, a credit score). This brings us to an additional issue, accounting for bias in models, and here again the entrepreneur must show the ability to detect and correct bias as soon as it is found.
How AI and Facial Recognition Are Impacting the Future of Banking – Observer
Posted: at 2:33 pm
A woman uses an ATM with facial recognition technology during the presentation of the new service by CaixaBank in Barcelona on February 14, 2019. LLUIS GENE/AFP via Getty Images
So, I just got the new iPhone 11 Pro. I have to say, I pretty much love the facial recognition unlock feature. And no, Apple is not paying me to say that. Before this, I was a facial recognition skeptic. But now I can unlock my phone with my face! I love it, but I'm also slightly scared of what other people could do if they get access to my face without my knowledge. Better keep my face to myself.
It was only a matter of time before we heard about the financial services industry adopting innovative biometrics technology for access management of private information. In other words: Banks are using facial recognition.
SEE ALSO: What on Earth Is a Data Scientist? The Buzzwords Inventor Spills All
Sounds practical. Sounds scary. Sounds both practical and scary. I've seen the John Woo movie Face/Off, and I'm well aware of how this could all go horribly wrong.
"The financial sector understands the constant need for new and ever-improving security measures better than most industries, because of the implicit risk of being a bank," Shaun Moore, co-founder and CEO of Trueface, told Observer. "There are people trying to hack, rob or defraud this industry every single day."
Moore's company is working with some of the top global banks to infuse facial recognition into existing security and access management infrastructure.
We are seeing the financial services sector test face recognition as a part of multi-factor authentication for ATM withdrawals, mobile banking enrollment and mobile account access and transactions, said Moore. By implementing face recognition as the key step in multi-factor authentication, banks are able to mitigate their exposure to risk and fraud, saving themselves millions of dollars in the process.
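The multi-factor idea Moore describes, where a face match is one gate among several, can be sketched as follows. This is a toy illustration under stated assumptions, not Trueface's actual API: the `authenticate` function, the 0.8 threshold and the use of cosine similarity on face embeddings are all hypothetical stand-ins for whatever a real deployment uses.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def authenticate(pin_ok, enrolled_embedding, live_embedding, threshold=0.8):
    """Multi-factor check: the PIN and the face match must BOTH pass."""
    face_ok = cosine_similarity(enrolled_embedding, live_embedding) >= threshold
    return pin_ok and face_ok
```

The design point is simply that the face is an additional factor, not a replacement: a stolen PIN alone, or a look-alike face alone, fails the check.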
Good point. Don't we all like saving millions of dollars in the process? I know I do.
What we can expect from our sci-fi financial transactions in the next five to 10 years is a federated identity across the digital and physical banking world, where your face will be the key to accessing your banking information, transacting and securing your account. The aim is to reduce fraud and lead to more secure financial data. Mexico has already adopted a biometric security mandate, which Moore sees as a trend that will spread first to South America and eventually to the U.S.
"Whether you are withdrawing money from an ATM or you enter a bank's physical branch, our goal is to provide an extremely frictionless, personalized experience with a focus on security," he said.
Moore sees the adoption of facial recognition repositioning the financial sector as a leader in service and security. The tech nuts and bolts of how this works? "Trueface has developed a suite of SDKs (software development kits) and a dockerized container solution that harness the power of machine learning and artificial intelligence to transform your camera data into actionable intelligence," Moore explained. Computer vision will be used for automated account registration, recognizing VIPs to enhance service at brick-and-mortar locations, recognizing known criminals in branches and alerting authorities, access control for vaults and even employee timekeeping.
The whole VIP banking system does raise some flags about secret consumer scores, which allow companies to sell and profit from our data. As Edward Snowden said, there is no good reason for companies to hold onto our data, except when they see value and profit in it.
But according to Moore, "We provide the solutions to run on our client infrastructure so that no data ever leaves the client's site/servers, ensuring performance but also data privacy and security."
Still, the city of San Francisco has banned facial recognition technology used by local law enforcement agencies. One slight problem is that facial recognition has trouble identifying people of color.
So, how is that being combatted with financial security?
"The city, which was not using face recognition to begin with, created a legal process for using face recognition, not an outright ban," said Moore. "This is something that we are in favor of. The bias discussed around face recognition has to do with the underlying data the algorithms are trained with. If the data is disproportionate, then the results will also be skewed in one direction. The industry as a whole recognizes this and has been actively working towards mitigating data bias risk."
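The disproportionate-data problem Moore describes is usually measured by comparing error rates across demographic groups. A minimal sketch of such an audit follows; the function name `error_rates_by_group` and the record format are hypothetical, chosen only to illustrate the idea.

```python
from collections import defaultdict

def error_rates_by_group(records):
    """Compute the misidentification rate per demographic group.

    `records` is a list of (group, correct) pairs, where `correct`
    is True when the model identified the face correctly. A large
    gap between groups' rates signals skewed training data.
    """
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, correct in records:
        totals[group] += 1
        if not correct:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}
```

If one group's error rate is several times another's, the training set likely under-represents that group, which is exactly the imbalance the industry is trying to correct.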
Moore said the problem with facial recognition bias is shrinking and will cease to exist in the very near future.
"The impact of this hurdle plays more of a role when it comes to recognizing one person out of many; thousands or millions," he stated. "Typically with account authentication, the database we scan is small or one-to-one, making this a non-issue."
The skeptics of facial recognition, Moore finds, are largely siloed in a surveillance use, not access control. Still, there are other possible failures and downsides with facial recognition and security.
"The biggest concern is the ability to spoof or falsify identity when enrolling in an account remotely," said Moore. "The solution to this problem is to ensure liveness and/or to pair biometrics with other forms of verification."
Plus, with artificial intelligence as part of the facial recognition formula, there is the classic quote from Elon Musk that we need to fear AI more than nukes. Is the same fear justified for AI use in the banking sector?
"AI is still in its infancy, but what I believe Elon is referring to here is that once something is created, it's hard to reverse progress if we don't like the results," Moore said. "The computing power required to reach this type of AI-driven world is still a decade or more away, so it is more important that we recognize the potential risks and re-adjust our path accordingly."
Moore's takeaway is that face recognition is a tool that can be used to significantly improve security and efficiency. In the meantime, I'm going to be locking and unlocking my new iPhone with my face, until it's time for my next banking expedition.
Intel unveils its first chips built for AI in the cloud – Engadget
Posted: at 2:33 pm
The chipmaker also unveiled a next-gen Movidius Vision Processing Unit whose updated computer vision architecture promises over 10 times the inference performance while reportedly managing efficiency six times better than rivals. Those claims have yet to pan out in the real world, but it's safe to presume that anyone relying on Intel tech for visual AI work will want to give this a look.
You'll have to be patient for the Movidius chip, as it won't ship until sometime in the first half of 2020. This could nonetheless represent a big leap for AI performance, at least among companies that aren't relying on rivals like NVIDIA. Intel warned that bleeding-edge uses of AI could require performance to double every 3.5 months -- that's not going to happen if companies simply rely on conventional CPUs. And when internet giants like Facebook and Baidu lean heavily on Intel for AI, you might see practical benefits like faster site loads or more advanced AI features.
Plum, the AI money management app, raises $3M more and comes to Android – TechCrunch
Posted: at 2:33 pm
Plum, the U.K.-based AI assistant that helps you manage your money and save more, has raised $3 million in additional funding, money it plans to use for further growth, including European expansion.
The London company has also quietly launched its app for Android phones, adding to an existing iOS app and Facebook Messenger chatbot.
Backing this round, which is essentially a second tranche of Plum's earlier $4.5 million raise in the summer, are existing investors EBRD and VentureFriends. Christian Faes, founder and CEO of LendInvest, has also participated.
It brings the fintech startup's total funding to $9.3 million since being founded in 2016 by early TransferWise employee Victor Trokoudes and Alex Michael.
The new investment is said to come at the end of a year of rapid expansion for Plum in both London and Athens, including growing the team to 31 employees. Senior hires include Max Mawby, Plum's head of Behavioural Science, who previously worked for the U.K. government and ran the fintech sector-focused Behavioural Insights Team.
In a call, Trokoudes told me that take-up for Plum's iOS app has been high and Android is following a similar trajectory, proof that the startup's AI assistant has perhaps outgrown its chatbot and Facebook Messenger beginnings (competitor Cleo has also released dedicated iOS and Android apps as an alternative to Facebook Messenger).
He also says Plum now has 650,000 registered users, of which around 70% are active monthly. In recent user feedback sessions conducted by the startup, the biggest draw to the app is that its aim of changing financial behaviour to help people save more appears to be working.
When users stick around using Plum for long enough, Trokoudes says they are surprised (and delighted) that it actually works.
Like similar apps, Plum's artificial intelligence deems what you can afford to save by analysing your bank transactions. It then puts money away each month in the form of round-ups and/or regular savings.
You can open an ISA investment account and invest based on themes, such as only in ethical companies or technology. Another related feature is Splitter, which, as the name suggests, lets you split your automatic savings between Plum savings and investments, selecting the percentage amounts to go into each pot from 0-100%.
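The round-up and Splitter mechanics described above amount to simple arithmetic, sketched below. This is an illustrative reconstruction, not Plum's actual code: the function names and the pence-based integer representation are assumptions made for the example.

```python
def round_up(amount_pence):
    """Spare change to the next whole pound (amounts in pence)."""
    remainder = amount_pence % 100
    return 0 if remainder == 0 else 100 - remainder

def split_savings(transactions_pence, invest_pct):
    """Total the round-ups from a batch of transactions, then split
    them between the savings pot and the investment pot according
    to the chosen Splitter percentage (0-100)."""
    total = sum(round_up(t) for t in transactions_pence)
    to_invest = total * invest_pct // 100
    return total - to_invest, to_invest
```

So a £2.75 coffee contributes 25p, and with Splitter set to 50%, each month's round-ups are divided evenly between the two pots.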
Trokoudes says that Plum recently launched two new intelligent saving rules: the 52-Week Challenge, which aims to help you save £1,367 over a year; and the Rainy Day Rule, which puts aside money whenever it rains (yes, really!).
"Saving rules use automation to help people save more effectively without overloading them with information," adds the Plum founder in a statement. "We have good evidence that this approach works: our automated round-ups feature, which we launched earlier this year, has become a firm favourite among Plum users, boosting their savings by 50% on average."
Meanwhile, another one of Plum's competitors, Chip, recently raised £3.8 million in equity crowdfunding on Crowdcube. It was part of a round targeting $7.3 million in total, although it isn't clear if all of that has closed yet (last time I checked, the company had so far secured $5 million). Notably, the equity crowdfunding gave Chip a pre-money valuation of £36.78 million based on over 153,000 accounts opened.
Farmers are using AI to spot pests and catch diseases and many believe it's the future of agriculture – INSIDER
Posted: November 9, 2019 at 8:42 am
In Leones, Argentina, a drone with a special camera flies low over 150 acres of wheat. It's able to check each stalk, one by one, spotting the beginnings of a fungal infection that could potentially threaten this year's crop.
The flying robot is powered by computer vision: a kind of artificial intelligence being developed by start-ups around the world, and deployed by farmers looking for solutions that will help them grow food on an increasingly unpredictable planet.
Many food producers are struggling to manage threats to their crop like disease and pests, made worse by climate change, monocropping, and widespread pesticide use.
Catching things early is key.
Taranis, a company that works with farms on four continents, flies high-definition cameras above fields to provide "the eyes."
Machine learning, a kind of artificial intelligence that's trained on huge data sets and then learns on its own, is the "brains."
"I think that today, to increase yields in our lots, it's essential to have a technology that allows us to take decisions immediately," said Ernesto Agero, the producer on San Francisco Farm in Argentina.
The algorithm teaches itself to flag something as small as an individual insect, long before humans would usually identify the problem.
Similar technology is at work in Norway's fisheries, where stereoscopic cameras are a new weapon in the battle against sea lice, a pest that plagues farmers to the tune of hundreds of millions of dollars.
The Norwegian government is considering making this technology, developed by a start-up called Aquabyte, a standard tool for farms across the country.
Farmers annotated images to create the initial data set. Over time, the algorithm has continued to sharpen its skills with the goal of finding every individual louse.
But deploying computer vision is expensive, and for many it's still out of reach.
Bigger industrial farms tried using computer vision to identify and remove sick pigs at the outset of an African swine fever epidemic that is sweeping China, according to The New York Times.
But half of China's farms are small-scale operations, where that wasn't an option.
Chinese pig farmer Fan Chengyou lost everything.
"When the fever came, 398 pigs were buried alive," Chengyou said. "I really don't want to raise pigs anymore."
China, the world's biggest pork-producing country, is expected to lose half its herd this year.
For many farmers in the world's major growing regions, 2019 was devastating.
Record flooding all along the Mississippi River Valley, the breadbasket of the United States, meant that many farmers couldn't plant anything at all this season.
And while computer vision can't stop extreme weather, it is at the heart of a growing trend that may eventually offer an alternative, sheltered from the elements.
Root AI enlists computer vision to teach its robots to pick fruit. Root AI
"Indoor growing powered by artificial intelligence is the future," said Josh Lessing, co-founder and CEO of Root AI, a research company that develops robots to assist in-door farmers.
Computer vision has taught a fruit-picking robot named Virgo to figure out which tomatoes are ripe, and how to pick them gently, so that a hot house can harvest just the tomatoes that are ready, and let the rest keep growing.
The Boston-based start-up is installing them at a handful of commercial greenhouses in Canada starting in 2020.
80 Acres Farms, another pioneer in indoor growing, opened what it says is the world's first fully-automated indoor growing facility just last year.
The company, based in Cincinnati, currently has seven facilities in the United States, and plans to expand internationally over the next six months. Artificial intelligence monitors every step of the growing process.
"We can tell when a leaf is developing and if there are any nutrient deficiencies, necrosis, whatever might be happening to the leaf," said 80 Acres Farms, CEO, Mike Zelkind. "We can identify pest issues, we can identify a whole variety of things with vision systems today that we can also process."
Because the lettuce and vine crops are grown under colored LED lights, technicians can even manage photosynthesis
Thanks to the benefits of indoor-farming practices, Zelkind says 80 Acres Farms' crops grow faster and have the potential to be more nutrient-dense.
Humans need more than salad to survive, though. Experts say indoor farms will need to expand to a more diverse range of crops to provide a comprehensive option for growing food, but the advances being made in this space are significant.
AI-powered indoor agriculture is attracting a whole new breed of farmer.
New techie farmers are ambitious, but they are also realistic about what it takes to make AI work.
Ryan Pierce comes from a cloud computing background, but decided to jump into indoor growing, despite little to no experience in agriculture. Now, Pierce works for Fresh Impact Farms, an indoor farm in Arlington, VA.
"It's really sexy to talk about AI and machine learning, but a lot of people don't realize is the sheer amount of data points that you actually need for it to be worthwhile," Pierce said.
There is a ways to go before artificial intelligence can truly solve the issues facing agriculture today and in the future.
Many AI projects are still in beta, and some have proven too good to be true.
Still, the appetite is high for finding solutions at the intersection of data, dirt and the robots that are learning to help us grow food.
The market for AI in agriculture is valued at $600 million and is expected to reach $2.6 billion by 2025.