The Prometheus League
Breaking News and Updates
- Abolition Of Work
- Ai
- Alt-right
- Alternative Medicine
- Antifa
- Artificial General Intelligence
- Artificial Intelligence
- Artificial Super Intelligence
- Ascension
- Astronomy
- Atheism
- Atheist
- Atlas Shrugged
- Automation
- Ayn Rand
- Bahamas
- Bankruptcy
- Basic Income Guarantee
- Big Tech
- Bitcoin
- Black Lives Matter
- Blackjack
- Boca Chica Texas
- Brexit
- Caribbean
- Casino
- Casino Affiliate
- Cbd Oil
- Censorship
- Cf
- Chess Engines
- Childfree
- Cloning
- Cloud Computing
- Conscious Evolution
- Corona Virus
- Cosmic Heaven
- Covid-19
- Cryonics
- Cryptocurrency
- Cyberpunk
- Darwinism
- Democrat
- Designer Babies
- DNA
- Donald Trump
- Eczema
- Elon Musk
- Entheogens
- Ethical Egoism
- Eugenic Concepts
- Eugenics
- Euthanasia
- Evolution
- Extropian
- Extropianism
- Extropy
- Fake News
- Federalism
- Federalist
- Fifth Amendment
- Financial Independence
- First Amendment
- Fiscal Freedom
- Food Supplements
- Fourth Amendment
- Free Speech
- Freedom
- Freedom of Speech
- Futurism
- Futurist
- Gambling
- Gene Medicine
- Genetic Engineering
- Genome
- Germ Warfare
- Golden Rule
- Government Oppression
- Hedonism
- High Seas
- History
- Hubble Telescope
- Human Genetic Engineering
- Human Genetics
- Human Immortality
- Human Longevity
- Illuminati
- Immortality
- Immortality Medicine
- Intentional Communities
- Jacinda Ardern
- Jitsi
- Jordan Peterson
- Las Vegas
- Liberal
- Libertarian
- Libertarianism
- Liberty
- Life Extension
- Macau
- Marie Byrd Land
- Mars
- Mars Colonization
- Mars Colony
- Memetics
- Micronations
- Mind Uploading
- Minerva Reefs
- Modern Satanism
- Moon Colonization
- Nanotech
- National Vanguard
- NATO
- Neo-eugenics
- Neurohacking
- Neurotechnology
- New Utopia
- New Zealand
- Nihilism
- Nootropics
- NSA
- Oceania
- Offshore
- Olympics
- Online Casino
- Online Gambling
- Pantheism
- Personal Empowerment
- Poker
- Political Correctness
- Politically Incorrect
- Polygamy
- Populism
- Post Human
- Post Humanism
- Posthuman
- Posthumanism
- Private Islands
- Progress
- Proud Boys
- Psoriasis
- Psychedelics
- Putin
- Quantum Computing
- Quantum Physics
- Rationalism
- Republican
- Resource Based Economy
- Robotics
- Rockall
- Ron Paul
- Roulette
- Russia
- Sealand
- Seasteading
- Second Amendment
- Seychelles
- Singularitarianism
- Singularity
- Socio-economic Collapse
- Space Exploration
- Space Station
- Space Travel
- Spacex
- Sports Betting
- Sportsbook
- Superintelligence
- Survivalism
- Talmud
- Technology
- Teilhard De Charden
- Terraforming Mars
- The Singularity
- Tms
- Tor Browser
- Trance
- Transhuman
- Transhuman News
- Transhumanism
- Transhumanist
- Transtopian
- Transtopianism
- Ukraine
- Uncategorized
- Vaping
- Victimless Crimes
- Virtual Reality
- Wage Slavery
- War On Drugs
- Waveland
- Ww3
- Yahoo
- Zeitgeist Movement
- Prometheism
- Forbidden Fruit
- The Evolutionary Perspective
Category Archives: Artificial Intelligence
Opinion | Beyond the Matrix Theory of the Human Mind – The New York Times
Posted: May 30, 2023 at 12:13 am
Imagine I told you in 1970 that I was going to invent a wondrous tool. This new tool would make it possible for anyone with access (and most of humanity would have access) to quickly communicate and collaborate with anyone else. It would store nearly the sum of human knowledge and thought up to that point, and all of it would be searchable, sortable and portable. Text could be instantly translated from one language to another, news would be immediately available from all over the world, and it would take no longer for a scientist to download a journal paper from 15 years ago than to flip to an entry in the latest issue.
What would you have predicted this leap in information and communication and collaboration would do for humanity? How much faster would our economies grow?
Now imagine I told you that I was going to invent a sinister tool. (Perhaps, while telling you this, I would cackle.) As people used it, their attention spans would degrade, as the tool would constantly shift their focus, weakening their powers of concentration and contemplation. This tool would show people whatever it is they found most difficult to look away from, which would often be what was most threatening about the world, from the worst ideas of their political opponents to the deep injustices of their society. It would fit in their pockets and glow on their night stands and never truly be quiet; there would never be a moment when people could be free of the sense that the pile of messages and warnings and tasks needed to be checked.
What would you have thought this engine of distraction, division and cognitive fracture would do to humanity?
Thinking of the internet in these terms helps solve an economic mystery. The embarrassing truth is that productivity growth (how much more we can make with the same number of people and factories and land) was far faster for much of the 20th century than it is now. We average about half the productivity growth rate today that we saw in the 1950s and '60s. That means stagnating incomes, sluggish economies and a political culture that's more about fighting over what we have than distributing the riches and wonders we've gained. So what went wrong?
You can think of two ways the internet could have sped up productivity growth. The first way was obvious: by allowing us to do what we were already doing and do it more easily and quickly. And that happened. You can see a bump in productivity growth from roughly 1995 to 2005 as companies digitized their operations. But it's the second way that was always more important: By connecting humanity to itself and to nearly its entire storehouse of information, the internet could have made us smarter and more capable as a collective.
I don't think that promise proved false, exactly. Even in working on this article, it was true for me: The speed with which I could find information, sort through research and contact experts is marvelous. Even so, I doubt I wrote this faster than I would have in 1970. Much of my mind was preoccupied by the constant effort needed just to hold a train of thought in a digital environment designed to distract, agitate and entertain me. And I am not alone.
Gloria Mark, a professor of information science at the University of California, Irvine, and the author of "Attention Span," started researching the way people used computers in 2004. The average time people spent on a single screen was 2.5 minutes. "I was astounded," she told me. "That was so much worse than I'd thought it would be." But that was just the beginning. By 2012, Mark and her colleagues found the average time on a single task was 75 seconds. Now it's down to about 47.
This is an acid bath for human cognition. Multitasking is mostly a myth. We can focus on one thing at a time. "It's like we have an internal whiteboard in our minds," Mark said. "If I'm working on one task, I have all the info I need on that mental whiteboard. Then I switch to email. I have to mentally erase that whiteboard and write all the information I need to do email." And just like on a real whiteboard, there can be a residue in our minds. We may still be thinking of something from three tasks ago.
The cost is in more than just performance. Mark and others in her field have hooked people to blood pressure machines and heart rate monitors and measured chemicals in the blood. The constant switching makes us stressed and irritable. I didn't exactly need experiments to prove that (I live that, and you probably do, too), but it was depressing to hear it confirmed.
Which brings me to artificial intelligence. Here I'm talking about the systems we are seeing now: large language models like OpenAI's GPT-4 and Google's Bard. What these systems do, for the most part, is summarize information they have been shown and create content that resembles it. I recognize that sentence can sound a bit dismissive, but it shouldn't: That's a huge amount of what human beings do, too.
Already, we are being told that A.I. is making coders and customer service representatives and writers more productive. At least one chief executive plans to add ChatGPT use in employee performance evaluations. But I'm skeptical of this early hype. It is measuring A.I.'s potential benefits without considering its likely costs, the same mistake we made with the internet.
I worry we're headed in the wrong direction in at least three ways.
One is that these systems will do more to distract and entertain than to focus. Right now, the large language models tend to hallucinate information: Ask them to answer a complex question, and you will receive a convincing, erudite response in which key facts and citations are often made up. I suspect this will slow their widespread use in important industries much more than is being admitted, akin to the way driverless cars have been tough to roll out because they need to be perfectly reliable rather than just pretty good.
A question to ask about large language models, then, is: Where does trustworthiness not matter? Those are the areas where adoption will be fastest. An example from media is telling, I think. CNET, the technology website, quietly started using these models to write articles, with humans editing the pieces. But the process failed. Forty-one of the 77 A.I.-generated articles proved to have errors the editors missed, and CNET, embarrassed, paused the program. BuzzFeed, which recently shuttered its news division, is racing ahead with using A.I. to generate quizzes and travel guides. Many of the results have been shoddy, but it doesn't really matter. A BuzzFeed quiz doesn't have to be reliable.
A.I. will be great for creating content where reliability isn't a concern. The personalized video games and children's shows and music mash-ups and bespoke images will be dazzling. And new domains of delight and distraction are coming: I believe we're much closer to A.I. friends, lovers and companions becoming a widespread part of our social lives than society is prepared for. But where reliability matters (say, a large language model devoted to answering medical questions or summarizing doctor-patient interactions), deployment will be more troubled, as oversight costs will be immense. The problem is that those are the areas that matter most for economic growth.
Marcela Martin, BuzzFeed's president, encapsulated my next worry nicely when she told investors, "Instead of generating 10 ideas in a minute, A.I. can generate hundreds of ideas in a second." She meant that as a good thing, but is it? Imagine that multiplied across the economy. Someone somewhere will have to process all that information. What will this do to productivity?
One lesson of the digital age is that more is not always better. More emails and more reports and more Slacks and more tweets and more videos and more news articles and more slide decks and more Zoom calls have not led, it seems, to more great ideas. "We can produce more information," Mark said. "But that means there's more information for us to process. Our processing capability is the bottleneck."
Email and chat systems like Slack offer useful analogies here. Both are widely used across the economy. Both were initially sold as productivity boosters, allowing more communication to take place faster. And as anyone who uses them knows, the productivity gains, though real, are more than matched by the cost of being buried under vastly more communication, much of it junk and nonsense.
The magic of a large language model is that it can produce a document of almost any length in almost any style, with a minimum of user effort. Few have thought through the costs that will impose on those who are supposed to respond to all this new text. One of my favorite examples of this comes from The Economist, which imagined NIMBYs (but really, pick your interest group) using GPT-4 to rapidly produce a 1,000-page complaint opposing a new development. Someone, of course, will then have to respond to that complaint. Will that really speed up our ability to build housing?
You might counter that A.I. will solve this problem by quickly summarizing complaints for overwhelmed policymakers, much as the increase in spam is (sometimes, somewhat) countered by more advanced spam filters. Jonathan Frankle, the chief scientist at MosaicML and a computer scientist at Harvard, described this to me as the "boring apocalypse" scenario for A.I., in which "we use ChatGPT to generate long emails and documents, and then the person who received it uses ChatGPT to summarize it back down to a few bullet points, and there is tons of information changing hands, but all of it is just fluff. We're just inflating and compressing content generated by A.I."
When we spoke, Frankle noted the magic of feeding a 100-page Supreme Court document into a large language model and getting a summary of the key points. But was that, he worried, a good summary? Many of us have had the experience of asking ChatGPT to draft a piece of writing and seeing a fully formed composition appear, as if by magic, in seconds.
My third concern is related to that use of A.I.: Even if those summaries and drafts are pretty good, something is lost in the outsourcing. Part of my job is reading 100-page Supreme Court documents and composing crummy first drafts of columns. It would certainly be faster for me to have A.I. do that work. But the increased efficiency would come at the cost of new ideas and deeper insights.
Our societywide obsession with speed and efficiency has given us a flawed model of human cognition that I've come to think of as the Matrix theory of knowledge. Many of us wish we could use the little jack from "The Matrix" to download the knowledge of a book (or, to use the movie's example, a kung fu master) into our heads, and then we'd have it, instantly. But that misses much of what's really happening when we spend nine hours reading a biography. It's the time inside that book (spent drawing connections to what we know and having thoughts we would not otherwise have had) that matters.
"Nobody likes to write reports or do emails, but we want to stay in touch with information," Mark said. "We learn when we deeply process information. If we're removed from that and we're delegating everything to GPT (having it summarize and write reports for us), we're not connecting to that information."
We understand this intuitively when it's applied to students. No one thinks that reading the SparkNotes summary of a great piece of literature is akin to actually reading the book. And no one thinks that if students have ChatGPT write their essays, they have cleverly boosted their productivity rather than lost the opportunity to learn. The analogy to office work is not perfect (there are many dull tasks worth automating so people can spend their time on more creative pursuits), but the dangers of overautomating cognitive and creative processes are real.
These are old concerns, of course. Socrates questioned the use of writing (recorded, ironically, by Plato), worrying that "if men learn this, it will implant forgetfulness in their souls; they will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks." I think the trade-off here was worth it (I am, after all, a writer), but it was a trade-off. Human beings really did lose faculties of memory we once had.
To make good on its promise, artificial intelligence needs to deepen human intelligence. And that means human beings need to build A.I., and build the workflows and office environments around it, in ways that don't overwhelm and distract and diminish us. We failed that test with the internet. Let's not fail it with A.I.
Read the original post:
Opinion | Beyond the Matrix Theory of the Human Mind - The New York Times
CyberArk Supercharges Identity Security Platform with Automation … – CXOToday.com
Posted: at 12:13 am
New Products, Features and Cross-Platform Integrations Accelerate Identity and Cloud Security
CyberArk (NASDAQ: CYBR), the Identity Security company, today announced new products and features across the CyberArk Identity Security Platform, making it the most powerful platform of its kind. Investments to enhance cloud security and deliver automation and artificial intelligence (AI) innovations across the platform make it easier than ever to apply intelligent privilege controls to all identities, human and non-human, from a single vendor.
"The rapid acceleration of identities is part of what makes a unified approach to advancing Identity Security so important. Treating identities differently with stand-alone technologies misses the mark and exposes risk," said Peretz Regev, chief product officer, CyberArk. "Our unified Identity Security platform breaks down those silos by contextually authenticating identities, then dynamically authorizing the least amount of privilege required. Additionally, we continue to strategically expand our use of machine learning and artificial intelligence to improve customers' defensive capabilities to counter attacker innovation."
With CyberArk's single, unified Identity Security platform, organizations can achieve Zero Trust and least privilege with complete visibility, and enable secure access for any identity, from anywhere, to the widest range of resources or environments. The CyberArk Identity Security Platform helps customers apply intelligent privilege controls to reduce risk for all identities and consolidate vendors while delivering operational efficiencies and achieving a faster ROI.
CyberArk leads the market with innovative new features and investments in automation and artificial intelligence to improve Identity Security and enable organizations to implement proactive controls and defensive strategies. Key innovations in these areas include:
Enhancements across the CyberArk Identity Security Platform focus on further improving security, adoption and user experience. Additional new capabilities driving value across the platform will include:
In addition, at IMPACT 23, CyberArk also announced CyberArk Secure Browser, a first-of-its-kind Identity Security-based browser.
About CyberArk
CyberArk (NASDAQ: CYBR) is the global leader in Identity Security. Centered on intelligent privilege controls, CyberArk provides the most comprehensive security offering for any identity, human or machine, across business applications, distributed workforces, hybrid cloud environments and throughout the DevOps lifecycle. The world's leading organizations trust CyberArk to help secure their most critical assets. To learn more about CyberArk, visit https://www.cyberark.com, read the CyberArk blogs or follow on LinkedIn, Twitter, Facebook or YouTube.
Read more from the original source:
CyberArk Supercharges Identity Security Platform with Automation ... - CXOToday.com
Photoshop Is Getting Artificial Intelligence — Why That’s a Big Deal … – The Motley Fool
Posted: at 12:13 am
Adobe (ADBE 5.95%) is bringing artificial intelligence to Photoshop, and it makes all the sense in the world. As Travis Hoium covers in this video, if AI ends up being an incremental technology improvement, it will make Adobe's business stronger.
*Stock prices used were end-of-day prices of May 24, 2023. The video was published on May 24, 2023.
Travis Hoium has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends Adobe. The Motley Fool recommends the following options: long January 2024 $420 calls on Adobe and short January 2024 $430 calls on Adobe. The Motley Fool has a disclosure policy. Travis Hoium is an affiliate of The Motley Fool and may be compensated for promoting its services. If you choose to subscribe through their link, they will earn some extra money that supports their channel. Their opinions remain their own and are unaffected by The Motley Fool.
Read more:
Photoshop Is Getting Artificial Intelligence -- Why That's a Big Deal ... - The Motley Fool
Slack CEO looks to artificial intelligence for help in rolling out new … – The Boston Globe
Posted: at 12:13 am
One way to break the cycle of drudgery, at least from her point of view, is by effective use of messaging software, particularly when enhanced by artificial intelligence.
As the newly christened chief executive of Slack, the messaging app, you would expect Jones to say that. She is all-in on making office workers' days more productive. Toward that end, on May 4, Jones announced a suite of Slack programs that use artificial intelligence under the brand name Slack GPT. These are designed to make colleagues' communications more efficient, including by providing conversation summaries and writing assistance, and to make it easier for salespeople to respond to clients and prospects, by providing alerts of sales leads and instant research. These AI programs will also help Jones and her colleagues integrate the consumer-facing Slack app with the business-focused tools offered by Slack's parent company, Salesforce.
Jones was already one of the most prominent Latinas in the high-tech sector when she became CEO about four months ago, taking over for Slack cofounder Stewart Butterfield. Now, as the head of one of the best-known software programs used in modern office life, she's also one of the most prominent tech executives in Greater Boston.
Although Salesforce is based in San Francisco, Jones lives in Cambridge. She moved to Greater Boston more than 15 years ago from the Seattle area as a Microsoft executive, in large part because her husband wanted to return to his home state. She continued to rise up the ranks at Microsoft, before leaving in 2015 to be VP of software product management for speaker maker Sonos in Boston. Salesforce called four years later; she liked how its e-commerce options allow companies like Sonos to stay independent, and took a job helping oversee that part of Salesforces business.
Jones said she was surprised when Butterfield reached out about taking over Slack. But she calls the past four months "the best, you know, four months I've had in my career," even though it involves plenty of travel. She has bounced around from Australia to London to Toronto, with plenty of visits to San Francisco.
Chamber chief executive Jim Rooney thought Jones would be a perfect keynote speaker for this year's annual meeting and invited her through chamber members Thea James and Betty Francisco, who volunteer alongside Jones at Boston nonprofit Compass Working Capital.
About that AI that Jones talked about at the chamber: She has been impressed by how quickly big companies are adopting Slack GPT. "Every customer is knocking on my door," Jones said. "They're like, 'Hey, just protect my data, but I need this.'"
As the state's economic development secretary, Yvonne Hao is leading the charge to update the official economic development plan for Massachusetts, something required by state law to happen every four years. She certainly won't be at a loss for feedback.
Last week, Hao told members of real estate trade group NAIOP Massachusetts that more than 200 people attended the first two regional listening sessions, in Springfield and Worcester, from big companies and small businesses to nonprofits and city councils. She plans to finish the report by the end of the year and is relying in part on an advisory council, which includes NAIOP chief executive Tamara Small as a member.
One possible reason Hao is getting so much feedback: rising concerns about the state's economic competitiveness.
At the NAIOP event, Jake Grossman of the Grossman Companies said he worries about taxes and housing affordability. "Can you give me a little therapy session?" Grossman asked Hao. "What's the good stuff that's happening?"
Hao said she's hustling to make the case at every turn for why companies should stay and grow here, arguments that tend to focus on our well-educated talent pool. She said she heard that a chief executive was being recruited to relocate to North Carolina, so she hopped on the phone with him to explain why he should stay. And she noted that Governor Maura Healey has hired Quentin Palfrey and Will Rasky to go after all the federal funds they can find for Massachusetts. "Other states have this muscle developed [but] we haven't," she said of lobbying Washington.
She knows Massachusetts has been one of the few states to lose people during the pandemic but is determined to reverse that trend.
"This is not the time to hang out and rest," Hao said. "We have real issues we have to fix. If you wait too long ... by the time you realize you've lost, it's too late."
As the founder of Resilient Coders, a training program for people of color, David Delmar Sentes has helped a generation of Black and Latino tech workers enter the workforce.
Now that the tech sector is experiencing a downturn, Delmar Sentes worries many of those alums are being left behind, and that the corporate diversity commitments made in recent years are slipping away. (Delmar Sentes left Resilient Coders last year to finish his book on this topic, "What We Build with Power.") Black workers, he said, have been disproportionately affected by all the tech layoffs, and diversity and equity budgets have been slashed.
That's why he and Pariss Chandler, founder of the Black Tech Pipeline, among others, are launching a campaign for worker-led equity in the field. They'll hold their first organizing meeting on June 12. Among the reforms Delmar Sentes wants: companies getting serious about dropping bachelor's degrees from the list of job requirements. He also hopes for the creation of some sort of organization (think of it as a Better Business Bureau, but for DEI) that can track companies that are doing well, and the ones that are performing poorly.
Resilient Coders and other organizations like it are functionally "marching into the wind," he said. "If you do that long enough, you wonder what it would be like to change the direction of the wind."
Terry Richardson has led major sales efforts for two giant tech companies, Hewlett Packard Enterprise and AMD, the kind of jobs that can put you on the road more often than you're home.
While at AMD, Richardson was deciding whether to finally retire, or to find a job a little closer to home. He ended up picking the latter option, when Josh Dinneen rang him up. Dinneen was moving up to president at Portsmouth, N.H.-based IT services and cybersecurity provider GreenPages, which also has an office in Charlestown. And Dinneen wanted someone he could trust to take over his previous role there as chief revenue officer. Thus, the invite was extended to Richardson. He joined GreenPages, which is owned by Boston private equity firm Abry Partners, on May 1.
Richardson said he liked the technology expertise and the people at the 310-person firm. Plus, it's hard to argue with the lifestyle improvements, because most clients are in and around New England, as opposed to the marathon trips Richardson took on almost a weekly basis. While the GreenPages headquarters in Portsmouth isn't exactly a short drive away from his home in Hopkinton, at least he knows he can finish the day in his own bed. It's time, finally, to stop the running.
Jon Chesto can be reached at jon.chesto@globe.com. Follow him on Twitter @jonchesto.
The rest is here:
Slack CEO looks to artificial intelligence for help in rolling out new ... - The Boston Globe
AI set to transform construction industry – Fox Business
Posted: April 27, 2023 at 2:54 pm
Teleo co-founder and CEO Vinay Shet discusses the potential impact of artificial intelligence on manufacturing and construction jobs on "The Claman Countdown."
FIRST ON FOX: Artificial intelligence has entered the construction industry, and early adopters say the efficiencies and cost-cutting measures will revolutionize the $10 trillion sector of the global economy for the better.
Supply chain and building material software company DigiBuild has been using OpenAI's ChatGPT to bolster its program for months, and is set to unveil the results at an event in Miami on Wednesday evening.
But ahead of the announcement, DigiBuild CEO Robert Salvador gave FOX Business an exclusive sneak peek of how the powerful AI tool has improved efficiency and slashed costs for the firm's clients, and he says the technology will be "market changing."
The construction industry is still dogged by the high material costs and supply chain woes brought on by the pandemic, and DigiBuild's software aims to help developers and contractors save money and improve their schedules. The help of AI has provided a remarkable boost to that end.
To the company's knowledge, DigiBuild is the first to introduce ChatGPT into the construction supply chain, and the firm has some inside help. The building software firm is backed by major investors, including Y Combinator (which trained OpenAI CEO Sam Altman), and has an exclusive Slack channel with OpenAI that allows experts to build together.
DigiBuild has been around five years and has automated the job of sifting through suppliers to find materials and working out scheduling. Now, what used to take a team of humans hundreds of labor hours using Excel spreadsheets, notebooks and manual phone calls has been reduced to a matter of seconds with the help of large language models.
"ChatGPT has taken us to the next level," Salvador said. "Supersonic."
"Instead of spending multiple hours probably getting a hold of maybe five or six suppliers, ChatGPT can find 100 of them and even automate outreach and begin communications with those 100 suppliers and say, 'Hey, we're DigiBuild. We need to find this type of door, can you provide a quote and send it back here?'" he said. "We can talk to 100 suppliers in one minute versus maybe a handful in a couple hours."
The CEO offered a real-world example of a job where material costs were literally slashed by more than half using the new technology.
One of DigiBuild's clients, VCC Construction, needed closet shelving for a project in Virginia, and the builder could only find one quote for $150,000 with limited availability. With the click of a button, DigiBuild was able to find a vendor in the Midwest that provided the shelving and delivered it within weeks for $70,000.
Salvador says to imagine those results for a $500 million job or across the industry. He expects AI technology to become widely adopted.
"Before companies like us, the construction industry was still early in its digital transformation; they were late to the party," he told FOX Business. But now, "it's very much going all in on that, finally."
See the article here:
Opinion | Artificial generative intelligence could prove too much for democracy – The Washington Post
Posted: at 2:54 pm
April 26, 2023 at 6:30 a.m. EDT
Tech and democracy are not friends right now. We need to change that fast.
As I've discussed previously in this series, social media has already knocked a pillar out from under our democratic institutions by making it exceptionally easy for people with extreme views to connect and coordinate. The designers of the Constitution thought geographic dispersal would put a brake on the potential power of dangerous factions. But people no longer need to go through political representatives to get their views into the public sphere.
Our democracy is reeling from this impact. We are only just beginning the work of renovating our representative institutions to find mechanisms (ranked choice voting, for instance) that can replace geographic dispersal as a brake on faction.
Now, here comes generative artificial intelligence, a tool that will help bad actors further accelerate the spread of misinformation.
A healthy democracy could govern this new technology and put it to good use in countless ways. It would also develop defenses against those who put it to adversarial use. And it would look ahead to probable economic transformation and begin to lay out plans to navigate what will be a rapid and startling set of transitions. But is our democracy ready to address these governance challenges?
I'm worried about the answer to that, which is why I joined a long list of technologists, academics and even controversial visionaries such as Elon Musk in signing an open letter calling for a pause for at least six months of "the training of AI systems more powerful than GPT-4." This letter was occasioned by the release last month of GPT-4 from the lab OpenAI. GPT-4 significantly improves on the power and functionality of ChatGPT, which was released in November.
The field of technology is convulsed by a debate about whether we have reached the Age of AGI. Not just an Age of AI, where machines and software, like Siri, perform specific and narrow tasks, but an Age of Artificial General Intelligence, in which technology can meet and match humans on just about any task. This would be a game changer, giving us not just more problems of misinformation and fraud, but also all kinds of unpredictable emergent properties and powers from the technology.
The newest generative foundation models powering GPT-4 can match the best humans in a range of fields, from coding to the LSAT. But is the power of generative AI evidence of the arrival of what has for some been a long-sought goal: artificial general intelligence? Bill Gates, co-founder of Microsoft, which has sought to break away from its rivals via intense investment in OpenAI, says no and argues that the capability of GPT-4 and other large language models is still constrained to limited tasks. But a team of researchers at Microsoft Research, in a comprehensive review of the capability of GPT-4, says yes. They see sparks of artificial general intelligence in the newest machine-learning models. My own take is that the research team is right. (Disclosure: My research lab has received funding support from Microsoft Research.)
But regardless of which side of the debate one comes down on, and whether the time has indeed come (as I think it has) to figure out how to regulate an intelligence that functions in ways we cannot predict, it is also the case that the near-term benefits and potential harms of this breakthrough are already clear, and attention must be paid. Numerous human activities, including many white-collar jobs, can now be automated. We used to worry about the impacts of AI on truck drivers; now it's also the effects on lawyers, coders and anyone who depends on intellectual property for their livelihood. This advance will increase productivity but also supercharge dislocation.
In comments that sound uncannily as if from the early years of globalization, Gates said this about the anticipated pluses and minuses: "When productivity goes up, society benefits because people are freed up to do other things, at work and at home. Of course, there are serious questions about what kind of support and retraining people will need. Governments need to help workers transition into other roles."
And we all know how that went.
For a sense of the myriad things to worry about, consider this (partial) list of activities that OpenAI knows its technology can enable and that it therefore prohibits in its usage policies:
Illegal activity. Child sexual-abuse material. Generation of hateful, harassing or violent content. Generation of malware. Activity that has high risk of physical harm, including: weapons development; military and warfare; management or operation of critical infrastructure in energy, transportation and water; content that promotes, encourages or depicts acts of self-harm. Activity that has a high risk of economic harm, including: multilevel marketing, gambling, payday lending, automated determinations of eligibility for credit, employment, educational institutions or public assistance services. Fraudulent or deceptive activity, including: scams, coordinated inauthentic behavior, plagiarism, astroturfing, disinformation, pseudo-pharmaceuticals. Adult content. Political campaigning or lobbying by generating high volumes of campaign materials. Activities that violate privacy. Unauthorized practice of law or medicine or provision of financial advice.
The point of the open letter is not to say that this technology is all negative. On the contrary. There are countless benefits to be had. It could at long last truly enable the personalization of learning. And if we can use what generative AI is poised to create to compensate internet users for the production of the raw data it's built upon (treat that human contribution as paid labor, in other words), we might be able to redirect the basic dynamics of the economy away from the ever-greater concentration of power in big tech.
But what's the hurry? We are simply ill-prepared for the impact of yet another massive social transformation. We should avoid rushing into all of this with only a few engineers at a small number of labs setting the direction for all of humanity. We need a breather for some collective learning about what humanity has created, how to govern it, and how to ensure that there will be accountability for the creation and use of new tools.
There are already many things we can and should do. We should be making scaled-up public-sector investments into third-party auditing, so we can actually know what models are capable of and what data they're ingesting. We need to accelerate a standards-setting process that builds on work by the National Institute of Standards and Technology. We must investigate and pursue compute governance, which means regulation of the use of the massive amounts of energy necessary for the computing power that drives the new models. This would be akin to regulating access to uranium for the production of nuclear technologies.
More than that, we need to strengthen the tools of democracy itself. A pause in further training of generative AI could give our democracy the chance both to govern technology and to experiment with using some of these new tools to improve governance. The Commerce Department recently solicited input on potential regulation for the new AI models; what if we used some of the tools the AI field is generating to make that public comment process even more robust and meaningful?
We need to govern these emerging technologies and also deploy them for next-generation governance. But thinking through the challenges of how to make sure these technologies are good for democracy requires time we haven't yet had. And this is thinking even GPT-4 can't do for us.
Danielle Allen on renovating democracy
How Artificial Intelligence is Accelerating Innovation in Healthcare – Goldman Sachs
Healthcare, one of the largest sectors of the U.S. economy, is among the many industries with significant opportunities for the use of artificial intelligence (AI) and machine learning (ML), says Salveen Richter, lead analyst for the U.S. biotechnology sector at Goldman Sachs Research.
"We are in an exciting period when we are seeing the convergence of technology and healthcare, two key economic sectors, and we have to assume it will result in significant innovation," she says. We spoke with Richter, one of the authors of our in-depth Byte-ology report, which includes contributions from Goldman Sachs healthcare and technology research teams, about the integration of AI/ML into healthcare, the most promising applications for this technology and the landscape for venture capital funding in the field of byte-ology.
Why is healthcare ripe for disruption?
We see the combination of healthcare's vast, multi-modal datasets and AI/ML's competitive advantages in efficiency, personalization and effectiveness as poised to drive an innovative wave across healthcare.
From a data standpoint, the healthcare industry produces and relies upon massive amounts of data from diverse sources. That creates a rich environment for applying AI and ML. The need for these technologies is clear given the inefficiencies in the healthcare system: it is estimated that it takes more than eight years and $2 billion to develop a drug, and the likelihood of failure is quite high, with only one in ten candidates expected to gain regulatory approval. AI, including generative AI, is among the technologies that have the potential to create safer, more efficacious drugs and to streamline personalized care.
The bottom line is we are in an exciting period when we are seeing the convergence of technology and healthcare, two key economic sectors, and we have to assume that out of this will come a wave of innovation.
What changes has AI already brought to the healthcare industry?
Some of the earliest uses of AI in healthcare were in diagnostics and devices, including areas such as radiology, pathology and patient monitoring. In 1995, the PAPNET Testing System, a computer-assisted cervical smear rescreening device, became the first FDA-authorized AI/ML-enabled medical device. In the 2000s, other authorizations involved digital image capture, analysis of cells, bedside monitoring of vital signs, and predictive warnings for incidents where medical intervention may be needed.
Big Tech companies have also been involved, stepping in as cloud solution providers and applying their technological expertise in areas such as wearable devices, predictive modeling and virtual care. One widely talked about achievement involved a deep learning algorithm that effectively solved the decades-old problem of predicting the shape a protein will fold into based on its amino acid sequences, which is crucial for drug discovery.
Where are we now in the integration of AI into the healthcare sector?
Despite all previous innovation, we are still in the early innings. While the promise of AI/ML in healthcare has been there for decades, we believe its role came into the spotlight during the Covid-19 pandemic response. AI helped companies develop Covid-19 mRNA vaccines and therapeutics at unprecedented speeds. Further, the Covid-19 pandemic underscored the need for digital solutions in healthcare to improve patient access and outcomes, and represented a key inflection point for telehealth and remote monitoring.
We believe that these successes further drove enthusiasm for the space as they showed a clear benefit of incorporating AI/ML and other technologies to improve patient outcomes at a much faster rate than would be expected with traditional methods.
What are some of the more promising AI-driven applications that could be coming to healthcare in the near future?
In our newest Byte-ology report, we outlined the technologies that could be transformative in healthcare, which include deep learning, cloud computing, big data analytics and blockchain. We also provided use cases across drug development, clinical trials, healthcare analytics, tools and diagnostics, and personalized care.
Here's one example: In drug development, AI/ML can be used to identify novel targets, design drugs with favorable properties and predict drug interactions, minimizing the need for the costly traditional methodology of wet-lab trial-and-error development.
Are there areas within health care that are more likely than others to benefit from AI?
Use cases for AI/ML can be found in virtually any segment of healthcare; the difference is how much or how long it has been used in a given sector, how validated the use case is and how difficult new technological advancements would be to implement within the healthcare system. For example, there is a history of using AI tools for radiology and pathology, whereas many believe more hard evidence is needed to understand AI/ML's benefit in areas such as designing drugs, predicting which patients are most likely to respond to certain drugs and digitizing labs.
Even in sectors where its adoption is in the early stages, we believe that AI/ML's potential advantages will not be ignored, but rather closely studied and increasingly implemented over time. Uptake would greatly benefit from regulatory support, standardized benchmarks to evaluate performance, public forums to improve collaboration and transparency and, importantly, proof of concept via a demonstrated benefit to patients and healthcare professionals, which we have started to see emerge.
What are the barriers or hurdles for AI in healthcare?
There are cultural obstacles, such as the healthcare industry relying on patents and exclusivity. That raises questions about how IP can be protected without slowing progress, or how information can be shared as it is in software engineering research that benefits from open-source data.
The hesitancy around AI/ML may further be exacerbated by the need for better surveillance systems to protect patients from hacking or breach events, the lack of continuing education for healthcare professionals on the benefits of these technologies and the concern that AI/ML models may be susceptible to bias as a result of historical underrepresentation embedded in training data.
Finally, some stakeholders may be taking a wait-and-see approach, remaining on the sidelines until firmer evidence of benefits being achieved emerges before investing in the resources necessary to incorporate these technologies.
Are there specific uses or benefits of generative AI in particular to healthcare?
Generative AI, including ChatGPT, presents myriad opportunities in healthcare such as synthetic data generation to aid in drug development and diagnostics where data collection would otherwise be expensive or scarce. Some examples here include the development of a model to produce synthetic abnormal brain MRIs to train diagnostic ML models, and the use of zero-shot generative AI to produce novel antibody designs that are unlike those found in existing databases.
Generative AI also can help in designs for novel drugs, repurposing of existing drugs to new indications and analyzing patient-centric factors such as genetics and lifestyle to personalize treatment plans.
ChatGPT specifically could be used to perform administrative tasks such as scheduling appointments and drafting insurance approvals to free up time for physicians, aid healthcare professionals by conveniently summarizing scientific literature, and improve patient engagement and education by answering patient questions in a conversational manner. It has also been suggested that ChatGPT could theoretically aid in clinical decision-making, such as diagnostics, although it will likely take time for ChatGPT to build enough trustworthiness and validation for this application given the risk of hallucination, when the model outputs false content that may look plausible.
Whats the landscape for VC investment in healthcare AI and how does GS assess these companies?
VC funding continues to support and foster innovation in both early- and late-stage private biotech companies. In 2022, VC funding into AI- and ML-powered healthcare companies remained elevated even as it declined amid the market downturn and the associated slowdown in VC funding overall. So far in 2023, amid recession risk and other headwinds, VC deployment in healthcare AI, as elsewhere, has slowed.
Because of AI/ML's potential advantages in efficiency and effectiveness, how each company utilizes the armamentarium of available and rapidly expanding technologies is an important part of competitive differentiation. We take numerous factors into account when gauging competitive differentiation, such as the quality of the management team, the ultimate goal of the platform, the timeframe in which investors will understand whether this goal has been achieved and how the platform merges the available AI/ML toolkit with proprietary technologies to defend against emerging players.
This article is being provided for educational purposes only. The information contained in this article does not constitute a recommendation from any Goldman Sachs entity to the recipient, and Goldman Sachs is not providing any financial, economic, legal, investment, accounting, or tax advice through this article or to its recipient. Neither Goldman Sachs nor any of its affiliates makes any representation or warranty, express or implied, as to the accuracy or completeness of the statements or any information contained in this article and any liability therefore (including in respect of direct, indirect, or consequential loss or damage) is expressly disclaimed.
Meet the Woman Working to Remove Bias in Artificial Intelligence – Shine My Crown
Dr. Nika White, the author of Inclusion Uncomplicated: A Transformative Guide to Simplify DEI, is president and CEO of Nika White Consulting. Dr. White is an award-winning management and leadership consultant, keynote speaker, published author, and executive practitioner for DEI efforts in the areas of business, government, non-profit and education. Her work helping organizations break barriers and integrate DEI into their business frameworks led to her being recognized by Forbes as a Top 10 Diversity and Inclusion Trailblazer. The focus of Dr. White's consulting work is to create professional spaces where people can collaborate through a lens of compassion, empathy, and understanding.
Shine My Crown spoke with Dr. White to discuss the growing use of ChatGPT, which has been proven to produce biased output, and how companies can address the situation by incorporating diversity and inclusion within their organizations.
Talk to us about the work you do and how you are using your skillset to change the playing field for organizations in dire need of redesigning their DEI framework.
We ensure impact over activity. We co-create solutions with our clients. They have the institutional knowledge and we have the DEI expertise. In this sense, we become an extension of our clients' teams. The collaboration enriches the final product and output. We leverage evidence-based data to inform the work.
Data shows that systems like ChatGPT have sometimes proven to produce outputs that are racist, sexist, and factually incorrect. How do engineers who train these artificial intelligence applications work to rectify the unmethodical data it pulls from the internet?
Engineers should begin by understanding where they are in their DEI journey and what their own biases are. Once you understand your own biases, you can start to address them in yourself and in your work. Engineers should be trained to recognize racist language and systematic racism in data. This will give them the ability to decipher and sift through coded racist data and create programs around it.
In a recent quote, you mentioned that, "left unchecked, AI will regurgitate racist and sexist data and facts about POC, women and the LGBTQ+ community that historically were thought to be true in a culture that perpetuated systematic racism, sexism, and homophobia." Can you expound more upon this statement and what the resolution will be to fix this?
If AI only accounts for data and not historical context, AI could assume that BIPOC don't own homes because they don't want to or don't have the ability to. Historical context tells us that redlining and continuous, systematic oppression have actually hindered BIPOC from purchasing homes. Engineers must bring that historical context to AI. That's why having a diverse and well-trained engineering staff is important.
What do you believe is the real reason behind biased forms of technology like ChatGPT? Does it stem from human interference, or is science alone left to blame?
Human interference and science are both to blame. Science is programmed to decipher information the best way it knows how. Science's major flaw is agility: AI's capacity to evolve and change is stunted unless the engineers create checks and balances. However, AI can only be as good as the engineers programming it. Engineers must understand their biases to stop them from being programmed into AI.
What are some current efforts you are working on to create spaces where people feel included in their personal and professional environments?
We recently launched a new learning experience, Unravel the Knot. My approach to DEI is that of an integrationist, positing that the work of DEI is for all and can be organically incorporated into an individual's personal and professional spaces. I was moved by people I'd interacted with who expressed wanting to be a part of cultivating cultures of belonging. Still, they found such an endeavor complicated, polarizing, and defeating. These sentiments are a barrier for many who desire to engage deeper.
Every day, we hear that the work of Diversity, Equity, and Inclusion (DEI) is complicated, whether from businesses, employees, society in general, or the practitioners themselves. And the truth is, yes, DEI can be complicated, because the issues of DEI are complex. But they don't have to be.
Without a collective shift in how we relate to one another as humans, without the willingness to recognize our personal biases or withhold assumptions and sit with the discomfort, systems of oppression will remain locked in place. But if we center on ways to uncomplicate DEI, the entry point for more people to engage effectively increases.
This program helps to change how complex many people perceive DEI to be, so that the entry point for more people to engage in the work of belongingness increases significantly. This learning experience gives cohort members space to go deeper into foundational, practical tips and tools, helping them actualize DEI personally and within their organization. Participants will craft their DEI story, learn more about their identity, assess their cultural patterns, learn about emotional intelligence and Lived Experience Intelligence, practice mindfulness, unmask themselves, interrogate their biases, and understand more about inclusive communication.
Opinion: Artificial intelligence is the future of hiring – The San Diego Union-Tribune
Cooper is a professor of law at California Western School of Law and a research fellow at Singapore University of Social Sciences. He lives in San Diego. Kompella is CEO of industry analyst firm RPA2AI Research and visiting professor for artificial intelligence at the BITS School of Management, Mumbai, and lives in Bangalore, India.
Hiring is the lifeblood of the economy. In 2022, there were 77 million hires in the United States, according to the U.S. Department of Labor. Artificial intelligence is expected to make this hiring process more efficient and more equitable. Despite such lofty goals, there are valid concerns that using AI can lead to discrimination. Meanwhile, the use of AI in the hiring process is widespread and growing by leaps and bounds.
A Society for Human Resource Management survey last year showed that about 80 percent of employers use AI for hiring. And there is good reason for the assist: Hiring is a high-stakes decision for the individual involved and the businesses looking to employ talent. It is no secret, though, that the hiring process can be inefficient and subject to human biases.
AI offers many potential benefits. Consider that human resources teams spend only seven seconds skimming a resume, a document that is itself a one-dimensional portrait of a candidate. Recruiters instead end up spending much of their time on routine tasks like scheduling interviews. By using AI to automate such routine tasks, human resources teams can spend more quality time assessing candidates. AI tools can also use a wider range of data points about candidates, which can result in a more holistic assessment and lead to a better match. Research shows that overly masculine language in job descriptions deters women from applying. AI can be used to create job descriptions and ads that are more inclusive.
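The gendered-language finding is mechanically simple to act on: scan an ad against a list of masculine-coded terms. Here is a minimal Python sketch; the word list is a tiny illustrative sample invented for this example, not the lexicon from the cited research.

```python
# Hedged sketch: flagging gender-coded words in a job ad. The word list
# below is an illustrative sample, not a published research lexicon.

MASCULINE_CODED = {"dominant", "competitive", "aggressive", "ninja", "rockstar"}

def flag_coded_words(ad_text: str) -> list[str]:
    """Return masculine-coded words found in the ad, in order of first
    appearance, ignoring case and trailing punctuation."""
    words = [w.strip(".,!?").lower() for w in ad_text.split()]
    seen, flagged = set(), []
    for word in words:
        if word in MASCULINE_CODED and word not in seen:
            seen.add(word)
            flagged.append(word)
    return flagged

ad = "We want a competitive coding ninja who thrives in an aggressive market."
print(flag_coded_words(ad))  # ['competitive', 'ninja', 'aggressive']
```

A production tool would use the full published lexicons and suggest neutral replacements rather than merely flagging terms, but the core check is this kind of dictionary lookup.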
But using AI for hiring decisions can also lead to discrimination. A majority of recruiters in the 2022 Society for Human Resource Management survey identified flaws in their AI systems. For example, the systems excluded qualified applicants or lacked transparency around the way the algorithms work. There is also disparate impact (also known as unintentional discrimination) to consider. According to University of Southern California research in 2021, some job advertisements are not shown to women despite their being qualified for the roles being advertised. Also, advertisements for high-paying jobs are often hidden from women. Many states suffer a gender pay gap. When the advertisements themselves are invisible, the pay equity gap is likely not going to solve itself, even with the use of artificial intelligence.
Discrimination, even in light of new technologies, is still discrimination. New York City has fashioned a response by enacting Local Law 144, scheduled to come into effect on July 15. This law requires employers to provide notice to applicants when AI is being used to assess their candidacy. AI systems are subject to annual independent third-party audits and audit results must be displayed publicly. Independent audits of such high-stakes AI usage are a welcome move by New York City.
California, long considered a technology bellwether, has been off to a slow start. The California Workplace Technology Accountability Act, a bill that focused on employee data privacy, is now dead. On the anvil are updates to Chapter 5 (Discrimination in Employment) of the California Fair Employment and Housing Act. Initiated a year ago by the Fair Employment and Housing Council (now called the Civil Rights Department), these remain a work in progress. These are not new regulations per se but an update of existing anti-discrimination provisions. The proposed draft is open for public comments, but there is no implementation timeline yet. The guidance for compliance, the veritable dos and don'ts, including penalties for violations, is still awaited. There is also a recently introduced bill in the California Legislature that seeks to regulate the use of AI in business, including education, health care, housing and utilities, in addition to employment.
The issue is gaining attention globally. Among state laws on AI in hiring is one in Illinois that regulates AI tools used for video interviews. At the federal level, the Equal Employment Opportunity Commission has updated guidance on employer responsibilities. And internationally, the European Union's upcoming Artificial Intelligence Act classifies such AI as high-risk and prescribes stringent usage rules.
Adoption of AI can help counterbalance human biases and reduce discrimination in hiring. But the AI tools used must be transparent, explainable and fair. It is not easy to devise regulations for emerging technologies, particularly for a fast-moving one like artificial intelligence. Regulations need to prevent harm but not stifle innovation. Clear regulation coupled with education, guidance and practical pathways to compliance strikes that balance.
Director Chopra's Prepared Remarks on the Interagency … – Consumer Financial Protection Bureau
In recent years, we have seen a rapid acceleration of automated decision-making across our daily lives. Throughout the digital world and throughout sectors of the economy, so-called artificial intelligence is automating activities in ways previously thought to be unimaginable.
Generative AI, which can produce voices, images, and videos designed to simulate real-life human interactions, is raising the question of whether we are ready to deal with a wide range of potential harms, from consumer fraud to privacy to fair competition.
Today, several federal agencies are coming together to make one clear point: there is no exemption in our nation's civil rights laws for new technologies that engage in unlawful discrimination. Companies must take responsibility for their use of these tools.
The Interagency Statement we are releasing today seeks to take an important step forward to affirm existing law and rein in unlawful discriminatory practices perpetrated by those who deploy these technologies.1
The statement highlights the all-of-government approach to enforce existing laws and work collaboratively on AI risks.
Unchecked AI poses threats to fairness and to our civil rights in ways that are already being felt.
Technology companies and financial institutions are amassing massive amounts of data and using it to make more and more decisions about our lives, including whether we get a loan or what advertisements we see.
While machines crunching numbers might seem capable of taking human bias out of the equation, that's not what is happening. Findings from academic studies and news reporting raise serious questions about algorithmic bias. For example, a statistical analysis of 2 million mortgage applications found that Black families were 80 percent more likely to be denied by an algorithm when compared to white families with similar financial and credit backgrounds. The response of mortgage companies has been that researchers do not have all the data that feeds into their algorithms or full knowledge of the algorithms. But their defense illuminates the problem: artificial intelligence often feels like a black box behind brick walls.2
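The "80 percent more likely" figure is a relative risk: the ratio of the two groups' denial rates. A minimal sketch of where such a headline number comes from, using made-up counts rather than the study's actual data:

```python
# Illustrative arithmetic behind a claim like "80 percent more likely to be
# denied": the ratio of denial rates between two groups. Counts are made up.

def denial_rate(denied: int, total: int) -> float:
    """Fraction of applications in a group that were denied."""
    return denied / total

def relative_risk(rate_a: float, rate_b: float) -> float:
    """How many times likelier group A is to be denied than group B."""
    return rate_a / rate_b

rate_group_a = denial_rate(denied=270, total=1500)  # 18% denial rate (made up)
rate_group_b = denial_rate(denied=100, total=1000)  # 10% denial rate (made up)
rr = relative_risk(rate_group_a, rate_group_b)
print(f"relative risk: {rr:.1f}")  # relative risk: 1.8
```

A ratio of 1.8 is what journalists render as "80 percent more likely." A real disparate-impact audit would also control for credit variables and report uncertainty; this only shows where the headline ratio comes from.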
When consumers and regulators do not know how decisions are made by artificial intelligence, consumers are unable to participate in a fair and competitive market free from bias.
That's why the CFPB and other agencies are prioritizing and confronting digital redlining, which is redlining caused by bias present in lending or home valuation algorithms and other technology marketed as artificial intelligence. These systems are disguised as so-called neutral algorithms, but they are built like any other AI system, by scraping data that may reinforce the biases that have long existed.
We are working hard to reduce bias and discrimination when it comes to home valuations, including algorithmic appraisals. We will be proposing rules to make sure artificial intelligence and automated valuation models have basic safeguards when it comes to discrimination.
We are also scrutinizing algorithmic advertising, which, once again, is often marketed as AI advertising. We published guidance to affirm how lenders and other financial providers need to take responsibility for certain advertising practices. Specifically, advertising and marketing that uses sophisticated analytic techniques, depending on how these practices are designed and implemented, could subject firms to legal liability.
We've also taken action to protect the public from black box credit models, in some cases so complex that the financial firms that rely on them can't even explain the results. Companies are required to tell you why you were denied credit, and using a complex algorithm is not a defense against providing specific and accurate explanations.
Developing methods to improve home valuation, lending, and marketing is not inherently bad. But when done in irresponsible ways, such as creating black box models or not carefully studying the data inputs for bias, these products and services pose real threats to consumers' civil rights. They also threaten law-abiding nascent firms and entrepreneurs trying to compete with those who violate the law.
I am pleased that the CFPB will continue to contribute to the all-of-government mission to ensure that the collective laws we enforce are followed, regardless of the technology used.
Thank you.