The Prometheus League
Breaking News and Updates
- Abolition Of Work
- Ai
- Alt-right
- Alternative Medicine
- Antifa
- Artificial General Intelligence
- Artificial Intelligence
- Artificial Super Intelligence
- Ascension
- Astronomy
- Atheism
- Atheist
- Atlas Shrugged
- Automation
- Ayn Rand
- Bahamas
- Bankruptcy
- Basic Income Guarantee
- Big Tech
- Bitcoin
- Black Lives Matter
- Blackjack
- Boca Chica Texas
- Brexit
- Caribbean
- Casino
- Casino Affiliate
- Cbd Oil
- Censorship
- Cf
- Chess Engines
- Childfree
- Cloning
- Cloud Computing
- Conscious Evolution
- Corona Virus
- Cosmic Heaven
- Covid-19
- Cryonics
- Cryptocurrency
- Cyberpunk
- Darwinism
- Democrat
- Designer Babies
- DNA
- Donald Trump
- Eczema
- Elon Musk
- Entheogens
- Ethical Egoism
- Eugenic Concepts
- Eugenics
- Euthanasia
- Evolution
- Extropian
- Extropianism
- Extropy
- Fake News
- Federalism
- Federalist
- Fifth Amendment
- Financial Independence
- First Amendment
- Fiscal Freedom
- Food Supplements
- Fourth Amendment
- Free Speech
- Freedom
- Freedom of Speech
- Futurism
- Futurist
- Gambling
- Gene Medicine
- Genetic Engineering
- Genome
- Germ Warfare
- Golden Rule
- Government Oppression
- Hedonism
- High Seas
- History
- Hubble Telescope
- Human Genetic Engineering
- Human Genetics
- Human Immortality
- Human Longevity
- Illuminati
- Immortality
- Immortality Medicine
- Intentional Communities
- Jacinda Ardern
- Jitsi
- Jordan Peterson
- Las Vegas
- Liberal
- Libertarian
- Libertarianism
- Liberty
- Life Extension
- Macau
- Marie Byrd Land
- Mars
- Mars Colonization
- Mars Colony
- Memetics
- Micronations
- Mind Uploading
- Minerva Reefs
- Modern Satanism
- Moon Colonization
- Nanotech
- National Vanguard
- NATO
- Neo-eugenics
- Neurohacking
- Neurotechnology
- New Utopia
- New Zealand
- Nihilism
- Nootropics
- NSA
- Oceania
- Offshore
- Olympics
- Online Casino
- Online Gambling
- Pantheism
- Personal Empowerment
- Poker
- Political Correctness
- Politically Incorrect
- Polygamy
- Populism
- Post Human
- Post Humanism
- Posthuman
- Posthumanism
- Private Islands
- Progress
- Proud Boys
- Psoriasis
- Psychedelics
- Putin
- Quantum Computing
- Quantum Physics
- Rationalism
- Republican
- Resource Based Economy
- Robotics
- Rockall
- Ron Paul
- Roulette
- Russia
- Sealand
- Seasteading
- Second Amendment
- Seychelles
- Singularitarianism
- Singularity
- Socio-economic Collapse
- Space Exploration
- Space Station
- Space Travel
- Spacex
- Sports Betting
- Sportsbook
- Superintelligence
- Survivalism
- Talmud
- Technology
- Teilhard De Charden
- Terraforming Mars
- The Singularity
- Tms
- Tor Browser
- Trance
- Transhuman
- Transhuman News
- Transhumanism
- Transhumanist
- Transtopian
- Transtopianism
- Ukraine
- Uncategorized
- Vaping
- Victimless Crimes
- Virtual Reality
- Wage Slavery
- War On Drugs
- Waveland
- Ww3
- Yahoo
- Zeitgeist Movement
- Prometheism
- Forbidden Fruit
- The Evolutionary Perspective
Category Archives: Ai
Jim Goodnight, the ‘Godfather of A.I.,’ predicts the future fate of the US workforce – CNBC
Posted: November 9, 2019 at 8:42 am
Every technology revolution has a unique inflection point. The spark that ignited the artificial intelligence movement was a statistical data analysis system developed by Jim Goodnight when he was a statistics professor at North Carolina State University 45 years ago.
He never imagined that the technology he created to improve crop yields would evolve into sophisticated data analytics software, a precursor to modern-day AI. Back then, computers could execute only 300 instructions a second and had 8K of memory. Today they can execute 3 billion instructions a second and contain multiple terabytes of memory.
Goodnight, considered the Godfather of AI, now sits at the helm of the world's largest privately held software company by revenue: SAS Institute. Despite its low profile, the Cary, North Carolina-based company had revenues of $3.27 billion last year, thanks to analytics and AI platforms used by more than 83,000 businesses, governments and universities.
In an interview with CNBC, the CEO gives his views on how AI is changing the U.S. workforce and what lies ahead.
Over the last four decades, how has data analytics software evolved? Did you ever imagine it would change the world as much as it has?
No. It has been a game changer for society. At first we were using analytics software and doing balanced experiments. Today we have moved into forecasting. Neural networks, which mimic the way the human brain operates, and other machine learning tools are being used to do all sorts of predictions in a host of industries.
As computer speeds grow and the amount of data explodes, this technology has become critical.
How has it become a mainstream tool for business and public institutions?
It is used by nearly every industry in a variety of ways. Drug companies use it for clinical trial analysis. Utilities use it to predict peak demand for electricity. Retailers use it to assess buying patterns so they can figure out what sizes to stock. Banks also are using neural networks to detect credit card fraud and to prevent money laundering.
Areas where I see a surge in demand are 5G technology, connected devices, cloud services, autonomous driving, machine learning and fintech.
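To make the fraud-detection example concrete, a classifier of the kind Goodnight describes can be sketched in a few lines. This is an editorial illustration on synthetic data, not any bank's actual model:

```python
# Illustrative sketch: a small neural-network classifier on synthetic
# "transaction" data standing in for a fraud-detection system.
from sklearn.datasets import make_classification
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic data: 2% positive class, mimicking the rarity of fraud.
X, y = make_classification(n_samples=10_000, n_features=20,
                           weights=[0.98, 0.02], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=300,
                    random_state=0).fit(X_tr, y_tr)

# With rare fraud, raw accuracy is misleading; report per-class recall.
print(classification_report(y_te, clf.predict(X_te), digits=3))
```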
What is your forecast for AI over the next decade?
I believe we will see things like computer vision, which involves machines capturing, processing and analyzing real-world images and video to extract information from the physical world, being used. Anything we can see with our eyes, we can train a computer to recognize as well. This will be transformative, especially in the autonomous driving sector and in medicine.
Over the past few decades, sensors and image processors have been created to match or even exceed the human eye's capabilities. With larger, more optically perfect lenses and nanometer-scaled image sensors and processors, the precision and sensitivity of modern cameras are incredible, especially compared to common human eyes. Cameras can also record thousands of images per second, detect distances and see better in dark environments.
Already computer vision is making a difference in health care. The medical community is using it to interpret CT scans. SAS is working with Amsterdam University to identify the size of tumors in cancer patients more accurately.
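In code, training a computer to recognize what we can see typically starts from a pretrained convolutional network. A minimal inference sketch with PyTorch and torchvision follows; it is illustrative only, not the SAS or Amsterdam pipeline, and the input file name is hypothetical:

```python
# Classify one image with a pretrained convolutional network.
import torch
from PIL import Image
from torchvision import models, transforms

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet stats
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet18(pretrained=True).eval()
img = Image.open("scan.jpg")           # hypothetical input image
batch = preprocess(img).unsqueeze(0)   # shape: (1, 3, 224, 224)

with torch.no_grad():
    probs = torch.softmax(model(batch)[0], dim=0)
print(probs.topk(5))                   # five most likely ImageNet classes
```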
How do you think it will change the workforce and the way companies manage operations?
The largest impact will be felt in the manufacturing industry on the factory floor. Robots with computer vision will become more sophisticated. The process has already begun; there are huge numbers of industrial robots already. Over the years, robots will take on many roles in the factory. Humans will be needed to maintain and program them.
But there are a lot of misconceptions about AI. We are nowhere near the time where robots can think like humans. That is an era far into the future. In today's world humans are needed to train these machines to recognize images and analyze data.
The talent war in the tech sector is fierce. How is SAS retaining and developing workers in this era?
Our turnover rate is 4%, and that is considered low in the tech industry, where rates hover around 14%. We lose a few people to larger tech companies, but we have no trouble replacing them. We do everything possible to make SAS Institute a great place to work, and that includes investing in training. The key is giving employees challenging work. That is more important to a tech worker than a salary.
SAS Institute founder and CEO Jim Goodnight (center) lets employees pitch big ideas that can help in developing innovative software products.
We manage the company to unleash the power of creativity. We encourage creativity by having demo days, where employees can share the products and technology they are working on and pitch management for funding or additional resources. Employees can also come to senior management meetings to pitch their ideas and innovations. Every employee is also expected to complete two training courses a year in a new software language so they can remain up to date on the latest technology.
What advice would you give other companies grappling with the skills shortage issue?
One thing is to create education and skills training programs to develop more data scientists in the U.S. We have partnered with 82 universities, such as Michigan State and the University of Arkansas, to develop master's programs for scientists trained on SAS software. Some of these programs are linked to local businesses that are looking for a talent pipeline.
This has been a big part of our outreach strategy. For example, at North Carolina State University we helped create the Institute for Advanced Analytics, which offers a one-year course simulating a work environment. It produces 120 graduates a year trained in SAS software.
Excerpt from:
Jim Goodnight, the 'Godfather of A.I.,' predicts the future fate of the US workforce - CNBC
Last Week In Venture: AI Chips, ML Anywhere, And Spreadsheets As Backends For Apps – Crunchbase News
Posted: at 8:42 am
Hello and welcome back to Last Week In Venture, the weekly recap of interesting deals which may have flown under your radar.
Seed and early-stage deals struck today are a lens through which to view potential futures. So let's take a quick look at a few interesting transactions from the week that was in venture-land.
It was a busy week at the intersection of hardware and machine learning.
You may have already heard about Neural Magic, the Somerville, MA-based startup which lets data scientists and AI engineers run machine learning models on commodity CPUs using its own, proprietary inference engine. The company says it can deliver GPU performance on central processing units, which is a big deal, considering that upfront cost of acquiring specialized compute hardware remains a barrier to entry into large-scale machine learning projects. This week, the company announced $15 million in seed funding led by Comcast Ventures. NEA, Andreessen Horowitz, Pillar Ventures, and Amdocs participated in the transaction.
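Neural Magic's inference engine is proprietary, but the problem it attacks, getting more inference throughput out of ordinary CPUs, can be illustrated with stock PyTorch. The sketch below uses dynamic int8 quantization, a generic technique, and makes no claim about the company's actual method:

```python
# Illustrative CPU-inference speedup via dynamic quantization.
# Generic PyTorch, not Neural Magic's proprietary engine.
import time
import torch

model = torch.nn.Sequential(
    torch.nn.Linear(1024, 1024), torch.nn.ReLU(),
    torch.nn.Linear(1024, 1024), torch.nn.ReLU(),
    torch.nn.Linear(1024, 10),
).eval()

# Quantize the Linear layers to int8 for faster CPU matrix multiplies.
qmodel = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8)

x = torch.randn(64, 1024)
for name, m in [("fp32", model), ("int8", qmodel)]:
    start = time.perf_counter()
    with torch.no_grad():
        for _ in range(100):
            m(x)
    print(name, f"{time.perf_counter() - start:.3f}s")
```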
On the other side of the market is Untether AI. Instead of developing software that runs on generalized hardware, the Toronto-based company makes specialized, high-efficiency inference chips utilizing a design which places the processor very close to onboard memory, reducing latency and energy use. This week the company announced $20 million in Series A funding, which they technically closed back in May. The company closed $13 million from Intel Capital in April and the remainder from Radical Ventures. As part of the transaction, founding CEO Martin Snelgrove transitions to a CTO role as seasoned chipmaker executive Arun Iyengar steps up as CEO of the company and Radical Ventures founding partner Tomi Poutanen joins its board.
You know what's actually pretty sweet? Spreadsheets. They're, like, totally tabular. Which is great for stuff like accounting, displaying lots of rows of data, and some more whimsical applications.
But just because some information might live in a spreadsheet doesn't mean it can't get dressed up a little. Glide Apps is a San Francisco-based, Y Combinator-backed company that helps its users build mobile apps that display and interact with data stored in a Google Sheet, all without needing to write a single line of code. The company has produced a set of templates showing how Glide Apps can be used for a range of use cases, from a city guide to an investor update app.
This week, the company announced a new pro pricing tier, alongside $3.6 million in additional seed financing led by First Round Capital, with participation from Idealab, SV Angel, and the chief executives of GitHub and Figma.
The company says that, since its inception in 2018, tens of thousands of people have built Glide apps which have, collectively, reached over one million users.
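The underlying pattern, a spreadsheet serving as an app's read-only backend, is easy to sketch by hand. The snippet below pulls a Google Sheet's published CSV export and turns its rows into JSON; the sheet URL is a placeholder, and this is in no way Glide's implementation:

```python
# "Spreadsheet as backend" sketch: read a Google Sheet that has been
# published to the web as CSV and expose its rows as JSON.
# SHEET_CSV_URL is a placeholder, not a real sheet.
import csv
import io
import json
import urllib.request

SHEET_CSV_URL = ("https://docs.google.com/spreadsheets/d/"
                 "<SHEET_ID>/export?format=csv")

def fetch_rows(url: str) -> list[dict]:
    """Download the CSV export; parse each row into a dict keyed by header."""
    with urllib.request.urlopen(url) as resp:
        text = resp.read().decode("utf-8")
    return list(csv.DictReader(io.StringIO(text)))

if __name__ == "__main__":
    rows = fetch_rows(SHEET_CSV_URL)
    print(json.dumps(rows[:3], indent=2))  # first three rows as JSON
```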
Follow this link:
Last Week In Venture: AI Chips, ML Anywhere, And Spreadsheets As Backends For Apps - Crunchbase News
Does Your AI Have Users’ Best Interests at Heart? – Harvard Business Review
Posted: at 8:42 am
Executive Summary
We now live in a world built on machine learning and AI, which relies on data as its fuel, and which in the future will support everything from precision agriculture to personalized healthcare. The next generation of platforms will even recognize our emotions and read our thoughts. For leaders in the Algorithmic Age, simply following the rules has never looked more perilous, nor more morally insufficient. As we create systems that are more capable of understanding and targeting services at individual users, our capacity to do evil by automating bias and weaponizing algorithms will grow exponentially. And yet this raises a question: what exactly is evil? Is it breaking the law, breaking your industry code of conduct, or breaking user trust? Rather than relying on regulation, leaders must instead walk an ethical tightrope. Your customers will expect you to use their data to create personalized and anticipatory services for them, while demanding that you prevent the inappropriate use and manipulation of their information. As you look for your own moral compass, one principle is apparent: you can't serve two masters. In the end, you either build a culture based on following the law, or you focus on empowering users. The choice might seem to be an easy one, but it is more complex in practice.
Ethical decisions are rarely easy. Now, even less so. Smart machines, cheap computation, and vast amounts of consumer data not only offer incredible opportunities for modern organizations, they also present a moral dilemma for 21st century leaders: Is it OK, as long as it's legal?
Certainly, there will be no shortage of regulation in the coming years. For ambitious politicians and regulators, Big Tech is starting to resemble Big Tobacco with the headline-grabbing prospect of record fines, forced break-ups, dawn raids, and populist public outrage. Yet for leaders looking for guidance in the Algorithmic Age, simply following the rules has never looked more perilous, nor more morally insufficient.
Don't get me wrong. A turbulent world of AI- and data-powered products requires robust rules. Given the spate of data breaches and abuses in recent years, Google's former unofficial motto, "Don't be evil," now seems both prescient and naive. As we create systems that are more capable of understanding and targeting services at individual users, our capacity to do evil by automating bias and weaponizing algorithms will grow exponentially. And yet this raises a question: what exactly is evil? Is it breaking the law, breaking your industry code of conduct, or breaking user trust?
Algorithmic bias can take many forms; it is not always as clear-cut as racism in criminal sentencing or gender discrimination in hiring. Sometimes too much truth is just as dangerous. In 2013, researchers Michal Kosinski, David Stillwell, and Thore Graepel published an academic paper that demonstrated that Facebook likes (which were publicly open by default at that time) could be used to predict a range of highly sensitive personal attributes, including sexual orientation and gender, ethnicity, religious and political views, personality traits, use of addictive substances, parental separation status, and age.
Disturbingly, even if you didn't reveal your sexual orientation or political preferences, this information could still be statistically predicted from what you did reveal. So, while fewer than 5% of users identified as gay were connected with explicitly gay groups, their preference could still be deduced. When they published their study, the researchers acknowledged that their findings risked being misused by third parties, to incite discrimination, for example. However, where others saw danger and risk, Aleksandr Kogan, one of Kosinski's colleagues at Cambridge University, saw opportunity. In early 2014, Cambridge Analytica, a British political consulting firm, signed a deal with Kogan for a private venture that would capitalize on the work of Kosinski and his team.
Kogan was able to create a quiz thanks to an initiative at Facebook that allowed third parties to access user data. Almost 300,000 users were estimated to have taken that quiz. It later emerged that Cambridge Analytica then exploited the data it had harvested via the quiz to access and build profiles on 87 million Facebook users. Arguably, neither Facebook's nor Cambridge Analytica's decisions were strictly illegal, but in hindsight, and in the context of the scandal the program soon unleashed, they could hardly be called good judgment calls.
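For readers wondering how likes become predictions: the statistical machinery is unglamorous, essentially regression over a binary user-by-like matrix (the original study used singular-value decomposition followed by regression). A toy sketch on random stand-in data, purely to show the shape of the computation:

```python
# Toy sketch: predict a binary attribute from binary "like" vectors.
# Data is random; the real study used SVD + regression on actual likes.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users, n_likes = 5000, 2000
X = (rng.random((n_users, n_likes)) < 0.02).astype(float)  # sparse 0/1 likes

# Hypothetical attribute weakly correlated with the first 20 likes.
y = (X[:, :20].sum(axis=1) + rng.normal(0, 0.5, n_users) > 0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(C=0.1, max_iter=1000).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```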
According to Julian Wheatland, COO of Cambridge Analytica at the time, the company's biggest mistake was believing that complying with government regulations was enough, and thereby ignoring broader questions of data ethics, bias and public perception.
How would you have handled a similar situation? Was Facebook's mistake a two-fold one of not setting the right policies for handling their user data upfront, and sharing that information too openly with their partners? Should they have anticipated the reaction of the U.S. senators who eventually called a Congressional hearing, and spent more resources on lobby groups? Would a more comprehensive user agreement have shielded Facebook from liability? Or was this simply a case of bad luck? Was providing research data to Kogan a reasonable action to take at the time?
By contrast, consider Apple. When Tim Cook took the stage to announce Apple's latest and greatest products for 2019, it was clear that privacy and security, rather than design and speed, were now the real focus. From eliminating human grading of Siri requests to warnings on which apps are tracking your location, Apple was attempting to shift digital ethics out of the legal domain, and into the world of competitive advantage.
Over the last decade, Apple has been criticized for taking the opposing stance on many issues relative to its peers like Facebook and Google. Unlike them, Apple runs a closed ecosystem with tight controls: you can't load software on an iPhone unless it has been authorized by Apple. The company was also one of the first to fully encrypt its devices, including deploying end-to-end encryption on iMessage and FaceTime for communication between users. When the FBI demanded a password to unlock a phone, Apple refused and went to court to defend its right to do so. When the company launched Apple Pay and, more recently, its new credit card, it kept customer transactions private rather than recording all the data for its own analytics.
While Facebook's actions may have been within the letter of the law, and within the bounds of industry practice at the time, they did not have users' best interests at heart. There may be a simple reason for this. Apple sells products to consumers. At Facebook, the product is the consumer. Facebook sells consumers to advertisers.
Banning all data-collection is futile. There is no going back. We already live in a world built on machine learning and AI, which relies on data as its fuel, and which in the future will support everything from precision agriculture to personalized healthcare. The next generation of platforms will even recognize our emotions and read our thoughts.
Rather than relying on regulation, leaders must instead walk an ethical tightrope. Your customers will expect you to use their data to create personalized and anticipatory services for them, while demanding that you prevent the inappropriate use and manipulation of their information. As you look for your own moral compass, one principle is apparent: you can't serve two masters. In the end, you either build a culture based on following the law, or you focus on empowering users. The choice might seem to be an easy one, but it is more complex in practice. Being seen to do good is not the same as actually being good.
That's at least one silver lining when it comes to the threat of robots taking our jobs. Who better to navigate complex, nuanced, and difficult ethical judgments than humans themselves? Any machine can identify the right action from a set of rules, but actually knowing and understanding what is good: that's something inherently human.
Go here to see the original:
Does Your AI Have Users' Best Interests at Heart? - Harvard Business Review
EU competition commissioner Margrethe Vestager says there’s ‘no limit’ to how AI can benefit humans – INSIDER
Posted: at 8:42 am
EU competition commissioner Margrethe Vestager, a frequent opponent to Silicon Valley tech firms, says she sees "no limit to how AI can support what we do as humans."
Given the Dane's status as arguably the most aggressive regulator of big tech on the planet (she hit Google with a €4.3 billion ($4.75 billion) fine in July 2018 and ordered Apple to pay Ireland back €13 billion ($14.3 billion) in "illegal" tax benefits in 2016), Vestager's optimism about AI could be viewed as surprising.
On the flipside, her positivity about AI's potential could be viewed as highly consistent with her stringent approach to regulating big tech: given how integral big tech is to AI research and development, Vestager's approach more likely reflects her keenness that big tech doesn't jeopardize AI's potential.
In September, the EU appointed Vestager to a role titled "Executive Vice President for A Europe fit for the Digital Age," effectively a continuation of her competition commission job, but with increased powers and oversight. It will see her set the agenda for the EU's regulation of artificial intelligence, among other regulatory duties.
Discussing the role at the Web Summit tech conference in Lisbon, Portugal on Thursday, Vestager said: "The first thing we will do is, of course, to listen very, very carefully, and we'll try to listen fast, because as we're speaking, AI is developing."
"That is wonderful, because I see no limits to how artificial intelligence can support what we want to do as humans," she continued. "Take climate change. I think we can be much more effective in fighting climate change if we use artificial intelligence.
"I think we can save people awful, stressful waiting time between having been examined by a doctor and having the result of that examination, and maybe also more precise results in doing that. So I think the benefits of using artificial intelligence [have] no limits," she said.
"But we need to get in control of certain cornerstones so that we can trust it, and it has human oversight, and very importantly that it doesn't have bias."
OpenAI Just Released the AI It Said Was Too Dangerous to Share – Futurism
Posted: at 8:42 am
Here You Go
In February, artificial intelligence research startup OpenAI announced the creation of GPT-2, an algorithm capable of writing impressively coherent paragraphs of text.
But rather than release the AI in its entirety, the team shared only a smaller model, out of fear that people would use the more robust tool maliciously to produce fake news articles or spam, for example.
But on Tuesday, OpenAI published a blog post announcing its decision to release the algorithm in full as it has seen no strong evidence of misuse so far.
According to OpenAI's post, the company did see some discussion regarding the potential use of GPT-2 for spam and phishing, but it never actually saw evidence of anyone misusing the released versions of the algorithm.
The problem might be that, while GPT-2 is one of the best text-generating AIs in existence, if not the best, it still can't produce content that's indistinguishable from text written by a human. And OpenAI warns it's those algorithms we'll have to watch out for.
"We think synthetic text generators have a higher chance of being misused if their outputs become more reliable and coherent," the startup wrote.
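With the full model now public, the text generation at issue is easy to reproduce. A minimal sketch using the Hugging Face transformers library, one common way to load the released weights (the prompt and sampling settings are arbitrary):

```python
# Minimal GPT-2 sampling sketch using the Hugging Face transformers
# library (pip install transformers torch).
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "The release of powerful language models"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

output = model.generate(
    input_ids,
    max_length=60,                       # prompt + continuation tokens
    do_sample=True,                      # sample rather than greedy decode
    top_k=50,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id, # GPT-2 has no pad token
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```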
READ MORE: OpenAI has published the text-generating AI it said was too dangerous to share [The Verge]
More on OpenAI: Now You Can Experiment With OpenAI's Dangerous Fake News AI
Read the original:
OpenAI Just Released the AI It Said Was Too Dangerous to Share - Futurism
THE AI IN INSURANCE REPORT: How forward-thinking insurers are using AI to slash costs and boost customer satis – Business Insider India
Posted: at 8:42 am
The insurance sector has fallen behind the curve of financial services innovation - and that's left hundreds of billions in potential cost savings on the table.
The most valuable area in which insurers can innovate is the use of artificial intelligence (AI): It's estimated that AI can drive cost savings of $390 billion across insurers' front, middle, and back offices by 2030, according to a report by Autonomous NEXT seen by Business Insider Intelligence. The front office is the most lucrative area to target for AI-driven cost savings, with $168 billion up for grabs by 2030.
In the AI in Insurance Report, Business Insider Intelligence will examine AI solutions across key areas of the front office - customer service, personalization, and claims management - to illustrate how the technology can significantly enhance the customer experience and cut costs along the value chain. We will look at companies that have accomplished these goals to illustrate what insurers should focus on when implementing AI, and offer recommendations on how to ensure successful AI adoption.
The companies mentioned in this report are: IBM, Lemonade, Lloyd's of London, Next Insurance, Planck, PolicyPal, Root, Tractable, and Zurich Insurance Group.
Here are some of the key takeaways from the report:
In full, the report:
Interested in getting the full report? Here are two ways to access it:
AI is making literary leaps; now we need the rules to catch up - The Guardian
Posted: at 8:42 am
Last February, OpenAI, an artificial intelligence research group based in San Francisco, announced that it has been training an AI language model called GPT-2, and that it now "generates coherent paragraphs of text, achieves state-of-the-art performance on many language-modelling benchmarks, and performs rudimentary reading comprehension, machine translation, question answering, and summarisation, all without task-specific training."
If true, this would be a big deal. But, said OpenAI, "due to our concerns about malicious applications of the technology, we are not releasing the trained model. As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with, as well as a technical paper."
Given that OpenAI describes itself as a research institute dedicated to "discovering and enacting the path to safe artificial general intelligence," this cautious approach to releasing a potentially powerful and disruptive tool into the wild seemed appropriate. But it appears to have enraged many researchers in the AI field, for whom "release early and release often" is a kind of mantra. After all, without full disclosure of program code, training dataset, neural network weights, etc., how could independent researchers decide whether the claims made by OpenAI about its system were valid? The replicability of experiments is a cornerstone of scientific method, so the fact that some academic fields may be experiencing a replication crisis (a large number of studies that prove difficult or impossible to reproduce) is worrying. We don't want the same to happen to AI.
On the other hand, the world is now suffering the consequences of tech companies like Facebook, Google, Twitter, LinkedIn, Uber and co designing algorithms for increasing user engagement and releasing them on an unsuspecting world with apparently no thought of their unintended consequences. And we now know that some AI technologies (generative adversarial networks, for example) are being used to generate increasingly convincing deepfake videos.
If the row over GPT-2 has had one useful outcome, it is a growing realisation that the AI research community needs to come up with an agreed set of norms about what constitutes responsible publication (and therefore release). At the moment, as Prof Rebecca Crootof points out in an illuminating analysis on the Lawfare blog, there is no agreement about AI researchers' publication obligations. And of all the proliferating ethical AI guidelines, only a few entities explicitly acknowledge that there may be times when limited release is appropriate. At the moment, the law has little to say about any of this, so we're currently at the same stage as we were when governments first started thinking about regulating medicinal drugs.
In the case of GPT-2, my hunch is that fears about its pathogenic propensities may be overdone, not because it doesn't work, but because humans have long experience of dealing with print fakery. Ever since Gutenberg, people have been printing falsehoods and purporting to be someone else. But over the centuries, we've developed ways of spotting fakes. Accordingly, machine-generated text poses less of a problem than video deepfakes.
GPT-2's capabilities are undoubtedly impressive, though. In a fascinating essay, "I, Language Robot," the neuroscientist and writer Patrick House reports on his experience of working alongside OpenAI's language model, which produces style-matched prose in response to any written prompt it's fed.
It was, he recounts, an instructive exercise. "The language robot and I never wrote the same story," he recalls. "It became, eventually, a writing tool, both as a curator to new ideas and a splint for bad ones. When, after a few months, I lost access to it, my brain felt like it had been lesioned. I missed it. I had found myself jumping into its head, asking what it might write as one would a human writing partner, which made me wonder: should I give it the same theory-of-mind, inductive respect as I give to other thinking things? Could I even help but do otherwise?"
The most fascinating part of House's report deals with his idea of asking the robot the same questions that George Plimpton asked Ernest Hemingway in his famous Paris Review interview in 1958. In reply to the question whether it remembered the exact moment when it decided to be a writer, it replied: "I don't remember ever wanting not to be a writer. Writing had always been my way out. My escape. When I was a little kid, my sister was always coming home from school with homework assignments and I knew if I got hers and mine done early I could spend the next day's lunch hour looking at books. It was the first time that I actually thought that being a writer might happen."
And when asked what it did when it got stuck, it replied: "It's hard to say, but I would say that it would be better not to have a muse at all, to do all that writing you think is going to be good without any conscious control over what you do. It's better to be totally in charge of your own actions."
At this point, the reader gets the eerie uncanny-valley feeling: this is almost, but not quite, authentic. But the technology is getting there. Why, any day now it'll be writing newspaper columns.
The web ain't the world. A good report by the Reuters Institute at the University of Oxford challenges conventional wisdom by finding that most people still get their news from offline sources.
Culinary conditioning. TheConversation.com has an intriguing essay, "How steak became manly and salads became feminine," by Yale historian Paul Freedman.
It's a bots' world. Renee DiResta has written an insightful piece on the algorithmic public sphere called "There are bots. Look around." at Ribbonfarm.com.
Continue reading here:
AI is making literary leaps; now we need the rules to catch up - The Guardian
Nvidia Exec: We Need Partners To Push GPU-Based AI Solutions – CRN: The Biggest Tech News For Partners And The IT Channel
Posted: at 8:42 am
Nvidia sales executive Kevin Connors says channel partners play an important role in the chipmaker's strategy for selling and supporting GPU-accelerated solutions for artificial intelligence, a market that is still in its early stages and can provide the channel major growth opportunities as a result.
"People are wanting higher performance computing at supercomputing levels, so that they can solve the world's problems, whether it's discovery of the next genome or better analysis and other such workloads," Connors, Nvidia's vice president of sales, global partners, said in an interview with CRN.
The Santa Clara, Calif.-based company's GPUs have become increasingly important in high-performance computing and artificial intelligence workloads, thanks to the parallel computing capabilities offered by their large number of cores and the substantial software ecosystem Nvidia has built around its CUDA platform, also known as Compute Unified Device Architecture, which debuted in 2007.
"As a company, we've always been focused on solving tough problems, problems that no one else could solve, and we invested in that. And so when we came out with CUDA which allowed application developers to port their high-performance computing apps, their scientific apps, engineering apps to our GPU platform that really began the process of developing a very rich ecosystem for high-performance computing," said Connors, who has been with Nvidia since 2006.
As a result, Nvidia's go-to-market strategy has changed significantly since the days when the company mostly sold GPUs to consumers, OEMs and system builders making gaming PCs. Now the company also sells entire platforms, such as the DGX, to make it easier for enterprises to embrace GPU computing.
"A lot of the enterprises are now looking at these new technologies, new capabilities to improve business outcomes, whether it's predictive analytics, forecasting maintenance. Things that AI can be applied to improve business outcomes is really is the competitive advantage of these industries," Connors said. "And this is where we invest a lot in terms of bringing this market, elevating the level of understanding and competency of these solutions and how they can affect business."
DGX, in particular, is an Nvidia-branded line of servers and workstations designed to help enterprises get started on developing AI and data science applications. The most recent product in the lineup, the DGX-2, is a server appliance that comes with 16 Nvidia Tesla V100 GPUs.
"The DGX is essentially what we would call the tip of the spear. It engages deeply into some enterprises, we learn from those experiences. It's an accelerant to developing an AI application. And so that was a great tool for kick-starting AI within the enterprise, and it's been wildly successful," Connors said.
Justin Emerson, solutions director for AI and machine learning at Herndon, Va.-based Nvidia partner ePlus Technology, said the value proposition of DGX is "around the software stack, enterprise support and reference architecture" and the fact that "it's ready go out of the box."
"We see DGX as the vehicle to deliver GPUs because they provide a lot of relief to the pain points many customers will see," Emerson said.
To bring products and platforms like DGX to market, Nvidia relies on its Nvidia Partner Network, the company's partner program that consists of value-added resellers, system integrators, OEMs, distributors, cloud service providers and solution advisors.
Connors said the Nvidia Partner Network has a tiered membership, which means that while all members have access to base resources, such as training courses, partners who reach certain revenue targets and training goals will receive more advanced resources.
"Our strategy is really to reach out and recruit, nurture, develop, train, enable partners that want to do the same, meaning they want to build out a deep learning practice, for example," he said. "They want to have the expertise, the competency and also the confidence to go to a customer and solve some of their problems with our technology."
Deep Learning Institute, vCompute Give Partners New Ways To Drive AI Adoption
One of the newer ways Nvidia is pushing AI solutions is its new vComputeServer software, which allows IT administrators to flexibly manage and deploy GPU resources for AI, high-performance computing and analytics workloads using GPU-accelerated virtual machines. The chipmaker's partners for vCompute include VMware, Nutanix, Red Hat, Dell, Hewlett Packard Enterprise and Amazon Web Services.
Connors said the new capability, which launched at VMware's VMworld conference in August, is a continuation of the chipmaker's push into virtualization solutions that began with its GRID platform for virtual desktop infrastructure.
"That opens up the aperture for virtualizing a GPU quite dramatically, because now we're virtualizing the server infrastructure," he said. "So we're not just virtualizing the client PC, we can actually virtualize the server. It can work with a lot of different workloads, containerized or otherwise, that are running on a GPU. So that's a pretty exciting space for us."
But pushing for greater AI adoption isn't just about selling GPUs and GPU-accelerated platforms like DGX and vCompute. Education is a key component for Nvidia's partners, which is why the chipmaker has set up its Deep Learning Institute. The company offers the courses directly to customers and partners, but it can also enable partners to resell and provide the courses themselves.
"That's an amazing educational tool that delivers hands-on training for data scientists to learn about these frameworks, learn about how to develop these deep neural networks, and we branched out, so that it's not just general AI," Connors said. "We actually have the industry-specific DLI for automotive, autonomous vehicles, finance, healthcare, even digital content creation, even game development."
Mike Trojecki, vice president of IoT and analytics at New York-based Nvidia partner Logicalis, said his company is seeing opportunities around Nvidia's DGX platform for research and development.
"When you look at the research and development side of things, what we're trying to help our customers do is we're helping them reduce the complexity of AI workloads," he said. "When we look at it with CPU-based AI, there's performance limitations and cost increases, so we're really trying to put together a package for them so they dont have to put these pieces together."
Trojecki said Logicalis plans to take "significant advantage" of Nvidia's Deep Learning Institute program as a way to help customers understand what solutions are available and what skills they need.
"With DLI, we're able to bring our customers in to start that education journey," he said. "For us as an organization, getting customers in the room is always a good thing."
Emerson, the solutions director at ePlus, said his company also offers Deep Learning Institute courses, but it has found value in creating its own curriculum around managing AI infrastructure as well.
"Just like in the late aughts when people bought new boxes with as much cores and memory to virtualize, there's going to be a massive infrastructure investment in accelerated computing, whether that's GPUs or something else," he said. "That's the thing that I think is going to be a big opportunity for Nvidia and ePlus: changing the way people build infrastructure.
Core National AI Strategies for European Union, the United Kingdom, Germany, France, Italy, Spain, Japan, China, and the United States -…
Posted: at 8:42 am
DUBLIN--(BUSINESS WIRE)--The "National AI Strategies" report has been added to ResearchAndMarkets.com's offering.
This report encapsulates the core elements of the national strategies for the adoption of artificial intelligence (AI) in the European Union, the United Kingdom, Germany, France, Italy, Spain, Japan, China, and the United States.
These include, for each strategy, the fundamental reports and supporting documents, the scope of sectoral focus and outlines of the initial and emerging stakeholders and players.
Key Topics Covered:
1. Executive Summary
2. European Commission
3. United Kingdom
4. Germany
5. France
6. Italy
7. Spain
8. China
9. United States of America
10. Japan
For more information about this report visit https://www.researchandmarkets.com/r/l7t0b6
From AI to climate change: An integrated approach to university education – The Globe and Mail
Posted: at 8:42 am
ONRamp is a 15,000 square foot collaboration, incubation and co-working space developed for U of T's entrepreneurship community.
As the world refocuses on breakthroughs on global issues such as technology and environmental challenges, universities are also transforming their studies to serve students' changing interests.
Studies in artificial intelligence, climate change and entrepreneurship may seem like they don't have a whole lot in common, but there is one underlying element to the way they are now taught: integration.
These are no longer stand-alone subjects at university. The impact they have on everyday life makes them relevant to a myriad of subjects, and they're now integrated into everything from visual arts to engineering.
Just a few years ago, entrepreneurship was only found in the syllabus of a business course. But taking an idea and developing it into a commercialized product or business is not limited to those with a business background, so why should the lessons of entrepreneurship be only for those students in that area of study?
"Students want to learn more about entrepreneurship in the classroom from across the board, in sciences, social sciences, clean technology, the arts, all of it," says Derek Newton, assistant vice-president of innovation, partnerships and entrepreneurship at the University of Toronto. "We now have more than 100 courses in entrepreneurship that our students can take across multiple disciplines."
More universities are also looking at how cross-pollination of disciplines and departments can spark unique business ideas and are even developing bricks-and-mortar spaces to foster this multidisciplinary creativity. For instance, ONRamp is a 15,000 square foot collaboration, incubation and co-working space developed specifically for U of T's entrepreneurship community.
Its partners include the University of Western Ontario, University of Waterloo, Queen's University and McMaster University. Plans are also underway for a 750,000 square foot innovation complex to be built on the campus, which, when completed, is expected to house the largest concentration of student- and faculty-led startups in Canada.
Then there's climate change, a field that experts say presents the biggest challenges our world has ever faced from a social, political and economic standpoint.
Because it affects our daily lives, climate change should be integrated into almost every area of student learning, according to Seth Wynes, a PhD candidate in geography at the University of British Columbia. Mr. Wynes's research looks at how governments and education institutions deliver information to the public about climate change, and he's found there is definite room for improvement.
But he is seeing more integration of climate change in a variety of postsecondary departments, which he says is a good start.
"When you're looking at universities, climate change can be talked about in a variety of areas," Mr. Wynes says. "In business, we can talk about stranded assets and the effects that decarbonization will have on the economy; in psychology departments, we can talk about the cognitive processes that hamper people adopting strong actions on climate change.
"And something I hear about more lately is incorporating more narratives about climate change into literature."
Like entrepreneurship and climate change, AI has made its way into a large number of industries in the past few years. Indeed, more employers from various fields are looking for graduates who have had some interaction with AI while at school.
"All fields are doing AI now because it's a technique that's now used in many [industrial] and research areas. And students are keenly aware of this," says Peter van Beek, co-director of the AI Institute at the University of Waterloo.
AI and, in particular, machine learning are traditionally associated with computer science, engineering and manufacturing. But as the technology has grown, so has its reach, Dr. van Beek says.
These areas are now much more commonplace in financial institutions and insurance firms that use predictive algorithms and massive amounts of data to assess areas such as risk.
"There's a lot of student interest in the courses that are offered from all areas, and we have a very large co-op program in our undergraduate years," adds Dr. van Beek. "What I'm seeing in résumés from students these days is that a lot of what they are doing, in a wide variety of fields, is AI related."
Read the original here:
From AI to climate change: An integrated approach to university education - The Globe and Mail