The Prometheus League
Breaking News and Updates
- Abolition Of Work
- Ai
- Alt-right
- Alternative Medicine
- Antifa
- Artificial General Intelligence
- Artificial Intelligence
- Artificial Super Intelligence
- Ascension
- Astronomy
- Atheism
- Atheist
- Atlas Shrugged
- Automation
- Ayn Rand
- Bahamas
- Bankruptcy
- Basic Income Guarantee
- Big Tech
- Bitcoin
- Black Lives Matter
- Blackjack
- Boca Chica Texas
- Brexit
- Caribbean
- Casino
- Casino Affiliate
- Cbd Oil
- Censorship
- Cf
- Chess Engines
- Childfree
- Cloning
- Cloud Computing
- Conscious Evolution
- Corona Virus
- Cosmic Heaven
- Covid-19
- Cryonics
- Cryptocurrency
- Cyberpunk
- Darwinism
- Democrat
- Designer Babies
- DNA
- Donald Trump
- Eczema
- Elon Musk
- Entheogens
- Ethical Egoism
- Eugenic Concepts
- Eugenics
- Euthanasia
- Evolution
- Extropian
- Extropianism
- Extropy
- Fake News
- Federalism
- Federalist
- Fifth Amendment
- Financial Independence
- First Amendment
- Fiscal Freedom
- Food Supplements
- Fourth Amendment
- Free Speech
- Freedom
- Freedom of Speech
- Futurism
- Futurist
- Gambling
- Gene Medicine
- Genetic Engineering
- Genome
- Germ Warfare
- Golden Rule
- Government Oppression
- Hedonism
- High Seas
- History
- Hubble Telescope
- Human Genetic Engineering
- Human Genetics
- Human Immortality
- Human Longevity
- Illuminati
- Immortality
- Immortality Medicine
- Intentional Communities
- Jacinda Ardern
- Jitsi
- Jordan Peterson
- Las Vegas
- Liberal
- Libertarian
- Libertarianism
- Liberty
- Life Extension
- Macau
- Marie Byrd Land
- Mars
- Mars Colonization
- Mars Colony
- Memetics
- Micronations
- Mind Uploading
- Minerva Reefs
- Modern Satanism
- Moon Colonization
- Nanotech
- National Vanguard
- NATO
- Neo-eugenics
- Neurohacking
- Neurotechnology
- New Utopia
- New Zealand
- Nihilism
- Nootropics
- NSA
- Oceania
- Offshore
- Olympics
- Online Casino
- Online Gambling
- Pantheism
- Personal Empowerment
- Poker
- Political Correctness
- Politically Incorrect
- Polygamy
- Populism
- Post Human
- Post Humanism
- Posthuman
- Posthumanism
- Private Islands
- Progress
- Proud Boys
- Psoriasis
- Psychedelics
- Putin
- Quantum Computing
- Quantum Physics
- Rationalism
- Republican
- Resource Based Economy
- Robotics
- Rockall
- Ron Paul
- Roulette
- Russia
- Sealand
- Seasteading
- Second Amendment
- Seychelles
- Singularitarianism
- Singularity
- Socio-economic Collapse
- Space Exploration
- Space Station
- Space Travel
- Spacex
- Sports Betting
- Sportsbook
- Superintelligence
- Survivalism
- Talmud
- Technology
- Teilhard De Charden
- Terraforming Mars
- The Singularity
- Tms
- Tor Browser
- Trance
- Transhuman
- Transhuman News
- Transhumanism
- Transhumanist
- Transtopian
- Transtopianism
- Ukraine
- Uncategorized
- Vaping
- Victimless Crimes
- Virtual Reality
- Wage Slavery
- War On Drugs
- Waveland
- Ww3
- Yahoo
- Zeitgeist Movement
- Prometheism
- Forbidden Fruit
- The Evolutionary Perspective
Category Archives: Ai
Merging big data and AI is the next step – TNW
Posted: August 20, 2017 at 6:17 pm
AI is one of the hottest trends in tech at the moment, but what happens when it's merged with another fashionable and extremely promising technology?
Researchers are looking for ways to take big data to the next level by combining it with AI. We've just recently realized how powerful big data can be, and by uniting it with AI, big data is swiftly marching towards a level of maturity that promises a bigger, industry-wide disruption.
The application of artificial intelligence to big data is arguably the most important breakthrough of our time. It redefines how businesses create value with the help of data. The availability of big data has fostered unprecedented breakthroughs in machine learning that would not have been possible before.
With access to large volumes of data, businesses can now derive meaningful insights and come up with amazing results. It is no wonder, then, that businesses are quickly moving from a hypothesis-based research approach to a more focused, data-first strategy.
Businesses can now process massive volumes of data, which was not possible before due to technical limitations; previously, they had to buy powerful and expensive hardware and software. The widespread availability of data is the most important paradigm shift, and it has fostered a culture of innovation in the industry.
The availability of massive datasets has corresponded with remarkable breakthroughs in machine learning, mainly due to the emergence of better, more sophisticated AI algorithms.
The best example of these breakthroughs is virtual agents. Virtual agents, more commonly known as chatbots, have gained impressive traction over time. Previously, chatbots had trouble identifying certain phrases, regional accents, dialects or nuances.
In fact, most chatbots get stumped by the simplest of words and expressions, such as mistaking "Queue" for "Q." With the union of big data and AI, however, we can see new breakthroughs in the way virtual agents can self-learn.
A good example of a self-learning virtual agent is Amelia, a cognitive agent recently developed by IPsoft. Amelia can understand everyday language, learns really fast and even gets smarter over time.
She is deployed at the help desk of Nordic bank SEB along with a number of public sector agencies. The reaction of executive teams to Amelia has been overwhelmingly positive.
Google is also delving deeper into big data-powered AI learning. DeepMind, Google's own artificial intelligence company, has developed an AI that can teach itself to walk, run, jump and climb without any prior guidance. The AI was never taught what walking or running is, but managed to learn them on its own through trial and error.
The implications of these breakthroughs in the realm of artificial intelligence are astounding, and could provide the foundation for further innovations in the times to come. However, there are dire repercussions of self-learning algorithms too, and if you weren't too busy to notice, you may have observed quite a few in the past.
Not long ago, Microsoft introduced its own artificial intelligence chatbot named Tay. The bot was made available to the public for chatting and could learn through human interactions. However, Microsoft pulled the plug on the project only a day after the bot was introduced to Twitter.
Learning at an exponential rate, mainly through human interactions, Tay transformed from an innocent AI teen girl into an evil, Hitler-loving, incestuous, sex-promoting, "Bush did 9/11"-proclaiming robot in less than 24 hours.
Some fans of sci-fi movies like Terminator also voice concerns that with the access it has to big data, artificial intelligence may become self-aware and that it may initiate massive cyberattacks or even take over the world. More realistically speaking, it may replace human jobs.
Looking at the rate of AI learning, we can understand why a lot of people around the world are concerned about self-learning AI and the access it enjoys to big data. Whatever the case, the prospects are both intriguing and terrifying.
There is no telling how the world will react to the amalgamation of big data and artificial intelligence. However, like everything else, it has its virtues and vices. For example, it is true that self-learning AI will herald a new age in which chatbots become more efficient and sophisticated at answering user queries.
Perhaps we will eventually see AI bots at help desks in banks, waiting to greet us. Through self-learning, those bots will have all the knowledge they could ever need to answer our queries in a manner unlike any human assistant.
Whatever the applications, we can surely say that combining big data with artificial intelligence will herald an age of new possibilities and astounding new breakthroughs and innovations in technology. Let's just hope that the virtues of this union will outweigh the vices.
AI-powered filter app Prisma wants to sell its tech to other companies – The Verge
Posted: at 6:17 pm
Prisma, the Russian company best known for its AI-powered photo filters, is shifting to B2B. The company won't retire its popular app, but says that in the future it will focus on selling machine vision tools to other tech firms.
"We see big opportunities in deep learning and communication," Prisma CEO and co-founder Alexey Moiseenkov told The Verge. "We feel that a lot of companies need expertise in this area. Even Google is buying companies for computer vision. We can help companies put machine vision in their app because we understand how to implement the technology." The firm has launched a new website, prismalabs.ai, to promote these services.
Prisma will offer a number of off-the-shelf vision tools, including segmentation (separating the foreground of a photo from the background), face mapping, and both scene and object recognition. The company's expertise is getting these sorts of systems, powered by neural networks, to run locally on-device. This can be a tricky task, but avoiding the cloud can result in apps that are faster, more secure, and less of a drain on phone and tablet battery life.
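Prisma has not published its pipeline, but the kind of off-the-shelf, on-device segmentation tool described above is easy to sketch. The snippet below is a rough illustration only, assuming a pretrained DeepLabV3 model from torchvision rather than anything Prisma ships; it separates a "person" foreground from the background entirely on the device, with no cloud call.

```python
# Illustration only (not Prisma's code): local foreground/background segmentation
# with a pretrained torchvision model -- the kind of "segmentation" building block
# described above, run on-device rather than in the cloud.
import torch
from PIL import Image
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_resnet50

model = deeplabv3_resnet50(pretrained=True).eval()  # newer torchvision uses weights=

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def foreground_mask(path, class_index=15):   # 15 = "person" in the VOC label set
    image = Image.open(path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)   # shape (1, 3, H, W)
    with torch.no_grad():
        logits = model(batch)["out"][0]      # shape (num_classes, H, W)
    return logits.argmax(0) == class_index   # boolean mask: True = foreground

# mask = foreground_mask("photo.jpg")  # style the foreground, leave the rest alone
```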
Getting copied by Facebook might help account for its pivot to B2B
Although Prisma's painting-inspired filters were all the rage last year (the app itself was released in June 2015), they were soon copied by the likes of Facebook, which might account for the Russian company's change in direction.
Moiseenkov denies this is the case, and says it wasn't his intention to compete with bigger social networks. "We never thought we were a competitor of Facebook; we're a small startup, with a small budget," he said. But, he says, the popularity of these deep learning filters shows there are plenty of consumer applications for the latest machine vision tech.
Moiseenkov says his company will continue to support the Prisma app, and that it will act as a showcase for the firm's latest experiments. He says the app still has between 5 million and 10 million monthly active users, most of whom are based in the US. The company also started experimenting with selling sponsored filters in its main app last year, and says it will continue to do so. It also launched an app for turning selfies into chat stickers.
There have been rumors that Prisma would get bought out by a bigger company. Moiseenkov visited Facebook's headquarters last year, and the US tech giant has made similar acquisitions in the past, buying Belarusian facial filter startup MSQRD in March 2016. When asked if the company would consider a similar deal, co-founder Aram Airapetyan replied over email: "We want to go on doing what we do and what we can do best. The whole team is super motivated and passionately committed to what we do! So the rest doesn't matter (where, when, with whom)." Make of that what you will.
Is AI More Threatening Than North Korean Missiles? – NPR
Posted: at 6:17 pm
In this April 30, 2015, file photo, Tesla Motors CEO Elon Musk unveils the company's newest products in Hawthorne, Calif. (Ringo H.W. Chiu/AP)
One of Tesla CEO Elon Musk's companies, the nonprofit start-up OpenAI, developed a bot that last week defeated some of the world's top gamers in an international video game (e-sport) tournament with a multimillion-dollar pot of prize money.
We're getting very good, it seems, at making machines that can outplay us at our favorite pastimes. Machines dominate Go, Jeopardy, chess and, as of now, at least some video games.
Instead of crowing over the win, though, Musk is sounding the alarm. Artificial Intelligence, or AI, he argued last week, poses a far greater risk to us now than even North Korean warheads.
No doubt Musk's latest pronouncements make for good advertising copy. What better way to drum up interest in a product than to announce that, well, it has the power to destroy the world?
But is it true? Is AI a greater threat to mankind than the threat posed to us today by an openly hostile, well-armed and manifestly unstable enemy?
AI means, at least, three things.
First, it means machines that are faster, stronger and smarter than us, machines that may one day soon, HAL-like, come to make their own decisions and make up their own values and, so, even to rule over us, just as we rule over the cows. This is a very scary thought, not the least when you consider how we have ruled over the cows.
Second, AI means really good machines for doing stuff. I used to have a coffee machine that I'd set with a timer before going to bed; in the morning I'd wake up to the smell of fresh coffee. My coffee maker was a smart, or at least smart-ish, device. Most of the smart technologies, the AIs, in our phones, and airplanes, and cars, and software programs (including the ones winning tournaments) are pretty much like this. Only more so. They are vastly more complicated and reliable, but they are, finally, only smart-ish. The fact that some of these new systems "learn," and that they come to be able to do things that their makers cannot do (like win at Go or Dota), is really beside the point. A steam hammer can do what John Henry can't but, in the end, the steam hammer doesn't really do anything.
Third, AI is a research program. I don't mean a program in high-tech engineering. I mean, rather, a program investigating the nature of the mind itself. In 1950, the great mathematician Alan Turing published a paper in a philosophy journal in which he argued that by the year 2000 we would find it entirely natural to speak of machines as intelligent. But more significantly, working as a mathematician, he had devised a formal system for investigating the nature of computation that showed, as philosopher Daniel Dennett puts it in his recent book, that you can get competence (the ability to solve problems) without comprehension (by merely following blind rules mechanically). It was not long before philosopher Hilary Putnam would hypothesize the mind is a Turing Machine (and a Turing Machine just is, for all intents and purposes, what we call a computer today). And, thus, the circle closes. To study computational minds is to study our minds, and to build an AI is, finally, to try to reverse engineer ourselves.
Now, Type 3 AI, this research program, is alive and well and a continuing chapter in our intellectual history that is of genuine excitement and importance. This, even though the original hypothesis of Putnam is wildly implausible (and was given up by Putnam decades ago). To give just one example: the problem of the inputs and the outputs. A Turing Machine works by performing operations on inputs. For example, it might erase a 1 on a cell of its tape and replace it with a 0. The whole method depends on being able to give a formal specification of a finite number of inputs and outputs. We can see how that goes for 1s and 0s. But what are the inputs, and what are the outputs, for a living animal, let alone a human being? Can we give a finite list, and specify its items in formal terms, of everything we can perceive, let alone, do?
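Turing's point about blind rule-following is easy to make concrete. Below is a minimal sketch of a Turing machine simulator in Python; the transition table is an invented toy (it flips every bit on the tape and halts at the first blank), not any machine discussed by Turing, Putnam or Dennett, but it shows competence without comprehension: the machine solves its little problem while understanding nothing.

```python
# Minimal Turing machine simulator: a finite transition table acting on a tape.
# The example machine is a toy that flips 1s to 0s (and 0s to 1s) until it reads
# a blank -- blind rules followed mechanically, with no grasp of what a bit means.

BLANK = " "

# (state, symbol_read) -> (symbol_to_write, head_move, next_state)
FLIP_BITS = {
    ("scan", "0"): ("1", +1, "scan"),
    ("scan", "1"): ("0", +1, "scan"),
    ("scan", BLANK): (BLANK, 0, "halt"),
}

def run(transitions, tape, state="scan", max_steps=10_000):
    cells = dict(enumerate(tape))   # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, BLANK)
        write, move, state = transitions[(state, symbol)]
        cells[head] = write
        head += move
    return "".join(cells[i] for i in sorted(cells)).rstrip(BLANK)

print(run(FLIP_BITS, "1011"))  # -> 0100
```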
And there are other problems, too. To mention only one: We don't understand how the brain works. And this means that we don't know that the brain functions, in any sense other than metaphorical, like a computer.
Type 1 AI, the nightmare of machine dominance, is just that, a nightmare, or maybe (for the capitalists making the gizmos) a fantasy. Depending on what we learn pursuing the philosophy of AI, and as luminaries like John Searle and the late Hubert Dreyfus have long argued, it may be an impossible fiction.
Whatever our view on this, there can be no doubt that the advent of smart, rather than smart-ish, machines, the sort of machines that might actually do something intelligent on their own initiative, is a long way off. Centuries off. The threat of nuclear war with North Korea is both more likely and more immediate than this.
Which does not mean, though, that there is not in fact real cause for alarm posed by AI. But if so, we need to turn our attention to Type 2 AI: the smart-ish technologies that are everywhere in our world today. The danger here is not posed by the technologies themselves. They aren't out to get us. They are not going to be out to get us any time soon. The danger, rather, is our increasing dependence on them. We have created a technosphere in which we are beholden to technologies and processes that we do not understand. I don't mean you and me, that we don't understand: No one person can understand. It's all gotten too complicated. It takes a whole team or maybe a university to understand adequately all the mechanisms, for example, that enable air traffic control, or drug manufacture, or the successful production and maintenance of satellites, or the electricity grid, not to mention your car.
Now this is not a bad thing in itself. We are not isolated individuals all alone and we never have been. We are a social animal and it is fine and good that we should depend on each other and on our collective.
But are we rising to the occasion? Are we tending our collective? Are we educating our children and organizing our means of production to keep ourselves safe and self-reliant and moving forward? Are we taking on the challenges that, to some degree, are of our own making? How to feed 7 billion people in a rapidly warming world?
Or have we settled? Too many of us, I fear, have taken up a "user" attitude to the gear of our world. We are passive consumers. Like the child who thinks chickens come from supermarkets, we are hopelessly alienated from how things work.
And if we are, then what are we going to do if some clever young person somewhere, maybe a young lady in North Korea, writes a program to turn things off? This is a serious, immediate and pressing danger.
Alva Noë is a philosopher at the University of California, Berkeley, where he writes and teaches about perception, consciousness and art. He is the author of several books, including his latest, "Strange Tools: Art and Human Nature" (Farrar, Straus and Giroux, 2015). You can keep up with more of what Alva is thinking on Facebook and on Twitter: @alvanoe
AI creates fictional scenes out of real-life photos – Engadget
Posted: August 18, 2017 at 5:15 am
Researcher Qifeng Chen of Stanford and Intel fed his AI system 5,000 photos of German streets. Then, with some human help, it can build slightly blurry, made-up scenes. The image at the top of this article is an example of the network's output.
To create an image, a human needs to tell the AI system what goes where: put a car here, put a building there, place a tree right there. It's paint by numbers, and the system generates a wholly unique scene based on that input.
Chen's AI isn't quite good enough to create photorealistic scenes just yet. It doesn't know enough to fill in all those tiny pixels, and it's not going to replace the high-end special effects houses that spend months building a world. But in the near future it could be used to create video game and VR worlds where not everything needs to look perfect.
Intel plans on showing off the tech at the International Conference on Computer Vision in October.
AI Vs. The Narrative of the Robot Job-Stealers – HuffPost
Posted: at 5:15 am
By Doug Randall, CEO, Protagonist
At a recent meeting with U.S. governors, Elon Musk made some hefty criticisms of artificial intelligence. When an interviewer jokingly asked whether we should be afraid of robots taking our jobs, Musk, not jokingly, replied, "AI is a fundamental risk to the existence of human civilization." Those are serious words, from a very influential thinker.
Narratives about AI are buzzing. Some, like Musk, have vocalized concerns over the regulation of AI and its impact on human jobs. The vast majority of industry leaders have been bombastic about the wonders of the technology, while dismissing the criticisms. Eric Schmidt of Alphabet said:
You'd have to convince yourself that a declining workforce and an ever-increasing idle force, the sum of that won't generate more demand. That's roughly the argument that you have to make. That's never been true ... in order to believe it's different now, you have to believe that humans are not adaptable, that they're not creative.
That confidence might not resonate with those who are warier of the threats of AI -- a significant population that is underrepresented in Silicon Valley leadership. Protagonist recently analyzed hundreds of thousands of conversations around AI using our Narrative Analytics platform. Most of what we found were positive, techno-friendly Narratives, but there are also very real, deeply held beliefs about AI as a threat to humanity, human jobs and human privacy that need to be addressed.
Most companies in the AI space can readily allow that Narratives like "robots will take our jobs" or "AI is a threat to humanity" exist. What they don't know is how much those Narratives rest with their target audience, or whether they are being applied to their own brand. It's often more than they think. Elon Musk is far from alone in his distrust.
As of last year, 10 percent of Americans considered AI a threat to humanity and six percent considered it a threat to jobs, though the former number was declining and the latter was rising. With that in mind, businesses should be aware of the very real risk of being labeled a job-killer. That Narrative might not be as broadly held as some others, but it is one of the most emotionally evocative. Fear and anger over outsourced or inaccessible work opportunities played a major role in the last presidential election. It's clearly a topic that resonates with people on a deep level, and if it dictated their vote, it will dictate their feelings about a company.
Our analysis revealed a cautionary finding: the less tech-friendly Narratives about AI have significantly higher levels of engagement than the more positive tropes. That means they're more likely to spread quickly once they are triggered.
In today's world it doesn't matter whether specific types of AI present a real threat to human jobs or privacy; if they're perceived to be dangerous, the businesses behind them could be in real trouble. Negative affiliations could result in anything from investor slowdown to active boycotts to slowed adoption during critical growth periods.
In Silicon Valley and tech markets, growth rates are particularly important, and AI companies often experience surges of enthusiastic early adopters. When AI companies are strategizing for continued growth and allocating resources, they also need to think about how adoption trends might change when their product reaches the broader market. Negative narratives could create significant, even damning, headwinds if they aren't accounted for and addressed directly.
So what can businesses implementing AI do? First, understand the Narrative landscape, then take action by addressing negative beliefs head-on. Of the seven percent of Americans who fear AI is feeding the surveillance machine, most are in finance, marketing or healthcare. So businesses looking to sell into those fields should emphasize privacy in their marketing. Companies worried about being affiliated with job disenfranchisement should advocate ways they create opportunities.
It's okay to relish the excitement of innovation; 69 percent of the mentality around artificial intelligence is positive: it's rich, it's exciting, it's transforming business as we know it. AI-using companies can and should participate in that shared glow. They just can't ignore or laugh off those other Narratives as they do so. Especially with people like Musk chiming in.
Doug is Founder & CEO of Protagonist, which is a high-growth Narrative Analytics company. Protagonist mines beliefs in order to energize brands, win narrative battles, and understand target audiences.
Protagonist uses natural language processing, machine learning, and deep human expertise to identify, measure, and shape narratives. Doug has lectured on a number of topics at the Wharton School, Stanford University, and National Defense University; his articles on future technology trends have appeared in the Financial Times, Wired, and Business 2.0. He was previously a partner at Monitor, founder of Monitor 360 and co-head of the consulting practice at Global Business Network (GBN). Before that, he was a Vice President at Snapfish, a senior consultant at Decision Strategies, Inc., and a senior research fellow at the Wharton School.
Elon Musk and Mark Zuckerberg Exchange Heated Words Over AI. Whose Side Are You On? – Inc.com
Posted: at 5:15 am
Tech billionaires Elon Musk and Mark Zuckerberg are engaged in a very public disagreement about the nature of artificial intelligence (machines that can think) and whether it's a boon or bane to society. It's almost as interesting to follow as the Hollywood supercouple-of-the-month's divorce proceedings.
Just kidding. Let's agree that the former is relevant, the latter ridiculous.
Musk has been warning for some time now that AI is "our greatest existential threat" and that we should fear perpetuating a world where machines are smarter than humans.
It's not that he's against AI: Musk has invested in several AI companies "to keep an eye on them." He's even launched his own AI start-up, Neuralink--intended to connect the human brain with computers to someday do mind blowing things like repair cancer lesions and brain injuries (for example).
Musk fears the loss of human control if AI is not very carefully monitored.
Zuckerberg sees things very differently and is apparently frustrated by the fear-mongering. The Facebook chief has made AI a strategic priority for his company. He talks about the advances AI could make in healthcare and self-driving cars, for example.
In a recent Facebook Live session where he was answering a question about Musk's continued warning on AI, the Facebook founder responded, "I think that people who are naysayers and kind of trying to drum up these doomsday scenarios--I just, I don't understand it. I think it's really negative and in some ways I actually think it is pretty irresponsible."
Musk quickly fired back with this tweet:
The debate is sure to continue to volley back and forth in a sort of Wimbledon of the Way Out There.
So, at the risk of Mr. Musk calling me out, I thought I'd try to bring it a bit closer to home so you can track better with the debate and form your own opinion. Here are some of the most commonly cited pros and cons of AI:
So are you more of a Muskie or a Zuckerberger?
Better decide which side you lean towards. Before the machines decide for you.
What ON Semiconductor Thinks About the IoT, AI, and the Future of Tech Development – Madison.com
Posted: at 5:15 am
On Aug. 7, semiconductor, sensor, and integrated circuits maker ON Semiconductor (NASDAQ: ON) reported second-quarter 2017 results that exceeded management's guidance. Later in the day, I caught up with ON's David Somo -- the company's vice president of corporate strategy and marketing -- to talk about the industry, where things are headed, and what ON is doing to grow its piece of the pie.
David Somo, ON Semiconductor's senior vice president of corporate strategy and marketing. Image source: ON Semiconductor.
ON Semiconductor is often lumped in with what has been dubbed the Internet of Things (IoT) movement, an oft-used term that has become almost generic. What is it exactly?
At the highest level, Somo defines it as "a way to bridge the physical and digital worlds with intelligent technology." Digging a little deeper, this essentially refers to machines and systems that are aware of their environment and deliver data to help improve our lives.
For ON, the rubber meets the road with the devices themselves, like the company's IoT Development Kit -- a hardware package with supporting software designed to help engineers quickly develop a range of devices, from smart-home to industrial applications. The "out-of-the-box, ready-to-deploy" system recently won the IoT Evolution Product of the Year award from IoT Evolution magazine.
With so many devices coming online, the industry is all about making development and testing of connected systems a quick process. The easier the kit is to use and the more comprehensive its coverage of the connected industry overall, the more likely an engineer is to do business with ON. Speaking to the importance of this, Somo had this to say:
As the market environment and application needs changed, we've evolved our business model to go from components, to modules, to more platforms. We are into systems enablement with the components and modules we offer. As devices become intelligent and need to be connected ... semiconductor companies are stepping up in applications capabilities, and ON Semiconductor is certainly trying to lead in this area to offer an out-of-the-box type of solution for customers to jump-start their development.
ON has been aggressively moving into connected automobiles and other industrial applications in recent years, building out a portfolio of differentiated products to capture a greater share of clients' end systems.
To that end, artificial intelligence (AI) has begun to enter the equation for ON. Another buzzword in the tech industry, Somo defines it simply as a device gathering environmental data, looking for patterns in that data, and continuously learning from those patterns.
Perhaps the most visible part of the AI movement is with digital assistants like the Amazon (NASDAQ: AMZN) Echo, but the technology is getting applied mostly outside of consumer markets. Data center management, industrial robotics, and autonomous vehicles are some notable examples, Somo said.
AI is typically the realm of tech companies like Intel (NASDAQ: INTC) and NVIDIA (NASDAQ: NVDA), which provide the processing horsepower, but ON has found that the massive amounts of data being sent to the brain of a system like an autonomous car or an industrial robot affect efficiency.
For example, in an autonomous car, the processor has to continuously parse through and make decisions on information coming from a dozen different sensors, including radar (object detection using radio waves) and lidar (object detection using light imaging) sensors. All that data transfer bogs down not just the main processor, but also the connections in the car carrying the data.
To help speed things up, Somo said, ON is building a single processing unit into its sensors to do some front-end decision making before sending relevant data to the main brain. That helps free up processing power, but more importantly it reduces the amount of bandwidth being consumed in the internal system of the car itself. That reduces the lag time as information is sent back and forth between the peripheral device and the brain of the car, which is critical for the safety of the vehicle and its passengers.
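ON has not spelled out how its on-sensor processing works, so the toy sketch below only illustrates the bandwidth argument being made here: filter detections at the sensor and forward just the ones the central processor needs, instead of streaming every raw return across the in-vehicle network. The data fields and thresholds are invented for illustration, not ON Semiconductor's design.

```python
# Toy illustration of "front-end decision making" at the sensor: keep only the
# detections the central processor actually needs. Fields and thresholds invented.
from dataclasses import dataclass

@dataclass
class Detection:
    range_m: float        # distance to the object, meters
    confidence: float     # 0..1 detector confidence
    closing_speed: float  # m/s, positive means the object is approaching

def prefilter(detections, max_range=80.0, min_conf=0.6):
    """Runs on the sensor: forward nearby, confident, approaching objects only."""
    return [d for d in detections
            if d.range_m <= max_range
            and d.confidence >= min_conf
            and d.closing_speed > 0.0]

raw = [Detection(150.0, 0.90, 3.0),   # too far away -> dropped at the sensor
       Detection(35.0, 0.40, 5.0),    # low confidence -> dropped at the sensor
       Detection(22.0, 0.95, 7.5)]    # forwarded to the main processor
print(prefilter(raw))                 # only the last detection crosses the network
```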
Estimates for the number of connected devices in operation by the end of this decade are all over the board, with some saying as many as 50 billion devices compared to about 8 billion now. Somo said ON tends to think in the 25 billion to 30 billion connection range by 2020, but it's almost impossible to say for sure.
The one thing that is for certain is that the opportunity is huge and there is plenty of new business to go around. ON provides a lot of detail on what industries are paying the bills, and currently the automotive and industrial sectors make up just shy of 60% of revenues. I asked Somo if that number will be consistent in the years ahead, or if ON will expand its presence into other areas.
I think automotive and industrial both have long legs under them, thinking about the megatrends that are there. Take automotive, there are four key megatrends, two of which we play in strongly and one we have some play in. The first is autonomous vehicles where you're building higher levels of autonomy ... we have a couple decades of runway to get there in capabilities.
Electric vehicles, we're hearing a lot of noise from companies like Volvo or Subaru, or like Audi and BMW all talking about a healthy percentage of their vehicle fleets are going to be electric motors by 2025.
The third megatrend is connected vehicles. We participate inside the vehicles, so connecting everything inside the vehicle, whether it's wired or wireless like Bluetooth. But there's also outside the vehicle as well, vehicle-to-vehicle, or vehicle-to-grid, and that's going to be more 5G-technology enabled.
The final area is in mobility services around autonomous vehicles. We really aren't quite at that level from an ecosystem vantage point. There is significant competition taking place there ... between the traditional automotive manufacturers and the ride-sharing services like Uber and Lyft, that are competing for how that is going to work over time.
The answer, in short, is that the company is always looking for new growth opportunities, but likes its current breakdown. Somo sees much connected-device growth happening in the industrial economy for years to come, providing plenty of opportunity.
Even though connected devices are booming, Somo said that in ON's view the industry is well balanced from a supply and demand perspective. The industry is often fraught with periods of boom and bust as supply and demand change, but the company sees things as being neither hot nor cold, just right. A persisting balance between supply and demand means steady growth without substantial risk of an industry crash. That's good for all IoT companies, and it's a good situation for ON as it continues to expand its role as a device and connected systems supplier.
Nicholas Rossolillo owns shares of ON Semiconductor. The Motley Fool owns shares of and recommends Amazon and Nvidia. The Motley Fool recommends Intel. The Motley Fool has a disclosure policy.
When robots learn to lie, then we worry about AI – The Australian Financial Review
Posted: at 5:15 am
Beware the hyperbole surrounding artificial intelligence and how far it has progressed.
Great claims are being made for artificial intelligence, or AI, these days.
Amazon's Alexa, Google's assistant, Apple's Siri, Microsoft's Cortana: these are all cited as examples of AI. Yet speech recognition is hardly new: we have seen steady improvements in commercial software like Dragon for 20 years.
Recently we have seen a series of claims that AI, with new breakthroughs like "deep learning", could displace 2 million or more Australian workers from their jobs by 2030.
Similar claims have been made before.
I was fortunate to discuss AI with a philosopher, Julius Kovesi, in the 1970s as I led the team that eventually developed sheep-shearing robots. With great insight, he argued that robots, in essence, were built on similar principles to common toilet cisterns and were nothing more than simple automatons.
"Show me a robot that deliberately tells you a lie to manipulate your behaviour, and then I will accept you have artificial intelligence!" he exclaimed.
That's the last thing we wanted in a sheep-shearing robot, of course.
To understand future prospects, it's helpful to see AI as just another way of programming digital computers. That's all it is, for the time being.
We have been learning to live with computers for many decades. Gradually, we are all becoming more dependent on them and they are getting easier to use. Smartphones are a good example.
Our jobs have changed as a result, and will continue to change.
Smartphones can also disrupt sleep and social lives, but so can many other things too. Therefore, claims that we are now at "a convergence" where AI is going to fundamentally change everything are hard to accept.
We have seen several surges in AI hyperbole. In the 1960s, machine translation of natural language was "just two or three years away". And we still have a long way to go with that one. In the late 1970s and early 1980s, many believed forecasts that 95 per cent of factory jobs would be eliminated by the mid-1990s. And we still have a long way to go with that one too. The "dot com, dot gone" boom of 2001 saw another surge. Disappointment followed each time as claims faded in the light of reality. And it will happen again.
Self-driving cars will soon be on our streets, thanks to decades of painstaking advances in sensor technology, computer hardware and software engineering. They will drive rather slowly at first, but will steadily improve with time. You can call this AI if you like, but it does not change anything fundamental.
The real casualty in all this hysteria is our appreciation of human intelligences ... plural. For artificial intelligence has only replicated performances like masterful game playing and mathematical theorem proving, or even legal and medical deduction. These are performances we associate with intelligent people.
Consider performances easily mastered by people we think of as the least intelligent, like figuring out what is and is not safe to sit on, or telling jokes. Cognitive scientists are still struggling to comprehend how we could begin to replicate these performances.
Even animal intelligence defies us, as we realised when MIT scientists perfected an artificial dog's nose sensitive enough to detect TNT vapour from buried landmines. When tested in a real minefield, this device detected TNT everywhere and the readings appeared to be unrelated to the actual locations of the mines. Yet trained mine detection dogs could locate the mines in a matter of minutes.
To appreciate this in a more familiar setting, imagine a party in a crowded room. One person lights up a cigarette and, to avoid being ostracised, keeps it hidden in an ashtray under a chair. Everyone in the room soon smells the cigarette smoke but no one can sense where it's coming from. Yet a trained dog would find it in seconds.
There is speculation that quantum computers might one day provide a real breakthrough in AI. At the moment, however, experiments with quantum computers are at much the same stage as Alan Turing was when he started tinkering with relays in the 1920s. There's still a long way to go before we will know whether these machines will tell deliberate lies.
In the meantime it might be worth asking whether the current surge of interest in AI is being promoted by companies like Google and Facebook in a deliberate attempt to seduce investors. Then again, it might just be another instance of self-deception group-think.
James Trevelyan is emeritus professor in the School of Mechanical and Chemical Engineering at the University of Western Australia.
AI is creating new types of art, and new types of artists – Seattle Times
Posted: August 16, 2017 at 6:18 pm
The ultimate idea is not to replace artists but to give them tools that allow them to create in entirely new ways.
MOUNTAIN VIEW, Calif. -- In the mid-1990s, Douglas Eck worked as a database programmer in Albuquerque, New Mexico, while moonlighting as a musician. After a day spent writing computer code inside a lab run by the Department of Energy, he would take the stage at a local juke joint, playing what he calls "punk-influenced bluegrass": Johnny Rotten crossed with Johnny Cash. But what he really wanted to do was combine his days and nights, and build machines that could make their own songs. "My only goal in life was to mix AI and music," Eck said.
It was a naive ambition. Enrolling as a graduate student at Indiana University, in Bloomington, not far from where he grew up, he pitched the idea to Douglas Hofstadter, the cognitive scientist who wrote the Pulitzer Prize-winning book on minds and machines, "Gödel, Escher, Bach: An Eternal Golden Braid." Hofstadter turned him down, adamant that even the latest artificial intelligence techniques were much too primitive.
But during the next two decades, working on the fringe of academia, Eck kept chasing the idea, and eventually, the AI caught up with his ambition.
Last spring, a few years after taking a research job at Google, Eck pitched the same idea he had pitched Hofstadter all those years ago. The result is Project Magenta, a team of Google researchers who are teaching machines to create not only their own music but also many other forms of art, including sketches, videos and jokes.
With its empire of smartphones, apps and internet services, Google is in the business of communication, and Eck sees Magenta as a natural extension of this work.
"It's about creating new ways for people to communicate," he said during a recent interview inside the small two-story building here that serves as headquarters for Google AI research.
The project is part of a growing effort to generate art through a set of AI techniques that have only recently come of age. Called deep neural networks, these complex mathematical systems allow machines to learn specific behavior by analyzing vast amounts of data.
By looking for common patterns in millions of bicycle photos, for instance, a neural network can learn to recognize a bike. This is how Facebook identifies faces in online photos, how Android phones recognize commands spoken into phones, and how Microsoft Skype translates one language into another. But these complex systems can also create art. By analyzing a set of songs, for instance, they can learn to build similar sounds.
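That "learn from millions of examples" loop has a standard shape in code. The sketch below is a minimal, generic illustration, with random tensors standing in for real labeled photos: a tiny convolutional network, a loss that measures classification error, and repeated gradient steps that nudge the weights toward fewer mistakes.

```python
# Minimal sketch of learning by example: a tiny image classifier trained with
# gradient descent. Random tensors stand in for labeled photos (bike / not-bike).
import torch
from torch import nn

model = nn.Sequential(                      # a small convolutional network
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 2),                       # two classes: bike / not-bike
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(64, 3, 32, 32)         # stand-in for a batch of photos
labels = torch.randint(0, 2, (64,))         # stand-in for human-provided labels

for step in range(100):                     # each step nudges the weights a little
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)   # how wrong is the network right now?
    loss.backward()                         # which way should each weight move?
    optimizer.step()                        # move the weights that way
```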
As Eck says, these systems are at least approaching the point (still many, many years away) when a machine can instantly build a new Beatles song, or perhaps trillions of new Beatles songs, each sounding a lot like the music the Beatles themselves recorded, but also a little different.
But that end game, as much a way of undermining art as creating it, is not what he is after. There are so many other paths to explore beyond mere mimicry. The ultimate idea is not to replace artists but to give them tools that allow them to create in entirely new ways.
For centuries, orchestral conductors have layered sounds from various instruments atop one another. But this is different. Rather than layering sounds, Eck and his team are combining them to form something that did not exist before, creating new ways that artists can work.
"We're making the next film camera," Eck said. "We're making the next electric guitar."
Called NSynth, this particular project is only just getting off the ground. But across the worlds of both art and technology, many are already developing an appetite for building new art through neural networks and other AI techniques.
"This work has exploded over the last few years," said Adam Ferris, a photographer and artist in Los Angeles. "This is a totally new aesthetic."
In 2015, a separate team of researchers inside Google created DeepDream, a tool that uses neural networks to generate haunting, hallucinogenic imagescapes from existing photography, and this has spawned new art inside Google and out. If the tool analyzes a photo of a dog and finds a bit of fur that looks vaguely like an eyeball, it will enhance that bit of fur and then repeat the process. The result is a dog covered in swirling eyeballs.
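That "enhance and repeat" loop is, at its core, gradient ascent on the input image. The snippet below is a stripped-down sketch of the idea rather than DeepDream itself: it picks an intermediate layer of a pretrained network and repeatedly adjusts the pixels to make that layer fire harder, leaving out the octaves, jitter and smoothing the real tool uses.

```python
# Stripped-down DeepDream-style loop: amplify whatever an intermediate layer of a
# pretrained network responds to, by gradient ascent on the pixels themselves.
import torch
from torchvision.models import vgg16

layers = vgg16(pretrained=True).features[:20].eval()   # truncate at a mid-level layer
for p in layers.parameters():
    p.requires_grad_(False)                             # only the image gets updated

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # start from noise or a photo

for _ in range(50):
    loss = layers(image).norm()          # "make this layer fire harder"
    loss.backward()
    with torch.no_grad():
        image += 0.01 * image.grad / (image.grad.abs().mean() + 1e-8)
        image.clamp_(0, 1)               # keep pixel values displayable
    image.grad.zero_()
```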
At the same time, a number of artists like the well-known multimedia performance artist Trevor Paglen or the lesser-known Adam Ferris are exploring neural networks in other ways.
In January, Paglen gave a performance in an old maritime warehouse in San Francisco that explored the ethics of computer vision through neural networks that can track the way we look and move. While members of the avant-garde Kronos Quartet played onstage, for example, neural networks analyzed their expressions in real time, guessing at their emotions.
The tools are new, but the attitude is not. Allison Parrish, a New York University professor who builds software that generates poetry, points out that artists have been using computers to generate art since the 1950s. "Much as Jackson Pollock figured out a new way to paint by just opening the paint can and splashing it on the canvas beneath him," she said, "these new computational techniques create a broader palette for artists."
A year ago, David Ha was a trader with Goldman Sachs in Tokyo. During his lunch breaks he started toying with neural networks and posting the results to a blog under a pseudonym. Among other things, he built a neural network that learned to write its own Kanji, the logographic Chinese characters that are not so much written as drawn.
Soon, Eck and other Googlers spotted the blog, and now Ha is a researcher with Google Magenta. Through a project called SketchRNN, he is building neural networks that can draw.
By analyzing thousands of digital sketches made by ordinary people, these neural networks can learn to make images of things like pigs, trucks, boats or yoga poses. They do not copy what people have drawn. They learn to draw on their own, to mathematically identify what a pig drawing looks like. Then you can ask them to, say, draw a pig with a cat's head, to visually subtract a foot from a horse, to sketch a truck that looks like a dog, or to build a boat from a few random squiggly lines.
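Stunts like "a pig with a cat's head" come from doing arithmetic on the model's latent vectors: encode two sketches, add or subtract the vectors, and decode the result back into strokes. The sketch below shows only that arithmetic step; the encoder and decoder are random stand-ins so the example runs end to end, and none of this is SketchRNN's actual API.

```python
# Latent-vector arithmetic, the idea behind "a pig with a cat's head". The encoder
# and decoder here are random stand-ins, not a trained sketch model.
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 128
PROJECTION = rng.normal(size=(64, LATENT_DIM))   # stand-in for a trained encoder

def encode(sketch_features: np.ndarray) -> np.ndarray:
    """Stand-in encoder: map a (64,) sketch feature vector into latent space."""
    return sketch_features @ PROJECTION

def decode(z: np.ndarray) -> np.ndarray:
    """Stand-in decoder: map a latent vector back to sketch features."""
    return z @ np.linalg.pinv(PROJECTION)

pig, cat_head, horse, foot = (rng.normal(size=64) for _ in range(4))

pig_with_cat_head = decode(0.5 * encode(pig) + 0.5 * encode(cat_head))
horse_minus_foot  = decode(encode(horse) - encode(foot))
```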
Next to NSynth or DeepDream, these may seem less like tools that artists will use to build new works. But if you play with them, you realize that they are themselves art, living works built by Ha. AI is not just creating new kinds of art; it is creating new kinds of artists.
Elon Musk: AI Poses Bigger Threat to Humanity Than North Korea – Live Science
Posted: at 6:18 pm
Elon Musk speaks in front of employees during the delivery of the first Tesla vehicle Model 3 on July 28, 2017.
Simmering tensions between the United States and North Korea have many people concerned about the possibility of nuclear war, but Elon Musk says the North Korean government doesn't pose as much of a threat to humanity as the rise of artificial intelligence (AI).
The SpaceX and Tesla CEO tweeted on Aug. 11: "If you're not concerned about AI safety, you should be. Vastly more risk than North Korea." The tweet was accompanied by a photo that features a pensive woman and a tag line that reads, "In the end the machines will win."
Concerns about the possibility of nuclear missile strikes have escalated in recent weeks, particularly after President Donald Trump and North Korean leader Kim Jong-un threatened each other with shows of force. The North Korean government even issued a statement saying it is "examining" plans for a missile strike near the U.S. territory of Guam.
But, Musk thinks humanity's most pressing concern could be closer to home.
The billionaire entrepreneur has been outspoken about the dangers of AI and the need to take action before it's too late. In July, he spoke at the National Governors Association summer meeting and urged lawmakers to regulate AI now, before it poses a grave threat to humanity. And in 2014, Musk said artificial intelligence is humanity's "biggest existential threat."
If you're not concerned about AI safety, you should be. Vastly more risk than North Korea. pic.twitter.com/2z0tiid0lc