Daily Archives: March 19, 2017

Fly into a missile: Raytheon embraces virtual reality – Arizona Daily Star

Posted: March 19, 2017 at 4:28 pm

In a darkened room in Raytheon Missile Systems' sprawling Tucson airport plant, future weapon systems and the company's next construction project are taking shape in a CAVE.

But this isn't just any cave; it's a room-sized, interactive 3-D theater known by the trade name CAVE Automatic Virtual Environment, or generically a computer-assisted virtual environment.

Using the CAVE, teams of Raytheon engineers and other workers can collaborate on design and development using three-dimensional, stereoscopic immersive visualization.

Much more than a cool video-game technology (though that doesn't hurt either), the CAVE has become a major tool for collaborative, real-time product and facility development, said Kendall Loomis, manager of Raytheon's Immersive Design Center.

With computer-aided design plans loaded into the CAVE, participants can virtually navigate through a product like a missile, or walk through a virtual factory floor, Loomis said.

"We're actually going to fly into a missile; this is the core of what we do here," she said as she demonstrated the system.

Fitted with 3-D glasses, the observer gets a close-up, dynamic look at the innards of a generic missile, with electronics and mechanics laid out to exact scale.

Traditionally, engineers responsible for different missile components have consulted designs, ordered parts and created prototypes, often to find they didn't work in the final product because of things like production and maintenance issues, Loomis said.

The CAVE allows workers from the factory floor to collaborate with engineers to avoid such problems, she said.

"We'll have our assemblers and testers in here and they'll say, 'That looks great, but where is my torque wrench supposed to go?' or 'How does my hand fit in there?' or 'Don't you realize by the time that assembly gets to my workstation, it's sealed and I have nowhere to thread that wire?'" Loomis said. "And the mechanical engineer says, 'Oh my gosh, I never thought of that before.' So it's that collaboration in that environment that is the power of this visual."

The CAVE also can be connected to similar systems for cross-country virtual collaboration.

Raytheon's CAVE systems are made and custom-installed by Iowa-based Mechdyne Corp., which installed the latest system at Raytheon's Tucson operation last October to replace a smaller system set up in 2010.

In 2014, Raytheon installed the industrys first CAVE2 system, featuring a 320-degree view, at another immersive design center in Andover, Massachusetts.

Raytheon also has two smaller, portable CAVE units that can be shipped to customer or supplier sites for long-distance VR conferencing.

Such collaboration can be a huge time- and money-saver.

Loomis recalled one example where the company was asked to redesign a wire harness system for a missile, which initially made up some 500 pages of drawings.

The design was digitized and fed into the CAVE system in Tucson, and a portable CAVE system was shipped to Raytheon's supplier in New Hampshire so engineers could collaborate virtually on issues such as pinch points and heat hazards.

As a result, the design was converted to a flexible cable that worked, beating the 18-month design deadline by eight months and coming in 40 percent under budget, Loomis said.

Raytheon's CAVE has also become a go-to tool for facility design.

When Raytheon was designing its $75 million missile assembly plant in Huntsville, Alabama, the CAVE in Tucson allowed a team of engineers, factory staff, program leaders, safety officials and contractors to collaborate on the layout of the highly automated plant, where shuttles autonomously convey missiles from one work station to the next.

In the virtual world, the team found hundreds of issues, ranging from space and clearance problems to missing exit signs and doors too short for taller workers.

In all, the team fixed 49 issues, saving an estimated $1.5 million in construction costs.

"We iterated it 57 times before we broke ground," Loomis said.

The CAVE system is now being used by Raytheon to plan two new buildings at its Tucson airport site.

The company, already Southern Arizona's biggest private employer with nearly 10,000 local workers, announced in November it will build new facilities and add some 1,900 workers here in the next five years.

Raytheon isn't the only defense contractor embracing virtual reality.

In 2014, Lockheed Martin opened its Collaborative Human Immersive Laboratory in Littleton, Colo., using Mechdynes CAVE system.

Raytheon and other defense firms also use VR setups for training. The Air Force has used flight simulators for decades, and today all the armed services are exploring or using some form of VR training.

"It saves a lot of money; like all simulation-based training does, it's a real cost saver for the military across the board," said John Williams, a spokesman for the National Defense Industrial Association and the affiliated National Training and Simulation Association.

"It's sort of going to be a new frontier; you'll see the interfaces become less clunky with miniaturization, and they'll seem more seamless," Williams said.

Virtual reality has also emerged as a training tool for industry and for first responders.

In 2014, the Pima County Sheriff's Department began using a 300-degree, five-screen VR setup to train officers on how to react in certain situations, with multiple outcomes depending on how an officer responds.

Looking ahead, Loomis said Raytheon is exploring ways to use the CAVE for employee training, for example by immersing quality-control employees or applicants in a virtual factory seeded with numerous visible safety problems and asking them to spot the hazards.

Raytheon also is working to pull more data analytics into the mix, including structural, thermal, mechanical and electrical data, for sophisticated modeling, she said.

"That large-scale data analytics will blow the roof off what we do here; that is very cutting-edge and very unprecedented in this environment, to take multiple, different software apps and interlace them together in one place over a visual."

Contact senior reporter David Wichner at dwichner@tucson.com or 573-4181. On Twitter: @dwichner

Continued here:

Fly into a missile: Raytheon embraces virtual reality - Arizona Daily Star

Posted in Virtual Reality | Comments Off on Fly into a missile: Raytheon embraces virtual reality – Arizona Daily Star

Amazon Applies Its AI Tools to Cyber Security – Newsweek

Posted: at 4:28 pm

This article originally appeared on The Motley Fool.

Amazon.com, Inc. has been making quite a push into the field of artificial intelligence (AI). Its most public example of this effort, Alexa, its voice-activated digital assistant, controls the Echo smart speaker and Echo Dot, which were top sellers on Amazon's website over the holidays.

Those familiar with Amazon Web Services (AWS), an industry leader in cloud computing, may also be aware of the AI-based tools the company has recently made available to AWS customers: Rekognition, for building image-recognition apps; Polly, for translating text to speech; and Lex, for building conversational bots.
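
As a rough illustration of how a developer consumes those three services, a minimal boto3 sketch might look like the following. The AWS region, file names, and the Lex bot name and alias are placeholder assumptions for illustration, not details from the article.

```python
# Minimal sketch of calling the AWS AI services mentioned above via boto3.
# Region, file names, and the Lex bot name/alias are illustrative placeholders.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")
polly = boto3.client("polly", region_name="us-east-1")
lex = boto3.client("lex-runtime", region_name="us-east-1")

# Rekognition: label the contents of a local image.
with open("photo.jpg", "rb") as f:
    labels = rekognition.detect_labels(Image={"Bytes": f.read()}, MaxLabels=5)
print([label["Name"] for label in labels["Labels"]])

# Polly: turn a sentence of text into speech.
speech = polly.synthesize_speech(Text="Hello from Polly.",
                                 OutputFormat="mp3", VoiceId="Joanna")
with open("hello.mp3", "wb") as f:
    f.write(speech["AudioStream"].read())

# Lex: send one utterance to a conversational bot (the bot must already exist).
reply = lex.post_text(botName="OrderFlowers", botAlias="prod",
                      userId="demo-user", inputText="I would like some roses")
print(reply["message"])
```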

Jeff Bezos, founder of Amazon. Joshua Roberts/Reuters

Amazon also is adding cyber-security to its AI resume. TechCrunch is reporting that Amazon has acquired AI-based cyber-security company Harvest.ai. According to its website, Harvest.ai uses AI-based algorithms to identify the most important documents and intellectual property of a business, then combines user behavior analytics with data loss prevention techniques to protect them from cyber attacks. Harvest.ai already had ties to Amazon, as a customer that was featured in an AWS Startup Spotlight article, which focuses on innovative and disruptive young companies. Harvest.ai boasts former members of the National Security Agency (NSA), Federal Bureau of Investigation (FBI), and Department of Defense (DoD), as well as former employees of Websense and FireEye, Inc.

Harvest.ai's flagship product, MACIE, monitors a company's network in near real-time to identify when a suspicious user accesses unauthorized documents. Its target market was "Fortune 1000 organizations that were migrating to cloud-based platforms." Amazon has a Who's Who of big-name companies as customers, so it seems like a natural fit for the company. If it decides to deploy MACIE to its cloud, it adds to the suite of hosting products available for its customers. Amazon already offers its Amazon Inspector, which it defines as an "automated security assessment service to help improve the security and compliance of applications deployed on AWS." Harvest.ai would take that to the next level.

The use of AI in cybersecurity isn't new. MIT has been experimenting with a novel approach to its application: by pairing a system with a human counterpart and applying supervised learning, the system was able to detect 85 percent of threats. Over time, that success rate is sure to improve. Last year, IBM announced an initiative to train its AI-based Watson in security protocols, in what was to be a year-long research project. By the end of the year, the company expanded the beta program with the inclusion of 40 clients across a variety of industries. Earlier this month, IBM announced that Watson for Cyber Security would be available to customers.
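
The MIT result described above rests on ordinary supervised learning over analyst-labeled events. The sketch below shows that analyst-in-the-loop pattern with synthetic data standing in for real network logs, since no dataset or tooling is specified here; it is a toy illustration, not the MIT system.

```python
# Toy sketch of analyst-assisted threat detection: a model scores events, an
# "analyst" labels the most suspicious ones, and the labels are folded back
# into training. All data here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def make_events(n):
    """Fake feature vectors (e.g. bytes sent, login failures, odd hours)."""
    features = rng.normal(size=(n, 3))
    labels = (features.sum(axis=1) > 2).astype(int)  # rare "attacks"
    return features, labels

X_labeled, y_labeled = make_events(500)        # events an analyst has reviewed
X_stream, y_stream_truth = make_events(5000)   # new, unlabeled traffic

model = RandomForestClassifier(n_estimators=100, random_state=0)
for day in range(5):
    model.fit(X_labeled, y_labeled)
    # Score the unlabeled stream and send the most suspicious events to the analyst.
    scores = model.predict_proba(X_stream)[:, 1]
    top = np.argsort(scores)[-20:]
    # The analyst's verdicts (simulated here by ground truth) become new labels.
    X_labeled = np.vstack([X_labeled, X_stream[top]])
    y_labeled = np.concatenate([y_labeled, y_stream_truth[top]])
    caught = y_stream_truth[top].sum()
    print(f"day {day}: analyst reviewed 20 alerts, {caught} were real threats")
```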

The task of cyber security seems ideally suited to AI applications. The ability to digest vast amounts of data in a short time and match real-time situations against a set of specified criteria seems tailor-made for the platform. Add to this AI's ability to learn over time, and it seems inevitable that there would be a merging of these technologies.

These acquisitions, combined with Amazon's own research, make it one of several companies on the cutting edge of AI. Amazon has been applying the knowledge it gains across a wide swath of its business, from consumer-facing products to its business-centric applications.

More:

Amazon Applies Its AI Tools to Cyber Security - Newsweek

Posted in Ai | Comments Off on Amazon Applies Its AI Tools to Cyber Security – Newsweek

This Is What Happens When We Debate Ethics in Front of Superintelligent AI – Singularity Hub

Posted: at 4:28 pm

Is there a uniform set of moral laws, and if so, can we teach artificial intelligence those laws to keep it from harming us? This is the question explored in an original short film recently released by The Guardian.

In the film, the creators of an AI with general intelligence call in a moral philosopher to help them establish a set of moral guidelines for the AI to learn and follow, which proves to be no easy task.

Complex moral dilemmas often don't have a clear-cut answer, and humans haven't yet been able to translate ethics into a set of unambiguous rules. It's questionable whether such a set of rules can even exist, as ethical problems often involve weighing factors against one another and seeing the situation from different angles.

So how are we going to teach the rules of ethics to artificial intelligence, and by doing so, avoid having AI ultimately do us great harm or even destroy us? This may seem like a theme from science fiction, yet it's become a matter of mainstream debate in recent years.

OpenAI, for example, was funded with a billion dollars in late 2015 to learn how to build safe and beneficial AI. And earlier this year, AI experts convened in Asilomar, California to debate best practices for building beneficial AI.

Concerns have been voiced about AI being racist or sexist, reflecting human bias in a way we didn't intend it to, but it can only learn from the data available, which in many cases is very human.

As much as the engineers in the film insist ethics can be solved and there must be a definitive set of moral laws, the philosopher argues that such a set of laws is impossible, because ethics requires interpretation.

There's a sense of urgency to the conversation, and with good reason: all the while, the AI is listening and adjusting its algorithm. One of the most difficult to comprehend, yet most crucial, features of computing and AI is the speed at which it's improving, and the sense that progress will continue to accelerate. As one of the engineers in the film puts it, "The intelligence explosion will be faster than we can imagine."

Futurists like Ray Kurzweil predict this intelligence explosion will lead to the singularity: a moment when computers, advancing their own intelligence in an accelerating cycle of improvements, far surpass all human intelligence. The questions both in the film and among leading AI experts are what that moment will look like for humanity, and what we can do to ensure artificial superintelligence benefits rather than harms us.

The engineers and philosopher in the film are mortified when the AI offers to act just like humans have always acted. The AI's idea to instead learn only from history's religious leaders is met with even more anxiety. If artificial intelligence is going to become smarter than us, we also want it to be morally better than us. Or as the philosopher in the film so concisely puts it: "We can't rely on humanity to provide a model for humanity. That goes without saying."

If we're unable to teach ethics to an AI, it will end up teaching itself, and what will happen then? It just may decide we humans can't handle the awesome power we've bestowed on it, and it will take off, or take over.

Image Credit: The Guardian/YouTube

Read more from the original source:

This Is What Happens When We Debate Ethics in Front of Superintelligent AI - Singularity Hub

Posted in Ai | Comments Off on This Is What Happens When We Debate Ethics in Front of Superintelligent AI – Singularity Hub

AI will be smarter than HUMANS by 2029 before we MERGE with … – Express.co.uk

Posted: at 4:28 pm

Google's director of engineering Ray Kurzweil has said the AI singularity will happen in the year 2029, and just a few years later humans will merge with machines.

The AI singularity is the point where machines match human-level intelligence.

Speaking at the SXSW Conference in Austin, Texas, Mr Kurzweil said: "By 2029, computers will have human-level intelligence."

He added that the process has already begun.

The Google employee said: "That leads to computers having human intelligence, our putting them inside our brains, connecting them to the cloud, expanding who we are.

"Today, that's not just a future scenario.

"It's here, in part, and it's going to accelerate.

Mr Kurzweil continued by stating that predictions that AI will enslave humans are not realistic, adding that it is already ubiquitous.

He said: "We don't have one or two AIs in the world. Today we have billions.

What he envisions is actually a world where AI's purpose is to benefit humanity rather than exceed it; he predicts that we will one day merge with machines, which, he believes, will massively improve us as beings.

The 69-year-old computer scientist said: "What's actually happening is [machines] are powering all of us.

"They're making us smarter.

"They may not yet be inside our bodies, but, by the 2030s, we will connect our neocortex, the part of our brain where we do our thinking, to the cloud.

"We're going to get more neocortex, we're going to be funnier, we're going to be better at music. We're going to be sexier.

"We're really going to exemplify all the things that we value in humans to a greater degree.

"Ultimately, it will affect everything.

"We're going to be able to meet the physical needs of all humans.

"We're going to expand our minds and exemplify these artistic qualities that we value."

Excerpt from:

AI will be smarter than HUMANS by 2029 before we MERGE with ... - Express.co.uk

Posted in Ai | Comments Off on AI will be smarter than HUMANS by 2029 before we MERGE with … – Express.co.uk

AI Is Getting Brainier: Will Machines Leave Us in the Dust? – Top Tech News

Posted: at 4:28 pm

The road to human-level artificial intelligence is long and wildly uncertain. Most AI programs today are one-trick ponies. They can recognize faces, the sound of your voice, translate foreign languages, trade stocks and play chess. They may well have got the trick down pat, but one-trick ponies they remain. Google's DeepMind program, AlphaGo, can beat the best human players at Go, but it hasn't a clue how to play tiddlywinks, shove ha'penny, or tell one end of a horse from the other.

Humans, on the other hand, are not specialists. Our forte is versatility. What other animal comes close as the jack of all trades? Put humans in a situation where a problem must be solved and, if they can leave their smartphones alone for a moment, they will draw on experience to work out a solution.

The skill, already evident in preschool children, is the ultimate goal of artificial intelligence. If it can be distilled and encoded in software, then thinking machines will finally deserve the name.

DeepMind's latest AI has cleared one of the important hurdles on the way to human-level AGI -- artificial general intelligence. Most AIs can perform only one trick because to learn a second, they must forget the first. The problem, known as "catastrophic forgetting," occurs because the neural network at the heart of the AI overwrites old lessons with new ones.

DeepMind solved the problem by mirroring how the human brain works. When we learn to ride a bike, we consolidate the skill. We can go off and learn the violin, the capitals of the world and the finer rules of gaga ball, and still cycle home for tea. This program's AI mimics the process by making the important lessons of the past hard to overwrite in the future. Instead of forgetting old tricks, it draws on them to learn new ones.
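
That "hard to overwrite" mechanism corresponds to what DeepMind published as elastic weight consolidation. The PyTorch sketch below shows the idea under simplifying assumptions (a diagonal Fisher importance estimate and made-up task data); it is a toy illustration, not DeepMind's actual code.

```python
# Simplified sketch of an EWC-style penalty: after learning task A, estimate how
# important each weight was (diagonal Fisher information), then train on task B
# while pulling important weights back toward their task-A values.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.1)

def make_task(shift):
    """Toy task: classify by the sign of a shifted feature sum."""
    x = torch.randn(512, 10) + shift
    y = (x.sum(dim=1) > shift * 10).long()
    return x, y

# --- Task A: plain training ----------------------------------------------
xa, ya = make_task(0.0)
for _ in range(200):
    opt.zero_grad()
    F.cross_entropy(model(xa), ya).backward()
    opt.step()

# Snapshot task-A weights and estimate their importance (diagonal Fisher).
star = {n: p.detach().clone() for n, p in model.named_parameters()}
model.zero_grad()
F.cross_entropy(model(xa), ya).backward()
fisher = {n: p.grad.detach() ** 2 for n, p in model.named_parameters()}

# --- Task B: cross-entropy plus the EWC penalty ---------------------------
xb, yb = make_task(1.0)
lam = 1000.0  # strength of the "don't overwrite important weights" pull
for _ in range(200):
    opt.zero_grad()
    loss = F.cross_entropy(model(xb), yb)
    for n, p in model.named_parameters():
        loss = loss + (lam / 2) * (fisher[n] * (p - star[n]) ** 2).sum()
    loss.backward()
    opt.step()

# The model should still do reasonably well on task A after learning task B.
print("task A accuracy:", (model(xa).argmax(dim=1) == ya).float().mean().item())
print("task B accuracy:", (model(xb).argmax(dim=1) == yb).float().mean().item())
```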

Because it retains past skills, the new AI can learn one task after another. When it was set to work on the Atari classics -- Space Invaders, Breakout, Defender and the rest -- it learned to play seven out of 10 as well as a human can. But it did not score as well as an AI devoted to each game would have done. Like us, the new AI is more the jack of all trades, the master of none.

There is no doubt that thinking machines, if they ever truly emerge, would be powerful and valuable. Researchers talk of pointing them at the world's greatest problems: poverty, inequality, climate change and disease.

They could also be a danger. Serious AI researchers, and plenty of prominent figures who know less of the art, have raised worries about the moment when computers surpass human intelligence. Looming on the horizon is the Singularity, a time when super-AIs improve at exponential speed, causing such technological disruption that poor, unenhanced humans are left in the dust. These superintelligent computers needn't hate us to destroy us. As the Oxford philosopher Nick Bostrom has pointed out, a superintelligence might dispose of us simply because it is too devoted to making paper clips to look out for human welfare.

In January the Future of Life Institute held a conference on Beneficial AI in Asilomar, California. When it came to discussing threats to humanity, researchers pondered what might be the AI equivalents of nuclear control rods, the sort that are plunged into nuclear reactors to rein in runaway reactions. At the end of the meeting, the organizers released a set of guiding principles for the safe development of AI.

While the latest work from DeepMind edges scientists towards AGI, it does not bring it, or the Singularity, meaningfully closer. There is far more to the general intelligence that humans possess than the ability to learn continually. The DeepMind AI can draw on skills it learned on one game to play another. But it cannot generalize from one learned skill to another. It cannot ponder a new task, reflect on its capabilities, and work out how best to apply them.

The futurist Ray Kurzweil sees the Singularity rolling in 30 years from now. But for other scientists, human-level AI is not inevitable. It is still a matter of if, not when. Emulating human intelligence is a mammoth task. What scientists need are good ideas, and no one can predict when inspiration will strike.

2017 Guardian Web syndicated under contract with NewsEdge/Acquire Media. All rights reserved.

Read the original:

AI Is Getting Brainier: Will Machines Leave Us in the Dust? - Top Tech News

Posted in Ai | Comments Off on AI Is Getting Brainier: Will Machines Leave Us in the Dust? – Top Tech News

AI is going to kill seat-based SaaS models – VentureBeat

Posted: at 4:28 pm

I'm going to let you in on a little secret: I've broken the terms of use for SaaS software and shared a license before.

Surprised? My guess would be no, because you've probably done it too.

In general, per-seat licensing has been a great way for SaaS companies to charge a subscription and collect reliable revenue. It's helped companies like Salesforce, Zoom, and Box grow into large, successful organizations. But there's also no question that this success and revenue reliability comes at a cost, where pricing is not tied directly to how much a customer uses a service.

In short, seat-based subscription models have lots of problems but have been good enough for a long time. However, as more SaaS services leverage AI to augment human work, it will make less and less sense to charge per human seat and more sense to charge for what is actually being used to get work done: the compute power needed to run increasingly intelligent and useful AI-enhanced services.

This shift from human to AI-based productivity is going to fundamentally alter how SaaS companies sell their services. If SaaS companies don't start thinking about this inevitability, and pricing it into their models, AI may cannibalize their revenue over time.

For service models in which AI can provide value, such as in customer service or CRM, the AI itself is going to actively reduce human work over time. What does this mean in practice? In the customer service sphere, for example, bots will work alongside humans, so humans will operate with greater productivity. But SaaS companies that integrate AI while continuing to charge on a per-seat basis will actually be disincentivized from making users more efficient. Think about it: companies will lose revenue as they increase AI, because each person (each seat they sell) will be able to do more, and fewer people will be needed to do the same job. So this pushes vendors to drag their heels on innovation.

On top of all of that, it gets pretty darn expensive to do the research for developing good AI and to run the system 24/7. Compute power can easily take a solid chunk of revenue. So, SaaS companies with AI integration will start to sell fewer seats while their system becomes more expensive to develop and run.

Given these trends, the calculus for the vast majority of SaaS companies needs to change both for the customer and for their own long-term viability. Otherwise, in five or 10 years, many of these companies will be in for a rude surprise as AI cannibalizes their revenue.

Expect to see SaaS companies start charging based on usage. That might mean charging for AI work because it costs compute cycles. The more efficiency a customer wants, and the more they rely on the AI, the more they will end up paying for service, but the less they will pay for staff.

Usage-based pricing isn't a novel idea. Amazon has been the obvious pioneer behind pay-as-you-go SaaS pricing. It was no surprise for AWS to introduce a pay-as-you-go model, because the service provided with AWS is not based on human users or account management time. Instead, customers are charged for the type of computing unit being consumed. For example, EC2 charges in cloud compute units. Getting even more granular, Lambda charges by the execution second, while S3 charges by the gigabyte of used disk space.
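
To make the contrast between per-seat and usage-based billing concrete, here is a back-of-the-envelope comparison in Python. Every price and usage figure is invented for illustration and does not come from Amazon's rate card or any vendor mentioned here.

```python
# Back-of-the-envelope comparison of per-seat vs. usage-based SaaS billing.
# Every number below is an invented illustration, not a real rate card.

SEAT_PRICE_PER_MONTH = 75.00   # hypothetical $ per licensed user
PRICE_PER_AI_SECOND = 0.002    # hypothetical $ per second of AI compute

def per_seat_bill(licensed_seats: int) -> float:
    """Classic model: pay for seats whether or not they are fully used."""
    return licensed_seats * SEAT_PRICE_PER_MONTH

def usage_bill(ai_seconds_consumed: float) -> float:
    """Usage model: pay only for the compute the AI-enhanced service burned."""
    return ai_seconds_consumed * PRICE_PER_AI_SECOND

# A support team of 40 agents whose routine tickets are increasingly handled by bots.
seats = 40
tickets_per_month = 30_000
ai_seconds_per_ticket = 12     # assumed average bot handling time

print(f"per-seat bill:    ${per_seat_bill(seats):,.2f}")
print(f"usage-based bill: ${usage_bill(tickets_per_month * ai_seconds_per_ticket):,.2f}")
# As automation improves, the seat count (and per-seat revenue) shrinks while the
# usage meter keeps running, which is exactly the shift the article describes.
```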

Usage-based pricing opens the door to a more granular experience in which the customer only pays for what they use. It's the equivalent of buying a ticket to a single football game, versus being forced to buy a season pass even if you can only make it to that one game. But usage-based models also have other positive byproducts. They take away the ability for customers to cheat by sharing accounts, and they remove the incentive for the SaaS provider to push customers to overbuy licenses in order to plan for growth.

Just like Amazon's services, AI-enhanced SaaS companies that charge based on usage will introduce greater elasticity, better user experience, and more efficiency into their systems, leading to less churn and more long-term revenue stability.

Fred Hsu is CEO of Agent.ai.

Excerpt from:

AI is going to kill seat-based SaaS models - VentureBeat

Posted in Ai | Comments Off on AI is going to kill seat-based SaaS models – VentureBeat

Stephen Hawking calls for creation of world government to meet AI challenges – ExtremeTech

Posted: at 4:28 pm

In a book that's become the darling of many a Silicon Valley billionaire, Sapiens: A Brief History of Humankind, the historian Yuval Harari paints a picture of humanity's inexorable march towards ever greater forms of collectivization. From the tribal clans of pre-history, people gathered to create city-states, then nations, and finally empires. While certain recent political trends, namely Brexit and the nativism of Donald Trump, would seem to belie this trend, now another luminary of academia has added his voice to the chorus calling for stronger forms of world government. Far from citing some ancient historical trends, though, Stephen Hawking points to artificial intelligence as a defining reason for needing stronger forms of globally enforced cooperation.

It's facile to dismiss Stephen Hawking as another scientist poking his nose into problems more germane to politics than physics, or even to suggest he is being alarmist, as many AI experts have already done. It's worth taking his point seriously, though, and weighing the evidence to see if there's any merit to the cautionary note he sounds.

Let's first take the case made by the naysayers who claim we are a long time away from AI posing any real threat to humanity. These are often the same people who suggest Isaac Asimov's three laws of robotics are sufficient to ensure ethical behavior from machines, never mind that the whole thrust of Asimov's stories is to demonstrate how things can go terribly wrong despite the three laws. Leaving that aside, it's exceedingly difficult to keep up with the breakneck pace of research in AI and robotics. One may be an expert in a small domain of AI or robotics, say pneumatic actuators, and have no clue what is going on in reinforcement learning. This tends to be the rule rather than the exception among experts, since their very expertise tends to confine them to a narrow field of endeavor.

As a tech journalist covering AI and robotics on a more or less full-time basis, I can cite many recent developments that justify Mr. Hawking's concern, namely the advent of autonomous weapons, DARPA-sponsored hacking algorithms, and a poker-playing AI that resembles a strategic superpower, to highlight just a few. Adding to this, it's increasingly clear there's already something of an AI arms race underway, with China and the United States pouring increasingly large sums into supercomputers that can support the ever-hungry algorithms underpinning today's cutting-edge AI.

And this is just the tip of the iceberg, thanks to the larger and more nebulous threat posed by superintelligence, that is, an algorithm or collection of them that achieves a singleton in any of the three domains of intelligence outlined by Nick Bostrom in Superintelligence: Paths, Dangers, Strategies, those being speed, quality/strategic planning, and collective intelligence.

The dangers posed to humanity by AI, being somewhat more difficult to conceptualize than atomic weapons since they don't involve dramatic mushroom clouds or panicked basement drills, are all the more pernicious. Even the so-called utopian scenario, in which AI merely replaces large segments of the workforce, would bring with it a concomitant set of challenges that could best be met by stronger and more global government entities. In this light, it seems that, if anything, Dr. Hawking has understated the case for taking action at a global level to ensure the transition into an AI-first world is a smooth rather than apocalyptic one.

Read more:

Stephen Hawking calls for creation of world government to meet AI challenges - ExtremeTech

Posted in Ai | Comments Off on Stephen Hawking calls for creation of world government to meet AI challenges – ExtremeTech

Why You Don’t Need To Worry About AI – Forbes

Posted: at 4:28 pm


Any great sci-fi movie has artificial intelligence (AI), but to be entertaining, a movie needs drama. So in the real world, advances in AI are less about robot overlords and more about "Siri, take me home." Below, a few members of Forbes Technology ...

Visit link:

Why You Don't Need To Worry About AI - Forbes

Posted in Ai | Comments Off on Why You Don’t Need To Worry About AI – Forbes

AI Can Likely Already Do Your Job Better Than You Can – Futurism

Posted: at 4:28 pm

Automation on the Up

Andrew Ng (founding lead of the Google Brain team, former director of the Stanford Artificial Intelligence Laboratory, and now overall lead of Baidu's AI team), in an article at the Harvard Business Review, points out that if executives had a better understanding of what machine learning is already capable of, millions of people would be out of a job today. "Many executives ask me what artificial intelligence can do. They want to know how it will disrupt their industry and how they can use it to reinvent their own companies," he writes, adding that "the biggest harm that AI is likely to do to individuals in the short term is job displacement, as the amount of work we can automate with AI is vastly bigger than before."

How much bigger? A report from McKinsey says that 51% of economic activity could be automated by existing technology.

This is not a new phenomenon; new technologies have always displaced the need for human labor. In the past, the change was gradual, and people had time to learn the new skills that the economy needed. But the pace at which it is happening this time may be too rapid for people to adapt.

This is not some issue that you will have to figure out how to deal with in the distant future; it is already happening, relatively quietly, all around us. Somewhere, in a giant tech company or a tiny startup, there is someone trying to figure out how to get a computer to do your job better than you ever could.

Below is a handy graph from McKinsey of a variety of skills that can be replaced by AI and the industries that will be most affected. For a more detailed visualization, click here and here.

Its impact is already being felt, from manufacturing jobs in China to insurance claim workers in Japan to top hedge fund managers in America. And this is just the beginning: as AI develops and its array of skills grows, more and more people whose jobs revolve around those skills will be replaced.

Of course, just because the technical ability is there doesn't mean it can be implemented right away. Still, this is an issue that should be getting a lot more attention than it does, because it will impact you.

This excellent talk was delivered by Robert Reich, Secretary of Labor under Bill Clinton, at Google back in February. He highlights what will be the pressing need of our times: for people to be able to find fulfillment outside of their job.

Link:

AI Can Likely Already Do Your Job Better Than You Can - Futurism

Posted in Ai | Comments Off on AI Can Likely Already Do Your Job Better Than You Can – Futurism

How Sensors, Robotics And Artificial Intelligence Will Transform Agriculture – Forbes

Posted: at 4:27 pm

The world population is expected to reach 9.7 billion by 2050. China and India, the two largest countries in the world, have populations totalling around one billion. In four years, by 2022, India is predicted to have the largest population in the ...

The rest is here:

How Sensors, Robotics And Artificial Intelligence Will Transform Agriculture - Forbes

Posted in Artificial Intelligence | Comments Off on How Sensors, Robotics And Artificial Intelligence Will Transform Agriculture – Forbes