The Prometheus League
Breaking News and Updates
Category Archives: Artificial Intelligence
Artificial intelligence holds great potential for both students and teachers but only if used wisely – The Conversation AU
Posted: July 24, 2017 at 8:13 am
Artificial intelligence (AI) enables Siri to recognise your question, Google to correct your spelling, and tools such as Kinect to track you as you move around the room.
Data big and small have come to education, from creating online platforms to increasing standardised assessments. But how can AI help us use and improve it?
Researchers in AI in education have been investigating how the two intersect for several decades. While it's tempting to think that the primary dream for AI in education is to reduce marking load (a prospect made real through automated essay scoring), the breadth of applications goes beyond this.
For example, researchers in AI in education have:
These are new approaches to learning that rely heavily on students engaging with new kinds of technology. But researchers in AI, and related fields such as learning analytics, are also thinking about how AI can provide more effective feedback to students and teachers.
One perspective is that researchers should worry less about making AI ever more intelligent, instead exploring the potential that relatively stupid (automated) tutors might have to amplify human intelligence.
So, rather than focusing solely on building more intelligent AI to take humans out of the loop, we should focus just as much on intelligence amplification or, going back to its intellectual roots, intelligence augmentation. This is the use of technology including AI to provide people with information that helps them make better decisions and learn more effectively.
This approach combines computing sciences with human sciences. It takes seriously the need for technology to be integrated into everyday life.
Keeping people in the loop is particularly important when the stakes are high, and AI is far from perfect. So, for instance, rather than focusing on automating the grading of student essays, some researchers are focusing on how they can provide intelligent feedback to students that helps them better assess their own writing.
And while some are considering if they can replace nurses with robots, we are seeking to design better feedback to help them become high-performance nursing teams.
But for the use of AI to be sustainable, education also needs a second kind of change: what we teach.
To be active citizens, students need a sound understanding of AI, and a critical approach to assessing the implications of the datafication of our lives, from the use of Facebook data to influence voting to Google DeepMind's access to medical data.
Students also need the skills to manage this complexity, to work collaboratively and to innovate in a changing environment. These are qualities that could perhaps be amplified through effective use of AI.
The potential is not only for education to be more efficient, but to think about how we teach: to keep revolution in sight, alongside evolution.
Another response to AI's perceived threat is to harness the technologies that will automate some forms of work, to cultivate those higher-order qualities that make humans distinctive from machines.
Amid growing concerns about the pervasive role of algorithms in society, we must understand what algorithmic accountability means in education.
Consider, for example, the potential for predictive analytics in flexi-pricing degrees based on a course-completion risk-rating built on online study habit data. Or the possibility of embedding existing human biases into university offers, or educational chatbots that seek to discern your needs.
If AI delivers benefits only to students who have access to specific technologies, then inevitably this has the potential to marginalise some groups.
Significant work is under way to clarify how ethics and privacy principles can underpin the use of AI and data analytics in education. Intelligence amplification helps counteract these concerns by keeping people in the loop.
A further concern is AI's potential to result in a de-skilling or redundancy of teachers. This could possibly fuel a two-tier system where differing levels of educational support are provided.
The future of learning with AI, and other technologies, should be targeted not only at learning subject content, but also at cultivating curiosity, creativity and resilience.
The ethical development of such innovations will require both teachers and students to have a robust understanding of how to work with data and AI to support their participation in society and across the professions.
What sort of silicon brain do you need for artificial intelligence? – The Register
Posted: at 8:13 am
The Raspberry Pi is one of the most exciting developments in hobbyist computing today. Across the world, people are using it to automate beer making, open up the world of robotics and revolutionise STEM education in a world overrun by film students. These are all laudable pursuits. Meanwhile, what is Microsoft doing with it? Creating squirrel-hunting water robots.
Over at the firm's Machine Learning and Optimization group, a researcher saw squirrels stealing flower bulbs and seeds from his bird feeder. The research team trained a computer vision model to detect squirrels, and then put it onto a Raspberry Pi 3 board. Whenever an adventurous rodent happened by, it would turn on the sprinkler system.
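The loop described above can be pictured with a short sketch. Everything here is an assumption for illustration: the function names, the 0.8 threshold and the five-second sprinkler pulse are invented placeholders, not Microsoft's actual code, and the camera and classifier calls are stubbed out with random values so the example runs on its own.

```python
# Hypothetical detect-and-trigger loop in the spirit of the squirrel sprinkler
# described above. The camera grab and the classifier are stand-ins; swap in a
# real capture call and the trained vision model on the Pi.
import random
import time

SPRINKLER_ON_SECONDS = 5          # assumed pulse length, illustration only

def capture_frame():
    """Stand-in for a camera read (e.g. picamera or OpenCV on the Pi)."""
    return None

def detect_squirrel(frame) -> float:
    """Stand-in for the vision model: return a squirrel probability."""
    return random.random()

def trigger_sprinkler(seconds: float):
    """Stand-in for toggling a GPIO pin wired to the sprinkler relay."""
    print(f"sprinkler on for {seconds}s")

def run(max_frames: int = 10, threshold: float = 0.8, poll_interval: float = 0.1):
    for _ in range(max_frames):
        frame = capture_frame()
        if detect_squirrel(frame) >= threshold:
            trigger_sprinkler(SPRINKLER_ON_SECONDS)
        time.sleep(poll_interval)

if __name__ == "__main__":
    run()
```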
Microsoft's sciurine aversions aren't the point of that story; its shoehorning of a convolutional neural network onto an ARM CPU is. It shows how organizations are pushing hardware further to support AI algorithms. As AI continues to make the headlines, researchers are pushing its capabilities to make it increasingly competent at basic tasks such as recognizing vision and speech.
As people expect more of the technology, cramming it into self-flying drones and self-driving cars, the hardware challenges are increasing. Companies are producing custom silicon and computing nodes capable of handling them.
Jeff Orr, research director at analyst firm ABI Research, divides advances in AI hardware into three broad areas: cloud services, on-device, and hybrid. The first focuses on AI processing done online in hyperscale data centre environments like Microsoft's, Amazon's and Google's.
At the other end of the spectrum, he sees more processing happening on devices in the field, where connectivity or latency prohibit sending data back to the cloud.
"It's using maybe a voice input to allow for hands-free operation of a smartphone or a wearable product like smart glasses," he says. "That will continue to grow. There's just not a large number of real-world examples on-device today." He views augmented reality as a key driver here. Or there's always this app, we suppose.
Finally, hybrid efforts marry both platforms to complete AI computations. This is where your phone recognizes what you're asking it but asks cloud-based AI to answer it, for example.
The cloud's importance stems from the way that AI learns. AI models are increasingly moving to deep learning, which uses complex neural networks with many layers to create more accurate AI routines.
There are two aspects to using neural networks. The first is training, where the network analyses lots of data to produce a statistical model. This is effectively the learning phase. The second is inference, where the neural network then interprets new data to generate accurate results. Training these networks chews up vast amounts of computing power, but the training load can be split into many tasks that run concurrently. This is why GPUs, with their double floating point precision and huge core counts, are so good at it.
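As a concrete toy of the two phases described above, the sketch below fits a tiny logistic-regression model with gradient descent (the training phase) and then applies the fitted weights to unseen inputs (the inference phase). It is only a stand-in for the large-scale, GPU-parallel deep learning the article is describing; the data and hyperparameters are invented.

```python
# Toy illustration of training vs. inference: fit a model to labelled data,
# then apply the fitted model to new data. A tiny logistic regression stands
# in for a deep neural network.
import numpy as np

rng = np.random.default_rng(0)

# --- training: learn weights from labelled data ---
X = rng.normal(size=(1000, 5))                 # 1000 examples, 5 features
true_w = np.array([1.5, -2.0, 0.5, 0.0, 1.0])
y = (X @ true_w + rng.normal(scale=0.5, size=1000) > 0).astype(float)

w = np.zeros(5)
learning_rate = 0.1
for _ in range(500):                           # gradient-descent loop
    p = 1.0 / (1.0 + np.exp(-(X @ w)))         # forward pass
    grad = X.T @ (p - y) / len(y)              # gradient of the log loss
    w -= learning_rate * grad                  # update the model

# --- inference: apply the fitted model to unseen inputs ---
X_new = rng.normal(size=(3, 5))
p_new = 1.0 / (1.0 + np.exp(-(X_new @ w)))
print("learned weights:", np.round(w, 2))
print("predicted probabilities for new data:", np.round(p_new, 3))
```

Training is the expensive, highly parallel part (here, repeated passes over the whole dataset); inference is a single cheap forward pass, which is why the two phases often end up on different hardware.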
Nevertheless, neural networks are getting bigger and the challenges are getting greater. Ian Buck, vice president of the Accelerate Computing Group at dominant GPU vendor Nvidia, says that they're doubling in size each year. The company is creating more computationally intense GPU architectures to cope, but it is also changing the way it handles its maths.
"It can be done with some reduced precision," he says. Originally, neural network training all happened in 32-bit floating point, but it has optimized its newer Volta architecture, announced in May, for 16-bit inputs with 32-bit internal mathematics.
Reducing the precision of the calculation to 16 bits has two benefits, according to Buck.
"One is that you can take advantage of faster compute, because processors tend to have more throughput at lower resolution," he says. Cutting the precision also increases the amount of available bandwidth, because you're fetching smaller amounts of data for each computation.
"The question is, how low can you go?" asks Buck. "If you go too low, it won't train. You'll never achieve the accuracy you need for production, or it will become unstable."
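A rough way to see the trade-off is below: inputs are stored in 16-bit floats (half the memory traffic) while the arithmetic accumulates in 32-bit, and the result is compared against a full-precision reference. This is only an illustration of the idea on a CPU with NumPy, not a Volta benchmark; the matrix size is arbitrary.

```python
# Store operands in fp16 (half the bytes to move), do the maths in fp32, and
# measure how far the result drifts from a full fp32 reference.
import numpy as np

rng = np.random.default_rng(1)
a32 = rng.normal(size=(4096, 4096)).astype(np.float32)
b32 = rng.normal(size=(4096,)).astype(np.float32)

a16, b16 = a32.astype(np.float16), b32.astype(np.float16)
print("memory for a:", a32.nbytes // 2**20, "MiB fp32 vs", a16.nbytes // 2**20, "MiB fp16")

ref = a32 @ b32                                            # full-precision reference
mixed = a16.astype(np.float32) @ b16.astype(np.float32)    # fp16 inputs, fp32 internal maths
rel_err = np.linalg.norm(mixed - ref) / np.linalg.norm(ref)
print("relative error with fp16 inputs:", rel_err)
```

The storage halves while the error stays small for well-scaled data; push the precision lower, or let values fall outside fp16's range, and, as Buck says, training can fall apart.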
While Nvidia refines its architecture, some cloud vendors have been creating their own chips using alternative architectures to GPUs. The first generation of Google's Tensor Processing Unit (TPU) originally focused on 8-bit integers for inference workloads. The newer generation, announced in May, offers floating point precision and can be used for training, too. These chips are application-specific integrated circuits (ASICs). Unlike CPUs and GPUs, they are designed for a specific purpose (you'll often see them used for mining bitcoins these days) and cannot be reprogrammed. Their lack of extraneous logic makes them extremely high in performance and economic in their power usage but very expensive.
Google's scale is large enough that it can swallow the high non-recurring expenditures (NREs) associated with designing the ASIC in the first place because of the cost savings it achieves in AI-based data centre operations. It uses them across many operations, ranging from recognizing Street View text to performing RankBrain search queries, and every time a TPU does something instead of a GPU, Google saves power.
"It's going to save them a lot of money," said Karl Freund, senior analyst for high performance computing and deep learning at Moor Insights and Strategy.
He doesn't think that's entirely why Google did it, though. "I think they did it so they would have complete control of the hardware and software stack." If Google is betting the farm on AI, then it makes sense to control it from endpoint applications such as self-driving cars through to software frameworks and the cloud.
When it isn't drowning squirrels, Microsoft is rolling out field programmable gate arrays (FPGAs) in its own data centre revamp. These are similar to ASICs but reprogrammable so that their algorithms can be updated. They handle networking tasks within Azure, but Microsoft has also unleashed them on AI workloads such as machine translation. Intel wants a part of the AI industry, wherever it happens to be running, and that includes the cloud. To date, its Xeon Phi high-performance CPUs have tackled general purpose machine learning, and the latest version, codenamed Knights Mill, ships this year.
The company also has a trio of accelerators for more specific AI tasks, though. For training deep learning neural networks, Intel is pinning its hopes on Lake Crest, which comes from its Nervana acquisition. This is a coprocessor that the firm says overcomes data transfer performance ceilings using a type of memory called HBM2, which is around 12 times faster than DDR4.
While these big players jockey for position with systems built around GPUs, FPGAs and ASICs, others are attempting to rewrite AI architectures from the ground up.
Knuedge is reportedly prepping 256-core chips designed for cloud-based operations but isn't saying much.
UK-based Graphcore, due to release its technology in 2017, has said a little more. It wants its Intelligence Processing Unit (IPU) to use graph-based processing rather than the vectors used by GPUs or the scalar processing in CPUs. The company hopes that this will enable it to fit the training and inference workloads onto a single processor. One interesting thing about its technology is that its graph-based processing is supposed to mitigate one of the biggest problems in AI processing: getting data from memory to the processing unit. Dell has been the firm's perennial backer.
Wave Computing is also focusing on a different kind of processing, using what it calls its data flow architecture. It has a training appliance designed for operation in the data centre that it says can hit 2.9 PetaOPs/sec.
Whereas cloud-based systems can handle neural network training and inference, client-side devices, from phones to drones, focus mainly on the latter. Their considerations are energy efficiency and low-latency computation.
"You can't rely on the cloud for your car to drive itself," says Nvidia's Buck. A vehicle can't wait for a crummy connection when making a split-second decision on who to avoid, and long tunnels might also be a problem. So all of the computing has to happen in the vehicle. He touts the Nvidia P4 self-driving car platform for autonomous in-car smarts.
FPGAs are also making great strides on the device side. Intel has Arria, an FPGA coprocessor designed for low-energy inference tasks, while over at startup KRTKL, CEO Ryan Cousens and his team have bolted a low-energy dual-core ARM CPU to an FPGA that handles neural networking tasks. It is crowdsourcing its platform, called Snickerdoodle, for makers and researchers that want wireless I/O and computer vision capabilities. "You could run that on the ARM core and only send to the FPGA high-intensity mathematical operations," he says.
AI is squeezing into even smaller devices like the phone in your pocket. Some processor vendors are making general purpose improvements to their architectures that also serve AI well. For example, ARM is shipping CPUs with increasingly capable GPU areas on the die that should be able to better handle machine learning tasks.
Qualcomm's Snapdragon processors now feature a neural processing engine that decides which bits of tailored logic machine learning and neural inference tasks should run in (voice detection in a digital signal processor and image detection on a built-in GPU, say). It supports the convolutional neural networks used in image recognition, too. Apple is reportedly planning its own neural processor, continuing its tradition of offloading phone processes onto dedicated silicon.
This all makes sense to ABI's Orr, who says that while most of the activity has been in cloud-based AI processors of late, this will shift over the next few years as device capabilities balance them out. In addition to areas like AR, this may show up in more intelligent-seeming artificial assistants. Orr believes that they could do better at understanding what we mean.
"They can't take action based on a really large dictionary of what possibly can be said," he says. Natural language processing can become more personalised and train the system rather than training the user.
This can only happen using silicon that allows more processing at given times to infer context and intent, by being able to unload and switch through these different dictionaries that allow for tuning and personalization for all the things that a specific individual might say.
Research will continue in this space as teams focus on driving new efficiencies into inference architectures. Vivienne Sze, professor at MIT's Energy-Efficient Multimedia Systems Group, says that in deep neural network inferencing, it isn't the computing that slurps most of the power. "The dominant source of energy consumption is the act of moving the input data from the memory to the MAC [multiply and accumulate] hardware and then moving the data from the MAC hardware back to memory," she says.
Prof Sze works on a project called Eyeriss that hopes to solve that problem. "In Eyeriss, we developed an optimized data flow (called row stationary), which reduces the amount of data movement, particularly from large memories," she continues.
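A back-of-envelope sketch makes Sze's point concrete. The per-operation energy figures below are assumed, order-of-magnitude placeholders (off-chip memory accesses are commonly cited as costing one to two orders of magnitude more energy than an arithmetic operation), not Eyeriss measurements; the layer size and reuse factor are likewise invented.

```python
# Why data movement dominates: compare a layer that fetches every operand from
# off-chip DRAM against one that reuses data from a small on-chip buffer.
# All energy numbers are assumed, for illustration only (picojoules per op).
MAC_PJ = 1.0       # assumed energy per multiply-accumulate
SRAM_PJ = 5.0      # assumed energy per on-chip buffer access
DRAM_PJ = 200.0    # assumed energy per off-chip DRAM access

macs = 1e9                          # MACs in one hypothetical network layer
naive_dram = 3 * macs               # two operand reads + one result write, all off-chip
reused_dram = 0.05 * macs           # assumed 20x on-chip reuse of each value fetched

naive_mj = (macs * MAC_PJ + naive_dram * DRAM_PJ) / 1e9
reused_mj = (macs * MAC_PJ + reused_dram * DRAM_PJ + macs * SRAM_PJ) / 1e9
print(f"all-DRAM dataflow: {naive_mj:.0f} mJ, with on-chip reuse: {reused_mj:.0f} mJ")
```

Even in this crude model the arithmetic itself is a small slice of the total, which is why dataflows such as row stationary attack the memory traffic rather than the multipliers.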
There are many more research projects and startups developing processor architectures for AI. While we don't deny that marketing types like to sprinkle a little AI dust where it isn't always warranted, there's clearly enough of a belief in the technology that people are piling dollars into silicon.
As cloud-based hardware continues to evolve, expect hardware to support AI locally in drones, phones, and automobiles, as the industry develops.
In the meantime, Microsoft's researchers are apparently hoping to squeeze their squirrel-hunting code still further, this time onto the 0.007mm squared Cortex M0 chip. That will call for a machine learning model 1/10,000th the size of the one it put on the Pi. They must be nuts.
AI is impacting you more than you realize – VentureBeat
Posted: at 8:13 am
In today's age of flying cars, robots, and Elon Musk, if you haven't heard of artificial intelligence (AI) or machine learning (ML) then you must be avoiding all types of media. To most, these concepts seem futuristic and not applicable to everyday life, but when it comes to marketing technology, AI and ML actually touch everyone that consumes digital content.
But how exactly are these being deployed for marketing technology and digital media? We hear about AI being applied in medical and military fields, but usually not in something as commonplace as media. Utilizing these advanced technologies actually enables martech and adtech companies to create highly personalized and custom digital content experiences across the web.
The ultimate goal of all marketers is to drive sales through positive brand-consumer engagements. But a major problem is that marketers have so much content (oftentimes more than they even realize) and millions of potential places to show it, but don't know how to determine the optimal place for each piece of content to reach specific audiences.
With all of these possible placements, it would be incredibly inefficient, if not impossible, for a human being to amass, organize, and analyze this data comprehensively and then make the smartest buying decision in real time based on the facts. Trying to test an infinite number of combinations of creative ideas and placements is like solving a puzzle that keeps adding more and more pieces while you are trying to assemble them.
So how can marketers put this data to work to efficiently distribute their content across the digital universe, using the right messaging to drive the best results?
Human beings can make bad decisions based on incomplete data analysis. For example, someone might block a placement from a campaign based on one or two prior experiences with incomplete or statistically insignificant data, but it actually may perform very well. An optimization engine can leverage machine learning to understand the variance in placement performance by campaign and advertiser vertical holistically. This is why computers are simply better than humans at certain tasks.
This does not discount the value of humans, for superior customer service and relationships will always be critical. But the combination of human power plus machine learning will yield a much better result, not only in marketing technology but across all industries that are leveraging this advanced technology.
Machine learning and AI address the real inefficiencies present in digital media and have made tremendous progress pushing the industry toward personalization. Delivering personalized content experiences to today's consumer is incredibly important, especially given the always-on, constantly connected, multi-device life that we all lead.
The power of machine learning and artificial intelligence lies in their ability to achieve massive scale that is not otherwise possible, while also maintaining relevancy. This demand for personalization escalates the number of combinations that would need to be tested to an unimaginable degree. For example, if a marketer wants to build a campaign with a personalized experience based on past browsing behavior, it becomes difficult to glean insight from the millions of combinations of the context in which their advertisement will appear and the variety of different browsing behaviors people exhibit. Even with fast, granular reporting, it is impossible to make all the necessary adjustments in a timely manner due to the sheer volume of the dataset.
Furthermore, it is often impossible to draw a conclusion from the data that can be gathered by running a single campaign. A holistic approach that models the interaction between users and a variety of different advertising verticals is necessary to have a meaningful predictor of campaign performance. This is where the real impact of a bidder powered by machine learning lies, because individual marketers are not able to observe these trends due to the fact that they may only have experience running campaigns in a specific vertical.
An intelligent bidder determines how each placement has performed in previous campaigns. If one specific placement performed poorly for multiple advertisers with similar KPIs, similar advertisers in the future will not waste money testing that placement. The learning happens very quickly and precisely. Instead of humans taking these learnings and adjusting the algorithms, the technology is making the changes as they are detected.
By leveraging the billions of historical data points from digital campaigns, predictions are made for future campaigns and then real-time performance data is applied to revisions. This is not a one-off process. The technology is constantly taking insights from user behavior and feeding them back into the algorithms, enabling personalized content experiences at scale.
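A heavily simplified sketch of that feedback loop appears below. It assumes each placement's performance is summarized as a smoothed conversion rate that is updated as campaign results stream in and consulted before bidding again; the class name, the prior and the 0.005 threshold are invented for illustration and are not Bidtellect's algorithm.

```python
# Minimal placement-performance model: start from a smoothing prior so sparse
# placements aren't judged on a handful of impressions, update with each
# campaign's results, and skip placements whose predicted rate is too low.
from collections import defaultdict

class PlacementModel:
    def __init__(self, prior_conversions=1.0, prior_impressions=100.0):
        self.stats = defaultdict(lambda: [prior_conversions, prior_impressions])

    def update(self, placement, conversions, impressions):
        c, n = self.stats[placement]
        self.stats[placement] = [c + conversions, n + impressions]

    def predicted_rate(self, placement):
        c, n = self.stats[placement]
        return c / n

    def should_bid(self, placement, min_rate=0.005):
        return self.predicted_rate(placement) >= min_rate

model = PlacementModel()
model.update("news_site_sidebar", conversions=2, impressions=5000)
model.update("recipe_blog_infeed", conversions=40, impressions=4000)
for p in ("news_site_sidebar", "recipe_blog_infeed", "unseen_placement"):
    print(p, round(model.predicted_rate(p), 4), model.should_bid(p))
```

The smoothing prior is what keeps one bad impression from blacklisting a placement, which is the over-reaction the article attributes to human buyers.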
The advertising industry has faced major challenges in relevancy for consumers and brand safety for marketers. Lack of relevancy in advertising has led to the advent of ad blockers and poor engagement, causing brands to become even more unsure of where their budgets are going and how users are responding to content. The controversy around brand safety further calls into question not only how budgets are being spent, but potential negative consequences for a brand's image.
Machine learning holds the promise of overcoming these challenges by delivering better, smarter ads to engaged consumers and restoring trust for brands in advertising spend and the technology that executes content and media.
Kris Kalish is the Director of Optimization at Bidtellect, a native advertising platform.
Time to get smart on artificial intelligence – The Hill (blog)
Posted: July 22, 2017 at 8:12 am
One of the biggest problems with Washington is that more often than not the policy conversation isn't grounded in the facts. We see this dysfunction clearly on technology policy, where Congress is largely uninformed on what the future of artificial intelligence (AI) technology will look like and what the actual consequences are likely to be. In this factual vacuum, we run the risk of ultimately adopting at best irrelevant or at worst extreme legislative responses.
That's why I was particularly interested to see the comments by Tesla CEO Elon Musk to the National Governors Association that AI is "a fundamental existential risk for human civilization." Musk is a tremendous innovator and someone who understands technology deeply, and while I don't agree with his assessment, his dramatic statement is a challenge to lawmakers to start seriously examining this topic.
The AI Caucus is working to bring together experts from academia, government and the private sector to discuss the latest technologies and the implications and opportunities created by these new changes. Already this year, we've been briefed by a variety of specialists and fellow policymakers from both Europe and the United States and the caucus participated in events this month organized by IBM.
Congress needs to have a better grasp of what AI actually looks like in practice, how it is being deployed and what future developments likely will be, and thats where the AI Caucus comes in. AI wont just impact one specific field or region and the issues it will raise will not fall under the jurisdiction of a single committee; ironically, AI is potentially such a big change that we might not see the forest for the trees.
It is clear that we are on the verge of a technological revolution. Artificial intelligence promises to be one of the paradigm-shifting developments of the next century, with the potential to reshape our economy just as fully as the internal combustion engine or the semiconductor. Contrary to some portrayals, AI is less about the Terminator and more about using powerful cognitive computing to find new treatments for cancer, improve crop yields and make structures like oil rigs safer. AI programming is a key component of emerging driverless car technology, new advances in designing robots to perform tasks that are too dangerous for humans to do and boosting fraud protection programs to combat identity theft.
As a former entrepreneur, I believe that innovation should always be encouraged, because it's fundamental to economic growth. Imagine if we'd tried to put the brakes on the development of telephone or radio technology a century ago, personal computer technology a generation ago or cell phone technology a decade ago. Innovation creates new opportunities that are hard to predict, new jobs, even entirely new industries. Innovation can also boost productivity and wages and reduce costs to consumers.
But that doesn't mean that there aren't relevant concerns about the disruption that AI could bring. Again, it's all about the facts, and in the past, new technologies have hurt certain jobs. While the overall impact might have been positive, there have still been industries and regions that have been hurt by automation. In manufacturing especially, we've seen automation reduce the number of jobs in recent years, in some cases to devastating effect.
We need to be honest about the fact that AI technology will replace some jobs, just as has happened with previous technological advances. In my view, we need to start the conversation now and take a hard look at how we can help those individuals who will be hurt. As policymakers, we should be thinking about those people who are working in jobs that are at risk and seeing what we can do to get them through this eventual change. We should focus on preparing our country for this next wave of innovation.
As I think about policies that help anticipate AI and the changes it will bring, it is my view that the country needs to become more entrepreneurial and more innovative. That means we should make it easier to start a business and encourage more startups, invest more in things like research and infrastructure, all to become a more dynamic economy. We have to think through how we can make benefits more portable and how we can create a more flexible high-skill workforce. Combined with long-term trends that will create an older society, we must anticipate that the shape of the economy and the job market will look very different in the decades to come. The emergence of AI is also another reminder of making sure that our social safety net programs will be able to meet the needs of the future. AI will also create new ethical and privacy concerns and these are issues that need to be worked out. I believe that it is imperative that we tackle these emerging issues thoughtfully and not rush into new programs or regulations prematurely.
My colleagues on the AI Caucus each have their own ideas and concerns and part of the caucus's function is to also facilitate a dialogue between lawmakers. Our choice is to either get caught flatfooted or to proactively anticipate how things will change and work on smart policies to make sure that the country benefits as much as possible overall. The only way to do that is to become focused on the facts and focused on the future and the AI Caucus is a bipartisan effort to make that happen.
Congressman John K. Delaney represents Maryland's Sixth District in the House of Representatives and is the founder of the AI Caucus. Delaney is the only former CEO of a publicly-traded company in the House and was named one of the "World's Greatest Leaders" by Fortune in 2017.
The views expressed by this author are their own and are not the views of The Hill.
Artificial intelligence, analytics help speed up digital workplace … – ZDNet
Posted: at 8:12 am
Artificial intelligence (AI) and analytics are helping to speed up the pace of digital workplace transformation in industries such as energy and utilities, financial services, manufacturing, and pharmaceuticals, according to a new report from Dimension Data.
Gaining competitive advantage and improving business processes are among the top goals of digital transformation strategies, according to the report, "The Digital Workplace Report: Transforming Your Business," which is based on a survey of 850 organizations in 15 countries.
While AI technology is still in its "infancy," it is sufficiently advanced to be working its way into companies in the form of virtual assistants, Dimension said. Manifested as bots embedded into specific applications, virtual assistants draw on AI engines and machine learning technology to respond to basic queries.
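The "basic queries" behaviour described above can be pictured with a toy sketch: match a question against a handful of intents by keyword overlap and reply from canned answers. Real assistants use trained language models rather than keyword counts; the intents, keywords and responses below are invented for illustration.

```python
# Minimal keyword-matching "virtual assistant": score each intent by how many
# of its keywords appear in the query, answer from canned responses, and fall
# back to a human when nothing matches.
import re

INTENTS = {
    "reset_password": ({"reset", "password", "forgot"},
                       "You can reset your password from Settings > Security."),
    "opening_hours": ({"hours", "open", "close", "closing"},
                      "Support is available 9am-5pm, Monday to Friday."),
}

def answer(query: str, min_overlap: int = 1) -> str:
    words = set(re.findall(r"[a-z]+", query.lower()))
    best_name, best_score = None, 0
    for name, (keywords, _) in INTENTS.items():
        score = len(words & keywords)
        if score > best_score:
            best_name, best_score = name, score
    if best_score >= min_overlap:
        return INTENTS[best_name][1]
    return "Sorry, I need to hand this over to a human."

print(answer("I forgot my password"))
print(answer("When do you close?"))
print(answer("Can I bring my dog?"))
```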
"It's no longer enough to simply implement these technologies," said Krista Brown, senior vice president, group end-user computing at Dimension Data. "Organizations have grown their use of analytics to understand how these technologies impact their business performance.
About three quarters of the organizations surveyed (64 percent) use analytics to improve customer services, and 58 percent use analytics to benchmark their workplace technologies. Thirty percent of organizations said they are far along in their digital transformation initiatives and are already reaping the benefits.
Others are still in the early stages of creating a plan. One factor that could be holding some companies back from deploying a digital workplace is their corporate culture. In a lot of cases, technology and corporate culture inhibit rather than encourage workstyle change, the report noted.
Still, the top barrier to successful adoption of new workstyles was IT issues. The complexity of the existing IT infrastructure can present a huge hurdle to implementing new collaboration and productivity tools to support flexible workstyles, Brown said. Successful transformations are achieved when IT works closely with line-of-business leaders, she said.
IT leaders in the survey were asked to rank which technologies were most important to their digital workplace strategies, and they most often cited communications and collaboration tools, as well as business applications. Half said conferencing systems have resulted in business processes that have become much more streamlined and effective.
"The digital workplace is transforming how employees collaborate, how customers are supported, and ultimately how enterprises do business," the report said. "However, the digital workplace is not a destination that most--or many--enterprises have arrived at. It is a journey that enterprises have started to take and that remains ongoing."
Making workplace technologies available to employees and other stakeholders, while important, should not be the first step, Dimension said. "Actually improving processes is a complicated set of tasks that requires more than an investment in new technology."
Results from the study show that a successful digital workplace effort starts with a comprehensive strategy that a company's leadership team has carefully defined. Along the way, new technology is deployed and new working practices are introduced.
"A successful digital transformation strategy also must have clear and measurable goals from the start and must receive continued support throughout its implementation from heads of business units across the enterprise," the report said. "IT departments then need to make sure that the right digital tools are being made available to the right set of workers, and that those workers understand how best to use them."
China announces goal of leadership in artificial intelligence by 2030 – CBS News
Posted: July 21, 2017 at 12:16 pm
[File photo: a computer mouse illuminated by a projection of a Chinese flag, October 1, 2013. REUTERS/Tim Wimborne]
BEIJING -- China's government has announced a goal of becoming a global leader in artificial intelligence in just over a decade, putting political muscle behind growing investment by Chinese companies in developing self-driving cars and other advances.
Communist leaders see AI as key to making China an "economic power," said a Cabinet statement on Thursday. It calls for developing skills and research and educational resources to achieve "major breakthroughs" by 2025 and make China a world leader by 2030.
Artificial intelligence is one of the emerging fields, along with renewable energy, robotics and electric cars, where communist leaders hope to take an early lead and help transform China from a nation of factory workers and farmers into a technology pioneer.
They have issued a series of development plans over the past decade, some of which have prompted complaints Beijing improperly subsidizes its technology developers and shields them from competition in violation of its free-trade commitments.
Already, Chinese companies including Tencent Ltd., Baidu Inc. and Alibaba Group are spending heavily to develop artificial intelligence for consumer finance, e-commerce, self-driving cars and other applications.
Manufacturers also are installing robots and other automation to cope with rising labor costs and improve efficiency.
Thursday's statement gives no details of financial commitments or legal changes. But previous initiatives to develop Chinese capabilities in solar power and other technologies have included research grants and regulations to encourage sales and exports.
"By 2030, our country will reach a world leading level in artificial intelligence theory, technology and application and become a principal world center for artificial intelligence innovation," the statement said.
That will help to make China "in the forefront of innovative countries and an economic power," it said.
The announcement follows a sweeping plan issued in 2015, dubbed "Made in China 2025," that calls for this country to supply its own high-tech components and materials in 10 industries from information technology and aerospace to pharmaceuticals.
That prompted complaints Beijing might block access to promising industries to support its fledgling suppliers. The Chinese industry minister defended the plan in March, saying all competitors would be treated equally. He rejected complaints that foreign companies might be required to hand over technology in exchange for market access.
China has had mixed success with previous strategic plans to develop technology industries including renewable energy and electric cars.
Beijing announced plans in 2009 to become a leader in electric cars with annual sales of 5 million by 2020. With the help of generous subsidies, China passed the United States last year as the biggest market, but sales totaled just over 300,000.
2017 The Associated Press. All Rights Reserved. This material may not be published, broadcast, rewritten, or redistributed.
These Non-Tech Firms Are Making Big Bets On Artificial Intelligence … – Investor’s Business Daily
Posted: at 12:16 pm
While much has been written about information technology companies investing in artificial intelligence, Loup Ventures managing partner Doug Clinton notes that many non-tech companies are capitalizing on AI technology as well.
Clinton has put together a portfolio of 17 publicly traded non-tech companies that are making investments in AI to improve their businesses. In a recent blog post, Clinton notes that he assembled the portfolio as a "fun exercise" and a way to draw attention to the sweeping nature of AI advancements. Loup Ventures is an early-stage venture capital firm.
Clinton selected the companies from a range of industries including health care, retail, logistics, professional services, finance, transportation, energy, construction and food/agriculture.
"In 10 years, every company will have to be an artificial intelligence company or they won't be competitive," Clinton said.
Among the companies included is IBD 50 stock Idexx Laboratories (IDXX). Idexx makes products for the animal health-care sector. On its last earnings call, the company said that its latest diagnostic products are using machine learning so the instruments always have the ability to learn and train on new data. One such product that leverages AI is its SediVue Dx analyzer, Clinton said.
The other companies on the Loup Ventures list are: Accenture (ACN), Avis Budget Group (CAR), Boeing (BA), Caterpillar (CAT), Deere (DE), Domino's Pizza (DPZ), FedEx (FDX) and GlaxoSmithKline (GSK).
There's also Halliburton (HAL), Interpublic Group (IPG), Macy's (M), Monsanto (MON), Nasdaq (NDAQ), Northern Trust (NTRS), Pioneer Natural Resources (PXD) and Under Armour (UA).
IBD'S TAKE: Cloud-computing leaders Amazon.com, Microsoft and Google, along with internet giants, have the inside track in monetizing artificial intelligence technology, Mizuho Securities said in a report earlier this month.
Among those venturing into the space, Clinton says:
Despite Musk’s dark warning, artificial intelligence is more benefit than threat – STLtoday.com
Posted: at 12:16 pm
We expect scary predictions about the technological future from philosophers and science fiction writers, not famous technologists.
Elon Musk, though, turns out to have an imagination just as dark as that of Arthur C. Clarke and Stanley Kubrick, who created the sentient and ultimately homicidal computer HAL 9000 in 2001: A Space Odyssey.
Musk, the founder of Tesla, SpaceX, HyperLoop, Solar City and other companies, spoke to the National Governors Association last week on a variety of technology topics. When he got to artificial intelligence, the field of programming computers to replace humans in tasks such as decision making and speech recognition, his words turned apocalyptic.
He called artificial intelligence, or AI, "a fundamental risk to the existence of human civilization." For example, Musk said, an unprincipled user of AI could start a war by spoofing email accounts and creating fake news to whip up tension.
Then Musk did something unusual for a businessman who has described himself as somewhat libertarian: He urged the governors to be proactive in regulating AI. If we wait for the technology to develop and then try to rein it in, he said, we might be too late.
Are scientists that close to creating an uncontrollable, HAL-like intelligence? Sanmay Das, associate professor of computer science and engineering at Washington University, doesn't think so.
"This idea of AI being some kind of super-intelligence, becoming smarter than humans, I don't think anybody would subscribe to that happening in the next 100 years," Das said.
Society does have to face some regulatory questions about AI, he added, but they're not the sort of civilization-ending threat Musk was talking about.
The pressing issues are more like one ProPublica raised last year in its Machine Bias investigation. States are using algorithms to tell them which convicts are likely to become repeat offenders, and the software may be biased against African-Americans.
Algorithms that make credit decisions or calculate insurance risks raise similar issues. In a process called machine learning, computers figure out which pieces of information have the most predictive value. What if these calculations have a discriminatory result, or perpetuate inequalities that already exist in society?
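One simple way to make "a discriminatory result" measurable is to compare a model's approval rates across groups, as in the sketch below. The data, group labels and the commonly cited 0.8 threshold (the so-called four-fifths rule) are used purely to illustrate the kind of audit researchers and regulators run, not as a statement about any particular system.

```python
# Disparate-impact check: ratio of the lowest group's approval rate to the
# highest group's. Made-up decisions for two groups, "a" and "b".
def disparate_impact(decisions, groups):
    rates = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        rates[g] = sum(decisions[i] for i in idx) / len(idx)
    return min(rates.values()) / max(rates.values()), rates

decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]   # 1 = approved by the model
groups    = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]
ratio, rates = disparate_impact(decisions, groups)
print(rates, "impact ratio:", round(ratio, 2))  # ratios well below ~0.8 are a common red flag
```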
Self-driving cars raise some questions, too. How will traffic laws and insurance companies deal with the inevitable collisions between human- and machine-steered vehicles?
Regulators are better equipped to deal with these problems than with a mandate to prevent the end of civilization. If we write sweeping laws to police AI, we risk sacrificing the benefits of the technology, including safer roads and cheaper car insurance.
"What's going to be important is to have a societal discussion about what we want and what our definitions of fairness are, and to ensure there is some kind of transparency in the way these systems get used," Das says.
Every technology, from the automobile to the internet, has both benefits and costs, and we don't always know the costs at the outset. At this stage in the development of artificial intelligence, regulations targeting super-intelligent computers would be almost impossible to write.
"I don't frankly see how you put the toothpaste back in the tube at this point," said James Fisher, a professor of marketing at St. Louis University. "You need to have a better sense of what you are regulating against or for."
A good starting point is to recognize that HAL is still science fiction. Instead of worrying about the distant future, Das says, "We should be asking about what's on the horizon and what we can do about it."
Artificial intelligence suggests recipes based on food photos – MIT News
Posted: at 12:16 pm
There are few things social media users love more than flooding their feeds with photos of food. Yet we seldom use these images for much more than a quick scroll on our cellphones.
Researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) believe that analyzing photos like these could help us learn recipes and better understand people's eating habits. In a new paper with the Qatar Computing Research Institute (QCRI), the team trained an artificial intelligence system called Pic2Recipe to look at a photo of food and be able to predict the ingredients and suggest similar recipes.
"In computer vision, food is mostly neglected because we don't have the large-scale datasets needed to make predictions," says Yusuf Aytar, an MIT postdoc who co-wrote a paper about the system with MIT Professor Antonio Torralba. "But seemingly useless photos on social media can actually provide valuable insight into health habits and dietary preferences."
The paper will be presented later this month at the Computer Vision and Pattern Recognition conference in Honolulu. CSAIL graduate student Nick Hynes was lead author alongside Amaia Salvador of the Polytechnic University of Catalonia in Spain. Co-authors include CSAIL postdoc Javier Marin, as well as scientist Ferda Ofli and research director Ingmar Weber of QCRI.
How it works
The web has spurred a huge growth of research in the area of classifying food data, but the majority of it has used much smaller datasets, which often leads to major gaps in labeling foods.
In 2014 Swiss researchers created the Food-101 dataset and used it to develop an algorithm that could recognize images of food with 50 percent accuracy. Future iterations only improved accuracy to about 80 percent, suggesting that the size of the dataset may be a limiting factor.
Even the larger datasets have often been somewhat limited in how well they generalize across populations. A database from the City University in Hong Kong has over 110,000 images and 65,000 recipes, each with ingredient lists and instructions, but only contains Chinese cuisine.
The CSAIL team's project aims to build off of this work but dramatically expand in scope. Researchers combed websites like All Recipes and Food.com to develop Recipe1M, a database of over 1 million recipes that were annotated with information about the ingredients in a wide range of dishes. They then used that data to train a neural network to find patterns and make connections between the food images and the corresponding ingredients and recipes.
Given a photo of a food item, Pic2Recipe could identify ingredients like flour, eggs, and butter, and then suggest several recipes that it determined to be similar to images from the database. (The team has an online demo where people can upload their own food photos to test it out.)
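The final retrieval step can be pictured with the toy sketch below, which assumes some upstream model has already turned the photo into an ingredient list and then ranks recipes by ingredient overlap (Jaccard similarity). This is only an illustration of the matching idea; the real Pic2Recipe system learns a joint embedding of images and recipes, and the tiny recipe database here is invented.

```python
# Rank recipes by how much their ingredient sets overlap with the ingredients
# predicted from a food photo (Jaccard similarity on ingredient sets).
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

recipe_db = {
    "sugar cookies": {"flour", "eggs", "butter", "sugar", "vanilla"},
    "banana muffins": {"flour", "eggs", "butter", "banana", "sugar"},
    "omelette": {"eggs", "butter", "cheese", "salt"},
}

predicted = {"flour", "eggs", "butter"}   # stand-in for the vision model's output
ranked = sorted(recipe_db, key=lambda r: jaccard(predicted, recipe_db[r]), reverse=True)
for name in ranked:
    print(name, round(jaccard(predicted, recipe_db[name]), 2))
```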
"You can imagine people using this to track their daily nutrition, or to photograph their meal at a restaurant and know what's needed to cook it at home later," says Christoph Trattner, an assistant professor at MODUL University Vienna in the New Media Technology Department who was not involved in the paper. "The team's approach works at a similar level to human judgement, which is remarkable."
The system did particularly well with desserts like cookies or muffins, since that was a main theme in the database. However, it had difficulty determining ingredients for more ambiguous foods, like sushi rolls and smoothies.
It was also often stumped when there were similar recipes for the same dishes. For example, there are dozens of ways to make lasagna, so the team needed to make sure that the system wouldn't penalize recipes that are similar when trying to separate those that are different. (One way to solve this was by seeing if the ingredients in each are generally similar before comparing the recipes themselves).
In the future, the team hopes to be able to improve the system so that it can understand food in even more detail. This could mean being able to infer how a food is prepared (i.e. stewed versus diced) or distinguish different variations of foods, like mushrooms or onions.
The researchers are also interested in potentially developing the system into a dinner aide that could figure out what to cook given a dietary preference and a list of items in the fridge.
"This could potentially help people figure out what's in their food when they don't have explicit nutritional information," says Hynes. "For example, if you know what ingredients went into a dish but not the amount, you can take a photo, enter the ingredients, and run the model to find a similar recipe with known quantities, and then use that information to approximate your own meal."
The project was funded, in part, by QCRI, as well as the European Regional Development Fund (ERDF) and the Spanish Ministry of Economy, Industry, and Competitiveness.
Artificial intelligence boosts wine’s bottom line – Phys.Org
Posted: at 12:16 pm
July 21, 2017, by Caleb Radford
The Australian wine industry is turning to artificial intelligence to streamline its manufacturing.
South Australian tech firm Ailytic has developed an artificial intelligence (AI) program to significantly increase production efficiency by optimising machine use.
It uses an AI technique called 'prescriptive analytics' to account for all the variables that go into mass-producing wines, such as temperature, wine changeover and inventory.
The program then creates the best possible operation schedule, allowing companies to save considerable time and money.
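As a toy picture of what such a schedule optimiser does, the sketch below orders bottling jobs so that expensive changeovers (say, red followed by white needing a full clean-down) are minimised, using a greedy pass. The changeover costs, job list and the greedy heuristic itself are invented for illustration; Ailytic's actual optimiser is not described in this article.

```python
# Greedy changeover-aware bottling schedule: always pick the next job whose
# style switch from the current one is cheapest. Costs in hours are made up.
CHANGEOVER_HOURS = {
    ("red", "red"): 0.5, ("white", "white"): 0.5,
    ("red", "white"): 3.0, ("white", "red"): 1.0,
}

jobs = [("dry red", "red"), ("aromatic white", "white"),
        ("sweet red", "red"), ("fortified white", "white")]

def greedy_schedule(jobs):
    remaining, order, current = list(jobs), [], None
    while remaining:
        nxt = min(remaining,
                  key=lambda j: 0 if current is None else CHANGEOVER_HOURS[(current, j[1])])
        order.append(nxt)
        current = nxt[1]
        remaining.remove(nxt)
    return order

for name, style in greedy_schedule(jobs):
    print(name, "->", style)
```

A production system would fold in the other variables the article mentions (temperature, tank inventory, due dates) and would typically use a proper solver rather than a greedy pass, but the objective, squeezing wasted changeover time out of the schedule, is the same.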
Ailytic's list of clients includes world-renowned wine companies such as Pernod Ricard, Accolade Wines and Treasury Wine Estates.
It has now included South Australian company Angove Family Winemakers as well.
Pernod Ricard Global Business Solutions Manager Pauline Paterson said AI was highly beneficial for the wine industry and helped to increase the bottom line.
"We use it mainly around production line and use it to derive the most efficient way to produce our product," she said.
"It is definitely helpful with changeover, how many bottles we need, how much wine and what order to do everything in."
Ailytic's system is able to obtain essential information from wineries using remote sensors, which are placed around machines and vineyards.
These sensors track a number of key procedures including the changeover from red to white bottling.
This includes the sub-classification of each colour such as sweet red, dry red, aromatic white and fortified wines.
Ailytic's program ensures that wine is changed quickly, without contamination, bottled using appropriate glassware, labelled and then packaged appropriately.
The sensors then transmit the data to a computer in real time using Wi-Fi.
A single pass can take anywhere between three to six hours but Ailytic's system reduces this by up to 30 per cent.
Pernod Ricard is the world's second leading wine and spirits company, with a network of grows across six countries and €8.68 billion in sales in 2015.
Its brands include Jacob's Creek, Campo Viejo, Brancott Estate, Kenwood Vineyards and Wyndham Estate.
Ailytic co-founder and CEO James Balzary said the company's AI program was perfect for the wine industry because it thrived in complex environments.
"Our algorithms work well for things like packaging, bottling, general manufacturing and sink manufacturing the wine industry is where we are seeing a lot of appetite and the most uptake," he said.
"People think of wine as a romantic artisan type of process, and it is, when you are producing small batch, but the majority of wines we drink are mass manufactured in big operations. That's where we come in the more complex the business, the bigger the benefit."
Ailytic's involvement in wine manufacturing has seen it nominated at the 2017 Wine Industry IMPACT Awards in Adelaide.
Ailytic's other clients are also based out of South Australia and include Australia's lone sink manufacturer Tasman Sinkware.
However, it does plan to expand its clientele and has already garnered international interest in their product.
"Even though the bigger wineries would find this more useful, even smaller operations will benefit from this," Balzary said.
"It's an affordable solution that used to only be accessible to bigger companies but we try to focus on bringing advanced capabilities to T2 and T3 manufacturers."