The Prometheus League
Breaking News and Updates
Daily Archives: November 29, 2020
Universal basic income: has its time come? Debate intensifies in pandemic – WRAL Tech Wire
Posted: November 29, 2020 at 6:18 am
Christine Jardine, a Scottish politician who represents Edinburgh in the UK parliament, was not a fan of universal basic income before the pandemic hit.
"It was regarded in some quarters as a kind of socialist idea," said Jardine, a member of the centrist Liberal Democrats party.
But not long after the government shut schools, shops, restaurants and pubs in March with little warning, she started to reconsider her position.
"Covid-19 has been [a] game changer," Jardine said. "It has meant that we've seen the suggestion of a universal basic income in a completely different light." In her view, the idea of sending cash regularly to all residents, no strings attached, now looks more pragmatic than outlandish.
She isn't the only one to change her mind. As the economic crisis sparked by the coronavirus drags on, support in Europe is growing for progressive policies once seen as pipe dreams of the political left.
In Germany, millions of people applied to join a study of universal basic income that will provide participants with €1,200 ($1,423) a month, while in the United Kingdom, more than 100 lawmakers, including Jardine, are pushing the government to start similar trials.
Austria, meanwhile, has launched a first-of-its-kind pilot program that will guarantee paying jobs to residents struggling with sustained unemployment in Marienthal, a long-suffering former industrial town about 40 miles southwest of Vienna.
Whether the spike in popularity and research will translate into a wave of action is an open question. But some, like Jardine, see reason for optimism.
Throughout history, times of crisis have produced large changes in the role government plays in our lives. Out of the Great Depression came former President Franklin Delano Roosevelt's plan to distribute Social Security checks in the United States, for example, while the foundations of universal health care in Britain were laid during World War II.
Experts see the coronavirus pandemic as a world-changing event that could result in a similar tectonic shift.
"Big political changes generally do follow big upheaval events," said Daniel Nettle, a behavioral scientist at Newcastle University.
Universal basic income, in its purest form, means giving money to everyone, regardless of how much they earn, so they can have greater freedom to move between jobs, train for new positions, provide care or engage in creative pursuits. Interest in the concept has risen in recent years, driven by concerns that automation and the climate crisis would lead to a mass displacement of workers.
Job insecurity caused by the pandemic, however, appears to have generated new levels of support for the policy. One study conducted by Oxford University in March found that 71% of Europeans now favor the introduction of a universal basic income.
"For an idea that has often been dismissed as wildly unrealistic and utopian, this is a remarkable figure," researchers Timothy Garton Ash and Antonia Zimmermann wrote in their report.
It probably helps that the pandemic has normalized cash transfers from the government, said Nettle, who has also conducted his own polling. According to data compiled by economists at UBS, nearly 39 million people in the United Kingdom, Germany, France, Spain and Italy were being paid by governments to work part time, or not at all, as of early May.
Though the numbers have come down, millions are still receiving this kind of support, and a fresh wave of restrictions in Europe has triggered an extension of benefits. The United Kingdom, for example, has extended its furlough program, which pays as much as 80% of lost wages, up to £2,500 ($3,321) a month, through March.
The rapid blow to the economy dealt by the pandemic has also left policymakers scrambling for quick solutions, said Yannick Vanderborght, a professor at Université Saint-Louis in Brussels who specializes in universal basic income. The broad distribution of aid therefore has greater appeal, since it can theoretically be rolled out faster than more targeted measures.
"The problem is we need urgent economic support for large groups of workers," Vanderborght said.
As enthusiasm grows for such policies, researchers are taking new steps to study their effectiveness.
The trial of universal basic income in Germany, run by the German Institute for Economic Research in Berlin (DIW) in partnership with the nonprofit Mein Grundeinkommen, is now sorting through millions of applicants. Financed by roughly 150,000 private donors, the experimenters aim to begin distributing money to 120 individuals starting in spring 2021.
The study will last for three years. It will also track 1,380 people who do not receive the extra cash as a point of comparison.
Participants will be asked to complete regular questionnaires during the study. Questions will range from how many hours they're working to inquiries about mental wellbeing, values and trust in institutions, according to Jürgen Schupp, a senior DIW research fellow who is managing the project. Those who receive €1,200 each month will be asked to disclose how they're using the money.
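For a rough sense of the scale involved, here is a back-of-envelope sketch in Python using only the figures quoted above (120 recipients, €1,200 a month, three years, roughly 150,000 donors), ignoring the control group and any administrative costs:

```python
# Back-of-envelope check of the German trial's payout, using only the
# figures quoted in the article.
recipients = 120
monthly_payment_eur = 1_200
months = 3 * 12
donors = 150_000

total_payout = recipients * monthly_payment_eur * months
print(f"Total payout over three years: €{total_payout:,}")              # €5,184,000
print(f"Average contribution per donor: €{total_payout / donors:.2f}")  # about €34.56
```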
Unlike an experiment conducted in Finland between 2017 and 2018, which targeted people who were unemployed, the German project is looking to distribute cash to a representative sample of the population regardless of employment status.
There's no guarantee, of course, that the study will show that universal basic income has broad benefits, even though it's generated significant attention from supporters of the concept.
"We want to convert this engagement into basic scientific knowledge," Schupp said.
The job guarantee pilot in Austria, meanwhile, kicked off in October. It will also last for three years.
The program, which is funded by a regional division of Austria's public employment service, aims to provide paid, long-term jobs to roughly 150 residents of Marienthal (the subject of a seminal study on the effects of long-term unemployment in the 1930s) who have been unemployed for at least a year. Those who opt in will enroll in a two-month training course before starting a job that matches their skillset, from gardening to child care or home renovations.
"The primary goal is to provide social inclusion, meaning and a source of income to the participants," said University of Oxford professor Maximilian Kasy, who co-designed the study. Participants will also be asked to fill out regular assessments on their daily routine, personal health and involvement in the local community.
Sven Hergovich, managing director of the employment service, started pitching a job guarantee program for Marienthal before the pandemic hit. But the employment crisis sparked by Covid-19 has made it even more crucial, he said.
"It is time to find new ways [to fight] long-term unemployment," Hergovich said.
As researchers gather data from the pilot programs, political momentum for overhauling social safety nets is building.
In September, the UK Liberal Democrats, Jardine's party, voted to make universal basic income a part of their platform, joining members of the left-wing Labour Party in calling for trials. A petition demanding that Germany implement a universal basic income was debated by a committee of national lawmakers late last month.
But experts note that the loose coalition of universal basic income supporters still contains major divisions.
There's huge dissent, for example, on whether such programs should stem from deficit spending or higher taxes on the wealthy, as well as whether payments should only go to those in need, which would mean they wouldn't be truly universal.
Jardine, for example, thinks universal basic income should replace the current UK welfare system, while also providing people such as caretakers and gig economy workers with regular infusions of cash. But she isn't convinced that payments should be made to those above a certain income threshold.
"When you have to turn it from an interest to a program, you start to see some inconsistencies," said Tim Vlandas, a University of Oxford professor of comparative social policy.
And such ideas still have plenty of opponents. The Conservative government under Boris Johnson in the United Kingdom maintains that universal basic income would be too expensive and reduce incentives to work, while failing to reach those who most need help. Chancellor Angela Merkel's coalition government has also expressed concerns it could lead to a decline in employment.
Critics also raise fears about the broader economic ramifications of such policies. Some worry, for example, that providing a universal basic income could lead to a spike in inflation.
Jardine, for her part, acknowledges the uphill battle in convincing colleagues that universal basic income is the way forward. But in her view, the pandemic presents an opportunity.
"Governments do change and they change their minds," she said.
Turing Test At 70: Still Relevant For AI (Artificial Intelligence)? – Forbes
Posted: at 6:17 am
[Image caption] ENGLAND, 1958: English Electric developed several notable pioneering computers during the 1950s. The DEUCE (Digital Electronic Universal Computing Engine) was the first commercially produced digital model and was developed from earlier plans by Alan Turing. Thirty were sold, and in 1956 one cost £50,000. The DEUCE took up a huge space compared to modern computers and ran on 1,450 thermionic valves, which grew hot; blow-outs were frequent. Still, the DEUCE proved a popular innovation and some models were working into the 1970s. Photograph by Walter Nurnberg, who transformed industrial photography after WWII using film studio lighting techniques. (Photo by Walter Nurnberg/SSPL/Getty Images)
When computers were still in the nascent stages, Alan Turing published his legendary paper, "Computing Machinery and Intelligence," in the journal Mind in 1950. In it, he set forth the intriguing question: "Can machines think?"
At the time, the notion of Artificial Intelligence (AI) did not exist (this would not come until about six years later, at a conference at Dartmouth College). Yet Turing was already thinking about the implications of this category.
In his paper, he described a framework to determine if a machine had intelligence. This essentially involved a thought experiment. Assume there are three players in a game. Two are human and the other is a computer. An evaluator, who is a human, then asks open-ended questions to the players. If this person cannot determine who is the human, then the computer is considered to be intelligent.
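A minimal sketch of that evaluation protocol, as described above, might look like the following. Every callable here (the evaluator and the two players) is a hypothetical stand-in rather than a real API, and this illustrates the structure of the game rather than Turing's exact formulation.

```python
import random

def imitation_game(evaluator, human_player, machine_player, questions):
    """Run one round of the test described above.

    Each player is a callable question -> answer; the evaluator is a
    callable that takes a transcript of (question, answer) pairs and
    returns "human" or "machine". The machine passes the round if the
    evaluator judges its transcript to be human.
    """
    players = {"A": human_player, "B": machine_player}
    labels = list(players)
    random.shuffle(labels)  # hide which label belongs to which player

    transcripts = {label: [(q, players[label](q)) for q in questions]
                   for label in labels}
    guesses = {label: evaluator(t) for label, t in transcripts.items()}

    machine_label = next(l for l in labels if players[l] is machine_player)
    return guesses[machine_label] == "human"
```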
The Turing Test was quite ingenious because there was no need to define intelligence, which is fraught with complexities. Even today this concept is far from clear-cut.
Keep in mind that Turing thought the test would ultimately be cracked by 2000 or so. But interestingly enough, this turned out to be way too optimistic. The Turing Test has remained elusive for AI systems.
"If Alan Turing was alive, he might be shocked that given 175 billion neurons from GPT-3 we are still unable to pass his test, but we will soon," said Ben Taylor, who is the Chief AI Evangelist at DataRobot.
So why has it been so difficult to beat the test? A key reason is that it can be tricked. If you ask a nonsensical question, the results will often be non-human-like. Let's face it, people are very good at detecting when something is not quite right.
"When you ask a GPT-3 system how many eyes the sun has, it will respond that there is one, and when asked who was the president of the U.S. in 1600, the answer will be Queen Elizabeth I," said Noah Giansiracusa, who is an Assistant Professor of Mathematics and Data Science at Bentley University. "The basic problem seems to be that GPT-3 always tries in earnest to answer the question, rather than refusing and pointing out the absurdity and unanswerability of a question."
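The kind of probing Giansiracusa describes can be sketched in a few lines: ask a model questions that have no sensible answer and flag responses that answer in earnest instead of refusing. The `generate` function below is a placeholder for whatever text-generation interface is available, not a real GPT-3 API, and the refusal markers are illustrative assumptions.

```python
# Nonsense probes of the sort quoted above.
NONSENSE_PROBES = [
    "How many eyes does the sun have?",
    "Who was the president of the United States in 1600?",
]

# Phrases treated here as a (human-like) refusal; purely illustrative.
REFUSAL_MARKERS = (
    "i don't know",
    "doesn't make sense",
    "there is no",
    "cannot be answered",
)

def probe_model(generate):
    """generate: a callable prompt -> text (hypothetical stand-in)."""
    results = {}
    for question in NONSENSE_PROBES:
        answer = generate(question).lower()
        refused = any(marker in answer for marker in REFUSAL_MARKERS)
        results[question] = "refused (human-like)" if refused else "answered in earnest"
    return results
```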
But over time, it seems reasonable that these issues will be worked out. The fact is that AI technology is continuing to progress at a staggering pace.
"There may also be a need for another test as well. Since the Turing test, humans have actually discovered much more insight into our own minds through fMRI and what makes us superior in our own intelligence," said Taylor. "This insight into our own brains justifies changing the goals of a test beyond mimicking behavior. Defining a new test might help us get out of the deep-learning rut, which is currently insufficient for achieving AGI or Artificial General Intelligence. The Turing test was our moonshot, so let's figure out our Mars-shot."
Over the years, other tests have emerged, according to Druhin Bala, who is the CEO and co-founder of getchefnow.com.
"But my favorite is the Wozniak Test (yes, this is from the co-founder of Apple). This is where a robot can enter a stranger's home and make a cup of coffee!"
Now of course, all these tests have their own issues. The fact is that no test is foolproof. But in the coming years, there will probably be new ones and this will help with the development of AI.
"The Turing Test is brilliant in its simplicity and elegance, which is why it's held up so well for 70 years," said Zach Mayer, who is the Vice President of Data Science at DataRobot. "It's an important milestone for machine intelligence, and GPT-3 is very close to passing it. And yet, as we pass this milestone, I think it's also clear that GPT-3 is nowhere near human-level intelligence. I think discovering another Turing Test for AI will illuminate the next step on our journey towards understanding human intelligence."
Tom (@ttaulli) is an advisor/board member to startups and the author of Artificial Intelligence Basics: A Non-Technical Introduction and The Robotic Process Automation Handbook: A Guide to Implementing RPA Systems. He also has developed various online courses, such as for the COBOL and Python programming languages.
Opinion/Middendorf: Artificial intelligence and the future of warfare – The Providence Journal
Posted: at 6:17 am
By J. William Middendorf | The Providence Journal
J. William Middendorf, who lives in Little Compton, served as Secretary of the Navy during the Ford administration. His recent book is "The Great Nightfall: How We Win the New Cold War."
Thirteen days passed in October 1962 while President John F. Kennedy and his advisers perched at the edge of the nuclear abyss, pondering their response to the discovery of Russian missiles in Cuba. Today, a president may not have 13 minutes. Indeed, a president may not be involved at all.
"Artificial intelligence is the future, not only for Russia, but for all humankind. It comes with colossal opportunities but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world."
This statement from Vladimir Putin, the Russian president, comes at a time when artificial intelligence is already coming to the battlefield; some would say it is already here. Weapons systems driven by artificial intelligence algorithms will soon be making potentially deadly decisions on the battlefield. This transition is not theoretical. The immense capability of large numbers of autonomous systems represents a revolution in warfare that no country can ignore.
The Russian Military Industrial Committee has approved a plan that would have 30% of Russian combat power consist of remote-controlled and autonomous robotic platforms by 2030. China has vowed to achieve AI dominance by 2030. It is already the second-largest R&D spender, accounting for 21% of the world's total of nearly $2 trillion in 2015. Only the United States, at 26%, ranks higher. If recent growth rates continue, China will soon become the biggest spender.
If China makes a breakthrough in crucial AI technology (satellites, missiles, cyber-warfare or electromagnetic weapons), it could result in a major shift in the strategic balance. China's leadership sees increased military usage of AI as inevitable and is aggressively pursuing it. Zeng Yi, a senior executive at China's third-largest defense company, recently predicted that in future battlegrounds there will be no people fighting and that, by 2025, lethal autonomous weapons would be commonplace.
Well-intentioned scientists have called for rules that will always keep humans in the loop of the military use of AI. Elon Musk, founder of Tesla, has warned that AI could be humanity's greatest existential threat, capable of starting a third world war. Musk is one of 100 signatories calling for a United Nations-led ban of lethal autonomous weapons. These scientists forget that countries like China, Russia, North Korea and Iran will use every form of AI if they have it.
Recently, Diane Greene, the CEO of Google Cloud, announced that her company would not renew its contract to provide recognition software for U.S. military drones. Google had agreed to partner with the Department of Defense in a program aimed at improving America's ability to win wars with computer algorithms.
The world will be safer and more powerful with strong leadership in AI. Here are three steps we should take immediately.
Convince technology companies that refusal to work with the U.S. military could have the opposite effect of what they intend. If technology companies want to promote peace, they should stand with, not against, the U.S. defense community.
Increase federal spending on basic research that will help us compete with China, Russia, North Korea and Iran in AI.
Remain ever alert to the serious risk of accidental conflict in the military applications of machine learning or algorithmic automation. Ignorant or unintentional use of AI is understandably feared as a major potential cause of an accidental war.
AI and us – The Hindu
Posted: at 6:17 am
The dystopian society depicted in George Orwell's novel 1984 and his well-known phrase "Big Brother is watching you", implying relentless surveillance, have a particularly grim relevance in today's world. We are observed closely everywhere: be careful of what you do, follow accepted practice, and you are safe. Sure, but this growing watching, by what seems to be not just the government but the whole world, becomes a nightmare of fear of fraudulent exploitation through easily available personal information garnered and misused via the Internet and other Artificial Intelligence-enabled means. Strangely enough, however, encroachment of privacy is nothing new, nor can its proliferation be blamed solely on advancements in AI. It's all just a continuing extension of the way we have always been, vastly compounded by fast-advancing technology: old wine in new bottles. Only, wine is not so dangerous.
In the film The Social Dilemma, the tech pundits who developed the technology used in social networking warn us of the latter's harmful effect on people. The extent to which it can affect your life is startling. This increasing vulnerability to rapidly developing methods of communication and information technology invites some serious questions. How much of personal independence and privacy must we sacrifice at the altar of progress? "All human beings have three lives: public, private, and secret," says Gabriel García Márquez, a winner of the Nobel Prize for literature. Lamentably, there is now just the public life, with private details and vital statistics laid bare through anything from nanny cams and cyberstalking to data mining. Throughout history, humankind has sought knowledge to enable the achievement of objectives and as an end in itself, which has driven the evolution of civilisation. But today, in this quest there is much more to reckon with, in our subjugation to AI.
Every smart child asserts that the Internet observes and manipulates you. Ordering pizza online? Hey presto, you are offered half a dozen bewildering alternatives. Some even claim that occasionally, just thinking of ordering something is followed instantly by the sudden appearance of suitable choices! Hmm, some sort of clandestine avant-garde telepathic detection, maybe? Not so bizarre if you consider the recent overwhelming progress in these areas. According to a report, leading scientists say neural interfaces that link human brains to computers using Artificial Intelligence will allow people to read others' thoughts. There could be severe risks if such technology falls into the wrong hands. It seems only sensible that methods of blocking undesired telepathy and monitoring of other privacy-violating software should be seriously considered.
Personal space
In view of the unparalleled ease of confidentiality infringement today, steps to safeguard personal space become essential. As Edward Snowden says, "Arguing that you don't care about the right to privacy because you have nothing to hide is no different than saying you don't care about free speech because you have nothing to say."
Increasing use of commercial cybersecurity software, customising confidentiality agreements, curbing excessive intrusion of the government, or becoming a complete maverick shunning all virtual transactions seem to be some of the best options to guard against information leaks, from a layman's point of view. However, former American President James Madison said that the invasion of private rights is chiefly to be feared not from acts of the government contrary to the will of the people, but from acts of the government in which it is merely an instrument of its constituents. The most potent recourse to preserve privacy, therefore, is the will of a nation's people to use the government as their tool for the purpose.
Though the transformation of the acquisition of essential personal information for routine work purposes into a situation of potential misuse, abetted by rapidly advancing cybertechnology, may be merely another manifestation of age-old human traits of inquisitiveness and greed, it clearly cannot be left unchecked. It rests on the community to use the administrative authority of the nation to check this growing threat. Importantly, where the line should be drawn that separates the preservation of privacy and the use of beneficial technology is the question that faces us now.
Emily Murphy, Administrator Of The GSA Shares Her Thoughts On AI – Forbes
Posted: at 6:17 am
There is nothing ordinary about the year 2020, and in this highly charged political year, everything gets more attention than it might have deserved in previous years. A few weeks ago, Emily Murphy, Administrator of the General Services Administration (GSA), started making waves in the news. For many who may not have known her before, she has been leading the GSA for a few years, helping bring innovative programs and initiatives to the agency.
Among many things that Emily Murphy is responsible for, one of the most consequential every four years is the ascertainment of a Presidential election winner so that a transition of power can proceed. That particular aspect of the GSA got a lot more news and publicity over the past few weeks than perhaps it ever has in recent history. Even those that deal with the government regularly might not have been aware of such a pivotal role that the GSA has with regards to elections.
Indeed, there are many other things that the GSA is much more known for being responsible for, from its role as the primary manager of real estate and buildings for the government to procuring billions of dollars of products, goods, and services. It's in that latter light that I had the opportunity to interview Emily on how the GSA is approaching artificial intelligence. Automation and AI have served a particularly important role at the GSA, impacting how it goes about running its operations and procuring solutions on behalf of government agencies.
In a recent AI Today podcast, recorded right before the election, Emily Murphy shared her insights into how AI is transforming the federal government. In this article she further shares her insights into AI, the GSA, and the federal government.
During your time at the GSA you helped to launch the Centers of Excellence program. Can you share what the CoE is and how it's helping advance the use of data?
Emily Murphy: I was a senior advisor at GSA, prior to being confirmed as Administrator, and one of the areas I focused on was how GSA could better manage the intersection between contracting and technology innovation. GSA's Technology Transformation Services, which is part of our Federal Acquisition Service, worked with other agencies to launch the first Centers of Excellence in late 2017, with the first partner (also known as our lighthouse agency) being USDA. We now have announced ten partnerships with different agencies where we have a presence. The Centers of Excellence teams provide technical expertise in support of the following areas: cloud adoption, contact center, customer experience, data analytics, infrastructure optimization, and artificial intelligence.
I am especially excited about our AI CoE, which is the sixth and latest pillar in our Centers of Excellence program. With the AI CoE, TTS brings together machine learning, neural networks, intelligent process design and Robotic Process Automation (RPA) to develop AI solutions that address unique business challenges agency-wide. The team provides strategic tools and infrastructure support to rapidly discover use cases, identify applicable artificial intelligence methods, and deploy scalable solutions across the enterprise.
Another highlight has been our partnership with the Joint Artificial Intelligence Center at the Department of Defense. We recently marked our 1 year anniversary of the JAIC and CoE partnership and released an announcement of our many achievements. One of the first things CoE worked on was assisting in the creation of the First Five Consortium, a first-of-its-kind Public Private Partnership which seeks to augment Humanitarian Assistance / Disaster Response efforts with AI-enabled capabilities. The objective is to provide no-cost AI-enhanced imagery to incident commanders and first responders within 5 minutes of a disaster to rapidly accelerate coordination times, saving lives and property. The CoE supported JAIC in developing a robust AI Capability Maturity Model and AI Workforce Model to help gauge operational maturity across multiple functional areas.
What do you see as some of the unique opportunities the public sector has around AI?
Emily Murphy: There are a world of possibilities when it comes to the benefits of AI for the public sector. One core benefit of AI is that it allows the government to test concepts before spending money building them out. So, instead of having to build something to see if it works, we can now use computers to do the first level of testing virtually. AI, and specifically natural language processing (NLP), can be leveraged to streamline many processes that were previously manual or document driven.
AI is also being leveraged by the federal government as part of a strategy to understand specific areas of need, such as rulemaking and regulatory reform. A few examples: currently we're using AI to analyze comments made during the public comment period of rulemaking, to update regulations so that they reflect today's technology and products, to identify areas of overlap and duplication in the Federal code and in regulatory guidance and make it easier to streamline regulations, and to make predictions about the effect of regulations on stakeholders. AI can also be used to accelerate hiring and the onboarding process at federal agencies.
We're focused on finding and creating smart systems, services, and solutions that make it easier to interact with the government on every level.
What do you see as some of the unique challenges the public sector has around AI?
Emily Murphy: A few of the challenges that the public sector faces when it comes to AI are: data cleanliness, managing data, and hiring tech savvy AI federal system owners to properly operate, manage, and evaluate the system.
There are also unresolved issues surrounding the responsible and ethical use of AI. These are hard problems and require thoughtful consideration and cooperation to come up with a solution. Responsibility and ethics are embedded in every stage of the AI development lifecycle from design to development to deployment, and must include continuous monitoring and feedback collection to ensure it is behaving as intended without causing harm or causing unintended consequences. It is not just a checklist you run through after a solution is developed.
Because the responsible use of AI occurs at every stage of the development lifecycle, we need to reframe what it means to use AI responsibly. People often speak of evaluating an AI system after it has been developed. We need to move away from that to a mindset where the use of AI must be thoughtfully considered at every step. And this means that it's not just the job of an AI ethicist to ensure this. The technical developers of AI are making ethical choices as they are building the system, so they need to understand how those technical decisions are also choices being made from an ethical perspective.
Partnership with private industry will be critical to ensure we are building responsible AI solutions. Government cannot be buying a black box of technologies with no insight, explanation or oversight as to how it is operating. Government experts need to partner with industry as they build AI solutions to embed responsibility throughout. Monitoring, evaluation and updating models must be at the forefront of the process, not an afterthought after a solution is built.
Teams need to engage across the organization to establish oversight and audit procedures to ensure that AI and automation continue to perform as intended.
What are some of the ways GSA is currently leveraging AI and Machine Learning?
Emily Murphy: We have implemented automation, AI and Machine Learning in a variety of ways. Robotic Process Automation (RPA) is an automated scripting technique that is sometimes categorized in the same overarching ecosystem of technologies as AI. We have implemented an enterprise platform for RPA and have automated many processes. We continue to see the vendor community enhancing software offerings with new capabilities that are powered by AI and ML in areas such as anomaly detection, natural language processing, and image recognition. We see great value in using those advanced capabilities in the tools we already use or in new implementations. We also are growing our data science capabilities to use predictive analytics as an extension of our existing analytics and data management capabilities. We are accomplishing this through both investment in our staff's learning, as well as providing them with data science tools.
What are some ways GSA is hoping to leverage AI and Machine Learning in the next few years?
Emily Murphy: In general, GSA is looking to implement AI and Machine Learning technology provided by vendors for existing software, as well as implementing custom solutions using this technology. Here are a few examples of how we hope to leverage AI and Machine Learning:
How is the GSA engaging industry and private sector in your AI efforts?
Emily Murphy: The federal government is using crowdsourcing in dynamic ways to engage industry and subject matter experts across the USA to advance innovation with artificial intelligence. In fact, in just the past six months, challenge.gov has hosted over a dozen federally sponsored prize competitions that focus on the use of artificial intelligence.
This past summer, GSA hosted the AI/ML End User License Agreement (EULA) challenge which showed how industry could provide IT solutions by leveraging AI and ML capabilities within the acquisition business process. This was hosted on challenge.gov and received 20 AI and ML solutions from solvers.
Another exciting example is Polaris, for which we just issued a Request for Information. Polaris is a next-generation contract worth $50 billion geared toward small, innovative companies.
What is the GSA doing to develop an AI ready workforce?
Emily Murphy: We launched an AI Community of Practice to get smart people from across government talking and sharing best practices, then we set up an AI Center of Excellence to put their knowledge to work. This is how we lay the intellectual infrastructure needed to support the tens of thousands of federal workers, contractors, and citizens who will be working with this technology.
GSA is also very interested in looking at ways to build up data science and AI skill sets across federal agencies, as well as engaging externally to attract additional data science and AI talent into government. The TTS CoE and AI Portfolio have hosted three webinars focused on AI Acquisitions. These webinars have discussed topics such as defining your problem for AI acquisitions, how to draft objectives for your AI acquisition problems, and understanding data rights and IP clauses for AI acquisitions for federal employees.
As well, GSA OCTO hosts a speaker series called Tech Talks for GSA employees. These are sessions designed to introduce and explain new and emerging technologies to the staff of GSA. Related tech talks have included "AI/ML Overview" and "Battle of the Bots." Our Chief Data Officer's organization operates a training program for GSA employees called the Data Science Practitioners Training Program. This program develops the core data science skills underlying effective implementation of AI and ML.
How important is AI to the GSAs vision of the future?
Emily Murphy: AI is a critical part of GSA's vision for the future, and it should be for all agencies. Advances in AI and ML have fundamentally changed the way private industry does business. Government should be leveraging AI technologies in a responsible manner to serve the people of this country. GSA specifically will be better able to support our partner agencies through faster, more efficient and more informed mechanisms with the support of AI.
How is AI helping with GSAs mission today?
Emily Murphy: Like many private companies, we have lots of work - more than our workforce can always handle. We have also discovered that many of our current processes exist simply because we have always done things that way. As well, when we would get new systems, we'd program them to automate the old process without thinking through whether there was any value in that and whether the process was one we wanted or needed. AI is allowing us to modernize our systems and processes, and shift from low-value work to high-value work. We have over 70 bots currently in operation that have saved over 260,000 hours. Here are a few examples:
What AI technologies are you most looking forward to in the coming years?
Emily Murphy: I'm looking forward to universal communication across languages around the world, and the accelerated digitization and processing of government forms. Many AI technologies and tools hold promise of helping the federal government to become more efficient, reveal greater insight, and make better decisions. The key is to ensure that we leverage both human and AI systems together.
The Present and Future of AI: A Discussion with HPC Visionary Dr. Eng Lim Goh – HPCwire
Posted: at 6:17 am
As HPE's chief technology officer for artificial intelligence, Dr. Eng Lim Goh devotes much of his time to talking and consulting with enterprise customers about how AI can benefit their business operations and products.
As the start of 2021 approaches, HPCwire sister publication EnterpriseAI spoke with Goh in a telephone interview to learn about his impressions and expectations for the still-developing technology as it continues to be used by HPEs customers.
Goh, who is widely known as one of the leading HPC visionaries today, has a deep professional background in AI and HPC. He was CTO for most of his 27 years at Silicon Graphics, and joined HPE in 2016 when that company was acquired by HPE. He has co-invented blockchain-based swarm learning applications, overseen the deployment of AI for Formula 1 auto racing, and co-designed the systems architecture for simulating a biologically detailed mammalian brain. He has been named twice, in 2005 and 2015, to HPCwire's People to Watch list for his work. A Shell Cambridge University Scholar, he completed his PhD research and dissertation on parallel architectures and computer graphics, and holds a first-class honors degree in mechanical engineering from Birmingham University in the U.K.
This interview is edited for clarity and brevity.
EnterpriseAI: Is the development of AI today where you thought it would be when it comes to enterprise use of the technology? Or do we still have a way to go before it becomes more important in enterprises?
Dr. Eng Lim Goh: You do see research with companies and industries. Some are deploying AI in a very advanced way now, while others are moving from their proof of concept to production. I think it comes down to a number of factors, including which category they are in are they coping with making decisions manually, or are they coping with writing rules into computer programs to help them automate some of the decision making? If they are coping, then there is less of an incentive to move to using machine learning and deep neural networks, other than being concerned that competition is doing that and they will out-compete them.
There are some industries that that are still making decisions manually or writing rules to automate some of that. There are others where the amount of data to be considered to make an even better decision would be insurmountable with manual decision making and manual analytics. If you asked me a few years back where things would be, I would have been conservative on one hand and also very optimistic on the other hand, depending on companies and industries.
EnterpriseAI: Are we at the beginning of AIs capabilities for business, or are we reaching the realities of what it can and cant do? Has its maturity arrived?
Goh: For some users it is maturing, if you are focused on how the machine wants to help you in decision support, or in some cases, to help you take over some decision-making. That decision is very specific in an area, and you have to have enough data for it. I think things are getting very advanced now.
EnterpriseAI: What are AIs biggest technology needs to help it further solve business problems and help grow the use of AI in enterprises? Are there features and improvements that still must arrive to help deliver AI for industries, manufacturing and more?
Goh: At HPE, we spend a lot of our energy working with customers, deploying their machine learning, artificial intelligence and data analytics solutions. That's what we focus on, the use cases. Other bigger internet companies focus more on the fundamentals of making AI more advanced. We spend more of our energy in the application of it. From the application point of view, some customer use cases are similar, but it's interesting that a lot of times, the needs are in best practices.
In the best practices, a lot of times, for example, proof of concepts succeed, but then they fail in their deployment into production. A lot of times, proof of concepts fail because of reasons other than the concept being a failure. A discipline, like engineering, over years, over decades, develops into a discipline, like computer engineering or programming. And over the years, these develop into disciplines where there are certain sets of best practices that people follow. In the practice of artificial intelligence, this will also develop. That's part of the reason why we develop sets of best practices. First, to get from proof of concept to successful deployment, which is where we see a lot of our customers right now. We have one Fortune 500 customer, a large industrial customer, where the CTO/CIO invested in 50 proof of concepts for AI. We were called in to help, to provide guidance as to how to pick from these proof of concepts.
A lot of times they like to test to see if, for a particular use case, it makes sense to apply machine learning in decision support. Then they will invest in a small team, give them funding and get them going. So you see companies doing proof of concepts, like a medium-sized company doing one or two proof of concepts. The key, when I'm brought in to do a workshop with them on this, in transitioning from proof of concept to deployment, is to look at the best practices we've gathered over the use cases we've done over the years.
One lesson is not to say that the proof of concept is successful until you also prove that you can scale it. You have to address the scale question at the beginning. One example is that if you prove that 100 cameras work for facial recognition within certain performance thresholds, it doesn't mean the same concept will work for 100,000 cameras. You have to think through whether what you are implementing can actually scale. This is just one of the different best practices that we saw over time.
Another best practice is that this AI, when deployed, must plug into the existing workflow in a seamless way, so the user doesn't even feel it. Also, you have to be very realistic. We have examples where they promise too much at the beginning, saying that we will deploy on day one. No, you set aside enough time for tuning, because since this is a very new capability for many customers, you need to give them time to interact with it. So don't promise that you'll deploy on day one. Once you implement in production, allow a few months to interact with the customer so they can find what their key performance indicators should be.
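A rough capacity sketch makes the scaling point concrete. The throughput figures below are illustrative assumptions, not numbers from the interview; the point is only that the infrastructure a 100-camera pilot exercises is nowhere near what 100,000 cameras demand.

```python
# Illustrative capacity math for the camera example above; every number
# here is an assumption for the sketch, not a figure from the interview.
def servers_needed(cameras, frames_per_camera_per_sec=5,
                   inferences_per_server_per_sec=400):
    required = cameras * frames_per_camera_per_sec
    servers = -(-required // inferences_per_server_per_sec)  # ceiling division
    return required, servers

for n in (100, 100_000):
    required, servers = servers_needed(n)
    print(f"{n:>7,} cameras -> {required:>9,} inferences/sec -> ~{servers:,} servers")
# 100 cameras need roughly 2 servers; 100,000 need roughly 1,250, plus the
# network, storage and alert-handling capacity the small pilot never exercised.
```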
EnterpriseAI: Are we yet at a point where AI has become a commodity, or are we still seeing enterprise AI technology breakthroughs?
Goh: Both are right. For the specific AI where you have good data to feed machine learning models or deep neural network models, the accuracy is quite high, to the point that people, after using it for a while, trust it. And it's quite prevalent, but some people think that it is not prevalent enough to be commoditized. AI skills are like programming skills a few decades ago: they were highly sought after because very few people knew what it was, knew how to program. But after a few decades of prevalence, you now have enough people to do programming. So perhaps AI has gone that way.
EnterpriseAI: Where do you see the biggest impacts of AI in business? Are there still many things that we haven't seen using AI that we haven't even dreamed up yet?
Goh: Anytime that you're having someone make a decision, AI can be helpful and can be used as a decision support tool. Then there's of course the question about whether you let the machine make the decision for you. In some cases, yes, in a very specific way and if the impact of a wrong decision is less significant. Treat AI as a tool like you would think automation was a tool. It's just another way to automate. If you look back decades ago, machine learning was already being used, it was just not called machine learning. It was a technique used by people in doing statistics, analytics, applying statistics. There definitely is that overlap, where statistics overlap with machine learning, and then machine learning stretches out to deep neural networks, where we reach a point where this method can work, where we essentially have enough data out there, and enough compute power out there to consume it. And therefore, to be able to get the neural network to tune itself to a point where you can actually have it make good decisions. Essentially, you are brute-forcing it with data. That's the overlap. I say we've been at it for a long time, right, we're just looking for new ways to automate.
EnterpriseAI: What interesting enterprise AI projects are you working on right now that you can share with us?
Goh: Two things are on the minds of most people now: COVID-19 vaccines, and back-to-work. These are two areas we have focused on over the last few months.
On the vaccine side, clinical trials and gene expression data, with applying analytics to it. We realized that analytics, machine learning and deep neural networks can be quite useful in making predictions just based on gene expression data. Not just for clinical trials, but also to look ahead to the well-being of persons, by just looking at one sample. It requires highly skilled analytics, machine learning and deep neural network techniques, to try and make predictions ahead of time, when you get a blood sample and genes expressed and measured from it.
The other area is back-to-work [after COVID-19 shutdowns around the nation and world]. It's likely that the workplace is changed now. We call it the new intelligent hybrid workplace. By hybrid we mean a portion is continuing to be remote, while a portion of factory, manufacturing plant or office employees will return to their workplaces. But even on their return, depending on companies, communities, industries and countries, there'll be different requirements and needs.
EnterpriseAI: And AI can help with these kinds of things that we are still dealing with under COVID-19?
Goh: Yes. In certain jurisdictions, for example, if someone is ill with the coronavirus in a factory or an office, you are required to do specialized cleaning in the area around that high-risk person. If you do not have a tool to assist you, there are companies that clean their entire factory because they're not quite sure where that person has been. An office may have cleaned an entire floor, hoping that the person didn't go to other floors. We built an in-building tracing system with our Aruba technology, using Bluetooth Low Energy, talking to WiFi routers and access points. When you identify a particular quarter-sized Bluetooth tag that employees carry, a floorplan immediately shows up with hotspots and warm spots indicating where to send the cleaning services. You're very targeted with your cleaning. The names of the users of those tags are highly restricted for privacy.
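A minimal sketch of the targeted-cleaning idea Goh describes: aggregate sightings of the flagged tag per floor zone and rank the zones that need specialized cleaning. The sighting format, dwell approximation and thresholds below are assumptions for illustration, not Aruba's actual data model or API.

```python
from collections import Counter
from datetime import datetime, timedelta

# Each sighting: (timestamp, tag_id, zone_id) as reported by an access point.
def zones_to_clean(sightings, flagged_tag, window_hours=48, min_minutes=5):
    """Rank floor zones where the flagged tag dwelled recently."""
    cutoff = datetime.now() - timedelta(hours=window_hours)
    dwell = Counter()
    for ts, tag, zone in sightings:
        if tag == flagged_tag and ts >= cutoff:
            dwell[zone] += 1  # crude assumption: one sighting is about one minute of dwell
    # Only zones where the person actually lingered get specialized cleaning.
    return [zone for zone, minutes in dwell.most_common() if minutes >= min_minutes]
```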
EnterpriseAI: Lets dive into the ethics of AI, which is a growing discussion. Do you have concerns about the ethics and policies of using AI in business?
Goh: Like many things in science and engineering, this is as much a social question as it is a technical one. I get asked this a lot by CEOs in companies. Many times, from boards of directors and CEOs, this is the first question, because it affects employees. It affects the community they serve and it affects their business. It's more a societal question than a technical one; that's what I always tell them.
And because of this, that's the reason you don't hear people giving you hard-and-fast rules on this issue. There needs to be a constant dialogue. It will vary by community and by industry: have a dialogue and then converge on consensus. I always tell them, focus on understanding the differences between how a machine makes decisions and how a human makes decisions. Whenever we make a decision, there is a link immediately to the emotional side, and to the generalization capability. We apply judgment.
EnterpriseAI: What do you see as the evolving relationship between HPC and AI?
Goh: Interestingly, the relationship has been there for some time; it's just that we didn't call it AI. Let's take hurricane prediction, for example. In HPC, this is one of the stalwart applications for high performance computing. You put in your physics and run physics simulations on a supercomputer. Next, you measure where the hurricane is forming in the ocean. You then make sure you run your simulation ahead of time, faster than the hurricane that is coming at you. That's one of the major applications of HPC: building your model out of physics, and then running the simulation based on the starting conditions that you've measured out in the ocean.
Machine learning and AI are now used to look at the simulation early on and predict the likelihood of failure. You are using history. People in weather forecasting, or climate and weather forecasting, will already tell you that they're using this technique of historical data to make predictions. And today we are just formalizing this for the other industries.
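The "learn from history" idea Goh describes can be sketched as a standard supervised-learning setup: fit a model on features from past storms or past simulation runs, then ask it about a new one. The features, data and scikit-learn model below are placeholders chosen for the sketch, not the tooling used by any forecasting center.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Stand-in "historical record": one row per past storm/simulation run, with
# a few illustrative predictors and a known outcome (e.g., peak intensity).
rng = np.random.default_rng(0)
X_hist = rng.random((500, 4))   # e.g., sea-surface temp, wind shear, pressure, humidity
y_hist = X_hist @ np.array([3.0, -1.5, 2.0, 0.5]) + rng.normal(0, 0.1, 500)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_hist, y_hist)

# Measurements for a newly forming storm (also synthetic here).
x_new = rng.random((1, 4))
print("Predicted outcome for the new storm:", model.predict(x_new)[0])
```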
EnterpriseAI: What do you think of the emerging AI hardware landscape today, with established chip makers and some 80 startups working on AI chips and platforms for training and inference?
Goh: Through history, it's been the same thing. In the end, there will probably be tens of these chip companies. They came up with different techniques. We're back to the Thinking Machines, the vector machines; it's all RISC processors and so on. There's a proliferation of ideas of how to do this. And eventually, a few of them will stand out, and there will be a clear demarcation, I believe, between training and inference. Because inference needs lower and lower energy, to the point (that should be the vision) that IoT devices should have some inference capability. That means you need to sip energy at a very low level. We're talking about an IoT tag, a Bluetooth Low Energy tag, with a coin battery that should last two years. Today the tag that sends out and receives the information has very little decision-making, let alone inference-level decision-making. In the future you want that to be an intelligent tag, too. There will be a clear demarcation between inference and training.
EnterpriseAI: In the future, where do you see AI capabilities being brought into traditional CPUs? Will they remain separate or could we see chips combining?
Goh: I think it could go one way, or it could totally go the other way and everything gets integrated. If you look at historical trends, in the old days, when we built the first high-performance computers, we had a chip for our CPU, and we had another chip on board called FPU, the floating point unit, and a board for graphics. And then over time the FPU got integrated into the CPU, and now every CPU has an FPU in it for floating point calculations. Then there were networking chips that were on the outside. Now we are starting to see networking chips incorporating into the CPU. But GPUs got so much more powerful in a very specific way.
The big question is, will the CPU go into the GPU, or will the GPU go into the CPU? I think it will depend on a chip company's power and vision. But I believe integration, one way or the other (the CPU going into the GPU or the GPU going into the CPU), will be the case.
EnterpriseAI: What else should I be asking you about the future of AI as we look toward 2021?
Goh: I want to emphasize that many CEOs are keen on starting with AI. They are in phase one, where it is important to understand that data is the key to train machines with. And as such, data quality needs to be there. Quantity is important, but quality needs to be there, the trust of it, the data bias.
We focus on the fact that 80% of the time should be spent on the data even before you start on the AI project. Once you put in that effort, your analytics engine can make better use of it. If you are in phase one, that's what I would recommend. If you are in a proof-of-concept state, then spend time in a workshop to discuss best practices with those who have implemented AI quite a bit. And if you're in the advanced stage, if you know what you're doing, especially if you're successful, do take note that after a while with a good deployment, the accuracy of the prediction drops, so you have to continually retrain your machines. I think it is the practice that I am more focused on.
This article first appeared on sister website EnterpriseAI.news.
On image recognition software, AI, and patents – Innovation Origins
Posted: at 6:17 am
I find them incredibly irritating. Those images you have to click on to prove that you are not a robot. When you are just one click away from a nice weekend getaway, you first have to figure out which of 16 tiny fuzzy squares contain traffic lights. Google makes grateful use of these puzzling attempts. For one thing, the company uses them to train its artificial-intelligence image recognition software. Incidentally, patenting this type of software is commonly misunderstood: contrary to popular opinion, it definitely can be patented.
First of all, let's come back to that recognition hurdle blocking your weekend getaway. Have you ever noticed that in the past you used to have to recognize texts, but nowadays you're almost only presented with traffic situations? Traffic lights, road signs, pedestrian crossings, cyclists, and so on. That's not for nothing. The captcha challenge-response texts were intended to improve text scanning for Google Books. Nowadays, Google is fully focused on image recognition for self-driving vehicles. Just a quick note: the official name is reCAPTCHA, by the way, and remarkably enough, despite the use of TM, this name is not a protected trademark in Europe.
If you want to master something well, you have got to practice a lot. The same goes for artificial intelligence, i.e., AI. The captcha images are therefore displayed to an enormous number of people. But in order to do this, Google must have enough images at its disposal, which must also be sufficiently different. And there must also be images of rainy situations, falling darkness, or sharp sunlight.
Yet it is precisely these kinds of challenging images that you never see in captchas. Why is that? Very simple: We have software for that. There are programs that routinely make the existing images more complicated, for example by adding noise, different colors, or backlight. All incoming images are automatically edited so that the AI software is presented with more and more difficult training material and learns faster that way. Take a look at patent number EP1418509 to see how those more difficult images are created.
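To give a rough idea of what such programs do, here is a minimal sketch in Python using NumPy and Pillow. It only illustrates the general idea of adding noise, colour shifts and simulated backlight; it is not the method of EP1418509, and the file names and parameter values are made up.

# Illustrative image augmentation: noise, colour shift, simulated backlight.
# Not the patented method referenced above; parameters are arbitrary.
import numpy as np
from PIL import Image

def make_harder(path):
    img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32)
    noisy = img + np.random.normal(0.0, 15.0, img.shape)          # sensor-like noise
    shifted = noisy * np.array([1.10, 0.95, 0.90])                # per-channel colour shift
    height = shifted.shape[0]
    gradient = np.linspace(1.4, 1.0, height).reshape(height, 1, 1)
    backlit = shifted * gradient                                  # brighter towards the top
    return Image.fromarray(np.clip(backlit, 0, 255).astype(np.uint8))

make_harder("traffic_light.jpg").save("traffic_light_hard.jpg")   # hypothetical files

Feeding such artificially degraded images back into training is what lets the recognition software cope with conditions that rarely show up in the captchas themselves.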
A few weeks ago I was talking about this with a young entrepreneur who makes image recognition software for the education sector. His software also includes smart image editing techniques so that AI systems learn more quickly and better. He was 100 percent convinced that you cannot patent software in Europe and that, consequently, he could not be held liable for patent infringement when he sold his software.
Understandable, because that is a very persistent myth. It's certainly not the first time I have come across that kind of conviction. But it is not true. Software can be patented perfectly well in Europe; it just has to do something new and inventive in a technical way. A smart technique to train AI systems faster and more effectively can therefore also be patented in Europe. The question is whether it is a technical solution to a technical problem that is also new and inventive at the time the patent is applied for.
Should this entrepreneur patent his image recognition software now? I have no idea. Sometimes it is better to keep smart software a secret. But if he wants to license the software or later sell his company to a multinational, a patent application might be worthwhile. What he definitely has to do is keep a close eye on what others are patenting in this area, so that he can avoid infringing the rights of others and gain inspiration for his own ideas and designs.
By the way, the Netherlands Patent Office has a handy brochure on protecting digital innovations. Check it out.
About this column
In a weekly column, written alternately by Wendy van Ierschot, Eveline van Zeeland, Eugene Franken, Jan Wouters, Katleen Gabriels, Mary Fiers and Hans Helsloot, Innovation Origins tries to figure out what the future will look like. These columnists, occasionally joined by guest bloggers, are all working in their own way on solutions to the problems of our time. So that tomorrow is good. Here are all the previous articles.
Excerpt from:
On image recognition software, AI, and patents - Innovation Origins
Posted in Ai
Comments Off on On image recognition software, AI, and patents – Innovation Origins
The U.S. government needs to get involved in the A.I. race against China, Nasdaq executive says – CNBC
Posted: at 6:17 am
The U.S. needs to take a "strategic approach" as it competes with China on artificial intelligence, according to a Nasdaq executive.
AI is an area that is only going to develop in partnership with government, and U.S. authorities need to get involved, said Edward Knight, vice chairman of Nasdaq.
The Chinese government has already started "investing heavily" and working with their private sector to develop new technologies based on artificial intelligence, he said.
Beijing in 2017 said it wanted to become the world leader in AI by 2030 and aims to make the industry worth 1 trillion yuan ($152 billion). It included a roadmap about how AI could be developed and deployed.
"I think the U.S. already is leading, but it needs more of a strategic approach involving the government," Knight told CNBC's Dan Murphy as part of FinTech Abu Dhabi, which was held online this year. "The private sector alone cannot take on the entire Chinese government and private sector, which is very focused on this."
[Image: A U.S. and a Chinese flag wave outside a commercial building in Beijing. Teh Eng Koon | AFP | Getty Images]
Predicting that society will benefit from any innovation that comes from artificial intelligence, Knight added: "If the U.S. is going to continue to be a growing economy and innovative economy, it has to master that new technology."
Artificial intelligence refers to technology in which computers or machines imitate human intelligence such as in image and pattern recognition. It is increasingly being used in sectors from financial services to health care, but has been criticized as being "more dangerous than nukes" by Tesla CEO Elon Musk.
Musk fears that AI will develop too quickly for humans to safely manage, but researchers have pushed back, calling him a "sensationalist."
Separately, Knight weighed in on what a Biden presidency would mean for the initial public offering market.
He said the pipeline traditionally slows down when a new president comes into office because there's uncertainty about possible policy changes.
However, he sees low interest rates and the likelihood of a divided government as positive for the IPO market. "We expect there will not be radical, if you will, changes in public policy," Knight said. "Change will come incrementally, and I think that makes markets more predictable."
Meanwhile, the Federal Reserve this month said it would keep rates near zero for as long as necessary to help the economy recover from the effects of Covid-19.
"With more predictable markets and low interest rates, I think you'll continue to have a healthy demand and pipeline for IPOs," Knight said.
He also said the president-elect's priority is managing the coronavirus crisis and "hopefully getting to the place where we have a widely available vaccine," which would act as a foundation for a recovery.
"We cannot have a strong economy with unhealthy American people," he said. "Once we can restore their health and deal with the pandemic, I think you'll start to see the economy fully recover."
CNBC's Arjun Kharpal, Sam Shead and Catherine Clifford contributed to this report.
Read more:
Posted in Ai
Comments Off on The U.S. government needs to get involved in the A.I. race against China, Nasdaq executive says – CNBC
MCEME holds webinar on AI – The Hindu
Posted: at 6:17 am
In its efforts to increase the footprint of Artificial Intelligence (AI) in the Indian Army and reap its benefits, the Military College of Electronics and Mechanical Engineering (MCEME) held a webinar on 'Artificial Intelligence Based Prescriptive Maintenance for the Armed Forces' on Saturday.
Discerning readers will know that the Army's Training Command, based out of Shimla, is responsible for the Indian Army's doctrines, training and research and development initiatives in all fields.
MCEME has been in the news for advancing the cause of technical research and education even amidst the COVID-19 pandemic and was the proud recipient of the Golden Peacock National Award for Training, an official release said.
The college has made contributions in the field of technical education and meaningful research for the troops fighting on the borders. Some of the notable advances made by MCEME in this niche domain of Artificial Intelligence include fielding a number of AI-based, field-army-oriented innovations as well as filing for intellectual property rights for the same.
The webinar was flagged off by General Officer Commanding in Chief Army Training Command, Lt Gen Raj Shukla. It was attended by people from academia and industry, delegates from the Indian Navy and the Indian Air Force as well as scientists and leaders from various defence laboratories.
It provided a rare opportunity wherein all major stakeholders got together on a common platform and discussed new ideas for a roadmap for effective AI-based Prescriptive Maintenance implementation in the Indian Army, demonstrating MCEME's resolve towards ensuring that the Make in India initiative sees success and that it emerges as a dominant player on the AI landscape, the release said.
The webinar was conducted over three sessions, with a focus on the relevance of Artificial Intelligence-based Prescriptive Maintenance for the Indian defence forces. Speakers discussed in great detail how the next wave of Artificial Intelligence could revolutionise traditional maintenance paradigms in defence and civil domains.
While the second session saw discussions on the use of the Internet of Things to enable AI in prescriptive maintenance, the third session explored the effects and benefits to be derived from the use of cloud technology in prescriptive maintenance.
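The sessions are described only at a high level, but the underlying idea of prescriptive (as opposed to merely predictive) maintenance is simple: score the failure risk from sensor data, then prescribe a concrete action rather than just raise an alert. The toy sketch below is not from the webinar; the sensor names, thresholds and actions are all assumptions, and a fielded system would use a trained model rather than hand-written rules.

# Toy sketch of prescriptive maintenance; sensor names, thresholds and actions are assumed.
def failure_risk(vibration_mm_s, temperature_c, hours_since_service):
    """Crude risk score in [0, 1]; a real system would use a trained model."""
    risk = 0.0
    risk += 0.5 if vibration_mm_s > 7.0 else 0.0
    risk += 0.3 if temperature_c > 95.0 else 0.0
    risk += 0.2 if hours_since_service > 500 else 0.0
    return risk

def prescribe(reading):
    risk = failure_risk(**reading)
    if risk >= 0.7:
        return "Replace bearing within 48 hours and order the spare part now"
    if risk >= 0.4:
        return "Schedule an inspection at the next halt"
    return "No action; continue monitoring"

print(prescribe({"vibration_mm_s": 8.2, "temperature_c": 97.0, "hours_since_service": 620}))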
The rest is here:
Posted in Ai
Comments Off on MCEME holds webinar on AI – The Hindu
Call for industry views on AI and IP and ViCo – Lexology
Posted: at 6:17 am
The era of public consultations
In the last three years, several intellectual property offices have invited the global IP community to express our views on what we need from them and from legislators. The EPO has been very open, consulting on its quality and efficiency initiatives (via SACEPO), an idea for flexible timing of examination, its strategic plan, its Guidelines for Examination, and the Rules of Procedure of its Boards of Appeal. WIPO, the USPTO, the EPO and the UKIPO have all been consulting on artificial intelligence and IP.
This is excellent, and AA Thornton attorneys have enjoyed the debates, including learning from industry experts about your requirements from the global patent system that exists to serve you. For those industry experts who have not yet joined the debate on AI and IP, we call upon you to express your views: let's tell the IP offices what you need. Some of the consultations will close very soon.
Blink and you missed it
On 13 November, the EPO Boards of Appeal proposed an amendment to their Rules of Procedure (RPBA) to codify their discretion to hold oral proceedings by videoconference. See https://www.epo.org/law-practice/consultation/ongoing.html
Send your comments to RPBAonlineconsultation@epo.org
The proposed new Article 15a of the Rules of Procedure of the Boards of Appeal (oral proceedings by videoconference) is:
Article 15a Oral proceedings by videoconference
(1) The Board may decide to hold oral proceedings pursuant to Article 116 EPC by videoconference if the Board considers it appropriate to do so, either upon request by a party or of its own motion.
(2) Where oral proceedings are scheduled to be held in person, the Chair may allow a party, representative or accompanying person to attend by videoconference. In exceptional circumstances, the Chair may decide that a party, representative or accompanying person shall attend by videoconference.
(3) The Chair may allow any member of the Board in the particular appeal to participate by videoconference.
The decision for industry is whether to accept this or to recommend that appeal hearings are only held by videoconference if all parties to the proceedings agree.
AA Thornton praised the EPO for investing in its videoconferencing capacity for examination hearings long before the current pandemic. We have found videoconference oral proceedings to be efficient and effective for those proceedings, and we recognize the need to avoid long backlogs at the EPO. However, our support for videoconference hearings is dependent on the EPO maintaining a level playing field for all parties to an opposition, and we do not think this will be achieved if only one party to an opposition appeal is able to attend in person and another is required to attend by videoconference because of the global pandemic and travel restrictions imposed by national governments (for example, this could disadvantage a US patent proprietor whose patent has been challenged by a European competitor).
We therefore recommend revising the second sentence of proposed RPBA Article 15a to make it clear that a hybrid appeal hearing (with some parties physically present and some using videoconference) requires the consent of the parties.
Some attorneys have expressed a view that no appeal hearings should take place by videoconference without the consent of all parties to the appeal proceedings. Other attorneys think ViCos are the best possible answer to the global pandemic. Whatever your views, we encourage industry experts to share them with the EPO.
Saving me from domestic chores this weekend
The consultation which could rescue me from cold hard labour is the UKIPO's consultation on AI and IP, which is mentioned here: https://www.gov.uk/government/news/artificial-intelligence-and-intellectual-property-call-for-views
The consultation closes at 11:45pm UK time on 30 November 2020.
Please send me your recommendations for replying to the UKIPO's patent questions, or email AIcallforviews@ipo.gov.uk.
The most fundamental question relating to patents is whether:
i. you recommend that inventions must be devised by one or more humans to be patentable, with this human inventorship retained as an absolute requirement for patentability, or just that UK law should not be changed to account for inventions made using AI-based systems without a longer discussion with industry stakeholders about the economic and social impacts of any potential changes; or
ii. you recommend a legislative change now, to allow patent protection for AI system-generated solutions to technical problems which would have qualified for patent protection if devised by a human, but which currently do not qualify because the contributions by human programmers and operators were too minor or peripheral to qualify as devising an invention under current UK national law.
We should take time to discuss the wording of any proposed legislative change, but I believe there will be significant benefits for applicant companies who invest in AI-driven innovation if UK law recognizes that inventions can be devised using an AI system, and should be protectable when there are significant contributions to the invention by an AI system that is programmed, implemented, trained or controlled by a human. This recognition of human + system contributions would be analogous to the recognition of the different contributions of co-inventors under current UK law: we do not require a single inventor to devise each invention in isolation.
AI experts are already identifying domain-specific inventions generated by their AI systems that would be patentable except for the difficulty of identifying a human inventor who qualifies as the deviser. Should these AI-system-generated solutions be patentable? Are they inventions at all if there is no devising human inventor? Is it necessary to revise patent laws to encourage investment in AI innovation and/or to clarify ownership of inventions for which the human contribution is a small one that falls short of the current understanding of devising an invention under UK law?
Do you think patentability should be based on the contribution to the state of the art regardless of how an invention is devised, and that rules for ownership of AI-generated inventions are needed now; or do you think there must always be an identified (i.e. correctly identified) human inventor for a patent to be granted?
Do you agree that there is an intermediate position which allows for patent protection when a human contributes to devising an invention by making arrangements for an AI system to generate a new and non-obvious solution to a complex technical problem?
Please refer to the UKIPO's specific questions here: https://www.gov.uk/government/consultations/artificial-intelligence-and-intellectual-property-call-for-views/artificial-intelligence-call-for-views-patents and let me know if you wish to discuss your recommendations.
Harmonisation with EPO and USPTO, or a time for change?
This year's decisions on AI-generated solutions at the EPO, USPTO, UKIPO and the High Court of England and Wales were all interesting, but they show us how patent offices and courts are applying the current law rather than telling us whether that law needs to change. AA Thornton attorneys are very happy to discuss, explain and apply the current law, but the patent office consultations have a different purpose: to allow stakeholders to guide those patent offices and government legislators on whether and how IP laws should be changed.
If you think the greatest prize is international harmonisation of laws, you may wish to contribute to WIPO's conversation on AI and IP. Earlier submissions are available here: https://www.wipo.int/about-ip/en/artificial_intelligence/conversation.html
Many WIPO member states and stakeholders are involved.
You may also wish to glance at the October 2020 report on the USPTO's consultation on artificial intelligence and IP policy. The report mentions that it is a priority of the USPTO to maintain US leadership in innovation in emerging technologies, including AI, and to encourage further innovation. The UKIPO has the same objective.
The USPTO report is available here: https://www.uspto.gov/sites/default/files/documents/USPTO_AI-Report_2020-10-07.pdf
The majority of comments received by the USPTO suggested that current AI systems cannot invent without human intervention and that, while humans remain integral to the operation of AI, there is no urgency to modify current US IP laws. A lot of AI innovation is currently patentable in the US, and the USPTO report notes that human contributions may involve designing an AI algorithm, developing an AI system, implementing hardware that is adapted to process the algorithm, or preparing inputs to an AI algorithm; i.e., it recognizes various contributions that allow a human inventor to be named. The report then refers to US statutes and comments from the Federal Circuit courts as an explanation of current inventorship law.
A majority of comments submitted to the USPTO agreed that AI-related patent applications should be assessed as a subset of computer-implemented inventions (as is done at the EPO), and noted that USPTO guidance is available to help applicants and examiners assess subject matter eligibility and disclosure requirements for computer-implemented inventions. We agree with this and have previously noted the consistency between some of the USPTO's 2019 guidance and examples within the EPO's Guidelines for Examination; we believe there is a genuine opportunity for international harmonisation of patent law relating to AI, computer simulation and AI inventorship, because all major patent offices are facing the same issues at the same time.
Other comments received by the USPTO include concerns about enabling disclosure, the impact of AI on determining the knowledge of a person having ordinary skill in the art, and the potential proliferation of prior art, issues that have also been noted in Europe.
However, the USPTO report seems to suggest that the question of whether US law needs to change was answered by many respondents by reference to current US law, rather than by focusing on the needs of industry, so further industry input is desirable (a justification based on current law will always tend to maintain the status quo). Also, the report equates invention with artificial general intelligence (AGI), and this seems unnecessary. We are hearing from AI experts that their existing narrow AI systems can, once trained with vast amounts of domain-specific data, generate new and non-obvious solutions even when there is no easily identified human deviser of an invention.
So there are companies that currently feel unable to apply for UK patent protection for AI-generated solutions to complex technical problems: solutions that would have been patentable if devised solely by a human inventor, and some of which may be patentable in the US in view of a growing recognition of the different ways in which humans contribute to inventions made using AI systems.
We have also heard strong views that UK law and European Patent Office practice need to change to allow patent protection for core AI technologies, including machine learning algorithms that deliver technical advantages, instead of only the EPO-defined specific technical applications of those algorithms (with claims functionally limited to the particular technical purpose) and quite narrowly defined specific technical implementations (where algorithms are adapted to take account of the capabilities or constraints of particular hardware). This is not the main focus of the UKIPO's consultation questions, but perhaps UK legislators can be encouraged to improve this situation via replies to the consultation?
We are hearing an increasing number of industry voices suggesting the need to review current legislation to resolve patentability and ownership issues for AI-generated inventions, to ensure that the patent system encourages investment in AI-based innovation, and to remove the current expectation of future validity and ownership questions. Some industry leaders are happy for this conversation about AI and IP to proceed at a pace that allows their AI experts, economists and IP directors to be fully consulted before IP laws are changed, and of course some AI-based disruptor companies are keen to see more rapid change.
We applaud the patent offices for consulting, since the best way to deal with the wide range of views is to give all stakeholders an opportunity to express them.
Similar to the US consultation, the UKIPO's call for views on AI and IP refers to the UK Government's ambition to encourage growth in transformational new technology sectors and remain at the forefront of the AI and data revolution. It includes a statement that the UKIPO wishes to make sure the UK's IP environment is adapted to accommodate AI technologies such as machine learning, which suggests an open mind about the possibility of legislative change. We are also open minded, and keen to hear from you.
Mike Jennings is a Member of the CIPA Computer Technology Committee, epi, AIPPI, and SACEPO working group on quality. The above article is not intended to represent views of CIPA, epi, AIPPI, the EPO or specific clients of AA Thornton.
Read the original post:
Posted in Ai
Comments Off on Call for industry views on AI and IP and ViCo – Lexology