The Prometheus League
Breaking News and Updates
Category Archives: Artificial Intelligence
Regulatory Cross Cutting with Artificial Intelligence and Imported Seafood | FoodSafetyTech
Posted: March 21, 2021 at 4:45 pm
Since 2019, the FDA's cross-cutting work has incorporated artificial intelligence (AI) as part of its New Era of Smarter Food Safety initiative. This new application of available data sources can strengthen the agency's public health mission, the goal being to use AI to quickly and efficiently identify products that may pose a threat to public health and to impede their entry into the U.S. market.
On February 8, the FDA announced the next phase of its AI activity, the Imported Seafood Pilot program. Running from February 1 through July 31, 2021, the pilot will allow the FDA to study and evaluate the utility of AI in support of import targeting, ultimately assisting with the implementation of an AI model to target high-risk seafood products, a critical strategy given that the United States imports nearly 94% of its seafood, according to the FDA.
In the past, scrutiny of seafood shipments, such as field exams, label exams or laboratory analysis of samples, was driven by human intervention and/or trend analysis; with AI technologies, FDA surveillance and regulatory efforts might be improved. Artificial intelligence can process large amounts of data faster and more accurately, giving the agency the capability to revamp its regulatory compliance work and helping importers understand compliance requirements and act on them correctly. FDA compliance officers would also get actionable insights faster, ensuring that operations can keep up with emerging compliance requirements.
Predictive Risk-based Evaluation for Dynamic Imports Compliance (PREDICT) is the current electronic tracking system that the FDA uses to evaluate risk using a database screening system. It combs through every distribution line of imported food and ranks risk based on human inputs of historical data, classifying foods as higher or lower risk. Higher-risk foods get more scrutiny at ports of entry. It is worth noting that AI is not intended to replace those noticeable PREDICT trends, but rather to augment them. AI will be part of a wider toolset for regulators who want to figure out how and why certain trends happen so that they can make informed decisions.
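The PREDICT approach described above, ranking import lines by violation rates drawn from historical data and classifying products as higher or lower risk, can be illustrated with a toy sketch. This is purely hypothetical code for illustration; the real PREDICT system, its data model and its thresholds are not public in this form.

```python
# Hypothetical illustration (not FDA code): score products by their
# historical violation rate and label them higher or lower risk,
# in the spirit of the PREDICT screening described in the article.
from collections import defaultdict

def build_risk_scores(history):
    """history: list of (product, was_violative) records -> violation rates."""
    counts = defaultdict(lambda: [0, 0])  # product -> [violations, total]
    for product, violative in history:
        counts[product][0] += int(violative)
        counts[product][1] += 1
    return {p: v / t for p, (v, t) in counts.items()}

def classify(scores, threshold=0.2):
    """Label each product 'higher' or 'lower' risk by its violation rate."""
    return {p: ("higher" if s >= threshold else "lower")
            for p, s in scores.items()}

# Illustrative, made-up shipment history
history = [
    ("shrimp", True), ("shrimp", False), ("shrimp", False),
    ("tilapia", False), ("tilapia", False),
    ("swordfish", True), ("swordfish", True), ("swordfish", False),
]
scores = build_risk_scores(history)
labels = classify(scores)
print(labels)  # swordfish and shrimp flagged higher risk, tilapia lower
```

An ML model in the pilot would, in effect, replace the hand-set threshold and single feature here with patterns learned across many more signals.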
AI's focus in this regard is to strengthen food safety through machine learning and the identification of complex patterns in large data sets in order to detect and predict risk. AI combined with PREDICT has the potential to be the tool that expedites the clearance of lower-risk seafood shipments and identifies those that are higher risk.
The unleashing of data through this sophisticated mechanism can expedite sample collection, review and analysis with a focus on prevention and action-oriented information.
American consumers want safe food, whether it is domestically produced or imported from abroad. The FDA needs to transform its computing and technology infrastructure to keep pace with rapid advances in product and process technology and to ensure that those advances translate into meaningful results for these consumers.
There is a lot we humans can learn from data generated by machine learning, and because of that learning curve, the FDA is not expecting to see a reduction in import enforcement actions during the pilot program. Inputs will need to be adjusted, as will performance measures and targets for violative seafood shipments, and priority will be given to building smart machines capable of performing tasks that typically require human interaction and to optimizing workplans, planning and logistics.
In the future, AI will assist the FDA in making regulatory decisions about which facilities must be inspected, what foods are most likely to make people sick, and other risk prioritization factors. As times and technologies change, the FDA is changing with them, but its objective remains the protection of public health. There is much promise in AI, but developing a food safety algorithm takes time. The FDA's pilot program focusing on AI's capabilities to strengthen the safety of U.S. seafood imports is a strong next step in predictive analytics in support of the FDA's New Era of Smarter Food Safety.
Posted in Artificial Intelligence
Artificial Intelligence: Reinforcing discrimination – The Parliament Magazine
Posted: at 4:45 pm
Whether it's police brutality, the disproportionate over-exposure of racial minorities to COVID-19 or persistent discrimination in the labour market, Europe is waking up to structural racism. Amid the hardships of the pandemic and the environmental crisis, new technological threats are arising. One challenge will be to contest the ways in which emerging technologies, like Artificial Intelligence (AI), reinforce existing forms of discrimination.
From predictive policing systems that disproportionately score racialised communities with a higher risk of future criminality, all the way to the deployment of facial recognition technologies that consistently misidentify people of colour, we see how so-called "neutral" technologies are secretly harming marginalised communities.
The use of data-driven systems to surveil and provide a logic for discrimination is not novel. Biometric data collection systems such as fingerprinting have their origins in colonial systems of control. The use of biometric markers to experiment, discriminate and exterminate was also a feature of the Nazi regime.
To this day in the EU, we have seen a number of similar, worrying practices, including the use of pseudo-scientific lie detection technology piloted on migrants in the course of their visa application process. This is just one example where governments, institutions and companies are extracting data from people in extremely precarious situations.
Many of the most harmful AI applications rely on large datasets of biometric data as a basis for identification, decision making and predictions. What is new in Europe, however, is that such undemocratic projects could be legitimised by a policy agenda promoting the uptake of AI in all areas of public life.
The EU policy debate on AI, while recognising some risks associated with the technology, has overwhelmingly focused on the purported widespread benefits of AI. If this means shying away from clear legal limits in the name of promoting innovation, Europe's people of colour will be the first to pay the price.
Soon, MEPs will need to take a position on the European Commission's legislative proposal on AI. While EU leaders such as Executive Vice-President Vestager and Vice-President Jourová have spoken of the need to ensure AI systems do not amplify racism, the Commission has been under pressure from tech companies like Google to avoid over-regulation. Yet the true test of whether innovations are worthwhile is how far they make people's lives better.
When industry claims human rights safeguards will hinder innovation, they are creating a false distinction between technological and social progress. Considerations of profit should not be used to justify discriminatory or other harmful technologies.
Human rights mustn't come second in the race to innovate; they should rather define innovations that better humanity. A key test will be how far the EU's proposal recognises this. As the Commission looks to balance the aims of promoting innovation and ensuring technology is trustworthy and human-centric, it may suggest a number of limited regulatory techniques.
The first is to impose protections and safeguards only for the highest-risk AI applications. This would mean that, despite the unpredictable and ever-changing nature of machine learning systems, only a minority of systems would actually be subject to regulation, despite the harms being far more widespread.
The second technique would be to take limited actions requiring technical de-biasing, such as making datasets more representative. However, such approaches rarely prevent discriminatory outcomes from AI systems. Until we address the underlying causes of why data encodes systemic racism, these solutions will not work.
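To make concrete what "technical de-biasing" by making datasets more representative can mean mechanically, here is a minimal, purely illustrative sketch of reweighting records so each group carries equal total weight. The function and data are hypothetical, and, as argued above, such fixes do not address the underlying causes of encoded discrimination.

```python
# Hypothetical sketch: reweight records so every group contributes the
# same total weight, instead of its (skewed) share of the raw data.
from collections import Counter

def representation_weights(groups):
    """Return a per-record weight giving each group equal total weight."""
    counts = Counter(groups)
    n_groups = len(counts)
    n = len(groups)
    # weight = (target share per group) / (observed count of the group)
    return [(n / n_groups) / counts[g] for g in groups]

groups = ["A", "A", "A", "A", "B"]  # group B is under-represented
weights = representation_weights(groups)
# Each group's weights now sum to the same total (n / n_groups = 2.5)
print(weights)
```

A model trained with these weights sees both groups "equally", yet the labels and features themselves may still encode the systemic patterns the article describes.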
Both of these proposals would provide insufficient protection from systems that are already having a vastly negative impact on human rights, in particular to those of us already over-surveilled and discriminated against. What these solutions fail to address is that, in a world of deeply embedded discrimination, certain technologies will, by definition, reproduce broader patterns of racism.
There is no quick fix, no risk assessment sophisticated enough, to undo centuries of systemic racism and discrimination. The problem is not just baked into the technology, but into the systems in which we live. In most cases, data-driven systems will only make discrimination harder to pin down and contest. Digital, human rights and antiracist organisations have been clear that more structural solutions are needed.
One major step put forward by the pan-European Reclaim Your Face campaign is an outright ban on destructive biometric mass surveillance technologies. The campaign, coordinated by European Digital Rights (EDRi), includes 45 organisations calling for a permanent end to technologies such as facial, gait, emotion, and ear canal recognition that target and disproportionately oppress racialised communities.
The Reclaim Your Face European Citizens' Initiative petition aims to collect one million signatures to call for a Europe-wide ban and promote a future without surveillance, discrimination and criminalisation based on how we look or where we are from.
Beyond facial recognition, EDRi, along with 61 other human rights organisations, has called on the European Union to include "red lines", or legal limits on the most harmful technologies, in its laws on AI, especially those that deepen structural discrimination. The upcoming AI regulation is the perfect opportunity to do this.
AI may bring significant benefits to our societies, but these benefits must be for us all. We cannot accept technologies that only benefit those who sell and deploy them. This is especially valid in areas rife with discrimination. Some decisions are too important and too dangerous to be made by an algorithm. This is the EU's opportunity to make people a priority and stop discriminatory AI before it's too late.
Artificial Intelligence and the Art of Culinary Presentation – Columbia University
Posted: at 4:45 pm
How can culinary traditions be preserved, Spratt asked, when food is ultimately meant to be consumed? UNESCO recognizes French cuisine as an intangible heritage, which it defines as not the cultural manifestation itself, but rather the wealth of knowledge and skills that is transmitted through it from one generation to the next.
The gastronomic algorithms project, in contrast, emphasizes the cultural manifestation itself. Specifically, the project focuses on the artistic dimension of plating through Passard's use of collages to visually conceive of actual plates of food. Taking this one step further, the project also explores how fruit-and-vegetable-embellished paintings by the Italian Renaissance artist Giuseppe Arcimboldo (1526-1593) could be reproduced through the use of artificial intelligence tools.
Spratt then asked the leading question of her research: How could GANs, a generative form of AI, emulate the culinary images, and would doing so visually reveal anything about the creative process between the chefs' abstracted notions of the plates and collages, and their actual visual execution as dishes?
Experimenting With Datasets
Although Passard's collages are a source of inspiration for his platings, a one-to-one visual correlation between the appearance of both does not exist. The dataset initially comprised photos posted by Passard on Instagram, images provided by the restaurant's employees, and photos captured by Spratt at L'Arpège during each of the different seasons. This was later supplemented by images of vegetables and fruits on plates, as well as sliced variations, procured from the internet using web scraping tools.
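Assembling a training set from several sources, as described above, typically requires merging image collections and dropping duplicates. The sketch below is a hypothetical illustration of that step using content hashes; the source names and data are invented, not taken from Spratt's project.

```python
# Hypothetical sketch of merging image collections from several sources
# (e.g. Instagram posts, staff photos, scraped images) while dropping
# byte-identical duplicates via a content hash.
import hashlib

def content_key(data: bytes) -> str:
    """Stable fingerprint of an image's raw bytes."""
    return hashlib.sha256(data).hexdigest()

def merge_sources(*sources):
    """Each source is a dict {filename: bytes}; later duplicates are dropped."""
    seen, merged = set(), {}
    for source in sources:
        for name, data in source.items():
            key = content_key(data)
            if key not in seen:
                seen.add(key)
                merged[name] = data
    return merged

# Invented stand-in data for illustration
instagram = {"plate1.jpg": b"img-aaa", "plate2.jpg": b"img-bbb"}
staff = {"kitchen1.jpg": b"img-ccc", "dup.jpg": b"img-aaa"}  # same bytes as plate1.jpg
scraped = {"veg1.jpg": b"img-ddd"}
dataset = merge_sources(instagram, staff, scraped)
print(sorted(dataset))  # dup.jpg is dropped; four unique images remain
```

Real pipelines often go further, using perceptual hashes so near-duplicate photos (crops, re-encodes) are also caught.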
Artificial Intelligence and the Future of Humans – LA Canyon News
Posted: at 4:45 pm
The onward progression of technology is unstoppable. There are many applications of Artificial Intelligence, from online gaming to building management to the workplace. While we may be concerned by the apocalyptic applications of AI shown in the movies, the experts suggest that the rise of artificial intelligence will make most of us more money over the next ten years. But is money everything when we may lose our careers and, more vitally, our free will?
Much of this AI is code-driven. Algorithms are used to mimic the intelligence of humans and offer previously unimagined opportunities and potential threats. Experts suggest that this networked AI will make us all more effective, but it will also threaten our levels of autonomy. There is a real possibility that computer intelligence will match and maybe even surpass that of humans. The smart systems we use in our everyday lives are meant to save us time and money and improve our lives, helping to customize our experience into the future.
The positives
Most of the optimism about artificial intelligence concerns research in health care. There are many applications of this technology in the diagnosis and treatment of patients. It might even help older people live fuller and healthier lives. While we should be concerned that this technology-driven world would require a massive amount of our data to make it work, the benefits might outweigh the threats.
Education is another area where the changes might be revolutionary. It could transform the way we learn, moving us away from formal models to informal education systems. AI should be able to predict what you need to develop next and remove the need to know or memorize details.
While there is a lot of optimism and excitement about the technology, there are also many notes of worry and concern. Some of these concerns include:
Maintaining control of our lives: We maintain our agency because we can make our own decisions. While we are flawed, it is this possibility that we make errors that makes us human. Some failures, the non-optimal options that people have the capacity to take, might actually turn out to be happy accidents. With AI, therefore, comes the loss of this right to make a mistake; instead we will be dependent on the tech and the power systems that control it.
Open to abuse: Most companies seeking to use AI want to make a profit, or are organizations that hope to gain power. Data can therefore be used to surveil the population and to manipulate choices based on what has been found out. While we would hope for the best qualities in our business and political leaders, ethics are not often built into digital systems. As the world is fully networked, it is also not easy to regulate.
Loss of our jobs: While most people predict that AI will make a lot of people more affluent, for many it will mean the loss of jobs. The advantage of machines is that they can complete repetitive jobs efficiently and accurately every time, and they can perform in situations that are too dangerous for humans.
The real consequence is that the gap between the haves and the have nots will widen and there will be increasing social unrest at inequalities.
We are dependent: While we believe that this technology is augmenting human capabilities, we are more likely to allow skills and understanding to lapse, devolving them to the machine. An optimistic view is that we can then turn our attention to deeper learning. However, the likelihood is that we become more and more dependent on machines and will not be able to function without them. The disruption of power supplies could also prove a significant threat to societies.
Our power supplies and reliance on networks open us to cybercrime and the weaponization of information. If you want to increase your levels of anxiety a little more, consider the number of autonomous weapons systems around the world that are controlled by AI. If you are starting to think Terminator here, we are a long way from these computers having a consciousness and acting independently of humans.
In short: The possibilities for AI and our human future are endless. There is much potential for using it for the common good. If we can just develop an aspiration and ambition to uphold values, then AI could enhance our future.
However, we have to be realistic. Such technology offers opportunity for abuse, and we must move forward with caution, using all our human intelligence to put safeguards in place that will protect us.
Technology, artificial intelligence in focus for the Biden Administration and the 117th Congress Seen through the lens of competition with China -…
Posted: at 4:45 pm
As the new administration staffs up and Capitol Hill lawmakers begin to contemplate post-pandemic priorities, countering China's advances in artificial intelligence and other disruptive technologies has emerged as a major driving force for US policymakers. This alert provides a summary of expected new AI-related legislation, an overview of a recent report by the National Security Commission on Artificial Intelligence, highlights of the Biden Administration's approach to technology and AI, as well as the key AI-related policymakers in the Administration and in Congress.
Three developments in the last month signal a focused mindset of policymakers in Washington, DC to counter China on technology:
Washington sees maintaining and extending US leadership in technological innovation as a vital national security imperative, both:
While a strategic imperative to move fast and out-compete China is prevalent in the highest levels of the federal government, progressive elements of the Democratic majorities in Congress may advocate for cautionary brakes and regulatory guardrails on this rapid technology development, such as AI algorithmic impact assessments, audits and penalties for developers of AI applications. Europe is currently considering some of the strictest AI regulations in the world today, and US policymakers will likely face pressure from across the Atlantic to issue further guidance or even consider targeted, agency-specific regulations of high-risk AI applications.
AI and the great power competition
The March 1 NSCAI final report could be seen by some as a wake-up call, since the report highlights that other nations are not standing idly by, and thus some experts believe the Defense Department must move beyond the legacy systems that have defined military planning for decades. The findings, quarterly recommendations and stark conclusions of the report have reverberated in high-level defense and foreign policy circles and sounded the alarm to members of Congress, staff and the general public.
Eric Schmidt, Chairman of the NSCAI, declared the AI competition with China "a national emergency and a threat to our nation unless we get our act together with respect to focusing on AI in the federal government and international security."
The 15-member Commission, composed of technologists, business executives, academic leaders and national security professionals, was created under the fiscal year 2019 National Defense Authorization Act (NDAA) to consider the methods and means necessary to advance the development of artificial intelligence, machine learning, and associated technologies to comprehensively address the national security and defense needs of the United States.
Among the NSCAI report's takeaway headlines:
The Commissioners focused on four pillars for immediate action:
Many of these recommendations, which span the entire federal government, have a good shot at serious legislative consideration, with the fiscal year 2022 National Defense Authorization Act being the most likely vehicle to carry many of the policy proposals.
New administration, similar competitive tech concerns about China
While President Biden has used his executive powers to reverse a host of policies enacted by his predecessor, one area of potential continuity from the Trump era is an aggressive posture towards China.
Other Biden Administration technology/AI initiatives and personnel
The pending rule is part of a broader effort to secure US supply chains, bolster US manufacturing and enhance the role of science, particularly at a time when a global shortage of semiconductor chips is causing severe production cutbacks in automotive and consumer electronics manufacturing.
President Biden has announced a number of appointments and nominations of officials who will take leading roles on AI and related issues and has raised the profile of key posts with jurisdiction over cyber and technology issues.
In his first major speech as America's top diplomat, Secretary of State Antony Blinken said:
Advancing US tech to counter China a big priority on Capitol Hill
Senator Schumer, leader of the newly minted Democratic majority in the Senate, has directed "the chairs and members of our relevant committees" to start drafting a legislative package to out-compete China and create new American jobs.
Congress laid some of the groundwork for implementing a more comprehensive national AI strategy with the passage on New Year's Day (over Trump's veto) of the FY 2021 NDAA, which incorporated the National AI Initiative Act. The White House on January 12 fulfilled the law's requirement to establish the National AI Initiative Office, responsible for coordinating AI research and policymaking across government, industry and academia.
The National AI Initiative Act, also known as Division E of the NDAA, was the most significant AI legislation to date to be enacted by Congress and will serve as the foundation for non-defense AI policy for the federal government in the years ahead. Division E established a coordinated, civilian-led federal initiative to accelerate research and development and encourage investments in trustworthy AI systems for the economic and national security of the United States. The legislation authorizes policies and significant funding for the National Science Foundation (NSF), the National Institute of Standards and Technology (NIST) and the Department of Energy.
In the 117th Congress, focus will shift to monitoring implementation of this legislation and appropriating additional dollars to resource the initiative. Democratic majorities in both the House and Senate can be expected to apply greater attention and scrutiny to AI applications and their outcomes. Industry should expect increased policy and regulatory focus on ensuring accountability of AI through impact assessments and audits of AI algorithms. In her confirmation hearing, newly sworn-in Commerce Secretary Gina Raimondo pledged to work with Congress on a bipartisan basis as part of the Advisory Committee on AI required by the defense policy bill. Raimondo's department has jurisdiction over key science policy bureaus, including NIST.
The House Armed Services Committee has established a new Subcommittee on Cyber, Innovative Technologies, and Information Systems, with Representative James Langevin (D-RI) as chair and House AI Caucus member Elise Stefanik (R-NY) as ranking member. AI Caucus member and Endless Frontiers Act sponsor Representative Ro Khanna (D-CA) is also on this subcommittee.
Additional key congressional players on AI issues
Senator Rob Portman (R-OH), co-founder and co-chair of the Senate AI Caucus, announced that his current term will be his last and he will not seek re-election in 2022. But he has demonstrated that he will continue to be a leading voice on AI issues over the next two years, including in his capacity as ranking member of the Homeland Security and Governmental Affairs Committee which, among other responsibilities, has authority to investigate the efficiency, economy and effectiveness of all agencies and departments of the government.
Senator Martin Heinrich (D-NM), fellow co-chair and co-founder of the AI Caucus, is moving to the Appropriations Committee. He authored the Senate's Artificial Intelligence Initiative Act in 2019.
Many of the provisions included in that legislation became law via the FY21 NDAA, enacted on New Year's Day 2021. Senator Heinrich is likely to continue pushing for responsible and trustworthy AI funding and policies for government agencies while providing congressional oversight of the newly created White House National AI Initiative Office.
Representative Jerry McNerney (D-CA) continues as the Democratic co-chair of the House AI Caucus. He is particularly passionate and focused on AI workforce and research issues and is a member of the House Science Committee.
Representative Anthony Gonzalez (R-OH) will take over as the Republican co-chair of the House AI Caucus. Gonzalez, now in his second term, was not a member of the AI Caucus previously, but he was part of a bipartisan group of House members who called on NIST to develop a framework of strategies, guidelines and best practices for AI that will bolster innovation and ethical practices in developing and implementing artificial intelligence across the US. He is also interested in AI's impacts on the workforce as a representative from the Rust Belt.
Current AI Caucus membership in the 117th Congress:
Senate AI Caucus
CO-CHAIRS
Martin Heinrich (D-NM)
Rob Portman (R-OH)
MEMBERS
Brian Schatz (D-HI)
Joni Ernst (R-IA)
Gary Peters (D-MI)
Mike Rounds (R-SD)
Maggie Hassan (D-NH)
House AI Caucus
CO-CHAIRS
Jerry McNerney (D-CA-09)
Anthony Gonzalez (R-OH-16)
MEMBERS
Don Beyer (D-VA-08)
GK Butterfield (D-NC-01)
André Carson (D-IN-07)
Emanuel Cleaver II (D-MO-05)
Suzan DelBene (D-WA-01)
Mark DeSaulnier (D-CA-11)
Nanette Diaz Barragán (D-CA-44)
Debbie Dingell (D-MI-12)
Anna G. Eshoo (D-CA-18)
Bill Foster (D-IL-11)
Josh Gottheimer (D-NJ-05)
Pramila Jayapal (D-WA-07)
Henry C. "Hank" Johnson (D-GA-04)
Ro Khanna (D-CA-17)
Derek Kilmer (D-WA-06)
Brenda Lawrence (D-MI-14)
Ted Lieu (D-CA-33)
Michael McCaul (R-TX-10)
Bobby Rush (D-IL-01)
Brad Sherman (D-CA-30)
Darren Soto (D-FL-09)
Elise Stefanik (R-NY-21)
Steve Stivers (R-OH-15)
Marc Veasey (D-TX-33)
US India Artificial Intelligence Initiative Launched, To Boost Bilateral Cooperation In Research And Development – Swarajya
Posted: at 4:45 pm
The Indo-U.S. Science and Technology Forum's (IUSSTF) US India Artificial Intelligence (USIAI) Initiative was launched on Wednesday (17 March).
IUSSTF is a bilateral organization funded by the Department of Science and Technology (DST), Government of India, and the US Department of State.
The USIAI Initiative focuses on Artificial Intelligence (AI) cooperation in critical areas prioritized by both countries. "It will serve as a platform to discuss opportunities, challenges, and barriers for bilateral AI research and development collaboration, enable AI innovation, help share ideas for developing an AI workforce, and recommend modes and mechanisms for catalyzing partnerships," DST said in a statement.
"The Indo-US relationship in the field of science and technology is very old, and collaborations have resulted in great benefits for both countries. We need to further scale it up in various fields, and AI can play a major role in the future. We have identified the barriers for growth in India that could be useful for the United States too," said Professor Ashutosh Sharma, Secretary, DST.
The US-India AI Initiative will provide an opportunity for key stakeholder groups to share experiences, identify new research and development areas that would benefit from synergistic activities, discuss the emerging AI landscape, and address the challenges of developing an AI workforce.
The ambitious flagship initiative, USIAI, leverages IUSSTF's unique ability to bring together key stakeholders from India and the United States to create synergies that address challenges and opportunities at the interface of science, technology, and society.
Over the next year, IUSSTF will conduct a series of roundtables and workshops to gather input from different stakeholder communities and prepare white papers that identify technical, research, infrastructure, and workforce opportunities and challenges, and domain-specific opportunities for research and development in healthcare, smart cities, materials, agriculture, energy, and manufacturing.
Posted in Artificial Intelligence
Professor of Artificial intelligence and Machine Learning job with UNIVERSITY OF EAST LONDON | 249199 – Times Higher Education (THE)
Posted: at 4:45 pm
Do you have proven expertise in Artificial Intelligence and Machine Learning and an established international reputation within the field, both in industry and academia? Are you looking for a challenging role in an environment that is open, vibrant and welcomes new ideas? Then Be The Change, follow your passion and join the University of East London as Professor of Artificial intelligence and Machine Learning.
These are exciting times at the University as, under a brand new transformational 10-year strategy, Vision 2028, we're committed to providing students with the skills necessary to thrive in an ever-changing world, including increasing the diversity of the talent pipeline, particularly for Industry 4.0 jobs. Our pioneering and forward-thinking vision is set to make a positive and significant impact on the communities we serve, and to inspire our staff and students to reach their full potential. This is your chance to be part of that journey.
Join us, and you'll be a key member of the Computer Science & Digital Technologies department within the School of Architecture, Computing and Engineering. Your challenge? To raise the profile of the department and school, specifically through impactful applied research in disciplines that include Deep Learning, Computer Vision and Natural Language Processing. But that's not all. We'll also rely on you to lead and develop the School's work, both in relation to taught courses and in terms of research, consultancy, knowledge transfer and income generation. And, as a senior academic leader, you'll be instrumental in shaping the School's strategy for promoting research, learning & teaching and employability initiatives.
Playing a prominent role in obtaining funding for research and knowledge exchange activities in your area of expertise will be important too. We'll also encourage you to contribute to other aspects of the School's work, such as staff development activities, mentoring and supporting the development of early career researchers, and joint supervision of PhD students. Put simply, you'll bring leadership, vision and inspiration for the future direction of research and teaching in AI.
To succeed, you'll need a PhD in Computer Science or another relevant area, plus experience of teaching in higher education, or of training in a professional context, and of applying innovative and successful approaches to learning. You'll also need a proven ability to lead on the fusion of practice and theory in specific disciplines, in-depth experience of research & knowledge exchange projects, and a record of significant research & knowledge exchange grant capture and/or income generation or equivalent. As comfortable developing and managing major research grant applications as you are communicating academic findings to policy and wider public audiences, you will also have experience of PhD supervision as a Director of Studies and of other research mentorship activities.
In summary, you have what it takes to act as a role model and ambassador, raising the University's profile, increasing its impact and influence, and establishing links with a variety of businesses, public and third sector organisations.
So, if you have what we are looking for and are keen to take on this exciting challenge, get in touch.
At the University of East London, we aim to attract and retain the best possible staff and offer a working environment at the heart of a dynamic region with excellent transport links. You can look forward to a warm, sincere welcome, genuine camaraderie and mobility in an institution led with passion, visibility and purpose. Your impact, resilience and sense of collegiality will directly contribute to the University's future and those of the students whose lives you will touch and change forever. We also offer a great range of benefits including pension, family friendly policies and an on-site nursery and gym at our Docklands Campus.
Closing date: 13 April 2021.
Posted in Artificial Intelligence
AI governance: Reducing risk while reaping rewards – CIO
Posted: at 4:45 pm
AI governance touches many functional areas within the enterprise: data privacy, algorithm bias, compliance, ethics, and much more. As a result, governing the use of artificial intelligence technologies requires action on many levels.
"It does not start at the IT level or the project level," says Kamlesh Mhashilkar, head of the data and analytics practice at Tata Consultancy Services. "AI governance also happens at the government level, at the board of directors level, and at the CSO level," he says.
In healthcare, for example, AI models must pass stringent audits and inspections, he says. Many other industries also have applicable regulations. "And at the board level, it's about economic behaviors," Mhashilkar says. "What kinds of risks do you embrace when you introduce AI?"
As for the C-suite, AI agendas are purpose-driven. For example, the CFO will be attuned to shareholder value and profitability. CIOs and chief data officers are also key stakeholders, as are marketing and compliance chiefs. And thats not to mention customers and suppliers.
Not all companies will need to take action on all fronts in building out an AI governance strategy. Smaller companies in particular may have little influence on what big vendors or regulatory groups do. Still, all companies are or will soon be using artificial intelligence and related technologies, even if they are simply embedded in the third-party tools and services they use.
Posted in Artificial Intelligence
Artificial Intelligence can debate and it's pretty good at it (but not as good as the best humans) – ZME Science
Posted: at 4:45 pm
Debating, held in high regard since the time of the Ancient Greeks (and even before that), has a new participant. It's not quite as eloquent and sharp as the likes of Socrates or Cicero, but it can hold its own against some debaters, hinting at a future where AI can understand and formulate complex arguments with ease.
In 2019, an unusual debate was held in San Francisco. The topic of the debate was "We should subsidize preschool," and it featured Harish Natarajan, a 2016 World Debating Championships Grand Finalist and 2012 European Debate Champion. His opponent was Project Debater, an autonomous debating system.
The structure of the debate was simple. Noam Slonim, an IBM researcher in Israel, explains how it worked: a four-minute opening statement, a four-minute rebuttal, and a two-minute summary.
"The speech by Harish was captured via Watson's Speech to Text in real-time, which was then ingested by our algorithms in the Cloud to build the rebuttal, which took under a minute," Slonim explains.
Both contestants had about 15 minutes to prepare, which for Project Debater meant scouring its database for relevant arguments, although "the topic of this debate was never included in the training data of the system," Slonim emphasizes.
"We polled our live audience of around 800 attendees before and after the debate and then calculated the difference to see how many were persuaded to the other side," he notes.
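The persuasion measure Slonim describes is a simple before/after comparison of audience shares. A minimal sketch of that calculation (the stance labels and vote counts below are hypothetical, not the actual 2019 poll results):

```python
def persuasion_shift(before: dict, after: dict) -> dict:
    """Return the change in audience share for each stance.

    `before` and `after` map a stance (e.g. "for", "against",
    "undecided") to a raw vote count from the respective poll.
    """
    total_before = sum(before.values())
    total_after = sum(after.values())
    return {
        stance: after.get(stance, 0) / total_after
        - before.get(stance, 0) / total_before
        for stance in set(before) | set(after)
    }

# Hypothetical counts for an audience of 800 (not the real poll numbers).
shift = persuasion_shift(
    before={"for": 400, "against": 240, "undecided": 160},
    after={"for": 480, "against": 256, "undecided": 64},
)
# A positive value means that side gained audience share after the debate.
```

Using shares rather than raw counts keeps the comparison valid even if some attendees skip one of the two polls.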
The AI, it turns out, isn't able to stand up to the world's best debaters yet, but it may be able to defeat the less prepared, and it could hold its own against even some experienced debaters. Its growth is also impressive: from zero to the current performance in a couple of years.
"As the system matured it was very similar to watching a junior-level debater grow up in front of your eyes," Slonim tells me in an email, his satisfaction betrayed by a smiling emoji. "In 2016, during the first live debates we had with the system, and after nearly 4 years of research, it was still performing at the level of a toddler and was not making a lot of sense. Only three years later, it seems fair to say that the system achieved the performance of a decent university-level debater. So, from kindergarten to university in only three years, which was interesting to observe."
Slonim and collaborators went on to host several live debates which confirmed the AI's capability, showing that non-human debaters are ready to enter the stage. But the impact of their work goes way beyond that.
Artificial Intelligence algorithms can already do a lot of things, but debating (or analyzing complex arguments) is one of the fields considered to be AI-proof.
"The study of arguments has an academic pedigree stretching back to the ancient Greeks, and spans disciplines from theoretical philosophy to computational engineering. Developing computer systems that can recognize arguments in natural human language is one of the most demanding challenges in the field of artificial intelligence (AI)," writes Chris Reed in a News and Views article that accompanied the study.
Since the 1950s, AI research has progressed greatly, becoming able to compete against humans in a number of games. First, algorithms conquered chess; more recently, they even conquered the game of Go, a feat long thought impossible.
But in the new paper, Slonim and colleagues argue that all these games lie within the comfort zone of AI, based on several simple observations. Debates are a whole new ballgame.
First, in games there is a clear definition of a winner, facilitating the use of reinforcement learning techniques. Second, in games, individual game moves are clearly defined, and the value of such moves can often be quantified objectively, enabling the use of game-solving techniques. Third, while playing a game an AI system may come up with any tactic to ensure winning, even if the associated moves could not be easily interpreted by humans. Finally, for many AI grand challenges, massive amounts of relevant data, e.g. in the form of complete games played by humans, were available for the development of the system.
None of these four characteristics holds for competitive debates. Thus, the challenge taken on by Project Debater seems to reside outside the AI comfort zone, in a territory where humans still prevail and new paradigms are needed to make substantial progress.
To overcome these challenges, Project Debater scans an archive of 400 million newspaper articles and Wikipedia pages to form opening statements and counter-arguments. It's able to debate on a wide variety of topics, scoring high on opening statements.
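The article does not describe how the retrieval over that archive works, but the basic step — pulling topic-relevant sentences from a large corpus — can be illustrated with a toy word-overlap ranking (a stand-in for illustration only, not IBM's actual method; the mini-corpus below is invented):

```python
def rank_sentences(topic: str, corpus: list[str], k: int = 3) -> list[str]:
    """Rank corpus sentences by word overlap with the topic (toy scoring)."""
    topic_words = set(topic.lower().split())

    def score(sentence: str) -> int:
        # Count how many topic words also appear in the sentence.
        return len(topic_words & set(sentence.lower().split()))

    return sorted(corpus, key=score, reverse=True)[:k]

# Invented mini-corpus; the real system searches millions of articles.
corpus = [
    "Subsidized preschool improves long-term outcomes for children.",
    "The stock market closed higher on Tuesday.",
    "Critics argue preschool subsidies strain public budgets.",
]
top = rank_sentences("we should subsidize preschool", corpus, k=2)
# `top` keeps the two sentences that mention the debate topic.
```

Production systems use learned semantic similarity rather than raw word overlap, but the shape of the problem is the same: score every candidate against the topic and keep the top k.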
While the authors conclude that debating humans is still outside the AI comfort zone, it's an important proof of concept, and once again, AIs are ready to rise up to new challenges.
These new challenges, researchers say, could be quite important.
The broad goal of this new AI was to help people make unbiased, informed decisions. As is often the case with pioneering AI algorithms, though, the scope reaches beyond what has already been accomplished. In this case, an AI that can present arguments and counter-arguments is very useful as an adviser.
"Whether you are a politician or a CEO, you are likely to make decisions based on instinct and experience, which may be vulnerable to blind spots or a bias. So the question is, what if AI could help you see data to eliminate or reduce the bias? You still may ultimately make the same decision, but at least you are better informed about other opinions. This also addresses the echo chamber or social media bubble challenge that we see currently, particularly around the COVID vaccine and whether people should get it or not," Slonim says.
Already, the technology is being put to work. The Project Debater API was made freely available for academic use, including two modules called Narrative Generation and Key Point Analysis.
"When given a set of arguments, Narrative Generation constructs a well-structured speech that supports or contests a given topic, according to the specified polarity. And Key Point Analysis is a new and promising approach for summarization, with an important quantitative angle. This service summarizes a collection of comments on a given topic as a small set of key points, and the prominence of each key point is given by the number of its matching sentences in the given data," he explains.
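Slonim's description of Key Point Analysis — a small set of key points, each scored by how many input sentences match it — can be sketched with a naive keyword matcher (a toy approximation; the real service uses trained matching models, and the comments and key points below are invented):

```python
# A small stopword list so trivial words don't count as matches.
STOPWORDS = {"the", "a", "an", "is", "are", "was", "and", "to", "of", "it"}

def key_point_prominence(key_points: list[str], comments: list[str]) -> dict[str, int]:
    """For each key point, count comments sharing at least one content word."""
    def content_words(text: str) -> set[str]:
        return {w.strip(".,!?").lower() for w in text.split()} - STOPWORDS

    return {
        kp: sum(1 for c in comments if content_words(kp) & content_words(c))
        for kp in key_points
    }

# Invented feedback comments and key points, purely for illustration.
comments = [
    "Delivery was fast and reliable.",
    "The delivery arrived quickly.",
    "Customer support never answered my call.",
]
prominence = key_point_prominence(
    ["Delivery is fast", "Support is unresponsive"], comments
)
```

The prominence counts give the quantitative angle Slonim mentions: a decision-maker sees not just which opinions exist, but how widely each one is held.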
A lot of ideas can build on this existing model. Already, technologies built by the Project Debater team were recently used on a TV show called "That's Debatable," and this week they are being used during the Grammy Awards to allow fans to debate pop culture topics, Slonim tells me.
The idea that you can comb through thousands of arguments made by other people and compile and summarize them can be very useful in a number of scenarios. The approach can also eliminate bias, or at least reduce it to the bias present in the voice of the crowd.
"Think of a company that would like to collect feedback about a service or a product from thousands of clients; an employer who would like to learn the opinions of thousands of employees; or a government that would like to hear the voice of its citizens about a policy being examined. In all these cases, by analyzing people's opinions, our technology can establish a unique and effective communication channel between the decision-maker and the people that might be impacted by the decision."
Project Debater is a crucial step in the development of argument technology, and given the deluge of misinformation we're faced with on a daily basis, it couldn't come soon enough.
"Project Debater tackles a grand challenge that acts mainly as a rallying cry for research; it also represents an advance towards AI that can contribute to human reasoning," Reed concludes in the News & Views article.
You can watch an entire debate below.
Posted in Artificial Intelligence
A.I. Is Everywhere and Evolving – The New York Times
Posted: February 25, 2021 at 1:57 am
Researchers are working on combining the technologies to create realistic 2D avatars of people who can interact in real time, showing emotion and making context-relevant gestures. A Samsung-associated company called Neon has introduced an early version of such avatars, though the technology has a long way to go before it is practical to use.
Such avatars could help revolutionize education. Artificial intelligence researchers are already developing A.I. tutoring systems that can track student behavior, predict their performance and deliver content and strategies to both improve that performance and prevent students from losing interest. A.I. tutors hold the promise of truly personalized education available to anyone in the world with an Internet-connected device, provided they are willing to surrender some privacy.
"Having a visual interaction with a face that expresses emotions, that expresses support, is very important for teachers," said Yoshua Bengio, a professor at the University of Montreal and the founder of Mila, an artificial intelligence research institute. Korbit, a company founded by one of his students, Iulian Serban, and Riiid, based in South Korea, are already using this technology in education, though Mr. Bengio says it may be a decade or more before such tutors have natural language fluidity and semantic understanding.
There are seemingly endless ways in which artificial intelligence is beginning to touch our lives, from discovering new materials and new drugs (A.I. has already played a role in the development of Covid-19 vaccines by narrowing the field of possibilities for scientists to search) to picking the fruit we eat and sorting the garbage we throw away. Self-driving cars work; they're just waiting for laws and regulations to catch up with them.
Artificial intelligence is even starting to write software and may eventually write more complex A.I. Diffblue, a start-up out of Oxford University, has an A.I. system that automates the writing of software tests, a task that takes up as much as a third of expensive developers' time. Justin Gottschlich, who runs the machine programming research group at Intel Labs, envisions a day when anyone can create software simply by telling an A.I. system clearly what they want the software to do.
"I can imagine people like my mom creating software," he said, "even though she can't write a line of code."
Craig S. Smith is a former correspondent for The Times and hosts the podcast Eye on A.I.
Posted in Artificial Intelligence