The Prometheus League
Breaking News and Updates
Daily Archives: April 17, 2022
Elon Musk, tech visionary in the spotlight – Digital Journal
Posted: April 17, 2022 at 11:45 pm
Tesla CEO Elon Musk. (AFP/Arif Ali)
Space conquest: check. Disrupt the auto industry: check. Take over Twitter? Why not. From eccentric entrepreneur to the world's richest man, Elon Musk likes to dream big, and these days he is everywhere you look.
Two decades after banking his first millions, the South African-born Musk last year became the world's richest person, wresting the title from Amazon's Jeff Bezos following the meteoric rise of Tesla, his electric automaker founded in 2003.
The billionaire's latest big splash: a bid announced Thursday to take over Twitter, capping a rollercoaster fortnight of announcements and counter-announcements that Musk punctuated, characteristically, by gleefully firing tweets at the platform.
Just a week earlier, the 50-year-old was making headlines as Tesla cut the ribbon on a gigafactory the size of 100 soccer fields in Texas, where the firm is now based and to which Musk himself has relocated from California.
At the same time, his space transport firm SpaceX was breaking yet another boundary as a partner in a three-way venture to send the first fully private mission to the International Space Station.
Musk also makes news of a less flattering kind: Tesla has faced a series of lawsuits alleging discrimination and harassment against Black workers as well as sexual harassment.
In parallel with the whiplash-inducing stream of business news, Musk's controversy-courting persona, with an unrestrained Twitter style and a penchant for living by his own rules in the private sphere too, keeps the gossip press busy.
It recently emerged Musk had had a second child with his on-again, off-again partner, the musician Grimes: a girl they named Exa Dark Sideræl Musk, although the parents will mostly call her Y.
He is even expected to make an appearance, in person or not, at the celebrity defamation trial pitting Johnny Depp against his ex-wife Amber Heard, who formerly dated Musk.
But one way or another, Musk has become one of the most ubiquitous figures of the era. So how did he get where he is today?
To Mars and beyond?
Born in Pretoria on June 28, 1971, the son of an engineer father and a Canadian-born model mother, Musk left South Africa in his late teens to attend Queen's University in Ontario.
He transferred to the University of Pennsylvania after two years and earned bachelor's degrees in physics and business.
After graduating from the prestigious Ivy League school, Musk enrolled for further studies at Stanford University.
Instead of staying, he dropped out almost immediately and started Zip2, a company that made online publishing software for the media industry.
He banked his first millions before the age of 30 when he sold Zip2 to US computer maker Compaq for more than $300 million in 1999.
Musk's next company, X.com, eventually merged with PayPal, the online payments firm bought by internet auction giant eBay for $1.5 billion in 2002.
After leaving PayPal, Musk embarked on a series of ever more ambitious ventures.
He founded SpaceX in 2002, where he now serves as chief executive officer and chief technology officer, and became the chairman of electric carmaker Tesla in 2004.
After some early crashes and near-misses, SpaceX perfected the art of landing booster engines on solid ground and ocean platforms, rendering them reusable, and late last year sent four tourists into space, on the first ever orbital mission with no professional astronauts on board.
Musk's jokingly named Boring Company is touting an ultra-fast Hyperloop rail transport system that would transport people at near-supersonic speeds.
And Musk has said he wants to make humans an interplanetary species by establishing a colony of people living on Mars.
To this end, SpaceX is developing a prototype rocket, Starship, which it envisages carrying crew and cargo to the Moon, Mars, and beyond, with Musk saying he feels confident of an orbital test this year.
Musk, who holds US, Canadian, and South African citizenship, has been married and divorced three times: once to the Canadian author Justine Wilson and twice to actress Talulah Riley. He has seven children. An eighth child died in infancy.
Forbes estimates Musk's current net worth at $265 billion.
See the original post here:
Elon Musk, tech visionary in the spotlight - Digital Journal
Posted in Mars Colony
The Pentagon Just Confirmed the First-Ever Interstellar Visitor to Earth – Popular Mechanics
Posted: at 11:45 pm
Government sensors on the hunt for fireballs plunging toward Earth have so far logged about 1,000 meteors and asteroids. But only one of them can boast that it traveled through our atmosphere from outside our own Solar System.
This fireball, which shot through our atmosphere over Papua New Guinea in 2014, was no ordinary space rock: it was an interstellar meteor, the first ever known to originate outside our system and arrive on Earth. Rocketing at a speed of over 130,000 miles per hour, the rock broke up during its descent, probably scattering interstellar debris into the South Pacific Ocean.
Confirmation of its distant origins arrived only recently, when the United States Space Command (USSC) released a memo on April 6, confirming that the meteor was indeed an interstellar object.
Before USSC confirmed this meteor was a distant stranger, all previous rocky bodies that fell to Earth were thought to have originated in our own Solar System. Many of them do come from the Asteroid Belt, a swarm of millions of rocks between Mars and Jupiter, some 111.5 million miles from Earth.
Two Harvard University researchers were the first to study the 2014 meteor's distant origin, posting their research on the preprint server arXiv back in 2019 (meaning it was not peer-reviewed at the time). The meteor's unusually high speed implies "a possible origin from the deep interior of a planetary system or a star in the thick disk of the Milky Way galaxy," the researchers state in the study, which will be resubmitted for publication in a peer-reviewed journal in light of the recent confirmation. The researchers combed through records of all the fireballs that U.S. government sensors have detected since 1988.
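The physics behind that inference can be sketched numerically: an object gravitationally bound to the Sun cannot move faster than the solar escape speed at its distance, roughly 42 km/s near Earth's orbit, so a speed well above that points to an unbound, interstellar trajectory. The comparison below is only illustrative, since the reported 130,000 mph is Earth-relative and not corrected for Earth's own motion and gravity:

```python
# A rough check of the interstellar inference (a sketch under stated
# assumptions, not the researchers' full analysis).
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
AU = 1.496e11      # Earth-Sun distance, m

# Maximum speed of a Sun-bound object near Earth's orbit.
v_escape = math.sqrt(2 * G * M_SUN / AU)   # about 42,100 m/s

# The 2014 fireball's reported speed, converted from mph to m/s.
v_meteor = 130_000 * 0.44704               # about 58,100 m/s

print(f"escape speed at 1 AU: {v_escape / 1000:.1f} km/s")
print(f"meteor speed:         {v_meteor / 1000:.1f} km/s")
print("faster than solar escape speed:", v_meteor > v_escape)
```

The measured speed comfortably exceeds the bound-orbit limit, which is the core of the "interstellar" argument.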
One of the researchers, Amir Siraj, wants to find meteor debris scattered on the ocean floor. It may be impossible, given the speed of the disintegrating object, which was only a few feet wide, and the minute pieces that probably resulted from the impact. "We are currently investigating the possibility of embarking on an ocean expedition to recover the first interstellar meteorite. If found, extensive analysis will be conducted on the sample to understand its origin and the information it carries about its parent system," he tells Popular Mechanics by email.
"At first, I could hardly believe the discovery, since astronomers had been searching for an interstellar meteor since 1950 or earlier," says Siraj, who is director of Interstellar Object Studies at Harvard's Galileo Project, which aims to look for extraterrestrial technological artifacts.
Siraj and his Harvard colleague Avi Loeb, who leads the Galileo Project, originally submitted the discovery to The Astrophysical Journal Letters. However, the review process dragged on for years due to missing information that the U.S. government withheld from the Center for Near Earth Object Studies (CNEOS) database, which identifies objects like meteors and asteroids and calculates their odds of hitting Earth. The U.S. Department of Defense operates some of the sensors that detect fireballs in order to monitor the skies for nuclear detonations, so Siraj and Loeb couldn't directly confirm the margin of error on the fireball's velocity.
After moving through NASA, Los Alamos National Laboratory, and several bureaucratic departments, the sensor data finally ended up with Joel Mozer, chief scientist of Space Operations Command at the U.S. Space Force. Mozer released the memo confirming that the velocity estimate reported to NASA is "sufficiently accurate to indicate an interstellar trajectory."
Siraj learned the good news through a NASA scientist's April 6 tweet. Now, he is in the process of revising the paper, taking into account the government confirmation. "This confirmed impact of an interstellar object with the Earth's atmosphere implies that similar objects are very common throughout space, which of course raises interesting questions about how they are ejected in such large quantities from their parent systems," he says. Even if the remnants of the rock are never found, data from the meteor's fiery descent could hold clues to its composition, and maybe its origins.
The chances of a rock from another star system coming close to Earth are slim, but astronomers knew of two other interstellar objects before this recently confirmed discovery. The quarter-mile-long asteroid 'Oumuamua was the first confirmed interstellar object identified in the Solar System; Pan-STARRS, a wide-field astronomical imaging system in Hawaii, detected the massive rock in 2017. Amateur astronomer Gennady Borisov spotted Comet Borisov with his telescope in 2019. It's the first confirmed comet to enter our solar system from some unknown place beyond our sun's influence, according to NASA. Neither of these distant visitors flew close to Earth, though.
"Expanding our sensory capabilities with efforts like the new Vera C. Rubin Observatory's planned ten-year survey is critical to enhance our discovery rate of interstellar objects," Siraj wrote in an arXiv post in November 2021. Who knows? We may even find extra-galactic objects, like the 2007 discovery of a particle that originated outside the Milky Way.
Continue reading here:
The Pentagon Just Confirmed the First-Ever Interstellar Visitor to Earth - Popular Mechanics
Posted in Mars Colony
Does this artificial intelligence think like a human? – Freethink
Posted: at 11:44 pm
In machine learning, understanding why a model makes certain decisions is often just as important as whether those decisions are correct. For instance, a machine-learning model might correctly predict that a skin lesion is cancerous, but it could have done so using an unrelated blip on a clinical photo.
While tools exist to help experts make sense of a model's reasoning, often these methods provide insights on only one decision at a time, and each must be manually evaluated. Models are commonly trained using millions of data inputs, making it almost impossible for a human to evaluate enough decisions to identify patterns.
Now, researchers at MIT and IBM Research have created a method that enables a user to aggregate, sort, and rank these individual explanations to rapidly analyze a machine-learning model's behavior. Their technique, called Shared Interest, incorporates quantifiable metrics that compare how well a model's reasoning matches that of a human.
Shared Interest could help a user easily uncover concerning trends in a model's decision-making; for example, perhaps the model often becomes confused by distracting, irrelevant features, like background objects in photos. Aggregating these insights could help the user quickly and quantitatively determine whether a model is trustworthy and ready to be deployed in a real-world situation.
"In developing Shared Interest, our goal is to be able to scale up this analysis process so that you could understand on a more global level what your model's behavior is," says lead author Angie Boggust, a graduate student in the Visualization Group of the Computer Science and Artificial Intelligence Laboratory (CSAIL).
Boggust wrote the paper with her advisor, Arvind Satyanarayan, an assistant professor of computer science who leads the Visualization Group, as well as Benjamin Hoover and senior author Hendrik Strobelt, both of IBM Research. The paper will be presented at the Conference on Human Factors in Computing Systems.
Boggust began working on this project during a summer internship at IBM, under the mentorship of Strobelt. After returning to MIT, Boggust and Satyanarayan expanded on the project and continued the collaboration with Strobelt and Hoover, who helped deploy the case studies that show how the technique could be used in practice.
Shared Interest leverages popular techniques that show how a machine-learning model made a specific decision, known as saliency methods. If the model is classifying images, saliency methods highlight areas of an image that are important to the model when it made its decision. These areas are visualized as a type of heatmap, called a saliency map, that is often overlaid on the original image. If the model classified the image as a dog, and the dog's head is highlighted, that means those pixels were important to the model when it decided the image contains a dog.
Shared Interest works by comparing saliency methods to ground-truth data. In an image dataset, ground-truth data are typically human-generated annotations that surround the relevant parts of each image. In the previous example, the box would surround the entire dog in the photo. When evaluating an image classification model, Shared Interest compares the model-generated saliency data and the human-generated ground-truth data for the same image to see how well they align.
The technique uses several metrics to quantify that alignment (or misalignment) and then sorts a particular decision into one of eight categories. The categories run the gamut from perfectly human-aligned (the model makes a correct prediction and the highlighted area in the saliency map is identical to the human-generated box) to completely distracted (the model makes an incorrect prediction and does not use any image features found in the human-generated box).
"On one end of the spectrum, your model made the decision for the exact same reason a human did, and on the other end of the spectrum, your model and the human are making this decision for totally different reasons. By quantifying that for all the images in your dataset, you can use that quantification to sort through them," Boggust explains.
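The core comparison can be sketched as an intersection-over-union between the model's salient pixels and the human-annotated region, with that score and the prediction's correctness used to bucket each decision. The category names and thresholds below are illustrative assumptions, not the paper's actual metrics or its eight categories:

```python
# Illustrative sketch of a Shared Interest-style alignment check
# (assumed thresholds and category names, not the published method).

def iou(saliency_pixels: set, truth_pixels: set) -> float:
    """Intersection-over-union between the model's salient pixels and
    the human-annotated ground-truth region."""
    if not saliency_pixels and not truth_pixels:
        return 1.0
    inter = len(saliency_pixels & truth_pixels)
    union = len(saliency_pixels | truth_pixels)
    return inter / union

def categorize(prediction_correct: bool, overlap: float) -> str:
    """Bucket one decision by correctness and saliency alignment."""
    if prediction_correct and overlap > 0.9:
        return "human-aligned"
    if prediction_correct and overlap > 0.0:
        return "sufficient overlap"
    if not prediction_correct and overlap == 0.0:
        return "completely distracted"
    return "misaligned"

# Toy example: the model highlights pixels 0-9, the human box covers 5-14.
sal, gt = set(range(0, 10)), set(range(5, 15))
print(categorize(True, iou(sal, gt)))   # -> sufficient overlap
```

Sorting a whole dataset by such a score is what lets a reviewer surface the "completely distracted" decisions first instead of inspecting images one by one.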
The technique works similarly with text-based data, where key words are highlighted instead of image regions.
The researchers used three case studies to show how Shared Interest could be useful to both nonexperts and machine-learning researchers.
In the first case study, they used Shared Interest to help a dermatologist determine if he should trust a machine-learning model designed to help diagnose cancer from photos of skin lesions. Shared Interest enabled the dermatologist to quickly see examples of the models correct and incorrect predictions. Ultimately, the dermatologist decided he could not trust the model because it made too many predictions based on image artifacts, rather than actual lesions.
"The value here is that using Shared Interest, we are able to see these patterns emerge in our model's behavior. In about half an hour, the dermatologist was able to make a confident decision of whether or not to trust the model and whether or not to deploy it," Boggust says.
In the second case study, they worked with a machine-learning researcher to show how Shared Interest can evaluate a particular saliency method by revealing previously unknown pitfalls in the model. Their technique enabled the researcher to analyze thousands of correct and incorrect decisions in a fraction of the time required by typical manual methods.
In the third case study, they used Shared Interest to dive deeper into a specific image classification example. By manipulating the ground-truth area of the image, they were able to conduct a what-if analysis to see which image features were most important for particular predictions.
The researchers were impressed by how well Shared Interest performed in these case studies, but Boggust cautions that the technique is only as good as the saliency methods it is based upon. If those techniques contain bias or are inaccurate, then Shared Interest will inherit those limitations.
In the future, the researchers want to apply Shared Interest to different types of data, particularly tabular data, which is used in medical records. They also want to use Shared Interest to help improve current saliency techniques. Boggust hopes this research inspires more work that seeks to quantify machine-learning model behavior in ways that make sense to humans.
This work is funded, in part, by the MIT-IBM Watson AI Lab, the United States Air Force Research Laboratory, and the United States Air Force Artificial Intelligence Accelerator.
Republished with permission of MIT News. Read the original article.
Read the original:
Does this artificial intelligence think like a human? - Freethink
Posted in Artificial Intelligence
Should the I in Artificial Intelligence (AI) need a reboot? – Times of India
Posted: at 11:44 pm
Three events took place a few days ago which, at first glance, may look inconsequential. But are they? Read on.
The morning ritual begins with commanding the wrongly comprehending Alexa to play the morning melodies on the flute. Alexa, the voice assistant, obeys after a couple of attempts, and the soothing strains waft through the expanse of the living room.
The rhythmically cooing pigeons swoop onto the terrace, listening for the familiar whistling. They turn their bobbing heads toward the rustling sound of seeds strewn on the terrace floor. Some coo and invite their mates, and others strut and fan their tails to protect their territories. The sumptuous and timely breakfast gets underway. The pigeons decide when to eat and how much to eat.
Shortly after, the news item "Going bananas over Artificial Intelligence" catches the attention. The headline is about a robot trained to peel the humble banana. The news is from a venerable University of Tokyo lab.
Alexa is intelligent, the pigeons are clever, and the robot is dextrous (and human-like)!! Or is it so? Did we use the words portraying intelligence rather loosely here?
From the deep recesses of my mind comes alive the doomsday prophecy warning that soon there would be no distinct difference between what can be achieved by a biological brain and by a computer (aka AI). AI is on its way to emulating human intelligence; soon after, it will exceed it, rule it, and at its peak will replace humankind.
The primacy of humankind is under threat!
Luckily, I had just completed reading the brilliant book The Book of Why by Turing awardee Judea Pearl, along with the seminal article "Human-Level Intelligence or Animal-Like Abilities?" by Adnan Darwiche (UCLA, 2018). They came to my rescue in dousing my fear of humankind being usurped by AI!
Alexa uses natural language processing and speech recognition software. Large amounts of audio training data are used as inputs. The raw data is cleaned and labelled. With the aid of the algorithms, voice assistants understand and fulfil user commands. Is intelligence being drilled into Alexa or is Alexa simply mastering imitation through continual training and learning?
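The labelled-training loop described above can be caricatured in a few lines. This toy intent matcher is purely hypothetical, with invented intents and utterances; real assistants use large speech and language models, but the idea of learning from labelled example commands is the same:

```python
# Hypothetical toy intent classifier: labelled example utterances are the
# "training data", and a command is matched to the closest intent.
from collections import Counter

TRAINING = {  # cleaned, labelled example utterances per intent (invented)
    "play_music": ["play the morning melodies", "play some flute music"],
    "weather": ["what is the weather", "will it rain today"],
}

def score(utterance: str, examples: list) -> int:
    """Count word overlaps between an utterance and an intent's examples."""
    words = Counter(utterance.lower().split())
    return sum(
        sum((words & Counter(ex.lower().split())).values()) for ex in examples
    )

def classify(utterance: str) -> str:
    """Pick the intent whose labelled examples best match the utterance."""
    return max(TRAINING, key=lambda intent: score(utterance, TRAINING[intent]))

print(classify("alexa play flute melodies"))   # -> play_music
```

Whether one calls this imitation or intelligence is exactly the question the article goes on to pose.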
Pigeons are among the smarter birds; their homing abilities have long made them effective carrier birds. This cognitive skill could be a blend of innate trait and committed training. But can it be called intelligence?
Now let us dwell on the robot and the banana. A robot that peels a banana is trained by a deep imitation-learning process to perform this deceptively effortless task. Media coverage makes this an exciting headline, and readers brim with positivity. However, the headlines about the robot's prowess could be misleading. The banana-peeling robot's success rate after thirteen hours of training maxes out at 57%. That is, in forty-three of one hundred attempts it failed the task by squishing the banana. Can this be dubbed intelligence, or is it simply imitation trying to be perfected?

John McCarthy (Stanford University) coined the term "Artificial Intelligence" in 1955. The pithy acronym AI has gained immense ground, with technology breakthroughs like parallel computation, big data, and better algorithms propelling its massive growth.
There is heightened speculation surrounding AI; that human will be replaced by machines. This has been, however, tempered by the fact that humans can leverage AI and AI could augment human capabilities. Attempts have been made to redefine Artificial Intelligence as Augmented Intelligence.
Machines have advantages that humans do not: speed, repeatability, consistency, scalability, and lower cost. Humans have advantages that machines do not: reasoning, originality, feelings, contextuality, and experience.
The triumph of neural networks in applications like speech recognition, vision, and autonomous navigation has led media coverage to be less thoughtful and at times go overboard, quickly equating the automation of tasks with human intelligence. This excitement is mixed with an ample dose of fear. So, is the word intelligence the misnomer here?
Intelligence refers to one's cognitive abilities, which would include capacities to:
1. Comprehend, reason, and imagine
2. Bring in original, at times abstract, thoughts
3. Evaluate and judge
4. Adapt to the context and environment
5. Acquire knowledge, and store and use it as experience
So, if Machine Learning is the way AI is powered to meet only the last point, acquiring knowledge and storing it for later use, then will this not be incomplete intelligence?
At the risk of sounding like a non-conformist, Pearl argues that Artificial Intelligence is handicapped by an incomplete understanding of what intelligence really is. AI applications, as of today, can solve problems that are predictive and diagnostic in nature without attempting to find the cause of the problem. Never denying the transformative, disruptive, complex, and non-trivial power of AI, Pearl has shared a genuine critique of the achievements of Machine Learning and Deep Learning, given their relentless focus on correlation, leading to pattern matching, anomaly finding, and often culminating in mere curve-fitting.
The ladder of causation, i.e., progressing from association to intervention and concluding with counterfactuals, has been Pearl's contribution of immense consequence.
Pearl has been one of the driving forces insisting that correlation-based reasoning should not subsume causal reasoning and the development of causality-based algorithmic tools. If, for example, the programmers of a driverless car want it to react differently to new situations, they should add the new reactions explicitly, which is done through an understanding of cause and effect. Furthermore, Darwiche echoes the concern that exploiting, enjoying, and cheering the current correlation-based AI tools should not come at the cost of representation- and reasoning-based causal tools that build in cause and effect.
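Pearl's distinction between seeing (association) and doing (intervention) can be demonstrated with a tiny simulation. In this assumed example, not taken from the article, a hidden confounder Z drives both X and Y: observational data show X and Y strongly correlated, yet forcing X with the do-operator leaves Y untouched, which no amount of curve-fitting on the observations would reveal:

```python
# Assumed example: a hidden confounder Z drives both X and Y.
# Curve-fitting on observations finds a strong X-Y correlation, yet
# intervening on X (Pearl's do-operator) leaves Y unchanged.
import random

random.seed(0)

def observe(n=10_000):
    """Passive observation: Z -> X and Z -> Y, but no X -> Y arrow."""
    data = []
    for _ in range(n):
        z = random.gauss(0, 1)           # hidden common cause
        x = z + random.gauss(0, 0.1)     # X follows Z
        y = z + random.gauss(0, 0.1)     # Y follows Z, ignores X
        data.append((x, y))
    return data

def intervene(n=10_000, x_forced=1.0):
    """do(X = x_forced): the Z -> X arrow is cut; Y's mechanism is unchanged."""
    return [(x_forced, random.gauss(0, 1) + random.gauss(0, 0.1))
            for _ in range(n)]

def corr(pairs):
    """Pearson correlation coefficient."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs) / n
    vx = sum((x - mx) ** 2 for x, _ in pairs) / n
    vy = sum((y - my) ** 2 for _, y in pairs) / n
    return cov / (vx * vy) ** 0.5

print(f"observational corr(X, Y): {corr(observe()):.2f}")   # near 1.0
mean_y = sum(y for _, y in intervene()) / 10_000
print(f"mean of Y under do(X=1):  {mean_y:.2f}")            # near 0.0
```

The gap between the two printed numbers is precisely the gap between association (rung one of the ladder) and intervention (rung two).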
Only causal reasoning could provide machines with human-level intelligence. This would be the cornerstone of scientific thought and would make human-machine communication effective.
Meanwhile, areas like explainable AI (XAI) and morality and bias in AI should be gainfully addressed.
Till then, the spectre of AI usurping human intelligence is a non-starter. Should we agree that the field of Artificial Intelligence deserves a more apt title, such as "Artificial Ability" or "Augmented Imitation"? Will a reboot of the acronym help dissuade the apocalypticists from painting a grim picture of the impending demotion of humankind?
Views expressed above are the author's own.
END OF ARTICLE
Go here to see the original:
Should the I in Artificial Intelligence (AI) need a reboot? - Times of India
Posted in Artificial Intelligence
Ethics Leader Pushes for More Responsible Artificial Intelligence – Newsroom | University of St. Thomas – University of St. Thomas Newsroom
Posted: at 11:44 pm
From deciding what to watch next on Netflix to ordering lunch from a robot, artificial intelligence (AI) is hard to escape these days.
AI ethics leader Elizabeth M. Adams is an expert on the social and moral implications of artificial intelligence. She recently spoke about overcoming those issues at an Opus College of Business event, "Artificial Intelligence & Diversity, Equity and Inclusion."
Here are four key takeaways.
Artificial intelligence is all around us.
From traffic lights to unlocking mobile phones, computers are working to aid our every move.
"Artificial intelligence is basically training a computer model to think like a human, but at a much faster pace," Adams said.
A futurist at heart, Adams embraces AI wherever and whenever she can.
"I'm a huge proponent of a four-hour workweek," Adams said. "If I could have technology make my coffee, turn on my screens, so I could focus on my other research, I would."
Despite good intentions, artificial intelligence can perpetuate historical bias.
Artificial intelligence hasn't always worked in an inclusive or equitable fashion. Adams points out that AI has often struggled to accurately identify individuals, objects and trends.
Some of those struggles impact our social identity. For example, software programs continue to misidentify Black women as men. Other programs have difficulties identifying individuals, even for some of the most well-known faces in the world, such as Oprah Winfrey and former first lady Michelle Obama.
Other inaccuracies may impact standing in the community or financial well-being. Governments and law enforcement have begun using facial recognition software at a variety of levels, collecting data and information on citizens. Not only does this form of artificial intelligence raise privacy concerns, it can perpetuate bias based on how the technology and data is used.
For an example in business, AI bias has been found in hiring software. Certain resumes can be overlooked based on data that software is trained to value or avoid.
"We're waking up to the challenges of AI, even though there are lots of benefits," Adams said. "For those in vulnerable populations, now you have one more thing, this new technology, that you have to figure out how to navigate in your life."
What is responsible AI?
As discrepancies and inequities come to light, more companies have embraced the use of responsible AI. While an exact definition is still evolving, responsible AI aims to reduce harm to all individuals and embrace the equitable use of artificial intelligence.
"It's very important to have the right people, the right voices at the table when you're designing your technology," Adams said.
Adams lifts up companies like Microsoft and Salesforce as two giants that have been working to roll out responsible AI technology with the help of their entire workforce.
"It's not just a technical problem," Adams said. "It's important to have diverse voices of all disciplines."
Meanwhile global organizations such as the United Nations have put out guidelines for companies to follow for their AI technology.
Everyone must embrace responsible AI.
It's not just mega companies or organizations that can bring about change. Adams stressed that everyone must embrace the new realities of working in a world with AI.
"There are lots of different opportunities to see yourself and to help fix some of the challenges," Adams said. "Responsible AI is really starting to cascade out to the workforce, which is really, really important."
Adams suggested people get started learning about AI by hosting education events, partnering with stakeholders in their community, and speaking with policymakers.
But most of all, she wants everyone to follow their curiosity.
"If you like art, follow your curiosity around AI in art," Adams said. "If you like automobiles, follow your curiosity there. Wherever you decide that AI is important, follow your curiosity."
Posted in Artificial Intelligence
Artificial Intelligence Is Strengthening the U.S. Navy From Within – The National Interest Online
Posted: at 11:44 pm
The Navy is progressively phasing artificial intelligence (AI) into its ship systems, weapons, networks, and command and control infrastructure as computer automation becomes more reliable and advanced algorithms make once-impossible discernments and analyses.
Previously segmented data streams on ships, drones, aircraft, and even submarines are now increasingly able to share organized data in real time, in large measure due to breakthrough advances in AI and machine learning. AI can, for instance, enable command and control systems to identify moments of operational relevance from among hours or days of surveillance data in milliseconds, which saves time, maximizes efficiency, and performs time-consuming procedural tasks autonomously at an exponentially faster speed.
"Multiple data bytes of information will be passed around on the networks here in the near future. So as we think about big data, and how do we handle all that data and turn it into information without getting overloaded, this will be a key part of AI, then we're talking about handling decentralized systems," Nathan Husted of the Naval Surface Warfare Center, Carderock, told an audience at the 2022 Sea Air Space Symposium. "…and of course, AI plays a big part in the management in between the messaging and operation and organization of these decentralized systems."
AI's success could be described paradoxically: in one sense, its utility or value is only as good as the size and quality of its ever-expanding database. Yet by contrast, its conclusions, findings, or answers are very small and precise. Perhaps only two seconds of drone video identify the sought-after enemy target, yet surveillance cameras hold hours if not days of data. AI can reduce the procedural burden placed upon humans and massively expedite the decision-making process.
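The filtering idea described above can be sketched in a few lines. This is a purely illustrative toy, assuming a model has already assigned each frame a relevance score; the function name, timestamps, scores, and the 0.9 threshold are all invented for the example, not Navy specifics:

```python
# Hypothetical sketch: reducing hours of surveillance footage to the few
# moments an analyst actually needs. Frames are (seconds_elapsed, score)
# pairs, where score is an assumed model-produced relevance value.

def flag_relevant_moments(frames, threshold=0.9):
    """Return only the (timestamp, score) pairs scored as operationally
    relevant, preserving chronological order."""
    return [(t, s) for t, s in frames if s >= threshold]

# Hours of footage; only the two-second window around 7195s survives.
footage = [(0, 0.02), (3600, 0.11), (7195, 0.97), (7196, 0.95), (14400, 0.05)]
print(flag_relevant_moments(footage))  # [(7195, 0.97), (7196, 0.95)]
```

The human reviews seconds of output instead of hours of input; the hard part in practice is, of course, producing the scores.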
"If we look at the battlespace, we are actually training for the future. As we look at AI in the battlespace we've got big data and AI systems. So we're going to have this extremely complicated, information-rich combat environment," Husted said.
Navy industry partners also see AI as an evolving technology that will progressively integrate into more ship systems, command and control, and weapons over time as processing speeds increase and new algorithms increase their reliability by honing their ability to assimilate new or unrecognized incoming data. This building block approach, for example, has been adopted by Northrop Grumman in its development of a new ship-integrated energy management, distribution, and storage technology called Integrated Power and Energy Systems (IPES). For instance, Northrop Grumman's solution is built to accommodate new computing applications as they become available, such as AI-generated power optimization and electric plant controls.
The technology seeks to organize and store energy sources to optimize distribution across a sphere of otherwise separated ship assets such as lasers, sensors, command and control, radar, or weapons. AI-enabled computing can help organize incoming metrics and sensor data from disparate ship systems to optimize storage and streamline distribution as needed from a single source depending upon need.
"AI is an emerging capability that shows promise in some of these more complex electrical architectures to manage in near real-time. Future capability that would rely upon AI and be more computationally intensive is likely to happen in some aspects of electric plant controls," Matthew Superczynski, chief engineer for Northrop Grumman's Power/Control Systems, told The National Interest in an interview. "We are building upon the architecture the Navy already has to give them more capability and lower risk. We can build on top of that."
Kris Osborn is the Defense Editor for the National Interest. Osborn previously served at the Pentagon as a Highly Qualified Expert with the Office of the Assistant Secretary of the Army (Acquisition, Logistics & Technology). Osborn has also worked as an anchor and on-air military specialist at national TV networks. He has appeared as a guest military expert on Fox News, MSNBC, The Military Channel, and The History Channel. He also holds a Master's Degree in Comparative Literature from Columbia University.
Image: Flickr.
Top 5 Benefits of Artificial intelligence in Software Testing – Analytics Insight
Posted: at 11:44 pm
Have a look at the top 5 benefits of using Artificial intelligence in software testing
One of the recent buzzwords in the software development industry is artificial intelligence. Even though the use of artificial intelligence in software development is still in its infancy, the technology has already made great strides in automating software development. Integrating AI in software testing enhances the quality of the end product, as the systems adhere to basic standards and also maintain company protocols. So, let us have a look at some of the other crucial benefits offered by AI in software testing.
A method of testing that is getting more and more popular every day is image-based testing using automated visual validation tools. Many ML-based visual validation tools can detect minor UI anomalies that human eyes are likely to miss.
Developers can use shared automated tests to catch problems quickly before sending builds to the QA team. Tests can run automatically whenever source code is changed or checked in, notifying the team or the developer if they fail.
Manual testing is a slow process. And every code change requires new tests that consume the same amount of time as before. AI can be leveraged to automate the test processes. AI provides for precise and continuous testing at a fast pace.
AI/ML tools can read the changes made to the application and understand the relationships between them. Such self-healing scripts observe changes in the application, learn the pattern of those changes, and can then identify a change at runtime without you having to do anything.
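The self-healing idea can be illustrated with a deliberately simplified sketch. The fake page structure, selector names, and fallback logic below are invented for the example; real self-healing tools work against a live browser DOM and learn the alternative selectors automatically:

```python
# Illustrative sketch of a "self-healing" locator: a test helper that
# falls back to learned alternative selectors when the primary one no
# longer matches. A dict stands in for the page under test.

def find_element(dom, learned_selectors):
    """Try each known selector for an element, most recent learning last.

    dom: dict mapping selector -> element payload (stand-in for a page).
    learned_selectors: selectors the tool has observed for this element
    across application versions.
    """
    for selector in learned_selectors:
        if selector in dom:
            return dom[selector]
    raise LookupError("element not found under any known selector")

# The app renamed the button id from 'submit-btn' to 'btn-submit'; a
# healed script still finds it via the learned alternative.
page = {"btn-submit": "<button>Send</button>"}
print(find_element(page, ["submit-btn", "btn-submit"]))
```

The test does not break on the rename; it logs the healed selector and carries on, which is the behavior these tools advertise.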
With software tests being repeated each time source code changes, manually running those tests is not only time-consuming but also expensive. Once created, automated tests can be executed over and over at a much quicker pace, with zero additional cost.
Conclusion: The future of artificial intelligence and machine learning is bright. AI and its adjoining technologies are making new waves in almost every industry and will continue to do so in the future.
Analytics Insight is an influential platform dedicated to insights, trends, and opinions from the world of data-driven technologies. It monitors developments, recognition, and achievements made by Artificial Intelligence, Big Data and Analytics companies across the globe.
Sitting Out of the Artificial Intelligence Arms Race Is Not an Option – The National Interest Online
Posted: at 11:44 pm
Viewing the dangerous advances in military technology, from Nazi V-weapons to hydrogen bombs, investigative journalist I.F. Stone once described arms races as the inevitable product of there being "no limit to the ingenuity of science and no limit to the deviltry of human beings." This dark truth about the era of human-controlled kinetic weapons of mass destruction that so concerned Stone remains true today of the emerging range of increasingly automated systems that may now be fusing scientific ingenuity with a silicon-based deviltry of all its own.
For most of history, from stones to siege guns, warfare consisted of hurling some amount of mass with sufficient energy to do serious harm. The general trend has been toward increasing mass and energy, giving weapons greater range. Yet, until the first automated guidance systems came into play during World War II, the information content of weaponry was quite small, reducing accuracy. But what began with the first ballistic and cruise missiles in 1944 quickened in the following decades, to the point that some missiles had electronic brains of their own to guide them in flight, like the American Tomahawk that went into service in 1983. Even though it's launched at human command, once underway its brain does all of the sensing and maneuvering, over whatever distance, with precision accuracy.
And this increasing information content of weapons isn't just for long-range use. The stalwart Ukrainian defense that has hammered so hard at Russian tanks and helicopters has been greatly enhanced by smart, short-range anti-tank Javelins and anti-aircraft Stingers. Thus, the much heavier and more numerous invading forces have been given a very hard time by defenders whose weapons have brains of their own.
But this is just a small slice of the rising space into which automated systems are moving. Beyond long-range missile strikes and shorter-range battlefield tactics lies a wide variety of other military applications for artificial intelligence. At sea, for example, the Chinese have more than two dozen types of mines, some of which have significant autonomous capabilities for sensing the type of enemy vessel and then rising from the seafloor to attack it. Needless to say, U.S. Navy Ford-class carriers, costing $10 billion-plus apiece, can be mortally threatened by these small, smart, cheap weapons. As for the Russians, their advances in naval robotics have led to the creation of an autonomous U-bot that can dive deep and locate fiber-optic links, either tapping into or severing them. More than 95 percent of international communications move through the roughly 400 such links that exist around the world. So, this bot, produced even in very small numbers, has great potential as a global weapon of mass disruption.
There are other ways in which silicon-based intelligence is being used to bring about the transformation of war in the twenty-first century. In cyberspace, with its botnets and spiders, everything from economy-draining strategic crime to broad infrastructure attacks is greatly empowered by increasingly intelligent autonomous systems. In outer space, the Chinese now have a robot smart enough to sidle up to a satellite and place a small explosive (less than 8 lbs.) in its exhaust nozzleand when the shaped charge goes off, the guts of the satellite are blown without external debris. Mass disruption is coming to both the virtual and orbital realms.
The foregoing prompts the question of what the United States and its friends and allies are doing in response to these troubling advances in the use of artificial intelligence to create new military capabilities. The answer is as troubling as the question: too little. Back in 2018, then-Under Secretary of Defense for Research and Engineering Michael Griffin acknowledged that "There might be an artificial arms race, but we're not in it yet." There was a glimmer of hope that the Americans might be lacing up their running shoes and getting in the AI arms race when Eric Lander became President Joe Biden's science advisor in January 2021, as he had publicly stated that China is making "breathtaking progress" in robotics and that the United States needed to get going. But Lander apparently didn't play well with others and resigned in February 2022. Given that NATO and other friends tend to move in tandem with the Americans, all are too slow getting off the mark.
Beyond personnel issues, the United States and other liberal and free-market societies are having some trouble ramping up to compete in the robot arms race for three other reasons. The first is conceptual, with many in military, political, and academic circles taking the view that advances in artificial intelligence do not fit classical notions and patterns of weapons-based arms races. It is hard to make the case for urgency, for the need to race, when there doesn't even seem to be a race underway.
Next, at the structural level, the United States and other free-market-based societies tend to see most research in robotics undertaken by the private sector. The Pentagon currently spends about 1 percent of its budget (just a bit over $7 billion) on advancing artificial intelligence. And in the American private sector, much of the research in AI is focused on improving business practices and increasing consumer comfort. In China, by contrast, about 85 percent of robotics research is state-funded and military-related. The Russians are following a kind of hybrid system, with the Putin government funding some 400 companies' research in strategic robotics. As Putin has said in a number of his speeches, the leader in artificial intelligence "will become master of the world." So, it seems that the structure of market societies is making it a bit harder to compete with authoritarians who can, with the stroke of a pen, set their countries' directions in the robot arms race and provide all necessary funding.
The final impediment to getting wholeheartedly into the robot arms race is ethical. Throughout the free world, there is considerable concern about the idea of giving kill decisions in battle over to autonomous machines. Indeed, there is so much resistance to this possibility that a major initiative at the United Nations has sought to outlaw lethal autonomous weapon systems (LAWS). Civil society NGOs have supported this proposed ban and drawn celebrity adherents like Steve Wozniak and Elon Musk to the cause. Pope Francis has joined this movement, too.
One of the main concerns of all these objectors is the possibility that robots will unwittingly kill innocent non-combatants. Of course, human soldiers have always caused civilian casualties, and still do. Given the human penchant for cognitive difficulties arising from fatigue, anger, desire for revenge, or just the fog of war, there is an interesting discussion that needs to be had about whether robotic warriors will be likely to cause more or possibly less collateral damage than human soldiers do.
So far, the United States, Britain, and a few other democracies have resisted adopting a ban on weaponized robotics; but the increasingly heated discourse about killer robots even in these lands has slowed their development and use. Needless to say, neither China nor Russia has shown even the very slightest hesitation about developing military robots, giving them the edge in this arms race.
It is clear that the ideal first expressed eighty years ago in the opening clause of Isaac Asimov's First Law of Robotics, "A robot may not injure a human being," is being widely disregarded in many places. And those who choose to live by the First Law, or whose organizational structures impede swift progress in military robotics, are doomed to fall fatally behind in an arms race now well underway. It is a race to build autonomous weapons that will have as much impact on military affairs in the twenty-first century as aircraft did on land and naval warfare in the twentieth century. Simply put, sitting out this arms race is not an option.
John Arquilla is Distinguished Professor Emeritus at the United States Naval Postgraduate School and author, most recently, of Bitskrieg: The New Challenge of Cyberwarfare. The views expressed are his alone.
Image: Flickr.
Flight Simulator in the Age of Virtual Reality (VR) and Artificial Intelligence (AI) – ReAnIn Analysis – PR Newswire
Posted: at 11:44 pm
HYDERABAD, India, April 14, 2022 /PRNewswire/ -- According to ReAnIn, the global aircraft simulation market was valued at USD 5,837.59 million in the year 2021 and is projected to reach USD 8,952.96 million by the year 2028, registering a CAGR of 6.3% during the forecast period. Increasing demand for pilots for commercial aircraft, significant cost savings associated with the simulator in comparison with training in actual aircraft, and technological advancements in simulators are primary drivers for the aircraft simulation market. However, the COVID-19 pandemic had a severe impact on the growth of this market as various restrictions were imposed and international borders were closed for the majority of the months in 2020. Also, there is a consensus among industry experts that recovery to the pre-pandemic level might take a few years.
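The headline figures are internally consistent, as a quick back-of-the-envelope check shows (a sketch in Python; the values come straight from the release):

```python
# Verify the reported CAGR: growing USD 5,837.59M (2021) to
# USD 8,952.96M (2028) spans seven years, so the compound annual
# growth rate is (end / start) ** (1 / years) - 1.

start, end, years = 5837.59, 8952.96, 2028 - 2021
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # 6.3%, matching the report
```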
More than 260,000 new pilots for the civil aviation industry will be required over the next decade according to CAE, a leading player in the aircraft simulators market
According to CAE's pilot demand outlook report, there were about 387,000 active pilots for civil aircraft in 2019, a figure expected to increase to about 484,000 in 2029. Moreover, more than 167,000 pilots will have to be replaced during this time period. Hence, about 264,000 new pilots will have to be trained between 2019 and 2029. As simulators are an important aspect of pilot training, demand for aircraft simulators is expected to increase significantly in the near future. Asia Pacific is expected to be the growth engine, with the highest demand at ~91,000 pilots, more than one-third of these new pilots.
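The arithmetic behind the 264,000 figure can be restated directly (values from the CAE outlook as cited above; the variable names are ours):

```python
# New pilots needed 2019-2029 = fleet growth + retirements/replacements.

active_2019 = 387_000
active_2029 = 484_000
replacements = 167_000

growth = active_2029 - active_2019   # 97,000 additional seats
new_pilots = growth + replacements   # pilots who must be trained
print(new_pilots)  # 264000
```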
Furthermore, technological advancements such as virtual reality (VR) and artificial intelligence (AI) are expected to fuel the growth of the flight simulators market. In April 2019, the US Air Force launched a Pilot Training Next class using VR headsets and advanced AI biometrics. The use of VR and AI significantly reduced the training period and cost. The usual pilot training system takes about a year, while VR-based training was completed in just 4 months. Moreover, the cost of VR-based flight training was about US$1,000 per unit, while the usual cost was US$4.5 million for a legacy simulator. In April 2021, the European Union Aviation Safety Agency (EASA) granted the first certificate for a Virtual Reality (VR)-based Flight Simulation Training Device (FSTD).
Competitive Landscape
Key players in the aircraft simulation market include CAE Inc., Boeing Company, Collins Aerospace, FlightSafety International, L3Harris Technologies, Precision Flight Controls, SIMCOM Aviation Training, Frasca International, TRU Simulation + Training, Airbus Group, Indra Sistemas, and Thales Group.
About ReAnIn
ReAnIn provides end-to-end market research services which span across different support areas such as syndicated market research reports, custom market research reports, consulting, long-term engagement, and flash delivery. We are proud to have more than 100 Fortune 500 companies in our clientele. We are a client-first organization and we are known not just for meeting our client expectations but for exceeding them.
Media Contact:
Name: Deepak Kumar; Email: [emailprotected]; Phone: +1 469-730-0260
SOURCE Reanin Research & Consulting Private Limited
New York City's New Law Regulating the Use of Artificial Intelligence in Employment Decisions – JD Supra
Posted: at 11:44 pm
On Nov. 10, 2021, the New York City Council passed a bill that regulates employers' and employment agencies' use of automated employment decision tools in making employment decisions. The bill was returned without Mayor Bill de Blasio's signature and lapsed into law on Dec. 11, 2021. The new law takes effect on Jan. 1, 2023. This new law is part of a growing trend toward examining and regulating the use of artificial intelligence (AI) in hiring, promotional, and other employment decisions.
Requirements of the New Law. The new law regulates employers' and employment agencies' use of automated employment decision tools on candidates and employees residing in New York City. An "automated employment decision tool" refers to "any computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence, that issues simplified output, including a score, classification, or recommendation, that is used to substantially assist or replace discretionary decision making for making employment decisions that impact natural persons."
The new law prohibits an employer or employment agency from using an automated employment decision tool in making an employment decision unless, prior to using the tool, the following requirements are met: (1) the tool has been subject to a bias audit within the last year; and (2) a summary of the results of the most recent bias audit and distribution data for the tool have been made publicly available on the employer's or employment agency's website. A "bias audit" is defined as an impartial evaluation by an independent auditor, which includes the testing of an automated employment decision tool to assess the tool's disparate impact on persons of any component 1 category required to be reported by employers pursuant to 42 U.S.C. 2000e-8(c) and 29 C.F.R. 1602.7.
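The law defines what a bias audit must assess, but not how. As a hedged illustration only: auditors commonly borrow the "four-fifths rule" heuristic from EEOC selection-procedure practice, under which a group's selection rate below 80% of the most-favored group's rate signals potential disparate impact. The function names, category labels, and counts below are invented for the sketch; the statute does not mandate this particular test:

```python
# Toy disparate-impact screen in the style of the four-fifths rule.
# outcomes maps a demographic category to (selected, total) counts.

def selection_rates(outcomes):
    """{category: (selected, total)} -> {category: selection rate}"""
    return {g: sel / tot for g, (sel, tot) in outcomes.items()}

def four_fifths_check(outcomes, ratio=0.8):
    """True means the group's rate is within 80% of the best rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: r / best >= ratio for g, r in rates.items()}

audit = {"group_a": (45, 100), "group_b": (30, 100)}
print(four_fifths_check(audit))  # group_b: 0.30/0.45 < 0.8, so flagged
```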
The new law also requires employers and employment agencies to satisfy two notice requirements. First, at least 10 business days before using the tool, the employer or employment agency must notify a candidate or employee who resides in New York City of the following: (1) that an automated employment decision tool will be used in assessing the candidate or employee; and (2) the job qualifications and characteristics that the tool will use in the assessment. The employer or employment agency must allow the candidate or employee to request an alternative process or accommodation. However, the law is silent as to the employer's or employment agency's obligation to provide such alternative process or accommodation. Second, the employer or employment agency must disclose on their website, or make available to a candidate or employee within 30 days of receiving a written request, the following: (1) information about the type of data collected for the automated employment decision tool; (2) the source of the collected data; and (3) the employer's or employment agency's data retention policy.
Penalties for Violations. Violations of the new law will result in liability for a civil penalty of up to $500 for the first violation and each additional violation occurring on the same day as the first violation, and a civil penalty between $500 and $1,500 for each subsequent violation. Importantly, each day the automated employment decision tool is used in violation of the law constitutes a separate violation and the failure to provide any of the required notices constitutes a separate violation.
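Because each day of unlawful use and each missing notice counts as a separate violation, exposure compounds quickly. A small sketch of the maximum statutory exposure under the schedule just described (our own simplification; actual penalties are assessed case by case and may be as low as $500 per subsequent violation):

```python
# Maximum civil penalty under the schedule as described: $500 for the
# first violation and each additional violation occurring the same day,
# then $500-$1,500 for each subsequent violation. We take the $1,500
# ceiling for subsequent violations.

def max_penalty(first_day_violations, later_violations):
    first_day = 500 * first_day_violations
    later = 1500 * later_violations
    return first_day + later

# E.g. a tool used unlawfully 3 times on day one, then 10 more times:
print(max_penalty(3, 10))  # 3*$500 + 10*$1,500 = $16,500
```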
Recommendations for Timely Compliance. Employers with candidates or employees who reside in New York City can take several steps now to facilitate compliance with this new requirement when it goes into effect on Jan. 1, 2023. Employers should ensure that any covered automated employment decision tool that they plan to use in 2023 or thereafter to assess New York City candidates and employees is subject to a bias audit by an independent auditor and the results of such audit are available on their website. Additionally, we recommend that employers and employment agencies work with their legal counsel to develop and implement practices that comply with the notice provisions required by the new law.
Other Regulations on Automated Employment Decision Tools. Several states and cities have passed or are considering similar laws regarding the use of artificial intelligence and other technology in employment decisions. For example, Illinois' Artificial Intelligence Video Interview Act, which took effect Jan. 1, 2020, requires employers using AI interview technology to provide advance notice and an explanation of the technology to applicants, to obtain the applicant's consent to use the technology, and to comply with restrictions on the distribution and retention of videos. Similarly, Maryland enacted a law that took effect Oct. 1, 2020, which requires employers to obtain an applicant's written consent and a waiver prior to using facial recognition technology during pre-employment job interviews. California and Washington, D.C. have also proposed legislation that would address the use of AI in the employment context.
Additionally, on Oct. 28, 2021, the U.S. Equal Employment Opportunity Commission (EEOC) launched a new initiative aimed at ensuring artificial intelligence and other technological tools used in making employment decisions comply with the federal civil rights laws. As part of its initiative, the EEOC will gather information about the adoption, design and impact of employment-related technologies, and issue technical assistance to provide employers with guidance on algorithmic fairness and the use of artificial intelligence in employment decisions.
[View source.]