The Prometheus League
Breaking News and Updates
Category Archives: Artificial Intelligence
How to amend the Artificial Intelligence Act to avoid the misuse of high-risk AI systems – The Parliament Magazine
Posted: April 22, 2022 at 4:36 am
As the opinion rapporteur for the Artificial Intelligence Act in the Committee on Culture and Education (CULT), I will present a proposal for amending the Artificial Intelligence Act in March. The draft focuses on several key areas of artificial intelligence (AI), such as high-risk AI in education, high-risk AI requirements and obligations, AI and fundamental rights as well as prohibited practices and transparency obligations.
The regulation aims to create a legal framework that prevents discrimination and prohibits practices that violate fundamental rights or endanger our safety or health. One of the most problematic areas is the use of remote biometric identification systems in public spaces.
Unfortunately, the use of such systems has increased rapidly, especially by governments and companies to monitor places of gathering, for example. It is incredibly easy for law enforcement authorities to abuse these systems for mass surveillance of citizens. Therefore, the use of remote biometric identification and emotion recognition systems is over the line and must be banned completely.
Moreover, the misuse of technology is concerning. I am worried that countries without a functioning rule of law will use it to persecute journalists and prevent their investigations. It is obviously happening to a certain extent in Poland and Hungary, where governments have used the Pegasus software to track journalists and members of the opposition. How hard will it be for these governments to abuse remote biometric identification, such as facial recognition systems?
As far as we know, the Hungarian government has already persecuted journalists in the so-called interest of national security for questioning the government's actions amid the pandemic. Even the Chinese social credit system, which ranks the country's citizens, is based on the alleged purpose of ensuring security.
It is absolutely necessary to set rules that will prevent governments from abusing AI systems to violate fundamental rights. In October, a majority of the European Parliament voted in favour of a report on the use of AI in criminal law. The vote showed a clear direction for the European Parliament in this matter.
The proposal includes a definition of so-called high-risk AI systems. HR tools that could filter applications, banking systems that evaluate our creditworthiness and predictive control systems all fall under the definition of high-risk because they could easily reproduce bias and deepen disparities.
With AI being present in education as well, the proposal includes test evaluation and entrance examination systems. Still, this list should be expanded to include online proctoring systems. However, differing interpretations of the GDPR in the case of online proctoring systems have resulted in differences in personal data protection in Amsterdam, Copenhagen and Milan.
According to the Dutch and Danish decisions, there was no conflict between online proctoring systems and the GDPR, but the Italian data protection authority fined and banned further use of these technologies. Currently, universities are investing in new technologies without knowing whether they are authorised to use them or if they are going to be fined.
In my opinion, technologies used for students' personalised education should be included in the high-risk category as well. In this case, incorrect usage can negatively affect a student's future.
In addition to education, the CULT committee focuses on the media sector, where AI systems can be easily misused to spread disinformation. As a result, the functioning of democracy and society may be in danger.
When incorrectly deployed, AI systems that recommend content and learn from our responses can systematically display content that forms so-called rabbit holes of disinformation. This increases hatred and the polarisation of society and has a negative impact on democratic functioning.
We need to set clear rules that will not be easy to circumvent. Currently, I am working on a draft legislative opinion which will be presented in the CULT committee in March. I will do my best to fill all the gaps that I have identified.
The Council is also working on its position. The compromise presented by the Slovenian presidency, for example, extends the provisions on social scoring from public authorities to private companies as well.
Posted in Artificial Intelligence
Industry Executives Share Real Insights on Artificial Intelligence – Progressive Grocer
Posted: at 4:36 am
In a new survey of retail executives, Symphony RetailAI found that 82% of them are focusing on data-driven demand forecasting and nearly two-thirds (61%) are prioritizing data management in their supply chain.
While there is strong agreement that data is key, the embrace of technologies to achieve those goals is somewhat behind sentiment. Only 13% of retail execs polled think they outperform their peers, while 87% say that their supply chain performance lags or is equal to competing businesses.
Symphony RetailAI's research, conducted with partner Incisiv, also sought to uncover retailers' use of AI and machine learning. Fully 87% of respondents said they have not yet taken meaningful steps to embrace AI, and many of them are stalling for a variety of reasons. Barriers include poor data quality, an inability to integrate data from several sources and a general lack of confidence in AI.
The gap between intent and progress underscores the opportunity for retailers to use AI to enhance demand forecasting and supply chain management, according to Symphony RetailAI's experts. "As new threats loom and other economic factors create supply chain unpredictability, these results highlight the need to future-proof grocery supply chains to handle unexpected disruptions," declared Troy Prothero, the company's SVP, product management, supply chain solutions. "The importance of using data, including AI-driven demand forecasting, to gain a competitive supply chain advantage isn't going away, so organizations that prioritize new ways of using data for decision-making will be better positioned to succeed."
Added Gaurav Pant, chief insights officer for Incisiv: "Our research with Symphony RetailAI sheds light on the critical need for retailers to use AI to break down silos and utilize as much organizational data as possible."
These and other insights on grocery forecasting will be shared in a Progressive Grocer webinar on April 27: "The New Rules of Grocery Demand Forecasting: Exclusive research reveals supply chain priorities and pain points."
Read this article:
Industry Executives Share Real Insights on Artificial Intelligence - Progressive Grocer
Posted in Artificial Intelligence
MEDIA ALERT: Business Insurance to Host Webinar "How Artificial Intelligence is Transforming the Insurance Industry" – Yahoo Finance
Posted: at 4:36 am
Gradient AI Sponsors Webinar to Explore the Promise and Challenges of AI in the Insurance Industry
April 21, 2022--(BUSINESS WIRE)--Gradient AI
WHAT: Business Insurance is hosting a webinar "How Artificial Intelligence is Transforming the Insurance Industry." Sponsored by Gradient AI, a leading provider of proven artificial intelligence (AI) solutions for the insurance industry, this webinar will cover real-world use cases and explore AI's powerful benefits, enabling attendees to gain an actionable understanding of AI's potential and its value to their business.
WHEN: April 26, 2022, 1:00 PM - 2:00 PM EDT / 10:00 AM - 11:00 AM PDT
WHO: Featured Speakers include:
Builders Insurance's Mark Gromek, Chief Marketing and Underwriting Officer
Florida State University's Dr. Patricia Born, Midyette Eminent Scholar in Risk
CCMSI's S. F. "Skip" Brechtel, Jr., FCAS, MAAA, Executive VP and CIO
WHY ATTEND: As digital transformation has disrupted many industries, AI is poised to do the same for insurance enterprises. Attendees will learn:
How to use AI to gain a competitive advantage and generate better business outcomes, such as improved key operational metrics
How AI can increase the efficiency and accuracy of underwriting and claims operations
The challenges and opportunities facing the next generation of insurance professionals
WHERE: Learn more and register here.
Tweet this: How Artificial Intelligence is Transforming the Insurance Industry Webinar: April 26, 2022, 1:00 pm EDT https://register.gotowebinar.com/register/5324033635572250381?source=GradientAI #AI #insurance #insurtech
About Gradient AI:
Gradient AI is a leading provider of proven artificial intelligence (AI) solutions for the insurance industry. Its solutions improve loss ratios and profitability by predicting underwriting and claim risks with greater accuracy, as well as reducing quote turnaround times and claim expenses through intelligent automation. Unlike other solutions that use a limited claims and underwriting dataset, Gradient's software-as-a-service (SaaS) platform leverages a vast dataset comprised of tens of millions of policies and claims. It also incorporates numerous other features including economic, health, geographic, and demographic information. Customers include some of the most recognized insurance carriers, MGAs, TPAs, risk pools, PEOs, and large self-insureds across all major lines of insurance. By using Gradient AI's solutions, insurers of all types achieve a better return on risk. To learn more about Gradient, please visit https://www.gradientai.com.
View source version on businesswire.com: https://www.businesswire.com/news/home/20220421005382/en/
Contacts
Elyse Familant, elysef@resultspr.net, 978-376-5446
Posted in Artificial Intelligence
Artificial Intelligence to Assess Dementia Risk and Enhance the Effectiveness of Depression Treatments – Neuroscience News
Posted: at 4:36 am
Summary: Using MEG data, a new AI algorithm called AI-MIND is able to assess dementia risk and the potential effectiveness of treatments for depression, researchers say.
Source: Aalto University
The human brain consists of some 86 billion neurons, nerve cells that process and convey information through electrical nerve impulses.
"That's why measuring neural electrical activity is often the best way to study the brain," says Hanna Renvall, Assistant Professor in Translational Brain Imaging at Aalto University and HUS Helsinki University Hospital and head of the HUS BioMag Laboratory.
Electroencephalography, or EEG, is the most used brain imaging technique in the world. Renvall's favorite, however, is magnetoencephalography, or MEG, which measures the magnetic fields generated by the brain's electrical activity.
"MEG signals are easier to interpret than EEG because the skull and other tissues don't distort magnetic fields as much. This is precisely what makes the technique so great," Renvall explains.
MEG can locate the active part of the brain with much greater accuracy, at times achieving millimeter-scale precision.
An MEG device looks a lot like bonnet hairdryers found in hair salons. The SQUID sensors that perform the measurements are concealed and effectively insulated inside the bonnet because they only function at truly freezing temperatures, close to absolute zero.
The world's first whole-head MEG device was built by a company that emerged from Helsinki University of Technology's Low Temperature Laboratory and is now the leading equipment manufacturer in this field.
MEG plays a major role in the European Union's new AI-Mind project, whose Finnish contributors are Aalto and HUS. The goal of the €14-million project is to learn ways to identify those patients whose dementia could be delayed or even prevented.
For this to happen, neuroscience and neurotechnology need help from artificial intelligence experts.
Fingerprinting the brain
Dementia is a broad-reaching neural function disorder that significantly erodes the sufferer's ability to cope with everyday life. Some 10 million people are afflicted in Europe, and as the population ages this number is growing. The most common illness that causes dementia is Alzheimer's disease, which is diagnosed in 70-80% of dementia patients.
Researchers believe that communication between neurons begins to deteriorate well before the initial clinical symptoms of dementia present themselves. This can be seen in MEG data, if you know what to look for.
MEG is at its strongest when measuring the brain's response to stimuli, like speech and touch, that occur at specific moments and are repetitive.
Interpreting resting-state measurements is considerably more complex.
That's why the AI-Mind project uses a tool referred to as the fingerprint of the brain. It was created when Renvall, Professor Riitta Salmelin and their colleagues began to investigate whether MEG measurements could detect a person's genotype.
More than 100 sibling pairs took part in the study, which placed subjects in an MEG scanner, first for a couple of minutes with their eyes closed and then for a couple of minutes with their eyes open. They also submitted blood samples for a simple genetic analysis.
When researchers compared the graphs and genetic markers, they noticed that, even though there was substantial variance between individuals, siblings' graphs were similar.
Next, Aalto University Artificial Intelligence Professor Samuel Kaski's group tested whether a computer could learn to identify graph sections that were as similar as possible between siblings while also being maximally different when compared to other test subjects.
The machine did it, and, surprisingly, more.
"It learned to distinguish the individual perfectly based on just the graphs, irrespective of whether the imaging had been performed with the test subject's eyes open or closed," Hanna Renvall says.
"For humans, graphs taken with eyes closed or open look very different, but the machine could identify their individual features. We're extremely excited about this brain fingerprinting and are now thinking about how we could teach the machine to recognize neural network deterioration in a similar manner."
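As a toy illustration of this kind of identification, a nearest-template classifier over synthetic spectral "fingerprints" behaves the way the researchers describe: enroll each subject in one condition, then recognize them in the other. Everything below (the feature size, noise levels and nearest-template rule) is an illustrative assumption, not the AI-Mind method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "spectral fingerprints": each subject has a stable 64-bin
# signature; eyes-closed and eyes-open recordings add independent noise.
n_subjects, n_bins = 20, 64
base = rng.normal(size=(n_subjects, n_bins))           # per-subject signature
eyes_closed = base + 0.3 * rng.normal(size=base.shape)
eyes_open = base + 0.3 * rng.normal(size=base.shape)

# Enroll each subject from the eyes-closed session only.
templates = eyes_closed

def identify(recording, templates):
    """Return the index of the enrolled subject whose template is nearest."""
    dists = np.linalg.norm(templates - recording, axis=1)
    return int(np.argmin(dists))

# Identify subjects from the *other* condition (eyes open).
predictions = [identify(rec, templates) for rec in eyes_open]
accuracy = np.mean([p == i for i, p in enumerate(predictions)])
print(f"cross-condition identification accuracy: {accuracy:.2f}")
```

Because each subject's signature dominates the condition-specific noise, recognition transfers across conditions, which is the essence of the "fingerprint" result described above.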
Risk screening in one week
A large share of dementia patients are diagnosed only after the disorder has already progressed, which explains why treatments tend to focus on managing late-stage symptoms.
Earlier research has, however, demonstrated that many patients experience cognitive deterioration, such as memory and thought disorders, for years before their diagnosis.
One objective of the AI-Mind project is to learn ways to screen individuals with a significantly higher risk of developing memory disorders in the next few years from the larger group of those suffering from mild cognitive deterioration.
Researchers plan to image 1,000 people from around Europe who are deemed at risk of developing memory disorders and analyze how their neural signals differ from people free from cognitive deterioration. AI will then couple their brain imaging data with cognitive test results and genetic biomarkers.
Researchers believe this method could identify a heightened dementia risk in as little as a week.
"If people know about their risk in time, it can have a dramatic motivating effect," says Renvall, who has years of experience treating patients as a neurologist.
Lifestyle changes like a healthier diet, exercise, treating cardiovascular diseases and cognitive rehabilitation can significantly slow the progression of memory disorders.
"Better managing risk factors can give the patient many more good years, which is tremendously meaningful for individuals, their loved ones and society as well," Renvall says.
Identifying at-risk individuals will also be key when the first drugs that slow disease progression come on the market, perhaps in the next few years. Renvall says it will be a momentous event, as the medicinal treatment of memory disorders has not seen any substantial progress in the last two decades.
The new pharmaceuticals will not suit everybody, however.
"These drugs are quite powerful, as are their side effects. That's why we need to identify the people who can benefit from them the most," Renvall emphasizes.
Zapping the brain
Brain activity involves electric currents, which generate magnetic fields that can be measured from outside the skull.
The process also works in the other direction, which is the principle on which transcranial magnetic stimulation (TMS) is based. In TMS treatments, a coil is placed on the head to produce a powerful magnetic field that reaches the brain through skin and bone without losing strength. The magnetic field pulse causes a short, weak electric field in the brain that affects neuron activity.
"It sounds wild, but it's completely safe," says Professor of Applied Physics Risto Ilmoniemi, who has been developing and using TMS for decades.
The strength of the electric field is comparable to the brain's own electric fields. The patient feels the stimulation, which is delivered in pulses, as light taps on their skin.
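The order of magnitude can be checked with Faraday's law of induction: for a circularly symmetric field, the induced electric field at radius r is E = (r/2)·dB/dt. The pulse figures below are illustrative assumptions, not the specifications of any particular stimulator:

```python
# Back-of-envelope: electric field induced by a TMS pulse, via Faraday's law.
# For a uniform axial field B(t) over a circular region, the induced field at
# radius r is E = (r/2) * dB/dt. The numbers here are purely illustrative.

dB_dt = 30e3   # T/s: e.g. ~1.5 T peak reached in ~50 microseconds
r = 0.01       # m: 1 cm from the field's axis of symmetry

E = 0.5 * r * dB_dt
print(f"induced electric field: {E:.0f} V/m")
```

This lands in the range of ~100 V/m, consistent with the statement that the induced field is comparable to the brain's own electric fields rather than overwhelming them.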
Magnetic stimulation is used to treat severe depression and neuropathic pain. At least 200 million people around the world suffer from severe depression, while neuropathic pain is prevalent among spinal injury patients, diabetics and multiple sclerosis sufferers. Pharmaceuticals provide adequate relief to only half of all depression patients; this share is just 30% in the case of neuropathic pain sufferers.
How frequently pulses are given is based on the illness being treated. For depression, inter-neuron communication is stimulated with high-frequency pulse series, while less frequent pulses calm patients neurons for neuropathic pain relief.
Stimulation is administered to the part of the brain where, according to the latest medical science, the neurons tied to the illness being treated are located.
About half of treated patients receive significant relief from magnetic stimulation. Ilmoniemi believes this could be much higher with more coils and the help of algorithms.
One-note clanger to concert virtuoso
In 2018, the ConnectToBrain research project headed by Ilmoniemi was granted €10 million in European Research Council Synergy funding, the first time that synergy funds were awarded to a project steered by a Finnish university. Top experts in the field from Germany and Italy are also involved.
The goal of the project is to radically improve magnetic stimulation in two ways: by building a magnetic stimulation device with up to 50 coils and by developing algorithms to automatically control the stimulation in real time, based on EEG feedback.
Ilmoniemi looks to the world of music for a comparison.
The difference between the new technology and the old is analogous to a concert pianist playing two-handed, continuously fine-tuning their performance based on what they hear, rather than hitting a single key while wearing hearing protection.
Researchers have already used a two-coil device to demonstrate that an algorithm can steer stimulation in the right direction ten times faster than even the most experienced expert. This is just the beginning.
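The closed-loop idea can be caricatured in a few lines: treat the (simulated) EEG response as a black-box function of the coil settings and hill-climb on it. This is a hypothetical toy, not the ConnectToBrain algorithm, which must also model the physics and respect safety constraints:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy closed-loop targeting: the "EEG response" is a smooth unknown function
# of the stimulation parameters (weights on a few coils), and the controller
# hill-climbs toward the setting that maximizes the measured feedback.
n_coils = 5
optimum = rng.normal(size=n_coils)        # hidden best coil weighting

def eeg_response(weights):
    """Simulated feedback: peaks at 1.0 when weights match the hidden optimum."""
    return np.exp(-np.sum((weights - optimum) ** 2))

weights = np.zeros(n_coils)
step = 0.5
for _ in range(200):
    candidate = weights + step * rng.normal(size=n_coils)
    if eeg_response(candidate) > eeg_response(weights):
        weights = candidate               # keep improvements only
    step *= 0.99                          # anneal the step size

print(f"final response: {eeg_response(weights):.3f} (max is 1.0)")
```

Even this crude search improves on the starting point automatically, which hints at why an algorithm reading EEG feedback in real time can steer stimulation faster than a human expert adjusting one coil by hand.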
A five-coil device completed last year covers an area of ten square centimeters of cortex at a time. A 50-coil system would cover both cerebral hemispheres.
Building this kind of device involves many technical challenges. Getting all these coils to fit around the head is no easy task, nor is safely producing the strong currents required.
Even once these issues are resolved, the hardest question remains: how can we treat the brain in the best possible way?
"What kind of information does the algorithm need? What data should instruct its learning? It is an enormous challenge for us and our collaborators," Ilmoniemi says thoughtfully.
The project aims to build one magnetic stimulation device for Aalto, another for the University of Tübingen in Germany and a third for the University of Chieti-Pescara in Italy. The researchers hope that, in the future, there will be thousands of such devices in operation around the world.
The more patient data is accumulated, the better the algorithms can learn and the more effective the treatments will become.
Quantum optics sensors could revolutionize how we read neural signals
Professor Lauri Parkkonen's working group is developing a new kind of MEG device that adapts to the head size and shape and utilizes sensors based on quantum optics. Unlike the SQUID sensors currently employed in MEG, they do not need to be encased in a thick layer of insulation, enabling measurements to be taken closer to the scalp surface. This makes it easier to perform precise measurements on children and babies especially.
The work has progressed at a brisk pace and yielded promising results: measurements made with optical sensors are already approaching the spatial accuracy of measurements made inside the cranium.
Parkkonen believes that a MEG system based on optical sensors could also be somewhat cheaper and more compact and thus easier to place than traditional devices; such a MEG system could utilize a person-sized magnetic shield instead of a large shielded room as the conventional MEG systems do.
This would bring it into reach of more researchers and hospitals.
Author: Minna Hölttä. Source: Aalto University. Contact: Minna Hölttä, Aalto University. Image: The image is in the public domain.
Posted in Artificial Intelligence
Two Key Drivers In 2022: Artificial Intelligence And The Blockchain – Tekedia
Posted: at 4:36 am
Changes in technology have drastically changed the way humans work, learn, live, and even think over the last several decades. Due to global lockdowns during the pandemic, many individuals and companies scrambled to develop new forms of virtual experiences and digital interactions.
As technology continues to be one of the largest differentiating factors between modern businesses, new technology trends have begun to emerge for the new year of 2022. These emerging trends are only the beginning of the major transition towards Web3.0.
Blockchain as a Service, or BaaS, is a new trend in the blockchain space that many corporations and businesses have already begun to take advantage of. Blockchain as a Service is a cloud-based solution that allows companies to collaborate on blockchain-based digital products, such as smart contracts, decentralized applications (dApps), and more. Since these types of products don't require the full infrastructure of the blockchain to function properly, they can be integrated by large-scale companies much more easily.
Many companies have already begun to develop their own blockchains and provide Blockchain as a Service to a wide range of businesses, which will ultimately have an impact on the future of blockchain based applications in years to come.
The rise of artificial intelligence has been one of the most significant technological advancements of our generation, and it shows absolutely no sign of slowing down. Many companies have begun to integrate artificial intelligence into their businesses in a variety of fashions, from artificial intelligence chatbots for customer service to Netflix using AI to recommend movies you'd be interested in.
Machine learning is being integrated in every industry, from Tesla's newer vehicles utilizing AI to improve autonomous driving capabilities, to Amazon's Alexa voice assistant using machine learning to interpret user queries and execute tasks.
A large number of the difficulties associated with both blockchain and artificial intelligence technologies can be remedied by merging the two ecosystems together. When combined, these systems can create an immutable, secure, and decentralized system.
This interconnection of artificial intelligence and blockchain technology is important for retaining a reliable record of each AI algorithm's outputs recorded during the machine learning process.
Combining these ecosystems has resulted in massive advancements in the security of both information and data in a variety of industries, including healthcare, personal banking, and others.
Storing large files or documents on the blockchain can be prohibitively expensive due to factors such as the Bitcoin network's 1-megabyte block size limit. To work around this problem, data is stored on a decentralized storage medium, and hashes of the data are linked to the blockchain or implemented in smart contract code.
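The hash-anchoring pattern described above is straightforward to sketch. The "on-chain record" below is just a variable standing in for whatever ledger transaction would actually carry the digest; only the hashing itself is real code:

```python
import hashlib

# Off-chain storage pattern: the document itself lives in ordinary (or
# decentralized) storage, and only its fixed-size hash is anchored on-chain.
# Anyone holding the document can recompute the hash and compare it with the
# anchored value to prove the file has not been modified.

document = b"Large scan, contract PDF, model weights..."  # any size at all
digest = hashlib.sha256(document).hexdigest()             # always 64 hex chars

on_chain_record = digest   # this small commitment is what would be stored

# Later: integrity check against the anchored hash.
assert hashlib.sha256(document).hexdigest() == on_chain_record
tampered = document + b"!"
assert hashlib.sha256(tampered).hexdigest() != on_chain_record
print("document verified against on-chain hash")
```

The design choice is that the chain stores a 32-byte commitment regardless of how large the document grows, which is exactly what makes the approach cheaper than putting the data itself in blocks.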
A recent report by MarketWatch estimated that the Asia Pacific blockchain artificial intelligence market will grow at an annual rate of 31.1%. With major driving factors such as advancements in the cryptocurrency space and a growing number of investments in AI blockchain projects, this would put the segment near a total addressable market of $3,650 million USD.
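For scale, compound growth at the cited 31.1% rate can be projected directly. The starting value and horizon below are hypothetical; only the rate comes from the article:

```python
# Compound annual growth at the cited 31.1% rate. Base value and horizon are
# assumptions for illustration, not figures from the MarketWatch report.

base = 1000.0   # $ millions, assumed starting market size
rate = 0.311    # 31.1% per year, from the cited estimate
years = 5

projected = base * (1 + rate) ** years
print(f"projected market after {years} years: ${projected:,.0f}M")
```

At that rate a market roughly triples or quadruples over five years, which is why even a modest base figure compounds toward the multi-billion-dollar range the report describes.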
Original post:
Two Key Drivers In 2022: Artificial Intelligence And The Blockchain - Tekedia
Posted in Artificial Intelligence
Associate Professor within Artificial Intelligence job with NORWEGIAN UNIVERSITY OF SCIENCE & TECHNOLOGY – NTNU | 290775 – Times Higher Education
Posted: at 4:36 am
About the job
We have a new 100% permanent position as an Associate Professor at the Department of ICT and Natural Sciences (IIR).
This position is associated with the AI-node, which is a collaboration with the Norwegian Open AI Lab (NAIL). You will participate in research activities related to the Norwegian Open AI Lab in Trondheim, activities at the AI-node in Ålesund and at the Department of ICT and Natural Sciences. Candidates are expected to have experience in research activities within artificial intelligence and machine learning with associated topics. We are looking for a researcher who can create new activities through networking with people internally and externally.
In this position there is a good opportunity to work on problem solving defined in cooperation with NAIL partners within companies and the public sector. The researcher will contribute to enhancing NAIL, and in particular the AI-node in Ålesund. The first three years of work will be devoted to creating a research portfolio. After this period, you are expected to contribute to the Department of ICT and Natural Sciences with lecturing.
You will report to the Head of the Department of ICT and Natural Sciences.
Responsibilities
Required qualifications
Associate Professor (Førsteamanuensis):
You must have the qualifications required for the position of associate professor in the field of Artificial Intelligence, as outlined in the regulations concerning appointment and promotion to teaching and research posts.
Applicable to all:
You must document relevant basic competence in teaching and supervision at a university/higher education level, as referenced in the Norwegian national Regulations. If this cannot be documented, you will be required to complete an approved course in university pedagogy within two years of commencement. NTNU offers qualifying courses.
New employees who do not speak a Scandinavian language at appointment are required, within three years, to demonstrate skills in Norwegian or another Scandinavian language equivalent to level three of the course in Norwegian for speakers of other languages at the Department of Language and Literature at NTNU.
Preferred qualifications
Personal qualities
We offer
Salary and conditions
In this position you will typically receive a gross salary according to the Associate Professor (Førsteamanuensis) code 1011, depending on qualifications and seniority. As required by law, 2% of this salary will be deducted and paid into the Norwegian Public Service Pension Fund.
Employment will be granted in accordance with the principles outlined in the regulations in force concerning State Employees and Civil Servants, and the acts relating to Control of the Export of Strategic Goods, Services and Technology. Candidates who by assessment of the application and attachment are seen to conflict with the criteria in the latter law will be prohibited from recruitment to NTNU. Applicants should be aware that there may be changes in the working environment after employment has commenced.
It is a prerequisite that you are able to be present and accessible at the institution daily.
Application Process
You can find more information about working at NTNU and the application process here.
About the application
Your application and supporting documentation must be in English or Norwegian.
Please note that your application will be considered based solely on information submitted by the application deadline. You must therefore ensure that your application clearly demonstrates how your skills and experience fulfil the criteria specified above.
If, for any reason, you have taken a career break or have had an atypical career and wish to disclose this in your application, the selection committee will take this into account, recognizing that the quantity of your research may be reduced as a result.
Your application must include:
Joint work will also be considered. If it is difficult to identify your specific input to a joint project, you must include evidence of your contributions.
While considering the best-qualified applicants, we will pay particular attention to personal qualities, your motivation for applying for the position, and your pedagogical skills and qualifications. Our assessment will be based on documented pedagogical material, forms of presentation in your academic works, teaching experience, PhD and Master's supervision, and any other relevant pedagogical background. Both quality and scope will be taken into consideration.
NTNU is committed to the evaluation criteria for research quality in accordance with the San Francisco Declaration on Research Assessment (DORA). This means that we will pay particular attention to the quality and academic range demonstrated by your scientific work to date. We will also pay attention to research leadership and participation in research projects. Your scientific work from the last five years will be given the most weight.
Your application will be considered by a committee of experts. Candidates of interest will be invited to a trial lesson and an interview.
General information
NTNU's personnel policy emphasizes the importance of equality and diversity. We encourage applications from all qualified candidates, regardless of gender, disability, or cultural background. NTNU is working actively to increase the number of women employed in scientific positions, and has a number of resources to promote equality.
The city of Ålesund, with its population of 50 000, will provide you with plenty of opportunities to explore a region of Norway that is famous for its beautiful scenery with high mountains and blue fjords. Ålesund itself, with its Art Nouveau architecture, is by many considered to be the most beautiful city in Norway! The Norwegian welfare state, including healthcare, schools, kindergartens and overall equality, is probably the best of its kind in the world.
As an employee at NTNU, you must continually maintain and improve your professional development and be flexible regarding any organizational changes.
In accordance with public law, your name, age, job title, and county of residence may be made available to the public even if you have requested not to appear on the public list of applicants. For the sake of transparency, candidates will receive the expert committee's evaluation of themselves and the other candidates. As an applicant, you are considered part of the process and are bound by rules of confidentiality.
If you have any questions regarding the position, please contact Rune Volden, tel:+47-92887753, e-mail: rune.volden@ntnu.no. If you have questions regarding the recruitment process, please contact Maren Skuterud Rasmussen, e-mail: maren.skuterud@ntnu.no.
The application and all attachments should be submitted electronically via jobbnorge.no.
NTNU - knowledge for a better world
The Norwegian University of Science and Technology (NTNU) creates knowledge for a better world and solutions that can change everyday life.
Department of ICT and Natural Sciences
Our campus in Ålesund works in a partnership with industry that is in a class of its own among Norwegian universities. This ensures a practical focus for our study programmes, while they are firmly anchored in modern theory. The Department offers programmes in automation engineering, computer engineering, electric power systems, simulation and visualization. Our research areas include autonomous vessels, robotics, cybernetics, medical technology and health informatics, and artificial intelligence. The Department of ICT and Natural Sciences is one of seven departments in the Faculty of Information Technology and Electrical Engineering.
Deadline: 15th May 2022. Employer: NTNU - Norwegian University of Science and Technology. Municipality: Ålesund. Scope: Fulltime. Duration: Permanent. Place of service: Ålesund Campus.
Posted in Artificial Intelligence
Comments Off on Associate Professor within Artificial Intelligence job with NORWEGIAN UNIVERSITY OF SCIENCE & TECHNOLOGY – NTNU | 290775 – Times Higher Education
Does this artificial intelligence think like a human? – Freethink
Posted: April 17, 2022 at 11:44 pm
In machine learning, understanding why a model makes certain decisions is often just as important as whether those decisions are correct. For instance, a machine-learning model might correctly predict that a skin lesion is cancerous, but it could have done so using an unrelated blip on a clinical photo.
While tools exist to help experts make sense of a model's reasoning, often these methods only provide insights on one decision at a time, and each must be manually evaluated. Models are commonly trained using millions of data inputs, making it almost impossible for a human to evaluate enough decisions to identify patterns.
Now, researchers at MIT and IBM Research have created a method that enables a user to aggregate, sort, and rank these individual explanations to rapidly analyze a machine-learning model's behavior. Their technique, called Shared Interest, incorporates quantifiable metrics that compare how well a model's reasoning matches that of a human.
Shared Interest could help a user easily uncover concerning trends in a model's decision-making: for example, perhaps the model often becomes confused by distracting, irrelevant features, like background objects in photos. Aggregating these insights could help the user quickly and quantitatively determine whether a model is trustworthy and ready to be deployed in a real-world situation.
"In developing Shared Interest, our goal is to be able to scale up this analysis process so that you could understand on a more global level what your model's behavior is," says lead author Angie Boggust, a graduate student in the Visualization Group of the Computer Science and Artificial Intelligence Laboratory (CSAIL).
Boggust wrote the paper with her advisor, Arvind Satyanarayan, an assistant professor of computer science who leads the Visualization Group, as well as Benjamin Hoover and senior author Hendrik Strobelt, both of IBM Research. The paper will be presented at the Conference on Human Factors in Computing Systems.
Boggust began working on this project during a summer internship at IBM, under the mentorship of Strobelt. After returning to MIT, Boggust and Satyanarayan expanded on the project and continued the collaboration with Strobelt and Hoover, who helped deploy the case studies that show how the technique could be used in practice.
Shared Interest leverages popular techniques that show how a machine-learning model made a specific decision, known as saliency methods. If the model is classifying images, saliency methods highlight areas of an image that are important to the model when it made its decision. These areas are visualized as a type of heatmap, called a saliency map, that is often overlaid on the original image. If the model classified the image as a dog, and the dog's head is highlighted, that means those pixels were important to the model when it decided the image contains a dog.
Shared Interest works by comparing saliency methods to ground-truth data. In an image dataset, ground-truth data are typically human-generated annotations that surround the relevant parts of each image. In the previous example, the box would surround the entire dog in the photo. When evaluating an image classification model, Shared Interest compares the model-generated saliency data and the human-generated ground-truth data for the same image to see how well they align.
The technique uses several metrics to quantify that alignment (or misalignment) and then sorts a particular decision into one of eight categories. The categories run the gamut from perfectly human-aligned (the model makes a correct prediction and the highlighted area in the saliency map is identical to the human-generated box) to completely distracted (the model makes an incorrect prediction and does not use any image features found in the human-generated box).
"On one end of the spectrum, your model made the decision for the exact same reason a human did, and on the other end of the spectrum, your model and the human are making this decision for totally different reasons. By quantifying that for all the images in your dataset, you can use that quantification to sort through them," Boggust explains.
The technique works similarly with text-based data, where key words are highlighted instead of image regions.
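The compare-and-bucket idea described above can be sketched in a few lines of Python. This is a hypothetical simplification, not the paper's actual metrics or its eight categories: saliency regions and ground-truth annotations are treated as plain pixel sets, and two overlap scores (intersection-over-union and coverage of the human annotation) drive a coarse bucketing that can then be used to sort a whole dataset.

```python
def shared_interest_bucket(saliency, ground_truth, prediction_correct):
    """Compare a model's salient pixels with human-annotated pixels.

    saliency, ground_truth: sets of (row, col) pixel coordinates.
    prediction_correct: whether the model's label matched the true label.
    Returns a coarse category name plus the IoU score.
    """
    inter = saliency & ground_truth
    union = saliency | ground_truth
    iou = len(inter) / len(union) if union else 0.0              # how similar the two regions are
    coverage = len(inter) / len(ground_truth) if ground_truth else 0.0

    if not inter:
        region = "distracted"        # model used none of the human-annotated pixels
    elif iou > 0.9:
        region = "human-aligned"     # regions are nearly identical
    elif coverage > 0.9:
        region = "sufficient-plus"   # model used the human evidence, plus extra context
    else:
        region = "partially-aligned"
    return ("correct, " if prediction_correct else "incorrect, ") + region, iou

# Toy example from the article: the model's saliency map covers only the
# dog's head, while the human-drawn box surrounds the whole dog.
dog_box = {(r, c) for r in range(10) for c in range(10)}   # 100 pixels
head = {(r, c) for r in range(3) for c in range(3)}        # 9 pixels, inside the box
category, iou = shared_interest_bucket(head, dog_box, prediction_correct=True)
print(category, round(iou, 3))  # prints: correct, partially-aligned 0.09
```

Sorting an entire dataset by the returned score is what surfaces trends: decisions clustered near zero overlap are the "distracted" cases a reviewer would want to inspect first.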
The researchers used three case studies to show how Shared Interest could be useful to both nonexperts and machine-learning researchers.
In the first case study, they used Shared Interest to help a dermatologist determine if he should trust a machine-learning model designed to help diagnose cancer from photos of skin lesions. Shared Interest enabled the dermatologist to quickly see examples of the model's correct and incorrect predictions. Ultimately, the dermatologist decided he could not trust the model because it made too many predictions based on image artifacts, rather than actual lesions.
"The value here is that using Shared Interest, we are able to see these patterns emerge in our model's behavior. In about half an hour, the dermatologist was able to make a confident decision of whether or not to trust the model and whether or not to deploy it," Boggust says.
In the second case study, they worked with a machine-learning researcher to show how Shared Interest can evaluate a particular saliency method by revealing previously unknown pitfalls in the model. Their technique enabled the researcher to analyze thousands of correct and incorrect decisions in a fraction of the time required by typical manual methods.
In the third case study, they used Shared Interest to dive deeper into a specific image classification example. By manipulating the ground-truth area of the image, they were able to conduct a what-if analysis to see which image features were most important for particular predictions.
The researchers were impressed by how well Shared Interest performed in these case studies, but Boggust cautions that the technique is only as good as the saliency methods it is based upon. If those techniques contain bias or are inaccurate, then Shared Interest will inherit those limitations.
In the future, the researchers want to apply Shared Interest to different types of data, particularly tabular data which is used in medical records. They also want to use Shared Interest to help improve current saliency techniques. Boggust hopes this research inspires more work that seeks to quantify machine-learning model behavior in ways that make sense to humans.
This work is funded, in part, by the MIT-IBM Watson AI Lab, the United States Air Force Research Laboratory, and the United States Air Force Artificial Intelligence Accelerator.
Republished with permission of MIT News. Read the original article.
Should the I in Artificial Intelligence (AI) need a reboot? – Times of India
Posted: at 11:44 pm
Three events took place a few days ago which, at first glance, may look inconsequential. But are they? Read on.
The morning ritual begins with commanding the often-mishearing Alexa to play the morning melodies on the flute. Alexa, the voice assistant, obeys after a couple of attempts, and the soothing strains waft through the expanse of the living room.
The rhythmically cooing pigeons swoop onto the terrace at the familiar whistling. They train their bobbing heads toward the rustling sound of seeds strewn on the terrace floor. Some coo and invite their mates, and others strut and fan their tails to protect their territories. The sumptuous and timely breakfast gets underway. The pigeons decide when to eat and how much to eat.
Shortly after, the news of Going bananas over Artificial Intelligence catches the attention. The headline is about a robot trained to peel the humble banana. The news is from a venerable University of Tokyo lab.
Alexa is intelligent, the pigeons are clever, and the robot is dexterous (and human-like)! Or is it so? Did we use words portraying intelligence rather loosely here?
From the deep recesses of my mind comes alive the doomsday prophecy warning that soon there will be no distinct difference between what can be achieved by a biological brain and by a computer (aka AI). AI is on its way to emulating human intelligence; soon after, it will exceed it, rule it, and at its peak replace humankind.
The primacy of humankind is getting threatened!
Luckily, I had just completed reading the brilliant book The Book of Why by Turing awardee Judea Pearl, along with the seminal article Human-Level Intelligence or Animal-Like Abilities? by Adnan Darwiche (UCLA, 2018). They came to my rescue in dousing my fear of humankind being usurped by AI!
Alexa uses natural language processing and speech recognition software. Large amounts of audio training data are used as inputs. The raw data is cleaned and labelled. With the aid of the algorithms, voice assistants understand and fulfil user commands. Is intelligence being drilled into Alexa or is Alexa simply mastering imitation through continual training and learning?
Pigeons are among the smarter birds. Their homing abilities have long been exploited to make them effective carrier birds. This cognitive skill could be a blend of innate trait and committed training. But can it be called intelligence?
Now let us dwell on the robot and the banana. A robot peeling a banana is trained by a deep imitation learning process to perform this deceptively effortless task. Media coverage makes this an exciting headline, and readers brim with positivity. However, the headlines about the robot's prowess could be misleading. The banana-peeling robot's success rate after thirteen hours of training maxes out at 57%. That is, in forty-three of one hundred attempts it failed the task by squishing the banana. Can this be dubbed intelligence, or is it simply imitation trying to be perfected? John McCarthy (Stanford University) coined the term Artificial Intelligence in 1955. The pithy acronym AI has gained immense ground, with technology breakthroughs like parallel computation, big data and better algorithms propelling its massive growth.
There is heightened speculation surrounding AI: that humans will be replaced by machines. This has, however, been tempered by the fact that humans can leverage AI and AI could augment human capabilities. Attempts have been made to redefine Artificial Intelligence as Augmented Intelligence.
Machines have advantages that humans do not: speed, repeatability, consistency, scalability, and lower cost. Humans have advantages that machines do not: reasoning, originality, feelings, contextuality, and experience.
The triumph of neural networks in applications like speech recognition, vision, and autonomous navigation has led media coverage to be less thoughtful, at times going overboard in quickly equating the automation of tasks with human intelligence. This excitement is mixed with an ample dose of fear. So, is the word intelligence the misnomer here?
Intelligence refers to one's cognitive abilities, which would include capacities to:
1. Comprehend, reason and imagine,
2. Bring in original, at times abstract, thoughts,
3. Evaluate and judge,
4. Adapt to the context and environment,
5. Acquire knowledge, and store and use it as experience.
So, if Machine Learning is the way AI is powered to meet only the last point, acquiring knowledge and storing it for later use, then is this not incomplete intelligence?
At the risk of sounding like a non-conformist, Pearl argues that Artificial Intelligence is handicapped by an incomplete understanding of what intelligence really is. AI applications, as of today, can solve problems that are predictive and diagnostic in nature, without attempting to find the cause of the problem. Never denying AI's transformative, disruptive, complex, and non-trivial power, Pearl offers a genuine critique of the achievements of Machine Learning and Deep Learning, given their relentless focus on correlation: pattern matching, finding anomalies, and often culminating in curve-fitting.
The significance of the ladder of causation, i.e., progressing from association to intervention and concluding with counterfactuals, has been Pearl's contribution of immense consequence.
Pearl has been one of the driving forces insisting that correlation-based reasoning should not subsume causal reasoning and the development of causal algorithmic tools. If, for example, the programmers of a driverless car want it to react differently to new situations, they should add the new reactions explicitly, which requires an understanding of cause and effect. Furthermore, Darwiche echoes the concern that the current enthusiasm for exploiting, enjoying, and cheering correlation-based AI tools should not come at the cost of representation- and reasoning-based causal tools that capture cause and effect.
Only causal reasoning could provide machines with human-level intelligence. This would be the cornerstone of scientific thought and would make human-machine communication effective.
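Pearl's distinction between correlation and causation can be made concrete with a toy simulation (a hypothetical illustration, not from the article): a hidden confounder Z drives both X and Y, so observational data show a strong correlation between X and Y even though X has no causal effect on Y, and the correlation vanishes under the intervention do(X), the second rung of the ladder of causation.

```python
import random

random.seed(0)
n = 10_000

# Observational data: Z -> X and Z -> Y, but no arrow X -> Y.
obs = []
for _ in range(n):
    z = random.gauss(0, 1)
    x = z + random.gauss(0, 0.1)
    y = z + random.gauss(0, 0.1)
    obs.append((x, y))

def corr(pairs):
    """Pearson correlation of a list of (x, y) pairs."""
    mx = sum(x for x, _ in pairs) / len(pairs)
    my = sum(y for _, y in pairs) / len(pairs)
    cov = sum((x - mx) * (y - my) for x, y in pairs) / len(pairs)
    vx = sum((x - mx) ** 2 for x, _ in pairs) / len(pairs)
    vy = sum((y - my) ** 2 for _, y in pairs) / len(pairs)
    return cov / (vx * vy) ** 0.5

# Interventional data: do(X = x) sets X by fiat, cutting the Z -> X arrow.
intv = []
for _ in range(n):
    z = random.gauss(0, 1)
    x = random.gauss(0, 1)        # X no longer depends on Z
    y = z + random.gauss(0, 0.1)  # Y's mechanism is unchanged
    intv.append((x, y))

print(f"observational corr(X, Y): {corr(obs):.2f}")    # strong, near 1
print(f"interventional corr(X, Y): {corr(intv):.2f}")  # near zero
```

A curve-fitter trained on the observational data would happily predict Y from X; only the interventional question reveals that changing X does nothing to Y, which is exactly the gap between association and intervention that Pearl describes.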
In the meantime, areas like explainable AI (XAI) and morality and bias in AI should be gainfully addressed.
Till then, the spectre of AI usurping human intelligence is a non-starter. Should we agree that the field of Artificial Intelligence deserves a more apt title of Artificial Ability or Augmented Imitation? Would a reboot of the acronym help dissuade the apocalypticists from painting a grim picture of the impending demotion of humankind?
Views expressed above are the author's own.
END OF ARTICLE
Ethics Leader Pushes for More Responsible Artificial Intelligence – Newsroom | University of St. Thomas – University of St. Thomas Newsroom
Posted: at 11:44 pm
From deciding what to watch next on Netflix to ordering lunch from a robot, artificial intelligence (AI) is hard to escape these days.
AI ethics leader Elizabeth M. Adams is an expert on the social and moral implications of artificial intelligence. She recently spoke about overcoming those issues at an Opus College of Business event, Artificial Intelligence & Diversity, Equity and Inclusion.
Here are four key takeaways.
Artificial intelligence is all around us.
From traffic lights to unlocking mobile phones, computers are working to aid our every move.
"Artificial intelligence is basically training a computer model to think like a human, but at a much faster pace," Adams said.
A futurist at heart, Adams embraces AI wherever and whenever she can.
"I'm a huge proponent of a four-hour workweek," Adams said. "If I could have technology make my coffee, turn on my screens, so I could focus on my other research, I would."
Despite good intentions, artificial intelligence can perpetuate historical bias.
Artificial intelligence hasn't always worked in an inclusive or equitable fashion. Adams points out that AI has often struggled to accurately identify individuals, objects and trends.
Some of those struggles impact our social identity. For example, software programs continue to misidentify Black women as men. Other programs have difficulties identifying individuals, even for some of the most well-known faces in the world, such as Oprah Winfrey and former first lady Michelle Obama.
Other inaccuracies may impact standing in the community or financial well-being. Governments and law enforcement have begun using facial recognition software at a variety of levels, collecting data and information on citizens. Not only does this form of artificial intelligence raise privacy concerns, it can perpetuate bias based on how the technology and data is used.
For an example in business, AI bias has been found in hiring software. Certain resumes can be overlooked based on data that software is trained to value or avoid.
"We're waking up to the challenges of AI, even though there are lots of benefits," Adams said. "For those in vulnerable populations, now you have one more thing, this new technology, that you have to figure out how to navigate in your life."
What is responsible AI?
As discrepancies and inequities come to light, more companies have embraced the use of responsible AI. While an exact definition is still evolving, responsible AI aims to reduce harm to all individuals and embrace the equitable use of artificial intelligence.
"It's very important to have the right people, the right voices at the table when you're designing your technology," Adams said.
Adams lifts up companies like Microsoft and Salesforce as two giants that have been working to roll out responsible AI technology with the help of their entire workforce.
"It's not just a technical problem," Adams said. "It's important to have diverse voices of all disciplines."
Meanwhile, global organizations such as the United Nations have put out guidelines for companies to follow for their AI technology.
Everyone must embrace responsible AI.
It's not just mega companies or organizations that can bring about change. Adams stressed that everyone must embrace the new realities of working in a world with AI.
"There are lots of different opportunities to see yourself and to help fix some of the challenges," Adams said. "Responsible AI is really starting to cascade out to the workforce, which is really, really important."
Adams suggested people get started learning about AI by hosting education events, partnering with stakeholders in their community, and speaking with policymakers.
But most of all, she wants everyone to follow their curiosity.
"If you like art, follow your curiosity around AI in art," Adams said. "If you like automobiles, follow your curiosity there. Wherever you decide that AI is important, follow your curiosity."
Artificial Intelligence Is Strengthening the U.S. Navy From Within – The National Interest Online
Posted: at 11:44 pm
The Navy is progressively phasing artificial intelligence (AI) into its ship systems, weapons, networks, and command and control infrastructure as computer automation becomes more reliable and advanced algorithms make once-impossible discernments and analyses.
Previously segmented data streams on ships, drones, aircraft, and even submarines are now increasingly able to share organized data in real time, in large measure due to breakthrough advances in AI and machine learning. AI can, for instance, enable command and control systems to identify moments of operational relevance from among hours or days of surveillance data in milliseconds, something which saves time, maximizes efficiency, and performs time-consuming procedural tasks autonomously at an exponentially faster speed.
"Multiple data bytes of information will be passed around on the networks here in the near future. So as we think about big data, and how do we handle all that data and turn it into information without getting overloaded, this will be a key part of AI, then we're talking about handling decentralized systems," Nathan Husted of the Naval Surface Warfare Center, Carderock, told an audience at the 2022 Sea Air Space Symposium. "And of course, AI plays a big part in the management in between the messaging and operation and organization of these decentralized systems."
AI's success could be described paradoxically: in one sense, its utility or value is only as effective as the size and quality of its ever-expanding database. Yet by contrast, its conclusions, findings, or answers are very small and precise. Perhaps only two seconds of drone video identify the sought-after enemy target, yet surveillance cameras hold hours if not days of data. AI can reduce the procedural burden placed upon humans and massively expedite the decision-making process.
"If we look at the battlespace, we are actually training for the future. As we look at AI in the battlespace we've got big data and AI systems. So we're going to have this extremely complicated, information-rich combat environment," Husted said.
Navy industry partners also see AI as an evolving technology that will progressively integrate into more ship systems, command and control, and weapons over time as processing speeds increase and new algorithms increase their reliability by honing their ability to assimilate new or unrecognized incoming data. This building-block approach, for example, has been adopted by Northrop Grumman in its development of a new ship-integrated energy management, distribution, and storage technology called Integrated Power and Energy Systems (IPES). For instance, Northrop Grumman's solution is built to accommodate new computing applications as they become available, such as AI-generated power optimization and electric plant controls.
The technology seeks to organize and store energy sources to optimize distribution across a sphere of otherwise separated ship assets such as lasers, sensors, command and control, radar, or weapons. AI-enabled computing can help organize incoming metrics and sensor data from disparate ship systems to optimize storage and streamline distribution as needed from a single source depending upon need.
"AI is an emerging capability that shows promise in some of these more complex electrical architectures to manage in near real-time. Future capability that would rely upon AI and be more computationally intensive is likely to happen in some aspects of electric plant controls," Matthew Superczynski, chief engineer for Northrop Grumman's Power/Control Systems, told The National Interest in an interview. "We are building upon the architecture the Navy already has to give them more capability and lower risk. We can build on top of that."
Kris Osborn is the Defense Editor for the National Interest. Osborn previously served at the Pentagon as a Highly Qualified Expert with the Office of the Assistant Secretary of the Army for Acquisition, Logistics & Technology. Osborn has also worked as an anchor and on-air military specialist at national TV networks. He has appeared as a guest military expert on Fox News, MSNBC, The Military Channel, and The History Channel. He also has a Master's Degree in Comparative Literature from Columbia University.
Image: Flickr.