The Prometheus League
Breaking News and Updates
- Abolition Of Work
- Ai
- Alt-right
- Alternative Medicine
- Antifa
- Artificial General Intelligence
- Artificial Intelligence
- Artificial Super Intelligence
- Ascension
- Astronomy
- Atheism
- Atheist
- Atlas Shrugged
- Automation
- Ayn Rand
- Bahamas
- Bankruptcy
- Basic Income Guarantee
- Big Tech
- Bitcoin
- Black Lives Matter
- Blackjack
- Boca Chica Texas
- Brexit
- Caribbean
- Casino
- Casino Affiliate
- Cbd Oil
- Censorship
- Cf
- Chess Engines
- Childfree
- Cloning
- Cloud Computing
- Conscious Evolution
- Corona Virus
- Cosmic Heaven
- Covid-19
- Cryonics
- Cryptocurrency
- Cyberpunk
- Darwinism
- Democrat
- Designer Babies
- DNA
- Donald Trump
- Eczema
- Elon Musk
- Entheogens
- Ethical Egoism
- Eugenic Concepts
- Eugenics
- Euthanasia
- Evolution
- Extropian
- Extropianism
- Extropy
- Fake News
- Federalism
- Federalist
- Fifth Amendment
- Financial Independence
- First Amendment
- Fiscal Freedom
- Food Supplements
- Fourth Amendment
- Free Speech
- Freedom
- Freedom of Speech
- Futurism
- Futurist
- Gambling
- Gene Medicine
- Genetic Engineering
- Genome
- Germ Warfare
- Golden Rule
- Government Oppression
- Hedonism
- High Seas
- History
- Hubble Telescope
- Human Genetic Engineering
- Human Genetics
- Human Immortality
- Human Longevity
- Illuminati
- Immortality
- Immortality Medicine
- Intentional Communities
- Jacinda Ardern
- Jitsi
- Jordan Peterson
- Las Vegas
- Liberal
- Libertarian
- Libertarianism
- Liberty
- Life Extension
- Macau
- Marie Byrd Land
- Mars
- Mars Colonization
- Mars Colony
- Memetics
- Micronations
- Mind Uploading
- Minerva Reefs
- Modern Satanism
- Moon Colonization
- Nanotech
- National Vanguard
- NATO
- Neo-eugenics
- Neurohacking
- Neurotechnology
- New Utopia
- New Zealand
- Nihilism
- Nootropics
- NSA
- Oceania
- Offshore
- Olympics
- Online Casino
- Online Gambling
- Pantheism
- Personal Empowerment
- Poker
- Political Correctness
- Politically Incorrect
- Polygamy
- Populism
- Post Human
- Post Humanism
- Posthuman
- Posthumanism
- Private Islands
- Progress
- Proud Boys
- Psoriasis
- Psychedelics
- Putin
- Quantum Computing
- Quantum Physics
- Rationalism
- Republican
- Resource Based Economy
- Robotics
- Rockall
- Ron Paul
- Roulette
- Russia
- Sealand
- Seasteading
- Second Amendment
- Seychelles
- Singularitarianism
- Singularity
- Socio-economic Collapse
- Space Exploration
- Space Station
- Space Travel
- Spacex
- Sports Betting
- Sportsbook
- Superintelligence
- Survivalism
- Talmud
- Technology
- Teilhard De Charden
- Terraforming Mars
- The Singularity
- Tms
- Tor Browser
- Trance
- Transhuman
- Transhuman News
- Transhumanism
- Transhumanist
- Transtopian
- Transtopianism
- Ukraine
- Uncategorized
- Vaping
- Victimless Crimes
- Virtual Reality
- Wage Slavery
- War On Drugs
- Waveland
- Ww3
- Yahoo
- Zeitgeist Movement
- Prometheism
- Forbidden Fruit
- The Evolutionary Perspective
Category Archives: Ai
The Regulation of Artificial Intelligence in Canada and Abroad: Comparing the Proposed AIDA and EU AI Act – Fasken
Posted: October 19, 2022 at 3:32 pm
Laws governing technology have historically focused on the regulation of information privacy and digital communications. However, governments and regulators around the globe have increasingly turned their attention to artificial intelligence (AI) systems. As the use of AI becomes more widespread and changes how business is done across industries, there are signs that existing declarations of principles and ethical frameworks for AI may soon be followed by binding legal frameworks. [1]
On June 16, 2022, the Canadian government tabled Bill C-27, the Digital Charter Implementation Act, 2022. Bill C-27 proposes to enact, among other things, the Artificial Intelligence and Data Act (AIDA). Although there have been previous efforts to regulate automated decision-making as part of federal privacy reform efforts, AIDA is Canada's first effort to regulate AI systems outside of privacy legislation. [2]
If passed, AIDA would regulate the design, development, and use of AI systems in the private sector in connection with interprovincial and international trade, with a focus on mitigating the risks of harm and bias in the use of high-impact AI systems. AIDA sets out positive requirements for AI systems as well as monetary penalties and new criminal offences for certain unlawful or fraudulent conduct in respect of AI systems.
Prior to AIDA, in April 2021, the European Commission presented a draft legal framework for regulating AI, the Artificial Intelligence Act (EU AI Act), which was one of the first attempts to comprehensively regulate AI. The EU AI Act sets out harmonized rules for the development, marketing, and use of AI and imposes risk-based requirements for AI systems and their operators, as well as prohibitions on certain harmful AI practices.
Broadly speaking, AIDA and the EU AI Act are both focused on mitigating the risks of bias and harm caused by AI in a manner that seeks to balance regulation with the need to allow technological innovation. In an effort to be future-proof and keep pace with advances in AI, both AIDA and the EU AI Act define artificial intelligence in a technology-neutral manner. However, AIDA relies on a more principles-based approach, while the EU AI Act is more prescriptive in classifying high-risk AI systems and harmful AI practices and controlling their development and deployment. Further, much of the substance and detail of AIDA is left to be elaborated in future regulations, including the key definition of the high-impact AI systems to which most of AIDA's obligations attach.
The current drafts of AIDA and the EU AI Act can be compared along several dimensions: how each act defines high-impact (AIDA) and high-risk (EU AI Act) systems; which systems fall outside the EU AI Act's scope; the conduct AIDA makes an offence and the AI practices and system types the EU AI Act prohibits outright; data governance obligations (AIDA's attaching to anonymized data, the EU AI Act's to the training, validation, and testing data sets used by high-risk systems); the transparency and notification duties owed to individuals; and record-keeping requirements. Each of these points of comparison is discussed below.
The Minister of Industry may designate an official to be the Artificial Intelligence and Data Commissioner, whose role is to assist in the administration and enforcement of AIDA. The Minister may delegate any of their powers or duties under AIDA to the Commissioner.
The Minister of Industry holds a range of administration and enforcement powers under the act, including the power to order that a high-impact system presenting a serious risk of imminent harm cease being used.
The European Artificial Intelligence Board will assist the European Commission in providing guidance and overseeing the application of the EU AI Act, and the Commission holds enforcement authority of its own. Each Member State will designate or establish a national supervisory authority.
Persons who commit a violation of AIDA or its regulations may be subject to administrative monetary penalties, the details of which will be established by future regulations. Administrative monetary penalties are intended to promote compliance with AIDA.
Contraventions of AIDA's governance and transparency requirements can result in significant fines, described below.
Persons who commit more serious criminal offences (e.g., contravening the prohibitions noted above, or obstructing or providing false or misleading information during an audit or investigation) may be liable to the act's most severe penalties, also described below.
While both acts define AI systems relatively broadly, the definition provided in AIDA is narrower. AIDA only encapsulates technologies that process data autonomously or partly autonomously, whereas the EU AI Act does not stipulate any degree of autonomy. This distinction in AIDA is arguably a welcome divergence from the EU AI Act, which as currently drafted would appear to include even relatively innocuous technology, such as the use of a statistical formula to produce an output. That said, there are indications that the EU AI Act's current definition may be modified before its final version is published, and that it will likely be accompanied by regulatory guidance for further clarity. [4]
Both acts are focused on avoiding harm, a concept they define similarly. The EU AI Act is, however, slightly broader in scope as it considers serious disruptions to critical infrastructure a harm, whereas AIDA is solely concerned with harm suffered by individuals.
Under AIDA, high-impact systems will be defined in future regulations, so it is not yet possible to compare AIDA's definition of high-impact systems to the EU AI Act's definition of high-risk systems. The EU AI Act identifies two categories of high-risk systems. The first category is AI systems intended to be used as safety components of products, or as products themselves. The second category is AI systems listed in an annex to the act and which present a risk to the health, safety, or fundamental rights of individuals. It remains to be seen how Canada would define high-impact systems, but the EU AI Act provides an indication of the direction the federal government could take.
Similarly, AIDA also defers to future regulations with respect to risk assessments, while the proposed EU AI Act sets out a graduated approach to risk in the body of the act. Under the EU AI Act, systems presenting an unacceptable level of risk are banned outright. In particular, the EU AI Act explicitly bans manipulative or exploitative systems that can cause harm, real-time biometric identification systems used in public spaces by law enforcement, [3] and all forms of social scoring. AI systems presenting low or minimal risk are largely exempt from regulations, except for transparency requirements.
AIDA only imposes transparency requirements on high-impact AI systems, and does not stipulate an outright ban on AI systems presenting an unacceptable level of risk. It does, however, empower the Minister of Industry to order that a high-impact system presenting a serious risk of imminent harm cease being used.
AIDA's application is limited by the constraints of the federal government's jurisdiction. AIDA broadly applies to actors throughout the AI supply chain from design to delivery, but only as their activities relate to international or interprovincial trade and commerce. AIDA does not expressly apply to intra-provincial development and use of AI systems. Government institutions (as defined under the Privacy Act) are excluded from AIDA's scope, as are products, services, and activities that are under the direction or control of specified federal security agencies.
The EU AI Act specifically applies to providers (although this may be interpreted broadly) and users of AI systems, including government institutions but excluding where AI systems are exclusively developed for military purposes. The EU AI Act also expressly applies to providers and users of AI systems insofar as the output produced by those systems is used in the EU.
AIDA is largely silent on requirements with respect to data governance. In its current form, it only imposes requirements on the use of anonymized data in AI systems, most of which will be elaborated in future regulations. AIDA's data governance requirements will apply to anonymized data used in the design, development, or use of any AI system, whereas the EU AI Act's data governance requirements will apply only to high-risk systems.
The EU AI Act sets the bar very high for data governance. It requires that training, validation, and testing datasets be free of errors and complete. In response to criticisms of this standard for being too strict, the European Parliament has introduced an amendment to the act that proposes to make error-free and complete datasets an overall objective to the extent possible, rather than a precise requirement.
While AIDA and the EU AI Act both set out requirements with respect to assessment, monitoring, transparency, and data governance, the EU AI Act imposes a much heavier burden on those responsible for high-risk AI systems. For instance, under AIDA, persons responsible for such systems will be required to implement mitigation, monitoring, and transparency measures. The EU AI Act goes a step further by putting high-risk AI systems through a certification scheme, which requires that the responsible entity conduct a conformity assessment and draw up a declaration of conformity before the system is put into use.
Both acts impose record-keeping requirements. Again, the EU AI Act is more prescriptive, but unlike AIDA, its requirements will only apply to high-risk systems, whereas AIDA's record-keeping requirements would apply to all AI systems.
Finally, both acts contain notification requirements that are limited to high-impact (AIDA) and high-risk (EU AI Act) systems. AIDA imposes a slightly heavier burden, requiring notification for all uses that are likely to result in material harm. The EU AI Act only requires notification if a serious incident or malfunction has occurred.
Both AIDA and the EU AI Act provide for the creation of a new monitoring authority to assist with administration and enforcement. The powers attributed to these entities under both acts are similar.
Both acts contemplate significant penalties for violations of their provisions. AIDA's penalties for more serious offences (up to $25 million CAD or 5% of the offender's gross global revenues from the preceding financial year) are significantly greater than those found in Quebec's newly revised privacy law and the EU's General Data Protection Regulation (GDPR). The EU AI Act's most severe penalty is higher than both the GDPR's and AIDA's: up to €30 million or 6% of gross global revenues from the preceding financial year for non-compliance with prohibited AI practices or the quality requirements set out for high-risk AI systems.
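Both caps follow the "greater of a fixed sum or a share of global revenue" pattern familiar from the GDPR, so which figure bites depends on the offender's size. A quick sketch of the arithmetic, assuming (as under the GDPR) that the higher of the two figures applies; the final statutory texts may differ:

```python
# Maximum penalty ceilings under the two drafts, read as "greater of"
# caps (a GDPR-style assumption; the final texts may differ).
def aida_max_penalty_cad(gross_global_revenue_cad: float) -> float:
    return max(25_000_000, 0.05 * gross_global_revenue_cad)

def eu_ai_act_max_penalty_eur(gross_global_revenue_eur: float) -> float:
    return max(30_000_000, 0.06 * gross_global_revenue_eur)

# For a firm with 2 billion in global revenue, the percentage dominates:
print(aida_max_penalty_cad(2_000_000_000))       # 100,000,000.0 CAD
print(eu_ai_act_max_penalty_eur(2_000_000_000))  # 120,000,000.0 EUR
```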
In contrast to the EU AI Act, AIDA also introduces new criminal offences for the most serious offences committed under the act.
Finally, the EU AI Act would also grant discretionary power to Member States to determine additional penalties for infringements of the act.
While both AIDA and the EU AI Act have broad similarities, it is impossible to predict with certainty how similar they could eventually be, given that so much of AIDA would be elaborated in future regulations. Further, at the time of writing, Bill C-27 has only completed first reading, and is likely to be subject to amendments as it makes its way through Parliament.
It is still unclear how much influence the EU AI Act will have on AI regulations globally, including in Canada. Regulators in both Canada and the EU may aim for a certain degree of consistency. Indeed, many have likened the EU AI Act to the GDPR, in that it may set global standards for AI regulation just as the GDPR did for privacy law.
Regardless of the fates of AIDA and the EU AI Act, organizations should start considering how they plan to address a future wave of AI regulation.
For more information on the potential implications of the new Bill C-27, Digital Charter Implementation Act, 2022, please see our bulletin, The Canadian Government Undertakes a Second Effort at Comprehensive Reform to Federal Privacy Law, on this topic.
[1] There have been a number of recent developments in AI regulation, including the United Kingdom's Algorithmic Transparency Standard, China's draft regulations on algorithmic recommendation systems in online services, the United States' Algorithmic Accountability Act of 2022, and the collaborative effort between Health Canada, the FDA, and the United Kingdom's Medicines and Healthcare Products Regulatory Agency to publish Guiding Principles on Good Machine Learning Practice for Medical Device Development.
[2] In the public sphere, the Directive on Automated Decision-Making guides the federal government's use of automated decision systems.
[3]This prohibition is subject to three exhaustively listed and narrowly defined exceptions where the use of such AI systems is strictly necessary to achieve a substantial public interest, the importance of which outweighs the risks: (1) the search for potential victims of crime, including missing children; (2) certain threats to the life or physical safety of individuals or a terrorist attack; and (3) the detection, localization, identification or prosecution of perpetrators or suspects of certain particularly reprehensible criminal offences.
[4]As an indication of potential changes, the Slovenian Presidency of the Council of the European Union tabled a proposed amendment to the act in November 2021 that would effectively narrow the scope of the regulation to machine learning.
iPipeline Partners with iCover, Harnessing AI Underwriting to Optimize the Insurance Buying Experience – www.wfmz.com
Posted: October 13, 2022 at 1:33 pm
iCover's Algorithmic Underwriting Enables Fast and Accurate Risk Assessment at the Point of Sale, and Significantly Higher Rates of Instant Decisions
EXTON, Pa., Sept. 15, 2022 /PRNewswire/ -- iPipeline announced that it has entered into a strategic partnership with iCover, an InsurTech provider of algorithmic underwriting for streamlining the assessment, pricing, and delivery of life insurance. This strategic alliance harnesses the power of both platforms, with iCover helping to strengthen iPipeline's existing automated capabilities within its Resonant solution, to create a faster life insurance sales cycle.
iCover, which will become available via iPipeline's iGO e-App, uses artificial intelligence (AI) predictive modeling to assess mortality risk, reducing the digital consumer application process to five minutes and producing fully underwritten offers without labs, exams, or attending physician statements.
"As the demand for AI-driven automated medical underwriting grows, we have partnered with an innovative and progressive AI provider to create an even faster end-to-end selling process," said Deane Price, Chief Executive Officer of iPipeline. "The combination of iPipeline and iCover will automate and simplify a process which can normally take weeks, while increasing sales and lowering overhead. This is a game-changing enhancement for the life insurance industry, and we are proud to offer it in our ecosystem."
The strategic initiative with Chesterfield, MO-based iCover will offer iPipeline users several new capabilities.
"We built a powerful decision framework that has revolutionized underwriting," said Hari Srinivasan, Founder and CEO of iCover. "Working together with iPipeline, we can help insurers expand their reach and sell to a greater number of customers through a seamless, digital experiencethat just takes a few minutes."
About iPipeline
iPipeline is building a comprehensive digitized ecosystem for the life insurance and wealth management industries, which will enable millions of uninsured or under-insured Americans to secure their financial futures as part of a holistic financial planning experience. The firm is working to optimize all application and processing workflows, from quote to commission, and consolidating them within one of the most expansive straight-through processing platforms, significantly reducing paper, saving time, and increasing premiums and placements for insurance agents. iPipeline is also committed to offering premier subscription-based tools to help financial institutions and advisors automate and digitize financial transactions, comply with regulations, and seamlessly incorporate life insurance and annuities into client accounts.
The iPipeline digital ecosystem incorporates one of the industry's largest data sets to enable advisors and agents to optimize their businesses. Since its establishment in 1995, iPipeline has facilitated 1.5 billion quote responses, $32 billion in savings on printing and mailing costs, the collection of $55 billion in premiums, and the protection of 25 million lives. iPipeline operates as a unit of Roper Technologies (NYSE: ROP), a constituent of the S&P 500 and Fortune 500 indices. For more information, please visit https://www.ipipeline.com/.
About iCover
iCover is a cloud-based algorithmic underwriting platform that helps insurers sell to the middle market. By leveraging data and predictive analytics, iCover can quote, underwrite, and deliver life insurance in under 5 minutes. iCover was built by industry insiders Hari Srinivasan and Nicole Mwesigwa, who applied their 30+ years of InsurTech experience and intimate knowledge of automated underwriting technologies. To learn more about iCover, visit http://www.icoverinsure.com.
Media Contacts:
Laura Simpson
JConnelly for iPipeline
973-713-8834
Hari Srinivasan
CEO & Founder, iCover
314-255-3861
View original content to download multimedia: https://www.prnewswire.com/news-releases/ipipeline-partners-with-icover-harnessing-ai-underwriting-to-optimize-the-insurance-buying-experience-301625702.html
SOURCE iPipeline
Misinformation research relies on AI and lots of scrolling – NPR
Posted: at 1:33 pm
Atilgan Ozdil/Anadolu Agency/Getty Images
What sorts of lies and falsehoods are circulating on the internet? Taylor Agajanian used her summer job to help answer this question, one post at a time. It often gets squishy.
She reviewed a social media post where someone had shared a news story about vaccines with the comment "Hmmm, that's interesting." Was the person actually saying that the news story was interesting, or insinuating that the story isn't true?
Agajanian often read around and between the lines while working at the University of Washington's Center for an Informed Public, where she reviewed social media posts and recorded misleading claims about COVID-19 vaccines.
As the midterm election approaches, researchers and private sector firms are racing to track false claims about everything from ballot harvesting to voting machine conspiracies. But the field is still in its infancy even as the threats to the democratic process posed by viral lies loom. Getting a sense of which falsehoods people online talk about might sound like a straightforward exercise, but it isn't.
"The broader question is, can anyone ever know what everybody is saying?" says Welton Chang, CEO of Pyrra, a startup that tracks smaller social media platforms. (NPR has used Pyrra's data in several stories.)
Automating some of the steps the University of Washington team uses humans for, Pyrra uses artificial intelligence to extract names, places and topics from social media posts. Using the same technologies that have in recent years enabled AI to write remarkably like humans, the platform generates summaries of trending topics. An analyst reviews the summaries, weeds out irrelevant items like advertising campaigns, gives them a light edit and shares them with clients.
A recent digest of such summaries includes the unsubstantiated claim "Energy infrastructure under globalist attack."
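Pyrra has not published its pipeline, but the extraction step described above, pulling names, places and topics out of raw posts, can be approximated with off-the-shelf NLP tooling. A minimal sketch using spaCy's pretrained English model; the example posts are invented:

```python
# Sketch of the extraction step described above: pull named entities
# (people, places, organizations) out of raw social media posts.
# Assumes: pip install spacy && python -m spacy download en_core_web_sm
# The example posts are invented; Pyrra's actual pipeline is not public.
import spacy
from collections import Counter

nlp = spacy.load("en_core_web_sm")

posts = [
    "Energy infrastructure under globalist attack, says forum user in Texas",
    "Ballot harvesting rumors spread again ahead of the midterms in Arizona",
]

entity_counts = Counter()
for doc in nlp.pipe(posts):
    for ent in doc.ents:
        if ent.label_ in {"PERSON", "GPE", "ORG", "EVENT"}:
            entity_counts[(ent.text, ent.label_)] += 1

# The most frequently mentioned entities become candidate "trending
# topics" for an analyst to review — the human-in-the-loop step above.
for (text, label), n in entity_counts.most_common(10):
    print(f"{label:8} {text}: {n}")
```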
The University of Washington's and Pyrra's approaches sit at the more extreme ends of the spectrum in terms of automation: few teams have so many staff (around 15) just to monitor social media, or rely so heavily on algorithms to synthesize material and output.
All methods carry caveats. Manually monitoring and coding content can miss developments, and while artificial intelligence is capable of processing huge amounts of data, it struggles to handle nuances such as distinguishing satire from sarcasm.
Although incomplete, having a sense of what's circulating in the online discourse allows society to respond. Research into voting-related misinformation in 2020 has helped inform election officials and voting rights groups about what messages to emphasize this year.
For responses to be proportionate, society also needs to evaluate the impact of false narratives. Journalists have covered misinformation spreaders who seem to have very high total engagement numbers but limited impact, which risks "spreading further hysteria over the state of online operations," wrote Ben Nimmo, who now investigates global threats at Meta, Facebook's parent company.
While language can be ambiguous, it's more straightforward to track who's been following and retweeting whom. Other researchers analyze networks of actors as well as narratives.
The plethora of approaches is typical of a field that's just forming, says Jevin West, who studies the origins of academic disciplines at the University of Washington's Information School. Researchers come from different fields and bring methods they're comfortable with to start, he says.
West corralled research papers from the academic database Semantic Scholar mentioning 'misinformation' or 'disinformation' in their title or abstract, and found that many papers come from medicine, computer science and psychology, with others from geology, mathematics and art.
"If we're a qualitative researcher, we'll go...and literally code everything that we see." West says. More quantitative researchers do large scale analysis like mapping topics on Twitter.
Projects often use a mix of methods. "If [different methods] start converging on similar kinds of...conclusions, then I think we'll feel a little bit better about it." West says.
One of the very first steps of misinformation research - before someone like Agajanian starts tagging posts - is identifying relevant content under a topic. Many researchers start their search with expressions they think people talking about the topic could use, see what other phrases and hashtags appear in the search results, add that to the query, and repeat the process.
It's possible to miss out on keywords and hashtags, not to mention that they change over time.
"You have to use some sort of keyword analysis. " West says, "Of course, that's very rudimentary, but you have to start somewhere."
Some teams build algorithmic tools to help. A team at Michigan State University manually sorted over 10,000 tweets into pro-vaccine, anti-vaccine, neutral and irrelevant buckets as training data. The team then used the training data to build a tool that sorted over 120 million tweets into those buckets.
For the automatic sorting to remain relatively accurate as the social conversation evolves, humans have to keep annotating new tweets and feeding them into the training set, Pang-Ning Tan, a co-author of the project, told NPR in an email.
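NPR describes the Michigan State pipeline only at a high level, but the general shape of such a classifier is standard. A minimal sketch using scikit-learn, with invented tweets standing in for the team's 10,000 hand-coded examples:

```python
# Four-way tweet classifier of the kind described above: train on
# hand-labeled tweets, then sort the unlabeled firehose. The labels
# and training tweets are invented stand-ins; the real project used
# 10,000+ manually coded tweets and periodic re-annotation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "vaccines save lives, get your booster",     # pro-vaccine
    "the vaccine is a government experiment",    # anti-vaccine
    "the clinic opens at 9am for vaccinations",  # neutral
    "great pasta recipe for tonight",            # irrelevant
] * 25  # toy repetition so the model has something to fit

train_labels = ["pro", "anti", "neutral", "irrelevant"] * 25

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(train_texts, train_labels)

print(model.predict(["boosters work, science says so"]))
# As the conversation drifts, new tweets must be hand-labeled and the
# model refit — the maintenance loop Tan describes above.
```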
If the interplay between machine detection and human review rings familiar, that might be because large social platforms like Facebook, Twitter and TikTok describe similar processes for moderating content.
Unlike the platforms, researchers face another fundamental challenge: data access. Much misinformation research uses Twitter data, in part because Twitter is one of the few social media platforms that easily lets users tap into its data pipeline, known as an Application Programming Interface, or API. This allows researchers to easily download and analyze large numbers of tweets and user profiles.
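For example, pulling recent tweets matching a query takes only a few lines against Twitter's v2 recent-search endpoint. A minimal sketch, assuming a bearer token from a developer account; the endpoint and parameters are as documented at the time of writing:

```python
# Pull recent tweets matching a query from Twitter's v2 API.
# Assumes a developer account; the endpoint below is Twitter's
# documented v2 recent-search URL as of late 2022.
import os
import requests

BEARER = os.environ["TWITTER_BEARER_TOKEN"]

resp = requests.get(
    "https://api.twitter.com/2/tweets/search/recent",
    headers={"Authorization": f"Bearer {BEARER}"},
    params={"query": "ballot harvesting -is:retweet lang:en",
            "max_results": 50},
    timeout=30,
)
resp.raise_for_status()

for tweet in resp.json().get("data", []):
    print(tweet["id"], tweet["text"][:80])
```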
The data pipelines of smaller platforms tend to be less well-documented and could change on short notice.
Take the recently deplatformed Kiwi Farms as an example. The site served as a forum for anti-LGBTQ activists to harass gay and trans people. "When it first went down, we had to wait for it to basically pop back up somewhere, and then for people to talk about where that somewhere is," says Chang.
"And then we can identify, okay, the site is now here - it has this similar structure, the API is the same, it's just been replicated somewhere else. And so we're redirecting the data ingestion and pulling content from there."
Facebook's data service CrowdTangle, while purporting to serve up all publicly available posts, has been found not to have done so consistently. On another occasion, Facebook bungled data sharing with researchers. Most recently, Meta is winding down CrowdTangle, with no alternative announced to take its place.
Other large platforms, like YouTube and TikTok, do not have an accessible API, a data service or collaborations with researchers at all. TikTok has promised more transparency for researchers.
In such a vast, fragmented, and shifting landscape, West says there's no great way at this point to say what's the state of misinformation on a given topic.
"If you were to ask Mark Zuckerberg, what are people saying on Facebook today? I don't think he could tell you." says Chang.
AI Shouldn't Compete With Workers – It Should Supercharge Them – WIRED
Posted: at 1:33 pm
Instead of merely saving costs by replacing humans with a bot, Brynjolfsson notes, augmentation increases people's productivity. Better yet, some of the economic value of that productivity would accrue to workers because their augmented labor would become more valuable. It wouldn't all be hoovered up by the billionaire owners of the tech.
The catch is that augmentation is hard. When you're simply mimicking human behavior, you know (more or less) whether you've nailed it. (The computer can play checkers: success!) But inventing a form of AI that's usefully different from the way humans operate requires more imagination. You have to think about how to create silicon superpowers that fit hand-in-glove with the abilities unique to people, such as our fuzzy, aha intuition; our common-sense reasoning; and our ability to deal creatively with rare edge cases.
"It's 100 times easier to look at something existing and think, OK, can we substitute a machine or a human there? The really hard thing is, let's imagine something that never existed before," Brynjolfsson says. "But ultimately that second way is where most of the value comes from."
At the Stanford Institute for Human-Centered AI, director Fei-Fei Li wanted to know what people actually wish to have automated. Her group went to the US government's American Time Use Survey, which chronicles people's daily tasks. Li's team picked 2,000 everyday activities that could viably be done by AI and robots, then asked people to rate how much they wanted each task automated, with zero being "hell no, I don't want robots to do this," and the maximum being "please, I'm dying to have a robot do this," Li says.
"Open a Christmas present for me" was zero; cleaning the toilet was high. Obvious enough, but there was more complex stuff in the middle, such as recommending a book. The only way to find out what people want, Li notes, is by asking them, not by barging ahead and designing AI based on sci-fi fantasies.
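Turning such ratings into a ranked automation wish list is straightforward. A toy sketch with invented scores; the real study covered 2,000 activities drawn from the American Time Use Survey:

```python
# Rank everyday tasks by their mean "please automate this" rating,
# as in the Stanford HAI survey described above. Ratings are invented.
from statistics import mean

ratings = {
    "clean the toilet":          [5, 5, 4, 5],
    "recommend a book":          [2, 3, 1, 4],
    "open a Christmas present":  [0, 0, 1, 0],
}

for task, scores in sorted(ratings.items(),
                           key=lambda kv: mean(kv[1]), reverse=True):
    print(f"{mean(scores):.2f}  {task}")
```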
Here's another wrinkle: It's not always obvious how the two kinds of AI are different.
One could argue that DALL-E and other image generators are a pure Turing play because they replicate the human ability to create art. The internet currently groans under the weight of essays claiming human artists are about to be serially unemployed by AI. But creators can also use the apps to punch above their weight, such as when a video game designer used Midjourney to generate art for a space shooter. That looks a lot like augmentation.
What's more, many jobs are harder to entirely automate than you might think. In 2016, deep-learning pioneer Geoff Hinton argued that we should stop training radiologists because "it's just completely obvious that within five years, deep learning is going to do better than radiologists." (He added that it might take 10 years.) But there are still tons of radiologists employed, and there probably will be in the future, because the job of a radiologist is more complicated than Hinton suggests, as noted by Andrew McAfee, a colleague and coauthor of Brynjolfsson's who codirects the MIT Initiative on the Digital Economy. AI might be better at noticing potential tumors on scans, but that's only one small part of a radiologist's job. The rest of it includes preparing treatment plans and interacting with scared patients. Tumor-spotting AIs, then, might be better seen as augmenting those doctors.
To nudge companies away from Turingism, Brynjolfsson suggests some changes to government policy. One area ripe for reform is the US tax code. Right now, it taxes labor more harshly than capital, as recent work by the Brookings Institution found. Companies get better tax treatment when they buy robots or software to replace humans because of write-offs such as capital depreciation. So the tax code essentially encourages firms to automate workers off the payroll, rather than keeping them and augmenting them.
"We subsidize capital and we tax labor," Brynjolfsson says. "So right now we're pushing entrepreneurs, whether they want to or not, to try to figure out ways to replace human labor. If we flip that around, or even just level the playing field, then entrepreneurs would figure out a better way." That might be one way out of the trap.
NATO Allies take further steps towards responsible use of AI, data, autonomy and digital transformation – NATO HQ
Posted: at 1:33 pm
On Thursday (13 October), NATO Defence Ministers agreed to establish a Review Board to govern the responsible development and use of Artificial Intelligence (AI) and data across the NATO Enterprise.
The Board's first task will be to develop a user-friendly Responsible AI certification standard, including quality controls and risk mitigation, that will help align new AI and data projects with NATO's Principles of Responsible Use approved in October 2021. The Board will also serve as a unique platform to exchange best practices, guide innovators and operational end-users throughout the development phase, thereby contributing to building trust within the innovation community. At present, NATO is piloting AI in areas as diverse as cyber defence, climate change and imagery analysis.
In response to the 2022 Strategic Concept's call to expedite digital transformation, NATO Allies also approved NATO's first Digital Transformation vision. By 2030, NATO's Digital Transformation will enable the Alliance to conduct multi-domain operations, ensure interoperability across all domains, enhance situational awareness, and facilitate political consultation and data-driven decision-making.
NATO's efforts in emerging and disruptive technologies, NATO's AI Strategy and NATO's data exploitation framework policy will contribute to bringing the vision to life. Additional steps were taken with Defence Ministers' endorsement of priority areas for applying advanced data analysis, including to enable multi-domain operations and enhance situational awareness, and the approval of NATO's first autonomy implementation plan.
AI, data exploitation and autonomy are among the nine technological areas of priority to NATO. These also include: quantum-enabled technologies, biotechnology and human enhancements, hypersonic technologies, novel material and manufacturing, energy and propulsion, and space.
The messy morality of letting AI make life-and-death decisions – MIT Technology Review
Posted: at 1:33 pm
By the 2000s, an algorithm had been developed in the US to identify recipients for donated kidneys. But some people were unhappy with how the algorithm had been designed. In 2007, Clive Grawe, a kidney transplant candidate from Los Angeles, told a room full of medical experts that their algorithm was biased against older people like him. The algorithm had been designed to allocate kidneys in a way that maximized years of life saved. This favored younger, wealthier, and whiter patients, Grawe and other patients argued.
Such bias in algorithms is common. What's less common is for the designers of those algorithms to agree that there is a problem. After years of consultation with laypeople like Grawe, the designers found a less biased way to maximize the number of years saved by, among other things, considering overall health in addition to age. One key change was that the majority of donors, who are often people who have died young, would no longer be matched only to recipients in the same age bracket. Some of those kidneys could now go to older people if they were otherwise healthy. As with Scribner's committee, the algorithm still wouldn't make decisions that everyone would agree with. But the process by which it was developed is harder to fault.
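To make the design change concrete: the sketch below is purely illustrative, not the actual US allocation algorithm, and every field and weight is invented. It shows the shift the designers made, scoring on expected benefit and overall health rather than on age brackets alone, so that a healthy older candidate can outrank a less healthy younger one:

```python
# Purely illustrative candidate-scoring sketch — NOT the real US
# kidney allocation algorithm. All fields and weights are invented.
from dataclasses import dataclass

@dataclass
class Candidate:
    age: int
    health_index: float  # invented 0..1 composite of overall health
    wait_years: float

def score(c: Candidate) -> float:
    # Expected years-of-life benefit, tempered by overall health,
    # plus a waiting-time term (weights are arbitrary for illustration).
    expected_benefit = max(0.0, 80 - c.age) * c.health_index
    return expected_benefit + 0.5 * c.wait_years

candidates = [Candidate(45, 0.3, 1.0), Candidate(68, 0.9, 4.0)]
for c in sorted(candidates, key=score, reverse=True):
    print(round(score(c), 1), c)
# The healthy 68-year-old (12.8) outranks the less healthy
# 45-year-old (11.0), which a pure age-bracket match would forbid.
```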
Nitschke, too, is asking hard questions.
A former doctor who burned his medical license after a years-long legal dispute with the Australian Medical Board, Nitschke has the distinction of being the first person to legally administer a voluntary lethal injection to another human. In the nine months between July 1996, when the Northern Territory of Australia brought in a law that legalized euthanasia, and March 1997, when Australia's federal government overturned it, Nitschke helped four of his patients to kill themselves.
The first, a 66-year-old carpenter named Bob Dent, who had suffered from prostate cancer for five years, explained his decision in an open letter: If I were to keep a pet animal in the same condition I am in, I would be prosecuted.
Nitschke wanted to support his patients' decisions. Even so, he was uncomfortable with the role they were asking him to play. So he made a machine to take his place. "I didn't want to sit there and give the injection," he says. "If you want it, you press the button."
The machine wasn't much to look at: it was essentially a laptop hooked up to a syringe. But it achieved its purpose. The Sarco is an iteration of that original device, which was later acquired by the Science Museum in London. Nitschke hopes an algorithm that can carry out a psychiatric assessment will be the next step.
But there's a good chance those hopes will be dashed. Creating a program that can assess someone's mental health is an unsolved problem, and a controversial one. As Nitschke himself notes, doctors do not agree on what it means for a person of sound mind to choose to die. "You can get a dozen different answers from a dozen different psychiatrists," he says. In other words, there is no common ground on which an algorithm could even be built.
AI in healthcare: from full-body scanning to fall prevention – Healthcare IT News
Posted: at 1:33 pm
Deepak Gaddipati is founder and chief technology officer at VirtuSense, an artificial intelligence company that aims to transform healthcare from reactive to proactive, alerting care teams of adverse events, such as falls, sepsis and heart attacks, before they occur.
Gaddipati invented the first commercial full-body, automated, AI-powered scanning system, which is widely deployed across most U.S. airports.
He is steeped in the power of AI. Healthcare IT News sat down with Gaddipati to discuss some of his work in healthcare with AI and where he sees the technology headed.
Q. You invented the full-body scanning system. You suggest you can take this AI technology from airports to healthcare and improve efficiencies and drive better outcomes. How?
A. AI already is around us: it's in our cars, TVs, phones, favorite streaming services and much more. AI enables these devices to interpret data and make informed, unbiased decisions.
Just as airport security systems use this data interpretation to automate security processes, AI can do the same in healthcare. With AI, you can proactively and efficiently identify any threats before they become detrimental. It's a matter of training AI to find the data you care about.
Airport scanners are trained to find "never events," such as weapons, illegal substances, etc., making their way onto the plane. The same vision can be applied to AI in healthcare. With millions of data points captured for a single patient, healthcare providers can proactively and efficiently protect patients from medical threats and adverse events such as falls, sepsis, heart attacks and pressure ulcers by training AI to identify the data pattern that indicates that malady.
Today's healthcare system is built on sick care. With AI, we can help transition from sick care to healthcare through early detection of these and many other medical conditions.
Q. You have a mission to prevent falls because, sadly, your grandmother fell and passed away within 10 days. How does AI technology help prevent falls?
A. Yes, my mission to prevent falls is very personal. In 2009, my grandmother, who was healthy and had no severe medical issues, fell while walking to the bank and broke her hip. She died within ten days of the injury.
Even though there were several physicians in our family, she had never been offered existing interventions because she was never identified as a fall risk in the first place. Generally speaking, to be identified as a fall risk, you must first fall, and for many people, that is too late. So many people across the country have similar stories.
So, I wanted to develop AI solutions that prevent falls both in the long term and short term. For the long term, it was about being able to identify and take care of deficits before they become severe, and making that detection accurate, efficient and seamless, so it would be used.
Medicine has standardized tests and assessments for balance and function, but they take time to set up and conduct, and there's always room for human error. So, combining those evidence-based assessments for gait, balance and function with a highly specific AI trained for the smallest variants meant patients' mobility deficits could be proactively identified before they fall. From there, doctors can develop a care plan to help regain strength and mobility.
Short-term fall prevention, stopping falls just before they happen, is trickier. Proactive detection of an individual trying to get up from a bed or chair is essential, as that is the vital moment. By collecting millions of hours of data on what people do before they get up from a bed or chair, AI tools can be trained to proactively detect if a person is going to get up from the bed or chair.
From there, tools need to interface with other tech capabilities: immediate alerting, communication with patients, and nurse coordination. It takes many different pieces to create an AI tool that really works in practice.
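VirtuSense has not published its internals, but the loop Gaddipati describes (watch a sensor stream, predict an imminent bed exit, alert before the patient is upright) has a simple skeleton. A hypothetical sketch; the sensor reader, model and alert hook below are invented stand-ins, not the product's actual interfaces:

```python
# Hypothetical skeleton of a proactive bed-exit alert loop. The sensor
# reader, model, and nurse-alert hook are invented stand-ins.
import random
import time
from collections import deque

WINDOW = 30      # last 30 sensor frames (~3 s at 10 Hz)
THRESHOLD = 0.8  # alert when predicted exit probability exceeds this

def read_motion_frame():
    # Invented stand-in: a real system would read a depth/lidar frame.
    return [random.random() for _ in range(8)]

def predict_exit_probability(frames):
    # Invented stand-in for a trained movement model.
    return random.random()

def alert_care_team(room):
    # Invented stand-in for a pager / nurse-station integration.
    print(f"ALERT: likely bed exit in {room}")

def monitor(room):
    frames = deque(maxlen=WINDOW)
    while True:
        frames.append(read_motion_frame())
        if len(frames) == WINDOW:
            if predict_exit_probability(list(frames)) > THRESHOLD:
                # Fire *before* the patient is upright — the proactive
                # window the article contrasts with reactive systems.
                alert_care_team(room)
        time.sleep(0.1)

monitor("room 12")  # runs until interrupted
```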
It is important to understand that not all AI is the same. Many of the solutions on the market are reactive, detecting and analyzing an event upon its completion. The next level is an AI solution that detects the moment before the event happens to really make care preventative and proactive.
Q. How can AI solve challenges plaguing healthcare today, such as staffing shortages and skyrocketing costs?
A. Healthcare organizations are feeling a squeeze on all sides right now, from staffing challenges to rising costs, so it is vital that the tools they adopt actively address both concerns. AI, as a tool, is particularly good at tackling routine problems.
Preventable events, such as hospital-acquired infections and patient falls, are perfect examples of problems that get worse with staffing and resource shortages and are perfectly suited for AI intervention.
For instance, many hospital fall-prevention strategies currently rely on employing bedside sitters and tele-sitters to monitor patients who are at risk of falling. Both approaches rely on staff to stay vigilant while performing mundane work, while also taking those employees out of an active care role.
AI specializes in 24/7 vigilance and pattern recognition, making it a perfect tool to maintain safety and get employees back to performing care, instead of waiting to perform care. AI can transform hours of watching, waiting and record-keeping into a direct notification when action is needed, saving time and relieving task overload from nursing teams.
From a cost perspective, reducing adverse health events directly eases financial strain. To use falls as an example, patients over the age of 65 are 33% more likely to fall. On average, 20% of those who fall will get a major injury. The cost of treating these major falls averages around $34,000 per instance.
On top of this, an elderly person that falls has a 70% likelihood of dying as a result of complications from their fall. Statistically, falls will happen and 20% of those falls will cost the organization financially through direct care costs, staff hours, quality penalties and insurance claims.
Today, the cost of monitoring high-risk patients with sitters or tele-sitters can become astronomical, and certainly unfeasible for patients who are classified as lower risk. But when you introduce AI, unit-wide monitoring, even hospital-wide monitoring, becomes financially feasible, doesn't require increased staffing, and prevents falls. The same can be said for solutions that use AI to prevent pressure ulcers, sepsis and other standard hospital risks.
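Taking the article's own figures at face value, the expected direct cost per fall is simple arithmetic:

```python
# Expected direct treatment cost per fall, using the figures quoted above.
p_major_injury = 0.20    # 20% of falls cause a major injury
cost_major_usd = 34_000  # average treatment cost of a major fall

expected_cost_per_fall = p_major_injury * cost_major_usd
print(expected_cost_per_fall)  # 6800.0 — before staff hours, quality
                               # penalties, and insurance claims
```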
Leveraging AI to transform healthcare is the future. There are numerous studies showing an uptick in AI being used across the industry.
I most recently came across the 3rd Annual Optum Survey on AI in Health Care report, which stated that 83% of healthcare executives already have an AI strategy and another 15% plan to implement one. Fifty-nine percent of the respondents stated they expect to see tangible cost savings from AI, which is a 90% jump compared to those surveyed in 2018.
Twitter: @SiwickiHealthIT. Email the writer: bsiwicki@himss.org. Healthcare IT News is a HIMSS Media publication.
Intel and How AI is Transforming the PC – Datamation
Posted: at 1:33 pm
I was at Intel's powerful Israel Development Centre (IDC) recently, and it is a fascinating place. Pat Gelsinger, Intel's CEO, is positioning IDC as the heart of Intel to address long-term diversity issues.
Israel has a culture where women are treated more equally than most other geographies I have studied, and IDC has an unusually high level of women engineers in senior operational and development roles. One of the ways you effectively break a glass ceiling is to start at the top. And Gelsinger is using IDC as a template for change across Intel, which, based on my conversations with Intel employees, has significantly changed Intel's culture for the better. In effect, IDC is leading Intel's renaissance as the company positions for a far better future.
This is having a significant impact on product direction, with a sharp focus on applied artificial intelligence (AI) at the edge. And the next generation of PCs that will make use of the related advancements will, as a result, be significantly improved in several ways.
Here are some of the improvements that Intel's IDC group shared during my visit:
Intel showcased improvements in both camera resolution and auto-centering. When you are doing a video conference, the quality of the camera has a direct impact on how you are perceived by others. But we all have lighting, centering, focus, and connectivity issues. Using their AI capabilities, IDC demonstrated significant improvements over similar products in both auto-framing and resolution.
Better video quality is not only critical for how you look, but for anything you are sharing. While we are still working in hybrid form, the ability to share what the PC's camera sees with teammates without requiring a second camera, particularly when you are using a laptop and don't have a separate one, could be very important if you do that a lot.
It is interesting to note that Intel went into video conferencing long ago and struggled with video quality. It is fascinating that Intel, decades later, has addressed this endemic issue, and it is working in some of the latest Intel-based products.
Another interesting video conferencing use of AI is better use of the PC's multiple radios. Not only will the future PC be able to connect to multiple Wi-Fi 6 connections, but also to 5G simultaneously. So if you are in a meeting, it can aggregate the wireless connections and get a near optical-cable level of performance. Should you lose all but one of those connections, the video will, after a short pause, continue without dropping the call.
Another use of AI that was demonstrated was the use of Wi-Fi as a proximity detector to better secure the privacy of your PC.
Using Wi-Fi, the PC will be able to tell when you are close to the PC and when you leave, either waking up the PC when you sit down in front of it or suspending and securing the PC when you leave your desk.
This reminded me that when I first started working in the tech industry, I used to go in and play pranks on my boss when he left his PC unsecured. My favorite prank was to load an app that caused the letters on his screen to increasingly fall off the screen, as if there was failing adhesive holding them on. Thinking back, I am surprised I did not get fired for doing that.
But in use, this is cool. When you approach, the PC starts to power up, and in a few seconds, it is ready to log you in. If you have used Windows Hello for facial recognition, you are back to work and functioning again without the risk that someone had access to your stuff while you were away. It is impressively quick.
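Intel has not published this interface, but the behavior described (wake on approach, lock on departure) amounts to thresholding a presence signal with hysteresis. A hypothetical sketch; the presence scores are simulated and the lock/wake hooks are just prints:

```python
# Hypothetical proximity lock/wake loop. The Wi-Fi sensing reader and
# OS hooks are invented stand-ins — the real implementation lives in
# firmware and drivers, not application Python.
import itertools
import time

NEAR, FAR = 0.7, 0.3  # hysteresis thresholds avoid flapping at one cutoff

def presence_scores():
    # Invented stand-in: a real system derives this from Wi-Fi sensing.
    yield from itertools.cycle([0.1, 0.2, 0.8, 0.9, 0.8, 0.2, 0.1])

user_present = False
for score in presence_scores():
    if not user_present and score > NEAR:
        user_present = True
        print("wake PC and show login (hand off to facial recognition)")
    elif user_present and score < FAR:
        user_present = False
        print("lock and secure the session")
    time.sleep(1)
```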
While these camera capabilities have already appeared on some of Microsoft's Surface products and HP's Dragonfly line, most of these features will appear in the next generations of laptop and desktop PCs.
And this is only the beginning, as Intel and others continue to develop AI capabilities that will become more prevalent in PCs, like being able to automatically author papers from outlines and create unique images by simply describing what you want to create.
What Intel is doing is just the beginning of applied AI in PCs. What's to come from Intel and other vendors will be even more amazing.
Haleon and Microsoft use AI to enhance health product accessibility for people who are blind or partially sighted – Stories – Microsoft
Posted: at 1:33 pm
REDMOND, Wash. Oct. 12, 2022 On Wednesday, Haleon, a global leader in consumer health, and Microsoft Corp. announced a new collaboration to make everyday health more accessible for people who are blind, have low vision or have difficulty reading product labels due to low literacy. Together, the companies are expanding functionality in the Microsoft Seeing AI app to provide consumers with more detailed labelling information for over 1,500 Haleon products across the U.S. and U.K. Seeing AI is a free mobile app designed to help people who have trouble seeing by narrating the world around them.
With today's launch on World Sight Day, people will hear packaging information through Seeing AI by scanning the barcode of Haleon products. This will provide an audio read-out of important information, such as product name, ingredients and usage instructions. Through Seeing AI's enhanced functionality, Haleon will help empower people to care for their own health independently by listening to label information narrated through the Seeing AI application.
Haleon's inaugural Health Inclusivity Index, which sets a new global standard for measuring health inclusivity, makes clear that to improve health inclusivity, individuals and communities need to be provided with the power and the tools to truly take their health into their own hands. Haleon, driven by its purpose to deliver better everyday health with humanity, is committed to helping make healthcare more achievable, inclusive and sustainable. The Seeing AI collaboration with Microsoft is one of Haleon's first new initiatives to champion health inclusivity, and the Microsoft Seeing AI app can benefit a broad range of users.
The Seeing AI app was developed by a team of Microsoft engineers spearheaded by project lead and engineering manager Saqib Shaikh, who lost his sight at the age of seven and was driven to develop the app by his passion for using technology to improve people's lives.
Saqib Shaikh, engineering manager at Microsoft, said: "I'm really excited to see the launch of this enhanced product recognition functionality, developed in collaboration with Haleon. Seeing AI's intelligent barcode scanner plays audio cues to help you find the barcode, and now the information displayed for Haleon products is coming straight from the manufacturer, providing richer information including usage instructions and ingredients. This can be invaluable for someone who cannot read the label, leading to greater independence."
Katie Williams, U.S. chief marketing officer at Haleon said, We believe everyone should have access to self-care products, services and the information needed to make informed, proactive choices about their health needs. Haleon initiated this collaboration with Microsoft via its Seeing AI app to make consumer health more accessible, achievable and inclusive. We are proud to help make better everyday health more in reach for the blind and those with low vision.
The Seeing AI app is free to download from the Apple App Store and will be available on Android in the future. To use Seeing AI on Haleon's products, users should hold their phone camera over the packaging barcode. The app will read out the product name and all text on the package. Users can skip ahead or move back to the relevant section they want to listen to, for example, which flavor or how to use the product. The Haleon barcode functionality will launch today in the U.S. and U.K. first, with plans to expand globally and add additional languages in the future.
About Haleon U.S.
Haleon (NYSE: HLN) is a leading global consumer health company with a portfolio of brands trusted daily by millions of people. In the United States, the organization employs more than 4,700 people who are united by Haleon's powerful purpose to deliver better everyday health with humanity. Haleon's products span five categories: Oral Health, Pain Relief, Respiratory Health, Digestive Health, and Wellness. Built on scientific expertise, innovation, and deep human understanding, Haleon's brands include Abreva, Advil, Benefiber, Centrum, ChapStick, Emergen-C, Excedrin, Flonase, Gas-X, Natean, Nexium, Nicorette, Parodontax, Polident, Preparation H, Pronamel, Sensodyne, Robitussin, Theraflu, TUMS, Voltaren, and more. For more information on Haleon and its brands, please visit http://www.haleon.com or contact [emailprotected].
About the Haleon Health Inclusivity Index
Today's announcement closely follows the launch of the Health Inclusivity Index, developed by Economist Impact and supported by Haleon. The world-first global study of 40 countries measures how successful countries are in using policy to remove the personal, social, cultural, and political barriers which could otherwise prevent people and communities from achieving good physical and mental health. The number of countries assessed in the study will grow to over 80 over the next two years as part of a new three-year partnership between Haleon and Economist Impact. The report has been commissioned by Haleon as part of its commitment to making better everyday health more achievable, inclusive and sustainable, with the company aiming to create more opportunities for people to be included in everyday health, reaching 50 million people a year by 2025.
About Microsoft
Microsoft (Nasdaq MSFT @microsoft) enables digital transformation for the era of an intelligent cloud and an intelligent edge. Its mission is to empower every person and every organization on the planet to achieve more.
For more information, press only:
Microsoft Media Relations, WE Communications for Microsoft, (425) 638-7777, [emailprotected]
Meghan Sowa, Haleon U.S., (919) 864-0953, [emailprotected]
Note to editors: For more information, news and perspectives from Microsoft, please visit the Microsoft News Center at http://news.microsoft.com. Web links, telephone numbers and titles were correct at time of publication but may have changed. For additional assistance, journalists and analysts may contact Microsoft's Rapid Response Team or other appropriate contacts listed at https://news.microsoft.com/microsoft-public-relations-contacts.
Construction Discovery Experts Expands Partnership with Reveal to Create Custom AI Models for Construction Litigation & Workflows – Business Wire
Posted: at 1:33 pm
CHICAGO & DALLAS--(BUSINESS WIRE)--Construction Discovery Experts (CDE), the eDiscovery firm custom built for the construction industry, and Reveal, the global provider of the leading AI-powered eDiscovery platform Reveal 11, announced today a significant expansion of the firms' existing collaboration to supercharge CDE's AI capabilities for construction eDiscovery and litigation. A first for the construction sector, CDE is working hand-in-hand with Reveal to employ custom layouts and workflows specifically for CDE's construction clients.
"Although a necessary component in the litigation process, CDE understands that eDiscovery can be painful, expensive, and confusing for construction litigants and their outside counsel. We own the unavoidable eDiscovery process from start to finish so that our clients and their counsel can focus on winning. Reveal's advanced technology helps us to serve this mission," said Brett Lamb, CEO of Construction Discovery Experts.
Reveal's AI platform is uniquely equipped to handle complex and large-scale construction litigation matters using custom workflows and AI models that understand the unique nature of construction disputes. With Reveal's customized layouts for particular data types relevant to construction project files, CDE is able to seamlessly and quickly uncover critical insights for their clients using more advanced tools and fewer resources than ever before.
CDE places an emphasis on tailor-made solutions rather than a one-size-fits-all approach. Every matter for a client lays the groundwork for an ever-evolving playbook unique to each client, including everything from portable AI models to custom workflows that can be leveraged across matters.
"The most exciting partners to work with are those who are masters of their domain and understand the unmatched innovation and value they're bringing to that domain. The team at CDE is one of those partners, pioneering the use of advanced AI in the field of construction," said Wendell Jisa, founder & CEO of Reveal. "A true collaborative venture, our work with CDE is already becoming an example of how organizations (and their customers) of any size, in any industry, and in any location, can not only benefit from the power of AI, but can also outsmart their competition."
With the most adaptability and scalability of any tech solution on the market, the Reveal 11 AI platform is uniquely equipped to handle matters at any scale. Combined with the industry's most advanced visualization tools, clients can now quickly and more deeply understand their digital environments in ways traditional tools simply cannot replicate.
Am Law 100 firms, Fortune 500 corporations, legal service providers, government agencies and financial institutions in more than 40 countries across five continents have already signed on to use the Reveal 11 platform. For more information about Reveal and its AI platform for legal, enterprise and government organizations, visit http://www.revealdata.com.
About Reveal
Reveal provides world-class document review technology, underpinned by leading processing, visual analytics, and artificial intelligence, all seamlessly integrated into a single platform for eDiscovery and investigations. Our software combines technology and human guidance to transform structured and unstructured data into actionable insight. We help organizations, including law firms, corporations, government agencies, and intelligence services, uncover more useful information faster by providing a world-class user experience and patented AI technology that is embedded within every phase of the eDiscovery process.
About Construction Discovery Experts:
CDE is the construction industry's go-to teammate for eDiscovery expertise in legal technology and efficiency. CDE hosts a team of experts skilled at delivering client-centered consulting services focused on three critical components: construction industry expertise, a simple fee structure and concierge-level service. CDE knows eDiscovery can be painful, expensive and confusing; we own the unavoidable so our clients can prevail.